From Connectionists-Request at cs.cmu.edu Wed May 1 00:05:16 1996 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Wed, 01 May 96 00:05:16 -0400 Subject: Bi-monthly Reminder Message-ID: <29055.830923516@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Lisa Saksida --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. 
A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user anonymous with password your username. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following url: http://www.cs.cmu.edu/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transfering files. 
After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In your subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available "Solving Towers of Hanoi with ART-4" Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu. 
^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu". From lba at inesc.pt Thu May 2 04:56:02 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Thu, 02 May 1996 09:56:02 +0100 Subject: deadline extension - Sintra spatiotemporal models workshop Message-ID: <318878A2.398A68D@inesc.pt> Due to requests from several prospective authors, the deadline for submission of papers to the Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems has been extended. The new deadline is May 10. Papers received after this date will NOT be opened. The call for papers and the instructions for authors can be obtained from the web: http://aleph.inesc.pt/smbas/ or http://www.cnel.ufl.edu/workshop.html They can also be requested by sending e-mail to luis.almeida at inesc.pt -- Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From ludwig at ibm18.uni-paderborn.de Thu May 2 07:05:07 1996 From: ludwig at ibm18.uni-paderborn.de (Lars Alex. Ludwig) Date: Thu, 2 May 1996 12:05:07 +0100 (DFT) Subject: Call for Papers: Fuzzy-Neuro Systems '97 Message-ID: <9605021005.AA11495@ibm18.uni-paderborn.de> A non-text attachment was scrubbed... Name: not available Type: text Size: 7822 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/a2b7963e/attachment.ksh From KOKINOV at BGEARN.BITNET Thu May 2 13:15:30 1996 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Thu, 02 May 96 13:15:30 BG Subject: CogSci96 in Sofia Message-ID: 3rd International Summer School in Cognitive Science Sofia, July 21 - August 3, 1996 First Announcement and Call for Papers The Summer School features introductory and advanced courses in Cognitive Science, participant symposia, panel discussions, student sessions, and intensive informal discussions. Participants will include university teachers and researchers, graduate and senior undergraduate students. 
International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max-Planck Institute of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Eran ZAIDEL (University of California at Los Angeles, USA) Courses Two Sciences of Mind: Cognitive Science and Consciousness Studies - Sean O'Nuallain (NCR, Canada) Contextual Reasoning - Fausto Giunchiglia (University of Trento, Italy) Diagrammatic Reasoning - Hari Narayanan (Georgia Tech, USA) Qualitative Spatial Reasoning - Schlieder (Hamburg and Freiburg University, Germany) Language, Vision, and Spatial Cognition - Annette Herskovits (Boston University) Situated Planning and Reactivity - Iain Craig (University of Warwick, UK) Anthropology of Knowledge - Janet Keller (University of Illinois, USA) Cognitive Ergonomics - Antonio Rizzo (University of Siena, Italy) Psychophysics: Detection, Discrimination, and Scaling - Stephan Mateeff (BAS and NBU, Bulgaria) Participant Symposia Participants are invited to submit papers reporting completed research which will be presented (30 min) at the participant symposia. Authors should send full papers (8 single spaced pages) in triplicate or electronically (postscript, RTF, MS Word or plain ASCII) by May 31. Selected papers will be published in the School's Proceedings. Only papers presented at the School will be eligible for publication. Student Session Graduate students in Cognitive Science are invited to present their work at the student session. Research in progress as well as research plans and proposals for M.Sc. Theses and Ph.D. Theses will be discussed at the student session. Papers will not be published in the School's Proceedings. Panel Discussions Cognitive Science in the 21st century Symbolic vs. Situated Cognition Human Thinking and Reasoning: Contextual, Diagrammatic, Spatial, Culturally Bound Local Organizers New Bulgarian University, Bulgarian Academy of Sciences, Bulgarian Cognitive Science Society Sponsors TEMPUS SJEP 07272/94 Local Organizing Committee Boicho Kokinov - School Director, Elena Andonova, Gergana Yancheva, Veselka Anastasova Timetable Registration Form: as soon as possible Deadline for paper submission: May 31 Notification for acceptance: June 15 Early registration: June 15 Arrival date and on site registration July 21 Summer School July 22-August 2 Excursion July 28 Departure date August 3 Paper submission to: Boicho Kokinov Cognitive Science Department New Bulgarian University 21, Montevideo Str.
Sofia 1635, Bulgaria e-mail: cogsci96 at cogs.nbu.acad.bg Send your Registration Form to: e-mail: cogsci96 at cogs.nbu.acad.bg (If you don't receive an acknowledgement within 3 days, send a message to kokinov at bgearn.acad.bg) From N.Sharkey at dcs.shef.ac.uk Thu May 2 15:42:19 1996 From: N.Sharkey at dcs.shef.ac.uk (Noel Sharkey) Date: Thu, 2 May 96 15:42:19 BST Subject: 1st CALL FOR PAPERS Message-ID: <9605021442.AA19951@dcs.shef.ac.uk> ****** ROBOT LEARNING: THE NEW WAVE ****** Special Issue of the journal Robotics and Autonomous Systems Submission Deadline: August, 1st, 1996 Decisions to authors: October, 1st, 1996 Final papers back: November, 7th, 1996 SPECIAL EDITOR Noel Sharkey (Sheffield) SPECIAL EDITORIAL BOARD Michael Arbib (USC) Ronald Arkin (GIT) George Bekey (USC) Randall Beer (Case Western) Bartlett Mel (USC) Maja Mataric (Brandeis) Carme Torras (Barcelona) Lina Massone (Northwestern) Lisa Meeden (Swarthmore) INTERNATIONAL REVIEW PANEL S Perkins (UK) T Ziemke (Sweden) P Zhang (France) S Wilson (USA) P Bakker (Japan) J Tani (Japan) C Thornton (UK) M Wilson (UK) M Recce (UK) D Cliff (UK) G Hayes (UK) U Zimmer (Germany) S Thrun (USA) S Nolfi (Italy) P van der Smagt (Germany) C Touzet (France) U Nehmzow (UK) R Salmon (Switzerland) J Hallam (UK) M Nilsson (Sweden) M Dorigo (Belgium) A Prescott (UK) C Holgate (UK) E Celaya (Spain) P Husbands (UK) I Harvey (UK) The objective of the Special Issue is to provide a focus for the new wave of research on the use of learning techniques to train real robots. We are particularly interested in research using neural computing techniques, but would also like submissions of work using genetic algorithms or other novel techniques. The nature of the new wave research is transdisciplinary, bringing on board control engineering, artificial intelligence, animal learning, neurophysiology, embodied cognition, and ethology. We would like to encourage work discussing replicability and quantification provided that the research has been conducted or tested on real robots. AREAS OF RESEARCH INCLUDE: Mobile autonomous robotics, Fixed Arm robotics, Dextrous robots, Walking Machines, High level robotics, Behaviour-based robotics, Biologically inspired robots. TOPICS OF INTEREST INCLUDE * Reinforcement learning * Supervised learning * Self organisation * Genetic algorithms * Learning brainstyle control systems * High level robot learning * Hybrid learning * Imitation Learning * The learning and use of representations * Adaptive approaches to dynamic planning * Place recognition Send submissions to Ms Jill Martin, RAS Special, Department of Computer Science, Regent Court, Portobello Rd., University of Sheffield, Sheffield, S1 4DP, UK. Updates will appear on the web page: http://www.dcs.shef.ac.uk/research/groups/nn/RASspecial.html
E-mail enquiries may be addressed to Kiisa.Nishikawa at nau.edu or arbib at pollux.usc.edu. Further information may be found on our home page at http://www.nau.edu:80/~biology/vismot.html. Program Committee: Kiisa Nishikawa (Chair), Michael Arbib, Emilio Bizzi, Chris Comer, Peter Ewert, Simon Giszter, Mel Goodale, Ananda Weerasuriya, Walt Wilczynski, and Phil Zeigler. SCIENTIFIC PROGRAM The aim of this workshop is to study the neural mechanisms of sensorimotor coordination in amphibians and other model systems for their intrinsic interest, as a target for developments in computational neuroscience, and also as a basis for comparative and evolutionary studies. The list of subsidiary themes given below is meant to be representative of this comparative dimension, but is not intended to be exhaustive. The emphasis (but not the exclusive emphasis) will be on papers that encourage the dialog between modeling and experimentation. A decision as to whether or not to publish a proceedings is still pending. Central Theme: Sensorimotor Coordination in Amphibians and Other Model Systems Subsidiary Themes: Visuomotor Coordination: Comparative and Evolutionary Perspectives Reaching and Grasping in Frog, Pigeon, and Primate Cognitive Maps Auditory Communication (with emphasis on spatial behavior and sensory integration) Motor Pattern Generators This workshop is the sequel to four earlier workshops on the general theme of "Visuomotor Coordination in Frog and Toad: Models and Experiments". The first two were organized by Rolando Lara and Michael Arbib at the University of Massachusetts, Amherst (1981) and Mexico City (1982). The next two were organized by Peter Ewert and Arbib in Kassel and Los Angeles, respectively, with the Proceedings published as follows: Ewert, J.-P. and M. A. Arbib (Eds.) 1989. Visuomotor Coordination: Amphibians, Comparisons, Models and Robots. New York: Plenum Press. Arbib, M.A. and J.-P. Ewert (Eds.) 1991. Visual Structures and Integrated Functions, Research Notes in Neural Computing 3. Heidelberg, New York: Springer Verlag. INSTRUCTIONS FOR CONTRIBUTORS Persons who wish to present oral papers are asked to send three copies of an extended abstract, approximately 4 pages long, including figures and references. Persons who wish to present posters are asked to send a one page abstract. Abstracts may be sent by regular mail, e-mail or FAX. Authors should be aware that e-mailed abstracts should contain no figures. Abstracts should be sent no later than 15 May, 1996 to: Kiisa Nishikawa , Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640, E-mail: Kiisa.Nishikawa at nau.edu; FAX: (520)523-7500. Notification of the Program Committee's decision will be sent out no later than 15 June, 1996. REGISTRATION INFORMATION Meeting Location and General Information: The Workshop will be held at the Poco Diablo Resort in Sedona, Arizona (a beautiful small town set in dramatic red hills) immediately following the Society for Neuroscience meeting in 1996. The 1996 Neuroscience meeting ends on Thursday, November 21, so workshop participants can fly from Washington, DC to Phoenix, AZ that evening, meet Friday, Saturday, and Sunday, with a Workshop Banquet on Sunday evening, and fly home on Monday, November 25th. Paper sessions will be held all day on Friday, on Saturday afternoon, and all day on Sunday. Poster sessions will be held on Saturday afternoon and evening. A group field trip is planned for Saturday morning. 
Graduate Student and Postdoctoral Participation: In order to encourage the participation of graduate students and postdoctorals, we have arranged for affordable housing, and in addition we are able to offer a reduced registration fee (see below) thanks to the generous contribution of the Office of the Associate Provost for Research and Graduate Studies at Northern Arizona University. TRAVEL FROM PHOENIX TO SEDONA: Sedona, AZ is located approximately 100 miles north of Phoenix, where the nearest major airport (Sky Harbor) is located. Workshop attendees may wish to arrange their own transportation (e.g., car rental from Phoenix airport) from Phoenix to Sedona, or they may use the Workshop Shuttle (estimated round trip cost $20 US) to Sedona on 21 November, with a return to Phoenix on 25 November. If you plan to use the Workshop Shuttle, we will need to know your expected arrival time in Phoenix by 1 October 1996, to ensure that space is available for you at a convenient time. LODGING: The following costs are for each night. Since many participants may want to extend their stay to further enjoy Arizona's scenic beauty, we have negotiated special rates for additional nights after the end of the workshop on November 24th. Attendees should make their own booking with the Poco Diablo Resort, by phone (800) 352-5710 or FAX (520) 282-9712. Thurs.-Fri. (and additional week nights before the workshop) per night: students $85 US + tax, faculty $105 + tax Sat.-Sun. (and additional week nights after the workshop) per night: students $69 + tax, faculty $89 + tax. The student room rates are for double occupancy. Thus, students willing to share a room may stay for half the stated rate. When you make your room reservations with the Poco Diablo Resort, please be sure to indicate the number of guests in your party. Graduate students and postdocs should be sure to indicate whether they want single or double occupancy. REGISTRATION FEES: Students and postdoctorals $100; faculty, guests and others $200. The registration fee includes lunch Fri. - Sun., wine and cheese reception during the Saturday evening poster session, and a Farewell Dinner on Sunday evening. Registration fees should be paid by check in US funds, made payable to "Sensorimotor Coordination Workshop", and should be sent to Kiisa Nishikawa at the address listed below, together with the completed registration form that follows at the end of this announcement. Completed registration forms and fees must be received by 1 July, 1996. Late registration fees will be $150 for students and postdoctorals and $250 for faculty. REGISTRATION FORM NAME: ADDRESS: PHONE: FAX: EMAIL: STATUS: [ ] Faculty ($200); [ ] Postdoctoral ($100); [ ] Student ($100); [ ] Other ($200). (Postdocs and students: Please attach certification of your status signed by your supervisor.) TYPE OF PRESENTATION (paper vs. poster): ABSTRACT SENT: (yes/no) AREAS OF INTEREST RELEVANT TO WORKSHOP: WILL YOU REQUIRE ANY SPECIAL AUDIOVISUAL EQUIPMENT FOR YOUR PRESENTATION? HAVE YOU MADE A RESERVATION WITH THE HOTEL? EXPECTED TIME OF ARRIVAL IN PHOENIX (ON NOVEMBER 21): EXPECTED TIME OF DEPARTURE FROM PHOENIX (ON NOVEMBER 25): DO YOU WISH TO USE THE WORKSHOP SHUTTLE TO TRAVEL FROM PHOENIX TO SEDONA? (If so, please be sure that we know your expected arrival time by 1 October!) DO YOU WISH TO PARTICIPATE IN A GROUP HIKE IN THE SEDONA AREA ON SATURDAY MORNING? Please make sure that your check (in US funds and payable to the "Sensorimotor Coordination Workshop") is included with this form. 
If you plan to bring a guest with you to the Workshop, please add their name(s) to this form and enclose their registration fee along with your own. Mail to: Kiisa Nishikawa, Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640. E-mail: Kiisa.Nishikawa at nau.edu. FAX: (520)523-7500. Phone: (520)523-9497. From koza at CS.Stanford.EDU Sat May 4 11:20:07 1996 From: koza at CS.Stanford.EDU (John R. Koza) Date: Sat, 4 May 1996 08:20:07 -0700 (PDT) Subject: GP-96 Registration and Papers Message-ID: <199605041520.IAA08316@Sunburn.Stanford.EDU> CALL FOR PARTICIPATION, LIST OF TUTORIALS, LIST OF PAPERS, LIST OF PROGRAM COMMITTEES, AND REGISTRATION FORM (Largest discount available until May 15) Genetic Programming 1996 Conference (GP-96) July 28 - 31 (Sunday - Wednesday), 1996 Fairchild Auditorium and other campus locations Stanford University Stanford, California Proceedings will be published by The MIT Press In cooperation with - the Association for Computing Machinery (ACM), - SIGART - IEEE Neural Network Council, - American Association for Artificial Intelligence. Genetic programming is an automatic programming technique for evolving computer programs that solve (or approximately solve) problems. Starting with a primordial ooze of thousands of randomly created computer programs composed of programmatic ingredients appropriate to the problem, a population of computer programs is progressively evolved over many generations using the Darwinian principle of survival of the fittest, a sexual recombination operation, and occasional mutation. Since 1992, over 500 technical papers have been published in this rapidly growing field. This first genetic programming conference will feature 75 papers and 27 poster papers, 12 tutorials, 2 invited speakers, a session featuring late-breaking papers, and informal birds-of-a-feather meetings. Topics include, but are not limited to, applications of genetic programming, theoretical foundations of genetic programming, implementation issues and technique extensions, use of memory and state, cellular encoding (developmental genetic programming), evolvable hardware, evolvable machine language programs, automated evolution of program architecture, evolution and use of mental models, automatic programming of multi-agent strategies, distributed artificial intelligence, automated circuit synthesis, automatic programming of cellular automata, induction, system identification, control, automated design, compression, image analysis, pattern recognition, molecular biology applications, grammar induction, and parallelization. ------------------------------------------------- HONORARY CHAIR: John Holland, University of Michigan INVITED SPEAKERS: John Holland, University of Michigan and David E. Goldberg, University of Illinois GENERAL CHAIR: John Koza, Stanford University PUBLICITY CHAIR: Patrick Tufts, Brandeis University ------------------------------------------------- TUTORIALS - Sunday July 28 9:15 AM - 11:30 AM - Genetic Algorithms - David E.
Goldberg, University of Illinois - Machine Language Genetic Programming - Peter Nordin, University of Dortmund, Germany - Genetic Programming using Mathematica - Robert Nachbar - Merck Research Laboratories - Introduction to Genetic Programming - John Koza, Stanford University ------------------------------------------------- Sunday July 28 1:00 PM - 3:15 PM - Classifier Systems - Robert Elliott Smith, University of Alabama - Evolutionary Computation for Constraint Optimization - Zbigniew Michalewicz, University of North Carolina - Advanced Genetic Programming - John Koza, Stanford University ------------------------------------------------- Sunday July 28 3:45 PM - 6 PM - Evolutionary Programming and Evolution Strategies - David Fogel, University of California, San Diego - Cellular Encoding - Frederic Gruau, Stanford University (via videotape) and David Andre, Stanford University (in person) - Genetic Programming with Linear Genomes (one hour) - Wolfgang Banzhaf, University of Dortmund, Germany - ECHO - Terry Jones, Santa Fe Institute ------------------------------------------------- Tuesday July 30 - 3 PM - 5:15 PM - Neural Networks - David E. Rumelhart, Stanford University - Machine Learning - Pat Langley, Stanford University - Molecular Biology for Computer Scientists - Russ B. Altman, Stanford University ------------------------------------------------- Additional tutorial - Time to be Announced - Evolvable Hardware - Hugo De Garis, ATR, Nara, Japan and Adrian Thompson, University of Sussex, U.K. ------------------------------------------------- FOR MORE INFORMATION ABOUT THE GP-96 CONFERENCE: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html or contact GP-96 via e-mail at gp at aaai.org. PHONE: 415-328-3123. FAX: 415-321-4457. Conference operated by Genetic Programming Conferences, Inc. (a California not-for-profit corporation). ABOUT GENETIC PROGRAMMING IN GENERAL: http://www-cs-faculty.stanford.edu/~koza/. FOR GP-96 TRAVEL INFORMATION: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html. For further information regarding special GP-96 airline and car rental rates, please contact Conventions in America at e-mail flycia at balboa.com; or phone 1-800-929-4242; or phone 619-678-3600; or FAX 619-678-3699. FOR HOTEL AND UNIVERSITY HOUSING INFORMATION: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html or via e-mail at gp at aaai.org. FOR STUDENT TRAVEL GRANTS: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html. ABOUT THE SAN FRANCISCO BAY AREA AND SILICON VALLEY SIGHTS: Try the Stanford University home page at http://www.stanford.edu/, the Hyperion Guide at http://www.hyperion.com/ba/sfbay.html; the Palo Alto weekly at http://www.service.com/PAW/home.html; the California Virtual Tourist at http://www.research.digital.com/SRC/virtual-tourist/California.html; and the Yahoo Guide of San Francisco at http://www.yahoo.com/Regional_Information/States/California/San_Francisco. ABOUT OTHER CONTEMPORANEOUS WEST COAST CONFERENCES: Information about the AAAI-96 conference on August 4-8 (Sunday-Thursday), 1996, in Portland, Oregon is at http://www.aaai.org/. Information on the International Conference on Knowledge Discovery and Data Mining (KDD-96) in Portland on August 3-5, 1996 is at http://www-aig.jpl.nasa.gov/kdd96.
Information about the Protein Society conference on August 3-7, 1996 in San Jose is at http://www.faseb.org. Information about the Foundations of Genetic Algorithms (FOGA) workshop on August 3-5 (Saturday-Monday), 1996, in San Diego is at http://www.aic.nrl.navy.mil/galist/foga/. Information about the Parallel and Distributed Processing Techniques and Applications (PDPTA-96) conference on August 6-9 (Friday-Sunday), 1996 in Sunnyvale, California is at http://www.ece.neu.edu/pdpta96.html. ABOUT MEMBERSHIP IN THE ACM, AAAI, or IEEE: For information about ACM membership, try http://www.acm.org/; for information about SIGART, try http://sigart.acm.org/; for AAAI membership, go to http://www.aaai.org/; and for membership in the IEEE, go to http://www.ieee.org. PHYSICAL MAIL ADDRESS FOR GP-96: GP-96 Conference, c/o American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, CA 94025. PHONE: 415-328-3123. FAX: 415-321-4457. WWW: http://www.aaai.org/. E-MAIL: gp at aaai.org. ------------------------------------------------ REGISTRATION FORM FOR GENETIC PROGRAMMING 1996 CONFERENCE TO BE HELD ON JULY 28-31, 1996 AT STANFORD UNIVERSITY First Name _________________________ Last Name_______________ Affiliation________________________________ Address__________________________________ ________________________________________ City__________________________ State/Province _________________ Zip/Postal Code____________________ Country__________________ Daytime telephone__________________________ E-Mail address_____________________________ Conference registration fee includes copy of proceedings, attendance at 4 tutorials of your choice, syllabus books for the tutorials, conference reception, copy of a book of late-breaking papers, a T-shirt, coffee breaks, lunch (on at least Sunday), and admission to conference sessions. Students must send legible proof of full-time student status. Conference proceedings will be mailed to registered attendees with U.S. mailing addresses via 2-day U.S. priority mail about 1-2 weeks prior to the conference at no extra charge (at addressee's risk). If you are uncertain as to whether you will be at that address at that time or DO NOT WANT YOUR PROCEEDINGS MAILED to you at the above address for any other reason, your copy of the proceedings will be held for you at the conference registration desk if you CHECK HERE ____. Postmarked by May 15, 1996: Student - ACM, IEEE, or AAAI Member $195 Regular - ACM, IEEE, or AAAI Member $395 Student - Non-member $215 Regular - Non-member $415 Postmarked by June 26, 1996: Student - ACM, IEEE, or AAAI Member $245 Regular - ACM, IEEE, or AAAI Member $445 Student - Non-member $265 Regular - Non-member $465 Postmarked later or on-site: Student - ACM, IEEE, or AAAI Member $295 Regular - ACM, IEEE, or AAAI Member $495 Student - Non-member $315 Regular - Non-member $515 Member number: ACM # ___________ IEEE # _________ AAAI # _________ Total fee (enter appropriate amount) $ _________ __ Check or money order made payable to "AAAI" (in U.S.
funds) __ Mastercard __ Visa __ American Express Credit card number __________________________________________ Expiration Date ___________ Signature _________________________ TUTORIALS: Check off a box for one tutorial from each of the 4 columns: Sunday July 28, 1996 - 9:15 AM - 11:30 AM __ Genetic Algorithms __ Machine Language GP __ GP using Mathematica __ Introductory GP Sunday July 28, 1996 - 1:00 PM - 3:15 PM __ Classifier Systems __ EC for Constraint Optimization __ Advanced GP Sunday July 28, 1996 - 3:45 PM - 6 PM __ Evolutionary Programming and Evolution Strategies __ Cellular Encoding __ GP with Linear Genomes __ ECHO Tuesday July 30, 1996 - 3:00 PM - 5:15 PM __ Neural Networks __ Machine Learning __ Molecular Biology for Computer Scientists __ Check here for information about housing and meal package at Stanford University. __ Check here for information on student travel grants. T-shirt size ___ small ___ medium ___ large ___ extra-large No refunds will be made; however, we will transfer your registration to a person you designate upon notification. SEND TO: GP-96 Conference, c/o American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, CA 94025. ------------------------------------------------- 90 PAPERS APPEARING IN PROCEEDINGS OF THE GP-96 CONFERENCE TO BE HELD AT STANFORD UNIVERSITY ON JULY 28-31, 1996 -------------------------------------------------- LONG GENETIC PROGRAMMING PAPERS Discovery by Genetic Programming of a Cellular Automata Rule that is Better than any Known Rule for the Majority Classification Problem --- David Andre, Forrest H Bennett III, and John R. Koza A Study in Program Response and the Negative Effects of Introns in Genetic Programming --- David Andre and Astro Teller An Investigation into the Sensitivity of Genetic Programming to the Frequency of Leaf Selection During Subtree Crossover --- Peter J. Angeline Automatic Creation of an Efficient Multi-Agent Architecture Using Genetic Programming with Architecture-Altering Operations --- Forrest H Bennett III Evolving Deterministic Finite Automata Using Cellular Encoding --- Scott Brave Genetic Programming and the Efficient Market Hypothesis --- Shu-Heng Chen and Chia-Hsuan Yeh Bargaining by Artificial Agents in Two Coalition Games: A Study in Genetic Programming for Electronic Commerce --- Garett Dworman, Steven O. Kimbrough, and James D. Laing Waveform Recognition Using Genetic Programming: The Myoelectric Signal Recognition Problem --- Jaime J. Fernandez, Kristin A. Farry, and John B. Cheatham Benchmarking the Generalization Capabilities of A Compiling Genetic programming System using Sparse Data Sets --- Frank D. Francone, Peter Nordin, and Wolfgang Banzhaf A Comparison between Cellular Encoding and Direct Encoding for Genetic Neural Networks --- Frederic Gruau, Darrell Whitley, and Larry Pyeatt Entailment for Specification Refinement --- Thomas Haynes, Rose Gamble, Leslie Knight, and Roger Wainwright Genetic Programming of Near-Minimum-Time Spacecraft Attitude Maneuvers --- Brian Howley Evolving Evolution Programs: Genetic Programming and L-Systems --- Christian Jacob Genetic Programming using Genotype-Phenotype Mapping from Linear Genomes into Linear Phenotypes --- Robert E. Keller and Wolfgang Banzhaf Automated WYWIWYG Design of Both the Topology and Component Values of Electrical Circuits Using Genetic Programming --- John R. Koza, Forrest H Bennett III, David Andre, and Martin A.
Keane Use of Automatically Defined Functions and Architecture-Altering Operations in Automated Circuit Synthesis Using Genetic Programming --- John R. Koza, David Andre, Forrest H Bennett III, and Martin A. Keane Using Data Structures within Genetic Programming --- W. B. Langdon Evolving Teamwork and Coordination with Genetic Programming --- Sean Luke and Lee Spector Using Genetic Programming to Develop Inferential Estimation Algorithms --- Ben McKay, Mark Willis, Gary Montague, and Geoffrey W. Barton Dynamics of Genetic Programming and Chaotic Time Series Prediction --- Brian S. Mulloy, Rick L. Riolo, and Robert S. Savit Genetic Programming, the Reflection of Chaos, and the Bootstrap: Towards a useful Test for Chaos --- E. Howard N. Oakley Solving Facility Layout Problems Using Genetic Programming --- Jaime Garces-Perez, Dale A. Schoenefeld, and Roger L. Wainwright Variations in Evolution of Subsumption Architectures Using Genetic Programming: The Wall Following Robot Revisited --- Steven J. Ross, Jason M. Daida, Chau M. Doan, Tommaso F. Bersano- Begey, and Jeffrey J. McClain MASSON: Discovering Commonalties in Collection of Objects using Genetic Programming --- Tae-Wan Ryu and Christoph F. Eick Cultural Transmission of Information in Genetic Programming --- Lee Spector and Sean Luke Code Growth in Genetic Programming --- Terence Soule, James A. Foster, and John Dickinson High-Performance, Parallel, Stack-Based Genetic Programming --- Kilian Stoffel and Lee Spector Search Bias, Language Bias, and Genetic Programming --- P. A. Whigham Learning Recursive Functions from Noisy Examples using Generic Genetic Programming --- Man Leung Wong and Kwong Sak Leung SHORT GENETIC PROGRAMMING PAPERS Classification using Cultural Co-Evolution and Genetic Programming --- Myriam Abramson and Lawrence Hunter Type-Constrained Genetic Programming for Rule-Base Definition in Fuzzy Logic Controllers --- Enrique Alba, Carlos Cotta, and Jose J. Troyo The Evolution of Memory and Mental Models Using Genetic Programming --- Scott Brave Automatic Generation of Object-Oriented Programs Using Genetic Programming --- Wilker Shane Bruce Evolving Event Driven Programs --- Mark Crosbie and Eugene H. Spafford Computer-Assisted Design of Image Classification Algorithms: Dynamic and Static Fitness Evaluations in a Scaffolded Genetic Programming Environment --- Jason M. Daida, Tommaso F. Bersano-Begey, Steven J. Ross, and John F. Vesecky Improved Direct Acyclic Graph Handling and the Combine Operator in Genetic Programming --- Herman Ehrenburg An Adverse Interaction between Crossover and Restricted Tree Depth in Genetic Programming --- Chris Gathercole and Peter Ross The Prediction of the Degree of Exposure to Solvent of Amino Acid Residues via Genetic Programming --- Simon Handle y A New Class of Function Sets for Solving Sequence Problems --- Simon Handley Evolving Edge Detectors with Genetic Programming --- Christopher Harris and Bernard Buxton Toward Simulated Evolution of Machine Language Iteration --- Lorenz Huelsbergen Robustness of Robot Programs Generated by Genetic Programming --- Takuya Ito, Hitoshi Iba, and Masayuki Kimura Signal Path Oriented Approach for Generation of Dynamic Process Models --- Peter Marenbach, Kurt D. Betterhausen, and Stephan Freyer Evolving Control Laws for a Network of Traffic Signals --- David J. 
Montana and Steven Czerwinski Distributed Genetic Programming: Empirical Study and Analysis --- Tatsuya Niwa and Hitoshi Iba Programmatic Compression of Images and Sound --- Peter Nordin and Wolfgang Banzhaf Investigating the Generality of Automatically Defined Functions --- Una-May O'Reilly Parallel Genetic Programming: An Application to Trading Models Evolution --- Mouloud Oussaidene, Bastien Chopard, Olivier V. Pictet, and Marco Tomassini Genetic Programming for Image Analysis --- Riccardo Poli Evolving Agents --- Adil Qureshi Genetic Programming for Improved Data Mining: An Application to the Biochemistry of Protein Interactions --- M. L. Raymer, W. F. Punch, E. D. Goodman, and L. A. Kuhn Generality Versus Size in Genetic Programming --- Justinian Rosca Genetic Programming in Database Query Optimization --- Michael Stillger and Myra Spiliopoulou Ontogenetic Programming --- Lee Spector and Kilian Stoffel Using Genetic Programming to Approximate Maximum Clique --- Terence Soule, James A. Foster, and John Dickinson Paragen: A Novel Technique for the Autoparallelisation of Sequential Programs using Genetic Programming --- Paul Walsh and Conor Ryan The Benefits of Computing with Introns --- Mark Wineberg and Franz Oppacher GENETIC PROGRAMMING POSTER PAPERS Co-Evolving Classification Programs using Genetic Programming --- Manu Ahluwalia and Terence C. Fogarty Genetic Programming Tools Available on the Web: A First Encounter --- Anthony G. Deakin and Derek F. Yates Speeding up Genetic Programming: A Parallel BSP Implementation --- Dimitris C. Dracopoulos and Simon Kent Easy Inverse Kinematics using Genetic Programming --- Jonathan Gibbs Noisy Wall-Following and Maze Navigation through Genetic Programming --- Andrew Goldfish Genetic Programming for Classification of Brain Tumours from Nuclear Magnetic Resonance Biopsy Spectra --- H. F. Gray, R. J. Maxwell, I. Martinez-Perez, C. Arus, and S. Cerdan GP-COM: A Distributed Component-Based Genetic Programming System in C++ --- Christopher Harris and Bernard Buxton Clique Detection via Genetic Programming --- Thomas Haynes and Dale Schoenefeld Functional Languages on Linear Chromosomes --- Paul Holmes and Peter J. Barclay Improving the Accuracy and Robustness of Genetic Programming through Expression Simplification --- Dale Hooper and Nicholas S. Flann COAST: An Approach to Robustness and Reusability in Genetic Programming --- Naohiro Hondo, Hitoshi Iba, and Yukinori Kakazu Recurrences with Fixed Base Cases in Genetic Programming --- Stefan J. Johansson Evolutionary and Incremental Methods to Solve Hard Learning Problems --- Ibrahim Kuscu Detection of Patterns in Radiographs using ANN Designed and Trained with the Genetic Algorithm --- Alejandro Pazos Julian Dorado and Antonino Santos The Logic-Grammars-Based Genetic Programming System --- Man Leung Wong and Kwong Sak Leung LONG GENETIC ALGORITHMS PAPERS Genetic Algorithms with Analytical Solution --- Erol Gelenbe Silicon Evolution --- Adrian Thompson SHORT GENETIC ALGORITHMS PAPERS On Sensor Evolution in Robotics --- Karthik Balakrishnan and Vasant Honavar Testing Software using Order-Based Genetic Algorithms --- Edward B. Boden and Gilford F. 
Martino Optimizing Local Area Networks Using Genetic Algorithms --- Andy Choi A Genetic Algorithm for the Construction of Small and Highly Testable OKFDD Circuits --- Rolf Drechsler, Bernd Becker, and Nicole Gockel Motion Planning and Design of CAM Mechanisms by Means of a Genetic Algorithm --- Rodolfo Faglia and David Vetturi Evolving Strategies Based on the Nearest Neighbor Rule and a Genetic Algorithm --- Matthias Fuchs Recognition and Reconstruction of Visibility Graphs Using a Genetic Algorithm --- Marshall S. Veach GENETIC ALGORITHMS POSTER PAPERS The Use of Genetic Algorithms in the Optimization of Competitive Neural Networks which Resolve the Stuck Vectors Problem --- Tin Ilakovac, Zeljka Perkovic, and Strahil Ristov An Extraction Method of a Car License Plate using a Distributed Genetic Algorithm --- Dae Wook Kim, Sang Kyoon Kim, and Hang Joon Kim EVOLUTIONARY PROGRAMMING AND EVOLUTION STRATEGIES PAPERS Evolving Fractal Movies --- Peter J. Angeline Preliminary Experiments on Discriminating between Chaotic Signals --- David B. Fogel and Lawrence J. Fogel Discovering Patterns in Spatial Data using Evolutionary Programming --- Adam Ghozeil and David B. Fogel Evolving Reduced Parameter Bilinear Models for Time Series Prediction using Fast Evolutionary Programming --- Sathyanarayan S. Rao and Kumar Chellapilla CLASSIFIER SYSTEMS PAPERS Three-Dimensional Shape Optimization Utilizing a Learning Classifier System --- Robert A. Richards and Sheri D. Sheppard Classifier System Renaissance: New Analogies, New Directions --- H. Brown Cribbs III and Robert E. Smith Natural Niching for Cooperative Learning in Classifier Systems --- Jeffrey Horn and David E. Goldberg From juergen at idsia.ch Tue May 7 10:07:53 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 7 May 96 16:07:53 +0200 Subject: guessing recurrent nets Message-ID: <9605071407.AA09728@fava.idsia.ch> GUESSING CAN OUTPERFORM MANY LONG TIME LAG ALGORITHMS Juergen Schmidhuber, IDSIA & Sepp Hochreiter, TUM Technical Note IDSIA-19-96 (3 pages, 48 K) Numerous recent papers focus on standard recurrent nets' problems with long time lags between relevant signals. Some propose rather sophisticated, alternative methods. We show: many problems used to test previous methods can be solved more quickly by random weight guessing. To obtain a copy, use ftp, or simply cut and paste: netscape ftp://ftp.idsia.ch/pub/juergen/guess.ps.gz Or try our web pages: http://www7.informatik.tu-muenchen.de/~hochreit http://www.idsia.ch/~juergen/onlinepub.html From rsun at cs.ua.edu Tue May 7 14:24:51 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Tue, 7 May 1996 13:24:51 -0500 Subject: from subsymbolic to symbolic learning Message-ID: <9605071824.AA31663@athos.cs.ua.edu> Bottom-up Skill Learning in Reactive Sequential Decision Tasks Ron Sun Todd Peterson Edward Merrill The University of Alabama Tuscaloosa, AL 35487 --------------------------------------- To appear in: Proc. of Cognitive Science Conference, 1996. 6 pages. ftp or Web access: ftp://cs.ua.edu/pub/tech-reports/sun.cogsci96.ps sorry, no hardcopy available. ---------------------------------------- This paper introduces a hybrid model that unifies connectionist, symbolic, and reinforcement learning into an integrated architecture for bottom-up skill learning in reactive sequential decision tasks. The model is designed for an agent to learn continuously from on-going experience in the world, without the use of preconceived concepts and knowledge.
Both procedural skills and high-level knowledge are acquired through an agent's experience interacting with the world. Computational experiments with the model in two domains are reported. From bartlett at alfred.anu.edu.au Tue May 7 05:11:14 1996 From: bartlett at alfred.anu.edu.au (Peter Bartlett) Date: Tue, 7 May 1996 19:11:14 +1000 (EST) Subject: Paper on neural net learning (again) Message-ID: <9605070911.AA24712@cook.anu.edu.au> A new version of the following paper, announced last week, is available by anonymous ftp. ftp host: syseng.anu.edu.au ftp file: pub/peter/TR96d.ps.Z The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network (21 pages) Peter Bartlett Australian National University (The main change is that I've taken out part of the first theorem, which will soon appear in a forthcoming paper with John Shawe-Taylor, Bob Williamson, and Martin Anthony. My apologies that the permissions were set incorrectly on this file. Thanks to those who pointed out the problem.) -- Peter. From goldfarb at unb.ca Tue May 7 20:10:58 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Tue, 7 May 1996 21:10:58 -0300 (ADT) Subject: Paper: INDUCTIVE THEORY OF VISION Message-ID: My apologies if you receive multiple copies of this message. The following paper (also TR96-108, April 1996, Faculty of Computer Science, University of New Brunswick, Fredericton, Canada) will be presented at the workshop WHAT IS INDUCTIVE LEARNING held in Toronto on May 20-21, 1996 in conjunction with the 11th Canadian biennial conference on Artificial Intelligence. It is available via anonymous ftp (45 pages) ftp://ftp.cs.unb.ca/incoming/theory.ps.Z It goes without saying that comments and suggestions are appreciated. %*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*% INDUCTIVE THEORY OF VISION Lev Goldfarb, Sanjay S. Deshpande, Virendra C. Bhavsar Faculty of Computer Science University of New Brunswick, Fredericton, N.B., Canada E3B 5A3 Ph: (506)453-4566, FAX: (506)453-3566 E-mail: goldfarb, d23d, bhavsar @unb.ca Abstract ---------- In spite of the fact that some of the outstanding physiologists and neurophysiologists (e.g. Hermann von Helmholtz and Horace Barlow) insisted on the central role of inductive learning processes in vision as well as in other sensory processes, there are absolutely no (computational) theories of vision that are guided by these processes. It appears that this is mainly due to the lack of understanding of what inductive learning processes are. We strongly believe in the central role of inductive learning processes around which, we think, all other (intelligent) biological processes have evolved. In this paper we outline the first (computational) theory of vision completely built around the inductive learning processes for all levels in vision. The development of such a theory became possible with the advent of the formal model of inductive learning--evolving transformation system (ETS). The proposed theory is based on the concept of structured measurement device, which is motivated by the formal model of inductive learning and is a far-reaching generalization of the concept of classical measurement device, whose output measurements are not numbers but structured entities ("symbols") with an appropriate metric geometry. 
We propose that the triad of object structure, image structure and the appropriate mathematical structure (ETS)--to capture the latter two structures--is precisely what computational vision should be about. And it is the inductive learning process that relates the members of this triad. We suggest that since the structure of objects in the universe has evolved in a combinative (agglomerative) and hierarchical manner, it is quite natural to expect that biological processes have also evolved (to learn) to capture the latter combinative and hierarchical structure. In connection with this, the inadequacy of the classical mathematical structures as well as the role of mathematical structures in information processing are discussed. We propose the following postulates on which we base the theory. POSTULATE 1. The objects in the universe have emergent combinative hierarchical structure. Moreover, the term "object structure" cannot be properly understood and defined outside the inductive learning process. POSTULATE 2. The inductive learning process is an evolving process that tries to capture the emergent object (class) structure mentioned in Postulate 1. The mathematical structure on which the inductive learning model is based should have the intrinsic capability to capture the evolving object structure. (It appears that the corresponding mathematical structure is fundamentally different from the classical mathematical structures.) POSTULATE 3. All basic representations in vision processes are constructed on the basis of the inductive class representation, which, in turn, is constructed by the inductive learning process (see Postulate 2). Thus, the inductive learning processes form the core around which all vision processes have evolved. We present simple examples to illustrate the proposed theory for the case of "low-level" vision. _______________________________________________ KEYWORDS: vision, low-level vision, object structure, inductive learning, learning from examples, evolving transformation system, symbolic image representation, image structure, abstract measurement device. %*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*% -- Lev Goldfarb http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm From jose at scr.siemens.com Wed May 8 11:19:27 1996 From: jose at scr.siemens.com (Stephen Hanson) Date: Wed, 8 May 1996 11:19:27 -0400 (EDT) Subject: Paper Available Message-ID: <199605081519.LAA03527@as1.scr.siemens.com> Development of Schemata during Event Parsing: Neisser's Perceptual Cycle as a Recurrent Connectionist Network Catherine Hanson & Stephen Jose Hanson Temple University Siemens Research & Princeton University Abstract: Neural net simulations of human event parsing are described. A recurrent net was used to simulate data collected from human subjects watching short videotaped event sequences. In one simulation the net was trained on one-half of a taped sequence with the other half of the sequence being used to test transfer performance. In another simulation the net was trained on one complete event sequence and transfer to a different event sequence was tested. Neural net simulations provided a unique means of observing the interrelation of top-down and bottom-up processing in a basic cognitive task. Examination of computational patterns of the net and cluster analysis of the hidden units revealed two factors that may be central to event perception: (1) similarity between a current input and an activated schema and (2) expected duration of a given event.
Although the importance of similarity between input and activated schemata during event perception has been acknowledged previously (e.g. Neisser, 1976; Schank, 1982), the present research provides a specific instantiation of how similarity judgements can be made using both top-down and bottom-up processing. Moreover, unlike other work on event perception, this approach addresses potential mechanisms for how schemata develop.

Journal of Cognitive Neuroscience 8:2, pp. 119-134, 1996.

If you are interested in a reprint of the paper, please send your address in return mail to paper-request at scr.siemens.com

From carmesin at schoner.physik.uni-bremen.de Wed May 8 12:04:54 1996
From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin)
Date: Wed, 8 May 1996 18:04:54 +0200
Subject: Cortical Maps
Message-ID: <199605081604.SAA05483@schoner.physik.uni-bremen.de>

The paper

TOPOLOGY-PRESERVATION EMERGENCE BY THE HEBB RULE WITH INFINITESIMAL SHORT RANGE SIGNALS

by Hans-Otto Carmesin, Institute for Theoretical Physics and Center for Cognitive Sciences, University Bremen, 28334 Bremen, Germany

is available.

ABSTRACT: Topology preservation is a ubiquitous phenomenon in the mammalian nervous system. What are the necessary and sufficient conditions for the self-organized formation of topology preservation due to a Hebb mechanism? A relatively realistic Hebb rule and neurons with stochastic fluctuations are modeled. Moreover, a reasonable growth law is used for coupling growth: the biomass increase is proportional to the present biomass, under the constraint that the total biomass at a neuron is limited. It is proven for such general Hebb-type networks that infinitesimal lateral signal transfer to neighbouring neurons is necessary and sufficient for the emergence of topology preservation. As a consequence, observed topology preservation in nervous systems may emerge, with or without purpose, as a byproduct of infinitesimal lateral signal transfer to neighbouring neurons due to ubiquitous chemical and electrical leakage.

Obtainable via WWW at http://www.schoner.physik.uni-bremen.de/~carmesin/

The paper appeared in Phys. Rev. E, Vol. 53, 993-1002 (1996).

From wals96 at SPENCER.CTAN.YALE.EDU Wed May 8 16:24:47 1996
From: wals96 at SPENCER.CTAN.YALE.EDU (Workshop on Adaptive Learning Systems)
Date: Wed, 8 May 1996 16:24:47 -0400
Subject: Please post the following
Message-ID: <199605082024.AA12842@NOYCE.CTAN.YALE.EDU>

The Ninth Yale Workshop on Adaptive and Learning Systems
June 10-12, 1996
Yale University
New Haven, Connecticut

Announcement

Objective: Advances in theory and computer technology have enhanced the viability of intelligent systems operating in complex environments. Different perspectives on this general topic offered by learning theory, adaptive control, robotics, artificial neural networks, and biological systems are being linked in productive ways. The aim of the Ninth Workshop on Adaptive and Learning Systems is to bring together engineers and scientists to exploit the synergism between different viewpoints and to provide a favorable environment for constructive collaboration.

Program: The principal sessions will be devoted to adaptive systems, learning systems, robotics, neural networks, and biological systems. A tentative list of speakers includes:

Adaptation: A. M. Annaswamy, M. Bodson, B. Friedland, R. Horowitz, D. E. Miller, A. S. Morse, K. S. Narendra, H. E. Rauch, H. Unbehauen
Learning: A. G. Barto, E. V. Denardo, E. Gelenbe, R. W. Longman, R. K. Mehra, R. S. Sutton, P.
Werbos Robotics: P. N. Belhumeur, T. Fukuda, D. J. Kriegman, M. T. Mason, W. T. Miller III, J.-J. E. Slotine Neural Networks: G. Cybenko, L. Feldkamp, C. L. Giles, S. Haykin, L. G. Kraft, U. Lenz, P. Mars, K. S. Narendra, J. Principe, J. N. Tsitsiklis, A. S. Weigend, L. Ungar Biological Systems: E. Bizzi, J. J. Collins, W. Freeman, J. Houk Registration: Registration will be limited and preregistration is highly recommended. Please complete the form below and return together with a check payable to Adaptive and Learning Systems. Information on transportation and lodging will be forwarded upon receipt of the registration form. For further information contact Ms. Lesley Kent, Center for Systems Science, Yale University, P.O. Box 208267, New Haven, CT 06520-8267. Telephone: (203) 432-2211. FAX: (203) 432-7481. e-mail: lesley at sysc2.eng.yale.edu or wals96 at nnc.yale.edu Rooms at reduced rates have been reserved at the Holiday Inn [Tel. (203) 777-6221]. Note that due to other events in New Haven at the time of the Workshop, rooms may not be available if reservations are not made prior to May 25, 1996. Please mention the Yale Workshop when you make your reservation. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - DETACH HERE - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - PREREGISTRATION FORM Name _____________________________________________________ Position _____________________________________________________ Organization _____________________________________________________ Address _____________________________________________________ _____________________________________________________ Phone _____________________________________________________ Enclose a check for $200 ($100 for students with valid ID) payable to Adaptive and Learning Systems and mail to Professor K. S. Narendra, Center for Systems Science, P. O. Box 208267, Yale Station, New Haven, CT 06520-8267, USA. From dayan at ai.mit.edu Thu May 9 09:49:55 1996 From: dayan at ai.mit.edu (Peter Dayan) Date: Thu, 9 May 1996 09:49:55 -0400 (EDT) Subject: Postdoc in computational neurobiology Message-ID: <9605091349.AA08695@sewer.ai.mit.edu> Computational Models of Cortical Development I would like to recruit a postdoc to work on computational models of activity-dependent development in the cortex. The project focuses on development in hierarchical processing structures, particularly the visual system, and we're planning to start with the Helmholtz machine and the wake-sleep algorithm. The position is in my lab in the Department of Brain and Cognitive Sciences at MIT. The job is available immediately, and will last initially for one year, extensible for at least another year. Applicants should have a PhD in a relevant area (such as computational modeling in neuroscience) and should be familiar with neurobiological and computational results in activity-dependent development. 
To apply, please send a CV and the names and addresses of two referees to me:

Peter Dayan
Department of Brain and Cognitive Sciences
E25-210
MIT
Cambridge, MA 02139
USA

dayan at psyche.mit.edu
tel: +1 (617) 252 1693
fax: +1 (617) 253 2964

From carmesin at schoner.physik.uni-bremen.de Thu May 9 13:25:40 1996
From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin)
Date: Thu, 9 May 1996 19:25:40 +0200
Subject: Cortical Maps
Message-ID: <199605091725.TAA09099@schoner.physik.uni-bremen.de>

WWW-address correction:

The paper

TOPOLOGY-PRESERVATION EMERGENCE BY THE HEBB RULE WITH INFINITESIMAL SHORT RANGE SIGNALS

by Hans-Otto Carmesin, Institute for Theoretical Physics and Center for Cognitive Sciences, University Bremen, 28334 Bremen, Germany

announced yesterday is obtainable via WWW at http://schoner.physik.uni-bremen.de/~carmesin/

From robtag at dia.unisa.it Thu May 9 04:53:42 1996
From: robtag at dia.unisa.it (Tagliaferri Roberto)
Date: Thu, 9 May 1996 10:53:42 +0200
Subject: WIRN VIETRI 96
Message-ID: <9605090853.AA28070@udsab>

WIRN VIETRI `96
VIII ITALIAN WORKSHOP ON NEURAL NETS
IIASS "Eduardo R. Caianiello", Vietri sul Mare (SA) ITALY
23 - 25 May 1996

PRELIMINARY PROGRAM

Thursday 23 May

9:30 - A Microelectronic Retinal Implant for the Blind
       J. Wyatt (Invited Talk)

Mathematical Models
10:30 - Neural Networks for the Classification of Structures
        A. Sperduti & A. Starita
10:50 - A New Incremental Learning Technique
        N. Dunkin, J. Shawe-Taylor & P. Koiran
11:10 - A Bayesian Framework for Associative Memories
        E.R. Hancock & M. Pelillo
11:30 - Coffee Break
12:00 - Cultural Evolution in a Population of Neural Networks
        D. Denaro & D. Parisi

Pattern Recognition
12:20 - Computational Intelligence in Electromagnetics
        F.C. Morabito
12:40 - The Modulate Asynchronous Information Arrangement (M.A.I.A.) for the Learning of Non Supervisioned Neural Network Applied to Compression and Decompression of Images
        G. Pappalardo, D. Rosaci & G.M.L. Sarne'
13:00 - Neuro-Fuzzy Processing of Remote Sensed Data
        P. Blonda, A. Bennardo & G. Satalino
13:20 - A Generalized Regularization Network for Remote Sensing Data Classification
        M. Ceccarelli & A. Petrosino
13:40 - Lunch
15:30 - Virtual reality and neural nets
        N.A. Borghese (Review Talk)

Pattern Recognition
16:30 - Age Estimates of Stellar Systems by Artificial Neural Networks
        L. Pulone & R. Scaramella
16:50 - The use of Neural Networks for the Automatic Detection and Classification of Weak Photometric Sub-Components in Early-Type Galaxies
        M. Capaccioli, G. Di Sciascio, G. Longo, G. Richter & R. Tagliaferri
17:10 - A Hybrid Neural Network Architecture for Dynamic Scenes Understanding
        A. Chella, S. Gaglio & M. Frixione
17:30 - Coffee Break

Architectures and Algorithms
18:00 - Fast Spline Neural Networks for Image Compression
        F. Piazza, S. Smerli, A. Uncini, M. Griffo & R. Zunino
18:20 - A Novel Hypothesis on Cortical Map: Topological Continuity
        F. Frisone, V. Sanguineti & P. Morasso

Friday 24 May

9:30 - Models of biological vision as powerful analogue spatio-temporal filters for dynamical image processing including motion and colour
       J. Herault (Invited Talk)

Applications
10:30 - Neural Nets for Hybrid on-line Plant Control
        M. Barbarino, S. Bruzzo & A.M. Colla
10:50 - Using Fuzzy Logic to Solve Optimization Problems by Hopfield Neural Model
        S. Cavalieri & M. Russo
11:10 - Spectral Mapping: a Comparison of Connectionist Approaches
        E. Trentin, D. Giuliani & C. Furlanello
11:30 - Coffee Break

Architectures and Algorithms
12:00 - Constructive Fuzzy Neural Networks
        F.M. Frattale Mascioli, G. Martinelli & G.M. Varzi
12:20 - Some Comments and Experimental Results on Bayesian Regularization
        M. de Bollivier & D. Perrotta
12:40 - Recent Results in On-line Prediction and Boosting
        N. Cesa Bianchi & S. Panizza (Review Talk)
13:40 - Lunch
15:00 - Poster Session
16:00 - Eduardo R. Caianiello Lectures:
        - T. Parisini (winner of the 1995 E.R. Caianiello Fellowship Award)
          Neural Nonlinear Controllers and Observers: Stability Results
        - P. Frasconi (winner of the 1996 E.R. Caianiello Fellowship Award)
          Input/Output Hmms for sequence processing
17:00 - Annual S.I.R.E.N. Meeting
20:00 - Conference Dinner

Saturday 25 May

9:30 - Title to be announced
       L.B. Almeida (Invited Talk)

Architectures and Algorithms
10:30 - FIR NNs and Temporal BP: Implementation on the Meiko CS-2
        A. d'Acierno, W. Ripullone & S. Palma
10:50 - Fast Training of Recurrent Neural Networks by the Recursive Least Squares Method
        R. Parisi, E.D. Di Claudio, A. Rapagnetta & G. Orlandi
11:10 - A Unification of Genetic Algorithms, Neural Networks and Fuzzy Logic: the GANNFL Approach
        M. Schmidt
11:30 - Coffee Break
12:00 - A Learning Strategy which Increases Partial Fault Tolerance of Neural Nets
        S. Cavalieri & O. Mirabella
12:20 - Off-Chip Training of Analog Hardware Feed-Forward Neural Networks Through Hyper - Floating Resilient Propagation
        G.M. Bollano, M. Costa, D. Palmisano & E. Pasero
12:40 - A Reconfigurable Analog VLSI Neural Network Architecture
        G.M. Bo, D.D. Caviglia, M. Valle, R. Stratta & E. Trucco
13:00 - An Adaptable Boolean Neural Net Trainable to Comment on its own Innerworkings
        F.E. Lauria, M. Sette & S. Visco

POSTER SESSION
- The Computational Neural Map and its Capacity
  F. Palmieri & D. Mattera (Mathematical models)
- Proposal of a Darwin-Neural Network for a Robot Implementation
  C. Domeniconi (Robotics)
- Solving Algebraic and Geometrical Problems Using Neural Networks
  M. Ferraro & T. Caelli
- Simulation of Traffic Flows in Transportation Networks with Non Supervisioned MAIA Neural Network
  G. Pappalardo, M.N. Postorino, D. Rosaci & G.M.L. Sarne'
- FIR NNs and Time Series Prediction: Applications to Stock Market Forecasting
  A. d'Acierno, W. Ripullone & S. Palma
- Verso la Previsione a Breve Scadenza della Visibilita' Metereologica Attraverso una Rete Neurale a Back-Propagation: Ottimizzazione del Modello per Casi di Nebbia
  A. Pasini & S. Potesta'
- Are Multilayer Perceptrons Adequate for Pattern Recognition and Verification?
  M. Gori & R. Scarselli
- Proof of the Universal Approximation of a Set of Fuzzy Functions
  F. Masulli, M. Marinaro & D. Oricchio
- An Integrated Neural and Algorithmic System for Optical Flow Computation
  A. Criminisi, M. Gioiello, D. Molinelli & F. Sorbello
- A Mlp-Based Digit and Uppercase Characters Recognition System
  M. Gioiello, E. Martire, F. Sorbello & G. Vassallo
- Neural Network Fuzzification: Critical Review of the Fuzzy Learning Vector Quantization Model
  A. Baraldi & F. Parmiggiani

The registration fee is 300.000 Italian Lire (250.000 Italian Lire for SIREN members) and can be paid on site. More information can be found in the www pages at the address below:

http://www-dsi.ing.unifi.it/neural

Hotel Information - 1996

We are glad to inform you about the prices of the Hotels near the International Institute for Advanced Scientific Studies. The reservation must be made directly to the Hotels at least 20 days before the arrival date.
LLOYD'S BAIA HOTEL - Vietri sul Mare - tel. 089-210145 - fax 089-210186 - cat. ****
                        Single room      Double room
  Including Breakfast   L. 140.000       L. 185.000
  Half board*           L. 160.000       L. 130.000 p.p.
  Full board*           L. 185.000       L. 155.000 p.p.
  * Including drinks (1/4 wine and 1/2 mineral water for each lunch)

HOTEL PLAZA - P.zza Ferrovia - Salerno - tel. 089-224477 - fax 089-237311 - cat. ***
                        Single room      Double room
  Without Breakfast     L. 75.000        L. 110.000
  Including Breakfast   L. 85.000        L. 130.000

HOTEL RAITO - Raito (Vietri sul Mare - 10' bus) - tel. 089-210033 - fax 089-211434 - cat. ****
                        Single room      Double room
  Including Breakfast   L. 130.000       L. 200.000
  Half board            L. 170.000       L. 140.000 p.p.
  Full board            L. 200.000       L. 180.000 p.p.

HOTEL LA LUCERTOLA - Marina di Vietri s/m (200 mt. from IIASS) - tel. 089-210837/8 - cat. ***
                        Single room      Double room
  Including Breakfast   L. 75.000        L. 100.000
  Half board*           L. 95.000        L. 90.000 p.p.
  Full board*           L. 110.000       L. 100.000 p.p.
  * Including drinks (1/4 wine and 1/2 mineral water for each lunch)

HOTEL BRISTOL - Marina di Vietri s/m (200 mt. from IIASS) - tel. 089-210216 - cat. ***
                        Single room      Double room
  Including Breakfast   L. 79.000        L. 105.000
  Half board*           L. 100.000       L. 90.000 p.p.
  Full board*           L. 110.000       L. 100.000 p.p.
  * Including drinks (1/4 wine and 1/2 mineral water for each lunch)

HOTEL VIETRI - Marina di Vietri s/m (200 mt. from IIASS) - tel. 089-761644/210400 - cat. **
                        Single room      Double room
  Including Breakfast   L. 57.000        L. 85.000
  Half board*           L. 80.000        L. 70.000 p.p.
  Full board*           L. 90.000        L. 80.000 p.p.
  * Including drinks (1/4 wine and 1/2 mineral water for each lunch)

From john at dcs.rhbnc.ac.uk Fri May 10 04:35:28 1996
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Fri, 10 May 96 09:35:28 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199605100835.JAA18839@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles.

*** Please note that the location of the files was changed at the beginning of
*** the year, so that any copies you have of the previous instructions should
*** be discarded. The new location and instructions are given at the end of
*** the list.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-043:
----------------------------------------
Elimination of Constants from Machines over Algebraically Closed Fields
by Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France

Abstract: Let $\k$ be an algebraically closed field of characteristic 0. We show that constants can be removed efficiently from any machine over $\k$ solving a problem which is definable without constants. This gives a new proof of the transfer theorem of Blum, Cucker, Shub \& Smale for the problem $\p \stackrel{?}{=}\np$. We have similar results in positive characteristic for non-uniform complexity classes.
We also construct explicit and correct test sequences (in the sense of Heintz and Schnorr) for the class of polynomials which are easy to compute.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-044:
----------------------------------------
Hilbert's Nullstellensatz is in the Polynomial Hierarchy
by Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France

Abstract: We show that if the Generalized Riemann Hypothesis is true, the problem of deciding whether a system of polynomial equations in several complex variables has a solution is in the second level of the polynomial hierarchy. The best previous bound was PSPACE. The possibility that this problem might be NP-complete is also discussed (it is well-known to be NP-hard).

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-045:
----------------------------------------
Networks of Spiking Neurons: The Third Generation of Neural Network Models
by Wolfgang Maass, Technische Universitaet Graz, Austria

Abstract: The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-046:
----------------------------------------
Use Of Neural Network Ensembles for Portfolio Selection and Risk Management
by D.L.Toulson, Intelligent Financial Systems Ltd., UK
   S.P.Toulson, London School Of Economics, UK

Abstract: A well known method of managing the risk whilst maximising the return of a portfolio is through Markowitz Analysis of the efficient set. A key pre-requisite for this technique is the accurate estimation of the future expected returns and risks (variance of returns) of the securities contained in the portfolio along with their expected correlations. The estimates for future returns are typically obtained using weighted averages of historical returns of the securities involved or other (linear) techniques. Estimates for the volatilities of the securities may be made in the same way or through the use of (G)ARCH or stochastic volatility (SV) techniques. In this paper we propose the use of neural networks to estimate future returns and risks of securities. The networks are arranged into {\em committees}. Each committee contains a number of independently trained neural networks. The task of each committee is to estimate either the future return or risk of a particular security. The inputs to the networks of the committee make use of a novel discriminant analysis technique we have called {\em Fuzzy Discriminants Analysis}. The estimates of future returns and risks provided by the committees are then used to manage a portfolio of 40 UK equities over a five year period (1989-1994).
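To make the committee idea concrete, the following is a minimal sketch of a committee of independently trained one-hidden-layer networks whose outputs are averaged into a single estimate. The synthetic return series, window length, network sizes and plain averaging rule are assumptions for illustration only; they are not the system described in the abstract, which in particular uses Fuzzy Discriminants Analysis inputs and separate committees per security.

    import numpy as np

    # Committee of independently trained one-hidden-layer nets, each predicting
    # the next value of a series from a window of past values (toy data).
    rng = np.random.default_rng(1)
    series = np.cumsum(rng.normal(0, 0.01, 500))      # synthetic "price" path
    returns = np.diff(series)
    window = 5
    X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
    y = returns[window:]

    def train_member(X, y, n_hid=6, epochs=500, lr=0.05, seed=0):
        r = np.random.default_rng(seed)
        W1 = r.normal(0, 0.5, (n_hid, X.shape[1]))
        w2 = r.normal(0, 0.5, n_hid)
        for _ in range(epochs):
            H = np.tanh(X @ W1.T)                     # hidden activations
            err = H @ w2 - y                          # prediction errors
            grad_w2 = H.T @ err / len(y)
            grad_W1 = ((np.outer(err, w2) * (1 - H**2)).T @ X) / len(y)
            w2 -= lr * grad_w2
            W1 -= lr * grad_W1
        return W1, w2

    # Independent training runs differ only in their random initialisation here.
    members = [train_member(X, y, seed=s) for s in range(5)]

    def committee_predict(x):
        # The committee estimate is the average of the members' outputs.
        return np.mean([np.tanh(x @ W1.T) @ w2 for W1, w2 in members])

    print(committee_predict(X[-1]))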
The management of the portfolio is constrained such that at any time it should have the same risk characteristic as the FTSE-100 index. Within this constraint, the portfolio is chosen to provide the maximum possible return. We show that the managed portfolio significantly outperforms the FTSE-100 index in terms of both overall return and volatility.

--------------------------------------------------------------------

***************** ACCESS INSTRUCTIONS ******************

The Report NC-TR-96-001 can be accessed and printed as follows

% ftp ftp.dcs.rhbnc.ac.uk  (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-96-001.ps.Z
ftp> bye
% zcat nc-tr-96-001.ps.Z | lpr -l

Similarly for the other technical reports.

Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example,

nc-tr-96-002-title.ps.Z
nc-tr-96-002-body.ps.Z

The first contains the title page while the second contains the body of the report. The single command,

ftp> mget nc-tr-96-002*

will prompt you for the files you require.

A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

The files may also be accessed via WWW starting from the NeuroCOLT homepage (note that this is undergoing some corrections and may be temporarily inaccessible):

http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

Best wishes
John Shawe-Taylor

From josh at vlsia.uccs.edu Fri May 10 13:33:38 1996
From: josh at vlsia.uccs.edu (Alspector)
Date: Fri, 10 May 1996 11:33:38 -0600 (MDT)
Subject: Student assistantships in Colorado
Message-ID:

GRADUATE STUDENT ASSISTANTSHIPS IN NEURAL SYSTEMS IN COLORADO

There are several student assistantship positions available in the electrical and computer engineering department at the University of Colorado at Colorado Springs. These are under the direction of Professor Joshua Alspector. A brief description of the areas of interest follows:

The application of neural techniques to recognize patterns in handwritten documents and in remote-sensing images. The documents are from the Archives of the Indies in Seville and date from the time of Columbus. The object is to develop a version of the UNIX 'grep' command for visual images. This could also be applied to the detection of scenes in videos based on a rough visual description or a similar image.

The application of neural-style chips and algorithms to demanding problems in signal processing. These include adaptive non-linear equalization of underwater acoustic communication channels and magnetic recording channels. It is likely also to involve integrating the learning electronics with micro-machined sonic transducers directly on silicon.

Learning algorithms for implementation in VLSI. There is an existing learning system based on the Boltzmann machine. Improvements to this system and new algorithms are sought.

Adaptive user models for information services on the internet and in other distributed information systems. A system that predicts preferences for movies has been researched for a video-on-demand service. Similar systems for other information products and services such as music, restaurants, personalized news, shopping, etc. will be investigated.

Wireless local area networks. A system which uses low power electronics for classroom size wireless communication among a variety of terminals will be researched.
This system may use low power neural signal processing in analog VLSI. Smart sensors for sports activities to aid in physiologic and performance measurements of athletes at the Olympic Training Center. COMPLETED applications are due July 8, 1996 for the Fall, 1996 semester. For more information on applications contact: Susan M. Bennis ECE Graduate Coordinator Univ. of Colorado at Col. Springs Dept. of Elec. & Comp. Eng. P.O. Box 7150 Colorado Springs, CO 80933-7150 (719) 593-3351 (719) 593-3589 (fax) smbennis at elite.uccs.edu For further information on projects contact: Professor Joshua Alspector (719) 593 3510 josh at vlsia.uccs.edu From maass at igi.tu-graz.ac.at Sun May 12 15:11:12 1996 From: maass at igi.tu-graz.ac.at (Wolfgang Maass) Date: Sun, 12 May 96 21:11:12 +0200 Subject: 2 papers in NEUROPROSE on spiking versus sigmoidal neurons Message-ID: <199605121911.AA22731@figids03.tu-graz.ac.at> 1) The file maass.third-generation.ps.Z is now available for copying from the Neuroprose repository. This is a 23-page long paper. Hardcopies are not available. FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.third-generation.ps.Z Networks of Spiking Neurons: The Third Generation of Neural Network Models Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz Klosterwiesgasse 32/2 A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Abstract The computational power of formal models for networks of spiking neurons is compared with that of traditional neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. It is shown that networks of spiking neurons are computationally more powerful than threshold circuits and sigmoidal neural nets of the same size. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. *************************************************************** 2) The file maass.sigmoidal-spiking.ps.Z is now also available for copying from the Neuroprose repository. This is a 27-page long paper. Hardcopies are not available. FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.sigmoidal-spiking.ps.Z An Efficient Implementation of Sigmoidal Neural Nets in Temporal Coding with Noisy Spiking Neurons Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz Klosterwiesgasse 32/2 A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Abstract We show that networks of spiking neurons can simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. It is based on the observation that incoming "postsynaptic potentials" can SHIFT the firing time of a spiking neuron. The resulting new simulation is substantially faster and hence more consistent with experimental results about the speed of information processing in cortical neural systems. 
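The observation that an incoming postsynaptic potential can shift the firing time of a spiking neuron is easy to reproduce in a toy simulation. The leaky integrate-and-fire sketch below (all parameters are illustrative assumptions, not the model analysed in the paper) fires earlier when an excitatory postsynaptic current arrives before threshold is reached.

    import numpy as np

    # Toy leaky integrate-and-fire neuron driven by a constant current; an
    # extra excitatory postsynaptic potential (EPSP) arriving at t_syn advances
    # the time at which the membrane potential reaches threshold.  All
    # parameters are assumed for illustration only.

    def first_spike_time(epsp_weight, t_syn=5.0, dt=0.01, t_max=50.0,
                         tau_m=10.0, i_const=1.05, threshold=1.0, tau_s=2.0):
        v, t = 0.0, 0.0
        while t < t_max:
            # EPSP modelled as an exponentially decaying current after t_syn.
            i_syn = epsp_weight * np.exp(-(t - t_syn) / tau_s) if t >= t_syn else 0.0
            v += dt * (-v + i_const + i_syn) / tau_m
            t += dt
            if v >= threshold:
                return t
        return None

    print("firing time without EPSP:", first_spike_time(0.0))
    print("firing time with EPSP:   ", first_spike_time(1.0))

The second call fires noticeably earlier than the first, which is the kind of timing shift the temporal-coding construction relies on.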
As a consequence we can show that networks of noisy spiking neurons are "universal approximators" in the sense that they can approximate, with regard to temporal coding, any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organization of computations in networks of spiking neurons has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, our fast and noise-robust implementation of sigmoidal neural nets via temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse stream VLSI.

(To appear in Neural Computation.)

From zhang at salk.edu Mon May 13 02:31:47 1996
From: zhang at salk.edu (Kechen Zhang)
Date: Sun, 12 May 1996 23:31:47 -0700 (PDT)
Subject: paper available: HD cell theory
Message-ID: <9605130631.AA10324@salk.edu>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 1053 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/b91d52a2/attachment.ksh

From tbl at di.ufpe.br Mon May 13 15:13:32 1996
From: tbl at di.ufpe.br (tbl@di.ufpe.br)
Date: Mon, 13 May 1996 16:13:32 -0300
Subject: call for papers
Message-ID: <9605131913.AA05844@pesqueira>

III Brazilian Symposium on Neural Networks
******************************************

First call for papers

Sponsored by the Brazilian Computer Society (SBC)

The Third Brazilian Symposium on Neural Networks will be held at the Federal University of Pernambuco, in Recife (Brazil), from the 12th to the 14th of November, 1996. The SBRN symposia, as they were initially named, have been organized by the interest group in Neural Networks of the Brazilian Computer Society since 1994. The third version of the meeting follows the very successful organization of the previous events, which brought together the main developments of the area in Brazil and had the participation of many national and international researchers, both as invited speakers and as authors of papers presented at the symposium.

Recife is a very pleasant city in the northeast of Brazil, known for its good climate and beautiful beaches, with sunshine throughout almost the whole year. The city, whose name originated from the coral formations in the seaside port and beaches, is in a strategic touristic situation in the region and offers a good variety of hotels both in the city historic center and at the seaside resort.

Scientific papers will be analyzed by the program committee. This analysis will take into account originality, significance to the area, and clarity. Accepted papers will be fully published in the conference proceedings.

The major topics of interest include, but are not limited to:

Biological Perspectives
Theoretical Models
Algorithms and Architectures
Learning Models
Hardware Implementation
Signal Processing
Robotics and Control
Parallel and Distributed Implementations
Pattern Recognition
Image Processing
Optimization
Cognitive Science
Hybrid Systems
Dynamic Systems
Genetic Algorithms
Fuzzy Logic
Applications

Program Committee: (Tentative)
- Teresa Bernarda Ludermir - DI/UFPE
- Andre C. P. L. F. de Carvalho - ICMSC/USP (chair)
- Germano C.
Vasconcelos - DI/UFPE - Antonio de Padua Braga - DEE/UFMG - Dibio Leandro Borges - CEFET/PR - Paulo Martins Engel - II/UFRGS - Ricardo Machado - PUC/Rio - Valmir Barbosa - COPPE/UFRJ - Weber Martins - EEE/UFG Organising Committee: - Teresa Bernarda Ludermir - DI/UFPE (chair) - Edson C. B. Carvalho Filho - DI/UFPE - Germano C. Vasconcelos - DI/UFPE - Paulo Jorge Leitao Adeodato- DI/UFPE SUBMISSION PROCEDURE: The symposium seeks contributions to the state of the art and future perspectives of Neural Networks research. Submitted papers must be in Portuguese, English or Spanish. The submissions must include the original and three copies of the paper and must follow the format below (Electronic mail and FAX submissions are NOT accepted). The paper must be printed using a laser printer, in two-column format, not numbered, 8.5 X 11.0 inch (21,7 X 28.0 cm). It must not exceed eight pages, including all figures and diagrams. The font size should be 10 pts, such as Times-Roman or equivalent, with the following margins: right and left 2.5 cm, top 3.5 cm, and bottom 2.0 cm. The first page should contain the paper's title, the complete author(s) name(s), affiliation(s), and mailing address(es), followed by a short (150 words) abstract and a list of descriptive key words. The submission should also include an accompanying letter containing the following information : * Manuscript title * First author's name, mailing address and e-mail * Technical area of the paper SUBMISSION ADDRESS: Four copies (one original and three copies) must be submitted to: Andre C. P. L. F. de Carvalho Coordenador do Comite de Programa - III SBRN Departamento de Ciencias de Computacao e Estatistica ICMSC - Universidade de Sao Paulo Caixa Postal 668 CEP 13560.070 Sao Carlos, SP Brazil Phone: +55 162 726222 FAX: +55 162 749150 E-mail: IIISBRN at di.ufpe.br IMPORTANT DATES: August 16, 1996 (mailing date) Deadline for paper submission September 16, 1996 Notification of acceptance/rejection November, 12-14 1996 III SBRN ADDITIONAL INFORMATION: * Up-to-minute information about the symposium is available on the World Wide Web (WWW) at http://www.di.ufpe.br/~IIISBRN/web_sbrn * Questions can be sent by E-mail to IIISBRN at di.ufpe.br We look forward to seeing you in Recife ! From icsc at freenet.edmonton.ab.ca Tue May 14 12:25:24 1996 From: icsc at freenet.edmonton.ab.ca (icsc@freenet.edmonton.ab.ca) Date: Tue, 14 May 1996 10:25:24 -0600 (MDT) Subject: ISFL'97 Submissions Message-ID: Please note that the deadline for submissions to ISFL'97 approaches on May 31, 1996. Please notify if you need an extension. Announcement and Call for Papers Second International ICSC Symposium on FUZZY LOGIC AND APPLICATIONS ISFL'97 To be held at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland February 12 - 14, 1997 I. SPONSORS Swiss Federal Institute of Technology (ETH), Zurich, Switzerland and ICSC, International Computer Science Conventions, Canada/Switzerland II. PURPOSE OF THE CONFERENCE This conference is the successor of the highly successful meeting held in Zurich in 1995 (ISFL'95) and is intended to provide a forum for the discussion of new developments in fuzzy logic and its applications. An invitation to participate is extended both to those who took part in ISFL'95 and to others working in this field. Applications of fuzzy logic have played a significant role in industry, notably in the field of process and plant control, especially in applications where accurate modelling is difficult. 
The organisers hope that contributions will come not only from this field, but also from newer applications areas, perhaps in business, financial planning management, damage assessment, security, and so on. III. TOPICS Contributions are sought in areas based on the list below, which is indicative only. Contributions from new application areas will be particularly welcome. - Basic concepts such as various kinds of Fuzzy Sets, Fuzzy Relations, Possibility Theory - Neuro-Fuzzy Systems and Learning - Fuzzy Decision Analysis - Image Analysis with Fuzzy Techniques - Mathematical Aspects such as non-classical logics, Category Theory, Algebra, Topology, Chaos Theory - Modeling, Identification, Control - Robotics - Fuzzy Reasoning, Methodology and Applications, for example in Artificial Intelligence, Expert Systems, Image Processing and Pattern Recognition, Cluster Analysis, Game Theory, Mathematical Programming, Neural Networks, Genetic Algorithms and Evolutionary Computing - Implementation, for example in Engineering, Process Control, Production, Medicine - Design - Damage Assessment - Security - Business, Finance, Management IV. INTERNATIONAL SCIENTIFIC COMMITTEE (ISC) - Honorary Chairman: M. Mansour, Swiss Federal Institute of Technology, Zurich - Chairman: N. Steele, Coventry University, U.K. - Vice-Chairman: E. Badreddin, Swiss Federal Institute of Technology, Zurich - Members: E. Alpaydin, Turkey P.G. Anderson, USA Z. Bien, Korea H.H. Bothe, Germany G. Dray, France R. Felix, Germany J. Godjevac, Switzerland H. Hellendoorn, Germany M. Heiss, Austria K. Iwata, Japan M. Jamshidi, USA E.P. Klement, Austria B. Kosko, USA R. Kruse, Germany F. Masulli, Italy S. Nahavandi, New Zealand C.C. Nguyen, USA V. Novak, Czech Republic R. Palm, Germany D.W. Pearson, France I. Perfilieva, Russia B. Reusch, Germany G.D. Smith, U.K. V. ORGANISING COMMITTEE ISFL'97 is a joint operation between the Swiss Federal Institute of Technology (ETH), Zurich and International Computer Science Conventions (ICSC), Canada/Switzerland. VI. PUBLICATION OF PAPERS All accepted papers will appear in the conference proceedings, published by ICSC Academic Press. In addition, some selected papers may also be considered for journal publication. VII. SUBMISSION OF MANUSCRIPTS Prospective authors are requested to send two copies of their abstracts of 500 words for review by the International Scientific Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. If authors believe that more details are necessary to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the International Scientific Committee. The abstract should also include: - Title of proposed paper - Authors names, affiliations, addresses - Name of author to contact for correspondence - E-mail address and fax number of contact author - Name of topic which best describes the paper (max. 5 keywords) Contributions are welcome from those working in industry and having experience in the topics of this conference as well as from academics. The conference language is English. Abstracts may be submitted either by electronic mail (ASCII text), fax or mail (2 copies) to either one of the following addresses: ICSC Canada P.O. Box 279 Millet, Alberta T0C 1Z0 Canada Fax: +1-403-387-4329 Email: icsc at freenet.edmonton.ab.ca or ICSC Switzerland P.O. Box 657 CH-8055 Zurich Switzerland VIII. 
OTHER CONTRIBUTIONS Anyone wishing to organise a workshop, tutorial or discussion, is requested to contact the chairman of the conference, Prof. Nigel Steele (e-mail: nsteele at coventry.ac.uk / phone: +44-1203-838568 / fax: +44-1203-838585) before August 31, 1996. IX. DEADLINES AND REGISTRATION It is the intention of the organisers to have the conference proceedings available for the delegates. Consequently, the deadlines below are to be strictly respected: - Submission of Abstracts: May 31, 1996 - Notification of Acceptance: August 31, 1996 - Delivery of full papers: October 31, 1996 X. ACCOMMODATION Block reservations will be made at nearby hotels and accommodation at reasonable rates (not included in the registration fee) will be available upon registration (full details will follow with the letters of acceptance) XI. SOCIAL AND TOURIST ACTIVITIES A social programme, including a reception, will be organized on the evening of February 13, 1997. This acitivity will also be available for accompanying persons. Winter is an attractive season in Switzerland and many famous alpine resorts are in easy reach by rail, bus or car for a one or two day excursion. The city of Zurich itself is the proud home of many art galleries, museums or theatres. Furthermore, the world famous shopping street 'Bahnhofstrasse' or the old part of the town with its many bistros, bars and restaurants are always worth a visit. XII. INFORMATION For further information please contact either of the following: - ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada E-mail: icsc at freenet.edmonton.ab.ca Fax: +1-403-387-4329 Phone: +1-403-387-3546 - ICSC Switzerland, P.O. Box 657, CH-8055 Zurich, Switzerland Fax: +41-1-761-9627 - Prof. Nigel Steele, Chairman ISFL'97, Coventry University, U.K. E-mail: nsteele at coventry.ac.uk Fax: +44-1203-838585 Phone: +44-1203-838568 From goldfarb at unb.ca Tue May 14 13:06:41 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Tue, 14 May 1996 14:06:41 -0300 (ADT) Subject: Workshop: WHAT IS INDUCTIVE LEARNING? (program) Message-ID: Please post. My apologies if you receive multiple copies of the following announcement. ********************************************************************** WHAT IS INDUCTIVE LEARNING? On the foundations of AI and Cognitive Science May 20-21, 1996 held in conjunction with the 11th biennial Canadian AI conference, at the Holiday Inn on King, in Toronto, Canada. Workshop Chair: Lev Goldfarb Each talk (except opening remarks) is 30 min. followed by 30 min. question/discussion period. Monday, May 20, Morning Session ------------------------------- 8:45-9:00 Lev Goldfarb, University of New Brunswick, Canada "Opening Remarks: The inductive learning process as the central cognitive process" 9:00 Chris Thornton, University of Sussex, UK "Does Induction always lead to representation?" 10:10 Lev Goldfarb, University of New Brunswick, Canada "What is inductive learning? Construction of the inductive class representation" 11:20 Anselm Blumer, Tufts University, USA (invited talk) "PAC learning and the Vapnik-Chervonenkis dimension" Monday, May 20, Afternoon Session --------------------------------- 2:00 Charles Ling, University of Western Ontario, Canada (invited talk) "Symbolic and neural network learning in cognitive modeling: Where's the beef?" 
3:10 Eduardo Perez, Ricardo Vilalta and Larry Rendell, University of Illinois, USA (invited talk)
     "On the importance of change of representation in induction"
4:20 Sayan Bhattacharyya and John Laird, University of Michigan, USA
     "A cognitive model of recall motivated by inductive learning"

Tuesday, May 21, Morning Session
--------------------------------
9:00 Ryszard Michalski, George Mason University, USA (invited talk)
     "Inductive inference from the viewpoint of inferential theory of learning"
10:10 Lev Goldfarb, Sanjay Deshpande and Virendra Bhavsar, University of New Brunswick, Canada
     "Inductive theory of vision"
11:20 David Gadishev and David Chiu, University of Guelph, Canada
     "Learning basic elements for texture representation and comparison"

Tuesday, May 21, Afternoon Session
----------------------------------
2:00 John Caulfield, Center of Applied Optics, A&M University, USA (invited talk)
     "Induction and Physics"
3:10 Igor Jurisica, University of Toronto, Canada
     "Inductive learning and case-based reasoning"
4:20 Concluding discussion: What is inductive learning?

************************************************************************
URL for Canadian AI'96 Conference http://ai.iit.nrc.ca/cscsi/conferences/ai96.html

From jdcohen+ at andrew.cmu.edu Tue May 14 14:21:39 1996
From: jdcohen+ at andrew.cmu.edu (Jonathan D. Cohen)
Date: Tue, 14 May 1996 14:21:39 -0400 (EDT)
Subject: Postdoc Position Available
Message-ID:

Postdoctoral Position:
Computational Modeling of Neuromodulation and/or Prefrontal Cortex Function
----------------
Center for the Neural Basis of Cognition
Carnegie Mellon University and the University of Pittsburgh
----------------

A postdoctoral position is available starting September 1, 1996 for someone interested in pursuing computational modeling approaches to the role of neuromodulation and/or prefrontal cortical function in cognition. The nature of the position is flexible, depending upon the individual's interest and expertise. Approaches can be focused at the neurobiological level (e.g., modeling detailed physiological characteristics of neuromodulatory systems, such as locus coeruleus and/or dopaminergic nuclei, or the circuitry of prefrontal cortex), or at the more cognitive level (e.g., the nature of representations and/or the mechanisms involved in active maintenance of information within prefrontal cortex, and their role in working memory). The primary requirement for the position is a Ph.D. in the cognitive, computational, or neurosciences, and extensive experience with computational modeling work, either at the PDP/connectionist or detailed biophysical level. The candidate will be working directly with Jonathan Cohen and Randall O'Reilly within the Department of Psychology at CMU, and in potential collaboration with other members of the Center for the Neural Basis of Cognition (CNBC), including James McClelland, David Lewis, German Barrionuevo, Susan Sesack, G. Bard Ermantrout, as well as collaborators at other institutions, such as Gary Aston-Jones (Hahnemann University), Joseph LeDoux (NYU) and Peter Dayan (MIT). Available resources include direct access to state-of-the-art computing facilities within the CNBC (IBM SP-2 and SGI PowerChallenge), neuroimaging facilities (PET and 3T fMRI at the University of Pittsburgh), and clinical populations (Western Psychiatric Institute and Clinic). Carnegie Mellon University and the University of Pittsburgh are both equal opportunity employers; minorities and women are encouraged to apply.

Inquiries can be directed to Jonathan Cohen (jdcohen at cmu.edu) or Randy O'Reilly (oreilly at cmu.edu). Applicants should send a CV, a small number of relevant publications, and the names and addresses of at least two references, to:

Jonathan D. Cohen
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-2810

From moeller at informatik.uni-bonn.de Wed May 15 08:13:54 1996
From: moeller at informatik.uni-bonn.de (Knut Moeller)
Date: Wed, 15 May 1996 14:13:54 +0200 (MET DST)
Subject: HeKoNN96-CfP
Message-ID: <199605151213.OAA03722@macke.informatik.uni-bonn.de>

This announcement was sent to various lists. Sorry if you received multiple copies.

CALL FOR PARTICIPATION
=================================================================
                       H e K o N N  9 6

           Autumn School in C o n n e c t i o n i s m
                and  N e u r a l  N e t w o r k s

                October 2-6, 1996, Muenster, Germany
                   Conference Language: German
----------------------------------------------------------------

A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following address:

WWW: http://set.gmd.de/AS/fg1.1.2/hekonn

= = = O V E R V I E W = = =

Artificial neural networks (ANN's) have been discussed in many diverse areas, ranging from models of cortical learning to the control of industrial processes. The goal of the Autumn School in Connectionism and Neural Networks is to give a comprehensive introduction to connectionism and artificial neural networks (ANN's) and to provide an overview of the current state of the art. Courses will be offered in five thematic tracks. (The conference language is German.)

The FOUNDATION track will introduce basic concepts (A. Zell, Univ. Stuttgart) and theoretical issues. Hardware aspects (U. Rueckert, Univ. Paderborn), Lifelong Learning (G. Paass, GMD St. Augustin), algorithmic complexity of learning procedures (M. Schmitt, TU Graz) and convergence properties of ANN's (K. Hornik, TU Vienna) are presented in further lectures.

This year, a special track is devoted to BRAIN RESEARCH. Courses are offered about the simulation of biological neurons (R. Rojas, Univ. Halle), theoretical neurobiology (H. Gluender, LMU Munich), learning and memory (A. Bibbig, Univ. Ulm) and dynamical aspects of cortical information processing (H. Dinse, Univ. Bochum).

The track on SYMBOLIC CONNECTIONISM and COGNITIVE MODELLING consists of courses on procedures for extracting rules from ANN's (J. Diederich, QUT Brisbane), representation and cognitive models (G. Peschl, Univ. Vienna), autonomous agents and ANN's (R. Pfeiffer, ETH Zuerich) and hybrid systems (A. Ultsch, Univ. Marburg).

APPLICATIONS of ANN's are covered by courses on image processing (H. Bischof, TU Vienna), evolution strategies and ANN's (J. Born, FU Berlin), ANN's and fuzzy logic (R. Kruse, Univ. Braunschweig), and on medical applications (T. Waschulzik, Univ. Bremen).

In addition, there will be courses on PROGRAMMING and SIMULATORS. Participants will have the opportunity to work with the SNNS simulator (G. Mamier, A. Zell, Univ. Stuttgart) and the Vienet2/ECANSE simulation tool (G. Linhart, TU Vienna).
From ATAXR at asuvm.inre.asu.edu Tue May 14 16:20:58 1996 From: ATAXR at asuvm.inre.asu.edu (Asim Roy) Date: Tue, 14 May 1996 13:20:58 -0700 (MST) Subject: Connectionist Learning - Some New Ideas Message-ID: <01I4P4L13GWI8X1P3A@asu.edu> We have recently published a set of principles for learning in neural networks/connectionist models that is different from classical connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE Transactions on Neural Networks, to appear; see references below). Below is a brief summary of the new learning theory and why we think classical connectionist learning, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of training examples for learning), is not brain-like at all. Since vigorous and open debate is very healthy for a scientific field, we invite comments for and against our ideas from all sides. "A New Theory for Learning in Connectionist Models" We believe that a good rigorous theory for artificial neural networks/connectionist models should include learning methods that perform the following tasks or adhere to the following criteria: A. Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem, since, in general, it is a task performed by the brain. A pre-designed net should not be provided to the method as part of its external input, since it never is an external input to the brain. From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - a system that is expected to learn "on its own" without any external design assistance. B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties. Some people might argue that ordinary brains, and particularly those with learning disabilities, do exhibit such problems and that these learning requirements are the attributes only of a "super" brain. The goal of neuroengineers and neuroscientists is to design and build learning systems that are robust, reliable and powerful. They have no interest in creating weak and problematic learning devices that need constant attention and intervention. C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do. For example, one which learns from only 10 examples learns faster than one which requires a 100 or a 1000 examples. We have shown that on-line learning (see references below), when not allowed to store training examples in memory, can be extremely slow in learning - that is, would require many more examples to learn a given task compared to methods that use memory to remember training examples. It is not desirable that a neural network/connectionist learning system be similar in characteristics to learners characterized by such sayings as "Told him a million times and he still doesn't understand." On-line learning systems must learn rapidly from only a few examples. D. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and Papert[1988]). It must be able to both design and train an appropriate net in polynomial time. That is, given P examples, the learning time (i.e. both design and training time) should be a polynomial function of P. 
This, again, is a critical computational property from a neuroengineering and neuroscience point of view. This property has its origins in the belief that biological systems (insects, birds for example) could not be solving NP-hard problems, especially when efficient, polynomial time learning methods can conceivably be designed and developed. E. Generalization in Learning: The method must be able to generalize reasonably well so that only a small amount of network resources is used. That is, it must try to design the smallest possible net, although it might not be able to do so every time. This must be an explicit part of the algorithm. This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task. General Comments This theory defines algorithmic characteristics that are obviously much more brain-like than those of classical connectionist theory, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of actual training examples for learning). Judging by the above characteristics, classical connectionist learning is not very powerful or robust. First of all, it does not even address the issue of network design, a task that should be central to any neural network/connectionist learning theory. It is also plagued by efficiency (lack of polynomial time complexity, need for excessive number of teaching examples) and robustness problems (local minima, oscillation, catastrophic forgetting, lost memories), problems that are partly acquired from its attempt to learn without using memory. Classical connectionist learning, therefore, is not very brain-like at all. As far as I know, there is no biological evidence for any of the premises of classical connectionist learning. Without having to reach into biology, simple common sense arguments can show that the ideas of local learning, memoryless learning and predefined nets are impractical even for the brain! For example, the idea of local learning requires a predefined network. Classical connectionist learning forgot to ask a very fundamental question - who designs the net for the brain? The answer is very simple: Who else, but the brain itself! So, who should construct the net for a neural net algorithm? The answer again is very simple: Who else, but the algorithm itself! (By the way, this is not a criticism of constructive algorithms that do design nets.) Under classical connectionist learning, a net has to be constructed (by someone, somehow - but not by the algorithm!) prior to having seen a single training example! I cannot imagine any system, biological or otherwise, being able to construct a net with zero information about the problem to be solved and with no knowledge of the complexity of the problem. (Again, this is not a criticism of constructive algorithms.) A good test for a so-called "brain-like" algorithm is to imagine it actually being part of a human brain. Then examine the learning phenomenon of the algorithm and compare it with that of the human's. For example, pose the following question: If an algorithm like back propagation is "planted" in the brain, how will it behave? Will it be similar to human behavior in every way? Look at the following simple "model/algorithm" phenomenon when the back- propagation algorithm is "fitted" to a human brain. 
You give it a few learning examples for a simple problem and after a while this "back prop fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again." And you ask: "Which examples should I go over again?" And this "back prop fitted" brain replies: "You need to go over all of them. I don't remember anything you told me." So you go over the teaching examples again. And let's say it gets stuck in a local minimum again and, as usual, does not remember any of the past examples. So you provide the teaching examples again and this process is repeated a few times until it learns properly. The obvious questions are as follows: Is "not remembering" any of the learning examples a brain- like phenomenon? Are the interactions with this so-called "brain- like" algorithm similar to what one would actually encounter with a human in a similar situation? If the interactions are not similar, then the algorithm is not brain-like. A so-called brain-like algorithm's interactions with the external world/teacher cannot be different from that of the human. In the context of this example, it should be noted that storing/remembering relevant facts and examples is very much a natural part of the human learning process. Without the ability to store and recall facts/information and discuss, compare and argue about them, our ability to learn would be in serious jeopardy. Information storage facilitates mental comparison of facts and information and is an integral part of rapid and efficient learning. It is not biologically justified when "brain-like" algorithms disallow usage of memory to store relevant information. Another typical phenomenon of classical connectionist learning is the "external tweaking" of algorithms. How many times do we "externally tweak" the brain (e.g. adjust the net, try a different parameter setting) for it to learn? Interactions with a brain-like algorithm has to be brain-like indeed in all respect. The learning scheme postulated above does not specify how learning is to take place - that is, whether memory is to be used or not to store training examples for learning, or whether learning is to be through local learning at each node in the net or through some global mechanism. It merely defines broad computational characteristics and tasks (i.e. fundamental learning principles) that are brain-like and that all neural network/connectionist algorithms should follow. But there is complete freedom otherwise in designing the algorithms themselves. We have shown that robust, reliable learning algorithms can indeed be developed that satisfy these learning principles (see references below). Many constructive algorithms satisfy many of the learning principles defined above. They can, perhaps, be modified to satisfy all of the learning principles. The learning theory above defines computational and learning characteristics that have always been desired by the neural network/connectionist field. It is difficult to argue that these characteristics are not "desirable," especially for self-learning, self- contained systems. For neuroscientists and neuroengineers, it should open the door to development of brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. It essentially tries to redefine the nature of algorithms considered to be brain- like. 
And it defines the foundations for developing truly self-learning systems - ones that wouldn't require constant intervention and tweaking by external agents (human experts) for them to learn. It is perhaps time to reexamine the foundations of the neural network/connectionist field. This mailing list/newsletter provides an excellent opportunity for participation by all concerned throughout the world. I am looking forward to a lively debate on these matters. That is how a scientific field makes real progress. Asim Roy Arizona State University Tempe, Arizona 85287-3606, USA Email: ataxr at asuvm.inre.asu.edu References 1. Roy, A., Govil, S. & Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear. 2. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202. 3. Roy, A., Kim, L.S. & Mukhopadhyay, S. 1993. A Polynomial Time Algorithm for the Construction and Training of a Class of Multilayer Perceptrons. Neural Networks, Vol. 6, No. 4, pp. 535-545. 4. Mukhopadhyay, S., Roy, A., Kim, L.S. & Govil, S. 1993. A Polynomial Time Algorithm for Generating Neural Networks for Pattern Classification - its Stability Properties and Some Test Results. Neural Computation, Vol. 5, No. 2, pp. 225-238. From isis at cs.monash.edu.au Wed May 15 05:49:04 1996 From: isis at cs.monash.edu.au (ISIS conference) Date: Wed, 15 May 1996 19:49:04 +1000 Subject: Call for Participation for ISIS Message-ID: <199605150949.TAA22314@molly.cs.monash.edu.au> ISIS CONFERENCE: INFORMATION, STATISTICS AND INDUCTION IN SCIENCE *** Call for Participation *** Old Melbourne Hotel Melbourne, Australia 20-23 August 1996 INVITED SPEAKERS: Henry Kyburg, Jr. (University of Rochester, NY) Marvin Minsky (MIT) J. Ross Quinlan (Sydney University) Jorma J. Rissanen (IBM Almaden Research, San Jose, California) Ray Solomonoff (Oxbridge Research, Mass) This conference will explore the use of computational modeling to understand and emulate inductive processes in science. The problems involved in building and using such computer models reflect methodological and foundational concerns common to a variety of academic disciplines, especially statistics, artificial intelligence (AI) and the philosophy of science. This conference aims to bring together researchers from these and related fields to present new computational technologies for supporting or analysing scientific inference and to engage in collegial debate over the merits and difficulties underlying the various approaches to automating inductive and statistical inference. About the invited speakers: Henry Kyburg is noted for his invention of the lottery paradox (in "Probability and the Logic of Rational Belief", 1961) and his research since then in providing a non-Bayesian foundation for a probabilistic epistemology. Marvin Minsky is one of the founders of the field of artificial intelligence. He is the inventor of the use of frames in knowledge representation, stimulus for much of the concern with nonmonotonic reasoning in AI, noted debunker of Perceptrons and recently the developer of the "society of mind" approach to cognitive science. J. Ross Quinlan is the inventor of the information-theoretic approach to classification learning in ID3 and C4.5, which have become world-wide standards in testing machine learning algorithms. Jorma J.
Rissanen invented the Minimum Description Length (MDL) method of inference in 1978, which has subsequently been widely adopted in algorithms supporting machine learning. Ray Solomonoff developed the notion of algorithmic complexity in 1960, and his work was influential in shaping the Minimum Message Length (MML) work of Chris Wallace (1968) and the Minimum Description Length (MDL) work of Jorma Rissanen (1978). ========================= Tutorials (Tue 20 Aug 96) ========================= 10am - 1pm: Tutorial 1: Peter Spirtes "Automated Learning of Bayesian Networks" Tutorial 2: Michael Pazzani "Machine Learning and Intelligent Info Access" 2pm - 5pm: Tutorial 3: Jan Zytkow "Automation of Scientific Discovery" Tutorial 4: Paul Vitanyi "Kolmogorov Complexity & Applications" About the tutorial leaders: Peter Spirtes is a co-author of the TETRAD algorithm for the induction of causal models from sample data and is an active member of the research group on causality and induction at Carnegie Mellon University. Mike Pazzani is one of the leading researchers world-wide in machine learning and the founder of the UC Irvine machine learning archive. Current interests include the use of intelligent agents to support information filtering over the Internet. Jan Zytkow is one of the co-authors (with Simon, Langley and Bradshaw) of "Scientific Discovery" (1987), reporting on the series of BACON programs for automating the learning of quantitative scientific laws. Paul Vitanyi is co-author (with Ming Li) of "An Introduction to Kolmogorov Complexity and its Applications (1993) and of much related work on complexity and information-theoretic methods of induction. Professor Vitanyi will be visiting the Department of Computer Science, Monash, for several weeks after the conference. A limited number of free student conference registrations or tutorial registrations will be available by application to the organizers in exchange for part-time work during the conference. Program Committee: Hirotugu Akaike, Lloyd Allison, Shun-Ichi Amari, Mark Bedau, Jim Bezdek, Hamparsum Bozdogan, Wray Buntine, Peter Cheeseman, Honghua Dai, David Dowe, Usama Fayyad, Doug Fisher, Alex Gammerman, Clark Glymour, Randy Goebel, Josef Gruska, David Hand, Bill Harper, David Heckerman, Colin Howson, Lawrence Hunter, Frank Jackson, Max King, Kevin Korb, Henry Kyburg, Rick Lathrop, Ming Li, Nozomu Matsubara, Aleksandar Milosavljevic, Richard Neapolitan, Jon Oliver, Michael Pazzani, J. Ross Quinlan, Glenn Shafer, Peter Slezak, Padhraic Smyth, Ray Solomonoff, Paul Thagard, Neil Thomason, Raul Valdes-Perez, Tim van Gelder, Paul Vitanyi, Chris Wallace, Geoff Webb, Xindong Wu, Jan Zytkow. Inquiries to: isis96 at cs.monash.edu.au David Dowe (chair): dld at cs.monash.edu.au Kevin Korb (co-chair): korb at cs.monash.edu.au or Jonathan Oliver (co-chair): jono at cs.monash.edu.au Detailed up-to-date information, including registration costs and further details of speakers, their talks and the tutorials is available on the WWW at: http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml - David Dowe, Kevin Korb and Jon Oliver. 
======================================================================= From cherkaue at cs.wisc.edu Thu May 16 15:26:37 1996 From: cherkaue at cs.wisc.edu (Kevin Cherkauer) Date: Thu, 16 May 1996 14:26:37 -0500 Subject: Connectionist Learning - Some New Ideas Message-ID: <199605161926.OAA27944@mozzarella.cs.wisc.edu> In a recent thought-provoking posting to the connectionist list, Asim Roy said: >We have recently published a set of principles for learning in neural >networks/connectionist models that is different from classical >connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE >Transactions on Neural Networks, to appear; .. >E. Generalization in Learning: The method must be able to >generalize reasonably well so that only a small amount of network >resources is used. That is, it must try to design the smallest possible >net, although it might not be able to do so every time. This must be >an explicit part of the algorithm. This property is based on the >notion that the brain could not be wasteful of its limited resources, >so it must be trying to design the smallest possible net for every >task. I disagree with this point. According to Hertz, Krogh, and Palmer (1991, p. 2), the human brain contains about 10^11 neurons. (They also state on p. 3 that "the axon of a typical neuron makes a few thousand synapses with other neurons," so we're looking at on the order of 10^14 "connections" in the brain.) Note that a period of 100 years contains only about 3x10^9 seconds. Thus, if you lived 100 years and learned continuously at a constant rate every second of your life, your brain would be at liberty to "use up" the capacity of about 30 neurons (and 30,000 connections) per second. I would guess this is a very conservative bound, because most of us probably spend quite a bit of time where we aren't learning at such a furious rate. But even using this conservative bound, I calculate that I'm allowed to use up about 2.7x10^6 neurons (and 2.7x10^9 connections) today. I'll try not to spend them all in one place. :-) Dr. Roy's suggestion that the brain must try "to design the smallest possible net for every task" because "the brain could not be wasteful of its limited resources" is unlikely, in my opinion. It seems to me that the brain has rather an abundance of neurons. On the other hand, finding optimal solutions to many interesting "real-world" problems is often very hard computationally. I am not a complexity theorist, but I will hazard to suggest that a constraint on neural systems to be optimal or near-optimal in their space usage is probably both impossible to realize and, in fact, unnecessary. Wild speculation: the brain may have so many neurons precisely so that it can afford to be suboptimal in its storage usage in order to avoid computational time intractability. References Hertz, J.; Krogh, A.; & Palmer, R.G. 1991. Introduction to the Theory of Neural Computation. Redwood City, CA:Addison-Wesley. Roy, A., Govil, S. & Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202. 
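For readers who want to check the arithmetic, the figures above can be reproduced in a few lines of Python. This is only a rough sketch: the neuron and synapse counts are simply the round numbers quoted from Hertz, Krogh & Palmer, and the 100-year continuous-learning lifetime is the same hypothetical assumption used in the calculation above.

# Back-of-envelope check of the "neuron budget" estimate discussed above.
# Assumed round figures (as quoted): ~1e11 neurons, a few thousand synapses
# per neuron, and a hypothetical 100 years of nonstop learning.

neurons = 1e11                    # approximate number of neurons in the brain
synapses_per_neuron = 1e3         # "a few thousand" synapses per neuron
connections = neurons * synapses_per_neuron   # roughly 1e14 connections

seconds_per_year = 365.25 * 24 * 3600
lifetime_seconds = 100 * seconds_per_year     # about 3.2e9 seconds in 100 years

neurons_per_second = neurons / lifetime_seconds           # about 30 neurons/second
connections_per_second = connections / lifetime_seconds   # about 30,000 connections/second
neurons_per_day = neurons_per_second * 86400               # about 2.7e6 neurons/day

print("seconds in 100 years:   %.1e" % lifetime_seconds)
print("neurons per second:     %.0f" % neurons_per_second)
print("connections per second: %.0f" % connections_per_second)
print("neurons per day:        %.1e" % neurons_per_day)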
=============================================================================== Kevin Cherkauer cherkauer at cs.wisc.edu From chris at anvil.co.uk Fri May 17 13:08:24 1996 From: chris at anvil.co.uk (Chris Sharpington) Date: Fri, 17 May 96 13:08:24 BST Subject: New Book announcement Message-ID: <9605171208.AA01885@anvil.co.uk> ===================================================================== NEW BOOK ANNOUNCEMENT RAPID APPLICATION GENERATION OF BUSINESS AND FINANCE SOFTWARE SUKHDEV KHEBBAL AND CHRIS SHARPINGTON Kluwer Academic Publishers, March 1996 ISBN: 0-7923-9707-X The objectives of the work described in this book were twofold: 1) to capitalise on recent work in object-oriented integration methods to build a Framework for rapid application generation of distributed client-server systems, using the same API on both Microsoft Windows and on Unix 2) to use the Framework to generate real-world applications for intelligent data analysis techniques (neural networks and genetic algorithms) in Finance and Marketing The key requirement was to be able to "plug and play" servers, i.e. unplug an Excel forecasting module and plug in a neural network forecasting tool to demonstrate the improved forecasting accuracy. Four applications were built to prove the benefits of the Framework and to demonstrate the value of intelligent techniques for improved data analysis. The application descriptions are accessible to the business manager, interested in the business issues involved, who may have little technical knowledge of neural networks and genetic algorithms. At the same time, technical experts can benefit from the examples of solving real-world application issues. The applications are Direct Marketing (customer targeting and market segmentation), Financial Forecasting, Bankruptcy Prediction, and Executive Information Systems. Client-server computing has been attracting great interest of late. However, a server does not have to be a database. The approach in this work has been to standardise the interface to servers and collect a number of different servers together into a Toolkit. Application generation then becomes the rapid and simple process of plugging together the servers required (e.g. data retrieval, data analysis, data display) with a client to control their interaction. The emergence of object-oriented inter-application communication standards such as Object Linking and Embedding (OLE) from Microsoft, and CORBA from the Object Management Group, is fuelling great interest in distributed systems and their commercial benefits. An important contribution of this book is to detail and compare current inter-application communication methods. This will be of great benefit in assessing the potential of each communication method for business applications and in assessing the benefits of the HANSA Framework. The work was carried out under Esprit project 6369 HANSA - a collaboration between industrial and academic partners from four European countries, with funding support from the European Commission. Having studied the technical details and illustrations of business value obtained from the Framework, the reader is given details of how to obtain the software (for both Microsoft Windows and Unix) free of charge from an ftp site. [ There is also a World Wide Web page on The HANSA project: http://www.cs.ucl.ac.uk/hansa ] CONTENTS: ======== Chap 1: Rapid Application Development and the HANSA Project - Sukhdev Khebbal, University College London, UK. - Chris Sharpington, CRL, Hayes, UK.
PART ONE: TOOLS FOR RAPID APPLICATION DEVELOPMENT ================================================= Chap 2: The HANSA Framework - Sukhdev Khebbal and Jonathan Ladipo, University College London, UK. Chap 3: The HANSA Toolkit and The MIMENICE tool - Eric LeSaint, MIMETICS, FRANCE. - Sukhdev Khebbal, University College London, UK. PART TWO: OBJECT-ORIENTED INTEGRATION METHODS ============================================= Chap 4: Survey of Object-Oriented Integration Methods - Sukhdev Khebbal and Jonathan Ladipo, University College London, UK. PART THREE: REAL-WORLD APPLICATIONS =================================== Chap 5: Direct Marketing Application - Chris Sharpington, CRL, Hayes, UK. Chap 6: Banking Application - Thomas Look and Michael Kuhn, IFS, Germany. Chap 7: Bankruptcy Prediction Application - Konrad Feldman, Jason Kingdon, Anoop Mangat, SearchSpace Ltd, London, UK. - Renato Arisi, Orsio Romagnoli, O.Group, Rome, ITALY. Chap 8: Executive Information Systems Application - Pierre Charelain and Louis Moussy, Promind, FRANCE. PART FOUR: DEVELOPING HYBRID SYSTEMS ==================================== Chap 9: Evaluating the HANSA Framework - Sukhdev Khebbal and Jonathan Ladipo, University College London, UK. Chap 10: Conclusion and Future Directions - Sukhdev Khebbal, University College London, UK. - Chris Sharpington, CRL, Hayes, UK. ISBN 0-7923-9707-X 212pp HARDBOUND March 1996 Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. TO ORDER THE BOOK ================= Contact your local bookshop or supplier, or order direct from the publisher using one of the addresses below. For customers in Mexico, USA, Canada and Latin America: Kluwer Academic Publishers, Order Department, P.O. Box 358, Accord Station, Hingham, MA 02018-0358, U.S.A. Tel : 617 871 6600 Fax : 617 871 6528 Email : kluwer at wkap.com Rest of the world: Kluwer Academic Publishers, Order Department, P.O. Box 322, 3300 AH Dordrecht, The Netherlands. Tel : +31 78 6392392 Fax : +31 78 6546474 Email : services at wkap.nl =========================================================== Chris Sharpington (chris at anvil.co.uk) Anvil Software Ltd, 51-53 Rivington Street, London EC2A 3QQ tel +44 171 729 8036 fax +44 171 729 5067 From small at cortex.neurology.pitt.edu Fri May 17 08:32:44 1996 From: small at cortex.neurology.pitt.edu (Steven Small) Date: Fri, 17 May 1996 08:32:44 -0400 Subject: Connectionist Learning - Some New Ideas Message-ID: >Dr. Roy's suggestion that the brain must try "to design the smallest possible >net for every task" because "the brain could not be wasteful of its limited >resources" is unlikely, in my opinion. It seems to me that the brain has >rather an abundance of neurons. On the other hand, finding optimal solutions to >many interesting "real-world" problems is often very hard computationally. I am >not a complexity theorist, but I will hazard to suggest that a constraint on >neural systems to be optimal or near-optimal in their space usage is probably >both impossible to realize and, in fact, unnecessary. > >Wild speculation: the brain may have so many neurons precisely so that it can >afford to be suboptimal in its storage usage in order to avoid computational >time intractability. I agree with this general idea, although I'm not sure that "computational time intractability" is necessarily the principal reason.
There are a lot of good reasons for redundancy, overlap, and space "suboptimality", not the least of which is the marvellous capacity for recovery that the brain manifests after both small injuries and larger ones that give pause even to experienced neurologists. -SLS From Jonathan_Stein at comverse.com Fri May 17 17:42:44 1996 From: Jonathan_Stein at comverse.com (Jonathan_Stein@comverse.com) Date: Fri, 17 May 96 16:42:44 EST Subject: Connectionist Learning - Some New Ideas Message-ID: <9604178323.AA832376960@hub.comverse.com> > >I agree with this general idea, although I'm not sure that "computational >time intractability" is necessarily the principal reason. There are a lot >of good reasons for redundancy, overlap, and space "suboptimality", not the >least of which is the marvellous capacity for recovery that the brain >manifests after both small injuries and larger ones that give pause even to >experienced neurologists. > One needn't draw upon injuries to prove the point. One loses about 100,000 cortical neurons a day (about a percent of the original number every three years) under normal conditions. This loss is apparently not significant for brain function. This has often been called the strongest argument for distributed processing in the brain. Compare this ability with the fact that a single conductor disconnection causes total system failure with high probability in conventional computers. Although this robustness was certainly acknowledged by the pioneers of artificial neural network techniques, very few networks designed and trained by present techniques are anywhere near that robust. Studies carried out on the Hopfield model of associative memory DO show graceful degradation of memory capacity with synapse dilution under certain conditions (see e.g. D. J. Amit's book "Attractor Neural Networks"). Synapse pruning has been applied to trained feedforward networks (e.g. LeCun's "Optimal Brain Damage") but requires retraining of the network. JS From dnoelle at cs.ucsd.edu Thu May 16 14:49:11 1996 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Thu, 16 May 96 11:49:11 -0700 Subject: CogSci96 Extension Message-ID: <9605161849.AA14585@hilbert> ************************************************ ***** EARLY REGISTRATION DEADLINE EXTENDED ***** ************************************************ Eighteenth Annual Conference of the COGNITIVE SCIENCE SOCIETY July 12-15, 1996 University of California, San Diego La Jolla, California SECOND CALL FOR PARTICIPATION The early registration deadline for Cognitive Science '96 has been extended to June 1, 1996. If you register now, you can still get the low early registration rates! (If you have already paid for registration at the higher "late" rates, the difference will be reimbursed to you at the conference.) Also, affordable on-campus housing is still available on a first-come first-served basis. An electronic registration form and the complete conference schedule appear below. Further information may be found on the web at "http://www.cse.ucsd.edu/events/cogsci96/". When scheduling plane flights, note that the conference begins on Friday evening, July 12th, and ends late on Monday afternoon, July 15th. If you want to attend all conference events, you should plan on staying the nights of the 12th through the 15th. Register today!
* PLENARY SESSIONS * "Controversies in Cognitive Science: The Case of Language" -+-+- Stephen Crain (UMD College Park) & Mark Seidenberg (USC) Moderated by Paul Smolensky (Johns Hopkins University) "Tenth Anniversary of the PDP Books" -+-+- "Affect and Neuro-modulators: A Connectionist Account" Dave Rumelhart (Stanford) "Parallel-Distributed Processing Models of Normal and Disordered Cognition" Jay McClelland (CMU) "Why Neural Networks Need Generative Models" Geoff Hinton (Toronto) "Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action" -+-+- Adele Diamond (MIT) "Reconstructing Consciousness" -+-+- Paul Churchland (UCSD) * SYMPOSIA * "Adaptive Behavior and Learning in Complex Environments" "Building a Theory of Problem Solving and Scientific Discovery: How Big is N in N-Space Search?" "Cognitive Linguistics: Mappings in Grammar, Conceptual Systems, and On-Line Meaning Construction" "Computational Models of Development" "Evolution of Language" "Evolution of Mind" "Eye Movements in Cognitive Science" "The Future Of Modularity" "The Role of Rhythm in Cognition" "Update on the Plumbing of Cognition: Brain Imaging Studies of Vision, Attention, and Language" * PAPER PRESENTATION SESSIONS * Analogy Categories, Concepts, and Mutability Cognitive Neuroscience Development Distributed Cognition and Education Lexical Ambiguity and Semantic Representation Perception Perception of Causality Philosophy Problem-Solving and Education Reasoning Recurrent Network Models Rhythm in Cognition Semantics, Phonology, and the Lexicon Skill Learning and SOAR Text Comprehension Visual/Spatial Reasoning REGISTRATION INFORMATION There are three ways to register for the 1996 Cognitive Science Conference: * ONLINE REGISTRATION -- You may fill out and electronically submit the online registration form, which may be found on the conference web page at "http://www.cse.ucsd.edu/events/cogsci96/". This is the preferred method of registration. (You must pay registration fees with a Visa or MasterCard in order to use this option.) * EMAIL REGISTRATION -- You may fill out the plain text (ASCII) registration form, which appears below, and send it via electronic mail to "cogsci96reg at cs.ucsd.edu". (You must pay registration fees with a Visa or MasterCard in order to use this option.) * POSTAL REGISTRATION -- You may download a copy of the PostScript registration form from the conference home page (or extract the plain text version, below), print it on a PostScript printer, fill it out with a pen, and send it via postal mail to: CogSci'96 Conference Registration Cognitive Science Department - 0515 University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0515 (Under this option, you may enclose payment of registration fees in U. S. dollars in the form of a check or money order, or you may pay these fees with a Visa or MasterCard. Please make checks payable to: The Regents of the University of California.) For more information, visit the conference web page at "http://www.cse.ucsd.edu/events/cogsci96". Please direct questions and comments to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776. Edwin Hutchins and Walter Savitch, Conference Chairs John D. Batali, Local Arrangements Chair Garrison W. 
Cottrell, Program Chair ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== Cognitive Science 1996 Registration Form ---------------------------------------- Your Full Name : _____________________________________________________ Your Postal Address : ________________________________________________ (including zip/postal ________________________________________________ code and country) ________________________________________________ ________________________________________________ Your Telephone Number (Voice) : ______________________________________ Your Telephone Number (Fax) : ______________________________________ Your Internet Electronic Mail Address (e.g., dnoelle at cs.ucsd.edu) : ______________________________________________________________________ REGISTRATION FEES : Please select the appropriate registration option from the menu below by placing an "X" in the corresponding blank on the left. Note that the Cognitive Science Society is offering a special deal to individuals who opt to join the Society simultaneously with conference registration. The "New Member" package includes conference fees and first year's membership dues for only $10 more than the nonmember conference cost. Registration fees received after June 1st are $20 higher ($10 higher for students) than fees received before June 1st. Be sure to register early to take advantage of the lower fee rates. _____ Registration, Member -- $120 ($140 after June 1st) _____ Registration, Nonmember -- $145 ($165 after June 1st) _____ Registration, New Member -- $155 ($175 after June 1st) _____ Registration, Student Member -- $85 ($95 after June 1st) _____ Registration, Student Nonmember -- $100 ($110 after June 1st) _____ Registration, New Student Member -- $115 ($125 after June 1st) CONFERENCE BANQUET : Tickets to the conference banquet are *not* included in the registration fees, above. Banquet tickets are $35 per person. (You may bring guests.) Number Of Banquet Tickets Desired ($35 each): _____ _____ Omnivorous _____ Vegetarian CONFERENCE SHIRTS : Conference T-Shirts are *not* included in the registration fees, above. These are $10 each. Number Of T-Shirts Desired ($10 each): _____ UCSD ON-CAMPUS APARTMENTS : There are a limited number of on-campus apartments available for reservation as a 4 night package, from July 12th through July 16th. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). (Checking in a day early is $45 extra for a single room or $36 for a double.) On campus parking is complimentary with this package. Off-campus accommodations in local hotels are also available, but you will need to make reservations by contacting the hotel of interest directly. If you will be staying off-campus, please skip this portion of the registration form. On-campus housing reservations must be received by June 1st, 1996. Please include the cost of on-campus housing in the total conference cost listed at the bottom of this form. 
Select the housing plan desired by placing an "X" in the appropriate blank on the left: _____ UCSD Housing and Meal Plan (Single Room) -- $227 per person _____ UCSD Housing and Meal Plan (Double Room) -- $191 per person Arrival Date And Time : ____________________________________________ Departure Date And Time : ____________________________________________ If you reserved a double room above, please indicate your roommate preference below: _____ Please assign a roommate to me. I am _____ female _____ male. _____ I will be sharing this room with a guest who is not registered for the conference. I will include $382 ($191 times 2) in the total conference cost listed at the bottom of this form. _____ I will be sharing this room with another conference attendee. I will include $191 in the total conference cost listed at the bottom of this form. My roommate will submit her housing fee along with her registration form. My roommate's full name is: ______________________________________________________________ ASL TRANSLATION : American Sign Language (ASL) translators will be available for a number of conference events. The number of translated events will be, in part, a function of the number of participants in need of this service. Please indicate below if you will require ASL translation of conference talks. _____ I will require ASL translation. Comments To The Registration Staff : ______________________________________________________________________ ______________________________________________________________________ ______________________________________________________________________ Please sum your conference registration fees, the cost of banquet tickets and t-shirts, and on-campus housing costs, and place the total below. To register by electronic mail, payment must be by Visa or MasterCard only. TOTAL : _$____________ Bill to: _____ Visa _____ MasterCard Number : ___________________________________________ Expiration Date: ___________________________________ Registration fees (including on-campus housing costs) will be fully refunded if cancellation is requested prior to May 1st. If registration is cancelled between May 1st and June 1st, 20% of paid fees will be retained by the Society to cover processing costs. No refunds will be granted after June 1st. When complete, send this form via email to "cogsci96reg at cs.ucsd.edu". Please direct questions to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776. ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== =*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= TENTATIVE SCHEDULE OF EVENTS The conference check-in/registration desk will be located at the UCSD Price Center at the times listed in the schedule below. On-site registration, conference packets, and names tags will be available there. FRIDAY EVENING, JULY 12, 2:00 P.M. - 9:00 P.M. REGISTRATION (PRICE CENTER THEATER LOBBY) FRIDAY EVENING, JULY 12, 7:00 P.M. - 8:30 P.M. PLENARY SESSION "Controversies In Cognitive Science: The Case Of Language" Stephen Crain (UMD College Park) & Mark Seidenberg (USC) Moderated by Paul Smolensky (Johns Hopkins University) FRIDAY EVENING, JULY 12, 8:30 P.M. WELCOMING RECEPTION SATURDAY MORNING, JULY 13, 7:30 A.M. - 5:00 P.M. REGISTRATION (PRICE CENTER BALLROOM LOBBY) SATURDAY MORNING, JULY 13, 8:30 A.M. - 10:00 A.M. 
SUBMITTED SYMPOSIUM "Building A Theory Of Problem Solving And Scientific Discovery: How Big Is N In N-Space Search?" Bruce Burns (Organizer) Lisa Baker & Kevin Dunbar Bruce Burns & Regina Vollmeyer Chris Schunn & David Klahr David F. Wolf II & Jonathan R. Beskin SUBMITTED SYMPOSIUM "The Role Of Rhythm In Cognition" Devin McAuley (Organizer) Mari Jones Bill Baird Robert Port Elliot Saltzman PAPER PRESENTATIONS - PHILOSOPHY "Beyond Computationalism" Giunti, Marco "Qualia: The Hard Problem" Griffith, Todd W. ; Byrne, Michael "Connectionism, Systematicity, And Nomic Necessity" Hadley, Robert F. "Fodor On Information And Computation" Brook, Andrew ; Stainton, Robert SATURDAY MORNING, JULY 13, 10:30 A.M. - 12:20 P.M. SUBMITTED SYMPOSIUM "The Future Of Modularity" Michael Spivey-Knowlton (Organizer) Kathleen Eberhard (Organizer) Michael Tanenhaus (Organizer) James McClelland Peter Lennie Robert Jacobs Kenneth Forster Dominic Massaro Gary Dell PAPER PRESENTATIONS - TEXT COMPREHENSION "Integrating World Knowledge With Cognitive Parsing" Paredes-Frigolett, Harold ; Strube, Gerhard "The Role Of Ontology In Creative Understanding" Moorman, Kenneth ; Ram, Ashwin "Working Memory In Text Comprehension: Interrupting Difficult Text" McNamara, Danielle ; Kintsch, Walter "Reasoning From Multiple Texts: An Automatic Analysis Of Readers' Situation Models" Foltz, Peter ; Britt, M. Anne ; Perfetti, Charles "Lexical Limits On The Influence Of Context" Verspoor, Cornelia PAPER PRESENTATIONS - REASONING "Dynamics Of Rule Induction By Making Queries: Transition Between Strategies" Ginzburg, Iris ; Sejnowksi, Terry "The Impact Of Information Representation On Bayesian Reasoning" Hoffrage, Ulrich ; Gigerenzer, Gerd "On Reasoning With Default Rules And Exceptions" Elio, Renee ; Pelletier, Francis "Satisficing Inference And The Perks Of Ignorance" Goldstein, Daniel G. ; Gigerenzer, Gerd "A Connectionist Treatment Of Negation And Inconsistency" Shastri, Lokendra ; Grannes, Dean SATURDAY, JULY 13, 12:20 P.M. - 2:00 P.M. LUNCH & POSTER PREVIEW SATURDAY, JULY 13, 2:00 P.M. - 3:30 P.M. INVITED SYMPOSIUM "Update On The Plumbing Of Cognition: Imaging Studies Of Vision, Attention, And Language" Helen Neville (Organizer) Marty Sereno Steven Hillyard PAPER PRESENTATIONS - DISTRIBUTED COGNITION AND EDUCATION "Hearing With Eyes: A Distributed Cognition Perspective On Guitar Song Imitation" Flor, Nick V. ; Holder, Barbara "Constraints On The Experimental Design Process In Real-World Science" Baker, Lisa M. ; Dunbar, Kevin "Teaching/Learning Events In The Workplace: A Comparative Analysis Of Their Organizational And Interactional Structure" Hall, Rogers ; Stevens, Reed "Distributed Reasoning: An Analysis Of Where Social And Cognitive Worlds Fuse" Dama, Mike ; Dunbar, Kevin PAPER PRESENTATIONS - DEVELOPMENT I "Reading And Learning To Classify Letters" Martin, Gale "Where Defaults Don't Help: The Case Of The German Plural System" Nakisa, Ramin Charles ; Hahn, Ulrike "Selective Attention In The Acquisition Of The Past Tense" Jackson, Dan ; Constandse, Rodger ; Cottrell, Garrison "Word Learning And Verbal Short-Term Memory: A Computational Account" Gupta, Prahlad SATURDAY, JULY 13, 4:00 P.M. - 5:30 P.M. PLENARY SESSION "Tenth Anniversary Of The PDP Books" "Affect and Neuro-modulators: A Connectionist Account" Dave Rumelhart (Stanford) "Parallel-Distributed Processing Models Of Normal And Disordered Cognition" Jay McClelland (CMU) "Why Neural Networks Need Generative Models" Geoff Hinton (Toronto) SATURDAY, JULY 13, 5:30 P.M. - 7:30 P.M. 
POSTER SESSION & RECEPTION SATURDAY, JULY 13, 9:00 P.M. - 1:00 A.M. BLUES PARTY SUNDAY, JULY 14, 7:30 A.M. - 5:00 P.M. REGISTRATION (PRICE CENTER BALLROOM LOBBY) SUNDAY, JULY 14, 8:30 A.M. - 10:00 A.M. SUBMITTED SYMPOSIUM "Evolution Of Mind" Denise Dellarosa Cummins (Organizer) John Tooby Colin Allen PAPER PRESENTATIONS - VISUAL/SPATIAL REASONING "Spatial Cognition In The Mind And In The World - The Case Of Hypermedia Navigation" Dahlback, Nils ; Hook, Kristina ; Sjolinder, Marie "Individual Differences In Proof Structures Following Multimodal Logic Teaching" Oberlander, Jon ; Cox, Richard ; Monaghan, Padraic ; Stenning, Keith ; Tobin, Richard "Functional Roles For The Cognitive Analysis Of Diagrams In Problem Solving" Cheng, Peter C-H. "A Study Of Visual Reasoning In Medical Diagnosis" Rogers, E. PAPER PRESENTATIONS - SEMANTICS, PHONOLOGY, AND THE LEXICON "The Interaction Of Semantic And Phonological Processing" Tyler, Lorraine K. ; Voice, J. Kate ; Moss, Helen E. "The Combinatorial Lexicon: Affixes As Processing Structures" Marslen-Wilson, William ; Ford, Mike ; Older, Lianne ; Zhou, Xiaolin "Lexical Ambiguity And Context Effects In Spoken Word Recognition: Evidence From Chinese" Li, Ping ; Yip, C. W. "Phonological Reduction, Assimilation, Intra-Word Information Structure, And The Evolution Of The Lexicon Of English" Shillcock, Richard ; Hicks, John ; Cairns, Paul ; Chater, Nick ; Levy, Joseph SUNDAY, JULY 14, 10:30 A.M. - 12:20 P.M. INVITED SYMPOSIUM "Adaptive Behavior and Learning in Complex Environments" Maja Mataric (Organizer) Simon Giszter Andrew Moore Sebastian Thrun PAPER PRESENTATIONS - PERCEPTION "Color Influences Fast Scene Categorization" Oliva, Aude ; Schyns, Philippe "Categorical Perception Of Novel Dimensions" Goldstone, Robert L. ; Steyvers, Mark ; Larimer, Ken "Categorical Perception In Facial Emotion Classification" Padgett, Curtis ; Cottrell, Garrison "MetriCat: A Representation For Basic And Subordinate-Level Classification" Stankiewicz, Brian J. ; Hummel, John E. "Similarity To Reference Shapes As A Basis For Shape Representation" Edelman, Shimon ; Cutzu, Florin ; Duvdevani-Bar, Sharon PAPER PRESENTATIONS - LEXICAL AMBIGUITY AND SEMANTIC REPRESENTATION "Integrating Discourse And Local Constraints In Resolving Lexical Thematic Ambiguities" Hanna, Joy E. ; Spivey-Knowlton, Michael ; Tanenhaus, Michael "Evidence For A Tagging Model Of Human Lexical Category Disambiguation" Corley, Steffan ; Crocker, Matt "The Importance Of Automatic Semantic Relatedness Priming For Distributed Models Of Word Meaning" McRae, Ken ; Boisvert, Stephen "Parallel Activation Of Distributed Concepts: Who Put The P In The PDP?" Gaskell, M. Gareth "Discrete Multi-Dimensional Scaling" Clouse, Daniel ; Cottrell, Garrison SUNDAY, JULY 14, 12:20 P.M. - 2:00 P.M. LUNCH & SOCIETY BUSINESS MEETING SUNDAY, JULY 14, 2:00 P.M. - 3:30 P.M. INVITED SYMPOSIUM "Cognitive Linguistics: Mappings in Conceptual Systems, Grammar, and Meaning Construction" Gilles Fauconnier (Organizer) George Lakoff Ron Langacker PAPER PRESENTATIONS - PROBLEM-SOLVING AND EDUCATION "Collaboration In Primary Science Classroom: Learning About Evaporation" Scanlon, Eileen ; Murphy, Patricia ; Issroff, Kim ; Hodgson, Barbara ; Whitelegg, Elizabeth "Transferring And Modifying Terms In Equations" Catrambone, Richard "Understanding Constraint-Based Processes: A Precursor To Conceptual Change In Physics" Slotta, James ; Chi, T. H. Michelene "The Role Of Generic Modeling In Conceptual Change" Griffith, Todd W. 
; Nersessian, Nancy ; Goel, Ashok PAPER PRESENTATIONS - RECURRENT NETWORK MODELS "Using Orthographic Neighborhoods Of Interlexical Nonwords To Support An Interactive-Activation Model Of Bilingual Memory" French, Robert M. ; Ohnesorge, Clark "Conscious And Unconscious Perception: A Computational Theory" Mathis, Donald ; Mozer, Michael "In Search of Articulated Attractors" Noelle, David ; Cottrell, Garrison "A Recurrent Network That Performs A Context-Sensitive Prediction Task" Steijvers, Mark ; Grunwald, Peter SUNDAY, JULY 14, 4:00 P.M. - 5:30 P.M. PLENARY SESSION "Frontal Lobe Development And Dysfunction In Children: Dissociations Between Intention And Action" Adele Diamond (MIT) SUNDAY, JULY 14, 6:00 P.M. - 9:00 P.M. CONFERENCE BANQUET MONDAY, JULY 15, 8:30 A.M. - 10:00 A.M. SUBMITTED SYMPOSIUM "Eye Movements In Cognitive Science" Patrick Suppes (Organizer) Julie Epelboim (Organizer) Eileen Kowler Mary Hayhoe Greg Zelinsky PAPER PRESENTATIONS - ANALOGY "Competition In Analogical Transfer: When Does A Lightbulb Outshine An Army?" Francis, Wendy ; Wickens, Thomas "Can A Real Distinction Be Made Between Cognitive Theories Of Analogy And Categorisation" Ramscar, Michael ; Paint, Helen "LISA: A Computational Model Of Analogical Inference And Schema Induction" Hummel, John E. ; Holyoak, Keith J. "Alignability And Attribute Important In Choice" Lindermann, Patricia ; Markman, Arthur PAPER PRESENTATIONS - DEVELOPMENT II "A Computational Model Of Two Types Of Developmental Dyslexia" Harm, Michael ; Seidenberg, Mark "Integrating Multiple Cues In Word Segmentation: A Connectionist Model Using Hints" Allen, Joe ; Christiansen, Morten "Statistical Cues In Language Acquisition: Word Segmentation By Infants" Saffran, Jenny R. ; Aslin, Richard N. ; Newport, Elissa "Perceptual Laws And The Statistics Of Natural Signals" Movellan, Javier ; Chadderdon, George MONDAY, JULY 15, 10:30 A.M. - 12:20 P.M. SUBMITTED SYMPOSIUM "Computational Models Of Development" Kim Plunkett (Organizer) Tom Shultz (Organizer) Jeff Elman Charles Ling Denis Mareschal Liz Bates (Discussant) Jeff Shrager (Discussant) PAPER PRESENTATIONS - SKILL LEARNING AND SOAR "An Abstract Computational Model Of Learning Selective Sensing Skills" Langley, Pat "Epistemic Action Increases With Skill" Maglio, Paul ; Kirsh, David "Perseverative Subgoaling And Production System Models Of Problem Solving" Cooper, Richard "Probabilistic Plan Recognition For Cognitive Apprenticeship" Conati, Cristina ; VanLehn, Kurt "Do Users Interact With Computers The Way Our Models Say They Should?" Vera, Alonso H. ; Lewis, Richard PAPER PRESENTATIONS - RHYTHM IN COGNITION "Rhythmic Commonalities Between Hand Gestures And Speech" Cummins, Fred ; Port, Robert "Modeling Beat Perception With A Nonlinear Oscillator" Large, Edward W. PAPER PRESENTATIONS - COGNITIVE NEUROSCIENCE "Emotional Decisions" Barnes, Allison ; Thagard, Paul "Self-Organization And Functional Role Of Lateral Connections And Multisize Receptive Fields In The Primary Visual Cortex" Sirosh, Joseph, Miikkulainen, Risto "Synaptic Maintenance Through Neuronal Homeostasis: A Function Of Dream Sleep" Horn, David ; Levy, Nir ; Ruppin, Eytan MONDAY, JULY 15, 12:20 P.M. - 2:00 P.M. LUNCH MONDAY, JULY 15, 2:00 P.M. - 3:30 P.M. INVITED SYMPOSIUM "Evolution Of Language" John Batali (Organizer) David Ackley (Organizer) Domenico Parisi (not confirmed) PAPER PRESENTATIONS - PERCEPTIONS OF CAUSALITY "The Perception Of Causality: Feature Binding In Interacting Objects" Kruschke, John K. 
; Fragasi, Michael "Judging The Contingency Of A Constant Cue: Contrasting Predictions From An Associative And A Statistical Model" Vallee-Tourangeau, F. ; Murphy, Robin ; Baker, A. G. "What Language Might Tell Us About The Perception Of Cause" Wolff, Phillip PAPER PRESENTATIONS - CATEGORIES, CONCEPTS, AND MUTABILITY "Mutability, Conceptual Tranformation, And Context" Love, Bradley C. "On Putting Milk In Coffee: The Effect Of Thematic Relations On Similarity Judgments" Wisniewski, Edward ; Bassok, Mariam "The Role Of Situations In Concept Learning" Yeh, Wenchi ; Barsalou, Lawrence "Modeling Interference Effects In Instructed Category Learning" Noelle, David ; Cottrell, Garrison MONDAY, JULY 15, 4:00 P.M. - 5:30 P.M. PLENARY SESSION "Reconstructing Consciousness" Paul Churchland (UCSD) =*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= From rao at cs.rochester.edu Sat May 18 14:14:13 1996 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 18 May 1996 14:14:13 -0400 Subject: Connectionist Learning - Some New Ideas In-Reply-To: <9604178323.AA832376960@hub.comverse.com> (Jonathan_Stein@comverse.com) Message-ID: <199605181814.OAA09391@skunk.cs.rochester.edu> >One loses about 100,000 cortical neurons a day (about a percent of >the original number every three years) under normal conditions. Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers. Thanks, Raj -- Raj Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ From rfl77551 at pegasus.cc.ucf.edu Sun May 19 13:56:44 1996 From: rfl77551 at pegasus.cc.ucf.edu (Richard F Long) Date: Sun, 19 May 1996 13:56:44 -0400 (EDT) Subject: Connectionist Learning - Some New Ideas In-Reply-To: <199605161926.OAA27944@mozzarella.cs.wisc.edu> Message-ID: There may be another reason for the brain to construct networks that are 'minimal' having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this. Richard Long rfl77551 at pegasus.cc.ucf.edu General Research and Device Corp. 
Oviedo, FL & University of Central Florida From maja at garnet.cs.brandeis.edu Sun May 19 18:56:40 1996 From: maja at garnet.cs.brandeis.edu (Maja Mataric) Date: Sun, 19 May 1996 18:56:40 -0400 Subject: CALL for PAPERS Message-ID: <199605192256.SAA03201@garnet.cs.brandeis.edu> CALL FOR PAPERS (http://www.cs.brandeis.edu:80/~maja/abj-special-issue/) ADAPTIVE BEHAVIOR Journal Special Issue on COMPLETE AGENT LEARNING IN COMPLEX ENVIRONMENTS Guest editor: Maja J Mataric Submission Deadline: June 1, 1996. Adaptive Behavior is an international journal published by MIT Press; Editor-in-Chief: Jean-Arcady Meyer, Ecole Normale Superieure, Paris. In the last decade, the problems being treated in AI, Alife, and Robotics have witnessed an increase in complexity as the domains under investigation have transitioned from theoretically clean scenarios to more complex dynamic environments. Agents that must adapt in environments such as the physical world, an active ecology or economy, and the World Wide Web, challenge traditional assumptions and approaches to learning. As a consequence, novel methods for automated adaptation, action selection, and new behavior acquisition have become the focus of much research in the field. This special issue of Adaptive Behavior will focus on situated agent learning in challenging environments that feature noise, uncertainty, and complex dynamics. We are soliciting papers describing finished work on autonomous learning and adaptation during the lifetime of a complete agent situated in a dynamic environment. We encourage submissions that address several of the following topics within a whole agent learning system: * learning from ambiguous perceptual inputs * learning with noisy/uncertain action/motor outputs * learning from sparse, irregular, inconsistent, and noisy reinforcement/feedback * learning in real time * combining built-in and learned knowledge * learning in complex environments requiring generalization in state representation * learning from incremental and delayed feedback * learning in smoothly or discontinuously changing environments We invite submissions from all areas in AI, Alife, and Robotics that treat either complete synthetic systems or models of biological adaptive systems situated in complex environments. Submitted papers should be delivered by June 1, 1996. Authors intending to submit a manuscript should contact the guest editor to discuss paper suitability for this issue. Use maja at cs.brandeis.edu or tel: (617) 736-2708 or fax: (617) 736-2741. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Both paper and electronic submission are possible, as described below. Copies of the complete Adaptive Behavior Instructions to Contributors are available on request--also see the Adaptive Behavior journal's home page at: http://www.ens.fr:80/bioinfo/www/francais/AB.html. For paper submissions, send five (5) copies of submitted papers (hard-copy only) to: Maja Mataric Volen Center for Complex Systems Computer Science Department Brandeis University Waltham, MA 02254-9110, USA For electronic submissions, use Postscript format, ftp the file to ftp.cs.brandeis.edu/incoming, and send an email notification to maja at cs.brandeis.edu. 
For a Web page of this call, and detailed ftp directions, see: http://www.cs.brandeis.edu/~maja/abj-special-issue/ From mark at cdu.ucl.ac.uk Mon May 20 11:24:26 1996 From: mark at cdu.ucl.ac.uk (Mark Johnson) Date: Mon, 20 May 96 11:24:26 BST Subject: Connectionist Learning - Some New Ideas Message-ID: >One loses about 100,000 cortical neurons a day (about a percent of >the original number every three years) under normal conditions. Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers. Thanks, Raj ================= From carl at cs.toronto.edu Tue May 21 14:11:35 1996 From: carl at cs.toronto.edu (Carl Edward Rasmussen) Date: Tue, 21 May 1996 14:11:35 -0400 Subject: DELVE Message-ID: <96May21.141136edt.1240@neuron.ai.toronto.edu> Announcing the release of DELVE DELVE --- Data for Evaluating Learning in Valid Experiments DELVE contains a collection of datasets for use in evaluating the predictive performance of empirical learning methods, such as linear models, neural networks, smoothing splines, decision trees, and many other regression and classification procedures. DELVE also includes software that facilitates using this data to assess learning methods in a statistically valid way. Ultimately, DELVE will include results of applying many methods to many tasks, making comparisons between methods much easier than in the past. A preliminary version of DELVE is now freely available on the web at URL http://www.cs.utoronto.ca/~delve. From this web site, you can get to the manual, the software for the DELVE environment, the DELVE datasets, and precise definitions, source code, and results for various learning methods. Contributions of data and methods from other researchers will be added to the web site in future. DELVE was created at the University of Toronto by C. E. Rasmussen, R. M. Neal, G. E. Hinton, D. van Camp, M. Revow, Z. Ghahramani, R. Kustra and R. Tibshirani. -- Carl Edward Rasmussen, Dept of Computer Science, University of Toronto, Toronto, ONTARIO, Canada, M5S 1A4. Email: carl at cs.toronto.edu Phone: +1 (416) 978 7391 Home: +1 (416) 531 5685 FAX: +1 (416) 978 1455 web: http://www.cs.toronto.edu/~carl From moriarty at AIC.NRL.Navy.Mil Mon May 20 10:29:23 1996 From: moriarty at AIC.NRL.Navy.Mil (moriarty@AIC.NRL.Navy.Mil) Date: Mon, 20 May 96 10:29:23 EDT Subject: Papers Available: Neuro-Evolution in Robotics Message-ID: <9605201429.AA16331@sun27.aic.nrl.navy.mil> The following two papers on applying neuro-evolution to robot arm control are available from our WWW page: http://www.cs.utexas.edu/users/nn/ Source code for the SANE system is also available from the WWW site. ------------------------------------------------------------------------ Evolving Obstacle Avoidance Behavior in a Robot Arm David E. Moriarty and Risto Miikkulainen To appear in From Animals to Animats: The Fourth International Conference on Simulation of Adaptive Behavior (SAB96), Cape Cod, MA, 1996. 8 pages Abstract: Existing approaches for learning to control a robot arm rely on supervised methods where correct behavior is explicitly given. It is difficult to learn to avoid obstacles using such methods, however, because examples of obstacle avoidance behavior are hard to generate.
This paper presents an alternative approach that evolves neural network controllers through genetic algorithms. No input/output examples are necessary, since neuro-evolution learns from a single performance measurement over the entire task of grasping an object. The approach is tested in a simulation of the OSCAR-6 robot arm which receives both visual and sensory input. Neural networks evolved to effectively avoid obstacles at various locations to reach random target locations. ------------------------------------------------------------------------ Hierarchical Evolution of Neural Networks David E. Moriarty and Risto Miikkulainen Technical Report #AI96-242, Department of Computer Sciences, The University of Texas at Austin. 16 pages Abstract: In most applications of neuro-evolution, each individual in the population represents a complete neural network. Recent work on the SANE system, however, has demonstrated that evolving individual neurons often produces a more efficient genetic search. This paper explores the merits of neuro-evolution both at the neuron level and at the network level. While SANE can solve easy tasks in just a few generations, in tasks that require high precision, its progress often stalls and is exceeded by a standard, network-level evolution. In this paper, a new approach called Hierarchical SANE is presented that combines the advantages of both approaches by integrating two levels of evolution in a single framework. Hierarchical SANE couples the early explorative quality of SANE's neuron-level search with the late exploitative quality of a more standard network-level evolution. In a sophisticated robot arm manipulation task, Hierarchical SANE significantly outperformed both SANE and a standard, network-level neuro-evolution approach, suggesting that it can more efficiently solve a broad range of tasks. ------------------------------------------------------------------------ Dave Moriarty Artificial Intelligence Laboratory Department of Computer Sciences The University of Texas at Austin moriarty at cs.utexas.edu http://www.cs.utexas.edu/users/moriarty http://www.cs.utexas.edu/users/nn From karaali at ukraine.corp.mot.com Mon May 20 10:44:26 1996 From: karaali at ukraine.corp.mot.com (Orhan Karaali) Date: Mon, 20 May 1996 09:44:26 -0500 Subject: Linguist with neural net background Message-ID: <199605201444.JAA05484@fiji.mot.com> Motorola Chicago, IL COMPUTATIONAL LINGUIST FOR TEXT-TO-SPEECH SYNTHESIS Motorola's Chicago Corporate Research Laboratories is currently seeking a computational linguist to join the Speech Synthesis Group in its Speech Processing Systems Research Laboratory in Schaumburg, Illinois. The Speech Synthesis Group of Motorola's Speech Processing Laboratory has developed a world-class multi-language text-to-speech synthesizer. This synthesizer is based on innovative neural network and signal processing technologies and produces more natural sounding speech than traditional speech synthesis methods. The successful candidate will work on the components of a text-to-speech system that convert text into a phonetic representation, including part of speech tagging, word sense disambiguation and parsing for prosody. The duties of the position include applied research, software development, data collection, and transfer of developed technologies to product groups. Innovation in research, application of technology and a high level of motivation is the standard for all members of the team. The individual should possess a Ph.D. 
in the area of computational linguistics with a minimum of two years' work experience developing spoken language systems. Strong programming skills in C or C++ are required. Knowledge of neural networks, decision trees, genetic algorithms, and statistical techniques is highly desirable. Please send resume and cover letter by June 15, 1996 to be considered for this position to Motorola Inc., Corporate Staffing Department, Attn: LP-T1521, 1303 E. Algonquin Rd., Schaumburg, IL 60196. Fax: 847-576-4959. Motorola is an equal opportunity/affirmative action employer. We welcome and encourage diversity in our workforce. From gds at sys.uea.ac.uk Tue May 21 12:50:33 1996 From: gds at sys.uea.ac.uk (George Smith) Date: Tue, 21 May 1996 17:50:33 +0100 (BST) Subject: ICANNGA97 Message-ID: ICANNGA97 _________ Third International Conference on Artificial Neural Networks and Genetic Algorithms Preceded by a one-day Introductory Workshop Tuesday 1st - Friday 4th April, 1997 Norwich, England, UK CALL FOR PAPERS AND INVITATION TO PARTICIPATE Conference Theme: _________________ The main theme of the ICANNGA series is the development and application of software paradigms based on natural processes, principally artificial neural networks, genetic algorithms and hybrids thereof. However, the scope of the conference extends to cover many related topics including fuzzy logic, genetic programming and other evolutionary computation systems, classifier systems and adaptive agent systems, distributed intelligence and artificial life, generic optimisation heuristics including simulated annealing and tabu search, and many more. Following the successes of ICANNGA93 (Innsbruck, Austria) and ICANNGA95 (Ales, France), the third meeting of this interdisciplinary conference will be held at the University of East Anglia in the picturesque, medieval city of Norwich, England. The ICANNGA series has quickly established itself as a platform, not only for established workers in the fields, but also for new and young researchers wishing to extend their knowledge and experience. The conference will be preceded by a one-day workshop during which introductory sessions on a range of relevant topics will be held. There will be ample opportunity to gain practical experience in the techniques pertaining to the workshop and conference. The conference is hosted by the University of East Anglia, which is a campus university in a parkland setting, offering first-class conference facilities including award-winning en-suite accommodation and lecture theatres. The conference will include invited talks and contributed oral and poster presentations. It is expected that the ICANNGA97 Proceedings will be printed by Springer-Verlag (Vienna), following the tradition set by its predecessors. International Advisory Committee ________________________________ Prof. R. Albrecht, University of Innsbruck, Austria Dr. D. Pearson, Ecole des Mines d'Ales, France Prof. N. Steele, Coventry University, England (Chair) Dr. G. D.
Smith, University of East Anglia, England Programme Committee ___________________ Thomas Baeck, Informatik Centrum, Dortmund, Germany Wilfried Brauer, TU Munchen, Germany Marco Dorigo, Universite Libre de Bruxelles, Belgium Terry Fogarty, University of the West of England, Bristol, UK Jelena Godjevac, EPFL Laboratories, Lausanne, Switzerland Michael Heiss, Neural Network Group, Siemens AG, Austria Tom Harris, Brunel University, London, UK Anne Johannet, EMA-EERIE, Nîmes, France Helen Karatza, Aristotle University of Thessaloniki, Greece Sami Kuri, San Jose State University, USA Pedro Larranaga, University Basque Country, San Sebastian, Spain Francesco Masulli, University of Genoa, Italy Josef Mazanec, WU Wien, Austria Janine Magnier, EMA-EERIE, Nîmes, France Franz Oppacher, Carleton University, Ottawa, Canada Ian Parmee, University of Plymouth, UK David Pearson, EMA-EERIE, Nîmes, France Vic Rayward-Smith, University of East Anglia, Norwich, UK Colin Reeves, Coventry University, Coventry, UK Bernardete Ribeiro, Universidade de Coimbra, Portugal Valentina Salapura, TU-Wien, Austria V. David Sanchez A., University of Miami, Florida, USA Henrik Saxén, Åbo Akademi, Finland George D. Smith, University of East Anglia, Norwich, UK Nigel Steele, Coventry University, Coventry, UK Kevin Warwick, Reading University, Reading, UK Darrell Whitley, Colorado State University, USA Diethelm Wurtz, Swiss Federal Inst. of Technology, Zurich, Switzerland Organising Committee ____________________ Dr. G. D. Smith, University of East Anglia, England Nigel Steele, Coventry University, Coventry Prof. Vic Rayward-Smith, University of East Anglia, Norwich Submission Instructions _______________________ Contributions are sought in the following topic areas (the list is not exhaustive): - Theoretical and Computational Aspects of Artificial Neural Networks: including computational learning, approximation theory, novel paradigms and training methods, dynamical systems, hardware implementation - Practical Applications of Artificial Neural Networks: including pattern recognition, speech and signal processing, visual processing, time series prediction, medical and other diagnostic systems, fault and anomaly detection, financial applications, data compression, datamining, machine learning - Theoretical and Computational Aspects of Genetic Algorithms: including schema theory developments, Markov models, convergence analysis, no free lunch theorem, computational analysis, novel sequential and parallel GA systems - Practical Applications of Genetic Algorithms: including function and combinatorial optimisation, machine learning, classifier and agent systems, datamining, real-world industrial and commercial applications - Hybrid and related topics: including genetic programming, evolutionary programming and evolution strategies, fuzzy logic and control, neuro-fuzzy systems, simulated annealing and tabu search, hybrid search algorithms, hybrid ANN/GA systems Authors should submit an extended abstract of around 1500-2000 words, or a full paper, of their proposed contribution before 31st August 1996. Abstracts and papers must be in English and must contain a concise description of the problem, the results achieved, their relevance and a comparison with previous work. The abstract/paper should also contain the following details: Title Authors' names and affiliations Name, address and email address of contact author Keywords Three typed/printed copies should be sent to the following address: Dr George D.
Smith School of Information Systems University of East Anglia Norwich, Norfolk, NR4 7TJ UK Alternatively, abstracts may be sent by email to either: gds at sys.uea.ac.uk or rs at sys.uea.ac.uk Notification of acceptance of the paper for presentation will be made by November 30th 1996. Papers accepted for both oral and poster presentations will be published in the Conference Proceedings. Pre-Conference Workshop _______________________ It is intended to hold a workshop on April 1st, 1997, prior to the Conference. This workshop is intended for those who are new to the topics and wish to gain a better understanding of the fundamental aspects of neural networks and genetic algorithms. The format of this workshop will be as follows: Theoretical issues of ANNs Key Issues in the application of ANNs Introduction to GAs and other heuristic search algorithms Key Issues in the application of GAs and related heuristics The second and fourth topics are backed up with laboratory sessions in which participants will have the opportunity to use some of the latest software toolkits supporting the respective technologies. Dates to remember: __________________ First Announcement & CFP: April/May 1996 Submission of Abstracts/Papers: August 31st 1996 Notification of Acceptance: November 30th 1996 Delivery of full paper: January 30th 1997 Pre-Conference Workshop: April 1st 1997 ICANNGA97: April 2nd-4th 1997 Further Information: ____________________ For more information on ICANNGA97, regularly updated, visit the WWW site at: http://www.sys.uea.ac.uk/Research/ResGroups/MAG/ICANNGA97/Default.html This web page also contains a pre-registration form. Pre-Registration form: ______________________ Please enter your details below to receive further information about ICANNGA97 and a full registration form. First name: ______________________________________ Family name: ______________________________________ Affiliation: ______________________________________ Address: ______________________________________ City: ______________________________________ State/Province/County: ______________________________________ ZIP/Postal Code: ______________________________________ Country: ______________________________________ Daytime telephone number: ______________________________________ Email address: ______________________________________ _________________________ _________________________ _________________________ Dr. George D Smith Computing Science Sector School of Information Systems University of East Anglia Norwich NR4 7TJ, UK Tel: + 44 (0)1603 593260 FAX: + 44 (0)1603 503344 Email: gds at sys.uea.ac.uk www: http://www.sys.uea.ac.uk/Teaching/Staff/gds.html From juergen at idsia.ch Mon May 20 03:10:26 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Mon, 20 May 96 09:10:26 +0200 Subject: R Message-ID: <9605200710.AA13354@fava.idsia.ch> Richard Long writes: There may be another reason for the brain to construct networks that are 'minimal' having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. 
Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this. Some comments: Apparently, Kolmogorov was the first to show the impossibility of finding the minimal algorithm in the general case (but Solomonoff also mentions it in his early work). The reason is the halting problem, of course - you don't know the runtime of the minimal algorithm. For all practical applications, runtime has to be taken into account. Interestingly, there is an ``optimal'' way of doing this, namely Levin's universal search algorithm, which tests solution candidates in order of their Levin complexities: L. A. Levin. Universal sequential search problems, Problems of Information Transmission 9:3,265-266,1973. For finding Occam's razor neural networks with minimal Levin complexity, see J. Schmidhuber: Discovering solutions with low Kolmogorov complexity and high generalization capability. In A.Prieditis and S.Russell, editors, Machine Learning: Proceedings of the 12th International Conference, 488--496. Morgan Kaufmann Publishers, San Francisco, CA, 1995. For Occam's razor solutions of non-Markovian reinforcement learning tasks, see M. Wiering and J. Schmidhuber: Solving POMDPs using Levin search and EIRA. In Machine Learning: Proceedings of the 13th International Conference. Morgan Kaufmann Publishers, San Francisco, CA, 1996, to appear. --- Juergen Schmidhuber, IDSIA http://www.idsia.ch/~juergen From smlamb at owlnet.rice.edu Mon May 20 11:35:50 1996 From: smlamb at owlnet.rice.edu (Sydney M Lamb) Date: Mon, 20 May 1996 10:35:50 -0500 (CDT) Subject: Connectionist Learning - Some New Ideas In-Reply-To: <9604178323.AA832376960@hub.comverse.com> Message-ID: On Fri, 17 May 1996 Jonathan_Stein at comverse.com wrote: > > One needn't draw upon injuries to prove the point. One loses about 100,000 > cortical neurons a day (about a percent of the original number every three > years) under normal conditions. This loss is apparently not significant > for brain function. This has been often called the strongest argument for > distributed processing in the brain. Compare this ability with the fact that > single conductor disconnection cause total system failure with high > probability in conventional computers. > > Although certainly acknowledged by the pioneers of artificial neural > network techniques, very few networks designed and trained by present > techniques are anywhere near that robust. Studies carried out on the > Hopfield model of associative memory DO show graceful degradation of > memory capacity with synapse dilution under certain conditions (see eg. > DJ Amit's book "Attractor Neural Networks"). Synapse pruning has been > applied to trained feedforward networks (eg. LeCun's "Optimal Brain Damage") > but requires retraining of the network. > > JS > There seems to be some differing information coming from different sources. The way I heard it, the typical person has lost only about 3% of the original total of cortical neurons after about 70 or 80 years. As for the argument about distributed processing, two comments: (1) there are different kinds of distributive processing; one of them also uses strict localization of points of convergence for distributed subnetworks of information (cf. A. Damasio 1989 --- several papers that year). 
(2) If the brain is like other biological systems, the neurons being lost are probably mostly the ones not being used --- ones that have remained latent and available to assume some function, but never called upon. Hence what you get with old age is not so much loss of information as loss of ability to learn new things --- varying in amount, of course, from one individual to the next. Syd Lamb Linguistics and Cognitive Science Rice University From cabestan at petrus.upc.es Wed May 22 17:01:52 1996 From: cabestan at petrus.upc.es (JOAN CABESTANY) Date: Wed, 22 May 1996 17:01:52 UTC+0200 Subject: IWANN'97 preliminary announce Message-ID: <01BB4800.52349C00@maripili.upc.es> This message has been sent to several distribution lists. I apologize for its multiple reception. Thank you. Preliminary Announcement and First Call for Papers IWANN'97 INTERNATIONAL WORK-CONFERENCE ON ARTIFICIAL AND NATURAL NEURAL NETWORKS Biological and Artificial Architectures, Technologies and Applications Lanzarote - Canary Islands, Spain June 4-6, 1997 Contact URL http://petrus.upc.es/iwann97.html for online information. ORGANIZED BY Universidad Nacional de Educacion a Distancia (UNED), Madrid Universidad de Las Palmas de Gran Canaria Universidad Politecnica de Catalunya Universidad de Malaga Universidad de Granada IWANN'97, the fourth International Workshop on Artificial Neural Networks, now renamed the International Work-Conference on Artificial and Natural Neural Networks, will take place in Lanzarote, Canary Islands (Spain) from 4 to 6 June 1997. This biennial meeting, which focuses on biologically inspired and more realistic models of natural neurons and neural nets and on new hybrid computing paradigms, was first held in Granada (1991), Sitges (1993) and Torremolinos, Malaga (1995), with a growing number of participants from more than 20 countries and with high-quality papers published by Springer-Verlag (LNCS 540, 686 and 930). SCOPE Neural computation is considered here in the dual perspective of analysis (as science) and synthesis (as engineering). As a science of analysis, neural computation seeks to help neurology, brain theory, and cognitive psychology in understanding the functioning of the nervous system by means of computational models of neurons, neural nets and subcellular processes, with the possibility of using electronics and computers as a "laboratory" in which cognitive processes can be simulated and hypotheses tested without having to act directly upon living beings. As an engineering synthesis, neural computation seeks to complement the symbolic perspective of Artificial Intelligence (AI), using the biologically inspired models of distributed, self-programming and self-organizing networks, to solve those non-algorithmic problems of function approximation and pattern classification having to do with changing and only partially known environments. Fault tolerance and dynamic reconfiguration are other basic advantages of neural nets. In the sea of meetings, congresses and workshops on ANN's, IWANN'97 focuses on the three subjects that most concern us: (1) The search for biologically inspired new models of local computation architectures and learning, along with the organizational principles behind the complexity of intelligent behavior. (2) The search for methodological contributions in the analysis and design of knowledge-based ANN's, instead of "blind nets", and in the reduction of the knowledge level to the sub-symbolic implementation level.
(3) The cooperation with symbolic AI, with the integration of connectionist and symbolic processing in hybrid and multi-strategy approaches for perception, decision and control tasks, as well as for case-based reasoning, concept formation and learning. To contribute to posing and partially solving these global topics, IWANN'97 offers a brainstorming, interdisciplinary forum in advanced Neural Computation for scientists and engineers from biology, neuroanatomy, computational neurophysiology, molecular biology, biophysics, linguistics, psychology, mathematics and physics, computer science, artificial intelligence, parallel computing, analog and digital electronics, advanced computer architectures, reverse engineering, cognitive sciences and all the concerned applied domains (sensory systems and signal processing, monitoring, diagnosis, classification and decision making, intelligent control and supervision, perceptual robotics and communication systems). Contributions on the following and related topics are welcome. TOPICS 1. Biological Foundations of Neural Computation: Principles of brain organization. Neuroanatomy and neurophysiology of synapses, dendro-dendritic contacts, neurons and neural nets in peripheral and central areas. Plasticity, learning and memory in natural neural nets. Models of development and evolution. The computational perspective in Neuroscience. 2. Formal Tools and Computational Models of Neurons and Neural Nets Architectures: Analytic and logic models. Object-oriented formulations. Hybrid knowledge representation and inference tools (rules and frames with analytic slots). Probabilistic, Bayesian and fuzzy models. Energy-related models. 3. Plasticity Phenomena (Maturing, Learning and Memory): Biological mechanisms of learning and memory. Computational formulations using correlational, reinforcement and minimization strategies. Conditioned reflex and associative mechanisms. Inductive-deductive and abductive symbolic-subsymbolic formulations. Generalization. 4. Complex Systems Dynamics: Self-organization, cooperative processes, autopoiesis, emergent computation, synergetics, evolutive optimization and genetic algorithms. Self-reproducing nets. Self-organizing feature maps. Simulated evolution. Social organization phenomena. 5. Cognitive Science and AI: Hybrid knowledge-based systems. Neural networks for knowledge modeling, acquisition and refinement. Natural language understanding. Concept formation. Spatial and temporal planning and scheduling. Intentionality. 6. Neural Nets Simulation, Emulation and Implementation: Environments and languages. Parallelization, modularity and autonomy. New hardware implementation strategies (FPGAs, VLSI, neurodevices). Evolutive architectures. Real systems validation and evaluation. 7. Methodology for Data Analysis, Task Selection and Nets Design. 8. Neural Networks for Perception: Biologically inspired preprocessing. Low-level processing, source separation, sensor fusion, segmentation, feature extraction, adaptive filtering, noise reduction, texture, stereo correspondence, motion analysis, speech recognition, artificial vision, and hybrid architectures for multisensorial perception. 9. Neural Networks for Communications Systems: Modems and codecs, network management, digital communications. 10. Neural Networks for Control and Robotics: System identification, motion planning and control, adaptive, predictive and model-based control systems, navigation, real-time applications, visuo-motor coordination.
LOCATION BEATRIZ Hotel Lanzarote - Canary Islands, June 4-6, 1997 Lanzarote, the most northerly and easterly island of the Canarian archipelago, is at the same time the most unusual one and exerts a strange fascination on those who visit it, because the fast succession of fire, sea and colors contrasts with craters, green valleys and unforgettable golden and warm beaches. LANGUAGE English will be the official language of IWANN'97. Simultaneous translation will not be provided. CALL FOR PAPERS The Programme Committee seeks original papers on the above-mentioned topics. Authors should pay special attention to explaining the theoretical and technical choices involved, point out possible limitations and describe the current state of their work. All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length (Springer-Verlag Proceedings are expected). INSTRUCTIONS TO AUTHORS Five copies (one original and four copies) of the paper must be submitted. The paper must not exceed 10 pages, including figures, tables and references. It should be written in English on A4 paper, in a Roman font, 12 point in size, without page numbers. If possible, please make use of the LaTeX/plain TeX style file available on the WWW page: http://petrus.upc.es/iwann97.html . In addition, one sheet must be attached including: Title and authors' names, list of five keywords, the Topic the paper fits best, preferred presentation (oral or poster) and the corresponding author (name, postal and e-mail address, phone and fax numbers). CONTRIBUTIONS MUST BE SENT TO: Prof. Jose Mira, Dpto. Informatica y Automatica, UNED, Senda del Rey, s/n, E-28040 Madrid, Spain. Phone: +34 1 3987155, Fax: +34 1 3986697. IMPORTANT DATES Second and Final Call for Papers September 1996 Final Date for Submission January 15, 1997 Notification of Acceptance March 1997 Workshop June 4-6, 1997 STEERING COMMITTEE Prof. Joan Cabestany, Universidad Politecnica de Catalunya (E) Prof. Jose Mira Mira, UNED (E) Prof. Alberto Prieto, Universidad de Granada (E) Prof. Francisco Sandoval, Universidad de Malaga (E) TENTATIVE ORGANIZATION COMMITTEE Michael Arbib, University of Southern California (USA) Senen Barro, Universidad de Santiago (E) Trevor Clarkson, King's College London (UK) Ana Delgado, UNED (E) Dante DelCorso, Politecnico di Torino (I) Tamas D. Gedeon, University of New South Wales (AUS) Karl Goser, Universität Dortmund (G) Jeanny Herault, Institut National Polytechnique de Grenoble (F) Jaap Hoekstra, Delft University of Technology (NL) Roberto Moreno, Universidad de las Palmas de Gran Canaria (E) Shunsuke Sato, Osaka University (Jp) Igor Shevelev, Russian Academy of Sciences (R) Cloe Taddei-Ferretti, Istituto di Cibernetica, CNR (I) Marley Vellasco, Pontificia Universidade Catolica do Rio de Janeiro (Br) Michel Verleysen, Universite Catholique de Louvain-la-Neuve (B) From carmesin at schoner.physik.uni-bremen.de Thu May 23 09:49:18 1996 From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin) Date: Thu, 23 May 1996 15:49:18 +0200 Subject: BOOK: Neuronal Adaptation Theory Message-ID: <199605231349.PAA12918@schoner.physik.uni-bremen.de> The new book NEURONAL ADAPTATION THEORY is now available.
ISBN 3-631-30039-5, US-ISBN 0-8204-3172-9 AUTHOR: Hans-Otto Carmesin, Institute for Theoretical Physics, University Bremen, 28334 Bremen, Germany, Fax 0421 218 4869, email: carmesin at theo.physik.uni-bremen.de, www: http://schoner.physik.uni-bremen.de/~carmesin/ PUBLISHER: Peter Lang, Frankfurt/M., Berlin, Bern, New York, Paris, Wien; ---> ---> Please send your order to: Peter Lang GmbH, Europischer Verlag der Wissenschaften, Abteilung WB, Box 940225, 60460 Frankfurt/M., Germany PRICE: 59DM; PAGES: 236 (23x16cm), num.fig. FEATURES: The book includes 29 exercises with solutions, 43 essential ideas, 108 partially coloured figures, experiment explanations and general theorems. ABSTRACT: The human genotype represents at most ten billion pieces of binary information, whereas the human brain contains more than a million times a billion synapses. So a differentiated brain structure is due to synaptic self-organization and adaptation. The goal is to model the formation of observed global brain structures and cognitive properties from local synaptic dynamics sometimes supervised by the limbic system. A general neuro-synaptic dynamics is solved with a novel field theory in a comprehensible manner and in quantitative agreement with many observations. Novel results concern for instance thermal membrane fluctuations, fluctuation dissipation theorems, cortical maps, topological charges, operant conditioning, transitive inference, learning hidden structures, behaviourism, attention focus, Wittgenstein paradox, infinite generalization, schizophrenia dynamics, perception dynamics, non-equilibrium phase transitions, emergent valuation. Also the formation of advanced cognitive properties is modeled. CONTENTS: 1 Introduction 13 1.1 The role of theory 13 2 Neuronal Association Patterns 17 2.1 Classical conditioning 17 2.2 Typical nerve cell 18 2.3 Neuronal dynamics 20 2.3.1 Two-valued neurons 20 2.3.2 Two alternative formulations 21 2.4 Coupling dynamics 23 2.4.1 Usage dependent couplings 25 2.4.2 Neuronal activity patterns 25 2.5 Network model for classical conditioning 29 2.6 Pattern recognition 32 2.6.1 Task 32 2.6.2 One pattern 32 2.6.3 Several patterns 34 2.7 Pattern retrieval with stochastic dynamics 39 2.7.1 Dynamical equilibrium for a single neuron 40 2.7.2 Dynamical equilibrium for configurations 40 2.8 A physiological basis of stochastic dynamics 44 2.8.1 Biophysics of action potentials 44 2.8.2 Spherical capacitor cell model 45 2.8.3 Nyquist formula 46 2.8.4 Thermodynamic membrane potential fluctuations 50 2.8.5 Resulting stochastic neuronal dynamics 51 2.8.6 Discussion 53 2.9 Pattern retrieval with effectively continuous time 53 2.9.1 Continuous spike response function 54 2.9.2 Network model 54 2.9.3 Model analysis 55 2.10 Discussion of chapter 2 58 3 Self-Organizing Networks 60 3.1 Basic principle 61 3.2 Retinotopy as model system 61 3.3 General two-valued neuron coupling rules 63 3.3.1 Locality principle 63 3.3.2 Additive membrane potential rule, AMPR 64 3.3.3 Coupling transfer rule, CTR 64 3.3.4 Local linear coupling dynamics, LLCD 65 3.3.5 Limited neuronal couplings, LNCR 65 3.4 A 1D self-organizing network with Hebb-rule 65 3.4.1 Network architecture 65 3.4.2 Coupling dynamics 67 3.4.3 Transformed couplings 68 3.4.4 Single stimulation potential 68 3.5 Field theory of neurosynaptic dynamics 69 3.5.1 A general solution method 69 3.5.2 Ergodicity 69 3.5.3 Neurosynaptic states and transitions 70 3.5.4 Averaged neurosynaptic change field 70 3.5.5 Differential equation for neurosynaptic change 
field 71 3.5.6 Adiabatic principle 71 3.5.7 Differential equation for synaptic change field 72 3.5.8 Change potential field 73 3.5.9 Fluctuation dissipation theorems 77 3.5.10 Discussion 81 3.6 Field theory of topology preservation 81 3.6.1 Emergence of an injective mapping 81 3.6.2 Single neuron separation 83 3.6.3 Coincidence stabilization 84 3.6.4 Emergence of 1D topology preservation 86 3.6.5 Emergence of clusters and topology preservation 87 3.6.6 Discussion 92 3.7 Field theory of orientation preference emergence 92 3.7.1 Network model 92 3.7.2 Change potentials 94 3.7.3 Potential minima 95 3.7.4 Discussion 97 3.8 Field theory of orientation pattern emergence 98 3.8.1 Phenomenon of pinwheel structures 98 3.8.2 Network model 98 3.8.3 Effective iso-orientation interaction 100 3.8.4 Continuous orientation interaction 101 3.8.5 Orientation fluctuations 102 3.8.6 Instability of the ground state 103 3.8.7 Topological singularities according to the Poisson equation 104 3.8.8 Greens function solution 106 3.8.9 Energy of a planar system of charges 108 3.8.10 Prediction: Plasma phase transition 109 3.9 Overview for formal temperatures 110 3.10 Discussion of chapter 3 111 4 Supervised & Self-Organized Adaptation 113 4.1 Forms of supervised adaptation 113 4.2 Operant conditioning 114 4.2.1 The phenomenon of transitive inference 114 4.2.2 Network model 116 4.2.3 Analysis of the network model 117 4.2.4 Transitive inference 119 4.2.5 Symbolic distance effect 119 4.2.6 Network parameters for various species 121 4.3 Generalized quantitative dynamical analysis 122 4.3.1 General valuation dynamics 122 4.3.2 Transitive inference with general valuation dynamics 123 4.3.3 Necessary and sufficient conditions for learning the Piaget task 123 4.3.4 Transitive inference as a consequence of successful learning 124 4.3.5 General set of tasks 125 4.3.6 Network model with minimization of complexity 126 4.3.7 Complete neurosynaptic dynamics and empirical data 127 4.3.8 Discussion of operant conditioning 130 4.4 Supervised Hebb-rule 131 4.4.1 Network model 131 4.4.2 Network analysis 131 4.4.3 Discussion on convergence with Hebb-rules 134 4.5 Perceptron 134 4.5.1 Network and task definition 134 4.5.2 Network architecture capabilities 135 4.5.3 Perceptron convergence theorem 136 4.6 Discussion of chapter 4 137 5 Advanced Adaptations 138 5.1 Learning of charges 139 5.1.1 An especially simple experiment 140 5.1.2 Necessary inner neurons 141 5.1.3 Definition of frameworks 141 5.1.4 Network model 142 5.1.5 Analysis of the network model 143 5.1.6 Discussion 146 5.2 Attention 147 5.2.1 Network model with attention 148 5.2.2 Potential field theorem 148 5.2.3 Attentional learning of charges 150 5.2.4 Attentional adaptation convergence theorem 151 5.2.5 Emergence of network architectures 153 5.2.6 Generalized perceptron 153 5.2.7 Neuronal dynamics with signum function 155 5.2.8 Discussion 156 5.3 Reversal 156 5.3.1 A reversal experiment 157 5.3.2 Network model 157 5.3.3 Discussion of reversal 159 5.4 Learning of counting 159 5.4.1 Generalization without limitation 159 5.4.2 Network architecture and dynamics 160 5.4.3 Analysis of the network 161 5.4.4 An instructive network model 162 5.4.5 Advanced network dynamics 164 5.4.6 Analysis of the advanced network model 165 5.4.7 A solution of Wittgenstein's paradox 166 5.4.8 Discussion 168 5.5 Convergence theorem for inner feedback 168 5.5.1 Idea of adaptation via short dimension increase 169 5.5.2 Specification of the learning situation 170 5.5.3 Learning algorithm for inner 
feedback 171 5.5.4 Convergence theorem 174 5.5.5 Optimal correspondence via short dimension increase 177 5.5.6 Generalizations 178 5.5.7 Discussion 179 5.6 Correspondence deficit compensation: Schizophrenia model? 180 5.6.1 Starting point 180 5.6.2 Network model 181 5.6.3 Network characteristics 182 5.6.4 Transfer to schizophrenia 185 5.6.5 Therapy 187 5.6.6 Empirical findings 188 5.6.7 Discussion 194 5.7 A mesoscopic perception model 195 5.7.1 External stimulations 195 5.7.2 Network model 197 5.7.3 Field theoretic solution of the network 201 5.7.4 Modeling phenomena 203 5.7.5 Discussion 210 5.8 Emergent valuation 211 5.8.1 Emergence of a valuating field 211 5.8.2 Effect of a valuating stimulation 213 5.9 General adaptation dynamics 214 5.9.1 Definition of microscopic dynamics 214 5.9.2 Resulting macroscopic dynamics 216 5.9.3 Some special cases 218 5.10 Discussion of chapter 5 219 5.11 No Laplace demon 220 6 Summary 221 6.1 Overview 221 6.2 Predictions 222 6.3 List of ideas 224 6.4 Open questions 225 From nq6 at columbia.edu Thu May 23 10:59:23 1996 From: nq6 at columbia.edu (Ning Qian) Date: Thu, 23 May 1996 10:59:23 -0400 (EDT) Subject: Papers available: disparity tuning and motion-stereo integration Message-ID: <199605231459.KAA01297@konichiwa.cc.columbia.edu> The following two papers on disparity tuning of binocular cells and on motion-stereo integration are available from our WWW homepage at: http://brahms.cpmc.columbia.edu ----------------------------------------------------------------------- Binocular receptive field models, disparity tuning, and characteristic disparity Yudong Zhu and Ning Qian Columbia University (To appear in Neural Computation) Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers (1990) have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same Gaussian envelope but different phase parameters in the sinusoidal modulations. This phase-parameter based RF description, however, has recently been questioned by Wagner and Frost (1993), based on their identification of a so-called characteristic disparity (CD) in some cells' disparity tuning curves. They concluded that their data favor the traditional binocular RF model, which assumes an overall positional shift between a cell's left and right RFs. Here we set out to resolve this issue by studying the dependence of cells' disparity tuning on their underlying RF structures through mathematical analyses and computer simulations. We model the disparity tuning curves in Wagner and Frost's experiments and demonstrate that the mere existence of approximate CDs in real cells cannot be used to distinguish the phase-parameter based RF description from the traditional position-shift based RF description. Specifically, we found that model simple cells with either type of RF description do not have a CD. Model complex cells with the position-shift based RF description have a precise CD, and those with the phase-parameter based RF description have an approximate CD. We also suggest methods for correctly distinguishing the two types of RF descriptions. A hybrid of the two RF models may be required to fit the behavior of some real cells, and we show how to determine the relative contributions of the two RF models.
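As a rough illustration of the two receptive-field descriptions contrasted in this abstract, the short Python sketch below (not code from the paper; the one-dimensional simplification, the linear simple-cell response, and all parameter values are illustrative assumptions) builds a phase-parameter RF pair and a position-shift RF pair from Gabor functions and computes a disparity tuning curve for each.

    import numpy as np

    x = np.linspace(-4.0, 4.0, 801)   # retinal position, arbitrary units
    dx = x[1] - x[0]

    def gabor(x, sigma=0.8, freq=0.6, phase=0.0, center=0.0):
        # Gaussian envelope multiplied by a cosine carrier.
        return np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2)) * \
               np.cos(2.0 * np.pi * freq * (x - center) + phase)

    def response(left_rf, right_rf, stimulus, disparity):
        # Linear binocular response to a stimulus shown with a horizontal
        # disparity between the left-eye and right-eye images.
        right_image = np.interp(x - disparity, x, stimulus, left=0.0, right=0.0)
        return (left_rf @ stimulus + right_rf @ right_image) * dx

    stimulus = np.exp(-x ** 2 / 0.1)          # a narrow bright bar
    disparities = np.linspace(-1.5, 1.5, 61)

    # (a) Phase-parameter description: same Gaussian envelope, different phases.
    L_phase, R_phase = gabor(x, phase=0.0), gabor(x, phase=np.pi / 2.0)
    # (b) Position-shift description: identical profiles, shifted envelopes.
    L_pos, R_pos = gabor(x, center=0.0), gabor(x, center=0.3)

    phase_curve = [response(L_phase, R_phase, stimulus, d) for d in disparities]
    pos_curve = [response(L_pos, R_pos, stimulus, d) for d in disparities]

Plotting phase_curve and pos_curve against disparity gives simple-cell-like tuning curves for the two descriptions; a complex-cell (energy-model) variant would sum the squared responses of a quadrature pair of such units.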
This paper is also available from NEUROPROSE: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/qian.cd.ps.Z ...................................................................... A Physiological Model for Motion-stereo Integration and a Unified Explanation of the Pulfrich-like Phenomena Ning Qian and Richard A. Andersen Columbia University and Caltech (To appear in Vision Research) Many psychophysical and physiological experiments indicate that visual motion analysis and stereoscopic depth perception are processed together in the brain. However, little computational effort has been devoted to combining these two visual modalities into a common framework based on physiological mechanisms. We present such an integrated model in this paper. We have previously developed a physiologically realistic model for binocular disparity computation (Qian, 1994). Here we demonstrate that, under some general and physiological assumptions, our stereo vision model can be combined naturally with motion energy models to achieve motion-stereo integration. The integrated model may be used to explain a wide range of experimental observations regarding motion-stereo interaction. As an example, we show that the model can provide a unified account of the classical Pulfrich effect (Morgan, 1975) and the generalized Pulfrich phenomena observed with dynamic noise patterns (Tyler, 1974; Falk, 1980) and stroboscopic stimuli (Burr, 1979). ----------------------------------------------------------------------- From trevor at mallet.Stanford.EDU Thu May 23 11:16:09 1996 From: trevor at mallet.Stanford.EDU (Trevor Hastie) Date: Thu, 23 May 1996 08:16:09 -0700 (PDT) Subject: Modern Regression and Classification Message-ID: <199605231516.IAA09756@mallet.Stanford.EDU> A non-text attachment was scrubbed... Name: not available Type: text Size: 2824 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/878fc910/attachment.ksh From ruppin at math.tau.ac.il Thu May 23 16:30:59 1996 From: ruppin at math.tau.ac.il (Eytan Ruppin) Date: Thu, 23 May 1996 23:30:59 +0300 (GMT+0300) Subject: Neural modeling papers Message-ID: <199605232030.XAA11605@gemini.math.tau.ac.il> Hi, 1. A few recent neural modeling papers are now available on my homepage, http://www.math.tau.ac.il/~ruppin/. Their abstracts are enclosed below. 2. Abstracts of the talks to be given in the TAU workshop on `Memory organization and consolidation: cognitive and computational perspectives' (Tel-Aviv, 28-30 May), will be available after the workshop via http://www.brain.tau.ac.il and via my homepage. Both homepages currently include the workshop program. Best wishes, Eytan Ruppin. %%%%%%%%%%%%%%%%%%%%%%%%%%% Abstracts: ----------- 1. Neuronal-Based Synaptic Compensation: A Computational Study in Alzheimer's Disease --------------------------------------------- David Horn, Nir Levy and Eytan Ruppin (to appear in Neural Computation 1996) In the framework of an associative memory model, we study the interplay between synaptic deletion and compensation, and memory deterioration, a clinical hallmark of Alzheimer's disease. Our study is motivated by experimental evidence that there are regulatory mechanisms that take part in the homeostasis of neuronal activity and act on the neuronal level. We show that, following synaptic deletion, synaptic compensation can be carried out efficiently by a local, dynamic mechanism, where each neuron maintains the profile of its incoming post-synaptic current.
Our results open up the possibility that the primary factor in the pathogenesis of cognitive deficiencies in Alzheimer's disease is the failure of local neuronal regulatory mechanisms. Allowing for neuronal death, we observe two pathological routes in AD, leading to different correlations between the levels of structural damage and functional decline. 2. Optimal Firing in Sparsely-connected Low-activity Attractor Networks -------------------------------------------------------------------------- Isaac Meilijson and Eytan Ruppin (to appear in Biological Cybernetics 1996) We examine the performance of Hebbian-like attractor neural networks, recalling stored memory patterns from their distorted versions. Searching for an activation (firing-rate) function that maximizes the performance in sparsely-connected low-activity networks, we show that the optimal activation function is a Threshold-Sigmoid of the neuron's input field. This function is shown to be in close correspondence with the dependence of the firing rate of cortical neurons on their integrated input current, as described by neurophysiological recordings and conduction-based models. It also accounts for the decreasing-density shape of firing rates that has been reported in the literature. 3. Pathogenesis of Schizophrenic Delusions and Hallucinations: A Neural Model --------------------------------------------------------------------------- Eytan Ruppin, James Reggia and David Horn (Schizophrenia Bulletin, 22(1), 105-123, 1996) We implement and study a computational model of Stevens' [1992] theory of the pathogenesis of schizophrenia. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. Modeling Stevens' hypothesized pathological synaptic changes in this framework results in adverse side effects reminiscent of hallucinations and delusions seen in schizophrenia: spontaneous, stimulus-independent retrieval of stored memories focused on just a few of the stored patterns. These could account for the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger, and for their tendency to concentrate on a few central cognitive and perceptual themes. The model explains why schizophrenic positive symptoms tend to wane as the disease progresses, why delayed therapeutic intervention leads to a much slower response, and why delusions and hallucinations may persist for a long duration when they occur.
The implications of these findings for the possible role of N-methyl-D-aspartate (NMDA) alterations in the pathogenesis of schizophrenia are discussed. 5. Neuronal Homeostasis and the Art of Synaptic Maintenance ------------------------------------------------------------- David Horn, Nir Levy and Eytan Ruppin (Submitted to NIPS*96) We propose a novel mechanism of synaptic maintenance whose goal is to preserve the performance of an associative memory network undergoing synaptic degradation, and to prevent the development of pathologic attractors. This mechanism is demonstrated by simulations performed in a low-activity neural model that implements local neuronal homeostasis. It works well even in a network undergoing strongly inhomogeneous synaptic alterations, and when input patterns are consecutively stored in the network. Our synaptic maintenance method strongly supports the idea that memory consolidation and synaptic maintenance should occur in separate periods of time, in a repetitive manner. Consequently, we hypothesize that synaptic maintenance occurs during REM sleep, while memory consolidation occurs during slow-wave sleep. 6. Neural modeling of psychiatric disorders (A review paper) ------------------------------------------------------------- Eytan Ruppin (Network, 6, 635-656, 1995) This paper reviews recent neural modeling studies of psychiatric disorders. Numerous aspects of psychiatric disturbances have been investigated, such as the role of synaptic changes in the pathogenesis of Alzheimer's disease, the study of spurious attractors as possible neural correlates of schizophrenic positive symptoms, and the exploration of the ability of feed-forward and recurrent networks to quantitatively model the cognitive performance of schizophrenic patients. Current models all employ considerable simplifications, both on the level of the behavioral phenomenology they seek to explore, and on the level of their structure and dynamics. However, it is encouraging to realize that the disruption of just a few simple computational mechanisms can lead to behaviors which correspond to some of the clinical features of psychiatric disorders, and can shed light on their pathogenesis. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% From risto at cs.utexas.edu Fri May 24 11:27:45 1996 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Fri, 24 May 1996 10:27:45 -0500 Subject: Electronic book: Lateral Interactions in the Cortex Message-ID: <199605241527.KAA29219@cascais.cs.utexas.edu> We are pleased to announce the publication of the book LATERAL INTERACTIONS IN THE CORTEX: STRUCTURE AND FUNCTION This book is entirely electronic, in the HTML format, and can be accessed through the World Wide Web at the address below. It makes extensive use of the hypertext structure of HTML documents, including hyperlinks to researchers, institutions, and publications around the world, many color illustrations, and even a few MPEG movies. Please read the Preface for hints on how to get the most out of the book. Below is a short abstract of the book and table of contents. Enjoy!
-- The Editors ------------------------------------------------------------------------ LATERAL INTERACTIONS IN THE CORTEX: STRUCTURE AND FUNCTION Electronic book, ISBN 0-9647060-0-8 http://www.cs.utexas.edu/users/nn/web-pubs/htmlbook96/ http://eris.wisdom.weizmann.ac.il/~edelman/htmlbook96/ (mirror site) Austin, TX: The UTCS Neural Network Research Group Joseph Sirosh, Risto Miikkulainen, and Yoonsuck Choe (editors) In the last few years, several new results on the structure, development, and functional role of lateral connectivity in the cortex have emerged. These results have led to a new understanding of the cortex as a continuously-adapting dynamic system shaped by competitive and cooperative lateral interactions. Many of the results and their interpretations are still controversial, and computational and analytical investigations can serve a pivotal role in establishing this new model. This book brings together eleven such investigations, each from a slightly different perspective and level, aiming at explaining what function the lateral interactions could play in the development and information processing in the cortex. The book serves as an overview of the kinds of processes that may be going on, laying the groundwork for understanding information processing in the laterally connected cortex. Table of Contents: Preface 1. Introduction - Risto Miikkulainen and Joseph Sirosh 2. The Pattern and Functional Significance of Long-Range Interactions in Human Visual Cortex - Uri Polat, Anthony M. Norcia, and Dov Sagi 3. Recurrent Inhibition and Clustered Connectivity as a Basis for Gabor-like Receptive Fields in the Visual Cortex - Silvio P. Sabatini 4. Variable Gain Control in Local Cortical Circuitry Supports Context-Dependent Modulation by Long-Range Connections - David C. Somers, Louis J. Toth, Emanuel Todorov, S. Chenchal Rao, Dae-Shik Kim, Sacha B. Nelson, Athanassios G. Siapas, and Mriganka Sur 5. The Role of Lateral Connections in Visual Cortex: Dynamics and Information Processing - Marius Usher, Martin Stemmler, and Ernst Niebur 6. Synchronous Oscillations Based on Lateral Connections - DeLiang Wang 7. A Basis for Long-Range Inhibition Across Cortex - J. G. Taylor and F. N. Alavi 8. Self-Organization of Orientation Maps, Lateral Connections, and Dynamic Receptive Fields in the Primary Visual Cortex - Joseph Sirosh, Risto Miikkulainen, and James A. Bednar 9. Associative Decorrelation Dynamics in Visual Cortex - Dawei W. Dong 10. A Self-Organizing Neural Network That Learns to Detect and Represent Visual Depth from Occlusion Events - Jonathan A. Marshall and Richard Alley 11. Face Recognition by Dynamic Link Matching - Laurenz Wiskott and Christoph von der Malsburg 12. Why Have Lateral Connections in the Visual Cortex? - Shimon Edelman From jung at pop.uky.edu Fri May 24 11:58:51 1996 From: jung at pop.uky.edu (Dr. Ranu Jung) Date: Fri, 24 May 1996 15:58:51 +0000 Subject: Graduate Research Asst. Message-ID: <199605242101.RAA23996@service1.cc.uky.edu> GRADUATE RESEARCH ASSISTANTSHIPS (PLEASE FORWARD) A graduate research assistantship is available for up to 3 years to conduct research in the "Neural Control of Locomotion" at the Center for Biomedical Engineering, University of Kentucky. Students have to be accepted into the Ph.D./MS program starting Fall 1996 (August). The assistantship is available to citizens of all nations. 
The research is to examine the dynamical interaction between the brain and the spinal cord in the control of locomotion, in particular, swimming in a lower vertebrate. Traditional neurophysiological experimental techniques will be complemented by techniques from non-linear signal processing and control. In conjunction, the behavior of connectionist/biophysical neural network models will be examined and analyzed using tools from dynamical systems theory. If interested, send a CV and the names of two references, preferably by email or fax, to: Ranu Jung, Ph.D. Center for Biomedical Engineering 21 Wenner-Gren Research Lab. University of Kentucky, Lexington 40506-0070 Tel. 606-257-5931 email: jung at pop.uky.edu Fax: 606-257-1856 The University of Kentucky is located in the rolling hills of the Bluegrass Country and has a diverse campus. The Center for Biomedical Engineering is a multidisciplinary center in the Graduate School. We have strong ties to the Medical Center and the School of Engineering. Details about the University of Kentucky and the Center for Biomedical Engineering can be obtained on the web at http://www.uky.edu; http://www.uky.edu/RGS/CBME. From mike at psych.ualberta.ca Fri May 24 15:10:53 1996 From: mike at psych.ualberta.ca (Dr. Michael R.W. Dawson) Date: Fri, 24 May 1996 13:10:53 -0600 (MDT) Subject: Cognitive Neuroscience Job Message-ID: The University of Alberta, Department of Psychology, is pleased to announce that it is continuing its expansion into the Cognitive Neurosciences in 1997. Canadians and Non-Canadians are encouraged to apply for a tenure-track position. Details are described below. Additional information, including profiles of two cognitive neuroscientists hired by our Department last year, can be found at our web-site: http://web.psych.ualberta.ca/Neuroscience_Positions.htmld/index.html ========================================================== DEPARTMENT OF PSYCHOLOGY, UNIVERSITY OF ALBERTA Tenure-Track Assistant Professor Position in Cognitive Neuroscience The Department of Psychology, Faculty of Science at the University of Alberta, is seeking to expand its development in the Cognitive Neurosciences. A tenure-track position in Cognitive Neuroscience at the assistant professor level will be open to competition (salary range $39,230 - $55,526). The appointment will be effective July 1, 1997. Candidates should have a strong interest in neuroscience with demonstrated excellence and ongoing research programs. The expectation is that the successful candidate will secure NSERC, MRC, or equivalent funding. Hiring decisions will be made on the basis of demonstrated research capability, teaching ability, and the potential for interactions with colleagues. Applicants should have expertise in any of the following or related areas: perception, language, neural plasticity, development and aging, attention, motor control, emotion, or memory. The applicant should send a curriculum vitae, a statement of current and future research plans, recent publications, and arrange to have at least three letters of reference forwarded, to the Chair of the Cognitive Neuroscience Search Committee, Department of Psychology, P-220 Biological Sciences Building, University of Alberta, Edmonton, Alberta, Canada, T6G 2E9. Applications for the competition should be received by November 1, 1996. The PhD must be completed by July 1, 1997. The University of Alberta is committed to the principle of equity in employment.
As an employer we welcome diversity in the workplace and encourage applications from all qualified women and men, including Aboriginal peoples, persons with disabilities, and members of visible minorities. From rao at cs.rochester.edu Sat May 25 21:13:35 1996 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 25 May 1996 21:13:35 -0400 Subject: No subject Message-ID: <199605260113.VAA24390@skunk.cs.rochester.edu> Subject: Papers available: Dynamic Models of Visual Recognition The following two papers on dynamic cortical models of visual recognition are now available for retrieval via ftp. Comments/suggestions welcome, -Rajesh Rao (rao at cs.rochester.edu) =========================================================================== A Class of Stochastic Models for Invariant Recognition, Motion, and Stereo Rajesh P.N. Rao and Dana H. Ballard (Submitted to NIPS*96) Abstract We describe a general framework for modeling transformations in the image plane using a stochastic generative model. Algorithms that resemble the well-known Kalman filter are derived from the MDL principle for estimating both the generative weights and the current transformation state. The generative model is assumed to be implemented in cortical feedback pathways while the feedforward pathways implement an approximate inverse model to facilitate the estimation of current state. Using the above framework, we derive stochastic models for invariant recognition, motion estimation, and stereopsis, and present preliminary simulation results demonstrating recognition of objects in the presence of translations, rotations and scale changes. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/invar.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/invar.ps.Z 7 pages; 430K compressed. ========================================================================== Dynamic Model of Visual Recognition Predicts Neural Response Properties In The Visual Cortex Rajesh P.N. Rao and Dana H. Ballard (Neural Computation - in review) Abstract The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. In this paper, we describe a stochastic network model of visual recognition that explains these experimental observations by using a hierarchical form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a stochastic learning rule also derived from the MDL principle. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field.
Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/dynmem.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/dynmem.ps.Z 32 pages; 534K compressed. =========================================================================== From koza at CS.Stanford.EDU Sun May 26 02:32:04 1996 From: koza at CS.Stanford.EDU (John Koza) Date: Sat, 25 May 96 23:32:04 PDT Subject: GP is competitive with humans on 4 problems Message-ID: We have fixed the problem and the following paper is now available in Post Script. Four Problems for which a Computer Program Evolved by Genetic Programming is Competitive with Human Performance ABSTRACT: It would be desirable if computers could solve problems without the need for a human to write the detailed programmatic steps. That is, it would be desirable to have a domain-independent automatic programming technique in which "What You Want Is What You Get" ("WYWIWYG" - pronounced "wow-eee-wig"). Genetic programming is such a technique. This paper surveys three recent examples of problems (from the fields of cellular automata and molecular biology) in which genetic programming evolved a computer program that produced results that were slightly better than human performance for the same problem. This paper then discusses the problem of electronic circuit synthesis in greater detail. It shows how genetic programming can evolve both the topology of a desired electrical circuit and the sizing (numerical values) for each component in a crossover (woofer and tweeter) filter. Genetic programming has also evolved the design for a lowpass filter, the design of an amplifier, and the design for an asymmetric bandpass filter that was described as being difficult-to-design in an article in a leading electrical engineering journal. John R. Koza Computer Science Department 258 Gates Building Stanford University Stanford, California 94305 E-MAIL: Koza at CS.Stanford.Edu Forrest H Bennett III Visiting Scholar Computer Science Department Stanford University E-MAIL: Koza at CS.Stanford.Edu David Andre Visiting Scholar Computer Science Department Stanford University E-MAIL: fhb3 at slip.net Martin A. Keane Econometrics Inc. Chicago, IL 60630 Paper available in Postscript via WWW from http://www-cs-faculty.stanford.edu/~koza/ Look under "Research Publications" and "Recent Papers" on the home page. This paper was presented at the IEEE International Conference on Evolutionary Computation on May 20-22, 1996 in Nagoya, Japan. Additional papers on evolving electrical circuits will be presented at the GP-96 conference to be held at Stanford University on July 28-31, 1996. For information, see http://www.cs.brandeis.edu/~zippy/gp-96.html From gbugmann at soc.plym.ac.uk Sat May 25 12:22:04 1996 From: gbugmann at soc.plym.ac.uk (Guido.Bugmann xtn 2566) Date: Sat, 25 May 1996 17:22:04 +0100 (BST) Subject: Connectionist Learning - Some New Ideas In-Reply-To: <199605181814.OAA09391@skunk.cs.rochester.edu> Message-ID: On Sat, 18 May 1996 rao at cs.rochester.edu wrote: > >One loses about 100,000 cortical neurons a day (about a percent of > >the original number every three years) under normal conditions. > > Does anyone have a concrete citation (a journal article) for this or > any other similar estimate regarding the daily cell death rate in the > cortex of a normal brain? 
I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers. A similar question (are there references for 1 million neurons lost per day?) came up in a discussion on the topic of robustness on connectionists a few years ago (1992). Some of the replies were: ------------------------------------------------------- From phkywong at uxmail.ust.hk Mon May 27 08:56:13 1996 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Mon, 27 May 1996 20:56:13 +0800 Subject: Paper available Message-ID: <96May27.205615hkt.18930-8054+221@uxmail.ust.hk> The following papers, to be presented at ICONIP'96, are now available via anonymous FTP. (5 pages each) ============================================================================ FTP-host: physics.ust.hk FTP-files: pub/kymwong/robust.ps.gz Neural Dynamic Routing for Robust Teletraffic Control W. K. Felix Lor and K. Y. Michael Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail addresses: phfelix at usthk.ust.hk, phkywong at usthk.ust.hk ABSTRACT We study the performance of a neural dynamic routing algorithm on the circuit-switched network under critical network situations. It consists of a teacher generating examples for supervised learning in a group of student neural controllers. Simulations show that the method is robust and superior to conventional routing techniques. ============================================================================ FTP-host: physics.ust.hk FTP-files: pub/kymwong/nonuni.ps.gz Neural Network Classification of Non-Uniform Data K. Y. Michael Wong and H. C. Lau Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail addresses: phkywong at usthk.ust.hk, phhclau at usthk.ust.hk ABSTRACT We consider a model of non-uniform data, which resembles typical data for system faults in diagnostic classification tasks. Phase diagrams illustrate the role reversal of the informators and background as parameters change. With no prior knowledge about the non-uniformity, the Bayesian classifier may perform worse than other neural network classifiers when few examples are available.
============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get robust.ps.gz (or get nonuni.ps.gz) ftp> quit unix> gunzip robust.ps.gz (or gunzip nonuni.ps.gz) unix> lpr robust.ps (or lpr nonuni.ps) From listerrj at helios.aston.ac.uk Tue May 28 05:46:11 1996 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Tue, 28 May 1996 10:46:11 +0100 Subject: Postdoctoral Research Fellowship at Aston University Message-ID: <11161.199605280946@sun.aston.ac.uk> ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK POSTDOCTORAL RESEARCH FELLOWSHIP -------------------------------- On-line Learning in Radial Basis Function Networks -------------------------------------------------- *** Full details at http://www.ncrg.aston.ac.uk/ *** The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2-year postdoctoral research position in the area of `On-line Learning in Radial Basis Function Networks'. The emphasis of the research will be on applying a theoretically well-founded approach based on methods adopted from statistical mechanics to analyse learning in RBF networks. Potential candidates should have strong mathematical and computational skills, with a background in statistical mechanics and neural networks. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,566 UK pounds. The salary scale is subject to annual increments. How to Apply ------------ If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 3 referees, to: Dr. David Saad Neural Computing Research Group Dept. of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 4586 e-mail: D.Saad at aston.ac.uk (e-mail submission of postscript files is welcome) Closing date: 21 June, 1996. ---------------------------------------------------------------------- From geoff at salk.edu Tue May 28 15:45:03 1996 From: geoff at salk.edu (Geoff Goodhill) Date: Tue, 28 May 96 12:45:03 PDT Subject: Cell death during embryogenesis Message-ID: <9605281945.AA26708@salk.edu> Those following the current thread on cell death may be interested in a recent experimental investigation of this during development of the mouse by Blaschke, Staley & Chun (abstract below). The most striking finding is that at embryonic day 14, 70% of cortical cells are dying.
Geoff Goodhill The Salk Institute 10010 North Torrey Pines Road La Jolla, CA 92037 Email: geoff at salk.edu http://www.cnl.salk.edu/~geoff TI: WIDESPREAD PROGRAMMED CELL-DEATH IN PROLIFERATIVE AND POSTMITOTIC REGIONS OF THE FETAL CEREBRAL-CORTEX AU: BLASCHKE_AJ, STALEY_K, CHUN_J NA: UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,NEUROSCI & BIOMED SCI GRAD PROGRAM,9500 GILMAN DR,LA JOLLA,CA,92093 UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,NEUROSCI & BIOMED SCI GRAD PROGRAM,LA JOLLA,CA,92093 UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,BIOL GRAD PROGRAM,LA JOLLA,CA,92093 JN: DEVELOPMENT, 1996, Vol.122, No.4, pp.1165-1174 IS: 0950-1991 AB: A key event in the development of the mammalian cerebral cortex is the generation of neuronal populations during embryonic life. Previous studies have revealed many details of cortical neuron development including cell birthdates, migration patterns and lineage relationships. Programmed cell death is a potentially important mechanism that could alter the numbers and types of developing cortical cells during these early embryonic phases. While programmed cell death has been documented in other parts of the embryonic central nervous system, its operation has not been previously reported in the embryonic cortex because of the lack of cell death markers and the difficulty in following the entire population of cortical cells. Here, we have investigated the spatial and temporal distribution of dying cells in the embryonic cortex using an in situ end-labelling technique called 'ISEL+' that identifies fragmented nuclear DNA in dying cells with increased sensitivity. The period encompassing murine cerebral cortical neurogenesis was examined, from embryonic days 10 through 18. Dying cells were rare at embryonic day 10, but by embryonic day 14, 70% of cortical cells were found to be dying. This number declined to 50% by embryonic day 18, and few dying cells were observed in the adult cerebral cortex. Surprisingly, while dying cells were observed throughout the cerebral cortical wall, the majority were found within zones of cell proliferation rather than in regions of postmitotic neurons. These observations suggest that multiple mechanisms may regulate programmed cell death in the developing cortex. Moreover, embryonic cell death could be an important factor enabling the selection of appropriate cortical cells before they complete their differentiation in postnatal life. KP: RHESUS-MONKEY, POSTNATAL-DEVELOPMENT, MIGRATION PATTERNS, REGRESSIVE EVENTS, DNA FRAGMENTATION, NERVOUS-SYSTEM, GANGLION-CELL, VISUAL-CORTEX, MOUSE, RAT WA: PROGRAMMED CELL DEATH, CEREBRAL CORTEX, EMBRYONIC DEVELOPMENT, MOUSE From lba at inesc.pt Tue May 28 11:26:09 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Tue, 28 May 1996 16:26:09 +0100 Subject: paper available Message-ID: <31AB1B11.6EEA4806@inesc.pt> The following paper, which will appear in the Proceedings of the IEEE International Conference on Neural Networks 1996, Washington DC, June 1996, is available at ftp://aleph.inesc.pt/pub/lba/icnn96.ps An Objective Function for Independence Goncalo Marques and Luis B. Almeida IST and INESC, Lisbon, Portugal Abstract The problem of separating a linear or nonlinear mixture of independent sources has been the focus of many studies in recent years. It is well known that the classical principal component analysis method, which is based on second order statistics, performs poorly even in the linear case, if the sources do not have Gaussian distributions.
Based on this fact, several algorithms take into account higher than second order statistics in their approach to the problem. Other algorithms use the Kullback-Leibler divergence to find a transformation that can separate the independent signals. Nevertheless, the great majority of these algorithms only take into account a finite number of statistics, usually up to the fourth order, or use some kind of smoothed approximations. In this paper we present a new class of objective functions for source separation. The objective functions use statistics of all orders simultaneously, and have the advantage of being continuous, differentiable functions that can be computed directly from the training data. A derivation of the class of functions for two dimensional data, some numerical examples illustrating its performance, and some implementation considerations are described. In this electronic version a few typos of the printed version have been corrected. The paper is reproduced with permission from the IEEE. Please read the copyright notice at the beginning of the document. -- Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From jorn at let.rug.nl Tue May 28 03:37:34 1996 From: jorn at let.rug.nl (Jorn Veenstra) Date: Tue, 28 May 1996 09:37:34 +0200 Subject: postdoc position in Neurolinguistics Message-ID: <199605280737.AA028329061@freya.let.rug.nl> POSTDOCTORAL POSITION AVAILABLE The Netherlands Organization for Scientific Research (NWO) will make a THREE YEAR POSTDOCTORAL POSITION AVAILABLE within the project "Neurological Basis of Language" (NWO project 030-30-431) to a candidate whose Ph.D. research was directed toward computational modelling of cognitive functions (preferably language) based on psychological and neuroanatomical data. The role of the postdoc would be to develop computational models of language processing which employ physiologically plausible assumptions and are compatible with or predict the results of psycholinguistic experimental evidence on the time course and structure of language processing. The goal of the project as a whole is to investigate localization of language functions using positron emission tomography and the time course of language processing using event-related potentials, and to develop a neurologically plausible model of language. Contact Dr. Laurie A. Stowe, Dept. of Linguistics, Faculty of Letters, University of Groningen, Postbus 716, 9700 AS Groningen, Netherlands, 31 50 636627 or stowe at let.rug.nl for further information. Applications should be accompanied by a curriculum vitae, two references (direct from referee), and documentation of research experience in the form of published and in-progress articles. From bap at valaga.salk.edu Wed May 29 03:55:38 1996 From: bap at valaga.salk.edu (Barak Pearlmutter) Date: Wed, 29 May 1996 00:55:38 -0700 Subject: Paper Available --- Blind Source Separation Message-ID: <199605290755.AAA04235@valaga.salk.edu> The following paper (which will appear at the 1996 International Conference on Neural Information Processing this Fall) is available as http://www.cnl.salk.edu/~bap/papers/iconip-96-cica.ps.gz A Context-Sensitive Generalization of ICA Barak A. Pearlmutter and Lucas C.
Parra Abstract Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing $n$ unknown independent sources through an unknown $n \times n$ mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where standard ICA cannot, including sources with low kurtosis, colored gaussian sources, and sources which have gaussian histograms. Since ICA is a special case of cICA, the MLE derivation provides as a corollary a rigorous derivation of classic ICA. From lba at inesc.pt Wed May 29 08:43:14 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Wed, 29 May 1996 13:43:14 +0100 Subject: paper available - ftp instructions Message-ID: <31AC4662.5E652F78@inesc.pt> Since some people don't know how to translate the address ftp://aleph.inesc.pt/pub/lba/icnn96.ps that was given for the paper An Objective Function for Independence Goncalo Marques and Luis B. Almeida IST and INESC, Lisbon, Portugal into anonymous ftp commands, I'm giving those commands below: >ftp aleph.inesc.pt Connected to aleph.inesc.pt. 220 aleph FTP server (SunOS 4.1) ready. Name (aleph.inesc.pt:lba): [type 'anonymous' here] 331 Guest login ok, send ident as password. Password: [type your e-mail address here] 230 Guest login ok, access restrictions apply. ftp> cd /pub/lba ftp> get icnn96.ps ftp> bye If the address 'aleph.inesc.pt' cannot be resolved, you can also use >ftp 146.193.2.131 instead of >ftp aleph.inesc.pt Happy downloading! Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From zhuh at helios.aston.ac.uk Wed May 29 15:33:48 1996 From: zhuh at helios.aston.ac.uk (zhuh) Date: Wed, 29 May 1996 19:33:48 +0000 Subject: Paper: efficient online training of curved models using ancillary statistics Message-ID: <1840.9605291833@sun.aston.ac.uk> The following paper is accepted for 1996 International Conference on Neural Information Processing, Hong Kong, Sept. 1996. ftp://cs.aston.ac.uk/neural/zhuh/ac1.ps.Z Using Ancillary Statistics in On-Line Learning Algorithms Huaiyu Zhu and Richard Rohwer Neural Computing Research Group Dept of Comp. Sci. Appl. Math. Aston Univ., Birmingham B4 7ET, UK Abstract Neural networks are usually curved statistical models. They do not have finite dimensional sufficient statistics, so on-line learning on the model itself inevitably loses information. In this paper we propose a new scheme for training curved models, inspired by the ideas of ancillary statistics and adaptive critics. 
At each point estimate, an auxiliary flat model (exponential family) is built to locally accommodate both the usual statistic (tangent to the model) and an ancillary statistic (normal to the model). The auxiliary model plays a role in determining credit assignment analogous to that played by an adaptive critic in solving temporal problems. The method is illustrated with the Cauchy model and the algorithm is proved to be asymptotically efficient. -- Huaiyu Zhu, PhD Neural Computing Research Group, Dept of Computer Science and Applied Mathematics, Aston University, Birmingham B4 7ET, UK email: H.Zhu at aston.ac.uk http://neural-server.aston.ac.uk/People/zhuh ftp://cs.aston.ac.uk/neural/zhuh tel: +44 121 359 3611 x 5427 fax: +44 121 333 6215 From pierre at mbfys.kun.nl Thu May 30 04:44:12 1996 From: pierre at mbfys.kun.nl (Piërre van de Laar) Date: Thu, 30 May 1996 10:44:12 +0200 Subject: sensitivity analysis and relevance Message-ID: <31AD5FDC.794BDF32@mbfys.kun.nl> Dear Connectionists, I am interested in methods which perform sensitivity analysis and/or relevance determination of input fields, and especially methods which use neural networks. Although I have already found a number of references (see end of mail), I expect that this list is not complete. Any further references would be highly appreciated. As usual a summary of all replies will be posted in about a month. Thanks in advance, Piërre van de Laar Department of Medical Physics and Biophysics University of Nijmegen, The Netherlands mailto:pierre at mbfys.kun.nl http://www.mbfys.kun.nl/~pierre/ Aldrich, C. and van Deventer, J.S.J., Modelling of Induced Aeration in Turbine Aerators by Use of Radial Basis Function Neural Networks, The Canadian Journal of Chemical Engineering 73(6):808-816, 1995. Boritz, J. Efrim and Kennedy, Duane B., Effectiveness of Neural Network Types for Prediction of Business Failure, Expert Systems With Applications, 9(4):503-512, 1995. Hammitt, A.M. and Bartlett, E.B., Determining Functional Relationships from trained neural networks, Mathematical and Computer Modelling 22(3):83-103, 1995. Korthals, R.L. and Hahn, G.L. and Nienaber, J.A., Evaluation of Neural Networks as a tool for management of swine environments, Transactions of the American Society of Agricultural Engineers 37(4):1295-1299, 1994. Lacroix, R. and Wade, K.M. and Kok, R. and Hayes, J.F., Prediction of cow performance with a connectionist model, Transactions of the American Society of Agricultural Engineers 38(5):1573-1579, 1995. MacKay, David J.C., Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks, Network: Computation in Neural Systems 6(3):469-505, 1995. Neal, Radford M., Bayesian Learning for neural networks, Dept. of Computer Science, University of Toronto, 1994. Oh, Sang-Hoon and Lee, Youngjik, Sensitivity Analysis of Single Hidden-Layer Neural Networks with Threshold Functions, IEEE Transactions on Neural Networks 6(4):1005-1007, 1995. Naimimohasses, R. and Barnett, D.M. and Green, D.A. and Smith, P.R., Sensor optimization using neural network sensitivity measures, Measurement science & technology 6(9):1291-1300, 1995. BrainMaker Professional: User's Guide and Reference Manual, 4th edition, California Scientific Software, Nevada City, Chapter 10, 48-59, 1993.
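For readers who have not met the term before, "sensitivity analysis" of a trained network most often means ranking the inputs by how strongly the output reacts to them. Below is a minimal sketch of the gradient-based variant, assuming a small two-layer tanh network; the network, weights and data are purely illustrative and are not taken from any of the papers listed above.

import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer tanh network; returns the scalar output and the hidden activations."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def input_sensitivities(X, W1, b1, W2, b2):
    """Mean absolute gradient of the output w.r.t. each input, averaged over the data X."""
    sens = np.zeros(X.shape[1])
    for x in X:
        _, h = mlp_forward(x, W1, b1, W2, b2)
        # d(output)/dx = W2 diag(1 - h^2) W1 for a tanh hidden layer
        grad = (W2 * (1.0 - h ** 2)) @ W1
        sens += np.abs(grad).ravel()
    return sens / len(X)

# Toy usage: 5 inputs, 3 hidden units, random weights standing in for a trained net.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
X = rng.normal(size=(100, 5))
print(input_sensitivities(X, W1, b1, W2, b2))  # larger value = input the output depends on more

Several of the references above (e.g. the Oh & Lee and Naimimohasses et al. papers) study measures of this general kind, while the MacKay and Neal references take a different, Bayesian route via priors on the weights.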
From joe at sunia.u-strasbg.fr Thu May 30 07:44:45 1996 From: joe at sunia.u-strasbg.fr (Prof invite) Date: Thu, 30 May 1996 13:44:45 +0200 Subject: Papers on Rule-Extraction from trained ANN Message-ID: <199605301144.NAA19639@sunia.diane> The following papers are available via anonymous ftp: An Evaluation And Comparison Of Techniques For Extracting And Refining Rules From Artificial Neural Networks Robert Andrews* ** Russell Cable* Joachim Diederich* Shlomo Geva* Mostefa Golea* Ross Hayward* Chris Ho-Stewart* Alan B. Tickle* ** Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia QUTNRC-96-01-04.ps.Z Abstract It is becoming increasingly apparent that without some form of explanation capability, the full potential of trained Artificial Neural Networks (ANNs) may not be realised. The primary purpose of this report is to survey techniques which have been developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy. An additional facet of the report is a comparative evaluation of the performance of a set of techniques developed at the Neurocomputing Research Centre at QUT to extract knowledge from trained ANNs as a set of symbolic rules. Note: This is an extended version of: Andrews, R.; Diederich, J.; Tickle, A.B. A Survey and Critique of Techniques for Extracting Rules from Trained Artificial Neural Networks. KNOWLEDGE-BASED SYSTEMS 8 (1995) 6, 373-389. This version includes first empirical results and is distributed with permission of the editor and publisher. ******************************************************************************* DEDEC: Decision Detection by Rule Extraction from Neural Networks Alan B. Tickle* ** Marian Orlowski* ** Joachim Diederich* Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia QUTNRC-95-01-03.ps.Z Abstract A clearly recognised impediment to the realisation of the full potential of Artificial Neural Networks is an inherent inability to explain in a comprehensible form (e.g. as a set of symbolic rules), the process by which an ANN arrived at a particular conclusion/decision/result. While a variety of techniques have already appeared to address this limitation, a substantial number of the more successful approaches are dependent on specialised ANN architectures. The DEDEC technique is a generic approach to rule extraction from trained ANNs which is designed to be applicable across a broad range of ANN architectures. The basic motif adopted is to utilise the generalisation capability of a trained ANN to generate a set of examples from the problem domain which may include examples beyond the initial training set.
These examples are then presented to a symbolic induction algorithm and the requisite rule set extracted. However, an important innovation over other rule-extraction techniques of this ('pedagogical') type is that the DEDEC technique utilises information extracted from an analysis of the weight vectors of the trained ANN to rank the input variables (rule antecedents) in terms of their relative importance. This additional information is used to focus the search of the solution space on those examples from the problem domain which are deemed to be of most significance. The paper gives a detailed description of one possible implementation of the DEDEC technique and discusses results obtained on both a set of structured sample problems and 'real world' problems. ******************************************************************************* DEDEC: A Methodology For Extracting Rules From Trained Artificial Neural Networks Alan B. Tickle* ** Marian Orlowski Joachim Diederich* Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia QUTNRC-96-01-05.ps.Z Abstract A recognised impediment to the more widespread utilisation of Artificial Neural Networks (ANNs) is the absence of a capability to explain, in a human comprehensible form, either the process by which a specific decision/result has been reached or, in general, the totality of knowledge embedded within the ANN. Currently, one of the most promising approaches to redressing this situation is to extract the knowledge embedded in the trained ANN as a set of symbolic rules. In this paper we describe the DEDEC methodology for rule-extraction which is applicable to a broad class of multilayer, feedforward ANNs trained by the 'back-propagation' method. Central to the DEDEC approach is the identification of the functional dependencies between the ANN inputs (i.e. the attribute values of the data) and the ANN outputs (e.g. the classification decision). However, the key motif of the DEDEC methodology is the utilisation of information extracted from analysing the weight vectors in the trained ANN to focus the process of determining these functional dependencies. In addition, if required, DEDEC exploits the capability of a trained ANN to generalise beyond the data used in the ANN training phase. The paper illustrates one of a number of possible implementations of the DEDEC methodology, discusses results obtained on both a set of structured sample problems and a "real world" problem, and provides a comparison with other rule extraction techniques. ***************************************************************************** Artificial Intelligence Meets Artificial Insemination The Importance and Application of Symbolic Rule Extraction From Trained Artificial Neural Networks Robert Andrews* ** Joachim Diederich* Emanoil Pop* Alan B Tickle* ** Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia QUTNRC-96-01-01.ps.Z Abstract In a recent article Andrews et al. [1995] describe a schema for classifying neural network rule extraction techniques as either decompositional, eclectic, or pedagogical. Decompositional techniques require knowledge of the neural network architecture and weights. Each hidden and output unit is interpreted as a Boolean rule with the antecedents being a set of incoming links whose summed weights guarantee to exceed the unit's bias regardless of the activations of the other incoming links.
Pedagogical techniques, on the other hand, treat the underlying neural network as a 'black box', using it both to classify examples and to generate examples which a symbolic algorithm then converts to rules. Eclectic techniques combine elements of the two basic categories. In this paper we describe some reasons why rule extraction is an important area of research. We then briefly describe three rule extraction algorithms, RULEX, DEDEC & RULENEG, these being representative of each of the abovementioned groups. We test these algorithms using two classification problems; the first being a laboratory benchmarking problem while the second is drawn from real life. For each problem, each of the rule extraction techniques previously described is applied to a trained neural network and the resulting rules presented. ******************************************************************************** Rule Extraction From CASCADE-2 Networks Ross Hayward Emanoil Pop Joachim Diederich Neurocomputing Research Centre Queensland University of Technology Brisbane Q 4001 Australia QUTNRC-96-01-02.ps.Z Abstract Rule extraction from feed forward neural networks is a topic that is gaining increasing interest. Any symbolic representation of how a network arrives at a particular decision is important not only for user acceptance, but also for rule refinement and network learning. This paper describes a new method of extracting rules that predict the firing of single units within a feed forward neural network. The extraction technique is applied to networks constructed by the Cascade 2 algorithm, each of which solves a different benchmark problem. The hidden and output units within each of the networks are shown to represent distinct rules which govern the classification of patterns. Since a discrete rule set can be obtained for each of the units within the network, a logical mapping between input and output values can be achieved. ******************************************************************************** Feasibility of Incremental Learning in Biologically Plausible Networks James M. Hogan Joachim Diederich Neurocomputing Research Centre Queensland University of Technology Brisbane Q 4001 Australia QUTNRC-96-01-03.ps.Z Abstract The feasibility of incremental learning within a feed-forward network is examined under the constraint of biologically plausible connectivity. A randomly connected network (of physiologically plausible global connection probability) is considered under the assumption of a local connection probability which decays with distance between nodes. The representation of the function XOR is chosen as a test problem, and the likelihood of its recruitment is discussed with reference to the probability of occurrence of a subnetwork suitable for implementation of this function, assuming a uniform initial weight distribution.
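To make the decompositional criterion in the Andrews, Diederich, Pop & Tickle abstract above concrete: for a unit with binary inputs, a set of incoming links is a valid rule antecedent if their summed weights exceed the unit's threshold even when every remaining link contributes as unfavourably as it can. The sketch below is illustrative only - it assumes binary inputs and a simple threshold unit, and is not taken from RULEX, DEDEC or RULENEG.

from itertools import combinations

def guarantees_firing(weights, theta, antecedent):
    """True if the inputs in `antecedent` being on (=1) drive the unit above its
    threshold theta, no matter what the remaining binary inputs do."""
    worst_rest = sum(min(0.0, w) for i, w in enumerate(weights) if i not in antecedent)
    return sum(weights[i] for i in antecedent) + worst_rest > theta

def minimal_rules(weights, theta, max_size=3):
    """Enumerate small input subsets that guarantee firing (candidate rule antecedents),
    skipping any subset that already contains a smaller rule."""
    rules = []
    for k in range(1, max_size + 1):
        for subset in combinations(range(len(weights)), k):
            if guarantees_firing(weights, theta, set(subset)) and \
               not any(set(r) <= set(subset) for r in rules):
                rules.append(subset)
    return rules

# Toy unit: fires whenever inputs 0 and 2 are both on, whatever inputs 1 and 3 do.
print(minimal_rules([3.0, -1.0, 2.5, 0.5], theta=4.0))  # -> [(0, 2)]

The exhaustive enumeration above grows combinatorially with the number of inputs, which is one reason techniques such as DEDEC use information from the weight vectors to focus the search rather than testing every subset.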
****************************************************************************** These papers are available from ftp.fit.qut.edu.au; cd to /pub/NRC/tr/ps From fisher at tweed.cse.ogi.edu Thu May 30 09:50:46 1996 From: fisher at tweed.cse.ogi.edu (Therese Fisher) Date: Thu, 30 May 1996 14:50:46 +0100 Subject: New Computational Finance Program Message-ID: Computational Finance at Oregon Graduate Institute of Science & Technology (OGI) A Concentration in the MS Programs of Computer Science & Engineering (CSE) and Electrical Engineering & Applied Physics (EEAP) ---------------------------------------------------------------------------- 20000 NW Walker Road, PO Box 91000, Portland, OR 97291-1000 ---------------------------------------------------------------------------- Computational Finance at OGI is a 12-month intensive program leading to a Master of Science degree in Computer Science and Engineering (CSE track) or in Electrical Engineering & Applied Physics (EE track). The program features: * A 12-month intensive program to train scientists and engineers to do state-of-the-art quantitative or information systems work in finance. * Provide an attractive alternative to the standard 2-year MBA for technically sophisticated students. * Provide a solid foundation in finance. Cover three semesters of MBA level finance in three quarters, and go beyond that. * Provide a solid foundation in relevant techniques from CS and EE for modeling financial markets and developing investment analysis, trading, and risk management systems. * Give CS/EE graduates the necessary finance background to work as information system specialists in major financial firms. * Emphasize state-of-the-art techniques in neural networks, adaptive systems, signal processing, and data modeling. * Provide state-of-the-art computing facilities for doing course assignments using live and historical market data provided by Dow Jones Telerate. * Provide students an opportunity to do significant projects using extensive market data resources and state-of-the-art analysis packages, thereby making them more attractive to employers. * Through their course work and projects, students will develop significant expertise in using and programming important analysis packages, such as Mathematica, Matlab, SPlus, and Expo. ---------------------------------------------------------------------------- Major Components of Program: The curriculum includes 4 quarters with courses structured within the standard CSE/EEAP framework, with 5 courses in the finance specialty area, 7 or 8 core courses within the CSE or EEAP departments, and 3 electives. Students will enroll in either the CSE (CSE track) or EEAP (EE track) MS programs. ---------------------------------------------------------------------------- Admission Requirements & Contact Information ---------------------------------------------------------------------------- Admission requirements are the same as the general requirements of the institution. GRE scores are required for the 12-month concentration in Computational Finance; however, they may be waived in special circumstances. A candidate must hold a bachelor's degree in computer science, engineering, mathematics, statistics, one of the biological or physical sciences, finance, or one of the quantitative social sciences.
For more information, contact Computational Finance Betty Shannon, Academic Coordinator Computer Science and Engineering Department Oregon Graduate Institute of Science and Technology P.O. Box 91000 Portland, OR 97291-1000 E-mail: academic at cse.ogi.edu Phone: (503) 690-1255 or E-mail: CompFin at cse.ogi.edu WWW: http://www.cse.ogi.edu/CompFin/ From ATAXR at asuvm.inre.asu.edu Thu May 30 18:44:10 1996 From: ATAXR at asuvm.inre.asu.edu (Asim Roy) Date: Thu, 30 May 1996 15:44:10 -0700 (MST) Subject: Connectionist Learning - Some New Ideas/Questions Message-ID: <01I5BM4AALLU8Y8W5E@asu.edu> (This is for posting to your mailing list.) This is an attempt to respond to some thoughts on one particular aspect of our learning theory - the one that requires connectionist/neural net algorithms to make an explicit "attempt" to build the smallest possible net (generalize, that is). One school of thought says that we should not attempt to build the smallest possible net because some extra neurons in the net (and their extra connections) provide the benefits of fault tolerance and reliability. And since the brain has access to billions of neurons, it does not really need to worry about a real resource constraint - it is practically an unlimited resource. (It is a fact of life, however, that at some age we do have difficulty memorizing and remembering things and learning - we perhaps run out of space (neurons) like a storage device on a computer. Even though billions of neurons is a large number, we must be using most of it at some age. So it is indeed a finite resource and some of it appears to be reused, like we reuse space on our storage devices. For memorization, for example, it is possible that the brain selectively erases some old memories to store some new ones. So a finite capacity system is a sensible view of the brain.) Another argument in favor of not trying to generalize is that by not worrying about attempting to create the smallest possible net, the connectionist algorithms are easier to develop and less complex. I hope researchers will come forward with other arguments in favor of not attempting to create the smallest possible net or to generalize. There is one main problem with the argument that adding lots of extra neurons to a net buys reliability and fault tolerance. First, we run the severe risk of "learning nothing" if we don't attempt to generalize. With lots of neurons available to a net, we would simply overfit the net to the problem data. (Try it next time on your back prop net. Add 10 or 100 times the number of hidden nodes you need and observe the results.) That is all we would achieve. Without good generalization, we may have a fault tolerant and reliable net, but it may be "useless" for all practical purposes because it may have "learnt nothing". Generalization is the fundamental part of learning - it perhaps should be the first learning criterion for our algorithms. We can't overlook or skip that part. If an algorithm doesn't attempt to generalize, it doesn't attempt to learn. It is as simple as that. So generalization needs to be our first priority and fault tolerance comes later. First we must "learn" something, then make it fault tolerant and reliable. Here is a practical viewpoint for our algorithms. Even though neurons are almost "unlimited" and free of cost to our brain, from a practical engineering standpoint, "silicon" neurons are not so cheap.
So our algorithms definitely need to be cost conscious and try to build the smallest possible net; they cannot be wasteful in their use of expensive "silicon" neurons. Once we obtain good generalization on a problem, fault tolerance can be achieved in many other ways. It would not hurt to examine the well established theory of reliability for some neat ideas. A few backup systems might be a more cost effective way to buy reliability than throwing in lots of extra silicon in a single system which may buy us nothing (it "learns nothing"). From controlling nuclear power plants with backup computer systems to adding extra tires in our trucks and buses, the backup idea works quite well. It is possible that "backup" is also what is used in our brains. We need to find out. "Redundancy" may be in the form of backup systems. "Repair" is another good idea used in our everyday lives for not so critical systems. Is fault tolerance and reliability sometimes achieved in the brain through the process of "repair"? Patients do recover memory and other brain functions after a stroke. Is that repair work by the biological system? It is a fact that biological systems are good at repairing things (look at simple things like cuts and bruises). We perhaps need to look closer at our biological systems and facts and get real good clues to how it works. Let us not jump to conclusions so quickly. Let us argue and debate with our facts. We will do our science a good service and be able to make real progress. I would welcome more thoughts and debate on this issue. I have included all of the previous responses on this particular issue for easy reference by the readers. I have also appended our earlier note on our learning theory. Perhaps more researchers will come forward with facts and ideas and enlighten all of us on this crucial question. ******************************************** On May 16 Kevin Cherkauer wrote: "In a recent thought-provoking posting to the connectionist list, Asim Roy said: >E. Generalization in Learning: The method must be able to >generalize reasonably well so that only a small amount of network >resources is used. That is, it must try to design the smallest possible >net, although it might not be able to do so every time. This must be >an explicit part of the algorithm. This property is based on the >notion that the brain could not be wasteful of its limited resources, >so it must be trying to design the smallest possible net for every >task. I disagree with this point. According to Hertz, Krogh, and Palmer (1991, p. 2), the human brain contains about 10^11 neurons. (They also state on p. 3 that "the axon of a typical neuron makes a few thousand synapses with other neurons," so we're looking at on the order of 10^14 "connections" in the brain.) Note that a period of 100 years contains only about 3x10^9 seconds. Thus, if you lived 100 years and learned continuously at a constant rate every second of your life, your brain would be at liberty to "use up" the capacity of about 30 neurons (and 30,000 connections) per second. I would guess this is a very conservative bound, because most of us probably spend quite a bit of time where we aren't learning at such a furious rate. But even using this conservative bound, I calculate that I'm allowed to use up about 2.7x10^6 neurons (and 2.7x10^9 connections) today. I'll try not to spend them all in one place. :-) Dr. 
Roy's suggestion that the brain must try "to design the smallest possible net for every task" because "the brain could not be wasteful of its limited resources" is unlikely, in my opinion. It seems to me that the brain has rather an abundance of neurons. On the other hand, finding optimal solutions to many interesting "real-world" problems is often very hard computationally. I am not a complexity theorist, but I will hazard to suggest that a constraint on neural systems to be optimal or near-optimal in their space usage is probably both impossible to realize and, in fact, unnecessary. Wild speculation: the brain may have so many neurons precisely so that it can afford to be suboptimal in its storage usage in order to avoid computational time intractability. References Hertz, J.; Krogh, A.; & Palmer, R.G. 1991. Introduction to the Theory of Neural Computation. Redwood City, CA: Addison-Wesley." ************************************************** On May 15 Richard Kenyon wrote on the subject of generalization: " The brain probably accepts some form of redundancy (waste). I agree that the brain is one hell of an optimisation machine. Intelligence, whatever task it may be applied to, is (again imho) one long optimisation process. Generalisation arises (even emerges or is a side effect) as a result of ongoing optimisation, conglomeration, reprocessing etc etc. This is again very important I agree, but I think (I do anyway) we in the NN community are aware of this, as with much of the above. I thought that apart from point A we were doing all of this already, although to have it explicitly published is very valuable." ***************************************** On May 16 Lokendra Shastri replied to Kevin Cherkauer: "There is another way to look at the numbers. The retina provides 10^6 inputs to the brain every 200 msec! A simple n^2 algorithm to process this input would require more neurons than we have in our brain. We can understand (or at least process) a potentially unbounded number of sentences --- Here is one: "the grand canyon walked past the banana". I could have said any one of a gazillion sentences at this point and you would have probably understood it. Even if we just count the overt symbolic knowledge we carry in our heads, we can enumerate about a million items. A coding scheme that consumed 1000 neurons per item (which is not much) would soon run out of neurons. Remember that a large fraction of our neurons are already taken up by sensorimotor processes (vision itself consumes a fair fraction of the brain). For an argument on the tight constraints posed by the "limited" number of neurons vis-a-vis common sense knowledge, you may want to see: ``From simple associations to systematic reasoning'', L. Shastri and V. Ajjanagadde. In Behavioral and Brain Sciences Vol. 16, No. 3, 417--494, 1993. My home page has a URL to a postscript version. There was also a nice paper by Tsotsos in Behavioral and Brain Sciences on this topic from the perspective of Visual Processing. Also you might want to see the Feldman and Ballard 1982 paper in Cognitive Science." *********************************************** On May 17 Steven Small replied to Kevin Cherkauer: "I agree with this general idea, although I'm not sure that "computational time intractability" is necessarily the principal reason.
There are a lot of good reasons for redundancy, overlap, and space "suboptimality", not the least of which is the marvellous ability to recover that the brain manifests after both small injuries and larger ones that give pause even to experienced neurologists." ************************************************* On May 17 Jonathan Stein replied to Steven Small and Kevin Cherkauer: "One needn't draw upon injuries to prove the point. One loses about 100,000 cortical neurons a day (about a percent of the original number every three years) under normal conditions. This loss is apparently not significant for brain function. This has often been called the strongest argument for distributed processing in the brain. Compare this ability with the fact that a single conductor disconnection causes total system failure with high probability in conventional computers. Although certainly acknowledged by the pioneers of artificial neural network techniques, very few networks designed and trained by present techniques are anywhere near that robust. Studies carried out on the Hopfield model of associative memory DO show graceful degradation of memory capacity with synapse dilution under certain conditions (see e.g. D.J. Amit's book "Attractor Neural Networks"). Synapse pruning has been applied to trained feedforward networks (e.g. LeCun's "Optimal Brain Damage") but requires retraining of the network." ****************************************** On May 18 Raj Rao replied to Kevin Cherkauer and Steven Small: " Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers." ******************************************** On May 19 Richard Long wrote: "There may be another reason for the brain to construct networks that are 'minimal', having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this." ************************************************** On May 20 Juergen Schmidhuber replies to Richard Long: "Apparently, Kolmogorov was the first to show the impossibility of finding the minimal algorithm in the general case (but Solomonoff also mentions it in his early work). The reason is the halting problem, of course - you don't know the runtime of the minimal algorithm. For all practical applications, runtime has to be taken into account. Interestingly, there is an ``optimal'' way of doing this, namely Levin's universal search algorithm, which tests solution candidates in order of their Levin complexities: L. A. Levin.
Universal sequential search problems, Problems of Information Transmission 9(3):265-266, 1973. For finding Occam's razor neural networks with minimal Levin complexity, see J. Schmidhuber: Discovering solutions with low Kolmogorov complexity and high generalization capability. In A. Prieditis and S. Russell, editors, Machine Learning: Proceedings of the 12th International Conference, 488--496. Morgan Kaufmann Publishers, San Francisco, CA, 1995. For Occam's razor solutions of non-Markovian reinforcement learning tasks, see M. Wiering and J. Schmidhuber: Solving POMDPs using Levin search and EIRA. In Machine Learning: Proceedings of the 13th International Conference. Morgan Kaufmann Publishers, San Francisco, CA, 1996, to appear." ********************************************** On May 20 Sydney Lamb replied to Jonathan Stein and others: " There seems to be some differing information coming from different sources. The way I heard it, the typical person has lost only about 3% of the original total of cortical neurons after about 70 or 80 years. As for the argument about distributed processing, two comments: (1) there are different kinds of distributive processing; one of them also uses strict localization of points of convergence for distributed subnetworks of information (cf. A. Damasio 1989 --- several papers that year). (2) If the brain is like other biological systems, the neurons being lost are probably mostly the ones not being used --- ones that have been remaining latent and available to assume some function, but never called upon. Hence what you get with old age is not so much loss of information as loss of ability to learn new things --- varying in amount, of course, from one individual to the next." ***************************************** On May 20 Mark Johnson replies to Raj Rao: "From my reading of the recent literature, massive postnatal cell loss in the human cortex is a myth. There is postnatal cortical cell death in rodents, but in primates (including humans) there is only (i) a decreased density of cell packing, and (ii) massive (up to 50%) synapse loss. (The decreased density of cell packing was apparently misinterpreted as cell loss in the past). Of course, there are pathological cases, such as Alzheimer's, in which there is cell loss. I have written a review of human postnatal brain development which I can send out on request." ************************************************** *************************************************** APPENDIX We have recently published a set of principles for learning in neural networks/connectionist models that is different from classical connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE Transactions on Neural Networks, to appear; see references below). Below is a brief summary of the new learning theory and why we think classical connectionist learning, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of training examples for learning), is not brain-like at all. Since vigorous and open debate is very healthy for a scientific field, we invite comments for and against our ideas from all sides. "A New Theory for Learning in Connectionist Models" We believe that a good rigorous theory for artificial neural networks/connectionist models should include learning methods that perform the following tasks or adhere to the following criteria: A.
Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem, since, in general, it is a task performed by the brain. A pre-designed net should not be provided to the method as part of its external input, since it never is an external input to the brain. From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - a system that is expected to learn "on its own" without any external design assistance. B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties. Some people might argue that ordinary brains, and particularly those with learning disabilities, do exhibit such problems and that these learning requirements are the attributes only of a "super" brain. The goal of neuroengineers and neuroscientists is to design and build learning systems that are robust, reliable and powerful. They have no interest in creating weak and problematic learning devices that need constant attention and intervention. C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do. For example, one which learns from only 10 examples learns faster than one which requires a 100 or a 1000 examples. We have shown that on-line learning (see references below), when not allowed to store training examples in memory, can be extremely slow in learning - that is, would require many more examples to learn a given task compared to methods that use memory to remember training examples. It is not desirable that a neural network/connectionist learning system be similar in characteristics to learners characterized by such sayings as "Told him a million times and he still doesn't understand." On-line learning systems must learn rapidly from only a few examples. D. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and Papert[1988]). It must be able to both design and train an appropriate net in polynomial time. That is, given P examples, the learning time (i.e. both design and training time) should be a polynomial function of P. This, again, is a critical computational property from a neuroengineering and neuroscience point of view. This property has its origins in the belief that biological systems (insects, birds for example) could not be solving NP-hard problems, especially when efficient, polynomial time learning methods can conceivably be designed and developed. E. Generalization in Learning: The method must be able to generalize reasonably well so that only a small amount of network resources is used. That is, it must try to design the smallest possible net, although it might not be able to do so every time. This must be an explicit part of the algorithm. This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task. General Comments This theory defines algorithmic characteristics that are obviously much more brain-like than those of classical connectionist theory, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of actual training examples for learning). 
Judging by the above characteristics, classical connectionist learning is not very powerful or robust. First of all, it does not even address the issue of network design, a task that should be central to any neural network/connectionist learning theory. It is also plagued by efficiency (lack of polynomial time complexity, need for an excessive number of teaching examples) and robustness problems (local minima, oscillation, catastrophic forgetting, lost memories), problems that are partly acquired from its attempt to learn without using memory. Classical connectionist learning, therefore, is not very brain-like at all. As far as I know, there is no biological evidence for any of the premises of classical connectionist learning. Without having to reach into biology, simple common sense arguments can show that the ideas of local learning, memoryless learning and predefined nets are impractical even for the brain! For example, the idea of local learning requires a predefined network. Classical connectionist learning forgot to ask a very fundamental question - who designs the net for the brain? The answer is very simple: Who else, but the brain itself! So, who should construct the net for a neural net algorithm? The answer again is very simple: Who else, but the algorithm itself! (By the way, this is not a criticism of constructive algorithms that do design nets.) Under classical connectionist learning, a net has to be constructed (by someone, somehow - but not by the algorithm!) prior to having seen a single training example! I cannot imagine any system, biological or otherwise, being able to construct a net with zero information about the problem to be solved and with no knowledge of the complexity of the problem. (Again, this is not a criticism of constructive algorithms.) A good test for a so-called "brain-like" algorithm is to imagine it actually being part of a human brain. Then examine the learning phenomenon of the algorithm and compare it with that of a human. For example, pose the following question: If an algorithm like back propagation is "planted" in the brain, how will it behave? Will it be similar to human behavior in every way? Look at the following simple "model/algorithm" phenomenon when the back-propagation algorithm is "fitted" to a human brain. You give it a few learning examples for a simple problem and after a while this "back prop fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again." And you ask: "Which examples should I go over again?" And this "back prop fitted" brain replies: "You need to go over all of them. I don't remember anything you told me." So you go over the teaching examples again. And let's say it gets stuck in a local minimum again and, as usual, does not remember any of the past examples. So you provide the teaching examples again and this process is repeated a few times until it learns properly. The obvious questions are as follows: Is "not remembering" any of the learning examples a brain-like phenomenon? Are the interactions with this so-called "brain-like" algorithm similar to what one would actually encounter with a human in a similar situation? If the interactions are not similar, then the algorithm is not brain-like. A so-called brain-like algorithm's interactions with the external world/teacher cannot be different from that of the human.
In the context of this example, it should be noted that storing/remembering relevant facts and examples is very much a natural part of the human learning process. Without the ability to store and recall facts/information and discuss, compare and argue about them, our ability to learn would be in serious jeopardy. Information storage facilitates mental comparison of facts and information and is an integral part of rapid and efficient learning. It is not biologically justified when "brain-like" algorithms disallow usage of memory to store relevant information. Another typical phenomenon of classical connectionist learning is the "external tweaking" of algorithms. How many times do we "externally tweak" the brain (e.g. adjust the net, try a different parameter setting) for it to learn? Interactions with a brain-like algorithm have to be brain-like indeed in all respects. The learning scheme postulated above does not specify how learning is to take place - that is, whether memory is to be used or not to store training examples for learning, or whether learning is to be through local learning at each node in the net or through some global mechanism. It merely defines broad computational characteristics and tasks (i.e. fundamental learning principles) that are brain-like and that all neural network/connectionist algorithms should follow. But there is complete freedom otherwise in designing the algorithms themselves. We have shown that robust, reliable learning algorithms can indeed be developed that satisfy these learning principles (see references below). Many constructive algorithms satisfy many of the learning principles defined above. They can, perhaps, be modified to satisfy all of the learning principles. The learning theory above defines computational and learning characteristics that have always been desired by the neural network/connectionist field. It is difficult to argue that these characteristics are not "desirable," especially for self-learning, self-contained systems. For neuroscientists and neuroengineers, it should open the door to development of brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. It essentially tries to redefine the nature of algorithms considered to be brain-like. And it defines the foundations for developing truly self-learning systems - ones that wouldn't require constant intervention and tweaking by external agents (human experts) for them to learn. It is perhaps time to reexamine the foundations of the neural network/connectionist field. This mailing list/newsletter provides an excellent opportunity for participation by all concerned throughout the world. I am looking forward to a lively debate on these matters. That is how a scientific field makes real progress. Asim Roy Arizona State University Tempe, Arizona 85287-3606, USA Email: ataxr at asuvm.inre.asu.edu References 1. Roy, A., Govil, S. & Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear. 2. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202. 3. Roy, A., Kim, L.S. & Mukhopadhyay, S. 1993. A Polynomial Time Algorithm for the Construction and Training of a Class of Multilayer Perceptrons. Neural Networks, Vol. 6, No. 4, pp. 535-545. 4. Mukhopadhyay, S., Roy, A., Kim, L.S. & Govil, S. 1993.
A Polynomial Time Algorithm for Generating Neural Networks for Pattern Classification - its Stability Properties and Some Test Results. Neural Computation, Vol. 5, No. 2, pp. 225-238.
From jbower at bbb.caltech.edu Thu May 30 14:13:32 1996 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Thu, 30 May 1996 10:13:32 -0800 Subject: CNS*96 (Computational Neuroscience Meeting) Message-ID: Call for Registration CNS*96 Cambridge, Massachusetts July 14-17 1996 CNS*96: Registration is now open for this year's Computational Neuroscience meeting (CNS*96). This is the fifth in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. The meeting will take place at the Cambridge Center Marriott Hotel and includes plenary, contributed, and poster sessions. In addition, two half days will be devoted to informal workshops on a wide range of subjects. The first session starts at 9 am, Sunday July 14th and the last session ends at 5 pm on Wednesday, July 17th. Day care will be available for children. Overall Agenda This year's meeting is anticipated to be the best meeting yet in this series. Submitted papers increased by more than 80% this year, with representation from many if not most of the major institutions involved in computational neuroscience. All papers submitted to the meeting were peer reviewed, resulting in 230 papers to be presented in either oral or poster form. These papers represent contributions by both experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. The agenda is well represented by experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. Full information on the agenda is available on the meeting's web page (http://www.bbb.caltech.edu/cns96/cns96.html). Invited Speakers Invited speakers for this year's meeting include: Eve Marder (Brandeis University), Miguel Nicolelis (Duke University Medical Center), Joseph J. Atick (Rockefeller University), Ron Calabrese (Emory University), John S. Kauer (Tufts Medical School), Ken Nakamura (Harvard University), Howard Eichenbaum (State University of New York). Poster Presentations More than 200 poster presentations on a wide variety of topics related to computational neuroscience will be presented at this year's meeting. Oral presentations Jeffrey B. Colombe (University of Chicago) Philip S. Ulinski Functional Organization of Cortical Microcircuits: II. Anatomical Organization of Feedforward and Feedback Pathways D. C. Somers (MIT) Emanuel V. Todorov, Athanassios G. Siapas, and Mriganka Sur A Local Circuit Integration Approach to Understanding Visual Cortical Receptive Fields Emilio Salinas (Brandeis University) L.F. Abbott Multiplicative Cortical Responses and Input Selection Based on Recurrent Synaptic Connections Leslie C. Osborne (UC Berkeley) John P. Miller The Filtering of Sensory Information by a Mechanoreceptive Array James A. Mazer (MIT) The Integration of Parallel Processing Streams in the Sound Localization System of the Barn Owl Wulfram Gerstner (Institut für Theoretische Physik) Richard Kempter, J.
Leo van Hemmen and Hermann Wagner A Developmental Learning Rule for Coincidence Tuning in the Barn Owl Auditory System Allan Gottschalk (University of Pennsylvania Hospital) Information Based Limits on Synaptic Growth in Hebbian Models S. P. Strong (NEC Research Institute) Ronald Koberle, Rob R. de Ruyter van Steveninck, and William Bialek Entropy and Information in Neural Spike Trains Hans Liljenström (Royal Institute of Technology) Peter Arhem Investigating Amplifying and Controlling Mechanisms for Random Events in Neural Systems David Terman (Ohio State University) Amit Bose, and Nancy Kopell Functional Reorganization in Thalamocortical Networks: Transition Between Spindling and Delta Sleep Rhythms Angel Alonso (McGill University) Xiao-Jing Wang, Michael M. Guevara, and Brian Craft A Comparative Model Study of Neuronal Oscillations in Septal GABAergic Cells and Entorhinal Cortex Stellate Cells: Contributors to the Theta and Gamma Rhythms Gene Wallenstein (Harvard University) Michael E. Hasselmo Bursting and Oscillations in a Biophysical Model of Hippocampal Region CA3: Implications for Associative Memory and Epileptiform Activity Mayank R. Mehta (University of Arizona) Bruce L. McNaughton Rapid Changes in Hippocampal Population Code During Behavior: A Case for Hebbian Learning in Vivo Karl Kilborn (University of California, Irvine) Don Kubota, and Richard Granger Parameters of LTP Induction Modulate Network Categorization Behavior Peter Dayan (MIT) Satinder Pal Singh Long Term Potentiation, Navigation, & Dynamic Programming Chantal E. Stern (Harvard Medical School) Michael E. Hasselmo Functional Magnetic Resonance Imaging and Computational Modeling: An Integrated Study of Hippocampal Function Rajesh P. N. Rao (University of Rochester) Dana H. Ballard Cortico-Cortical Dynamics and Learning During Visual Recognition: A Computational Model R.Y. Reis (AT&T Bell Laboratories) Daniel D. Lee, H.S. Seung, B.I. Shraiman, and D.W. Tank Nonlinear Network Models of the Oculomotor Integrator Yair Weiss (MIT) Edward H. Adelson Adaptive Robust Windows: A Model for the Selective Integration of Motion Signals in Human Vision Emanuel V. Todorov (MIT) Athanassios G. Siapas, David C. Somers, and Sacha B. Nelson Modeling Visual Cortical Contrast Adaptation Effects Dieter Jaeger (Caltech) James M. Bower Dual in Vitro Whole Cell Recordings from Cerebellar Purkinje Cells: Artificial Synaptic Input Using Dynamic Current Clamping Xiao-Jing Wang (Brandeis University) Calcium Control of Time-Dependent Input-Output Computation in Cortical Pyramidal Neurons Alexander Protopapas (Caltech) James M. Bower Piriform Pyramidal Cell Response to Physiologically Plausible Spatio-Temporal Patterns of Synaptic Input Ole Jensen (Brandeis University) Marco A. P. Idiart and John E. Lisman A Model for Physiologically Realistic Synaptic Encoding and Retrieval of Sequence Information S. B. Nelson (Brandeis University) J.A. Varela, K. Sen, and L.F. Abbott Synaptic Decoding of Visual Cortical EPSCs Reveals a Potential Mechanism for Contrast Adaptation Nicolas G. Hatsopoulos (Brown University) Jerome N. Sanes and John P. Donoghue Dynamic Correlations in Unit Firing of Motor Cortical Neurons Related to Movement Preparation and Action Adam N. Elga (Princeton University), A. David Redish, and David S.
Touretzky A Model of the Rodent Head Direction System Dianne Pawluk (Harvard University) Robert Howe A Holistic Model of Human Touch ************************************************************************ REGISTRATION INFORMATION FOR THE FIFTH ANNUAL COMPUTATION AND COMPUTATIONAL NEUROSCIENCE MEETING CNS*96 JULY 14 - JULY 17, 1996 BOSTON, MASSACHUSETTS ************************************************************************ LOCATION: The meeting will take place at the Boston Marriott in Cambridge, Massachusetts. MEETING ACCOMMODATIONS: Accommodations for the meeting have been arranged at the Boston Marriott. We have reserved a block of rooms at the special rate for all attendees of $126 per night single or double occupancy in the conference hotel (that is, 2 people sharing a room would split the $126!). A fixed number of rooms have been reserved for students at the rate of $99 per night single or double occupancy (yes, that means $50 a night per student!). These student room rates are on a first-come-first-served basis, so we recommend acting quickly to reserve these slots. Also, for some student registrants housing will be available at Harvard University. Thirty single rooms are available on a first-come-first-served basis. Please look at your orange colored sheets for more information. Registering for the meeting WILL NOT result in an automatic room reservation. Instead you must make your own reservations by returning the enclosed registration sheet to the hotel, faxing, or by contacting: Boston Marriott Cambridge ATTENTION: Reservations Dept. Two Cambridge Center Cambridge, Massachusetts 02142 (617)494-6600 Toll Free: (800)228-9290 Fax No. (617)494-0036 NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 12, 1996. When making reservations by phone, make sure to indicate that you are registering for the CNS*96 meeting. Students will be asked to verify their status on check-in with a student ID or other documentation. MEETING REGISTRATION FEES: Registration received on or before June 12, 1996: Student: $ 95 (One Banquet Ticket Included) Regular: $ 225 (One Banquet Ticket Included) Meeting registration after June 12, 1996: Student: $ 125 (One Banquet Ticket Included) Regular: $ 250 (One Banquet Ticket Included) BANQUET: Registration for the meeting includes a single ticket to the annual CNS Banquet, this year to be held within the Museum of Science on Tuesday evening, July 16th. Additional Banquet tickets can be purchased for $35 per person. ---------------------------------------------------------------------------- CNS*96 REGISTRATION FORM Last Name: First Name: Title: Student___ Graduate Student___ Post Doc___ Professor___ Committee Member___ Other___ Organization: Address: City: State: Zip: Country: Telephone: Email Address: REGISTRATION FEES: Technical Program -- July 14 - July 17, 1996 Regular $225 ($250 after June 12th) - One Banquet Ticket Included Student $ 95 ($125 after June 12th) - One Banquet Ticket Included Banquet $ 35 (Additional Banquet Tickets at $35.00 per Ticket) - July 16, 1996 Total Payment: $ Please Indicate Method of Payment: Check or Money Order * Payable in U. S. Dollars to CNS*96 - Caltech * Please make sure to indicate CNS*96 and YOUR name on all money transfers.
Charge my card: Visa Mastercard American Express Number: Expiration Date: Name of Cardholder: Signature as appears on card (for mailed-in applications): Date: ADDITIONAL QUESTIONS: Previously Attended: CNS*92___ CNS*93___ CNS*94___ CNS*95___ Did you submit an abstract and summary? ( ) Yes ( ) No Title: Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note: Some grants to cover partial travel expenses may become available. Do you wish further information? ( ) Yes ( ) No (Please Note: Travel funds will be available for students and postdoctoral fellows presenting papers at the meeting) *******PLEASE FAX OR MAIL REGISTRATION FORM TO: Caltech, Division of Biology 216-76, Pasadena, CA 91125 Attn: Judy Macias Fax Number: (818) 795-2088 ADDITIONAL INFORMATION can be obtained by: Using our on-line WWW information and registration server at the URL: http://www.bbb.caltech.edu/cns96/cns96.html ftp-ing to our ftp site: yourhost% ftp ftp.bbb.caltech.edu Name: ftp Password: yourname at yourhost.yoursite.yourdomain ftp> cd pub/cns96 ftp> ls ftp> get filename Sending Email to: cns96 at smaug.bbb.caltech.edu *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 795-2088 FAX NCSA Mosaic addresses for: laboratory http://www.bbb.caltech.edu/bowerlab GENESIS: http://www.bbb.caltech.edu/GENESIS science education reform http://www.caltech.edu/~capsi
From leo at stat.Berkeley.EDU Thu May 30 11:33:46 1996 From: leo at stat.Berkeley.EDU (Leo Breiman) Date: Thu, 30 May 1996 08:33:46 -0700 Subject: paper available: Bias, Variance and Arcing Classifiers Message-ID: A non-text attachment was scrubbed... Name: not available Type: multipart/mixed Size: 1382 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/cc6f4c35/attachment.bin
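Referring back to the FTP instructions in the CNS*96 announcement above: readers who prefer to script the retrieval can use something like the following minimal sketch with Python's standard ftplib. The host ftp.bbb.caltech.edu and the pub/cns96 directory are taken from the announcement; the password string and the choice of which file to fetch are placeholders.

from ftplib import FTP

HOST = "ftp.bbb.caltech.edu"      # from the CNS*96 announcement
DIRECTORY = "pub/cns96"           # from the CNS*96 announcement
PASSWORD = "yourname@yourhost"    # placeholder; anonymous-FTP convention

ftp = FTP(HOST)
ftp.login(user="ftp", passwd=PASSWORD)   # log in as user "ftp"
ftp.cwd(DIRECTORY)
filenames = ftp.nlst()                   # the equivalent of "ftp> ls"
print(filenames)
if filenames:
    name = filenames[0]                  # pick whichever file is wanted
    with open(name, "wb") as fh:
        ftp.retrbinary("RETR " + name, fh.write)
ftp.quit()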
"paper available "Solving Towers of Hanoi with ART-4" Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu. ^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. 
From lba at inesc.pt Thu May 2 04:56:02 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Thu, 02 May 1996 09:56:02 +0100 Subject: deadline extension - Sintra spatiotemporal models workshop Message-ID: <318878A2.398A68D@inesc.pt> Due to requests from several prospective authors, the deadline for submission of papers to the Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems has been extended. The new deadline is May 10. Papers received after this date will NOT be opened. The call for papers and the instructions for authors can be obtained from the web: http://aleph.inesc.pt/smbas/ or http://www.cnel.ufl.edu/workshop.html They can also be requested by sending e-mail to luis.almeida at inesc.pt -- Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor ***
From ludwig at ibm18.uni-paderborn.de Thu May 2 07:05:07 1996 From: ludwig at ibm18.uni-paderborn.de (Lars Alex. Ludwig) Date: Thu, 2 May 1996 12:05:07 +0100 (DFT) Subject: Call for Papers: Fuzzy-Neuro Systems '97 Message-ID: <9605021005.AA11495@ibm18.uni-paderborn.de> A non-text attachment was scrubbed... Name: not available Type: text Size: 7822 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/a2b7963e/attachment-0001.ksh
From KOKINOV at BGEARN.BITNET Thu May 2 13:15:30 1996 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Thu, 02 May 96 13:15:30 BG Subject: CogSci96 in Sofia Message-ID: 3rd International Summer School in Cognitive Science Sofia, July 21 - August 3, 1996 First Announcement and Call for Papers The Summer School features introductory and advanced courses in Cognitive Science, participant symposia, panel discussions, student sessions, and intensive informal discussions. Participants will include university teachers and researchers, graduate and senior undergraduate students.
International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max Planck Institute for Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Eran ZAIDEL (University of California at Los Angeles, USA) Courses Two Sciences of Mind: Cognitive Science and Consciousness Studies - Sean O'Nuallain (NCR, Canada) Contextual Reasoning - Fausto Giunchiglia (University of Trento, Italy) Diagrammatic Reasoning - Hari Narayanan (Georgia Tech, USA) Qualitative Spatial Reasoning - Schlieder (Hamburg and Freiburg University, Germany) Language, Vision, and Spatial Cognition - Annette Herskovits (Boston University) Situated Planning and Reactivity - Iain Craig (University of Warwick, UK) Anthropology of Knowledge - Janet Keller (University of Illinois, USA) Cognitive Ergonomics - Antonio Rizzo (University of Siena, Italy) Psychophysics: Detection, Discrimination, and Scaling - Stephan Mateeff (BAS and NBU, Bulgaria) Participant Symposia Participants are invited to submit papers reporting completed research which will be presented (30 min) at the participant symposia. Authors should send full papers (8 single-spaced pages) in triplicate or electronically (postscript, RTF, MS Word or plain ASCII) by May 31. Selected papers will be published in the School's Proceedings. Only papers presented at the School will be eligible for publication. Student Session Graduate students in Cognitive Science are invited to present their work at the student session. Research in progress as well as research plans and proposals for M.Sc. Theses and Ph.D. Theses will be discussed at the student session. Papers will not be published in the School's Proceedings. Panel Discussions Cognitive Science in the 21st century Symbolic vs. Situated Cognition Human Thinking and Reasoning: Contextual, Diagrammatic, Spatial, Culturally Bound Local Organizers New Bulgarian University, Bulgarian Academy of Sciences, Bulgarian Cognitive Science Society Sponsors TEMPUS SJEP 07272/94 Local Organizing Committee Boicho Kokinov - School Director, Elena Andonova, Gergana Yancheva, Veselka Anastasova Timetable Registration Form: as soon as possible Deadline for paper submission: May 31 Notification of acceptance: June 15 Early registration: June 15 Arrival date and on-site registration July 21 Summer School July 22-August 2 Excursion July 28 Departure date August 3 Paper submission to: Boicho Kokinov Cognitive Science Department New Bulgarian University 21, Montevideo Str.
Sofia 1635, Bulgaria e-mail: cogsci96 at cogs.nbu.acad.bg Send your Registration Form to: e-mail: cogsci96 at cogs.nbu.acad.bg (If you don't receive an acknowledgement within 3 days, send a message to kokinov at bgearn.acad.bg)
From N.Sharkey at dcs.shef.ac.uk Thu May 2 15:42:19 1996 From: N.Sharkey at dcs.shef.ac.uk (Noel Sharkey) Date: Thu, 2 May 96 15:42:19 BST Subject: 1st CALL FOR PAPERS Message-ID: <9605021442.AA19951@dcs.shef.ac.uk> ****** ROBOT LEARNING: THE NEW WAVE ****** Special Issue of the journal Robotics and Autonomous Systems Submission Deadline: August 1st, 1996 Decisions to authors: October 1st, 1996 Final papers back: November 7th, 1996 SPECIAL EDITOR Noel Sharkey (Sheffield) SPECIAL EDITORIAL BOARD Michael Arbib (USC) Ronald Arkin (GIT) George Bekey (USC) Randall Beer (Case Western) Bartlett Mel (USC) Maja Mataric (Brandeis) Carme Torras (Barcelona) Lina Massone (Northwestern) Lisa Meeden (Swarthmore) INTERNATIONAL REVIEW PANEL S Perkins (UK) T Ziemke (Sweden) P Zhang (France) S Wilson (USA) P Bakker (Japan) J Tani (Japan) C Thornton (UK) M Wilson (UK) M Recce (UK) D Cliff (UK) G Hayes (UK) U Zimmer (Germany) S Thrun (USA) S Nolfi (Italy) P van der Smagt (Germany) C Touzet (France) U Nehmzow (UK) R Salmon (Switzerland) J Hallam (UK) M Nilsson (Sweden) M Dorigo (Belgium) A Prescott (UK) C Holgate (UK) E Celaya (Spain) P Husbands (UK) I Harvey (UK) The objective of the Special Issue is to provide a focus for the new wave of research on the use of learning techniques to train real robots. We are particularly interested in research using neural computing techniques, but would also like submissions of work using genetic algorithms or other novel techniques. The nature of the new wave research is transdisciplinary, bringing on board control engineering, artificial intelligence, animal learning, neurophysiology, embodied cognition, and ethology. We would like to encourage work discussing replicability and quantification provided that the research has been conducted or tested on real robots. AREAS OF RESEARCH INCLUDE: Mobile autonomous robotics, Fixed Arm robotics, Dextrous robots, Walking Machines, High level robotics, Behaviour-based robotics, Biologically inspired robots. TOPICS OF INTEREST INCLUDE * Reinforcement learning * Supervised learning * Self-organisation * Genetic algorithms * Learning brain-style control systems * High level robot learning * Hybrid learning * Imitation Learning * The learning and use of representations * Adaptive approaches to dynamic planning * Place recognition Send submissions to Ms Jill Martin, RAS Special, Department of Computer Science, Regent Court, Portobello Rd., University of Sheffield, Sheffield, S1 4DP, UK. Updates will appear on the web page: http://www.dcs.shef.ac.uk/research/groups/nn/RASspecial.html
From arbib at pollux.usc.edu Fri May 3 15:37:55 1996 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Fri, 3 May 1996 12:37:55 -0700 (PDT) Subject: SENSORIMOTOR COORDINATION: EXTENDED DEADLINE Message-ID: <199605031937.MAA02879@pollux.usc.edu> *******PLEASE NOTE THAT THE DEADLINE FOR SUBMISSION *******OF ABSTRACTS HAS BEEN EXTENDED *******FROM 1 MAY to 15 MAY, 1996. Workshop on SENSORIMOTOR COORDINATION: AMPHIBIANS, MODELS, AND COMPARATIVE STUDIES Poco Diablo Resort, Sedona, Arizona, November 22-24, 1996 Co-Directors: Kiisa Nishikawa (Northern Arizona University, Flagstaff) and Michael Arbib (University of Southern California, Los Angeles). Local Arrangements Chair: Kiisa Nishikawa.
E-mail enquiries may be addressed to Kiisa.Nishikawa at nau.edu or arbib at pollux.usc.edu. Further information may be found on our home page at http://www.nau.edu:80/~biology/vismot.html. Program Committee: Kiisa Nishikawa (Chair), Michael Arbib, Emilio Bizzi, Chris Comer, Peter Ewert, Simon Giszter, Mel Goodale, Ananda Weerasuriya, Walt Wilczynski, and Phil Zeigler. SCIENTIFIC PROGRAM The aim of this workshop is to study the neural mechanisms of sensorimotor coordination in amphibians and other model systems for their intrinsic interest, as a target for developments in computational neuroscience, and also as a basis for comparative and evolutionary studies. The list of subsidiary themes given below is meant to be representative of this comparative dimension, but is not intended to be exhaustive. The emphasis (but not the exclusive emphasis) will be on papers that encourage the dialog between modeling and experimentation. A decision as to whether or not to publish a proceedings is still pending. Central Theme: Sensorimotor Coordination in Amphibians and Other Model Systems Subsidiary Themes: Visuomotor Coordination: Comparative and Evolutionary Perspectives Reaching and Grasping in Frog, Pigeon, and Primate Cognitive Maps Auditory Communication (with emphasis on spatial behavior and sensory integration) Motor Pattern Generators This workshop is the sequel to four earlier workshops on the general theme of "Visuomotor Coordination in Frog and Toad: Models and Experiments". The first two were organized by Rolando Lara and Michael Arbib at the University of Massachusetts, Amherst (1981) and Mexico City (1982). The next two were organized by Peter Ewert and Arbib in Kassel and Los Angeles, respectively, with the Proceedings published as follows: Ewert, J.-P. and M. A. Arbib (Eds.) 1989. Visuomotor Coordination: Amphibians, Comparisons, Models and Robots. New York: Plenum Press. Arbib, M.A. and J.-P. Ewert (Eds.) 1991. Visual Structures and Integrated Functions, Research Notes in Neural Computing 3. Heidelberg, New York: Springer Verlag. INSTRUCTIONS FOR CONTRIBUTORS Persons who wish to present oral papers are asked to send three copies of an extended abstract, approximately 4 pages long, including figures and references. Persons who wish to present posters are asked to send a one page abstract. Abstracts may be sent by regular mail, e-mail or FAX. Authors should be aware that e-mailed abstracts should contain no figures. Abstracts should be sent no later than 15 May, 1996 to: Kiisa Nishikawa , Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640, E-mail: Kiisa.Nishikawa at nau.edu; FAX: (520)523-7500. Notification of the Program Committee's decision will be sent out no later than 15 June, 1996. REGISTRATION INFORMATION Meeting Location and General Information: The Workshop will be held at the Poco Diablo Resort in Sedona, Arizona (a beautiful small town set in dramatic red hills) immediately following the Society for Neuroscience meeting in 1996. The 1996 Neuroscience meeting ends on Thursday, November 21, so workshop participants can fly from Washington, DC to Phoenix, AZ that evening, meet Friday, Saturday, and Sunday, with a Workshop Banquet on Sunday evening, and fly home on Monday, November 25th. Paper sessions will be held all day on Friday, on Saturday afternoon, and all day on Sunday. Poster sessions will be held on Saturday afternoon and evening. A group field trip is planned for Saturday morning. 
Graduate Student and Postdoctoral Participation: In order to encourage the participation of graduate students and postdoctorals, we have arranged for affordable housing, and in addition we are able to offer a reduced registration fee (see below) thanks to the generous contribution of the Office of the Associate Provost for Research and Graduate Studies at Northern Arizona University. TRAVEL FROM PHOENIX TO SEDONA: Sedona, AZ is located approximately 100 miles north of Phoenix, where the nearest major airport (Sky Harbor) is located. Workshop attendees may wish to arrange their own transportation (e.g., car rental from Phoenix airport) from Phoenix to Sedona, or they may use the Workshop Shuttle (estimated round trip cost $20 US) to Sedona on 21 November, with a return to Phoenix on 25 November. If you plan to use the Workshop Shuttle, we will need to know your expected arrival time in Phoenix by 1 October 1996, to ensure that space is available for you at a convenient time. LODGING: The following costs are for each night. Since many participants may want to extend their stay to further enjoy Arizona's scenic beauty, we have negotiated special rates for additional nights after the end of the workshop on November 24th. Attendees should make their own booking with the Poco Diablo Resort, by phone (800) 352-5710 or FAX (520) 282-9712. Thurs.-Fri. (and additional week nights before the workshop) per night: students $85 US + tax, faculty $105 + tax Sat.-Sun. (and additional week nights after the workshop) per night: students $69 + tax, faculty $89 + tax. The student room rates are for double occupancy. Thus, students willing to share a room may stay for half the stated rate. When you make your room reservations with the Poco Diablo Resort, please be sure to indicate the number of guests in your party. Graduate students and postdocs should be sure to indicate whether they want single or double occupancy. REGISTRATION FEES: Students and postdoctorals $100; faculty, guests and others $200. The registration fee includes lunch Fri. - Sun., wine and cheese reception during the Saturday evening poster session, and a Farewell Dinner on Sunday evening. Registration fees should be paid by check in US funds, made payable to "Sensorimotor Coordination Workshop", and should be sent to Kiisa Nishikawa at the address listed below, together with the completed registration form that follows at the end of this announcement. Completed registration forms and fees must be received by 1 July, 1996. Late registration fees will be $150 for students and postdoctorals and $250 for faculty. REGISTRATION FORM NAME: ADDRESS: PHONE: FAX: EMAIL: STATUS: [ ] Faculty ($200); [ ] Postdoctoral ($100); [ ] Student ($100); [ ] Other ($200). (Postdocs and students: Please attach certification of your status signed by your supervisor.) TYPE OF PRESENTATION (paper vs. poster): ABSTRACT SENT: (yes/no) AREAS OF INTEREST RELEVANT TO WORKSHOP: WILL YOU REQUIRE ANY SPECIAL AUDIOVISUAL EQUIPMENT FOR YOUR PRESENTATION? HAVE YOU MADE A RESERVATION WITH THE HOTEL? EXPECTED TIME OF ARRIVAL IN PHOENIX (ON NOVEMBER 21): EXPECTED TIME OF DEPARTURE FROM PHOENIX (ON NOVEMBER 25): DO YOU WISH TO USE THE WORKSHOP SHUTTLE TO TRAVEL FROM PHOENIX TO SEDONA? (If so, please be sure that we know your expected arrival time by 1 October!) DO YOU WISH TO PARTICIPATE IN A GROUP HIKE IN THE SEDONA AREA ON SATURDAY MORNING? Please make sure that your check (in US funds and payable to the "Sensorimotor Coordination Workshop") is included with this form. 
If you plan to bring a guest with you to the Workshop, please add their name(s) to this form and enclose their registration fee along with your own. Mail to: Kiisa Nishikawa, Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640. E-mail: Kiisa.Nishikawa at nau.edu. FAX: (520)523-7500. Phone: (520)523-9497.
From koza at CS.Stanford.EDU Sat May 4 11:20:07 1996 From: koza at CS.Stanford.EDU (John R. Koza) Date: Sat, 4 May 1996 08:20:07 -0700 (PDT) Subject: GP-96 Registration and Papers Message-ID: <199605041520.IAA08316@Sunburn.Stanford.EDU> CALL FOR PARTICIPATION, LIST OF TUTORIALS, LIST OF PAPERS, LIST OF PROGRAM COMMITTEES, AND REGISTRATION FORM (Largest discount available until May 15) Genetic Programming 1996 Conference (GP-96) July 28 - 31 (Sunday - Wednesday), 1996 Fairchild Auditorium and other campus locations Stanford University Stanford, California Proceedings will be published by The MIT Press In cooperation with - the Association for Computing Machinery (ACM), - SIGART - IEEE Neural Network Council, - American Association for Artificial Intelligence. Genetic programming is an automatic programming technique for evolving computer programs that solve (or approximately solve) problems. Starting with a primordial ooze of thousands of randomly created computer programs composed of programmatic ingredients appropriate to the problem, a population of computer programs is progressively evolved over many generations using the Darwinian principle of survival of the fittest, a sexual recombination operation, and occasional mutation. Since 1992, over 500 technical papers have been published in this rapidly growing field. This first genetic programming conference will feature 75 papers and 27 poster papers, 12 tutorials, 2 invited speakers, a session featuring late-breaking papers, and informal birds-of-a-feather meetings. Topics include, but are not limited to, applications of genetic programming, theoretical foundations of genetic programming, implementation issues and technique extensions, use of memory and state, cellular encoding (developmental genetic programming), evolvable hardware, evolvable machine language programs, automated evolution of program architecture, evolution and use of mental models, automatic programming of multi-agent strategies, distributed artificial intelligence, automated circuit synthesis, automatic programming of cellular automata, induction, system identification, control, automated design, compression, image analysis, pattern recognition, molecular biology applications, grammar induction, and parallelization. ------------------------------------------------- HONORARY CHAIR: John Holland, University of Michigan INVITED SPEAKERS: John Holland, University of Michigan and David E. Goldberg, University of Illinois GENERAL CHAIR: John Koza, Stanford University PUBLICITY CHAIR: Patrick Tufts, Brandeis University -------------------------------------------------
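Purely as an illustration of the evolutionary loop described in the opening paragraph of this announcement (a random initial population of programs, fitness-based selection, recombination of program parts, and occasional mutation), here is a small sketch in Python. It is not taken from the GP-96 materials or from any particular genetic programming system; the function set, the target expression and all parameters are invented, and real GP systems are considerably more elaborate.

import random

# Toy genetic-programming sketch (illustrative only): evolve expression
# trees toward y = x*x + x on a few sample points, using fitness-based
# selection, subtree-style crossover and occasional mutation.

FUNCS = {'+': lambda a, b: a + b,
         '-': lambda a, b: a - b,
         '*': lambda a, b: a * b}
TERMS = ['x', 1.0]

def random_tree(depth=3):
    # A tree is either a terminal or a tuple (operator, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Lower is better: summed squared error against the target function.
    points = [-2.0, -1.0, 0.0, 1.0, 2.0]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in points)

def random_subtree(tree):
    while isinstance(tree, tuple) and random.random() < 0.7:
        tree = random.choice(tree[1:])
    return tree

def crossover(a, b):
    # Build a copy of `a` with one randomly chosen branch replaced by a
    # randomly chosen subtree of `b`.
    if not isinstance(a, tuple) or random.random() < 0.3:
        return random_subtree(b)
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    parents = population[:50]                    # survival of the fittest
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(150)]
    children = [random_tree() if random.random() < 0.05 else child
                for child in children]           # occasional mutation
    population = parents + children
best = min(population, key=fitness)
print(best, fitness(best))

Here "mutation" simply replaces a child with a fresh random tree; actual GP systems mutate and recombine program trees in more principled ways and work with far richer function sets.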
TUTORIALS - Sunday July 28 9:15 AM - 11:30 AM - Genetic Algorithms - David E. Goldberg, University of Illinois - Machine Language Genetic Programming - Peter Nordin, University of Dortmund, Germany - Genetic Programming using Mathematica - Robert Nachbar, Merck Research Laboratories - Introduction to Genetic Programming - John Koza, Stanford University ------------------------------------------------- Sunday July 28 1:00 PM - 3:15 PM - Classifier Systems - Robert Elliott Smith, University of Alabama - Evolutionary Computation for Constraint Optimization - Zbigniew Michalewicz, University of North Carolina - Advanced Genetic Programming - John Koza, Stanford University ------------------------------------------------- Sunday July 28 3:45 PM - 6 PM - Evolutionary Programming and Evolution Strategies - David Fogel, University of California, San Diego - Cellular Encoding - Frederic Gruau, Stanford University (via videotape) and David Andre, Stanford University (in person) - Genetic Programming with Linear Genomes (one hour) - Wolfgang Banzhaf, University of Dortmund, Germany - ECHO - Terry Jones, Santa Fe Institute ------------------------------------------------- Tuesday July 30 - 3 PM - 5:15 PM - Neural Networks - David E. Rumelhart, Stanford University - Machine Learning - Pat Langley, Stanford University - Molecular Biology for Computer Scientists - Russ B. Altman, Stanford University ------------------------------------------------- Additional tutorial - Time to be Announced - Evolvable Hardware - Hugo De Garis, ATR, Nara, Japan and Adrian Thompson, University of Sussex, U.K. ------------------------------------------------- FOR MORE INFORMATION ABOUT THE GP-96 CONFERENCE: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html or contact GP-96 via e-mail at gp at aaai.org. PHONE: 415-328-3123. FAX: 415-321-4457. Conference operated by Genetic Programming Conferences, Inc. (a California not-for-profit corporation). ABOUT GENETIC PROGRAMMING IN GENERAL: http://www-cs-faculty.stanford.edu/~koza/. FOR GP-96 TRAVEL INFORMATION: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html. For further information regarding special GP-96 airline and car rental rates, please contact Conventions in America at e-mail flycia at balboa.com; or phone 1-800-929-4242; or phone 619-678-3600; or FAX 619-678-3699. FOR HOTEL AND UNIVERSITY HOUSING INFORMATION: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html or via e-mail at gp at aaai.org. FOR STUDENT TRAVEL GRANTS: See the GP-96 home page on the World Wide Web: http://www.cs.brandeis.edu/~zippy/gp-96.html. ABOUT THE SAN FRANCISCO BAY AREA AND SILICON VALLEY SIGHTS: Try the Stanford University home page at http://www.stanford.edu/, the Hyperion Guide at http://www.hyperion.com/ba/sfbay.html; the Palo Alto weekly at http://www.service.com/PAW/home.html; the California Virtual Tourist at http://www.research.digital.com/SRC/virtual-tourist/California.html; and the Yahoo Guide of San Francisco at http://www.yahoo.com/Regional_Information/States/California/San_Francisco. ABOUT OTHER CONTEMPORANEOUS WEST COAST CONFERENCES: Information about the AAAI-96 conference on August 4 - 8 (Sunday - Thursday), 1996, in Portland, Oregon is at http://www.aaai.org/. Information on the International Conference on Knowledge Discovery and Data Mining (KDD-96) in Portland on August 3 - 5, 1996 is at http://www-aig.jpl.nasa.gov/kdd96.
Information about the Protein Society conference on August 3 - 7, 1996 in San Jose is at http://www.faseb.org. Information about the Foundations of Genetic Algorithms (FOGA) workshop on August 3 - 5 (Saturday - Monday), 1996, in San Diego is at http://www.aic.nrl.navy.mil/galist/foga/. Information about the Parallel and Distributed Processing Techniques and Applications (PDPTA-96) conference on August 6 - 9 (Friday - Sunday), 1996 in Sunnyvale, California is at http://www.ece.neu.edu/pdpta96.html. ABOUT MEMBERSHIP IN THE ACM, AAAI, or IEEE: For information about ACM membership, try http://www.acm.org/; for information about SIGART, try http://sigart.acm.org/; for AAAI membership, go to http://www.aaai.org/; and for membership in the IEEE, go to http://www.ieee.org. PHYSICAL MAIL ADDRESS FOR GP-96: GP-96 Conference, c/o American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, CA 94025. PHONE: 415-328-3123. FAX: 415-321-4457. WWW: http://www.aaai.org/. E-MAIL: gp at aaai.org. ------------------------------------------------ REGISTRATION FORM FOR GENETIC PROGRAMMING 1996 CONFERENCE TO BE HELD ON JULY 28 - 31, 1996 AT STANFORD UNIVERSITY First Name _________________________ Last Name_______________ Affiliation________________________________ Address__________________________________ ________________________________________ City__________________________ State/Province _________________ Zip/Postal Code____________________ Country__________________ Daytime telephone__________________________ E-Mail address_____________________________ Conference registration fee includes copy of proceedings, attendance at 4 tutorials of your choice, syllabus books for the tutorials, conference reception, copy of a book of late-breaking papers, a T-shirt, coffee breaks, lunch (on at least Sunday), and admission to conference sessions. Students must send legible proof of full-time student status. Conference proceedings will be mailed to registered attendees with U.S. mailing addresses via 2-day U.S. priority mail about 1 - 2 weeks prior to the conference at no extra charge (at addressee's risk). If you are uncertain as to whether you will be at that address at that time or DO NOT WANT YOUR PROCEEDINGS MAILED to you at the above address for any other reason, your copy of the proceedings will be held for you at the conference registration desk if you CHECK HERE ____. Postmarked by May 15, 1996: Student - ACM, IEEE, or AAAI Member $195 Regular - ACM, IEEE, or AAAI Member $395 Student - Non-member $215 Regular - Non-member $415 Postmarked by June 26, 1996: Student - ACM, IEEE, or AAAI Member $245 Regular - ACM, IEEE, or AAAI Member $445 Student - Non-member $265 Regular - Non-member $465 Postmarked later or on-site: Student - ACM, IEEE, or AAAI Member $295 Regular - ACM, IEEE, or AAAI Member $495 Student - Non-member $315 Regular - Non-member $515 Member number: ACM # ___________ IEEE # _________ AAAI # _________ Total fee (enter appropriate amount) $ _________ __ Check or money order made payable to "AAAI" (in U.S.
funds) __ Mastercard __ Visa __ American Express Credit card number __________________________________________ Expiration Date ___________ Signature _________________________ TUTORIALS: Check off a box for one tutorial from each of the 4 columns: Sunday July 28, 1996 - 9:15 AM - 11:30 AM __ Genetic Algorithms __ Machine Language GP __ GP using Mathematica __ Introductory GP Sunday July 28, 1996 - 1:00 PM - 3:15 PM __ Classifier Systems __ EC for Constraint Optimization __ Advanced GP Sunday July 28, 1996 - 3:45 PM - 6 PM __ Evolutionary Programming and Evolution Strategies __ Cellular Encoding __ GP with Linear Genomes __ ECHO Tuesday July 30, 1996 - 3:00 PM - 5:15 PM __ Neural Networks __ Machine Learning __ Molecular Biology for Computer Scientists __ Check here for information about housing and meal package at Stanford University. __ Check here for information on student travel grants. T-shirt size ___ small ___ medium ___ large ___ extra-large No refunds will be made; however, we will transfer your registration to a person you designate upon notification. SEND TO: GP-96 Conference, c/o American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, CA 94025. ------------------------------------------------- 90 PAPERS APPEARING IN PROCEEDINGS OF THE GP-96 CONFERENCE TO BE HELD AT STANFORD UNIVERSITY ON JULY 28-31, 1996 -------------------------------------------------- LONG GENETIC PROGRAMMING PAPERS Discovery by Genetic Programming of a Cellular Automata Rule that is Better than any Known Rule for the Majority Classification Problem --- David Andre, Forrest H Bennett III, and John R. Koza A Study in Program Response and the Negative Effects of Introns in Genetic Programming --- David Andre and Astro Teller An Investigation into the Sensitivity of Genetic Programming to the Frequency of Leaf Selection During Subtree Crossover --- Peter J. Angeline Automatic Creation of an Efficient Multi-Agent Architecture Using Genetic Programming with Architecture-Altering Operations --- Forrest H Bennett III Evolving Deterministic Finite Automata Using Cellular Encoding --- Scott Brave Genetic Programming and the Efficient Market Hypothesis --- Shu-Heng Chen and Chia-Hsuan Yeh Bargaining by Artificial Agents in Two Coalition Games: A Study in Genetic Programming for Electronic Commerce --- Garett Dworman, Steven O. Kimbrough, and James D. Laing Waveform Recognition Using Genetic Programming: The Myoelectric Signal Recognition Problem --- Jaime J. Fernandez, Kristin A. Farry, and John B. Cheatham Benchmarking the Generalization Capabilities of A Compiling Genetic Programming System using Sparse Data Sets --- Frank D. Francone, Peter Nordin, and Wolfgang Banzhaf A Comparison between Cellular Encoding and Direct Encoding for Genetic Neural Networks --- Frederic Gruau, Darrell Whitley, and Larry Pyeatt Entailment for Specification Refinement --- Thomas Haynes, Rose Gamble, Leslie Knight, and Roger Wainwright Genetic Programming of Near-Minimum-Time Spacecraft Attitude Maneuvers --- Brian Howley Evolving Evolution Programs: Genetic Programming and L-Systems --- Christian Jacob Genetic Programming using Genotype-Phenotype Mapping from Linear Genomes into Linear Phenotypes --- Robert E. Keller and Wolfgang Banzhaf Automated WYWIWYG Design of Both the Topology and Component Values of Electrical Circuits Using Genetic Programming --- John R. Koza, Forrest H Bennett III, David Andre, and Martin A.
Keane Use of Automatically Defined Functions and Architecture-Altering Operations in Automated Circuit Synthesis Using Genetic Programming --- John R. Koza, David Andre, Forrest H Bennett III, and Martin A. Keane Using Data Structures within Genetic Programming --- W. B. Langdon Evolving Teamwork and Coordination with Genetic Programming --- Sean Luke and Lee Spector Using Genetic Programming to Develop Inferential Estimation Algorithms --- Ben McKay, Mark Willis, Gary Montague, and Geoffrey W. Barton Dynamics of Genetic Programming and Chaotic Time Series Prediction --- Brian S. Mulloy, Rick L. Riolo, and Robert S. Savit Genetic Programming, the Reflection of Chaos, and the Bootstrap: Towards a useful Test for Chaos --- E. Howard N. Oakley Solving Facility Layout Problems Using Genetic Programming --- Jaime Garces-Perez, Dale A. Schoenefeld, and Roger L. Wainwright Variations in Evolution of Subsumption Architectures Using Genetic Programming: The Wall Following Robot Revisited --- Steven J. Ross, Jason M. Daida, Chau M. Doan, Tommaso F. Bersano-Begey, and Jeffrey J. McClain MASSON: Discovering Commonalties in Collection of Objects using Genetic Programming --- Tae-Wan Ryu and Christoph F. Eick Cultural Transmission of Information in Genetic Programming --- Lee Spector and Sean Luke Code Growth in Genetic Programming --- Terence Soule, James A. Foster, and John Dickinson High-Performance, Parallel, Stack-Based Genetic Programming --- Kilian Stoffel and Lee Spector Search Bias, Language Bias, and Genetic Programming --- P. A. Whigham Learning Recursive Functions from Noisy Examples using Generic Genetic Programming --- Man Leung Wong and Kwong Sak Leung SHORT GENETIC PROGRAMMING PAPERS Classification using Cultural Co-Evolution and Genetic Programming --- Myriam Abramson and Lawrence Hunter Type-Constrained Genetic Programming for Rule-Base Definition in Fuzzy Logic Controllers --- Enrique Alba, Carlos Cotta, and Jose J. Troyo The Evolution of Memory and Mental Models Using Genetic Programming --- Scott Brave Automatic Generation of Object-Oriented Programs Using Genetic Programming --- Wilker Shane Bruce Evolving Event Driven Programs --- Mark Crosbie and Eugene H. Spafford Computer-Assisted Design of Image Classification Algorithms: Dynamic and Static Fitness Evaluations in a Scaffolded Genetic Programming Environment --- Jason M. Daida, Tommaso F. Bersano-Begey, Steven J. Ross, and John F. Vesecky Improved Directed Acyclic Graph Handling and the Combine Operator in Genetic Programming --- Herman Ehrenburg An Adverse Interaction between Crossover and Restricted Tree Depth in Genetic Programming --- Chris Gathercole and Peter Ross The Prediction of the Degree of Exposure to Solvent of Amino Acid Residues via Genetic Programming --- Simon Handley A New Class of Function Sets for Solving Sequence Problems --- Simon Handley Evolving Edge Detectors with Genetic Programming --- Christopher Harris and Bernard Buxton Toward Simulated Evolution of Machine Language Iteration --- Lorenz Huelsbergen Robustness of Robot Programs Generated by Genetic Programming --- Takuya Ito, Hitoshi Iba, and Masayuki Kimura Signal Path Oriented Approach for Generation of Dynamic Process Models --- Peter Marenbach, Kurt D. Betterhausen, and Stephan Freyer Evolving Control Laws for a Network of Traffic Signals --- David J.
Montana and Steven Czerwinski Distributed Genetic Programming: Empirical Study and Analysis --- Tatsuya Niwa and Hitoshi Iba Programmatic Compression of Images and Sound --- Peter Nordin and Wolfgang Banzhaf Investigating the Generality of Automatically Defined Functions --- Una-May O'Reilly Parallel Genetic Programming: An Application to Trading Models Evolution --- Mouloud Oussaidene, Bastien Chopard, Olivier V. Pictet, and Marco Tomassini Genetic Programming for Image Analysis --- Riccardo Poli Evolving Agents --- Adil Qureshi Genetic Programming for Improved Data Mining: An Application to the Biochemistry of Protein Interactions --- M. L. Raymer, W. F. Punch, E. D. Goodman, and L. A. Kuhn Generality Versus Size in Genetic Programming --- Justinian Rosca Genetic Programming in Database Query Optimization --- Michael Stillger and Myra Spiliopoulou Ontogenetic Programming --- Lee Spector and Kilian Stoffel Using Genetic Programming to Approximate Maximum Clique --- Terence Soule, James A. Foster, and John Dickinson Paragen: A Novel Technique for the Autoparallelisation of Sequential Programs using Genetic Programming --- Paul Walsh and Conor Ryan The Benefits of Computing with Introns --- Mark Wineberg and Franz Oppacher GENETIC PROGRAMMING POSTER PAPERS Co-Evolving Classification Programs using Genetic Programming --- Manu Ahluwalia and Terence C. Fogarty Genetic Programming Tools Available on the Web: A First Encounter --- Anthony G. Deakin and Derek F. Yates Speeding up Genetic Programming: A Parallel BSP Implementation --- Dimitris C. Dracopoulos and Simon Kent Easy Inverse Kinematics using Genetic Programming --- Jonathan Gibbs Noisy Wall-Following and Maze Navigation through Genetic Programming --- Andrew Goldfish Genetic Programming for Classification of Brain Tumours from Nuclear Magnetic Resonance Biopsy Spectra --- H. F. Gray, R. J. Maxwell, I. Martinez-Perez, C. Arus, and S. Cerdan GP-COM: A Distributed Component-Based Genetic Programming System in C++ --- Christopher Harris and Bernard Buxton Clique Detection via Genetic Programming --- Thomas Haynes and Dale Schoenefeld Functional Languages on Linear Chromosomes --- Paul Holmes and Peter J. Barclay Improving the Accuracy and Robustness of Genetic Programming through Expression Simplification --- Dale Hooper and Nicholas S. Flann COAST: An Approach to Robustness and Reusability in Genetic Programming --- Naohiro Hondo, Hitoshi Iba, and Yukinori Kakazu Recurrences with Fixed Base Cases in Genetic Programming --- Stefan J. Johansson Evolutionary and Incremental Methods to Solve Hard Learning Problems --- Ibrahim Kuscu Detection of Patterns in Radiographs using ANN Designed and Trained with the Genetic Algorithm --- Alejandro Pazos Julian Dorado and Antonino Santos The Logic-Grammars-Based Genetic Programming System --- Man Leung Wong and Kwong Sak Leung LONG GENETIC ALGORITHMS PAPERS Genetic Algorithms with Analytical Solution --- Erol Gelenbe Silicon Evolution --- Adrian Thompson SHORT GENETIC ALGORITHMS PAPERS On Sensor Evolution in Robotics --- Karthik Balakrishnan and Vasant Honavar Testing Software using Order-Based Genetic Algorithms --- Edward B. Boden and Gilford F. 
Martino Optimizing Local Area Networks Using Genetic Algorithms --- Andy Choi A Genetic Algorithm for the Construction of Small and Highly Testable OKFDD Circuits --- Rold Drechsler, Bernd Becker, and Nicole Gockel Motion Planning and Design of CAM Mechanisms by Means of a Genetic Algorithm --- Rodolfo Faglia and David Vetturi Evolving Strategies Based on the Nearest Neighbor Rule and a Genetic Algorithm --- Matthias Fuchs Recognition and Reconstruction of Visibility Graphs Using a Genetic Algorithm --- Marshall S. Veach GENETIC ALGORITHMS POSTER PAPERS The Use of Genetic Algorithms in the Optimization of Competitive Neural Networks which Resolve the Stuck Vectors Problem --- Tin Ilakovac, Zeljka Perkovic, and Strahil Ristov An Extraction Method of a Car License Plate using a Distributed Genetic Algorithm --- Dae Wook Kim, Sang Kyoon Kim, and Hang Joon Kim EVOLUTIONARY PROGRAMMING AND EVOLUTION STRATEGIES PAPERS Evolving Fractal Movies --- Peter J. Angeline Preliminary Experiments on Discriminating between Chaotic Signals --- David B. Fogel and Lawrence J. Fogel Discovering Patterns in Spatial Data using Evolutionary Programming --- Adam Ghozeil and David B. Fogel Evolving Reduced Parameter Bilinear Models for Time Series Prediction using Fast Evolutionary Programming --- Sathyanarayan S. Rao and Kumar Chellapilla CLASSIFIER SYSTEMS PAPERS Three-Dimensional Shape Optimization Utilizing a Learning Classifier System --- Robert A. Richards and Sheri D. Sheppard Classifier System Renaissance: New Analogies, New Directions --- H. Brown Cribbs III and Robert E. Smith Natural Niching for Cooperative Learning in Classifier Systems --- Jeffrey Horn and David E. Goldberg From juergen at idsia.ch Tue May 7 10:07:53 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 7 May 96 16:07:53 +0200 Subject: guessing recurrent nets Message-ID: <9605071407.AA09728@fava.idsia.ch> GUESSING CAN OUTPERFORM MANY LONG TIME LAG ALGORITHMS Juergen Schmidhuber, IDSIA & Sepp Hochreiter, TUM Technical Note IDSIA-19-96 (3 pages, 48 K) Numerous recent papers focus on standard recurrent nets' problems with long time lags between relevant signals. Some propose rather sophisticated, alterna- tive methods. We show: many problems used to test previous methods can be solved more quickly by random weight guessing. To obtain a copy, use ftp, or simply cut and paste: netscape ftp://ftp.idsia.ch/pub/juergen/guess.ps.gz Or try our web pages: http://www7.informatik.tu-muenchen.de/~hochreit http://www.idsia.ch/~juergen/onlinepub.html From rsun at cs.ua.edu Tue May 7 14:24:51 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Tue, 7 May 1996 13:24:51 -0500 Subject: from subsymbolic to symbolic learning Message-ID: <9605071824.AA31663@athos.cs.ua.edu> Bottom-up Skill Learning in Reactive Sequential Decision Tasks Ron Sun Todd Peterson Edward Merrill The University of Alabama Tuscaloosa, AL 35487 --------------------------------------- To appear in: Proc. of Cognitive Science Conference, 1996. 6 pages. ftp or Web access: ftp://cs.ua.edu/pub/tech-reports/sun.cogsci96.ps sorry, no hardcopy available. ---------------------------------------- This paper introduces a hybrid model that unifies connectionist, symbolic, and reinforcement learning into an integrated architecture for bottom-up skill learning in reactive sequential decision tasks. The model is designed for an agent to learn continuously from on-going experience in the world, without the use of preconceived concepts and knowledge. 
Both procedural skills and high-level knowledge are acquired through an agent's experience interacting with the world. Computational experiments with the model in two domains are reported. From bartlett at alfred.anu.edu.au Tue May 7 05:11:14 1996 From: bartlett at alfred.anu.edu.au (Peter Bartlett) Date: Tue, 7 May 1996 19:11:14 +1000 (EST) Subject: Paper on neural net learning (again) Message-ID: <9605070911.AA24712@cook.anu.edu.au> A new version of the following paper, announced last week, is available by anonymous ftp. ftp host: syseng.anu.edu.au ftp file: pub/peter/TR96d.ps.Z The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network (21 pages) Peter Bartlett Australian National University (The main change is that I've taken out part of the first theorem, which will soon appear in a forthcoming paper with John Shawe-Taylor, Bob Williamson, and Martin Anthony. My apologies that the permissions were set incorrectly on this file. Thanks to those who pointed out the problem.) -- Peter. From goldfarb at unb.ca Tue May 7 20:10:58 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Tue, 7 May 1996 21:10:58 -0300 (ADT) Subject: Paper: INDUCTIVE THEORY OF VISION Message-ID: My apologies if you receive multiple copies of this message. The following paper (also TR96-108, April 1996, Faculty of Computer Science, University of New Brunswick, Fredericton, Canada) will be presented at the workshop WHAT IS INDUCTIVE LEARNING held in Toronto on May 20-21, 1996 in conjunction with the 11th Canadian biennial conference on Artificial Intelligence. It is available via anonymous ftp (45 pages) ftp://ftp.cs.unb.ca/incoming/theory.ps.Z It goes without saying that comments and suggestions are appreciated. %*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*% INDUCTIVE THEORY OF VISION Lev Goldfarb, Sanjay S. Deshpande, Virendra C. Bhavsar Faculty of Computer Science University of New Brunswick, Fredericton, N.B., Canada E3B 5A3 Ph: (506)453-4566, FAX: (506)453-3566 E-mail: goldfarb, d23d, bhavsar @unb.ca Abstract ---------- In spite of the fact that some of the outstanding physiologists and neurophysiologists (e.g. Hermann von Helmholtz and Horace Barlow) insisted on the central role of inductive learning processes in vision as well as in other sensory processes, there are absolutely no (computational) theories of vision that are guided by these processes. It appears that this is mainly due to the lack of understanding of what inductive learning processes are. We strongly believe in the central role of inductive learning processes around which, we think, all other (intelligent) biological processes have evolved. In this paper we outline the first (computational) theory of vision completely built around the inductive learning processes for all levels in vision. The development of such a theory became possible with the advent of the formal model of inductive learning--evolving transformation system (ETS). The proposed theory is based on the concept of structured measurement device, which is motivated by the formal model of inductive learning and is a far-reaching generalization of the concept of classical measurement device, whose output measurements are not numbers but structured entities ("symbols") with an appropriate metric geometry. 
We propose that the triad of object structure, image structure and the appropriate mathematical structure (ETS)--to capture the latter two structures--is precisely what computational vision should be about. And it is the inductive learning process that relates the members of this triad. We suggest that since the structure of objects in the universe has evolved in a combinative (agglomerative) and hierarchical manner, it is quite natural to expect that biological processes have also evolved (to learn) to capture the latter combinative and hierarchical structure. In connection with this, the inadequacy of the classical mathematical structures as well as the role of mathematical structures in information processing are discussed. We propose the following postulates on which we base the theory.

POSTULATE 1. The objects in the universe have emergent combinative hierarchical structure. Moreover, the term "object structure" cannot be properly understood and defined outside the inductive learning process.

POSTULATE 2. The inductive learning process is an evolving process that tries to capture the emergent object (class) structure mentioned in Postulate 1. The mathematical structure on which the inductive learning model is based should have the intrinsic capability to capture the evolving object structure. (It appears that the corresponding mathematical structure is fundamentally different from the classical mathematical structures.)

POSTULATE 3. All basic representations in vision processes are constructed on the basis of the inductive class representation, which, in turn, is constructed by the inductive learning process (see Postulate 2). Thus, the inductive learning processes form the core around which all vision processes have evolved.

We present simple examples to illustrate the proposed theory for the case of "low-level" vision.
_______________________________________________
KEYWORDS: vision, low-level vision, object structure, inductive learning, learning from examples, evolving transformation system, symbolic image representation, image structure, abstract measurement device.
%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%*%
-- Lev Goldfarb http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From jose at scr.siemens.com Wed May 8 11:19:27 1996 From: jose at scr.siemens.com (Stephen Hanson) Date: Wed, 8 May 1996 11:19:27 -0400 (EDT) Subject: Paper Available Message-ID: <199605081519.LAA03527@as1.scr.siemens.com>

Development of Schemata during Event Parsing: Neisser's Perceptual Cycle as a Recurrent Connectionist Network
Catherine Hanson & Stephen Jose Hanson
Temple University, Siemens Research & Princeton University

Abstract: Neural net simulations of human event parsing are described. A recurrent net was used to simulate data collected from human subjects watching short videotaped event sequences. In one simulation the net was trained on one-half of a taped sequence, with the other half of the sequence being used to test transfer performance. In another simulation the net was trained on one complete event sequence and transfer to a different event sequence was tested. Neural net simulations provided a unique means of observing the interrelation of top-down and bottom-up processing in a basic cognitive task. Examination of computational patterns of the net and cluster analysis of the hidden units revealed two factors that may be central to event perception: (1) similarity between a current input and an activated schema and (2) expected duration of a given event.
Although the importance of similarity between input and activated schemata during event perception has been acknowledged previously (e.g. Neisser, 1976, Schank, 1982), the present research provides a specific instantiation of how similarity judgements can be made using both top-down and bottom-up processing. Moreover, unlike other work on event perception, this approach addresses potential mechanisms for how schemata develop. Journal of Cognitive Neuroscience 8:2, pp. 119-134, 1996. If you are interested in a reprint of the paper, please send your address in return mail to paper-request at scr.siemens.com

From carmesin at schoner.physik.uni-bremen.de Wed May 8 12:04:54 1996 From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin) Date: Wed, 8 May 1996 18:04:54 +0200 Subject: Cortical Maps Message-ID: <199605081604.SAA05483@schoner.physik.uni-bremen.de>

The paper TOPOLOGY-PRESERVATION EMERGENCE BY THE HEBB RULE WITH INFINITESIMAL SHORT RANGE SIGNALS by Hans-Otto Carmesin, Institute for Theoretical Physics and Center for Cognitive Sciences, University Bremen, 28334 Bremen, Germany is available.

ABSTRACT: Topology preservation is a ubiquitous phenomenon in the mammalian nervous system. What are the necessary and sufficient conditions for the self-organized formation of topology preservation due to a Hebb mechanism? A relatively realistic Hebb rule and neurons with stochastic fluctuations are modeled. Moreover, a reasonable growth law is used for coupling growth: the biomass increase is proportional to the present biomass, under the constraint that the biomass at a neuron is limited. It is proven for such general Hebb-type networks that infinitesimal lateral signal transfer to neighbouring neurons is necessary and sufficient for the emergence of topology preservation. As a consequence, observed topology preservation in nervous systems may emerge with or without purpose as a byproduct of infinitesimal lateral signal transfer to neighbouring neurons due to ubiquitous chemical and electrical leakage.

Obtainable via WWW at http://www.schoner.physik.uni-bremen.de/~carmesin/ The paper appeared in Phys. Rev. E, Vol. 53, 993-1002 (1996).

From wals96 at SPENCER.CTAN.YALE.EDU Wed May 8 16:24:47 1996 From: wals96 at SPENCER.CTAN.YALE.EDU (Workshop on Adaptive Learning Systems) Date: Wed, 8 May 1996 16:24:47 -0400 Subject: Please post the following Message-ID: <199605082024.AA12842@NOYCE.CTAN.YALE.EDU>

The Ninth Yale Workshop on Adaptive and Learning Systems
June 10-12, 1996
Yale University, New Haven, Connecticut

Announcement

Objective: Advances in theory and computer technology have enhanced the viability of intelligent systems operating in complex environments. Different perspectives on this general topic offered by learning theory, adaptive control, robotics, artificial neural networks, and biological systems are being linked in productive ways. The aim of the Ninth Workshop on Adaptive and Learning Systems is to bring together engineers and scientists to exploit the synergism between different viewpoints and to provide a favorable environment for constructive collaboration.

Program: The principal sessions will be devoted to adaptive systems, learning systems, robotics, neural networks, and biological systems. A tentative list of speakers includes: Adaptation: A. M. Annaswamy, M. Bodson, B. Friedland, R. Horowitz, D. E. Miller, A. S. Morse, K. S. Narendra, H. E. Rauch, H. Unbehauen Learning: A. G. Barto, E. V. Denardo, E. Gelenbe, R. W. Longman, R. K. Mehra, R. S. Sutton, P.
Werbos Robotics: P. N. Belhumeur, T. Fukuda, D. J. Kriegman, M. T. Mason, W. T. Miller III, J.-J. E. Slotine Neural Networks: G. Cybenko, L. Feldkamp, C. L. Giles, S. Haykin, L. G. Kraft, U. Lenz, P. Mars, K. S. Narendra, J. Principe, J. N. Tsitsiklis, A. S. Weigend, L. Ungar Biological Systems: E. Bizzi, J. J. Collins, W. Freeman, J. Houk Registration: Registration will be limited and preregistration is highly recommended. Please complete the form below and return together with a check payable to Adaptive and Learning Systems. Information on transportation and lodging will be forwarded upon receipt of the registration form. For further information contact Ms. Lesley Kent, Center for Systems Science, Yale University, P.O. Box 208267, New Haven, CT 06520-8267. Telephone: (203) 432-2211. FAX: (203) 432-7481. e-mail: lesley at sysc2.eng.yale.edu or wals96 at nnc.yale.edu Rooms at reduced rates have been reserved at the Holiday Inn [Tel. (203) 777-6221]. Note that due to other events in New Haven at the time of the Workshop, rooms may not be available if reservations are not made prior to May 25, 1996. Please mention the Yale Workshop when you make your reservation. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - DETACH HERE - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - PREREGISTRATION FORM Name _____________________________________________________ Position _____________________________________________________ Organization _____________________________________________________ Address _____________________________________________________ _____________________________________________________ Phone _____________________________________________________ Enclose a check for $200 ($100 for students with valid ID) payable to Adaptive and Learning Systems and mail to Professor K. S. Narendra, Center for Systems Science, P. O. Box 208267, Yale Station, New Haven, CT 06520-8267, USA. From dayan at ai.mit.edu Thu May 9 09:49:55 1996 From: dayan at ai.mit.edu (Peter Dayan) Date: Thu, 9 May 1996 09:49:55 -0400 (EDT) Subject: Postdoc in computational neurobiology Message-ID: <9605091349.AA08695@sewer.ai.mit.edu> Computational Models of Cortical Development I would like to recruit a postdoc to work on computational models of activity-dependent development in the cortex. The project focuses on development in hierarchical processing structures, particularly the visual system, and we're planning to start with the Helmholtz machine and the wake-sleep algorithm. The position is in my lab in the Department of Brain and Cognitive Sciences at MIT. The job is available immediately, and will last initially for one year, extensible for at least another year. Applicants should have a PhD in a relevant area (such as computational modeling in neuroscience) and should be familiar with neurobiological and computational results in activity-dependent development. 
To apply, please send a CV and the names and addresses of two referees to me: Peter Dayan Department of Brain and Cognitive Sciences E25-210 MIT Cambridge, MA 02139 USA dayan at psyche.mit.edu tel: +1 (617) 252 1693 fax: +1 (617) 253 2964 From carmesin at schoner.physik.uni-bremen.de Thu May 9 13:25:40 1996 From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin) Date: Thu, 9 May 1996 19:25:40 +0200 Subject: Cortical Maps Message-ID: <199605091725.TAA09099@schoner.physik.uni-bremen.de> WWW-adress correction: The paper TOPOLOGY-PRESERVATION EMERGENCE BY THE HEBB RULE WITH INFINITESIMAL SHORT RANGE SIGNALS by Hans-Otto Carmesin, Institute for Theoretical Physics and Center for Cognitive Sciences, University Bremen, 28334 Bremen, Germany announced yesterday is otainable via WWW at http://schoner.physik.uni-bremen.de/~carmesin/ From robtag at dia.unisa.it Thu May 9 04:53:42 1996 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Thu, 9 May 1996 10:53:42 +0200 Subject: WIRN VIETRI 96 Message-ID: <9605090853.AA28070@udsab> WIRN VIETRI `96 VIII ITALIAN WORKSHOP ON NEURAL NETS IIASS "Eduardo R. Caianiello", Vietri sul Mare (SA) ITALY 23 - 25 May 1996 PRELIMINARY PROGRAM Thursday 23 May 9:30 - A Microelectronic Retinal Implant for the Blind J. Wyatt (Invited Talk) Mathematical Models 10:30 - Neural Networks for the Classification of Structures A. Sperduti & A. Starita 10:50 - A New Incremental Learning Technique N. Dunkin, J. Shawe-Taylor & P. Koiran 11:10 - A Bayesian Framework for Associative Memories E.R. Hancock & M. Pelillo 11:30 - Coffee Break 12:00 - Cultural Evolution in a Population of Neural Networks D. Denaro & D. Parisi Pattern Recognition 12:20 - Computational Intelligence in Electromagnetics F.C. Morabito 12:40 - The Modulate Asynchronous Information Arrangement (M.A.I.A.) for the Learning of Non Supervisioned Neural Network Applied to Compression and Decompression of Images G. Pappalardo, D. Rosaci & G.M.L. Sarne' 13:00 - Neuro-Fuzzy Processing of Remote Sensed Data P. Blonda, A. Bennardo & G. Satalino 13:20 - A Generalized Regularization Network for Remote Sensing Data Classification M. Ceccarelli & A. Petrosino 13:40 - Lunch 15:30 - Virtual reality and neural nets N.A. Borghese (Review Talk) Pattern Recognition 16:30 - Age Estimates of Stellar Systems by Artificial Neural Networks L. Pulone & R. Scaramella 16:50 - The use of Neural Networks for the Automatic Detection and Classification of Weak Photometric Sub-Components in Early-Type Galaxies M. Capaccioli, G. Di Sciascio, G. Longo, G. Richter & R. Tagliaferri 17:10 - A Hybrid Neural Network Architecture for Dynamic Scenes Understanding A. Chella, S. Gaglio & M. Frixione 17:30 - Coffee Break Architectures and Algorithms 18:00 - Fast Spline Neural Networks for Image Compression F. Piazza, S. Smerli, A. Uncini, M. Griffo & R. Zunino 18:20 - A Novel Hypothesis on Cortical Map: Topological Continuity F. Frisone, V. Sanguineti & P. Morasso Friday 24 May 9:30 - Models of biological vision as powerful analogue spatio-temporal filters for dynamical image processing including motion and colour J. Herault (Invited Talk) Applications 10:30 - Neural Nets for Hybrid on-line Plant Control M. Barbarino, S. Bruzzo & A.M. Colla 10:50 - Using Fuzzy Logic to Solve Optimization Problems by Hopfield Neural Model S. Cavalieri & M. Russo 11:10 - Spectral Mapping: a Comparison of Connectionist Approaches E. Trentin, D. Giuliani & C. 
Furlanello 11:30 - Coffee Break Architectures and Algorithms 12:00 - Constructive Fuzzy Neural Networks F.M. Frattale Mascioli, G. Martinelli & G.M. Varzi 12:20 - Some Comments and Experimental Results on Bayesian Regularization M. de Bollivier & D. Perrotta 12:40 - Recent Results in On-line Prediction and Boosting N. Cesa Bianchi & S. Panizza (Review Talk) 13:40 - Lunch 15:00 - Poster Session 16:00 - Eduardo R. Caianiello Lectures: - T. Parisini (winner of the 1995 E.R. Caianiello Fellowship Award) Neural Nonlinear Controllers and Observers: Stability Results - P. Frasconi (winner of the 1996 E.R. Caianiello Fellowship Award) Input/Output Hmms for sequence processing 17:00 - Annual S.I.R.E.N. Meeting 20:00 - Conference Dinner Saturday 25 May 9:30 - Title to be announced L.B. Almeida (Invited Talk) Architectures and Algorithms 10:30 - FIR NNs and Temporal BP: Implementation on the Meiko CS-2 A. d'Acierno, W. Ripullone & S. Palma 10:50 - Fast Training of Recurrent Neural Networks by the Recursive Least Squares Method R. Parisi, E.D. Di Claudio, A. Rapagnetta & G. Orlandi 11:10 - A Unification of Genetic Algorithms, Neural Networks and Fuzzy Logic: the GANNFL Approach M. Schmidt 11:30 - Coffee Break 12:00 - A Learning Strategy which Increases Partial Fault Tolerance of Neural Nets S. Cavalieri & O. Mirabella 12:20 - Off-Chip Training of Analog Hardware Feed-Forward Neural Networks Through Hyper - Floating Resilient Propagation G.M. Bollano, M. Costa, D. Palmisano & E. Pasero 12:40 - A Reconfigurable Analog VLSI Neural Network Architecture G.M. Bo, D.D. Caviglia, M. Valle, R. Stratta & E. Trucco 13:00 - An Adaptable Boolean Neural Net Trainable to Comment on its own Innerworkings F.E. Lauria, M. Sette & S. Visco POSTER SESSION - The Computational Neural Map and its Capacity F. Palmieri & D. Mattera (Mathematical models) - Proposal of a Darwin-Neural Network for a Robot Implementation C. Domeniconi (Robotica) - Solving Algebraic and Geometrical Problems Using Neural Networks M. Ferraro & T. Caelli - Simulation of Traffic Flows in Transportation Networks with Non Supervisioned MAIA Neural Network G. Pappalardo, M.N. Postorino, D. Rosaci & G.M.L. Sarne' - FIR NNs and Time Series Prediction: Applications to Stock Market= Forecasting A. d'Acierno, W. Ripullone & S. Palma - Verso la Previsione a Breve Scadenza della Visibilita' Metereologica Attraverso una Rete Neurale a Back-Propagation: Ottimizzazione del Modello per Casi di Nebbia A. Pasini & S. Potesta' - Are Multilayer Perceptrons Adequate for Pattern Recognition and= Verification? M. Gori & R. Scarselli - Proof of the Universal Approximation of a Set of Fuzzy Functions F. Masulli, M. Marinaro & D. Oricchio - An Integrated Neural and Algorithmic System for Optical Flow Computation A. Criminisi, M. Gioiello, D. Molinelli & F. Sorbello - A Mlp-Based Digit and Uppercase Characters Recognition System M. Gioiello, E. Martire, F. Sorbello & G. Vassallo - Neural Network Fuzzification: Critical Review of the Fuzzy Learning Vector Quantization Model A. Baraldi & F. Parmiggiani The registration is of 300.000 Italian Lire ( 250.000 Italian Lire for SIREN members) and can be made on site. More information can be found in the www pages at the address below: http:://www-dsi.ing.unifi.it/neural Hotel Information - 1996 We are glad to inform you about the Hotel prizes of the Hotels near the International Institute for Advanced Scientific Studies. The reservation must be made directly to the Hotels at least 20 days before the arrival date. 
LLOYD'S BAIA HOTEL - Vietri sul Mare - tel. 089-210145 - fax 089-210186 cat. **** Including Breakfast Half board* Full board* Single room Double room Single room Double room Single room Double room L. 140.000 L. 185.000 L. 160.000 L. 130.000 p.p. L. 185.000 L. 155.000 p.p. * Including drinks (1/4 wine and 1/2 mineral water for each lunch) HOTEL PLAZA - P.zza Ferrovia - Salerno - tel. 089-224477 - fax. 089-237311 cat. *** Without Breakfast Including Breakfast Single room Double room Single room Double room L. 75.000 L. 110.000 L. 85.000 L. 130.000 HOTEL RAITO - Raito (Vietri sul Mare - 10' bus) - tel. 089-210033 - fax 089-211434 cat. **** Including Breakfast Half board Full board Single room Double room Single room Double room Single room Double room L. 130.000 L. 200.000 L. 170.000 L. 140.000 p.p. L. 200.000 L. 180.000 p.p. HOTEL LA LUCERTOLA - Marina di Vietri s/m (200 mt. from IIASS) - tel. 089-210837/8 cat.*** Including Breakfast Half board* Full board* Single room Double room Single room Double room Single room Double room L. 75.000 L. 100.000 L. 95.000 L.90.000 p.p. L. 110.000 L. 100.000 p.p. * Including drinks (1/4 wine and 1/2 mineral water for each lunch) HOTEL BRISTOL - Marina di Vietri s/m (200 mt. from IIASS) - tel. 089-210216 cat. *** Including Breakfast Half board* Full board* Single room Double room Single room Double room Single room Double room L. 79.000 L. 105.000 L. 100.000 L. 90.000 p.p. L. 110.000 L. 100.000 p.p. * Including drinks (1/4 wine and 1/2 mineral water for each lunch) HOTEL VIETRI - Marina di Vietri s/m (200 mt. from IIASS) - tel. 089-761644/210400 cat. ** Including Breakfast Half board* Full board* Single room Double room Single room Double room Single room Double room L. 57.000 L. 85.000 L. 80.000 L. 70.000 p.p. L. 90.000 L. 80.000 p.p. * Including drinks (1/4 wine and 1/2 mineral water for each lunch) From john at dcs.rhbnc.ac.uk Fri May 10 04:35:28 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Fri, 10 May 96 09:35:28 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199605100835.JAA18839@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. *** Please note that the location of the files was changed at the beginning of ** the year, so that any copies you have of the previous instructions should be * discarded. The new location and instructions are given at the end of the list. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-043: ---------------------------------------- Elimination of Constants from Machines over Algebraically Closed Fields by Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France Abstract: Let $\k$ be an algebraically closed field of characteristic 0. We show that constants can be removed efficiently from any machine over $\k$ solving a problem which is definable without constants. This gives a new proof of the transfer theorem of Blum, Cucker, Shub \& Smale for the problem $\p \stackrel{?}{=}\np$. We have similar results in positive characteristic for non-uniform complexity classes. 
We also construct explicit and correct test sequences (in the sense of Heintz and Schnorr) for the class of polynomials which are easy to compute. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-044: ---------------------------------------- Hilbert's Nullstellensatz is in the Polynomial Hierarchy by Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France Abstract: We show that if the Generalized Riemann Hypothesis is true, the problem of deciding whether a system of polynomial equations in several complex variables has a solution is in the second level of the polynomial hierarchy. The best previous bound was PSPACE. The possibility that this problem might be NP-complete is also discussed (it is well-known to be NP-hard). ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-045: ---------------------------------------- Networks of Spiking Neurons: The Third Generation of Neural Network Models by Wolfgang Maass, Technische Universitaet Graz, Austria Abstract: The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-046: ---------------------------------------- Use Of Neural Network Ensembles for Portfolio Selection and Risk Management by D.L.Toulson, Intelligent Financial Systems Ltd., UK S.P.Toulson, London School Of Economics, UK Abstract: A well known method of managing the risk whilst maximising the return of a portfolio is through Markowitz Analysis of the efficient set. A key pre-requisite for this technique is the accurate estimation of the future expected returns and risks (variance of re turns) of the securities contained in the portfolio along with their expected correlations. The estimates for future returns are typically obtained using weighted averages of historical returns of the securities involved or other (linear) techniques. Estimates for the volatilities of the securities may be made in the same way or through the use of (G)ARCH or stochastic volatility (SV) techniques. In this paper we propose the use of neural networks to estimate future returns and risks of securities. The networks are arranged into {\em committees}. Each committee contains a number of independ ently trained neural networks. The task of each committee is to estimate either the future return or risk of a particular security. The inputs to the networks of the committee make use of a novel discriminant analysis technique we have called {\em Fuzzy Discriminants Analysis}. The estimates of future returns and risks provided by the committees are then used to manage a portfolio of 40 UK equities over a five year period (1989-1994). 
The management of the portfolio is constrained such that at any time it should have the same risk characteristic as the FTSE-100 index. Within this constraint, the portfolio is chosen to provide the maximum possible return. We show that the managed portfolio significantly outperforms the FTSE-100 index in terms of both overall return and volatility.

--------------------------------------------------------------------
***************** ACCESS INSTRUCTIONS ******************

The Report NC-TR-96-001 can be accessed and printed as follows

% ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-96-001.ps.Z
ftp> bye
% zcat nc-tr-96-001.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-96-002-title.ps.Z and nc-tr-96-002-body.ps.Z. The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-96-002*, will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage (note that this is undergoing some corrections and may be temporarily inaccessible): http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

Best wishes
John Shawe-Taylor

From josh at vlsia.uccs.edu Fri May 10 13:33:38 1996 From: josh at vlsia.uccs.edu (Alspector) Date: Fri, 10 May 1996 11:33:38 -0600 (MDT) Subject: Student assistantships in Colorado Message-ID:

GRADUATE STUDENT ASSISTANTSHIPS IN NEURAL SYSTEMS IN COLORADO

There are several student assistantship positions available in the electrical and computer engineering department at the University of Colorado at Colorado Springs. These are under the direction of Professor Joshua Alspector. A brief description of the areas of interest follows:

The application of neural techniques to recognize patterns in handwritten documents and in remote-sensing images. The documents are from the Archives of the Indies in Seville and date from the time of Columbus. The object is to develop a version of the UNIX 'grep' command for visual images. This could also be applied to the detection of scenes in videos based on a rough visual description or a similar image.

The application of neural-style chips and algorithms to demanding problems in signal processing. These include adaptive non-linear equalization of underwater acoustic communication channels and magnetic recording channels. It is likely also to involve integrating the learning electronics with micro-machined sonic transducers directly on silicon.

Learning algorithms for implementation in VLSI. There is an existing learning system based on the Boltzmann machine. Improvements to this system and new algorithms are sought.

Adaptive user models for information services on the internet and in other distributed information systems. A system that predicts preferences for movies has been researched for a video-on-demand service. Similar systems for other information products and services such as music, restaurants, personalized news, shopping, etc. will be investigated.

Wireless local area networks. A system which uses low power electronics for classroom size wireless communication among a variety of terminals will be researched.
This system may use low power neural signal processing in analog VLSI. Smart sensors for sports activities to aid in physiologic and performance measurements of athletes at the Olympic Training Center. COMPLETED applications are due July 8, 1996 for the Fall, 1996 semester. For more information on applications contact: Susan M. Bennis ECE Graduate Coordinator Univ. of Colorado at Col. Springs Dept. of Elec. & Comp. Eng. P.O. Box 7150 Colorado Springs, CO 80933-7150 (719) 593-3351 (719) 593-3589 (fax) smbennis at elite.uccs.edu For further information on projects contact: Professor Joshua Alspector (719) 593 3510 josh at vlsia.uccs.edu From maass at igi.tu-graz.ac.at Sun May 12 15:11:12 1996 From: maass at igi.tu-graz.ac.at (Wolfgang Maass) Date: Sun, 12 May 96 21:11:12 +0200 Subject: 2 papers in NEUROPROSE on spiking versus sigmoidal neurons Message-ID: <199605121911.AA22731@figids03.tu-graz.ac.at> 1) The file maass.third-generation.ps.Z is now available for copying from the Neuroprose repository. This is a 23-page long paper. Hardcopies are not available. FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.third-generation.ps.Z Networks of Spiking Neurons: The Third Generation of Neural Network Models Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz Klosterwiesgasse 32/2 A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Abstract The computational power of formal models for networks of spiking neurons is compared with that of traditional neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. It is shown that networks of spiking neurons are computationally more powerful than threshold circuits and sigmoidal neural nets of the same size. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. *************************************************************** 2) The file maass.sigmoidal-spiking.ps.Z is now also available for copying from the Neuroprose repository. This is a 27-page long paper. Hardcopies are not available. FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.sigmoidal-spiking.ps.Z An Efficient Implementation of Sigmoidal Neural Nets in Temporal Coding with Noisy Spiking Neurons Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz Klosterwiesgasse 32/2 A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Abstract We show that networks of spiking neurons can simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. It is based on the observation that incoming "postsynaptic potentials" can SHIFT the firing time of a spiking neuron. The resulting new simulation is substantially faster and hence more consistent with experimental results about the speed of information processing in cortical neural systems. 
As a consequence we can show that networks of noisy spiking neurons are "universal approximators" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organization of computations in networks of spiking neurons has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, our fast and noise-robust implementation of sigmoidal neural nets via temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse stream VLSI. (To appear in Neural Computation.)

From zhang at salk.edu Mon May 13 02:31:47 1996 From: zhang at salk.edu (Kechen Zhang) Date: Sun, 12 May 1996 23:31:47 -0700 (PDT) Subject: paper available: HD cell theory Message-ID: <9605130631.AA10324@salk.edu>

From tbl at di.ufpe.br Mon May 13 15:13:32 1996 From: tbl at di.ufpe.br (tbl@di.ufpe.br) Date: Mon, 13 May 1996 16:13:32 -0300 Subject: call for papers Message-ID: <9605131913.AA05844@pesqueira>

III Brazilian Symposium on Neural Networks
******************************************
First call for papers

Sponsored by the Brazilian Computer Society (SBC)

The Third Brazilian Symposium on Neural Networks will be held at the Federal University of Pernambuco, in Recife (Brazil), from the 12th to the 14th of November, 1996. The SBRN symposia, as they were initially named, have been organized by the interest group in Neural Networks of the Brazilian Computer Society since 1994. The third edition of the meeting follows the very successful organization of the previous events, which brought together the main developments of the area in Brazil and had the participation of many national and international researchers both as invited speakers and as authors of papers presented at the symposium.

Recife is a very pleasant city in the northeast of Brazil, known for its good climate and beautiful beaches, with sunshine throughout almost the whole year. The city, whose name originated from the coral formations in the seaside port and beaches, is in a strategic touristic situation in the region and offers a good variety of hotels both in the city historic center and at the seaside resort.

Scientific papers will be analyzed by the program committee. This analysis will take into account originality, significance to the area, and clarity. Accepted papers will be fully published in the conference proceedings. The major topics of interest include, but are not limited to: Biological Perspectives, Theoretical Models, Algorithms and Architectures, Learning Models, Hardware Implementation, Signal Processing, Robotics and Control, Parallel and Distributed Implementations, Pattern Recognition, Image Processing, Optimization, Cognitive Science, Hybrid Systems, Dynamic Systems, Genetic Algorithms, Fuzzy Logic, Applications.

Program Committee: (Tentative) - Teresa Bernarda Ludermir - DI/UFPE - Andre C. P. L. F. de Carvalho - ICMSC/USP (chair) - Germano C.
Vasconcelos - DI/UFPE - Antonio de Padua Braga - DEE/UFMG - Dibio Leandro Borges - CEFET/PR - Paulo Martins Engel - II/UFRGS - Ricardo Machado - PUC/Rio - Valmir Barbosa - COPPE/UFRJ - Weber Martins - EEE/UFG Organising Committee: - Teresa Bernarda Ludermir - DI/UFPE (chair) - Edson C. B. Carvalho Filho - DI/UFPE - Germano C. Vasconcelos - DI/UFPE - Paulo Jorge Leitao Adeodato- DI/UFPE SUBMISSION PROCEDURE: The symposium seeks contributions to the state of the art and future perspectives of Neural Networks research. Submitted papers must be in Portuguese, English or Spanish. The submissions must include the original and three copies of the paper and must follow the format below (Electronic mail and FAX submissions are NOT accepted). The paper must be printed using a laser printer, in two-column format, not numbered, 8.5 X 11.0 inch (21,7 X 28.0 cm). It must not exceed eight pages, including all figures and diagrams. The font size should be 10 pts, such as Times-Roman or equivalent, with the following margins: right and left 2.5 cm, top 3.5 cm, and bottom 2.0 cm. The first page should contain the paper's title, the complete author(s) name(s), affiliation(s), and mailing address(es), followed by a short (150 words) abstract and a list of descriptive key words. The submission should also include an accompanying letter containing the following information : * Manuscript title * First author's name, mailing address and e-mail * Technical area of the paper SUBMISSION ADDRESS: Four copies (one original and three copies) must be submitted to: Andre C. P. L. F. de Carvalho Coordenador do Comite de Programa - III SBRN Departamento de Ciencias de Computacao e Estatistica ICMSC - Universidade de Sao Paulo Caixa Postal 668 CEP 13560.070 Sao Carlos, SP Brazil Phone: +55 162 726222 FAX: +55 162 749150 E-mail: IIISBRN at di.ufpe.br IMPORTANT DATES: August 16, 1996 (mailing date) Deadline for paper submission September 16, 1996 Notification of acceptance/rejection November, 12-14 1996 III SBRN ADDITIONAL INFORMATION: * Up-to-minute information about the symposium is available on the World Wide Web (WWW) at http://www.di.ufpe.br/~IIISBRN/web_sbrn * Questions can be sent by E-mail to IIISBRN at di.ufpe.br We look forward to seeing you in Recife ! From icsc at freenet.edmonton.ab.ca Tue May 14 12:25:24 1996 From: icsc at freenet.edmonton.ab.ca (icsc@freenet.edmonton.ab.ca) Date: Tue, 14 May 1996 10:25:24 -0600 (MDT) Subject: ISFL'97 Submissions Message-ID: Please note that the deadline for submissions to ISFL'97 approaches on May 31, 1996. Please notify if you need an extension. Announcement and Call for Papers Second International ICSC Symposium on FUZZY LOGIC AND APPLICATIONS ISFL'97 To be held at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland February 12 - 14, 1997 I. SPONSORS Swiss Federal Institute of Technology (ETH), Zurich, Switzerland and ICSC, International Computer Science Conventions, Canada/Switzerland II. PURPOSE OF THE CONFERENCE This conference is the successor of the highly successful meeting held in Zurich in 1995 (ISFL'95) and is intended to provide a forum for the discussion of new developments in fuzzy logic and its applications. An invitation to participate is extended both to those who took part in ISFL'95 and to others working in this field. Applications of fuzzy logic have played a significant role in industry, notably in the field of process and plant control, especially in applications where accurate modelling is difficult. 
The organisers hope that contributions will come not only from this field, but also from newer applications areas, perhaps in business, financial planning management, damage assessment, security, and so on. III. TOPICS Contributions are sought in areas based on the list below, which is indicative only. Contributions from new application areas will be particularly welcome. - Basic concepts such as various kinds of Fuzzy Sets, Fuzzy Relations, Possibility Theory - Neuro-Fuzzy Systems and Learning - Fuzzy Decision Analysis - Image Analysis with Fuzzy Techniques - Mathematical Aspects such as non-classical logics, Category Theory, Algebra, Topology, Chaos Theory - Modeling, Identification, Control - Robotics - Fuzzy Reasoning, Methodology and Applications, for example in Artificial Intelligence, Expert Systems, Image Processing and Pattern Recognition, Cluster Analysis, Game Theory, Mathematical Programming, Neural Networks, Genetic Algorithms and Evolutionary Computing - Implementation, for example in Engineering, Process Control, Production, Medicine - Design - Damage Assessment - Security - Business, Finance, Management IV. INTERNATIONAL SCIENTIFIC COMMITTEE (ISC) - Honorary Chairman: M. Mansour, Swiss Federal Institute of Technology, Zurich - Chairman: N. Steele, Coventry University, U.K. - Vice-Chairman: E. Badreddin, Swiss Federal Institute of Technology, Zurich - Members: E. Alpaydin, Turkey P.G. Anderson, USA Z. Bien, Korea H.H. Bothe, Germany G. Dray, France R. Felix, Germany J. Godjevac, Switzerland H. Hellendoorn, Germany M. Heiss, Austria K. Iwata, Japan M. Jamshidi, USA E.P. Klement, Austria B. Kosko, USA R. Kruse, Germany F. Masulli, Italy S. Nahavandi, New Zealand C.C. Nguyen, USA V. Novak, Czech Republic R. Palm, Germany D.W. Pearson, France I. Perfilieva, Russia B. Reusch, Germany G.D. Smith, U.K. V. ORGANISING COMMITTEE ISFL'97 is a joint operation between the Swiss Federal Institute of Technology (ETH), Zurich and International Computer Science Conventions (ICSC), Canada/Switzerland. VI. PUBLICATION OF PAPERS All accepted papers will appear in the conference proceedings, published by ICSC Academic Press. In addition, some selected papers may also be considered for journal publication. VII. SUBMISSION OF MANUSCRIPTS Prospective authors are requested to send two copies of their abstracts of 500 words for review by the International Scientific Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. If authors believe that more details are necessary to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the International Scientific Committee. The abstract should also include: - Title of proposed paper - Authors names, affiliations, addresses - Name of author to contact for correspondence - E-mail address and fax number of contact author - Name of topic which best describes the paper (max. 5 keywords) Contributions are welcome from those working in industry and having experience in the topics of this conference as well as from academics. The conference language is English. Abstracts may be submitted either by electronic mail (ASCII text), fax or mail (2 copies) to either one of the following addresses: ICSC Canada P.O. Box 279 Millet, Alberta T0C 1Z0 Canada Fax: +1-403-387-4329 Email: icsc at freenet.edmonton.ab.ca or ICSC Switzerland P.O. Box 657 CH-8055 Zurich Switzerland VIII. 
OTHER CONTRIBUTIONS Anyone wishing to organise a workshop, tutorial or discussion, is requested to contact the chairman of the conference, Prof. Nigel Steele (e-mail: nsteele at coventry.ac.uk / phone: +44-1203-838568 / fax: +44-1203-838585) before August 31, 1996. IX. DEADLINES AND REGISTRATION It is the intention of the organisers to have the conference proceedings available for the delegates. Consequently, the deadlines below are to be strictly respected: - Submission of Abstracts: May 31, 1996 - Notification of Acceptance: August 31, 1996 - Delivery of full papers: October 31, 1996 X. ACCOMMODATION Block reservations will be made at nearby hotels and accommodation at reasonable rates (not included in the registration fee) will be available upon registration (full details will follow with the letters of acceptance) XI. SOCIAL AND TOURIST ACTIVITIES A social programme, including a reception, will be organized on the evening of February 13, 1997. This acitivity will also be available for accompanying persons. Winter is an attractive season in Switzerland and many famous alpine resorts are in easy reach by rail, bus or car for a one or two day excursion. The city of Zurich itself is the proud home of many art galleries, museums or theatres. Furthermore, the world famous shopping street 'Bahnhofstrasse' or the old part of the town with its many bistros, bars and restaurants are always worth a visit. XII. INFORMATION For further information please contact either of the following: - ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada E-mail: icsc at freenet.edmonton.ab.ca Fax: +1-403-387-4329 Phone: +1-403-387-3546 - ICSC Switzerland, P.O. Box 657, CH-8055 Zurich, Switzerland Fax: +41-1-761-9627 - Prof. Nigel Steele, Chairman ISFL'97, Coventry University, U.K. E-mail: nsteele at coventry.ac.uk Fax: +44-1203-838585 Phone: +44-1203-838568 From goldfarb at unb.ca Tue May 14 13:06:41 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Tue, 14 May 1996 14:06:41 -0300 (ADT) Subject: Workshop: WHAT IS INDUCTIVE LEARNING? (program) Message-ID: Please post. My apologies if you receive multiple copies of the following announcement. ********************************************************************** WHAT IS INDUCTIVE LEARNING? On the foundations of AI and Cognitive Science May 20-21, 1996 held in conjunction with the 11th biennial Canadian AI conference, at the Holiday Inn on King, in Toronto, Canada. Workshop Chair: Lev Goldfarb Each talk (except opening remarks) is 30 min. followed by 30 min. question/discussion period. Monday, May 20, Morning Session ------------------------------- 8:45-9:00 Lev Goldfarb, University of New Brunswick, Canada "Opening Remarks: The inductive learning process as the central cognitive process" 9:00 Chris Thornton, University of Sussex, UK "Does Induction always lead to representation?" 10:10 Lev Goldfarb, University of New Brunswick, Canada "What is inductive learning? Construction of the inductive class representation" 11:20 Anselm Blumer, Tufts University, USA (invited talk) "PAC learning and the Vapnik-Chervonenkis dimension" Monday, May 20, Afternoon Session --------------------------------- 2:00 Charles Ling, University of Western Ontario, Canada (invited talk) "Symbolic and neural network learning in cognitive modeling: Where's the beef?" 
3:10 Eduardo Perez, Ricardo Vilalta and Larry Rendell, University of Illinois, USA (invited talk) "On the importance of change of representation in induction" 4:20 Sayan Bhattacharyya and John Laird, University of Michigan, USA "A cognitive model of recall motivated by inductive learning" Tuesday, May 21, Morning Session -------------------------------- 9:00 Ryszard Michalski, George Mason University, USA (invited talk) "Inductive inference from the viewpoint of inferential theory of learning" 10:10 Lev Goldfarb, Sanjay Deshpande and Virendra Bhavsar, University of New Brunswick, Canada "Inductive theory of vision" 11:20 David Gadishev and David Chiu, University of Guelph, Canada "Learning basic elements for texture representation and comparison" Tuesday, May 21, Afternoon Session ---------------------------------- 2:00 John Caulfield, Center of Applied Optics, A&M University, USA (invited talk) "Induction and Physics" 3:10 Igor Jurisica, University of Toronto, Canada "Inductive learning and case-based reasoning" 4:20 Concluding discussion: What is inductive learning? ************************************************************************ URL for Canadian AI'96 Conference http://ai.iit.nrc.ca/cscsi/conferences/ai96.html From jdcohen+ at andrew.cmu.edu Tue May 14 14:21:39 1996 From: jdcohen+ at andrew.cmu.edu (Jonathan D. Cohen) Date: Tue, 14 May 1996 14:21:39 -0400 (EDT) Subject: Postdoc Position Available Message-ID: Postdoctoral Position: Computational Modeling of Neuromodulation and/or Prefrontal Cortex Function ---------------- Center for the Neural Basis of Cognition Carnegie Mellon University and the University of Pittsburgh ---------------- A postdocotral position is available starting September 1, 1996 for someone interested in pursuing computational modeling approaches to the role of neuromodulation and/or prefrontal cortical function in cognition. The nature of the position is flexible, depending upon the individual's interest and expertise. Approaches can be focused at the neurobiological level (e.g., modeling detailed physiological characteristics of neuromodulatory systems, such as locus coeruleus and/or dopaminergic nuclei, or the circuitry of prefrontal cortex), or at the more cognitive level (e.g., the nature of representations and/or the mechanisms involved in active maintenance of information within prefrontal cortex, and their role in working memory). The primary requirement for the position is a Ph.D. in the cognitive, computational, or neurosciences, and extensive experience with computational modeling work, either at the PDP/connectionist or detailed biophysical level. The candidate will be working directly with Jonathan Cohen and Randall O'Reilly within the Department of Psychology at CMU, and in potential collaboration with other members of the Center for the Neural Basis of Cognition (CNBC), including James McClelland, David Lewis, German Barrionuevo, Susan Sesack, G. Bard Ermantrout, as well as collaborators at other institutions, such as Gary Aston-Jones (Hahnemann University), Joseph LeDoux (NYU) and Peter Dayan (MIT). Available resources include direct access to state-of-the-art computing facilities within the CNBC (IBM SP-2 and SGI PowerChallenge), neuroimaging facilities (PET and 3T fMRI at the University of Pittsburgh), and clinical populations (Western Psychiatric Institute and Clinic). Carnegie Mellon University and the University of Pittsburgh are both equal opportunity employers; minorities and women are encouraged to apply. 
Inquiries can be directed to Jonathan Cohen (jdcohen at cmu.edu) or Randy O'Reilly (oreilly at cmu.edu). Applicants should send a CV, a small number of relevant publications, and the names and addresses of at least two references, to: Jonathan D. Cohen Department of Psychology Carnegie Mellon University Pittsburgh, PA 15213 (412) 268-2810 From moeller at informatik.uni-bonn.de Wed May 15 08:13:54 1996 From: moeller at informatik.uni-bonn.de (Knut Moeller) Date: Wed, 15 May 1996 14:13:54 +0200 (MET DST) Subject: HeKoNN96-CfP Message-ID: <199605151213.OAA03722@macke.informatik.uni-bonn.de> This announcement was sent to various lists. Sorry if you recieved multiple copies. CALL FOR PARTICIPATION ================================================================= = = = H e K o N N 9 6 = = = Autumn School in C o n n e c t i o n i s m and N e u r a l N e t w o r k s October 2-6, 1996 Muenster, Germany Conference Language: German ---------------------------------------------------------------- A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following address: WWW: http://set.gmd.de/AS/fg1.1.2/hekonn = = = O V E R V I E W = = = Artificial neural networks (ANN's) have been discussed in many diverse areas, ranging from models of cortical learning to the control of industrial processes. The goal of the Autumn School in Connectionionism and Neural Networks is to give a comprehensive introduction to connectionism and artificial neural networks (ANN's) and to provide an overview of the current state of the art. Courses will be offered in five thematic tracks. (The conference language is German.) The FOUNDATION track will introduce basic concepts (A. Zell, Univ. Stuttgart) and theoretical issues. Hardwareaspects (U. Rueckert, Univ. Paderborn), Lifelong Learning (G. Paass, GMD St.Augustin), algorithmic complexity of learning procedures (M. Schmitt, TU Graz) and convergence properties of ANN's (K. Hornik, TU Vienna) are presented in further lectures. This year, a special track was devoted to BRAIN RESEARCH. Courses are offered about the simulation of biological neurons (R. Rojas, Univ. Halle), theoretical neurobiology (H. Gluender, LMU Munich), learning and memory (A. Bibbig, Univ. Ulm) and dynamical aspects of cortical information processing (H. Dinse, Univ. Bochum). In the track on SYMBOLIC CONNECTIONISM and COGNITIVE MODELLING, consists of courses on: procedures for extracting rules from ANN's (J. Diederich, QUT Brisbane). representation and cognitive models (G. Peschl, Univ. Vienna), autonomous agents and ANN's (R. Pfeiffer, ETH Zuerich) and hybrid systems (A. Ultsch, Univ. Marburg). APPLICATIONS of ANN's are covered by courses on image processing (H.Bischof, TU Vienna), evolution strategies and ANN's (J. Born, FU Berlin), ANN's and fuzzy logic (R. Kruse, Univ. Braunschweig), and on medical applications (T. Waschulzik, Univ. Bremen). In addition, there will be courses on PROGRAMMING and SIMULATORS. Participants will have the opportunity to work with the SNNS simulator (G. Mamier, A. Zell, Univ. Stuttgart) and the Vienet2/ECANSE simulation tool (G. Linhart, TU Vienna). 
From ATAXR at asuvm.inre.asu.edu Tue May 14 16:20:58 1996 From: ATAXR at asuvm.inre.asu.edu (Asim Roy) Date: Tue, 14 May 1996 13:20:58 -0700 (MST) Subject: Connectionist Learning - Some New Ideas Message-ID: <01I4P4L13GWI8X1P3A@asu.edu> We have recently published a set of principles for learning in neural networks/connectionist models that is different from classical connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE Transactions on Neural Networks, to appear; see references below). Below is a brief summary of the new learning theory and why we think classical connectionist learning, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of training examples for learning), is not brain-like at all. Since vigorous and open debate is very healthy for a scientific field, we invite comments for and against our ideas from all sides. "A New Theory for Learning in Connectionist Models" We believe that a good rigorous theory for artificial neural networks/connectionist models should include learning methods that perform the following tasks or adhere to the following criteria: A. Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem, since, in general, it is a task performed by the brain. A pre-designed net should not be provided to the method as part of its external input, since it never is an external input to the brain. From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - a system that is expected to learn "on its own" without any external design assistance. B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties. Some people might argue that ordinary brains, and particularly those with learning disabilities, do exhibit such problems and that these learning requirements are the attributes only of a "super" brain. The goal of neuroengineers and neuroscientists is to design and build learning systems that are robust, reliable and powerful. They have no interest in creating weak and problematic learning devices that need constant attention and intervention. C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do. For example, one which learns from only 10 examples learns faster than one which requires a 100 or a 1000 examples. We have shown that on-line learning (see references below), when not allowed to store training examples in memory, can be extremely slow in learning - that is, would require many more examples to learn a given task compared to methods that use memory to remember training examples. It is not desirable that a neural network/connectionist learning system be similar in characteristics to learners characterized by such sayings as "Told him a million times and he still doesn't understand." On-line learning systems must learn rapidly from only a few examples. D. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and Papert[1988]). It must be able to both design and train an appropriate net in polynomial time. That is, given P examples, the learning time (i.e. both design and training time) should be a polynomial function of P. 
This, again, is a critical computational property from a neuroengineering and neuroscience point of view. This property has its origins in the belief that biological systems (insects, birds for example) could not be solving NP-hard problems, especially when efficient, polynomial time learning methods can conceivably be designed and developed. E. Generalization in Learning: The method must be able to generalize reasonably well so that only a small amount of network resources is used. That is, it must try to design the smallest possible net, although it might not be able to do so every time. This must be an explicit part of the algorithm. This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task. General Comments This theory defines algorithmic characteristics that are obviously much more brain-like than those of classical connectionist theory, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of actual training examples for learning). Judging by the above characteristics, classical connectionist learning is not very powerful or robust. First of all, it does not even address the issue of network design, a task that should be central to any neural network/connectionist learning theory. It is also plagued by efficiency (lack of polynomial time complexity, need for excessive number of teaching examples) and robustness problems (local minima, oscillation, catastrophic forgetting, lost memories), problems that are partly acquired from its attempt to learn without using memory. Classical connectionist learning, therefore, is not very brain-like at all. As far as I know, there is no biological evidence for any of the premises of classical connectionist learning. Without having to reach into biology, simple common sense arguments can show that the ideas of local learning, memoryless learning and predefined nets are impractical even for the brain! For example, the idea of local learning requires a predefined network. Classical connectionist learning forgot to ask a very fundamental question - who designs the net for the brain? The answer is very simple: Who else, but the brain itself! So, who should construct the net for a neural net algorithm? The answer again is very simple: Who else, but the algorithm itself! (By the way, this is not a criticism of constructive algorithms that do design nets.) Under classical connectionist learning, a net has to be constructed (by someone, somehow - but not by the algorithm!) prior to having seen a single training example! I cannot imagine any system, biological or otherwise, being able to construct a net with zero information about the problem to be solved and with no knowledge of the complexity of the problem. (Again, this is not a criticism of constructive algorithms.) A good test for a so-called "brain-like" algorithm is to imagine it actually being part of a human brain. Then examine the learning phenomenon of the algorithm and compare it with that of the human's. For example, pose the following question: If an algorithm like back propagation is "planted" in the brain, how will it behave? Will it be similar to human behavior in every way? Look at the following simple "model/algorithm" phenomenon when the back- propagation algorithm is "fitted" to a human brain. 
You give it a few learning examples for a simple problem and after a while this "back prop fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again." And you ask: "Which examples should I go over again?" And this "back prop fitted" brain replies: "You need to go over all of them. I don't remember anything you told me." So you go over the teaching examples again. And let's say it gets stuck in a local minimum again and, as usual, does not remember any of the past examples. So you provide the teaching examples again and this process is repeated a few times until it learns properly. The obvious questions are as follows: Is "not remembering" any of the learning examples a brain- like phenomenon? Are the interactions with this so-called "brain- like" algorithm similar to what one would actually encounter with a human in a similar situation? If the interactions are not similar, then the algorithm is not brain-like. A so-called brain-like algorithm's interactions with the external world/teacher cannot be different from that of the human. In the context of this example, it should be noted that storing/remembering relevant facts and examples is very much a natural part of the human learning process. Without the ability to store and recall facts/information and discuss, compare and argue about them, our ability to learn would be in serious jeopardy. Information storage facilitates mental comparison of facts and information and is an integral part of rapid and efficient learning. It is not biologically justified when "brain-like" algorithms disallow usage of memory to store relevant information. Another typical phenomenon of classical connectionist learning is the "external tweaking" of algorithms. How many times do we "externally tweak" the brain (e.g. adjust the net, try a different parameter setting) for it to learn? Interactions with a brain-like algorithm has to be brain-like indeed in all respect. The learning scheme postulated above does not specify how learning is to take place - that is, whether memory is to be used or not to store training examples for learning, or whether learning is to be through local learning at each node in the net or through some global mechanism. It merely defines broad computational characteristics and tasks (i.e. fundamental learning principles) that are brain-like and that all neural network/connectionist algorithms should follow. But there is complete freedom otherwise in designing the algorithms themselves. We have shown that robust, reliable learning algorithms can indeed be developed that satisfy these learning principles (see references below). Many constructive algorithms satisfy many of the learning principles defined above. They can, perhaps, be modified to satisfy all of the learning principles. The learning theory above defines computational and learning characteristics that have always been desired by the neural network/connectionist field. It is difficult to argue that these characteristics are not "desirable," especially for self-learning, self- contained systems. For neuroscientists and neuroengineers, it should open the door to development of brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. It essentially tries to redefine the nature of algorithms considered to be brain- like. 
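To make the flavor of these principles a little more concrete, here is a deliberately tiny Python sketch of a learner that stores its training examples and grows its own prototype-based net only when the current net fails. It is an invented illustration of (weak forms of) principles A, C, D and E, not the RBF algorithms cited in the references below, and every name in it is made up for the example:

import math

class GrowingPrototypeNet:
    # Toy learner, invented for this illustration: it starts with no units,
    # stores training examples as prototype "units", and grows only when
    # the current net misclassifies an example.
    def __init__(self):
        self.prototypes = []   # list of (input_vector, label) pairs: the "net"

    def predict(self, x):
        if not self.prototypes:
            return None
        nearest = min(self.prototypes, key=lambda p: math.dist(x, p[0]))
        return nearest[1]

    def learn(self, examples):
        # One pass over P examples; each step costs O(size of current net),
        # so designing and training the net is polynomial in P (principle D).
        for x, y in examples:
            if self.predict(x) != y:               # grow only when forced to (A, E)
                self.prototypes.append((x, y))     # remember the example (C)

net = GrowingPrototypeNet()
net.learn([((0.0, 0.0), "a"), ((1.0, 1.0), "b"), ((0.9, 1.1), "b")])
print(net.predict((0.1, 0.1)))   # -> "a", and the net holds only 2 units

Each example is handled in time proportional to the current net size, so a pass over P examples is polynomial in P, and units are added only when an error forces them. The sketch is meant only to show that the principles constrain a learner's observable behavior, not its internal mechanism.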
And it defines the foundations for developing truly self- learning systems - ones that wouldn't require constant intervention and tweaking by external agents (human experts) for it to learn. It is perhaps time to reexamine the foundations of the neural network/connectionist field. This mailing list/newsletter provides an excellent opportunity for participation by all concerned throughout the world. I am looking forward to a lively debate on these matters. That is how a scientific field makes real progress. Asim Roy Arizona State University Tempe, Arizona 85287-3606, USA Email: ataxr at asuvm.inre.asu.edu References 1. Roy, A., Govil, S. & Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear. 2. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202. 3. Roy, A., Kim, L.S. & Mukhopadhyay, S. 1993. A Polynomial Time Algorithm for the Construction and Training of a Class of Multilayer Perceptrons. Neural Networks, Vol. 6, No. 4, pp. 535- 545. 4. Mukhopadhyay, S., Roy, A., Kim, L.S. & Govil, S. 1993. A Polynomial Time Algorithm for Generating Neural Networks for Pattern Classification - its Stability Properties and Some Test Results. Neural Computation, Vol. 5, No. 2, pp. 225-238. From isis at cs.monash.edu.au Wed May 15 05:49:04 1996 From: isis at cs.monash.edu.au (ISIS conference) Date: Wed, 15 May 1996 19:49:04 +1000 Subject: Call for Participation for ISIS Message-ID: <199605150949.TAA22314@molly.cs.monash.edu.au> ISIS CONFERENCE: INFORMATION, STATISTICS AND INDUCTION IN SCIENCE *** Call for Participation *** Old Melbourne Hotel Melbourne, Australia 20-23 August 1996 INVITED SPEAKERS: Henry Kyburg, Jr. (University of Rochester, NY) Marvin Minsky (MIT) J. Ross Quinlan (Sydney University) Jorma J. Rissanen (IBM Almaden Research, San Jose, California) Ray Solomonoff (Oxbridge Research, Mass) This conference will explore the use of computational modeling to understand and emulate inductive processes in science. The problems involved in building and using such computer models reflect methodological and foundational concerns common to a variety of academic disciplines, especially statistics, artificial intelligence (AI) and the philosophy of science. This conference aims to bring together researchers from these and related fields to present new computational technologies for supporting or analysing scientific inference and to engage in collegial debate over the merits and difficulties underlying the various approaches to automating inductive and statistical inference. About the invited speakers: Henry Kyburg is noted for his invention of the lottery paradox (in "Probability and the Logic of Rational Belief", 1961) and his research since then in providing a non-Bayesian foundation for a probabilistic epistemology. Marvin Minsky is one of the founders of the field of artificial intelligence. He is the inventor of the use of frames in knowledge representation, stimulus for much of the concern with nonmonotonic reasoning in AI, noted debunker of Perceptrons and recently the developer of the "society of minds" approach to cognitive science. J. Ross Quinlan is the inventor of the information-theoretic approach to classification learning in ID3 and C4.5, which have become world-wide standards in testing machine learning algorithms. Jorma J. 
Rissanen invented the Minimum Description Length (MDL) method of inference in 1978, which has subsequently been widely adopted in algorithms supporting machine learning. Ray Solomonoff developed the notion of algorithmic complexity in 1960, and his work was influential in shaping the Minimum Message Length (MML) work of Chris Wallace (1968) and the Minimum Description Length (MDL) work of Jorma Rissanen (1978). ========================= Tutorials (Tue 20 Aug 96) ========================= 10am - 1pm: Tutorial 1: Peter Spirtes "Automated Learning of Bayesian Networks" Tutorial 2: Michael Pazzani "Machine Learning and Intelligent Info Access" 2pm - 5pm: Tutorial 3: Jan Zytkow "Automation of Scientific Discovery" Tutorial 4: Paul Vitanyi "Kolmogorov Complexity & Applications" About the tutorial leaders: Peter Spirtes is a co-author of the TETRAD algorithm for the induction of causal models from sample data and is an active member of the research group on causality and induction at Carnegie Mellon University. Mike Pazzani is one of the leading researchers world-wide in machine learning and the founder of the UC Irvine machine learning archive. Current interests include the use of intelligent agents to support information filtering over the Internet. Jan Zytkow is one of the co-authors (with Simon, Langley and Bradshaw) of "Scientific Discovery" (1987), reporting on the series of BACON programs for automating the learning of quantitative scientific laws. Paul Vitanyi is co-author (with Ming Li) of "An Introduction to Kolmogorov Complexity and its Applications (1993) and of much related work on complexity and information-theoretic methods of induction. Professor Vitanyi will be visiting the Department of Computer Science, Monash, for several weeks after the conference. A limited number of free student conference registrations or tutorial registrations will be available by application to the organizers in exchange for part-time work during the conference. Program Committee: Hirotugu Akaike, Lloyd Allison, Shun-Ichi Amari, Mark Bedau, Jim Bezdek, Hamparsum Bozdogan, Wray Buntine, Peter Cheeseman, Honghua Dai, David Dowe, Usama Fayyad, Doug Fisher, Alex Gammerman, Clark Glymour, Randy Goebel, Josef Gruska, David Hand, Bill Harper, David Heckerman, Colin Howson, Lawrence Hunter, Frank Jackson, Max King, Kevin Korb, Henry Kyburg, Rick Lathrop, Ming Li, Nozomu Matsubara, Aleksandar Milosavljevic, Richard Neapolitan, Jon Oliver, Michael Pazzani, J. Ross Quinlan, Glenn Shafer, Peter Slezak, Padhraic Smyth, Ray Solomonoff, Paul Thagard, Neil Thomason, Raul Valdes-Perez, Tim van Gelder, Paul Vitanyi, Chris Wallace, Geoff Webb, Xindong Wu, Jan Zytkow. Inquiries to: isis96 at cs.monash.edu.au David Dowe (chair): dld at cs.monash.edu.au Kevin Korb (co-chair): korb at cs.monash.edu.au or Jonathan Oliver (co-chair): jono at cs.monash.edu.au Detailed up-to-date information, including registration costs and further details of speakers, their talks and the tutorials is available on the WWW at: http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml - David Dowe, Kevin Korb and Jon Oliver. 
======================================================================= From cherkaue at cs.wisc.edu Thu May 16 15:26:37 1996 From: cherkaue at cs.wisc.edu (Kevin Cherkauer) Date: Thu, 16 May 1996 14:26:37 -0500 Subject: Connectionist Learning - Some New Ideas Message-ID: <199605161926.OAA27944@mozzarella.cs.wisc.edu> In a recent thought-provoking posting to the connectionist list, Asim Roy said: >We have recently published a set of principles for learning in neural >networks/connectionist models that is different from classical >connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE >Transactions on Neural Networks, to appear; .. >E. Generalization in Learning: The method must be able to >generalize reasonably well so that only a small amount of network >resources is used. That is, it must try to design the smallest possible >net, although it might not be able to do so every time. This must be >an explicit part of the algorithm. This property is based on the >notion that the brain could not be wasteful of its limited resources, >so it must be trying to design the smallest possible net for every >task. I disagree with this point. According to Hertz, Krogh, and Palmer (1991, p. 2), the human brain contains about 10^11 neurons. (They also state on p. 3 that "the axon of a typical neuron makes a few thousand synapses with other neurons," so we're looking at on the order of 10^14 "connections" in the brain.) Note that a period of 100 years contains only about 3x10^9 seconds. Thus, if you lived 100 years and learned continuously at a constant rate every second of your life, your brain would be at liberty to "use up" the capacity of about 30 neurons (and 30,000 connections) per second. I would guess this is a very conservative bound, because most of us probably spend quite a bit of time where we aren't learning at such a furious rate. But even using this conservative bound, I calculate that I'm allowed to use up about 2.7x10^6 neurons (and 2.7x10^9 connections) today. I'll try not to spend them all in one place. :-) Dr. Roy's suggestion that the brain must try "to design the smallest possible net for every task" because "the brain could not be wasteful of its limited resources" is unlikely, in my opinion. It seems to me that the brain has rather an abundance of neurons. On the other hand, finding optimal solutions to many interesting "real-world" problems is often very hard computationally. I am not a complexity theorist, but I will hazard to suggest that a constraint on neural systems to be optimal or near-optimal in their space usage is probably both impossible to realize and, in fact, unnecessary. Wild speculation: the brain may have so many neurons precisely so that it can afford to be suboptimal in its storage usage in order to avoid computational time intractability. References Hertz, J.; Krogh, A.; & Palmer, R.G. 1991. Introduction to the Theory of Neural Computation. Redwood City, CA:Addison-Wesley. Roy, A., Govil, S. & Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202. 
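For anyone who wants to re-run the arithmetic behind these estimates, a short Python script reproducing the figures quoted above is given here; the constants are simply the ones cited from Hertz, Krogh and Palmer (1991), so the result is only as good as those order-of-magnitude inputs:

# Back-of-the-envelope check of the neuron "budget" discussed above.
# The constants are the figures quoted from Hertz, Krogh & Palmer (1991);
# they are rough order-of-magnitude numbers, not measurements made here.
NEURONS = 1e11                  # neurons in the human brain
SYNAPSES_PER_NEURON = 1e3       # "a few thousand synapses" -> order 10^3
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600    # about 3.2e9 seconds

neurons_per_second = NEURONS / SECONDS_PER_CENTURY
neurons_per_day = neurons_per_second * 24 * 3600
print("neurons used per second :", round(neurons_per_second))                          # ~30
print("connections per second  :", round(neurons_per_second * SYNAPSES_PER_NEURON))    # ~30,000
print("neurons used per day    :", round(neurons_per_day))                             # ~2.7 million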
=============================================================================== Kevin Cherkauer cherkauer at cs.wisc.edu From chris at anvil.co.uk Fri May 17 13:08:24 1996 From: chris at anvil.co.uk (Chris Sharpington) Date: Fri, 17 May 96 13:08:24 BST Subject: New Book announcement Message-ID: <9605171208.AA01885@anvil.co.uk> ===================================================================== NEW BOOK ANNOUCEMENT RAPID APPLICATION GENERATION OF BUSINESS AND FINANCE SOFTWARE SUKHDEV KHEBBAL AND CHRIS SHARPINGTON Kluwer Academic Publishers, March 1996 ISBN: 0-7923-9707-X The objectives of the work described in this book were twofold: 1) to capitalise on recent work in object-oriented integration methods to build a Framework for rapid application generation of distributed client-server systems, using the same API on both Microsoft Windows and on Unix 2) to use the Framework to generate real-world applications for intelligent data analysis techniques (neural networks and genetic algorithms) in Finance and Marketing The key requirement was to be able to "plug and play" servers i.e. unplug an Excel forecasting module and plug in a neural network forecasting tool to demonstrate the improved forecasting accuracy. Four applications were built to prove the benefits of the Framework and to demonstrate the value of intelligent techniques for improved data analysis. The application descriptions are accessible to the business manager, interested in the business issues involved, who may have little technical knowledge of neural networks and genetic algorithms. At the same time, technical experts can benefit from the examples of solving real-world application issues. The applications are Direct Marketing (customer targeting and market segmentation), Financial Forecasting, Bankruptcy Predication, and Executive Information Systems. Client-server computing has been attracting great interest of late. However, a server does not have to be a database. The approach in this work has been to standardise the interface to servers and collect a number of different servers together into a Toolkit. Application generation then becomes the rapid and simple process of plugging together the servers required (e.g. data retrieval, data analysis, data display) with a client to control their interaction. The emergence of object-oriented inter-application communication standards such as Object Linking and Embedding (OLE) from Microsoft, and CORBA from the Object Management Group, is fuelling great interest in distributed systems and their commercial benefits. An important contribution of this book is to detail and compare current inter-application communication methods. This will be of great benefit in assessing the potential of each communication method for business application and in assessing the benefits of the HANSA Framework. The work was carried out under Esprit project 6369 HANSA - a collaboration between industrial and academic partners from four European countries, with funding support from the European Commission. Having studied the technical details and illustrations of business value obtained from the Framework, the reader is given details of how to obtain the software, (for both Microsoft Windows and Unix) free of charge from an ftp site. [ There is also a World Wide Web page on The HANSA project: http://www.cs.ucl.ac.uk/hansa ] CONTENTS: ======== Chap 1: Rapid Application Development and the HANSA Project - Sukhdev Khebbal, University College London, UK. - Chris Sharpington, CRL, Hayes, UK. 
PART ONE: TOOLS FOR RAPID APPLICATION DEVELOPMENT ================================================= Chap 2: The HANSA Framework - Sukhdev Khebbal and Jonthan Ladipo, University College London, UK. Chap 3: THE HANSA Toolkit and The MIMENICE tool - Eric LeSaint, MIMETICS, FRANCE. - Sukhdev Khebbal, University College London, UK. PART TWO: OBJECT-ORIENTED INTEGRATION METHODS ============================================= Chap 4: Survey of Object-Oriented Integration Methods - Sukhdev Khebbal and Jonthan Ladipo, University College London, UK. PART THREE: REAL-WORLD APPLICATIONS =================================== Chap 5: Direct Marketing Application - Chris Sharpington, CRL, Hayes, UK. Chap 6: Banking Application - Thomas Look and Michael Kuhn, IFS, Germany. Chap 7: Bankruptcy Predication Application - Konrad Feldman, Jason Kingdon, Anoop Mangat, SearchSpace Ltd, London. UK. - Renato Arisi, Orsio Romagnoli, O.Group, Rome. ITALY. Chap 8: Executive Information Systems Application - Pierre Charelain and Louis Moussy, Promind, FRANCE. PART FOUR: DEVELOPING HYBRID SYSTEMS ==================================== Chap 9: Evaluating the HANSA Framework - Sukhdev Khebbal and Jonthan Ladipo, University College London, UK. Chap 10: Conclusion and Future Directions - Sukhdev Khebbal, University College London, UK. - Chris Sharpington, CRL, Hayes, UK. ISBN 0-7923-9707-X 212pp HARDBOUND March 1996 Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. TO ORDER THE BOOK ================= Contact your local bookshop or supplier, or direct from the publisher using one of the addresses below: For customers in Mexico, USA, Canada Rest of the world: and Latin America: Kluwer Academic Publishers Kluwer Academic Publishers Order Department Order Department P.O. Box 358 P.O. Box 322 Accord Station 3300 AH Dordrecht Hingham, MA 02018-0358 The Netherlands U.S.A. Tel : 617 871 6600 Tel : +31 78 6392392 Fax : 617 871 6528 Fax : +31 78 6546474 Email : kluwer at wkap.com Email : services at wkap.nl =========================================================== Chris Sharpington (chris at anvil.co.uk) Anvil Software Ltd, 51-53 Rivington Street, London EC2A 3QQ tel +44 171 729 8036 fax +44 171 729 5067 From small at cortex.neurology.pitt.edu Fri May 17 08:32:44 1996 From: small at cortex.neurology.pitt.edu (Steven Small) Date: Fri, 17 May 1996 08:32:44 -0400 Subject: Connectionist Learning - Some New Ideas Message-ID: >Dr. Roy's suggestion that the brain must try "to design the smallest possible >net for every task" because "the brain could not be wasteful of its limited >resources" is unlikely, in my opinion. It seems to me that the brain has >rather an abundance of neurons. On the other hand, finding optimal solutions to >many interesting "real-world" problems is often very hard computationally. I am >not a complexity theorist, but I will hazard to suggest that a constraint on >neural systems to be optimal or near-optimal in their space usage is probably >both impossible to realize and, in fact, unnecessary. > >Wild speculation: the brain may have so many neurons precisely so that it can >afford to be suboptimal in its storage usage in order to avoid computational >time intractability. I agree with this general idea, although I'm not sure that "computational time intractability" is necessarily the principal reason. 
There are a lot of good reasons for redundancy, overlap, and space "suboptimality", not the least of which is the marvellous ability at recovery that the brain manifests after both small injuries and larger ones that give pause even to experienced neurologists. -SLS From Jonathan_Stein at comverse.com Fri May 17 17:42:44 1996 From: Jonathan_Stein at comverse.com (Jonathan_Stein@comverse.com) Date: Fri, 17 May 96 16:42:44 EST Subject: Connectionist Learning - Some New Ideas Message-ID: <9604178323.AA832376960@hub.comverse.com> > >I agree with this general idea, although I'm not sure that "computational >time intractability" is necessarily the principal reason. There are a lot >of good reasons for redundancy, overlap, and space "suboptimality", not the >least of which is the marvellous ability at recovery that the brain >manifests after both small injuries and larger ones that give pause even to >experienced neurologists. > One needn't draw upon injuries to prove the point. One loses about 100,000 cortical neurons a day (about a percent of the original number every three years) under normal conditions. This loss is apparently not significant for brain function. This has been often called the strongest argument for distributed processing in the brain. Compare this ability with the fact that single conductor disconnection cause total system failure with high probability in conventional computers. Although certainly acknowledged by the pioneers of artificial neural network techniques, very few networks designed and trained by present techniques are anywhere near that robust. Studies carried out on the Hopfield model of associative memory DO show graceful degradation of memory capacity with synapse dilution under certain conditions (see eg. DJ Amit's book "Attractor Neural Networks"). Synapse pruning has been applied to trained feedforward networks (eg. LeCun's "Optimal Brain Damage") but requires retraining of the network. JS From dnoelle at cs.ucsd.edu Thu May 16 14:49:11 1996 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Thu, 16 May 96 11:49:11 -0700 Subject: CogSci96 Extension Message-ID: <9605161849.AA14585@hilbert> ************************************************ ***** EARLY REGISTRATION DEADLINE EXTENDED ***** ************************************************ Eighteenth Annual Conference of the COGNITIVE SCIENCE SOCIETY July 12-15, 1996 University of California, San Diego La Jolla, California SECOND CALL FOR PARTICIPATION The early registration deadline for Cognitive Science '96 has been extended to June 1, 1996. If you register now, you can still get the low early registration rates! (If you have already paid for registration at the higher "late" rates, the difference will be reimbursed to you at the conference.) Also, affordable on-campus housing is still available on a first-come first-served basis. An electronic registration form and the complete conference schedule appear below. Further information may be found on the web at "http://www.cse.ucsd.edu/events/cogsci96/". When scheduling plane flights, note that the conference begins on Friday evening, July 12th, and ends late on Monday afternoon, July 15th. If you want to attend all conference events, you should plan on staying the nights of the 12th through the 15th. Register today! 
* PLENARY SESSIONS * "Controversies in Cognitive Science: The Case of Language" -+-+- Stephen Crain (UMD College Park) & Mark Seidenberg (USC) Moderated by Paul Smolensky (Johns Hopkins University) "Tenth Anniversary of the PDP Books" -+-+- "Affect and Neuro-modulators: A Connectionist Account" Dave Rumelhart (Stanford) "Parallel-Distributed Processing Models of Normal and Disordered Cognition" Jay McClelland (CMU) "Why Neural Networks Need Generative Models" Geoff Hinton (Toronto) "Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action" -+-+- Adele Diamond (MIT) "Reconstructing Consciousness" -+-+- Paul Churchland (UCSD) * SYMPOSIA * "Adaptive Behavior and Learning in Complex Environments" "Building a Theory of Problem Solving and Scientific Discovery: How Big is N in N-Space Search?" "Cognitive Linguistics: Mappings in Grammar, Conceptual Systems, and On-Line Meaning Construction" "Computational Models of Development" "Evolution of Language" "Evolution of Mind" "Eye Movements in Cognitive Science" "The Future Of Modularity" "The Role of Rhythm in Cognition" "Update on the Plumbing of Cognition: Brain Imaging Studies of Vision, Attention, and Language" * PAPER PRESENTATION SESSIONS * Analogy Categories, Concepts, and Mutability Cognitive Neuroscience Development Distributed Cognition and Education Lexical Ambiguity and Semantic Representation Perception Perception of Causality Philosophy Problem-Solving and Education Reasoning Recurrent Network Models Rhythm in Cognition Semantics, Phonology, and the Lexicon Skill Learning and SOAR Text Comprehension Visual/Spatial Reasoning REGISTRATION INFORMATION There are three ways to register for the 1996 Cognitive Science Conference: * ONLINE REGISTRATION -- You may fill out and electronically submit the online registration form, which may be found on the conference web page at "http://www.cse.ucsd.edu/events/cogsci96/". This is the preferred method of registration. (You must pay registration fees with a Visa or MasterCard in order to use this option.) * EMAIL REGISTRATION -- You may fill out the plain text (ASCII) registration form, which appears below, and send it via electronic mail to "cogsci96reg at cs.ucsd.edu". (You must pay registration fees with a Visa or MasterCard in order to use this option.) * POSTAL REGISTRATION -- You may download a copy of the PostScript registration form from the conference home page (or extract the plain text version, below), print it on a PostScript printer, fill it out with a pen, and send it via postal mail to: CogSci'96 Conference Registration Cognitive Science Department - 0515 University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0515 (Under this option, you may enclose payment of registration fees in U. S. dollars in the form of a check or money order, or you may pay these fees with a Visa or MasterCard. Please make checks payable to: The Regents of the University of California.) For more information, visit the conference web page at "http://www.cse.ucsd.edu/events/cogsci96". Please direct questions and comments to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776. Edwin Hutchins and Walter Savitch, Conference Chairs John D. Batali, Local Arrangements Chair Garrison W. 
Cottrell, Program Chair ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== Cognitive Science 1996 Registration Form ---------------------------------------- Your Full Name : _____________________________________________________ Your Postal Address : ________________________________________________ (including zip/postal ________________________________________________ code and country) ________________________________________________ ________________________________________________ Your Telephone Number (Voice) : ______________________________________ Your Telephone Number (Fax) : ______________________________________ Your Internet Electronic Mail Address (e.g., dnoelle at cs.ucsd.edu) : ______________________________________________________________________ REGISTRATION FEES : Please select the appropriate registration option from the menu below by placing an "X" in the corresponding blank on the left. Note that the Cognitive Science Society is offering a special deal to individuals who opt to join the Society simultaneously with conference registration. The "New Member" package includes conference fees and first year's membership dues for only $10 more than the nonmember conference cost. Registration fees received after June 1st are $20 higher ($10 higher for students) than fees received before June 1st. Be sure to register early to take advantage of the lower fee rates. _____ Registration, Member -- $120 ($140 after June 1st) _____ Registration, Nonmember -- $145 ($165 after June 1st) _____ Registration, New Member -- $155 ($175 after June 1st) _____ Registration, Student Member -- $85 ($95 after June 1st) _____ Registration, Student Nonmember -- $100 ($110 after June 1st) _____ Registration, New Student Member -- $115 ($125 after June 1st) CONFERENCE BANQUET : Tickets to the conference banquet are *not* included in the registration fees, above. Banquet tickets are $35 per person. (You may bring guests.) Number Of Banquet Tickets Desired ($35 each): _____ _____ Omnivorous _____ Vegetarian CONFERENCE SHIRTS : Conference T-Shirts are *not* included in the registration fees, above. These are $10 each. Number Of T-Shirts Desired ($10 each): _____ UCSD ON-CAMPUS APARTMENTS : There are a limited number of on-campus apartments available for reservation as a 4 night package, from July 12th through July 16th. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). (Checking in a day early is $45 extra for a single room or $36 for a double.) On campus parking is complimentary with this package. Off-campus accommodations in local hotels are also available, but you will need to make reservations by contacting the hotel of interest directly. If you will be staying off-campus, please skip this portion of the registration form. On-campus housing reservations must be received by June 1st, 1996. Please include the cost of on-campus housing in the total conference cost listed at the bottom of this form. 
Select the housing plan desired by placing an "X" in the appropriate blank on the left: _____ UCSD Housing and Meal Plan (Single Room) -- $227 per person _____ UCSD Housing and Meal Plan (Double Room) -- $191 per person Arrival Date And Time : ____________________________________________ Departure Date And Time : ____________________________________________ If you reserved a double room above, please indicate your roommate preference below: _____ Please assign a roommate to me. I am _____ female _____ male. _____ I will be sharing this room with a guest who is not registered for the conference. I will include $382 ($191 times 2) in the total conference cost listed at the bottom of this form. _____ I will be sharing this room with another conference attendee. I will include $191 in the total conference cost listed at the bottom of this form. My roommate will submit her housing fee along with her registration form. My roommate's full name is: ______________________________________________________________ ASL TRANSLATION : American Sign Language (ASL) translators will be available for a number of conference events. The number of translated events will be, in part, a function of the number of participants in need of this service. Please indicate below if you will require ASL translation of conference talks. _____ I will require ASL translation. Comments To The Registration Staff : ______________________________________________________________________ ______________________________________________________________________ ______________________________________________________________________ Please sum your conference registration fees, the cost of banquet tickets and t-shirts, and on-campus housing costs, and place the total below. To register by electronic mail, payment must be by Visa or MasterCard only. TOTAL : _$____________ Bill to: _____ Visa _____ MasterCard Number : ___________________________________________ Expiration Date: ___________________________________ Registration fees (including on-campus housing costs) will be fully refunded if cancellation is requested prior to May 1st. If registration is cancelled between May 1st and June 1st, 20% of paid fees will be retained by the Society to cover processing costs. No refunds will be granted after June 1st. When complete, send this form via email to "cogsci96reg at cs.ucsd.edu". Please direct questions to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776. ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== =*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= TENTATIVE SCHEDULE OF EVENTS The conference check-in/registration desk will be located at the UCSD Price Center at the times listed in the schedule below. On-site registration, conference packets, and names tags will be available there. FRIDAY EVENING, JULY 12, 2:00 P.M. - 9:00 P.M. REGISTRATION (PRICE CENTER THEATER LOBBY) FRIDAY EVENING, JULY 12, 7:00 P.M. - 8:30 P.M. PLENARY SESSION "Controversies In Cognitive Science: The Case Of Language" Stephen Crain (UMD College Park) & Mark Seidenberg (USC) Moderated by Paul Smolensky (Johns Hopkins University) FRIDAY EVENING, JULY 12, 8:30 P.M. WELCOMING RECEPTION SATURDAY MORNING, JULY 13, 7:30 A.M. - 5:00 P.M. REGISTRATION (PRICE CENTER BALLROOM LOBBY) SATURDAY MORNING, JULY 13, 8:30 A.M. - 10:00 A.M. 
SUBMITTED SYMPOSIUM "Building A Theory Of Problem Solving And Scientific Discovery: How Big Is N In N-Space Search?" Bruce Burns (Organizer) Lisa Baker & Kevin Dunbar Bruce Burns & Regina Vollmeyer Chris Schunn & David Klahr David F. Wolf II & Jonathan R. Beskin SUBMITTED SYMPOSIUM "The Role Of Rhythm In Cognition" Devin McAuley (Organizer) Mari Jones Bill Baird Robert Port Elliot Saltzman PAPER PRESENTATIONS - PHILOSOPHY "Beyond Computationalism" Giunti, Marco "Qualia: The Hard Problem" Griffith, Todd W. ; Byrne, Michael "Connectionism, Systematicity, And Nomic Necessity" Hadley, Robert F. "Fodor On Information And Computation" Brook, Andrew ; Stainton, Robert SATURDAY MORNING, JULY 13, 10:30 A.M. - 12:20 P.M. SUBMITTED SYMPOSIUM "The Future Of Modularity" Michael Spivey-Knowlton (Organizer) Kathleen Eberhard (Organizer) Michael Tanenhaus (Organizer) James McClelland Peter Lennie Robert Jacobs Kenneth Forster Dominic Massaro Gary Dell PAPER PRESENTATIONS - TEXT COMPREHENSION "Integrating World Knowledge With Cognitive Parsing" Paredes-Frigolett, Harold ; Strube, Gerhard "The Role Of Ontology In Creative Understanding" Moorman, Kenneth ; Ram, Ashwin "Working Memory In Text Comprehension: Interrupting Difficult Text" McNamara, Danielle ; Kintsch, Walter "Reasoning From Multiple Texts: An Automatic Analysis Of Readers' Situation Models" Foltz, Peter ; Britt, M. Anne ; Perfetti, Charles "Lexical Limits On The Influence Of Context" Verspoor, Cornelia PAPER PRESENTATIONS - REASONING "Dynamics Of Rule Induction By Making Queries: Transition Between Strategies" Ginzburg, Iris ; Sejnowksi, Terry "The Impact Of Information Representation On Bayesian Reasoning" Hoffrage, Ulrich ; Gigerenzer, Gerd "On Reasoning With Default Rules And Exceptions" Elio, Renee ; Pelletier, Francis "Satisficing Inference And The Perks Of Ignorance" Goldstein, Daniel G. ; Gigerenzer, Gerd "A Connectionist Treatment Of Negation And Inconsistency" Shastri, Lokendra ; Grannes, Dean SATURDAY, JULY 13, 12:20 P.M. - 2:00 P.M. LUNCH & POSTER PREVIEW SATURDAY, JULY 13, 2:00 P.M. - 3:30 P.M. INVITED SYMPOSIUM "Update On The Plumbing Of Cognition: Imaging Studies Of Vision, Attention, And Language" Helen Neville (Organizer) Marty Sereno Steven Hillyard PAPER PRESENTATIONS - DISTRIBUTED COGNITION AND EDUCATION "Hearing With Eyes: A Distributed Cognition Perspective On Guitar Song Imitation" Flor, Nick V. ; Holder, Barbara "Constraints On The Experimental Design Process In Real-World Science" Baker, Lisa M. ; Dunbar, Kevin "Teaching/Learning Events In The Workplace: A Comparative Analysis Of Their Organizational And Interactional Structure" Hall, Rogers ; Stevens, Reed "Distributed Reasoning: An Analysis Of Where Social And Cognitive Worlds Fuse" Dama, Mike ; Dunbar, Kevin PAPER PRESENTATIONS - DEVELOPMENT I "Reading And Learning To Classify Letters" Martin, Gale "Where Defaults Don't Help: The Case Of The German Plural System" Nakisa, Ramin Charles ; Hahn, Ulrike "Selective Attention In The Acquisition Of The Past Tense" Jackson, Dan ; Constandse, Rodger ; Cottrell, Garrison "Word Learning And Verbal Short-Term Memory: A Computational Account" Gupta, Prahlad SATURDAY, JULY 13, 4:00 P.M. - 5:30 P.M. PLENARY SESSION "Tenth Anniversary Of The PDP Books" "Affect and Neuro-modulators: A Connectionist Account" Dave Rumelhart (Stanford) "Parallel-Distributed Processing Models Of Normal And Disordered Cognition" Jay McClelland (CMU) "Why Neural Networks Need Generative Models" Geoff Hinton (Toronto) SATURDAY, JULY 13, 5:30 P.M. - 7:30 P.M. 
POSTER SESSION & RECEPTION SATURDAY, JULY 13, 9:00 P.M. - 1:00 A.M. BLUES PARTY SUNDAY, JULY 14, 7:30 A.M. - 5:00 P.M. REGISTRATION (PRICE CENTER BALLROOM LOBBY) SUNDAY, JULY 14, 8:30 A.M. - 10:00 A.M. SUBMITTED SYMPOSIUM "Evolution Of Mind" Denise Dellarosa Cummins (Organizer) John Tooby Colin Allen PAPER PRESENTATIONS - VISUAL/SPATIAL REASONING "Spatial Cognition In The Mind And In The World - The Case Of Hypermedia Navigation" Dahlback, Nils ; Hook, Kristina ; Sjolinder, Marie "Individual Differences In Proof Structures Following Multimodal Logic Teaching" Oberlander, Jon ; Cox, Richard ; Monaghan, Padraic ; Stenning, Keith ; Tobin, Richard "Functional Roles For The Cognitive Analysis Of Diagrams In Problem Solving" Cheng, Peter C-H. "A Study Of Visual Reasoning In Medical Diagnosis" Rogers, E. PAPER PRESENTATIONS - SEMANTICS, PHONOLOGY, AND THE LEXICON "The Interaction Of Semantic And Phonological Processing" Tyler, Lorraine K. ; Voice, J. Kate ; Moss, Helen E. "The Combinatorial Lexicon: Affixes As Processing Structures" Marslen-Wilson, William ; Ford, Mike ; Older, Lianne ; Zhou, Xiaolin "Lexical Ambiguity And Context Effects In Spoken Word Recognition: Evidence From Chinese" Li, Ping ; Yip, C. W. "Phonological Reduction, Assimilation, Intra-Word Information Structure, And The Evolution Of The Lexicon Of English" Shillcock, Richard ; Hicks, John ; Cairns, Paul ; Chater, Nick ; Levy, Joseph SUNDAY, JULY 14, 10:30 A.M. - 12:20 P.M. INVITED SYMPOSIUM "Adaptive Behavior and Learning in Complex Environments" Maja Mataric (Organizer) Simon Giszter Andrew Moore Sebastian Thrun PAPER PRESENTATIONS - PERCEPTION "Color Influences Fast Scene Categorization" Oliva, Aude ; Schyns, Philippe "Categorical Perception Of Novel Dimensions" Goldstone, Robert L. ; Steyvers, Mark ; Larimer, Ken "Categorical Perception In Facial Emotion Classification" Padgett, Curtis ; Cottrell, Garrison "MetriCat: A Representation For Basic And Subordinate-Level Classification" Stankiewicz, Brian J. ; Hummel, John E. "Similarity To Reference Shapes As A Basis For Shape Representation" Edelman, Shimon ; Cutzu, Florin ; Duvdevani-Bar, Sharon PAPER PRESENTATIONS - LEXICAL AMBIGUITY AND SEMANTIC REPRESENTATION "Integrating Discourse And Local Constraints In Resolving Lexical Thematic Ambiguities" Hanna, Joy E. ; Spivey-Knowlton, Michael ; Tanenhaus, Michael "Evidence For A Tagging Model Of Human Lexical Category Disambiguation" Corley, Steffan ; Crocker, Matt "The Importance Of Automatic Semantic Relatedness Priming For Distributed Models Of Word Meaning" McRae, Ken ; Boisvert, Stephen "Parallel Activation Of Distributed Concepts: Who Put The P In The PDP?" Gaskell, M. Gareth "Discrete Multi-Dimensional Scaling" Clouse, Daniel ; Cottrell, Garrison SUNDAY, JULY 14, 12:20 P.M. - 2:00 P.M. LUNCH & SOCIETY BUSINESS MEETING SUNDAY, JULY 14, 2:00 P.M. - 3:30 P.M. INVITED SYMPOSIUM "Cognitive Linguistics: Mappings in Conceptual Systems, Grammar, and Meaning Construction" Gilles Fauconnier (Organizer) George Lakoff Ron Langacker PAPER PRESENTATIONS - PROBLEM-SOLVING AND EDUCATION "Collaboration In Primary Science Classroom: Learning About Evaporation" Scanlon, Eileen ; Murphy, Patricia ; Issroff, Kim ; Hodgson, Barbara ; Whitelegg, Elizabeth "Transferring And Modifying Terms In Equations" Catrambone, Richard "Understanding Constraint-Based Processes: A Precursor To Conceptual Change In Physics" Slotta, James ; Chi, T. H. Michelene "The Role Of Generic Modeling In Conceptual Change" Griffith, Todd W. 
; Nersessian, Nancy ; Goel, Ashok PAPER PRESENTATIONS - RECURRENT NETWORK MODELS "Using Orthographic Neighborhoods Of Interlexical Nonwords To Support An Interactive-Activation Model Of Bilingual Memory" French, Robert M. ; Ohnesorge, Clark "Conscious And Unconscious Perception: A Computational Theory" Mathis, Donald ; Mozer, Michael "In Search of Articulated Attractors" Noelle, David ; Cottrell, Garrison "A Recurrent Network That Performs A Context-Sensitive Prediction Task" Steijvers, Mark ; Grunwald, Peter SUNDAY, JULY 14, 4:00 P.M. - 5:30 P.M. PLENARY SESSION "Frontal Lobe Development And Dysfunction In Children: Dissociations Between Intention And Action" Adele Diamond (MIT) SUNDAY, JULY 14, 6:00 P.M. - 9:00 P.M. CONFERENCE BANQUET MONDAY, JULY 15, 8:30 A.M. - 10:00 A.M. SUBMITTED SYMPOSIUM "Eye Movements In Cognitive Science" Patrick Suppes (Organizer) Julie Epelboim (Organizer) Eileen Kowler Mary Hayhoe Greg Zelinsky PAPER PRESENTATIONS - ANALOGY "Competition In Analogical Transfer: When Does A Lightbulb Outshine An Army?" Francis, Wendy ; Wickens, Thomas "Can A Real Distinction Be Made Between Cognitive Theories Of Analogy And Categorisation" Ramscar, Michael ; Paint, Helen "LISA: A Computational Model Of Analogical Inference And Schema Induction" Hummel, John E. ; Holyoak, Keith J. "Alignability And Attribute Important In Choice" Lindermann, Patricia ; Markman, Arthur PAPER PRESENTATIONS - DEVELOPMENT II "A Computational Model Of Two Types Of Developmental Dyslexia" Harm, Michael ; Seidenberg, Mark "Integrating Multiple Cues In Word Segmentation: A Connectionist Model Using Hints" Allen, Joe ; Christiansen, Morten "Statistical Cues In Language Acquisition: Word Segmentation By Infants" Saffran, Jenny R. ; Aslin, Richard N. ; Newport, Elissa "Perceptual Laws And The Statistics Of Natural Signals" Movellan, Javier ; Chadderdon, George MONDAY, JULY 15, 10:30 A.M. - 12:20 P.M. SUBMITTED SYMPOSIUM "Computational Models Of Development" Kim Plunkett (Organizer) Tom Shultz (Organizer) Jeff Elman Charles Ling Denis Mareschal Liz Bates (Discussant) Jeff Shrager (Discussant) PAPER PRESENTATIONS - SKILL LEARNING AND SOAR "An Abstract Computational Model Of Learning Selective Sensing Skills" Langley, Pat "Epistemic Action Increases With Skill" Maglio, Paul ; Kirsh, David "Perseverative Subgoaling And Production System Models Of Problem Solving" Cooper, Richard "Probabilistic Plan Recognition For Cognitive Apprenticeship" Conati, Cristina ; VanLehn, Kurt "Do Users Interact With Computers The Way Our Models Say They Should?" Vera, Alonso H. ; Lewis, Richard PAPER PRESENTATIONS - RHYTHM IN COGNITION "Rhythmic Commonalities Between Hand Gestures And Speech" Cummins, Fred ; Port, Robert "Modeling Beat Perception With A Nonlinear Oscillator" Large, Edward W. PAPER PRESENTATIONS - COGNITIVE NEUROSCIENCE "Emotional Decisions" Barnes, Allison ; Thagard, Paul "Self-Organization And Functional Role Of Lateral Connections And Multisize Receptive Fields In The Primary Visual Cortex" Sirosh, Joseph, Miikkulainen, Risto "Synaptic Maintenance Through Neuronal Homeostasis: A Function Of Dream Sleep" Horn, David ; Levy, Nir ; Ruppin, Eytan MONDAY, JULY 15, 12:20 P.M. - 2:00 P.M. LUNCH MONDAY, JULY 15, 2:00 P.M. - 3:30 P.M. INVITED SYMPOSIUM "Evolution Of Language" John Batali (Organizer) David Ackley (Organizer) Domenico Parisi (not confirmed) PAPER PRESENTATIONS - PERCEPTIONS OF CAUSALITY "The Perception Of Causality: Feature Binding In Interacting Objects" Kruschke, John K. 
; Fragasi, Michael "Judging The Contingency Of A Constant Cue: Contrasting Predictions From An Associative And A Statistical Model" Vallee-Tourangeau, F. ; Murphy, Robin ; Baker, A. G. "What Language Might Tell Us About The Perception Of Cause" Wolff, Phillip PAPER PRESENTATIONS - CATEGORIES, CONCEPTS, AND MUTABILITY "Mutability, Conceptual Tranformation, And Context" Love, Bradley C. "On Putting Milk In Coffee: The Effect Of Thematic Relations On Similarity Judgments" Wisniewski, Edward ; Bassok, Mariam "The Role Of Situations In Concept Learning" Yeh, Wenchi ; Barsalou, Lawrence "Modeling Interference Effects In Instructed Category Learning" Noelle, David ; Cottrell, Garrison MONDAY, JULY 15, 4:00 P.M. - 5:30 P.M. PLENARY SESSION "Reconstructing Consciousness" Paul Churchland (UCSD) =*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= From rao at cs.rochester.edu Sat May 18 14:14:13 1996 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 18 May 1996 14:14:13 -0400 Subject: Connectionist Learning - Some New Ideas In-Reply-To: <9604178323.AA832376960@hub.comverse.com> (Jonathan_Stein@comverse.com) Message-ID: <199605181814.OAA09391@skunk.cs.rochester.edu> >One loses about 100,000 cortical neurons a day (about a percent of >the original number every three years) under normal conditions. Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers. Thanks, Raj -- Raj Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ From rfl77551 at pegasus.cc.ucf.edu Sun May 19 13:56:44 1996 From: rfl77551 at pegasus.cc.ucf.edu (Richard F Long) Date: Sun, 19 May 1996 13:56:44 -0400 (EDT) Subject: Connectionist Learning - Some New Ideas In-Reply-To: <199605161926.OAA27944@mozzarella.cs.wisc.edu> Message-ID: There may be another reason for the brain to construct networks that are 'minimal' having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this. Richard Long rfl77551 at pegasus.cc.ucf.edu General Research and Device Corp. 
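One concrete way to encode the two-part fitness sketched above - purely an illustration, with every name invented for the example - is to score each candidate network by its error on the task plus a penalty proportional to its size, and let the evolutionary search minimize that sum:

def occam_fitness(predict, num_parameters, examples, size_weight=0.01):
    # "Solves the problem" term: fraction of examples the candidate gets wrong.
    error_rate = sum(1 for x, y in examples if predict(x) != y) / len(examples)
    # "Smallest size" term: an Occam's-razor penalty on the candidate's size.
    return error_rate + size_weight * num_parameters

# Toy usage: a one-parameter threshold "network" on a toy binary task.
examples = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
candidate = lambda x: int(x > 0.5)
print(occam_fitness(candidate, 1, examples))   # -> 0.01 (no errors, one parameter)

The size_weight constant is then the knob that decides how strongly smaller networks are preferred over marginally more accurate ones; Chaitin's result says the truly minimal network cannot be found in general, but a pressure of this kind can still squeeze out many of the regularities mentioned above.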
Oviedo, FL & University of Central Florida From maja at garnet.cs.brandeis.edu Sun May 19 18:56:40 1996 From: maja at garnet.cs.brandeis.edu (Maja Mataric) Date: Sun, 19 May 1996 18:56:40 -0400 Subject: CALL for PAPERS Message-ID: <199605192256.SAA03201@garnet.cs.brandeis.edu> CALL FOR PAPERS (http://www.cs.brandeis.edu:80/~maja/abj-special-issue/) ADAPTIVE BEHAVIOR Journal Special Issue on COMPLETE AGENT LEARNING IN COMPLEX ENVIRONMENTS Guest editor: Maja J Mataric Submission Deadline: June 1, 1996. Adaptive Behavior is an international journal published by MIT Press; Editor-in-Chief: Jean-Arcady Meyer, Ecole Normale Superieure, Paris. In the last decade, the problems being treated in AI, Alife, and Robotics have witnessed an increase in complexity as the domains under investigation have transitioned from theoretically clean scenarios to more complex dynamic environments. Agents that must adapt in environments such as the physical world, an active ecology or economy, and the World Wide Web, challenge traditional assumptions and approaches to learning. As a consequence, novel methods for automated adaptation, action selection, and new behavior acquisition have become the focus of much research in the field. This special issue of Adaptive Behavior will focus on situated agent learning in challenging environments that feature noise, uncertainty, and complex dynamics. We are soliciting papers describing finished work on autonomous learning and adaptation during the lifetime of a complete agent situated in a dynamic environment. We encourage submissions that address several of the following topics within a whole agent learning system: * learning from ambiguous perceptual inputs * learning with noisy/uncertain action/motor outputs * learning from sparse, irregular, inconsistent, and noisy reinforcement/feedback * learning in real time * combining built-in and learned knowledge * learning in complex environments requiring generalization in state representation * learning from incremental and delayed feedback * learning in smoothly or discontinuously changing environments We invite submissions from all areas in AI, Alife, and Robotics that treat either complete synthetic systems or models of biological adaptive systems situated in complex environments. Submitted papers should be delivered by June 1, 1996. Authors intending to submit a manuscript should contact the guest editor to discuss paper suitability for this issue. Use maja at cs.brandeis.edu or tel: (617) 736-2708 or fax: (617) 736-2741. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Both paper and electronic submission are possible, as described below. Copies of the complete Adaptive Behavior Instructions to Contributors are available on request--also see the Adaptive Behavior journal's home page at: http://www.ens.fr:80/bioinfo/www/francais/AB.html. For paper submissions, send five (5) copies of submitted papers (hard-copy only) to: Maja Mataric Volen Center for Complex Systems Computer Science Department Brandeis University Waltham, MA 02254-9110, USA For electronic submissions, use Postscript format, ftp the file to ftp.cs.brandeis.edu/incoming, and send an email notification to maja at cs.brandeis.edu. 
For a Web page of this call, and detailed ftp directions, see: http://www.cs.brandeis.edu/~maja/abj-special-issue/ From mark at cdu.ucl.ac.uk Mon May 20 11:24:26 1996 From: mark at cdu.ucl.ac.uk (Mark Johnson) Date: Mon, 20 May 96 11:24:26 BST Subject: Connectionist Learning - Some New Ideas Message-ID: >One loses about 100,000 cortical neurons a day (about a percent of >the original number every three years) under normal conditions. Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers. Thanks, Raj ================= From carl at cs.toronto.edu Tue May 21 14:11:35 1996 From: carl at cs.toronto.edu (Carl Edward Rasmussen) Date: Tue, 21 May 1996 14:11:35 -0400 Subject: DELVE Message-ID: <96May21.141136edt.1240@neuron.ai.toronto.edu> Announcing the release of DELVE DELVE --- Data for Evaluating Learning in Valid Experiments DELVE contains a collection of datasets for use in evaluating the predictive performance of empirical learning methods, such as linear models, neural networks, smoothing splines, decision trees, and many other regression and classification procedures. DELVE also includes software that facilitates using this data to assess learning methods in a statistically valid way. Ultimately, DELVE will include results of applying many methods to many tasks, making comparisons between methods much easier than in the past. A preliminary version of DELVE is now freely available on the web at URL http://www.cs.utoronto.ca/~delve. From this web site, you can get to the manual, the software for the DELVE environment, the DELVE datasets, and precise definitions, source code, and results for various learning methods. Contributions of data and methods from other researchers will be added to the web site in future. DELVE was created at the university of Toronto by C. E. Rasmussen R. M. Neal G. E. Hinton D. van Camp M. Revow Z. Ghahramani R. Kustra R. Tibshirani -- \ Carl Edward Rasmussen Email: carl at cs.toronto.edu o/\_ Dept of Computer Science Phone: +1 (416) 978 7391 <|__,\ University of Toronto, Home : +1 (416) 531 5685 "> | Toronto, ONTARIO, FAX : +1 (416) 978 1455 ` | Canada, M5S 1A4 web : http://www.cs.toronto.edu/~carl From moriarty at AIC.NRL.Navy.Mil Mon May 20 10:29:23 1996 From: moriarty at AIC.NRL.Navy.Mil (moriarty@AIC.NRL.Navy.Mil) Date: Mon, 20 May 96 10:29:23 EDT Subject: Papers Available: Neuro-Evolution in Robotics Message-ID: <9605201429.AA16331@sun27.aic.nrl.navy.mil> The following two papers on applying neuro-evolution to robot arm control are available from our WWW page: http://www.cs.utexas.edu/users/nn/ Source code for the SANE system is also avaiable from the WWW site. ------------------------------------------------------------------------ Evolving Obstacle Avoidance Behavior in a Robot Arm David E. Moriarty and Risto Miikkulainen To Appear at From Animals to Animats The Fourth International Conference on Simulation of Adaptive Behavior (SAB96). Cape Cod, MA. 1996 8 pages Abatract: Existing approaches for learning to control a robot arm rely on supervised methods where correct behavior is explicitly given. It is difficult to learn to avoid obstacles using such methods, however, because examples of obstacle avoidance behavior are hard to generate. 
This paper presents an alternative approach that evolves neural network controllers through genetic algorithms. No input/output examples are necessary, since neuro-evolution learns from a single performance measurement over the entire task of grasping an object. The approach is tested in a simulation of the OSCAR-6 robot arm which receives both visual and sensory input. Neural networks evolved to effectively avoid obstacles at various locations to reach random target locations. ------------------------------------------------------------------------ Hierarchical Evolution of Neural Networks David E. Moriarty and Risto Miikkulainen Technical Report #AI96-242, Department of Computer Sciences, The University of Texas at Austin. 16 pages Abstract: In most applications of neuro-evolution, each individual in the population represents a complete neural network. Recent work on the SANE system, however, has demonstrated that evolving individual neurons often produces a more efficient genetic search. This paper explores the merits of neuro-evolution both at the neuron level and at the network level. While SANE can solve easy tasks in just a few generations, in tasks that require high precision, its progress often stalls and is exceeded by a standard, network-level evolution. In this paper, a new approach called Hierarchical SANE is presented that combines the advantages of both approaches by integrating two levels of evolution in a single framework. Hierarchical SANE couples the early explorative quality of SANE's neuron-level search with the late exploitative quality of a more standard network-level evolution. In a sophisticated robot arm manipulation task, Hierarchical SANE significantly outperformed both SANE and a standard, network-level neuro-evolution approach, suggesting that it can more efficiently solve a broad range of tasks. ------------------------------------------------------------------------ Dave Moriarty Artificial Intelligence Laboratory Department of Computer Sciences The University of Texas at Austin moriarty at cs.utexas.edu http://www.cs.utexas.edu/users/moriarty http://www.cs.utexas.edu/users/nn From karaali at ukraine.corp.mot.com Mon May 20 10:44:26 1996 From: karaali at ukraine.corp.mot.com (Orhan Karaali) Date: Mon, 20 May 1996 09:44:26 -0500 Subject: Linguist with neural net background Message-ID: <199605201444.JAA05484@fiji.mot.com> Motorola Chicago, IL COMPUTATIONAL LINGUIST FOR TEXT-TO-SPEECH SYNTHESIS Motorola's Chicago Corporate Research Laboratories is currently seeking a computational linguist to join the Speech Synthesis Group in its Speech Processing Systems Research Laboratory in Schaumburg, Illinois. The Speech Synthesis Group of Motorola's Speech Processing Laboratory has developed a world-class multi-language text-to-speech synthesizer. This synthesizer is based on innovative neural network and signal processing technologies and produces more natural sounding speech than traditional speech synthesis methods. The successful candidate will work on the components of a text-to-speech system that convert text into a phonetic representation, including part of speech tagging, word sense disambiguation and parsing for prosody. The duties of the position include applied research, software development, data collection, and transfer of developed technologies to product groups. Innovation in research, application of technology and a high level of motivation is the standard for all members of the team. The individual should possess a Ph.D. 
in the area of computational linguistics with a minimum of two years' work experience developing spoken language systems. Strong programming skills in C or C++ are required. Knowledge of neural networks, decision trees, genetic algorithms, and statistical techniques is highly desirable. Please send resume and cover letter by June 15, 1996 to be considered for this position to Motorola Inc., Corporate Staffing Department, Attn: LP-T1521, 1303 E. Algonquin Rd., Schaumburg, IL 60196. Fax: 847-576-4959. Motorola is an equal opportunity/affirmative action employer. We welcome and encourage diversity in our workforce. From gds at sys.uea.ac.uk Tue May 21 12:50:33 1996 From: gds at sys.uea.ac.uk (George Smith) Date: Tue, 21 May 1996 17:50:33 +0100 (BST) Subject: ICANNGA97 Message-ID: ICANNGA97 _________ Third International Conference on Artificial Neural Networks and Genetic Algorithms Preceded by a one-day Introductory Workshop Tuesday 1st - Friday 4th April, 1997 Norwich, England, UK CALL FOR PAPERS AND INVITATION TO PARTICIPATE Conference Theme: _________________ The main theme of the ICANNGA series is the development and application of software paradigms based on natural processes, principally artificial neural networks, genetic algorithms and hybrids thereof. However, the scope of the conference extends to cover many related topics including fuzzy logic, genetic programming and other evolutionary computation systems, classifier systems and adaptive agent systems, distributed intelligence and artificial life, generic optimisation heuristics including simulated annealing and tabu search, and many more. Following the successes of ICANNGA93 (Innsbruck, Austria) and ICANNGA95 (Ales, France), the third meeting of this interdisciplinary conference will be held at the University of East Anglia in the picturesque, medieval city of Norwich, England. The ICANNGA series has quickly established itself as a platform, not only for established workers in the fields, but also for new and young researchers wishing to extend their knowledge and experience. The conference will be preceded by a one-day workshop during which introductory sessions on a range of relevant topics will be held. There will be ample opportunity to gain practical experience in the techniques pertaining to the workshop and conference. The conference is hosted by the University of East Anglia, which is a campus university in a parkland setting, offering first class conference facilities including award winning en-suite accommodation and lecture theatres. The conference will include invited talks and contributed oral and poster presentations. It is expected that the ICANNGA97 Proceedings will be printed by Springer-Verlag (Vienna), following the tradition set by its predecessors. International Advisory Committee _________________________ _______ Prof. R. Albrecht, University of Innsbruck, Austria Dr. D. Pearson, Ecole des Mines d'Ales, France Prof. N. Steele, Coventry University, England (Chair) Dr. G. D.
Smith, University of East Anglia, England Programme Committee ___________________ Thomas Baeck, Informatik Centrum, Dortmund, Germany Wilfried Brauer, TU Munchen, Germany Marco Dorigo, Universite Libre de Bruxelles, Belgium Terry Fogarty, University of West England, Bristol, UK Jelena Godjevac, EPFL Laboratories, Lausanne, Switzerland Michael Heiss, Neural Network Group, Siemens AG, Austria Tom Harris, Brunel University, London, UK Anne Johannet, EMA-EERIE, Nîmes, France Helen Karatza, Aristotle University of Thessaloniki, Greece Sami Kuri, San Jose State University, USA Pedro Larranaga, University Basque Country, San Sebastian, Spain Francesco Masulli, University of Genoa, Italy Josef Mazanec, WU Wien, Austria Janine Magnier, EMA-EERIE, Nîmes, France Franz Oppacher, Carleton University, Ottawa, Canada Ian Parmee, University of Plymouth, UK David Pearson, EMA-EERIE, Nîmes, France Vic Rayward-Smith, University of East Anglia, Norwich, UK Colin Reeves, Coventry University, Coventry, UK Bernardete Ribeiro, Universidade de Coimbra, Portugal Valentina Salapura, TU-Wien, Austria V. David Sanchez A., University of Miami, Florida, USA Henrik Saxén, Åbo Akademi, Finland George D. Smith, University of East Anglia, Norwich, UK Nigel Steele, Coventry University, Coventry, UK Kevin Warwick, Reading University, Reading, UK Darrell Whitley, Colorado State University, USA Diethelm Wurtz, Swiss Federal Inst. of Technology, Zurich, Switzerland Organising Committee ____________________ Dr. G. D. Smith, University of East Anglia, England Nigel Steele, Coventry University, Coventry Prof. Vic Rayward-Smith, University of East Anglia, Norwich Submission Instructions _______________________ Contributions are sought in the following topic areas (the list is not exhaustive): - Theoretical and Computational Aspects of Artificial Neural Networks: including computational learning, approximation theory, novel paradigms and training methods, dynamical systems, hardware implementation - Practical Applications of Artificial Neural Networks: including pattern recognition, speech and signal processing, visual processing, time series prediction, medical and other diagnostic systems, fault and anomaly detection, financial applications, data compression, datamining, machine learning - Theoretical and Computational Aspects of Genetic Algorithms: including schema theory developments, Markov models, convergence analysis, no free lunch theorem, computational analysis, novel sequential and parallel GA systems - Practical Applications of Genetic Algorithms: including function and combinatorial optimisation, machine learning, classifier and agent systems, datamining, real-world industrial and commercial applications - Hybrid and related topics: including genetic programming, evolutionary programming and evolution strategies, fuzzy logic and control, neuro-fuzzy systems, simulated annealing and tabu search, hybrid search algorithms, hybrid ANN/GA systems Authors should submit an extended abstract of around 1500-2000 words, or full paper, of their proposed contribution before 31st August 1996. Abstracts and papers must be in English and must contain a concise description of the problem, the results achieved, their relevance and a comparison with previous work. The abstract/paper should also contain the following details: Title Authors' names and affiliations Name, address and email address of contact author Keywords Three typed/printed copies should be sent to the following address: Dr George D.
Smith School of Information Systems University of East Anglia Norwich, Norfolk, NR4 7TJ UK Alternatively, abstracts may be sent by email to either: gds at sys.uea.ac.uk or rs at sys.uea.ac.uk Notification of acceptance of the paper for presentation will be made by November 30th 1996. Papers accepted for both oral and poster presentations will be published in the Conference Proceedings. Pre-Conference Workshop _______________________ It is intended to hold a workshop on April 1st, 1997, prior to the Conference. This workshop is intended for those who are new to the topics and wish to gain a better understanding of the fundamental aspects of neural networks and genetic algorithms. The format of this workshop will be as follows: Theoretical issues of ANNs Key Issues in the application of ANNs Introduction to GAs and other heuristic search algorithms Key Issues in the application of GAs and related heuristics The second and fourth topics are backed up with laboratory sessions in which participants will have the opportunity to use some of the latest software toolkits supporting the respective technologies. Dates to remember: __________________ First Announcement & CFP: April/May 1996 Submission of Abstracts/Papers: August 31st 1996 Notification of Acceptance: November 30th 1996 Delivery of full paper: January 30th 1997 Pre-Conference Workshop: April 1st 1997 ICANNGA97: April 2nd-4th 1997 Further Information: ____________________ For more information on ICANNGA97, regularly updated, visit the WWW site at: http://www.sys.uea.ac.uk/Research/ResGroups/MAG/ICANNGA97/Default.html This web page also contains a pre-registration form. Pre-Registration form: ______________________ Please enter your details below to receive further information about ICANNGA97 and a full registration form. First name: ______________________________________ Family name: ______________________________________ Affiliation: ______________________________________ Address: ______________________________________ City: ______________________________________ State/Province/County: ______________________________________ ZIP/Postal Code: ______________________________________ Country: ______________________________________ Daytime telephone number: ______________________________________ Email address: ______________________________________ _________________________ _________________________ _________________________ Dr. George D Smith Computing Science Sector School of Information Systems University of East Anglia Norwich NR4 7TJ, UK Tel: + 44 (0)1603 593260 FAX: + 44 (0)1603 503344 Email: gds at sys.uea.ac.uk www: http://www.sys.uea.ac.uk/Teaching/Staff/gds.html From juergen at idsia.ch Mon May 20 03:10:26 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Mon, 20 May 96 09:10:26 +0200 Subject: R Message-ID: <9605200710.AA13354@fava.idsia.ch> Richard Long writes: There may be another reason for the brain to construct networks that are 'minimal' having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. 
Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this. Some comments: Apparently, Kolmogorov was the first to show the impossibility of finding the minimal algorithm in the general case (but Solomonoff also mentions it in his early work). The reason is the halting problem, of course - you don't know the runtime of the minimal algorithm. For all practical applications, runtime has to be taken into account. Interestingly, there is an ``optimal'' way of doing this, namely Levin's universal search algorithm, which tests solution candidates in order of their Levin complexities: L. A. Levin. Universal sequential search problems, Problems of Information Transmission 9:3,265-266,1973. For finding Occam's razor neural networks with minimal Levin complexity, see J. Schmidhuber: Discovering solutions with low Kolmogorov complexity and high generalization capability. In A.Prieditis and S.Russell, editors, Machine Learning: Proceedings of the 12th International Conference, 488--496. Morgan Kaufmann Publishers, San Francisco, CA, 1995. For Occam's razor solutions of non-Markovian reinforcement learning tasks, see M. Wiering and J. Schmidhuber: Solving POMDPs using Levin search and EIRA. In Machine Learning: Proceedings of the 13th International Conference. Morgan Kaufmann Publishers, San Francisco, CA, 1996, to appear. --- Juergen Schmidhuber, IDSIA http://www.idsia.ch/~juergen From smlamb at owlnet.rice.edu Mon May 20 11:35:50 1996 From: smlamb at owlnet.rice.edu (Sydney M Lamb) Date: Mon, 20 May 1996 10:35:50 -0500 (CDT) Subject: Connectionist Learning - Some New Ideas In-Reply-To: <9604178323.AA832376960@hub.comverse.com> Message-ID: On Fri, 17 May 1996 Jonathan_Stein at comverse.com wrote: > > One needn't draw upon injuries to prove the point. One loses about 100,000 > cortical neurons a day (about a percent of the original number every three > years) under normal conditions. This loss is apparently not significant > for brain function. This has been often called the strongest argument for > distributed processing in the brain. Compare this ability with the fact that > single conductor disconnection cause total system failure with high > probability in conventional computers. > > Although certainly acknowledged by the pioneers of artificial neural > network techniques, very few networks designed and trained by present > techniques are anywhere near that robust. Studies carried out on the > Hopfield model of associative memory DO show graceful degradation of > memory capacity with synapse dilution under certain conditions (see eg. > DJ Amit's book "Attractor Neural Networks"). Synapse pruning has been > applied to trained feedforward networks (eg. LeCun's "Optimal Brain Damage") > but requires retraining of the network. > > JS > There seems to be some differing information coming from different sources. The way I heard it, the typical person has lost only about 3% of the original total of cortical neurons after about 70 or 80 years. As for the argument about distributed processing, two comments: (1) there are different kinds of distributive processing; one of them also uses strict localization of points of convergence for distributed subnetworks of information (cf. A. Damasio 1989 --- several papers that year). 
(2) If the brain is like other biological systems, the neurons being lost are probably mostly the ones not being used --- ones that have remained latent and available to assume some function, but never called upon. Hence what you get with old age is not so much loss of information as loss of ability to learn new things --- varying in amount, of course, from one individual to the next. Syd Lamb Linguistics and Cognitive Science Rice University From cabestan at petrus.upc.es Wed May 22 17:01:52 1996 From: cabestan at petrus.upc.es (JOAN CABESTANY) Date: Wed, 22 May 1996 17:01:52 UTC+0200 Subject: IWANN'97 preliminary announce Message-ID: <01BB4800.52349C00@maripili.upc.es> This message has been sent to several lists of distribution. I apologize for its multiple reception. Thank you. Preliminary Announcement and First Call for Papers IWANN'97 INTERNATIONAL WORK-CONFERENCE ON ARTIFICIAL AND NATURAL NEURAL NETWORKS Biological and Artificial Architectures, Technologies and Applications Lanzarote - Canary Islands, Spain June 4-6, 1997 Contact URL http://petrus.upc.es/iwann97.html for online information. ORGANIZED BY Universidad Nacional de Educacion a Distancia (UNED), Madrid Universidad de Las Palmas de Gran Canaria Universidad Politecnica de Catalunya Universidad de Malaga Universidad de Granada IWANN'97, the fourth International Workshop on Artificial Neural Networks, now changed to International Work-Conference on Artificial and Natural Neural Networks, will take place in Lanzarote, Canary Islands (Spain) from 4 to 6 of June, 1997. This biennial meeting, with a focus on biologically inspired and more realistic models of natural neurons and neural nets and new hybrid computing paradigms, was first held in Granada (1991), Sitges (1993) and Torremolinos, Malaga (1995), with a growing number of participants from more than 20 countries and with high quality papers published by Springer-Verlag (LNCS 540, 686 and 930). SCOPE Neural computation is considered here in the dual perspective of analysis (as science) and synthesis (as engineering). As a science of analysis, neural computation seeks to help neurology, brain theory, and cognitive psychology in the understanding of the functioning of the Nervous Systems by means of computational models of neurons, neural nets and subcellular processes, with the possibility of using electronics and computers as a "laboratory" in which cognitive processes can be simulated and hypotheses tested without having to act directly upon living beings. As an engineering of synthesis, neural computation seeks to complement the symbolic perspective of Artificial Intelligence (AI), using the biologically inspired models of distributed, self-programming and self-organizing networks, to solve those non-algorithmic problems of function approximation and pattern classification having to do with changing and only partially known environments. Fault tolerance and dynamic reconfiguration are other basic advantages of neural nets. In the sea of meetings, congresses and workshops on ANN's, IWANN'97 focuses on the three subjects that most worry us: (1) The seeking of biologically inspired new models of local computation architectures and learning, along with the organizational principles behind the complexity of intelligent behavior. (2) The searching for some methodological contributions in the analysis and design of knowledge-based ANN's, instead of "blind nets", and in the reduction of the knowledge level to the sub-symbolic implementation level.
(3) The cooperation with symbolic AI, with the integration of connectionist and symbolic processing in hybrid and multi-strategy approaches for perception, decision and control tasks, as well as for case-based reasoning, concepts formation and learning. To contribute in the posing and partially solving of these global topics, IWANN'97 offer a brain-storming interdisciplinary forum in advanced Neural Computation for scientists and engineers from biology neuroanatomy, computational neurophysiology, molecular biology, biophysics, linguistics, psychology, mathematics and physics, computer science, artificial intelligence, parallel computing, analog and digital electronics, advanced computer architectures, reverse engineering, cognitive sciences and all the concerned applied domains (sensory systems and signal processing, monitoring, diagnosis, classification and decision making, intelligent control and supervision, perceptual robotics and communication systems). Contributions on the following and related topics are welcome. TOPICS 1. Biological Foundations of Neural Computation: Principles of brain organization. Neuroanatomy and Neurophysiological of synapses, dendro-dendritic contacts, neurons and neural nets in peripheral and central areas. Plasticity, learning and memory in natural neural nets. Models of development and evolution. The computational perspective in Neuroscience. 2. Formal Tools and Computational Models of Neurons and Neural Nets Architectures: Analytic and logic models. Object oriented formulations. Hybrid knowledge representation and inference tools (rules and frames with analytic slots). Probabilistic, bayesian and fuzzy models. Energy related models. 3. Plasticity Phenomena (Maturing, Learning and Memory): Biological mechanisms of learning and memory. Computational formulations using correlational, reinforcement and minimization strategies. Conditioned reflex and associative mechanisms. Inductive-deductive and abductive symbolic-subsymbolic formulations. Generalization. 4. Complex Systems Dynamics: Self-organization, cooperative processes, autopoiesis, emergent computation, synergetic, evolutive optimization and genetic algorithms. Self-reproducing nets. Self-organizing feature maps. Simulated evolution. Social organization phenomena. 5. Cognitive Science and IA: Hybrid knowledge based system. Neural networks for knowledge modeling, acquisition and refinement. Natural language understanding. Concepts formation. Spatial and temporal planning and scheduling. Intentionality. 6. Neural Nets Simulation, Emulation and Implementation: Environments and languages. Parallelization, modularity and autonomy. New hardware implementation strategies (FPGA's, VLSI, neurodevices). Evolutive architectures. Real systems validation and evaluation. 7. Methodology for Data Analysis, Task Selection and Nets Design. 8. Neural Networks for Perception: Biologically inspired preprocessing. Low level processing, source separation, sensor fusion, segmentation, feature extraction, adaptive filtering, noise reduction, texture, stereo correspondence, motion analysis, speech recognition, artificial vision, and hybrid architectures for multisensorial perception. 9. Neural Networks for Communications Systems: Modems and codecs, network management, digital communications. 10. Neural Networks for Control and Robotics: Systems identification, motion planning and control, adaptive, predictive and model-based control systems, navigation, real time applications, visuo-motor coordination. 
LOCATION BEATRIZ Hotel Lanzarote - Canary Islands, June 4-6, 1997 Lanzarote, the most northerly and easterly island of the Canarian archipelago, is at the same time the most unusual one and produces a strange fascination on those who visit it because the fast succession of fire, sea and colors contrasts with craters, green valleys and unforgettable golden and warm beaches. LANGUAGE English will be the official language of IWANN'97. Simultaneous translation will not be provided. CALL FOR PAPERS The Programme Committee seeks original papers on the above-mentioned Topics. Authors should pay special attention to explanation of theoretical and technical choices involved, point out possible limitations and describe the current state of their work. All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length (Springer-Verlag Proceedings are expected). INSTRUCTIONS TO AUTHORS Five copies (one original and four copies) of the paper must be submitted. The paper must not exceed 10 pages, including figures, tables and references. It should be written in English on A4 paper, in a Roman font, 12 point in size, without page numbers. If possible, please make use of the latex/plaintex style file available on the WWW page: http://petrus.upc.es/iwann97.html . In addition, one sheet must be attached including: Title and authors' names, list of five keywords, the Topic the paper fits best, preferred presentation (oral or poster) and the corresponding author (name, postal and e-mail address, phone and fax numbers). CONTRIBUTIONS MUST BE SENT TO: Prof. Jose Mira Dpto. Informatica y Automatica, UNED Senda del Rey, s/n Phone: + 34 1 3987155 E- 28040 MADRID, Spain Fax: + 34 1 3986697 IMPORTANT DATES Second and Final Call for Papers September 1996 Final Date for Submission January 15, 1997 Notification of Acceptance March 1997 Workshop June 4-6, 1997 STEERING COMMITTEE Prof. Joan Cabestany, Universidad Politecnica de Catalunya (E) Prof. Jose Mira Mira, UNED (E) Prof. Alberto Prieto, Universidad de Granada (E) Prof. Francisco Sandoval, Universidad de Malaga (E) TENTATIVE ORGANIZATION COMMITTEE Michael Arbib, University of Southern California (USA) Senen Barro, Universidad de Santiago (E) Trevor Clarkson, King's College London (UK) Ana Delgado, UNED (E) Dante DelCorso, Politecnico de Torino (I) Tamas D. Gedeon, University of New South Wales (AUS) Karl Goser, Universität Dortmund (G) Jeanny Herault, Institut National Polytechnique de Grenoble (F) Jaap Hoekstra, Delft University of Technology (NL) Roberto Moreno, Universidad de las Palmas de Gran Canaria (E) Shunsuke Sato, Osaka University (Jp) Igor Shevelev, Russian Academy of Science (R) Cloe Taddei-Ferretti, Istituto di Cibernetica, CNR (I) Marley Vellasco, Pontificia Universidade Catolica do Rio de Janeiro (Br) Michel Verleysen, Universite Catholique de Louvain-la-Neuve (B) From carmesin at schoner.physik.uni-bremen.de Thu May 23 09:49:18 1996 From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin) Date: Thu, 23 May 1996 15:49:18 +0200 Subject: BOOK: Neuronal Adaptation Theory Message-ID: <199605231349.PAA12918@schoner.physik.uni-bremen.de> The new book NEURONAL ADAPTATION THEORY is now available.
ISBN 3-631-30039-5, US-ISBN 0-8204-3172-9 AUTHOR: Hans-Otto Carmesin, Institute for Theoretical Physics, University Bremen, 28334 Bremen, Germany, Fax 0421 218 4869, email: carmesin at theo.physik.uni-bremen.de, www: http://schoner.physik.uni-bremen.de/~carmesin/ PUBLISHER: Peter Lang, Frankfurt/M., Berlin, Bern, New York, Paris, Wien; ---> ---> Please send your order to: Peter Lang GmbH, Europischer Verlag der Wissenschaften, Abteilung WB, Box 940225, 60460 Frankfurt/M., Germany PRICE: 59DM; PAGES: 236 (23x16cm), num.fig. FEATURES: The book includes 29 exercises with solutions, 43 essential ideas, 108 partially coloured figures, experiment explanations and general theorems. ABSTRACT: The human genotype represents at most ten billion pieces of binary information, whereas the human brain contains more than a million times a billion synapses. So a differentiated brain structure is due to synaptic self-organization and adaptation. The goal is to model the formation of observed global brain structures and cognitive properties from local synaptic dynamics sometimes supervised by the limbic system. A general neuro-synaptic dynamics is solved with a novel field theory in a comprehensible manner and in quantitative agreement with many observations. Novel results concern for instance thermal membrane fluctuations, fluctuation dissipation theorems, cortical maps, topological charges, operant conditioning, transitive inference, learning hidden structures, behaviourism, attention focus, Wittgenstein paradox, infinite generalization, schizophrenia dynamics, perception dynamics, non-equilibrium phase transitions, emergent valuation. Also the formation of advanced cognitive properties is modeled. CONTENTS: 1 Introduction 13 1.1 The role of theory 13 2 Neuronal Association Patterns 17 2.1 Classical conditioning 17 2.2 Typical nerve cell 18 2.3 Neuronal dynamics 20 2.3.1 Two-valued neurons 20 2.3.2 Two alternative formulations 21 2.4 Coupling dynamics 23 2.4.1 Usage dependent couplings 25 2.4.2 Neuronal activity patterns 25 2.5 Network model for classical conditioning 29 2.6 Pattern recognition 32 2.6.1 Task 32 2.6.2 One pattern 32 2.6.3 Several patterns 34 2.7 Pattern retrieval with stochastic dynamics 39 2.7.1 Dynamical equilibrium for a single neuron 40 2.7.2 Dynamical equilibrium for configurations 40 2.8 A physiological basis of stochastic dynamics 44 2.8.1 Biophysics of action potentials 44 2.8.2 Spherical capacitor cell model 45 2.8.3 Nyquist formula 46 2.8.4 Thermodynamic membrane potential fluctuations 50 2.8.5 Resulting stochastic neuronal dynamics 51 2.8.6 Discussion 53 2.9 Pattern retrieval with effectively continuous time 53 2.9.1 Continuous spike response function 54 2.9.2 Network model 54 2.9.3 Model analysis 55 2.10 Discussion of chapter 2 58 3 Self-Organizing Networks 60 3.1 Basic principle 61 3.2 Retinotopy as model system 61 3.3 General two-valued neuron coupling rules 63 3.3.1 Locality principle 63 3.3.2 Additive membrane potential rule, AMPR 64 3.3.3 Coupling transfer rule, CTR 64 3.3.4 Local linear coupling dynamics, LLCD 65 3.3.5 Limited neuronal couplings, LNCR 65 3.4 A 1D self-organizing network with Hebb-rule 65 3.4.1 Network architecture 65 3.4.2 Coupling dynamics 67 3.4.3 Transformed couplings 68 3.4.4 Single stimulation potential 68 3.5 Field theory of neurosynaptic dynamics 69 3.5.1 A general solution method 69 3.5.2 Ergodicity 69 3.5.3 Neurosynaptic states and transitions 70 3.5.4 Averaged neurosynaptic change field 70 3.5.5 Differential equation for neurosynaptic change 
field 71 3.5.6 Adiabatic principle 71 3.5.7 Differential equation for synaptic change field 72 3.5.8 Change potential field 73 3.5.9 Fluctuation dissipation theorems 77 3.5.10 Discussion 81 3.6 Field theory of topology preservation 81 3.6.1 Emergence of an injective mapping 81 3.6.2 Single neuron separation 83 3.6.3 Coincidence stabilization 84 3.6.4 Emergence of 1D topology preservation 86 3.6.5 Emergence of clusters and topology preservation 87 3.6.6 Discussion 92 3.7 Field theory of orientation preference emergence 92 3.7.1 Network model 92 3.7.2 Change potentials 94 3.7.3 Potential minima 95 3.7.4 Discussion 97 3.8 Field theory of orientation pattern emergence 98 3.8.1 Phenomenon of pinwheel structures 98 3.8.2 Network model 98 3.8.3 Effective iso-orientation interaction 100 3.8.4 Continuous orientation interaction 101 3.8.5 Orientation fluctuations 102 3.8.6 Instability of the ground state 103 3.8.7 Topological singularities according to the Poisson equation 104 3.8.8 Greens function solution 106 3.8.9 Energy of a planar system of charges 108 3.8.10 Prediction: Plasma phase transition 109 3.9 Overview for formal temperatures 110 3.10 Discussion of chapter 3 111 4 Supervised & Self-Organized Adaptation 113 4.1 Forms of supervised adaptation 113 4.2 Operant conditioning 114 4.2.1 The phenomenon of transitive inference 114 4.2.2 Network model 116 4.2.3 Analysis of the network model 117 4.2.4 Transitive inference 119 4.2.5 Symbolic distance effect 119 4.2.6 Network parameters for various species 121 4.3 Generalized quantitative dynamical analysis 122 4.3.1 General valuation dynamics 122 4.3.2 Transitive inference with general valuation dynamics 123 4.3.3 Necessary and sufficient conditions for learning the Piaget task 123 4.3.4 Transitive inference as a consequence of successful learning 124 4.3.5 General set of tasks 125 4.3.6 Network model with minimization of complexity 126 4.3.7 Complete neurosynaptic dynamics and empirical data 127 4.3.8 Discussion of operant conditioning 130 4.4 Supervised Hebb-rule 131 4.4.1 Network model 131 4.4.2 Network analysis 131 4.4.3 Discussion on convergence with Hebb-rules 134 4.5 Perceptron 134 4.5.1 Network and task definition 134 4.5.2 Network architecture capabilities 135 4.5.3 Perceptron convergence theorem 136 4.6 Discussion of chapter 4 137 5 Advanced Adaptations 138 5.1 Learning of charges 139 5.1.1 An especially simple experiment 140 5.1.2 Necessary inner neurons 141 5.1.3 Definition of frameworks 141 5.1.4 Network model 142 5.1.5 Analysis of the network model 143 5.1.6 Discussion 146 5.2 Attention 147 5.2.1 Network model with attention 148 5.2.2 Potential field theorem 148 5.2.3 Attentional learning of charges 150 5.2.4 Attentional adaptation convergence theorem 151 5.2.5 Emergence of network architectures 153 5.2.6 Generalized perceptron 153 5.2.7 Neuronal dynamics with signum function 155 5.2.8 Discussion 156 5.3 Reversal 156 5.3.1 A reversal experiment 157 5.3.2 Network model 157 5.3.3 Discussion of reversal 159 5.4 Learning of counting 159 5.4.1 Generalization without limitation 159 5.4.2 Network architecture and dynamics 160 5.4.3 Analysis of the network 161 5.4.4 An instructive network model 162 5.4.5 Advanced network dynamics 164 5.4.6 Analysis of the advanced network model 165 5.4.7 A solution of Wittgenstein's paradox 166 5.4.8 Discussion 168 5.5 Convergence theorem for inner feedback 168 5.5.1 Idea of adaptation via short dimension increase 169 5.5.2 Specification of the learning situation 170 5.5.3 Learning algorithm for inner 
feedback 171 5.5.4 Convergence theorem 174 5.5.5 Optimal correspondence via short dimension increase 177 5.5.6 Generalizations 178 5.5.7 Discussion 179 5.6 Correspondence deficit compensation: Schizophrenia model? 180 5.6.1 Starting point 180 5.6.2 Network model 181 5.6.3 Network characteristics 182 5.6.4 Transfer to schizophrenia 185 5.6.5 Therapy 187 5.6.6 Empirical findings 188 5.6.7 Discussion 194 5.7 A mesoscopic perception model 195 5.7.1 External stimulations 195 5.7.2 Network model 197 5.7.3 Field theoretic solution of the network 201 5.7.4 Modeling phenomena 203 5.7.5 Discussion 210 5.8 Emergent valuation 211 5.8.1 Emergence of a valuating field 211 5.8.2 Effect of a valuating stimulation 213 5.9 General adaptation dynamics 214 5.9.1 Definition of microscopic dynamics 214 5.9.2 Resulting macroscopic dynamics 216 5.9.3 Some special cases 218 5.10 Discussion of chapter 5 219 5.11 No Laplace demon 220 6 Summary 221 6.1 Overview 221 6.2 Predictions 222 6.3 List of ideas 224 6.4 Open questions 225 From nq6 at columbia.edu Thu May 23 10:59:23 1996 From: nq6 at columbia.edu (Ning Qian) Date: Thu, 23 May 1996 10:59:23 -0400 (EDT) Subject: Papers available: disparity tuning and motion-stereo integration Message-ID: <199605231459.KAA01297@konichiwa.cc.columbia.edu> The following two papers on disparity tuning of binocular cells and on motion-stereo integration are available from our WWW homepage at: http://brahms.cpmc.columbia.edu ----------------------------------------------------------------------- Binocular receptive field models, disparity tuning, and characteristic disparity Yudong Zhu and Ning Qian Columbia University (To appear in Neural Computation) Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same Gaussian envelope but different phase parameters in the sinusoidal modulations \cite{Freeman90}. This phase-parameter based RF description, however, has recently been questioned by \citeasnoun{Wagner93} based on their identification of a so-called characteristic disparity (CD) in some cells' disparity tuning curves. They concluded that their data favor the traditional binocular RF model which assumes an overall positional shift between a cell's left and right RFs. Here we set to resolve this issue by studying the dependence of cells' disparity tuning on their underlying RF structures through mathematical analyses and computer simulations. We model the disparity tuning curves in Wagner and Frost's experiments and demonstrate that the mere existence of approximate CDs in real cells cannot be used to distinguish the phase-parameter based RF description from the traditional position-shift based RF description. Specifically, we found that model simple cells with either type of RF description do not have a CD. Model complex cells with the position-shift based RF description have a precise CD, and those with the phase-parameter based RF description have an approximate CD. We also suggest methods for correctly distinguishing the two types of RF descriptions. A hybrid of the two RF models may be required to fit the behavior of some real cells and we show how to determine the relative contributions of the two RF models. 
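The distinction between the two receptive-field descriptions contrasted in this abstract is easy to reproduce numerically. The following NumPy fragment is only an illustrative sketch (it is not code from the paper, and the parameter values are arbitrary): it builds a pair of 1-D Gabor receptive fields that differ either in carrier phase or in an overall positional shift.

  import numpy as np

  def gabor_rf(x, center, sigma, freq, phase):
      # Gaussian envelope times a sinusoidal carrier, both tied to `center`
      return np.exp(-(x - center)**2 / (2.0 * sigma**2)) * \
             np.cos(2.0 * np.pi * freq * (x - center) + phase)

  x = np.linspace(-2.0, 2.0, 401)      # retinal position (arbitrary units)
  sigma, freq = 0.5, 1.5               # envelope width and carrier frequency

  # Phase-parameter description: same envelope, different carrier phases.
  left_phase  = gabor_rf(x, 0.0, sigma, freq, 0.0)
  right_phase = gabor_rf(x, 0.0, sigma, freq, np.pi / 2.0)

  # Position-shift description: identical profiles, shifted as a whole.
  shift = (np.pi / 2.0) / (2.0 * np.pi * freq)   # comparable preferred disparity
  left_pos  = gabor_rf(x, 0.0, sigma, freq, 0.0)
  right_pos = gabor_rf(x, shift, sigma, freq, 0.0)

For a single carrier frequency and small shifts the two constructions produce very similar left/right profiles, which is consistent with the abstract's point that an approximate characteristic disparity alone does not distinguish them.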
This paper is also available from NEUROPROSE: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/qian.cd.ps.Z ...................................................................... A Physiological Model for Motion-stereo Integration and a Unified Explanation of the Pulfrich-like Phenomena Ning Qian and Richard A. Andersen Columbia University and Caltech (To appear in Vision Research) Many psychophysical and physiological experiments indicate that visual motion analysis and stereoscopic depth perception are processed together in the brain. However, little computational effort has been devoted to combining these two visual modalities into a common framework based on physiological mechanisms. We present such an integrated model in this paper. We have previously developed a physiologically realistic model for binocular disparity computation \cite{Qian94e}. Here we demonstrate that under some general and physiological assumptions, our stereo vision model can be combined naturally with motion energy models to achieve motion-stereo integration. The integrated model may be used to explain a wide range of experimental observations regarding motion-stereo interaction. As an example, we show that the model can provide a unified account of the classical Pulfrich effect \cite{Morgan75} and the generalized Pulfrich phenomena to dynamic noise patterns \cite{Tyler74,Falk80} and stroboscopic stimuli \cite{Burr79}. ----------------------------------------------------------------------- From trevor at mallet.Stanford.EDU Thu May 23 11:16:09 1996 From: trevor at mallet.Stanford.EDU (Trevor Hastie) Date: Thu, 23 May 1996 08:16:09 -0700 (PDT) Subject: Modern Regression and Classification Message-ID: <199605231516.IAA09756@mallet.Stanford.EDU> A non-text attachment was scrubbed... Name: not available Type: text Size: 2824 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/878fc910/attachment-0001.ksh From ruppin at math.tau.ac.il Thu May 23 16:30:59 1996 From: ruppin at math.tau.ac.il (Eytan Ruppin) Date: Thu, 23 May 1996 23:30:59 +0300 (GMT+0300) Subject: Neural modeling papers Message-ID: <199605232030.XAA11605@gemini.math.tau.ac.il> Hi, 1. A few recent neural modeling papers are now available on my homepage, http://www.math.tau.ac.il/~ruppin/. Their abstracts are enclosed below. 2. Abstracts of the talks to be given in the TAU workshop on `Memory organization and consolidation: cognitive and computational perspectives' (Tel-Aviv, 28 - 30'th of May), will be available after the workshop via http://www.brain.tau.ac.il and via my homepage. Both homepages currently include the workshop program. Best wishes, Eytan Ruppin. %%%%%%%%%%%%%%%%%%%%%%%%%%% Abstracts: ----------- 1. Neuronal-Based Synaptic Compensation: A Computational Study in Alzheimer's Disease --------------------------------------------- David Horn, Nir Levy and Eytan Ruppin (to appear in Neural Computation 1996) In the framework of an associative memory model, we study the interplay between synaptic deletion and compensation, and memory deterioration, a clinical hallmark of Alzheimer's disease. Our study is motivated by experimental evidence that there are regulatory mechanisms that take part in the homeostasis of neuronal activity and act on the {\em neuronal} level. 
We show that, following synaptic deletion, synaptic compensation can be carried out efficiently by a {\em local, dynamic} mechanism, where each neuron maintains the profile of its incoming post-synaptic current. Our results open up the possibility that the primary factor in the pathogenesis of cognitive deficiencies in Alzheimer's disease is the failure of local neuronal regulatory mechanisms. Allowing for neuronal death, we observe two pathological routes in AD, leading to different correlations between the levels of structural damage and functional decline. 2. Optimal Firing in Sparsely-connected Low-activity Attractor Networks -------------------------------------------------------------------------- Isaac Meilijson and Eytan Ruppin (to appear in Biological Cybernetics 1996) We examine the performance of Hebbian-like attractor neural networks, recalling stored memory patterns from their distorted versions. Searching for an activation (firing-rate) function that maximizes the performance in sparsely-connected low-activity networks, we show that the optimal activation function is a {\em Threshold-Sigmoid} of the neuron's input field. This function is shown to be in close correspondence with the dependence of the firing rate of cortical neurons on their integrated input current, as described by neurophysiological recordings and conduction-based models. It also accounts for the decreasing-density shape of firing rates that has been reported in the literature. 3. Pathogenesis of Schizophrenic Delusions and Hallucinations: A Neural Model --------------------------------------------------------------------------- Eytan Ruppin, James Reggia and David Horn ({\em Schizophrenia Bulletin}, 22(1), 105-123, 1996 ) We implement and study a computational model of Stevens' theory of the pathogenesis of schizophrenia [1992]. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. Modeling Stevens' hypothesized pathological synaptic changes in this framework results in adverse side effects reminiscent of hallucinations and delusions seen in schizophrenia: spontaneous, stimulus-independent retrieval of stored memories focused on just a few of the stored patterns. These could account for the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger, and for their tendency to concentrate on a few central cognitive and perceptual themes. The model explains why schizophrenic positive symptoms tend to wane as the disease progresses, why delayed therapeutical intervention leads to a much slower response, and why delusions and hallucinations may persist for a long duration when they occur. 4. Synaptic Runaway in Associative Networks ------------------------------------------- (Submitted to NIPS*96) Asnat Greenstein-Messica and Eytan Ruppin Synaptic runaway, the formation of erroneous synapses in the process of learning new patterns, is studied both analytically and numerically in binary associative neural networks. It is found that under normal biological conditions synaptic runaway in these networks is of fairly moderate magnitude, and is thus different from the extensive synaptic runaway found previously in analog-firing associative networks. 
However, synaptic runaway may become extensive if the threshold for Hebbian learning is reduced. The implications of these findings to the possible role of N-methyl-D-aspartate (NMDA) alterations in the pathogenesis of schizophrenia are discussed. 5. Neuronal Homeostasis and the Art of Synaptic Maintenance ------------------------------------------------------------- David Horn, Nir Levy and Eytan Ruppin (Submitted to NIPS*96) We propose a novel mechanism of synaptic maintenance whose goal is to preserve the performance of an associative memory network undergoing synaptic degradation, and to prevent the development of pathologic attractors. This mechanism is demonstrated by simulations performed in a low-activity neural model that implements local neuronal homeostasis. It works well even in a network undergoing strongly inhomogeneous synaptic alterations, and when input patterns are consecutively stored in the network. Our synaptic maintenance method strongly supports the idea that memory consolidation and synaptic maintenance should occur in separate periods of time, in a repetitive manner. Consequently, we hypothesize that synaptic maintenance occurs during REM sleep, while memory consolidation occurs during slow wave sleep. 6. Neural modeling of psychiatric disorders (A review paper) ------------------------------------------------------------- Eytan Ruppin ({\em Network}, 6, 635-656, 1995) This paper reviews recent neural modeling studies of psychiatric disorders. Numerous aspects of psychiatric disturbances have been investigated, such as the role of synaptic changes in the pathogenesis of Alzheimer's disease, the study of spurious attractors as possible neural correlates of schizophrenic positive symptoms, and the exploration of the ability of feed-forward and recurrent networks to quantitatively model the cognitive performance of schizophrenic patients. Current models all employ considerable simplifications, both on the level of the behavioral phenomenology they seek to explore, and on the level of their structure and dynamics. However, it is encouraging to realize that the disruption of just a few simple computational mechanisms can lead to behaviors which correspond to some of the clinical features of psychiatric disorders, and can shed light on their pathogenesis. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% From risto at cs.utexas.edu Fri May 24 11:27:45 1996 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Fri, 24 May 1996 10:27:45 -0500 Subject: Electronic book: Lateral Interactions in the Cortex Message-ID: <199605241527.KAA29219@cascais.cs.utexas.edu> We are pleased to announce the publication of the book LATERAL INTERACTIONS IN THE CORTEX: STRUCTURE AND FUNCTION This book is entirely electronic, in the HTML format, and can be accessed through the World Wide Web at the address below. It makes extensive use of the hypertext structure of HTML documents, including hyperlinks to researchers, institutions, and publications around the world, many color illustrations, and even a few mpeg movies. Please read the Preface for hints on how to get the most out of the book. Below is a short abstract of the book and table of contents. Enjoy! 
-- The Editors ------------------------------------------------------------------------ LATERAL INTERACTIONS IN THE CORTEX: STRUCTURE AND FUNCTION Electronic book, ISBN 0-9647060-0-8 http://www.cs.utexas.edu/users/nn/web-pubs/htmlbook96/ http://eris.wisdom.weizmann.ac.il/~edelman/htmlbook96/ (mirror site) Austin, TX: The UTCS Neural Network Research Group Joseph Sirosh, Risto Miikkulainen, and Yoonsuck Choe (editors) In the last few years, several new results on the structure, development, and functional role of lateral connectivity in the cortex have emerged. These results have led to a new understanding of the cortex as a continuously-adapting dynamic system shaped by competitive and cooperative lateral interactions. Many of the results and their interpretations are still controversial, and computational and analytical investigations can serve a pivotal role in establishing this new model. This book brings together eleven such investigations, each from a slightly different perspective and level, aiming at explaining what function the lateral interactions could play in the development and information processing in the cortex. The book serves as an overview of the kinds of processes that may be going on, laying the groundwork for understanding information processing in the laterally connected cortex. Table of Contents: Preface 1. Introduction - Risto Miikkulainen and Joseph Sirosh 2. The Pattern and Functional Significance of Long-Range Interactions in Human Visual Cortex - Uri Polat, Anthony M. Norcia, and Dov Sagi 3. Recurrent Inhibition and Clustered Connectivity as a Basis for Gabor-like Receptive Fields in the Visual Cortex - Silvio P. Sabatini 4. Variable Gain Control in Local Cortical Circuitry Supports Context-Dependent Modulation by Long-Range Connections - David C. Somers, Louis J. Toth, Emanuel Todorov, S. Chenchal Rao, Dae-Shik Kim, Sacha B. Nelson, Athanassios G. Siapas, and Mriganka Sur 5. The Role of Lateral Connections in Visual Cortex: Dynamics and Information Processing - Marius Usher, Martin Stemmler, and Ernst Niebur 6. Synchronous Oscillations Based on Lateral Connections - DeLiang Wang 7. A Basis for Long-Range Inhibition Across Cortex - J. G. Taylor and F. N. Alavi 8. Self-Organization of Orientation Maps, Lateral Connections, and Dynamic Receptive Fields in the Primary Visual Cortex - Joseph Sirosh, Risto Miikkulainen, and James A. Bednar 9. Associative Decorrelation Dynamics in Visual Cortex - Dawei W. Dong 10. A Self-Organizing Neural Network That Learns to Detect and Represent Visual Depth from Occlusion Events - Jonathan A. Marshall and Richard Alley 11. Face Recognition by Dynamic Link Matching - Laurenz Wiskott and Christoph von der Malsburg 12. Why Have Lateral Connections in the Visual Cortex? - Shimon Edelman From jung at pop.uky.edu Fri May 24 11:58:51 1996 From: jung at pop.uky.edu (Dr. Ranu Jung) Date: Fri, 24 May 1996 15:58:51 +0000 Subject: Graduate Research Asst. Message-ID: <199605242101.RAA23996@service1.cc.uky.edu> GRADUATE RESEARCH ASSISTANTSHIPS (PLEASE FORWARD) A graduate research assistantship is available for up to 3 years to conduct research in the "Neural Control of Locomotion" at the Center for Biomedical Engineering, University of Kentucky. Students have to be accepted into the Ph.D./MS program starting Fall 1996 (August). The assistantship is available to citizens of all nations. 
The research is to examine the dynamical interaction between the brain and the spinal cord in the control of locomotion, in particular, swimming in a lower vertebrate. Traditional neurophysiological experimental techniques will be complemented by techniques from non-linear signal processing and control. In conjunction, the behavior of connectionist/biophysical neural network models will be examined and analyzed using tools from dynamical systems theory. If interested, send a CV and the names of two references, preferably by email or Fax to: Ranu Jung, Ph.D. Center for Biomedical Engineering 21 Wenner-Gren Research Lab. University of Kentucky, Lexington 40506-0070 Tel. 606-257-5931 email:jung at pop.uky.edu Fax: 606-257-1856 The University of Kentucky is located in the rolling hills of the Bluegrass Country and has a diverse campus. The Center for Biomedical Engineering is a multidisciplinary center in the Graduate School. We have strong ties to the Medical Center and the School of Engineering. Details about the University of Kentucky and the Center for Biomedical Engineering can be obtained on the web at http://www.uky.edu; http://www.uky.edu/RGS/CBME. From mike at psych.ualberta.ca Fri May 24 15:10:53 1996 From: mike at psych.ualberta.ca (Dr. Michael R.W. Dawson) Date: Fri, 24 May 1996 13:10:53 -0600 (MDT) Subject: Cognitive Neuroscience Job Message-ID: The University of Alberta, Department of Psychology, is pleased to announce that it is continuing its expansion into the Cognitive Neurosciences in 1997. Canadians and Non-Canadians are encouraged to apply for a tenure-track position. Details are described below. Additional information, including profiles of two cognitive neuroscientists hired by our Department last year, can be found at our web-site: http://web.psych.ualberta.ca/Neuroscience_Positions.htmld/index.html ========================================================== DEPARTMENT OF PSYCHOLOGY, UNIVERSITY OF ALBERTA Tenure-Track Assistant Professor Position in Cognitive Neuroscience The Department of Psychology, Faculty of Science at the University of Alberta, is seeking to expand its development in the Cognitive Neurosciences. A tenure-track position in Cognitive Neuroscience at the assistant professor level will be open to competition (salary range $39,230 - $55,526). The appointment will be effective July 1, 1997. Candidates should have a strong interest in neuroscience with demonstrated excellence and ongoing research programs. The expectation is that the successful candidate will secure NSERC, MRC, or equivalent funding. Hiring decisions will be made on the basis of demonstrated research capability, teaching ability, and the potential for interactions with colleagues. Applicants should have expertise in any of the following or related areas: perception, language, neural plasticity, development and aging, attention, motor control, emotion, or memory. The applicant should send a curriculum vitae, a statement of current and future research plans, recent publications, and arrange to have at least three letters of reference forwarded, to the Chair of the Cognitive Neuroscience Search Committee, Department of Psychology, P-220 Biological Sciences Building, University of Alberta, Edmonton, Alberta, Canada, T6G 2E9. Applications for the competition should be received by November 1, 1996. PhD must be completed by July 1, 1997. The University of Alberta is committed to the principle of equity in employment.
As an employer we welcome diversity in the workplace and encourage applications from all qualified women and men, including Aboriginal peoples, persons with disabilities, and members of visible minorities. From rao at cs.rochester.edu Sat May 25 21:13:35 1996 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 25 May 1996 21:13:35 -0400 Subject: No subject Message-ID: <199605260113.VAA24390@skunk.cs.rochester.edu> psyc at pucc.princeton.edu, cogneuro at ptolemy-ethernet.arc.nasa.gov, cvnet at skivs.ski.org, inns-l%umdd.bitnet at pucc.princeton.edu, neuronet at tutkie.tut.ac.jp, vision-list at teleosresearch.com Subject: Papers available: Dynamic Models of Visual Recognition The following two papers on dynamic cortical models of visual recognition are now available for retrieval via ftp. Comments/suggestions welcome, -Rajesh Rao (rao at cs.rochester.edu) =========================================================================== A Class of Stochastic Models for Invariant Recognition, Motion, and Stereo Rajesh P.N. Rao and Dana H. Ballard (Submitted to NIPS*96) Abstract We describe a general framework for modeling transformations in the image plane using a stochastic generative model. Algorithms that resemble the well-known Kalman filter are derived from the MDL principle for estimating both the generative weights and the current transformation state. The generative model is assumed to be implemented in cortical feedback pathways while the feedforward pathways implement an approximate inverse model to facilitate the estimation of current state. Using the above framework, we derive stochastic models for invariant recognition, motion estimation, and stereopsis, and present preliminary simulation results demonstrating recognition of objects in the presence of translations, rotations and scale changes. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/invar.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/invar.ps.Z 7 pages; 430K compressed. ========================================================================== Dynamic Model of Visual Recognition Predicts Neural Response Properties In The Visual Cortex Rajesh P.N. Rao and Dana H. Ballard (Neural Computation - in review) Abstract The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. In this paper, we describe a stochastic network model of visual recognition that explains these experimental observations by using a hierarchical form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a stochastic learning rule also derived from the MDL principle. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. 
Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/dynmem.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/dynmem.ps.Z 32 pages; 534K compressed. =========================================================================== From koza at CS.Stanford.EDU Sun May 26 02:32:04 1996 From: koza at CS.Stanford.EDU (John Koza) Date: Sat, 25 May 96 23:32:04 PDT Subject: GP is competitive with humans on 4 problems Message-ID: We have fixed the problem and the following paper is now available in Post Script. Four Problems for which a Computer Program Evolved by Genetic Programming is Competitive with Human Performance ABSTRACT: It would be desirable if computers could solve problems without the need for a human to write the detailed programmatic steps. That is, it would be desirable to have a domain-independent automatic programming technique in which "What You Want Is What You Get" ("WYWIWYG" - pronounced "wow-eee-wig"). Genetic programming is such a technique. This paper surveys three recent examples of problems (from the fields of cellular automata and molecular biology) in which genetic programming evolved a computer program that produced results that were slightly better than human performance for the same problem. This paper then discusses the problem of electronic circuit synthesis in greater detail. It shows how genetic programming can evolve both the topology of a desired electrical circuit and the sizing (numerical values) for each component in a crossover (woofer and tweeter) filter. Genetic programming has also evolved the design for a lowpass filter, the design of an amplifier, and the design for an asymmetric bandpass filter that was described as being difficult-to-design in an article in a leading electrical engineering journal. John R. Koza Computer Science Department 258 Gates Building Stanford University Stanford, California 94305 E-MAIL: Koza at CS.Stanford.Edu Forrest H Bennett III Visiting Scholar Computer Science Department Stanford University E-MAIL: Koza at CS.Stanford.Edu David Andre Visiting Scholar Computer Science Department Stanford University E-MAIL: fhb3 at slip.net Martin A. Keane Econometrics Inc. Chicago, IL 60630 Paper available in Postscript via WWW from http://www-cs-faculty.stanford.edu/~koza/ Look under "Research Publications" and "Recent Papers" on the home page. This paper was presented at the IEEE International Conference on Evolutionary Computation on May 20-22, 1996 in Nagoya, Japan. Additional papers on evolving electrical circuits will be presented at the GP-96 conference to be held at Stanford University on July 28-31, 1996. For information, see http://www.cs.brandeis.edu/~zippy/gp-96.html From gbugmann at soc.plym.ac.uk Sat May 25 12:22:04 1996 From: gbugmann at soc.plym.ac.uk (Guido.Bugmann xtn 2566) Date: Sat, 25 May 1996 17:22:04 +0100 (BST) Subject: Connectionist Learning - Some New Ideas In-Reply-To: <199605181814.OAA09391@skunk.cs.rochester.edu> Message-ID: On Sat, 18 May 1996 rao at cs.rochester.edu wrote: > >One loses about 100,000 cortical neurons a day (about a percent of > >the original number every three years) under normal conditions. > > Does anyone have a concrete citation (a journal article) for this or > any other similar estimate regarding the daily cell death rate in the > cortex of a normal brain? 
I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers. A similar question (are there references for 1 million neurons lost per day?) came up in a discussion on the topic of robustness on connectionists a few years ago (1992). Some of the replies were:

-------------------------------------------------------

From phkywong at uxmail.ust.hk Mon May 27 08:56:13 1996
From: phkywong at uxmail.ust.hk (Dr. Michael Wong)
Date: Mon, 27 May 1996 20:56:13 +0800
Subject: Paper available
Message-ID: <96May27.205615hkt.18930-8054+221@uxmail.ust.hk>

The following papers, to be presented at ICONIP'96, are now available via anonymous FTP. (5 pages each)

============================================================================

FTP-host: physics.ust.hk
FTP-files: pub/kymwong/robust.ps.gz

Neural Dynamic Routing for Robust Teletraffic Control

W. K. Felix Lor and K. Y. Michael Wong
Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
E-mail addresses: phfelix at usthk.ust.hk, phkywong at usthk.ust.hk

ABSTRACT
We study the performance of a neural dynamic routing algorithm on the circuit-switched network under critical network situations. It consists of a teacher generating examples for supervised learning in a group of student neural controllers. Simulations show that the method is robust and superior to conventional routing techniques.

============================================================================

FTP-host: physics.ust.hk
FTP-files: pub/kymwong/nonuni.ps.gz

Neural Network Classification of Non-Uniform Data

K. Y. Michael Wong and H. C. Lau
Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
E-mail addresses: phkywong at usthk.ust.hk, phhclau at usthk.ust.hk

ABSTRACT
We consider a model of non-uniform data, which resembles typical data for system faults in diagnostic classification tasks. Phase diagrams illustrate the role reversal of the informators and background as parameters change. With no prior knowledge about the non-uniformity, the Bayesian classifier may perform worse than other neural network classifiers for few examples.
============================================================================

FTP instructions:
unix> ftp physics.ust.hk
Name: anonymous
Password: your full email address
ftp> cd pub/kymwong
ftp> get robust.ps.gz (or get nonuni.ps.gz)
ftp> quit
unix> gunzip robust.ps.gz (or gunzip nonuni.ps.gz)
unix> lpr robust.ps (or lpr nonuni.ps)

From listerrj at helios.aston.ac.uk Tue May 28 05:46:11 1996
From: listerrj at helios.aston.ac.uk (Richard Lister)
Date: Tue, 28 May 1996 10:46:11 +0100
Subject: Postdoctoral Research Fellowship at Aston University
Message-ID: <11161.199605280946@sun.aston.ac.uk>

----------------------------------------------------------------------

Neural Computing Research Group
-------------------------------
Dept of Computer Science and Applied Mathematics
Aston University, Birmingham, UK

POSTDOCTORAL RESEARCH FELLOWSHIP
--------------------------------
On-line Learning in Radial Basis Function Networks
--------------------------------------------------

*** Full details at http://www.ncrg.aston.ac.uk/ ***

The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2-year postdoctoral research position in the area of `On-line Learning in Radial Basis Function Networks'. The emphasis of the research will be on applying a theoretically well-founded approach based on methods adopted from statistical mechanics to analyse learning in RBF networks. Potential candidates should have strong mathematical and computational skills, with a background in statistical mechanics and neural networks.

Conditions of Service
---------------------
Salaries will be up to point 6 on the RA 1A scale, currently 15,566 UK pounds. The salary scale is subject to annual increments.

How to Apply
------------
If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 3 referees, to:

Dr. David Saad
Neural Computing Research Group
Dept. of Computer Science and Applied Mathematics
Aston University
Birmingham B4 7ET, U.K.
Tel: 0121 333 4631   Fax: 0121 333 4586
e-mail: D.Saad at aston.ac.uk
(e-mail submission of postscript files is welcome)

Closing date: 21 June, 1996.

----------------------------------------------------------------------

From geoff at salk.edu Tue May 28 15:45:03 1996
From: geoff at salk.edu (Geoff Goodhill)
Date: Tue, 28 May 96 12:45:03 PDT
Subject: Cell death during embryogenesis
Message-ID: <9605281945.AA26708@salk.edu>

Those following the current thread on cell death may be interested in a recent experimental investigation of this during development of the mouse by Blaschke, Staley & Chun (abstract below). The most striking finding is that at embryonic day 14, 70% of cortical cells are dying.
Geoff Goodhill
The Salk Institute
10010 North Torrey Pines Road
La Jolla, CA 92037
Email: geoff at salk.edu
http://www.cnl.salk.edu/~geoff

TI: WIDESPREAD PROGRAMMED CELL-DEATH IN PROLIFERATIVE AND POSTMITOTIC REGIONS OF THE FETAL CEREBRAL-CORTEX
AU: BLASCHKE_AJ, STALEY_K, CHUN_J
NA: UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,NEUROSCI & BIOMED SCI GRAD PROGRAM,9500 GILMAN DR,LA JOLLA,CA,92093 UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,NEUROSCI & BIOMED SCI GRAD PROGRAM,LA JOLLA,CA,92093 UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,BIOL GRAD PROGRAM,LA JOLLA,CA,92093
JN: DEVELOPMENT, 1996, Vol.122, No.4, pp.1165-1174
IS: 0950-1991
AB: A key event in the development of the mammalian cerebral cortex is the generation of neuronal populations during embryonic life. Previous studies have revealed many details of cortical neuron development including cell birthdates, migration patterns and lineage relationships. Programmed cell death is a potentially important mechanism that could alter the numbers and types of developing cortical cells during these early embryonic phases. While programmed cell death has been documented in other parts of the embryonic central nervous system, its operation has not been previously reported in the embryonic cortex because of the lack of cell death markers and the difficulty in following the entire population of cortical cells. Here, we have investigated the spatial and temporal distribution of dying cells in the embryonic cortex using an in situ end-labelling technique called 'ISEL+' that identifies fragmented nuclear DNA in dying cells with increased sensitivity. The period encompassing murine cerebral cortical neurogenesis was examined, from embryonic days 10 through 18. Dying cells were rare at embryonic day 10, but by embryonic day 14, 70% of cortical cells were found to be dying. This number declined to 50% by embryonic day 18, and few dying cells were observed in the adult cerebral cortex. Surprisingly, while dying cells were observed throughout the cerebral cortical wall, the majority were found within zones of cell proliferation rather than in regions of postmitotic neurons. These observations suggest that multiple mechanisms may regulate programmed cell death in the developing cortex. Moreover, embryonic cell death could be an important factor enabling the selection of appropriate cortical cells before they complete their differentiation in postnatal life.
KP: RHESUS-MONKEY, POSTNATAL-DEVELOPMENT, MIGRATION PATTERNS, REGRESSIVE EVENTS, DNA FRAGMENTATION, NERVOUS-SYSTEM, GANGLION-CELL, VISUAL-CORTEX, MOUSE, RAT
WA: PROGRAMMED CELL DEATH, CEREBRAL CORTEX, EMBRYONIC DEVELOPMENT, MOUSE

From lba at inesc.pt Tue May 28 11:26:09 1996
From: lba at inesc.pt (Luis B. Almeida)
Date: Tue, 28 May 1996 16:26:09 +0100
Subject: paper available
Message-ID: <31AB1B11.6EEA4806@inesc.pt>

The following paper, which will appear in the Proceedings of the IEEE International Conference on Neural Networks 1996, Washington DC, June 1996, is available at

ftp://aleph.inesc.pt/pub/lba/icnn96.ps

An Objective Function for Independence

Goncalo Marques and Luis B. Almeida
IST and INESC, Lisbon, Portugal

Abstract
The problem of separating a linear or nonlinear mixture of independent sources has been the focus of many studies in recent years. It is well known that the classical principal component analysis method, which is based on second order statistics, performs poorly even in the linear case, if the sources do not have Gaussian distributions.
Based on this fact, several algorithms take into account statistics of higher than second order in their approach to the problem. Other algorithms use the Kullback-Leibler divergence to find a transformation that can separate the independent signals. Nevertheless the great majority of these algorithms only take into account a finite number of statistics, usually up to the fourth order, or use some kind of smoothed approximations. In this paper we present a new class of objective functions for source separation. The objective functions use statistics of all orders simultaneously, and have the advantage of being continuous, differentiable functions that can be computed directly from the training data. A derivation of the class of functions for two dimensional data, some numerical examples illustrating its performance, and some implementation considerations are described.

In this electronic version a few typos of the printed version have been corrected. The paper is reproduced with permission from the IEEE. Please read the copyright notice at the beginning of the document.

--
Luis B. Almeida, INESC, R. Alves Redol, 9, P-1000 Lisboa, Portugal
Phone: +351-1-3544607, +351-1-3100246   Fax: +351-1-3145843
e-mail: lba at inesc.pt or luis.almeida at inesc.pt
-------------------------------------------------------------------
*** Indonesia is killing innocent people in East Timor ***

From jorn at let.rug.nl Tue May 28 03:37:34 1996
From: jorn at let.rug.nl (Jorn Veenstra)
Date: Tue, 28 May 1996 09:37:34 +0200
Subject: postdoc position in Neurolinguistics
Message-ID: <199605280737.AA028329061@freya.let.rug.nl>

POSTDOCTORAL POSITION AVAILABLE

The Netherlands Organization for Scientific Research (NWO) will make a THREE YEAR POSTDOCTORAL POSITION AVAILABLE within the project "Neurological Basis of Language" (NWO project 030-30-431) to a candidate whose Ph.D. research was directed toward computational modelling of cognitive functions (preferably language) based on psychological and neuroanatomical data. The role of the postdoc would be to develop computational models of language processing which employ physiologically plausible assumptions and are compatible with or predict the results of psycholinguistic experimental evidence on the time course and structure of language processing. The goal of the project as a whole is to investigate the localization of language functions using positron emission tomography and the time course of language processing using event-related potentials, in order to develop a neurologically plausible model of language.

Contact Dr. Laurie A. Stowe, Dept. of Linguistics, Faculty of Letters, University of Groningen, Postbus 716, 9700 AS Groningen, Netherlands, 31 50 636627 or stowe at let.rug.nl for further information. Applications should be accompanied by a curriculum vitae, two references (sent directly by the referees), and documentation of research experience in the form of published and in-progress articles.

From bap at valaga.salk.edu Wed May 29 03:55:38 1996
From: bap at valaga.salk.edu (Barak Pearlmutter)
Date: Wed, 29 May 1996 00:55:38 -0700
Subject: Paper Available --- Blind Source Separation
Message-ID: <199605290755.AAA04235@valaga.salk.edu>

The following paper (which will appear at the 1996 International Conference on Neural Information Processing this Fall) is available as

http://www.cnl.salk.edu/~bap/papers/iconip-96-cica.ps.gz

A Context-Sensitive Generalization of ICA

Barak A. Pearlmutter and Lucas C.
Parra Abstract Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing $n$ unknown independent sources through an unknown $n \times n$ mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where standard ICA cannot, including sources with low kurtosis, colored gaussian sources, and sources which have gaussian histograms. Since ICA is a special case of cICA, the MLE derivation provides as a corollary a rigorous derivation of classic ICA. From lba at inesc.pt Wed May 29 08:43:14 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Wed, 29 May 1996 13:43:14 +0100 Subject: paper available - ftp instructions Message-ID: <31AC4662.5E652F78@inesc.pt> Since some people don't know how to translate the address ftp://aleph.inesc.pt/pub/lba/icnn96.ps that was given for the paper An Objective Function for Independence Goncalo Marques and Luis B. Almeida IST and INESC, Lisbon, Portugal into anonymous ftp commands, I'm giving those commands below: >ftp aleph.inesc.pt Connected to aleph.inesc.pt. 220 aleph FTP server (SunOS 4.1) ready. Name (aleph.inesc.pt:lba): [type 'anonymous' here] 331 Guest login ok, send ident as password. Password: [type your e-mail address here] 230 Guest login ok, access restrictions apply. ftp> cd /pub/lba ftp> get icnn96.ps ftp> bye If the address 'aleph.inesc.pt' cannot be resolved, you can also use >ftp 146.193.2.131 instead of >ftp aleph.inesc.pt Happy downloading! Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From zhuh at helios.aston.ac.uk Wed May 29 15:33:48 1996 From: zhuh at helios.aston.ac.uk (zhuh) Date: Wed, 29 May 1996 19:33:48 +0000 Subject: Paper: efficient online training of curved models using ancillary statistics Message-ID: <1840.9605291833@sun.aston.ac.uk> The following paper is accepted for 1996 International Conference on Neural Information Processing, Hong Kong, Sept. 1996. ftp://cs.aston.ac.uk/neural/zhuh/ac1.ps.Z Using Ancillary Statistics in On-Line Learning Algorithms Huaiyu Zhu and Richard Rohwer Neural Computing Research Group Dept of Comp. Sci. Appl. Math. Aston Univ., Birmingham B4 7ET, UK Abstract Neural networks are usually curved statistical models. They do not have finite dimensional sufficient statistics, so on-line learning on the model itself inevitably loses information. In this paper we propose a new scheme for training curved models, inspired by the ideas of ancillary statistics and adaptive critics. 
At each point estimate, an auxiliary flat model (exponential family) is built to locally accommodate both the usual statistic (tangent to the model) and an ancillary statistic (normal to the model). The auxiliary model plays a role in determining credit assignment analogous to that played by an adaptive critic in solving temporal problems. The method is illustrated with the Cauchy model and the algorithm is proved to be asymptotically efficient.

--
Huaiyu Zhu, PhD, Neural Computing Research Group, Dept of Computer Science and Applied Mathematics, Aston University, Birmingham B4 7ET, UK
email: H.Zhu at aston.ac.uk   http://neural-server.aston.ac.uk/People/zhuh   ftp://cs.aston.ac.uk/neural/zhuh
tel: +44 121 359 3611 x 5427   fax: +44 121 333 6215

From pierre at mbfys.kun.nl Thu May 30 04:44:12 1996
From: pierre at mbfys.kun.nl (Piërre van de Laar)
Date: Thu, 30 May 1996 10:44:12 +0200
Subject: sensitivity analysis and relevance
Message-ID: <31AD5FDC.794BDF32@mbfys.kun.nl>

Dear Connectionists,

I am interested in methods which perform sensitivity analysis and/or relevance determination of input fields, and especially methods which use neural networks. Although I have already found a number of references (see end of mail), I expect that this list is not complete. Any further references would be highly appreciated. As usual a summary of all replies will be posted in about a month.

Thanks in advance,

Piërre van de Laar
Department of Medical Physics and Biophysics
University of Nijmegen, The Netherlands
mailto:pierre at mbfys.kun.nl
http://www.mbfys.kun.nl/~pierre/

Aldrich, C. and van Deventer, J.S.J., Modelling of Induced Aeration in Turbine Aerators by Use of Radial Basis Function Neural Networks, The Canadian Journal of Chemical Engineering 73(6):808-816, 1995.
Boritz, J. Efrim and Kennedy, Duane B., Effectiveness of Neural Network Types for Prediction of Business Failure, Expert Systems With Applications 9(4):503-512, 1995.
Hammitt, A.M. and Bartlett, E.B., Determining Functional Relationships from Trained Neural Networks, Mathematical and Computer Modelling 22(3):83-103, 1995.
Korthals, R.L. and Hahn, G.L. and Nienaber, J.A., Evaluation of Neural Networks as a Tool for Management of Swine Environments, Transactions of the American Society of Agricultural Engineers 37(4):1295-1299, 1994.
Lacroix, R. and Wade, K.M. and Kok, R. and Hayes, J.F., Prediction of Cow Performance with a Connectionist Model, Transactions of the American Society of Agricultural Engineers 38(5):1573-1579, 1995.
MacKay, David J.C., Probable Networks and Plausible Predictions - a Review of Practical Bayesian Methods for Supervised Neural Networks, Network: Computation in Neural Systems 6(3):469-505, 1995.
Neal, Radford M., Bayesian Learning for Neural Networks, Dept. of Computer Science, University of Toronto, 1994.
Oh, Sang-Hoon and Lee, Youngjik, Sensitivity Analysis of Single Hidden-Layer Neural Networks with Threshold Functions, IEEE Transactions on Neural Networks 6(4):1005-1007, 1995.
Naimimohasses, R. and Barnett, D.M. and Green, D.A. and Smith, P.R., Sensor Optimization Using Neural Network Sensitivity Measures, Measurement Science & Technology 6(9):1291-1300, 1995.
BrainMaker Professional: User's Guide and Reference Manual, 4th edition, California Scientific Software, Nevada City, Chapter 10, 48-59, 1993.
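For readers less familiar with the gradient-based end of this literature: one common way to score the relevance of an input field is to average the magnitude of the network's output derivative with respect to that input over a data set. The fragment below is only a minimal sketch of that idea for a one-hidden-layer tanh network; it is not an implementation of any of the methods cited above, and the network, its (untrained) weights, and the random data are all placeholder assumptions.

import numpy as np

def mlp_forward(X, W1, b1, W2, b2):
    # Small one-hidden-layer network with tanh hidden units and a linear output.
    H = np.tanh(X @ W1 + b1)
    y = H @ W2 + b2
    return y, H

def input_sensitivities(X, W1, b1, W2, b2):
    # Mean absolute derivative of the output with respect to each input field,
    # averaged over the data set: one relevance score per input.
    _, H = mlp_forward(X, W1, b1, W2, b2)
    dH = 1.0 - H ** 2                      # derivative of tanh at each hidden unit
    grads = (dH * W2.ravel()) @ W1.T       # chain rule, applied sample by sample
    return np.mean(np.abs(grads), axis=0)

# Placeholder weights and data, just to make the sketch runnable; in practice
# W1, b1, W2, b2 would come from a trained network and X from real data.
rng = np.random.default_rng(0)
n_inputs, n_hidden = 5, 8
W1 = rng.normal(size=(n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1))
b2 = np.zeros(1)
X = rng.normal(size=(200, n_inputs))
print(input_sensitivities(X, W1, b1, W2, b2))

Perturbation-based variants follow the same pattern but re-evaluate the network with one input clamped or jittered and measure the change in the output, which avoids computing derivatives altogether.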
From joe at sunia.u-strasbg.fr Thu May 30 07:44:45 1996
From: joe at sunia.u-strasbg.fr (Prof invite)
Date: Thu, 30 May 1996 13:44:45 +0200
Subject: Papers on Rule-Extraction from trained ANN
Message-ID: <199605301144.NAA19639@sunia.diane>

The following papers are available via anonymous ftp:

An Evaluation And Comparison Of Techniques For Extracting And Refining Rules From Artificial Neural Networks

Robert Andrews* ** Russell Cable* Joachim Diederich* Shlomo Geva* Mostefa Golea* Ross Hayward* Chris Ho-Stewart* Alan B. Tickle* **
Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia

QUTNRC-96-01-04.ps.Z

Abstract
It is becoming increasingly apparent that without some form of explanation capability, the full potential of trained Artificial Neural Networks (ANNs) may not be realised. The primary purpose of this report is to survey techniques which have been developed to redress this situation. Specifically the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy. An additional facet of the report is a comparative evaluation of the performance of a set of techniques developed at the Neurocomputing Research Centre at QUT to extract knowledge from trained ANNs as a set of symbolic rules.

Note: This is an extended version of: Andrews, R.; Diederich, J.; Tickle, A.B. A Survey and Critique of Techniques for Extracting Rules from Trained Artificial Neural Networks. KNOWLEDGE-BASED SYSTEMS 8 (1995) 6, 373-389. This version includes first empirical results and is distributed with permission of the editor and publisher.

*******************************************************************************

DEDEC: Decision Detection by Rule Extraction from Neural Networks

Alan B. Tickle* ** Marian Orlowski* ** Joachim Diederich*
Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia

QUTNRC-95-01-03.ps.Z

Abstract
A clearly recognised impediment to the realisation of the full potential of Artificial Neural Networks is an inherent inability to explain in a comprehensible form (e.g. as a set of symbolic rules), the process by which an ANN arrived at a particular conclusion/decision/result. While a variety of techniques have already appeared to address this limitation, a substantial number of the more successful approaches are dependent on specialised ANN architectures. The DEDEC technique is a generic approach to rule extraction from trained ANNs which is designed to be applicable across a broad range of ANN architectures. The basic motif adopted is to utilise the generalisation capability of a trained ANN to generate a set of examples from the problem domain which may include examples beyond the initial training set.
These examples are then presented to a symbolic induction algorithm and the requisite rule set extracted. However an important innovation over other rule-extraction techniques of this ('pedagogical') type is that the DEDEC technique utilises information extracted from an analysis of the weight vectors of the trained ANN to rank the input variables (rule antecedents) in terms of their relative importance. This additional information is used to focus the search of the solution space on those examples from the problem domain which are deemed to be of most significance. The paper gives a detailed description of one possible implementation of the DEDEC technique and discusses results obtained on both a set of structured sample problems and 'real world' problems.

*******************************************************************************

DEDEC: A Methodology For Extracting Rules From Trained Artificial Neural Networks

Alan B. Tickle* ** Marian Orlowski Joachim Diederich*
Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia

QUTNRC-96-01-05.ps.Z

Abstract
A recognised impediment to the more widespread utilisation of Artificial Neural Networks (ANNs) is the absence of a capability to explain, in a human comprehensible form, either the process by which a specific decision/result has been reached or, in general, the totality of knowledge embedded within the ANN. Currently, one of the most promising approaches to redressing this situation is to extract the knowledge embedded in the trained ANN as a set of symbolic rules. In this paper we describe the DEDEC methodology for rule-extraction which is applicable to a broad class of multilayer, feedforward ANNs trained by the 'back-propagation' method. Central to the DEDEC approach is the identification of the functional dependencies between the ANN inputs (i.e. the attribute values of the data) and the ANN outputs (e.g. the classification decision). However the key motif of the DEDEC methodology is the utilisation of information extracted from analysing the weight vectors in the trained ANN to focus the process of determining these functional dependencies. In addition, if required, DEDEC exploits the capability of a trained ANN to generalise beyond the data used in the ANN training phase. The paper illustrates one of a number of possible implementations of the DEDEC methodology, discusses results obtained on both a set of structured sample problems and a "real world" problem, and provides a comparison with other rule extraction techniques.

*****************************************************************************

Artificial Intelligence Meets Artificial Insemination
The Importance and Application of Symbolic Rule Extraction From Trained Artificial Neural Networks

Robert Andrews* ** Joachim Diederich* Emanoil Pop* Alan B Tickle* **
Neurocomputing Research Centre* School of Information Systems** Queensland University of Technology Brisbane Q 4001 Australia

QUTNRC-96-01-01.ps.Z

Abstract
In a recent article Andrews et al. [1995] describe a schema for classifying neural network rule extraction techniques as either decompositional, eclectic, or pedagogical. Decompositional techniques require knowledge of the neural network architecture and weights. Each hidden and output unit is interpreted as a Boolean rule with the antecedents being a set of incoming links whose summed weights guarantee to exceed the unit's bias regardless of the activations of the other incoming links.
Pedagogical techniques on the other hand treat the underlying neural network as a 'black box', using it both to classify examples and to generate examples which a symbolic algorithm then converts to rules. Eclectic techniques combine elements of the two basic categories. In this paper we describe some reasons why rule extraction is an important area of research. We then briefly describe three rule extraction algorithms, RULEX, DEDEC & RULENEG, these being representative of each of the abovementioned groups. We test these algorithms using two classification problems; the first being a laboratory benchmarking problem while the second is drawn from real life. For each problem, each of the rule extraction techniques previously described is applied to a trained neural network and the resulting rules presented.

********************************************************************************

Rule Extraction From CASCADE-2 Networks

Ross Hayward Emanoil Pop Joachim Diederich
Neurocomputing Research Centre Queensland University of Technology Brisbane Q 4001 Australia

QUTNRC-96-01-02.ps.Z

Abstract
Rule extraction from feed forward neural networks is a topic that is gaining increasing interest. Any symbolic representation of how a network arrives at a particular decision is important not only for user acceptance, but also for rule refinement and network learning. This paper describes a new method of extracting rules that predict the firing of single units within a feed forward neural network. The extraction technique is applied to networks constructed by the Cascade 2 algorithm, each of which solves a different benchmark problem. The hidden and output units within each of the networks are shown to represent distinct rules which govern the classification of patterns. Since a discrete rule set can be obtained for each of the units within the network, a logical mapping between input and output values can be achieved.

********************************************************************************

Feasibility of Incremental Learning in Biologically Plausible Networks

James M. Hogan Joachim Diederich
Neurocomputing Research Centre Queensland University of Technology Brisbane Q 4001 Australia

QUTNRC-96-01-03.ps.Z

Abstract
The feasibility of incremental learning within a feed-forward network is examined under the constraint of biologically plausible connectivity. A randomly connected network (of physiologically plausible global connection probability) is considered under the assumption of a local connection probability which decays with distance between nodes. The representation of the function XOR is chosen as a test problem, and the likelihood of its recruitment is discussed with reference to the probability of occurrence of a subnetwork suitable for implementation of this function, assuming a uniform initial weight distribution.
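As background to the 'pedagogical' category used in the abstracts above: the trained network is treated purely as an oracle, freshly generated inputs are labelled by the network itself, and a symbolic learner is then fitted to those labels. The sketch below illustrates only that generic recipe, using a decision tree as the symbolic inducer; it is not the DEDEC, RULEX or RULENEG implementation, and the toy data, the scikit-learn models and all parameter settings are arbitrary assumptions. DEDEC's additional step of ranking inputs by analysing the trained weight vectors is deliberately omitted.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Toy binary problem: class 1 iff x0 > 0.5 and x1 < 0.5.
X_train = rng.random((300, 3))
y_train = ((X_train[:, 0] > 0.5) & (X_train[:, 1] < 0.5)).astype(int)

# The trained network whose behaviour we want to describe symbolically.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Pedagogical step: query the network on newly sampled inputs (possibly
# beyond the training set) and keep only the network's own predictions.
X_query = rng.random((2000, 3))
y_query = net.predict(X_query)

# Symbolic induction step: fit a shallow decision tree to the network's
# answers and read it off as a rule set.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_query, y_query)
print(export_text(tree, feature_names=["x0", "x1", "x2"]))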
******************************************************************************

These papers are available via anonymous ftp from ftp.fit.qut.edu.au in the directory /pub/NRC/tr/ps

From fisher at tweed.cse.ogi.edu Thu May 30 09:50:46 1996
From: fisher at tweed.cse.ogi.edu (Therese Fisher)
Date: Thu, 30 May 1996 14:50:46 +0100
Subject: New Computational Finance Program
Message-ID:

Computational Finance at the Oregon Graduate Institute of Science & Technology (OGI)
A Concentration in the MS Programs of Computer Science & Engineering (CSE) and Electrical Engineering & Applied Physics (EEAP)
----------------------------------------------------------------------------
20000 NW Walker Road, PO Box 91000, Portland, OR 97291-1000
----------------------------------------------------------------------------

Computational Finance at OGI is a 12-month intensive program leading to a Master of Science degree in Computer Science and Engineering (CSE track) or in Electrical Engineering & Applied Physics (EE track). The program features:

* A 12-month intensive program to train scientists and engineers for doing state-of-the-art quantitative or information systems work in finance.
* Provide an attractive alternative to the standard 2-year MBA for technically sophisticated students.
* Provide a solid foundation in finance. Cover three semesters of MBA-level finance in three quarters, and go beyond that.
* Provide a solid foundation in relevant techniques from CS and EE for modeling financial markets and developing investment analysis, trading, and risk management systems.
* Give CS/EE graduates the necessary finance background to work as information system specialists in major financial firms.
* Emphasize state-of-the-art techniques in neural networks, adaptive systems, signal processing, and data modeling.
* Provide state-of-the-art computing facilities for doing course assignments using live and historical market data provided by Dow Jones Telerate.
* Provide students an opportunity to do significant projects using extensive market data resources and state-of-the-art analysis packages, thereby making them more attractive to employers.
* Through their course work and projects, students will develop significant expertise in using and programming important analysis packages, such as Mathematica, Matlab, SPlus, and Expo.

----------------------------------------------------------------------------

Major Components of Program: The curriculum includes 4 quarters with courses structured within the standard CSE/EEAP framework, with 5 courses in the finance specialty area, 7 or 8 core courses within the CSE or EEAP departments, and 3 electives. Students will enroll in either the CSE (CSE track) or EEAP (EE track) MS programs.

----------------------------------------------------------------------------
Admission Requirements & Contact Information
----------------------------------------------------------------------------

Admission requirements are the same as the general requirements of the institution. GRE scores are required for the 12-month concentration in Computational Finance; however, they may be waived in special circumstances. A candidate must hold a bachelor's degree in computer science, engineering, mathematics, statistics, one of the biological or physical sciences, finance, or one of the quantitative social sciences.
For more information, contact Computational Finance Betty Shannon, Academic Coordinator Computer Science and Engineering Department Oregon Graduate Institute of Science and Technology P.O.Box 91000 Portland, OR 97291-1000 E-mail: academic at cse.ogi.edu Phone: (503) 690-1255 or E-mail: CompFin at cse.ogi.edu WWW: http://www.cse.ogi.edu/CompFin/ From ATAXR at asuvm.inre.asu.edu Thu May 30 18:44:10 1996 From: ATAXR at asuvm.inre.asu.edu (Asim Roy) Date: Thu, 30 May 1996 15:44:10 -0700 (MST) Subject: Connectionist Learning - Some New Ideas/Questions Message-ID: <01I5BM4AALLU8Y8W5E@asu.edu> (This is for posting to your mailing list.) This is an attempt to respond to some thoughts on one particular aspect of our learning theory - the one that requires connectionist/neural net algorithms to make an explicit "attempt" to build the smallest possible net (generalize, that is). One school of thought says that we should not attempt to build the smallest possible net because some extra neurons in the net (and their extra connections) provide the benefits of fault tolerance and reliability. And since the brain has access to billions of neurons, it does not really need to worry about a real resource constraint - it is practically an unlimited resource. (It is a fact of life, however, that at some age we do have difficulty memorizing and remembering things and learning- we perhaps run out of space (neurons) like a storage device on a computer. Even though billions of neurons is a large number, we must be using most of it at some age. So it is indeed a finite resource and some of it appears to be reused, like we reuse space on our storage devices. For memorization, for example, it is possible that the brain selectively erases some old memories to store some new ones. So a finite capacity system is a sensible view of the brain.) Another argument in favor of not trying to generalize is that by not worrying about attempting to create the smallest possible net, the connectionist algorithms are easier to develop and less complex. I hope researchers will come forward with other arguments in favor of not attempting to create the smallest possible net or to generalize. There is one main problem with the argument that adding lots of extra neurons to a net buys reliability and fault tolerance. First, we run the severe risk of "learning nothing" if we don't attempt to generalize. With lots of neurons available to a net, we would simply overfit the net to the problem data. (Try it next time on your back prop net. Add 10 or 100 times the number of hidden nodes you need and observe the results.) That is all we would achieve. Without good generalization, we may have a fault tolerant and reliable net, but it may be "useless" for all practical purposes because it may have "learnt nothing". Generalization is the fundamental part of learning - it perhaps should be the first learning criteria for our algorithms. We can't overlook or skip that part. If an algorithm doesn't attempt to generalize, it doesn't attempt to learn. It is as simple as that. So generalization needs to be our first priority and fault tolerance comes later. First we must "learn" something, then make it fault tolerant and reliable. Here is a practical viewpoint for our algorithms. Even though neurons are almost "unlimited" and free of cost to our brain, from a practical engineering stand point, "silicon" neurons are not so cheap. 
So our algorithms definitely need to be cost conscious and try to build the smallest possible net; they cannot be wasteful in their use of expensive "silicon" neurons. Once we obtain good generalization on a problem, fault tolerance can be achieved in many other ways. It would not hurt to examine the well established theory of reliability for some neat ideas. A few backup systems might be a more cost effective way to buy reliability than throwing in lots of extra silicon in a single system which may buy us nothing (it "learns nothing"). From controlling nuclear power plants with backup computer systems to adding extra tires in our trucks and buses, the backup idea works quite well. It is possible that "backup" is also what is used in our brains. We need to find out. "Redundancy" may be in the form of backup systems. "Repair" is another good idea used in our everyday lives for not so critical systems. Is fault tolerance and reliability sometimes achieved in the brain through the process of "repair"? Patients do recover memory and other brain functions after a stroke. Is that repair work by the biological system? It is a fact that biological systems are good at repairing things (look at simple things like cuts and bruises). We perhaps need to look closer at our biological systems and facts and get real good clues to how it works. Let us not jump to conclusions so quickly. Let us argue and debate with our facts. We will do our science a good service and be able to make real progress. I would welcome more thoughts and debate on this issue. I have included all of the previous responses on this particular issue for easy reference by the readers. I have also appended our earlier note on our learning theory. Perhaps more researchers will come forward with facts and ideas and enlighten all of us on this crucial question. ******************************************** On May 16 Kevin Cherkauer wrote: "In a recent thought-provoking posting to the connectionist list, Asim Roy said: >E. Generalization in Learning: The method must be able to >generalize reasonably well so that only a small amount of network >resources is used. That is, it must try to design the smallest possible >net, although it might not be able to do so every time. This must be >an explicit part of the algorithm. This property is based on the >notion that the brain could not be wasteful of its limited resources, >so it must be trying to design the smallest possible net for every >task. I disagree with this point. According to Hertz, Krogh, and Palmer (1991, p. 2), the human brain contains about 10^11 neurons. (They also state on p. 3 that "the axon of a typical neuron makes a few thousand synapses with other neurons," so we're looking at on the order of 10^14 "connections" in the brain.) Note that a period of 100 years contains only about 3x10^9 seconds. Thus, if you lived 100 years and learned continuously at a constant rate every second of your life, your brain would be at liberty to "use up" the capacity of about 30 neurons (and 30,000 connections) per second. I would guess this is a very conservative bound, because most of us probably spend quite a bit of time where we aren't learning at such a furious rate. But even using this conservative bound, I calculate that I'm allowed to use up about 2.7x10^6 neurons (and 2.7x10^9 connections) today. I'll try not to spend them all in one place. :-) Dr. 
Roy's suggestion that the brain must try "to design the smallest possible net for every task" because "the brain could not be wasteful of its limited resources" is unlikely, in my opinion. It seems to me that the brain has rather an abundance of neurons. On the other hand, finding optimal solutions to many interesting "real-world" problems is often very hard computationally. I am not a complexity theorist, but I will hazard to suggest that a constraint on neural systems to be optimal or near-optimal in their space usage is probably both impossible to realize and, in fact, unnecessary. Wild speculation: the brain may have so many neurons precisely so that it can afford to be suboptimal in its storage usage in order to avoid computational time intractability.

References
Hertz, J.; Krogh, A.; & Palmer, R.G. 1991. Introduction to the Theory of Neural Computation. Redwood City, CA: Addison-Wesley."

**************************************************

On May 15 Richard Kenyon wrote on the subject of generalization:

"The brain probably accepts some form of redundancy (waste). I agree that the brain is one hell of an optimisation machine. Intelligence, whatever task it may be applied to, is (again imho) one long optimisation process. Generalisation arises (even emerges or is a side effect) as a result of ongoing optimisation, conglomeration, reprocessing, etc. This is again very important i agree, but i think (i do anyway) we in the NN community are aware of this, as with much of the above. I thought that apart from point A we were doing all of this already, although to have it explicitly published is very valuable."

*****************************************

On May 16 Lokendra Shastri replied to Kevin Cherkauer:

"There is another way to look at the numbers. The retina provides 10^6 inputs to the brain every 200 msec! A simple n^2 algorithm to process this input would require more neurons than we have in our brain. We can understand (or at least process) a potentially unbounded number of sentences --- here is one: "the grand canyon walked past the banana". I could have said any one of a gazillion sentences at this point and you would have probably understood it. Even if we just count the overt symbolic knowledge we carry in our heads, we can enumerate about a million items. A coding scheme that consumed a 1000 neurons per item (which is not much) would soon run out of neurons. Remember that a large fraction of our neurons are already taken up by sensorimotor processes (vision itself consumes a fair fraction of the brain). For an argument on the tight constraints posed by the "limited" number of neurons vis-a-vis common sense knowledge, you may want to see: ``From simple associations to systematic reasoning'', L. Shastri and V. Ajjanagadde. In Behavioral and Brain Sciences Vol. 16, No. 3, 417--494, 1993. My home page has a URL to a postscript version. There was also a nice paper by Tsotsos in Behavioral and Brain Sciences on this topic from the perspective of visual processing. Also you might want to see the Feldman and Ballard 1982 paper in Cognitive Science."

***********************************************

On May 17 Steven Small replied to Kevin Cherkauer:

"I agree with this general idea, although I'm not sure that "computational time intractability" is necessarily the principal reason.
There are a lot of good reasons for redundancy, overlap, and space "suboptimality", not the least of which is the marvellous ability at recovery that the brain manifests after both small injuries and larger ones that give pause even to experienced neurologists." ************************************************* On May 17 Jonathan Stein replied to Steven Small and Kevin Cherkauer: "One needn't draw upon injuries to prove the point. One loses about 100,000 cortical neurons a day (about a percent of the original number every three years) under normal conditions. This loss is apparently not significant for brain function. This has been often called the strongest argument for distributed processing in the brain. Compare this ability with the fact that single conductor disconnection cause total system failure with high probability in conventional computers. Although certainly acknowledged by the pioneers of artificial neural network techniques, very few networks designed and trained by present techniques are anywhere near that robust. Studies carried out on the Hopfield model of associative memory DO show graceful degradation of memory capacity with synapse dilution under certain conditions (see eg. DJ Amit's book "Attractor Neural Networks"). Synapse pruning has been applied to trained feedforward networks (eg. LeCun's "Optimal Brain Damage") but requires retraining of the network." ****************************************** On May 18 Raj Rao replied to Kevin Cherkauer and Steven Small: " Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers." ******************************************** On May 19 Richard Long wrote: "There may be another reason for the brain to construct networks that are 'minimal' having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this." ************************************************** On May 20 Juergen Schmidhuber replies to Richard Long: "Apparently, Kolmogorov was the first to show the impossibility of finding the minimal algorithm in the general case (but Solomonoff also mentions it in his early work). The reason is the halting problem, of course - you don't know the runtime of the minimal algorithm. For all practical applications, runtime has to be taken into account. Interestingly, there is an ``optimal'' way of doing this, namely Levin's universal search algorithm, which tests solution candidates in order of their Levin complexities: L. A. Levin. 
Universal sequential search problems, Problems of Information Transmission 9:3,265-266,1973. For finding Occam's razor neural networks with minimal Levin complexity, see J. Schmidhuber: Discovering solutions with low Kolmogorov complexity and high generalization capability. In A.Prieditis and S.Russell, editors, Machine Learning: Proceedings of the 12th International Conference, 488--496. Morgan Kaufmann Publishers, San Francisco, CA, 1995. For Occam's razor solutions of non-Markovian reinforcement learning tasks, see M. Wiering and J. Schmidhuber: Solving POMDPs using Levin search and EIRA.In Machine Learning: Proceedings of the 13th International Conference. Morgan Kaufmann Publishers, San Francisco, CA, 1996, to appear." ********************************************** On May 20 Sydney Lamb replied to Jonathan Stein and others: " There seems to be some differing information coming from different sources. The way I heard it, the typical person has lost only about 3% of the original total of cortical neurons after about 70 or 80 years. As for the argument about distributed processing, two comments: (1) there are different kinds of distributive processing; one of them also uses strict localization of points of convergence for distributed subnetworks of information (cf. A. Damasio 1989 --- several papers that year). (2) If the brain is like other biological systems, the neurons being lost are probably most the ones not being used --- ones that have been remaining latent and available to assume some function, but never called upon. Hence what you get with old age is not so much loss of information as loss of ability to learn new things --- varying in amount, of course, from one individual to the next." ***************************************** On May 20 Mark Johnson replies to Raj Rao: "From my reading of the recent literature massive postnatal cell loss in the human cortex is a myth. There is postnatal cortical cell death in rodents, but in primates (including humans) there is only (i) a decreased density of cell packing, and (ii) massive (up to 50%) synapse loss. (The decreased density of cell packing was apparently misinterpreted as cell loss in the past). Of course, there are pathological cases, such as Alzheimers, in which there is cell loss. I have written a review of human postnatal brain development which I can send out on request." ************************************************** *************************************************** APPENDIX We have recently published a set of principles for learning in neural networks/connectionist models that is different from classical connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE Transactions on Neural Networks, to appear; see references below). Below is a brief summary of the new learning theory and why we think classical connectionist learning, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of training examples for learning), is not brain-like at all. Since vigorous and open debate is very healthy for a scientific field, we invite comments for and against our ideas from all sides. "A New Theory for Learning in Connectionist Models" We believe that a good rigorous theory for artificial neural networks/connectionist models should include learning methods that perform the following tasks or adhere to the following criteria: A. 
Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem, since, in general, it is a task performed by the brain. A pre-designed net should not be provided to the method as part of its external input, since it never is an external input to the brain. From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - a system that is expected to learn "on its own" without any external design assistance. B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties. Some people might argue that ordinary brains, and particularly those with learning disabilities, do exhibit such problems and that these learning requirements are the attributes only of a "super" brain. The goal of neuroengineers and neuroscientists is to design and build learning systems that are robust, reliable and powerful. They have no interest in creating weak and problematic learning devices that need constant attention and intervention. C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do. For example, one which learns from only 10 examples learns faster than one which requires a 100 or a 1000 examples. We have shown that on-line learning (see references below), when not allowed to store training examples in memory, can be extremely slow in learning - that is, would require many more examples to learn a given task compared to methods that use memory to remember training examples. It is not desirable that a neural network/connectionist learning system be similar in characteristics to learners characterized by such sayings as "Told him a million times and he still doesn't understand." On-line learning systems must learn rapidly from only a few examples. D. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and Papert[1988]). It must be able to both design and train an appropriate net in polynomial time. That is, given P examples, the learning time (i.e. both design and training time) should be a polynomial function of P. This, again, is a critical computational property from a neuroengineering and neuroscience point of view. This property has its origins in the belief that biological systems (insects, birds for example) could not be solving NP-hard problems, especially when efficient, polynomial time learning methods can conceivably be designed and developed. E. Generalization in Learning: The method must be able to generalize reasonably well so that only a small amount of network resources is used. That is, it must try to design the smallest possible net, although it might not be able to do so every time. This must be an explicit part of the algorithm. This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task. General Comments This theory defines algorithmic characteristics that are obviously much more brain-like than those of classical connectionist theory, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of actual training examples for learning). 
General Comments

This theory defines algorithmic characteristics that are obviously much more brain-like than those of classical connectionist theory, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of actual training examples for learning). Judging by the above characteristics, classical connectionist learning is not very powerful or robust. First of all, it does not even address the issue of network design, a task that should be central to any neural network/connectionist learning theory. It is also plagued by efficiency problems (lack of polynomial-time complexity, the need for an excessive number of training examples) and robustness problems (local minima, oscillation, catastrophic forgetting, lost memories), problems that stem partly from its attempt to learn without using memory. Classical connectionist learning, therefore, is not very brain-like at all.

As far as I know, there is no biological evidence for any of the premises of classical connectionist learning. Without having to reach into biology, simple common-sense arguments can show that the ideas of local learning, memoryless learning and predefined nets are impractical even for the brain! For example, the idea of local learning requires a predefined network. Classical connectionist learning forgot to ask a very fundamental question - who designs the net for the brain? The answer is very simple: who else but the brain itself! So, who should construct the net for a neural net algorithm? The answer again is very simple: who else but the algorithm itself! (By the way, this is not a criticism of constructive algorithms that do design nets.) Under classical connectionist learning, a net has to be constructed (by someone, somehow - but not by the algorithm!) prior to having seen a single training example! I cannot imagine any system, biological or otherwise, being able to construct a net with zero information about the problem to be solved and with no knowledge of the complexity of the problem. (Again, this is not a criticism of constructive algorithms.)

A good test for a so-called "brain-like" algorithm is to imagine it actually being part of a human brain. Then examine the learning behavior of the algorithm and compare it with that of the human. For example, pose the following question: if an algorithm like back propagation is "planted" in the brain, how will it behave? Will it be similar to human behavior in every way? Consider the following simple exchange when the back-propagation algorithm is "fitted" to a human brain. You give it a few learning examples for a simple problem and after a while this "back-prop-fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again." And you ask: "Which examples should I go over again?" And this "back-prop-fitted" brain replies: "You need to go over all of them. I don't remember anything you told me." So you go over the teaching examples again. And let's say it gets stuck in a local minimum again and, as usual, does not remember any of the past examples. So you provide the teaching examples again, and this process is repeated a few times until it learns properly.

The obvious questions are as follows: Is "not remembering" any of the learning examples a brain-like phenomenon? Are the interactions with this so-called "brain-like" algorithm similar to what one would actually encounter with a human in a similar situation? If the interactions are not similar, then the algorithm is not brain-like. A so-called brain-like algorithm's interactions with the external world/teacher cannot be different from those of a human.
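The point about memory and learning speed is easy to demonstrate with a toy experiment (Python/NumPy; a sketch only, and not the experiments reported in the references): the same simple error-correction rule applied once, with each example discarded after use, versus applied to a stored training set that can be revisited. On a linearly separable problem, the memory-based version typically drives training error to zero without ever being fed a fresh example, while the one-pass, no-memory version usually does not.

    import numpy as np

    rng = np.random.default_rng(0)

    # A small linearly separable problem: labels come from a hidden weight vector.
    X = rng.normal(size=(200, 5))
    y = np.sign(X @ rng.normal(size=5))

    def accuracy(w):
        return float(np.mean(np.sign(X @ w) == y))

    def one_pass_no_memory():
        # Each example is seen exactly once and then forgotten.
        w = np.zeros(5)
        for xi, yi in zip(X, y):
            if np.sign(xi @ w) != yi:
                w += yi * xi
        return w

    def stored_examples(max_epochs=100):
        # The same rule, but the stored training set is revisited until learned.
        w = np.zeros(5)
        for _ in range(max_epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if np.sign(xi @ w) != yi:
                    w += yi * xi
                    mistakes += 1
            if mistakes == 0:
                break
        return w

    print("one pass, no memory:    ", accuracy(one_pass_no_memory()))
    print("stored examples, reused:", accuracy(stored_examples()))

The exact numbers depend on the random draw; the point is only the contrast between forgetting examples and being allowed to revisit them.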
In the context of the back propagation example above, it should be noted that storing/remembering relevant facts and examples is very much a natural part of the human learning process. Without the ability to store and recall facts/information and to discuss, compare and argue about them, our ability to learn would be in serious jeopardy. Information storage facilitates mental comparison of facts and information and is an integral part of rapid and efficient learning. It is not biologically justified when "brain-like" algorithms disallow the use of memory to store relevant information.

Another typical phenomenon of classical connectionist learning is the "external tweaking" of algorithms. How many times do we "externally tweak" the brain (e.g. adjust the net, try a different parameter setting) for it to learn? Interactions with a brain-like algorithm have to be brain-like in all respects.

The learning scheme postulated above does not specify how learning is to take place - that is, whether or not memory is to be used to store training examples for learning, or whether learning is to proceed through local learning at each node in the net or through some global mechanism. It merely defines broad computational characteristics and tasks (i.e. fundamental learning principles) that are brain-like and that all neural network/connectionist algorithms should follow. But there is complete freedom otherwise in designing the algorithms themselves. We have shown that robust, reliable learning algorithms can indeed be developed that satisfy these learning principles (see references below). Many constructive algorithms satisfy many of the learning principles defined above. They can, perhaps, be modified to satisfy all of them.

The learning theory above defines computational and learning characteristics that have always been desired by the neural network/connectionist field. It is difficult to argue that these characteristics are not "desirable," especially for self-learning, self-contained systems. For neuroscientists and neuroengineers, it should open the door to the development of the brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. It essentially tries to redefine the nature of algorithms considered to be brain-like. And it defines the foundations for developing truly self-learning systems - ones that would not require constant intervention and tweaking by external agents (human experts) for them to learn.

It is perhaps time to reexamine the foundations of the neural network/connectionist field. This mailing list/newsletter provides an excellent opportunity for participation by all concerned throughout the world. I am looking forward to a lively debate on these matters. That is how a scientific field makes real progress.

Asim Roy
Arizona State University
Tempe, Arizona 85287-3606, USA
Email: ataxr at asuvm.inre.asu.edu

References

1. Roy, A., Govil, S. & Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear.
2. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202.
3. Roy, A., Kim, L.S. & Mukhopadhyay, S. 1993. A Polynomial Time Algorithm for the Construction and Training of a Class of Multilayer Perceptrons. Neural Networks, Vol. 6, No. 4, pp. 535-545.
4. Mukhopadhyay, S., Roy, A., Kim, L.S. & Govil, S. 1993. A Polynomial Time Algorithm for Generating Neural Networks for Pattern Classification - its Stability Properties and Some Test Results. Neural Computation, Vol. 5, No. 2, pp. 225-238.
From jbower at bbb.caltech.edu Thu May 30 14:13:32 1996
From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu)
Date: Thu, 30 May 1996 10:13:32 -0800
Subject: CNS*96 (Computational Neuroscience Meeting)
Message-ID:

Call for Registration
CNS*96
Cambridge, Massachusetts
July 14-17, 1996

Registration is now open for this year's Computational Neuroscience meeting (CNS*96). This is the fifth in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. The meeting will take place at the Cambridge Center Marriott Hotel and includes plenary, contributed, and poster sessions. In addition, two half days will be devoted to informal workshops on a wide range of subjects. The first session starts at 9 am on Sunday, July 14th, and the last session ends at 5 pm on Wednesday, July 17th. Day care will be available for children.

Overall Agenda

This year's meeting is anticipated to be the best yet in this series. Submitted papers increased by more than 80% this year, with representation from many, if not most, of the major institutions involved in computational neuroscience. All papers submitted to the meeting were peer reviewed, resulting in 230 papers to be presented in either oral or poster form. These papers represent contributions by both experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. Experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation are all well represented on the agenda. Full information on the agenda is available on the meeting's web page (http://www.bbb.caltech.edu/cns96/cns96.html).

Invited Speakers

Invited speakers for this year's meeting include: Eve Marder (Brandeis University), Miguel Nicolelis (Duke University Medical Center), Joseph J. Atick (Rockefeller University), Ron Calabrese (Emory University), John S. Kauer (Tufts Medical School), Ken Nakamura (Harvard University), Howard Eichenbaum (State University of New York).

Poster Presentations

More than 200 poster presentations on a wide variety of topics related to computational neuroscience will be presented at this year's meeting.

Oral Presentations

Jeffrey B. Colombe (University of Chicago), Philip S. Ulinski: Functional Organization of Cortical Microcircuits: II. Anatomical Organization of Feedforward and Feedback Pathways
D. C. Somers (MIT), Emanuel V. Todorov, Athanassios G. Siapas, and Mriganka Sur: A Local Circuit Integration Approach to Understanding Visual Cortical Receptive Fields
Emilio Salinas (Brandeis University), L.F. Abbott: Multiplicative Cortical Responses and Input Selection Based on Recurrent Synaptic Connections
Leslie C. Osborne (UC Berkeley), John P. Miller: The Filtering of Sensory Information by a Mechanoreceptive Array
James A. Mazer (MIT): The Integration of Parallel Processing Streams in the Sound Localization System of the Barn Owl
Wulfram Gerstner (Institute for Theoretical Physics), Richard Kempter, J. Leo van Hemmen, and Hermann Wagner: A Developmental Learning Rule for Coincidence Tuning in the Barn Owl Auditory System
Allan Gottschalk (University of Pennsylvania Hospital): Information Based Limits on Synaptic Growth in Hebbian Models
S. P. Strong (NEC Research Institute), Ronald Koberle, Rob R. de Ruyter van Steveninck, and William Bialek: Entropy and Information in Neural Spike Trains
Hans Liljenström (Royal Institute of Technology), Peter Arhem: Investigating Amplifying and Controlling Mechanisms for Random Events in Neural Systems
David Terman (Ohio State University), Amit Bose, and Nancy Kopell: Functional Reorganization in Thalamocortical Networks: Transition Between Spindling and Delta Sleep Rhythms
Angel Alonso (McGill University), Xiao-Jing Wang, Michael M. Guevara, and Brian Craft: A Comparative Model Study of Neuronal Oscillations in Septal GABAergic Cells and Entorhinal Cortex Stellate Cells: Contributors to the Theta and Gamma Rhythms
Gene Wallenstein (Harvard University), Michael E. Hasselmo: Bursting and Oscillations in a Biophysical Model of Hippocampal Region CA3: Implications for Associative Memory and Epileptiform Activity
Mayank R. Mehta (University of Arizona), Bruce L. McNaughton: Rapid Changes in Hippocampal Population Code During Behavior: A Case for Hebbian Learning in Vivo
Karl Kilborn (University of California, Irvine), Don Kubota, and Richard Granger: Parameters of LTP Induction Modulate Network Categorization Behavior
Peter Dayan (MIT), Satinder Pal Singh: Long Term Potentiation, Navigation, & Dynamic Programming
Chantal E. Stern (Harvard Medical School), Michael E. Hasselmo: Functional Magnetic Resonance Imaging and Computational Modeling: An Integrated Study of Hippocampal Function
Rajesh P. N. Rao (University of Rochester), Dana H. Ballard: Cortico-Cortical Dynamics and Learning During Visual Recognition: A Computational Model
R.Y. Reis (AT&T Bell Laboratories), Daniel D. Lee, H.S. Seung, B.I. Shraiman, and D.W. Tank: Nonlinear Network Models of the Oculomotor Integrator
Yair Weiss (MIT), Edward H. Adelson: Adaptive Robust Windows: A Model for the Selective Integration of Motion Signals in Human Vision
Emanuel V. Todorov (MIT), Athanassios G. Siapas, David C. Somers, and Sacha B. Nelson: Modeling Visual Cortical Contrast Adaptation Effects
Dieter Jaeger (Caltech), James M. Bower: Dual in Vitro Whole Cell Recordings from Cerebellar Purkinje Cells: Artificial Synaptic Input Using Dynamic Current Clamping
Xiao-Jing Wang (Brandeis University): Calcium Control of Time-Dependent Input-Output Computation in Cortical Pyramidal Neurons
Alexander Protopapas (Caltech), James M. Bower: Piriform Pyramidal Cell Response to Physiologically Plausible Spatio-Temporal Patterns of Synaptic Input
Ole Jensen (Brandeis University), Marco A. P. Idiart, and John E. Lisman: A Model for Physiologically Realistic Synaptic Encoding and Retrieval of Sequence Information
S. B. Nelson (Brandeis University), J.A. Varela, K. Sen, and L.F. Abbott: Synaptic Decoding of Visual Cortical EPSCs Reveals a Potential Mechanism for Contrast Adaptation
Nicolas G. Hatsopoulos (Brown University), Jerome N. Sanes, and John P. Donoghue: Dynamic Correlations in Unit Firing of Motor Cortical Neurons Related to Movement Preparation and Action
Adam N. Elga (Princeton University), A. David Redish, and David S. Touretzky: A Model of the Rodent Head Direction System
Dianne Pawluk (Harvard University), Robert Howe: A Holistic Model of Human Touch

************************************************************************
REGISTRATION INFORMATION FOR THE FIFTH ANNUAL
COMPUTATIONAL NEUROSCIENCE MEETING CNS*96
JULY 14 - JULY 17, 1996
BOSTON, MASSACHUSETTS
************************************************************************

LOCATION: The meeting will take place at the Boston Marriott in Cambridge, Massachusetts.

MEETING ACCOMMODATIONS: Accommodations for the meeting have been arranged at the Boston Marriott. We have reserved a block of rooms in the conference hotel for all attendees at the special rate of $126 per night, single or double occupancy (that is, two people sharing a room would split the $126). A fixed number of rooms has been reserved for students at the rate of $99 per night, single or double occupancy (yes, that means about $50 a night per student!). These student rooms are on a first-come, first-served basis, so we recommend acting quickly to reserve them. Also, for some student registrants housing will be available at Harvard University: thirty single rooms are available on a first-come, first-served basis. Please look at your orange colored sheets for more information.

Registering for the meeting WILL NOT result in an automatic room reservation. Instead, you must make your own reservations by mailing or faxing the enclosed registration sheet to the hotel, or by contacting:

Boston Marriott Cambridge
ATTENTION: Reservations Dept.
Two Cambridge Center
Cambridge, Massachusetts 02142
(617) 494-6600
Toll Free: (800) 228-9290
Fax No.: (617) 494-0036

NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 12, 1996. When making reservations by phone, make sure to indicate that you are registering for the CNS*96 meeting. Students will be asked to verify their status on check-in with a student ID or other documentation.

MEETING REGISTRATION FEES:

Registration received on or before June 12, 1996:
  Student: $95 (One Banquet Ticket Included)
  Regular: $225 (One Banquet Ticket Included)

Meeting registration after June 12, 1996:
  Student: $125 (One Banquet Ticket Included)
  Regular: $250 (One Banquet Ticket Included)

BANQUET: Registration for the meeting includes a single ticket to the annual CNS Banquet, to be held this year at the Museum of Science on Tuesday evening, July 16th. Additional banquet tickets can be purchased for $35 per person.

----------------------------------------------------------------------------
CNS*96 REGISTRATION FORM

Last Name:
First Name:
Title:
Student___  Graduate Student___  Post Doc___  Professor___  Committee Member___  Other___
Organization:
Address:
City:               State:          Zip:            Country:
Telephone:
Email Address:

REGISTRATION FEES:
Technical Program -- July 14 - July 17, 1996
  Regular $225 ($250 after June 12th) - One Banquet Ticket Included
  Student $95 ($125 after June 12th) - One Banquet Ticket Included
  Banquet $35 (Additional Banquet Tickets at $35.00 per Ticket) - July 16, 1996
Total Payment: $

Please Indicate Method of Payment:
Check or Money Order
  * Payable in U.S. Dollars to CNS*96 - Caltech
  * Please make sure to indicate CNS*96 and YOUR name on all money transfers.
Charge my card:  Visa   Mastercard   American Express
Number:
Expiration Date:
Name of Cardholder:
Signature as it appears on card (for mailed-in applications):
Date:

ADDITIONAL QUESTIONS:
Previously Attended: CNS*92___  CNS*93___  CNS*94___  CNS*95___
Did you submit an abstract and summary? ( ) Yes ( ) No
Title:
Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note:
Some grants to cover partial travel expenses may become available. Do you wish further information? ( ) Yes ( ) No
(Please Note: Travel funds will be available for students and postdoctoral fellows presenting papers at the meeting.)

*******PLEASE FAX OR MAIL REGISTRATION FORM TO:
Caltech, Division of Biology 216-76, Pasadena, CA 91125
Attn: Judy Macias
Fax Number: (818) 795-2088

ADDITIONAL INFORMATION can be obtained by:

Using our on-line WWW information and registration server at the URL:
  http://www.bbb.caltech.edu/cns96/cns96.html

FTPing to our ftp site:
  yourhost% ftp ftp.bbb.caltech.edu
  Name: ftp
  Password: yourname at yourhost.yoursite.yourdomain
  ftp> cd pub/cns96
  ftp> ls
  ftp> get filename

Sending email to: cns96 at smaug.bbb.caltech.edu

***************************************
James M. Bower
Division of Biology
Mail code: 216-76
Caltech
Pasadena, CA 91125
(818) 395-6817
(818) 795-2088 FAX

NCSA Mosaic addresses for:
  laboratory: http://www.bbb.caltech.edu/bowerlab
  GENESIS: http://www.bbb.caltech.edu/GENESIS
  science education reform: http://www.caltech.edu/~capsi

From leo at stat.Berkeley.EDU Thu May 30 11:33:46 1996
From: leo at stat.Berkeley.EDU (Leo Breiman)
Date: Thu, 30 May 1996 08:33:46 -0700
Subject: paper available: Bias, Variance and Arcing Classifiers
Message-ID:

A non-text attachment was scrubbed...
Name: not available
Type: multipart/mixed
Size: 1382 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/cc6f4c35/attachment-0001.bin