From Connectionists-Request at cs.cmu.edu Wed Sep 1 00:05:15 1993
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Wed, 01 Sep 93 00:05:15 -0400
Subject: Bi-monthly Reminder
Message-ID: <24131.746856315@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated January 4, 1993.

This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers.

Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list.

Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

Happy hacking.

-- Dave Touretzky & David Redish

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and tech reports.

- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:

  WRONG WAY: "Can someone please mail me all references to cascade correlation?"

  RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help.
A phone call to the appropriate institution may sometimes be simpler and faster.

- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

- Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

  arch.yymm

where yymm stands for the obvious thing. Thus the earliest available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8).
2. Login as user anonymous with password your username.
3. 'cd' directly to one of the following directories:
   /usr/connect/connectionists/archives
   /usr/connect/connectionists/bibliographies
4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory.
5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52)
pub/neuroprose directory

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu). Researchers may place electronic versions of their preprints in this directory, announce their availability, and other interested researchers can rapidly retrieve and print the postscript files. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z, where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix.
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" line at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL and other mailing systems will also be posted as they are debugged and become available. At any time, for any reason, the author may request that their paper be updated or removed.

For further questions contact:

Jordan Pollack
Assistant Professor
CIS Dept/OSU
Laboratory for AI Research
2036 Neil Ave
Columbus, OH 43210
Email: pollack at cis.ohio-state.edu
Phone: (614) 292-4890

APPENDIX: Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu.
220 cheops.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. Here is the INDEX entry:

rosenblatt.reborn.ps.Z rosenblatt at gvax.cs.cornell.edu 17 pages.
Boastful statements by the deceased leader of the neurocomputing field.

Let me know when it is in place so I can announce it to Connectionists at cmu.
Frank
^D

AFTER FRANK RECEIVES THE GO-AHEAD, AND HAS A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

gvax> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/rosenblatt.reborn.ps.Z

The file rosenblatt.reborn.ps.Z is now available for copying from the Neuroprose repository:

Born Again Perceptrons (17 pages)
Frank Rosenblatt
Cornell University

ABSTRACT: In this unpublished paper, I review the historical facts regarding my death at sea: Was it an accident or suicide? Moreover, I look over the past 23 years of work and find that I was right in my initial overblown assessments of the field of neural networks.

~r.signature
^D

------------------------------------------------------------------------

How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "nn-bench-request at cs.cmu.edu".

From BRUNAK at nbivax.nbi.dk Wed Sep 1 07:44:20 1993
From: BRUNAK at nbivax.nbi.dk (BRUNAK@nbivax.nbi.dk)
Date: Wed, 01 Sep 1993 13:44:20 +0200
Subject: IJNS vol. 4 issues 1 and 2
Message-ID: <01H2FPM8ZAJ68X3SKW@nbivax.nbi.dk>

Begin Message:
-----------------------------------------------------------------------

INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject, which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh, undogmatic attitude towards this multidisciplinary field, with the aim of being a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities.

ISSN: 0129-0657 (IJNS)

----------------------------------

Contents of Volume 4, issue number 1 (1993):

1. O. Ekeberg: Response Properties of a Population of Neurons.
2. M. Moeller: Supervised Learning on Large Redundant Training Sets.
3. K. Urahama & S.-I. Ueno: A Gradient System Solution to Potts Mean Field Equations and Its Electronic Implementation.
4. A. Romeo: Thermodynamic Transitions in Networks for Letter Distinction.
5. C. H.-A. Ting: Magnocellular Pathway for Rotation Invariant Neocognition.
6. J. Hao, J. Vandewalle & S. Tan: A Rule-Based Neural Controller for Inverted Pendulum System.
7. V. Majernik & A. Kral: Sharpening of Input Excitation Curves in Lateral Inhibition.
8. Y. Deville: Digital Neural Networks for High-Speed Divisions and Root Extractions.

----------------------------------

Contents of Volume 4, issue number 2 (1993):

1. A. A. Handzel, T. Grossman, E. Domany, S. Tarem & E. Duchovni: A Neural Network Classifier in Experimental Particle Physics.
2. C. F. Miles & D. Rogers: A Biologically Motivated Associative Memory Architecture.
3. B. Cartling: Control of the Complexity of Associative Memory Dynamics by Neuronal Adaptation.
4. N. Shamir, D. Saad & E. Marom: Neural Net Pruning Based on Functional Behavior of Neurons.
5. J. Gorodkin, L. K. Hansen, A. Krogh, C. Svarer & O. Winther: A Quantitative Study of Pruning by Optimal Brain Damage.
6. S. G. Romaniuk: Trans-Dimensional Learning.
7. R. Newman: A Function Approximation Algorithm Using Sequential Composition.

----------------------------------

Editorial board:

B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-Charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge)
D. Stork (Stanford) (Book Review Editor)

Associate editors:

J. Alspector (Bellcore)
B. Baird (Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
R. Eckmiller (University of Dusseldorf)
J. L. Elman (University of California, San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
D. Hammerstrom (Oregon Graduate Institute)
D. Horn (Tel Aviv University)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Landau Institute for Theoretical Physics)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Princeton University)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
J. Moody (Yale, USA)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
E. Oja (Lappeenranta University of Technology, Finland)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Aarhus)
S. A. Solla (AT&T Bell Labs)
M. A. Virasoro (University of Rome)
D. J. Wallace (University of Edinburgh)
A. Weigend (Xerox PARC)
D. Zipser (University of California, San Diego)

----------------------------------

CALL FOR PAPERS

Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from

The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461

or

World Scientific Publishing Co. Inc.
Suite 1B, 1060 Main Street
River Edge, New Jersey 07661
USA
Telephone: (1)201-487-9655

or

World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone: (65)382-5663

From gary at cs.ucsd.edu Thu Sep 2 11:44:04 1993
From: gary at cs.ucsd.edu (Gary Cottrell)
Date: Thu, 2 Sep 93 08:44:04 -0700
Subject: Virtual Festschrift for Jellybean
Message-ID: <9309021544.AA04923@gremlin>

Dear Connectionists,

On a sad day this spring, my longtime collaborator, friend, and inspiration for the field of Dognitive Science, Jellybean, died at the ripe old age of 16. His age (for a golden retriever/samoyed cross) at his death is a testament to modern veterinary medicine. Alas, we still must all go sometime.

The purpose of this message is to invite the humorists among us to contribute a piece to a collection I am editing of humor in Jellybean's memory. As you may know, a "festschrift" is a volume of articles presented as a tribute or memorial to an academic.
I have no plans to publish this except "virtually", through the auspices of the neuroprose archive. I already have several contributions that were privately solicited. This is a public solicitation for humor for this purpose. Your piece does not have to be in the "Dognitive Science" vein, but may be anything having to do with neural nets, Cognitive Science, or nearby fields. I reserve the editorial right to accept, edit, and/or reject any material submitted that I deem either inappropriate, too long (I am expecting pieces to be on the order of 1-8 pages), or simply not funny. Any editing will be done with the agreement of the author. Latex files are probably best. Remember, brevity is the mother of wit.

The deadline for submission will be Nov. 1, 1993. Email submissions only to gary at cs.ucsd.edu.

Thanks for your attention.

Gary Cottrell 619-534-6640 Reception: 619-534-6005 FAX: 619-534-7029
Computer Science and Engineering 0114
University of California San Diego
La Jolla, Ca. 92093
gary at cs.ucsd.edu (INTERNET)
gcottrell at ucsd.edu (BITNET, almost anything)
..!uunet!ucsd!gcottrell (UUCP)

From mozer at dendrite.cs.colorado.edu Mon Sep 6 22:21:07 1993
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Mon, 6 Sep 1993 20:21:07 -0600
Subject: NIPS*93 workshops
Message-ID: <199309070221.AA28415@neuron.cs.colorado.edu>

For the curious, a list of topics for the NIPS*93 post-conference workshops is attached. The workshops will be held in Vail, Colorado, on December 3 and 4, 1993. For further info concerning the individual workshops, please contact the workshop organizers, whose names and e-mail addresses are listed below. Abstracts are not available at present, but will be distributed prior to the workshops.

For NIPS conference and workshop registration info, please write to: NIPS*93 Registration / NIPS Foundation / PO Box 60035 / Pasadena, CA 91116-6035 USA

----------------
December 3, 1993
----------------

Complexity Issues in Neural Computation and Learning
  Vwani Roychowdhury & Kai-Yeung Siu, vwani at ecn.purdue.edu

Connectionism for Music and Audition
  Andreas Weigend & Dick Duda, weigend at cs.colorado.edu

Memory-based Methods for Regression and Classification
  Thomas Dietterich, tgd at cs.orst.edu

Neural Networks and Formal Grammars
  Simon Lucas, sml at essex.ac.uk

Neurobiology, Psychophysics, and Computational Models of Visual Attention
  Ernst Niebur & Bruno Olshausen, ernst at acquine.cns.caltech.edu

Robot Learning: Exploration and Continuous Domains
  David Cohn, cohn at psyche.mit.edu

Stability and Observability
  Max Garzon & F. Botelho, garzonm at maxpc.msci.memst.edu

VLSI Implementations
  William O. Camp, Jr., camp at owgvm6.vnet.ibm.com

What Does the Hippocampus Compute?
  Mark Gluck & Bruce McNaughton, gluck at pavlov.rutgers.edu

----------------
December 4, 1993
----------------

Catastrophic Interference in Connectionist Networks: Can it be Predicted, Can it be Prevented?
  Bob French, french at willamette.edu

Connectionist Modeling and Parallel Architectures
  Joachim Diederich & Ah Chung Tsoi, joachim at fitmail.fit.qut.edu.au

Dynamic Representation Issues in Connectionist Cognitive Modeling
  Jordan Pollack, pollack at cis.ohio-state.edu

Functional Models of Selective Attention and Context Dependency
  Thomas Hildebrandt, thildebr at aragorn.csee.lehigh.edu

Learning in Computer Vision and Image Understanding -- An Advantage over Classical Techniques?
  Hayit Greenspan, hayit at micro.caltech.edu

Memory-based Methods for Regression and Classification
  Thomas Dietterich, tgd at cs.orst.edu

Neural Network Methods for Optimization Problems
  Arun Jagota, jagota at cs.buffalo.edu

Processing of Visual and Auditory Space and its Modification by Experience
  Josef Rauschecker, josef at helix.nih.gov

Putting it all Together: Methods for Combining Neural Networks
  Michael Perrone, mpp at cns.brown.edu

---------------------------------------------------------
NOTE: The assignment of workshops to dates is tentative.
---------------------------------------------------------

From john at dcs.rhbnc.ac.uk Wed Sep 8 07:23:42 1993
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Wed, 08 Sep 1993 12:23:42 +0100
Subject: EuroCOLT
Message-ID: <1809.9309081123@csqx.cs.rhbnc.ac.uk>

The Institute of Mathematics and its Applications

Euro-COLT '93
FIRST EUROPEAN CONFERENCE ON COMPUTATIONAL LEARNING THEORY
20th-22nd December, 1993
Royal Holloway, University of London

Call for Participation and List of Accepted Papers
==================================================

The inaugural IMA European conference on Computational Learning Theory will be held 20--22 December at Royal Holloway, University of London. The conference covers areas related to the analysis of learning algorithms and the theory of machine learning, including artificial and biological neural networks, robotics, pattern recognition, inductive inference, information theory and cryptology, decision theory and Bayesian/MDL estimation.

Invited Talks
=============

As part of our program, we are pleased to announce three invited talks by Wolfgang Maass (Graz), Lenny Pitt (Illinois) and Les Valiant (Harvard).

Euroconference Scholarships
===========================

The conference has also received scientific approval from the European Commission to be supported under the Human Capital and Mobility Euroconferences initiative. This means that there will be a number of scholarships available to cover the expenses of young researchers attending the conference. The scholarships are open to citizens of European Community Member States or people who have been residing and working in research for at least one year in one of the European States. Please indicate on the return form below if you would like to receive more information about these scholarships.

List of Accepted Papers
=======================

R. Gavalda, On the Power of Equivalence Queries.
M. Golea and M. Marchand, On Learning Simple Deterministic and Probabilistic Neural Concepts.
P. Fischer, Learning Unions of Convex Polygons.
S. Polt, Improved Sample Size Bounds for PAB-Decisions.
F. Ameur, P. Fischer, K-U. Hoffgen and F.M. Heide, Trial and Error: A New Approach to Space-Bounded Learning.
A. Anoulova and S. Polt, Using Kullback-Leibler Divergence in Learning Theory.
J. Viksna, Weak Inductive Inference.
H.U. Simon, Bounds on the Number of Examples Needed for Learning Functions.
R. Wiehagen, C.H. Smith and T. Zeugmann, Classification of Predicates and Languages.
K. Pillaipakkamnatt and V. Raghavan, Read-twice DNF Formulas can be learned Properly.
J. Kivinen, H. Mannila and E. Ukkonen, Learning Rules with Local Exceptions.
J. Kivinen and M. Warmuth, Using Experts for Predicting Continuous Outcomes.
M. Anthony and J. Shawe-Taylor, Valid Generalisation of Functions from Close Approximation on a Sample.
N. Cesa-Bianchi, Y. Freund, D.P. Helmbold and M. Warmuth, On-line Prediction and Conversion Strategies.
A. Saoudi and T. Yokomori, Learning Local and Recognisable omega-Languages and Monadic Logic Programs.
K. Yamanishi, Learning Non-Parametric Smooth Rules by Stochastic Rules with Finite Partitioning.
H. Wiklicky, The Neural Network Loading Problem is Undecidable.
T. Hegedus, Learning Zero-one Threshold Functions and Hamming Balls over the Boolean Domain.

Members of the Organising Committee
===================================

John Shawe-Taylor (Chair: Royal Holloway, University of London, email to eurocolt at dcs.rhbnc.ac.uk), Martin Anthony (LSE, University of London), Jose Balcazar (Barcelona), Norman Biggs (LSE, University of London), Mark Jerrum (Edinburgh), Hans-Ulrich Simon (University of Dortmund), Paul Vitanyi (CWI Amsterdam).

Location
========

The conference will be held at Royal Holloway, University of London in Egham, Surrey, conveniently located 15 minutes' drive from London Heathrow airport. Accommodation will be either in the chateau-like original Founders Building or in en-suite rooms in a new block, also on the Royal Holloway campus. Accommodation fees range from 110 pounds to 150 pounds (inclusive of bed, breakfast and dinner), while the conference fee is 195 pounds (inclusive of lunch, coffee and tea; 140 pounds for students, with reductions available for IMA members; late application fee of 15 pounds if application is received after 16th November).

--------------------------------------------------------------------

To: The Conference Officer, The Institute of Mathematics and its Applications, 16 Nelson Street, Southend-on-Sea, Essex SS1 1EF. Telephone: (0702) 354020. Fax: (0702) 354111

Euro-COLT '93, 20th--22nd December, 1993, Royal Holloway, University of London

Please send me an application for the above conference

TITLE ............ MALE/FEMALE ..... SURNAME ..............................
FORENAMES ..................
ADDRESS FOR CORRESPONDENCE ...........................................
.....................................................................
TELEPHONE NO ........................ FAX NO .........................

Please send me information about the Euroconference scholarships ........ (Please tick if necessary)

From georg at ai.univie.ac.at Wed Sep 8 10:42:20 1993
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Wed, 8 Sep 1993 16:42:20 +0200
Subject: CFP - symposium on ANN and adaptive systems
Message-ID: <199309081442.AA09230@chicago.ai.univie.ac.at>

CALL FOR PAPERS for the symposium

======================================================
Artificial Neural Networks and Adaptive Systems
======================================================

chairs: Stephen Grossberg, USA, and Georg Dorffner, Austria

as part of the Twelfth European Meeting on Cybernetics and Systems Research, April 5-8, 1994, University of Vienna, Vienna, Austria

For this symposium, papers on any theoretical or practical aspect of artificial neural networks are invited. Special focus, however, will be put on the issue of adaptivity, both in practical engineering applications and in applications of neural networks to the modeling of human behavior. By adaptivity we mean the capability of a neural network to adjust itself to changing environments. Here we draw a careful distinction between "learning", in which weight matrices are devised for a neural network before it is applied (and are usually left unchanged thereafter), and "true" adaptivity of a given neural network to constantly changing conditions, i.e. real-time learning in non-stationary environments.
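To make this distinction concrete, here is a minimal sketch (an illustration only; the learner, the drift schedule and all constants below are hypothetical): an online least-mean-squares weight keeps tracking a target slope that drifts over time, whereas a weight trained once and then frozen would only ever reflect the initial environment.

import random

w = 0.0          # single adaptive weight
eta = 0.05       # learning rate
slope = 1.0      # the current (non-stationary) environment

for t in range(10000):
    if t > 0 and t % 1000 == 0:
        slope += 0.5                  # the environment changes
    x = random.uniform(-1.0, 1.0)     # input sample
    y = slope * x                     # desired output under the current environment
    err = y - w * x
    w += eta * err * x                # real-time (online) LMS update

print("final slope:", slope, "adapted weight:", round(w, 3))

A network whose weights were fixed after the first 1000 samples would end up far from the final slope; the continually adapting weight follows it.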
The following is a - by no means exhaustive - list of possible topics in this realm:

- online learning of neural network applications facing changing data distributions
- transfer of neural network solutions to related but different domains
- application of neural networks for adaptive autonomous systems
- "phylogenetic" vs. "ontogenetic" adaptivity (e.g. adaptivity of connectivity and architecture vs. adaptivity of coupling parameters or weights)
- short term vs. long term adaptation
- adaptive reinforcement learning
- adaptive pattern recognition
- localized vs. distributed approximation (in terms of overlap of decision regions) and adaptivity

Preference will be given to contributions that address such issues of adaptivity, but - as mentioned initially - other original work on neural networks is also welcome. As an additional highlight, Prof. S. Grossberg will be one of the plenary speakers of the EMCSR 1994.

Below is a description of the EMCSR conference containing guidelines for submissions. Note that for this particular symposium the deadline has been extended to

====================================
October 20, 1993
====================================

If you are planning to submit by this postponed deadline, please send a brief notification containing a tentative title of your submission to georg at ai.univie.ac.at by Oct 8 (the original deadline). Electronic submission (latex or postscript) to the same address is possible (note again that this applies only to this symposium, and does not apply to camera-ready accepted FINAL papers).

Hope to see you in Vienna!

About EMCSR'94:
======================================================================

Twelfth European Meeting on Cybernetics and Systems Research
April 5-8, 1994
at the University of Vienna (Main Building)

Organizers:
-----------
Austrian Society for Cybernetic Studies
in co-operation with:
University of Vienna, Department of Medical Cybernetics and Artificial Intelligence,
and: International Federation for Systems Research

Chairman: Robert Trappl, President of the Austrian Society for Cybernetic Studies

Conference fee:
---------------
Contributors: AS 2500 if paid before January 31, 1994; AS 3200 if paid later
Participants: AS 3500 if paid before January 31, 1994; AS 4200 if paid later
(AS 100 = about $ 9)

The conference fee includes participation in the Twelfth European Meeting, attendance at official receptions, and the volume of the proceedings available at the Meeting. Please send a cheque, or transfer the amount free of charges for the beneficiary to our account no. 0026-34400/00 at Creditanstalt-Bankverein Vienna. Please state your name clearly.

About the Congress:
-------------------
The international support of the European Meetings on Cybernetics and Systems Research held in Austria in 1972, 1974, 1976, 1978, 1980, 1982, 1984, 1986, 1988, 1990 and 1992 (when 300 scientists from more than 30 countries met to present, hear and discuss 210 papers) encouraged the Council of the Austrian Society for Cybernetic Studies (OSGK) to organize a similar meeting in 1994 to keep pace with continued rapid developments in related fields. A number of Symposia will be arranged and we are grateful to colleagues who have undertaken the task of preparing these events. As on the earlier occasions, eminent speakers of international reputation will present the latest research results at daily plenary sessions.
The Proceedings of the 10th and 11th European Meetings on Cybernetics and Systems Research, edited by R. Trappl, have been published by World Scientific, Singapore as:

CYBERNETICS AND SYSTEMS '90 (1 vol., 1107 p.)
CYBERNETICS AND SYSTEMS '92 (2 vols., 1685 p.)

Symposia:
---------
A General Systems Methodology (G.J.Klir, USA)
B Advances in Mathematical Systems Theory (M.Peschel, Germany & F.Pichler, Austria)
C Fuzzy Sets, Approximate Reasoning & Knowledge Based Systems (C.Carlsson, Finland, K-P.Adlassnig, Austria & E.P.Klement, Austria)
D Designing and Systems, and Their Education (B.Banathy, USA, W.Gasparski, Poland & G.Goldschmidt, Israel)
E Humanity, Architecture and Conceptualization (G.Pask, UK, & G.de Zeeuw, Netherlands)
F Biocybernetics and Mathematical Biology (L.M.Ricciardi, Italy)
G Systems and Ecology (F.J.Radermacher, Germany & K.Freda, Austria)
H Cybernetics and Informatics in Medicine (G.Gell, Austria & G.Porenta, Austria)
I Cybernetics of Socio-Economic Systems (K.Balkus, USA & O.Ladanyi, Austria)
J Systems, Management and Organization (G.Broekstra, Netherlands & R.Hough, USA)
K Cybernetics of National Development (P.Ballonoff, USA, T.Koizumi, USA & S.A.Umpleby, USA)
L Communication and Computers (A M.Tjoa, Austria)
M Intelligent Autonomous Systems (J.W.Rozenblit, USA & H.Praehofer, Austria)
N Cybernetic Principles of Knowledge Development (F.Heylighen, Belgium & S.A.Umpleby, USA)
O Cybernetics, Systems & Psychotherapy (M.Okuyama, Japan & H.Koizumi, USA)
P Artificial Neural Networks and Adaptive Systems (S.Grossberg, USA & G.Dorffner, Austria)
Q Artificial Intelligence and Cognitive Science (V.Marik, Czechia & R.Born, Austria)
R Artificial Intelligence & Systems Science for Peace Research (S.Unseld, Switzerland & R.Trappl, Austria)

Submission of papers:
---------------------
Acceptance of contributions will be determined on the basis of Draft Final Papers. These papers must not exceed 7 single-spaced A4 pages (maximum 50 lines, final size will be 8.5 x 6 inch), in English. They have to contain the final text to be submitted, including graphs and pictures. However, these need not be of reproducible quality. The Draft Final Paper must carry the title, author(s) name(s), and affiliation in this order. Please specify the symposium in which you would like to present your paper. Each scientist shall submit only one paper. Please send three copies of the Draft Final Paper to the Conference Secretariat (except for electronic submission for symposium P - see above).

DEADLINE FOR SUBMISSION: October 8, 1993 (Oct 20 for symposium P)

In order to enable careful refereeing, Draft Final Papers received after the deadline cannot be considered.

FINAL PAPERS: Authors will be notified about acceptance no later than November 13, 1993. They will be provided by the conference secretariat at the same time with the detailed instructions for the preparation of the final paper.

PRESENTATION: It is understood that the paper will be presented personally at the Meeting by the contributor.

HOTEL ACCOMMODATIONS will be handled by Oesterreichisches Verkehrsbuero, Kongressabteilung, Opernring 5, A-1010 Vienna, phone +43-1-58800-113, fax +43-1-5867127, telex 111 222. Reservation cards will be sent to all those returning the attached registration form.

SCHOLARSHIPS: The Austrian Federal Ministry for Science and Research has kindly agreed to provide a limited number of scholarships covering the registration fee for the conference and part of the accommodation costs for colleagues from eastern and south-eastern European countries. Applications should be sent to the Conference Secretariat before October 8, 1993.
For further information about the Congress, contact:

EMCSR 94 - Secretariat:
Oesterreichische Studiengesellschaft fuer Kybernetik
A-1010 Wien 1, Schottengasse 3, Austria.
Phone: +43-1-53532810
Fax: +43-1-5320652
E-mail: sec at ai.univie.ac.at

_______________________________________________________________

REGISTRATION FORM
_______________________________________________________________

EMCSR-94
Twelfth European Meeting on Cybernetics and Systems Research

Please return to: Austrian Society for Cybernetic Studies, Schottengasse 3, A-1010 Vienna, AUSTRIA (EUROPE)

o I plan to attend the Meeting
o I intend to submit a paper to Symposium ...
o I enclose the Draft Final Paper
o My Draft Final Paper will arrive prior to October 8, 1993
o My cheque for AS .... covering the Conference Fee is enclosed
o I have transferred AS .... to your account 0026-34400/00 at Creditanstalt Vienna
o I shall not be at the Meeting but am interested to receive particulars of the Proceedings

Name: Prof./Dr./Ms./Mr. ......................................
Address: .....................................................
...............................................................
Fax: ............................. E-Mail: ...................
Date: ....... Signature:

_______________________________________________________________

From dhw at santafe.edu Wed Sep 8 16:11:01 1993
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Wed, 8 Sep 93 14:11:01 MDT
Subject: New file in neuroprose
Message-ID: <9309082011.AA08170@zia>

*** DO NOT FORWARD TO OTHER BOARDS OR MAILING LISTS ***

New file in neuroprose:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/wolpert.field-comp.ps.Z

A Computationally Universal Field Computer That is Purely Linear

by D. H. Wolpert and B. J. MacLennan

Abstract: As defined in MacLennan (1987), a "field computer" is a (spatial) continuum-limit neural net. This paper investigates field computers whose dynamics is also continuum-limit, being governed by a purely linear integro-differential equation. Such systems are motivated both as a means of studying neural nets and as a model for cognitive processing. As this paper proves, such systems are computationally universal. The ``trick'' used to get such universal nonlinear behavior from a purely linear system is quite similar to the way nonlinear macroscopic physics arises from the purely linear microscopic physics of Schrodinger's equation. More precisely, the ``trick'' involves two parts. First, the kind of field computer studied in this paper is a continuum-limit threshold neural net. That is, the meaning of the system's output is determined by which neurons have an activation exceeding a threshold (which in this paper is taken to be 0), rather than by the actual activation values of the neurons. Second, the occurrence of output is determined in the same thresholding fashion; output is available only when certain "output-flagging" neurons exceed the threshold, rather than after a certain fixed number of iterations of the system. In addition to proving and discussing their computational universality, this paper cursorily investigates the dynamics of these kinds of systems.

INDEX: wolpert.field-comp.ps.Z 28 pages A computationally universal continuum-limit neural net which is purely linear.

Instructions for retrieval: Log on to the FTP-host as anonymous, type 'binary', get the file, quit FTP, uncompress the file, and print out the resulting postscript. Thanks to Jordan Pollack for maintaining this archive.
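For readers who would rather script the retrieval steps above, here is a minimal sketch using Python's standard ftplib module (the "yourname@yoursite" ident string and the final call to the unix uncompress utility are placeholders to adapt to your own site; the Getps shell script in the archive does the same job):

from ftplib import FTP
import os

host = "archive.cis.ohio-state.edu"
fname = "wolpert.field-comp.ps.Z"

ftp = FTP(host)                               # open the FTP connection
ftp.login("anonymous", "yourname@yoursite")   # anonymous login
ftp.cwd("pub/neuroprose")                     # enter the neuroprose directory
with open(fname, "wb") as f:
    ftp.retrbinary("RETR " + fname, f.write)  # binary-mode transfer
ftp.quit()
os.system("uncompress " + fname)              # leaves wolpert.field-comp.ps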
David Wolpert
The Santa Fe Institute
1660 Old Pecos Trail, Suite A
Santa Fe, NM, 87501, USA
dhw at santafe.edu
(505) 988-8814 (voice)
(505) 982-0565 (fax)

From penev%firenze%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU Thu Sep 9 13:07:09 1993
From: penev%firenze%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU (Penio Penev)
Date: Thu, 9 Sep 1993 12:07:09 -0500 (EDT)
Subject: Universal behaviour of "linear" systems
In-Reply-To: <9309082011.AA08170@zia> from "dhw@santafe.edu" at Sep 8, 93 02:11:01 pm
Message-ID: <9309091607.AA03475@firenze>

dhw at santafe.edu wrote:

| [..] The ``trick'' used to get such universal
| nonlinear behavior from a purely linear system is quite similar to the
| way nonlinear macroscopic physics arises from the purely linear
| microscopic physics of Schrodinger's equation. More precisely, the
| ``trick'' involves two parts. First, the kind of field computer
| studied in this paper is a continuum-limit threshold neural net. That

Thresholding is the most non-linear finite function. In a sense, the computer I'm writing these lines on is a finite linear machine with an "IF" function, which is the same as thresholding. If I believe that my machine (given infinite disk space) is computationally universal, I have no problems believing that adding an infinite processor to it would be universal also.

--
Penio Penev x7423 (212)327-7423 (w) Internet: penev at venezia.rockefeller.edu

From dhw at santafe.edu Fri Sep 10 01:30:35 1993
From: dhw at santafe.edu (David Wolpert)
Date: Thu, 9 Sep 93 23:30:35 MDT
Subject: No subject
Message-ID: <9309100530.AA26262@sfi.santafe.edu>

Penio Penev writes of the recently posted abstract from my paper w/ Bruce MacLennan:

>>>
Thresholding is the most non-linear finite function. In a sense, the computer I'm writing these lines on is a finite linear machine with an "IF" function, which is the same as thresholding. If I believe that my machine (given infinite disk space) is computationally universal, I have no problems believing that adding an infinite processor to it would be universal also.
>>>

Let me emphasize some points which are made clear in the paper (which I suspect Dr. Penev has not yet read).

First, as even the abstract mentions, all that's non-linear in our system is the *representation*; the dynamics as the signal propagates through the net is purely linear. As an analogy, it's like having a conventional neural net, except that the net doesn't use sigmoids to go from layer to layer; the dynamics as the signal is propagated through layers is purely linear. Then, when the signal has run through, you interpret output by looking at which of the output neurons don't have the value 0. That ending interpretation is the *only* place that thresholding arises. The important point is that it's the representation which is non-linear, not the dynamics. Note that such non-linear representations are quite common in neural nets; results like those in our paper show that that non-linearity suffices (in the continuum limit), and the non-linearity of sigmoids is not needed.

Second, although I'm not sure I understand exactly what Dr. Penev means by "adding an infinite processor to an infinite disk space machine", I certainly would agree that, w/ *countably* infinite "memory", and *discrete* dynamics of the signal through the net, it's trivial to use thresholding to get computational universality. In fact, we devote a paragraph to this very point early in the paper.
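As a toy illustration of the first point (a finite, discrete sketch with a made-up weight matrix, not the continuum-limit construction of the paper): the dynamics below is nothing but repeated matrix multiplication, and the only nonlinearity is asking, at the very end, which outputs are nonzero.

W = [[0.5, -0.5],
     [1.0,  1.0]]           # made-up 2x2 weight matrix

def step(x):
    # purely linear dynamics: one matrix-vector product, no sigmoids
    return [sum(W[i][j] * x[j] for j in range(2)) for i in range(2)]

x = [1.0, -1.0]             # input signal
for _ in range(5):          # propagate linearly through 5 "layers"
    x = step(x)

# the nonlinear *representation*: read out which neurons are nonzero,
# not what their actual activation values are
readout = [int(v != 0.0) for v in x]
print(x, "->", readout)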
What's quite a bit more difficult is emulating an arbitrary (discrete space and time) Turing machine using a neural net with an *uncountably* infinite "memory", and *continuum* dynamics of the signal through the net (i.e., signal dynamics governed by differential equations, rather than by discrete time steps). In addressing this issue, our paper parallels Steve Omohundro's earlier work showing how to emulate an arbitrary (discrete space and time) cellular automaton with differential equations.

There are several interesting aspects to such issues. One is the intrinsic interest of the math. Another is the fact that the universe is in fact continuous and not discrete. (Note the resultant issue of trying to use continuum-limit analyses like that of our paper to try to construct analog computers.) And perhaps most enticingly, there's the fact that one example of a system like that described in our paper is a wave function evolving according to Schrodinger's equation. (The dynamics in quantum mechanics is purely linear, with the non-linearity we see in the world around us arising from the "thresholding" of the collapse of the wave packet, roughly speaking.) Although we didn't have space to pursue this issue in our paper, it suggests that the systems we investigate can be "trained" using the very thoroughly understood machinery of quantum mechanical scattering theory.

All of this is discussed in our paper.

From josh at faline.bellcore.com Fri Sep 10 13:17:42 1993
From: josh at faline.bellcore.com (Joshua Alspector)
Date: Fri, 10 Sep 93 13:17:42 EDT
Subject: Telecom workshop early registration deadline next week
Message-ID: <9309101717.AA26847@faline.bellcore.com>

International Workshop on Applications of Neural Networks to Telecommunications
Nassau Inn, Princeton, NJ
October 18-20, 1993

You are invited to an international workshop on applications of neural networks to problems in telecommunications. The workshop will be held at the historic Nassau Inn (across from the university) in Princeton, New Jersey on October 18-20, 1993. The conference rate is $95 single, $135 double. You can make reservations directly with the hotel (mention IWANNT*93):

Nassau Inn
10 Palmer Square
Princeton, NJ 08542
(800) 862-7728 (within USA)
(609) 921-7500 (outside USA)
(609) 921-9385 (FAX)

In addition to the traditional hard-bound proceedings, we will also have on-line electronic conference proceedings. These will have automatic indexing and cross-referencing, multimedia figures, and annotations where readers and authors can comment.

Tentative Schedule
International Workshop on Applications of Neural Networks to Telecommunications
Nassau Inn, Princeton, NJ

Monday Oct. 18, 1993
Prince William Ballroom

8:30 Coffee and registration
9:00 J. Alspector, "Overview"

Session 1
9:30 B.J. Sheu, "Programmable VLSI Neural Network Processors for Equalization of Digital Communication Channels"
10:00 A. Jayakumar & Josh Alspector, "An Analog Neural-Network Co-Processor System for Rapid Prototyping of Telecommunications Applications"
10:30 Break

Session 2
11:00 J. Cid-Sueiro, "Improving Conventional Equalizers with Neural Networks"
11:30 T. X. Brown, "Neural Networks for Equalization"
12:00 R. Goodman, B. Ambrose, "Applications of Learning Techniques to Network Management"
12:30 Lunch

Session 3
1:30 M. Littman & J. Boyan, "A Distributed Reinforcement Learning Scheme for Network Routing"
2:00 M. Goudreau, C. L. Giles, "Discovering the Structure of a Self Routing Interconnection Network with a Recurrent Neural Network"
2:30 G. Kechriotis, E. Manolakos, "Implementing the Optimal CDMA Multiuser Detector with Hopfield Neural Networks"
3:00 Break

Session 4
3:30 A. Jagota, "Scheduling Problems in Radio Networks Using Hopfield Networks"
4:00 E. Nordstrom, M. Gustafsson, O. Gallmo, L. Asplund, "A Hybrid Admission Control Scheme for Broadband ATM Traffic"
4:30 A. Tarraf, I. Habib, T. Saadawi, "Characterization of Packetized Voice Traffic in ATM Networks Using Neural Networks"
6:00 Reception

Tuesday Oct. 19, 1993
Prince William Ballroom

8:30 Coffee
9:00 Speaker Title (Invited Talk)
10:00 Break

Session 5
10:30 A. Chhabra, S. Chandran, R. Kasturi, "Table Structure Interpretation & Neural Network Based Text Recognition for Conversion of Telephone Company Tabular Drawings"
11:00 A. Amin, H. Al-Sadoun, "Arabic Character Recognition System Using Artificial Neural Network"
11:30 G-E Wang, J-F Wang, "A New Hierarchical Approach for Recognition of Unconstrained Handwritten Numerals"

Session 6
12:00 Poster session & Lunch

POSTER SESSION
J. E. Neves, "ATM Call Control by Neural Networks"
A. Farago, "A Neural Structure for Dynamic Routing and Resource Management in ATM Networks"
S. Amin, M. Gell, "Constrained Optimisation for Switching and Routing Using Neural Networks"
V. Cherkassky, Y-K Park, G. Lee, "ATM Cell Scheduling for Broadband Switching Systems by Neural Network"
S. Neuhauser, "Hopfield Optimization Techniques Applied to Routing in Computer Networks"
F. Comellas, R. Roca, "Using Genetic Algorithms to Design Constant Weight Codes"
P. Leray, "CUBICORT: A Hardware Simulation of a Multicolumn Model for 3D Image Analysis, Understanding & Compression for Digital TV, HDTV & Multimedia"
N. Karunanithi, "A Connectionist Approach for Incorporating Continuous Code Churn into Software Reliability Growth Models"
A. Lansner, "Hierarchical Clustering Using a Bayesian Attractor ANN"
A. Holst and A. Lansner, "Diagnosis of Technical Equipment Using a Bayesian Neural Network"
T. Martinez, G. Rudolph, "A Learning Model for Adaptive Routing"
S. Haykin, L. Li, "16 kbps Nonlinear Adaptive Differential Pulse Code Modulation"
M. K. Sonmez, T. Adali, "Channel Equalization by Distribution Learning: The Least Relative Entropy Algorithm"
J. Connor, "Bootstrapping in Time Series Prediction"
A. Kowalczyk and M. Dale, "Isolated Speech Recognition with Low Cost Neural Networks"
M. Meyer & G. Pfeiffer, "Multilayer Perceptron Based Decision Feedback Equalizers Applied to Nonlinear Channels with Intersymbol Interference"
H. Liu & D. Yun, "Self-Organizing Finite State Vector Quantization for Image Coding"
A. Hasegawa, K. Shibata, K. Itoh, Y. Ichioka, K. Inamura, "Adapting-Size Neural Network for Character Recognition on X-Ray Films"
A. Mikler, J. Wong, V. Honavar, "Quo Vadis - A Framework for Adaptive Routing in Very Large High Speed Communication Networks"
Chen-Xiong Zhang, "Optimal Traffic Routing Using Self-Organization Principle"
S. Kwasny, B. Kalman, A. M. Engebretson, W. Wu, "Real-Time Identification of Language from Raw Speech Waveforms"

Session 7
4:00 Board buses for AT&T Worldwide Intelligent Network Center
5:00 Reception and tour

Session 8
7:00 Banquet
8:30 B. Widrow, "Adaptive Filters, Adaptive Neural Nets, and Telecommunications" (Invited talk)

Wednesday Oct. 20, 1993
Prince William Ballroom

8:30 Coffee
9:00 Speaker Title (Invited Talk)
10:00 Break

Session 9
10:30 J. Connor, "Prediction of Access Line Growth"
11:00 B. P. Yuhas, "Telephone Fraud Detection"
11:30 T. John, "Multistage Information Filtering Using Cascaded Neural Networks"
Jabri, "Temporal Credit Assignment for Continuous Speech Recognition" 12:30 Lunch Session 10 1:30 T-D. Chiueh, T-T Tang, L-G Chen, "Vector Quantization Using Tree-Structured Self-Organizing Feature Maps" 2:00 N. Karunanithi, "Identifying Fault-Prone Software Modules Using Connectionist Networks" 2:30 D.S.W. Tansley, S. Carter, "Clone Detection in Telecommunications Software Systems: A Neural Net Approach" 3:00 Break Session 11 3:30 L. Lewis, S. Sycamore, "Learning Index Rules & Adaptation Functions for a Communications Network Fault Resolution System" 4:00 T. Sone, "Using Distributed Neural Networks to Identify Faults in Switching Systems" 4:30 A. Chattell, "A Neural Network Pre-Processor for a Fault Diagnosis Expert System" 5:00 Adjourn Organizing Committee: General Chair Josh Alspector Bellcore, MRE 2P-396 445 South St. Morristown, NJ 07960-6438 (201) 829-4342 josh at bellcore.com Program Chair Rod Goodman Caltech 116-81 Pasadena, CA 91125 (818) 356-3677 rogo at micro.caltech.edu Publications Chair Timothy X Brown Bellcore, MRE 2E-378 445 South St. Morristown, NJ 07960-6438 (201) 829-4314 timxb at faline.bellcore.com Treasurer Anthony Jayakumar, Bellcore Events Coordinator Larry Jackel, AT&T Bell Laboratories Industry Liaisons Miklos Boda, Ellemtel Atul Chhabra, NYNEX Michael Gell, British Telecom Lee Giles, NEC Thomas John, Southwestern Bell Adam Kowalczyk, Telecom Australia Tadashi Sone, NTT University Liaisons S Y Kung, Princeton University Tzi-Dar Chiueh, National Taiwan University INNS Liaison Bernie Widrow, Stanford University IEEE Liaison Steve Weinstein, Bellcore Conference Administrator Betty Greer Bellcore, MRE 2P-295 445 South St. Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- International Workshop on Applications of Neural Networks to Telecommunications Princeton, NJ October 18-20, 1993 Registration Form Name: _____________________________________________________________ Institution: __________________________________________________________ Mailing Address: ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ Telephone: ______________________________ Fax: ____________________________________ E-mail: _____________________________________________________________ Registration Fees: includes reception, banquet, refreshment breaks, AT&T tour, and both paper and electronic proceedings available at the conference. | | Early (Before Sept. 15, 1993) $350 | | Late (After Sept. 15, 1993) $450 | | Full time students with ID $150 Enclosed is a check or money order in US Dollars for $___________ Please make check payable to IWANNT*93 Hotel arrangements with Nassau Inn at (609) 921-9385 Mail to: Betty Greer, IWANNT*93 Bellcore, MRE 2P-295 445 South St. 
Morristown, NJ 07960
(201) 829-4993
(fax) 829-5888
bg1 at faline.bellcore.com

From mitsu at netcom.com Fri Sep 10 13:49:24 1993
From: mitsu at netcom.com (Mitsu Hadeishi)
Date: Fri, 10 Sep 93 10:49:24 -0700
Subject: A Computationally Universal Field Computer
Message-ID: <9309101749.AA23417@netcom5.netcom.com>

David Wolpert writes in response to Penio Penev regarding his recently posted abstract:

>What's quite a bit more difficult is emulating an arbitrary (discrete
>space and time) Turing machine using a neural net with an *uncountably*
>infinite "memory", and *continuum* dynamics of the signal through
>the net (i.e., signal dynamics governed by differential equations, rather
>than by discrete time steps).

I'm interested to know what effect noise or signal degradation might have on your technique for emulating an arbitrary Turing machine using your linear field computer. I would imagine that you would get some sort of statistical approximation of a Turing machine; for example, you'd have a certain (perhaps quite high) probability of correct results. Of course, real physical finite computers have exactly the same problem, with the potential for memory failures and other unexpected deviations from theory. However, I was wondering to what extent your technique for getting computational universality was sensitive to noise and whether you addressed this issue in your paper.

Mitsu Hadeishi
Open Mind

From jagota at cs.Buffalo.EDU Fri Sep 10 17:08:05 1993
From: jagota at cs.Buffalo.EDU (Arun Jagota)
Date: Fri, 10 Sep 93 17:08:05 EDT
Subject: NIPS*93 workshop
Message-ID: <9309102108.AA15660@hadar.cs.Buffalo.EDU>

CALL FOR PARTICIPATION

NIPS*93 workshop on Neural Network Methods for Optimization Problems

There are 4-5 slots remaining for brief oral presentations of 20-30 minutes each. To be considered, submit either (i) a title and one page abstract or (ii) a bibliography of recent work on the topic. Please submit materials by electronic mail to Arun Jagota (jagota at cs.buffalo.edu) by October 5. Later submissions risk finding no open slots remaining.

Program:
-------

Ever since the work of Hopfield and Tank, neural networks have found increasing use for the approximate solution of hard optimization problems. The successes so far, however, have been limited when compared to traditional methods. In this workshop we will discuss the state of the art of neural network algorithms for optimization, examine their weaknesses and strengths, and discuss the potential for improvement. Second, as the algorithms arise from different areas (e.g. some from statistical physics, others from computer science), we hope that researchers from these disciplines will share their own insights with others. Third, we also hope to discuss theoretical issues that arise in using neural network algorithms for optimization. Finally, we hope to have people discuss parallel implementation issues or case studies.

---------------------
Arun Jagota

From SABBATINI%ccvax.unicamp.br at UICVM.UIC.EDU Sat Sep 11 20:35:34 1993
From: SABBATINI%ccvax.unicamp.br at UICVM.UIC.EDU (SABBATINI%ccvax.unicamp.br@UICVM.UIC.EDU)
Date: Sat, 11 Sep 1993 20:35:34 BSC (-0300 C)
Subject: Neural Networks in Biomed.Engineer. IEEE Conf. (S.Diego)
Message-ID: <01H2U2W4LAL28WW799@ccvax.unicamp.br>

15th Annual International Conference
IEEE Engineering in Medicine and Biology Society
San Diego, CA, October 28-31, 1993

ACTIVITIES ON NEURAL NETWORKS
1. Special IFAC Sessions: Frontiers in Neural Network Control
--------------------------------------------------------------

Two special sessions sponsored by the International Federation of Automatic Control will address emerging control systems and signal processing principles, algorithms and applications that are inspired by biological and artificial neural networks. Examples of engineering design strategies with brain-like capabilities will be discussed. Application areas include adaptive control, system identification, connectionist neural networks, reinforcement learning, brain models and pattern recognition.

Speakers: Dr. Paul Werbos (NSF/USA), Dr. James Albus (National Institute of Standards and Technology, USA), Dr. K.S. Narendra (Yale University, USA), Dr. Renato M.E. Sabbatini (State University of Campinas, Brazil) and Dr. Chi-Sang Poon (MIT/USA).

2. Workshop on Neural Networks in Biomedical Engineering
---------------------------------------------------------

This workshop (Oct. 27, 9:00-17:00) will address the roles of neural networks in biomedical engineering. Topics covered will include the use of neural networks in signal analysis (Dr. Evangelia Micheli-Tzanakou, Rutgers Univ.), the relations between Volterra expansions and neural networks (Dr. V.Z. Marmarelis, Univ. Southern California), the use of neural networks in medical diagnosis (Dr. C.N. Schizas, Univ. Cyprus) and the implications of processing in the brain for artificial neural networks (Dr. Stuart R. Hameroff, Univ. Arizona).

3. Technical Sessions
---------------------

Papers on neural networks will be presented in seven technical sessions (Neural Networks I-VII) with oral presentations and one session with poster presentations. All sessions will be 90 minutes long.

Venue
-----
ITT Sheraton Harbor Island Hotel

Proceedings
-----------
The conference proceedings (more than 2200 papers) will be available in paper form and on a CD-ROM with retrieval software (IEEE Press).

Information and Registration
----------------------------
IEEE/EMBS Conference Management Office
Meeting Management
5665 Oberlin Drive, Suite 110
San Diego, CA 92121, USA
Phone (619) 453-6222
Fax (619) 535-3880
Email: 70750.345 at compuserve.com or n.feldman at ieee.org

(Abstracted from the official Invitation Folder)

From dhw at santafe.edu Sun Sep 12 11:49:00 1993
From: dhw at santafe.edu (David Wolpert)
Date: Sun, 12 Sep 93 09:49:00 MDT
Subject: No subject
Message-ID: <9309121549.AA00766@sfi.santafe.edu>

Mitsu Hadeishi writes:

>>>>>
David Wolpert writes in response to Penio Penev regarding his recently posted abstract:

>What's quite a bit more difficult is emulating an arbitrary (discrete
>space and time) Turing machine using a neural net with an *uncountably*
>infinite "memory", and *continuum* dynamics of the signal through
>the net (i.e., signal dynamics governed by differential equations, rather
>than by discrete time steps).

I'm interested to know what effect noise or signal degradation might have on your technique for emulating an arbitrary Turing machine using your linear field computer. I would imagine that you would get some sort of statistical approximation of a Turing machine; for example, you'd have a certain (perhaps quite high) probability of correct results. Of course, real physical finite computers have exactly the same problem, with the potential for memory failures and other unexpected deviations from theory.
However, I was wondering to what extent your technique for getting computational universality was sensitive to noise and whether you addressed this issue in your paper.
>>>>>

This is a very good question. It opens up the whole area of error-correcting field computers. Although we've thought about this issue, we haven't addressed it in any detail, other than to note a change of basis which might facilitate robustness against noise. (We didn't want to try to cram too much into the paper.)

In general though, it would seem that the issue depends on a number of factors. First, the *kind* of noise process is important. Our system is one which evolves continuously in time, and is continuous in space (i.e., in neuron index). Accordingly, one might think that something like a Wiener process for noise would be appropriate. But it's not hard to think of other possibilities. In particular, if the noise is multiplicative (or for some other reason preserves the set of neurons which are non-zero), then it won't affect our system *at all*, since our system is linear, and its interpretation only depends on the set of neurons which are non-zero, and not on their actual values.

More generally, one would expect that for most kinds of noise processes, the degree of degradation would depend on how long the system runs (especially given that our system is linear). So the accuracy of emulation of a particular Turing machine, when noise is involved, is in general undecidable - if the Turing machine doesn't halt, one would generically expect that any (non-multiplicative) noise process would result in complete degradation eventually.

Second, one of the first things we do in demonstrating computational universality is show how to transform the original system, which is continuous-space, continuous-time, with a time-independent weight matrix, into a new system, which is discrete-space, continuous-time, with a time-dependent weight matrix. (We then demonstrate computational universality for this second system.) In essence, we transform the system into a "cellular automaton" evolving continuously in time, with time-dependent dynamics. This transformation relies on choosing the original time-independent weight matrix to lie in a particular equivalence class of such matrices. This is important because if the noise doesn't interfere with this transformation, then it will only affect how accurately our resultant "cellular automaton" is mimicking a Turing machine. However, if the noise instead interferes with the transformation, we will have automatic "cross-talk" going on *continuously* between all possible Turing machines. (Only at t = 0 would we be coding for only one particular Turing machine.) Generically, this will degrade the system in a different way.

***

Finally, in practice one would (often) want to use our system the same way neural nets are used - as simple mappings from inputs to outputs which reproduce a given training set (up to regularization issues), rather than as systems which reproduce a given Turing machine. (In many senses, proving computational universality is of interest because it establishes the potential power of a system, not because it is a practical way to use the system.) And in general, noise is much less calamitous if all you're trying to do is reproduce a given finite (!) training set - you have much more freedom to set things up so that there are "buffers" around the behavior you want to get, buffers which can absorb the noise.
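[Ed. note: to make the multiplicative-noise remark above concrete, here is a minimal numerical sketch. It is ours, not code from the Wolpert/MacLennan paper; the state vector, sizes, and noise models are illustrative assumptions. It demonstrates only the set-theoretic point: elementwise multiplication by non-zero factors preserves the set of non-zero neurons, while additive noise generically destroys it.]

  import numpy as np

  rng = np.random.default_rng(0)
  n = 16

  # Illustrative state vector with 5 active (non-zero) neurons.
  x = np.zeros(n)
  x[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)

  def support(v, tol=1e-12):
      """The set of non-zero neurons, which carries the interpretation."""
      return set(np.flatnonzero(np.abs(v) > tol))

  # Multiplicative noise: every component scaled by a strictly positive factor.
  m = np.exp(0.5 * rng.standard_normal(n))
  print(support(m * x) == support(x))    # True: support preserved exactly

  # Additive noise, by contrast, almost surely activates every neuron.
  print(support(x + 1e-3 * rng.standard_normal(n)) == support(x))   # False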
From jbower at smaug.bbb.caltech.edu Sun Sep 12 22:04:45 1993
From: jbower at smaug.bbb.caltech.edu (Jim Bower)
Date: Sun, 12 Sep 93 19:04:45 PDT
Subject: New GENESIS version 1.4
Message-ID: <9309130204.AA00830@smaug.bbb.caltech.edu>

------------------------------------------------------------------------

This is to announce the availability of a new release of the GENESIS simulator. This version (ver. 1.4.1, August 1993) is greatly improved over the previous public release (ver. 1.1, July 1990).

Description:

GENESIS (GEneral NEural SImulation System) is a general purpose simulation platform which was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. Most current GENESIS applications involve realistic simulations of biological neural systems. Although the software can also model more abstract networks, other simulators are more suitable for backpropagation and similar connectionist modeling.

GENESIS and its graphical front-end XODUS are written in C and run on SUN and DEC graphics workstations under UNIX (SunOS 4.0 and up; Ultrix 3.1, 4.0 and up) and X-windows (versions X11R3, X11R4, and X11R5). The current version of GENESIS has also been used with Silicon Graphics (Irix 4.0.1 and up) and the HP 700 series (HPUX). The distribution includes full source code and documentation for both GENESIS and XODUS, as well as fourteen demonstration and tutorial simulations. Documentation for these simulations is included, along with three papers that describe the general organization of the simulator. The distributed compressed tar file is about 3 MB in size.

In addition to sample simulations which demonstrate the construction of neural simulations, the new GENESIS release contains a number of interactive tutorials for teaching concepts in neurobiology and realistic neural modeling. As their use requires no knowledge of GENESIS programming, they are suitable for use in a computer simulation laboratory which would accompany upper division undergraduate and graduate neuroscience courses, or for self-study. Each of these has on-line help and a number of suggested exercises or "experiments". These tutorials may also be taken apart and modified to create your own simulations, as several of them are derived from recent research simulations.

The following papers give further information about GENESIS:

Wilson, M. A., Bhalla, U. S., Uhley, J. D., and Bower, J. M. (1989) GENESIS: A system for simulating neural networks. In: Advances in Neural Information Processing Systems. D. Touretzky, editor. Morgan Kaufmann, San Mateo, CA. pp. 485-492.

Matthew A. Wilson and James M. Bower, "The Simulation of Large-Scale Neural Networks", in Methods in Neuronal Modeling, Christof Koch and Idan Segev, editors. (MIT Press, 1989)

Acquiring GENESIS via free FTP distribution:

GENESIS may be obtained via FTP from genesis.cns.caltech.edu (131.215.137.64). As this is a large software package, please read the above description to determine if GENESIS is likely to be suitable for your purposes before you follow this procedure. To acquire the software, use 'telnet' to connect to genesis.cns.caltech.edu and login as the user "genesis" (no password required). If you answer all the questions asked of you, an 'ftp' account will automatically be created for you. You can then 'ftp' back to the machine and download the software. Further inquiries concerning GENESIS may be addressed to genesis at cns.caltech.edu.
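[Ed. note: for orientation, a hypothetical session following the procedure just described. The prompts are abbreviated, and the distribution file name below is a placeholder - use 'ls' within ftp to see the real name.]

  % telnet genesis.cns.caltech.edu
  login: genesis                 (no password; answer the registration questions)
  ...                            (an ftp account is created for you)
  % ftp genesis.cns.caltech.edu
  Name: yourname                 (the account created above)
  ftp> binary
  ftp> ls                        (find the distribution tar file)
  ftp> get genesis.tar.Z         (placeholder name; use the name ls shows)
  ftp> quit
  % uncompress genesis.tar.Z
  % tar xf genesis.tar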
From mclennan at cs.utk.edu Mon Sep 13 18:09:39 1993
From: mclennan at cs.utk.edu (mclennan@cs.utk.edu)
Date: Mon, 13 Sep 93 18:09:39 -0400
Subject: Hadeishi's comments
Message-ID: <9309132209.AA00564@maclennan.cs.utk.edu>

I would like to add one comment to David Wolpert's reply to Mitsu Hadeishi's comments about our paper. That is a reminder that a TM is an idealized model of computation which ignores all the real-life problems of signal detection and classification, for example, in reading its tape. It's assumed to operate perfectly and in a noise-free environment. We make analogous assumptions in the continuous case, so that we also have an idealized model of computation. It's a theoretical construction for theoretical purposes, and I can no more imagine a practical use for it than for, say, a Turing machine simulation of a recursive function evaluator (e.g., executing LISP on a TM). Like most theoretical constructions, ours makes physically unrealistic assumptions and is extravagant in its use of resources.

Bruce MacLennan
Department of Computer Science
The University of Tennessee
Knoxville, TN 37996-1301
(615)974-0994/5067
FAX: (615)974-4404
maclennan at cs.utk.edu

From edelman at wisdom.weizmann.ac.il Tue Sep 14 05:37:49 1993
From: edelman at wisdom.weizmann.ac.il (Edelman Shimon)
Date: Tue, 14 Sep 93 11:37:49 +0200
Subject: TR available: Representation, Similarity, and the Chorus of Prototypes
Message-ID: <9309140937.AA22716@wisdom.weizmann.ac.il>

The technical report described below is available via anonymous ftp from eris.wisdom.weizmann.ac.il (132.76.80.53) as /pub/revised-simil.ps.Z. The size of the compressed Postscript file is about 330Kb; the uncompressed file is 2.4Mb and has 21 pages (including figures).

-Shimon

--------------------------------------------------------------------------------

Representation, Similarity, and the Chorus of Prototypes

Shimon Edelman
Dept. of Applied Mathematics and Computer Science
The Weizmann Institute of Science
Rehovot 76100, Israel

July 1993 (revised September 1993)

\begin{abstract}
It is proposed to conceive of representation as an emergent phenomenon that is supervenient on patterns of activity of coarsely tuned and highly redundant feature detectors. The computational underpinnings of the outlined theory of representation are (1) the properties of collections of overlapping graded receptive fields, as in the biological perceptual systems that exhibit hyperacuity-level performance, and (2) the sufficiency of a set of proximal distances between stimulus representations for the recovery of the corresponding distal contrasts between stimuli, as in multidimensional scaling. The present preliminary study appears to indicate that this concept of representation is computationally viable, and is compatible with psychological and neurobiological data.
\end{abstract}
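[Ed. note: a toy illustration of the two computational ingredients named in the abstract - our own construction, not the author's code or stimuli. A population of coarsely tuned, overlapping Gaussian receptive fields represents a scalar stimulus, and distances between the resulting activity patterns track the distances between the stimuli themselves, as multidimensional scaling requires.]

  import numpy as np

  rng = np.random.default_rng(0)

  # Coarsely tuned, overlapping, redundant 1-D Gaussian receptive fields.
  centres = np.linspace(0.0, 1.0, 30)
  sigma = 0.15                            # coarse tuning: broad overlap

  def represent(stimulus):
      """Population activity pattern for a scalar 'distal' stimulus."""
      return np.exp(-(stimulus - centres) ** 2 / (2 * sigma ** 2))

  stimuli = rng.random(20)
  reps = np.array([represent(s) for s in stimuli])

  # Proximal distances (between activity patterns) versus distal
  # contrasts (between the stimuli themselves).
  d_prox, d_dist = [], []
  for i in range(len(stimuli)):
      for j in range(i + 1, len(stimuli)):
          d_prox.append(np.linalg.norm(reps[i] - reps[j]))
          d_dist.append(abs(stimuli[i] - stimuli[j]))

  # The two sets of distances are strongly (monotonically) related.
  print(np.corrcoef(d_prox, d_dist)[0, 1])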
From mwitten at hermes.chpc.utexas.edu Tue Sep 14 13:59:07 1993
From: mwitten at hermes.chpc.utexas.edu (mwitten@hermes.chpc.utexas.edu)
Date: Tue, 14 Sep 93 12:59:07 CDT
Subject: CONGRESS: COMPUTATIONAL MEDICINE AND PUBLIC HEALTH (long)
Message-ID: <9309141759.AA08946@morpheus.chpc.utexas.edu>

** NOTE CHANGES IN SUBMISSION AND REGISTRATION DEADLINES **

FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE, PUBLIC HEALTH AND BIOTECHNOLOGY

24-28 April 1994
Hyatt Regency Hotel
Austin, Texas

----- (Feel Free To Cross Post This Announcement) ----

1.0 CONFERENCE OVERVIEW:

With increasing frequency, computational sciences are being exploited as a means with which to investigate biomedical processes at all levels of complexity, from molecular to systemic to demographic. Computational instruments are now used not only as exploratory tools but also as diagnostic and prognostic tools. The appearance of high performance computing environments has, to a great extent, removed the obstacles to increasing the biological realism of mathematical models. For the first time in the history of the field, practical biological reality is finally within the grasp of the biomedical modeler. Mathematical complexity is no longer as serious an issue, as speeds of computation are now of the order necessary to allow extremely large and complex computational models to be analyzed. Large memory machines are now routinely available. Additionally, high speed, efficient, highly optimized numerical algorithms are under constant development. As these algorithms are understood and improved upon, many of them are transferred from software implementation to an implementation in the hardware itself, thereby further enhancing the available computational speed of current hardware.

The purpose of this congress is to bring together a transdisciplinary group of researchers in medicine, public health, computer science, mathematics, nursing, veterinary medicine, ecology, allied health, as well as numerous other disciplines, for the purpose of examining the grand challenge problems of the next decades. This will be a definitive meeting in that it will be the first World Congress of its type and will be held as a follow-up to the very well received Workshop On High Performance Computing In The Life Sciences and Medicine held by the University of Texas System Center For High Performance Computing in 1990.

Young scientists (graduate students, postdocs, etc.) are encouraged to attend and to present their work in this increasingly interesting discipline. Funding is being solicited from NSF, NIH, DOE, DARPA, EPA, and private foundations, as well as other sources, to assist in travel support and in offsetting expenses for those unable to attend otherwise. Papers, poster presentations, tutorials, focused topic workshops, birds of a feather groups, demonstrations, and other suggestions are also solicited.

2.0 CONFERENCE SCOPE AND TOPIC AREAS:

The Congress has a broad scope. If you are not sure whether or not your subject fits the Congress scope, contact the conference organizers at one of the addresses below.
Subject areas include but are not limited to:

*Visualization/Sonification
--- medical imaging
--- molecular visualization as a clinical research tool
--- simulation visualization
--- microscopy
--- visualization as applied to problems arising in computational molecular biology and genetics or other non-traditional disciplines
--- telemedicine

*Computational Molecular Biology and Genetics
--- computational ramifications of clinical needs in the Human Genome, Plant Genome, and Animal Genome Projects
--- computational and grand challenge problems in molecular biology and genetics
--- algorithms and methodologies
--- issues of multiple datatype databases

*Computational Pharmacology, Pharmacodynamics, Drug Design

*Computational Chemistry as Applied to Clinical Issues

*Computational Cell Biology, Physiology, and Metabolism
--- Single cell metabolic models (red blood cell)
--- Cancer models
--- Transport models
--- Single cell interaction with external factors models (laser, ultrasound, electrical stimulus)

*Computational Physiology and Metabolism
--- Renal System
--- Cardiovascular dynamics
--- Liver function
--- Pulmonary dynamics
--- Auditory function, cochlear dynamics, hearing
--- Reproductive modeling: ovarian dynamics, reproductive ecotoxicology, modeling the hormonal cycle
--- Metabolic Databases and metabolic models

*Computational Demography, Epidemiology, and Statistics/Biostatistics
--- Classical demographic, epidemiologic, and biostatistical modeling
--- Modeling of the role of culture, poverty, and other sociological issues as they impact healthcare
--- Morphometrics

*Computational Disease Modeling
--- AIDS
--- TB
--- Influenza
--- Statistical Population Genetics Of Disease Processes
--- Other

*Computational Biofluids
--- Blood flow
--- Sperm dynamics
--- Modeling of arteriosclerosis and related processes

*Computational Dentistry, Orthodontics, and Prosthetics

*Computational Veterinary Medicine
--- Computational issues in modeling non-human dynamics such as equine, feline, and canine dynamics (physiological/biomechanical)

*Computational Allied Health Sciences
--- Physical Therapy
--- Neuromusic Therapy
--- Respiratory Therapy

*Computational Radiology
--- Dose modeling
--- Treatment planning

*Computational Surgery
--- Simulation of surgical procedures in VR worlds
--- Surgical simulation as a precursor to surgical intervention
--- The Visible Human

*Computational Cardiology

*Computational Nursing

*Computational Models In Chiropractic

*Computational Neurobiology and Neurophysiology
--- Brain modeling
--- Single neuron models
--- Neural nets and clinical applications
--- Neurophysiological dynamics
--- Neurotransmitter modeling
--- Neurological disorder modeling (Alzheimer's Disease, for example)
--- The Human Brain Project

*Computational Models of Psychiatric and Psychological Processes

*Computational Biomechanics
--- Bone Modeling
--- Joint Modeling

*Computational Models of Non-traditional Medicine
--- Acupuncture
--- Other

*Computational Issues In Medical Instrumentation Design and Simulation
--- Scanner Design
--- Optical Instrumentation

*Ethical issues arising in the use of computational technology in medical diagnosis and simulation

*The role of alternate reality methodologies and high performance environments in the medical and public health disciplines

*Issues in the use of high performance computing environments in the teaching of health science curricula

*The role of high performance environments for the handling of large medical datasets (high performance storage environments,
high performance networking, high performance medical records manipulation and management, metadata structures and definitions)

*Federal and private support for transdisciplinary research in computational medicine and public health

3.0 CONFERENCE COMMITTEE

*CONFERENCE CHAIR:
Matthew Witten, UT System Center For High Performance Computing, Austin, Texas
m.witten at chpc.utexas.edu

*CURRENT CONFERENCE DIRECTORATE:
Regina Monaco, Mt. Sinai Medical Center
Dan Davison, University of Houston
Chris Johnson, University of Utah
Lisa Fauci, Tulane University
Daniel Zelterman, University of Minnesota Minneapolis
James Hyman, Los Alamos National Laboratory
Richard Hart, Tulane University
Dennis Duke, SCRI-Florida State University
Sharon Meintz, University of Nevada Las Vegas
Dean Sittig, Vanderbilt University
Dick Tsur, UT System CHPC
Dan Deerfield, Pittsburgh Supercomputing Center
Istvan Gyori, University of Veszprem (Hungary)
Don Fussell, University of Texas at Austin
Ken Goodman, University Of Miami School of Medicine
Martin Hugh-Jones, Louisiana State University
Stuart Zimmerman, MD Anderson Cancer Research Center
John Wooley, DOE
Sylvia Spengler, University of California Berkeley
Robert Blystone, Trinity University
Gregory Kramer, Santa Fe Institute
Franco Celada, NYU Medical Center
David Robinson, NIH, NHLBI
Jane Preson, MCC
Peter Petropoulos, Brooks Air Force Base
Marcus Pandy, University of Texas at Austin
George Bekey, University of Southern California
Stephen Koslow, NIH, NIMH
Fred Bookstein, University of Michigan Ann Arbor
Dan Levine, University of Texas at Arlington
Richard Gordon, University of Manitoba (Canada)
Stan Zeitz, Drexel University
Marcia McClure, University of Nevada Las Vegas
Renato Sabbatini, UNICAMP (Brazil)
Hiroshi Tanaka, Tokyo Medical and Dental University (Japan)
Shusaku Tsumoto, Tokyo Medical and Dental University (Japan)

Additional conference directorate members are being added and will be updated on the anonymous ftp list as they agree.

4.0 CONTACTING THE CONFERENCE COMMITTEE:

To contact the congress organizers for any reason, use any of the following pathways:

ELECTRONIC MAIL - compmed94 at chpc.utexas.edu
FAX (USA) - (512) 471-2445
PHONE (USA) - (512) 471-2472
GOPHER: log into the University of Texas System-CHPC and select the Computational Medicine and Allied Health menu choice
ANONYMOUS FTP: ftp.chpc.utexas.edu, cd /pub/compmed94

POSTAL:
Compmed 1994
University of Texas System CHPC
Balcones Research Center
10100 Burnet Road, 1.154CMS
Austin, Texas 78758-4497

5.0 SUBMISSION PROCEDURES:

Authors must submit 5 copies of a single-page, 50-100 word abstract clearly discussing the topic of their presentation. In addition, authors must clearly state their choice of poster, contributed paper, tutorial, exhibit, focused workshop or birds of a feather group, along with a discussion of their presentation. Abstracts will be published as part of the preliminary conference material. To notify the congress organizing committee that you would like to participate and to be put on the congress mailing list, please fill out and return the form that follows this announcement. You may use any of the contact methods above. If you wish to organize a contributed paper session, tutorial session, focused workshop, or birds of a feather group, please contact the conference director at mwitten at chpc.utexas.edu. The abstract may be submitted electronically to compmed94 at chpc.utexas.edu or by mail or fax. There is no official format.
6.0 CONFERENCE DEADLINES AND FEES:

The following deadlines should be noted:

1 November 1993 - Notification of intent to organize a special session
15 December 1993 - Abstracts for talks/posters/workshops/birds of a feather sessions/demonstrations
15 January 1994 - Notification of acceptance of abstract
15 February 1994 - Application for financial aid
1 April 1994 - Registration deadline (includes payment of all fees)

Fees include lunches for three days, all conference registration materials, the reception, and the sit-down banquet:

$400.00 Corporate
$250.00 Academic
$150.00 Student

Students are required to submit verification of student status. The verification of academic status form appears appended to the registration form in this announcement. Because financial aid may be available for minority students, faculty, and individuals from declared minority institutions, you may indicate that you are requesting financial aid as a minority individual. Additionally, we anticipate some support for women to attend. Application for financial aid is also appended to the attached form.

7.0 CONFERENCE PRELIMINARY DETAILS AND ENVIRONMENT

LOCATION: Hyatt Regency Hotel, Austin, Texas, USA
DATES: 24-28 April 1994

The 1st World Congress On Computational Medicine, Public Health, and Biotechnology will be held at the Hyatt Regency Hotel, Austin, Texas, located in downtown Austin on the shores of Town Lake, also known as the Colorado River. The Hyatt Regency has rooms available for the conference participants at a special rate of $79.00/night for single or double occupancy, with a hotel tax of 13%. The Hyatt accepts American Express, Diner's Club, Visa, MasterCard, Carte Blanche, and Discover credit cards. This room rate will be in effect until 9 April 1994 or until the block of rooms is full. We recommend that you make your reservations as soon as possible. You may make your reservations by calling (512) 477-1234 or by returning the enclosed reservation form. Be certain to mention that you are attending the First World Congress On Computational Medicine, Public Health, and Biotechnology if you make your reservations by telephone.

The hotel is approximately five miles (15 minutes) from Robert Mueller Airport. The Hyatt offers courtesy limousine service to and from the airport between the hours of 6:00am and 11:00pm. You may call them from the airport when you arrive. If you choose to use a taxi, expect to pay approximately $8.00. Automobiles may be rented at the airport from most of the major car rental agencies. However, because of the downtown location of the Congress and access to taxis and to bus service, we do not recommend that you rent an auto unless you are planning to drive outside of the city. Should you not be able to find an available room at the Hyatt Regency, we have scheduled an "overflow" hotel, the Embassy Suites, which is located directly across the street from the Hyatt Regency. If, due to travel expense restrictions, you are unable to stay at either of these two hotels, please contact the conference board directly and we will be more than happy to find a hotel near the conference site that should accommodate your needs.

Austin, the state capital, is renowned for its natural hill-country beauty and an active cultural scene. Several hiking and jogging trails are within walking distance of the hotel, as well as opportunities for a variety of aquatic sports. From the Hyatt, you can "Catch a Dillo" downtown, taking a ride on our delightful inner-city, rubber-wheeled trolley system.
In Austin's historic downtown area, you can take a free guided tour through the State Capitol Building, constructed in 1888. Or, you can visit the Governor's Mansion, recognized as one of the finest examples of 19th Century Greek Revival architecture and housing every Texas governor since 1856. Downtown you will find the Old Bakery and Emporium, built by Swedish immigrant Charles Lundberg in 1876, and the Sixth Street/Old Pecan Street Historical District - a seven-block renovation of Victorian and native stone buildings, now a National Registered Historic District containing more than 60 restaurants, clubs, and shops to enjoy.

The Laguna Gloria Art Museum, the Archer M. Huntington Art Gallery, the LBJ Library and Museum, the Neill-Cochran Museum House, and the Texas Memorial Museum are among Austin's finest museums. The Umlauf Sculpture Garden has become a major artistic attraction. Charles Umlauf's sculptured works are placed in a variety of elegant settings under a canopy of trees. The Zilker Gardens contains many botanical highlights such as the Rose Garden, Oriental Garden, Garden of the Blind, Water Garden and more. Unique to Austin is a large population of Mexican free-tailed bats which resides beneath the Congress Avenue Bridge. During the month of April, the Highland Lakes Bluebonnet Trail celebrates spring's wildflowers (a major attraction) as this self-guided tour winds through the surrounding region of Austin and nearby towns (you will need to rent a car for this one).

Austin offers a number of indoor shopping malls in every part of the city: The Arboretum, Barton Creek Square, Dobie Mall, and Highland Mall, to name a few. Capital Metro, Austin's mass transit system, offers low cost transportation throughout Austin. Specialty shops, offering a wide variety of handmade crafts and merchandise crafted by native Texans, are scattered throughout the city and surrounding areas. Dining out in Austin, you will have choices of American, Chinese, authentic Mexican, Tex-Mex, Italian, Japanese, or nearly any other type of cuisine you might wish to experience, with price ranges that will suit anyone's budget. Live bands perform in various nightclubs around the city and at night spots along Sixth Street, offering a range of jazz, blues, country/Western, reggae, swing, and rock music.

Day temperatures will be in the 80-90 degrees F range and fairly humid. Evening temperatures have been known to drop down into the 50's (degrees F). Cold weather is not expected, so be sure to bring lightweight clothing with you.

Congress exhibitor and vendor presentations are also being planned.

8.0 CONFERENCE ENDORSEMENTS AND SPONSORSHIPS:

Numerous potential academic sponsors have been contacted. Currently, negotiations are underway for sponsorship with SIAM, AMS, MAA, IEEE, FASEB, and IMACS. Additionally, AMA and ANA continuing medical education support is being sought. Information will be updated regularly on the anonymous ftp site for the conference (see above). Currently, funding has been generously supplied by the following agencies:

University of Texas System - CHPC
U.S. Department of Energy

================== REGISTRATION FORM ===============

(Please list your name below as it will appear on badge.)

First Name:
Middle Initial (if available):
Family Name:

Your Professional Title:
[ ]Dr. [ ]Professor [ ]Mr. [ ]Mrs. [ ]Ms.
[ ]Other:__________________

Office Phone (desk):
Home/Evening Phone (for emergency contact):
Fax:
Electronic Mail (Bitnet):
Electronic Mail (Internet):

Postal Address:
Institution or Center:
Building Code:
Mail Stop:
Street Address1:
Street Address2:
City: State:
Zip or Country Code: Country:

Please list your three major interest areas:
Interest1:
Interest2:
Interest3:

Registration fee: $____________
Late fee $50 (if after April 1, 1994) $____________

**IF UT AUSTIN, PLEASE PROVIDE YOUR:
UNIVERSITY ACCT. #: ______________________
UNIVERSITY ACCT. TITLE: ______________________
NAME OF ACCT. SIGNER: ______________________

=====================================================

VERIFICATION OF STUDENT STATUS

Name:
Mailing Address:
University at which you are a student:
What level student (year):
Your student id number:
Name of your graduate or postdoctoral advisor:
Telephone number for your advisor:

By filling in this section, I agree that I am electronically signing my signature to the statement that I am currently a student at the above university.

=======================================================

REQUEST FOR FINANCIAL AID

Name:
Mailing Address:

I request financial assistance under one or more of the following categories:

[ ] Student (You must fill out the Verification of Student Status Section in order to be considered for financial aid under this category)
[ ] Academic
[ ] Minority
[ ] Female
[ ] Black
[ ] Hispanic
[ ] Native American Indian
[ ] Other

This form is not meant to invade your personal privacy in any fashion. However, some of the grant funds are targeted at specific ethnic/minority groups and need to be expended appropriately. None of these forms will be in any way released to the public. And, after the congress, all of the financial aid forms will be destroyed. No records will be kept of ethnic or racial backgrounds. If you have any questions concerning financial aid support, please contact Matthew Witten at the above addresses.

==============================================================

From terry at helmholtz.sdsc.edu Tue Sep 14 21:45:35 1993
From: terry at helmholtz.sdsc.edu (Terry Sejnowski)
Date: Tue, 14 Sep 93 18:45:35 PDT
Subject: Top Prize
Message-ID: <9309150145.AA13807@helmholtz.sdsc.edu>

New Scientist, 28 August, 1993, pp. 18-19

Physicist nets top prize for keeping building cooler

"Thomas Bayes' theorem ... was used by an artificial neural network -- a computer system that mimics the way the brain works -- to win an international competition for predicting energy consumption in a nominated building. The competition, called the "Great Predictor Shootout", took place in Atlanta, Georgia, last June, organised by the American Society of Heating, Refrigeration and Airconditioning Engineers. As a guide, the 21 entrants were given data about a four-storey building in Texas, including the external air temperature, wind speed, humidity and sunlight levels for two months, and the consumption of electricity and hot and cold water for the previous four months. The winning system, devised by David MacKay, a physicist at the University of Cambridge's Cavendish Laboratory, predicted the building's energy to within 10%. The runner-up was accurate to 15%. ... 'No matter how much you try to design a building, there will always be effects you omit to allow for,' comments MacKay."
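[Ed. note: for readers who want a feel for the prediction task just described, a toy sketch - entirely ours, with synthetic data and ordinary least squares rather than MacKay's Bayesian neural networks - of the kind of input-output mapping involved and one way a "within 10%" figure can be scored.]

  import numpy as np

  rng = np.random.default_rng(42)

  # Synthetic stand-in for the contest data: weather features
  # (temperature, wind speed, humidity, sunlight) and consumption.
  n = 1440
  X = np.column_stack([
      20 + 10 * rng.random(n),    # external air temperature
      5 * rng.random(n),          # wind speed
      0.3 + 0.5 * rng.random(n),  # humidity
      rng.random(n),              # sunlight level
  ])
  true_w = np.array([3.0, -0.5, 8.0, 12.0])
  y = X @ true_w + 40 + 2.0 * rng.standard_normal(n)   # energy consumption

  # Fit on the first half of the record, predict the rest.
  split = n // 2
  A = np.column_stack([X, np.ones(n)])                 # add a bias column
  w, *_ = np.linalg.lstsq(A[:split], y[:split], rcond=None)
  pred = A[split:] @ w

  # One way to score "predicted the building's energy to within 10%":
  pct_err = 100 * np.abs(pred - y[split:]) / y[split:]
  print(f"mean absolute percentage error: {pct_err.mean():.1f}%")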
-----

From kainen at cs.UMD.EDU Thu Sep 16 11:58:37 1993
From: kainen at cs.UMD.EDU (Paul Kainen)
Date: Thu, 16 Sep 93 11:58:37 -0400
Subject: INTELLIGENT BUILDINGS
Message-ID: <9309161558.AA04914@tove.cs.UMD.EDU>

While it may be profitable on a short term basis to continue to construct and then to tear down office space, the nation can ill afford such waste. Since ASHRAE is concerned to improve the efficiency of energy utilization, perhaps they ought to consider conducting a much more comprehensive test. Find a corporation or government planning to construct or purchase a substantial amount of office space, enough for several buildings of moderate size. Have individual high-tech teams plan and construct these buildings, with ASHRAE (and other cooperating societies) monitoring energy consumption, construction cost, etc. In addition, government agencies and concerned labor organizations would be consulted during the design process as well as afterwards, so that the resulting buildings would be human-friendly. For instance, issues such as lighting need to be considered quite explicitly and top-down as, for instance, they have been discussed recently on the Internet: is the cost of better lighting offset by worker productivity? Let's not guess when we can find out.

What Sejnowski drew our attention to, MacKay's accomplishment, shows that the modern ``high-tech'' combination of mathematics with neural nets can handle large-scale problems. The real challenge, in smart buildings and other intelligent system applications, is to connect the disciplines.

From stork at crc.ricoh.com Wed Sep 15 15:10:48 1993
From: stork at crc.ricoh.com (David G. Stork)
Date: Wed, 15 Sep 93 12:10:48 -0700
Subject: "Pattern Classification and Scene Analysis"
Message-ID: <9309151910.AA06151@neva.crc.ricoh.com>

I am writing the second edition of "Pattern Classification and Scene Analysis" (part I) by R. O. Duda and P. E. Hart for Wiley Publishers. This second edition will be greatly expanded, to include topics such as neural networks, stochastic methods (e.g., Boltzmann learning), theory of learning and generalization (e.g., capacity issues), and many other modern topics. All figures are being redrawn in high-quality PostScript form, including many three-dimensional figures illustrating the basic issues. The homework problems have been similarly expanded, and computer exercises have been written for all chapters.

Wiley has given us permission to release the preliminary manuscript to educators free of charge, for consideration for distributing to students (again, at no royalty fee). There are only a few minor provisos; the only two of substance are that no one can make a *profit* from distributing the manuscript, and that the copyright must remain on the cover of every set distributed.

Six chapters are complete (in draft form) and drafts of the remaining five should be done by the end of this year. (Many of the figures are currently hand drawn, and none appear in the text itself, as of yet.) Likewise, references have not been included. Nevertheless, all material from the First Edition has been included (though in later drafts some of this will be deleted).

If you are interested in obtaining a copy of chapters of the manuscript, contact me and I'll send you (by e-mail) a copy of the Table of Contents. If you are still interested in possibly using the text, we can go on from there.
Dr. David G. Stork
Chief Scientist and Head, Machine Learning and Perception
Ricoh California Research Center
2882 Sand Hill Road, Suite 115
Menlo Park, CA 94025-7022 USA
415-496-5720 (w)
415-854-8740 (fax)
stork at crc.ricoh.com

From ken at phy.ucsf.edu Thu Sep 16 13:52:43 1993
From: ken at phy.ucsf.edu (Ken Miller)
Date: Thu, 16 Sep 93 10:52:43 -0700
Subject: Computational Neuroscience job at San Diego Supercomputer Center
Message-ID: <9309161752.AA07643@phybeta.ucsf.EDU>

COMPUTATIONAL NEUROSCIENCE JOB: I just checked with SDSC, and applications are still being accepted for this job (ad posted below). However, as the job has been advertised for two months, applicants are encouraged to act quickly.

Ken

Kenneth D. Miller
Dept. of Physiology, UCSF
513 Parnassus
San Francisco, CA 94143-0444 [Office: S-859]
telephone: (415) 476-8217
fax: (415) 476-4929
internet: ken at phy.ucsf.edu

----------------------------------------

This ad appeared in Science on July 16, 1993:

San Diego Supercomputer Center
-----------------------------

The San Diego Supercomputer Center is a National Computational Science Laboratory operated by General Atomics and the National Science Foundation. It serves the nationwide community of scientists and engineers. We are currently accepting applications for a Staff Scientist in computational ecology, computational neurobiology, or scientific databases to join our team of computational scientists. Requirements include a Ph.D. plus postdoctoral experience in one of the above areas. For the computational ecology or neurobiology position, a willingness to initiate an outreach program in, and collaborative projects with, the research community is necessary.

General Atomics offers comprehensive salary and benefit plans as well as an exciting, dynamic environment well suited to applicants who are highly motivated and flexible. Please submit your letter of application, curriculum vitae, list of publications and three references to General Atomics, Dept. 93-23, P.O. Box 85608, San Diego, CA 92186-9784. EEO/AAE

If you want further information about this position, please contact Rozeanne Steckler (steckler at sdsc.edu, 619-534-5122) or Dan Sulzbach (sulzbach at sdsc.edu, 619-534-5125) at SDSC.

From fu at whale.cis.ufl.edu Thu Sep 16 14:41:38 1993
From: fu at whale.cis.ufl.edu (Li-Min Fu)
Date: Thu, 16 Sep 93 14:41:38 -0400
Subject: ISIKNH'94
Message-ID: <9309161841.AA03015@whale.cis.ufl.edu>

CALL FOR PAPERS

International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH'94)

Sponsored by University of Florida, and AAAI, in cooperation with IEEE Neural Network Council, INNS-SIG, and FLAIRS.

Time: May 9-10, 1994; Place: Pensacola Beach, Florida, USA.

A large amount of research has been directed toward integrating neural and symbolic methods in recent years. In particular, the integration of knowledge-based principles and neural heuristics holds great promise in solving complicated real-world problems. This symposium will provide a forum for discussions and exchanges of ideas in this area. The objective of this symposium is to bring together researchers from a variety of fields who are interested in applying neural network techniques to augment existing knowledge, or the other way around, and especially those who have demonstrated that this combined approach outperforms either approach alone.
We welcome views of this problem from areas such as constraint-(knowledge-) based learning and reasoning, connectionist symbol processing, hybrid intelligent systems, fuzzy neural networks, multi-strategic learning, and cognitive science. Examples of specific research include but are not limited to:

1. How do we build a neural network based on a priori knowledge (i.e., a knowledge-based neural network)?
2. How do neural heuristics improve the current model for a particular problem (e.g., classification, planning, signal processing, and control)?
3. How does knowledge in conjunction with neural heuristics contribute to machine learning?
4. What is the emergent behavior of a hybrid system?
5. What are the fundamental issues behind the combined approach?

Program activities include keynote speeches, paper presentations, panel discussions, and tutorials.

***** Scholarships are offered to assist students in attending the symposium. Students who wish to apply for a scholarship should send their resumes and a statement of how their research is related to the symposium. *****

Symposium Chairs:
LiMin Fu, University of Florida, USA.
Chris Lacher, Florida State University, USA.

Program Committee:
Jim Anderson, Brown University, USA
Michael Arbib, University of Southern California, USA
Fevzi Belli, The University of Paderborn, Germany
Jim Bezdek, University of West Florida, USA
Bir Bhanu, University of California, USA
Su-Shing Chen, National Science Foundation, USA
Tharam Dillon, La Trobe University, Australia
Douglas Fisher, Vanderbilt University, USA
Paul Fishwick, University of Florida, USA
Stephen Gallant, HNC Inc., USA
Yoichi Hayashi, Ibaraki University, Japan
Susan I. Hruska, Florida State University, USA
Michel Klefstad-Sillonville, CCETT, France
David C. Kuncicky, Florida State University, USA
Joseph Principe, University of Florida, USA
Sylvian Ray, University of Illinois, USA
Armando F. Rocha, State University of Campinas, Brazil
Ron Sun, University of Alabama, USA

Keynote Speaker: Balakrishnan Chandrasekaran, Ohio State University

Schedule for Contributed Papers
----------------------------------------------------------------------
Paper Summaries Due: December 15, 1993
Notice of Acceptance Due: February 1, 1994
Camera Ready Papers Due: March 1, 1994

Extended paper summaries should be limited to four pages (single or double-spaced) and should include the title, names of the authors, the network and mailing addresses, and telephone number of the corresponding author. Important research results should be attached. Send four copies of extended paper summaries to

LiMin Fu
Dept. of CIS, 301 CSE
University of Florida
Gainesville, FL 32611 USA
(e-mail: fu at cis.ufl.edu; phone: 904-392-1485).

Students' applications for a scholarship should also be sent to the above address.
General information and registration materials can be obtained by writing to

Rob Francis
ISIKNH'94
DOCE/Conferences
2209 NW 13th Street, STE E
University of Florida
Gainesville, FL 32609-3476 USA
(Phone: 904-392-1701; fax: 904-392-6950)

---------------------------------------------------------------------
---------------------------------------------------------------------

If you intend to attend the symposium, you may submit the following information by returning this message:

NAME: _______________________________________
ADDRESS: ____________________________________
_____________________________________________
_____________________________________________
_____________________________________________
_____________________________________________
PHONE: ______________________________________
FAX: ________________________________________
E-MAIL: _____________________________________

----------------------------------------------------------------------

From jbower at smaug.bbb.caltech.edu Thu Sep 16 15:41:15 1993
From: jbower at smaug.bbb.caltech.edu (Jim Bower)
Date: Thu, 16 Sep 93 12:41:15 PDT
Subject: Accuracy
Message-ID: <9309161941.AA15967@smaug.bbb.caltech.edu>

With respect to:

>The winning system, devised by David MacKay, a physicist
>at the University of Cambridge's Cavendish Laboratory,
>predicted the building's energy to within 10%.
>The runner-up was accurate to 15%.

-----------------------------------------------------------

Congratulations to David. However, I wish (continue to wish, after all these years) that the same standard for accuracy applied to our descriptions of neural networks:

>"Thomas Bayes' theorem ... was used by an artificial neural
>network -- a computer system that mimics the way the brain
>works --"

Jim Bower

From rjw at ccs.neu.edu Fri Sep 17 09:51:08 1993
From: rjw at ccs.neu.edu (Ronald J Williams)
Date: Fri, 17 Sep 1993 09:51:08 -0400
Subject: TR available in neuroprose
Message-ID: <9309171351.AA22077@kenmore.ccs.neu.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/williams.policy-iter.ps.Z

**PLEASE DO NOT FORWARD TO OTHER GROUPS**

The following paper is now available in the neuroprose directory. It is 49 pages long. For those unable to obtain the file by ftp, hardcopies can be obtained by contacting: Diane Burke, College of Computer Science, 161 CN, Northeastern University, Boston, MA 02115, USA.

Analysis of Some Incremental Variants of Policy Iteration: First Steps Toward Understanding Actor-Critic Learning Systems

Northeastern University College of Computer Science Technical Report NU-CCS-93-11

Ronald J. Williams, College of Computer Science, Northeastern University, rjw at ccs.neu.edu
Leemon C. Baird, III, Wright Laboratory, Wright-Patterson Air Force Base, bairdlc at wL.wpafb.af.mil

Abstract: This paper studies algorithms based on an incremental dynamic programming abstraction of one of the key issues in understanding the behavior of actor-critic learning systems. The prime example of such a learning system is the ASE/ACE architecture introduced by Barto, Sutton, and Anderson (1983). Also related are Witten's adaptive controller (1977) and Holland's bucket brigade algorithm (1986). The key feature of such a system is the presence of separate adaptive components for action selection and state evaluation, and the key issue focused on here is the extent to which their joint adaptation is guaranteed to lead to optimal behavior in the limit.
In the incremental dynamic programming point of view taken here, these questions are formulated in terms of the use of separate data structures for the current best choice of policy and current best estimate of state values, with separate operations used to update each at individual states. Particular emphasis here is on the effect of complete asynchrony in the updating of these data structures across states. The main results are that, while convergence to optimal performance is not guaranteed in general, there are a number of situations in which such convergence is assured. Since the algorithms investigated represent a certain idealized abstraction of actor-critic learning systems, these results are not directly applicable to current versions of such learning systems but may be viewed instead as providing a useful first step toward more complete understanding of such systems. Another useful perspective on the algorithms analyzed here is that they represent a broad class of asynchronous dynamic programming procedures based on policy iteration.

To obtain a copy:

ftp cheops.cis.ohio-state.edu
login: anonymous
password: <your username>
cd pub/neuroprose
binary
get williams.policy-iter.ps.Z
quit

Then at your system:

uncompress williams.policy-iter.ps.Z
lpr williams.policy-iter.ps
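[Ed. note: to fix ideas, a minimal sketch - ours, not code from the TR - of the kind of system analyzed: separate policy and value arrays, each updated by its own operation at individual states in a completely asynchronous order, on a toy five-state chain MDP. In this small example the joint adaptation does converge to the optimal "always move right" policy; the TR's point is precisely that such convergence is not guaranteed in general.]

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy deterministic chain MDP: states 0..4, actions 0=left, 1=right.
  # Reaching state 4 yields reward 1; state 4 is absorbing.
  n_states, gamma = 5, 0.9

  def step(s, a):
      if s == 4:
          return s, 0.0                   # absorbing goal state
      s2 = max(s - 1, 0) if a == 0 else s + 1
      return s2, 1.0 if s2 == 4 else 0.0

  # Separate data structures, updated by separate operations.
  V = np.zeros(n_states)                  # current estimate of state values
  pi = np.zeros(n_states, dtype=int)      # current choice of policy

  def q(s, a):
      s2, r = step(s, a)
      return r + gamma * V[s2]

  for _ in range(2000):                   # fully asynchronous, sweep-free
      s = rng.integers(n_states)
      if rng.random() < 0.5:
          V[s] = q(s, pi[s])              # value update at one state
      else:
          pi[s] = np.argmax([q(s, 0), q(s, 1)])   # policy update at one state

  print(pi[:4], np.round(V, 2))           # optimal: move right everywhere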
From jbower at smaug.bbb.caltech.edu Fri Sep 17 12:55:00 1993
From: jbower at smaug.bbb.caltech.edu (Jim Bower)
Date: Fri, 17 Sep 93 09:55:00 PDT
Subject: Journal of Computational Neuroscience
Message-ID: <9309171655.AA17971@smaug.bbb.caltech.edu>

*******************************************************************
JOURNAL OF COMPUTATIONAL NEUROSCIENCE
*******************************************************************

From neurons to behavior: A Journal at the interface between experimental and theoretical neuroscience.

MANAGING EDITORS:
James M. Bower, California Institute of Technology
Eve Marder, Brandeis University
John Miller, University of California, Berkeley
John Rinzel, National Institutes of Health
Idan Segev, Hebrew University
Charles Wilson, University of Tennessee, Memphis

ACTION EDITORS:
L. F. Abbott, Brandeis University
Richard Andersen, Massachusetts Inst. of Technology
Alexander Borst, Max-Planck Inst., Tubingen
Robert E. Burke, NINDS, NIH
Catherine Carr, Univ. of Maryland, College Park
Rodney Douglas, Oxford University
G. Bard Ermentrout, University of Pittsburgh
Apostolos Georgopoulos, VA Medical Center, MN
Charles Gray, University of California, Davis
Christof Koch, California Institute of Technology
Gilles Laurent, California Institute of Technology
David McCormick, Yale University
Ken Miller, University of California, San Francisco
Steve Redman, Australian National University
Barry Richmond, NIMH, NIH
Terry Sejnowski, Salk Institute
Shihab Shamma, Univ. of Maryland, College Park
Karen Sigvardt, University of California, Davis
David Tank, Bell Labs
Roger Traub, IBM TJ Watson Research Center
Thelma Williams, University of London

JOURNAL DESCRIPTION:

The JOURNAL OF COMPUTATIONAL NEUROSCIENCE is intended to provide a forum for papers that fit the interface between computational and experimental work in the neurosciences. The JOURNAL OF COMPUTATIONAL NEUROSCIENCE will publish full-length original papers describing theoretical and experimental work relevant to computations in the brain and nervous system. Papers that combine theoretical and experimental work are especially encouraged. Primarily theoretical papers should deal with issues of obvious relevance to biological nervous systems. Experimental papers should have implications for the computational function of the nervous system, and may report results using any of a variety of approaches including anatomy, electrophysiology, biophysics, imaging, and molecular biology. Papers that report novel technologies of interest to researchers in computational neuroscience are also welcomed. It is anticipated that all levels of analysis from cognitive to single neuron will be represented in THE JOURNAL OF COMPUTATIONAL NEUROSCIENCE.

*****************************************************************
CALL FOR PAPERS
*****************************************************************

For Instructions to Authors, please contact:

Karen Cullen
Journal of Computational Neuroscience
Kluwer Academic Publishers
101 Philip Drive
Norwell, MA 02061
PH: 617 871 6300
FX: 617 878 0449
EM: Karen at world.std.com

*****************************************************************

ORDERING INFORMATION:

For complete ordering information and subscription rates, please contact:

KLUWER ACADEMIC PUBLISHERS
PH: 617 871 6600
FX: 617 871 6528
EM: Kluwer at world.std.com

JOURNAL OF COMPUTATIONAL NEUROSCIENCE ISSN: 0929-5313

*****************************************************************

From PIURI at IPMEL1.POLIMI.IT Fri Sep 17 18:57:59 1993
From: PIURI at IPMEL1.POLIMI.IT (PIURI@IPMEL1.POLIMI.IT)
Date: Fri, 17 Sep 1993 18:57:59 MET-DST
Subject: call for papers
Message-ID: <01H32D8LQCG29AMRQ4@icil64.cilea.it>

=============================================================================
1994 INTERNATIONAL CONFERENCE ON INSTRUMENTATION AND MEASUREMENTS
IMTC'94
Advanced Technologies in Instrumentation and Measurements
Hamamatsu, Shizuoka, Japan - 10-12 May 1994
=============================================================================

SPECIAL SESSION ON NEURAL INSTRUMENTS

CALL FOR PAPERS

Program Chair:
Kenzo Watanabe
Research Institute of Electronics
Shizuoka University
3-5-1 Johoku, Hamamatsu, 423 Japan
phone +81-53-471-1171 fax +81-53-474-0630

General Chair:
Robert Myers
3685 Motor Ave., Suite 240
Los Angeles, CA 90034-5750, USA
phone +1-310-287-1463 fax +1-310-286-1851

Sponsored by: IEEE Instrumentation and Measurement Society, and the Society of Instrument and Control Engineers, Japan.

In cooperation with: Institute of Electrical Engineers, Japan; Institute of Electronics, Information and Communication Engineers, Japan; Japan Society of Applied Physics; Japan Electric Measuring Instrument Manufacturers' Association.

The IMTC'94 Conference is the 10th edition of the annual conference organized by the IEEE Instrumentation and Measurement Society to provide a stimulating forum for practitioners and scientists working in areas related to any kind of measurements: theoretical aspects of measurements, instruments for measurements, and measurement processing.
Traditional topics are: Acoustics measurements; AI & fuzzy; Automotive & avionic instrumentation; Calibration, metrology & standards; Digital signal analysis & processing; Digital and mobile communications; LSI analysis, diagnosis & testing; Mixed analog & digital ASICs; Optic & fiber optic measurement; Process measurements; Sensors & transducers; System identification; Waveform analysis and measurements; A/D and D/A; Data acquisition; Antenna & EMI/EMC; Biomedical instruments; Computer-based measurements & software; Environment measurements; Microwave measurements; Nuclear & medical instruments; Pressure & temperature measurements; Quality & reliability; STM and imaging; Time and frequency measurements.

To support presentation and discussion of emergent technologies, a special session on Neural Instruments will be organized within IMTC'94. Main topics are neural technologies for measurements, applications of neural networks in measurements and instruments, design and implementation of neural solutions for instrument subsystems, neural subsystems for automatic control, and neural subsystems for signal processing.

Authors are invited to submit a one-page abstract (containing title, authors, affiliations, and the session name "Neural Instruments" in the upper right corner) and a cover page (containing title, authors, affiliations, contact author, full address of the contact author, telephone and fax number of the contact author, and the session name "Neural Instruments" in the upper right corner). Submissions must be received by the general chair (for authors from Europe and North America) or by the program chair (for authors from Asia and other areas) by October 1st, 1993. Fax submissions are accepted. An additional copy of the submission should be sent by e-mail or fax to the coordinator of the session on Neural Instruments (this copy does not substitute for the formal submission to the general chair or the program chair). Submission of a paper implies a willingness to attend the conference and to present the paper.

Notification of acceptance will be mailed by December 1st, 1993; camera-ready papers are due by February 1st, 1994. Authors of selected papers will also be invited to submit their papers for consideration for the special IMTC'94 issue of the IEEE Transactions on Instrumentation and Measurement.

For any additional information regarding the special session on Neural Instruments, contact the session coordinator.

Session Coordinator for "Neural Instruments":
Prof. Vincenzo PIURI
Department of Electronics and Information
Politecnico di Milano
piazza L. da Vinci 32
I-20133 Milano, Italy
phone no. +39-2-2399-3606, +39-2-2399-3623
fax no.
+39-2-2399-3411
e-mail piuri at ipmel1.polimi.it

=============================================================================

From avner at elect1.weizmann.ac.il Sun Sep 19 03:49:15 1993
From: avner at elect1.weizmann.ac.il (Priel Avner)
Date: Sun, 19 Sep 1993 07:49:15 GMT
Subject: No subject
Message-ID: <9309190749.AA43380@elect1.weizmann.ac.il>

From georg at ai.univie.ac.at Mon Sep 20 10:10:37 1993
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Mon, 20 Sep 1993 16:10:37 +0200
Subject: papers/neuroprose: Unifying MLP and RBFN
Message-ID: <199309201410.AA11302@chicago.ai.univie.ac.at>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/dorffner.csfn.ps.Z
          pub/neuroprose/dorffner.nn-clinical.ps.Z

The files dorffner.csfn.ps.Z and dorffner.nn-clinical.ps.Z are now available for copying from the Neuroprose repository:

-------------------------------------------------------------------------

dorffner.csfn.ps.Z:

A Unified Framework for MLPs and RBFNs: Introducing Conic Section Function Networks

Georg Dorffner
Austrian Research Inst. for Artificial Intelligence
and Dept. of Medical Cybernetics and AI, Univ. of Vienna
georg at ai.univie.ac.at

ABSTRACT: Multilayer Perceptrons (MLP, Werbos 1974, Rumelhart et al. 1986) and Radial Basis Function Networks (RBFN, Broomhead & Lowe 1988, Moody & Darken 1989) are probably the most widely used neural network models for practical applications. While the former belong to a group of ``classical'' neural networks (whose weighted sums are loosely inspired by biology), the latter have risen only recently from an analogy to regression theory (Broomhead & Lowe 1988). At first sight, the two models -- except for being multilayer feedforward networks -- do not seem to have much in common. On second thought, however, MLPs and RBFNs share a variety of features, making it worthwhile to view them in the same context and to compare them to each other with respect to their properties. Consequently, a few attempts at arriving at a unified picture of a class of feedforward networks -- with MLPs and RBFNs as members -- have been undertaken (Robinson et al. 1988, Maruyama et al. 1992, Dorffner 1992, 1993). Most of these attempts have centered around the observation that the function of a neural network unit can be divided into a propagation rule (``net input'') and an activation or transfer function. The dot product (``weighted sum'') and the Euclidean distance are special cases of propagation rules, whereas the sigmoid and Gaussian function are examples of activation functions. This paper introduces a novel neural network model based on a more general conic section function as propagation rule, containing hyperplane (straight line) and hypersphere (circle) as special cases, thus unifying the net inputs of MLPs and RBFNs with an easy-to-handle continuum in between. A new learning rule -- complementing the existing methods of gradient descent in weight space and initialization -- is introduced which enables the network to make a continuous decision between bounded and unbounded (infinite half-space) decision regions. The capabilities of CSFNs are illustrated with several examples and compared with existing approaches. CSFNs are viewed as a further step toward more efficient and optimal neural network solutions in practical applications.

length: 37 pages; submitted for publication
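[Ed. note: a quick illustration of the continuum idea in the abstract - our own reading, not code from the paper. The blend below is a deliberately crude linear interpolation between the two classic propagation rules; Dorffner's actual conic section function differs in detail.]

  import numpy as np

  def blended_net_input(x, w, b, c, lam):
      """Crude stand-in for a continuum between propagation rules:
      lam = 0 gives an MLP-style weighted sum (hyperplane boundary),
      lam = 1 gives an RBFN-style distance to a centre (hypersphere)."""
      return (1.0 - lam) * (np.dot(w, x) + b) - lam * np.linalg.norm(x - c)

  x = np.array([0.2, 0.7])
  w = np.array([1.0, -0.5]); b = 0.1      # hyperplane parameters
  c = np.array([0.0, 1.0])                # RBF centre

  for lam in (0.0, 0.5, 1.0):             # slide from MLP-like to RBFN-like
      print(lam, blended_net_input(x, w, b, c, lam))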
-------------------------------------------------------------------------

dorffner.nn-clinical.ps.Z:

On Using Feedforward Neural Networks for Clinical Diagnostic Tasks

Georg Dorffner
Austrian Research Inst. for Artificial Intelligence
and Dept. of Medical Cybernetics and AI, Univ. of Vienna
georg at ai.univie.ac.at

and

Gerold Porenta
Dept. of Cardiology, Clinic for Internal Medicine II
University of Vienna

ABSTRACT: In this paper we present an extensive comparison between several feedforward neural network types in the context of a clinical diagnostic task, namely the detection of coronary artery disease (CAD) using planar thallium-201 dipyridamole stress-redistribution scintigrams. We introduce results from well-known (e.g. multilayer perceptrons or MLPs, and radial basis function networks or RBFNs) as well as novel neural network techniques (e.g. conic section function networks) which demonstrate promising new routes for future applications of neural networks in medicine and elsewhere. In particular we show that initializations of MLPs and conic section function networks -- which can learn to behave more like an MLP or more like an RBFN -- can lead to much improved results in rather difficult diagnostic tasks.

Keywords: Feedforward neural networks, neural network initialization, multilayer perceptrons, radial basis function networks, conic section function networks; thallium scintigraphy, angiography, clinical diagnosis and decision making.

length: 21 pages; submitted for publication

-----------------------------------------------------------------------

To obtain a copy:

ftp cheops.cis.ohio-state.edu
login: anonymous
password: <your username>
cd pub/neuroprose
binary
get dorffner.csfn.ps.Z
AND/OR
get dorffner.nn-clinical.ps.Z
quit

Then at your system:

uncompress dorffner.*

to obtain (a) postscript file(s). Many thanks to Jordan Pollack for the maintenance and support of this archive.

-----------------------------------------------------------------------

Both papers are also available through anonymous ftp from ftp.ai.univie.ac.at in the directory 'papers' as oefai-tr-93-23.ps.Z (== dorffner.nn-clinical) and oefai-tr-93-25.ps.Z (== dorffner.csfn).

Hardcopies are available (only if you don't have access to ftp!) by sending email to sec at ai.univie.ac.at and asking for technical report oefai-tr-93-23 or oefai-tr-93-25 (see previous paragraph).

From srx014 at cck.coventry.ac.uk Mon Sep 20 15:16:30 1993
From: srx014 at cck.coventry.ac.uk (CRReeves)
Date: Mon, 20 Sep 93 15:16:30 WET DST
Subject: Research post available
Message-ID: <6664.9309201416@cck.coventry.ac.uk>

The following University Research Studentship is available, starting as soon as possible:

"Application of neural networks to the inference of homologous DNA sequences from related genomes"

This project involves the application of neural network techniques in plant genetics. Primary DNA sequence data are being accumulated for a wide range of organisms, and the role of model species in plant genetics is crucial in expanding our knowledge of the fundamental mechanisms of plant development. The purpose of this project is the evaluation of neurocomputing methods in the prediction of gene sequences for a variety of agricultural species.
The work will be carried out in the School of Mathematical and Information Sciences at Coventry University (where there is a variety of ongoing research in the applications of neural networks), in collaboration with Horticultural Research International at Wellesbourne, Warwickshire, where there is access to large databases of genetic characteristics. Applicants do not need a specialist background in either genetics or neural computation; preferably, they should have a background in mathematics and a competence in at least one high-level computing language (C, Pascal, etc.).

Please send CVs by email or by post to

 ___________________________________________
| Nigel Steele                              |
| Chair, Mathematics Division               |
| School of Mathematical and Information    |
| Sciences                                  |
| Coventry University                       |
| Priory St                                 |
| Coventry CV1 5FB                          |
| tel :+44 (0)203 838568                    |
| fax :+44 (0)203 838585                    |
| Email: nsteele at uk.ac.cov                |
|___________________________________________|

[Message sent by Colin Reeves (CRReeves at uk.ac.cov)]

From garza at mcc.com Mon Sep 20 15:17:36 1993
From: garza at mcc.com (NN.JOB)
Date: Mon, 20 Sep 93 14:17:36 CDT
Subject: Position announcement
Message-ID: <9309201917.AA22886@niobium.mcc.com>

******************* Position Announcement ******************

MCC (Microelectronics & Computer Technology Corp.) is one of the country's most broad-based industry consortia. MCC membership of almost 100 companies/organizations includes a diverse group of electronics, computer, aerospace, manufacturing, and other advanced technology organizations.

MCC has an immediate opening for a Member of Technical Staff (MTS) or Senior MTS in its Neural Network Projects. Job responsibilities will be to conduct applied research in one or more of the following three areas (listed in order of importance): intelligent financial systems, OCR, and spectral (image/signal) processing applications.

Required skills:
  Neural net research & development experience
  PhD in relevant area, preferably in EE, physics, or applied mathematics
  Strong quantitative skills
  C programming, UNIX background

Preferred skills:
  Experience in financial applications and/or time series analysis
  Demonstrated project leadership
  Strong communication skills

Please forward your resume and salary history to:

MCC
ATTN: Neural Network Job
3500 W. Balcones Center Drive
Austin, TX 78759
email: nn.job at mcc.com

From watrous at learning.siemens.com Mon Sep 20 15:06:34 1993
From: watrous at learning.siemens.com (Raymond L Watrous)
Date: Mon, 20 Sep 93 15:06:34 EDT
Subject: Proceedings of the 1993 NNSP Workshop
Message-ID: <9309201906.AA02937@tiercel.siemens.com>

The 1993 IEEE Workshop on Neural Networks for Signal Processing was held September 6 - September 9, 1993 at the Maritime Institute of Technology and Graduate Studies, Linthicum Heights, Maryland, USA.
Copies of the 593-page, hardbound Proceedings of the workshop may be obtained for $50 (US, check or money order, please) postpaid from:

Raymond Watrous, Financial Chair
1993 IEEE Workshop on Neural Networks for Signal Processing
c/o Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
(609) 734-6596
(609) 734-6565 (FAX)

From esann at dice.ucl.ac.be Mon Sep 20 15:58:40 1993
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Mon, 20 Sep 93 21:58:40 +0200
Subject: ESANN'94: European Symposium on ANNs
Message-ID: <9309201958.AA09530@ns1.dice.ucl.ac.be>

____________________________________________________________________
____________________________________________________________________

European Symposium on Artificial Neural Networks
Brussels - April 20-21-22, 1994

First announcement and call for papers
____________________________________________________________________
____________________________________________________________________

-----------------------
Scope of the conference
-----------------------

ESANN'93 was held in Brussels in April 1993. It gathered more than 80 scientists, from about 15 countries, who wanted to learn more about the latest developments in the theory of neural networks.

The European Symposium on Artificial Neural Networks will be organized for the second time in April 1994, and, as in 1993, will focus on the fundamental aspects of artificial neural network research. Today, thousands of researchers work in this field; they try to develop new algorithms, to mimic properties found in natural networks, to develop parallel computers based on these properties, and to use artificial neural networks in new application areas. But the field is new and has expanded drastically in about ten years; this has led to a lack of theoretical work on the subject, and also to a lack of comparisons between new methods and more classical ones.

The purpose of ESANN is to cover the theoretical and fundamental aspects of neural networks; the symposium is intended to give participants an up-to-date and comprehensive view of these aspects, through the presentation of new results and new developments, through tutorial papers covering the relations between neural networks and classical methods of computing, and through round tables confronting the views of specialists and non-specialists of the field.

The program committee of ESANN'94 welcomes papers in the following aspects of artificial neural networks:

  theory
  models and architectures
  mathematics
  learning algorithms
  biologically plausible artificial networks
  neurobiological systems
  adaptive behavior
  signal processing
  statistics
  self-organization
  evolutive learning

Accepted papers will cover new results in one or several of these aspects or will be of a tutorial nature. Papers emphasizing the relations between artificial neural networks and classical methods of information processing, signal processing or statistics are encouraged.

----------------------
Call for contributions
----------------------

Prospective authors are invited to submit six originals of their contribution before November 26, 1993. The working language of the conference (including the proceedings) is English. Papers should not exceed six A4 pages (including figures and references). The printing area will be 12.2 x 19.3 cm (centered on the A4 page); left, right, top and bottom margins will thus respectively be 4.4, 4.4, 5.2 and 5.2 cm.
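(As a quick check, these margins follow directly from the stated printing area and the standard A4 page size of 21.0 x 29.7 cm:

  (21.0 - 12.2) / 2 = 4.4 cm   left and right margins
  (29.7 - 19.3) / 2 = 5.2 cm   top and bottom margins.)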
10-point Times font will be used for the main text; headings will be in bold characters (but not underlined), and will be separated from the main text by two blank lines before and one after. Manuscripts prepared in this format will be reproduced in the same size in the book.

The first page will begin with a heading, indented 1 cm left and right with regard to the main text (the heading will thus have left and right margins of 5.4 cm). The heading will contain the title (Times 14 point, bold, centered), one blank line, the author(s) name(s) (Times 10 point, centered), one blank line, the affiliation (Times 9 point, centered), one blank line, and the abstract (Times 9 point, justified, beginning with the word "Abstract." in bold face).

Originals of the figures will be pasted into the manuscript and centered between the margins. The lettering of the figures should be in 10-point Times font size. Figures should be numbered. The legends should also be centered between the margins and be written in 9-point Times font size.

The pages of the manuscript will not be numbered (numbering will be decided by the editor).

A separate page (not included in the manuscript) will indicate:
  the title of the manuscript
  author(s) name(s)
  the complete address (including phone & fax numbers and E-mail) of the corresponding author
  a list of five keywords or topics

On the same page, the authors will copy and sign the following paragraph:

"in case of acceptance of the paper for presentation at ESANN 94:
- at least one of the authors will register for the conference and will present the paper
- the author(s) give up their rights over the paper to the organizers of ESANN 94, for the proceedings and any publication that could directly be generated by the conference
- if the paper does not match the format requirements for the proceedings, the author(s) will send a revised version within two weeks of the notification of acceptance."

Contributions must be sent to the conference secretariat. Prospective authors are invited to ask for examples of camera-ready contributions by writing to the same address.

-------------
Local details
-------------

The conference will be held in the center of Brussels (Belgium). Close to most great European cities, Brussels is exceptionally well served by closely-knit motorway and railway systems, and an international airport. Besides being an artistic and cultural center of attraction, Brussels is also renowned for its countless typical cafés, from the most unassuming to the most prestigious. Belgian food is typical and famous, and the night life in Brussels is considerable.

---------
Deadlines
---------

Submission of papers           November 26, 1993
Notification of acceptance     January 17, 1994
Symposium                      April 20-22, 1994

------
Grants
------

A limited number of grants (registration fees and economic accommodation) will be given to young scientists coming from the European Community (Human Capital and Mobility program, European Community - DG XII). Grants will also probably be available for scientists from Central and Eastern European countries. Please write to the conference secretariat to get an application form for these grants.

----------------------
Conference secretariat
----------------------

Dr.
Michel Verleysen
D Facto Conference Services
45 rue Masui
B - 1210 Brussels (Belgium)
phone: + 32 2 245 43 63
Fax: + 32 2 245 46 94
E-mail: esann at dice.ucl.ac.be

----------
Reply form
----------

If your contact address is incomplete or has been changed recently, or if you know a colleague who might be interested in ESANN'94, please send this form, with your or his/her name and address, to the conference secretariat:

''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Name: .....................................................
First Name: ...............................................
University or Company: ....................................
...........................................................
Address: ..................................................
...........................................................
...........................................................
ZIP: ......................................................
Town: .....................................................
Country: ..................................................
Tel: ......................................................
Fax: ......................................................
E-mail: ...................................................
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

------------------
Steering committee
------------------

François Blayo          EERIE, Nîmes (F)
Marie Cottrell          Univ. Paris I (F)
Nicolas Franceschini    CNRS Marseille (F)
Jeanny Hérault          INPG Grenoble (F)
Michel Verleysen        UCL Louvain-la-Neuve (B)

--------------------
Scientific committee
--------------------

Luis Almeida *          INESC - Lisboa (P)
Jorge Barreto           UCL Louvain-en-Woluwe (B)
Hervé Bourlard          L. & H. Speech Products (B)
Joan Cabestany          Univ. Polit. de Catalunya (E)
Dave Cliff              University of Sussex (UK)
Pierre Comon            Thomson-Sintra Sophia (F)
Holk Cruse              Universität Bielefeld (D)
Dante Del Corso         Politecnico di Torino (I)
Marc Duranton           Philips / LEP (F)
Jean-Claude Fort        Université Nancy I (F)
Karl Goser              Universität Dortmund (D)
Martin Hasler           EPFL Lausanne (CH)
Philip Husbands         University of Sussex (UK)
Christian Jutten        INPG Grenoble (F)
Petr Lansky             Acad. of Science of the Czech Rep. (CZ)
Jean-Didier Legat       UCL Louvain-la-Neuve (B)
Jean-Arcady Meyer       École Normale Supérieure - Paris (F)
Erkki Oja               Helsinki University of Technology (SF)
Guy Orban               KU Leuven (B)
Gilles Pagès *          Université Paris I (F)
Alberto Prieto          Universidad de Granada (E)
Pierre Puget            LETI Grenoble (F)
Ronan Reilly            University College Dublin (IRE)
Tamas Roska             Hungarian Academy of Science (H)
Jean-Pierre Rospars     INRA Versailles (F)
André Roucoux           UCL Louvain-en-Woluwe (B)
John Stonham            Brunel University (UK)
Lionel Tarassenko       University of Oxford (UK)
John Taylor             King's College London (UK)
Vincent Torre           Università di Genova (I)
Claude Touzet           EERIE Nîmes (F)
Joos Vandewalle         KUL Leuven (B)
Eric Vittoz             CSEM Neuchâtel (CH)
Christian Wellekens     Eurecom Sophia-Antipolis (F)

(* tentatively)

_____________________________
Michel Verleysen
D facto conference services
45 rue Masui
1210 Brussels
Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
E-mail: esann at dice.ucl.ac.be
_____________________________

From heiniw at sun1.eeb.ele.tue.nl Mon Sep 20 05:45:43 1993
From: heiniw at sun1.eeb.ele.tue.nl (Heini Withagen)
Date: Mon, 20 Sep 93 11:45:43 +0200
Subject: Neural Network reports available
Message-ID: <9309200945.AA08526@sun1.eeb.ele.tue.nl>

The following neural network related reports are available from ftp.urc.tue.nl in the /neural directory.
Send any questions to: heiniw at eeb.ele.tue.nl

neural_interface.ps.gz:

INTERFACING NEURAL NETWORK CHIPS WITH A PERSONAL COMPUTER
J.J.M. van Teeffelen

Abstract: The Electronic Circuit Design Group at the Eindhoven University of Technology is currently implementing several neural networks with a multi-layered perceptron architecture, together with their learning algorithms, on VLSI chips. In order to test these chips and to use them in an application, they will be connected to a personal computer by means of an interface. This interface, which has to be as versatile as possible, meaning that it must be able to connect all kinds of neural network chips, can be realized either by making use of commercially available interfaces or by designing a custom interface with the help of off-the-shelf components. Two interfaces will be discussed, one for the rather slow AT-bus and one for the high speed VESA local bus. Although the commercially available interfaces are not as versatile as desired, and the prices may seem rather high, they turn out to be the best way to realize the interface at the moment. They are guaranteed to work and can be used immediately. The discussed interfaces for the AT-bus and the VESA local bus still have to be tested and implemented on a printed circuit board.

==============================================================================

weight_perturbation.tar.gz:

A STUDY OF THE WEIGHT PERTURBATION ALGORITHM USED IN NEURAL NETWORKS
R.E. ten Kate

Abstract: This thesis studies different aspects of the Weight Perturbation (WP) algorithm used to train neural networks. After a general introduction to neural networks and their algorithms, the WP algorithm is described. A theoretical study is made of the effects of the applied perturbation on the error performance of the algorithm, and of the influence of the learning rate on the convergence speed. The effects of these two algorithm parameters have been simulated in an ideal Multilayer Perceptron using various benchmark problems. When the WP algorithm is implemented on an analog neural network chip, several hardware limitations are present which influence the performance of the WP algorithm: weight quantization, weight decay, and non-ideal multipliers and neurons. The influence of these non-idealities on the algorithm's performance has been studied theoretically. Simulations of these effects have been done using parameters predicted by SPICE simulations of the hardware. Several proposals are made to reduce the effects of the hardware non-idealities. Two proposals have been studied to increase the speed of the WP algorithm.

Heini Withagen
Dep. of Elec. Engineering EH 9.29
Eindhoven University of Technology
P.O. Box 513
5600 MB Eindhoven
The Netherlands
Phone: 31-40472366
Fax: 31-40455674
E-mail: heiniw at eeb.ele.tue.nl

========================================================================
First Law of Bicycling: No matter which way you ride, it's uphill and against the wind.

From ahg at eng.cam.ac.uk Tue Sep 21 15:58:43 1993
From: ahg at eng.cam.ac.uk (ahg@eng.cam.ac.uk)
Date: Tue, 21 Sep 93 15:58:43 BST
Subject: Technical report available
Message-ID: <15715.9309211458@tulip.eng.cam.ac.uk>

The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department.
PROBLEM SOLVING WITH OPTIMIZATION NETWORKS

PhD Thesis
Andrew Gee

Technical Report CUED/F-INFENG/TR 150
Cambridge University Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England

Summary

Combinatorial optimization problems, which are characterized by a discrete set as opposed to a continuum of possible solutions, occur in many areas of engineering, science and management. Such problems have so far resisted efficient, exact solution, despite the attention of many capable researchers over the last few decades. It is not surprising, therefore, that most practical solution algorithms abandon the goal of finding the optimal solution, and instead attempt to find an approximate, useful solution in a reasonable amount of time. A recent approach makes use of highly interconnected networks of simple processing elements, which can be programmed to compute approximate solutions to a variety of difficult problems. When properly implemented in suitable parallel hardware, these `optimization networks' are capable of extremely rapid solution rates, thereby lending themselves to real-time applications. This thesis takes a detailed look at problem solving with optimization networks. Three important questions are identified, concerning the applicability of optimization networks to general problems, the convergence properties of the networks, and the likely quality of the networks' solutions. These questions are subsequently answered using a combination of rigorous analysis and simple, illustrative examples. The investigation leads to a clearer understanding of the networks' capabilities and shortcomings, confirmed by extensive experiments. It is concluded that optimization networks are not as attractive as they might have previously seemed, since they can be successfully applied to only a limited number of problems exhibiting special, amenable properties.

************************ How to obtain a copy ************************

Via FTP:

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get gee_tr150.ps.Z
ftp> quit
unix> uncompress gee_tr150.ps.Z
unix> lpr gee_tr150.ps (or however you print PostScript)

NB. This is a large file, expanding to 3.8 MBytes of uncompressed Postscript, and printing on 144 pages.

From lacher at NU.CS.FSU.EDU Tue Sep 21 13:47:13 1993
From: lacher at NU.CS.FSU.EDU (R.C. Lacher)
Date: Tue, 21 Sep 93 13:47:13 EDT
Subject: Superchair
Message-ID: <9309211747.AA15777@lambda.cs.fsu.edu>

I would like to call the following announcement to the attention of the connectionist research community. Note that the position is rather wide open as to field or home department. In particular, eminent scientists in various connectionist fields are encouraged to apply or to be nominated. Biology, Computer Science, Mathematics, Physics, Psychology, and Statistics are all departments in the college.

  __o    __o    __o    __o    __o
 -\<,   -\<,   -\<,   -\<,   -\<,                  Chris Lacher
_ _ _ _ _ _ _ _ O/_O _ O/_O _ O/_O _ O/_O _ O/_O _ _ _ _

Department of Computer Science     Phone: (904) 644-4029
Florida State University           Fax: (904) 644-0058
Tallahassee, FL 32306-4019         Email: lacher at cs.fsu.edu

===================================================================

The Thinking Machines Corporation Eminent Scholar Chair
in High Performance Computing

Applications and nominations are invited for the TMC Eminent Scholar Chair in High Performance Computing at Florida State University.
This position is supported, in part, by a $4 million endowment and will be filled at a senior level in the College of Arts and Sciences. Applicants and nominees should have a distinguished academic or research record in one or more fields closely associated with modern high performance computing. These fields include applied mathematics, applied computer science, and computational science in one or more scientific or engineering disciplines. The appointment will be in one or more academic departments and in the Supercomputer Computations Research Institute (SCRI). The primary responsibilities of the successful candidate will be to establish new research and education directions in high performance computing that complement the existing strong programs in SCRI, the National High Magnetic Field Laboratory, the Structural Biology Institute, the Global Climate Research Institute, and the academic departments. The Chair will be closely involved with the addition of several junior level academic appointments in connection with this new initiative in high performance computing, in order to establish the strongest possible group effort.

The deadline for applications is December 17, 1993. Applications and nominations should be sent to: HPC Chair Selection Committee, Mesoscale Air-Sea Interaction Group, Florida State University 32306-3041. Florida State University is an Equal Opportunity/Equal Access/Affirmative Action Employer. Women and minorities are encouraged to apply.

From ernst at cns.caltech.edu Tue Sep 21 16:06:36 1993
From: ernst at cns.caltech.edu (Ernst Niebur)
Date: Tue, 21 Sep 93 13:06:36 PDT
Subject: NIPS 93 Announcement: Workshop on Selective Attention
Message-ID: <9309212006.AA11904@isis.klab.caltech.edu>

Fellow Connectionists:

We would like to announce the final program of a workshop on visual selective attention to be held at this year's NIPS conference. The conference will be held from Nov. 29 to Dec. 2 in Denver, CO; the workshop will be held Dec. 3 and 4 "at a nearby ski area."

For NIPS conference and workshop registration info, please write to:
NIPS*93 Registration / NIPS Foundation / PO Box 60035 / Pasadena, CA 91116-6035 USA

For questions concerning this workshop, please contact either of the organizers by e-mail.

--Ernst Niebur

NIPS*93 Workshop: Neurobiology, Psychophysics, and Computational
=================  Models of Visual Attention

Intended Audience: Experimentalists, modelers and others interested in
==================  visual attention and high-level vision

Organizers:
===========
Ernst Niebur                 Bruno Olshausen
ernst at caltech.edu         bruno at lgn.wustl.edu

Program:
========

In any physical computational system, processing resources are limited, which inevitably leads to bottlenecks in the processing of sensory information. Nowhere is this more evident than in the primate visual system, where the massive amount of information provided by the optic nerve far exceeds what the brain is capable of fully processing and assimilating into conscious experience. Visual attention thus serves as a mechanism for selecting certain portions of the input to be processed preferentially, shifting the processing focus from one location to another in a serial fashion. The study of visual attention is integral to our understanding of higher visual function, and it may also be of practical benefit to machine vision. What we know of visual attention has been learned from a combination of psychophysical, neurophysiological, and computational approaches.
Psychophysical studies have revealed the behavioral consequences of visual attention by measuring either a speed-up in the observer's reaction time or an improvement in discrimination performance when the observer is attending to a task. Neurophysiological studies, on the other hand, have attempted to reveal the neural mechanisms and brain areas involved in attention by measuring the modulation in single cell firing rate or in the activity in a part of the brain as a function of the attentional state of the subject. A number of computational models based on these studies have been proposed to address the question of how attention eases the computational burdens faced by the brain in pattern recognition or other visual tasks, and how attention is controlled and expressed at the neuronal level. The goal of this workshop will be to bring together experts from each of these fields to discuss the latest advances in their approaches to studying visual attention. Half the available time has been reserved for informal presentations and the other half for discussion.

Morning session:

7:30-8:00 Introduction/overview
  "Covert Visual Attention: The Phenomenon" (Ernst Niebur, Caltech)
  (7:50-8:00: Discussion)
8:00-9:00 Neurobiology
  8:00 "Effects of Focal Attention on Receptive Field Profiles in Area V4" (Ed Connor, Washington University)
  (8:20-8:30: Discussion)
  8:30 "Neurophysiological evidence of scene segmentation by feature selective, parallel attentive mechanisms" (Brad Motter, VA Medical Center/SUNY-HSC, Syracuse)
  (8:50-9:00: Discussion)
9:00-9:30 General Discussion

Afternoon session:

4:30-5:00 Psychophysics
  "Attention and salience: alternative mechanisms of visual selection" (Jochen Braun, Caltech)
  (4:50-5:00: Discussion)
5:00-6:00 Computational models
  5:00 "Models for the neural implementation of attention based on the temporal structure of neural signals" (Ernst Niebur, Caltech)
  (5:20-5:30: Discussion)
  5:30 "Dynamic routing circuits for visual attention" (Bruno Olshausen, Washington University/Caltech)
  (5:50-6:00: Discussion)
6:00-6:30 General discussion

From B.DASGUPTA at fs3.mbs.ac.uk Wed Sep 22 09:25:11 1993
From: B.DASGUPTA at fs3.mbs.ac.uk (BHASKAR DASGUPTA ALIAS BD)
Date: 22 Sep 93 09:25:11 BST
Subject: Statistical V/s. Neural Networks in Forecasting
Message-ID: <4084C4041C9@fs3.mbs.ac.uk>

Hi!

I am searching for references on the applications of neural networks to forecasting time series, univariate as well as multivariate. I have managed to locate the references listed below. Does anyone else have any more references for this area?

Hruschka, H, 1993, "Determining market response functions by neural network modeling: A comparison to econometric techniques", European Journal of Operational Research, 66, 1, Apr., 27-35.

Lapedes A., & Farber R., 1987, "Nonlinear Signal Processing using Neural Networks: Prediction and System Modeling", Los Alamos National Lab Technical Report LA-UR-87-2261, July.

Marquez L., Hill T, Worthley R, & Remus W, 1991, "Neural Network Models as an Alternative to Regression", Proceedings of the IEEE 24th Annual Hawaii International Conference on System Sciences, Vol. VI, 129-135.

Sharda R., & Patil R, 1990, "Neural Networks as Forecasting Experts: An Empirical Test", International Joint Conference on Neural Networks, IJCNN-WASH-D.C., Vol. II, January 15-19, 491-494.

______, 1992, Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real-World Performance, ed. Trippi, R.R., & Turban, E., Probus Publishing Co., Cambridge, UK, 451-464.
Tang Z, Almeida C. de, & Fishwick P.A., 1990, "Time Series Forecasting Using Neural Networks vs. Box-Jenkins Methodology", International Workshop of Neural Networks, Auburn, February.

Wu, F. Y., & Yen, K.K, 1992, "Applications of Neural Networks in Regression Analysis", Computers and Industrial Engineering, 23, 1-4, Nov., 93-95.

Youngohc, Y, Swales, G. Jr., & Margavio, T.M., 1993, "A comparison of discriminant analysis versus artificial neural networks", Journal of the Operational Research Society, 44, 1, Jan., 51-60.

The second question is: has anyone come across any 'research' paper concerning the applications of GAs to financial market modelling? I keep seeing only newspaper articles and general interest articles on this subject.

I know that the moderators of this list do not like anyone sending requests for references without a substantial initial bibliography first, but unfortunately I am not able to do so, since I could not locate them in the first place. I shall post a compilation of the received references back to this list.

Thanks in advance.

Bhaskar Dasgupta
Manchester Business School
Booth Street West
Manchester M15 6PB
UK.
Phone::+44-61-275-6547
fax:: +44-61-273-7732

============================================================
Chaos is the law of nature
Order is the dream of man
============================================================

From gomes at ICSI.Berkeley.EDU Wed Sep 22 05:38:16 1993
From: gomes at ICSI.Berkeley.EDU (Benedict A. Gomes)
Date: Wed, 22 Sep 1993 02:38:16 -0700 (PDT)
Subject: Thanks for the references
Message-ID: <9309220938.AA24172@icsib6.ICSI.Berkeley.EDU>

Thanks to those who responded to my request for references on mapping structured nets to parallel machines. Most of the references are from D.R. Mani's dissertation proposal. Since the list is somewhat long, I've set it up for anonymous ftp from pub/ai/gomes/nn-mapping.bib at icsi.berkeley.edu. That directory also contains a partial summary of the references in nn-mapping-summary. I will update the summary as I find and read more of the papers. If you come to know of work that is not mentioned in the list, I would appreciate that information and will keep the references up-to-date.

Abbreviated transcript for getting the files:

gomes:~> ftp icsi.berkeley.edu
Connected to icsi.berkeley.edu.
Name (icsi.berkeley.edu:gomes): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
ftp> cd pub/ai/gomes
ftp> get nn-mapping.bib
..
ftp> get nn-mapping-summary
..
ftp> bye
221 Goodbye.

Benedict Gomes (gomes at icsi.berkeley.edu)

From bard at mthbard.math.pitt.edu Thu Sep 23 10:41:48 1993
From: bard at mthbard.math.pitt.edu (Bard Ermentrout)
Date: Thu, 23 Sep 93 10:41:48 EDT
Subject: softwre
Message-ID: <9309231441.AA22089@mthbard.math.pitt.edu>

F R E E   S I M U L A T I O N   S O F T W A R E

I thought perhaps that modellers etc. might be interested to know of the availability of software for the analysis and simulation of dynamical and probabilistic phenomena. xpp is available free via anonymous ftp. It solves integro-differential equations, delay equations, iterative equations, all combined with probabilistic models. Postscript output is supported. A variety of numerical methods are employed so that the user can generally be sure that the solutions are accurate. Examples are connectionist-type neural nets, biophysical models, models with memory, and models of cells with random inputs or with random transitions.
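To give a concrete flavour of the kind of system meant here, the C fragment below integrates a two-unit additive network by forward Euler. It is a hand-rolled illustration only, with made-up weights and inputs -- it is not xpp input, and xpp itself needs no such coding, since it reads the defining equations directly as algebraic expressions:

#include <stdio.h>
#include <math.h>

/* Two-unit additive net, du_i/dt = -u_i + sum_j w[i][j]*tanh(u_j) + I_i,
 * integrated by forward Euler.  All numbers are made-up illustrative
 * values; a tool like xpp accepts equations of exactly this kind.     */
int main(void)
{
    double u[2] = { 0.1, -0.1 };                  /* unit activations  */
    double w[2][2] = { { 1.2, -0.8 },             /* connection weights */
                       { 0.9,  0.5 } };
    double I[2] = { 0.3, 0.0 };                   /* external inputs   */
    double dt = 0.01;                             /* Euler step size   */
    int step, i, j;

    for (step = 0; step <= 1000; step++) {
        double du[2];
        for (i = 0; i < 2; i++) {
            du[i] = -u[i] + I[i];
            for (j = 0; j < 2; j++)
                du[i] += w[i][j] * tanh(u[j]);
        }
        for (i = 0; i < 2; i++)
            u[i] += dt * du[i];
        if (step % 200 == 0)
            printf("t = %5.2f   u1 = %8.4f   u2 = %8.4f\n",
                   step * dt, u[0], u[1]);
    }
    return 0;
}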
A graphical interface using X windows as well as numerous plotting options are provided. The requirements are a C compiler and an OS capable of running X11. The software has been successfully compiled on DEC, HP, SUN, IBM and NeXT workstations as well as on a PC running Linux. Once it is compiled, no more compilation is necessary, as the program can read algebraic expressions and interpret them in order to solve them. The program has been used in various guises for the last 5 years by a variety of mathematicians, physicists, and biologists. To get it, follow the instructions below:

------------Installing XPP1.6--------------------------------

XPP is pretty simple to install, although you might have to add a line here and there to the Makefile. You can get it from mthsn4.math.pitt.edu (130.49.12.1). Here is how:

ftp 130.49.12.1
cd /pub
bin
get xpp1.6.tar.Z
quit
uncompress xpp1.6.tar.Z
tar xf xpp1.6.tar
make -k

If you get errors in the compilation, it is likely to be one of the following:

1) gcc not found, in which case you should edit the Makefile so that it says CC= cc

2) Can't find X include files. Then edit the line that says CFLAGS= .... by adding -I<dir>, where <dir> is where the include files are for X, e.g. -I/usr/X11/include

3) Can't find X libraries. Then add a line LDFLAGS= -L<dir> right after the CFLAGS= line, where <dir> is where to find the X11 libraries, then change this line:
$(CC) -o xpp $(OBJECTS) $(LIBS)
to this line:
$(CC) -o xpp $(OBJECTS) $(LDFLAGS) $(LIBS)

That should do it!! If it still doesn't compile, then you should ask your sysadmin about the proper paths. Finally, some compilers have trouble with the GEAR algorithm if they are optimized, so you should remove the optimization flags, i.e. replace CFLAGS= -O2 -I<dir> with CFLAGS= -I<dir>, delete all the .o files, and recompile.

Good luck!

Bard Ermentrout

Send comments and bug reports to bard at mthbard.math.pitt.edu

From Matthew.White at cs.cmu.edu Fri Sep 24 03:15:48 1993
From: Matthew.White at cs.cmu.edu (Matthew.White@cs.cmu.edu)
Date: Fri, 24 Sep 1993 03:15:48 -0400 (EDT)
Subject: CMU Learning Benchmark Database Updated
Message-ID:

The CMU Learning Benchmark Archive has been updated. As you may know, in the past, all the data sets in this collection have been in varying formats, requiring that code be written to parse each one. This was a waste of everybody's time. These old data sets have been replaced with data sets in a standardized format. Now, all benchmarks consist of a file detailing the benchmark and another file that is either a data set (.data) or a program to generate the appropriate data set (.c); a toy example of such a generator appears after the code list below.

Data sets currently available are:

nettalk      Pronunciation of English words.
parity       N-input parity.
protein      Prediction of secondary structure of proteins.
sonar        Classification of sonar signals.
two-spirals  Distinction of a twin spiral pattern.
vowel        Speaker independent recognition of vowels.
xor          Traditional xor.

To accompany this new data file format is a file describing the format and a C library to parse it. In addition, the simulator (C version) for Cascade-Correlation has been rewritten to use the new file format. Both the parsing code and the cascade correlation code are distributed as compressed shell archives and should compile with any ANSI/ISO compatible C compiler.

Code currently available:

nevprop1.16.shar  A user friendly version of quickprop.
cascor1a.shar     The re-engineered version of the Cascade Correlation algorithm.
parse1.shar       C code for the parsing algorithm to the new data set format.
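As an aside, N-input parity shows why the generator-program convention makes sense: the data set is trivial to generate but grows exponentially with N when stored. The C sketch below enumerates all patterns and prints inputs followed by the target; the output layout is illustrative only and does not reproduce the documented CMU data file format, which the archive's real generators emit:

#include <stdio.h>

/* Enumerate all 2^n input patterns of n-input parity and print
 * "inputs => target", one pattern per line.  Illustration only;
 * not in the CMU benchmark file format.                          */
int main(void)
{
    int n = 4;                 /* number of inputs */
    unsigned long p;
    int bit, ones;

    for (p = 0; p < (1UL << n); p++) {
        ones = 0;
        for (bit = n - 1; bit >= 0; bit--) {
            int b = (int)((p >> bit) & 1UL);
            ones += b;
            printf("%d ", b);
        }
        printf("=> %d\n", ones % 2);   /* target: 1 iff odd number of 1s */
    }
    return 0;
}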
Data sets and code are available via anonymous FTP. Instructions follow. If you have difficulties with either the data sets or the programs, please send mail to: neural-bench at cs.cmu.edu. Any comments or suggestions should also be sent to that address. Let me urge you not to hold back questions, as questions are our single best way to spot places for improvement in our methods of doing things.

If you would like to submit a data set to the CMU Learning Benchmark Archive, send email to neural-bench at cs.cmu.edu. All data sets should be in the CMU data file format. If you have difficulty converting your data file, contact us for assistance.

Matt White
Maintainer, CMU Learning Benchmark Archive

-------------------------------------------------------------------------------

Directions for FTPing datasets:

For people whose systems support AFS, you can access the files directly from directory "/afs/cs.cmu.edu/project/connect/bench".

For people accessing these files via FTP:

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu". The internet address of this machine is 128.2.206.173, for those who need it.

2. Log in as user "anonymous" with your own internet address as password. You may see an error message that says "filenames may not have /.. in them" or something like that. Just ignore it.

3. Change remote directory to "/afs/cs/project/connect/bench". NOTE: you must do this in a single atomic operation. Some of the super directories on this path are not accessible to outside users.

4. At this point the "dir" command in FTP should give you a listing of files in this directory. Use get or mget to fetch the ones you want. If you want to access a compressed file (with suffix .Z) be sure to give the "binary" command before doing the "get". (Some versions of FTP use different names for these operations -- consult your local system maintainer if you have trouble with this.)

5. The directory "/afs/cs/project/connect/code" contains public-domain programs implementing the Quickprop and Cascade-Correlation algorithms, among other things. Access it in the same way.

From joachim at fit.qut.edu.au Fri Sep 24 04:12:20 1993
From: joachim at fit.qut.edu.au (Joachim Diederich)
Date: Fri, 24 Sep 1993 04:12:20 -0400
Subject: NIPS-93 Workshop "Parallel Processing"
Message-ID: <199309240812.EAA16792@fitmail.fit.qut.edu.au>

NIPS*93 Workshop: Connectionist Modelling and Parallel Architectures
=================  --------------------------------------------------

4 December 1993; Vail, Colorado

Intended Audience: computer scientists and engineers as well as
==================  biologists and cognitive scientists

Organizers:
===========
Joachim Diederich                       Ah Chung Tsoi
Neurocomputing Research Centre          Department of Elec. and Computer Engineering
Queensland University of Technology     University of Queensland
joachim at fitmail.fit.qut.edu.au       act at s1.elec.uq.oz.au

Program:
========

The objective of the workshop is to provide a discussion platform for researchers interested in software and modelling aspects of neural computing. The workshop should be of considerable interest to computer scientists and engineers as well as biologists and cognitive scientists.

The introduction of specialized hardware platforms for connectionist modelling ("connectionist supercomputers") has created a number of research issues which should be addressed. Some of these issues are controversial (including
the need for such specialized architectures): the efficient implementation of incremental learning techniques, the need for the dynamic reconfiguration of networks at runtime, and possible programming environments for these machines.

The following topics should be addressed:

- the efficient simulation of homogeneous network architectures; mapping of homogeneous network architectures to parallel machines

- randomness and sparse coding; the efficient simulation of sparse networks on sequential and parallel machines; sparse activity and communication in parallel architectures

- arbitrary interconnection schemes and their mapping to parallel architectures

- dynamic reconfiguration: the modification of network structures and activation functions at runtime; possible trade-offs between the efficient simulation of fixed-sized networks and constructive (incremental) learning algorithms

- software tools and environments for neural network modelling, in particular for parallel architectures

- connectionist supercomputers (such as CNAPS, Synapse and CNS-1); hardware and programming issues associated with connectionist supercomputers

- biologically realistic modelling on parallel machines; the simulation of synaptogenesis, spike trains etc.

- realistic simulation of the brain, integrating over a number of scales of complexity, from the detailed simulation of neurons to high-level abstractions

The following is a preliminary schedule; we expect to have two more slots for brief presentations and invite abstracts for short talks (about 10-15 min). Please send e-mail to: joachim at fitmail.fit.qut.edu.au

Morning Session:
----------------

7:30-7:40 Joachim Diederich, Queensland University of Tech., Brisbane
          Introduction
7:40-8:10 Jerome A. Feldman, ICSI & University of California, Berkeley
          The Connectionist Network Supercomputer (CNS-1)
8:10-8:30 Discussion
8:30-8:50 Nigel Goddard, Pittsburgh Supercomputer Center
          Practical Parallel Neural Simulation
8:50-9:10 Per Hammarlund, Royal Institute of Technology, Stockholm
          Simulation of Large Neural Networks: System Specification and Execution on Parallel Machines
9:10-9:30 Discussion

Afternoon Session:
------------------

4:30-4:50 Paul Murtagh & Ah Chung Tsoi, University of Queensland, St. Lucia
          Digital implementation of a reconfigurable VLSI neural network chip
4:50-5:20 Ulrich Ramacher, Siemens AG, Munich
          The Neurocomputer SYNAPSE-1
5:20-5:30 Discussion
5:30-6:00 Guenther Palm & Franz Kurfess, University of Ulm
          Parallel Implementations of Neural Networks for Associative Memory
6:00-6:30 Discussion

From georg at ai.univie.ac.at Fri Sep 24 07:56:06 1993
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Fri, 24 Sep 1993 13:56:06 +0200
Subject: CSFN: instructions for printing
Message-ID: <199309241156.AA00364@chicago.ai.univie.ac.at>

Dear connectionists,

I recently announced the availability of two papers ("A unified framework for MLPs and RBFNs ..." and "On using feedforward NN for Clinical ...") on both neuroprose and our ftp server 'ftp.ai.univie.ac.at'. Unfortunately, the postscript files contain commands that cause trouble with some printers. So, if you fetched any of the files but were not able to print them out, the cause could be one of the following:

1) Your printer does not support the "A4" tray (European paper format). Then either delete the line consisting of 'statusdict begin a4tray end' or fetch the file again from ftp.ai.univie.ac.at (anonymous ftp), where I have already deleted that line.

2) The troubles (i.e.
no printing at all, or the error message "Offending command: cleartomark", or the like) are caused by a bunch of lines consisting of the simple command 'cleartomark' - for whatever reason. Then delete all those lines or globally replace them with '%cleartomark' (= comment them out) and try printing again.

Since the troubles seem to occur only on SOME printers, and other printers might need those ominous commands (sorry to you PS gurus), and since I want to save Jordan Pollack some additional effort, I have not made any changes to the originally archived files (except for the one mentioned above).

Thank you for your patience and sorry for any inconvenience.

cheers
Georg Dorffner

From protopap at marlowe.cog.brown.edu Fri Sep 24 14:09:03 1993
From: protopap at marlowe.cog.brown.edu (Thanassi Protopapas)
Date: Fri, 24 Sep 93 14:09:03 EDT
Subject: References on speech perception modeling wanted
Message-ID: <9309241809.AA09588@marlowe.cog.brown.edu>

Hello,

I am a graduate student at Brown in the program in Cognitive Science and I am preparing my prelim paper on connectionist modeling of speech perception. I already have an extensive list of references on the issues of speech perception and on general neural net modeling, as well as on the TRACE model and its (dis)advantages, including the Massaro et al. papers on the FLMP. At the end of this message I list some papers I have that deal specifically with connectionist modeling; the extensive list of references on related topics can be found at the following ftp site (log in as anonymous):

clouseau.cog.brown.edu (128.148.208.14)
in directory /pub/protopapas
in file sp_references.ps (PostScript format only)

I would like to ask if you are aware of any recent (within the past two years or so) attempts to model speech perception with connectionist models. I am interested in the stages from prelexical processing of the signal (feature detection, normalization, etc.) up to lexical access (including representational issues and units/segments of processing). Please send me references at protopap at cog.brown.edu, or, if you have some technical report that I cannot find through the library system, I would really appreciate a copy. My postal address is:

Athanassios Protopapas
Dept. of Cognitive & Linguistic Sciences
Box 1978
Brown University
Providence, RI 02912
U.S.A.

Although I'd love to know about engineering applications of neural networks to speech recognition, I am primarily interested in models of human speech perception (more psychologically/cognitively motivated and oriented).

Thanks a lot in advance,
Thanassi.

------------------------------------------------------------------------

Selected references:

Elman, J. L. (1989) Connectionist approaches to acoustic/phonetic processing. In W. Marslen-Wilson (Ed.) Lexical representation and process, pp.227-260. Cambridge, MA : MIT Press

Elman, J. L. (1990) Representation and structure in connectionist models. In G. T. M. Altmann (Ed.) Cognitive models of speech processing, pp.345-382. Cambridge, MA : MIT Press

Haffner, P. & Waibel, A. (1992) Multi-state time delay neural networks for continuous speech recognition. Advances in neural information processing systems, Vol. 4, pp.135-142. Morgan Kaufman

Klatt, D. H. (1989) Review of selected models of speech perception. In W. Marslen-Wilson (Ed.) Lexical representation and process, pp.169-226. Cambridge, MA : MIT Press

Lippmann, R. P. (1989) Review of neural networks for speech recognition. Neural Computation, 1, pp.1-38.

Massaro, D. W.
(1992) Connectionist models of speech perception. In R. G. Reilly & N. E. Sharkey (Eds.) Connectionist approaches to natural language processing, pp.321-350. Hove, UK : Lawrence Erlbaum

Massaro, D. W., & Cohen, M. M. (1991) Integration versus interactive activation: The joint influence of stimulus and context in perception. Cognitive Psychology, 23, 558-614

McClelland, J. L. & Elman, J. L. (1986) Interactive processes in speech perception: The TRACE model. In Parallel distributed processing, Vol. 2: Psychological and biological models, pp.58-121. Cambridge, MA : MIT Press

McClelland, J. L. (1991) Stochastic interactive processes and the effect of context on perception. Cognitive Psychology, 23, 1-44

Norris, D. (1990) A dynamic-net model of human speech recognition. In G. T. M. Altmann (Ed.) Cognitive models of speech processing, pp.87-104. Cambridge, MA : MIT Press

Quinlan, P. (1991) Connectionism and psychology. Chicago, IL : The University of Chicago Press, pp.132-157

Watrous, R. L. (1990) Phoneme discrimination using connectionist networks. The Journal of the Acoustical Society of America, 87(4) : 1753-1772

Waibel, A. (1989) Modular construction of time-delay neural networks for speech recognition. Neural Computation, 1, 39-46

From ken at phy.ucsf.edu Mon Sep 27 06:13:58 1993
From: ken at phy.ucsf.edu (Ken Miller)
Date: Mon, 27 Sep 93 03:13:58 -0700
Subject: postdoctoral fellowship opportunity for women and minorities
Message-ID: <9309271013.AA24084@phybeta.ucsf.EDU>

The University of California annually awards 20 or more postdoctoral fellowships to women and minorities under the "President's Postdoctoral Fellowship Program". Fellowships are awarded to work with a faculty member at any of the nine UC campuses or at one of the three national laboratories associated with UC (Lawrence Berkeley, Lawrence Livermore, and Los Alamos). Fellowships pay $26-27,000/year, plus health benefits and $4000/year for research and travel. Applicants must be citizens or permanent residents of the United States, and should anticipate completion of their Ph.D.'s by July 1, 1994. For this year's competition, DEADLINE FOR APPLICATION IS DECEMBER 14, 1993.

There are many of us who work in computational neuroscience or connectionism in the UC system or the national labs. I would encourage anyone eligible to make use of this opportunity to obtain funding to work with one of us. In particular, I encourage anyone interested in computational neuroscience to contact me to further discuss my own research program and the research opportunities in computational and systems neuroscience at UCSF.

To receive a fellowship application and further information, contact:

President's Postdoctoral Fellowship Program
Office of the President
University of California
300 Lakeside Drive, 18th Floor
Oakland, CA 94612-3550
Phone: 510-987-9500 or 987-9503

Ken Miller

Kenneth D. Miller
Dept. of Physiology
University of California, San Francisco
513 Parnassus
San Francisco, CA 94143-0444 [Office: S-859]
telephone: (415) 476-8217
fax: (415) 476-4929
internet: ken at phy.ucsf.edu
From S.FLOCKTON at rhbnc.ac.uk Mon Sep 27 12:40:02 1993
From: S.FLOCKTON at rhbnc.ac.uk (S.FLOCKTON@rhbnc.ac.uk)
Date: Mon, 27 SEP 93 12:40:02 BST
Subject: Job vacancy in evolutionary algorithms
Message-ID: <21C31C4F_0050A620.009732BBE1F3469A$9_2@UK.AC.RHBNC.VAX>

ROYAL HOLLOWAY, UNIVERSITY OF LONDON

POST-DOCTORAL RESEARCH ASSISTANT
EVOLUTIONARY ALGORITHMS IN NON-LINEAR SIGNAL PROCESSING

Applications are invited for this SERC-funded post, tenable for three years from 1 October 1993 or soon after, to carry out a comparison of the effectiveness of evolution-based algorithms for a number of signal processing problems. This comparison will be done by studying example problems and developing theoretical ideas concerning the behaviour of these algorithms. The successful applicant will join a group investigating several different aspects of genetic algorithms and neural networks.

Royal Holloway, one of the five multi-faculty Colleges of the University of London, is situated in a campus environment approximately 20 miles west of London, just outside the M25.

Applicants should hold a PhD in Electrical Engineering, Computer Science, Physics, or a related field, preferably in digital signal processing or genetic and/or other evolution-based algorithms. Salary on the Research 1A Scale (UKpounds 14,962 - 17,320 pa, inclusive of London Allowance).

Informal enquiries to Dr Stuart Flockton (Tel: 0784 443510, Fax: 0784 472794, email: S.Flockton at rhbnc.ac.uk). Further particulars from the Personnel Officer, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, Tel: 0784 443030.

Closing date for applications: 15th October 1993

From GARZONM at hermes.msci.memst.edu Tue Sep 28 10:28:07 1993
From: GARZONM at hermes.msci.memst.edu (GARZONM@hermes.msci.memst.edu)
Date: 28 Sep 93 09:28:07 CDT
Subject: NIPS'93 workshop on "Stability and Solvability"
Message-ID:

C A L L   F O R   P A P E R S

A One-day Workshop on
* STABILITY AND OBSERVABILITY *
at NIPS'93
December 3, 1993

We are organizing a workshop at the NIPS'93 (Neural Information Processing Systems) conference, to be held in the Denver/Vail area in Colorado on December 3. The themes of the workshop are `Stability and Observability'. A more detailed description is attached below.

There is still room for some contributed talks. If you are interested in presenting a paper based on previous and/or current research, send a short (one-page) abstract or contact one of the organizers by October 8 via email or fax. A list of speakers will be finalized by mid-October.

Fernanda Botelho                       Max Garzon
botelhof at hermes.msci.memst.edu      garzonm at hermes.msci.memst.edu
FAX (901)678-2480 (preferred); 678-3299
Workshop cochairs

_____________________________ cut here _________________________

The purpose of this one-day workshop is to bring together neural network practitioners, computer scientists and mathematicians interested in `stability' and `observability' of neural networks of various types (discrete/continuous time and/or activations). These two properties concern the relationship between defining parameters (weights, transfer functions, and training sets) and the behavior of neural networks from the point of view of an outside observer. This behavior is affected by noise, rounding, bounded precision, sensitivity to initial conditions, etc.

Roughly speaking, *stability* (e.g.
asymptotic, Lyapunov, structural) refers to the ability of a network (or a family of networks) to generate trajectories/orbits that remain reasonably close (resp., in structure, e.g. topological conjugacy) to the original under small perturbations of the input/initial conditions (or the defining parameters of the network). Of course, neural networks are well-known for their graceful degradation, but this is less clear an issue with bounded precision, continuous time with local interaction governed by differential equations, and learning algorithms.

Second, the issue of *observability*, roughly speaking, concerns the problem of error control under iteration of recurrent nets. In dynamical systems, observability is studied in terms of shadowing. But observability can also be construed in other ways, e.g. as our ability to identify a network by observing the abstract i/o function that it realizes (which, at some level, reduces to the essential uniqueness of an irreducible network implementing the i/o function).

Speakers will present their views in short (< 20 min.) talks. A panel discussion coordinated by the cochairs will discuss known results and identify fundamental problems and questions of interest for further research.

F. Botelho and M. Garzon, cochairs
botelhof at hermes.msci.memst.edu
garzonm at hermes.msci.memst.edu
Mathematical Sciences
Institute for Intelligent Systems
Memphis State University
Memphis, TN 38152 U.S.A.

Max Garzon (preferred)              garzonm at hermes.msci.memst.edu
Math Sciences                       garzonm at memstvx1.memst.edu
Memphis State University            Phone: (901) 678-3138/-2482
Memphis, TN 38152 USA               Fax: (901) 678-3299

From Pierre.Bessiere at imag.fr Tue Sep 28 11:33:55 1993
From: Pierre.Bessiere at imag.fr (pierre bessiere)
Date: Tue, 28 Sep 1993 16:33:55 +0100
Subject: TR: The "Ariadne's Clew" algorithm
Message-ID: <9309281533.AA02278@meteore.imag.fr>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/bessiere.iros93.ps.Z

The following paper is available through FTP either from:
- archive.cis.ohio-state.edu or
- imag.fr

********************************************************************

TITLE     : THE "ARIADNE'S CLEW" ALGORITHM
            Global planning with local methods
AUTHOR(S) : Pierre Bessiere, Juan-Manuel Ahuactzin, El-Ghazali Talbi & Emmanuel Mazer
REFERENCE : IEEE-IROS'93 conference, Yokohama, Japan, 1993
LANGUAGE  : English
LENGTH    : 8 pages
DATE      : 28/09/93
KEYWORDS  : Robotic, Genetic Algorithm, Path planning
FILE NAME : bessiere.iros93.e.ps.Z
Author E-mail : Pierre.Bessiere at imag.fr
Related Files :

ABSTRACT:
The goal of the work described in this paper is to build a path planner able to drive a robot in a dynamic environment where the obstacles are moving. In order to do so, we propose a method, called the "Ariadne's clew algorithm", to build a global path planner based on the combination of two local planning algorithms: an Explore algorithm and a Search algorithm. The purpose of the Explore algorithm is to collect information about the environment with an increasingly fine resolution by placing landmarks in the searched space. The goal of the Search algorithm is to opportunistically check if the target can be easily reached from any given placed landmark. The Ariadne's clew algorithm is shown to be very fast in most cases, allowing planning in dynamic environments. It is also shown to be complete, which means that it is sure to find a path when one exists. Finally, we describe a massively parallel implementation of this algorithm.
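Paraphrased from the abstract, the overall control structure interleaves the two local planners roughly as in the C skeleton below. This is a schematic only: the geometric subroutines are stubbed out with dummy bodies, whereas in the paper Explore and Search are genetic-algorithm-based optimizations over robot configurations:

#include <stdio.h>

#define MAX_LANDMARKS 1024

typedef struct { double x, y; } Config;

/* Dummy stand-ins for the paper's GA-based local planners. */
static Config explore_new_landmark(const Config *lm, int n)
{
    /* Explore: place a new landmark as far as possible from the
     * landmarks collected so far (here: a fake straight-line walk). */
    Config c;
    (void)lm;
    c.x = (double)n;
    c.y = 0.0;
    return c;
}

static int search_reaches_target(Config from, Config target)
{
    /* Search: can the target be reached from `from` by a simple,
     * cheap local motion?  (Fake test, for the skeleton only.)     */
    return from.x >= target.x;
}

int main(void)
{
    Config target;
    Config lm[MAX_LANDMARKS];
    int n = 1;

    target.x = 5.0; target.y = 0.0;
    lm[0].x = 0.0;  lm[0].y = 0.0;          /* start configuration */

    while (n < MAX_LANDMARKS) {
        if (search_reaches_target(lm[n - 1], target)) {
            printf("path found via %d landmarks\n", n);
            return 0;
        }
        lm[n] = explore_new_landmark(lm, n);    /* refine the map */
        n++;
    }
    printf("no path found at this resolution\n");
    return 1;
}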
********************************************************************

How to get files from the Neuroprose archives?
______________________________________________

Anonymous ftp on:
- archive.cis.ohio-state.edu (128.146.8.52)

mymachine>ftp archive.cis.ohio-state.edu
Name: anonymous
Password: yourname at youraddress
ftp>cd pub/neuroprose
ftp>binary
ftp>get bessiere.iros93.ps.Z
ftp>quit
mymachine>uncompress bessiere.iros93.ps.Z

How to get files from IMAG?
___________________________

Anonymous ftp on:
- imag.fr (129.88.32.1)

mymachine>ftp imag.fr
Name: anonymous
Password: yourname at youraddress
ftp>cd pub/LIFIA
ftp>binary
ftp>get bessiere.iros93.e.ps.Z
ftp>quit
mymachine>uncompress bessiere.iros93.e.ps.Z

--
Pierre BESSIERE
***************
CNRS - IMAG/LIFIA
46 ave. Felix Viallet
38031 Grenoble Cedex
FRANCE
Phone: Work: 33/76.57.46.73  Home: 33/76.88.06.09
Fax: 33/76.57.46.02
E-Mail: Pierre.Bessiere at imag.fr

Our mind has an irresistible tendency to consider as clearest the idea that serves it most often.
BERGSON, "La pensée et le mouvant"

From terry at helmholtz.sdsc.edu Tue Sep 28 20:49:21 1993
From: terry at helmholtz.sdsc.edu (Terry Sejnowski)
Date: Tue, 28 Sep 93 17:49:21 PDT
Subject: Summer Institute
Message-ID: <9309290049.AA05457@helmholtz.sdsc.edu>

SUMMER INSTITUTE IN COGNITIVE NEUROSCIENCE
at The University of California, Davis

The 1994 Summer Institute will be held at the University of California, Davis, from July 10 through 23. The two week course will examine how information about the brain bears on issues in cognitive science, and how approaches in cognitive science apply to neuroscience research. A distinguished international faculty will lecture on current topics in brain plasticity, strategies of neuroencoding, and evolution. Laboratories and demonstrations will provide practical experience with cognitive neuropsychology experiments, connectionist/computational modeling, and neuroimaging techniques. At every stage, the relationship between cognitive processes and underlying neural circuits will be explored. The Foundation is providing room/board and limited support for travel.

Faculty Include: Richard Andersen, Max Bear, Ira Black, Kenneth H. Britten, Simon Baron Cohen, Leda Cosmides, Randy Gallistel, Michael S. Gazzaniga, Charles Gilbert, Charles M. Gray, Eric Knudsen, Peter Marler, Michael Merzenich, Reid Montague, Steven Pinker, V. Ramachandran, Gregg Recanzone, Barry Richmond, Mitch Sutter, Timothy Tonini, John Tooby, and many others.

For information and applications please write to:

McDonnell Summer Institute in Cognitive Neuroscience
Center for Neuroscience, 1544 Newton Court
University of California, Davis
Davis, California 95616 USA

APPLICATIONS MUST BE RECEIVED BY JANUARY 15, 1994

From uzimmer at informatik.uni-kl.de Wed Sep 29 11:13:54 1993
From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP)
Date: Wed, 29 Sep 93 16:13:54 +0100
Subject: A Neural Fuzzy Decision System (Report, Diploma-thesis & Software)
Message-ID: <930929.161354.601@informatik.uni-kl.de>

Due to a couple of requests, we have made the complete diploma thesis of Joerg Bruske ("Neural Fuzzy Decision Systems") available for ftp access.
From uzimmer at informatik.uni-kl.de Wed Sep 29 11:13:54 1993
From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP)
Date: Wed, 29 Sep 93 16:13:54 +0100
Subject: A Neural Fuzzy Decision System (Report, Diploma-thesis & Software)
Message-ID: <930929.161354.601@informatik.uni-kl.de>

Due to a couple of requests, we have made the complete diploma thesis of Joerg Bruske ("Neural Fuzzy Decision Systems") available for FTP access.

---------------------------------------------------------------------------
--- Neural Fuzzy Decision System (Report, Diploma-thesis & Software):
---------------------------------------------------------------------------

--- Associated report:
FTP-Server: ag_vp_file_server.informatik.uni-kl.de
Mode: binary
Directory: Neural_Networks/Reports
File name: Zimmer.NFDS.ps.Z

SPIN-NFDS: Learning and Preset Knowledge for Surface Fusion - A Neural Fuzzy Decision System
Joerg Bruske, Ewald von Puttkamer & Uwe R. Zimmer

The problem discussed in this paper may be characterized briefly by the question: "Do these two surface fragments belong together (i.e., belong to the same surface)?" The techniques presented try to benefit from predefined knowledge as well as from the possibility of refining and adapting this knowledge to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence, high accuracy), and the model might be used for a wide range of applications. The general frame surrounding the work in this paper is the SPIN project, whose emphasis is on sub-symbolic abstractions based on a 3-D scanned environment.

--- SPIN-NFDS: the diploma thesis by Joerg Bruske:
FTP-Server: ag_vp_file_server.informatik.uni-kl.de
Mode: binary
Directory: Neural_Networks/Reports/Bruske.Diploma-thesis
File names: *.ps.Z

The complete diploma thesis gives more precise information about the state of the art in neural fuzzy decision systems, an introduction to fuzzy logic and neural nets, and much more.

--- Source code and technical documentation:
FTP-Server: ag_vp_file_server.informatik.uni-kl.de
Mode: binary
Directory: Neural_Networks/Software/Neural_Fuzzy_Decision

This documentation consists of five chapters. In Chapter 1, the author presents his approach to implementing fuzzy decision systems (FDS) by means of neural nets, leading to his NFDS. In order to train (optimize) the NFDS, a slightly modified version of the backpropagation algorithm is introduced (a toy illustration of this idea follows the announcement below). In Chapter 2, the FuzNet project and its modules are described in detail. FuzNet implements the NFDS described in Chapter 1 on Apple Macintosh computers and has been developed as an easily integrable software component for larger software projects. In Chapter 3, we are concerned with the details of integrating FuzNet into other software projects, taking SurfaceExtractor as an example. However, the reader need not know the SurfaceExtractor project (which currently is not supplied via FTP) in order to understand the details of integrating FuzNet into their own projects. In Chapter 4, the FuzTest application is described. FuzTest is a very simple application intended to familiarize the user with FuzNet. In Chapter 5, the reader will find the syntax diagram for fuzzy data and rule bases as accepted by FuzNet. The file "brakingFDS" contains such a fuzzy data and rule base. A reference list covering neural nets, fuzzy logic, and neural fuzzy decision systems is appended to this documentation. In particular, [Bruske93] is recommended for a detailed discussion of neural fuzzy decision systems, and [BruPuttZi93] as a short introduction to NFDS and one of its applications in the Research Group v. Puttkamer.

----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | Research Group Prof. v. Puttkamer | 67663 Kaiserslautern - Germany | P.O. Box 3049 | Phone: +49 631 205 2624 | Fax: +49 631 205 2803
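To make the report's central idea concrete -- a fuzzy rule base realized as a differentiable network, so that its preset knowledge can be refined by gradient descent -- here is a toy sketch. The Gaussian membership functions, the product used as fuzzy AND, and all names are illustrative assumptions, not details of Bruske's NFDS or of his modified backpropagation.

    import numpy as np

    def membership(x, c, s):
        # Gaussian membership functions with adaptable centres c and widths s.
        return np.exp(-((x - c) / s) ** 2)

    # One rule, e.g. "IF distance is SMALL AND angle is SMALL THEN same surface".
    c = np.array([0.5, 0.5])    # membership centres (the "preset knowledge")
    s = np.array([0.3, 0.3])    # membership widths
    eta = 0.1                   # learning rate

    # One gradient step on the squared error against a training label y:
    x, y = np.array([0.4, 0.7]), 1.0
    m = membership(x, c, s)
    out = np.prod(m)            # rule strength: fuzzy AND realized as a product
    err = out - y
    # d(out)/d(c_i) = out * 2 * (x_i - c_i) / s_i**2 (chain rule through the product)
    c -= eta * err * out * 2 * (x - c) / s**2   # backprop-style centre update

A real system would have many rules and also tune the widths, but the chain rule through the rule strength is the same idea.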
From hwang at pierce.ee.washington.edu Wed Sep 29 21:28:41 1993
From: hwang at pierce.ee.washington.edu (Jenq-Neng Hwang)
Date: Wed, 29 Sep 93 18:28:41 PDT
Subject: NNSP'94 Call For Papers
Message-ID: <9309300128.AA01571@pierce.ee.washington.edu.>

1994 IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING
September 6-8, 1994, Ermioni, Greece
Sponsored by the IEEE Signal Processing Society (in cooperation with the IEEE Neural Networks Council)

GENERAL CHAIR: John Vlontzos, INTRACOM S.A., Peania, Attica, Greece (jvlo at intranet.gr)
PROGRAM CHAIR: Jenq-Neng Hwang, University of Washington, Seattle, Washington, USA (hwang at ee.washington.edu)
PROCEEDINGS CHAIR: Elizabeth J. Wilson, Raytheon Co., Marlborough, MA, USA (bwilson at sud2.ed.ray.com)
FINANCE CHAIR: Demetris Kalivas, INTRACOM S.A., Peania, Attica, Greece (dkal at intranet.gr)

PROGRAM COMMITTEE: Joshua Alspector, Les Atlas, Charles Bachmann, David Burr, Rama Chellappa, Lee Giles, Steve J. Hanson, Yu-Hen Hu, Jenq-Neng Hwang, Biing-Hwang Juang, Shigeru Katagiri, Sun-Yuan Kung, Gary M. Kuhn, Stephanos Kollias, Richard Lippmann, Fleming Lure, John Makhoul, Richard Mammone, Elias Manolakos, Mahesan Niranjan, Tomaso Poggio, Jose Principe, Wojtek Przytula, Ulrich Ramacher, Bhaskar D. Rao, Andreas Stafylopatis, Noboru Sonehara, John Sorensen, Yoh'ichi Tohkura, John Vlontzos, Raymond Watrous, Christian Wellekens, Yiu-Fai Issac Wong

CALL FOR PAPERS

The fourth in a series of IEEE workshops on Neural Networks for Signal Processing will be held at the Porto Hydra Resort Hotel, Ermioni, Greece, in September 1994. Papers are solicited on, but not limited to, the following topics:

APPLICATIONS: Image, speech, communications, sensors, medical, adaptive filtering, OCR, and other general signal processing and pattern recognition topics.
THEORIES: Generalization and regularization, system identification, parameter estimation, new network architectures, new learning algorithms, and wavelets in NNs.
IMPLEMENTATIONS: Software, digital, analog, and hybrid technologies.

Prospective authors are invited to submit 4 copies of an extended summary of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers, and e-mail address if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. Due to workshop facility constraints, attendance will be limited, with priority given to those who submit written technical contributions. For further information, please contact Mrs. Myra Sourlou at the NNSP'94 Athens office, (Tel.) +30 1 6644961, (Fax) +30 1 6644379, (e-mail) msou at intranet.gr.

Please send paper submissions to:
Prof. Jenq-Neng Hwang, IEEE NNSP'94
Department of Electrical Engineering, FT-10
University of Washington, Seattle, WA 98195, USA
Phone: (206) 685-1603, Fax: (206) 543-3842

SCHEDULE
Submission of extended summary: February 15
Notification of acceptance: April 19
Submission of photo-ready paper: June 1
Advance registration, before: June 1

From mli at math.uwaterloo.ca Wed Sep 29 23:26:18 1993
From: mli at math.uwaterloo.ca (Ming Li)
Date: Wed, 29 Sep 93 23:26:18 -0400
Subject: Call for Papers: COLT'94
Message-ID: <9309300326.AA21739@math.uwaterloo.ca>

CALL FOR PAPERS -- COLT 94
Seventh ACM Conference on Computational Learning Theory
New Brunswick, New Jersey, July 12-15, 1994

The Seventh ACM Conference on Computational Learning Theory (COLT 94) will be held at the New Brunswick campus of Rutgers University from Tuesday, July 12, through Friday, July 15, 1994. The conference will be co-located with the Eleventh International Conference on Machine Learning (ML 94), which will be held from Sunday, July 10, through Wednesday, July 13; the two conferences thus overlap on Tuesday and Wednesday. The COLT 94 conference is sponsored jointly by the ACM Special Interest Groups for Algorithms and Computation Theory (SIGACT) and Artificial Intelligence (SIGART).

We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning, including artificial and biological neural networks, robotics, pattern recognition, inductive inference, information theory, decision theory, Bayesian/MDL estimation, statistical physics, and cryptography. We look forward to a lively, interdisciplinary meeting; in particular, we expect some fruitful interaction between the research communities of the two overlapping conferences. There will be a number of joint invited talks. Prof. Michael Jordan from MIT will be one of the invited speakers; the others will be announced at a later date.

Abstract Submission: Authors should submit twelve copies (preferably two-sided copies) of an extended abstract, to be received by Thursday, February 3, 1994, to:
Manfred Warmuth - COLT 94
225 Applied Sciences, Department of Computer Science
University of California, Santa Cruz, California 95064

An abstract must be received by February 3, 1994 (or postmarked January 23 and sent airmail, or sent overnight delivery on February 2). This deadline is FIRM! Papers that have appeared in journals or other conferences, or that are being submitted to other conferences, are not appropriate for submission to COLT.

Abstract Format: The abstract should consist of a cover page with title, authors' names, postal and e-mail addresses, and a 200-word summary. The body of the abstract should be no longer than 10 pages, with roughly 35 lines/page in 12-point font. Papers deviating significantly from this length constraint will not be considered. The body should include a clear definition of the theoretical model used, an overview of the results, and some discussion of their significance, including comparison to other work. Proofs or proof sketches should be included in the technical section. Experimental results are welcome, but are expected to be supported by theoretical analysis.

Notification: Authors will be notified of acceptance or rejection by a letter mailed on or before Monday, April 4, with possible earlier notification via e-mail. Final camera-ready papers will be due on Tuesday, May 3.
Program Format: Depending on submissions, and in order to accommodate a broad variety of papers, the final program may consist of both "long" talks and "short" talks, corresponding to longer and shorter papers in the proceedings. The short talks will also be coupled with a poster presentation in special poster sessions. By default, all papers will be considered for both categories. Authors who do *not* want their papers considered for the short category should indicate that fact in the cover letter. The cover letter should also specify the contact author and give his or her e-mail address.

Program Chair: Manfred Warmuth (UC Santa Cruz, e-mail to colt94 at cse.ucsc.edu).
Conference and Local Arrangements Co-Chairs: Robert Schapire and Michael Kearns (AT&T Bell Laboratories, e-mail to colt94 at research.att.com).
Program Committee: Shun'ichi Amari (U. Tokyo), Avrim Blum (Carnegie Mellon), Nader Bshouty (U. Calgary), Bill Gasarch (U. Maryland), Tom Hancock (Siemens), Michael Kearns (AT&T), Sara Solla (Holmdel), Prasad Tadepalli (Oregon St. U.), Jeffrey Vitter (Duke U.), Thomas Zeugmann (TU Darmstadt).
From BRUNAK at nbivax.nbi.dk Wed Sep 1 07:44:20 1993
From: BRUNAK at nbivax.nbi.dk (BRUNAK@nbivax.nbi.dk)
Date: Wed, 01 Sep 1993 13:44:20 +0200
Subject: IJNS vol. 4 issues 1 and 2
Message-ID: <01H2FPM8ZAJ68X3SKW@nbivax.nbi.dk>

Begin Message:
-----------------------------------------------------------------------
INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications.
The journal presents a fresh, undogmatic attitude towards this multidisciplinary field, with the aim of being a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities. ISSN: 0129-0657 (IJNS)

----------------------------------
Contents of Volume 4, issue number 1 (1993):
1. O. Ekeberg: Response Properties of a Population of Neurons.
2. M. Moeller: Supervised Learning on Large Redundant Training Sets.
3. K. Urahama & S.-I. Ueno: A Gradient System Solution to Potts Mean Field Equations and Its Electronic Implementation.
4. A. Romeo: Thermodynamic Transitions in Networks for Letter Distinction.
5. C. H.-A. Ting: Magnocellular Pathway for Rotation Invariant Neocognition.
6. J. Hao, J. Vandewalle & S. Tan: A Rule-Based Neural Controller for Inverted Pendulum System.
7. V. Majernik & A. Kral: Sharpening of Input Excitation Curves in Lateral Inhibition.
8. Y. Deville: Digital Neural Networks for High-Speed Divisions and Root Extractions.

----------------------------------
Contents of Volume 4, issue number 2 (1993):
1. A. A. Handzel, T. Grossman, E. Domany, S. Tarem & E. Duchovni: A Neural Network Classifier in Experimental Particle Physics.
2. C. F. Miles & D. Rogers: A Biologically Motivated Associative Memory Architecture.
3. B. Cartling: Control of the Complexity of Associative Memory Dynamics by Neuronal Adaptation.
4. N. Shamir, D. Saad & E. Marom: Neural Net Pruning Based on Functional Behavior of Neurons.
5. J. Gorodkin, L. K. Hansen, A. Krogh, C. Svarer & O. Winther: A Quantitative Study of Pruning by Optimal Brain Damage.
6. S. G. Romaniuk: Trans-Dimensional Learning.
7. R. Newman: A Function Approximation Algorithm Using Sequential Composition.

----------------------------------
Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge), S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-charge), D. Stork (Stanford) (Book review editor).

Associate editors: J. Alspector (Bellcore), B. Baird (Berkeley), D. Ballard (University of Rochester), E. Baum (NEC Research Institute), S. Bjornsson (University of Iceland), J. M. Bower (CalTech), S. S. Chen (University of North Carolina), R. Eckmiller (University of Dusseldorf), J. L. Elman (University of California, San Diego), M. V. Feigelman (Landau Institute for Theoretical Physics), F. Fogelman-Soulie (Paris), K. Fukushima (Osaka University), A. Gjedde (Montreal Neurological Institute), S. Grillner (Nobel Institute for Neurophysiology, Stockholm), T. Gulliksen (University of Oslo), D. Hammerstrom (Oregon Graduate Institute), D. Horn (Tel Aviv University), J. Hounsgaard (University of Copenhagen), B. A. Huberman (XEROX PARC), L. B. Ioffe (Landau Institute for Theoretical Physics), P. I. M. Johannesma (Katholieke Univ. Nijmegen), M. Jordan (MIT), G. Josin (Neural Systems Inc.), I. Kanter (Princeton University), J. H. Kaas (Vanderbilt University), A. Lansner (Royal Institute of Technology, Stockholm), A. Lapedes (Los Alamos), B. McWhinney (Carnegie-Mellon University), J. Moody (Yale, USA), A. F. Murray (University of Edinburgh), J. P. Nadal (Ecole Normale Superieure, Paris), E. Oja (Lappeenranta University of Technology, Finland), N. Parga (Centro Atomico Bariloche, Argentina), S. Patarnello (IBM ECSEC, Italy), P. Peretto (Centre d'Etudes Nucleaires de Grenoble), C. Peterson (University of Lund), K. Plunkett (University of Aarhus), S. A. Solla (AT&T Bell Labs), M. A. Virasoro (University of Rome), D. J. Wallace (University of Edinburgh), A. Weigend (Xerox PARC),
D. Zipser (University of California, San Diego).

----------------------------------
CALL FOR PAPERS: Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from:

The Editorial Secretariat, IJNS, World Scientific Publishing Co. Pte. Ltd., 73 Lynton Mead, Totteridge, London N20 8DH, ENGLAND. Telephone: (44) 81-446-2461
or
World Scientific Publishing Co. Inc., Suite 1B, 1060 Main Street, River Edge, New Jersey 07661, USA. Telephone: (1) 201-487-9655
or
World Scientific Publishing Co. Pte. Ltd., Farrer Road, P.O. Box 128, SINGAPORE 9128. Telephone: (65) 382-5663

From gary at cs.ucsd.edu Thu Sep 2 11:44:04 1993
From: gary at cs.ucsd.edu (Gary Cottrell)
Date: Thu, 2 Sep 93 08:44:04 -0700
Subject: Virtual Festschrift for Jellybean
Message-ID: <9309021544.AA04923@gremlin>

Dear Connectionists,

On a sad day this spring, my longtime collaborator, friend, and inspiration for the field of Dognitive Science, Jellybean, died at the ripe old age of 16. His age at death (for a golden retriever/samoyed cross) is a testament to modern veterinary medicine. Alas, we still must all go sometime. The purpose of this message is to invite the humorists among us to contribute a piece to a collection of humor I am editing in Jellybean's memory. As you may know, a "festschrift" is a volume of articles presented as a tribute or memorial to an academic. I have no plans to publish this except "virtually", through the auspices of the neuroprose archive. I already have several contributions that were privately solicited; this is a public solicitation for humor for this purpose. Your piece does not have to be in the "Dognitive Science" vein, but may be anything having to do with neural nets, Cognitive Science, or nearby fields. I reserve the editorial right to accept, edit, and/or reject any submitted material that I deem inappropriate, too long (I am expecting pieces on the order of 1-8 pages), or simply not funny. Any editing will be with the agreement of the author. LaTeX files are probably best. Remember, brevity is the mother of wit. The deadline for submission is Nov. 1, 1993. Email submissions only, to gary at cs.ucsd.edu. Thanks for your attention.

Gary Cottrell
619-534-6640; Reception: 619-534-6005; FAX: 619-534-7029
Computer Science and Engineering 0114
University of California San Diego, La Jolla, CA 92093
gary at cs.ucsd.edu (INTERNET); gcottrell at ucsd.edu (BITNET, almost anything); ..!uunet!ucsd!gcottrell (UUCP)

From mozer at dendrite.cs.colorado.edu Mon Sep 6 22:21:07 1993
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Mon, 6 Sep 1993 20:21:07 -0600
Subject: NIPS*93 workshops
Message-ID: <199309070221.AA28415@neuron.cs.colorado.edu>

For the curious, a list of topics for the NIPS*93 post-conference workshops is attached. The workshops will be held in Vail, Colorado, on December 3 and 4, 1993. For further info concerning the individual workshops, please contact the workshop organizers, whose names and e-mail addresses are listed below. Abstracts are not available at present, but will be distributed prior to the workshops.
For NIPS conference and workshop registration info, please write to: NIPS*93 Registration / NIPS Foundation / PO Box 60035 / Pasadena, CA 91116-6035 USA

---------------- December 3, 1993 ----------------

Complexity Issues in Neural Computation and Learning -- Vwani Roychowdhury & Kai-Yeung Siu (vwani at ecn.purdue.edu)
Connectionism for Music and Audition -- Andreas Weigend & Dick Duda (weigend at cs.colorado.edu)
Memory-based Methods for Regression and Classification -- Thomas Dietterich (tgd at cs.orst.edu)
Neural Networks and Formal Grammars -- Simon Lucas (sml at essex.ac.uk)
Neurobiology, Psychophysics, and Computational Models of Visual Attention -- Ernst Niebur & Bruno Olshausen (ernst at acquine.cns.caltech.edu)
Robot Learning: Exploration and Continuous Domains -- David Cohn (cohn at psyche.mit.edu)
Stability and Observability -- Max Garzon & F. Botelho (garzonm at maxpc.msci.memst.edu)
VLSI Implementations -- William O. Camp, Jr. (camp at owgvm6.vnet.ibm.com)
What Does the Hippocampus Compute? -- Mark Gluck & Bruce McNaughton (gluck at pavlov.rutgers.edu)

---------------- December 4, 1993 ----------------

Catastrophic Interference in Connectionist Networks: Can it be Predicted, Can it be Prevented? -- Bob French (french at willamette.edu)
Connectionist Modeling and Parallel Architectures -- Joachim Diederich & Ah Chung Tsoi (joachim at fitmail.fit.qut.edu.au)
Dynamic Representation Issues in Connectionist Cognitive Modeling -- Jordan Pollack (pollack at cis.ohio-state.edu)
Functional Models of Selective Attention and Context Dependency -- Thomas Hildebrandt (thildebr at aragorn.csee.lehigh.edu)
Learning in Computer Vision and Image Understanding -- An Advantage over Classical Techniques? -- Hayit Greenspan (hayit at micro.caltech.edu)
Memory-based Methods for Regression and Classification -- Thomas Dietterich (tgd at cs.orst.edu)
Neural Network Methods for Optimization Problems -- Arun Jagota (jagota at cs.buffalo.edu)
Processing of Visual and Auditory Space and its Modification by Experience -- Josef Rauschecker (josef at helix.nih.gov)
Putting it all Together: Methods for Combining Neural Networks -- Michael Perrone (mpp at cns.brown.edu)

---------------------------------------------------------
NOTE: The assignment of workshops to dates is tentative.
---------------------------------------------------------

From john at dcs.rhbnc.ac.uk Wed Sep 8 07:23:42 1993
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Wed, 08 Sep 1993 12:23:42 +0100
Subject: EuroCOLT
Message-ID: <1809.9309081123@csqx.cs.rhbnc.ac.uk>

The Institute of Mathematics and its Applications
Euro-COLT '93
FIRST EUROPEAN CONFERENCE ON COMPUTATIONAL LEARNING THEORY
20th-22nd December, 1993, Royal Holloway, University of London

Call for Participation and List of Accepted Papers
==================================================
The inaugural IMA European conference on Computational Learning Theory will be held 20--22 December at Royal Holloway, University of London. The conference covers areas related to the analysis of learning algorithms and the theory of machine learning, including artificial and biological neural networks, robotics, pattern recognition, inductive inference, information theory and cryptology, decision theory and Bayesian/MDL estimation.

Invited Talks
=============
As part of our program, we are pleased to announce three invited talks by Wolfgang Maass (Graz), Lenny Pitt (Illinois) and Les Valiant (Harvard).
Euroconference Scholarships
===========================
The conference has also received scientific approval from the European Commission to be supported under the Human Capital and Mobility Euroconferences initiative. This means that there will be a number of scholarships available to cover the expenses of young researchers attending the conference. The scholarships are open to citizens of European Community Member States or people who have been residing and working in research for at least one year in one of the European States. Please indicate on the return form below if you would like to receive more information about these scholarships.

List of Accepted Papers
=======================
R. Gavalda, On the Power of Equivalence Queries.
M. Golea and M. Marchand, On Learning Simple Deterministic and Probabilistic Neural Concepts.
P. Fischer, Learning Unions of Convex Polygons.
S. Polt, Improved Sample Size Bounds for PAB-Decisions.
F. Ameur, P. Fischer, K-U. Hoffgen and F.M. Heide, Trial and Error: A New Approach to Space-Bounded Learning.
A. Anoulova and S. Polt, Using Kullback-Leibler Divergence in Learning Theory.
J. Viksna, Weak Inductive Inference.
H.U. Simon, Bounds on the Number of Examples Needed for Learning Functions.
R. Wiehagen, C.H. Smith and T. Zeugmann, Classification of Predicates and Languages.
K. Pillaipakkamnatt and V. Raghavan, Read-twice DNF Formulas can be Learned Properly.
J. Kivinen, H. Mannila and E. Ukkonen, Learning Rules with Local Exceptions.
J. Kivinen and M. Warmuth, Using Experts for Predicting Continuous Outcomes.
M. Anthony and J. Shawe-Taylor, Valid Generalisation of Functions from Close Approximation on a Sample.
N. Cesa-Bianchi, Y. Freund, D.P. Helmbold and M. Warmuth, On-line Prediction and Conversion Strategies.
A. Saoudi and T. Yokomori, Learning Local and Recognisable omega-Languages and Monadic Logic Programs.
K. Yamanishi, Learning Non-Parametric Smooth Rules by Stochastic Rules with Finite Partitioning.
H. Wiklicky, The Neural Network Loading Problem is Undecidable.
T. Hegedus, Learning Zero-one Threshold Functions and Hamming Balls over the Boolean Domain.

Members of the Organising Committee
===================================
John Shawe-Taylor (Chair: Royal Holloway, University of London, email to eurocolt at dcs.rhbnc.ac.uk), Martin Anthony (LSE, University of London), Jose Balcazar (Barcelona), Norman Biggs (LSE, University of London), Mark Jerrum (Edinburgh), Hans-Ulrich Simon (University of Dortmund), Paul Vitanyi (CWI Amsterdam).

Location
========
The conference will be held at Royal Holloway, University of London in Egham, Surrey, conveniently located 15 minutes' drive from London Heathrow airport. Accommodation will be either in the chateau-like original Founders Building or in en-suite rooms in a new block, also on the Royal Holloway campus. Accommodation fees range from 110 pounds to 150 pounds (inclusive of bed, breakfast and dinner), while the conference fee is 195 pounds (inclusive of lunch, coffee and tea; 140 pounds for students, with reductions available for IMA members; late application fee of 15 pounds if application received after 16th November).

--------------------------------------------------------------------
To: The Conference Officer, The Institute of Mathematics and its Applications, 16 Nelson Street, Southend-on-Sea, Essex SS1 1EF. Telephone: (0702) 354020.
Fax: (0702) 354111

Euro-COLT '93, 20th-22nd December, 1993, Royal Holloway, University of London
Please send me an application for the above conference.
TITLE ............ MALE/FEMALE .....
SURNAME .............................. FORENAMES ..................
ADDRESS FOR CORRESPONDENCE ...........................................
.....................................................................
TELEPHONE NO ........................ FAX NO .........................
Please send me information about the Euroconference scholarships ........ (Please tick if necessary)

From georg at ai.univie.ac.at Wed Sep 8 10:42:20 1993
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Wed, 8 Sep 1993 16:42:20 +0200
Subject: CFP - symposium on ANN and adaptive systems
Message-ID: <199309081442.AA09230@chicago.ai.univie.ac.at>

CALL FOR PAPERS for the symposium
======================================================
Artificial Neural Networks and Adaptive Systems
======================================================
chairs: Stephen Grossberg, USA, and Georg Dorffner, Austria
as part of the Twelfth European Meeting on Cybernetics and Systems Research, April 5-8, 1994, University of Vienna, Vienna, Austria

For this symposium, papers on any theoretical or practical aspect of artificial neural networks are invited. Special focus, however, will be put on the issue of adaptivity, both in practical engineering applications and in applications of neural networks to the modeling of human behavior. By adaptivity we mean the capability of a neural network to adjust itself to changing environments. Here a careful distinction is made between "learning" used to devise weight matrices for a neural network before it is applied (and usually left unchanged), on one hand, and "true" adaptivity of a given neural network to constantly changing conditions, i.e. real-time learning in nonstationary environments, on the other (a toy sketch of this distinction appears below). The following -- by no means exhaustive -- list gives possible topics in this realm:

- online learning of neural network applications facing changing data distributions
- transfer of neural network solutions to related but different domains
- application of neural networks for adaptive autonomous systems
- "phylogenetic" vs. "ontogenetic" adaptivity (e.g. adaptivity of connectivity and architecture vs. adaptivity of coupling parameters or weights)
- short-term vs. long-term adaptation
- adaptive reinforcement learning
- adaptive pattern recognition
- localized vs. distributed approximation (in terms of overlap of decision regions) and adaptivity

Preference will be given to contributions that address such issues of adaptivity, but, as mentioned initially, other original work on neural networks is also welcome. As an additional highlight, Prof. S. Grossberg will be one of the plenary speakers of the EMCSR 1994. Below is a description of the EMCSR conference containing guidelines for submissions. Note that for this particular symposium the deadline has been extended to

====================================
October 20, 1993
====================================

If you are planning to submit by this postponed deadline, please send a brief notification containing a tentative title of your submission to georg at ai.univie.ac.at by Oct 8 (the original deadline). Electronic submission (latex or postscript) to the same address is possible (note again that this applies only to this symposium, and does not apply to camera-ready accepted FINAL papers). !Hope to see you in Vienna!
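As a toy illustration of the distinction drawn in the call above (this sketch is an illustration only, not part of the call): a single weight updated by an online LMS rule keeps tracking a slowly drifting environment, which a network trained once and then frozen cannot do.

    import random

    def target_slope(t):
        return 1.0 + 0.001 * t          # the environment drifts slowly

    w = 0.0                             # single adaptive weight
    eta = 0.05                          # learning rate (sets tracking speed)
    for t in range(10000):
        x = random.uniform(-1.0, 1.0)
        y = target_slope(t) * x         # supervision from the current environment
        w += eta * (y - w * x) * x      # LMS update, applied in real time
    # w now approximates the *current* slope (about 11.0),
    # not the long-run average a one-shot fit would give.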
About EMCSR'94:
======================================================================
Twelfth European Meeting on Cybernetics and Systems Research, April 5-8, 1994, at the University of Vienna (Main Building)

Organizers: Austrian Society for Cybernetic Studies, in co-operation with the University of Vienna, Department of Medical Cybernetics and Artificial Intelligence, and the International Federation for Systems Research.

Chairman: Robert Trappl, President of the Austrian Society for Cybernetic Studies

Conference fee:
Contributors: AS 2500 if paid before January 31, 1994; AS 3200 if paid later
Participants: AS 3500 if paid before January 31, 1994; AS 4200 if paid later
(AS 100 = about $9)

The conference fee includes participation in the Twelfth European Meeting, attendance at official receptions, and the volume of the proceedings available at the Meeting. Please send a cheque, or transfer the amount free of charges for the beneficiary to our account no. 0026-34400/00 at Creditanstalt-Bankverein Vienna. Please state your name clearly.

About the Congress: The international support of the European Meetings on Cybernetics and Systems Research held in Austria in 1972, 1974, 1976, 1978, 1980, 1982, 1984, 1986, 1988, 1990 and 1992 (when 300 scientists from more than 30 countries met to present, hear and discuss 210 papers) encouraged the Council of the Austrian Society for Cybernetic Studies (OSGK) to organize a similar meeting in 1994, to keep pace with continued rapid developments in related fields. A number of Symposia will be arranged, and we are grateful to colleagues who have undertaken the task of preparing these events. As on the earlier occasions, eminent speakers of international reputation will present the latest research results at daily plenary sessions. The Proceedings of the 10th and 11th European Meetings on Cybernetics and Systems Research, edited by R. Trappl, have been published by World Scientific, Singapore, as CYBERNETICS AND SYSTEMS '90 (1 vol., 1107 p.) and CYBERNETICS AND SYSTEMS '92 (2 vols., 1685 p.).
Symposia:
A General Systems Methodology -- G.J. Klir, USA
B Advances in Mathematical Systems Theory -- M. Peschel, Germany & F. Pichler, Austria
C Fuzzy Sets, Approximate Reasoning & Knowledge Based Systems -- C. Carlsson, Finland, K-P. Adlassnig, Austria & E.P. Klement, Austria
D Designing and Systems, and Their Education -- B. Banathy, USA, W. Gasparski, Poland & G. Goldschmidt, Israel
E Humanity, Architecture and Conceptualization -- G. Pask, UK & G. de Zeeuw, Netherlands
F Biocybernetics and Mathematical Biology -- L.M. Ricciardi, Italy
G Systems and Ecology -- F.J. Radermacher, Germany & K. Freda, Austria
H Cybernetics and Informatics in Medicine -- G. Gell, Austria & G. Porenta, Austria
I Cybernetics of Socio-Economic Systems -- K. Balkus, USA & O. Ladanyi, Austria
J Systems, Management and Organization -- G. Broekstra, Netherlands & R. Hough, USA
K Cybernetics of National Development -- P. Ballonoff, USA, T. Koizumi, USA & S.A. Umpleby, USA
L Communication and Computers -- A M. Tjoa, Austria
M Intelligent Autonomous Systems -- J.W. Rozenblit, USA & H. Praehofer, Austria
N Cybernetic Principles of Knowledge Development -- F. Heylighen, Belgium & S.A. Umpleby, USA
O Cybernetics, Systems & Psychotherapy -- M. Okuyama, Japan & H. Koizumi, USA
P Artificial Neural Networks and Adaptive Systems -- S. Grossberg, USA & G. Dorffner, Austria
Q Artificial Intelligence and Cognitive Science -- V. Marik, Czechia & R. Born, Austria
R Artificial Intelligence & Systems Science for Peace Research -- S. Unseld, Switzerland & R. Trappl, Austria

Submission of papers: Acceptance of contributions will be determined on the basis of Draft Final Papers. These papers must not exceed 7 single-spaced A4 pages (maximum 50 lines; final size will be 8.5 x 6 inches), in English. They must contain the final text to be submitted, including graphs and pictures; however, these need not be of reproducible quality. The Draft Final Paper must carry the title, author(s) name(s), and affiliation, in this order. Please specify the symposium in which you would like to present your paper. Each scientist shall submit only one paper. Please send three copies of the Draft Final Paper to the Conference Secretariat (except for electronic submissions to symposium P -- see above).

DEADLINE FOR SUBMISSION: October 8, 1993 (Oct 20 for symposium P). In order to enable careful refereeing, Draft Final Papers received after the deadline cannot be considered.

FINAL PAPERS: Authors will be notified about acceptance no later than November 13, 1993. At the same time, the conference secretariat will provide them with detailed instructions for the preparation of the final paper.

PRESENTATION: It is understood that the paper is presented personally at the Meeting by the contributor.

HOTEL ACCOMMODATION will be handled by Oesterreichisches Verkehrsbuero, Kongressabteilung, Opernring 5, A-1010 Vienna, phone +43-1-58800-113, fax +43-1-5867127, telex 111 222. Reservation cards will be sent to all those returning the attached registration form.

SCHOLARSHIPS: The Austrian Federal Ministry for Science and Research has kindly agreed to provide a limited number of scholarships covering the registration fee for the conference and part of the accommodation costs for colleagues from eastern and south-eastern European countries. Applications should be sent to the Conference Secretariat before October 8, 1993.

For further information about the Congress, contact: EMCSR 94 - Secretariat, Oesterreichische Studiengesellschaft fuer Kybernetik, A-1010 Wien 1, Schottengasse 3, Austria.
Phone: +43-1-53532810, Fax: +43-1-5320652, E-mail: sec at ai.univie.ac.at

_______________________________________________________________
REGISTRATION FORM
_______________________________________________________________
EMCSR-94, Twelfth European Meeting on Cybernetics and Systems Research
Please return to: Austrian Society for Cybernetic Studies, Schottengasse 3, A-1010 Vienna, AUSTRIA (EUROPE)

o I plan to attend the Meeting
o I intend to submit a paper to Symposium ...
o I enclose the Draft Final Paper
o My Draft Final Paper will arrive prior to October 8, 1993
o My cheque for AS .... covering the Conference Fee is enclosed
o I have transferred AS .... to your account 0026-34400/00 at Creditanstalt Vienna
o I shall not be at the Meeting but am interested in receiving particulars of the Proceedings

Name: Prof./Dr./Ms./Mr. ......................................
Address: .....................................................
...............................................................
Fax: .............................E-Mail: ...................
Date: ....... Signature:
_______________________________________________________________

From dhw at santafe.edu Wed Sep 8 16:11:01 1993
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Wed, 8 Sep 93 14:11:01 MDT
Subject: New file in neuroprose
Message-ID: <9309082011.AA08170@zia>

*** DO NOT FORWARD TO OTHER BOARDS OR MAILING LISTS ***

New file in neuroprose:
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/wolpert.field-comp.ps.Z

A Computationally Universal Field Computer That is Purely Linear
by D. H. Wolpert and B. J. MacLennan

Abstract: As defined in MacLennan (1987), a "field computer" is a (spatial) continuum-limit neural net. This paper investigates field computers whose dynamics is also continuum-limit, being governed by a purely linear integro-differential equation. Such systems are motivated both as a means of studying neural nets and as a model for cognitive processing. As this paper proves, such systems are computationally universal. The "trick" used to get such universal nonlinear behavior from a purely linear system is quite similar to the way nonlinear macroscopic physics arises from the purely linear microscopic physics of Schrodinger's equation. More precisely, the "trick" involves two parts. First, the kind of field computer studied in this paper is a continuum-limit threshold neural net. That is, the meaning of the system's output is determined by which neurons have an activation exceeding a threshold (which in this paper is taken to be 0), rather than by the actual activation values of the neurons. Second, the occurrence of output is determined in the same thresholding fashion; output is available only when certain "output-flagging" neurons exceed the threshold, rather than after a certain fixed number of iterations of the system. In addition to proving and discussing their computational universality, this paper cursorily investigates the dynamics of these kinds of systems.

INDEX: wolpert.field-comp.ps.Z, 28 pages. A computationally universal continuum-limit neural net which is purely linear.

Instructions for retrieval: log on to the FTP host as anonymous, type 'binary', get the file, quit FTP, uncompress the file, and print out the resulting postscript. Thanks to Jordan Pollack for maintaining this archive.

David Wolpert, The Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, NM 87501, USA
dhw at santafe.edu; (505) 988-8814 (voice); (505) 982-0565 (fax)
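A finite, discrete-time caricature may help fix ideas about the abstract's two "tricks". The toy below is an illustration only, not the continuum-limit construction of the paper: the dynamics applied to the state are purely linear, and all the non-linearity lives in the read-out, which is consulted only when a designated output-flagging unit exceeds the threshold 0 and then reports only which units exceed 0.

    import numpy as np

    rng = np.random.default_rng(0)
    W = 0.5 * rng.normal(size=(8, 8))   # fixed linear dynamics: no sigmoids anywhere
    x = rng.normal(size=8)              # initial activations
    FLAG = 7                            # designated "output-flagging" unit

    for step in range(100):
        x = W @ x                       # purely linear update
        if x[FLAG] > 0:                 # output exists only once the flag unit
            bits = (x[:4] > 0).astype(int)       # exceeds threshold 0; read out
            print("step", step, "output bits:", bits)  # which units exceed 0,
            break                       # not their actual activation values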
From penev%firenze%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU Thu Sep 9 13:07:09 1993
From: penev%firenze%venezia.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU (Penio Penev)
Date: Thu, 9 Sep 1993 12:07:09 -0500 (EDT)
Subject: Universal behaviour of "linear" systems
In-Reply-To: <9309082011.AA08170@zia> from "dhw@santafe.edu" at Sep 8, 93 02:11:01 pm
Message-ID: <9309091607.AA03475@firenze>

dhw at santafe.edu wrote:
| [..] The "trick" used to get such universal
| nonlinear behavior from a purely linear system is quite similar to the
| way nonlinear macroscopic physics arises from the purely linear
| microscopic physics of Schrodinger's equation. More precisely, the
| "trick" involves two parts. First, the kind of field computer
| studied in this paper is a continuum-limit threshold neural net.

Thresholding is the most non-linear finite function. In a sense, the computer I'm writing these lines on is a finite linear machine with an "IF" function, which is the same as thresholding. If I believe that my machine (given infinite disk space) is computationally universal, I have no problem believing that adding an infinite processor to it would be universal also.

-- Penio Penev x7423 (212) 327-7423 (w) Internet: penev at venezia.rockefeller.edu

From dhw at santafe.edu Fri Sep 10 01:30:35 1993
From: dhw at santafe.edu (David Wolpert)
Date: Thu, 9 Sep 93 23:30:35 MDT
Subject: No subject
Message-ID: <9309100530.AA26262@sfi.santafe.edu>

Penio Penev writes of the recently posted abstract from my paper w/ Bruce MacLennan:

>>> Thresholding is the most non-linear finite function. In a sense, the computer I'm writing these lines on is a finite linear machine with an "IF" function, which is the same as thresholding. If I believe that my machine (given infinite disk space) is computationally universal, I have no problem believing that adding an infinite processor to it would be universal also. >>>

Let me emphasize some points which are made clear in the paper (which I suspect Dr. Penev has not yet read). First, as even the abstract mentions, all that is non-linear in our system is the *representation*; the dynamics as the signal propagates through the net is purely linear. As an analogy, it's like having a conventional neural net, except that the net doesn't use sigmoids to go from layer to layer; the dynamics as the signal is propagated through layers is purely linear. Then, when the signal has run through, you interpret the output by looking at which of the output neurons don't have the value 0. That ending interpretation is the *only* place that thresholding arises. The important point is that it's the representation which is non-linear, not the dynamics. Note that such non-linear representations are quite common in neural nets; results like those in our paper show that that non-linearity suffices (in the continuum limit), and the non-linearity of sigmoids is not needed.

Second, although I'm not sure I understand exactly what Dr. Penev means by "adding an infinite processor to an infinite disk space machine", I certainly would agree that, with *countably* infinite "memory" and *discrete* dynamics of the signal through the net, it's trivial to use thresholding to get computational universality. In fact, we devote a paragraph to this very point early in the paper.
What's quite a bit more difficult is emulating an arbitrary (discrete space and time) Turing machine using a neural net with an *uncountably* infinite "memory" and *continuum* dynamics of the signal through the net (i.e., signal dynamics governed by differential equations rather than by discrete time steps). In addressing this issue, our paper parallels Steve Omohundro's earlier work showing how to emulate an arbitrary (discrete space and time) cellular automaton with differential equations.

There are several interesting aspects to such issues. One is the intrinsic interest of the math. Another is the fact that the universe is continuous, not discrete. (Note the resultant issue of trying to use continuum-limit analyses like that of our paper to construct analog computers.) And perhaps most enticingly, there's the fact that one example of a system like that described in our paper is a wave function evolving according to Schrodinger's equation. (The dynamics in quantum mechanics is purely linear, with the non-linearity we see in the world around us arising from the "thresholding" of the collapse of the wave packet, roughly speaking.) Although we didn't have space to pursue this issue in our paper, it suggests that the systems we investigate can be "trained" using the very thoroughly understood machinery of quantum-mechanical scattering theory. All of this is discussed in our paper.

From josh at faline.bellcore.com Fri Sep 10 13:17:42 1993
From: josh at faline.bellcore.com (Joshua Alspector)
Date: Fri, 10 Sep 93 13:17:42 EDT
Subject: Telecom workshop early registration deadline next week
Message-ID: <9309101717.AA26847@faline.bellcore.com>

International Workshop on Applications of Neural Networks to Telecommunications
Nassau Inn, Princeton, NJ, October 18-20, 1993

You are invited to an international workshop on applications of neural networks to problems in telecommunications. The workshop will be held at the historic Nassau Inn (across from the university) in Princeton, New Jersey, on October 18-20, 1993. The conference rate is $95 single, $135 double. You can make reservations directly with the hotel (mention IWANNT*93): Nassau Inn, 10 Palmer Square, Princeton, NJ 08542; (800) 862-7728 (within USA); (609) 921-7500 (outside USA); (609) 921-9385 (FAX).

In addition to the traditional hard-bound proceedings, we will also have an on-line electronic conference proceedings, with automatic indexing and cross-referencing, multimedia figures, and annotations through which readers and authors can comment.

Tentative Schedule

Monday Oct. 18, 1993, Prince William Ballroom
8:30 Coffee and registration
9:00 J. Alspector, "Overview"
Session 1
9:30 B.J. Sheu, "Programmable VLSI Neural Network Processors for Equalization of Digital Communication Channels"
10:00 A. Jayakumar & Josh Alspector, "An Analog Neural-Network Co-Processor System for Rapid Prototyping of Telecommunications Applications"
10:30 Break
Session 2
11:00 J. Cid-Sueiro, "Improving Conventional Equalizers with Neural Networks"
11:30 T. X. Brown, "Neural Networks for Equalization"
12:00 R. Goodman, B. Ambrose, "Applications of Learning Techniques to Network Management"
12:30 Lunch
Session 3
1:30 M. Littman & J. Boyan, "A Distributed Reinforcement Learning Scheme for Network Routing"
2:00 M. Goudreau, C. L. Giles, "Discovering the Structure of a Self Routing Interconnection Network with a Recurrent Neural Network"
2:30 G. Kechriotis, E. Manolakos, "Implementing the Optimal CDMA Multiuser Detector with Hopfield Neural Networks"
3:00 Break
Session 4
3:30 A. Jagota, "Scheduling Problems in Radio Networks Using Hopfield Networks"
4:00 E. Nordstrom, M. Gustafsson, O. Gallmo, L. Asplund, "A Hybrid Admission Control Scheme for Broadband ATM Traffic"
4:30 A. Tarraf, I. Habib, T. Saadawi, "Characterization of Packetized Voice Traffic in ATM Networks Using Neural Networks"
6:00 Reception

Tuesday Oct. 19, 1993, Prince William Ballroom
8:30 Coffee
9:00 Speaker, Title (Invited Talk)
10:00 Break
Session 5
10:30 A. Chhabra, S. Chandran, R. Kasturi, "Table Structure Interpretation & Neural Network Based Text Recognition for Conversion of Telephone Company Tabular Drawings"
11:00 A. Amin, H. Al-Sadoun, "Arabic Character Recognition System Using Artificial Neural Network"
11:30 G-E Wang, J-F Wang, "A New Hierarchical Approach for Recognition of Unconstrained Handwritten Numerals"
Session 6
12:00 Poster session & Lunch

POSTER SESSION
J. E. Neves, "ATM Call Control by Neural Networks"
A. Farago, "A Neural Structure for Dynamic Routing and Resource Management in ATM Networks"
S. Amin, M. Gell, "Constrained Optimisation for Switching and Routing Using Neural Networks"
V. Cherkassky, Y-K Park, G. Lee, "ATM Cell Scheduling for Broadband Switching Systems by Neural Network"
S. Neuhauser, "Hopfield Optimization Techniques Applied to Routing in Computer Networks"
F. Comellas, R. Roca, "Using Genetic Algorithms to Design Constant Weight Codes"
P. Leray, "CUBICORT: A Hardware Simulation of a Multicolumn Model for 3D Image Analysis, Understanding & Compression for Digital TV, HDTV & Multimedia"
N. Karunanithi, "A Connectionist Approach for Incorporating Continuous Code Churn into Software Reliability Growth Models"
A. Lansner, "Hierarchical Clustering Using a Bayesian Attractor ANN"
A. Holst and A. Lansner, "Diagnosis of Technical Equipment Using a Bayesian Neural Network"
T. Martinez, G. Rudolph, "A Learning Model for Adaptive Routing"
S. Haykin, L. Li, "16 kbps Nonlinear Adaptive Differential Pulse Code Modulation"
M. K. Sonmez, T. Adali, "Channel Equalization by Distribution Learning: The Least Relative Entropy Algorithm"
J. Connor, "Bootstrapping in Time Series Prediction"
A. Kowalczyk and M. Dale, "Isolated Speech Recognition with Low Cost Neural Networks"
M. Meyer & G. Pfeiffer, "Multilayer Perceptron Based Decision Feedback Equalizers Applied to Nonlinear Channels with Intersymbol Interference"
H. Liu & D. Yun, "Self-Organizing Finite State Vector Quantization for Image Coding"
A. Hasegawa, K. Shibata, K. Itoh, Y. Ichioka, K. Inamura, "Adapting-Size Neural Network for Character Recognition on X-Ray Films"
A. Mikler, J. Wong, V. Honavar, "Quo Vadis - A Framework for Adaptive Routing in Very Large High Speed Communication Networks"
Chen-Xiong Zhang, "Optimal Traffic Routing Using Self-Organization Principle"
S. Kwasny, B. Kalman, A. M. Engebretson, W. Wu, "Real-Time Identification of Language from Raw Speech Waveforms"

Session 7
4:00 Board buses for AT&T Worldwide Intelligent Network Center
5:00 Reception and tour
Session 8
7:00 Banquet
8:30 B. Widrow, "Adaptive Filters, Adaptive Neural Nets, and Telecommunications" (Invited talk)

Wednesday Oct. 20, 1993, Prince William Ballroom
8:30 Coffee
9:00 Speaker, Title (Invited Talk)
10:00 Break
Session 9
10:30 J. Connor, "Prediction of Access Line Growth"
11:00 B. P. Yuhas, "Telephone Fraud Detection"
11:30 T. John, "Multistage Information Filtering Using Cascaded Neural Networks"
12:00 M. Jabri, "Temporal Credit Assignment for Continuous Speech Recognition"
Jabri, "Temporal Credit Assignment for Continuous Speech Recognition" 12:30 Lunch Session 10 1:30 T-D. Chiueh, T-T Tang, L-G Chen, "Vector Quantization Using Tree-Structured Self-Organizing Feature Maps" 2:00 N. Karunanithi, "Identifying Fault-Prone Software Modules Using Connectionist Networks" 2:30 D.S.W. Tansley, S. Carter, "Clone Detection in Telecommunications Software Systems: A Neural Net Approach" 3:00 Break Session 11 3:30 L. Lewis, S. Sycamore, "Learning Index Rules & Adaptation Functions for a Communications Network Fault Resolution System" 4:00 T. Sone, "Using Distributed Neural Networks to Identify Faults in Switching Systems" 4:30 A. Chattell, "A Neural Network Pre-Processor for a Fault Diagnosis Expert System" 5:00 Adjourn Organizing Committee: General Chair Josh Alspector Bellcore, MRE 2P-396 445 South St. Morristown, NJ 07960-6438 (201) 829-4342 josh at bellcore.com Program Chair Rod Goodman Caltech 116-81 Pasadena, CA 91125 (818) 356-3677 rogo at micro.caltech.edu Publications Chair Timothy X Brown Bellcore, MRE 2E-378 445 South St. Morristown, NJ 07960-6438 (201) 829-4314 timxb at faline.bellcore.com Treasurer Anthony Jayakumar, Bellcore Events Coordinator Larry Jackel, AT&T Bell Laboratories Industry Liaisons Miklos Boda, Ellemtel Atul Chhabra, NYNEX Michael Gell, British Telecom Lee Giles, NEC Thomas John, Southwestern Bell Adam Kowalczyk, Telecom Australia Tadashi Sone, NTT University Liaisons S Y Kung, Princeton University Tzi-Dar Chiueh, National Taiwan University INNS Liaison Bernie Widrow, Stanford University IEEE Liaison Steve Weinstein, Bellcore Conference Administrator Betty Greer Bellcore, MRE 2P-295 445 South St. Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- International Workshop on Applications of Neural Networks to Telecommunications Princeton, NJ October 18-20, 1993 Registration Form Name: _____________________________________________________________ Institution: __________________________________________________________ Mailing Address: ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ Telephone: ______________________________ Fax: ____________________________________ E-mail: _____________________________________________________________ Registration Fees: include reception, banquet, refreshment breaks, AT&T tour, and both paper and electronic proceedings available at the conference. | | Early (Before Sept. 15, 1993) $350 | | Late (After Sept. 15, 1993) $450 | | Full time students with ID $150 Enclosed is a check or money order in US Dollars for $___________ Please make check payable to IWANNT*93 Hotel arrangements with Nassau Inn at (609) 921-9385 Mail to: Betty Greer, IWANNT*93 Bellcore, MRE 2P-295 445 South St.
Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com  From mitsu at netcom.com Fri Sep 10 13:49:24 1993 From: mitsu at netcom.com (Mitsu Hadeishi) Date: Fri, 10 Sep 93 10:49:24 -0700 Subject: A Computationally Universal Field Computer Message-ID: <9309101749.AA23417@netcom5.netcom.com> David Wolpert writes in response to Penio Penev regarding his recently posted abstract: >What's quite a bit more difficult is emulating an arbitrary (discrete >space and time) Turing machine using a neural net with an *uncountably* >infinite "memory", and *continuum* dynamics of the signal through >the net (i.e., signal dynamics governed by differential equations, rather >than by discrete time steps). I'm interested to know what effect noise or signal degradation might have on your technique for emulating an arbitrary Turing machine using your linear field computer. I would imagine that you would get some sort of statistical approximation of a Turing machine; for example, you'd have a certain (perhaps quite high) probability of correct results. Of course, real physical finite computers have exactly the same problem, with the potential for memory failures and other unexpected deviations from theory. However, I was wondering to what extent your technique for getting computational universality was sensitive to noise and whether you addressed this issue in your paper. Mitsu Hadeishi Open Mind  From jagota at cs.Buffalo.EDU Fri Sep 10 17:08:05 1993 From: jagota at cs.Buffalo.EDU (Arun Jagota) Date: Fri, 10 Sep 93 17:08:05 EDT Subject: NIPS*93 workshop Message-ID: <9309102108.AA15660@hadar.cs.Buffalo.EDU> CALL FOR PARTICIPATION NIPS*93 workshop on Neural Network Methods for Optimization Problems There are 4-5 slots remaining for brief oral presentations of 20-30 minutes each. To be considered, submit either (i) a title and one page abstract or (ii) a bibliography of recent work on the topic. Please submit materials by electronic mail to Arun Jagota (jagota at cs.buffalo.edu) by October 5. Later submissions risk finding no open slots remaining. Program: ------- Ever since the work of Hopfield and Tank, neural networks have found increasing use for the approximate solution of hard optimization problems. The successes in the past have, however, been limited when compared to traditional methods. In this workshop we will discuss the state of the art of neural network algorithms for optimization, examine their weaknesses and strengths, and discuss potential for improvement. Second, as the algorithms arise from different areas (e.g., some from statistical physics, others from computer science), we hope that researchers from these disciplines will share their own insights with others. Third, we also hope to discuss theoretical issues that arise in using neural network algorithms for optimization. Finally, we hope to have people discuss parallel implementation issues or case studies. --------------------- Arun Jagota  From SABBATINI%ccvax.unicamp.br at UICVM.UIC.EDU Sat Sep 11 20:35:34 1993 From: SABBATINI%ccvax.unicamp.br at UICVM.UIC.EDU (SABBATINI%ccvax.unicamp.br@UICVM.UIC.EDU) Date: Sat, 11 Sep 1993 20:35:34 BSC (-0300 C) Subject: Neural Networks in Biomed.Engineer. IEEE Conf. (S.Diego) Message-ID: <01H2U2W4LAL28WW799@ccvax.unicamp.br> 15th Annual International Conference IEEE Engineering in Medicine and Biology Society San Diego, CA, October 28-31, 1993 ACTIVITIES ON NEURAL NETWORKS 1.
Special IFAC Sessions: Frontiers in Neural Network Control ---------------------------------------------------------- Two special sessions sponsored by the International Federation of Automatic Control will address emerging control systems and signal processing principles, algorithms and applications that are inspired by biological and artificial neural networks. Examples of engineering design strategies with brain-like capabilities will be discussed. Application areas include adaptive control, system identification, connectionist neural networks, reinforcement learning, brain models and pattern recognition. Speakers: Dr. Paul Werbos (NSF/USA), Dr. James Albus (National Institute of Standards and Technology, USA), Dr. K.S. Narendra (Yale University, USA), Dr. Renato M.E. Sabbatini (State University of Campinas, Brazil) and Dr. Chi-Sang Poon (MIT/USA). 2. Workshop on Neural Networks in Biomedical Engineering ----------------------------------------------------- This workshop (Oct. 27, 9:00-17:00) will address the roles of neural networks in biomedical engineering. Topics covered will include the use of neural networks in signal analysis (Dr. Evangelia Micheli-Tzanakou, Rutgers Univ.), the relations between Volterra expansions and neural networks (Dr. V.Z. Marmarelis, Univ. Southern California), the use of neural networks in medical diagnosis (Dr. C.N. Schizas, Univ. Cyprus) and the implications of processing in the brain for artificial neural networks (Dr. Stuart R. Hameroff, Univ. Arizona). 3. Technical Sessions ------------------ Papers on neural networks will be presented in seven technical sessions (Neural Networks I-VII) with oral presentations and one session with poster presentations. All sessions will be 90 minutes long. Venue ----- ITT Sheraton Harbor Island Hotel Proceedings ----------- The conference proceedings (more than 2200 papers) will be available in paper form and in a CD-ROM with retrieval software (IEEE Press). Information and Registration ---------------------------- IEEE/EMBS Conference Management Office Meeting Management 5665 Oberlin Drive, Suite 110 San Diego, CA 92121, USA Phone (619) 453-6222 Fax (619) 535-3880 Email 70750.345 at compuserve.com n.feldman at ieee.org (Abstracted from the official Invitation Folder)  From dhw at santafe.edu Sun Sep 12 11:49:00 1993 From: dhw at santafe.edu (David Wolpert) Date: Sun, 12 Sep 93 09:49:00 MDT Subject: No subject Message-ID: <9309121549.AA00766@sfi.santafe.edu> Mitsu Hadeishi writes: >>>>> David Wolpert writes in response to Penio Penev regarding his recently posted abstract: >What's quite a bit more difficult is emulating an arbitrary (discrete >space and time) Turing machine using a neural net with an *uncountably* >infinite "memory", and *continuum* dynamics of the signal through >the net (i.e., signal dynamics governed by differential equations, rather >than by discrete time steps). I'm interested to know what effect noise or signal degradation might have on your technique for emulating an arbitrary Turing machine using your linear field computer. I would imagine that you would get some sort of statistical approximation of a Turing machine; for example, you'd have a certain (perhaps quite high) probability of correct results. Of course, real physical finite computers have exactly the same problem, with the potential for memory failures and other unexpected deviations from theory.
However, I was wondering to what extent your technique for getting computational universality was sensitive to noise and whether you addressed this issue in your paper. >>>>> This is a very good question. It opens up the whole area of error-correcting field computers. Although we've thought about this issue, we haven't addressed it in any detail, other than to note a change of basis which might facilitate robustness against noise. (We didn't want to try to cram too much into the paper.) In general though, it would seem that the issue is dependent on a number of factors. First, the *kind* of noise process is important. Our system is one which evolves continuously in time, and is continuous in space (i.e., in neuron index). Accordingly, one might think that something like a Wiener process for noise would be appropriate. But it's not hard to think of other possibilities. In particular, if the noise is multiplicative (or for some other reason preserves the set of neurons which are non-zero), then it won't affect our system *at all*, since our system is linear, and its interpretation only depends on the set of neurons which are non-zero, and not their actual values. More generally, one would expect that for most kinds of noise processes, the degree of degradation would depend on how long the system runs (especially given that our system is linear). So the accuracy of emulation of a particular Turing machine, when noise is involved, is in general undecidable - if the Turing machine doesn't halt, one would generically expect that any (non-multiplicative) noise process would result in complete degradation eventually. Second, one of the first things we do in demonstrating computational universality is show how to transform the original system, which is continuous-space, continuous-time, with a time-independent weight matrix, to a new system, which is discrete-space, continuous-time, with a time-dependent weight matrix. (We then demonstrate computational universality for this second system.) In essence, we transform the system into a "cellular automaton" evolving continuously in time, with time-dependent dynamics. This transformation relies on choosing the original time-independent weight matrix to lie in a particular equivalence class of such matrices. This is important because if the noise doesn't interfere with this transformation, then it will only affect how accurately our resultant "cellular automaton" is mimicking a Turing machine. However, if the noise instead interferes with the transformation, we will have automatic "cross-talk" going on *continuously* between all possible Turing machines. (Only at t = 0 would we be coding for one particular Turing machine.) Generically, this will affect the degradation of our system differently. *** Finally, in practice one would (often) want to use our system the same way neural nets are used - as simple mappings from inputs to outputs, which reproduce a given training set (up to regularization issues), rather than as systems which reproduce a given Turing machine. (In many senses, proving computational universality is of interest because it establishes the potential power of a system, not because it offers a practical way to use the system.) And in general, noise is much less calamitous if all you're trying to do is reproduce a given finite (!) training set - you have much more freedom to set things up so that there are "buffers" around the behavior you want to get, buffers which can absorb the noise.
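[A minimal numerical sketch of the multiplicative-noise point above. It is not code from the paper under discussion; the shift matrix, the noise ranges, and the step count are arbitrary illustrative choices. The point it demonstrates: in a linear update x <- Wx whose interpretation depends only on which components are non-zero, noise that merely rescales components leaves that set intact, while additive noise generically does not.

import numpy as np

rng = np.random.default_rng(0)
n = 8
W = np.roll(np.eye(n), 1, axis=0)          # cyclic shift, so the support stays sparse
x0 = np.zeros(n)
x0[[1, 4, 6]] = 1.0                        # the "interpretation": which neurons are non-zero

def support(x, tol=1e-9):
    return set(np.flatnonzero(np.abs(x) > tol))

x_clean, x_mult, x_add = x0.copy(), x0.copy(), x0.copy()
for _ in range(20):
    x_clean = W @ x_clean
    x_mult = (W @ x_mult) * rng.uniform(0.5, 1.5, size=n)   # multiplicative noise: zeros stay zero
    x_add = W @ x_add + 1e-6 * rng.standard_normal(n)       # additive noise: perturbs the zeros too

print(support(x_mult) == support(x_clean))   # True: the non-zero set is preserved
print(support(x_add) == support(x_clean))    # False: the encoding is corrupted]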
From jbower at smaug.bbb.caltech.edu Sun Sep 12 22:04:45 1993 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Sun, 12 Sep 93 19:04:45 PDT Subject: New GENESIS version 1.4 Message-ID: <9309130204.AA00830@smaug.bbb.caltech.edu> ------------------------------------------------------------------------ This is to announce the availability of a new release of the GENESIS simulator. This version (ver. 1.4.1, August 1993) is greatly improved from the previous public release (ver. 1.1, July 1990). Description: GENESIS (GEneral NEural SImulation System) is a general purpose simulation platform which was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. Most current GENESIS applications involve realistic simulations of biological neural systems. Although the software can also model more abstract networks, other simulators are more suitable for backpropagation and similar connectionist modeling. GENESIS and its graphical front-end XODUS are written in C and run on SUN and DEC graphics workstations under UNIX (Sun version 4.0 and up, Ultrix 3.1, 4.0 and up), and X-windows (versions X11R3, X11R4, and X11R5). The current version of GENESIS has also been used with Silicon Graphics (Irix 4.0.1 and up) and the HP 700 series (HPUX). The distribution includes full source code and documentation for both GENESIS and XODUS as well as fourteen demonstration and tutorial simulations. Documentation for these simulations is included, along with three papers that describe the general organization of the simulator. The distributed compressed tar file is about 3 MB in size. In addition to sample simulations which demonstrate the construction of neural simulations, the new GENESIS release contains a number of interactive tutorials for teaching concepts in neurobiology and realistic neural modeling. As their use requires no knowledge of GENESIS programming, they are suitable for use in a computer simulation laboratory which would accompany upper division undergraduate and graduate neuroscience courses, or for self-study. Each of these has on-line help and a number of suggested exercises or "experiments". These tutorials may also be taken apart and modified to create your own simulations, as several of them are derived from recent research simulations. The following papers give further information about GENESIS: Wilson, M. A., Bhalla, U. S., Uhley, J. D., and Bower, J. M. (1989) GENESIS: A system for simulating neural networks. In: Advances in Neural Information Processing Systems. D. Touretzky, editor. Morgan Kaufmann, San Mateo, CA. pp. 485-492 Matthew A. Wilson and James M. Bower, "The Simulation of Large-Scale Neural Networks", in Methods in Neuronal Modeling, Christof Koch and Idan Segev, editors. (MIT Press, 1989) Acquiring GENESIS via free FTP distribution: GENESIS may be obtained via FTP from genesis.cns.caltech.edu (131.215.137.64). As this is a large software package, please read the above description to determine if GENESIS is likely to be suitable for your purposes before you follow this procedure. To acquire the software use 'telnet' to connect to genesis.cns.caltech.edu and login as the user "genesis" (no password required). If you answer all the questions asked of you, an 'ftp' account will automatically be created for you. You can then 'ftp' back to the machine and download the software. Further inquiries concerning GENESIS may be addressed to genesis at cns.caltech.edu.
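[For orientation, the basic object a compartmental simulator such as GENESIS integrates is a membrane equation for each compartment. The following is a minimal sketch in plain Python, not GENESIS script; the parameter values are generic textbook-style choices, not defaults taken from the distribution. It integrates the passive equation Cm dV/dt = -(V - Em)/Rm + I for a single compartment driven by a constant current step.

# Forward-Euler integration of one passive membrane compartment.
# All parameter values are illustrative only.
Cm, Rm, Em = 1e-9, 1e8, -0.070   # capacitance (F), membrane resistance (ohm), rest potential (V)
I_inject = 0.2e-9                # injected current (A); steady state = Em + I_inject*Rm = -0.050 V
dt, T = 1e-4, 0.5                # time step and duration (s); membrane tau = Rm*Cm = 0.1 s

V = Em
for _ in range(int(T / dt)):
    V += dt * (-(V - Em) / Rm + I_inject) / Cm

print("V after %g s: %.4f V" % (T, V))   # about -0.0501 V, close to the steady state

Realistic models add active (Hodgkin-Huxley-style) conductances and couple many such compartments, but the numerical core is this kind of differential equation; this is what distinguishes simulators like GENESIS from backpropagation-style connectionist tools.]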
From mclennan at cs.utk.edu Mon Sep 13 18:09:39 1993 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Mon, 13 Sep 93 18:09:39 -0400 Subject: Hadeishi's comments Message-ID: <9309132209.AA00564@maclennan.cs.utk.edu> I would like to add one comment to David Wolpert's reply to Mitsu Hadeishi's comments about our paper. That is a reminder that a TM is an idealized model of computation which ignores all the real-life problems of signal detection and classification, for example, in reading its tape. It's assumed to operate perfectly and in a noise-free environment. We make analogous assumptions in the continuous case, so that we also have an idealized model of computation. It's a theoretical construction for theoretical purposes, and I can no more imagine a practical use for it than for, say, a Turing machine simulation of a recursive function evaluator (e.g., executing LISP on a TM). Like most theoretical constructions, ours makes physically unrealistic assumptions and is extravagant in its use of resources. Bruce MacLennan Department of Computer Science The University of Tennessee Knoxville, TN 37996-1301 (615)974-0994/5067 FAX: (615)974-4404 maclennan at cs.utk.edu  From edelman at wisdom.weizmann.ac.il Tue Sep 14 05:37:49 1993 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Tue, 14 Sep 93 11:37:49 +0200 Subject: TR available: Representation, Similarity, and the Chorus of Prototypes Message-ID: <9309140937.AA22716@wisdom.weizmann.ac.il> The technical report described below is available via anonymous ftp from eris.wisdom.weizmann.ac.il (132.76.80.53) as /pub/revised-simil.ps.Z The size of the compressed Postscript file is about 330Kb; the uncompressed file is 2.4Mb and has 21 pages (including figures). -Shimon -------------------------------------------------------------------------------- Representation, Similarity, and the Chorus of Prototypes Shimon Edelman Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science Rehovot 76100, Israel July 1993 (revised September 1993) \begin{abstract} It is proposed to conceive of representation as an emergent phenomenon that is supervenient on patterns of activity of coarsely tuned and highly redundant feature detectors. The computational underpinnings of the outlined theory of representation are (1) the properties of collections of overlapping graded receptive fields, as in the biological perceptual systems that exhibit hyperacuity-level performance, and (2) the sufficiency of a set of proximal distances between stimulus representations for the recovery of the corresponding distal contrasts between stimuli, as in multidimensional scaling. The present preliminary study appears to indicate that this concept of representation is computationally viable, and is compatible with psychological and neurobiological data.
\end{abstract}  From mwitten at hermes.chpc.utexas.edu Tue Sep 14 13:59:07 1993 From: mwitten at hermes.chpc.utexas.edu (mwitten@hermes.chpc.utexas.edu) Date: Tue, 14 Sep 93 12:59:07 CDT Subject: CONGRESS: COMPUTATIONAL MEDICINE AND PUBLIC HEALTH (long) Message-ID: <9309141759.AA08946@morpheus.chpc.utexas.edu> ** NOTE CHANGES IN SUBMISSION AND REGISTRATION DEADLINES ** FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE, PUBLIC HEALTH AND BIOTECHNOLOGY 24-28 April 1994 Hyatt Regency Hotel Austin, Texas ----- (Feel Free To Cross Post This Announcement) ---- 1.0 CONFERENCE OVERVIEW: With increasing frequency, computational sciences are being exploited as a means to investigate biomedical processes at all levels of complexity, from molecular to systemic to demographic. Computational instruments are now used not only as exploratory tools but also as diagnostic and prognostic tools. The appearance of high performance computing environments has, to a great extent, removed the obstacles to increasing the biological reality of mathematical models. For the first time in the history of the field, practical biological reality is finally within the grasp of the biomedical modeler. Mathematical complexity is no longer so serious an issue, as computation speeds are now of the order necessary to allow extremely large and complex computational models to be analyzed. Large memory machines are now routinely available. Additionally, high speed, efficient, highly optimized numerical algorithms are under constant development. As these algorithms are understood and improved upon, many of them are transferred from software implementation to an implementation in the hardware itself, thereby further enhancing the available computational speed of current hardware. The purpose of this congress is to bring together a transdisciplinary group of researchers in medicine, public health, computer science, mathematics, nursing, veterinary medicine, ecology, allied health, as well as numerous other disciplines, for the purpose of examining the grand challenge problems of the next decades. This will be a definitive meeting in that it will be the first World Congress of its type and will be held as a follow-up to the very well received Workshop On High Performance Computing In The Life Sciences and Medicine held by the University of Texas System Center For High Performance Computing in 1990. Young scientists (graduate students, postdocs, etc.) are encouraged to attend and to present their work in this increasingly interesting discipline. Funding is being solicited from NSF, NIH, DOE, DARPA, EPA, and private foundations, as well as other sources, to assist in travel support and in the offsetting of expenses for those unable to attend otherwise. Papers, poster presentations, tutorials, focused topic workshops, birds of a feather groups, demonstrations, and other suggestions are also solicited. 2.0 CONFERENCE SCOPE AND TOPIC AREAS: The Congress has a broad scope. If you are not sure whether or not your subject fits the Congress scope, contact the conference organizers at one of the addresses below.
Subject areas include but are not limited to: *Visualization/Sonification --- medical imaging --- molecular visualization as a clinical research tool --- simulation visualization --- microscopy --- visualization as applied to problems arising in computational molecular biology and genetics or other non-traditional disciplines --- telemedicine *Computational Molecular Biology and Genetics --- computational ramifications of clinical needs in the Human Genome, Plant Genome, and Animal Genome Projects --- computational and grand challenge problems in molecular biology and genetics --- algorithms and methodologies --- issues of multiple datatype databases *Computational Pharmacology, Pharmacodynamics, Drug Design *Computational Chemistry as Applied to Clinical Issues *Computational Cell Biology, Physiology, and Metabolism --- Single cell metabolic models (red blood cell) --- Cancer models --- Transport models --- Single cell interaction with external factors models (laser, ultrasound, electrical stimulus) *Computational Physiology and Metabolism --- Renal System --- Cardiovascular dynamics --- Liver function --- Pulmonary dynamics --- Auditory function, cochlear dynamics, hearing --- Reproductive modeling: ovarian dynamics, reproductive ecotoxicology, modeling the hormonal cycle --- Metabolic Databases and metabolic models *Computational Demography, Epidemiology, and Statistics/Biostatistics --- Classical demographic, epidemiologic, and biostatistical modeling --- Modeling of the role of culture, poverty, and other sociological issues as they impact healthcare --- Morphometrics *Computational Disease Modeling --- AIDS --- TB --- Influenza --- Statistical Population Genetics Of Disease Processes --- Other *Computational Biofluids --- Blood flow --- Sperm dynamics --- Modeling of arteriosclerosis and related processes *Computational Dentistry, Orthodontics, and Prosthetics *Computational Veterinary Medicine --- Computational issues in modeling non-human dynamics such as equine, feline, canine dynamics (physiological/biomechanical) *Computational Allied Health Sciences --- Physical Therapy --- Neuromusic Therapy --- Respiratory Therapy *Computational Radiology --- Dose modeling --- Treatment planning *Computational Surgery --- Simulation of surgical procedures in VR worlds --- Surgical simulation as a precursor to surgical intervention --- The Visible Human *Computational Cardiology *Computational Nursing *Computational Models In Chiropractic *Computational Neurobiology and Neurophysiology --- Brain modeling --- Single neuron models --- Neural nets and clinical applications --- Neurophysiological dynamics --- Neurotransmitter modeling --- Neurological disorder modeling (Alzheimer's Disease, for example) --- The Human Brain Project *Computational Models of Psychiatric and Psychological Processes *Computational Biomechanics --- Bone Modeling --- Joint Modeling *Computational Models of Non-traditional Medicine --- Acupuncture --- Other *Computational Issues In Medical Instrumentation Design and Simulation --- Scanner Design --- Optical Instrumentation *Ethical issues arising in the use of computational technology in medical diagnosis and simulation *The role of alternate reality methodologies and high performance environments in the medical and public health disciplines *Issues in the use of high performance computing environments in the teaching of health science curricula *The role of high performance environments for the handling of large medical datasets (high performance storage environments,
high performance networking, high performance medical records manipulation and management, metadata structures and definitions) *Federal and private support for transdisciplinary research in computational medicine and public health 3.0 CONFERENCE COMMITTEE *CONFERENCE CHAIR: Matthew Witten, UT System Center For High Performance Computing, Austin, Texas m.witten at chpc.utexas.edu *CURRENT CONFERENCE DIRECTORATE: Regina Monaco, Mt. Sinai Medical Center Dan Davison, University of Houston Chris Johnson, University of Utah Lisa Fauci, Tulane University Daniel Zelterman, University of Minnesota Minneapolis James Hyman, Los Alamos National Laboratory Richard Hart, Tulane University Dennis Duke, SCRI-Florida State University Sharon Meintz, University of Nevada Las Vegas Dean Sittig, Vanderbilt University Dick Tsur, UT System CHPC Dan Deerfield, Pittsburgh Supercomputing Center Istvan Gyori, University of Veszprem (Hungary) Don Fussell, University of Texas at Austin Ken Goodman, University Of Miami School of Medicine Martin Hugh-Jones, Louisiana State University Stuart Zimmerman, MD Anderson Cancer Research Center John Wooley, DOE Sylvia Spengler, University of California Berkeley Robert Blystone, Trinity University Gregory Kramer, Santa Fe Institute Franco Celada, NYU Medical Center David Robinson, NIH, NHLBI Jane Preson, MCC Peter Petropoulos, Brooks Air Force Base Marcus Pandy, University of Texas at Austin George Bekey, University of Southern California Stephen Koslow, NIH, NIMH Fred Bookstein, University of Michigan Ann Arbor Dan Levine, University of Texas at Arlington Richard Gordon, University of Manitoba (Canada) Stan Zeitz, Drexel University Marcia McClure, University of Nevada Las Vegas Renato Sabbatini, UNICAMP (Brazil) Hiroshi Tanaka, Tokyo Medical and Dental University (Japan) Shusaku Tsumoto, Tokyo Medical and Dental University (Japan) Additional conference directorate members are being added and will be updated on the anonymous ftp list as they agree. 4.0 CONTACTING THE CONFERENCE COMMITTEE: To contact the congress organizers for any reason, use any of the following pathways: ELECTRONIC MAIL - compmed94 at chpc.utexas.edu FAX (USA) - (512) 471-2445 PHONE (USA) - (512) 471-2472 GOPHER: log into the University of Texas System-CHPC and select the Computational Medicine and Allied Health menu choice ANONYMOUS FTP: ftp.chpc.utexas.edu cd /pub/compmed94 POSTAL: Compmed 1994 University of Texas System CHPC Balcones Research Center 10100 Burnet Road, 1.154CMS Austin, Texas 78758-4497 5.0 SUBMISSION PROCEDURES: Authors must submit 5 copies of a single-page 50-100 word abstract clearly discussing the topic of their presentation. In addition, authors must clearly state their choice of poster, contributed paper, tutorial, exhibit, focused workshop or birds of a feather group, along with a discussion of their presentation. Abstracts will be published as part of the preliminary conference material. To notify the congress organizing committee that you would like to participate and to be put on the congress mailing list, please fill out and return the form that follows this announcement. You may use any of the contact methods above. If you wish to organize a contributed paper session, tutorial session, focused workshop, or birds of a feather group, please contact the conference director at mwitten at chpc.utexas.edu. The abstract may be submitted electronically to compmed94 at chpc.utexas.edu or by mail or fax. There is no official format.
6.0 CONFERENCE DEADLINES AND FEES: The following deadlines should be noted: 1 November 1993 - Notification of intent to organize a special session 15 December 1993 - Abstracts for talks/posters/workshops/birds of a feather sessions/demonstrations 15 January 1994 - Notification of acceptance of abstract 15 February 1994 - Application for financial aid 1 April 1994 - Registration deadline (includes payment of all fees) Fees include lunches for three days, all conference registration materials, the reception, and the sit-down banquet: $400.00 Corporate $250.00 Academic $150.00 Student Students are required to submit verification of student status. The verification of academic status form appears appended to the registration form in this announcement. Because financial aid may be available for minority students, faculty, and for individuals from declared minority institutions, you may indicate that you are requesting financial aid as a minority individual. Additionally, we anticipate some support for women to attend. Application for financial aid is also appended to the attached form. 7.0 CONFERENCE PRELIMINARY DETAILS AND ENVIRONMENT LOCATION: Hyatt Regency Hotel, Austin, Texas, USA DATES: 24-28 April 1994 The 1st World Congress On Computational Medicine, Public Health, and Biotechnology will be held at the Hyatt Regency Hotel, Austin, Texas, located in downtown Austin on the shores of Town Lake, also known as the Colorado River. The Hyatt Regency has rooms available for the conference participants at a special rate of $79.00/night for single or double occupancy, with a hotel tax of 13%. The Hyatt accepts American Express, Diner's Club, Visa, MasterCard, Carte Blanche, and Discover credit cards. This room rate will be in effect until 9 April 1994 or until the block of rooms is full. We recommend that you make your reservations as soon as possible. You may make your reservations by calling (512) 477-1234 or by returning the enclosed reservation form. Be certain to mention that you are attending the First World Congress On Computational Medicine, Public Health, and Biotechnology if you make your reservations by telephone. The hotel is approximately five miles (15 minutes) from Robert Mueller Airport. The Hyatt offers courtesy limousine service to and from the airport between the hours of 6:00am and 11:00pm. You may call them from the airport when you arrive. If you choose to use a taxi, expect to pay approximately $8.00. Automobiles may be rented at the airport from most of the major car rental agencies. However, because of the downtown location of the Congress and access to taxis and to bus service, we do not recommend that you rent an auto unless you are planning to drive outside of the city. Should you not be able to find an available room at the Hyatt Regency, we have scheduled an "overflow" hotel, the Embassy Suites, which is located directly across the street from the Hyatt Regency. If, due to travel expense restrictions, you are unable to stay at either of these two hotels, please contact the conference board directly and we will be more than happy to find a hotel near the conference site that should accommodate your needs. Austin, the state capital, is renowned for its natural hill-country beauty and an active cultural scene. Several hiking and jogging trails are within walking distance of the hotel, as well as opportunities for a variety of aquatic sports. From the Hyatt, you can "Catch a Dillo" downtown, taking a ride on our delightful inner-city, rubber-wheeled trolley system.
In Austin's historic downtown area, you can take a free guided tour through the State Capitol Building, constructed in 1888. Or, you can visit the Governor's Mansion, recognized as one of the finest examples of 19th Century Greek Revival architecture and housing every Texas governor since 1856. Downtown you will find the Old Bakery and Emporium, built by Swedish immigrant Charles Lundberg in 1876, and the Sixth Street/Old Pecan Street Historical District - a seven-block renovation of Victorian and native stone buildings, now a National Registered Historic District containing more than 60 restaurants, clubs, and shops to enjoy. The Laguna Gloria Art Museum, the Archer M. Huntington Art Gallery, the LBJ Library and Museum, the Neill-Cochran Museum House, and the Texas Memorial Museum are among Austin's finest museums. The Umlauf Sculpture Garden has become a major artistic attraction. Charles Umlauf's sculptured works are placed in a variety of elegant settings under a canopy of trees. The Zilker Gardens contain many botanical highlights such as the Rose Garden, Oriental Garden, Garden of the Blind, Water Garden and more. Unique to Austin is a large population of Mexican free-tailed bats which resides beneath the Congress Avenue Bridge. During the month of April, the Highland Lakes Bluebonnet Trail celebrates spring's wildflowers (a major attraction) as this self-guided tour winds through the surrounding region of Austin and nearby towns (you will need to rent a car for this one). Austin offers a number of indoor shopping malls in every part of the city: The Arboretum, Barton Creek Square, Dobie Mall, and Highland Mall, to name a few. Capital Metro, Austin's mass transit system, offers low cost transportation throughout Austin. Specialty shops, offering a wide variety of handmade crafts and merchandise crafted by native Texans, are scattered throughout the city and surrounding areas. Dining out in Austin, you will have choices of American, Chinese, Authentic Mexican, Tex-Mex, Italian, Japanese, or nearly any other type of cuisine you might wish to experience, with price ranges that will suit anyone's budget. Live bands perform in various nightclubs around the city and at night spots along Sixth Street, offering a range of jazz, blues, country/Western, reggae, swing, and rock music. Day temperatures will be in the 80-90 (degrees F) range and fairly humid. Evening temperatures have been known to drop down into the 50's (degrees F). Cold weather is not expected, so be sure to bring lightweight clothing with you. Congress exhibitor and vendor presentations are also being planned. 8.0 CONFERENCE ENDORSEMENTS AND SPONSORSHIPS: Numerous potential academic sponsors have been contacted. Currently negotiations are underway for sponsorship with SIAM, AMS, MAA, IEEE, FASEB, and IMACS. Additionally, AMA and ANA continuing medical education support is being sought. Information will be updated regularly on the anonymous ftp site for the conference (see above). Currently, funding has been generously supplied by the following agencies: University of Texas System - CHPC U.S. Department of Energy ================== REGISTRATION FORM =============== (Please list your name below as it will appear on badge.) First Name: Middle Initial (if available): Family Name: Your Professional Title: [ ]Dr. [ ]Professor [ ]Mr. [ ]Mrs. [ ]Ms.
[ ]Other:__________________ Office Phone (desk): Home/Evening Phone (for emergency contact): Fax: Electronic Mail (Bitnet): Electronic Mail (Internet): Postal Address: Institution or Center: Building Code: Mail Stop: Street Address1: Street Address2: City: State: Zip or Country Code: Country: Please list your three major interest areas: Interest1: Interest2: Interest3: Registration fee: $____________ Late fee $50 (if after April 1, 1994) $____________ **IF UT AUSTIN, PLEASE PROVIDE YOUR: UNIVERSITY ACCT. #: ______________________ UNIVERSITY ACCT. TITLE: ______________________ NAME OF ACCT. SIGNER: ______________________ ===================================================== VERIFICATION OF STUDENT STATUS Name: Mailing Address: University at which you are a student: What level student (year): Your student id number: Name of your graduate or postdoctoral advisor: Telephone number for your advisor: By filling in this section, I agree that I am electronically signing my signature to the statement that I am currently a student at the above university. ======================================================= REQUEST FOR FINANCIAL AID Name: Mailing Address: I request financial assistance under one or more of the following categories: [ ] Student (You must fill out the Verification of Student Status Section in order to be considered for financial aid under this category) [ ] Academic [ ] Minority [ ] Female [ ] Black [ ] Hispanic [ ] Native American Indian [ ] Other This form is not meant to invade your personal privacy in any fashion. However, some of the grant funds are targeted at specific ethnic/minority groups and need to be expended appropriately. None of these forms will be in any way released to the public. And, after the congress, all of the financial aid forms will be destroyed. No records will be kept of ethnic or racial backgrounds. If you have any questions concerning financial aid support, please contact Matthew Witten at the above addresses. ==============================================================  From terry at helmholtz.sdsc.edu Tue Sep 14 21:45:35 1993 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Tue, 14 Sep 93 18:45:35 PDT Subject: Top Prize Message-ID: <9309150145.AA13807@helmholtz.sdsc.edu> New Scientist, 28 August, 1993, pp. 18-19 Physicist nets top prize for keeping building cooler "Thomas Bayes' theorem ... was used by an artificial neural network -- a computer system that mimics the way the brain works -- to win an international competition for predicting energy consumption in a nominated building. The competition, called the "Great Predictor Shootout", took place in Atlanta, Georgia, last June, organised by the American Society of Heating, Refrigerating and Air-Conditioning Engineers. As a guide, the 21 entrants were given data about a four-storey building in Texas, including the external air temperature, wind speed, humidity and sunlight levels for two months, and the consumption of the electricity and hot and cold water for the previous four months. The winning system, devised by David MacKay, a physicist at the University of Cambridge's Cavendish Laboratory, predicted the building's energy to within 10%. The runner-up was accurate to 15%. ... 'No matter how much you try to design a building, there will always be effects you omit to allow for,' comments MacKay."
-----  From kainen at cs.UMD.EDU Thu Sep 16 11:58:37 1993 From: kainen at cs.UMD.EDU (Paul Kainen) Date: Thu, 16 Sep 93 11:58:37 -0400 Subject: INTELLIGENT BUILDINGS Message-ID: <9309161558.AA04914@tove.cs.UMD.EDU> While it may be profitable on a short term basis to continue to construct and then to tear down office space, the nation can ill afford such waste. Since ASHRAE is concerned with improving the efficiency of energy utilization, perhaps they ought to consider conducting a much more comprehensive test. Find a corporation or government planning to construct or purchase a substantial amount of office space, enough for several buildings of moderate size. Have individual high-tech teams plan and construct these buildings with ASHRAE (and other cooperating societies) monitoring energy consumption, construction cost, etc. In addition, government agencies and concerned labor organizations would be consulted during the design process as well as afterwards, so that the resulting buildings would be human-friendly. For instance, issues such as lighting need to be considered quite explicitly and top-down, as they have been discussed recently on the Internet: Is the cost of better lighting offset by worker productivity? Let's not guess when we can find out. What Sejnowski drew our attention to, MacKay's accomplishment, shows that the modern ``high-tech'' combination of mathematics with neural nets can handle large-scale problems. The real challenge, in smart buildings and other intelligent system applications, is to connect the disciplines.  From stork at crc.ricoh.com Wed Sep 15 15:10:48 1993 From: stork at crc.ricoh.com (David G. Stork) Date: Wed, 15 Sep 93 12:10:48 -0700 Subject: "Pattern Classification and Scene Analysis" Message-ID: <9309151910.AA06151@neva.crc.ricoh.com> I am writing the second edition of "Pattern Classification and Scene Analysis" (part I) by R. O. Duda and P. E. Hart for Wiley Publishers. This second edition will be greatly expanded, to include topics such as neural networks, stochastic methods (e.g., Boltzmann learning), theory of learning and generalization (e.g., capacity issues), and many other modern topics. All figures are being redrawn in high-quality PostScript form, including many three-dimensional figures illustrating the basic issues. The homeworks have been similarly expanded, and computer exercises have been written for all chapters. Wiley has given us permission to release the preliminary manuscript to educators free of charge, for consideration for distribution to students (again, at no royalty fee). There are only a few minor provisos; the only two of substance are that no one may make a *profit* from distributing the manuscript, and that the copyright must remain on the cover of every set distributed. Six chapters are complete (in draft form) and drafts of the remaining five should be done by the end of this year. (Many of the figures are currently hand drawn, and none appear in the text itself, as yet.) Likewise, references have not been included. Nevertheless all material from the First Edition has been included (though in later drafts some of this will be deleted). If you are interested in obtaining a copy of chapters of the manuscript, contact me and I'll send you (by e-mail) a copy of the Table of Contents. If you are still interested in possibly using the text, we can go on from there. Dr. David G.
Stork Chief Scientist and Head, Machine Learning and Perception Ricoh California Research Center 2882 Sand Hill Road Suite 115 Menlo Park, CA 94025-7022 USA 415-496-5720 (w) 415-854-8740 (fax) stork at crc.ricoh.com  From ken at phy.ucsf.edu Thu Sep 16 13:52:43 1993 From: ken at phy.ucsf.edu (Ken Miller) Date: Thu, 16 Sep 93 10:52:43 -0700 Subject: Computational Neuroscience job at San Diego Supercomputer Center Message-ID: <9309161752.AA07643@phybeta.ucsf.EDU> COMPUTATIONAL NEUROSCIENCE JOB: I just checked with SDSC, and applications are still being accepted for this job (ad posted below). However, as the job has been advertised for two months, applicants are encouraged to act quickly. Ken Kenneth D. Miller, Dept. of Physiology, UCSF, 513 Parnassus, San Francisco, CA 94143-0444 [Office: S-859]; telephone: (415) 476-8217; internet: ken at phy.ucsf.edu; fax: (415) 476-4929 ---------------------------------------- This ad appeared in Science on July 16, 1993: San Diego Supercomputer Center ----------------------------- The San Diego Supercomputer Center is a National Computational Science Laboratory operated by General Atomics and the National Science Foundation. It serves the nationwide community of scientists and engineers. We are currently accepting applications for a Staff Scientist in computational ecology, computational neurobiology, or scientific databases to join our team of computational scientists. Requirements include a Ph.D. plus postdoctoral experience in one of the above areas. For the computational ecology or neurobiology position, a willingness to initiate an outreach program in, and collaborative projects with, the research community is necessary. General Atomics offers comprehensive salary and benefit plans as well as an exciting, dynamic environment well suited to applicants who are highly motivated and flexible. Please submit your letter of application, curriculum vitae, list of publications and three references to General Atomics, Dept. 93-23, P.O. Box 85608, San Diego, CA 92186-9784. EEO/AAE If you want further information about this position, please contact Rozeanne Steckler (steckler at sdsc.edu, 619-534-5122) or Dan Sulzbach (sulzbach at sdsc.edu, 619-534-5125) at SDSC.  From fu at whale.cis.ufl.edu Thu Sep 16 14:41:38 1993 From: fu at whale.cis.ufl.edu (Li-Min Fu) Date: Thu, 16 Sep 93 14:41:38 -0400 Subject: ISIKNH'94 Message-ID: <9309161841.AA03015@whale.cis.ufl.edu> CALL FOR PAPERS International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH'94) Sponsored by the University of Florida and AAAI, in cooperation with the IEEE Neural Network Council, INNS-SIG, and FLAIRS. Time: May 9-10, 1994; Place: Pensacola Beach, Florida, USA. A large amount of research has been directed toward integrating neural and symbolic methods in recent years. In particular, the integration of knowledge-based principles and neural heuristics holds great promise in solving complicated real-world problems. This symposium will provide a forum for discussions and exchanges of ideas in this area. The objective of this symposium is to bring together researchers from a variety of fields who are interested in applying neural network techniques to augment existing knowledge, or the other way around, and especially who have demonstrated that this combined approach outperforms either approach alone.
We welcome views of this problem from areas such as constraint-(knowledge-) based learning and reasoning, connectionist symbol processing, hybrid intelligent systems, fuzzy neural networks, multi-strategic learning, and cognitive science. Examples of specific research include but are not limited to: 1. How do we build a neural network based on a priori knowledge (i.e., a knowledge-based neural network)? 2. How do neural heuristics improve the current model for a particular problem (e.g., classification, planning, signal processing, and control)? 3. How does knowledge in conjunction with neural heuristics contribute to machine learning? 4. What is the emergent behavior of a hybrid system? 5. What are the fundamental issues behind the combined approach? Program activities include keynote speeches, paper presentations, panel discussions, and tutorials. ***** Scholarships are offered to assist students in attending the symposium. Students who wish to apply for a scholarship should send their resumes and a statement of how their research is related to the symposium. ***** Symposium Chairs: LiMin Fu, University of Florida, USA. Chris Lacher, Florida State University, USA. Program Committee: Jim Anderson, Brown University, USA Michael Arbib, University of Southern California, USA Fevzi Belli, The University of Paderborn, Germany Jim Bezdek, University of West Florida, USA Bir Bhanu, University of California, USA Su-Shing Chen, National Science Foundation, USA Tharam Dillon, La Trobe University, Australia Douglas Fisher, Vanderbilt University, USA Paul Fishwick, University of Florida, USA Stephen Gallant, HNC Inc., USA Yoichi Hayashi, Ibaraki University, Japan Susan I. Hruska, Florida State University, USA Michel Klefstad-Sillonville, CCETT, France David C. Kuncicky, Florida State University, USA Joseph Principe, University of Florida, USA Sylvian Ray, University of Illinois, USA Armando F. Rocha, University of Estadual, Brazil Ron Sun, University of Alabama, USA Keynote Speaker: Balakrishnan Chandrasekaran, Ohio State University Schedule for Contributed Papers ---------------------------------------------------------------------- Paper Summaries Due: December 15, 1993 Notice of Acceptance Due: February 1, 1994 Camera Ready Papers Due: March 1, 1994 Extended paper summaries should be limited to four pages (single or double-spaced) and should include the title, names of the authors, the network and mailing addresses and telephone number of the corresponding author. Important research results should be attached. Send four copies of extended paper summaries to LiMin Fu Dept. of CIS, 301 CSE University of Florida Gainesville, FL 32611 USA (e-mail: fu at cis.ufl.edu; phone: 904-392-1485). Students' applications for a scholarship should also be sent to the above address.
General information and registration materials can be obtained by writing to Rob Francis ISIKNH'94 DOCE/Conferences 2209 NW 13th Street, STE E University of Florida Gainesville, FL 32609-3476 USA (Phone: 904-392-1701; fax: 904-392-6950) --------------------------------------------------------------------- --------------------------------------------------------------------- If you intend to attend the symposium, you may submit the following information by returning this message: NAME: _______________________________________ ADDRESS: ____________________________________ _____________________________________________ _____________________________________________ _____________________________________________ _____________________________________________ PHONE: ______________________________________ FAX: ________________________________________ E-MAIL: _____________________________________ ---------------------------------------------------------------------  From jbower at smaug.bbb.caltech.edu Thu Sep 16 15:41:15 1993 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Thu, 16 Sep 93 12:41:15 PDT Subject: Accuracy Message-ID: <9309161941.AA15967@smaug.bbb.caltech.edu> With respect to: >The winning system, devised by David MacKay, a physicist >at the University of Cambridge's Cavendish Laboratory, >predicted the building's energy to within 10%. >The runner-up was accurate to 15%. ----------------------------------------------------------- Congratulations to David. However, I wish (and continue to wish, after all these years) that the same standard of accuracy applied to our descriptions of neural networks: >""Thomas Bayes' theorem ... was used by an artificial neural >network -- a computer system that mimics the way the brain >works --" Jim Bower  From rjw at ccs.neu.edu Fri Sep 17 09:51:08 1993 From: rjw at ccs.neu.edu (Ronald J Williams) Date: Fri, 17 Sep 1993 09:51:08 -0400 Subject: TR available in neuroprose Message-ID: <9309171351.AA22077@kenmore.ccs.neu.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/williams.policy-iter.ps.Z **PLEASE DO NOT FORWARD TO OTHER GROUPS** The following paper is now available in the neuroprose directory. It is 49 pages long. For those unable to obtain the file by ftp, hardcopies can be obtained by contacting: Diane Burke, College of Computer Science, 161 CN, Northeastern University, Boston, MA 02115, USA. Analysis of Some Incremental Variants of Policy Iteration: First Steps Toward Understanding Actor-Critic Learning Systems Northeastern University College of Computer Science Technical Report NU-CCS-93-11 Ronald J. Williams, College of Computer Science, Northeastern University, rjw at ccs.neu.edu; Leemon C. Baird, III, Wright Laboratory, Wright-Patterson Air Force Base, bairdlc at wL.wpafb.af.mil Abstract: This paper studies algorithms based on an incremental dynamic programming abstraction of one of the key issues in understanding the behavior of actor-critic learning systems. The prime example of such a learning system is the ASE/ACE architecture introduced by Barto, Sutton, and Anderson (1983). Also related are Witten's adaptive controller (1977) and Holland's bucket brigade algorithm (1986). The key feature of such a system is the presence of separate adaptive components for action selection and state evaluation, and the key issue focused on here is the extent to which their joint adaptation is guaranteed to lead to optimal behavior in the limit.
In the incremental dynamic programming point of view taken here, these questions are formulated in terms of the use of separate data structures for the current best choice of policy and current best estimate of state values, with separate operations used to update each at individual states. Particular emphasis here is on the effect of complete asynchrony in the updating of these data structures across states. The main results are that, while convergence to optimal performance is not guaranteed in general, there are a number of situations in which such convergence is assured. Since the algorithms investigated represent a certain idealized abstraction of actor-critic learning systems, these results are not directly applicable to current versions of such learning systems but may be viewed instead as providing a useful first step toward more complete understanding of such systems. Another useful perspective on the algorithms analyzed here is that they represent a broad class of asynchronous dynamic programming procedures based on policy iteration. To obtain a copy: ftp cheops.cis.ohio-state.edu login: anonymous password: (your e-mail address) cd pub/neuroprose binary get williams.policy-iter.ps.Z quit Then at your system: uncompress williams.policy-iter.ps.Z lpr williams.policy-iter.ps  From jbower at smaug.bbb.caltech.edu Fri Sep 17 12:55:00 1993 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Fri, 17 Sep 93 09:55:00 PDT Subject: Journal of Computational Neuroscience Message-ID: <9309171655.AA17971@smaug.bbb.caltech.edu> ******************************************************************* JOURNAL OF COMPUTATIONAL NEUROSCIENCE ******************************************************************* From neurons to behavior: A Journal at the interface between experimental and theoretical neuroscience. MANAGING EDITORS: James M. Bower, California Institute of Technology; Eve Marder, Brandeis University; John Miller, University of California, Berkeley; John Rinzel, National Institutes of Health; Idan Segev, Hebrew University; Charles Wilson, University of Tennessee, Memphis ACTION EDITORS: L. F. Abbott, Brandeis University Richard Andersen, Massachusetts Inst. of Technology Alexander Borst, Max-Planck Inst., Tubingen Robert E. Burke, NINDS, NIH Catherine Carr, Univ. of Maryland, College Park Rodney Douglas, Oxford University G. Bard Ermentrout, University of Pittsburgh Apostolos Georgopoulos, VA Medical Center, MN Charles Gray, University of California, Davis Christof Koch, California Institute of Technology Gilles Laurent, California Institute of Technology David McCormick, Yale University Ken Miller, University of California, San Francisco Steve Redman, Australian National University Barry Richmond, NIMH, NIH Terry Sejnowski, Salk Institute Shihab Shamma, Univ. of Maryland, College Park Karen Sigvardt, University of California, Davis David Tank, Bell Labs Roger Traub, IBM TJ Watson Research Center Thelma Williams, University of London JOURNAL DESCRIPTION: The JOURNAL OF COMPUTATIONAL NEUROSCIENCE is intended to provide a forum for papers that fit the interface between computational and experimental work in the neurosciences. The JOURNAL OF COMPUTATIONAL NEUROSCIENCE will publish full length original papers describing theoretical and experimental work relevant to computations in the brain and nervous system. Papers that combine theoretical and experimental work are especially encouraged. Primarily theoretical papers should deal with issues of obvious relevance to biological nervous systems.
Experimental papers should have implications for the computational function of the nervous system, and may report results using any of a variety of approaches including anatomy, electrophysiology, biophysics, imaging, and molecular biology. Papers that report novel technologies of interest to researchers in computational neuroscience are also welcomed. It is anticipated that all levels of analysis from cognitive to single neuron will be represented in THE JOURNAL OF COMPUTATIONAL NEUROSCIENCE.

*****************************************************************
CALL FOR PAPERS
*****************************************************************

For Instructions to Authors, please contact:

Karen Cullen
Journal of Computational Neuroscience
Kluwer Academic Publishers
101 Philip Drive
Norwell, MA 02061
PH: 617 871 6300
FX: 617 878 0449
EM: Karen at world.std.com

*****************************************************************
*****************************************************************

ORDERING INFORMATION: For complete ordering information and subscription rates, please contact:

KLUWER ACADEMIC PUBLISHERS
PH: 617 871 6600
FX: 617 871 6528
EM: Kluwer at world.std.com

JOURNAL OF COMPUTATIONAL NEUROSCIENCE, ISSN: 0929-5313

*****************************************************************

From PIURI at IPMEL1.POLIMI.IT Fri Sep 17 18:57:59 1993
From: PIURI at IPMEL1.POLIMI.IT (PIURI@IPMEL1.POLIMI.IT)
Date: Fri, 17 Sep 1993 18:57:59 MET-DST
Subject: call for papers
Message-ID: <01H32D8LQCG29AMRQ4@icil64.cilea.it>

=============================================================================
1994 INTERNATIONAL CONFERENCE ON INSTRUMENTATION AND MEASUREMENTS
IMTC'94
Advanced Technologies in Instrumentation and Measurements
Hamamatsu, Shizuoka, Japan - 10-12 May 1994
=============================================================================

SPECIAL SESSION ON NEURAL INSTRUMENTS - CALL FOR PAPERS

Program Chair:
Kenzo Watanabe
Research Institute of Electronics, Shizuoka University
3-5-1 Johoku, Hamamatsu, 423 Japan
phone +81-53-471-1171, fax +81-53-474-0630

General Chair:
Robert Myers
3685 Motor Ave., Suite 240
Los Angeles, CA 90034-5750, USA
phone +1-310-287-1463, fax +1-310-286-1851

Sponsored by: IEEE Instrumentation and Measurement Society; Society of Instruments and Control Engineers, Japan

In cooperation with: Institute of Electrical Engineers, Japan; Institute of Electronics, Information and Communication Engineers, Japan; Japan Society of Applied Physics; Japan Electric Measuring Instrument Manufacturers' Association

The IMTC'94 conference is the 10th edition of the annual conference organized by the IEEE Instrumentation and Measurement Society to provide a stimulating forum for practitioners and scientists working in areas related to any kind of measurement: theoretical aspects of measurement, instruments for measurement, and measurement processing.
Traditional topics are: Acoustics measurements, AI & fuzzy, Automotive & avionic instrumentation, Calibration, Metrology & standards, Digital signal analysis & processing, Digital and mobile communications, LSI analysis, diagnosis & testing, Mixed analog & digital ASICs, Optic & fiber optic measurement, Process measurements, Sensors & transducers, System identification, Waveform analysis and measurements, A/D and D/A, Data acquisition, Antenna & EMI/EMC, Biomedical instruments, Computer-based measurements & software, Environment measurements, Microwave measurements, Nuclear & medical instruments, Pressure & temperature measurements, Quality & reliability, STM and imaging, Time and frequency measurements.

To support the presentation and discussion of emergent technologies, a special session on Neural Instruments will be organized within IMTC'94. Main topics are neural technologies for measurements, applications of neural networks in measurement and instruments, design and implementation of neural solutions for instrument subsystems, neural subsystems for automatic control, and neural subsystems for signal processing.

Authors are invited to submit a one-page abstract (containing title, authors, affiliations, and the session name "Neural Instruments" in the upper right corner) and a cover page (containing title, authors, affiliations, contact author, full address of the contact author, telephone and fax number of the contact author, and the session name "Neural Instruments" in the upper right corner). Submissions must be received by the general chair (for authors from Europe and North America) or by the program chair (for authors from Asia and other areas) by October 1st, 1993. Fax submissions are accepted. An additional copy of the submission should be sent by e-mail or fax to the coordinator of the session on Neural Instruments (this copy does not replace the formal submission to the general chair or the program chair). Submission of a paper implies a willingness to attend the conference and to present the paper. Notification of acceptance will be mailed by December 1st, 1993; camera-ready papers are due by February 1st, 1994. Authors of selected papers will also be invited to submit their papers for consideration for the special IMTC'94 issue of the IEEE Transactions on Instrumentation and Measurement.

For any additional information regarding the special session on Neural Instruments, contact the session coordinator.

Session Coordinator for "Neural Instruments":
Prof. Vincenzo PIURI
Department of Electronics and Information
Politecnico di Milano
piazza L. da Vinci 32, I-20133 Milano, Italy
phone no. +39-2-2399-3606, +39-2-2399-3623
fax no.
+39-2-2399-3411
e-mail piuri at ipmel1.polimi.it
=============================================================================

From avner at elect1.weizmann.ac.il Sun Sep 19 03:49:15 1993
From: avner at elect1.weizmann.ac.il (Priel Avner)
Date: Sun, 19 Sep 1993 07:49:15 GMT
Subject: No subject
Message-ID: <9309190749.AA43380@elect1.weizmann.ac.il>

From georg at ai.univie.ac.at Mon Sep 20 10:10:37 1993
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Mon, 20 Sep 1993 16:10:37 +0200
Subject: papers/neuroprose: Unifying MLP and RBFN
Message-ID: <199309201410.AA11302@chicago.ai.univie.ac.at>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/dorffner.csfn.ps.Z
          pub/neuroprose/dorffner.nn-clinical.ps.Z

The files dorffner.csfn.ps.Z and dorffner.nn-clinical.ps.Z are now available for copying from the Neuroprose repository:

-------------------------------------------------------------------------

dorffner.csfn.ps.Z:

A Unified Framework for MLPs and RBFNs: Introducing Conic Section Function Networks

Georg Dorffner
Austrian Research Inst. for Artificial Intelligence
and Dept. of Medical Cybernetics and AI, Univ. of Vienna
georg at ai.univie.ac.at

ABSTRACT: Multilayer Perceptrons (MLP, Werbos 1974, Rumelhart et al. 1986) and Radial Basis Function Networks (RBFN, Broomhead & Lowe 1988, Moody & Darken 1989) are probably the most widely used neural network models for practical applications. While the former belong to a group of ``classical'' neural networks (whose weighted sums are loosely inspired by biology), the latter have arisen only recently from an analogy to regression theory (Broomhead & Lowe 1988). At first sight, the two models -- except for being multilayer feedforward networks -- do not seem to have much in common. On second thought, however, MLPs and RBFNs share a variety of features, making it worthwhile to view them in the same context and to compare them with each other with respect to their properties. Consequently, a few attempts at arriving at a unified picture of a class of feedforward networks -- with MLPs and RBFNs as members -- have been undertaken (Robinson et al. 1988, Maruyama et al. 1992, Dorffner 1992, 1993). Most of these attempts have centered around the observation that the function of a neural network unit can be divided into a propagation rule (``net input'') and an activation or transfer function. The dot product (``weighted sum'') and the Euclidean distance are special cases of propagation rules, whereas the sigmoid and the Gaussian function are examples of activation functions. This paper introduces a novel neural network model based on a more general conic section function as propagation rule, containing the hyperplane (straight line) and the hypersphere (circle) as special cases, thus unifying the net inputs of MLPs and RBFNs with an easy-to-handle continuum in between. A new learning rule -- complementing the existing methods of gradient descent in weight space and initialization -- is introduced which enables the network to make a continuous decision between bounded and unbounded (infinite half-space) decision regions. The capabilities of CSFNs are illustrated with several examples and compared with existing approaches. CSFNs are viewed as a further step toward more efficient and optimal neural network solutions in practical applications.

length: 37 pages; submitted for publication
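To see what a continuum between the two net inputs might look like, here is a toy C function in the spirit of the abstract. It is an illustrative blend only, not Dorffner's actual conic section propagation rule; the parameter lambda and the sign convention are assumptions made for this example.

    #include <math.h>

    /* Illustrative only -- not the paper's CSFN rule.  lambda = 0
     * gives a plain weighted sum plus bias (MLP-style net input);
     * lambda = 1 gives (minus) the Euclidean distance to the centre c
     * (RBFN-style); intermediate values blend the two. */
    double conic_blend(const double *x, const double *w, const double *c,
                       double bias, double lambda, int n)
    {
        double dot = bias, dist2 = 0.0;
        for (int i = 0; i < n; i++) {
            dot   += w[i] * x[i];
            dist2 += (x[i] - c[i]) * (x[i] - c[i]);
        }
        return (1.0 - lambda) * dot - lambda * sqrt(dist2);
    }

At lambda = 0 the level sets of the net input are hyperplanes, as for an MLP unit; at lambda = 1 they are hyperspheres around c, as for an RBFN unit, which is the continuum the paper's conic section construction makes precise.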
-------------------------------------------------------------------------

dorffner.nn-clinical.ps.Z:

On Using Feedforward Neural Networks for Clinical Diagnostic Tasks

Georg Dorffner
Austrian Research Inst. for Artificial Intelligence
and Dept. of Medical Cybernetics and AI, Univ. of Vienna
georg at ai.univie.ac.at

and

Gerold Porenta
Dept. of Cardiology, Clinic for Internal Medicine II, University of Vienna

ABSTRACT: In this paper we present an extensive comparison between several feedforward neural network types in the context of a clinical diagnostic task, namely the detection of coronary artery disease (CAD) using planar thallium-201 dipyridamole stress-redistribution scintigrams. We introduce results from well-known (e.g. multilayer perceptrons or MLPs, and radial basis function networks or RBFNs) as well as novel neural network techniques (e.g. conic section function networks) which demonstrate promising new routes for future applications of neural networks in medicine, and elsewhere. In particular we show that initializations of MLPs and conic section function networks -- which can learn to behave more like an MLP or more like an RBFN -- can lead to much improved results in rather difficult diagnostic tasks.

Keywords: Feedforward neural networks, neural network initialization, multilayer perceptrons, radial basis function networks, conic section function networks; thallium scintigraphy, angiography, clinical diagnosis and decision making.

length: 21 pages; submitted for publication

-----------------------------------------------------------------------

To obtain a copy:

ftp cheops.cis.ohio-state.edu
login: anonymous
password:
cd pub/neuroprose
binary
get dorffner.csfn.ps.Z
AND/OR
get dorffner.nn-clinical.ps.Z
quit

Then at your system:

uncompress dorffner.*

to obtain (a) postscript file(s). Many thanks to Jordan Pollack for the maintenance and support of this archive.

-----------------------------------------------------------------------

Both papers are also available through anonymous ftp from ftp.ai.univie.ac.at in the directory 'papers', as oefai-tr-93-23.ps.Z (== dorffner.nn-clinical) and oefai-tr-93-25.ps.Z (== dorffner.csfn). Hardcopies are available (only if you don't have access to ftp!) by sending email to sec at ai.univie.ac.at and asking for technical report oefai-tr-93-23 or oefai-tr-93-25 (see previous paragraph).

From srx014 at cck.coventry.ac.uk Mon Sep 20 15:16:30 1993
From: srx014 at cck.coventry.ac.uk (CRReeves)
Date: Mon, 20 Sep 93 15:16:30 WET DST
Subject: Research post available
Message-ID: <6664.9309201416@cck.coventry.ac.uk>

The following University Research Studentship is available, starting as soon as possible:

"Application of neural networks to the inference of homologous DNA sequences from related genomes"

This project involves the application of neural network techniques in plant genetics. Primary DNA sequence data are being accumulated for a wide range of organisms, and the role of model species in plant genetics is crucial in expanding our knowledge of the fundamental mechanisms of plant development. The purpose of this project is the evaluation of neurocomputing methods in the prediction of gene sequences for a variety of agricultural species.
The work will be carried out in the School of Mathematical and Information Sciences at Coventry University (where there is a variety of ongoing research in the applications of neural networks), in collaboration with Horticultural Research International at Wellesbourne, Warwickshire, where there is access to large databases of genetic characteristics. Applicants do not need a specialist background in either genetics or neural computation; preferably, they should have a background in mathematics and competence in at least one high-level computing language (C, Pascal, etc.).

Please send CVs by email or by post to:

 ___________________________________________
| Nigel Steele                              |
| Chair, Mathematics Division               |
| School of Mathematical and Information    |
| Sciences                                  |
| Coventry University                       |
| Priory St                                 |
| Coventry CV1 5FB                          |
| tel: +44 (0)203 838568                    |
| fax: +44 (0)203 838585                    |
| Email: nsteele at uk.ac.cov                |
|___________________________________________|

[Message sent by Colin Reeves (CRReeves at uk.ac.cov)]

From garza at mcc.com Mon Sep 20 15:17:36 1993
From: garza at mcc.com (NN.JOB)
Date: Mon, 20 Sep 93 14:17:36 CDT
Subject: Position announcement
Message-ID: <9309201917.AA22886@niobium.mcc.com>

******************* Position Announcement ******************

MCC (Microelectronics & Computer Technology Corp.) is one of the country's most broad-based industry consortia. MCC's membership of almost 100 companies/organizations includes a diverse group of electronics, computer, aerospace, manufacturing, and other advanced technology organizations. MCC has an immediate opening for a Member of Technical Staff (MTS) or Senior MTS in its Neural Network Projects. Job responsibilities will be to conduct applied research in one or more of the following three areas (listed in order of importance): intelligent financial systems, OCR, and spectral (image/signal) processing applications.

Required skills:
- Neural net research & development experience
- PhD in a relevant area, preferably in EE, physics, or applied mathematics
- Strong quantitative skills
- C programming, UNIX background

Preferred skills:
- Experience in financial applications and/or time series analysis
- Demonstrated project leadership
- Strong communication skills

Please forward your resume and salary history to:

MCC
ATTN: Neural Network Job
3500 W. Balcones Center Drive
Austin, TX 78759
email: nn.job at mcc.com

From watrous at learning.siemens.com Mon Sep 20 15:06:34 1993
From: watrous at learning.siemens.com (Raymond L Watrous)
Date: Mon, 20 Sep 93 15:06:34 EDT
Subject: Proceedings of the 1993 NNSP Workshop
Message-ID: <9309201906.AA02937@tiercel.siemens.com>

The 1993 IEEE Workshop on Neural Networks for Signal Processing was held September 6 - September 9, 1993 at the Maritime Institute of Technology and Graduate Studies, Linthicum Heights, Maryland, USA.
Copies of the 593-page, hardbound Proceedings of the workshop may be obtained for $50 (US, check or money order, please) postpaid from:

Raymond Watrous, Financial Chair
1993 IEEE Workshop on Neural Networks for Signal Processing
c/o Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
(609) 734-6596
(609) 734-6565 (FAX)

From esann at dice.ucl.ac.be Mon Sep 20 15:58:40 1993
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Mon, 20 Sep 93 21:58:40 +0200
Subject: ESANN'94: European Symposium on ANNs
Message-ID: <9309201958.AA09530@ns1.dice.ucl.ac.be>

____________________________________________________________________
____________________________________________________________________

European Symposium on Artificial Neural Networks
Brussels - April 20-21-22, 1994
First announcement and call for papers

____________________________________________________________________
____________________________________________________________________

-----------------------
Scope of the conference
-----------------------

ESANN'93 was held in Brussels in April 1993. It gathered more than 80 scientists, from about 15 countries, who wanted to learn more about the latest developments in the theory of neural networks. The European Symposium on Artificial Neural Networks will be organized for the second time in April 1994 and, as in 1993, will focus on the fundamental aspects of artificial neural network research. Today, thousands of researchers work in this field; they try to develop new algorithms, to mimic properties found in natural networks, to develop parallel computers based on these properties, and to use artificial neural networks in new application areas. But the field is new, and has expanded drastically in about ten years; this has led to a lack of theoretical work on the subject, and also to a lack of comparisons between new methods and more classical ones.

The purpose of ESANN is to cover the theoretical and fundamental aspects of neural networks; the symposium is intended to give participants an up-to-date and comprehensive view of these aspects, through the presentation of new results and new developments, through tutorial papers covering the relations between neural networks and classical methods of computing, and also through round tables confronting the views of specialists and non-specialists of the field.

The program committee of ESANN'94 welcomes papers in the following aspects of artificial neural networks:

theory
models and architectures
mathematics
learning algorithms
biologically plausible artificial networks
neurobiological systems
adaptive behavior
signal processing
statistics
self-organization
evolutive learning

Accepted papers will cover new results in one or several of these aspects or will be of a tutorial nature. Papers emphasizing the relations between artificial neural networks and classical methods of information processing, signal processing or statistics are encouraged.

----------------------
Call for contributions
----------------------

Prospective authors are invited to submit six originals of their contribution before November 26, 1993. The working language of the conference (including the proceedings) is English. Papers should not exceed six A4 pages (including figures and references). The printing area will be 12.2 x 19.3 cm (centered on the A4 page); left, right, top and bottom margins will thus respectively be 4.4, 4.4, 5.2 and 5.2 cm.
A 10-point Times font will be used for the main text; headings will be in bold characters (but not underlined), and will be separated from the main text by two blank lines before and one after. Manuscripts prepared in this format will be reproduced in the same size in the book.

The first page will begin with a heading, indented 1 cm left and right with regard to the main text (the heading will thus have left and right margins of 5.4 cm). The heading will contain the title (Times 14 point, bold, centered), one blank line, the author(s) name(s) (Times 10 point, centered), one blank line, the affiliation (Times 9 point, centered), one blank line, and the abstract (Times 9 point, justified, beginning with the word "Abstract." in bold face).

Originals of the figures will be pasted into the manuscript and centered between the margins. The lettering of the figures should be in 10-point Times font size. Figures should be numbered. The legends should also be centered between the margins and be written in 9-point Times font size. The pages of the manuscript will not be numbered (numbering will be decided by the editor).

A separate page (not included in the manuscript) will indicate:
- the title of the manuscript
- author(s) name(s)
- the complete address (including phone & fax numbers and E-mail) of the corresponding author
- a list of five keywords or topics

On the same page, the authors will copy and sign the following paragraph: "In case of acceptance of the paper for presentation at ESANN 94:
- at least one of the authors will register to the conference and will present the paper
- the author(s) give up their rights over the paper to the organizers of ESANN 94, for the proceedings and any publication that could directly be generated by the conference
- if the paper does not match the format requirements for the proceedings, the author(s) will send a revised version within two weeks of the notification of acceptance."

Contributions must be sent to the conference secretariat. Prospective authors are invited to request examples of camera-ready contributions by writing to the same address.

-------------
Local details
-------------

The conference will be held in the center of Brussels (Belgium). Close to most great European cities, Brussels is exceptionally well served by a closely-knit motorway and railway system, and an international airport. Besides being an artistic and cultural center of attraction, Brussels is also renowned for its countless typical cafés, from the most unassuming to the most prestigious. Belgian food is typical and famous, and the night life in Brussels is considerable.

---------
Deadlines
---------

Submission of papers: November 26, 1993
Notification of acceptance: January 17, 1994
Symposium: April 20-22, 1994

------
Grants
------

A limited number of grants (registration fees and economic accommodation) will be given to young scientists coming from the European Community (Human Capital and Mobility program, European Community - DG XII). Grants will also probably be available for scientists from Central and Eastern European countries. Please write to the conference secretariat to get an application form for these grants.

----------------------
Conference secretariat
----------------------

Dr.
Michel Verleysen
D Facto Conference Services
45 rue Masui
B - 1210 Brussels (Belgium)
phone: +32 2 245 43 63
fax: +32 2 245 46 94
E-mail: esann at dice.ucl.ac.be

----------
Reply form
----------

If your contact address is incomplete or has been changed recently, or if you know a colleague who might be interested in ESANN'94, please send this form, with your or his/her name and address, to the conference secretariat:

''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Name: .....................................................
First Name: ...............................................
University or Company: ....................................
...........................................................
Address: ..................................................
...........................................................
...........................................................
ZIP: ......................................................
Town: .....................................................
Country: ..................................................
Tel: ......................................................
Fax: ......................................................
E-mail: ...................................................
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

------------------
Steering committee
------------------

François Blayo, EERIE, Nîmes (F)
Marie Cottrell, Univ. Paris I (F)
Nicolas Franceschini, CNRS Marseille (F)
Jeanny Hérault, INPG Grenoble (F)
Michel Verleysen, UCL Louvain-la-Neuve (B)

--------------------
Scientific committee
--------------------

Luis Almeida *, INESC - Lisboa (P)
Jorge Barreto, UCL Louvain-en-Woluwe (B)
Hervé Bourlard, L. & H. Speech Products (B)
Joan Cabestany, Univ. Polit. de Catalunya (E)
Dave Cliff, University of Sussex (UK)
Pierre Comon, Thomson-Sintra Sophia (F)
Holk Cruse, Universität Bielefeld (D)
Dante Del Corso, Politecnico di Torino (I)
Marc Duranton, Philips / LEP (F)
Jean-Claude Fort, Université Nancy I (F)
Karl Goser, Universität Dortmund (D)
Martin Hasler, EPFL Lausanne (CH)
Philip Husbands, University of Sussex (UK)
Christian Jutten, INPG Grenoble (F)
Petr Lansky, Acad. of Science of the Czech Rep. (CZ)
Jean-Didier Legat, UCL Louvain-la-Neuve (B)
Jean Arcady Meyer, Ecole Normale Supérieure - Paris (F)
Erkki Oja, Helsinki University of Technology (SF)
Guy Orban, KU Leuven (B)
Gilles Pagès *, Université Paris I (F)
Alberto Prieto, Universidad de Granada (E)
Pierre Puget, LETI Grenoble (F)
Ronan Reilly, University College Dublin (IRE)
Tamas Roska, Hungarian Academy of Science (H)
Jean-Pierre Rospars, INRA Versailles (F)
André Roucoux, UCL Louvain-en-Woluwe (B)
John Stonham, Brunel University (UK)
Lionel Tarassenko, University of Oxford (UK)
John Taylor, King's College London (UK)
Vincent Torre, Università di Genova (I)
Claude Touzet, EERIE Nîmes (F)
Joos Vandewalle, KUL Leuven (B)
Eric Vittoz, CSEM Neuchâtel (CH)
Christian Wellekens, Eurecom Sophia-Antipolis (F)

(* tentatively)

_____________________________
Michel Verleysen
D facto conference services
45 rue Masui
1210 Brussels, Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
E-mail: esann at dice.ucl.ac.be
_____________________________

From heiniw at sun1.eeb.ele.tue.nl Mon Sep 20 05:45:43 1993
From: heiniw at sun1.eeb.ele.tue.nl (Heini Withagen)
Date: Mon, 20 Sep 93 11:45:43 +0200
Subject: Neural Network reports available
Message-ID: <9309200945.AA08526@sun1.eeb.ele.tue.nl>

The following neural network related reports are available from ftp.urc.tue.nl in the /neural directory.
Send any questions to: heiniw at eeb.ele.tue.nl

neural_interface.ps.gz:

INTERFACING NEURAL NETWORK CHIPS WITH A PERSONAL COMPUTER
J.J.M. van Teeffelen

Abstract: The Electronic Circuit Design Group at the Eindhoven University of Technology is currently implementing several neural networks with a multi-layered perceptron architecture, together with their learning algorithms, on VLSI chips. In order to test these chips and to use them in an application, they will be connected to a personal computer by means of an interface. This interface, which has to be as versatile as possible (meaning that it must be able to connect all kinds of neural network chips), can be realized either by making use of commercially available interfaces or by designing a custom interface with off-the-shelf components. Two interfaces will be discussed, one for the rather slow AT-bus and one for the high-speed VESA local bus. Although the commercially available interfaces are not as versatile as desired, and the prices may seem rather high, they turn out to be the best way to realize the interface at the moment. They are guaranteed to work and can be used immediately. The discussed interfaces for the AT-bus and the VESA local bus still have to be tested and implemented on a printed circuit board.

==============================================================================

weight_perturbation.tar.gz:

A STUDY OF THE WEIGHT PERTURBATION ALGORITHM USED IN NEURAL NETWORKS
R.E. ten Kate

Abstract: This thesis studies different aspects of the Weight Perturbation (WP) algorithm used to train neural networks. After a general introduction to neural networks and their algorithms, the WP algorithm is described. A theoretical study is presented describing the effects of the applied perturbation on the error performance of the algorithm, and another describing the influence of the learning rate on the convergence speed. The effects of these two algorithm parameters have been simulated in an ideal multilayer perceptron using various benchmark problems. When the WP algorithm is implemented on an analog neural network chip, several hardware limitations are present which influence the performance of the WP algorithm: weight quantization, weight decay, and non-ideal multipliers and neurons. The influence of these non-idealities on the algorithm performance has been studied theoretically. Simulations of these effects have been done using parameters predicted by SPICE simulations of the hardware. Several proposals are made to reduce the effects of the hardware non-idealities. Two proposals have been studied to increase the speed of the WP algorithm.

Heini Withagen
Dep. of Elec. Engineering EH 9.29
Eindhoven University of Technology
P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Phone: 31-40472366
Fax: 31-40455674
E-mail: heiniw at eeb.ele.tue.nl

========================================================================
First Law of Bicycling: No matter which way you ride, it's uphill and against the wind.
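For readers who have not met it, the core of weight perturbation fits in a few lines; the sketch below is a generic finite-difference version, not code from the thesis. The error function error() is assumed to be supplied elsewhere, and PERT and ETA stand for the perturbation size and learning rate whose effects the thesis analyzes.

    /* Generic weight-perturbation step (an illustrative sketch, not
     * the thesis code).  error() is assumed to evaluate the network's
     * total error for the weight vector w[] of length n. */

    #define PERT 0.001
    #define ETA  0.1

    extern double error(const double *w, int n);  /* assumed, supplied elsewhere */

    void wp_step(double *w, int n)
    {
        for (int i = 0; i < n; i++) {
            double e0 = error(w, n);          /* baseline error              */
            w[i] += PERT;                     /* perturb one weight          */
            double de = error(w, n) - e0;     /* measured change in error    */
            w[i] -= PERT;                     /* restore the weight          */
            w[i] -= ETA * de / PERT;          /* descend the estimated slope */
        }
    }

Since only forward evaluations of the error are needed, no backward pass through possibly non-ideal analog hardware is required, which is the usual motivation for weight perturbation on chips; the hardware non-idealities listed above matter mainly through their effect on the measured error change.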
From ahg at eng.cam.ac.uk Tue Sep 21 15:58:43 1993
From: ahg at eng.cam.ac.uk (ahg@eng.cam.ac.uk)
Date: Tue, 21 Sep 93 15:58:43 BST
Subject: Technical report available
Message-ID: <15715.9309211458@tulip.eng.cam.ac.uk>

The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department.

PROBLEM SOLVING WITH OPTIMIZATION NETWORKS

PhD Thesis
Andrew Gee

Technical Report CUED/F-INFENG/TR 150
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ, England

Summary

Combinatorial optimization problems, which are characterized by a discrete set as opposed to a continuum of possible solutions, occur in many areas of engineering, science and management. Such problems have so far resisted efficient, exact solution, despite the attention of many capable researchers over the last few decades. It is not surprising, therefore, that most practical solution algorithms abandon the goal of finding the optimal solution, and instead attempt to find an approximate, useful solution in a reasonable amount of time. A recent approach makes use of highly interconnected networks of simple processing elements, which can be programmed to compute approximate solutions to a variety of difficult problems. When properly implemented in suitable parallel hardware, these `optimization networks' are capable of extremely rapid solution rates, thereby lending themselves to real-time applications. This thesis takes a detailed look at problem solving with optimization networks. Three important questions are identified, concerning the applicability of optimization networks to general problems, the convergence properties of the networks, and the likely quality of the networks' solutions. These questions are subsequently answered using a combination of rigorous analysis and simple, illustrative examples. The investigation leads to a clearer understanding of the networks' capabilities and shortcomings, confirmed by extensive experiments. It is concluded that optimization networks are not as attractive as they might have previously seemed, since they can be successfully applied to only a limited number of problems exhibiting special, amenable properties.

************************ How to obtain a copy ************************

Via FTP:

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get gee_tr150.ps.Z
ftp> quit
unix> uncompress gee_tr150.ps.Z
unix> lpr gee_tr150.ps (or however you print PostScript)

NB. This is a large file, expanding to 3.8 MBytes of uncompressed PostScript, and printing on 144 pages.
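For readers new to the area, an `optimization network' in this sense is typically a Hopfield-style network that settles into low-energy states. The sketch below is a minimal generic example, not code from the thesis; the weights W and biases b encoding a particular problem are assumed to be set up elsewhere.

    /* Generic Hopfield-style optimization network (illustrative only).
     * Binary units s[i] in {0,1}; W[i][j] (symmetric, zero diagonal)
     * and b[i] encode the problem.  Each asynchronous update never
     * raises the energy E = -1/2 sum_ij W[i][j] s[i] s[j] - sum_i b[i] s[i],
     * so the net settles into a local minimum: an approximate solution. */

    #define N 16

    double W[N][N];   /* problem-specific, filled in elsewhere */
    double b[N];
    int    s[N];

    int update_unit(int i)            /* returns 1 if the state changed */
    {
        double net = b[i];
        for (int j = 0; j < N; j++)
            net += W[i][j] * s[j];
        int new_s = (net > 0.0) ? 1 : 0;
        int changed = (new_s != s[i]);
        s[i] = new_s;
        return changed;
    }

    void settle(void)                 /* sweep until a fixed point */
    {
        int changed = 1;
        while (changed) {
            changed = 0;
            for (int i = 0; i < N; i++)
                changed |= update_unit(i);
        }
    }

Encoding a problem's constraints in W and b so that low-energy states correspond to good solutions is exactly where the difficulties identified in the thesis arise.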
From lacher at NU.CS.FSU.EDU Tue Sep 21 13:47:13 1993
From: lacher at NU.CS.FSU.EDU (R.C. Lacher)
Date: Tue, 21 Sep 93 13:47:13 EDT
Subject: Superchair
Message-ID: <9309211747.AA15777@lambda.cs.fsu.edu>

I would like to call the following announcement to the attention of the connectionists research community. Note that the position is rather wide open as to field or home department. In particular, applications and nominations of eminent scientists in the various connectionist fields are encouraged. Biology, Computer Science, Mathematics, Physics, Psychology, and Statistics are all departments in the college.

  __o    __o    __o    __o    __o
 -\<,   -\<,   -\<,   -\<,   -\<,      Chris Lacher
_ _ O/_O _ O/_O _ O/_O _ O/_O _ O/_O _ _ _ _

Department of Computer Science      Phone: (904) 644-4029
Florida State University            Fax: (904) 644-0058
Tallahassee, FL 32306-4019          Email: lacher at cs.fsu.edu

===================================================================

The Thinking Machines Corporation Eminent Scholar Chair in High Performance Computing

Applications and nominations are invited for the TMC Eminent Scholar Chair in High Performance Computing at Florida State University. This position is supported, in part, by a $4 million endowment and will be filled at a senior level in the College of Arts and Sciences. Applicants and nominees should have a distinguished academic or research record in one or more fields closely associated with modern high performance computing. These fields include applied mathematics, applied computer science, and computational science in one or more scientific or engineering disciplines. The appointment will be in one or more academic departments and in the Supercomputer Computations Research Institute (SCRI). The primary responsibilities of the successful candidate will be to establish new research and education directions in high performance computing that complement the existing strong programs in SCRI, the National High Magnetic Field Laboratory, the Structural Biology Institute, the Global Climate Research Institute, and the academic departments. The Chair will be closely involved with the addition of several junior-level academic appointments in connection with this new initiative in high performance computing, in order to establish the strongest possible group effort.

The deadline for applications is December 17, 1993. Applications and nominations should be sent to: HPC Chair Selection Committee, Mesoscale Air-Sea Interaction Group, Florida State University, Tallahassee, FL 32306-3041. Florida State University is an Equal Opportunity/Equal Access/Affirmative Action Employer. Women and minorities are encouraged to apply.

From ernst at cns.caltech.edu Tue Sep 21 16:06:36 1993
From: ernst at cns.caltech.edu (Ernst Niebur)
Date: Tue, 21 Sep 93 13:06:36 PDT
Subject: NIPS 93 Announcement: Workshop on Selective Attention
Message-ID: <9309212006.AA11904@isis.klab.caltech.edu>

Fellow Connectionists:

We would like to announce the final program of a workshop on visual selective attention to be held at this year's NIPS conference. The conference will be held from Nov. 29 to Dec. 2 in Denver, CO; the workshop will be held Dec. 3 and 4 "at a nearby ski area."

For NIPS conference and workshop registration info, please write to:
NIPS*93 Registration / NIPS Foundation / PO Box 60035 / Pasadena, CA 91116-6035 USA

For questions concerning this workshop, please contact either of the organizers by e-mail.

--Ernst Niebur

NIPS*93 Workshop: Neurobiology, Psychophysics, and Computational Models of Visual Attention
=================

Intended Audience: Experimentalists, modelers and others interested in visual attention and high-level vision
==================

Organizers:
===========
Ernst Niebur (ernst at caltech.edu)
Bruno Olshausen (bruno at lgn.wustl.edu)

Program:
========

In any physical computational system, processing resources are limited, which inevitably leads to bottlenecks in the processing of sensory information. Nowhere is this more evident than in the primate visual system, where the massive amount of information provided by the optic nerve far exceeds what the brain is capable of fully processing and assimilating into conscious experience. Visual attention thus serves as a mechanism for selecting certain portions of the input to be processed preferentially, shifting the processing focus from one location to another in a serial fashion. The study of visual attention is integral to our understanding of higher visual function, and it may also be of practical benefit to machine vision. What we know of visual attention has been learned from a combination of psychophysical, neurophysiological, and computational approaches.
Psychophysical studies have revealed the behavioral consequences of visual attention by measuring either a speed-up in the observer's reaction time or an improvement in discrimination performance when the observer is attending to a task. Neurophysiological studies, on the other hand, have attempted to reveal the neural mechanisms and brain areas involved in attention by measuring the modulation in single-cell firing rate, or in the activity in a part of the brain, as a function of the attentional state of the subject. A number of computational models based on these studies have been proposed to address the question of how attention eases the computational burdens faced by the brain in pattern recognition or other visual tasks, and how attention is controlled and expressed at the neuronal level.

The goal of this workshop will be to bring together experts from each of these fields to discuss the latest advances in their approaches to studying visual attention. Half the available time has been reserved for informal presentations and the other half for discussion.

Morning session:
7:30-8:00 Introduction/overview: "Covert Visual Attention: The Phenomenon" (Ernst Niebur, Caltech) (7:50-8:00: Discussion)
8:00-9:00 Neurobiology
8:00 "Effects of Focal Attention on Receptive Field Profiles in Area V4" (Ed Connor, Washington University) (8:20-8:30: Discussion)
8:30 "Neurophysiological evidence of scene segmentation by feature selective, parallel attentive mechanisms" (Brad Motter, VA Medical Center/SUNY-HSC, Syracuse) (8:50-9:00: Discussion)
9:00-9:30 General Discussion

Afternoon session:
4:30-5:00 Psychophysics: "Attention and salience: alternative mechanisms of visual selection" (Jochen Braun, Caltech) (4:50-5:00: Discussion)
5:00-6:00 Computational models
5:00 "Models for the neural implementation of attention based on the temporal structure of neural signals" (Ernst Niebur, Caltech) (5:20-5:30: Discussion)
5:30 "Dynamic routing circuits for visual attention" (Bruno Olshausen, Washington University/Caltech) (5:50-6:00: Discussion)
6:00-6:30 General discussion

From B.DASGUPTA at fs3.mbs.ac.uk Wed Sep 22 09:25:11 1993
From: B.DASGUPTA at fs3.mbs.ac.uk (BHASKAR DASGUPTA ALIAS BD)
Date: 22 Sep 93 09:25:11 BST
Subject: Statistical V/s. Neural Networks in Forecasting
Message-ID: <4084C4041C9@fs3.mbs.ac.uk>

Hi! I am searching for references on the applications of neural networks to forecasting time series, univariate as well as multivariate. I have managed to locate the following references. Does anyone have any more references in this area?

Hruschka, H., 1993, "Determining market response functions by neural network modeling: A comparison to econometric techniques", European Journal of Operational Research, 66, 1, Apr., 27-35.

Lapedes, A., & Farber, R., 1987, "Nonlinear Signal Processing using Neural Networks: Prediction and System Modeling", Los Alamos National Lab Technical Report LA-UR-87-2261, July.

Marquez, L., Hill, T., Worthley, R., & Remus, W., 1991, "Neural Network Models as an Alternative to Regression", Proceedings of the IEEE 24th Annual Hawaii International Conference on System Sciences, Vol. VI, 129-135.

Sharda, R., & Patil, R., 1990, "Neural Networks as Forecasting Experts: An Empirical Test", International Joint Conference on Neural Networks, IJCNN-WASH-D.C., Vol. II, January 15-19, 491-494.

______, 1992, Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real-World Performance, ed. Trippi, R.R., & Turban, E., Probus Publishing Co., Cambridge, UK, 451-464.
Tang, Z., Almeida, C. de, & Fishwick, P.A., 1990, "Time Series Forecasting Using Neural Networks vs. Box-Jenkins Methodology", International Workshop of Neural Networks, Auburn, February.

Wu, F.Y., & Yen, K.K., 1992, "Applications of Neural Networks in Regression Analysis", Computers and Industrial Engineering, 23, 1-4, Nov., 93-95.

Youngohc, Y., Swales, G. Jr., & Margavio, T.M., 1993, "A comparison of discriminant analysis versus artificial neural networks", Journal of the Operational Research Society, 44, 1, Jan., 51-60.

The second question is: has anyone come across any 'research' paper concerning the applications of GAs to financial market modelling? I keep seeing only newspaper articles and general-interest articles on this subject. I know that the moderators of this list do not like anyone sending requests for references without a substantial initial bibliography, but unfortunately I am not able to provide one, since I could not locate any references in the first place. I shall post a compilation of the received references back to this list. Thanks in advance.

Bhaskar Dasgupta
Manchester Business School
Booth Street West
Manchester M15 6PB, UK
Phone: +44-61-275-6547
fax: +44-61-273-7732

============================================================
Chaos is the law of nature
Order is the dream of man
============================================================

From gomes at ICSI.Berkeley.EDU Wed Sep 22 05:38:16 1993
From: gomes at ICSI.Berkeley.EDU (Benedict A. Gomes)
Date: Wed, 22 Sep 1993 02:38:16 -0700 (PDT)
Subject: Thanks for the references
Message-ID: <9309220938.AA24172@icsib6.ICSI.Berkeley.EDU>

Thanks to those who responded to my request for references on mapping structured nets to parallel machines. Most of the references are from D.R. Mani's dissertation proposal. Since the list is somewhat long, I've set it up for anonymous ftp from pub/ai/gomes/nn-mapping.bib at icsi.berkeley.edu. That directory also contains a partial summary of the references in nn-mapping-summary. I will update the summary as I find and read more of the papers. If you come to know of work that is not mentioned in the list, I would appreciate that information and will keep the references up-to-date.

Abbreviated transcript for getting the files:

gomes:~> ftp icsi.berkeley.edu
Connected to icsi.berkeley.edu.
Name (icsi.berkeley.edu:gomes): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
ftp> cd pub/ai/gomes
ftp> get nn-mapping.bib
..
ftp> get nn-mapping-summary
..
ftp> bye
221 Goodbye.

Benedict Gomes (gomes at icsi.berkeley.edu)

From bard at mthbard.math.pitt.edu Thu Sep 23 10:41:48 1993
From: bard at mthbard.math.pitt.edu (Bard Ermentrout)
Date: Thu, 23 Sep 93 10:41:48 EDT
Subject: software
Message-ID: <9309231441.AA22089@mthbard.math.pitt.edu>

F R E E   S I M U L A T I O N   S O F T W A R E

I thought perhaps that modellers etc. might be interested to know of the availability of software for the analysis and simulation of dynamical and probabilistic phenomena. xpp is available free via anonymous ftp. It solves integro-differential equations, delay equations, iterative equations, all combined with probabilistic models. PostScript output is supported. A variety of numerical methods are employed so that the user can generally be sure that the solutions are accurate. Examples are connectionist-type neural nets, biophysical models, models with memory, and models of cells with random inputs or with random transitions.
A graphical interface using X Windows, as well as numerous plotting options, is provided. The requirements are a C compiler and an OS capable of running X11. The software has been successfully compiled on DEC, HP, SUN, IBM, and NeXT workstations as well as on a PC running Linux. Once it is compiled, no more compilation is necessary, as the program can read algebraic expressions and interpret them in order to solve them. The program has been used in various guises for the last 5 years by a variety of mathematicians, physicists, and biologists. To get it, follow the instructions below:

------------Installing XPP1.6--------------------------------

XPP is pretty simple to install, although you might have to add a line here and there to the Makefile. You can get it from mthsn4.math.pitt.edu (130.49.12.1); here is how:

ftp 130.49.12.1
cd /pub
bin
get xpp1.6.tar.Z
quit
uncompress xpp1.6.tar.Z
tar xf xpp1.6.tar
make -k

If you get errors in the compilation it is likely to be one of the following:

1) gcc not found, in which case you should edit the Makefile so that it says CC= cc

2) Can't find X include files. Then edit the line that says CFLAGS= ... by adding -I<dir>, where <dir> is where the include files for X are, e.g. -I/usr/X11/include

3) Can't find X libraries. Then add a line LDFLAGS= -L<dir> right after the CFLAGS= line, where <dir> is where to find the X11 libraries, then change this line:

$(CC) -o xpp $(OBJECTS) $(LIBS)

to this line:

$(CC) -o xpp $(OBJECTS) $(LDFLAGS) $(LIBS)

That should do it!! If it still doesn't compile, then you should ask your sysadmin about the proper paths. Finally, some compilers have trouble with the GEAR algorithm if they are optimized, so you should remove the optimization flags, i.e. replace CFLAGS= -O2 -I<dir> with CFLAGS= -I<dir>, delete all the .o files, and recompile.

Good luck!

Bard Ermentrout

Send comments and bug reports to bard at mthbard.math.pitt.edu

From Matthew.White at cs.cmu.edu Fri Sep 24 03:15:48 1993
From: Matthew.White at cs.cmu.edu (Matthew.White@cs.cmu.edu)
Date: Fri, 24 Sep 1993 03:15:48 -0400 (EDT)
Subject: CMU Learning Benchmark Database Updated
Message-ID:

The CMU Learning Benchmark Archive has been updated. As you may know, in the past all the data sets in this collection have been in varying formats, requiring that code be written to parse each one. This was a waste of everybody's time. These old data sets have been replaced with data sets in a standardized format. Now, all benchmarks consist of a file detailing the benchmark and another file that is either a data set (.data) or a program to generate the appropriate data set (.c).

Data sets currently available are:

nettalk      Pronunciation of English words.
parity       N-input parity.
protein      Prediction of secondary structure of proteins.
sonar        Classification of sonar signals.
two-spirals  Distinction of a twin spiral pattern.
vowel        Speaker-independent recognition of vowels.
xor          Traditional xor.

To accompany this new data file format there is a file describing the format and a C library to parse it. In addition, the simulator (C version) for Cascade-Correlation has been rewritten to use the new file format. Both the parsing code and the Cascade-Correlation code are distributed as compressed shell archives and should compile with any ANSI/ISO compatible C compiler.

Code currently available:

nevprop1.16.shar  A user-friendly version of quickprop.
cascor1a.shar     The re-engineered version of the Cascade-Correlation algorithm.
parse1.shar       C code for the parsing algorithm for the new data set format.
Data sets and code are available via anonymous FTP. Instructions follow. If you have difficulties with either the data sets or the programs, please send mail to: neural-bench at cs.cmu.edu. Any comments or suggestions should also be sent to that address. Let me urge you not to hold back questions, as it is our single best way to spot places for improvement in our methods of doing things.

If you would like to submit a data set to the CMU Learning Benchmark Archive, send email to neural-bench at cs.cmu.edu. All data sets should be in the CMU data file format. If you have difficulty converting your data file, contact us for assistance.

Matt White
Maintainer, CMU Learning Benchmark Archive

-------------------------------------------------------------------------------

Directions for FTPing datasets:

For people whose systems support AFS, you can access the files directly from directory "/afs/cs.cmu.edu/project/connect/bench".

For people accessing these files via FTP:

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu". The internet address of this machine is 128.2.206.173, for those who need it.
2. Log in as user "anonymous" with your own internet address as password. You may see an error message that says "filenames may not have /.. in them" or something like that. Just ignore it.
3. Change remote directory to "/afs/cs/project/connect/bench". NOTE: you must do this in a single atomic operation. Some of the super directories on this path are not accessible to outside users.
4. At this point the "dir" command in FTP should give you a listing of files in this directory. Use get or mget to fetch the ones you want. If you want to access a compressed file (with suffix .Z) be sure to give the "binary" command before doing the "get". (Some versions of FTP use different names for these operations -- consult your local system maintainer if you have trouble with this.)
5. The directory "/afs/cs/project/connect/code" contains public-domain programs implementing the Quickprop and Cascade-Correlation algorithms, among other things. Access it in the same way.

From joachim at fit.qut.edu.au Fri Sep 24 04:12:20 1993
From: joachim at fit.qut.edu.au (Joachim Diederich)
Date: Fri, 24 Sep 1993 04:12:20 -0400
Subject: NIPS-93 Workshop "Parallel Processing"
Message-ID: <199309240812.EAA16792@fitmail.fit.qut.edu.au>

NIPS*93 Workshop: Connectionist Modelling and Parallel Architectures
=================
4 December 1993; Vail, Colorado

Intended Audience: computer scientists and engineers as well as biologists and cognitive scientists
==================

Organizers:
===========
Joachim Diederich, Neurocomputing Research Centre, Queensland University of Technology (joachim at fitmail.fit.qut.edu.au)
Ah Chung Tsoi, Department of Elec. and Computer Engineering, University of Queensland (act at s1.elec.uq.oz.au)

Program:
========

The objective of the workshop is to provide a discussion platform for researchers interested in software and modelling aspects of neural computing. The workshop should be of considerable interest to computer scientists and engineers as well as biologists and cognitive scientists. The introduction of specialized hardware platforms for connectionist modelling ("connectionist supercomputers") has created a number of research issues which should be addressed. Some of these issues are controversial (incl.
the need for such specialized architectures): the efficient implementation of incremental learning techniques, the need for the dynamic reconfiguration of networks at runtime, and possible programming environments for these machines.

The following topics should be addressed:

- the efficient simulation of homogeneous network architectures; mapping of homogeneous network architectures to parallel machines
- randomness and sparse coding; the efficient simulation of sparse networks on sequential and parallel machines; sparse activity and communication in parallel architectures
- arbitrary interconnection schemes and their mapping to parallel architectures
- dynamic reconfiguration: the modification of network structures and activation functions at runtime; possible trade-offs between the efficient simulation of fixed-size networks and constructive (incremental) learning algorithms
- software tools and environments for neural network modelling, in particular for parallel architectures
- connectionist supercomputers (such as CNAPS, SYNAPSE and CNS-1); hardware and programming issues associated with connectionist supercomputers
- biologically realistic modelling on parallel machines; the simulation of synaptogenesis, spike trains etc.
- realistic simulation of the brain integrating over a number of scales of complexity, from the detailed simulation of neurons to high-level abstractions

The following is a preliminary schedule; we expect to have two more slots for brief presentations and invite abstracts for short talks (about 10-15 min). Please send e-mail to: joachim at fitmail.fit.qut.edu.au

Morning Session:
----------------
7:30-7:40 Joachim Diederich, Queensland University of Tech., Brisbane: Introduction
7:40-8:10 Jerome A. Feldman, ICSI & University of California, Berkeley: The Connectionist Network Supercomputer (CNS-1)
8:10-8:30 Discussion
8:30-8:50 Nigel Goddard, Pittsburgh Supercomputer Center: Practical Parallel Neural Simulation
8:50-9:10 Per Hammarlund, Royal Institute of Technology, Stockholm: Simulation of Large Neural Networks: System Specification and Execution on Parallel Machines
9:10-9:30 Discussion

Afternoon Session:
------------------
4:30-4:50 Paul Murtagh & Ah Chung Tsoi, University of Queensland, St. Lucia: Digital implementation of a reconfigurable VLSI neural network chip
4:50-5:20 Ulrich Ramacher, Siemens AG, Munich: The Neurocomputer SYNAPSE-1
5:20-5:30 Discussion
5:30-6:00 Guenther Palm & Franz Kurfess, University of Ulm: Parallel Implementations of Neural Networks for Associative Memory
6:00-6:30 Discussion

From georg at ai.univie.ac.at Fri Sep 24 07:56:06 1993
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Fri, 24 Sep 1993 13:56:06 +0200
Subject: CSFN: instructions for printing
Message-ID: <199309241156.AA00364@chicago.ai.univie.ac.at>

Dear connectionists,

I recently announced the availability of two papers ("A unified framework for MLPs and RBFNs ..." and "On using feedforward NN for clinical ...") on both neuroprose and our ftp server 'ftp.ai.univie.ac.at'. Unfortunately, the postscript files contain commands that cause trouble with some printers. So, if you fetched any of the files but were not able to print it out, it could be either:

1) your printer does not support the "A4" tray (European paper format). Then either delete the line consisting of 'statusdict begin a4tray end', or fetch the file again from ftp.ai.univie.ac.at (anonymous ftp), where I have already deleted that line.

2) or the troubles (i.e.
no printing at all, an error message such as "Offending command: cleartomark", or the like) are caused by a bunch of lines consisting of the simple command 'cleartomark', for whatever reason. Then delete all those lines, or globally replace them with '%cleartomark' (= comment them out), and try printing again.

Since the troubles seem to occur only on SOME printers and other printers might need those ominous commands (sorry to you PS gurus), and since I want to save Jordan Pollack some additional effort, I have not made any changes to the originally archived files (except for the one mentioned above).

Thank you for your patience and sorry for any inconvenience.

cheers
Georg Dorffner

From protopap at marlowe.cog.brown.edu Fri Sep 24 14:09:03 1993
From: protopap at marlowe.cog.brown.edu (Thanassi Protopapas)
Date: Fri, 24 Sep 93 14:09:03 EDT
Subject: References on speech perception modeling wanted
Message-ID: <9309241809.AA09588@marlowe.cog.brown.edu>

Hello, I am a graduate student in the program in Cognitive Science at Brown, and I am preparing my prelim paper on connectionist modeling of speech perception. I already have an extensive list of references on the issues of speech perception and on general neural net modeling, as well as on the TRACE model and its (dis)advantages, including the Massaro et al. papers on the FLMP. At the end of this message I list some papers I have that deal specifically with connectionist modeling; the extensive list of references on related topics can be found at the following ftp site (log in as anonymous): clouseau.cog.brown.edu (128.148.208.14), in directory /pub/protopapas, in file sp_references.ps (PostScript format only).

I would like to ask if you are aware of any recent (within the past two years or so) attempts to model speech perception with connectionist models. I am interested in the stages from prelexical processing of the signal (feature detection, normalization, etc.) up to lexical access (including representational issues and units/segments of processing). Please send me references at protopap at cog.brown.edu, or, if you have some technical report that I cannot find through the library system, I would really appreciate a copy. My postal address is:

Athanassios Protopapas
Dept. of Cognitive & Linguistic Sciences
Box 1978, Brown University
Providence, RI 02912, U.S.A.

Although I'd love to know about engineering approaches of neural networks to speech recognition, I am primarily interested in models of human speech perception (more psychologically/cognitively motivated and oriented). Thanks a lot in advance, Thanassi.

------------------------------------------------------------------------

Selected references:

Elman, J. L. (1989) Connectionist approaches to acoustic/phonetic processing. In W. Marslen-Wilson (Ed.) Lexical representation and process, pp. 227-260. Cambridge, MA: MIT Press.

Elman, J. L. (1990) Representation and structure in connectionist models. In G. T. M. Altmann (Ed.) Cognitive models of speech processing, pp. 345-382. Cambridge, MA: MIT Press.

Haffner, P. & Waibel, A. (1992) Multi-state time delay neural networks for continuous speech recognition. Advances in neural information processing systems, Vol. 4, pp. 135-142. Morgan Kaufmann.

Klatt, D. H. (1989) Review of selected models of speech perception. In W. Marslen-Wilson (Ed.) Lexical representation and process, pp. 169-226. Cambridge, MA: MIT Press.

Lippmann, R. P. (1989) Review of neural networks for speech recognition. Neural Computation, 1, pp. 1-38.

Massaro, D. W.
(1992) Connectionist models of speech perception. In R. G. Reilly & N. E. Sharkey (Eds.) Connectionist approaches to natural language processing, pp. 321-350. Hove, UK: Lawrence Erlbaum.

Massaro, D. W., & Cohen, M. M. (1991) Integration versus interactive activation: The joint influence of stimulus and context in perception. Cognitive Psychology, 23, 558-614.

McClelland, J. L. & Elman, J. L. (1986) Interactive processes of speech perception: The TRACE model. In Parallel distributed processing, Vol. 2: Psychological and biological models, pp. 58-121. Cambridge, MA: MIT Press.

McClelland, J. L. (1991) Stochastic interactive processes and the effect of context on perception. Cognitive Psychology, 23, 1-44.

Norris, D. (1990) A dynamic-net model of human speech recognition. In G. T. M. Altmann (Ed.) Cognitive models of speech processing, pp. 87-104. Cambridge, MA: MIT Press.

Quinlan, P. (1991) Connectionism and psychology. Chicago, IL: The University of Chicago Press, pp. 132-157.

Watrous, R. L. (1990) Phoneme discrimination using connectionist networks. The Journal of the Acoustical Society of America, 87(4): 1753-1772.

Waibel, A. (1989) Modular construction of time-delay neural networks for speech recognition. Neural Computation, 1, 39-46.

From ken at phy.ucsf.edu Mon Sep 27 06:13:58 1993
From: ken at phy.ucsf.edu (Ken Miller)
Date: Mon, 27 Sep 93 03:13:58 -0700
Subject: postdoctoral fellowship opportunity for women and minorities
Message-ID: <9309271013.AA24084@phybeta.ucsf.EDU>

The University of California annually awards 20 or more postdoctoral fellowships to women and minorities under the "President's Postdoctoral Fellowship Program". Fellowships are awarded to work with a faculty member at any of the nine UC campuses or at one of the three national laboratories associated with UC (Lawrence Berkeley, Lawrence Livermore, and Los Alamos). Fellowships pay $26-27,000/year, plus health benefits and $4000/year for research and travel. Applicants must be citizens or permanent residents of the United States, and should anticipate completion of their Ph.D.'s by July 1, 1994. For this year's competition, THE DEADLINE FOR APPLICATION IS DECEMBER 14, 1993.

There are many of us who work in computational neuroscience or connectionism in the UC system or the national labs. I would encourage anyone eligible to make use of this opportunity to obtain funding to work with one of us. In particular, I encourage anyone interested in computational neuroscience to contact me to further discuss my own research program and the research opportunities in computational and systems neuroscience at UCSF.

To receive a fellowship application and further information, contact:

President's Postdoctoral Fellowship Program
Office of the President, University of California
300 Lakeside Drive, 18th Floor
Oakland, CA 94612-3550
Phone: 510-987-9500 or 987-9503

Ken Miller

Kenneth D. Miller
telephone: (415) 476-8217
Dept.
From S.FLOCKTON at rhbnc.ac.uk Mon Sep 27 12:40:02 1993
From: S.FLOCKTON at rhbnc.ac.uk (S.FLOCKTON@rhbnc.ac.uk)
Date: Mon, 27 SEP 93 12:40:02 BST
Subject: Job vacancy in evolutionary algorithms
Message-ID: <21C31C4F_0050A620.009732BBE1F3469A$9_2@UK.AC.RHBNC.VAX>

ROYAL HOLLOWAY, UNIVERSITY OF LONDON
POST-DOCTORAL RESEARCH ASSISTANT
EVOLUTIONARY ALGORITHMS IN NON-LINEAR SIGNAL PROCESSING

Applications are invited for this SERC-funded post, tenable for three years from 1 October 1993 or soon after, to carry out a comparison of the effectiveness of evolution-based algorithms on a number of signal processing problems. This comparison will be done by studying example problems and by developing theoretical ideas concerning the behaviour of these algorithms. The successful applicant will join a group investigating several different aspects of genetic algorithms and neural networks.

Royal Holloway, one of the five multi-faculty Colleges of the University of London, is situated in a campus environment approximately 20 miles west of London, just outside the M25. Applicants should hold a PhD in Electrical Engineering, Computer Science, Physics, or a related field, preferably with experience in digital signal processing or genetic and/or other evolution-based algorithms. Salary is on the Research 1A Scale (UKpounds 14,962 - 17,320 pa, inclusive of London Allowance).

Informal enquiries to Dr Stuart Flockton (Tel: 0784 443510, Fax: 0784 472794, email: S.Flockton at rhbnc.ac.uk). Further particulars from the Personnel Officer, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, Tel: 0784 443030.

Closing date for applications: 15th October 1993

From GARZONM at hermes.msci.memst.edu Tue Sep 28 10:28:07 1993
From: GARZONM at hermes.msci.memst.edu (GARZONM@hermes.msci.memst.edu)
Date: 28 Sep 93 09:28:07 CDT
Subject: NIPS'93 workshop on "Stability and Observability"
Message-ID:

C A L L   F O R   P A P E R S

A One-day Workshop on
* STABILITY AND OBSERVABILITY *
at NIPS'93, December 3, 1993

We are organizing a workshop at NIPS'93 - the Neural Information Processing Systems conference - to be held in the Denver/Vail area in Colorado on December 3. The themes of the workshop are `Stability and Observability'. A more detailed description is attached below. There is still room for some contributed talks. If you are interested in presenting a paper based on previous and/or current research, send a short (one-page) abstract to, or contact, one of the organizers by October 8 via email or fax. A list of speakers will be finalized by mid-October.

Fernanda Botelho                   Max Garzon
botelhof at hermes.msci.memst.edu    garzonm at hermes.msci.memst.edu
FAX (901) 678-2480 (preferred); 678-3299
Workshop cochairs

_____________________________ cut here _________________________

The purpose of this one-day workshop is to bring together neural network practitioners, computer scientists and mathematicians interested in `stability' and `observability' of neural networks of various types (discrete/continuous time and/or activations). These two properties concern the relationship between the defining parameters of a network (weights, transfer functions, and training sets) and its behavior from the point of view of an outside observer. This behavior is affected by noise, rounding, bounded precision, sensitivity to initial conditions, etc.

Roughly speaking, *stability* (e.g. asymptotic, Lyapunov, structural) refers to the ability of a network (or a family of networks) to generate trajectories/orbits that remain reasonably close (respectively, in structure, e.g. topological conjugacy) to the original under small perturbations of the input/initial conditions (or of the defining parameters of the network). Of course, neural networks are well known for their graceful degradation, but the issue is less clear with bounded precision, continuous time with local interaction governed by differential equations, and learning algorithms.
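[Editor's note: as a toy illustration of the sensitivity question just described, the sketch below iterates a small discrete-time network from two nearby initial states and tracks how far the trajectories drift apart. The network size, weights, and perturbation are arbitrary assumptions chosen only to make the example self-contained.]

# Sketch: trajectory divergence in a tiny iterated net x(t+1) = tanh(W x(t)).
# The 2x2 weight matrix and the perturbation size are arbitrary assumptions.
import math

W = [[1.8, -1.1],
     [0.9,  1.6]]

def step(x):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

x = [0.3, -0.2]           # reference initial state
y = [0.3 + 1e-6, -0.2]    # same state, perturbed by 1e-6

for t in range(1, 21):
    x, y = step(x), step(y)
    if t % 5 == 0:
        print(f"t={t:2d}  |x - y| = {math.dist(x, y):.3e}")

[Whether the gap grows or shrinks depends on the weights: for contractive W the trajectories converge, and characterizing when that happens is exactly the kind of stability property at issue here.]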
Second, the issue of *observability*, roughly speaking, concerns the problem of error control under iteration of recurrent nets. In dynamical systems, observability is studied in terms of shadowing. But observability can also be construed in other ways, e.g. as our ability to identify a network by observing the abstract i/o function that it realizes (which, at some level, reduces to the essential uniqueness of an irreducible network implementing the i/o function).

Speakers will present their views in short (< 20 min.) talks. A panel discussion coordinated by the cochairs will discuss known results and identify fundamental problems and questions of interest for further research.

F. Botelho and M. Garzon, cochairs
botelhof at hermes.msci.memst.edu
garzonm at hermes.msci.memst.edu
Mathematical Sciences
Institute for Intelligent Systems
Memphis State University
Memphis, TN 38152 U.S.A.

Max Garzon
Math Sciences
Memphis State University
Memphis, TN 38152 USA
garzonm at hermes.msci.memst.edu (preferred)
garzonm at memstvx1.memst.edu
Phone: (901) 678-3138/-2482
Fax: (901) 678-3299

From Pierre.Bessiere at imag.fr Tue Sep 28 11:33:55 1993
From: Pierre.Bessiere at imag.fr (pierre bessiere)
Date: Tue, 28 Sep 1993 16:33:55 +0100
Subject: TR: The "Ariadne's Clew" algorithm
Message-ID: <9309281533.AA02278@meteore.imag.fr>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/bessiere.iros93.ps.Z

The following paper is available through FTP from either:
- archive.cis.ohio-state.edu, or
- imag.fr

********************************************************************
TITLE      : THE "ARIADNE'S CLEW" ALGORITHM
             Global planning with local methods
AUTHORS    : Pierre Bessiere, Juan-Manuel Ahuactzin, El-Ghazali Talbi & Emmanuel Mazer
REFERENCE  : IEEE-IROS'93 conference, Yokohama, Japan, 1993
LANGUAGE   : English
LENGTH     : 8 pages
DATE       : 28/09/93
KEYWORDS   : Robotics, Genetic Algorithms, Path planning
FILE NAME  : bessiere.iros93.e.ps.Z
Author E-mail : Pierre.Bessiere at imag.fr
Related Files :

ABSTRACT:
The goal of the work described in this paper is to build a path planner able to drive a robot in a dynamic environment where the obstacles are moving. In order to do so, we propose a method, called the "Ariadne's clew algorithm", to build a global path planner based on the combination of two local planning algorithms: an Explore algorithm and a Search algorithm. The purpose of the Explore algorithm is to collect information about the environment at an increasingly fine resolution by placing landmarks in the searched space. The goal of the Search algorithm is to opportunistically check whether the target can be easily reached from any given placed landmark. The Ariadne's clew algorithm is shown to be very fast in most cases, allowing planning in dynamic environments. It is also shown to be complete, which means that it is guaranteed to find a path when one exists. Finally, we describe a massively parallel implementation of this algorithm.
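[Editor's note: from the abstract alone, the overall control loop might be sketched as below. This is a guess at the structure, not the authors' code: the landmark representation, the Explore/Search subroutines, and all names are hypothetical stand-ins (the paper's keywords suggest the subplanners are genetic-algorithm-based optimizers).]

# Sketch of the Ariadne's clew control loop as described in the abstract.
# Explore places landmarks at increasingly fine resolution; Search checks
# whether the target is easily reachable from a landmark. All names and
# data structures here are hypothetical, inferred from the abstract only.
from typing import Callable, List, Optional, Tuple

Config = Tuple[float, ...]  # a robot configuration

def ariadnes_clew(start: Config,
                  target: Config,
                  explore: Callable[[List[Config]], Config],
                  search: Callable[[Config, Config], Optional[List[Config]]],
                  max_landmarks: int = 1000) -> Optional[List[Config]]:
    landmarks = [start]
    for _ in range(max_landmarks):
        # Try to reach the target from the most recently placed landmark.
        path = search(landmarks[-1], target)
        if path is not None:
            return path
        # Otherwise place a new landmark far from the existing ones,
        # refining the covering of free space.
        landmarks.append(explore(landmarks))
    return None  # landmark budget exhausted

[On this reading, the loop keeps refining the covering of free space until a landmark gets within easy reach of the target, which is what would underlie the completeness claim.]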
********************************************************************

How to get files from the Neuroprose archive?
______________________________________________

Anonymous ftp on archive.cis.ohio-state.edu (128.146.8.52):

mymachine> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: yourname at youraddress
ftp> cd pub/neuroprose
ftp> binary
ftp> get bessiere.iros93.ps.Z
ftp> quit
mymachine> uncompress bessiere.iros93.ps.Z

How to get files from IMAG?
___________________________

Anonymous ftp on imag.fr (129.88.32.1):

mymachine> ftp imag.fr
Name: anonymous
Password: yourname at youraddress
ftp> cd pub/LIFIA
ftp> binary
ftp> get bessiere.iros93.e.ps.Z
ftp> quit
mymachine> uncompress bessiere.iros93.e.ps.Z

--
Pierre BESSIERE
***************
CNRS - IMAG/LIFIA
46 ave. Felix Viallet
38031 Grenoble Cedex, FRANCE
Phone (work): 33/76.57.46.73
Phone (home): 33/76.88.06.09
Fax: 33/76.57.46.02
E-Mail: Pierre.Bessiere at imag.fr

"Our mind has an irresistible tendency to regard as clearer the idea that serves it most often." -- BERGSON, "La pensee et le mouvant"

From terry at helmholtz.sdsc.edu Tue Sep 28 20:49:21 1993
From: terry at helmholtz.sdsc.edu (Terry Sejnowski)
Date: Tue, 28 Sep 93 17:49:21 PDT
Subject: Summer Institute
Message-ID: <9309290049.AA05457@helmholtz.sdsc.edu>

SUMMER INSTITUTE IN COGNITIVE NEUROSCIENCE
at The University of California, Davis

The 1994 Summer Institute will be held at the University of California, Davis, from July 10 through 23. The two-week course will examine how information about the brain bears on issues in cognitive science, and how approaches in cognitive science apply to neuroscience research. A distinguished international faculty will lecture on current topics in brain plasticity, strategies of neural encoding, and evolution. Laboratories and demonstrations will provide practical experience with cognitive neuropsychology experiments, connectionist/computational modeling, and neuroimaging techniques. At every stage, the relationship between cognitive processes and underlying neural circuits will be explored. The Foundation is providing room/board and limited support for travel.

Faculty include: Richard Andersen, Max Bear, Ira Black, Kenneth H. Britten, Simon Baron-Cohen, Leda Cosmides, Randy Gallistel, Michael S. Gazzaniga, Charles Gilbert, Charles M. Gray, Eric Knudsen, Peter Marler, Michael Merzenich, Read Montague, Steven Pinker, V. Ramachandran, Gregg Recanzone, Barry Richmond, Mitch Sutter, Timothy Tonini, John Tooby, and many others.

For information and applications please write to:

  McDonnell Summer Institute in Cognitive Neuroscience
  Center for Neuroscience, 1544 Newton Court
  University of California, Davis
  Davis, California 95616 USA

APPLICATIONS MUST BE RECEIVED BY JANUARY 15, 1994

From uzimmer at informatik.uni-kl.de Wed Sep 29 11:13:54 1993
From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP)
Date: Wed, 29 Sep 93 16:13:54 +0100
Subject: A Neural Fuzzy Decision System (Report, Diploma thesis & Software)
Message-ID: <930929.161354.601@informatik.uni-kl.de>

Due to a couple of requests, we have made the complete diploma thesis of Joerg Bruske ("Neural Fuzzy Decision Systems") available for FTP access.
---------------------------------------------------------------------------
--- Neural Fuzzy Decision System (Report, Diploma thesis & Software)
---------------------------------------------------------------------------

--- Associated report:

FTP-Server : ag_vp_file_server.informatik.uni-kl.de
Mode       : binary
Directory  : Neural_Networks/Reports
File name  : Zimmer.NFDS.ps.Z

SPIN-NFDS
Learning and Preset Knowledge for Surface Fusion
- A Neural Fuzzy Decision System -
Joerg Bruske, Ewald von Puttkamer & Uwe R. Zimmer

The problem to be discussed in this paper may be characterized in short by the question: "Do these two surface fragments belong together (i.e., belong to the same surface)?". The presented techniques try to benefit from predefined knowledge as well as from the possibility of refining and adapting this knowledge according to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence, high accuracy), and the model might be used for a wide range of applications. The general framework surrounding the work in this paper is the SPIN project, where the emphasis is on sub-symbolic abstractions based on a 3-d scanned environment.

--- SPIN-NFDS: the diploma thesis by Joerg Bruske:

FTP-Server : ag_vp_file_server.informatik.uni-kl.de
Mode       : binary
Directory  : Neural_Networks/Reports/Bruske.Diploma-thesis
File names : *.ps.Z

The complete diploma thesis gives more precise information about the state of the art in neural fuzzy decision systems, an introduction to fuzzy logic and neural nets, and much more.

--- Source code and technical documentation:

FTP-Server : ag_vp_file_server.informatik.uni-kl.de
Mode       : binary
Directory  : Neural_Networks/Software/Neural_Fuzzy_Decision

This documentation consists of five chapters:
In Chapter 1, the author presents his approach to implementing fuzzy decision systems (FDS) by means of neural nets, leading to his NFDS. In order to train (optimize) the NFDS, a slightly modified version of the backpropagation algorithm is introduced.
In Chapter 2, the FuzNet project and its modules are described in detail. FuzNet implements the NFDS described in Chapter 1 on Apple Macintosh computers and has been developed as an easily integrable software component for larger software projects.
In Chapter 3, we are concerned with the details of integrating FuzNet into other software projects, taking SurfaceExtractor as an example. However, the reader need not know the SurfaceExtractor project (which currently is not supplied via ftp) in order to understand the details of integrating FuzNet into their own projects.
In Chapter 4, the FuzTest application is described. FuzTest is a very primitive application intended to familiarize the user with FuzNet.
In Chapter 5, the reader will find the syntax diagram for fuzzy data and rule bases as accepted by FuzNet. The file "brakingFDS" contains such a fuzzy data and rule base.
A list of references on neural nets, fuzzy logic, and neural fuzzy decision systems is appended to this documentation. In particular, [Bruske93] is recommended for a detailed discussion of neural fuzzy decision systems and [BruPuttZi93] as a short introduction to NFDS and one of its applications in the Research Group v. Puttkamer.

-----------------------------------------------------
----- Uwe R. Zimmer
University of Kaiserslautern - Computer Science Department
Research Group Prof. v. Puttkamer
67663 Kaiserslautern - Germany
P.O.Box: 3049 | Phone: +49 631 205 2624 | Fax: +49 631 205 2803
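[Editor's note: for readers unfamiliar with the neuro-fuzzy idea the report builds on, the sketch below shows the usual trick in miniature: make the fuzzy memberships differentiable (here a Gaussian) so that rule parameters can be tuned by gradient descent, backpropagation-style. This is a generic illustration of the technique, not the NFDS or FuzNet code; all names and constants are invented.]

# Generic neuro-fuzzy sketch (NOT the NFDS/FuzNet code): one fuzzy rule
# "IF x is A THEN y = c", with a differentiable Gaussian membership for A,
# tuned by gradient descent on squared error. Width is kept fixed for brevity.
import math

center, width, conclusion = 0.0, 1.0, 0.5   # invented initial rule parameters
lr = 0.1

def membership(x):
    return math.exp(-((x - center) / width) ** 2)

def predict(x):
    return membership(x) * conclusion

# One training example, also invented: input 0.8 should map to 0.9.
x, target = 0.8, 0.9
for _ in range(200):
    mu = membership(x)
    err = predict(x) - target
    # Gradients of 0.5*err^2 w.r.t. the rule parameters (chain rule).
    conclusion -= lr * err * mu
    center -= lr * err * conclusion * mu * 2 * (x - center) / width ** 2

print(f"tuned rule: center={center:.3f}, conclusion={conclusion:.3f}, "
      f"prediction={predict(x):.3f}")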
From hwang at pierce.ee.washington.edu Wed Sep 29 21:28:41 1993
From: hwang at pierce.ee.washington.edu (Jenq-Neng Hwang)
Date: Wed, 29 Sep 93 18:28:41 PDT
Subject: NNSP'94 Call For Papers
Message-ID: <9309300128.AA01571@pierce.ee.washington.edu.>

1994 IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING
September 6-8, 1994
Ermioni, Greece

Sponsored by the IEEE Signal Processing Society
(In cooperation with the IEEE Neural Networks Council)

GENERAL CHAIR
John Vlontzos, INTRACOM S.A., Peania, Attica, Greece (jvlo at intranet.gr)

PROGRAM CHAIR
Jenq-Neng Hwang, University of Washington, Seattle, Washington, USA (hwang at ee.washington.edu)

PROCEEDINGS CHAIR
Elizabeth J. Wilson, Raytheon Co., Marlborough, MA, USA (bwilson at sud2.ed.ray.com)

FINANCE CHAIR
Demetris Kalivas, INTRACOM S.A., Peania, Attica, Greece (dkal at intranet.gr)

PROGRAM COMMITTEE
Joshua Alspector, Les Atlas, Charles Bachmann, David Burr, Rama Chellappa, Lee Giles, Steve J. Hanson, Yu-Hen Hu, Jenq-Neng Hwang, Bing-Huang Juang, Shigeru Katagiri, Sun-Yuan Kung, Gary M. Kuhn, Stephanos Kollias, Richard Lippmann, Fleming Lure, John Makhoul, Richard Mammone, Elias Manolakos, Mahesan Niranjan, Tomaso Poggio, Jose Principe, Wojtek Przytula, Ulrich Ramacher, Bhaskar D. Rao, Andreas Stafylopatis, Noboru Sonehara, John Sorensen, Yoh'ichi Tohkura, John Vlontzos, Raymond Watrous, Christian Wellekens, Yiu-Fai Issac Wong

CALL FOR PAPERS

The fourth of a series of IEEE workshops on Neural Networks for Signal Processing will be held at the Porto Hydra Resort Hotel, Ermioni, Greece, in September 1994. Papers are solicited on, but not limited to, the following topics:

APPLICATIONS: Image, speech, communications, sensors, medical, adaptive filtering, OCR, and other general signal processing and pattern recognition topics.
THEORIES: Generalization and regularization, system identification, parameter estimation, new network architectures, new learning algorithms, and wavelets in NNs.
IMPLEMENTATIONS: Software, digital, analog, and hybrid technologies.

Prospective authors are invited to submit 4 copies of an extended summary of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers, and email address if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. Due to workshop facility constraints, attendance will be limited, with priority given to those who submit written technical contributions. For further information, please contact Mrs. Myra Sourlou at the NNSP'94 Athens office, (Tel.) +30 1 6644961, (Fax) +30 1 6644379, (e-mail) msou at intranet.gr.

Please send paper submissions to:
  Prof. Jenq-Neng Hwang
  IEEE NNSP'94
  Department of Electrical Engineering, FT-10
  University of Washington, Seattle, WA 98195, USA
  Phone: (206) 685-1603, Fax: (206) 543-3842

SCHEDULE
Submission of extended summary:   February 15
Notification of acceptance:       April 19
Submission of photo-ready paper:  June 1
Advanced registration, before:    June 1

From mli at math.uwaterloo.ca Wed Sep 29 23:26:18 1993
From: mli at math.uwaterloo.ca (Ming Li)
Date: Wed, 29 Sep 93 23:26:18 -0400
Subject: Call for Papers: COLT'94
Message-ID: <9309300326.AA21739@math.uwaterloo.ca>

CALL FOR PAPERS --- COLT 94
Seventh ACM Conference on Computational Learning Theory
New Brunswick, New Jersey
July 12-15, 1994

The Seventh ACM Conference on Computational Learning Theory (COLT 94) will be held at the New Brunswick campus of Rutgers University from Tuesday, July 12, through Friday, July 15, 1994. The conference will be co-located with the Eleventh International Conference on Machine Learning (ML 94), which will be held from Sunday, July 10, through Wednesday, July 13, so the two conferences overlap on Tuesday and Wednesday. The COLT 94 conference is sponsored jointly by the ACM Special Interest Groups for Algorithms and Computation Theory (SIGACT) and Artificial Intelligence (SIGART).

We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning, including artificial and biological neural networks, robotics, pattern recognition, inductive inference, information theory, decision theory, Bayesian/MDL estimation, statistical physics, and cryptography. We look forward to a lively, interdisciplinary meeting. In particular, we expect some fruitful interaction between the research communities of the two overlapping conferences. There will be a number of joint invited talks. Prof. Michael Jordan from MIT will be one of the invited speakers; the others will be announced at a later date.

Abstract Submission: Authors should submit twelve copies (preferably two-sided copies) of an extended abstract, to be received by Thursday, February 3, 1994, to:

  Manfred Warmuth - COLT 94
  225 Applied Sciences
  Department of Computer Science
  University of California
  Santa Cruz, California 95064

An abstract must be received by February 3, 1994 (or postmarked January 23 and sent airmail, or sent by overnight delivery on February 2). This deadline is FIRM! Papers that have appeared in journals or other conferences, or that are being submitted to other conferences, are not appropriate for submission to COLT.

Abstract Format: The abstract should consist of a cover page with title, authors' names, postal and e-mail addresses, and a 200-word summary. The body of the abstract should be no longer than 10 pages, with roughly 35 lines/page in 12-point font. Papers deviating significantly from this length constraint will not be considered. The body should include a clear definition of the theoretical model used, an overview of the results, and some discussion of their significance, including comparison to other work. Proofs or proof sketches should be included in the technical section. Experimental results are welcome, but are expected to be supported by theoretical analysis.

Notification: Authors will be notified of acceptance or rejection by a letter mailed on or before Monday, April 4, with possible earlier notification via e-mail. Final camera-ready papers will be due on Tuesday, May 3.
Program Format: Depending on submissions, and in order to accommodate a broad variety of papers, the final program may consist of both "long" talks and "short" talks, corresponding to longer and shorter papers in the proceedings. The short talks will also be coupled with poster presentations in special poster sessions. By default, all papers will be considered for both categories. Authors who do *not* want their papers considered for the short category should indicate that fact in the cover letter. The cover letter should also specify the contact author and give his/her e-mail address.

Program Chair: Manfred Warmuth (UC Santa Cruz; e-mail to colt94 at cse.ucsc.edu).

Conference and Local Arrangements Co-Chairs: Robert Schapire and Michael Kearns (AT&T Bell Laboratories; e-mail to colt94 at research.att.com).

Program Committee: Shun'ichi Amari (U. Tokyo), Avrim Blum (Carnegie Mellon), Nader Bshouty (U. Calgary), Bill Gasarch (U. Maryland), Tom Hancock (Siemens), Michael Kearns (AT&T), Sara Solla (Holmdel), Prasad Tadepalli (Oregon St. U.), Jeffrey Vitter (Duke U.), Thomas Zeugmann (TU Darmstadt).