From Connectionists-Request at cs.cmu.edu Fri Sep 1 00:05:22 1995 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Fri, 01 Sep 95 00:05:22 -0400 Subject: Bi-monthly Reminder Message-ID: <25122.809928322@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Lisa Saksida --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help.
A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yy is the two-digit year and mm the two-digit month. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Log in as user anonymous, with your username as the password. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following URL: http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling by having the interested reader supply the hardcopy. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z suffix.
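For those who prefer to script this step, here is a minimal sketch using Python's standard ftplib module. It is purely illustrative, not part of the archive's official instructions; the file name is the same hypothetical one used in the appendix below.

import ftplib

# Anonymous login to the Neuroprose archive; by convention the
# password is your e-mail address.
ftp = ftplib.FTP("archive.cis.ohio-state.edu")
ftp.login("anonymous", "your.name@your.site")

# Deposit the compressed PostScript in the Inbox subdirectory.
# storbinary always transfers in binary mode, which .Z files require.
ftp.cwd("pub/neuroprose/Inbox")
with open("myname.title.ps.Z", "rb") as f:
    ftp.storbinary("STOR myname.title.ps.Z", f)
ftp.quit()

Retrieval is symmetric: cwd to pub/neuroprose and replace the STOR call with ftp.retrbinary("RETR myname.title.ps.Z", outfile.write).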
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z suffix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one-sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to catch errors caused by dependence on your local PostScript libraries. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "Paper available: Solving Towers of Hanoi with ART-4". Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL and other mailing systems will also be posted as they are debugged and become available. At any time, for any reason, the author may request that their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu.
^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with your username as the password. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu". From ericr at mech.gla.ac.uk Fri Sep 1 04:49:48 1995 From: ericr at mech.gla.ac.uk (Eric Ronco) Date: Fri, 1 Sep 1995 09:49:48 +0100 Subject: A modular neural network tech rep Message-ID: Dear all, We recently completed a technical report on modular neural networks. It is available on the web page of our Centre for Systems and Control at the University of Glasgow (see the abstract below): http://www.mech.gla.ac.uk/Control/reports.html Ronco, E. and Peter Gawthrop, 1995. Modular Neural Networks: a state of the art. Tech. rep. CSC-95026, Centre for Systems and Control, Faculty of Engineering, Glasgow University. Any comments about this work would be welcome. Regards, Eric Ronco Abstract: Title: Modular Neural Networks: a state of the art Author: Eric Ronco and Peter Gawthrop Keywords: Neural networks; Modularity; Global computation; Local computation; Clustering; Function approximation The use of ``global neural networks'' (such as the back-propagation neural network) and ``clustering neural networks'' (such as the radial basis function neural network) each offers different advantages and drawbacks. The combination of the desirable features of those two neural ways of computation is achieved by the use of Modular Neural Networks (MNN). In addition, a considerable advantage can emerge from the use of such an MNN: an interpretable and relevant neural representation of the plant's behaviour. This feature, highly desirable for function approximation and especially for control problems, is what other neural models lack. This feature is so important that we introduce it as a way to differentiate MNN from other local computation models. However, to enable a systematic use of MNN, three steps have to be achieved. First of all, the task has to be decomposed into subtasks, then the neural modules have to be properly organised according to the subtasks, and finally a means of inter-module communication has to be integrated into the whole architecture. We study the main modular applications in terms of those steps. This study leads to the main conclusion that a systematic use of MNN depends on the type of task considered.
The clustering networks, and especially the Local Model Networks, can be seen as MNN in the framework of classification or recognition problems. The Euclidean distance criterion that they apply to cluster the input space leads to a relevant decomposition according to the properties of those tasks. But such a criterion is inappropriate for function approximation problems. As spatial clustering seems to be the only existing decomposition method, an ``ad hoc'' decomposition and organisation of the architecture is used in the case of function approximation. So, to improve the systematic use of MNN for function approximation, it is now essential to devise a method of relevant task decomposition. From zhuh at helios.aston.ac.uk Fri Sep 1 08:47:23 1995 From: zhuh at helios.aston.ac.uk (H ZHU) Date: Fri, 1 Sep 1995 12:47:23 +0000 Subject: 3 TechReports on Measuring Generalisation ... Message-ID: <26706.9509011147@sun.aston.ac.uk> Is there any well-defined meaning to statements like "Learning rule A is better than learning rule B"? The answer is yes, as long as three things are specified: the prior, which is the distribution of problems to be solved; the information divergence, which tells how different the estimated distribution is from the true distribution; and the model, which is the space of all the representable solutions. The following three Technical Reports develop the necessary theory to evaluate and compare any neural network learning rules and other statistical estimators. ftp://cs.aston.ac.uk/neural/zhuh/discrete.ps.Z ftp://cs.aston.ac.uk/neural/zhuh/continuous.ps.Z ftp://cs.aston.ac.uk/neural/zhuh/generalisation.ps.Z Bayesian Invariant Measurements of Generalisation for Discrete Distributions Bayesian Invariant Measurements of Generalisation for Continuous Distributions Information Geometric Measurements of Generalisation by Huaiyu Zhu and Richard Rohwer ABSTRACT Neural networks can be considered as statistical models, and learning rules as statistical estimators. They should be compared in the framework of Bayesian decision theory, with information divergence as the loss function. This ensures coherence (an estimator is optimal if and only if it gives optimal estimates for almost all the data) and invariance (the optimality condition does not depend on one-one transforms in the input, output and parameter spaces). The main result is that the ideal optimal estimator is given as an appropriate average over the posterior. The optimal estimator restricted to any particular model is given by an appropriate projection of the ideal optimal estimator onto the model. The ideal optimal estimator is a sufficient statistic so that all the practical learning rules are its functions. They are also its approximations if preserving information in the data is the sole utility. This new theory of statistical inference retains many of the desirable properties of the least mean squares theory for linear Gaussian models, yet is applicable to any statistical estimation problem, including all the neural network learning rules (deterministic and stochastic, supervised, reinforcement and unsupervised). Comments are welcome and very much appreciated! -- Dr. Huaiyu Zhu zhuh at aston.ac.uk Neural Computing Research Group Dept of Computer Sciences and Applied Mathematics Aston University, Birmingham B4 7ET, UK From arbib at pollux.usc.edu Fri Sep 1 12:08:09 1995 From: arbib at pollux.usc.edu (Michael A.
Arbib) Date: Fri, 1 Sep 1995 09:08:09 -0700 Subject: Website for Brain Theory and NN Handbook Message-ID: <199509011608.JAA18825@pollux.usc.edu> The Handbook of Brain Theory and Neural Networks now has a Home Page on the Web. The URL is: http://www-mitpress.mit.edu/mitp/recent-books/comp/handbook-brain-theo.html This includes the preface, the complete table of contents, instructions on "How to Use this Book", and a Contributor list providing address, email, and article titles for all 341 authors. The ISBN is 0-262-01148-4 ARBHH The price is $150.00US till September 30, 1995, and $175 thereafter. Orders can be sent to the MIT Press at mitpress-orders at mit.edu Further MIT Press material is available on line at http://www-mitpress.mit.edu Best wishes Michael Arbib From stiber at cs.ust.hk Sat Sep 2 04:05:42 1995 From: stiber at cs.ust.hk (Dr. Michael Stiber) Date: Sat, 2 Sep 1995 16:05:42 +0800 Subject: Introducing the NeuroGeek WWW page Message-ID: <199509020805.QAA12069@cssu28.cs.ust.hk> Yes, yet another WWW page for you to place on the list of pages to check out some day when you have the time. The NeuroGeek page is for everyone interested in how computers can be applied to solving problems (or creating new ones) in the field of Computational Neuroscience. This includes differential equation integration methods, data analysis tools, applications of parallel computers, methods for simplifying models, repositories for code and information, etc. If I had to choose a unifying theme, it would be a focus on computational METHODS, as opposed to particular preparations, models, etc. We hope that this page will grow into a large virtual index showing HOW people are doing what they do. So please feel free to send pointers to information it may be lacking. The URL is: http://www.cs.ust.hk/faculty/stiber/neurogeek.html -- Dr. Michael Stiber stiber at cs.ust.hk Department of Computer Science tel: +852 2358 6981 The Hong Kong University of Science & Technology fax: +852 2358 1477 Clear Water Bay, Kowloon, Hong Kong finger me at: cssu28.cs.ust.hk or cszm06.cs.ust.hk http://www.cs.ust.hk/faculty/stiber/bio.html From hinton at cs.toronto.edu Sun Sep 3 15:55:29 1995 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Sun, 3 Sep 1995 15:55:29 -0400 Subject: postdoc job Message-ID: <95Sep3.155531edt.150@neuron.ai.toronto.edu> POSTDOC JOB USING HIDDEN MARKOV MODELS OR NEURAL NETS TO RECOGNIZE ANIMAL SOUNDS The Natural Sciences Unit of Ontario Hydro Technologies in Toronto has a postdoc position available on a project that aims to monitor biodiversity by recognizing wildlife vocalizations (such as birds and frogs). We already have a large database and are particularly interested in candidates who have experience in speech recognition using Hidden Markov Models, neural networks, or both. The postdoc will spend one afternoon each week at the University of Toronto interacting with Geoff Hinton's group. We need to fill the position as soon as possible. If necessary we will consider suitably qualified candidates who can only come for 6 months, though we would prefer someone to come for at least a year. The annual salary will be in the range 40,000 to 60,000 Canadian dollars. Please contact Dr. Paul Patrick of the Natural Sciences Unit for more details (phone 416-207-6277, fax 416-207-6094) or E-mail Dr. Patrick or Dr. Hinton (PatrickP at rd.hydro.on.ca, hinton at ai.toronto.edu).
From ingber at alumni.caltech.edu Sun Sep 3 15:34:01 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Sun, 3 Sep 1995 12:34:01 -0700 Subject: ASA Optimization of EEG Analyses Message-ID: <199509031934.MAA26792@alumni.caltech.edu> ASA Optimization of EEG Analyses The paper smni95_lecture.ps.Z %A L. Ingber %T Statistical mechanics of neocortical interactions (SMNI) %R SMNI Lecture Plates %I Lester Ingber Research %C McLean, VA %D 1995 %P (unpublished) can be retrieved from my archive using instructions below. This includes some plates outlining a project, performing recursive ASA optimization of "canonical momenta" indicators of subject's/patient's EEG nested in parameterized customized clinician's rules, along the lines of the approach formulated for markets in markets95_trading.ps.Z in this archive. This was first presented publicly in a lecture on 18 Aug 95 at the University of Oregon. Ramesh Srinivasan at the U of O and Electrical Geodesics, Inc., is qualifying EEG data he has collected. I would like to receive more information on (quasi-)automated rules some people may now use to correlate EEG with behavioral or physiological states. Lester ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu (or 131.215.50.234) [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://www.alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. ======================================================================== /* RESEARCH ingber at alumni.caltech.edu * * INGBER ftp.alumni.caltech.edu:/pub/ingber * * LESTER http://www.alumni.caltech.edu/~ingber/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From klaus at prosun.first.gmd.de Mon Sep 4 05:08:16 1995 From: klaus at prosun.first.gmd.de (klaus@prosun.first.gmd.de) Date: Mon, 04 Sep 95 11:08:16 +0200 Subject: new paper on Asymptotic Statistical Theory of Overtraining and Cross-Validation Message-ID: <9509040908.AA01468@chablis.first.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/amari.overtraining.ps.Z The following paper is now available for copying from the Neuroprose repository: amari.overtraining.ps.Z amari.overtraining.ps.Z klaus at first.gmd.de (128151 bytes) 32 pages. S. Amari, N. Murata, K.-R. Müller, M. Finke, H. Yang: "Asymptotic Statistical Theory of Overtraining and Cross-Validation" A statistical theory for overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping, we answer the question: in what ratio should the examples be divided into training and testing sets in order to obtain the optimum performance?
However, cross-validated early stopping is useless in the asymptotic region; it decreases the generalization error only in the non-asymptotic region. Our large-scale simulations, done on a CM5, are in good agreement with our analytical findings. (University of Tokyo Technical Report METR 06-95 and submitted to IEEE Transactions on NN) Best regards, Klaus &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Dr. Klaus-Robert Müller GMD First (Forschungszentrum Informationstechnik) Rudower Chaussee 5, 12489 Berlin Germany mail: klaus at first.gmd.de Tel: +49 30 6392 1860 Fax: +49 30 6392 1805 web-page: http://www.first.gmd.de/persons/Mueller.Klaus-Robert.html &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& From meeden at cs.swarthmore.edu Mon Sep 4 12:19:40 1995 From: meeden at cs.swarthmore.edu (Lisa Meeden) Date: Mon, 4 Sep 1995 12:19:40 -0400 Subject: CFP: Workshop on Learning in Autonomous Robots Message-ID: <199509041619.MAA05576@ginger.cs.swarthmore.edu> ROBOLEARN-96 An International Workshop on Learning for Autonomous Robots Key West, Florida, USA May 19-20, 1996 Preliminary Call For Papers This workshop will be held in conjunction with FLAIRS-96. See http://www.cis.ufl.edu/~ddd/FLAIRS/FLAIRS-96/ Designing robots that can accomplish tasks in the real world is a difficult problem due to the complexity and unpredictability of the environment. Thus, really useful robots and autonomous creatures must learn new know-how and improve on old know-how to be successful. This know-how may involve developing maps, action policies, or more basic reactive responses to incoming perceptual data. Autonomous agents may adapt through associative mechanisms such as neural networks, inductive techniques such as reinforcement learning, evolutionary processes such as genetic algorithms, or analytic techniques such as explanation-based learning. Machine learning is a large research area within which we wish to focus on learning techniques viable for robots and autonomous agents that must operate in complex environments. These learning techniques can be used to improve lower level motor and perceptual skills (such as vision) or higher level reasoning skills. We prefer results with implemented physical agents. More than what is learned, we are interested in discussion of well-established as well as novel learning techniques, and in learning issues that remain as open problems. SUBMISSION Authors must submit, by ftp, a compressed postscript version of their paper which should be at most 10 double-spaced pages. The first page should include the title but should not identify the author in any manner. A separate cover page should also be submitted containing the author's name, physical address, email address, phone number, affiliation and paper title. In cases of multiple authors, all correspondence will be sent to the first author unless otherwise requested. ftp ftp.cs.buffalo.edu and put your submission in users/hexmoor/robolearn96 Papers must be received by December 15, 1995. Authors of accepted papers will be notified by February 15, 1996. The final camera-ready copy of the papers will be expected by April 15, 1996. Final papers must consist of at most 5 galley pages. For those who cannot submit papers electronically, and for all other communications, write to the following address: ROBOLEARN-96 Henry Hexmoor Dept.
of Computer Science SUNY at Buffalo Buffalo, NY 14260, USA Email: hexmoor at cs.buffalo.edu PUBLICATION All accepted papers will be published in the workshop proceedings. In addition, a selected subset of the papers will be invited for inclusion (subject to refereeing) in a book or in a special issue of a journal. ORGANIZATION Chairs: Henry Hexmoor, SUNY Buffalo Lisa Meeden, Swarthmore College PROGRAM COMMITTEE Minoru Asada, Osaka University George Bekey, USC Doug Blank, Indiana University Long-Ji Lin, Siemens Gary McGraw, Indiana University Sridhar Mahadevan, University of South Florida Ulrich Nehmzow, University of Manchester Ashwin Ram, Georgia Tech Justinian Rosca, University of Rochester Sebastian Thrun, U of Bonn (and CMU) Toby Tyrrell, Plymouth Marine Laboratory Brian Yamauchi, The Institute for the Study of Learning and Expertise and Stanford CSLI See the WWW page for the latest details: http://www.cs.buffalo.edu/~hexmoor/robolearn-96 From rolf at cs.rug.nl Tue Sep 5 04:44:04 1995 From: rolf at cs.rug.nl (rolf@cs.rug.nl) Date: Tue, 5 Sep 1995 10:44:04 +0200 Subject: 2 articles online Message-ID: Dear connectionists, the following articles are available online: --------------------------------------------------------------------- Rolf P. Würtz. Building visual correspondence maps - from neural dynamics to a face recognition system. To appear in: Jose Mira-Mira, editor, Proceedings of the International Conference on Brain Processes, Theories and Models. MIT Press, November 1995 Abstract URL: http://www.cs.rug.nl/users/rolf/mccul95.html Text URL: http://www.cs.rug.nl/users/rolf/mccul95.ps.gz (0.2 MB, 10 pages) ABSTRACT: On the basis of a pyramidal Gabor function representation of images, two systems are presented that build correspondence maps between presegmented memorized models and retinal images. The first one is formulated close to the biology of neuronal layers and dynamic links. It extends earlier ones by a hierarchical approach and background independence. The second system is formulated in a way that is efficiently implementable on digital computers but captures the crucial properties of the first one. It has the capability for object recognition under realistic circumstances, which is demonstrated by recognizing human faces independently of their hairstyle. KEYWORDS: Neural network, dynamic link architecture, correspondence problem, object recognition, face recognition, coarse-to-fine strategy, wavelet transform, image representation --------------------------------------------------------------------- Rolf P. Würtz. Background invariant face recognition. To appear in: 3rd SNN Neural Networks Symposium, Nijmegen, The Netherlands, 14-15 September 1995. Springer Verlag, 1995 Abstract URL: http://www.cs.rug.nl/users/rolf/snn95.html Text URL: http://www.cs.rug.nl/users/rolf/snn95.ps.gz (0.1MB, 4 pages) ABSTRACT: As a contribution to handling the symbol grounding problem in AI, an object recognition system is presented that is exemplified with human faces. It differs from earlier systems by a pyramidal representation and the ability to cope with structured background. KEYWORDS: Correspondence problem, object recognition, face recognition, coarse-to-fine strategy, wavelet transform, image representation +---------------------------------------------------------------------------+ | Rolf P.
W"urtz | mailto:rolf at cs.rug.nl | URL: http://www.cs.rug.nl/~rolf/ | | Department of Computing Science, University of Groningen, The Netherlands | +---------------------------------------------------------------------------+ From ali at nubian.ICS.UCI.EDU Tue Sep 5 17:59:04 1995 From: ali at nubian.ICS.UCI.EDU (Kamal M Ali) Date: Tue, 05 Sep 1995 14:59:04 -0700 Subject: Combining classifiers - a study of error reduction Message-ID: <9509051459.aa15424@paris.ics.uci.edu> FTP-host: ftp.ics.uci.edu FTP-file: pub/machine-learning-papers/others/Ali-TR95-MultDecTrees.ps.Z pub/machine-learning-papers/others/Ali-TR95-MultRuleSets.ps.Z Available by anonymous ftp. We examine how the error reduction ability (error rate of ensemble divided by error rate of the single model learned on the same data) of an ensemble is affected by the degree to which models in the ensemble make correlated errors. Although the linear relationship which is discovered between error reduction ability and error correlatedness is shown to hold for rule-sets and decision-trees, our on-going research shows it also holds for neural networks. ================================================================ First paper: On the Link between Error Correlation and Error Reduction in Decision Tree Ensembles Abstract Recent work has shown that learning an ensemble consisting of multiple models and then making classifications by combining the classifications of the models often leads to more accurate classifications then those based on a single model learned from the same data. However, the amount of error reduction achieved varies from data set to data set. This paper provides empirical evidence that there is a linear relationship between the degree of error reduction and the degree to which patterns of errors made by individual models are uncorrelated. Ensemble error rate is most reduced in ensembles whose constituents make individual errors in a less correlated manner. The second result of the work is that some of the greatest error reductions occur on domains for which many ties in information gain occur during learning. The third result is that ensembles consisting of models that make errors in a dependent but ``negatively correlated'' manner will have lower ensemble error rates than ensembles whose constituents make errors in an uncorrelated manner. Previous work has aimed at learning models that make errors in a uncorrelated manner rather than those that make errors in an ``negatively correlated'' manner. Taken together, these results help provide an understanding of why the multiple models approach yields great error reduction in some domains but little in others. ================================================================ Second paper: Error reduction through learning multiple descriptions Abstract Learning multiple descriptions for each class in the data has been shown to reduce generalization error but the amount of error reduction varies greatly from domain to domain. This paper presents a novel empirical analysis that helps to understand this variation. Our hypothesis is that the amount of error reduction is linked to the ``degree to which the descriptions for a class make errors in a correlated manner.'' We present a precise and novel definition for this notion and use twenty-nine data sets to show that the amount of observed error reduction is negatively correlated with the degree to which the descriptions make errors in an correlated manner. 
We empirically show that it is possible to learn descriptions that make less correlated errors in domains in which many ties in the search evaluation measure (e.g. information gain) are experienced during learning. The paper also presents results that help to understand when and why multiple descriptions are a help (irrelevant attributes) and when they are not as much help (large amounts of class noise). From lautrup at hpthbe1.cern.ch Tue Sep 5 11:56:30 1995 From: lautrup at hpthbe1.cern.ch (lautrup@hpthbe1.cern.ch) Date: Tue, 5 Sep 95 11:56:30 METDST Subject: no subject (file transmission) Message-ID: FTP-host: connect.nbi.dk FTP-file: neuroprose/winther.optimal.ps.Z WWW-host: http://connect.nbi.dk ---------------------------------------------- The following paper is now available: Optimal Learning in Multilayer Neural Networks [26 pages] O. Winther, B. Lautrup, and J-B. Zhang CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark Abstract: The generalization performance of two learning algorithms, Bayes algorithm and the ``optimal learning'' algorithm, on two classification tasks is studied theoretically. In the first example the task is defined by a restricted two-layer network, a committee machine, and in the second the task is defined by the so-called prototype problem. The architecture of the learning machine is in both cases defined to be a committee machine. For both tasks the optimal learning algorithm, which is optimal when the solution is restricted to a specific architecture, performs worse than the overall optimal Bayes algorithm. However, both algorithms perform far better than the conventional stochastic Gibbs algorithm, showing that using prior knowledge about the rule helps to avoid overfitting. Please do not reply directly to this message. ----------------------------------------------- FTP-instructions: unix> ftp connect.nbi.dk (or 130.225.212.30) ftp> Name: anonymous ftp> Password: your e-mail address ftp> cd neuroprose ftp> binary ftp> get winther.optimal.ps.Z ftp> quit unix> uncompress winther.optimal.ps.Z ----------------------------------------------- Benny Lautrup, Computational Neural Network Center (CONNECT) Niels Bohr Institute Blegdamsvej 17 2100 Copenhagen Denmark Telephone: +45-3532-5200 Direct: +45-3532-5358 Fax: +45-3142-1016 e-mail: lautrup at connect.nbi.dk From listerrj at helios.aston.ac.uk Thu Sep 7 11:04:35 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Thu, 07 Sep 1995 16:04:35 +0100 Subject: Lectureship in Neural Computing Message-ID: <15623.9509071504@sun.aston.ac.uk> ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK LECTURESHIP ----------- * Full details at http://neural-server.aston.ac.uk/ * Applications are invited for a Lectureship within the Department of Computer Science and Applied Mathematics. (This post is roughly comparable to Assistant Professor positions in North America). Candidates are expected to have excellent academic qualifications and a proven record of research. The appointment will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. The successful candidate will be expected to make a substantial contribution to the research activities of the Neural Computing Research Group.
Current research focusses on principled approaches to neural computing, and ranges from theoretical foundations to industrial and commercial applications. We would be interested in candidates who can contribute directly to this research programme or who can broaden it into related areas, while maintaining the emphasis on theoretically well-founded research. The successful candidate will also be expected to contribute to the undergraduate and/or postgraduate teaching programmes. Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff: Chris Bishop Professor David Lowe Professor David Bounds Professor Richard Rohwer Lecturer Alan Harget Lecturer Ian Nabney Lecturer David Saad Lecturer Chris Williams Lecturer together with the following Postdoctoral Research Fellows Alan McLachlan Huaiyu Zhu a full-time system administrator, and eleven postgraduate research students. Five further postdoctoral positions are currently being filled. Conditions of Service --------------------- The appointment will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Initial salary will be within the lecturer A and B range 14,756 to 25,735, and exceptionally up to 28,756 (UK pounds; these salary scales are currently under review). How to Apply ------------ If you wish to be considered for this position, please send a full CV and publications list, together with the names of 4 referees, to: Hanni Sondermann Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: (+44 or 01) 21 333 4631 Fax: (+44 or 01) 21 333 4586 e-mail: h.e.sondermann at aston.ac.uk Closing date: 30 September 1995. ---------------------------------------------------------------------- From baluja at GS93.SP.CS.CMU.EDU Thu Sep 7 17:14:50 1995 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Thu, 7 Sep 95 17:14:50 EDT Subject: Comparison of Optimization Techniques based on Hebbian Learning & Genetic Algorithms Message-ID: Title: An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics By: Shumeet Baluja Abstract: This report is a repository for the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, binpacking, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility. This work may be of interest to the Artificial Neural Network community as two of the algorithms compared are based upon simple Hebbian Learning and supervised competitive learning algorithms.
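To give a flavour of what such a heuristic can look like, here is an illustrative Python sketch of a generic probability-vector optimizer. It is not code from the report: the toy objective, the update rule and all parameter values are invented for the example.

import random

def onemax(bits):
    # Toy objective: the number of 1-bits. A stand-in for a real cost
    # function such as a bin-packing or scheduling score.
    return sum(bits)

def prob_vector_search(n_bits=32, pop_size=20, lr=0.1, n_iters=200, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits  # probability of sampling a 1 at each bit position
    best_bits, best_score = None, -1
    for _ in range(n_iters):
        # Sample a small population of bit strings from the current vector.
        population = [[1 if rng.random() < pi else 0 for pi in p]
                      for _ in range(pop_size)]
        leader = max(population, key=onemax)
        if onemax(leader) > best_score:
            best_bits, best_score = leader, onemax(leader)
        # Hebbian-style update: move each probability toward the best sample.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, leader)]
    return best_bits, best_score

print(prob_vector_search())

Each iteration reinforces the components of the most successful recent sample, which is the loose sense in which such heuristics are "based upon simple Hebbian Learning" as described above.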
instructions (CMU-CS-95-193) --------------------------------------- anonymous ftp: ftp reports.adm.cs.cmu.edu binary cd 1995 get CMU-CS-95-193.ps from www: from my home page: http://www.cs.cmu.edu/~baluja more directly: http://www.cs.cmu.edu/afs/cs/user/baluja/www/techreps.html if you do not have www or ftp access, send me email, and I will send a copy (.ps) through email. From opper at cse.ucsc.edu Thu Sep 7 19:17:17 1995 From: opper at cse.ucsc.edu (Manfred Opper) Date: Thu, 7 Sep 1995 16:17:17 -0700 (PDT) Subject: TR announcement Message-ID: <199509072317.QAA08472@arapaho.cse.ucsc.edu> The following papers are now available via anonymous ftp: (See below for the retrieval procedure) ------------------------------------------------------------------ "Bounds for Predictive Errors in the Statistical Mechanics of Supervised Learning" (Submitted to Physical Review Letters) M. Opper and D. Haussler Ref. WUE-ITP-95-019 Within a Bayesian framework, by generalizing inequalities known from statistical mechanics, we calculate general upper and lower bounds for a cumulative entropic error, which measures the success in the supervised learning of an unknown rule from examples. Both bounds match asymptotically when the number m of observed data grows large. We find that the information gain from observing a new example decreases universally like d/m. Here d is a dimension that is defined from the scaling of small volumes with respect to a distance in the space of rules. (10 pages) AND "General Bounds on the Mutual Information Between a Parameter and n Conditionally Independent Observations" D. Haussler and M. Opper: (Proceedings of the 8th Ann. Conf. on Computational Learning Theory: COLT 95) Ref. WUE-ITP-95-020 Each parameter theta in an abstract parameter space Theta is associated with a different probability distribution on a set Y. A parameter theta is chosen at random from Theta according to some a priori distribution on Theta, and n conditionally independent random variables Y^n = Y_1,..., Y_n are observed with common distribution determined by theta. We obtain bounds on the mutual information between the random variable theta, giving the choice of parameter, and the random variable Y^n, giving the sequence of observations. We also bound the supremum of the mutual information, over choices of the prior distribution on Theta. These quantities have applications in density estimation, computational learning theory, universal coding, hypothesis testing, and portfolio selection theory. The bounds are given in terms of the metric and information dimensions of the parameter space Theta with respect to the Hellinger distance. (11 pages) Manfred Opper present address: The Baskin Center for Computer Engineering & Information Sciences, University of California Santa Cruz CA 95064 email: opper at cse.ucsc.edu ______________________________________________________________________ Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint ftp> get WUE-ITP-95-0??.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-95-0??.ps.gz e.g. unix> lp WUE-ITP-95-0??.ps (7 pages of output) (*) can be replaced by "get WUE-ITP-95-0??.ps". The file will then be uncompressed before transmission (slower!). _____________________________________________________________________ From lpratt at franklinite.Mines.EDU Fri Sep 8 12:27:54 1995 From: lpratt at franklinite.Mines.EDU (Lorien Y.
Pratt) Date: Fri, 8 Sep 1995 10:27:54 -0600 (MDT) Subject: Grad student needed: transfer and hazardous waste Message-ID: <9509081627.AA02460@franklinite.Mines.EDU> A non-text attachment (text, 3492 bytes) was scrubbed from the archive. From dyyeung at cs.ust.hk Mon Sep 11 00:12:27 1995 From: dyyeung at cs.ust.hk (Dit-Yan Yeung) Date: Mon, 11 Sep 1995 12:12:27 +0800 (HKT) Subject: Survey Paper on Constructive Neural Networks Message-ID: <199509110412.MAA16731@cssu35.cs.ust.hk> Paper (81596 bytes in gzipped PS): ftp://ftp.cs.ust.hk/pub/techreport/95/tr95-43.ps.gz ******************************************************************************* Constructive Feedforward Neural Networks for Regression Problems: A Survey Tin-Yau Kwok & Dit-Yan Yeung Department of Computer Science Hong Kong University of Science and Technology Clear Water Bay, Kowloon Hong Kong {jamesk,dyyeung}@cs.ust.hk Technical Report HKUST-CS95-43 September 1995 ABSTRACT In this paper, we review the procedures for constructing feedforward neural networks in regression problems. While standard back-propagation performs gradient descent only in the weight space of a network with fixed topology, constructive procedures start with a small network and then grow additional hidden units and weights until a satisfactory solution is found. The constructive procedures are categorized according to the resultant network architecture and the learning algorithm for the network weights. ******************************************************************************* From lxu at cs.cuhk.hk Mon Sep 11 04:46:37 1995 From: lxu at cs.cuhk.hk (Dr. Xu Lei) Date: Mon, 11 Sep 1995 16:46:37 +0800 Subject: ICONIP96 Message-ID: <199509110846.QAA14533@cs.cuhk.hk> FIRST CALL FOR PAPERS 1996 INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING The Annual Conference of the Asian Pacific Neural Network Assembly ICONIP'96, September 24 - 27, 1996 Hong Kong Exhibition and Convention Center, Wan Chai, Hong Kong The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing. The conference further serves to stimulate local and regional interests in neural information processing and its potential applications to industries indigenous to this region.
CONFERENCE TOPICS ================= * Theory * Algorithms & Architectures * Applications * Supervised/Unsupervised Learning * Hardware Implementations * Hybrid Systems * Neurobiological Systems * Associative Memory * Visual & Speech Processing * Intelligent Control & Robotics * Cognitive Science & AI * Recurrent Net & Dynamics * Image Processing * Pattern Recognition * Computer Vision * Time Series Prediction * Financial Engineering * Optimization * Fuzzy Logic * Evolutionary Computing * Other Related Areas CONFERENCE SCHEDULE =================== Submission of papers February 1, 1996 Notification of acceptance May 1, 1996 Early registration deadline July 1, 1996 SUBMISSION INFORMATION ====================== Authors are invited to submit one camera-ready original and five copies of the manuscript written in English on A4-format white paper with one inch margins on all four sides, in one column format, no more than six pages including figures and references, single-spaced, in Times-Roman or similar font of 10 points or larger, and printed on one side of the page only. Electronic or fax submission is not acceptable. Additional pages will be charged at USD $50 per page. Centered at the top of the first page should be the complete title, author(s), affiliation, mailing, and email addresses, followed by an abstract (no more than 150 words) and the text. Each submission should be accompanied by a cover letter indicating the contact author, affiliation, mailing and email addresses, telephone and fax number, and preference of technical session(s) and format of presentation, either oral or poster (both are published). All submitted papers will be refereed by experts in the field based on quality, clarity, originality, and significance. Authors may also retrieve the ICONIP style, "iconip.tex" and "iconip.sty" files for the conference by anonymous FTP at ftp.cs.cuhk.hk in the directory /pub/iconip96. For further information, inquiries, and paper submissions please contact ICONIP'96 Secretariat Department of Computer Science The Chinese University of Hong Kong Shatin, N.T., Hong Kong Fax (852) 2603-5024 E-mail: iconip96 at cs.cuhk.hk http://www.cs.cuhk.hk/iconip96 ====================================================================== General Co-Chairs ================= Omar Wing, CUHK Shun-ichi Amari, Tokyo U. Advisory Committee ================== International ------------- Yaser Abu-Mostafa, Caltech Michael Arbib, U. Southern Cal. Leo Breiman, UC Berkeley Jack Cowan, U. Chicago Rolf Eckmiller, U. Bonn Jerome Friedman, Stanford U. Stephen Grossberg, Boston U. Robert Hecht-Nielsen, HNC Geoffrey Hinton, U. Toronto Anil Jain, Michigan State U. Teuvo Kohonen, Helsinki U. of Tech. Sun-Yuan Kung, Princeton U. Robert Marks, II, U. Washington Thomas Poggio, MIT Harold Szu, US Naval SWC John Taylor, King's College London David Touretzky, CMU C. v. d. Malsburg, Ruhr-U. Bochum David Willshaw, Edinburgh U. Lotfi Zadeh, UC Berkeley Asia-Pacific Region ------------------- Marcelo H. Ang Jr, NUS, Singapore Sung-Yang Bang, POSTECH, Pohang Hsin-Chia Fu, NCTU, Hsinchu Toshio Fukuda, Nagoya U., Nagoya Kunihiko Fukushima, Osaka U., Osaka Zhenya He, Southeastern U., Nanjing Marwan Jabri, U. Sydney, Sydney Nikola Kasabov, U. Otago, Dunedin Yousou Wu, Tsinghua U., Beijing Organizing Committee ==================== L.W. Chan (Co-Chair), CUHK K.S. Leung (Co-Chair), CUHK D.Y. Yeung (Finance), HKUST C.K. Ng (Publication), CityUHK A. Wu (Publication), CityUHK K.P. Lam (Publicity), CUHK M.W. Mak (Local Arr.), HKPU C.S.
Tong (Local Arr.), HKBU T. Lee (Registration), CUHK M. Stiber (Registration), HKUST K.P. Chan (Tutorial), HKU H.T. Tsui (Industry Liaison), CUHK I. King (Secretary), CUHK Program Committee ================= Co-Chairs --------- Lei Xu, CUHK Michael Jordan, MIT Erkki Oja, Helsinki Univ. of Tech. Mitsuo Kawato, ATR Members ------- Yoshua Bengio, U. Montreal Chris Bishop, Aston U. Leon Bottou, Neuristique Gail Carpenter, Boston U. Laiwan Chan, CUHK Huishen Chi, Peking U. Peter Dayan, MIT Kenji Doya, ATR Scott Fahlman, CMU Francoise Fogelman, SLIGOS Lee Giles, NEC Research Inst. Michael Hasselmo, Harvard U. Kurt Hornik, Technical U. Wien Steven Nowlan, Synaptics Jeng-Neng Hwang, U. Washington Nathan Intrator, Tel-Aviv U. Larry Jackel, AT&T Bell Lab Adam Kowalczyk, Telecom Australia Soo-Young Lee, KAIST Todd Leen, Oregon Grad. Inst. Cheng-Yuan Liou, National Taiwan U. David MacKay, Cavendish Lab Eric Mjolsness, UC San Diego John Moody, Oregon Grad. Inst. Nelson Morgan, ICSI Michael Perrone, IBM Watson Lab Ting-Chuen Pong, HKUST Paul Refenes, London Business School Hava Siegelmann, Technion Ah Chung Tsoi, U. Queensland Benjamin Wah, U. Illinois Andreas Weigend, Colorado U. Ronald Williams, Northeastern U. John Wyatt, MIT Alan Yuille, Harvard U. Richard Zemel, CMU From cohn at psyche.mit.edu Mon Sep 11 18:06:41 1995 From: cohn at psyche.mit.edu (David Cohn) Date: Mon, 11 Sep 95 18:06:41 EDT Subject: NIPS*95 Registration Info Available Message-ID: <9509112206.AA20429@psyche.mit.edu> [As always, apologies to those who, by dint of subscribing to overlapping multiple mailing lists, receive multiple copies of this announcement.] CONFERENCE ANNOUNCEMENT Neural Information Processing Systems Natural and Synthetic Monday, Nov. 27 - Saturday, Dec. 2, 1995 Denver, Colorado http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS.html This is the ninth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov. 27) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec. 1-2). Major conference topics include: Neuroscience, Theory, Implementations, Applications, Algorithms & Architectures, Visual Processing, Speech/Handwriting/Signal Processing, Cognitive Science & AI, Control, Navigation and Planning. Detailed information and registration materials are available electronically at http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS.html ftp://psyche.mit.edu/pub/NIPS95/ Students who require financial support to attend the conference are urged to retrieve a copy of the registration brochure as soon as possible in order to meet the aid application deadline. Mail general inquiries/requests for registration material to: NIPS*95 Registration Dept.
of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA FAX: (303) 273-3875 e-mail: nips95 at mines.colorado.edu From kim.plunkett at psy.ox.ac.uk Wed Sep 13 11:05:53 1995 From: kim.plunkett at psy.ox.ac.uk (Kim Plunkett) Date: Wed, 13 Sep 1995 15:05:53 +0000 Subject: Special Issue of LCP Message-ID: <9509131505.AA53353@mac17.psych.ox.ac.uk> Manuscript submissions are invited for inclusion in a Special Issue of the journal "Language and Cognitive Processes" on Connectionist Approaches to Language Development. It is anticipated that most of the papers in the special issue will describe previously unpublished work on some aspect of language development (first or second language learning in either normal or disordered populations) that incorporates a neural network modelling component. However, theoretical papers discussing the general enterprise of connectionist modelling within the domain of language development are also welcome. The deadline for submissions is 1st April 1996. Manuscripts should be sent to the guest editor for this special issue: Kim Plunkett, Department of Experimental Psychology, Oxford University, South Parks Road, Oxford, OX1 3UD, UK (email: plunkett at psy.ox.ac.uk FAX: 1865-310447). All manuscripts will be submitted to the usual Language and Cognitive Processes peer review process. From caruana+ at cs.cmu.edu Fri Sep 15 09:23:26 1995 From: caruana+ at cs.cmu.edu (Rich Caruana) Date: Fri, 15 Sep 95 09:23:26 -0400 Subject: NIPS*95 Workshop: Call for Participation Message-ID: <29911.811171406@GS79.SP.CS.CMU.EDU> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* *-* POST-NIPS*95 WORKSHOP *-* *-* December 1-2, 1995 *-* *-* Vail, Colorado *-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* *-* CALL FOR PARTICIPATION *-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* TITLE: "Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems" ORGANIZERS: Jon Baxter, Rich Caruana, Tom Mitchell, Lori Pratt, Danny Silver, Sebastian Thrun. INVITED TALKS BY: Leo Breiman (Stanford, undecided) Tom Mitchell (CMU) Tomaso Poggio (MIT) Noel Sharkey (Sheffield) Jude Shavlik (Wisconsin) WEB PAGES (for more information): Our Workshop: http://www.cs.cmu.edu/afs/cs/usr/caruana/pub/transfer.html NIPS*95 Info: http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/NIPS.html WORKSHOP DESCRIPTION: The power of tabula rasa learning is limited. Because of this, interest is increasing in methods that capitalize on previously acquired domain knowledge. Examples of these methods include: o using symbolic domain theories to bias connectionist networks o using unsupervised learning on a large corpus of unlabelled data to learn features useful for subsequent supervised learning on a smaller labelled corpus o using models previously learned for other problems as a bias when learning new, but related, problems o using extra outputs on a connectionist network to bias the hidden layer representation towards more predictive features There are many different approaches: hints, knowledge-based artificial neural nets (KBANN), explanation-based neural nets (EBNN), multitask learning (MTL), knowledge consolidation, etc. What they all have in common is the attempt to transfer knowledge from other sources to benefit the current inductive task. The goal of this workshop is to provide an opportunity for researchers and practitioners to discuss problems and progress in knowledge transfer in learning. 
We hope to identify research directions, debate different theories and approaches, discover unifying principles, and start answering questions like: o when will transfer help -- or hinder? o what should be transferred? o how should it be transferred? o what are the benefits? o in what domains is transfer most useful? SUBMISSIONS: We solicit presentations from anyone working in (or near): o Sequential/incremental, compositional (learning by parts), and parallel learning o Task knowledge transfer (symbolic-neural, neural-neural) o Adaptation of learning algorithms based on prior learning o Learning domain-specific inductive bias o Combining predictions made for related tasks from one domain o Combining supervised learning (where the goal is to learn one feature from the other features) with unsupervised learning (where the goal is to learn every feature from all the other features) o Combining symbolic and connectionist methods via transfer o Fundamental problems/issues in learning to learn o Theoretical models of learning to learn o Cognitive models of, or evidence for, transfer in learning Please send a short (one page or less) description of what you want to present to one of the co-chairs below by Oct 15. Email is preferred. We'll select from the submissions and publish a workshop schedule by Nov 1. Preference will be given to submissions that are likely to generate debate and that go beyond summarizing prior published work by raising important issues or suggesting directions for future work. Suggestions for moderator or panel-led discussions (e.g., sequential vs. parallel transfer) are also encouraged. We plan to run the workshop as a workshop, not as a mini conference, so be daring! We look forward to your submission. Rich Caruana Daniel L. Silver School of Computer Science Department of Computer Science Carnegie Mellon University Middlesex College 5000 Forbes Avenue University of Western Ontario Pittsburgh, PA 15213, USA London, Ontario, Canada N6A 3K7 email: caruana at cs.cmu.edu email: dsilver at csd.uwo.ca ph: (412) 268-3043 ph: (519) 473-6168 fax: (412) 268-5576 fax: (519) 661-3515 See you in Colorado!
From antsakli at maddog.ee.nd.edu Thu Sep 14 17:35:17 1995 From: antsakli at maddog.ee.nd.edu (Panos Antsaklis) Date: Thu, 14 Sep 1995 16:35:17 -0500 Subject: Paper available: "The Dependence Identification Neural Network Construction Algorithm" Message-ID: <199509142135.QAA15701@maddog.ee.nd.edu> FTP-host: rottweiler.ee.nd.edu FTP-filename: /pub/isis/tnn1845.ps.gz The following paper is available by anonymous ftp. It will appear in an upcoming issue of the IEEE Transactions on Neural Networks. ------------------------------------------------------------------------ THE DEPENDENCE IDENTIFICATION NEURAL NETWORK CONSTRUCTION ALGORITHM John O. Moody and Panos J. Antsaklis Dept. of Electrical Engineering University of Notre Dame Notre Dame, IN 46556, USA email: jmoody at maddog.ee.nd.edu (Accepted for publication in the IEEE Transactions on Neural Networks) Abstract An algorithm for constructing and training multilayer neural networks, called dependence identification, is presented in this paper. Its distinctive features are that (i) it transforms the training problem into a set of quadratic optimization problems that are solved by a number of linear equations, (ii) it constructs an appropriate network to meet the training specifications, and (iii) the resulting network architecture and weights can be further refined with standard training algorithms, like backpropagation, giving a significant speed-up in the development time of the neural network and decreasing the amount of trial and error usually associated with network development.
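[Feature (i) can be pictured with a small, generic example: once a layer's input activations are fixed, fitting that layer's weights is a quadratic (least-squares) problem whose minimum is found by solving linear equations. This is only our illustration of that general idea, not the authors' algorithm:]

import numpy as np

# Fitting output weights by solving linear equations: with hidden
# activations H held fixed, min_W ||H W - T||^2 is quadratic in W,
# and the minimizer solves the normal equations (H^T H) W = H^T T.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                         # inputs
T = np.sin(X @ np.array([1.0, -2.0, 0.5]))[:, None]   # targets

H = np.tanh(X @ rng.normal(size=(3, 10)))     # fixed random hidden layer
W, *_ = np.linalg.lstsq(H, T, rcond=None)     # one linear solve

print("training MSE:", np.mean((H @ W - T) ** 2))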
-------------------------------------------------------------------- ftp instructions: unix% ftp rottweiler.ee.nd.edu Name: anonymous password: your email address ftp> cd pub/isis ftp> binary ftp> get tnn1845.ps.gz ftp> bye unix% gzip -d tnn1845.ps.gz unix% lpr tnn1845.ps --------------------------------------------------------------------
From A.Sharkey at dcs.shef.ac.uk Fri Sep 15 11:40:21 1995 From: A.Sharkey at dcs.shef.ac.uk (A.Sharkey@dcs.shef.ac.uk) Date: Fri, 15 Sep 95 16:40:21 +0100 Subject: SPECIAL ISSUE of Connection Science Message-ID: <9509151540.AA29776@entropy.dcs.shef.ac.uk> PRELIMINARY CALL FOR PAPERS: Deadline February 14th 1996 ************ COMBINING NEURAL NETS ************ A special issue of Connection Science Papers are sought for this special issue of Connection Science. The aim of this special issue is to examine when, how, and why neural nets should be combined. The reliability of neural nets can be increased through the use of both redundant and modular nets (either trained on the same task under differing conditions, or on different subcomponents of a task). Questions about the exploitation of redundancy and modularity in the combination of nets, or estimators, have both an engineering and a biological relevance, and include the following: * how best to combine the outputs of several nets. * quantification of the benefits of combining. * how best to create redundant nets that generalise differently (e.g. active learning methods). * how to effectively subdivide a task. * communication between neural net modules. * increasing the reliability of nets. * the use of neural nets for safety critical applications. Special issue editor: Amanda Sharkey (Sheffield, UK) Editorial Board: Leo Breiman (Berkeley, USA) Nathan Intrator (Brown, USA) Robert Jacobs (Rochester, USA) Michael Jordan (MIT, USA) Paul Munro (Pittsburgh, USA) Michael Perrone (IBM, USA) David Wolpert (Santa Fe Institute, USA) We solicit either theoretical or experimental papers on this topic. Questions and submissions concerning this special issue should be sent by February 14th 1996 to: Dr Amanda Sharkey, Department of Computer Science, Regent Court, Portobello Street, University of Sheffield, Sheffield, S1 4DP, United Kingdom. Net: amanda at dcs.shef.ac.uk
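[A back-of-the-envelope illustration of why combining redundant nets pays off --- generic, not drawn from the call above: when the errors of M estimators are roughly independent, averaging their outputs cuts the mean-squared error by about a factor of M.]

import numpy as np

# Averaging M redundant estimators with independent errors: the
# combined mean-squared error drops roughly as 1/M.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
target = np.sin(2 * np.pi * x)

M = 10
# stand-ins for M nets trained under differing conditions:
# the true function plus independent noise per net
preds = target + 0.3 * rng.normal(size=(M, x.size))

print("single net MSE:   ", np.mean((preds[0] - target) ** 2))
print("averaged nets MSE:", np.mean((preds.mean(axis=0) - target) ** 2))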
From william.beaudot at csemne.ch Fri Sep 15 13:41:46 1995 From: william.beaudot at csemne.ch (william.beaudot@csemne.ch) Date: Fri, 15 Sep 1995 19:41:46 +0200 Subject: Winter Retina Conference '96 Message-ID: <199509151741.TAA13120@rotie.csemne.ch> The first annual "Winter Retina Conference: Physiology, Computation, and Neuromorphic Engineering for Vision" will be held in Jackson Hole, Wyoming, USA from 16-20 January 1996. The meeting will bring together leaders in the fields of the physiology, computation and engineering of early vision in vertebrates and invertebrates. The meeting is limited to 50 participants with the intention of fostering rich interaction across the fields.
For more info please see our WWW home page at URL: http://shi18.uth.tmc.edu/retcon.htm The meeting is being organized by: Greg Maguire and Harvey Karten in the USA, and William Beaudot and Andre van Schaik in Switzerland. ------------------------------------------------------------------------------ For those who do not have access to the Internet, here is the text version of the WWW home page for the announcement: WINTER RETINA CONFERENCE '96 Physiology, Computation, and Neuromorphic Engineering for Vision 16-20 January 1996 Jackson Hole, Wyoming, USA First Announcement and Call For Abstracts ======================================== In January 1996, the first annual Winter Retina Conference will be held in Jackson Hole, Wyoming, USA. The meeting will bring together leaders in the field of neural circuitry, computational modeling, and neuromorphic engineering in retinas, including both vertebrates and invertebrates. The purpose of the Winter Retina Conference is to present the latest results in the fundamental aspects of retinal circuitry in a setting conducive to informal and rich interaction between the participants. The meeting is limited to 50 participants and participation is by application and acceptance only. The conference will include daily lectures and demonstrations. SUN Ultra-SPARC and Super-SPARC workstations will be available for demonstrations. Conference Site: Teton Village, Jackson Hole, Wyoming ===================================================== Jackson Hole is a sun-drenched, mountain-rimmed valley noted for its natural, rugged beauty, powder snow, and numerous ski slopes. Many other activities besides great downhill skiing are available, including: X-country skiing, ice-skating, sleigh rides, fishing, hunting, hiking, hot spring soaking, and horseback riding to name a few. Teton Village will be the host site for the conference. Restaurants, nightclubs, and shopping abound here. Yellowstone and Grand Teton National Parks are a short drive away from the slopes. Mountain climbing schools and guides are available in the city. Conference Hotel: ================= The Inn at Jackson Hole. A beautiful hotel at the base of the ski slope (yep, ski-in and ski-out) and in the midst of Teton Village. Everything you need is in walking distance of the hotel, including good restaurants and shopping. Standard room (2 beds)- $80.00/night; Deluxe room (kitchenette)- $120.00/night; Loft suites- $150.00/night. Reservations at 800-842-7666 or fax at 307-733-0844. Identify yourself as an attendee to the Winter Retina Conference for these special rates. Travel: ======= Jackson Hole is served by American, United, and Delta Airlines. A 5-10% discount, depending on class of service, is available on American Airlines, the official airline of the 1996 Winter Retina Conference. We advise using American Airlines because they provide the only jet service directly to Jackson Hole. To receive a 5-10% discount on American Airlines, please call American Airlines Meeting Services desk at 1-800-433-1790, identify yourself as an attendee to the Winter Retina Conference, and use American Airlines "star number S-0116HZ" to book your flight. FEE: Upon Acceptance a Registration Fee of $100 US ================================================== Housing and travel information will be sent following acceptance. Application: Send a one-page abstract and a short c.v.
to either: ================================================================= Winter Retina Conference, Greg Maguire, Department of Neurobiology & Anatomy, University of Texas, 6420 Lamar Fleming, Houston, TX 77030, USA or Winter Retina Conference, William Beaudot, Centre Suisse d'Electronique et de Microtechnique SA, IC & Systems Research, Dept. of Bio-Inspired Advanced Research, Maladiere 71, Case postale 41, CH-2007 Neuchatel, Switzerland or Winter Retina Conference, Andre van Schaik, EPFL-MANTRA Centre for Neuro-Mimetic Systems INJ-035 Ecublens, CH-1015 Lausanne, Switzerland Information and abstracts may also be sent via email: gmaguire at gsbs.gs.uth.tmc.edu Confirmed Attendees =================== Xavier Arreguit, CSEM, Switzerland; William Beaudot, CSEM, Switzerland; Horacio Cantiello, Harvard; Nicolas Franceschini, CNRS, Marseille; Jeanny Herault, Grenoble, France; Harvey Karten, UCSD; Kent Keyser, UCSD; Greg Maguire, Texas; Misha Mahowald, Oxford; Steve Massey, Texas; Haluk Ogmen, Houston; Peter Sterling, Pennsylvania; Andre van Schaik, Lausanne, Switzerland; Frank Werblin, Berkeley Organizing Committee: ===================== William Beaudot, CSEM, Switzerland Harvey Karten, UCSD, USA Greg Maguire, Texas, USA Andre van Schaik, EPFL, Switzerland Registration Form ================= A receipt, program, and information package will be mailed to participants following registration payment. Please include a check for $100 (USA) made payable to: Winter Retina Conference. The fee includes participation in the daily meetings, a social event, and an official t-shirt. Send form and payment by snail mail to: --------------------------------------- Greg Maguire Sensory Sciences Center University of Texas 6420 Lamar Fleming Houston, TX 77030 USA Name: ----- Institution: ------------ Address: -------- Email: ------ Telephone: ---------- Fax: ----
From jagota at ponder.csci.unt.edu Fri Sep 15 19:33:34 1995 From: jagota at ponder.csci.unt.edu (Jagota Arun Kumar) Date: Fri, 15 Sep 95 18:33:34 -0500 Subject: Abstract: NN Optimization on Compressible Graphs Message-ID: <9509152333.AA01789@ponder> Dear Connectionists: I announce a cleaner version of a paper I announced back in 1992. It is otherwise essentially unchanged. ------------------------------------------------------------------------ Performance of Neural Network Algorithms for Maximum Clique On Highly Compressible Graphs Arun K. Jagota and Kenneth W. Regan The problem of finding the size of the largest clique in an undirected graph is NP-hard, even to approximate well. Simple algorithms nevertheless work quite well on random graphs. It is felt by many, however, that the uniform distribution u(n) does not accurately reflect the nature of instances that come up in practice. It is argued that when the actual distribution is unknown, it is more appropriate to suppose that instances come from the Solomonoff-Levin or universal distribution m(x) instead, which assigns higher weight to instances with shorter descriptions. Because m(x) is neither computable nor samplable, we employ a realistic analogue q(x) which lends itself to efficient empirical testing. We experimentally evaluate how well certain neural network algorithms for Maximum Clique perform on graphs drawn from q(x), as compared to those drawn from u(n). The experimental results are as follows. All nine algorithms we evaluated performed roughly equally well on u(n), whereas three of them---the simplest ones---performed markedly poorer than the other six on q(x). Our results suggest that q(x), while postulated as a more realistic distribution to test the performance of algorithms than u(n), is also one that discriminates their performance better. Our q(x) sampler can be used to generate compressible instances of any discrete problem.
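[One crude, hypothetical way to make "compressible instance" concrete --- this is not the authors' q(x) sampler: a graph produced by deterministically expanding a short random seed has a description no longer than the seed plus the generator.]

import numpy as np

# Hypothetical sampler of compressible graphs: the entire adjacency
# matrix is a deterministic expansion of a 16-bit seed, so the graph
# has a description of roughly 16 bits plus the (fixed) generator --
# far below the ~n(n-1)/2 independent bits of a typical uniform graph.
def compressible_graph(n, seed_bits=16):
    seed = int(np.random.default_rng().integers(2 ** seed_bits))
    rng = np.random.default_rng(seed)
    return np.triu(rng.random((n, n)) < 0.5, 1)  # upper triangle = edges

g = compressible_graph(50)
print(int(g.sum()), "edges generated from a 16-bit description")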
Send requests by e-mail to jagota at cs.unt.edu. I use this mechanism to get some indication of the amount of interest, if any, there is in this type of work. Arun Jagota
From pfbaldi at cco.caltech.edu Tue Sep 19 11:33:26 1995 From: pfbaldi at cco.caltech.edu (Pierre Baldi) Date: Tue, 19 Sep 1995 08:33:26 -0700 (PDT) Subject: Tal Grossman Message-ID: Tal Grossman tragically died in a car accident on August 1st. Tal was in the Complex Systems Group at Los Alamos. He was very active in the area of machine learning and computational molecular biology. He gave a presentation at one of the NIPS workshops last year. This year he was trying to organize the same workshop himself. He is survived by his wife and children. Anyone who knew Tal, and feels the need to, is welcome to contact either his wife: Dr. Ramit Mehr-Grossman ramit at t10.lanl.gov or his sponsor at Los Alamos: Dr. Alan Lapedes asl at t13.lanl.gov Pierre Baldi
From igor at c3serve.c3.lanl.gov Tue Sep 19 19:48:30 1995 From: igor at c3serve.c3.lanl.gov (Igor Zlokarnik) Date: Tue, 19 Sep 1995 17:48:30 -0600 Subject: postdoctoral positions available Message-ID: <199509192348.RAA16530@c3serve.c3.lanl.gov> POSTDOCTORAL POSITIONS IN STATISTICAL ANALYSIS AVAILABLE Los Alamos National Laboratory Los Alamos, NM 87545 Postdoctoral positions are available to participate in the development of appropriate methods for analysing large databases such as medical payment records and related databases for the purpose of detecting and preventing fraud, waste and abuse. This research includes, but is not limited to, the evaluation and modification of existing methods, such as neural networks, genetic algorithms, fuzzy logic, n-grams, multivariate analysis, factor analysis, and multidimensional scaling. It may also involve the implementation of database interfaces. Los Alamos National Laboratory provides excellent opportunities for advanced research. The Laboratory operates the world's largest scientific computing facility. A major strength of Los Alamos is the interdisciplinary nature of much of its research. Scientists in one field may draw on research and techniques developed for quite a different, seemingly unrelated, area. Your immediate team colleagues will be working on such diverse research areas as automatic speech recognition, virtual reality, financial analysis, etc. Appointments are available for applicants who have received a doctoral degree in the past three years or will have completed all PhD requirements by date of hire. Positions are for 2 years and are renewable for a third year. Salaries range between 40k and 45k per annum depending on the number of years since the PhD was earned. Los Alamos National Laboratory is an equal opportunity/affirmative action employer. It is operated for the Department of Energy by the University of California. Applicants must submit a resume including a list of publications, a statement of research interests, and three letters of recommendation to: Dr. George Papcun Los Alamos National Laboratory CIC-3, MS B256 Los Alamos, NM 87545 phone: (505) 667-9800 e-mail: gjp at lanl.gov by no later than 15 November 1995. Preliminary E-mail inquiries are encouraged.
From asl at santafe.edu Wed Sep 20 00:06:14 1995 From: asl at santafe.edu (Alan Lapedes) Date: Tue, 19 Sep 95 22:06:14 MDT Subject: Tal Grossman Message-ID: <9509200406.AA18984@sfi.santafe.edu> It is with deepest regret that we have to announce that Tal Grossman, a postdoctoral fellow in the Complex Systems Group at Los Alamos, was killed August 1, 1995 in a car accident while on a family vacation in Arizona. His wife, Ramit (a postdoctoral fellow in the Theoretical Biology Group at Los Alamos) and the children have recovered from minor injuries and are all right. Tal was very active in neural networks and computational biology and had a very promising career. The funeral was held in Israel. A memorial service will be held in Los Alamos Wed Sept 20, 1995. If they wish, friends and colleagues of the Grossmans may contact either Ramit, or Alan Lapedes (Tal's postdoctoral supervisor) at: ramit at t10.lanl.gov asl at t13.lanl.gov
From cia at kamo.riken.go.jp Wed Sep 20 10:40:22 1995 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Wed, 20 Sep 95 23:40:22 +0900 Subject: Blind Separation of source (abstracts) Message-ID: <9509201440.AA15387@kamo.riken.go.jp> Blind Signal Processing is an emerging area in adaptive signal processing and neural networks. It originated in France in the late 1980s. Below please find an advanced program of a special invited session devoted to blind separation of sources and their applications at the 1995 INTERNATIONAL SYMPOSIUM ON NONLINEAR THEORY AND ITS APPLICATIONS, NOLTA'95 in Las Vegas. Any comments will be highly appreciated, especially concerning the relation of this approach to brain information processing and to image and speech enhancement, filtering, and noise reduction. Andrzej Cichocki, Head of Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN E-mail: cia at kamo.riken.go.jp, FAX (+81) 048 462 4633. URL: http://zoo.riken.go.jp/bip.html =========================================================================== NOLTA'95, 1995 INTERNATIONAL SYMPOSIUM ON NONLINEAR THEORY AND ITS APPLICATIONS Caesars Palace, LAS VEGAS Dec. 10-14, 1995 Program for Special Invited Session on "BLIND SEPARATION OF SOURCES - Brain Information Processing" Organizer and chair Dr. A. Cichocki Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN Advanced Program: 1. Prof. Christian JUTTEN, Laboratory TIRF, INPG, Grenoble, FRANCE, "Separation of Sources: Blind or Unsupervised?" Abstract: Basically, source separation methods are referred to as BLIND methods. However, adaptive algorithms for source separation only emphasize the UNSUPERVISED aspects of the learning. In this talk, we propose a selected review of recent works to show how A PRIORI KNOWLEDGE of the sources or of the mixtures can simplify the algorithms and improve performance. 2. Prof. Jean-Francois CARDOSO, Ecole Nationale Superieure des Telecommunications, Telecom Paris, FRANCE "The Invariant Approach to Source Separation" Abstract: The `invariant approach' to source separation is based on the recognition that the unknown parameter in a source mixture is the mixing matrix, hence it belongs to a multiplicative group. In this contribution, we show that this simple fact can be exploited to build source separation algorithms behaving uniformly well in the mixing matrix. This is achieved if two sufficient conditions are met.
+ First, contrast functions (or estimating equations) used to identify the mixture should be designed in such a way that source separation is achieved when they are optimized (or solved) **without constraints** (such as normalization, etc). Examples of such contrast functions will be given, some of them being simple variants of classic contrast functions. This requirement is sufficient to guarantee uniform performance of the resulting batch algorithms. + Second, in the case of adaptive algorithms, uniform performance has a more extensive meaning: not only the residual error but also the convergence are important. Again, the multiplicative nature of the parameter calls for a special form of the learning rule, namely it suggests a `multiplicative update'. This approach results in adaptive source separation algorithms enjoying uniform performance: convergence speed, residual error, stable points, etc. do not depend on the mixing matrix. In addition, these algorithms show a very simple (and parallelizable) structure. The paper includes analytical results based on asymptotic performance analysis that quantify the behavior of both batch and adaptive equivariant source separators. In particular, these results allow one to determine, given the source distribution, the optimal nonlinearities to be used in the learning rule. 3. Dr. Jie ZHU, Prof. Xi-Ren CAO, and Prof. Ruey-Wen LIU, The Hong Kong University of Science and Technology Kowloon, Hong Kong, The University of Notre Dame, Notre Dame, IN 46556, U.S.A. "Blind Source Separation Based on Output Independence - Theory and Implementation" Abstract: The paper presents some recent results on the theory and implementation techniques of blind source separation. The approach is based on the independence property of the outputs of a filter. In the theory part, we identify and study two major issues in the blind source separation problem: separability and separation principles. We show that separability is an intrinsic property of the measured signals and can be described by the concept of $m$-row decomposability introduced in this paper, and that the separation principles can be developed by using the structure characterization theory of random variables. In particular, we show that these principles can be derived concisely and intuitively by applying the Darmois-Skitovich theorem, which is well-known in statistical inference theory and psychology. In the implementation part, we show that if at most one of the source signals has a zero third (or fourth) order cumulant, then these signals can be separated by a filter whose parameters can be determined by a system of nonlinear equations using only third (or fourth) order cumulants of the measured signals. This result covers some previous results as special cases. 4. Dr. Jie HUANG, Prof. Noboru OHNISHI and Dr. Noboru SUGIE; Bio-Mimetic Control Research Center, The Institute of Physical and Chemical Research (RIKEN), Nagoya, JAPAN "Sound Separation Based on Perceptual Grouping of Sound Segments" Abstract: We would like to propose a sound separation method, which combines spatial cues (source direction) and structural cues (continuity and harmony). Sound separation is important in various scientific fields. There are mainly two different approaches to achieve this goal. One is based on blind estimation of inverse transfer functions from multiple sources to multiple receivers (microphones). The other is based on grouping sound segments in the time-frequency domain.
Our approach is based on the grouping of sound segments. However, we use multiple microphones to obtain the spatial information. This approach is strongly inspired by the precedence effect and the cocktail party effect of the human auditory system. The precedence effect suggests a way of coping with echoes in reverberant environments. The cocktail party effect suggests the use of spatial cues for sound separation. Psychological factors of auditory stream integration and segregation, such as continuity and harmony, are used as structural cues. This is realized by a continuity enhancement filter and a harmonic histogram that supplement the spatial grouping of segments. The use of this method with real human speech recorded in an anechoic chamber and a normal room was demonstrated. The experiments have shown that the method was effective in separating sounds in reverberant environments. 5. Dr. Kiyotoshi MATSUOKA and Dr. Mitsuru KAWAMOTO, Department of Control Engineering, Kyushu Institute of Technology, 1-1, Tobata, Kitakyushu, 804 Japan "Blind Signal Separation Based on a Mutual Information Criterion" Abstract: This paper deals with the problem of the so-called blind separation of sources. The problem is to recover a set of source signals from their linear mixtures observed by the same number of sensors, in the absence of any particular information about the transfer function that couples the sources and the sensors. The only a priori knowledge is, basically, the fact that the source signals are statistically mutually independent. Such a task arises in noise canceling of sound signals, image enhancement, medical measurement, etc. If the observed signals are stationary, Gaussian, white ones, then blind separation is essentially impossible. Conversely, blind separation can be realized by exploiting some information on nonstationary, non-Gaussian, or nonwhite characteristics of the observed signals, if any. Most of the conventional methods stipulate that the source signals are non-Gaussian, and use some high-order moments or cumulants. However, it is sometimes difficult to accurately estimate non-Gaussian statistics because random signals in practice are usually not so far from Gaussian. In this paper we propose an approach that utilizes only second-order moments of the observed signals. We consider two cases: (i) the source signals are nonstationary; (ii) the source signals have some temporal correlations, i.e., they are nonwhite signals. To realize signal separation we consider a recovering filter which takes in the observed signals as input and provides an estimate of the source signals as output. The parameters of the filter are determined such that all the outputs of the filter be mutually statistically independent. As a criterion of the statistical independence we adopt the well-known mutual information between the outputs. The adaptation rule for the filter's parameters is derived from the steepest descent minimization of the criterion function. A remarkable feature of our approach is that it is able to treat time-convolutive mixtures of a general number of (stationary) source signals. In contrast, most of the conventional studies on blind separation only consider the static mixing of the source signals. Namely, any delay in the mixing process is not taken into account. So, those methods are useless for many of the important applications of blind separation, e.g., separation of sound signals. Although there are some studies that deal with convolutive mixtures involving some delay, all of them consider only the case of two sources (2 x 2 channels) and do not seem extendible to the case of more than two sources. In our approach, also for convolutive mixtures, the adaptation rule is easily obtained by defining the information criterion in the frequency domain.
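[For a feel of how an adaptive separation rule of this general family operates, here is a toy sketch of the classic Herault-Jutten feedback network for an instantaneous two-source mixture. It is our illustration, not the algorithm of any abstract above, and convergence depends on the mixture and the learning rate.]

import numpy as np

# Toy Herault-Jutten network: outputs y = (I + C)^-1 x, with the
# off-diagonal feedback weights C adapted so that odd nonlinear
# functions of the outputs become uncorrelated.
rng = np.random.default_rng(0)
n = 50000
s = rng.laplace(size=(2, n))                 # independent sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])       # unknown mixing matrix
x = A @ s                                    # observed mixtures

C = np.zeros((2, 2))
mu = 1e-4                                    # small learning rate
for t in range(n):
    y = np.linalg.solve(np.eye(2) + C, x[:, t])
    dC = mu * np.outer(y**3, y)              # HJ rule: f(y_i) g(y_j)
    np.fill_diagonal(dC, 0.0)                # adapt cross-terms only
    C += dC

# when separation succeeds, (I + C)^-1 A approaches a scaled
# permutation matrix, i.e., each output follows a single source
print(np.round(np.linalg.inv(np.eye(2) + C) @ A, 2))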
6. Dr. Eric MOREAU and Prof. Odile MACCHI, Laboratoire des Signaux et Systemes, CNRS-ESE, FRANCE "Adaptive Unsupervised Separation of Discrete Sources" ABSTRACT: We consider the unsupervised source separation problem where observations are captured at the output of an unknown linear mixture of random signals called sources. The sources are assumed discrete, zero-mean and statistically independent. In this paper we consider the problem with a prewhitening stage. The a priori knowledge that the sources are discrete with known levels is used in order to improve performance. A novel contrast, which combines two parts, is proved. The first part forces statistical independence of the outputs while the second one forces the outputs to have the known distribution. Then a stochastic gradient adaptive algorithm is proposed. Its behavior is illustrated by computer simulations, which clearly show that the novel contrast achieves much better performance. 7. Dr. Adel BELOUCHRANI and Prof. Jean-Francois CARDOSO, Telecom Paris, CNRS URA 820, GdR TdSI 46 rue Barrault, 75634 Paris Cedex 13, FRANCE "Maximum Likelihood Source Separation by the Expectation-Maximization Technique: Deterministic and Stochastic Implementation" Abstract: This paper deals with the source separation problem which consists in the separation of a mixture of independent sources without a priori knowledge about the mixing matrix. When the source distributions are known in advance, this problem can be solved via the maximum likelihood (ML) approach by maximizing the data likelihood function using (i) the Expectation-Maximization (EM) algorithm and (ii) a stochastic version of it, the SEM, which is efficiently implemented by resorting to a Metropolis sampler. Two important features of our algorithm are that (a) the covariance of the additive noise can be estimated as a regular parameter, (b) in the case of discrete sources, it is possible to separate more sources than sensors. The effectiveness of this method is illustrated by numerical simulations. 8. Prof. L. TONG and Dr. X. CHEN, University of Connecticut, USA. "Blind Separation of Dynamically Mixed Multiple Sources and its Applications in CDMA Systems" Abstract: In this paper, we consider the problem of separating dynamically mixed multiple sources. Specifically, we address the problem of recovering the sources of a multiple-input multiple-output system. Two issues will be addressed: (i) Source Blind Separability; (ii) Blind Signal Separation Algorithms. Applications of the proposed approach to code-division multiple-access schemes in wireless communication are presented. 9. Prof. Shun-ichi AMARI, Prof. Andrzej CICHOCKI and Dr. Howard Hua YANG, Frontier Research Program RIKEN (Institute of Physical and Chemical Research), Wako-shi, JAPAN "Multi-layer Neural Networks with Local Learning Rules for Blind Separation of Sources" Abstract: In this paper we will propose multi-layer neural network models (feedforward and recurrent) with novel, local, adaptive, unsupervised learning rules which make it possible not only to separate independent sources on-line but also to determine the number of active sources.
In other words, we assume that the number of sources and their waveforms are completely unknown. Moreover, the separation problem can be very ill-conditioned and/or badly scaled. In fact, the performance of the learning algorithm is independent of scaling factors and the condition number of the mixing matrix. A universal (flexible) computer simulation program will be presented which enables comparison of the validity and performance of various recently developed adaptive on-line learning algorithms. 10. Dr. L. De Lathauwer and Dr. P. Comon; E.E. Dept. - ESAT - SISTA, K.U.Leuven, BELGIUM, CNRS - I3S, Sophia Antipolis, Valbonne, FRANCE "Higher-Order Power Method" Abstract: The scientific boom in the field of higher-order statistics involves an increasing need for numerical tools in multi-linear algebra: higher-order moments and cumulants of multivariate stochastic processes are higher-order tensors. We consider the problem of generalizing the computation of the best rank-R approximation of a given matrix to the computation of the best rank-(R1,R2,...,RN) approximation of an Nth-order tensor. We mainly focus on the best rank-1 approximation of third-order tensors. It is shown that this problem leads in a very natural way to a higher-order equivalent of the well-known power method for the computation of the eigendecomposition of matrices. It can be proved that each power iteration step decreases the least-squares error between the initial tensor and the lower-rank estimate. In the tensor case several stationary points might exist, each with a different domain of attraction. Surprisingly, the power iteration for a super-symmetric tensor can produce intermediate results that are unsymmetric. Imposing symmetry on the algorithm does not necessarily improve the convergence speed; the symmetric power iteration can even fail to converge. In the matrix case truncation of the singular value decomposition (SVD) yields the best rank-R approximation; in the tensor case it can only be proved that truncation of the higher-order singular value decomposition (HOSVD) yields a fairly good approximation. All our simulations show that the HOSVD-guess belongs to the attraction region corresponding to the optimal fit. ---------------------------------------------------------------------------
From priel at eder.cc.biu.ac.il Fri Sep 22 07:48:32 1995 From: priel at eder.cc.biu.ac.il (priel@eder.cc.biu.ac.il) Date: Fri, 22 Sep 1995 13:48:32 +0200 (WET) Subject: paper on analytical study of time-series generation ... Message-ID: <9509221148.AA18392@eder.cc.biu.ac.il> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/kanter.time_series.ps.Z The file kanter.time_series.ps.Z is now available for copying from the Neuroprose repository. It has been accepted for publication in Physical Review Letters (PRL); 4 pages. Analytical Study of Time Series Generation by Feed-Forward Networks ---------------------------------------------- I. Kanter, D. A. Kessler, A. Priel and E. Eisenstein Minerva Center and Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel ABSTRACT: Generation of time series is studied analytically for a generalization of the Bit-Generator to continuous activation functions and multi-layer architectures. The network exhibits at asymptotically large times the following characteristic features: (a) flows can be periodic or quasi-periodic depending on the phase of the weights, (b) the dimension of the attractor is a function of the gain of the activation function and the number of hidden units, (c) a phase shift in the weights results in a frequency shift in the output, so that the system operates as a phase detector. *** NO HARD COPIES ARE AVAILABLE ***
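[A quick, generic way to see what such a generator computes --- made-up weights, not the model analyzed in the paper: feed a delay line of the network's own past outputs back into a small feedforward net and iterate.]

import numpy as np

# Generic sequence generator: a one-hidden-layer feedforward net
# driven by a delay line of its own previous outputs. Weights and
# sizes are arbitrary; different gains give different attractors.
rng = np.random.default_rng(1)
d, h = 5, 8                               # delay length, hidden units
W1 = rng.normal(scale=2.0, size=(h, d))   # the scale acts as a gain
W2 = rng.normal(scale=2.0, size=h)

window = list(rng.uniform(-1, 1, d))      # initial delay-line state
series = []
for t in range(200):
    out = float(np.tanh(W2 @ np.tanh(W1 @ np.asarray(window))))
    series.append(out)
    window = window[1:] + [out]           # shift in the new output

print(np.round(series[-10:], 3))          # tail of the generated flow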
From cnna96 at cnm.us.es Fri Sep 22 06:39:04 1995 From: cnna96 at cnm.us.es (4th Workshop on CNN's and Applications) Date: Fri, 22 Sep 95 12:39:04 +0200 Subject: WWW page on CNNA'96 Message-ID: <9509221039.AA18897@cnm1.cnm.us.es> A World Wide Web page has been established for up-to-date information on CNNA96 (see below). http://www.cica.es/~cnm/cnna96.html The preliminary call for papers (already published in this list) follows. ------------------------------------------------------------------------------ PRELIMINARY CALL FOR PAPERS 4th IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND APPLICATIONS (CNNA-96) June 24-26, 1996 (Jointly Organized with NDES-96) Escuela Superior de Ingenieros de Sevilla Centro Nacional de Microelectronica Sevilla, Spain ------------------------------------------------------------------------------ ORGANIZING COMMITTEE: Prof. J.L. Huertas (Chair) Prof. A. Rodriguez-Vazquez Prof. R. Dominguez-Castro SECRETARY: Dr. S. Espejo TECHNICAL PROGRAM: Prof. A. Rodriguez-Vazquez PROCEEDINGS: Prof. R. Dominguez-Castro SCIENTIFIC COMMITTEE: Prof. N.N. Aizemberg, Univ. of Uzhgorod, Ukraine Prof. L.O. Chua, Univ. of Cal. at Berkeley, U.S.A. Prof. V. Cimagalli, Univ. of Rome, Italy Prof. T.G. Clarkson, Kings College of London, U.K. Prof. A.S. Dmitriev, Academy of Sciences, Russia Prof. M. Hasler, EPFL, Switzerland Prof. J. Herault, Nat. Ins. of Tech., France Prof. J.L. Huertas, Nat. Microelectronics Center, Spain Prof. S. Jankowski, Tech. Univ. of Warsaw, Poland Prof. J. Nossek, Tech. Univ. Munich, Germany Prof. V. Porra, Tech. Univ. of Helsinki, Finland Prof. T. Roska, MTA-SZTAKI, Hungary Prof. M. Tanaka, Sophia Univ., Japan Prof. J. Vandewalle, Kath. Univ. Leuven, Belgium ------------------------------------------------------------------------------ GENERAL SCOPE OF THE WORKSHOP AND VENUE The CNNA series of workshops aims to provide a biennial international forum to present and discuss recent advances in Cellular Neural Networks. Following the successful conferences in Budapest (1990), Munich (1992), and Rome (1994), the fourth workshop will be held in Seville during 1996, organized by the National Microelectronic Center and the School of Engineering of Seville. Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2,500-year history with modern infrastructures in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport serving several daily direct international flights, as well as many connections to international flights via Madrid. ------------------------------------------------------------------------------ PAPERS SUBMISSION Papers on all aspects of Cellular Neural Networks are welcome.
Topics of interest include, but are not limited to: - Basic Theory - Applications - Learning - Software Implementations and CNN Simulators - CNN Computers - CNN Chips - CNN System Development and Testing Prospective authors are invited to submit 4-page summaries of their papers to the Conference Secretariat. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in an IEEE-sponsored Proceedings. ------------------------------------------------------------------------------ AUTHOR'S SCHEDULE Submission of summaries: ................ January 31, 1996 Notification of acceptance: ............. March 31, 1996 Submission of camera-ready papers: ...... May 15, 1996 ------------------------------------------------------------------------------ PRELIMINARY REGISTRATION FORM Fourth IEEE Int. Workshop on Cellular Neural Networks and their Applications CNNA'96 Sevilla, Spain, June 24-26, 1996 I wish to attend the workshop. Please send Program and registration form when available. Name: ................______________________________ Mailing address: .....______________________________ Phone: ...............______________________________ Fax: .................______________________________ E-mail: ..............______________________________ Please complete and return to: CNNA'96 Secretariat. Department of Analog Circuit Design, Centro Nacional de Microelectronica Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN FAX: +34-5-4231832 Phone: +34-5-4239923 E-mail: cnna96 at cnm.us.es ------------------------------------------------------------------------------
From cia at kamo.riken.go.jp Sat Sep 23 01:04:14 1995 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Sat, 23 Sep 95 14:04:14 +0900 Subject: Nolta-95 - Call for participation Message-ID: <9509230504.AA16627@kamo.riken.go.jp> Many researchers asked me to send more information about Nolta 95 in Las Vegas. Enclosed, please find "Call for participation". Andrzej Cichocki Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN E-mail: cia at kamo.riken.go.jp, FAX (+81) 048 462 4633. URL: http://zoo.riken.go.jp/bip.html ----------------------------------------------------------------------- CALL FOR PARTICIPATION 1995 International Symposium on Nonlinear Theory and its Applications (NOLTA'95) Caesars Palace, Las Vegas, Nevada, U.S.A. December 10 - 14, 1995 ----------------------------------------------------------------------- The 1995 International Symposium on Nonlinear Theory and its Applications (NOLTA'95) will be held at Caesars Palace, Las Vegas, Nevada, U.S.A. on Dec. 10-14, 1995. The conference is open to the whole world. About 300 papers describing original work in all aspects of Nonlinear Theory and its Applications are presented: [Plenary Talk] 1. CHUA, L.O. (U. C. Berkeley): Nonlinear Waves, Patterns and Spatio-Temporal Chaos 2. HASEGAWA, Akira (Osaka Univ.): Recent Progress of Optical Soliton Research 3. WILLSON, Jr., A.N. (U.C.L.A.): Some Aspects of Nonlinear Transistor Circuit Theory [Special (Invited) Sessions] 1. Application of Nonlinear Analysis to Communication 2. Blind Separation of Sources - Brain Information Processing - 3. Coupled Chaotic Systems - Modeling Brain Functions and Information Processing 4. Fundamental Advances in the Theory of Networks and Systems 5. Hardware Implementation of Nonlinear Dynamical Systems and Its Applications 6.
Homotopy Continuation for Nonlinear Analysis 7. Information Processing and Complexity 8. Nonlinear Dynamics and Neural Coding 9. The CNN Nonlinear Dynamic Visual Microprocessor - New Chips, Applications, and Biological Relevance - [Regular Sessions] 1. Bifurcation 2. Biocybernetics and Evolution Systems 3. Cellular Neural Networks 4. Circuit Simulation and Modeling 5. Chaos and Spatial Chaos 6. Chaos Control 7. Chaotic Series Analysis 8. Chaotic Systems 9. Chua's Circuits 10. Communication 11. Electronic Circuits 12. Fractals 13. Fuzzy 14. Image and Signal Processing 15. Information Dynamics 16. Neural Networks (Applications I) 17. Neural Networks (Applications II) 18. Neural Networks (Applications III) 19. Neural Networks (Artificial Systems) 20. Neural Networks (Learning and Capacity) 21. Nonlinear Partial Differential Equations 22. Nonlinear Physics 23. Numerical Analysis and Validation 24. Numerical Methods and Nonlinear Circuits 25. Oscillation 26. Real Nonlinear Control 27. Synchronization 28. Theory of Nonlinear Control 29. Time Series Analysis and Economics 30. Transmission Line ----------------------------------------------------------------------- Registration Before Oct. 31, the registration fee is Japanese Yen 40,000 (US$400.00) for researchers, and Yen 20,000 (US$200.00) for full-time students. After Nov. 1, the fee is Yen 50,000 (US$500.00) for researchers and Yen 25,000 (US$250.00) for students. The registration fee includes attendance at all sessions, tea & coffee breaks, a copy of the symposium proceedings, and a banquet ticket. ----------------------------------------------------------------------- Accommodation By reserving through NOLTA'95 Reservation Cards, guest rooms of Caesars Palace during NOLTA'95 will be provided at the rate of US$109.00 for single or double occupancy. For further information, please contact the NOLTA'95 secretariat (nolta at sat.t.u-tokyo.ac.jp). ----------------------------------------------------------------------- If you can use a WWW browser, you can find further information at http://hawk.ise.chuo-u.ac.jp/NOLTA . If not, please contact the NOLTA'95 secretariat: NOLTA'95 secretariat c/o Oishi Lab., Dept. of Information and Computer Sciences School of Science and Engineering, Waseda University 3-4-1, Okubo, Shinjuku-ku, Tokyo 169, JAPAN Telefax: +81-3-5272-5742 e-mail: nolta at sat.t.u-tokyo.ac.jp ----------------------------------------------------------------------- Organizer: Research Society of Nonlinear Theory and its Applications, IEICE In cooperation with: IEEE Neural Networks Council International Neural Network Society Asian Pacific Neural Network Assembly IEEE CAS Technical Committee on Nonlinear Circuits and Systems Technical Group of Nonlinear Problems, IEICE Technical Committee of Electronic Circuits, IEEJ ----------------------------------------------------------------------- NOLTA'95 Symposium Committee HONORARY CHAIRS: Kazuo Horiuchi (Waseda Univ.) Masao Iri (Chuo Univ.) CO-CHAIRS: Shun-ichi Amari (Univ. of Tokyo) Shinsaku Mori (Keio Univ.) Allan N. Willson, Jr. (U.C.L.A.) TECHNICAL PROGRAM CHAIR: Shun-ichi Amari (Univ. of Tokyo) LOCAL ARRANGEMENT CHAIR: Allan N. Willson, Jr. Dept. of Electrical Engr., University of Cal., Los Angeles CA 90024, U.S.A. Telefax: +1-310-206-4061 e-mail: willson at epsilon.icsl.ucla.edu PUBLICITY CHAIR: Shinsaku Mori (Keio Univ.) PUBLICATION: Toshimichi Saito (Hosei Univ.) SECRETARIES: Shin'ichi Oishi (Waseda Univ.) oishi at oishi.info.waseda.ac.jp Toshimichi Saito (Hosei Univ.) saito at toshi.ee.hosei.ac.jp Mitsunori Makino (Chuo Univ.)
makino at ise.chuo-u.ac.jp -----------------------------------------------------------------------
From cia at kamo.riken.go.jp Sat Sep 23 04:03:23 1995 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Sat, 23 Sep 95 17:03:23 +0900 Subject: Postdoctoral positions available in JAPAN Message-ID: <9509230803.AA16557@kamo.riken.go.jp> Postdoctoral research positions are available to participate in the Neural Information Processing Frontier Research Program at RIKEN. Applications are invited for postdoctoral positions to study artificial neural systems in the Laboratory for ARTIFICIAL BRAIN SYSTEMS at RIKEN (Institute of Physical and Chemical Research, Wako City, Japan). We are seeking highly-motivated postdoctoral scientists working in the area of neural networks with a strong background in nonlinear and adaptive signal processing, statistical computation, speech and image processing, mathematics, and/or computer science. Salaries range between 4-6 million Yen (40k-60k US$) per annum depending on experience, achievements and the number of years since the PhD was earned. The Institute of Physical and Chemical Research (RIKEN) has started a new eight-year Frontier Research Program on Neural Information Processing, beginning in October 1994. The Program includes three research laboratories (Neural Modeling, Neural Information Representations, and Artificial Brain Systems), each consisting of one research leader and several researchers. We will study fundamental principles underlying higher-order brain functioning from mathematical, information-theoretic and systems-theoretic points of view. The three laboratories cooperate in constructing various models of the brain, mathematically analyzing information principles in the brain, and designing artificial neural systems. We will have close correspondence with another Frontier Research Program on experimental Neuroscience. Research positions, available from November 1995 to April 1996 or later, are open for one-year contracts to researchers and post-doctoral fellows, and are renewable for subsequent years depending on success in the program and expected continuation of funding. The search will be continued until the position is filled. Applicants must submit a letter of application, a current full curriculum vitae/resume including a list of publications, the names of three referees, and a detailed statement of research interests by e-mail to: Dr. Andrzej Cichocki Team leader of Laboratory for ARTIFICIAL BRAIN SYSTEMS Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN E-mail: cia at kamo.riken.go.jp, FAX (+81) 048 462 4633. URL: http://zoo.riken.go.jp/bip.html RIKEN (Institute of Physical and Chemical Research) is a research institute supported by the Japanese Government. The number of tenured researchers at RIKEN is about 350 and more than 1000 are working altogether. RIKEN is located at Wako City, Saitama Prefecture, in the suburbs of Tokyo. It is very close to central Tokyo. It takes only 15 minutes by train from Ikebukuro Station (one of the famous downtowns in Tokyo). The Frontier Research Program is basic-science oriented, and the goal of the computational Neuroscience group is to elucidate fundamental principles of information processing in the brain by theoretical approaches. The group consists of three teams.
Each researcher may propose his or her own research themes through discussions with a team leader, and should take part in one or two more research projects proposed by a team leader. The official language in the research group is English. There are more than 100 foreigners working at RIKEN. *************************** Research program of Laboratory for Artificial Brain Systems 1. Objectives The brain realizes intelligent functions over its complex neural network systems. In order to understand and physically simulate the mechanisms of such complex systems, one important method is to synthesize and design artificial neural systems to see how they work under various environmental conditions. We hope that such an approach will not only make it possible to better understand the principles and mechanisms of the brain but also open new perspectives for developing neurocomputers and their applications. The main objective is the development and investigation of models, architectures (structures) and associated learning algorithms of artificial neural systems. The main emphasis will be given to the development of novel learning algorithms that are biologically justified and computationally efficient (e.g., they provide numerical stability and high convergence speed). The second objective is the investigation of some potential and prospective applications of artificial neural systems in: information and signal processing, high-speed parallel computing, solving in real time some optimization problems, and classification and pattern recognition problems (e.g. face recognition). 2. Main Topics (1) Blind deconvolution and separation of sources (Cocktail party problem). (2) Development and investigation of on-line adaptive unsupervised learning algorithms. (3) Investigation of recurrent dynamic neural networks with controlling chaos. (4) Investigation of artificial neural systems with variable architectures. (5) Application of artificial neural systems for high-speed parallel computing, recognition and optimization problems. 3. Main Features (1) This research is intended to study the system-theoretic frameworks for artificial neural networks based on nonlinear adaptive signal processing and dynamic systems/control theory. (2) The techniques and algorithms developed in this team will be available for collaboration with other teams and individual researchers. (3) High priority will be given to the development of synthetic neural systems which are biologically plausible but also implementable (realizable) by electronic and/or optical circuits and systems. (4) Although we intend to investigate mainly models that are biologically plausible, with the main inspiration taken from neuroscience, we assume that investigated structures and algorithms for specific applications may be rather loosely associated with real biological nervous models or will be only simplified systems of such models.
From tresp at traun.zfe.siemens.de Sat Sep 23 14:03:01 1995 From: tresp at traun.zfe.siemens.de (Volker Tresp) Date: Sat, 23 Sep 1995 20:03:01 +0200 Subject: Papers available on Missing and Noisy Data in Nonlinear Time-Series Prediction Message-ID: <9509231803.AA00331@traun.zfe.siemens.de> The file tresp.miss_time.ps.Z can now be copied from Neuroprose. The paper is 11 pages long. Hardcopies are not available.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/tresp.miss_time.ps.Z Missing and Noisy Data in Nonlinear Time-Series Prediction by Volker Tresp and Reimar Hofmann We discuss the issue of missing and noisy data in nonlinear time-series prediction. We derive fundamental equations both for prediction and for training. Our discussion shows that if measurements are noisy or missing, treating the time series as a static input/output mapping problem (the usual time-delay neural network approach) is suboptimal. We describe approximations of the solutions which are based on stochastic simulations. A special case is $K$-step prediction in which a one-step predictor is iterated $K$ times. Our solutions provide error bars for prediction with missing or noisy data and for $K$-step prediction. Using the $K$-step iterated logistic map as an example, we show that the proposed solutions are a considerable improvement over simple heuristic solutions. Using our formalism we derive algorithms for training recurrent networks, for control of stochastic systems and for reinforcement learning problems.
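[The flavor of the stochastic-simulation approximation can be conveyed with a toy example --- ours, not the paper's equations: instead of iterating the predictor on a single noisy measurement, propagate a cloud of samples and read off an error bar.]

import numpy as np

# K-step prediction with a noisy initial measurement, approximated by
# stochastic simulation: propagate many samples through the one-step
# predictor and summarize the resulting distribution.
rng = np.random.default_rng(0)

def f(x):                       # stand-in one-step predictor: a
    return 4.0 * x * (1.0 - x)  # logistic map, as in the paper's example

x0, sigma, K, n = 0.3, 0.01, 5, 10000
samples = np.clip(x0 + sigma * rng.normal(size=n), 0.0, 1.0)
for _ in range(K):
    samples = f(samples)        # iterate the predictor K times

naive = x0
for _ in range(K):
    naive = f(naive)            # heuristic: iterate the point estimate

print("naive K-step prediction:   ", naive)
print("simulated mean / error bar:", samples.mean(), samples.std())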
From ro2m at crab.psy.cmu.edu Sat Sep 23 14:51:28 1995 From: ro2m at crab.psy.cmu.edu (Randall C. O'Reilly) Date: Sat, 23 Sep 95 14:51:28 EDT Subject: ANNOUNCING: PDP++ version 1.0 Message-ID: <9509231851.AA01665@crab.psy.cmu.edu.psy.cmu.edu> ANNOUNCING: The PDP++ Software Authors: Randall C. O'Reilly, Chadley K. Dawson, and James L. McClelland The PDP++ software is a new neural-network simulation system written in C++. It represents the next generation of the PDP software released with the McClelland and Rumelhart "Explorations in Parallel Distributed Processing Handbook", MIT Press, 1987. It is easy enough for novice users, but very powerful and flexible for research use. The current version is 1.0, our first non-beta release. It has been extensively tested and should be completely usable. The software can be obtained by anonymous ftp from: Anonymous FTP Site: hydra.psy.cmu.edu/pub/pdp++/ *or* unix.hensa.ac.uk/mirrors/pdp++/ For more information, see our web page: WWW Page: http://www.cs.cmu.edu/Web/Groups/CNBC/PDP++/PDP++.html There is a 250 page (printed) manual and an HTML version available on-line at the above address. New Features Since Previous Release (1.0b): =========================================== o Better support for sub-groups of units within a Layer (including better interface support). o Fixed and much more flexible 'ReadOldPDP' function for importing environments (pattern files). o Improved documentation for compiling the software. o Pre-compiled InterViews libraries for g++ now available (in addition to cfront-based ones). o Added a bpso++ executable, which allows creation of mixed backprop and self-organizing networks. o Lots of bug fixes (see the ChangeLog file for details). Software Features: ================== o Full Graphical User Interface (GUI) based on the InterViews toolkit. Allows user-selected "look and feel". o Network Viewer shows network architecture and processing in real-time, allows network to be constructed with simple point-and-click actions. o Training and testing data can be graphed on-line and network state can be displayed over time numerically or using a wide range of color or size-based graphical representations. o Environment Viewer shows training patterns using color or size-based graphical representations. o Flexible object-oriented design allows mix-and-match simulation construction and easy extension by deriving new object types from existing ones. o Built-in 'CSS' scripting language uses C++ syntax, allows full access to simulation object data and functions. Transition between script code and compiled code is simplified since both are C++. Script has command-line completion, source-level debugger, and provides standard C/C++ library functions and objects. o Scripts can control processing, generate training and testing patterns, automate routine tasks, etc. o Scripts can be generated from GUI actions, and the user can create GUI interfaces from script objects to extend and customize the simulation environment. Supported Algorithms: ===================== o Feedforward and recurrent error backpropagation. Recurrent BP includes continuous, real-time models, and Almeida-Pineda. o Constraint satisfaction algorithms and associated learning algorithms including Boltzmann Machine, Hopfield models, mean-field networks (DBM), Interactive Activation and Competition (IAC), and continuous stochastic networks. o Self-organizing learning including Competitive Learning, Soft Competitive Learning, simple Hebbian, and Self-organizing Maps ("Kohonen Nets"). The Fine Print: =============== PDP++ is copyrighted and cannot be sold or distributed by anyone other than the copyright holders. However, the full source code is freely available, and the user is granted full permission to modify, copy, and use it. See our web page for details. The software runs on Unix workstations under XWindows. It requires a minimum of 16 Meg of RAM, and 32 Meg is preferable. It has been developed and tested on Sun Sparc's under SunOs 4.1.3, HP 7xx under HP-UX 9.x, and SGI Irix 5.3. Statically linked binaries are available for these machines. Other machine types will require compiling from the source. Cfront 3.x and g++ 2.6.3 are supported C++ compilers (we specifically were *not* able to compile successfully with the 2.7.0 version of g++, and gave up waiting for 2.7.1). The GUI in PDP++ is based on the InterViews toolkit, version 3.2a. However, we had to patch it to get it to work. We distribute pre-compiled libraries containing these patches for the above architectures. For architectures other than those above, you will have to apply our patches to InterViews before compiling. The basic GUI and script technology in PDP++ is based on a type-scanning system called TypeAccess which interfaces with the CSS script language to provide a virtually automatic interface mechanism. While these were developed for PDP++, they can easily be used for any kind of application, and CSS is available as a stand-alone executable for use like Perl or TCL. The binary-only distribution requires about 54 Meg of disk space, since we have been unable to get shared libraries to work with C++ on the above platforms. Each simulation executable is around 8-12 Meg in size, and there are 4 of these (bp++, cs++, so++, and bpso++), plus the CSS and 'maketa' executables. The compiled source-code distribution takes about 115 Meg (but only around 16 Meg before compiling). For more information on the details of the software, see our web page.
- Randy From fritzke at neuroinformatik.ruhr-uni-bochum.de Sun Sep 24 13:41:04 1995 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Sun, 24 Sep 1995 18:41:04 +0100 (MET) Subject: paper available: incremental learning of local linear mappings Message-ID: <9509241741.AA13493@hermes.neuroinformatik.ruhr-uni-bochum.de> ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/outgoing/fritzke/papers/fritzke.icann95.ps.gz The following paper is available by http and ftp: INCREMENTAL LEARNING OF LOCAL LINEAR MAPPINGS (to appear in the proceedings of ICANN'95, Paris, France) BERND FRITZKE Institut f"ur Neuroinformatik Ruhr-Universit"at Bochum, Germany A new incremental network model for supervised learning is proposed. The model builds up a structure of units each of which has an associated local linear mapping (LLM). Error information obtained during training is used to determine where to insert new units whose LLMs are interpolated from their neighbors. Simulation results for several classification tasks indicate fast convergence as well as good generalization. The ability of the model to also perform function approximation is demonstrated by an example. -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. +49-234 7007921 Ruhr-Universit"at Bochum * Germany FAX. +49-234 7094210 WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html From mclennan at cs.utk.edu Mon Sep 25 18:52:17 1995 From: mclennan at cs.utk.edu (Bruce MacLennan) Date: Mon, 25 Sep 1995 18:52:17 -0400 Subject: paper available on continuous formal systems Message-ID: <199509252252.SAA08101@maclennan.cs.utk.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maclennan.contformalsys.ps.Z Continuous Formal Systems: A Unifying Model in Language and Cognition by Bruce MacLennan (Paper for an IEEE workshop on semiotic modeling.) The idea of a _calculus_ or _discrete formal system_ is central to traditional models of language, knowledge, logic, cognition and computation, and it has provided a unifying framework for these and other disciplines. Nevertheless, research in psychology, neuroscience, philosophy and computer science has shown the limited ability of this model to account for the flexible, adaptive and creative behavior exhibited by much of the animal kingdom. Promising alternate models replace discrete structures by _structured continua_ and discrete rule-following by _continuous dynamical processes_. However, we believe that progress in these alternate models is retarded by the lack of a unifying theoretical construct analogous to the discrete formal system. In this paper we outline the general characteristics of _continuous formal systems_ (_simulacra_), which we believe will be a unifying element in future models of language, knowledge, logic, cognition and computation. Bruce MacLennan Department of Computer Science The University of Tennessee Knoxville, TN 37996-1301 PHONE: (615*)974-0994/5067 FAX: (615*)974-4404 EMAIL: maclennan at cs.utk.edu From emilio at eliza.cc.brandeis.edu Mon Sep 25 10:37:27 1995 From: emilio at eliza.cc.brandeis.edu (Emilio Salinas) Date: Mon, 25 Sep 95 10:37:27 EDT Subject: paper available: Transfer of Coded Information from Sensory to Motor Networks Message-ID: <9509251437.AA18396@eliza.cc.brandeis.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/salinas.trans.ps.Z The following paper is available in the neuroprose archive. It is 18 pages long; about 1 Mb compressed and about 3 Mb uncompressed. 
The paper will appear in the Journal of Neuroscience. Hardcopies will eventually be available but, please, only for those who are really interested and have no means of downloading the file. Address questions, comments or problems to Emilio Salinas: emilio at eliza.cc.brandeis.edu. Transfer of Coded Information from Sensory to Motor Networks by Emilio Salinas and L.F. Abbott ABSTRACT During sensory-guided motor tasks, information must be transferred from arrays of neurons coding target location to motor networks that generate and control movement. We address two basic questions about this information transfer. First, what mechanisms assure that the different neural representations align properly so that activity in the sensory network representing target location evokes a motor response generating accurate movement toward the target? Coordinate transformations may be needed to put the sensory data into a form appropriate for use by the motor system. For example, in visually guided reaching the location of a target relative to the body is determined by a combination of the position of its image on the retina and the direction of gaze. Second, what assures that the motor network responds to the appropriate combination of sensory inputs corresponding to target position in body- or arm-centered coordinates? To answer these questions, we model a sensory network coding target position and use it to drive a similarly modeled motor network. To determine the actual motor response we use decoding methods that have been developed and verified in experimental work. We derive a general set of conditions on the sensory-to-motor synaptic connections that assure a properly aligned and transformed response. The accuracy of the response for different numbers of coding cells is computed. We show that development of the synaptic weights needed to generate the correct motor response can occur spontaneously through the observation of random movements and correlation-based synaptic modification. No error signal or external teaching is needed during this process. We also discuss nonlinear coordinate transformations and the presence of both shifting and non-shifting receptive fields in sensory/motor systems. From fritzke at neuroinformatik.ruhr-uni-bochum.de Tue Sep 26 13:13:19 1995 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Tue, 26 Sep 1995 18:13:19 +0100 (MET) Subject: paper available: Growing Grid ...... Message-ID: <9509261713.AA24334@urda.neuroinformatik.ruhr-uni-bochum.de> ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/outgoing/fritzke/papers/fritzke.growing_grid.ps.gz The following paper is available by http and ftp: GROWING GRID - A SELF-ORGANIZING NETWORK WITH CONSTANT NEIGHBORHOOD RANGE AND ADAPTATION STRENGTH (to appear in Neural Processing Letters, Vol. 2, No. 5, pp. 1-5, 1995) BERND FRITZKE Institut f"ur Neuroinformatik Ruhr-Universit"at Bochum, Germany We present a novel self-organizing network which is generated by a growth process. The application range of the model is the same as for Kohonen's feature map: generation of topology-preserving and dimensionality-reducing mappings, e.g., for the purpose of data visualization. The network structure is a rectangular grid which, however, increases its size during self-organization. By inserting complete rows or columns of units the grid may adapt its height/width ratio to the given pattern distribution.
Both the neighborhood range used to co-adapt units in the vicinity of the winning unit and the adaptation strength are constant during the growth phase. This makes it possible to let the network grow until an application-specific performance criterion is fulfilled or until a desired network size is reached. A final approximation phase with decaying adaptation strength fine-tunes the network. -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. +49-234 7007921 Ruhr-Universit"at Bochum * Germany FAX. +49-234 7094210 WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html From kak at gate.ee.lsu.edu Tue Sep 26 17:11:41 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Tue, 26 Sep 95 16:11:41 CDT Subject: Paper Message-ID: <9509262111.AA00137@gate.ee.lsu.edu> The following paper THE THREE LANGUAGES OF THE BRAIN: QUANTUM, REORGANIZATIONAL, AND ASSOCIATIVE by Subhash C. Kak was presented at the 4th Appalachian Conference on Behavioral Neurodynamics, Radford, VA, Sept 22-25, 1995. It is available by anonymous ftp from: ftp://gate.ee.lsu.edu/pub/kak/dual.ps.Z (35 pages) Abstract: The paper presents a review of current research on different brain processes related to cognition. The review includes a critique of the Turing test, animal intelligence, and reorganizational issues related to superorganisms. How brain behavior might be understood in terms of continuing reorganization is examined. This is done in terms of the information exchange taking place between the organism and the environment. An algorithm is also provided that can perform associative learning instantaneously. From luca at idsia.ch Wed Sep 27 12:12:22 1995 From: luca at idsia.ch (Luca Gambardella) Date: Wed, 27 Sep 95 17:12:22 +0100 Subject: POSTDOC JOB OPENINGS Message-ID: <9509271612.AA20886@fava.idsia.ch> -------------------------------------------------------------------- IDSIA: POSTDOC JOB OPENINGS Statistics, Neural Nets, Forecasting, Planning, Optimization -------------------------------------------------------------------- IDSIA - Istituto Dalle Molle di Studi sull'Intelligenza Artificiale - is a machine learning research center located in Lugano (Switzerland). IDSIA receives subsidies from both private and public sectors. NEW PROJECT. Starting 1996, IDSIA will collaborate with a private company producing software tools for supporting container terminal organization and resource allocation. Goals of the project are: (1) Forecasting the container terminal's input/output flow, using statistical models, neural nets, etc., and taking expert knowledge into account. There is a huge data base describing the terminal activity in previous years. IDSIA has considerable expertise in this area. (2) Finding optimal container positions in the storage area. Where to place an incoming container? This depends on many parameters: final container destination, size and content, the next carrier, current occupancy of the container parking area (often, to get at one container, others need to be rearranged). Emphasis is on optimization and planning. IDSIA's role is to define methodologies for solving problems (1) and (2). The private company's role is to produce a collection of software tools integrated in the existing industrial software environment. The 2-year project is supported by Swiss Government funds (CERS/KWF), and will involve 6 man-years. In the first (second) year, there will be two (one) IDSIA researcher(s) and one (two) company employee(s). IDSIA has two immediate openings (one for 1 year, the other for 2 years).
Required expertise: a Ph.D. in computer science, statistics, or a similar field; experience in forecasting (statistics, neural nets, etc.); experience in optimization and planning; and experience in industrial applications. Please send a resume (PostScript), a description of current interests, the names of three referees, and other related information to: Luca Gambardella IDSIA C.so Elvezia 36 6900 Lugano CH email: luca at idsia.ch DEADLINE: OCTOBER 31. From robbie at psych.rochester.edu Wed Sep 27 13:12:33 1995 From: robbie at psych.rochester.edu (Robbie Jacobs) Date: Wed, 27 Sep 1995 13:12:33 -0400 Subject: Brain & Cog Sci at Rochester Message-ID: <199509271712.NAA23399@prodigal.psych.rochester.edu> DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES, UNIVERSITY OF ROCHESTER ------------------------------------------------------------------- The University of Rochester has formed a new academic department called the Department of Brain and Cognitive Sciences. (Please note that Rochester has terminated its Department of Psychology.) This message contains information about this new department, and is specifically oriented towards individuals seeking graduate education. Members of the Department study how we see and hear, move, learn and remember, reason, produce and understand spoken and signed languages, and how these remarkable capabilities depend upon the workings of the brain. We also study how these capabilities develop during infancy and childhood, and how the brain matures and becomes organized to perform complex tasks. Our research interests span a large domain and straddle several disciplines in the behavioral, neural and computational sciences, but all our work is connected by the idea that to understand behavior we must study not only behavior but also the processes--both neural and computational--that underlie it. While the faculty have active research programs in many regions of this large domain, there are parts in which the Department, as well as the surrounding University community, has notable concentrations of strength. One large group of faculty and students focuses its research on understanding the visual system and its organization and function; another major group investigates the nature of language processing and language acquisition; a third group, which cuts across and links those who investigate perception, language, and neurobiology, studies the nature of learning and development. Graduate education is a central part of academic life in the Department. Our faculty's research programs are structured in a way that includes graduate students as essential partners--as our junior colleagues and future peers--and we commit a great deal to their training. The essence of our program is training for research in the disciplines that constitute the brain and cognitive sciences, and the program is designed to ensure that students develop rapidly into independent researchers. This very brief summary can give you only a sketchy sense of our Department, and I would encourage you to get in touch with us if you would like to learn more about us or apply for admission to the graduate program. If you have access to the World Wide Web you can find a great deal of information about us and our research and graduate programs by going to: http://www.bcs.rochester.edu/bcs/ Our Web site provides most of what you will want to know about our program, and it also enables you to submit an electronic application.
If you would like to discuss our program with a member of the Department, please call Bette McCormick (716-275-1844) who will put you in touch with someone who can help. Bette can also mail you a brochure that provides much more information about the program and the Department. As a last resort, feel free to contact me (Robert Jacobs) via e-mail (robbie at bcs.rochester.edu) with additional questions or concerns. From mandel at cnm.us.es Wed Sep 27 07:48:54 1995 From: mandel at cnm.us.es (Manuel Delgado Restituto) Date: Wed, 27 Sep 95 12:48:54 +0100 Subject: NDES'96 Workshop Message-ID: <9509271148.AA02371@cnm1.cnm.us.es> A World Wide Web page has been established for up-to-date information on NDES96 (see below). http://www.cica.es/~cnm/ndes96.html The preliminary call for papers (already published in this list) follows. ------------------------------------------------------------------------------ PRELIMINARY CALL FOR PAPERS 4th INTERNATIONAL WORKSHOP ON NONLINEAR DYNAMICS OF ELECTRONIC SYSTEMS (NDES-96) June 27-28, 1996 (Jointly Organized with CNNA-96) Escuela Superior de Ingenieros de Sevilla Centro Nacional de Microelectronica Sevilla, Spain ------------------------------------------------------------------------------ ORGANIZING COMMITTEE: Prof. J.L. Huertas (Chair) Prof. E. Freire Prof. A. Rodriguez-Vazquez SECRETARY: Dr. M. Delgado-Restituto TECHNICAL PROGRAM: Prof. A. Rodriguez-Vazquez PROCEEDINGS: Dr. M. Delgado-Restituto SCIENTIFIC COMMITTEE: Prof. Leon O. Chua Univ. of California at Berkeley, U.S.A. Prof. Anthony C. Davies King's College, Univ. London, U.K. Prof. Alexander S. Dmitriev Russian Academy of Sciences, Russia Prof. Martin Hasler EPFL, Switzerland Prof. Michael Peter Kennedy University College Dublin, Ireland Prof. Erik Lindberg Tech. Univ. Denmark, Denmark Prof. Josef Nossek Tech. Univ. Munich, Germany Prof. Maciej Ogorzalek Univ. Mining and Metallurgy, Poland Prof. A. Rodriguez-Vazquez Univ. Seville, Spain Prof. Wolfgang Schwarz Tech. Univ. Dresden, Germany ------------------------------------------------------------------------------ GENERAL SCOPE OF THE WORKSHOP AND VENUE The NDES series of workshops aims to provide an annual international forum to present and discuss recent advances in the analysis and applications of Nonlinear Dynamics of Electronic Circuits and Systems. Following the successful conferences in Dresden (1993), Krakow (1994), and Dublin (1995), the fourth workshop will be hosted by the National Microelectronic Center and the School of Engineering of Seville, in Seville, Spain, on 27-28 June, 1996. Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2,500-year history with modern infrastructure in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport serving several daily direct international flights, as well as many connections to international flights via Madrid and Barcelona. ------------------------------------------------------------------------------ PAPERS SUBMISSION The workshop will address theoretical and practical issues in nonlinear electronic devices, circuits and systems, with an emphasis on dynamic behavior, chaos and complexity. The official language of the workshop will be English.
Prospective authors are invited to submit 4-page summaries of their papers to the Conference Secretariat. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in the Proceedings of the Workshop. ------------------------------------------------------------------------------ AUTHOR'S SCHEDULE Submission of summaries ....................... February 15, 1996 Notification of acceptance .................... March 31, 1996 Submission of camera-ready papers ............. May 15, 1996 ------------------------------------------------------------------------------ PRELIMINARY REGISTRATION FORM 4th INTERNATIONAL WORKSHOP ON NONLINEAR DYNAMICS OF ELECTRONIC SYSTEMS (NDES-96) Sevilla, Spain, June 27-28, 1996 I wish to attend the workshop. Please send Program and registration form when available. Name: ................______________________________ Mailing address: .....______________________________ Phone: ...............______________________________ Fax: .................______________________________ E-mail: ..............______________________________ Please complete and return to: NDES'96 Secretariat. Department of Analog Circuit Design, Centro Nacional de Microelectronica Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN FAX: +34-5-4231832 Phone: +34-5-4239923 E-mail: ndes96 at cnm.us.es ------------------------------------------------------------------------------ From Mark.Ring at gmd.de Wed Sep 27 14:26:34 1995 From: Mark.Ring at gmd.de (Mark Ring) Date: Wed, 27 Sep 1995 19:26:34 +0100 Subject: PhD thesis: Continual Learning in Reinforcement Environments. Message-ID: <199509271826.AA10041@kauai.gmd.de> FTP-host: ftp.gmd.de FTP-filename: /Learning/rl/papers/ring.thesis.ps.Z URL: ftp://ftp.gmd.de/Learning/rl/papers/ring.thesis.ps.Z also URL: http://borneo.gmd.de:80/~ring/Diss http://www.cs.utexas.edu/users/ring/Diss 138 pages total, 624 kbytes compressed postscript. My dissertation from last year is now available publicly in book format. It can be retrieved via ftp, is accessible in sections by WWW, or can be ordered in any bookstore from Oldenbourg Verlag (publishers) with the following ISBN: 3-486-23603-2. ---------------------------------------------------------------------- Title: Continual Learning in Reinforcement Environments August, 1994 Abstract: *Continual learning* is the constant development of complex behaviors with no final end in mind. It is the process of learning ever more complicated skills by building on those skills already developed. In order for learning at one stage of development to serve as the foundation for later learning, a continual-learning agent should learn hierarchically. CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development, is proposed, described, tested, and evaluated in this dissertation. CHILD accumulates useful behaviors in reinforcement environments by using the Temporal Transition Hierarchies learning algorithm, also derived in the dissertation. This constructive algorithm generates a hierarchical, higher-order neural network that can be used for predicting context-dependent temporal sequences and can learn sequential-task benchmarks more than two orders of magnitude faster than competing neural-network systems. Consequently, CHILD can quickly solve complicated non-Markovian reinforcement-learning tasks and can then transfer its skills to similar but even more complicated tasks, learning these faster still.
This continual-learning approach is made possible by the unique properties of Temporal Transition Hierarchies, which allow existing skills to be amended and augmented in precisely the same way that they were constructed in the first place. Contents: Leading pages (pp. iv - xiv) Chapters: 1. Introduction (pp. 1 - 7) 2. Robotics Environments and Learning Tasks (pp. 8 - 16). 3. Neural-Network Learning (pp. 17 - 24). 4. Solving Temporal Problems with Neural Networks (pp. 25 - 33). 5. Reinforcement Learning (pp. 34 - 44). 6. The Automatic Construction of Sensorimotor Hierarchies (pp. 45 - 71). 6.1 Behavior Hierarchies (pp. 45 - 52). 6.2 Temporal Transition Hierarchies (pp. 52 - 69). 6.3 Conclusions (pp. 70 - 71). 7. Simulations (pp. 72 - 95). 7.1 Description of Simulation System (pp. 72 - 73). 7.2 Supervised-Learning Tasks (pp. 73 - 82). 7.3 Continual Learning Results (pp. 82 - 95). 8. Synopsis, Discussion, and Conclusions (pp. 96 - 107). Appendices A-E (pp. 108 - 117). Bibliography (pp. 118 - 127). ---------- Mark Ring Research Group for Adaptive Systems GMD - German National Research Center for Information Technology Schloss Birlinghoven D-53 754 Sankt Augustin Germany Mark.Ring at gmd.de http://borneo.gmd.de:80/~ring From terry at salk.edu Thu Sep 28 22:21:30 1995 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 28 Sep 95 19:21:30 PDT Subject: Faculty Positions at UCSD Message-ID: <9509290221.AA28129@salk.edu> Two New Faculty Positions in Systems Neurobiology and Behavior at the Assistant Professor Level in the Department of Biology, University of California, San Diego Two outstanding candidates are sought to address problems in any of a number of areas, including CNS function in intact animals or semi-intact brain preparations using imaging techniques, computational analysis of CNS function, neuroethology of invertebrate behavior, or genetics of higher brain function. Ph.D. and postdoctoral experience required. Please send curriculum vitae, bibliography, statement of professional goals and research interests, and the names of three references by October 20, 1995 to Brain and Behavior Search Department of Biology 0346 University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0346
From ericr at mech.gla.ac.uk Fri Sep 1 04:49:48 1995 From: ericr at mech.gla.ac.uk (Eric Ronco) Date: Fri, 1 Sep 1995 09:49:48 +0100 Subject: A modular neural network tech rep Message-ID: Dear all, We recently completed a technical report about modular neural networks. It is available on the web page of our Centre for Systems and Control at the University of Glasgow (see the abstract below): http://www.mech.gla.ac.uk/Control/reports.html Ronco, E. and Peter Gawthrop, 1995. Modular Neural Networks: a state of the art. Tech. rep. CSC-95026, Centre for Systems and Control, Faculty of Engineering, Glasgow University. Any comments about this work would be welcome. Regards, Eric Ronco Abstract: Title: Modular Neural Networks: a state of the art Author: Eric Ronco and Peter Gawthrop Keywords: Neural networks; Modularity; Global computation; Local computation; Clustering; Function approximation The use of ``global neural networks'' (such as the backpropagation neural network) and ``clustering neural networks'' (such as the radial basis function neural network) brings different advantages and drawbacks. The combination of the desirable features of those two neural ways of computation is achieved by the use of Modular Neural Networks (MNN). In addition, a considerable advantage can emerge from the use of such a MNN: an interpretable and relevant neural representation of the plant's behaviour. This feature, very desirable for function approximation and especially for control problems, is what other neural models lack. It is so important that we introduce it as a way to differentiate MNN from other local computation models. However, to enable a systematic use of MNN, three steps have to be achieved. First of all, the task has to be decomposed into subtasks; then the neural modules have to be properly organised according to the subtasks; and finally a means of inter-module communication has to be integrated into the whole architecture. We survey the main modular applications according to those steps. This study leads to the main conclusion that a systematic use of MNN depends on the type of task considered. The clustering networks, and especially the Local Model Networks, can be seen as MNN in the context of classification or recognition problems. The Euclidean distance criterion that they apply to cluster the input space leads to a relevant decomposition according to the properties of those tasks. But such a criterion is inappropriate for function approximation problems. As spatial clustering seems to be the only existing decomposition method, an ``ad hoc'' decomposition and organisation of the architecture is used for function approximation. So, to improve the systematic use of MNN in the framework of function approximation, it is now essential to conceive a method of relevant task decomposition. From zhuh at helios.aston.ac.uk Fri Sep 1 08:47:23 1995 From: zhuh at helios.aston.ac.uk (H ZHU) Date: Fri, 1 Sep 1995 12:47:23 +0000 Subject: 3 TechReports on Measuring Generalisation ...
Message-ID: <26706.9509011147@sun.aston.ac.uk> Is there any well-defined meaning to statements like "Learning rule A is better than learning rule B"? The answer is yes, as long as three things are specified: the prior, which is the distribution of problems to be solved; the information divergence, which tells how different the estimated distribution is from the true distribution; and the model, which is the space of all the representable solutions. The following three Technical Reports develop the necessary theory to evaluate and compare any neural network learning rules and other statistical estimators. ftp://cs.aston.ac.uk/neural/zhuh/discrete.ps.Z ftp://cs.aston.ac.uk/neural/zhuh/continuous.ps.Z ftp://cs.aston.ac.uk/neural/zhuh/generalisation.ps.Z Bayesian Invariant Measurements of Generalisation for Discrete Distributions Bayesian Invariant Measurements of Generalisation for Continuous Distributions Information Geometric Measurements of Generalisation by Huaiyu Zhu and Richard Rohwer ABSTRACT Neural networks can be considered as statistical models, and learning rules as statistical estimators. They should be compared in the framework of Bayesian decision theory, with information divergence as the loss function. This ensures coherence (an estimator is optimal if and only if it gives optimal estimates for almost all the data) and invariance (the optimality condition does not depend on one-one transforms in the input, output and parameter spaces). The main result is that the ideal optimal estimator is given as an appropriate average over the posterior. The optimal estimator restricted to any particular model is given by an appropriate projection of the ideal optimal estimator onto the model. The ideal optimal estimator is a sufficient statistic so that all the practical learning rules are its functions. They are also its approximations if preserving information in the data is the sole utility. This new theory of statistical inference retains many of the desirable properties of the least mean squares theory for linear Gaussian models, yet is applicable to any statistical estimation problem, including all the neural network learning rules (deterministic and stochastic, supervised, reinforcement and unsupervised). Comments are welcome and very much appreciated! -- Dr. Huaiyu Zhu zhuh at aston.ac.uk Neural Computing Research Group Dept of Computer Sciences and Applied Mathematics Aston University, Birmingham B4 7ET, UK From arbib at pollux.usc.edu Fri Sep 1 12:08:09 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Fri, 1 Sep 1995 09:08:09 -0700 Subject: Website for Brain Theory and NN Handbook Message-ID: <199509011608.JAA18825@pollux.usc.edu> The Handbook of Brain Theory and Neural Networks now has a Home Page on the Web. The URL is: http://www-mitpress.mit.edu/mitp/recent-books/comp/handbook-brain-theo.html This includes the preface, the complete table of contents, instructions on "How to Use this Book", and a Contributor list providing address, email, and article titles for all 341 authors. The ISBN is 0-262-01148-4 ARBHH The price is $150.00 US until September 30, 1995, and $175 thereafter. Orders can be sent to the MIT Press at mitpress-orders at mit.edu Further MIT Press material is available on line at http://www-mitpress.mit.edu Best wishes Michael Arbib From stiber at cs.ust.hk Sat Sep 2 04:05:42 1995 From: stiber at cs.ust.hk (Dr.
Michael Stiber) Date: Sat, 2 Sep 1995 16:05:42 +0800 Subject: Introducing the NeuroGeek WWW page Message-ID: <199509020805.QAA12069@cssu28.cs.ust.hk> Yes, yet another WWW page for you to place on the list of pages to check out some day when you have the time. The NeuroGeek page is for everyone interested in how computers can be applied to solving problems (or creating new ones) in the field of Computational Neuroscience. This includes differential equation integration methods, data analysis tools, applications of parallel computers, methods for simplifying models, repositories for code and information, etc. If I had to choose a unifying theme, it would be a focus on computational METHODS, as opposed to particular preparations, models, etc. We hope that this page will grow into a large virtual index showing HOW people are doing what they do. So please feel free to send pointers to information it may be lacking. The URL is: http://www.cs.ust.hk/faculty/stiber/neurogeek.html -- Dr. Michael Stiber stiber at cs.ust.hk Department of Computer Science tel: +852 2358 6981 The Hong Kong University of Science & Technology fax: +852 2358 1477 Clear Water Bay, Kowloon, Hong Kong finger me at: cssu28.cs.ust.hk or http://www.cs.ust.hk/faculty/stiber/bio.html cszm06.cs.ust.hk From hinton at cs.toronto.edu Sun Sep 3 15:55:29 1995 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Sun, 3 Sep 1995 15:55:29 -0400 Subject: postdoc job Message-ID: <95Sep3.155531edt.150@neuron.ai.toronto.edu> POSTDOC JOB USING HIDDEN MARKOV MODELS OR NEURAL NETS TO RECOGNIZE ANIMAL SOUNDS The Natural Sciences Unit of Ontario Hydro Technologies in Toronto has a postdoc position available on a project that aims to monitor biodiversity by recognizing wildlife vocalizations (such as those of birds and frogs). We already have a large database and are particularly interested in candidates who have experience in speech recognition using Hidden Markov Models, neural networks, or both. The postdoc will spend one afternoon each week at the University of Toronto interacting with Geoff Hinton's group. We need to fill the position as soon as possible. If necessary we will consider suitably qualified candidates who can only come for 6 months, though we would prefer someone to come for at least a year. The annual salary will be in the range 40,000 to 60,000 Canadian dollars. Please contact Dr. Paul Patrick of the Natural Sciences Unit for more details (phone 416-207-6277, fax 416-207-6094) or E-mail Dr. Patrick or Dr. Hinton (PatrickP at rd.hydro.on.ca, hinton at ai.toronto.edu). From ingber at alumni.caltech.edu Sun Sep 3 15:34:01 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Sun, 3 Sep 1995 12:34:01 -0700 Subject: ASA Optimization of EEG Analyses Message-ID: <199509031934.MAA26792@alumni.caltech.edu> ASA Optimization of EEG Analyses The paper smni95_lecture.ps.Z %A L. Ingber %T Statistical mechanics of neocortical interactions (SMNI) %R SMNI Lecture Plates %I Lester Ingber Research %C McLean, VA %D 1995 %P (unpublished) can be retrieved from my archive using instructions below. This includes some plates outlining a project performing recursive ASA optimization of "canonical momenta" indicators of a subject's/patient's EEG, nested in parameterized, customized clinician's rules, along the lines of the approach formulated for markets in markets95_trading.ps.Z in this archive. This was first presented publicly in a lecture on 18 Aug 95 at the University of Oregon.
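For readers unfamiliar with the method family, ASA (Adaptive Simulated Annealing) refines the generic simulated-annealing loop sketched below (illustrative Python only, not Ingber's ASA code, which uses a more sophisticated annealing schedule; see his archive for the real thing).

import math, random

def anneal(cost, x0, step=0.1, t0=1.0, n_iter=5000, seed=0):
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    for k in range(1, n_iter + 1):
        t = t0 / math.log(k + 1.0)             # a classical (slow) cooling schedule
        y = x + step * rng.gauss(0.0, 1.0)     # propose a nearby state
        cy = cost(y)
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy                       # accept downhill moves; uphill with Boltzmann probability
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

# Toy multimodal objective with a single global minimum near x = 2.
print(anneal(lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(20.0 * x), x0=-3.0))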
Ramesh Srinivasan at the U of O and Electrical Geodesics, Inc., is qualifying EEG data he has collected. I would like to receive more information on (quasi-)automated rules some people may now use to correlate EEG with behavioral or physiological states. Lester ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu (or 131.215.50.234) [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://www.alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. ======================================================================== /* RESEARCH ingber at alumni.caltech.edu * * INGBER ftp.alumni.caltech.edu:/pub/ingber * * LESTER http://www.alumni.caltech.edu/~ingber/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From klaus at prosun.first.gmd.de Mon Sep 4 05:08:16 1995 From: klaus at prosun.first.gmd.de (klaus@prosun.first.gmd.de) Date: Mon, 04 Sep 95 11:08:16 +0200 Subject: new paper on Asymptotic Statistical Theory of Overtraining and Cross-Validation Message-ID: <9509040908.AA01468@chablis.first.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/amari.overtraining.ps.Z The following paper is now available for copying from the Neuroprose repository: amari.overtraining.ps.Z amari.overtraining.ps.Z klaus at first.gmd.de (128151 bytes) 32 pages. S. Amari, N. Murata, K.-R. M\"uller, M. Finke, H. Yang: "Asymptotic Statistical Theory of Overtraining and Cross-Validation" A statistical theory for overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping, we answer the question of the ratio in which the examples should be divided into training and testing sets in order to obtain optimum performance. However, cross-validated early stopping is useless in the asymptotic region; it decreases the generalization error only in the non-asymptotic region. Our large-scale simulations, performed on a CM5, are in close agreement with our analytical findings. (University of Tokyo Technical Report METR 06-95 and submitted to IEEE Transactions on NN) Best regards, Klaus
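Schematically, the cross-validated early stopping analyzed in the paper looks like the following sketch (illustrative Python; the model methods fit_one_epoch, error, get_state, and set_state are hypothetical stand-ins, and the split ratio r is the quantity whose optimal asymptotic value the paper derives).

import numpy as np

def early_stop_train(model, X, y, r=0.2, max_epochs=200, patience=5, seed=0):
    # Hold out a fraction r of the examples; train on the rest and stop
    # once validation error has not improved for `patience` epochs.
    idx = np.random.default_rng(seed).permutation(len(X))
    n_val = int(r * len(X))
    val, tr = idx[:n_val], idx[n_val:]
    best_err, best_state, bad = np.inf, None, 0
    for _ in range(max_epochs):
        model.fit_one_epoch(X[tr], y[tr])      # hypothetical model interface
        err = model.error(X[val], y[val])      # hypothetical model interface
        if err < best_err:
            best_err, best_state, bad = err, model.get_state(), 0
        else:
            bad += 1
            if bad >= patience:
                break
    model.set_state(best_state)                # restore the best weights seen
    return model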
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Dr. Klaus-Robert M\"uller GMD First (Forschungszentrum Informationstechnik) Rudower Chaussee 5, 12489 Berlin Germany mail: klaus at first.gmd.de Tel: +49 30 6392 1860 Fax: +49 30 6392 1805 web-page: http://www.first.gmd.de/persons/Mueller.Klaus-Robert.html &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& From meeden at cs.swarthmore.edu Mon Sep 4 12:19:40 1995 From: meeden at cs.swarthmore.edu (Lisa Meeden) Date: Mon, 4 Sep 1995 12:19:40 -0400 Subject: CFP: Workshop on Learning in Autonomous Robots Message-ID: <199509041619.MAA05576@ginger.cs.swarthmore.edu> ROBOLEARN-96 An International Workshop on Learning for Autonomous Robots Key West, Florida, USA May 19-20, 1996 Preliminary Call For Papers This workshop will be held in conjunction with FLAIRS-96. See http://www.cis.ufl.edu/~ddd/FLAIRS/FLAIRS-96/ Designing robots that can accomplish tasks in the real world is a difficult problem due to the complexity and unpredictability of the environment. Thus, really useful robots and autonomous creatures must learn new know-how and improve on old know-how to be successful. This know-how may involve developing maps, action policies, or more basic reactive responses to incoming perceptual data. Autonomous agents may adapt through associative mechanisms such as neural networks, inductive techniques such as reinforcement learning, evolutionary processes such as genetic algorithms, or analytic techniques such as explanation-based learning (a minimal illustration of one such technique appears after the PUBLICATION section below). Machine learning is a large research area within which we wish to focus on learning techniques viable for robots and autonomous agents that must operate in complex environments. These learning techniques can be used to improve lower level motor and perceptual skills (such as vision) or higher level reasoning skills. We prefer results with implemented physical agents. More than what is learned, we are interested in discussion of well established as well as novel learning techniques, and in learning issues that remain as open problems. SUBMISSION Authors must submit, by ftp, a compressed postscript version of their paper which should be at most 10 double-spaced pages. The first page should include the title but should not identify the author in any manner. A separate cover page should also be submitted containing the author's name, physical address, email address, phone number, affiliation and paper title. In cases of multiple authors, all correspondence will be sent to the first author unless otherwise requested. ftp ftp.cs.buffalo.edu and put your submission in users/hexmoor/robolearn96 Papers must be received by December 15, 1995. Authors of accepted papers will be notified by February 15, 1996. The final camera-ready copy of the papers will be expected by April 15, 1996. Final papers must consist of at most 5 galley pages. For those who cannot submit papers electronically, and for all other communications, write to the following address: ROBOLEARN-96 Henry Hexmoor Dept. of Computer Science SUNY at Buffalo Buffalo, NY 14260, USA Email: hexmoor at cs.buffalo.edu PUBLICATION All accepted papers will be published in the workshop proceedings. In addition, a selected subset of the papers will be invited for inclusion (subject to refereeing) in a book or in a special issue of a journal.
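As a minimal, concrete instance of the reinforcement-learning techniques this call mentions, here is a one-step tabular Q-learning backup in Python (an illustrative sketch only, unrelated to any specific submission).

import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Standard one-step Q-learning backup: move Q(s, a) toward the
    # bootstrapped target r + gamma * max_a' Q(s', a').
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

Q = np.zeros((10, 4))            # a toy table: 10 states, 4 actions
q_update(Q, s=0, a=2, r=1.0, s_next=1)
print(Q[0])                      # Q(0, 2) has moved toward the reward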
ORGANIZATION Chairs: Henry Hexmoor, SUNY Buffalo Lisa Meeden, Swarthmore College PROGRAM COMMITTEE Minoru Asada, Osaka University George Bekey, USC Doug Blank, Indiana University Long-Ji Lin, Siemens Gary McGraw, Indiana University Sridhar Mahadevan, University of South Florida Ulrich Nehmzow, University of Manchester Ashwin Ram, Georgia Tech Justinian Rosca, University of Rochester Sebastian Thrun, U of Bonn (and CMU) Toby Tyrrell, Plymouth Marine Laboratory Brian Yamauchi, The Institute for the Study of Learning and Expertise and Stanford CSLI See the WWW page for the latest details: http://www.cs.buffalo.edu/~hexmoor/robolearn-96 From rolf at cs.rug.nl Tue Sep 5 04:44:04 1995 From: rolf at cs.rug.nl (rolf@cs.rug.nl) Date: Tue, 5 Sep 1995 10:44:04 +0200 Subject: 2 articles online Message-ID: Dear connectionists, the following articles are available online: --------------------------------------------------------------------- Rolf P. W"urtz. Building visual correspondence maps - from neural dynamics to a face recognition system. To appear in: Jose Mira-Mira, editor, Proceedings of the International Conference on Brain Processes, Theories and Models. MIT Press, November 1995 Abstract URL: http://www.cs.rug.nl/users/rolf/mccul95.html Text URL: http://www.cs.rug.nl/users/rolf/mccul95.ps.gz (0.2 MB, 10 pages) ABSTRACT: On the basis of a pyramidal Gabor function representation of images, two systems are presented that build correspondence maps between presegmented memorized models and retinal images. The first one is formulated close to the biology of neuronal layers and dynamic links. It extends earlier ones by a hierarchical approach and background independence. The second system is formulated in a way that is efficiently implementable on digital computers but captures the crucial properties of the first one. It has the capability for object recognition under realistic circumstances, which is demonstrated by recognizing human faces independently of hairstyle. KEYWORDS: Neural network, dynamic link architecture, correspondence problem, object recognition, face recognition, coarse-to-fine strategy, wavelet transform, image representation --------------------------------------------------------------------- Rolf P. W"urtz. Background invariant face recognition. To appear in: 3rd SNN Neural Networks Symposium, Nijmegen, The Netherlands, 14-15 September 1995. Springer Verlag, 1995 Abstract URL: http://www.cs.rug.nl/users/rolf/snn95.html Text URL: http://www.cs.rug.nl/users/rolf/snn95.ps.gz (0.1MB, 4 pages) ABSTRACT: As a contribution to handling the symbol grounding problem in AI, an object recognition system is presented and exemplified with human faces. It differs from earlier systems by a pyramidal representation and the ability to cope with structured background. KEYWORDS: Correspondence problem, object recognition, face recognition, coarse-to-fine strategy, wavelet transform, image representation +---------------------------------------------------------------------------+ | Rolf P.
W"urtz | mailto:rolf at cs.rug.nl | URL: http://www.cs.rug.nl/~rolf/ | | Department of Computing Science, University of Groningen, The Netherlands | +---------------------------------------------------------------------------+ From ali at nubian.ICS.UCI.EDU Tue Sep 5 17:59:04 1995 From: ali at nubian.ICS.UCI.EDU (Kamal M Ali) Date: Tue, 05 Sep 1995 14:59:04 -0700 Subject: Combining classifiers - a study of error reduction Message-ID: <9509051459.aa15424@paris.ics.uci.edu> FTP-host: ftp.ics.uci.edu FTP-file: pub/machine-learning-papers/others/Ali-TR95-MultDecTrees.ps.Z pub/machine-learning-papers/others/Ali-TR95-MultRuleSets.ps.Z Available by anonymous ftp. We examine how the error reduction ability (error rate of the ensemble divided by error rate of the single model learned on the same data) of an ensemble is affected by the degree to which models in the ensemble make correlated errors. Although the linear relationship discovered between error reduction ability and error correlatedness is shown here to hold for rule sets and decision trees, our ongoing research shows it also holds for neural networks. ================================================================ First paper: On the Link between Error Correlation and Error Reduction in Decision Tree Ensembles Abstract Recent work has shown that learning an ensemble consisting of multiple models and then making classifications by combining the classifications of the models often leads to more accurate classifications than those based on a single model learned from the same data. However, the amount of error reduction achieved varies from data set to data set. This paper provides empirical evidence that there is a linear relationship between the degree of error reduction and the degree to which patterns of errors made by individual models are uncorrelated. Ensemble error rate is most reduced in ensembles whose constituents make individual errors in a less correlated manner. The second result of the work is that some of the greatest error reductions occur on domains for which many ties in information gain occur during learning. The third result is that ensembles consisting of models that make errors in a dependent but ``negatively correlated'' manner will have lower ensemble error rates than ensembles whose constituents make errors in an uncorrelated manner. Previous work has aimed at learning models that make errors in an uncorrelated manner rather than those that make errors in a ``negatively correlated'' manner. Taken together, these results help provide an understanding of why the multiple models approach yields great error reduction in some domains but little in others. ================================================================ Second paper: Error reduction through learning multiple descriptions Abstract Learning multiple descriptions for each class in the data has been shown to reduce generalization error but the amount of error reduction varies greatly from domain to domain. This paper presents a novel empirical analysis that helps to understand this variation. Our hypothesis is that the amount of error reduction is linked to the ``degree to which the descriptions for a class make errors in a correlated manner.'' We present a precise and novel definition for this notion and use twenty-nine data sets to show that the amount of observed error reduction is negatively correlated with the degree to which the descriptions make errors in a correlated manner.
We empirically show that it is possible to learn descriptions that make less correlated errors in domains in which many ties in the search evaluation measure (e.g. information gain) are experienced during learning. The paper also presents results that help to understand when and why multiple descriptions are a help (irrelevant attributes) and when they are not as much help (large amounts of class noise).

From lautrup at hpthbe1.cern.ch Tue Sep 5 11:56:30 1995
From: lautrup at hpthbe1.cern.ch (lautrup@hpthbe1.cern.ch)
Date: Tue, 5 Sep 95 11:56:30 METDST
Subject: no subject (file transmission)
Message-ID:

FTP-host: connect.nbi.dk
FTP-file: neuroprose/winther.optimal.ps.Z
WWW-host: http://connect.nbi.dk

----------------------------------------------
The following paper is now available:

Optimal Learning in Multilayer Neural Networks [26 pages]

O. Winther, B. Lautrup, and J-B. Zhang
CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark

Abstract: The generalization performance of two learning algorithms, the Bayes algorithm and the ``optimal learning'' algorithm, on two classification tasks is studied theoretically. In the first example the task is defined by a restricted two-layer network, a committee machine, and in the second the task is defined by the so-called prototype problem. The architecture of the learning machine is in both cases defined to be a committee machine. For both tasks the optimal learning algorithm, which is optimal when the solution is restricted to a specific architecture, performs worse than the overall optimal Bayes algorithm. However, both algorithms perform far better than the conventional stochastic Gibbs algorithm, showing that using prior knowledge about the rule helps to avoid overfitting.

Please do not reply directly to this message.

-----------------------------------------------
FTP-instructions:

unix> ftp connect.nbi.dk (or 130.225.212.30)
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd neuroprose
ftp> binary
ftp> get winther.optimal.ps.Z
ftp> quit
unix> uncompress winther.optimal.ps.Z
-----------------------------------------------

Benny Lautrup,
Computational Neural Network Center (CONNECT)
Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen
Denmark

Telephone: +45-3532-5200 Direct: +45-3532-5358
Fax: +45-3142-1016
e-mail: lautrup at connect.nbi.dk

From listerrj at helios.aston.ac.uk Thu Sep 7 11:04:35 1995
From: listerrj at helios.aston.ac.uk (Richard Lister)
Date: Thu, 07 Sep 1995 16:04:35 +0100
Subject: Lectureship in Neural Computing
Message-ID: <15623.9509071504@sun.aston.ac.uk>

----------------------------------------------------------------------
Neural Computing Research Group
-------------------------------
Dept of Computer Science and Applied Mathematics
Aston University, Birmingham, UK

LECTURESHIP
-----------

* Full details at http://neural-server.aston.ac.uk/ *

Applications are invited for a Lectureship within the Department of Computer Science and Applied Mathematics. (This post is roughly comparable to Assistant Professor positions in North America.) Candidates are expected to have excellent academic qualifications and a proven record of research. The appointment will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. The successful candidate will be expected to make a substantial contribution to the research activities of the Neural Computing Research Group.
Current research focusses on principled approaches to neural computing, and ranges from theoretical foundations to industrial and commercial applications. We would be interested in candidates who can contribute directly to this research programme or who can broaden it into related areas, while maintaining the emphasis on theoretically well-founded research. The successful candidate will also be expected to contribute to the undergraduate and/or postgraduate teaching programmes.

Neural Computing Research Group
-------------------------------

The Neural Computing Research Group currently comprises the following academic staff:

Chris Bishop      Professor
David Lowe        Professor
David Bounds      Professor
Richard Rohwer    Lecturer
Alan Harget       Lecturer
Ian Nabney        Lecturer
David Saad        Lecturer
Chris Williams    Lecturer

together with the following Postdoctoral Research Fellows

Alan McLachlan
Huaihu Zhu

a full-time system administrator, and eleven postgraduate research students. Five further postdoctoral positions are currently being filled.

Conditions of Service
---------------------

The appointment will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Initial salary will be within the lecturer A and B range 14,756 to 25,735, and exceptionally up to 28,756 (UK pounds; these salary scales are currently under review).

How to Apply
------------

If you wish to be considered for this position, please send a full CV and publications list, together with the names of 4 referees, to:

Hanni Sondermann
Neural Computing Research Group
Department of Computer Science and Applied Mathematics
Aston University
Birmingham B4 7ET, U.K.
Tel: (+44 or 01) 21 333 4631
Fax: (+44 or 01) 21 333 4586
e-mail: h.e.sondermann at aston.ac.uk

Closing date: 30 September 1995.

----------------------------------------------------------------------

From baluja at GS93.SP.CS.CMU.EDU Thu Sep 7 17:14:50 1995
From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja)
Date: Thu, 7 Sep 95 17:14:50 EDT
Subject: Comparison of Optimization Techniques based on Hebbian Learning & Genetic Algorithms
Message-ID:

Title: An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
By: Shumeet Baluja

Abstract: This report is a repository for the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, binpacking, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility. This work may be of interest to the Artificial Neural Network community, as two of the algorithms compared are based upon simple Hebbian Learning and supervised competitive learning algorithms.
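For readers who want a concrete feel for this style of optimization heuristic, the following minimal sketch (Python with NumPy; our own illustration, not code from the report) implements population-based incremental learning (PBIL), a probability-vector algorithm that we assume is representative of the Hebbian-style optimizers mentioned above -- the report itself gives the exact algorithms and problem encodings. The one-max fitness function is a toy placeholder, not one of the report's benchmarks:

    import numpy as np

    def pbil(fitness, n_bits, pop_size=50, lr=0.1, n_iters=200, seed=0):
        # Probability vector: one independent Bernoulli parameter per bit.
        rng = np.random.default_rng(seed)
        p = np.full(n_bits, 0.5)
        best_x, best_f = None, -np.inf
        for _ in range(n_iters):
            # Sample a population of bitstrings from the current vector.
            pop = (rng.random((pop_size, n_bits)) < p).astype(int)
            f = np.array([fitness(x) for x in pop])
            winner = pop[int(f.argmax())]
            if f.max() > best_f:
                best_x, best_f = winner.copy(), float(f.max())
            # Hebbian-flavored update: nudge p toward the generation's best sample.
            p = (1.0 - lr) * p + lr * winner
        return best_x, best_f

    # Toy usage: maximize the number of 1-bits ("one-max", a placeholder problem).
    x, fx = pbil(lambda b: b.sum(), n_bits=32)

Each generation, bit values that keep appearing in good samples become more likely to be sampled again, which is the sense in which this family of optimizers is Hebbian rather than recombinative.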
instructions (CMU-CS-95-193)
---------------------------------------
anonymous ftp:
  ftp reports.adm.cs.cmu.edu
  binary
  cd 1995
  get CMU-CS-95-193.ps

from www:
  from my home page: http://www.cs.cmu.edu/~baluja
  more directly: http://www.cs.cmu.edu/afs/cs/user/baluja/www/techreps.html

if you do not have www or ftp access, send me email, and I will send a copy (.ps) through email.

From opper at cse.ucsc.edu Thu Sep 7 19:17:17 1995
From: opper at cse.ucsc.edu (Manfred Opper)
Date: Thu, 7 Sep 1995 16:17:17 -0700 (PDT)
Subject: TR announcement
Message-ID: <199509072317.QAA08472@arapaho.cse.ucsc.edu>

The following papers are now available via anonymous ftp (see below for the retrieval procedure):

------------------------------------------------------------------

"Bounds for Predictive Errors in the Statistical Mechanics of Supervised Learning" (Submitted to Physical Review Letters)
M. Opper and D. Haussler
Ref. WUE-ITP-95-019

Within a Bayesian framework, by generalizing inequalities known from statistical mechanics, we calculate general upper and lower bounds for a cumulative entropic error, which measures the success in the supervised learning of an unknown rule from examples. Both bounds match asymptotically, when the number m of observed data grows large. We find that the information gain from observing a new example decreases universally like d/m. Here d is a dimension that is defined from the scaling of small volumes with respect to a distance in the space of rules. (10 pages)

AND

"General Bounds on the Mutual Information Between a Parameter and n Conditionally Independent Observations"
D. Haussler and M. Opper
(Proceedings of the 8th Ann. Conf. on Computational Learning Theory: COLT 95)
Ref. WUE-ITP-95-020

Each parameter theta in an abstract parameter space Theta is associated with a different probability distribution on a set Y. A parameter theta is chosen at random from Theta according to some a priori distribution on Theta, and n conditionally independent random variables Y^n = Y_1,..., Y_n are observed with common distribution determined by theta. We obtain bounds on the mutual information between the random variable theta, giving the choice of parameter, and the random variable Y^n, giving the sequence of observations. We also bound the supremum of the mutual information, over choices of the prior distribution on Theta. These quantities have applications in density estimation, computational learning theory, universal coding, hypothesis testing, and portfolio selection theory. The bounds are given in terms of the metric and information dimensions of the parameter space Theta with respect to the Hellinger distance. (11 pages)

Manfred Opper
present address:
The Baskin Center for Computer Engineering & Information Sciences,
University of California
Santa Cruz CA 95064
email: opper at cse.ucsc.edu

______________________________________________________________________
Retrieval procedure:

unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/preprint
ftp> get WUE-ITP-95-0??.ps.gz (*)
ftp> quit
unix> gunzip WUE-ITP-95-0??.ps.gz
e.g. unix> lp WUE-ITP-95-0??.ps (7 pages of output)

(*) can be replaced by "get WUE-ITP-95-0??.ps". The file will then be uncompressed before transmission (slower!).
_____________________________________________________________________

From lpratt at franklinite.Mines.EDU Fri Sep 8 12:27:54 1995
From: lpratt at franklinite.Mines.EDU (Lorien Y. Pratt)
Date: Fri, 8 Sep 1995 10:27:54 -0600 (MDT)
Subject: Grad student needed: transfer and hazardous waste
Message-ID: <9509081627.AA02460@franklinite.Mines.EDU>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 3492 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/8c0d708a/attachment-0001.ksh

From dyyeung at cs.ust.hk Mon Sep 11 00:12:27 1995
From: dyyeung at cs.ust.hk (Dit-Yan Yeung)
Date: Mon, 11 Sep 1995 12:12:27 +0800 (HKT)
Subject: Survey Paper on Constructive Neural Networks
Message-ID: <199509110412.MAA16731@cssu35.cs.ust.hk>

Paper (81596 bytes in gzipped PS): ftp://ftp.cs.ust.hk/pub/techreport/95/tr95-43.ps.gz

*******************************************************************************

Constructive Feedforward Neural Networks for Regression Problems: A Survey

Tin-Yau Kwok & Dit-Yan Yeung
Department of Computer Science
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon
Hong Kong
{jamesk,dyyeung}@cs.ust.hk

Technical Report HKUST-CS95-43
September 1995

ABSTRACT

In this paper, we review the procedures for constructing feedforward neural networks in regression problems. While standard back-propagation performs gradient descent only in the weight space of a network with fixed topology, constructive procedures start with a small network and then grow additional hidden units and weights until a satisfactory solution is found. The constructive procedures are categorized according to the resultant network architecture and the learning algorithm for the network weights.

*******************************************************************************

From lxu at cs.cuhk.hk Mon Sep 11 04:46:37 1995
From: lxu at cs.cuhk.hk (Dr. Xu Lei)
Date: Mon, 11 Sep 1995 16:46:37 +0800
Subject: ICONIP96
Message-ID: <199509110846.QAA14533@cs.cuhk.hk>

FIRST CALL FOR PAPERS

1996 INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING
The Annual Conference of the Asian Pacific Neural Network Assembly
ICONIP'96, September 24 - 27, 1996
Hong Kong Exhibition and Convention Center, Wan Chai, Hong Kong

The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing. The conference further serves to stimulate local and regional interest in neural information processing and its potential applications to industries indigenous to this region.
CONFERENCE TOPICS
=================
* Theory * Algorithms & Architectures * Applications
* Supervised/Unsupervised Learning * Hardware Implementations
* Hybrid Systems * Neurobiological Systems * Associative Memory
* Visual & Speech Processing * Intelligent Control & Robotics
* Cognitive Science & AI * Recurrent Net & Dynamics * Image Processing
* Pattern Recognition * Computer Vision * Time Series Prediction
* Financial Engineering * Optimization * Fuzzy Logic
* Evolutionary Computing * Other Related Areas

CONFERENCE SCHEDULE
===================
Submission of papers         February 1, 1996
Notification of acceptance   May 1, 1996
Early registration deadline  July 1, 1996

SUBMISSION INFORMATION
======================
Authors are invited to submit one camera-ready original and five copies of the manuscript written in English on A4-format white paper with one inch margins on all four sides, in one column format, no more than six pages including figures and references, single-spaced, in Times-Roman or similar font of 10 points or larger, and printed on one side of the page only. Electronic or fax submission is not acceptable. Additional pages will be charged at USD $50 per page.

Centered at the top of the first page should be the complete title, author(s), affiliation, mailing, and email addresses, followed by an abstract (no more than 150 words) and the text. Each submission should be accompanied by a cover letter indicating the contacting author, affiliation, mailing and email addresses, telephone and fax number, and preference of technical session(s) and format of presentation, either oral or poster (both are published). All submitted papers will be refereed by experts in the field based on quality, clarity, originality, and significance. Authors may also retrieve the ICONIP style, "iconip.tex" and "iconip.sty" files for the conference by anonymous FTP at ftp.cs.cuhk.hk in the directory /pub/iconip96.

For further information, inquiries, and paper submissions please contact:

ICONIP'96 Secretariat
Department of Computer Science
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
Fax: (852) 2603-5024
E-mail: iconip96 at cs.cuhk.hk
http://www.cs.cuhk.hk/iconip96

======================================================================

General Co-Chairs
=================
Omar Wing, CUHK
Shun-ichi Amari, Tokyo U.

Advisory Committee
==================
International
-------------
Yaser Abu-Mostafa, Caltech
Michael Arbib, U. Southern Cal.
Leo Breiman, UC Berkeley
Jack Cowan, U. Chicago
Rolf Eckmiller, U. Bonn
Jerome Friedman, Stanford U.
Stephen Grossberg, Boston U.
Robert Hecht-Nielsen, HNC
Geoffrey Hinton, U. Toronto
Anil Jain, Michigan State U.
Teuvo Kohonen, Helsinki U. of Tech.
Sun-Yuan Kung, Princeton U.
Robert Marks, II, U. Washington
Thomas Poggio, MIT
Harold Szu, US Naval SWC
John Taylor, King's College London
David Touretzky, CMU
C. v. d. Malsburg, Ruhr-U. Bochum
David Willshaw, Edinburgh U.
Lofti Zadeh, UC Berkeley

Asia-Pacific Region
-------------------
Marcelo H. Ang Jr, NUS, Singapore
Sung-Yang Bang, POSTECH, Pohang
Hsin-Chia Fu, NCTU., Hsinchu
Toshio Fukuda, Nagoya U., Nagoya
Kunihiko Fukushima, Osaka U., Osaka
Zhenya He, Southeastern U., Nanjing
Marwan Jabri, U. Sydney, Sydney
Nikola Kasabov, U. Otago, Dunedin
Yousou Wu, Tsinghua U., Beijing

Organizing Committee
====================
L.W. Chan (Co-Chair), CUHK
K.S. Leung (Co-Chair), CUHK
D.Y. Yeung (Finance), HKUST
C.K. Ng (Publication), CityUHK
A. Wu (Publication), CityUHK
K.P. Lam (Publicity), CUHK
M.W. Mak (Local Arr.), HKPU
C.S. Tong (Local Arr.), HKBU
T. Lee (Registration), CUHK
M. Stiber (Registration), HKUST
K.P. Chan (Tutorial), HKU
H.T. Tsui (Industry Liaison), CUHK
I. King (Secretary), CUHK

Program Committee
=================
Co-Chairs
---------
Lei Xu, CUHK
Michael Jordan, MIT
Erkki Oja, Helsinki Univ. of Tech.
Mitsuo Kawato, ATR

Members
-------
Yoshua Bengio, U. Montreal
Chris Bishop, Aston U.
Leon Bottou, Neuristique
Gail Carpenter, Boston U.
Laiwan Chan, CUHK
Huishen Chi, Peking U.
Peter Dayan, MIT
Kenji Doya, ATR
Scott Fahlman, CMU
Francoise Fogelman, SLIGOS
Lee Giles, NEC Research Inst.
Michael Hasselmo, Harvard U.
Kurt Hornik, Technical U. Wien
Steven Nowlan, Synaptics
Jeng-Neng Hwang, U. Washington
Nathan Intrator, Tel-Aviv U.
Larry Jackel, AT&T Bell Lab
Adam Kowalczyk, Telecom Australia
Soo-Young Lee, KAIST
Todd Leen, Oregon Grad. Inst.
Cheng-Yuan Liou, National Taiwan U.
David MacKay, Cavendish Lab
Eric Mjolsness, UC San Diego
John Moody, Oregon Grad. Inst.
Nelson Morgan, ICSI
Michael Perrone, IBM Watson Lab
Ting-Chuen Pong, HKUST
Paul Refenes, London Business School
Hava Siegelmann, Technion
Ah Chung Tsoi, U. Queensland
Benjamin Wah, U. Illinois
Andreas Weigend, Colorado U.
Ronald Williams, Northeastern U.
John Wyatt, MIT
Alan Yuille, Harvard U.
Richard Zemel, CMU

From cohn at psyche.mit.edu Mon Sep 11 18:06:41 1995
From: cohn at psyche.mit.edu (David Cohn)
Date: Mon, 11 Sep 95 18:06:41 EDT
Subject: NIPS*95 Registration Info Available
Message-ID: <9509112206.AA20429@psyche.mit.edu>

[As always, apologies to those who, by dint of subscribing to overlapping multiple mailing lists, receive multiple copies of this announcement.]

CONFERENCE ANNOUNCEMENT

Neural Information Processing Systems
Natural and Synthetic
Monday, Nov. 27 - Saturday, Dec. 2, 1995
Denver, Colorado
http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS.html

This is the ninth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov. 27) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec. 1-2).

Major conference topics include: Neuroscience, Theory, Implementations, Applications, Algorithms & Architectures, Visual Processing, Speech/Handwriting/Signal Processing, Cognitive Science & AI, Control, Navigation and Planning.

Detailed information and registration materials are available electronically at
http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS.html
ftp://psyche.mit.edu/pub/NIPS95/

Students who require financial support to attend the conference are urged to retrieve a copy of the registration brochure as soon as possible in order to meet the aid application deadline.

Mail general inquiries/requests for registration material to:

NIPS*95 Registration
Dept.
of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA FAX: (303) 273-3875 e-mail: nips95 at mines.colorado.edu From kim.plunkett at psy.ox.ac.uk Wed Sep 13 11:05:53 1995 From: kim.plunkett at psy.ox.ac.uk (Kim Plunkett) Date: Wed, 13 Sep 1995 15:05:53 +0000 Subject: Special Issue of LCP Message-ID: <9509131505.AA53353@mac17.psych.ox.ac.uk> Manuscript submissions are invited for inclusion in a Special Issue of the journal "Language and Cognitive Processes" on Connectionist Approaches to Language Development. It is anticipated that most of the papers in the special issue will describe previously unpublished work on some aspect of language development (first or second language learning in either normal or disordered populations) that incorporates a neural network modelling component. However, theoretical papers discussing the general enterprise of connectionist modelling within the domain of language development are also welcome. The deadline for submissions is 1st April 1996. Manuscripts should be sent to the guest editor for this special issue: Kim Plunkett, Department of Experimental Psychology, Oxford University, South Parks Road, Oxford, OX1 3UD, UK (email: plunkett at psy.ox.ac.uk FAX: 1865-310447). All manuscripts will be submitted to the usual Language and Cognitive Processes peer review process. From caruana+ at cs.cmu.edu Fri Sep 15 09:23:26 1995 From: caruana+ at cs.cmu.edu (Rich Caruana) Date: Fri, 15 Sep 95 09:23:26 -0400 Subject: NIPS*95 Workshop: Call for Participation Message-ID: <29911.811171406@GS79.SP.CS.CMU.EDU> *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* *-* POST-NIPS*95 WORKSHOP *-* *-* December 1-2, 1995 *-* *-* Vail, Colorado *-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* *-* CALL FOR PARTICIPATION *-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* TITLE: "Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems" ORGANIZERS: Jon Baxter, Rich Caruana, Tom Mitchell, Lori Pratt, Danny Silver, Sebastian Thrun. INVITED TALKS BY: Leo Breiman (Stanford, undecided) Tom Mitchell (CMU) Tomaso Poggio (MIT) Noel Sharkey (Sheffield) Jude Shavlik (Wisconsin) WEB PAGES (for more information): Our Workshop: http://www.cs.cmu.edu/afs/cs/usr/caruana/pub/transfer.html NIPS*95 Info: http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/NIPS.html WORKSHOP DESCRIPTION: The power of tabula rasa learning is limited. Because of this, interest is increasing in methods that capitalize on previously acquired domain knowledge. Examples of these methods include: o using symbolic domain theories to bias connectionist networks o using unsupervised learning on a large corpus of unlabelled data to learn features useful for subsequent supervised learning on a smaller labelled corpus o using models previously learned for other problems as a bias when learning new, but related, problems o using extra outputs on a connectionist network to bias the hidden layer representation towards more predictive features There are many different approaches: hints, knowledge-based artificial neural nets (KBANN), explanation-based neural nets (EBNN), multitask learning (MTL), knowledge consolidation, etc. What they all have in common is the attempt to transfer knowledge from other sources to benefit the current inductive task. The goal of this workshop is to provide an opportunity for researchers and practitioners to discuss problems and progress in knowledge transfer in learning. 
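As a concrete illustration of one of the transfer mechanisms listed above -- extra outputs biasing the hidden layer of a connectionist network -- here is a minimal multitask sketch in Python with NumPy. It is our own toy illustration, not code from any of the systems named in this call; the layer sizes and random targets are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_main, n_extra = 8, 16, 1, 3

    # One shared hidden layer feeds a main output plus "extra" task outputs.
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_main + n_extra))

    X = rng.normal(size=(100, n_in))              # placeholder inputs
    Y = rng.normal(size=(100, n_main + n_extra))  # main + auxiliary targets

    lr = 0.1
    for _ in range(1000):
        H = np.tanh(X @ W1)       # shared hidden representation
        Yhat = H @ W2             # every task reads the same hidden layer
        E = Yhat - Y              # gradient of the summed squared error
        gW2 = H.T @ E / len(X)
        gW1 = X.T @ ((E @ W2.T) * (1.0 - H**2)) / len(X)  # backprop through tanh
        W1 -= lr * gW1            # auxiliary-task gradients shape W1 too
        W2 -= lr * gW2

Because the main and auxiliary tasks share the hidden layer, gradients from the related tasks shape the common representation; this is the sense in which the extra outputs act as an inductive bias toward more predictive features.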
We hope to identify research directions, debate different theories and approaches, discover unifying principles, and begin answering questions like:

o when will transfer help -- or hinder?
o what should be transferred?
o how should it be transferred?
o what are the benefits?
o in what domains is transfer most useful?

SUBMISSIONS:

We solicit presentations from anyone working in (or near):

o Sequential/incremental, compositional (learning by parts), and parallel learning
o Task knowledge transfer (symbolic-neural, neural-neural)
o Adaptation of learning algorithms based on prior learning
o Learning domain-specific inductive bias
o Combining predictions made for related tasks from one domain
o Combining supervised learning (where the goal is to learn one feature from the other features) with unsupervised learning (where the goal is to learn every feature from all the other features)
o Combining symbolic and connectionist methods via transfer
o Fundamental problems/issues in learning to learn
o Theoretical models of learning to learn
o Cognitive models of, or evidence for, transfer in learning

Please send a short (one page or less) description of what you want to present to one of the co-chairs below by Oct 15. Email is preferred. We'll select from the submissions and publish a workshop schedule by Nov 1. Preference will be given to submissions that are likely to generate debate and that go beyond summarizing prior published work by raising important issues or suggesting directions for future work. Suggestions for moderator or panel-led discussions (e.g., sequential vs. parallel transfer) are also encouraged. We plan to run the workshop as a workshop, not as a mini conference, so be daring!

We look forward to your submission.

Rich Caruana                         Daniel L. Silver
School of Computer Science           Department of Computer Science
Carnegie Mellon University           Middlesex College
5000 Forbes Avenue                   University of Western Ontario
Pittsburgh, PA 15213, USA            London, Ontario, Canada N6A 3K7
email: caruana at cs.cmu.edu         email: dsilver at csd.uwo.ca
ph: (412) 268-3043                   ph: (519) 473-6168
fax: (412) 268-5576                  fax: (519) 661-3515

See you in Colorado!

From antsakli at maddog.ee.nd.edu Thu Sep 14 17:35:17 1995
From: antsakli at maddog.ee.nd.edu (Panos Antsaklis)
Date: Thu, 14 Sep 1995 16:35:17 -0500
Subject: Paper available: "The Dependence Identification Neural Network Construction Algorithm"
Message-ID: <199509142135.QAA15701@maddog.ee.nd.edu>

FTP-host: rottweiler.ee.nd.edu
FTP-filename: /pub/isis/tnn1845.ps.gz

The following paper is available by anonymous ftp. It will appear in an upcoming issue of the IEEE Transactions on Neural Networks.

------------------------------------------------------------------------

THE DEPENDENCE IDENTIFICATION NEURAL NETWORK CONSTRUCTION ALGORITHM

John O. Moody and Panos J. Antsaklis
Dept. of Electrical Engineering
University of Notre Dame
Notre Dame, IN 46556, USA
email: jmoody at maddog.ee.nd.edu

(Accepted for publication in the IEEE Transactions on Neural Networks)

Abstract

An algorithm for constructing and training multilayer neural networks, dependence identification, is presented in this paper.
Its distinctive features are that (i) it transforms the training problem into a set of quadratic optimization problems that are solved by a number of linear equations, (ii) it constructs an appropriate network to meet the training specifications, and (iii) the resulting network architecture and weights can be further refined with standard training algorithms, like backpropagation, giving a significant speed-up in the development time of the neural network and decreasing the amount of trial and error usually associated with network development.

--------------------------------------------------------------------
ftp instructions:

unix% ftp rottweiler.ee.nd.edu
Name: anonymous
password: your email address
ftp> cd pub/isis
ftp> binary
ftp> get tnn1845.ps.gz
ftp> bye
unix% gzip -d tnn1845.ps.gz
unix% lpr tnn1845.ps
--------------------------------------------------------------------

From A.Sharkey at dcs.shef.ac.uk Fri Sep 15 11:40:21 1995
From: A.Sharkey at dcs.shef.ac.uk (A.Sharkey@dcs.shef.ac.uk)
Date: Fri, 15 Sep 95 16:40:21 +0100
Subject: SPECIAL ISSUE of Connection Science
Message-ID: <9509151540.AA29776@entropy.dcs.shef.ac.uk>

PRELIMINARY CALL FOR PAPERS: Deadline February 14th 1996

************ COMBINING NEURAL NETS ************

A special issue of Connection Science

Papers are sought for this special issue of Connection Science. The aim of this special issue is to examine when, how, and why neural nets should be combined. The reliability of neural nets can be increased through the use of both redundant and modular nets (either trained on the same task under differing conditions, or on different subcomponents of a task). Questions about the exploitation of redundancy and modularity in the combination of nets, or estimators, have both an engineering and a biological relevance, and include the following:

* how best to combine the outputs of several nets.
* quantification of the benefits of combining.
* how best to create redundant nets that generalise differently (e.g. active learning methods)
* how to effectively subdivide a task.
* communication between neural net modules.
* increasing the reliability of nets.
* the use of neural nets for safety critical applications.

Special issue editor: Amanda Sharkey (Sheffield, UK)

Editorial Board:
Leo Breiman (Berkeley, USA)
Nathan Intrator (Brown, USA)
Robert Jacobs (Rochester, USA)
Michael Jordan (MIT, USA)
Paul Munro (Pittsburgh, USA)
Michael Perrone (IBM, USA)
David Wolpert (Santa Fe Institute, USA)

We solicit either theoretical or experimental papers on this topic. Questions and submissions concerning this special issue should be sent by February 14th 1996 to:

Dr Amanda Sharkey,
Department of Computer Science, Regent Court, Portobello Street,
University of Sheffield, Sheffield, S1 4DP, United Kingdom.
Net: amanda at dcs.shef.ac.uk

From william.beaudot at csemne.ch Fri Sep 15 13:41:46 1995
From: william.beaudot at csemne.ch (william.beaudot@csemne.ch)
Date: Fri, 15 Sep 1995 19:41:46 +0200
Subject: Winter Retina Conference '96
Message-ID: <199509151741.TAA13120@rotie.csemne.ch>

The first annual "Winter Retina Conference: Physiology, Computation, and Neuromorphic Engineering for Vision" will be held in Jackson Hole, Wyoming, USA from 16-20 January 1996. The meeting will bring together leaders in the fields of the physiology, computation and engineering of early vision in vertebrates and invertebrates. The meeting is limited to 50 participants with the intention of fostering rich interaction across the fields.
For more info please see our WWW home page at URL: http://shi18.uth.tmc.edu/retcon.htm

The meeting is being organized by: Greg Maguire and Harvey Karten in the USA, and William Beaudot and Andre van Schaik in Switzerland.

------------------------------------------------------------------------------

For those who do not have access to the Internet, here is the text version of the WWW home page for the announcement:

WINTER RETINA CONFERENCE '96
Physiology, Computation, and Neuromorphic Engineering for Vision
16-20 January 1996
Jackson Hole, Wyoming, USA

First Announcement and Call For Abstracts
=========================================

In January 1996, the first annual Winter Retina Conference will be held in Jackson Hole, Wyoming, USA. The meeting will bring together leaders in the field of neural circuitry, computational modeling, and neuromorphic engineering in retinas, including both vertebrates and invertebrates. The purpose of the Winter Retina Conference is to present the latest results in the fundamental aspects of retinal circuitry in a setting conducive to informal and rich interaction between the participants. The meeting is limited to 50 participants and participation is by application and acceptance only. The conference will include daily lectures and demonstrations. SUN Ultra-SPARC and Super-SPARC workstations will be available for demonstrations.

Conference Site: Teton Village, Jackson Hole, Wyoming
=====================================================

Jackson Hole is a sun-drenched, mountain-rimmed valley noted for its natural, rugged beauty, powder snow, and numerous ski slopes. Many other activities besides great downhill skiing are available, including: X-country skiing, ice-skating, sleigh rides, fishing, hunting, hiking, hot spring soaking, and horseback riding, to name a few. Teton Village will be the host site for the conference. Restaurants, nightclubs, and shopping abound here. Yellowstone and Grand Teton National Parks are a short drive away from the slopes. Mountain climbing schools and guides are available in the city.

Conference Hotel:
=================

The Inn at Jackson Hole. A beautiful hotel at the base of the ski slope (yep, ski-in and ski-out) and in the midst of Teton Village. Everything you need is in walking distance of the hotel, including good restaurants and shopping. Standard room (2 beds) - $80.00/night; Deluxe room (kitchenette) - $120.00/night; Loft suites - $150.00/night. Reservations at 800-842-7666 or fax at 307-733-0844. Identify yourself as an attendee of the Winter Retina Conference for these special rates.

Travel:
=======

Jackson Hole is served by American, United, and Delta Airlines. A 5-10% discount, depending on class of service, is available on American Airlines, the official airline of the 1996 Winter Retina Conference. We advise using American Airlines because they provide the only jet service directly to Jackson Hole. To receive a 5-10% discount on American Airlines, please call the American Airlines Meeting Services desk at 1-800-433-1790, identify yourself as an attendee of the Winter Retina Conference, and use American Airlines "star number S-0116HZ" to book your flight.

FEE: Upon Acceptance a Registration Fee of $100 US
==================================================

Housing and travel information will be sent following acceptance.

Application: Send a one page abstract and a short c.v.
to either:
=================================================================

Winter Retina Conference, Greg Maguire, Department of Neurobiology & Anatomy, University of Texas, 6420 Lamar Fleming, Houston, TX 77030, USA

or

Winter Retina Conference, William Beaudot, Centre Suisse d'Electronique et de Microtechnique SA, IC & Systems Research, Dept. of Bio-Inspired Advanced Research, Maladiere 71, Case postale 41, CH-2007 Neuchatel, Switzerland

or

Winter Retina Conference, Andre van Schaik, EPFL-MANTRA Centre for Neuro-Mimetic Systems, INJ-035 Ecublens, CH-1015 Lausanne, Switzerland

Information and abstracts may also be sent via email: gmaguire at gsbs.gs.uth.tmc.edu

Confirmed Attendees
===================

Xavier Arreguit, CSEM, Switzerland; William Beaudot, CSEM, Switzerland; Horacio Cantiello, Harvard; Nicolas Franceschini, CNRS, Marseille; Jeanny Herault, Grenoble, France; Harvey Karten, UCSD; Kent Keyser, UCSD; Greg Maguire, Texas; Misha Mahowald, Oxford; Steve Massey, Texas; Haluk Ogmen, Houston; Peter Sterling, Pennsylvania; Andre van Schaik, Lausanne, Switzerland; Frank Werblin, Berkeley

Organizing Committee:
=====================

William Beaudot, CSEM, Switzerland
Harvey Karten, UCSD, USA
Greg Maguire, Texas, USA
Andre van Schaik, EPFL, Switzerland

Registration Form
=================

A receipt, program, and information package will be mailed to participants following registration payment. Please include a check for $100 (USA) made payable to: Winter Retina Conference. The fee includes participation in the daily meetings, a social event, and an official t-shirt.

Send form and payment by snail mail to:
---------------------------------------

Greg Maguire
Sensory Sciences Center
University of Texas
6420 Lamar Fleming
Houston, TX 77030 USA

Name:
-----
Institution:
------------
Address:
--------
Email:
------
Telephone:
----------
Fax:
----

From jagota at ponder.csci.unt.edu Fri Sep 15 19:33:34 1995
From: jagota at ponder.csci.unt.edu (Jagota Arun Kumar)
Date: Fri, 15 Sep 95 18:33:34 -0500
Subject: Abstract: NN Optimization on Compressible Graphs
Message-ID: <9509152333.AA01789@ponder>

Dear Connectionists:

I announce a cleaner version of a paper I announced back in 1992. It is essentially unchanged in other respects.

------------------------------------------------------------------------

Performance of Neural Network Algorithms for Maximum Clique on Highly Compressible Graphs

Arun K. Jagota and Kenneth W. Regan

The problem of finding the size of the largest clique in an undirected graph is NP-hard, even to approximate well. Simple algorithms nevertheless work quite well on random graphs. It is felt by many, however, that the uniform distribution u(n) does not accurately reflect the nature of instances that come up in practice. It is argued that when the actual distribution is unknown, it is more appropriate to suppose that instances come from the Solomonoff-Levin or universal distribution m(x) instead, which assigns higher weight to instances with shorter descriptions. Because m(x) is neither computable nor samplable, we employ a realistic analogue q(x) which lends itself to efficient empirical testing. We experimentally evaluate how well certain neural network algorithms for Maximum Clique perform on graphs drawn from q(x), as compared to those drawn from u(n). The experimental results are as follows. All nine algorithms we evaluated performed roughly equally well on u(n), whereas three of them---the simplest ones---performed markedly worse than the other six on q(x).
Our results suggest that q(x), while postulated as a more realistic distribution to test the performance of algorithms than u(n), is also one that discriminates their performance better. Our q(x) sampler can be used to generate compressible instances of any discrete problem.

------------------------------------------------------------------------

Send requests by e-mail to jagota at cs.unt.edu

I use this mechanism to get some indication of the amount of interest, if any, there is in this type of work.

Arun Jagota

From pfbaldi at cco.caltech.edu Tue Sep 19 11:33:26 1995
From: pfbaldi at cco.caltech.edu (Pierre Baldi)
Date: Tue, 19 Sep 1995 08:33:26 -0700 (PDT)
Subject: Tal Grossman
Message-ID:

Tal Grossman tragically died in a car accident on August 1st. Tal was in the Complex Systems Group at Los Alamos. He was very active in the area of machine learning and computational molecular biology. He gave a presentation at one of the NIPS workshops last year. This year he was trying to organize the same workshop himself. He is survived by his wife and children.

Anyone who knew Tal, and feels the need to, is welcome to contact either his wife:

Dr. Ramit Mehr-Grossman
ramit at t10.lanl.gov

or his sponsor at Los Alamos:

Dr. Alan Lapedes
asl at t13.lanl.gov

Pierre Baldi

From igor at c3serve.c3.lanl.gov Tue Sep 19 19:48:30 1995
From: igor at c3serve.c3.lanl.gov (Igor Zlokarnik)
Date: Tue, 19 Sep 1995 17:48:30 -0600
Subject: postdoctoral positions available
Message-ID: <199509192348.RAA16530@c3serve.c3.lanl.gov>

POSTDOCTORAL POSITIONS IN STATISTICAL ANALYSIS AVAILABLE

Los Alamos National Laboratory
Los Alamos, NM 87545

Postdoctoral positions are available to participate in the development of appropriate methods for analysing large databases such as medical payment records and related databases for the purpose of detecting and preventing fraud, waste and abuse. This research includes, but is not limited to, the evaluation and modification of existing methods, such as neural networks, genetic algorithms, fuzzy logic, n-grams, multivariate analysis, factor analysis, and multidimensional scaling. It may also involve the implementation of database interfaces.

Los Alamos National Laboratory provides excellent opportunities for advanced research. The Laboratory operates the world's largest scientific computing facility. A major strength of Los Alamos is the interdisciplinary nature of much of its research. Scientists in one field may draw on research and techniques developed for quite a different, seemingly unrelated, area. Your immediate team colleagues will be working on such diverse research areas as automatic speech recognition, virtual reality, financial analysis, etc.

Appointments are available for applicants who have received a doctoral degree in the past three years or will have completed all PhD requirements by date of hire. Positions are for 2 years and are renewable for a third year. Salaries range from 40k to 45k per annum depending on the number of years since the PhD was earned. Los Alamos National Laboratory is an equal opportunity/affirmative action employer. It is operated for the Department of Energy by the University of California.

Applicants must submit a resume including a list of publications, a statement of research interests, and three letters of recommendation to:

Dr. George Papcun
Los Alamos National Laboratory
CIC-3, MS B256
Los Alamos, NM 87545
phone: (505) 667-9800
e-mail: gjp at lanl.gov

by no later than 15 November 1995. Preliminary e-mail inquiries are encouraged.
From asl at santafe.edu Wed Sep 20 00:06:14 1995
From: asl at santafe.edu (Alan Lapedes)
Date: Tue, 19 Sep 95 22:06:14 MDT
Subject: Tal Grossman
Message-ID: <9509200406.AA18984@sfi.santafe.edu>

It is with deepest regret that we have to announce that Tal Grossman, a postdoctoral fellow in the Complex Systems Group at Los Alamos, was killed August 1, 1995 in a car accident while on a family vacation in Arizona. His wife, Ramit (a postdoctoral fellow in the Theoretical Biology Group at Los Alamos) and the children have recovered from minor injuries and are all right. Tal was very active in neural networks and computational biology and had a very promising career. The funeral was held in Israel. A memorial service will be held in Los Alamos Wed Sept 20, 1995.

If they wish, friends and colleagues of the Grossmans may contact either Ramit, or Alan Lapedes (Tal's postdoctoral supervisor) at:

ramit at t10.lanl.gov
asl at t13.lanl.gov

From cia at kamo.riken.go.jp Wed Sep 20 10:40:22 1995
From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp)
Date: Wed, 20 Sep 95 23:40:22 +0900
Subject: Blind Separation of source (abstracts)
Message-ID: <9509201440.AA15387@kamo.riken.go.jp>

Blind Signal Processing is an emerging area in adaptive signal processing and neural networks. It originated in France in the late 1980s. Below please find an advance program of a special invited session devoted to blind separation of sources and its applications at the 1995 INTERNATIONAL SYMPOSIUM ON NONLINEAR THEORY AND ITS APPLICATIONS, NOLTA'95, in Las Vegas. Any comments will be highly appreciated, especially concerning the relation of this approach to brain information processing and to image and speech enhancement, filtering, and noise reduction.

Andrzej Cichocki,
Head of Laboratory for Artificial Brain Systems,
Frontier Research Program RIKEN,
Institute of Physical and Chemical Research,
Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN
E-mail: cia at kamo.riken.go.jp, FAX (+81) 048 462 4633.
URL: http://zoo.riken.go.jp/bip.html

===========================================================================

NOLTA'95, 1995 INTERNATIONAL SYMPOSIUM ON NONLINEAR THEORY AND ITS APPLICATIONS
Caesars Palace, LAS VEGAS, Dec. 10-14, 1995

Program for Special Invited Session on "BLIND SEPARATION OF SOURCES - Brain Information Processing"

Organizer and chair: Dr. A. Cichocki
Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN

Advanced Program:

1. Prof. Christian JUTTEN, Laboratory TIRF, INPG, Grenoble, FRANCE
"Separation of Sources: Blind or Unsupervised?"

Abstract: Basically, methods for separation of sources are referred to as BLIND methods. However, adaptive algorithms for source separation emphasize only the UNSUPERVISED aspects of the learning. In this talk, we propose a selected review of recent works to show how A PRIORI KNOWLEDGE of the sources or of the mixtures can simplify the algorithms and improve performance.

2. Prof. Jean-Francois CARDOSO, Ecole Nationale Superieure des Telecommunications, Telecom Paris, FRANCE
"The Invariant Approach to Source Separation"

Abstract: The `invariant approach' to source separation is based on the recognition that the unknown parameter in a source mixture is the mixing matrix, hence it belongs to a multiplicative group. In this contribution, we show that this simple fact can be exploited to build source separation algorithms behaving uniformly well across mixing matrices. This is achieved if two sufficient conditions are met.
+ First, contrast functions (or estimating equations) used to identify the mixture should be designed in such a way that source separation is achieved when they are optimized (or solved) **without constraints** (such as normalization, etc). Examples of such contrast functions will be given, some of them being simple variants of classic contrast functions. This requirement is sufficient to guarantee uniform performance of the resulting batch algorithms.

+ Second, in the case of adaptive algorithms, uniform performance has a more extensive meaning: not only the residual error but also the convergence is important. Again, the multiplicative nature of the parameter calls for a special form of the learning rule, namely it suggests a `multiplicative update'. This approach results in adaptive source separation algorithms enjoying uniform performance: convergence speed, residual error, stable points, etc. do not depend on the mixing matrix. In addition these algorithms show a very simple (and parallelizable) structure.

The paper includes analytical results based on asymptotic performance analysis that quantify the behavior of both batch and adaptive equivariant source separators. In particular, these results allow one to determine, given the source distribution, the optimal nonlinearities to be used in the learning rule.

3. Dr. Jie ZHU, Prof. Xi-Ren CAO, and Prof. Ruey-Wen LIU, The Hong Kong University of Science and Technology, Kowloon, Hong Kong; The University of Notre Dame, Notre Dame, IN 46556, U.S.A.
"Blind Source Separation Based on Output Independence - Theory and Implementation"

Abstract: The paper presents some recent results on the theory and implementation techniques of blind source separation. The approach is based on the independence property of the outputs of a filter. In the theory part, we identify and study two major issues in the blind source separation problem: separability and separation principles. We show that separability is an intrinsic property of the measured signals and can be described by the concept of $m$-row decomposability introduced in this paper, and that the separation principles can be developed by using the structure characterization theory of random variables. In particular, we show that these principles can be derived concisely and intuitively by applying the Darmois-Skitovich theorem, which is well-known in statistical inference theory and psychology. In the implementation part, we show that if at most one of the source signals has a zero third (or fourth) order cumulant, then these signals can be separated by a filter whose parameters can be determined by a system of nonlinear equations using only third (or fourth) order cumulants of the measured signals. This result covers some previous results as special cases.

4. Dr. Jie HUANG, Prof. Noboru OHNISHI and Dr. Noboru SUGIE; Bio-Mimetic Control Research Center, The Institute of Physical and Chemical Research (RIKEN), Nagoya, JAPAN
"Sound Separation Based on Perceptual Grouping of Sound Segments"

Abstract: We would like to propose a sound separation method which combines spatial cues (source direction) and structural cues (continuity and harmony). Sound separation is important in various scientific fields. There are mainly two different approaches to achieve this goal. One is based on blind estimation of inverse transfer functions from multiple sources to multiple receivers (microphones). The other is based on grouping sound segments in the time-frequency domain.
Our approach is based on grouping of sound segments. However, we use multiple microphones to obtain the spatial information. This approach is strongly inspired by the precedence effect and the cocktail party effect of the human auditory system. The precedence effect suggests to us a way of coping with echoes in reverberant environments. The cocktail party effect suggests the use of spatial cues for sound separation. Psychological factors of auditory stream integration and segregation, such as continuity and harmony, are used as structural cues. This is realized by a continuity enhancement filter and a harmonic histogram to supplement the spatial grouping of segments. The method was demonstrated with real human speech recorded in an anechoic chamber and in a normal room. The experiments have shown that the method was effective in separating sounds in reverberant environments.

5. Dr. Kiyotoshi MATSUOKA and Dr. Mitsuru KAWAMOTO, Department of Control Engineering, Kyushu Institute of Technology, 1-1, Tobata, Kitakyushu, 804 Japan
"Blind Signal Separation Based on a Mutual Information Criterion"

Abstract: This paper deals with the problem of the so-called blind separation of sources. The problem is to recover a set of source signals from their linear mixtures observed by the same number of sensors, in the absence of any particular information about the transfer function that couples the sources and the sensors. The only a priori knowledge is, basically, the fact that the source signals are statistically mutually independent. Such a task arises in noise canceling of sound signals, image enhancement, medical measurement, etc. If the observed signals are stationary, Gaussian, and white, then blind separation is essentially impossible. Conversely, blind separation can be realized by exploiting some information on nonstationary, non-Gaussian, or nonwhite characteristics of the observed signals, if any. Most of the conventional methods stipulate that the source signals are non-Gaussian, and use some high-order moments or cumulants. However, it is sometimes difficult to accurately estimate non-Gaussian statistics because random signals in practice are usually not so far from Gaussian. In this paper we propose an approach that utilizes only second-order moments of the observed signals. We consider two cases: (i) the source signals are nonstationary; (ii) the source signals have some temporal correlations, i.e., they are nonwhite signals. To realize signal separation we consider a recovering filter which takes the observed signals as input and provides an estimate of the source signals as output. The parameters of the filter are determined such that all the outputs of the filter are mutually statistically independent. As a criterion of statistical independence we adopt the well-known mutual information between the outputs. The adaptation rule for the filter's parameters is derived from the steepest descent minimization of the criterion function. A remarkable feature of our approach is that it is able to treat time-convolutive mixtures of a general number of (stationary) source signals. In contrast, most of the conventional studies on blind separation consider only the static mixing of the source signals. Namely, any delay in the mixing process is not taken into account. Those methods are therefore useless for many of the important applications of blind separation, e.g., separation of sound signals.
Although there are some studies that deal with convolutive mixtures involving some delay, all of them consider only the case of two sources (2 x 2 channels) and do not seem extendible to the case of more than two sources. In our approach, also for convolutive mixtures, the adaptation rule is easily obtained by defining the information criterion in the frequency domain.

6. Dr. Eric MOREAU and Prof. Odile MACCHI, Laboratoire des Signaux et Systemes, CNRS-ESE, FRANCE
"Adaptive Unsupervised Separation of Discrete Sources"

ABSTRACT: We consider the unsupervised source separation problem where observations are captured at the output of an unknown linear mixture of random signals called sources. The sources are assumed discrete, zero-mean and statistically independent. In this paper we consider the problem with a prewhitening stage. The a priori knowledge that the sources are discrete with known levels is used in order to improve performance. A novel contrast, which combines two parts, is proved. The first part forces statistical independence of the outputs, while the second one forces the outputs to have the known distribution. Then a stochastic gradient adaptive algorithm is proposed. Its performance is illustrated by computer simulations, which clearly show that the novel contrast achieves much better performance.

7. Dr. Adel BELOUCHRANI and Prof. Jean-Francois CARDOSO, Telecom Paris, CNRS URA 820, GdR TdSI, 46 rue Barrault, 75634 Paris Cedex 13, FRANCE
"Maximum Likelihood Source Separation by the Expectation-Maximization Technique: Deterministic and Stochastic Implementation"

Abstract: This paper deals with the source separation problem, which consists in the separation of a mixture of independent sources without a priori knowledge about the mixing matrix. When the source distributions are known in advance, this problem can be solved via the maximum likelihood (ML) approach by maximizing the data likelihood function using (i) the Expectation-Maximization (EM) algorithm and (ii) a stochastic version of it, the SEM, which is efficiently implemented by resorting to the Metropolis sampler. Two important features of our algorithm are that (a) the covariance of the additive noise can be estimated as a regular parameter, and (b) in the case of discrete sources, it is possible to separate more sources than sensors. The effectiveness of this method is illustrated by numerical simulations.

8. Prof. L. TONG and Dr. X. CHEN, University of Connecticut, USA
"Blind Separation of Dynamically Mixed Multiple Sources and its Applications in CDMA Systems"

Abstract: In this paper, we consider the problem of separating dynamically mixed multiple sources. Specifically, we address the problem of recovering the sources of a multiple-input multiple-output system. Two issues will be addressed: (i) Source Blind Separability; (ii) Blind Signal Separation Algorithms. Applications of the proposed approach to code-division multiple-access schemes in wireless communication are presented.

9. Prof. Shun-ichi AMARI, Prof. Andrzej CICHOCKI and Dr. Howard Hua YANG, Frontier Research Program RIKEN (Institute of Physical and Chemical Research), Wako-shi, JAPAN
"Multi-layer Neural Networks with Local Learning Rules for Blind Separation of Sources"

Abstract: In this paper we will propose multi-layer neural network models (feedforward and recurrent) with novel, local, adaptive, unsupervised learning rules which make it possible not only to separate independent sources on-line but also to determine the number of active sources.
In other words, we assume that the number of sources and their waveforms are completely unknown. Moreover, the separation problem can be very ill-conditioned and/or badly scaled. In fact, the performance of the learning algorithm is independent of the scaling factors and of the condition number of the mixing matrix. A universal (flexible) computer simulation program will be presented which enables a comparison of the validity and performance of various recently developed adaptive on-line learning algorithms.

10. Dr. L. De Lathauwer and Dr. P. Comon; E.E. Dept. - ESAT - SISTA, K.U.Leuven, BELGIUM; CNRS - I3S, Sophia Antipolis, Valbonne, FRANCE
"Higher-Order Power Method"

Abstract: The scientific boom in the field of higher-order statistics involves an increasing need for numerical tools in multi-linear algebra: higher-order moments and cumulants of multivariate stochastic processes are higher-order tensors. We consider the problem of generalizing the computation of the best rank-R approximation of a given matrix to the computation of the best rank-(R1,R2,...,RN) approximation of an Nth-order tensor. We mainly focus on the best rank-1 approximation of third-order tensors. It is shown that this problem leads in a very natural way to a higher-order equivalent of the well-known power method for the computation of the eigendecomposition of matrices. It can be proved that each power iteration step decreases the least-squares error between the initial tensor and the lower-rank estimate. In the tensor case several stationary points might exist, each with a different domain of attraction. Surprisingly, the power iteration for a super-symmetric tensor can produce intermediate results that are unsymmetric. Imposing symmetry on the algorithm does not necessarily improve the convergence speed; the symmetric power iteration can even fail to converge. In the matrix case truncation of the singular value decomposition (SVD) yields the best rank-R approximation; in the tensor case it can only be proved that truncation of the higher-order singular value decomposition (HOSVD) yields a fairly good approximation. All our simulations show that the HOSVD guess belongs to the attraction region corresponding to the optimal fit.

---------------------------------------------------------------------------

From priel at eder.cc.biu.ac.il Fri Sep 22 07:48:32 1995
From: priel at eder.cc.biu.ac.il (priel@eder.cc.biu.ac.il)
Date: Fri, 22 Sep 1995 13:48:32 +0200 (WET)
Subject: paper on analytical study of time-series generation ...
Message-ID: <9509221148.AA18392@eder.cc.biu.ac.il>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/kanter.time_series.ps.Z

The file kanter.time_series.ps.Z is now available for copying from the Neuroprose repository. It has been accepted for publication in PRL. (4 pages)

Analytical Study of Time Series Generation by Feed-Forward Networks
----------------------------------------------

I. Kanter, D. A. Kessler, A. Priel and E. Eisenstein
Minerva Center and Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel

ABSTRACT: Generation of time series is studied analytically for a generalization of the Bit-Generator to continuous activation functions and multi-layer architectures.
---------------------------------------------------------------------------

From priel at eder.cc.biu.ac.il Fri Sep 22 07:48:32 1995
From: priel at eder.cc.biu.ac.il (priel@eder.cc.biu.ac.il)
Date: Fri, 22 Sep 1995 13:48:32 +0200 (WET)
Subject: paper on analytical study of time-series generation ...
Message-ID: <9509221148.AA18392@eder.cc.biu.ac.il>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/kanter.time_series.ps.Z

The file kanter.time_series.ps.Z is now available for copying from the Neuroprose repository. It has been accepted for publication in Physical Review Letters (PRL). (4 pages)

Analytical Study of Time Series Generation by Feed-Forward Networks
----------------------------------------------
I. Kanter, D. A. Kessler, A. Priel and E. Eisenstein
Minerva Center and Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel

ABSTRACT: Generation of time series is studied analytically for a generalization of the Bit-Generator to continuous activation functions and multi-layer architectures. At asymptotically large times the network exhibits the following characteristic features: (a) flows can be periodic or quasi-periodic depending on the phase of the weights, (b) the dimension of the attractor is a function of the gain of the activation function and the number of hidden units, (c) a phase shift in the weights results in a frequency shift in the output, so that the system operates as a phase detector.

*** NO HARD COPIES ARE AVAILABLE ***
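Property (c) is easy to probe numerically. The sketch below is our own toy experiment in Python with NumPy, not the paper's model in detail (window length, weight profile and gain are arbitrary illustrative choices); it iterates a tanh perceptron on a sliding window of its own past outputs and reports the dominant output frequency for two different weight phases.

import numpy as np

np.random.seed(0)

def generate(phase, N=20, gain=4.0, T=4096):
    j = np.arange(1, N + 1)
    w = np.cos(2.0 * np.pi * j / N + phase) / N   # the "phase of the weights"
    x = list(np.random.uniform(-0.1, 0.1, N))     # random initial window
    for _ in range(T):
        x.append(np.tanh(gain * np.dot(w, x[-N:][::-1])))
    return np.asarray(x[-2048:])                  # keep the asymptotic part

for phase in (0.0, 0.3):
    seq = generate(phase)
    spec = np.abs(np.fft.rfft(seq - seq.mean()))
    print(f'weight phase {phase:.1f} -> dominant output frequency '
          f'{np.argmax(spec) / len(seq):.4f} cycles/step')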
From cnna96 at cnm.us.es Fri Sep 22 06:39:04 1995
From: cnna96 at cnm.us.es (4th Workshop on CNN's and Applications)
Date: Fri, 22 Sep 95 12:39:04 +0200
Subject: WWW page on CNNA'96
Message-ID: <9509221039.AA18897@cnm1.cnm.us.es>

A World Wide Web page has been established for up-to-date information on CNNA96 (see below).

http://www.cica.es/~cnm/cnna96.html

The preliminary call for papers (already published in this list) follows.

------------------------------------------------------------------------------
PRELIMINARY CALL FOR PAPERS

4th IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND APPLICATIONS (CNNA-96)

June 24-26, 1996 (Jointly Organized with NDES-96)

Escuela Superior de Ingenieros de Sevilla
Centro Nacional de Microelectronica
Sevilla, Spain
------------------------------------------------------------------------------

ORGANIZING COMMITTEE:
Prof. J.L. Huertas (Chair)
Prof. A. Rodriguez-Vazquez
Prof. R. Dominguez-Castro

SECRETARY: Dr. S. Espejo
TECHNICAL PROGRAM: Prof. A. Rodriguez-Vazquez
PROCEEDINGS: Prof. R. Dominguez-Castro

SCIENTIFIC COMMITTEE:
Prof. N.N. Aizemberg, Univ. of Uzhgorod, Ukraine
Prof. L.O. Chua, Univ. of Cal. at Berkeley, U.S.A.
Prof. V. Cimagalli, Univ. of Rome, Italy
Prof. T.G. Clarkson, King's College London, U.K.
Prof. A.S. Dmitriev, Academy of Sciences, Russia
Prof. M. Hasler, EPFL, Switzerland
Prof. J. Herault, Nat. Inst. of Tech., France
Prof. J.L. Huertas, Nat. Microelectronics Center, Spain
Prof. S. Jankowski, Tech. Univ. of Warsaw, Poland
Prof. J. Nossek, Tech. Univ. Munich, Germany
Prof. V. Porra, Tech. Univ. of Helsinki, Finland
Prof. T. Roska, MTA-SZTAKI, Hungary
Prof. M. Tanaka, Sophia Univ., Japan
Prof. J. Vandewalle, Kath. Univ. Leuven, Belgium

------------------------------------------------------------------------------
GENERAL SCOPE OF THE WORKSHOP AND VENUE

The CNNA series of workshops aims to provide a biennial international forum to present and discuss recent advances in Cellular Neural Networks. Following the successful conferences in Budapest (1990), Munich (1992), and Rome (1994), the fourth workshop will be held in Seville during 1996, organized by the National Microelectronic Center and the School of Engineering of Seville.

Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2500-year history with modern infrastructures in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport serving several daily direct international flights, as well as many connections to international flights via Madrid.

------------------------------------------------------------------------------
PAPERS SUBMISSION

Papers on all aspects of Cellular Neural Networks are welcome. Topics of interest include, but are not limited to:

- Basic Theory
- Applications
- Learning
- Software Implementations and CNN Simulators
- CNN Computers
- CNN Chips
- CNN System Development and Testing

Prospective authors are invited to submit 4-page summaries of their papers to the Conference Secretariat. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in an IEEE-sponsored Proceedings.

------------------------------------------------------------------------------
AUTHOR'S SCHEDULE

Submission of summaries: ................ January 31, 1996
Notification of acceptance: ............. March 31, 1996
Submission of camera-ready papers: ...... May 15, 1996

------------------------------------------------------------------------------
PRELIMINARY REGISTRATION FORM

Fourth IEEE Int. Workshop on Cellular Neural Networks and their Applications CNNA'96
Sevilla, Spain, June 24-26, 1996

I wish to attend the workshop. Please send Program and registration form when available.

Name: ................______________________________
Mailing address: .....______________________________
Phone: ...............______________________________
Fax: .................______________________________
E-mail: ..............______________________________

Please complete and return to:

CNNA'96 Secretariat, Department of Analog Circuit Design, Centro Nacional de Microelectronica
Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN
FAX: +34-5-4231832
Phone: +34-5-4239923
E-mail: cnna96 at cnm.us.es

------------------------------------------------------------------------------

From cia at kamo.riken.go.jp Sat Sep 23 01:04:14 1995
From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp)
Date: Sat, 23 Sep 95 14:04:14 +0900
Subject: Nolta-95 - Call for participation
Message-ID: <9509230504.AA16627@kamo.riken.go.jp>

Many researchers asked me to send more information about Nolta 95 in Las Vegas. Enclosed, please find the "Call for participation".

Andrzej Cichocki
Laboratory for Artificial Brain Systems,
Frontier Research Program RIKEN,
Institute of Physical and Chemical Research,
Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN
E-mail: cia at kamo.riken.go.jp, FAX (+81) 048 462 4633.
URL: http://zoo.riken.go.jp/bip.html

-----------------------------------------------------------------------
CALL FOR PARTICIPATION

1995 International Symposium on Nonlinear Theory and its Applications (NOLTA'95)
Caesars Palace, Las Vegas, Nevada, U.S.A.
December 10 - 14, 1995
-----------------------------------------------------------------------

The 1995 International Symposium on Nonlinear Theory and its Applications (NOLTA'95) will be held at Caesars Palace, Las Vegas, Nevada, U.S.A. on Dec. 10-14, 1995. The conference is open to all. About 300 papers describing original work in all aspects of Nonlinear Theory and its Applications will be presented:

[Plenary Talk]
1. CHUA, L.O. (U.C. Berkeley): Nonlinear Waves, Patterns and Spatio-Temporal Chaos
2. HASEGAWA, Akira (Osaka Univ.): Recent Progress of Optical Soliton Research
3. WILLSON, Jr., A.N. (U.C.L.A.): Some Aspects of Nonlinear Transistor Circuit Theory

[Special (invited) Sessions]
1. Application of Nonlinear Analysis to Communication
2. Blind Separation of Sources - Brain Information Processing -
3. Coupled Chaotic Systems - Modeling Brain Functions and Information Processing
4. Fundamental Advances in the Theory of Networks and Systems
5. Hardware Implementation of Nonlinear Dynamical Systems and Its Applications
6. Homotopy Continuation for Nonlinear Analysis
7. Information Processing and Complexity
8. Nonlinear Dynamics and Neural Coding
9. The CNN Nonlinear Dynamic Visual Microprocessor - New Chips, Applications, and Biological Relevance -

[Regular Sessions]
1. Bifurcation
2. Biocybernetics and Evolution Systems
3. Cellular Neural Networks
4. Circuit Simulation and Modeling
5. Chaos and Spatial Chaos
6. Chaos Control
7. Chaotic Series Analysis
8. Chaotic Systems
9. Chua's Circuits
10. Communication
11. Electronic Circuits
12. Fractals
13. Fuzzy
14. Image and Signal Processing
15. Information Dynamics
16. Neural Networks (Applications I)
17. Neural Networks (Applications II)
18. Neural Networks (Applications III)
19. Neural Networks (Artificial Systems)
20. Neural Networks (Learning and Capacity)
21. Nonlinear Partial Differential Equations
22. Nonlinear Physics
23. Numerical Analysis and Validation
24. Numerical Methods and Nonlinear Circuits
25. Oscillation
26. Real Nonlinear Control
27. Synchronization
28. Theory of Nonlinear Control
29. Time Series Analysis and Economics
30. Transmission Line

-----------------------------------------------------------------------
Registration

Before Oct. 31, the registration fee is 40,000 Japanese Yen (US$400.00) for researchers and 20,000 Yen (US$200.00) for full-time students. After Nov. 1, the fee is 50,000 Yen (US$500.00) for researchers and 25,000 Yen (US$250.00) for students. The registration fee includes attendance at all sessions, tea & coffee breaks, a copy of the symposium proceedings, and a banquet ticket.

-----------------------------------------------------------------------
Accommodation

Guest rooms at Caesars Palace during NOLTA'95 will be provided at the rate of US$109.00 for single or double occupancy when reserved through NOLTA'95 Reservation Cards. For further information, please contact the NOLTA'95 secretariat (nolta at sat.t.u-tokyo.ac.jp).

-----------------------------------------------------------------------
If you can use a WWW browser, you can find further information at http://hawk.ise.chuo-u.ac.jp/NOLTA . If not, please contact the NOLTA'95 secretariat:

NOLTA'95 secretariat
c/o Oishi Lab., Dept. of Information and Computer Sciences
School of Science and Engineering, Waseda University
3-4-1, Okubo, Shinjuku-ku, Tokyo 169, JAPAN
Telefax: +81-3-5272-5742
e-mail: nolta at sat.t.u-tokyo.ac.jp

-----------------------------------------------------------------------
Organizer: Research Society of Nonlinear Theory and its Applications, IEICE

In cooperation with:
IEEE Neural Networks Council
International Neural Network Society
Asian Pacific Neural Network Assembly
IEEE CAS Technical Committee on Nonlinear Circuits and Systems
Technical Group of Nonlinear Problems, IEICE
Technical Committee of Electronic Circuits, IEEJ

-----------------------------------------------------------------------
NOLTA'95 Symposium Committee

HONORARY CHAIRS: Kazuo Horiuchi (Waseda Univ.), Masao Iri (Chuo Univ.)
CO-CHAIRS: Shun-ichi Amari (Univ. of Tokyo), Shinsaku Mori (Keio Univ.), Allan N. Willson, Jr. (U.C.L.A.)
TECHNICAL PROGRAM CHAIR: Shun-ichi Amari (Univ. of Tokyo)
LOCAL ARRANGEMENT CHAIR: Allan N. Willson, Jr., Dept. of Electrical Engr., University of Cal., Los Angeles CA 90024, U.S.A. Telefax: +1-310-206-4061, e-mail: willson at epsilon.icsl.ucla.edu
PUBLICITY CHAIR: Shinsaku Mori (Keio Univ.)
PUBLICATION: Toshimichi Saito (Hosei Univ.)
SECRETARIES:
Shin'ichi Oishi (Waseda Univ.) oishi at oishi.info.waseda.ac.jp
Toshimichi Saito (Hosei Univ.) saito at toshi.ee.hosei.ac.jp
Mitsunori Makino (Chuo Univ.) makino at ise.chuo-u.ac.jp
-----------------------------------------------------------------------

From cia at kamo.riken.go.jp Sat Sep 23 04:03:23 1995
From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp)
Date: Sat, 23 Sep 95 17:03:23 +0900
Subject: Postdoctoral positions available in JAPAN
Message-ID: <9509230803.AA16557@kamo.riken.go.jp>

Postdoctoral research positions are available in the Neural Information Processing Frontier Research Program at RIKEN.

Applications are invited for postdoctoral positions to study artificial neural systems in the Laboratory for ARTIFICIAL BRAIN SYSTEMS at RIKEN (Institute of Physical and Chemical Research, Wako City, Japan). We are seeking highly motivated postdoctoral scientists working in the area of neural networks, with a strong background in nonlinear and adaptive signal processing and/or statistical computation, speech and image processing, mathematics and computer science. Salaries range between 4-6 million Yen (40k - 60k US$) per annum, depending on experience, achievements and the number of years since the PhD was earned.

The Institute of Physical and Chemical Research (RIKEN) has started a new eight-year Frontier Research Program on Neural Information Processing, which began in October 1994. The Program includes three research laboratories (Neural Modeling, Neural Information Representations and Artificial Brain Systems), each consisting of one research leader and several researchers. We will study fundamental principles underlying higher-order brain function from mathematical, information-theoretic and systems-theoretic points of view. The three laboratories cooperate in constructing various models of the brain, mathematically analyzing information principles in the brain, and designing artificial neural systems. We will have close contact with another Frontier Research Program on experimental Neuroscience.

Research positions, available from November 1995 to April 1996 or later, are offered on one-year contracts to researchers and post-doctoral fellows, and are renewable for subsequent years depending on success in the program and expected continuation of funding. The search will continue until the positions are filled.

Applicants must submit a letter of application, a current full curriculum vitae/resume including a list of publications, the names of three referees, and a detailed statement of research interests by e-mail to:

Dr. Andrzej Cichocki
Team leader of Laboratory for ARTIFICIAL BRAIN SYSTEMS
Frontier Research Program RIKEN,
Institute of Physical and Chemical Research,
Hirosawa 2-1, Saitama 351-01, Wako-shi, JAPAN
E-mail: cia at kamo.riken.go.jp, FAX (+81) 048 462 4633.
URL: http://zoo.riken.go.jp/bip.html

RIKEN (Institute of Physical and Chemical Research) is a research institute supported by the Japanese Government. The number of tenured researchers at RIKEN is about 350, and more than 1000 people work there altogether. RIKEN is located in Wako City, Saitama Prefecture, in the suburbs of Tokyo; it is very close to central Tokyo, only 15 minutes by train from Ikebukuro Station (one of the famous downtown areas of Tokyo).

The Frontier Research Program is basic-science oriented, and the goal of the computational Neuroscience group is to elucidate fundamental principles of information processing in the brain by theoretical approaches. The group consists of three teams.
Each researcher may propose his or her own research themes through discussions with a team leader, and should take part in one or two more research projects proposed by a team leader. The official language of the research group is English. There are more than 100 foreigners working at RIKEN.

***************************

Research program of Laboratory for Artificial Brain Systems

1. Objectives

The brain realizes intelligent functions over its complex neural network systems. In order to understand and physically simulate the mechanisms of such complex systems, one important method is to synthesize and design artificial neural systems to see how they work under various environmental conditions. We hope that such an approach will not only make it possible to better understand the principles and mechanisms of the brain, but will also open new perspectives for developing neurocomputers and their applications.

The main objective is the development and investigation of models, architectures (structures) and associated learning algorithms of artificial neural systems. The main emphasis will be given to the development of novel learning algorithms that are biologically justified and computationally efficient (e.g., providing numerical stability and high convergence speed). The second objective is the investigation of some potential and promising applications of artificial neural systems in: information and signal processing, high-speed parallel computing, solving some optimization problems in real time, and classification and pattern recognition problems (e.g. face recognition).

2. Main Topics

(1) Blind deconvolution and separation of sources (the cocktail party problem).
(2) Development and investigation of on-line adaptive unsupervised learning algorithms.
(3) Investigation of recurrent dynamic neural networks with control of chaos.
(4) Investigation of artificial neural systems with variable architectures.
(5) Application of artificial neural systems to high-speed parallel computing, recognition and optimization problems.

3. Main Features

(1) This research is intended to study system-theoretic frameworks for artificial neural networks based on nonlinear adaptive signal processing and dynamic systems/control theory.
(2) The techniques and algorithms developed in this team will be available for collaboration with other teams and individual researchers.
(3) High priority will be given to the development of synthetic neural systems which are biologically plausible but also implementable (realizable) by electronic and/or optical circuits and systems.
(4) Although we intend to investigate mainly models that have possible biological plausibility (resemblance), with our main inspiration taken from neuroscience, we assume that the structures and algorithms investigated for specific applications may be rather loosely associated with real biological nervous systems, or may only be simplified versions of such models.

From tresp at traun.zfe.siemens.de Sat Sep 23 14:03:01 1995
From: tresp at traun.zfe.siemens.de (Volker Tresp)
Date: Sat, 23 Sep 1995 20:03:01 +0200
Subject: Papers available on Missing and Noisy Data in Nonlinear Time-Series Prediction
Message-ID: <9509231803.AA00331@traun.zfe.siemens.de>

The file tresp.miss_time.ps.Z can now be copied from Neuroprose. The paper is 11 pages long. Hardcopies are not available.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/tresp.miss_time.ps.Z

Missing and Noisy Data in Nonlinear Time-Series Prediction

by Volker Tresp and Reimar Hofmann

We discuss the issue of missing and noisy data in nonlinear time-series prediction. We derive fundamental equations both for prediction and for training. Our discussion shows that if measurements are noisy or missing, treating the time series as a static input/output mapping problem (the usual time-delay neural network approach) is suboptimal. We describe approximations of the solutions which are based on stochastic simulations. A special case is $K$-step prediction, in which a one-step predictor is iterated $K$ times. Our solutions provide error bars for prediction with missing or noisy data and for $K$-step prediction. Using the $K$-step iterated logistic map as an example, we show that the proposed solutions are a considerable improvement over simple heuristic solutions. Using our formalism we derive algorithms for training recurrent networks, for control of stochastic systems and for reinforcement learning problems.
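The stochastic-simulation idea is easy to demonstrate for $K$-step prediction. The fragment below is our own toy illustration in Python with NumPy, not the authors' code, and it cheats by using the true logistic map in place of a trained one-step predictor; it compares naive iteration of the predictor from a noisy observation against Monte-Carlo propagation of state samples, which also yields an error bar.

import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)        # logistic map as one-step "predictor"
rng = np.random.default_rng(0)

x0, sigma, K = 0.3, 0.02, 5              # hidden state, noise level, horizon
y0 = x0 + rng.normal(0.0, sigma)         # noisy observation of the state

# simple heuristic: iterate the one-step predictor K times from the observation
x = y0
for _ in range(K):
    x = f(x)
print(f'naive {K}-step prediction: {x:.4f}')

# stochastic simulation: propagate samples of the uncertain state instead
samples = np.clip(y0 + rng.normal(0.0, sigma, 10000), 0.0, 1.0)
for _ in range(K):
    samples = f(samples)
print(f'MC prediction: {samples.mean():.4f} +/- {samples.std():.4f}')

x_true = x0                              # for reference: noise-free iteration
for _ in range(K):
    x_true = f(x_true)
print(f'true value: {x_true:.4f}')

Because the map is chaotic, the spread of the propagated samples (the error bar) grows quickly with $K$, which is exactly the information the naive point prediction hides.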
From ro2m at crab.psy.cmu.edu Sat Sep 23 14:51:28 1995
From: ro2m at crab.psy.cmu.edu (Randall C. O'Reilly)
Date: Sat, 23 Sep 95 14:51:28 EDT
Subject: ANNOUNCING: PDP++ version 1.0
Message-ID: <9509231851.AA01665@crab.psy.cmu.edu.psy.cmu.edu>

ANNOUNCING: The PDP++ Software

Authors: Randall C. O'Reilly, Chadley K. Dawson, and James L. McClelland

The PDP++ software is a new neural-network simulation system written in C++. It represents the next generation of the PDP software released with the McClelland and Rumelhart "Explorations in Parallel Distributed Processing Handbook", MIT Press, 1987. It is easy enough for novice users, but very powerful and flexible for research use. The current version is 1.0, our first non-beta release. It has been extensively tested and should be completely usable.

The software can be obtained by anonymous ftp from:

Anonymous FTP Site: hydra.psy.cmu.edu/pub/pdp++/ *or* unix.hensa.ac.uk/mirrors/pdp++/

For more information, see our web page:

WWW Page: http://www.cs.cmu.edu/Web/Groups/CNBC/PDP++/PDP++.html

There is a 250-page (printed) manual and an HTML version available on-line at the above address.

New Features Since Previous Release (1.0b):
===========================================
o Better support for sub-groups of units within a Layer (including better interface support).
o Fixed and much more flexible 'ReadOldPDP' function for importing environments (pattern files).
o Improved documentation for compiling the software.
o Pre-compiled InterViews libraries for g++ now available (in addition to cfront-based ones).
o Added a bpso++ executable, which allows creation of mixed backprop and self-organizing networks.
o Lots of bug fixes (see the ChangeLog file for details).

Software Features:
==================
o Full Graphical User Interface (GUI) based on the InterViews toolkit. Allows user-selected "look and feel".
o Network Viewer shows network architecture and processing in real-time, allows network to be constructed with simple point-and-click actions.
o Training and testing data can be graphed on-line and network state can be displayed over time numerically or using a wide range of color or size-based graphical representations.
o Environment Viewer shows training patterns using color or size-based graphical representations.
o Flexible object-oriented design allows mix-and-match simulation construction and easy extension by deriving new object types from existing ones.
o Built-in 'CSS' scripting language uses C++ syntax, allows full access to simulation object data and functions. Transition between script code and compiled code is simplified since both are C++. Script has command-line completion, source-level debugger, and provides standard C/C++ library functions and objects.
o Scripts can control processing, generate training and testing patterns, automate routine tasks, etc.
o Scripts can be generated from GUI actions, and the user can create GUI interfaces from script objects to extend and customize the simulation environment.

Supported Algorithms:
=====================
o Feedforward and recurrent error backpropagation. Recurrent BP includes continuous, real-time models, and Almeida-Pineda.
o Constraint satisfaction algorithms and associated learning algorithms including Boltzmann Machine, Hopfield models, mean-field networks (DBM), Interactive Activation and Competition (IAC), and continuous stochastic networks.
o Self-organizing learning including Competitive Learning, Soft Competitive Learning, simple Hebbian, and Self-organizing Maps ("Kohonen Nets").

The Fine Print:
===============
PDP++ is copyrighted and cannot be sold or distributed by anyone other than the copyright holders. However, the full source code is freely available, and the user is granted full permission to modify, copy, and use it. See our web page for details.

The software runs on Unix workstations under X Windows. It requires a minimum of 16 Meg of RAM, and 32 Meg is preferable. It has been developed and tested on Sun Sparcs under SunOS 4.1.3, HP 7xx under HP-UX 9.x, and SGI Irix 5.3. Statically linked binaries are available for these machines. Other machine types will require compiling from the source. Cfront 3.x and g++ 2.6.3 are supported C++ compilers (we specifically were *not* able to compile successfully with the 2.7.0 version of g++, and gave up waiting for 2.7.1).

The GUI in PDP++ is based on the InterViews toolkit, version 3.2a. However, we had to patch it to get it to work. We distribute pre-compiled libraries containing these patches for the above architectures. For architectures other than those above, you will have to apply our patches to InterViews before compiling.

The basic GUI and script technology in PDP++ is based on a type-scanning system called TypeAccess which interfaces with the CSS script language to provide a virtually automatic interface mechanism. While these were developed for PDP++, they can easily be used for any kind of application, and CSS is available as a stand-alone executable for use like Perl or TCL.

The binary-only distribution requires about 54 Meg of disk space, since we have been unable to get shared libraries to work with C++ on the above platforms. Each simulation executable is around 8-12 Meg in size, and there are 4 of these (bp++, cs++, so++, and bpso++), plus the CSS and 'maketa' executables. The compiled source-code distribution takes about 115 Meg (but only around 16 Meg before compiling).

For more information on the details of the software, see our web page.
- Randy

From fritzke at neuroinformatik.ruhr-uni-bochum.de Sun Sep 24 13:41:04 1995
From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke)
Date: Sun, 24 Sep 1995 18:41:04 +0100 (MET)
Subject: paper available: incremental learning of local linear mappings
Message-ID: <9509241741.AA13493@hermes.neuroinformatik.ruhr-uni-bochum.de>

ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/outgoing/fritzke/papers/fritzke.icann95.ps.gz

The following paper is available by http and ftp:

INCREMENTAL LEARNING OF LOCAL LINEAR MAPPINGS
(to appear in the proceedings of ICANN'95, Paris, France)

BERND FRITZKE
Institut f"ur Neuroinformatik, Ruhr-Universit"at Bochum, Germany

A new incremental network model for supervised learning is proposed. The model builds up a structure of units each of which has an associated local linear mapping (LLM). Error information obtained during training is used to determine where to insert new units whose LLMs are interpolated from their neighbors. Simulation results for several classification tasks indicate fast convergence as well as good generalization. The ability of the model to also perform function approximation is demonstrated by an example.

--
Bernd Fritzke * Institut f"ur Neuroinformatik  Tel. +49-234 7007921
Ruhr-Universit"at Bochum * Germany             FAX. +49-234 7094210
WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html

From mclennan at cs.utk.edu Mon Sep 25 18:52:17 1995
From: mclennan at cs.utk.edu (Bruce MacLennan)
Date: Mon, 25 Sep 1995 18:52:17 -0400
Subject: paper available on continuous formal systems
Message-ID: <199509252252.SAA08101@maclennan.cs.utk.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/maclennan.contformalsys.ps.Z

Continuous Formal Systems: A Unifying Model in Language and Cognition

by Bruce MacLennan

(Paper for an IEEE workshop on semiotic modeling.)

The idea of a _calculus_ or _discrete formal system_ is central to traditional models of language, knowledge, logic, cognition and computation, and it has provided a unifying framework for these and other disciplines. Nevertheless, research in psychology, neuroscience, philosophy and computer science has shown the limited ability of this model to account for the flexible, adaptive and creative behavior exhibited by much of the animal kingdom. Promising alternate models replace discrete structures by _structured continua_ and discrete rule-following by _continuous dynamical processes_. However, we believe that progress in these alternate models is retarded by the lack of a unifying theoretical construct analogous to the discrete formal system. In this paper we outline the general characteristics of _continuous formal systems_ (_simulacra_), which we believe will be a unifying element in future models of language, knowledge, logic, cognition and computation.

Bruce MacLennan
Department of Computer Science
The University of Tennessee
Knoxville, TN 37996-1301
PHONE: (615*)974-0994/5067
FAX: (615*)974-4404
EMAIL: maclennan at cs.utk.edu

From emilio at eliza.cc.brandeis.edu Mon Sep 25 10:37:27 1995
From: emilio at eliza.cc.brandeis.edu (Emilio Salinas)
Date: Mon, 25 Sep 95 10:37:27 EDT
Subject: paper available: Transfer of Coded Information from Sensory to Motor Networks
Message-ID: <9509251437.AA18396@eliza.cc.brandeis.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/salinas.trans.ps.Z

The following paper is available in the neuroprose archive. It is 18 pages long; about 1 Mb compressed and about 3 Mb uncompressed.
The paper will appear in the Journal of Neuroscience. Hardcopies will eventually be available, but please request one only if you are really interested and have no means of downloading the file. Address questions, comments or problems to Emilio Salinas: emilio at eliza.cc.brandeis.edu.

Transfer of Coded Information from Sensory to Motor Networks

by Emilio Salinas and L.F. Abbott

ABSTRACT: During sensory-guided motor tasks, information must be transferred from arrays of neurons coding target location to motor networks that generate and control movement. We address two basic questions about this information transfer. First, what mechanisms assure that the different neural representations align properly, so that activity in the sensory network representing target location evokes a motor response generating accurate movement toward the target? Second, coordinate transformations may be needed to put the sensory data into a form appropriate for use by the motor system; for example, in visually guided reaching the location of a target relative to the body is determined by a combination of the position of its image on the retina and the direction of gaze. What assures that the motor network responds to the appropriate combination of sensory inputs corresponding to target position in body- or arm-centered coordinates? To answer these questions, we model a sensory network coding target position and use it to drive a similarly modeled motor network. To determine the actual motor response we use decoding methods that have been developed and verified in experimental work. We derive a general set of conditions on the sensory-to-motor synaptic connections that assure a properly aligned and transformed response. The accuracy of the response for different numbers of coding cells is computed. We show that development of the synaptic weights needed to generate the correct motor response can occur spontaneously through the observation of random movements and correlation-based synaptic modification. No error signal or external teaching is needed during this process. We also discuss nonlinear coordinate transformations and the presence of both shifting and non-shifting receptive fields in sensory/motor systems.
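The population decoding step referred to in this abstract (reading a position estimate out of an array of broadly tuned, noisy units) can be illustrated in a few lines. The sketch below is our own toy example in Python with NumPy, not the authors' model; the tuning curves, the Poisson noise and the center-of-mass estimator are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
prefs = np.linspace(-10.0, 10.0, 100)   # preferred locations of the units
width = 2.0                             # tuning-curve width

def encode(target):
    rates = np.exp(-0.5 * ((target - prefs) / width) ** 2)
    return rng.poisson(20.0 * rates)    # noisy spike counts

def decode(counts):
    # population-vector / center-of-mass estimate of the encoded location
    return np.sum(counts * prefs) / np.sum(counts)

for target in (-3.0, 0.5, 7.2):
    est = np.mean([decode(encode(target)) for _ in range(200)])
    print(f'target {target:+.1f} -> decoded {est:+.2f}')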
From fritzke at neuroinformatik.ruhr-uni-bochum.de Tue Sep 26 13:13:19 1995
From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke)
Date: Tue, 26 Sep 1995 18:13:19 +0100 (MET)
Subject: paper available: Growing Grid ......
Message-ID: <9509261713.AA24334@urda.neuroinformatik.ruhr-uni-bochum.de>

ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/outgoing/fritzke/papers/fritzke.growing_grid.ps.gz

The following paper is available by http and ftp:

GROWING GRID - A SELF-ORGANIZING NETWORK WITH CONSTANT NEIGHBORHOOD RANGE AND ADAPTATION STRENGTH
(to appear in Neural Processing Letters, Vol. 2, No. 5, pp. 1-5, 1995)

BERND FRITZKE
Institut f"ur Neuroinformatik, Ruhr-Universit"at Bochum, Germany

We present a novel self-organizing network which is generated by a growth process. The application range of the model is the same as for Kohonen's feature map: generation of topology-preserving and dimensionality-reducing mappings, e.g., for the purpose of data visualization. The network structure is a rectangular grid which, however, increases its size during self-organization. By inserting complete rows or columns of units the grid may adapt its height/width ratio to the given pattern distribution. Both the neighborhood range used to co-adapt units in the vicinity of the winning unit and the adaptation strength are constant during the growth phase. This makes it possible to let the network grow until an application-specific performance criterion is fulfilled or until a desired network size is reached. A final approximation phase with decaying adaptation strength fine-tunes the network.

--
Bernd Fritzke * Institut f"ur Neuroinformatik  Tel. +49-234 7007921
Ruhr-Universit"at Bochum * Germany             FAX. +49-234 7094210
WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html
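The growth step described in the Growing Grid abstract above can be sketched compactly. The code below is our own simplification in Python with NumPy, not the code accompanying the paper (the insertion heuristic, the training schedule and all constants are illustrative assumptions): a rectangular grid is trained SOM-style with constant neighborhood range and adaptation strength, and complete rows or columns are inserted next to the unit with the largest accumulated error.

import numpy as np

rng = np.random.default_rng(2)

def train_growing_grid(data, max_units=60, lam=200, eps=0.1, nrange=1.0):
    W = rng.random((2, 2, data.shape[1]))       # weight vectors of a 2x2 grid
    while W.shape[0] * W.shape[1] < max_units:
        gy, gx = np.mgrid[0:W.shape[0], 0:W.shape[1]]
        err = np.zeros(W.shape[:2])             # accumulated error per unit
        for _ in range(lam * W.shape[0] * W.shape[1]):
            x = data[rng.integers(len(data))]
            d2 = ((W - x) ** 2).sum(axis=2)
            wy, wx = np.unravel_index(np.argmin(d2), d2.shape)
            err[wy, wx] += d2[wy, wx]
            # constant neighborhood range and constant adaptation strength
            h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2 * nrange ** 2))
            W += eps * h[:, :, None] * (x - W)
        # insert a complete row or column next to the max-error unit, on the
        # side of its most distant grid neighbor (a simplified heuristic)
        wy, wx = np.unravel_index(np.argmax(err), err.shape)
        best, axis, i = -1.0, 0, 0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = wy + dy, wx + dx
            if 0 <= ny < W.shape[0] and 0 <= nx < W.shape[1]:
                dist = np.linalg.norm(W[wy, wx] - W[ny, nx])
                if dist > best:
                    best, axis = dist, (0 if dy else 1)
                    i = min(wy, ny) if dy else min(wx, nx)
        newvals = 0.5 * (np.take(W, i, axis) + np.take(W, i + 1, axis))
        W = np.insert(W, i + 1, newvals, axis=axis)  # interpolated new units
    return W

data = rng.random((2000, 2)) * np.array([3.0, 1.0])  # elongated input region
W = train_growing_grid(data)
print('final grid shape (rows, cols):', W.shape[:2])

On an elongated input region like the one above, the final rows/columns ratio can be inspected to see whether the grid's height/width ratio tracks the shape of the pattern distribution, which is the behavior the abstract describes. (The final fine-tuning phase with decaying adaptation strength is omitted here.)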
From kak at gate.ee.lsu.edu Tue Sep 26 17:11:41 1995
From: kak at gate.ee.lsu.edu (Subhash Kak)
Date: Tue, 26 Sep 95 16:11:41 CDT
Subject: Paper
Message-ID: <9509262111.AA00137@gate.ee.lsu.edu>

The following paper

THE THREE LANGUAGES OF THE BRAIN: QUANTUM, REORGANIZATIONAL, AND ASSOCIATIVE

by Subhash C. Kak

was presented at the 4th Appalachian Conference on Behavioral Neurodynamics, Radford, VA, Sept 22-25, 1995. It is available by anonymous ftp from:

ftp://gate.ee.lsu.edu/pub/kak/dual.ps.Z (35 pages)

Abstract: The paper presents a review of current research on different brain processes related to cognition. The review includes a critique of the Turing test, animal intelligence, and reorganizational issues related to superorganisms. How brain behavior might be understood in terms of continuing reorganization is examined; this is done in terms of the information exchange taking place between the organism and the environment. An algorithm is also provided that can perform associative learning instantaneously.

From luca at idsia.ch Wed Sep 27 12:12:22 1995
From: luca at idsia.ch (Luca Gambardella)
Date: Wed, 27 Sep 95 17:12:22 +0100
Subject: POSTDOC JOB OPENINGS
Message-ID: <9509271612.AA20886@fava.idsia.ch>

--------------------------------------------------------------------
IDSIA: POSTDOC JOB OPENINGS
Statistics, Neural Nets, Forecasting, Planning, Optimization
--------------------------------------------------------------------

IDSIA - Istituto Dalle Molle di Studi sull'Intelligenza Artificiale - is a machine learning research center located in Lugano (Switzerland). IDSIA receives subsidies from both private and public sectors.

NEW PROJECT. Starting in 1996, IDSIA will collaborate with a private company producing software tools for supporting container terminal organization and resource allocation. The goals of the project are:

(1) Forecasting the container terminal's input/output flow, using statistical models, neural nets, etc., and taking expert knowledge into account. There is a huge database describing the terminal activity in previous years. IDSIA has considerable expertise in this area.

(2) Finding optimal container positions in the storage area. Where should an incoming container be placed? This depends on many parameters: final container destination, size and content, the next carrier, and the current occupancy of the container parking area (often, to grab one container, others need to be rearranged). Emphasis is on optimization and planning.

IDSIA's role is to define methodologies for solving problems (1) and (2). The private company's role is to produce a collection of software tools integrated in the existing industrial software environment. The 2-year project is supported by Swiss Government funds (CERS/KWF) and will involve 6 man-years. In the first (second) year, there will be two (one) IDSIA researcher(s) and one (two) company employee(s). IDSIA has two immediate openings (one for 1 year, the other for 2 years).

Required expertise:
- Ph.D. in computer science, statistics, or a similar field
- experience in forecasting (statistics, neural nets, etc.)
- experience in optimization and planning
- experience in industrial applications

Please send PostScript files of your resume, a description of current interests, the names of three referees, and other related information to:

Luca Gambardella
IDSIA
C.so Elvezia 36
6900 Lugano CH
email: luca at idsia.ch

DEADLINE: OCTOBER 31.

From robbie at psych.rochester.edu Wed Sep 27 13:12:33 1995
From: robbie at psych.rochester.edu (Robbie Jacobs)
Date: Wed, 27 Sep 1995 13:12:33 -0400
Subject: Brain & Cog Sci at Rochester
Message-ID: <199509271712.NAA23399@prodigal.psych.rochester.edu>

DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES, UNIVERSITY OF ROCHESTER
-------------------------------------------------------------------

The University of Rochester has formed a new academic department called the Department of Brain and Cognitive Sciences. (Please note that Rochester has terminated its Department of Psychology.) This message contains information about this new department, and is specifically oriented towards individuals seeking graduate education.

Members of the Department study how we see and hear, move, learn and remember, reason, produce and understand spoken and signed languages, and how these remarkable capabilities depend upon the workings of the brain. We also study how these capabilities develop during infancy and childhood, and how the brain matures and becomes organized to perform complex tasks. Our research interests span a large domain and straddle several disciplines in the behavioral, neural and computational sciences, but all our work is connected by the idea that to understand behavior we must study not only behavior but also the processes--both neural and computational--that underlie it.

While the faculty have active research programs in many regions of this large domain, there are parts in which the Department, as well as the surrounding University community, has notable concentrations of strength. One large group of faculty and students focus their research on understanding the visual system and its organization and function; another major group investigates the nature of language processing and language acquisition; a third group, which cuts across and links those who investigate perception, language, and neurobiology, studies the nature of learning and development.

Graduate education is a central part of academic life in the Department. Our faculty's research programs are structured in a way that includes graduate students as essential partners--as our junior colleagues and future peers--and we commit a great deal to their training. The essence of our program is training for research in the disciplines that constitute the brain and cognitive sciences, and the program is designed to ensure that students develop rapidly into independent researchers.

This very brief summary can give you only a sketchy sense of our Department, and I would encourage you to get in touch with us if you would like to learn more about us or apply for admission to the graduate program. If you have access to the World Wide Web you can find a great deal of information about us and our research and graduate programs by going to:

http://www.bcs.rochester.edu/bcs/

Our Web site provides most of what you will want to know about our program, and it also enables you to submit an electronic application.
If you would like to discuss our program with a member of the Department, please call Bette McCormick (716-275-1844), who will put you in touch with someone who can help. Bette can also mail you a brochure that provides much more information about the program and the Department. As a last resort, feel free to contact me (Robert Jacobs) via e-mail (robbie at bcs.rochester.edu) with additional questions or concerns.

From mandel at cnm.us.es Wed Sep 27 07:48:54 1995
From: mandel at cnm.us.es (Manuel Delgado Restituto)
Date: Wed, 27 Sep 95 12:48:54 +0100
Subject: NDES'96 Workshop
Message-ID: <9509271148.AA02371@cnm1.cnm.us.es>

A World Wide Web page has been established for up-to-date information on NDES96 (see below).

http://www.cica.es/~cnm/ndes96.html

The preliminary call for papers (already published in this list) follows.

------------------------------------------------------------------------------
PRELIMINARY CALL FOR PAPERS

4th INTERNATIONAL WORKSHOP ON NONLINEAR DYNAMICS OF ELECTRONIC SYSTEMS (NDES-96)

June 27-28, 1996 (Jointly Organized with CNNA-96)

Escuela Superior de Ingenieros de Sevilla
Centro Nacional de Microelectronica
Sevilla, Spain
------------------------------------------------------------------------------

ORGANIZING COMMITTEE:
Prof. J.L. Huertas (Chair)
Prof. E. Freire
Prof. A. Rodriguez-Vazquez

SECRETARY: Dr. M. Delgado-Restituto
TECHNICAL PROGRAM: Prof. A. Rodriguez-Vazquez
PROCEEDINGS: Dr. M. Delgado-Restituto

SCIENTIFIC COMMITTEE:
Prof. Leon O. Chua, Univ. of California at Berkeley, U.S.A.
Prof. Anthony C. Davies, King's College, Univ. London, U.K.
Prof. Alexander S. Dmitriev, Russian Academy of Sciences, Russia
Prof. Martin Hasler, EPFL, Switzerland
Prof. Michael Peter Kennedy, University College Dublin, Ireland
Prof. Erik Lindberg, Tech. Univ. Denmark, Denmark
Prof. Josef Nossek, Tech. Univ. Munich, Germany
Prof. Maciej Ogorzalek, Univ. Mining and Metallurgy, Poland
Prof. A. Rodriguez-Vazquez, Univ. Seville, Spain
Prof. Wolfgang Schwarz, Tech. Univ. Dresden, Germany

------------------------------------------------------------------------------
GENERAL SCOPE OF THE WORKSHOP AND VENUE

The NDES series of workshops aims to provide an annual international forum to present and discuss recent advances in the analysis and applications of Nonlinear Dynamics of Electronic Circuits and Systems. Following the successful conferences in Dresden (1993), Krakow (1994), and Dublin (1995), the fourth workshop will be hosted by the National Microelectronic Center and the School of Engineering of Seville, in Seville, Spain, on 27-28 June, 1996.

Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2500-year history with modern infrastructures in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport serving several daily direct international flights, as well as many connections to international flights via Madrid and Barcelona.

------------------------------------------------------------------------------
PAPERS SUBMISSION

The workshop will address theoretical and practical issues in nonlinear electronic devices, circuits and systems, with an emphasis on dynamic behavior, chaos and complexity. The official language of the workshop will be English.
Prospective authors are invited to submit 4-page summaries of their papers to the Conference Secretariat. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in the Proceedings of the Workshop.

------------------------------------------------------------------------------
AUTHOR'S SCHEDULE

Submission of summaries ....................... February 15, 1996
Notification of acceptance .................... March 31, 1996
Submission of camera-ready papers ............. May 15, 1996

------------------------------------------------------------------------------
PRELIMINARY REGISTRATION FORM

4th INTERNATIONAL WORKSHOP ON NONLINEAR DYNAMICS OF ELECTRONIC SYSTEMS (NDES-96)
Sevilla, Spain, June 27-28, 1996

I wish to attend the workshop. Please send Program and registration form when available.

Name: ................______________________________
Mailing address: .....______________________________
Phone: ...............______________________________
Fax: .................______________________________
E-mail: ..............______________________________

Please complete and return to:

NDES'96 Secretariat, Department of Analog Circuit Design, Centro Nacional de Microelectronica
Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN
FAX: +34-5-4231832
Phone: +34-5-4239923
E-mail: ndes96 at cnm.us.es

------------------------------------------------------------------------------

From Mark.Ring at gmd.de Wed Sep 27 14:26:34 1995
From: Mark.Ring at gmd.de (Mark Ring)
Date: Wed, 27 Sep 1995 19:26:34 +0100
Subject: PhD thesis: Continual Learning in Reinforcement Environments.
Message-ID: <199509271826.AA10041@kauai.gmd.de>

FTP-host: ftp.gmd.de
FTP-filename: /Learning/rl/papers/ring.thesis.ps.Z
URL: ftp://ftp.gmd.de/Learning/rl/papers/ring.thesis.ps.Z
also URL: http://borneo.gmd.de:80/~ring/Diss
          http://www.cs.utexas.edu/users/ring/Diss

138 pages total, 624 kbytes compressed postscript.

My dissertation from last year is now available publicly in book format. It can be retrieved via ftp, is accessible in sections by WWW, or can be ordered in any book store from Oldenbourg Verlag (publishers) with the following ISBN number: ISBN 3-486-23603-2.

----------------------------------------------------------------------
Title: Continual Learning in Reinforcement Environments

August, 1994

Abstract: *Continual learning* is the constant development of complex behaviors with no final end in mind. It is the process of learning ever more complicated skills by building on those skills already developed. In order for learning at one stage of development to serve as the foundation for later learning, a continual-learning agent should learn hierarchically. CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development, is proposed, described, tested, and evaluated in this dissertation. CHILD accumulates useful behaviors in reinforcement environments by using the Temporal Transition Hierarchies learning algorithm, also derived in the dissertation. This constructive algorithm generates a hierarchical, higher-order neural network that can be used for predicting context-dependent temporal sequences and can learn sequential-task benchmarks more than two orders of magnitude faster than competing neural-network systems. Consequently, CHILD can quickly solve complicated non-Markovian reinforcement-learning tasks and can then transfer its skills to similar but even more complicated tasks, learning these faster still.
This continual-learning approach is made possible by the unique properties of Temporal Transition Hierarchies, which allow existing skills to be amended and augmented in precisely the same way that they were constructed in the first place.

Contents:

Leading pages (pp. iv - xiv)

Chapters:
1. Introduction (pp. 1 - 7)
2. Robotics Environments and Learning Tasks (pp. 8 - 16)
3. Neural-Network Learning (pp. 17 - 24)
4. Solving Temporal Problems with Neural Networks (pp. 25 - 33)
5. Reinforcement Learning (pp. 34 - 44)
6. The Automatic Construction of Sensorimotor Hierarchies (pp. 45 - 71)
   6.1 Behavior Hierarchies (pp. 45 - 52)
   6.2 Temporal Transition Hierarchies (pp. 52 - 69)
   6.3 Conclusions (pp. 70 - 71)
7. Simulations (pp. 72 - 95)
   7.1 Description of Simulation System (pp. 72 - 73)
   7.2 Supervised-Learning Tasks (pp. 73 - 82)
   7.3 Continual Learning Results (pp. 82 - 95)
8. Synopsis, Discussion, and Conclusions (pp. 96 - 107)

Appendices A-E (pp. 108 - 117)
Bibliography (pp. 118 - 127)

----------
Mark Ring
Research Group for Adaptive Systems
GMD - German National Research Center for Information Technology
Schloss Birlinghoven
D-53 754 Sankt Augustin
Germany
Mark.Ring at gmd.de
http://borneo.gmd.de:80/~ring

From terry at salk.edu Thu Sep 28 22:21:30 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 28 Sep 95 19:21:30 PDT
Subject: Faculty Positions at UCSD
Message-ID: <9509290221.AA28129@salk.edu>

Two New Faculty Positions in Systems Neurobiology and Behavior at the Assistant Professor Level in the Department of Biology, University of California, San Diego

Two outstanding candidates are sought to address problems in any of a number of areas, including CNS function in intact animals or semi-intact brain preparations using imaging techniques, computational analysis of CNS function, neuroethology, invertebrate behavior, or genetics of higher brain function. Ph.D. and postdoctoral experience required.

Please send curriculum vitae, bibliography, statement of professional goals and research interests, and the names of three references by October 20, 1995 to:

Brain and Behavior Search
Department of Biology 0346
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0346