From Connectionists-Request at cs.cmu.edu Sat Jul 1 00:06:05 1995 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Sat, 01 Jul 95 00:06:05 -0400 Subject: Bi-monthly Reminder Message-ID: <5253.804571565@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Lisa Saksida --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. 
A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user anonymous with password your username. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following url: http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. 
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transfering files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In your subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available "Solving Towers of Hanoi with ART-4" Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu. 
^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu". From crites at pride.cs.umass.edu Sat Jul 1 17:05:05 1995 From: crites at pride.cs.umass.edu (crites@pride.cs.umass.edu) Date: Sat, 1 Jul 1995 17:05:05 -0400 (EDT) Subject: papers available on reinforcement learning Message-ID: <9507012105.AA12362@pride.cs.umass.edu> The following papers are now available online: --------------------------------------------------------------------------- Improving Elevator Performance Using Reinforcement Learning Robert H. Crites and Andrew G. Barto Computer Science Department University of Massachusetts Amherst, MA 01003-4610 crites at cs.umass.edu, barto at cs.umass.edu Submitted to NIPS 8 8 pages ftp://ftp.cs.umass.edu/pub/anw/pub/crites/nips8.ps.Z ABSTRACT This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their state is not fully observable and they are non-stationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of all of these complications, we show results that surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale dynamic optimization problem of practical utility. --------------------------------------------------------------------------- An Actor/Critic Algorithm that is Equivalent to Q-Learning Robert H. Crites and Andrew G. Barto Computer Science Department University of Massachusetts Amherst, MA 01003-4610 crites at cs.umass.edu, barto at cs.umass.edu To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. 
8 pages ftp://ftp.cs.umass.edu/pub/anw/pub/crites/nips7.ps.Z ABSTRACT We prove the convergence of an actor/critic algorithm that is equivalent to Q-learning by construction. Its equivalence is achieved by encoding Q-values within the policy and value function of the actor and critic. The resultant actor/critic algorithm is novel in two ways: it updates the critic only when the most probable action is executed from any given state, and it rewards the actor using criteria that depend on the relative probability of the action that was executed. --------------------------------------------------------------------------- From dhw at santafe.edu Sat Jul 1 20:02:19 1995 From: dhw at santafe.edu (David Wolpert) Date: Sat, 1 Jul 95 18:02:19 MDT Subject: "Orthogonality" of the generalizers being combined Message-ID: <9507020002.AA25348@sfi.santafe.edu> In his recent posting, Nathan Intrator writes >>> combining, or in the simple case averaging estimators is effective only if these estimators are made somehow to be independent. >>> This is an extremely important point. Its importance extends beyond the issue of generalization accuracy however. For example, I once did a set of experiments trying to stack together ID3, backprop, and a nearest neighbor scheme, in the vanilla way. The data set was splice junction prediction. The stacking didn't improve things much at all. Looking at things in detail, I found that the reason for this was that not only did the 3 generalizers have identical xvalidation error rates, but *their guesses were synchronized*. They tended to guess the same thing as one another. In other words, although those generalizers are about as different from one another as can be, *as far as the data set in question was concerned*, they were practically identical. This is a great flag that one is in a data-limited scenario. I.e., if very different generalizers perform identically, that's a good sign that you're screwed. Which is a round-about way of saying that the independence Nathan refers to is always with respect to the data set at hand. This is discussed in a bit of detail in the papers referenced below. *** Getting back to the precise subject of Nathan's posting: Those interested in a formal analysis touching on how the generalizers being combined should differ from one another should read the Ander Krough paper (to come out in NIPS7) that I mentioned in my previous posting. A more intuitive discussion of this issue occurs in my original paper on stacking, where there's a whole page of text elaborating on the fact that "one wants the generalizers being combined to (loosely speaking) 'span the space' of algorithms and be 'mutually orthogonal'" to as much a degree as possible. Indeed, that was one of the surprising aspects of Leo Breiman's seminal paper - he got significant improvement even though the generalizers he combined were quite similar to one another. David Wolpert Stacking can be used for things other than generalizing. The example mentioned above is using it as a flag for when you're data-limited. Another use is as an empirical-Bayes method of setting hyperpriors. These and other non-generalization uses of stacking are discussed in the following two papers: Wolpert, D. H. (1992). "How to deal with multiple possible generalizers". In Fast Learning and Invariant Object Recognition, B. Soucek (Ed.), pp. 61-80. Wiley and Sons. Wolpert, D. H. (1993). "Combining generalizers using partitions of the learning set". In 1992 Lectures in Complex Systems, L. Nadel and D. Stein (Eds), pp. 
489-500. Addison Wesley. From terry at salk.edu Sun Jul 2 00:43:51 1995 From: terry at salk.edu (Terry Sejnowski) Date: Sat, 1 Jul 95 21:43:51 PDT Subject: Caltech: LEARNING AND IDENTIFICATION Message-ID: <9507020443.AA16766@salk.edu> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LEARNING AND IDENTIFICATION a series of informal meetings at Caltech WHEN: July 5, 6 and 7 AT WHAT TIME: 11:00 am to 2:00 pm. Refreshments will be provided WHERE: Caltech, Watson building 104 REGISTRATION: If you are interested, please register by sending e-mail to soatto at caltech.edu. Registration is free, and lunch will be provided only to registered attendees. A detailed schedule of the meetings can be found on the Web site http://www.micro.caltech.edu/EE/Groups/Vision/meetings/ID.html - --------------- This workshop is intended for researchers in controls and in neural networks, and aims at exploring areas of common interest. Emphasis will be on tools of (linear and nonlinear) identification theory and their application to special classes of dynamic systems describing neural networks. The meetings are intended to explore the different areas at an introductory level; the speakers will summarize a particular area (which does not necessarily coincide with their own area of research) and provide a tutorial presentation as free as possible from specialized jargon, with the objective of generating a high degree of interaction among participants. Among the topics we will cover: Classical stochastic realization and estimation theory Conventional linear identification Neural Networks and approximation Wavelets in identification Behavioural approach to representation and identification of systems nonlinear identification and embedding theorems Any change in the schedule and/or program will be posted on the WEB site. - --Stefano Soatto From krogh at nordita.dk Mon Jul 3 10:15:53 1995 From: krogh at nordita.dk (Anders Krogh) Date: Mon, 3 Jul 95 16:15:53 +0200 Subject: "Orthogonality" of the generalizers being combined Message-ID: <9507031415.AA25900@norsci0.nordita.dk> David Wolpert wrote > Getting back to the precise subject of Nathan's posting: Those > interested in a formal analysis touching on how the generalizers being > combined should differ from one another should read the Ander Krough > paper (to come out in NIPS7) that I mentioned in my previous > posting. The reference is (it is not terribly formal): "Neural Network Ensembles, Cross Validation, and Active Learning" by Anders Krogh and Jesper Vedelsby To appear in NIPS 7. It is in Neuroprose (see below for details) or at http://www.nordita.dk/~krogh/papers.html. Peter Sollich and I are finishing up an analysis of an ensemble of linear networks. It may sound trivial, but it actually isn't. We'll post it when we're done. Among other things, we find that averaging under-regularized (ie over-fitting) networks trained on slightly different training sets can give a great improvement over a single network trained on all the data. This doesn't sound too surprising, but I think that is why ensembles work in a lot of applications: People use identical over-parametrized networks and then the ensemble averages the over-fitting away. I've seen some neural network predictors for protein secondary structure where that is the case. It means that an ensemble can sometimes replace regularization. We discuss it a bit in "Improving Prediction of Protein Secondary Structure using Structured Neural Networks and Multiple Sequence Alignments" by by S\o ren K. 
Riis and Anders Krogh
NORDITA preprint 95/34 S (try http://www.nordita.dk/~krogh/papers.html)

It's hot today.

- Anders

---------------------------------------------------------------------

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/krogh.ensemble.ps.Z

The file krogh.ensemble.ps.Z can now be copied from Neuroprose. The paper is 8 pages long. Hardcopies are NOT available.

Neural Network Ensembles, Cross Validation, and Active Learning
by Anders Krogh and Jesper Vedelsby

Abstract: Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance. It is shown how to estimate the optimal weights of the ensemble members using unlabeled data. By a generalization of query by committee, it is finally shown how the ambiguity can be used to select new training data to be labeled in an active learning scheme.

The paper will appear in G. Tesauro, D. S. Touretzky and T. K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995.

________________________________________
Anders Krogh
Nordita
Blegdamsvej 17, 2100 Copenhagen, Denmark
email: krogh at nordita.dk
Phone: +45 3532 5503
Fax: +45 3138 9157
W.W.Web: http://www.nordita.dk/~krogh/
________________________________________

From kagan at pine.ece.utexas.edu Mon Jul 3 10:41:52 1995
From: kagan at pine.ece.utexas.edu (Kagan Tumer)
Date: Mon, 3 Jul 1995 09:41:52 -0500
Subject: Combining Generalizers
Message-ID: <199507031441.JAA17316@pine.ece.utexas.edu>

Lately, there has been a great deal of interest in combining estimates, and especially combining neural network outputs. Combining has a LONG history, with seminal ideas contained in Selfridge's Pandemonium (1958) and Nilsson's book on Learning Machines (1965); it is also found in diverse areas (e.g. for at least 20 years in econometrics as "forecast combining"). Recent research on this topic in the neural net/machine learning community largely focuses on (i) WHAT (type of) experts to combine, (ii) HOW to combine them, or (iii) EXPERIMENTALLY showing that combining gives better results.

Another important question is how much benefit (%age, limits, reliability, ...) combining methods can yield. At least two recent PhD theses (Perrone, Hashem) mathematically address this issue for REGRESSION problems. We have approached this problem for CLASSIFICATION problems by studying the effect of combining on the decision boundaries. The results pinpoint the mechanism by which classification results are improved, and provide limits, including a new way of estimating Bayes' rate.

A preliminary version appears as an invited paper in SPIE Proc. Vol 2492, pp. 573-585, (Orlando Conf, April '95); the full version, currently under journal review, can be retrieved from http://pegasus.ece.utexas.edu:80/~kagan/publications.html

Besides review and analysis, it contains a reference listing that includes most of the papers quoted on this forum in the past week.
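As a rough illustration of the two families of combiners analyzed in the paper announced here, the following sketch (with made-up numbers; it is not the authors' code) averages the class-posterior vectors of several classifiers, and alternatively keeps a per-class order statistic such as the median, before picking the winning class:

import numpy as np

def linear_combiner(outputs):
    """Average the class-posterior vectors produced by several classifiers."""
    return np.mean(outputs, axis=0)

def order_statistic_combiner(outputs, k=None):
    """Per class, keep the k-th order statistic over classifiers (default: median)."""
    ordered = np.sort(np.asarray(outputs), axis=0)   # sort along the classifier axis
    if k is None:
        k = ordered.shape[0] // 2                    # middle element = median
    return ordered[k]

# toy example: three classifiers, three classes
outputs = [[0.6, 0.3, 0.1],
           [0.5, 0.4, 0.1],
           [0.2, 0.7, 0.1]]
print(np.argmax(linear_combiner(outputs)))           # decision of the averaging combiner
print(np.argmax(order_statistic_combiner(outputs)))  # decision of the median combiner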
The title and abstract follows: THEORETICAL FOUNDATIONS OF LINEAR AND ORDER STATISTICS COMBINERS FOR NEURAL PATTERN CLASSIFIERS by Kagan Tumer and Joydeep Ghosh Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and the order statistics combiners introduced in this paper. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. For linear combiners, we show that in the absence of classifier bias, the added classification error is proportional to the boundary variance. In the presence of bias, the error reduction is shown to be less than or equal to the reduction obtained in the absence of bias. For non-linear combiners, we show analytically that the selection of the median, the maximum and in general the $i$th order statistic improves classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. The combining results can also be used to estimate the Bayesian error rates. Experimental results on several public domain data sets are provided to illustrate the benefits of combining. ------ All comments welcome. ______ Sorry, no hard copies. Kagan Tumer Dept. of ECE The University of Texas, Austin From stefano at kant.irmkant.rm.cnr.it Mon Jul 3 13:26:20 1995 From: stefano at kant.irmkant.rm.cnr.it (stefano@kant.irmkant.rm.cnr.it) Date: Mon, 3 Jul 1995 17:26:20 GMT Subject: two papers on evolutionary robotics Message-ID: <9507031726.AA14040@kant.irmkant.rm.cnr.it> Papers available via WWW / FTP: Keywords: Evolutionary Robotics, Neural Networks, Genetic Algorithms, Autonomous Robots, Noise. ------------------------------------------------------------------------------ EVOLVING NON-TRIVIAL BEHAVIORS ON REAL ROBOTS: AN AUTONOMOUS ROBOT THAT PICK UP OBJECTS Stefano Nolfi Domenico Parisi ***Institute of Psychology, National Research Council, Rome, Italy e-mail: stefano at kant.irmkant.rm.cnr.it domenico at kant.irmkant.rm.cnr.it Abstract Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a non-trivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. to appear in: G. Soda (Ed.) Proceedings of the Fourth Congress of the Italian Association for Artificial Intelligence, Firenze, 11-13 October, Springer Verlag. 
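For readers unfamiliar with the evolutionary approach sketched in the abstract above, the loop below shows the general shape of such an experiment: a population of weight vectors for a fixed network architecture is scored, the best are selected, and mutated copies form the next generation. The fitness function here is only a placeholder; the actual task (locating, recognizing, and grasping an object) and the controller architecture are specific to the paper and are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS, POP_SIZE, N_ELITE, SIGMA = 30, 40, 10, 0.1

def fitness(weights):
    # Placeholder objective; a real experiment would run the controller in
    # simulation (or on the robot) and score the resulting behavior.
    return -np.sum(weights ** 2)

population = rng.normal(0.0, 1.0, size=(POP_SIZE, N_WEIGHTS))
for generation in range(100):
    scores = np.array([fitness(w) for w in population])
    elite = population[np.argsort(scores)[-N_ELITE:]]                # truncation selection
    children = np.repeat(elite, POP_SIZE // N_ELITE, axis=0)
    population = children + rng.normal(0.0, SIGMA, children.shape)   # mutation only
print("best score:", max(fitness(w) for w in population))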
http://kant.irmkant.rm.cnr.it/public.html or ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5) ftp-file : nolfi.gripper.ps.Z (the file is 0.36 MB) for the homepage of our research group: http://kant.irmkant.rm.cnr.it/gral.html ----------------------------------------------------------------------------- EVOLVING MOBILE ROBOTS IN SIMULATED AND REAL ENVIRONMENTS Orazio Miglino*, Henrik Hautop Lund**, Stefano Nolfi*** *Department of Psychology, University of Palermo, Italy e-mail: orazio at caio.irmkant.rm.cnr.it **Department of Computer Science, Aarhus University, Denmark e-mail: henrik at caio.irmkant.rm.cnr.it ***Institute of Psychology, National Research Council, Rome, Italy e-mail: stefano at kant.irmkant.rm.cnr.it Abstract The problem of the validity of simulation is particularly relevant for methodologies that use machine learning techniques to develop control systems for autonomous robots, like, for instance, the Artificial Life approach named Evolutionary Robotics. In fact, despite that it has been demonstrated that training or evolving robots in the real environment is possible, the number of trials needed to test the system discourage the use of physical robots during the training period. By evolving neural controllers for a Khepera robot in computer simulations and then transferring the obtained agents in the real environment we will show that: (a) an accurate model of a particular robot-environment dynamics can be built by sampling the real world through the sensors and the actuators of the robot; (b) the performance gap between the obtained behaviors in simulated and real environment may be significantly reduced by introducing a "conservative" form of noise; (c) if a decrease in performance is observed when the system is transferred to the real environment, successful and robust results can be obtained by continuing the evolutionary process in the real environment for few generations. Technical Report, Institute of Psychology, C.N.R, Rome. http://kant.irmkant.rm.cnr.it/public.html or ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5) ftp-file : miglino.sim-real.ps.Z (the file is 2.67 MB) for the homepage of our research group: http://kant.irmkant.rm.cnr.it/gral.html ---------------------------------------------------------------------------- Stefano Nolfi Institute of Psychology National Research Council e-mail: stefano at kant.irmkant.rm.cnr.it From dmartine at laas.fr Mon Jul 3 12:07:57 1995 From: dmartine at laas.fr (Dominique Martinez) Date: Mon, 3 Jul 1995 18:07:57 +0200 Subject: two preprints on adaptive quantization Message-ID: <199507031607.AA25508@emilie.laas.fr> The following two preprints on adaptive scalar quantization are available via anonymous ftp. ---------------------------------------------------------------- Generalized Boundary Adaptation Rule for minimizing r-th power law distortion in high resolution quantization. Dominique Martinez and Marc M. Van Hulle (to appear in Neural Networks) Abstract: A new generalized unsupervised competitive learning rule is introduced for adaptive scalar quantization. The rule, called generalized Boundary Adaptation Rule (BAR_r), minimizes r-th power law distortion D_r in the high resolution case. It is shown by simulations that a fast version of BAR_r outperforms generalized Lloyd I in minimizing D_1 (mean absolute error) and D_2 (mean squared error) distortion with substantially less iterations. In addition, since BAR_r does not require generalized centroid estimation, as in Lloyd I, it is much simpler to implement. 
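For context, the conventional centroid-based alternative that the abstract above contrasts BAR_r with can be sketched as a simple online competitive update (essentially one-dimensional k-means, which reduces squared, r = 2, distortion on average). The BAR_r rule itself adapts decision boundaries rather than centroids and is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
levels = np.linspace(-1.0, 1.0, 8)        # reproduction levels of the scalar quantizer
eta = 0.05                                # learning rate

for x in rng.normal(0.0, 0.5, size=5000):
    j = np.argmin(np.abs(levels - x))     # winner: nearest reproduction level
    levels[j] += eta * (x - levels[j])    # move it toward the sample

print(np.sort(levels))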
ftp laas.laas.fr directory: pub/m2i/dmartine file: martinez.bar.ps.Z -------------------------------------------------------------------- A robust backward adaptive quantizer Dominique Martinez and Woodward Yang (to appear at NNSP'95) Abstract: This paper describes an adaptive encoder/decoder for efficient quantization of nonstationary signals. The system uses a robust backward adaptive encoding method such that the adaptation of the encoder and decoder is only determined by the transmitted codeword and does not require any additional side information. By incorporating a forgetting parameter, the quantizer is also robust to transmission errors and encoder/decoder mismatches. It is envisioned that practical applications of this algorithm can be used in the design of adaptive codecs (A/D and D/A converters) or as an efficient source coding algorithm for transmission of digitized speech. ftp laas.laas.fr directory: pub/m2i/dmartine file: martinez.forget.ps.Z From edelman at melon.mpik-tueb.mpg.de Tue Jul 4 08:02:04 1995 From: edelman at melon.mpik-tueb.mpg.de (Shimon Edelman) Date: Tue, 4 Jul 95 14:02:04 +0200 Subject: Combining Generalizers In-Reply-To: <199507031441.JAA17316@pine.ece.utexas.edu> (message from Kagan Tumer on Mon, 3 Jul 1995 09:41:52 -0500) Message-ID: <9507041202.AA28896@melon> Kagan Tumer wrote: > Lately, there has been a great deal of interest in combining > estimates, and especially combining neural network outputs. > Combining has a LONG history, with seminal ideas contained > in Selfridge's Pandemonium (1958) and Nilsson's book on Learning > ... > Another important question is, how much benefit (%age, limits, reliability,..) > can combining methods yield. At least two recent PhD theses > (Perrone, Hashem) mathematically address this issue for REGRESSION problems, > > We have approached this problem for CLASSIFICATION problems > by studying the effect of combining on the > decision boundaries. The results pinpoint the mechanism by > which classification results are improved, and provide limits, > including a new way of estimating Bayes' rate. > ... To me, Selfridge's Pandemonium looks more like a concept that launched a thousand variations on the Winner-Take-All motif than a precursor of combining estimates (of course, non-maximum suppression can also be counted as a mode of combining estimates, albeit not a very productive one :-). Nevertheless, it can serve as a basis for building a powerful representation of the subspace of the input space relevant to the task, if the Master Demon outputs the vector of response strengths of each of its subordinates, rather than the identity of the one that happens to shout the loudest(*). I think this is a good example of the importance of REPRESENTATION: the ensemble of responses is a much richer representation than the identity of the strongest response, and is more likely to constitute a better basis for CLASSIFICATION. In the past couple of years, I have been trying to clarify the computational basis of the effectiveness of representation by a bank of (individually poorly tuned) classifiers. Most of the results of this project to date are available via my Web page. (*) Shimon Edelman, "Representation, Similarity, and the Chorus of Prototypes", Minds and Machines 5:45-68 (1995), ftp://eris.wisdom.weizmann.ac.il/pub/mam.ps.Z See also the recent works by Jonathan Baxter, available from Neuroprose (sorry, I don't have an URL handy). 
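The contrast drawn above between reporting only the strongest response and keeping the whole response vector can be made concrete with a small sketch; the Gaussian-tuned prototype units below are an assumption of the illustration, not a claim about the cited work.

import numpy as np

def responses(x, prototypes, width=1.0):
    """Graded response of each prototype unit to the input x."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return np.exp(-(d / width) ** 2)

prototypes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x = np.array([0.4, 0.1])

r = responses(x, prototypes)
winner = int(np.argmax(r))   # Pandemonium-style: keep only the loudest demon
chorus = r                   # richer representation: the full vector of response strengths
print(winner, chorus)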
-Shimon Shimon Edelman, Applied Math & CS, Weizmann Institute http://www.wisdom.weizmann.ac.il/~edelman/shimon.html Cyber Rights Now: Accept No Compromise From Jonathan_Stein at comverse.com Tue Jul 4 12:14:27 1995 From: Jonathan_Stein at comverse.com (Jonathan_Stein@comverse.com) Date: Tue, 04 Jul 95 11:14:27 EST Subject: Combining classifiers to reduce false alarm rate Message-ID: <9506048048.AA804881841@hub.comverse.com> In a recent thread on the subject of combining classifiers Nathan Intrator and David Wolpert mentioned the importance of the lack of correlation between the classifiers being combined. It is obvious that if the classifiers make the same errors (no matter what their architectures) then nothing can be gained by combining them. In order to make the basic classifiers less correlated, one can train them on different subsets of the training set, but the trade-off is that each will have a smaller training set. This is true for closed set problems. For open set problems, (eg. continuous speech and handwriting recognition), for which there are "false alarm" errors in addition to misclassification errors, it may be sufficient to use the same training set with different training algorithms or even merely different starting classifiers. Assuming the training processes succeed, these networks will agree on the training set, and should behave similarly on patterns similar to those in the training set. However, the networks will probably disagree as to patterns unlike those in the training set, since there were no constraints placed on these during the training process. Thus it would seem that by combining the opinions of several networks the false alarm rate may be drastically reduced without significantly reducing the classification rate, (perhaps even improving it). Geometrically speaking, for MLP classifiers, the training set imposes restrictions on the placing of each classifier's hyperplanes. The negative examples will natural fall randomly into domains corresponding to some class, but unless they are sufficiently similar to positive examples, there should be decorrelation between the domains into which they fall. When comparing identifications, the different networks should respond similarly to positive examples, but will tend to disagree regarding the negative ones. This behavior should allow one to differentiate between negatives and positives, thus effectively rejecting false alarms. Over the past four years we have found empirically that the false alarm rate of cursive script and continuous speech systems can be significantly reduced by combining the outputs of several multilayer perceptrons. We have also observed similar effects on artificial benchmark problems, and analytically proven the idea for a solvable toy problem. I can email LaTeX conference papers to interested parties. Jonathan (Yaakov) Stein From tresp at traun.zfe.siemens.de Tue Jul 4 12:40:28 1995 From: tresp at traun.zfe.siemens.de (Volker Tresp) Date: Tue, 4 Jul 1995 18:40:28 +0200 Subject: more combinations of more estimators Message-ID: <9507041640.AA09250@traun.zfe.siemens.de> Our recent NIPS7 paper COMBINING ESTIMATORS USING NON-CONSTANT WEIGHTING FUNCTIONS by Volker Tresp and Michiaki Taniguchi might be of interest to people interested in combining predictors. The basic idea in our (and many related) approaches is to estimate some measure of certainty of a predictor given the input for determining the (input dependent) weight of that predictor. 
The inverse of the variance of a predictor is suitable: if a predictor is uncertain about its prediction it should obtain a small weight. Another measure can be derived from the input data distribution of the training data which were used to train a given predictor: a predictor should obtain a small weight in regions where it did not see any data during training. The latter idea is closely related to the mixtures of experts. We also indicate how both concepts (i.e., variance-based weighting and density-based weighting) can be combined.

Incidentally, mixtures of experts fit nicely into the missing input concept: we conceptually introduce an additional input with as many states as there are experts. Since we never know which expert is the true expert, we are faced with a missing input problem during training and recall. The learning rules for systems with missing inputs can be used to derive the learning rules for the mixtures of experts.

Volker and Mich

The paper can be obtained via:
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/tresp.combining.ps.Z

From arbib at pollux.usc.edu Tue Jul 4 16:03:28 1995
From: arbib at pollux.usc.edu (Michael A. Arbib)
Date: Tue, 4 Jul 1995 13:03:28 -0700
Subject: The Handbook of Brain Theory and Neural Networks
Message-ID: <199507042003.NAA13280@pollux.usc.edu>

In an earlier posting, the following paragraph was garbled:

The heart of the book, Part III, is comprised of 266 original articles by leaders in the various fields, arranged alphabetically by title. Parts I and II, written by the editor, are designed to help readers orient themselves to this vast range of material. PART I - BACKGROUND introduces several basic neural models, explains how the present study of Brain Theory and Neural Networks integrates brain theory, artificial intelligence, and cognitive psychology, and provides a tutorial on the concepts essential for understanding neural networks as dynamic, adaptive systems. PART II - ROAD MAPS provides an entree into the many articles of Part III via an introductory "Meta-Map" and twenty-three road maps, each of which provides a tour of all the Part III articles on the chosen theme.

****

Here is additional information that has been requested by a number of individuals:

The ISBN is 0-262-01148-4
The book can be ordered from The MIT Press:
For orders: mitpress-orders at mit.edu
For inquiries: mitpress-orders-inq at mit.edu
General information and order forms: http://www-mitpress.mit.edu
PS. The list price is $150 till September 30, then $175.

*****

Michael A. Arbib
Center for Neural Engineering
USC Los Angeles CA 90089-2520
Tel: (213) 740-9220
Fax: (213) 740-5687
arbib at pollux.usc.edu
http://www.hbp.usc.edu:8376/HBP/Home.html

From kagan at pine.ece.utexas.edu Wed Jul 5 11:12:01 1995
From: kagan at pine.ece.utexas.edu (Kagan Tumer)
Date: Wed, 5 Jul 1995 10:12:01 -0500
Subject: Combining Generalizers (correction)
Message-ID: <199507051512.KAA23399@pine.ece.utexas.edu>

There seems to have been a problem with the compressed postscript file I put on the www. I have regenerated the file and checked to make sure it was retrievable in its entirety. I apologize for any inconvenience this may have caused.

Sincerely,
Kagan Tumer

PS: Original post follows.
----------------------------------------------------------------------------
Lately, there has been a great deal of interest in combining estimates, and especially combining neural network outputs.
Combining has a LONG history, with seminal ideas contained in Selfridge's Pandemonium (1958) and Nilsson's book on Learning Machines (1965); it is also found i diverse areas (e.g. for at least 20 years in econometrics as "forecast combining"). Recent research on this topic in the neural net/machine learning community largely focuses on (i) WHAT (type of) experts to combine, or (ii) HOW to combine them, or (ii) EXPERIMENTALLY show that combining gives better results Another important question is, how much benefit (%age, limits, reliability,..) can combining methods yield. At least two recent PhD theses (Perrone, Hashem) mathematically address this issue for REGRESSION problems, We have approached this problem for CLASSIFICATION problems by studying the effect of combining on the decision boundaries. The results pinpoint the mechanism by which classification results are improved, and provide limits, including a new way of estimating Bayes' rate. A preliminary version appears as an invited paper in SPIE Proc. Vol 2492, pp. 573-585, (Orlando Conf, April '95); the full version, currently under journal review, can be retrieved from http://pegasus.ece.utexas.edu:80/~kagan/publications.html Besides review and analysis, it contains a reference listing that includes most of the papers quoted on this forum in the past week. The title and abstract follows: THEORETICAL FOUNDATIONS OF LINEAR AND ORDER STATISTICS COMBINERS FOR NEURAL PATTERN CLASSIFIERS by Kagan Tumer and Joydeep Ghosh Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and the order statistics combiners introduced in this paper. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. For linear combiners, we show that in the absence of classifier bias, the added classification error is proportional to the boundary variance. In the presence of bias, the error reduction is shown to be less than or equal to the reduction obtained in the absence of bias. For non-linear combiners, we show analytically that the selection of the median, the maximum and in general the $i$th order statistic improves classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. The combining results can also be used to estimate the Bayesian error rates. Experimental results on several public domain data sets are provided to illustrate the benefits of combining. ------ All comments welcome. ______ Sorry, no hard copies. Kagan Tumer Joydeep Ghosh Dept. of ECE Associate Professor The University of Texas, Austin Dept. 
of ECE The University of Texas, Austin From guzman at gluttony.cs.umass.edu Wed Jul 5 16:16:28 1995 From: guzman at gluttony.cs.umass.edu (guzman@gluttony.cs.umass.edu) Date: Wed, 05 Jul 95 16:16:28 -0400 Subject: Reinforcement Learning Paper Message-ID: <9507052016.AA06081@gluttony.cs.umass.edu> Reinforcement Learning in Partially Markovian Decision Processes submitted to NIPS*95 (8 pages) FTP-HOST: ftp.cs.umass.edu FTP-FILENAME: /pub/anw/pub/guzman/guzman-lara.PMDP.ps.Z ABSTRACT We define a subclass of Partially Observable Markovian Decision Processes (POMDPs), which we call {\em Partially Markovian Decision Processes} (PMDPs), and propose a novel approach to solve this kind of problem. In contrast to traditional methods for POMDPs, our method does not involve estimation of the state of an underlying Markovian problem; its goal is to find an optimal observation-based policy (an action-selection rule that uses only the information immediately available to the agent). We show that solving this non-Markovian problem is equivalent to solving multiple Markovian Decision Processes (MDPs). We argue that this approach opens new possibilities for distributed systems, and we support this claim with some preliminary results where the use of an observation-based policy yielded a good solution in a complex stochastic environment. Sorry, no hard copies Sergio Guzman-Lara Computer Science Department LGRC, University of Massachusetts Amherst, MA 01003 guzman at cs.umass.edu From Gerhard.Paass at gmd.de Wed Jul 5 08:43:28 1995 From: Gerhard.Paass at gmd.de (Gerhard Paass) Date: Wed, 5 Jul 1995 14:43:28 +0200 Subject: Autumn School Connectionism and Neural Networks Message-ID: <199507051243.AA09680@sein.gmd.de> CALL FOR PARTICIPATION ================================================================= = = = H e K o N N 9 5 = = = Autumn School in C o n n e c t i o n i s m and N e u r a l N e t w o r k s October 2-6, 1995 Muenster, Germany Conference Language: German ---------------------------------------------------------------- A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following addresses: WWW: http://borneo.gmd.de/~hekonn anonymous FTP: ftp.gmd.de directory: Learning/neural/hekonn95 = = = O V E R V I E W = = = Artificial neural networks (ANN's) have in recent years been discussed in many diverse areas, ranging from the modelling of learning in the cortex to the control of industrial processes. The goal of the Autumn School in Connectionism and Neural Networks is to give a comprehensive introduction to conectionism and artificial neural networks and to give an overview of the current state of the art. Courses will be offered in five thematic tracks. (The conference language is German.) The FOUNDATION track will introduce basic concepts (A. Zell, Univ. Stuttgart), as well as present lectures on information processing in biological neural systems (G. Palm, Univ. Ulm), on the relationship between ANN's and fuzzy logic (R. Kruse, Univ. Braunschweig), and on genetic algorithms (S. Vogel, Univ. Cologne). The THEORY track is devoted to the properties of ANN's as abstract learning algorithms. Courses are offered on approximation properties of ANN's (K. Hornik, Univ. Vienna), the algorithmic complexity of learning procedures (M. Schmitt, TU Graz), prediction uncertainty and model selection (G. Paass, GMD St. Augustin), and "neural" solutions of optimization problems (J. Buhmann, Univ. Bonn). 
This year, special emphasis will be put on APPLICATIONS of ANN's to real-world problems. This track covers courses on vision (H.Bischof, TU Vienna), character recognition (J. Schuermann, Daimler Benz Ulm), speech recognition (R. Rojas, FU Berlin), industrial applications (B. Schuermann, Siemens Munich), robotics (K.Moeller, Univ. Bonn), and hardware for ANN's (U. Rueckert, TU Hamburg-Harburg). In the track on SYMBOLIC CONNECTIONISM, there will be courses on: knowledge processing with ANN's (F. Kurfess, New Jersey IT), hybrid systems in natural language processing (S. Wermter, Univ. Hamburg), connectionist aspects of natural language processing (U. Schade, Univ. Bielefeld), and procedures for extracting rules from ANN's (J. Diederich, QUT Brisbane). In the section on COGNITIVE MODELLING, we have courses on representation and cognitive models (G. Dorffner, Univ. Vienna), aspects of cognitive psychology (R. Mangold-Allwinn, Univ. Saarbruecken), self-organizing ANN's in the visual system (C. v.d. Malsburg, Univ. Bochum), and information processing in the visual cortex (J.L. v. Hemmen, TU Munich). In addition, there will be courses on PROGRAMMING and SIMULATORS. Participants will have the opportunity to work with the SESAME system (J. Kindermann, GMD St.Augustin) and the SNNS simulator (A.Zell, Univ. Stuttgart). From hd at harris.monmouth.edu Wed Jul 5 10:35:19 1995 From: hd at harris.monmouth.edu (Drucker Harris) Date: Wed, 5 Jul 95 10:35:19 EDT Subject: combining classifiers Message-ID: <9507051435.AA00592@harris.monmouth.edu> Re: combining classifiers For those interested in combining classifiers, I give references to the boosting literature below. Boosting gives an explicit method of building multiple classifiers in a sequential fashion, rather than building the classifiers first and then determining how to combine them. The seminal work on boosting which shows that it is possible to combine many classifiers, each with error rate slightly less than 1/2 (termed weak learners) to give a combined classifier with very good performance (termed a strong learner): Robert Schapire, "The strength of weak learnability", Machine Learning, 5(2), p 197-227 Applications to OCR may be seen in H.Drucker, C. Cortes, L.Jackel, Y.LeCun, and V.Vapnik, Boosting and Other Ensemble Methods, Neural Computation, Vol 6, p1289-1301 Comparison of boosting techniques to many others in OCR is given in: LD Jackel, et.al., "Comparison of Classifier Methods: A Case Study in Handwritten Digit Recognition", 1994 International Conference on Pattern Recognition, Jerusalem, 1994. In practical applications, it is helpful to have a very large source of training data. However, there is a new version of boosting which does not require this large data set: Y.Freund and R.E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting, Proceedings of the Second European Conference on Computational Learning Theory, March, 1995. Harris Drucker From phkywong at uxmail.ust.hk Wed Jul 5 05:56:25 1995 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Wed, 5 Jul 1995 17:56:25 +0800 Subject: Paper available on Hybrid Classification Message-ID: <95Jul5.175637+0800_hkt.19012-4+102@uxmail.ust.hk> FTP-host: physics.ust.hk FTP-file: pub/kymwong/hybrid.ps.gz The following paper, presented at IWANNT*95, is now available via anonymous FTP. (8 pages long) ============================================================================ A Hybrid Expert System for Error Message Classification H.C. Lau, K.Y. 
Szeto, K.Y.M. Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phhclau at usthk.ust.hk, phkywong at usthk.ust.hk and D.Y. Yeung Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. ABSTRACT A hybrid intelligent classifier is built for pattern classification. It consists of a classification and regression tree (CART), a genetic algorithm (GA) and a neural network (NN). CART extracts features of the patterns by setting up decision rules. Rule improvement by GA is explored. The rules act as a pre-processing layer of NN, a multi- class neural classifier, through which the most probable class is determined. A realistic test of classifying error messages generated from a telephone exchange shows that the CART-NN hybrid system has comparable performance with Bayesian neural network. ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get hybrid.ps.gz ftp> quit unix> gunzip hybrid.ps.gz unix> lpr hybrid.ps From Frank.Smieja at gmd.de Thu Jul 6 09:02:35 1995 From: Frank.Smieja at gmd.de (Frank Smieja) Date: Thu, 6 Jul 1995 15:02:35 +0200 Subject: TR in neuroprose Message-ID: <199507061302.AA07563@shetland.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/beyer.mappings.ps.Z The file beyer.mappings.ps.Z is now available for copying from the Neuroprose repository: On Data-driven Derivation of Discrete Mappings between Finite Spaces (8 pages) ABSTRACT: The guessing of a function behind a discrete mapping between finite spaces using only the information provided by positive examples is shown to be in principle optimally handled by a lookup table combined with a random process. We explore the implications of this result for the construction of intelligent approximators. The paper is also available (as are the rest of our REFLEX papers) from the WWW site http://borneo.gmd.de/AS/janus/publi/publi.html or look at our JANUS/REFLEX home page http://borneo.gmd.de/AS/janus/pages/janus.html -- Frank Smieja ----------------------------------------------------------------------------- Dr Frank Smieja Institute for System Design Technology (I5) Adaptive Systems Group (AS) German National Research Center for Information Technology (GMD) Tel: +49 2241-142214 GMD-SET.AS, Schloss Birlinghoven email: smieja at gmd.de 53754 St Augustin Germany WWW: http://borneo.gmd.de:80/~smieja/ ----------------------------------------------------------------------------- From ormoneit at informatik.tu-muenchen.de Thu Jul 6 12:23:10 1995 From: ormoneit at informatik.tu-muenchen.de (Dirk Ormoneit) Date: Thu, 6 Jul 1995 18:23:10 +0200 Subject: Combining Gaussian Mixture Density Estimates Message-ID: <95Jul6.182316+0200_met_dst.116230+308@papa.informatik.tu-muenchen.de> In a recent message, Anders Krogh mentioned the possibility to use averaging based on slightly different (e.g. resampled) training sets as a method of regularization. In our latest work, we found that this regularizing effect of network averaging may be advantageously exploited for Gaussian mixture density estimation. Regularization is particularly important in this case, because the overfitting problem is even more severe than, for example, in the regression case. 
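To make the averaging idea concrete before the experimental details below, here is a minimal sketch of ensemble averaging applied to density estimation: fit an unregularized mixture to each bootstrap resample of the data and average the resulting densities. This is only an illustration of the general idea, not the procedure of the paper, which combines EM with Bayesian penalty terms.

import numpy as np

rng = np.random.default_rng(2)

def fit_gmm_1d(x, k=2, iters=50):
    """Plain (unregularized) EM for a k-component 1-D Gaussian mixture."""
    mu = rng.choice(x, k)
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)                  # E-step
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk                      # M-step
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
        pi = nk / len(x)
    return pi, mu, var

def density(xs, params):
    pi, mu, var = params
    return (pi * np.exp(-0.5 * (xs[:, None] - mu) ** 2 / var)
            / np.sqrt(2 * np.pi * var)).sum(axis=1)

x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 0.5, 150)])
resamples = [rng.choice(x, len(x), replace=True) for _ in range(10)]   # bootstrap resamples
models = [fit_gmm_1d(xb) for xb in resamples]                          # one estimate per resample
xs = np.linspace(-5, 5, 200)
bagged_density = np.mean([density(xs, m) for m in models], axis=0)     # averaged density estimate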
In our experiments we found that Leo Breiman's *bagging* (averaging of estimators which were derived from resampled training sets) yields a performance which is comparable and sometimes even superior to a Bayesian regularization approach. As pointed out by Breiman, a basic precondition for obtaining an improvement with *bagging* is that the individual estimators are relatively unstable. This is particularly the case for Gaussian mixture estimates. The title of our paper is Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging by Dirk Ormoneit and Volker Tresp ABSTRACT We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method consists of defining a Bayesian prior distribution on the parameter space. We derive EM (Expectation Maximization) update rules which maximize the a posterior parameter probability in contrast to the usual EM rules for Gaussian mixtures which maximize the likelihood function. In the second approach we apply ensemble averaging to density estimation. This includes Breiman's "bagging", which has recently been found to produce impressive results for classification networks. To our knowledge this is the first time that ensemble averaging is applied to improve density estimation. A version of this paper is submitted to NIPS'95. A technical report is available under the name 'fki-205-95.ps.gz' on the FTP-site flop.informatik.tu-muenchen.de To get it, execute the following steps: % ftp flop.informatik.tu-muenchen.de Name (flop.informatik.tu-muenchen.de:ormoneit): anonymous Password: (your email adress) ftp> cd pub/fki ftp> binary ftp> get fki-205-95.ps.gz ftp> bye % gunzip fki-205-95.ps.gz Dirk From ingber at alumni.caltech.edu Thu Jul 6 14:11:50 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: 6 Jul 1995 18:11:50 GMT Subject: paper on statistical constraints on 40 Hz models of short-term memory Message-ID: <3th916$hq7@gap.cco.caltech.edu> The following paper, to be published in Physical Review E, and some related reprints are available via anonymous FTP or WWW: Statistical mechanics of neocortical interactions: Constraints on 40 Hz models of short-term memory Lester Ingber Lester Ingber Research, P.O. Box 857, McLean, Virginia 22101 ingber at alumni.caltech.edu Calculations presented in L. Ingber and P.L. Nunez, Phys. Rev. E 51, 5074 (1995) detailed the evolution of short-term memory in the neocortex, supporting the empirical 7+-2 rule of constraints on the capacity of neocortical processing. These results are given further support when other recent models of 40 Hz subcycles of low-frequency oscillations are considered. PACS Nos.: 87.10.+e, 05.40.+j, 02.50.-r, 02.70.-c ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] get smni95_stm40hz.ps.Z [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://www.alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. 
My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. ======================================================================== Lester -- /* RESEARCH ingber at alumni.caltech.edu * * INGBER ftp.alumni.caltech.edu:/pub/ingber * * LESTER http://www.alumni.caltech.edu/~ingber/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From kruschke at croton.psych.indiana.edu Thu Jul 6 16:17:34 1995 From: kruschke at croton.psych.indiana.edu (John Kruschke) Date: Thu, 6 Jul 1995 15:17:34 -0500 (EST) Subject: TR announcement: Extensions to the delta rule of associative learning Message-ID: <9507062017.AA07049@croton.psych.indiana.edu> The following report is now available: Extensions to the Delta Rule for Associative Learning John K. Kruschke and Amy L. Bradley The delta rule of associative learning has recently been used in several models of human category learning, and applied to categories with different relative frequencies, or base rates. Previous research has emphasized predictions of the delta rule after extensive learning. Our first experiment measures the relative acquisition rates of categories with different base rates, and the delta rule significantly and systematically deviates from the human data. We suggest that two additional mechanisms are involved, namely, short-term memory and strategic guessing. Two additional experiments highlight the effects of these mechanisms. The mechanisms are formalized and combined with the delta rule, and provide good fits to the data from all three experiments. The easiest way to get it is from the research section of my Web page (see URL below). -- John K. Kruschke e-mail: kruschke at indiana.edu Dept. of Psychology office: (812) 855-3192 Indiana University lab: (812) 855-9613 Bloomington, IN 47405-1301 USA fax: (812) 855-4691 URL= http://silver.ucs.indiana.edu/~kruschke/home.html From bill at psy.uq.oz.au Thu Jul 6 19:48:41 1995 From: bill at psy.uq.oz.au (Bill Wilson) Date: Fri, 07 Jul 1995 09:48:41 +1000 Subject: papers available by ftp: Recurrent net architectures Message-ID: <199507062348.JAA03259@psych.psy.uq.oz.au> The files wilson.recurrent.ps.Z and wilson.stability.ps.Z are now available for copying from the Neuroprose repository: File: wilson.recurrent.ps (4 pages) Title: A Comparison of Architectural Alternatives for Recurrent Networks Author: William H. Wilson Abstract: This paper describes a class of recurrent neural networks related to Elman networks. The networks used herein differ from standard Elman networks in that they may have more than one state vector. Such networks have an explicit representation of the hidden unit activations from several steps back. In principle, a single-state-vector network is capable of learning any sequential task that a multi-state-vector network can learn. This paper describes experiments which show that, in practice, and for the learning task used, a multi-state-vector network can learn a task faster and better than a single-state-vector network. The task used involved learning the graphotactic structure of a sample of about 400 English words. The training method and architecture used somewhat resemble backpropagation through time, but differ in that multiple state vectors persist in the trained network, and that each state vector is connected to the hidden layer by independent sets of weights. 
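The multi-state-vector architecture described above can be pictured with a short forward-pass sketch (not the author's code; a minimal NumPy illustration in which the layer sizes, the tanh hidden units and the one-hot input coding are all assumptions): the hidden layer receives the current input plus the hidden activations from each of the previous K steps, each lag through its own independent weight matrix.

    import numpy as np

    class MultiStateElman:
        """Elman-style net whose hidden layer sees the hidden activations
        from the previous k_states time steps, each through independent weights."""

        def __init__(self, n_in, n_hidden, n_out, k_states=3, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
            # one independent weight matrix per state vector (per lag)
            self.W_state = [rng.normal(0.0, 0.1, (n_hidden, n_hidden))
                            for _ in range(k_states)]
            self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
            self.n_hidden = n_hidden
            self.k_states = k_states

        def forward(self, sequence):
            """Run one input sequence; return the output at every step."""
            states = [np.zeros(self.n_hidden) for _ in range(self.k_states)]
            outputs = []
            for x in sequence:
                net = self.W_in @ x
                for W, s in zip(self.W_state, states):
                    net += W @ s                  # each lag contributes via its own weights
                h = np.tanh(net)
                outputs.append(self.W_out @ h)
                states = [h] + states[:-1]        # shift the stored hidden activations back one lag
            return np.array(outputs)

    # e.g. the letters of a word coded one-hot (dimensions here are arbitrary)
    net = MultiStateElman(n_in=26, n_hidden=20, n_out=26, k_states=3)
    word = np.eye(26)[[2, 0, 19]]                 # "c", "a", "t"
    print(net.forward(word).shape)                # -> (3, 26)

With k_states=1 this reduces to an ordinary Elman forward pass; training (for instance by a truncated backpropagation-through-time variant, as the abstract suggests) is omitted here.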
------------------------------------- File: wilson.stability.ps (4 pages) Title: Stability of Learning in Classes of Recurrent and Feedforward Networks Author: William H. Wilson Abstract: This paper concerns a class of recurrent neural networks related to Elman networks (simple recurrent networks) and Jordan networks and a class of feedforward networks architecturally similar to Waibel's TDNNs. The recurrent nets used herein, unlike standard Elman/Jordan networks, may have more than one state vector. It is known that such multi-state Elman networks have better learning performance on certain tasks than standard Elman networks of similar weight complexity. The task used involves learning the graphotactic structure of a sample of about 400 English words. Learning performance was tested using regimes in which the state vectors are, or are not, zeroed between words: the former results in larger minimum total error, but without the large oscillations in total error observed when the state vectors are not periodically zeroed. Learning performance comparisons of the three classes of network favour the feedforward nets. Bill Wilson Artificial Intelligence Laboratory School of Computer Science and Engineering University of New South Wales Sydney 2052 Australia Email: billw at cse.unsw.edu.au From sontag at control.rutgers.edu Fri Jul 7 10:27:06 1995 From: sontag at control.rutgers.edu (Eduardo Sontag) Date: Fri, 7 Jul 1995 10:27:06 -0400 Subject: Three *** UNRELATED *** TR's available Message-ID: <199507071427.KAA06870@control.rutgers.edu> (NOTE: The following three TR's are NOT in any way related to each other, but announcements are being bundled into one, as requested by list moderator.) ****************************************************************************** Subject: TR (keywords: VC dimension lower bounds, feedforward neural nets) The following preprint is available by FTP: Neural networks with quadratic VC dimension Pascal Koiran and Eduardo Sontag Abstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed. Retrieval: FTP anonymous to: math.rutgers.edu cd pub/sontag file: quadratic-vc.ps.z (compressed postscript file; for further information, see the files README and CONTENTS in that directory) (Note: also available as NeuroCOLT Technical Report NC-TR-95-044) **** NO HARDCOPY AVAILABLE **** ****************************************************************************** Subject: TR (keywords: VC dimension, learning, dynamical systems, recurrence) The following preprint is available by FTP: Sample complexity for learning recurrent perceptron mappings Bhaskar Dasgupta and Eduardo Sontag Abstract: This paper deals with the learning-theoretic questions involving the identification of linear dynamical systems (in the sense of control theory) and especially with the binary-classification version of these, "recurrent perceptron classifiers". These latter classifiers generalize the classical perceptron model. They take into account those correlations and dependences among input coordinates which arise from linear digital filtering. 
The paper provides tight theoretical bounds on sample complexity associated to the fitting of recurrent perceptrons to experimental data. The results are based on VC-dimension theory and PAC learning, as well as on recent computational complexity work in elimination methods. Retrieval: FTP anonymous to: math.rutgers.edu cd pub/sontag file: vcdim-signlinear.ps.z (compressed postscript file; for further information, see the files README and CONTENTS in that directory) Note: The paper is available also as DIMACS Technical Report 95-17, and can be obtained by ftp to dimacs.rutgers.edu (IP address = 128.6.75.16), login anonymous, in dir "pub/dimacs/TechnicalReports/TechReports" **** NO HARDCOPY AVAILABLE **** ****************************************************************************** Subject: TR (keywords: feedforward networks, local minima, critical points) The following preprint is available by FTP: Critical points for least-squares problems involving certain analytic functions, with applications to sigmoidal nets Eduardo D. Sontag Abstract: This paper deals with nonlinear least-squares problems involving the fitting to data of parameterized analytic functions. For generic regression data, a general result establishes the countability, and under stronger assumptions finiteness, of the set of functions giving rise to critical points of the quadratic loss function. In the special case of what are usually called ``single-hidden layer neural networks,'' which are built upon the standard sigmoidal activation tanh(x) (or equivalently 1/(1+e^{-x})), a rough upper bound for this cardinality is provided as well. Retrieval: FTP anonymous to: math.rutgers.edu cd pub/sontag file: crit-sigmoid.ps.z (compressed postscript file; for further information, see the files README and CONTENTS in that directory) **** NO HARDCOPY AVAILABLE **** ****************************************************************************** Eduardo D. Sontag URL for Web HomePage: http://www.math.rutgers.edu/~sontag/ ****************************************************************************** From dsilver at csd.uwo.ca Fri Jul 7 09:35:45 1995 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Fri, 7 Jul 95 9:35:45 EDT Subject: Summary of responses on data transformation tools Message-ID: <9507071335.AA08942@church.ai.csd.uwo.ca.csd.uwo.ca> Some time ago (May/95), I request additional information on data transformation tools: > Many of us spend hours preparing data files for acceptance by > machine learning systems. Typically, I use awk or C code to transform > ASCII records into numeric or symbolic attribute tuples for a neural net, > inductive decision tree, etc. Before re-inventing the wheel, has anyone > developed a general tool for perfoming some of the more common > transformations. Any related suggestions would be of great use to many > on the network. Below is a summary of the most informative responses I received. Sorry for the delay. . Danny -- ========================================================================= = Daniel L. Silver University of Western Ontario, London, Canada = = N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b = = dsilver at csd.uwo.ca H: (519)473-6168 O: (519)679-2111 (ext.6903) = ========================================================================= From: A. Famili I have done quite a bit of work in this area, on data preparation and data pre-processing, and also rule post-processing in induction. 
As part of our induction system that we have built, we have some data pre-processing capabilities added. I am also organizing and will be chairing a panel on the "Role of data pre-processing in Intelligent Data Analysis" at the IDA-95 Symposium (Intelligent Data Analysis Symposium, to be held in Germany in Aug. 1995). The most common tool on the market is NeuralWare's Data Sculptor (I have only seen the brochure and a demo). It is claimed to be a general purpose tool. Others are in a short report that I send you below.

A. Famili, Ph.D.
Senior Research Scientist
Knowledge Systems Lab.
IIT-NRC, Bldg. M-50
Montreal Rd. Ottawa, Ont. K1A 0R6 Canada
Phone: (613) 993-8554   Fax: (613) 952-7151
email: famili at ai.iit.nrc.ca
---------------------------
A. Famili
Knowledge Systems Laboratory
Institute for Information Technology
National Research Council Canada

1.0 Introduction

This report outlines a comparison of three commercial data pre-processing tools that are available in the market. The purpose of the study was to identify useful features of these tools that could be helpful in intelligent filtering and data analysis for the IDS project. The comparison study does not involve use and evaluation of these tools on real data. Two of the tools (LabView and OS/2 Visualizer) are available in the KSL.

2.0 Data Sculptor

Data Sculptor was developed by NeuralWare on the premise that, in neural network data analysis applications, 80 percent of the time is spent on data preprocessing. The tool was developed to handle any type of transformation or manipulation of data before the data are analysed. The graphics capabilities include: histograms, bar charts, line, pie and scatter plots. There are several statistical functions to be used on the data. There are also options to create new variables (attribute vectors) based on transformations of other variables. Following are some important specifications, as explained in the fact sheets and demo version:
- Input Data Formats: DBase, Excel, Paradox, Fixed Field, ASCII, and Binary.
- Output Data Formats: Fixed Field, Delimited ASCII and Binary.
- General Data Transformations: Sorting, File Merge, Field Comparison, Sieve and Duplicate, and Neighborhood.
- Math. Transformations: Arithmetic, Trigonometric, and Exponential.
- Special Transformations: encodings of the type One-of-N, Fuzzy One-of-N, R-of-N, Analog R-of-N, Thermometer, Circular, and Inverse Thermometer; Normalizing Functions; Fast Fourier Transformations; and more.
- Stat. Functions: Count, Sum, Mean, Chi-square, Min, Max, STD, Variance, Correlation, and more.
- Graph Formats: Bar chart, Histogram, Scatter Plot, Pie, etc.
- Spreadsheet: Data Viewing, and Search Function.
A data pre-processing application can be built by using (or defining) icons and assembling the entire application in the Data Sculptor environment, which is quite easy to use. A number of demo applications came with the demo diskettes. An on-line hypertext help facility is also available. Data Sculptor runs under Windows. Information on Data Sculptor comes from the literature and two demo diskettes.

3.0 LabView and Data Engine

LabView (Laboratory Virtual Instrument Engineering Workbench) is a product developed by National Instruments. It is, however, available with Data Engine, a data analysis product developed by MIT in Germany. LabView, a high-level programming environment, has been developed to simplify scientific computation, process control analysis, and test and measurement applications.
It is far more sophisticated than other data pre-processing systems. Unlike other programming systems that are text based, LabView is graphics based and lets users create data viewing and simulation programs in block diagram form. LabView also contains application-specific libraries for data acquisition, data analysis, data presentation, and data storage. It even comes with its own GUI builder facilities (called the front panel) so that the application is monitored and run to simulate the panel of a physical instrument. There are also a number of LabView companion products that have been developed by users or suppliers of this product.

4.0 OS/2 Visualizer

The Visualizer comes with OS/2 and is installed on the PCs of the IDS project. Its main function is support for data visualization, and it consists of three modules: (i) Charts, (ii) Statistics, and (iii) Query. The Visualizer Charts module provides support for a variety of chart-making requirements. Examples are: line, pie, bar, scatter, surface, mixed, etc. The Visualizer Statistics module provides support for 57 statistical methods in seven categories: (i) Exploratory methods, (ii) Distributions, (iii) Relations, (iv) Quality control, (v) Model fitting, (vi) Analysis of variance, and (vii) Tests. Each of the above categories consists of several features that are useful for statistical analysis of data. The Visualizer Query module provides support for a number of query tasks to be performed on the data. These include means to access and work with the data that is currently used, creating and storing new tables in the database, combining data from many tables, and many more. It is not evident from the documentation whether or not we can perform some form of data transformation or preprocessing on the queried data so that a preprocessed data file is created for further analysis.
================================================================
From: Matthijs Kadijk

I personally think that AWK is the best and most general tool for those purposes, but for those who want something less general but easy to use I suggest dm (a data manipulator), which is part of Gary Perlman's UNIX|STAT package. It should be no problem to find it on the net. I also use the UNIX|STAT programs to analyse the results of simulations with my NN programs. I'll attach the dm tutorial to this mail (DLS: not included in this summary).

Matthijs Kadijk
TNO-Bouw, Postbus 49, NL-2600 AA Delft
email: kkm at bouw.tno.nl   www: http://www.bouw.tno.nl
tel: +31 15 842 195   fax: +31 15 843975
=====================================================================
From: stefanos at vuse.vanderbilt.edu (Stefanos Manganaris)

I have been using this code to read, into LISP, UCI and C4.5 data files. It will enable you to manipulate the records in LISP. All you need to do is define once an appropriate "make-instance" function for each of the learning systems you use.

Stef.
--
Stefanos Manganaris. Computer Science Department, Vanderbilt University, Nashville, Tennessee.
http://www.vuse.vanderbilt.edu/~stefanos/stefanos.html -------------------------------- cut here ------------------------------------ #|============================================================================ READ IN LISP UCI and C4.5 DATA FILES $Id: read-data.cl,v 1.1 1995/04/12 04:13:51 stefanos Exp $ Last Edited: Apr 11/95 23:08 CDT by stefanos at worf (Stefanos Manganaris) Written by Stefanos Manganaris, Computer Sciences, Vanderbilt University. stefanos at vuse.vanderbilt.edu http://www.vuse.vanderbilt.edu/~stefanos/stefanos.html ============================================================================|# (in-package "USER") (defvar *eol* nil) (defun make-simple-instance (class attributes) "A simple example for read-data-file's make-instance-f argument. Change this function to return instances in whatever format your learner expects." (cons class attributes)) ;; Usage: ;; (read-data-file "file.data" #'make-simple-instance) #|____________________________________________________________Sat Feb 4/95____ Function - READ-DATA-FILE Reads the UCI or C4.5 FILE and returns a list of instances. Each instance is created by supplying its class and attribute values to MAKE-INSTANCE-F. Note: * The list of instances is returned in reverse order. * Spaces are not allowed as part of class names or values. * Make sure there is a new line before EOF. Inputs -> file make-instance-f Returns -> list of instances History -> Sat Feb 4/95: Created _______________________________________________________________________Stef__|# (defun read-data-file (file make-instance-f) "Args: file make-instance-f Reads the UCI or C4.5 FILE and returns a list of instances." (let ((instances nil) (last-token nil)) (multiple-value-bind (f-comma commap) (get-macro-character #\,) (set-macro-character #\, #'comma-reader nil) (set-macro-character #\newline #'newline-reader nil) (with-open-file (stream file :direction :input) (loop (setq *eol* nil) (setq last-token (do ((token (read stream t) (read stream t)) (attribute-values nil)) (*eol* (if last-token (push (funcall make-instance-f last-token attribute-values) instances)) (return token)) (if last-token (setq attribute-values (nconc attribute-values (cons last-token nil)))) (setq last-token token))) (if (null last-token) (return)))) (set-macro-character #\, f-comma commap) (set-syntax-from-char #\newline #\newline)) (return-from read-data-file instances))) #|____________________________________________________________Sat Feb 4/95____ Function - COMMA-READER Special reader function for comma characters in UCI and C4.5 files. Inputs -> stream char Returns -> History -> Sat Feb 4/95: Created _______________________________________________________________________Stef__|# (defun comma-reader (stream char) "Args: stream char Special reader function for comma characters in UCI and C4.5 files." (declare (ignore stream char)) (values)) #|____________________________________________________________Sat Feb 4/95____ Function - NEWLINE-READER Special reader function for newline characters in UCI and C4.5 files. Inputs -> stream char Returns -> History -> Sat Feb 4/95: Created _______________________________________________________________________Stef__|# (defun newline-reader (stream char) "Args: stream char Special reader function for newline characters in UCI and C4.5 files." 
(declare (ignore char)) (setq *eol* t) (read stream nil nil t)) ;; EOF ======================================================================= From ericwan at choosh.eeap.ogi.edu Fri Jul 7 04:34:28 1995 From: ericwan at choosh.eeap.ogi.edu (Eric A. Wan) Date: Fri, 7 Jul 1995 16:34:28 +0800 Subject: OGI Research Assitantship Positions Message-ID: <9507072334.AA28903@choosh.eeap.ogi.edu> ------------------------------------------------------------ Ph.D. Research Assistantship Positions Available ***************************************************************** * * * OREGON GRADUATE INSTITUTE * * -of- * * SCIENCE & TECHNOLOGY * * * * Center for Information Technologies (CIT) * * * ***************************************************************** ***************************************************************** The Oregon Graduate Institute of Science and Technology (OGI) has several openings for outstanding Ph.D. students in its Center for Information Technologies. The center includes faculty from the department of Computer Science and Engineering and the department of Electrical Engineering and Applied Physics. Center members perform research in a broad range of information processing areas including nonlinear and adaptive signal processing, statistical computation, decision analysis, speech, images, prediction, control, economics, and finance. We are specifically looking for potential Ph.D. students who hold masters degrees in Computer Science or Electrical Engineering with knowledge and/or interest adaptive signal processing, statistics, speech and image processing, neural networks and machine learning. We seek qualified candidates to join research projects in Fall / Winter of 1995. Special funding opportunities are available for U.S. citizens and U.S. nationals, although foreign nationals will also be considered. Research areas include neural networks, adaptive signal processing, simulation of human auditory and visual perception, speech and image processing, time-series prediction, learning theory, algorithms, and architectures. Specific projects include speech enhancement in cellular communication, sunspot and solar flux forecasting, speech and image representation, novel techniques for regression, and economic and financial applications. Please send resumes or inquiries to: Todd K. Leen John Moody Eric A. Wan tleen at cse.ogi.edu moody at cse.ogi.edu ericwan at eeap.ogi.edu (503) 690-1160 (503) 690-1554 (503) 690-1164 Hynek Hermansky Misha Pavel hynek at eeap.ogi.edu pavel at eeap.ogi.edu (503)690-1136 (503)690-1155 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ General Information OGI is a young, but rapidly growing, private research institute located in the Portland area. OGI offers Masters and PhD programs in Computer Science and Engineering, Applied Physics, Electrical Engineering, Biology, Chemistry, Materials Science and Engineering, and Environmental Science and Engineering. For additional general information, contact: Office of Admissions and Records Oregon Graduate Institute PO Box 91000 Phone: (503) 690-1027 Portland, OR 97291 Email: registrar at admin.ogi.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Research Interests of Faculty and Postdocs in CIT Hynek Hermansky (Associate Professor, EEAP and CSE); Hynek Hermansky is interested in speech processing by humans and machines with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. 
His main research interest is in practical engineering models of human information processing. Todd K. Leen (Associate Professor, CSE and EEAP): Todd Leen's research spans theory of neural network models, architecture and algorithm design and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on Algorithm design is focused on fast algorithms for non-linear data modeling. John Moody (Associate Professor, CSE and EEAP): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics and finance. Misha Pavel (Associate Professor, CSE and EEAP): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human- computer interfaces. Eric A. Wan (Assistant Professor, EEAP and CSE): Eric Wan's research activities include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, adaptive control, active noise cancellation, and telecommunications. Andy Fraser (Associate Professor, Portland State University) Andrew Fraser's research interests include non-linear dynamics, information theory, signal modelling, prediction and detection. He is particularly interested in the application of modelling and prediction to signal encoding and detection problems. Holly Jimison (Assistant Professor, Oregon Health Sciences University) Dr. Jimison is the Director of the Informed Patient Decision Group at the Biomedical Information Communication Center at OHSU. She is interested in multimedia systems for patient-physician communication and in application of decision theory to shared medical decision making. Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems. Thorsteinn S. Rognvaldsson (Post-Doctoral Research Associate, CSE): Thorsteinn Rognvaldsson studies both applications and theory of neural networks and other non-linear methods for function fitting and classification. He is currently working on methods for choosing regularization parameters and also comparing the performance of neural networks with the performance of other techniques for time series prediction. Lizhong Wu (Senior Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance. 
From gcv at ukc.ac.uk Mon Jul 10 11:42:54 1995 From: gcv at ukc.ac.uk (gcv@ukc.ac.uk) Date: Mon, 10 Jul 95 11:42:54 BST Subject: II Brazilian Symposium on Neural Networks Message-ID: II Brazilian Symposium on Neural Networks ***************************************** October 18-20, 1995 Second call for papers Sponsored by the Brazilian Computer Science Society (SBC) You are cordially invited to attend the II Brazilian Symposium on Neural Networks (SBRN) which will be held at the University of Sao Paulo, campus of Sao Carlos, Sao Paulo. Sao Carlos with its 160.000 population is a pleasant university city known by its climate and high technology companies. Scientific papers will be analyzed by the program committee. This analysis will take into account originality, significance to the area, and clarity. Accepted papers will be fully published in the conference proceedings. The major topics of interest include, but are not limited to: Applications Architecture and Topology Biological Perspectives Cognitive Science Dynamic Systems Fuzzy Logic Genetic Algorithms Hardware Implementation Hybrid Systems Learning Models Otimisation Parallel and Distributed Implementations Pattern Recognition Robotics and Control Signal Processing Theoretical Models Program Committee: - Andre C. P. L. F. de Carvalho - ICMSC/USP - Dante Barone - II/UFRGS (Chairman) - Edson C B C Filho - DI/UFPE - Fernando Gomide - FEE/UNICAMP - Geraldo Mateus - DCC/UFMG - Luciano da Fontoura Costa - IFSC/USP - Rafael Linden - IBCCF/UFRJ - Paulo Martins Engel - II/UFRGS Organising Committee: - Aluizio Araujo - EESC/USP - Andre C. P. L. F. de Carvalho - ICMSC/USP (Chairman) - Dante Barone - II/UFRGS - Edson C B C Filho - DI/UFPE - Germano Vasconcelos - DI/UFPE - Glauco Caurin - EESC/USP - Luciano da Fontoura Costa - IFSC/USP - Roseli A. Francelin Romero - ICMSC/USP - Teresa B. Ludermir - DI/UFPE SUBMISSION PROCEDURE: The Symposium seeks contributions to the state of the art and future perspectives of Neural Networks research. A submitted paper must be in Portuguese, Spanish or English. The submissions must include the original paper and three more copies and must follow the format below (no E-mail or FAX submissions). The paper must be printed using a laser printer, in one-column format, not numbered, 8.5 X 11.0 inch (21,7 X 28.0 cm). It must not exceed six pages, including all figures and diagrams. The font size should be 10 pts, such as TIMES-ROMAN font or its equivalent with the following margins: right and left 2.5 cm, top 3.5 cm, and bottom 2.0 cm. The first page should contain the paper's title, the complete author(s) name(s), affiliation(s), and mailing address(es), followed by a short (150 words) abstract and list of descriptive key words and an acompanying letter. In the accompanying letter, the following information must be included: * Manuscript title * first author's name, mailing address and E-mail * Technical area SUBMISSION ADDRESS: Four copies (one original and three copies) should be submitted to: Andre C. P. L. F. 
de Carvalho - SBRN 95 Departamento de Ciencias de Computacao e Estatistica ICMSC - Universidade de Sao Paulo Caixa Postal 668 CEP 13560.070 Sao Carlos, SP Brazil Phone: +55 162 726222 FAX: +55 162 749150 E-mail: IISBRN at icmsc.sc.usp.br DEADLINES: July 30, 1995 (mailing date) Deadline for paper submission August 30, 1995 Notification to authors October 18-20, 1995 II SBRN MORE INFORMATION: * Up-to-minute information about the symposium is available on the World Wide Web (WWW) at http://www.icmsc.sc.usp.br * Questions can be sent by E-mail to IISBRN at icmsc.sc.usp.br Hope to see you in Sao Carlos! From markc at crab.psy.cmu.edu Mon Jul 10 15:20:09 1995 From: markc at crab.psy.cmu.edu (Mark Chappell) Date: Mon, 10 Jul 95 15:20:09 EDT Subject: Technical Report; Human memory Message-ID: <9507101920.AA22266@crab.psy.cmu.edu.psy.cmu.edu> The paper whose abstract appears below has recently been submitted. It (paper.ps or paper.ps.Z) may be anonymously ftped from hydra.psy.cmu.edu, in directory /pub/user/markc. If this is not possible hard copies may be obtained from Barbara Dorney; bd1q at crab.psy.cmu.edu. Technical Report PDP.CNS.95.2 Familiarity Breeds Differentiation: A Bayesian Approach to the Effects of Experience in Recognition Memory James L. McClelland and Mark Chappell As people become better at identifying studied items, they also become better at rejecting distractor items. A model of recognition is presented consisting of item detectors that learn estimates of conditional probabilities of items' features, exhibiting this differentiation effect. The model is used to account for a number of findings in the recognition memory literature, including a null list-strength effect, a list-length effect, non-linear effects of strengthening items on false recognition of similar distractors, a number of different kinds of mirror effects, and appropriate $z$-ROC curves. From niranjan at eng.cam.ac.uk Tue Jul 11 10:54:49 1995 From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk) Date: Tue, 11 Jul 95 10:54:49 BST Subject: JOB, Neural Computing, Signal Processing, medical Applications Message-ID: <9507110954.22966@baby.eng.cam.ac.uk> Cambridge University Engineering Department Research Assistantship in Neural Computing Applications are invited for a Research Assistant position in the area of Neural Computing applied to Nonstationary Medical Signal Processing in the monitoring of liver transplant patients. The project is funded by the EPSRC and is for 33 months, starting October 1995. Candidates for this post are expected to have a good first degree and preferably a postgraduate degree in a relevant discipline. Salary will be in the RA/1A scale, currently in the range #14,317 to #21,519 p.a. Application forms and further particulars may be obtained from Rachael West, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England. Phone: 44 1223 332739 , FaX: 44 1223 332662, Email: rw at eng.cam.ac.uk. Closing date for applications is 31 July 1995. Short listed applications are expected to be interviewed on 7 August 1995. Informal inquiries about the project to niranjan at eng.cam.ac.uk --------------------------------------------------------------- From austin at minster.cs.york.ac.uk Mon Jul 10 06:54:46 1995 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Mon, 10 Jul 95 06:54:46 Subject: No subject Message-ID: Certification of Neural Networks 8 month post at the University of York, UK. 
Applications are invited for a Research Assistant to investigate the problems of using neural networks in safety critical systems. The aim of the work is to find out what the limitations are when using neural networks in such systems, and to suggest ways in which they may be overcome. The work will be of interest to people who wish to study the way in which neural networks learn and their subsequent behaviour.

The post is open to individuals who have a good theoretical knowledge of neural networks or experience in safety and certification issues. Because some simulation work will be undertaken, knowledge of C and UNIX would be an advantage. Preference will be given to candidates who hold a PhD which involved neural networks. The candidate will join a major group of researchers working in the area of neural networks. The group is supported by high quality computing resources and technical personnel.

The post is supported by the EPSRC and is funded, in the first instance, for 8 months. The salary will be in the range 13,941 to 17,813 p.a. (UK pounds).

Four copies of a letter of application with full curriculum vitae, including the names of two referees, should be sent as soon as possible to Gary Morgan (address as below). More information on the post can be obtained from Dr. Jim Austin (01904 432734, austin at minster.york.ac.uk) or Dr. Gary Morgan (01904 432739, gary at minster.york.ac.uk).

Advanced Computer Architecture Group
Department of Computer Science
University of York
York YO1 5DD
UK

From denis at lima.psych.mcgill.ca Tue Jul 11 11:52:31 1995
From: denis at lima.psych.mcgill.ca (Denis Mareschal)
Date: Tue, 11 Jul 1995 11:52:31 -0400
Subject: paper on object permanence now available
Message-ID: <199507111552.LAA11333@lima.psych.mcgill.ca>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/mareschal.object_permanence.ps.Z

The following paper is now available for copying from the neuroprose archive. It will appear in the Proceedings of the Cognitive Science Society.

Title: Developing Object Permanence: A Connectionist Model (6 pages)
Authors: Denis Mareschal, Kim Plunkett, Paul Harris
Department of Experimental Psychology
South Parks Rd
University of Oxford
Oxford OX1 3UD
UK

Abstract: When tested on surprise or preferential looking tasks, young infants show an understanding that objects continue to exist even though they are no longer directly perceivable. Only later do infants show a similar level of competence when tested on retrieval tasks. Hence, a developmental lag is apparent between infants' knowledge as measured by passive response tasks, and their ability to demonstrate that knowledge in an active retrieval task. We present a connectionist model which learns to track and initiate a motor response towards objects. The model exhibits a capacity to maintain a representation of the object even when it is no longer directly perceptible, and acquires implicit tracking competence before the ability to initiate a manual response to a hidden object. A study with infants confirms the model's prediction concerning improved tracking performance at higher object velocities. It is suggested that the developmental lag is a direct consequence of the need to co-ordinate representations which themselves emerge through learning.

Thanks to Jordan Pollack for maintaining this archive service!
Instructions for retrieving the paper:
unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: (your email address)
ftp> cd /pub/neuroprose
ftp> bin
ftp> get mareschal.object_permanence.ps.Z
ftp> quit
unix> uncompress mareschal.object_permanence.ps.Z
unix> lpr -s mareschal.object_permanence.ps

Cheers,
DENIS MARESCHAL
DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY
OXFORD UNIVERSITY
SOUTH PARKS RD
OXFORD OX1 3UD
UK

From tirthank at titanic.mpce.mq.edu.au Wed Jul 12 22:11:06 1995
From: tirthank at titanic.mpce.mq.edu.au (Tirthankar Raychaudhuri)
Date: Thu, 13 Jul 1995 12:11:06 +1000 (EST)
Subject: Web page on Combining Estimators
Message-ID: <9507130211.AA27130@titanic.mpce.mq.edu.au>

From brunel at venus.roma1.infn.it Thu Jul 13 06:13:38 1995
From: brunel at venus.roma1.infn.it (Nicolas Brunel)
Date: Thu, 13 Jul 95 12:13:38 +0200
Subject: papers in neuroprose archive
Message-ID: <9507131013.AA01923@venus.roma1.infn.it>

FTP-host: archive.cis.ohio-state.edu

The following three papers are now available for copying from the neuroprose archive.

FTP-filename: /pub/neuroprose/brunel.dynamics.ps.Z
Title: Dynamics of an attractor neural network converting temporal into spatial correlations (29 pages)
Network: computation in neural systems, 5: 449
Author: Nicolas Brunel
Dipartimento di Fisica
Universita di Roma I La Sapienza
P.le Aldo Moro 2 - 00185 Roma
Italy

Abstract: The dynamics of a model attractor neural network, dominated by collateral feedback, composed of excitatory and inhibitory neurons described by afferent currents and spike rates, is studied analytically. The network stores stimuli learned in a temporal sequence. The statistical properties of the delay activities are investigated analytically under the approximation that no neuron is activated by more than one of the learned stimuli, and that inhibitory reaction is instantaneous. The analytic results reproduce the details of simulations of the model in which the stored memories are uncorrelated, and neurons can be shared, with low probability, by different stimuli. As such, the approximate analytic results account for delayed match to sample experiments of Miyashita in the inferotemporal cortex of monkeys. If the stimuli used in the experiment are uncorrelated, the analysis deduces the mean coding level $f$ in a stimulus (i.e. the mean fraction of neurons activated by a given stimulus) from the fraction of selective neurons which have a high correlation coefficient, of $f\sim 0.0125$. It also predicts the structure of the distribution of the correlation coefficients among neurons.
The present network includes several quasi-realistic features: neurons are represented by their afferent currents and output spike rates; excitatory and inhibitory neurons are separated; attractor spike rates as well as coding levels in arriving stimuli are low; learning takes place only between excitatory units. Synaptic dynamics is an unsupervised, analog Hebbian process, but long term memory in the absence of neural activity is maintained by a refresh mechanism which on long time scales discretizes the synaptic values, converting learning into an asynchronous stochastic process induced by the stimuli on the synaptic efficacies. This network is intended to learn a set of attractors from the statistics of freely arriving stimuli, which are represented by external synaptic inputs injected into the excitatory neurons. In the simulations different types of sequences of many thousands of stimuli are presented to the network that do not distinguish between retrieval and learning phases. Stimulus sequences differ in preassigned global statistics (including time dependent statistics); in orders of presentation of individual stimuli within a given statistics; in lengths of time intervals for each presentation and in the intervals separating one stimulus from another. We find that the network effectively learns a set of attractors representing the statistics of the stimuli, and is able to modify its attractors when the input statistics change. Moreover, as the global input statistics changes the network can also forget attractors related to stimulus classes no longer presented. Forgetting takes place only due to the arrival of new stimuli. The performance of the network and the statistics of the attractors are studied as a function of the input statistics. Most of the large scale characteristics of the learning dynamics can be captured theoretically. This model modifies a previous implementation of a LANN composed of discrete neurons, in a network of more realistic neurons. The different elements have been designed to facilitate their implementation in silicon. FTP-filename: /pub/neuroprose/brunel.spontaneous.ps.Z Title: Global spontaneous activity and local structured (learned) delay activity in cortex submitted to Journal of Neurophysiology Authors: Daniel J Amit and Nicolas Brunel Dipartimento di Fisica Universita di Roma I P.le Aldo Moro 2 -- 00185 Roma Italy Abstract: 1. We investigate the conditions under which cortical activity alone makes spontaneous activity self-reproducing and stable against fluctuations of spike rates. Invoking simple assumptions about properties of integrate-and-fire neurons it is shown that the stochastic background activity, of 1-5 spikes/second, cannot be stabilized when all neurons are excitatory. 2. On the other hand, spontaneous activity becomes self-stabilizing in presence of local inhibition: given reasonable values of the parameters of the network spontaneous activity reproduces itself and small fluctuations in the rate are suppressed. a. If the integration time constants of excitatory and inhibitory neurons at the soma are equal, {\em local} excitatory and inhibitory inputs to a neuron must balance to provide {\em local} stablility. b. If inhibition integrates faster its synaptic inputs, spontaneous activity is stable even when local recurrent excitation predominates. 3. In a network sustaining spontaneous rates of 1-5 spikes/second, we study the effect of learning in a local module, expressed in synaptic modifications in specific populations of synapses. 
We find:
a. Initially no stimulus-specific delay activity manifests itself. Instead, there is a delay activity in which, locally, {\em all} neurons selective to any of the stimuli learned have rates which gradually increase with the amplitude of synaptic potentiation.
b. When the average LTP increases beyond a critical value, specific local attractors appear abruptly against the background of the global uniform spontaneous attractor. This happens with either gradual or discrete stochastic LTP.
4. The above findings predict that in the process of learning unfamiliar stimuli, there is a stage in which all neurons selective to any of the learned stimuli enhance their spontaneous activity relative to the rest. Then, abruptly, selective delay activity appears. Both facts could be observed in single unit recordings in delayed match to sample experiments.
5. Beyond this critical learning strength the local module has two types of collective activity. It either participates in the global spontaneous activity, or it maintains a stimulus-selective elevated activity distribution. The particular mode of behavior depends on the stimulus: if it is unfamiliar, the activity is spontaneous; if similar to a learned stimulus, the delay activity is selective. These new attractors (delay activities) reflect the synaptic structure developed during learning. In each of them a small population of neurons has elevated rates, 20-30 spikes/second, depending on the strength of LTP. The remaining neurons of the module have their activity at spontaneous rates.

Instructions for retrieving these papers:
unix> ftp archive.cis.ohio-state.edu
login: anonymous
passwd: (your email address)
ftp> cd /pub/neuroprose
ftp> binary
ftp> get brunel.dynamics.ps.Z
ftp> get brunel.learning.ps.Z
ftp> get brunel.spontaneous.ps.Z
ftp> quit
unix> uncompress brunel.dynamics.ps.Z
unix> uncompress brunel.learning.ps.Z
unix> uncompress brunel.spontaneous.ps.Z

From chentouf at kepler.inpg.fr Thu Jul 13 11:53:20 1995
From: chentouf at kepler.inpg.fr (rachida)
Date: Thu, 13 Jul 1995 17:53:20 +0200
Subject: Incremental Learning
Message-ID: <199507131553.RAA11209@kepler.inpg.fr>

Using incremental neural network procedures to perform learning tasks is certainly a very attractive idea. These methods allow automatic tuning of the network size, which one otherwise does empirically, with a substantial risk of over- or under-estimation and an intractable, computationally expensive trial-and-error procedure. The main questions to address when dealing with evolving architectures are:
1. How to estimate the new unit's parameters?
2. How to connect this unit (or these units) to the previous network so that it is possible to carry on learning without restarting?
3. When to stop the adding process?
We recently published a paper in NPL (Neural Processing Letters, January 1995) presenting our new incremental procedure for supervised learning with noisy data. Each step consists of adding to the current network a new unit which is trained to learn the error of the network. The incremental step is repeated until the error of the current network reduces to the noise in the data. The stopping criterion is very simple and can be deduced directly from a statistical test on the estimated parameters of the new unit. Some experimental results on function approximation tasks demonstrate the efficacy of this new incremental scheme, especially in avoiding spurious minima and in designing a network with a well-suited size.
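To make the flavour of such a residual-fitting scheme concrete, here is a toy sketch (not the algorithm of the NPL paper: the Gaussian units, the least-squares fit of each new unit to the current residual, and the crude variance threshold are stand-ins for the statistical test on the new unit's parameters described above):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(-3.0, 3.0, 200)
    noise_var = 0.05 ** 2
    y = np.sin(2 * x) * np.exp(-0.2 * x ** 2) + rng.normal(0.0, np.sqrt(noise_var), x.size)

    def rbf(x, centre, width=0.8):
        return np.exp(-((x - centre) / width) ** 2)

    prediction = np.zeros_like(y)
    units = []                                    # (centre, amplitude) of each added unit
    for step in range(30):
        residual = y - prediction
        if np.var(residual) <= 1.5 * noise_var:   # stop once the residual looks like noise
            break
        centre = x[np.argmax(np.abs(residual))]   # place the new unit where the error is worst
        phi = rbf(x, centre)
        amp = phi @ residual / (phi @ phi)        # least-squares amplitude for this unit alone
        units.append((centre, amp))
        prediction = prediction + amp * phi       # the new unit learns the network's error

    print(f"{len(units)} units added; final residual variance {np.var(y - prediction):.4f}")

The network grows only while the residual error remains distinguishable from the noise, which is the property the stopping criterion above is meant to capture.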
The number of basic operations is also decreased and gives an average gain on convergence speed of about 20%. For more information, consult: ============================= C.Jutten and R.Chentouf. A New Scheme for Incremental Learning. Neural Processing Letters, Vol. 2, 1, pp. 1-4, 1995. R.Chentouf and C.Jutten. Incremental Learning with a Stopping Criterion: experimental results. In IWANN'95: From Natural to artificial Neural Computation, J. Mira and F. Sandoval (Eds.), Lecture Notes in Computer Science 930, Springer, pp. 519-526, June 7-9, 1995 +++++++++++++++++++++++ Mrs CHENTOUF Rachida LTIRF-INPG 46 AV Felix Viallet 38000 Grenoble France Tel : (+33) 76 57 45 50 Fax : (+33) 76 57 47 90 From chris at orion.eee.kcl.ac.uk Thu Jul 13 17:22:19 1995 From: chris at orion.eee.kcl.ac.uk (Chris Christodoulou) Date: Thu, 13 Jul 95 17:22:19 BST Subject: Call for Papers - NN workshop in Prague, April 1996. Message-ID: <9507131622.AA29461@orion.eee.kcl.ac.uk> Subject: Call for Papers - NN workshop in Prague, April 1996. CALL FOR PAPERS: NEuroFuzzy Workshop NEuroFuzzy Workshop on Computational Intelligence Prague, Czech Republic 16-18 April 1996 Prague, lying almost exactly in the centre of Europe has very good international connections. The international airport of Prague is only about 15 km from centre of the city. Prague has approx. 1,200,000 inhabitants and a lot of history and culture. There are also good railway and road connections to Prague (the distance from Munich, Vienna, Berlin and Nuerenberg is about 300 km. ------------------------------------------------------------------ Submissions due: November 6, 1995 Acceptance Notices mailed: January 15, 1996 Camera ready papers due: February 19, 1996 ------------------------------------------------------------------ Call for Papers - Technical Areas o Neuroscience o Computational Models of Neurons and Neural Nets o Organisational Principles o Learning o Fuzzy Logic o Genetic algorithms o Hardware Implementation o Intelligent Systems for Perception o Intelligent Systems for Communications Systems o Intelligent Systems for Control and Robotics ------------------------------------------------------------------ You are invited to submit original papers addressing topics of interest for presentation at the conference and inclusion in the conference proceedings. Submissions should be in-depth, technical papers describing recent research and development results. Some tutorial papers are also welcome. The title page of your submission must include: 1) the name, complete return address, email, telephone and fax numbers of the author to whom all correspondence will be sent, 2) a maximum of 100-words abstract, 3) the designation of the topic (see listing) to which the paper is most closely related. All other pages should be marked with the title of the paper and the name of the first author. Send five double-spaced copies of the manuscript (limited to 3000 words) in English to: Prof. Dr.-Ing. Mirko Novak Czechoslowak Academy of Sciences Inst. for Computer and Information Science Pod vodarenskko vezi 18207 Prag 8 Czech Republic ------------------------------------------------------------------ Steering Committee Mirko NOVAK CZECH Republic Czeslaw JEDRZEJEK POLAND Valerieu BEIU ROMANIA Tamas ROSKA HUNGARY Prof A FROLOV RUSSIA Prof Gustin SLOVENIA Francisco SANDOVAL SPAIN Trevor CLARKSON UK Stamatios KARTALOPOULOS USA ------------------------------------------------------------------ A registration form will be available shortly. 
The registration fee for the workshop is not expected to exceed $250. A range of accommodation will be available, including student accommodation. ------------------------------------------------------------------ For further details: Dr Trevor Clarkson Chairman, IEEE UKRI Neural Networks Regional Interest Group Department of Electronic and Electrical Engineering King's College London, Strand, London WC2R 2LS, UK Tel: +44 171 873 2367 Fax: +44 171 836 4781 Email: trevor at orion.eee.kcl.ac.uk ------------------------------------------------------------------ New information will be available on WWW, URL http://crg.eee.kcl.ac.uk/rig/neurofuz.htm ---END------------------------------------------------------------ From dwang at cis.ohio-state.edu Thu Jul 13 15:40:37 1995 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Thu, 13 Jul 1995 15:40:37 -0400 Subject: Tech reports available: Incremental learning Message-ID: <199507131940.PAA02136@shirt.cis.ohio-state.edu> The following Technical Report is available via FTP/WWW: ------------------------------------------------------------------ Incremental Learning of Complex Temporal Patterns ------------------------------------------------------------------ DeLiang Wang and Budi Yuwono Technical Report: OSU-CISRC-6/95-TR30 Department of Computer and Information Science, The Ohio State University A neural model for temporal pattern generation is used and analyzed for training with multiple complex sequences in a sequential manner. The network exhibits some degree of interference when new sequences are acquired. It is proven that the model is capable of incrementally learning a finite number of complex sequences. The model is then evaluated with a large set of highly correlated sequences. While the number of intact sequences increases linearly with the number of previously acquired sequences, the amount of retraining due to interference appears to be independent of the size of existing memory. The model is extended to include a chunking network which detects repeated subsequences between and within sequences. The chunking mechanism substantially reduces the amount of retraining in sequential training. Thus, the network investigated here constitutes an effective sequential memory. Various aspects of such a memory are discussed at the end of the paper. (34 pages - 239 KB) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu FTP-filename: /pub/leon/Wang-Yuwono.tech.ps.Z or for WWW: http://www.cis.ohio-state.edu/~dwang Comments are most welcome - Please send to DeLiang Wang (dwang at cis.ohio-state.edu) ---------------------------------------------------------------------------- FTP instructions: To retrieve and print the file, use the following commands: unix> ftp ftp.cis.ohio-state.edu Name: anonymous Password: (your email address) ftp> binary ftp> cd /pub/leon ftp> get Wang-Yuwono.tech.ps.Z ftp> quit unix> uncompress Wang-Yuwono.tech.ps.Z unix> lpr Wang-Yuwono.tech.ps (It may not ghostview well - missing page count with my ghostview - but it should print ok) ---------------------------------------------------------------------------- From pja at lfs.loral.com Thu Jul 13 13:45:04 1995 From: pja at lfs.loral.com (Peter J. Angeline) Date: Thu, 13 Jul 1995 13:45:04 -0400 Subject: EP96 Change in Tech Chairs Message-ID: <9507131345.ZM15302@barbarian.endicott.ibm.com> ECers Everywhere, EP96 had a minor reorgnization. Please note that the addresses for sending submissions to the conference are down to 2 now! 
Submissions are due to one of the technical chairs by September 26th. Pete Angeline Thomas Baeck ********************** EP96 The Fifth Annual Conference On Evolutionary Programming February 29 to March 3, 1996 Sheraton Harbor Island Hotel San Diego, CA, USA Sponsored by The Evolutionary Programming Society The Fifth Annual Conference on Evolutionary Programming will serve as a forum for researchers investigating applications and theory of evolutionary programming and other related areas in evolutionary and natural computation. Topics of interest include but are not limited to the use of evolutionary simulations in optimization, neural network training and design, automatic control, image processing, and other applications, as well as mathematical theory or empirical analysis providing insight into the behavior of such algorithms. Of particular interest are applications of simulated evolution to problems in biology. Conference Committee General Chairman: Lawrence J. Fogel, Natural Selection, Inc. Technical Program Co-Chairs: Peter J. Angeline, Loral Federal Systems Thomas Baeck, Informatik Centrum Dortmund Finance Chair: V. William Porto, Orincon Corporation Local Arrangements: Ward Page, Naval Command Control and Ocean Surveillance Center Conference World Wide Web Page: http://www.aic.nrl.navy.mil/galist/EP96/ Submission Information Authors are invited to submit papers which describe original unpublished research in evolutionary programming, evolution strategies, genetic algorithms and genetic programming, artificial life, cultural algorithms, and other models that rely on evolutionary principles. Specific topics include but are not limited to the use of evolutionary simulations in optimization, neural network training and design, automatic control, image processing, and other applications, as well as mathematical theory or empirical analysis providing insight into the behavior of such algorithms. Of particular interest are applications of simulated evolution to problems in biology. Hardcopies of manuscripts must be received by one of the technical program co-chairs by September 26, 1995. Electronic submissions cannot be accepted. Papers should be clear, concise, and written in English. Papers received after the deadline will be handled on a time- and space-available basis. The notification of the program committee's review decision will be mailed by November 30, 1995. Papers eligible for the student award must be marked appropriately for consideration (see below). Camera ready papers are due at the conference, and will be published shortly after its completion. Submissions should be single-spaced, 12 pt. font and should not exceed 15 pages including figures and references. Send five (5) copies of the complete paper to: In Europe: Thomas Baeck Informatik Centrum Dortmund Joseph-von-Fraunhofer-Str. 20 D-44227 Dortmund Germany Email: baeck at home.informatik.uni-dortmund.de In US: Peter J. Angeline Loral Federal Systems 1801 State Route 17C Mail Drop 0210 Owego, NY 13827 Email: pja at lfs.loral.com Authors outside Europe or the United States may send their paper to any of the above technical chairmen at their convenience. Evolutionary Programming Society Award for Best Student Paper In order to foster student contributions and encourage exceptional scholarship in evolutionary programming and closely related fields, the Evolutionary Programming Society awards one exceptional student paper submitted to the Annual Conference on Evolutionary Programming. 
The award carries a $500 cash prize and a plaque signifying the honor. To be eligible for the award, all authors of the paper must be full-time students at an accredited college, university or other educational institution. Submissions to be considered for this award must be clearly marked at the top of the title page with the phrase "CONSIDER FOR STUDENT AWARD." In addition, the paper should be accompanied by a cover letter stating that (1) the paper is to be considered for the student award, (2) all authors are currently enrolled full-time students at a university, college or other educational institution, and (3) that the student authors are responsible for the work presented. Only papers submitted to the conference and marked as indicated will be considered for the award. Late submissions will not be considered. Officers of the Evolutionary Programming Society, students under their immediate supervision, and their immediate family members are not eligible. Judging will be made by officers of the Evolutionary Programming Society or by an Awards Committee appointed by the president. Judging will be based on the perceived technical merit of the student's research to the field of evolutionary programming, and more broadly to the understanding of self-organizing systems. The Evolutionary Programming Society and/or the Awards Committee reserves the right not to give an award in any year if no eligible student paper is deemed to be of award quality. Presentation of the Student Paper Award will be made at the conference. Important Dates --------------- September 26, 1995 - Submission deadline for papers November 30, 1995 - Notification sent to authors February 29, 1996 - Conference Begins Program Committee: J. L. Breeden, Santa Fe Institute M. Conrad, Wayne State University K. A. De Jong, George Mason University T. M. English, Texas Tech University D. B. Fogel, Natural Selection, Inc. G. B. Fogel, University of California at Los Angeles R. Galar, Technical University of Wroclaw P. G. Harrald, University of Manchester Institute of Science and Technology K. E. Kinnear, Adaptive Systems J. R. McDonnell, Naval Command Control and Ocean Surveillance Center Z. Michalewicz, University of North Carolina F. Palmieri, University of Connecticut R. G. Reynolds, Wayne State University S. H. Rubin, Central Michigan University G. Rudolph, University of Dortmund N. Saravanan, Ford Research H.-P. Schwefel, University of Dortmund A. V. Sebald, University of California at San Diego W. M. Spears, Naval Research Labs D. E. Waagen, TRW Systems Integration Group -- +----------------------------------------------------------------------------+ | Peter J. Angeline, PhD | | | Advanced Technologies Dept. | | | Loral Federal Systems | | | State Route 17C | I have nothing to say, | | Mail Drop 0210 | and I am saying it. | | Owego, NY 13827-3994 | | | Voice: (607)751-4109 | - John Cage | | Fax: (607)751-6025 | | | Email: pja at lfs.loral.com | | +----------------------------------------------------------------------------+
From cnna96 at cnm.us.es Fri Jul 14 05:51:10 1995 From: cnna96 at cnm.us.es (4th Workshop on CNN's and Applications) Date: Fri, 14 Jul 95 11:51:10 +0200 Subject: CNNA'96 Call for papers Message-ID: <9507140951.AA22422@cnm1.cnm.us.es> PRELIMINARY CALL FOR PAPERS 4th IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND APPLICATIONS (CNNA-96) June 24-26, 1996 (Jointly Organized with NDES-96) Escuela Superior de Ingenieros de Sevilla Centro Nacional de Microelectrónica Sevilla, Spain ------------------------------------------------------------------------------ ORGANIZING COMMITTEE: Prof. J.L. Huertas (Chair) Prof. A. Rodríguez-Vázquez Prof. R. Domínguez-Castro SECRETARY: Dr. S. Espejo TECHNICAL PROGRAM: Prof. A. Rodríguez-Vázquez PROCEEDINGS: Prof. R. Domínguez-Castro SCIENTIFIC COMMITTEE: Prof. N.N. Aizemberg, Univ. of Uzhgorod, Ukraine Prof. L.O. Chua, Univ. of Cal. at Berkeley, U.S.A. Prof. V. Cimagalli, Univ. of Rome, Italy Prof. T.G. Clarkson, King's College London, U.K. Prof. A.S. Dmitriev, Academy of Sciences, Russia Prof. M. Hasler, EPFL, Switzerland Prof. J. Herault, Nat. Ins. of Tech., France Prof. J.L. Huertas, Nat. Microelectronics Center, Spain Prof. S. Jankowski, Tech. Univ. of Warsaw, Poland Prof. J. Nossek, Tech. Univ. Munich, Germany Prof. V. Porra, Tech. Univ. of Helsinki, Finland Prof. T.
Roska, MTA-SZTAKI, Hungary Prof. M. Tanaka, Sophia Univ., Japan Prof. J. Vandewalle, Kath. Univ. Leuven, Belgium ------------------------------------------------------------------------------ GENERAL SCOPE OF THE WORKSHOP AND VENUE The CNNA series of workshops aims to provide a biennial international forum to present and discuss recent advances in Cellular Neural Networks. Following the successful conferences in Budapest (1990), Munich (1992), and Rome (1994), the fourth workshop will be held in Seville during 1996, organized by the National Microelectronic Center and the School of Engineering of Seville. Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2,500-year history with modern infrastructures in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport serving several daily direct international flights, as well as many connections to international flights via Madrid. ------------------------------------------------------------------------------ PAPERS SUBMISSION Papers on all aspects of Cellular Neural Networks are welcome. Topics of interest include, but are not limited to: - Basic Theory - Applications - Learning - Software Implementations and CNN Simulators - CNN Computers - CNN Chips - CNN System Development and Testing Prospective authors are invited to submit 4-page summaries of their papers to the Conference Secretariat. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in an IEEE-sponsored Proceedings. ------------------------------------------------------------------------------ AUTHOR'S SCHEDULE Submission of summaries: ................ January 31, 1996 Notification of acceptance: ............. March 31, 1996 Submission of camera-ready papers: ...... May 15, 1996 ------------------------------------------------------------------------------ PRELIMINARY REGISTRATION FORM Fourth IEEE Int. Workshop on Cellular Neural Networks and their Applications CNNA'96 Sevilla, Spain, June 24-26, 1996 I wish to attend the workshop. Please send Program and registration form when available. Name: ................______________________________ Mailing address: .....______________________________ Phone: ...............______________________________ Fax: .................______________________________ E-mail: ..............______________________________ Please complete and return to: CNNA'96 Secretariat. Department of Analog Circuit Design, Centro Nacional de Microelectrónica Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN FAX: +34-5-4231832 Phone: +34-5-4239923 E-mail: cnna96 at cnm.us.es ------------------------------------------------------------------------------ From barto at cs.umass.edu Fri Jul 14 17:46:22 1995 From: barto at cs.umass.edu (Andy Barto) Date: Fri, 14 Jul 95 16:46:22 -0500 Subject: post doc position Message-ID: Dear Colleague: I am looking for a postdoctoral researcher for a project using mathematical and computer models to refine and test hypotheses about how the cerebellum and motor cortex function together to support motor activity.
We are constructing a large-scale model of the cerebellum and associated premotor circuits that is constrained by the anatomy and physiology, but that is also abstract enough to allow us to explore its control abilities in a computationally feasible manner. We are specifically focusing on the learning of skilled reaching behavior. The postdoc will interact strongly with physiologists in collaborating laboratories who study motor control in animals, but should be most skilled in computational approaches to motor control and in adaptive neural network simulation. If you are interested in this position, which will be available this Fall, or in additional details, please contact Gwyn Mitchell (mitchell at cs.umass.edu, tel (413) 545-1309, fax (413) 545-1249). If you know of any suitable candidate who might be interested in this position, I would appreciate it if you could pass this information along to them. Thank you very much. Sincerely, Andrew G. Barto, Professor Computer Science Department, LGRC University of Massachusetts Box 34610 Amherst MA 01003-4610 Refs: Houk et al., Trends in Neurosciences 16, pp. 27-33, 1993; Houk and Barto, in G. E. Stelmach and J. Requin, eds., Tutorials in Motor Behavior II, pp. 71-100, Elsevier, 1992. ----------------------------------------------------------------------------- From king at cs.cuhk.hk Fri Jul 14 18:55:19 1995 From: king at cs.cuhk.hk (Dr. Irwin K. King) Date: Sat, 15 Jul 1995 06:55:19 +0800 (HKT) Subject: ICONIP'96 CFP Message-ID: <9507142242.AA10879@cucs18.cs.cuhk.hk> FIRST CALL FOR PAPERS 1996 INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING The Annual Conference of the Asian Pacific Neural Network Assembly ICONIP'96, September 24 - 27, 1996 Hong Kong Exhibition and Convention Center, Wan Chai, Hong Kong The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing. The conference further serves to stimulate local and regional interests in neural information processing and its potential applications to industries indigenous to this region. CONFERENCE TOPICS ================= * Theory * Algorithms & Architectures * Applications * Supervised/Unsupervised Learning * Hardware Implementations * Hybrid Systems * Neurobiological Systems * Associative Memory * Visual & Speech Processing * Intelligent Control & Robotics * Cognitive Science & AI * Recurrent Net & Dynamics * Image Processing * Pattern Recognition * Computer Vision * Time Series Prediction * Financial Engineering * Optimization * Fuzzy Logic * Evolutionary Computing * Other Related Areas CONFERENCE SCHEDULE ===================== Submission of papers February 1, 1996 Notification of acceptance May 1, 1996 Early registration deadline July 1, 1996 SUBMISSION INFORMATION ====================== Authors are invited to submit one camera-ready original and five copies of the manuscript written in English on A4-format white paper with one inch margins on all four sides, in one column format, no more than six pages including figures and references, single-spaced, in Times-Roman or similar font of 10 points or larger, and printed on one side of the page only. Electronic or fax submission is not acceptable. Additional pages will be charged at USD $50 per page. Centered at the top of the first page should be the complete title, author(s), affiliation, mailing, and email addresses, followed by an abstract (no more than 150 words) and the text.
Each submission should be accompanied by a cover letter indicating the contacting author, affiliation, mailing and email addresses, telephone and fax number, and preference of technical session(s) and format of presentation, either oral or poster (both are published). All submitted papers will be refereed by experts in the field based on quality, clarity, originality, and significance. Authors may also retrieve the ICONIP style, "iconip.tex" and "iconip.sty" files for the conference by anonymous FTP at ftp.cs.cuhk.hk in the directory /pub/iconip96. For further information, inquiries, and paper submissions please contact ICONIP'96 Secretariat Department of Computer Science The Chinese University of Hong Kong Shatin, N.T., Hong Kong Fax (852) 2603-5024 E-mail: iconip96 at cs.cuhk.hk http://www.cs.cuhk.hk/iconip96 ====================================================================== General Co-Chairs ================= Omar Wing, CUHK Shun-ichi Amari, Tokyo U. Advisory Committee ================== International ------------- Yaser Abu-Mostafa, Caltech Michael Arbib, U. Southern Cal. Leo Breiman, UC Berkeley Jack Cowan, U. Chicago Rolf Eckmiller, U. Bonn Jerome Friedman, Stanford U. Stephen Grossberg, Boston U. Robert Hecht-Nielsen, HNC Geoffrey Hinton, U. Toronto Anil Jain, Michigan State U. Teuvo Kohonen, Helsinki U. of Tech. Sun-Yuan Kung, Princeton U. Robert Marks, II, U. Washington Thomas Poggio, MIT Harold Szu, US Naval SWC John Taylor, King's College London David Touretzky, CMU C. v. d. Malsburg, Ruhr-U. Bochum David Willshaw, Edinburgh U. Asia-Pacific Region ------------------- Marcelo H. Ang Jr, NUS, Singapore Sung-Yang Bang, POSTECH, Korea Hsin-Chia Fu, NCTU., Taiwan Toshio Fukuda, Nagoya U., Japan Kunihiko Fukushima, Osaka U., Japan Zhenya He, Southeastern U., China Marwan Jabri, U. Sydney, Australia Nikola Kasabov, U. Otago, New Zealand Yousou Wu, Tsinghua U., China Organizing Committee ==================== L.W. Chan (Co-Chair), CUHK K.S. Leung (Co-Chair), CUHK D.Y. Yeung (Finance), HKUST C.K. Ng (Publication), CityUHK A. Wu (Publication), CityUHK K.P. Lam (Publicity), CUHK M.W. Mak (Local Arr.), HKPU C.S. Tong (Local Arr.), HKBU T. Lee (Registration), CUHK M. Stiber (Registration), HKUST K.P. Chan (Tutorial), HKU H.T. Tsui (Industry Liaison), CUHK I. King (Secretary), CUHK Program Committee ================= Co-Chairs --------- Lei Xu, CUHK Michael Jordan, MIT Erkki Oja, Helsinki Univ. of Tech. Mitsuo Kawato, ATR Members ------- Yoshua Bengio, U. Montreal Chris Bishop, Aston U. Leon Bottou, Neuristique Gail Carpenter, Boston U. Laiwan Chan, CUHK Huishen Chi, Peking U. Peter Dayan, MIT Kenji Doya, ATR Scott Fahlman, CMU Francoise Fogelman, SLIGOS Lee Giles, NEC Research Inst. Michael Hasselmo, Harvard U. Kurt Hornik, Technical U. Wien Steven Nowlan, Synaptics Jeng-Neng Hwang, U. Washington Nathan Intrator, Technion Larry Jackel, AT&T Bell Lab Adam Kowalczyk, Telecom Australia Soo-Young Lee, KAIST Todd Leen, Oregon Grad. Inst. Cheng-Yuan Liou, National Taiwan U. David MacKay, Cavendish Lab Eric Mjolsness, UC San Diego John Moody, Oregon Grad. Inst. Nelson Morgan, ICSI Michael Perrone, IBM Watson Lab Ting-Chuen Pong, HKUST Paul Refenes, London Business School Hava Siegelmann, Technion Ah Chung Tsoi, U. Queensland Benjamin Wah, U. Illinois Andreas Weigend, Colorado U. Ronald Williams, Northeastern U. John Wyatt, MIT Alan Yuille, Harvard U. 
Richard Zemel, CMU From ecm at nijenrode.nl Sat Jul 15 04:47:57 1995 From: ecm at nijenrode.nl (Edward Malthouse) Date: Sat, 15 Jul 1995 10:47:57 +0200 (MET DST) Subject: nonparametric reg / nonlin feature extraction Message-ID: <199507150847.KAA27338@bordeaux.nijenrode.nl> The following dissertation is available via anonymous FTP: Nonlinear Partial Least Squares By Edward C. Malthouse Key words: nonparametric regression, partial least squares (PLS), principal components regression (PCR), projection pursuit regression (PPR), feedforward neural networks, nonlinear feature extraction, principal components analysis (PCA), nonlinear principal components analysis (NLPCA), principal curves and surfaces. A B S T R A C T We propose a new nonparametric regression method for high-dimensional data, nonlinear partial least squares (NLPLS). NLPLS is motivated by projection-based regression methods, e.g., partial least squares (PLS), projection pursuit (PPR), and feedforward neural networks. The model takes the form of a composition of two functions. The first function in the composition projects the predictor variables onto a lower-dimensional curve or surface yielding scores, and the second predicts the response variable from the scores. We implement NLPLS with feedforward neural networks. NLPLS will often produce a more parsimonious model (fewer score vectors) than projection-based methods, and the model is well suited for detecting outliers and future covariates requiring extrapolation. The scores are also shown to have useful interpretations. We also extend the model for multiple response variables and discuss situations when multiple response variables should be modeled simultaneously and when they should be modeled with separate regressions. We provide empirical results from mathematical and chemical engineering examples which evaluate the performances of PLS, NLPLS, PPR, and three-layer neural networks on (1) response variable predictions, (2) model parsimony, (3) computational requirements, and (4) robustness to starting values. The curves and surfaces used by NLPLS are motivated by the nonlinear principal components analysis (NLPCA) method of doing nonlinear feature extraction. We develop certain properties of NLPCA and discuss its relation to the principal curve method. Both methods attempt to reduce the dimension of a set of multivariate observations by fitting a curve through the middle of the observations and projecting the observations onto this curve. The two methods fit their models under a similar objective function, with one important difference: NLPCA defines the function which maps observed variables to scores (projection index) to be continuous. We show that the effects of this constraint are (1) NLPCA is unable to model curves and surfaces which intersect themselves and (2) the NLPCA ``projections'' are suboptimal producing larger approximation error. We show how NLPCA score values can be interpreted and give the results of a small simulation study comparing the two methods. The dissertation is 120 pages long (single spaced). ftp mkt2715.kellogg.nwu.edu logname: anonymous password: your email address cd /pub/ecm binary get dissert.ps.gz quit gzip -d dissert.ps lp -dps dissert.ps # or however you print postscript I'm sorry, but no hardcopies are available. 
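The two-stage composition described in the abstract above (a projection stage that maps the predictor variables onto a low-dimensional score, followed by a stage that predicts the response from the score) can be illustrated with a small gradient-descent sketch. This is only a toy illustration of the general idea, not Malthouse's implementation: the single linear projection, the one-hidden-layer predictor, the training details, and all variable names are assumptions made here for brevity.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy data: five correlated predictors, response depends nonlinearly on one direction.
  X = rng.normal(size=(200, 5))
  t_true = X @ np.array([1.0, 0.5, -0.5, 0.2, 0.0])
  y = np.sin(t_true) + 0.1 * rng.normal(size=200)

  # Stage 1: projection of the predictors onto a single score (a linear stand-in here).
  w_proj = rng.normal(scale=0.1, size=5)

  # Stage 2: small one-hidden-layer network predicting the response from the score.
  H = 10
  W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
  W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

  def forward(X):
      t = X @ w_proj                      # scores, shape (n,)
      h = np.tanh(t[:, None] * W1 + b1)   # hidden layer, shape (n, H)
      yhat = (h @ W2 + b2).ravel()        # predicted response, shape (n,)
      return t, h, yhat

  lr, n = 0.01, len(y)
  for epoch in range(3000):
      t, h, yhat = forward(X)
      err = yhat - y                                     # d(squared error)/d(yhat), up to a constant
      dW2 = h.T @ err[:, None] / n
      db2 = err.mean(keepdims=True)
      dpre = (err[:, None] @ W2.T) * (1.0 - h ** 2)      # back through tanh
      dW1 = (t[:, None] * dpre).sum(axis=0, keepdims=True) / n
      db1 = dpre.mean(axis=0)
      dw_proj = X.T @ ((dpre * W1).sum(axis=1)) / n      # back through the projection stage
      for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2), (w_proj, dw_proj)):
          p -= lr * g

  print("final mean squared error:", np.mean((forward(X)[2] - y) ** 2))

Replacing the linear projection by a small network of its own would give nonlinear "curves and surfaces" in the spirit of the abstract; the point of the sketch is only the composition of a score-producing stage with a score-to-response stage, trained jointly.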
From munro at lis.pitt.edu Sun Jul 16 00:08:07 1995 From: munro at lis.pitt.edu (Paul Munro) Date: Sun, 16 Jul 1995 00:08:07 -0400 (EDT) Subject: "Orthogonality" of the generalizers being combined In-Reply-To: <9507020002.AA25348@sfi.santafe.edu> Message-ID: On Sat, 1 Jul 1995, David Wolpert wrote: > > In his recent posting, Nathan Intrator writes > > >>> > combining, or in the simple case > averaging estimators is effective only if these estimators are made > somehow to be independent. > >>> > > This is an extremely important point. Its importance extends beyond (stuff deleted) > In other words, although those generalizers are about as different > from one another as can be, *as far as the data set in question was > concerned*, they were practically identical. This is a great flag that > one is in a data-limited scenario. I.e., if very different > generalizers perform identically, that's a good sign that you're > screwed. > > Which is a round-about way of saying that the independence Nathan > refers to is always with respect to the data set at hand. This is > discussed in a bit of detail in the papers referenced below. > > *** > > Getting back to the precise subject of Nathan's posting: Those > interested in a formal analysis touching on how the generalizers being > combined should differ from one another should read the Ander Krough > paper (to come out in NIPS7) that I mentioned in my previous > posting. A more intuitive discussion of this issue occurs in my > original paper on stacking, where there's a whole page of text > elaborating on the fact that "one wants the generalizers being > combined to (loosely speaking) 'span the space' of algorithms and be > 'mutually orthogonal'" to as much a degree as possible. (more stuff deleted) Bambang Parmanto and I have found that negative correlation among the individual classifiers can improve committee performance even more than zero correlation. So rather than a zero inner product (orthogonality), a negative inner product is preferable. Of course, this may be just a matter of definition -- our comparisons are made using the error vector on a test set. That is, it's better for errors to be independent than it is for them to be coincident, but it's even better if the coincidence is below the expected coincidence rate for independent classifiers. Note that to achieve a significant level of negative correlation, the overall generalization performance must be fairly high... From koiran at ICSI.Berkeley.EDU Sun Jul 16 22:12:52 1995 From: koiran at ICSI.Berkeley.EDU (Pascal Koiran) Date: Sun, 16 Jul 1995 19:12:52 -0700 Subject: "Orthogonality" of the generalizers being combined Message-ID: <199507170212.TAA04679@spare.ICSI.Berkeley.EDU> Regarding this whole thread on "combining generalizers", I am surprised that no one has ever mentioned the extensive work on "expert advice" in computational learning theory. Is this simply due to ignorance, or is there a more subtle reason? As suggested by the list moderators, here are the names of a few people who have done relevant work: Cesa-Bianchi, Freund, Haussler, Helmbold, Kivinen, Littlestone, Schapire. The seminal paper in the expert advice / on-line learning line of research seems to be: N. Littlestone (1988) Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning 2, 285-318. I am by no means an expert (no pun intended) in this area, so if you feel that your name was unfairly omitted from this list, I beg your forgiveness. Pascal Koiran.
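The point raised in Munro's message above (that negatively correlated errors help a committee even more than independent errors do) can be checked with a small synthetic experiment. The sketch below is purely illustrative and is not the Parmanto/Munro study: three classifiers with the same individual error rate are combined by majority vote under coincident, independent, and anti-coincident error patterns, and all names and numbers are invented for the example.

  import numpy as np

  rng = np.random.default_rng(1)
  n, p_err = 100000, 0.2          # test-set size and individual error rate

  def committee_error(err):
      # err[i, k] = 1 if classifier k is wrong on test case i; a majority vote of
      # three members is wrong whenever at least two of them are wrong.
      return np.mean(err.sum(axis=1) >= 2)

  # Independent errors.
  indep = (rng.random((n, 3)) < p_err).astype(int)

  # Fully coincident (positively correlated) errors: all members share their mistakes.
  shared = (rng.random(n) < p_err).astype(int)
  coincident = np.stack([shared, shared, shared], axis=1)

  # Anti-coincident (negatively correlated) errors: at most one member is wrong per
  # case, arranged so that each member keeps the same individual error rate.
  anti = np.zeros((n, 3), dtype=int)
  mistake = rng.random(n) < 3 * p_err
  anti[np.arange(n), rng.integers(0, 3, size=n)] = mistake.astype(int)

  for name, mat in [("coincident", coincident), ("independent", indep), ("anti-coincident", anti)]:
      print(f"{name:15s} individual error {mat.mean():.3f}   committee error {committee_error(mat):.3f}")

With an individual error rate of 0.2, the committee error comes out near 0.2 when errors are fully coincident, near 0.104 when they are independent, and essentially 0 in the anti-coincident case, which is the ordering the message describes.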
From georgiou at wiley.csusb.edu Mon Jul 17 14:07:35 1995 From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu) Date: Mon, 17 Jul 1995 11:07:35 -0700 Subject: LFP: First Int'l Conf. on Computational Intelligence and Neurosciences Message-ID: <199507171807.AA12508@wiley.csusb.edu> Please note the July 24, 1995, deadline. Papers are also accepted in TeX/LaTeX or postscript via email. For the full text of the call for papers please see: ftp://www.csci.csusb.edu/georgiou/ICCIN-95 and also ftp://www.csci.csusb.edu/georgiou/JCIS-95 ----------------------------------------------------------------------- Last Call for Papers FIRST INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCES September 28 to October 1, 1995. ``Shell Island'' Hotels of Wrightsville Beach, North Carolina, USA. Plenary Speakers include: James Anderson (Brown University) Subhash Kak (Louisiana State University) Haluk Ogmen (University of Houston) Ed Page (University of South Carolina) Jeffrey Sutton (Harvard University) L.E.H. Trainor (University of Toronto) Gregory H. Wakefield (University of Michigan) Summary Deadline: July 24, 1995 Decision & Notification: August 5, 1995 Send summaries to: George M. Georgiou Computer Science Department California State University San Bernardino, CA 92407 georgiou at wiley.csusb.edu Papers will be accepted based on summaries. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum) with figures and tables included. From R.Poli at cs.bham.ac.uk Mon Jul 17 18:28:44 1995 From: R.Poli at cs.bham.ac.uk (R.Poli@cs.bham.ac.uk) Date: Mon, 17 Jul 95 18:28:44 BST Subject: PhD Studentships Message-ID: <7327.9507171728@sonic.cs.bham.ac.uk> Dear Colleagues, Could you please circulate the following advertisement for PhD studentships? Thank you very much, Riccardo Poli Dr. Riccardo Poli E-mail: R.Poli at cs.bham.ac.uk School of Computer Science Telephone: +44-121-414-3739 The University of Birmingham Fax: +44-121-414-4281 Edgbaston, Birmingham B15 2TT, UK ---------------------------------------------------------------------- The University of Birmingham School of Computer Science Research Studentships in ~~~~~~~~ ~~~~~~~~~~~~ EMERGENT AND EVOLUTIONARY BEHAVIOUR, INTELLIGENCE, AND COMPUTATION (EEBIC) Applications are invited for a number of Studentships for full-time PhD research in the School of Computer Science to carry out research within the recently founded EEBIC group.
The group's research interests include: evolutionary computation (e.g. genetic algorithms and genetic programming), emergent behaviour, emergent intelligence (e.g. emergent communication), emergent computation and artificial life and their practical applications in hard engineering problems. The members of the group, at Birmingham and elsewhere, are active researchers in Artificial Intelligence, Engineering or Psychology with a variety of different backgrounds including Biology, Computer Science, Engineering, Psychology and Philosophy. In addition to EEBIC, the research experience of the members of the group includes computer vision, neural nets, signal processing, intelligent autonomous agents, hybrid inference systems, computer emotions, logic and many others. The group interacts very closely with the Cognition and Affect group led by Aaron Sloman who is a member of both groups. (For more information see URLs: ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect and http://www.cs.bham.ac.uk/~axs .) The successful applicants will join the group's effort to explore EEBIC in many interesting directions (from engineering to psychology, from new practical applications to new theoretical frameworks). They will have constant interaction and collaboration with the other members of the group. In addition to the usual requirements of possessing a good honours degree (equivalent to a first or upper second class degree in a UK university) and being EU residents, the successful candidates will need to be particularly open minded to the cross-fertilisation in the group deriving from the different backgrounds and experience of the members. Additional information about how to apply and about the School is available via WWW from URL: http://www.cs.bham.ac.uk Informal enquiries about the EEBIC group can be directed to Riccardo Poli Phone: +44-121-414-3739 Fax: +44-121-414-4281 Email: R.Poli at cs.bham.ac.uk Enquiries concerning the Cognition and Affect group may be sent to Aaron Sloman Phone: +44-121-414-4775 Fax: +44-121-414-4281 Email: A.Sloman at cs.bham.ac.uk For any other queries contact our research students' admission tutor: Dr Peter Hancox Email: P.J.Hancox at cs.bham.ac.uk From eric at research.nj.nec.com Mon Jul 17 16:23:22 1995 From: eric at research.nj.nec.com (Eric B. Baum) Date: Mon, 17 Jul 1995 16:23:22 -0400 Subject: Job Announcement Message-ID: <199507172023.QAA00552@yin> Programmer Wanted. Prerequisites: Experience in getting large programs to work. Some mathematical sophistication. E.g. at least equivalent of a good undergraduate degree in math, physics, theoretical computer science or related field. Salary: Depends on experience. Job: Implementing various novel algorithms. The previous holder of this position (Charles Garrett) implemented our new Bayesian approach to games, with striking success. We are now engaged in an effort to produce a world championship chess program based on these methods and several new ideas regarding learning. The chess program is being written in Modula 3. Experience in Modula 3 is useful but not essential so long as you are willing to learn it. Other projects may include TD learning, GA's, etc. To access papers on our approach to games, and get some idea of general nature of other projects, (e.g. a paper on GA's Garrett worked on) see my home page http://www.neci.nj.nec.com:80/homepages/eric/eric.html A paper on classifier-like learning systems with Garrett will appear there RSN (but don't wait to apply). 
The successful applicant will (a) have experience getting *large* programs to *work*, and (b) be able to understand the papers on my home page and convert them to computer experiments. These projects are at the leading edge of basic research in algorithms/cognition/learning, so expect the work to be both interesting and challenging. Term-contract position. To apply, please send a cv, cover letter and list of references to: eric at research.nj.nec.com (.ps or plain text please!) NOTE: EMAIL ONLY. Hardcopy (e.g. US mail, FedEx, etc.) will not be opened. Equal Opportunity Employer M/F/D/V ------------------------------------- Eric Baum NEC Research Institute, 4 Independence Way, Princeton NJ 08540 PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric at research.nj.nec.com From read at bohr.neusc.bcm.tmc.edu Mon Jul 17 18:17:14 1995 From: read at bohr.neusc.bcm.tmc.edu (P. Read Montague) Date: Mon, 17 Jul 1995 17:17:14 -0500 Subject: Postdoctoral position Message-ID: <9507171717.ZM1627@bohr.bcm.tmc.edu> POSTDOCTORAL POSITION IN THEORETICAL NEUROSCIENCE A postdoctoral fellowship in theoretical neuroscience is available through the newly formed Center for Theoretical Neuroscience at Baylor College of Medicine. The position will focus on theoretical problems; however, all potential projects will be closely allied with ongoing experiments in the laboratories of Drs John Maunsell and Nikos Logothetis in the Division of Neuroscience at Baylor. Current interests include the role of attention in visual perception, the neural basis for decision-making, and the neural basis for object recognition. The successful candidate will have a strong knowledge of basic neurobiology combined with a quantitative background in physics, computing, or engineering. Fellows will receive stipends commensurate with their background and qualifications. Send curriculum vitae, research interests, and the names of three references to: Dr. P. Read Montague Division of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030 email: read at bohr.bcm.tmc.edu -- P. Read Montague Division of Neuroscience Baylor College of Medicine 1 Baylor Plaza, Houston, TX 77030 read at bohr.bcm.tmc.edu
From koiran at ICSI.Berkeley.EDU Tue Jul 18 20:22:10 1995 From: koiran at ICSI.Berkeley.EDU (Pascal Koiran) Date: Tue, 18 Jul 1995 17:22:10 -0700 Subject: "Orthogonality" of the generalizers being combined Message-ID: In my previous message on combining generalizers, Manfred Warmuth should have been added to the list of "expert advice" researchers. I apologize for that omission. (note that I do not claim that the list is complete now!) Pascal Koiran. From giles at research.nj.nec.com Tue Jul 18 18:27:57 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Tue, 18 Jul 1995 18:27:57 -0400 Subject: TR announcement - long-term dependencies Message-ID: <199507182227.SAA08663@telluride> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ LEARNING LONG-TERM DEPENDENCIES IS NOT AS DIFFICULT WITH NARX RECURRENT NEURAL NETWORKS Technical Report UMIACS-TR-95-78 and CS-TR-3500, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742 Tsungnan Lin{1,2}, Bill G. Horne{1}, Peter Tino{1,3}, C. Lee Giles{1,4} {1}NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 {2}Department of Electrical Engineering, Princeton University, Princeton, NJ 08540 {3}Dept. of Computer Science and Engineering, Slovak Technical University, Ilkovicova 3, 812 19 Bratislava, Slovakia {4}UMIACS, University of Maryland, College Park, MD 20742 ABSTRACT It has recently been shown that gradient descent learning algorithms for recurrent neural networks can perform poorly on tasks that involve long-term dependencies, i.e. those problems for which the desired output depends on inputs presented at times far in the past. In this paper we explore the long-term dependencies problem for a class of architectures called NARX recurrent neural networks, which have powerful representational capabilities. We have previously reported that gradient descent learning is more effective in NARX networks than in recurrent neural network architectures that have ``hidden states'' on problems including grammatical inference and nonlinear system identification. Typically, the network converges much faster and generalizes better than other networks. The results in this paper are an attempt to explain this phenomenon. We present some experimental results which show that NARX networks can often retain information for two to three times as long as conventional recurrent neural networks. We show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on long-term dependency problems. We also describe in detail some of the assumptions regarding what it means to latch information robustly and suggest possible ways to loosen these assumptions. ---------------------------------------------------------------------------------- ---------------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3500.long-term.dependencies.narx.ps.Z ------------------------------------------------------------------------------------ -- C.
Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From maass at igi.tu-graz.ac.at Tue Jul 18 18:36:21 1995 From: maass at igi.tu-graz.ac.at (Wolfgang Maass) Date: Wed, 19 Jul 95 00:36:21 +0200 Subject: 2 papers on spiking neurons in neuroprose Message-ID: <199507182236.AA12291@figids03> First paper: ************ FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.spiking-details.ps.Z The file maass.spiking-details.ps.Z is now available for copying from the Neuroprose repository. This is a 41-page long paper. Hardcopies are not available. LOWER BOUNDS FOR THE COMPUTATIONAL POWER OF NETWORKS OF SPIKING NEURONS by Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Abstract: We explore the computational power of formal models for networks of spiking neurons (often referred to as "integrate-and-fire neurons"). These neural net models are closer related to computations in biological neural systems than the more traditional models, since they allow an encoding of information in the timing of single spikes (not just in firing rates). Our formal model is closely related to the "spike-response model" that was previously introduced by Gerstner and van Hemmen. It turns out that the structure of computations in models for networks of spiking neurons is quite different from that of computations in analog (sigmoidal) neural nets. In particular it is shown in our paper in a rigorous way that simple operations on phase-differences between spike-trains provide a very powerful computational tool, that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We also show in this paper that rather weak assumptions about the shape of response-and threshold-functions of spiking neurons are sufficient in order to employ them for such computations. An extended abstract of this paper had already been posted in November 1994 (it appears in the Proc. of NIPS 94). In the meantime many have asked me for details of the constructions, and hence I am now also posting in neuroprose this detailed version (which appears in Neural Computation). A companion paper with detailed proofs for the upper bounds, will become available in the fall. ---------------------------------------------------------------------- Second paper: ************* FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.shape.ps.Z The file maass.shape.ps.Z is now available for copying from the Neuroprose repository. This is a 6-page long paper. Hardcopies are not available. ON THE RELEVANCE OF THE SHAPE OF POSTSYNAPTIC POTENTIALS FOR THE COMPUTATIONAL POWER OF SPIKING NEURONS by Wolfgang Maass and Berthold Ruf Institute for Theoretical Computer Science Technische Universitaet Graz A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at bruf at igi.tu-graz.ac.at Abstract: Recently one has started to explore silicon models for networks of spiking neurons, where one employs rectangular (i.e. piecewise constant) pulses instead of the "smooth" excitatory postsynaptic potentials (EPSP's) that are employed by biological neurons. 
We show in this paper that models of spiking neurons that employ rectangular pulses (EPSP's) have substantial computational power, and we give a precise characterization of their computational power in terms of a common benchmark model from computer science (random access machine). This characterization allows us to prove the following somewhat surprising result: Models of networks of spiking neurons with rectangular pulses are from the computational point of view STRICTLY WEAKER than models with "smooth" EPSP's of the type observed in biological neurons. ************ How to obtain a copy of the first paper ************* Via Anonymous FTP: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose ftp> binary ftp> get maass.spiking-details.ps.Z ftp> quit unix> uncompress maass.spiking-details.ps.Z unix> lpr maass.spiking-details.ps (or what you normally do to print PostScript) For the second paper proceed analogously (but with filename maass.shape.ps.Z). From KOKINOV at BGEARN.BITNET Tue Jul 18 16:59:14 1995 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Tue, 18 Jul 95 16:59:14 BG Subject: 10 scholarships in Cognitive Science Message-ID: 10 scholarships are available to successful applicants to the Graduate Program in Cognitive Science at NBU who come from Eastern and Central Europe. The scholarships have been provided by the Soros Foundation. NEW BULGARIAN UNIVERSITY Department of Cognitive Science Admission to the Graduate Program in Cognitive Science is open until July 30. It offers the following degrees: Post-Graduate Diploma, M.Sc., Ph.D. FEATURES Teaching in English both in the regular courses at NBU and in the intensive courses at the Annual International Summer Schools. Strong interdisciplinary program covering Psychology, Artificial Intelligence, Neurosciences, Linguistics, Philosophy, Mathematics, Methods. Theoretical and experimental research in integration of the symbolic and connectionist approaches, emergent hybrid cognitive architectures, models of memory and reasoning, analogy, vision, imagery, agnosia, language and speech processing, aphasia. Advisors: at least two advisors with different backgrounds, possibly one external international advisor. International dissertation committee. INTERNATIONAL ADVISORY BOARD Elizabeth Bates (UCSD, USA), Amedeo Cappelli (CNR, Italy), Cristiano Castelfranchi (CNR, Italy), Daniel Dennett (Tufts University, USA), Charles De Weert (University of Nijmegen, Holland), Christian Freksa (Hamburg University, Germany), Dedre Gentner (Northwestern University, USA), Christopher Habel (Hamburg University, Germany), Douglas Hofstadter (Indiana University, USA), Joachim Hohnsbein (University of Dortmund, Germany), Keith Holyoak (UCLA, USA), Mark Keane (Trinity College, Ireland), Alan Lesgold (University of Pittsburgh, USA), Willem Levelt (Max Planck Institute of Psycholinguistics, Holland), Ennio De Renzi (University of Modena, Italy), David Rumelhart (Stanford University, USA), Richard Shiffrin (Indiana University, USA), Paul Smolensky (University of Colorado, USA), Chris Thornton (University of Sussex, England), Carlo Umiltà (University of Padova, Italy) ADMISSION REQUIREMENTS B.Sc. degree in psychology, computer science, linguistics, philosophy, neurosciences, or related fields. Good command of English. Address: Cognitive Science Department, New Bulgarian University, 21 Montevideo Str.
Sofia 1635, Bulgaria, tel.: (+3592) 55-80-65 fax: (+3592) 54-08-02 e-mail: cogs at adm.nbu.bg or kokinov at bgearn.acad.bg From phkywong at uxmail.ust.hk Wed Jul 19 03:44:44 1995 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Wed, 19 Jul 1995 15:44:44 +0800 Subject: Paper on Neural Dynamic Routing Available Message-ID: <95Jul19.154446+0800_hkt.18930-1+4@uxmail.ust.hk> FTP-host: physics.ust.hk FTP-file: pub/kymwong/rout.ps.gz The following paper, presented at IWANNT*95, is now available via anonymous FTP. (8 pages long) ============================================================================ Decentralized Neural Dynamic Routing in Circuit-Switched Networks W. K. Felix Lor and K. Y. Michael Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phfelix at usthk.ust.hk, phkywong at usthk.ust.hk ABSTRACT We use a Simplex centralized algorithm to dynamically distribute telephone traffic among alternate routes in circuit-switched networks according to the fluctuating number of free circuits and the evolving call attempts. This generates examples for training localized neural controllers. Simulations show that the decentralized neural approach has a blocking probability comparable to Maximum Free Circuit (MFC) routing and performs better in terms of crankback. ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> binary ftp> get rout.ps.gz ftp> quit unix> gunzip rout.ps.gz unix> lpr rout.ps From uzimmer at informatik.uni-kl.de Wed Jul 19 12:47:03 1995 From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer) Date: Wed, 19 Jul 95 17:47:03 +0100 Subject: Paper available on "Minimal Qualitative Topologic World Models for Mobile Robots" Message-ID: <950719.174703.269@informatik.uni-kl.de> Paper available via WWW / FTP: keywords: mobile robots, exploration, world modelling, self-localization, artificial neural networks ------------------------------------------------------------------ Minimal Qualitative Topologic World Models for Mobile Robots ------------------------------------------------------------------ Uwe R. Zimmer (submitted for publication) World models for mobile robots, as introduced in many projects, are mostly redundant regarding similar situations detected in different places. The present paper proposes a method for dynamic generation of a minimal world model based on these redundancies. The technique is an extension of the qualitative topologic world modelling methods. As a central aspect, reliability regarding error-tolerance and stability is emphasized. The proposed technique places only weak constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot "ALICE".
(5 pages - 928 KB) for the WWW-link: ------------------------------------------------------------------ http://ag-vp-www.informatik.uni-kl.de/Projekte/ALICE/ abs.Minimal.html ------------------------------------------------------------------ for the homepage of the author (including more reports): ------------------------------------------------------------------ http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/ ------------------------------------------------------------------ or for the ftp-server hosting the file: ------------------------------------------------------------------ ftp://ag-vp-ftp.informatik.uni-kl.de/Public/Neural_Networks/ Reports/Zimmer.Minimal.ps.Z ------------------------------------------------------------------ ----------------------------------------------------- ----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | 67663 Kaiserslautern - Germany | ------------------------------.--------------------------------. Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | ------------------------------.--------------------------------. http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/ | From arbib at pollux.usc.edu Wed Jul 19 12:48:14 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Wed, 19 Jul 1995 09:48:14 -0700 Subject: Positions Available: VISUALIZATION FOR BRAIN RESEARCH Message-ID: <199507191648.JAA28988@pollux.usc.edu> ABOUT THE USC BRAIN PROJECT Professors Michael Arbib (Director), Michel Baudry, Theodore Berger, Peter Danzig, Shahram Ghandeharizadeh, Scott Grafton, Dennis McLeod, Thomas McNeill, Larry Swanson, and Richard Thompson are about to start the second year of a major grant from the Human Brain Project (a consortium of federal agencies led by the National Institute of Mental Health) for a 5 year project, "Neural Plasticity: Data and Computational Structures" to be conducted at the University of Southern California. The Project will combine research on databases with the development of tools for database construction and data recovery from multiple databases, simulation tools, and visualization tools for both rat neuroanatomy and human brain imaging. These tools will be used to construct databases for research at USC and elsewhere on mechanisms of neural plasticity in basal ganglia, cerebellum, and hippocampus. The grant will also support a core of neuroscience research linked to several ongoing research programs to explore how experiments can be enhanced when coupled to databases enriched with powerful tools for modeling and visualization. The project is a major expression of USC's approach to the study of the brain which locates neuroscience in the context of a broad interdisciplinary program in Neural, Informational, and Behavioral Sciences (NIBS). The status of our work may be viewed on WWW at http://www-hbp.usc.edu:8376/HBP/Home.html ABOUT THE POSITIONS The grant and related funding will allow us to hire two computer professionals to help us develop visualization tools for the USC Brain Project. VISUALIZATION PROGRAMMER: Three years experience programming and developing graphical software. UNIX, C++, DBMS experience. Internet protocols also desirable. Ability link visualization software with object based data base and simulation tools. Background in neuroscience is not required but proven communication skills and ability to analyze scientific data are valuable. IMAGE ANALYSIS DEVELOPER: Three years experience programming and developing graphical software. UNIX, C++, DBMS experience. Internet protocols also desirable. 
Emphasis on streamlining existing image analysis code and developing new algorithms for warping 3D data sets. Additional ability to manage the growth of a large archive of MRI and functional image data sets is valuable. Send CV, references, and letter addressing above qualifications to Paulina Tagle, Center for Neural Engineering, USC, Los Angeles, CA 90089-2520; Fax (213) 740-5687; paulina at pollux.usc.edu. USC is an equal opportunity employer. From pihong at merlot.cse.ogi.edu Wed Jul 19 16:00:42 1995 From: pihong at merlot.cse.ogi.edu (Hong Pi) Date: Wed, 19 Jul 95 13:00:42 -0700 Subject: Neural Network Course (Announcement) Message-ID: <9507192000.AA08786@merlot.cse.ogi.edu> Oregon Graduate Institute of Science & Technology, Office of Continuing Education, offers the short course: NEURAL NETWORKS: ALGORITHMS AND APPLICATIONS September 25-29, 1995, at the OGI campus near Portland, Oregon. Course Organizer: John E. Moody Lead Instructor: Hong Pi With Lectures By: Todd K. Leen John E. Moody Thorsteinn S. Rognvaldsson Eric A. Wan Artificial neural networks (ANN) have emerged as a new information processing technique and an effective computational model for solving pattern recognition and completion, feature extraction, optimization, and function approximation problems. This course introduces participants to the neural network paradigms and their applications in pattern classification; system identification; signal processing and image analysis; control engineering; diagnosis; time series prediction; and financial analysis and trading. An introduction to fuzzy logic and fuzzy control systems is also given. Designing a neural network application involves steps from data preprocessing to network tuning and selection. This course, with many examples, application demos and hands-on lab practice, will familiarize the participants with the techniques necessary for building successful applications. About 50 percent of the class time is assigned to lab sessions. The simulations will be based on Matlab, the Matlab Neural Net Toolbox, and other software running on Windows-NT workstations. Prerequisites: Linear algebra and calculus. Previous experience with using Matlab is helpful, but not required. Who will benefit: Technical professionals, business analysts, financial market practitioners, and other individuals who wish to gain a basic understanding of the theory and algorithms of neural computation and/or are interested in applying ANN techniques to real-world, data-driven modeling problems. Course Objectives: After completing the course, students will: - Understand the basic neural networks paradigms - Be familiar with the range of ANN applications - Have a good understanding of the techniques for designing successful applications - Gain hands-on experience with ANN modeling. Course Outline (8:30am - 5:00pm September 25 - 28, and 8:30am - 12:30am September 29): Neural Networks: Biological and Artificial Biological inspirations. Basic models of a neuron. Types of architectures and learning paradigms. Simple Perceptrons and Adalines Decision surfaces. Linear separability. Perceptron learning rules. Linear units. Gradient descent learning. Multi-Layer Feed-Forward Networks I Multi-Layer perceptrons. Back-propagation learning. Generalization. Early Stopping via validation. Momentum and adaptive learning rate. Examples and applications. Multi-Layer Feed-Forward Networks II Newton's method. Conjugate gradient. Levenburg-Marquardt. Radial basis function networks. Projection pursuit regression. 
Neural Networks for Pattern Recognition and Classification Bayes decision theory. The Bayes risk. Non-neural and neural methods for classification. Neural networks as estimators of the posterior probability. Methods for improving the classification performance. Benchmark tests of neural networks vs. other methods. Some applications. Improving the Generalization Performance Model bias and model variance. Weight decay. Regularizers. Optimal brain surgeon. Learning from hints. Sensitivity analysis. Input variable selection. The delta-test. Time Series Prediction: Classical and Nonlinear Approaches Linear time series models. Simple nonlinear models. Recurrent network models and training algorithms. Case studies: sunspots, economic forecasting. Self-Organized Networks and Unsupervised Learning K-means clustering. Kohonen feature maps. Learning vector quantization. Adaptive principal components analysis. Neural Network for Adaptive Control What is control. Heuristic, open loop, and inverse control. Feedback algorithms for control. Neural network feedback control. Reinforcement learning. Survey of Neural Network Applications in Financial Markets Bond and stock valuation. Currency rate forecasting. Trading systems. Commodity price forecasting. Risk management. Option pricing. Fuzzy Systems Fuzzy logic. Fuzzy control systems. Adaptive fuzzy and neural-fuzzy. About the Instructors Todd K. Leen is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. He received his Ph.D. in theoretical Physics from the University of Wisconsin in 1982. From 1982-1987 he worked at IBM Corporation, and then pursued research in mathematical biology at Good Samaritan Hospital's Neurological Sciences Institute. He joined OGI in 1989. Dr. Leen's current research interests include neural learning, algorithms and architectures, stochastic optimization, model constraints and pruning, and neural and non-neural approaches to data representation and coding. He is particularly interested in fast, local modeling approaches, and applications to image and speech processing. Dr. Leen served as theory program chair for the 1993 Neural Information Processing Systems (NIPS) conference, and workshops chair for the 1994 NIPS conference. John E. Moody is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. His current research focuses on neural network learning theory and algorithms in it's many manifestations. He is particularly interested in statistical learning theory, the dynamics of learning, and learning in dynamical contexts. Key application areas of his work are adaptive signal processing, adaptive control, time series analysis, forecasting, economics and finance. Moody has authored over 35 scientific papers, more than 25 of which concern the theory, algorithms, and applications of neural networks. Prior to joining the Oregon Graduate Institute, Moody was a member of the Computer Science and Neuroscience faculties at Yale University. Moody received his Ph.D. and M.A. degrees in Theoretical Physics from Princeton University, and graduated Summa Cum Laude with a B.A. in Physics from the University of Chicago. Hong Pi is a senior research associate at Oregon Graduate Institute. He received his Ph.D. in theoretical physics from University of Wisconsin in 1989. Prior to joining OGI in 1994 he had been a postdoctoral fellow and research scientist in Lund University, Sweden. 
His research interests include nonlinear modeling, neural network algorithms and applications. Thorsteinn S. Rognvaldsson received the Ph.D. degree in theoretical physics from Lund University, Sweden, in 1994. His research interests are Neural Networks for prediction and classification. He is currently a postdoctoral research associate at Oregon Graduate Institute. Eric A. Wan, Assistant Professor of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, received his Ph.D. in electrical engineering from Stanford University in 1994. His research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, system identification, and adaptive control. He is a member of IEEE, INNS, Tau Beta Pi, Sigma Xi, and Phi Beta Kappa. For a complete course brochure contact: Linda M. Pease, Director Office of Continuing Education Oregon Graduate Institute of Science & Technology PO Box 91000 Portland, OR 97291-1000 +1-503-690-1259 +1-503-690-1686 (fax) e-mail: continuinged at admin.ogi.edu WWW home page: http://www.ogi.edu ^*^*^*^*^*^*^*^*^*^*^**^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* Linda M. Pease, Director lpease at admin.ogi.edu Office of Continuing Education Oregon Graduate Institute of Science & Technology 20000 N.W. Walker Road, Beaverton OR 97006 USA (shipping) P.O. Box 91000, Portland, OR 97291-1000 USA (mailing) +1-503-690-1259 +1-503-690-1686 fax "The future belongs to those who believe in the beauty of their dreams" -Eleanor Roosevelt ^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* From mao at almaden.ibm.com Wed Jul 19 17:07:04 1995 From: mao at almaden.ibm.com (mao@almaden.ibm.com) Date: Wed, 19 Jul 1995 14:07:04 -0700 Subject: Call for papers, IEEE Trans. NNs Special Issues on ANNs and PR Message-ID: <9507192107.AA21208@powerocr.almaden.ibm.com> CALL FOR PAPERS IEEE Transactions on Neural Networks Special Issue on Artificial Neural Networks and Pattern Recognition Tentative Publication Date: November 1996 Artificial neural networks (ANN) have now been recognized as powerful and economical tools for solving a large variety of problems in a number of scientific and engineering disciplines. The literature on neural networks is enormous consisting of a large number of books, journals and conference proceedings, and new commercial software and hardware products. A large portion of the research and development on ANNs is devoted to solving pattern recognition problems. Pattern recognition (PR) is a relatively mature discipline. Over the past 50 years, a number of different paradigms (statistical, syntactic and neural networks) have been utilized for solving a variety of recognition problems. But, real-world recognition problems are sufficiently difficult so that a single paradigm is not "optimal" for different recognition problems. As a result, successful recognition systems based either on statistical approach or neural networks exist in limited domains (e.g., handprinted character recognition and isolated word speech recognition). There is a close relationship between some of the popular ANN models and statistical pattern recognition (SPR) approaches. Quite often, these relationships are either not known to researchers or not fully exploited to build "hybrid" recognition systems. 
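To make that link concrete, here is a minimal illustrative sketch (an editorial example, not part of the call itself): a single sigmoid unit trained by gradient descent on the cross-entropy loss computes the maximum-likelihood fit of logistic regression, a classical SPR model, and its output can be read as an estimate of the posterior class probability. The data shapes and step size below are arbitrary choices.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_single_unit(X, y, eta=0.1, epochs=1000):
    """Batch gradient descent on the cross-entropy loss for one sigmoid unit.
    X has shape (n_samples, n_features); y holds 0/1 class labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)              # network output = estimate of P(class 1 | x)
        w -= eta * X.T @ (p - y) / len(y)   # same gradient as maximum-likelihood logistic regression
        b -= eta * np.mean(p - y)
    return w, b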
In spite of this close resemblance between ANN and SPR, ANNs have provided a variety of novel or supplementary approaches for pattern recognition tasks. More noticeably, ANNs have provided architectures on which many classical SPR algorithms (e.g., tree classifiers, principal component analysis, K-means clustering) can be mapped to facilitate hardware implementation. On the other hand, ANNs can derive benefit from some well-known results in SPR (e.g., Bayes decision theory, nearest neighbor rules, curse of dimensionality and Parzen window classifier). The purpose of this special issue is to increase the awareness of researchers and practitioners of pattern recognition about the common links between ANNs and SPR. This is likely to lead to more communication and cooperative work between the two research communities. Such an effort will not only avoid repetitious work but, more importantly, will stimulate and motivate individual disciplines. It is our hope that this special issue will lead to a synergistic approach which combines the strengths of ANN and SPR in order to achieve a significantly better performance for complex pattern recognition problems. Specific topics of interest include, but are not limited to: o Old and new links between ANNs and SPR (e.g., Adaptive Mixture of Experts (AME) and Hierarchical Mixture of Experts (HME) versus traditional decision trees, recurrent ANNs and time-delay ANNs versus Hidden Markov Models, generalization ability in ANNs versus curse of dimensionality). o Comparative studies of ANN and SPR approaches that lead to useful guidelines in practice (e.g., under what conditions does one approach exhibit superiority over the other?). o New ANN models for PR. -- representation/feature extraction (compression rate, invariance, robustness, and efficiency) using ANNs. -- supervised classification. -- clustering/unsupervised classification. o Combination of ANN and SPR classifiers/estimators, and of features extracted using traditional PR approaches and ANNs. o Hybrid systems (using ANNs and traditional PR approaches) for solving real-world PR problems (e.g., face recognition, cursive handwriting recognition, and speech recognition). Although these topics cover a broad area of research, we encourage papers that explore the relationship between ANNs and traditional PR. Authors should relate their work to both the PR and ANN literature. Papers should also emphasize results that have been or can potentially be applied to "real world" applications; they should include evaluations through experimentation, simulation, analysis and/or experience.
Guest Editors:
--------------
Professor Anil K. Jain
Department of Computer Science
A714 Wells Hall
Michigan State University
East Lansing, MI 48824, USA
Email: jain at cps.msu.edu
Fax: 517-432-1061

Dr. Jianchang Mao
Image and Multimedia Systems, DPE/803
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120, USA
Email: mao at almaden.ibm.com
Fax: 408-927-3497

Instructions for submitting papers:
-----------------------------------
Manuscripts must not have been previously published or currently submitted for publication elsewhere. Each manuscript should be no more than 35 pages (double space, 12 point font) including all text, references, and illustrations.
Each copy of the manuscript should include a title page containing title, authors' names and affiliations, postal and email addresses, telephone numbers and Fax numbers, a 300-word abstract and a list of keywords identifying the central issues of the manuscript's contents. Please submit six copies of your manuscript to either of the guest editors by January 5, 1996. --------------- From jagota at next1.msci.memst.edu Wed Jul 19 17:41:23 1995 From: jagota at next1.msci.memst.edu (Arun Jagota) Date: Wed, 19 Jul 1995 16:41:23 -0500 Subject: Notes for HKP on WWW Message-ID: <199507192141.AA22908@next1> Dear Connectionists: I am offering a set of handwritten transparencies, in electronic scanned-in form, for portions of the book "Introduction to the Theory of Neural Computation", Hertz, Krogh, and Palmer, on the World Wide Web as follows: http://www.msci.memphis.edu/~jagota You are welcome to make transparencies off them for instructional purposes, or print them off for some other reason. Or simply browse using Netscape, Mosaic, etc. Some features are a little awkward. First, they are all hand-written (in color) and quite unpolished. Second, each transparency is scanned into a raw image and therefore retrieving it takes some time. In all, there are about 110 of them, and they cover the following topics: ------------------------------------ ONE Introduction TWO The Hopfield Model 2.1 Associative Memories and Energy Function: 2.2 Hebb Rule and Capacity 2.3 Stochastic Networks THREE Extensions of the Hopfield Model 3.1 Continuous-Valued Units FOUR Optimization Problems 4.1 Mapping Problems to Hopfield Network 4.2 The Weighted Matching Problem 4.3 Graph Bipartitioning FIVE Simple Perceptrons 5.1 Feed-Forward Networks 5.2 Threshold Units 5.3 Perceptron Learning Rule Proof of Convergence 5.4 Continuous Units 5.5 Capacity of the Simple Perceptron SIX Multi-Layer Networks 6.1 Back-Propagation 6.2 Variations on Back-Propagation 6.3 Examples and Applications ------------------------------------ Let me also add that version 2 of the HKP exercises (very slightly refined and expanded from version 1) is available from the same ftp location as earlier: ftp ftp.cs.buffalo.edu > cd users/jagota > get HKP.ps Arun Jagota, Math Sciences, University of Memphis From tamayo at Think.COM Wed Jul 12 18:17:36 1995 From: tamayo at Think.COM (Pablo Tamayo) Date: Wed, 12 Jul 95 18:17:36 EDT Subject: Commercial Apps/Machine Learning Developer Wanted Message-ID: For more than a decade, Thinking Machines Corporation has been one of the world's leading computer companies in advancing the capability and use of high performance computing. Through its unique expertise in parallel process technology it has developed three generations of world-class hardware, software, and complementary support systems. Now we are beginning a new and exciting growth strategy to open up our technology to new hardware platforms and application environments. Thinking Machines is uniquely positioned to deliver state-of-the-art, cost effective systems on standard industry platforms. Join us as we unleash the power of parallel processing software. Thinking Machines Corporation has several openings for talented engineers, including: Commercial Applications Researcher/Developers Design, implementation and support of software for commercial applications involving machine learning (Neural Nets, CART, MBR, GA), statistics (SAS), and parallel processing of massive databases. Requirements: MS/PhD in CS/EE or equivalent. 
Expertise in C/UNIX, HPC and commercial databases (SQL, Oracle, etc.). If you are interested in this position or want information on other openings that are currently available, please send your resume to: Rick Pitman, Thinking Machines Corporation, 245 First Street, Cambridge, MA 02142. Internet address: rickp at think.com Phone: (617) 234-3016 Fax: (617) 234-4421 An Equal Opportunity Employer. The Connection Machine is a registered trademark of Thinking Machines Corporation. UNIX is a registered trademark of UNIX Systems Laboratories, Inc.
From srikanth at diamond.cau.auc.edu Thu Jul 20 13:50:07 1995 From: srikanth at diamond.cau.auc.edu (srikanth@diamond.cau.auc.edu) Date: Thu, 20 Jul 95 13:50:07 EDT Subject: Call For Papers FUZZ-IEEE 1996 Message-ID: <9507201750.AA02884@diamond.cau.auc.edu>
ANNOUNCEMENT AND PRELIMINARY CALL FOR PAPERS IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS New Orleans, September 8-11, 1996
PAPERS DUE: January 31, 1996 NOTIFICATION OF ACCEPTANCE: April 15, 1996 FINAL PAPERS DUE: June 15, 1996 SPECIAL SESSION / TUTORIAL PROPOSALS: December 1, 1995
The program committee invites potential authors to submit papers dealing with any aspect of research and applications related to the use of fuzzy models. Papers must be written in English and received by 1/31/96. Six copies of the paper must be submitted. The paper may not exceed 7 pages including figures, tables and references. Papers should be prepared on 8.5" X 11" white paper with 1" margins on all sides, one column format in Times or similar style, 10 points or larger, and printed on one side of the paper only. Please include the title, author name(s) and affiliation at the top of the first page, followed by an abstract. FAX submissions are NOT acceptable. Please indicate the corresponding author, with their email address where possible. Please send submissions prior to the deadline to Dr. Don Kraft, Program Committee Chair, Computer Science Department, Louisiana State University, Baton Rouge, LA 70803-4020, email: kraft at bit.csc.lsu.edu, phone: 504-388-2253. SEE ALSO: http://jasper.cau.auc.edu/fuzz_ieee1.html for more details about FUZZ-IEEE '96 and New Orleans!
------------------------------------------------------------------------------
GENERAL CHAIR: Fredrick E. Petry, Tulane University, New Orleans, LA, petry at rex.cs.tulane.edu
PROGRAM CHAIR: Donald Kraft, Louisiana State University, Baton Rouge, LA, kraft at bit.csc.lsu.edu
PUBLICITY CHAIRS: Roy George and R. Srikanth, Clark Atlanta University, Atlanta, GA, roy at diamond.cau.auc.edu, srikanth at diamond.cau.auc.edu; Jim Keller, University of Missouri, Columbia, MO
PROCEEDINGS CHAIR: Padmini Srinivasan, University of SW Louisiana, Lafayette, LA
EXHIBITS CHAIRS: Valarie Cross, Miami University, Oxford, OH; V. Ganesh, Allied Signal, Morristown, NJ
FINANCE CHAIR: Sujeet Shenoi, University of Tulsa, Tulsa, OK
From poole at cs.ubc.ca Fri Jul 21 09:56:01 1995 From: poole at cs.ubc.ca (David Poole) Date: Fri, 21 Jul 1995 9:56:01 UTC-0700 Subject: 11th Conference on Uncertainty in AI, August 1995 Message-ID: <"6538*poole@cs.ubc.ca"@MHS>
The Conferences on Uncertainty in AI are the premier forum for work on reasoning under uncertainty (including probabilistic and other formalisms for uncertainty, representations for uncertainty, such as Bayesian networks, algorithms for inference under uncertainty, and learning under uncertainty). The 11th Conference on Uncertainty in AI will be held in Montreal, 18-20 August 1995 (just before IJCAI-95).
For full details including registration information and an online proceedings see the URL: http://www.cs.ubc.ca/spider/poole/UAI95.html The program for UAI-95 is as follows: UAI-95 - 11th Conference on Uncertainty in AI McGill University, Montreal, Quebec, 18-20 August 1995 =================================== Final Program =================================== ============================== Friday 18 August Overview ============================== 08:45 -- 09:00 Opening remarks 09:00 -- 10:15 Invited talk #1 (Haussler) 10:15 -- 10:30 Break 10:30 -- 12:30 Presentation session #1 12:30 -- 14:00 Lunch 14:00 -- 16:00 Poster session #1 16:00 -- 16:15 Break 16:30 -- 18:30 Presentation session #2 ================================ Saturday 19 August Overview ================================ 09:00 -- 10:30 Invited talk #2 (Jordan) + panel discussion 10:30 -- 10:45 Break 10:45 -- 12:45 Presentation session #3 12:45 -- 14:30 Lunch 14:30 -- 16:00 Invited talk #3 (Subrahmanian) 16:00 -- 16:15 Break 16:15 -- 18:15 Presentation session #4 ================================ Sunday 20 August Overview ================================ 09:00 -- 10:30 Invited talk #4 (Shafer) + panel discussion 10:30 -- 10:45 Break 10:45 -- 12:45 Presentation session #5 12:45 -- 14:30 Lunch 14:30 -- 16:00 Poster session #2 16:00 -- 16:15 Break 16:15 -- 18:15 Presentation session #6 ============================================== Invited talks ============================================== #1 Haussler "Hidden Markov and Related Statistical Models: How They Have Been Applied to Biosequence Analysis" #2 Jordan (with panel on learning) "A Few Relevant Ideas from Statistics, Neural Networks, and Statistical Mechanics" #3 Subrahamanian "Uncertainty in Deductive Databases" #4 Shafer (with panel on causality) "The Multiple Causal Interpretation of Bayes Nets" ================================================= Presentation session #1 ================================================= Wellman/Ford/Larson PATH PLANNING UNDER TIME-DEPENDENT UNCERTAINTY Horvitz/Barry DISPLAY OF INFORMATION FOR TIME-CRITICAL DECISION MAKING Pearl/Robins PROBABILISTIC EVALUATION OF SEQUENTIAL PLANS FROM CAUSAL MODELS WITH HIDDEN VARIABLES Haddawy/Doan/Goodwin EFFICIENT DECISION-THEORETIC PLANNING: TECHNIQUES AND EMPIRICAL ANALYSIS Fargier/Lang/Clouaire/Schiex A CONSTRAINT SATISFACTION FRAMEWORK FOR DECISION UNDER UNCERTAINTY ================================================= Presentation session #2 ================================================= Xu/Smets GENERATING EXPLANATIONS FOR EVIDENTIAL REASONING ========> Best student paper <=========== Meek CAUSAL INFERENCE AND CAUSAL EXPLANATION WITH BACKGROUND KNOWLEDGE ========> Best student paper <=========== Cayrac/Dubois/Prade PRACTICAL MODEL-BASED DIAGNOSIS WITH QUALITATIVE POSSIBILISTIC UNCERTAINTY Srinivas/Horvitz EXPLOITING SYSTEM HIERARCHY TO COMPUTE REPAIR PLANS IN PROBABILISTIC MODEL-BASED DIAGNOSIS Balke/Pearl COUNTERFACTUALS AND POLICY ANALYSIS IN STRUCTURAL MODELS ================================================= Presentation session #3 ================================================= Jensen CAUTIOUS PROPAGATION IN BAYESIAN NETWORKS Darwiche STRONG CONDITIONING ALGORITHMS FOR EXACT AND APPROXIMATE INFERENCE IN CAUSAL NETWORKS ========> Best student paper <=========== Draper CLUSTERING WITHOUT (THINKING ABOUT) TRIANGULATION ========> Best student paper <=========== Goldszmidt FAST BELIEF UPDATE USING ORDER-OF-MAGNITUDE PROBABILITIES ========> Best student paper <=========== Harmanec TOWARD A CHARACTERIZATION 
OF UNCERTAINTY MEASURE FOR THE DEMPSTER-SHAFER THEORY ========> Best student paper <=========== ================================================= Presentation session #4 ================================================= Dubois/Prade NUMERICAL REPRESENTATION OF ACCEPTANCE Grosof TRANSFORMING PRIORITIZED DEFAULTS AND SPECIFICITY INTO PARALLEL DEFAULTS Weydert DEFAULTS AND INFINITESIMALS DEFEASIBLE INFERENCE BY NONARCHIMEDEAN ENTROPY-MAXIMIZATION Benferhat/Saffiotti/Smets BELIEF FUNCTIONS AND DEFAULT REASONING Ngo/Haddawy/Helwig A THEORETICAL FRAMEWORK FOR CONTEXT-SENSITIVE TEMPORAL PROBABILITY MODEL CONSTRUCTION WITH APPLICATION TO PLAN PROJECTION ========================================== Presentation session #5 ========================================== Campos/Moral INDEPENDENCE CONCEPTS FOR CONVEX SETS OF PROBABILITIES Geiger/Heckerman A CHARACTERIZATION OF THE DIRICHLET DISTRIBUTION THROUGH GLOBAL AND LOCAL INDEPENDENCE Spirtes DIRECTED CYCLIC GRAPHICAL REPRESENTATIONS OF FEEDBACK MODELS Pynadath/Wellman ACCOUNTING FOR CONTEXT IN PLAN RECOGNITION, WITH APPLICATION TO TRAFFIC MONITORING Srinivas MODELING FAILURE PRIORS AND PERSISTENCE IN MODEL-BASED DIAGNOSIS ========================================== Presentation session #6 ========================================== Poole EXPLOITING THE RULE STRUCTURE FOR DECISION MAKING WITHIN THE INDEPENDENT CHOICE LOGIC Krause/Fox/Judson IS THERE A ROLE FOR QUALITATIVE RISK ASSESSMENT? Srinivas POLYNOMIAL ALGORITHM FOR COMPUTING THE OPTIMAL REPAIR STRATEGY IN A SYSTEM WITH INDEPENDENT COMPONENT FAILURES Boldrin/Sossai AN ALGEBRAIC SEMANTICS FOR POSSIBILISTIC LOGIC Hajek/Godo/Esteva FUZZY LOGIC AND PROBABILITY ============================================================== Poster session #1 ============================================================== 1. Jack Breese, Russ Blake. AUTOMATING COMPUTER BOTTLENECK DETECTION WITH BELIEF NETS 2. Wray L. Buntine CHAIN GRAPHS FOR LEARNING 3. J.L. Castro, J.M. Zurita AN APPROACH TO GET THE STRUCTURE OF A FUZZY RULE UNDER UNCERTAINTY 4. Tom Chavez, Ross Shachter DECISION FLEXIBILITY 5. Arthur L. Delcher, Adam Grove, Simon Kasif, Judea Pearl LOGARITHMIC-TIME UPDATES AND QUERIES IN PROBABILISTIC NETWORKS 6. Eric Driver, Darryl Morrell CONTINUOUS BAYESIAN NETWORKS 7. Nir Friedman, Joseph Y. Halpern PLAUSIBILITY MEASURES: A USER'S GUIDE 8. David Galles, Judea Pearl TESTING IDENTIFIABILITY OF CAUSAL EFFECTS 9. Steve Hanks, David Madigan, Jonathan Gavrin PROBABILISTIC TEMPORAL REASONING WITH ENDOGENOUS CHANGE 10. David Heckerman BAYESIAN METHODS FOR LEARNING CAUSAL NETWORKS 11. Eric Horvitz, Adrian Klein STUDIES IN FLEXIBLE LOGICAL INFERENCE: A DECISION-MAKING PERSPECTIVE 12. George John, Pat Langley ESTIMATING CONTINUOUS DISTRIBUTIONS IN BAYESIAN CLASSIFIERS 13. Uffe Kjaerulff HUGS: COMBINING EXACT INFERENCE AND GIBBS SAMPLING IN JUNCTION TREES 14. Prakash P. Shenoy A NEW PRUNING METHOD FOR SOLVING DECISION TREES AND GAME TREES 15. Peter Spirtes, Christopher Meek, Thomas Richardson CAUSAL INFERENCE IN THE PRESENCE OF LATENT VARIABLES AND SELECTION BIAS 16. Nic Wilson AN ORDER OF MAGNITUDE CALCULUS 17. S.K.M. Wong, C.J. Butz, Y. Xiang A METHOD FOR IMPLEMENTING A PROBABILISTIC MODEL AS A RELATIONAL DATABASE
18. Y. Xiang OPTIMIZATION OF INTER-SUBNET BELIEF UPDATING IN MULTIPLY SECTIONED BAYESIAN NETWORKS 19. Nevin Lianwen Zhang INFERENCE WITH CAUSAL INDEPENDENCE IN THE CPSC NETWORK =============================================== Poster Session #2 =============================================== 1. Fahiem Bacchus, Adam Grove GRAPHICAL MODELS FOR PREFERENCE AND UTILITY 2. Enrique Castillo, Remco R. Bouckaert, Jose Maria Sarabia, ERROR ESTIMATION IN APPROXIMATE BAYESIAN BELIEF NETWORK INFERENCE 3. David Maxwell Chickering A NEW CHARACTERIZATION OF EQUIVALENT BAYESIAN NETWORK STRUCTURES 4. Marek J. Druzdzel, Linda C. van der Gaag ELICITATION OF PROBABILITIES: COMBINING QUALITATIVE AND QUANTITATIVE INFORMATION 5. Kazuo J. Ezawa, Til Schuermann LEARNING SYSTEM: A RARE BINARY OUTCOME WITH MIXED DATA STRUCTURES 6. David Heckerman, Dan Geiger LEARNING BAYESIAN NETWORKS: A UNIFICATION FOR DISCRETE AND GAUSSIAN DOMAINS 7. David Heckerman, Ross Shachter A DEFINITION AND GRAPHICAL REPRESENTATION FOR CAUSALITY 8. Mark Hulme IMPROVED SAMPLING FOR DIAGNOSTIC REASONING IN BAYESIAN NETWORK 9. Ali Jenzarli INFORMATION/RELEVANCE INFLUENCE DIAGRAMS 10. Keiji Kanazawa, Daphne Koller, Stuart Russell STOCHASTIC SIMULATION ALGORITHMS FOR DYNAMIC PROBABILISTIC NETWORKS 11. Grigoris I. Karakoulas PROBABILISTIC EXPLORATION IN PLANNING WHILE LEARNING 12. Alexander V. Kozlov, Jaswinder Pal Singh APPROXIMATE PROBABILISTIC INFERENCE IN BELIEF NETWORKS 13. Michael L. Littman, Thomas L. Dean, Leslie Pack Kaelbling ON THE COMPLEXITY OF SOLVING MARKOV DECISION PROBLEMS 14. Chris Meek STRONG-COMPLETENESS AND FAITHFULNESS IN BAYES NETWORKS 15. Simon Parsons REFINING REASONING IN QUALITATIVE PROBABILISTIC NETWORKS 16. Judea Pearl ON THE TESTABILITY OF CAUSAL MODELS WITH LATENT AND INSTRUMENTAL VARIABLES 17. Gregory Provan ABSTRACTION IN BELIEF NETWORKS: THE ROLE OF INTERMEDIATE STATES IN DIAGNOSTIC REASONING 18. Marco Valtorta, Young-Gyun Kim ON THE DETECTION OF CONFLICTS IN DIAGNOSTIC BAYESIAN NETWORKS USING ABSTRACTION ----------------------------------------------------------------------------- David Poole, Office: +1 (604) 822-6254 Department of Computer Science, Fax: +1 (604) 822-5485 University of British Columbia, Email: poole at cs.ubc.ca 2366 Main Mall, URL: http://www.cs.ubc.ca/spider/poole Vancouver, B.C., Canada V6T 1Z4 FTP: ftp://ftp.cs.ubc.ca/ftp/local/poole From linster at katla.harvard.edu Thu Jul 20 15:04:35 1995 From: linster at katla.harvard.edu (Christiane Linster) Date: Thu, 20 Jul 1995 15:04:35 -0400 (EDT) Subject: postdoc grant (fwd) Message-ID: *********************************************************** ************************************************************ CNRS-INRA Laboratoire de Neurobiologie Comparee des Invertebres Postdoctoral Research Fellowship Applications are invited for a year fellowship, from non-french citizen and qualified researcher with experience in Molecular neurobiology to investigate olfaction in insects. Applications, including a CV with the names of two referees should be sent urgently (before August 31, 1995) to: Dr C. MASSON LNCI BP 23 F - 91 440 Bures-sur-Yvette Tel. 
and fax : 33 1 69 07 20 59 E-mail : masson at inra.jouy.fr From jbower at bbb.caltech.edu Fri Jul 21 19:55:48 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Fri, 21 Jul 95 16:55:48 PDT Subject: Journal of Computational Neurscience Vol.II(2) Message-ID: <9507212355.AA22523@bbb.caltech.edu> The JOURNAL OF COMPUTATIONAL NEUROSCIENCE From neurons to behavior: a journal at the interface between experimental and theoretical neuroscience... CONTENTS, VOLUME II, ISSUE 2 Dynamic Modification of Dendritic Cable Properties and Synaptic Transmission by Voltage-Gated Potassium C.J. Wilson. Electrical Consequences of Spine Dimensions in a Model of Cortical Spiny Stellate Cell Completely Reconstructed Serial Thin Sections I. Segev, A. Friedman, E.L. White, M.J. Gutnick. The Electric Image in Weakly Electric Fish: I. A Data Based Model of Waveform Generation in the Gymnotus Carapo A. Caputi and R. Budelli. Temporal Encoding in Nervous Systems: a Rigorous Definition F.Theunissen, J.P. Miller. Editorial introducing the Bulletin Board. Bulletin; D. Glanzman ************************************** SUBSCRIPTIONS: Volume 2, 1995 (4 issues): Institutional rate: $270.00 US Individual rate: $75.00 US PLEASE CONTACT: Kluwer Academic Publishers Order Department P.O. Box 358, Accord Station Hingham, MA 02108-0358 USA Phone: (617) 871-6600, Fax: (617) 871-6528 E-mail: kluwer at wkap.com Please refer to the KLUWER ACADEMIC PUBLISHERS INFORMATION SERVER at GOPHER.WKAP.NL for Call for Papers, Aims and Scope and additional information. *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic addresses for: laboratory http://www.bbb.caltech.edu/bowerlab GENESIS: http://www.bbb.caltech.edu/GENESIS science education reform http://www.caltech.edu/~capsi From tirthank at titanic.mpce.mq.edu.au Mon Jul 24 05:39:18 1995 From: tirthank at titanic.mpce.mq.edu.au (Tirthankar Raychaudhuri) Date: Mon, 24 Jul 1995 19:39:18 +1000 (EST) Subject: Change to URL of Combining Estimators Page Message-ID: <9507240939.AA07999@titanic.mpce.mq.edu.au> A non-text attachment was scrubbed... Name: not available Type: text Size: 748 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/7369cf4c/attachment.ksh From gerda at ai.univie.ac.at Mon Jul 24 08:59:40 1995 From: gerda at ai.univie.ac.at (Gerda Helscher) Date: Mon, 24 Jul 1995 13:59:40 +0100 Subject: Symposium "ANN and Adaptive Systems" Message-ID: <199507241259.AA05709@kairo.ai.univie.ac.at> CALL FOR PAPERS for the symposium ====================================================== Artificial Neural Networks and Adaptive Systems ====================================================== chairs: Guenter Palm, Germany, and Georg Dorffner, Austria as part of the Thirteenth European Meeting on Cybernetics and Systems Research April 9-12, 1996 University of Vienna, Vienna, Austria For this symposium, papers on any theoretical or practical aspect of artificial neural networks are invited. Special focus, however, will be put on the issue of adaptivity both in practical engineering applications and in applications of neural networks to the modeling of human behavior. By adaptivity we mean the capability of a neural network to adjust itself to changing environments. 
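As a minimal sketch of this kind of adaptivity (an illustrative example, not part of the call; the drift model, step size, and run length are arbitrary choices), a linear unit updated on-line with a constant step size keeps tracking a slowly drifting target mapping instead of being trained once and then frozen:

import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5])   # target weights, drifting over time (the changing environment)
w = np.zeros(2)                  # the adaptive linear unit
eta = 0.05                       # constant step size, so adaptation never stops

for t in range(10000):
    w_true += 0.001 * rng.standard_normal(2)   # the environment changes slowly
    x = rng.standard_normal(2)
    y = w_true @ x                              # current "correct" response
    e = y - w @ x                               # prediction error
    w += eta * e * x                            # LMS-style update: adapt on every example

With a decaying step size the unit would converge to one fixed mapping; the constant step size is what lets it follow the drift.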
We make a careful distinction between "learning" to devise weight matrices for a neural network before it is applied (and usually left unchanged) on the one hand, and true adaptivity of a given neural network to constantly changing conditions on the other hand - i.e. incremental learning in non-stationary environments. The following is a - by no means exhaustive - list of possible topics in this realm: - online or incremental learning of neural network applications facing changing data distributions - transfer of neural network solutions to related but different approaches - application of neural networks in adaptive autonomous systems - "phylogenetic" vs. "ontogenetic" adaptivity (e.g. adaptivity of connectivity and architecture vs. adaptivity of coupling parameters or weights) - short term vs. long term adaptation - adaptive reinforcement learning - adaptive pattern recognition - localized vs. distributed approximation (in terms of overlap of decision regions) and adaptivity. Preference will be given to contributions that address such issues of adaptivity, but - as mentioned initially - other original work on neural networks is also welcome. Deadline for submissions (10 single-spaced A4 pages, maximum 43 lines, max. line length 160 mm, 12 point) is
===============================================
October 12, 1995
===============================================
Papers should be sent to: I. Ghobrial-Willmann or G. Helscher, Austrian Society for Cybernetic Studies, A-1010 Vienna 1, Schottengasse 3 (Austria) Phone: +43-1-53532810 Fax: +43-1-5320652 E-mail: sec at ai.univie.ac.at For more information on the whole EMCSR conference, see the Web-page http://www.ai.univie.ac.at/emcsr/ or contact the above address. !Hope to see you in Vienna!
From M.Q.Brown at ecs.soton.ac.uk Mon Jul 24 11:49:17 1995 From: M.Q.Brown at ecs.soton.ac.uk (Martin Brown) Date: Mon, 24 Jul 1995 15:49:17 +0000 Subject: 2 postdoctoral positions available Message-ID: <6371.9507241449@ra.ecs.soton.ac.uk>
UNIVERSITY OF SOUTHAMPTON DEPARTMENT OF ELECTRONICS AND COMPUTER SCIENCE RESEARCH FELLOWS
Two postdoctoral positions are currently available on an EPSRC grant entitled Neurofuzzy Construction Algorithms and their Application in Non-Stationary Environments. Links to the groups, personnel and industrial companies can be obtained from the project's homepage at: http://www-isis.ecs.soton.ac.uk/research/projects/osiris.html
Two postdoctoral researchers are required to investigate the development and application of advanced network construction algorithms and training rules for neurofuzzy systems operating in a time-varying environment. The candidates should possess skills in applied mathematics and computer science and have experience in such areas as numerical analysis, Visual C++ programming, neural/fuzzy learning theory, dynamical systems and optimisation theory. This research will be undertaken in association with Neural Computer Sciences (http://www.demon.co.uk/skylake/), who produce an object oriented, 32 bit MS Windows-based neural networks package called NeuFrame; benchmarking data sets will be collected from GEC and Lucas. In addition, Eurotherm Controls are supplying tools to investigate the possibility of developing embedded devices. Post One - One researcher is required for 3 years to investigate and further develop the neurofuzzy construction algorithms that have been proposed by the ISIS group. They will be based at Southampton under the supervision of Martin Brown and Chris Harris.
The neural+fuzzy approach allows vague, expert knowledge to be combined with numerical data to produce systems that make the best use of both information sources. However, for ill-defined, high-dimensional systems it would be useful to configure a network's structural parameters directly from the data. Recent research has shown that B-spline-based neurofuzzy systems are suitable for use in such algorithms due to their direct fuzzy interpretation, numerical conditioning and ease of implementation, and by considering an ANalysis Of VAriance (ANOVA) representation, the B-spline neurofuzzy networks can be shown to overcome the curse of dimensionality for many practical problems. A good background in numerical analysis and modelling theory (additive, neural/fuzzy) is required, and as the algorithms will be developed within a Visual C++, Microsoft Foundation Classes environment, hence knowledge about these products would also be useful. Informal enquiries for this post should be made to Dr Martin Brown in the ISIS research group, Department of Electronics and Computer Science, University of Southampton, UK (Tel +44 (0)1703 594984, Email: mqb at ecs.soton.ac.uk). Salary will be in the range of 15,986 - 18,985 per annum. Applicants for post one should send a full curriculum vitae (3 copies from UK applicants and 1 from overseas), including the names and addresses of three referees to the Personnel Department (R), University of Southampton, Highfield, Southampton, SO17 1BJ, telephone number (01703 592750) by no later than 25 August 1995. Please quote reference number R/553. Post Two - A second researcher is required for 2 years (with the possibility of it being extended for an extra year) to investigate on-line learning for non-stationary data. They will be based at Brighton University under the supervision of Steve Ellacott. This work will investigate several aspects of training neurofuzzy systems on-line such as: * learning algorithms for large, redundant training sets * recurrent training rules * high-order instantaneous learning algorithms * aspects of data excitation and on-line regularisation The ideal candidate would be a mathematician or mathematically oriented engineer with a background in numerical analysis and/or dynamical systems. Familiarity with neural network algorithms would be an advantage, but is not essential. The post will involve some programming in C or C++. All enquiries and applications for post two should be made to Dr Steve Ellacott in the Department of Mathematical Sciences, University of Brighton, UK (Tel +44 (0)1273 642544, Email: s.w.ellacott at brighton.ac.uk). working for equal opportunities a centre of excellence for university research and teaching From Dimitris.Dracopoulos at trurl.brunel.ac.uk Mon Jul 24 15:32:05 1995 From: Dimitris.Dracopoulos at trurl.brunel.ac.uk (Dimitris Dracopoulos) Date: Mon, 24 Jul 1995 13:32:05 -0600 Subject: NEURAL AND EVOLUTIONARY SYSTEMS ADVANCED MSC Message-ID: <9507241332.ZM7787@trurl.brunel.ac.uk> NEURAL AND EVOLUTIONARY SYSTEMS ADVANCED MSC ============================================ The Computer Science Department at Brunel University (United Kingdom) will be running a new advanced MSc course on Neural and Evolutionary Systems from September 1995. 
You may find further details at the following locations: WWW: http://http1.brunel.ac.uk:8080/depts/cs/ in the News section FTP: ftp.brunel.ac.uk CompSci/Announcements/NES-MSc.ps (PostScript version) CompSci/Announcements/NES-MSc.ascii (ASCII version) For further information including literature and an application form, please contact Pam Osborne at the address below, or for more detailed enquiries please contact Vlatka Hlupic (address given below) or me via email at: Dr Dimitris C. Dracopoulos Department of Computer Science Brunel University Telephone: +44 1895 274000 ext. 2120 London Fax: +44 1895 251686 Uxbridge E-mail: Dimitris.Dracopoulos at brunel.ac.uk Middlesex UB8 3PH United Kingdom ------------------------------------------------------------------------------- Pam Osborne Dept of Computer Science Tel: +44 (0)895 274000 Brunel University Ext: 2134 Uxbridge Fax: +44 (0)895 251686 Middlesex UB8 3PH Pam.Osborne at brunel.ac.uk ------------------------------------------------------ ------------------------------------------------------ Dr. Vlatka Hlupic Dept of Computer Science Tel: +44 (0)895 274000 Brunel University Ext: 2231 Uxbridge Fax: +44 (0)895 251686 Middlesex UB8 3PH Vlatka.Hlupic at brunel.ac.uk -- Dr Dimitris C. Dracopoulos Department of Computer Science Brunel University Telephone: +44 1895 274000 ext. 2120 London Fax: +44 1895 251686 Uxbridge E-mail: Dimitris.Dracopoulos at brunel.ac.uk Middlesex UB8 3PH United Kingdom From bogus@does.not.exist.com Tue Jul 25 08:53:07 1995 From: bogus@does.not.exist.com () Date: Tue, 25 Jul 95 13:53:07 +0100 Subject: EANN96-First Call for Papers Message-ID: <9433.9507251253@pluto.lpac.qmw.ac.uk> International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK June 24-26, 1996 First Call for Papers (ASCII version) The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (200 to 400 words) should be sent to eann96 at lpac.ac.uk by 21 January 1996, by e-mail in PostScript format, or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be upto 8 pages. Tutorial proposals are also welcome until 21 January 1996. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited. Organising Committee A. 
Bulsari (Finland) Dimitris Tsaptsinos (UK) Trevor Clarkson (UK) International program committee (to be confirmed, extended) Dorffner, Georg (Austria) Gong, Shaogang (UK) Heikkonen, Jukka (Italy) Jervis, Barrie (UK) Oja, Erkki (Finland) Liljenstrom, Hans (Sweden) Papadourakis, George (Greece) Pham, D.T (UK) Refenes, Paul (UK) Sharkey, Noel (UK) Steele, Nigel (UK) Williams, Dave (UK) For more information see the WWW Page at:http://www.lpac.ac.uk/EANN96/ From jbower at bbb.caltech.edu Tue Jul 25 12:54:43 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Tue, 25 Jul 95 09:54:43 PDT Subject: CNS*94 Conference Proceedings Message-ID: <9507251654.AA17150@bbb.caltech.edu> The Neurobiology of Computation edited by James M. Bower CALTECH, Pasadena, CA, USA The Neurobiology of Computation: The Proceedings of the Third Annual Computation and Neural Systems Conference contains the collected papers of the Conference on Computational Neuroscience, July 21--23, 1994, Monterey, California. These papers represent a cross-section of current research in computational neuroscience. While the majority of papers describe analysis and modeling efforts, other papers describe the results of new biological experiments explicitly placed in the context of computational theories and ideas. Subjects range from an analysis of subcellular processes, to single neurons, networks, behavior, and cognition. In addition, several papers describe new technical developments of use to computational neuroscientists. Contents: Introduction. Section 1: Subcellular. Section 2: Cellular. Section 3: Network. Section 4: Systems. Index. Kluwer Academic Publishers, Boston Date of publishing: July 1995 464 pp. Hardbound ISBN: 0-7923-9543-3 Prices: NLG: 300.00 USD: 180.00 GBP: 122.50 ============================================================================= ORDER FORM Author: James M. Bower Title: The Neurobiology of Computation ( ) Hardbound / ISBN: 0-7923-9543-3 NLG: 300.00 USD: 180.00 GBP: 122.50 Ref: KAPIS ( ) Payment enclosed to the amount of ___________________________ ( ) Please send invoice ( ) Please charge my credit card account: Card no.: |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| Expiry date: ______________ () Access () American Express () Mastercard () Diners Club () Eurocard () Visa Name of Card holder: ___________________________________________________ Delivery address: Title : ___________________________ Initials: _______________M/F______ First name : ______________________ Surname: ______________________________ Organization: ______________________________________________________________ Department : ______________________________________________________________ Address : ______________________________________________________________ Postal Code : ___________________ City: ____________________________________ Country : _____________________________Telephone: ______________________ Email : ______________________________________________________________ Date : _____________________ Signature: _____________________________ Our European VAT registration number is: |_|_|_|_|_|_|_|_|_|_|_|_|_|_| To be sent to: For customers in Mexico, USA, Canada Rest of the world: and Latin America: Kluwer Academic Publishers Kluwer Academic Publishers Group Order Department Order Department P.O. Box 358 P.O. Box 322 Accord Station 3300 AH Dordrecht Hingham, MA 02018-0358 The Netherlands U.S.A. 
Tel : 617 871 6600 Tel : +31 78 392392 Fax : 617 871 6528 Fax : +31 78 546474 Email : kluwer at wkap.com Email : services at wkap.nl After October 10, 1995 Tel : +31 78 6392392 Fax : +31 78 6546474 Payment will be accepted in any convertible currency. Please check the rate of exchange with your bank. Prices are subject to change without notice. All prices are exclusive of Value Added Tax (VAT). Customers in the Netherlands please add 6% VAT. Customers from other countries in the European Community: * please fill in the VAT number of your institute/company in the appropriate space on the orderform: or * please add 6% VAT to the total order amount (customers from the U.K. are not charged VAT). *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic addresses for: laboratory http://www.bbb.caltech.edu/bowerlab GENESIS: http://www.bbb.caltech.edu/GENESIS science education reform http://www.caltech.edu/~capsi From baluja at GS93.SP.CS.CMU.EDU Tue Jul 25 16:33:20 1995 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Tue, 25 Jul 95 16:33:20 EDT Subject: Paper Available on Human Face Detection Message-ID: Title: Human Face Detection in Visual Scenes By: Henry Rowley, Shumeet Baluja & Takeo Kanade Abstract: We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrapping algorithm for training the networks, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to accurately represent the entire space of non-face images. The system out-performs other state-of-the-art face detection systems, in terms of detection and false-positive rates. Instructions (via WWW) ---------------------------------------------- PAPER (html & postscript) is available from both of these sites: http://www.cs.cmu.edu/~baluja http://www.cs.cmu.edu/~har ON-LINE DEMO: http://www.ius.cs.cmu.edu/demos/facedemo.html QUESTIONS and COMMENTS (please mail to both): baluja at cs.cmu.edu & har at cs.cmu.edu From FRYRL at f1groups.fsd.jhuapl.edu Tue Jul 25 15:18:00 1995 From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.) Date: Tue, 25 Jul 95 15:18:00 EDT Subject: NNs and Info. Th. Message-ID: <3014B770@fsdsmtpgw.fsd.jhuapl.edu> New neuroprose entry: A paper entitled "Rational neural models based on information theory" will be preseted at the Fifteenth International Workshop on MAXIMUM ENTROPY AND BAYESIAN METHODS, in Sante Fe, New Mexico on July 31 - August 4, 1995. The enclosed abstract summarizes the presentation which describes an information-theoretic explanation of some spatial and temporal aspects of neurological information processing. Author: Robert L. Fry Affiliation: The Johns Hopkins University/Applied Physics Laboratory Laurel, MD 20723 Title: Rational neural models based on information theory Abstract Biological organisms which possess a neurological system exhibit varying degrees of what can be termed rational behavior. One can hypothesize that rational behavior and thought processes in general arise as a consequence of the intrinsic rational nature of the neurological system and its constituent neurons. 
A similar statement may be made of the immunological system [1]. The concept of rational behavior can be made quantitative. In particular, one possible characterization of rational behavior is as follows (1) A physical entity (observer) must exist which has the capacity for both measurement and the generation of outputs (participation). Outputs represent decisions on the part of the observer which will be seen to be rational. (2) The establishment of the quantities measurable by the observer is achieved through learning. Learning characterizes the change in knowledge state of an observer in response to new information and is driven by the directed divergence information measure of Kullback [2]. (3) Output decisions must be made optimally on the basis of noisy and/or missing input data. Optimally here implies that the decision-making process must abide by the standard logical consistency axioms which give rise to probability as the only logically consistent measure of degree of plausible belief. An observer using decision rules based on such is said to be rational. Information theory can be used to quantify the above leading to computational paradigms with architectures that closely resemble both the single cortical neuron and interconnected planar field of multiple cortical neurons all of which are functionally identical to one another. A working definition of information in a neural context must be agreed upon prior to this development, however. Such a definition can be obtained through the Laws of Form - a mathematics of observation originating with the British mathematician George Spencer-Brown [3]. [1] Francisco J. Varela, Principles of Biological Autonomy, North Holland, 1979. [2] Solomon Kullback, Information theory and statistics, Wiley, 1959 and Dover, 1968. [3] George Spencer-Brown, Laws of Form, E. P. Dutton, New York 1979 The paper is in compressed postscript format via FTP from archive.cis.ohio-state.edu /pub/neuroprose/fry.maxent.ps.Z using standard telnet or other FTP procedures From terry at salk.edu Tue Jul 25 16:55:28 1995 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 25 Jul 95 13:55:28 PDT Subject: Development: A Constructivist Manifesto Message-ID: <9507252055.AA26263@salk.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/quartz.const.ps.Z The file quartz.const.ps.Z is now available for copying from the Neuroprose repository. This is a 47 page paper. No hardcopies available. THE NEURAL BASIS OF COGNITIVE DEVELOPMENT: A CONSTRUCTIVIST MANIFESTO by Steven R. Quartz and Terrence J. Sejnowski The Salk Institute for Biological Studies PO Box 85800, San Diego CA 92186-5800 e-mail: steve at salk.edu submitted to: Behavioral and Brain Science ABSTRACT: Through considering the neural basis of cognitive development, we present a constructivist view. Its key feature is that environmentally-derived activity regulates neuronal growth as a progressive increase in the representational capacities of cortex. Learning in development becomes a dynamic interaction between the environment's informational structure and growth mechanisms, allowing the representational properties of cortex to be constructed by the problem domain confronting it. This is a uniquely powerful and general learning strategy that undermines the central assumptions of classical learnability theory. 
It also minimizes the need for prespecification of cortical function, suggesting that cortical evolution is a progression to more flexible representational structures, in contrast to the popular view of cortical evolution as an increase in specialized, innate circuits.
************ How to obtain a copy of the paper *************
Via Anonymous FTP: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose ftp> binary ftp> get quartz.const.ps.Z ftp> quit unix> uncompress quartz.const.ps.Z unix> lpr quartz.const.ps (or what you normally do to print PostScript)
From marco at McCulloch.Ing.UniFI.IT Tue Jul 25 07:20:22 1995 From: marco at McCulloch.Ing.UniFI.IT (Marco Gori) Date: Tue, 25 Jul 1995 13:20:22 +0200 Subject: paper announcement Message-ID: <9507251120.AA14789@McCulloch.Ing.UniFI.IT>
FTP-host: spovest.ing.unifi.it FTP-file: pub/tech-reports/bank.ps.Z FTP-file: pub/tech-reports/num-pla.ps.Z
The following papers are now available by anonymous ftp. They have been submitted to ICANN95 - Industrial track. In particular, the first one describes BANK, a real machine for banknote recognition, while the second one reports the results of a software tool for the recognition of number-plates in motorway environments.
================================================================
BANK: A Banknote Acceptor with Neural Kernel A. Frosini(*), M. Gori(**), and P. Priami(*) (*) Consulting Engineer (**) DSI - Univ. Firenze (ITALY)
Abstract This paper gives a summary of the electronics and software modules of BANK, a banknote machine operating with a neural network-based recognition model. The machine perceives banknotes by means of low cost optoelectronic devices which produce signals associated with the reflected and refracted rays of two parallel strips in the banknote. The recognition model is based on multilayer networks acting for both the classification and verification steps.
==================================================================
Number-Plate Recognition in Practice: The Role of Neural Networks A. Frosini, M. Gori(*), L. Pistolesi (*) DSI - Univ. Firenze (ITALY)
Abstract Automatic number-plate recognition has been receiving growing attention in a number of practical problems. In this paper we show the crucial role of neural networks in implementing a software tool for the recognition of number-plates of Italian cars in the actual motorway environment. We show that proper neural network architectures can solve the problem of character recognition very effectively and, most importantly, can also offer a significant confidence on the classification decision. This turns out to be of crucial importance in order to exploit effectively the hypothesize-and-verify paradigm on which the software tool relies. P.S. A demo of the software tool will be available via the internet in the next few weeks for Personal Computers.
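As a rough sketch of how such a classification confidence can drive a hypothesize-and-verify step (an illustrative example only; the function names, the placeholder network "net", and the acceptance threshold are not taken from the actual tool):

import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def read_plate(char_images, net, threshold=0.9):
    """Hypothesize one character per segmented image; characters whose estimated
    posterior falls below the threshold are flagged for a verification step
    (e.g. re-segmentation, or rejection of the whole plate)."""
    hypothesis, to_verify = [], []
    for i, img in enumerate(char_images):
        posteriors = softmax(net(img))     # net(img) is assumed to return one score per character class
        c = int(np.argmax(posteriors))
        hypothesis.append(c)
        if posteriors[c] < threshold:      # low confidence -> this hypothesis needs verification
            to_verify.append(i)
    return hypothesis, to_verify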
================================================================== From rmeir at ee.technion.ac.il Wed Jul 26 12:00:50 1995 From: rmeir at ee.technion.ac.il (Ron Meir) Date: Wed, 26 Jul 1995 14:00:50 -0200 Subject: 12th Israeli Symposium on AI, CV & NN Message-ID: <199507261600.OAA25777@ee.technion.ac.il> -------- Announcement and Call For Papers ------------ 12th Israeli Symposium on Artificial Intelligence, Computer Vision and Neural Networks Tel Aviv University, Tel Aviv, February 4-5, 1996 The purpose of the symposium is to bring together researchers and practitioners from Israel and abroad who are interested in the areas of Artificial Intelligence, Computer Vision, and Neural Networks, and to promote interaction between them. The program will include contributed as well as invited lectures and possibly some tutorials. All lectures will be given in English. Papers are solicited addressing all aspects of AI, Computer Vision and Neural Networks. Novel contributions in preliminary stages are especially encouraged but significant work which has been presented recently will also be considered. The symposium is intended to be more informal than previous symposia. The proceedings, including summaries of the contributed and invited talks, will be organized as a technical report and distributed during the symposium. No copyright will be required. To minimize costs, we intend to organize this symposium on a university campus. Authors should submit an extended abstract of their presentation in English so that it will reach us by September 1st 1995. Submissions should be limited to four pages, including title and bibliography. Submitted contributions will be refereed by the program committee. Authors will be notified of acceptance by November 1st, 1995. A final abstract, to be included in the proceedings, is due by January 10, 1996. For receiving updated information on the symposium, please send a message to Yvonne Sagi (yvonne at cs.technion.ac.il), including your name, affiliation, e-mail, fax number and phone number. Submitted extended abstracts should be send to: Yvonne Sagi Computer Science Department Technion, Israel Institute of Technology Haifa, 32000, Israel Dan Geiger (Artificial Intelligence) e-mail: dang at cs.technion.ac.il Phone: 972-4-294265 Micha Lindenbaum (Computer Vision) e-mail: mic at cs.technion.ac.il Phone: 972-4-294331 Ron Meir (Neural Networks) e-mail: rmeir at ee.technion.ac.il Phone: 972-4-294658 From C.Campbell at bristol.ac.uk Thu Jul 27 05:25:27 1995 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Thu, 27 Jul 1995 10:25:27 +0100 (BST) Subject: PhD studentship available Message-ID: <199507270925.KAA11419@zeus.bris.ac.uk> PhD Studentship Available A PhD studentship has become available at short notice. The project involves the application of neural computing and statistical techniques to highlight and detect tumours on scans. In particular we are interested in detecting a specific type of tumour called an acoustic neuroma. The project is in collaboration with staff from the Computer Science Dept., Bristol University with an interest in computer vision and staff from the Dept. of Radiology, Bristol Royal Infirmary. The closing date for applications is the ** 12th August 1995 **. The studentship is available for three years with a maintenance grant of 5,000 pounds per annum and coverage of postgraduate fees at the home rate of 2,200 pounds per annum. 
Suitable candidates should have a First or Upper Second Class degree in computer science, mathematics, or a similar numerate discipline.

Interested candidates should contact Dr. Colin Campbell, Dept. of Engineering Mathematics, Bristol University, Bristol BS8 1TR, United Kingdom. Given the close deadline it is best to contact Dr. Campbell via e-mail (C.Campbell at bris.ac.uk). Candidates should send a CV and arrange for 2 letters of reference to be despatched to the above address ASAP.

From klaus at sat.t.u-tokyo.ac.jp Fri Jul 28 01:50:23 1995
From: klaus at sat.t.u-tokyo.ac.jp (Klaus Mueller)
Date: Fri, 28 Jul 95 01:50:23 JST
Subject: new paper on learning curves
Message-ID: <9507271650.AA19714@elf.sat.t.u-tokyo.ac.jp>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/klaus.lcurve.ps.Z

The following paper is now available for copying from the Neuroprose repository: klaus.lcurve.ps.Z

klaus.lcurve.ps.Z (129075 bytes) 26 pages.

M\"uller, K.-R., Murata, N., Finke, M., Schulten, K., Amari, S.:
A Numerical Study on Learning Curves in Stochastic Multi-Layer Feed-Forward Networks

The universal asymptotic scaling laws proposed by Amari et al. are studied in large scale simulations using a CM5. Small stochastic multi-layer feed-forward networks trained with back-propagation are investigated. In the range of a large number of training patterns $t$, the asymptotic generalization error scales as $1/t$ as predicted. For a medium range $t$ a faster $1/t^2$ scaling is observed. This effect is explained by using higher order corrections of the likelihood expansion. It is shown for small $t$ that the scaling law changes drastically when the network undergoes a transition from ineffective to effective learning.

(University of Tokyo Technical Report METR 03-95 and submitted)

* NO HARDCOPIES *

Best regards,
Klaus

&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Dr. Klaus-Robert M\"uller
C/o Prof. Dr. S. Amari
Department of Mathematical Engineering
University of Tokyo
7-3-1 Hongo, Bunkyo-ku
Tokyo 113, Japan
mail: klaus at sat.t.u-tokyo.ac.jp
Fax: +81 - 3 - 5689 5752
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
PERMANENT ADDRESS:
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Dr. Klaus-Robert M\"uller
GMD First (Gesellschaft f. Mathematik und Datenverarbeitung)
Rudower Chaussee 5, 12489 Berlin
Germany
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

From howse at baku.eece.unm.edu Thu Jul 27 17:59:09 1995
From: howse at baku.eece.unm.edu (El Confundido)
Date: Thu, 27 Jul 1995 15:59:09 -0600
Subject: Tech Report Available
Message-ID: <9507272159.AA03634@baku.eece.unm.edu>

The following technical report is available by FTP:

A Synthesis of Gradient and Hamiltonian Dynamics Applied to Learning in Neural Networks

James W. Howse, Chaouki T. Abdallah and Gregory L. Heileman

Abstract
The process of model learning can be considered in two stages: model selection and parameter estimation. In this paper a technique is presented for constructing dynamical systems with desired qualitative properties. The approach is based on the fact that an n-dimensional nonlinear dynamical system can be decomposed into one gradient and (n - 1) Hamiltonian systems. Thus, the model selection stage consists of choosing the gradient and Hamiltonian portions appropriately so that a certain behavior is obtainable. To estimate the parameters, a stably convergent learning rule is presented.
This algorithm is proven to converge to the desired system trajectory for all initial conditions and system inputs. This technique can be used to design neural network models which are guaranteed to solve certain classes of nonlinear identification problems.

Retrieval:
FTP anonymous to: ftp.eece.unm.edu
cd howse
get techrep.ps.gz

This is a PostScript file compressed with gzip. The paper is 28 pages long and formatted to print DOUBLE-sided. This paper has been submitted for publication. If there are any retrieval problems please let me know. I would welcome any comments or suggestions regarding the paper.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
James Howse - howse at eece.unm.edu
University of New Mexico
Department of EECE, 224D
Albuquerque, NM 87131-1356
Telephone: (505) 277-0805
FAX: (505) 277-1413 or (505) 277-1439
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

From d23d at unb.ca Thu Jul 27 18:03:09 1995
From: d23d at unb.ca (Deshpande)
Date: Thu, 27 Jul 1995 19:03:09 -0300 (ADT)
Subject: Some Questions ...
Message-ID:

Dear Members,

We are currently working on a symbolic approach to low-level processing of visual information. There may be many researchers on this list who are working in this area of vision. I would like to pose certain very basic questions whose importance is often overlooked. Let me first put forth, in simple terms, the problem that we are working on:

Is the information processing in low-level vision symbolic or non-symbolic? That is, should the signal that the measurement devices capture be interpreted symbolically or, as is conventionally done, in the functional domain? And what are the implications of the two initial forms of representation as far as pattern recognition is concerned? Some of the related work can be found in [1]. Moreover,

1) What is the justification for a spatial/frequency domain decomposition of the signal (intensity map) that is representing the objects?

2) From an information-theoretic point of view, what relevance does this decomposition have?

3) Neurophysiological evidence does show a similarity to a Gabor filtering scheme in the human visual system, but, as David Marr rightly pointed out, how does this help one to understand its specific relationship to perception?

4) Even if one assumes an ad hoc justification for the above (spatial-frequency based decomposition), how does one justify the distance function imposed on the vector space formed by these basis functions (of Gabor filters)? That is, how does this distance function bring out the relationship between the geometrical information of objects that the signal is representing?

One finds a lot of literature on this approach of spatial-frequency domain decomposition of the signal as a scheme for texture segmentation, but none really justifies the appropriateness of this approach.

If the above is not of relevance to the majority of the members, please send your suggestions and comments directly to the following email address: d23d at unb.ca .

cheers,
sanjay

[1] I.B. Muchnik and V.V. Mottl, "Linguistic Analysis of Experimental Curves", Proc. IEEE, vol. 67, no. 5, May 1979.
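As a concrete reference point for questions 1)-4) above, the following is a minimal sketch of what a spatial/frequency (Gabor) decomposition of a 1-D intensity signal looks like in practice. It is purely illustrative Python (assuming the numpy package is available); the kernel length, bandwidth and frequencies are arbitrary choices and the sketch is not drawn from any of the cited work.

# Minimal 1-D Gabor filter bank: one concrete reading of a "spatial/frequency
# domain decomposition" of an intensity signal (illustrative sketch only).
import numpy as np

def gabor_kernel(freq, sigma, length=65):
    """Complex Gabor kernel: a Gaussian window modulating a complex exponential."""
    x = np.arange(length) - length // 2
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2)) * np.exp(2j * np.pi * freq * x)

def gabor_responses(signal, freqs, sigma=8.0):
    """Magnitude of the signal's convolution with each Gabor kernel."""
    return np.array([np.abs(np.convolve(signal, gabor_kernel(f, sigma), mode="same"))
                     for f in freqs])

# Example: a signal whose left half oscillates slowly and whose right half oscillates quickly.
t = np.arange(512)
signal = np.where(t < 256, np.sin(2 * np.pi * 0.02 * t), np.sin(2 * np.pi * 0.10 * t))
features = gabor_responses(signal, freqs=[0.02, 0.10])
# Each row of `features` is large where the signal contains that frequency,
# which is the kind of local frequency "channel" the questions above refer to.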
From klaus at prosun.first.gmd.de Fri Jul 28 06:17:24 1995
From: klaus at prosun.first.gmd.de (klaus@prosun.first.gmd.de)
Date: Fri, 28 Jul 95 12:17:24 +0200
Subject: new paper on "Analysis of Switching Dynamical Systems"
Message-ID: <9507281017.AA01180@lanke.first.gmd.de>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/pawelzik.switch.ps.Z
FTP-file: pub/neuroprose/mueller.switch_speech.ps.Z

The following 2 papers are now available for copying from the Neuroprose repository: pawelzik.switch.ps.Z, mueller.switch_speech.ps.Z

pawelzik.switch.ps.Z (124459 bytes) 16 pages.

Pawelzik, K., Kohlmorgen, J., M\"uller, K.-R.:
Annealed Competition of Experts for a Segmentation and Classification of Switching Dynamics

We present a method for the unsupervised segmentation of data streams originating from different unknown sources which alternate in time. We use an architecture consisting of competing neural networks. Memory is included in order to resolve ambiguities of input-output relations. In order to obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training procedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction.

(Neural Computation, in press)

mueller.switch_speech.ps.Z (427948 bytes) 11 pages.

M\"uller, K.-R., Kohlmorgen, J., Pawelzik, K.:
Analysis of Switching Dynamics with Competing Neural Networks

We present a framework for the unsupervised segmentation of time series. It applies to non-stationary signals originating from different dynamical systems which alternate in time, a phenomenon which appears in many natural systems. In our approach, predictors compete for data points of a given time series. We combine competition and evolutionary inertia into a learning rule. Under this learning rule the system evolves such that the predictors which finally survive unambiguously identify the underlying processes. Applications to time series from complex systems and speech are presented. The segmentation achieved is very precise and transients are included, a fact which makes our approach promising for several applications.

(IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, in press)

* NO HARDCOPIES *

Best regards,
Klaus

&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Dr. Klaus-Robert M\"uller
C/o Prof. Dr. S. Amari
Department of Mathematical Engineering
University of Tokyo
7-3-1 Hongo, Bunkyo-ku
Tokyo 113, Japan
mail: klaus at sat.t.u-tokyo.ac.jp
Fax: +81 - 3 - 5689 5752
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
PERMANENT ADDRESS:
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Dr. Klaus-Robert M\"uller
GMD First (Gesellschaft f. Mathematik und Datenverarbeitung)
Rudower Chaussee 5, 12489 Berlin
Germany
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

From chandler at kryton.ntu.ac.uk Fri Jul 28 08:24:11 1995
From: chandler at kryton.ntu.ac.uk (chandler@kryton.ntu.ac.uk)
Date: Fri, 28 Jul 1995 12:24:11 +0000
Subject: NEURO-FUZZY CONTROL POSITION
Message-ID: <9507281124.AA16372@kryton.ntu.ac.uk>

******************* NEURO-FUZZY CONTROL POSITION ***********************
at
******************* The Nottingham Trent University ***********************

RESEARCH FELLOW/ASSISTANT
-------------------------------------------------------------------------------

The Manufacturing Automation Research Group within the Department of Manufacturing Engineering, working in collaboration with the Real Time Machine Control Group of the Department of Computing, is seeking a full-time researcher for an initial two-year appointment to join an active research group working on fuzzy control techniques applied to the adaptive control of the complex process of stencil printing of solder paste.

For this post we require a numerate graduate with knowledge of computer control techniques and preferably an awareness of neuro-fuzzy methodologies. Previous experience of the electronics industry is also desirable. Individuals who have completed a PhD and graduates with a proven ability in complex system analysis and control are particularly welcome to apply.

Salary will be in the Research Fellow Scale (12,756 - 21,262 pounds p.a.) or the Research Assistant Scale (9,921 - 12,048 pounds p.a.).

Closing date 31st August 1995. Post No. G0493.

For more information about this post contact:

Martin Howarth
Manufacturing Automation Research Group
Department of Manufacturing Engineering
The Nottingham Trent University
Burton Street
Nottingham NG1 4BU
ENGLAND
TEL: +44 (115) 941 8418 (ext. 4110)
E-MAIL: man3howarm at ntu.ac.uk

or

Dr. Pete Thomas
Real Time Machine Control Group
Department of Computing
The Nottingham Trent University
Burton Street
Nottingham NG1 4BU
ENGLAND
TEL: +44 (115) 941 8418 (ext. 2901)

Alternatively use the HTML form at the URL:
http://marg.ntu.ac.uk/marg/vacancy895.html

From alisonw at cogs.susx.ac.uk Fri Jul 28 14:08:00 1995
From: alisonw at cogs.susx.ac.uk (Alison White)
Date: Fri, 28 Jul 95 14:08 BST
Subject: AISB96 Call for Workshop Proposals
Message-ID:

------------------------------------
AISB-96: CALL FOR WORKSHOP PROPOSALS
------------------------------------

Call for Workshop Proposals: AISB-96
University of Sussex, Brighton, England
April 1 -- 2, 1996

Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

Workshop Series Chair: Dave Cliff, University of Sussex
Local Organisation Chair: Alison White, University of Sussex

The AISB is the UK's largest and foremost Artificial Intelligence society -- now in its 32nd year. The Society has an international membership of nearly 900 drawn from both academia and industry. Membership is open to anyone with interests in Artificial Intelligence and the Cognitive and Computing Sciences.

The AISB Committee invites proposals for workshops to be held at the University of Sussex campus, on April 1st and 2nd, 1996. The AISB workshop series is held in even years during the Easter vacation. In odd years workshops are held immediately before the biennial conference.
The intention of holding a regular workshop series is to provide an administrative and organisational framework for workshop organisers, thus reducing the administrative burden for individuals and freeing them to focus on the scientific programme. Accommodation, food, and social events are organised for all workshop participants by the local organisers. Proposals are invited for workshops relating to any aspect of Artificial Intelligence or the Simulation of Behaviour. Proposals, from an individual or a pair of organisers, for workshops between 0.5 and 2 days long will be considered. Workshops will probably address topics which are at the forefront of research, but perhaps not yet sufficiently developed to warrant a full-scale conference. In addition to research workshops, a 'Postgraduate Workshop' has become a successful regular event over recent years. This event focuses on how to survive the process of studying for a PhD in AI/Cognitive Science, and has a hybrid workshop/tutorial nature. We welcome proposals, particularly from current PhD survivors, to organise the 1996 Postgraduate Workshop at Sussex. For further information on organising the postgraduate workshop, please see the AISB96 web page (address below) or contact Dave Cliff or Alison White. Proposals for tutorials will also be considered, and will be assessed on individual merit: please contact Dave Cliff or Alison White for further details of submission of tutorial proposals. It is the general policy of AISB to only approve tutorials which look likely to be financially viable. Submission: ---------- A workshop proposal should contain the following information: 1. Workshop Title 2. A detailed outline of the workshop. This should include the necessary background and the potential target audience for the workshop and a justified estimate of the number of possible attendees. Please also state the length and preferred date(s) of the workshop. Specify any equipment requirements, indicating whether the organisers would be expected to meet them. 3. A brief resume of the organiser(s). This should include: background in the research area, references to published work in the topic area and relevant experience, such as previous organisation or chairing of workshops. 4. Administrative information. This should include: name, mailing address, phone number, fax, and email address if available. In the case of multiple organisers, information for each organiser should be provided, but one organiser should be identified as the principal contact. 5. A draft Call for Participation. This should serve the dual purposes of informing and attracting potential participants. The organisers of accepted workshops are responsible for issuing a call for participation, reviewing requests to participate and scheduling the workshop activities within the constraints set by the Workshop Organiser. They are also responsible for submitting a collated set of papers for their workshop to the Workshop Series Chair. Workshop participants will receive bound photocopies of the collated set of papers, with copyright retained by the authors. Individual workshop organisers may wish to approach publishers to discuss publication of workshop papers in journal or book forms. DATES: ------ Intentions to organise a workshop should be made known to the Workshop Series Chair (Dave Cliff) as soon as possible. Proposals must be received by October 1st 1995. Workshop organisers will be notified by October 15th 1995. 
Organisers should be prepared to send out calls for workshop participation as soon as possible after this date. Collated sets of papers to be received by March 15th 1996. Proposals should be sent to: Dave Cliff AISB96 Workshop Series Chair School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH U.K. email: davec at cogs.susx.ac.uk phone: +44 1273 678754 fax: +44 1273 671320 Electronic submission (plain ascii text) is highly preferred, but hard copy submission is also accepted, in which case 5 copies should be submitted. Proposals should not exceed 2 sides of A4 (i.e. 120 lines of text approx.). General enquiries should be addressed to: Alison White AISB96 Local Organisation Chair School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH U.K. email: alisonw at cogs.susx.ac.uk phone: +44 1273 678448 fax: +44 1273 671320 A copy of this call, with further details for workshop organisers (including a full schedule), is available on the WWW from: http://www.cogs.susx.ac.uk/aisb/aisb96/cfw.html A plain-ASCII version of the web page is available via anonymous ftp from: % ftp ftp.cogs.susx.ac.uk login: anonymous password: [your_email at your_address] ftp cd pub/aisb/aisb96 ftp get [filename]* ftp quit * Files available at present are: README call_for_proposals From john at dcs.rhbnc.ac.uk Fri Jul 28 10:43:55 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Fri, 28 Jul 95 15:43:55 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199507281443.PAA21814@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): one new report available ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-050: ---------------------------------------- Learning Ordered Binary Decision Diagrams by Ricard Gavald\`a and David Guijarro, Universitat Polit\`ecnica de Catalunya Abstract: This note studies the learnability of ordered binary decision diagrams (obdds). We give a polynomial-time algorithm using membership and equivalence queries that finds the minimum obdd for the target respecting a given ordering. We also prove that both types of queries and the restriction to a given ordering are necessary if we want minimality in the output, unless P=NP. If learning has to occur with respect to the optimal variable ordering, polynomial-time learnability implies the approximability of two NP-hard optimization problems: the problem of finding the optimal variable ordering for a given obdd and the Optimal Linear Arrangement problem on graphs. ----------------------- The Report NC-TR-95-050 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-95-050.ps.Z ftp> bye % zcat nc-tr-95-050.ps.Z | lpr -l Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. 
The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor From honavar at iastate.edu Fri Jul 28 18:37:39 1995 From: honavar at iastate.edu (Vasant Honavar) Date: Fri, 28 Jul 1995 17:37:39 CDT Subject: No subject Message-ID: <9507282237.AA09329@pv031e.vincent.iastate.edu> The following recent publications of the Artificial Intelligence Research Group (URL http://www.cs.iastate.edu/~honavar/aigroup.html) can be accessed on WWW via the URL http://www.cs.iastate.edu/~honavar/publist.html 1. Chen, C-H. and Honavar, V. (1995). A Neural Memory Architecture for Content as well as Address-Based Storage and Recall: Theory and Applications Paper under review. Draft available as ISU CS-TR 95-03. 2. Chen, C-H. and Honavar, V. (1995). A Neural Network Architecture for High-Speed Database Query Processing. Paper under review. Draft available as ISU CS-TR 95-11. 3. Chen, C-H. and Honavar, V. (1995). A Neural Architecture for Syntax Analysis. Paper under review. Draft available as ISU-CS-TR 95-18. 4. Mikler, A., Wong, J., and Honavar, V. (1995). Quo-Vadis - Adaptive Heuristics for Routing in Large Communication Networks. Under review. Draft available as ISU CS-TR 95-10. 5. Mikler, A., Wong, J., and Honavar, V. (1995). An Object-Oriented Approach to Modelling and Simulation of Routing in Large Communication Networks. Under review. Draft available as: ISU CS-TR 95-09. 6. Balakrishnan, K. and Honavar, V. (1995). Evolutionary Design of Neural Architectures - A Preliminary Taxonomy and Guide to Literature. Available as: ISU CS-TR 95-01. 7. Parekh, R. & Honavar, V. (1995). An Interactive Algorithm for Regular Language Learning. Available as: ISU CS-TR 95-02. 8. Balakrishnan, K. and Honavar, V. (1995) Properties of Genetic Representations of Neural Architectures. In: Proceedings of the World Congress on Neural Networks. Washington, D.C., 1995. Available as: ISU CS-TR 95-13. 9. Chen, C-H., Parekh, R., Yang, J., Balakrishnan, K. and Honavar, V. (1995). Analysis of Decision Boundaries Generated by Constructive Neural Network Learning Algorithms. In: Proceedings of the World Congress on Neural Networks. Washington, D.C., 1995. Available as: ISU CS-TR 95-12. The following publications will be available on line shortly (within the next few weeks): 1. Kirillov, V. and Honavar, V. (1995). Simple Stochastic Temporal Constraint Networks. Draft available as: ISU CS-TR 95-16. 2. Mikler, A., Wong, J., and Honavar, V. (1995). Utility-Theoretic Heuristics for Routing in Large Telecommunication Networks. Draft available as: ISU CS-TR 95-14. 3. Parekh, R., Yang, J., and Honavar, V. (1995). Constructive Neural Network Learning Algorithms for Multi-Category Pattern Classification. Draft available as: ISU CS-TR 95-15. 4. Yang, J., Parekh, R., and Honavar, V. (1995). Comparison of Variants of Single-Layer Perceptron Algorithms on Non-Separable Data. Draft available as: ISU CS-TR 95-19. The WWW page also contains pointers to other older publications some of which are available on line. Those who don't have access to a WWW browser can obtain ISU CS tech reports by sending email to almanac at cs.iastate.edu with BODY (not SUBJECT) "send tr catalog" and following the instructions that you will receive in the reply from almanac. Sorry, no hard copies are available. 
Best regards,

Vasant Honavar
Artificial Intelligence Research Group
226 Atanasoff Hall
Department of Computer Science
Iowa State University
Ames, IA 50011-1040
email: honavar at cs.iastate.edu
www: http://www.cs.iastate.edu/~honavar/homepage.html

From stiber at bpe.es.osaka-u.ac.jp Fri Jul 28 23:43:55 1995
From: stiber at bpe.es.osaka-u.ac.jp (Stiber)
Date: Sat, 29 Jul 1995 12:43:55 +0900
Subject: two new papers on transient responses of pacemakers at Neuroprose
Message-ID: <199507290343.MAA05348@aoi.bpe.es.osaka-u.ac.jp>

The following 2 papers are now available for copying from the Neuroprose repository: stiber.transcomp.ps.Z, stiber.transhyst.ps.Z.

stiber.transcomp.ps.Z (194085 bytes, 10 pages)

M. Stiber, R. Ieong, J.P. Segundo
Responses to Transients in Living and Simulated Neurons
(submitted to NIPS'95; also technical report HKUST-CS95-26)

This paper is concerned with synaptic coding when inputs to a neuron change over time. Experiments were performed on a living and simulated embodiment of a prototypical inhibitory synapse. Results indicate that the neuron's response lags its input by a fixed delay. Based on this, we present a qualitative model for phenomena previously observed in the living preparation, including hysteresis and dependence of discharge regularity on rate of change of presynaptic spike rate. As change is the rule rather than the exception in life, understanding neurons' responses to nonstationarity is essential for understanding their function.

stiber.transhyst.ps.Z (244297 bytes, 13 pages)

M. Stiber and R. Ieong
Hysteresis and Asymmetric Sensitivity to Change in Pacemaker Responses to Inhibitory Input Transients
(in press, Proc. Int. Conf. on Brain Processes, Theories, and Models. W.S. McCulloch: 25 Years in Memoriam; also technical report HKUST-CS95-29)

The coding of presynaptic spike trains into postsynaptic ones is the unit of computation in nervous systems. While such coding has been examined in detail under stationary input conditions, the effects of changing inputs have until recently been understood only superficially. When a neuron receives transient inputs with monotonically changing instantaneous rate, its response over time depends not only on the rate at that time, but also on the sign and magnitude of its rate of change. This has been shown previously for the living embodiment of a prototypical inhibitory synapse. We present simulations of a physiological model of this living preparation which reproduce its behaviors. Based on these results, we propose a simple model for the neuron's response involving a constant delay between its input and internal state. This is then generalized to a nonlinear dynamical model of any similar system with an internal state which lags its input.

** If you absolutely, positively can't produce your own hardcopy (or induce a friend to do so for you), hardcopies can be requested in writing to: Technical Reports, Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong; don't forget to include the TR number. **

---
Dr. Michael Stiber                  stiber at bpe.es.osaka-u.ac.jp
c/o Prof. S. Sato
Department of Biophysical Engineering
Osaka University
Toyonaka 560 Osaka, Japan

On leave from:
Department of Computer Science                        stiber at cs.ust.hk
The Hong Kong University of Science & Technology      tel: +852-2358-6981
Clear Water Bay, Kowloon, Hong Kong                   fax: +852-2358-1477

From listerrj at helios.aston.ac.uk Mon Jul 31 12:14:43 1995
From: listerrj at helios.aston.ac.uk (Richard Lister)
Date: Mon, 31 Jul 1995 17:14:43 +0100
Subject: New MSc by Research
Message-ID: <18457.9507311614@sun.aston.ac.uk>

==============================================================
MSc by Research in Information Processing and Neural Networks
Aston University
UK
==============================================================

The Neural Computing Research Group in the Department of Computer Science at Aston University is introducing a new MSc by Research in Information Processing and Neural Networks to start October 1995.

The course will involve intensive taught modules in the first term and a supervised research project throughout the remaining terms, which will constitute the dominant part of the MSc. The aim of the course is to provide a practical grounding in exploiting state-of-the-art information processing methods in the context of real world problems. Consequently the course has close industrial involvement.

More details can be found on http://neural-server.aston.ac.uk/ or by contacting:

Professor David Lowe
Aston University
Aston Triangle
Birmingham B4 7ET
United Kingdom
email: ncrg at aston.ac.uk

==============================================================
From crites at pride.cs.umass.edu Sat Jul 1 17:05:05 1995 From: crites at pride.cs.umass.edu (crites@pride.cs.umass.edu) Date: Sat, 1 Jul 1995 17:05:05 -0400 (EDT) Subject: papers available on reinforcement learning Message-ID: <9507012105.AA12362@pride.cs.umass.edu> The following papers are now available online: --------------------------------------------------------------------------- Improving Elevator Performance Using Reinforcement Learning Robert H. Crites and Andrew G. Barto Computer Science Department University of Massachusetts Amherst, MA 01003-4610 crites at cs.umass.edu, barto at cs.umass.edu Submitted to NIPS 8 8 pages ftp://ftp.cs.umass.edu/pub/anw/pub/crites/nips8.ps.Z ABSTRACT This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their state is not fully observable and they are non-stationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of all of these complications, we show results that surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale dynamic optimization problem of practical utility. --------------------------------------------------------------------------- An Actor/Critic Algorithm that is Equivalent to Q-Learning Robert H. Crites and Andrew G. Barto Computer Science Department University of Massachusetts Amherst, MA 01003-4610 crites at cs.umass.edu, barto at cs.umass.edu To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. 8 pages ftp://ftp.cs.umass.edu/pub/anw/pub/crites/nips7.ps.Z ABSTRACT We prove the convergence of an actor/critic algorithm that is equivalent to Q-learning by construction. Its equivalence is achieved by encoding Q-values within the policy and value function of the actor and critic. The resultant actor/critic algorithm is novel in two ways: it updates the critic only when the most probable action is executed from any given state, and it rewards the actor using criteria that depend on the relative probability of the action that was executed. --------------------------------------------------------------------------- From dhw at santafe.edu Sat Jul 1 20:02:19 1995 From: dhw at santafe.edu (David Wolpert) Date: Sat, 1 Jul 95 18:02:19 MDT Subject: "Orthogonality" of the generalizers being combined Message-ID: <9507020002.AA25348@sfi.santafe.edu> In his recent posting, Nathan Intrator writes >>> combining, or in the simple case averaging estimators is effective only if these estimators are made somehow to be independent. >>> This is an extremely important point. Its importance extends beyond the issue of generalization accuracy however. For example, I once did a set of experiments trying to stack together ID3, backprop, and a nearest neighbor scheme, in the vanilla way. The data set was splice junction prediction. The stacking didn't improve things much at all. 
Looking at things in detail, I found that the reason for this was that not only did the 3 generalizers have identical xvalidation error rates, but *their guesses were synchronized*. They tended to guess the same thing as one another. In other words, although those generalizers are about as different from one another as can be, *as far as the data set in question was concerned*, they were practically identical. This is a great flag that one is in a data-limited scenario. I.e., if very different generalizers perform identically, that's a good sign that you're screwed. Which is a round-about way of saying that the independence Nathan refers to is always with respect to the data set at hand. This is discussed in a bit of detail in the papers referenced below. *** Getting back to the precise subject of Nathan's posting: Those interested in a formal analysis touching on how the generalizers being combined should differ from one another should read the Ander Krough paper (to come out in NIPS7) that I mentioned in my previous posting. A more intuitive discussion of this issue occurs in my original paper on stacking, where there's a whole page of text elaborating on the fact that "one wants the generalizers being combined to (loosely speaking) 'span the space' of algorithms and be 'mutually orthogonal'" to as much a degree as possible. Indeed, that was one of the surprising aspects of Leo Breiman's seminal paper - he got significant improvement even though the generalizers he combined were quite similar to one another. David Wolpert Stacking can be used for things other than generalizing. The example mentioned above is using it as a flag for when you're data-limited. Another use is as an empirical-Bayes method of setting hyperpriors. These and other non-generalization uses of stacking are discussed in the following two papers: Wolpert, D. H. (1992). "How to deal with multiple possible generalizers". In Fast Learning and Invariant Object Recognition, B. Soucek (Ed.), pp. 61-80. Wiley and Sons. Wolpert, D. H. (1993). "Combining generalizers using partitions of the learning set". In 1992 Lectures in Complex Systems, L. Nadel and D. Stein (Eds), pp. 489-500. Addison Wesley. From terry at salk.edu Sun Jul 2 00:43:51 1995 From: terry at salk.edu (Terry Sejnowski) Date: Sat, 1 Jul 95 21:43:51 PDT Subject: Caltech: LEARNING AND IDENTIFICATION Message-ID: <9507020443.AA16766@salk.edu> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LEARNING AND IDENTIFICATION a series of informal meetings at Caltech WHEN: July 5, 6 and 7 AT WHAT TIME: 11:00 am to 2:00 pm. Refreshments will be provided WHERE: Caltech, Watson building 104 REGISTRATION: If you are interested, please register by sending e-mail to soatto at caltech.edu. Registration is free, and lunch will be provided only to registered attendees. A detailed schedule of the meetings can be found on the Web site http://www.micro.caltech.edu/EE/Groups/Vision/meetings/ID.html - --------------- This workshop is intended for researchers in controls and in neural networks, and aims at exploring areas of common interest. Emphasis will be on tools of (linear and nonlinear) identification theory and their application to special classes of dynamic systems describing neural networks. 
The meetings are intended to explore the different areas at an introductory level; the speakers will summarize a particular area (which does not necessarily coincide with their own area of research) and provide a tutorial presentation as free as possible from specialized jargon, with the objective of generating a high degree of interaction among participants.

Among the topics we will cover:

- Classical stochastic realization and estimation theory
- Conventional linear identification
- Neural Networks and approximation
- Wavelets in identification
- Behavioural approach to representation and identification of systems
- Nonlinear identification and embedding theorems

Any change in the schedule and/or program will be posted on the WEB site.

--Stefano Soatto

From krogh at nordita.dk Mon Jul 3 10:15:53 1995
From: krogh at nordita.dk (Anders Krogh)
Date: Mon, 3 Jul 95 16:15:53 +0200
Subject: "Orthogonality" of the generalizers being combined
Message-ID: <9507031415.AA25900@norsci0.nordita.dk>

David Wolpert wrote

> Getting back to the precise subject of Nathan's posting: Those
> interested in a formal analysis touching on how the generalizers being
> combined should differ from one another should read the Ander Krough
> paper (to come out in NIPS7) that I mentioned in my previous
> posting.

The reference is (it is not terribly formal):

"Neural Network Ensembles, Cross Validation, and Active Learning"
by Anders Krogh and Jesper Vedelsby
To appear in NIPS 7.

It is in Neuroprose (see below for details) or at http://www.nordita.dk/~krogh/papers.html.

Peter Sollich and I are finishing up an analysis of an ensemble of linear networks. It may sound trivial, but it actually isn't. We'll post it when we're done. Among other things, we find that averaging under-regularized (i.e. over-fitting) networks trained on slightly different training sets can give a great improvement over a single network trained on all the data. This doesn't sound too surprising, but I think that is why ensembles work in a lot of applications: people use identical over-parametrized networks and then the ensemble averages the over-fitting away. I've seen some neural network predictors for protein secondary structure where that is the case. It means that an ensemble can sometimes replace regularization. We discuss it a bit in

"Improving Prediction of Protein Secondary Structure using Structured Neural Networks and Multiple Sequence Alignments"
by S\o ren K. Riis and Anders Krogh
NORDITA preprint 95/34 S
(try http://www.nordita.dk/~krogh/papers.html)

It's hot today.

- Anders

---------------------------------------------------------------------
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/krogh.ensemble.ps.Z

The file krogh.ensemble.ps.Z can now be copied from Neuroprose. The paper is 8 pages long. Hardcopies are NOT available.

Neural Network Ensembles, Cross Validation, and Active Learning
by Anders Krogh and Jesper Vedelsby

Abstract: Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance.
It is shown how to estimate the optimal weights of the ensemble members using unlabeled data. By a generalization of query by committee, it is finally shown how the ambiguity can be used to select new training data to be labeled in an active learning scheme.

The paper will appear in G. Tesauro, D. S. Touretzky and T. K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995.

________________________________________
Anders Krogh
Nordita
Blegdamsvej 17, 2100 Copenhagen, Denmark
email: krogh at nordita.dk
Phone: +45 3532 5503
Fax: +45 3138 9157
W.W.Web: http://www.nordita.dk/~krogh/
________________________________________

From kagan at pine.ece.utexas.edu Mon Jul 3 10:41:52 1995
From: kagan at pine.ece.utexas.edu (Kagan Tumer)
Date: Mon, 3 Jul 1995 09:41:52 -0500
Subject: Combining Generalizers
Message-ID: <199507031441.JAA17316@pine.ece.utexas.edu>

Lately, there has been a great deal of interest in combining estimates, and especially combining neural network outputs. Combining has a LONG history, with seminal ideas contained in Selfridge's Pandemonium (1958) and Nilsson's book on Learning Machines (1965); it is also found in diverse areas (e.g. for at least 20 years in econometrics as "forecast combining").

Recent research on this topic in the neural net/machine learning community largely focuses on (i) WHAT (type of) experts to combine, (ii) HOW to combine them, or (iii) EXPERIMENTALLY showing that combining gives better results.

Another important question is how much benefit (percentage, limits, reliability, ...) combining methods can yield. At least two recent PhD theses (Perrone, Hashem) mathematically address this issue for REGRESSION problems. We have approached this problem for CLASSIFICATION problems by studying the effect of combining on the decision boundaries. The results pinpoint the mechanism by which classification results are improved, and provide limits, including a new way of estimating Bayes' rate.

A preliminary version appears as an invited paper in SPIE Proc. Vol 2492, pp. 573-585, (Orlando Conf, April '95); the full version, currently under journal review, can be retrieved from http://pegasus.ece.utexas.edu:80/~kagan/publications.html

Besides review and analysis, it contains a reference listing that includes most of the papers quoted on this forum in the past week. The title and abstract follow:

THEORETICAL FOUNDATIONS OF LINEAR AND ORDER STATISTICS COMBINERS FOR NEURAL PATTERN CLASSIFIERS
by Kagan Tumer and Joydeep Ghosh

Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and the order statistics combiners introduced in this paper. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. For linear combiners, we show that in the absence of classifier bias, the added classification error is proportional to the boundary variance. In the presence of bias, the error reduction is shown to be less than or equal to the reduction obtained in the absence of bias. For non-linear combiners, we show analytically that the selection of the median, the maximum and in general the $i$th order statistic improves classifier performance.
The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. The combining results can also be used to estimate the Bayesian error rates. Experimental results on several public domain data sets are provided to illustrate the benefits of combining.

------
All comments welcome.
______
Sorry, no hard copies.

Kagan Tumer
Dept. of ECE
The University of Texas, Austin

From stefano at kant.irmkant.rm.cnr.it Mon Jul 3 13:26:20 1995
From: stefano at kant.irmkant.rm.cnr.it (stefano@kant.irmkant.rm.cnr.it)
Date: Mon, 3 Jul 1995 17:26:20 GMT
Subject: two papers on evolutionary robotics
Message-ID: <9507031726.AA14040@kant.irmkant.rm.cnr.it>

Papers available via WWW / FTP:

Keywords: Evolutionary Robotics, Neural Networks, Genetic Algorithms, Autonomous Robots, Noise.

------------------------------------------------------------------------------
EVOLVING NON-TRIVIAL BEHAVIORS ON REAL ROBOTS: AN AUTONOMOUS ROBOT THAT PICKS UP OBJECTS

Stefano Nolfi and Domenico Parisi
Institute of Psychology, National Research Council, Rome, Italy
e-mail: stefano at kant.irmkant.rm.cnr.it
        domenico at kant.irmkant.rm.cnr.it

Abstract
Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear whether this approach is adequate for facing real-life problems. In this paper we show how control systems that perform a non-trivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot.

to appear in: G. Soda (Ed.) Proceedings of the Fourth Congress of the Italian Association for Artificial Intelligence, Firenze, 11-13 October, Springer Verlag.

http://kant.irmkant.rm.cnr.it/public.html
or
ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5)
ftp-file: nolfi.gripper.ps.Z (the file is 0.36 MB)

for the homepage of our research group: http://kant.irmkant.rm.cnr.it/gral.html

-----------------------------------------------------------------------------
EVOLVING MOBILE ROBOTS IN SIMULATED AND REAL ENVIRONMENTS

Orazio Miglino*, Henrik Hautop Lund**, Stefano Nolfi***
*Department of Psychology, University of Palermo, Italy
 e-mail: orazio at caio.irmkant.rm.cnr.it
**Department of Computer Science, Aarhus University, Denmark
 e-mail: henrik at caio.irmkant.rm.cnr.it
***Institute of Psychology, National Research Council, Rome, Italy
 e-mail: stefano at kant.irmkant.rm.cnr.it

Abstract
The problem of the validity of simulation is particularly relevant for methodologies that use machine learning techniques to develop control systems for autonomous robots, such as the Artificial Life approach named Evolutionary Robotics. In fact, despite the fact that it has been demonstrated that training or evolving robots in the real environment is possible, the number of trials needed to test the system discourages the use of physical robots during the training period.
By evolving neural controllers for a Khepera robot in computer simulations and then transferring the obtained agents to the real environment, we will show that: (a) an accurate model of a particular robot-environment dynamics can be built by sampling the real world through the sensors and the actuators of the robot; (b) the performance gap between the behaviors obtained in the simulated and real environments may be significantly reduced by introducing a "conservative" form of noise; (c) if a decrease in performance is observed when the system is transferred to the real environment, successful and robust results can be obtained by continuing the evolutionary process in the real environment for a few generations. Technical Report, Institute of Psychology, C.N.R, Rome. http://kant.irmkant.rm.cnr.it/public.html or ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5) ftp-file : miglino.sim-real.ps.Z (the file is 2.67 MB) for the homepage of our research group: http://kant.irmkant.rm.cnr.it/gral.html ---------------------------------------------------------------------------- Stefano Nolfi Institute of Psychology National Research Council e-mail: stefano at kant.irmkant.rm.cnr.it From dmartine at laas.fr Mon Jul 3 12:07:57 1995 From: dmartine at laas.fr (Dominique Martinez) Date: Mon, 3 Jul 1995 18:07:57 +0200 Subject: two preprints on adaptive quantization Message-ID: <199507031607.AA25508@emilie.laas.fr> The following two preprints on adaptive scalar quantization are available via anonymous ftp. ---------------------------------------------------------------- Generalized Boundary Adaptation Rule for minimizing r-th power law distortion in high resolution quantization. Dominique Martinez and Marc M. Van Hulle (to appear in Neural Networks) Abstract: A new generalized unsupervised competitive learning rule is introduced for adaptive scalar quantization. The rule, called generalized Boundary Adaptation Rule (BAR_r), minimizes r-th power law distortion D_r in the high resolution case. It is shown by simulations that a fast version of BAR_r outperforms generalized Lloyd I in minimizing D_1 (mean absolute error) and D_2 (mean squared error) distortion with substantially fewer iterations. In addition, since BAR_r does not require generalized centroid estimation, as in Lloyd I, it is much simpler to implement. ftp laas.laas.fr directory: pub/m2i/dmartine file: martinez.bar.ps.Z -------------------------------------------------------------------- A robust backward adaptive quantizer Dominique Martinez and Woodward Yang (to appear at NNSP'95) Abstract: This paper describes an adaptive encoder/decoder for efficient quantization of nonstationary signals. The system uses a robust backward adaptive encoding method such that the adaptation of the encoder and decoder is only determined by the transmitted codeword and does not require any additional side information. By incorporating a forgetting parameter, the quantizer is also robust to transmission errors and encoder/decoder mismatches. It is envisioned that this algorithm can find practical application in the design of adaptive codecs (A/D and D/A converters) or as an efficient source coding algorithm for the transmission of digitized speech.
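The backward-adaptation idea in the second abstract can be illustrated with a generic Jayant-style step-size quantizer: the step size is updated from the previously transmitted codeword only, so an identical update at the receiver stays synchronized without side information, and a leakage (forgetting) factor lets the effect of a corrupted codeword die out. The sketch below is purely illustrative Python; the number of levels, the multiplier table and the leakage constant are assumptions, and it is not the algorithm of the Martinez-Yang paper.

import numpy as np

def backward_adaptive_quantize(signal, levels=4, delta0=0.1, leak=0.98):
    # Toy backward-adaptive mid-rise quantizer (Jayant-style).
    # The step size depends only on past codewords, so a decoder running
    # the same update needs no side information; `leak` pulls the step
    # size back toward delta0 so transmission errors are eventually forgotten.
    mult = np.linspace(0.9, 1.6, levels // 2)   # per-magnitude step multipliers (assumed values)
    delta, codes, recon = delta0, [], []
    for x in signal:
        q = int(np.clip(np.floor(x / delta) + levels // 2, 0, levels - 1))
        codes.append(q)
        recon.append((q - levels // 2 + 0.5) * delta)
        mag = int(abs(q - levels // 2 + 0.5) - 0.5)  # codeword magnitude, 0 .. levels//2 - 1
        delta = (delta ** leak) * (delta0 ** (1 - leak)) * mult[mag]
    return codes, np.array(recon)

codes, xhat = backward_adaptive_quantize(np.sin(np.arange(200) / 5.0))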
ftp laas.laas.fr directory: pub/m2i/dmartine file: martinez.forget.ps.Z From edelman at melon.mpik-tueb.mpg.de Tue Jul 4 08:02:04 1995 From: edelman at melon.mpik-tueb.mpg.de (Shimon Edelman) Date: Tue, 4 Jul 95 14:02:04 +0200 Subject: Combining Generalizers In-Reply-To: <199507031441.JAA17316@pine.ece.utexas.edu> (message from Kagan Tumer on Mon, 3 Jul 1995 09:41:52 -0500) Message-ID: <9507041202.AA28896@melon> Kagan Tumer wrote: > Lately, there has been a great deal of interest in combining > estimates, and especially combining neural network outputs. > Combining has a LONG history, with seminal ideas contained > in Selfridge's Pandemonium (1958) and Nilsson's book on Learning > ... > Another important question is, how much benefit (%age, limits, reliability,..) > can combining methods yield. At least two recent PhD theses > (Perrone, Hashem) mathematically address this issue for REGRESSION problems, > > We have approached this problem for CLASSIFICATION problems > by studying the effect of combining on the > decision boundaries. The results pinpoint the mechanism by > which classification results are improved, and provide limits, > including a new way of estimating Bayes' rate. > ... To me, Selfridge's Pandemonium looks more like a concept that launched a thousand variations on the Winner-Take-All motif than a precursor of combining estimates (of course, non-maximum suppression can also be counted as a mode of combining estimates, albeit not a very productive one :-). Nevertheless, it can serve as a basis for building a powerful representation of the subspace of the input space relevant to the task, if the Master Demon outputs the vector of response strengths of each of its subordinates, rather than the identity of the one that happens to shout the loudest(*). I think this is a good example of the importance of REPRESENTATION: the ensemble of responses is a much richer representation than the identity of the strongest response, and is more likely to constitute a better basis for CLASSIFICATION. In the past couple of years, I have been trying to clarify the computational basis of the effectiveness of representation by a bank of (individually poorly tuned) classifiers. Most of the results of this project to date are available via my Web page. (*) Shimon Edelman, "Representation, Similarity, and the Chorus of Prototypes", Minds and Machines 5:45-68 (1995), ftp://eris.wisdom.weizmann.ac.il/pub/mam.ps.Z See also the recent works by Jonathan Baxter, available from Neuroprose (sorry, I don't have an URL handy). -Shimon Shimon Edelman, Applied Math & CS, Weizmann Institute http://www.wisdom.weizmann.ac.il/~edelman/shimon.html Cyber Rights Now: Accept No Compromise From Jonathan_Stein at comverse.com Tue Jul 4 12:14:27 1995 From: Jonathan_Stein at comverse.com (Jonathan_Stein@comverse.com) Date: Tue, 04 Jul 95 11:14:27 EST Subject: Combining classifiers to reduce false alarm rate Message-ID: <9506048048.AA804881841@hub.comverse.com> In a recent thread on the subject of combining classifiers Nathan Intrator and David Wolpert mentioned the importance of the lack of correlation between the classifiers being combined. It is obvious that if the classifiers make the same errors (no matter what their architectures) then nothing can be gained by combining them. In order to make the basic classifiers less correlated, one can train them on different subsets of the training set, but the trade-off is that each will have a smaller training set. This is true for closed set problems. 
For open set problems (e.g., continuous speech and handwriting recognition), for which there are "false alarm" errors in addition to misclassification errors, it may be sufficient to use the same training set with different training algorithms or even merely different starting classifiers. Assuming the training processes succeed, these networks will agree on the training set, and should behave similarly on patterns similar to those in the training set. However, the networks will probably disagree as to patterns unlike those in the training set, since there were no constraints placed on these during the training process. Thus it would seem that by combining the opinions of several networks the false alarm rate may be drastically reduced without significantly reducing the classification rate (perhaps even improving it). Geometrically speaking, for MLP classifiers, the training set imposes restrictions on the placing of each classifier's hyperplanes. The negative examples will naturally fall randomly into domains corresponding to some class, but unless they are sufficiently similar to positive examples, there should be decorrelation between the domains into which they fall. When comparing identifications, the different networks should respond similarly to positive examples, but will tend to disagree regarding the negative ones. This behavior should allow one to differentiate between negatives and positives, thus effectively rejecting false alarms. Over the past four years we have found empirically that the false alarm rate of cursive script and continuous speech systems can be significantly reduced by combining the outputs of several multilayer perceptrons. We have also observed similar effects on artificial benchmark problems, and analytically proved the idea for a solvable toy problem. I can email LaTeX conference papers to interested parties. Jonathan (Yaakov) Stein From tresp at traun.zfe.siemens.de Tue Jul 4 12:40:28 1995 From: tresp at traun.zfe.siemens.de (Volker Tresp) Date: Tue, 4 Jul 1995 18:40:28 +0200 Subject: more combinations of more estimators Message-ID: <9507041640.AA09250@traun.zfe.siemens.de> Our recent NIPS7 paper COMBINING ESTIMATORS USING NON-CONSTANT WEIGHTING FUNCTIONS by Volker Tresp and Michiaki Taniguchi might be of interest to people interested in combining predictors. The basic idea in our (and many related) approaches is to estimate some measure of certainty of a predictor given the input for determining the (input dependent) weight of that predictor. The inverse of the variance of a predictor is suitable: if a predictor is uncertain about its prediction it should obtain a small weight. Another measure can be derived from the input data distribution of the training data which were used to train a given predictor: a predictor should obtain a small weight in regions where it did not see any data during training. The latter idea is closely related to the mixtures of experts. We also indicate how both concepts (i.e., variance-based weighting and density-based weighting) can be combined. Incidentally, mixtures of experts fit nicely into the missing input concept: we conceptually introduce an additional input with as many states as there are experts. Since we never know which expert is the true expert we are faced with a missing input problem during training and recall. The learning rules for systems with missing inputs can be used to derive the learning rules for the mixtures of experts.
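A minimal sketch of the first of these ideas, variance-based weighting, assuming each ensemble member can return both a prediction and an estimate of its own predictive variance at the query point; the helper below is hypothetical and only illustrates the weighting, not the full method of the paper.

import numpy as np

def combine_by_inverse_variance(means, variances):
    # Input-dependent weighting: at a given input x, predictor i supplies a
    # prediction means[i] and an estimated variance variances[i]; predictors
    # that are uncertain at x receive small weights.
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    return float(np.dot(weights, means)), weights

# Example: three predictors queried at some input x.
yhat, w = combine_by_inverse_variance([1.2, 0.9, 2.0], [0.1, 0.5, 4.0])

Density-based weighting would, in the same spirit, replace the precisions above by estimates of the training-input density each member saw near x.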
Volker and Mich The paper can be obtained via: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/tresp.combining.ps.Z From arbib at pollux.usc.edu Tue Jul 4 16:03:28 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Tue, 4 Jul 1995 13:03:28 -0700 Subject: The Handbook of Brain Theory and Neural Networks Message-ID: <199507042003.NAA13280@pollux.usc.edu> In an earlier posting, the following paragraph was garbled: The heart of the book, Part III, is comprised of 266 original articles by leaders in the various fields, arranged alphabetically by title. Parts I and II, written by the editor, are designed to help readers orient themselves to this vast range of material. PART I - BACKGROUND introduces several basic neural models, explains how the present study of Brain Theory and Neural Networks integrates brain theory, artificial intelligence, and cognitive psychology, and provides a tutorial on the concepts essential for understanding neural networks as dynamic, adaptive systems. PART II - ROAD MAPS provides an entree into the many articles of Part III via an introductory "Meta-Map" and twenty-three road maps, each of which provides a tour of all the Part III articles on the chosen theme. **** Here is additional information that has been requested by a number of individuals: The ISBN is 0-262-01148-4 The book can be ordered from The MIT Press: For orders: mitpress-orders at mit.edu For inquiries: mitpress-orders-inq at mit.edu General information and order forms: http://www-mitpress.mit.edu PS. The list price is $150 till September 30, then $175. ***** Michael A. Arbib Center for Neural Engineering USC Los Angeles CA 90089-2520 Tel: (213) 740-9220 Fax: (213) 740-5687 arbib at pollux.usc.edu http://www.hbp.usc.edu:8376/HBP/Home.html From kagan at pine.ece.utexas.edu Wed Jul 5 11:12:01 1995 From: kagan at pine.ece.utexas.edu (Kagan Tumer) Date: Wed, 5 Jul 1995 10:12:01 -0500 Subject: Combining Generalizers (correction) Message-ID: <199507051512.KAA23399@pine.ece.utexas.edu> There seems to have been a problem with the compressed postscript file I put on the www. I have regenerated the file and checked to make sure it was retrievable in its entirety. I apologize for any inconvenience this may have caused. Sincerely, Kagan Tumer PS: The original post appears in full in my earlier "Combining Generalizers" message above. From guzman at gluttony.cs.umass.edu Wed Jul 5 16:16:28 1995 From: guzman at gluttony.cs.umass.edu (guzman@gluttony.cs.umass.edu) Date: Wed, 05 Jul 95 16:16:28 -0400 Subject: Reinforcement Learning Paper Message-ID: <9507052016.AA06081@gluttony.cs.umass.edu> Reinforcement Learning in Partially Markovian Decision Processes submitted to NIPS*95 (8 pages) FTP-HOST: ftp.cs.umass.edu FTP-FILENAME: /pub/anw/pub/guzman/guzman-lara.PMDP.ps.Z ABSTRACT We define a subclass of Partially Observable Markovian Decision Processes (POMDPs), which we call {\em Partially Markovian Decision Processes} (PMDPs), and propose a novel approach to solve this kind of problem. In contrast to traditional methods for POMDPs, our method does not involve estimation of the state of an underlying Markovian problem; its goal is to find an optimal observation-based policy (an action-selection rule that uses only the information immediately available to the agent). We show that solving this non-Markovian problem is equivalent to solving multiple Markovian Decision Processes (MDPs). We argue that this approach opens new possibilities for distributed systems, and we support this claim with some preliminary results where the use of an observation-based policy yielded a good solution in a complex stochastic environment.
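For readers unfamiliar with the terminology, an observation-based policy simply maps the agent's current observation directly to an action, with no attempt to infer a hidden Markovian state. The sketch below is a generic tabular Q-learning loop indexed by observations, included only to make the term concrete; the environment interface (reset, step, actions) is an assumption, and this is not the algorithm proposed in the paper.

import random
from collections import defaultdict

def learn_observation_policy(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    # Tabular Q-learning over OBSERVATIONS rather than hidden states.
    # `env` is assumed to expose reset() -> obs, step(a) -> (obs, reward, done),
    # and a list of discrete actions env.actions.
    Q = defaultdict(float)
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            if random.random() < eps:
                act = random.choice(env.actions)
            else:
                act = max(env.actions, key=lambda a: Q[(obs, a)])
            nxt, reward, done = env.step(act)
            best_next = 0.0 if done else max(Q[(nxt, a)] for a in env.actions)
            Q[(obs, act)] += alpha * (reward + gamma * best_next - Q[(obs, act)])
            obs = nxt
    # The resulting greedy policy uses only the immediate observation.
    return lambda o: max(env.actions, key=lambda a: Q[(o, a)])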
Sorry, no hard copies Sergio Guzman-Lara Computer Science Department LGRC, University of Massachusetts Amherst, MA 01003 guzman at cs.umass.edu From Gerhard.Paass at gmd.de Wed Jul 5 08:43:28 1995 From: Gerhard.Paass at gmd.de (Gerhard Paass) Date: Wed, 5 Jul 1995 14:43:28 +0200 Subject: Autumn School Connectionism and Neural Networks Message-ID: <199507051243.AA09680@sein.gmd.de> CALL FOR PARTICIPATION ================================================================= = = = H e K o N N 9 5 = = = Autumn School in C o n n e c t i o n i s m and N e u r a l N e t w o r k s October 2-6, 1995 Muenster, Germany Conference Language: German ---------------------------------------------------------------- A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following addresses: WWW: http://borneo.gmd.de/~hekonn anonymous FTP: ftp.gmd.de directory: Learning/neural/hekonn95 = = = O V E R V I E W = = = Artificial neural networks (ANN's) have in recent years been discussed in many diverse areas, ranging from the modelling of learning in the cortex to the control of industrial processes. The goal of the Autumn School in Connectionism and Neural Networks is to give a comprehensive introduction to conectionism and artificial neural networks and to give an overview of the current state of the art. Courses will be offered in five thematic tracks. (The conference language is German.) The FOUNDATION track will introduce basic concepts (A. Zell, Univ. Stuttgart), as well as present lectures on information processing in biological neural systems (G. Palm, Univ. Ulm), on the relationship between ANN's and fuzzy logic (R. Kruse, Univ. Braunschweig), and on genetic algorithms (S. Vogel, Univ. Cologne). The THEORY track is devoted to the properties of ANN's as abstract learning algorithms. Courses are offered on approximation properties of ANN's (K. Hornik, Univ. Vienna), the algorithmic complexity of learning procedures (M. Schmitt, TU Graz), prediction uncertainty and model selection (G. Paass, GMD St. Augustin), and "neural" solutions of optimization problems (J. Buhmann, Univ. Bonn). This year, special emphasis will be put on APPLICATIONS of ANN's to real-world problems. This track covers courses on vision (H.Bischof, TU Vienna), character recognition (J. Schuermann, Daimler Benz Ulm), speech recognition (R. Rojas, FU Berlin), industrial applications (B. Schuermann, Siemens Munich), robotics (K.Moeller, Univ. Bonn), and hardware for ANN's (U. Rueckert, TU Hamburg-Harburg). In the track on SYMBOLIC CONNECTIONISM, there will be courses on: knowledge processing with ANN's (F. Kurfess, New Jersey IT), hybrid systems in natural language processing (S. Wermter, Univ. Hamburg), connectionist aspects of natural language processing (U. Schade, Univ. Bielefeld), and procedures for extracting rules from ANN's (J. Diederich, QUT Brisbane). In the section on COGNITIVE MODELLING, we have courses on representation and cognitive models (G. Dorffner, Univ. Vienna), aspects of cognitive psychology (R. Mangold-Allwinn, Univ. Saarbruecken), self-organizing ANN's in the visual system (C. v.d. Malsburg, Univ. Bochum), and information processing in the visual cortex (J.L. v. Hemmen, TU Munich). In addition, there will be courses on PROGRAMMING and SIMULATORS. Participants will have the opportunity to work with the SESAME system (J. Kindermann, GMD St.Augustin) and the SNNS simulator (A.Zell, Univ. Stuttgart). 
From hd at harris.monmouth.edu Wed Jul 5 10:35:19 1995 From: hd at harris.monmouth.edu (Drucker Harris) Date: Wed, 5 Jul 95 10:35:19 EDT Subject: combining classifiers Message-ID: <9507051435.AA00592@harris.monmouth.edu> Re: combining classifiers For those interested in combining classifiers, I give references to the boosting literature below. Boosting gives an explicit method of building multiple classifiers in a sequential fashion, rather than building the classifiers first and then determining how to combine them. The seminal work on boosting, which shows that it is possible to combine many classifiers, each with error rate slightly less than 1/2 (termed weak learners), to give a combined classifier with very good performance (termed a strong learner): Robert Schapire, "The strength of weak learnability", Machine Learning, 5(2), pp. 197-227 Applications to OCR may be seen in H. Drucker, C. Cortes, L. Jackel, Y. LeCun, and V. Vapnik, Boosting and Other Ensemble Methods, Neural Computation, Vol 6, pp. 1289-1301 Comparison of boosting techniques to many others in OCR is given in: L.D. Jackel et al., "Comparison of Classifier Methods: A Case Study in Handwritten Digit Recognition", 1994 International Conference on Pattern Recognition, Jerusalem, 1994. In practical applications, it is helpful to have a very large source of training data. However, there is a new version of boosting which does not require this large data set: Y. Freund and R.E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting", Proceedings of the Second European Conference on Computational Learning Theory, March 1995. Harris Drucker From phkywong at uxmail.ust.hk Wed Jul 5 05:56:25 1995 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Wed, 5 Jul 1995 17:56:25 +0800 Subject: Paper available on Hybrid Classification Message-ID: <95Jul5.175637+0800_hkt.19012-4+102@uxmail.ust.hk> FTP-host: physics.ust.hk FTP-file: pub/kymwong/hybrid.ps.gz The following paper, presented at IWANNT*95, is now available via anonymous FTP. (8 pages long) ============================================================================ A Hybrid Expert System for Error Message Classification H.C. Lau, K.Y. Szeto, K.Y.M. Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phhclau at usthk.ust.hk, phkywong at usthk.ust.hk and D.Y. Yeung Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. ABSTRACT A hybrid intelligent classifier is built for pattern classification. It consists of a classification and regression tree (CART), a genetic algorithm (GA) and a neural network (NN). CART extracts features of the patterns by setting up decision rules. Rule improvement by GA is explored. The rules act as a pre-processing layer of NN, a multi-class neural classifier, through which the most probable class is determined. A realistic test of classifying error messages generated from a telephone exchange shows that the CART-NN hybrid system has performance comparable with a Bayesian neural network.
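The pipeline in the abstract above (decision rules extracted by a tree, acting as a pre-processing layer for a neural classifier) can be loosely illustrated as follows. This is a modern convenience sketch using scikit-learn on a stand-in dataset, with the GA rule-improvement step omitted; it is not the authors' system.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# "Rule" layer: each leaf of a shallow tree corresponds to one conjunctive rule.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr)
leaves = np.unique(tree.apply(Xtr))
rule_features = lambda Z: (tree.apply(Z)[:, None] == leaves[None, :]).astype(float)

# Neural layer: a small MLP trained on the rule activations.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(rule_features(Xtr), ytr)
print("hybrid accuracy:", net.score(rule_features(Xte), yte))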
============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get hybrid.ps.gz ftp> quit unix> gunzip hybrid.ps.gz unix> lpr hybrid.ps From Frank.Smieja at gmd.de Thu Jul 6 09:02:35 1995 From: Frank.Smieja at gmd.de (Frank Smieja) Date: Thu, 6 Jul 1995 15:02:35 +0200 Subject: TR in neuroprose Message-ID: <199507061302.AA07563@shetland.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/beyer.mappings.ps.Z The file beyer.mappings.ps.Z is now available for copying from the Neuroprose repository: On Data-driven Derivation of Discrete Mappings between Finite Spaces (8 pages) ABSTRACT: The guessing of a function behind a discrete mapping between finite spaces using only the information provided by positive examples is shown to be in principle optimally handled by a lookup table combined with a random process. We explore the implications of this result for the construction of intelligent approximators. The paper is also available (as are the rest of our REFLEX papers) from the WWW site http://borneo.gmd.de/AS/janus/publi/publi.html or look at our JANUS/REFLEX home page http://borneo.gmd.de/AS/janus/pages/janus.html -- Frank Smieja ----------------------------------------------------------------------------- Dr Frank Smieja Institute for System Design Technology (I5) Adaptive Systems Group (AS) German National Research Center for Information Technology (GMD) Tel: +49 2241-142214 GMD-SET.AS, Schloss Birlinghoven email: smieja at gmd.de 53754 St Augustin Germany WWW: http://borneo.gmd.de:80/~smieja/ ----------------------------------------------------------------------------- From ormoneit at informatik.tu-muenchen.de Thu Jul 6 12:23:10 1995 From: ormoneit at informatik.tu-muenchen.de (Dirk Ormoneit) Date: Thu, 6 Jul 1995 18:23:10 +0200 Subject: Combining Gaussian Mixture Density Estimates Message-ID: <95Jul6.182316+0200_met_dst.116230+308@papa.informatik.tu-muenchen.de> In a recent message, Anders Krogh mentioned the possibility to use averaging based on slightly different (e.g. resampled) training sets as a method of regularization. In our latest work, we found that this regularizing effect of network averaging may be advantageously exploited for Gaussian mixture density estimation. Regularization is particularly important in this case, because the overfitting problem is even more severe than, for example, in the regression case. In our experiments we found that Leo Breiman's *bagging* (averaging of estimators which were derived from resampled training sets) yields a performance which is comparable and sometimes even superior to a Bayesian regularization approach. As pointed out by Breiman, a basic precondition for obtaining an improvement with *bagging* is that the individual estimators are relatively unstable. This is particularly the case for Gaussian mixture estimates. The title of our paper is Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging by Dirk Ormoneit and Volker Tresp ABSTRACT We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method consists of defining a Bayesian prior distribution on the parameter space. 
We derive EM (Expectation Maximization) update rules which maximize the a posterior parameter probability in contrast to the usual EM rules for Gaussian mixtures which maximize the likelihood function. In the second approach we apply ensemble averaging to density estimation. This includes Breiman's "bagging", which has recently been found to produce impressive results for classification networks. To our knowledge this is the first time that ensemble averaging is applied to improve density estimation. A version of this paper is submitted to NIPS'95. A technical report is available under the name 'fki-205-95.ps.gz' on the FTP-site flop.informatik.tu-muenchen.de To get it, execute the following steps: % ftp flop.informatik.tu-muenchen.de Name (flop.informatik.tu-muenchen.de:ormoneit): anonymous Password: (your email adress) ftp> cd pub/fki ftp> binary ftp> get fki-205-95.ps.gz ftp> bye % gunzip fki-205-95.ps.gz Dirk From ingber at alumni.caltech.edu Thu Jul 6 14:11:50 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: 6 Jul 1995 18:11:50 GMT Subject: paper on statistical constraints on 40 Hz models of short-term memory Message-ID: <3th916$hq7@gap.cco.caltech.edu> The following paper, to be published in Physical Review E, and some related reprints are available via anonymous FTP or WWW: Statistical mechanics of neocortical interactions: Constraints on 40 Hz models of short-term memory Lester Ingber Lester Ingber Research, P.O. Box 857, McLean, Virginia 22101 ingber at alumni.caltech.edu Calculations presented in L. Ingber and P.L. Nunez, Phys. Rev. E 51, 5074 (1995) detailed the evolution of short-term memory in the neocortex, supporting the empirical 7+-2 rule of constraints on the capacity of neocortical processing. These results are given further support when other recent models of 40 Hz subcycles of low-frequency oscillations are considered. PACS Nos.: 87.10.+e, 05.40.+j, 02.50.-r, 02.70.-c ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] get smni95_stm40hz.ps.Z [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://www.alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. ======================================================================== Lester -- /* RESEARCH ingber at alumni.caltech.edu * * INGBER ftp.alumni.caltech.edu:/pub/ingber * * LESTER http://www.alumni.caltech.edu/~ingber/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From kruschke at croton.psych.indiana.edu Thu Jul 6 16:17:34 1995 From: kruschke at croton.psych.indiana.edu (John Kruschke) Date: Thu, 6 Jul 1995 15:17:34 -0500 (EST) Subject: TR announcement: Extensions to the delta rule of associative learning Message-ID: <9507062017.AA07049@croton.psych.indiana.edu> The following report is now available: Extensions to the Delta Rule for Associative Learning John K. Kruschke and Amy L. 
Bradley The delta rule of associative learning has recently been used in several models of human category learning, and applied to categories with different relative frequencies, or base rates. Previous research has emphasized predictions of the delta rule after extensive learning. Our first experiment measures the relative acquisition rates of categories with different base rates, and the delta rule significantly and systematically deviates from the human data. We suggest that two additional mechanisms are involved, namely, short-term memory and strategic guessing. Two additional experiments highlight the effects of these mechanisms. The mechanisms are formalized and combined with the delta rule, and provide good fits to the data from all three experiments. The easiest way to get it is from the research section of my Web page (see URL below). -- John K. Kruschke e-mail: kruschke at indiana.edu Dept. of Psychology office: (812) 855-3192 Indiana University lab: (812) 855-9613 Bloomington, IN 47405-1301 USA fax: (812) 855-4691 URL= http://silver.ucs.indiana.edu/~kruschke/home.html From bill at psy.uq.oz.au Thu Jul 6 19:48:41 1995 From: bill at psy.uq.oz.au (Bill Wilson) Date: Fri, 07 Jul 1995 09:48:41 +1000 Subject: papers available by ftp: Recurrent net architectures Message-ID: <199507062348.JAA03259@psych.psy.uq.oz.au> The files wilson.recurrent.ps.Z and wilson.stability.ps.Z are now available for copying from the Neuroprose repository: File: wilson.recurrent.ps (4 pages) Title: A Comparison of Architectural Alternatives for Recurrent Networks Author: William H. Wilson Abstract: This paper describes a class of recurrent neural networks related to Elman networks. The networks used herein differ from standard Elman networks in that they may have more than one state vector. Such networks have an explicit representation of the hidden unit activations from several steps back. In principle, a single-state-vector network is capable of learning any sequential task that a multi-state-vector network can learn. This paper describes experiments which show that, in practice, and for the learning task used, a multi-state-vector network can learn a task faster and better than a single-state-vector network. The task used involved learning the graphotactic structure of a sample of about 400 English words. The training method and architecture used somewhat resemble backpropagation through time, but differ in that multiple state vectors persist in the trained network, and that each state vector is connected to the hidden layer by independent sets of weights. ------------------------------------- File: wilson.stability.ps (4 pages) Title: Stability of Learning in Classes of Recurrent and Feedforward Networks Author: William H. Wilson Abstract: This paper concerns a class of recurrent neural networks related to Elman networks (simple recurrent networks) and Jordan networks and a class of feedforward networks architecturally similar to Waibel's TDNNs. The recurrent nets used herein, unlike standard Elman/Jordan networks, may have more than one state vector. It is known that such multi-state Elman networks have better learning performance on certain tasks than standard Elman networks of similar weight complexity. The task used involves learning the graphotactic structure of a sample of about 400 English words. 
Learning performance was tested using regimes in which the state vectors are, or are not, zeroed between words: the former results in larger minimum total error, but without the large oscillations in total error observed when the state vectors are not periodically zeroed. Learning performance comparisons of the three classes of network favour the feedforward nets. Bill Wilson Artificial Intelligence Laboratory School of Computer Science and Engineering University of New South Wales Sydney 2052 Australia Email: billw at cse.unsw.edu.au From sontag at control.rutgers.edu Fri Jul 7 10:27:06 1995 From: sontag at control.rutgers.edu (Eduardo Sontag) Date: Fri, 7 Jul 1995 10:27:06 -0400 Subject: Three *** UNRELATED *** TR's available Message-ID: <199507071427.KAA06870@control.rutgers.edu> (NOTE: The following three TR's are NOT in any way related to each other, but announcements are being bundled into one, as requested by list moderator.) ****************************************************************************** Subject: TR (keywords: VC dimension lower bounds, feedforward neural nets) The following preprint is available by FTP: Neural networks with quadratic VC dimension Pascal Koiran and Eduardo Sontag Abstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed. Retrieval: FTP anonymous to: math.rutgers.edu cd pub/sontag file: quadratic-vc.ps.z (compressed postscript file; for further information, see the files README and CONTENTS in that directory) (Note: also available as NeuroCOLT Technical Report NC-TR-95-044) **** NO HARDCOPY AVAILABLE **** ****************************************************************************** Subject: TR (keywords: VC dimension, learning, dynamical systems, recurrence) The following preprint is available by FTP: Sample complexity for learning recurrent perceptron mappings Bhaskar Dasgupta and Eduardo Sontag Abstract: This paper deals with the learning-theoretic questions involving the identification of linear dynamical systems (in the sense of control theory) and especially with the binary-classification version of these, "recurrent perceptron classifiers". These latter classifiers generalize the classical perceptron model. They take into account those correlations and dependences among input coordinates which arise from linear digital filtering. The paper provides tight theoretical bounds on sample complexity associated to the fitting of recurrent perceptrons to experimental data. The results are based on VC-dimension theory and PAC learning, as well as on recent computational complexity work in elimination methods. 
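As one concrete (and deliberately simplified) reading of the second abstract, a recurrent perceptron can be thought of as a classical perceptron whose decision is taken on the output of a linear digital filter applied to the input sequence, so that the label depends on temporal correlations among the inputs. The coefficients and the exact formulation below are illustrative assumptions, not the definition used in the report.

import numpy as np

def recurrent_perceptron_label(u, b, a, theta=0.0):
    # Run a linear recurrence (an IIR digital filter) over the input
    # sequence u and threshold the final output, perceptron-style:
    #   y[t] = sum_k b[k]*u[t-k] - sum_{k>=1} a[k]*y[t-k]
    y = np.zeros(len(u))
    for t in range(len(u)):
        ff = sum(b[k] * u[t - k] for k in range(len(b)) if t - k >= 0)
        fb = sum(a[k] * y[t - k] for k in range(1, len(a)) if t - k >= 0)
        y[t] = ff - fb
    return 1 if y[-1] > theta else -1

label = recurrent_perceptron_label(np.sin(np.arange(30) / 3.0), b=[0.5, 0.3], a=[1.0, -0.7])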
Retrieval: FTP anonymous to: math.rutgers.edu cd pub/sontag file: vcdim-signlinear.ps.z (compressed postscript file; for further information, see the files README and CONTENTS in that directory) Note: The paper is available also as DIMACS Technical Report 95-17, and can be obtained by ftp to dimacs.rutgers.edu (IP address = 128.6.75.16), login anonymous, in dir "pub/dimacs/TechnicalReports/TechReports" **** NO HARDCOPY AVAILABLE **** ****************************************************************************** Subject: TR (keywords: feedforward networks, local minima, critical points) The following preprint is available by FTP: Critical points for least-squares problems involving certain analytic functions, with applications to sigmoidal nets Eduardo D. Sontag Abstract: This paper deals with nonlinear least-squares problems involving the fitting to data of parameterized analytic functions. For generic regression data, a general result establishes the countability, and under stronger assumptions finiteness, of the set of functions giving rise to critical points of the quadratic loss function. In the special case of what are usually called ``single-hidden layer neural networks,'' which are built upon the standard sigmoidal activation tanh(x) (or equivalently 1/(1+e^{-x})), a rough upper bound for this cardinality is provided as well. Retrieval: FTP anonymous to: math.rutgers.edu cd pub/sontag file: crit-sigmoid.ps.z (compressed postscript file; for further information, see the files README and CONTENTS in that directory) **** NO HARDCOPY AVAILABLE **** ****************************************************************************** Eduardo D. Sontag URL for Web HomePage: http://www.math.rutgers.edu/~sontag/ ****************************************************************************** From dsilver at csd.uwo.ca Fri Jul 7 09:35:45 1995 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Fri, 7 Jul 95 9:35:45 EDT Subject: Summary of responses on data transformation tools Message-ID: <9507071335.AA08942@church.ai.csd.uwo.ca.csd.uwo.ca> Some time ago (May/95), I request additional information on data transformation tools: > Many of us spend hours preparing data files for acceptance by > machine learning systems. Typically, I use awk or C code to transform > ASCII records into numeric or symbolic attribute tuples for a neural net, > inductive decision tree, etc. Before re-inventing the wheel, has anyone > developed a general tool for perfoming some of the more common > transformations. Any related suggestions would be of great use to many > on the network. Below is a summary of the most informative responses I received. Sorry for the delay. . Danny -- ========================================================================= = Daniel L. Silver University of Western Ontario, London, Canada = = N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b = = dsilver at csd.uwo.ca H: (519)473-6168 O: (519)679-2111 (ext.6903) = ========================================================================= From: A. Famili I have done quite a bit of work in this area, on data preparation and data pre-processing, and also rule post-processing in induction. As part of our induction system that we have built, we have some data pre-processing capabilities added. I am also organizing and will be chairing a panel on the "Role of data pre-processing in Intelligent Data Analysis" in IDA-95 Symposium. (Intelligent Data Analysis Symposium to be held in Germany in Aug. 1995). 
The most common tool in the market is NeuralWare's Data Sculptor (I have only seen the brochure and a demo). It is claimed to be a general purpose tool. Others are described in a short report that I send you below. A. Famili, Ph.D. Senior Research Scientist Knowledge Systems Lab. IIT-NRC, Bldg. M-50 Montreal Rd. Ottawa, Ont. K1A 0R6 Canada Phone: (613) 993-8554 Fax : (613) 952-7151 email: famili at ai.iit.nrc.ca --------------------------- A. Famili Knowledge Systems Laboratory Institute for Information Technology National Research Council Canada
1.0 Introduction This report outlines a comparison that was made for three commercial data pre-processing tools that are available in the market. The purpose of the study was to identify useful features that exist in these tools that could be helpful in intelligent filtering and data analysis of the IDS project. The comparison study does not involve use and evaluation of any of the tools on real data. Two of these tools (LabView and OS/2 Visualizer) are available in the KSL.
2.0 Data Sculptor Developed by NeuralWare, the rationale was that in neural network data analysis applications, 80 percent of the time is spent on data preprocessing. This tool was developed to handle any type of transformation or manipulation of data before the data are analysed. The graphics capabilities include: histograms, bar charts, line, pie and scatter plots. There are several stat. functions to be used on the data. There are also options to create new variables (attribute vectors) based on transformation of other variables. Following are some important specifications, as explained in the fact sheets and demo version:
- Input Data Formats: DBase, Excel, Paradox, Fixed Field, ASCII, and Binary.
- Output Data Formats: Fixed Field, Delimited ASCII and Binary
- General Data Transformations: Sorting, File Merge, Field Comparison, Sieve and Duplicate and Neighborhood.
- Math. Transformations: Arithmetic, Trigonometric, and Exponential.
- Special Transformations: Encodings of the type One-of-N, Fuzzy One-of-N, R-of-N, Analog R-of-N, Thermometer, Circular, and Inverse Thermometer, Normalizing Functions, Fast Fourier Transformations and some more.
- Stat. Functions: Count, Sum, Mean, Chi-square, Min, Max, STD, Variance, Correlation and some more.
- Graph Formats: Bar chart, Histogram, Scatter Plot, Pie, etc.
- Spreadsheet: Data Viewing, and Search Function.
A data pre-processing application can be built by using (or defining) icons and assembling the entire application in the Data Sculptor environment, which is quite easy to use. There are a number of demo applications that came with the demo diskettes. An on-line hypertext help facility is also available. Data Sculptor runs under Windows. Information for Data Sculptor comes from the literature and two demo diskettes.
3.0 LabView and Data Engine LabView (Laboratory Virtual Instrument Engineering Workbench) is a product developed by National Instruments. It is however available with Data Engine, a data analysis product developed by MIT in Germany. LabView, a high level programming environment, has been developed to simplify scientific computation, process-control analysis, and test and measurement applications. It is far more sophisticated than other data pre-processing systems. Unlike other programming systems that are text based, LabView is graphics based and lets users create data viewing and simulation programs in block diagram forms.
LabView also contains application specific libraries for data acquisition, data analysis, data presentation, and data storage. It even comes with it's own GUI builder facilities (called front panel) so that the application is monitored and run to simulate the panel of a physical instrument. There are also a number of LabView companion products that have been developed by users or suppliers of this product. 4.0 OS/2 Visualizer The Visualizer comes with OS2 and is installed on the PC's of the IDS project. It's main function is support for data visualization, and consists of three modules: (i) Charts, (ii) Statistics, and (iii) Query. The visualizer Charts provides support for a variety of chart making requirements. Examples are: line, pie, bar, scatter, surface, mixed, etc. The visualizer Statistics provides support in 57 statistical methods in seven categories of: (i) Exploratory methods, (ii) Distributions, (iii) Relations, (iv) Quality control, (v) Model fitting, (vi) Analysis of variance, and (vii) Tests. Each of the above categories consists of several features that are useful for statistical analysis of data. The visualizer Query provides support for a number of query tasks to be performed on the data. These include means to access and work with the data that is currently used, creating and storing new tables in the database, combining data from many tables, and many more. It is not evident, from the documentation, whether or not we can perform some form of data transformation or preprocessing on the queried data so that a preprocessed data file is created for further analysis. ================================================================ From: Matthijs Kadijk I personnaly think that AWK is the best most general tool fit for those purposes, but for those who want something less general but easy to use I suggest to use dm, (a data manipulater) which is part of Gary Perlman UNIX|STAT package. It should be no problem to find it on the net. I also use the unix|stat programs to analyse the results of simulations with my NN programs. I'll attatch the dm tutorial to this mail (DLS: not include in this summary). Matthijs Kadijk _____________________ ______________________________ / Matthijs Kadijk \ / email: kkm at bouw.tno.nl \ | TNO-Bouw, Postbus 49 | www: http://www.bouw.tno.nl \___________________ | NL-2600 AA Delft | tel: +31 - 15 - 842 195 /\ fax: +31 15 843975 \ \_____________________/ \ ________________________/ \_ _____________________/ ===================================================================== From: stefanos at vuse.vanderbilt.edu (Stefanos Manganaris) I have been using this code to read, into LISP, UCI and C4.5 data files. It will enable you to manipulate the records in LISP. All you need to do is define once an appropriate "make-instance" function for each of the learning systems you use. Stef. -- Stefanos Manganaris. Computer Science Department, Vanderbilt University, Nashville, Tennessee. http://www.vuse.vanderbilt.edu/~stefanos/stefanos.html -------------------------------- cut here ------------------------------------ #|============================================================================ READ IN LISP UCI and C4.5 DATA FILES $Id: read-data.cl,v 1.1 1995/04/12 04:13:51 stefanos Exp $ Last Edited: Apr 11/95 23:08 CDT by stefanos at worf (Stefanos Manganaris) Written by Stefanos Manganaris, Computer Sciences, Vanderbilt University. 
stefanos at vuse.vanderbilt.edu http://www.vuse.vanderbilt.edu/~stefanos/stefanos.html ============================================================================|# (in-package "USER") (defvar *eol* nil) (defun make-simple-instance (class attributes) "A simple example for read-data-file's make-instance-f argument. Change this function to return instances in whatever format your learner expects." (cons class attributes)) ;; Usage: ;; (read-data-file "file.data" #'make-simple-instance) #|____________________________________________________________Sat Feb 4/95____ Function - READ-DATA-FILE Reads the UCI or C4.5 FILE and returns a list of instances. Each instance is created by supplying its class and attribute values to MAKE-INSTANCE-F. Note: * The list of instances is returned in reverse order. * Spaces are not allowed as part of class names or values. * Make sure there is a new line before EOF. Inputs -> file make-instance-f Returns -> list of instances History -> Sat Feb 4/95: Created _______________________________________________________________________Stef__|# (defun read-data-file (file make-instance-f) "Args: file make-instance-f Reads the UCI or C4.5 FILE and returns a list of instances." (let ((instances nil) (last-token nil)) (multiple-value-bind (f-comma commap) (get-macro-character #\,) (set-macro-character #\, #'comma-reader nil) (set-macro-character #\newline #'newline-reader nil) (with-open-file (stream file :direction :input) (loop (setq *eol* nil) (setq last-token (do ((token (read stream t) (read stream t)) (attribute-values nil)) (*eol* (if last-token (push (funcall make-instance-f last-token attribute-values) instances)) (return token)) (if last-token (setq attribute-values (nconc attribute-values (cons last-token nil)))) (setq last-token token))) (if (null last-token) (return)))) (set-macro-character #\, f-comma commap) (set-syntax-from-char #\newline #\newline)) (return-from read-data-file instances))) #|____________________________________________________________Sat Feb 4/95____ Function - COMMA-READER Special reader function for comma characters in UCI and C4.5 files. Inputs -> stream char Returns -> History -> Sat Feb 4/95: Created _______________________________________________________________________Stef__|# (defun comma-reader (stream char) "Args: stream char Special reader function for comma characters in UCI and C4.5 files." (declare (ignore stream char)) (values)) #|____________________________________________________________Sat Feb 4/95____ Function - NEWLINE-READER Special reader function for newline characters in UCI and C4.5 files. Inputs -> stream char Returns -> History -> Sat Feb 4/95: Created _______________________________________________________________________Stef__|# (defun newline-reader (stream char) "Args: stream char Special reader function for newline characters in UCI and C4.5 files." (declare (ignore char)) (setq *eol* t) (read stream nil nil t)) ;; EOF ======================================================================= From ericwan at choosh.eeap.ogi.edu Fri Jul 7 04:34:28 1995 From: ericwan at choosh.eeap.ogi.edu (Eric A. Wan) Date: Fri, 7 Jul 1995 16:34:28 +0800 Subject: OGI Research Assitantship Positions Message-ID: <9507072334.AA28903@choosh.eeap.ogi.edu> ------------------------------------------------------------ Ph.D. 
Research Assistantship Positions Available ***************************************************************** * * * OREGON GRADUATE INSTITUTE * * -of- * * SCIENCE & TECHNOLOGY * * * * Center for Information Technologies (CIT) * * * ***************************************************************** ***************************************************************** The Oregon Graduate Institute of Science and Technology (OGI) has several openings for outstanding Ph.D. students in its Center for Information Technologies. The center includes faculty from the department of Computer Science and Engineering and the department of Electrical Engineering and Applied Physics. Center members perform research in a broad range of information processing areas including nonlinear and adaptive signal processing, statistical computation, decision analysis, speech, images, prediction, control, economics, and finance. We are specifically looking for potential Ph.D. students who hold masters degrees in Computer Science or Electrical Engineering with knowledge and/or interest adaptive signal processing, statistics, speech and image processing, neural networks and machine learning. We seek qualified candidates to join research projects in Fall / Winter of 1995. Special funding opportunities are available for U.S. citizens and U.S. nationals, although foreign nationals will also be considered. Research areas include neural networks, adaptive signal processing, simulation of human auditory and visual perception, speech and image processing, time-series prediction, learning theory, algorithms, and architectures. Specific projects include speech enhancement in cellular communication, sunspot and solar flux forecasting, speech and image representation, novel techniques for regression, and economic and financial applications. Please send resumes or inquiries to: Todd K. Leen John Moody Eric A. Wan tleen at cse.ogi.edu moody at cse.ogi.edu ericwan at eeap.ogi.edu (503) 690-1160 (503) 690-1554 (503) 690-1164 Hynek Hermansky Misha Pavel hynek at eeap.ogi.edu pavel at eeap.ogi.edu (503)690-1136 (503)690-1155 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ General Information OGI is a young, but rapidly growing, private research institute located in the Portland area. OGI offers Masters and PhD programs in Computer Science and Engineering, Applied Physics, Electrical Engineering, Biology, Chemistry, Materials Science and Engineering, and Environmental Science and Engineering. For additional general information, contact: Office of Admissions and Records Oregon Graduate Institute PO Box 91000 Phone: (503) 690-1027 Portland, OR 97291 Email: registrar at admin.ogi.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Research Interests of Faculty and Postdocs in CIT Hynek Hermansky (Associate Professor, EEAP and CSE); Hynek Hermansky is interested in speech processing by humans and machines with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing. Todd K. Leen (Associate Professor, CSE and EEAP): Todd Leen's research spans theory of neural network models, architecture and algorithm design and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on Algorithm design is focused on fast algorithms for non-linear data modeling. 
John Moody (Associate Professor, CSE and EEAP): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics and finance. Misha Pavel (Associate Professor, CSE and EEAP): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human- computer interfaces. Eric A. Wan (Assistant Professor, EEAP and CSE): Eric Wan's research activities include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, adaptive control, active noise cancellation, and telecommunications. Andy Fraser (Associate Professor, Portland State University) Andrew Fraser's research interests include non-linear dynamics, information theory, signal modelling, prediction and detection. He is particularly interested in the application of modelling and prediction to signal encoding and detection problems. Holly Jimison (Assistant Professor, Oregon Health Sciences University) Dr. Jimison is the Director of the Informed Patient Decision Group at the Biomedical Information Communication Center at OHSU. She is interested in multimedia systems for patient-physician communication and in application of decision theory to shared medical decision making. Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems. Thorsteinn S. Rognvaldsson (Post-Doctoral Research Associate, CSE): Thorsteinn Rognvaldsson studies both applications and theory of neural networks and other non-linear methods for function fitting and classification. He is currently working on methods for choosing regularization parameters and also comparing the performance of neural networks with the performance of other techniques for time series prediction. Lizhong Wu (Senior Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance. From gcv at ukc.ac.uk Mon Jul 10 11:42:54 1995 From: gcv at ukc.ac.uk (gcv@ukc.ac.uk) Date: Mon, 10 Jul 95 11:42:54 BST Subject: II Brazilian Symposium on Neural Networks Message-ID: II Brazilian Symposium on Neural Networks ***************************************** October 18-20, 1995 Second call for papers Sponsored by the Brazilian Computer Science Society (SBC) You are cordially invited to attend the II Brazilian Symposium on Neural Networks (SBRN) which will be held at the University of Sao Paulo, campus of Sao Carlos, Sao Paulo. Sao Carlos with its 160.000 population is a pleasant university city known by its climate and high technology companies. Scientific papers will be analyzed by the program committee. 
This analysis will take into account originality, significance to the area, and clarity. Accepted papers will be fully published in the conference proceedings. The major topics of interest include, but are not limited to: Applications Architecture and Topology Biological Perspectives Cognitive Science Dynamic Systems Fuzzy Logic Genetic Algorithms Hardware Implementation Hybrid Systems Learning Models Otimisation Parallel and Distributed Implementations Pattern Recognition Robotics and Control Signal Processing Theoretical Models Program Committee: - Andre C. P. L. F. de Carvalho - ICMSC/USP - Dante Barone - II/UFRGS (Chairman) - Edson C B C Filho - DI/UFPE - Fernando Gomide - FEE/UNICAMP - Geraldo Mateus - DCC/UFMG - Luciano da Fontoura Costa - IFSC/USP - Rafael Linden - IBCCF/UFRJ - Paulo Martins Engel - II/UFRGS Organising Committee: - Aluizio Araujo - EESC/USP - Andre C. P. L. F. de Carvalho - ICMSC/USP (Chairman) - Dante Barone - II/UFRGS - Edson C B C Filho - DI/UFPE - Germano Vasconcelos - DI/UFPE - Glauco Caurin - EESC/USP - Luciano da Fontoura Costa - IFSC/USP - Roseli A. Francelin Romero - ICMSC/USP - Teresa B. Ludermir - DI/UFPE SUBMISSION PROCEDURE: The Symposium seeks contributions to the state of the art and future perspectives of Neural Networks research. A submitted paper must be in Portuguese, Spanish or English. The submissions must include the original paper and three more copies and must follow the format below (no E-mail or FAX submissions). The paper must be printed using a laser printer, in one-column format, not numbered, 8.5 X 11.0 inch (21,7 X 28.0 cm). It must not exceed six pages, including all figures and diagrams. The font size should be 10 pts, such as TIMES-ROMAN font or its equivalent with the following margins: right and left 2.5 cm, top 3.5 cm, and bottom 2.0 cm. The first page should contain the paper's title, the complete author(s) name(s), affiliation(s), and mailing address(es), followed by a short (150 words) abstract and list of descriptive key words and an acompanying letter. In the accompanying letter, the following information must be included: * Manuscript title * first author's name, mailing address and E-mail * Technical area SUBMISSION ADDRESS: Four copies (one original and three copies) should be submitted to: Andre C. P. L. F. de Carvalho - SBRN 95 Departamento de Ciencias de Computacao e Estatistica ICMSC - Universidade de Sao Paulo Caixa Postal 668 CEP 13560.070 Sao Carlos, SP Brazil Phone: +55 162 726222 FAX: +55 162 749150 E-mail: IISBRN at icmsc.sc.usp.br DEADLINES: July 30, 1995 (mailing date) Deadline for paper submission August 30, 1995 Notification to authors October 18-20, 1995 II SBRN MORE INFORMATION: * Up-to-minute information about the symposium is available on the World Wide Web (WWW) at http://www.icmsc.sc.usp.br * Questions can be sent by E-mail to IISBRN at icmsc.sc.usp.br Hope to see you in Sao Carlos! From markc at crab.psy.cmu.edu Mon Jul 10 15:20:09 1995 From: markc at crab.psy.cmu.edu (Mark Chappell) Date: Mon, 10 Jul 95 15:20:09 EDT Subject: Technical Report; Human memory Message-ID: <9507101920.AA22266@crab.psy.cmu.edu.psy.cmu.edu> The paper whose abstract appears below has recently been submitted. It (paper.ps or paper.ps.Z) may be anonymously ftped from hydra.psy.cmu.edu, in directory /pub/user/markc. If this is not possible hard copies may be obtained from Barbara Dorney; bd1q at crab.psy.cmu.edu. 
Technical Report PDP.CNS.95.2 Familiarity Breeds Differentiation: A Bayesian Approach to the Effects of Experience in Recognition Memory James L. McClelland and Mark Chappell
As people become better at identifying studied items, they also become better at rejecting distractor items. A model of recognition is presented consisting of item detectors that learn estimates of conditional probabilities of items' features, exhibiting this differentiation effect. The model is used to account for a number of findings in the recognition memory literature, including a null list-strength effect, a list-length effect, non-linear effects of strengthening items on false recognition of similar distractors, a number of different kinds of mirror effects, and appropriate $z$-ROC curves.
From niranjan at eng.cam.ac.uk Tue Jul 11 10:54:49 1995 From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk) Date: Tue, 11 Jul 95 10:54:49 BST Subject: JOB, Neural Computing, Signal Processing, medical Applications Message-ID: <9507110954.22966@baby.eng.cam.ac.uk>
Cambridge University Engineering Department Research Assistantship in Neural Computing
Applications are invited for a Research Assistant position in the area of Neural Computing applied to Nonstationary Medical Signal Processing in the monitoring of liver transplant patients. The project is funded by the EPSRC and is for 33 months, starting October 1995. Candidates for this post are expected to have a good first degree and preferably a postgraduate degree in a relevant discipline. Salary will be in the RA/1A scale, currently in the range #14,317 to #21,519 p.a. Application forms and further particulars may be obtained from Rachael West, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England. Phone: 44 1223 332739, Fax: 44 1223 332662, Email: rw at eng.cam.ac.uk. Closing date for applications is 31 July 1995. Short-listed applicants are expected to be interviewed on 7 August 1995. Informal inquiries about the project to niranjan at eng.cam.ac.uk
---------------------------------------------------------------
From austin at minster.cs.york.ac.uk Mon Jul 10 06:54:46 1995 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Mon, 10 Jul 95 06:54:46 Subject: No subject Message-ID:
Certification of Neural Networks 8-month post at the University of York, UK.
Applications are invited for a Research Assistant to investigate the problems of using neural networks in safety critical systems. The aim of the work is to find out what the limitations are when using neural networks in such systems, and to suggest ways in which they may be overcome. The work will be of interest to people who wish to study the way in which neural networks learn and their subsequent behaviour. The post is open to individuals who have a good theoretical knowledge of neural networks or experience in safety and certification issues. Because some simulation work will be undertaken, knowledge of C and UNIX would be an advantage. Preference will be given to candidates who hold a PhD which involved neural networks. The candidate will join a major group of researchers working in the area of neural networks. The group is supported by high-quality computing resources and technical personnel. The post is supported by the EPSRC and is funded, in the first instance, for 8 months. The salary will be in the range 13,941 to 17,813 p.a. (UK pounds).
Four copies of a letter of application with full curriculum vitae, including the names of two referees, should be sent as soon as possible to Gary Morgan (address as below). More information on the post can be obtained from Dr. Jim Austin (01904 432734, austin at minster.york.ac.uk) or Dr. Gary Morgan (01904 432739, gary at minster.york.ac.uk). Advanced Computer Architecture Group Department of Computer Science University of York York YO1 5DD UK
From denis at lima.psych.mcgill.ca Tue Jul 11 11:52:31 1995 From: denis at lima.psych.mcgill.ca (Denis Mareschal) Date: Tue, 11 Jul 1995 11:52:31 -0400 Subject: paper on object permanence now available Message-ID: <199507111552.LAA11333@lima.psych.mcgill.ca>
FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/mareschal.object_permanence.ps.Z
The following paper is now available for copying from the neuroprose archive. It will appear in the Proceedings of the Cognitive Science Society. Title: Developing Object Permanence: A Connectionist Model (6 pages) Authors: Denis Mareschal, Kim Plunkett, Paul Harris Department of Experimental Psychology South Parks Rd University of Oxford Oxford OX1 3UD UK
Abstract: When tested on surprise or preferential looking tasks, young infants show an understanding that objects continue to exist even though they are no longer directly perceivable. Only later do infants show a similar level of competence when tested on retrieval tasks. Hence, a developmental lag is apparent between infants' knowledge as measured by passive response tasks, and their ability to demonstrate that knowledge in an active retrieval task. We present a connectionist model which learns to track and initiate a motor response towards objects. The model exhibits a capacity to maintain a representation of the object even when it is no longer directly perceptible, and acquires implicit tracking competence before the ability to initiate a manual response to a hidden object. A study with infants confirms the model's prediction concerning improved tracking performance at higher object velocities. It is suggested that the developmental lag is a direct consequence of the need to co-ordinate representations which themselves emerge through learning.
Thanks to Jordan Pollack for maintaining this archive service! Instructions for retrieving the paper: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (your email address) ftp> cd /pub/neuroprose ftp> bin ftp> get mareschal.object_permanence.ps.Z ftp> quit unix> uncompress mareschal.object_permanence.ps.Z unix> lpr -s mareschal.object_permanence.ps
Cheers, DENIS MARESCHAL DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY OXFORD UNIVERSITY SOUTH PARKS RD OXFORD OX1 3UD UK
From tirthank at titanic.mpce.mq.edu.au Wed Jul 12 22:11:06 1995 From: tirthank at titanic.mpce.mq.edu.au (Tirthankar Raychaudhuri) Date: Thu, 13 Jul 1995 12:11:06 +1000 (EST) Subject: Web page on Combining Estimators Message-ID: <9507130211.AA27130@titanic.mpce.mq.edu.au> A non-text attachment was scrubbed...
From brunel at venus.roma1.infn.it Thu Jul 13 06:13:38 1995 From: brunel at venus.roma1.infn.it (Nicolas Brunel) Date: Thu, 13 Jul 95 12:13:38 +0200 Subject: papers in neuroprose archive Message-ID: <9507131013.AA01923@venus.roma1.infn.it>
FTP-host: archive.cis.ohio-state.edu The following three papers are now available for copying from the neuroprose archive.
FTP-filename: /pub/neuroprose/brunel.dynamics.ps.Z Title: Dynamics of an attractor neural network converting temporal into spatial correlations (29 pages) Network: computation in neural systems, 5: 449 Author: Nicolas Brunel Dipartimento di Fisica Universita di Roma I La Sapienza P.le Aldo Moro 2 - 00185 Roma Italy
Abstract The dynamics of a model attractor neural network, dominated by collateral feedback, composed of excitatory and inhibitory neurons described by afferent currents and spike rates, is studied analytically. The network stores stimuli learned in a temporal sequence. The statistical properties of the delay activities are investigated analytically under the approximation that no neuron is activated by more than one of the learned stimuli, and that inhibitory reaction is instantaneous. The analytic results reproduce the details of simulations of the model in which the stored memories are uncorrelated, and neurons can be shared, with low probability, by different stimuli. As such, the approximate analytic results account for delayed match to sample experiments of Miyashita in the inferotemporal cortex of monkeys. If the stimuli used in the experiment are uncorrelated, the analysis deduces the mean coding level $f$ in a stimulus (i.e. the mean fraction of neurons activated by a given stimulus) from the fraction of selective neurons which have a high correlation coefficient, of $f\sim 0.0125$. It also predicts the structure of the distribution of the correlation coefficients among neurons.
FTP-filename: /pub/neuroprose/brunel.learning.ps.Z Title: Learning internal representations in attractor neural network with analogue neurons To be published in Network: computation in neural systems Authors: Daniel J Amit and Nicolas Brunel Dipartimento di Fisica Universita Roma I P.le Aldo Moro 2 - 00185 Roma Italy
Abstract: A learning attractor neural network (LANN) with a double dynamics of neural activities and synaptic efficacies, operating on two different time scales is studied by simulations in preparation for an electronic implementation. The present network includes several quasi-realistic features: neurons are represented by their afferent currents and output spike rates; excitatory and inhibitory neurons are separated; attractor spike rates as well as coding levels in arriving stimuli are low; learning takes place only between excitatory units. Synaptic dynamics is an unsupervised, analog Hebbian process, but long term memory in the absence of neural activity is maintained by a refresh mechanism which on long time scales discretizes the synaptic values, converting learning into an asynchronous stochastic process induced by the stimuli on the synaptic efficacies. This network is intended to learn a set of attractors from the statistics of freely arriving stimuli, which are represented by external synaptic inputs injected into the excitatory neurons.
In the simulations different types of sequences of many thousands of stimuli are presented to the network that do not distinguish between retrieval and learning phases. Stimulus sequences differ in preassigned global statistics (including time dependent statistics); in orders of presentation of individual stimuli within a given statistics; in lengths of time intervals for each presentation and in the intervals separating one stimulus from another. We find that the network effectively learns a set of attractors representing the statistics of the stimuli, and is able to modify its attractors when the input statistics change. Moreover, as the global input statistics changes the network can also forget attractors related to stimulus classes no longer presented. Forgetting takes place only due to the arrival of new stimuli. The performance of the network and the statistics of the attractors are studied as a function of the input statistics. Most of the large scale characteristics of the learning dynamics can be captured theoretically. This model modifies a previous implementation of a LANN composed of discrete neurons, in a network of more realistic neurons. The different elements have been designed to facilitate their implementation in silicon. FTP-filename: /pub/neuroprose/brunel.spontaneous.ps.Z Title: Global spontaneous activity and local structured (learned) delay activity in cortex submitted to Journal of Neurophysiology Authors: Daniel J Amit and Nicolas Brunel Dipartimento di Fisica Universita di Roma I P.le Aldo Moro 2 -- 00185 Roma Italy Abstract: 1. We investigate the conditions under which cortical activity alone makes spontaneous activity self-reproducing and stable against fluctuations of spike rates. Invoking simple assumptions about properties of integrate-and-fire neurons it is shown that the stochastic background activity, of 1-5 spikes/second, cannot be stabilized when all neurons are excitatory. 2. On the other hand, spontaneous activity becomes self-stabilizing in presence of local inhibition: given reasonable values of the parameters of the network spontaneous activity reproduces itself and small fluctuations in the rate are suppressed. a. If the integration time constants of excitatory and inhibitory neurons at the soma are equal, {\em local} excitatory and inhibitory inputs to a neuron must balance to provide {\em local} stablility. b. If inhibition integrates faster its synaptic inputs, spontaneous activity is stable even when local recurrent excitation predominates. 3. In a network sustaining spontaneous rates of 1-5 spikes/second, we study the effect of learning in a local module, expressed in synaptic modifications in specific populations of synapses. We find: a. Initially no stimulus specific delay activity manifests itself. Instead, there is a delay activity in which, locally, {\em all} neurons selective to any of the stimuli learned have rates which gradually increase with the amplitude of synaptic potentiation. b. When the average LTP increases beyond a critical value, specific local attractors appear abruptly against the background of the global uniform spontaneous attractor. This happens with either gradual or discrete stochastic LTP. 4. The above findings predict that in the process of learning unfamiliar stimuli, there is a stage in which all neurons selective to any of the learned stimuli enhance their spontaneous activity relative to the rest. Then, abruptly, selective delay activity appear. 
Both facts could be observed in single unit recordings in delayed match to sample experiments. 5. Beyond this critical learning strength the local module has two types of collective activity. It either participates in the global spontaneous activity, or it maintains a stimulus selective elevated activity distribution. The particular mode of behavior depends on the stimulus: if it is unfamiliar, the activity is spontaneous; if similar to a learned stimulus, the delay activity is selective. These new attractors (delay activities) reflect the synaptic structure developed during learning. In each of them a small population of neurons has elevated rates, 20-30 spikes/second, depending on the strength of LTP. The remaining neurons of the module have their activity at spontaneous rates.
Instructions for retrieving these papers: unix> ftp archive.cis.ohio-state.edu login: anonymous passwd: (your email address) ftp> cd /pub/neuroprose ftp> binary ftp> get brunel.dynamics.ps.Z ftp> get brunel.learning.ps.Z ftp> get brunel.spontaneous.ps.Z ftp> quit unix> uncompress brunel.dynamics.ps.Z unix> uncompress brunel.learning.ps.Z unix> uncompress brunel.spontaneous.ps.Z
From chentouf at kepler.inpg.fr Thu Jul 13 11:53:20 1995 From: chentouf at kepler.inpg.fr (rachida) Date: Thu, 13 Jul 1995 17:53:20 +0200 Subject: Incremental Learning Message-ID: <199507131553.RAA11209@kepler.inpg.fr>
Using incremental neural network procedures to perform learning tasks is certainly a very attractive idea. These methods allow automatic tuning of the network size, which is otherwise done empirically with a significant risk of over-estimation or under-estimation, implying an intractable, computation-consuming trial-and-error procedure. The main questions to address when dealing with evolutive architectures are: 1. How to estimate the new unit(s) parameters? 2. How to connect this (these) unit(s) to the previous network so that it is possible to carry on learning without restarting? 3. When to stop the adding process?
We recently published in NPL (Neural Processing Letters, January 1995) a paper presenting our new incremental procedure for supervised learning with noisy data. Each step consists in adding to the current network a new unit which is trained to learn the error of the network. The incremental step is repeated until the error of the current network reduces to the noise level in the data. The stopping criterion is very simple and can be directly deduced from a statistical test on the estimated parameters of the new unit. Some experimental results on function approximation tasks point out the efficacy of this new incremental scheme, especially in avoiding spurious minima and in designing a network with a well-suited size. The number of basic operations is also decreased, giving an average gain in convergence speed of about 20%.
For more information, consult: ============================= C.Jutten and R.Chentouf. A New Scheme for Incremental Learning. Neural Processing Letters, Vol. 2, 1, pp. 1-4, 1995. R.Chentouf and C.Jutten. Incremental Learning with a Stopping Criterion: experimental results. In IWANN'95: From Natural to Artificial Neural Computation, J. Mira and F. Sandoval (Eds.), Lecture Notes in Computer Science 930, Springer, pp.
519-526, June 7-9, 1995 +++++++++++++++++++++++ Mrs CHENTOUF Rachida LTIRF-INPG 46 AV Felix Viallet 38000 Grenoble France Tel : (+33) 76 57 45 50 Fax : (+33) 76 57 47 90 From chris at orion.eee.kcl.ac.uk Thu Jul 13 17:22:19 1995 From: chris at orion.eee.kcl.ac.uk (Chris Christodoulou) Date: Thu, 13 Jul 95 17:22:19 BST Subject: Call for Papers - NN workshop in Prague, April 1996. Message-ID: <9507131622.AA29461@orion.eee.kcl.ac.uk> Subject: Call for Papers - NN workshop in Prague, April 1996. CALL FOR PAPERS: NEuroFuzzy Workshop NEuroFuzzy Workshop on Computational Intelligence Prague, Czech Republic 16-18 April 1996 Prague, lying almost exactly in the centre of Europe has very good international connections. The international airport of Prague is only about 15 km from centre of the city. Prague has approx. 1,200,000 inhabitants and a lot of history and culture. There are also good railway and road connections to Prague (the distance from Munich, Vienna, Berlin and Nuerenberg is about 300 km. ------------------------------------------------------------------ Submissions due: November 6, 1995 Acceptance Notices mailed: January 15, 1996 Camera ready papers due: February 19, 1996 ------------------------------------------------------------------ Call for Papers - Technical Areas o Neuroscience o Computational Models of Neurons and Neural Nets o Organisational Principles o Learning o Fuzzy Logic o Genetic algorithms o Hardware Implementation o Intelligent Systems for Perception o Intelligent Systems for Communications Systems o Intelligent Systems for Control and Robotics ------------------------------------------------------------------ You are invited to submit original papers addressing topics of interest for presentation at the conference and inclusion in the conference proceedings. Submissions should be in-depth, technical papers describing recent research and development results. Some tutorial papers are also welcome. The title page of your submission must include: 1) the name, complete return address, email, telephone and fax numbers of the author to whom all correspondence will be sent, 2) a maximum of 100-words abstract, 3) the designation of the topic (see listing) to which the paper is most closely related. All other pages should be marked with the title of the paper and the name of the first author. Send five double-spaced copies of the manuscript (limited to 3000 words) in English to: Prof. Dr.-Ing. Mirko Novak Czechoslowak Academy of Sciences Inst. for Computer and Information Science Pod vodarenskko vezi 18207 Prag 8 Czech Republic ------------------------------------------------------------------ Steering Committee Mirko NOVAK CZECH Republic Czeslaw JEDRZEJEK POLAND Valerieu BEIU ROMANIA Tamas ROSKA HUNGARY Prof A FROLOV RUSSIA Prof Gustin SLOVENIA Francisco SANDOVAL SPAIN Trevor CLARKSON UK Stamatios KARTALOPOULOS USA ------------------------------------------------------------------ A registration form will be available shortly. The registration fee for the workshop is not expected to exceed $250. A range of accommodation will be available, including student accommodation. 
------------------------------------------------------------------ For further details: Dr Trevor Clarkson Chairman, IEEE UKRI Neural Networks Regional Interest Group Department of Electronic and Electrical Engineering King's College London, Strand, London WC2R 2LS, UK Tel: +44 171 873 2367 Fax: +44 171 836 4781 Email: trevor at orion.eee.kcl.ac.uk ------------------------------------------------------------------ New information will be available on WWW, URL http://crg.eee.kcl.ac.uk/rig/neurofuz.htm ---END------------------------------------------------------------ From dwang at cis.ohio-state.edu Thu Jul 13 15:40:37 1995 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Thu, 13 Jul 1995 15:40:37 -0400 Subject: Tech reports available: Incremental learning Message-ID: <199507131940.PAA02136@shirt.cis.ohio-state.edu> The following Technical Report is available via FTP/WWW: ------------------------------------------------------------------ Incremental Learning of Complex Temporal Patterns ------------------------------------------------------------------ DeLiang Wang and Budi Yuwono Technical Report: OSU-CISRC-6/95-TR30 Department of Computer and Information Science, The Ohio State University A neural model for temporal pattern generation is used and analyzed for training with multiple complex sequences in a sequential manner. The network exhibits some degree of interference when new sequences are acquired. It is proven that the model is capable of incrementally learning a finite number of complex sequences. The model is then evaluated with a large set of highly correlated sequences. While the number of intact sequences increases linearly with the number of previously acquired sequences, the amount of retraining due to interference appears to be independent of the size of existing memory. The model is extended to include a chunking network which detects repeated subsequences between and within sequences. The chunking mechanism substantially reduces the amount of retraining in sequential training. Thus, the network investigated here constitutes an effective sequential memory. Various aspects of such a memory are discussed at the end of the paper. (34 pages - 239 KB) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu FTP-filename: /pub/leon/Wang-Yuwono.tech.ps.Z or for WWW: http://www.cis.ohio-state.edu/~dwang Comments are most welcome - Please send to DeLiang Wang (dwang at cis.ohio-state.edu) ---------------------------------------------------------------------------- FTP instructions: To retrieve and print the file, use the following commands: unix> ftp ftp.cis.ohio-state.edu Name: anonymous Password: (your email address) ftp> binary ftp> cd /pub/leon ftp> get Wang-Yuwono.tech.ps.Z ftp> quit unix> uncompress Wang-Yuwono.tech.ps.Z unix> lpr Wang-Yuwono.tech.ps (It may not ghostview well - missing page count with my ghostview - but it should print ok) ---------------------------------------------------------------------------- From pja at lfs.loral.com Thu Jul 13 13:45:04 1995 From: pja at lfs.loral.com (Peter J. Angeline) Date: Thu, 13 Jul 1995 13:45:04 -0400 Subject: EP96 Change in Tech Chairs Message-ID: <9507131345.ZM15302@barbarian.endicott.ibm.com> ECers Everywhere, EP96 had a minor reorgnization. Please note that the addresses for sending submissions to the conference are down to 2 now! Submissions are due to one of the technical chairs by September 26th. 
Pete Angeline Thomas Baeck ********************** EP96 The Fifth Annual Conference On Evolutionary Programming February 29 to March 3, 1996 Sheraton Harbor Island Hotel San Diego, CA, USA Sponsored by The Evolutionary Programming Society The Fifth Annual Conference on Evolutionary Programming will serve as a forum for researchers investigating applications and theory of evolutionary programming and other related areas in evolutionary and natural computation. Topics of interest include but are not limited to the use of evolutionary simulations in optimization, neural network training and design, automatic control, image processing, and other applications, as well as mathematical theory or empirical analysis providing insight into the behavior of such algorithms. Of particular interest are applications of simulated evolution to problems in biology. Conference Committee General Chairman: Lawrence J. Fogel, Natural Selection, Inc. Technical Program Co-Chairs: Peter J. Angeline, Loral Federal Systems Thomas Baeck, Informatik Centrum Dortmund Finance Chair: V. William Porto, Orincon Corporation Local Arrangements: Ward Page, Naval Command Control and Ocean Surveillance Center Conference World Wide Web Page: http://www.aic.nrl.navy.mil/galist/EP96/ Submission Information Authors are invited to submit papers which describe original unpublished research in evolutionary programming, evolution strategies, genetic algorithms and genetic programming, artificial life, cultural algorithms, and other models that rely on evolutionary principles. Specific topics include but are not limited to the use of evolutionary simulations in optimization, neural network training and design, automatic control, image processing, and other applications, as well as mathematical theory or empirical analysis providing insight into the behavior of such algorithms. Of particular interest are applications of simulated evolution to problems in biology. Hardcopies of manuscripts must be received by one of the technical program co-chairs by September 26, 1995. Electronic submissions cannot be accepted. Papers should be clear, concise, and written in English. Papers received after the deadline will be handled on a time- and space-available basis. The notification of the program committee's review decision will be mailed by November 30, 1995. Papers eligible for the student award must be marked appropriately for consideration (see below). Camera ready papers are due at the conference, and will be published shortly after its completion. Submissions should be single-spaced, 12 pt. font and should not exceed 15 pages including figures and references. Send five (5) copies of the complete paper to: In Europe: Thomas Baeck Informatik Centrum Dortmund Joseph-von-Fraunhofer-Str. 20 D-44227 Dortmund Germany Email: baeck at home.informatik.uni-dortmund.de In US: Peter J. Angeline Loral Federal Systems 1801 State Route 17C Mail Drop 0210 Owego, NY 13827 Email: pja at lfs.loral.com Authors outside Europe or the United States may send their paper to any of the above technical chairmen at their convenience. Evolutionary Programming Society Award for Best Student Paper In order to foster student contributions and encourage exceptional scholarship in evolutionary programming and closely related fields, the Evolutionary Programming Society awards one exceptional student paper submitted to the Annual Conference on Evolutionary Programming. The award carries a $500 cash prize and a plaque signifying the honor. 
To be eligible for the award, all authors of the paper must be full-time students at an accredited college, university or other educational institution. Submissions to be considered for this award must be clearly marked at the top of the title page with the phrase "CONSIDER FOR STUDENT AWARD." In addition, the paper should be accompanied by a cover letter stating that (1) the paper is to be considered for the student award (2) all authors are currently enrolled full-time students at a university, college or other educational institution, and (3) that the student authors are responsible for the work presented. Only papers submitted to the conference and marked as indicated will be considered for the award. Late submissions will not be considered. Officers of the Evolutionary Programming Society, students under their immediate supervision, and their immediate family members are not eligible. Judging will be made by officers of the Evolutionary Programming Society or by an Awards Committee appointed by the president. Judging will be based on the perceived technical merit of the student's research to the field of evolutionary programming, and more broadly to the understanding of self-organizing systems. The Evolutionary Programming Society and/or the Awards Committee reserves the right not to give an award in any year if no eligible student paper is deemed to be of award quality. Presentation of the Student Paper Award will be made at the conference.
Important Dates --------------- September 26, 1995 - Submission deadline for papers November 30, 1995 - Notification sent to authors February 29, 1996 - Conference Begins
Program Committee: J. L. Breeden, Santa Fe Institute M. Conrad, Wayne State University K. A. De Jong, George Mason University T. M. English, Texas Tech University D. B. Fogel, Natural Selection, Inc. G. B. Fogel, University of California at Los Angeles R. Galar, Technical University of Wroclaw P. G. Harrald, University of Manchester Institute of Science and Technology K. E. Kinnear, Adaptive Systems J. R. McDonnell, Naval Command Control and Ocean Surveillance Center Z. Michalewicz, University of North Carolina F. Palmieri, University of Connecticut R. G. Reynolds, Wayne State University S. H. Rubin, Central Michigan University G. Rudolph, University of Dortmund N. Saravanan, Ford Research H.-P. Schwefel, University of Dortmund A. V. Sebald, University of California at San Diego W. M. Spears, Naval Research Labs D. E. Waagen, TRW Systems Integration Group
-- +----------------------------------------------------------------------------+ | Peter J. Angeline, PhD | | | Advanced Technologies Dept. | | | Loral Federal Systems | | | State Route 17C | I have nothing to say, | | Mail Drop 0210 | and I am saying it. | | Owego, NY 13827-3994 | | | Voice: (607)751-4109 | - John Cage | | Fax: (607)751-6025 | | | Email: pja at lfs.loral.com | | +----------------------------------------------------------------------------+
From cnna96 at cnm.us.es Fri Jul 14 05:51:10 1995 From: cnna96 at cnm.us.es (4th Workshop on CNN's and Applications) Date: Fri, 14 Jul 95 11:51:10 +0200 Subject: CNNA'96 Call for papers Message-ID: <9507140951.AA22422@cnm1.cnm.us.es>
PRELIMINARY CALL FOR PAPERS 4th IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND APPLICATIONS (CNNA-96) June 24-26, 1996 (Jointly Organized with NDES-96) Escuela Superior de Ingenieros de Sevilla Centro Nacional de Microelectrónica Sevilla, Spain
------------------------------------------------------------------------------
ORGANIZING COMMITTEE: Prof. J.L. Huertas (Chair) Prof. A. Rodríguez-Vázquez Prof. R. Domínguez-Castro SECRETARY: Dr. S. Espejo TECHNICAL PROGRAM: Prof. A. Rodríguez-Vázquez PROCEEDINGS: Prof. R. Domínguez-Castro SCIENTIFIC COMMITTEE: Prof. N.N. Aizemberg, Univ. of Uzhgorod, Ukraine Prof. L.O. Chua, Univ. of Cal. at Berkeley, U.S.A. Prof. V. Cimagalli, Univ. of Rome, Italy Prof. T.G. Clarkson, King's College London, U.K. Prof. A.S. Dmitriev, Academy of Sciences, Russia Prof. M. Hasler, EPFL, Switzerland Prof. J. Herault, Nat. Ins. of Tech., France Prof. J.L. Huertas, Nat. Microelectronics Center, Spain Prof. S. Jankowski, Tech. Univ. of Warsaw, Poland Prof. J. Nossek, Tech. Univ. Munich, Germany Prof. V. Porra, Tech. Univ. of Helsinki, Finland Prof. T.
Roska, MTA-SZTAKI, Hungary Prof. M. Tanaka, Sophia Univ., Japan Prof. J. Vandewalle, Kath. Univ. Leuven, Belgium ------------------------------------------------------------------------------ GENERAL SCOPE OF THE WORKSHOP AND VENUE The CNNA series of workshops aims to provide a biannual international forum to present and discuss recent advances in Cellular Neural Networks. Following the successful conferences in Budapest (1990), Munich (1992), and Rome (1994), the fourth workshop will be held in Seville during 1996, organized by the National Microelectronic Center and the School of Engineering of Seville. Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2500 years history with modern infrastructures in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport serving several daily direct international flights, as well as many connections to international flights via Madrid. ------------------------------------------------------------------------------ PAPERS SUBMISSION Papers on all aspects of Cellular Neural Networks are welcome. Topics of interest include, but are not limited to: - Basic Theory - Applications - Learning - Software Implementations and CNN Simulators - CNN Computers - CNN Chips - CNN System Development and Testing Prospective authors are invited to submit 4 pages summaries of their papers to the Conference Secretariat. Authors of accepted papers will be asked to deliver camera-ready versions of their full-papers for publication in an IEEE-sponsored Proceedings. ------------------------------------------------------------------------------ AUTHOR'S SCHEDULE Submission of summaries: ................ January 31, 1996 Notification of acceptance: ............. March 31, 1996 Submission of camera-ready papers: ...... May 15, 1996 ------------------------------------------------------------------------------ PRELIMINARY REGISTRATION FORM Fourth IEEE Int. Workshop on Cellular Neural Networks and their Applications CNNA'96 Sevilla, Spain, June 24-26, 1996 I wish to attend the workshop. Please send Program and registration form when available. Name: ................______________________________ Mailing address: .....______________________________ Phone: ...............______________________________ Fax: ............. ...______________________________ E-mail: ..............______________________________ Please complete and return to: CNNA'96 Secretariat. Department of Analog Circuit Design, Centro Nacional de Microelectrnica Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN FAX: +34-5-4231832 Phone: +34-5-4239923 E-mail: cnna96 at cnm.us.es ------------------------------------------------------------------------------ From barto at cs.umass.edu Fri Jul 14 17:46:22 1995 From: barto at cs.umass.edu (Andy Barto) Date: Fri, 14 Jul 95 16:46:22 -0500 Subject: post doc position Message-ID: Dear Colleague: I am looking for a postdoctoral researcher for a project using mathematical and computer models to refine and test hypotheses about how the cerebellum and motor cortex function together to support motor activity. 
We are constructing a large scale-model of the cerebellum and associated premotor circuits that is constrained by the anatomy and physiology, but that is also abstract to allow us to explore its control abilities in a computationally feasible manner. We are specifically focusing on the learning of skilled reaching behavior. The post doc will stongly interact with physiologists in collaborating laboratories who study motor control in animals, but should be most skilled in computational approaches to motor control and in adaptive neural network simulation. If you are interested in this position, which will be available this Fall, or in additional details, please contact Gwyn Mitchell (mitchell at cs.umass.edu, tel (413) 545-1309, fax (413) 545-1249). If you know of any suitable candidate who might be interested in this position, I would appreciate it if you could pass this information along to them. Thank you very much. Sincerely, Andrew G. Barto, Professor Computer Science Department, LGRC University of Massachusetts Box 34610 Amherst MA 01003-4610 Refs: Houk et al., Trends in Neuroscience 16; pp. 27-33, 1993; Houk and Barto, In G. E. Stelmach and J. Requin, eds, Tutorials in Motor Behavior II, pp. 71-100, Elesevier, 1992. ----------------------------------------------------------------------------- From king at cs.cuhk.hk Fri Jul 14 18:55:19 1995 From: king at cs.cuhk.hk (Dr. Irwin K. King) Date: Sat, 15 Jul 1995 06:55:19 +0800 (HKT) Subject: ICONIP'96 CFP Message-ID: <9507142242.AA10879@cucs18.cs.cuhk.hk> FIRST CALL FOR PAPERS 1996 INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING The Annual Conference of the Asian Pacific Neural Network Assembly ICONIP'96, September 24 - 27, 1996 Hong Kong Exhibition and Convention Center, Wan Chai, Hong Kong The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing. The conference further serves to stimulate local and regional interests in neural information processing and its potential applications to industries indigenous to this region. CONFERENCE TOPICS ================= * Theory * Algorithms & Architectures * Applications * Supervised/Unsupervised Learning * Hardware Implementations * Hybrid Systems * Neurobiological Systems * Associative Memory * Visual & Speech Processing * Intelligent Control & Robotics * Cognitive Science & AI * Recurrent Net & Dynamics * Image Processing * Pattern Recognition * Computer Vision * Time Series Prediction * Financial Engineering * Optimization * Fuzzy Logic * Evolutionary Computing * Other Related Areas CONFERENCE'S SCHEDULE ===================== Submission of paper February 1, 1996 Notification of acceptance May 1, 1996 Early registration deadline July 1, 1996 SUBMISSION INFORMATION ====================== Authors are invited to submit one camera-ready original and five copies of the manuscript written in English on A4-format white paper with one inch margins on all four sides, in one column format, no more than six pages including figures and references, single-spaced, in Times-Roman or similar font of 10 points or larger, and printed on one side of the page only. Electronic or fax submission is not acceptable. Additional pages will be charged at USD $50 per page. Centered at the top of the first page should be the complete title, author(s), affiliation, mailing, and email addresses, followed by an abstract (no more than 150 words) and the text. 
Each submission should be accompanied by a cover letter indicating the contacting author, affiliation, mailing and email addresses, telephone and fax number, and preference of technical session(s) and format of presentation, either oral or poster (both are published). All submitted papers will be refereed by experts in the field based on quality, clarity, originality, and significance. Authors may also retrieve the ICONIP style, "iconip.tex" and "iconip.sty" files for the conference by anonymous FTP at ftp.cs.cuhk.hk in the directory /pub/iconip96. For further information, inquiries, and paper submissions please contact ICONIP'96 Secretariat Department of Computer Science The Chinese University of Hong Kong Shatin, N.T., Hong Kong Fax (852) 2603-5024 E-mail: iconip96 at cs.cuhk.hk http://www.cs.cuhk.hk/iconip96 ====================================================================== General Co-Chairs ================= Omar Wing, CUHK Shun-ichi Amari, Tokyo U. Advisory Committee ================== International ------------- Yaser Abu-Mostafa, Caltech Michael Arbib, U. Southern Cal. Leo Breiman, UC Berkeley Jack Cowan, U. Chicago Rolf Eckmiller, U. Bonn Jerome Friedman, Stanford U. Stephen Grossberg, Boston U. Robert Hecht-Nielsen, HNC Geoffrey Hinton, U. Toronto Anil Jain, Michigan State U. Teuvo Kohonen, Helsinki U. of Tech. Sun-Yuan Kung, Princeton U. Robert Marks, II, U. Washington Thomas Poggio, MIT Harold Szu, US Naval SWC John Taylor, King's College London David Touretzky, CMU C. v. d. Malsburg, Ruhr-U. Bochum David Willshaw, Edinburgh U. Asia-Pacific Region ------------------- Marcelo H. Ang Jr, NUS, Singapore Sung-Yang Bang, POSTECH, Korea Hsin-Chia Fu, NCTU., Taiwan Toshio Fukuda, Nagoya U., Japan Kunihiko Fukushima, Osaka U., Japan Zhenya He, Southeastern U., China Marwan Jabri, U. Sydney, Australia Nikola Kasabov, U. Otago, New Zealand Yousou Wu, Tsinghua U., China Organizing Committee ==================== L.W. Chan (Co-Chair), CUHK K.S. Leung (Co-Chair), CUHK D.Y. Yeung (Finance), HKUST C.K. Ng (Publication), CityUHK A. Wu (Publication), CityUHK K.P. Lam (Publicity), CUHK M.W. Mak (Local Arr.), HKPU C.S. Tong (Local Arr.), HKBU T. Lee (Registration), CUHK M. Stiber (Registration), HKUST K.P. Chan (Tutorial), HKU H.T. Tsui (Industry Liaison), CUHK I. King (Secretary), CUHK Program Committee ================= Co-Chairs --------- Lei Xu, CUHK Michael Jordan, MIT Erkki Oja, Helsinki Univ. of Tech. Mitsuo Kawato, ATR Members ------- Yoshua Bengio, U. Montreal Chris Bishop, Aston U. Leon Bottou, Neuristique Gail Carpenter, Boston U. Laiwan Chan, CUHK Huishen Chi, Peking U. Peter Dayan, MIT Kenji Doya, ATR Scott Fahlman, CMU Francoise Fogelman, SLIGOS Lee Giles, NEC Research Inst. Michael Hasselmo, Harvard U. Kurt Hornik, Technical U. Wien Steven Nowlan, Synaptics Jeng-Neng Hwang, U. Washington Nathan Intrator, Technion Larry Jackel, AT&T Bell Lab Adam Kowalczyk, Telecom Australia Soo-Young Lee, KAIST Todd Leen, Oregon Grad. Inst. Cheng-Yuan Liou, National Taiwan U. David MacKay, Cavendish Lab Eric Mjolsness, UC San Diego John Moody, Oregon Grad. Inst. Nelson Morgan, ICSI Michael Perrone, IBM Watson Lab Ting-Chuen Pong, HKUST Paul Refenes, London Business School Hava Siegelmann, Technion Ah Chung Tsoi, U. Queensland Benjamin Wah, U. Illinois Andreas Weigend, Colorado U. Ronald Williams, Northeastern U. John Wyatt, MIT Alan Yuille, Harvard U. 
Richard Zemel, CMU From ecm at nijenrode.nl Sat Jul 15 04:47:57 1995 From: ecm at nijenrode.nl (Edward Malthouse) Date: Sat, 15 Jul 1995 10:47:57 +0200 (MET DST) Subject: nonparametric reg / nonlin feature extraction Message-ID: <199507150847.KAA27338@bordeaux.nijenrode.nl> The following dissertation is available via anonymous FTP: Nonlinear Partial Least Squares By Edward C. Malthouse Key words: nonparametric regression, partial least squares (PLS), principal components regression (PCR), projection pursuit regression (PPR), feedforward neural networks, nonlinear feature extraction, principal components analysis (PCA), nonlinear principal components analysis (NLPCA), principal curves and surfaces. A B S T R A C T We propose a new nonparametric regression method for high-dimensional data, nonlinear partial least squares (NLPLS). NLPLS is motivated by projection-based regression methods, e.g., partial least squares (PLS), projection pursuit (PPR), and feedforward neural networks. The model takes the form of a composition of two functions. The first function in the composition projects the predictor variables onto a lower-dimensional curve or surface yielding scores, and the second predicts the response variable from the scores. We implement NLPLS with feedforward neural networks. NLPLS will often produce a more parsimonious model (fewer score vectors) than projection-based methods, and the model is well suited for detecting outliers and future covariates requiring extrapolation. The scores are also shown to have useful interpretations. We also extend the model for multiple response variables and discuss situations when multiple response variables should be modeled simultaneously and when they should be modeled with separate regressions. We provide empirical results from mathematical and chemical engineering examples which evaluate the performances of PLS, NLPLS, PPR, and three-layer neural networks on (1) response variable predictions, (2) model parsimony, (3) computational requirements, and (4) robustness to starting values. The curves and surfaces used by NLPLS are motivated by the nonlinear principal components analysis (NLPCA) method of doing nonlinear feature extraction. We develop certain properties of NLPCA and discuss its relation to the principal curve method. Both methods attempt to reduce the dimension of a set of multivariate observations by fitting a curve through the middle of the observations and projecting the observations onto this curve. The two methods fit their models under a similar objective function, with one important difference: NLPCA defines the function which maps observed variables to scores (projection index) to be continuous. We show that the effects of this constraint are (1) NLPCA is unable to model curves and surfaces which intersect themselves and (2) the NLPCA ``projections'' are suboptimal producing larger approximation error. We show how NLPCA score values can be interpreted and give the results of a small simulation study comparing the two methods. The dissertation is 120 pages long (single spaced). ftp mkt2715.kellogg.nwu.edu logname: anonymous password: your email address cd /pub/ecm binary get dissert.ps.gz quit gzip -d dissert.ps lp -dps dissert.ps # or however you print postscript I'm sorry, but no hardcopies are available. 
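To make the two-stage structure described in the abstract above concrete -- a projection network that maps the predictors to a low-dimensional score, composed with a prediction network that maps the score to the response -- here is a small illustrative sketch in Python/NumPy. It is not Malthouse's code or implementation; the toy data, layer sizes, learning rate and variable names are assumptions made purely for illustration, with a single score unit as the bottleneck.

    # Hypothetical illustration only -- not Malthouse's implementation.  A tiny
    # "NLPLS-style" regression: a projection net maps the 5 predictors to one
    # nonlinear score, and a prediction net maps that score to the response.
    # Both halves are trained jointly by gradient descent on squared error.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 5))              # toy predictors
    t = np.sin(3 * (X[:, 0] + 0.5 * X[:, 1]))          # hidden 1-d feature
    y = (t + 0.1 * rng.standard_normal(500)).reshape(-1, 1)

    d, h = X.shape[1], 10
    W1 = 0.5 * rng.standard_normal((d, h)); b1 = np.zeros(h)    # projection net
    w_s = 0.5 * rng.standard_normal((h, 1)); b_s = np.zeros(1)  # -> 1-d score
    W2 = 0.5 * rng.standard_normal((1, h)); b2 = np.zeros(h)    # prediction net
    w_o = 0.5 * rng.standard_normal((h, 1)); b_o = np.zeros(1)  # -> response

    lr = 0.05
    for epoch in range(3000):
        H1 = np.tanh(X @ W1 + b1)          # forward: score = g(X)
        s = H1 @ w_s + b_s
        H2 = np.tanh(s @ W2 + b2)          # forward: prediction = f(score)
        pred = H2 @ w_o + b_o
        err = pred - y
        g_o = err / len(X)                 # backward pass (squared-error loss)
        g_H2 = (g_o @ w_o.T) * (1 - H2 ** 2)
        g_s = g_H2 @ W2.T
        g_H1 = (g_s @ w_s.T) * (1 - H1 ** 2)
        w_o -= lr * H2.T @ g_o; b_o -= lr * g_o.sum(0)
        W2 -= lr * s.T @ g_H2;  b2 -= lr * g_H2.sum(0)
        w_s -= lr * H1.T @ g_s; b_s -= lr * g_s.sum(0)
        W1 -= lr * X.T @ g_H1;  b1 -= lr * g_H1.sum(0)

    print("final training MSE: %.4f" % float(np.mean(err ** 2)))

In this reading, the first half of the network plays the role of the nonlinear projection yielding scores and the second half the regression on those scores; widening the score layer would correspond to using more than one score vector, as the abstract discusses when comparing model parsimony.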
From munro at lis.pitt.edu Sun Jul 16 00:08:07 1995 From: munro at lis.pitt.edu (Paul Munro) Date: Sun, 16 Jul 1995 00:08:07 -0400 (EDT) Subject: "Orthogonality" of the generalizers being combined In-Reply-To: <9507020002.AA25348@sfi.santafe.edu> Message-ID:
On Sat, 1 Jul 1995, David Wolpert wrote: > > In his recent posting, Nathan Intrator writes > > >>> > combining, or in the simple case > averaging estimators is effective only if these estimators are made > somehow to be independent. > >>> > > This is an extremely important point. Its importance extends beyond (stuff deleted) > In other words, although those generalizers are about as different > from one another as can be, *as far as the data set in question was > concerned*, they were practically identical. This is a great flag that > one is in a data-limited scenario. I.e., if very different > generalizers perform identically, that's a good sign that you're > screwed. > > Which is a round-about way of saying that the independence Nathan > refers to is always with respect to the data set at hand. This is > discussed in a bit of detail in the papers referenced below. > > *** > > Getting back to the precise subject of Nathan's posting: Those > interested in a formal analysis touching on how the generalizers being > combined should differ from one another should read the Ander Krough > paper (to come out in NIPS7) that I mentioned in my previous > posting. A more intuitive discussion of this issue occurs in my > original paper on stacking, where there's a whole page of text > elaborating on the fact that "one wants the generalizers being > combined to (loosely speaking) 'span the space' of algorithms and be > 'mutually orthogonal'" to as much a degree as possible. (more stuff deleted)
Bambang Parmanto and I have found that negative correlation among the individual classifiers can improve committee performance even more than zero correlation. So rather than a zero inner product (orthogonality), a negative inner product is preferable. Of course, this may be just a matter of definition -- our comparisons are made using the error vector on a test set. That is, it's better for errors to be independent than it is for them to be coincident, but it's even better if the coincidence is below the expected coincidence rate for independent classifiers. Note that to achieve a significant level of negative correlation, the overall generalization performance must be fairly high...
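The effect described above can be seen in a toy simulation. The sketch below (Python/NumPy) is not from Parmanto and Munro's work; it assumes a three-member majority-vote committee with a fixed per-member error rate, which is an assumption of this sketch rather than their experimental setup, and only varies how the members' errors co-occur on a test set.

    # Hypothetical illustration only -- not Parmanto & Munro's experiments.
    # Three classifiers with the same individual error rate, combined by
    # majority vote; only the correlation structure of their errors differs.
    import numpy as np

    rng = np.random.default_rng(0)
    n_items, n_members, p_err = 100000, 3, 0.2

    def committee_error(errors):
        # errors: (n_items, n_members) booleans, True where a member is wrong;
        # the majority vote is wrong when more than half the members are wrong.
        return np.mean(errors.sum(axis=1) > n_members / 2)

    # Coincident errors: every member is wrong on the same 20% of items.
    coinc = np.repeat(rng.random((n_items, 1)) < p_err, n_members, axis=1)

    # Independent errors: each member is wrong on its own random 20% of items.
    indep = rng.random((n_items, n_members)) < p_err

    # Anti-correlated errors: at most one member is wrong on any given item,
    # chosen at random, keeping each member's individual rate near 20%.
    anti = np.zeros((n_items, n_members), dtype=bool)
    erred = rng.random(n_items) < n_members * p_err
    anti[np.arange(n_items), rng.integers(0, n_members, n_items)] = erred

    for name, e in [("coincident", coinc), ("independent", indep),
                    ("anti-correlated", anti)]:
        print("%-16s member error %.3f   committee error %.3f"
              % (name, e.mean(), committee_error(e)))

With these settings the coincident committee errs at the members' own rate, the independent committee roughly halves it, and the anti-correlated committee, whose members never err on the same item, drives the committee error to zero, which is one way to see why below-chance coincidence of errors is preferable to mere independence.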
From georgiou at wiley.csusb.edu Mon Jul 17 14:07:35 1995 From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu) Date: Mon, 17 Jul 1995 11:07:35 -0700 Subject: LFP: First Int'l Conf. on Computational Intelligence and Neurosciences Message-ID: <199507171807.AA12508@wiley.csusb.edu> Please note the July 24, 1995, deadline. Papers are also accepted in TeX/LaTeX or postscript via email. For the full text of the call for papers please see: ftp://www.csci.csusb.edu/georgiou/ICCIN-95 and also ftp://www.csci.csusb.edu/georgiou/JCIS-95 ----------------------------------------------------------------------- Last Call for Papers FIRST INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCES September 28 to October 1, 1995. ``Shell Island'' Hotels of Wrightsville Beach, North Carolina, USA. Plenary Speakers include: James Anderson (Brown University) Subhash Kak (Louisiana State University) Haluk Ogmen (University of Houston) Ed Page (University of South Carolina) Jeffrey Sutton (Harvard University) L.E.H. Trainor (University of Toronto) Gregory H. Wakefield (University of Michigan) Summary Deadline: July 24, 1995 Decision & Notification: August 5, 1995 Send summaries to: George M. Georgiou Computer Science Department California State University San Bernardino, CA 92407 georgiou at wiley.csusb.edu Papers will be accepted based on summaries. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. From R.Poli at cs.bham.ac.uk Mon Jul 17 18:28:44 1995 From: R.Poli at cs.bham.ac.uk (R.Poli@cs.bham.ac.uk) Date: Mon, 17 Jul 95 18:28:44 BST Subject: PhD Studentships Message-ID: <7327.9507171728@sonic.cs.bham.ac.uk> Dear Colleagues, Could you please circulate the following advertisement for PhD studentships? Thank you very much, Riccardo Poli Dr. Riccardo Poli E-mail: R.Poli at cs.bham.ac.uk School of Computer Science Telephone: +44-121-414-3739 The University of Birmingham Fax: +44-121-414-4281 Edgbaston, Birmingham B15 2TT, UK ---------------------------------------------------------------------- The University of Birmingham School of Computer Science Research Studentships in ~~~~~~~~ ~~~~~~~~~~~~ EMERGENT AND EVOLUTIONARY BEHAVIOUR, INTELLIGENCE, AND COMPUTATION (EEBIC) Applications are invited for a number of Studentships for full-time PhD research in the School of Computer Science to carry out research within the recently founded EEBIC group.
The group's research interests include: evolutionary computation (e.g. genetic algorithms and genetic programming), emergent behaviour, emergent intelligence (e.g. emergent communication), emergent computation and artificial life and their practical applications in hard engineering problems. The members of the group, at Birmingham and elsewhere, are active researchers in Artificial Intelligence, Engineering or Psychology with a variety of different backgrounds including Biology, Computer Science, Engineering, Psychology and Philosophy. In addition to EEBIC, the research experience of the members of the group includes computer vision, neural nets, signal processing, intelligent autonomous agents, hybrid inference systems, computer emotions, logic and many others. The group interacts very closely with the Cognition and Affect group led by Aaron Sloman who is a member of both groups. (For more information see URLs: ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect and http://www.cs.bham.ac.uk/~axs .) The successful applicants will join the group's effort to explore EEBIC in many interesting directions (from engineering to psychology, from new practical applications to new theoretical frameworks). They will have constant interaction and collaboration with the other members of the group. In addition to the usual requirements of possessing a good honours degree (equivalent to a first or upper second class degree in a UK university) and being EU residents, the successful candidates will need to be particularly open minded to the cross-fertilisation in the group deriving from the different backgrounds and experience of the members. Additional information about how to apply and about the School is available via WWW from URL: http://www.cs.bham.ac.uk Informal enquiries about the EEBIC group can be directed to Riccardo Poli Phone: +44-121-414-3739 Fax: +44-121-414-4281 Email: R.Poli at cs.bham.ac.uk Enquiries concerning the Cognition and Affect group may be sent to Aaron Sloman Phone: +44-121-414-4775 Fax: +44-121-414-4281 Email: A.Sloman at cs.bham.ac.uk For any other queries contact our research students' admission tutor: Dr Peter Hancox Email: P.J.Hancox at cs.bham.ac.uk From eric at research.nj.nec.com Mon Jul 17 16:23:22 1995 From: eric at research.nj.nec.com (Eric B. Baum) Date: Mon, 17 Jul 1995 16:23:22 -0400 Subject: Job Announcement Message-ID: <199507172023.QAA00552@yin> Programmer Wanted. Prerequisites: Experience in getting large programs to work. Some mathematical sophistication. E.g. at least equivalent of a good undergraduate degree in math, physics, theoretical computer science or related field. Salary: Depends on experience. Job: Implementing various novel algorithms. The previous holder of this position (Charles Garrett) implemented our new Bayesian approach to games, with striking success. We are now engaged in an effort to produce a world championship chess program based on these methods and several new ideas regarding learning. The chess program is being written in Modula 3. Experience in Modula 3 is useful but not essential so long as you are willing to learn it. Other projects may include TD learning, GA's, etc. To access papers on our approach to games, and get some idea of general nature of other projects, (e.g. a paper on GA's Garrett worked on) see my home page http://www.neci.nj.nec.com:80/homepages/eric/eric.html A paper on classifier-like learning systems with Garrett will appear there RSN (but don't wait to apply). 
The successful applicant will (a)have experience getting *large* programs to *work*, (b) be able to understand the papers on my home page and convert them to computer experiments. These projects are at the leading edge of basic research in algorithms/cognition/learning, so expect the work to be both interesting and challenging. Term-contract position. To apply please send cv, cover letter and list of references to: eric at research.nj.nec.com .ps or plain text please! NOTE- EMAIL ONLY. Hardcopy, e.g. US mail or Fedex etc, will not be opened. Equal Opportunity Employer M/F/D/V ------------------------------------- Eric Baum NEC Research Institute, 4 Independence Way, Princeton NJ 08540 PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric at research.nj.nec.com From read at bohr.neusc.bcm.tmc.edu Mon Jul 17 18:17:14 1995 From: read at bohr.neusc.bcm.tmc.edu (P. Read Montague) Date: Mon, 17 Jul 1995 17:17:14 -0500 Subject: Postdoctoral position Message-ID: <9507171717.ZM1627@bohr.bcm.tmc.edu> POSTDOCTORAL POSITION IN THEORETICAL NEUROSCIENCE A postdoctoral fellowship in theoretical neuroscience is available through the newly formed Center for Theoretical Neuroscience at Baylor College of Medicine. The position will focus on theoretical problems, however, all potential projects will be closely allied with ongoing experiments in the laboratories of Drs John Maunsell and Nikos Logothetis in the Division of Neuroscience at Baylor. Current interests include the role of attention in visual perception, the neural basis for decision-making, and the neural basis for object recognition. The successful candidate will have a strong knowledge in basic neurobiology combined with quantitative background in physics, computing, or engineering. Fellows will receive stipends commensurate with their background and qualifications. Send curriculum vitae, research interests, and the names of three references to: Dr. P. Read Montague Division of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030 email: read at bohr.bcm.tmc.edu -- P. Read Montague Division of Neuroscience Baylor College of Medicine 1 Baylor Plaza, Houston, TX 77030 read at bohr.bcm.tmc.edu From read at bohr.neusc.bcm.tmc.edu Tue Jul 18 11:50:21 1995 From: read at bohr.neusc.bcm.tmc.edu (P. Read Montague) Date: Tue, 18 Jul 1995 10:50:21 -0500 Subject: Posdoctoral Position Message-ID: <9507181050.ZM2786@bohr.bcm.tmc.edu> POSTDOCTORAL POSITION IN THEORETICAL NEUROSCIENCE A postdoctoral fellowship in theoretical neuroscience is available through the newly formed Center for Theoretical Neuroscience at Baylor College of Medicine. The position will focus on theoretical problems, however, all potential projects will be closely allied with ongoing experiments in the laboratories of Drs John Maunsell and Nikos Logothetis in the Division of Neuroscience at Baylor. Current interests include the role of attention in visual perception, the neural basis for decision-making, and the neural basis for object recognition. The successful candidate will have a strong knowledge in basic neurobiology combined with quantitative background in physics, computing, or engineering. Fellows will receive stipends commensurate with their background and qualifications. Send curriculum vitae, research interests, and the names of three references to: -- P. 
Read Montague Division of Neuroscience Baylor College of Medicine 1 Baylor Plaza, Houston, TX 77030 read at bohr.bcm.tmc.edu From koiran at ICSI.Berkeley.EDU Tue Jul 18 20:22:10 1995 From: koiran at ICSI.Berkeley.EDU (Pascal Koiran) Date: Tue, 18 Jul 1995 17:22:10 -0700 Subject: "Orthogonality" of the generalizers being combined Message-ID: In my previous message on combining generalizers, Manfred Warmuth should have been added to the list of "expert advice" researchers. I apologize for that omission. (Note that I do not claim that the list is complete now!) Pascal Koiran. From giles at research.nj.nec.com Tue Jul 18 18:27:57 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Tue, 18 Jul 1995 18:27:57 -0400 Subject: TR announcement - long-term dependencies Message-ID: <199507182227.SAA08663@telluride> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ LEARNING LONG-TERM DEPENDENCIES IS NOT AS DIFFICULT WITH NARX RECURRENT NEURAL NETWORKS Technical Report UMIACS-TR-95-78 and CS-TR-3500, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742 Tsungnan Lin{1,2}, Bill G. Horne{1}, Peter Tino{1,3}, C. Lee Giles{1,4} {1}NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 {2}Department of Electrical Engineering, Princeton University, Princeton, NJ 08540 {3}Dept. of Computer Science and Engineering, Slovak Technical University, Ilkovicova 3, 812 19 Bratislava, Slovakia {4}UMIACS, University of Maryland, College Park, MD 20742 ABSTRACT It has recently been shown that gradient descent learning algorithms for recurrent neural networks can perform poorly on tasks that involve long-term dependencies, i.e. those problems for which the desired output depends on inputs presented at times far in the past. In this paper we explore the long-term dependencies problem for a class of architectures called NARX recurrent neural networks, which have powerful representational capabilities. We have previously reported that gradient descent learning is more effective in NARX networks than in recurrent neural network architectures that have ``hidden states'' on problems including grammatical inference and nonlinear system identification. Typically, the network converges much faster and generalizes better than other networks. The results in this paper are an attempt to explain this phenomenon. We present some experimental results which show that NARX networks can often retain information for two to three times as long as conventional recurrent neural networks. We show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on long-term dependency problems. We also describe in detail some of the assumptions regarding what it means to latch information robustly and suggest possible ways to loosen these assumptions. ---------------------------------------------------------------------------------- ---------------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3500.long-term.dependencies.narx.ps.Z ------------------------------------------------------------------------------------ -- C.
Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From maass at igi.tu-graz.ac.at Tue Jul 18 18:36:21 1995 From: maass at igi.tu-graz.ac.at (Wolfgang Maass) Date: Wed, 19 Jul 95 00:36:21 +0200 Subject: 2 papers on spiking neurons in neuroprose Message-ID: <199507182236.AA12291@figids03> First paper: ************ FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.spiking-details.ps.Z The file maass.spiking-details.ps.Z is now available for copying from the Neuroprose repository. This is a 41-page long paper. Hardcopies are not available. LOWER BOUNDS FOR THE COMPUTATIONAL POWER OF NETWORKS OF SPIKING NEURONS by Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Abstract: We explore the computational power of formal models for networks of spiking neurons (often referred to as "integrate-and-fire neurons"). These neural net models are closer related to computations in biological neural systems than the more traditional models, since they allow an encoding of information in the timing of single spikes (not just in firing rates). Our formal model is closely related to the "spike-response model" that was previously introduced by Gerstner and van Hemmen. It turns out that the structure of computations in models for networks of spiking neurons is quite different from that of computations in analog (sigmoidal) neural nets. In particular it is shown in our paper in a rigorous way that simple operations on phase-differences between spike-trains provide a very powerful computational tool, that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We also show in this paper that rather weak assumptions about the shape of response-and threshold-functions of spiking neurons are sufficient in order to employ them for such computations. An extended abstract of this paper had already been posted in November 1994 (it appears in the Proc. of NIPS 94). In the meantime many have asked me for details of the constructions, and hence I am now also posting in neuroprose this detailed version (which appears in Neural Computation). A companion paper with detailed proofs for the upper bounds, will become available in the fall. ---------------------------------------------------------------------- Second paper: ************* FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/maass.shape.ps.Z The file maass.shape.ps.Z is now available for copying from the Neuroprose repository. This is a 6-page long paper. Hardcopies are not available. ON THE RELEVANCE OF THE SHAPE OF POSTSYNAPTIC POTENTIALS FOR THE COMPUTATIONAL POWER OF SPIKING NEURONS by Wolfgang Maass and Berthold Ruf Institute for Theoretical Computer Science Technische Universitaet Graz A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at bruf at igi.tu-graz.ac.at Abstract: Recently one has started to explore silicon models for networks of spiking neurons, where one employs rectangular (i.e. piecewise constant) pulses instead of the "smooth" excitatory postsynaptic potentials (EPSP's) that are employed by biological neurons. 
We show in this paper that models of spiking neurons that employ rectangular pulses (EPSP's) have substantial computational power, and we give a precise characterization of their computational power in terms of a common benchmark model from computer science (random access machine). This characterization allows us to prove the following somewhat surprising result: Models of networks of spiking neurons with rectangular pulses are from the computational point of view STRICTLY WEAKER than models with "smooth" EPSP's of the type observed in biological neurons. ************ How to obtain a copy of the first paper ************* Via Anonymous FTP: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose ftp> binary ftp> get maass.spiking-details.ps.Z ftp> quit unix> uncompress maass.spiking-details.ps.Z unix> lpr maass.spiking-details.ps (or what you normally do to print PostScript) For the second paper proceed analogously (but with filename maass.shape.ps.Z). From KOKINOV at BGEARN.BITNET Tue Jul 18 16:59:14 1995 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Tue, 18 Jul 95 16:59:14 BG Subject: 10 scholarships in Cognitive Science Message-ID: 10 scholarships are available to successful candidates for the Graduate Program in Cognitive Science at NBU for candidates from Eastern and Central Europe. The scholarships have been provided by the Soros Foundation. NEW BULGARIAN UNIVERSITY Department of Cognitive Science Admission to the Graduate Program in Cognitive Science is open till July 30. It offers the following degrees: Post-Graduate Diploma, M.Sc., Ph.D. FEATURES Teaching in English both in the regular courses at NBU and in the intensive courses at the Annual International Summer Schools. Strong interdisciplinary program covering Psychology, Artificial Intelligence, Neurosciences, Linguistics, Philosophy, Mathematics, Methods. Theoretical and experimental research in integration of the symbolic and connectionist approaches, emergent hybrid cognitive architectures, models of memory and reasoning, analogy, vision, imagery, agnosia, language and speech processing, aphasia. Advisors: at least two advisors with different backgrounds, possibly one external international advisor. International dissertation committee. INTERNATIONAL ADVISORY BOARD Elizabeth Bates (UCSD, USA), Amedeo Cappelli (CNR, Italy), Cristiano Castelfranchi (CNR, Italy), Daniel Dennett (Tufts University, USA), Charles De Weert (University of Nijmegen, Holland), Christian Freksa (Hamburg University, Germany), Dedre Gentner (Northwestern University, USA), Christopher Habel (Hamburg University, Germany), Douglas Hofstadter (Indiana University, USA), Joachim Hohnsbein (University of Dortmund, Germany), Keith Holyoak (UCLA, USA), Mark Keane (Trinity College, Ireland), Alan Lesgold (University of Pittsburgh, USA), Willem Levelt (Max-Planck Institute of Psycholinguistics, Holland), Ennio De Renzi (University of Modena, Italy), David Rumelhart (Stanford University, USA), Richard Shiffrin (Indiana University, USA), Paul Smolensky (University of Colorado, USA), Chris Thornton (University of Sussex, England), Carlo Umilta' (University of Padova, Italy) ADMISSION REQUIREMENTS B.Sc. degree in psychology, computer science, linguistics, philosophy, neurosciences, or related fields. Good command of English. Address: Cognitive Science Department, New Bulgarian University, 21 Montevideo Str.
Sofia 1635, Bulgaria, tel.: (+3592) 55-80-65 fax: (+3592) 54-08-02 e-mail: cogs at adm.nbu.bg or kokinov at bgearn.acad.bg From phkywong at uxmail.ust.hk Wed Jul 19 03:44:44 1995 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Wed, 19 Jul 1995 15:44:44 +0800 Subject: Paper on Neural Dynamic Routing Available Message-ID: <95Jul19.154446+0800_hkt.18930-1+4@uxmail.ust.hk> FTP-host: physics.ust.hk FTP-file: pub/kymwong/rout.ps.gz The following paper, presented at IWANNT*95, is now available via anonymous FTP. (8 pages long) ============================================================================ Decentralized Neural Dynamic Routing in Circuit-Switched Networks W. K. Felix Lor and K. Y. Michael Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phfelix at usthk.ust.hk, phkywong at usthk.ust.hk ABSTRACT We use a Simplex centralized algorithm to dynamically distribute telephonic traffic among alternate routes in circuit-switched networks according to the fluctuating number of free circuits and the evolving call attempts. It generates examples for training localized Neural controllers. Simulations show that the decentralized Neural approach has a comparable performance in blocking probability with Maximum Free Circuit (MFC) and gives a surpassing performance in crankback. ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> binary ftp> get rout.ps.gz ftp> quit unix> gunzip rout.ps.gz unix> lpr rout.ps From uzimmer at informatik.uni-kl.de Wed Jul 19 12:47:03 1995 From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer) Date: Wed, 19 Jul 95 17:47:03 +0100 Subject: Paper available on "Minimal Qualitative Topologic World Models for Mobile Robots" Message-ID: <950719.174703.269@informatik.uni-kl.de> Paper available via WWW / FTP: keywords: mobile robots, exploration, world modelling, self-localization, artificial neural networks ------------------------------------------------------------------ Minimal Qualitative Topologic World Models for Mobile Robots ------------------------------------------------------------------ Uwe R. Zimmer (submitted for publication) World models for mobile robots, as introduced in many projects, are mostly redundant regarding similar situations detected in different places. The present paper proposes a method for dynamic generation of a minimal world model based on these redundancies. The technique is an extension of the qualitative topologic world modelling methods. As a central aspect, the reliability regarding error-tolerance and stability will be emphasized. The proposed technique demands very low constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard realtime constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot "ALICE".
(5 pages - 928 KB) for the WWW-link: ------------------------------------------------------------------ http://ag-vp-www.informatik.uni-kl.de/Projekte/ALICE/ abs.Minimal.html ------------------------------------------------------------------ for the homepage of the author (including more reports): ------------------------------------------------------------------ http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/ ------------------------------------------------------------------ or for the ftp-server hosting the file: ------------------------------------------------------------------ ftp://ag-vp-ftp.informatik.uni-kl.de/Public/Neural_Networks/ Reports/Zimmer.Minimal.ps.Z ------------------------------------------------------------------ ----------------------------------------------------- ----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | 67663 Kaiserslautern - Germany | ------------------------------.--------------------------------. Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | ------------------------------.--------------------------------. http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/ | From arbib at pollux.usc.edu Wed Jul 19 12:48:14 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Wed, 19 Jul 1995 09:48:14 -0700 Subject: Positions Available: VISUALIZATION FOR BRAIN RESEARCH Message-ID: <199507191648.JAA28988@pollux.usc.edu> ABOUT THE USC BRAIN PROJECT Professors Michael Arbib (Director), Michel Baudry, Theodore Berger, Peter Danzig, Shahram Ghandeharizadeh, Scott Grafton, Dennis McLeod, Thomas McNeill, Larry Swanson, and Richard Thompson are about to start the second year of a major grant from the Human Brain Project (a consortium of federal agencies led by the National Institute of Mental Health) for a 5 year project, "Neural Plasticity: Data and Computational Structures" to be conducted at the University of Southern California. The Project will combine research on databases with the development of tools for database construction and data recovery from multiple databases, simulation tools, and visualization tools for both rat neuroanatomy and human brain imaging. These tools will be used to construct databases for research at USC and elsewhere on mechanisms of neural plasticity in basal ganglia, cerebellum, and hippocampus. The grant will also support a core of neuroscience research linked to several ongoing research programs to explore how experiments can be enhanced when coupled to databases enriched with powerful tools for modeling and visualization. The project is a major expression of USC's approach to the study of the brain which locates neuroscience in the context of a broad interdisciplinary program in Neural, Informational, and Behavioral Sciences (NIBS). The status of our work may be viewed on WWW at http://www-hbp.usc.edu:8376/HBP/Home.html ABOUT THE POSITIONS The grant and related funding will allow us to hire two computer professionals to help us develop visualization tools for the USC Brain Project. VISUALIZATION PROGRAMMER: Three years experience programming and developing graphical software. UNIX, C++, DBMS experience. Internet protocols also desirable. Ability link visualization software with object based data base and simulation tools. Background in neuroscience is not required but proven communication skills and ability to analyze scientific data are valuable. IMAGE ANALYSIS DEVELOPER: Three years experience programming and developing graphical software. UNIX, C++, DBMS experience. Internet protocols also desirable. 
Emphasis on streamlining existing image analysis code and developing new algorithms for warping 3D data sets. Additional ability to manage the growth of a large archive of MRI and functional image data sets is valuable. Send CV, references, and letter addressing above qualifications to Paulina Tagle, Center for Neural Engineering, USC, Los Angeles, CA 90089-2520; Fax (213) 740-5687; paulina at pollux.usc.edu. USC is an equal opportunity employer. From pihong at merlot.cse.ogi.edu Wed Jul 19 16:00:42 1995 From: pihong at merlot.cse.ogi.edu (Hong Pi) Date: Wed, 19 Jul 95 13:00:42 -0700 Subject: Neural Network Course (Announcement) Message-ID: <9507192000.AA08786@merlot.cse.ogi.edu> Oregon Graduate Institute of Science & Technology, Office of Continuing Education, offers the short course: NEURAL NETWORKS: ALGORITHMS AND APPLICATIONS September 25-29, 1995, at the OGI campus near Portland, Oregon. Course Organizer: John E. Moody Lead Instructor: Hong Pi With Lectures By: Todd K. Leen John E. Moody Thorsteinn S. Rognvaldsson Eric A. Wan Artificial neural networks (ANN) have emerged as a new information processing technique and an effective computational model for solving pattern recognition and completion, feature extraction, optimization, and function approximation problems. This course introduces participants to the neural network paradigms and their applications in pattern classification; system identification; signal processing and image analysis; control engineering; diagnosis; time series prediction; and financial analysis and trading. An introduction to fuzzy logic and fuzzy control systems is also given. Designing a neural network application involves steps from data preprocessing to network tuning and selection. This course, with many examples, application demos and hands-on lab practice, will familiarize the participants with the techniques necessary for building successful applications. About 50 percent of the class time is assigned to lab sessions. The simulations will be based on Matlab, the Matlab Neural Net Toolbox, and other software running on Windows-NT workstations. Prerequisites: Linear algebra and calculus. Previous experience with using Matlab is helpful, but not required. Who will benefit: Technical professionals, business analysts, financial market practitioners, and other individuals who wish to gain a basic understanding of the theory and algorithms of neural computation and/or are interested in applying ANN techniques to real-world, data-driven modeling problems. Course Objectives: After completing the course, students will: - Understand the basic neural networks paradigms - Be familiar with the range of ANN applications - Have a good understanding of the techniques for designing successful applications - Gain hands-on experience with ANN modeling. Course Outline (8:30am - 5:00pm September 25 - 28, and 8:30am - 12:30am September 29): Neural Networks: Biological and Artificial Biological inspirations. Basic models of a neuron. Types of architectures and learning paradigms. Simple Perceptrons and Adalines Decision surfaces. Linear separability. Perceptron learning rules. Linear units. Gradient descent learning. Multi-Layer Feed-Forward Networks I Multi-Layer perceptrons. Back-propagation learning. Generalization. Early Stopping via validation. Momentum and adaptive learning rate. Examples and applications. Multi-Layer Feed-Forward Networks II Newton's method. Conjugate gradient. Levenburg-Marquardt. Radial basis function networks. Projection pursuit regression. 
Neural Networks for Pattern Recognition and Classification Bayes decision theory. The Bayes risk. Non-neural and neural methods for classification. Neural networks as estimators of the posterior probability. Methods for improving the classification performance. Benchmark tests of neural networks vs. other methods. Some applications. Improving the Generalization Performance Model bias and model variance. Weight decay. Regularizers. Optimal brain surgeon. Learning from hints. Sensitivity analysis. Input variable selection. The delta-test. Time Series Prediction: Classical and Nonlinear Approaches Linear time series models. Simple nonlinear models. Recurrent network models and training algorithms. Case studies: sunspots, economic forecasting. Self-Organized Networks and Unsupervised Learning K-means clustering. Kohonen feature maps. Learning vector quantization. Adaptive principal components analysis. Neural Network for Adaptive Control What is control. Heuristic, open loop, and inverse control. Feedback algorithms for control. Neural network feedback control. Reinforcement learning. Survey of Neural Network Applications in Financial Markets Bond and stock valuation. Currency rate forecasting. Trading systems. Commodity price forecasting. Risk management. Option pricing. Fuzzy Systems Fuzzy logic. Fuzzy control systems. Adaptive fuzzy and neural-fuzzy. About the Instructors Todd K. Leen is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. He received his Ph.D. in theoretical Physics from the University of Wisconsin in 1982. From 1982-1987 he worked at IBM Corporation, and then pursued research in mathematical biology at Good Samaritan Hospital's Neurological Sciences Institute. He joined OGI in 1989. Dr. Leen's current research interests include neural learning, algorithms and architectures, stochastic optimization, model constraints and pruning, and neural and non-neural approaches to data representation and coding. He is particularly interested in fast, local modeling approaches, and applications to image and speech processing. Dr. Leen served as theory program chair for the 1993 Neural Information Processing Systems (NIPS) conference, and workshops chair for the 1994 NIPS conference. John E. Moody is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. His current research focuses on neural network learning theory and algorithms in it's many manifestations. He is particularly interested in statistical learning theory, the dynamics of learning, and learning in dynamical contexts. Key application areas of his work are adaptive signal processing, adaptive control, time series analysis, forecasting, economics and finance. Moody has authored over 35 scientific papers, more than 25 of which concern the theory, algorithms, and applications of neural networks. Prior to joining the Oregon Graduate Institute, Moody was a member of the Computer Science and Neuroscience faculties at Yale University. Moody received his Ph.D. and M.A. degrees in Theoretical Physics from Princeton University, and graduated Summa Cum Laude with a B.A. in Physics from the University of Chicago. Hong Pi is a senior research associate at Oregon Graduate Institute. He received his Ph.D. in theoretical physics from University of Wisconsin in 1989. Prior to joining OGI in 1994 he had been a postdoctoral fellow and research scientist in Lund University, Sweden. 
His research interests include nonlinear modeling, neural network algorithms and applications. Thorsteinn S. Rognvaldsson received the Ph.D. degree in theoretical physics from Lund University, Sweden, in 1994. His research interests are Neural Networks for prediction and classification. He is currently a postdoctoral research associate at Oregon Graduate Institute. Eric A. Wan, Assistant Professor of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, received his Ph.D. in electrical engineering from Stanford University in 1994. His research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, system identification, and adaptive control. He is a member of IEEE, INNS, Tau Beta Pi, Sigma Xi, and Phi Beta Kappa. For a complete course brochure contact: Linda M. Pease, Director Office of Continuing Education Oregon Graduate Institute of Science & Technology PO Box 91000 Portland, OR 97291-1000 +1-503-690-1259 +1-503-690-1686 (fax) e-mail: continuinged at admin.ogi.edu WWW home page: http://www.ogi.edu ^*^*^*^*^*^*^*^*^*^*^**^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* Linda M. Pease, Director lpease at admin.ogi.edu Office of Continuing Education Oregon Graduate Institute of Science & Technology 20000 N.W. Walker Road, Beaverton OR 97006 USA (shipping) P.O. Box 91000, Portland, OR 97291-1000 USA (mailing) +1-503-690-1259 +1-503-690-1686 fax "The future belongs to those who believe in the beauty of their dreams" -Eleanor Roosevelt ^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* From mao at almaden.ibm.com Wed Jul 19 17:07:04 1995 From: mao at almaden.ibm.com (mao@almaden.ibm.com) Date: Wed, 19 Jul 1995 14:07:04 -0700 Subject: Call for papers, IEEE Trans. NNs Special Issues on ANNs and PR Message-ID: <9507192107.AA21208@powerocr.almaden.ibm.com> CALL FOR PAPERS IEEE Transactions on Neural Networks Special Issue on Artificial Neural Networks and Pattern Recognition Tentative Publication Date: November 1996 Artificial neural networks (ANN) have now been recognized as powerful and economical tools for solving a large variety of problems in a number of scientific and engineering disciplines. The literature on neural networks is enormous consisting of a large number of books, journals and conference proceedings, and new commercial software and hardware products. A large portion of the research and development on ANNs is devoted to solving pattern recognition problems. Pattern recognition (PR) is a relatively mature discipline. Over the past 50 years, a number of different paradigms (statistical, syntactic and neural networks) have been utilized for solving a variety of recognition problems. But, real-world recognition problems are sufficiently difficult so that a single paradigm is not "optimal" for different recognition problems. As a result, successful recognition systems based either on statistical approach or neural networks exist in limited domains (e.g., handprinted character recognition and isolated word speech recognition). There is a close relationship between some of the popular ANN models and statistical pattern recognition (SPR) approaches. Quite often, these relationships are either not known to researchers or not fully exploited to build "hybrid" recognition systems. 
In spite of this close resemblance between ANN and SPR, ANNs have provided a variety of novel or supplementary approaches for pattern recognition tasks. More noticeably, ANNs have provided architectures on which many classical SPR algorithms (e.g., tree classifiers, principal component analysis, K-means clustering) can be mapped to facilitate hardware implementation. On the other hand, ANNs can derive benefit from some well-known results in SPR (e.g., Bayes decision theory, nearest neighbor rules, curse of dimensionality and Parzen window classifier). The purpose of this special issue is to increase the awareness of researchers and practitioners of pattern recognition about the common links between ANNs and SPR. This is likely to lead to more communication and cooperative work between the two research communities. Such an effort will not only avoid repetitious work but, more importantly, will stimulate and motivate individual disciplines. It is our hope that this special issue will lead to a synergistic approach which combines the strengths of ANN and SPR in order to achieve a significantly better performance for complex pattern recognition problems. Specific topics of interest include, but are not limited to: o Old and new links between ANNs and SPR (e.g., Adaptive Mixture of Expert (AME) and Hierarchical Mixture of Experts (HME) versus traditional decision trees, recurrent ANNs and time-delay ANNs versus Hidden Markov Models, generalization ability in ANNs versus curse of dimensionality). o Comparative studies of ANN and SPR approaches that lead to useful guidelines in practice (e.g., under what conditions does one approach exhibit superiority to the other?). o New ANN models for PR. -- representation/feature extraction (compression rate, invariance, robustness, and efficiency) using ANNs. -- supervised classification. -- clustering/unsupervised classification. o Combination of ANN and SPR classifiers/estimators, and features extracted using traditional PR approaches and ANNs. o Hybrid (using ANNs and traditional PR approaches) systems for solving real-world PR problems (e.g., face recognition, cursive handwriting recognition, and speech recognition). Although these topics cover a broad area of research, we encourage papers that explore the relationship between ANNs and traditional PR. Authors should relate their work with both the PR and ANN literature. Papers should also emphasize results that have been or can be potentially applied to "real world" applications; they should include evaluations through either experimentation, simulation, analysis and/or experience. Guest Editors: -------------- Professor Anil K. Jain Dr. Jianchang Mao Department of Computer Science Image and Multimedia Systems, DPE/803 A714 Wells Hall IBM Almaden Research Center Michigan State University 650 Harry Road East Lansing, MI 48824, USA San Jose, CA 95120, USA Email: jain at cps.msu.edu Email: mao at almaden.ibm.com Fax: 517-432-1061 Fax: 408-927-3497 Instructions for submitting papers: ----------------------------------- Manuscripts must not have been previously published or currently submitted for publication elsewhere. Each manuscript should be no more than 35 pages (double space, 12 point font) including all text, references, and illustrations. 
Each copy of the manuscript should include a title page containing title, authors' names and affiliations, postal and email addresses, telephone numbers and Fax numbers, a 300-word abstract and a list of keywords identifying the central issues of the manuscript's contents. Please submit six copies of your manuscript to either of the guest editors by January 5, 1996. --------------- From jagota at next1.msci.memst.edu Wed Jul 19 17:41:23 1995 From: jagota at next1.msci.memst.edu (Arun Jagota) Date: Wed, 19 Jul 1995 16:41:23 -0500 Subject: Notes for HKP on WWW Message-ID: <199507192141.AA22908@next1> Dear Connectionists: I am offering a set of handwritten transparencies, in electronic scanned-in form, for portions of the book "Introduction to the Theory of Neural Computation", Hertz, Krogh, and Palmer, on the World Wide Web as follows: http://www.msci.memphis.edu/~jagota You are welcome to make transparencies off them for instructional purposes, or print them off for some other reason. Or simply browse using Netscape, Mosaic, etc. Some features are a little awkward. First, they are all hand-written (in color) and quite unpolished. Second, each transparency is scanned into a raw image and therefore retrieving it takes some time. In all, there are about 110 of them, and they cover the following topics: ------------------------------------ ONE Introduction TWO The Hopfield Model 2.1 Associative Memories and Energy Function: 2.2 Hebb Rule and Capacity 2.3 Stochastic Networks THREE Extensions of the Hopfield Model 3.1 Continuous-Valued Units FOUR Optimization Problems 4.1 Mapping Problems to Hopfield Network 4.2 The Weighted Matching Problem 4.3 Graph Bipartitioning FIVE Simple Perceptrons 5.1 Feed-Forward Networks 5.2 Threshold Units 5.3 Perceptron Learning Rule Proof of Convergence 5.4 Continuous Units 5.5 Capacity of the Simple Perceptron SIX Multi-Layer Networks 6.1 Back-Propagation 6.2 Variations on Back-Propagation 6.3 Examples and Applications ------------------------------------ Let me also add that version 2 of the HKP exercises (very slightly refined and expanded from version 1) is available from the same ftp location as earlier: ftp ftp.cs.buffalo.edu > cd users/jagota > get HKP.ps Arun Jagota, Math Sciences, University of Memphis From tamayo at Think.COM Wed Jul 12 18:17:36 1995 From: tamayo at Think.COM (Pablo Tamayo) Date: Wed, 12 Jul 95 18:17:36 EDT Subject: Commercial Apps/Machine Learning Developer Wanted Message-ID: For more than a decade, Thinking Machines Corporation has been one of the world's leading computer companies in advancing the capability and use of high performance computing. Through its unique expertise in parallel process technology it has developed three generations of world-class hardware, software, and complementary support systems. Now we are beginning a new and exciting growth strategy to open up our technology to new hardware platforms and application environments. Thinking Machines is uniquely positioned to deliver state-of-the-art, cost effective systems on standard industry platforms. Join us as we unleash the power of parallel processing software. Thinking Machines Corporation has several openings for talented engineers, including: Commercial Applications Researcher/Developers Design, implementation and support of software for commercial applications involving machine learning (Neural Nets, CART, MBR, GA), statistics (SAS), and parallel processing of massive databases. Requirements: MS/PhD in CS/EE or equivalent. 
Expertise in C/UNIX, HPC and commercial databases (SQL, Oracle, etc.) If you are interested in this position or want information on other openings that are currently available, please send your resume to: Rick Pitman Thinking Machines Corporation 245 First Street Cambridge, MA 02142 Internet address: rickp at think.com Phone: (617) 234-3016 Fax: (617) 234-4421 An Equal Opportunity Employer The Connection Machine is a registered trademark of Thinking Machines Corporation. UNIX is a registered trademark of UNIX Systems Laboratories, Inc. From srikanth at diamond.cau.auc.edu Thu Jul 20 13:50:07 1995 From: srikanth at diamond.cau.auc.edu (srikanth@diamond.cau.auc.edu) Date: Thu, 20 Jul 95 13:50:07 EDT Subject: Call For Papers FUZZ-IEEE 1996 Message-ID: <9507201750.AA02884@diamond.cau.auc.edu> ANNOUNCEMENT AND PRELIMINARY CALL FOR PAPERS IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS New Orleans, September 8-11, 1996 PAPERS DUE: January 31, 1996 NOTIFICATION OF ACCEPTANCE: April 15, 1996 FINAL PAPERS DUE: June 15, 1996 SPECIAL SESSION / TUTORIAL PROPOSALS December 1, 1995 The program committee invites potential authors to submit papers dealing with any aspect of research and applications related to the use of fuzzy models. Papers must be written in English and received by 1/31/96. Six copies of the paper must be submitted. The paper may not exceed 7 pages including figures, tables and references. Papers should be prepared on 8.5" X 11" white paper with 1" margins on all sides, one column format in Times or similar style, 10 points or larger, and printed on one side of the paper only. Please include title, author name(s) and affiliation on top of the first page followed by an abstract. FAX submissions are NOT acceptable. Please indicate the corresponding author with their email address where possible. Please send submissions prior to the deadline to Dr. Don Kraft, Program Committee Chair Computer Science Department, Louisiana State University Baton Rouge, LA 70803-4020 email: kraft at bit.csc.lsu.edu phone: 504-388-2253 SEE ALSO: http://jasper.cau.auc.edu/fuzz_ieee1.html for more details about FUZZ-IEEE '96 and New Orleans! ------------------------------------------------------------------------------ GENERAL CHAIR PROGRAM CHAIR Fredrick E. Petry Donald Kraft Tulane University Louisiana State University New Orleans, LA Baton Rouge, LA petry at rex.cs.tulane.edu kraft at bit.csc.lsu.edu PUBLICITY CHAIRS Roy George, R. Srikanth Jim Keller Clark Atlanta University University of Missouri Atlanta, GA Columbia, MO roy at diamond.cau.auc.edu srikanth at diamond.cau.auc.edu PROCEEDINGS CHAIR Padmini Srinivasan University of SW Louisiana Lafayette, LA EXHIBITS CHAIR Valarie Cross V. Ganesh Miami University Allied Signal Oxford, OH Morristown, NJ FINANCE CHAIR Sujeet Shenoi University of Tulsa Tulsa, OK From poole at cs.ubc.ca Fri Jul 21 09:56:01 1995 From: poole at cs.ubc.ca (David Poole) Date: Fri, 21 Jul 1995 9:56:01 UTC-0700 Subject: 11th Conference on Uncertainty in AI, August 1995 Message-ID: <"6538*poole@cs.ubc.ca"@MHS> The Conferences on Uncertainty in AI are the premier forum for work on reasoning under uncertainty (including probabilistic and other formalisms for uncertainty, representations for uncertainty, such as Bayesian networks, algorithms for inference under uncertainty and learning under uncertainty). The 11th Conference on Uncertainty in AI will be held in Montreal, 18-20 August 1995 (just before IJCAI-95).
For full details including registration information and an online proceedings see the URL: http://www.cs.ubc.ca/spider/poole/UAI95.html The program for UAI-95 is as follows: UAI-95 - 11th Conference on Uncertainty in AI McGill University, Montreal, Quebec, 18-20 August 1995 =================================== Final Program =================================== ============================== Friday 18 August Overview ============================== 08:45 -- 09:00 Opening remarks 09:00 -- 10:15 Invited talk #1 (Haussler) 10:15 -- 10:30 Break 10:30 -- 12:30 Presentation session #1 12:30 -- 14:00 Lunch 14:00 -- 16:00 Poster session #1 16:00 -- 16:15 Break 16:30 -- 18:30 Presentation session #2 ================================ Saturday 19 August Overview ================================ 09:00 -- 10:30 Invited talk #2 (Jordan) + panel discussion 10:30 -- 10:45 Break 10:45 -- 12:45 Presentation session #3 12:45 -- 14:30 Lunch 14:30 -- 16:00 Invited talk #3 (Subrahmanian) 16:00 -- 16:15 Break 16:15 -- 18:15 Presentation session #4 ================================ Sunday 20 August Overview ================================ 09:00 -- 10:30 Invited talk #4 (Shafer) + panel discussion 10:30 -- 10:45 Break 10:45 -- 12:45 Presentation session #5 12:45 -- 14:30 Lunch 14:30 -- 16:00 Poster session #2 16:00 -- 16:15 Break 16:15 -- 18:15 Presentation session #6 ============================================== Invited talks ============================================== #1 Haussler "Hidden Markov and Related Statistical Models: How They Have Been Applied to Biosequence Analysis" #2 Jordan (with panel on learning) "A Few Relevant Ideas from Statistics, Neural Networks, and Statistical Mechanics" #3 Subrahamanian "Uncertainty in Deductive Databases" #4 Shafer (with panel on causality) "The Multiple Causal Interpretation of Bayes Nets" ================================================= Presentation session #1 ================================================= Wellman/Ford/Larson PATH PLANNING UNDER TIME-DEPENDENT UNCERTAINTY Horvitz/Barry DISPLAY OF INFORMATION FOR TIME-CRITICAL DECISION MAKING Pearl/Robins PROBABILISTIC EVALUATION OF SEQUENTIAL PLANS FROM CAUSAL MODELS WITH HIDDEN VARIABLES Haddawy/Doan/Goodwin EFFICIENT DECISION-THEORETIC PLANNING: TECHNIQUES AND EMPIRICAL ANALYSIS Fargier/Lang/Clouaire/Schiex A CONSTRAINT SATISFACTION FRAMEWORK FOR DECISION UNDER UNCERTAINTY ================================================= Presentation session #2 ================================================= Xu/Smets GENERATING EXPLANATIONS FOR EVIDENTIAL REASONING ========> Best student paper <=========== Meek CAUSAL INFERENCE AND CAUSAL EXPLANATION WITH BACKGROUND KNOWLEDGE ========> Best student paper <=========== Cayrac/Dubois/Prade PRACTICAL MODEL-BASED DIAGNOSIS WITH QUALITATIVE POSSIBILISTIC UNCERTAINTY Srinivas/Horvitz EXPLOITING SYSTEM HIERARCHY TO COMPUTE REPAIR PLANS IN PROBABILISTIC MODEL-BASED DIAGNOSIS Balke/Pearl COUNTERFACTUALS AND POLICY ANALYSIS IN STRUCTURAL MODELS ================================================= Presentation session #3 ================================================= Jensen CAUTIOUS PROPAGATION IN BAYESIAN NETWORKS Darwiche STRONG CONDITIONING ALGORITHMS FOR EXACT AND APPROXIMATE INFERENCE IN CAUSAL NETWORKS ========> Best student paper <=========== Draper CLUSTERING WITHOUT (THINKING ABOUT) TRIANGULATION ========> Best student paper <=========== Goldszmidt FAST BELIEF UPDATE USING ORDER-OF-MAGNITUDE PROBABILITIES ========> Best student paper <=========== Harmanec TOWARD A CHARACTERIZATION 
OF UNCERTAINTY MEASURE FOR THE DEMPSTER-SHAFER THEORY ========> Best student paper <=========== ================================================= Presentation session #4 ================================================= Dubois/Prade NUMERICAL REPRESENTATION OF ACCEPTANCE Grosof TRANSFORMING PRIORITIZED DEFAULTS AND SPECIFICITY INTO PARALLEL DEFAULTS Weydert DEFAULTS AND INFINITESIMALS DEFEASIBLE INFERENCE BY NONARCHIMEDEAN ENTROPY-MAXIMIZATION Benferhat/Saffiotti/Smets BELIEF FUNCTIONS AND DEFAULT REASONING Ngo/Haddawy/Helwig A THEORETICAL FRAMEWORK FOR CONTEXT-SENSITIVE TEMPORAL PROBABILITY MODEL CONSTRUCTION WITH APPLICATION TO PLAN PROJECTION ========================================== Presentation session #5 ========================================== Campos/Moral INDEPENDENCE CONCEPTS FOR CONVEX SETS OF PROBABILITIES Geiger/Heckerman A CHARACTERIZATION OF THE DIRICHLET DISTRIBUTION THROUGH GLOBAL AND LOCAL INDEPENDENCE Spirtes DIRECTED CYCLIC GRAPHICAL REPRESENTATIONS OF FEEDBACK MODELS Pynadath/Wellman ACCOUNTING FOR CONTEXT IN PLAN RECOGNITION, WITH APPLICATION TO TRAFFIC MONITORING Srinivas MODELING FAILURE PRIORS AND PERSISTENCE IN MODEL-BASED DIAGNOSIS ========================================== Presentation session #6 ========================================== Poole EXPLOITING THE RULE STRUCTURE FOR DECISION MAKING WITHIN THE INDEPENDENT CHOICE LOGIC Krause/Fox/Judson IS THERE A ROLE FOR QUALITATIVE RISK ASSESSMENT? Srinivas POLYNOMIAL ALGORITHM FOR COMPUTING THE OPTIMAL REPAIR STRATEGY IN A SYSTEM WITH INDEPENDENT COMPONENT FAILURES Boldrin/Sossai AN ALGEBRAIC SEMANTICS FOR POSSIBILISTIC LOGIC Hajek/Godo/Esteva FUZZY LOGIC AND PROBABILITY ============================================================== Poster session #1 ============================================================== 1. Jack Breese, Russ Blake. AUTOMATING COMPUTER BOTTLENECK DETECTION WITH BELIEF NETS 2. Wray L. Buntine CHAIN GRAPHS FOR LEARNING 3. J.L. Castro, J.M. Zurita AN APPROACH TO GET THE STRUCTURE OF A FUZZY RULE UNDER UNCERTAINTY 4. Tom Chavez, Ross Shachter DECISION FLEXIBILITY 5. Arthur L. Delcher, Adam Grove, Simon Kasif, Judea Pearl LOGARITHMIC-TIME UPDATES AND QUERIES IN PROBABILISTIC NETWORKS 6. Eric Driver, Darryl Morrell CONTINUOUS BAYESIAN NETWORKS 7. Nir Friedman, Joseph Y. Halpern PLAUSIBILITY MEASURES: A USER'S GUIDE 8. David Galles, Judea Pearl TESTING IDENTIFIABILITY OF CAUSAL EFFECTS 9. Steve Hanks, David Madigan, Jonathan Gavrin PROBABILISTIC TEMPORAL REASONING WITH ENDOGENOUS CHANGE 10. David Heckerman BAYESIAN METHODS FOR LEARNING CAUSAL NETWORKS 11. Eric Horvitz, Adrian Klein STUDIES IN FLEXIBLE LOGICAL INFERENCE: A DECISION-MAKING PERSPECTIVE 12. George John, Pat Langley ESTIMATING CONTINUOUS DISTRIBUTIONS IN BAYESIAN CLASSIFIERS 13. Uffe Kjaerulff HUGS: COMBINING EXACT INFERENCE AND GIBBS SAMPLING IN JUNCTION TREES 14. Prakash P. Shenoy A NEW PRUNING METHOD FOR SOLVING DECISION TREES AND GAME TREES 15. Peter Spirtes, Christopher Meek, Thomas Richardson CAUSAL INFERENCE IN THE PRESENCE OF LATENT VARIABLES AND SELECTION BIAS 16. Nic Wilson AN ORDER OF MAGNITUDE CALCULUS 17. S.K.M. Wong, C.J. Butz, Y. Xiang A METHOD FOR IMPLEMENTING A PROBABILISTIC MODEL AS A RELATIONAL DATABASE
18. Y. Xiang OPTIMIZATION OF INTER-SUBNET BELIEF UPDATING IN MULTIPLY SECTIONED BAYESIAN NETWORKS 19. Nevin Lianwen Zhang INFERENCE WITH CAUSAL INDEPENDENCE IN THE CPSC NETWORK =============================================== Poster Session #2 =============================================== 1. Fahiem Bacchus, Adam Grove GRAPHICAL MODELS FOR PREFERENCE AND UTILITY 2. Enrique Castillo, Remco R. Bouckaert, Jose Maria Sarabia, ERROR ESTIMATION IN APPROXIMATE BAYESIAN BELIEF NETWORK INFERENCE 3. David Maxwell Chickering A NEW CHARACTERIZATION OF EQUIVALENT BAYESIAN NETWORK STRUCTURES 4. Marek J. Druzdzel, Linda C. van der Gaag ELICITATION OF PROBABILITIES: COMBINING QUALITATIVE AND QUANTITATIVE INFORMATION 5. Kazuo J. Ezawa, Til Schuermann LEARNING SYSTEM: A RARE BINARY OUTCOME WITH MIXED DATA STRUCTURES 6. David Heckerman, Dan Geiger LEARNING BAYESIAN NETWORKS: A UNIFICATION FOR DISCRETE AND GAUSSIAN DOMAINS 7. David Heckerman, Ross Shachter A DEFINITION AND GRAPHICAL REPRESENTATION FOR CAUSALITY 8. Mark Hulme IMPROVED SAMPLING FOR DIAGNOSTIC REASONING IN BAYESIAN NETWORK 9. Ali Jenzarli INFORMATION/RELEVANCE INFLUENCE DIAGRAMS 10. Keiji Kanazawa, Daphne Koller, Stuart Russell STOCHASTIC SIMULATION ALGORITHMS FOR DYNAMIC PROBABILISTIC NETWORKS 11. Grigoris I. Karakoulas PROBABILISTIC EXPLORATION IN PLANNING WHILE LEARNING 12. Alexander V. Kozlov, Jaswinder Pal Singh APPROXIMATE PROBABILISTIC INFERENCE IN BELIEF NETWORKS 13. Michael L. Littman, Thomas L. Dean, Leslie Pack Kaelbling ON THE COMPLEXITY OF SOLVING MARKOV DECISION PROBLEMS 14. Chris Meek STRONG-COMPLETENESS AND FAITHFULNESS IN BAYES NETWORKS 15. Simon Parsons REFINING REASONING IN QUALITATIVE PROBABILISTIC NETWORKS 16. Judea Pearl ON THE TESTABILITY OF CAUSAL MODELS WITH LATENT AND INSTRUMENTAL VARIABLES 17. Gregory Provan ABSTRACTION IN BELIEF NETWORKS: THE ROLE OF INTERMEDIATE STATES IN DIAGNOSTIC REASONING 18. Marco Valtorta, Young-Gyun Kim ON THE DETECTION OF CONFLICTS IN DIAGNOSTIC BAYESIAN NETWORKS USING ABSTRACTION ----------------------------------------------------------------------------- David Poole, Office: +1 (604) 822-6254 Department of Computer Science, Fax: +1 (604) 822-5485 University of British Columbia, Email: poole at cs.ubc.ca 2366 Main Mall, URL: http://www.cs.ubc.ca/spider/poole Vancouver, B.C., Canada V6T 1Z4 FTP: ftp://ftp.cs.ubc.ca/ftp/local/poole From linster at katla.harvard.edu Thu Jul 20 15:04:35 1995 From: linster at katla.harvard.edu (Christiane Linster) Date: Thu, 20 Jul 1995 15:04:35 -0400 (EDT) Subject: postdoc grant (fwd) Message-ID: *********************************************************** ************************************************************ CNRS-INRA Laboratoire de Neurobiologie Comparee des Invertebres Postdoctoral Research Fellowship Applications are invited for a year fellowship, from non-french citizen and qualified researcher with experience in Molecular neurobiology to investigate olfaction in insects. Applications, including a CV with the names of two referees should be sent urgently (before August 31, 1995) to: Dr C. MASSON LNCI BP 23 F - 91 440 Bures-sur-Yvette Tel. 
and fax : 33 1 69 07 20 59 E-mail : masson at inra.jouy.fr From jbower at bbb.caltech.edu Fri Jul 21 19:55:48 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Fri, 21 Jul 95 16:55:48 PDT Subject: Journal of Computational Neuroscience Vol.II(2) Message-ID: <9507212355.AA22523@bbb.caltech.edu> The JOURNAL OF COMPUTATIONAL NEUROSCIENCE From neurons to behavior: a journal at the interface between experimental and theoretical neuroscience... CONTENTS, VOLUME II, ISSUE 2 Dynamic Modification of Dendritic Cable Properties and Synaptic Transmission by Voltage-Gated Potassium C.J. Wilson. Electrical Consequences of Spine Dimensions in a Model of Cortical Spiny Stellate Cell Completely Reconstructed Serial Thin Sections I. Segev, A. Friedman, E.L. White, M.J. Gutnick. The Electric Image in Weakly Electric Fish: I. A Data Based Model of Waveform Generation in the Gymnotus Carapo A. Caputi and R. Budelli. Temporal Encoding in Nervous Systems: a Rigorous Definition F. Theunissen, J.P. Miller. Editorial introducing the Bulletin Board. Bulletin; D. Glanzman ************************************** SUBSCRIPTIONS: Volume 2, 1995 (4 issues): Institutional rate: $270.00 US Individual rate: $75.00 US PLEASE CONTACT: Kluwer Academic Publishers Order Department P.O. Box 358, Accord Station Hingham, MA 02018-0358 USA Phone: (617) 871-6600, Fax: (617) 871-6528 E-mail: kluwer at wkap.com Please refer to the KLUWER ACADEMIC PUBLISHERS INFORMATION SERVER at GOPHER.WKAP.NL for Call for Papers, Aims and Scope and additional information. *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic addresses for: laboratory http://www.bbb.caltech.edu/bowerlab GENESIS: http://www.bbb.caltech.edu/GENESIS science education reform http://www.caltech.edu/~capsi From tirthank at titanic.mpce.mq.edu.au Mon Jul 24 05:39:18 1995 From: tirthank at titanic.mpce.mq.edu.au (Tirthankar Raychaudhuri) Date: Mon, 24 Jul 1995 19:39:18 +1000 (EST) Subject: Change to URL of Combining Estimators Page Message-ID: <9507240939.AA07999@titanic.mpce.mq.edu.au> A non-text attachment was scrubbed... Name: not available Type: text Size: 748 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/7369cf4c/attachment-0001.ksh From gerda at ai.univie.ac.at Mon Jul 24 08:59:40 1995 From: gerda at ai.univie.ac.at (Gerda Helscher) Date: Mon, 24 Jul 1995 13:59:40 +0100 Subject: Symposium "ANN and Adaptive Systems" Message-ID: <199507241259.AA05709@kairo.ai.univie.ac.at> CALL FOR PAPERS for the symposium ====================================================== Artificial Neural Networks and Adaptive Systems ====================================================== chairs: Guenter Palm, Germany, and Georg Dorffner, Austria as part of the Thirteenth European Meeting on Cybernetics and Systems Research April 9-12, 1996 University of Vienna, Vienna, Austria For this symposium, papers on any theoretical or practical aspect of artificial neural networks are invited. Special focus, however, will be put on the issue of adaptivity both in practical engineering applications and in applications of neural networks to the modeling of human behavior. By adaptivity we mean the capability of a neural network to adjust itself to changing environments.
We make a careful distinction between "learning" to devise weight matrices for a neural network before it is applied (and usually left unchanged) on one hand, and true adaptivity of a given neural network to constantly changing conditions on the other hand - i.e. incremental learning in non-stationary environments. The following is a - by no means exhaustive - list of possible topics in this realm: - online or incremental learning of neural network applications facing changing data distributions - transfer of neural network solutions to related but different approaches - application of neural networks in adaptive autonomous systems - "phylogenetic" vs. "ontogenetic" adaptivity (e.g. adaptivity of connectivity and architecture vs. adaptivity of coupling parameters or weights) - short term vs. long term adaptation - adaptive reinforcement learning - adaptive pattern recognition - localized vs. distributed approximation (in terms of overlap of decision regions) and adaptivity Preference will be given to contributions that address such issues of adaptivity, but - as mentioned initially - other original work on neural networks is also welcome. Deadline for submissions (10 single-spaced A4 pages, maximum 43 lines, max. line length 160 mm, 12 point) is =============================================== October 12, 1995 =============================================== Papers should be sent to: I. Ghobrial-Willmann or G. Helscher Austrian Society for Cybernetic Studies A-1010 Vienna 1, Schottengasse 3 (Austria) Phone: +43-1-53532810 Fax: +43-1-5320652 E-mail: sec at ai.univie.ac.at For more information on the whole EMCSR conference, see the Web-page http://www.ai.univie.ac.at/emcsr/ or contact the above address. Hope to see you in Vienna! From M.Q.Brown at ecs.soton.ac.uk Mon Jul 24 11:49:17 1995 From: M.Q.Brown at ecs.soton.ac.uk (Martin Brown) Date: Mon, 24 Jul 1995 15:49:17 +0000 Subject: 2 postdoctoral positions available Message-ID: <6371.9507241449@ra.ecs.soton.ac.uk> UNIVERSITY OF SOUTHAMPTON DEPARTMENT OF ELECTRONICS AND COMPUTER SCIENCE RESEARCH FELLOWS Two postdoctoral positions are currently available on an EPSRC grant entitled Neurofuzzy Construction Algorithms and their Application in Non-Stationary Environments. Links to the groups, personnel and industrial companies can be obtained from the project's homepage at: http://www-isis.ecs.soton.ac.uk/research/projects/osiris.html Two postdoctoral researchers are required to investigate the development and application of advanced network construction algorithms and training rules for neurofuzzy systems operating in a time-varying environment. The candidates should possess skills in applied mathematics and computer science and have experience in such areas as numerical analysis, Visual C++ programming, neural/fuzzy learning theory, dynamical systems and optimisation theory. This research will be undertaken in association with Neural Computer Sciences http://www.demon.co.uk/skylake/ who produce an object oriented, 32 bit MS Windows-based neural networks package called NeuFrame; benchmarking data sets will be collected from GEC and Lucas. In addition, Eurotherm Controls are supplying tools to investigate the possibility of developing embedded devices. Post One - One researcher is required for 3 years to investigate and further develop the neurofuzzy construction algorithms that have been proposed by the ISIS group. They will be based at Southampton under the supervision of Martin Brown and Chris Harris.
The neural+fuzzy approach allows vague, expert knowledge to be combined with numerical data to produce systems that make the best use of both information sources. However, for ill-defined, high-dimensional systems it would be useful to configure a network's structural parameters directly from the data. Recent research has shown that B-spline-based neurofuzzy systems are suitable for use in such algorithms due to their direct fuzzy interpretation, numerical conditioning and ease of implementation, and by considering an ANalysis Of VAriance (ANOVA) representation, the B-spline neurofuzzy networks can be shown to overcome the curse of dimensionality for many practical problems. A good background in numerical analysis and modelling theory (additive, neural/fuzzy) is required, and as the algorithms will be developed within a Visual C++, Microsoft Foundation Classes environment, knowledge of these products would also be useful. Informal enquiries for this post should be made to Dr Martin Brown in the ISIS research group, Department of Electronics and Computer Science, University of Southampton, UK (Tel +44 (0)1703 594984, Email: mqb at ecs.soton.ac.uk). Salary will be in the range of 15,986 - 18,985 per annum. Applicants for post one should send a full curriculum vitae (3 copies from UK applicants and 1 from overseas), including the names and addresses of three referees, to the Personnel Department (R), University of Southampton, Highfield, Southampton, SO17 1BJ, telephone number (01703 592750) by no later than 25 August 1995. Please quote reference number R/553. Post Two - A second researcher is required for 2 years (with the possibility of it being extended for an extra year) to investigate on-line learning for non-stationary data. They will be based at Brighton University under the supervision of Steve Ellacott. This work will investigate several aspects of training neurofuzzy systems on-line such as: * learning algorithms for large, redundant training sets * recurrent training rules * high-order instantaneous learning algorithms * aspects of data excitation and on-line regularisation The ideal candidate would be a mathematician or mathematically oriented engineer with a background in numerical analysis and/or dynamical systems. Familiarity with neural network algorithms would be an advantage, but is not essential. The post will involve some programming in C or C++. All enquiries and applications for post two should be made to Dr Steve Ellacott in the Department of Mathematical Sciences, University of Brighton, UK (Tel +44 (0)1273 642544, Email: s.w.ellacott at brighton.ac.uk). working for equal opportunities a centre of excellence for university research and teaching From Dimitris.Dracopoulos at trurl.brunel.ac.uk Mon Jul 24 15:32:05 1995 From: Dimitris.Dracopoulos at trurl.brunel.ac.uk (Dimitris Dracopoulos) Date: Mon, 24 Jul 1995 13:32:05 -0600 Subject: NEURAL AND EVOLUTIONARY SYSTEMS ADVANCED MSC Message-ID: <9507241332.ZM7787@trurl.brunel.ac.uk> NEURAL AND EVOLUTIONARY SYSTEMS ADVANCED MSC ============================================ The Computer Science Department at Brunel University (United Kingdom) will be running a new advanced MSc course on Neural and Evolutionary Systems from September 1995.
You may find further details at the following locations: WWW: http://http1.brunel.ac.uk:8080/depts/cs/ in the News section FTP: ftp.brunel.ac.uk CompSci/Announcements/NES-MSc.ps (PostScript version) CompSci/Announcements/NES-MSc.ascii (ASCII version) For further information including literature and an application form, please contact Pam Osborne at the address below, or for more detailed enquiries please contact Vlatka Hlupic (address given below) or me via email at: Dr Dimitris C. Dracopoulos Department of Computer Science Brunel University Telephone: +44 1895 274000 ext. 2120 London Fax: +44 1895 251686 Uxbridge E-mail: Dimitris.Dracopoulos at brunel.ac.uk Middlesex UB8 3PH United Kingdom ------------------------------------------------------------------------------- Pam Osborne Dept of Computer Science Tel: +44 (0)895 274000 Brunel University Ext: 2134 Uxbridge Fax: +44 (0)895 251686 Middlesex UB8 3PH Pam.Osborne at brunel.ac.uk ------------------------------------------------------ ------------------------------------------------------ Dr. Vlatka Hlupic Dept of Computer Science Tel: +44 (0)895 274000 Brunel University Ext: 2231 Uxbridge Fax: +44 (0)895 251686 Middlesex UB8 3PH Vlatka.Hlupic at brunel.ac.uk -- Dr Dimitris C. Dracopoulos Department of Computer Science Brunel University Telephone: +44 1895 274000 ext. 2120 London Fax: +44 1895 251686 Uxbridge E-mail: Dimitris.Dracopoulos at brunel.ac.uk Middlesex UB8 3PH United Kingdom From bogus@does.not.exist.com Tue Jul 25 08:53:07 1995 From: bogus@does.not.exist.com () Date: Tue, 25 Jul 95 13:53:07 +0100 Subject: EANN96-First Call for Papers Message-ID: <9433.9507251253@pluto.lpac.qmw.ac.uk> International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK June 24-26, 1996 First Call for Papers (ASCII version) The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (200 to 400 words) should be sent to eann96 at lpac.ac.uk by 21 January 1996, by e-mail in PostScript or ASCII format. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 21 January 1996. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited. Organising Committee A.
Bulsari (Finland) Dimitris Tsaptsinos (UK) Trevor Clarkson (UK) International program committee (to be confirmed, extended) Dorffner, Georg (Austria) Gong, Shaogang (UK) Heikkonen, Jukka (Italy) Jervis, Barrie (UK) Oja, Erkki (Finland) Liljenstrom, Hans (Sweden) Papadourakis, George (Greece) Pham, D.T (UK) Refenes, Paul (UK) Sharkey, Noel (UK) Steele, Nigel (UK) Williams, Dave (UK) For more information see the WWW Page at:http://www.lpac.ac.uk/EANN96/ From jbower at bbb.caltech.edu Tue Jul 25 12:54:43 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Tue, 25 Jul 95 09:54:43 PDT Subject: CNS*94 Conference Proceedings Message-ID: <9507251654.AA17150@bbb.caltech.edu> The Neurobiology of Computation edited by James M. Bower CALTECH, Pasadena, CA, USA The Neurobiology of Computation: The Proceedings of the Third Annual Computation and Neural Systems Conference contains the collected papers of the Conference on Computational Neuroscience, July 21--23, 1994, Monterey, California. These papers represent a cross-section of current research in computational neuroscience. While the majority of papers describe analysis and modeling efforts, other papers describe the results of new biological experiments explicitly placed in the context of computational theories and ideas. Subjects range from an analysis of subcellular processes, to single neurons, networks, behavior, and cognition. In addition, several papers describe new technical developments of use to computational neuroscientists. Contents: Introduction. Section 1: Subcellular. Section 2: Cellular. Section 3: Network. Section 4: Systems. Index. Kluwer Academic Publishers, Boston Date of publishing: July 1995 464 pp. Hardbound ISBN: 0-7923-9543-3 Prices: NLG: 300.00 USD: 180.00 GBP: 122.50 ============================================================================= ORDER FORM Author: James M. Bower Title: The Neurobiology of Computation ( ) Hardbound / ISBN: 0-7923-9543-3 NLG: 300.00 USD: 180.00 GBP: 122.50 Ref: KAPIS ( ) Payment enclosed to the amount of ___________________________ ( ) Please send invoice ( ) Please charge my credit card account: Card no.: |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| Expiry date: ______________ () Access () American Express () Mastercard () Diners Club () Eurocard () Visa Name of Card holder: ___________________________________________________ Delivery address: Title : ___________________________ Initials: _______________M/F______ First name : ______________________ Surname: ______________________________ Organization: ______________________________________________________________ Department : ______________________________________________________________ Address : ______________________________________________________________ Postal Code : ___________________ City: ____________________________________ Country : _____________________________Telephone: ______________________ Email : ______________________________________________________________ Date : _____________________ Signature: _____________________________ Our European VAT registration number is: |_|_|_|_|_|_|_|_|_|_|_|_|_|_| To be sent to: For customers in Mexico, USA, Canada Rest of the world: and Latin America: Kluwer Academic Publishers Kluwer Academic Publishers Group Order Department Order Department P.O. Box 358 P.O. Box 322 Accord Station 3300 AH Dordrecht Hingham, MA 02018-0358 The Netherlands U.S.A. 
Tel : 617 871 6600 Tel : +31 78 392392 Fax : 617 871 6528 Fax : +31 78 546474 Email : kluwer at wkap.com Email : services at wkap.nl After October 10, 1995 Tel : +31 78 6392392 Fax : +31 78 6546474 Payment will be accepted in any convertible currency. Please check the rate of exchange with your bank. Prices are subject to change without notice. All prices are exclusive of Value Added Tax (VAT). Customers in the Netherlands please add 6% VAT. Customers from other countries in the European Community: * please fill in the VAT number of your institute/company in the appropriate space on the order form; or * please add 6% VAT to the total order amount (customers from the U.K. are not charged VAT). *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic addresses for: laboratory http://www.bbb.caltech.edu/bowerlab GENESIS: http://www.bbb.caltech.edu/GENESIS science education reform http://www.caltech.edu/~capsi From baluja at GS93.SP.CS.CMU.EDU Tue Jul 25 16:33:20 1995 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Tue, 25 Jul 95 16:33:20 EDT Subject: Paper Available on Human Face Detection Message-ID: Title: Human Face Detection in Visual Scenes By: Henry Rowley, Shumeet Baluja & Takeo Kanade Abstract: We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrapping algorithm for training the networks, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to accurately represent the entire space of non-face images. The system outperforms other state-of-the-art face detection systems in terms of detection and false-positive rates. Instructions (via WWW) ---------------------------------------------- PAPER (html & postscript) is available from both of these sites: http://www.cs.cmu.edu/~baluja http://www.cs.cmu.edu/~har ON-LINE DEMO: http://www.ius.cs.cmu.edu/demos/facedemo.html QUESTIONS and COMMENTS (please mail to both): baluja at cs.cmu.edu & har at cs.cmu.edu From FRYRL at f1groups.fsd.jhuapl.edu Tue Jul 25 15:18:00 1995 From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.) Date: Tue, 25 Jul 95 15:18:00 EDT Subject: NNs and Info. Th. Message-ID: <3014B770@fsdsmtpgw.fsd.jhuapl.edu> New neuroprose entry: A paper entitled "Rational neural models based on information theory" will be presented at the Fifteenth International Workshop on MAXIMUM ENTROPY AND BAYESIAN METHODS, in Santa Fe, New Mexico on July 31 - August 4, 1995. The enclosed abstract summarizes the presentation, which describes an information-theoretic explanation of some spatial and temporal aspects of neurological information processing. Author: Robert L. Fry Affiliation: The Johns Hopkins University/Applied Physics Laboratory Laurel, MD 20723 Title: Rational neural models based on information theory Abstract Biological organisms which possess a neurological system exhibit varying degrees of what can be termed rational behavior. One can hypothesize that rational behavior and thought processes in general arise as a consequence of the intrinsic rational nature of the neurological system and its constituent neurons.
A similar statement may be made of the immunological system [1]. The concept of rational behavior can be made quantitative. In particular, one possible characterization of rational behavior is as follows: (1) A physical entity (observer) must exist which has the capacity for both measurement and the generation of outputs (participation). Outputs represent decisions on the part of the observer which will be seen to be rational. (2) The establishment of the quantities measurable by the observer is achieved through learning. Learning characterizes the change in knowledge state of an observer in response to new information and is driven by the directed divergence information measure of Kullback [2]. (3) Output decisions must be made optimally on the basis of noisy and/or missing input data. Optimality here implies that the decision-making process must abide by the standard logical consistency axioms which give rise to probability as the only logically consistent measure of degree of plausible belief. An observer using decision rules based on these axioms is said to be rational. Information theory can be used to quantify the above, leading to computational paradigms with architectures that closely resemble both the single cortical neuron and an interconnected planar field of multiple cortical neurons, all of which are functionally identical to one another. A working definition of information in a neural context must be agreed upon prior to this development, however. Such a definition can be obtained through the Laws of Form - a mathematics of observation originating with the British mathematician George Spencer-Brown [3]. [1] Francisco J. Varela, Principles of Biological Autonomy, North Holland, 1979. [2] Solomon Kullback, Information Theory and Statistics, Wiley, 1959 and Dover, 1968. [3] George Spencer-Brown, Laws of Form, E. P. Dutton, New York, 1979. The paper is available in compressed PostScript format via FTP from archive.cis.ohio-state.edu in /pub/neuroprose/fry.maxent.ps.Z using standard anonymous FTP procedures. From terry at salk.edu Tue Jul 25 16:55:28 1995 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 25 Jul 95 13:55:28 PDT Subject: Development: A Constructivist Manifesto Message-ID: <9507252055.AA26263@salk.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/quartz.const.ps.Z The file quartz.const.ps.Z is now available for copying from the Neuroprose repository. This is a 47-page paper. No hardcopies available. THE NEURAL BASIS OF COGNITIVE DEVELOPMENT: A CONSTRUCTIVIST MANIFESTO by Steven R. Quartz and Terrence J. Sejnowski The Salk Institute for Biological Studies PO Box 85800, San Diego CA 92186-5800 e-mail: steve at salk.edu submitted to: Behavioral and Brain Sciences ABSTRACT: Through considering the neural basis of cognitive development, we present a constructivist view. Its key feature is that environmentally-derived activity regulates neuronal growth as a progressive increase in the representational capacities of cortex. Learning in development becomes a dynamic interaction between the environment's informational structure and growth mechanisms, allowing the representational properties of cortex to be constructed by the problem domain confronting it. This is a uniquely powerful and general learning strategy that undermines the central assumptions of classical learnability theory.
It also minimizes the need for prespecification of cortical function, suggesting that cortical evolution is a progression to more flexible representational structures, in contrast to the popular view of cortical evolution as an increase in specialized, innate circuits. ************ How to obtain a copy of the paper ************* Via Anonymous FTP: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose ftp> binary ftp> get quartz.const.ps.Z ftp> quit unix> uncompress quartz.const.ps.Z unix> lpr quartz.const.ps (or what you normally do to print PostScript) From marco at McCulloch.Ing.UniFI.IT Tue Jul 25 07:20:22 1995 From: marco at McCulloch.Ing.UniFI.IT (Marco Gori) Date: Tue, 25 Jul 1995 13:20:22 +0200 Subject: paper announcement Message-ID: <9507251120.AA14789@McCulloch.Ing.UniFI.IT> FTP-host: spovest.ing.unifi.it FTP-file: pub/tech-reports/bank.ps.Z FTP-file: pub/tech-reports/num-pla.ps.Z The following papers are now available by anonymous ftp. They have been submitted to ICANN95 - Industrial track. In particular, the first one describes BANK, a real machine for banknote recognition, while the second one reports the results of a software tool for the recognition of number-plates in motorway environments. ================================================================ BANK: A Banknote Acceptor with Neural Kernel A. Frosini(*), M. Gori(**), and P. Priami(*) (*) Consulting Engineer (**) DSI - Univ. Firenze (ITALY) Abstract This paper gives a summary of the electronics and software modules of BANK, a banknote machine operating by means of a neural network-based recognition model. The machine perceives banknotes by means of low-cost optoelectronic devices which produce signals associated with the reflected and refracted rays of two parallel strips in the banknote. The recognition model is based on multilayer networks acting for both the classification and verification steps. ================================================================== Number-Plate Recognition in Practice: The role of Neural Networks A. Frosini, M. Gori(*), L. Pistolesi (*) DSI - Univ. Firenze (ITALY) Abstract Automatic number-plate recognition has been receiving growing attention in a number of practical problems. In this paper we show the crucial role of neural networks in implementing a software tool for the recognition of number-plates in the actual motorway environment for Italian cars. We show that proper neural network architectures can solve the problem of character recognition very effectively and, most importantly, can also offer significant confidence in the classification decision. This turns out to be of crucial importance in order to exploit effectively the hypothesize-and-verify paradigm on which the software tool relies. P.S. A demo of the software tool will be available via the internet in the next few weeks for Personal Computers.
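As an aside on the last point, the way a network's output confidence can drive a hypothesize-and-verify loop may be easier to see from a small sketch. The following Python fragment is purely illustrative and hypothetical (it is not the authors' software; the scores, class index and 0.9 threshold are invented for the example): a candidate character region is accepted only when the softmax confidence of the winning class is high enough, and is rejected otherwise so that a new segmentation hypothesis can be tried.

import numpy as np

def softmax(scores):
    # Turn raw network outputs into a probability-like confidence vector.
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def classify_with_confidence(scores, threshold=0.9):
    # Accept the winning class only if its confidence clears the threshold;
    # otherwise return None so the caller can reject this hypothesis and re-segment.
    p = softmax(np.asarray(scores, dtype=float))
    label = int(np.argmax(p))
    confidence = float(p[label])
    return (label, confidence) if confidence >= threshold else None

if __name__ == "__main__":
    raw_scores = [0.1, 4.2, 0.3, 0.05]             # invented outputs for one candidate character region
    print(classify_with_confidence(raw_scores))    # prints something like (1, 0.95)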
================================================================== From rmeir at ee.technion.ac.il Wed Jul 26 12:00:50 1995 From: rmeir at ee.technion.ac.il (Ron Meir) Date: Wed, 26 Jul 1995 14:00:50 -0200 Subject: 12th Israeli Symposium on AI, CV & NN Message-ID: <199507261600.OAA25777@ee.technion.ac.il> -------- Announcement and Call For Papers ------------ 12th Israeli Symposium on Artificial Intelligence, Computer Vision and Neural Networks Tel Aviv University, Tel Aviv, February 4-5, 1996 The purpose of the symposium is to bring together researchers and practitioners from Israel and abroad who are interested in the areas of Artificial Intelligence, Computer Vision, and Neural Networks, and to promote interaction between them. The program will include contributed as well as invited lectures and possibly some tutorials. All lectures will be given in English. Papers are solicited addressing all aspects of AI, Computer Vision and Neural Networks. Novel contributions in preliminary stages are especially encouraged, but significant work which has been presented recently will also be considered. The symposium is intended to be more informal than previous symposia. The proceedings, including summaries of the contributed and invited talks, will be organized as a technical report and distributed during the symposium. No copyright will be required. To minimize costs, we intend to organize this symposium on a university campus. Authors should submit an extended abstract of their presentation in English so that it will reach us by September 1st 1995. Submissions should be limited to four pages, including title and bibliography. Submitted contributions will be refereed by the program committee. Authors will be notified of acceptance by November 1st, 1995. A final abstract, to be included in the proceedings, is due by January 10, 1996. To receive updated information on the symposium, please send a message to Yvonne Sagi (yvonne at cs.technion.ac.il), including your name, affiliation, e-mail, fax number and phone number. Submitted extended abstracts should be sent to: Yvonne Sagi Computer Science Department Technion, Israel Institute of Technology Haifa, 32000, Israel Dan Geiger (Artificial Intelligence) e-mail: dang at cs.technion.ac.il Phone: 972-4-294265 Micha Lindenbaum (Computer Vision) e-mail: mic at cs.technion.ac.il Phone: 972-4-294331 Ron Meir (Neural Networks) e-mail: rmeir at ee.technion.ac.il Phone: 972-4-294658 From C.Campbell at bristol.ac.uk Thu Jul 27 05:25:27 1995 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Thu, 27 Jul 1995 10:25:27 +0100 (BST) Subject: PhD studentship available Message-ID: <199507270925.KAA11419@zeus.bris.ac.uk> PhD Studentship Available A PhD studentship has become available at short notice. The project involves the application of neural computing and statistical techniques to highlight and detect tumours on scans. In particular we are interested in detecting a specific type of tumour called an acoustic neuroma. The project is in collaboration with staff from the Computer Science Dept., Bristol University, with an interest in computer vision, and staff from the Dept. of Radiology, Bristol Royal Infirmary. The closing date for applications is the ** 12th August 1995 **. The studentship is available for three years with a maintenance grant of 5,000 pounds per annum and coverage of postgraduate fees at the home rate of 2,200 pounds per annum.
Suitable candidates should have a First or Upper Second Class degree in computer science or mathematics or a similar numerate discipline. Interested candidates should contact Dr. Colin Campbell, Dept. of Engineering Mathematics, Bristol University, Bristol BS8 1TR, United Kingdom. Given the close deadline it is best to contact Dr. Campbell via e-mail (C.Campbell at bris.ac.uk). Candidates should send a CV and arrange for 2 letters of reference to be despatched to the above address ASAP. From klaus at sat.t.u-tokyo.ac.jp Fri Jul 28 01:50:23 1995 From: klaus at sat.t.u-tokyo.ac.jp (Klaus Mueller) Date: Fri, 28 Jul 95 01:50:23 JST Subject: new paper on learning curves Message-ID: <9507271650.AA19714@elf.sat.t.u-tokyo.ac.jp> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/klaus.lcurve.ps.Z The following paper is now available for copying from the Neuroprose repository: klaus.lcurve.ps.Z klaus.lcurve.ps.Z (129075 bytes) 26 pages. M\"uller, K.-R., Murata, N., Finke, M., Schulten, K., Amari, S.: A Numerical Study on Learning Curves in Stochastic Multi-Layer Feed-Forward Networks The universal asymptotic scaling laws proposed by Amari et al. are studied in large-scale simulations using a CM5. Small stochastic multi-layer feed-forward networks trained with back-propagation are investigated. In the range of a large number of training patterns $t$, the asymptotic generalization error scales as $1/t$ as predicted. For a medium range $t$ a faster $1/t^2$ scaling is observed. This effect is explained by using higher order corrections of the likelihood expansion. It is shown for small $t$ that the scaling law changes drastically when the network undergoes a transition from ineffective to effective learning. (University of Tokyo Technical Report METR 03-95 and submitted) * NO HARDCOPIES * Best regards, Klaus &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Dr. Klaus-Robert M\"uller C/o Prof. Dr. S. Amari Department of Mathematical Engineering University of Tokyo 7-3-1 Hongo, Bunkyo-ku Tokyo 113, Japan mail: klaus at sat.t.u-tokyo.ac.jp Fax: +81 - 3 - 5689 5752 &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& PERMANENT ADDRESS: &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Dr. Klaus-Robert M\"uller GMD First (Gesellschaft f. Mathematik und Datenverarbeitung) Rudower Chaussee 5, 12489 Berlin Germany &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& From howse at baku.eece.unm.edu Thu Jul 27 17:59:09 1995 From: howse at baku.eece.unm.edu (El Confundido) Date: Thu, 27 Jul 1995 15:59:09 -0600 Subject: Tech Report Available Message-ID: <9507272159.AA03634@baku.eece.unm.edu> The following technical report is available by FTP: A Synthesis of Gradient and Hamiltonian Dynamics Applied to Learning in Neural Networks James W. Howse, Chaouki T. Abdallah and Gregory L. Heileman Abstract The process of model learning can be considered in two stages: model selection and parameter estimation. In this paper a technique is presented for constructing dynamical systems with desired qualitative properties. The approach is based on the fact that an n-dimensional nonlinear dynamical system can be decomposed into one gradient and (n - 1) Hamiltonian systems. Thus, the model selection stage consists of choosing the gradient and Hamiltonian portions appropriately so that a certain behavior is obtainable. To estimate the parameters, a stably convergent learning rule is presented.
This algorithm is proven to converge to the desired system trajectory for all initial conditions and system inputs. This technique can be used to design neural network models which are guaranteed to solve certain classes of nonlinear identification problems. Retrieval: FTP anonymous to: ftp.eece.unm.edu cd howse get techrep.ps.gz This is a PostScript file compressed with gzip. The paper is 28 pages long and formatted to print DOUBLE-sided. This paper has been submitted for publication. If there are any retrieval problems please let me know. I would welcome any comments or suggestions regarding the paper. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= James Howse - howse at eece.unm.edu __ __ __ __ _ _ /\ \/\ \/\ \/\ \/\ `\_/ `\ University of New Mexico \ \ \ \ \ \ `\\ \ \ \ Department of EECE, 224D \ \ \ \ \ \ , ` \ \ `\_/\ \ Albuquerque, NM 87131-1356 \ \ \_\ \ \ \`\ \ \ \_',\ \ Telephone: (505) 277-0805 \ \_____\ \_\ \_\ \_\ \ \_\ FAX: (505) 277-1413 or (505) 277-1439 \/_____/\/_/\/_/\/_/ \/_/ =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From d23d at unb.ca Thu Jul 27 18:03:09 1995 From: d23d at unb.ca (Deshpande) Date: Thu, 27 Jul 1995 19:03:09 -0300 (ADT) Subject: Some Questions ... Message-ID: Dear Members, We are currently working on a symbolic approach to low level processing of visual information. There may be many researchers in this list who might be working in this area of vision. I would like to pose certain very basic questions whose importance is often overlooked. Let me first in simple terms put forth the problem that we are working on: Is the information processing in low level vision symbolic or non-symbolic? That is, should the signal that the measurement devices capture be interpreted symbolically or, as is conventionally done, in the functional domain? And what are the implications of the two initial forms of representation as far as pattern recognition is concerned? Some of the related work can be found in [1]. Moreover, 1) What is the justification for a spatial/frequency domain decomposition of the signal (intensity-map) that is representing the objects? 2) From an information-theoretic point of view, what relevance does this decomposition have? 3) Neurophysiological evidence does show a similarity to a Gabor filtering scheme in the human visual system, but as David Marr had rightly pointed out, how does this help one to understand its specific relationship to perception? 4) Even if one assumes an ad hoc justification for the above (spatial-frequency based decomposition), how does one justify the distance function imposed on the vector space formed by these basis functions (of Gabor filters), that is, how does this distance function relate to the geometrical information of the objects that the signal is representing? One finds a lot of literature on this approach of spatial-frequency domain decomposition of the signal as a scheme for texture segmentation, but none really justifies the appropriateness of this approach. If the above is not of relevance to the majority of the members, please send your suggestions and comments directly to the following email address: d23d at unb.ca . cheers, sanjay [1] I.B. Muchnik and V.V. Mottl, "Linguistic Analysis of Experimental Curves", Proc. IEEE, vol 67, no. 5, May 1979.
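For readers less familiar with the decomposition being questioned above, the following short Python sketch (NumPy/SciPy) illustrates the kind of Gabor filter bank that spatial-frequency texture-segmentation schemes use; every parameter value here is an arbitrary illustrative choice and is not taken from any of the work cited in this posting. Each pixel's vector of filter responses is the representation whose distance function question 4 asks about.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    # Real (even) Gabor kernel: a cosine grating under an isotropic Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating direction
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_theta / wavelength)

def gabor_responses(image, wavelengths=(4.0, 8.0), n_orientations=4, size=15, sigma=4.0):
    # Convolve the intensity map with each filter; the stack of responses is the
    # spatial-frequency decomposition that texture segmentation schemes operate on.
    responses = []
    for wavelength in wavelengths:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = gabor_kernel(size, wavelength, theta, sigma)
            responses.append(convolve2d(image, kernel, mode="same", boundary="symm"))
    return np.stack(responses)                         # shape: (n_filters, rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    intensity_map = rng.standard_normal((64, 64))      # stand-in for a real intensity map
    print(gabor_responses(intensity_map).shape)        # (8, 64, 64)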
From klaus at prosun.first.gmd.de Fri Jul 28 06:17:24 1995 From: klaus at prosun.first.gmd.de (klaus@prosun.first.gmd.de) Date: Fri, 28 Jul 95 12:17:24 +0200 Subject: new paper on "Analysis of Switching Dynamical Systems" Message-ID: <9507281017.AA01180@lanke.first.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/pawelzik.switch.ps.Z FTP-file: pub/neuroprose/mueller.switch_speech.ps.Z The following 2 papers are now available for copying from the Neuroprose repository: pawelzik.switch.ps.Z, mueller.switch_speech.ps.Z pawelzik.switch.ps.Z (124459 bytes) 16 pages. Pawelzik, K., Kohlmorgen, J., M\"uller, K.-R.: Annealed Competition of Experts for a Segmentation and Classification of Switching Dynamics We present a method for the unsupervised segmentation of data streams originating from different unknown sources which alternate in time. We use an architecture consisting of competing neural networks. Memory is included in order to resolve ambiguities of input-output relations. In order to obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training procedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction. (Neural Computation, in press). mueller.switch_speech.ps.Z (427948 bytes) 11 pages. M\"uller, K.-R., Kohlmorgen, J., Pawelzik, K.: Analysis of Switching Dynamics with Competing Neural Networks We present a framework for the unsupervised segmentation of time series. It applies to non-stationary signals originating from different dynamical systems which alternate in time, a phenomenon which appears in many natural systems. In our approach, predictors compete for data points of a given time series. We combine competition and evolutionary inertia into a learning rule. Under this learning rule the system evolves such that the predictors which finally survive unambiguously identify the underlying processes. Applications to time series from complex systems and speech are presented. The segmentation achieved is very precise and transients are included, a fact which makes our approach promising for several applications. (IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, in press). * NO HARDCOPIES * Best regards, Klaus &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Dr. Klaus-Robert M\"uller C/o Prof. Dr. S. Amari Department of Mathematical Engineering University of Tokyo 7-3-1 Hongo, Bunkyo-ku Tokyo 113, Japan mail: klaus at sat.t.u-tokyo.ac.jp Fax: +81 - 3 - 5689 5752 &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& PERMANENT ADDRESS: &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Dr. Klaus-Robert M\"uller GMD First (Gesellschaft f. Mathematik und Datenverarbeitung) Rudower Chaussee 5, 12489 Berlin Germany &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
From chandler at kryton.ntu.ac.uk Fri Jul 28 08:24:11 1995 From: chandler at kryton.ntu.ac.uk (chandler@kryton.ntu.ac.uk) Date: Fri, 28 Jul 1995 12:24:11 +0000 Subject: NEURO-FUZZY CONTROL POSITION Message-ID: <9507281124.AA16372@kryton.ntu.ac.uk> ******************* NEURO-FUZZY CONTROL POSITION *********************** at ******************* The Nottingham Trent University *********************** RESEARCH FELLOW/ASSISTANT ------------------------------------------------------------------------------- The Manufacturing Automation Research Group within the Department of Manufacturing Engineering, working in collaboration with the Real Time Machine Control Group of the Department of Computing, is seeking a full-time researcher for an initial two-year appointment to join an active research group working in fuzzy control techniques applied to the adaptive control of the complex process of stencil printing of solder paste. For this post we require a numerate graduate with knowledge of computer control techniques and preferably an awareness of neuro-fuzzy methodologies. Previous experience of the electronics industry is also desirable. Individuals who have completed a PhD and graduates with a proven ability in complex system analysis and control are particularly welcome to apply. Salary will be in the Research Fellow Scale ( 12,756 - 21,262 p.a.) or the Research Assistant Scale ( 9,921 - 12,048 p.a.). Closing date 31st August 1995. Post No. G0493. For more information about this post contact : Martin Howarth Manufacturing Automation Research Group Department of Manufacturing Engineering The Nottingham Trent University Burton Street Nottingham NG1 4BU ENGLAND TEL: +44 (115) 941 8418 (ext. 4110) E-MAIL : man3howarm at ntu.ac.uk or Dr. Pete Thomas Real Time Machine Control Group Department of Computing The Nottingham Trent University Burton Street Nottingham NG1 4BU ENGLAND TEL: +44 (115) 941 8418 (ext. 2901) Alternatively use the HTML Form at the URL : http://marg.ntu.ac.uk/marg/vacancy895.html From alisonw at cogs.susx.ac.uk Fri Jul 28 14:08:00 1995 From: alisonw at cogs.susx.ac.uk (Alison White) Date: Fri, 28 Jul 95 14:08 BST Subject: AISB96 Call for Workshop Proposals Message-ID: ------------------------------------ AISB-96: CALL FOR WORKSHOP PROPOSALS ------------------------------------ Call for Workshop Proposals: AISB-96 University of Sussex, Brighton, England April 1 -- 2, 1996 Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) Workshop Series Chair: Dave Cliff, University of Sussex Local Organisation Chair: Alison White, University of Sussex The AISB is the UK's largest and foremost Artificial Intelligence society -- now in its 32nd year. The Society has an international membership of nearly 900 drawn from both academia and industry. Membership is open to anyone with interests in Artificial Intelligence and the Cognitive and Computing Sciences. The AISB Committee invites proposals for workshops to be held at the University of Sussex campus, on April 1st and 2nd, 1996. The AISB workshop series is held in even years during the Easter vacation. In odd years workshops are held immediately before the biennial conference.
The intention of holding a regular workshop series is to provide an administrative and organisational framework for workshop organisers, thus reducing the administrative burden for individuals and freeing them to focus on the scientific programme. Accommodation, food, and social events are organised for all workshop participants by the local organisers. Proposals are invited for workshops relating to any aspect of Artificial Intelligence or the Simulation of Behaviour. Proposals, from an individual or a pair of organisers, for workshops between 0.5 and 2 days long will be considered. Workshops will probably address topics which are at the forefront of research, but perhaps not yet sufficiently developed to warrant a full-scale conference. In addition to research workshops, a 'Postgraduate Workshop' has become a successful regular event over recent years. This event focuses on how to survive the process of studying for a PhD in AI/Cognitive Science, and has a hybrid workshop/tutorial nature. We welcome proposals, particularly from current PhD survivors, to organise the 1996 Postgraduate Workshop at Sussex. For further information on organising the postgraduate workshop, please see the AISB96 web page (address below) or contact Dave Cliff or Alison White. Proposals for tutorials will also be considered, and will be assessed on individual merit: please contact Dave Cliff or Alison White for further details of submission of tutorial proposals. It is the general policy of AISB to only approve tutorials which look likely to be financially viable. Submission: ---------- A workshop proposal should contain the following information: 1. Workshop Title 2. A detailed outline of the workshop. This should include the necessary background and the potential target audience for the workshop and a justified estimate of the number of possible attendees. Please also state the length and preferred date(s) of the workshop. Specify any equipment requirements, indicating whether the organisers would be expected to meet them. 3. A brief resume of the organiser(s). This should include: background in the research area, references to published work in the topic area and relevant experience, such as previous organisation or chairing of workshops. 4. Administrative information. This should include: name, mailing address, phone number, fax, and email address if available. In the case of multiple organisers, information for each organiser should be provided, but one organiser should be identified as the principal contact. 5. A draft Call for Participation. This should serve the dual purposes of informing and attracting potential participants. The organisers of accepted workshops are responsible for issuing a call for participation, reviewing requests to participate and scheduling the workshop activities within the constraints set by the Workshop Organiser. They are also responsible for submitting a collated set of papers for their workshop to the Workshop Series Chair. Workshop participants will receive bound photocopies of the collated set of papers, with copyright retained by the authors. Individual workshop organisers may wish to approach publishers to discuss publication of workshop papers in journal or book forms. DATES: ------ Intentions to organise a workshop should be made known to the Workshop Series Chair (Dave Cliff) as soon as possible. Proposals must be received by October 1st 1995. Workshop organisers will be notified by October 15th 1995. 
Organisers should be prepared to send out calls for workshop participation as soon as possible after this date. Collated sets of papers to be received by March 15th 1996. Proposals should be sent to: Dave Cliff AISB96 Workshop Series Chair School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH U.K. email: davec at cogs.susx.ac.uk phone: +44 1273 678754 fax: +44 1273 671320 Electronic submission (plain ascii text) is highly preferred, but hard copy submission is also accepted, in which case 5 copies should be submitted. Proposals should not exceed 2 sides of A4 (i.e. 120 lines of text approx.). General enquiries should be addressed to: Alison White AISB96 Local Organisation Chair School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH U.K. email: alisonw at cogs.susx.ac.uk phone: +44 1273 678448 fax: +44 1273 671320 A copy of this call, with further details for workshop organisers (including a full schedule), is available on the WWW from: http://www.cogs.susx.ac.uk/aisb/aisb96/cfw.html A plain-ASCII version of the web page is available via anonymous ftp from: % ftp ftp.cogs.susx.ac.uk login: anonymous password: [your_email at your_address] ftp cd pub/aisb/aisb96 ftp get [filename]* ftp quit * Files available at present are: README call_for_proposals From john at dcs.rhbnc.ac.uk Fri Jul 28 10:43:55 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Fri, 28 Jul 95 15:43:55 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199507281443.PAA21814@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): one new report available ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-050: ---------------------------------------- Learning Ordered Binary Decision Diagrams by Ricard Gavald\`a and David Guijarro, Universitat Polit\`ecnica de Catalunya Abstract: This note studies the learnability of ordered binary decision diagrams (obdds). We give a polynomial-time algorithm using membership and equivalence queries that finds the minimum obdd for the target respecting a given ordering. We also prove that both types of queries and the restriction to a given ordering are necessary if we want minimality in the output, unless P=NP. If learning has to occur with respect to the optimal variable ordering, polynomial-time learnability implies the approximability of two NP-hard optimization problems: the problem of finding the optimal variable ordering for a given obdd and the Optimal Linear Arrangement problem on graphs. ----------------------- The Report NC-TR-95-050 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-95-050.ps.Z ftp> bye % zcat nc-tr-95-050.ps.Z | lpr -l Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. 
The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor From honavar at iastate.edu Fri Jul 28 18:37:39 1995 From: honavar at iastate.edu (Vasant Honavar) Date: Fri, 28 Jul 1995 17:37:39 CDT Subject: No subject Message-ID: <9507282237.AA09329@pv031e.vincent.iastate.edu> The following recent publications of the Artificial Intelligence Research Group (URL http://www.cs.iastate.edu/~honavar/aigroup.html) can be accessed on WWW via the URL http://www.cs.iastate.edu/~honavar/publist.html 1. Chen, C-H. and Honavar, V. (1995). A Neural Memory Architecture for Content as well as Address-Based Storage and Recall: Theory and Applications Paper under review. Draft available as ISU CS-TR 95-03. 2. Chen, C-H. and Honavar, V. (1995). A Neural Network Architecture for High-Speed Database Query Processing. Paper under review. Draft available as ISU CS-TR 95-11. 3. Chen, C-H. and Honavar, V. (1995). A Neural Architecture for Syntax Analysis. Paper under review. Draft available as ISU-CS-TR 95-18. 4. Mikler, A., Wong, J., and Honavar, V. (1995). Quo-Vadis - Adaptive Heuristics for Routing in Large Communication Networks. Under review. Draft available as ISU CS-TR 95-10. 5. Mikler, A., Wong, J., and Honavar, V. (1995). An Object-Oriented Approach to Modelling and Simulation of Routing in Large Communication Networks. Under review. Draft available as: ISU CS-TR 95-09. 6. Balakrishnan, K. and Honavar, V. (1995). Evolutionary Design of Neural Architectures - A Preliminary Taxonomy and Guide to Literature. Available as: ISU CS-TR 95-01. 7. Parekh, R. & Honavar, V. (1995). An Interactive Algorithm for Regular Language Learning. Available as: ISU CS-TR 95-02. 8. Balakrishnan, K. and Honavar, V. (1995) Properties of Genetic Representations of Neural Architectures. In: Proceedings of the World Congress on Neural Networks. Washington, D.C., 1995. Available as: ISU CS-TR 95-13. 9. Chen, C-H., Parekh, R., Yang, J., Balakrishnan, K. and Honavar, V. (1995). Analysis of Decision Boundaries Generated by Constructive Neural Network Learning Algorithms. In: Proceedings of the World Congress on Neural Networks. Washington, D.C., 1995. Available as: ISU CS-TR 95-12. The following publications will be available on line shortly (within the next few weeks): 1. Kirillov, V. and Honavar, V. (1995). Simple Stochastic Temporal Constraint Networks. Draft available as: ISU CS-TR 95-16. 2. Mikler, A., Wong, J., and Honavar, V. (1995). Utility-Theoretic Heuristics for Routing in Large Telecommunication Networks. Draft available as: ISU CS-TR 95-14. 3. Parekh, R., Yang, J., and Honavar, V. (1995). Constructive Neural Network Learning Algorithms for Multi-Category Pattern Classification. Draft available as: ISU CS-TR 95-15. 4. Yang, J., Parekh, R., and Honavar, V. (1995). Comparison of Variants of Single-Layer Perceptron Algorithms on Non-Separable Data. Draft available as: ISU CS-TR 95-19. The WWW page also contains pointers to other older publications some of which are available on line. Those who don't have access to a WWW browser can obtain ISU CS tech reports by sending email to almanac at cs.iastate.edu with BODY (not SUBJECT) "send tr catalog" and following the instructions that you will receive in the reply from almanac. Sorry, no hard copies are available. 
Best regards, Vasant Honavar Artificial Intelligence Research Group 226 Atanasoff Hall Department of Computer Science Iowa State University Ames, IA 50011-1040 email: honavar at cs.iastate.edu www: http://www.cs.iastate.edu/~honavar/homepage.html From stiber at bpe.es.osaka-u.ac.jp Fri Jul 28 23:43:55 1995 From: stiber at bpe.es.osaka-u.ac.jp (Stiber) Date: Sat, 29 Jul 1995 12:43:55 +0900 Subject: two new papers on transient responses of pacemakers at Neuroprose Message-ID: <199507290343.MAA05348@aoi.bpe.es.osaka-u.ac.jp> The following 2 papers are now available for copying from the Neuroprose repository: stiber.transcomp.ps.Z, stiber.transhyst.ps.Z. stiber.transcomp.ps.Z (194085 bytes, 10 pages) M. Stiber, R. Ieong, J.P. Segundo Responses to Transients in Living and Simulated Neurons (submitted to NIPS'95; also technical report HKUST-CS95-26) This paper is concerned with synaptic coding when inputs to a neuron change over time. Experiments were performed on a living and simulated embodiment of a prototypical inhibitory synapse. Results indicate that the neuron's response lags its input by a fixed delay. Based on this, we present a qualitative model for phenomena previously observed in the living preparation, including hysteresis and dependence of discharge regularity on rate of change of presynaptic spike rate. As change is the rule rather than the exception in life, understanding neurons' responses to nonstationarity is essential for understanding their function. stiber.transhyst.ps.Z (244297 bytes, 13 pages) M. Stiber and R. Ieong Hysteresis and Asymmetric Sensitivity to Change in Pacemaker Responses to Inhibitory Input Transients (in press, Proc. Int. Conf. on Brain Processes, Theories, and Models. W.S. McCulloch: 25 Years in Memoriam; also technical report HKUST-CS95-29) The coding of presynaptic spike trains to postsynaptic ones is the unit of computation in nervous systems. While such coding has been examined in detail under stationary input conditions, the effects of changing inputs have until recently been understood only superficially. When a neuron receives transient inputs with monotonically changing instantaneous rate, its response over time depends not only on the rate at that time, but also on the sign and magnitude of its rate of change. This has been shown previously for the living embodiment of a prototypical inhibitory synapse. We present simulations of a physiological model of this living preparation which reproduce its behaviors. Based on these results, we propose a simple model for the neuron's response involving a constant delay between its input and internal state. This is then generalized to a nonlinear dynamical model of any similar system with an internal state which lags its input. ** If you absolutely, positively can't produce your own hardcopy (or induce a friend to do so for you), hardcopies can be requested in writing to: Technical Reports, Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong; don't forget to include the TR number. ** --- Dr. Michael Stiber stiber at bpe.es.osaka-u.ac.jp c/o Prof. S.
Sato Department of Biophysical Engineering Osaka University Toyonaka 560 Osaka, Japan On leave from: Department of Computer Science stiber at cs.ust.hk The Hong Kong University of Science & Technology tel: +852-2358-6981 Clear Water Bay, Kowloon, Hong Kong fax: +852-2358-1477 From listerrj at helios.aston.ac.uk Mon Jul 31 12:14:43 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Mon, 31 Jul 1995 17:14:43 +0100 Subject: New MSc by Research Message-ID: <18457.9507311614@sun.aston.ac.uk> ============================================================== MSc by Research in Information Processing and Neural Networks Aston University UK ============================================================== The Neural Computing Research Group in the Department of Computer Science at Aston University is introducing a new MSc by Research in Information Processing and Neural Networks to start October 1995. The course will involve intensive taught modules in the first term and a supervised research project throughout the remaining terms, which will constitute the dominant part of the MSc. The aim of the course is to provide a practical grounding in exploiting state-of-the-art information processing methods in the context of real world problems. Consequently the course has close industrial involvement. More details can be found on http://neural-server.aston.ac.uk/ or by contacting: Professor David Lowe Aston University Aston Triangle Birmingham B4 7ET United Kingdom email: ncrg at aston.ac.uk ==============================================================