From Connectionists-Request at cs.cmu.edu Sun Jan 1 00:05:20 1995
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Sun, 01 Jan 95 00:05:20 EST
Subject: Bi-monthly Reminder
Message-ID: <11463.788936720@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994.

This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list.

Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

-- Dave Touretzky & Lisa Saksida

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and tech reports.

- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:

  WRONG WAY: "Can someone please mail me all references to cascade correlation?"

  RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above address and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help.
A phone call to the appropriate institution may sometimes be simpler and faster.

- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

  arch.yymm

where yymm stands for the year and month. Thus the earliest available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with your username as password.
3. 'cd' directly to the following directory:
     /afs/cs/project/connect/connect-archives
   The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following URL:

  http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52)
pub/neuroprose directory

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints in this directory and announce their availability, and other interested researchers can then rapidly retrieve and print them. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage merging existing bodies of work into the repository, or using this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z, where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix.
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.

Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one-sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to catch easily found problems such as dependence on local postscript libraries. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "Paper available: Solving Towers of Hanoi with ART-4".

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL and other mailing systems will also be posted as they are debugged and become available. At any time, for any reason, the author may request that their paper be updated or removed.

For further questions contact:

  Jordan Pollack                 Assistant Professor
  CIS Dept/OSU                   Laboratory for AI Research
  2036 Neil Ave                  Email: pollack at cis.ohio-state.edu
  Columbus, OH 43210             Phone: (614) 292-4890

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry:

myname.title.ps.Z
mylogin at my.email.address
12 pages. A random paper which everyone will want to read

Let me know when it is in place so I can announce it to Connectionists at cmu.
^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

unix> mail connectionists
Subject: TR announcement: Born Again Perceptrons
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/myname.title.ps.Z

The file myname.title.ps.Z is now available for copying from the Neuroprose repository:

Random Paper (12 pages)
Somebody Somewhere
Cornell University

ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem.

~r .signature
^D
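The FTP-host/FTP-filename header convention shown above is exactly what retrieval scripts such as Getps key on. As a rough sketch of the idea only (in Python, with hypothetical names; this is an illustration, not the actual Getps script), a mailer macro merely has to extract those two header values and hand them to an anonymous-FTP client in binary mode:

import ftplib

def fetch_announced_paper(ftp_host, ftp_filename):
    # e.g. fetch_announced_paper("archive.cis.ohio-state.edu",
    #                            "/pub/neuroprose/myname.title.ps.Z")
    local_name = ftp_filename.split("/")[-1]
    ftp = ftplib.FTP(ftp_host)
    ftp.login("anonymous", "myname@my.site")   # your e-mail address as password
    with open(local_name, "wb") as f:
        # BINARY mode transfer, as required for .Z files
        ftp.retrbinary("RETR " + ftp_filename, f.write)
    ftp.quit()
    return local_name                          # still needs uncompress + lpr

The point of the two fixed header lines is that a script like this never has to parse the free-form announcement text.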
------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155).

2. Log in as user "anonymous" with your username as password.

3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software.

4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".

From 72773.1646 at compuserve.com Sun Jan 1 11:47:06 1995
From: 72773.1646 at compuserve.com (SAM FARAG)
Date: 01 Jan 95 11:47:06 EST
Subject: NEURAL NETWORKS SPECIALIST
Message-ID: <950101164706_72773.1646_EHL69-1@CompuServe.COM>

The Switchgear and Motor Control Center Division of Siemens Energy and Automation, Inc. in Raleigh, NC has an immediate opening for a neural networks specialist, especially in feed-forward networks and back-propagation. The ideal candidate will have a master's degree in electrical engineering or equivalent experience. Your responsibilities will include developing, implementing, and testing neural network, statistical and/or machine-based algorithms for electrical machine monitoring and diagnostics. Experience in embedded controllers, assembly, high-level languages, and hardware design is highly desirable. Siemens AG is a worldwide supplier of electrical and electronic devices with sales in excess of $4 billion in the US and $40 billion worldwide.

If you are interested, please send your resume via email (text only) or US mail to:

Sam Farag
Siemens Energy & Automation
7000 Siemens Drive
Wendell, NC 27626
email: 72773.1646 at compuserve.com

From nadal at physique.ens.fr Mon Jan 2 12:10:05 1995
From: nadal at physique.ens.fr (NADAL Jean-Pierre)
Date: Mon, 2 Jan 1995 18:10:05 +0100
Subject: New books
Message-ID: <199501021710.SAA09077@droopy.ens.fr>

***********************************************************************
*********** Proceedings (Cargese NATO ASI): ***************************
***********************************************************************
"From Statistical Physics To Statistical Inference and Back"
Grassberger P. and Nadal J.-P., editors
Kluwer Acad. Publ., 1994
ISBN 0-7923-2775-6
to order: services at wkap.nl
          kluwer at world.std.com

***********************************************************************
*********** Reprint volume, with comments from the editors: ***********
***********************************************************************
"Biology and Computation: a physicist's choice"
Gutfreund H. and Toulouse G., editors
Advance Series in Neurosciences, Vol. 3
World Scientific, 1994

***********************************************************************
*********** Book on neural networks, for non-specialists (IN FRENCH) **
***********************************************************************
"Reseaux de neurones : de la physique a la psychologie"
Nadal J.-P.
Armand Colin, collection 2ai, 1993
ISBN 2-200-21170-8
Available in all good bookshops...

***********************************************************************
Jean-Pierre Nadal                      nadal at physique.ens.fr
Laboratoire de Physique Statistique
Ecole Normale Sup\'erieure
24, rue Lhomond - 75231 Paris Cedex 05

From meeden at cs.swarthmore.edu Mon Jan 2 13:51:02 1995
From: meeden at cs.swarthmore.edu (Lisa Meeden)
Date: Mon, 2 Jan 1995 13:51:02 -0500
Subject: Thesis available on adaptive robot control
Message-ID: <199501021851.NAA21106@cilantro.cs.swarthmore.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Thesis/meeden.thesis.ps.Z

**DO NOT FORWARD TO OTHER GROUPS**

Ph.D. Thesis available by anonymous ftp (124 pages)

Towards Planning: Incremental Investigations into Adaptive Robot Control

Lisa Meeden
Department of Computer Science
Indiana University

ABSTRACT: Traditional models of planning have adopted a top-down perspective by focusing on the deliberative, conscious qualities of planning at the expense of having a system that is connected to the world through its perceptions. This thesis takes the opposing, bottom-up perspective that being firmly situated in the world is the crucial starting point to understanding planning. The central hypothesis of this thesis is that the ability to plan developed from the more primitive capacity of reactive control. Neural networks offer the most promising mechanism for investigating robot control and planning because connectionist methodology allows the task demands, rather than the designer's biases, to be the primary force in shaping a system's development. Input can come directly from the sensors and output can feed directly into the actuators, creating a close coupling of perception and action. This interplay between sensing and acting fosters a dynamic interaction between the controller and its environment that is crucial to producing reactive behavior. Because adaptation is fundamental to the connectionist paradigm, the designer need not posit what form the internal knowledge will take or what specific function it will serve. Instead, based on the training task, the system will construct its own internal representations built directly from the sensor readings to achieve the desired control behavior. Once the system has reached an adequate level of performance at the task, its method can be dissected and a high-level understanding of its control principles can be determined.

This thesis takes an incremental approach towards understanding planning using a simple recurrent network model. In the initial phase, several ways of representing goals are explored using a simulated robot in a one-dimensional environment. Next, the model is extended to accommodate a physical robot, and two reinforcement learning methods for adapting the network controllers are compared: a gradient descent algorithm and a genetic algorithm. Then the model's reactive behavior and representations are analyzed to reveal that it contains the potential building blocks necessary for planning, called protoplans.
Finally, to show that protoplans can be used to guide behavior, a learning transfer experiment is conducted. The protoplans constructed in one network controller are stored in an associative memory and retrieved by a new controller as it learns the same task from scratch. In this way strategies discovered in the original controller bias the strategies developed in a new controller. The results show that controllers trained with protoplans and without goals are able to converge more quickly to successful solutions than controllers trained with goals. Furthermore, changes in the protoplans over time reveal that particular fluctuations in the protoplan values are highly correlated with switches in the robot's behavior. In some instances, very minor disturbances to the protoplan at these fluctuation points severely disrupt the normal pattern of behavior. Thus protoplans can play a key role in determining behavior.

The success of these protoplan experiments supports a new set of intuitions about planning. Rather than static, stand-alone procedures, plans can be seen as dynamic, context-dependent guides, and the process of planning may be more like informed improvisation than deliberation. It is not fruitful to spend processing time reasoning about an inherently unpredictable world; with the protoplan model, a new protoplan can be recomputed on every time step. Although each protoplan offers only sketchy guidance, any more information might actually be misleading. Once the chosen action is executed, the subsequent perceptions are used to retrieve a new, more appropriate protoplan. Therefore it is possible to continually replan based on the best information available--the robot's current perceptual state.

-------------------

Hard copies are not available. Thanks to Jordan Pollack for maintaining neuroprose.

Lisa Meeden
Computer Science Program
Swarthmore College
meeden at cs.swarthmore.edu

From arthur at mail4.ai.univie.ac.at Mon Jan 2 22:05:27 1995
From: arthur at mail4.ai.univie.ac.at (Arthur Flexer)
Date: Mon, 2 Jan 95 22:05:27 MET
Subject: Q: Statistical Evaluation of Classifiers
Message-ID: <199501022105.WAA29311@milano.ai.univie.ac.at>

Dear colleagues,

I am looking for references on systematic accounts of the problem of the statistical evaluation of (statistical, machine learning or neural network) classifiers. That is, systematic accounts of the statistical tests which can be employed if one wants to establish whether observed performance differences are indeed caused by the varied independent variables (e.g. kind of method, certain parameters of the method, the data set used, ...) and not by mere chance. What I am looking for are the appropriate statistical tests for the significance of the observed performance differences.

To make things more clear, let me simplify the case and give some references to the literature that I have already found:

Problem 1: You have one method for classification (e.g. a neural network) and one data set. There are several network parameters to tune (number of layers, learning rate, ...) and you are looking for optimal performance on your data set. So the independent variables are the network's parameters and the dependent variable is accuracy (i.e. percent of correct classifications) (see Kibler & Langley 1988 for an account of machine learning as an experimental science). For each of the parameter settings, multiple neural networks should be trained to rule out the influence of different training sets, weight initialisations and so on.
One could even employ experimental designs like bootstrap, jackknife and the like (see Michie et al. 1994 for an overview) for each of the parameter settings. Therefore, for each parameter setting, the mean accuracy over all runs is the observed performance criterion. A statistical test that can be employed to test the significance of the differences in observed mean accuracies is the t-test. Finnoff et al. 1992 and Hergert et al. 1992 use "a robust modification of a t-test statistic" for comparison.

Problem 2: is similar to Problem 1. Instead of one method for classification and one data set, there are several methods of classification and you want to know which of them shows optimal performance on one data set. Just proceed as stated under Problem 1, replacing the parameter settings with the different methods.

Problem 3: is quite difficult and I have no solution yet :). You have one method of classification and several data sets. You want to know on which data set your algorithm performs best (again in terms of mean accuracy). The problem is that the different data sets have different numbers of classes and different probabilities of classes. E.g. one data set has N=100 and the first class has 50 members and the second class also. Another data set has N=100 and the first class has 20 members, the second 30, the third 20 and the fourth again 30. Therefore, an accuracy of 50% would be only as good as chance for the first data set, but maybe quite something for the second data set. This problem has been addressed by Kononenko & Bratko 1991 from an information-based point of view.

Problem 4: would of course be the ultimate: Several methods and several data sets.

As you can see from the references I have given above, I am aware that there *are* some pointers in the literature. But as the problem of classification has been around for quite a while (at least for statisticians), I am wondering if there already exists a systematic and extensive overview of methods to employ. On the other hand, awareness of the need for such statistical evaluation often is very low :(. So the question is: Is there already a comprehensive text on these matters, or do we all have to pick the information out of the standard statistics textbooks?

Regards and thanks for any help, Arthur.

-----------------------------------------------------------------------------
Arthur Flexer                                  arthur at ai.univie.ac.at
Austrian Research Inst. for Artificial Intelligence  +43-1-5336112(Tel)
Schottengasse 3, A-1010 Vienna, Austria, Europe      +43-1-5320652(Fax)

Literature:

Finnoff W., Hergert F., Zimmermann H.G.: Improving Generalization Performance by Nonconvergent Model Selection Methods, in Aleksander I. & Taylor J. (eds.), Artificial Neural Networks, 2, North-Holland, Amsterdam, pp. 233-236, 1992.

Hergert F., Zimmermann H.G., Kramer U., Finnoff W.: Domain Independent Testing and Performance Comparisons for Neural Networks, in Aleksander I. & Taylor J. (eds.), Artificial Neural Networks, 2, North-Holland, Amsterdam, pp. 1071-1076, 1992.

Kononenko I., Bratko I.: Information-Based Evaluation Criterion for Classifiers' Performance, Machine Learning, 6(1), 1991.

Kibler D. & Langley P.: Machine Learning as an Experimental Science, Machine Learning, 3(1), 5-8, 1988.

Michie D., Spiegelhalter D.J., Taylor C.C. (eds.): Machine Learning, Neural and Statistical Classification, Ellis Horwood, England, 1994.
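As a concrete illustration of the comparison described in Problems 1 and 2 above: given per-run accuracies for two parameter settings (or two methods) on one data set, the textbook unpaired t-test can be computed as below. This is only a sketch (Python, assuming scipy is available; the accuracy values are made up), not the robust modification used by Finnoff et al.:

from scipy import stats

# fraction correct for independently trained networks, one value per run
acc_setting_a = [0.86, 0.84, 0.88, 0.85, 0.87, 0.83, 0.86, 0.85]
acc_setting_b = [0.82, 0.85, 0.81, 0.84, 0.83, 0.80, 0.82, 0.84]

# Welch's t-test: does not assume equal variances in the two groups
t, p = stats.ttest_ind(acc_setting_a, acc_setting_b, equal_var=False)
print("t = %.3f, two-sided p = %.4f" % (t, p))
# reject "equal mean accuracy" at the 5% level when p < 0.05

For Problem 3, a first step is to compare each mean accuracy not with the other data sets' accuracies but with that data set's own chance level (e.g. the largest class prior: 50% for the first example above, 30% for the second), or to use an information-based criterion as in Kononenko & Bratko 1991.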
From herwin at osf1.gmu.edu Mon Jan 2 16:12:40 1995
From: herwin at osf1.gmu.edu (HARRY R. ERWIN)
Date: Mon, 2 Jan 1995 16:12:40 -0500 (EST)
Subject: Interim Report on OB Modeling
Message-ID:

Interim Report/Lessons Learned on a Simulation Model of the Olfactory Bulb
Harry Erwin
herwin at gmu.edu
January 1, 1995

As a graduate school project during the last quarter, I've been developing a computational and compartmental model of a small but biologically realistic subset of the olfactory bulb. It owes its inspiration to the work done by Walter Freeman and James Skinner, but its many errors naturally remain the responsibility of the author. I'm posting this report for anyone who might provide useful critical feedback.

The simulation consists of a number of small compartmental models of sensory, tufted/mitral, periglomerular, and granule cell neurons structured to provide insight into architectural details of the olfactory bulb. Crucial omissions include reafference from the anterior olfactory nucleus, the pyriform cortex, and the locus coeruleus, and the effect of changes in the chloride gradient in the glomeruli. The model is written in C++ and represents 32 tufted/mitral cells and the associated periglomerular and granule cells. The code is efficient, and performance is excellent, both on a Macintosh IIfx and on a Silicon Graphics Iris workstation. On the SGI, approximately 200 milliseconds of activity is modeled in 8 minutes of CPU time. The model has been rehosted on PVM 3 and will eventually be moved to the Paragon supercomputer.

Lessons learned in developing this simulation include the following:

1. Errors in the equations for neural dynamics--a number of errors were noted in recent papers. My take is that it is unsafe to rely on the equations in published papers, and any equations used should be rederived. I didn't notice this until I made my initial runs and got weird results (highly at variance with the biological data).

2. Instability in explicit compartmental models--compartmental neural models are advective, with all the problems associated with such models. This becomes clear if one reviews Wilfrid Rall's 1989 paper, "Cable theory for dendritic neurons," in Koch and Segev, Methods in Neuronal Modeling. Since both the shape and strength of the signals between compartments are biologically important, the preferred approach would be a high-order adaptive scheme using implicit solution techniques. The system of equations appears to be stiff, with high long-range connectivity, making matrix inversions computationally expensive. My exploratory modeling used a low-order explicit code, and so can only be regarded as suggestive.

3. Transmission rates in compartmental models--some workers publishing compartmental models appear to assume that transmission delays between compartments are unimportant. That leads to modeling difficulties. The equations used should be formally correct.

4. The true role of 'inhibitory' neurotransmitters--GABA and glycine are _not_ inverted excitatory neurotransmitters. Instead they serve to increase the 'inertia' of the neuron by reducing its sensitivity to excitation. The reversal potential for chloride channels is in the vicinity of -70 mV, close to the resting potential of the neuron and also close to the reversal potential for potassium channels. This means that GABA can depolarize as well as hyperpolarize a neuron, depending on the chloride gradient. The model for release of GABA at a synapse must take that into account.
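To make point 4 concrete: with an ohmic channel model, the sign of the GABA-gated chloride current depends only on where the membrane potential sits relative to the chloride reversal potential. A back-of-the-envelope sketch (Python, purely illustrative values):

E_CL = -70.0   # chloride reversal potential (mV)
G_CL = 1.0     # GABA-activated chloride conductance (arbitrary units)

def gaba_current(v_m):
    # ohmic model: positive values drive the membrane toward depolarization
    return G_CL * (E_CL - v_m)

for v_m in (-80.0, -70.0, -60.0):
    i = gaba_current(v_m)
    effect = "depolarizing" if i > 0 else ("hyperpolarizing" if i < 0 else "neutral")
    print("V_m = %5.1f mV -> I = %+5.1f (%s)" % (v_m, i, effect))

A cell sitting below E_Cl is pulled up toward -70 mV by GABA, and one sitting above it is pulled down; either way the open chloride conductance clamps the membrane near E_Cl, which is the 'inertia' described above.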
5. Synaptic release models in most published compartmental models are (to be polite about it) simplistic, typically strongly influenced by the artificial neural network model of the spiking neuron. Very little work appears to have been done on the mechanism by which a depolarization level on the presynaptic side results in vesicle release, followed by either depolarization or buffering against depolarization on the postsynaptic side. What appears in most analyses are "all-or-nothing" presynaptic spikes and postsynaptic responses, and that does not address the detailed dynamics that actually occur. In particular, electrical synapses and chemical synapses implementing graded potentials are given short shrift by this model.

6. The crucial role of active tuning in producing the observed EEG patterns--to get the observed EEG patterns, the olfactory bulb has to be actively tuned in sensitivity. I modeled this by adjusting the trigger potential for spiking by active conductances, and over a range of 10 mV, I went from fixed-point dynamics to completely chaotic dynamics. To reproduce the dynamics seen in vivo would thus seem to require sensitive tuning in near-real-time.

7. The role of periglomerular cells in the system--the periglomerular cells clearly normalize the bulb input as part of this tuning process. Initially during a breath, they are relatively inactive, so that all the tufted/mitral cells are allowed to explore the input, but later in the breath, the periglomerular cells emerge more strongly, eventually allowing the neural cellular assembly (NCA) best classifying the afferent input to dominate. Tuning of the periglomerular cell response is a key aspect of modeling the time constant of this process. Walter Freeman has done work in this area.

8. The role of granule cells in the system--these appear to force the system into a Hopf bifurcation and only work right if the system is actively tuned, since they do not appear to be adaptive. Whether they have active conductances is a major issue for my model. Active conductances appear to overdrive them, since there is no evidence for afferent GABA or glycine synapses.

9. The role of attention in creating and maintaining neural cellular assemblies--see Gary Aston-Jones's work and Gray, Skinner, and Freeman, 1986, in Behavioral Neuroscience, 100(4):585-596. Norepinephrine appears to have a role in vigilance, by modulating the sensitivity of the olfactory bulb. This is much like the modulation by the periglomerular neurons, but on a more global scale, adjusting the percentage of the existing neural cellular assemblies (NCAs) that respond to afferent signals and facilitating NCA assembly and disassembly. I intend to investigate this further.

10. The mechanism of the 'H' synapses in the glomeruli and the reciprocal synapses between the tufted/mitral cells and the granule cells remains unclear. I suspect they produce some sort of difference signal.

Harry Erwin
Internet: herwin at gmu.edu
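A toy illustration of the stiffness problem raised in point 2 above: even for a single passive compartment, dV/dt = -(V - E_rest)/tau, a forward (explicit) Euler step diverges once the step size exceeds twice the fastest time constant, while a backward (implicit) step stays stable at any step size. A minimal sketch (Python, deliberately exaggerated parameters, not taken from the model described above):

E_REST, TAU = -65.0, 1.0          # resting potential (mV), time constant (ms)
DT = 2.5                          # time step > 2*tau: explicit scheme unstable

v_explicit = v_implicit = -55.0   # both traces start 10 mV above rest
for step in range(8):
    # forward Euler: V <- V + dt * f(V)
    v_explicit += DT * (-(v_explicit - E_REST) / TAU)
    # backward Euler: solve V' = V + dt * f(V') for V'
    v_implicit = (v_implicit + (DT / TAU) * E_REST) / (1.0 + DT / TAU)
    print("step %d: explicit %12.2f mV, implicit %8.3f mV"
          % (step + 1, v_explicit, v_implicit))
# the explicit trace oscillates with growing amplitude; the implicit
# trace decays monotonically toward E_REST

For coupled compartments the implicit step requires solving a linear system rather than a scalar equation, which is where the expensive matrix inversions mentioned in point 2 come from.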
From jbower at smaug.bbb.caltech.edu Tue Jan 3 16:05:51 1995
From: jbower at smaug.bbb.caltech.edu (jbower@smaug.bbb.caltech.edu)
Date: Tue, 3 Jan 95 13:05:51 PST
Subject: No subject
Message-ID: <9501032105.AA09903@smaug.bbb.caltech.edu>

ANNOUNCEMENT

GENESIS in research and education

For the last five years, our laboratory, with funding from the National Science Foundation, has been supporting development and use of a GEneral NEural SImulation System called GENESIS. GENESIS is primarily for use in constructing realistic simulations of neurobiologically accurate cells and networks, although it has been used to construct neural models at all levels of detail. This simulation system is now used at institutions throughout the world, and served as the basis for 22 publications originating outside Caltech in 1994. GENESIS is available for free from Caltech (see below).

This message is intended to announce the availability of a new GENESIS-based book on biologically realistic neural modeling as well as a new free version of the GENESIS simulator.

************************************************************************

The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System.

James M. Bower, California Institute of Technology, Pasadena
David Beeman, University of Colorado, Boulder

This book introduces the GENESIS neural simulation system, as well as the interdisciplinary field of computational neuroscience. It is a step-by-step tutorial for professionals, researchers and students working in the area of computational neuroscience or neuroscience in general. Each tutorial is accompanied by a number of suggested exercises, "experiments", or projects which may be assigned as homework or used for self-study. It can also be used as an interactive guide to understanding neuronal and network structure for those working in the area of neural networks and the cognitive sciences. The Preface and Introduction give suggestions for incorporating this material into neuroscience courses with existing textbooks. The full GENESIS simulator and all simulations used in the book are available at no cost from the Caltech GENESIS ftp site.

Part I of the book teaches concepts in neuroscience and neural modeling by means of interactive computer tutorials on subjects ranging from neuronal membrane properties to cortical networks. These chapters, written by several contributors, allow the student to perform realistic simulations and experiments on model neural systems and provide the necessary background for understanding and using the tutorials. The simulations are user-friendly, with on-line help, and may be used without any prior knowledge of the GENESIS simulator or computer programming.

Part II is intended to teach the use of the GENESIS script language for the construction of one's own simulations. This part will be useful for self-study by researchers who wish to do neural modeling, as well as students. It follows approximately the same sequence of topics as Part I, and uses parts of the tutorial simulations as examples of GENESIS programming. Several of these are based on recent research simulations which have been published in the neuroscience literature, but which have not been previously available for use outside the laboratories of the original researchers. Thus, the reader may modify these simulations and use them as a starting point for the development of original simulations.

************************************************************************

GENESIS version 1.4.2

The current version of GENESIS is version 1.4.2 (December 1994), which has been newly updated to contain the tutorial simulations used in "The Book of GENESIS". Around March 1995 we expect to release version 2.0, which will have a number of new features that are described in Part II of the Book of GENESIS. If all goes according to schedule, this release will also run on 486 PCs under the Linux and FreeBSD versions of unix.
At present, GENESIS and its graphical front-end XODUS are written in C and run on SUN (SUN 3, 4, and Sparcstations 1 and 2) and DEC (DECstation 2100, 3100, and 5000/200PX) graphics workstations under UNIX (Sun & DEC OS 4.0 and up) and X-windows (versions 11.3, 11.4 and 11.5). It has also been used with Silicon Graphics (Irix 4.0.1 and up) and the HP 700 series (HPUX).

************************************************************************

GENESIS use in education

From cohn at psyche.mit.edu Wed Jan 4 10:08:41 1995
From: cohn at psyche.mit.edu (David Cohn)
Date: Wed, 4 Jan 95 10:08:41 EST
Subject: Paper: Active Learning with Statistical Models
Message-ID: <9501041508.AA14465@psyche.mit.edu>

Anticipating the post-NIPS rush, I would like to announce that the following paper is available by anonymous ftp and web-server as

  ftp://psyche.mit.edu/pub/cohn/active-models.ps.Z

#####################################################################

Active Learning with Statistical Models

David A. Cohn, Zoubin Ghahramani and Michael I. Jordan
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology

For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.

To appear in G. Tesauro, D. Touretzky, and J. Alspector, eds., Advances in Neural Information Processing Systems 7. Morgan Kaufmann, San Francisco, CA (1995).

#####################################################################

The paper may also be retrieved by anonymous ftp to "psyche.mit.edu" using the following protocol:

unix> ftp psyche.mit.edu
Name (psyche.mit.edu:joebob): anonymous    <- use "anonymous" here
331 Guest login ok, send ident as password.
Password: joebob at machine.univ.edu        <- use your email address here
230 Guest login ok, access restrictions apply.
ftp> cd pub/cohn                           <- go to the directory
250 CWD command successful.
ftp> binary                                <- change to binary transfer
200 Type set to I.
ftp> get active-models.ps.Z                <- get the file
200 PORT command successful.
150 Binary data connection for active-models.ps.Z ...
226 Binary Transfer complete.
local: active-models.ps.Z remote: active-models.ps.Z
301099 bytes received in 2.8 seconds (1e+02 Kbytes/s)
ftp> quit                                  <- all done
221 Goodbye.
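For readers who want the flavor of "optimal" data selection before retrieving the paper: one generic instance of the idea is to query, from a pool of candidates, the input whose prediction the current learner is most uncertain about. The sketch below does this for ordinary least squares, where the predictive variance at x is proportional to phi(x)^T (X^T X)^{-1} phi(x). This is a standard illustration of the principle (Python with numpy, invented data), not the authors' algorithms for mixtures of Gaussians or locally weighted regression:

import numpy as np

# inputs labeled so far, each as a feature vector [bias, x];
# the targets are not needed for the variance computation
X = np.array([[1.0, 0.1], [1.0, 0.2], [1.0, 0.3]])

pool = np.linspace(0.0, 1.0, 21)        # unlabeled candidate inputs
xtx_inv = np.linalg.inv(X.T @ X)

def pred_variance(x):
    phi = np.array([1.0, x])            # feature vector for candidate x
    return float(phi @ xtx_inv @ phi)   # predictive variance up to sigma^2

query = max(pool, key=pred_variance)
print("query the label of x = %.2f" % query)   # farthest from the data near 0.2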
From nilsson at CS.Stanford.EDU Wed Jan 4 13:04:54 1995
From: nilsson at CS.Stanford.EDU (Nils Nilsson)
Date: Wed, 4 Jan 95 10:04:54 PST
Subject: faculty search
Message-ID: <9501041804.AA03176@Fairview.Stanford.EDU>

Stanford University's Department of Computer Science has begun a search for a tenure-track, junior faculty position. The advertisement, soon to appear, is attached below. Among other specialities, we are interested in candidates in machine learning. I am particularly eager to see a new machine learning person come to Stanford. Please feel free to distribute this ad to people you think might be interested.

-Nils Nilsson

-----------

Faculty Opening

Stanford University's Department of Computer Science seeks applicants for a tenure-track faculty position at the Assistant Professor level. Specific areas of interest include natural language, human-computer interaction, and adaptive and learning systems. In addition, the department is interested in strengthening its faculty in foundations (algorithms and formal methods) and in software systems. Applicants should have a Ph.D. in a relevant field, and should have a strong interest in both teaching and research. The successful candidate will be expected to teach courses, both in the candidate's specialty area and in related subjects, and to build and lead a team of graduate students in Ph.D. research. Stanford University is an equal opportunity employer and welcomes nominations of women and minority group members and applications from them.

Applications, including a resume, a publications list, and the names of five references, should be sent by March 1, 1995 to:

Search Committee Chair, Department of Computer Science
Margaret Jacks Hall, 210
Stanford University
Stanford, CA 94305-2140

From rsun at cs.ua.edu Wed Jan 4 11:58:24 1995
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 4 Jan 1995 10:58:24 -0600
Subject: TR available: schemas, logics, and neural assemblies
Message-ID: <9501041658.AA22393@athos.cs.ua.edu>

Paper available:

----------------------------------------------
* FTP-host: archive.cis.ohio-state.edu
  FTP-filename: pub/neuroprose/sun.schema.ps.Z
  (thanks to Jordan Pollack)
------------------------------------------------

Title: Schemas, Logics, and Neural Assemblies (length: 30 pages)

Author: Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487

To appear in Applied Intelligence, Special issue on connectionist models, Vol. 5, No. 2, Feb. 1995 (edited by Michael Dyer)

Abstract: To implement schemas and logics in connectionist models, some form of basic-level organization is needed. This paper proposes such an organization, which is termed a discrete neural assembly. Each discrete neural assembly is in turn made up of discrete neurons (nodes), that is, nodes that process inputs based on a discrete mapping instead of a continuous function. A group of closely interconnected discrete neurons (nodes) forms an assembly and carries out a basic functionality. Some substructures and superstructures of such assemblies are developed to enable complex symbolic schemas to be represented and processed in connectionist networks. The paper shows that logical inference can be performed precisely, when necessary, in these networks, and with certain generalizations, more flexible inference (fuzzy inference) can also be performed. The development of various connectionist constructs demonstrates the possibility of implementing symbolic schemas, in their full complexity, in connectionist networks.

* No hardcopy available.

* FTP procedure:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.schema.ps.Z
ftp> quit
unix> uncompress sun.schema.ps.Z
unix> lpr sun.schema.ps (or however you print postscript)

-----------------------------------------------------------------

Also, the following paper is now available in Neuroprose (file: sun.robust.ps.Z, 50 pages):

Robust Reasoning: Integrating Rule-Based and Similarity-Based Reasoning
by Ron Sun
to appear in: Artificial Intelligence (AIJ), Spring 1995
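To illustrate the notion of a discrete neuron in the abstract above: instead of passing a weighted sum through a continuous squashing function, such a node computes its output by table lookup over a finite set of input patterns. A minimal sketch (Python; the AND mapping is an arbitrary example chosen here, not taken from the paper):

class DiscreteNeuron:
    """A node computing a discrete mapping from input tuples to outputs,
    in place of a continuous activation function."""
    def __init__(self, mapping, default=0):
        self.mapping = mapping    # finite lookup table
        self.default = default    # output for patterns not in the table
    def fire(self, inputs):
        return self.mapping.get(tuple(inputs), self.default)

# a closely interconnected group of such nodes forms an assembly; here a
# single node whose basic functionality is logical AND
and_node = DiscreteNeuron({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
for pattern in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(pattern, "->", and_node.fire(pattern))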
From weinfeld at lix.polytechnique.fr Thu Jan 5 02:42:11 1995
From: weinfeld at lix.polytechnique.fr (Michel Weinfeld)
Date: Thu, 5 Jan 1995 09:42:11 +0200
Subject: Web page for ICANN'95 conference now available
Message-ID:

International Conference on Artificial Neural Networks (ICANN'95), Paris, 9-13 October 1995.

You can get more information about this conference by browsing the WWW server at:

  http://lix.polytechnique.fr/~ICANN95

Or send inquiries to: ICANN95 at lix.polytechnique.fr

From crg at ida.his.se Thu Jan 5 09:43:00 1995
From: crg at ida.his.se (Connectionist)
Date: Thu, 5 Jan 95 15:43:00 +0100
Subject: SCC-95: Programme and Call For Participation
Message-ID: <9501051443.AA20119@mhost.ida.his.se>

The Second Swedish Conference on Connectionism
Thursday 2nd and Friday 3rd March 1995
Skövde, Sweden

------------------------------------
PROGRAMME AND CALL FOR PARTICIPATION
------------------------------------

The Second Swedish Conference on Connectionism is organized by the Connectionist Research Group at the University of Skövde and held in Skövde, Sweden, March 2-3, 1995. A number of researchers will present their current work, and several internationally renowned keynote speakers will give plenary talks on different connectionist topics, such as neurobiological issues, cognitive science, connectionist modelling, applications with connectionist networks and the philosophy of a "connectionist" mind.

Invited Talks
-------------

The Neural Network House: An Overview.
Michael C. Mozer, University of Colorado, USA

A connectionist exploration of the computational implications of embodiment.
Ronan Reilly, University College Dublin, Ireland

Landmark Arrays and the Hippocampal Cognitive Map.
David S. Touretzky, Carnegie Mellon University, USA

Searching weight space for backpropagation solution types.
Noel Sharkey, University of Sheffield, UK

Physiological constraints on models of behavior.
Michael Hasselmo, Harvard University, USA

Modeling, Connectionist and Otherwise.
Tim van Gelder, Australian National University, Australia

Connectionist Synthetic Epistemology: Requirements for the Development of Objectivity.
Ron Chrisley, University of Sussex, UK
Learning to Retrieve Information.
Garrison W. Cottrell, University of California, San Diego, USA

Program Committee
-----------------

Jim Bower (California Institute of Technology, USA)
Harald Brandt (Ellemtel, Sweden)
Ronald Chrisley (University of Sussex, UK)
Gary Cottrell (University of California, San Diego, USA)
Georg Dorffner (University of Vienna, Austria)
Tim van Gelder (Australian National University, Australia)
Agneta Gulz (University of Skövde, Sweden)
Olle Gällmo (Uppsala University, Sweden)
Tommy Gärling (Göteborg University, Sweden)
Dan Hammerstrom (Adaptive Solutions Inc., USA)
Jim Hendler (University of Maryland, USA)
Erland Hjelmquist (Göteborg University, Sweden)
Anders Lansner (Royal Institute of Technology, Stockholm, Sweden)
Reiner Lenz (Linköping University, Sweden)
Ajit Narayanan (University of Exeter, UK)
Jordan Pollack (Brandeis University, USA)
Noel Sharkey (University of Sheffield, UK)
Bertil Svensson (Chalmers University of Technology, Sweden)
Tere Vadén (University of Tampere, Finland)

Conference Organizers
---------------------

Lars Niklasson (University of Skövde, Sweden)
Mikael Bodén (University of Skövde, Sweden)

Programme
---------

Thursday, March 2nd

09.00 Opening
      Lars Niklasson (SCC 1995 organizer)
09.20 Landmark Arrays and the Hippocampal Cognitive Map
      David S. Touretzky
10.00 Recurrent Attractor Neural Networks in Model of Cortical Associative Memory Function
      Erik Fransén and Anders Lansner
10.20 Coffee break
10.50 Physiological Constraints on Models of Behavior
      Michael Hasselmo
11.30-11.50 A Biophysically-Based Model of the Neostriatum as Dynamically Reconfigurable Network
      J. Randall Gobbel
12.00 Dynamical Approximation by Neural Nets
      Max Garzon and Fernanda Botelho
12.20 On Parallel Selective Principal Component Analysis
      Mats Österberg and Reiner Lenz
12.40 LUNCH
14.00 Searching Weight Space for Backpropagation Solution Types
      Noel Sharkey, John Neary and Amanda Sharkey
14.40 Efficient Neural Net Isomorphism Testing
      Max Garzon and Arun Jagota
15.00 The TECO Theory - Simulation of Recognition Failure
      Sverker P. Sikström and Anders Lansner
15.20 Coffee break
15.50 Features of Distributed Representations for Tree-structures: A Study of RAAM
      Mikael Bodén and Lars Niklasson
16.10 Using the Conceptual Graph Model as Intermediate Representation for Knowledge Translation in Hybrid Systems
      Nicolae B. Szirbik, Gabriel L. Somlo and Diana L. Buliga
16.30-16.50 Adaptive Generalization in Dynamic Neural Networks
      Stuart A. Jackson and Noel E. Sharkey
17.00 Diversity, Neural Nets and Safety Critical Applications
      A.J.C. Sharkey, N.E. Sharkey and O.C. Gopinath
17.20 Some Experiments Using Extra Output Learning to Hint Multi Layer Perceptrons
      Olle Gällmo and Jakob Carlström
17.40 Minimization of Quantization Errors in Digital Implementations of Multi Layer Perceptrons
      Jakob Carlström
18.00-18.20 Multimodal Sensing for Motor Control
      Christian Balkenius

Friday, March 3rd

09.00 Modeling, Connectionist and Otherwise
      Tim van Gelder
09.40 Behaviorism and Reinforcement Learning
      Tomas Landelius and Hans Knutsson
10.00 Symbol Grounding and Transcendental Logic
      Erich Prem
10.20 Coffee break
10.50 A Connectionist Exploration of the Computational Implications of Embodiment
      Ronan Reilly
11.30-11.50 Are Representations Still Necessary for Understanding Cognition?
      Orlando Bisacchi Coelho
12.00 Indeterminacy and Experience
      Pauli Pylkkö
12.20 The Symbolic-Subsymbolic Relation: From Limitivism to Correspondence
      Tere Vadén
12.40 LUNCH
14.00 Connectionist Synthetic Epistemology: Requirements for the Development of Objectivity
      Ronald Chrisley and Andy Holland
14.40 Learning to Retrieve Information
      Brian Bartell, Garrison W. Cottrell and Rik Belew
15.20 Coffee break
15.50-16.10 Connectionist Models for the Detection of Oil Spills from Doppler Radar Imagery
      Tom Ziemke and Fredrik Athley
16.10 The Neural Network House: An Overview
      Michael C. Mozer, Robert H. Dodier, Marc Anderson, Lucky Vidmar, Robert F. Cruickshank III and Debra Miller
16.50 Closing
      The Connectionist Research Group, University of Skövde

General Information
-------------------

Secretariat

All inquiries concerning the Conference should be addressed to the Secretariat:
SCC 1995, Attn: Marie Bodén, University of Skövde, P.O. Box 408, S-541 28 Skövde, SWEDEN
Phone +46 (0)500-46 46 00, Fax +46 (0)500-46 47 25
email: marie at ida.his.se

Conference venue:
Billingehus Hotel AB, Alphyddevägen, 541 21 Skövde, SWEDEN
Phone +46 (0)500-48 30 00, Fax +46 (0)500-48 38 80

Registration form

Fees include admission to all conference sessions, the get-together party, coffee and lunch, and a copy of the Proceedings. Hotel reservations are at the Billingehus Hotel and Conference Centre and can be made through the conference organization. The rooms are available from Wednesday evening (1st) to Friday noon (3rd). To register, complete and return the form below (one form/person) to the secretariat. Registration is valid when payment is received. Payment should be made to postal giro 78 81 40-2, payable to SCC 1995, Högskolan Skövde.

Name:
Company:
Address:
City/Country:
Phone:
Email:
Date of arrival:
Date of departure:

If the double room alternative has been chosen, please give the details for the second person.

Name:
Company:
Country:
Email:

Alternatives (please circle chosen fee)

Conference fee only                  1500 SEK    After 10/2: 2000 SEK
Conference fee, FT student           1000 SEK    After 10/2: 1500 SEK
Full board and single room lodging    800 SEK/night
Full board and double room lodging    600 SEK/night

Invoice wanted: yes/no

Signature:

From terry at salk.edu Fri Jan 6 14:50:46 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 6 Jan 95 11:50:46 PST
Subject: Faculty Positions at Salk
Message-ID: <9501061950.AA08001@salk.edu>

The Salk Institute for Biological Studies has recently formed a Center for Theoretical Neurobiology, with funding provided by the Alfred P. Sloan Foundation. The long-range goal of this Center is to develop theoretical foundations for modern neurobiology. To meet this goal we are seeking applications for faculty positions. Candidates should possess formal training in theory and expertise in physical sciences, mathematics, engineering, or computation, and they should be interested in applying quantitative skills to a wide range of contemporary problems in neurobiology. It is expected that Center theoreticians will develop close ties with existing experimental neurobiology laboratories at the Salk Institute, and will take a prominent role in the training of graduate students and postdoctoral fellows in this area. Women and minority candidates are particularly encouraged to apply. The Salk Institute is an equal-opportunity employer.
Participating Salk faculty and interests include:

Thomas Albright      Neural bases of visual perception and visually-guided behavior
Francis Crick        Theoretical work on the brain
Martyn Goulding      Neural development
Stephen Heinemann    Molecular biology of synaptic transmission
Christopher Kintner  Molecular biology of neurogenesis in amphibian embryos
Greg Lemke           Developmental neurobiology
Dennis O'Leary       Development of the vertebrate nervous system
Terrence Sejnowski   Computational neurobiology
Charles Stevens      Mechanisms of synaptic transmission
John Thomas          Neuronal development in Drosophila

Applications should include c.v., a statement of research goals and interests, and copies of relevant publications. Applications and requests for information should be sent to:

Thomas Albright
Sloan Center for Theoretical Neurobiology
The Salk Institute for Biological Studies
PO Box 85800
San Diego, CA 92186-5800

e-mail: sloan at salk.edu
FAX: 619-546-8526

-----

From Paul.Vitanyi at cwi.nl Fri Jan 6 13:19:20 1995
From: Paul.Vitanyi at cwi.nl (Paul.Vitanyi@cwi.nl)
Date: Fri, 6 Jan 1995 19:19:20 +0100
Subject: EuroCOLT'95: Program & Registration Form
Message-ID: <9501061819.AA00795=paulv@gnoe.cwi.nl>

%% 2nd EUROPEAN CONFERENCE ON COMPUTATIONAL LEARNING THEORY
%% MARCH 13-15, 1995, BARCELONA, SPAIN

\documentstyle[draft,proc]{article} %% comment in for two columns
\pagestyle{empty}
\vfuzz=4pt
\setlength{\topmargin}{0.25in} %% comment in for two columns
\setlength{\textheight}{7.48in}
\parindent=0pt
\newcommand{\when}[1]{\makebox[.75in][l]{\sf #1}}
%\newcommand{\stub}[1]{\typeout{*** Stub! ***} % $\langle${\bf Stub:} {\em #1}$\rangle$}
\newcommand{\topic}[1]{\smallskip{\bf #1}\enspace}
\newcommand{\sqr}[1]{{\vcenter{\hrule height.#1pt \hbox{\vrule width.#1pt height#1pt \kern#1pt \vrule width.#1pt} \hrule height.#1pt}}}
\newcommand{\thickbar}{\rule{3.1875in}{1pt}} %% comment in for two columns
\newcommand{\FILLHERE}{\_\hrulefill \ \\[5pt]}

\begin{document}
\vspace{.4in}
\begin{center}
{\Large 2nd European Conference on} \\
\vspace{.3in}
{\huge\bf Computational Learning Theory} \\
\vspace{.3in}
{\huge\tt EuroCOLT'95} \\
\vspace{.3in}
{\large Sponsored by} \\[1ex]
\vspace{.2in}
{\Large EU-ESPRIT NeuroCOLT} \\
\vspace{.1in}
{\Large EATCS} \\
\vspace{.1in}
{\Large IFIP WG 14.2} \\
\vspace{.1in}
{\Large Universitat Polit\`ecnica de Catalunya} \\
\vspace{2in}
{\large March 13 -- 15, 1995} \\
\vspace{.25in}
{\large Universitat Polit\`ecnica de Catalunya} \\
\vspace{.25in}
{\large Barcelona, Spain}
\end{center}
\newpage

%%%
%%% The Technical Program
%%%

\parskip 1.4ex

\begin{center} {\large\bf PROGRAM} \end{center}

{\bf RECEPTION/REGISTRATION:} \\
Sunday, March 12, from 18:00 to 22:00 at the C\`atedra Gaud{\'\i}

\frenchspacing

{\bf SESSION 1:} Monday, March 13, Morning\\
Chair: Paul Vit\'anyi

\when{9:00--9:50} {\em The discovery of algorithmic probability: A guide for the programming of true creativity (Invited Lecture),} R.J. Solomonoff (Oxbridge Research, USA)

\when{9:50--10:15} {\em A decision-theoretic generalization of on-line learning and an application to boosting}, Y. Freund, R.E. Schapire (AT\&T Bell Labs)

\when{10:15--10:40} {\em Online learning versus offline learning}, S. Ben-David (Technion), E. Kushilevitz (Technion), Y. Mansour (Tel Aviv Univ.)

\when{10:40--11:15} Break

\bigskip
{\bf SESSION 2:} Monday, March 13, Morning\\
Chair: Nicola Cesa-Bianchi

\when{11:15--11:40} {\em Learning distributions by their density levels - a paradigm for learning without a teacher}, S. Ben-David, M.
Lindenbaum (Technion)

\when{11:40--12:05} {\em Tight worst-case loss bounds for predicting with expert advice}, D. Haussler, J. Kivinen, M.K. Warmuth (UCSC)

\when{12:05--12:30} {\em On-line maximum likelihood prediction with respect to general loss functions}, K. Yamanishi (NEC Research, Princeton)

\bigskip{\bf LUNCH:} Starting at 13:00

\bigskip
{\bf SESSION 3:} Monday, March 13, Afternoon \\
Chair: Rusins Freivalds

{\sloppy \when{14:30--14:55} {\em Power of procrastination in inductive inference: How it depends on used ordinal notations}, A. Ambainis (Univ. Latvia) }

{\sloppy \when{14:55--15:20} {\em Learnability of Kolmogorov-easy circuit expressions via queries}, J.L. Balc\'azar (UPC, Barcelona), H. Buhrman (UPC Barcelona/CWI), M. Hermo (Univ. Pa{\'\i}s Vasco) }

\when{15:20--15:45} {\em Trading monotonicity demands versus mind changes}, S. Lange (HTWK Leipzig), T. Zeugmann (Kyushu Univ.)

\when{15:45--16:20} Break

\bigskip
{\bf SESSION 4:} Monday, March 13, Afternoon \\
Chair: Ricard Gavald\`a

\when{16:20--16:45} {\em Learning recursive functions from approximations}, J. Case (Univ. Delaware), S. Kaufmann (Univ. Karlsruhe), E. Kinber (Univ. Delaware), M. Kummer (Univ. Karlsruhe)

\when{16:45--17:10} {\em On the intrinsic complexity of learning}, R. Freivalds (Univ. Latvia), E. Kinber (Univ. Delaware), C.H. Smith (Univ. Maryland)

\when{17:10--17:35} {\em The structure of intrinsic complexity of learning}, S. Jain (Nat. Univ. Singapore), A. Sharma (Univ. New South Wales, Australia)

\when{17:35--18:00} {\em Kolmogorov numberings and minimal identification}, R. Freivalds (Univ. Latvia), S. Jain (Nat. Univ. Singapore)

\bigskip{\bf RUMP SESSION:}\ From 18:00 to 19:00

\bigskip{\bf BUSINESS MEETING:}\ From 20:00 to 21:30

\bigskip{\bf SESSION 5:} Tuesday, March 14, Morning\\
Chair: Ming Li

\when{9:00--9:50} {\em Stochastic complexity in learning (Invited Lecture),} J. Rissanen (IBM Almaden Research Center, USA)

\when{9:50--10:15} {\em Function learning from interpolation}, M. Anthony (LSE, London), P. Bartlett (ANU, Canberra, Australia)

\when{10:15--10:40} {\em Approximation and learning of convex superpositions}, L. Gurvits (Siemens Res., Princeton), P. Koiran (DIMACS, Rutgers Univ.)

\when{10:40--11:15} Break

\bigskip{\bf SESSION 6:} Tuesday, March 14, Morning\\
Chair: Jorma Rissanen

{\sloppy \when{11:15--11:40} {\em Minimum description length estimators under the optimal coding scheme}, V.G. Vovk (Research Council Cybernetics, Moscow) }

\when{11:40--12:05} {\em MDL learning of unions of simple pattern languages from positive examples}, P. Kilpel\"ainen, H. Mannila, E. Ukkonen (Univ. Helsinki)

\when{12:05--12:30} {\em A note on the use of probabilities by mechanical learners}, E. Martin, D. Osherson (IDIAP, Switzerland)

\bigskip{\bf LUNCH:} Starting at 13:00

\bigskip{\bf SESSION 7:} Tuesday, March 14, Afternoon \\
Chair: Hans-Ulrich Simon

\when{14:30--14:55} {\em Characterizing rational versus exponential learning curves}, D. Schuurmans (Univ. Toronto)

\when{14:55--15:20} {\em Is Pocket algorithm optimal?}, M. Muselli (CNR, Italy)

\when{15:20--15:45} {\em Some theorems concerning the free energy of (un)constrained stochastic Hopfield neural networks}, J. van den Berg, J.C. Bioch (Erasmus Univ.)

\when{15:45--16:20} Break

\bigskip{\bf SESSION 8:} Tuesday, March 14, Afternoon \\
Chair: Wolfgang Maass

\when{16:20--16:45} {\em A space-bounded learning algorithm for axis-parallel rectangles}, F. Ameur (H. Nixdorf Inst./Univ.
Paderborn) \when{16:45--17:10} {\em Learning decision lists and trees with equivalence queries}, H.-U. Simon (Univ. Dortmund) \bigskip{\bf SIGHTSEEING:}\ From 17:10 to 21:00 \bigskip{\bf BANQUET:}\ Starting at 21:00 \bigskip{\bf SESSION 9:} Wednesday, March 15, Morning \\ Chair: Kenji Yamanishi \when{9:00--9:50} {\em Polynomial bounds for VC dimension of sigmoidal neural nets (Invited Lecture)}, Angus Macintyre (Oxford University, UK) \when{9:50--10:15} {\em Average case analysis of a learning algorithm for $\mu$-DNF expressions}, M. Golea (Univ. Ottawa) \when{10:15--10:40} {\em Learning by extended statistical queries and its relation to PAC learning}, E. Shamir, C. Shwartzman (Hebrew Univ.) \when{10:40--11:15} Break \bigskip{\bf SESSION 10:} Wednesday, March 15, Morning \\ Chair: Martin Anthony \when{11:15--11:40} {\em Typed pattern languages and their learnability}, T. Koshiba (Fujitsu Labs, Kyoto) \when{11:40--12:05} {\em Learning behaviors of automata from shortest counterexamples}, F. Bergadano, S. Varricchio (Univ. Catania) \when{12:05--12:30} {\em Learning of regular expressions by pattern matching}, A. Brazma (Univ. Latvia) \when{12:30--12:55} {\em The query complexity of learning some subclasses of context-free grammars}, C. Domingo, V. Lavin (UPC, Barcelona) \bigskip{\bf LUNCH:} Starting at 13:00 \bigskip{\bf END OF CONFERENCE} \newpage \bigskip {\large\bf Conference Information} \topic{Location:} Barcelona is a city of about 3 million people located on Spain's Mediterranean shore. Founded by the Romans, Barcelona has long been a center of culture and the arts. Fine Romanesque art and architecture from the Middle Ages can be found in Barcelona and surrounding Catalonia. At the turn of the century, Barcelona was a great center of art nouveau. Among its many contributors, the names of Gaud{\'\i}, Picasso, Dal{\'\i}, Mir{\'o} or T{\`a}pies have gained universal respect, and their works can be admired in the streets and local museums. Today, Barcelona is a vibrant, pulsating city offering a varied cultural life, many shopping areas, and a great variety of restaurants. On the occasion of hosting the 1992 Olympic Games, the city underwent major urban renewal, and the remodelled seafront areas are now major attractions. \topic{Conference Site:} The conference will be held at the North Campus of the Universitat Polit\`ecnica de Catalunya (UPC). To reach it coming from downtown, take the subway line 3 (green), direction {\em Zona Universit\`aria,\/} to the second-to-last stop {\em (Palau Reial),\/} then follow the signs; total travel time is about 30 minutes. Formal sessions will take place at the Aula Master of the North Campus. Rump sessions will be scheduled at the conference and may take place in a different room. \topic{Invited Lectures:} There will be invited lectures by Ray Solomonoff (Oxbridge Research), Jorma Rissanen (IBM Almaden), and Angus Macintyre (Oxford Univ.) \topic{Social Program:} {\sl Sunday Night:} Reception and registration at the {\em C\`atedra Gaud{\'\i},\/} Avda. Pedralbes~7, 18:00--22:00. This is near the conference site. Coming from downtown, take the subway line 3 (green) to the {\em Maria Cristina} stop, then follow the signs. {\sl Monday Night:} Business meeting at the conference site, 20:00--21:30. {\sl Tuesday Night:} Banquet at {\em El Gran Caf\'e}, starting at 21:00. The {\em Caf\'e\/} is located at Aviny\'o~9, a few minutes' walk from the conference hotels. \topic{Weather:} Weather in March is usually sunny, but be prepared for rain.
Daytime temperature should be between $10^\circ$C and $22^\circ$C. \topic{Getting there:} There are trains running every 30 minutes from the airport to Pla\c{c}a Catalunya, the central square of Barcelona, close to the conference hotels. Travel time is about 25 minutes. There is also an Airport Bus linking the airport terminals to Pla\c{c}a Catalunya. A taxi from the airport to the hotels should cost 2500--3000 Pta under normal traffic conditions. \medskip {\large\bf Accommodation} Reservations have been made in the following three hotels: {\sl Hotel Catalunya (**):} Santa Anna, 24. Phone +34-3-301-9120. Fax +34-3-302-7870. {\sl Hotel Montecarlo (***):} La Rambla, 124. Phone +34-3-412-0404. Fax +34-3-318-7323. {\sl Hotel Rivoli Ramblas (****):} La Rambla, 128. Phone +34-3-412-0988. Fax +34-3-318-9133. All three are quite close to one another in Barcelona's Old Quarter, the liveliest part of the city. The following are the conference prices in Spanish Pesetas (Pta), including VAT. For the Catalunya and Rivoli, these prices also include breakfast. \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c|} \hline Price & Catalunya & Montecarlo & Rivoli \\ \hline Single & 3250 & 6740 & n/a \\ \hline Double & 4500 & 9630 & 13900 \\ \hline Double, & 4050 & 7560 & 10700 \\ one occup. & & & \\ \hline \end{tabular} \end{center} \noindent For reservations, use the procedure described under {\em Registration and hotel reservation}, or send a fax directly to the hotel. The hotels are offering special conference prices (conditioned on a minimum occupancy), so make sure you mention EuroCOLT'95 if you contact them directly. Early reservation is recommended. The conference organization does not handle hotel payments. Please pay the hotels directly when departing. They will accept major credit cards. \medskip {\large\bf Registration \& Hotel Reservation} \smallskip In order of preference: {\sl WWW:} Fill in the registration form at \begin{center} {\tt http://goliat.upc.es/{\large\tt \~{}}{\kern-2pt}eurocolt/reg-form.html} \end{center} {\sl E-mail:\/} Get the source of this brochure by anonymous ftp, as described below. Fill in the registration form and e-mail it to {\tt eurocolt at lsi.upc.es} {\sl Or else:} Fill in the registration form below and send it by fax or air mail to the organizers. \noindent Your registration will be confirmed upon receipt of your payment. \medskip {\large\bf Payment} \smallskip The conference fee includes proceedings, lunches for three days, and all social events. \begin{center} \begin{tabular}{lcc}\footnotesize & Before & After \\ Price (in Pta) & Feb. 10 & Feb. 10 \\[1ex] Normal Conference Fee & 30000 & 34000 \\ Student Fee & 15000 & 17000 \\ Extra Banquet Ticket & 3000 & 3500 \\ \end{tabular} \end{center} Extra proceedings will be available on site and cost about 7000 Pta each. Transfer the amount of your registration ({\em not\/} hotel) to: \begin{tabular}{l} Account Name: EuroCOLT'95 \\ Bank: Caixa d'Estalvis i Pensions de Barcelona \\ Account \#: 2100--0797--91--0200096977 \\ \end{tabular} \topic{Combining the NeuroCOLT meeting with EuroCOLT'95:} The 1st yearly meeting of the EU ESPRIT NeuroCOLT Working Group is planned back-to-back with EuroCOLT'95 in Barcelona, March 9--11. Participants can arrange the same hotels and joint travel at their convenience.
\medskip {\large\bf For more information} \smallskip {\sl WWW:} Connect to \begin{center} {\tt http://goliat.upc.es/{\large\tt \~{}}{\kern-2pt}eurocolt/info.html} \end{center} {\sl ftp:} login as anonymous to {\tt bloom.upc.es}, go to directory {\tt pub/eurocolt} {\sl E-mail:} {\tt eurocolt at lsi.upc.es} {\sl Or else:} contact the organizers at \\ \begin{center} \begin{tabular}{l} Ricard Gavald\`a -- EuroCOLT'95\\ Dept. of Software (LSI) \\ Universitat Polit\`ecnica de Catalunya \\ Pau Gargallo 5 \\ 08028 Barcelona, Spain \\ Phone: +34-3-401-7008 \\ Fax: +34-3-401-7014\\ E-mail: {\tt gavalda at lsi.upc.es} \end{tabular} \end{center} \medskip {\large\bf Acknowledgments} \smallskip {\sloppy \topic{History and Sponsors:} The previous and inaugural European Conference on Computational Learning Theory was held 20--22 December 1993 at Royal Holloway, University of London. The EuroCOLT'95 conference is sponsored by the EATCS, by the European Union through NeuroCOLT ESPRIT Working Group Nr. 8556, by IFIP through SSGFCS WG 14.2., and by the Universitat Polit\`ecnica de Catalunya. } \topic{Local Arrangements Chairs:} Ricard Gavald\`a (UPC, Barcelona), Felipe Cucker (Univ. Pompeu Fabra, Barcelona) \topic{Program Committee:} M. Anthony (LSE, Univ. London, UK), E. Baum (NEC Research Inst., Princeton), N. Cesa-Bianchi (Univ. Milano, Italy), J. Koza (Stanford Univ, Palo Alto, USA), M. Li (Univ. Waterloo, Canada), S. Muggleton (Oxford University, UK), W. Maass (TU Graz, Austria), J. Rissanen (IBM Almaden, USA), H.-U. Simon (Univ. Dortmund, Germany), K. Yamanishi (NEC, Princeton, USA), L. Valiant (Harvard Univ, Cambridge, USA), P. Vit\'anyi (Chair, CWI/Univ. Amsterdam, Netherlands), R. Freivalds (Univ. Riga, Latvia) \topic{Steering Committee:} M. Anthony (LSE, Univ. London, UK), R. Gavald\`a (UPC, Barcelona), W. Maass (TU Graz, Austria), J. Shawe-Taylor (RHBNC, Univ. London, UK), H.-U. Simon (Univ. Dortmund, Germany), P. Vit\'anyi (CWI \& Univ.\ Amsterdam). \newpage %% %% TO REGISTER VIA E-MAIL %% 1. cut here %% 2. fill in the boxes and replace all occurrences of the macro \FILLHERE %% with your data %% 3. e-mail to eurocolt at lsi.upc.es before Feb. 10 %% %% Recall that registration via WWW is also possible %% \begin{center}\large\bf REGISTRATION FORM \end{center} \tt Last name \FILLHERE First name \FILLHERE Affiliation \FILLHERE Mailing address \FILLHERE \FILLHERE \FILLHERE EMail address \FILLHERE Vegetarian [ ] \\[5pt] Registration fee \hspace{1.5cm} Pta\ \FILLHERE Extra Banquet Ticket(s) \hspace{0.2cm} Pta\ \FILLHERE Total \hspace{3.5cm} Pta\ \FILLHERE Your registration will be confirmed upon receipt of payment. \thickbar \\[5pt] I want a [ ] Single room \ \ \ [ ] Double room \\[5pt] [ ] Double room, one occupant in Hotel [ ] Catalunya\ \ \ [ ] Montecarlo \\[5pt] [ ] Rivoli arriving on March \FILLHERE and leaving on March \FILLHERE If sharing a double room, name of roommate (or 'anyone'): \\ \FILLHERE \end{document}

From chaos at gojira.Berkeley.EDU Fri Jan 6 20:08:15 1995 From: chaos at gojira.Berkeley.EDU (Jim Crutchfield) Date: Fri, 6 Jan 95 17:08:15 PST Subject: Graduate Research Positions at the Santa Fe Institute Message-ID: <9501070108.AA26465@gojira.Berkeley.EDU> Our group at the Santa Fe Institute has been applying evolutionary computation techniques to design cellular automata and other decentralized multiprocessor systems to perform computations.
Our group's work has two main thrusts: understanding how emergent computation can occur in spatially-extended decentralized systems, and understanding how an evolutionary process can produce complex, coherent behavior in such a system. Part of what we are doing in this context is formulating a mathematical theory of evolutionary search on landscapes, taking tools from statistical mechanics and stochastic process theory. Another novel aspect of our approach is the development and application of new methods to detect and analyze the computational structure in the evolved systems. We believe this work will eventually lead to (1) a better understanding of how evolution interacts with nonlinear decentralized systems in nature to produce adaptive coordinated behavior and (2) biologically-inspired methods for the automated design of parallel and distributed computing systems. We are searching for two graduate students interested in pursuing Ph.D.s on this project. This work is interdisciplinary: relevant fields include machine learning, theory of computation (especially in parallel decentralized systems), architectures for distributed parallel computation, nonlinear dynamics and statistical physics, evolutionary biology, and the mathematics of stochastic processes. We (the project leaders), Jim Crutchfield and Melanie Mitchell, are respectively a physicist and a computer scientist. We will consider students in any of the fields listed above, and will help formulate dissertation topics appropriate for the students' particular fields. The Santa Fe Institute (SFI) is an interdisciplinary scientific research center in Santa Fe, New Mexico, whose research focuses on the sciences of "complexity." Research programs include adaptive computation, economics, theoretical biology, theoretical immunology, theoretical ecology, anthropology, neurobiology, and foundations of complex systems. There is a small semi-permanent faculty along with a larger external faculty, several postdocs, and many other prominent scientists from many universities around the world who spend extended periods at the Institute. Included in this group are many Nobel Laureates and MacArthur Fellows. Although SFI does not grant degrees, it has a number of resident graduate research assistants who are officially enrolled at degree-granting institutions but do their dissertation research at SFI under the guidance of an Institute faculty member. We are looking for students who have successfully completed their graduate course work, are ready to engage in independent research, and are willing to spend two to three years at SFI working on a dissertation, starting this coming summer or fall (1995). We will provide funding to cover housing and a living stipend. The student must have an official advisor at their home institution who is willing to have the student perform his or her work at SFI under our guidance. Interested students should send a letter stating their interest in this position along with a resume including (1) a synopsis of coursework and grades; (2) a synopsis of computer programming experience and proficiency in programming languages; (3) a synopsis of research experience, if any (and publications, if any); and (4) any other information relevant to the student's application. These should be sent (preferably by email) to: James P.
Crutchfield
Physics Department
University of California
Berkeley, California 94720-7300, USA
Office: 510-642-1287
FAX: 510-643-8497
email: chaos at gojira.berkeley.edu

The student should also arrange for two letters of recommendation to be sent to this address (also preferably by email). For more information and for our publications on this project, see our group's Web page (http://www.santafe.edu/projects/evca). Also see the Computational Mechanics Web page (http://www.santafe.edu/projects/CompMech). For more information on the Santa Fe Institute, see the SFI's Web page (http://www.santafe.edu).

JAMES P. CRUTCHFIELD
Research Physicist (also Research Professor at the SFI)
Physics Department, University of California
173 Birge Hall, Berkeley, California 94720-7300
chaos at gojira.berkeley.edu
Office: 510-642-1287  FAX: 510-643-8497
http://www.santafe.edu/~jpc

MELANIE MITCHELL
Research Professor and Director, Adaptive Computation Program
(also Research Assistant Professor at the University of New Mexico)
Santa Fe Institute
1399 Hyde Park Road, Santa Fe, New Mexico 87501
mm at santafe.edu
505-984-8800  FAX: 505-982-0565
http://www.santafe.edu/~mm

From vg197 at neutrino.pnl.gov Fri Jan 6 20:17:08 1995 From: vg197 at neutrino.pnl.gov (Sherif Hashem) Date: Fri, 06 Jan 1995 17:17:08 -0800 (PST) Subject: Workshop announcement Message-ID: <9501070117.AA13861@neutrino.pnl.gov> WORKSHOP ON ENVIRONMENTAL AND ENERGY APPLICATIONS OF NEURAL NETWORKS Battelle Auditorium, Richland, Washington March 30-31, 1995 The Environmental Molecular Sciences Laboratory (EMSL), Pacific Northwest Laboratory (PNL), and the Richland Section of the Institute of Electrical and Electronics Engineers (IEEE) are sponsoring a workshop to bring together scientists and engineers interested in investigating environmental and energy applications of artificial neural networks (ANNs). Objectives: ----------- The main objectives of this workshop are: * to provide a forum for presenting and discussing environmental and energy applications of neural networks. * to serve as a means for investigating the potential uses of neural networks in the U.S. Department of Energy's environmental cleanup efforts and energy programs. * to promote collaboration between researchers in national laboratories, academia, and industry to solve real-world problems. Topics: ------- * Environmental applications (modeling and predicting land, air, and water pollution; environmental sensing; spectroscopy; hazardous waste handling and cleanup). * Energy applications (environmental monitoring for power systems, modeling and control of power plants, power load forecasting, fault location and diagnosis of power systems). * Commercial and industrial applications (environmental, economic, and financial time series analyses and forecasting; chemical process modeling and control). * Medical applications (analysis of environmental health effects, modeling biological systems, medical image analysis, and medical diagnosis). Who should attend? ------------------ This workshop should be of interest to researchers applying ANNs in energy and environmental sciences and engineering, as well as scientists and engineers who see some potential for the application of ANNs to their work. Dates: ------ The workshop will be held on March 30-31, 1995, from 8:00 am to 5:00 pm. An introductory tutorial on neural networks will be offered on March 29, 1995, and is recommended for participants who are new to neural networks. Deadline for contributed presentations: February 10, 1995.
Notification of acceptance will be mailed by: February 24, 1995. Cost: ----- The registration fee is $120 ($75 for students). Early registration by March 1, 1995, is $100 ($50 for students). For More Information, Contact: ------------------------------ Sherif Hashem Environmental Molecular Sciences Laboratory Pacific Northwest Laboratory P.O. Box 999, M/S K1-87 Richland, WA 99352 Telephone: 509-375-6995 Fax: 509-375-6631 Internet: s_hashem at pnl.gov World Wide Web URL: http://www.emsl.pnl.gov:2080/people/bionames/s_hashem.html Also see the workshop's homepage on the World Wide Web at URL: http://www.emsl.pnl.gov:2080/docs/cie/neural/workshop2/homepage.html ____________________________________________________________________________ REGISTRATION FORM

Name:      ____________________________
Address:   ____________________________
           ____________________________
           ____________________________
           ____________________________
Telephone: ____________________________
Fax:       ____________________________
E-mail:    ____________________________

[ ] I am interested in attending the neural network tutorial (no additional fee is required). [ ] I am interested in a bus tour of the Hanford Site (a Department of Energy site located north of Richland, Washington). Registration Fee: ----------------- Regular: $100 ($120 after March 1, 1995). Student: $50 ($75 after March 1, 1995). Please make your check payable to Battelle. Mail the completed form and check to: Janice Gunter WEEANN Registration Pacific Northwest Laboratory P.O. Box 999, M/S K1-87 Richland, WA 99352 ____________________________________________________________________________

From ping at psy.cuhk.hk Fri Jan 6 23:04:19 1995 From: ping at psy.cuhk.hk (Ping Li) Date: Sat, 7 Jan 1995 12:04:19 +0800 (HKT) Subject: About sequential learning (or interference) Message-ID: Most of the previous efforts to reduce catastrophic interference seem to have focused on modifying the network architecture (though with some exceptions, e.g., Sharkey's and McRae's work). I wonder to what extent catastrophic interference may be reduced if one manipulates the training data in some way. For example, my early study (CRL TR 9203) found that if the network is presented with the full data, catastrophic interference occurs. Some of my preliminary results now suggest that if one uses an "incremental learning" schedule (input data enters into training piece by piece; there are many reasons, for a developmental psychologist, why this kind of increment is necessary --- see Elman, 1993 Cognition; Plunkett & Marchman, 1993 Cognition), then catastrophic interference may be reduced. This also fits well with what Jay McClelland suggested earlier in his message:

> According to this view, cortical (and some other non-hippocampal)
> systems learn slowly, using what I call 'interleaved learning'.
> Weights are adjusted a small amount after each experience, so that
> the overall direction of weight change is governed by the structure
> present in the ensemble of events and experiences. New material can
> be added to such a memory without catastrophic interference if it
> is added slowly, interleaved with ongoing exposure to other events and
> experiences.

Happy New Year!
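To make the contrast between the two schedules concrete, here is a toy sketch (an illustration only, not code from any of the studies cited; the single logistic unit and all names such as make_task are invented): train on task A, then on task B either alone or with a small sample of task-A items mixed back in, and check how much of task A survives.

import numpy as np

def make_task(seed, n=300, d=16):
    r = np.random.default_rng(seed)
    w = r.normal(size=d)                     # hidden "teacher" direction
    X = r.normal(size=(n, d))
    return X, (X @ w > 0).astype(float)      # binary training patterns

def train(w, X, y, epochs=200, lr=0.5):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # single logistic unit
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on log-loss
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

Xa, ya = make_task(1)
Xb, yb = make_task(2)
w0 = train(np.zeros(16), Xa, ya)             # learn task A first

w_seq = train(w0.copy(), Xb, yb)             # task B alone: no rehearsal of A

Xmix = np.vstack([Xb, Xa[::10]])             # task B plus a 10% trickle of A
ymix = np.concatenate([yb, ya[::10]])
w_mix = train(w0.copy(), Xmix, ymix)         # "interleaved" schedule

print("task A after B alone:      ", accuracy(w_seq, Xa, ya))
print("task A after interleaved B:", accuracy(w_mix, Xa, ya))

How much of task A is retained depends on how related the two tasks happen to be; the sketch is only meant to make the two training schedules precise, not to model the recurrent networks discussed in the studies above.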
**********************************************************************
Ping LI                                 Email: pingli at cuhk.hk
Department of Psychology                Phone: (852)609-6576
The Chinese University of Hong Kong     Fax:   (852)603-5019
**********************************************************************

From terry at salk.edu Sun Jan 8 00:46:22 1995 From: terry at salk.edu (Terry Sejnowski) Date: Sat, 7 Jan 95 21:46:22 PST Subject: Faculty Positions at UCSD Message-ID: <9501080546.AA21969@salk.edu> Cognitive Neuroscientist: UNIVERSITY OF CALIFORNIA, SAN DIEGO. The Psychology Department at UCSD anticipates hiring an Assistant Professor (tenure track) in Cognitive Neuropsychology/Cognitive Neuroscience. Candidates must have a Ph.D. and be able to conduct independent, publishable research and teach undergraduate and graduate classes in their area of specialization. Salary commensurate with qualifications and based on U.C. salary scales. Candidates should send a curriculum vitae, reprints, and names of three referees to Cognitive Neuroscience Search Committee, Department of Psychology, 0109, University of California, San Diego, La Jolla, CA 92093-0109. Immigration status of non-citizens should be stated in the vita. Complete applications received by January 16, 1995 will receive full consideration. Position subject to funding availability. Quantitative Methodologist: UNIVERSITY OF CALIFORNIA, SAN DIEGO. The Department of Psychology at UCSD anticipates hiring an Assistant Professor (tenure track) in Quantitative Methodology, with a research program in any substantive area of Psychology. Candidates must have a Ph.D. and be able to conduct independent, publishable research and teach undergraduate and graduate classes in their area of specialization. Salary commensurate with qualifications and based on U.C. salary scales. Candidates should send a curriculum vitae, reprints, and names of three referees to Quantitative Methodology Search Committee, Department of Psychology, 0109, University of California, San Diego, La Jolla, CA 92093-0109. Immigration status of non-citizens should be stated in the vita. Complete applications received by January 16, 1995 will receive full consideration. Position subject to funding availability. Biological Psychologist: UNIVERSITY OF CALIFORNIA, SAN DIEGO. The Department of Psychology at UCSD anticipates hiring an Assistant Professor (tenure track) in Biological Psychology. Candidates must have a Ph.D. and be able to conduct independent, publishable research and teach undergraduate and graduate classes in their area of specialization. Salary commensurate with qualifications and based on U.C. salary scales. Candidates should send a curriculum vitae, reprints, and names of three referees to Biological Psychology Search Committee, Department of Psychology, 0109, University of California, San Diego, La Jolla, CA 92093-0109. Immigration status of non-citizens should be stated in the vita. Complete applications received by January 16, 1995 will receive full consideration. Position subject to funding availability. The University of California is an Affirmative Action/Equal Opportunity Employer. -----

From dsilver at csd.uwo.ca Sat Jan 7 21:30:21 1995 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Sat, 7 Jan 95 21:30:21 EST Subject: About sequential learning (or interference) In-Reply-To: <9501080218.AA00899@church.ai.csd.uwo.ca.csd.uwo.ca>; from "Danny L.
Silver" at Jan 7, 95 9:18 pm Message-ID: <9501080230.AA00923@church.ai.csd.uwo.ca.csd.uwo.ca> For me the significance of interference in neurally inspired learning systems is the message that an effective learner must not only be capable of learning a single task from a set of examples but must also be capable of effectively integrating variant task knowledge at a meta-level. This falls in line with McClelland's recent papers on consolidation of hippocampal memories into cortical regions; his "interleaved learning". This is a delicate and complex process which undoubtedly occurs during sleep. In tune with Sebastian Thrun and Tom Mitchell's efforts on "Life Long Learning", I feel the next great step in learning theory will be the discovery of methods which allow our machine learning algorithms to take advantage of previously acquired task knowledge. At UWO we have been investigating methods of storing neural net task knowledge in an interleaved fashion with other, previously learned tasks, so as to create an "experience database". This database can then be used to prime the initial weights of the neural net for a new task. Thus far, studies on simple boolean logic tasks have shown promise. Incremental learning is possible (with decreases in learning times by 1 or 2 orders of magnitude), but dependent upon task order. Thus one of the key aspects of consolidation, so as to overcome interference, appears to be a reordering of learned tasks. Have others (besides those authors I have mentioned) tried methods of task consolidation at a meta level? ... Danny --
=========================================================================
= Daniel L. Silver     University of Western Ontario, London, Canada    =
= N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b                         =
= dsilver at csd.uwo.ca  H: (519)473-6168  O: (519)679-2111 (ext.6903)  =
=========================================================================
REF: McClelland, J., McNaughton, B. & O'Reilly, R. "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory". Technical Report PDP.CNS.94.1, Carnegie Mellon University and The University of Arizona, March, 1994. Thrun, S. & Mitchell, T. "Lifelong Robot Learning". Technical Report IAI-TR-93-7, Universitat Bonn, Institut fur Informatik II, Germany, July, 1993. Thrun, S. "A Lifelong Learning Perspective for Mobile Robot Control"; Proceedings of the IEEE Conference on Intelligent Robots and Systems, Munich, Germany, Sept, 1994. Thrun, S. & Mitchell, T. "Learning One More Thing". Technical Report CMU-CS-94-184, Carnegie Mellon University, Pittsburgh, PA, Sept, 1994.

From 72773.1646 at compuserve.com Sun Jan 8 22:11:25 1995 From: 72773.1646 at compuserve.com (SAM FARAG) Date: 08 Jan 95 22:11:25 EST Subject: NEURAL NETWORKS SPECIALIST Message-ID: <950109031125_72773.1646_EHL141-1@CompuServe.COM> The Switchgear and Motor Control Center Division of Siemens Energy and Automation, Inc. in Raleigh, NC has an immediate opening for a neural networks specialist, especially in feed-forward networks and back propagation. The ideal candidate will have a master's degree in electrical engineering or equivalent experience, 3 to 5 years of industry experience, and must be a legal resident of the USA. Your responsibilities will include developing, implementing, and testing neural network, statistical, and/or machine-based algorithms for electrical machine monitoring and diagnostics.
Experience in embedded controllers, assembly, high-level languages, digital signal processing, and hardware design is highly desirable. Siemens AG is a worldwide supplier of electrical and electronic devices with sales in excess of $4 billion in the US and $40 billion worldwide. If you are interested, please send your resume via e-mail (text only) or US mail to: Sam Farag, Siemens Energy & Automation, 7000 Siemens Drive, Wendell, NC 27626 email: 72773.1646 at Compuserve.com

From jon at maths.flinders.edu.au Mon Jan 9 13:07:21 1995 From: jon at maths.flinders.edu.au (Jonathan Baxter) Date: Tue, 10 Jan 1995 04:37:21 +1030 Subject: Sequential learning. Message-ID: <199501091807.AA26151@calvin.maths.flinders.edu.au> Danny Silver writes:
>
> For me the significance of interference in neurally inspired learning systems
> is the message that an effective learner must not only be capable
> of learning a single task from a set of examples but must also be
> capable of effectively integrating variant task knowledge at a meta-
> level. This falls in line with McClelland's recent papers on consolidation
> of hippocampal memories into cortical regions; his "interleaved learning".
> This is a delicate and complex process which undoubtedly occurs during sleep.
> In tune with Sebastian Thrun and Tom Mitchell's efforts on "Life Long
> Learning" I feel the next great step in learning theory will be the discovery
> of methods which allow our machine learning algorithms to take advantage of
> previously acquired task knowledge.

I could not agree more. And with all modesty, the 'next great step' has already begun with the work in my recently completed PhD thesis, entitled 'Learning Internal Representations'. The thesis can be retrieved via anonymous ftp from the neuroprose archive (Thesis subdirectory) -- baxter.thesis.ps.Z (112 pages). In the thesis I examine in detail one important method of enabling machine learning algorithms to take advantage of previously acquired task knowledge, namely by learning an internal representation. The idea behind learning an internal representation is to notice that for many common machine learning problems (such as character and speech recognition) there exists a transformation from the input space of the problem (the space of all images of characters or the space of speech signals) into some other space that makes the learning problem much easier. For example, in character recognition, if a map from the input space can be found that is insensitive to rotations, dilations, and even writer-dependent distortions of the characters, and such a map is used to 'preprocess' the input data, then the learning problem becomes quite trivial (the learner only needs to see one positive example of each character to be able to classify all future characters perfectly). I argue in the thesis that the information required to learn such a representation cannot in general be contained in a single task: many learning tasks are required to learn a good representation. Thus, the idea is to sample from many similar learning tasks to first learn a representation for a particular learning domain, and then use that representation to learn future tasks from the same domain. Examples of similar tasks in the character recognition learning domain are classifiers for individual characters (which includes characters from other alphabets), and in the speech recognition domain individual word classifiers constitute the similar tasks.
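The core mechanism is easy to sketch in code (the following is an illustration only, not code from the thesis; the layer sizes, the synthetic task family, and all names are invented): several related tasks share one hidden layer while each keeps its own output weights, so gradients from every task shape the common representation, and a novel task from the same family can then be fit by training only a fresh output head on the frozen shared features.

import numpy as np

rng = np.random.default_rng(0)
D, H, T, N = 10, 6, 5, 100           # input dim, hidden dim, tasks, examples/task

# Synthetic "domain": every task is a different readout of one true
# low-dimensional representation R of the input.
R = rng.normal(size=(3, D))
tasks = []
for _ in range(T):
    X = rng.normal(size=(N, D))
    y = np.tanh(X @ R.T) @ rng.normal(size=3)   # this task's target function
    tasks.append((X, y))

W = rng.normal(scale=0.1, size=(H, D))          # shared representation layer
V = rng.normal(scale=0.1, size=(T, H))          # one linear output head per task

lr = 0.05
for epoch in range(2000):
    for t, (X, y) in enumerate(tasks):          # interleave over the tasks
        h = np.tanh(X @ W.T)                    # shared features
        err = h @ V[t] - y                      # residual of task t's head
        V[t] -= lr * h.T @ err / N              # each head sees only its task
        W -= lr * ((err[:, None] * V[t]) * (1 - h**2)).T @ X / N  # all tasks train W

# A novel task from the same domain: freeze W and fit just a new head,
# here by least squares on only 20 examples.
Xn = rng.normal(size=(20, D))
yn = np.tanh(Xn @ R.T) @ rng.normal(size=3)
v_new, *_ = np.linalg.lstsq(np.tanh(Xn @ W.T), yn, rcond=None)

If the learned W has captured the structure shared by the tasks, a handful of examples suffices for the new head, whereas a representation trained on a single task generally would not transfer this way -- roughly the effect that the sample-complexity results described next quantify.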
It is proven in chapter three of the thesis that for suitable learning domains (of which speech and character recognition should be two examples), the number of examples of each task required for good generalisation decreases linearly with the number of tasks being learnt, and that once a representation has been learnt for the learning domain, far fewer examples of any novel task will be required for good generalisation. In fact, depending on the domain, there is no limit to the speedup in learning that can be achieved by first learning an internal representation. There are two levels at which representation learning can be viewed as applying to human learning. At the bottom level we can assume that the tasks our evolutionary ancestors have had to learn in order to survive have resulted in humans being born with built-in representations that are useful for learning the kinds of tasks necessary for survival. An example of this is the edge-detection processing that takes place early in the visual pathway; among other things this should be useful for identifying the boundaries of surfaces in our environment and hence provides a big boost to the process of learning not to bump into those surfaces. At a higher level it is clear that we build representations on top of these lower-level representations during our lifetimes. For example, I grew up surrounded by predominantly caucasian faces and hence learnt a representation that allows me to learn individual caucasian faces quickly (in fact with one example in most cases). However, although I am more able now, when I originally was presented with images of negro faces I was less able to distinguish them. Thus I have had to re-learn my 'face recognition' representation to accommodate learning of negro faces. In chapter four of my thesis I show how gradient descent may be used to learn internal representations, and I present several experiments supporting the theoretical conclusions that learning more tasks from a domain reduces the number of examples per task, and that once an effective representation is learnt, the number of examples required of future tasks is greatly reduced. It also turns out that the ideas involved in representation learning can be used to solve an old problem in vector quantization: namely, how to choose an appropriate distortion measure for the quantization process. This is discussed in chapter five, in which the definition of the canonical distortion measure is introduced and is shown to be optimal in a very general sense. It is also shown how a distortion measure may be learnt using the representation learning techniques introduced in the previous chapters (a toy illustration of such a task-induced distortion measure appears below). In the final chapter the ideas of chapter five are applied back to the problem of representation learning to yield an improved error measure for the representation learning process, and some experiments are performed demonstrating the improvement. Although learning an internal representation is only one way of enabling information from a body of tasks to be used when learning a new task, I believe it is the one employed extensively by our brains and hence the work in this thesis should provide an appropriate theoretical framework in which to address problems of sequential learning in humans, as well as providing a practical framework and set of techniques for tackling artificial learning problems for which there exists a body of similar tasks.
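The flavour of such a task-induced distortion measure can be sketched as follows (an illustration only; the precise definition and optimality statement are those of chapter five, not this toy, and the task family here is invented): two inputs count as close exactly when the tasks of the environment treat them alike, regardless of their Euclidean distance.

import numpy as np

rng = np.random.default_rng(1)
# A made-up environment of 50 related tasks over 4-dimensional inputs.
fam = [lambda x, w=rng.normal(size=4): np.tanh(x @ w) for _ in range(50)]

def distortion(x1, x2):
    # Average squared disagreement of the task family on x1 versus x2;
    # inputs the tasks cannot tell apart get distortion near zero.
    return float(np.mean([(f(x1) - f(x2)) ** 2 for f in fam]))

a, b = rng.normal(size=4), rng.normal(size=4)
print(distortion(a, b), distortion(a, a))   # the second is exactly 0.0

A vector quantizer built on a measure of this kind clusters inputs by how the task family responds to them rather than by raw geometric distance, which is the sense of optimality referred to above.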
However, it is likely that other methods may be at play in human sequential learning and may also be useful in artificial learning, so at the end of chapter three in my thesis I present a general theoretical framework for tackling any kind of learning problem for which prior information is available in the form of a body of similar learning tasks. Jonathan Baxter Department of Mathematics and Statistics, The Flinders University of South Australia. jon at maths.flinders.edu.au

From meeden at cs.swarthmore.edu Mon Jan 9 14:35:34 1995 From: meeden at cs.swarthmore.edu (Lisa Meeden) Date: Mon, 9 Jan 1995 14:35:34 -0500 (EST) Subject: task consolidation Message-ID: <199501091935.OAA21885@cilantro.cs.swarthmore.edu> Danny Silver asked whether others had tried methods of task consolidation at a meta level. In my dissertation I used an Elman-style recurrent network trained with reinforcement learning to control a simple robot with a few goals. At the time of goal achievement, the hidden layer potentially reflects a consolidated history of the perceptual states encountered during the process of solving the task. I argued that these hidden layer activations could serve as a sort of plan for achieving the goal and called them protoplans. To investigate the efficacy of protoplans, a transfer of learning experiment was done. The protoplans learned in one controller network were saved in an associative memory and used to guide a second controller network as it learned the same task from scratch. The associative memory mapped the precursor sensor states of a protoplan to the protoplan itself. Controllers trained with protoplans instead of goals as input converged more quickly on good solutions than the original controllers trained with goals. Protoplans were able to guide the robot's behavior by marking the important moments in the interaction with the environment when a switch in behavior should occur. This kind of timing information was indirect--no specific action was indicated--but knowing when to change from a particular strategy to a new one can be very important information. For more details on these experiments see chapter 5 of my thesis which is available at: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/Thesis/meeden.thesis.ps.Z -- Lisa Meeden Computer Science Program Swarthmore College 500 College Ave Swarthmore, PA 19081 (610) 328-8565 meeden at cs.swarthmore.edu

From john at dcs.rhbnc.ac.uk Mon Jan 9 09:27:28 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 09 Jan 95 14:27:28 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199501091427.OAA14403@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): three new reports available ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-015: ---------------------------------------- Grammar Inference and the Minimum Description Length Principle by Peter Gr\"{u}nwald, Centrum voor Wiskunde en Informatica, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands Abstract: We describe a new abstract model for the computational learning of grammars. The model deals with a learning process in which an algorithm is given as input a large set of training sentences that belong to some grammar $G$. The algorithm then tries to infer this grammar. Our model is based on the well-known {\em Minimum Description Length Principle}.
It turns out that our model is, in a certain sense, a more general version of two seemingly different, well-known models. Also, two other existing models turn out to be very similar to ours. We have made an initial implementation of the algorithm implied by the model. We have tried this implementation on natural language texts, and we give a short description of the results of these tests. The results of testing the algorithm in practice are quite interesting, but unfortunately they are neither encouraging nor discouraging enough to indicate whether our method of grammar induction, which hardly makes any use of linguistic principles and makes no use at all of semantic information, is really worth pursuing further. ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-017: ---------------------------------------- Bounds for the Computational Power and Learning Complexity of Analog Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria Abstract: It is shown that high order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated for boolean inputs and outputs by neural nets of a somewhat larger size and depth with Heaviside gates and weights from $\{-1,0,1\}$. This provides the first known upper bound for the computational power of the former type of neural nets. It is also shown that in the case of first order nets with piecewise linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits, without changing the boolean function that is computed by the neural net. In order to prove these results we introduce two new methods for reducing nonlinear problems about weights in multi-layer neural nets to linear problems for a transformed set of parameters. These transformed parameters can be interpreted as weights in a somewhat larger neural net. As another application of our new proof technique we show that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately learnable (in Valiant's model for PAC-learning). ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-021: ---------------------------------------- On the Computational Complexity of Networks of Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria Abstract: We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase-differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real valued inputs. We also show that relatively weak basic assumptions about the response- and threshold-functions of the spiking neurons are sufficient in order to employ them for such computations.
Furthermore we prove upper bounds for the computational power of networks of spiking neurons with arbitrary piecewise linear response- and threshold-functions, and show that, with regard to real-time simulations, they are computationally equivalent to a certain type of random access machine, and to recurrent analog neural nets with piecewise linear activation functions. In addition we give corresponding results for networks of spiking neurons with a limited timing precision, and we prove upper and lower bounds for the VC-dimension and pseudo-dimension of networks of spiking neurons. ----------------------- The Report NC-TR-94-015 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-94-015.ps.Z ftp> bye % zcat nc-tr-94-015.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. Best wishes John Shawe-Taylor

From omlinc at research.nj.nec.com Tue Jan 10 11:19:43 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Tue, 10 Jan 95 11:19:43 EST Subject: task consolidation Message-ID: <9501101619.AA01373@arosa> Lisa Meeden writes:
>To investigate the efficacy of protoplans, a transfer of learning
>experiment was done. The protoplans learned in one controller network
>were saved in an associative memory and used to guide a second
>controller network as it learned the same task from scratch. The
>associative memory mapped the precursor sensor states of a protoplan
>to the protoplan itself. Controllers trained with protoplans instead
>of goals as input converged more quickly on good solutions than the
>original controllers trained with goals. Protoplans were able to
>guide the robot's behavior by marking the important moments in the
>interaction with the environment when a switch in behavior should
>occur. This kind of timing information was indirect--no specific
>action was indicated--but knowing when to change from a particular
>strategy to a new one can be very important information.

This is similar to work done on training (recurrent) networks with prior knowledge. We have investigated algorithms for the extraction and insertion of symbolic knowledge in recurrent networks trained on temporal learning tasks. As a testbed, we learned regular grammars. We have shown how partial prior knowledge about a regular grammar can be encoded in a fully-recurrent neural network with second-order weights. The improvement in convergence time is `proportional' to the amount of prior knowledge. A description of the learned grammar can also be extracted from networks in the form of deterministic finite-state automata (DFAs). We have shown that the extracted DFAs outperform the trained networks, i.e., the DFA correctly classifies more strings than the trained network itself. The details can be found in the following book, which has recently been published: @INCOLLECTION{omlin94b, AUTHOR = "C.W. Omlin and C.L. Giles", TITLE = "Extraction and insertion of symbolic information in recurrent neural networks", EDITOR = "V. Honavar and L.
Uhr", BOOKTITLE = "Artificial Intelligence and Neural Networks: Steps toward Principled Integration", YEAR = "1994", PUBLISHER = "Academic Press", ADDRESS = "San Diego, CA", PAGES = "271-299"}

From MDUDZIAK at Gems.VCU.EDU Wed Jan 4 20:09:32 1995 From: MDUDZIAK at Gems.VCU.EDU (MARTIN DUDZIAK) Date: Wed, 04 Jan 1995 21:09:32 -0400 (EDT) Subject: Job opportunities info for distribution on list Message-ID: <01HLGNXZE8COA240GO@Gems.VCU.EDU> ========================================================================== The following information is for general distribution within the academic and private research communities. QDI, a comparatively small, new, and secure company in the adaptive systems field, is looking for a few top developers capable of working in an atmosphere that brings together basic research and applications and emphasizes freedom of thinking, expression, and intellectual creativity. The description that follows, for general consumption, is by necessity rather limited in terms of the details that are given about specific projects, but persons who are seriously looking for a long-term committed situation that has opportunities in several application domains relating to pattern classification and recognition, data compression, approximation, and prediction can make direct contact for further information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Unique opportunities in application development and basic research in the fields of neurocomputing and adaptive, intelligent systems. Positions with a well-funded company in a new mid-Wisconsin R&D center, closely linked with leading international academic and corporate groups. Seeking: enterprising, creative, bold thinkers and doers with experience in object-oriented software design (C, C++, Smalltalk) and background in parallel processing, neural nets, genetic algorithms, fuzzy logic, hardware design, or pure and applied mathematics/physics. Prior experience in both research and application-building is a plus, as well as demonstrated skills in presenting, teaching, writing and other communication. Technical and academic backgrounds from diverse scientific fields will be considered; advanced degrees are a plus but not a prerequisite. Salary and benefits will match experience, initiative, inventiveness and potential. Very flexible and creative corporate structure and management organization. Superior scientific/computing resources and working environment, plus unusually strong personal and educational opportunities. Strong self-motivation, self-criticism, team spirit, synergetic thinking, and open-mindedness are essential. There are multiple positions that will be filled within the first half of this year. A curriculum vitae and a letter of introduction should be sent by fax to Mr. B. Bice, Director of Operations, at (414) 731-0722 or by email to Dr. M. Dudziak at mdudziak at gems.vcu.edu. ==========================================================================

From perso at DI.UniPi.IT Tue Jan 10 11:08:04 1995 From: perso at DI.UniPi.IT (perso@DI.UniPi.IT) Date: Tue, 10 Jan 1995 17:08:04 +0100 (MET) Subject: TR available Message-ID: <9501101608.AA03361@neuron> Technical Report available: Comments are welcome !! ****************************************************** * FTP-host: ftp.di.unipi.it FTP-filename: pub/Papers/perso/SPERDUTI/lraam-3.ps.Z ****************************************************** @TECHREPORT{lraam-3, AUTHOR = {A. Sperduti and A.
Starita}, TITLE = {Dynamical Neural Networks Construction for Processing of Labeled Structures}, INSTITUTION = {Dipartimento di Informatica, Universit\`{a} di Pisa}, YEAR = {1995}, NUMBER = {TR-1/95} } Abstract: We show how Labeling RAAM (LRAAM) can be exploited to generate `on the fly' neural networks for associative access of labeled structures. The topology of these networks, which we call Generalized Hopfield Networks (GHN), depends on the topology of the {\it query} used to retrieve information, and the weights on the networks' connections are the weights of the LRAAM encoding the structures. A method for incremental discovery of multiple solutions to a given query is presented. This method is based on {\it terminal repellers}, which are used to `delete' known solutions from the set of admissible solutions to a query. Terminal repellers are also used to implement exceptions at the query level, i.e., when a solution to a query must satisfy some negative constraints on the labels and/or substructures. Moreover, the proposed model very naturally solves the connectionist variable-binding problem at the query level. Some results for a tree-like query are presented. Finally, we define a parallel mode of execution, exploiting terminal repellers, for the GHN, and we propose to use terminal attractors for implementing shared variables and graph queries. * No hardcopy available. * FTP procedure: unix> ftp ftp.di.unipi.it Name: anonymous Password: ftp> cd pub/Papers/perso/SPERDUTI ftp> binary ftp> get lraam-3.ps.Z ftp> bye unix> uncompress lraam-3.ps.Z unix> lpr lraam-3.ps (or however you print postscript) _________________________________________________________________ Alessandro Sperduti Dipartimento di Informatica, Corso Italia 40, Phone: +39-50-887248 56125 Pisa, Fax: +39-50-887226 ITALY E-mail: perso at di.unipi.it _________________________________________________________________

From peterk at nsi.edu Mon Jan 9 22:39:21 1995 From: peterk at nsi.edu (Peter Konig) Date: Mon, 9 Jan 1995 19:39:21 -0800 Subject: Position available: Cortical Neurophysiology Message-ID: Junior Fellow Position in Cortical Neurophysiology available. Applications are invited for the postdoctoral position of Junior Fellow in Experimental Neurobiology at the Neurosciences Institute, La Jolla, to study mechanisms underlying visual perception and sensorimotor integration in the cat. Applicants should have a background in neurophysiological techniques and data analysis. Fellows will receive stipends appropriate to their qualifications and experience. Submit a curriculum vitae, statement of research interests, and names of three references to: Dr. Peter Konig, The Neurosciences Institute, 3377 North Torrey Pines Court, La Jolla, CA 92037. FAX: 619-554-9159 ----------------------------------------------------------------- Peter Konig The Neurosciences Institute 3377 North Torrey Pines Court La Jolla, CA 92037, USA Office 619 554 3200 Fax 619 554 9159 Home 619 450 0225

From C.Campbell at bristol.ac.uk Wed Jan 11 11:12:43 1995 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Wed, 11 Jan 1995 16:12:43 +0000 (GMT) Subject: Fifth Irish Neural Networks Conference Message-ID: <9501111612.AA06233@zeus.bris.ac.uk> FIFTH IRISH NEURAL NETWORK CONFERENCE St. Patrick's College, Maynooth, Ireland September 11-13, 1995 FIRST CALL FOR PAPERS Papers are solicited for the Fifth Irish Neural Network Conference.
They can be in any area of theoretical or applied neural computing including, for example:

Learning algorithms
Cognitive modelling
Neurobiology
Natural language processing
Vision
Signal processing
Time series analysis
Hardware implementations

Selected papers from the conference proceedings will be published in the journal Neural Computing and Applications (Springer International). The conference is the fifth in a series previously held at Queen's University, Belfast and University College, Dublin. An extended abstract of not more than 500 words should be sent to: Dr. John Keating, Re: Neural Networks Conference, Dept. of Computer Science, St. Patrick's College, Maynooth, Co. Kildare, IRELAND e-mail: JNKEATING at VAX1.MAY.IE NOTE: If submitting by postal mail please make sure to include your e-mail address. The deadline for receipt of abstracts is 1st May 1995. Authors will be contacted regarding acceptance by 1st June 1995. Full papers will be required by 31st August 1995. ================================================================== FIFTH IRISH NEURAL NETWORKS CONFERENCE REGISTRATION FORM

Name:    __________________________________________________
Address: __________________________________________________
         __________________________________________________
         __________________________________________________
         __________________________________________________
e-mail:  ______________________ fax: ______________________

REGISTRATION FEE
Before August 1, 1995: IR#50
After August 1, 1995:  IR#60
Fee enclosed:          IR#________

The registration fee covers the cost of the conference proceedings and the session coffee breaks. METHOD OF PAYMENT Payment should be in Irish Pounds in the form of a cheque or banker's draft made payable to INNC'95. =================================================================== FIFTH IRISH NEURAL NETWORKS CONFERENCE ACCOMMODATION FORM Accommodation and meals are available on campus. The rooms are organised into apartments of 6 bedrooms. Each apartment has a bathroom, shower, and a fully equipped dining room/kitchen. The room rate is IR#12 per night (excl. breakfast; breakfast is IR#3 for continental and IR#4 for full Irish).

Name:    ___________________________________________________
Address: ___________________________________________________
         ___________________________________________________
         ___________________________________________________
         ___________________________________________________
e-mail:  ______________________ fax: ______________________
Arrival date:   ______________________
Departure date: ______________________
No. of nights:  ________

Please fill out a separate copy of the accommodation form for each individual requiring accommodation. If you have any queries, contact John Keating at JNKEATING at VAX1.MAY.IE The second day of the conference (Tuesday 12th September) is a half-day and includes an excursion to Newgrange and Dublin during the afternoon. The cost of this excursion is IR#10. I will be going on the excursion on Tues. afternoon yes/no (please delete as appropriate). ================================================================== Return fees with completed registration/accommodation forms to: Dr John Keating, Re: Neural Networks Conference, Dept. of Computer Science, St. Patrick's College, Maynooth, Co. Kildare, IRELAND Unfortunately, we cannot accept registration or accommodation bookings by e-mail.
=================================================================== Fifth Irish Neural Networks Conference - Paper format The format for accepted submissions will be as follows: LENGTH: 8 pages maximum. PAPER SIZE: European A4 MARGINS: 2cms all round PAGE LAYOUT: Title, author(s), affiliation and e-mail address should be centred on the first page. No running heads or page numbers should be included. TEXT: Should be 10pt and preferably Times Roman.

From maureen at cs.toronto.edu Wed Jan 11 13:02:40 1995 From: maureen at cs.toronto.edu (Maureen Smith) Date: Wed, 11 Jan 1995 13:02:40 -0500 Subject: GLOVE-TALK II VIDEO Message-ID: <95Jan11.130245edt.760@neuron.ai.toronto.edu> ******************************************************************************* GLOVE-TALK II PROJECT UNIVERSITY OF TORONTO Geoffrey Hinton and Sidney Fels ----- VIDEO RELEASE ----- ******************************************************************************* THE GLOVE-TALK II VIDEO A 31 minute video of the Glove-Talk II system developed by Sidney Fels and Geoffrey Hinton at the University of Toronto is now available. Glove-Talk II is an artificial vocal tract that converts hand movements into speech in real time. The inputs come from two gloves, a Polhemus tracker, and a foot pedal. Neural networks are used to convert these inputs into formant descriptions that are sent to a speech synthesizer at 100 frames per second. The neural nets adapt to the particular way in which the user tries to produce target sounds during training sessions. The video shows the system in action for both rehearsed and unrehearsed speech and describes in detail the neural networks that are used. To cover the costs of reproduction and distribution, those wishing to receive a copy of the video should send the following payment with their order:

Addresses in Canada: Personal check or money order payable to the University of Toronto for 10 Canadian dollars
Addresses in USA: Personal check or money order payable to the University of Toronto for 10 US dollars
Addresses anywhere else: Money order for 20 Canadian dollars payable to the University of Toronto (and specify PAL or NTSC)

Orders should be sent to Maureen Smith, Department of Computer Science, 6 King's College Road, Rm 271, University of Toronto, Toronto, Ontario M5S 1A4 fax: 416-978-1455 e-mail: maureen at cs.toronto.edu

From schmidhu at informatik.tu-muenchen.de Wed Jan 11 14:08:42 1995 From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Wed, 11 Jan 1995 20:08:42 +0100 Subject: continual learning etc. Message-ID: <95Jan11.200846met.42325@papa.informatik.tu-muenchen.de> Concerning the recent messages on "transfer learning", "incremental learning", etc.: Mark Ring has been working on this subject for many years now, specifically on bottom-up, hierarchical behavior learning and skill transfer in reinforcement-learning agents (1991, 1993c), and with time-dependent context-sensitive neural networks (1993a, 1993b) that keep adding new units in order to learn longer and more complicated sequences. In his dissertation on "continual learning", he described a hierarchical mechanism for learning non-Markovian reinforcement tasks where hierarchy construction was done bottom-up as learning progressed. He tested it on "continual learning" tasks, where the behaviors his learning agent acquired for simple tasks were used for learning more difficult tasks with much less effort (skill transfer).
Even after learning much more complicated tasks, the agent could still generally solve the simpler ones (avoiding catastrophic forgetting).

Juergen Schmidhuber
Fakultaet fuer Informatik
Technische Universitaet Muenchen
80290 Muenchen, Germany

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++=

References:

@InProceedings{Ring:1991,
  author    = "Ring, Mark B.",
  title     = "Incremental Development of Complex Behaviors through Automatic Construction of Sensory-motor Hierarchies",
  booktitle = "Machine Learning: Proceedings of the Eighth International Workshop (ML91)",
  year      = 1991,
  editor    = "Birnbaum, Lawrence A. and Collins, Gregg C.",
  pages     = "343--347",
  publisher = "Morgan Kaufmann Publishers",
  month     = "June",
}

@InProceedings{Ring:1993a,
  author    = "Ring, Mark B.",
  title     = "Learning Sequential Tasks by Incrementally Adding Higher Orders",
  booktitle = "Advances in Neural Information Processing Systems 5",
  year      = 1993,
  editor    = "Giles, C. L. and Hanson, S. J. and Cowan, J. D.",
  pages     = "115--122",
  publisher = "Morgan Kaufmann Publishers",
  address   = "San Mateo, California",
}

@InProceedings{Ring:1993b,
  author    = "Ring, Mark B.",
  title     = "Two Methods for Hierarchy Learning in Reinforcement Environments",
  booktitle = "From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior",
  year      = 1993,
  editor    = "Meyer, J. A. and Roitblat, H. and Wilson, S.",
  pages     = "148--155",
  publisher = "MIT Press",
}

@TechReport{Ring:1993c,
  author      = "Ring, Mark B.",
  title       = "Sequence Learning with Incremental Higher-Order Neural Networks",
  institution = "Artificial Intelligence Laboratory, University of Texas at Austin",
  year        = 1993,
  number      = "AI 93--193",
  month       = "January",
}

@PhDThesis{Ring:1994,
  author  = "Ring, Mark B.",
  title   = "Continual Learning in Reinforcement Environments",
  school  = "University of Texas at Austin",
  year    = 1994,
  address = "Austin, Texas 78712",
  month   = "August",
}

From crr at cogsci.psych.utah.edu Wed Jan 11 13:48:46 1995
From: crr at cogsci.psych.utah.edu (crr@cogsci.psych.utah.edu)
Date: Wed, 11 Jan 95 11:48:46 -0700
Subject: About sequential learning (or interference)
In-Reply-To: Your message of Sat, 07 Jan 95 12:04:19 +0800. <9501070557.AA09357@cogsci.psych.utah.edu>
Message-ID: <9501111848.AA07885@cogsci.psych.utah.edu>

A paper I wrote ages ago speaks to this issue as well. In it we examined the spacing effect using NETtalk as a "verbal connectionist learner" and found that, unlike the catastrophic interference everyone has been talking about, the effects of distributed practice (learning a little bit each time, over many sessions distributed in time) are pretty similar in people and in nets:

@inproceedings{Rosenberg86,
  author    = {Charles R. Rosenberg and Terrence J. Sejnowski},
  title     = {The Spacing Effect on {NETtalk}, A Massively-Parallel Network},
  booktitle = {Proceedings of the Eighth Annual Conference of the Cognitive Science Society},
  year      = {1986},
  month     = {August},
  note      = {Amherst, MA},
  pages     = {72-89},
  publisher = {Lawrence Erlbaum},
  address   = {Hillsdale, NJ}
}

It seems to have been lost in the shuffle, and I couldn't resist mentioning it any longer. Sorry, no URL yet.

Charlie
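The contrast Charlie describes is easy to reproduce in miniature. The sketch below (plain Python; a single logistic unit and two invented two-pattern tasks, nothing here is NETtalk or the 1986 setup) trains either blocked (all of task A, then all of task B) or interleaved:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, task, epochs, lr=0.5):
    # plain delta-rule updates for one logistic unit
    for _ in range(epochs):
        for x, t in task:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            g = (y - t) * y * (1.0 - y)
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi

def error(w, task):
    return sum((t - sigmoid(sum(wi * xi for wi, xi in zip(w, x)))) ** 2
               for x, t in task) / len(task)

task_a = [((1, 0, 1), 1.0), ((0, 1, 1), 0.0)]   # made-up patterns
task_b = [((1, 1, 1), 0.0), ((0, 0, 1), 1.0)]

w = [0.0, 0.0, 0.0]                 # blocked: all of A, then all of B
train(w, task_a, 300)
train(w, task_b, 300)
print("blocked:     A err %.3f  B err %.3f" % (error(w, task_a), error(w, task_b)))

w = [0.0, 0.0, 0.0]                 # interleaved (distributed practice)
for _ in range(300):
    train(w, task_a, 1)
    train(w, task_b, 1)
print("interleaved: A err %.3f  B err %.3f" % (error(w, task_a), error(w, task_b)))

Because the blocked run ends on task B, its task A error is typically the higher of the two, a toy analogue of the interference under discussion; interleaving keeps both errors low when the tasks are jointly learnable, as these are.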
From kenm at sunrae.sscl.uwo.ca Wed Jan 11 05:08:44 1995
From: kenm at sunrae.sscl.uwo.ca (kenm@sunrae.sscl.uwo.ca)
Date: Wed, 11 Jan 1995 15:08:44 +0500
Subject: conference announcement
Message-ID: <9501112008.AA27439@sunrae.sscl.uwo.ca>

******************************************************************************
L.O.V.E. 1995
24th Conference on Perception & Cognition
February 9, 1:00 pm -> February 10, 5:00 pm
The Skyline Brock Hotel, Niagara Falls, Ontario, Canada
******************************************************************************

We have a great lineup of speakers this year.

Thursday, February 9
   Margaret M. Shiffrar, Rutgers University: The interpretation of object motion
   Mark S. Seidenberg, University of Southern California: title t.b.a.

Friday, February 10
   Dana H. Ballard, University of Rochester: Computational hierarchies for natural behaviors
   Lee R. Brooks (& Glenn Regehr), McMaster University: Perceptual resemblance and effort after commonality in category formation
   Daniel Kersten, University of Minnesota: Shedding light on the objects of perception
   Michael K. Tanenhaus, University of Rochester: Using eye movements to study spoken language comprehension in visual contexts

******************************************************************************

To reserve a hotel room, call Skyline Hotels at 1-800-263-7135 or 905-374-4444. **Please make sure to mention the LOVE conference in order to get special room rates. The L.O.V.E. room rates are great again this year:
   $47 single or double
   $57 triple
   $67 quadruple

Registration is again wildly cheap this year and includes the "L.O.V.E. affair":
   students and post docs: $15 Canadian or $12 US
   faculty: $25 Canadian or $20 US

Be sure to make L.O.V.E. in '95!!!

If you wish to be added to the L.O.V.E. email list, please contact kenm at sunrae.sscl.uwo.ca

******************************************************************************

Abstracts appear below.

************************************************************************

The Interpretation of Object Motion
Margaret M. Shiffrar, Rutgers University

To interpret the projected image of a moving object, the visual system must integrate motion signals across different image regions. Traditionally, researchers have examined this process by focusing on the integration of equally ambiguous motion signals. However, moving objects often yield motion measurements having differing degrees of ambiguity. In a series of experiments, I examine how the visual system interprets the motion of simple objects. I argue that the visual system normally uses unambiguous motion signals to interpret object motion.

************************************************************************

Mark S. Seidenberg, University of Southern California: title t.b.a.

************************************************************************

Computational Hierarchies for Natural Behaviors
Dana Ballard, University of Rochester

We argue that a computational theory of the brain will have to address the issue of computational hierarchies, wherein the brain can be seen as using different instruction sets at different spatio-temporal scales. As examples, we describe two such abstraction levels. At the most abstract level, a language is needed to address the way the brain directs the physical resources of its body. An example of these kinds of instructions would be one used to direct saccadic eye-movements. Interpreting experimental data from this perspective implies that subjects use eye-movements in a special strategy to avoid loading short-term memory. This constraint has implications for the organization of high-level behavior. At a lower level of abstraction we consider a model of instructions that captures the details of directing the eye-movements themselves. This model makes extensive use of feedback.
The implications of this are that brain circuitry may have to be used in very different ways than traditionally proposed.

************************************************************************

Perceptual Resemblance and Effort After Commonality in Category Formation
Lee Brooks, McMaster University
Glenn Regehr, The Toronto Hospital

The study of category formation and application has been strongly influenced by the information processing tradition. The influence of this tradition includes describing stimuli solely as informational contrasts (the ubiquitous tables of 1s and 0s), as well as the practice of producing new items by recombining identical elements. Even when experiments use natural stimuli, designs and subsequent models are set up as if informational contrasts are the only important aspects of the stimuli. We will argue that at least two types of changes from this tradition are necessary to capture important types of natural behavior.

*Enhanced perceptual resemblance*: Habits of stimulus representation and experimental design derived from the information processing tradition have limited the effect of similarity between pairs of items and virtually eliminated an effect of overall similarity among several items. In particular, the informational interpretation of "family resemblance" does not produce categorization based on category-wide similarity, as is often alleged. A better treatment of similarity is important because similarity-based effects are obvious even when people have good theories about the stimuli, as in medical diagnosis.

*Multiple descriptions of the stimuli*: Informational descriptions of the stimuli effectively capture analytic behavior, but do not capture similarity-based behavior equally well. We will argue that stimuli have to be characterized differently to account for their effects on similarity-based processing than for their effects on analytic processing. Having these different descriptions for the same stimuli is important since both types of processes occur concurrently in many natural categorization situations.

************************************************************************

Shedding Light on the Objects of Perception
Daniel Kersten, University of Minnesota

One of the great challenges of perception is to understand how we see the material, shape, and identity of an object given enormous variability in the images of that object. Viewpoint and illumination act together to produce the images that the eye receives. Variability over viewpoint has received recent attention in studies of object recognition and shape. Illumination effects of attached and cast shadows have received somewhat less attention, for the following reason. Casual inspection shows that one view of an object can appear rather different from another view of that same object. However, the image of an object under one illumination can appear quite similar to an image of the same object under different illumination, even when objectively the images are very different. This latter observation has contributed to the assumption that human perception discounts effects of varying illumination, in particular those due to cast shadows. But do the effects of illumination get filtered out? I will use 3D computer graphics to show examples of how human vision uses illumination information to resolve perceptual ambiguities. In particular, I will show how cast shadows can determine the relative depth of objects, the orientation of surfaces, object rigidity, and the identity of contour types.
These demonstrations are examples of the kind of perceptual puzzles which the visual brain solves continually in everyday vision. The solution of these perceptual puzzles is an example of generalized Bayesian inference -- the logical and plausible reconciliation of image data with prior constraints. For an object recognition task, the visual system might be expected to filter out effects of illumination (e.g. attached and cast shadows). Here vision can behave in a way inconsistent with a strong Bayesian view -- there is a cost in response time and sensitivity to recognizing an object under left illumination that has been learned under right illumination. These results are consistent with exemplar-based theories of recognition.

************************************************************************

Using Eye-Movements to Study Spoken Language Comprehension in Visual Contexts
Michael K. Tanenhaus, University of Rochester

We have been using a head-mounted eye-tracking system to monitor eye-movements while subjects follow spoken instructions to manipulate real objects. In this paradigm, eye-movements to the objects in the visual world are closely time-locked to referential expressions in the instructions, providing a natural on-line measure of spoken language comprehension in visual contexts. After discussing the rationale for this line of research in terms of current developments in language comprehension research, I'll present results from experiments conducted with Michael Spivey-Knowlton, Julie Sedivy and Kathy Eberhard. In the first experiment, eye-movements to a target object (e.g., Pick up the candle) begin several hundred ms after the beginning of the word, suggesting that reference is established as the word is being processed. Eye-movements are delayed by about 100 ms when there is a "competitor" object with a name similar to the target's (e.g., a piece of candy). In the second experiment, the point in a phrase where reference is established is time-locked to when the referring expression becomes unambiguous with respect to the set of visual alternatives (e.g., Touch the starred red square; Put the five of hearts that is below the eight of clubs above the three of diamonds.). The third experiment shows that visual contexts affect the interpretation of temporarily ambiguous instructions such as "Put the spoon in the bowl on the plate". Finally, we show that contrastive focus (e.g., Touch the LARGE red square) directs attention to both the referent and the contrast member. Taken together, our results demonstrate the potential of the methodology, especially for exploring issues of interpretation and questions about spoken language comprehension. They also highlight the incremental and referential nature of comprehension. In addition, they provide a somewhat different perspective on results that have been central to discussions about the modularity of language processing.

************************************************************************

From eann95 at ra.abo.fi Wed Jan 11 03:52:53 1995
From: eann95 at ra.abo.fi (EANN-95 Konferensomrede VT)
Date: Wed, 11 Jan 1995 10:52:53 +0200
Subject: Final CFP: EANN 95
Message-ID: <199501110852.KAA15595@aton.abo.fi>

International Conference on Engineering Applications of Neural Networks (EANN '95)
Helsinki, Finland
August 21-23, 1995

Final Call for Papers

The conference is a forum for presenting the latest results on neural network applications in technical fields.
The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, food engineering and environmental engineering.

Abstracts of one page (200 to 400 words) should be sent to eann95 at aton.abo.fi by *31 January 1995*, by e-mail in PostScript format, or in TeX or LaTeX. Plain ASCII is also acceptable. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 31 January 1995. Notification of acceptance will be sent around 1 March. The number of full papers will be very limited. You will receive a submission number for each abstract you send. If you haven't received one, please ask for it.

Special tracks have been set up for applications in robotics (N. Sharkey, n.sharkey at dcs.shef.ac.uk), control applications (E. Tulunay, ersin_tulunay at metu.edu.tr), biotechnology/food engineering applications (P. Linko), and mineral and metal industry (J. van Deventer, metal at maties.sun.ac.za). You can submit abstracts to the special tracks straight to their coordinators or to eann95 at aton.abo.fi.

Local program committee:
A. Bulsari, J. Heikkonen (Italy), E. Hyvönen, P. Linko, L. Nyström, S. Palosaari, H. Saxén, M. Syrjänen, J. Seppänen, A. Visa

International program committee:
G. Dorffner (Austria), A. da Silva (Brazil), V. Sgurev (Bulgaria), M. Thompson (Canada), B.-Z. Chen (China), V. Kurkova (Czechia), S. Dutta (France), D. Pearson (France), G. Baier (Germany), C. M. Lee (Hong Kong), J. Fodor (Hungary), L. M. Patnaik (India), H. Siegelmann (Israel), R. Baratti (Italy), R. Serra (Italy), I. Kawakami (Japan), C. Kuroda (Japan), H. Zhang (Japan), J. K. Lee (Korea), J. Kok (Netherlands), J. Paredis (Netherlands), W. Duch (Poland), R. Tadeusiewicz (Poland), B. Ribeiro (Portugal), W. L. Dunin-Barkowski (Russia), V. Stefanuk (Russia), E. Pupyrev (Russia), S. Tan (Singapore), V. Kvasnicka (Slovakia), A. Dobnikar (Slovenia), J. van Deventer (South Africa), B. Martinez (Spain), H. Liljenström (Sweden), G. Sjödin (Sweden), J. Sjöberg (Sweden), E. Tulunay (Turkey), N. Sharkey (UK), D. Tsaptsinos (UK), N. Steele (UK), S. Shekhar (USA), J. Savkovic-Stevanovic

International Conference on Engineering Applications of Neural Networks (EANN '95)
Registration information

The registration fee is FIM 2000 until 15 March, after which it will be FIM 2400. A discount of up to 40% will be given to some participants from East Europe and developing countries. Those who wish to avail themselves of this discount need to apply for it. The application form can be sent by e-mail. The papers may not be included in the proceedings if the registration fee is not received before 15 April, or if the paper does not follow the specified format. If your registration fee is received before 15 February, you are entitled to attend one tutorial for free. The fee for each tutorial will be FIM 200, to be paid in cash at the conference site. No decisions have yet been made about which tutorials will be presented, since tutorial proposals can be sent until 31 January.
The registration fee should be paid to ``EANN 95'', bank account SYP (Union Bank of Finland) 220518-125251, Turku, Finland, through bank transfer, or you could send us a bank draft payable to ``EANN 95''. If it is difficult to get a bank draft in Finnish currency, you could send a bank cheque or a draft of GBP 280 (pounds sterling) until 15 March, or GBP 335 after 15 March. If you need to send it in some other way, please ask. The postal address for sending bank drafts or bank cheques is EANN '95/SEA, Post box 34, 20111 Turku 11, Finland. The registration form can be sent by e-mail.

From goller at informatik.tu-muenchen.de Wed Jan 11 05:55:29 1995
From: goller at informatik.tu-muenchen.de (Christoph Goller)
Date: Wed, 11 Jan 1995 11:55:29 +0100
Subject: TR: Learning Distributed Representations for the Classification of Terms
Message-ID: <95Jan11.115534mesz.460947@sunjessen21.informatik.tu-muenchen.de>
Return-Receipt-To: goller at informatik.tu-muenchen.de
Organization: TU-Muenchen

Technical Report available: Comments are welcome!

********************************************************************************
FTP-host: ftp.informatik.tu-muenchen.de
FTP-filename: /local/lehrstuhl/jessen/Group.Automated_Reasoning/Tech.Reports/AR-94-05.ps.gz
WWW: http://wwwjessen.informatik.tu-muenchen.de/forschung/reasoning/reports.html
********************************************************************************

@TECHREPORT{label,
  AUTHOR      = {C. Goller and A. Sperduti and A. Starita},
  TITLE       = {Learning Distributed Representations for the Classification of Terms},
  INSTITUTION = {Institut f\"{u}r Informatik, Technische Universit\"{a}t M\"{u}nchen},
  YEAR        = {1994},
  NUMBER      = {AR-94-05}
}

Abstract: This paper is a study on LRAAM-based (Labeling Recursive Auto-Associative Memory) classification of symbolic recursive structures encoding terms. The results reported here have been obtained by combining an LRAAM network with an analog perceptron. The approach used was to interleave the development of representations (unsupervised learning of the LRAAM) with the learning of the classification task. In this way, the representations are optimized with respect to the classification task. The intended applications of the approach described in this paper are hybrid (symbolic/connectionist) systems, where the connectionist part has to solve logic-oriented inductive learning tasks similar to the term-classification problems used in our experiments. These problems range from the detection of a specific subterm to the satisfaction of a specific unification pattern. We show that our approach yields very satisfactory solutions to these problems.

* No hardcopy available. *

FTP procedure:
unix> ftp ftp.informatik.tu-muenchen.de
Name: anonymous
Password:
ftp> cd /local/lehrstuhl/jessen/Group.Automated_Reasoning/Tech.Reports
ftp> binary
ftp> get AR-94-05.ps.gz
ftp> bye
unix> gunzip AR-94-05.ps.gz
unix> lpr AR-94-05.ps (or however you print postscript)

_______________________________________________________________________________
Christoph Goller
Lehrstuhl VIII, Institut fuer Informatik
Research Group "Automated Reasoning"
Technische Universitaet Muenchen          Tel.: +49-89/521097
Arcisstr. 21                              Fax.: +49-89/526502
D-80290 Muenchen, Germany                 email: goller at informatik.tu-muenchen.de
-------------------------------------------------------------------------------
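To give a feel for the recursive encoding the abstract refers to, here is a schematic sketch in Python (hypothetical and heavily simplified: the compressor below is random and untrained, whereas the LRAAM trains it auto-associatively and interleaves that with the classifier; the dimension and symbol codes are invented):

import numpy as np

DIM = 8                                    # width of every code (invented)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(DIM, 3 * DIM))   # compressor, here untrained

def compress(label, left, right):
    # one LRAAM step: squash the concatenation of a label and two children
    return np.tanh(W @ np.concatenate([label, left, right]))

def sym(name):
    # a fixed code per symbol; seeded so reruns agree
    g = np.random.default_rng(sum(map(ord, name)))
    return g.normal(scale=0.3, size=DIM)

NIL = np.zeros(DIM)                        # code for "no child"

# encode the term f(a, g(b)) bottom-up into one fixed-width vector
rep_b = compress(sym("b"), NIL, NIL)
rep_g = compress(sym("g"), rep_b, NIL)
rep_a = compress(sym("a"), NIL, NIL)
rep_f = compress(sym("f"), rep_a, rep_g)
print(rep_f.round(3))

A perceptron classifying terms would take a vector like rep_f as its input; that fixed width, independent of term size, is what lets symbolic structures feed a connectionist classifier.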
From terry at salk.edu Wed Jan 11 18:14:13 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Wed, 11 Jan 95 15:14:13 PST
Subject: Neural Computation 7:1
Message-ID: <9501112314.AA16905@salk.edu>

NEURAL COMPUTATION
Volume 7, January, 1995

View:
   On Neural Circuits and Cognition
      Michael S. Gazzaniga

Notes:
   The EM Algorithm and Information Geometry in Neural Network Learning
      Shun-ichi Amari
   Convergence Theorems for Hybrid Learning Rules
      Michael Benaim

Letters:
   A Type of Duality between Self-organizing Maps and Minimal Wiring
      Graeme Mitchison
   Development of Oriented Ocular Dominance Bands as a Consequence of Areal Geometry
      Hans-Ulrich Bauer
   A Multiple Cause Mixture Model for Unsupervised Learning
      Eric Saund
   Similarity Metric Learning for a Variable-Kernel Classifier
      David G. Lowe
   Unsupervised Mutual Information Criterion for Elimination of Overtraining in Supervised Multilayer Networks
      G. Deco, W. Finnoff and H. G. Zimmerman
   Training with Noise is Equivalent to Tikhonov Regularization
      Chris M. Bishop
   Bayesian Regularization and Pruning using a Laplace Prior
      Peter M. Williams
   Empirical Risk Minimization versus Maximum-Likelihood Estimation: A Case Study
      Ronny Meir
   Learning a Decision Boundary from Stochastic Examples: Incremental Algorithms with and without Queries
      Yoshiyuki Kabashima and Shigeru Shinomoto
   Arithmetic Perceptrons
      Sergio A. Cannas
   Compensatory Mechanisms in an Attractor Neural Network Model of Schizophrenia
      D. Horn and E. Ruppin
   Real-time Control of a Tokamak Plasma Using Neural Networks
      Chris M. Bishop, Paul S. Haynes, Mike E. U. Smith, Tom N. Todd and David L. Trotman

-----

SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues)
   ______ $40  Student and Retired
   ______ $68  Individual
   ______ $180 Institution
Add $22 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-6 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779  e-mail: hiscox at mitvma.mit.edu

-----

From hinton at cs.toronto.edu Thu Jan 12 14:09:20 1995
From: hinton at cs.toronto.edu (Geoffrey Hinton)
Date: Thu, 12 Jan 1995 14:09:20 -0500
Subject: correction re: the glovetalk II video
Message-ID: <95Jan12.140920edt.830@neuron.ai.toronto.edu>

If you order the video, please include CANADA in the address.

From cnna at tce.ing.uniroma1.it Fri Jan 13 03:13:26 1995
From: cnna at tce.ing.uniroma1.it (cnna@tce.ing.uniroma1.it)
Date: Fri, 13 Jan 1995 09:13:26 +0100
Subject: Proceedings of CNNA-94 available
Message-ID: <9501130813.AA07746@tce.ing.uniroma1.it>

PROCEEDINGS OF CNNA-94 AVAILABLE

A limited number of copies of the Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94), held in Rome, Italy, Dec. 18-21, 1994, is available for purchase by attendants and authors. The price is Itl. 50,000. The book will be sent by air mail upon receipt of payment, or of a copy of your order of payment. Please add the following charges for postal expenses:
   Italy: Itl. 6,200
   Europe: Itl. 10,000
   Mediterranean: Itl. 10,900
   Americas and Asia: Itl. 18,100
   Australia: Itl. 24,400

To order a copy, please send payment including mailing charges and bank taxes according to one of the following procedures:
1) Italian personal cheque (payable to Prof. V. Cimagalli, sent by "Assicurata")
2) Eurocheque, or international cheque drawn on an Italian bank, in Italian lire, payable to Prof. V. Cimagalli (please add Itl. 7,500 for bank expenses)
3) Bank transfer to Banca di Roma, ag. 158, via Eudossiana, 18 Roma, Italy I-00184; bank codes: ABI 3002-3, CAB 03380-3; account no. 503/34, payable to: "Sezione Italia Centro-Sud dell'IEEE - CNNA'94", stating clearly your name and reason for payment (please add Itl. 22,000 for bank expenses).

For further information, do not hesitate to contact us.

Sincerely,
M. Balsi
Organizing Committee, CNNA-94

CNNA-94
Dipartimento di Ingegneria Elettronica
via Eudossiana, 18
Rome, Italy I-00184
fax: +39-6-4742647
e-mail: cnna at tce.ing.uniroma1.it

TABLE OF CONTENTS

Inaugural lecture: L.O. Chua: The CNN Universal Chip: Dawn of a New Computer Paradigm  1
Invited review paper: T. Roska: Analogic Algorithms Running on the CNN Universal Machine  3
G. Yang, T. Yang, L.-B. Yang: On Unconditional Stability of the General Delayed Cellular Neural Networks  9
S. Arik, V. Tavsanoglu: A Weaker Condition for the Stability of Nonsymmetric CNNs  15
M. P. Joy, V. Tavsanoglu: Circulant Matrices and the Stability Theory of CNNs  21
B.E. Shi, S. Wendsche, T. Roska, L.O. Chua: Random Variations in CNN Templates: Theoretical Models and Empirical Studies  27
Invited lecture: F. Werblin, A. Jacobs: Using CNN to Unravel Space-Time Processing in the Vertebrate Retina  33
K. Lotz, Z. Vidnyánszky, T. Roska, J. Vandewalle, J. Hámori, A. Jacobs, F. Werblin: Some Cortical Spiking Neuron Models Using CNN  41
T.W. Berger, B.J. Sheu, R. H.-J. Tsai: Analog VLSI Implementation of a Nonlinear Systems Model of the Hippocampal Brain Region  47
A. Jacobs, T. Roska, F. Werblin: Techniques for Constructing Physiologically Motivated Neuromorphic Models in CNN  53
Invited review paper: A. Rodríguez-Vázquez, R. Domínguez-Castro, S. Espejo: Design of CNN Universal Chips: Trends and Obstacles  59
J.M. Cruz, L.O. Chua, T. Roska: A Fast, Complex and Efficient Test Implementation of the CNN Universal Machine  61
F. Sargeni, V. Bonaiuto: High Performance Digitally Programmable CNN Chip with Discrete Templates  67
A. Paasio, A. Dawidziuk, K. Halonen, V. Porra: Digitally Controllable Weights in Current Mode Cellular Neural Networks  73
D. Lím, G.S. Moschytz: A Programmable, Modular CNN Cell  79
M.-D. Doan, R. Chakrabaty, M. Heidenreich, M. Glesner, S. Cheung: Realisation of a Digital Cellular Neural Network for Image Processing  85
R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, R. Carmona: A CNN Universal Chip in CMOS Technology  91
E. Pessa, M.P. Penna: Local and Global Connectivity in Neuronic Cellular Automata  97
X.-Z. Huang, T. Yang, L.-B. Yang: On Stability of the Time-Variant Delayed Cellular Neural Networks  103
J.J. Szczyrek, S. Jankowski: A Class of Asymmetrical Templates in Cellular Neural Networks  109
P.P. Civalleri, M. Gilli: A Topological Description of the State Space of a Cellular Neural Network  115
M. Tanaka, T. Watanabe: Cooperative and Competitive Cellular Neural Networks  121
Invited review paper: P. Thiran, M. Hasler: Information Processing Using Stable and Unstable Oscillations: A Tutorial  127
Invited review paper: J.A. Nossek: Design and Learning with Cellular Neural Networks  137
I. Fajfar, F. Bratkovic: Statistical Design Using Variable Parameter Variances and Application to Cellular Neural Networks  147
N.N. Aizenberg, I.N. Aizenberg: CNN-like Networks Based on Multi-Valued and Universal Binary Neurons: Learning and Application to Image Processing  153
W. Utschick, J.A. Nossek: Computational Learning Theory Applied to Discrete-Time Cellular Neural Networks  159
H. Magnussen, J.A. Nossek: Global Learning Algorithms for Discrete-Time Cellular Neural Networks  165
H. Magnussen, G. Papoutsis, J.A. Nossek: Continuation-Based Learning Algorithm for Discrete-Time Cellular Neural Networks  171
C. Güzelis, S. Karamahmut: Recurrent Perceptron Learning Algorithm for Completely Stable Cellular Neural Networks  177
A.J. Schuler, M. Brabec, D. Schubel, J.A. Nossek: Hardware-Oriented Learning for Cellular Neural Networks  183
F. Dellaert, J. Vandewalle: Automatic Design of Cellular Neural Networks by Means of Genetic Algorithms: Finding a Feature Detector  189
H. Mizutani: A New Learning Method for Multilayered Cellular Neural Networks  195
H. Harrer, P.L. Venetianer, J.A. Nossek, T. Roska, L.O. Chua: Some Examples of Preprocessing Analog Images with Discrete-Time Cellular Neural Networks  201
N.N. Aizenberg, I.N. Aizenberg, T.P. Belikova: Extraction and Localization of Important Features on Grey-Scale Images: Implementation on the CNN  207
K. Slot: Large-Neighborhood Templates Implementation in Discrete-Time CNN Universal Machine with a Nearest-Neighbor Connection Pattern  213
J. Pineda de Gyvez: XCNN: A Software Package for Color Image Processing  219
M. Balsi, N. Racina: Automatic Recognition of Train Tail Signs Using CNNs  225
A.G. Radványi: Solution of Stereo Correspondence in Real Scene: An Analogic CNN Algorithm  231
J.P. Miller, K.R. Crounse, T. Szirányi, L. Nemes, L.O. Chua, T. Roska: Deblurring of Images by Cellular Neural Networks with Applications to Microscopy  237
A. Kellner, H. Magnussen, J.A. Nossek: Texture Classification, Texture Segmentation and Text Segmentation with Discrete-Time Cellular Neural Networks  243
P.L. Venetianer, P. Szolgay, K.R. Crounse, T. Roska, L.O. Chua: Analog Combinatorics and Cellular Automata - Key Algorithms and Layout Design  249
Á. Zarándy, T. Roska, Gy. Liszka, J. Hegyesi, L. Kék, Cs. Rekeczky: Design of Analogic CNN Algorithms for Mammogram Analysis  255
P. Szolgay, Gy. Eröss, A. Katona, Á. Kiss: An Experimental System for Path Tracking of a Robot Using a 16*16 Connected Component Detector CNN Chip with Direct Optical Input  261
T. Kozek, T. Roska: A Double Time-Scale CNN for Solving 2-D Navier-Stokes Equations  267
Á. Zarándy, F. Werblin, T. Roska, L.O. Chua: Novel Types of Analogic CNN Algorithms for Recognizing Bank-Notes  273
B.J. Sheu, Sa H. Bang, W.-C. Fang: Optimal Solutions of Selected Cellular Neural Network Applications by the Hardware Annealing Method  279
B. Siemiatkowska: Cellular Neural Network for Mobile Robot Navigation  285
A. Murgu: Distributed Neural Control for Markov Decision Processes in Hierarchic Communication Networks  291
C.-M. Yang, T. Yang, K.-Y. Zhang: Chaos in Discrete Time Cellular Neural Networks  297
R. Dogaru, A.T. Murgan, D. Ioan: Robust Oscillations and Bifurcations in Cellular Neural Networks  303
H. Chen, M.-D. Dai, X.-Y. Wu: Bifurcation and Chaos in Discrete-Time Cellular Neural Networks  309
M.J. Ogorzalek, A. Dabrowski, W. Dabrowski: Hyperchaos, Clustering and Cooperative Phenomena in CNN Arrays Composed of Chaotic Circuits  315
P. Szolgay, G. Vörös: Transient Response Computation of a Mechanical Vibrating System Using Cellular Neural Networks  321
P.P. Civalleri, M. Gilli: Propagation Phenomena in Cellular Neural Networks  327
S. Jankowski, R. Wanczuk: CNN Models of Complex Pattern Formation in Excitable Media  333
S. Jankowski, A. Londei, C. Mazur, A. Lozowski: Synchronization Phenomena in 2D Chaotic CNN  339
Z. Galias, J.A. Nossek: Control of a Real Chaotic Cellular Neural Network  345
A. Piovaccari, G. Setti: A Versatile CMOS Building Block for Fully Analogically-Programmable VLSI Cellular Neural Networks  347
P. Thiran, G. Setti: An Approach to Local Diffusion and Global Propagation in 1-dim. Cellular Neural Networks  349
J. Kowalski, K. Slot, T. Kacprzak: A CMOS Current-Mode VLSI Implementation of Cellular Neural Network for an Image Objects Area Estimation  351
W.J. Jansen, R. van Drunen, L. Spaanenburg, J.A.G. Nijhuis: The AD2 Microcontroller Extension for Artificial Neural Networks  353
C.-K. Pham, M. Tanaka: A Novel Chaos Generator Employing CMOS Inverter for Cellular Neural Networks  355
R. Beccherelli, G. de Cesare, F. Palma: Towards a Hydrogenated Amorphous Silicon Phototransistor Cellular Neural Network  357
A. Sani, S. Graffi, G. Masetti, G. Setti: Design of CMOS Cellular Neural Networks Operating at Several Supply Voltages  363
M. Russell Grimaila, J. Pineda de Gyvez: A Macromodel Fault Generator for Cellular Neural Networks  369
T. Roska, P. Szolgay, Á. Zarándy, P.L. Venetianer, A. Radványi, T. Szirányi: On a CNN Chip-Prototyping System  375
P. Kinget, M. Steyaert: Evaluation of CNN Template Robustness Towards VLSI Implementation  381
B.J. Sheu, Sa H. Bang, W.-C. Fang: Analog VLSI Design of Cellular Neural Networks with Annealing Ability  387
L. Raffo, S.P. Sabatini, G.M. Bisio: A Reconfigurable Architecture Mapping Multilayer CNN Paradigms  393
M. Balsi, V. Cimagalli, I. Ciancaglioni, F. Galluzzi: Optoelectronic Cellular Neural Network Based on Amorphous Silicon Thin Film Technology  399
S. Espejo, R. Domínguez-Castro, A. Rodríguez-Vázquez, R. Carmona: Weight-Control Strategy for Programmable CNN Chips  405
S. Espejo, A. Rodríguez-Vázquez, R. Domínguez-Castro, R. Carmona: Convergence and Stability of the FSR CNN Model  411
R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, I. García-Vargas, J.F. Ramos, R. Carmona: SIRENA: A Simulation Environment for CNNs  417
G. Adorni, V. D'Andrea, G. Destri: A Massively Parallel Approach to Cellular Neural Networks Image Processing  423
M. Coli, P. Palazzari, R. Rughi: Use of the CNN Dynamic to Associate Two Points with Different Quantization Grains in the State Space  429
M. Csapodi, L. Nemes, G. Tóth, T. Roska, A. Radványi: Some Novel Analogic CNN Algorithms for Object Rotation, 3D Interpolation-Approximation, and a "Door-in-a-Floor" Problem  435
B.E. Shi: Order Statistic Filtering with Cellular Neural Networks  441
L.-B. Yang, T. Yang, B.-S. Chen: Moving Point Target Detection Using Cellular Neural Networks  445
X.-P. Yang, T. Yang, L.-B. Yang: Extracting Focused Object from Defocused Background Using Cellular Neural Networks  451
P. Arena, S. Baglio, L. Fortuna, G. Manganaro: CNN Processing for NMR Spectra  457
P. Arena, L. Fortuna, G. Manganaro, S. Spina: CNN Image Processing for the Automatic Classification of Oranges  463
S. Schwarz: Detection of Defects on Photolithographic Masks by Cellular Neural Networks  469
M. Ikegami, M. Tanaka: Moving Image Coding and Decoding by DTCNN with 3-D Templates  475
M. Kanaya, M. Tanaka: Robot Multi-Driving Controls by Cellular Neural Networks  481
R.-W. Liu, Y.-F. Huang, X.-T. Ling: A Novel Approach to the Convergence of Neural Networks for Signal Processing  487
Index of Authors  489

From thrun at uran.informatik.uni-bonn.de Fri Jan 13 15:33:11 1995
From: thrun at uran.informatik.uni-bonn.de (Sebastian Thrun)
Date: Fri, 13 Jan 1995 21:33:11 +0100
Subject: sequential learning - lifelong learning
Message-ID: <199501132033.VAA27677@carbon.informatik.uni-bonn.de>

The recent discussions on sequential learning brought up some very interesting points about learning, which I'd like to comment on.

Much of current machine learning and neural network learning research makes the assumption that the only available data is a set of input-output examples of the target function (or, in the case of unsupervised learning, a set of unlabeled points which characterize an unknown probability distribution). There is a huge variety of algorithms (Backprop, ID3, MARS, and Cascade Correlation, to name a few famous ones) which all generalize from such data in somewhat different ways. Despite the exciting progress in understanding these approaches in more depth and coming up with better algorithms (like the work on complexity control, avoiding the over-fitting of noise, model selection, mixtures of experts, committees and related issues), I think there are intrinsic limitations to the view of the learning problem as an isolated function-fitting problem, where all the available data consists of a set of examples of the target function.

If we consider human learning, there is usually much more data available for generalization than just a task-specific set of input-output data. As Jon Baxter's face recognition example convincingly illustrates, we often learn to recognize highly complex patterns or complex motor strategies from an impressively small number of training examples. Humans somehow successfully manage to transfer big chunks of knowledge across learning tasks. If we face a new learning task, much of the "training data" which we use for generalization actually stems from other tasks which we might have faced in our previous lifetime. Consider for example Jon's task of recognizing faces. Once one has learned that the shape of the nose does matter, but facial expressions do not matter for the identification of a person, one can transfer this knowledge to new faces and generalize much more accurately from fewer training examples.

To apply these ideas in the context of artificial neural network learning, one might think of learning as a lifelong assignment, in which a learner faces a whole collection of learning tasks over its entire "lifetime." Hence, what has been observed and/or learned in the first n tasks can be reused in the (n+1)st task. There is a lot of potential leverage in such a scenario. For example, in a recent study, Tom Mitchell and I investigated the problem of learning to recognize simple objects from a very small number of camera images using Backpropagation. We found that after seeing as few as one example of each target object, the recognition rate increased from 50% (random) to 59.7%. However, by learning invariances up front based on images of *other* objects, and by transferring these learned invariances to the target recognition task, we achieved a recognition rate of 74.8%. After seeing another training example of each target object, the standard neural network approach led to 64.8% accuracy, which could be improved to 82.9% if knowledge about the invariances was transferred.
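The flavor of that invariance-transfer experiment can be caricatured in a few lines of Python (a sketch under invented data: here the "invariant feature map" is a fixed random projection so the sketch runs, whereas in the study it was learned from images of other objects):

import numpy as np

rng = np.random.default_rng(0)

# stand-in feature map; in the experiment this is what gets pretrained
# on OTHER objects, here it is just a frozen random projection
W = rng.normal(size=(16, 64))
def feature(x):
    return np.tanh(W @ x)

def fit_readout(feats, labels, classes):
    # nearest-class-mean readout: trainable from one example per class
    return {c: feats[labels == c].mean(axis=0) for c in classes}

def predict(readout, f):
    return min(readout, key=lambda c: np.linalg.norm(readout[c] - f))

X = rng.normal(size=(3, 64))                  # one invented "image" per object
y = np.array([0, 1, 2])
readout = fit_readout(np.array([feature(x) for x in X]), y, [0, 1, 2])

probe = X[1] + 0.1 * rng.normal(size=64)      # a perturbed view of object 1
print(predict(readout, feature(probe)))       # ideally prints 1

The point of the caricature is only the division of labor: everything expensive to learn sits in the shared feature map, so the per-object readout can be fit from one or two examples.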
These results match our experience in other domains (robot control, reinforcement learning, robot perception).

As the discussion on this mailing list illustrates, there are a number of people working on knowledge transfer and related issues; I have seen quite a few exciting approaches. For example, Lori Pratt, Steve Suddarth, Jon Baxter, Rich Caruana and many others have proposed approaches which develop more robust internal representations in Backprop networks based on learning multiple tasks (sequentially or in parallel). Others, like Satinder Singh, Steve Whitehead, and Anton Schwartz, have studied the issue of transfer in the context of reinforcement learning. Basically, they have proposed ways to transfer action policies (the result of reinforcement learning) across tasks. There is a whole variety of other approaches (like Chris Atkeson's variable distance metrics in memory-based learning) which could potentially be applied in a lifelong learning context. However, I feel that the area of knowledge transfer is still largely unexplored. To scale up learning algorithms, I believe it is really helpful not to restrict oneself to looking at a single training set in isolation, but to consider all possible sources of knowledge about the target function.

Sebastian

From poggio at hip.atr.co.jp Sat Jan 14 12:41:47 1995
From: poggio at hip.atr.co.jp (Tomaso Poggio)
Date: Sat, 14 Jan 95 12:41:47 JST
Subject: sequential learning - lifelong learning
In-Reply-To: Sebastian Thrun's message of Fri, 13 Jan 1995 21:33:11 +0100 <199501132033.VAA27677@carbon.informatik.uni-bonn.de>
Message-ID: <9501140341.AA01767@haiku>

There is a large related recent literature both in NN and in specific domains, for instance object recognition. Key words are "hints", "virtual examples", "speaker adaptation", "virtual views", "recognition invariants".

From tishby at CS.HUJI.AC.IL Sun Jan 15 08:10:25 1995
From: tishby at CS.HUJI.AC.IL (Tali Tishby)
Date: Sun, 15 Jan 1995 15:10:25 +0200
Subject: CORTICAL DYNAMICS IN JERUSALEM, June 11-15, 1995
Message-ID: <199501151310.AA13880@fugue.cs.huji.ac.il>

The Hebrew University of Jerusalem
The Center for Neural Computation

Announces

CORTICAL DYNAMICS IN JERUSALEM:
A Symposium on Experimental and Theoretical Issues in the Dynamics and Function of the Neocortex
June 11-15, 1995

Topics include:
1. Biophysics of neurons and synaptic integration: the hardware of cortical dynamics.
2. Temporal structures and patterns of synchrony in cortical activity.
3. Feedforward and recurrent models of cortical information processing.
4. Computational paradigms of brain function.

Invited speakers include:
D.J. Amit (Hebrew University, Israel), Y. Amitai (Ben Gurion University, Israel), H. Barlow (University of Cambridge, U.K.), E. Bienenstock (Brown University, U.S.A.), J. Bullier (I.N.S.E.R.M., France), R. Eckhorn (Philipps University, Germany), G.L. Gerstein (University of Pennsylvania, U.S.A.), C. Gilbert (Rockefeller University, U.S.A.), A.M. Graybiel (M.I.T., U.S.A.), A. Grinvald (Weizmann Institute, Israel), D. Hansel (C.N.R.S., France), A. Kreiter (M.P.I., Frankfurt, Germany), H. Markram (M.P.I., Heidelberg, Germany), K.A. Martin (Oxford University, U.K.), D. Mumford (Harvard University, U.S.A.), B.J. Richmond (N.I.M.H., U.S.A.), A. Schuez (M.P.I., Tubingen, Germany), A.B. Schwartz (Barrow Neurological Inst., U.S.A.), I. Segev (Hebrew University, Israel), M. Shadlen (Stanford University, U.S.A.), A. Thomson (Royal Free Hospital, U.K.), N. Tishby (Hebrew University, Israel),
S. Ullman (Weizmann Institute, Israel), E. Vaadia (Hebrew University, Israel), L. Valiant (Harvard University, U.S.A.)

Contributed poster presentations will be accepted. Registration deadline: March 31st, 1995.

Information and Registration:
Alisa Shadmi, Hebrew University, Interdisciplinary Center for Neural Computation, P.O. Box 1255, Jerusalem, 91904, Israel.
Tel: 972-2-584899  Fax: 972-2-586152  e-mail: Alisa at vms.huji.ac.il

Organizing Committee:
Moshe Abeles, tel: 972-2-757384 or 972-2-758384, fax: 972-2-439736, e-mail: abeles at md2.huji.ac.il
Haim Sompolinsky, tel: 972-2-584563, fax: 972-2-584437, e-mail: haim at fiz.huji.ac.il

From joachim at fit.qut.edu.au Sun Jan 15 23:28:18 1995
From: joachim at fit.qut.edu.au (Prof Joachim Diederich)
Date: Mon, 16 Jan 1995 14:28:18 +1000
Subject: QUT NRC Technical Reports
Message-ID: <199501160428.OAA15919@aldebaran.fit.qut.edu.au>

Computers that learn vs. Users that learn: Experiments with adaptive e-mail agents

Joachim Diederich*, Elizabeth M. Gurrie**, Markus Wasserschaff***

Neurocomputing Research Centre*, School of Information Systems**, Queensland University of Technology, Brisbane Q 4001, Australia
German National Research Center for Computer Science (GMD)***, Institute for Applied Information Processing (FIT), P.O. Box 1316, D-5205 St. Augustin 1, Germany

QUTNRC-95-01-01.ps.Z

Abstract: The classification, selection and organization of electronic messages (e-mail) is a task that can be supported by a neural information processing system. The objective is to select those incoming messages for display that are most important for a particular user, and to propose actions in anticipation of the user's decisions. The artificial neural networks (ANNs) extract relevant information from incoming messages during a training period, learn the response to the incoming message, i.e., a sequence of user actions, and use the learned representation for the proposal of user actions. We test the system by comparing simple recurrent networks (SRNs; Elman, 1990) and recurrent cascade correlation networks (RCC; Fahlman, 1991) on a sequence production task. The performance of both network architectures in terms of network size and learning speed for a given data set is examined. Our results show that (1) RCC generates smaller networks with better performance than SRNs, and (2) RCC learns significantly faster than SRNs.

Submitted for publication. This is an extended version of the IJCAI-93 paper.

***************************************************************************

A Survey and Critique of Techniques for Extracting Rules from Trained Artificial Neural Networks

Robert Andrews* **, Joachim Diederich*, Alan B. Tickle* **

Neurocomputing Research Centre*, School of Information Systems**, Queensland University of Technology, Brisbane Q 4001, Australia

QUTNRC-95-01-02.ps.Z

Abstract: It is becoming increasingly apparent that without some form of explanation capability, the full potential of trained Artificial Neural Networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.

Keywords: rule extraction from neural networks, rule refinement using neural networks, knowledge insertion into neural networks, fuzzy neural networks, inferencing, rule generation

Accepted for: Knowledge-Based Systems, Special Issue on Knowledge-Based Neural Networks (Editor: Prof. LiMin Fu).

These papers are available from ftp.fit.qut.edu.au (cd to /pub/NRC/tr/ps).
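For readers unfamiliar with the baseline architecture in the first report: an Elman-style SRN feeds its previous hidden state back in as context at every step, which is what lets it emit an action sequence in response to a message. A minimal, hypothetical Python sketch of that forward pass (untrained, with invented sizes; the RCC alternative instead grows hidden units during training):

import numpy as np

rng = np.random.default_rng(0)
IN, HID, OUT = 5, 8, 4           # message features in, action codes out
Wxh = rng.normal(scale=0.2, size=(HID, IN))
Whh = rng.normal(scale=0.2, size=(HID, HID))   # context (recurrent) weights
Why = rng.normal(scale=0.2, size=(OUT, HID))

def run(sequence):
    # the context layer is simply last step's hidden state (Elman, 1990)
    h = np.zeros(HID)
    outputs = []
    for x in sequence:
        h = np.tanh(Wxh @ x + Whh @ h)
        outputs.append(Why @ h)   # one proposed action code per step
    return outputs

msgs = [rng.normal(size=IN) for _ in range(3)]  # invented input codes
print([o.round(2) for o in run(msgs)])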
From shawn_mikiten at biad23.uthscsa.edu Tue Jan 17 12:48:56 1995
From: shawn_mikiten at biad23.uthscsa.edu (shawn mikiten)
Date: 17 Jan 95 12:48:56 U
Subject: Announce-1995 Summer Underg
Message-ID:

Announce: 1995 Summer Undergraduate Research

The Graduate School of Biomedical Sciences at the University of Texas Health Science Center at San Antonio

Announcement: 1995 Summer Undergraduate Research Fellowship

The Summer Undergraduate Research Fellowship (SURF) will be awarded to outstanding undergraduates from across the nation. Free housing, a stipend, and travel reimbursement will be offered. Selected undergraduates will work with faculty, fellows, and students on a major research project. Learn research techniques and scientific methods. Participate in seminars & workshops exploring current problems & methods in advanced research.

For more information, the URL is: http://grad_dean.uthscsa.edu/
To request an application, e-mail Kelly M. at surf at uthscsa.edu
Deadline for completed application: February 24, 1995

From mike at PARK.BU.EDU Tue Jan 17 22:52:37 1995
From: mike at PARK.BU.EDU (Michael Cohen)
Date: Tue, 17 Jan 1995 22:52:37 -0500
Subject: VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Message-ID: <199501180352.WAA11469@cns.bu.edu>

VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Friday, March 17, 1995
Boston University
George Sherman Union Conference Auditorium, Second Floor
775 Commonwealth Avenue, Boston, MA 02215

Co-Sponsored by the Department of Cognitive and Neural Systems, the Center for Adaptive Systems, and the Center for Philosophy and History of Science

Program:
--------
 8:30am--9:30am:   BELA JULESZ, Rutgers University: Why is the early visual system more interesting than the kidney?
 9:30am--10:30am:  KEN NAKAYAMA, Harvard University: Visual perception of surfaces
10:30am--11:00am:  Coffee Break
11:00am--12:00pm:  STEPHEN GROSSBERG, Boston University: Cortical dynamics of visual perception
12:00pm--1:00pm:   PATRICK CAVANAGH, Harvard University: Attention-based visual processes
 1:00pm--2:30pm:   Lunch
 2:30pm--3:30pm:   V.S. RAMACHANDRAN, University of California: Neural plasticity in the adult human brain: New directions of research
 3:30pm--4:30pm:   EVAN THOMPSON, Boston University: Phenomenology and computational vision
 4:30pm--5:30pm:   DANIEL DENNETT, Tufts University: Filling-in revisited
 5:30pm---:        Discussion

Registration:
-------------
The conference is free and open to the public.

Parking:
--------
Parking is available at nearby campus lots: 808 Commonwealth Avenue ($6 per vehicle), 766 Commonwealth Avenue ($8 per vehicle), and 700 Commonwealth Avenue ($10 per vehicle). If these lots are full, please ask the lot attendant for an alternate location.
Contact:
--------
Professor Stephen Grossberg
Department of Cognitive and Neural Systems
111 Cummington Street
Boston, MA 02215
fax: (617) 353-7755
email: diana at cns.bu.edu

From minton at ptolemy-ethernet.arc.nasa.gov Wed Jan 18 14:38:12 1995
From: minton at ptolemy-ethernet.arc.nasa.gov (Steve Minton)
Date: Wed, 18 Jan 95 11:38:12 PST
Subject: Article on Error-Correcting Output Codes
Message-ID: <9501181938.AA05805@ptolemy.arc.nasa.gov>

Readers of this group may be interested in the following article, which was just published by JAIR:

Dietterich, T.G. and Bakiri, G. (1995) "Solving Multiclass Learning Problems via Error-Correcting Output Codes", Volume 2, pages 263-286.

PostScript: volume2/dietterich95a.ps (265K)

Abstract: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k>2 values (i.e., k ``classes''). The definition is acquired by studying collections of training examples of the form <x, f(x)>. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that---like the other methods---the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.

The PostScript file is available via:
-- comp.ai.jair.papers
-- World Wide Web: The URL for our World Wide Web server is http://www.cs.washington.edu/research/jair/home.html
-- Anonymous FTP from either of the two sites below:
      CMU: p.gp.cs.cmu.edu, directory: /usr/jair/pub/volume2
      Genoa: ftp.mrg.dist.unige.it, directory: pub/jair/pub/volume2
-- Automated email. Send mail to jair at cs.cmu.edu or jair at ftp.mrg.dist.unige.it with the subject AUTORESPOND, and the body GET VOLUME2/DIETTERICH95A.PS (either upper or lowercase is fine). Note: Your mailer might find this file too large to handle. (The compressed version of this paper cannot be mailed.)
-- JAIR Gopher server: at p.gp.cs.cmu.edu, port 70.

For more information about JAIR, check out our WWW or FTP sites, or send electronic mail to jair at cs.cmu.edu with the subject AUTORESPOND and the message body HELP, or contact jair-ed at ptolemy.arc.nasa.gov.
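To make the coding scheme in the abstract concrete, here is a minimal, self-contained Python sketch (an invented 7-bit code matrix and nearest-mean stand-ins for the paper's C4.5/backpropagation bit learners; toy Gaussian data):

import numpy as np

rng = np.random.default_rng(0)

# a 4-class, 7-bit code matrix (invented); rows are pairwise Hamming
# distance >= 4, so a single wrong bit still decodes correctly
CODE = np.array([[0, 0, 0, 0, 0, 0, 0],
                 [1, 1, 1, 1, 0, 0, 0],
                 [0, 0, 1, 1, 1, 1, 0],
                 [1, 1, 0, 0, 1, 1, 1]])

def train_bit_learners(X, y):
    # one binary problem per code column; a nearest-mean rule stands in
    # for a real learner so the sketch stays self-contained
    learners = []
    for b in range(CODE.shape[1]):
        t = CODE[y, b]                      # relabel y by bit b
        mu0, mu1 = X[t == 0].mean(0), X[t == 1].mean(0)
        learners.append(lambda x, m0=mu0, m1=mu1:
                        int(np.linalg.norm(x - m1) < np.linalg.norm(x - m0)))
    return learners

def decode(learners, x):
    bits = np.array([f(x) for f in learners])
    return int(np.argmin((CODE != bits).sum(axis=1)))  # nearest codeword

means = rng.normal(scale=3.0, size=(4, 3))             # toy classes
y = np.repeat(np.arange(4), 10)
X = means[y] + rng.normal(size=(40, 3))

learners = train_bit_learners(X, y)
acc = np.mean([decode(learners, x) == c for x, c in zip(X, y)])
print("training accuracy:", acc)

Because the rows of CODE are at least Hamming distance 4 apart, any one bit learner can be wrong and the nearest-codeword decoding still recovers the intended class, which is the error-correcting property the abstract describes.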
From rsun at cs.ua.edu Thu Jan 19 11:31:29 1995
From: rsun at cs.ua.edu (Ron Sun)
Date: Thu, 19 Jan 1995 10:31:29 -0600
Subject: sequential learning - lifelong learning
Message-ID: <9501191631.AA13316@athos.cs.ua.edu>

To transfer knowledge from one environment to another, one viable way, I believe, is to extract generic rules (from a NN) that are more widely applicable. This may not solve the interference problem, but it does handle the transfer problem to a certain extent. (It can also deal with the interference problem, if extracted rules are used to train the NN, interspersed with current data.)

Here is a TR on extracting rules from a Q-learning network. The resulting architecture consists of two levels, containing both rules and a Q-learning network, so that both rigorous, abstract knowledge (declarative knowledge) and flexible, embodied knowledge (procedural knowledge) are maintained. Learning and rule extraction are on-line, carried out while the task is being performed, and can be continuously adaptive. Rule extraction is done on top of the connectionist network performing Q-learning, so the architecture is parsimonious in terms of learning mechanisms.

The TR is available at
FTP-host: aramis.cs.ua.edu
FTP-file: pub/tech-reports/sun.clarion.ps

================================================================
Dr. Ron Sun
Department of Computer Science        phone: (205) 348-6363
The University of Alabama             fax: (205) 348-0219
Tuscaloosa, AL 35487                  rsun at athos.cs.ua.edu
================================================================

From chaos at gojira.Berkeley.EDU Thu Jan 19 20:28:35 1995
From: chaos at gojira.Berkeley.EDU (Jim Crutchfield)
Date: Thu, 19 Jan 95 17:28:35 PST
Subject: Preprint --- Evolving Globally Synchronized Cellular Automata
Message-ID: <9501200128.AA06251@gojira.Berkeley.EDU>

The following paper is now available on the Web and via anonymous FTP. Access instructions follow.

Evolving Globally Synchronized Cellular Automata
Rajarshi Das, James P. Crutchfield, Melanie Mitchell, and James E. Hanson
Santa Fe Institute Working Paper 95-01-005

Abstract: How does an evolutionary process interact with a decentralized, distributed system in order to produce globally coordinated behavior? Using a genetic algorithm (GA) to evolve cellular automata (CAs), we show that the evolution of spontaneous synchronization, one type of emergent coordination, takes advantage of the underlying medium's potential to form embedded particles. The particles, typically phase defects between synchronous regions, are designed by the evolutionary process to resolve frustrations in the global phase. We describe in detail one typical solution discovered by the GA, delineating the discovered synchronization algorithm in terms of embedded particles and their interactions. We also use the particle-level description to analyze the evolutionary sequence by which this solution was discovered. Our results have implications both for understanding emergent collective behavior in natural systems and for the automatic programming of decentralized spatially extended multiprocessor systems.

World Wide Web URL: http://www.santafe.edu/projects/evca/evabstracts.html

Anonymous FTP: To obtain an electronic copy of this paper (12 pages):
ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/EvCA
binary
get EGSCA.ps.Z
quit

Then at your system:
uncompress EGSCA.ps.Z
lpr EGSCA.ps

If you have trouble getting this paper electronically, you can request a hard copy from Deborah Smith (drs at santafe.edu), Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM, USA, 87501.
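A toy rendition of the experimental setup may help in parsing the abstract (everything below is invented and far smaller than the paper's experiments, which evolve radius-3 rules; radius-1 rules like these can at best partly solve the task):

import random

random.seed(0)
N, T = 21, 50                       # lattice size and time horizon (toy)

def step(cfg, rule):
    # rule is a lookup table over the 8 radius-1 neighborhoods
    return [rule[(cfg[i - 1] << 2) | (cfg[i] << 1) | cfg[(i + 1) % len(cfg)]]
            for i in range(len(cfg))]

def synchronized(a, b):
    # target behavior: all cells equal, flipping on every step
    return len(set(a)) == 1 and len(set(b)) == 1 and a != b

def fitness(rule, trials=20):
    ok = 0
    for _ in range(trials):
        cfg = [random.randint(0, 1) for _ in range(N)]
        prev = cfg
        for _ in range(T):
            prev, cfg = cfg, step(cfg, rule)
        ok += synchronized(prev, cfg)
    return ok / trials

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(16)]
for gen in range(20):               # crude elitist GA with bit-flip mutation
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]
    pop = elite + [[b ^ (random.random() < 0.1) for b in random.choice(elite)]
                   for _ in range(12)]
print("best fitness:", max(fitness(r) for r in pop))

Fitness here is the fraction of random initial configurations that end up globally synchronized (spatially uniform and blinking), which mirrors the task the GA is asked to solve; the particle-level analysis the paper develops is a separate, deeper story.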
From redi at dynamics.bu.edu Thu Jan 12 11:55:05 1995
From: redi at dynamics.bu.edu (Jason Redi)
Date: Thu, 12 Jan 1995 11:55:05 -0500
Subject: Internet Course Announcement on Complex Systems
Message-ID: <9501121655.AA08829@dynamics.bu.edu>

[ This course will be available through the MBONE virtual network on the Internet. For more information see the URL http://ripple.bu.edu/CSDL/sc726/sc726.html ]

COURSE ANNOUNCEMENT:
SC726: Dynamics of Complex Systems
Spring, 1995
Prof. Yaneer Bar-Yam

What do protein folding, neural networks, developmental biology, evolution and human economies have in common? Are there mathematical models that capture key properties of these complex systems? Can we develop principles for the design of complex systems?

It is now widely held that the theory of complexity and the dynamics of complex systems may be founded on universal principles that describe disparate problems ranging from physics to economics. A corollary is that transferring ideas and results among investigators in many disparate areas will cross-fertilize and lead to important new results. In this course we will study a few examples of complex systems and identify questions that appear in many of them. A central goal of this course is to develop models and modeling techniques that will be useful when applied to all complex systems. For this purpose we will adopt both analytic tools and computer simulation. Analytic techniques will be introduced from statistical mechanics and stochastic dynamics. Computer simulation using Monte Carlo methods and Cellular Automata will be an integral part of the curriculum.

The course consists of three hours of lecture and one hour of guest lectures per week. Students will work on a group project on topics suited to the group's interests.

Topics to be covered include: subdivision and hierarchical organization, the scaling of kinetic properties with system size, self-organization and organization by design, measuring complexity. Systems to be discussed include: protein folding, neural networks, developmental biology, evolution and human economies/societies.

Time: MW 12:00-1:30pm EST

This course is intended for graduate students (both Ph.D. and Masters) from a variety of departments - physics, chemistry, biology, all of engineering, mathematics, computer science - who are interested in investigating, working with or engineering complex systems. One purpose of the course is to help doctoral students learn about current research in these areas and identify possible new research topics. Interested faculty are also welcome to attend.

Prerequisites: Basic probability and statistics as provided by one of: MA381, EK500, EK424, PY410.

The course is offered through the Internet to remote sites by arrangement. First day of classes: Jan. 18, 1995.

For further information contact Prof. Bar-Yam at tel. (617) 353-2843 or Internet: yaneer at enga.bu.edu

From ptodd at synapse.psy.du.edu Fri Jan 20 14:10:07 1995
From: ptodd at synapse.psy.du.edu (Peter Todd)
Date: Fri, 20 Jan 95 12:10:07 MST
Subject: GRADUATE PROGRAM IN DEVELOPMENTAL COG. NEUROSCIENCE
Message-ID: <9501201910.AA04389@synapse>

****************************************************************************
GRADUATE STUDY IN DEVELOPMENTAL COGNITIVE NEUROSCIENCE
DEPARTMENT OF PSYCHOLOGY, UNIVERSITY OF DENVER
Invitation for applications for Fall 1995, due Feb. 1
****************************************************************************

The Department of Psychology at the University of Denver is pleased to announce our new research training program in Developmental Cognitive Neuroscience. This is one of the first graduate training programs in the world to focus on the problems and techniques of cognitive neuroscience as they relate to the processes of development across the lifespan.
Our department is already well-known for research and expertise in developmental, experimental/cognitive, and child clinical psychology, and this new program stands as a specialization within each of these areas while also bridging them with a common vision. We have established this program to capitalize on the interdisciplinary environment that exists within our department and across other departments at the University of Denver, and with other universities in Colorado. The program includes coursework in psychology, neuroscience, and biology, and participation in a wide range of research groups. Students will also be trained in experimental and modeling techniques applied to a variety of normal and abnormal populations. In addition to taking courses in the Psychology Department, students will receive training and coursework through the University of Denver Biology Department, the University of Colorado Health Sciences Center, and the Institute for Behavior Genetics at the University of Colorado, Boulder. The new developmental cognitive neuroscience program offers students a unique opportunity to combine interests in a wide range of disciplines. Students pursuing an experimental/developmental psychology degree will find an exciting mix of topics incorporating the latest research in neuroscience. For clinical students, this is one of the few programs anywhere that offers graduate level training in child clinical neuropsychology. All students in the program will take a common set of core courses in neurobiology, neuroanatomy, psychopharmacology, neuropsychology, and connectionist modeling, in addition to more traditional courses in statistics, research design, and substantive areas of psychology. Students will gain valuable hands-on experience through practicums in neuroimaging and research with abnormal populations. These populations include adults with Alzheimer's disease or focal lesions, and children with autism, dyslexia, ADHD, inherited metabolic disorders, and various mental retardation syndromes. Of course, the most important aspect of training is research conducted in close collaboration with a faculty mentor. Below we have included a list of the faculty in this program and their research interests. Two of those listed, Cathy Reed and Peter Todd, were recruited last year specifically for this new program. Denver and the state of Colorado as a whole provide a great living and working environment. The quality of life here is very high, with the cultural attractions of a major metropolitan center vying with the natural attractions of world-class skiing, hiking, biking, and fossil-hunting in the nearby Rockies. The cost of living in the area is among the lowest of any major city in the country, while the local economy continues to thrive and grow. The University of Denver itself is a 130-year-old institution well-respected and supported by the surrounding community, with a student population of 8000 spread across undergraduate, graduate, and professional programs. A major capital campaign has just been announced, which has begun to bring in even more resources and exposure for the University. The Departments of Psychology and Biology are two of the largest and most well-funded on campus, with strong graduate programs and friendly and frequent faculty-student interaction. Our Department has a good track record for attracting high quality graduate students who become productive and renowned scientists.
We now seek to attract students who are interested in pursuing first-rate research and training in developmental cognitive neuroscience. Interested students are strongly encouraged to apply. For more information about this new program, prospective students (and faculty advisors) should contact our Graduate Affairs Secretary at the address below, or members of our faculty, at their individual email addresses, for more specific questions about the program and the research training opportunities within their labs. For further information and application materials, please contact: Paula Houghtaling, Graduate Affairs Secretary Department of Psychology University of Denver 2155 S. Race Street Denver, CO 80208 USA Email: phoughta at pstar.psy.du.edu Phone: 303-871-3803 (8-5 Mountain time) *** APPLICATION DEADLINE FEBRUARY 1, 1995 *** Psychology Faculty and Interests Marshall Haith (Ph.D. 1964, University of California, Los Angeles), Professor. Visual scanning and attention in infants, anticipation and planning skills, development of early reading skills. Email: mhaith at pstar.psy.du.edu Janice M. Keenan (Ph.D. 1975, University of Colorado), Professor. Memory, reading comprehension, psycholinguistics, cognitive neuropsychology. Email: jkeenan at pstar.psy.du.edu Bruce F. Pennington (Ph.D. 1977, Duke University), Professor. Developmental neuropsychology, developmental psychopathology, dyslexia, autism, attention deficit hyperactivity disorder. Email: bpenning at pstar.psy.du.edu George R. Potts (Ph.D. 1971, Indiana University), Professor. Cognition, memory, reading, implicit memory and perception. Email: gpotts at pstar.psy.du.edu Catherine L. Reed (Ph.D. 1991, University of California, Santa Barbara), Assistant Professor. Cognitive neuropsychology, somatosensory perception, visual cognition, human movement. Email: creed at pstar.psy.du.edu Ralph J. (Rob) Roberts (Ph.D. 1984, University of Virginia), Associate Professor. Developmental cognitive neuropsychology, attention and working memory, eye movements, acquisition of complex perception-action skills. Email: rroberts at pstar.psy.du.edu Peter M. Todd (Ph.D. 1992, Stanford University), Assistant Professor. Computer modeling and simulation of cognition, connectionism, evolution of behavior and learning, psychological/sexual selection, music cognition. Email: ptodd at pstar.psy.du.edu Biology Faculty and Interests Robert Dores (Ph.D. 1979, University of Minnesota), Professor. Biosynthesis of pituitary polypeptide hormones and neuropeptides. Email: rdores at du.edu John (Jack) Kinnamon (Ph.D. 1976, University of Georgia), Assistant Professor. Neurobiology of sensory systems. Email: jkinnamo at du.edu Susan Sadler (Ph.D. 1982, University of Colorado), Professor. Identification and characterization of molecular mechanisms that are involved in triggering meiotic cell division. Email: ssadler at du.edu  From harnad at ecs.soton.ac.uk Sat Jan 21 13:43:38 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Sat, 21 Jan 95 18:43:38 GMT Subject: Important new Behav. Brain Sci. changes Message-ID: <570.9501211843@cogsci.ecs.soton.ac.uk> Five important new changes in Behavioral and Brain Sciences (BBS) addresses, policies and procedures (1-5) plus Three announcements about positions and activities at my new institution (Southampton University) (6-8). 
Summaries first, then the details: (1) New address for submitting BBS target articles (2) New address for submitting BBS commentaries (3) All commentaries now require abstracts (4) All articles/commentaries now require email version and/or disk (5) Target articles now electronically retrievable in multiple ways (6) Applications invited for Psychology Professorship at U. Southampton. (7) Applications invited for grad students and postdocs to work with me (8) Come and give a talk at our new Cognitive Sciences Centre (1) NEW BBS ADDRESS (Editorial): Effective immediately, ALL SUBMITTED TARGET ARTICLES AND ALL CORRESPONDENCE PERTAINING TO EDITING AND REFEREEING should henceforth be addressed to BBS's new Editorial Office: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM phone: 44 703 594-583 fax: 44 703 593-281 email: bbs at ecs.soton.ac.uk All BBS email should go to the email address above; only messages intended for Stevan Harnad personally should be sent to harnad at ecs.soton.ac.uk -- I now get over 80 emails a day so please, whatever can be answered by the Managing Editor, send to bbs rather than harnad! (2) SECOND NEW BBS ADDRESS: Effective immediately, ALL SUBMITTED COMMENTARIES (double-spaced, in triplicate, with email version and/or disk) AND ALL CORRESPONDENCE PERTAINING TO COPY-EDITING AND PROOFS should henceforth be addressed to: Behavioral and Brain Sciences Cambridge University Press Journals Department 40 West 20th Street New York, NY 10011-4211 USA phone: 800-431-1580 (ext. 369, Ed Miller) 212-924-3900 (ext. 369, Ed Miller) fax: 212-645-5960 email: bbs at cup.org (or emiller at cup.org) To expedite mailing, all commentaries will be received and logged in New York and then forwarded to the Editor in Southampton for review. (3) Effective immediately, every BBS commentary and author's response must have an ABSTRACT (~60 words). (4) Effective immediately, IN ADDITION to the requisite number of hard copies, all BBS contributions (articles, commentaries, and responses) will also have to be submitted in electronic form -- by email (preferably) to bbs at ecs.soton.ac.uk or on a computer disk accompanying the hard copies. BBS is moving toward more and more electronic processing at all stages. The result will be much faster, more efficient and fairer procedures. (5) Electronic versions of the preprints of all BBS target articles can be retrieved by ftp, archie, gopher or World-Wide-Web from: ftp://cogsci.ecs.soton.ac.uk/pub/harnad ftp://ftp.princeton.edu/pub/harnad/ http://cogsci.ecs.soton.ac.uk/~harnad/ http://www.princeton.edu/~harnad/ gopher://gopher.princeton.edu/11/.libraries/.pujournals This way prospective commentators can let us know that they would like to be invited to comment on target articles about to be circulated for commentary, and can search the archive for past articles on which they may wish to contribute Continuing Commentary. (6) Applications are invited for a full Professorship in Psychology at the University of Southampton. I am especially interested to hear from Experimental/Clinical Neuropsychologists with active research programmes: Please contact me to discuss it informally: harnad at ecs.soton.ac.uk (7) Expressions of interest are also invited from prospective graduate students and postdoctoral fellows interested in coming to work with me in the Cognitive Psychology Laboratory and the Cognitive Sciences Centre at Southampton University. Our research focus is described below.
Please write to: harnad at ecs.soton.ac.uk (8) Let me know if you will be in the London area and would like to give a talk about your work at our new Cognitive Sciences Centre (CSC), of which I am Director, with the collaboration of Professor Michael Sedgewick (Clinical Neurological Sciences), Professors Tony Hey and Chris Harris (Electronics and Computer Science), Dr. John Bradshaw (Anthro-Zoology Institute), Professor Wendy Hall (Multimedia Centre) and Professor Bob Remington (ex officio, Head of the Psychology Department). -------------------------------------------------------------------- Research Focus of the Laboratory CATEGORISATION AND COGNITION: Our capacity to categorise is at the heart of all of our cognitive capacity. People can sort and label the objects and events they see and hear with a proficiency that still far exceeds that of our most powerful machines. How do we manage to do it? The answer will not only tell us more about ourselves but it will allow us to apply our findings to enhancing our proficiency, both in the learning of categories and in our use of machines to extend our capacities. CATEGORY LEARNING is the most general form of cognition. Animals learn categories when they learn what is and is not safe to eat, where it is safe to forage, who is friend and who is foe. Children learn the same kinds of categories, but they eventually go on to the much more powerful and uniquely human strategy of learning categories by name, rather than by performing some instrumental response on them, such as eating or fleeing. Whether they categorise by instrumental response or by name, however, children must still have direct experience with the objects they are categorising, and some sort of corrective feedback from the consequences of MIScategorising them. Eventually, however, categories can be learned from strings of symbols alone, with most of those symbols being themselves the names of categories. This is the most remarkable of our cognitive capacities, language, but language and cognition cannot be understood unless we analyse how they are grounded in categorisation capacity (Harnad 1990). This is the theme of our research programme. BEHAVIORAL, COMPUTATIONAL AND NEURAL APPROACHES: There are three empirical ways to investigate the functional basis of our categorisation capacity. The first way is to (i) analyse our categorisation performance itself experimentally, particularly how we LEARN to categorise. The second way is to (ii) model our categorisation capacity with computers that must learn the same categories that we do, on the basis of the same input and corrective feedback that we get. The third way is to (iii) monitor brain function while we are learning categories, to determine what neural properties change during the course of learning, and to relate them to the performance changes during learning, as well as to the internal functioning of the machine models performing the same task. These three converging lines of investigation are the ones to be pursued in the Cognitive Psychology Laboratory.
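A toy computational rendering of point (ii) -- a machine that must learn a category from examples plus corrective feedback about MIScategorisation -- is the classical perceptron. In the Python sketch below, the "world" and every parameter are invented for illustration; this is of course not one of the Laboratory's models.

import random

random.seed(0)

def world_feedback(x, y):
    # Stand-in for the environment: the "true" category of a stimulus (x, y).
    return 1 if 2 * x - y > 0.5 else 0

w = [0.0, 0.0, 0.0]                     # two input weights plus a bias, initially ignorant
for trial in range(3000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    guess = 1 if w[0] * x + w[1] * y + w[2] > 0 else 0
    error = world_feedback(x, y) - guess   # corrective feedback: nonzero only when miscategorising
    w[0] += 0.1 * error * x
    w[1] += 0.1 * error * y
    w[2] += 0.1 * error

tests = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
accuracy = sum((1 if w[0] * x + w[1] * y + w[2] > 0 else 0) == world_feedback(x, y)
               for x, y in tests) / len(tests)
print("categorisation accuracy after learning:", accuracy)

The scientific questions above begin where this sketch ends: real category learning must cope with high-dimensional sensory input and, eventually, with categories acquired from strings of symbols alone.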
Details and papers are available from the URLs below: ---------------------------------------------------------------- Stevan Harnad Professor of Psychology Director, Cognitive Sciences Centre Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM harnad at ecs.soton.ac.uk harnad at princeton.edu phone: +44 703 592582 fax: +44 703 594597 -------------------------------------------------------------------- ftp://ftp.princeton.edu/pub/harnad/ http://cogsci.ecs.soton.ac.uk/~harnad/ http://www.princeton.edu/~harnad/ gopher://gopher.princeton.edu/11/.libraries/.pujournals

From ucjtsjd at ucl.ac.uk Sun Jan 22 20:59:45 1995 From: ucjtsjd at ucl.ac.uk (John Draper) Date: Mon, 23 Jan 1995 01:59:45 +0000 Subject: Lectureships in Psychology Message-ID: <7189.ucjtsjd@pop-server.bcc.ac.uk> 1. LECTURESHIPS IN PSYCHOLOGY Applications are invited for four lectureships tenable from 1 October 1995. The successful candidates will be active researchers in their respective fields as below: a) Cognitive science, particularly computational modelling of cognitive functions. Teaching duties include a significant contribution to the undergraduate programme in Cognitive Science b) Developmental psychology. Teaching involves both the undergraduate and MSc Educational Psychology degree programmes. c) Social psychology. Teaching duties include contributing to the undergraduate social psychology programme. d) Cognitive neuroscience, particularly cognitive neuropsychology. Teaching duties involve organising undergraduate labs and providing lectures in neuropsychology. Posts A, B and C are permanent while post D is temporary for two years but with the possibility of being made permanent. The appointments will be made at the appropriate points on the Lecturer A scale (currently £14,756 - £19,326), possibly Lecturer B, plus London Allowance of £2,134. Applications by CV including three referees should be sent to : John Draper, Departmental Administrator, Department of Psychology, University College London, Gower Street, London WC1E 6BT (e-mail : j.draper at ucl.ac.uk; tel 071 387 7050 x5338) from whom further details can be obtained. Closing date : 31 March 1995.

From rafal at mech.gla.ac.uk Mon Jan 23 10:12:05 1995 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Mon, 23 Jan 1995 15:12:05 GMT Subject: Workshop on Neurocontrol Message-ID: <15957.199501231512@gryphon.mech.gla.ac.uk> CALL FOR PAPERS Neural Adaptive Control Technology Workshop: NACT I 18-19 May, 1995 University of Glasgow Scotland, UK NACT Project ^^^^^^^^^^^^ The first of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on May 18-19 1995 in Glasgow, Scotland. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland. The project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed will be evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The project leader is Dr Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow).
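To make "neural controllers for non-linear dynamic systems" concrete for readers outside the field, here is one classical scheme -- direct inverse modelling -- in a Python sketch: explore a plant with random actions, fit a network to the inverse map, then use the network as a controller. The plant, network size and learning rate are invented for illustration, and this is not the NACT project's method.

import math, random

def plant(u):
    # Unknown-to-the-controller plant (static and invertible, to keep the sketch tiny)
    return u + 0.3 * math.sin(3.0 * u)

H, LR = 12, 0.05
random.seed(1)
W1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]   # hidden [weight, bias]
W2 = [random.uniform(-0.1, 0.1) for _ in range(H)]
b2 = 0.0

def net(y):
    h = [math.tanh(w * y + b) for w, b in W1]
    return sum(v * hi for v, hi in zip(W2, h)) + b2, h

# Exploration: apply random actions, observe outputs, and train the network to
# recover the action from the output, i.e. the inverse map u = g(y).
for _ in range(20000):
    u = random.uniform(-2, 2)
    y = plant(u)
    u_hat, h = net(y)
    err = u_hat - u
    for i in range(H):
        dh = err * W2[i] * (1 - h[i] ** 2)   # backpropagate through tanh
        W1[i][0] -= LR * dh * y
        W1[i][1] -= LR * dh
        W2[i] -= LR * err * h[i]
    b2 -= LR * err

# Control: to reach a reference output, feed it through the learned inverse model.
for y_ref in (-1.0, 0.0, 0.7):
    u = net(y_ref)[0]
    print("y_ref = %+.1f   achieved y = %+.3f" % (y_ref, plant(u)))

Real adaptive control adds exactly the issues the project studies: dynamics rather than a static map, stability guarantees, and adaptation while the loop is closed.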
NACT I Workshop ^^^^^^^^^^^^^^^ The aim of the workshop is to bring together selected invited specialists in the fields of adaptive control, non-linear systems and neural networks. A number of contributed papers will also be included. As well as paper presentations, significant time will be allocated to round-table and discussion sessions. In order to create a fertile atmosphere for a significant information interchange, we aim to attract active specialists in the relevant fields. Proceedings of the meeting will be published in an edited book format. A social programme will be prepared for the weekend immediately following the meeting where participants will be able to sample the various cultural and recreational offerings of Central Scotland (a visit to a whisky distillery is included) and the easily reached Highlands. Contributed papers ^^^^^^^^^^^^^^^^^^ The Program Committee is soliciting contributed papers in the area of neurocontrol for presentation at the conference and publication in the Proceedings. Submissions should take the form of an extended abstract of six pages in length and the DEADLINE is 1 March 1995. Accepted extended abstracts will be circulated to participants in a Workshop digest. Following the Workshop, selected authors will be asked to prepare a full paper for publication in the proceedings. This will take the form of an edited book produced by an international publisher. LaTeX style files will be available for document preparation. Each submitted paper must be headed with a title, the names, affiliations and complete mailing addresses (including e-mail) of all authors, a list of three keywords, and the statement "NACT I". The first named author of each paper will be used for all correspondence unless otherwise requested. Final selection of papers will be announced in mid-March 1995. Address for submissions ^^^^^^^^^^^^^^^^^^^^^^^ Dr Rafal Zbikowski Department of Mechanical Engineering James Watt Building University of Glasgow Glasgow G12 8QQ Scotland, UK rafal at mech.gla.ac.uk Schedule summary ^^^^^^^^^^^^^^^^ 1 March 1995 Deadline for submission of contributed papers Mid-March 1995 Notification regarding acceptance of papers 18-19 May 1995 Workshop

From john at dcs.rhbnc.ac.uk Mon Jan 23 10:47:32 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 23 Jan 95 15:47:32 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199501231547.PAA15966@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): two new reports available ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-014: ---------------------------------------- Learning Minor Closed Graph Classes with Membership and Equivalence Queries by John Shawe-Taylor, Dept of Computer Science, Royal Holloway, U. of London Carlos Domingo, Department of Software, U. Politècnica de Catalunya Hans Bodlaender, Dept of Computer Science, Utrecht University James Abello, Computer Science Dept, Texas A&M University Abstract: The paper considers the problem of learning classes of graphs closed under taking minors.
It is shown that any such class can be properly learned in polynomial time using membership and equivalence queries. The representation of the class is in terms of a set of minimal excluded minors (obstruction set). ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-016: ---------------------------------------- On-line learning with minimal degradation in feedforward networks by V Ruiz de Angulo, Institute for System Engineering and Informatics, CEC Joint Research Center, Ispra, Italy and Carme Torras, CSIC-UPC, Barcelona, Spain Abstract: Dealing with non-stationary processes requires quick adaptation while at the same time avoiding catastrophic forgetting. A neural learning technique that satisfies these requirements, without sacrificing the benefits of distributed representations, is presented. It relies on a formalization of the problem as the minimization of the error over the previously learned input-output (i-o) patterns, subject to the constraint of perfect encoding of the new pattern. Then this constrained optimization problem is transformed into an unconstrained one with hidden-unit activations as variables. This new formulation naturally leads to an algorithm for solving the problem, which we call Learning with Minimal Degradation (LMD). Some experimental comparisons of the performance of LMD with back-propagation are provided which, besides showing the advantages of using LMD, reveal the dependence of forgetting on the learning rate in back-propagation. We also explain why overtraining affects forgetting and fault-tolerance, which are seen as related problems. ----------------------- The Report NC-TR-94-014 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-94-014.ps.Z ftp> bye % zcat nc-tr-94-014.ps.Z | lpr -l Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. Best wishes John Shawe-Taylor
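The formulation in NC-TR-94-016 -- encode a new pattern exactly while disturbing the network as little as possible, by treating hidden-unit activations as the variables -- can be caricatured in a few lines of numpy. The sketch keeps only the constrained step (a minimal-norm correction of the hidden vector, then driving the first layer toward it); the report's actual LMD algorithm additionally minimizes the error over previously learned patterns, so treat this purely as an illustration of the change of variables, with all sizes and numbers invented.

import numpy as np

# One-hidden-layer net y = W2 @ tanh(W1 @ x), assumed already trained on old patterns.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 12, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
x_new, y_new = rng.normal(size=n_in), rng.normal(size=n_out)

h = np.tanh(W1 @ x_new)                  # current hidden activations for the new input
# Smallest change of the hidden vector that makes the output exact: W2 @ h_star == y_new.
h_star = h + np.linalg.pinv(W2) @ (y_new - W2 @ h)
h_star = np.clip(h_star, -0.999, 0.999)  # keep the targets inside tanh's range

# Back in weight space: nudge W1 until the hidden layer actually produces h_star.
for _ in range(500):
    a = np.tanh(W1 @ x_new)
    W1 -= 0.1 * np.outer((a - h_star) * (1 - a ** 2), x_new)

print("residual output error on the new pattern:",
      np.abs(W2 @ np.tanh(W1 @ x_new) - y_new).max())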
From hml at charon.llnl.gov Mon Jan 23 14:52:23 1995 From: hml at charon.llnl.gov (Hans Martin Lades) Date: Mon, 23 Jan 95 11:52:23 -0800 Subject: Postdoctoral research staff position available Message-ID: POST DOCTORAL RESEARCH STAFF MEMBER IN COMPUTER VISION/CONNECTIONIST VISION MODELING INSTITUTE FOR SCIENTIFIC COMPUTING RESEARCH (ISCR) UNIVERSITY OF CALIFORNIA LAWRENCE LIVERMORE NATIONAL LABORATORY (LLNL) P.O. BOX 808, L-416 LIVERMORE, CA 94550 NATURE AND SCOPE OF POSITION: The Institute for Scientific Computing Research, a University of California Institute located at LLNL, is pursuing research in active computer vision and related biologically motivated computational models. We are currently seeking excellent candidates for a postdoctoral position that will include research in image processing, neural networks and computer vision. The successful candidate will aid in the development of computer vision software and hardware for collaborations both inside and outside LLNL. There will be opportunities for interaction and collaboration with interdisciplinary scientists in the academic community and with private industry. ISCR staff are expected to perform research resulting in publications in leading computational and related journals and to participate fully in the Institute, including its seminar series and research activities. LOCATION: LLNL is located in the San Francisco Bay Area. Coastal and mountain recreational areas are abundant nearby. The ISCR has collaborations with nearby universities, including UC Berkeley, Stanford, and UC Davis. ESSENTIAL SKILLS, KNOWLEDGE, AND ABILITIES: Candidates must have a demonstrated ability to: - Identify complex problems and solve them in a creative and timely manner - Carry out independent research related to the computer vision efforts at the ISCR - Communicate clearly in both oral and written form. Candidates must possess a recent Ph.D. in Physics, Mathematics, Computer Science, or Engineering with a research background in Signal Processing, Image Processing and Neural Networks. Candidates must have excellent programming skills, with demonstrated experience in C and C++, familiarity with different compilers, standardization efforts, object-oriented design and creation of program packages. Experience in the following areas is a plus: - Real-time systems - Datacube programming - COSE, CORBA - Parallel program design - Efficient Modeling of Biological Vision Systems - Hardware design (small PCB design, wrapping, etc.) - Robotics - Algorithms for nonlinear signal processing (fractal compression, higher-order correlations, etc.) - Databases. LENGTH OF EMPLOYMENT: 1 year (with possible renewal for 3 years total) SALARY: In the range $46,000-54,000 per annum depending on experience and qualifications. FOR FURTHER INFORMATION, PLEASE CONTACT: Martin Lades (hml at llnl.gov) or Paul Skokowski (paulsko at llnl.gov) (ph.) (510) 422-7132 (fax) (510) 422-7819 Application Deadline: February 1, 1995

From vg197 at neutrino.pnl.gov Tue Jan 24 01:22:41 1995 From: vg197 at neutrino.pnl.gov (Sherif Hashem) Date: Mon, 23 Jan 1995 22:22:41 -0800 (PST) Subject: Workshop Announcement and CFP Message-ID: <9501240622.AA19562@neutrino.pnl.gov> WORKSHOP ANNOUNCEMENT AND CALL FOR PARTICIPATION ************************************************* (Abstract submission deadline: February 10, 1995) WORKSHOP ON ENVIRONMENTAL AND ENERGY APPLICATIONS OF NEURAL NETWORKS Battelle Auditorium, Richland, Washington March 30-31, 1995 The Environmental Molecular Sciences Laboratory (EMSL), Pacific Northwest Laboratory (PNL), and the Richland Section of the Institute of Electrical and Electronics Engineers (IEEE) are sponsoring a workshop to bring together scientists and engineers interested in investigating environmental and energy applications of artificial neural networks (ANNs). Objectives: ----------- The main objectives of this workshop are: * to provide a forum for presenting and discussing environmental and energy applications of neural networks. * to serve as a means for investigating the potential uses of neural networks in the U.S. Department of Energy's environmental cleanup efforts and energy programs. * to promote collaboration between researchers in national laboratories, academia, and industry to solve real-world problems. Topics: ------- * Environmental applications (modeling and predicting land, air, and water pollution; environmental sensing; spectroscopy; hazardous waste handling and cleanup). * Energy applications (environmental monitoring for power systems, modeling and control of power plants, power load forecasting, fault location and diagnosis of power systems).
* Commercial and industrial applications (environmental, economic, and financial time series analyses and forecasting; chemical process modeling and control). * Medical applications (analysis of environmental health effects, modeling biological systems, medical image analysis, and medical diagnosis). Who should attend? ------------------ This workshop should be of interest to researchers, developers, and practitioners applying ANNs in energy and environmental sciences and engineering, as well as scientists and engineers who see some potential for the application of ANNs to their work. Dates: ------ The workshop will be held on March 30-31, 1995, from 8:00 am to 5:00 pm. An introductory tutorial on neural networks will be offered on March 29, 1995, and is recommended for participants who are new to neural networks. Deadline for contributed presentations: Abstracts are due by February 10, 1995. Notification of acceptance will be mailed by: February 24, 1995. Cost: ----- The registration fee is $120 ($75 for students). Early registration by March 1, 1995, is $100 ($50 for students). For More Information, Contact: ------------------------------ Dr. Sherif Hashem Environmental Molecular Sciences Laboratory Pacific Northwest Laboratory P.O. Box 999, M/S K1-87 Richland, WA 99352 Telephone: 509-375-6995 Fax.: 509-375-6631 Internet: s_hashem at pnl.gov World Wide Web URL: http://www.emsl.pnl.gov:2080/people/bionames/hashem_s.html Also see the workshop's homepage on the World Wide Web at URL: http://www.emsl.pnl.gov:2080/docs/cie/neural/workshop2/homepage.html ____________________________________________________________________________ REGISTRATION FORM Name: ____________________________ Address: ____________________________ ____________________________ ____________________________ ____________________________ Telephone: ____________________________ Fax: ____________________________ E-mail: ____________________________ [ ] I am interested in attending the neural network tutorial (no additional fee is required). [ ] I am interested in a bus tour of the Hanford Site (a Department of Energy site located north of Richland, Washington). Registration Fee: ----------------- Regular: $100 ($120 after March 1, 1995). Student: $50 ($75 after March 1, 1995). Please make your check payable to Battelle. Mail the completed form and check to: Janice Gunter WEEANN Registration Pacific Northwest Laboratory P.O. Box 999, M/S K1-87 Richland, WA 99352 ____________________________________________________________________________ ********************************************************************************** The Pacific Northwest Laboratory (PNL) is a multiprogram national laboratory operated for the U.S. Department of Energy by the Battelle Memorial Institute. To provide the basic research needed to meet environmental cleanup goals, the Environmental Molecular Sciences Laboratory (EMSL) is being constructed at PNL. The prime mission of the EMSL and its associated research programs is to advance scientific knowledge in support of the long-term mission of the U.S. Department of Energy in environmental restoration and waste management. ********************************************************************************** =================================================================== Pacific Northwest Laboratory E-mail: s_hashem at pnl.gov 906 Battelle Boulevard Tel. (509) 375-6995 P.O. Box 999, MSIN K1-87 Fax.
(509) 375-6631 Richland, WA 99352 USA ===================================================================  From reza at bme.jhu.edu Tue Jan 24 09:01:34 1995 From: reza at bme.jhu.edu (Reza Shadmehr) Date: Tue, 24 Jan 95 09:01:34 EST Subject: Papers on human motor memory Message-ID: Hello, The following two papers will appear in the upcoming NIPS proceedings. They deal with some of the properties of human motor memory, including interference and forgetting. I've included ftp instructions. best wishes, Reza Shadmehr reza at bme.jhu.edu ---------------------------------------------------------------------- Interference in learning internal models of inverse dynamics in humans Reza Shadmehr, Tom Brashers-Krug, Ferdinando Mussa-Ivaldi Abstract: Experiments were performed to reveal some of the computational properties of the human motor memory system. We show that as humans practice reaching movements while interacting with a novel mechanical environment, they learn an internal model of the inverse dynamics of that environment. Subjects show recall of this model at testing sessions 24 hours after the initial practice. The representation of the internal model in memory is such that there is interference when there is an attempt to learn a new inverse dynamics map immediately after an anticorrelated mapping was learned. We suggest that this interference is an indication that the same computational elements used to encode the first inverse dynamics map are being used to learn the second mapping. We predict that this leads to a forgetting of the initially learned skill. anonymous ftp to: ftp.ai.mit.edu filename: pub/users/reza/nips95a.ps.Z --------------------------------------------------------------------- Catastrophic interference in human motor memory Tom Brashers-Krug, Reza Shadmehr, Emanuel Todorov Abstract: Biological sensorimotor systems are not static maps that transform input (sensory information) into output (motor behavior). Evidence from many lines of research suggests that their representations are plastic, experience-dependent entities. While this plasticity is essential for flexible behavior, it presents the nervous system with difficult organizational challenges. If the sensorimotor system adapts itself to perform well under one set of circumstances, will it then perform poorly when placed in an environment with different demands (negative transfer)? Will a later experience-dependent change undo the benefits of previous learning (catastrophic interference)? We explore the first question in a separate paper in this volume (Shadmehr et al. 1995). Here we present psychophysical and computational results that explore the question of catastrophic interference in the context of a dynamic motor learning task. Under some conditions, subjects show evidence of catastrophic interference. Under other conditions, however, subjects appear to be immune to its effects. These results suggest that motor learning can undergo a process of consolidation. Modular neural networks are well suited for the demands of learning multiple input/output mappings. By incorporating the notion of fast- and slow-changing connections into a modular architecture, we were able to account for the psychophysical results. 
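A toy version of the interference effect both abstracts describe fits in a few lines: a single network -- here just a linear map, with the "anticorrelated" second task taken to be the negated map, all invented for illustration and nothing like the authors' experiments -- learns task A, then task B, and in doing so overwrites A.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))          # task A: learn the map x -> A @ x
B = -A                               # task B: an anticorrelated map
W = np.zeros((2, 2))                 # one shared set of computational elements

def train(target, steps=400, lr=0.1):
    global W
    for _ in range(steps):
        x = rng.normal(size=2)
        W -= lr * np.outer((W - target) @ x, x)   # gradient step on squared error

def err(target):
    return np.linalg.norm(W - target)

train(A); print("after learning A:  error on A = %.3f" % err(A))
train(B); print("after learning B:  error on A = %.3f, error on B = %.3f" % (err(A), err(B)))

Consolidation, in the second abstract's sense, corresponds to making part of what the weights have learned resistant to later fast updates -- for instance by splitting them into fast- and slow-changing components.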
anonymous ftp to: ftp.ai.mit.edu filename: pub/users/reza/nips95b.ps.Z  From marks at u.washington.edu Tue Jan 24 13:53:24 1995 From: marks at u.washington.edu (Robert Marks) Date: Tue, 24 Jan 95 10:53:24 -0800 Subject: 1995 ISCAS (Seattle) Message-ID: <9501241853.AA26645@carson.u.washington.edu> SEVENTEEN full sessions on Neural Networks Tutorials on Cellular Neural Networks (Download program for details) ISCAS'95 1995 IEEE International Symposium on Circuits and Systems April 30 - May 3, 1995 Sheraton Seattle Hotel and Towers Seattle, Washington USA ----------------------------------------------------------------- |?S"?SSSSSSSSSSSSSSSSSSSSSSSSS"? ""SS" ?SSSSS! `SSSSSSSSSS?| |MM:!MMMMMMMMMMMMMMMMMMMMMMM* fx n H MMMMMM MMMMMMMMMMM?| | MMMM : MMMMMMMMMMMMMMM: : ?MMh MMMMMMMMMMMMM!.MMMM`| | "~ *MMMMMMMMMMMMMM :~f "MMk "MMMMMMMMMMMMM Mf | | ?M : ~ M MM .MMMMMMM!~ ~ | | ::! : HM: .MM. MMMM~ | | u:f.oNNNNu xbi NN: !NNiNNNNNN!Nb: H | | $$ $$$$!xN$$$$$k !$$$$$$$ ~'! | |' ~ x$ 1995 ISCAS ->X < h$$$$$$$$$ ~~< | | <'# RRRRRRRRRRRRRRRRH MRRRRRRRRRRR ~"#x: ~ : | | MRRRRRRRRRRRRRRRRRRRRM MRRRRRRRRRRR~ | | ~:M?MMMMMMMMMMMMHxMMMMMMMMk !:""MMMMMMMM ~ MM :| |8k:u8*888888888888888888888888888i:' !888888W ` ~ :8888x9X| |8N 8 :xN"*NNNNNNNNNNNNNNNNNNNNNNNN : *NNNNNNNN: ` dNNNNNNNH| |$$$$$N$F f$$$$$$$$$$$$$$$$$$$$$$$$N ~ $$$$$$$$R < $"$$$$$$M| |$$$$P "$$$$$$$$$$$$$$$$$$$$$$$$<~ $$$$$$$$$$ :~ $ @$$$$$$M| |$$$$$ $$$$$$$$$$$$$$$$$$$$$$$F 4$$$$$$$$$$$$ $$$$$$$$$$R| |R$$$$$$$$@"$$$R:$$$$$$$$$$$$$$$$$$$< t$$$$$$$$$$$$$$$$$$$$$$$$$$M| |RRRRRRRRRRRRRRMRRRRRRRRRRRRRRRRRRRR< RRRRRRRRRRRRRRRRRRRRRRRRRRRM| |W88888888888888888888888888888888888o888888888888888888888888888W| ----------------------------------------------------------------- April 30 - May 3, 1995 Sheraton Seattle Hotel and Towers Seattle, Washington USA Sponsored by THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS CIRCUITS AND SYSTEMS SOCIETY USEFUL CONTACTS ISCAS'95 Secretariat Meeting Management 2603 Main Street, Suite 690 Irvine, CA 92714 USA Tel: (714)752-8205 Fax: (714)752-7444 Email: 74710.2266 at compuserve.com Symposium Venue and Conference Hotel Sheraton Seattle Hotel and Towers 400 Sixth Avenue Seattle, Washington 98101 Tel: (206)621-9000 Fax: (206)621-8441 For Conference Program/Information For those with WWW Access: You can use MOSAIC and access URL site http://www.ee.washington.edu/iscas95.html With ftp access: unix> ftp compton.ee.washington.edu (or ftp 128.95.42.191) Name: anonymous Password: ftp> cd pub/iscas95 ftp> get read.me ***list of all possible files*** ftp> get advprog ftp> get regforms ftp> get visainfo ftp> bye With e-mail but not ftp: send email to: iscas95 at ee.washington.edu (an automatic system e-mail reply will send back information) AIRLINE TRANSPORTATION World Wise Travel Services 1219 Westlake Ave N. 
Suite 107 Seattle, WA 98109 (800)217-9527 Phone (206)217-0062 Fax Seattle Sightseeing Tours: Convention Services Northwest 1809 7th Ave, 1414 Tower Bldg Seattle, WA 98101 (206)292-9198, FAX:(206)292-0559

From lksaul at psyche.mit.edu Tue Jan 24 13:58:11 1995 From: lksaul at psyche.mit.edu (Lawrence Saul) Date: Tue, 24 Jan 95 13:58:11 EST Subject: paper announcement Message-ID: <9501241858.AA03473@psyche.mit.edu> ------------------------------------------------------------------------ FTP-host: psyche.mit.edu FTP-file: pub/lksaul/boltzmann.chains.ps.Z ------------------------------------------------------------------------ The following paper is now available by anonymous ftp: Boltzmann Chains and Hidden Markov Models [8 pages] Lawrence K. Saul and Michael I. Jordan Center for Biological and Computational Learning Massachusetts Institute of Technology 79 Amherst Street, E10-243 Cambridge, MA 02139 Abstract: We propose a statistical mechanical framework for the modeling of discrete time series. Maximum likelihood estimation is done via Boltzmann learning in one-dimensional networks with tied weights. We call these networks Boltzmann chains and show that they contain hidden Markov models (HMMs) as a special case. Our framework also motivates new architectures that address particular shortcomings of HMMs. We look at two such architectures: parallel chains that model feature sets with disparate time scales, and looped networks that model long-term dependencies between hidden states. For these networks, we show how to implement the Boltzmann learning rule exactly, in polynomial time, without resort to simulated or mean-field annealing. The necessary computations are done by exact decimation procedures from statistical mechanics. *** To appear in the NIPS 1994 Proceedings.
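The HMM connection can be made concrete: the likelihood of an observation sequence is a product of per-step matrices, and identifying transition and emission probabilities with Boltzmann factors exp(weight) turns that product into the partition function of a one-dimensional network with tied weights. Below is the standard forward computation in numpy -- textbook HMM machinery with random placeholder numbers, not the authors' code.

import numpy as np

K, T, V = 3, 6, 4                            # hidden states, sequence length, alphabet size
rng = np.random.default_rng(0)
pi = np.full(K, 1.0 / K)                     # initial state distribution
trans = rng.dirichlet(np.ones(K), size=K)    # K x K transition matrix, rows sum to 1
emit = rng.dirichlet(np.ones(V), size=K)     # K x V emission probabilities
obs = rng.integers(0, V, size=T)             # an arbitrary observation sequence

# Forward pass: alpha[j] = P(obs[0..t], state_t = j).  Each step contracts away
# one chain variable -- the decimation the abstract refers to.
alpha = pi * emit[:, obs[0]]
for t in range(1, T):
    alpha = (alpha @ trans) * emit[:, obs[t]]
print("P(observation sequence) =", alpha.sum())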
From ronnyk at CS.Stanford.EDU Tue Jan 24 00:08:14 1995 From: ronnyk at CS.Stanford.EDU (Ronny Kohavi) Date: Tue, 24 Jan 1995 13:08:14 +0800 Subject: MLC++ utilities version 1.1 Message-ID: <9501242108.AA15481@starry.Stanford.EDU> [ Summary paragraph moved to beginning, for clarity. -- The Moderator ] MLC++ is a Machine Learning library of C++ classes being developed at Stanford. More information about the library can be obtained at URL http://robotics.stanford.edu:/users/ronnyk/mlc.html. The utilities are available by anonymous ftp to starry.stanford.edu:pub/ronnyk/mlc/util. They are currently given only in object code for Sun, but source code will be distributed in the future or to sites that wish to attempt a port of the code to other compilers. MLC++ Utilities 1.1 ___________________ Since the release of MLC++ utilities 1.0 in December 1994, over 40 sites have copied the utilities. We are now releasing version 1.1. New features include: *. Options now prompt for values with help to explain the option values. Options are divided into common options and "nuisance" options, which users should not change often (especially first-time users). *. New inducers include Naive-Bayes and 1R (Holte). *. The nearest-neighbor (IB) inducer has many new options. It supports nominals, interquartile normalization (as opposed to extreme), voting of neighbors, k distances (as opposed to k neighbors), and more. *. A new utility, discretize, is available to discretize continuous features. Either binning or Holte's discretization can be used. *. Estimated performance on a test set now gives a confidence bound assuming an i.i.d. sample (details in the manual). People are often surprised by how wide the interval is for some of the toy datasets. *. Confusion matrices can be displayed for MLC++ inducers. *. The tree induced by ID3 can be displayed under X-windows, showing the distribution and entropy at each node. (This option requires that you install dotty from AT&T, which is free for research.) *. The learning curve gives an honest estimate of error by testing only on the unseen instances. The accuracy reports for regular induction also report memorization accuracy and generalization accuracy separately (following Schaffer and Wolpert's recent papers). -- Ronny Kohavi (ronnyk at CS.Stanford.EDU, http://robotics.stanford.edu/~ronnyk)

From mike at PARK.BU.EDU Tue Jan 24 18:22:15 1995 From: mike at PARK.BU.EDU (mike@PARK.BU.EDU) Date: Tue, 24 Jan 1995 18:22:15 -0500 Subject: Boston University CAS/CNS Spring Seminar Announcement Message-ID: <199501242322.SAA03941@space.bu.edu> Spring 1995 Colloquium Series CENTER FOR ADAPTIVE SYSTEMS AND DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS BOSTON UNIVERSITY January 27 APPLICATIONS OF NEURAL NETWORKS TO TELECOMMUNICATIONS Dr. Joshua Alspector, Neural Network Research Group, Bellcore February 3 NETWORKS THAT LEARN AND HOW THE BRAIN SHOULD WORK Professor Tomaso Poggio, Department of Brain and Cognitive Sciences, MIT February 10 PLANNING AND LEARNING USING MARKOV DECISION MODELS Professor Leslie Pack Kaelbling, Department of Computer Science, Brown University February 24 A DYNAMIC MODEL OF DISCONTINUITIES IN COGNITIVE AND BRAIN DEVELOPMENT Professor Kurt Fischer, Human Development and Psychology, Harvard University March 3 BAYESIAN LEARNING IN MODULAR AND HIERARCHICAL NEURAL NETWORKS Professor Michael Jordan, Department of Brain and Cognitive Sciences, MIT March 17 VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION Speakers: B. Julesz, K. Nakayama, S. Grossberg, P. Cavanagh, V.S. Ramachandran, E. Thompson, D. Dennett. Time: 8:30 A.M. - 6:00 P.M. Place: GSU Conference Auditorium, 2nd floor, 775 Commonwealth Avenue March 24 ON THE GEOMETRY OF PERCEIVED SPACE Professor James Todd, Department of Psychology, Ohio State University April 7 HOW DOES THE BRAIN GENERATE SENSORY-MOTOR BEHAVIOR? A COMPUTATIONAL FIELD THEORY FOR CONTROLLING RAPID EYE MOVEMENTS Dr. Lance Optican, Laboratory of Sensorimotor Research, National Eye Institute, NIH April 21 PARALLEL CEREBRAL MEMORY SYSTEMS Dr. Mortimer Mishkin, Laboratory of Neuropsychology, NIMH April 28 STATISTICAL METHODS IN LARGE VOCABULARY CONTINUOUS SPEECH RECOGNITION Larry Gillick, Dragon Systems All Talks (except March 17) on Fridays at 2:30 PM Refreshments at 2:00 PM in Room 101 2 Cummington Street, Boston

From mark at dcs.kcl.ac.uk Wed Jan 25 04:32:25 1995 From: mark at dcs.kcl.ac.uk (Mark Plumbley) Date: Wed, 25 Jan 1995 09:32:25 +0000 Subject: Reader in Mathematics & Neural Networks, King's College London Message-ID: <9501250932.AA05063@helium.dcs.kcl.ac.uk> KING'S COLLEGE LONDON SCHOOL OF PHYSICAL SCIENCES AND ENGINEERING READER IN MATHEMATICS Applications are invited for the established post of Reader in the Department of Mathematics with effect from 1 September 1995. The successful candidate will have achieved research distinction and have outstanding research potential in mathematics and neural networks, or a mathematically-based discipline (eg theoretical physics, statistics or information processing) and neural networks. Salary will be in the range 29,152 Pounds to 32,667 Pounds per annum inclusive of 2,134 Pounds London Allowance per annum.
Application forms and further particulars may be obtained from the Personnel Officer, School of Physical Sciences and Engineering, King's College London, Strand, London WC2R 2LS, UK, tel. +44 (0)171 873 2427 or email H.Holland at kcl.ac.uk. The closing date for completed applications is 3 March 1995. Please quote reference A4/CM/2/95. Equality of opportunity is College policy. --------------------------------------------------------------------------- Dr. Mark D. Plumbley mark at dcs.kcl.ac.uk |_/ I N G'S Centre for Neural Networks | \ College Department of Electronic & Electrical Engineering L O N D O N King's College London Strand/London WC2R 2LS/UK Founded1829 Tel +44 (0)171 873 2241 Fax +44 (0)171 873 2851 ---------------------------------------------------------------------------  From keithm at PARK.BU.EDU Wed Jan 25 09:35:34 1995 From: keithm at PARK.BU.EDU (Keith McDuffee) Date: Wed, 25 Jan 1995 10:35:34 -0400 Subject: 1995 WORLD CONGRESS ON NEURAL NETS MEETING Message-ID: REVISED CALL FOR PAPERS WORLD CONGRESS ON NEURAL NETWORKS 1995 ANNUAL MEETING OF THE INTERNATIONL NEURAL NETWORKS SOCIETY JULY 17-21, 1995 RENAISSANCE HOTEL/WASHINGTON, DC SPECIAL FEATURES: March 1 Due Date, Reduced registration fee, both CD-ROM and paper proceedings. Four-page papers are due by 1 March 1995. Note the change in deadline date. Authors must submit registration payment with papers to be eligible for the early registration fee. A $35 publication fee must accompany each submission that the conference committee will refund if it rejects the paper. The $35 publication fee helps defray the Proceedings cost and allows the conference committee to offer a lower registration fee. The 1995 registration-plus- publication fee of $205 is comparable to the 1994 registration fee. This service has been provided to make the meeting more affordable for attendees who do not plan to have published articles in the proceedings. Please make checks payable to INNS and include with submitted paper. For review purposes, please submit six (6) copies (1 original, 5 copies) plus 3 1/2" disk (see instructions below), four page limit, in English. $20 per page for papers exceeding (4) pages (do not number pages). Checks for over length charges should be made out to INNS and must be included with submitted paper. Papers must be on 8 1/2" x 11" white paper with 1" margins on all sides, one column format, single spaced, in Times or similar type style of 10 points or larger, one side of paper only. FAX's not acceptable. Centered at top of first page should be complete title, author name(s), affiliation(s), and mailing address(es), followed by blank space abstract (up to 15 lines), and text. The following information MUST be included in an accompanying cover letter in order for the paper to be reviewed: Full title of paper, corresponding author and presenting author name, address, telephone and fax numbers. Technical Session (see session topics) 1st and 2nd choices, oral or poster presentation preferred, audio-visual requirements (for oral presentations only). Papers submitted which do not meet these requirements or for which insufficient funds are submitted will be returned. For the first time, the proceedings of the 1995 World Congress on Neural Networks will be distributed on CD-ROM. The CD-ROM Proceedings are included in your registration fee. Accepted papers will appear in BOTH CD-ROM and paper Proceedings format. 
Format a 3 1/2" disk for CD-ROM: Once paper is proofed, completed and printed for review, reformat the paper in Landscape format, page size 8" x 5" for CD. You may include a separate file with 1 paragraph biographical information with your name, company, address and telephone number. Presenters should submit their papers in one of the following Macintosh or Microsoft Windows formats: Microsoft Word, WordPerfect, FrameMaker, Quark or Quark Professional, PageMaker, Persuasion, ASCII, PowerPooint, Adobe.PDF, Postscript (text, not EPS). Images can be submitted in TIF or PCX format. If submitting a previously unpublished paper, author agrees to the transfer of the copyright to INNS for the conference proceedings. All submitted papers become the property of INNS. Papers and disk to be sent to: WCNN'95, 875 Kings Highway, Suite 200, Woodbury, New Jersey 08096-3172; Tel: 609-845-1720, Fax: 609-853-0411, e-mail: 74577.504 at compuserve.com. Registration Fees: Category Pre-registration Pre-registration On-Site prior to prior to March 1, 1995 June 16, 1995 INNS Member $170.00 $250.00 $350.00 Non-member** $270.00 $380.00 $480.00 Student*** $ 85.00 $110.00 $135.00 **Registration fee includes 1995 membership and a one (1) year subscription to the Journal Neural Networks. ***Student registration must be accompanied by a letter of verification from department chairperson. Any student registration received with no verification letter will be processed at the higher member or non-member fee, depending on current membership status. Copies of student identification cards are NOT acceptable. This also applies to on-site registration. ORGANIZING COMMITTEE John G. Taylor, General Chair Walter J. Freeman Harold Szu Rolf Eckmiller Shun-ichi Amari David Casasent INNS OFFICERS GOVERNING BOARD President: John G. Taylor Shun-ichi Amari President-Elect: Shun-ichi Amari Daniel Alkon Past President: Walter J. Freeman James A. Anderson Secretary: Gail Carpenter Daniel Levine Treasurer: Judith Dayhoff David Casasent Executive Director: R. K. Talley Leon Cooper Rolf Eckmiller Francoise Fogelman-Soulie Kunihiko Fukushima Stephen Grossberg Christof Koch Bart Kosko Christoph von der Malsburg Alianna Maren Paul Werbos Bernard Widrow Lotfi A. Zadeh PROGRAM COMMITTEE Shun-ichi Amari James A. Anderson Kaveh Ashenayi Etienne Barnard Andrew R. Barron Andrew Barto Theodore Berger Horacio Bouzas Artie Briggs Gail Carpenter David Casasent Ralph Castain Huishung Chi Leon Cooper Judith Dayhoff Nick DeClaris Rolf Eckmiller Jeff Elman Terrence L. Fine Gary Fleming Francoise Fogelman-Soulie Walter J. Freeman Kunihiko Fukushima Apostolos Georgopoulos Stephen Grossberg John B. Hampshire II Michael Hasselmo Robert Hecht-Nielsen Akira Iwata Jari Kangas Bert Kappen Christof Koch Teuvo Kohonen Kenneth Kreutz-Delgado Clifford Lau Soo-Young Lee George Lendaris Sam Leven Daniel S. Levine William B. Levy Christof von der Malsburg Alianna Maren Lina Massone Lance Optican Robert Pap Richard Peterson Paul Refenes Mohammed Sayeh Madam G. Singh Dejan Sobajic Jeffrey Sutton Harold Szu John G. Taylor Brian Telfer Shiro Usui Andreas Weigand Paul Werbos Hal White Bernard Widrow Daniel Wolpert Mona E. Zaghloul PLENARY SPEAKERS: Daniel L. Alkon, U.S. National Institutes of Health Shun-ichi Amari, University of Tokyo Gail Carpenter, Boston University Walter J. Freeman, University of California, Berkeley Teuvo Kohonen, Helsinki University of Technology Harold Szu, Naval Surface Warfare Center John G. Taylor, King's College London SESSION TOPICS AND CHAIRS: 1. 
1. Biological Vision: Rolf Eckmiller, Shiro Usui 2. Machine Vision: Kunihiko Fukushima, Robert Hecht-Nielsen 3. Speech and Language: Jeff Elman, Richard Peterson 4. Biological Sensory-Motor Control: Andrew Barto, Lina Massone 5. Neurocontrol and Robotics: Paul Werbos, Kaveh Ashenayi 6. Supervised Learning: Andrew R. Barron, Terrence L. Fine, Soo-Young Lee 7. Unsupervised Learning: Teuvo Kohonen, Francoise Fogelman-Soulie 8. Pattern Recognition: David Casasent, Brian Telfer 9. Prediction and System Identification: John G. Taylor, Paul Werbos 10. Cognitive Neuroscience: James Anderson, Jeffrey Sutton 11. Links to Cognitive Science & Artificial Intelligence: Alianna Maren, George Lendaris 12. Signal Processing: Bernard Widrow, Horacio Bouzas 13. Neurodynamics and Chaos: Harold Szu, Mona E. Zaghloul, DeLiang Wang 14. Hardware Implementation: Clifford Lau, Ralph Castain, Mohammad Sayeh 15. Associative Memory: Christoph von der Malsburg, Gary Fleming, Huisheng Chi 16. Applications: Leon Cooper, Robert Pap, Dejan Sobajic 17. Circuits and Systems Neuroscience: Stephen Grossberg, Lance Optican 18. Mathematical Foundations: Shun-ichi Amari, D.S. Levine 19. Evolutionary Computing, Genetic Algorithms: Judith Dayhoff, Vasant Honavar SHORT COURSES: a. Pattern Recognition and Neural Nets: David Casasent, Carnegie Mellon University b. Modelling Consciousness: John G. Taylor, King's College London c. Neocognitron and the Selective Attention Model: Kunihiko Fukushima, Osaka University d. What are the Differences & the Similarities Among Fuzzy, Neural, & Chaotic Systems: Takeshi Yamakawa, Kyushu Institute of Technology e. Image Processing & Pattern Recognition by Self-Organizing Neural Networks: Stephen Grossberg, Boston University f. Dynamic Neural Networks: Signal Processing & Coding: Judith Dayhoff, University of Maryland g. Language and Speech Processing: Jeff Elman, University of California-San Diego h. Introduction to Statistical Theory of Neural Networks: Shun-ichi Amari, University of Tokyo i. Cognitive Network Computation: James Anderson, Brown University j. Biology-Inspired Neural Networks: From Brain Research to Applications in Technology & Medicine: Rolf Eckmiller, University of Dusseldorf k. Neural Control Systems: Bernard Widrow, Stanford University l. Neural Networks to Advance Intelligent Systems: Alianna Maren, Accurate Automation Corporation m. Reinforcement Learning: Andrew G. Barto, University of Massachusetts n. Advanced Supervised-Learning Algorithms and Applications: Francoise Fogelman-Soulie, SLIGOS o. Neural Network & Statistical Methods for Function Estimation: Vladimir Cherkassky, University of Minnesota p. Adaptive Resonance Theory: Gail A. Carpenter, Boston University q. What Have We Learned from Experiences of Real World Applications in NN/FS/GA?: Hideyuki Takagi, Matsushita Electric Industrial Co., Ltd. r. Fuzzy Function Approximation: Julie A. Dickerson, University of Southern California s. Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs: Lotfi A. Zadeh, University of California-Berkeley t. Overview of Neuroengineering and Supervised Learning: Paul Werbos, National Science Foundation INDUSTRIAL ENTERPRISE DAY: Monday, July 17, 1995 Enterprise Session: Chair: Robert Hecht-Nielsen, HNC, Inc. Industrial Session: Chair: Takeshi Yamakawa, Kyushu Institute of Technology FUZZY NEURAL NETWORKS: Tuesday, July 18, 1995 Wednesday, July 19, 1995 Co-Chairs: Bart Kosko, University of Southern California; Ronald R. Yager, Iona College
SPECIAL SESSIONS: Neural Network Applications in the Electrical Utility Industry Biomedical Applications & Imaging/Computer Aided Diagnosis in Medical Imaging Statistics and Neural Networks Dynamical Systems in Financial Engineering Mind, Brain and Consciousness Physics and Neural Networks Biological Neural Networks To obtain additional information (complete registration brochure, registration and hotel forms) contact WCNN'95, 875 Kings Highway, Suite 200, Woodbury, New Jersey 08096-3172 USA, Tele: (609)845-1720; Fax: (609)853-0411; e-mail: 74577.504 at compuserve.com

From jb at informatik.uni-bonn.de Thu Jan 26 00:20:10 1995 From: jb at informatik.uni-bonn.de (jb@informatik.uni-bonn.de) Date: Wed, 25 Jan 95 19:20:10 -1000 Subject: paper announcement Message-ID: <9501251822.AA05025@olymp.informatik.uni-bonn.de> The following papers are available by anonymous ftp: ------------------------------------------------------------------------ FTP-host: atlas.cs.uni-bonn.de (131.220.10.29) FTP-file: pub/papers/hofmann.nips94.ps.gz ------------------------------------------------------------------------ Multidimensional Scaling and Data Clustering T. Hofmann and J. Buhmann Rheinische Friedrich-Wilhelms-Universitaet Institut fuer Informatik III Roemerstrasse 164 D-53117 Bonn, Germany Abstract: Visualizing and structuring pairwise dissimilarity data are difficult combinatorial optimization problems known as "multidimensional scaling" or "pairwise data clustering". Algorithms for embedding a dissimilarity data set in a Euclidean space, for clustering these data, and for actively selecting data to support the clustering process are discussed in the maximum entropy framework. Active data selection provides a strategy to discover structure in a data set efficiently with partially unknown data. ------------------------------------------------------------------------ FTP-host: atlas.cs.uni-bonn.de (131.220.10.29) FTP-file: pub/papers/buhmann.icpr94.ps.gz ------------------------------------------------------------------------ A Maximum Entropy Approach to Pairwise Data Clustering Abstract: Partitioning a set of data points which are characterized by their mutual dissimilarities instead of an explicit coordinate representation is a difficult, NP-hard combinatorial optimization problem. We formulate this optimization problem of a pairwise clustering cost function in the maximum entropy framework using a variational principle to derive corresponding data partitionings in a d-dimensional Euclidean space. This approximation solves the embedding problem and the grouping of these data into clusters simultaneously and in a self-consistent fashion.
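For orientation, the baseline behind these papers can be stated in a dozen lines: classical (Torgerson) scaling, which embeds items in a Euclidean space from pairwise dissimilarities alone, via double-centering and an eigendecomposition. The numpy sketch below is that textbook baseline only -- not the maximum-entropy method of the papers -- with synthetic data standing in for real dissimilarities.

import numpy as np

rng = np.random.default_rng(0)
X_true = rng.normal(size=(10, 2))                                  # hidden configuration
D = np.linalg.norm(X_true[:, None] - X_true[None, :], axis=-1)     # observed dissimilarities

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
Bmat = -0.5 * J @ (D ** 2) @ J               # double-centered squared dissimilarities
vals, vecs = np.linalg.eigh(Bmat)            # eigenvalues come back in ascending order
top = np.argsort(vals)[::-1][:2]             # keep the two largest eigenpairs
X_hat = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

D_hat = np.linalg.norm(X_hat[:, None] - X_hat[None, :], axis=-1)
print("max reconstruction error of the dissimilarities:", np.abs(D - D_hat).max())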
(but if you work in neural networks it would help to also send a copy to Geoff Hinton at the same address. If your application is a few days late and you are a strong candidate I'll try to make sure you get considered. The phone number for courier packages is 416-978-3707, but please don't call). In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents of Canada. (Because of NAFTA, residents or citizens of the USA or Mexico also have a chance, but others do not.) In accordance with its Employment Equity Policy, the University of Toronto encourages applications from qualified women or men, members of visible minorities, aboriginal peoples, and persons with disabilities.

From isabelle at research.att.com Thu Jan 26 16:16:40 1995 From: isabelle at research.att.com (Isabelle Guyon) Date: Thu, 26 Jan 95 16:16:40 EST Subject: No subject Message-ID: <9501262114.AA03389@big.info.att.com>
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
< ICANN industrial 1 day workshop: >
< Neural network applications >
< to DOCUMENT ANALYSIS and RECOGNITION >
< Paris, October 1995 >
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
* Layout and logical structure analysis of documents.
* Map drawing and understanding.
* OCR and handwriting recognition (off-line and on-line).
* Multimedia document processing.
* Image/text retrieval, automated indexing.
* User interfaces to electronic libraries.
* Image/text compression.
This workshop will be a forum for application researchers and developers to present their systems and discuss taboo subjects, including:
- hybrid solutions,
- solutions that work (don't know why),
- solutions that do not work (though theoretically optimum),
- hacks, tricks and miscellaneous occult methods,
- marketing advantage/disadvantage of saying that there is a NN in the system.
The condition of acceptance will not be the novelty of the algorithms but the existence of a working "money making" application or at least a working prototype with a path towards industrialization. The performance of the system should be measured quantitatively, preferably using known benchmarks or comparisons with other systems. As for regular scientific papers, every statement should be properly supported by experimental evidence, statistics or references. Demonstrations and videos are encouraged. *** Submission deadline of a 6-page paper = March 20, 1995 *** Send 4 paper copies to: Isabelle Guyon, AT&T Bell Laboratories, 955 Creston Road, Berkeley, CA 94708, USA. Electronic formats available at: ftp lix.polytechnique.fr login: anonymous password: your e-mail address ftp> cd /pub/ICANN95/out For more information on ICANN write to isabelle at research.att.com.

From ngr at atlas.ex.ac.uk Thu Jan 26 13:25:34 1995 From: ngr at atlas.ex.ac.uk (ngr@atlas.ex.ac.uk) Date: Thu, 26 Jan 95 18:25:34 GMT Subject: Special Issue of Connection Science Message-ID: <27088.9501261825@elnath.dcs.exeter.ac.uk> Please could you distribute the following outline of the current special issue of Connection Science. Thanks, Niall Griffith.
-----------------------------------------------------------
Special Issue of Connection Science on Music and Creativity
-----------------------------------------------------------
We thought people would like to know that a new collection of work on connectionist models of musical cognition and artistic creativity has appeared in print this month.
The collection is a double issue of the journal Connection Science, volume 6, nos. 2&3, covering aspects of musical perception, conception, and action, and the generation of visual art. Some of the papers in this double issue are very interesting from a computational point of view as well, beyond their specific application domain. We hope you enjoy the issue and find it useful, and we welcome your comments and updates about further work in this area for future collections such as this. Niall Griffith and Peter Todd (Please note: Single copies of this double issue are available, at a cost of $93.50. A book version of this double issue is also planned for the near future.)
---------------------------------------------------------------------
Niall Griffith, Department of Computer Science, University of Exeter, Prince of Wales Road, Exeter, EX4 4PT UK Email: ngr at dcs.exeter.ac.uk
Peter Todd, Department of Psychology, University of Denver, 2155 S. Race Street, Denver, CO 80208 USA Email: ptodd at psy.du.edu
---------------------------------------------------------------------
Contents of Connection Science 6(2-3), 1994:
0. Niall Griffith & Peter Todd: Editorial: Process and representation in connectionist models of musical structure
1. Ian Taylor & Mike Greenhough: Modelling pitch perception with adaptive resonance theory artificial networks
2. Niall Griffith: Developing tonal centres and abstract pitch as categorisations of pitch-use
3. Edward Large & John Kolen: Resonance and the perception of musical meter
4. Steven Smoliar: Modelling musical perception: A critical view
5. Michael Page: Modelling the perception of musical sequences with self-organizing neural networks
6. Michael Mozer: Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multiscale processing
7. Matthew Bellgard & C. Tsang: Harmonizing music the Boltzmann way
8. Bruce Katz: An ear for melody
9. Shumeet Baluja, Dean Pomerleau & Todd Jochem: Towards automated artificial evolution for computer generated images
10. Michael Casey: Understanding musical sound with forward models and physical models

From john at dcs.rhbnc.ac.uk Fri Jan 27 10:19:09 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Fri, 27 Jan 95 15:19:09 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199501271519.PAA11903@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): two new reports available
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-001:
----------------------------------------
Worst-Case Analysis of the Bandit Problem by Peter Auer, Technische Universit\"{a}t Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria. Nicol\`o Cesa-Bianchi, DSI, University of Milan, Via Comelico 39, I-20135 Milano, Italy. Abstract: The multi-armed bandit is a classical problem in the area of sequential decision theory and has been studied under a variety of statistical assumptions. In this work we investigate the bandit problem from a purely worst-case standpoint.
We present a randomized algorithm with an expected total reward of $G-O(G^{4/5}K^{6/5})$ (disregarding log factors), where $K$ is the number of arms and $G$ is the (unknown) total reward of the best arm. Our analysis holds with no assumptions whatsoever on the way rewards are generated, other than being independent of the algorithm's randomization. Our results can also be interpreted as a novel extension of the on-line prediction model, an intensively studied framework in learning theory.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-004:
----------------------------------------
Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives by Kurt Hornik, Technische Universit\"at Wien, Maxwell Stinchcombe, University of California, San Diego, Halbert White, University of California, San Diego, Peter Auer, Technische Universit\"at Graz Abstract: Barron (1993) has given rates for hidden layer feedforward networks with sigmoid activation functions approximating a class of functions satisfying a certain smoothness condition. These rates do not depend on the dimension of the input space. We extend Barron's results to feedforward networks with possibly non-sigmoid activation functions approximating mappings and their derivatives simultaneously. Our conditions are similar, though not identical, to Barron's, and we obtain the same rates of approximation, showing that the approximation error decreases at rates as fast as $n^{-\frac{1}{2}}$, where $n$ is the number of hidden units. The dimension of the input space appears only in the constants of our bounds.
-----------------------
The Report NC-TR-95-001 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-95-001.ps.Z ftp> bye % zcat nc-tr-95-001.ps.Z | lpr -l Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. Best wishes, John Shawe-Taylor

From edelman at wisdom.weizmann.AC.IL Fri Jan 27 11:30:59 1995 From: edelman at wisdom.weizmann.AC.IL (Edelman Shimon) Date: Fri, 27 Jan 1995 16:30:59 GMT Subject: 2 new TRs: face recognition, shape similarity Message-ID: <199501271630.QAA23426@lachesis.wisdom.weizmann.ac.il>
----------------------------------------------------------------------
URL: ftp://eris.wisdom.weizmann.ac.il/pub/maria-tr.ps.Z
Maria Lando and Shimon Edelman Generalization from a single view in face recognition We describe a computational model of face recognition, which generalizes from single views of faces by taking advantage of prior experience with other faces, seen under a wider range of viewing conditions. The model represents face images by vectors of activities of graded overlapping receptive fields (RFs). It relies on high spatial frequency information to estimate the viewing conditions, which are then used to normalize (via a transformation specific to faces) and identify the low spatial frequency representation of the input. The class-specific transformation approach allows the model to replicate a series of psychophysical findings on face recognition, and constitutes an advance over current face recognition methods, which are incapable of generalization from a single example.
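The receptive-field representation mentioned above can be made concrete with a toy sketch: a scalar feature value is encoded by the graded activities of overlapping Gaussian receptive fields. The Python below is only an illustration; the centers, width, and one-dimensional setting are hypothetical, not parameters of the Lando & Edelman model:

import numpy as np

# Toy population code: one scalar feature value is represented by the graded
# activities of overlapping Gaussian receptive fields tiling the feature range.
centers = np.linspace(0.0, 1.0, 10)   # hypothetical RF centers
sigma = 0.15                          # hypothetical RF width; wider than the
                                      # spacing, so neighbouring RFs overlap

def rf_activities(value):
    return np.exp(-0.5 * ((value - centers) / sigma) ** 2)

print(np.round(rf_activities(0.42), 3))   # graded, overlapping activity vector

An image-based model applies this kind of coding at many positions and scales; the point is only that nearby feature values produce overlapping, smoothly varying activity vectors.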
22 pages; uncompressed Postscript file size: 3563304 bytes (600dpi) (a shorter, 6-page version is also available, as maria-short.ps.Z)
----------------------------------------------------------------------
URL: ftp://eris.wisdom.weizmann.ac.il/pub/cs-tr-95-01.ps.Z
Florin Cutzu and Shimon Edelman Explorations of shape space Using a small number of prototypical reference objects to span the internal shape representation space has been suggested as a general approach to the problem of object representation in vision (Edelman, Minds and Machines 5, 1995, in press). We have investigated the ability of human subjects to form the low-dimensional metric shape representation space predicted by this approach. In each of a series of experiments, which involved pairwise similarity judgment and delayed match to sample, subjects were confronted with several classes of computer-rendered 3D animal-like shapes, arranged in a complex pattern in a common high-dimensional parameter space. We combined response time and error rate data into a measure of view similarity, and submitted the resulting proximity matrix to nonmetric multidimensional scaling (MDS). In the two-dimensional MDS solution, views of the same shape were invariably clustered together, and, in each experiment, the relative geometrical arrangement of the view clusters of the different objects reflected the true low-dimensional structure in parameter space (star, triangle, square, line) that defined the relationships between the stimulus classes. These findings are now used to guide the development of a detailed computational theory of shape vision based on similarity to prototypes. 33 pages; uncompressed Postscript file size: 3887463 bytes (600dpi)
----------------------------------------------------------------------
Related papers available at URL http://www.wisdom.weizmann.ac.il/~edelman/archive.html Comments are welcome. -Shimon Shimon Edelman E-MAIL: edelman at wisdom.weizmann.ac.il TEL: +972-8-342856 FAX: +972-8-344122 WWW: http://eris.wisdom.weizmann.ac.il/~edelman/shimon.html Dept. of Appl. Math. & CS, Weizmann Institute of Science, Rehovot 76100, ISRAEL

From alpaydin at boun.edu.tr Mon Jan 23 10:14:52 1995 From: alpaydin at boun.edu.tr (Ethem Alpaydin) Date: Mon, 23 Jan 1995 10:14:52 -0500 (EST) Subject: Workshop on Soft Computing Methods for Pattern Recognition Message-ID: Dear colleague, SOCO'95, the "International ICSC Symposium on Soft Computing: Fuzzy Logic, Artificial Neural Networks and Genetic Algorithms", is to be held at the Rochester Institute of Technology, Rochester, New York, USA, October 24-27, 1995. We intend to organize a half-day workshop on "Soft Computing Methods for Pattern Recognition" within SOCO'95. The target of the workshop we are proposing is the cross-fertilization of complementary approaches to pattern recognition based on classical methods, knowledge-based systems, neural networks, fuzzy theory, probabilistic reasoning, genetic algorithms, etc. If you are interested in submitting a paper, please contact us as soon as possible stating the tentative title of your paper and 5 key-words. We will send you more instructions for the 500-word abstract, to be sent by February 28, 1995. Please indicate your postal address and fax on your submission. We can send you more information on SOCO'95 at your request. Looking forward to hearing from you, Sincerely, F. Masulli and E. Alpaydin
------------------------------
Dr. Francesco Masulli, University of Genoa, Department of Physics, Via Dodecaneso 33, 16146 Genova, Italy. Voice: +39 10 353 6297; Fax: +39 10 362 2790; Email: masulli at genova.infn.it
Dr. Ethem Alpaydin, Bogazici University, Department of Computer Engineering, TR-80815 Istanbul, Turkey. Voice: +90 212 263-1540 x 1862; Fax: +90 212 265-8488; Email: alpaydin at boun.edu.tr

From NEUROCOG at vms.cis.pitt.edu Fri Jan 27 16:58:03 1995 From: NEUROCOG at vms.cis.pitt.edu (NEUROCOG@vms.cis.pitt.edu) Date: Fri, 27 Jan 1995 17:58:03 -0400 (EDT) Subject: Functional MRI Conference March 25 San Francisco Message-ID: <01HMCLXMSUVMDA8P0U@vms.cis.pitt.edu> Functional Magnetic Resonance Imaging (fMRI) workshop: How to interpret it How to do it Satellite workshop before Cognitive Neuroscience Saturday, March 25, 1995 Fairmont Hotel San Francisco, California At the conference you will learn: How to interpret functional Magnetic Resonance Imaging (fMRI) data What is needed to do effective fMRI imaging Specific techniques for fMRI imaging Sample protocols and procedures A review of recent fMRI results Conference directors Walter Schneider, University of Pittsburgh G. R. Mangun, University of California, Davis Additional faculty Michael Buonocore, University of California, Davis George Carman, Sloan Center Theor. Neurobiology, Salk Institute BJ Casey, University of Pittsburgh Medical Center Jonathan Cohen, Carnegie Mellon Univ. & Univ. of Pittsburgh Neal Cohen, University of Illinois John Desmond, Stanford University Anders Dale, U. of Oslo & U. of California, San Diego Steve Engel, Stanford University Peter Fox, University of Texas Karl Friston, Hammersmith Hospital Gary Glover, Stanford University James Haxby, National Institute of Mental Health Marty Sereno, University of California, San Diego Steve Small, University of Pittsburgh Medical Center Robert Savoy, Massachusetts Gen. Hosp. NMR Center Leslie Ungerleider, National Institute of Mental Health Robert Weisskoff, Massachusetts Gen. Hosp. NMR Center Program schedule Saturday, March 25, 1995
8:15 Plenary session overview Walter Schneider, G. R. Mangun, Robert Savoy: Perspectives on brain functional imaging and role of fMRI.
What is fMRI, examples of use, brain mapping, limitations of technique, example simulated session
9:15 The physics of fMRI Gary Glover, discussant Robert Weisskoff: How MRI works, hemodynamic response, magnet strength, spatial and temporal resolution, pulse sequences (conventional, spiral, echo-planar), coils (surface, head), pulse programming for fMRI, optimization of parameters: thickness, flip, dealing with artifacts (banding, movement, echoes)
10:15 (15 minute break)
10:30 Experimental design & control Walter Schneider; discussant James Haxby: Stimulus presentation/response collection in the MRI, head constraint, replicability, experimental task design, scan planning for fMRI; signal tradeoffs of space, time, condition sequencing, use of scout fMRI images, co-registration across runs, data management, MRI & experiment synchronization
11:30 Data analysis James Haxby, Karl Friston: Statistical procedures, particle analysis, area measurements, correction for multiple tests, eigenvector spaces, averaging within and between subjects, power spectrum analysis
12:30-1:30 Lunch Break
1:30 Localization within and between subjects George Carman, Anders Dale & Marty Sereno, Peter Fox: Within and between session registration, between subject registration, 3d space coordinates, converting 3d space into 2d maps, co-registration with other modalities
2:30 Scanning in different regions & subjects Sensory processes John Engel: Vision, audition, tactile, motor Language processing Steve Small: Reading, speaking, listening Complex cognitive processing Jonathan Cohen: Memory, emotion Subcortical Neal Cohen: Hippocampus, LGN
3:40 (15 minute break)
3:55 Scanning children & development BJ Casey: Effects of changes in brain size, cortical maturation, age related hemodynamic responsiveness; screening subjects, getting comfortable in magnet, minimizing movement
4:15 Clinical applications John Desmond: Functional mapping and surgery planning
4:45 fMRI in the neuroscience of learning Leslie Ungerleider: the role of fMRI in the neuroscience assessment of cortical plasticity in the transient and enduring effects of learning
5:30 break (15 minutes)
5:45 Administrative & training considerations Walter Schneider, Robert Savoy, round table: Overview, safety considerations, costs, medical staffing, human subjects review, training opportunities
6:15 MRI demonstration stations Simulated "hands-on experience". There will be a series of short presentations around the room, 15 minutes per station. Subject preparation: Bite bar, screening, subject selection, running special patient populations, landmarking, subject running, subject safety. Stimulus presentation: Visual stimuli, auditory stimuli, response collection, heart rate monitoring, example stimulus paradigms (checkerboards, hand squeeze, memory updating, correlational mapping). MRI magnet room: Scan types, structural scanning, MRA, 3D, functional scans, acoustic effects and types of images obtained at each step. Statistical processing of data: Significance of activation, correlation, data visualization, registration, labeling, relating of data
7:15 End of workshop
Location The conference will be at the Fairmont Hotel immediately preceding the 2nd annual meeting of the Cognitive Neuroscience Society (March 26-28) Fairmont Hotel telephone 415-772-5000 or mail to Group Reservations, Fairmont Hotel, 950 Mason Street, San Francisco, CA 94108.
Persons attending the Cognitive Neuroscience Meeting can qualify for the special meeting rate if they reference the Society Related Activities Program. Sponsors: Neural Processes In Cognition Program & Center for the Neural Basis of Cognition: Univ. of Pittsburgh & Carnegie Mellon Univ.; University of California, Davis, Center for Neuroscience & Radiology Department; Center for Advanced MR Technology, Stanford University. Meeting: The Cognitive Neuroscience Society meeting will include symposia on brain development, brain imaging, spatial cognition, computational modeling, imagery, language processing and multiple poster sessions.
------------------------------
fMRI Workshop registration information Registration is limited to 150. Register early to ensure a seat (note: seats are given in order of paid registrations received). Name Address Telephone Email Specialty Registration fee: Before March 1, 1995 Faculty $40.00 Student $30.00 After March 1, 1995 Faculty $50.00 Student $40.00 Make checks payable to: University of Pittsburgh, fMRI workshop Complete form and send to Cathy Rupp, fMRI Workshop, 524 LRDC, University of Pittsburgh, 3939 O'Hara St, Pittsburgh PA 15260 For information Email to: neurocog at vms.cis.pitt.edu

From thimm at idiap.ch Sun Jan 29 05:42:15 1995 From: thimm at idiap.ch (Georg Thimm) Date: Sun, 29 Jan 95 11:42:15 +0100 Subject: Paper available Message-ID: <9501291042.AA15409@idiap.ch> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/thimm.gain.ps.Z 6 pages, compressed file size: 63685 bytes The file thimm.gain.ps.Z is now available for copying from the Neuroprose archive: The Interchangeability of Learning Rate and Gain in Backpropagation Neural Networks G. Thimm, P. Moerland, and E. Fiesler Abstract: The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the non-standard gain of optical sigmoids for optical neural networks. Keywords: neural network, neural computation, neural computing, connectionism, neurocomputing, multilayer neural network, backpropagation, (sigmoid) steepness, gain, slope, temperature, adaptive gain, (steep) activation function, (adaptive) learning rate, initial weight, momentum, flat spot elimination, weight discretization, threshold, bias, optical implementation. Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get thimm.gain.ps.Z ftp> quit unix> zcat thimm.gain.ps.Z |lpr This paper is also available from the URL http://www.idiap.ch/pub/papers/neural/thimm.gain.ps.Z Sorry, no paper copies available.
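The equivalence stated in the abstract is easy to check numerically in the simplest case. For a single logistic unit trained by gradient descent on squared error, a gain of beta with learning rate eta matches a unit-gain network whose initial weights are scaled by beta and whose learning rate is scaled by beta squared; this concrete form is a sketch under those assumptions, not a quotation of the paper's general theorem:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # toy inputs
t = rng.uniform(size=50)              # toy targets

beta, eta = 2.5, 0.1                  # gain and learning rate of network A
w_a = rng.normal(size=3)              # initial weights of network A
w_b = beta * w_a                      # network B: gain 1, weights scaled by beta
eta_b = eta * beta ** 2               # ... and learning rate scaled by beta^2

for x, target in zip(X, t):
    y_a = sigmoid(beta * (w_a @ x))   # network A: gain beta
    w_a -= eta * (y_a - target) * y_a * (1.0 - y_a) * beta * x
    y_b = sigmoid(w_b @ x)            # network B: gain 1
    w_b -= eta_b * (y_b - target) * y_b * (1.0 - y_b) * x

print(np.allclose(w_b, beta * w_a))   # True: the trajectories coincide

The two updates trace the same weight trajectory up to the fixed factor beta, which is the sense in which the gain can be eliminated as a parameter.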
Regards, Georg Thimm
--------------------------------------------------------------
Georg Thimm, Institut Dalle Molle d'Intelligence Artificielle Perceptive (IDIAP), Case Postale 592, 1920 Martigny, Suisse. E-mail: thimm at idiap.ch; Tel.: ++41 26 22 76 64; Fax: ++41 26 22 78 18; WWW: http://www.idiap.ch
--------------------------------------------------------------

From usui at bpel.tutics.tut.ac.jp Mon Jan 30 00:45:19 1995 From: usui at bpel.tutics.tut.ac.jp (Shiro USUI) Date: Mon, 30 Jan 1995 14:45:19 +0900 Subject: Paper available In-Reply-To: Georg Thimm's message of Sun, 29 Jan 95 11:42:15 +0100 <9501291042.AA15409@idiap.ch> Sun, 29 Jan 95 11:42:14 +0100 Message-ID: <199501300545.OAA02621@mozart.tutics.tut.ac.jp> Dear Dr. G. Thimm, We read your paper entitled "The Interchangeability of learning rate and gain in backpropagation neural networks", and noticed that the motivations and the results in your paper are very close to our previous papers: Q. Jia, K. Hagiwara, N. Toda and S. Usui: "Equivalence relation between the backpropagation learning process of an FNN and that of an FNNG" (Letters to the Editor), Neural Networks, Vol.7, No.2, p.411 (1994); Q. Jia, N. Toda, K. Hagiwara and S. Usui: "An analysis of the error backpropagation learning algorithms with gain" (in Japanese), IEICE Trans. D-II, Vol.J77-D-II, No.4, pp.850-857 (1994). We could hardly find any essential new points in your present paper. Please refer to the above papers.
-----
Shiro USUI ( usui at bpel.tutics.tut.ac.jp ) Biological & Physiological Engineering Lab. Department of Information & Computer Sciences Toyohashi University of Technology Tel & Fax : +81-532-46-7806

From mbrown at aero.soton.ac.uk Mon Jan 30 05:06:04 1995 From: mbrown at aero.soton.ac.uk (Martin Brown) Date: Mon, 30 Jan 95 10:06:04 GMT Subject: activation function gain Message-ID: <13983.9501301006@aero.soton.ac.uk> With reference to Dr. G. Thimm's paper about the relationship between the gain of the activation function and the learning rate, we investigated a similar topic a couple of years ago and also looked at how the size of the bias term affects the condition of the learning problem. See chapter 4 in:
@book{BrownHarris:94,
  AUTHOR = "Brown, M. and Harris, C.",
  TITLE = "Neurofuzzy Adaptive Modelling and Control",
  PUBLISHER = "Prentice Hall",
  YEAR = 1994,
  ADDRESS = "Hemel Hempstead, UK"
}
or the condensed version in:
@inproceedings{BrownHarris:93a,
  AUTHOR = "Brown, M. and An, P.C. and Harris, C.J. and Wang H.",
  TITLE = "How Biased is your Multi-Layer Perceptron",
  BOOKTITLE = "World Congress on Neural Networks",
  YEAR = 1993,
  PAGES = "507--511",
  ADDRESS = "Portland, Oregon",
  NOTE = "Volume 3"
}
Needless to say though, we probably weren't the first! Martin Brown ISIS research group, Department of Electronics and Computer Science, University of Southampton, UK. Fax: 01703 592865 Email: mqb at ecs.soton.ac.uk

From nowlan at cajal.synaptics.com Mon Jan 30 14:46:43 1995 From: nowlan at cajal.synaptics.com (Steven J. Nowlan) Date: Mon, 30 Jan 95 11:46:43 -0800 Subject: Paper avail. ftp: Nowlan and Platt, "A Convolutional Hand Tracker" Message-ID: <9501301946.AA28135@cajal.synaptics.com.> ****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** AVAILABLE VIA FTP ONLY ***********************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU.
FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/nowlan.nips95.ps.Z The following paper has been placed in the Neuroprose archive at Ohio State. The file is nowlan.nips95.ps.Z. Only the electronic version of this paper is available. This paper is 8 pages in length. This is a preprint of the paper to appear in Advances in Neural Information Processing Systems 7. This file contains 5 embedded postscript figures and is 0.4 Mbytes uncompressed.
-----------------------------------------------------
A Convolutional Neural Network Hand Tracker Steven J. Nowlan and John C. Platt, Synaptics, Inc., 2698 Orchard Parkway, San Jose, CA 95134 ABSTRACT: We describe a system that can track a hand in a sequence of video frames and recognize hand gestures in a user-independent manner. The system locates the hand in each video frame and determines if the hand is open or closed. The tracking system is able to track the hand to within 10 pixels of its correct location in 99.7% of the frames from a test set containing video sequences from 18 different individuals captured in 18 different room environments. The gesture recognition network correctly determines if the hand being tracked is open or closed in 99.1% of the frames in this test set. The system has been designed to operate in real time with existing hardware.
-----------------------------------------------------
Steven J. Nowlan, Synaptics, Inc., 2698 Orchard Parkway, San Jose, CA 95134 e-mail: nowlan at synaptics.com phone: (408) 434-0110 x118

From rolf at cs.rug.nl Mon Jan 30 12:30:11 1995 From: rolf at cs.rug.nl (rolf@cs.rug.nl) Date: Mon, 30 Jan 1995 18:30:11 +0100 Subject: Ph.D. thesis available Message-ID: Ph.D. thesis available by ftp
-----------------------------
Multilayer Dynamic Link Networks for Establishing Image Point Correspondences and Visual Object Recognition by Rolf P. W\"urtz FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/Thesis/wuertz.ps.Z URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/wuertz.ps.Z For Europeans, the following may be more convenient: URL: http://www.cs.rug.nl/~rolf/wuertz.thesis.ps.gz The size of the file (1.6MB compressed, 9.2MB uncompressed) may cause problems for printing. Therefore, as well as in order to save paper, it is recommended to browse it in a previewer such as ghostview and print only the pages of interest. (See also copyright notice below.) Hardcopies for those who prefer old-fashioned paperbacks will be available from the publisher mentioned below within a couple of weeks and cost around $20. (Your bookstore should be able to place the order.) Free hardcopies are available only for close friends. :-) ABSTRACT:
---------
The major tasks for automatic object recognition are segmentation of the image and solving the correspondence problem, i.e. reliably finding the points in the image that belong to points in a given model. Once these correspondences are found, the local similarities can be used to assign one model out of a set of known ones to the image. This work defines a suitable representation for models and images based on a multiresolution transform with Gabor wavelets. The properties of such transforms are discussed in detail. Then a neural network with dynamic links and short-term activity correlations is presented that estimates these correspondences in several layers coarse-to-fine. It is formalized into a nonlinear dynamical system.
Simulations show that its capabilities extend those of earlier systems, adding background invariance and faster convergence. Finally, the central procedures of the network are put into an algorithmic form, which allows fast implementation on conventional hardware and uses the correspondences for the successful recognition of human faces out of a gallery of 83, independent of their hairstyle. This demonstrates the potential for the recognition of objects independently of the background, which was not possible with earlier systems. KEYWORDS:
---------
Neural network, dynamic link architecture, correspondence problem, object recognition, face recognition, coarse-to-fine strategy, wavelet transform, image representation CONTENTS:
---------
Abstract..........................................1
Preface...........................................3
Acknowledgements..................................5
Contents..........................................7
1. Introduction..................................13
2. Wavelet Preprocessing.........................25
3. Representation of Images and Models...........49
4. Hierarchical Dynamic Link Matching............65
5. Algorithmic Pyramid Matching..................89
6. Hierarchical Object Recognition..............109
7. Discussion...................................119
8. Bibliography.................................127
9. Anhang in deutscher Sprache..................141
Index...........................................153
COPYRIGHT NOTICE:
-----------------
The copyright of this text has been transferred to: Verlag Harri Deutsch GmbH, Gr\"afstra{\ss}e 47/51, D-60486 Frankfurt am Main, Germany. Phone: +49 69 775021; Fax: +49 69 7073739; Email: vlghd at vlghd.f.eunet.de It will be available within a couple of weeks at a price of about US $20.-. Due to copyright it is illegal to print more than selected pages for personal use. Enjoy reading (at least) as much as I enjoyed writing! Rolf
Rolf P. W\"urtz, Department of Computer Science, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands. Email: rolf at cs.rug.nl; Phone: +31 50 63-6496 or -3939 (dept. secr.); Fax: +31 50 63-3800

From ucganlb at ucl.ac.uk Tue Jan 31 04:54:20 1995 From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London) Date: Tue, 31 Jan 95 09:54:20 +0000 Subject: preprint - a connectionist model of STM for serial order Message-ID: <181056.9501310954@link-1.ts.bcc.ac.uk> anonymous ftp host: archive.cis.ohio-state.edu (128.146.8.52) file: pub/neuroprose/burgess.serial_order.ps.Z I have just put the following pre-print in the neuroprose archive (see above). Cheers, Neil (n.burgess at ucl.ac.uk)
_________________________________________________________________________
A SOLVABLE CONNECTIONIST MODEL OF IMMEDIATE RECALL OF ORDERED LISTS Neil Burgess, Department of Anatomy, University College London, London WC1E 6BT, England. ABSTRACT A model of short-term memory for serially ordered lists of verbal stimuli is proposed as an implementation of the `articulatory loop' thought to mediate this type of memory (Baddeley, 1986).
The model predicts the presence of a repeatable time-varying `context' signal coding the timing of items' presentation in addition to a store of phonological information and a process of serial rehearsal. Items are associated with context nodes and phonemes by Hebbian connections showing both short and long term plasticity. Items are activated by phonemic input during presentation and reactivated by context and phonemic feedback during output. Serial selection of items occurs via a winner-take-all interaction amongst items, with the winner subsequently receiving decaying inhibition. An approximate analysis of error probabilities due to Gaussian noise during output is presented. The model provides an explanatory account of the probability of error as a function of serial position, list length, word length, phonemic similarity, temporal grouping, item and list familiarity, and is proposed as the starting point for a model of rehearsal and vocabulary acquisition. This paper is 8 pages, 0.2Mbytes uncompressed, and will be published in NIPS 7.

From valdes at CARMEN.KBS.CS.CMU.EDU Tue Jan 31 08:51:12 1995 From: valdes at CARMEN.KBS.CS.CMU.EDU (Raul Valdes-Perez) Date: Tue, 31 Jan 95 08:51:12 EST Subject: FYI Message-ID: SECOND CALL FOR PAPERS INTERNATIONAL WORKSHOP ON INFORMATION PROCESSING IN CELLS AND TISSUES Liverpool 6th - 8th September 1995 The purpose of this workshop is to bring together a multidisciplinary group of scientists working in the general area of modelling cells and tissues. A central theme will be the nature of biological information and the ways it is processed in cells and tissues. We hope that the workshop will draw together researchers from a range of disciplines including: Computer Science, Cell Biology, Mathematics, Physiology, Biophysics, Experimental Medicine, Biochemistry, Electronic Engineering and Biotechnology. The workshop is intended to provide a forum to report research, discuss emerging topics and gain new insights into information processing in biological and computational systems.
Subject areas are likely to include but not be restricted to:
* Cellular information processing systems
* Enzyme networks, Gene networks, Metabolic channeling
* Second messenger systems
* Signal Transduction and Cellular Pattern Recognition
* Automata models
* Parallel Distributed Processing models
* Cellular Automata models
* Single Neuron Computation
* Biomolecular computing
* Inter-cellular communication, Multi-cellularity
* Information Processing in Developmental Systems
* Information Processing in Immune networks
* Endocrine-immune-nervous interactions
* Information processing in neural tissue systems
* Information processing in non-neural tissue systems
* Communication and gap-junctions
* Asynchronous processing, MIMD, SIMD and NIMD systems
* Cell and tissue oscillators
* Fractals and Chaos
* Emergent phenomena and self-organisation
Programme Committee: Georg Brabant, Endocrinology (Hanover); Michael Conrad, Computer Science (Detroit); Roy Cuthbertson, Cell Biology (Liverpool); Claus Emmeche, Philosophy of Nature and Science Studies (Copenhagen); Mike Holcombe, Computer Science (Sheffield); George Kampis, Ethology and Philosophy of Science (Budapest); Douglas Kell, Biological Sciences (Aberystwyth); Gareth Leng, Physiology (Edinburgh); Pedro Marijuan, Electronics & Informatics (Zaragoza); Koichiro Matsuno, BioEngineering (Nagaoka); Ray Paton, Computer Science (Liverpool); Hans-Paul Schwefel, Computer Science (Dortmund); Idan Segev, Neurobiology (Jerusalem); Gordon Shepherd, Neurobiology (Yale); W Richard Stark, Mathematics (Tampa); Rene Thomas, Molecular Biology (Brussels); Chris Tofts, Computer Science (Manchester); John Tucker, Computer Science (Swansea); G Rickey Welch, Biological Sciences (New Orleans); Gershom Zajicek, Experimental Medicine and Cancer Research (Jerusalem). Organizing Committee: Ray Paton, Roy Cuthbertson, Mike Holcombe and 'Trina Houghton. Submission Details All authors must submit 4 copies of the full technical paper by mail or delivery service to: Ray Paton, Department of Computer Science, The University of Liverpool, Liverpool L69 3BX, UK. PLEASE DO NOT SUBMIT PAPERS BY FAX. The paper should be in English, double-spaced in 12 point using Times or similar font. The paper should be a maximum of 16 pages including the first page. The first page must contain: title of the paper, authors' names including affiliations, complete mailing address, telephone and FAX numbers, email address, and a 250 word (maximum) abstract. Important Dates Submission deadline: Friday April 14th 1995 Acceptance Notification: Friday May 26th 1995 Deadline for final paper: Friday June 23rd 1995 PROCEEDINGS The papers accepted for the workshop will be bound into an unpublished collection for delegates. PUBLICATION OF THE PROCEEDINGS It is intended that a post-workshop proceedings will be published by Springer-Verlag and will appear after the workshop. Enquiries should be addressed to Ray Paton at the above address or FAX +44 51 794 3715 or email tissues at csc.liv.ac.uk
A fixed version (retrieved and printed correctly at a remote site) is available: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/Inbox/nowlan.nips95-2.ps.Z
------- Forwarded Message
From uunet!cajal.synaptics.com!nowlan Mon Jan 30 14:46:43 1995 From: uunet!cajal.synaptics.com!nowlan (Steven J. Nowlan) Date: Mon, 30 Jan 1995 14:46:43 -0500 Subject: Paper avail. ftp: Nowlan and Platt, "A Convolutional Hand Tracker" Message-ID: ****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** AVAILABLE VIA FTP ONLY ***********************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. **
FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/nowlan.nips95.ps.Z The following paper has been placed in the Neuroprose archive at Ohio State. The file is nowlan.nips95.ps.Z. Only the electronic version of this paper is available. This paper is 8 pages in length. This is a preprint of the paper to appear in Advances in Neural Information Processing Systems 7. This file contains 5 embedded postscript figures and is 0.4 Mbytes uncompressed.
-----------------------------------------------------
A Convolutional Neural Network Hand Tracker Steven J. Nowlan and John C. Platt, Synaptics, Inc., 2698 Orchard Parkway, San Jose, CA 95134 ABSTRACT: We describe a system that can track a hand in a sequence of video frames and recognize hand gestures in a user-independent manner. The system locates the hand in each video frame and determines if the hand is open or closed. The tracking system is able to track the hand to within 10 pixels of its correct location in 99.7% of the frames from a test set containing video sequences from 18 different individuals captured in 18 different room environments. The gesture recognition network correctly determines if the hand being tracked is open or closed in 99.1% of the frames in this test set. The system has been designed to operate in real time with existing hardware.
-----------------------------------------------------
Steven J. Nowlan, Synaptics, Inc., 2698 Orchard Parkway, San Jose, CA 95134 e-mail: nowlan at synaptics.com phone: (408) 434-0110 x118
------- End of Forwarded Message
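As a generic aside on the convolutional part of such trackers: a convolutional layer slides kernels across the frame, and localization amounts to finding peaks in the resulting response map. The sketch below is a plain cross-correlation illustration with a fixed template, not the Nowlan & Platt network, whose kernels are learned from data:

import numpy as np

def correlate2d_valid(image, kernel):
    # Plain 2-D cross-correlation over the "valid" region -- the elementary
    # operation a convolutional layer applies with each of its kernels.
    H, W = image.shape
    h, w = kernel.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

rng = np.random.default_rng(1)
frame = rng.normal(size=(32, 32))         # stand-in for a video frame
template = frame[10:14, 20:24].copy()     # stand-in for a hand template
response = correlate2d_valid(frame, template)
print(np.unravel_index(response.argmax(), response.shape))  # usually (10, 20)

A trained network replaces the fixed template with learned kernels and interleaves such correlations with nonlinearities and subsampling, but the localization-by-peak-response idea is the same.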
From 72773.1646 at compuserve.com Sun Jan 1 11:47:06 1995 From: 72773.1646 at compuserve.com (SAM FARAG) Date: 01 Jan 95 11:47:06 EST Subject: NEURAL NETWORKS SPECIALIST Message-ID: <950101164706_72773.1646_EHL69-1@CompuServe.COM> The Switchgear and Motor Control Center Division of Siemens Energy and Automation, Inc. in Raleigh, NC has an immediate opening for a neural networks specialist, especially in feed-forward networks and back propagation. The ideal candidate will have a master's degree in electrical engineering or equivalent experience. Your responsibilities will include developing, implementing, and testing neural networks, statistical and/or machine based algorithms for electrical machine monitoring and diagnostics. Experience in embedded controllers, assembly, high-level languages, and hardware design is highly desirable. Siemens AG is a worldwide supplier of electrical and electronic devices with sales in excess of $4 billion in the US and $40 billion worldwide. If you are interested please send your resume via email (text only) or US mail to: Sam Farag, Siemens Energy & Automation, 7000 Siemens Drive, Wendell, NC 27626 email: 72773.1646 at CompuServe.com

From nadal at physique.ens.fr Mon Jan 2 12:10:05 1995 From: nadal at physique.ens.fr (NADAL Jean-Pierre) Date: Mon, 2 Jan 1995 18:10:05 +0100 Subject: New books Message-ID: <199501021710.SAA09077@droopy.ens.fr>
***********************************************************************
*********** Proceedings (Cargese NATO ASI): ***************************
***********************************************************************
"From Statistical Physics To Statistical Inference and Back" Grassberger P. and Nadal J.-P. editors Kluwer Acad. Publ., 1994 ISBN 0-7923-2775-6 to order: services at wkap.nl kluwer at world.std.com
***********************************************************************
*********** Reprint volume, with comments from the editors: ***********
***********************************************************************
"Biology and Computation: a physicist's choice" Gutfreund H. and Toulouse G. editors Advance Series in Neurosciences, Vol. 3 World Scientific, 1994
***********************************************************************
*********** Book on neural networks, for non-specialists (IN FRENCH) **
*********** Livre en francais, pour non specialistes ******************
***********************************************************************
"Reseaux de neurones : de la physique a la psychologie" Nadal J.-P. Armand Colin, collection 2ai, 1993 ISBN 2-200-21170-8 Available in all good bookshops...
***********************************************************************
Jean-Pierre Nadal nadal at physique.ens.fr Laboratoire de Physique Statistique Ecole Normale Sup\'erieure 24, rue Lhomond - 75231 Paris Cedex 05

From meeden at cs.swarthmore.edu Mon Jan 2 13:51:02 1995 From: meeden at cs.swarthmore.edu (Lisa Meeden) Date: Mon, 2 Jan 1995 13:51:02 -0500 Subject: Thesis available on adaptive robot control Message-ID: <199501021851.NAA21106@cilantro.cs.swarthmore.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/Thesis/meeden.thesis.ps.Z **DO NOT FORWARD TO OTHER GROUPS** Ph.D.
Thesis available by anonymous ftp (124 pages) Towards Planning: Incremental Investigations into Adaptive Robot Control Lisa Meeden Department of Computer Science Indiana University ABSTRACT: Traditional models of planning have adopted a top-down perspective by focusing on the deliberative, conscious qualities of planning at the expense of having a system that is connected to the world through its perceptions. This thesis takes the opposing, bottom-up perspective that being firmly situated in the world is the crucial starting point to understanding planning. The central hypothesis of this thesis is that the ability to plan developed from the more primitive capacity of reactive control. Neural networks offer the most promising mechanism for investigating robot control and planning because connectionist methodology allows the task demands rather than the designer's biases to be the primary force in shaping a system's development. Input can come directly from the sensors and output can feed directly into the actuators creating a close coupling of perception and action. This interplay between sensing and acting fosters a dynamic interaction between the controller and its environment that is crucial to producing reactive behavior. Because adaptation is fundamental to the connectionist paradigm, the designer need not posit what form the internal knowledge will take or what specific function it will serve. Instead, based on the training task, the system will construct its own internal representations built directly from the sensor readings to achieve the desired control behavior. Once the system has reached an adequate level of performance at the task, its method can be dissected and a high-level understanding of its control principles can be determined. This thesis takes an incremental approach towards understanding planning using a simple recurrent network model. In the initial phase, several ways of representing goals are explored using a simulated robot in a one-dimensional environment. Next the model is extended to accommodate a physical robot and two reinforcement learning methods for adapting the network controllers are compared: a gradient descent algorithm and a genetic algorithm. Then, the model's reactive behavior and representations are analyzed to reveal that it contains the potential building blocks necessary for planning, called protoplans. Finally, to show that protoplans can be used to guide behavior, a learning transfer experiment is conducted. The protoplans constructed in one network controller are stored in an associative memory and retrieved by a new controller as it learns the same task from scratch. In this way strategies discovered in the original controller bias the strategies developed in a new controller. The results show that controllers trained with protoplans and without goals are able to converge more quickly to successful solutions than controllers trained with goals. Furthermore, changes in the protoplans over time reveal that particular fluctuations in the protoplan values are highly correlated with switches in the robot's behavior. In some instances, very minor disturbances to the protoplan at these fluctuation points severely disrupt the normal pattern of behavior. Thus protoplans can play a key role in determining the behavior. The success of these protoplan experiments supports a new set of intuitions about planning.
-------------------
Hard copies are not available. Thanks to Jordan Pollack for maintaining neuroprose.

Lisa Meeden
Computer Science Program
Swarthmore College
meeden at cs.swarthmore.edu

From arthur at mail4.ai.univie.ac.at Mon Jan 2 22:05:27 1995
From: arthur at mail4.ai.univie.ac.at (Arthur Flexer)
Date: Mon, 2 Jan 95 22:05:27 MET
Subject: Q: Statistical Evaluation of Classifiers
Message-ID: <199501022105.WAA29311@milano.ai.univie.ac.at>

Dear colleagues,

I am looking for references on systematic accounts of the problem of the statistical evaluation of (statistical, machine learning or neural network) classifiers -- that is, systematic accounts of the statistical tests that can be employed to establish whether observed performance differences are indeed caused by the varied independent variables (e.g. kind of method, certain parameters of the method, data set used, ...) and not by mere chance. What I am looking for are the appropriate statistical tests for significance of the observed performance differences. To make things clearer, let me simplify the case and give some references to the literature that I have already found:

Problem 1: You have one method for classification (e.g. a neural network) and one data set. There are several network parameters to tune (number of layers, learning rate, ...) and you are looking for optimal performance on your data set. So the independent variables are the network's parameters and the dependent variable is accuracy (i.e. percent of correct classification) (see Kibler & Langley 1988 for an account of machine learning as an experimental science). For each of the parameter settings, multiple neural networks should be trained to rule out the influence of different training sets, weight initialisations and so on. One could even employ experimental designs like the bootstrap, the jackknife and the like (see Michie et al. 1994 for an overview) for each of the parameter settings. Therefore, for each parameter setting, the mean accuracy over all runs is the observed performance criterion. A statistical test that can be employed for testing the significance of the differences in observed mean accuracies is the t-test. Finnoff et al. 1992 and Hergert et al. 1992 use "a robust modification of a t-test statistic" for comparison.
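To make the simplest case concrete, a short sketch of such a comparison (Welch's t-test on per-run accuracies; the numbers are entirely hypothetical):

import numpy as np
from scipy import stats

# Accuracies from repeated training runs under two parameter settings
# (or two methods, as in Problem 2 below). Hypothetical numbers: ten
# runs each with different weight initialisations and data splits.
acc_a = np.array([0.81, 0.83, 0.80, 0.84, 0.82, 0.79, 0.85, 0.81, 0.83, 0.82])
acc_b = np.array([0.78, 0.80, 0.77, 0.81, 0.79, 0.76, 0.80, 0.78, 0.79, 0.77])

# Welch's t-test (no equal-variance assumption); a paired or robust
# variant may be preferable when runs share the same data splits.
t, p = stats.ttest_ind(acc_a, acc_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p: the mean accuracies differ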
Problem 2: is similar to Problem 1. Instead of one method for classification and one data set, there are several methods of classification and you want to know which of them shows optimal performance on one data set. Just proceed as stated under Problem 1, replacing the parameter settings with the different methods.

Problem 3: is quite difficult and I have no solution yet :). You have one method of classification and several data sets. You want to know on which data set your algorithm performs best (again in terms of mean accuracy). The problem is that the different data sets have different numbers of classes and different probabilities of classes. E.g. one data set has N=100 and the first class has 50 members and the second class also. Another data set has N=100 and the first class has 20 members, the second 30, the third 20 and the fourth again 30. Therefore, an accuracy of 50% would be only as good as chance for the first data set, but may be quite something for the second data set. This problem has been addressed by Kononenko & Bratko 1991 from an information-based point of view.

Problem 4: would of course be the ultimate: several methods and several data sets.

As you can see from the references I have given above, I am aware that there *are* some pointers in the literature. But as the problem of classification has been around for quite a while (at least for statisticians), I am wondering if there already exists a systematic and extensive overview of methods to employ. On the other hand, awareness of the need for such statistical evaluation often is very low :(. So the question is: is there already a comprehensive text on these matters, or do we all have to pick the information out of the standard statistics text books?

Regards and thanks for any help, Arthur.
-----------------------------------------------------------------------------
Arthur Flexer                                      arthur at ai.univie.ac.at
Austrian Research Inst. for Artificial Intelligence    +43-1-5336112(Tel)
Schottengasse 3, A-1010 Vienna, Austria, Europe        +43-1-5320652(Fax)

Literature:
Finnoff W., Hergert F., Zimmermann H.G.: Improving Generalization Performance by Nonconvergent Model Selection Methods, in Aleksander I. & Taylor J. (eds.), Artificial Neural Networks, 2, North-Holland, Amsterdam, pp. 233-236, 1992.
Hergert F., Zimmermann H.G., Kramer U., Finnoff W.: Domain Independent Testing and Performance Comparisons for Neural Networks, in Aleksander I. & Taylor J. (eds.), Artificial Neural Networks, 2, North-Holland, Amsterdam, pp. 1071-1076, 1992.
Kononenko I., Bratko I.: Information-Based Evaluation Criterion for Classifiers' Performance, Machine Learning, 6(1), 1991.
Kibler D. & Langley P.: Machine Learning as an Experimental Science, Machine Learning, 3(1), 5-8, 1988.
Michie D., Spiegelhalter D.J., Taylor C.C. (eds.): Machine Learning, Neural and Statistical Classification, Ellis Horwood, England, 1994.

From herwin at osf1.gmu.edu Mon Jan 2 16:12:40 1995
From: herwin at osf1.gmu.edu (HARRY R. ERWIN)
Date: Mon, 2 Jan 1995 16:12:40 -0500 (EST)
Subject: Interim Report on OB Modeling
Message-ID:

Interim Report/Lessons Learned on a Simulation Model of the Olfactory Bulb
Harry Erwin
herwin at gmu.edu
January 1, 1995

As a graduate school project during the last quarter, I've been developing a computational and compartmental model of a small but biologically realistic subset of the olfactory bulb. It owes its inspiration to the work done by Walter Freeman and James Skinner, but its many errors naturally remain the responsibility of the author. I'm posting this report for anyone who might provide useful critical feedback. The simulation consists of a number of small compartmental models of sensory, tufted/mitral, periglomerular, and granule cell neurons structured to provide insight into architectural details of the olfactory bulb.
Crucial omissions include reafference from the anterior olfactory nucleus, the pyriform cortex, and the locus coeruleus, and the effect of changes in the chloride gradient in the glomeruli. The model is written in C++ and represents 32 tufted/mitral cells and the associated periglomerular and granule cells. The code is efficient, and performance is excellent, both on a Macintosh IIfx and on a Silicon Graphics Iris workstation. On the SGI, approximately 200 milliseconds of activity is modeled in 8 minutes of CPU time. The model has been rehosted on PVM 3 and will eventually be moved to the Paragon supercomputer.

Lessons learned in developing this simulation include the following:

1. Errors in the equations for neural dynamics--a number of errors were noted in recent papers. My take is that it is unsafe to rely on the equations in published papers, and any equations used should be rederived. I didn't notice this until I made my initial runs and got weird results (highly at variance with the biological data).

2. Instability in explicit compartmental models--compartmental neural models are advective, with all the problems associated with such models. This becomes clear if one reviews Wilfrid Rall's 1989 paper, "Cable theory for dendritic neurons," in Koch and Segev, Methods in Neuronal Modeling. Since both the shape and strength of the signals between compartments are biologically important, the preferred approach would be a high-order adaptive scheme using implicit solution techniques. The system of equations appears to be stiff, with high long-range connectivity, making matrix inversions computationally expensive. My exploratory modeling used a low-order explicit code, and so can only be regarded as suggestive.

3. Transmission rates in compartmental models--some workers publishing compartmental models appear to assume that transmission delays between compartments are unimportant. That leads to modeling difficulties. The equations used should be formally correct.

4. The true role of 'inhibitory' neurotransmitters--GABA and glycine are _not_ inverted excitatory neurotransmitters. Instead they serve to increase the 'inertia' of the neuron by reducing its sensitivity to excitation. The reversal potential for chloride channels is in the vicinity of -70 mV, close to the resting potential of the neuron and also close to the reversal potential for potassium channels. This means that GABA can depolarize as well as hyperpolarize a neuron, depending on the chloride gradient. The model for release of GABA at a synapse must take that into account.

5. Synaptic release models in most published compartmental models are (to be polite about it) simplistic, typically strongly influenced by the artificial neural network model of the spiking neuron. Very little work appears to have been done on the mechanism by which a depolarization level on the presynaptic side results in vesicle release, followed by either depolarization or buffering against depolarization on the postsynaptic side. What appears in most analyses are "all-or-nothing" presynaptic spikes and postsynaptic responses, and that does not address the detailed dynamics that actually occur. In particular, electrical synapses and chemical synapses implementing graded potentials are given short shrift by this model.
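A toy illustration of the chloride-reversal point in item 4 (constants are illustrative, not the model's actual parameters):

# A GABA-gated chloride conductance drives the membrane toward E_Cl,
# so it depolarizes when V < E_Cl and hyperpolarizes when V > E_Cl.
# Sign convention: positive current = hyperpolarizing. Illustrative only.
E_CL = -70.0  # chloride reversal potential (mV)

def gaba_current(v_membrane, g_gaba):
    """Current through open GABA-gated chloride channels."""
    return g_gaba * (v_membrane - E_CL)

for v in (-80.0, -70.0, -60.0):
    i = gaba_current(v, g_gaba=1.0)
    effect = "depolarizing" if i < 0 else "hyperpolarizing" if i > 0 else "none"
    print(f"V = {v:6.1f} mV  ->  I = {i:+5.1f}  ({effect})")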
6. The crucial role of active tuning in producing the observed EEG patterns--to get the observed EEG patterns, the olfactory bulb has to be actively tuned in sensitivity. I modeled this by adjusting the trigger potential for spiking by active conductances, and over a range of 10 mV, I went from fixed point dynamics to completely chaotic dynamics. To reproduce the dynamics seen in vivo would thus seem to require sensitive tuning in near-real-time.

7. The role of periglomerular cells in the system--the periglomerular cells clearly normalize the bulb input as part of this tuning process. Initially during a breath, they are relatively inactive, so that all the tufted/mitral cells are allowed to explore the input, but later in the breath the periglomerular cells emerge more strongly, eventually allowing the neural cellular assembly (NCA) best classifying the afferent input to dominate. Tuning of the periglomerular cell response is a key aspect of modeling the time constant of this process. Walter Freeman has done work in this area.

8. The role of granule cells in the system--these appear to force the system into a Hopf bifurcation and only work right if the system is actively tuned, since they do not appear to be adaptive. Whether they have active conductances is a major issue for my model. Active conductances appear to overdrive them, since there is no evidence for afferent GABA or glycine synapses.

9. The role of attention in creating and maintaining neural cellular assemblies--see Gary Aston-Jones's work and Gray, Skinner, and Freeman, 1986, in Behavioral Neuroscience, 100(4):585-596. Norepinephrine appears to have a role in vigilance, by modulating the sensitivity of the olfactory bulb. This is much like the modulation by the periglomerular neurons, but on a more global scale, adjusting the percentage of the existing neural cellular assemblies (NCAs) that respond to afferent signals and facilitating NCA assembly and disassembly. I intend to investigate this further.

10. The mechanism of the 'H' synapses in the glomeruli and of the reciprocal synapses between the tufted/mitral cells and the granule cells remains unclear. I suspect they produce some sort of difference signal.

Harry Erwin
Internet: herwin at gmu.edu

From jbower at smaug.bbb.caltech.edu Tue Jan 3 16:05:51 1995
From: jbower at smaug.bbb.caltech.edu (jbower@smaug.bbb.caltech.edu)
Date: Tue, 3 Jan 95 13:05:51 PST
Subject: No subject
Message-ID: <9501032105.AA09903@smaug.bbb.caltech.edu>

ANNOUNCEMENT

GENESIS in research and education

For the last five years, our laboratory, with funding from the National Science Foundation, has been supporting development and use of a GEneral NEural SImulation System called GENESIS. GENESIS is primarily for use in constructing realistic simulations of neurobiologically accurate cells and networks, although it has been used to construct neural models at all levels of detail. This simulation system is now used at institutions throughout the world, and served as the basis for 22 publications originating outside Caltech in 1994. GENESIS is available for free from Caltech (see below). This message is intended to announce the availability of a new GENESIS-based book on biologically realistic neural modeling as well as a new free version of the GENESIS simulator.

************************************************************************

The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System.

James M. Bower, California Institute of Technology, Pasadena
David Beeman, University of Colorado, Boulder

This book introduces the GENESIS neural simulation system, as well as the interdisciplinary field of computational neuroscience.
It is a step-by-step tutorial for professionals, researchers and students working in the area of computational neuroscience or neuroscience in general. Each tutorial is accompanied by a number of suggested exercises, "experiments", or projects which may be assigned as homework or used for self-study. It can also be used as an interactive guide to understanding neuronal and network structure for those working in the area of neural networks and the cognitive sciences. The Preface and Introduction give suggestions for incorporating this material into neuroscience courses with existing textbooks. The full GENESIS simulator and all simulations used in the book are available at no cost from the Caltech GENESIS ftp site.

Part I of the book teaches concepts in neuroscience and neural modeling by means of interactive computer tutorials on subjects ranging from neuronal membrane properties to cortical networks. These chapters, written by several contributors, allow the student to perform realistic simulations and experiments on model neural systems and provide the necessary background for understanding and using the tutorials. The simulations are user-friendly with on-line help and may be used without any prior knowledge of the GENESIS simulator or computer programming.

Part II is intended to teach the use of the GENESIS script language for the construction of one's own simulations. This part will be useful for self-study by researchers who wish to do neural modeling, as well as students. It follows approximately the same sequence of topics as Part I, and uses parts of the tutorial simulations as examples of GENESIS programming. Several of these are based on recent research simulations which have been published in the neuroscience literature, but which have not been previously available for use outside the laboratories of the original researchers. Thus, the reader may modify these simulations and use them as a starting point for the development of original simulations.

************************************************************************

GENESIS version 1.4.2

The current version of GENESIS is version 1.4.2 (December 1994), which has been newly updated to contain the tutorial simulations used in "The Book of GENESIS". Around March 1995 we expect to release version 2.0, which will have a number of new features that are described in Part II of the Book of GENESIS. If all goes according to schedule, this release will also run on 486 PCs under the Linux and FreeBSD versions of unix.

At present, GENESIS and its graphical front-end XODUS are written in C and run on SUN (SUN 3, 4, and Sparc stations 1 and 2) and DEC (DECstation 2100, 3100, and 5000/200PX) graphics workstations under UNIX (Sun & DEC OS 4.0 and up) and X-windows (versions 11.3, 11.4 and 11.5). It has also been used with Silicon Graphics (Irix 4.0.1 and up) and the HP 700 series (HPUX).

************************************************************************

GENESIS use in education

From cohn at psyche.mit.edu Wed Jan 4 10:08:41 1995
From: cohn at psyche.mit.edu (David Cohn)
Date: Wed, 4 Jan 95 10:08:41 EST
Subject: Paper: Active Learning with Statistical Models
Message-ID: <9501041508.AA14465@psyche.mit.edu>

Anticipating the post-NIPS rush, I would like to announce that the following paper is available by anonymous ftp and web-server as

ftp://psyche.mit.edu/pub/cohn/active-models.ps.Z

#####################################################################

Active Learning with Statistical Models
David A. Cohn, Zoubin Ghahramani and Michael I. Jordan
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology

For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.

To appear in G. Tesauro, D. Touretzky, and J. Alspector, eds., Advances in Neural Information Processing Systems 7. Morgan Kaufmann, San Francisco, CA (1995).

#####################################################################

The paper may also be retrieved by anonymous ftp to "psyche.mit.edu" using the following protocol:

unix> ftp psyche.mit.edu
Name (psyche.mit.edu:joebob): anonymous    <- use "anonymous" here
331 Guest login ok, send ident as password.
Password: joebob at machine.univ.edu        <- use your email address here
230 Guest login ok, access restrictions apply.
ftp> cd pub/cohn                           <- go to the directory
250 CWD command successful.
ftp> binary                                <- change to binary transfer
200 Type set to I.
ftp> get active-models.ps.Z                <- get the file
200 PORT command successful.
150 Binary data connection for active-models.ps.Z ...
226 Binary Transfer complete.
local: active-models.ps.Z remote: active-models.ps.Z
301099 bytes received in 2.8 seconds (1e+02 Kbytes/s)
ftp> quit                                  <- all done
221 Goodbye.
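As a rough sketch of the general idea (not the paper's exact criterion: the paper minimizes expected predictive variance, while the toy code below simply queries where a locally weighted regression is currently least certain; all data and the bandwidth are hypothetical):

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D regression data already labelled.
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + rng.normal(0, 0.1, 20)

def lwr_fit(x0, X, y, tau=0.5):
    """Locally weighted linear regression around x0; returns the
    prediction and a crude local variance estimate."""
    w = np.exp(-(X - x0) ** 2 / (2 * tau ** 2))
    A = np.vstack([np.ones_like(X), X]).T
    beta, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                               np.sqrt(w) * y, rcond=None)
    var = np.sum(w * (y - A @ beta) ** 2) / np.sum(w)
    return beta[0] + beta[1] * x0, var

# Query selection: ask for a label where the local variance estimate
# is largest (uncertainty sampling, a simpler heuristic than the
# expected-variance-reduction criterion derived in the paper).
candidates = np.linspace(-3, 3, 61)
variances = [lwr_fit(c, X, y)[1] for c in candidates]
print("next query:", candidates[int(np.argmax(variances))])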
From nilsson at CS.Stanford.EDU Wed Jan 4 13:04:54 1995
From: nilsson at CS.Stanford.EDU (Nils Nilsson)
Date: Wed, 4 Jan 95 10:04:54 PST
Subject: faculty search
Message-ID: <9501041804.AA03176@Fairview.Stanford.EDU>

Stanford University's Department of Computer Science has begun a search for a tenure-track junior faculty position. The advertisement, soon to appear, is attached below. Among other specialities, we are interested in candidates in machine learning. I am particularly eager to see a new machine learning person come to Stanford. Please feel free to distribute this ad to people you think might be interested.

-Nils Nilsson

-----------

Faculty Opening

Stanford University's Department of Computer Science seeks applicants for a tenure-track faculty position at the Assistant Professor level. Specific areas of interest include natural language, human-computer interaction, and adaptive and learning systems. In addition, the department is interested in strengthening its faculty in foundations (algorithms and formal methods) and in software systems. Applicants should have a Ph.D. in a relevant field, and should have a strong interest in both teaching and research. The successful candidate will be expected to teach courses, both in the candidate's specialty area and in related subjects, and to build and lead a team of graduate students in Ph.D. research. Stanford University is an equal opportunity employer and welcomes nominations of women and minority group members and applications from them. Applications, including a resume, a publications list, and the names of five references, should be sent by March 1, 1995 to:

Search Committee Chair
Department of Computer Science
Margaret Jacks Hall, 210
Stanford University
Stanford, CA 94305-2140

From rsun at cs.ua.edu Wed Jan 4 11:58:24 1995
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 4 Jan 1995 10:58:24 -0600
Subject: TR available: schemas, logics, and neural assemblies
Message-ID: <9501041658.AA22393@athos.cs.ua.edu>

Paper available:
----------------------------------------------
* FTP-host: archive.cis.ohio-state.edu
FTP-filename: pub/neuroprose/sun.schema.ps.Z
(thanks to Jordan Pollack)
------------------------------------------------

Title: Schemas, Logics, and Neural Assemblies (length: 30 pages)

Author: Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487

To appear in Applied Intelligence, Special issue on connectionist models, Vol. 5, No. 2, Feb. 1995 (edited by Michael Dyer)

Abstract: To implement schemas and logics in connectionist models, some form of basic-level organization is needed. This paper proposes such an organization, termed a discrete neural assembly. Each discrete neural assembly is in turn made up of discrete neurons (nodes), that is, nodes that process inputs based on a discrete mapping instead of a continuous function. A group of closely interconnected discrete neurons (nodes) forms an assembly and carries out a basic functionality. Some substructures and superstructures of such assemblies are developed to enable complex symbolic schemas to be represented and processed in connectionist networks. The paper shows that logical inference can be performed precisely, when necessary, in these networks, and with certain generalization more flexible inference (fuzzy inference) can also be performed. The development of various connectionist constructs demonstrates the possibility of implementing symbolic schemas, in their full complexity, in connectionist networks.

* No hardcopy available.
* FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.schema.ps.Z
ftp> quit
unix> uncompress sun.schema.ps.Z
unix> lpr sun.schema.ps (or however you print postscript)
-----------------------------------------------------------------
Also, the following paper is now available in Neuroprose (file: sun.robust.ps.Z, 50 pages):

Robust Reasoning: Integrating Rule-Based and Similarity-Based Reasoning
by Ron Sun
to appear in: Artificial Intelligence (AIJ), Spring 1995
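One way to picture the "discrete neuron" of the abstract above -- my reading of the idea, not code from the paper -- is a node that computes by table lookup rather than by a continuous activation function:

# A node mapping discrete input tuples to outputs via a lookup table
# instead of a continuous activation function (illustrative reading
# of the abstract, not the paper's actual construction).
class DiscreteNeuron:
    def __init__(self, table, default=0):
        self.table = table          # mapping: input tuple -> output
        self.default = default

    def fire(self, *inputs):
        return self.table.get(tuple(inputs), self.default)

# An assembly of such nodes can realize a symbolic rule exactly,
# e.g. the conjunction A AND B -> C:
and_node = DiscreteNeuron({(1, 1): 1})
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} -> C={and_node.fire(a, b)}")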
From weinfeld at lix.polytechnique.fr Thu Jan 5 02:42:11 1995
From: weinfeld at lix.polytechnique.fr (Michel Weinfeld)
Date: Thu, 5 Jan 1995 09:42:11 +0200
Subject: Web page for ICANN'95 conference now available
Message-ID:

International Conference on Artificial Neural Networks (ICANN'95), Paris, 9-13 October 1995.

You can get more information about this conference by browsing the WWW server at:
http://lix.polytechnique.fr/~ICANN95
Or send inquiries to: ICANN95 at lix.polytechnique.fr

From crg at ida.his.se Thu Jan 5 09:43:00 1995
From: crg at ida.his.se (Connectionist)
Date: Thu, 5 Jan 95 15:43:00 +0100
Subject: SCC-95: Programme and Call For Participation
Message-ID: <9501051443.AA20119@mhost.ida.his.se>

The Second Swedish Conference on Connectionism
Thursday 2nd and Friday 3rd March 1995
Skövde, Sweden

------------------------------------
PROGRAMME AND CALL FOR PARTICIPATION
------------------------------------

The second Swedish Conference on Connectionism is organized by the Connectionist Research Group at the University of Skövde and held in Skövde, Sweden, March 2-3, 1995. A number of researchers will present their current work, and a number of internationally renowned keynote speakers will give plenary talks on different connectionist topics, such as neurobiological issues, cognitive science, connectionist modelling, applications with connectionist networks, and philosophy of a "connectionist" mind.

Invited Talks
-------------
The Neural Network House: An Overview. Michael C. Mozer, University of Colorado, USA
A connectionist exploration of the computational implications of embodiment. Ronan Reilly, University College Dublin, Ireland
Landmark Arrays and the Hippocampal Cognitive Map. David S. Touretzky, Carnegie Mellon University, USA
Searching weight space for backpropagation solution types. Noel Sharkey, University of Sheffield, UK
Physiological constraints on models of behavior. Michael Hasselmo, Harvard University, USA
Modeling, Connectionist and Otherwise. Tim van Gelder, Australian National University, Australia
Connectionist Synthetic Epistemology: Requirements for the Development of Objectivity. Ron Chrisley, University of Sussex, UK
Learning to Retrieve Information. Garrison W. Cottrell, University of California, San Diego, USA

Program Committee
-----------------
Jim Bower (California Institute of Technology, USA)
Harald Brandt (Ellemtel, Sweden)
Ronald Chrisley (University of Sussex, UK)
Gary Cottrell (University of California, San Diego, USA)
Georg Dorffner (University of Vienna, Austria)
Tim van Gelder (Australian National University, Australia)
Agneta Gulz (University of Skövde, Sweden)
Olle Gällmo (Uppsala University, Sweden)
Tommy Gärling (Göteborg University, Sweden)
Dan Hammerstrom (Adaptive Solutions Inc., USA)
Jim Hendler (University of Maryland, USA)
Erland Hjelmquist (Göteborg University, Sweden)
Anders Lansner (Royal Institute of Technology, Stockholm, Sweden)
Reiner Lenz (Linköping University, Sweden)
Ajit Narayanan (University of Exeter, UK)
Jordan Pollack (Brandeis University, USA)
Noel Sharkey (University of Sheffield, UK)
Bertil Svensson (Chalmers University of Technology, Sweden)
Tere Vadén (University of Tampere, Finland)

Conference Organizers
---------------------
Lars Niklasson (University of Skövde, Sweden)
Mikael Bodén (University of Skövde, Sweden)

Programme
---------

Thursday, March 2nd

09.00 Opening
      Lars Niklasson (SCC 1995 organizer)
09.20 Landmark Arrays and the Hippocampal Cognitive Map
      David S. Touretzky
10.00 Recurrent Attractor Neural Networks in Model of Cortical Associative Memory Function
      Erik Fransén and Anders Lansner
10.20 Coffee break
10.50 Physiological Constraints on Models of Behavior
      Michael Hasselmo
11.30-11.50 A Biophysically-Based Model of the Neostriatum as a Dynamically Reconfigurable Network
      J. Randall Gobbel
12.00 Dynamical Approximation by Neural Nets
      Max Garzon and Fernanda Botelho
12.20 On Parallel Selective Principal Component Analysis
      Mats Österberg and Reiner Lenz
12.40 LUNCH
14.00 Searching Weight Space for Backpropagation Solution Types
      Noel Sharkey, John Neary and Amanda Sharkey
14.40 Efficient Neural Net Isomorphism Testing
      Max Garzon and Arun Jagota
15.00 The TECO Theory - Simulation of Recognition Failure
      Sverker P. Sikström and Anders Lansner
15.20 Coffee break
15.50 Features of Distributed Representations for Tree-structures: A Study of RAAM
      Mikael Bodén and Lars Niklasson
16.10 Using the Conceptual Graph Model as Intermediate Representation for Knowledge Translation in Hybrid Systems
      Nicolae B. Szirbik, Gabriel L. Somlo and Diana L. Buliga
16.30-16.50 Adaptive Generalization in Dynamic Neural Networks
      Stuart A. Jackson and Noel E. Sharkey
17.00 Diversity, Neural Nets and Safety Critical Applications
      A.J.C. Sharkey, N.E. Sharkey and O.C. Gopinath
17.20 Some Experiments Using Extra Output Learning to Hint Multi Layer Perceptrons
      Olle Gällmo and Jakob Carlström
17.40 Minimization of Quantization Errors in Digital Implementations of Multi Layer Perceptrons
      Jakob Carlström
18.00-18.20 Multimodal Sensing for Motor Control
      Christian Balkenius

Friday, March 3rd

09.00 Modeling, Connectionist and Otherwise
      Tim van Gelder
09.40 Behaviorism and Reinforcement Learning
      Tomas Landelius and Hans Knutsson
10.00 Symbol Grounding and Transcendental Logic
      Erich Prem
10.20 Coffee break
10.50 A Connectionist Exploration of the Computational Implications of Embodiment
      Ronan Reilly
11.30-11.50 Are Representations Still Necessary for Understanding Cognition?
      Orlando Bisacchi Coelho
12.00 Indeterminacy and Experience
      Pauli Pylkkö
12.20 The Symbolic-Subsymbolic Relation: From Limitivism to Correspondence
      Tere Vadén
12.40 LUNCH
14.00 Connectionist Synthetic Epistemology: Requirements for the Development of Objectivity
      Ronald Chrisley and Andy Holland
14.40 Learning to Retrieve Information
      Brian Bartell, Garrison W. Cottrell and Rik Belew
15.20 Coffee break
15.50-16.10 Connectionist Models for the Detection of Oil Spills from Doppler Radar Imagery
      Tom Ziemke and Fredrik Athley
16.10 The Neural Network House: An Overview
      Michael C. Mozer, Robert H. Dodier, Marc Anderson, Lucky Vidmar, Robert F. Cruickshank III and Debra Miller
16.50 Closing
      The Connectionist Research Group, University of Skövde

General Information
-------------------

Secretariat: All inquiries concerning the Conference should be addressed to the Secretariat: SCC 1995, Attn: Marie Bodén, University of Skövde, P.O. Box 408, S-541 28 Skövde, SWEDEN. Phone +46 (0)500-46 46 00, Fax +46 (0)500-46 47 25, email: marie at ida.his.se

Conference venue: Billingehus Hotel AB, Alphyddevägen, 541 21 Skövde, SWEDEN. Phone +46 (0)500-48 30 00, Fax +46 (0)500-48 38 80

Registration form: Fees include admission to all conference sessions, the get-together party, coffee and lunch, and a copy of the Proceedings. Hotel reservation is made at the Billingehus Hotel and Conference Centre and can be made through the conference organization. The rooms are available from Wednesday evening (1st) to Friday noon (3rd). To register, complete and return the form (one form/person) below to the secretariat. Registration is valid when payment is received. Payment should be made to postal giro 78 81 40-2, payable to SCC 1995, Högskolan Skövde.
Name:
Company:
Address:
City/Country:
Phone:
Email:
Date of arrival:
Date of departure:

If the double room alternative has been chosen, please give the details for the second person.
Name:
Company:
Country:
Email:

Alternatives (please circle chosen fee):
Conference fee only: 1500 SEK (after 10/2: 2000 SEK)
Conference fee, FT student: 1000 SEK (after 10/2: 1500 SEK)
Full board and single room lodging: 800 SEK/night
Full board and double room lodging: 600 SEK/night

Invoice wanted: yes/no
Signature:

From terry at salk.edu Fri Jan 6 14:50:46 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 6 Jan 95 11:50:46 PST
Subject: Faculty Positions at Salk
Message-ID: <9501061950.AA08001@salk.edu>

The Salk Institute for Biological Studies has recently formed a Center for Theoretical Neurobiology, with funding provided by the Alfred P. Sloan Foundation. The long-range goal of this Center is to develop theoretical foundations for modern neurobiology. To meet this goal we are seeking applications for faculty positions. Candidates should possess formal training in theory and expertise in physical sciences, mathematics, engineering, or computation, and they should be interested in applying quantitative skills to a wide range of contemporary problems in neurobiology. It is expected that Center theoreticians will develop close ties with existing experimental neurobiology laboratories at the Salk Institute, and will take a prominent role in the training of graduate students and postdoctoral fellows in this area. Women and minority candidates are particularly encouraged to apply. The Salk Institute is an equal-opportunity employer.

Participating Salk faculty and interests include:
Thomas Albright - Neural bases of visual perception and visually-guided behavior
Francis Crick - Theoretical work on the brain
Martyn Goulding - Neural development
Stephen Heinemann - Molecular biology of synaptic transmission
Christopher Kintner - Molecular biology of neurogenesis in amphibian embryos
Greg Lemke - Developmental neurobiology
Dennis O'Leary - Development of the vertebrate nervous system
Terrence Sejnowski - Computational neurobiology
Charles Stevens - Mechanisms of synaptic transmission
John Thomas - Neuronal development in Drosophila

Applications should include c.v., a statement of research goals and interests, and copies of relevant publications. Applications and requests for information should be sent to:

Thomas Albright
Sloan Center for Theoretical Neurobiology
The Salk Institute for Biological Studies
PO Box 85800
San Diego, CA 92186-5800
e-mail: sloan at salk.edu
FAX: 619-546-8526

-----

From Paul.Vitanyi at cwi.nl Fri Jan 6 13:19:20 1995
From: Paul.Vitanyi at cwi.nl (Paul.Vitanyi@cwi.nl)
Date: Fri, 6 Jan 1995 19:19:20 +0100
Subject: EuroCOLT'95: Program & Registration Form
Message-ID: <9501061819.AA00795=paulv@gnoe.cwi.nl>

%% 2nd EUROPEAN CONFERENCE ON COMPUTATIONAL LEARNING THEORY
%%
%% MARCH 13-15, 1995, BARCELONA, SPAIN

\documentstyle[draft,proc]{article} %% comment in for two columns
\pagestyle{empty}
\vfuzz=4pt
\setlength{\topmargin}{0.25in} %% comment in for two columns
\setlength{\textheight}{7.48in}
\parindent=0pt
\newcommand{\when}[1]{\makebox[.75in][l]{\sf #1}}
%\newcommand{\stub}[1]{\typeout{*** Stub!
% ***} % $\langle${\bf Stub:} {\em #1}$\rangle$}
\newcommand{\topic}[1]{\smallskip{\bf #1}\enspace}
\newcommand{\sqr}[1]{{\vcenter{\hrule height.#1pt \hbox{\vrule width.#1pt height#1pt \kern#1pt \vrule width.#1pt} \hrule height.#1pt}}}
\newcommand{\thickbar}{\rule{3.1875in}{1pt}} %% comment in for two columns
\newcommand{\FILLHERE}{\_\hrulefill \ \\[5pt]}
\begin{document}
\vspace{.4in}
\begin{center}
{\Large 2nd European Conference on} \\ \vspace{.3in}
{\huge\bf Computational Learning Theory} \\ \vspace{.3in}
{\huge\tt EuroCOLT'95} \\ \vspace{.3in}
{\large Sponsored by} \\[1ex] \vspace{.2in}
{\Large EU-ESPRIT NeuroCOLT} \\ \vspace{.1in}
{\Large EATCS} \\ \vspace{.1in}
{\Large IFIP WG 14.2} \\ \vspace{.1in}
{\Large Universitat Polit\`ecnica de Catalunya} \\ \vspace{2in}
{\large March 13 -- 15, 1995} \\ \vspace{.25in}
{\large Universitat Polit\`ecnica de Catalunya} \\ \vspace{.25in}
{\large Barcelona, Spain}
\end{center}
\newpage
%%%
%%% The Technical Program
%%%
\parskip 1.4ex
\begin{center} {\large\bf PROGRAM} \end{center}
{\bf RECEPTION/REGISTRATION:} \\ Sunday, March 12, from 18:00 to 22:00 at the C\`atedra Gaud{\'\i}
\frenchspacing
{\bf SESSION 1:} Monday, March 13, Morning\\ Chair: Paul Vit\'anyi
\when{9:00--9:50} {\em The discovery of algorithmic probability: A guide for the programming of true creativity (Invited Lecture),} R.J. Solomonoff (Oxbridge Research, USA)
\when{9:50--10:15} {\em A decision-theoretic generalization of on-line learning and an application to boosting}, Y. Freund, R.E. Schapire (AT\&T Bell Labs)
\when{10:15--10:40} {\em Online learning versus offline learning}, S. Ben-David (Technion), E. Kushilevitz (Technion), Y. Mansour (Tel Aviv Univ.)
\when{10:40--11:15} Break
\bigskip
{\bf SESSION 2:} Monday, March 13, Morning\\ Chair: Nicola Cesa-Bianchi
\when{11:15--11:40} {\em Learning distributions by their density levels - a paradigm for learning without a teacher}, S. Ben-David, M. Lindenbaum (Technion)
\when{11:40--12:05} {\em Tight worst-case loss bounds for predicting with expert advice}, D. Haussler, J. Kivinen, M.K. Warmuth (UCSC)
\when{12:05--12:30} {\em On-line maximum likelihood prediction with respect to general loss functions}, K. Yamanishi (NEC Research, Princeton)
\bigskip{\bf LUNCH:} Starting at 13:00
\bigskip
{\bf SESSION 3:} Monday, March 13, Afternoon \\ Chair: Rusins Freivalds
{\sloppy \when{14:30--14:55} {\em Power of procrastination in inductive inference: How it depends on used ordinal notations}, A. Ambainis (Univ. Latvia) }
{\sloppy \when{14:55--15:20} {\em Learnability of Kolmogorov-easy circuit expressions via queries}, J.L. Balcazar (UPC, Barcelona), H. Buhrman (UPC Barcelona/CWI), M. Hermo (Univ. Pa{\'\i}s Vasco) }
\when{15:20--15:45} {\em Trading monotonicity demands versus mind changes}, S. Lange (HTWK Leipzig), T. Zeugmann (Kyushu Univ.)
\when{15:45--16:20} Break
\bigskip
{\bf SESSION 4:} Monday, March 13, Afternoon \\ Chair: Ricard Gavald\`a
\when{16:20--16:45} {\em Learning recursive functions from approximations}, J. Case (Univ. Delaware), S. Kaufmann (Univ. Karlsruhe), E. Kinber (Univ. Delaware), M. Kummer (Univ. Karlsruhe)
\when{16:45--17:10} {\em On the intrinsic complexity of learning}, R. Freivalds (Univ. Latvia), E. Kinber (Univ. Delaware), C.H. Smith (Univ. Maryland)
\when{17:10--17:35} {\em The structure of intrinsic complexity of learning}, S. Jain (Nat. Univ. Singapore), A. Sharma (Univ. New S-Wales, Australia)
\when{17:35--18:00} {\em Kolmogorov numberings and minimal identification}, R. Freivalds (Univ. Latvia), S. Jain (Nat. Univ. Singapore)
\bigskip{\bf RUMP SESSION:}\ From 18:00 to 19:00
\bigskip{\bf BUSINESS MEETING:}\ From 20:00 to 21:30
\bigskip
{\bf SESSION 5:} Tuesday, March 14, Morning\\ Chair: Ming Li
\when{9:00--9:50} {\em Stochastic complexity in learning (Invited Lecture),} J. Rissanen (IBM Almaden Research Center, USA)
\when{9:50--10:15} {\em Function learning from interpolation}, M. Anthony (LSE, London), P. Bartlett (ANU, Canberra, Australia)
\when{10:15--10:40} {\em Approximation and learning of convex superpositions}, L. Gurvits (Siemens Res, Princeton), P. Koiran (DIMACS, Rutgers Univ.)
\when{10:40--11:15} Break
\bigskip
{\bf SESSION 6:} Tuesday, March 14, Morning\\ Chair: Jorma Rissanen
{\sloppy \when{11:15--11:40} {\em Minimum description length estimators under the optimal coding scheme}, V.G. Vovk (Research Council Cybernetics, Moscow) }
\when{11:40--12:05} {\em MDL learning of unions of simple pattern languages from positive examples}, P. Kilpel\"ainen, H. Mannila, E. Ukkonen (Univ. Helsinki)
\when{12:05--12:30} {\em A note on the use of probabilities by mechanical learners}, E. Martin, D. Osherson (IDIAP, Switzerland)
\bigskip{\bf LUNCH:} Starting at 13:00
\bigskip
{\bf SESSION 7:} Tuesday, March 14, Afternoon \\ Chair: Hans-Ulrich Simon
\when{14:30--14:55} {\em Characterizing rational versus exponential learning curves}, D. Schuurmans (Univ. Toronto)
\when{14:55--15:20} {\em Is Pocket algorithm optimal?}, M. Muselli (CNR, Italy)
\when{15:20--15:45} {\em Some theorems concerning the free energy of (un)constrained stochastic Hopfield neural networks}, J. van den Berg, J.C. Bioch (Erasmus Univ.)
\when{15:45--16:20} Break
\bigskip
{\bf SESSION 8:} Tuesday, March 14, Afternoon \\ Chair: Wolfgang Maass
\when{16:20--16:45} {\em A space-bounded learning algorithm for axis-parallel rectangles}, F. Ameur (H. Nixdorf Inst/Univ. Paderborn)
\when{16:45--17:10} {\em Learning decision lists and trees with equivalence queries}, H.-U. Simon (Univ. Dortmund)
\bigskip{\bf SIGHTSEEING:}\ From 17:10 to 21:00
\bigskip{\bf BANQUET:}\ Starting at 21:00
\bigskip
{\bf SESSION 9:} Wednesday, March 15, Morning \\ Chair: Kenji Yamanishi
\when{9:00--9:50} {\em Polynomial bounds for VC dimension of sigmoidal neural nets (Invited Lecture)}, Angus Macintyre (Oxford University, UK)
\when{9:50--10:15} {\em Average case analysis of a learning algorithm for $\mu$-DNF expressions}, M. Golea (Univ. Ottawa)
\when{10:15--10:40} {\em Learning by extended statistical queries and its relation to PAC learning}, E. Shamir, C. Shwartzman (Hebrew Univ.)
\when{10:40--11:15} Break
\bigskip
{\bf SESSION 10:} Wednesday, March 15, Morning \\ Chair: Martin Anthony
\when{11:15--11:40} {\em Typed pattern languages and their learnability}, T. Koshiba (Fujitsu Labs, Kyoto)
\when{11:40--12:05} {\em Learning behaviors of automata from shortest counterexamples}, F. Bergadano, S. Varricchio (Univ. Catania)
\when{12:05--12:30} {\em Learning of regular expressions by pattern matching}, A. Brazma (Univ. Latvia)
\when{12:30--12:55} {\em The query complexity of learning some subclasses of context-free grammars}, C. Domingo, V. Lavin (UPC, Barcelona)
\bigskip{\bf LUNCH:} Starting at 13:00
\bigskip{\bf END OF CONFERENCE}
\newpage
\bigskip {\large\bf Conference Information}
\topic{Location:} Barcelona is a city of about 3 million people located on Spain's Mediterranean shore. Founded by the Romans, Barcelona has long been a center of culture and the arts.
Fine Romanesque art and architecture from the Middle Ages can be found in Barcelona and surrounding Catalonia. At the turn of the century, Barcelona was a great center of art nouveau. Among its many contributors, the names of Gaud{\'\i}, Picasso, Dal{\'\i}, Mir{\'o} or T{\`a}pies have gained universal respect, and their works can be admired in the streets and local museums. Today, Barcelona is a vibrant, pulsating city offering a varied cultural life, many shopping areas, and a great variety of restaurants. On the occasion of hosting the 1992 Olympic Games, the city underwent large urban changes, and the remodelled seafront areas are now major attractions.
\topic{Conference Site:} The conference will be held at the North Campus of the Universitat Polit\`ecnica de Catalunya (UPC). To reach it coming from downtown, take the subway line 3 (green), direction {\em Zona Universit\`aria,\/} to the second last stop {\em (Palau Reial),\/} then follow the signs; total travel time is about 30 minutes. Formal sessions will take place at the Aula Master of the North Campus. Rump sessions will be scheduled at the conference and may take place in a different room.
\topic{Invited Lectures:} There will be invited lectures by Ray Solomonoff (Oxbridge Research), Jorma Rissanen (IBM Almaden), and Angus Macintyre (Oxford Univ.)
\topic{Social Program:}
{\sl Sunday Night:} Reception and registration at the {\em C\`atedra Gaud{\'\i},\/} Avda. Pedralbes~7, 18:00--22:00. This is near the conference site. Coming from downtown, take the subway line 3 (green) to the {\em Maria Cristina} stop, then follow the signs.
{\sl Monday Night:} Business meeting at the conference site, 20:00--21:30.
{\sl Tuesday Night:} Banquet at {\em El Gran Caf\'e}, starting at 21:00. The {\em Caf\'e\/} is located in Aviny\'o~9, a few minutes' walk from the conference hotels.
\topic{Weather:} Weather in March is usually sunny, but be prepared for rain. Daytime temperature should be between $10^o$C and $22^o$C.
\topic{Getting there:} There are trains running every 30 minutes from the airport to Pla\c{c}a Catalunya, the central square of Barcelona close to the conference hotels. Travel time is about 25 minutes. There is also an Airport Bus linking the airport terminals to Pla\c{c}a Catalunya. A taxi from the airport to the hotels should cost 2500--3000 Pta under normal traffic conditions.
\medskip {\large\bf Accommodation}
Reservations have been made in the following three hotels:
{\sl Hotel Catalunya (**):} Santa Anna, 24. Phone +34-3-301-9120. Fax +34-3-302-7870.
{\sl Hotel Montecarlo (***):} La Rambla, 124. Phone +34-3-412-0404. Fax +34-3-318-7323.
{\sl Hotel Rivoli Ramblas (****):} La Rambla, 128. Phone +34-3-412-0988. Fax +34-3-318-9133.
All three are quite close to each other in Barcelona's Old Quarter, the liveliest part of the city. The following are the conference prices in Spanish Pesetas (Pta), including VAT. For the Catalunya and Rivoli, these prices also include breakfast.
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|c|} \hline
Price & Catalunya & Montecarlo & Rivoli \\ \hline
Single & 3250 & 6740 & n/a \\ \hline
Double & 4500 & 9630 & 13900 \\ \hline
Double, one occup. & 4050 & 7560 & 10700 \\ \hline
\end{tabular}
\end{center}
\noindent For reservations, use the procedure described under {\em Registration and hotel reservation}, or send a fax directly to the hotel.
The hotels are offering special conference prices (conditioned on a minimum occupancy), so make sure you mention EuroCOLT'95 if you contact them directly. Early reservation is recommended. The conference organization does not handle hotel payments. Please pay the hotels directly when departing. They will accept major credit cards.
\medskip {\large\bf Registration \& Hotel Reservation}
\smallskip In order of preference:
{\sl WWW:} Fill in the registration form at
\begin{center} {\tt http://goliat.upc.es/{\large\tt \~{}}{\kern-2pt}eurocolt/reg-form.html} \end{center}
{\sl E-mail:\/} Get the source of this brochure by anonymous ftp, as described below. Fill in the registration form and e-mail it to {\tt eurocolt at lsi.upc.es}
{\sl Or else:} Fill in the registration form below and send it by fax or air mail to the organizers.
\noindent Your registration will be confirmed upon receipt of your payment.
\medskip {\large\bf Payment}
\smallskip The conference fee includes proceedings, lunches for three days, and all social events.
\begin{center}
\begin{tabular}{lcc}\footnotesize
& Before & After \\
Price (in Pta) & Feb. 10 & Feb. 10 \\[1ex]
Normal Conference Fee & 30000 & 34000 \\
Student Fee & 15000 & 17000 \\
Extra Banquet Ticket & 3000 & 3500 \\
\end{tabular}
\end{center}
Extra proceedings will be available on site and cost about 7000 Pta each. Transfer the amount of your registration ({\em not\/} hotel) to:
\begin{tabular}{l}
Account Name: EuroCOLT'95 \\
Bank: Caixa d'Estalvis i Pensions de Barcelona \\
Account \#: 2100--0797--91--0200096977 \\
\end{tabular}
\topic{Combining the NeuroCOLT meeting with EuroCOLT'95:} The 1st yearly meeting of the EU ESPRIT NeuroCOLT Working Group is planned back-to-back with EuroCOLT'95 in Barcelona, March 9--11. Participants can arrange the same hotels and joint travel at their convenience.
\medskip {\large\bf For more information}
\smallskip {\sl WWW:} Connect to
\begin{center} {\tt http://goliat.upc.es/{\large\tt \~{}}{\kern-2pt}eurocolt/info.html} \end{center}
{\sl ftp:} login as anonymous to {\tt bloom.upc.es}, go to directory {\tt pub/eurocolt}
{\sl E-mail:} {\tt eurocolt at lsi.upc.es}
{\sl Or else:} contact the organizers at \\
\begin{center}
\begin{tabular}{l}
Ricard Gavald\`a -- EuroCOLT'95\\
Dept. of Software (LSI) \\
Universitat Polit\`ecnica de Catalunya \\
Pau Gargallo 5 \\
08028 Barcelona, Spain \\
Phone: +34-3-401-7008 \\
Fax: +34-3-401-7014\\
E-mail: {\tt gavalda at lsi.upc.es}
\end{tabular}
\end{center}
\medskip {\large\bf Acknowledgments}
\smallskip {\sloppy \topic{History and Sponsors:} The previous and inaugural European Conference on Computational Learning Theory was held 20--22 December 1993 at Royal Holloway, University of London. The EuroCOLT'95 conference is sponsored by the EATCS, by the European Union through NeuroCOLT ESPRIT Working Group Nr. 8556, by IFIP through SSGFCS WG 14.2, and by the Universitat Polit\`ecnica de Catalunya. }
\topic{Local Arrangements Chairs:} Ricard Gavald\`a (UPC, Barcelona), Felipe Cucker (Univ. Pompeu Fabra, Barcelona)
\topic{Program Committee:} M. Anthony (LSE, Univ. London, UK), E. Baum (NEC Research Inst., Princeton), N. Cesa-Bianchi (Univ. Milano, Italy), J. Koza (Stanford Univ., Palo Alto, USA), M. Li (Univ. Waterloo, Canada), S. Muggleton (Oxford University, UK), W. Maass (TU Graz, Austria), J. Rissanen (IBM Almaden, USA), H.-U. Simon (Univ. Dortmund, Germany), K. Yamanishi (NEC, Princeton, USA), L. Valiant (Harvard Univ., Cambridge, USA), P. Vitanyi (Chair, CWI/Univ. Amsterdam, Netherlands), R.
Freivalds (Univ. Riga, Latvia)
\topic{Steering Committee:} M. Anthony (LSE, Univ. London, UK), R. Gavald\`a (UPC, Barcelona), W. Maass (TU Graz, Austria), J. Shawe-Taylor (RHBNC, Univ. London, UK), H.-U. Simon (Univ. Dortmund, Germany), P. Vit\'anyi (CWI \& Univ.\ Amsterdam).
\newpage
%%
%% TO REGISTER VIA E-MAIL
%% 1. cut here
%% 2. fill in the boxes and replace all occurrences of macro \FILLHERE with your data
%% 3. e-mail to eurocolt at lsi.upc.es before Feb. 10
%%
%% Recall that registration via WWW is also possible
%%
\begin{center}\large\bf REGISTRATION FORM \end{center}
\tt Last name \FILLHERE
First name \FILLHERE
Affiliation \FILLHERE
Mailing address \FILLHERE \FILLHERE \FILLHERE
EMail address \FILLHERE
Vegetarian [ ] \\[5pt]
Registration fee \hspace{1.5cm} Pta\ \FILLHERE
Extra Banquet Ticket(s) \hspace{0.2cm} Pta\ \FILLHERE
Total \hspace{3.5cm} Pta\ \FILLHERE
Your registration will be confirmed upon receipt of payment.
\thickbar \\[5pt]
I want a [ ] Single room \ \ \ [ ] Double room \\[5pt]
[ ] Double room, one occupant
in Hotel [ ] Catalunya\ \ \ [ ] Montecarlo \\[5pt] [ ] Rivoli
arriving on March \FILLHERE
and leaving on March \FILLHERE
If sharing a double room, name of roommate (or 'anyone'): \\ \FILLHERE
\end{document}

From chaos at gojira.Berkeley.EDU Fri Jan 6 20:08:15 1995
From: chaos at gojira.Berkeley.EDU (Jim Crutchfield)
Date: Fri, 6 Jan 95 17:08:15 PST
Subject: Graduate Research Positions at the Santa Fe Institute
Message-ID: <9501070108.AA26465@gojira.Berkeley.EDU>

Our group at the Santa Fe Institute has been applying evolutionary computation techniques to design cellular automata and other decentralized multiprocessor systems to perform computations. Our group's work has two main thrusts: understanding how emergent computation can occur in spatially-extended decentralized systems, and understanding how an evolutionary process can produce complex, coherent behavior in such a system. Part of what we are doing in this context is formulating a mathematical theory of evolutionary search on landscapes, taking tools from statistical mechanics and stochastic process theory. Another novel aspect of our approach is the development and application of new methods to detect and analyze the computational structure in the evolved systems. We believe this work will eventually lead to (1) a better understanding of how evolution interacts with nonlinear decentralized systems in nature to produce adaptive coordinated behavior and (2) biologically-inspired methods for the automated design of parallel and distributed computing systems.
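For readers unfamiliar with the setup, a minimal sketch of the kind of system described above: a one-dimensional binary cellular automaton whose update rule is a lookup table over local neighborhoods. In work of this kind the table's bits are typically what the evolutionary search operates on; the random table and lattice size below are purely illustrative.

import numpy as np

rng = np.random.default_rng(2)

# One-dimensional, radius-3 binary CA: each cell's next state is read
# from a 128-entry rule table indexed by its 7-cell neighborhood.
# A random table stands in here; an evolved rule would replace it.
RADIUS = 3
rule_table = rng.integers(0, 2, 2 ** (2 * RADIUS + 1))

def ca_step(cells, table, r=RADIUS):
    """One synchronous update of all cells."""
    n = len(cells)
    nxt = np.empty_like(cells)
    for i in range(n):
        idx = 0
        for j in range(-r, r + 1):          # ring (periodic) boundary
            idx = (idx << 1) | int(cells[(i + j) % n])
        nxt[i] = table[idx]
    return nxt

cells = rng.integers(0, 2, 149)             # illustrative lattice size
for _ in range(5):
    cells = ca_step(cells, rule_table)
print("density after 5 steps:", cells.mean())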
We are searching for two graduate students interested in pursuing Ph.D.s on this project. This work is interdisciplinary: relevant fields include machine learning, theory of computation (especially in parallel decentralized systems), architectures for distributed parallel computation, nonlinear dynamics and statistical physics, evolutionary biology, and the mathematics of stochastic processes. We (the project leaders), Jim Crutchfield and Melanie Mitchell, are respectively a physicist and a computer scientist. We will consider students in any of the fields listed above, and will help formulate dissertation topics appropriate for the students' particular fields.

The Santa Fe Institute (SFI) is an interdisciplinary scientific research center in Santa Fe, New Mexico, whose research focuses on the sciences of "complexity." Research programs include adaptive computation, economics, theoretical biology, theoretical immunology, theoretical ecology, anthropology, neurobiology, and foundations of complex systems. There is a small semi-permanent faculty along with a larger external faculty, several postdocs, and many other prominent scientists from many universities around the world who spend extended periods at the Institute. Included in this group are many Nobel Laureates and MacArthur Fellows. Although SFI does not grant degrees, it has a number of resident graduate research assistants who are officially enrolled at degree-granting institutions but do their dissertation research at SFI under the guidance of an Institute faculty member.

We are looking for students who have successfully completed their graduate course work, are ready to engage in independent research, and are willing to spend two to three years at SFI working on a dissertation starting this coming summer or fall (1995). We will provide funding to cover housing and a living stipend. The student must have an official advisor at their home institution who is willing to have the student perform his or her work at SFI under our guidance.

Interested students should send a letter stating their interest in this position along with a resume including (1) a synopsis of coursework and grades, (2) a synopsis of computer programming experience and proficiencies in programming languages, (3) a synopsis of research experience (and publications, if any), and (4) any other information relevant to the student's application. These should be sent (preferably by email) to:

James P. Crutchfield
Physics Department
University of California
Berkeley, California 94720-7300, USA
Office: 510-642-1287
FAX: 510-643-8497
email: chaos at gojira.berkeley.edu

The student should also arrange for two letters of recommendation to be sent to this address (also preferably by email). For more information and for our publications on this project, see our group's Web page (http://www.santafe.edu/projects/evca). Also see the Computational Mechanics Web page (http://www.santafe.edu/projects/CompMech). For more information on the Santa Fe Institute, see the SFI's Web page (http://www.santafe.edu).

JAMES P. CRUTCHFIELD
Research Physicist (also Research Professor at the SFI)
Physics Department, 173 Birge Hall
University of California
Berkeley, California 94720-7300
chaos at gojira.berkeley.edu
Office: 510-642-1287, FAX: 510-643-8497
http://www.santafe.edu/~jpc

MELANIE MITCHELL
Research Professor and Director, Adaptive Computation Program
(also Research Assistant Professor at the University of New Mexico)
Santa Fe Institute
1399 Hyde Park Road
Santa Fe, New Mexico 87501
mm at santafe.edu
505-984-8800, FAX: 505-982-0565
http://www.santafe.edu/~mm

From vg197 at neutrino.pnl.gov Fri Jan 6 20:17:08 1995
From: vg197 at neutrino.pnl.gov (Sherif Hashem)
Date: Fri, 06 Jan 1995 17:17:08 -0800 (PST)
Subject: Workshop announcement
Message-ID: <9501070117.AA13861@neutrino.pnl.gov>

WORKSHOP ON ENVIRONMENTAL AND ENERGY APPLICATIONS OF NEURAL NETWORKS
Battelle Auditorium, Richland, Washington
March 30-31, 1995

The Environmental Molecular Sciences Laboratory (EMSL), Pacific Northwest Laboratory (PNL), and the Richland Section of the Institute of Electrical and Electronics Engineers (IEEE) are sponsoring a workshop to bring together scientists and engineers interested in investigating environmental and energy applications of artificial neural networks (ANNs).
Objectives:
-----------
The main objectives of this workshop are:
* to provide a forum for presenting and discussing environmental and energy applications of neural networks.
* to serve as a means for investigating the potential uses of neural networks in the U.S. Department of Energy's environmental cleanup efforts and energy programs.
* to promote collaboration between researchers in national laboratories, academia, and industry to solve real-world problems.

Topics:
-------
* Environmental applications (modeling and predicting land, air, and water pollution; environmental sensing; spectroscopy; hazardous waste handling and cleanup).
* Energy applications (environmental monitoring for power systems, modeling and control of power plants, power load forecasting, fault location and diagnosis of power systems).
* Commercial and industrial applications (environmental, economic, and financial time series analyses and forecasting; chemical process modeling and control).
* Medical applications (analysis of environmental health effects, modeling biological systems, medical image analysis, and medical diagnosis).

Who should attend?
------------------
This workshop should be of interest to researchers applying ANNs in energy and environmental sciences and engineering, as well as scientists and engineers who see some potential for the application of ANNs to their work.

Dates:
------
The workshop will be held on March 30-31, 1995, from 8:00 am to 5:00 pm. An introductory tutorial on neural networks will be offered on March 29, 1995, and is recommended for participants who are new to neural networks.
Deadline for contributed presentations: February 10, 1995.
Notification of acceptance will be mailed by February 24, 1995.

Cost:
-----
The registration fee is $120 ($75 for students). Early registration by March 1, 1995, is $100 ($50 for students).

For More Information, Contact:
------------------------------
Sherif Hashem
Environmental Molecular Sciences Laboratory
Pacific Northwest Laboratory
P.O. Box 999, M/S K1-87
Richland, WA 99352
Telephone: 509-375-6995
Fax: 509-375-6631
Internet: s_hashem at pnl.gov
World Wide Web URL: http://www.emsl.pnl.gov:2080/people/bionames/s_hashem.html
Also see the workshop's homepage on the World Wide Web at URL:
http://www.emsl.pnl.gov:2080/docs/cie/neural/workshop2/homepage.html

____________________________________________________________________________

REGISTRATION FORM

Name: ____________________________
Address: ____________________________
         ____________________________
         ____________________________
Telephone: ____________________________
Fax: ____________________________
E-mail: ____________________________

[ ] I am interested in attending the neural network tutorial (no additional fee is required).
[ ] I am interested in a bus tour of the Hanford Site (a Department of Energy site located north of Richland, Washington).

Registration Fee:
-----------------
Regular: $100 ($120 after March 1, 1995).
Student: $50 ($75 after March 1, 1995).
Please make your check payable to Battelle. Mail the completed form and check to:

Janice Gunter
WEEANN Registration
Pacific Northwest Laboratory
Box 999, M/S K1-87
Richland, WA 99352
____________________________________________________________________________

From ping at psy.cuhk.hk Fri Jan 6 23:04:19 1995
From: ping at psy.cuhk.hk (Ping Li)
Date: Sat, 7 Jan 1995 12:04:19 +0800 (HKT)
Subject: About sequential learning (or interference)
Message-ID:

Most of the previous efforts to reduce catastrophic interference seem to have focused on modifying the network architecture (though with some exceptions, e.g., Sharkey's and McRae's work). I wonder to what extent catastrophic interference may be reduced if one manipulates the training data in some way. For example, my early study (CRL TR 9203) found that if the network is presented with the full data, catastrophic interference occurs. Some of my preliminary results now suggest that if one uses an "incremental learning" schedule (input data enters into training piece by piece; there are many reasons, from a developmental psychologist's point of view, why such increments are necessary --- see Elman, 1993 Cognition; Plunkett & Marchman, 1993 Cognition), then catastrophic interference may be reduced. This also seems to go well with what Jay McClelland suggested earlier in his message:

> According to this view, cortical (and some other non-hippocampal)
> systems learn slowly, using what I call 'interleaved learning'.
> Weights are adjusted a small amount after each experience, so that
> the overall direction of weight change is governed by the structure
> present in the ensemble of events and experiences. New material can
> be added to such a memory without catastrophic interference if it
> is added slowly, interleaved with ongoing exposure to other events and
> experiences.

Happy New Year!

**********************************************************************
Ping LI                                 Email: pingli at cuhk.hk
Department of Psychology                Phone: (852)609-6576
The Chinese University of Hong Kong     Fax: (852)603-5019
**********************************************************************

From terry at salk.edu Sun Jan 8 00:46:22 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Sat, 7 Jan 95 21:46:22 PST
Subject: Faculty Positions at UCSD
Message-ID: <9501080546.AA21969@salk.edu>

Cognitive Neuroscientist: UNIVERSITY OF CALIFORNIA, SAN DIEGO. The Psychology Department at UCSD anticipates hiring an Assistant Professor (tenure track) in Cognitive Neuropsychology/Cognitive Neuroscience. Candidates must have a Ph.D. and be able to conduct independent, publishable research and teach undergraduate and graduate classes in their area of specialization. Salary commensurate with qualifications and based on U.C. salary scales. Candidates should send curriculum vita, reprints, and names of three referees to Cognitive Neuroscience Search Committee, Department of Psychology, 0109, University of California, San Diego, La Jolla, CA 92093-0109. Immigration status of non-citizens should be stated in the vita. Complete applications received by January 16, 1995 will receive full consideration. Position subject to funding availability.

Quantitative Methodologist: UNIVERSITY OF CALIFORNIA, SAN DIEGO. The Department of Psychology at UCSD anticipates hiring an Assistant Professor (tenure track) in Quantitative Methodology, with a research program in any substantive area of Psychology. Candidates must have a Ph.D. and be able to conduct independent, publishable research and teach undergraduate and graduate classes in their area of specialization. Salary commensurate with qualifications and based on U.C. salary scales.
Candidates should send curriculum vita, reprints, and names of three referees to Quantitative Methodology Search Committee, Department of Psychology, 0109, University of California, San Diego, La Jolla, CA 92093-0109. Immigration status of non-citizens should be stated in the vita. Complete applications received by January 16, 1995 will receive full consideration. Position subject to funding availability.

Biological Psychologist: UNIVERSITY OF CALIFORNIA, SAN DIEGO. The Department of Psychology at UCSD anticipates hiring an Assistant Professor (tenure track) in Biological Psychology. Candidates must have a Ph.D. and be able to conduct independent, publishable research and teach undergraduate and graduate classes in their area of specialization. Salary commensurate with qualifications and based on U.C. salary scales. Candidates should send curriculum vita, reprints, and names of three referees to Biological Psychology Search Committee, Department of Psychology, 0109, University of California, San Diego, La Jolla, CA 92093-0109. Immigration status of non-citizens should be stated in the vita. Complete applications received by January 16, 1995 will receive full consideration. Position subject to funding availability.

The University of California is an Affirmative Action/Equal Opportunity Employer.

-----

From dsilver at csd.uwo.ca Sat Jan 7 21:30:21 1995
From: dsilver at csd.uwo.ca (Danny L. Silver)
Date: Sat, 7 Jan 95 21:30:21 EST
Subject: About sequential learning (or interference)
In-Reply-To: <9501080218.AA00899@church.ai.csd.uwo.ca.csd.uwo.ca>; from "Danny L. Silver" at Jan 7, 95 9:18 pm
Message-ID: <9501080230.AA00923@church.ai.csd.uwo.ca.csd.uwo.ca>

For me, the significance of interference in neurally inspired learning systems is the message that an effective learner must not only be capable of learning a single task from a set of examples but must also be capable of effectively integrating variant task knowledge at a meta-level. This falls in line with McClelland's recent papers on the consolidation of hippocampal memories into cortical regions; his "interleaved learning". This is a delicate and complex process which undoubtedly occurs during sleep. In tune with Sebastian Thrun and Tom Mitchell's efforts on "Life Long Learning", I feel the next great step in learning theory will be the discovery of methods which allow our machine learning algorithms to take advantage of previously acquired task knowledge.

At UWO we have been investigating methods of storing neural net task knowledge in an interleaved fashion with other, previously learned tasks, so as to create an "experience database". This database can then be used to prime the initial weights of the neural net for a new task. Thus far, studies on simple boolean logic tasks have shown promise. Incremental learning is possible (with decreases in learning times of 1 or 2 orders of magnitude), but is dependent upon task order. Thus one of the key aspects of consolidation, so as to overcome interference, appears to be a reordering of learned tasks.

Have others (besides those authors I have mentioned) tried methods of task consolidation at a meta level? ... Danny

--
=========================================================================
= Daniel L. Silver     University of Western Ontario, London, Canada    =
= N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b                         =
= dsilver at csd.uwo.ca   H: (519)473-6168   O: (519)679-2111 (ext.6903)   =
=========================================================================

REF: McClelland, J., McNaughton, B. & O'Reilly, R.
"Why there are complemetary learning sysetms in the hipocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory". Technical Report PDP.CNS.94.1, Carnegie Mellon Univeristy and The University of Arizona, March, 1994. Thrun, S. & Mitchell T. "Lifelong Robot Learning". Techincal Report IAI-TR-93-7, Universitat Bonn, Institut fur Informatik II, Germany, July, 1993. Thrun, S. "A Lifelong Learning Perspective for Mobile Robot Control"; Proceedings of the IEEE Conference on Intelligent Robots and Systems, Munich, Germnay, Sept, 1994. Thrun, S. & Mitchell T. "Learning One More Thing". Techincal Report CMU-CS-94-184, Carnegie Mellon University, Pittsburg, PA, Sept, 1994.  From 72773.1646 at compuserve.com Sun Jan 8 22:11:25 1995 From: 72773.1646 at compuserve.com (SAM FARAG) Date: 08 Jan 95 22:11:25 EST Subject: NEURAL NETWORKS SPECIALIST Message-ID: <950109031125_72773.1646_EHL141-1@CompuServe.COM> The Switchgear and Motor Control Center Divsion, of Siemens Energy and Automation,Inc.in Raleigh, NC has an immediate opening for a neural networks specialist. Especially feed-forward networks and back propagation. The ideal candidate will have a master degree in electrical engineering or equivelant experience, 3 to 5 years experience industry experience,and must be a legal resident in the USA. Your responsibilities will include developing, implementing, and testing neural networks, statistical and/or machine based algorithms for electrical machine monitoring and diagnostics. Experience in embeded controllers, assembly, high level languages, digital signal processing and hardware design is highly desirable. Siemens AG is a worldwide supplier of electrical and electronic devices with sales in excess of 4 Billion$ in the US and 40 billion$ worldwide. If you are interested please send your resume via e mail ( text only) or US mail to: Sam Farag Siemens Energy & Automation 7000 Siemens Drive Wendell, NC 27626 email: 72773. 1646 at Compuserve.com  From jon at maths.flinders.edu.au Mon Jan 9 13:07:21 1995 From: jon at maths.flinders.edu.au (Jonathan Baxter) Date: Tue, 10 Jan 1995 04:37:21 +1030 Subject: Sequential learning. Message-ID: <199501091807.AA26151@calvin.maths.flinders.edu.au> Danny Silver writes: > >For me the significance of inteference in neurally inspired learning systems >is the message that an effective learner must not only be capable >of learning a single task from a set of examples but must also be >capable of effectively integrating variant task knowledge at a meta- >level. This falls in line with McClelland's recent papers on consolidation >of hippcocampal memories into cortical regions; his "interleaved learning". >This is a delicate and complex process which undoubtedly occurs during sleep. >In tune with Sebastian Thrun and Tom Mitchell's efforts on "Life Long >Learning" I feel the next great step in learning theory will be the discovery >of methods which allow our machine learning algorthms to take advantage of >previously acquired task knowledge. I could not agree more. And with all modesty, the 'next great step' has already begun with the work in my recently completed PhD thesis entitled 'learning internal representations'. 
The thesis can be retrieved via anonymous ftp from the neuroprose archive (Thesis subdirectory):

baxter.thesis.ps.Z (112 pages)

In the thesis I examine in detail one important method of enabling machine learning algorithms to take advantage of previously acquired task knowledge, namely by learning an internal representation. The idea behind learning an internal representation is to notice that for many common machine learning problems (such as character and speech recognition) there exists a transformation from the input space of the problem (the space of all images of characters or the space of speech signals) into some other space that makes the learning problem much easier. For example, in character recognition, if a map from the input space can be found that is insensitive to rotations, dilations, and even writer-dependent distortions of the characters, and such a map is used to 'preprocess' the input data, then the learning problem becomes quite trivial (the learner only needs to see one positive example of each character to be able to classify all future characters perfectly).

I argue in the thesis that the information required to learn such a representation cannot in general be contained in a single task: many learning tasks are required to learn a good representation. Thus, the idea is to sample from many similar learning tasks to first learn a representation for a particular learning domain, and then use that representation to learn future tasks from the same domain. Examples of similar tasks in the character recognition learning domain are classifiers for individual characters (which includes characters from other alphabets), and in the speech recognition domain individual word classifiers constitute the similar tasks.

It is proven in chapter three of the thesis that for suitable learning domains (of which speech and character recognition should be two examples), the number of examples of each task required for good generalisation decreases linearly with the number of tasks being learnt, and that once a representation has been learnt for the learning domain, far fewer examples of any novel task will be required for good generalisation. In fact, depending on the domain, there is no limit to the speedup in learning that can be achieved by first learning an internal representation.

There are two levels at which representation learning can be viewed as applying to human learning. At the bottom level we can assume that the tasks our evolutionary ancestors had to learn in order to survive have resulted in humans being born with built-in representations that are useful for learning the kinds of tasks necessary for survival. An example of this is the edge-detection processing that takes place early in the visual pathway; among other things, this should be useful for identifying the boundaries of surfaces in our environment and hence provides a big boost to the process of learning not to bump into those surfaces. At a higher level, it is clear that we build representations on top of these lower-level representations during our lifetimes. For example, I grew up surrounded by predominantly Caucasian faces and hence learnt a representation that allows me to learn individual Caucasian faces quickly (in fact with one example in most cases). However, when I was originally presented with images of faces from other ethnic groups I was less able to distinguish them, and I have had to re-learn my 'face recognition' representation to accommodate them.
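To make this concrete, here is a minimal numpy sketch of the shared-representation idea described above. It is an illustration only, not code from the thesis: the synthetic task domain, the dimensions, and all names below are invented, and a linear two-layer setup stands in for whatever architecture one actually uses. Several binary tasks whose labels depend on a common low-dimensional projection of the input are trained jointly through one shared matrix V; a novel task from the same domain is then learnt by fitting only a new output head on the frozen representation.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic domain: every task's labels depend on the same low-dimensional
# "true" representation of the input (a fixed random projection); each task
# applies its own random linear decision rule on top of that representation.
D_IN, D_REP, N_TASKS, N_EX = 10, 2, 8, 200
true_proj = rng.normal(size=(D_IN, D_REP))

def make_task():
    w = rng.normal(size=D_REP)
    X = rng.normal(size=(N_EX, D_IN))
    y = ((X @ true_proj) @ w > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Jointly train one shared representation matrix V and a separate output
# head per task, by plain gradient descent on the summed cross-entropy.
tasks = [make_task() for _ in range(N_TASKS)]
V = rng.normal(scale=0.1, size=(D_IN, D_REP))
heads = [rng.normal(scale=0.1, size=D_REP) for _ in range(N_TASKS)]
lr = 0.5
for _ in range(500):
    grad_V = np.zeros_like(V)
    for t, (X, y) in enumerate(tasks):
        h = X @ V                            # shared representation
        err = sigmoid(h @ heads[t]) - y      # dLoss/dlogit, per example
        grad_V += X.T @ np.outer(err, heads[t]) / N_EX
        heads[t] -= lr * h.T @ err / N_EX
    V -= lr * grad_V / N_TASKS

# Novel task from the same domain: freeze V, fit only a new head on a
# handful of examples, and test on the rest.
X_new, y_new = make_task()
k = 20
H = X_new @ V
w_new = np.zeros(D_REP)
for _ in range(1000):
    err = sigmoid(H[:k] @ w_new) - y_new[:k]
    w_new -= 1.0 * H[:k].T @ err / k
acc = np.mean((sigmoid(H[k:] @ w_new) > 0.5) == y_new[k:])
print(f"novel-task test accuracy from {k} examples: {acc:.2f}")

With V frozen, the novel task reduces to a two-parameter logistic regression, which is why a handful of examples suffices; this mirrors the claim that once a representation has been learnt, far fewer examples of any novel task are needed.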
In chapter four of my thesis I show how gradient descent may be used to learn internal representations, and present several experiments supporting the theoretical conclusions that learning more tasks from a domain reduces the number of examples required per task, and that once an effective representation is learnt, the number of examples required for future tasks is greatly reduced.

It also turns out that the ideas involved in representation learning can be used to solve an old problem in vector quantization: namely, how to choose an appropriate distortion measure for the quantization process. This is discussed in chapter five, in which the definition of the canonical distortion measure is introduced and is shown to be optimal in a very general sense. It is also shown how a distortion measure may be learnt using the representation learning techniques introduced in the previous chapters. In the final chapter the ideas of chapter five are applied back to the problem of representation learning to yield an improved error measure for the representation learning process, and some experiments are performed demonstrating the improvement.

Although learning an internal representation is only one way of enabling information from a body of tasks to be used when learning a new task, I believe it is the one employed extensively by our brains, and hence the work in this thesis should provide an appropriate theoretical framework in which to address problems of sequential learning in humans, as well as providing a practical framework and set of techniques for tackling artificial learning problems for which there exists a body of similar tasks. However, it is likely that other methods are at play in human sequential learning and may also be useful in artificial learning, so at the end of chapter three of my thesis I present a general theoretical framework for tackling any kind of learning problem for which prior information is available in the form of a body of similar learning tasks.

Jonathan Baxter
Department of Mathematics and Statistics,
The Flinders University of South Australia.
jon at maths.flinders.edu.au

From meeden at cs.swarthmore.edu Mon Jan 9 14:35:34 1995
From: meeden at cs.swarthmore.edu (Lisa Meeden)
Date: Mon, 9 Jan 1995 14:35:34 -0500 (EST)
Subject: task consolidation
Message-ID: <199501091935.OAA21885@cilantro.cs.swarthmore.edu>

Danny Silver asked whether others had tried methods of task consolidation at a meta level.

In my dissertation I used an Elman-style recurrent network trained with reinforcement learning to control a simple robot with a few goals. At the time of goal achievement, the hidden layer potentially reflects a consolidated history of the perceptual states encountered during the process of solving the task. I argued that these hidden layer activations could serve as a sort of plan for achieving the goal, and called them protoplans.

To investigate the efficacy of protoplans, a transfer of learning experiment was done (see the sketch below). The protoplans learned in one controller network were saved in an associative memory and used to guide a second controller network as it learned the same task from scratch. The associative memory mapped the precursor sensor states of a protoplan to the protoplan itself. Controllers trained with protoplans instead of goals as input converged more quickly on good solutions than the original controllers trained with goals.
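A toy sketch of the associative-memory half of such a setup, under invented assumptions: the controller networks and the reinforcement learning loop are omitted, and none of the names or dimensions below come from the dissertation. Protoplans recorded at goal achievement are keyed by their precursor sensor states and recalled by nearest neighbour to prime a second controller.

import numpy as np

class AssociativeMemory:
    """Maps precursor sensor states to stored protoplans by nearest neighbour."""
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, sensor_state, protoplan):
        self.keys.append(np.asarray(sensor_state, dtype=float))
        self.values.append(np.asarray(protoplan, dtype=float))

    def recall(self, sensor_state):
        dists = [np.linalg.norm(k - sensor_state) for k in self.keys]
        return self.values[int(np.argmin(dists))]

rng = np.random.default_rng(1)
memory = AssociativeMemory()

# While the first controller trains (its training loop is not shown here),
# snapshot its hidden layer each time a goal is achieved and file the
# snapshot under the sensor state observed just before the goal.
for _ in range(5):
    precursor_sensors = rng.normal(size=4)   # sensor reading before the goal
    hidden_snapshot = rng.normal(size=8)     # hidden activations = protoplan
    memory.store(precursor_sensors, hidden_snapshot)

# The second controller then receives a recalled protoplan as input in
# place of an explicit goal vector.
current_sensors = rng.normal(size=4)
protoplan = memory.recall(current_sensors)
print(protoplan.shape)   # (8,) -- would be fed to the new controller

Nearest-neighbour recall is just one plausible choice of associative memory here; the essential point is only that the recalled protoplan, rather than an explicit goal vector, is what the second controller receives as input.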
Protoplans were able to guide the robot's behavior by marking the important moments in the interaction with the environment when a switch in behavior should occur. This kind of timing information was indirect--no specific action was indicated--but knowing when to change from a particular strategy to a new one can be very important information.

For more details on these experiments see chapter 5 of my thesis, which is available at:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Thesis/meeden.thesis.ps.Z

--
Lisa Meeden
Computer Science Program
Swarthmore College
500 College Ave
Swarthmore, PA 19081
(610) 328-8565
meeden at cs.swarthmore.edu

From john at dcs.rhbnc.ac.uk Mon Jan 9 09:27:28 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Mon, 09 Jan 95 14:27:28 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199501091427.OAA14403@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): three new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-015:
----------------------------------------

Grammar Inference and the Minimum Description Length Principle
by Peter Gr\"{u}nwald, Centrum voor Wiskunde en Informatica, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands

Abstract: We describe a new abstract model for the computational learning of grammars. The model deals with a learning process in which an algorithm is given an input of a large set of training sentences that belong to some grammar $G$. The algorithm then tries to infer this grammar. Our model is based on the well-known {\em Minimum Description Length Principle}. It turns out that our model is, in a certain sense, a more general version of two seemingly different, well-known models, and two other existing models turn out to be very similar to ours. We have made an initial implementation of the algorithm implied by the model. We have tried this implementation on natural language texts, and we give a short description of the results of these tests. The results of testing the algorithm in practice are quite interesting, but unfortunately they are neither encouraging nor discouraging enough to indicate whether our method of grammar induction, which hardly makes any use of linguistic principles and makes no use at all of semantic information, is really worth pursuing further.

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-017:
----------------------------------------

Bounds for the Computational Power and Learning Complexity of Analog Neural Nets
by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria

Abstract: It is shown that high-order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated, for boolean inputs and outputs, by neural nets of a somewhat larger size and depth with Heaviside gates and weights from $\{-1,0,1\}$. This provides the first known upper bound for the computational power of the former type of neural nets. It is also shown that in the case of first-order nets with piecewise linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits, without changing the boolean function that is computed by the neural net.
In order to prove these results we introduce two new methods for reducing nonlinear problems about weights in multi-layer neural nets to linear problems for a transformed set of parameters. These transformed parameters can be interpreted as weights in a somewhat larger neural net. As another application of our new proof technique we show that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately learnable (in Valiant's model for PAC-learning).

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-021:
----------------------------------------

On the Computational Complexity of Networks of Spiking Neurons
by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria

Abstract: We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase-differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machine with real-valued inputs. We also show that relatively weak basic assumptions about the response- and threshold-functions of the spiking neurons are sufficient in order to employ them for such computations. Furthermore, we prove upper bounds for the computational power of networks of spiking neurons with arbitrary piecewise linear response- and threshold-functions, and show that with regard to real-time simulations they are computationally equivalent to a certain type of random access machine, and to recurrent analog neural nets with piecewise linear activation functions. In addition we give corresponding results for networks of spiking neurons with a limited timing precision, and we prove upper and lower bounds for the VC-dimension and pseudo-dimension of networks of spiking neurons.

-----------------------

The Report NC-TR-94-015 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-015.ps.Z
ftp> bye
% zcat nc-tr-94-015.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

Best wishes
John Shawe-Taylor

From omlinc at research.nj.nec.com Tue Jan 10 11:19:43 1995
From: omlinc at research.nj.nec.com (Christian Omlin)
Date: Tue, 10 Jan 95 11:19:43 EST
Subject: task consolidation
Message-ID: <9501101619.AA01373@arosa>

Lisa Meeden writes:

> To investigate the efficacy of protoplans, a transfer of learning
> experiment was done. The protoplans learned in one controller network
> were saved in an associative memory and used to guide a second
> controller network as it learned the same task from scratch. The
> associative memory mapped the precursor sensor states of a protoplan
> to the protoplan itself. Controllers trained with protoplans instead
> of goals as input converged more quickly on good solutions than the
> original controllers trained with goals.
> Protoplans were able to
> guide the robot's behavior by marking the important moments in the
> interaction with the environment when a switch in behavior should
> occur. This kind of timing information was indirect--no specific
> action was indicated--but knowing when to change from a particular
> strategy to a new one can be very important information.

This is similar to work done on training (recurrent) networks with prior knowledge. We have investigated algorithms for the extraction and insertion of symbolic knowledge in recurrent networks trained on temporal learning tasks. As a testbed, we learned regular grammars. We have shown how partial prior knowledge about a regular grammar can be encoded in a fully-recurrent neural network with second-order weights. The improvement in convergence time is `proportional' to the amount of prior knowledge. A description of the learned grammar can also be extracted from networks in the form of deterministic finite-state automata (DFAs). We have shown that the extracted DFAs outperform the trained networks, i.e. the DFA correctly classifies more strings than the trained network itself. The details can be found in the following book, which has recently been published:

@INCOLLECTION{omlin94b,
AUTHOR = "C.W. Omlin and C.L. Giles",
TITLE = "Extraction and insertion of symbolic information in recurrent neural networks",
EDITOR = "V. Honavar and L. Uhr",
BOOKTITLE = "Artificial Intelligence and Neural Networks: Steps toward Principled Integration",
YEAR = "1994",
PUBLISHER = "Academic Press",
ADDRESS = "San Diego, CA",
PAGES = "271-299"}

From MDUDZIAK at Gems.VCU.EDU Wed Jan 4 20:09:32 1995
From: MDUDZIAK at Gems.VCU.EDU (MARTIN DUDZIAK)
Date: Wed, 04 Jan 1995 21:09:32 -0400 (EDT)
Subject: Job opportunities info for distribution on list
Message-ID: <01HLGNXZE8COA240GO@Gems.VCU.EDU>

==========================================================================

The following information is for general distribution within the academic and private research communities.

QDI, a comparatively small, new, and secure company in the adaptive systems field, is looking for a few top developers capable of working in an atmosphere that brings together basic research and applications and emphasizes freedom of thinking, expression, and intellectual creativity. The description that follows, for general consumption, is by necessity rather limited in terms of the details that are given about specific projects, but persons who are seriously looking for a long-term, committed situation with opportunities in several application domains relating to pattern classification and recognition, data compression, approximation, and prediction can make direct contact for further information.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Unique opportunities in application development and basic research in the fields of neurocomputing and adaptive, intelligent systems. Positions with a well-funded company in a new mid-Wisconsin R&D center, closely linked with leading international academic and corporate groups.

Seeking: enterprising, creative, bold thinkers and doers with experience in object-oriented software design (C, C++, Smalltalk) and background in parallel processing, neural nets, genetic algorithms, fuzzy logic, hardware design, or pure and applied mathematics/physics.
Prior experience in both research and application-building is a plus, as are demonstrated skills in presenting, teaching, writing, and other communication. Technical and academic backgrounds from diverse scientific fields will be considered; advanced degrees are a plus but not a prerequisite. Salary and benefits will match experience, initiative, inventiveness, and potential. Very flexible and creative corporate structure and management organization. Superior scientific/computing resources and working environment, plus unusually strong personal and educational opportunities. Strong self-motivation, self-criticism, team spirit, synergetic thinking, and open-mindedness are essential.

There are multiple positions that will be filled within the first half of this year. Curriculum vitae and a letter of introduction should be sent by fax to Mr. B. Bice, Director of Operations, at (414) 731-0722 or by email to Dr. M. Dudziak at mdudziak at gems.vcu.edu.

==========================================================================

From perso at DI.UniPi.IT Tue Jan 10 11:08:04 1995
From: perso at DI.UniPi.IT (perso@DI.UniPi.IT)
Date: Tue, 10 Jan 1995 17:08:04 +0100 (MET)
Subject: TR available
Message-ID: <9501101608.AA03361@neuron>

Technical Report available. Comments are welcome!

******************************************************
FTP-host: ftp.di.unipi.it
FTP-filename: pub/Papers/perso/SPERDUTI/lraam-3.ps.Z
******************************************************

@TECHREPORT{lraam-3,
AUTHOR = {A. Sperduti and A. Starita},
TITLE = {Dynamical Neural Networks Construction for Processing of Labeled Structures},
INSTITUTION = {Dipartimento di Informatica, Universit\`{a} di Pisa},
YEAR = {1995},
NUMBER = {TR-1/95}
}

Abstract: We show how the Labeling RAAM (LRAAM) can be exploited to generate `on the fly' neural networks for associative access of labeled structures. The topology of these networks, which we call Generalized Hopfield Networks (GHN), depends on the topology of the {\it query} used to retrieve information, and the weights on the networks' connections are the weights of the LRAAM encoding the structures. A method for incrementally discovering multiple solutions to a given query is presented. This method is based on {\it terminal repellers}, which are used to `delete' known solutions from the set of admissible solutions to a query. Terminal repellers are also used to implement exceptions at the query level, i.e., when a solution to a query must satisfy some negative constraints on the labels and/or substructures. Moreover, the proposed model solves the connectionist variable-binding problem at the query level in a very natural way. Some results for a tree-like query are presented. Finally, we define a parallel mode of execution for the GHN, exploiting terminal repellers, and we propose to use terminal attractors for implementing shared variables and graph queries.

* No hardcopy available.
* FTP procedure:

unix> ftp ftp.di.unipi.it
Name: anonymous
Password:
ftp> cd pub/Papers/perso/SPERDUTI
ftp> binary
ftp> get lraam-3.ps.Z
ftp> bye
unix> uncompress lraam-3.ps.Z
unix> lpr lraam-3.ps (or however you print postscript)

_________________________________________________________________
Alessandro Sperduti
Dipartimento di Informatica,
Corso Italia 40,
56125 Pisa, ITALY
Phone: +39-50-887248
Fax: +39-50-887226
E-mail: perso at di.unipi.it
_________________________________________________________________

From peterk at nsi.edu Mon Jan 9 22:39:21 1995
From: peterk at nsi.edu (Peter Konig)
Date: Mon, 9 Jan 1995 19:39:21 -0800
Subject: Position available: Cortical Neurophysiology
Message-ID:

Junior Fellow Position in Cortical Neurophysiology available.

Applications are invited for the postdoctoral position of Junior Fellow in Experimental Neurobiology at the Neurosciences Institute, La Jolla, to study mechanisms underlying visual perception and sensorimotor integration in the cat. Applicants should have a background in neurophysiological techniques and data analysis. Fellows will receive stipends appropriate to their qualifications and experience. Submit a curriculum vitae, statement of research interests, and names of three references to:

Dr. Peter Konig
The Neurosciences Institute
3377 North Torrey Pines Court
La Jolla, CA 92037
FAX: 619-554-9159

-----------------------------------------------------------------
Peter Konig
The Neurosciences Institute
3377 North Torrey Pines Court
La Jolla, CA 92037, USA
Office 619 554 3200
Fax 619 554 9159
Home 619 450 0225

From C.Campbell at bristol.ac.uk Wed Jan 11 11:12:43 1995
From: C.Campbell at bristol.ac.uk (I C G Campbell)
Date: Wed, 11 Jan 1995 16:12:43 +0000 (GMT)
Subject: Fifth Irish Neural Networks Conference
Message-ID: <9501111612.AA06233@zeus.bris.ac.uk>

FIFTH IRISH NEURAL NETWORKS CONFERENCE
St. Patrick's College, Maynooth, Ireland
September 11-13, 1995

FIRST CALL FOR PAPERS

Papers are solicited for the Fifth Irish Neural Networks Conference. They can be in any area of theoretical or applied neural computing, including, for example:

Learning algorithms
Cognitive modelling
Neurobiology
Natural language processing
Vision
Signal processing
Time series analysis
Hardware implementations

Selected papers from the conference proceedings will be published in the journal Neural Computing and Applications (Springer International). The conference is the fifth in a series previously held at Queen's University, Belfast and University College, Dublin.

An extended abstract of not more than 500 words should be sent to:

Dr. John Keating,
Re: Neural Networks Conference,
Dept. of Computer Science
St. Patrick's College, Maynooth,
Co. Kildare, IRELAND
e-mail: JNKEATING at VAX1.MAY.IE

NOTE: If submitting by postal mail, please make sure to include your e-mail address. The deadline for receipt of abstracts is 1st May 1995. Authors will be contacted regarding acceptance by 1st June, 1995. Full papers will be required by 31st August 1995.
==================================================================
FIFTH IRISH NEURAL NETWORKS CONFERENCE
REGISTRATION FORM

Name: ____________________________________________________
Address: ____________________________________________________
____________________________________________________
____________________________________________________
____________________________________________________
e-mail: ______________________ fax: ______________________

REGISTRATION FEE
Before August 1, 1995: IR£50
After August 1, 1995: IR£60
Fee enclosed: IR£________

The registration fee covers the cost of the conference proceedings and the session coffee breaks.

METHOD OF PAYMENT
Payment should be in Irish Pounds in the form of a cheque or banker's draft made payable to INNC'95.

===================================================================
FIFTH IRISH NEURAL NETWORKS CONFERENCE
ACCOMMODATION FORM

Accommodation and meals are available on campus. The rooms are organised into apartments of 6 bedrooms. Each apartment has a bathroom, shower, and a fully equipped dining room/kitchen. The room rate is IR£12 per night (excluding breakfast; breakfast is IR£3 for continental and IR£4 for full Irish).

Name: ____________________________________________________
Address: ____________________________________________________
____________________________________________________
____________________________________________________
____________________________________________________
e-mail: ______________________ fax: ______________________
Arrival date: ______________________
Departure date: ______________________
No. of nights: ________

Please fill out a separate copy of the accommodation form for each individual requiring accommodation. If you have any queries, contact John Keating at JNKEATING at VAX1.MAY.IE

The second day of the conference (Tuesday 12th September) is a half-day and includes an excursion to Newgrange and Dublin during the afternoon. The cost of this excursion is IR£10.

I will be going on the excursion on Tues. afternoon: yes/no (please delete as appropriate).

==================================================================

Return fees with completed registration/accommodation forms to:

Dr John Keating,
Re: Neural Networks Conference,
Dept. of Computer Science,
St. Patrick's College, Maynooth,
Co. Kildare, IRELAND

Unfortunately, we cannot accept registration or accommodation bookings by e-mail.

===================================================================

Fifth Irish Neural Networks Conference - Paper format

The format for accepted submissions will be as follows:

LENGTH: 8 pages maximum.
PAPER SIZE: European A4
MARGINS: 2 cm all round
PAGE LAYOUT: Title, author(s), affiliation and e-mail address should be centred on the first page. No running heads or page numbers should be included.
TEXT: Should be 10pt and preferably Times Roman.

From maureen at cs.toronto.edu Wed Jan 11 13:02:40 1995
From: maureen at cs.toronto.edu (Maureen Smith)
Date: Wed, 11 Jan 1995 13:02:40 -0500
Subject: GLOVE-TALK II VIDEO
Message-ID: <95Jan11.130245edt.760@neuron.ai.toronto.edu>

*******************************************************************************
GLOVE-TALK II PROJECT
UNIVERSITY OF TORONTO
Geoffrey Hinton and Sidney Fels
----- VIDEO RELEASE -----
*******************************************************************************

THE GLOVE-TALK II VIDEO

A 31-minute video of the Glove-Talk II system developed by Sidney Fels and Geoffrey Hinton at the University of Toronto is now available.
Glove-Talk II is an artificial vocal tract that converts hand movements into speech in real time. The inputs come from two gloves, a Polhemus tracker, and a foot pedal. Neural networks are used to convert these inputs into formant descriptions that are sent to a speech synthesizer at 100 frames per second. The neural nets adapt to the particular way in which the user tries to produce target sounds during training sessions. The video shows the system in action for both rehearsed and unrehearsed speech and describes in detail the neural networks that are used.

To cover the costs of reproduction and distribution, those wishing to receive a copy of the video should send the following payment with their order:

Addresses in Canada: Personal check or money order payable to the University of Toronto for 10 Canadian dollars
Addresses in USA: Personal check or money order payable to the University of Toronto for 10 US dollars
Addresses anywhere else: Money order for 20 Canadian dollars payable to the University of Toronto (and specify PAL or NTSC)

Orders should be sent to:

Maureen Smith
Department of Computer Science,
6 King's College Road, Rm 271,
University of Toronto,
Toronto, Ontario M5S 1A4
fax: 416-978-1455
e-mail: maureen at cs.toronto.edu

From schmidhu at informatik.tu-muenchen.de Wed Jan 11 14:08:42 1995
From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber)
Date: Wed, 11 Jan 1995 20:08:42 +0100
Subject: continual learning etc.
Message-ID: <95Jan11.200846met.42325@papa.informatik.tu-muenchen.de>

Concerning the recent messages on "transfer learning", "incremental learning", etc.:

Mark Ring has been working on this subject for many years now, specifically on bottom-up, hierarchical behavior learning and skill transfer in reinforcement-learning agents (1991, 1993c), and with time-dependent context-sensitive neural networks (1993a, 1993b) that keep adding new units in order to learn longer and more complicated sequences. In his dissertation on "continual learning", he described a hierarchical mechanism for learning non-Markovian reinforcement tasks where hierarchy construction was done bottom-up as learning progressed. He tested it on "continual learning" tasks, where the behaviors his learning agent acquired for simple tasks were used for learning more difficult tasks with much less effort (skill transfer). Even after learning much more complicated tasks, the agent could still generally solve the simpler ones (avoiding catastrophic forgetting).

Juergen Schmidhuber
Fakultaet fuer Informatik
Technische Universitaet Muenchen
80290 Muenchen, Germany

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

References:

@InProceedings{Ring:1991,
author = "Ring, Mark B.",
title = "Incremental Development of Complex Behaviors through Automatic Construction of Sensory-motor Hierarchies",
booktitle = "Machine Learning: Proceedings of the Eighth International Workshop (ML91)",
year = 1991,
editor = "Birnbaum, Lawrence A. and Collins, Gregg C.",
pages = "343--347",
publisher = "Morgan Kaufmann Publishers",
month = "June",
}

@InProceedings{Ring:1993a,
author = "Ring, Mark B.",
title = "Learning Sequential Tasks by Incrementally Adding Higher Orders",
booktitle = "Advances in Neural Information Processing Systems 5",
year = 1993,
editor = "Giles, C. L. and Hanson, S. J. and Cowan, J. D.",
D.", pages = "115--122", publisher = "Morgan Kaufmann Publishers", address = "San Mateo, California", } @InProceedings{Ring:1993b, author = "Ring, Mark B.", title = "Two Methods for Hierarchy Learning in Reinforcement Environments", booktitle = "From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior", year = 1993, editor = "Meyer, J. A. and Roitblat, H. and Wilson, S.", pages = "148--155", publisher = "MIT Press", } @TechReport{Ring:1993c, author = "Ring, Mark B.", title = "Sequence Learning with Incremental Higher-Order Neural Networks", institution = "Artificial Intelligence Laboratory, University of Texas at Austin", year = 1993, number = "AI 93--193", month = "January", } @PhDThesis{Ring:1994, author = "Ring, Mark B.", title = "Continual Learning in Reinforcement Environments", school = "University of Texas at Austin", year = 1994, address = "Austin, Texas 78712", month = "August", }  From crr at cogsci.psych.utah.edu Wed Jan 11 13:48:46 1995 From: crr at cogsci.psych.utah.edu (crr@cogsci.psych.utah.edu) Date: Wed, 11 Jan 95 11:48:46 -0700 Subject: About sequential learning (or interference) In-Reply-To: Your message of Sat, 07 Jan 95 12:04:19 +0800. <9501070557.AA09357@cogsci.psych.utah.edu> Message-ID: <9501111848.AA07885@cogsci.psych.utah.edu> A paper I wrote ages ago speaks to this issue as well, in which we examined the spacing effect using NETtalk as a "verbal connectionist learner" and found (unlike the catastrophic interference that everyone's been talking about) that the effects of distributing practice (learning a little bit each time over many times distributed in time) is pretty similar in people and in nets: @inproceedings{Rosenberg86, author = {Charles R. Rosenberg and Terrence J. Sejnowski}, address = {Hillsdale, NJ}, booktitle = {Proceedings of the Eighth Annual Conference of the Cognitive Science Society}, month = {August}, note = {Amherst, MA}, pages = {72-89}, publisher = {Lawrence Erlbaum}, title = {The Spacing Effect on {NETtalk}, A Massively-Parallel Network}, year = {1986} } It seems to have been lost in the shuffle and I couldn't resist mentioning it any longer. Sorry, no url site yet. Charlie  From kenm at sunrae.sscl.uwo.ca Wed Jan 11 05:08:44 1995 From: kenm at sunrae.sscl.uwo.ca (kenm@sunrae.sscl.uwo.ca) Date: Wed, 11 Jan 1995 15:08:44 +0500 Subject: conference announcement Message-ID: <9501112008.AA27439@sunrae.sscl.uwo.ca> ****************************************************************************** L.O.V.E. 1995 24th Conference on Perception & Cognition February 9, 1:00 pm -> February 10, 5:00 pm The Skyline Brock Hotel Niagara Falls, Ontario, Canada ****************************************************************************** We have a great lineup of speakers this year. Thursday, February 9 Margaret M. Shiffrar Rutgers University The interpretation of object motion Mark S. Seidenberg University of Southern California title t.b.a Friday, February 10 Dana H. Ballard University of Rochester Computational hierarchies for natural behaviors Lee R. Brooks (& Glenn Regehr) McMaster University Perceptual resemblance and effort after commonality in category formation Daniel Kersten University of Minnesota Shedding light on the objects of perception Michael K. 
Tanenhaus, University of Rochester
Using eye movements to study spoken language comprehension in visual contexts

******************************************************************************

To reserve a hotel room, call Skyline Hotels at 1-800-263-7135 or 905-374-4444. **Please make sure to mention the LOVE conference in order to get special room rates. The L.O.V.E. room rates are great again this year:

$47 single or double
$57 triple
$67 quadruple

Registration is again wildly cheap this year and includes the "L.O.V.E. affair":

students and post docs: $15 Canadian or $12 US
faculty: $25 Canadian or $20 US

Be sure to make L.O.V.E. in '95!!!

If you wish to be added to the L.O.V.E. email list, please contact kenm at sunrae.sscl.uwo.ca

******************************************************************************

Abstracts appear below.

************************************************************************

The Interpretation of Object Motion
Margaret M. Shiffrar
Rutgers University

To interpret the projected image of a moving object, the visual system must integrate motion signals across different image regions. Traditionally, researchers have examined this process by focusing on the integration of equally ambiguous motion signals. However, moving objects often yield motion measurements having differing degrees of ambiguity. In a series of experiments, I examine how the visual system interprets the motion of simple objects. I argue that the visual system normally uses unambiguous motion signals to interpret object motion.

************************************************************************

Mark S. Seidenberg
University of Southern California
title t.b.a.

************************************************************************

Computational Hierarchies for Natural Behaviors
Dana Ballard
University of Rochester

We argue that a computational theory of the brain will have to address the issue of computational hierarchies, wherein the brain can be seen as using different instruction sets at different spatio-temporal scales. As examples, we describe two such abstraction levels. At the most abstract level, a language is needed to address the way the brain directs the physical resources of its body. An example of these kinds of instructions would be one used to direct saccadic eye-movements. Interpreting experimental data from this perspective implies that subjects use eye-movements in a special strategy to avoid loading short-term memory. This constraint has implications for the organization of high-level behavior. At a lower level of abstraction we consider a model of instructions which captures the details of directing the eye-movements themselves. This model makes extensive use of feedback. The implication is that brain circuitry may have to be used in very different ways than traditionally proposed.

************************************************************************

Perceptual Resemblance and Effort After Commonality in Category Formation
Lee Brooks, McMaster University
Glenn Regehr, The Toronto Hospital

The study of category formation and application has been strongly influenced by the information processing tradition. The influence of this tradition includes describing stimuli solely as informational contrasts (the ubiquitous tables of 1s and 0s), as well as the practice of producing new items by recombining identical elements.
Even when experiments use natural stimuli, designs and subsequent models are set up as if informational contrasts are the only important aspects of the stimuli. We will argue that at least two types of changes from this tradition are necessary to capture important types of natural behavior.

*Enhanced perceptual resemblance*: Habits of stimulus representation and experimental design derived from the information processing tradition have limited the effect of similarity between pairs of items and virtually eliminated an effect of overall similarity among several items. In particular, the informational interpretation of "family resemblance" does not produce categorization based on category-wide similarity, as is often alleged. A better treatment of similarity is important because similarity-based effects are obvious even when people have good theories about the stimuli, as in medical diagnosis.

*Multiple descriptions of the stimuli*: Informational descriptions of the stimuli effectively capture analytic behavior, but do not capture similarity-based behavior equally well. We will argue that stimuli have to be characterized differently to account for their effects on similarity-based processing than for their effects on analytic processing. Having these different descriptions for the same stimuli is important since both types of processes occur concurrently in many natural categorization situations.

************************************************************************

Shedding light on the objects of perception
Daniel Kersten
University of Minnesota

One of the great challenges of perception is to understand how we see the material, shape, and identity of an object given enormous variability in the images of that object. Viewpoint and illumination act together to produce the images that the eye receives. Variability over viewpoint has received recent attention in studies of object recognition and shape. Illumination effects of attached and cast shadows have received somewhat less attention, for the following reason. Casual inspection shows that one view of an object can appear rather different from another view of that same object. However, the image of an object under one illumination can appear quite similar to an image of the same object under different illumination, even when objectively the images are very different. This latter observation has contributed to the assumption that human perception discounts the effects of varying illumination, in particular those due to cast shadows. But do the effects of illumination get filtered out? I will use 3D computer graphics to show examples of how human vision uses illumination information to resolve perceptual ambiguities. In particular, I will show how cast shadows can determine the relative depth of objects, the orientation of surfaces, object rigidity, and the identity of contour types. These demonstrations are examples of the kind of perceptual puzzles which the visual brain solves continually in everyday vision. The solution of these perceptual puzzles is an example of generalized Bayesian inference--the logical and plausible reconciliation of image data with prior constraints. For an object recognition task, the visual system might be expected to filter out the effects of illumination (e.g. attached and cast shadows). Here vision can behave in a way inconsistent with a strong Bayesian view--there is a cost in response time and sensitivity when recognizing an object under left illumination that has been learned under right illumination.
These results are consistent with exemplar-based theories of recognition.

************************************************************************

Using eye-movements to study spoken language comprehension in visual contexts
Michael K. Tanenhaus
University of Rochester

We have been using a head-mounted eye-tracking system to monitor eye-movements while subjects follow spoken instructions to manipulate real objects. In this paradigm, eye-movements to the objects in the visual world are closely time-locked to referential expressions in the instructions, providing a natural on-line measure of spoken language comprehension in visual contexts. After discussing the rationale for this line of research in terms of current developments in language comprehension research, I'll present results from experiments conducted with Michael Spivey-Knowlton, Julie Sedivy and Kathy Eberhard. In the first experiment, eye-movements to a target object (e.g., "Pick up the candle") begin several hundred ms after the beginning of the word, suggesting that reference is established as the word is being processed. Eye-movements are delayed by about 100 ms when there is a "competitor" object with a name similar to the target's (e.g., a piece of candy). In the second experiment, the point in a phrase where reference is established is time-locked to when the referring expression becomes unambiguous with respect to the set of visual alternatives (e.g., "Touch the starred red square"; "Put the five of hearts that is below the eight of clubs above the three of diamonds"). The third experiment shows that visual contexts affect the interpretation of temporarily ambiguous instructions such as "Put the spoon in the bowl on the plate". Finally, we show that contrastive focus (e.g., "Touch the LARGE red square") directs attention to both the referent and the contrast member. Taken together, our results demonstrate the potential of the methodology, especially for exploring issues of interpretation and questions about spoken language comprehension. They also highlight the incremental and referential nature of comprehension. In addition, they provide a somewhat different perspective on results that have been central to discussions about the modularity of language processing.

************************************************************************

From eann95 at ra.abo.fi Wed Jan 11 03:52:53 1995
From: eann95 at ra.abo.fi (EANN-95 Konferensomrede VT)
Date: Wed, 11 Jan 1995 10:52:53 +0200
Subject: Final CFP: EANN 95
Message-ID: <199501110852.KAA15595@aton.abo.fi>

International Conference on Engineering Applications of Neural Networks (EANN '95)
Helsinki, Finland
August 21-23, 1995

Final Call for Papers

The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, food engineering and environmental engineering.

Abstracts of one page (200 to 400 words) should be sent to eann95 at aton.abo.fi by *31 January 1995*, by e-mail in PostScript format, or in TeX or LaTeX. Plain ASCII is also acceptable.
Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 31 January 1995. Notification of acceptance will be sent around 1 March. The number of full papers will be very limited. You will receive a submission number for each abstract you send. If you haven't received one, please ask for it.

Special tracks have been set up for applications in robotics (N. Sharkey, n.sharkey at dcs.shef.ac.uk), control applications (E. Tulunay, ersin_tulunay at metu.edu.tr), biotechnology/food engineering applications (P. Linko), and mineral and metal industry (J. van Deventer, metal at maties.sun.ac.za). You can submit abstracts to the special tracks straight to their coordinators or to eann95 at aton.abo.fi.

Local program committee: A. Bulsari, J. Heikkonen (Italy), E. Hyv\"onen, P. Linko, L. Nystr\"om, S. Palosaari, H. Sax\'en, M. Syrj\"anen, J. Sepp\"anen, A. Visa

International program committee: G. Dorffner (Austria), A. da Silva (Brazil), V. Sgurev (Bulgaria), M. Thompson (Canada), B.-Z. Chen (China), V. Kurkova (Czechia), S. Dutta (France), D. Pearson (France), G. Baier (Germany), C. M. Lee (Hong Kong), J. Fodor (Hungary), L. M. Patnaik (India), H. Siegelmann (Israel), R. Baratti (Italy), R. Serra (Italy), I. Kawakami (Japan), C. Kuroda (Japan), H. Zhang (Japan), J. K. Lee (Korea), J. Kok (Netherlands), J. Paredis (Netherlands), W. Duch (Poland), R. Tadeusiewicz (Poland), B. Ribeiro (Portugal), W. L. Dunin-Barkowski (Russia), V. Stefanuk (Russia), E. Pupyrev (Russia), S. Tan (Singapore), V. Kvasnicka (Slovakia), A. Dobnikar (Slovenia), J. van Deventer (South Africa), B. Martinez (Spain), H. Liljenstr\"om (Sweden), G. Sj\"odin (Sweden), J. Sj\"oberg (Sweden), E. Tulunay (Turkey), N. Sharkey (UK), D. Tsaptsinos (UK), N. Steele (UK), S. Shekhar (USA), J. Savkovic-Stevanovic

International Conference on Engineering Applications of Neural Networks (EANN '95)

Registration information

The registration fee is FIM 2000 until 15 March, after which it will be FIM 2400. A discount of up to 40% will be given to some participants from East Europe and developing countries. Those who wish to avail themselves of this discount need to apply for it. The application form can be sent by e-mail. The papers may not be included in the proceedings if the registration fee is not received before 15 April, or if the paper does not follow the specified format.

If your registration fee is received before 15 February, you are entitled to attend one tutorial for free. The fee for each tutorial will be FIM 200, to be paid in cash at the conference site. No decisions have yet been made about which tutorials will be presented, since tutorial proposals can be sent until 31 January.

The registration fee should be paid to ``EANN 95'', the bank account SYP (Union Bank of Finland) 220518-125251, Turku, Finland, through bank transfer, or you could send us a bank draft payable to ``EANN 95''. If it is difficult to get a bank draft in Finnish currency, you could send a bank cheque or a draft of GBP 280 (sterling pounds) until 15 March, or GBP 335 after 15 March. If you need to send it in some other way, please ask. The postal address for sending the bank drafts or bank cheques is EANN '95/SEA, Post box 34, 20111 Turku 11, Finland. The registration form can be sent by e-mail.
From goller at informatik.tu-muenchen.de Wed Jan 11 05:55:29 1995
From: goller at informatik.tu-muenchen.de (Christoph Goller)
Date: Wed, 11 Jan 1995 11:55:29 +0100
Subject: TR: Learning Distributed Representations for the Classification of Terms
Message-ID: <95Jan11.115534mesz.460947@sunjessen21.informatik.tu-muenchen.de>
Return-Receipt-To: goller at informatik.tu-muenchen.de
Organization: TU-Muenchen

Technical Report available: Comments are welcome !!

********************************************************************************
FTP-host: ftp.informatik.tu-muenchen.de
FTP-filename: /local/lehrstuhl/jessen/Group.Automated_Reasoning/Tech.Reports/AR-94-05.ps.gz
WWW: http://wwwjessen.informatik.tu-muenchen.de/forschung/reasoning/reports.html
********************************************************************************

@TECHREPORT{label,
AUTHOR = {C. Goller and A. Sperduti and A. Starita},
TITLE = {Learning Distributed Representations for the Classification of Terms},
INSTITUTION = {Institut f\"{u}r Informatik, Technische Universit\"{a}t M\"{u}nchen},
YEAR = {1994},
NUMBER = {AR-94-05} }

Abstract: This paper is a study of LRAAM-based (Labeling Recursive Auto-Associative Memory) classification of symbolic recursive structures encoding terms. The results reported here have been obtained by combining an LRAAM network with an analog perceptron. The approach used was to interleave the development of representations (unsupervised learning of the LRAAM) with the learning of the classification task. In this way, the representations are optimized with respect to the classification task. The intended applications of the approach described in this paper are hybrid (symbolic/connectionist) systems, where the connectionist part has to solve logic-oriented inductive learning tasks similar to the term-classification problems used in our experiments. These problems range from the detection of a specific subterm to the satisfaction of a specific unification pattern. We show that these problems can be solved very satisfactorily by our approach.

* No hardcopy available. *

FTP procedure:
unix> ftp ftp.informatik.tu-muenchen.de
Name: anonymous
Password:
ftp> cd /local/lehrstuhl/jessen/Group.Automated_Reasoning/Tech.Reports
ftp> binary
ftp> get AR-94-05.ps.gz
ftp> bye
unix> gunzip AR-94-05.ps.gz
unix> lpr AR-94-05.ps (or however you print postscript)

_______________________________________________________________________________
Christoph Goller
Lehrstuhl VIII, Institut fuer Informatik
Research Group "Automated Reasoning"
Technische Universitaet Muenchen
Arcisstr. 21, D-80290 Muenchen, Germany
Tel.: +49-89/521097 Fax.: +49-89/526502
email: goller at informatik.tu-muenchen.de
-------------------------------------------------------------------------------

From terry at salk.edu Wed Jan 11 18:14:13 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Wed, 11 Jan 95 15:14:13 PST
Subject: Neural Computation 7:1
Message-ID: <9501112314.AA16905@salk.edu>

NEURAL COMPUTATION Volume 7, January, 1995

View:
On Neural Circuits and Cognition - Michael S. Gazzaniga

Notes:
The EM Algorithm and Information Geometry in Neural Network Learning - Shun-ichi Amari
Convergence Theorems for Hybrid Learning Rules - Michael Benaim

Letters:
A Type of Duality between Self-organizing Maps and Minimal Wiring - Graeme Mitchison
Development of Oriented Ocular Dominance Bands as a Consequence of Areal Geometry - Hans-Ulrich Bauer
A Multiple Cause Mixture Model for Unsupervised Learning - Eric Saund
Similarity Metric Learning for a Variable-Kernel Classifier - David G. Lowe
Unsupervised Mutual Information Criterion for Elimination of Overtraining in Supervised Multilayer Networks - G. Deco, W. Finnoff and H. G. Zimmerman
Training with Noise is Equivalent to Tikhonov Regularization - Chris M. Bishop
Bayesian Regularization and Pruning using a Laplace Prior - Peter M. Williams
Empirical Risk Minimization Versus Maximum-Likelihood Estimation: A Case Study - Ronny Meir
Learning a Decision Boundary from Stochastic Examples: Incremental Algorithms with and without Queries - Yoshiyuki Kabashima and Shigeru Shinomoto
Arithmetic Perceptrons - Sergio A. Cannas
Compensatory Mechanisms in an Attractor Neural Network Model of Schizophrenia - D. Horn and E. Ruppin
Real-time Control of a Tokamak Plasma Using Neural Networks - Chris M. Bishop, Paul S. Haynes, Mike E. U. Smith, Tom N. Todd and David L. Trotman

-----

SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues)
______ $40 Student and Retired
______ $68 Individual
______ $180 Institution
Add $22 for postage and handling outside USA (+7% GST for Canada).

(Back issues from Volumes 1-6 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu

-----

From hinton at cs.toronto.edu Thu Jan 12 14:09:20 1995
From: hinton at cs.toronto.edu (Geoffrey Hinton)
Date: Thu, 12 Jan 1995 14:09:20 -0500
Subject: correction re: the glovetalk II video
Message-ID: <95Jan12.140920edt.830@neuron.ai.toronto.edu>

If you order the video, please include CANADA in the address.

From cnna at tce.ing.uniroma1.it Fri Jan 13 03:13:26 1995
From: cnna at tce.ing.uniroma1.it (cnna@tce.ing.uniroma1.it)
Date: Fri, 13 Jan 1995 09:13:26 +0100
Subject: Proceedings of CNNA-94 available
Message-ID: <9501130813.AA07746@tce.ing.uniroma1.it>

PROCEEDINGS OF CNNA-94 AVAILABLE

A limited number of copies of the Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94), held in Rome, Italy, Dec. 18-21, 1994, is available for purchase by attendees and authors. The price is Itl. 50,000. The book will be sent by air mail upon receipt of payment, or of a copy of your order of payment. Please add the following charges for postal expenses:

Italy: Itl. 6,200
Europe: Itl. 10,000
Mediterranean: Itl. 10,900
Americas and Asia: Itl. 18,100
Australia: Itl. 24,400

To order a copy, please send payment including mailing charges and bank taxes according to one of the following procedures:
1) Italian personal cheque (payable to Prof. V. Cimagalli, sent by "Assicurata")
2) Eurocheque, or international cheque drawn on an Italian bank, in Italian lire, payable to Prof. V. Cimagalli (please add Itl. 7,500 for bank expenses)
3) Bank transfer to Banca di Roma, ag. 158, via Eudossiana, 18 Roma, Italy I-00184; bank codes: ABI 3002-3, CAB 03380-3; account no.
503/34, payable to: "Sezione Italia Centro-Sud dell'IEEE - CNNA'94", stating clearly your name and reason for payment (please add Itl. 22,000 for bank expenses).

For further information, do not hesitate to contact us.

Sincerely,
M. Balsi
Organizing Committee, CNNA-94

CNNA-94
Dipartimento di Ingegneria Elettronica
via Eudossiana, 18
Rome, Italy I-00184
fax: +39-6-4742647
e-mail: cnna at tce.ing.uniroma1.it

TABLE OF CONTENTS

Inaugural lecture:
L.O. Chua: The CNN Universal Chip: Dawn of a New Computer Paradigm (1)

Invited review paper:
T. Roska: Analogic Algorithms Running on the CNN Universal Machine (3)

G. Yang, T. Yang, L.-B. Yang: On Unconditional Stability of the General Delayed Cellular Neural Networks (9)
S. Arik, V. Tavsanoglu: A Weaker Condition for the Stability of Nonsymmetric CNNs (15)
M. P. Joy, V. Tavsanoglu: Circulant Matrices and the Stability Theory of CNNs (21)
B.E. Shi, S. Wendsche, T. Roska, L.O. Chua: Random Variations in CNN Templates: Theoretical Models and Empirical Studies (27)

Invited lecture:
F. Werblin, A. Jacobs: Using CNN to Unravel Space-Time Processing in the Vertebrate Retina (33)

K. Lotz, Z. Vidnyánszky, T. Roska, J. Vandewalle, J. Hámori, A. Jacobs, F. Werblin: Some Cortical Spiking Neuron Models Using CNN (41)
T.W. Berger, B.J. Sheu, R. H.-J. Tsai: Analog VLSI Implementation of a Nonlinear Systems Model of the Hippocampal Brain Region (47)
A. Jacobs, T. Roska, F. Werblin: Techniques for Constructing Physiologically Motivated Neuromorphic Models in CNN (53)

Invited review paper:
A. Rodríguez-Vázquez, R. Domínguez-Castro, S. Espejo: Design of CNN Universal Chips: Trends and Obstacles (59)

J.M. Cruz, L.O. Chua, T. Roska: A Fast, Complex and Efficient Test Implementation of the CNN Universal Machine (61)
F. Sargeni, V. Bonaiuto: High Performance Digitally Programmable CNN Chip with Discrete Templates (67)
A. Paasio, A. Dawidziuk, K. Halonen, V. Porra: Digitally Controllable Weights in Current Mode Cellular Neural Networks (73)
D. Lím, G.S. Moschytz: A Programmable, Modular CNN Cell (79)
M.-D. Doan, R. Chakrabaty, M. Heidenreich, M. Glesner, S. Cheung: Realisation of a Digital Cellular Neural Network for Image Processing (85)
R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, R. Carmona: A CNN Universal Chip in CMOS Technology (91)
E. Pessa, M.P. Penna: Local and Global Connectivity in Neuronic Cellular Automata (97)
X.-Z. Huang, T. Yang, L.-B. Yang: On Stability of the Time-Variant Delayed Cellular Neural Networks (103)
J.J. Szczyrek, S. Jankowski: A Class of Asymmetrical Templates in Cellular Neural Networks (109)
P.P. Civalleri, M. Gilli: A Topological Description of the State Space of a Cellular Neural Network (115)
M. Tanaka, T. Watanabe: Cooperative and Competitive Cellular Neural Networks (121)

Invited review paper:
P. Thiran, M. Hasler: Information Processing Using Stable and Unstable Oscillations: A Tutorial (127)

Invited review paper:
J.A. Nossek: Design and Learning with Cellular Neural Networks (137)

I. Fajfar, F. Bratkovic: Statistical Design Using Variable Parameter Variances and Application to Cellular Neural Networks (147)
N.N. Aizenberg, I.N. Aizenberg: CNN-like Networks Based on Multi-Valued and Universal Binary Neurons: Learning and Application to Image Processing (153)
W. Utschick, J.A. Nossek: Computational Learning Theory Applied to Discrete-Time Cellular Neural Networks (159)
H. Magnussen, J.A. Nossek: Global Learning Algorithms for Discrete-Time Cellular Neural Networks (165)
H. Magnussen, G. Papoutsis, J.A. Nossek: Continuation-Based Learning Algorithm for Discrete-Time Cellular Neural Networks (171)
C. Güzelis, S. Karamahmut: Recurrent Perceptron Learning Algorithm for Completely Stable Cellular Neural Networks (177)
A.J. Schuler, M. Brabec, D. Schubel, J.A. Nossek: Hardware-Oriented Learning for Cellular Neural Networks (183)
F. Dellaert, J. Vandewalle: Automatic Design of Cellular Neural Networks by Means of Genetic Algorithms: Finding a Feature Detector (189)
H. Mizutani: A New Learning Method for Multilayered Cellular Neural Networks (195)
H. Harrer, P.L. Venetianer, J.A. Nossek, T. Roska, L.O. Chua: Some Examples of Preprocessing Analog Images with Discrete-Time Cellular Neural Networks (201)
N.N. Aizenberg, I.N. Aizenberg, T.P. Belikova: Extraction and Localization of Important Features on Grey-Scale Images: Implementation on the CNN (207)
K. Slot: Large-Neighborhood Templates Implementation in Discrete-Time CNN Universal Machine with a Nearest-Neighbor Connection Pattern (213)
J. Pineda de Gyvez: XCNN: A Software Package for Color Image Processing (219)
M. Balsi, N. Racina: Automatic Recognition of Train Tail Signs Using CNNs (225)
A.G. Radványi: Solution of Stereo Correspondence in Real Scene: an Analogic CNN Algorithm (231)
J.P. Miller, K.R. Crounse, T. Szirányi, L. Nemes, L.O. Chua, T. Roska: Deblurring of Images by Cellular Neural Networks with Applications to Microscopy (237)
A. Kellner, H. Magnussen, J.A. Nossek: Texture Classification, Texture Segmentation and Text Segmentation with Discrete-Time Cellular Neural Networks (243)
P.L. Venetianer, P. Szolgay, K.R. Crounse, T. Roska, L.O. Chua: Analog Combinatorics and Cellular Automata - Key Algorithms and Layout Design (249)
Á. Zarándy, T. Roska, Gy. Liszka, J. Hegyesi, L. Kék, Cs. Rekeczky: Design of Analogic CNN Algorithms for Mammogram Analysis (255)
P. Szolgay, Gy. Erőss, A. Katona, Á. Kiss: An Experimental System for Path Tracking of a Robot Using a 16*16 Connected Component Detector CNN Chip with Direct Optical Input (261)
T. Kozek, T. Roska: A Double Time-Scale CNN for Solving 2-D Navier-Stokes Equations (267)
Á. Zarándy, F. Werblin, T. Roska, L.O. Chua: Novel Types of Analogic CNN Algorithms for Recognizing Bank-Notes (273)
B.J. Sheu, Sa H. Bang, W.-C. Fang: Optimal Solutions of Selected Cellular Neural Network Applications by the Hardware Annealing Method (279)
B. Siemiatkowska: Cellular Neural Network for Mobile Robot Navigation (285)
A. Murgu: Distributed Neural Control for Markov Decision Processes in Hierarchic Communication Networks (291)
C.-M. Yang, T. Yang, K.-Y. Zhang: Chaos in Discrete Time Cellular Neural Networks (297)
R. Dogaru, A.T. Murgan, D. Ioan: Robust Oscillations and Bifurcations in Cellular Neural Networks (303)
H. Chen, M.-D. Dai, X.-Y. Wu: Bifurcation and Chaos in Discrete-Time Cellular Neural Networks (309)
M.J. Ogorzalek, A. Dabrowski, W. Dabrowski: Hyperchaos, Clustering and Cooperative Phenomena in CNN Arrays Composed of Chaotic Circuits (315)
P. Szolgay, G. Vörös: Transient Response Computation of a Mechanical Vibrating System Using Cellular Neural Networks (321)
P.P. Civalleri, M. Gilli: Propagation Phenomena in Cellular Neural Networks (327)
S. Jankowski, R. Wanczuk: CNN models of complex pattern formation in excitable media (333)
S. Jankowski, A. Londei, C. Mazur, A. Lozowski: Synchronization Phenomena in 2D Chaotic CNN (339)
Z. Galias, J.A. Nossek: Control of a Real Chaotic Cellular Neural Network (345)
A. Piovaccari, G. Setti: A Versatile CMOS Building Block for Fully Analogically-Programmable VLSI Cellular Neural Networks (347)
P. Thiran, G. Setti: An Approach to Local Diffusion and Global Propagation in 1-dim. Cellular Neural Networks (349)
J. Kowalski, K. Slot, T. Kacprzak: A CMOS Current-Mode VLSI Implementation of Cellular Neural Network for an Image Objects Area Estimation (351)
W.J. Jansen, R. van Drunen, L. Spaanenburg, J.A.G. Nijhuis: The AD2 Microcontroller Extension for Artificial Neural Networks (353)
C.-K. Pham, M. Tanaka: A Novel Chaos Generator Employing CMOS Inverter for Cellular Neural Networks (355)
R. Beccherelli, G. de Cesare, F. Palma: Towards a Hydrogenated Amorphous Silicon Phototransistor Cellular Neural Network (357)
A. Sani, S. Graffi, G. Masetti, G. Setti: Design of CMOS Cellular Neural Networks Operating at Several Supply Voltages (363)
M. Russell Grimaila, J. Pineda de Gyvez: A Macromodel Fault Generator for Cellular Neural Networks (369)
T. Roska, P. Szolgay, Á. Zarándy, P.L. Venetianer, A. Radványi, T. Szirányi: On a CNN Chip-Prototyping System (375)
P. Kinget, M. Steyaert: Evaluation of CNN Template Robustness Towards VLSI Implementation (381)
B.J. Sheu, Sa H. Bang, W.-C. Fang: Analog VLSI Design of Cellular Neural Networks with Annealing Ability (387)
L. Raffo, S.P. Sabatini, G.M. Bisio: A Reconfigurable Architecture Mapping Multilayer CNN Paradigms (393)
M. Balsi, V. Cimagalli, I. Ciancaglioni, F. Galluzzi: Optoelectronic Cellular Neural Network Based on Amorphous Silicon Thin Film Technology (399)
S. Espejo, R. Domínguez-Castro, A. Rodríguez-Vázquez, R. Carmona: Weight-Control Strategy for Programmable CNN Chips (405)
S. Espejo, A. Rodríguez-Vázquez, R. Domínguez-Castro, R. Carmona: Convergence and Stability of the FSR CNN Model (411)
R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, I. García-Vargas, J.F. Ramos, R. Carmona: SIRENA: A Simulation Environment for CNNs (417)
G. Adorni, V. D'Andrea, G. Destri: A Massively Parallel Approach to Cellular Neural Networks Image Processing (423)
M. Coli, P. Palazzari, R. Rughi: Use of the CNN Dynamic to Associate Two Points with Different Quantization Grains in the State Space (429)
M. Csapodi, L. Nemes, G. Tóth, T. Roska, A. Radványi: Some Novel Analogic CNN Algorithms for Object Rotation, 3D Interpolation-Approximation, and a "Door-in-a-Floor" Problem (435)
B.E. Shi: Order Statistic Filtering with Cellular Neural Networks (441)
L.-B. Yang, T. Yang, B.-S. Chen: Moving Point Target Detection Using Cellular Neural Networks (445)
X.-P. Yang, T. Yang, L.-B. Yang: Extracting Focused Object from Defocused Background Using Cellular Neural Networks (451)
P. Arena, S. Baglio, L. Fortuna, G. Manganaro: CNN Processing for NMR Spectra (457)
P. Arena, L. Fortuna, G. Manganaro, S. Spina: CNN Image Processing for the Automatic Classification of Oranges (463)
S. Schwarz: Detection of Defects on Photolithographic Masks by Cellular Neural Networks (469)
M. Ikegami, M. Tanaka: Moving Image Coding and Decoding by DTCNN with 3-D Templates (475)
M. Kanaya, M. Tanaka: Robot Multi-Driving Controls by Cellular Neural Networks (481)
R.-W. Liu, Y.-F. Huang, X.-T. Ling: A Novel Approach to the Convergence of Neural Networks for Signal Processing (487)

Index of Authors (489)

From thrun at uran.informatik.uni-bonn.de Fri Jan 13 15:33:11 1995
From: thrun at uran.informatik.uni-bonn.de (Sebastian Thrun)
Date: Fri, 13 Jan 1995 21:33:11 +0100
Subject: sequential learning - lifelong learning
Message-ID: <199501132033.VAA27677@carbon.informatik.uni-bonn.de>

The recent discussions on sequential learning brought up some very interesting points about learning, which I'd like to comment on.
Much of current machine learning and neural network learning research makes the assumption that the only available data is a set of input-output examples of the target function (or, in the case of unsupervised learning, a set of unlabeled points which characterize an unknown probability distribution). There is a huge variety of algorithms (Backprop, ID3, MARS, Cascade Correlation, to name a few famous ones) which all generalize from such data in somewhat different ways. Despite the exciting progress in understanding these approaches in more depth and coming up with better algorithms (like the work on complexity control, avoiding the over-fitting of noise, model selection, mixtures of experts, committees and related issues), I think there are intrinsic limitations to the view of the learning problem as an isolated function fitting problem, where all the available data consists of a set of examples of the target function.

If we consider human learning, there is usually much more data available for generalization than just a task-specific set of input-output data. As Jon Baxter's face recognition example convincingly illustrates, we often learn to recognize highly complex patterns or complex motor strategies from an impressively small number of training examples. Humans somehow successfully manage to transfer big chunks of knowledge across learning tasks. If we face a new learning task, much of the "training data" which we use for generalization actually stems from other tasks, which we might have faced in our previous lifetime. Consider for example Jon's task of recognizing faces. Once one has learned that the shape of the nose does matter, but facial expressions do not matter for the identification of a person, one can transfer this knowledge to new faces and generalize much more accurately from fewer training examples.

To apply these ideas in the context of artificial neural network learning, one might think of learning as a lifelong assignment, in which a learner faces a whole collection of learning tasks over its entire "lifetime." Hence, what has been observed and/or learned in the first n tasks can be reused in the (n+1)st task. There is a lot of potential leverage in such a scenario. For example, in a recent study, Tom Mitchell and I investigated the problem of learning to recognize simple objects from a very small number of camera images using Backpropagation. We found that after seeing as few as one example of each target object, the recognition rate increased from 50% (random) to 59.7%. However, by learning invariances up front based on images of *other* objects, and by transferring these learned invariances to the target recognition task, we achieved a recognition rate of 74.8%. After seeing another training example of each target object, the standard neural network approach led to 64.8% accuracy, which could be improved to 82.9% if knowledge about the invariances was transferred. These results match our experience in other domains (robot control, reinforcement learning, robot perception).

As the discussion on this mailing list illustrates, there are a number of people working on knowledge transfer and related issues; I have seen quite a few exciting approaches. For example, Lori Pratt, Steve Suddarth, Jon Baxter, Rich Caruana and many others have proposed approaches which develop more robust internal representations in Backprop networks based on learning multiple tasks (sequentially or in parallel).
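[To make the shared-representation idea concrete, here is a minimal sketch in Python/NumPy. A network is trained jointly on several related source tasks that share one hidden layer; the hidden layer is then frozen and only a new output head is trained on a handful of examples of a new task. All data, layer sizes and training details below are invented for illustration; this is not the code from any of the studies mentioned above.]

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented toy data: 8-dim inputs; each "task" is a different random
# linear-threshold function of the same underlying inputs.
X = rng.normal(size=(200, 8))
Y = (X @ rng.normal(size=(8, 3)) > 0).astype(float)   # three source tasks

W1 = rng.normal(scale=0.1, size=(8, 16))   # shared representation
W2 = rng.normal(scale=0.1, size=(16, 3))   # one output head per source task
lr = 0.5
for _ in range(2000):                      # joint training on the source tasks
    H = sigmoid(X @ W1)
    P = sigmoid(H @ W2)
    G = (P - Y) * P * (1 - P)              # squared-error output gradient
    W2 -= lr * H.T @ G / len(X)
    W1 -= lr * X.T @ ((G @ W2.T) * H * (1 - H)) / len(X)

# New task: only 10 labelled examples; the shared layer is frozen and
# a fresh output head is trained on its features.
y_new = (X @ rng.normal(size=(8, 1)) > 0).astype(float)
idx = rng.choice(len(X), size=10, replace=False)
H_small = sigmoid(X[idx] @ W1)
w_head = rng.normal(scale=0.1, size=(16, 1))
for _ in range(2000):
    P = sigmoid(H_small @ w_head)
    G = (P - y_new[idx]) * P * (1 - P)
    w_head -= lr * H_small.T @ G / len(idx)

acc = ((sigmoid(sigmoid(X @ W1) @ w_head) > 0.5) == (y_new > 0.5)).mean()
print("accuracy on the new task with transferred features:", acc)

[Whether the transferred features actually help depends, of course, on how related the tasks are; the sketch only shows the mechanism.]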
Others, like Satinder Singh, Steve Whitehead, and Anton Schwartz, have studied the issue of transfer in the context of reinforcement learning. Basically, they have proposed ways to transfer action policies (which are the result of reinforcement learning) across tasks. There is a whole variety of other approaches (like Chris Atkeson's variable distance metrics in memory-based learning) which could potentially be applied in a lifelong learning context. However, I feel that the area of knowledge transfer is still largely unexplored. To scale up learning algorithms, I believe it is really helpful not to restrict oneself to looking at a single training set in isolation but to consider all possible sources of knowledge about the target function.

Sebastian

From poggio at hip.atr.co.jp Sat Jan 14 12:41:47 1995
From: poggio at hip.atr.co.jp (Tomaso Poggio)
Date: Sat, 14 Jan 95 12:41:47 JST
Subject: sequential learning - lifelong learning
In-Reply-To: Sebastian Thrun's message of Fri, 13 Jan 1995 21:33:11 +0100 <199501132033.VAA27677@carbon.informatik.uni-bonn.de>
Message-ID: <9501140341.AA01767@haiku>

There is a large related recent literature, both in NN and in specific domains, for instance object recognition. Key words are "hints", "virtual examples", "speaker adaptation", "virtual views", "recognition invariants".

From tishby at CS.HUJI.AC.IL Sun Jan 15 08:10:25 1995
From: tishby at CS.HUJI.AC.IL (Tali Tishby)
Date: Sun, 15 Jan 1995 15:10:25 +0200
Subject: CORTICAL DYNAMICS IN JERUSALEM, June 11-15, 1995
Message-ID: <199501151310.AA13880@fugue.cs.huji.ac.il>

The Hebrew University of Jerusalem
The Center for Neural Computation

Announces

CORTICAL DYNAMICS IN JERUSALEM:
A Symposium on Experimental and Theoretical Issues in the Dynamics and Function of the Neocortex
June 11-15, 1995

Topics Include:
1. Biophysics of neurons and synaptic integration: the hardware of cortical dynamics.
2. Temporal structures and patterns of synchrony in cortical activity.
3. Feedforward and recurrent models of cortical information processing.
4. Computational paradigms of brain function.

Invited Speakers Include:
D.J. Amit (Hebrew University, Israel), Y. Amitai (Ben Gurion University, Israel), H. Barlow (University of Cambridge, U.K.), E. Bienenstock (Brown University, U.S.A.), J. Bullier (I.N.S.E.R.M., France), R. Eckhorn (Philipps University, Germany), G.L. Gerstein (University of Pennsylvania, U.S.A.), C. Gilbert (Rockefeller University, U.S.A.), A.M. Graybiel (M.I.T., U.S.A.), A. Grinvald (Weizmann Institute, Israel), D. Hansel (C.N.R.S., France), A. Kreiter (M.P.I., Frankfurt, Germany), H. Markram (M.P.I., Heidelberg, Germany), K.A. Martin (Oxford University, U.K.), D. Mumford (Harvard University, U.S.A.), B.J. Richmond (N.I.M.H., U.S.A.), A. Schuez (M.P.I., Tubingen, Germany), A.B. Schwartz (Barrow Neurological Inst., U.S.A.), I. Segev (Hebrew University, Israel), M. Shadlen (Stanford University, U.S.A.), A. Thomson (Royal Free Hospital, U.K.), N. Tishby (Hebrew University, Israel), S. Ullman (Weizmann Institute, Israel), E. Vaadia (Hebrew University, Israel), L. Valiant (Harvard University, U.S.A.)

Contributed poster presentations will be accepted.
Registration deadline: March 31st, 1995.

Information and Registration:
Alisa Shadmi, Hebrew University, Interdisciplinary Center for Neural Computation, P.O. Box 1255, Jerusalem, 91904, Israel.
Tel: 972-2-584899
Fax: 972-2-586152
e-mail: Alisa at vms.huji.ac.il

Organizing Committee:
Moshe Abeles, tel: 972-2-757384 or 972-2-758384, fax: 972-2-439736, e-mail: abeles at md2.huji.ac.il
Haim Sompolinsky, tel: 972-2-584563, fax: 972-2-584437, e-mail: haim at fiz.huji.ac.il

From joachim at fit.qut.edu.au Sun Jan 15 23:28:18 1995
From: joachim at fit.qut.edu.au (Prof Joachim Diederich)
Date: Mon, 16 Jan 1995 14:28:18 +1000
Subject: QUT NRC Technical Reports
Message-ID: <199501160428.OAA15919@aldebaran.fit.qut.edu.au>

Computers that learn vs. Users that learn: Experiments with adaptive e-mail agents

Joachim Diederich* Elizabeth M. Gurrie** Markus Wasserschaff***
Neurocomputing Research Centre* / School of Information Systems**
Queensland University of Technology, Brisbane Q 4001, Australia
German National Research Center for Computer Science (GMD)***, Institute for Applied Information Processing (FIT), P.O. Box 1316, D-5205 St. Augustin 1, Germany

QUTNRC-95-01-01.ps.Z

Abstract: The classification, selection and organization of electronic messages (e-mail) is a task that can be supported by a neural information processing system. The objective is to select those incoming messages for display that are most important for a particular user, and to propose actions in anticipation of the user's decisions. The artificial neural networks (ANNs) extract relevant information from incoming messages during a training period, learn the response to the incoming message, i.e., a sequence of user actions, and use the learned representation to propose user actions. We test the system by comparing simple recurrent networks (SRNs; Elman, 1990) and recurrent cascade correlation networks (RCC; Fahlman, 1991) using a sequence production task. The performance of both network architectures in terms of network size and learning speed for a given data set is examined. Our results show that (1) RCC generates smaller networks with better performance compared to SRNs, and (2) RCC learns significantly faster than SRNs.

Submitted for publication. This is an extended version of the IJCAI-93 paper.

***************************************************************************

A Survey And Critique of Techniques For Extracting Rules From Trained Artificial Neural Networks

Robert Andrews* ** Joachim Diederich* Alan B. Tickle* **
Neurocomputing Research Centre* / School of Information Systems**
Queensland University of Technology, Brisbane Q 4001, Australia

QUTNRC-95-01-02.ps.Z

Abstract: It is becoming increasingly apparent that without some form of explanation capability, the full potential of trained Artificial Neural Networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.

Keywords: rule extraction from neural networks, rule refinement using neural networks, knowledge insertion into neural networks, fuzzy neural networks, inferencing, rule generation

Accepted for: Knowledge-Based Systems. Special Issue on Knowledge-Based Neural Networks (Editor: Prof LiMin Fu).

These papers are available from ftp.fit.qut.edu.au, cd to /pub/NRC/tr/ps
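[For readers who have not met the architectures compared in the first report above, here is a minimal Elman-style simple recurrent network in Python/NumPy, trained with the one-step truncated gradient of Elman (1990), in which the context layer is treated as a fixed extra input at each step. The toy task and all sizes are invented; this is not the code, data or RCC implementation from the report.]

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sequence-production task: emit the next symbol of a repeating
# 4-symbol sequence, given the current symbol and the context units.
seq = np.eye(4)[[0, 1, 2, 3] * 50]               # one-hot coded symbols
n_in, n_hid, n_out = 4, 8, 4
Wx = rng.normal(scale=0.5, size=(n_in, n_hid))
Wh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context-to-hidden weights
Wo = rng.normal(scale=0.5, size=(n_hid, n_out))
lr = 0.1

for epoch in range(50):
    h_prev = np.zeros(n_hid)                     # context units start at zero
    err = 0.0
    for t in range(len(seq) - 1):
        x, y = seq[t], seq[t + 1]
        h = sigmoid(x @ Wx + h_prev @ Wh)        # hidden state from input + context
        p = sigmoid(h @ Wo)
        err += float(np.sum((p - y) ** 2))
        # One-step truncated gradient: the context is treated as a fixed
        # extra input, so no backpropagation through time is needed.
        g_out = (p - y) * p * (1 - p)
        g_hid = (g_out @ Wo.T) * h * (1 - h)
        Wo -= lr * np.outer(h, g_out)
        Wx -= lr * np.outer(x, g_hid)
        Wh -= lr * np.outer(h_prev, g_hid)
        h_prev = h                               # copy hidden layer to context
    if epoch % 10 == 0:
        print("epoch", epoch, "summed error", round(err, 3))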
From shawn_mikiten at biad23.uthscsa.edu Tue Jan 17 12:48:56 1995
From: shawn_mikiten at biad23.uthscsa.edu (shawn mikiten)
Date: 17 Jan 95 12:48:56 U
Subject: Announce-1995 Summer Underg
Message-ID: 

Announce-1995 Summer Undergraduate Research

The Graduate School of Biomedical Sciences at the University of Texas Health Science Center at San Antonio

Announcement: 1995 Summer Undergraduate Research Fellowship

The Summer Undergraduate Research Fellowship (SURF) will be awarded to outstanding undergraduates from across the nation. Free housing, a stipend, and travel reimbursement will be offered. Selected undergraduates will work with faculty, fellows, and students on a major research project, learn research techniques and scientific methods, and participate in seminars & workshops exploring current problems & methods in advanced research.

For more information on the internet, the URL is: http://grad_dean.uthscsa.edu/
To request an application, e-mail Kelly M. at surf at uthscsa.edu
Deadline for completed application: February 24, 1995

From mike at PARK.BU.EDU Tue Jan 17 22:52:37 1995
From: mike at PARK.BU.EDU (Michael Cohen)
Date: Tue, 17 Jan 1995 22:52:37 -0500
Subject: VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Message-ID: <199501180352.WAA11469@cns.bu.edu>

VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Friday, March 17, 1995
Boston University, George Sherman Union Conference Auditorium, Second Floor
775 Commonwealth Avenue, Boston, MA 02215

Co-Sponsored by the Department of Cognitive and Neural Systems, the Center for Adaptive Systems, and the Center for Philosophy and History of Science

Program:
--------
8:30am--9:30am: BELA JULESZ, Rutgers University, Why is the early visual system more interesting than the kidney?
9:30am--10:30am: KEN NAKAYAMA, Harvard University, Visual perception of surfaces
10:30am--11:00am: Coffee Break
11:00am--12:00pm: STEPHEN GROSSBERG, Boston University, Cortical dynamics of visual perception
12:00pm--1:00pm: PATRICK CAVANAGH, Harvard University, Attention-based visual processes
1:00pm--2:30pm: Lunch
2:30pm--3:30pm: V.S. RAMACHANDRAN, University of California, Neural plasticity in the adult human brain: New directions of research
3:30pm--4:30pm: EVAN THOMPSON, Boston University, Phenomenology and computational vision
4:30pm--5:30pm: DANIEL DENNETT, Tufts University, Filling-in revisited
5:30pm---: Discussion

Registration:
-------------
The conference is free and open to the public.

Parking:
--------
Parking is available at nearby campus lots: 808 Commonwealth Avenue ($6 per vehicle), 766 Commonwealth Avenue ($8 per vehicle), and 700 Commonwealth Avenue ($10 per vehicle). If these lots are full, please ask the lot attendant for an alternate location.

Contact:
--------
Professor Stephen Grossberg
Department of Cognitive and Neural Systems
111 Cummington Street, Boston, MA 02215
fax: (617) 353-7755
email: diana at cns.bu.edu

From minton at ptolemy-ethernet.arc.nasa.gov Wed Jan 18 14:38:12 1995
From: minton at ptolemy-ethernet.arc.nasa.gov (Steve Minton)
Date: Wed, 18 Jan 95 11:38:12 PST
Subject: Article on Error-Correcting Output Codes
Message-ID: <9501181938.AA05805@ptolemy.arc.nasa.gov>

Readers of this group may be interested in the following article, which was just published by JAIR:

Dietterich, T.G. and Bakiri, G. (1995) "Solving Multiclass Learning Problems via Error-Correcting Output Codes", Volume 2, pages 263-286.

PostScript: volume2/dietterich95a.ps (265K)

Abstract: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k>2 values (i.e., k ``classes''). The definition is acquired by studying collections of training examples of the form <x_i, f(x_i)>. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that---like the other methods---the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
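[The coding idea is compact enough to sketch. In the illustration below, an invented 5-class problem and a trivial nearest-centroid base learner stand in for the paper's C4.5 and backpropagation learners; only the encode/train/decode cycle follows the paper.]

import numpy as np

rng = np.random.default_rng(2)
k, n_bits = 5, 15                              # 5 classes, 15-bit codewords

# Draw random codewords, rerolling until every bit column splits the
# classes and all rows (codewords) are distinct.
while True:
    code = rng.integers(0, 2, size=(k, n_bits))
    if (code.min(axis=0) == 0).all() and (code.max(axis=0) == 1).all() \
            and len(np.unique(code, axis=0)) == k:
        break

# Invented data: one Gaussian blob per class along the diagonal.
X = np.concatenate([rng.normal(loc=3.0 * c, scale=1.0, size=(40, 2))
                    for c in range(k)])
y = np.repeat(np.arange(k), 40)

# One binary learner per bit; a nearest-centroid rule stands in for the
# paper's base learners.
centroids = []
for b in range(n_bits):
    bit = code[y, b]                           # relabel every example by bit b
    centroids.append((X[bit == 0].mean(axis=0), X[bit == 1].mean(axis=0)))

def predict(x):
    # Each bit learner votes 0/1; decode to the codeword nearest in
    # Hamming distance -- the error-correcting step.
    bits = np.array([int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))
                     for c0, c1 in centroids])
    return int(np.argmin((code != bits).sum(axis=1)))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print("training accuracy with ECOC decoding:", acc)

[With enough bits, the decoder can recover the right class even when several of the binary learners vote incorrectly, which is where the error correction comes from.]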
The PostScript file is available via:
-- comp.ai.jair.papers
-- World Wide Web: The URL for our World Wide Web server is http://www.cs.washington.edu/research/jair/home.html
-- Anonymous FTP from either of the two sites below:
   CMU: p.gp.cs.cmu.edu, directory: /usr/jair/pub/volume2
   Genoa: ftp.mrg.dist.unige.it, directory: pub/jair/pub/volume2
-- automated email. Send mail to jair at cs.cmu.edu or jair at ftp.mrg.dist.unige.it with the subject AUTORESPOND, and the body GET VOLUME2/DIETTERICH95A.PS (either upper or lowercase is fine). Note: Your mailer might find this file too large to handle. (The compressed version of this paper cannot be mailed.)
-- JAIR Gopher server: At p.gp.cs.cmu.edu, port 70.

For more information about JAIR, check out our WWW or FTP sites, or send electronic mail to jair at cs.cmu.edu with the subject AUTORESPOND and the message body HELP, or contact jair-ed at ptolemy.arc.nasa.gov.

From rsun at cs.ua.edu Thu Jan 19 11:31:29 1995
From: rsun at cs.ua.edu (Ron Sun)
Date: Thu, 19 Jan 1995 10:31:29 -0600
Subject: sequential learning - lifelong learning
Message-ID: <9501191631.AA13316@athos.cs.ua.edu>

To transfer knowledge from one environment to another, one viable way, I believe, is to extract generic rules (from a NN) that are more widely applicable. This may not solve the interference problem, but it surely handles the transfer problem to a certain extent. (It can also deal with the interference problem, if extracted rules are used to train the NN, interspersed with current data.)

Here is a TR on extracting rules from a Q-learning network. The resulting architecture consists of two levels, containing both rules and a Q-learning network, so that both rigorous, abstract knowledge (declarative knowledge) and flexible, embodied knowledge (procedural knowledge) are maintained. The learning and rule extraction are on-line, while performing the task, and can be continuously adaptive. Rule extraction is done on top of the connectionist network performing Q-learning, so the architecture is parsimonious in terms of learning mechanisms.
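[As a toy illustration of the general idea, though not of the algorithm in the TR itself: one can run ordinary tabular Q-learning and then read off a symbolic state-action rule wherever the learned Q-values prefer one action by a clear margin. Python/NumPy, with all task details invented:]

import numpy as np

rng = np.random.default_rng(3)
n_states, actions = 6, ("left", "right")    # goal is the rightmost state
Q = np.zeros((n_states, 2))
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                        # epsilon-greedy Q-learning on a 1-D walk
    s = int(rng.integers(0, n_states - 1))
    while s != n_states - 1:
        a = int(rng.integers(0, 2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

margin = 0.05                               # only extract confident preferences
rules = [f"IF state == {s} THEN {actions[int(Q[s].argmax())]}"
         for s in range(n_states - 1)
         if abs(Q[s, 0] - Q[s, 1]) > margin]
print("\n".join(rules))

[The extracted rules can then be reused, or replayed as training data, in the spirit of the transfer and interference remarks above.]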
The TR is available at:
FTP-host: aramis.cs.ua.edu
FTP-file: pub/tech-reports/sun.clarion.ps

================================================================
Dr. Ron Sun
Department of Computer Science, The University of Alabama, Tuscaloosa, AL 35487
phone: (205) 348-6363 fax: (205) 348-0219
rsun at athos.cs.ua.edu
================================================================

From chaos at gojira.Berkeley.EDU Thu Jan 19 20:28:35 1995
From: chaos at gojira.Berkeley.EDU (Jim Crutchfield)
Date: Thu, 19 Jan 95 17:28:35 PST
Subject: Preprint --- Evolving Globally Synchronized Cellular Automata
Message-ID: <9501200128.AA06251@gojira.Berkeley.EDU>

The following paper is now available on the Web and via anonymous FTP. Access instructions follow.

Evolving Globally Synchronized Cellular Automata
Rajarshi Das, James P. Crutchfield, Melanie Mitchell, and James E. Hanson
Santa Fe Institute Working Paper 95-01-005

Abstract: How does an evolutionary process interact with a decentralized, distributed system in order to produce globally coordinated behavior? Using a genetic algorithm (GA) to evolve cellular automata (CAs), we show that the evolution of spontaneous synchronization, one type of emergent coordination, takes advantage of the underlying medium's potential to form embedded particles. The particles, typically phase defects between synchronous regions, are designed by the evolutionary process to resolve frustrations in the global phase. We describe in detail one typical solution discovered by the GA, delineating the discovered synchronization algorithm in terms of embedded particles and their interactions. We also use the particle-level description to analyze the evolutionary sequence by which this solution was discovered. Our results have implications both for understanding emergent collective behavior in natural systems and for the automatic programming of decentralized spatially extended multiprocessor systems.

World Wide Web URL: http://www.santafe.edu/projects/evca/evabstracts.html

Anonymous FTP: To obtain an electronic copy of this paper (12 pages):
ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/EvCA
binary
get EGSCA.ps.Z
quit

Then at your system:
uncompress EGSCA.ps.Z
lpr EGSCA.ps

If you have trouble getting this paper electronically, you can request a hard copy from Deborah Smith (drs at santafe.edu), Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM, USA, 87501.

From redi at dynamics.bu.edu Thu Jan 12 11:55:05 1995
From: redi at dynamics.bu.edu (Jason Redi)
Date: Thu, 12 Jan 1995 11:55:05 -0500
Subject: Internet Course Announcement on Complex Systems
Message-ID: <9501121655.AA08829@dynamics.bu.edu>

[ This course will be available through the MBONE virtual network on ]
[ the Internet. For more information see the URL ]
[ http://ripple.bu.edu/CSDL/sc726/sc726.html ]

COURSE ANNOUNCEMENT:
SC726: Dynamics of Complex Systems
Spring, 1995
Prof. Yaneer Bar-Yam

What do protein folding, neural networks, developmental biology, evolution and human economies have in common? Are there mathematical models that capture key properties of these complex systems? Can we develop principles for the design of complex systems?

It is now widely held that the theory of complexity and the dynamics of complex systems may be founded on universal principles that describe disparate problems ranging from physics to economics.
A corollary is that transferring ideas and results from investigators in many disparate areas will cross-fertilize and lead to important new results.

In this course we will study a few examples of complex systems and identify questions that appear in many of them. A central goal of this course is to develop models and modeling techniques that will be useful when applied to all complex systems. For this purpose we will adopt both analytic tools and computer simulation. Analytic techniques will be introduced from statistical mechanics and stochastic dynamics. Computer simulation using Monte Carlo methods and Cellular Automata will be an integral part of the curriculum.

The course consists of three hours of lecture and one hour of guest lectures per week. Students will work on a group project on topics suited to the group's interests.

Topics to be covered include: subdivision and hierarchical organization, the scaling of kinetic properties with system size, self-organization and organization by design, measuring complexity. Systems to be discussed include: protein folding, neural networks, developmental biology, evolution and human economies/societies.

Time: MW 12:00-1:30pm EST

This course is intended for graduate students (both Ph.D. and Masters) from a variety of departments - physics, chemistry, biology, all of engineering, mathematics, computer science - who are interested in investigating, working with or engineering complex systems. One purpose of the course is to help doctoral students learn about current research in these areas and identify possible new research topics. Interested faculty are also welcome to attend.

Prerequisites: Basic probability and statistics as provided by one of: MA381, EK500, EK424, PY410.

-------

Course offered through the Internet to remote sites by arrangement.
First day of classes: Jan. 18, 1995
For further information contact Prof. Bar-Yam at tel. (617) 353-2843 or Internet: yaneer at enga.bu.edu

From ptodd at synapse.psy.du.edu Fri Jan 20 14:10:07 1995
From: ptodd at synapse.psy.du.edu (Peter Todd)
Date: Fri, 20 Jan 95 12:10:07 MST
Subject: GRADUATE PROGRAM IN DEVELOPMENTAL COG. NEUROSCIENCE
Message-ID: <9501201910.AA04389@synapse>

****************************************************************************
GRADUATE STUDY IN DEVELOPMENTAL COGNITIVE NEUROSCIENCE
DEPARTMENT OF PSYCHOLOGY, UNIVERSITY OF DENVER
Invitation for applications for Fall 1995, due Feb. 1
****************************************************************************

The Department of Psychology at the University of Denver is pleased to announce our new research training program in Developmental Cognitive Neuroscience. This is one of the first graduate training programs in the world to focus on the problems and techniques of cognitive neuroscience as they relate to the processes of development across the lifespan. Our department is already well-known for research and expertise in developmental, experimental/cognitive, and child clinical psychology, and this new program stands as a specialization within each of these areas while also bridging them with a common vision.

We have established this program to capitalize on the interdisciplinary environment that exists within our department and across other departments at the University of Denver, and with other universities in Colorado. The program includes coursework in psychology, neuroscience, and biology, and participation in a wide range of research groups.
Students will also be trained in experimental and modeling techniques applied to a variety of normal and abnormal populations. In addition to taking courses in the Psychology Department, students will receive training and coursework through the University of Denver Biology Department, the University of Colorado Health Sciences Center, and the Institute for Behavior Genetics at the University of Colorado, Boulder.

The new developmental cognitive neuroscience program offers students a unique opportunity to combine interests in a wide range of disciplines. Students pursuing an experimental/developmental psychology degree will find an exciting mix of topics incorporating the latest research in neuroscience. For clinical students, this is one of the few programs anywhere that offers graduate-level training in child clinical neuropsychology. All students in the program will take a common set of core courses in neurobiology, neuroanatomy, psychopharmacology, neuropsychology, and connectionist modeling, in addition to more traditional courses in statistics, research design, and substantive areas of psychology. Students will gain valuable hands-on experience through practicums in neuroimaging and research with abnormal populations. These populations include adults with Alzheimer's disease or focal lesions, and children with autism, dyslexia, ADHD, inherited metabolic disorders, and various mental retardation syndromes. Of course, the most important aspect of training is research conducted in close collaboration with a faculty mentor. Below we have included a list of the faculty in this program and their research interests. Two of those listed, Cathy Reed and Peter Todd, were recruited last year specifically for this new program.

Denver and the state of Colorado as a whole provide a great living and working environment. The quality of life here is very high, with the cultural attractions of a major metropolitan center vying with the natural attractions of world-class skiing, hiking, biking, and fossil-hunting in the nearby Rockies. The cost of living in the area is among the lowest of any major city in the country, while the local economy continues to thrive and grow. The University of Denver itself is a 130-year-old institution well respected and supported by the surrounding community, with a student population of 8000 spread across undergraduate, graduate, and professional programs. A major capital campaign has just been announced, which has begun to bring in even more resources and exposure for the University. The Departments of Psychology and Biology are two of the largest and best-funded on campus, with strong graduate programs and friendly and frequent faculty-student interaction.

Our Department has a good track record of attracting high-quality graduate students who become productive and renowned scientists. We now seek to attract students who are interested in pursuing first-rate research and training in developmental cognitive neuroscience. Interested students are strongly encouraged to apply. For more information about this new program, prospective students (and faculty advisors) should contact our Graduate Affairs Secretary at the address below, or members of our faculty, at their individual email addresses, for more specific questions about the program and the research training opportunities within their labs.

For further information and application materials, please contact:
Paula Houghtaling, Graduate Affairs Secretary
Department of Psychology, University of Denver
2155 S.
Race Street, Denver, CO 80208 USA
Email: phoughta at pstar.psy.du.edu
Phone: 303-871-3803 (8-5 Mountain time)

*** APPLICATION DEADLINE FEBRUARY 1, 1995 ***

Psychology Faculty and Interests

Marshall Haith (Ph.D. 1964, University of California, Los Angeles), Professor. Visual scanning and attention in infants, anticipation and planning skills, development of early reading skills. Email: mhaith at pstar.psy.du.edu

Janice M. Keenan (Ph.D. 1975, University of Colorado), Professor. Memory, reading comprehension, psycholinguistics, cognitive neuropsychology. Email: jkeenan at pstar.psy.du.edu

Bruce F. Pennington (Ph.D. 1977, Duke University), Professor. Developmental neuropsychology, developmental psychopathology, dyslexia, autism, attention deficit hyperactivity disorder. Email: bpenning at pstar.psy.du.edu

George R. Potts (Ph.D. 1971, Indiana University), Professor. Cognition, memory, reading, implicit memory and perception. Email: gpotts at pstar.psy.du.edu

Catherine L. Reed (Ph.D. 1991, University of California, Santa Barbara), Assistant Professor. Cognitive neuropsychology, somatosensory perception, visual cognition, human movement. Email: creed at pstar.psy.du.edu

Ralph J. (Rob) Roberts (Ph.D. 1984, University of Virginia), Associate Professor. Developmental cognitive neuropsychology, attention and working memory, eye movements, acquisition of complex perception-action skills. Email: rroberts at pstar.psy.du.edu

Peter M. Todd (Ph.D. 1992, Stanford University), Assistant Professor. Computer modeling and simulation of cognition, connectionism, evolution of behavior and learning, psychological/sexual selection, music cognition. Email: ptodd at pstar.psy.du.edu

Biology Faculty and Interests

Robert Dores (Ph.D. 1979, University of Minnesota), Professor. Biosynthesis of pituitary polypeptide hormones and neuropeptides. Email: rdores at du.edu

John (Jack) Kinnamon (Ph.D. 1976, University of Georgia), Assistant Professor. Neurobiology of sensory systems. Email: jkinnamo at du.edu

Susan Sadler (Ph.D. 1982, University of Colorado), Professor. Identification and characterization of molecular mechanisms that are involved in triggering meiotic cell division. Email: ssadler at du.edu

From harnad at ecs.soton.ac.uk Sat Jan 21 13:43:38 1995
From: harnad at ecs.soton.ac.uk (Stevan Harnad)
Date: Sat, 21 Jan 95 18:43:38 GMT
Subject: Important new Behav. Brain Sci. changes
Message-ID: <570.9501211843@cogsci.ecs.soton.ac.uk>

Five important new changes in Behavioral and Brain Sciences (BBS) addresses, policies and procedures (1-5) plus three announcements about positions and activities at my new institution (Southampton University) (6-8). Summaries first, then the details:

(1) New address for submitting BBS target articles
(2) New address for submitting BBS commentaries
(3) All commentaries now require abstracts
(4) All articles/commentaries now require an email version and/or disk
(5) Target articles now electronically retrievable in multiple ways
(6) Applications invited for Psychology Professorship at U. Southampton.
(7) Applications invited for grad students and postdocs to work with me
(8) Come and give a talk at our new Cognitive Sciences Centre

(1) NEW BBS ADDRESS (Editorial): Effective immediately, ALL SUBMITTED TARGET ARTICLES AND ALL CORRESPONDENCE PERTAINING TO EDITING AND REFEREEING should henceforth be addressed to BBS's new Editorial Office:

Behavioral and Brain Sciences
Department of Psychology
University of Southampton
Highfield, Southampton SO17 1BJ
UNITED KINGDOM
phone: 44 703 594-583
fax: 44 703 593-281
email: bbs at ecs.soton.ac.uk

All BBS email should go to the email address above; only messages intended for Stevan Harnad personally should be sent to harnad at ecs.soton.ac.uk -- I now get over 80 emails a day so please, whatever can be answered by the Managing Editor, send to bbs rather than harnad!

(2) SECOND NEW BBS ADDRESS: Effective immediately, ALL SUBMITTED COMMENTARIES (double-spaced, in triplicate, with email version and/or disk) AND ALL CORRESPONDENCE PERTAINING TO COPY-EDITING AND PROOFS should henceforth be addressed to:

Behavioral and Brain Sciences
Cambridge University Press, Journals Department
40 West 20th Street
New York, NY 10011-4211 USA
phone: 800-431-1580 (ext. 369, Ed Miller) or 212-924-3900 (ext. 369, Ed Miller)
fax: 212-645-5960
email: bbs at cup.org (or emiller at cup.org)

To expedite mailing, all commentaries will be received and logged in New York and then forwarded to the Editor in Southampton for review.

(3) Effective immediately, every BBS commentary and author's response must have an ABSTRACT (~60 words).

(4) Effective immediately, IN ADDITION to the requisite number of hard copies, all BBS contributions (articles, commentaries, and responses) will also have to be submitted in electronic form -- by email (preferably) to bbs at ecs.soton.ac.uk or on a computer disk accompanying the hard copies. BBS is moving toward more and more electronic processing at all stages. The result will be much faster, more efficient and fairer procedures.

(5) Electronic versions of the preprints of all BBS target articles can be retrieved by ftp, archie, gopher or World-Wide-Web from:

ftp://cogsci.ecs.soton.ac.uk/pub/harnad
ftp://ftp.princeton.edu/pub/harnad/
http://cogsci.ecs.soton.ac.uk/~harnad/
http://www.princeton.edu/~harnad/
gopher://gopher.princeton.edu/11/.libraries/.pujournals

This way prospective commentators can let us know that they would like to be invited to comment on target articles about to be circulated for commentary, and can search the archive for past articles on which they may wish to contribute Continuing Commentary.

(6) Applications are invited for a full Professorship in Psychology at the University of Southampton. I am especially interested to hear from Experimental/Clinical Neuropsychologists with active research programmes. Please contact me to discuss it informally: harnad at ecs.soton.ac.uk

(7) Expressions of interest are also invited from prospective graduate students and postdoctoral fellows interested in coming to work with me in the Cognitive Psychology Laboratory and the Cognitive Sciences Centre at Southampton University. Our research focus is described below. Please write to: harnad at ecs.soton.ac.uk

(8) Let me know if you will be in the London area and would like to give a talk about your work at our new Cognitive Sciences Centre (CSC), of which I am Director, with the collaboration of Professor Michael Sedgewick (Clinical Neurological Sciences), Professors Tony Hey and Chris Harris (Electronics and Computer Science), Dr.
John Bradshaw (Anthro-Zoology Institute), Professor Wendy Hall (Multimedia Centre) and Professor Bob Remington (ex officio, Head of the Psychology Department).

--------------------------------------------------------------------

Research Focus of the Laboratory

CATEGORISATION AND COGNITION: Our capacity to categorise is at the heart of all of our cognitive capacity. People can sort and label the objects and events they see and hear with a proficiency that still far exceeds that of our most powerful machines. How do we manage to do it? The answer will not only tell us more about ourselves but it will allow us to apply our findings to enhancing our proficiency, both in the learning of categories and in our use of machines to extend our capacities.

CATEGORY LEARNING is the most general form of cognition. Animals learn categories when they learn what is and is not safe to eat, where it is safe to forage, who is friend and who is foe. Children learn the same kinds of categories, but they eventually go on to the much more powerful and uniquely human strategy of learning categories by name, rather than by performing some instrumental response on them, such as eating or fleeing. Whether they categorise by instrumental response or by name, however, children must still have direct experience with the objects they are categorising, and some sort of corrective feedback from the consequences of MIScategorising them. Eventually, however, categories can be learned from strings of symbols alone, with most of those symbols being themselves the names of categories. This is the most remarkable of our cognitive capacities, language, but language and cognition cannot be understood unless we analyse how they are grounded in categorisation capacity (Harnad 1990). This is the theme of our research programme.

BEHAVIORAL, COMPUTATIONAL AND NEURAL APPROACHES: There are three empirical ways to investigate the functional basis of our categorisation capacity. The first way is to (i) analyse our categorisation performance itself experimentally, particularly how we LEARN to categorise. The second way is to (ii) model our categorisation capacity with computers that must learn the same categories that we do, on the basis of the same input and corrective feedback that we get. The third way is to (iii) monitor brain function while we are learning categories, to determine what neural properties change during the course of learning, and to relate them to the performance changes during learning, as well as to the internal functioning of the machine models performing the same task. These three converging lines of investigation are the ones to be pursued in the Cognitive Psychology Laboratory. Details and papers are available from the URLs below:

----------------------------------------------------------------
Stevan Harnad
Professor of Psychology
Director, Cognitive Sciences Centre
Department of Psychology
University of Southampton
Highfield, Southampton SO17 1BJ
UNITED KINGDOM
harnad at ecs.soton.ac.uk
harnad at princeton.edu
phone: +44 703 592582
fax: +44 703 594597
--------------------------------------------------------------------
ftp://ftp.princeton.edu/pub/harnad/
http://cogsci.ecs.soton.ac.uk/~harnad/
http://www.princeton.edu/~harnad/
gopher://gopher.princeton.edu/11/.libraries/.pujournals

From ucjtsjd at ucl.ac.uk Sun Jan 22 20:59:45 1995
From: ucjtsjd at ucl.ac.uk (John Draper)
Date: Mon, 23 Jan 1995 01:59:45 +0000
Subject: Lectureships in Psychology
Message-ID: <7189.ucjtsjd@pop-server.bcc.ac.uk>

1.
LECTURESHIPS IN PSYCHOLOGY

Applications are invited for four lectureships tenable from 1 October 1995. The successful candidates will be active researchers in their respective fields as below:

a) Cognitive science, particularly computational modelling of cognitive functions. Teaching duties include a significant contribution to the undergraduate programme in Cognitive Science.
b) Developmental psychology. Teaching involves both the undergraduate and MSc Educational Psychology degree programmes.
c) Social psychology. Teaching duties include contributing to the undergraduate social psychology programme.
d) Cognitive neuroscience, particularly cognitive neuropsychology. Teaching duties involve organising undergraduate labs and providing lectures in neuropsychology.

Posts A, B and C are permanent while post D is temporary for two years but with the possibility of being made permanent. The appointments will be made at the appropriate points on the Lecturer A scale (currently £14,756 - £19,326), possibly Lecturer B, plus London Allowance of £2,134.

Applications by CV, including the names of three referees, should be sent to: John Draper, Departmental Administrator, Department of Psychology, University College London, Gower Street, London WC1E 6BT (e-mail: j.draper at ucl.ac.uk; tel 071 387 7050 x5338), from whom further details can be obtained. Closing date: 31 March 1995.

From rafal at mech.gla.ac.uk Mon Jan 23 10:12:05 1995
From: rafal at mech.gla.ac.uk (Rafal W Zbikowski)
Date: Mon, 23 Jan 1995 15:12:05 GMT
Subject: Workshop on Neurocontrol
Message-ID: <15957.199501231512@gryphon.mech.gla.ac.uk>

CALL FOR PAPERS

Neural Adaptive Control Technology Workshop: NACT I
18--19 May, 1995
University of Glasgow, Scotland, UK

NACT Project
^^^^^^^^^^^^
The first of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on May 18--19 1995 in Glasgow, Scotland. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland.

The project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed will be evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The project leader is Dr Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow).

NACT I Workshop
^^^^^^^^^^^^^^^
The aim of the workshop is to bring together selected invited specialists in the fields of adaptive control, non-linear systems and neural networks. A number of contributed papers will also be included. As well as paper presentations, significant time will be allocated to round-table and discussion sessions. In order to create a fertile atmosphere for a significant information interchange, we aim to attract active specialists in the relevant fields. Proceedings of the meeting will be published in an edited book format.
A social programme will be prepared for the weekend immediately following the meeting where participants will be able to sample the various cultural and recreational offerings of Central Scotland (a visit to a whisky distillery is included) and the easily reached Highlands.

Contributed papers
^^^^^^^^^^^^^^^^^^
The Program Committee is soliciting contributed papers in the area of neurocontrol for presentation at the conference and publication in the Proceedings. Submissions should take the form of an extended abstract of six pages in length and the DEADLINE is 1 March 1995. Accepted extended abstracts will be circulated to participants in a Workshop digest. Following the Workshop selected authors will be asked to prepare a full paper for publication in the proceedings. This will take the form of an edited book produced by an international publisher. LaTeX style files will be available for document preparation. Each submitted paper must be headed with a title, the names, affiliations and complete mailing addresses (including e-mail) of all authors, a list of three keywords, and the statement "NACT I". The first named author of each paper will be used for all correspondence unless otherwise requested. Final selection of papers will be announced in mid-March 1995.

Address for submissions
^^^^^^^^^^^^^^^^^^^^^^^
Dr Rafal Zbikowski
Department of Mechanical Engineering
James Watt Building
University of Glasgow
Glasgow G12 8QQ
Scotland, UK
rafal at mech.gla.ac.uk

Schedule summary
^^^^^^^^^^^^^^^^
1 March 1995    Deadline for submission of contributed papers
Mid-March 1995  Notification regarding acceptance of papers
18-19 May 1995  Workshop

From john at dcs.rhbnc.ac.uk Mon Jan 23 10:47:32 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Mon, 23 Jan 95 15:47:32 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199501231547.PAA15966@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): two new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-014:
----------------------------------------
Learning Minor Closed Graph Classes with Membership and Equivalence Queries
by John Shawe-Taylor, Dept of Computer Science, Royal Holloway, U. of London
Carlos Domingo, Department of Software, U. Politècnica de Catalunya
Hans Bodlaender, Dept of Computer Science, Utrecht University
James Abello, Computer Science Dept, Texas A&M University

Abstract: The paper considers the problem of learning classes of graphs closed under taking minors. It is shown that any such class can be properly learned in polynomial time using membership and equivalence queries. The representation of the class is in terms of a set of minimal excluded minors (obstruction set).

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-016:
----------------------------------------
On-line learning with minimal degradation in feedforward networks
by V Ruiz de Angulo, Institute for System Engineering and Informatics, CEC Joint Research Center, Ispra, Italy
and Carme Torras, CSIC-UPC, Barcelona, Spain

Abstract: Dealing with non-stationary processes requires quick adaptation while at the same time avoiding catastrophic forgetting. A neural learning technique that satisfies these requirements, without sacrificing the benefits of distributed representations, is presented.
It relies on a formalization of the problem as the minimization of the error over the previously learned input-output (i-o) patterns, subject to the constraint of perfect encoding of the new pattern. Then this constrained optimization problem is transformed into an unconstrained one with hidden-unit activations as variables. This new formulation naturally leads to an algorithm for solving the problem, which we call Learning with Minimal Degradation (LMD). Some experimental comparisons of the performance of LMD with back-propagation are provided which, besides showing the advantages of using LMD, reveal the dependence of forgetting on the learning rate in back-propagation. We also explain why overtraining affects forgetting and fault-tolerance, which are seen as related problems.

-----------------------
The Report NC-TR-94-014 can be accessed and printed as follows

% ftp cscx.cs.rhbnc.ac.uk  (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-014.ps.Z
ftp> bye
% zcat nc-tr-94-014.ps.Z | lpr -l

Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

Best wishes
John Shawe-Taylor

From hml at charon.llnl.gov Mon Jan 23 14:52:23 1995
From: hml at charon.llnl.gov (Hans Martin Lades)
Date: Mon, 23 Jan 95 11:52:23 -0800
Subject: Postdoctoral research staff position available
Message-ID:

POST DOCTORAL RESEARCH STAFF MEMBER IN COMPUTER VISION/CONNECTIONIST VISION MODELING

INSTITUTE FOR SCIENTIFIC COMPUTING RESEARCH (ISCR)
UNIVERSITY OF CALIFORNIA
LAWRENCE LIVERMORE NATIONAL LABORATORY (LLNL)
P.O. BOX 808, L-416
LIVERMORE, CA 94550

NATURE AND SCOPE OF POSITION: The Institute for Scientific Computing Research, a University of California Institute located at LLNL, is pursuing research in active computer vision and related biologically motivated computational models. We are currently seeking excellent candidates for a postdoctoral position that will include research in image processing, neural networks and computer vision. The successful candidate will aid in the development of computer vision software and hardware for collaborations both inside and outside LLNL. There will be opportunities for interaction and collaboration with interdisciplinary scientists in the academic community and with private industry. ISCR staff are expected to perform research resulting in publications in leading computational and related journals and to participate fully in the Institute, including its seminar series and research activities.

LOCATION: LLNL is located in the San Francisco Bay Area. Coastal and mountain recreational areas are abundant nearby. The ISCR has collaborations with nearby universities, including UC Berkeley, Stanford, and UC Davis.

ESSENTIAL SKILLS, KNOWLEDGE, AND ABILITIES: Candidates must have a demonstrated ability to:
- Identify complex problems and solve them in a creative and timely manner
- Carry out independent research related to the computer vision efforts at the ISCR
- Communicate clearly in both oral and written form.
Candidates must possess a recent Ph.D.
in Physics, Mathematics, Computer Science, or Engineering with a research background in Signal Processing, Image Processing and Neural Networks. Candidates must have excellent programming skills, with demonstrated experience in C and C++, familiarity with different compilers, standardization efforts, object-oriented design and creation of program packages.

Experience in the following areas is a plus:
- Real-time systems
- Datacube programming
- COSE, CORBA
- Parallel program design
- Efficient Modeling of Biological Vision Systems
- Hardware design (small PCB design, wrapping, etc.)
- Robotics
- Algorithms for nonlinear signal processing (fractal compression, higher-order correlations, etc.)
- Databases.

LENGTH OF EMPLOYMENT: 1 year (with possible renewal for 3 years total)

SALARY: In the range $46,000-54,000 per annum depending on experience and qualifications.

FOR FURTHER INFORMATION, PLEASE CONTACT: Martin Lades (hml at llnl.gov) or Paul Skokowski (paulsko at llnl.gov), (ph.) (510) 422-7132, (fax) (510) 422-7819

Application Deadline: February 1, 1995

From vg197 at neutrino.pnl.gov Tue Jan 24 01:22:41 1995
From: vg197 at neutrino.pnl.gov (Sherif Hashem)
Date: Mon, 23 Jan 1995 22:22:41 -0800 (PST)
Subject: Workshop Announcement and CFP
Message-ID: <9501240622.AA19562@neutrino.pnl.gov>

WORKSHOP ANNOUNCEMENT AND CALL FOR PARTICIPATION
*************************************************
(Abstract submission deadline: February 10, 1995)

WORKSHOP ON ENVIRONMENTAL AND ENERGY APPLICATIONS OF NEURAL NETWORKS
Battelle Auditorium, Richland, Washington
March 30-31, 1995

The Environmental Molecular Sciences Laboratory (EMSL), Pacific Northwest Laboratory (PNL), and the Richland Section of the Institute of Electrical and Electronics Engineers (IEEE) are sponsoring a workshop to bring together scientists and engineers interested in investigating environmental and energy applications of artificial neural networks (ANNs).

Objectives:
-----------
The main objectives of this workshop are:
* to provide a forum for presenting and discussing environmental and energy applications of neural networks.
* to serve as a means for investigating the potential uses of neural networks in the U.S. Department of Energy's environmental cleanup efforts and energy programs.
* to promote collaboration between researchers in national laboratories, academia, and industry to solve real-world problems.

Topics:
-------
* Environmental applications (modeling and predicting land, air, and water pollution; environmental sensing; spectroscopy; hazardous waste handling and cleanup).
* Energy applications (environmental monitoring for power systems, modeling and control of power plants, power load forecasting, fault location and diagnosis of power systems).
* Commercial and industrial applications (environmental, economic, and financial time series analyses and forecasting; chemical process modeling and control).
* Medical applications (analysis of environmental health effects, modeling biological systems, medical image analysis, and medical diagnosis).

Who should attend?
------------------
This workshop should be of interest to researchers, developers, and practitioners applying ANNs in energy and environmental sciences and engineering, as well as scientists and engineers who see some potential for the application of ANNs to their work.

Dates:
------
The workshop will be held on March 30-31, 1995, from 8:00 am to 5:00 pm.
An introductory tutorial on neural networks will be offered on March 29, 1995, and is recommended for participants who are new to neural networks.

Deadline for contributed presentations: Abstracts are due by February 10, 1995. Notification of acceptance will be mailed by February 24, 1995.

Cost:
-----
The registration fee is $120 ($75 for students). Early registration by March 1, 1995, is $100 ($50 for students).

For More Information, Contact:
------------------------------
Dr. Sherif Hashem
Environmental Molecular Sciences Laboratory
Pacific Northwest Laboratory
P.O. Box 999, M/S K1-87
Richland, WA 99352
Telephone: 509-375-6995
Fax: 509-375-6631
Internet: s_hashem at pnl.gov
World Wide Web URL: http://www.emsl.pnl.gov:2080/people/bionames/hashem_s.html

Also see the workshop's homepage on the World Wide Web at URL: http://www.emsl.pnl.gov:2080/docs/cie/neural/workshop2/homepage.html

____________________________________________________________________________
REGISTRATION FORM

Name: ____________________________
Address: ____________________________
____________________________
____________________________
____________________________
Telephone: ____________________________
Fax: ____________________________
E-mail: ____________________________

[ ] I am interested in attending the neural network tutorial (no additional fee is required).
[ ] I am interested in a bus tour of the Hanford Site (a Department of Energy site located north of Richland, Washington).

Registration Fee:
-----------------
Regular: $100 ($120 after March 1, 1995).
Student: $50 ($75 after March 1, 1995).

Please make your check payable to Battelle. Mail the completed form and check to:
Janice Gunter
WEEANN Registration
Pacific Northwest Laboratory
P.O. Box 999, M/S K1-87
Richland, WA 99352
____________________________________________________________________________

**********************************************************************************
The Pacific Northwest Laboratory (PNL) is a multiprogram national laboratory operated for the U.S. Department of Energy by the Battelle Memorial Institute. To provide the basic research needed to meet environmental cleanup goals, the Environmental Molecular Sciences Laboratory (EMSL) is being constructed at PNL. The prime mission of the EMSL and its associated research programs is to advance scientific knowledge in support of the long-term mission of the U.S. Department of Energy in environmental restoration and waste management.
**********************************************************************************

===================================================================
Pacific Northwest Laboratory        E-mail: s_hashem at pnl.gov
906 Battelle Boulevard              Tel. (509) 375-6995
P.O. Box 999, MSIN K1-87            Fax. (509) 375-6631
Richland, WA 99352 USA
===================================================================

From reza at bme.jhu.edu Tue Jan 24 09:01:34 1995
From: reza at bme.jhu.edu (Reza Shadmehr)
Date: Tue, 24 Jan 95 09:01:34 EST
Subject: Papers on human motor memory
Message-ID:

Hello,
The following two papers will appear in the upcoming NIPS proceedings. They deal with some of the properties of human motor memory, including interference and forgetting. I've included ftp instructions.
best wishes,
Reza Shadmehr
reza at bme.jhu.edu

----------------------------------------------------------------------
Interference in learning internal models of inverse dynamics in humans
Reza Shadmehr, Tom Brashers-Krug, Ferdinando Mussa-Ivaldi

Abstract: Experiments were performed to reveal some of the computational properties of the human motor memory system. We show that as humans practice reaching movements while interacting with a novel mechanical environment, they learn an internal model of the inverse dynamics of that environment. Subjects show recall of this model at testing sessions 24 hours after the initial practice. The representation of the internal model in memory is such that there is interference when there is an attempt to learn a new inverse dynamics map immediately after an anticorrelated mapping was learned. We suggest that this interference is an indication that the same computational elements used to encode the first inverse dynamics map are being used to learn the second mapping. We predict that this leads to a forgetting of the initially learned skill.

anonymous ftp to: ftp.ai.mit.edu
filename: pub/users/reza/nips95a.ps.Z

---------------------------------------------------------------------
Catastrophic interference in human motor memory
Tom Brashers-Krug, Reza Shadmehr, Emanuel Todorov

Abstract: Biological sensorimotor systems are not static maps that transform input (sensory information) into output (motor behavior). Evidence from many lines of research suggests that their representations are plastic, experience-dependent entities. While this plasticity is essential for flexible behavior, it presents the nervous system with difficult organizational challenges. If the sensorimotor system adapts itself to perform well under one set of circumstances, will it then perform poorly when placed in an environment with different demands (negative transfer)? Will a later experience-dependent change undo the benefits of previous learning (catastrophic interference)? We explore the first question in a separate paper in this volume (Shadmehr et al. 1995). Here we present psychophysical and computational results that explore the question of catastrophic interference in the context of a dynamic motor learning task. Under some conditions, subjects show evidence of catastrophic interference. Under other conditions, however, subjects appear to be immune to its effects. These results suggest that motor learning can undergo a process of consolidation. Modular neural networks are well suited for the demands of learning multiple input/output mappings. By incorporating the notion of fast- and slow-changing connections into a modular architecture, we were able to account for the psychophysical results.

anonymous ftp to: ftp.ai.mit.edu
filename: pub/users/reza/nips95b.ps.Z

From marks at u.washington.edu Tue Jan 24 13:53:24 1995
From: marks at u.washington.edu (Robert Marks)
Date: Tue, 24 Jan 95 10:53:24 -0800
Subject: 1995 ISCAS (Seattle)
Message-ID: <9501241853.AA26645@carson.u.washington.edu>

SEVENTEEN full sessions on Neural Networks
Tutorials on Cellular Neural Networks
(Download program for details)

ISCAS'95
1995 IEEE International Symposium on Circuits and Systems
April 30 - May 3, 1995
Sheraton Seattle Hotel and Towers
Seattle, Washington USA

[ASCII-art "1995 ISCAS" banner omitted]
Sponsored by
THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS
CIRCUITS AND SYSTEMS SOCIETY

USEFUL CONTACTS

ISCAS'95 Secretariat
Meeting Management
2603 Main Street, Suite 690
Irvine, CA 92714 USA
Tel: (714)752-8205
Fax: (714)752-7444
Email: 74710.2266 at compuserve.com

Symposium Venue and Conference Hotel
Sheraton Seattle Hotel and Towers
400 Sixth Avenue
Seattle, Washington 98101
Tel: (206)621-9000
Fax: (206)621-8441

For Conference Program/Information

For those with WWW Access: You can use MOSAIC and access URL site http://www.ee.washington.edu/iscas95.html

With ftp access:
unix> ftp compton.ee.washington.edu (or ftp 128.95.42.191)
Name: anonymous
Password:
ftp> cd pub/iscas95
ftp> get read.me ***list of all possible files***
ftp> get advprog
ftp> get regforms
ftp> get visainfo
ftp> bye

With e-mail but not ftp: send email to: iscas95 at ee.washington.edu (an automatic system e-mail reply will send back information)

AIRLINE TRANSPORTATION
World Wise Travel Services
1219 Westlake Ave N. Suite 107
Seattle, WA 98109
(800)217-9527 Phone
(206)217-0062 Fax

Seattle Sightseeing Tours:
Convention Services Northwest
1809 7th Ave, 1414 Tower Bldg
Seattle, WA 98101
(206)292-9198, FAX:(206)292-0559

From lksaul at psyche.mit.edu Tue Jan 24 13:58:11 1995
From: lksaul at psyche.mit.edu (Lawrence Saul)
Date: Tue, 24 Jan 95 13:58:11 EST
Subject: paper announcement
Message-ID: <9501241858.AA03473@psyche.mit.edu>

------------------------------------------------------------------------
FTP-host: psyche.mit.edu
FTP-file: pub/lksaul/boltzmann.chains.ps.Z
------------------------------------------------------------------------

The following paper is now available by anonymous ftp:

Boltzmann Chains and Hidden Markov Models [8 pages]
Lawrence K. Saul and Michael I. Jordan
Center for Biological and Computational Learning
Massachusetts Institute of Technology
79 Amherst Street, E10-243
Cambridge, MA 02139

Abstract: We propose a statistical mechanical framework for the modeling of discrete time series. Maximum likelihood estimation is done via Boltzmann learning in one-dimensional networks with tied weights. We call these networks Boltzmann chains and show that they contain hidden Markov models (HMMs) as a special case. Our framework also motivates new architectures that address particular shortcomings of HMMs.
We look at two such architectures: parallel chains that model feature sets with disparate time scales, and looped networks that model long-term dependencies between hidden states. For these networks, we show how to implement the Boltzmann learning rule exactly, in polynomial time, without resort to simulated or mean-field annealing. The necessary computations are done by exact decimation procedures from statistical mechanics.

*** To appear in the NIPS 1994 Proceedings.

From ronnyk at CS.Stanford.EDU Tue Jan 24 00:08:14 1995
From: ronnyk at CS.Stanford.EDU (Ronny Kohavi)
Date: Tue, 24 Jan 1995 13:08:14 +0800
Subject: MLC++ utilities version 1.1
Message-ID: <9501242108.AA15481@starry.Stanford.EDU>

[ Summary paragraph moved to beginning, for clarity. -- The Moderator ]

MLC++ is a Machine Learning library of C++ classes being developed at Stanford. More information about the library can be obtained at URL http://robotics.stanford.edu:/users/ronnyk/mlc.html.

The utilities are available by anonymous ftp to starry.stanford.edu:pub/ronnyk/mlc/util. They are currently given only in object code for Sun, but source code will be distributed in the future or to sites that wish to attempt a port of the code into other compilers.

MLC++ Utilities 1.1
___________________

Since the release of MLC++ utilities 1.0 in December 1994, over 40 sites have copied the utilities. We are now releasing version 1.1. New features include:

*. Options now prompt for values with help to explain the option values. Options are divided into common options and "nuisance" options, which users should not change often (especially first-time users).
*. New inducers include Naive-Bayes and 1R (Holte).
*. The nearest-neighbor (IB) inducer has many new options. It supports nominals, interquartile normalization (as opposed to extreme), voting of neighbors, k distances (as opposed to k neighbors), and more.
*. A new utility, discretize, is available to discretize continuous features. Either binning or Holte's discretization can be used.
*. Estimated performance on a test set now gives a confidence bound assuming an i.i.d. sample (details in the manual). People are often surprised by how wide the interval is for some of the toy datasets.
*. Confusion matrices can be displayed for MLC++ inducers.
*. The tree induced by ID3 can be displayed using X-windows, showing the distribution and entropy at each node. (This option requires that you install dotty from AT&T, which is free for research.)
*. The learning curve gives an honest estimate of error by testing only on the unseen instances. The accuracy reports for regular induction also report memorization accuracy and generalization accuracy separately (following Schaffer and Wolpert's recent papers).

-- Ronny Kohavi (ronnyk at CS.Stanford.EDU, http://robotics.stanford.edu/~ronnyk)

From mike at PARK.BU.EDU Tue Jan 24 18:22:15 1995
From: mike at PARK.BU.EDU (mike@PARK.BU.EDU)
Date: Tue, 24 Jan 1995 18:22:15 -0500
Subject: Boston University CAS/CNS Spring Seminar Announcement
Message-ID: <199501242322.SAA03941@space.bu.edu>

Spring 1995 Colloquium Series
CENTER FOR ADAPTIVE SYSTEMS AND
DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS
BOSTON UNIVERSITY

January 27
APPLICATIONS OF NEURAL NETWORKS TO TELECOMMUNICATIONS
Dr. Joshua Alspector, Neural Network Research Group, Bellcore

February 3
NETWORKS THAT LEARN AND HOW THE BRAIN SHOULD WORK
Professor Tomaso Poggio, Department of Brain and Cognitive Sciences, MIT

February 10
PLANNING AND LEARNING USING MARKOV DECISION MODELS
Professor Leslie Pack Kaelbling, Department of Computer Science, Brown University

February 24
A DYNAMIC MODEL OF DISCONTINUITIES IN COGNITIVE AND BRAIN DEVELOPMENT
Professor Kurt Fischer, Human Development and Psychology, Harvard University

March 3
BAYESIAN LEARNING IN MODULAR AND HIERARCHICAL NEURAL NETWORKS
Professor Michael Jordan, Department of Brain and Cognitive Sciences, MIT

March 17
VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Speakers: B. Julesz, K. Nakayama, S. Grossberg, P. Cavanagh, V.S. Ramachandran, E. Thompson, D. Dennett.
Time: 8:30 A.M. - 6:00 P.M.
Place: GSU Conference Auditorium, 2nd floor, 775 Commonwealth Avenue

March 24
ON THE GEOMETRY OF PERCEIVED SPACE
Professor James Todd, Department of Psychology, Ohio State University

April 7
HOW DOES THE BRAIN GENERATE SENSORY-MOTOR BEHAVIOR? A COMPUTATIONAL FIELD THEORY FOR CONTROLLING RAPID EYE MOVEMENTS
Dr. Lance Optican, Laboratory of Sensorimotor Research, National Eye Institute, NIH

April 21
PARALLEL CEREBRAL MEMORY SYSTEMS
Dr. Mortimer Mishkin, Laboratory of Neuropsychology, NIMH

April 28
STATISTICAL METHODS IN LARGE VOCABULARY CONTINUOUS SPEECH RECOGNITION
Larry Gillick, Dragon Systems

All Talks (except March 17) on Fridays at 2:30 PM
Refreshments at 2:00 PM in Room 101
2 Cummington Street, Boston

From mark at dcs.kcl.ac.uk Wed Jan 25 04:32:25 1995
From: mark at dcs.kcl.ac.uk (Mark Plumbley)
Date: Wed, 25 Jan 1995 09:32:25 +0000
Subject: Reader in Mathematics & Neural Networks, King's College London
Message-ID: <9501250932.AA05063@helium.dcs.kcl.ac.uk>

KING'S COLLEGE LONDON
SCHOOL OF PHYSICAL SCIENCES AND ENGINEERING
READER IN MATHEMATICS

Applications are invited for the established post of Reader in the Department of Mathematics with effect from 1 September 1995. The successful candidate will have achieved research distinction and have outstanding research potential in mathematics and neural networks, or a mathematically-based discipline (eg theoretical physics, statistics or information processing) and neural networks.

Salary will be in the range 29,152 Pounds to 32,667 Pounds per annum inclusive of 2,134 Pounds London Allowance per annum.

Application forms and further particulars may be obtained from the Personnel Officer, School of Physical Sciences and Engineering, King's College London, Strand, London WC2R 2LS, UK, tel. +44 (0)171 873 2427 or email H.Holland at kcl.ac.uk. The closing date for completed applications is 3 March 1995. Please quote reference A4/CM/2/95.

Equality of opportunity is College policy.

---------------------------------------------------------------------------
Dr. Mark D. Plumbley                        mark at dcs.kcl.ac.uk
|_/ I N G'S    Centre for Neural Networks
| \ College    Department of Electronic & Electrical Engineering
L O N D O N    King's College London
               Strand/London WC2R 2LS/UK
Founded 1829   Tel +44 (0)171 873 2241   Fax +44 (0)171 873 2851
---------------------------------------------------------------------------

From keithm at PARK.BU.EDU Wed Jan 25 09:35:34 1995
From: keithm at PARK.BU.EDU (Keith McDuffee)
Date: Wed, 25 Jan 1995 10:35:34 -0400
Subject: 1995 WORLD CONGRESS ON NEURAL NETS MEETING
Message-ID:

REVISED CALL FOR PAPERS
WORLD CONGRESS ON NEURAL NETWORKS
1995 ANNUAL MEETING OF THE INTERNATIONAL NEURAL NETWORK SOCIETY
JULY 17-21, 1995
RENAISSANCE HOTEL/WASHINGTON, DC

SPECIAL FEATURES: March 1 Due Date, Reduced registration fee, both CD-ROM and paper proceedings.

Four-page papers are due by 1 March 1995. Note the change in deadline date. Authors must submit registration payment with papers to be eligible for the early registration fee. A $35 publication fee must accompany each submission; the conference committee will refund the fee if it rejects the paper. The $35 publication fee helps defray the Proceedings cost and allows the conference committee to offer a lower registration fee. The 1995 registration-plus-publication fee of $205 is comparable to the 1994 registration fee. This service has been provided to make the meeting more affordable for attendees who do not plan to have published articles in the proceedings. Please make checks payable to INNS and include with submitted paper.

For review purposes, please submit six (6) copies (1 original, 5 copies) plus 3 1/2" disk (see instructions below), four page limit, in English. $20 per page for papers exceeding (4) pages (do not number pages). Checks for over-length charges should be made out to INNS and must be included with submitted paper. Papers must be on 8 1/2" x 11" white paper with 1" margins on all sides, one column format, single spaced, in Times or similar type style of 10 points or larger, one side of paper only. FAX's not acceptable. Centered at top of first page should be complete title, author name(s), affiliation(s), and mailing address(es), followed by blank space, abstract (up to 15 lines), and text.

The following information MUST be included in an accompanying cover letter in order for the paper to be reviewed: Full title of paper, corresponding author and presenting author name, address, telephone and fax numbers. Technical Session (see session topics) 1st and 2nd choices, oral or poster presentation preferred, audio-visual requirements (for oral presentations only). Papers submitted which do not meet these requirements or for which insufficient funds are submitted will be returned.

For the first time, the proceedings of the 1995 World Congress on Neural Networks will be distributed on CD-ROM. The CD-ROM Proceedings are included in your registration fee. Accepted papers will appear in BOTH CD-ROM and paper Proceedings format. Format a 3 1/2" disk for CD-ROM: Once paper is proofed, completed and printed for review, reformat the paper in Landscape format, page size 8" x 5" for CD. You may include a separate file with 1 paragraph biographical information with your name, company, address and telephone number. Presenters should submit their papers in one of the following Macintosh or Microsoft Windows formats: Microsoft Word, WordPerfect, FrameMaker, Quark or Quark Professional, PageMaker, Persuasion, ASCII, PowerPoint, Adobe PDF, Postscript (text, not EPS). Images can be submitted in TIF or PCX format.
If submitting a previously unpublished paper, author agrees to the transfer of the copyright to INNS for the conference proceedings. All submitted papers become the property of INNS. Papers and disk to be sent to: WCNN'95, 875 Kings Highway, Suite 200, Woodbury, New Jersey 08096-3172; Tel: 609-845-1720, Fax: 609-853-0411, e-mail: 74577.504 at compuserve.com.

Registration Fees:

Category       Pre-registration          Pre-registration          On-Site
               prior to March 1, 1995    prior to June 16, 1995
INNS Member    $170.00                   $250.00                   $350.00
Non-member**   $270.00                   $380.00                   $480.00
Student***     $ 85.00                   $110.00                   $135.00

**Registration fee includes 1995 membership and a one (1) year subscription to the Journal Neural Networks.
***Student registration must be accompanied by a letter of verification from department chairperson. Any student registration received with no verification letter will be processed at the higher member or non-member fee, depending on current membership status. Copies of student identification cards are NOT acceptable. This also applies to on-site registration.

ORGANIZING COMMITTEE
John G. Taylor, General Chair
Walter J. Freeman
Harold Szu
Rolf Eckmiller
Shun-ichi Amari
David Casasent

INNS OFFICERS
President: John G. Taylor
President-Elect: Shun-ichi Amari
Past President: Walter J. Freeman
Secretary: Gail Carpenter
Treasurer: Judith Dayhoff
Executive Director: R. K. Talley

GOVERNING BOARD
Shun-ichi Amari, Daniel Alkon, James A. Anderson, Daniel Levine, David Casasent, Leon Cooper, Rolf Eckmiller, Francoise Fogelman-Soulie, Kunihiko Fukushima, Stephen Grossberg, Christof Koch, Bart Kosko, Christoph von der Malsburg, Alianna Maren, Paul Werbos, Bernard Widrow, Lotfi A. Zadeh

PROGRAM COMMITTEE
Shun-ichi Amari, James A. Anderson, Kaveh Ashenayi, Etienne Barnard, Andrew R. Barron, Andrew Barto, Theodore Berger, Horacio Bouzas, Artie Briggs, Gail Carpenter, David Casasent, Ralph Castain, Huisheng Chi, Leon Cooper, Judith Dayhoff, Nick DeClaris, Rolf Eckmiller, Jeff Elman, Terrence L. Fine, Gary Fleming, Francoise Fogelman-Soulie, Walter J. Freeman, Kunihiko Fukushima, Apostolos Georgopoulos, Stephen Grossberg, John B. Hampshire II, Michael Hasselmo, Robert Hecht-Nielsen, Akira Iwata, Jari Kangas, Bert Kappen, Christof Koch, Teuvo Kohonen, Kenneth Kreutz-Delgado, Clifford Lau, Soo-Young Lee, George Lendaris, Sam Leven, Daniel S. Levine, William B. Levy, Christoph von der Malsburg, Alianna Maren, Lina Massone, Lance Optican, Robert Pap, Richard Peterson, Paul Refenes, Mohammad Sayeh, Madan G. Singh, Dejan Sobajic, Jeffrey Sutton, Harold Szu, John G. Taylor, Brian Telfer, Shiro Usui, Andreas Weigend, Paul Werbos, Hal White, Bernard Widrow, Daniel Wolpert, Mona E. Zaghloul

PLENARY SPEAKERS:
Daniel L. Alkon, U.S. National Institutes of Health
Shun-ichi Amari, University of Tokyo
Gail Carpenter, Boston University
Walter J. Freeman, University of California, Berkeley
Teuvo Kohonen, Helsinki University of Technology
Harold Szu, Naval Surface Warfare Center
John G. Taylor, King's College London

SESSION TOPICS AND CHAIRS:
1. Biological Vision: Rolf Eckmiller, Shiro Usui
2. Machine Vision: Kunihiko Fukushima, Robert Hecht-Nielsen
3. Speech and Language: Jeff Elman, Richard Peterson
4. Biological Sensory-Motor Control: Andrew Barto, Lina Massone
5. Neurocontrol and Robotics: Paul Werbos, Kaveh Ashenayi
6. Supervised Learning: Andrew R. Barron, Terrence L. Fine, Soo-Young Lee
7. Unsupervised Learning: Teuvo Kohonen, Francoise Fogelman-Soulie
8. Pattern Recognition: David Casasent, Brian Telfer
9. Prediction and System Identification: John G. Taylor, Paul Werbos
10. Cognitive Neuroscience: James Anderson, Jeffrey Sutton
11. Links to Cognitive Science & Artificial Intelligence: Alianna Maren, George Lendaris
12. Signal Processing: Bernard Widrow, Horacio Bouzas
13. Neurodynamics and Chaos: Harold Szu, Mona E. Zaghloul, DeLiang Wang
14. Hardware Implementation: Clifford Lau, Ralph Castain, Mohammad Sayeh
15. Associative Memory: Christoph von der Malsburg, Gary Fleming, Huisheng Chi
16. Applications: Leon Cooper, Robert Pap, Dejan Sobajic
17. Circuits and Systems Neuroscience: Stephen Grossberg, Lance Optican
18. Mathematical Foundations: Shun-ichi Amari, D.S. Levine
19. Evolutionary Computing, Genetic Algorithms: Judith Dayhoff, Vasant Honavar

SHORT COURSES:
a. Pattern Recognition and Neural Nets: David Casasent, Carnegie Mellon University
b. Modelling Consciousness: John G. Taylor, King's College London
c. Neocognitron and the Selective Attention Model: Kunihiko Fukushima, Osaka University
d. What are the Differences & the Similarities Among Fuzzy, Neural, & Chaotic Systems: Takeshi Yamakawa, Kyushu Institute of Technology
e. Image Processing & Pattern Recognition by Self-Organizing Neural Networks: Stephen Grossberg, Boston University
f. Dynamic Neural Networks: Signal Processing & Coding: Judith Dayhoff, University of Maryland
g. Language and Speech Processing: Jeff Elman, University of California-San Diego
h. Introduction to Statistical Theory of Neural Networks: Shun-ichi Amari, University of Tokyo
i. Cognitive Network Computation: James Anderson, Brown University
j. Biology-Inspired Neural Networks: From Brain Research to Applications in Technology & Medicine: Rolf Eckmiller, University of Dusseldorf
k. Neural Control Systems: Bernard Widrow, Stanford University
l. Neural Networks to Advance Intelligent Systems: Alianna Maren, Accurate Automation Corporation
m. Reinforcement Learning: Andrew G. Barto, University of Massachusetts
n. Advanced Supervised-Learning Algorithms and Applications: Francoise Fogelman-Soulie, SLIGOS
o. Neural Network & Statistical Methods for Function Estimation: Vladimir Cherkassky, University of Minnesota
p. Adaptive Resonance Theory: Gail A. Carpenter, Boston University
q. What Have We Learned from Experiences of Real World Applications in NN/FS/GA?: Hideyuki Takagi, Matsushita Electric Industrial Co., Ltd.
r. Fuzzy Function Approximation: Julie A. Dickerson, University of Southern California
s. Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs: Lotfi A. Zadeh, University of California-Berkeley
t. Overview of Neuroengineering and Supervised Learning: Paul Werbos, National Science Foundation

INDUSTRIAL ENTERPRISE DAY: Monday, July 17, 1995
Enterprise Session: Chair: Robert Hecht-Nielsen, HNC, Inc.
Industrial Session: Chair: Takeshi Yamakawa, Kyushu Institute of Technology

FUZZY NEURAL NETWORKS: Tuesday, July 18, 1995 and Wednesday, July 19, 1995
Co-Chairs: Bart Kosko, University of Southern California
Ronald R. Yager, Iona College

SPECIAL SESSIONS:
Neural Network Applications in the Electrical Utility Industry
Biomedical Applications & Imaging/Computer Aided Diagnosis in Medical Imaging
Statistics and Neural Networks
Dynamical Systems in Financial Engineering
Mind, Brain and Consciousness
Physics and Neural Networks
Biological Neural Networks

To obtain additional information (complete registration brochure, registration and hotel forms) contact WCNN'95, 875 Kings Highway, Suite 200, Woodbury, New Jersey 08096-3172 USA, Tel: (609)845-1720; Fax: (609)853-0411; e-mail: 74577.504 at compuserve.com

From jb at informatik.uni-bonn.de Thu Jan 26 00:20:10 1995
From: jb at informatik.uni-bonn.de (jb@informatik.uni-bonn.de)
Date: Wed, 25 Jan 95 19:20:10 -1000
Subject: paper announcement
Message-ID: <9501251822.AA05025@olymp.informatik.uni-bonn.de>

The following papers are available by anonymous ftp:

------------------------------------------------------------------------
FTP-host: atlas.cs.uni-bonn.de (131.220.10.29)
FTP-file: pub/papers/hofmann.nips94.ps.gz
------------------------------------------------------------------------

Multidimensional Scaling and Data Clustering
T. Hofmann and J. Buhmann
Rheinische Friedrich--Wilhelms--Universitaet
Institut fuer Informatik III
Roemerstrasse 164
D-53117 Bonn, Germany

Abstract: Visualizing and structuring pairwise dissimilarity data are difficult combinatorial optimization problems known as "multidimensional scaling" or "pairwise data clustering". Algorithms for embedding a dissimilarity data set in a Euclidean space, for clustering these data, and for actively selecting data to support the clustering process are discussed in the maximum entropy framework. Active data selection provides a strategy to discover structure in a data set efficiently with partially unknown data.

------------------------------------------------------------------------
FTP-host: atlas.cs.uni-bonn.de (131.220.10.29)
FTP-file: pub/papers/buhmann.icpr94.ps.gz
------------------------------------------------------------------------

A Maximum Entropy Approach to Pairwise Data Clustering

Abstract: Partitioning a set of data points which are characterized by their mutual dissimilarities instead of an explicit coordinate representation is a difficult, NP-hard combinatorial optimization problem. We formulate this optimization problem of a pairwise clustering cost function in the maximum entropy framework using a variational principle to derive corresponding data partitionings in a d-dimensional Euclidean space. This approximation solves the embedding problem and the grouping of these data into clusters simultaneously and in a self-consistent fashion.

From hinton at cs.toronto.edu Wed Jan 25 17:32:19 1995
From: hinton at cs.toronto.edu (Geoffrey Hinton)
Date: Wed, 25 Jan 1995 17:32:19 -0500
Subject: job advertisement
Message-ID: <95Jan25.173227edt.794@neuron.ai.toronto.edu>

I am forwarding the following message. The bits in parentheses are my own comments and should not be construed as representing the views, official or otherwise, of the University of Toronto.

NON-TENURE TRACK, LIMITED TERM FACULTY POSITIONS

The Department of Computer Science, University of Toronto, has received funding from various granting agencies. Funding permitting, some Limited Term Faculty positions are available in all areas of Computer Science. Applications should be sent by January 31st, 1995 to: Professor Wayne H. Enright, Chairman, Department of Computer Science, University of Toronto, Toronto, Ontario, M5S 1A4, Canada.
(but if you work in neural networks it would help to also send a copy to Geoff Hinton at the same address. If your application is a few days late and you are a strong candidate I'll try to make sure you get considered. The phone number for courier packages is 416-978-3707, but please don't call).

In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents of Canada. (Because of NAFTA, residents or citizens of the USA or Mexico also have a chance, but others do not.)

In accordance with its Employment Equity Policy, the University of Toronto encourages applications from qualified women or men, members of visible minorities, aboriginal peoples, and persons with disabilities.

From isabelle at research.att.com Thu Jan 26 16:16:40 1995
From: isabelle at research.att.com (Isabelle Guyon)
Date: Thu, 26 Jan 95 16:16:40 EST
Subject: No subject
Message-ID: <9501262114.AA03389@big.info.att.com>

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
<        ICANN industrial 1 day workshop:          >
<          Neural network applications             >
<    to DOCUMENT ANALYSIS and RECOGNITION          >
<            Paris, October 1995                   >
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

* Layout and logical structure analysis of documents.
* Map drawing and understanding.
* OCR and handwriting recognition (off-line and on-line).
* Multimedia document processing.
* Image/text retrieval, automated indexing.
* User interfaces to electronic libraries.
* Image/text compression.

This workshop will be a forum for application researchers and developers to present their systems and discuss taboo subjects, including:
- hybrid solutions,
- solutions that work (don't know why),
- solutions that do not work (though theoretically optimum),
- hacks, tricks and miscellaneous occult methods,
- marketing advantage/disadvantage of saying that there is a NN in the system.

The condition of acceptance will not be the novelty of the algorithms but the existence of a working "money making" application or at least a working prototype with a path towards industrialization. The performance of the system should be measured quantitatively, preferably using known benchmarks or comparisons with other systems. As for regular scientific papers, every statement should be properly supported by experimental evidence, statistics or references. Demonstrations and videos are encouraged.

*** Submission deadline of a 6 page paper = March 20, 1995 ***

Send 4 paper copies to:
Isabelle Guyon
----------------------
AT&T Bell Laboratories
955 Creston Road
Berkeley, CA 94708, USA

Electronic formats available at:
ftp lix.polytechnique.fr
login: anonymous
password: your e-mail address
ftp> cd /pub/ICANN95/out

For more information on ICANN write to isabelle at research.att.com.

From ngr at atlas.ex.ac.uk Thu Jan 26 13:25:34 1995
From: ngr at atlas.ex.ac.uk (ngr@atlas.ex.ac.uk)
Date: Thu, 26 Jan 95 18:25:34 GMT
Subject: Special Issue of Connection Science
Message-ID: <27088.9501261825@elnath.dcs.exeter.ac.uk>

Please could you distribute the following outline of the current special issue of Connection Science. Thanks

Niall Griffith.

-----------------------------------------------------------
Special Issue of Connection Science on Music and Creativity
-----------------------------------------------------------

We thought people would like to know that a new collection of work on connectionist models of musical cognition and artistic creativity has appeared in print this month.
The collection is a double issue of the journal Connection Science, volume 6, nos. 2&3, covering aspects of musical perception, conception, and action, and the generation of visual art. Some of the papers in this double issue are very interesting from a computational point of view as well, beyond their specific application domain. We hope you enjoy the issue and find it useful, and we welcome your comments and updates about further work in this area for future collections such as this.

Niall Griffith and Peter Todd

(Please note: Single copies of this double issue are available, at a cost of $93.50. A book version of this double issue is also planned for the near future.)

---------------------------------------------------------------------
Niall Griffith, Department of Computer Science, University of Exeter, Prince of Wales Road, Exeter, EX4 4PT UK
Email: ngr at dcs.exeter.ac.uk

Peter Todd, Department of Psychology, University of Denver, 2155 S. Race Street, Denver, CO 80208 USA
Email: ptodd at edu.du.psy
---------------------------------------------------------------------

Contents of Connection Science 6(2-3), 1994:

0. Niall Griffith & Peter Todd
   Editorial: Process and representation in connectionist models of musical structure
1. Ian Taylor & Mike Greenhough
   Modelling pitch perception with adaptive resonance theory artificial networks
2. Niall Griffith
   Developing tonal centres and abstract pitch as categorisations of pitch-use
3. Edward Large & John Kolen
   Resonance and the perception of musical meter
4. Steven Smoliar
   Modelling musical perception: A critical view
5. Michael Page
   Modelling the perception of musical sequences with self-organizing neural networks
6. Michael Mozer
   Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multiscale processing
7. Matthew Bellgard & C. Tsang
   Harmonizing music the Boltzmann way
8. Bruce Katz
   An ear for melody
9. Shumeet Baluja, Dean Pomerleau & Todd Jochem
   Towards automated artificial evolution for computer generated images
10. Michael Casey
    Understanding musical sound with forward models and physical models

From john at dcs.rhbnc.ac.uk Fri Jan 27 10:19:09 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Fri, 27 Jan 95 15:19:09 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199501271519.PAA11903@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): two new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-001:
----------------------------------------
Worst-Case Analysis of the Bandit Problem
by Peter Auer, Technische Universität Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria.
Nicolò Cesa-Bianchi, DSI, University of Milan, Via Comelico 39, I-20135 Milano, Italy.

Abstract: The multi-armed bandit is a classical problem in the area of sequential decision theory and has been studied under a variety of statistical assumptions. In this work we investigate the bandit problem from a purely worst-case standpoint.
We present a randomized algorithm with an expected total reward of $G-O(G^{4/5}K^{6/5})$ (disregarding log factors), where $K$ is the number of arms and $G$ is the (unknown) total reward of the best arm. Our analysis holds with no assumptions whatsoever on the way rewards are generated, other than being independent of the algorithm's randomization. Our results can also be interpreted as a novel extension of the on-line prediction model, an intensively studied framework in learning theory.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-004:
----------------------------------------
Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
by Kurt Hornik, Technische Universität Wien,
Maxwell Stinchcombe, University of California, San Diego,
Halbert White, University of California, San Diego,
Peter Auer, Technische Universität Graz

Abstract: Barron (1993) has given rates for hidden layer feedforward networks with sigmoid activation functions approximating a class of functions satisfying a certain smoothness condition. These rates do not depend on the dimension of the input space. We extend Barron's results to feedforward networks with possibly non-sigmoid activation functions approximating mappings and their derivatives simultaneously. Our conditions are similar but not identical to Barron's, but we obtain the same rates of approximation, showing that the approximation error decreases at rates as fast as $n^{-\frac{1}{2}}$, where $n$ is the number of hidden units. The dimension of the input space appears only in the constants of our bounds.

-----------------------
The Report NC-TR-95-001 can be accessed and printed as follows

% ftp cscx.cs.rhbnc.ac.uk  (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-001.ps.Z
ftp> bye
% zcat nc-tr-95-001.ps.Z | lpr -l

Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

Best wishes
John Shawe-Taylor

From edelman at wisdom.weizmann.AC.IL Fri Jan 27 11:30:59 1995
From: edelman at wisdom.weizmann.AC.IL (Edelman Shimon)
Date: Fri, 27 Jan 1995 16:30:59 GMT
Subject: 2 new TRs: face recognition, shape similarity
Message-ID: <199501271630.QAA23426@lachesis.wisdom.weizmann.ac.il>

----------------------------------------------------------------------
URL: ftp://eris.wisdom.weizmann.ac.il/pub/maria-tr.ps.Z

Maria Lando and Shimon Edelman
Generalization from a single view in face recognition

We describe a computational model of face recognition, which generalizes from single views of faces by taking advantage of prior experience with other faces, seen under a wider range of viewing conditions. The model represents face images by vectors of activities of graded overlapping receptive fields (RFs). It relies on high spatial frequency information to estimate the viewing conditions, which are then used to normalize (via a transformation specific for faces), and identify, the low spatial frequency representation of the input. The class-specific transformation approach allows the model to replicate a series of psychophysical findings on face recognition, and constitutes an advance over current face recognition methods, which are incapable of generalization from a single example.
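[Editorial aside: to make the receptive-field encoding mentioned in the abstract concrete, here is a minimal toy sketch, assuming Gaussian RFs on a regular grid; the grid size, RF width, and all names below are this sketch's own illustrative choices, not parameters from the report.]

# Toy sketch: encode an image as activities of graded, overlapping
# Gaussian receptive fields (RFs). Grid size and sigma are illustrative
# assumptions only, not values from the report.
import numpy as np

def rf_activities(image, grid=8, sigma=4.0):
    # Project a 2-D grayscale image onto a grid x grid array of Gaussian
    # RFs; each activity is the RF-weighted sum of the image pixels.
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    acts = []
    for cy in np.linspace(0, h - 1, grid):
        for cx in np.linspace(0, w - 1, grid):
            rf = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            acts.append(np.sum(rf * image))
    return np.asarray(acts)           # vector of length grid * grid

image = np.random.rand(32, 32)        # stand-in for a face image
print(rf_activities(image).shape)     # -> (64,)

Because neighbouring RFs overlap and their responses are graded, the resulting activity vector varies smoothly as the input image is shifted or scaled, which is what makes such a representation a convenient substrate for the view-normalizing transformations the abstract describes.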
22 pages; uncompressed Postscript file size: 3563304 bytes (600dpi)
(a shorter, 6-page version is also available, as maria-short.ps.Z)

----------------------------------------------------------------------
URL: ftp://eris.wisdom.weizmann.ac.il/pub/cs-tr-95-01.ps.Z

Florin Cutzu and Shimon Edelman
Explorations of shape space

Using a small number of prototypical reference objects to span the internal shape representation space has been suggested as a general approach to the problem of object representation in vision (Edelman, Minds and Machines 5, 1995, in press). We have investigated the ability of human subjects to form the low-dimensional metric shape representation space predicted by this approach. In each of a series of experiments, which involved pairwise similarity judgment and delayed match to sample, subjects were confronted with several classes of computer-rendered 3D animal-like shapes, arranged in a complex pattern in a common high-dimensional parameter space. We combined response time and error rate data into a measure of view similarity, and submitted the resulting proximity matrix to nonmetric multidimensional scaling (MDS). In the two-dimensional MDS solution, views of the same shape were invariably clustered together, and, in each experiment, the relative geometrical arrangement of the view clusters of the different objects reflected the true low-dimensional structure in parameter space (star, triangle, square, line) that defined the relationships between the stimulus classes. These findings are now used to guide the development of a detailed computational theory of shape vision based on similarity to prototypes.

33 pages; uncompressed Postscript file size: 3887463 bytes (600dpi)

----------------------------------------------------------------------
Related papers available at URL http://www.wisdom.weizmann.ac.il/~edelman/archive.html

Comments are welcome.

-Shimon

Shimon Edelman
E-MAIL: edelman at wisdom.weizmann.ac.il
TEL: +972-8-342856   FAX: +972-8-344122
WWW: http://eris.wisdom.weizmann.ac.il/~edelman/shimon.html
Dept. of Appl. Math. & CS, Weizmann Institute of Science, Rehovot 76100, ISRAEL

From alpaydin at boun.edu.tr Mon Jan 23 10:14:52 1995
From: alpaydin at boun.edu.tr (Ethem Alpaydin)
Date: Mon, 23 Jan 1995 10:14:52 -0500 (EST)
Subject: Workshop on Soft Computing Methods for Pattern Recognition
Message-ID:

Dear colleague,

SOCO'95, the "International ICSC Symposium on Soft Computing: Fuzzy Logic, Artificial Neural Networks and Genetic Algorithms", is to be held at the Rochester Institute of Technology, Rochester, New York, USA, October 24-27, 1995. We intend to organize a half-day workshop on "Soft Computing Methods for Pattern Recognition" within SOCO'95. The target of the workshop we are proposing is the cross-fertilization of complementary approaches to pattern recognition based on classical methods, knowledge-based systems, neural networks, fuzzy theory, probabilistic reasoning, genetic algorithms, etc.

If you are interested in submitting a paper, please contact us as soon as possible stating the tentative title of your paper and 5 key-words. We will send you more instructions for the 500-word abstract to be sent by February 28, 1995. Please indicate your postal address and fax on your submission. We can send you more information on SOCO'95 at your request.

Looking forward to hearing from you,
Sincerely,
F. Masulli and E. Alpaydin

------------------------------
Dr. Francesco Masulli
University of Genoa
Department of Physics
Via Dodecaneso 33
16146 Genova, Italy
Voice: +39 10 353 6297
Fax: +39 10 362 2790
Email: masulli at genova.infn.it

Dr. Ethem Alpaydin
Bogazici University
Department of Computer Engineering
TR-80815 Istanbul, Turkey
Voice: +90 212 263-1540 x 1862
Fax: +90 212 265-8488
Email: alpaydin at boun.edu.tr

From NEUROCOG at vms.cis.pitt.edu Fri Jan 27 16:58:03 1995
From: NEUROCOG at vms.cis.pitt.edu (NEUROCOG@vms.cis.pitt.edu)
Date: Fri, 27 Jan 1995 17:58:03 -0400 (EDT)
Subject: Functional MRI Conference March 25 San Francisco
Message-ID: <01HMCLXMSUVMDA8P0U@vms.cis.pitt.edu>

Functional Magnetic Resonance Imaging (fMRI) workshop:
How to interpret it
How to do it

Satellite workshop before the Cognitive Neuroscience meeting
Saturday, March 25, 1995
Fairmont Hotel
San Francisco, California

At the conference you will learn:
How to interpret functional Magnetic Resonance Imaging (fMRI) data
What is needed to do effective fMRI imaging
Specific techniques for fMRI imaging
Sample protocols and procedures
A review of recent fMRI results

Conference directors
Walter Schneider, University of Pittsburgh
G. R. Mangun, University of California, Davis

Additional faculty
Michael Buonocore, University of California, Davis
George Carman, Sloan Center Theor. Neurobiology, Salk Institute
BJ Casey, University of Pittsburgh Medical Center
Jonathan Cohen, Carnegie Mellon Univ. & Univ. of Pittsburgh
Neal Cohen, University of Illinois
John Desmond, Stanford University
Anders Dale, U. of Oslo & U. of California, San Diego
Steve Engel, Stanford University
Peter Fox, University of Texas
Karl Friston, Hammersmith Hospital
Gary Glover, Stanford University
James Haxby, National Institute of Mental Health
Marty Sereno, University of California, San Diego
Steve Small, University of Pittsburgh Medical Center
Robert Savoy, Massachusetts Gen. Hos. NMR Center
Leslie Ungerleider, National Institute of Mental Health
Robert Weisskoff, Massachusetts Gen. Hos. NMR Center

Program schedule
Saturday, March 25, 1995

8:15 Plenary session overview
Walter Schneider, G. R. Mangun, Robert Savoy: Perspectives on brain functional imaging and role of fMRI.
What is fMRI, examples of use, brain mapping, limitations of technique, example simulated session

9:15 The physics of fMRI
Gary Glover, discussant Robert Weisskoff: How MRI works, hemodynamic response, magnet strength, spatial and temporal resolution, pulse sequences (conventional, spiral, echo-planar), coils (surface, head), pulse programming for fMRI, optimization of parameters: thickness, flip, dealing with artifacts (banding, movement, echoes)

10:15 (15 minute break)

10:30 Experimental design & control
Walter Schneider; discussant James Haxby: Stimulus presentation/response collection in the MRI, head constraint, replicability, experimental task design, scan planning for fMRI; signal tradeoffs of space, time, condition sequencing, use of scout fMRI images, co-registration across runs, data management, MRI & experiment synchronization

11:30 Data analysis
James Haxby, Karl Friston: Statistical procedures, particle analysis, area measurements, correction for multiple tests, eigenvector spaces, averaging within and between subjects, power spectrum analysis

12:30-1:30 Lunch Break

1:30 Localization within and between subjects
George Carman, Anders Dale & Marty Sereno, Peter Fox: Within and between session registration, between subject registration, 3d space coordinates, converting 3d space into 2d maps, co-registration with other modalities

2:30 Scanning in different regions & subjects
Sensory processes - John Engel: Vision, audition, tactile, motor
Language processing - Steve Small: Reading, speaking, listening
Complex cognitive processing - Jonathan Cohen: Memory, emotion
Subcortical - Neal Cohen: Hippocampus, LGN

3:40 (15 minute break)

3:55 Scanning children & development
BJ Casey: Effects of changes in brain size, cortical maturation, age related hemodynamic responsiveness; screening subjects, getting comfortable in magnet, minimizing movement

4:15 Clinical applications
John Desmond: Functional mapping and surgery planning

4:45 fMRI in the neuroscience of learning
Leslie Ungerleider: the role of fMRI in the neuroscience assessment of cortical plasticity in the transient and enduring effects of learning

5:30 break (15 minutes)

5:45 Administrative & training considerations
Walter Schneider, Robert Savoy, round table: Overview, safety considerations, costs, medical staffing, human subjects review, training opportunities

6:15 MRI demonstration stations
Simulated "hands on experience". There will be a series of short presentations around the room, 15 minutes per station.
Subject preparation: Bite bar, screening, subject selection, running special patient populations, land marking, subject running, subject safety
Stimulus presentation: Visual stimuli, auditory stimuli, response collection, heart rate monitoring, example stimulus paradigms (checkerboards, hand squeeze, memory updating, correlational mapping)
MRI magnet room: Scan types, structural scanning, MRA, 3D, functional scans, acoustic effects and types of images obtained at each step
Statistical processing of data: Significance of activation, correlation, data visualization, registration, labeling, relating of data

7:15 End of workshop

Location
The conference will be at the Fairmont Hotel immediately preceding the 2nd annual meeting of the Cognitive Neuroscience Society (March 26-28). Fairmont Hotel telephone 415-772-5000 or mail to Group Reservations, Fairmont Hotel, 950 Mason Street, San Francisco, CA 94108.
From thimm at idiap.ch Sun Jan 29 05:42:15 1995
From: thimm at idiap.ch (Georg Thimm)
Date: Sun, 29 Jan 95 11:42:15 +0100
Subject: Paper available
Message-ID: <9501291042.AA15409@idiap.ch>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/thimm.gain.ps.Z
6 pages, compressed file size: 63685 bytes

The file thimm.gain.ps.Z is now available for copying from the Neuroprose
archive:

  The Interchangeability of Learning Rate and Gain in
  Backpropagation Neural Networks

  G. Thimm, P. Moerland, and E. Fiesler

Abstract: The backpropagation algorithm is widely used for training
multilayer neural networks. In this publication the gain of its activation
function(s) is investigated. Specifically, it is proven that changing the
gain of the activation function is equivalent to changing the learning rate
and the weights. This simplifies the backpropagation learning rule by
eliminating one of its parameters. The theorem can be extended to hold for
some well-known variations on the backpropagation algorithm, such as using
a momentum term, flat spot elimination, or adaptive gain. Furthermore, it
is successfully applied to compensate for the non-standard gain of optical
sigmoids for optical neural networks.

Keywords: neural network, neural computation, neural computing,
connectionism, neurocomputing, multilayer neural network, backpropagation,
(sigmoid) steepness, gain, slope, temperature, adaptive gain, (steep)
activation function, (adaptive) learning rate, initial weight, momentum,
flat spot elimination, weight discretization, threshold, bias, optical
implementation.

Please do not reply directly to this message.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get thimm.gain.ps.Z
ftp> quit
unix> zcat thimm.gain.ps.Z | lpr

This paper is also available from the URL
http://www.idiap.ch/pub/papers/neural/thimm.gain.ps.Z

Sorry, no paper copies available.

Regards,

Georg Thimm
--------------------------------------------------------------
Georg Thimm                            E-mail: thimm at idiap.ch
Institut Dalle Molle d'Intelligence    Fax:    ++41 26 22 78 18
Artificielle Perceptive (IDIAP)        Tel.:   ++41 26 22 76 64
Case Postale 592                       WWW:    http://www.idiap.ch
1920 Martigny / Suisse
--------------------------------------------------------------
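The stated equivalence is easy to check numerically for a single sigmoid
unit: a unit with gain gamma, weights w, and learning rate eta evolves
exactly like a unit with gain 1, weights gamma*w, and learning rate
gamma^2*eta (the two weight vectors stay related by the factor gamma). A
minimal sketch, assuming squared error and a logistic activation; the toy
data and parameter values are invented, and this is not code from the
paper:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(w, x, t, eta, gain):
    """One gradient step on E = 0.5*(y - t)**2 with y = sigmoid(gain * w.x)."""
    y = sigmoid(gain * (w @ x))
    grad = (y - t) * y * (1.0 - y) * gain * x   # dE/dw
    return w - eta * grad, y

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # toy input
t = 0.8                          # toy target
w = rng.normal(size=3)           # shared initial weights
gamma, eta = 2.5, 0.1

wA, yA = step(w, x, t, eta, gamma)                    # gain gamma
wB, yB = step(gamma * w, x, t, gamma**2 * eta, 1.0)   # gain 1, rescaled
print(np.allclose(yA, yB))           # True: identical outputs
print(np.allclose(gamma * wA, wB))   # True: weights still related by gamma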
From usui at bpel.tutics.tut.ac.jp Mon Jan 30 00:45:19 1995
From: usui at bpel.tutics.tut.ac.jp (Shiro USUI)
Date: Mon, 30 Jan 1995 14:45:19 +0900
Subject: Paper available
In-Reply-To: Georg Thimm's message of Sun, 29 Jan 95 11:42:15 +0100 <9501291042.AA15409@idiap.ch>
Message-ID: <199501300545.OAA02621@mozart.tutics.tut.ac.jp>

Dear Dr. G. Thimm,

We read your paper entitled "The Interchangeability of Learning Rate and
Gain in Backpropagation Neural Networks" and noticed that the motivations
and results of your paper are very close to those of our previous papers:

  Q. Jia, K. Hagiwara, N. Toda and S. Usui: "Equivalence relation between
  the backpropagation learning process of FNN and that of an FNNG" (Letters
  to the Editor), Neural Networks, Vol. 7, No. 2, p. 411 (1994).

  Q. Jia, N. Toda, K. Hagiwara and S. Usui: "An analysis of the error
  backpropagation learning algorithms with gain" (in Japanese), IEICE
  Trans. D-II, Vol. J77-D-II, No. 4, pp. 850-857 (1994).

We find hardly any essentially new points in your present paper. Please
refer to the above papers.

-----
Shiro USUI ( usui at bpel.tutics.tut.ac.jp )
Biological & Physiological Engineering Lab.
Department of Information & Computer Sciences
Toyohashi University of Technology
Tel & Fax: +81-532-46-7806

From mbrown at aero.soton.ac.uk Mon Jan 30 05:06:04 1995
From: mbrown at aero.soton.ac.uk (Martin Brown)
Date: Mon, 30 Jan 95 10:06:04 GMT
Subject: activation function gain
Message-ID: <13983.9501301006@aero.soton.ac.uk>

With reference to Dr. G. Thimm's paper about the relationship between the
gain of the activation function and the learning rate, we investigated a
similar topic a couple of years ago and also looked at how the size of the
bias term affects the conditioning of the learning problem. See chapter 4
in:

@book{BrownHarris:94,
  AUTHOR    = "Brown, M. and Harris, C.",
  TITLE     = "Neurofuzzy Adaptive Modelling and Control",
  PUBLISHER = "Prentice Hall",
  ADDRESS   = "Hemel Hempstead, UK",
  YEAR      = 1994
}

or the condensed version in:

@inproceedings{BrownHarris:93a,
  AUTHOR    = "Brown, M. and An, P.C. and Harris, C.J. and Wang, H.",
  TITLE     = "How Biased is your Multi-Layer Perceptron?",
  BOOKTITLE = "World Congress on Neural Networks",
  YEAR      = 1993,
  PAGES     = "507--511",
  ADDRESS   = "Portland, Oregon",
  NOTE      = "Volume 3"
}

Needless to say, though, we probably weren't the first!

Martin Brown
ISIS research group, Department of Electronics and Computer Science,
University of Southampton, UK.
Fax: 01703 592865
Email: mqb at ecs.soton.ac.uk

From nowlan at cajal.synaptics.com Mon Jan 30 14:46:43 1995
From: nowlan at cajal.synaptics.com (Steven J. Nowlan)
Date: Mon, 30 Jan 95 11:46:43 -0800
Subject: Paper avail. ftp: Nowlan and Platt, "A Convolutional Hand Tracker"
Message-ID: <9501301946.AA28135@cajal.synaptics.com.>

****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** AVAILABLE VIA FTP ONLY ***********************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. **
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/nowlan.nips95.ps.Z

The following paper has been placed in the Neuroprose archive at Ohio
State. The file is nowlan.nips95.ps.Z. Only the electronic version of this
paper is available. The paper is 8 pages in length and is a preprint of the
paper to appear in Advances in Neural Information Processing Systems 7. The
file contains 5 embedded postscript figures and is 0.4 Mbytes uncompressed.

-----------------------------------------------------
A Convolutional Neural Network Hand Tracker

Steven J. Nowlan          John C. Platt
Synaptics, Inc.           Synaptics, Inc.
2698 Orchard Parkway      2698 Orchard Parkway
San Jose, CA 95134        San Jose, CA 95134

ABSTRACT: We describe a system that can track a hand in a sequence of video
frames and recognize hand gestures in a user-independent manner. The system
locates the hand in each video frame and determines if the hand is open or
closed. The tracking system is able to track the hand to within 10 pixels
of its correct location in 99.7% of the frames from a test set containing
video sequences from 18 different individuals captured in 18 different room
environments. The gesture recognition network correctly determines if the
hand being tracked is open or closed in 99.1% of the frames in this test
set. The system has been designed to operate in real time with existing
hardware.

-----------------------------------------------------
Steven J. Nowlan
Synaptics, Inc.
2698 Orchard Parkway
San Jose, CA 95134
e-mail: nowlan at synaptics.com
phone: (408) 434-0110 x118

From rolf at cs.rug.nl Mon Jan 30 12:30:11 1995
From: rolf at cs.rug.nl (rolf@cs.rug.nl)
Date: Mon, 30 Jan 1995 18:30:11 +0100
Subject: Ph.D. thesis available
Message-ID:

Ph.D. thesis available by ftp
-----------------------------

Multilayer Dynamic Link Networks for Establishing Image Point
Correspondences and Visual Object Recognition

by Rolf P. W\"urtz

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/wuertz.ps.Z
URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/wuertz.ps.Z

For Europeans, the following may be more convenient:
URL: http://www.cs.rug.nl/~rolf/wuertz.thesis.ps.gz

The size of the file (1.6MB compressed, 9.2MB uncompressed) may cause
problems for printing. Therefore, as well as in order to save paper, it is
recommended to browse it in a previewer such as ghostview and print only
the pages of interest. (See also the copyright notice below.)

Hardcopies for those who prefer old-fashioned paperbacks will be available
from the publisher mentioned below within a couple of weeks and will cost
around $20. (Your bookstore should be able to place the order.) Free
hardcopies are available only for close friends. :-)

ABSTRACT:
---------
The major tasks for automatic object recognition are segmentation of the
image and solving the correspondence problem, i.e.\ reliably finding the
points in the image that belong to points in a given model. Once these
correspondences are found, the local similarities can be used to assign one
model out of a set of known ones to the image. This work defines a suitable
representation for models and images based on a multiresolution transform
with Gabor wavelets. The properties of such transforms are discussed in
detail. Then a neural network with dynamic links and short-term activity
correlations is presented that estimates these correspondences in several
layers, coarse-to-fine. It is formalized into a nonlinear dynamical system.
Simulations show its capabilities, which extend those of earlier systems
through background invariance and faster convergence. Finally, the central
procedures of the network are put into an algorithmic form, which allows
fast implementation on conventional hardware and uses the correspondences
for the successful recognition of human faces out of a gallery of 83,
independent of their hairstyle. This demonstrates the potential for the
recognition of objects independently of the background, which was not
possible with earlier systems.

KEYWORDS:
---------
Neural network, dynamic link architecture, correspondence problem, object
recognition, face recognition, coarse-to-fine strategy, wavelet transform,
image representation

CONTENTS:
---------
Abstract..........................................1
Preface...........................................3
Acknowledgements..................................5
Contents..........................................7
1. Introduction..................................13
2. Wavelet Preprocessing.........................25
3. Representation of Images and Models...........49
4. Hierarchical Dynamic Link Matching............65
5. Algorithmic Pyramid Matching..................89
6. Hierarchical Object Recognition..............109
7. Discussion...................................119
8. Bibliography.................................127
9. Anhang in deutscher Sprache..................141
Index...........................................153

COPYRIGHT NOTICE:
-----------------
The copyright of this text has been transferred to:

+--------------------------------+
| Verlag Harri Deutsch GmbH      |
| Gr\"afstra{\ss}e 47/51         |
| D-60486 Frankfurt am Main      |
| Germany                        |
|                                |
| Phone: +49 69 775021           |
| Fax:   +49 69 7073739          |
| Email: vlghd at vlghd.f.eunet.de |
+--------------------------------+

It will be available within a couple of weeks at a price of about US $20.
Due to copyright it is illegal to print more than selected pages for
personal use.

Enjoy reading (at least) as much as I enjoyed writing!

Rolf

+----------------------------------+-------------------------------------+
| Rolf P. W"urtz                   | Email: rolf at cs.rug.nl             |
| Department of Computer Science   |                                     |
| University of Groningen          | Phone: +31 50 63-6496 or            |
| P.O. Box 800                     |        -3939 (dept. secr.)          |
| 9700 AV Groningen                | Fax:   +31 50 63-3800               |
| The Netherlands                  |                                     |
+----------------------------------+-------------------------------------+
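For readers unfamiliar with the transform named in the abstract: a Gabor
wavelet is an oriented plane wave under a Gaussian envelope, and the
magnitudes of the responses of a bank of such kernels at several scales and
orientations form the local feature vectors matched between image and
model points. A minimal sketch of one kernel in Python; the parameter
values are invented and not taken from the thesis:

import numpy as np

def gabor_kernel(size=17, wavelength=6.0, theta=0.0, sigma=4.0):
    """Complex Gabor kernel: plane wave along theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    along = x * np.cos(theta) + y * np.sin(theta)     # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * along / wavelength)
    return envelope * carrier

# Convolving an image with a family of such kernels (several wavelengths
# and orientations) and taking magnitudes gives one feature vector per
# image point.
k = gabor_kernel(theta=np.pi / 4)
print(k.shape)   # (17, 17)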
From ucganlb at ucl.ac.uk Tue Jan 31 04:54:20 1995
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Tue, 31 Jan 95 09:54:20 +0000
Subject: preprint - a connectionist model of STM for serial order
Message-ID: <181056.9501310954@link-1.ts.bcc.ac.uk>

anonymous ftp host: archive.cis.ohio-state.edu (128.146.8.52)
file: pub/neuroprose/burgess.serial_order.ps.Z

I have just put the following pre-print in the neuroprose archive (see
above). Cheers, Neil (n.burgess at ucl.ac.uk)
_________________________________________________________________________

A SOLVABLE CONNECTIONIST MODEL OF IMMEDIATE RECALL OF ORDERED LISTS

Neil Burgess
Department of Anatomy, University College London, London WC1E 6BT, England

ABSTRACT

A model of short-term memory for serially ordered lists of verbal stimuli
is proposed as an implementation of the `articulatory loop' thought to
mediate this type of memory (Baddeley, 1986). The model predicts the
presence of a repeatable time-varying `context' signal coding the timing of
items' presentation in addition to a store of phonological information and
a process of serial rehearsal. Items are associated with context nodes and
phonemes by Hebbian connections showing both short- and long-term
plasticity. Items are activated by phonemic input during presentation and
reactivated by context and phonemic feedback during output. Serial
selection of items occurs via a winner-take-all interaction amongst items,
with the winner subsequently receiving decaying inhibition. An approximate
analysis of error probabilities due to Gaussian noise during output is
presented. The model provides an explanatory account of the probability of
error as a function of serial position, list length, word length, phonemic
similarity, temporal grouping, item and list familiarity, and is proposed
as the starting point for a model of rehearsal and vocabulary acquisition.

This paper is 8 pages, 0.2 Mbytes uncompressed, and will be published in
NIPS 7.
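The serial-selection mechanism in the abstract (winner-take-all followed by
decaying inhibition of the winner) can be caricatured in a few lines of
Python. Everything below is invented for illustration; the model's context
signal, phonemic layer, rehearsal, and output noise are all omitted:

import numpy as np

items = ["the", "cat", "sat", "on", "mat"]
activation = np.array([0.9, 0.8, 0.7, 0.6, 0.5])  # presentation-order gradient
inhibition = np.zeros(len(items))
decay = 0.9                                       # inhibition fades slowly

recalled = []
for _ in items:
    winner = int(np.argmax(activation - inhibition))  # winner-take-all
    recalled.append(items[winner])
    inhibition *= decay            # earlier winners slowly recover
    inhibition[winner] += 1.0      # suppress the current winner
print(recalled)   # ['the', 'cat', 'sat', 'on', 'mat']

Making the decay faster or adding noise to the competition produces recall
errors of the kind whose probabilities the paper analyses.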
From hzs at cns.brown.edu Tue Jan 31 12:52:06 1995
From: hzs at cns.brown.edu (Harel Z. Shouval)
Date: Tue, 31 Jan 1995 12:52:06 -0500 (EST)
Subject: no subject (file transmission)
Message-ID: <9501311744.AA09759@cns.brown.edu>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 3875 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/d93f8e17/attachment-0001.ksh

From valdes at CARMEN.KBS.CS.CMU.EDU Tue Jan 31 08:51:12 1995
From: valdes at CARMEN.KBS.CS.CMU.EDU (Raul Valdes-Perez)
Date: Tue, 31 Jan 95 08:51:12 EST
Subject: FYI
Message-ID:

SECOND CALL FOR PAPERS

INTERNATIONAL WORKSHOP ON INFORMATION PROCESSING IN CELLS AND TISSUES
Liverpool, 6th - 8th September 1995

The purpose of this workshop is to bring together a multidisciplinary group
of scientists working in the general area of modelling cells and tissues. A
central theme will be the nature of biological information and the ways it
is processed in cells and tissues. We hope that the workshop will draw
together researchers from a range of disciplines including: Computer
Science, Cell Biology, Mathematics, Physiology, Biophysics, Experimental
Medicine, Biochemistry, Electronic Engineering and Biotechnology. The
workshop is intended to provide a forum to report research, discuss
emerging topics and gain new insights into information processing in
biological and computational systems.
Subject areas are likely to include, but are not restricted to:

* Cellular information processing systems
* Enzyme networks, Gene networks, Metabolic channeling
* Second messenger systems
* Signal Transduction and Cellular Pattern Recognition
* Automata models
* Parallel Distributed Processing models
* Cellular Automata models
* Single Neuron Computation
* Biomolecular computing
* Inter-cellular communication, Multi-cellularity
* Information Processing in Developmental Systems
* Information Processing in Immune networks
* Endocrine-immune-nervous interactions
* Information processing in neural tissue systems
* Information processing in non-neural tissue systems
* Communication and gap-junctions
* Asynchronous processing, MIMD, SIMD and NIMD systems
* Cell and tissue oscillators
* Fractals and Chaos
* Emergent phenomena and self-organisation

Programme Committee:
  Georg Brabant        Endocrinology (Hanover)
  Michael Conrad       Computer Science (Detroit)
  Roy Cuthbertson      Cell Biology (Liverpool)
  Claus Emmeche        Philosophy of Nature and Science Studies (Copenhagen)
  Mike Holcombe        Computer Science (Sheffield)
  George Kampis        Ethology and Philosophy of Science (Budapest)
  Douglas Kell         Biological Sciences (Aberystwyth)
  Gareth Leng          Physiology (Edinburgh)
  Pedro Marijuan       Electronics & Informatics (Zaragoza)
  Koichiro Matsuno     BioEngineering (Nagaoka)
  Ray Paton            Computer Science (Liverpool)
  Hans-Paul Schwefel   Computer Science (Dortmund)
  Idan Segev           Neurobiology (Jerusalem)
  Gordon Shepherd      Neurobiology (Yale)
  W Richard Stark      Mathematics (Tampa)
  Rene Thomas          Molecular Biology (Brussels)
  Chris Tofts          Computer Science (Manchester)
  John Tucker          Computer Science (Swansea)
  G Rickey Welch       Biological Sciences (New Orleans)
  Gershom Zajicek      Experimental Medicine and Cancer Research (Jerusalem)

Organizing Committee:
  Ray Paton, Roy Cuthbertson, Mike Holcombe and 'Trina Houghton

Submission Details

All authors must submit 4 copies of the full technical paper by mail or
delivery service to:

  Ray Paton
  Department of Computer Science
  The University of Liverpool
  Liverpool L69 3BX
  UK

PLEASE DO NOT SUBMIT PAPERS BY FAX.

The paper should be in English, double-spaced in 12 point using Times or a
similar font, and a maximum of 16 pages including the first page. The first
page must contain: the title of the paper; the authors' names, including
affiliations, complete mailing address, telephone and FAX numbers, and
email address; and a 250-word (maximum) abstract.

Important Dates:
  Submission deadline:        Friday April 14th 1995
  Acceptance notification:    Friday May 26th 1995
  Deadline for final paper:   Friday June 23rd 1995

PROCEEDINGS
The papers accepted for the workshop will be bound into an unpublished
collection for delegates.

PUBLICATION OF THE PROCEEDINGS
It is intended that post-workshop proceedings will be published by
Springer-Verlag and will appear after the workshop.

Enquiries
Enquiries should be addressed to Ray Paton at the above address, by FAX to
+44 51 794 3715, or by email to tissues at csc.liv.ac.uk

From nowlan at cajal.synaptics.com Tue Jan 31 13:07:30 1995
From: nowlan at cajal.synaptics.com (Steven J. Nowlan)
Date: Tue, 31 Jan 95 10:07:30 -0800
Subject: Fixed Paper avail. ftp: Nowlan and Platt, "A Convolutional Hand Tracker"
Message-ID: <9501311807.AA12418@cajal.synaptics.com.>

A byte was dropped somehow in the original binary of this paper.
A fixed version (retrieved and printed correctly at a remote site) is
available:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Inbox/nowlan.nips95-2.ps.Z

------- Forwarded Message

From uunet!cajal.synaptics.com!nowlan Mon Jan 30 14:46:43 1995
From: uunet!cajal.synaptics.com!nowlan (Steven J. Nowlan)
Date: Mon, 30 Jan 1995 14:46:43 -0500
Subject: Paper avail. ftp: Nowlan and Platt, "A Convolutional Hand Tracker"
Message-ID:

****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** AVAILABLE VIA FTP ONLY ***********************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. **

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/nowlan.nips95.ps.Z

The following paper has been placed in the Neuroprose archive at Ohio
State. The file is nowlan.nips95.ps.Z. Only the electronic version of this
paper is available. The paper is 8 pages in length and is a preprint of the
paper to appear in Advances in Neural Information Processing Systems 7. The
file contains 5 embedded postscript figures and is 0.4 Mbytes uncompressed.

-----------------------------------------------------
A Convolutional Neural Network Hand Tracker

Steven J. Nowlan          John C. Platt
Synaptics, Inc.           Synaptics, Inc.
2698 Orchard Parkway      2698 Orchard Parkway
San Jose, CA 95134        San Jose, CA 95134

ABSTRACT: We describe a system that can track a hand in a sequence of video
frames and recognize hand gestures in a user-independent manner. The system
locates the hand in each video frame and determines if the hand is open or
closed. The tracking system is able to track the hand to within 10 pixels
of its correct location in 99.7% of the frames from a test set containing
video sequences from 18 different individuals captured in 18 different room
environments. The gesture recognition network correctly determines if the
hand being tracked is open or closed in 99.1% of the frames in this test
set. The system has been designed to operate in real time with existing
hardware.

-----------------------------------------------------
Steven J. Nowlan
Synaptics, Inc.
2698 Orchard Parkway
San Jose, CA 95134
e-mail: nowlan at synaptics.com
phone: (408) 434-0110 x118

------- End of Forwarded Message
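The abstract describes the tracker only at a high level. The core
convolutional idea (score every image position with a filter and take the
best-scoring position as the object location) can be sketched in a few
lines of Python. This is emphatically not the Nowlan and Platt network,
which is a trained multi-layer convolutional net; the single hand-made
template, frame size, and location below are invented for illustration:

import numpy as np

rng = np.random.default_rng(1)
frame = rng.normal(0.0, 0.2, size=(64, 64))   # fake video frame
template = np.ones((8, 8))                    # fake "hand" filter
r0, c0 = 30, 41                               # true location (made up)
frame[r0:r0 + 8, c0:c0 + 8] += template

def correlate_valid(image, kernel):
    """Plain 2-D cross-correlation over all fully contained positions."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

scores = correlate_valid(frame, template)
est = np.unravel_index(np.argmax(scores), scores.shape)
print("true:", (r0, c0), "estimated:", tuple(int(v) for v in est))

A real tracker replaces the single template with learned convolutional
feature maps and adds a second network to classify the located hand as open
or closed.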