From rajubapi at school-computing.plymouth.ac.uk Mon Apr 1 11:47:18 1996 From: rajubapi at school-computing.plymouth.ac.uk (Raju Bapi) Date: Mon, 1 Apr 1996 17:47:18 +0100 (BST) Subject: What is a "hybrid" model? In-Reply-To: <9602298281.AA828133262@hub1.comverse.com> Message-ID: On Fri, 29 Mar 1996 Jonathan_Stein at com.comverse.hub1 wrote: > Next, it has been demonstrated in psychophysical experiments that there > are two types of learning. The first type is gradual, with slowly > improving performance, while in primates there is also "sudden" learning, > where the subject (EUREKA!) discovers a symbolic representation > simplifying the task. Thus not only is the basic hardware different for > the two processes, different learning algorithms are used as well. Could you (or anyone on the list) please give references to this "sudden" or "Eureka" type of learning in animals? Thanks Raju Bapi ---------------------------------------------------------- Neurodynamics Research Group School of Computing University of Plymouth Plymouth PL4 8AA United Kingdom email: rajubapi at soc.plym.ac.uk From skemp at gibbs.oit.unc.edu Wed Apr 3 01:23:46 1996 From: skemp at gibbs.oit.unc.edu (Steve Kemp) Date: Wed, 3 Apr 1996 01:23:46 -0500 Subject: What is a "hybrid" model? In-Reply-To: Message-ID: On Mon, 1 Apr 1996, Raju Bapi wrote: ..snip.. > On Fri, 29 Mar 1996 Jonathan_Stein at com.comverse.hub1 wrote: > > > Next, it has been demonstrated in psychophysical experiments that there > > are two types of learning. The first type is gradual, with slowly > > improving performance, while in primates there is also "sudden" learning, > > where the subject (EUREKA!) discovers a symbolic representation > > simplifying the task. Thus not only is the basic hardware different for > > the two processes, different learning algorithms are used as well. .snip... > Could you (or anyone on the list) please give references to this > "sudden" or "Eureka" type of learning in animals? > > Thanks > > Raju Bapi Happy to oblige. The sudden learning was demonstrated in studies of human problem solving, where it was eventually dubbed the "Aha!" effect. (I believe that there is a book by that name, but I don't have that reference.) In animal learning, it is known as one-trial learning. (I am unaware of the "Eureka" nomenclature.) The earliest reference I have for a study of this effect in humans is Maier (1930; 1931). Six classic articles are excerpted in Wason & Johnson-Laird (1968). That should be a good source of background info. The mention of the demonstration of this effect in primates almost certainly refers to Wolfgang Kohler's (1925) classic study, THE MENTALITY OF APES (Kegan Paul; reprinted by Penguin, 1957). That is the study where Kohler hung a banana from the top of a cage and placed several blocks in the cage. With all the blocks placed on one another, the resultant stack was tall enough for the ape to reach the banana. After some "contemplation," the ape would stack the blocks, climb to the top and retrieve the banana. Another Penguin book of readings, Riopelle (1967), includes a number of later articles on primates that discuss and follow up on the Kohler work. That collection also includes the classic studies of animal problem-solving by Romanes (1888), Lloyd Morgan (1909), and Thorndike (1898). Single-trial learning is not restricted to apes, nor to cognitive learning alone.
The Garcia Effect (Garcia, McGowan, & Green, 1972), a type of Pavlovian conditioning wherein animals as simple as baby chicks learn to avoid foods that have been associated with nausea, can be demonstrated after a single exposure. Indeed, Skinner (1932) demonstrated single-trial learning by reinforcing behavior in a pigeon. (Obviously, learning such simple tasks may not be "sudden" in the same sense as learning the far more complex tasks in the studies cited above.) As to whether one-trial learning or the Aha! effect genuinely constitutes a distinct *type* of learning, it is most certainly distinct in that different experimental procedures are required to elicit such behavior. As to whether different brain processes are involved, brain imaging and recording studies, such as PET scans and single-neuron monitoring, will eventually answer such questions. I would imagine that such studies have already begun in the last few years, particularly with Pavlovian conditioning, but I am not up to date on that research. Perhaps someone else on the list is. I am not sure what Stein means by "psychophysical" in this context, but there is a relatively recent study by Metcalfe (1986) that attempts to measure the speed of sudden learning. For those interested in searching for further materials, the keyword "insight" should get you pointed in the right direction on a computer search. Be warned, however, that insight studies of REASONING will not be of much interest in this context. You might try INSIGHT and (PROBLEM SOLVING or LEARNING). steve kemp

references:

Garcia, J., McGowan, B. K., & Green, K. F. (1972). Biological constraints on conditioning. In Classical Conditioning, vol. 2, ed. by A. H. Black & W. H. Prokasy. New York: Appleton-Century-Crofts.
Kohler, W. (1925). The Mentality of Apes. Kegan Paul.
Lloyd Morgan, C. (1909). Introduction to Comparative Psychology. 2nd edition. Scribners.
Maier, N.R.F. (1930). Reasoning in humans I: On direction. Journal of Comparative Psychology, vol. 10, pp. 115-143.
Maier, N.R.F. (1931). Reasoning in humans II: The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology, vol. 12, pp. 181-194.
Metcalfe, J. (1986). Feeling of knowing in memory and problem solving. Journal of Experimental Psychology: Learning, Memory & Cognition, vol. 12, pp. 288-294.
Riopelle, A. J., ed. (1967). Animal Problem Solving. Harmondsworth: Penguin Books.
Romanes, G. J. (1888). Animal Intelligence. New York: D. Appleton.
Skinner, B. F. (1932). On the rate of formation of a conditioned reflex. Journal of General Psychology, vol. 7, pp. 274-286.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplements, vol. 2, pp. 1-9.
Thorndike, E. L. (1911). Animal Intelligence: Experimental studies. New York: MacMillan.
Wason, P. C. & Johnson-Laird, P. N., eds. (1968). Thinking and Reasoning. Harmondsworth: Penguin Books. (Please note that Wason & Johnson-Laird also have another book on reasoning with a very similar title. The book cited here is the Penguin book of readings. Paperback only, but probably found in your local university library. Accept no substitutes. smk)

Steven M. Kemp | Department of Psychology | email: steve_kemp at unc.edu Davie Hall, CB# 3270 | University of North Carolina | Chapel Hill, NC 27599-3270 | fax: (919) 962-2537 >>>>>>>>>>>>>>>>>>>>> <<<<<<<<<<<<<<<<<<<<<<<< New Left slogan from the Sixties: "Just because you're paranoid doesn't mean no one's out to get you."
New Age slogan for the Nineties: "Just because you're schizophrenic doesn't mean no one's sending you messages." From WHYTE at VM.TEMPLE.EDU Tue Apr 2 18:12:10 1996 From: WHYTE at VM.TEMPLE.EDU (WHYTE@VM.TEMPLE.EDU) Date: Tue, 02 Apr 96 18:12:10 EST Subject: Post-doctoral fellowship opportunity Message-ID: <960402.181713.EST.WHYTE@VM.TEMPLE.EDU> The Moss Rehabilitation Research Institute at MossRehab Hospital in Philadelphia is seeking post-doctoral fellows for a 2-year fellowship. There are several theoretical and applied topics in our research laboratories that would benefit from collaboration with someone with experience in neural network modelling. Potential topics include: simulation of language comprehension and production systems and the effects of "lesioning" them, in comparison to the data from patients with acquired language disorders; modelling the types of postural and other motoric compensations made by individuals with focal weakness (as in polio and similar disorders); and prediction of functional outcomes in rehabilitation given a variety of complex and interacting impairments. Interested individuals should send a resume and cover letter to: John Whyte, M.D., Ph.D. Moss Rehabilitation Research Institute 1200 W. Tabor Rd. Phila. PA 19141 Fax: 215-456-9514 From lksaul at psyche.mit.edu Wed Apr 3 13:18:01 1996 From: lksaul at psyche.mit.edu (Lawrence Saul) Date: Wed, 3 Apr 96 13:18:01 EST Subject: paper announcement Message-ID: <9604031818.AA29806@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-file: pub/lksaul/mdplc.ps.Z WWW-host: http://web.mit.edu/~lksaul/ ---------------------------------------------------- The following paper, to appear at COLT'96, is now available on-line. It contains a statistical mechanical analysis of a simple problem in decision and control. ---------------------------------------------------- Title: Learning curve bounds for a Markov decision process with undiscounted rewards Authors: Lawrence Saul and Satinder Singh Abstract: The goal of learning in Markov decision processes is to find a policy that yields the maximum expected return over time. In problems with large state spaces, computing these returns directly is not feasible; instead, the agent must estimate them by stochastic exploration of the state space. Using methods from statistical mechanics, we study how the agent's performance depends on the allowed exploration time. In particular, for a simple control problem with undiscounted rewards, we compute a lower bound on the return of policies that appear optimal based on imperfect statistics. This is done in the thermodynamic limit where the exploration time and the size of the state space tend to infinity at a fixed ratio. ----------------------------------------------------
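To make the abstract's starting point concrete: when exact returns are infeasible to compute, the agent estimates them by averaging sampled trajectories. The sketch below is a toy illustration of that estimation step only; the random chain MDP, the uniform policy, and all parameter values are invented for illustration and are not the model analyzed in the paper.

    import numpy as np

    def sample_return(P, R, policy, start, horizon, rng):
        """Roll out one trajectory and average its (undiscounted) rewards."""
        s, total = start, 0.0
        for _ in range(horizon):
            a = rng.choice(len(policy[s]), p=policy[s])   # sample an action
            total += R[s, a]
            s = rng.choice(P.shape[-1], p=P[s, a])        # sample next state
        return total / horizon

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2
    # Toy random MDP: transitions P[s, a, s'] and rewards R[s, a].
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.uniform(0, 1, size=(n_states, n_actions))
    policy = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform policy

    # More exploration time (trajectories) -> tighter estimate of the return.
    for n_traj in (10, 100, 1000):
        est = np.mean([sample_return(P, R, policy, 0, 200, rng)
                       for _ in range(n_traj)])
        print(n_traj, round(est, 4))

The paper's contribution concerns what happens next: bounding the return of a policy that merely *appears* optimal under such finite-sample estimates, in the limit where exploration time and state-space size grow together.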
From terryd at dali.cit.gu.edu.au Wed Apr 3 21:33:06 1996 From: terryd at dali.cit.gu.edu.au (Terry Dartnall) Date: Thu, 4 Apr 1996 12:33:06 +1000 Subject: What is a "hybrid" model? Message-ID: <199604040233.AA18846@dali.cit.gu.edu.au> Steve Thanks for that useful overview. You say >The sudden learning was demonstrated in studies of human problem solving > where it was eventually dubbed the "Aha!" effect. (I believe that there >is a book by that name, but I don't have that reference.) In animal >learning, it is known as one-trial learning. >. >. > As to whether one-trial learning or the Aha! effect genuinely constitutes >a distinct *type* of learning ... I know pretty much nothing about the area, but I would have thought that one-trial learning and the "Aha!" effect were different. I learnt not to stick my fingers in a power socket when I was a kid - and it only needed one trial! - but I wouldn't have thought this was an "Aha!" situation. (It was a "Yow!" situation.) This applies to animals other than people, I'm sure. And you can have the "Aha!" effect after many trials, as with Kohler's apes. In fact I would have thought this is when you usually get it - after lots of frustrating failures. So one-trial learning is neither necessary nor sufficient for the "Aha!" effect. Isn't the "sudden learning problem" that, after a number of unsuccessful trials, the answer suddenly comes to us? Best wishes Terry Dartnall ============================================== Terry Dartnall School of Computing and Information Technology Griffith University Nathan Brisbane Queensland 4111 Australia Phone: 61-7-3875 5020 Fax: 61-7-3875-5051 ============================================== From tibs at utstat.toronto.edu Wed Apr 3 21:48:00 1996 From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu) Date: Wed, 3 Apr 96 21:48 EST Subject: New paper available Message-ID: Bias, variance and prediction error for classification rules Robert Tibshirani University of Toronto We study the notions of bias and variance for classification rules. Following Efron (1978) and Breiman (1996), we develop a decomposition of prediction error into its natural components. Then we derive bootstrap estimates of these components and illustrate how they can be used to describe the error behaviour of a classifier in practice. In the process we also obtain a bootstrap estimate of the error of a ``bagged'' classifier. Available at: http://utstat.toronto.edu/reports/tibs/biasvar.ps ftp: //utstat.toronto.edu/pub/tibs/biasvar.ps Comments welcome! ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Rob Tibshirani, Dept of Preventive Med & Biostats, and Dept of Statistics Univ of Toronto, Toronto, Canada M5S 1A8. Phone: 416-978-4642 (PMB), 416-978-0673 (stats). FAX: 416 978-8299 computer fax 416-978-1525 (please call or email me to inform) tibs at utstat.toronto.edu. ftp: //utstat.toronto.edu/pub/tibs http://www.utstat.toronto.edu/~tibs +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From sylee at eekaist.kaist.ac.kr Thu Apr 4 00:28:59 1996 From: sylee at eekaist.kaist.ac.kr (Soo-Young Lee) Date: Thu, 4 Apr 1996 14:28:59 +0900 Subject: Graduate Scholarship Message-ID: <199604040528.OAA25599@eekaist.kaist.ac.kr> GRADUATE STUDENT POSITION A graduate student position is available at the Department of Electrical Engineering at Korea Advanced Institute of Science and Technology (KAIST) to study neural network modelling, speech and control applications, and hardware (VLSI and optics) implementation. A Bachelor's degree is required for Master's course students, and a Master's degree is required for Ph.D. course students. The positions are available from September 1996. KAIST is the top-ranked research-oriented engineering school in Korea, and belongs to the Ministry of Science and Engineering. The Department of Electrical Engineering consists of 48 professors and about 500 graduate students. The annual research fund is more than 15 million US dollars. Full scholarship may be provided. For those from other countries we also have Korean language classes. Applicants should send their CV, list of publications, a letter describing their interest, and the name, address and phone number of two references to: Prof.
Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) From skemp at gibbs.oit.unc.edu Thu Apr 4 01:45:35 1996 From: skemp at gibbs.oit.unc.edu (Steve Kemp) Date: Thu, 4 Apr 1996 01:45:35 -0500 Subject: What is a "hybrid" model? In-Reply-To: <199604040233.AA18846@dali.cit.gu.edu.au> Message-ID: On Thu, 4 Apr 1996, Terry Dartnall wrote: > > Thanks for that useful overview. You say > > >The sudden learning was demonstrated in studies of human problem solving > > where it was eventually dubbed the "Aha!" effect. (I believe that there > >is a book by that name, but I don't have that reference.) In animal > >learning, it is known as one-trial learning. > > I know pretty much nothing about the area, but I would have thought that > one-trial learning and the "Aha!" effect were different. I learnt not to > stick my fingers in a power socket when I was a kid - and it only needed one > trial! - but I wouldn't have thought this was an "Aha!" situation. (It was a > "Yow!" situation.) This applies to animals other than people, I'm sure. > A good point. The original post was contrasting the sudden learning found with the Aha! effect with what the poster called "gradual" learning. If the distinction (that makes for the two types) is between gradual and sudden learning, then one-trial learning, while perhaps distinct from insight learning, seems to be sudden rather than gradual. That is, there are other non-gradual types of learning besides insight learning. > And > you can have the "Aha!" effect after many trials, as with Kohler's apes. In > fact I would have thought this is when you usually get it - after lots of > frustrating failures. So one-trial learning is neither necessary nor > sufficient for the "Aha!" effect. > > Isn't the "sudden learning problem" that, after a number of unsuccessful > trials, the answer suddenly comes to us? > Another way of looking at it is that if insight learning occurs on the very first trial, it is very hard to distinguish such a case from one-trial learning, at least from an empirical perspective. The banana problem was quite a challenge for the mental capacity of the apes involved. The power socket "problem" was quite easy for you. If we were to suppose that certain types of learning *are* sudden, then doesn't it make sense that the sudden onset of learning would occur on an early trial for "simple" or "easy" problems and on a later trial for more "complex" or "harder" problems? In that case, the Aha! effect would just be the natural result of being presented with a difficult problem. In fact, in the mathematical learning theory literature, a number of Markov-based models were constructed on just such an assumption. It was assumed that all learning was "all-or-none" in character. Apparent gradual change was modeled as "random" correct guessing by subjects who had not yet "learned," plus artifacts of empirical measures used by experimenters that averaged across subjects or trials where learning had occurred in some instances and not in others. A remarkably large number of learning phenomena, including many apparently gradual ones, were successfully modeled.
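To see how an all-or-none process can masquerade as gradual learning, here is a minimal simulation in the spirit of those Markov models; the guessing and learning rates below are arbitrary illustrative values, not estimates from any study.

    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_trials = 200, 30
    g, c = 0.25, 0.15   # guess rate before learning; per-trial chance of learning

    correct = np.zeros((n_subjects, n_trials))
    for s in range(n_subjects):
        learned = False
        for t in range(n_trials):
            if not learned and rng.random() < c:
                learned = True                    # all-or-none transition
            correct[s, t] = 1.0 if learned else (rng.random() < g)

    # Each subject's record is a step function (sudden by construction), but
    # the group average rises smoothly from g toward 1 (apparently gradual).
    print(np.round(correct.mean(axis=0), 2))

Because different simulated subjects jump on different trials, averaging across them produces exactly the smooth curve that the averaging artifact described above predicts.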
Finally, "sudden" or "gradual" is measured with respect to the number of trials. It is essential to Kohler's conception that some sort of ongoing internal "contemplative" process was occurring all through the process, during and between trials. More trials, more rapidly presented, allow a gradual process to appear gradual. If the subject is gradually catching on and we present fewer trials less often, then the gradual learning may appear sudden, because enough learning occurred in the long interval between trials to become noticeable all at once on the following trial. In sum, my point is that it is difficult, if not impossible, to establish the existence or non-existence of genuinely different *types* of learning solely from behavioral phenomena, however augmented by theory or mathematics. One of the truly exciting things about the recent advances in the various technologies of brain monitoring is that they provide a second type of empirical evidence that can be correlated with behavioral evidence to discover whether apparently distinct learning phenomena involve genuinely different brain mechanisms. regards, steve K Steven M. Kemp | Department of Psychology | email: steve_kemp at unc.edu Davie Hall, CB# 3270 | University of North Carolina | Chapel Hill, NC 27599-3270 | fax: (919) 962-2537 From horne at research.nj.nec.com Thu Apr 4 11:46:25 1996 From: horne at research.nj.nec.com (Bill Horne) Date: Thu, 4 Apr 1996 11:46:25 -0500 Subject: Spectral Radius TR Message-ID: <9604041146.ZM26727@telluride> The following technical report is now available: Lower bounds for the spectral radius of a matrix Bill Horne NEC Research Institute 4 Independence Way Princeton, NJ 08540 NECI Technical Report 95-14 In this paper we develop lower bounds for the spectral radius of symmetric, skew-symmetric, and arbitrary real matrices. Our approach utilizes the well-known Leverrier-Faddeev algorithm for calculating the coefficients of the characteristic polynomial of a matrix, in conjunction with a theorem by Lucas which states that the critical points of a polynomial lie within the convex hull of its roots. Our results generalize and simplify a proof recently published by Tarazaga for a lower bound on the spectral radius of a symmetric positive definite matrix. In addition, we provide new lower bounds for the spectral radius of skew-symmetric matrices. We apply these results to a problem involving the stability of fixed points in recurrent neural networks. The report can be obtained from my homepage http://www.neci.nj.nec.com/homepages/horne.html Or directly at ftp://ftp.nj.nec.com/pub/horne/spectral.ps.Z -- Bill Horne Senior Research Associate Computer Science Division NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 horne at research.nj.nec.com PHN: (609) 951-2676 FAX: (609) 951-2482 http://www.neci.nj.nec.com/homepages/horne.html From hochreit at informatik.tu-muenchen.de Thu Apr 4 11:45:52 1996 From: hochreit at informatik.tu-muenchen.de (Josef Hochreiter) Date: Thu, 4 Apr 1996 18:45:52 +0200 Subject: Flat Minima Message-ID: <96Apr4.184602+0200_met_dst.116186+506@papa.informatik.tu-muenchen.de> FTP-host: flop.informatik.tu-muenchen.de (131.159.8.35) FTP-filename: /pub/articles-etc/hochreiter.fm.ps.gz FLAT MINIMA Sepp Hochreiter Juergen Schmidhuber To appear in Neural Computation (accepted 1996) 38 pages, 154 K compressed, 463 K uncompressed We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a ``flat'' minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to ``simple'' networks and low expected overfitting.
The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a ``good'' weight prior - instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Experiments with feedforward and recurrent nets are described. In applications to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, and ``optimal brain surgeon'' / ``optimal brain damage''. We also provide pseudo code of the algorithm (omitted from the NC version). To obtain a copy, cut and paste one of these: netscape http://www7.informatik.tu-muenchen.de/~hochreit/pub.html netscape http://www.idsia.ch/~juergen/onlinepub.html Sepp Hochreiter, TUM Juergen Schmidhuber, IDSIA P.S.: Info on recent IDSIA postdoc job opening: http://www.idsia.ch/~juergen/postdoc.html From thrun+ at heaven.learning.cs.cmu.edu Thu Apr 4 22:10:54 1996 From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu) Date: Thu, 4 Apr 96 22:10:54 EST Subject: Book announcement: EBNN, Lifelong Learning Message-ID: I have the pleasure of announcing the following book. EXPLANATION-BASED NEURAL NETWORK LEARNING: A Lifelong Learning Approach Sebastian Thrun Carnegie Mellon University & University of Bonn published by Kluwer Academic Publishers ---------------------------------------------------------------------- Lifelong learning addresses situations in which a learner faces a series of different learning tasks, providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. This book describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess. ``The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm.'' -- from the Foreword by Tom M. Mitchell ----------------------------------------------------------------------
FOREWORD by Tom Mitchell ix
PREFACE xi
1 INTRODUCTION 1
1.1 Motivation 1
1.2 Lifelong Learning 3
1.3 A Simple Complexity Consideration 8
1.4 The EBNN Approach to Lifelong Learning 13
1.5 Overview 16
2 EXPLANATION-BASED NEURAL NETWORK LEARNING 19
2.1 Inductive Neural Network Learning 20
2.2 Analytical Learning 27
2.3 Why Integrate Induction and Analysis? 31
2.4 The EBNN Learning Algorithm 33
2.5 A Simple Example 39
2.6 The Relation of Neural and Symbolic Explanation-Based Learning 43
2.7 Other Approaches that Combine Induction and Analysis 45
2.8 EBNN and Lifelong Learning 47
3 THE INVARIANCE APPROACH 49
3.1 Introduction 49
3.2 Lifelong Supervised Learning 50
3.3 The Invariance Approach 55
3.4 Example: Learning to Recognize Objects 59
3.5 Alternative Methods 74
3.6 Remarks 90
4 REINFORCEMENT LEARNING 93
4.1 Learning Control 94
4.2 Lifelong Control Learning 98
4.3 Q-Learning 102
4.4 Generalizing Function Approximators and Q-Learning 111
4.5 Remarks 125
5 EMPIRICAL RESULTS 131
5.1 Learning Robot Control 132
5.2 Navigation 133
5.3 Simulation 141
5.4 Approaching and Grasping a Cup 146
5.5 NeuroChess 152
5.6 Remarks 175
6 DISCUSSION 177
6.1 Summary 177
6.2 Open Problems 181
6.3 Related Work 185
6.4 Concluding Remarks 192
A AN ALGORITHM FOR APPROXIMATING VALUES AND SLOPES WITH ARTIFICIAL NEURAL NETWORKS 195
A.1 Definitions 196
A.2 Network Forward Propagation 196
A.3 Forward Propagation of Auxiliary Gradients 197
A.4 Error Functions 198
A.5 Minimizing the Value Error 199
A.6 Minimizing the Slope Error 199
A.7 The Squashing Function and its Derivatives 201
A.8 Updating the Network Weights and Biases 202
B PROOFS OF THE THEOREMS 203
C EXAMPLE CHESS GAMES 207
C.1 Game 1 207
C.2 Game 2 219
REFERENCES 227
LIST OF SYMBOLS 253
INDEX 259
---------------------------------------------------------------------- More information concerning this book: http://www.cs.cmu.edu/~thrun/papers/thrun.book.html http://www.informatik.uni-bonn.de/~thrun/papers/thrun.book.html From goldfarb at unb.ca Fri Apr 5 10:25:10 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Fri, 5 Apr 1996 11:25:10 -0400 (AST) Subject: What is a "hybrid" model? In-Reply-To: Message-ID: On Thu, 4 Apr 1996, Steve Kemp wrote: > learning. If the distinction (that makes for the two types) is between > gradual and sudden learning, then one-trial learning, while perhaps > distinct from insight learning, seems to be sudden rather than gradual. > That is, there are other non-gradual types of learning besides insight > learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . > In sum, my point is that it is difficult, if not impossible, to establish > the existence or non-existence of genuinely different *types* of learning > solely from behavioral phenomena, however augmented by theory or > mathematics. In view of this, why then do most of us ignore the scientific experience of the last four centuries, which strongly suggests scientific parsimony (in this case, one basic learning "mechanism")? Are we ready (i.e. adequately "educated") to deal with the greatest scientific challenge of cognitive science? Lev Goldfarb Tel: 506-453-4566 Fax: 506-453-3566 http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm From nq6 at columbia.edu Fri Apr 5 12:29:16 1996 From: nq6 at columbia.edu (Ning Qian) Date: Fri, 5 Apr 1996 12:29:16 -0500 (EST) Subject: postdoc position at Columbia Message-ID: <199604051729.MAA07569@merhaba.cc.columbia.edu> Postdoctoral Position in Computational Vision Center for Neurobiology and Behavior Columbia University New York, NY A postdoctoral fellowship position in computational neuroscience is available immediately for a recent Ph.D. The postdoc will participate in an NIMH-funded project that applies mathematical analyses and computer simulations to investigate the neural mechanisms of stereoscopic depth perception and motion-stereo interactions.
Opportunities for modeling other neural systems are also available. The details of our research interests and the PostScript files of some of our publications can be found at the web site listed below. Other systems neuroscience faculty members in the Center with closely related research interests include Drs. Vincent P. Ferrera, Claude P. Ghez, John Martin and Irving Kupfermann. The funding for the position is available for two years, with the possibility of renewal. Applicants should have a strong background in mathematics and computational modeling (in the Unix/X-windows/C environment). Previous experience in vision research is desirable but not required. Please send a CV, statement of research interests and experience, along with names/phone numbers/email addresses of three references to: Dr. Ning Qian Center for Neurobiology and Behavior Columbia University 722 W. 168th St., A730 New York, NY 10032 nq6 at columbia.edu (email) 212-960-2213 (phone) 212-960-2561 (fax) ********************************************************************* For the details of our research interests and publications, please visit our World Wide Web home page at: http://brahms.cpmc.columbia.edu Selected Papers (available on line):
A Physiological Model for Motion-stereo Integration and a Unified Explanation of the Pulfrich-like Phenomena, Ning Qian and Richard A. Andersen, submitted to Vision Research.
Binocular Receptive Field Profiles, Disparity Tuning and Characteristic Disparity, Yudong Zhu and Ning Qian, Neural Computation, 1996 (in press).
Computing Stereo Disparity and Motion with Known Binocular Cell Properties, Ning Qian, Neural Computation, 1994, 6:390-404.
Transparent Motion Perception as Detection of Unbalanced Motion Signals III: Modeling, Ning Qian, Richard A. Andersen and Edward H. Adelson, J. Neurosci., 1994, 14:7381-7392.
Generalization and Analysis of the Lisberger-Sejnowski VOR Model, Ning Qian, Neural Computation, 1995, 7:735-752.
From shrager at neurocog.lrdc.pitt.edu Sat Apr 6 09:14:02 1996 From: shrager at neurocog.lrdc.pitt.edu (Jeff Shrager) Date: Sat, 6 Apr 1996 09:14:02 -0500 (EST) Subject: What is a "hybrid" model? In-Reply-To: Message-ID: On Fri, 5 Apr 1996, Lev Goldfarb wrote: > > In sum, my point is that it is difficult, if not impossible, to establish > > the existence or non-existence of genuinely different *types* of learning > > solely from behavioral phenomena, however augmented by theory or > > mathematics. > > In view of this, why do then most of us ignore the scientific experience > of the last four centuries that strongly suggest the scientific parsimony > (in that case - one basic learning "mechanism")? > Are we ready (i.e. adequately "educated") to deal with the greatest > scientific challenge of cognitive science? I'm sorry, but this is all noise. The brain is a complicated machine. Saying that a car runs on "one basic principle" of chemistry (or physics) isn't saying anything important about a car as pertains to most people's interactions with it (except maybe people who are hit by its momentum :-) The "scientific experience of the last four centuries" (at least that little (though important!) speck of it that Lev is apparently referring to) explicitly eschews complexity, or turns it into abstract complexity (such as chaos theory), neither of which approach tells you very much about the real McCoy. If you care about the real brain, the abstract and general theories are important, interesting, and useful, but they are NOT the whole story.
I'm sorry to say that this is going to quickly turn into the same old religious war, and I'd really like to propose that we take it offline. -- Jeff From iehava at ie.technion.ac.il Sun Apr 7 01:17:39 1996 From: iehava at ie.technion.ac.il (Hava Siegelmann) Date: Sun, 7 Apr 1996 08:17:39 +0200 (EET) Subject: A New Computational Model: Continuous Time Message-ID: Dear friends: I wish to introduce you to a new work in continuous-time computability that may be of interest to some of you. Analog Computing and Dynamical Systems ====================================== Hava T. Siegelmann and Shmuel Fishman Technion --- Israel Institute of Technology iehava at ie.technion.ac.il fishman at physics.technion.ac.il ABSTRACT This work is aimed at gaining an enlarged and deeper understanding of the computation processes possible in natural and artificial systems. We introduce an interface between dynamical systems and computational models. The theory that is developed encompasses discrete and continuous analog computation by means of difference and differential equations, respectively. Our complexity theory for continuous time systems is very natural and requires no external clocks or synchronizing elements. As opposed to previous models, we do not apply principles from nature to the Turing model, but rather start from realizable and possibly chaotic dynamical systems and interpret their evolution as generalized computation. By applying the basic computational terms, such as halting, computation under resource constraints, and nondeterministic and stochastic behavior, to dynamical systems, a general, continuous-time computational theory is introduced. The new theory is indeed different from the classical one: in some ways it seems stronger, but it still has natural limits of decidability. ===================================================================== Let me briefly explain why I suspect that this theoretical computer-science work may be of interest to Connectionists: 1. In some ways, this is a generalization of the Hopfield network. The meaningful attractors of these networks --- where information is stored --- are all simple: either stable fixed points or limit cycles, that is, periodic orbits. As most dissipative dynamical systems converge to chaotic attractors, the Hopfield network is indeed a very special case of recurrent networks. In our new work, we present the foundations of computation for dissipative dynamical systems; the attractors can be of any kind. In spite of this variety, computation and computation times in dynamical systems are now defined in unified, natural, and unambiguous mathematical terms. 2. There is another reason, but here it really depends on personal belief. Previously, the CS theoreticians among us considered functions, trajectories, and control as a discrete time process. Looking at it without this discretization seems to enlarge our understanding of continuous time computation, with no need for external clocks or other related discretization tricks. I do not understand enough about biological control to state more than these careful comments. I would love to hear from those of you who do understand it, if indeed you see any possible application to your kind of work.
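A toy contrast may help make point 1 concrete (this example is mine, not from the paper): a Hopfield-style iteration with symmetric weights settles onto a simple attractor, while a generic dissipative chaotic system, here the logistic map, instead shows sensitive dependence on initial conditions.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hopfield-style dynamics with symmetric weights: typically settles onto
    # a simple attractor (a fixed point or a short cycle).
    W = rng.normal(size=(8, 8))
    W = (W + W.T) / 2                  # symmetry is what tames the dynamics
    x = rng.normal(size=8)
    for _ in range(500):
        x = np.tanh(W @ x)
    x2 = np.tanh(W @ np.tanh(W @ x))   # two more steps
    print("period-1/2 residual:", np.linalg.norm(x2 - x))   # ~0

    # Logistic map at r = 4: a dissipative system with a chaotic attractor.
    a, b = 0.3, 0.3 + 1e-9             # two nearly identical initial conditions
    for _ in range(50):
        a, b = 4 * a * (1 - a), 4 * b * (1 - b)
    print("nearby orbits diverge:", abs(a - b))             # order 0.1-1

The paper's point, as I read the abstract, is that computation can still be given a precise meaning over systems of the second kind, where the Hopfield picture of convergence to a simple attractor no longer applies.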
Best regards, Hava Siegelmann Technion Israel From giles at research.nj.nec.com Mon Apr 8 21:39:52 1996 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 8 Apr 96 21:39:52 EDT Subject: TR available on Face Recognition Message-ID: <9604090139.AA10780@alta> ----------------------------------------------------------------------- The following paper presents a hybrid neural network solution to face recognition which outperforms eigenfaces and some other methods on the database of 400 images considered. _______________________________________________________________________ FACE RECOGNITION: A HYBRID NEURAL NETWORK APPROACH Steve Lawrence (1,3), C. Lee Giles (1,2), Ah Chung Tsoi (3), Andrew D. Back (3) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA (3) Electrical and Computer Engineering, University of Queensland, St. Lucia, Australia 4072 U. of Maryland Technical Report CS-TR-3608 and UMIACS-96-16 ABSTRACT Faces represent complex, multidimensional, meaningful visual stimuli, and developing a computational model for face recognition is difficult. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Loeve transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person, the proposed method and eigenfaces result in 3.8% and 10.5% error, respectively. The recognizer provides a measure of confidence in its output, and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer. Keywords: Convolutional Neural Networks, Hybrid Systems, Face Recognition, Self-Organizing Map __________________________________________________________________________ The paper is available from: http://www.neci.nj.nec.com/homepages/lawrence - USA http://www.neci.nj.nec.com/homepages/giles.html - USA http://www.cs.umd.edu/TRs/TR-no-abs.html - USA http://www.elec.uq.edu.au/~lawrence - Australia ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3608.face.recognition_hybrid.neural.nets.ps.Z We welcome your comments.
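One ingredient of the pipeline, local image sampling followed by self-organizing-map quantization, can be sketched in a few lines. This is a generic 2-D SOM run on a random stand-in image; the patch size, grid size, and schedules are my own illustrative choices, not those used in the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    img = rng.random((112, 92))            # stand-in for a face image
    k, d = 5, 5 * 5                        # patch side; dimension of a sample

    # Local image sampling: collect k-by-k patches as d-dimensional vectors.
    patches = np.array([img[i:i + k, j:j + k].ravel()
                        for i in range(0, img.shape[0] - k, 4)
                        for j in range(0, img.shape[1] - k, 4)])

    # A small 2-D self-organizing map: nodes on a grid, codebook in patch space.
    rows, cols = 8, 8
    codebook = rng.random((rows * cols, d))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    for t in range(2000):
        x = patches[rng.integers(len(patches))]
        frac = t / 2000                    # decaying learning rate and radius
        lr, sigma = 0.5 * (1 - frac) + 0.01, 3.0 * (1 - frac) + 0.5
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))   # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        codebook += lr * h[:, None] * (x - codebook)         # pull neighbors in

    # Quantization: each patch maps to the 2-D grid coordinates of its BMU,
    # giving the dimensionality reduction the abstract describes.
    dists = ((patches[:, None, :] - codebook[None]) ** 2).sum(-1)
    coords = grid[np.argmin(dists, axis=1)]
    print(coords[:5])

In the full system it is these maps of SOM coordinates, not raw pixels, that the convolutional network consumes; that stage is omitted here.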
-- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From harmonme at aa.wpafb.af.mil Wed Apr 10 11:19:58 1996 From: harmonme at aa.wpafb.af.mil (Mance E. Harmon) Date: Wed, 10 Apr 96 11:19:58 -0400 Subject: ICML'96 Paper Available Message-ID: <960410111956.575@ethel.aa.wpafb.af.mil.0> The following paper will be presented at the 13th International Conference on Machine Learning, Bari, Italy, 3-6 July, and is now available in postscript and RTF formats at the following URL: http://www.aa.wpafb.af.mil/~harmonme Residual Q-Learning Applied to Visual Attention Cesar Bandera, Amherst Systems, Inc., Machine Vision Dept., 30 Wilson Road, Buffalo, New York 14221-7082, cba at amherst.com Francisco J. Vico, Jose M. Bravo, Facultad de Psicologia, Universidad de Malaga, 29017 Malaga (Spain), fjv at eva.psi.uma.es, jbm at eva.psi.uma.es Mance E. Harmon, Wright Laboratory, WL/AACF, 2241 Avionics Circle, Wright-Patterson AFB, Ohio 45433-7318, harmonme at aa.wpafb.af.mil Leemon C. Baird III, U.S.A.F. Academy, 2354 Fairchild Dr., Suite 6K41, USAFA, Colorado 80840-6234, baird at cs.usafa.af.mil ABSTRACT Foveal vision features imagers with graded acuity coupled with context sensitive sensor gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision operates more efficiently than uniform acuity vision because resolution is treated as a dynamically allocatable resource, but it requires a more refined visual attention mechanism. We demonstrate that reinforcement learning (RL) significantly improves the performance of foveal visual attention, and of the overall vision system, for the task of model based target recognition. A simulated foveal vision system is shown to classify targets with fewer fixations by learning strategies for the acquisition of visual information relevant to the task, and learning how to generalize these strategies in ambiguous and unexpected scenario conditions. From recruit at phz.com Wed Apr 10 11:54:46 1996 From: recruit at phz.com (PHZ Recruiting) Date: Wed, 10 Apr 96 11:54:46 EDT Subject: Boston area job at PHZ modeling financial markets Message-ID: <9604101554.AA21791@phz.com> Applied research position available immediately in NONLINEAR STATISTICAL MODELING OF FINANCIAL MARKETS at PHZ CAPITAL PARTNERS LP PHZ is a small Boston area startup company founded in 1993 which manages client money using proprietary statistical models to invest in global securities markets. The principals are Tomaso Poggio, Jim Hutchinson, and Xiru Zhang. Following an equity investment by one of the world's largest futures trading manager firms, PHZ is seeking a person to join our team and expand our trading system development efforts. The successful applicant for this position will have a M.S. or Ph.D. in statistics, computer science, finance, or a related field. Experience with advanced statistical modeling tools, large real world data sets, and software development on PCs and Unix systems (esp. using C/C++ and statistics languages such as S+ or SAS) is highly desirable; working knowledge of financial markets is also a plus. Depending on candidate interests and skills, this position will involve or lead into basic research and application of sophisticated model development tools, exploratory data gathering and analysis, development of our trading and risk management software platform, and/or trading and monitoring live models.
The growth potential of this position is large, both in terms of responsibilities and compensation. Initial compensation will be competitive based on qualifications, possibly including stock options. Interested applicants should email resumes (ascii or postscript) to recruiting at phz.com, or send by US mail to: Attn: Recruiting PHZ Capital Partners LP 111 Speen St, Suite 313 Framingham, MA 01701 USA From wgm at santafe.edu Wed Apr 10 16:22:20 1996 From: wgm at santafe.edu (Bill Macready) Date: Wed, 10 Apr 96 14:22:20 MDT Subject: No subject Message-ID: <9604102022.AA25825@sfi.santafe.edu> We would like to announce a paper entitled: An Efficient Method To Estimate Bagging's Generalization Error D.H. Wolpert, W.G. Macready In bagging one uses bootstrap replicates of the training set to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of $m^2$ times, where $m$ is the size of the training set. This paper presents several ways to exploit the bias-variance decomposition to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. In a set of experiments, the accuracy of this estimator was compared to both the accuracy of using cross-validation to estimate the generalization error of the underlying learning algorithm, and the accuracy of using cross-validation to estimate the generalization error of the bagged algorithm. The estimator presented here was comparable in its accuracy to, and sometimes even more accurate than, the alternative cross-validation-based estimators. This paper is available from the web site "http://www.santafe.edu/~wgm/papers.html" or by ftp from "ftp://ftp.santafe.edu/pub/wgm/error.ps.gz" From mjo at cns.ed.ac.uk Wed Apr 10 09:12:20 1996 From: mjo at cns.ed.ac.uk (Mark Orr) Date: Wed, 10 Apr 1996 14:12:20 +0100 Subject: Introduction to RBF networks plus Matlab package Message-ID: <199604101312.OAA01716@garbo.cns.ed.ac.uk> Announcing the availability of the following resources on the World Wide Web at URL http://www.cns.ed.ac.uk/people/mark.html.

Introduction to RBF Networks
----------------------------
A 67-page introduction to linear feed-forward neural networks for supervised learning, such as radial basis function networks, where there is a single hidden layer and the only parameters that change during learning are the weights from the hidden units to the outputs. But more importantly, it covers "nearly linear" networks: networks which, though nonlinear (because learning affects more than just the hidden-to-output weights), can still be analysed with simple mathematics (linear algebra) and which don't need compute-intensive gradient descent methods to learn. This applies to RBF networks with local ridge regression or which use regularised forward selection to build the hidden layer. These techniques, in conjunction with leave-one-out or generalised cross-validation, are covered in detail. Available in PostScript or hyper-text.
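The practical payoff of such "nearly linear" networks is that, once the hidden layer is fixed, the output weights and the cross-validation score come straight out of linear algebra, with no gradient descent. Here is a minimal sketch (Gaussian RBFs, global ridge regression and GCV on invented 1-D data); it is illustrative only and is not code from the package announced below.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.uniform(-3, 3, size=(60, 1))               # training inputs
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)    # noisy targets

    centres, width = X[::3], 1.0                       # hidden units fixed ahead
    H = np.exp(-((X - centres[:, 0]) ** 2) / (2 * width ** 2))  # design matrix

    def fit_ridge(H, y, lam):
        """Hidden-to-output weights by regularised least squares."""
        A = H.T @ H + lam * np.eye(H.shape[1])
        w = np.linalg.solve(A, H.T @ y)
        P = H @ np.linalg.solve(A, H.T)                # projection ("hat") matrix
        return w, P

    def gcv(H, y, lam):
        """Generalised cross-validation score; no retraining needed."""
        w, P = fit_ridge(H, y, lam)
        m = len(y)
        rss = ((y - H @ w) ** 2).sum()
        return m * rss / (m - np.trace(P)) ** 2

    for lam in (1e-4, 1e-2, 1e0):
        print(lam, round(gcv(H, y, lam), 5))

Sweeping the regularisation parameter and keeping the minimiser of the GCV score is the global ridge-regression recipe; local ridge regression and forward subset selection refine the same machinery.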
Matlab Routines for
-------------------
Subset Selection and Ridge Regression
-------------------------------------
in Linear Neural Networks
-------------------------
A package of Matlab routines implementing regularised or unregularised forward subset selection and global or local ridge regression in linear networks such as radial basis function networks. Comes with a 45-page user manual with plenty of examples. Available as a compressed Unix tape archive (.tar file). The author would like to acknowledge support from the UK Joint Councils Initiative in Human Computer Interaction and Cognitive Science under grant G9213375, "Designing Systems of Coupled Networks". Mark Orr mark at cns.ed.ac.uk From rafal at mech.gla.ac.uk Fri Apr 12 11:45:44 1996 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Fri, 12 Apr 1996 16:45:44 +0100 Subject: Workshop on Neurocontrol Message-ID: <29240.199604121545@trebino.mech.gla.ac.uk> CALL FOR PAPERS Neural Adaptive Control Technology Workshop: NACT II 9--10 September, 1996 Daimler-Benz Systems Technology Research Berlin, Germany NACT Project ============ The second of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on September 9--10, 1996 in Berlin, Germany. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland. The project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed are being evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Daimler-Benz Aerospace (DASA), AEG Daimler-Benz Industrie and DEBIS. The project leader is Dr Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow). NACT II Workshop ================ The aim of the workshop is to bring together selected invited specialists in the fields of adaptive control, non-linear systems and neural networks. The first workshop (NACT I) took place in Glasgow in May 1995 and was mainly dedicated to theoretical issues of neural adaptive control. Besides monitoring further development of theory, the NACT II workshop will be focused on industrial applications and software tools. A number of contributed papers will also be included. As well as paper presentation, significant time will be allocated to round-table and discussion sessions. In order to create a fertile atmosphere for a significant information interchange, we aim to attract active specialists in the relevant fields. Professor Karl Johan Astrom of Lund Institute of Technology, Sweden and Professor Hassan K. Khalil of Michigan State University have kindly agreed to act as invited speakers. Proceedings of the meeting will be published in an edited book format. Contributed papers ================== The Program Committee is soliciting contributed papers in the area of neurocontrol for presentation at the conference and publication in the Proceedings.
Prospective authors are invited to send an extended abstract of up to six pages in length to the address below no later than Friday, 31 May 1996. Final selection of papers will be announced at the end of June, and authors will have the opportunity of preparing a final version of the extended abstract by the end of July, which will be circulated to participants in a Workshop digest. Following the Workshop, selected authors will be asked to prepare a full paper for publication in the proceedings. This will take the form of an edited book produced by an international publisher. LaTeX style files will be available for document preparation. Each submitted paper must be headed with a title, the names, affiliations and complete mailing addresses (including e-mail) of all authors, a list of three keywords, and the statement NACT II. The first named author of each paper will be used for all correspondence unless otherwise requested. Address for submissions: Dr Kenneth J Hunt Daimler-Benz AG Systems Technology Research Alt-Moabit 96A 10559 BERLIN Germany hunt at DBresearch-berlin.de For more information visit the NACT Web page http://www.mech.gla.ac.uk/~nactftp/nact.html From istvan at psych.ualberta.ca Fri Apr 12 22:01:16 1996 From: istvan at psych.ualberta.ca (Istvan Berkeley) Date: Fri, 12 Apr 1996 20:01:16 -0600 Subject: Workshop Message-ID: CONNECTIONISM FOR COGNITIVISTS: THEORY AND APPLICATIONS On 25-27 May 1996, a major international workshop on recent theoretical and applied aspects of network architectures will be held at Carleton University in Ottawa, Canada. The workshop has a unique structure: approximately half of it will be devoted to presentations of theoretical work by eminent researchers, while the other half will involve hands-on introductions to new software that allows for the use of learning algorithms and techniques for hidden-unit activation analysis that are not available to researchers whose main knowledge of networks stems from the seminal 1986 PDP volumes by Rumelhart, McClelland, et al., or who are, indeed, unfamiliar with the details of *any* PDP modelling techniques, but who would like to understand in detail why they have produced so much interest and debate among cognitive scientists and others. For the second purpose, all registrants will have access to workstations. The workshop has been designed to appeal to, and to be accessible to, researchers from a wide range of disciplines, especially including cognitive science, philosophy, psychology, linguistics, computer science and telecommunications engineering. We stress that no particular disciplinary background, or technical experience with network models, will be presupposed in the design of the workshop. Principal speakers include:
David Rumelhart, Stanford
Jerome Feldman, Berkeley/ICSI
Paul Skokowski, Stanford
Christopher Thornton, Sussex
Malcolm Forster, Wisconsin at Madison
John Bullinaria, Birkbeck College, London
Istvan Berkeley, Alberta/Southwestern Louisiana
Please note that registration space is limited, and registrations will be accepted on a first-come, first-served basis. Dates: May 25-27, 1996 Registration fees: Regular: $75.00 (CDN) Student: $35.00 (CDN) Banquet (optional): $35.00 (CDN) REGISTRATION PROCEDURES Those wishing to attend the conference may register either electronically or by mail. To register electronically, send the following information to:
NAME:
AFFILIATION:
REGULAR/STUDENT?:
BANQUET (Y/N?):
ACCOMMODATION PREFERENCES (no.
of nights, preference between student residence accommodation [subject to availability] or hotel):
MAILING ADDRESS:
E-MAIL:
Electronic registrations will be considered confirmed upon receipt of a cheque for the appropriate amount, in either Canadian dollars or the U.S. equivalent. Cheques should be made payable to CARLETON UNIVERSITY, and should be sent to the address given for postal registration below. To register by post, send the information indicated above, with a cheque for the appropriate amount, to: CONNECTIONISM c/o Professor Don Ross Department of Philosophy Morisset Hall University of Ottawa Ottawa, ON CANADA K1N 6N5 e-mail: Istvan S. N. Berkeley, email: istvan at psych.ualberta.ca, Biological Computation Project & Department of Philosophy, c/o 4-108 Humanities Center, University of Alberta, Edmonton, Alberta T6G 2E5, Canada. Tel: +1 403 436 4182. Fax: +1 403 437 2261. From marco at idsia.ch Mon Apr 15 03:39:15 1996 From: marco at idsia.ch (Marco Wiering) Date: Mon, 15 Apr 96 09:39:15 +0200 Subject: Levin Search and EIRA Message-ID: <9604150739.AA07623@fava.idsia.ch> FTP-host: ftp.idsia.ch FTP-file: /pub/marco/ml_levin_eira.ps.gz or /pub/juergen/ml_levin_eira.ps.gz Solving POMDPs with Levin Search and EIRA Marco Wiering Juergen Schmidhuber Machine Learning: 13th Intern. Conf., 1996 9 pages, 86K compressed, 252K uncompressed Partially observable Markov decision problems (POMDPs) recently received a lot of attention in the reinforcement learning community. No attention, however, has been paid to Levin's universal search through program space (LS), which is theoretically optimal for a wide variety of search problems including many POMDPs. Experiments in this paper show that LS can solve partially observable mazes (`POMs') involving many more states and obstacles than those solved by various previous authors. We then note, however, that LS is not necessarily optimal for learning problems where experience with previous problems can be used to speed up the search. For this reason, we introduce an adaptive extension of LS (ALS) which uses experience to increase probabilities of instructions occurring in successful programs found by LS. To deal with cases where ALS does not lead to long-term performance improvement, we use the recent technique ``environment-independent reinforcement acceleration'' (EIRA) as a safety belt (EIRA currently is the only known method that guarantees a lifelong history of reward accelerations). Additional experiments demonstrate: (a) ALS can dramatically reduce search time consumed by calls of LS. (b) Further significant speed-ups can be obtained by combining ALS and EIRA. To obtain a copy, do one of these: netscape http://www.idsia.ch/~marco/publications.html netscape http://www.idsia.ch/~juergen/onlinepub.html Marco Wiering Juergen Schmidhuber IDSIA From reiner at isy.liu.se Tue Apr 16 04:38:07 1996 From: reiner at isy.liu.se (Reiner Lenz) Date: Tue, 16 Apr 1996 10:38:07 +0200 (MET DST) Subject: Invariance, group representations and orientation estimation Message-ID: <199604160838.KAA12946@einstein.isy.liu.se> Problems involving the concept of invariance have received a lot of attention, and although the following papers are somewhat outside the field of neural networks, perhaps someone may find something interesting in them. The main idea is that invariance is often closely related to groups and their representations, and that these in turn are closely related to special transforms. The most important example is shift-invariance, which is related to the additive group and which leads to the Fourier transform.
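A two-line check makes that last point concrete (a toy of my own, not taken from the reprints): circularly shifting a signal changes each DFT coefficient only by a unit-modulus phase factor, so the magnitude spectrum is a shift-invariant feature.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.random(64)                 # an arbitrary 1-D signal
    y = np.roll(x, 17)                 # the same signal, circularly shifted

    # A shift multiplies each DFT coefficient by a unit-modulus phase, so
    # the magnitude spectrum is unchanged: a shift-invariant feature.
    print(np.allclose(np.abs(np.fft.fft(x)), np.abs(np.fft.fft(y))))   # True

The papers listed below generalize this construction from the cyclic translation group to other symmetry groups, such as the dihedral group.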
If you are interested, you can find some of the reprints at http://www.isy.liu.se/~reiner/proj_desc/section3_3.html
GROUPS: an overview article
P2-invariance: describes the application to permutation and projection invariance
Group Theoretical Transforms: uses the dihedral group
Lie-Matching: computes the orientation parameters from 3-D data and is an example of fast iterative matching algorithms based on the interplay between Lie-group and Lie-algebra
As I said before: not strictly NN, but perhaps interesting to someone. Best regards "Kleinphi macht auch Mist" Reiner Lenz | Dept. EE. | | Linkoeping University | email: reiner at isy.liu.se | S-58183 Linkoeping/Sweden | From stefano at kant.irmkant.rm.cnr.it Tue Apr 16 08:05:48 1996 From: stefano at kant.irmkant.rm.cnr.it (Stefano Nolfi) Date: Tue, 16 Apr 1996 12:05:48 GMT Subject: Paper available on adaptive classification with autonomous robots Message-ID: <9604161205.AA19378@kant.irmkant.rm.cnr.it> Paper available via WWW / FTP: Keywords: Active Perception, Adaptive Behaviors, Evolutionary Robotics, Neural Networks, Genetic Algorithms. ------------------------------------------------------------------------------ ADAPTATION AS A MORE POWERFUL TOOL THAN DECOMPOSITION AND INTEGRATION Stefano Nolfi Institute of Psychology, C.N.R., Rome. Recently a new way of building control systems, known as behavior-based robotics, has been proposed to overcome the difficulties of the traditional AI approach to robotics. Most of the work done in behavior-based robotics involves a decomposition process (in which the required behavior is broken down into simpler sub-components) and an integration process (in which the modules designed to produce the sub-behaviors are put together). In this paper we claim that decomposition and integration should be the result of an adaptation process and not of the decision of an experimenter. To support this hypothesis we show how, in the case of a simple task in which a real autonomous robot is supposed to classify objects of different shapes, a simpler and more robust solution can be obtained by letting the entire behavior emerge through an evolutionary technique than by trying to design a set of modules and to integrate them. http://kant.irmkant.rm.cnr.it/public.html or ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5) ftp-file: /pub/econets/nolfi.recog.ps.Z for the homepage of our research group with most of our publications available online and pointers to ALIFE resources see: http://kant.irmkant.rm.cnr.it/gral.html ---------------------------------------------------------------------------- Stefano Nolfi Institute of Psychology, C.N.R. Viale Marx, 15 - 00137 - Rome - Italy voice: 0039-6-86090231 fax: 0039-6-824737 e-mail: stefano at kant.irmkant.rm.cnr.it www: http://kant.irmkant.rm.cnr.it/nolfi.html From smyth at galway.ICS.UCI.EDU Tue Apr 16 14:02:06 1996 From: smyth at galway.ICS.UCI.EDU (Padhraic Smyth) Date: Tue, 16 Apr 1996 11:02:06 -0700 Subject: Final CFP for Sixth AI and Statistics Workshop Message-ID: <9604161102.aa05163@paris.ics.uci.edu> Apologies to those of you who receive this more than once. The deadline for 4-page abstracts is July 1; electronic submissions are encouraged. Padhraic Smyth AIStats97 General Chair Final Call For Papers SIXTH INTERNATIONAL WORKSHOP ON ARTIFICIAL INTELLIGENCE AND STATISTICS January 4-7, 1997 Ft.
Lauderdale, Florida http://www.stat.washington.edu/aistats97/ PURPOSE: This is the sixth in a series of workshops which has brought together researchers in Artificial Intelligence (AI) and in Statistics to discuss problems of mutual interest. The exchange has broadened research in both fields and has strongly encouraged interdisciplinary work. Papers on all aspects of the interface between AI & Statistics are encouraged. FORMAT: To encourage interaction and a broad exchange of ideas, the presentations will be limited to about 20 discussion papers in single session meetings over three days (Jan. 5-7). Focussed poster sessions will provide the means for presenting and discussing the remaining research papers. In any resulting publications, papers from poster sessions will be treated equally with papers presented orally. Attendance at the workshop will *not* be limited. The three days of research presentations will be preceded by a day of tutorials (Jan. 4). These are intended to expose researchers in each field to the methodology used in the other field. The tutorial speakers are A. P. Dawid (University College London), Michael Jordan (MIT), Tom Mitchell (Carnegie Mellon), and Mike West (Duke University). TOPICS OF INTEREST: - automated data analysis and knowledge representation for statistics - statistical strategy - metadata and design of statistical databases - multivariate graphical models, belief networks - causality - cluster analysis and unsupervised learning - predictive modeling: classification and regression - interpretability in modeling - model uncertainty, multiple models - probability and search - knowledge discovery in databases - integrated man-machine modeling methods - statistical methods in AI approaches to vision, robotics, pattern recognition, software agents, planning, information retrieval, natural language processing, etc. - AI methods applied to problems in statistics such as statistical advisory systems, experimental design, exploratory data analysis, causal modeling, etc. This list is not intended to define an exclusive list of topics of interest. Authors are encouraged to submit papers on any topic which falls within the intersection of AI and Statistics. SUBMISSION REQUIREMENTS: Three copies of an extended abstract (up to 4 pages) should be sent to David Madigan, Program Chair 6th International Workshop on AI and Statistics Department of Statistics, Box 354322 University of Washington Seattle, WA 98195 or electronically (postscript or latex preferred) to aistats at stat.washington.edu Submissions will be considered if *postmarked* by June 30, 1996. If the submission is electronic (e-mail), then it must be *received* by midnight July 1, 1996. Please indicate which topic(s) your abstract addresses and include an electronic mail address for correspondence. Receipt of all submissions will be confirmed via electronic mail. Acceptance notices will be mailed by September 1, 1996. Preliminary papers (up to 20 pages) must be returned by November 1, 1996. These preliminary papers will be copied and distributed at the workshop. PROGRAM COMMITTEE: General Chair: P. Smyth UC Irvine and JPL Program Chair: D. Madigan U. Washington Members: Russell Almond, ETS, Princeton Wray Buntine, Thinkbank, Inc.
Peter Cheeseman, NASA Ames Paul Cohen, University of Massachusetts Greg Cooper, University of Pittsburgh Bill DuMouchel, Columbia University Doug Fisher, Vanderbilt University Dan Geiger, Technion Clark Glymour, Carnegie-Mellon University David Hand, Open University, UK Steve Hanks, University of Washington Trevor Hastie, Stanford University David Haussler, UC Santa Cruz David Heckerman, Microsoft Paula Hietala, University of Tampere, Finland Geoff Hinton, University of Toronto Mike Jordan, MIT Hans Lenz, Free University of Berlin, Germany David Lewis, AT&T Bell Labs Andrew Moore, Carnegie-Mellon University Radford Neal, University of Toronto Jonathan Oliver, Monash University, Australia Steve Omohundro, NEC Research, Princeton Judea Pearl, UCLA Daryl Pregibon, AT&T Bell Labs Ross Shachter, Stanford University Glenn Shafer, Rutgers University Prakash Shenoy, University of Kansas David Spiegelhalter, MRC, Cambridge, UK Peter Spirtes, Carnegie-Mellon University MORE INFORMATION: For more information see the workshop's Web page: http://www.stat.washington.edu/aistats97/ or write David Madigan at aistats at stat.washington.edu for inquiries concerning the technical program or Padhraic Smyth at aistats at jpl.nasa.gov for other inquiries about the workshop. Write to ai-stats-request at watstat.uwaterloo.ca to subscribe to the AI and Statistics mailing list. -------- From jhf at playfair.Stanford.EDU Tue Apr 16 19:37:49 1996 From: jhf at playfair.Stanford.EDU (Jerome H. Friedman) Date: Tue, 16 Apr 1996 16:37:49 -0700 Subject: Paper available. Message-ID: <199604162337.QAA22041@playfair.Stanford.EDU> *** Paper Announcement *** ON BIAS, VARIANCE, 0/1 - LOSS, AND THE CURSE-OF-DIMENSIONALITY Jerome H. Friedman Stanford University (jhf at playfair.stanford.edu) ABSTRACT The classification problem is considered in which an output variable assumes discrete values with respective probabilities that depend upon the simultaneous values of a set of input variables. At issue is how error in the estimates of these probabilities affects classification error when the estimates are used in a classification rule. These effects are seen to be somewhat counterintuitive in both their strength and nature. In particular, the bias and variance components of the estimation error combine to influence classification in a very different way than with squared error on the probabilities themselves. Certain types of (very high) bias can be canceled by low variance to produce accurate classification. This can dramatically mitigate the effect of the bias associated with some simple estimators like "naive" Bayes, and the bias induced by the curse-of-dimensionality on nearest-neighbor procedures. This helps explain why such simple methods are often competitive with and sometimes superior to more sophisticated ones for classification, and why "bagging/aggregating" classifiers can often improve accuracy. These results also suggest simple modifications to these procedures that can (sometimes dramatically) further improve their classification performance. Available by ftp from: "ftp://playfair.stanford.edu/pub/friedman/curse.ps.Z" From ajit at austin.ibm.com Wed Apr 17 12:37:52 1996 From: ajit at austin.ibm.com (Dingankar) Date: Wed, 17 Apr 1996 11:37:52 -0500 Subject: Neuroprose paper announcement Message-ID: <9604171637.AA32765@ding.austin.ibm.com> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. 4 pages. Greetings! The following invited paper will be presented at ISCAS in May 1996.
The compressed PostScript file is available in the Neuroprose archive; the details (URL, bibtex entry and abstract) follow. Thanks, Ajit ------------------------------------------------------------------------------ URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.tensor-products2.ps.Z BiBTeX entry: @INPROCEEDINGS{atd:iscas-96, AUTHOR ="Dingankar, Ajit T. and Sandberg, Irwin W.", TITLE ="{Tensor Product Neural Networks and Approximation of Dynamical Systems}", BOOKTITLE ="Proceedings of the International Symposium on Circuits and Systems", YEAR ="1996", EDITOR ="", PAGES ="", ORGANIZATION ="", PUBLISHER ="", ADDRESS ="Atlanta, Georgia", MONTH ="May 13--15" } Tensor Product Neural Networks and Approximation of Dynamical Systems --------------------------------------------------------------------- ABSTRACT We consider the problem of approximating any member of a large class of input-output operators of nonlinear dynamical systems. The systems need not be shift invariant, and the system inputs need not be continuous. We introduce a family of ``tensor product'' dynamical neural networks, and show that a certain continuity condition is necessary and sufficient for the existence of arbitrarily good approximations using this family. From jlarsen at eivind.ei.dtu.dk Wed Apr 17 14:27:33 1996 From: jlarsen at eivind.ei.dtu.dk (Jan Larsen) Date: Wed, 17 Apr 1996 14:27:33 +-200 Subject: Ph.D. Course in Advanced Digital Signal Processing Message-ID: <01BB2C6A.058CF520@jl.ei.dtu.dk> ******************** *** ANNOUNCEMENT *** ******************** Ph.D. Course in Advanced Digital Signal Processing Host: Section for Digital Signal Processing, Dept. of Mathematical Modelling, Technical University of Denmark. Course responsible persons: Assoc. Prof. Lars Kai Hansen, email: lkhansen at ei.dtu.dk Assoc. Prof. Steffen Duus Hansen, email: sdh at imm.dtu.dk Assis. Prof. Jan Larsen, email: jl at imm.dtu.dk Assoc. Prof. John Aasted Sorensen, email: jaas at imm.dtu.dk Course Highlights: * Design of neural networks. * Signal processing with neural networks. * Vector quantization with application to speech technology. * Adaptive signal processing, filter banks and wavelets. Dates: Full time in weeks 25, 26 and 27, June and July 1996. Registration: Deadline: May 1, 1996. Department of Mathematical Modelling, Build. 321, Technical University of Denmark, DK-2800 Lyngby, Phone +45 45881433. Fax +45 45881397. Notification of acceptance: May 10, 1996. Further info: Course Description: http://www.ei.dtu.dk/teaching/phd_AdvDigSignalProc.html DSP Section Homepage: http://www.ei.dtu.dk/dsphomepage.html !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !!! Please forward this message to people who might be interested !!! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -- Jan Larsen From verleysen at dice.ucl.ac.be Thu Apr 18 11:21:32 1996 From: verleysen at dice.ucl.ac.be (verleysen@dice.ucl.ac.be) Date: Thu, 18 Apr 1996 16:21:32 +0100 Subject: Neural Processing Letters - new publisher Message-ID: <199604181416.QAA05395@ns1.dice.ucl.ac.be> Dear colleagues, The "Neural Processing Letters" journal has been published every two months since 1994; its aim is to rapidly publish new ideas or new developments in the field of artificial neural networks. Today, we are happy to announce that Kluwer Academic Publishers will publish this journal from 1996, in order to ensure its worldwide distribution. More information will soon be available on a WWW server.
Nevertheless, in the meantime, you will find enclosed some details from Kluwer (see below). You can also contact Mike Casey for free sample copies of the journal, and any details about the submission of papers: Mike Casey Kluwer Academic Publishers Spuiboulevard 50 P.O. Box 17 NL - 3300 AA Dordrecht The Netherlands Phone: + 31 78 6392219 Fax: + 31 78 6392254 E-mail: casey at wkap.nl Thank you for your interest in this journal. Michel Verleysen, co-editor. ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- Neural Processing Letters ------------------------- Editors: Michel Verleysen, Universite Catholique de Louvain, Belgium; Francois Blayo, EERIE, Lyon, France Neural Processing Letters is an international journal publishing research results and innovative ideas in all fields of artificial neural networks. Prospective authors are encouraged to submit letters concerning any aspect of the Artificial Neural Networks field including, but not restricted to, theoretical developments, biological models, new formal modes, learning, applications, software and hardware developments, and prospective research. The journal promotes fast exchange of information in the community of neural network researchers and users. The resurgence of interest in the field of artificial neural networks since the beginning of the 1980s is coupled with tremendous research activity in specialized or multidisciplinary groups. Research, however, is not possible without good communication between people and the exchange of information, especially in a field covering such different areas; fast communication is also a key aspect, and this is the reason for Neural Processing Letters. Subscription Information: Kluwer Academic Publishers, Boston ISSN: 1370-4621 1996, Volumes 3-4 (6 issues) Prices: Institutional Price NLG: 357.00 Institutional Price USD: 217.00 Private Price NLG: 250.00 Private Price USD: 150.00 ============================================================================= SUBSCRIPTION ORDER FORM Journal Title: Neural Processing Letters 1996, Volumes 3-4 (6 issues) ISSN: 1370-4621 Institutional Rate: NLG: 357.00 USD: 217.00 Ref: KAPIS ( ) Payment enclosed to the amount of ___________________________ ( ) Please send invoice ( ) Please charge my credit card account: Card no.: |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| Expiry date: ______________ ( ) Access ( ) American Express ( ) Mastercard ( ) Diners Club ( ) Eurocard ( ) Visa Name of Card holder: ___________________________________________________ Delivery address: Title : ___________________________ Initials: _______________ M/F ______ First name : ______________________ Surname: ______________________________ Organization: ______________________________________________________________ Department : ______________________________________________________________ Address : ______________________________________________________________ Postal Code : ___________________ City: ____________________________________ Country : _____________________________ Telephone: ______________________ Email : ______________________________________________________________ Date : _____________________ Signature: _____________________________ Our European VAT registration number is: |_|_|_|_|_|_|_|_|_|_|_|_|_|_| To be sent to: For customers in Mexico, USA, Canada and Latin America: Kluwer Academic Publishers, Order Department, P.O. Box 358, Accord Station, Hingham, MA 02018-0358, U.S.A. Tel: 617 871 6600 Fax: 617 871 6528 Email: kluwer at wkap.com For customers in the rest of the world: Kluwer Academic Publishers Group, Journals Department, P.O. Box 322, 3300 AH Dordrecht, The Netherlands Tel: +31 78 6392392 Fax: +31 78 6546474 Email: services at wkap.nl
Payment will be accepted in any convertible currency. Please check the rate of exchange with your bank. Prices are subject to change without notice. All prices are exclusive of Value Added Tax (VAT). Customers in the Netherlands please add 6% VAT. Customers from other countries in the European Community: * please fill in the VAT number of your institute/company in the appropriate space on the order form; or * please add 6% VAT to the total order amount (customers from the U.K. are not charged VAT). ============================================================================= ===================================================== Michel Verleysen Universite Catholique de Louvain Microelectronics Laboratory 3, pl. du Levant B-1348 Louvain-la-Neuve Belgium tel: +32 10 47 25 51 fax: + 32 10 47 86 67 E-mail: verleysen at dice.ucl.ac.be WWW: http://www.dice.ucl.ac.be/~verleyse/MV.html ===================================================== From terry at salk.edu Thu Apr 18 14:04:23 1996 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 18 Apr 96 11:04:23 PDT Subject: NEURAL COMPUTATION 8:4 Message-ID: <9604181804.AA17272@salk.edu> Neural Computation - Contents Volume 8, Number 4 - May 15, 1996 Article: Stable encoding of large finite-state automata in recurrent neural networks with sigmoid discriminants Christian W. Omlin and C. Lee Giles Note: Unicycling helps your French: Spontaneous recovery of association by learning unrelated tasks Inman Harvey and James V. Stone Letters: A theory of the visual motion coding in the primary visual cortex Zhaoping Li Alignment of Coexisting Cortical Maps in a Motor Control Model James A. Reggia and Yinong Chen Controlling the magnification factor of self-organizing feature maps H.-U. Bauer, R. Der and M. Herrmann Semilinear Predictability Minimization Produces Well-Known Feature Detectors Jurgen Schmidhuber, Martin Eldracher and Bernhard Foltin Learning with preknowledge: Clustering with point and graph matching distance measures Steven Gold, Anand Rangarajan and Eric Mjolsness Analog versus discrete neural networks Bhaskar DasGupta and Georg Schnitger On the relationship between generalization error, hypothesis complexity, and sample complexity for radial basis functions Partha Niyogi and Federico Girosi Using neural networks to model conditional multivariate densities Peter M. Williams Pruning with replacement on limited resource allocation networks by F-projections Christophe Molina and Mahesan Niranjan Engineering multiversion neural-net systems D. Partridge and W. B. Yates Effects of nonlinear synapses on the performance of multilayer neural networks G. Dundar, F-C. Hsu, and K. Rose ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $220 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA, +7% GST for Canada.) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779 ----- From robert at fit.qut.edu.au Fri Apr 19 03:26:17 1996 From: robert at fit.qut.edu.au (Robert Andrews) Date: Fri, 19 Apr 1996 17:26:17 +1000 (EST) Subject: Rule Extraction Book Message-ID: ======================== NEW BOOK ANNOUNCEMENT ============================ RULES AND NETWORKS Proceedings of the Rule Extraction From Trained Artificial Neural Networks Workshop Society for the Study of Artificial Intelligence and the Simulation of Behavior Workshop Series, (AISB'96) University of Sussex, Brighton, UK. 2nd April, 1996 Robert Andrews & Joachim Diederich (Editors) =========================== ORDER FORM =================================== Name: _________________________________________________ Address: ______________________________________________ ______________________________________________ ______________________________________________ ______________________________________________ Number of Copies: __________ @ $22.00 (Australian) ___________ Postage & Handling (A$4.00 in Aust, A$10.00 O'Seas) ___________ TOTAL ___________ Payment (in Australian Dollars) Cheque/Money Order made out to: QUT BookShop GPO Box 2434, Brisbane. 4001. Queensland, Australia. Credit Card: MasterCard / Visa / BankCard Number: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Expiry: / / ======================= TABLE OF CONTENTS ================================ Rules and Local Function Networks Robert Andrews and Shlomo Geva The Extraction of Sugeno Fuzzy Rules From Neural Networks Adelmo L. Cechin, Ulrich Epperlein, Wolfgang Rosenstiel and Bernhard Koppenhoefer RULE_OUT Method: A New Approach For Knowledge Explicitation From Trained ANN Loic Decloedt, Fernando Osorio and Bernard Amy Rule Initialisation by Neural Networks Joachim Diederich, James M Hogan, Mostefa Golea and Santhi Muthiah Explaining Results of Neural Networks by Contextual Importance and Utility Kary Framling On the Complexity of Rule Extraction From Neural Networks and Network Querying Mostefa Golea Rule Extraction from Neural Networks Peter Howes and Nigel Crook Using Relevance Information in the Acquisition of Rules From pelillo at dsi.unive.it Fri Apr 19 09:09:32 1996 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Fri, 19 Apr 1996 15:09:32 +0200 (MET DST) Subject: EMMCVPR'97 - Venice - Call for Papers Message-ID: <199604191309.PAA19019@oink.dsi.unive.it> CALL FOR PAPERS International Workshop on ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION Venice, Italy, May 21-23, 1997 Energy minimization methods represent a fundamental methodology in computer vision and pattern recognition, with roots in such diverse disciplines as Physics, Psychology, and Statistics. Recent manifestations of the idea include Markov random fields, relaxation labeling, various types of neural networks, etc. These techniques are finding application in areas such as early vision, graph matching, motion analysis, visual reconstruction, etc. The aim of this workshop is to consolidate research efforts in this area, and to provide a discussion forum for researchers and practitioners interested in this important yet diverse subject. The scientific program of the workshop will include the presentation of invited talks and contributed research papers. The workshop is sponsored by the International Association for Pattern Recognition (IAPR) and organized by the Department of Applied Mathematics and Computer Science of the University of Venice "Ca' Foscari." 
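To fix notation for readers new to the area (an illustrative sketch, not part of the official call): a prototypical energy of the kind covered by the workshop is the first-order Markov random field labeling energy

    E(x) = \sum_i D_i(x_i) + \lambda \sum_{(i,j) \in N} V(x_i, x_j),

where x_i is the label assigned to site i, D_i measures disagreement with the observed data, V penalizes label differences between neighboring sites (i,j) in the neighborhood system N, and \lambda trades off the two terms; maximum a posteriori estimation under the corresponding Gibbs distribution P(x) \propto e^{-E(x)} then amounts to minimizing E.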
Topics Papers covering (but not limited to) the following topics are solicited: Theory: (e.g., Bayesian contextual methods, biology-inspired methods, discrete optimization, information theory and statistics, learning and parameter estimation, Markov random fields, neural networks, relaxation processes, statistical mechanics approaches, stochastic methods, variational methods) Methodology: (e.g., deformable models, early vision, matching, motion, object recognition, shape, stereo, texture, visual organization) Applications: (e.g., character and text recognition, face processing, handwriting, medical imaging, remote sensing) Program co-chairs Marcello Pelillo, University of Venice, Italy Edwin R. Hancock, University of York, UK Program committee Davi Geiger, New York University, USA Anil K. Jain, Michigan State University, USA Josef Kittler, University of Surrey, UK Stan Z. Li, Nanyang Technological University, Singapore Jean-Michel Morel, Universite' Paris Dauphine, France Maria Petrou, University of Surrey, UK Anand Rangarajan, Yale University, USA Sergio Solimini, Polytechnic of Bari, Italy Alan L. Yuille, Harvard University, USA Josiane Zerubia, INRIA, France Steven W. Zucker, McGill University, Canada Invited speakers Anil K. Jain, Michigan State University, USA Josef Kittler, University of Surrey, UK Alan L. Yuille, Harvard University, USA Steven W. Zucker, McGill University, Canada Venue The workshop will be held at the University of Venice "Ca' Foscari." The lecture theater will be in the historic center of Venice, and accommodation will be provided in nearby hotels. Submission procedure Prospective authors should submit four copies of their contribution(s) by September 9, 1996 to: Marcello Pelillo (EMMCVPR'97) Dipartimento di Matematica Applicata e Informatica Universita' "Ca' Foscari" di Venezia Via Torino 155, 30173 Venezia Mestre, Italy E-mail: pelillo at dsi.unive.it The manuscripts submitted should be no longer than 15 pages, and the cover page should contain: title, author's name, affiliation and address, e-mail address, fax and telephone number, and an abstract no longer than 200 words. In case of joint authorship, the first name will be used for correspondence unless otherwise requested. All manuscripts will be reviewed by at least two members of the program committee. Accepted papers will appear in the proceedings which are expected to be published in the series Lecture Notes in Computer Science by Springer-Verlag, and will be distributed to all participants at the workshop. In order to get a high-quality book with a uniform and professional appearance, prospective authors are strongly encouraged to use the LaTeX style file available at the WWW site indicated below. Important dates Paper submission deadline: September 9, 1996 Notification of acceptance: December 1996 Camera-ready paper due: February 1997 Homepage Information on the workshop is maintained at http://Dcpu1.cs.york.ac.uk:6666/~adjc/EMMCVPR97.html This page will be updated continuously and will include information on accepted papers and the final program. Concomitant events During the week following EMMCVPR'97, participants will have the opportunity to attend the 3rd International Workshop on Visual Form (IWVF3) to be held in Capri, May 28-30. 
For additional information, please contact any of the co-chairmen Carlo Arcelli (car at imagm.na.cnr.it), Luigi Cordella (cordel at nadis.dis.unina.it), and Gabriella Sanniti di Baja (gsdb at imagm.na.cnr.it), or see http://amalfi.dis.unina.it/IWF3/iwvf3cfp.html From ronnyk at starry.engr.sgi.com Sun Apr 21 02:22:50 1996 From: ronnyk at starry.engr.sgi.com (Ronny Kohavi) Date: Sat, 20 Apr 1996 23:22:50 -0700 Subject: Bias + variance for classification Message-ID: <199604210622.XAA18334@starry.engr.sgi.com> The following paper will appear in the Proceedings of the Thirteenth International Conference on Machine Learning, 1996. It is available at: http://reality.sgi.com/ronnyk under publications (with some slides containing more results) or by anon ftp to ftp://starry.stanford.edu/pub/ronnyk/biasVar.ps There have been some recent announcements of tech-reports for bias-variance decompositions in classification domains (0-1 loss). In our paper we address the desiderata for good bias-variance decompositions and show some problems with other decompositions. We also address an important issue related to the naive estimation of these quantities using frequency counts and offer a correction. Bias Plus Variance Decomposition for Zero-One Loss Functions Ron Kohavi David H. Wolpert Data Mining and Visualization Silicon Graphics, Inc. The Santa Fe Institute ronnyk at sgi.com dhw at santafe.edu We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich [1995] and Breiman [1996]. Their decomposition suffers from some major shortcomings, though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository. -- Ronny Kohavi (ronnyk at sgi.com) From STECK at ie.twsu.edu Mon Apr 22 17:14:55 1996 From: STECK at ie.twsu.edu (JIM STECK) Date: Mon, 22 Apr 1996 16:14:55 CDT (GMT-6) Subject: (Fwd) Student Assistantship in Optical Neural Networks Message-ID: <1DD0983499@ie.twsu.edu> The Neural Network Processing Group at Wichita State University, Wichita, Kansas, is currently seeking qualified Ph.D. candidates in the area of optical neural network implementations. Research Assistantships are available from $10,000 to $14,000 per year depending on qualifications. Candidates should presently have an M.S. degree from an accredited program and will be expected to enroll as a full-time Ph.D. student in the College of Engineering. The appointment will also be contingent on meeting Graduate School requirements. We are especially interested in individuals who have experience with: photorefractive materials, nonlinear optics and artificial neural networks. Wichita State University is an equal opportunity affirmative action employer. Interested candidates should send a resume to: Dr. Steven R. Skinner Dept.
of Electrical Engineering, #44 Wichita State University Wichita, KS 67260-0044 The College of Engineering at Wichita State University is organized into four degree-granting departments: aerospace, electrical, industrial & manufacturing, and mechanical engineering. A Doctor of Philosophy (Ph.D.) is offered by each of the four departments of engineering. The National Institute for Aviation Research is also located on the campus of Wichita State. Wichita State is located in the City of Wichita - Kansas' largest business and industrial center - and is situated within 50 miles of 40 percent of Kansas industry. Wichita, one of the world's largest producers of aircraft through Boeing, Raytheon, Cessna and Learjet, is known as the "Air Capital of the World." For more information on Wichita State University College of Engineering see: http://www.ee.twsu.edu/coe/ '''''''''''''''''''''''''''''''''''''''''''''''''''''''''' James E. Steck Assistant Professor (316)-689-3402 ----- End Included Message ----- From Jean-Pierre.Nadal at tournesol.ens.fr Mon Apr 22 04:25:04 1996 From: Jean-Pierre.Nadal at tournesol.ens.fr (NADAL Jean-Pierre) Date: Mon, 22 Apr 1996 10:25:04 +0200 (MET DST) Subject: No subject Message-ID: <199604220825.KAA11933@tournesol.ens.fr> DYNAMICAL MODELING IN BIOTECHNOLOGY Commission of the European Communities, Directorate General for Science, Research and Development. Biotechnology Programme Advanced workshops in Biotechnology May 27 to June 8 1996 Institute for Scientific Interchange (ISI), Villa Gualino, Torino, Italy Description: An intensive course for biologists at end-graduate and postgraduate level on concepts and methods of biological modeling. The course will include a 3-day computer-literacy prelude, a series of lectures and 6 modeling projects carried out under the supervision of specialized tutors. Topics: Discrete models for simulating biological systems (P. Seiden, IBM New York), Qualitative theory and simulation of dynamical systems (A. Pikovsky, MPI, Potsdam), Monte-Carlo simulation of ageing (D. Stauffer, Koeln), Non-linear excitations and energy localisation (M. Peyrard, ENS, Lyon), Neural Networks (J.-P. Nadal, ENS, Paris), Self-organization and pattern formation in biochemical systems (R. Kapral, Toronto). Projects: Gene expression, DNA/RNA sequence analysis and design, Cellular automata and immune system, Non-linear time-series analysis, Bacterial evolution, Models of ecosystems. Application: Although mainly aimed at biologists (both from industrial and academic research), the course is also open to physicists, chemists, applied mathematicians and computer scientists from EU countries with interdisciplinary interests. Housing is provided at Villa Gualino. Submit your CV, a one-page description of your interests and how you think the course will facilitate long-term research/work goals, and a recommendation letter. Use e-mail if you can. No fees. Participants will be selected by the services of the Commission and the organisers. Some budget is reserved to support living expenses for students, especially from unfavoured European countries. Send applications to Stefano Ruffo, Workshop Organizer, Dipartimento di Energetica, Universita' di Firenze, Via s.
Marta 3, 50139 Firenze, Italy, tel +39-55-4796344, fax +39-55-4796342, e-mail ruffo at ingfi1.ing.unifi.it (http://www.isi.it/dynamical) From robtag at dia.unisa.it Tue Apr 23 07:04:27 1996 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Tue, 23 Apr 1996 13:04:27 +0200 Subject: Teaching Assistant positions available at IIASS Message-ID: <9604231104.AA05791@udsab> The International Institute for Advanced Scientific Studies "E.R. Caianiello" is holding a Master Course on "Advanced Information and Communication Technology". There are two Teaching Assistant positions available, starting from October-December 1996, for one year, on the following subjects: - Pattern analysis and recognition - Communication networks - Parallel and distributed processing - Advanced operating systems - Robotics The work includes research activity in the area of neural nets and related fields. A Ph.D. or 3 years' experience in related areas is required. The salary is $1,300 per month. It is not a permanent position, but it can be renewed for some years. Please send information and CVs to Prof. Maria Marinaro c/o IIASS via Pellegrino, 19 84019 Vietri s/m (SA) Italy fax no. +39 89 761189 or to Dr. Roberto Tagliaferri e-mail robtag at dia.unisa.it From itb2 at psy.ox.ac.uk Tue Apr 23 10:10:52 1996 From: itb2 at psy.ox.ac.uk (Information Theory and the Brain II) Date: Tue, 23 Apr 1996 15:10:52 +0100 (BST) Subject: Call for papers: Information theory and the Brain II. Message-ID: <199604231410.PAA02284@axp02.mrc-bbc.ox.ac.uk> First call for papers: Information Theory and the Brain II To be held on the 20-21st of September, Headland Hotel, Newquay, Cornwall, England. http://www.mrc-bbc.ox.ac.uk/~itb2/conference.html This is the sequel to the conference held in Stirling, Scotland last year. Presentations on any topic relating ideas from either information theory or statistics to the operation of the brain are welcomed. It is hoped that an informal atmosphere can be maintained in the pleasant surroundings that Newquay provides. This year the conference will be held in the Cornish town of Newquay. Apart from being one of the best areas for surfing in Europe, the surrounding countryside is amongst the most beautiful in Britain. The conference will be held in the spectacular Headland Hotel right next to the famous Fistral Beach, and in mid September the water is at its warmest, the surf is starting to get larger, and the summer holiday crowds have headed home. Organising Committee: Roland Baddeley (Chair) Nick Chater Peter Foldiak Peter Hancock Bruno Olshausen Dan Ruderman Simon Schultz Guy Wallis Send short (less than one page) abstracts, and any requests for further information, either electronically to itb2 at psy.ox.ac.uk, or by surface mail to: ITB2 c/o Roland Baddeley, Dept of Psychology, University of Oxford, Oxford, England OX1 3UD Registration will be 40 pounds (about $60 U.S.), with participants expected to find their own accommodation. Accommodation varies in price from as low as 5 pounds for the most basic upwards. Accommodation in the summer can be hard to find, but by the 20th most summer holidays have finished and the situation is much better. More information on accommodation can be found at the above-mentioned web page.
From ma_s435 at crystal.king.ac.uk Tue Apr 23 10:23:51 1996 From: ma_s435 at crystal.king.ac.uk (Dimitris Tsaptsinos) Date: Tue, 23 Apr 1996 10:23:51 GMT0BST Subject: EANN96 Conference Message-ID: <6184177557@crystal.kingston.ac.uk> INVITATION FOR PARTICIPATION AND PROGRAM OUTLINE International Conference on Engineering Applications of Neural Networks (EANN '96) King's College London Strand campus, London, England June 17--19, 1996 The International Conference on Engineering Applications of Neural Networks (EANN '96) is the second conference in the series. The conference is a forum for presenting the latest results on neural network applications in technical fields. 156 papers from over 20 countries have been accepted for oral presentation after a review of the abstracts. More information on the conference EANN '96 is available on the world wide web site at http://www.lpac.ac.uk/EANN96, and on EANN '95 at http://www.abo.fi/~abulsari/EANN95.html Conference secretariat E-mail address : eann96 at lpac.ac.uk Address : EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. Fax: +44 181 5477419 Organisers and co-sponsors Systems Engineering Association IEEE UK Regional Interest Group on Neural Networks London Parallel Applications Centre Neural CCS Ltd. IEE (British Institution of Electrical Engineers) Professional Group C4 Conference chairmen: Abhay Bulsari and Dimitris Tsaptsinos Registration information The conference fee is sterling (GBP) 360. The conference fee can be paid by a bank cheque or a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance at the conference and the proceedings. A registration form can be sent to you by e-mail, and you may return it by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid. For more information, please ask eann96 at lpac.ac.uk The tentative program outline is given below. The detailed program will be prepared at the end of April.
------------------------------------------------------------------------
PROGRAM OUTLINE International Conference on Engineering Applications of Neural Networks (EANN '96)
Monday, 17 June
 0800 Registration
 0830 Opening
 0845 Room A: Vision (1) / Room B: Control Systems (1)
 1200 --- lunch break ---
 1330 Room A: Vision (2) / Room B: Control Systems (2)
 1630 Room A: Discussion session on Vision / Room B: Discussion session on Control
Tuesday, 18 June
 0830 Room A: Biomedical Engineering / Room B: Mechanical Engineering
 1200 --- lunch break ---
 1330 Room A: Process Engineering / Room B: Robotics
 1500 Chemical Engineering
 1630 Discussion session on Chemical Engineering
Wednesday, 19 June
 0830 Room A: Speech and signal processing / Room B: Metallurgical Engineering
 1030 Room A: Classification systems / Room B: Discussion session on Metallurgy
 1200 --- lunch break ---
 1330 Room A: Hardware Applications / Room B: General Applications
 1600 Hybrid systems
 1800 Closing
The indicated times are approximate and changes are still possible. From payman at ebs330.eb.uah.edu Wed Apr 24 19:12:20 1996 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Wed, 24 Apr 96 18:12:20 CDT Subject: IEEE NNC launches web newsletter Message-ID: <9604242312.AA20688@ebs330> The IEEE Neural Networks Council announces the launching of its newsletter on the World Wide Web.
"CONNECTIONS" (ISSN 1068-1450) will appear quarterly and will be the place for various NNC related news of meetings, conferences, and events, as well as in depth reports on NNC committees and their activities, book reviews, and a technology column overviewing the latest research trends in the field of computational intelligence. Please visit our first issue on the web, in the newsletter section of the NNC Homepage at http://www.ieee.org/nnc Contents, New Series, Vol. 1, No. 1, Spring 1996: NEWS - NNC ExCom: Minutes of the Meeting of March 24, 1996 ...... K. Haines - Report on CIFEr'96 ....................................... R. Golan - Upcoming NNC events ...................................... Ed. - NNC Awards ............................................... M. Hassoun - Homepage news and overview FOCUS - Fuzzy Systems Technical Committee ........................ H. Berenji - Council Personality Profiles ............................. Ed. - New Book Review .......................................... Ed. - NNC Regional Interest Group Report ....................... M. Y. Chow TECHNOLOGY - A Virtual Reality Interface to Complex Neural Network Software Simulations ...................... T.P. Caudell -- Payman Arabshahi Tel : (205) 895-6380 Dept. of Electrical & Computer Eng. Fax : (205) 895-6803 University of Alabama in Huntsville payman at ebs330.eb.uah.edu Huntsville, AL 35899 http://www.eb.uah.edu/ece From N.Sharkey at dcs.shef.ac.uk Fri Apr 26 15:30:25 1996 From: N.Sharkey at dcs.shef.ac.uk (Noel Sharkey) Date: Fri, 26 Apr 96 15:30:25 BST Subject: 3 Research Studentships available Message-ID: <9604261430.AA09596@dcs.shef.ac.uk> PLEASE PASS ON TO ANY FINAL YEAR UNDERGRADUATES OR MASTERS WHO ARE SEEKING FUNDED PHD PLACES. Sorry if you receive this more than once. 3 RESEARCH STUDENTSHIPS IN NEURAL COMPUTING AND ROBOTICS Department of Computer Science University of Sheffield Three funded PhD studentships are available, two from from 1st August, and one from end of September, 1996. The first of these is restricted to British Students only and the other 2 are for students from countries within the European Community. 1. Neural Computing. Projects on any topic within the field of neural computing will be considered. Two areas of particular interest are (a) Improving the reliability of neural computing applications through the use of ensembles of nets and (b) Cognitive modelling, transfer and interference. 2. Autonomous Mobile Robotics. The project would provide an ideal opportunity for a creative student to work on the development of ``intelligent'' behaviour on a mobile robot. Neural computing techniques have already been applied in our lab to develop a number of low level behaviours on a Nomad200. The student would be expected to develop higher level behavioural control. There are a number of different approaches that could be taken. For example: using representations developed at the lower levels to induce higher level behaviours or using a human and animal developmental paradigm. However, nothing is set in stone for this project and a good proposal will go a long way. 3. Pharmaceutical Robotics Aim: The development of a neural computing system for coordinating robot arms in the task of mixing dangerous drugs. This project is in collaboration with the Pharmacy Unit at the Northern General Hospital. Their problem is that they currently employ more than twenty highly-qualified specialist staff to spend a large part of their day involved in the rather tedious task of mixing drugs. 
Since many of the drugs are very dangerous to humans (such as anti-cancer drugs), much of the work has to take place inside a sealed glass case that is accessed by attached gloves (a glove box). The solution is to put robot arms into the cases and let them do most of the work. It should be noted that this is a research project and offers a number of interesting robotics problems. The student would not be expected to develop a commercial system. Further information about neural computing within the Artificial Intelligence and Neural Networks (AINN) research group can be viewed on WWW: http://www.dcs.shef.ac.uk/research/groups/nn (This will not be ready to view until Wednesday, 1st May.) Application forms may be obtained from our PhD admissions secretary, Jill Martin (jill at dcs.shef.ac.uk), or by writing to Ms J. Martin, Department of Computer Science, 211 Portabello St., Sheffield, S1 4DP, S. Yorks, UK. Forms should be accompanied by a short proposal (less than a page) about what the applicant would like to work on, but this does not commit the applicant. From search at idiap.ch Mon Apr 29 07:51:30 1996 From: search at idiap.ch (Account for applications) Date: Mon, 29 Apr 1996 13:51:30 +0200 (MET DST) Subject: Director position available Message-ID: <199604291151.NAA21646@midi.idiap.ch> The Dalle Molle Institute for Perceptive Artificial Intelligence is opening the position of DIRECTOR OF THE INSTITUTE The Dalle Molle Institute for Perceptive Artificial Intelligence (IDIAP) is a private non-profit research institute, founded in 1991 and located in Martigny, Valais, Switzerland. Today, it consists of more than twenty staff members working in the following fields: * Automatic Speech Processing (spoken language understanding, speaker verification/identification), * Artificial Neural Networks (design, training, applications and optical implementation), * Machine Vision (handwriting recognition, lip reading). IDIAP is part of the Dalle Molle Foundation for the Quality of Life and is supported by public-sector partners (City of Martigny, Canton of Valais and Swiss Confederation). The institute has privileged relationships with Geneva University, the Swiss Federal Institute of Technology at Lausanne and the Telecom-PTT. Additional information about IDIAP is available on our WWW page: "http://www.idiap.ch". The position of director of this institute is presently vacant. Candidates should possess a Ph.D. in computer science or a related area. They must have an outstanding research record and proven excellence in leadership, on both the scientific and administrative levels. Alignment of personal research interests with the current research domains of the institute would be an asset. They will be responsible for defining the research policy of the institute and will have to get involved in maintaining the role of IDIAP in both local and national research policy. An excellent mastery of French would be a plus. Salaries will be in accordance with those offered by the Swiss government for equivalent positions. The position of director is available as soon as possible, but not before September 1996. The duration and renewal of contracts are subject to negotiation. A few words about the geographic location: the town of Martigny is located in the south of Switzerland, in the Rhone valley, close to both France and Italy.
It is located in the heart of the scenic Alpine region east of lake Geneva, with some of the best skiing and hiking in Europe. The town of Martigny, well connected by rail and highway to the rest of Switzerland, offers a large variety of cultural and artistic activities. To apply for this position, please send before June 15, 1996, by electronic mail to "search at idiap.ch" (in plain ASCII, TeX, FrameMaker-MIF, Word-RTF or PostScript): * a curriculum vitae * a list of publications * a description of the research program that the candidate wishes to pursue * the names and addresses of three personal references. A paper-form application or request for further information can be sent to Secretariat IDIAP CP 592 Rue du Simplon 4 CH-1920 Martigny Switzerland Phone: +41 26 21 77 11 Fax: +41 26 21 77 12
From timxb at pax.Colorado.EDU Mon Apr 29 12:34:57 1996 From: timxb at pax.Colorado.EDU (Brown Tim) Date: Mon, 29 Apr 1996 10:34:57 -0600 Subject: Research Assistantships at University of Colorado at Boulder Message-ID: <199604291634.KAA03073@pax.Colorado.EDU> It's not too late to apply. University of Colorado at Boulder The Department of Electrical and Computer Engineering at CU Boulder has one and possibly more graduate research assistantships available for work in the following areas: -------- Adaptive Control of Broadband Communication Networks -------- Modern data traffic sources are characterized by many challenging features from a network control standpoint: difficult to model and analyze queueing properties; inter-source correlations; slowly varying statistical properties; diverse heterogeneous source types; misspecified traffic descriptors; and inadvertent network traffic shaping. We seek to address problems such as resource allocation, routing, provisioning, and network design for such traffic using methods based on statistical classification techniques that use historical data as to what were acceptable combinations of carried traffic and what were not. Technical problems include efficient representation and storage of historical data; choice and modification strategy of the classifier model; noise on the data; confidence intervals on decisions; directed and undirected exploration of the decision space; filtering out uninformative data; and implementations. -------- Low-Power Neural Network Architectures for Wireless -------- Analog neural networks have demonstrated signal processing power dissipations orders of magnitude less than comparable digital implementations. This is promising for battery-limited mobile and wireless applications. Key to harnessing this potential is overcoming noise, dynamic range, and precision limitations inherent to the analog processing. We seek to improve the scope and robustness of neural algorithms and architectures, including: techniques for mapping software neural solutions into non-ideal hardware; robust algorithms for learning directly in hardware; and developing neural design methodologies beyond Hopfield energy functions for optimization problems. Research will be guided by mobile signal processing applications such as equalization, vector quantization, and adaptive filtering. Opportunities exist for research in algorithms, architectures, and also hardware implementations. -------- Design Optimization for Low Power Communication -------- Communication systems are designed by separately optimizing components such as RF front ends, error correcting codes, and diversity strategies. The goals of the individual designs (such as designing for capacity in error correcting codes) may not always match the global objective, and this approach ignores the coupling between design choices in each module.
Simple yet non-intuitive examples show that dramatic power reductions are possible with a holistic design. We seek to address this at two levels. As a static design problem, formulating the problem as an objective function is conceptually straightforward, but the wide variety of continuous/discrete, linear/non-linear, and deterministic/stochastic constraints and variables requires conventional techniques to be supplemented by more robust methods such as genetic algorithms. At a dynamic level, the communication design is not required to be static: flexible DSP hardware allows different strategies to be tried as a function of the current communication environment (e.g. power control). Technical problems include formalizing a design problem that crosses many research domains; optimization over multiple variable types; and methods for making decisions with uncertainty and incomplete knowledge in dynamic environments. For more info contact Prof. Tim Brown, timxb at colorado.edu (303) 492-1630 From zoubin at cs.toronto.edu Mon Apr 29 15:00:27 1996 From: zoubin at cs.toronto.edu (Zoubin Ghahramani) Date: Mon, 29 Apr 1996 15:00:27 -0400 Subject: Thesis on sensorimotor integration available Message-ID: <96Apr29.150027edt.985@neuron.ai.toronto.edu> The following PhD thesis is available at http://www.cs.utoronto.ca/~zoubin/ or ftp://ftp.cs.toronto.edu/pub/zoubin/thesis.ps.Z ---------------------------------------------------------------------- Computation and Psychophysics of Sensorimotor Integration Zoubin Ghahramani Department of Brain & Cognitive Sciences Massachusetts Institute of Technology ABSTRACT All higher organisms are able to integrate information from multiple sensory modalities and use this information to select and guide movements. In order to do this, the central nervous system (CNS) must solve two problems: (1) converting information from distinct sensory representations into a common coordinate system, and (2) integrating this information in a sensible way. This dissertation proposes a computational framework, based on statistics and information theory, to study these two problems. The framework suggests explicit models for both the coordinate transformation and integration problems, which are tested through human psychophysics. The experiments in Chapter 2 suggest that: (1) spatial information from the visual and auditory systems is integrated so as to minimize the variance in localization; (2) when the relation between visual and auditory space is artificially remapped, the spatial pattern of auditory adaptation can be predicted from its localization variance. These studies suggest that multisensory integration and intersensory adaptation are closely related through the principle of minimizing localization variance. This principle is used to model sensorimotor integration of proprioceptive and motor signals during arm movements (Chapter 3). The temporal propagation of errors in estimating the hand's state is captured by the model, providing support for the existence of an internal model in the CNS that simulates the dynamic behavior of the arm. The coordinate transformation problem is examined in the visuomotor system, which mediates reaching to visually-perceived objects (Chapter 4). The pattern of changes induced by a local remapping of this transformation suggests a representation based on units with large functional receptive fields.
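(For readers unfamiliar with the minimum-variance principle above, a standard sketch, not taken from the thesis: given two unbiased, independent estimates \hat{x}_1 and \hat{x}_2 of a target location, with variances \sigma_1^2 and \sigma_2^2, the linear combination

    \hat{x} = w \hat{x}_1 + (1-w) \hat{x}_2

has least variance when w = \sigma_2^2 / (\sigma_1^2 + \sigma_2^2), i.e. each cue is weighted by its relative reliability, and the resulting variance \sigma_1^2 \sigma_2^2 / (\sigma_1^2 + \sigma_2^2) is smaller than that of either cue alone.)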
Finally, the problem of converting information from disparate sensory representations into a common coordinate system is addressed computationally (Chapter 5). An unsupervised learning algorithm is proposed based on the principle of maximizing mutual information between two topographic maps. What results is an algorithm which develops multiple, mutually-aligned topographic maps based purely on correlations between the inputs to the different sensory modalities. (212 pages, 2.6 Mb, formatted for double-sided printing). From stefano at kant.irmkant.rm.cnr.it Tue Apr 30 11:21:36 1996 From: stefano at kant.irmkant.rm.cnr.it (Stefano Nolfi) Date: Tue, 30 Apr 1996 15:21:36 GMT Subject: paper available:Evolving non-trivial behaviors on real robots.. Message-ID: <9604301521.AA15270@kant.irmkant.rm.cnr.it> Paper available via WWW / FTP: Keywords: Evolutionary Robotics, Behavior Based Robotics, Adaptive Behaviors, Neural Networks, Genetic Algorithms. ------------------------------------------------------------------------------ EVOLVING NON-TRIVIAL BEHAVIORS ON REAL ROBOTS: A GARBAGE COLLECTING ROBOT Stefano Nolfi Institute of Psychology, C.N.R., Rome. Recently, a new approach involving a form of simulated evolution has been proposed to build autonomous robots. However, it is still not clear if this approach is adequate for real-life problems. In this paper we show how control systems that perform a non-trivial sequence of behaviors can be obtained with this methodology by "canalizing" the evolutionary process in the right direction. In the experiment described in the paper, a mobile robot was successfully trained to keep an arena surrounded by walls clear by locating, recognizing, and grasping "garbage" objects and by taking collected objects outside the arena. The controller of the robot was evolved in simulation and then downloaded and tested on the real robot. We also show that while a given amount of supervision may canalize the evolutionary process in the right direction, the addition of unnecessary constraints can delay the evolution of the desired behavior. http://kant.irmkant.rm.cnr.it/public.html or ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5) ftp-file : /pub/econets/nolfi.gripper2.ps.Z for the homepage of our research group with most of our publications available online and pointers to ALIFE resources see: http://kant.irmkant.rm.cnr.it/gral.html ---------------------------------------------------------------------------- Stefano Nolfi Institute of Psychology, C.N.R. Viale Marx, 15 - 00137 - Rome - Italy voice: 0039-6-86090231 fax: 0039-6-824737 e-mail: stefano at kant.irmkant.rm.cnr.it www: http://kant.irmkant.rm.cnr.it/nolfi.html From te at psyc.nott.ac.uk Tue Apr 30 09:17:29 1996 From: te at psyc.nott.ac.uk (Terry Elliott) Date: Tue, 30 Apr 1996 14:17:29 +0100 (BST) Subject: Research Studentship Available Message-ID: ****** PLEASE POST ********* PLEASE POST ********* PLEASE POST ****** Departments of Psychology and Life Science (University of Nottingham, Nottingham, U.K.) Research Studentship in Neuroscience A postgraduate studentship leading to a Ph.D. degree is available from September, 1996 in the area of neural development and plasticity, under the supervision of Professors Shadbolt (Psychology) and Usherwood (Life Science). The research will test predictions derived from recent computer models of neural plasticity. Candidates should have a good first degree in a relevant discipline.
Informal enquiries may be addressed to Professor Nigel Shadbolt (tel.: +44 (0)115 951 5317; e-mail: nrs at psyc.nott.ac.uk). Candidates should send a detailed C.V. giving the names of two referees to: Mrs Jeannie Tuck, Postgraduate School, Department of Psychology, University of Nottingham, Nottingham, NG7 2RD, U.K. The closing date is 30th May, 1996.

From dnoelle at cs.ucsd.edu Tue Apr 30 15:52:02 1996 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Tue, 30 Apr 96 12:52:02 -0700 Subject: CogSci96 - May 1st Is Early Registration Deadline Message-ID: <9604301952.AA17084@hilbert>

Eighteenth Annual Conference of the COGNITIVE SCIENCE SOCIETY July 12-15, 1996 University of California, San Diego La Jolla, California

CALL FOR PARTICIPATION The Annual Cognitive Science Conference began with the La Jolla Conference on Cognitive Science in August of 1979. The organizing committee of the Eighteenth Annual Conference would like to welcome members home to La Jolla. We plan to recapture the pioneering spirit of the original conference, extending our welcome to fields on the expanding frontier of Cognitive Science, including Artificial Life, Cognitive and Computational Neuroscience, Evolutionary Psychology, as well as the core areas of Anthropology, Computer Science, Linguistics, Neuroscience, Philosophy, and Psychology. The conference will feature plenary addresses by invited speakers, invited symposia by leaders in their fields, technical paper sessions, a poster session, a banquet, and a Blues Party. San Diego is home to the world-famous San Diego Zoo and Wild Animal Park, Sea World, the historic all-wooden Hotel Del Coronado, beautiful beaches, mountain areas and deserts; it is a short drive from Mexico, and features a high Cappuccino Index. Bring the whole family and stay a while!

PLENARY SESSIONS "Controversies in Cognitive Science: The Case of Language" Stephen Crain (UMD College Park) & Mark Seidenberg (USC) Moderated by Paul Smolensky (Johns Hopkins University) "Tenth Anniversary of the PDP Books" Geoff Hinton (Toronto), Jay McClelland (CMU), & Dave Rumelhart (Stanford) "Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action" Adele Diamond (MIT) "Reconstructing Consciousness" Paul Churchland (UCSD)

TRAVEL & ACCOMMODATIONS United Airlines is the official airline of the 1996 Cognitive Science Conference. Attendees flying with United can receive a 5% discount off of any published United or United Express round trip fare (to San Diego) in effect when the ticket is purchased, subject to all applicable restrictions. Attendees flying with United can receive a 10% discount off of applicable BUA fares in effect when the ticket is purchased 7 days in advance. To get your discount, be sure to give your travel agent the following information: * "Meeting ID# 557NS for the Cognitive Science Society Meeting" * United's Meeting Desk phone number is (800) 521-4041. Alternatively, you may order your tickets direct from United's Meeting Desk, using the same reference information as above. Purchasers of United tickets to the conference will be eligible for a drawing (to be held at the conference) in which two round trip tickets will be given away -- so don't throw away your boarding pass! If you are flying to San Diego, you will be arriving at Lindbergh Field. If you don't rent a car, transportation from the airport to the UCSD area will cost (not including tip) anywhere from $15.00 (for a seat on a shuttle/van) to $35.00 (for a taxi).
We have arranged for special rates at two of the hotels nearest to the UCSD campus. In addition, on campus apartments can be rented at less expense. All rooms are subject to availability and hotel rates are only guaranteed up to the dates specified, so reserve early. None of the rates quoted below (unless explicitly stated) include tax, which is currently 10.5 percent.

The La Jolla Marriott is located approximately 2 miles from campus. Single and double rooms are available at $92.00 per night, when reserved before June 21st. Included in the rate is a morning and evening shuttle service to and from campus (running for one hour periods, on July 13th, 14th, and 15th only). The hotel has parking spaces, available at $7 per day or $10 per day with valet service. On campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. There is also city bus service (fare is about $1.50 per ride) to and from campus which passes within 1 block of the hotel. Reservations can be made by calling the hotel at (619) 587-1414 or (800) 228-9290. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates. Arrival after 6:00 P.M. requires a first night's deposit, or guarantee with a major credit card.

The La Jolla Radisson is located approximately 1/2 mile from campus. Single and double rooms are available at $75.00 per night, when reserved before June 12th. Included in the rate is a morning and evening shuttle service to and from campus, although walking is also very feasible. Parking is available and complimentary. On campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. The first night's room charge (+ tax) is due by June 12th. Reservations can be made by calling Radisson Reservations at (800) 333-3333. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates.

There are a limited number of on-campus apartments available for reservation as a 4 night package, from July 12th through July 16th. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). (Checking in a day early is $45 extra for a single room or $36 for a double.) On campus parking is complimentary with this package. These apartments may be reserved using the conference registration form.

REGISTRATION INFORMATION There are three ways to register for the 1996 Cognitive Science Conference: * ONLINE REGISTRATION -- You may fill out and electronically submit the online registration form, which may be found on the conference web page at "http://www.cse.ucsd.edu/events/cogsci96/". This is the preferred method of registration. (You must pay registration fees with a Visa or MasterCard in order to use this option.) * EMAIL REGISTRATION -- You may fill out the plain text (ASCII) registration form, which appears below, and send it via electronic mail to "cogsci96reg at cs.ucsd.edu". (You must pay registration fees with a Visa or MasterCard in order to use this option.)
* POSTAL REGISTRATION -- You may download a copy of the PostScript registration form from the conference home page (or extract the plain text version, below), print it on a PostScript printer, fill it out with a pen, and send it via postal mail to: CogSci'96 Conference Registration Cognitive Science Department - 0515 University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0515 (Under this option, you may enclose payment of registration fees in U. S. dollars in the form of a check or money order, or you may pay these fees with a Visa or MasterCard. Please make checks payable to: The Regents of the University of California.) For more information, visit the conference web page at "http://www.cse.ucsd.edu/events/cogsci96". Please direct questions and comments to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776. Edwin Hutchins and Walter Savitch, Conference Chairs John D. Batali, Local Arrangements Chair Garrison W. Cottrell, Program Chair ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== Cognitive Science 1996 Registration Form ---------------------------------------- Your Full Name : _____________________________________________________ Your Postal Address : ________________________________________________ (including zip/postal ________________________________________________ code and country) ________________________________________________ ________________________________________________ Your Telephone Number (Voice) : ______________________________________ Your Telephone Number (Fax) : ______________________________________ Your Internet Electronic Mail Address (e.g., dnoelle at cs.ucsd.edu) : ______________________________________________________________________ REGISTRATION FEES : Please select the appropriate registration option from the menu below by placing an "X" in the corresponding blank on the left. Note that the Cognitive Science Society is offering a special deal to individuals who opt to join the Society simultaneously with conference registration. The "New Member" package includes conference fees and first year's membership dues for only $10 more than the nonmember conference cost. Registration fees received after May 1st are $20 higher ($10 higher for students) than fees received before May 1st. Be sure to register early to take advantage of the lower fee rates. _____ Registration, Member -- $120 ($140 after May 1st) _____ Registration, Nonmember -- $145 ($165 after May 1st) _____ Registration, New Member -- $155 ($175 after May 1st) _____ Registration, Student Member -- $85 ($95 after May 1st) _____ Registration, Student Nonmember -- $100 ($110 after May 1st) _____ Registration, New Student Member -- $115 ($125 after May 1st) CONFERENCE BANQUET : Tickets to the conference banquet are *not* included in the registration fees, above. Banquet tickets are $35 per person. (You may bring guests.) Number Of Banquet Tickets Desired ($35 each): _____ _____ Omnivorous _____ Vegetarian CONFERENCE SHIRTS : Conference T-Shirts are *not* included in the registration fees, above. These are $10 each. Number Of T-Shirts Desired ($10 each): _____ UCSD ON-CAMPUS APARTMENTS : There are a limited number of on-campus apartments available for reservation as a 4 night package, from July 12th through July 16th. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). 
The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). (Checking in a day early is $45 extra for a single room or $36 for a double.) On campus parking is complimentary with this package. Off-campus accommodations in local hotels are also available, but you will need to make reservations by contacting the hotel of interest directly. If you will be staying off-campus, please skip this portion of the registration form. On-campus housing reservations must be received by May 1st, 1996. Please include the cost of on-campus housing in the total conference cost listed at the bottom of this form. Select the housing plan desired by placing an "X" in the appropriate blank on the left: _____ UCSD Housing and Meal Plan (Single Room) -- $227 per person _____ UCSD Housing and Meal Plan (Double Room) -- $191 per person Arrival Date And Time : ____________________________________________ Departure Date And Time : ____________________________________________ If you reserved a double room above, please indicate your roommate preference below: _____ Please assign a roommate to me. I am _____ female _____ male. _____ I will be sharing this room with a guest who is not registered for the conference. I will include $382 ($191 times 2) in the total conference cost listed at the bottom of this form. _____ I will be sharing this room with another conference attendee. I will include $191 in the total conference cost listed at the bottom of this form. My roommate will submit her housing fee along with her registration form. My roommate's full name is: ______________________________________________________________ ASL TRANSLATION : American Sign Language (ASL) translators will be available for a number of conference events. The number of translated events will be, in part, a function of the number of participants in need of this service. Please indicate below if you will require ASL translation of conference talks. _____ I will require ASL translation. Comments To The Registration Staff : ______________________________________________________________________ ______________________________________________________________________ ______________________________________________________________________ Please sum your conference registration fees, the cost of banquet tickets and t-shirts, and on-campus housing costs, and place the total below. To register by electronic mail, payment must be by Visa or MasterCard only. TOTAL : _$____________ Bill to: _____ Visa _____ MasterCard Number : ___________________________________________ Expiration Date: ___________________________________ Registration fees (including on-campus housing costs) will be fully refunded if cancellation is requested prior to May 1st. If registration is cancelled between May 1st and June 1st, 20% of paid fees will be retained by the Society to cover processing costs. No refunds will be granted after June 1st. When complete, send this form via email to "cogsci96reg at cs.ucsd.edu". Please direct questions to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776. ====================================================================== PLAIN TEXT REGISTRATION FORM ======================================================================
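A worked example of the final sum (the example is mine; the figures come from the fee schedule above): a student who is already a Society member and registers before May 1st ($85), orders one banquet ticket ($35), and reserves a shared double room on campus ($191) would enter TOTAL: $85 + $35 + $191 = $311.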
From WHYTE at VM.TEMPLE.EDU Tue Apr 2 18:12:10 1996 From: WHYTE at VM.TEMPLE.EDU (WHYTE@VM.TEMPLE.EDU) Date: Tue, 02 Apr 96 18:12:10 EST Subject: Post-doctoral fellowship opportunity Message-ID: <960402.181713.EST.WHYTE@VM.TEMPLE.EDU>

The Moss Rehabilitation Research Institute at MossRehab Hospital in Philadelphia is seeking post-doctoral fellows for a 2-year fellowship. There are several theoretical and applied topics in our research laboratories that would benefit from collaboration with someone with experience in neural network modelling. Potential topics include: simulation of language comprehension and production systems and the effects of "lesioning" them, in comparison to the data from patients with acquired language disorders; modelling the types of postural and other motoric compensations made by individuals with focal weakness (as in polio and similar disorders); and prediction of functional outcomes in rehabilitation given a variety of complex and interacting impairments. Interested individuals should send a resume and cover letter to: John Whyte, M.D., Ph.D. Moss Rehabilitation Research Institute 1200 W. Tabor Rd. Phila. PA 19141 Fax: 215-456-9514

From lksaul at psyche.mit.edu Wed Apr 3 13:18:01 1996 From: lksaul at psyche.mit.edu (Lawrence Saul) Date: Wed, 3 Apr 96 13:18:01 EST Subject: paper announcement Message-ID: <9604031818.AA29806@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-file: pub/lksaul/mdplc.ps.Z WWW-host: http://web.mit.edu/~lksaul/

---------------------------------------------------- The following paper, to appear at COLT'96, is now available on-line. It contains a statistical mechanical analysis of a simple problem in decision and control. ---------------------------------------------------- Title: Learning curve bounds for a Markov decision process with undiscounted rewards Authors: Lawrence Saul and Satinder Singh Abstract: The goal of learning in Markov decision processes is to find a policy that yields the maximum expected return over time. In problems with large state spaces, computing these returns directly is not feasible; instead, the agent must estimate them by stochastic exploration of the state space. Using methods from statistical mechanics, we study how the agent's performance depends on the allowed exploration time. In particular, for a simple control problem with undiscounted rewards, we compute a lower bound on the return of policies that appear optimal based on imperfect statistics. This is done in the thermodynamic limit where the exploration time and the size of the state space tend to infinity at a fixed ratio. ----------------------------------------------------

From terryd at dali.cit.gu.edu.au Wed Apr 3 21:33:06 1996 From: terryd at dali.cit.gu.edu.au (Terry Dartnall) Date: Thu, 4 Apr 1996 12:33:06 +1000 Subject: What is a "hybrid" model? Message-ID: <199604040233.AA18846@dali.cit.gu.edu.au>

Steve Thanks for that useful overview. You say >The sudden learning was demonstrated in studies of human problem solving > where it was eventually dubbed the "Aha!" effect. (I believe that there >is a book by that name, but I don't have that reference.) In animal >learning, it is known as one-trial learning. >. >. > As to whether one-trial learning or the Aha! effect genuinely constitutes >a distinct *type* of learning ... I know pretty much nothing about the area, but I would have thought that one-trial learning and the "Aha!" effect were different. I learnt not to stick my fingers in a power socket when I was a kid - and it only needed one trial!
- but I wouldn't have thought this was an "Aha!" situation. (It was a "Yow!" situation.) This applies to animals other than people, I'm sure. And you can have the "Aha!" effect after many trials, as with Koehler's apes. In fact I would have thought this is when you usually get it - after lots of frustrating failures. So one-trial learning is neither necessary nor sufficient for the "Aha!" effect. Isn't the "sudden learning problem" that, after a number of unsuccessful trials, the answer suddenly comes to us? Best wishes Terry Dartnall ============================================== Terry Dartnall School of Computing and Information Technology Griffith University Nathan Brisbane Queensland 4111 Australia Phone: 61-7-3875 5020 Fax: 61-7-3875-5051 ==============================================

From tibs at utstat.toronto.edu Wed Apr 3 21:48:00 1996 From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu) Date: Wed, 3 Apr 96 21:48 EST Subject: New paper available Message-ID:

Bias, variance and prediction error for classification rules Robert Tibshirani University of Toronto We study the notions of bias and variance for classification rules. Following Efron (1978) and Breiman (1996) we develop a decomposition of prediction error into its natural components. Then we derive bootstrap estimates of these components and illustrate how they can be used to describe the error behaviour of a classifier in practice. In the process we also obtain a bootstrap estimate of the error of a ``bagged'' classifier. Available at: http://utstat.toronto.edu/reports/tibs/biasvar.ps ftp: //utstat.toronto.edu/pub/tibs/biasvar.ps Comments welcome! ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Rob Tibshirani, Dept of Preventive Med & Biostats, and Dept of Statistics Univ of Toronto, Toronto, Canada M5S 1A8. Phone: 416-978-4642 (PMB), 416-978-0673 (stats). FAX: 416 978-8299 computer fax 416-978-1525 (please call or email me to inform) tibs at utstat.toronto.edu. ftp: //utstat.toronto.edu/pub/tibs http://www.utstat.toronto.edu/~tibs +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

From sylee at eekaist.kaist.ac.kr Thu Apr 4 00:28:59 1996 From: sylee at eekaist.kaist.ac.kr (Soo-Young Lee) Date: Thu, 4 Apr 1996 14:28:59 +0900 Subject: Graduate Scholarship Message-ID: <199604040528.OAA25599@eekaist.kaist.ac.kr>

GRADUATE STUDENT POSITION A graduate student position is available at the Department of Electrical Engineering at Korea Advanced Institute of Science and Technology (KAIST) to study neural network modelling, speech and control applications, and hardware (VLSI and optics) implementation. A Bachelor's degree is required for Master's course students, and a Master's degree is required for Ph.D. course students. The positions are available from September, 1996. KAIST is the top-ranked research-oriented engineering school in Korea, which belongs to the Ministry of Science and Engineering. The Department of Electrical Engineering consists of 48 professors and about 500 graduate students. The annual research fund is more than 15 million US dollars. Full scholarship may be provided. For those from other countries we also have Korean language classes. Applicants should send their CV, list of publications, a letter describing their interest, and name, address and phone number of two references to: Prof.
Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South)

From skemp at gibbs.oit.unc.edu Thu Apr 4 01:45:35 1996 From: skemp at gibbs.oit.unc.edu (Steve Kemp) Date: Thu, 4 Apr 1996 01:45:35 -0500 Subject: What is a "hybrid" model? In-Reply-To: <199604040233.AA18846@dali.cit.gu.edu.au> Message-ID:

On Thu, 4 Apr 1996, Terry Dartnall wrote: > > Thanks for that useful overview. You say > > >The sudden learning was demonstrated in studies of human problem solving > > where it was eventually dubbed the "Aha!" effect. (I believe that there > >is a book by that name, but I don't have that reference.) In animal > >learning, it is known as one-trial learning. > > I know pretty much nothing about the area, but I would have thought that > one-trial learning and the "Aha!" effect were different. I learnt not to > stick my fingers in a power socket when I was a kid - and it only needed one > trial! - but I wouldn't have thought this was an "Aha!" situation. (It was a > "Yow!" situation.) This applies to animals other than people, I'm sure. >

A good point. The original post was contrasting the sudden learning found with the Aha! effect with what the poster called "gradual" learning. If the distinction (that makes for the two types) is between gradual and sudden learning, then one-trial learning, while perhaps distinct from insight learning, seems to be sudden rather than gradual. That is, there are other non-gradual types of learning besides insight learning.

> And > you can have the "Aha!" effect after many trials, as with Koehler's apes. In > fact I would have thought this is when you usually get it - after lots of > frustrating failures. So one-trial learning is neither necessary nor > sufficient for the "Aha!" effect. > > Isn't the "sudden learning problem" that, after a number of unsuccessful > trials, the answer suddenly comes to us? >

Another way of looking at it is that if insight learning occurs on the very first trial, it is very hard to distinguish such a case from one-trial learning, at least from an empirical perspective. The banana problem was quite a challenge for the mental capacity of the apes involved. The power socket "problem" was quite easy for you. If we were to suppose that certain types of learning *are* sudden, then doesn't it make sense that the sudden onset of learning would occur on an early trial for "simple" or "easy" problems and on a later trial for more "complex" or "harder" problems? In that case, the Aha! effect would just be the natural result of being presented with a difficult problem. In fact, in the mathematical learning theory literature, a number of Markov-based models were constructed after just such an assumption. It was assumed that all learning was "all-or-none" in character. Apparent gradual change was modeled as "random" correct guessing by subjects who had not yet "learned," plus artifacts of empirical measures used by experimenters that averaged across subjects or trials where learning had occurred in some instances and not in others. A remarkably large number of learning phenomena, including many apparently gradual ones, were successfully modeled (a toy simulation of this idea is sketched just below). Finally, "sudden" or "gradual" is measured with respect to the number of trials. It is essential to Kohler's conception that some sort of ongoing internal "contemplative" process was occurring all through the process, during and between trials.
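To make the all-or-none idea concrete, here is a toy simulation in Python (my own sketch with arbitrary parameter values, not a model from the literature cited above). Each simulated subject guesses at chance level until, with some fixed probability per trial, it suddenly "learns" and responds correctly ever after:

    import random

    # All-or-none Markov learning model: before learning, a subject guesses
    # correctly with probability g; on each trial it "learns" with
    # probability c, after which it is always correct.
    def simulate(n_subjects=1000, n_trials=30, g=0.25, c=0.15, seed=0):
        rng = random.Random(seed)
        learned = [False] * n_subjects
        mean_correct = []
        for t in range(n_trials):
            correct = 0
            for i in range(n_subjects):
                if not learned[i] and rng.random() < c:
                    learned[i] = True      # sudden, one-shot transition
                if learned[i] or rng.random() < g:
                    correct += 1           # knows the answer, or guessed right
            mean_correct.append(correct / n_subjects)
        return mean_correct

    print(["%.2f" % p for p in simulate()])   # rises smoothly from ~0.25 toward 1.0

Every individual record here is a step function, yet the group average is a smooth curve -- exactly the averaging artifact described above.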
More trials, more rapidly presented, allow a gradual process to appear gradual. If the subject is gradually catching on and we present fewer trials less often, then the gradual learning may appear sudden because enough learning occurred in the long interval between trials to become noticeable all at once on the following trial. In sum, my point is that it is difficult, if not impossible, to establish the existence or non-existence of genuinely different *types* of learning solely from behavioral phenomena, however augmented by theory or mathematics. One of the truly exciting things about the recent advances in the various technologies of brain monitoring is that they provide a second type of empirical evidence that can be correlated with behavioral evidence to discover if apparently distinct learning phenomena involve genuinely different brain mechanisms. regards, steve K

Steven M. Kemp | Department of Psychology | email: steve_kemp at unc.edu Davie Hall, CB# 3270 | University of North Carolina | Chapel Hill, NC 27599-3270 | fax: (919) 962-2537

From horne at research.nj.nec.com Thu Apr 4 11:46:25 1996 From: horne at research.nj.nec.com (Bill Horne) Date: Thu, 4 Apr 1996 11:46:25 -0500 Subject: Spectral Radius TR Message-ID: <9604041146.ZM26727@telluride>

The following technical report is now available Lower bounds for the spectral radius of a matrix Bill Horne NEC Research Institute 4 Independence Way Princeton, NJ 08540 NECI Technical Report 95-14 In this paper we develop lower bounds for the spectral radius of symmetric, skew-symmetric, and arbitrary real matrices. Our approach utilizes the well-known Leverrier-Faddeev algorithm for calculating the coefficients of the characteristic polynomial of a matrix in conjunction with a theorem by Lucas which states that the critical points of a polynomial lie within the convex hull of its roots. Our results generalize and simplify a proof recently published by Tarazaga for a lower bound on the spectral radius of a symmetric positive definite matrix. In addition, we provide new lower bounds for the spectral radius of skew-symmetric matrices. We apply these results to a problem involving the stability of fixed points in recurrent neural networks. The report can be obtained from my homepage http://www.neci.nj.nec.com/homepages/horne.html Or directly at ftp://ftp.nj.nec.com/pub/horne/spectral.ps.Z -- Bill Horne Senior Research Associate Computer Science Division NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 horne at research.nj.nec.com PHN: (609) 951-2676 FAX: (609) 951-2482 http://www.neci.nj.nec.com/homepages/horne.html

From hochreit at informatik.tu-muenchen.de Thu Apr 4 11:45:52 1996 From: hochreit at informatik.tu-muenchen.de (Josef Hochreiter) Date: Thu, 4 Apr 1996 18:45:52 +0200 Subject: Flat Minima Message-ID: <96Apr4.184602+0200_met_dst.116186+506@papa.informatik.tu-muenchen.de> FTP-host: flop.informatik.tu-muenchen.de (131.159.8.35) FTP-filename: /pub/articles-etc/hochreiter.fm.ps.gz

FLAT MINIMA Sepp Hochreiter Juergen Schmidhuber To appear in Neural Computation (accepted 1996) 38 pages, 154 K compressed, 463 K uncompressed We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a ``flat'' minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to ``simple'' networks and low expected overfitting.
The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a ``good'' weight prior - instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Experiments with feedforward and recurrent nets are described. In applications to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, ``optimal brain surgeon'' / ``optimal brain damage''. We also provide pseudo code of the algorithm (omitted from the NC-version). To obtain a copy, cut and paste one of these: netscape http://www7.informatik.tu-muenchen.de/~hochreit/pub.html netscape http://www.idsia.ch/~juergen/onlinepub.html Sepp Hochreiter, TUM Juergen Schmidhuber, IDSIA P.S.: Info on recent IDSIA postdoc job opening: http://www.idsia.ch/~juergen/postdoc.html

From thrun+ at heaven.learning.cs.cmu.edu Thu Apr 4 22:10:54 1996 From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu) Date: Thu, 4 Apr 96 22:10:54 EST Subject: Book announcement: EBNN, Lifelong Learning Message-ID:

I have the pleasure of announcing the following book. EXPLANATION-BASED NEURAL NETWORK LEARNING: A Lifelong Learning Approach Sebastian Thrun Carnegie Mellon University & University of Bonn published by Kluwer Academic Publishers ---------------------------------------------------------------------- Lifelong learning addresses situations in which a learner faces a series of different learning tasks, providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. This book describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess. ``The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm.'' -- from the Foreword by Tom M. Mitchell ---------------------------------------------------------------------- FOREWORD by Tom Mitchell ix PREFACE xi 1 INTRODUCTION 1 1.1 Motivation 1 1.2 Lifelong Learning 3 1.3 A Simple Complexity Consideration 8 1.4 The EBNN Approach to Lifelong Learning 13 1.5 Overview 16 2 EXPLANATION-BASED NEURAL NETWORK LEARNING 19 2.1 Inductive Neural Network Learning 20 2.2 Analytical Learning 27 2.3 Why Integrate Induction and Analysis?
31 2.4 The EBNN Learning Algorithm 33 2.5 A Simple Example 39 2.6 The Relation of Neural and Symbolic Explanation-Based Learning 43 2.7 Other Approaches that Combine Induction and Analysis 45 2.8 EBNN and Lifelong Learning 47 3 THE INVARIANCE APPROACH 49 3.1 Introduction 49 3.2 Lifelong Supervised Learning 50 3.3 The Invariance Approach 55 3.4 Example: Learning to Recognize Objects 59 3.5 Alternative Methods 74 3.6 Remarks 90 4 REINFORCEMENT LEARNING 93 4.1 Learning Control 94 4.2 Lifelong Control Learning 98 4.3 Q-Learning 102 4.4 Generalizing Function Approximators and Q-Learning 111 4.5 Remarks 125 5 EMPIRICAL RESULTS 131 5.1 Learning Robot Control 132 5.2 Navigation 133 5.3 Simulation 141 5.4 Approaching and Grasping a Cup 146 5.5 NeuroChess 152 5.6 Remarks 175 6 DISCUSSION 177 6.1 Summary 177 6.2 Open Problems 181 6.3 Related Work 185 6.4 Concluding Remarks 192 A AN ALGORITHM FOR APPROXIMATING VALUES AND SLOPES WITH ARTIFICIAL NEURAL NETWORKS 195 A.1 Definitions 196 A.2 Network Forward Propagation 196 A.3 Forward Propagation of Auxiliary Gradients 197 A.4 Error Functions 198 A.5 Minimizing the Value Error 199 A.6 Minimizing the Slope Error 199 A.7 The Squashing Function and its Derivatives 201 A.8 Updating the Network Weights and Biases 202 B PROOFS OF THE THEOREMS 203 C EXAMPLE CHESS GAMES 207 C.1 Game 1 207 C.2 Game 2 219 REFERENCES 227 LIST OF SYMBOLS 253 INDEX 259 ---------------------------------------------------------------------- More information concerning this book: http://www.cs.cmu.edu/~thrun/papers/thrun.book.html http://www.informatik.uni-bonn.de/~thrun/papers/thrun.book.html

From goldfarb at unb.ca Fri Apr 5 10:25:10 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Fri, 5 Apr 1996 11:25:10 -0400 (AST) Subject: What is a "hybrid" model? In-Reply-To: Message-ID:

On Thu, 4 Apr 1996, Steve Kemp wrote: > learning. If the distinction (that makes for the two types) is between > gradual and sudden learning, then one-trial learning, while perhaps > distinct from insight learning, seems to be sudden rather than gradual. > That is, there are other non-gradual types of learning besides insight > learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . > In sum, my point is that it is difficult, if not impossible, to establish > the existence or non-existence of genuinely different *types* of learning > solely from behavioral phenomena, however augmented by theory or > mathematics.

In view of this, why then do most of us ignore the scientific experience of the last four centuries, which strongly suggests scientific parsimony (in this case, one basic learning "mechanism")? Are we ready (i.e. adequately "educated") to deal with the greatest scientific challenge of cognitive science? Lev Goldfarb Tel: 506-453-4566 Fax: 506-453-3566 http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From nq6 at columbia.edu Fri Apr 5 12:29:16 1996 From: nq6 at columbia.edu (Ning Qian) Date: Fri, 5 Apr 1996 12:29:16 -0500 (EST) Subject: postdoc position at Columbia Message-ID: <199604051729.MAA07569@merhaba.cc.columbia.edu>

Postdoctoral Position in Computational Vision Center for Neurobiology and Behavior Columbia University New York, NY A postdoctoral fellowship position in computational neuroscience is available immediately for a recent Ph.D. The postdoc will participate in an NIMH-funded project that applies mathematical analyses and computer simulations to investigate the neural mechanisms of stereoscopic depth perception and motion-stereo interactions.
Opportunities for modeling other neural systems are also available. The details of our research interests and the PostScript files of some of our publications can be found at the web site listed below. Other systems neuroscience faculty members in the Center with closely related research interests include Drs. Vincent P. Ferrera, Claude P. Ghez, John Martin and Irving Kupfermann. The funding for the position is available for two years with the possibility of renewal. Applicants should have a strong background in mathematics and computational modeling (in the Unix/X-windows/C environment). Previous experience in vision research is desirable but not required. Please send a CV, statement of research interests and experience, along with names/phone numbers/email addresses of three references to: Dr. Ning Qian Center for Neurobiology and Behavior Columbia University 722 W. 168th St., A730 New York, NY 10032 nq6 at columbia.edu (email) 212-960-2213 (phone) 212-960-2561 (fax) ********************************************************************* For the details of our research interests and publications, please visit our World Wide Web home page at: http://brahms.cpmc.columbia.edu Selected Papers (available on line): A Physiological Model for Motion-stereo Integration and a Unified Explanation of the Pulfrich-like Phenomena, Ning Qian and Richard A. Andersen, submitted to Vision Research. Binocular Receptive Field Profiles, Disparity Tuning and Characteristic Disparity, Yudong Zhu and Ning Qian, Neural Computation, 1996 (in press). Computing Stereo Disparity and Motion with Known Binocular Cell Properties, Ning Qian, Neural Computation, 1994, 6:390-404. Transparent Motion Perception as Detection of Unbalanced Motion Signals III: Modeling, Ning Qian, Richard A. Andersen and Edward H. Adelson, J. Neurosci., 1994, 14:7381-7392. Generalization and Analysis of the Lisberger-Sejnowski VOR Model, Ning Qian, Neural Computation, 1995, 7:735-752.

From shrager at neurocog.lrdc.pitt.edu Sat Apr 6 09:14:02 1996 From: shrager at neurocog.lrdc.pitt.edu (Jeff Shrager) Date: Sat, 6 Apr 1996 09:14:02 -0500 (EST) Subject: What is a "hybrid" model? In-Reply-To: Message-ID:

On Fri, 5 Apr 1996, Lev Goldfarb wrote: > > In sum, my point is that it is difficult, if not impossible, to establish > > the existence or non-existence of genuinely different *types* of learning > > solely from behavioral phenomena, however augmented by theory or > > mathematics. > > In view of this, why then do most of us ignore the scientific experience > of the last four centuries, which strongly suggests scientific parsimony > (in this case, one basic learning "mechanism")? > Are we ready (i.e. adequately "educated") to deal with the greatest > scientific challenge of cognitive science?

I'm sorry, but this is all noise. The brain is a complicated machine. Saying that a car runs on "one basic principle" of chemistry (or physics) isn't saying anything important about a car as pertains to most people's interactions with it (except maybe people who are hit by its momentum :-) The "scientific experience of the last four centuries" (at least that little (though important!) speck of it that Lev is apparently referring to) explicitly eschews complexity, or turns it into abstract complexity (such as chaos theory), neither of which approach tells you very much about the real McCoy. If you care about the real brain, the abstract and general theories are important, interesting, and useful, but they are NOT the whole story.
I'm sorry to say that this is going to quickly turn into the same old religious war, and I'd really like to propose that we take it offline. -- Jeff

From iehava at ie.technion.ac.il Sun Apr 7 01:17:39 1996 From: iehava at ie.technion.ac.il (Hava Siegelmann) Date: Sun, 7 Apr 1996 08:17:39 +0200 (EET) Subject: A New Computational Model: Continuous Time Message-ID:

Dear friends: I wish to introduce you to a new work in continuous-time computability that may be of interest to some of you. Analog Computing and Dynamical Systems ====================================== Hava T. Siegelmann and Shmuel Fishman Technion --- Israel Institute of Technology iehava at ie.technion.ac.il fishman at physics.technion.ac.il

ABSTRACT This work is aimed at gaining an enlarged and deeper understanding of the computation processes possible in natural and artificial systems. We introduce an interface between dynamical systems and computational models. The theory that is developed encompasses discrete and continuous analog computation by means of difference and differential equations, respectively. Our complexity theory for continuous time systems is very natural and requires no external clocks or synchronizing elements. As opposed to previous models, we do not apply selected principles of nature to the Turing model but rather start from realizable and possibly chaotic dynamical systems and interpret their evolution as generalized computation. By applying basic computational terms such as halting, computation under resource constraints, and nondeterministic and stochastic behavior to dynamical systems, a general, continuous-time computational theory is introduced. The new theory is indeed different from the classical one: in some ways it seems stronger but it still has natural limits of decidability. =====================================================================

Let me briefly explain why I suspect that this theoretical computer-science work may be of interest to the Connectionists: 1. In some way, this is a generalization of the Hopfield network. The meaningful attractors of these networks --- where information is stored --- are all simple: either stable fixed points or limit cycles, which are periodic orbits. As most dissipative dynamical systems converge to chaotic attractors, the Hopfield network is indeed a very special case of recurrent networks. In our new work, we present the foundations of computation for dissipative dynamical systems; the attractors can be of any kind. In spite of this variety, computation and computation times in dynamical systems are now defined in unified, natural, and unambiguous mathematical terms. 2. There is another reason, but here it really depends on personal belief. Previously, the computer-science theoreticians among us considered functions, trajectories, and control as discrete-time processes. Looking at them without this discretization seems to enlarge our understanding of continuous-time computation, with no need for external clocks or other related discretization tricks. I do not understand enough about biological control to state more than these careful comments; I would love to hear from those of you who do whether you see any possible application to your kind of work.
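As a concrete handle on point 1, here is a minimal Python/numpy illustration (my own toy example, not code from the paper) of reading a dissipative system's evolution as a computation: a small Hopfield net stores one pattern, and iterating the dynamics from a corrupted input converges to a fixed-point attractor, which plays the role of the halting output.

    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)   # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)                       # no self-connections

    state = np.array([1, 1, 1, -1, 1, 1])          # corrupted version of the pattern
    for _ in range(10):                            # synchronous sign updates
        new_state = np.sign(W @ state)
        if np.array_equal(new_state, state):
            break                                  # fixed point reached: the "halt"
        state = new_state

    print(state)                                   # recovers the stored pattern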
Best regards, Hava Siegelmann Technion Israel

From giles at research.nj.nec.com Mon Apr 8 21:39:52 1996 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 8 Apr 96 21:39:52 EDT Subject: TR available on Face Recognition Message-ID: <9604090139.AA10780@alta>

----------------------------------------------------------------------- The following paper presents a hybrid neural network solution to face recognition which outperforms eigenfaces and some other methods on the database of 400 images considered. _______________________________________________________________________ FACE RECOGNITION: A HYBRID NEURAL NETWORK APPROACH Steve Lawrence (1,3), C. Lee Giles (1,2), Ah Chung Tsoi (3), Andrew D. Back (3) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA (3) Electrical and Computer Engineering, University of Queensland, St. Lucia, Australia 4072 U. of Maryland Technical Report CS-TR-3608 and UMIACS-96-16

ABSTRACT Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Loeve transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer. Keywords: Convolutional Neural Networks, Hybrid Systems, Face Recognition, Self-Organizing Map __________________________________________________________________________ The paper is available from: http://www.neci.nj.nec.com/homepages/lawrence - USA http://www.neci.nj.nec.com/homepages/giles.html - USA http://www.cs.umd.edu/TRs/TR-no-abs.html - USA http://www.elec.uq.edu.au/~lawrence - Australia ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3608.face.recognition_hybrid.neural.nets.ps.Z We welcome your comments. -- C.
Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ==

From harmonme at aa.wpafb.af.mil Wed Apr 10 11:19:58 1996 From: harmonme at aa.wpafb.af.mil (Mance E. Harmon) Date: Wed, 10 Apr 96 11:19:58 -0400 Subject: ICML'96 Paper Available Message-ID: <960410111956.575@ethel.aa.wpafb.af.mil.0>

The following paper will be presented at the 13th International Conference on Machine Learning, Bari, Italy, 3-6 July, and is now available in postscript and RTF formats at the following URL: http://www.aa.wpafb.af.mil/~harmonme

Residual Q-Learning Applied to Visual Attention

Cesar Bandera, Amherst Systems, Inc., Machine Vision Dept., 30 Wilson Road, Buffalo, New York 14221-7082, cba at amherst.com
Francisco J. Vico and Jose M. Bravo, Facultad de Psicologia, Universidad de Malaga, 29017 Malaga (Spain), fjv at eva.psi.uma.es, jbm at eva.psi.uma.es
Mance E. Harmon, Wright Laboratory, WL/AACF, 2241 Avionics Circle, Wright-Patterson AFB, Ohio 45433-7318, harmonme at aa.wpafb.af.mil
Leemon C. Baird III, U.S.A.F. Academy, 2354 Fairchild Dr., Suite 6K41, USAFA, Colorado 80840-6234, baird at cs.usafa.af.mil

ABSTRACT Foveal vision features imagers with graded acuity coupled with context sensitive sensor gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision operates more efficiently than uniform acuity vision because resolution is treated as a dynamically allocatable resource, but requires a more refined visual attention mechanism. We demonstrate that reinforcement learning (RL) significantly improves the performance of foveal visual attention, and of the overall vision system, for the task of model based target recognition. A simulated foveal vision system is shown to classify targets with fewer fixations by learning strategies for the acquisition of visual information relevant to the task, and learning how to generalize these strategies in ambiguous and unexpected scenario conditions.

From recruit at phz.com Wed Apr 10 11:54:46 1996 From: recruit at phz.com (PHZ Recruiting) Date: Wed, 10 Apr 96 11:54:46 EDT Subject: Boston area job at PHZ modeling financial markets Message-ID: <9604101554.AA21791@phz.com>

Applied research position available immediately in NONLINEAR STATISTICAL MODELING OF FINANCIAL MARKETS at PHZ CAPITAL PARTNERS LP PHZ is a small Boston area startup company founded in 1993 which manages client money using proprietary statistical models to invest in global securities markets. The principals are Tomaso Poggio, Jim Hutchinson, and Xiru Zhang. Following an equity investment by one of the world's largest futures trading manager firms, PHZ is seeking a person to join our team and expand our trading system development efforts. The successful applicant for this position will have an M.S. or Ph.D. in statistics, computer science, finance, or a related field. Experience with advanced statistical modeling tools, large real world data sets, and software development on PCs and Unix systems (esp. using C/C++ and statistics languages such as S+ or SAS) is highly desirable; working knowledge of financial markets is also a plus. Depending on candidate interests and skills, this position will involve or lead into basic research and application of sophisticated model development tools, exploratory data gathering and analysis, development of our trading and risk management software platform, and/or trading and monitoring live models.
The growth potential of this position is large, both in terms of responsibilities and compensation. Initial compensation will be competitive based on qualifications, possibly including stock options. Interested applicants should email resumes (ascii or postscript) to recruiting at phz.com, or send by US mail to: Attn: Recruiting PHZ Capital Partners LP 111 Speen St, Suite 313 Framingham, MA 01701 USA

From wgm at santafe.edu Wed Apr 10 16:22:20 1996 From: wgm at santafe.edu (Bill Macready) Date: Wed, 10 Apr 96 14:22:20 MDT Subject: No subject Message-ID: <9604102022.AA25825@sfi.santafe.edu>

We would like to announce a paper entitled: An Efficient Method To Estimate Bagging's Generalization Error D.H. Wolpert, W.G. Macready In bagging one uses bootstrap replicates of the training set to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of $m^2$ times, where $m$ is the size of the training set. This paper presents several ways to exploit the bias-variance decomposition to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. In a set of experiments, the accuracy of this estimator was compared to both the accuracy of using cross-validation to estimate the generalization error of the underlying learning algorithm, and the accuracy of using cross-validation to estimate the generalization error of the bagged algorithm. The estimator presented here was comparable in its accuracy to, and sometimes even more accurate than, the alternative cross-validation-based estimators. This paper is available from the web site: "http://www.santafe.edu/~wgm/papers.html" or by ftp from "ftp://ftp.santafe.edu/pub/wgm/error.ps.gz"

From mjo at cns.ed.ac.uk Wed Apr 10 09:12:20 1996 From: mjo at cns.ed.ac.uk (Mark Orr) Date: Wed, 10 Apr 1996 14:12:20 +0100 Subject: Introduction to RBF networks plus Matlab package Message-ID: <199604101312.OAA01716@garbo.cns.ed.ac.uk>

Announcing the availability of the following resources on the World Wide Web at URL http://www.cns.ed.ac.uk/people/mark.html. Introduction to RBF Networks ---------------------------- A 67 page introduction to linear feed-forward neural networks for supervised learning, such as radial basis function networks, where there is a single hidden layer and the only parameters that change during learning are the weights from the hidden units to the outputs. But more importantly it covers "nearly linear" networks: networks which, though nonlinear (because learning affects more than just the hidden-to-output weights), can still be analysed with simple mathematics (linear algebra) and which don't need compute intensive gradient descent methods to learn. This applies to RBF networks with local ridge regression or which use regularised forward selection to build the hidden layer. These techniques, in conjunction with leave-one-out or generalised cross-validation, are covered in detail. Available in PostScript or hyper-text.
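Since the package itself is in Matlab, here is a rough numpy rendering of the central computation the introduction describes (a sketch under my own assumptions -- made-up data, arbitrary RBF centres and widths -- not code from the package): global ridge regression in a fixed-hidden-layer network, with the regularisation parameter chosen by generalised cross-validation rather than gradient descent.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 50)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(50)

    centers = np.linspace(-1, 1, 15)               # fixed RBF hidden units
    H = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.2 ** 2))

    def gcv(lam):
        # Hat matrix mapping targets to fitted values under ridge parameter lam.
        A = H @ np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T)
        resid = y - A @ y
        n = len(y)
        return n * (resid @ resid) / (n - np.trace(A)) ** 2

    best = min(10.0 ** np.arange(-8, 2), key=gcv)  # crude grid search over lambda
    w = np.linalg.solve(H.T @ H + best * np.eye(H.shape[1]), H.T @ y)
    print(best, np.round(w[:3], 3))

Because everything stays linear in the output weights, model selection reduces to cheap linear algebra -- the point the introduction makes about avoiding compute-intensive gradient descent.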
From mjo at cns.ed.ac.uk Wed Apr 10 09:12:20 1996
From: mjo at cns.ed.ac.uk (Mark Orr)
Date: Wed, 10 Apr 1996 14:12:20 +0100
Subject: Introduction to RBF networks plus Matlab package
Message-ID: <199604101312.OAA01716@garbo.cns.ed.ac.uk>

Announcing the availability of the following resources on the World Wide Web at URL http://www.cns.ed.ac.uk/people/mark.html.

Introduction to RBF Networks
----------------------------

A 67-page introduction to linear feed-forward neural networks for supervised learning, such as radial basis function networks, where there is a single hidden layer and the only parameters that change during learning are the weights from the hidden units to the outputs. More importantly, it covers "nearly linear" networks: networks which, though nonlinear (because learning affects more than just the hidden-to-output weights), can still be analysed with simple mathematics (linear algebra) and which don't need compute-intensive gradient descent methods to learn. This applies to RBF networks with local ridge regression or which use regularised forward selection to build the hidden layer. These techniques, in conjunction with leave-one-out or generalised cross-validation, are covered in detail. Available in PostScript or hyper-text.

Matlab Routines for
-------------------
Subset Selection and Ridge Regression
-------------------------------------
in Linear Neural Networks
-------------------------

A package of Matlab routines implementing regularised or unregularised forward subset selection and global or local ridge regression in linear networks such as radial basis function networks. Comes with a 45-page user manual with plenty of examples. Available as a compressed unix tape archive (.tar file).

The author would like to acknowledge support from the UK Joint Councils Initiative in Human Computer Interaction and Cognitive Science under grant G9213375, "Designing Systems of Coupled Networks".

Mark Orr
mark at cns.ed.ac.uk
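The "linear" learning referred to above is just regularised least squares on the hidden-layer outputs. A minimal sketch, in our own toy code and names, with a single global ridge parameter rather than the package's local or selected variants:

    import numpy as np

    def rbf_design(X, centres, width):
        # H[i, j] = exp(-||x_i - c_j||^2 / (2 width^2))
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def ridge_weights(H, y, lam):
        # the whole "training" step: w = (H'H + lam I)^(-1) H'y
        return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

    # usage: fit a noisy sine curve in 1-D
    rng = np.random.default_rng(0)
    X = np.linspace(0.0, 1.0, 50)[:, None]
    y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
    H = rbf_design(X, X[::5], width=0.1)    # every fifth input as a centre
    w = ridge_weights(H, y, lam=1e-3)
    y_hat = H @ w                           # fitted values

In the package itself the regularisation parameter is not fixed by hand but chosen by criteria such as the leave-one-out and generalised cross-validation scores mentioned above.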
From rafal at mech.gla.ac.uk Fri Apr 12 11:45:44 1996
From: rafal at mech.gla.ac.uk (Rafal W Zbikowski)
Date: Fri, 12 Apr 1996 16:45:44 +0100
Subject: Workshop on Neurocontrol
Message-ID: <29240.199604121545@trebino.mech.gla.ac.uk>

CALL FOR PAPERS

Neural Adaptive Control Technology Workshop: NACT II
9--10 September, 1996
Daimler-Benz Systems Technology Research, Berlin, Germany

NACT Project
============

The second of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on September 9--10, 1996 in Berlin, Germany. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland.

The project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed are being evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Daimler-Benz Aerospace (DASA), AEG Daimler-Benz Industrie and DEBIS. The project leader is Dr Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow).

NACT II Workshop
================

The aim of the workshop is to bring together selected invited specialists in the fields of adaptive control, non-linear systems and neural networks. The first workshop (NACT I) took place in Glasgow in May 1995 and was mainly dedicated to theoretical issues of neural adaptive control. Besides monitoring further development of theory, the NACT II workshop will be focused on industrial applications and software tools. A number of contributed papers will also be included. As well as paper presentations, significant time will be allocated to round-table and discussion sessions. In order to create a fertile atmosphere for a significant information interchange, we aim to attract active specialists in the relevant fields. Professor Karl Johan Astrom of Lund Institute of Technology, Sweden and Professor Hassan K. Khalil of Michigan State University have kindly agreed to act as invited speakers. Proceedings of the meeting will be published in an edited book format.

Contributed papers
==================

The Program Committee is soliciting contributed papers in the area of neurocontrol for presentation at the conference and publication in the Proceedings. Prospective authors are invited to send an extended abstract of up to six pages in length to the address below no later than Friday, 31 May 1996. Final selection of papers will be announced at the end of June, and authors will have the opportunity of preparing a final version of the extended abstract by the end of July, which will be circulated to participants in a Workshop digest. Following the Workshop, selected authors will be asked to prepare a full paper for publication in the proceedings. This will take the form of an edited book produced by an international publisher. LaTeX style files will be available for document preparation.

Each submitted paper must be headed with a title, the names, affiliations and complete mailing addresses (including e-mail) of all authors, a list of three keywords, and the statement NACT II. The first named author of each paper will be used for all correspondence unless otherwise requested.

Address for submissions:

Dr Kenneth J Hunt
Daimler-Benz AG
Systems Technology Research
Alt-Moabit 96A
10559 BERLIN
Germany
hunt at DBresearch-berlin.de

For more information visit the NACT Web page http://www.mech.gla.ac.uk/~nactftp/nact.html

From istvan at psych.ualberta.ca Fri Apr 12 22:01:16 1996
From: istvan at psych.ualberta.ca (Istvan Berkeley)
Date: Fri, 12 Apr 1996 20:01:16 -0600
Subject: Workshop
Message-ID: 

CONNECTIONISM FOR COGNITIVISTS: THEORY AND APPLICATIONS

On the 25-27 May, 1996, a major international workshop on recent theoretical and applied aspects of network architectures will be held at Carleton University in Ottawa, Canada. The workshop has a unique structure: approximately half of it will be devoted to presentations of theoretical work by eminent researchers, while the other half will involve hands-on introductions to new software that allows for the use of learning algorithms and techniques for hidden-unit activation analysis that are not available to researchers whose main knowledge of networks stems from the seminal 1986 PDP volumes by Rumelhart, McClelland, et al., or who are, indeed, unfamiliar with the details of *any* PDP modelling techniques, but who would like to understand in detail why they have produced so much interest and debate among cognitive scientists and others. For the second purpose, all registrants will have access to workstations.

The workshop has been designed to appeal to, and to be accessible to, researchers from a wide range of disciplines, especially including cognitive science, philosophy, psychology, linguistics, computer science and telecommunications engineering. We stress that no particular disciplinary background, or technical experience with network models, will be presupposed in the design of the workshop.

Principal speakers include:
David Rumelhart, Stanford
Jerome Feldman, Berkeley/ICSI
Paul Skokowski, Stanford
Christopher Thornton, Sussex
Malcolm Forster, Wisconsin at Madison
John Bullinaria, Birkbeck College, London
Istvan Berkeley, Alberta/Southwestern Louisiana

Please note that registration space is limited, and registrations will be accepted on a first-come, first-served basis.

Dates: May 25-27, 1996
Registration fees: Regular: $75.00 (CDN); Student: $35.00 (CDN); Banquet (optional): $35.00 (CDN)

REGISTRATION PROCEDURES

Those wishing to attend the conference may register either electronically, or by mail. To register electronically, send the following information to:

NAME:
AFFILIATION:
REGULAR/STUDENT?:
BANQUET (Y/N?):
ACCOMMODATION PREFERENCES (no. of nights, preference as between student residence accommodation [subject to availability] or hotel):
MAILING ADDRESS:
E-MAIL:

Electronic registrations will be considered confirmed upon receipt of a cheque for the appropriate amount, in either Canadian dollars or the U.S. equivalent. Cheques should be made payable to CARLETON UNIVERSITY, and should be sent to the address given for postal registration below. To register by post, send the information indicated above, with a cheque for the appropriate amount, to:

CONNECTIONISM
c/o Professor Don Ross
Department of Philosophy
Morisset Hall
University of Ottawa
Ottawa, ON
CANADA K1N 6N5
e-mail:

Istvan S. N. Berkeley, email: istvan at psych.ualberta.ca
Biological Computation Project & Department of Philosophy,
c/o 4-108 Humanities Center, University of Alberta,
Edmonton, Alberta T6G 2E5, Canada
Tel: +1 403 436 4182   Fax: +1 403 437 2261

From marco at idsia.ch Mon Apr 15 03:39:15 1996
From: marco at idsia.ch (Marco Wiering)
Date: Mon, 15 Apr 96 09:39:15 +0200
Subject: Levin Search and EIRA
Message-ID: <9604150739.AA07623@fava.idsia.ch>

FTP-host: ftp.idsia.ch
FTP-file: /pub/marco/ml_levin_eira.ps.gz or /pub/juergen/ml_levin_eira.ps.gz

Solving POMDPs with Levin Search and EIRA
Marco Wiering, Juergen Schmidhuber
Machine Learning: 13th Intern. Conf., 1996
9 pages, 86K compressed, 252K uncompressed

Partially observable Markov decision problems (POMDPs) recently received a lot of attention in the reinforcement learning community. No attention, however, has been paid to Levin's universal search through program space (LS), which is theoretically optimal for a wide variety of search problems including many POMDPs. Experiments in this paper show that LS can solve partially observable mazes (`POMs') involving many more states and obstacles than those solved by various previous authors. We then note, however, that LS is not necessarily optimal for learning problems where experience with previous problems can be used to speed up the search. For this reason, we introduce an adaptive extension of LS (ALS) which uses experience to increase probabilities of instructions occurring in successful programs found by LS. To deal with cases where ALS does not lead to long-term performance improvement, we use the recent technique ``environment-independent reinforcement acceleration'' (EIRA) as a safety belt (EIRA currently is the only known method that guarantees a lifelong history of reward accelerations). Additional experiments demonstrate: (a) ALS can dramatically reduce search time consumed by calls of LS. (b) Further significant speed-ups can be obtained by combining ALS and EIRA.

To obtain a copy, do one of these:
netscape http://www.idsia.ch/~marco/publications.html
netscape http://www.idsia.ch/~juergen/onlinepub.html

Marco Wiering, Juergen Schmidhuber, IDSIA
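For readers who have not met Levin search: it allots each candidate program p runtime proportional to its probability P(p), doubling the total budget each phase, which keeps the overall search time within a constant factor of t(p*)/P(p*) for the fastest solver p*. A toy sketch follows -- our own miniature grid task with uniform instruction probabilities, not the partially observable mazes of the paper -- with an ALS-style probability update at the end:

    from itertools import product

    ACTIONS = "NSEW"
    P = {a: 1.0 / len(ACTIONS) for a in ACTIONS}   # instruction probabilities

    def run(program, goal=(2, 1), limit=0):
        # execute at most `limit` moves on a grid; success = goal reached
        x, y = 0, 0
        for t, a in enumerate(program):
            if t >= limit:
                return False
            dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[a]
            x, y = x + dx, y + dy
            if (x, y) == goal:
                return True
        return False

    def prob(program):
        p = 1.0
        for a in program:
            p *= P[a]
        return p

    def levin_search(max_phase=12, max_len=6):
        for phase in range(max_phase):             # time budget doubles per phase
            for n in range(1, max_len + 1):
                for prog in product(ACTIONS, repeat=n):
                    steps = int(2 ** phase * prob(prog))
                    if steps and run(prog, limit=steps):
                        return "".join(prog)
        return None

    solution = levin_search()
    if solution:                                   # ALS-style adaptation: bias
        for a in solution:                         # future searches toward the
            P[a] += 0.1                            # instructions that just worked
        z = sum(P.values())
        for a in P:
            P[a] /= z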
From reiner at isy.liu.se Tue Apr 16 04:38:07 1996
From: reiner at isy.liu.se (Reiner Lenz)
Date: Tue, 16 Apr 1996 10:38:07 +0200 (MET DST)
Subject: Invariance, group representations and orientation estimation
Message-ID: <199604160838.KAA12946@einstein.isy.liu.se>

Problems involving the concept of invariance have received a lot of attention, and although the following papers are somewhat outside the field of neural networks, perhaps someone may find something interesting in them. The main idea is that invariance is often closely related to groups and their representations, and that these in turn are closely related to special transforms. The most important example is shift-invariance, which is related to the additive group and leads to the Fourier transform.

If you are interested you can find some of the reprints in http://www.isy.liu.se/~reiner/proj_desc/section3_3.html

GROUPS: is an overview article
P2-invariance: Describes the application to permutation and projection invariance
Group Theoretical Transforms: uses the dihedral group
Lie-Matching: computes the orientation parameters from 3-D data and is an example of fast iterative matching algorithms based on the interplay between Lie-group and Lie-algebra.

As I said before: Not strictly NN but perhaps interesting to someone.

Best regards
"Kleinphi macht auch Mist"

Reiner Lenz
Dept. EE., Linkoeping University
S-58183 Linkoeping/Sweden
email: reiner at isy.liu.se

From stefano at kant.irmkant.rm.cnr.it Tue Apr 16 08:05:48 1996
From: stefano at kant.irmkant.rm.cnr.it (Stefano Nolfi)
Date: Tue, 16 Apr 1996 12:05:48 GMT
Subject: Paper available on adaptive classification with autonomous robots
Message-ID: <9604161205.AA19378@kant.irmkant.rm.cnr.it>

Paper available via WWW / FTP:

Keywords: Active Perception, Adaptive Behaviors, Evolutionary Robotics, Neural Networks, Genetic Algorithms.

------------------------------------------------------------------------------

ADAPTATION AS A MORE POWERFUL TOOL THAN DECOMPOSITION AND INTEGRATION

Stefano Nolfi
Institute of Psychology, C.N.R., Rome.

Recently a new way of building control systems, known as behavior-based robotics, has been proposed to overcome the difficulties of the traditional AI approach to robotics. Most of the work done in behavior-based robotics involves a decomposition process (in which the behavior required is broken down into simpler sub-components) and an integration process (in which the modules designed to produce the sub-behaviors are put together). In this paper we claim that decomposition and integration should be the result of an adaptation process and not of the decision of an experimenter. To support this hypothesis we show how, in the case of a simple task in which a real autonomous robot is supposed to classify objects of different shapes, by letting the entire behavior emerge through an evolutionary technique, a simpler and more robust solution can be obtained than by trying to design a set of modules and to integrate them.

http://kant.irmkant.rm.cnr.it/public.html
or ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5)
ftp-file: /pub/econets/nolfi.recog.ps.Z

For the homepage of our research group with most of our publications available online and pointers to ALIFE resources see: http://kant.irmkant.rm.cnr.it/gral.html

----------------------------------------------------------------------------

Stefano Nolfi
Institute of Psychology, C.N.R.
Viale Marx, 15 - 00137 - Rome - Italy
voice: 0039-6-86090231
fax: 0039-6-824737
e-mail: stefano at kant.irmkant.rm.cnr.it
www: http://kant.irmkant.rm.cnr.it/nolfi.html

From smyth at galway.ICS.UCI.EDU Tue Apr 16 14:02:06 1996
From: smyth at galway.ICS.UCI.EDU (Padhraic Smyth)
Date: Tue, 16 Apr 1996 11:02:06 -0700
Subject: Final CFP for Sixth AI and Statistics Workshop
Message-ID: <9604161102.aa05163@paris.ics.uci.edu>

Apologies to those of you who receive this more than once. The deadline for 4-page abstracts is July 1; electronic submissions are encouraged.

Padhraic Smyth
AIStats97 General Chair

Final Call For Papers

SIXTH INTERNATIONAL WORKSHOP ON ARTIFICIAL INTELLIGENCE AND STATISTICS

January 4-7, 1997
Ft. Lauderdale, Florida
http://www.stat.washington.edu/aistats97/

PURPOSE:

This is the sixth in a series of workshops which has brought together researchers in Artificial Intelligence (AI) and in Statistics to discuss problems of mutual interest. The exchange has broadened research in both fields and has strongly encouraged interdisciplinary work. Papers on all aspects of the interface between AI & Statistics are encouraged.

FORMAT:

To encourage interaction and a broad exchange of ideas, the presentations will be limited to about 20 discussion papers in single session meetings over three days (Jan. 5-7). Focussed poster sessions will provide the means for presenting and discussing the remaining research papers. Papers for poster sessions will be treated equally with papers for presentation in publications. Attendance at the workshop will *not* be limited.

The three days of research presentations will be preceded by a day of tutorials (Jan. 4). These are intended to expose researchers in each field to the methodology used in the other field. The tutorial speakers are A. P. Dawid (University College London), Michael Jordan (MIT), Tom Mitchell (Carnegie Mellon), and Mike West (Duke University).

TOPICS OF INTEREST:
- automated data analysis and knowledge representation for statistics
- statistical strategy
- metadata and design of statistical data bases
- multivariate graphical models, belief networks
- causality
- cluster analysis and unsupervised learning
- predictive modeling: classification and regression
- interpretability in modeling
- model uncertainty, multiple models
- probability and search
- knowledge discovery in databases
- integrated man-machine modeling methods
- statistical methods in AI approaches to vision, robotics, pattern recognition, software agents, planning, information retrieval, natural language processing, etc.
- AI methods applied to problems in statistics such as statistical advisory systems, experimental design, exploratory data analysis, causal modeling, etc.

This list is not intended to define an exclusive list of topics of interest. Authors are encouraged to submit papers on any topic which falls within the intersection of AI and Statistics.

SUBMISSION REQUIREMENTS:

Three copies of an extended abstract (up to 4 pages) should be sent to

David Madigan, Program Chair
6th International Workshop on AI and Statistics
Department of Statistics, Box 354322
University of Washington
Seattle, WA 98195

or electronically (postscript or latex preferred) to aistats at stat.washington.edu

Submissions will be considered if *postmarked* by June 30, 1996. If the submission is electronic (e-mail), then it must be *received* by midnight July 1, 1996. Please indicate which topic(s) your abstract addresses and include an electronic mail address for correspondence. Receipt of all submissions will be confirmed via electronic mail. Acceptance notices will be mailed by September 1, 1996. Preliminary papers (up to 20 pages) must be returned by November 1, 1996. These preliminary papers will be copied and distributed at the workshop.

PROGRAM COMMITTEE:

General Chair: P. Smyth, UC Irvine and JPL
Program Chair: D. Madigan, U. Washington

Members:
Russell Almond, ETS, Princeton
Wray Buntine, Thinkbank, Inc.
Peter Cheeseman, NASA Ames
Paul Cohen, University of Massachusetts
Greg Cooper, University of Pittsburgh
Bill DuMouchel, Columbia University
Doug Fisher, Vanderbilt University
Dan Geiger, Technion
Clark Glymour, Carnegie-Mellon University
David Hand, Open University, UK
Steve Hanks, University of Washington
Trevor Hastie, Stanford University
David Haussler, UC Santa Cruz
David Heckerman, Microsoft
Paula Hietala, University of Tampere, Finland
Geoff Hinton, University of Toronto
Mike Jordan, MIT
Hans Lenz, Free University of Berlin, Germany
David Lewis, AT&T Bell Labs
Andrew Moore, Carnegie-Mellon University
Radford Neal, University of Toronto
Jonathan Oliver, Monash University, Australia
Steve Omohundro, NEC Research, Princeton
Judea Pearl, UCLA
Daryl Pregibon, AT&T Bell Labs
Ross Shachter, Stanford University
Glenn Shafer, Rutgers University
Prakash Shenoy, University of Kansas
David Spiegelhalter, MRC, Cambridge, UK
Peter Spirtes, Carnegie-Mellon University

MORE INFORMATION:

For more information see the workshop's Web page: http://www.stat.washington.edu/aistats97/ or write David Madigan at aistats at stat.washington.edu for inquiries concerning the technical program, or Padhraic Smyth at aistats at jpl.nasa.gov for other inquiries about the workshop. Write to ai-stats-request at watstat.uwaterloo.ca to subscribe to the AI and Statistics mailing list.

--------

From jhf at playfair.Stanford.EDU Tue Apr 16 19:37:49 1996
From: jhf at playfair.Stanford.EDU (Jerome H. Friedman)
Date: Tue, 16 Apr 1996 16:37:49 -0700
Subject: Paper available.
Message-ID: <199604162337.QAA22041@playfair.Stanford.EDU>

*** Paper Announcement ***

ON BIAS, VARIANCE, 0/1 - LOSS, AND THE CURSE-OF-DIMENSIONALITY

Jerome H. Friedman
Stanford University
(jhf at playfair.stanford.edu)

ABSTRACT

The classification problem is considered in which an output variable assumes discrete values with respective probabilities that depend upon the simultaneous values of a set of input variables. At issue is how error in the estimates of these probabilities affects classification error when the estimates are used in a classification rule. These effects are seen to be somewhat counterintuitive in both their strength and nature. In particular, the bias and variance components of the estimation error combine to influence classification in a very different way than with squared error on the probabilities themselves. Certain types of (very high) bias can be canceled by low variance to produce accurate classification. This can dramatically mitigate the effect of the bias associated with some simple estimators like "naive" Bayes, and the bias induced by the curse-of-dimensionality on nearest-neighbor procedures. This helps explain why such simple methods are often competitive with and sometimes superior to more sophisticated ones for classification, and why "bagging/aggregating" classifiers can often improve accuracy. These results also suggest simple modifications to these procedures that can (sometimes dramatically) further improve their classification performance.

Available by ftp from: "ftp://playfair.stanford.edu/pub/friedman/curse.ps.Z"
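The cancellation claimed in the abstract is easy to state in one formula. For a two-class problem write $p(x)=P(y=1\mid x)$, and suppose the estimate is approximately normal over training sets, $\hat p(x)\sim\mathcal N(\mu(x),\sigma^2(x))$. The rule errs relative to Bayes only when $\hat p(x)$ falls on the wrong side of $1/2$, so for a point with $p(x)>1/2$ (our notation, a sketch of the argument rather than the paper's full treatment):

    \Pr\bigl[\hat p(x) < \tfrac{1}{2}\bigr]
        = \Phi\!\left(\frac{\tfrac{1}{2}-\mu(x)}{\sigma(x)}\right)

Hence even a badly biased $\mu(x)$ costs nothing provided it stays on the correct side of $1/2$, and shrinking $\sigma(x)$ (e.g. by bagging) can compensate for large bias.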
From ajit at austin.ibm.com Wed Apr 17 12:37:52 1996
From: ajit at austin.ibm.com (Dingankar)
Date: Wed, 17 Apr 1996 11:37:52 -0500
Subject: Neuroprose paper announcement
Message-ID: <9604171637.AA32765@ding.austin.ibm.com>

**DO NOT FORWARD TO OTHER GROUPS**

Sorry, no hardcopies available. 4 pages.

Greetings!

The following invited paper will be presented at ISCAS in May 1996. The compressed PostScript file is available in the Neuroprose archive; the details (URL, BibTeX entry and abstract) follow.

Thanks,
Ajit

------------------------------------------------------------------------------

URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.tensor-products2.ps.Z

BibTeX entry:

@INPROCEEDINGS{atd:iscas-96,
  AUTHOR    = "Dingankar, Ajit T. and Sandberg, Irwin W.",
  TITLE     = "{Tensor Product Neural Networks and Approximation of Dynamical Systems}",
  BOOKTITLE = "Proceedings of the International Symposium on Circuits and Systems",
  YEAR      = "1996",
  ADDRESS   = "Atlanta, Georgia",
  MONTH     = "May 13--15"
}

Tensor Product Neural Networks and Approximation of Dynamical Systems
---------------------------------------------------------------------

ABSTRACT

We consider the problem of approximating any member of a large class of input-output operators of nonlinear dynamical systems. The systems need not be shift invariant, and the system inputs need not be continuous. We introduce a family of ``tensor product'' dynamical neural networks, and show that a certain continuity condition is necessary and sufficient for the existence of arbitrarily good approximations using this family.

From jlarsen at eivind.ei.dtu.dk Wed Apr 17 14:27:33 1996
From: jlarsen at eivind.ei.dtu.dk (Jan Larsen)
Date: Wed, 17 Apr 1996 14:27:33 +0200
Subject: Ph.D. Course in Advanced Digital Signal Processing
Message-ID: <01BB2C6A.058CF520@jl.ei.dtu.dk>

********************
*** ANNOUNCEMENT ***
********************

Ph.D. Course in Advanced Digital Signal Processing

Host: Section for Digital Signal Processing, Dept. of Mathematical Modelling, Technical University of Denmark.

Course responsible persons:
Assoc. Prof. Lars Kai Hansen, email: lkhansen at ei.dtu.dk
Assoc. Prof. Steffen Duus Hansen, email: sdh at imm.dtu.dk
Assis. Prof. Jan Larsen, email: jl at imm.dtu.dk
Assoc. Prof. John Aasted Sorensen, email: jaas at imm.dtu.dk

Course highlights:
* Design of neural networks.
* Signal processing with neural networks.
* Vector quantization with application to speech technology.
* Adaptive signal processing, filter banks and wavelets.

Dates: Full time in weeks 25, 26 and 27, June and July 1996.

Registration: Deadline: May 1, 1996. Department of Mathematical Modelling, Build. 321, Technical University of Denmark, DK-2800 Lyngby. Phone +45 45881433. Fax +45 45881397.

Notification of acceptance: May 10, 1996.

Further info:
Course description: http://www.ei.dtu.dk/teaching/phd_AdvDigSignalProc.html
DSP Section homepage: http://www.ei.dtu.dk/dsphomepage.html

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!! Please forward this message to people who might be interested !!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

-- Jan Larsen

From verleysen at dice.ucl.ac.be Thu Apr 18 11:21:32 1996
From: verleysen at dice.ucl.ac.be (verleysen@dice.ucl.ac.be)
Date: Thu, 18 Apr 1996 16:21:32 +0100
Subject: Neural Processing Letters - new publisher
Message-ID: <199604181416.QAA05395@ns1.dice.ucl.ac.be>

Dear colleagues,

The "Neural Processing Letters" journal has been published every two months since 1994; its aim is to rapidly publish new ideas or new developments in the field of artificial neural networks. Today, we are happy to announce that Kluwer Academic Publishers will publish this journal from 1996, in order to ensure its worldwide distribution. More information will soon be available on a WWW server.
Nevertheless, in the meantime, you will find enclosed some details from Kluwer (see below). You can also contact Mike Casey for free sample copies of the journal, and any details about the submission of papers:

Mike Casey
Kluwer Academic Publishers
Spuiboulevard 50, P.O. Box 17
NL - 3300 AA Dordrecht
The Netherlands
Phone: +31 78 6392219
Fax: +31 78 6392254
E-mail: casey at wkap.nl

Thank you for your interest in this journal.

Michel Verleysen, co-editor.

-----------------------------------------------------------------------------

Neural Processing Letters
-------------------------

Editors: Michel Verleysen, Universite Catholique de Louvain, Belgium; Francois Blayo, EERIE, Lyon, France

Neural Processing Letters is an international journal publishing research results and innovative ideas in all fields of artificial neural networks. Prospective authors are encouraged to submit letters concerning any aspect of the Artificial Neural Networks field including, but not restricted to, theoretical developments, biological models, new formal modes, learning, applications, software and hardware developments, and prospective research. The journal promotes fast exchange of information in the community of neural network researchers and users.

The resurgence of interest in the field of artificial neural networks since the beginning of the 1980s is coupled to tremendous research activity in specialized or multidisciplinary groups. Research, however, is not possible without good communication between people and the exchange of information, especially in a field covering such different areas; fast communication is also a key aspect, and this is the reason for Neural Processing Letters.

Subscription Information:
Kluwer Academic Publishers, Boston
ISSN: 1370-4621
1996, Volumes 3-4 (6 issues)
Prices: Institutional Price NLG 357.00 / USD 217.00; Private Price NLG 250.00 / USD 150.00

=============================================================================

SUBSCRIPTION ORDER FORM

Journal Title: Neural Processing Letters
1996, Volumes 3-4 (6 issues)
ISSN: 1370-4621
Institutional Rate: NLG: 357.00 USD: 217.00
Ref: KAPIS

( ) Payment enclosed to the amount of ___________________________
( ) Please send invoice
( ) Please charge my credit card account:
Card no.: |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|  Expiry date: ______________
() Access () American Express () Mastercard () Diners Club () Eurocard () Visa
Name of Card holder: ___________________________________________________

Delivery address:
Title: ___________________________ Initials: _______________ M/F ______
First name: ______________________ Surname: ______________________________
Organization: ______________________________________________________________
Department: ______________________________________________________________
Address: ______________________________________________________________
Postal Code: ___________________ City: ____________________________________
Country: _____________________________ Telephone: ______________________
Email: ______________________________________________________________
Date: _____________________ Signature: _____________________________

Our European VAT registration number is: |_|_|_|_|_|_|_|_|_|_|_|_|_|_|

To be sent to:

For customers in Mexico, USA, Canada and Latin America:
Kluwer Academic Publishers
Order Department
P.O. Box 358, Accord Station
Hingham, MA 02018-0358
U.S.A.
Tel: 617 871 6600
Fax: 617 871 6528
Email: kluwer at wkap.com

For customers in the rest of the world:
Kluwer Academic Publishers Group
Journals Department
P.O. Box 322
3300 AH Dordrecht
The Netherlands
Tel: +31 78 6392392
Fax: +31 78 6546474
Email: services at wkap.nl

Payment will be accepted in any convertible currency. Please check the rate of exchange with your bank. Prices are subject to change without notice. All prices are exclusive of Value Added Tax (VAT). Customers in the Netherlands please add 6% VAT. Customers from other countries in the European Community: either fill in the VAT number of your institute/company in the appropriate space on the order form, or add 6% VAT to the total order amount (customers from the U.K. are not charged VAT).

=============================================================================

=====================================================
Michel Verleysen
Universite Catholique de Louvain
Microelectronics Laboratory
3, pl. du Levant
B-1348 Louvain-la-Neuve
Belgium
tel: +32 10 47 25 51
fax: +32 10 47 86 67
E-mail: verleysen at dice.ucl.ac.be
WWW: http://www.dice.ucl.ac.be/~verleyse/MV.html
=====================================================

From terry at salk.edu Thu Apr 18 14:04:23 1996
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 18 Apr 96 11:04:23 PDT
Subject: NEURAL COMPUTATION 8:4
Message-ID: <9604181804.AA17272@salk.edu>

Neural Computation - Contents Volume 8, Number 4 - May 15, 1996

Article:
Stable encoding of large finite-state automata in recurrent neural networks with sigmoid discriminants
Christian W. Omlin and C. Lee Giles

Note:
Unicycling helps your French: Spontaneous recovery of association by learning unrelated tasks
Inman Harvey and James V. Stone

Letters:
A theory of the visual motion coding in the primary visual cortex
Zhaoping Li

Alignment of Coexisting Cortical Maps in a Motor Control Model
James A. Reggia and Yinong Chen

Controlling the magnification factor of self-organizing feature maps
H.-U. Bauer, R. Der and M. Herrmann

Semilinear Predictability Minimization Produces Well-Known Feature Detectors
Jurgen Schmidhuber, Martin Eldracher and Bernhard Foltin

Learning with preknowledge: Clustering with point and graph matching distance measures
Steven Gold, Anand Rangarajan and Eric Mjolsness

Analog versus discrete neural networks
Bhaskar DasGupta and Georg Schnitger

On the relationship between generalization error, hypothesis complexity, and sample complexity for radial basis functions
Partha Niyogi and Federico Girosi

Using neural networks to model conditional multivariate densities
Peter M. Williams

Pruning with replacement on limited resource allocation networks by F-projections
Christophe Molina and Mahesan Niranjan

Engineering multiversion neural-net systems
D. Partridge and W. B. Yates

Effects of nonlinear synapses on the performance of multilayer neural networks
G. Dundar, F-C. Hsu, and K. Rose

-----

ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html

SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES
______ $50 Student and Retired
______ $78 Individual
______ $220 Institution

Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA, +7% GST for Canada.)

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779

-----

From robert at fit.qut.edu.au Fri Apr 19 03:26:17 1996
From: robert at fit.qut.edu.au (Robert Andrews)
Date: Fri, 19 Apr 1996 17:26:17 +1000 (EST)
Subject: Rule Extraction Book
Message-ID: 

======================== NEW BOOK ANNOUNCEMENT ============================

RULES AND NETWORKS
Proceedings of the Rule Extraction From Trained Artificial Neural Networks Workshop
Society for the Study of Artificial Intelligence and the Simulation of Behavior Workshop Series (AISB'96)
University of Sussex, Brighton, UK. 2nd April, 1996
Robert Andrews & Joachim Diederich (Editors)

=========================== ORDER FORM ===================================

Name: _________________________________________________
Address: ______________________________________________
         ______________________________________________
         ______________________________________________
         ______________________________________________

Number of Copies: __________ @ $22.00 (Australian) ___________
Postage & Handling (A$4.00 in Aust, A$10.00 O'Seas)  ___________
TOTAL                                                ___________

Payment (in Australian Dollars)
Cheque/Money Order made out to: QUT BookShop, GPO Box 2434, Brisbane. 4001. Queensland, Australia.
Credit Card: MasterCard / Visa / BankCard
Number: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  Expiry: / /

======================= TABLE OF CONTENTS ================================

Rules and Local Function Networks
Robert Andrews and Shlomo Geva

The Extraction of Sugeno Fuzzy Rules From Neural Networks
Adelmo L. Cechin, Ulrich Epperlein, Wolfgang Rosenstiel and Bernhard Koppenhoefer

RULE_OUT Method: A New Approach For Knowledge Explicitation From Trained ANN
Loic Decloedt, Fernando Osorio and Bernard Amy

Rule Initialisation by Neural Networks
Joachim Diederich, James M Hogan, Mostefa Golea and Santhi Muthiah

Explaining Results of Neural Networks by Contextual Importance and Utility
Kary Framling

On the Complexity of Rule Extraction From Neural Networks and Network Querying
Mostefa Golea

Rule Extraction from Neural Networks
Peter Howes and Nigel Crook

Using Relevance Information in the Acquisition of Rules

From pelillo at dsi.unive.it Fri Apr 19 09:09:32 1996
From: pelillo at dsi.unive.it (Marcello Pelillo)
Date: Fri, 19 Apr 1996 15:09:32 +0200 (MET DST)
Subject: EMMCVPR'97 - Venice - Call for Papers
Message-ID: <199604191309.PAA19019@oink.dsi.unive.it>

CALL FOR PAPERS

International Workshop on
ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION
Venice, Italy, May 21-23, 1997

Energy minimization methods represent a fundamental methodology in computer vision and pattern recognition, with roots in such diverse disciplines as Physics, Psychology, and Statistics. Recent manifestations of the idea include Markov random fields, relaxation labeling, various types of neural networks, etc. These techniques are finding application in areas such as early vision, graph matching, motion analysis, visual reconstruction, etc. The aim of this workshop is to consolidate research efforts in this area, and to provide a discussion forum for researchers and practitioners interested in this important yet diverse subject.

The scientific program of the workshop will include the presentation of invited talks and contributed research papers. The workshop is sponsored by the International Association for Pattern Recognition (IAPR) and organized by the Department of Applied Mathematics and Computer Science of the University of Venice "Ca' Foscari."
Topics

Papers covering (but not limited to) the following topics are solicited:

Theory: (e.g., Bayesian contextual methods, biology-inspired methods, discrete optimization, information theory and statistics, learning and parameter estimation, Markov random fields, neural networks, relaxation processes, statistical mechanics approaches, stochastic methods, variational methods)

Methodology: (e.g., deformable models, early vision, matching, motion, object recognition, shape, stereo, texture, visual organization)

Applications: (e.g., character and text recognition, face processing, handwriting, medical imaging, remote sensing)

Program co-chairs

Marcello Pelillo, University of Venice, Italy
Edwin R. Hancock, University of York, UK

Program committee

Davi Geiger, New York University, USA
Anil K. Jain, Michigan State University, USA
Josef Kittler, University of Surrey, UK
Stan Z. Li, Nanyang Technological University, Singapore
Jean-Michel Morel, Universite' Paris Dauphine, France
Maria Petrou, University of Surrey, UK
Anand Rangarajan, Yale University, USA
Sergio Solimini, Polytechnic of Bari, Italy
Alan L. Yuille, Harvard University, USA
Josiane Zerubia, INRIA, France
Steven W. Zucker, McGill University, Canada

Invited speakers

Anil K. Jain, Michigan State University, USA
Josef Kittler, University of Surrey, UK
Alan L. Yuille, Harvard University, USA
Steven W. Zucker, McGill University, Canada

Venue

The workshop will be held at the University of Venice "Ca' Foscari." The lecture theater will be in the historic center of Venice, and accommodation will be provided in nearby hotels.

Submission procedure

Prospective authors should submit four copies of their contribution(s) by September 9, 1996 to:

Marcello Pelillo (EMMCVPR'97)
Dipartimento di Matematica Applicata e Informatica
Universita' "Ca' Foscari" di Venezia
Via Torino 155, 30173 Venezia Mestre, Italy
E-mail: pelillo at dsi.unive.it

The manuscripts submitted should be no longer than 15 pages, and the cover page should contain: title, author's name, affiliation and address, e-mail address, fax and telephone number, and an abstract no longer than 200 words. In case of joint authorship, the first name will be used for correspondence unless otherwise requested. All manuscripts will be reviewed by at least two members of the program committee. Accepted papers will appear in the proceedings, which are expected to be published in the series Lecture Notes in Computer Science by Springer-Verlag and will be distributed to all participants at the workshop. In order to get a high-quality book with a uniform and professional appearance, prospective authors are strongly encouraged to use the LaTeX style file available at the WWW site indicated below.

Important dates

Paper submission deadline: September 9, 1996
Notification of acceptance: December 1996
Camera-ready paper due: February 1997

Homepage

Information on the workshop is maintained at http://Dcpu1.cs.york.ac.uk:6666/~adjc/EMMCVPR97.html
This page will be updated continuously and will include information on accepted papers and the final program.

Concomitant events

During the week following EMMCVPR'97, participants will have the opportunity to attend the 3rd International Workshop on Visual Form (IWVF3) to be held in Capri, May 28-30.
For additional information please contact any of the co-chairmen Carlo Arcelli (car at imagm.na.cnr.it), Luigi Cordella (cordel at nadis.dis.unina.it), and Gabriella Sanniti di Baja (gsdb at imagm.na.cnr.it), or see http://amalfi.dis.unina.it/IWF3/iwvf3cfp.html

From ronnyk at starry.engr.sgi.com Sun Apr 21 02:22:50 1996
From: ronnyk at starry.engr.sgi.com (Ronny Kohavi)
Date: Sat, 20 Apr 1996 23:22:50 -0700
Subject: Bias + variance for classification
Message-ID: <199604210622.XAA18334@starry.engr.sgi.com>

The following paper will appear in the Proceedings of the Thirteenth International Conference on Machine Learning, 1996. It is available at http://reality.sgi.com/ronnyk under publications (with some slides containing more results), or by anonymous ftp from ftp://starry.stanford.edu/pub/ronnyk/biasVar.ps

There have been some recent announcements of tech-reports for bias-variance decompositions in classification domains (0-1 loss). In our paper we address the desiderata for good bias-variance decompositions and show some problems with other decompositions. We also address an important issue related to the naive estimation of these quantities using frequency counts and offer a correction.

Bias Plus Variance Decomposition for Zero-One Loss Functions

Ron Kohavi, Data Mining and Visualization, Silicon Graphics, Inc., ronnyk at sgi.com
David H. Wolpert, The Santa Fe Institute, dhw at santafe.edu

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich [1995] and Breiman [1996]. Their decomposition suffers from some major shortcomings, though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased, and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository.

-- Ronny Kohavi (ronnyk at sgi.com)
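For orientation, the decomposition in question has the following form at each test point x (as we recall it from the published version; notation ours, details best checked against the paper), where $P(y\mid x)$ is the target distribution and $\hat P(y\mid x)$ is the distribution of the learner's prediction over random training sets:

    \sigma_x^2        = \tfrac{1}{2}\Bigl(1 - \sum_y P(y\mid x)^2\Bigr)
    \mathrm{bias}_x^2 = \tfrac{1}{2}\sum_y \bigl(P(y\mid x) - \hat P(y\mid x)\bigr)^2
    \mathrm{var}_x    = \tfrac{1}{2}\Bigl(1 - \sum_y \hat P(y\mid x)^2\Bigr)

with expected zero-one loss at x equal to $\sigma_x^2 + \mathrm{bias}_x^2 + \mathrm{var}_x$. Unlike the earlier decompositions mentioned above, every term here is non-negative; the "naive estimation" issue is that replacing $\hat P(y\mid x)$ by frequency counts over a few training sets biases these terms, which is the correction the paper supplies.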
From STECK at ie.twsu.edu Mon Apr 22 17:14:55 1996
From: STECK at ie.twsu.edu (JIM STECK)
Date: Mon, 22 Apr 1996 16:14:55 CDT (GMT-6)
Subject: (Fwd) Student Assistantship in Optical Neural Networks
Message-ID: <1DD0983499@ie.twsu.edu>

The Neural Network Processing Group at Wichita State University, Wichita, Kansas, is currently seeking qualified Ph.D. candidates in the area of optical neural network implementations. Research Assistantships are available from $10,000 to $14,000 per year depending on qualifications. Candidates should presently have an M.S. degree from an accredited program and will be expected to enroll as full-time Ph.D. students in the College of Engineering. The appointment will also be contingent on meeting Graduate School requirements. We are especially interested in individuals who have experience with photorefractive materials, nonlinear optics and artificial neural networks. Wichita State University is an equal opportunity affirmative action employer.

Interested candidates should send a resume to:

Dr. Steven R. Skinner
Dept. of Electrical Engineering, #44
Wichita State University
Wichita, KS 67260-0044

The College of Engineering at Wichita State University is organized into four degree-granting departments: aerospace, electrical, industrial & manufacturing, and mechanical engineering. A Doctor of Philosophy (Ph.D.) is offered by each of the four departments of engineering. The National Institute for Aviation Research is also located on the campus of Wichita State. Wichita State is located in the City of Wichita - Kansas' largest business and industrial center - and is situated within 50 miles of 40 percent of Kansas industry. Wichita, one of the world's largest producers of aircraft through Boeing, Raytheon, Cessna and Learjet, is known as the "Air Capital of the World." For more information on Wichita State University College of Engineering see: http://www.ee.twsu.edu/coe/

''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
James E. Steck, Assistant Professor, (316)-689-3402
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

From Jean-Pierre.Nadal at tournesol.ens.fr Mon Apr 22 04:25:04 1996
From: Jean-Pierre.Nadal at tournesol.ens.fr (NADAL Jean-Pierre)
Date: Mon, 22 Apr 1996 10:25:04 +0200 (MET DST)
Subject: No subject
Message-ID: <199604220825.KAA11933@tournesol.ens.fr>

DYNAMICAL MODELING IN BIOTECHNOLOGY

Commission of the European Communities, Directorate General for Science, Research and Development. Biotechnology Programme, Advanced Workshops in Biotechnology.

May 27 to June 8, 1996
Institute for Scientific Interchange (ISI), Villa Gualino, Torino, Italy

Description: An intensive course for biologists at end-graduate and postgraduate level on concepts and methods of biological modeling. The course will include a 3-day computing literacy prelude, a series of lectures, and 6 modeling projects carried out under the supervision of specialized tutors.

Topics: Discrete models for simulating biological systems (P. Seiden, IBM New York), Qualitative theory and simulation of dynamical systems (A. Pikovsky, MPI, Potsdam), Monte-Carlo simulation of ageing (D. Stauffer, Koeln), Non-linear excitations and energy localisation (M. Peyrard, ENS, Lyon), Neural Networks (J.-P. Nadal, ENS, Paris), Self-organization and pattern formation in biochemical systems (R. Kapral, Toronto).

Projects: Gene expression, DNA/RNA sequence analysis and design, Cellular automata and immune system, Non-linear time-series analysis, Bacterial evolution, Models of ecosystems.

Application: Although mainly aimed at biologists (both from industrial and academic research), the course is also open to physicists, chemists, applied mathematicians and computer scientists from EU countries with interdisciplinary interests. Housing is provided at Villa Gualino. Submit your CV, a one-page description of your interests and how you think the course will facilitate long-term research/work goals, and a recommendation letter. Use e-mail if you can. No fees. Participants will be selected by the services of the Commission and the organisers. Some budget is reserved to support living expenses for students, especially from less-favoured European countries.

Send applications to Stefano Ruffo, Workshop Organizer, Dipartimento di Energetica, Universita' di Firenze,
Via s. Marta 3, 50139 Firenze, Italy, tel +39-55-4796344, fax +39-55-4796342, e-mail ruffo at ingfi1.ing.unifi.it (http://www.isi.it/dynamical)

From robtag at dia.unisa.it Tue Apr 23 07:04:27 1996
From: robtag at dia.unisa.it (Tagliaferri Roberto)
Date: Tue, 23 Apr 1996 13:04:27 +0200
Subject: Teaching Assistant positions available at IIASS
Message-ID: <9604231104.AA05791@udsab>

The International Institute for Advanced Scientific Studies "E.R. Caianiello" is holding a Master Course on "Advanced Information and Communication Technology". There are two Teaching Assistant positions available, starting from October-December 1996, for one year, on the following subjects:

- Pattern analysis and recognition
- Communication networks
- Parallel and distributed processing
- Advanced operating systems
- Robotics

It includes research activity in the area of neural nets and related fields. A Ph.D. or 3 years' experience in related areas is required. The salary is $1,300 per month. It is not a permanent position, but it can be renewed for some years.

Please send information and CVs to

Prof. Maria Marinaro
c/o IIASS
via Pellegrino, 19
84019 Vietri s/m (SA)
Italy
fax no. +39 89 761189

or to Dr. Roberto Tagliaferri, e-mail robtag at dia.unisa.it

From itb2 at psy.ox.ac.uk Tue Apr 23 10:10:52 1996
From: itb2 at psy.ox.ac.uk (Information Theory and the Brain II)
Date: Tue, 23 Apr 1996 15:10:52 +0100 (BST)
Subject: Call for papers: Information theory and the Brain II.
Message-ID: <199604231410.PAA02284@axp02.mrc-bbc.ox.ac.uk>

First call for papers: Information Theory and the Brain II

To be held on the 20-21st of September, Headland Hotel, Newquay, Cornwall, England.
http://www.mrc-bbc.ox.ac.uk/~itb2/conference.html

This is the sequel to the conference held in Stirling, Scotland last year. Presentations on any topic relating ideas from either information theory or statistics to the operation of the brain are welcomed. It is hoped that an informal atmosphere can be maintained in the pleasant surroundings that Newquay provides.

This year the conference will be held in the Cornish town of Newquay. Apart from offering some of the best surfing in Europe, Newquay is surrounded by countryside that is amongst the most beautiful in Britain. The conference will be held in the spectacular Headland Hotel right next to the famous Fistral Beach, and in mid September the water is at its warmest, the surf is starting to get larger, and the summer holiday crowds have headed home.

Organising Committee:
Roland Baddeley (Chair)
Nick Chater
Peter Foldiak
Peter Hancock
Bruno Olshausen
Dan Ruderman
Simon Schultz
Guy Wallis

Send short (less than one page) abstracts, and any requests for further information, either electronically to itb2 at psy.ox.ac.uk, or by surface mail to:

ITB2
c/o Roland Baddeley
Dept of Psychology, University of Oxford
Oxford, England OX1 3UD

Registration will be 40 pounds (about $60 U.S.), with participants expected to find their own accommodation. Prices range upwards from as low as 5 pounds for the most basic. Accommodation in the summer can be hard to find, but by the 20th most summer holidays have finished and the situation is much better. More information on accommodation can be found at the above-mentioned web page.
From ma_s435 at crystal.king.ac.uk Tue Apr 23 10:23:51 1996
From: ma_s435 at crystal.king.ac.uk (Dimitris Tsaptsinos)
Date: Tue, 23 Apr 1996 10:23:51 GMT0BST
Subject: EANN96 Conference
Message-ID: <6184177557@crystal.kingston.ac.uk>

INVITATION FOR PARTICIPATION AND PROGRAM OUTLINE

International Conference on Engineering Applications of Neural Networks (EANN '96)
King's College London, Strand campus, London, England
June 17--19, 1996

The International Conference on Engineering Applications of Neural Networks (EANN '96) is the second conference in the series. The conference is a forum for presenting the latest results on neural network applications in technical fields. 156 papers from over 20 countries have been accepted for oral presentation after a review of the abstracts.

Some more information on the conference EANN '96 is available on the world wide web site at http://www.lpac.ac.uk/EANN96, and on EANN '95 at http://www.abo.fi/~abulsari/EANN95.html

Conference secretariat
E-mail address: eann96 at lpac.ac.uk
Address: EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. Fax: +44 181 5477419

Organisers and co-sponsors:
Systems Engineering Association
IEEE UK Regional Interest Group on Neural Networks
London Parallel Applications Centre
Neural CCS Ltd.
IEE (British Institution of Electrical Engineers) Professional Group C4

Conference chairmen: Abhay Bulsari and Dimitris Tsaptsinos

Registration information

The conference fee is sterling (GBP) 360. The conference fee can be paid by a bank cheque or a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance to the conference and the proceedings. A registration form can be sent to you by e-mail, and you may return it by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid. For more information, please ask eann96 at lpac.ac.uk

The tentative program outline is as follows. The detailed program will be prepared in the end of April.

PROGRAM OUTLINE

International Conference on Engineering Applications of Neural Networks (EANN '96)

                 Room A                            Room B
Monday, 17 June
0800   Registration
0830   Opening
0845   Vision (1)                          Control Systems (1)
1200   --- lunch break ---
1330   Vision (2)                          Control Systems (2)
1630   Discussion session on Vision       Discussion session on Control

Tuesday, 18 June
0830   Biomedical Engineering             Mechanical Engineering
1200   --- lunch break ---
1330   Process Engineering                Robotics
1500   Chemical Engineering
1630   Discussion session on Chemical Engineering

Wednesday, 19 June
0830   Speech and signal processing       Metallurgical Engineering
1030   Classification systems             Discussion session on Metallurgy
1200   --- lunch break ---
1330   Hardware Applications              General Applications
1600   Hybrid systems
1800   Closing

The indicated times are approximate and changes are still possible.

From payman at ebs330.eb.uah.edu Wed Apr 24 19:12:20 1996
From: payman at ebs330.eb.uah.edu (Payman Arabshahi)
Date: Wed, 24 Apr 96 18:12:20 CDT
Subject: IEEE NNC launches web newsletter
Message-ID: <9604242312.AA20688@ebs330>

The IEEE Neural Networks Council announces the launching of its newsletter on the World Wide Web. "CONNECTIONS" (ISSN 1068-1450) will appear quarterly and will be the place for various NNC-related news of meetings, conferences, and events, as well as in-depth reports on NNC committees and their activities, book reviews, and a technology column overviewing the latest research trends in the field of computational intelligence.

Please visit our first issue on the web, in the newsletter section of the NNC homepage at http://www.ieee.org/nnc

Contents, New Series, Vol. 1, No. 1, Spring 1996:

NEWS
- NNC ExCom: Minutes of the Meeting of March 24, 1996 ...... K. Haines
- Report on CIFEr'96 ....................................... R. Golan
- Upcoming NNC events ...................................... Ed.
- NNC Awards ............................................... M. Hassoun
- Homepage news and overview

FOCUS
- Fuzzy Systems Technical Committee ........................ H. Berenji
- Council Personality Profiles ............................. Ed.
- New Book Review .......................................... Ed.
- NNC Regional Interest Group Report ....................... M. Y. Chow

TECHNOLOGY
- A Virtual Reality Interface to Complex Neural Network Software Simulations ...................... T.P. Caudell

--
Payman Arabshahi                                 Tel: (205) 895-6380
Dept. of Electrical & Computer Eng.              Fax: (205) 895-6803
University of Alabama in Huntsville              payman at ebs330.eb.uah.edu
Huntsville, AL 35899                             http://www.eb.uah.edu/ece

From N.Sharkey at dcs.shef.ac.uk Fri Apr 26 15:30:25 1996
From: N.Sharkey at dcs.shef.ac.uk (Noel Sharkey)
Date: Fri, 26 Apr 96 15:30:25 BST
Subject: 3 Research Studentships available
Message-ID: <9604261430.AA09596@dcs.shef.ac.uk>

PLEASE PASS ON TO ANY FINAL YEAR UNDERGRADUATES OR MASTERS WHO ARE SEEKING FUNDED PHD PLACES. Sorry if you receive this more than once.

3 RESEARCH STUDENTSHIPS IN NEURAL COMPUTING AND ROBOTICS
Department of Computer Science, University of Sheffield

Three funded PhD studentships are available, two from 1st August and one from the end of September, 1996. The first of these is restricted to British students only; the other two are for students from countries within the European Community.

1. Neural Computing. Projects on any topic within the field of neural computing will be considered. Two areas of particular interest are (a) improving the reliability of neural computing applications through the use of ensembles of nets and (b) cognitive modelling, transfer and interference.

2. Autonomous Mobile Robotics. The project would provide an ideal opportunity for a creative student to work on the development of ``intelligent'' behaviour on a mobile robot. Neural computing techniques have already been applied in our lab to develop a number of low-level behaviours on a Nomad200. The student would be expected to develop higher-level behavioural control. There are a number of different approaches that could be taken: for example, using representations developed at the lower levels to induce higher-level behaviours, or using a human and animal developmental paradigm. However, nothing is set in stone for this project, and a good proposal will go a long way.

3. Pharmaceutical Robotics. Aim: the development of a neural computing system for coordinating robot arms in the task of mixing dangerous drugs. This project is in collaboration with the Pharmacy Unit at the Northern General Hospital. Their problem is that they currently employ more than twenty highly-qualified specialist staff to spend a large part of their day involved in the rather tedious task of mixing drugs.
Since many of the drugs are very dangerous to humans (such as anti-cancer drugs), much of the work has to take place inside a sealed glass case that is accessed by attached gloves (a glove box). The solution is to put robot arms into the cases and let them do most of the work. It should be noted that this is a research project and offers a number of interesting robotics problems. The student would not be expected to develop a commercial system.

Further information about neural computing within the Artificial Intelligence and Neural Networks (AINN) research group can be viewed on WWW: http://www.dcs.shef.ac.uk/research/groups/nn (This will not be ready to view until Wednesday, 1st May.)

Application forms may be obtained from our PhD admissions secretary Jill Martin, jill at dcs.shef.ac.uk, or write to Ms J. Martin, Department of Computer Science, 211 Portabello St., Sheffield, S1 4DP, S. Yorks, UK. Forms should be accompanied by a short proposal (less than a page) about what the applicant would like to work on, but this does not commit the applicant.

From search at idiap.ch Mon Apr 29 07:51:30 1996
From: search at idiap.ch (Account for applications)
Date: Mon, 29 Apr 1996 13:51:30 +0200 (MET DST)
Subject: Director position available
Message-ID: <199604291151.NAA21646@midi.idiap.ch>

The Dalle Molle Institute for Perceptive Artificial Intelligence is opening the position of

DIRECTOR OF THE INSTITUTE

The Dalle Molle Institute for Perceptive Artificial Intelligence (IDIAP) is a private non-profit research institute, founded in 1991 and located in Martigny, Valais, Switzerland. Today, it consists of more than twenty staff members working in the following fields:

* Automatic Speech Processing (spoken language understanding, speaker verification/identification),
* Artificial Neural Networks (design, training, applications and optical implementation),
* Machine Vision (handwriting recognition, lip reading).

IDIAP is part of the Dalle Molle Foundation for the Quality of Life and is supported by public-sector partners (City of Martigny, Canton of Valais and Swiss Confederation). The institute has privileged relationships with Geneva University, the Swiss Federal Institute of Technology at Lausanne and the Telecom-PTT. Additional information about IDIAP is available on our WWW page: "http://www.idiap.ch".

The position of director of this institute is presently vacant. Candidates should possess a Ph.D. in computer science or a related area. They must have an outstanding research record and proven excellence in leadership, both on the scientific and administrative levels. Alignment of personal research interests with the current research domains of the institute would be an asset. They will be responsible for defining the research policy of the institute and will have to get involved in maintaining the role of IDIAP in both local and national research policy. An excellent mastery of French would be a plus.

Salaries will be in accordance with those offered by the Swiss government for equivalent positions. The position of director is available as soon as possible, but not before September 1996. The duration and renewal of contracts is subject to negotiation.

A few words about the geographic location: the town of Martigny is located in the south of Switzerland, in the Rhone valley, close to both France and Italy.
From search at idiap.ch Mon Apr 29 07:51:30 1996
From: search at idiap.ch (Account for applications)
Date: Mon, 29 Apr 1996 13:51:30 +0200 (MET DST)
Subject: Director position available
Message-ID: <199604291151.NAA21646@midi.idiap.ch>

The Dalle Molle Institute for Perceptive Artificial Intelligence is opening the position of

DIRECTOR OF THE INSTITUTE

The Dalle Molle Institute for Perceptive Artificial Intelligence (IDIAP) is a private non-profit research institute, founded in 1991 and located in Martigny, Valais, Switzerland. Today it consists of more than twenty staff members working in the following fields:

* Automatic Speech Processing (spoken language understanding, speaker verification/identification),
* Artificial Neural Networks (design, training, applications and optical implementation),
* Machine Vision (handwriting recognition, lip reading).

IDIAP is part of the Dalle Molle Foundation for the Quality of Life and is supported by public-sector partners (City of Martigny, Canton of Valais and Swiss Confederation). The institute has privileged relationships with Geneva University, the Swiss Federal Institute of Technology at Lausanne and the Telecom-PTT. Additional information about IDIAP is available on our WWW page: "http://www.idiap.ch".

The position of director of this institute is presently vacant. Candidates should possess a Ph.D. in computer science or a related area. They must have an outstanding research record and proven excellence in leadership, at both the scientific and administrative levels. Alignment of personal research interests with the current research domains of the institute would be an asset. The director will be responsible for defining the research policy of the institute and will be expected to take an active part in maintaining the role of IDIAP in both local and national research policy. An excellent mastery of French would be a plus.

Salaries will be in accordance with those offered by the Swiss government for equivalent positions. The position of director is available as soon as possible, but not before September 1996. The duration and renewal of the contract are subject to negotiation.

A few words about the geographic location: the town of Martigny is located in the south of Switzerland, in the Rhone valley, close to both France and Italy. It lies in the heart of the scenic Alpine region east of Lake Geneva, with some of the best skiing and hiking in Europe. Martigny, well connected by rail and highway to the rest of Switzerland, offers a large variety of cultural and artistic activities.

To apply for this position, please send the following before June 15, 1996 by electronic mail to "search at idiap.ch" (in plain ASCII, TeX, FrameMaker-MIF, Word-RTF or PostScript):

* a curriculum vitae,
* a list of publications,
* a description of the research program that the candidate wishes to pursue,
* the names and addresses of three personal references.

A paper application or a request for further information can be sent to

Secretariat IDIAP
CP 592
Rue du Simplon 4
CH-1920 Martigny
Switzerland

Phone: +41 26 21 77 11
Fax: +41 26 21 77 12
From timxb at pax.Colorado.EDU Mon Apr 29 12:34:57 1996
From: timxb at pax.Colorado.EDU (Brown Tim)
Date: Mon, 29 Apr 1996 10:34:57 -0600
Subject: Research Assistantships at University of Colorado at Boulder
Message-ID: <199604291634.KAA03073@pax.Colorado.EDU>

It's not too late to apply.

University of Colorado at Boulder

The Department of Electrical and Computer Engineering at CU Boulder has one and possibly more graduate research assistantships available for work in the following areas:

-------- Adaptive Control of Broadband Communication Networks --------

Modern data traffic sources are characterized by many features that are challenging from a network control standpoint: queueing properties that are difficult to model and analyze; inter-source correlations; slowly varying statistical properties; diverse heterogeneous source types; misspecified traffic descriptors; and inadvertent network traffic shaping. We seek to address problems such as resource allocation, routing, provisioning, and network design for such traffic using statistical classification techniques that learn, from historical data, which combinations of carried traffic were acceptable and which were not (a toy sketch of this follows the posting). Technical problems range over efficient representation and storage of historical data; the choice and modification strategy of the classifier model; noise on the data; confidence intervals on decisions; directed and undirected exploration of the decision space; filtering out uninformative data; and implementations.

-------- Low-Power Neural Network Architectures for Wireless --------

Analog neural networks have demonstrated signal processing at power dissipations orders of magnitude below comparable digital implementations. This is promising for battery-limited mobile and wireless applications. Key to harnessing this potential is overcoming the noise, dynamic range, and precision limitations inherent in analog processing. We seek to improve the scope and robustness of neural algorithms and architectures, including techniques for mapping software neural solutions into non-ideal hardware; robust algorithms for learning directly in hardware; and neural design methodologies beyond Hopfield energy functions for optimization problems. Research will be guided by mobile signal processing applications such as equalization, vector quantization, and adaptive filtering. Opportunities exist for research in algorithms, architectures, and hardware implementations.

-------- Design Optimization for Low Power Communication --------

Communication systems are designed by separately optimizing components such as RF front ends, error-correcting codes, and diversity strategies. The goals of the individual designs (such as designing for capacity in error-correcting codes) may not always match the global objective, and this approach ignores the coupling between design choices in each module. Simple yet non-intuitive examples show that dramatic power reductions are possible with a holistic design. We seek to address this at two levels. As a static design problem, formulating an objective function is conceptually straightforward, but the wide variety of continuous/discrete, linear/nonlinear, and deterministic/stochastic constraints and variables requires that conventional techniques be supplemented by more robust methods such as genetic algorithms. At a dynamic level, the communication design need not be static: flexible DSP hardware allows different strategies to be tried as a function of the current communication environment (e.g. power control). Technical problems include formalizing a design problem that crosses many research domains; optimization over multiple variable types; and methods for making decisions under uncertainty and incomplete knowledge in dynamic environments.

For more info contact Prof. Tim Brown, timxb at colorado.edu, (303) 492-1630
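One much-simplified reading of the classification approach in the first project above: store historical combinations of carried traffic with an acceptable/unacceptable label, and admit a new traffic mix only if it resembles acceptable history. The feature encoding, the data, and the nearest-neighbour rule are all invented for illustration:

    import numpy as np

    # Hypothetical history: each row summarizes a combination of carried
    # traffic (e.g. counts of two source types), labelled by whether
    # quality of service was acceptable (1) or not (0).
    history = np.array([[10, 2], [20, 1], [5, 5], [30, 0], [8, 6], [25, 3]])
    acceptable = np.array([1, 1, 1, 0, 0, 0])

    def admit(candidate, k=3):
        """Admit a new traffic mix if most of its k nearest historical
        neighbours were acceptable."""
        dists = np.linalg.norm(history - candidate, axis=1)
        nearest = acceptable[np.argsort(dists)[:k]]
        return nearest.mean() > 0.5

    print(admit(np.array([12, 2])))  # close to acceptable examples -> True

The research problems listed in the posting (storage, exploration, confidence, noise) are exactly the points where a toy rule like this breaks down.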
From zoubin at cs.toronto.edu Mon Apr 29 15:00:27 1996
From: zoubin at cs.toronto.edu (Zoubin Ghahramani)
Date: Mon, 29 Apr 1996 15:00:27 -0400
Subject: Thesis on sensorimotor integration available
Message-ID: <96Apr29.150027edt.985@neuron.ai.toronto.edu>

The following PhD thesis is available at
http://www.cs.utoronto.ca/~zoubin/
or
ftp://ftp.cs.toronto.edu/pub/zoubin/thesis.ps.Z

----------------------------------------------------------------------
Computation and Psychophysics of Sensorimotor Integration

Zoubin Ghahramani
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology

ABSTRACT

All higher organisms are able to integrate information from multiple sensory modalities and use this information to select and guide movements. To do this, the central nervous system (CNS) must solve two problems: (1) converting information from distinct sensory representations into a common coordinate system, and (2) integrating this information in a sensible way. This dissertation proposes a computational framework, based on statistics and information theory, to study these two problems. The framework suggests explicit models for both the coordinate transformation and integration problems, which are tested through human psychophysics.

The experiments in Chapter 2 suggest that: (1) spatial information from the visual and auditory systems is integrated so as to minimize the variance in localization; and (2) when the relation between visual and auditory space is artificially remapped, the spatial pattern of auditory adaptation can be predicted from its localization variance. These studies suggest that multisensory integration and intersensory adaptation are closely related through the principle of minimizing localization variance. This principle is used to model sensorimotor integration of proprioceptive and motor signals during arm movements (Chapter 3). The temporal propagation of errors in estimating the hand's state is captured by the model, providing support for the existence of an internal model in the CNS that simulates the dynamic behavior of the arm.

The coordinate transformation problem is examined in the visuomotor system, which mediates reaching to visually perceived objects (Chapter 4). The pattern of changes induced by a local remapping of this transformation suggests a representation based on units with large functional receptive fields. Finally, the problem of converting information from disparate sensory representations into a common coordinate system is addressed computationally (Chapter 5). An unsupervised learning algorithm is proposed based on the principle of maximizing mutual information between two topographic maps. The result is an algorithm that develops multiple, mutually aligned topographic maps based purely on correlations between the inputs to the different sensory modalities.

(212 pages, 2.6 Mb, formatted for double-sided printing)
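The minimum-variance integration principle of Chapters 2 and 3 has a standard closed form for two unbiased cues with independent Gaussian noise: weight each cue by its inverse variance. A small numerical sketch with illustrative values only (the variable names are invented, not taken from the thesis):

    # Two noisy location estimates (e.g. visual and auditory), assumed
    # unbiased with known variances -- illustrative numbers.
    x_vis, var_vis = 10.0, 1.0
    x_aud, var_aud = 14.0, 4.0

    # Weighting each cue by its inverse variance minimizes the variance
    # of the combined estimate.
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_aud
    var_hat = 1 / (1 / var_vis + 1 / var_aud)

    print(x_hat, var_hat)  # 10.8, 0.8

The combined variance (0.8) is lower than that of either cue alone, which is why integration by this rule is advantageous.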
From stefano at kant.irmkant.rm.cnr.it Tue Apr 30 11:21:36 1996
From: stefano at kant.irmkant.rm.cnr.it (Stefano Nolfi)
Date: Tue, 30 Apr 1996 15:21:36 GMT
Subject: paper available: Evolving non-trivial behaviors on real robots..
Message-ID: <9604301521.AA15270@kant.irmkant.rm.cnr.it>

Paper available via WWW / FTP:

Keywords: Evolutionary Robotics, Behavior Based Robotics, Adaptive Behaviors, Neural Networks, Genetic Algorithms.

------------------------------------------------------------------------------
EVOLVING NON-TRIVIAL BEHAVIORS ON REAL ROBOTS: A GARBAGE COLLECTING ROBOT

Stefano Nolfi
Institute of Psychology, C.N.R., Rome.

Recently, a new approach involving a form of simulated evolution has been proposed for building autonomous robots. However, it is still not clear whether this approach is adequate for real-life problems. In this paper we show how control systems that perform a non-trivial sequence of behaviors can be obtained with this methodology by "canalizing" the evolutionary process in the right direction. In the experiment described in the paper, a mobile robot was successfully trained to keep an arena surrounded by walls clear by locating, recognizing, and grasping "garbage" objects and by taking the collected objects outside the arena. The controller of the robot was evolved in simulation and then downloaded to and tested on the real robot. We also show that while a given amount of supervision may canalize the evolutionary process in the right direction, the addition of unnecessary constraints can delay the evolution of the desired behavior.

http://kant.irmkant.rm.cnr.it/public.html
or
ftp-server: kant.irmkant.rm.cnr.it (150.146.7.5)
ftp-file: /pub/econets/nolfi.gripper2.ps.Z

For the homepage of our research group, with most of our publications available online and pointers to ALIFE resources, see:
http://kant.irmkant.rm.cnr.it/gral.html

----------------------------------------------------------------------------
Stefano Nolfi
Institute of Psychology, C.N.R.
Viale Marx, 15 - 00137 - Rome - Italy
voice: 0039-6-86090231
fax: 0039-6-824737
e-mail: stefano at kant.irmkant.rm.cnr.it
www: http://kant.irmkant.rm.cnr.it/nolfi.html
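The evolutionary scheme in the abstract above can be caricatured in a few lines: keep a population of controller parameter vectors, score each by a task-specific fitness, and breed mutated copies of the fittest. Everything below, the fitness function included, is a toy stand-in rather than the paper's actual setup, where fitness comes from episodes of simulated robot behavior:

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(w):
        # Toy stand-in for scoring a controller on the task: reward
        # weight vectors close to some unknown target controller.
        target = np.array([0.5, -0.3, 0.8])
        return -np.sum((w - target) ** 2)

    pop = rng.standard_normal((20, 3))            # 20 candidate controllers
    for generation in range(50):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-5:]]    # keep the 5 fittest
        children = np.repeat(parents, 4, axis=0)  # 5 parents x 4 copies
        pop = children + 0.05 * rng.standard_normal(children.shape)

    print(pop[np.argmax([fitness(w) for w in pop])])  # near [0.5, -0.3, 0.8]

"Canalizing" the process, in the paper's sense, amounts to shaping the fitness function and constraints so that selection pressure points toward the desired behavioral sequence.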
From te at psyc.nott.ac.uk Tue Apr 30 09:17:29 1996
From: te at psyc.nott.ac.uk (Terry Elliott)
Date: Tue, 30 Apr 1996 14:17:29 +0100 (BST)
Subject: Research Studentship Available
Message-ID: 

****** PLEASE POST ********* PLEASE POST ********* PLEASE POST ******

Departments of Psychology and Life Science
(University of Nottingham, Nottingham, U.K.)

Research Studentship in Neuroscience

A postgraduate studentship leading to a Ph.D. degree is available from September, 1996 in the area of neural development and plasticity, under the supervision of Professors Shadbolt (Psychology) and Usherwood (Life Science). The research will test predictions derived from recent computer models of neural plasticity. Candidates should have a good first degree in a relevant discipline. Informal enquiries may be addressed to Professor Nigel Shadbolt (tel.: +44 (0)115 951 5317; e-mail: nrs at psyc.nott.ac.uk).

Candidates should send a detailed C.V. giving the names of two referees to: Mrs Jeannie Tuck, Postgraduate School, Department of Psychology, University of Nottingham, Nottingham, NG7 2RD, U.K. The closing date is 30th May, 1996.

From dnoelle at cs.ucsd.edu Tue Apr 30 15:52:02 1996
From: dnoelle at cs.ucsd.edu (David Noelle)
Date: Tue, 30 Apr 96 12:52:02 -0700
Subject: CogSci96 - May 1st Is Early Registration Deadline
Message-ID: <9604301952.AA17084@hilbert>

Eighteenth Annual Conference of the
COGNITIVE SCIENCE SOCIETY
July 12-15, 1996
University of California, San Diego
La Jolla, California

CALL FOR PARTICIPATION

The Annual Cognitive Science Conference began with the La Jolla Conference on Cognitive Science in August of 1979. The organizing committee of the Eighteenth Annual Conference would like to welcome members home to La Jolla. We plan to recapture the pioneering spirit of the original conference, extending our welcome to fields on the expanding frontier of Cognitive Science, including Artificial Life, Cognitive and Computational Neuroscience, and Evolutionary Psychology, as well as the core areas of Anthropology, Computer Science, Linguistics, Neuroscience, Philosophy, and Psychology. The conference will feature plenary addresses by invited speakers, invited symposia by leaders in their fields, technical paper sessions, a poster session, a banquet, and a Blues Party.

San Diego is the home of the world-famous San Diego Zoo and Wild Animal Park, Sea World, the historic all-wooden Hotel Del Coronado, beautiful beaches, mountain areas and deserts; it is a short drive from Mexico, and it features a high Cappuccino Index. Bring the whole family and stay a while!

PLENARY SESSIONS

"Controversies in Cognitive Science: The Case of Language"
Stephen Crain (UMD College Park) & Mark Seidenberg (USC)
Moderated by Paul Smolensky (Johns Hopkins University)

"Tenth Anniversary of the PDP Books"
Geoff Hinton (Toronto), Jay McClelland (CMU), & Dave Rumelhart (Stanford)

"Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action"
Adele Diamond (MIT)

"Reconstructing Consciousness"
Paul Churchland (UCSD)

TRAVEL & ACCOMMODATIONS

United Airlines is the official airline of the 1996 Cognitive Science Conference. Attendees flying with United can receive a 5% discount off any published United or United Express round-trip fare to San Diego in effect when the ticket is purchased, subject to all applicable restrictions. Attendees flying with United can receive a 10% discount off applicable BUA fares in effect when the ticket is purchased 7 days in advance. To get your discount, be sure to give your travel agent the following information:

* "Meeting ID# 557NS for the Cognitive Science Society Meeting"
* United's Meeting Desk phone number is (800) 521-4041.

Alternatively, you may order your tickets directly from United's Meeting Desk, using the same reference information as above. Purchasers of United tickets to the conference will be eligible for a drawing (to be held at the conference) in which two round-trip tickets will be given away -- so don't throw away your boarding pass!

If you are flying to San Diego, you will be arriving at Lindbergh Field. If you don't rent a car, transportation from the airport to the UCSD area will cost (not including tip) anywhere from $15.00 (for a seat on a shuttle/van) to $35.00 (for a taxi).
We have arranged for special rates at two of the hotels nearest to the UCSD campus. In addition, on-campus apartments can be rented at less expense. All rooms are subject to availability, and hotel rates are only guaranteed up to the dates specified, so reserve early. None of the rates quoted below (unless explicitly stated) includes tax, which is currently 10.5 percent.

The La Jolla Marriott is located approximately 2 miles from campus. Single and double rooms are available at $92.00 per night when reserved before June 21st. Included in the rate is morning and evening shuttle service to and from campus (running for one-hour periods on July 13th, 14th, and 15th only). The hotel has parking spaces, available at $7 per day, or $10 per day with valet service. On-campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. There is also city bus service (fare is about $1.50 per ride) to and from campus, which passes within one block of the hotel. Reservations can be made by calling the hotel at (619) 587-1414 or (800) 228-9290. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates. Arrival after 6:00 P.M. requires a first night's deposit or a guarantee with a major credit card.

The La Jolla Radisson is located approximately 1/2 mile from campus. Single and double rooms are available at $75.00 per night when reserved before June 12th. Included in the rate is morning and evening shuttle service to and from campus, although walking is also very feasible. Parking is available and complimentary. On-campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. The first night's room charge (plus tax) is due by June 12th. Reservations can be made by calling Radisson Reservations at (800) 333-3333. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates.

There are a limited number of on-campus apartments available for reservation as a 4-night package, from July 12th through July 16th. Included is a (mandatory) meal plan: cafeteria breakfast (4 days) and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) or $227 per person (single occupancy, including tax). (Checking in a day early is $45 extra for a single room or $36 for a double.) On-campus parking is complimentary with this package. These apartments may be reserved using the conference registration form.

REGISTRATION INFORMATION

There are three ways to register for the 1996 Cognitive Science Conference:

* ONLINE REGISTRATION -- You may fill out and electronically submit the online registration form, which may be found on the conference web page at "http://www.cse.ucsd.edu/events/cogsci96/". This is the preferred method of registration. (You must pay registration fees with a Visa or MasterCard in order to use this option.)

* EMAIL REGISTRATION -- You may fill out the plain text (ASCII) registration form, which appears below, and send it via electronic mail to "cogsci96reg at cs.ucsd.edu". (You must pay registration fees with a Visa or MasterCard in order to use this option.)
* POSTAL REGISTRATION -- You may download a copy of the PostScript registration form from the conference home page (or extract the plain text version, below), print it on a PostScript printer, fill it out with a pen, and send it via postal mail to:

    CogSci'96 Conference Registration
    Cognitive Science Department - 0515
    University of California, San Diego
    9500 Gilman Drive
    La Jolla, CA 92093-0515

(Under this option, you may enclose payment of registration fees in U.S. dollars in the form of a check or money order, or you may pay these fees with a Visa or MasterCard. Please make checks payable to: The Regents of the University of California.)

For more information, visit the conference web page at "http://www.cse.ucsd.edu/events/cogsci96". Please direct questions and comments to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776.

Edwin Hutchins and Walter Savitch, Conference Chairs
John D. Batali, Local Arrangements Chair
Garrison W. Cottrell, Program Chair

======================================================================
PLAIN TEXT REGISTRATION FORM
======================================================================

Cognitive Science 1996 Registration Form
----------------------------------------

Your Full Name : _____________________________________________________

Your Postal Address : ________________________________________________
(including zip/postal ________________________________________________
code and country)     ________________________________________________
                      ________________________________________________

Your Telephone Number (Voice) : ______________________________________

Your Telephone Number (Fax) :   ______________________________________

Your Internet Electronic Mail Address (e.g., dnoelle at cs.ucsd.edu) :
______________________________________________________________________

REGISTRATION FEES :

Please select the appropriate registration option from the menu below by placing an "X" in the corresponding blank on the left. Note that the Cognitive Science Society is offering a special deal to individuals who opt to join the Society simultaneously with conference registration. The "New Member" package includes conference fees and first year's membership dues for only $10 more than the nonmember conference cost. Registration fees received after May 1st are $20 higher ($10 higher for students) than fees received before May 1st. Be sure to register early to take advantage of the lower fee rates.

_____ Registration, Member -- $120 ($140 after May 1st)
_____ Registration, Nonmember -- $145 ($165 after May 1st)
_____ Registration, New Member -- $155 ($175 after May 1st)
_____ Registration, Student Member -- $85 ($95 after May 1st)
_____ Registration, Student Nonmember -- $100 ($110 after May 1st)
_____ Registration, New Student Member -- $115 ($125 after May 1st)

CONFERENCE BANQUET :

Tickets to the conference banquet are *not* included in the registration fees, above. Banquet tickets are $35 per person. (You may bring guests.)

Number Of Banquet Tickets Desired ($35 each): _____
_____ Omnivorous  _____ Vegetarian

CONFERENCE SHIRTS :

Conference T-Shirts are *not* included in the registration fees, above. These are $10 each.

Number Of T-Shirts Desired ($10 each): _____

UCSD ON-CAMPUS APARTMENTS :

There are a limited number of on-campus apartments available for reservation as a 4-night package, from July 12th through July 16th. Included is a (mandatory) meal plan: cafeteria breakfast (4 days) and lunch (3 days).
The total cost is $191 per person (double occupancy, including tax) or $227 per person (single occupancy, including tax). (Checking in a day early is $45 extra for a single room or $36 for a double.) On-campus parking is complimentary with this package. Off-campus accommodations in local hotels are also available, but you will need to make reservations by contacting the hotel of interest directly. If you will be staying off-campus, please skip this portion of the registration form. On-campus housing reservations must be received by May 1st, 1996. Please include the cost of on-campus housing in the total conference cost listed at the bottom of this form.

Select the housing plan desired by placing an "X" in the appropriate blank on the left:

_____ UCSD Housing and Meal Plan (Single Room) -- $227 per person
_____ UCSD Housing and Meal Plan (Double Room) -- $191 per person

Arrival Date And Time :   ____________________________________________
Departure Date And Time : ____________________________________________

If you reserved a double room above, please indicate your roommate preference below:

_____ Please assign a roommate to me. I am _____ female _____ male.

_____ I will be sharing this room with a guest who is not registered for the conference. I will include $382 ($191 times 2) in the total conference cost listed at the bottom of this form.

_____ I will be sharing this room with another conference attendee. I will include $191 in the total conference cost listed at the bottom of this form. My roommate will submit her housing fee along with her registration form. My roommate's full name is:
______________________________________________________________

ASL TRANSLATION :

American Sign Language (ASL) translators will be available for a number of conference events. The number of translated events will be, in part, a function of the number of participants in need of this service. Please indicate below if you will require ASL translation of conference talks.

_____ I will require ASL translation.

Comments To The Registration Staff :
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________

Please sum your conference registration fees, the cost of banquet tickets and t-shirts, and on-campus housing costs, and place the total below. To register by electronic mail, payment must be by Visa or MasterCard only.

TOTAL : _$____________

Bill to: _____ Visa  _____ MasterCard

Number : ___________________________________________
Expiration Date: ___________________________________

Registration fees (including on-campus housing costs) will be fully refunded if cancellation is requested prior to May 1st. If registration is cancelled between May 1st and June 1st, 20% of paid fees will be retained by the Society to cover processing costs. No refunds will be granted after June 1st.

When complete, send this form via email to "cogsci96reg at cs.ucsd.edu". Please direct questions to "cogsci96 at cs.ucsd.edu", (619) 534-6773, or (619) 534-6776.

======================================================================
PLAIN TEXT REGISTRATION FORM
======================================================================
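As a worked example of the total requested at the bottom of the form, using the figures quoted in the announcement for a hypothetical student member who registers before May 1st with one banquet ticket, one t-shirt, and a shared on-campus room:

    # Hypothetical attendee profile; all rates are taken from the form above.
    registration = 85    # student member, early rate
    banquet = 1 * 35     # one banquet ticket
    shirts = 1 * 10      # one t-shirt
    housing = 191        # double-room package, per person, tax included

    total = registration + banquet + shirts + housing
    print(total)  # 321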