From harry at brain.Jpl.Nasa.Gov Mon Oct 2 14:05:52 1995 From: harry at brain.Jpl.Nasa.Gov (Harry Langenbacher) Date: Mon, 2 Oct 1995 11:05:52 -0700 Subject: Job opening at JPL Message-ID: <199510021805.LAA22519@brain.jpl.nasa.gov> JOB OPPORTUNITIES NEURAL NETWORK / FUZZY LOGIC SYSTEM DESIGN involving VLSI HARDWARE and SOFTWARE DEVELOPMENT at the Jet Propulsion Laboratory, Pasadena CA, USA Requires: PhD (EE or Computer Science) and 2 or more years' experience for a position as Member of Technical Staff. PhD (EE or Computer Science) for post-doctoral position. Areas of specialization, expertise, knowledge, and interests: All aspects of analog and digital VLSI and system design. Knowledgeable in new computing paradigms such as neural networks, fuzzy logic, genetic algorithms, etc. Skilled with computer hardware and software development. Interest in innovation. Skills Sought: Experience with VLSI circuit design & layout tools such as Spice and Magic. Expertise in C, Unix, X, MSDOS, MS-Windows, etc. Knowledge of computer interface techniques and hardware. Familiarity with computer graphics, and some knowledge of conventional signal/image processing techniques and algorithms. The Job: Development and integration of state-of-the-art full-custom/ASIC VLSI concurrent processing architectures for real-time sensor signal processing, pattern recognition, data fusion, etc. Contact: e-mail (preferred), or FAX, mail, or phone to harry at jpl.nasa.gov Harry Langenbacher JPL Mail-stop 302-231 4800 Oak Grove Dr Pasadena CA 91109 USA Jet Propulsion Laboratory Concurrent Processing Devices Group, Phone: 818-354-9513 FAX: 818-393-4540  From ken at phy.ucsf.edu Mon Oct 2 20:45:24 1995 From: ken at phy.ucsf.edu (Ken Miller) Date: Mon, 2 Oct 1995 17:45:24 -0700 Subject: paper available: modeling joint development of ocular dominance and orientation maps Message-ID: <9510030045.AA04586@coltrane.ucsf.edu> FTP-host: phy.ucsf.edu FTP-filename: /pub/erwin/CNS95proc.ps.Z URL: ftp://phy.ucsf.edu/pub/erwin/CNS95proc.ps.Z The following paper is now available by anonymous ftp, from the above addresses, or from my or Ed Erwin's home pages (addresses below). Sorry, hard copies are not available. Modeling Joint Development of Ocular Dominance and Orientation Maps in Primary Visual Cortex by Ed Erwin and Kenneth D. Miller To appear in the Proceedings of the Computation and Neural Systems (CNS) 1995 conference, Monterey. (In press) ABSTRACT: We have combined earlier correlation-based models of striate ocular dominance and orientation preference map formation into a joint model. Cortical feature preferences are defined through patterns of synaptic connectivity to LGN cells which develop due to firing correlations of those LGN cells. Model parameters include spatial correlation patterns between ON- and OFF-center cells in separate eye layers of the LGN. A linear transformation yields correlation functions which predict whether orientation preferences, ocular dominance, or both, will develop. The model thus predicts the correlations between LGN cells which would be necessary to explain formation of visual maps by a linear process. Kenneth D. Miller www: http://keck.ucsf.edu/~ken internet: ken at phy.ucsf.edu Ed Erwin www: http://keck.ucsf.edu/~erwin internet: erwin at phy.ucsf.edu Both: Dept.
of Physiology University of California, San Francisco 513 Parnassus San Francisco, CA 94143-0444 fax: (415) 476-4929  From watrous at scr.siemens.com Tue Oct 3 14:11:45 1995 From: watrous at scr.siemens.com (Raymond L Watrous) Date: Tue, 3 Oct 1995 14:11:45 -0400 Subject: paper available on patient-adaptive ECG classification Message-ID: <199510031811.OAA04375@tiercel.scr.siemens.com> FTP-HOST: scr.siemens.com FTP-filename: /pub/learning/Papers/watrous/cic_95.ps.Z The following paper (4 pages, 3 figures) is now available via anonymous ftp: A Patient-Adaptive Neural Network ECG Patient Monitoring Algorithm Raymond Watrous, Geoffrey Towell Siemens Corporate Research 755 College Road East Princeton, NJ 08540 Abstract A new, patient-adaptive ECG Patient Monitoring algorithm is described. The algorithm combines a patient-independent neural network classifier with a three-parameter patient model. The patient model is used to modulate the patient-independent classifier via multiplicative connections. Adaptation is carried out by gradient descent in the patient model parameter space. The patient-adaptive classifier was compared with a well-established baseline algorithm on six major databases, consisting of over 3 million heartbeats. When trained on an initial 77 records and tested on an additional 382 records, the patient-adaptive algorithm was found to reduce the number of Vn errors on one channel by a factor of 5, and the number of Nv errors by a factor of 10. We conclude that patient adaptation provides a significant advance in classifying normal vs. ventricular beats for ECG Patient Monitoring. +=+=+= The paper will appear in the proceedings of Computers in Cardiology, September 10-13, 1995, Vienna, Austria. We regret that we are unable to provide hard copies. Raymond Watrous +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ Learning Systems Department Phone: (609) 734-6596 Siemens Corporate Research FAX: (609) 734-6565 755 College Road East Princeton, NJ 08540 watrous at learning.scr.siemens.com  From jordan at psyche.mit.edu Tue Oct 3 18:22:02 1995 From: jordan at psyche.mit.edu (Michael Jordan) Date: Tue, 3 Oct 95 18:22:02 EDT Subject: workshop announcement Message-ID: This is an announcement of a post-NIPS workshop on Learning in Bayesian Belief Networks and Other Graphical Models. Bayesian belief networks are probabilistic graphs that have interesting relationships to neural networks. Undirected belief networks are closely related to Boltzmann machines. Directed belief networks (the more popular variety) are related to feedforward neural networks, but have a stronger probabilistic semantics. Many interesting probabilistic models, including HMM's, Kalman filters, mixture models, factor analytic models, etc., can be viewed as special cases of belief networks. In the area of inference (i.e., the calculation of posterior probabilities of certain nodes given that other nodes are clamped), the research on belief networks is quite mature. The inference algorithms provide a clean probabilistic framework for e.g., inverting a network, calculating posterior probabilities of hidden nodes, calculating most probable configurations, etc. In the area of learning, there have been interesting developments in the area of structural learning (deciding which links and which nodes to include in the graph) and learning in the presence of hidden variables. 
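To make the inference problem mentioned above concrete, here is a minimal Python sketch of posterior computation by brute-force enumeration in a three-node directed belief network. The network (Rain -> WetGrass <- Sprinkler) and all of its probability tables are invented for illustration; this is not drawn from any workshop presentation.

p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.1, False: 0.9}
p_wet_given = {  # keyed by (sprinkler, rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(r, s, w):
    # Probability of one complete assignment under the factored model.
    pw = p_wet_given[(s, r)]
    return p_rain[r] * p_sprinkler[s] * (pw if w else 1.0 - pw)

# Clamp WetGrass = True, sum out the unobserved Sprinkler, normalize over Rain.
unnorm = {r: sum(joint(r, s, True) for s in (True, False)) for r in (True, False)}
z = sum(unnorm.values())
posterior = {r: p / z for r, p in unnorm.items()}
print(posterior)  # P(Rain=True | WetGrass=True) is about 0.69

Enumeration like this is exponential in the number of unobserved variables; the local-computation algorithms cited in the bibliography below exploit graph structure precisely to avoid that blow-up.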
The organizing committee for the workshop includes: Wray Buntine, Greg Cooper, Dan Geiger, David Heckerman, Geoffrey Hinton, Mike Jordan, Steffen Lauritzen, David Mackay, David Madigan, Radford Neal, Steve Omohundro, Judea Pearl, Stuart Russell, Peter Spirtes, and Ross Shachter. Many of these people will be giving presentations at the workshop. A short bibliography follows for those who might like to read up on Bayesian belief networks in anticipation of the workshop. Mike Jordan ------------------ Short Bibliography ------------------ The list below provides a few useful references, with an emphasis on recent review papers, tutorials, and textbooks. The list is not meant to be comprehensive along any dimension... Many additional pointers to the literature can be found on the Uncertainty in Artificial Intelligence homepage; see http://www.auai.org. If I had to pick two papers that I would most recommend for someone wanting to get up to speed quickly on belief networks, I'd recommend the Spiegelhalter et al. paper and the Heckerman tutorial. Mike ----------------------- A good place to start to learn about the most popular algorithm for general inference in belief networks, as well as some of the basics on learning: Spiegelhalter, D. J., Dawid, A. P., Lauritzen, S. L., & Cowell, R. G. (1993). Bayesian Analysis in Expert Systems, {\em Statistical Science, 8}, 219-283. If you want more details on the inference algorithm: Lauritzen, S. L., & Spiegelhalter, D. J. (1988). Local computations with probabilities on graphical structures and their application to expert systems (with discussion). {\em Journal of the Royal Statistical Society B, 50}, 157-224. A tutorial on the recent work on learning in belief networks: Heckerman, D. (1995). A tutorial on learning Bayesian networks. [available through http://www.auai.org]. If you want more on learning: Buntine, W. (1994). Operations for learning with graphical models. {\em Journal of Artificial Intelligence Research, 2}, 159-225. [available through http://www.auai.org]. A very readable general textbook on belief networks from a statistical perspective (focusing on ML estimation and model selection): Whittaker, J. (1990). {\em Graphical Models in Applied Multivariate Statistics}. New York: John Wiley. An introductory textbook: Neapolitan, E. (1990). {\em Probabilistic Reasoning in Expert Systems}. New York: John Wiley. The classical text on belief networks; emphasizes inference and AI issues: Pearl, J. (1988). {\em Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference}. San Mateo, CA: Morgan Kaufmann. A recent paper that unifies (almost) all of the extant algorithms for inference in belief networks: Shachter, R. D., Anderson, S. K., & Szolovits, P. (1994). Global conditioning for probabilistic inference in belief networks. {\em Proceedings of the Uncertainty in Artificial Intelligence Conference}, 514-522.  From rsun at cs.ua.edu Thu Oct 5 10:11:57 1995 From: rsun at cs.ua.edu (Ron Sun) Date: Thu, 5 Oct 1995 09:11:57 -0500 Subject: No subject Message-ID: <9510051411.AA16982@athos.cs.ua.edu> ANNOUNCING A NEW MAILING LIST: hybrid-list ------------------------------------------- As discussed at the CSI workshop at IJCAI this August, we have now established this new mailing list for the specific purpose of exchanging information and ideas regarding hybrid models, especially models integrating symbolic and connectionist processes. Other hybrid models, such as fuzzy logic+neural networks and GA+NN, are also covered.
This is an unmoderated list. Conference and workshop announcements, papers and technical reports, informed discussions of specific topics in hybrid model areas, and other pertinent messages are appropriate items for submission. Email your submission to hybrid-list at cs.ua.edu, and it will be automatically forwarded to all the recipients of the list. Information regarding subscription is attached below. For questions and suggestions regarding this list, send email to rsun at cs.ua.edu (only if you have to). This mailing list has incorporated the old HYBRID list at Brown U. maintained by Michael Perrone (thanks to Michael), and includes the names of those who attended the 1995 CSI workshop or expressed interest in it. (To remove your name from the list, see the instruction at the end of this message.) Regards, --Ron Sun ============================================================================== The University of Alabama Department of Computer Science has set up a list service for this: To subscribe to this list service, send an e-mail message to the userid "listproc at cs.ua.edu" with NO SUBJECT, but a one-line text message, as shown below: SUBSCRIBE hybrid-list YourFirstName YourLastName You should receive a response back indicating your addition to the list. After this, you can submit items to the list by simply e-mailing a message to the userid: "hybrid-list at cs.ua.edu". The message will automatically be sent to all individuals on the list. To unsubscribe from this list service, send an e-mail message to the userid "listproc at cs.ua.edu" with NO SUBJECT, but a one-line text message, as shown below: UNSUBSCRIBE hybrid-list ==============================================================================  From cas-cns at PARK.BU.EDU Thu Oct 5 13:01:56 1995 From: cas-cns at PARK.BU.EDU (BU CNS) Date: Thu, 05 Oct 1995 13:01:56 -0400 Subject: Boston University - Cognitive & Neural Systems Message-ID: <199510051701.NAA29047@cns.bu.edu> (A copy of this message has also been posted to the following newsgroups: comp.ai, comp.cog-eng,comp.software-eng,comp.ai.neural-nets,bu.general,bu.seminars,ne.seminars,news.announce.conferences) ************************************************************** DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS (CNS) AT BOSTON UNIVERSITY ************************************************************** Ennio Mingolla, Acting Chairman, 1995-96 Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive graduate training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1996, admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: DEPARTMENT OF COGNITIVE & NEURAL SYSTEMS Boston University 111 Cummington Street, 2nd Floor (ON OR AFTER 10/30/95, PLEASE ADDRESS Boston, MA 02215 MAIL TO 677 BEACON STREET) 617/353-9481 (phone) 617/353-7755 (fax) or send your full name and mailing address via email to: rll at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15.
Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique offerings. It has developed a curriculum that features 15 interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Nine additional research courses are also offered. In these courses, one or two students meet regularly with one or two professors to pursue advanced reading and collaborative research. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS Department. The CNS Department prepares students for PhD thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory.
Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department offers a colloquium series, seminars, conferences, and special interest groups which bring many additional scientists from both experimental and theoretical disciplines into contact with the students. The CNS Department is moving in October 1995 into its own new four-story building, which features a full range of offices, laboratories, classrooms, library, lounge, and related facilities for exclusive CNS use. 1995-96 CAS MEMBERS and CNS FACULTY: Jelle Atema Professor of Biology Director, Boston University Marine Program (BUMP) PhD, University of Michigan Sensory physiology and behavior Aijaz Baloch Research Associate of Cognitive and Neural Systems PhD, Electrical Engineering, Boston University Neural modeling of the role of visual attention in recognition, learning, and motor control; computational vision; adaptive control systems; reinforcement learning Helen Barbas Associate Professor, Department of Health Sciences, Boston University PhD, Physiology/Neurophysiology, McGill University Organization of the prefrontal cortex, evolution of the neocortex Jacob Beck Research Professor of Cognitive and Neural Systems PhD, Psychology, Cornell University Visual perception, psychophysics, computational models Daniel H. Bullock Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, Stanford University Real-time neural systems, sensory-motor learning and control, evolution of intelligence, cognitive development Gail A. Carpenter Professor of Cognitive and Neural Systems and Mathematics Director of Graduate Studies, Department of Cognitive and Neural Systems PhD, Mathematics, University of Wisconsin, Madison Pattern recognition, categorization, machine learning, differential equations Laird Cermak Professor of Neuropsychology, School of Medicine Professor of Occupational Therapy, Sargent College Director, Memory Disorders Research Center, Boston Veterans Affairs Medical Center PhD, Ohio State University Michael A. Cohen Associate Professor of Cognitive and Neural Systems and Computer Science Director, CAS/CNS Computation Labs PhD, Psychology, Harvard University Speech and language processing, measurement theory, neural modeling, dynamical systems H. Steven Colburn Professor of Biomedical Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Audition, binaural interaction, signal processing models of hearing William D.
Eldred III Associate Professor of Biology BS, University of Colorado; PhD, University of Colorado, Health Science Center Visual neural biology Paolo Gaudiano Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Computational and neural models of vision and adaptive sensory-motor control Jean Berko Gleason Professor of Psychology AB, Radcliffe College; AM, PhD, Harvard University Psycholinguistics Douglas Greve Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems Chairman, Department of Cognitive and Neural Systems PhD, Mathematics, Rockefeller University Theoretical biology, theoretical psychology, dynamical systems, applied mathematics Frank Guenther Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Biological sensory-motor control, spatial representation, speech production Thomas G. Kincaid Chairman and Professor of Electrical, Computer and Systems Engineering, College of Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Signal and image processing, neural networks, non-destructive testing Nancy Kopell Professor of Mathematics PhD, Mathematics, University of California at Berkeley Dynamical systems, mathematical physiology, pattern formation in biological/physical systems Ennio Mingolla Associate Professor of Cognitive and Neural Systems and Psychology Acting Chairman 1995-96, Department of Cognitive and Neural Systems PhD, Psychology, University of Connecticut Visual perception, mathematical modeling of visual processes Alan Peters Chairman and Professor of Anatomy and Neurobiology, School of Medicine PhD, Zoology, Bristol University, United Kingdom Organization of neurons in the cerebral cortex, effects of aging on the primate brain, fine structure of the nervous system Andrzej Przybyszewski Senior Research Associate of Cognitive and Neural Systems MSc, Technical Warsaw University; MA, University of Warsaw; PhD, Warsaw Medical Academy Adam Reeves Adjunct Professor of Cognitive and Neural Systems Professor of Psychology, Northeastern University PhD, Psychology, City University of New York Psychophysics, cognitive psychology, vision William Ross Research Associate of Cognitive and Neural Systems BSc, Cornell University; MA, PhD, Boston University Mark Rubin Research Assistant Professor of Cognitive and Neural Systems Research Physicist, Naval Air Warfare Center, China Lake, CA (on leave) PhD, Physics, University of Chicago Neural networks for vision, pattern recognition, and motor control Robert Savoy Adjunct Associate Professor of Cognitive and Neural Systems Scientist, Rowland Institute for Science PhD, Experimental Psychology, Harvard University Computational neuroscience; visual psychophysics of color, form, and motion perception Eric Schwartz Professor of Cognitive and Neural Systems; Electrical, Computer and Systems Engineering; and Anatomy and Neurobiology PhD, High Energy Physics, Columbia University Computational neuroscience, machine vision, neuroanatomy, neural modeling Robert Sekuler Adjunct Professor of Cognitive and Neural Systems Research Professor of Biomedical Engineering, College of Engineering, BioMolecular Engineering Research Center Jesse and Louis Salvage Professor of Psychology, Brandeis University AB,MA, Brandeis 
University; Sc.M., PhD, Brown University Allen Waxman Adjunct Associate Professor of Cognitive and Neural Systems Senior Staff Scientist, MIT Lincoln Laboratory PhD, Astrophysics, University of Chicago Visual system modeling, mobile robotic systems, parallel computing, optoelectronic hybrid architectures James Williamson Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Image processing and object recognition. Particular interests are: dynamic binding, self-organization, shape representation, and classification Jeremy Wolfe Adjunct Associate Professor of Cognitive and Neural Systems Associate Professor of Ophthalmology, Harvard Medical School Psychophysicist, Brigham & Women's Hospital, Surgery Dept. Director of Psychophysical Studies, Center for Clinical Cataract Research PhD, Massachusetts Institute of Technology  From ajit at austin.ibm.com Thu Oct 5 16:40:53 1995 From: ajit at austin.ibm.com (Dingankar) Date: Thu, 5 Oct 1995 15:40:53 -0500 Subject: Dissertation available in Neuroprose Message-ID: <9510052040.AA14254@ding.austin.ibm.com> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/dingankar.thesis.ps.Z BibTeX entry: @PhdThesis{ajit-phd, author = "Ajit Trimbak Dingankar", title = "On Applications of Approximation Theory to Identification, Control and Classification", school = "The University of Texas at Austin", year = 1995, address = "Austin, Texas", } Abstract: Applications of approximation theory to some problems in identification of dynamic systems, their control, and to problems in signal classification are studied. First, an algorithm is given for constructing approximations in a wide variety of settings, and a corresponding error bound is derived. Then weak sufficient conditions for perfect classification of signals are studied. Next the problem of approximating linear functionals with certain sums of integrals is studied, along with its relation to the approximation of nonlinear functionals. Then an approximation-theoretic characterization of continuity of nonlinear maps is given. As another application of function approximation, the problem of universally approximating controllers for discrete-time continuous plants is studied. Finally, error bounds for approximation of functions defined on finite-dimensional Hilbert spaces are given. ------------------------------------------------------------------------------ Ajit T. Dingankar | ajit at austin.ibm.com IBM Corporation, Internal Zip 4359 | Work: (512) 838-6850 11400 Burnet Road, Austin, TX 78758 | Fax : (512) 838-5882  From john at dcs.rhbnc.ac.uk Thu Oct 5 09:24:13 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 05 Oct 95 14:24:13 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199510051324.OAA07218@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): three new reports available ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-051: ---------------------------------------- On the Computational Power of Continuous Time Neural Networks by Pekka Orponen, University of Helsinki, Finland Abstract: We investigate the computational power of continuous-time neural networks with Hopfield-type units. We prove that polynomial-size networks with saturated-linear response functions are at least as powerful as polynomially space-bounded Turing machines.
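To make the model class concrete, here is a minimal sketch of continuous-time Hopfield-type dynamics with a saturated-linear response function. The two-unit network, its weights, and the Euler integration are invented for illustration; they are not the report's construction.

import numpy as np

def sat_lin(x):
    # Saturated-linear response: identity on [0, 1], clipped outside.
    return np.clip(x, 0.0, 1.0)

def simulate(W, theta, x0, dt=0.01, steps=5000):
    # Forward-Euler integration of dx/dt = -x + sat_lin(W x + theta).
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + sat_lin(W @ x + theta))
    return x

# Self-excitation plus mutual inhibition yields two stable attractors;
# the initial state selects which one the network settles into.
W = np.array([[1.5, -1.0],
              [-1.0, 1.5]])
theta = np.array([0.3, 0.3])
print(simulate(W, theta, [0.9, 0.1]))  # settles near (1, 0)

The report's result is that networks of this general flavor, at polynomial size, can simulate polynomially space-bounded Turing machines.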
---------------------------------------- NeuroCOLT Technical Report NC-TR-95-052: ---------------------------------------- Computational Machine Learning in Theory and Praxis by Ming Li, University of Waterloo, Canada Paul Vitanyi, CWI and Universiteit van Amsterdam, The Netherlands. Abstract: In the last few decades a computational approach to machine learning has emerged based on paradigms from recursion theory and the theory of computation. Such ideas include learning in the limit, learning by enumeration, and probably approximately correct (pac) learning. These models usually are not suitable in practical situations. In contrast, statistics-based inference methods have enjoyed a long and distinguished career. Currently, Bayesian reasoning in various forms, minimum message length (MML) and minimum description length (MDL), are widely applied approaches. They are the tools to use with particular machine learning praxis such as simulated annealing, genetic algorithms, genetic programming, artificial neural networks, and the like. These statistical inference methods select the hypothesis which minimizes the sum of the length of the description of the hypothesis (also called `model') and the length of the description of the data relative to the hypothesis. It appears to us that the future of computational machine learning will include combinations of the approaches above, coupled with guarantees on the time and memory resources used. Computational learning theory will move closer to practice, and the application of principles such as MDL requires further justification. Here, we survey some of the actors in this dichotomy between theory and praxis, we justify MDL via the Bayesian approach, and give a comparison between pac learning and MDL learning of decision trees. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-053: ---------------------------------------- On the relations between distributive computability and the BSS model by Sebastiano Vigna, University of Milan, Italy Abstract: This paper presents an equivalence result between computability in the BSS model and in a suitable distributive category. It is proved that the class of functions $R^m\to R^n$ (with $n,m$ finite and $R$ a commutative, ordered ring) computable in the BSS model and the functions distributively computable over a natural distributive graph based on the operations of $R$ coincide. Using this result, a new structural characterization, based on iteration, of the same functions is given. ----------------------- The Report NC-TR-95-051 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-95-051.ps.Z ftp> bye % zcat nc-tr-95-051.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.
The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor  From listerrj at helios.aston.ac.uk Fri Oct 6 05:24:28 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Fri, 06 Oct 1995 10:24:28 +0100 Subject: Postdoctoral position in Neural Computing Message-ID: <27328.9510060924@sun.aston.ac.uk> ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK Nonstationary Feature Extraction and Tracking for the Classification -------------------------------------------------------------------- of Turning Points in Multivariate Time Series ---------------------------------------------- The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 3 year postdoctoral research position. The investigation will involve generic problems of feature extraction in nonstationary environments, typically from univariate and multivariate time series. Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks, dynamical systems theory, statistical pattern processing, or have relevant experience from a physics or electrical engineering background. *** Further details at http://neural-server.aston.ac.uk/ *** Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,986 UK pounds. These salary scales are currently under review, and are subject to annual increments. How to Apply ------------ Please send a full CV and publications list, together with the names of 4 referees, to: Professor D Lowe Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 6215 email: d.lowe at aston.ac.uk (email submission of postscript files is welcome) Closing date: 15 November 1995 ----------------------------------------------------------------------  From dwang at cis.ohio-state.edu Fri Oct 6 11:13:23 1995 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Fri, 6 Oct 1995 11:13:23 -0400 Subject: Preprint available on auditory scene analysis Message-ID: <199510061513.LAA01790@shirt.cis.ohio-state.edu> The following preprint is available via FTP/WWW: ------------------------------------------------------------------ Primitive Auditory Segregation Based on Oscillatory Correlation ------------------------------------------------------------------ DeLiang Wang Cognitive Science: Accepted for publication The Ohio State University Auditory scene analysis is critical for complex auditory processing. We study auditory segregation from the neural network perspective, and develop a framework for primitive auditory scene analysis. The architecture is a laterally coupled two-dimensional network of relaxation oscillators with a global inhibitor. One dimension represents time and another one represents frequency. We show that this architecture, plus systematic delay lines, can in real time group auditory features into a stream by phase synchrony and segregate different streams by desynchronization. 
The network demonstrates a set of psychological phenomena regarding primitive auditory scene analysis, including dependency on frequency proximity and the rate of presentation, sequential capturing, and competition among different perceptual organizations. We offer a neurocomputational theory - shifting synchronization theory - for explaining how auditory segregation might be achieved in the brain, and the psychological phenomenon of stream segregation. Possible extensions of the model are discussed. (42 pages + one figure = 1.5MB + 600 KB) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu Directory: /pub/leon/Wang95 FTP-filenames: Wang.prep.ps.Z, fig5G.ps.Z or for WWW: http://www.cis.ohio-state.edu/~dwang Comments are most welcome - Please send to DeLiang Wang (dwang at cis.ohio-state.edu) ---------------------------------------------------------------------------- FTP instructions: To retrieve and print the files, use the following commands: unix> ftp ftp.cis.ohio-state.edu Name: anonymous Password: (your email address) ftp> binary ftp> cd /pub/leon/Wang95 ftp> get Wang.prep.ps.Z ftp> get fig5G.ps.Z ftp> quit unix> uncompress Wang.prep.ps.Z unix> uncompress fig5G.ps.Z unix> lpr {each of the two postscript files} (Wang.prep.ps may not ghostview well - some figures do not show up with my ghostview - but it should print ok) ----------------------------------------------------------------------------  From arbib at pollux.usc.edu Fri Oct 6 13:50:05 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Fri, 6 Oct 1995 10:50:05 -0700 Subject: A New Series of Virtual Textbooks on Neural Networks Message-ID: <199510061750.KAA11592@pollux.usc.edu> October 6, 1995 Yesterday, a visitor to my office, while speaking of his enthusiasm for "The Handbook of Brain Theory and Neural Networks", mentioned that some of his colleagues had criticized the fact that the [266] articles [in Part III] were arranged in alphabetical order, thus lacking the "logical order" to make the book easy to use for teaching. The purpose of this note is to answer such concerns. 1. The boring answer is that a Handbook is not a Textbook. Indeed, given that the 266 articles provide such a comprehensive overview - including detailed models of single neurons; analysis of a wide variety of neurobiological systems; connectionist studies; mathematical analyses of abstract neural networks; and technological applications of adaptive, artificial neural networks and related methodologies - it is hard to imagine a course that would cover the whole book, no matter in what order the articles were presented. 2. The exciting answer is that THE HANDBOOK IS A VIRTUAL LIBRARY OF TWENTY-THREE TEXTBOOKS!! Before the 266 articles of Part III come Part I and Part II. Part I provides an introductory textbook level introduction to Neural Networks. Part II provides 23 "road maps", each of which lists the articles on a particular theme, followed by an essay which offers a "logical order" in which to read these articles. 
Thus, the Handbook can be used to provide a "virtual textbook" on any one of the following 23 topics: Applications of Neural Networks Artificial Intelligence and Neural Networks Biological Motor Control Biological Networks Biological Neurons Computability and Complexity Connectionist Linguistics Connectionist Psychology Control Theory and Robotics Cooperative Phenomena Development and Regeneration of Neural Networks Dynamic Systems and Optimization Implementation of Neural Networks Learning in Artificial Neural Networks, Deterministic Learning in Artificial Neural Networks, Statistical Learning in Biological Systems Mammalian Brain Regions Mechanisms of Neural Plasticity Motor Pattern Generators and Neuroethology Primate Motor Control Self-Organization in Neural Networks Other Sensory Systems Vision In each case, the instructor can follow the road map to traverse the articles to provide full coverage of the topic, using the cross-references to choose supplementary material from within the Handbook, and the carefully selected list of readings at the end of each article to choose supplementary material from the general literature. As an appendix to this message, I include a sample road map, that on "Learning in Artificial Neural Networks, Deterministic". All the road maps are available on the Web at: http://www-mitpress.mit.edu/mitp/recent-books/comp/handbook-brain-theo.html If you have other queries about how best to use the Handbook, or suggestions for improving the Handbook, please feel free to contact me by email: arbib at pollux.usc.edu. With best wishes Michael Arbib ***** APPENDIX: The Road Map for "Learning in Artificial Neural Networks, Deterministic" from Part II of The Handbook of Brain Theory and Neural Networks, (M.A. Arbib, Ed.), A Bradford Book, copyright 1995, The MIT Press. LEARNING IN ARTIFICIAL NEURAL NETWORKS, DETERMINISTIC [Articles in the Road Map, listed in Alphabetical Order.] Adaptive Resonance Theory Associative Networks Backpropagation: Basics and New Developments Convolutional Networks for Images, Speech, and Time-Series Coulomb Potential Learning Kolmogorov's Theorem Learning by Symbolic and Neural Methods Learning as Hill-Climbing in Weight Space Learning as Adaptive Control of Synaptic Matrices Modular Neural Net Systems, Training of Neocognitron: A Model for Visual Pattern Recognition Neurosmithing: Improving Neural Network Learning Nonmonotonic Neuron Associative Memory Pattern Recognition Perceptrons, Adalines, and Backpropagation Recurrent Networks: Supervised Learning Reinforcement Learning Topology-Modifying Neural Network Algorithms [Articles in the Road Map, discussed in Logical Order.] Much of our concern is with supervised learning, getting a network to behave in a way which successfully approximates some specified pattern of behavior or input-output relationship. In particular, much emphasis has been placed on feedforward networks, that is, networks which have no loops, so that the output of the net depends on its input alone, since there is then no internal state defined by reverberating activity. The most direct form of this is a synaptic matrix, a one-layer neural network for which input lines directly drive the output neurons and a "supervised Hebbian" rule sets synapses so that the network will exhibit specified input-output pairs in its response repertoire. This is addressed in the article on ASSOCIATIVE NETWORKS, which notes the problems that arise if the input patterns (the "keys" for associations) are not orthogonal vectors.
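To see why orthogonality matters, consider a minimal sketch (invented random patterns, not code from the article) of a one-layer linear associator trained with the Hebbian outer-product rule: recall from orthonormal keys is exact, while keys that overlap produce crosstalk.

import numpy as np

rng = np.random.default_rng(0)
n = 16

def unit(v):
    return v / np.linalg.norm(v)

x1 = unit(rng.standard_normal(n))
x2 = rng.standard_normal(n)
x2_orth = unit(x2 - (x2 @ x1) * x1)  # key orthogonal to x1
x2_corr = unit(x1 + 0.5 * x2_orth)   # key strongly overlapping x1
y1, y2 = rng.standard_normal(4), rng.standard_normal(4)

def store(pairs):
    # Hebbian outer-product storage: W = sum_k outer(y_k, x_k).
    return sum(np.outer(y, x) for x, y in pairs)

W_orth = store([(x1, y1), (x2_orth, y2)])
W_corr = store([(x1, y1), (x2_corr, y2)])
print(np.allclose(W_orth @ x1, y1))      # True: exact recall from orthonormal keys
print(np.linalg.norm(W_corr @ x1 - y1))  # nonzero: crosstalk from the overlapping key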
Association also extends to recurrent networks obtained from one-layer networks by feedback connections from the output to the input, but in such systems of "dynamic memories" (e.g., Hopfield networks) there are no external inputs as such. Rather the "input" is the initial state of the network, and the "output" is the "attractor" or equilibrium state to which the network then settles. Unfortunately, the usual "attractor network" memory model, with neurons whose output is a sigmoid function of the linear combination of their inputs, has many spurious memories, i.e., equilibria other than the memorized patterns, and there is no way to decide whether a memorized pattern has been recalled or not. The article on NONMONOTONIC NEURON ASSOCIATIVE MEMORY shows that, if the output of each neuron is a nonmonotonic function of its input, the capacity of the network can be increased, and the network does not exhibit spurious memories: when the network fails to recall a correct memorized pattern, the state shows chaotic behavior instead of falling into a spurious memory. Historically, the earliest forms of supervised learning involved changing synaptic weights to oppose the error in a neuron with a binary output (the perceptron error-correction rule), or to minimize the sum of squares of errors of output neurons in a network with real-valued outputs (the Widrow-Hoff rule). This work is charted in the article on PERCEPTRONS, ADALINES AND BACKPROPAGATION, which also traces the extension of these classic ideas to multilayered feedforward networks. Multilayered networks pose the structural credit assignment problem: when an error is made at the output of a network, how is credit (or blame) to be assigned to neurons deep within the network? One of the most popular techniques is called backpropagation, whereby the error of output units is propagated back to yield estimates of how much a given "hidden unit" contributed to the output error. These estimates are used in the adjustment of synaptic weights to these units within the network. The article on BACKPROPAGATION: BASICS AND NEW DEVELOPMENTS places this idea in a broader mathematical and historical framework in which backpropagation is seen as a general method for calculating derivatives to adjust the weights of nonlinear systems, whether or not they are neural networks. The underlying theoretical grounding is that, given any function f: X -> Y for which X and Y are codable as input and output patterns of a neural network, then, as shown in the article on KOLMOGOROV'S THEOREM, f can be approximated arbitrarily well by a feedforward network with one layer of hidden units. The catch, of course, is that many, many hidden units may be required for a close fit. It is often an empirical question whether there exists a sufficiently good approximation achievable in principle by a network of a given size - an approximation which a given learning rule may or may not find (it may, for example, get stuck in a local optimum rather than a global one). The article on NEUROSMITHING: IMPROVING NEURAL NETWORK LEARNING provides a number of "rules of thumb" to be used in applying backpropagation in trying to find effective settings for network size and for various coefficients in the learning rules.
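For readers who want the mechanics in front of them, here is a minimal backpropagation sketch: one hidden layer, squared error, plain batch gradient descent on XOR. The architecture, learning rate, and seed are arbitrary illustrative choices, not recommendations from the articles above.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
T = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)          # forward pass: hidden layer
    y = sigmoid(h @ W2 + b2)          # forward pass: output
    dy = (y - T) * y * (1.0 - y)      # error at the output units
    dh = (dy @ W2.T) * h * (1.0 - h)  # hidden units' share of the blame
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(np.round(y.ravel(), 2))  # typically approaches [0, 1, 1, 0]

Note how dh routes each output error back through the same weights W2 that carry the hidden units' influence forward; that is exactly the structural credit assignment described above.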
One useful perspective for supervised learning views LEARNING AS HILL-CLIMBING IN WEIGHT SPACE, so that each "experience" adjusts the synaptic weights of the network to climb (or descend) a metaphorical hill for which "height" at a particular point in "weight space" corresponds to some measure of the performance of the network (or the organism or robot of which it is a part). When the aim is to minimize this measure, one of the basic techniques for learning is what mathematicians call "gradient descent"; optimization theory also provides alternative methods such as, e.g., that of conjugate gradients, which are also used in the neural network literature. REINFORCEMENT LEARNING describes a form of "semi-supervised" learning where the network is not provided with an explicit form of error at each time step but rather receives only generalized reinforcement ("you're doing well"; "that was bad!") which yields little immediate indication of how any neuron should change its behavior. Moreover, the reinforcement is intermittent, thus raising the temporal credit assignment problem: how is an action at one time to be credited for positive reinforcement at a later time? One solution is to build an "adaptive critic" which learns to evaluate actions of the network on the basis of how often they occur on a path leading to positive or negative reinforcement. Another perspective on supervised learning is presented in LEARNING AS ADAPTIVE CONTROL OF SYNAPTIC MATRICES, which views learning as a control problem (controlling synaptic matrices to yield a given network behavior) and then uses the adjoint equations of control theory to derive synaptic adjustment rules. Gradient descent methods have also been extended to adapt the synaptic weights of recurrent networks, as discussed in RECURRENT NETWORKS: SUPERVISED LEARNING, where the aim is to match the time course of network activity, rather than the (input, output) pairs of some training set. The task par excellence for supervised learning is pattern recognition, the problem of classifying objects, often represented as vectors or as strings of symbols, into categories. Historically, the field of pattern recognition started with early efforts in neural networks (see PERCEPTRONS, ADALINES AND BACKPROPAGATION). While neural networks played a less central role in pattern recognition for some years, recent progress has made them the method of choice for many applications. As PATTERN RECOGNITION demonstrates, multilayer networks, when properly designed, can learn complex mappings in high-dimensional spaces without requiring complicated hand-crafted feature extractors. To rely more on learning, and less on detailed engineering of feature extractors, it is crucial to tailor the network architecture to the task, incorporating prior knowledge to be able to learn complex tasks without requiring excessively large networks and training sets. Many specific architectures have been developed to solve particular types of learning problem. ADAPTIVE RESONANCE THEORY (ART) bases learning on internal expectations. When the external world fails to match an ART network's expectations or predictions, a search process selects a new category, representing a new hypothesis about what is important in the present environment. The neocognitron (see NEOCOGNITRON: A MODEL FOR VISUAL PATTERN RECOGNITION) was developed as a neural network model for visual pattern recognition which addresses the specific question "how can a pattern be recognized despite variations in size and position?" 
by using a multilayer architecture in which local features are replicated at many different scales and locations. More generally, as shown in CONVOLUTIONAL NETWORKS FOR IMAGES, SPEECH, AND TIME SERIES, shift invariance in convolutional networks is obtained by forcing the replication of weight configurations across space. Moreover, the topology of the input is taken into account, enabling such networks to force the extraction of local features by restricting the receptive fields of hidden units to be local. COULOMB POTENTIAL LEARNING derives its name from its functional form's likeness to a Coulomb charge potential, replacing the linear separability of a simple perceptron with a network that is capable of constructing arbitrary nonlinear boundaries for classification tasks. We have already noted that networks that are too small cannot learn the desired input-to-output mapping. However, networks can also be too large. Just as a polynomial of too high a degree is not useful for curve-fitting, a network that is too large will fail to generalize well, and will require longer training times. Smaller networks, with fewer free parameters, enforce a smoothness constraint on the function found. For best performance, it is, therefore, desirable to find the smallest network that will "properly" fit the training data. The article TOPOLOGY-MODIFYING NEURAL NETWORK ALGORITHMS reviews algorithms which adjust network topology (i.e., adding or removing neurons during the learning process) to arrive at a network appropriate to a given task. The last two articles in this road map take a somewhat different viewpoint from that of adjusting the synaptic weights in a single network. MODULAR NEURAL NET SYSTEMS, TRAINING OF presents the idea that, although single neural networks are theoretically capable of learning complex functions, many problems are better solved by designing systems in which several modules cooperate to perform a global task, replacing the complexity of a large neural network by the cooperation of neural network modules whose size is kept small. The article on LEARNING BY SYMBOLIC AND NEURAL METHODS focuses on the distinction between symbolic learning, based on producing discrete combinations of the features used to describe examples, and neural approaches, which adjust continuous, nonlinear weightings of their inputs. The article not only compares but also combines the two approaches, showing for example how symbolic knowledge may be used to set the initial state of an adaptive network. [This Road Map is then followed by one on "Learning in Artificial Neural Networks, Statistical"]  From terry at salk.edu Fri Oct 6 22:14:01 1995 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 6 Oct 95 19:14:01 PDT Subject: Neural Computation 7:6 Message-ID: <9510070214.AA28375@salk.edu> NEURAL COMPUTATION Vol 7, Issue 6, November 1995 Article: An information-maximization approach to blind separation and blind deconvolution Anthony J. Bell and Terrence J.
Sejnowski Note: A perceptron reveals the face of sex Michael Gray, David Lawrence, Beatrice Golomb, and Terrence Sejnowski Letters: Self-organization as an iterative kernel smoothing process Vladimir Cherkassky and Filip Mulier On the distribution and the convergence of feature space in self-organizing maps Hujun Yin and Nigel Allinson Sorting with self-organizing maps Marco Budinich Introducing asymmetry into interneuron learning Colin Fyfe Learning and generalization with minimerror, a temperature-dependent learning algorithm Bruno Raffin and Mirta B. Gordon Regularized neural networks: Some convergence rate results Halbert White and Valentina Corradi The target switch algorithm: A constructive learning procedure for feedforward neural networks Colin Campbell and C. Perez Vicente On the practical applicability of VC dimension bounds Sean B. Holden and Mahesan Niranjan LeRec: A NN/HMM hybrid for on-line handwriting recognition Yoshua Bengio, Yann LeCun, Craig Nohl, and Chris Burges ----- NOTE: IN 1996 NEURAL COMPUTATION WILL PUBLISH 8 ISSUES ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $220 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada). mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 -----  From honavar at cs.iastate.edu Fri Oct 6 22:31:09 1995 From: honavar at cs.iastate.edu (Vasant Honavar) Date: Fri, 6 Oct 1995 21:31:09 -0500 (CDT) Subject: paper available: constructive learning algorithms Message-ID: <199510070231.VAA12809@ren.cs.iastate.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 1737 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/971f392b/attachment.ksh From wec at bioele.nuee.nagoya-u.ac.jp Sat Oct 7 22:12:53 1995 From: wec at bioele.nuee.nagoya-u.ac.jp (Workshop on Evolutionary Computation) Date: Sat, 7 Oct 95 22:12:53 JST Subject: Call for participation - Online Workshop on EC Message-ID: <9510071312.AA13486@ursa.bioele.nuee.nagoya-u.ac.jp> [CFA] Online Workshop on Evolutionary Computation This is the call for participation for the First Online Workshop on Evolutionary Computation. The papers and discussions are visible on the Internet. http://www.bioele.nuee.nagoya-u.ac.jp/wec/index.html ********************************************************************* * * * CALL FOR PARTICIPATION * * * * First Online Workshop on Evolutionary Computation * * * * Oct. 9 (Mon) - Oct. 13 (Fri) * * * * On Internet (WWW (World Wide Web) ) Served by Nagoya University * * * * Sponsored by * * Research Group on ECOmp of * * the Society of Fuzzy Theory and Systems (SOFT) * * * ********************************************************************* We invite your participation in online discussions using the Internet. The goal of this workshop is to give its attendees opportunities to exchange information and ideas on various aspects of Evolutionary Computation and "save travel expenses" without having to visit foreign countries. We aim to have ample discussion time between the authors and attendees and make the discussions visible to everyone on the Internet.
The papers submitted to the workshop are: ----------------------------------------------------------------------------- *Artificial Life 1. Specialization Under Social Conditions in Shared Environments 2. Pattern Formation and Functionality in Swarm Models 3. Cooperative Cleaners : a Study in Ant-Robotics 4. Seeing In The Dark With Artificial Bats *Evolutionary Programming 1. Evolutionary Programming in Perspective: The Top-Down View *Fuzzy - Genetic Algorithms 1. A Study on Finding Fuzzy Rules for Semi-Active Suspension Controllers with Genetic Algorithm 2. Structural Learning of Fuzzy Rule from Noisy Examples 3. Fuzzy Connectives Based Crossover Operators to Model Genetic Algorithms Population Diversity 4. Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques *Fuzzy Logic 1. Fuzzy Rules Reduction Method By Using Vector Composition Method & Global/Local Reasoning Method *Genetic Algorithms 1. Orgy in the Computer: Multi-Parent Reproduction in Genetic Algorithms 2. An Evolutionary Algorithm for Parametric Array Signal Processing 3. Hill Climbing with Learning (An Abstraction of Genetic Algorithm) *Knowledge - Genetic Algorithms 1. Genetic Algorithms in Knowledge Base Design 2. Inference of Stochastic Regular Grammars by Massively Parallel Genetic Algorithms 3. Skill Based Manipulation in Hierarchical Intelligent Control of Robots *Neural Networks - Genetic Algorithms 1. Genetic Algorithm Enlarges the Capacity of Associative Memory *Others 1. Indexed Bibliography of Genetic Algorithms in the Far East 2. Optimization with Evolution Strategy after Rechenberg with Smalltalk ------------------------------------------------------------------------------- These papers are visible on the Internet. The home page address is: http://www.bioele.nuee.nagoya-u.ac.jp/wec/index.html IMPORTANT DATES: Workshop Week: October 9 - 13, 1995 DISCUSSION PROCEDURE: 1. Read the abstracts on the page: http://www.bioele.nuee.nagoya-u.ac.jp/wec/papers/index.html 2. Copy the main texts (ps files) of the papers that interest you. 3. Send questions and comments to wec at bioele.nuee.nagoya-u.ac.jp (The steering committee will edit the questions from attendees and send them to the authors. It will receive answers from the authors and make the Q&A visible on the Internet.) 4. Read the answers from the authors on http://www.bioele.nuee.nagoya-u.ac.jp/wec/papers/index.html 5. Repeat items 3 and 4 until you are satisfied. OFFICIAL LANGUAGE: English REGISTRATION PROCEDURE: Those who are interested in the above papers are welcome to participate in this workshop at any time during the workshop week. Please visit our page or send an e-mail to the addresses below. http://www.bioele.nuee.nagoya-u.ac.jp/wec/index.html wec at bioele.nuee.nagoya-u.ac.jp Just tell us you will take part in the workshop. This registration is not a prerequisite for your participation. The steering committee will mail the registrants important information. For further information, contact: Takeshi Furuhashi Dept. of Information Electronics Nagoya University, Furo-cho, Chikusa-ku Nagoya 464-01, Japan Tel. +81-52-789-2792 Fax. +81-52-789-3166 E-mail furuhashi at nuee.nagoya-u.ac.jp ORGANIZATION: Advisory Committee Chair: T.Fukuda(Nagoya University) M.Gen(Ashikaga Institute of Tech.) I.Hayashi(Han-nan University) H.Ishibuchi(University of Osaka Pref.) Y.Maeda(Osaka Electro-Comm. Univ.)
M.Sakawa(Hiroshima University) M.Sano(Hiroshima City University) T.Shibata(MEL, MITI & AI Lab, MIT) H.Shiizuka(Kogakuin University) K.Tanaka(Kanazawa University) N.Wakami(Matsushita Elec. Ind. Co., Ltd.) J.Watada(Osaka Institute of Tech.) Program Committee Chair: S.Nakanishi(Tokai University) T.Furuhashi(Nagoya University) K.Tanaka(Kanazawa University) Steering Committee Chair: T.Furuhashi(Nagoya University) T.Hashiyama(Nagoya University) K.Morikawa(Nagoya University) Y.Miyata(Nagoya University) --------------------------------------------------- Takeshi Furuhashi, Assoc. Professor Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, Japan Tel.+81-52-789-2792, Fax.+81-52-789-3166 ---------------------------------------------------  From payman at ebs330.eb.uah.edu Sat Oct 7 15:21:48 1995 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Sat, 7 Oct 95 14:21:48 CDT Subject: NEW: IEEE Neural Networks Council's Homepage Message-ID: <9510071921.AA21731@ebs330> Dear colleagues, The IEEE Neural Networks Council (NNC) has recently established a homepage on the World Wide Web. The homepage provides instant information concerning conferences (both IEEE and non-IEEE), publications, research programs, and NNC administration. The homepage can be viewed (best using the Netscape browser) at http://www.ieee.org/nnc We are constantly seeking information on conferences with a neural network component, as well as information on computational intelligence research programs worldwide. The homepage is updated every two weeks. It is our aim to ensure a high degree of reliability for the homepage, both in terms of connectivity and accuracy of contents, and to respond promptly to comments and suggestions from users and IEEE members regarding changes. If you have information which you would like included in the homepage, or simply have a comment or recommendation regarding it, please contact: Dr. Payman Arabshahi Editor-in-chief, IEEE NNC Homepage Dept. of Electrical & Computer Eng. University of Alabama in Huntsville Huntsville, AL 35899 Tel: (205) 895-6380 Fax: (205) 895-6803 E-mail: payman at ebs330.eb.uah.edu For updates to the list of Neural Network Conferences, please also cc Dr. Paul Bakker Associate Editor, IEEE NNC Homepage Intelligent Systems Division Electrotechnical Laboratory 1-1-4 Umezono, Tsukuba Ibaraki 305, Japan Tel: (81-298) 585-980 Fax: (81-298) 585-971 E-mail: bakker at etl.go.jp Help us help you with a novel way of accessing and using information that you need, this time on the Internet, through the World Wide Web. Check us out! Best regards, -- Payman Arabshahi Tel : (205) 895-6380 Dept. of Electrical & Computer Eng. Fax : (205) 895-6803 University of Alabama in Huntsville payman at ebs330.eb.uah.edu Huntsville, AL 35899 http://www.eb.uah.edu/ece/  From ingber at alumni.caltech.edu Tue Oct 10 15:12:42 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Tue, 10 Oct 1995 12:12:42 -0700 Subject: two papers on nonlinear aspects of macroscopic neocortex/EEG Message-ID: <199510101912.MAA13207@alumni.caltech.edu> The following two papers are available via WWW and anonymous FTP. The paper, "Nonlinear nonequilibrium non-quantum non-chaotic statistical mechanics of neocortical interactions," is available as file smni96_nonlinear.ps.Z. smni96_nonlinear is an invited BBS commentary on "Dynamics of the brain at global and microscopic scales: Neural networks and the EEG," by J.J. Wright and D.T.J. Liley.
The paper, "Adaptive simulated annealing of canonical momenta indicators of financial markets," is available as file markets96_momenta.ps.Z. Some extrapolations to and from EEG systems are also discussed, as first mentioned in smni95_lecture.ps.Z, outlining a project, performing recursive ASA optimization of "canonical momenta" indicators of subject's/patient's EEG nested in parameterized customized clinician's rules. markets96_momenta shows how canonical momenta indicators just by themselves can provide signals for profitable trading on S&P 500 data. This demonstrates that inefficiencies in nonlinear nonequilibrium markets can be used to advantage in trading, and that such canonical momenta can be considered to be at least useful supplemental indicators in other trading systems. Similar arguments are made for EEG analyses. Some additional information is available in the file ftp.alumni.caltech.edu:/pub/ingber/MISC.DIR/projects.txt Lester ======================================================================== Instructions for Retrieval of Code and Reprints Interactively Via WWW The archive can be accessed via WWW path http://www.alumni.caltech.edu/~ingber/ Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.alumni.caltech.edu [131.215.50.234] in the /pub/ingber directory. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Electronic Mail If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Additional Information Sorry, I cannot assume the task of mailing out hardcopies of code or papers. Limited help assisting people with their queries on my codes and papers is available only by electronic mail correspondence. Queries on commercial consulting can be made by contacting me via e-mail, mail, or calling 1.800.L.INGBER. Lester ======================================================================== /* RESEARCH ingber at alumni.caltech.edu * * INGBER ftp.alumni.caltech.edu:/pub/ingber * * LESTER http://www.alumni.caltech.edu/~ingber/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */  From dhw at santafe.edu Tue Oct 10 16:35:28 1995 From: dhw at santafe.edu (David Wolpert) Date: Tue, 10 Oct 95 14:35:28 MDT Subject: Paper announcements Message-ID: <9510102035.AA04144@sfi.santafe.edu> *** PAPER ANNOUNCEMENTS *** The following two papers may be of interest to people in the connectionist community. Both will appear in the proceedings of the 1995 conference on Maximum Entropy and Bayesian Methods. Retrieval instructions are at the bottom of this post. THE BOOTSTRAP IS INCONSISTENT WITH PROBABILITY THEORY by D. H. Wolpert Abstract: This paper proves that for no prior probability distribution does the bootstrap (BS) distribution equal the predictive distribution, for all Bernoulli trials of some fixed size. It then proves that for no prior will the BS give the same first two moments as the predictive distribution for all size trials. It ends with an investigation of whether the BS can get the variance correct. DETERMINING WHETHER TWO DATA SETS ARE FROM THE SAME DISTRIBUTION by D. H. 
From dhw at santafe.edu Tue Oct 10 16:35:28 1995 From: dhw at santafe.edu (David Wolpert) Date: Tue, 10 Oct 95 14:35:28 MDT Subject: Paper announcements Message-ID: <9510102035.AA04144@sfi.santafe.edu> *** PAPER ANNOUNCEMENTS *** The following two papers may be of interest to people in the connectionist community. Both will appear in the proceedings of the 1995 conference on Maximum Entropy and Bayesian Methods. Retrieval instructions are at the bottom of this post.
THE BOOTSTRAP IS INCONSISTENT WITH PROBABILITY THEORY by D. H. Wolpert
Abstract: This paper proves that for no prior probability distribution does the bootstrap (BS) distribution equal the predictive distribution, for all Bernoulli trials of some fixed size. It then proves that for no prior will the BS give the same first two moments as the predictive distribution for all size trials. It ends with an investigation of whether the BS can get the variance correct.
DETERMINING WHETHER TWO DATA SETS ARE FROM THE SAME DISTRIBUTION by D. H. Wolpert
Abstract: This paper presents two Bayesian alternatives to the chi-squared test for determining whether a pair of categorical data sets were generated from the same underlying distribution. It then discusses such alternatives for the Kolmogorov-Smirnov test, which is often used when the data sets consist of real numbers.
============================================================
To retrieve these papers, anonymous ftp to ftp.santafe.edu. The papers are in pub/dhw_ftp, under the titles maxent.95.boot.ps.Z and maxent.95.wv.ps.Z, respectively.
From mozer at neuron.cs.colorado.edu Tue Oct 10 15:14:30 1995 From: mozer at neuron.cs.colorado.edu (Mike Mozer) Date: Tue, 10 Oct 1995 13:14:30 -0600 Subject: job announcement Message-ID: <199510101914.NAA16869@neuron.cs.colorado.edu> University of Colorado, Boulder Department of Computer Science Applications are invited for a junior tenure-track faculty position in the areas of artificial intelligence or software and systems. The department is particularly interested in candidates in the areas of human-computer interaction, human and machine learning, neural networks, databases, computer networks, distributed systems, programming languages and software engineering. Applicants should have a Ph.D. in computer science or a related field and show strong promise in both research and teaching. The Computer Science Department at the University of Colorado has 22 faculty and about 200 graduate students. It has strong research programs in artificial intelligence, numerical and parallel computation, software and systems and theoretical computer science. The computing environment includes a multitude of computer workstations and a large variety of parallel computers. The department has been the recipient of two consecutive five-year Institutional Infrastructure (previously CER) grants from the National Science Foundation supporting its computing infrastructure and collaborative research among its faculty. Applicants should send a current curriculum vitae and the names of four references to Professor Gary J. Nutt, Chair, Department of Computer Science, Campus Box 430, University of Colorado, Boulder, CO 80309-0430. One-page statements of research and teaching interests would also be appreciated. Review of applications will begin Jan. 1, 1996, but all applications postmarked before March 1, 1996 are eligible for consideration. Earlier applications will receive first consideration. Appointment can begin as early as August 1996. The University of Colorado at Boulder strongly supports the principle of diversity. We are particularly interested in receiving applications from women, ethnic minorities, disabled persons, veterans and veterans of the Vietnam era.
-------------------------------------------------------------------------------
You can contact me for further information. The search looks terribly diffuse, but the odds of hiring a neural net / machine learning person are good. -- Mike
From pmn at iau.dtu.dk Wed Oct 11 11:50:53 1995 From: pmn at iau.dtu.dk (Magnus Norgaard) Date: Wed, 11 Oct 95 11:50:53 MET Subject: NNSYSID toolbox available Message-ID: <199510111052.LAA15406@danpost.uni-c.dk> ------------------------------- ANNOUNCING: THE NNSYSID TOOLBOX ------------------------------- Neural Network Based System Identification Toolbox for use with MATLAB(r) Version 1.0 Magnus Norgaard Institute of Automation, Connect/Electronics Institute, Institute of Mathematical Modeling Technical University of Denmark Oct.
4, 1995 The NNSYSID toolbox is a set of freeware tools for neural network based identification of nonlinear dynamic systems. The toolbox contains a number of m and MEX-files for training and evaluation of multilayer perceptron type neural networks within the MATLAB environment. There are functions for training ordinary feedforward networks as well as for identification of nonlinear dynamic systems and for time-series analysis. The toolbox requires at least MATLAB 4.2 with the signal processing toolbox, but it works completely independently of the neural network and system identification toolboxes provided by The MathWorks, Inc.
WHAT THE TOOLBOX CONTAINS:
o Fast, robust, and easy-to-use training algorithms.
o A number of different model structures for modelling dynamic systems.
o Validation of trained networks.
o Estimation of generalization ability.
o Algorithms for determination of the optimal network architecture by pruning.
o Demonstration programs.
HOW TO GET IT: The toolbox can be obtained in one of the following ways:
o WWW: URL address: http://kalman.iau.dtu.dk/Projects/proj/nnsysid.html zip was used for compressing the toolbox. You must have unzip (UNIX) or pkunzip (DOS) to unpack it. From UNIX: unzip -a nntool.zip From DOS: pkunzip nntool.zip
o FTP: ftp eivind.ei.dtu.dk login: anonymous password: (Your e-mail address) cd dist/matlab/NNSYSID You will find two versions of the compressed toolbox: nntool.zip was packed and compressed with 'zip'; nntool.tar.gz was packed with 'tar' and compressed with 'gzip'. For the zip version: get nntool.zip unzip -a nntool.zip (UNIX), or pkunzip nntool.zip (DOS) For the tar+gzip version: get nntool.tar.gz gzip -d nntool.tar.gz (UNIX only) tar xvf nntool.tar
After having unpacked the toolbox, read the files README and RELEASE on how to install the tools properly. A 90-page manual (in PostScript) is included with the toolbox. We do not offer any support if you run into problems! The toolbox is freeware - take it or leave it!!!
THE TOOLBOX FUNCTIONS GROUPED BY SUBJECT:
FUNCTIONS FOR TRAINING A NETWORK:
marq : Levenberg-Marquardt method.
marq2 : Levenberg-Marquardt method. Works for fully connected networks only.
marqlm : Memory-saving implementation of the Levenberg-Marquardt method.
rpe : Recursive prediction error method.
FUNCTIONS FOR PRETREATING THE DATA:
dscale : Scale data to zero mean and variance one.
FUNCTIONS FOR TRAINING NETWORKS TO MODEL DYNAMIC SYSTEMS:
lipschit : Determine the lag space.
nnarmax1 : Identify a Neural Network ARMAX (or ARMA) model (Linear MA filter).
nnarmax2 : Identify a Neural Network ARMAX (or ARMA) model.
nnarx : Identify a Neural Network ARX (or AR) model.
nniol : Identify a Neural Network model suited for I-O linearization control.
nnoe : Identify a Neural Network Output Error model.
nnrarmx1 : Recursive counterpart to NNARMAX1.
nnrarmx2 : Recursive counterpart to NNARMAX2.
nnrarx : Recursive counterpart to NNARX.
nnssif : Identify a NN State Space Innovations form model.
FUNCTIONS FOR PRUNING NETWORKS:
netstruc : Extract weight matrices from matrix of parameter vectors.
nnprune : Prune models of dynamic systems with Optimal Brain Surgeon (OBS).
obdprune : Prune feed-forward networks with Optimal Brain Damage (OBD).
obsprune : Prune feed-forward networks with Optimal Brain Surgeon (OBS).
FUNCTIONS FOR EVALUATING TRAINED NETWORKS:
fpe : FPE estimate of the generalization error for feed-forward nets.
ifvalid : Validation of models generated by NNSSIF.
ioleval : Validation of models generated by NNIOL.
loo : Leave-One-Out estimate of generalization error for feed-forward nets.
nneval : Validation of feed-forward networks (trained by marq, rpe, bp).
nnfpe : FPE for I/O models of dynamic systems.
nnsim : Simulate model of dynamic system from control sequence alone.
nnvalid : Validation of I/O models of dynamic systems.
wrescale : Rescale weights of trained network.
MISCELLANEOUS FUNCTIONS:
Contents : Contents file.
drawnet : Draws a two-layer neural network.
getgrad : Derivative of network outputs w.r.t. the weights.
pmntanh : Fast tanh function.
DEMOS:
test1 : Demonstrates different training methods on a curve fitting example.
test2 : Demonstrates the NNARX function.
test3 : Demonstrates the NNARMAX2 function.
test4 : Demonstrates the NNSSIF function.
test5 : Demonstrates the NNOE function.
test6 : Demonstrates the effect of regularization by weight decay.
test7 : Demonstrates pruning by OBS on the sunspot benchmark problem.
Enjoy - MN
+-----------------------------------+------------------------------+
| Magnus Norgaard | e-mail : pmn at iau.dtu.dk |
| Institute of Automation | Phone : +45 42 25 35 65 |
| Technical University of Denmark | Fax : +45 45 88 12 95 |
| Building 326 | http://kalman.iau.dtu.dk/ |
| DK-2800 Lyngby | Staff/Magnus_Norgaard.html |
| Denmark | |
|___________________________________|______________________________|
From janine.stook at era.co.uk Wed Oct 11 11:00:04 1995 From: janine.stook at era.co.uk (janine stook) Date: Wed, 11 Oct 95 15:00:04 GMT Subject: Producing Dependable Systems - Conference & Exhibition Message-ID: <9509118134.AA813448797@mailhost.era.co.uk> Conference & Exhibition Neural Networks - Producing Dependable Systems National Motorcycle Museum, Solihull, West Midlands, UK: Thursday 2 November 1995 The conference will look at the problem of producing dependable neural network computing systems from both theoretical and practical angles. The different approaches to producing and demonstrating dependable systems will be discussed during the day. Case studies will illustrate the state of the art and draw out the lessons that can be applied from one area to another. With a blend of speakers drawn from industry and academia, this conference will be of interest to engineers, managers, technical directors and industrial scientists who wish to know more about the practical application of neural networks. For programme, booking and other details, please see: http://www.era.co.uk/neural.htm, or contact Janine Stook, Senior Conference Organiser, E-mail: conferences at era.co.uk Bookings can be accepted until the day of the event. I look forward to hearing from you. Regards, Janine Stook. Technical Services Division, ERA Technology Ltd, Cleeve Road Leatherhead, Surrey KT22 7SA, UK Tel: +44 (0)1372 367021 Fax: +44 (0)1372 377927
From cas-cns at PARK.BU.EDU Wed Oct 11 15:49:31 1995 From: cas-cns at PARK.BU.EDU (B.U. CAS/CNS) Date: Wed, 11 Oct 1995 15:49:31 -0400 Subject: New Faculty Position Available Message-ID: <199510111948.PAA19316@cns.bu.edu> (A copy of this message has also been posted to the following newsgroups: comp.ai, comp.cog-eng, comp.software-eng, comp.ai.neural-nets, bu.general, misc.jobs.offered, ne.jobs, sci.math, sci.cognitive, sci.psychology, sci.misc, sci.physics, sci.med.psychobiology) NEW FACULTY IN COGNITIVE AND NEURAL SYSTEMS AT BOSTON UNIVERSITY Boston University seeks an associate or full professor for its graduate Department of Cognitive and Neural Systems. Exceptionally qualified assistant professors will also be considered.
This department offers an integrated curriculum of psychological, neurobiological, and computational concepts, models, and methods in the fields of computational neuroscience, connectionist cognitive science, and neural network technology, in which Boston University is a leader. Candidates should have an international research reputation, preferably including extensive analytic or computational research experience in modeling a broad range of nonlinear neural networks, especially in one or more of the following areas: vision and image processing, adaptive pattern recognition, cognitive information processing, speech and language, and neural network technology. Send a complete curriculum vitae and three letters of recommendation to Search Committee, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston University, Boston, MA 02215. Boston University is an Equal Opportunity/Affirmative Action Employer. http://cns-web.bu.edu
From harnad at cogsci.soton.ac.uk Wed Oct 11 16:44:05 1995 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Wed, 11 Oct 95 21:44:05 +0100 Subject: Addictions: BBS Call for Commentators Message-ID: <14340.9510112044@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming target article on: RESOLVING THE CONTRADICTIONS OF ADDICTION by Gene Heyman (Psychology, Harvard) This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/bbs ftp://ftp.princeton.edu/pub/harnad/BBS ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract.
____________________________________________________________________
RESOLVING THE CONTRADICTIONS OF ADDICTION Gene M. Heyman Department of Psychology Harvard University Cambridge, MA 02138 gmh at wjh12.harvard.edu KEYWORDS: Addiction, compulsive behavior, disease, incentive-sensitization, reinforcement, rational choice, matching law ABSTRACT: Research findings on addiction are contradictory. According to biographical records and widely used diagnostic manuals, addicts use drugs compulsively. These accounts are consistent with genetic research and laboratory experiments in which repeated administration of addictive drugs caused changes in neural substrates associated with reward. However, epidemiological and experimental data show that the consequences of drug consumption can significantly modify drug intake in addicts.
The disease model can account for the compulsive features of addiction, but not occasions in which price and punishment reduced drug consumption in addicts. Conversely, learning models of addiction can account for the influence of price and punishment, but not compulsive drug taking. The occasion for this paper is that recent developments in behavioral choice theory resolve the apparent contradictions in the addiction literature. The basic argument includes the following four statements. First, repeated consumption of an addictive drug decreases its future value and the future value of competing activities. Second, the frequency of an activity is a function of its relative (not absolute) value. This implies that an activity that reduces the values of competing behaviors can increase in frequency even if its own value also declines. Third, a recent experiment (Heyman & Tanz, 1995) shows that the effective reinforcement contingencies are relative to a frame of reference, and this frame of reference can change so as to favor optimal or sub-optimal choice. Fourth, if the frame of reference is local, reinforcement contingencies will favor excessive drug use, but if the frame of reference is global, the reinforcement contingencies will favor controlled drug use. The transition from a global to local frame of reference explains relapse and other compulsive features of addiction. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.heyman). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.heyman ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.heyman To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.heyman When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From rosen at unr.edu Thu Oct 12 04:14:58 1995 From: rosen at unr.edu (David B. 
Rosen) Date: Thu, 12 Oct 1995 01:14:58 -0700 Subject: Missing Data Workshop Announcement & Call for Presentations Message-ID: <199510120809.IAA13967@solstice.ccs.unr.edu> This is the first announcement and call for presentations for: MISSING DATA: METHODS AND MODELS A NIPS*95 Workshop Friday, December 1, 1995 INTRODUCTION Incomplete or missing data, typically unobserved or unavailable features in supervised learning, is an important problem often encountered in real-world data sets and applications. Assumptions about the missing-data mechanism are often not stated explicitly, for example independence between this mechanism and the values of the (missing or other) features themselves. In the important case of incomplete training data, one often discards incomplete rows or columns of the data matrix, throwing out some useful information along with the missing data. Ad hoc or univariate methods such as imputing the mean or mode are dangerous as they can sometimes give much worse results than simple discarding. Overcoming the problem of missing data often requires that we model not just the dependence of the output on the inputs, but the inputs among themselves as well. THE WORKSHOP This one-day workshop should provide a valuable opportunity to share and discuss methods and models used for missing data. There will be a number of short presentations, with discussion following each and continuing in greater depth after all are done. Particular classes of methods we hope will be discussed include: * Discarding data, univariate imputation (filling-in) of missing values, etc. * Single and multiple imputation based on other (non-missing) features. * Mixture models for the joint (input-output) space. * EM algorithm. * Recurrent networks, iterative relaxation, auto-associative pattern completion. * Methods specific to certain learning algorithms, e.g. trees, graphical models * Stochastic simulation and Bayesian posterior sampling Presenters so far include (alphabetically) Leo Breiman (tentative!), Zoubin Ghahramani, Bo Thiesson / Steffen Lauritzen (missing data in graphical models), and Volker Tresp. CALL FOR PRESENTATIONS Additional speakers are needed to present some of the methods mentioned above, or other topics of interest. If you would like to do so, or if you have additional suggestions to offer, please contact us as soon as possible. ORGANIZERS Harry Burke David Rosen New York Medical College, Department of Medicine, Valhalla NY 10595 USA FAX: +1.914.347.2419 FURTHER INFORMATION Check the workshop's Web page (the above is a snapshot of it) http://www.scs.unr.edu/~cbmr/nips/workshop-missing.html for further updates over time. It also has a link to the NIPS*95 home page. Those without direct access to the World Wide Web can use Agora, the email-based Web browser. Send the message HELP, or the message send http://www.scs.unr.edu/~cbmr/nips/workshop-missing.html to agora at www.undp.org . Any Subject header is ignored. 
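To make the warning above about univariate imputation concrete, here is a small self-contained Python sketch; the data and variable names are invented for illustration. It contrasts mean imputation with a conditional, regression-based fill-in of the kind to be discussed at the workshop:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: x2 depends linearly on x1; 30% of x2 then goes missing at random.
    n = 1000
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.3 * rng.normal(size=n)
    missing = rng.random(n) < 0.3
    x2_obs = np.where(missing, np.nan, x2)

    # (a) Univariate imputation: fill every gap with the observed mean of x2.
    x2_mean = np.where(missing, np.nanmean(x2_obs), x2_obs)

    # (b) Conditional imputation: regress x2 on x1 over the complete cases,
    #     then fill each gap with the predicted value given that case's x1.
    obs = ~missing
    slope, intercept = np.polyfit(x1[obs], x2[obs], 1)
    x2_cond = np.where(missing, slope * x1 + intercept, x2_obs)

    # Mean imputation attenuates the x1-x2 correlation; conditional imputation
    # restores it (indeed slightly overstates it, since filled-in points lie
    # exactly on the fitted line -- one reason multiple imputation adds noise).
    print("true corr:        ", np.corrcoef(x1, x2)[0, 1])
    print("mean-imputed corr:", np.corrcoef(x1, x2_mean)[0, 1])
    print("conditional corr: ", np.corrcoef(x1, x2_cond)[0, 1])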
-- David Rosen New York Medical College  From cohn at psyche.mit.edu Thu Oct 12 10:23:53 1995 From: cohn at psyche.mit.edu (David Cohn) Date: Thu, 12 Oct 95 10:23:53 EDT Subject: NIPS*95 program info available Message-ID: <9510121423.AA12917@psyche.mit.edu> Final Program NIPS*95 Neural Information Processing Systems: Natural and Synthetic November 27 - December 2 Denver, Colorado The final program for NIPS*95, along with other conference and registration information, is now available on the NIPS home page URL: http://www.cs.cmu.edu/Web/Groups/CNBC/nips/NIPS.html or by sending email to nips95 at mines.colorado.edu. Please note that the deadline for early registration is October 28th; registration costs rise by $50 after that date. --------------------------------------------------------------------- NIPS*95 Conference Highlights --------------------------------------------------------------------- Tutorials: November 27, 1995 Denver Marriott City Center, Denver, Colorado Formal Conference Sessions: November 28 - 30, 1995 Denver Marriott City Center, Denver, Colorado Post-Meeting Workshops: December 1-2, 1995 Marriott Hotel, Vail, Colorado NIPS is a single-track conference -- there will be no parallel sessions. Out of approximately 460 submitted papers, 30 will be presented as talks; another 110 will be presented as posters. All accepted papers will appear in the proceedings. A number of invited talks will survey active areas of research and lead off the sessions. These include: John H. McMasters, (Banquet Speaker) - Boeing Commercial Aircraft Company "Origins and Future of Flight: A Paleoecological Perspective" Bruce Rosen, Massachusetts General Hospital "Mapping Brain Function with Functional Magnetic Resonance Imaging" David Heckerman, Microsoft "Learning Bayesian Networks" Brian Ripley, Statistics, Oxford "Statistical Ideas for Selecting Network Architectures" Thomas McAvoy, University of Maryland "Application of Neural Networks in the Chemical Process Industries" Elizabeth Bates, UCSD, Cognitive Science Department "Brain Organization for Language in Children and Adults" --------------------------------------------------------------------- TUTORIAL PROGRAM --------------------------------------------------------------------- November 27, 1995 Session I: 9:30-11:30 a.m. "Functional Anatomy of Primate Vision" Gary Blasdel, Harvard Medical School "Neural Networks for Identification and Control" Kumpati Narendra, Yale University Session II: 1:00-3:00 p.m. "Cortical Circuits in a Multichip Communication Framework" Misha Mahowald, Institute for Neuroinformatics "Computational Learning and Statistical Prediction" Jerome Friedman, Stanford University Session III: 3:30-5:30 p.m. "Unsupervised Learning Procedures" Geoffrey Hinton, University of Toronto "Option Pricing in Modern Finance Theory and the Relevance of Artificial Neural Networks" Halbert White, University of California at San Diego --------------------------------------------------------------------- POST-MEETING WORKSHOPS --------------------------------------------------------------------- November 30 - December 2, 1995 The formal conference will be followed by post-meeting workshop sessions in Vail, Colorado. Registration for the workshops is optional. It includes the welcome reception, two continental breakfasts and one banquet dinner. The workshops will have morning (7:30-9:30 a.m.) and afternoon (4:30-6:30 p.m.) sessions each day and will be followed by a summary session at 7:00 p.m. on the final day. 
Early registration is strongly encouraged, as we may have to limit attendance. Early room reservations at Vail are also strongly encouraged. Below is a partial list of this year's workshops.
Noisy Time Series
Object Features for Visual Shape Representation
Neural Hardware Engineering
Benchmarking of NN Learning Algorithms
Symbolic Dynamics in Neural Processing
Prospects for Neural Human-Machine Interfaces
Neural Information and Coding
Modeling the Mind: Large Scale Research Projects
Vertebrate Neurophysiology and Neural Networks: can the teacher learn from the student?
Hybrid HMM/ANN Systems for Sequence Recognition
Robot Learning - Learning in the "Real World"
Transfer of Knowledge in Inductive Systems
The Dynamics of On-Line Learning
Optimization Problem Solving with Neural Nets
Neural Networks for Signal Processing
Statistical and Structural Models in Network Vision
Learning in Bayesian Networks and Other Graphical Models
Knowledge Acquisition, Consolidation, and Transfer within Neural Networks
Dealing with Incomplete Data in Classification and Regression
Topological Maps for Density Estimation, Regression and Classification
From inns_www at PARK.BU.EDU Thu Oct 12 23:08:06 1995 From: inns_www at PARK.BU.EDU (INNS Web Staff) Date: Thu, 12 Oct 1995 23:08:06 -0400 Subject: WCNN'96: Call for Papers Message-ID: <307DD816.167EB0E7@cns.bu.edu> CALL FOR PAPERS The 1996 World Congress on Neural Networks San Diego, California September 15--20, 1996 Town & Country Hotel The following information is also available on the INNS WEB site: http://cns-web.bu.edu/INNS/index.html
-------------------------------------------------------------------------------
Organizing Committee:
David Casasent, General Chair
Daniel L. Alkon, Program Chair
Bart Kosko, Program Chair
Shun-Ichi Amari, President
Walter J. Freeman, President, 1994
John G. Taylor, President, 1995
1995 INNS Officers:
President: John G. Taylor
President Elect: Shun-Ichi Amari
Past President: Walter J. Freeman
Secretary: Gail Carpenter
Treasurer: Judy Dayhoff
Headquarter Services: Stephanie Dickinson
1995 Governing Board:
Daniel L. Alkon, James A. Anderson, David Casasent, Leon Cooper, Rolf Eckmiller, Francoise Fogelman-Soulie, Kunihiko Fukushima, Stephen Grossberg, Christof Koch, Bart Kosko, Daniel S. Levine, Christoph von der Malsburg, Alianna Maren, Harold Szu, Paul Werbos, Bernard Widrow, Lotfi A. Zadeh
-------------------------------------------------------------------------------
Call for Papers
Papers must be received, in English, by January 15, 1996. There is a four-page limit, with a $25 per page fee for papers over four pages. Overlength charges can be paid by check (payable in U.S. Dollars and issued by a U.S. Correspondent Bank, to WCNN'96), Visa or MasterCard. (Should a paper be rejected, the fee will be refunded.) Papers must be on 8-1/2" x 11" white paper with 1" margins on all sides, one-column format, single spaced, in Times or similar type style of 10 points or larger, one side of paper only. Faxes are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s), and mailing and e-mail address(es), followed by blank space, abstract (up to 15 lines), and text.
The following information must be included in an accompanying cover letter for the paper to be reviewed: full title of paper; corresponding author and presenting author name, address, telephone and fax numbers; 1st and 2nd choices of technical sessions (see below); oral or poster session preferred; and audio-visual requirements (for oral presentation only). Papers that do not meet these requirements or that arrive with insufficient funds will be returned. The original and five copies of each paper should be sent to: WCNN'96 Program Chairs 875 Kings Highway, Suite 200 Woodbury, NJ 08096-3172 U.S.A. The program committee will determine whether papers are presented orally or as posters. All members of INNS in good standing have the right to designate in their cover letters one (1) 15-line abstract, with themselves as an author, for automatic acceptance for at least publication and a poster presentation; this is in addition to any full paper submissions. Biological and engineering papers are welcome for all sessions. Contributed papers are welcome for all twenty-six sessions, including special sessions.
-------------------------------------------------------------------------------
SESSION TOPICS
1. Vision
2. Speech
3. Neurocontrol and Robotics
4. Supervised Learning
5. Unsupervised Learning
6. Pattern Recognition
7. Prediction and System Identification
8. Intelligent Systems
9. Computational Neuroscience
10. Signal Processing
11. Neurodynamics & Chaos
12. Hardware Implementation
13. Associative Memory and Reinforcement Learning
14. Applications
15. Mathematical Foundations
16. Evolutionary/Genetic/Annealing Algorithms
17. Neural and Fuzzy Systems
18. Fuzzy Approximation and Applications
19. Medical Applications
20. Industrial Applications
SPECIAL SESSIONS
A. Consciousness & Intentionality
B. Biological Neural Networks
C. Dynamical Systems in Financial Engineering
D. Power Industry Applications
E. Statistics & Neural Networks
F. INNS Special Interest Groups (SIGINNS)
-------------------------------------------------------------------------------
From nin at cns.brown.edu Fri Oct 13 12:03:52 1995 From: nin at cns.brown.edu (Nathan Intrator) Date: Fri, 13 Oct 95 12:03:52 EDT Subject: NIPS Workshop: Object Features for Visual Shape Representation Message-ID: <9510131603.AA12898@cns.brown.edu> First announcement of the following workshop. Updates and a call for presentations are on the web page. ---------------------------------------------------------------------- Object Features for Visual Shape Representation NIPS-95 Workshop: Saturday, Dec 2, 1995 Organizers: Shimon Edelman, Nathan Intrator http://www.physics.brown.edu/~nin/workshop95.html Overview Object recognition can be regarded as a comparison between the stimulus shape and a library of reference shapes stored in long-term memory. It is not likely that the visual system stores exact templates or snapshots of familiar objects, both for pragmatic reasons (the appearance of a 3D object depends on the viewing conditions, making a close match between the stimulus and a template unattainable), and because of computational limitations that have to do with the curse of dimensionality. Many competing approaches to the extraction of features useful for shape representation have been proposed in the past. The workshop will explore and compare some of these approaches. We shall be particularly interested in discussing different approaches for evaluating feature extraction rules: information theory, statistics, pattern recognition, etc.
We would like to elaborate on the goal of feature/information extraction in early visual cortex, the relevance of the statistics of the input environment to studying learning rules, and comparisons between visual cortical plasticity models. Presentation of psychophysical and neurobiological data relevant to the feature issue will be encouraged. POTENTIAL PARTICIPANTS: connectionists / feature extraction people / vision researchers / neurobiologists working on perceptual learning
Invited Speakers
----------------
Joseph Atick, Rockefeller (Tentative)
Horace Barlow, Cambridge (Tentative)
Elie Bienenstock, CNRS, Brown
Ichiro Fujita, Osaka
Stu Geman, Brown
Tai Sing Lee, Harvard
Bruno A. Olshausen, Cornell
Tommy Poggio, MIT
Dan Ruderman, USC (Tentative)
Harel Shouval, Brown
From payman at ebs330.eb.uah.edu Fri Oct 13 18:47:14 1995 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Fri, 13 Oct 95 17:47:14 CDT Subject: Computational Intelligence in Financial Engineering - CIFEr'96 Message-ID: <9510132247.AA24566@ebs330> Call for Papers Conference on Computational Intelligence for Financial Engineering CIFEr Conference =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Visit us on the World Wide Web for the latest updates and information at http://www.ieee.org/nnc/conferences/cfp/cifer96.html The homepage can also be accessed via the "Conferences" page of the IEEE Neural Network Council's Homepage at http://www.ieee.org/nnc The deadlines mentioned in this CFP supersede those in the hard copy version. Our homepage will be updated on October 15 to reflect these new deadlines. -- Payman Arabshahi Electronic Publicity Chair, CIFEr'96 Tel : (205) 895-6380 Dept. of Electrical & Computer Eng. Fax : (205) 895-6803 University of Alabama in Huntsville payman at ebs330.eb.uah.edu Huntsville, AL 35899 http://www.eb.uah.edu/ece/
----------------------------------------------------------------------
IEEE/IAFE 1996
Call for Papers Conference on Computational Intelligence for Financial Engineering CIFEr Conference March 24-26, 1996, New York City, Crowne Plaza Manhattan Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers The IEEE/IAFE CIFEr Conference is the second annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management.
Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following:
CONFERENCE TOPICS
-----------------
> Financial Engineering Applications: Trading Systems; Forecasting; Hedging Strategies; Risk Management; Pricing of Structured Securities; Systemic Risk; Asset Allocation; Exotic Options
> Computer & Engineering Applications & Models: Neural Networks; Probabilistic Reasoning; Fuzzy Systems and Rough Sets; Stochastic Processes; Dynamic Optimization; Time Series Analysis; Non-linear Dynamics; Evolutionary Computation
INSTRUCTIONS FOR AUTHORS, PANEL PROPOSALS, SPECIAL SESSIONS, TUTORIALS
----------------------------------------------------------------------
All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at Meeting Management by December 1, 1995. We intend to publish a book containing the best of the accepted papers.
AUTHORS (FOR CONFERENCE ORAL SESSIONS)
--------------------------------------
One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Meeting Management by December 1, 1995. Centered at the top of the first page should be the paper's complete title, author name(s), affiliation(s), and mailing address(es). Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously and is not under consideration for publication elsewhere. In the letter accompanying the submission, the following information should be included: * Topic(s) * Full title of paper * Corresponding Author's name * Mailing address * Telephone and fax * E-mail (if available) * Presenter (If different from corresponding author, please provide name, mailing address, etc.) Authors will be notified of acceptance of the Extended Summary by January 10, 1996. Complete papers (up to a maximum of seven 8.5 inch by 11 inch pages) will be due by February 9, 1996, and will be published in the conference proceedings.
SPECIAL SESSIONS
----------------
A limited number of special sessions will address subjects within the topical scope of the conference. Each special session will consist of from four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include: * Topic(s) * Title of Special Session * Name, address, phone, fax, and email of the Session Organizer * List of paper titles with authors' names and addresses * One page of summaries of all papers Notification of acceptance of special session proposals will be on January 10, 1996. If a proposal for a special session is accepted, the authors will be required to submit a camera-ready copy of their paper for the conference proceedings by February 9, 1996.
PANEL PROPOSALS
---------------
Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities. Notification of acceptance of panel session proposals will be on January 10, 1996.
TUTORIAL PROPOSALS
------------------
Proposals for tutorials addressing subjects within the topical scope of the conference will be considered.
Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered. Notification of acceptance of tutorial proposals will be on January 10, 1996.
EXHIBIT INFORMATION
-------------------
Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr secretariat, Barbara Klemm.
SPONSORS
--------
Sponsorship for the CIFEr Conference is being provided by the IAFE (International Association of Financial Engineers) and the IEEE Neural Networks Council. The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and application of computational techniques to finance. Early registration is $400 for IEEE (Institute of Electrical and Electronic Engineers, Neural Networks Council) and IAFE (International Association of Financial Engineers) members. For details contact Barbara Klemm at Meeting Management.
INFORMATION
-----------
CIFEr Secretariat: Meeting Management IEEE/IAFE Computational Intelligence for Financial Engineering 2603 Main Street, Suite #690 Irvine, California 92714 Tel: (714) 752-8205 or (800) 321-6338 Fax: (714) 752-7444 Email: 74710.2266 at compuserve.com Visit us on the World Wide Web for the latest updates: http://www.ieee.org/nnc/conferences/cfp/cifer96.html
ORGANIZING COMMITTEE
--------------------
Keynote Speaker: Stephen Figlewski, Professor of Finance and Editor of the Journal of Derivatives Stern School of Business, New York University John M. Mulvey, Professor and Director Engineering Management Systems Princeton University, Princeton Conference Committee General Co-chairs: John Marshall, Professor of Financial Engineering Polytechnic University, New York, NY Robert Marks, Professor of Electrical Engineering, University of Washington, Seattle, WA Program Committee Co-chairs: Benjamin Melamed, Ph.D., Research Scientist RUTCOR-Rutgers University's Center for Operations Research Alan Tucker, Associate Professor of Finance Pace University, New York, NY International Liaison: Arnold Jang, Vice President, Intelligent Trading Systems Springfields Investments Advisory Company, Taipei, Taiwan Organizational Chair: Robert Golan, President Rough Knowledge Discovery Inc., Calgary, Alberta Finance Chair: Ingrid Marshall, Accountant Marshall & Marshall, Stroudsburg, PA Exhibits Chair: Steve Piche, Lead Scientist Pavillion Inc, Austin Program Co-Chairs: Alan Tucker and Benjamin Melamed Program Committee: Phelim Boyle, Professor of Accounting University of Waterloo, Waterloo, Ontario Mark Broadie, Associate Professor of Finance Graduate School of Business Columbia University, New York, NY Jan Dash, Ph.D, Managing Director Smith Barney, New York, NY Stephen Figlewski, Professor of Finance New York University, New York, NY Roy S. Freedman, Ph.D, President Inductive Solutions, Inc, New York, NY Peter L.
Hammer, Professor and Director RUTCOR-Rutgers University's Center for Operations Research, New Brunswick, NJ Jimmy E. Hilliard, Professor of Finance University of Georgia, Athens, GA John Hull, Professor of Management University of Toronto, Toronto, Ontario Yuval Lirov, Ph.D., Vice President Lehman Brothers, Inc, New York, NY David G. Luenberger, Professor of Electrical Engineering Stanford University, Stanford, CA John M. Mulvey, Professor and Director Engineering Management Systems Princeton University, Princeton, NJ Jason Z. Wei, Associate Professor of Finance University of Saskatchewan, Saskatoon Robert E. Whaley, Professor of Business Futures and Options Research Center Duke University, Durham, NC Publicity Chair Michael Wolf, General Manager Financial Products, The Mathworks, Inc., Natick, MA Electronic Publicity Chair Payman Arabshahi, Assistant Professor of Electrical Engineering University of Alabama in Huntsville, Huntsville Conference Liaison Scott Mathews, Senior Associate Marshall, Tucker, and Associates, Edmonds, WA  From tony at discus.anu.edu.au Mon Oct 16 02:52:50 1995 From: tony at discus.anu.edu.au (Tony BURKITT) Date: Mon, 16 Oct 1995 16:52:50 +1000 Subject: ACNN'96: Call for Papers Message-ID: <199510160652.QAA04463@cslab.anu.edu.au> C A L L F O R P A P E R S ACNN'96 SEVENTH AUSTRALIAN CONFERENCE ON NEURAL NETWORKS 10th - 12th APRIL 1996 Australian National University Canberra, Australia The seventh Australian conference on neural networks will be held in Canberra on April 10th - 12th 1996 at the Australian National University. ACNN'96 is the annual national meeting of the Australian neural network community. It is a multi-disciplinary meeting and seeks contributions from Neuroscientists, Engineers, Computer Scientists, Mathematicians, Physicists and Psychologists. ACNN'96 will feature a number of invited speakers. The program will include lecture presentations and poster sessions. Proceedings will be printed and distributed to the attendees. The posters will be displayed for a significant period of time, and time will be allocated for authors to be present at their poster in the conference program. Pre-Conference Workshops and Tutorials Proposals for Pre-Conference Workshops and Tutorials are invited. These are to be held on Tuesday 9th April at the same venue as the conference. People wishing to organize such workshops or tutorials are invited to submit a precis at the same time as the submission deadline for papers, and these will be advertised. Invited Keynote Speakers ACNN'96 will feature a number of keynote speakers, including Professor Wolfgang Maass, Technical University Graz. Further details to be announced. Submission Categories The major categories for paper submissions include: 1. Computational Neuroscience: Integrative function of neural networks in Vision, Audition, Motor, Somatosensory and Autonomic functions; Synaptic function; Cellular information processing; 2. Theory: Learning; Generalisation; Complexity; Scaling; Stability; Dynamics; 3. Implementation: Hardware implementation of neural nets; Analog and digital VLSI implementation; Optical implementation; 4. Architectures and Learning Algorithms: New architectures and learning algorithms; Hierarchy; Modularity; Learning pattern sequences; Information integration; 5. 
Cognitive Science and AI: Computational models of perception and pattern recognition; Memory; Concept formation; Problem solving and reasoning; Visual and auditory attention; Language acquisition and production; Neural network implementation of expert systems; 6. Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, Machine vision; Motor control; Robotics; Forecasting; Medical.
Initial Submission of Papers
In addition to the normal requirements of technical merit, and because this is a multi-disciplinary meeting, papers are required to be comprehensible to an informed researcher outside the author's particular stream. Papers should be submitted as close as possible to final form and must not exceed six single A4 pages (2-column format). A cover page should be supplied giving the title of the paper, the name and affiliation of each author, together with the postal address, the e-mail address, and the phone and fax numbers of a designated contact author. The type font should be no smaller than 10 point except in footnotes. A serif font such as Times or New Century Schoolbook is preferred. A LaTeX style file and a LaTeX template file specifying the final format are available by ftp (in the directory ftp://syseng.anu.edu.au/pub/acnn96/paperformat). Five copies of the paper and the front cover sheet should be sent to: ACNN'96 Secretariat L.P.O. Box 228 Australian National University Canberra, ACT 2601 Australia Each manuscript should clearly indicate the submission category (from the six listed) and the authors' preference for oral or poster presentation. This initial submission must be on hard copy to reach us by Friday, 1 December 1995. ACNN'96 will include a special poster session devoted to recent work and work-in-progress. Abstracts are solicited for this session (1 page limit), and may be submitted up to one week before the commencement of the conference. They will not be refereed or included in the proceedings, but will be distributed to attendees upon arrival. Students are especially encouraged to submit abstracts for this session.
Submission Deadlines
Friday, 1 December 1995: Deadline for receipt of paper submissions
Friday, 19 January 1996: Notification of acceptance
Friday, 16 February 1996: Final papers in camera-ready form for printing
Venue Huxley Lecture Theatre, Leonard Huxley Building, Mills Road, Australian National University, Canberra, Australia.
ACNN'96 Organising Committee Peter Bartlett Australian National University Tony Burkitt Australian National University Bob Williamson Australian National University
ACNN'96 Technical Program Committee Tom Downs University of Queensland Bill Gibson University of Sydney Andrew Heathcote University of Newcastle Marwan Jabri University of Sydney Adam Kowalczyk Telecom Research Laboratories Cyril Latimer University of Sydney Wee Sun Lee Australian National University M. V. Srinivasan Australian National University
Registrations The registration fee to attend ACNN'96 is: Full Time Students A$120.00 Academics A$260.00 Other A$380.00 A discount of 20% applies for advance registration. Registration forms must be posted before February 16th, 1996, to be entitled to the discount. To be eligible for the Full Time Student rate, a letter from the Head of Department as verification of enrollment is required. There is a registration form at the end of this document.
Accommodation Delegates will have to make their own accommodation arrangements directly with the college or hotel of their choice.
A list of accommodation close to the conference venue is available (see "Further Information").
Further Information For further information and registration forms, contact: ACNN'96 Secretariat L.P.O. Box 228 Australian National University Canberra, ACT 2601 Australia Tel: 06 - 249 5645 WWW page: http://wwwsyseng.anu.edu.au/acnn96/ ftp site: syseng.anu.edu.au:pub/acnn96 or via email: acnn96 at anu.edu.au
From andreas at sabai.cs.colorado.edu Tue Oct 24 00:14:30 1995 From: andreas at sabai.cs.colorado.edu (Andreas Weigend) Date: Mon, 23 Oct 1995 22:14:30 -0600 (MDT) Subject: Noisy Time Series (2-day NIPS workshop) Message-ID: <199510240414.WAA28537@sabai.cs.colorado.edu> [A non-text attachment was scrubbed by the list archiver; the announcement survives only at https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/0e14ac8e/attachment.ksh]
From wray at Heuristicrat.COM Mon Oct 23 20:32:35 1995 From: wray at Heuristicrat.COM (Wray Buntine) Date: Mon, 23 Oct 95 17:32:35 PDT Subject: ISIS: Information, Statistics and Induction in Science Message-ID: <9510240032.AA28178@euclid.Heuristicrat.COM> *** CALL FOR PAPERS *** ISIS: Information, Statistics and Induction in Science Melbourne, Australia, 20-23 August 1996 Conference Chair: David Dowe Co-chairs: Kevin Korb and Jonathan Oliver INVITED SPEAKERS: Henry Kyburg, Jr. (University of Rochester, NY) J. Ross Quinlan (Sydney University) Jorma J. Rissanen (IBM Almaden Research, San Jose, California) Ray Solomonoff (U.S.A.) PROGRAM COMMITTEE: Lloyd Allison, Mark Bedau, Hamparsum Bozdogan, Wray Buntine, Peter Cheeseman, Honghua Dai, David Dowe, Doug Fisher, Alex Gammerman, Clark Glymour, Randy Goebel, David Hand, Bill Harper, David Heckerman, Colin Howson, Lawrence Hunter, Frank Jackson, Max King, Kevin Korb, Henry Kyburg, Ming Li, Nozomu Matsubara, Aleksandar Milosavljevic, Richard Neapolitan, Jonathan Oliver, Michael Pazzani, J. Ross Quinlan, Glenn Shafer, Peter Slezak, Ray Solomonoff, Paul Thagard, Neil Thomason, Raul Valdes-Perez, Tim van Gelder, Paul Vitanyi, Chris Wallace, Geoff Webb, Xindong Wu, Jan Zytkow. Inquiries to: isis96 at cs.monash.edu.au David Dowe: dld at cs.monash.edu.au Kevin Korb: korb at cs.monash.edu.au or Jonathan Oliver: jono at cs.monash.edu.au Information is available on the WWW at: http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml This conference will explore the use of computational modelling to understand and emulate inductive processes in science. The problems involved in building and using such computer models reflect methodological and foundational concerns common to a variety of academic disciplines, especially statistics, artificial intelligence (AI) and the philosophy of science. This conference aims to bring together researchers from these and related fields to present new computational technologies for supporting or analysing scientific inference and to engage in collegial debate over the merits and difficulties underlying the various approaches to automating inductive and statistical inference. AREAS OF INTEREST. The following streams/subject areas are of particular interest to the organisers: Concept Formation and Classification. Minimum Encoding Length Inference Methods. Scientific Discovery. Theory Revision. Bayesian Methodology. Foundations of Statistics. Foundations of Social Science. Foundations of AI. CALL FOR PAPERS. Prospective authors should mail five copies of their papers to Dr. David Dowe, ISIS chair.
Alternatively, authors may submit by email to isis96 at cs.monash.edu.au. Email submissions must be in LaTeX (using the ISIS style guide, which will be available at the ISIS WWW page). Submitted papers should be in double-column format in 10 point font and should not exceed 10 pages. An additional page should display the title, author(s) and affiliation(s), abstract, keywords and identification of which of the eight areas of interest (see http://www.cs.monash.edu.au/~jono/ISIS/ISIS.Area.Interest.html) are most relevant to the paper. Refereeing will be blind; that is, this additional page will not be passed along to referees. The proceedings will be published; details have not yet been settled with the prospective publisher. Accepted papers must be represented by at least one author in attendance in order to be published. Papers should be sent to: Dr David Dowe ISIS chair Department of Computer Science Monash University Clayton Victoria 3168 Australia Phone: +61-3-9 905 5226 FAX: +61-3-9 905 5146 Email: isis96 at cs.monash.edu.au Submission (receipt) deadline: 11 March, 1996 Notification of acceptance: 10 June, 1996 Camera-ready copy (receipt) deadline: 15 July, 1996 CONFERENCE VENUE ISIS will be held at the Old Melbourne Hotel, 5-17 Flemington Rd, North Melbourne. The Old Melbourne Hotel is within easy walking distance of downtown Melbourne, Melbourne University, many restaurants (on Lygon Street) and the Melbourne Zoo. It is about fifteen to twenty minutes' drive from the airport. REGISTRATION A registration form will be available at the WWW site http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml, or by mail from the conference chair. Registration deadlines will be considered met provided the postmark is legible, dated on or before the deadline, and airmail is used. Student registrations will be available at a discount (but prices have not yet been fixed). Relevant dates are: Early registration (at a discount): 3 June, 1996 Final registration: 1 July, 1996
From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Wed Oct 25 23:57:34 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Wed, 25 Oct 95 23:57:34 -0400 Subject: problem fixed re: Connectionists list Message-ID: <14308.814679854@DST.BOLTZ.CS.CMU.EDU> This past week we had a problem with the processing of the distribution list used for CONNECTIONISTS. As a result, several messages didn't get sent out to subscribers. The problem has been corrected, and we are now redistributing the messages that we believe got eaten. Apologies to anyone who receives a duplicate copy. -- Dave Touretzky & Lisa Saksida
From jagota at ponder.csci.unt.edu Tue Oct 17 13:10:48 1995 From: jagota at ponder.csci.unt.edu (Jagota Arun Kumar) Date: Tue, 17 Oct 95 12:10:48 -0500 Subject: NIPS*95 workshop on Optimization Message-ID: <9510171710.AA08474@ponder> Dear Connectionists: Attached is a description of the NIPS*95 workshop on optimization. For up-to-date information, including abstracts of talks, see the URL. We might be able to fit in one or two more talks. Send me a title and abstract by e-mail if you'd like to give a talk.
Arun Jagota ---------------------------------------------------------------------- OPTIMIZATION PROBLEM SOLVING WITH NEURAL NETS NIPS95 Workshop, Organizer: Arun Jagota Friday Dec 1 1995, 7:30--9:30 AM and 4:30--6:30 PM E-mail: jagota at cs.unt.edu Workshop URL: http://www.msci.memphis.edu/~jagota/NIPS95 Ever since the work of Hopfield and Tank, neural nets have found increasing use in the approximate solution of difficult optimization problems, arising in many applications. Such neural nets are well-suited in principle to these problems, because they minimize, in parallel form, an energy function into which an optimization problem's objective and constraints can be mapped. Unfortunately, they often haven't worked well in practice, for two reasons. First, mapping the objective and constraints of a problem onto a single good energy function has turned out to be difficult for certain problems, for example the Travelling Salesman Problem. The ease or difficulty of mapping has, moreover, turned out to be problem-dependent, making it difficult to find a good general mapping methodology. Second, the dynamical algorithms have often been limited to some form of local search or gradient-descent. In recent years, there have been significant advances on both fronts. Provably good mappings of several optimization problems have been found. Powerful dynamical algorithms that go beyond gradient-descent have also been developed, with ideas borrowed from different fields. Examples are Mean Field Annealing, Simulated Annealing, Projection Methods, and Randomized Multi-Start Algorithms. This workshop aims to take stock of the state of the art on this topic, and to study directions for future research and applications. Target Audience Both the topics---neural nets and optimization---are of relevance to a wide range of disciplines and we hope that several of these will be represented at this workshop. These include Cognitive Science, Computer Science, Engineering, Mathematics, Neurobiology, Physics, Chemistry, and Psychology. Format 6-8 30-minute talks, each including 5 minutes for discussion. 30 minutes for discussion at the end. Talks The Complexity of Stability in Hopfield Networks Ian Parberry, University of North Texas Title to be announced Anand Rangarajan, Yale University Performance of Neural Network Algorithms for Maximum Clique on Highly Compressible Graphs Arun Jagota, University of North Texas Population-based Incremental Learning Shumeet Baluja, Carnegie-Mellon University How Good are Neural Networks Algorithms for the Travelling Salesman Problem? Marco Budinich, Dipartimento di Fisica, Via Valerio 2, 34127 Trieste ITALY Relaxation Labeling Networks for the Maximum Clique Problem Marcello Pelillo, University of Venice, Italy ----------------------------------------------------------------------  From shawn_mikiten at biad23.uthscsa.edu Wed Oct 18 14:03:51 1995 From: shawn_mikiten at biad23.uthscsa.edu (shawn mikiten) Date: 18 Oct 1995 14:03:51 U Subject: BrainMap '95 Conference ann Message-ID: The upcoming BrainMap '95 Conference on December 3 & 4 will be in San Antonio, TX. Anyone involved in, or interested in, developing databases in brain mapping and/or behaviors is welcome to apply.
If you have access to WWW the URL is: http://ric.uthscsa.edu/services/95  From robert at fit.qut.edu.au Fri Oct 20 01:55:04 1995 From: robert at fit.qut.edu.au (Robert Andrews) Date: Fri, 20 Oct 1995 15:55:04 +1000 Subject: Rule Extraction From ANNs - AISB96 Workshop Message-ID: <199510200555.PAA25350@ocean.fit.qut.edu.au> ============================================================= FIRST CALL FOR PAPERS AISB-96 WORKSHOP Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) University of Sussex, Brighton, England April 2, 1996 -------------------------------------------- RULE-EXTRACTION FROM TRAINED NEURAL NETWORKS -------------------------------------------- Robert Andrews Neurocomputing Research Centre Queensland University of Technology Brisbane 4001 Queensland, Australia Phone: +61 7 864-1656 Fax: +61 7 864-1969 E-mail: robert at fit.qut.edu.au Joachim Diederich Neurocomputing Research Centre Queensland University of Technology Brisbane 4001 Queensland, Australia Phone: +61 7 864-2143 Fax: +61 7 864-1801 E-mail: joachim at fit.qut.edu.au Lee Giles NEC Research Institute 4 Independence Way Princeton, NJ 08540 The objective of the workshop is to provide a discussion platform for researchers interested in Artificial Neural Networks (ANNs), Artificial Intelligence (AI) and Cognitive Science. The workshop should be of considerable interest to computer scientists and engineers as well as to cognitive scientists and people interested in ANN applications which require a justification of a classification or inference. INTRODUCTION It is becoming increasingly apparent that without some form of explanation capability, the full potential of trained Artificial Neural Networks may not be realised. The problem is an inherent inability to explain, in a comprehensible form, the process by which a given decision or output generated by an ANN has been reached. For Artificial Neural Networks to gain an even wider degree of user acceptance and to enhance their overall utility as learning and generalisation tools, it is highly desirable if not essential that an `explanation' capability becomes an integral part of the functionality of a trained ANN. Such a requirement is mandatory if, for example, the ANN is to be used in what are termed `safety critical' applications such as airlines and power stations. In these cases it is imperative that a system user be able to validate the output of the Artificial Neural Network under all possible input conditions. Further, the system user should be provided with the capability to determine the set of conditions under which an output unit within an ANN is active and when it is not, thereby providing some degree of transparency of the ANN solution. Craven & Shavlik (1994) define the rule-extraction from neural networks task as follows: "Given a trained neural network and the examples used to train it, produce a concise and accurate symbolic description of the network." The following discussion of the importance of rule-extraction algorithms is based on this definition. THE IMPORTANCE OF RULE-EXTRACTION ALGORITHMS Since rule extraction from trained Artificial Neural Networks comes at a cost in terms of resources and additional effort, an early imperative in any discussion is to delineate the reasons why rule extraction is an important, if not mandatory, extension of conventional ANN techniques.
The merits of including rule extraction techniques as an adjunct to conventional Artificial Neural Network techniques include: Data exploration and the induction of scientific theories Over time neural networks have proven to be extremely powerful tools for data exploration with the capability to discover previously unknown dependencies and relationships in data sets. As Craven and Shavlik (1994) observe, `a (learning) system may discover salient features in the input data whose importance was not previously recognised.' However, even if a trained Artificial Neural Network has learned interesting and possibly non-linear relationships, these relationships are encoded incomprehensibly as weight vectors within the trained ANN and hence cannot easily serve the generation of scientific theories. Rule-extraction algorithms significantly enhance the capabilities of ANNs to explore data to the benefit of the user. Provision of a `user explanation' capability Experience has shown that an explanation capability is considered to be one of the most important functions provided by symbolic AI systems. In particular, the salutary lesson from the introduction and operation of Knowledge Based systems is that the ability to generate even limited explanations (in terms of being meaningful and coherent) is absolutely crucial for the user-acceptance of such systems. In contrast to symbolic AI systems, Artificial Neural Networks have no explicit declarative knowledge representation. Therefore they have considerable difficulty in generating the required explanation structures. It is becoming increasingly apparent that the absence of an `explanation' capability in ANN systems limits the realisation of the full potential of such systems and it is this precise deficiency that the rule extraction process seeks to redress. Improving the generalisation of ANN solutions Where a limited or unrepresentative data set from the problem domain has been used in the ANN training process, it is difficult to determine when generalisation can fail even with evaluation methods such as cross-validation. By being able to express the knowledge embedded within the trained Artificial Neural Network as a set of symbolic rules, the rule-extraction process may provide an experienced system user with the capability to anticipate or predict a set of circumstances under which generalisation failure can occur. Alternatively the system user may be able to use the extracted rules to identify regions in input space which are not represented sufficiently in the existing ANN training set data and to supplement the data set accordingly. A CLASSIFICATION SCHEME FOR RULE EXTRACTION ALGORITHMS The method of classification proposed here is in terms of: (a) the expressive power of the extracted rules; (b) the `translucency' of the view taken within the rule extraction technique of the underlying Artificial Neural Network units; (c) the extent to which the underlying ANN incorporates specialised training regimes; (d) the `quality' of the extracted rules; and (e) the algorithmic `complexity' of the rule extraction/rule refinement technique. The `translucency' dimension of classification is of particular interest. It is designed to reveal the relationship between the extracted rules and the internal architecture of the trained ANN. It comprises two basic categories of rule extraction techniques viz `decompositional' and `pedagogical' and a third - labelled as `eclectic' - which combines elements of the two basic categories. 
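To make the two basic categories concrete before they are elaborated below, the following minimal sketch illustrates the `pedagogical' (black-box) recipe: query the trained network for labels on fresh inputs, then fit a comprehensible symbolic learner to the network's input/output behaviour. This is an illustrative sketch only, not any specific algorithm discussed at the workshop; the toy task, all names, and the use of present-day Python/scikit-learn (rather than 1995-era tools) are our own assumptions.

# Pedagogical rule extraction, sketched: the trained ANN is an opaque oracle
# whose answers become the training data for a symbolic learner (a tree).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# A stand-in "trained ANN": an MLP fitted on a simple two-feature task.
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X[:, 0] > 0.2) & (X[:, 1] < 0.5)).astype(int)   # hidden target concept
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

# Query the black box on fresh inputs; the network itself is the teacher.
X_query = rng.uniform(-1, 1, size=(2000, 2))
y_query = ann.predict(X_query)

# Fit a shallow decision tree to the network's answers and print its rules.
tree = DecisionTreeClassifier(max_depth=3).fit(X_query, y_query)
print(export_text(tree, feature_names=["x0", "x1"]))

# Fidelity: how closely the extracted rules mimic the network (not the data).
print("fidelity:", (tree.predict(X_query) == y_query).mean())

Note that the figure of merit here is fidelity to the network rather than accuracy on the original data - one of the `rule quality' criteria raised in the discussion points below.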
The distinguishing characteristic of the `decompositional' approach is that the focus is on extracting rules at the level of individual (hidden and output) units within the trained Artificial Neural Network. Hence the `view' of the underlying trained Artificial Neural Network is one of `transparency'. Within the translucency dimension, the label `pedagogical' is given to those rule extraction techniques which treat the trained ANN as a `black box', ie the view of the underlying trained Artificial Neural Network is `opaque'. The core idea in the `pedagogical' approach is to `view rule extraction as a learning task where the target concept is the function computed by the network and the input features are simply the network's input features'. Hence the `pedagogical' techniques aim to extract rules that map inputs directly into outputs. Where such techniques are used in conjunction with a symbolic learning algorithm, the basic motif is to use the trained Artificial Neural Network to generate examples for the learning algorithm. As indicated above, the proposed third category in this classification scheme comprises composites which incorporate elements of both the `decompositional' and `pedagogical' (or `black-box') rule extraction techniques. This is the `eclectic' group. Membership in this category is assigned to techniques which utilise knowledge about the internal architecture and/or weight vectors in the trained Artificial Neural Network to complement a symbolic learning algorithm. An ancillary problem to that of rule extraction from trained ANNs is that of using the ANN for the `refinement' of existing rules within symbolic knowledge bases. The goal in rule refinement is to use a combination of ANN learning and rule extraction techniques to produce a `better' (ie a `refined') set of symbolic rules which can then be applied back in the original problem domain. In the rule refinement process, the initial rule base (ie what may be termed `prior knowledge') is inserted into an ANN by programming some of the weights. The rule refinement process then proceeds in the same way as normal rule extraction viz (1) train the network on the available data set(s); and (2) extract (in this case the `refined') rules - with the proviso that the rule refinement process may involve a number of iterations of the training phase rather than a single pass. DISCUSSION POINTS FOR WORKSHOP PARTICIPANTS 1. Decompositional vs. learning approaches to rule-extraction from ANNs - What are the advantages and disadvantages w.r.t. performance, solution time, computational complexity, problem domain, etc.? Are decompositional approaches always dependent on a certain ANN architecture? 2. Rule-extraction from trained neural networks vs. symbolic induction. What are the relative strengths and weaknesses? 3. What are the most important criteria for rule quality? 4. What are the most suitable representation languages for extracted rules? How does the extraction problem vary across different languages? 5. What is the relationship between rule-initialisation (insertion) and rule-extraction? For instance, are these equivalent or complementary processes? How important is rule-refinement by neural networks? 6. Rule-extraction from trained neural networks and computational learning theory. Is generating a minimal rule-set which mimics an ANN a hard problem? 7. Does rule-initialisation result in faster learning and improved generalisation? 8. To what extent are existing extraction algorithms limited in their applicability?
How can these limitations be addressed? 9. Are there any interesting rule-extraction success stories? That is, problem domains in which the application of rule-extraction methods has resulted in an interesting or significant advance. ACKNOWLEDGEMENT Many thanks to Mark Craven and Alan Tickle for comments on earlier versions of this proposal. RELEVANT PUBLICATIONS Andrews, R, Diederich, J and Tickle, A B: A survey and critique of techniques for extracting rules from trained artificial neural networks. To appear: Knowledge-Based Systems, 1995 (ftp:fit.qut.edu.au//pub/NRC/ps/QUTNRC-95-01-02.ps.Z) Andrews, R and Geva, S: `Rule extraction from a constrained error back propagation MLP' Proc. 5th Australian Conference on Neural Networks Brisbane Queensland (1994) pp 9-12 Andrews, R and Geva, S `Inserting and extracting knowledge from constrained error back propagation networks' Proc. 6th Australian Conference on Neural Networks Sydney NSW (1995) Craven, M W and Shavlik, J W `Using sampling and queries to extract rules from trained neural networks' Machine Learning: Proceedings of the Eleventh International Conference (San Francisco CA) (1994) (in print) Diederich, J `Explanation and artificial neural networks' International Journal of Man-Machine Studies Vol 37 (1992) pp 335-357 Fu, L M `Neural networks in computer intelligence' McGraw Hill (New York) (1994) Fu, L M `Rule generation from neural networks' IEEE Transactions on Systems, Man, and Cybernetics Vol 24 No 8 (1994) pp 1114-1124 Gallant, S `Connectionist expert systems' Communications of the ACM Vol 31 No 2 (February 1988) pp 152-169 Giles, C L and Omlin, C W `Rule refinement with recurrent neural networks' Proc. of the IEEE International Conference on Neural Networks (San Francisco CA) (March 1993) pp 801-806 Giles, C L and Omlin, C W `Extraction, insertion, and refinement of symbolic rules in dynamically driven recurrent networks' Connection Science Vol 5 Nos 3 and 4 (1993) pp 307-328 Giles, C L, Miller, C B, Chen, D, Chen, H, Sun, G Z and Lee, Y C `Learning and extracting finite state automata with second-order recurrent neural networks' Neural Computation Vol 4 (1992) pp 393-405 Hayward, R.; Pop, E.; Diederich, J.: Extracting Rules for Grammar Recognition from Cascade-2 Networks. Proceedings, IJCAI-95 Workshop on Machine Learning and Natural Language Processing. McMillan, C, Mozer, M C and Smolensky, P `The connectionist scientist game: rule extraction and refinement in a neural network' Proc. of the Thirteenth Annual Conference of the Cognitive Science Society (Hillsdale NJ) 1991 Omlin, C W, Giles, C L and Miller, C B `Heuristics for the extraction of rules from discrete time recurrent neural networks' Proc. of the International Joint Conference on Neural Networks (IJCNN'92) (Baltimore MD) Vol 1 (1992) pp 33 Pop, E, Hayward, R, and Diederich, J `RULENEG: extracting rules from a trained ANN by stepwise negation' QUT NRC (December 1994) Sestito, S and Dillon, T `Automated knowledge acquisition of rules with continuously valued attributes' Proc. 12th International Conference on Expert Systems and their Applications (AVIGNON'92) (Avignon France) (May 1992) pp 645-656.
Sestito, S and Dillon, T `Automated knowledge acquisition' Prentice Hall (Australia) (1994) Thrun, S B `Extracting Provably Correct Rules From Artificial Neural Networks' Technical Report IAI-TR-93-5 Institut fur Informatik III Universitat Bonn (1994) Tickle, A B, Orlowski, M, and Diederich, J `DEDEC: decision detection by rule extraction from neural networks' QUT NRC (September 1994) Towell, G and Shavlik, J `The Extraction of Refined Rules from Knowledge-Based Neural Networks' Machine Learning Vol 13 (1993) pp 71-101 Tresp, V, Hollatz, J and Ahmad, S `Network Structuring and Training Using Rule-based Knowledge' Advances in Neural Information Processing Systems Vol 5 (1993) pp 871-878 SUBMISSION OF WORKSHOP EXTENDED ABSTRACTS/PAPERS Authors are invited to submit 3 copies of either an extended abstract or full paper relating to one of the topic areas listed above. Papers should be written in English in single column format and should be limited to no more than eight (8) sides of A4 paper including figures and references. Centered at the top of the first page should be complete title, author name(s), affiliation(s), and mailing and email address(es), followed by blank space, abstract (15-20 lines), and text. Please include the following information in an accompanying cover letter: Full title of paper, presenting author's name, address, and telephone and fax numbers, author's e-mail address. Submission Deadline is January 15th, 1996 with notification to authors by 31st January, 1996. For further information, inquiries, and paper submissions please contact: Robert Andrews Queensland University of Technology GPO Box 2434 Brisbane Q. 4001. Australia. phone +61 7 864-1656 fax +61 7 864-1969 email robert at fit.qut.edu.au More information about the AISB-96 workshop series is available from: ftp: ftp.cogs.susx.ac.uk pub/aisb/aisb96 WWW: (http://www.cogs.susx.ac.uk/aisb/aisb96) WORKSHOP PARTICIPATION CHARGES The workshop fees are listed below. Note that these fees include lunch. Student charges are shown in brackets. AISB NON-AISB MEMBERS MEMBERS 1 Day Workshop 65 (45) 80 LATE REGISTRATION: 85 (60) 100 PROGRAM COMMITTEE MEMBERS R. Andrews, Queensland University of Technology A. Tickle, Queensland University of Technology S. Sestito, DSTO, Australia J. Shavlik, University of Wisconsin =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Mr Robert Andrews School of Information Systems robert at fit.qut.edu.au Faculty of Information Technology R.Andrews at qut.edu.au Queensland University of Technology +61 7 864 1656 (voice) GPO Box 2434 _--_|\ +61 7 864 1969 (fax) Brisbane Q 4001 / QUT Australia \_.--._/ v =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  From austin at minster.cs.york.ac.uk Fri Oct 20 13:29:47 1995 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Fri, 20 Oct 95 13:29:47 Subject: No subject Message-ID: Statistical Modelling and Simulation of Neural Networks 2 Year Research Associate supported by the EPSRC under the ROPA scheme. Within the Advanced Computer Architecture Group Department of Computer Science University of York, York, Y01 5DD, UK. Applications are invited for a 2 year post aimed at investigating the properties and the application of a novel neural network method for parallel processing. The work will involve building probabilistic models and performing experimental evaluations of the network. The candidates will be expected to hold a PhD in a relevant subject and have a good understanding of probability theory and programming in C.
In addition, knowledge of neural networks and parallel processing would be an advantage, but not essential. The work will study the properties of a novel form of binary neural network, based on correlation matrix memories. In a recent paper (austin 1995 available via WWW and included in the further particulars), it has been shown how a correlation matrix memory can be used to recall multiple data items in parallel. Although the technique has been shown to be feasible, its practical application requires a thorough analysis of the network's abilities through statistical modelling and computer simulation. The modelling is likely to require probabilistic methods, which will greatly add to the group's work on binary neural networks. The computer modelling will be undertaken in C, on the group's extensive network of Silicon Graphics machines. The successful applicant will join a thriving team of over 14 researchers working in neural networks, which specialises in research on binary weighted neural networks and their application in computer vision and knowledge based systems. The project is supported under the UK government ROPA scheme, and is available immediately for two years. Applications should be sent to the Personnel Office, University of York, York, YO1 5DD, by Friday 3rd of November 1995 quoting reference 6616. Salary will be up to 15,986 U.K. pounds. Further details can be obtained from the Personnel Office, or by contacting Dr. Jim Austin on 01904 432734, email austin at minster.york.ac.uk. Further details of the group's work can be found on the world wide web http://dcpu1.cs.york.ac.uk:6666/arch/acag.html  From mpolycar at ece.uc.edu Fri Oct 20 11:05:26 1995 From: mpolycar at ece.uc.edu (Marios Polycarpou) Date: Fri, 20 Oct 1995 11:05:26 -0400 (EDT) Subject: ISIC'96: Call for Papers Message-ID: <199510201505.LAA21722@zoe.ece.uc.edu> ************************************ CALL FOR PAPERS 11th IEEE International Symposium on Intelligent Control (ISIC) ************************************  Sponsored by the IEEE Control Systems Society and held in conjunction with The 1996 IEEE International Conference on Control Applications (CCA) and The IEEE Symposium on Computer-Aided Control System Design (CACSD) September 15-18, 1996 The Ritz-Carlton Hotel, Dearborn, Michigan, USA ISIC General Chair: Kevin M. Passino, The Ohio State University ISIC Program Chair: Jay A. Farrell, University of California, Riverside ISIC Publicity Chair: Marios Polycarpou, University of Cincinnati Intelligent control, the discipline where control algorithms are developed by emulating certain characteristics of intelligent biological systems, is being fueled by recent advancements in computing technology and is emerging as a technology that may open avenues for significant technological advances. For instance, fuzzy controllers, which provide for a simplistic emulation of human deduction, have been heuristically constructed to perform difficult nonlinear control tasks. Knowledge-based controllers developed using expert systems or planning systems have been used for hierarchical and supervisory control. Learning controllers, which provide for a simplistic emulation of human induction, have been used for the adaptive control of uncertain nonlinear systems. Neural networks have been used to emulate human memorization and learning characteristics to achieve high performance adaptive control for nonlinear systems.
Genetic algorithms that use the principles of biological evolution and "survival of the fittest" have been used for computer-aided-design of control systems and to automate the tuning of controllers by evolving in real-time populations of highly fit controllers. Topics in the field of intelligent control are gradually evolving, and expanding on and merging with those of conventional control. For instance, recent work has focused on comparative cost-benefit analyses of conventional and intelligent control techniques using simulation and implementations. In addition, there has been recent activity focused on modeling and nonlinear analysis of intelligent control systems, particularly work focusing on stability analysis. Moreover, there has been a recent focus on the development of intelligent and conventional control systems that can achieve enhanced autonomous operation. Such intelligent autonomous controllers try to integrate conventional and intelligent control approaches to achieve levels of performance, reliability, and autonomous operation previously only seen in systems operated by humans. Papers are being solicited for presentation at ISIC and for publication in the Symposium Proceedings on topics such as: - Architectures for intelligent control - Hierarchical intelligent control - Distributed intelligent systems - Modeling intelligent systems - Mathematical analysis of intelligent systems - Knowledge-based systems - Fuzzy systems / fuzzy control - Neural networks / neural control - Machine learning - Genetic algorithms - Applications / Implementations: - Automotive / vehicular systems - Robotics / Manufacturing - Process control - Aircraft / spacecraft This year the ISIC is being held in conjunction with the 1996 IEEE International Conference on Control Applications and the IEEE Symposium on Computer-Aided Control System Design. Effectively this is one large conference at the beautiful Ritz-Carlton hotel. The programs will be held in parallel so that sessions from each conference can be attended by all. There will be one registration fee and each registrant will receive a complete set of proceedings. For more information, and information on how to submit a paper to the conference see the back of this sheet. ++++++++++ Submissions: ++++++++++ Papers: Five copies of the paper (including an abstract) should be sent by Jan. 22, 1996 to: Jay A. Farrell, ISIC'96 College of Engineering ph: (909) 787-2159 University of California, Riverside fax: (909) 787-3188 Riverside, CA 92521 Jay_Farrell at qmail.ucr.edu Clearly indicate who will serve as the corresponding author and include a telephone number, fax number, email address, and full mailing address. Authors will be notified of acceptance by May 1996. Accepted papers, in final camera ready form (maximum of 6 pages in the proceedings), will be due in June 1996. Invited Sessions: Proposals for invited sessions are being solicited and are due Jan. 22, 1996. The session organizers should contact the Program Chair by Jan. 1, 1996 to discuss their ideas and obtain information on the required invited session proposal format. Workshops and Tutorials: Proposals for pre-symposium workshops should be submitted by Jan. 22, 1996 to: Kevin M. Passino, ISIC'96 Dept. Electrical Engineering ph: (614) 292-5716 The Ohio State University fax: (614) 292-7596 2015 Neil Ave. passino at osu.edu Columbus, OH 43210-1272 Please contact K.M. Passino by Jan. 1, 1996 to discuss the content and required format for the workshop or tutorial proposal. 
++++++++++++++++++++++++ Symposium Program Committee: ++++++++++++++++++++++++ James Albus, National Institute of Standards and Technology Karl Astrom, Lund Institute of Technology Matt Barth, University of California, Riverside Michael Branicky, Massachusetts Institute of Technology Edwin Chong, Purdue University Sebastian Engell, University of Dortmund Toshio Fukuda, Nagoya University Zhiqiang Gao, Cleveland State University Dimitry Gorinevsky, Measurex Devron Inc. Ken Hunt, Daimler-Benz AG Tag Gon Kim, KAIST Mieczyslaw Kokar, Northeastern University Ken Loparo, Case Western Reserve University Kwang Lee, The Pennsylvania State University Michael Lemmon, University of Notre Dame Frank Lewis, University of Texas at Arlington Ping Liang, University of California, Riverside Derong Liu, General Motors R&D Center Kumpati Narendra, Yale University Anil Nerode, Cornell University Marios Polycarpou, University of Cincinnati S. Joe Qin, Fisher-Rosemount Systems, Inc. Tariq Samad, Honeywell Technology Center George Saridis, Rensselaer Polytechnic Institute Jennie Si, Arizona State University Mark Spong, University of Illinois at Urbana-Champaign Jeffrey Spooner, Sandia National Laboratories Harry Stephanou, Rensselaer Polytechnic Institute Kimon Valavanis, University of Southwestern Louisiana Li-Xin Wang, Hong Kong University of Science and Tech. Gary Yen, USAF Phillips Laboratory ************************************************************************** * Prof. Marios M. Polycarpou | TEL: (513) 556-4763 * * University of Cincinnati | FAX: (513) 556-7326 * * Dept. Electrical & Computer Engineering | * * Cincinnati, Ohio 45221-0030 | Email: polycarpou at uc.edu * **************************************************************************  From mjo at cns.ed.ac.uk Fri Oct 20 11:45:16 1995 From: mjo at cns.ed.ac.uk (Mark Orr) Date: Fri, 20 Oct 1995 16:45:16 +0100 Subject: Paper available: Local Smoothing of RBF Networks Message-ID: <199510201545.QAA21458@garbo.cns.ed.ac.uk> The following paper has been accepted for presentation at the International Symposium on Neural Networks, Hsinchu, Taiwan, December 1995. LOCAL SMOOTHING OF RADIAL BASIS FUNCTION NETWORKS Mark J.L. Orr Centre for Cognitive Science Edinburgh University Abstract: A method of supervised learning is described which enhances generalisation performance by adaptive local smoothing in the input space. The method exploits the local nature of radial basis functions and employs multiple smoothing parameters optimised by generalised cross-validation. More traditional approaches have only a single smoothing parameter and produce a globally uniform smoothing but are demonstrably less effective unless the target function itself is uniformly smooth. A postscript version of a slightly longer version (9 pages instead of 6) can be retrieved by following the links "publications" and "Neural Networks" from the world wide web page: http://www.cns.ed.ac.uk/people/mark.html Alternatively the paper can be retrieved by anonymous ftp: ftp://scott.cogsci.ed.ac.uk/pub/mjo/isann95-long.ps.Z Size: 77KB compressed, 155KB uncompressed. Sorry, no hardcopies. 
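Since the abstract above names generalised cross-validation, a minimal sketch of that criterion may help readers place the contribution. The sketch below selects a single global ridge (smoothing) parameter for an RBF network by GCV; the paper's point is precisely to go beyond this to multiple, locally adapted smoothing parameters. All function names, the toy data, and the Python/NumPy rendering are our own assumptions, not code from the paper.

# Choosing an RBF network's ridge parameter lambda by generalised
# cross-validation: GCV(lambda) = n * ||(I - H) y||^2 / tr(I - H)^2,
# where H = Phi (Phi^T Phi + lambda I)^{-1} Phi^T is the hat matrix.
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix Phi[i, j] = exp(-|x_i - c_j|^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def gcv_score(Phi, y, lam):
    """GCV criterion for ridge parameter lam on design matrix Phi."""
    n, m = Phi.shape
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (60, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(60)
centres, width = X[::4], 0.15          # centres on a subset of the data

Phi = rbf_design(X, centres, width)
lams = 10.0 ** np.arange(-8, 1)
best = min(lams, key=lambda lam: gcv_score(Phi, y, lam))
print("GCV-optimal lambda:", best)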
---- Mark J L Orr, Centre for Cognitive Science, Edinburgh University, 2, Buccleuch Place, Edinburgh EH8 9LW, Scotland, UK phone: (+44) (0) 131 650 4413 email: mjo at cns.ed.ac.uk  From lbl at nagoya.bmc.riken.go.jp Wed Oct 25 22:15:42 1995 From: lbl at nagoya.bmc.riken.go.jp (Bao-Liang Lu) Date: Thu, 26 Oct 1995 11:15:42 +0900 Subject: Paper available: Transformation of NLP Problems Using Neural Nets Message-ID: <9510260215.AA07409@xian> The following paper, to appear in Annals of Mathematics and Artificial Intelligence, is available via anonymous FTP. (This work was presented at 1st Mathematics of Neural Networks and Applications (MANNA'95) Conference, 3-7 July 1995, Lady Margaret Hall, Oxford, UK) FTP-host:ftp.bmc.riken.go.jp FTP-file:pub/publish/Lu/lu-manna95.ps.Z ========================================================================== TITLE: Transformation of Nonlinear Programming Problems into Separable Ones Using Multilayer Neural Networks AUTHORS: Bao-Liang Lu (1) Koji Ito (1,2) ORGANISATIONS: (1) The Institute of Physical and Chemical Research (RIKEN) (2) Toyohashi University of Technology ABSTRACT: In this paper we present a novel method for transforming nonseparable nonlinear programming (NLP) problems into separable ones using multilayer neural networks. This method is based on a useful feature of multilayer neural networks, i.e., any nonseparable function can be approximately expressed as a separable one by a multilayer neural network. By use of this method, the nonseparable objective and (or) constraint functions in NLP problems can be approximated by multilayer neural networks, and therefore, any nonseparable NLP problem can be transformed into a separable one. The importance of this method lies in the fact that it provides us with a promising approach to using modified simplex methods to solve general NLP problems. (6 pages. No hard copies available.) Bao-Liang Lu --------------------------------------------- Bio-Mimetic Control Research Center, The Institute of Physical and Chemical Research (RIKEN) 3-8-31 Rokuban, Atsuta-ku, Nagoya 456, Japan Phone: +81-52-654-9137 Fax: +81-52-654-9138 Email: lbl at nagoya.bmc.riken.go.jp  From S.Renals at dcs.shef.ac.uk Thu Oct 26 11:08:14 1995 From: S.Renals at dcs.shef.ac.uk (S.Renals@dcs.shef.ac.uk) Date: Thu, 26 Oct 1995 16:08:14 +0100 Subject: Research Positions in Speech Recognition Message-ID: <199510261508.QAA12522@elvis.dcs.shef.ac.uk> As part of the EU funded project SPRACH (Speech Recognition Algorithms for Connectionist Hybrids) two Research Associate positions, of three years duration, are available at the Universities of Cambridge and Sheffield. Both positions are concerned with developing new methods for large vocabulary speech recognition. The Sheffield position will have an emphasis towards statistical language modelling; the Cambridge position will have an emphasis on connectionist acoustic models. The research project will build on the recently completed Wernicke project. One of the outcomes of that project is the Abbot large vocabulary speech recognition system, which is available in a demonstration version at ftp://svr-ftp.eng.cam.ac.uk/pub/comp.speech/recognition/AbbotDemo/ The job adverts and application details are included below. For informal discussion contact either Steve Renals (s.renals at dcs.shef.ac.uk) or Tony Robinson (ajr at eng.cam.ac.uk). 
Tony Robinson, Cambridge University Steve Renals, Sheffield University ----------------------------------------------------------------------- University of Sheffield Department of Computer Science, Speech and Hearing Group Research Associate in Continuous Speech Recognition Applications are invited for a Research Associate to work in the speech and hearing group in the area of continuous speech recognition. In particular, the post will involve the investigation of new methods of statistical language modelling, approaches to domain adaptation and the development and evaluation of demonstration systems. The project is funded as a Basic Research Project by the EU and will be of three years duration, from December 1995. Candidates for the post will be expected to hold a postgraduate degree (preferably a PhD) in a relevant discipline, or to have acquired equivalent experience. The successful candidate will have had research experience in the area of statistical language modelling or connectionist/HMM-based speech recognition. Salary will be in the range £14,317 to £18,985. Informal enquiries about the post to Dr. Steve Renals (email: s.renals at dcs.shef.ac.uk; tel: +44-114-282-5575; fax: +44-114-278-0972). Further particulars and an application form are available from the Director of Human Resource Management, The University of Sheffield, Western Bank, Sheffield S10 2TN (tel: +44-114-282-4144; fax: +44-114-276-7897), citing Ref:R780. The closing date for applications is Friday 10 November 1995. The University of Sheffield follows an Equal Opportunity Policy. ----------------------------------------------------------------------- University of Cambridge Engineering Department, Speech Vision and Robotics group Research Associate in Large Vocabulary Connectionist Speech Recognition Applications are invited for a Research Assistantship in the use of connectionist models and hidden Markov models in large vocabulary automatic speech recognition. The project is funded by the EU and is of 36 months duration. Candidates for this post will have a good first degree and preferably a postgraduate degree in a relevant discipline. The candidate is expected to have prior knowledge of connectionist/Markov model hybrids, large vocabulary recognition or a related area. The ability to manage a large software project, participate in international evaluations and liaise with industry would be advantageous. Salary will be in the range £14,317 to £19,848. Further details and an application form may be obtained by writing to Dr Tony Robinson, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, U.K., email ajr at eng.cam.ac.uk, phone +44-1223-332815, fax +44-1223-332662, or http://svr-www.eng.cam.ac.uk/~ajr. The deadline for applications is 26 November 1995. The University follows an equal opportunities policy.
-----------------------------------------------------------------------  From listerrj at helios.aston.ac.uk Thu Oct 26 14:01:12 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Thu, 26 Oct 1995 19:01:12 +0100 Subject: Postdoctoral Research Fellowship Message-ID: <2120.9510261801@sun.aston.ac.uk> ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK POSTDOCTORAL RESEARCH FELLOWSHIP -------------------------------- Neural Networks for Visualisation of High-Dimensional Data ---------------------------------------------------------- *** Full details at http://neural-server.aston.ac.uk/ *** The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2 year postdoctoral research position in the area of novel techniques for data visualisation. The emphasis of the research will be on theoretically well-founded approaches which are applicable to real-world data sets. A key starting point for the research will be the recent developments in latent variable techniques for density estimation. Potential candidates should have strong mathematical and computational skills, with a background either in artificial neural networks, statistical pattern recognition, or a related field. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,986 UK pounds. The salary scale is subject to annual increments. How to Apply ------------ If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 4 referees, to: Professor C M Bishop Neural Computing Research Group Dept. of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 6215 e-mail: c.m.bishop at aston.ac.uk (e-mail submission of postscript files is welcome) Closing date: 20 November, 1995. ----------------------------------------------------------------------  From bogus@does.not.exist.com Fri Oct 27 05:19:58 1995 From: bogus@does.not.exist.com () Date: Fri, 27 Oct 1995 09:19:58 +0000 Subject: Post in Statistical Modelling of Neural Networks Message-ID: <9510270919.ZM696@minster.york.ac.uk> Statistical Modelling and Simulation of Neural Networks 2 Year Research Associate supported by the EPSRC under the ROPA scheme. Within the Advanced Computer Architecture Group Department of Computer Science University of York, York, Y01 5DD, UK. Applications are invited for a 2 year post aimed at investigating the properties and the application of a novel neural network method for parallel processing. The work will involve building probabilistic models and performing experimental evaluations of the network. The candidates will be expected to hold a PhD in a relevant subject and have a good understanding of probability theory and programming in C. In addition, knowledge of neural networks and parallel processing would be an advantage, but not essential. The work will study the properties of a novel form of binary neural network, based on correlation matrix memories. In a recent paper (austin 1995 available via WWW and included in the further particulars), it has been shown how a correlation matrix memory can be used to recall multiple data items in parallel.
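As an aside for readers meeting these memories for the first time, the following minimal sketch shows a Willshaw-style binary correlation matrix memory, including the recall of several stored items in one parallel pass from a superimposed cue - the property referred to in the sentence above. It is illustrative only, not the method of the cited (austin 1995) paper; all sizes and names are our own assumptions, and it is written in Python rather than the project's C.

# Binary correlation matrix memory: store key/value pairs by OR-ing outer
# products; recall with a single thresholded matrix-vector product.
import numpy as np

N_IN, N_OUT, K = 32, 32, 3          # key/value widths; bits set per pattern
rng = np.random.default_rng(1)

def sparse_pattern(n, k):
    """Random binary vector of length n with exactly k ones."""
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

keys = [sparse_pattern(N_IN, K) for _ in range(4)]
vals = [sparse_pattern(N_OUT, K) for _ in range(4)]

# Training: superimpose (logical OR) the outer product of each pair.
W = np.zeros((N_OUT, N_IN), dtype=np.uint8)
for k, v in zip(keys, vals):
    W |= np.outer(v, k)

def recall(cue):
    """Fire the output bits whose rows match all K set bits of a stored key."""
    return (W @ cue >= K).astype(np.uint8)

# A single cue recovers (at least) the bits of its stored value ...
out = recall(keys[0])
print(np.array_equal(out & vals[0], vals[0]))     # True

# ... and a superimposed cue recalls several values in one parallel pass.
joint = recall(keys[0] | keys[1])
both = vals[0] | vals[1]
print(np.array_equal(joint & both, both))         # True

At higher memory loadings the thresholded recall can also switch on spurious `ghost' bits, which is exactly the kind of behaviour the advertised statistical modelling would quantify.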
Although the technique has been shown to be feasible, its practical application requires a thorough analysis of the network's abilities through statistical modelling and computer simulation. The modelling is likely to require probabilistic methods, which will greatly add to the group's work on binary neural networks. The computer modelling will be undertaken in C, on the group's extensive network of Silicon Graphics machines. The successful applicant will join a thriving team of over 14 researchers working in neural networks, which specialises in research on binary weighted neural networks and their application in computer vision and knowledge based systems. The project is supported under the UK government ROPA scheme, and is available immediately for two years. Applications should be sent to the Personnel Office, University of York, York, YO1 5DD, POST MARKED by Friday 3rd of November 1995 quoting reference 6616. Applications up to 10th November 1995 will be accepted, as long as they ARRIVE by that date. Salary will be up to 15,986 U.K. pounds. Further details can be obtained from the Personnel Office, or by contacting Dr. Jim Austin on 01904 432734, email austin at minster.york.ac.uk. Further details of the group's work can be found on the world wide web http://dcpu1.cs.york.ac.uk:6666/arch/acag.html  From eann96 at lpac.ac.uk Fri Oct 27 05:46:22 1995 From: eann96 at lpac.ac.uk (Engineering Apps in Neural Nets 96) Date: Fri, 27 Oct 95 09:46:22 GMT Subject: EANN96-Second Call for Papers Message-ID: <20523.9510270946@pluto.lpac.ac.uk> International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 Second Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (200 to 400 words) should be sent to eann96 at lpac.ac.uk by 21 January 1996 by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 21 January 1996. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited. For more information, please see the www page at http://www.lpac.ac.uk/EANN96 Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee (to be confirmed, extended) G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B. Jervis (UK) E. Oja (Finland) H. Liljenström (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M.
Ishikawa (Japan) Sponsored by: London Parallel Applications Centre (LPAC) IEE UK RIG NN British Institution of Electrical Engineers Professional Group C4 ---------------------------------+-------------------------------- Engineering Applications of Neural Networks'96 (EANN96) Further info: http://www.lpac.ac.uk/EANN96/ or Dimitris Tsaptsinos (D.Tsaptsinos at lpac.ac.uk) http://www.lpac.ac.uk/SEL-HPC/People/Dimitris ---------------------------------+--------------------------------  From gluck at pavlov.rutgers.edu Fri Oct 27 17:30:01 1995 From: gluck at pavlov.rutgers.edu (Mark Gluck) Date: Fri, 27 Oct 1995 17:30:01 -0400 Subject: Graduate Training in NEURAL COMPUTATION at Rutgers Univ. (NJ), Behav. & Neural Sci Ph.D. Message-ID: <199510272130.RAA09224@pavlov.rutgers.edu> Application Information for Ph.D. Program in BEHAVIORAL AND NEURAL SCIENCES at Rutgers University, Newark, New Jersey * Application target date is February 1, 1996 * ----------------------------------------------------------------- Additional information on our Ph.D. program, research facilities, and faculty can be obtained over the internet at: http://www.cmbn.rutgers.edu/bns-home.html ----------------------------------------------------------------- The Behavioral and Neural Sciences (BNS) graduate program at Rutgers-Newark aims to provide students with a rigorous understanding of modern neuroscience with an emphasis on integrating behavioral and neural approaches to understanding brain function. The program emphasizes the multidisciplinary nature of this endeavor, and offers specific research training in Behavioral and Cognitive Neuroscience as well as Molecular, Cellular and Systems Neuroscience. These research areas represent different but complementary approaches to contemporary issues in behavioral and molecular neuroscience and can emphasize either human or animal studies. The BNS graduate program is composed of faculty from the Center for Molecular and Behavioral Neuroscience (CMBN), the Institute of Animal Behavior (IAB), the Department of Biological Sciences, the Department of Psychology, and the School of Nursing. Research training in the BNS program emphasizes integration across levels of analysis and traditional disciplinary boundaries. Basic research areas in Cellular and Molecular Neuroscience include the study of the basal forebrain, basal ganglia, hippocampus, visual and auditory systems and monoaminergic and neuroendocrine systems using electrophysiological, neurochemical, neuroanatomical and molecular biological approaches. Research in Cognitive and Behavioral Neuroscience includes the study of memory, language (both signed and spoken), reading, attention, motor control, vision, and animal behavior. Clinically relevant research areas are the study of the behavioral, physiological and pharmacological aspects of schizophrenia, Alzheimer's Disease, amnesia, epilepsy, Parkinson's disease and other movement disorders, and the molecular genetics of neuropsychiatric disorders. Other Information ----------------- At present the CMBN supports up to 40 students with 12-month renewable assistantships for a period of four years. The current stipend for first year students is $12,750; this includes tuition remission and excellent healthcare benefits. In addition, the Johnson & Johnson pharmaceutical company's Foundation has provided four Excellence Awards which increase students' stipends by $5,000. Several other fellowships are offered. More information is available in our graduate brochure, available upon request.
The Rutgers-Newark campus is 20 minutes outside New York City, and close to other major university research centers at NYU, Columbia, SUNY, and Princeton, as well as major industrial research labs in Northern NJ, including ATT, Bellcore, Siemens, and a host of pharmaceutical companies including Johnson & Johnson, Hoechst-Celanese, and Sandoz. Faculty Associated With Rutgers BNS Ph.D. Program ------------------------------------------------- FACULTY - RUTGERS Elizabeth Abercrombie (Ph.D., Princeton), neurotransmitters and behavior [CMBN] Colin Beer (Ph.D., Oxford), ethology [IAB] April Benasich (Ph.D., New York), infant perception and cognition [CMBN] Ed Bonder (Ph.D., Pennsylvania), cell biology [Biology] Linda Brzustowicz (M.D., Ph.D., Columbia), human genetics [CMBN] Gyorgy Buzsaki (Ph.D., Budapest), systems neuroscience [CMBN] Mei-Fang Cheng (Ph.D., Bryn Mawr), neuroethology/neurobiology [IAB] Ian Creese (Ph.D., Cambridge), neuropsychopharmacology [CMBN] Doina Ganea (Ph.D., Illinois Medical School), molecular immunology [Biology] Alan Gilchrist (Ph.D., Rutgers), visual perception [Psychology] Mark Gluck (Ph.D., Stanford), learning, memory and neural computation [CMBN] Ron Hart (Ph.D., Michigan), molecular neuroscience [Biology] G. Miller Jonakait (Ph.D., Cornell Medical College), neuroimmunology [Biology] Judy Kegl (Ph.D., M.I.T.), linguistics/neurolinguistics [CMBN] Barry Komisaruk (Ph.D., Rutgers), behavioral neurophysiology/pharmacology [IAB] Joan Morrell (Ph.D., Rochester), cellular neuroendocrinology [CMBN] Teresa Perney (Ph.D., Chicago), ion channel gene expression and function [CMBN] Howard Poizner (Ph.D., Northeastern), language and motor behavior [CMBN] Jay Rosenblatt (Ph.D., New York), maternal behavior [IAB] Anne Sereno (Ph.D., Harvard), attention and visual perception [CMBN] Maggie Shiffrar (Ph.D., Stanford), vision and motion perception [CMBN] Harold Siegel (Ph.D., Rutgers), neuroendocrine mechanisms [IAB] Ralph Siegel (Ph.D., McGill), neuropsychology of visual perception [CMBN] Jennifer Swann (Ph.D., Michigan), neuroendocrinology [Biology] Paula Tallal (Ph.D., Cambridge), neural basis of language development [CMBN] James Tepper (Ph.D., Colorado), basal ganglia neurophysiology and anatomy [CMBN] Beverly Whipple (Ph.D., Rutgers), women's health [Nursing] Laszlo Zaborszky (Ph.D., Hungarian Academy), neuroanatomy of forebrain [CMBN] ASSOCIATES OF CMBN Izrail Gelfand (Ph.D., Moscow State), biology of cells [Biology] Richard Katz (Ph.D., Bryn Mawr), psychopharmacology [Ciba Geigy] Barry Levin (M.D., Emory Medical), neurobiology David Tank (Ph.D., Cornell), neural plasticity [Bell Labs] For More Information or an Application -------------------------------------- If you are interested in applying to our graduate program, or possibly applying to one of the labs as a post-doc, research assistant or programmer, please contact us via one of the following: Dr. Gyorgy Buzsaki or Dr. Mark A. Gluck BNS Graduate Admissions CMBN, Rutgers University 197 University Ave. Newark, New Jersey 07102 Phone: (201) 648-1080 (Ext. 3221) Fax: (201) 648-1272 Email: buzsaki at axon.rutgers.edu or gluck at pavlov.rutgers.edu We will be happy to send you info on our research and graduate program, as well as set up a possible visit to the Neuroscience Center here at Rutgers-Newark. Please also see our WWW Homepage listed above which contains extensive information on faculty research, degree requirements, local facilities, and more. 
From mm at santafe.edu Fri Oct 27 17:44:46 1995 From: mm at santafe.edu (Melanie Mitchell) Date: Fri, 27 Oct 95 15:44:46 MDT Subject: Postdoctoral fellowships at the Santa Fe Institute Message-ID: <9510272144.AA25294@sfi.santafe.edu> The Santa Fe Institute has an opening for one or more Postdoctoral Fellows beginning in September, 1996. The Institute's research program is devoted to the study of complex systems, especially complex adaptive systems. Systems and techniques currently under study include: the economy; the immune system; the brain; biomolecular sequence and structure; the origin of life; artificial life; models of evolution; adaptive computation and intelligent systems; complexity, entropy, and the physics of information; nonlinear modeling and prediction; the evolution of culture; the development of general-purpose simulation environments; and others. Postdoctoral Fellows work either on existing research projects or on projects of their own choosing. Candidates should have a Ph.D. (or expect to receive one before September 1996) and should have backgrounds in computer science, mathematics, economics, theoretical physics or chemistry, game theory, cognitive science, theoretical biology, dynamical systems theory, or related fields. A strong background in computational approaches is essential, as is an interest in interdisciplinary work. Evidence of this interest, in the form of previous research experience and publications, is important. Applicants should submit a curriculum vitae, list of publications, and statement of research interests, and arrange for three letters of recommendation. Incomplete applications will not be considered. All application materials must be received by February 15, 1996. Decisions will be made by April, 1996. Send applications to: Postdoctoral Committee, Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501. Send complete application packages only, preferably hard copy, to the above address. Include your e-mail address and/or fax number. SFI is an equal opportunity employer. Women and minorities are encouraged to apply. More information about SFI and its research program can be found at SFI's web site: http://www.santafe.edu.  From harnad at cogsci.soton.ac.uk Sat Oct 28 14:06:05 1995 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sat, 28 Oct 95 18:06:05 GMT Subject: Language Innateness: BBS Call for Commentators Message-ID: <8042.9510281806@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming target article on: INNATENESS, AUTONOMY, UNIVERSALITY? NEUROBIOLOGICAL APPROACHES TO LANGUAGE by Ralph-Axel Mueller This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. 
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ INNATENESS, AUTONOMY, UNIVERSALITY? NEUROBIOLOGICAL APPROACHES TO LANGUAGE Ralph-Axel Mueller PET Center, Children's Hospital of Michigan, Wayne State University, Detroit MI 48201-2196, USA rmueller at pet.wayne.edu KEYWORDS: brain development, dissociations, distributive representations, epigenesis, evolution, functional localization, individual variation, innateness, language. ABSTRACT: The concepts of the innateness, universality, species-specificity, and autonomy of the human language capacity have had an extreme impact on the psycholinguistic debate for over thirty years. These concepts are evaluated from several neurobiological perspectives, with an emphasis on the emergence of language and its decay due to brain lesion and progressive brain disease. Evidence of perceptuomotor homologies and preadaptations for human language in nonhuman primates suggests a gradual emergence of language during hominid evolution. Regarding ontogeny, the innate component of language capacity is likely to be polygenic and shared with other developmental domains. Dissociations between verbal and nonverbal development are probably rooted in the perceptuomotor specializations of neural substrates rather than the autonomy of a grammar module. Aphasiological data often assumed to suggest modular linguistic subsystems can be accounted for in terms of a neurofunctional model incorporating perceptuomotor-based regional specializations and distributivity of representations. Thus, dissociations between grammatical functors and content words are due to different conditions of acquisition and resulting differences in neural representation. Since human brains are characterized by multifactorial interindividual variability, strict universality of functional organization is biologically unrealistic. A theoretical alternative is proposed according to which (a) linguistic specialization of brain areas is due to epigenetic and probabilistic maturational events, not to genetic 'hard-wiring', and (b) linguistic knowledge is neurally represented in distributed cell assemblies whose topography reflects the perceptuomotor modalities involved in the acquisition and use of a given item of knowledge. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.mueller). Please do not prepare a commentary on this draft. 
Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive:

http://www.princeton.edu/~harnad/bbs.html
http://cogsci.soton.ac.uk/~harnad/bbs.html
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals
ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.mueller
ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.mueller

To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.mueller When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one-line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------

From N.Sharkey at dcs.shef.ac.uk Mon Oct 30 09:28:16 1995 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Mon, 30 Oct 95 14:28:16 GMT Subject: one-day seminar/colloquium Message-ID: <9510301428.AA08491@entropy.dcs.shef.ac.uk>

************************ SELF-LEARNING ROBOTS ************************

Organisers: Noel Sharkey (Computer Science, Sheffield U.) and John Hallam (AI Dept., Edinburgh U.)

An Institution of Electrical Engineers One-day Seminar, Savoy Place, London, UK: February 12th, 1996.

This will be a one-day seminar examining the most recent developments in robot learning. There will be a number of international invited speakers, and there will also be opportunities for group (and personal) discussion at the event.

INVITED SPEAKERS (alphabetical)

Evolutionary Learning in Robots - Dave Cliff (U.K.)
Shaping robots: An experiment in Behavior Engineering - Marco Dorigo (Italy)
Learning subsumptions for an autonomous robot - Jan Heemskerk & Noel Sharkey (U.K.)
Self-Organization in Robot Control - Ulrich Nehmzow (U.K.)
Toward Conscious Robots - Martin Nilsson (Sweden)
Evolving non-trivial behaviors on an autonomous robot - Stefano Nolfi (Italy)
Robot spatial learning: insights from animal and human behaviour - Tony Prescott (U.K.)
Learning more from less data: Experiments with lifelong robot learning - Sebastian Thrun (Germany)
Neural Reinforcement Learning for Behavior Synthesis - Claude Touzet (France)
A robot arm is neurally controlled using monocular feedback - Patrick van der Smagt (The Netherlands)
Exploration in Reinforcement Learning - Jeremy Wyatt, Gillian Hayes, and John Hallam (U.K.)
Robust and Adaptive World Modelling for Mobile Robots - Uwe Zimmer (Germany)

REGISTRATION INFORMATION: Sarah Evans (at the above address) or email: sevans at iee.org.uk Don't reply to this message for information, please - use the above email address only.
From listerrj at helios.aston.ac.uk Mon Oct 30 09:19:56 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Mon, 30 Oct 1995 14:19:56 +0000 Subject: Research Programmer Message-ID: <5409.9510301419@sun.aston.ac.uk>

------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK

Research Programmer -------------------

* Full details at http://neural-server.aston.ac.uk/ *

Applications are invited for the post of Research Programmer within the Neural Computing Research Group (NCRG) at Aston University. The NCRG is now the largest academic research group in this area in the UK, and has an extensive and lively programme of research ranging from the theoretical foundations of neural computing and pattern recognition through to industrial and commercial applications. The Group is based in spacious accommodation in the University's Main Building, and is well equipped with its own network of Silicon Graphics and Sun workstations, supported by a full-time system administrator. The successful candidate will work under the supervision of Professor Chris Bishop and Professor David Lowe and will be responsible for a range of software development and related activities. An early task will involve the development of a large C++ library of neural network software for use in many of the Group's projects. Another significant component will involve contributions to industrial and commercial research contracts, as well as providing software support to existing research projects. Additional responsibilities may include development of software for use in taught courses as part of the Group's MSc programme in Pattern Analysis and Neural Networks. The ideal candidate will have: * a good first degree in a numerate discipline * expertise in software development (preferably in C and C++) * a good understanding of neural networks * working knowledge of basic mathematics such as calculus and linear algebra * experience of working in a UNIX environment

Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff:

Chris Bishop - Professor
David Lowe - Professor
David Bounds - Professor
Geoffrey Hinton - Visiting Professor
Richard Rohwer - Lecturer
Alan Harget - Lecturer
Ian Nabney - Lecturer
David Saad - Lecturer
Chris Williams - Lecturer

together with the following Postdoctoral Research Fellows:

David Barber
Paul Goldberg
Alan McLachlan
Herbert Wiklicky
Huaihu Zhu

a full-time system administrator, and PhD and MSc research students.

Conditions of Service --------------------- The appointment will be for an initial period of one year, with the possibility of subsequent renewal. Initial salary will be on the academic 1A or 1B scales, up to 15,986 UK pounds.

How to Apply ------------ If you wish to be considered for this position, please send a full CV, together with the names and addresses of at least 3 referees, to: Hanni Sondermann Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: (+44 or 0) 121 333 4631 Fax: (+44 or 0) 121 333 6215 e-mail: h.e.sondermann at aston.ac.uk Closing date: 20 November 1995. ----------------------------------------------------------------------

From tds at ai.mit.edu Mon Oct 30 16:02:53 1995 From: tds at ai.mit.edu (Terence D.
Sanger) Date: Mon, 30 Oct 95 16:02:53 EST Subject: NIPS workshop opening Message-ID: <9510302102.AA08537@dentate.ai.mit.edu> Dear Connectionists, There has been an unexpected opening in the panelists for the NIPS workshop described below. If you would be interested in presenting some of your work and aiding a rousing discussion, please send me a brief abstract and description of your current research. I am most interested in speakers with background or current research in "wet-science" neurophysiology. Sorry for the late notice! Terry Sanger tds at ai.mit.edu ============================================================================= NIPS*95 Post-Conference Workshop "Vertebrate Neurophysiology and Neural Networks: Can the teacher learn from the student?" Results from neurophysiological investigations continue to guide the development of artificial neural network models that have been shown to have wide applicability in solving difficult computational problems. This workshop addresses the question of whether artificial neural network models can be applied to understanding neurophysiological results and guiding further experimental investigations. Recent work on close modelling of vertebrate neurophysiology will be presented, so as to give a survey of some of the results in this field. We will concentrate on examples for which artificial neural network models have been constructed to mimic the structure as well as the function of their biological counterparts. Clearly, this can be done at many different levels of abstraction. The goal is to discuss models that have explanatory and predictive power for neurophysiology. The following questions will serve as general discussion topics: 1. Do artificial neural network models have any relationship to ``real'' Neurophysiology? 2. Have any such models been used to guide new biological research? 3. Is Neurophysiology really useful for designing artificial networks, or does it just provide a vague ``inspiration''? 4. How faithfully do models need to address ultrastructural or membrane properties of neurons and neural circuits in order to generate realistic predictions of function? 5. Are there any artificial network models that have applicability across different regions of the central nervous system devoted to varied sensory and motor modalities? 6. To what extent do theoretical models address more than one of David Marr's levels of algorithmic abstraction (general approach, specific algorithm, and hardware implementation)? The workshop is planned as a single day panel discussion including both morning and afternoon sessions. Two or three speakers per session will be asked to limit presentations of relevant research to 15 minutes. Each speaker will describe computational models of different vertebrate regions, and speakers are encouraged to present an overview of algorithms and results in a manner that will allow direct comparison between methods. Intense audience participation is actively encouraged. The intended audience includes researchers actively involved in neurophysiological modelling, as well as a general audience that can contribute viewpoints from different backgrounds within the Neural Networks field.  
From marks at neuro.usc.edu Mon Oct 30 14:04:28 1995 From: marks at neuro.usc.edu (Mark Seidenberg) Date: Mon, 30 Oct 1995 11:04:28 -0800 (PST) Subject: job opening at USC Message-ID: <199510301904.LAA07514@neuro.usc.edu> The psychology department at USC is searching for a person in the "cognitive and behavioral neuroscience" area, with a preference for someone who does neural network modeling. The position could also include appointments in computer science or neurobiology as appropriate. The person would join a strong neuroscience-cognitive science program here at USC, which includes Michael Arbib, Christoph von der Malsburg (part-time), myself, Irv Biederman, Richard F. Thompson, Michel Baudry, Larry Swanson, Ted Berger, and others. Text of the ad follows. I would be willing to answer inquiries from interested parties. ----- COGNITIVE AND BEHAVIORAL NEUROSCIENCE: The Psychology Department at the University of Southern California invites applications for a faculty position at the tenure-track assistant professor level, including but not limited to individuals with expertise in neural network modeling. We are particularly interested in applicants skilled in quantitative methods or computational modeling techniques. Teaching responsibilities would include courses in these areas. Interested candidates should submit a letter outlining their qualifications, curriculum vitae, recent publications, and three letters of recommendation to: Cognitive and Behavioral Neuroscience Search Committee, Department of Psychology, University of Southern California, Los Angeles CA 90089-1061. Deadline for applications is January 1, 1996. We are an equal opportunity/affirmative action employer and strongly encourage applications from minorities and women. --- ____________________________________ Mark S. Seidenberg Neuroscience Program University of Southern California 3614 Watt Way Los Angeles, CA 90089-2520 Phone: 213-740-9174 Fax: 213-740-5687 ____________________________________

From maja at cs.brandeis.edu Mon Oct 30 19:40:07 1995 From: maja at cs.brandeis.edu (Maja Mataric) Date: Mon, 30 Oct 1995 19:40:07 -0500 Subject: NIPS*95 Post-Conference Workshop on Robot Learning Message-ID: <199510310040.TAA10752@garnet.cs.brandeis.edu> ---------------------------------------------------------------- CALL FOR PARTICIPATION: Robot Learning III -- Learning in the "Real World" A NIPS*95 Post-conference Workshop Vail, Colorado, Dec 1, 1995 ---------------------------------------------------------------- The goal of this one-day workshop is to provide a forum for researchers active in the area of robot learning. Due to the limited time available, we will focus on one major issue: the difficulty of going from theory and simulation to practice and actual implementation of robot learning. A wide variety of algorithms have been developed for learning in robots and, in simulation, many of them work quite well. However, physical robots are faced with sensor noise, control error, non-stationary environments, inconsistent feedback, and the need to operate robustly in real time. Most of these aspects are difficult to simulate accurately, yet have a critical effect on the learning performance. Unfortunately, very few of the developed learning algorithms have been empirically tested on actual robots, and of those even fewer have repeated the success found in simulated domains. Some of the specific questions we plan to discuss are: How can we handle noise in sensing and action without a priori models?
How do we build in a priori knowledge? How can we learn in real time, with exploration in real time? How can we construct richer reward functions, incorporating feedback, shaping, multi-model reinforcement, etc? This workshop is intended to serve as a follow-up to previous years' post-NIPS workshops on robot learning. The morning session of the workshop will consist of short presentations of problems faced when implementing learning in physical robots, followed by a general discussion guided by a moderator. The afternoon session will concentrate on actual implementations, with video (and hopefully live) demonstrations where possible. As time permits, we will also attempt to create an updated "Where do we go from here?" list, following the example of the previous years' workshops. The list will attempt to characterize the problems that must be solved next in order to make progress in applied robot learning.

Talks by:

Stefan Schaal, Georgia Tech, ATR: "How Hard Is It To Balance a Real Pole With a Real Arm?"
Sebastian Thrun, Carnegie Mellon University: "Learning More from Less Data: Experiments in Lifelong Robot Learning"
Maja Mataric, Brandeis University: "Complete Systems Learning in Dynamic Environments"
Marcos Salganicoff, University of Delaware, A.I. Dupont Institute: "Robots are from Mars, Learning Algorithms are from Venus: A practical guide to getting what you want in a relationship with your robot learning implementation"

The target audience for the workshop is researchers who are interested in robot learning and robots in general. We expect to draw an eclectic audience, so every attempt will be made to ensure that presentations are accessible to people without any specific background in the field. ----------------------------------------------------------------------- Organized by: Maja Mataric, Brandeis University maja at cs.brandeis.edu David Cohn, MIT and Harlequin, Inc. cohn at harlequin.com

From bert at mbfys.kun.nl Tue Oct 31 11:24:48 1995 From: bert at mbfys.kun.nl (Bert Kappen) Date: Tue, 31 Oct 1995 17:24:48 +0100 Subject: No subject Message-ID: <199510311624.RAA26312@septimius.mbfys.kun.nl> University of Nijmegen, Postdoctoral Fellowship The Foundation for Neural Networks at the University of Nijmegen has a vacancy for a postdoctoral fellow for a theoretical research project. The project consists of the development of novel theory, techniques and implementations for perception and cognitive reasoning in a complex dynamic multi-sensory environment. The results will be explored both in robotics and in multi-media applications. The fundamental problems which could be addressed within the context of this project are, for instance: sub-symbolic/symbolic interfacing, learning in a changing environment, or probabilistic knowledge representation combining neural networks and AI techniques such as Bayes networks. The project includes a group of about 10 scientists with participation from the University of Amsterdam (robotics group) and Utrecht (research group on 3-D computer vision) and is funded by the Japanese Real World Computing Program.
For more information about the research of the Foundation for Neural Networks and the neural networks research at the University of Nijmegen, please consult http://www.mbfys.kun.nl/SNN/ Applications for the position, which will be for one year with possible extension to three years, should be sent before November 20 to: Foundation for Neural Networks, University of Nijmegen Geert Grooteplein 21, NL 6525 EZ Nijmegen, The Netherlands Email: snn at mbfys.kun.nl

From bruno at redwood.psych.cornell.edu Tue Oct 31 11:37:23 1995 From: bruno at redwood.psych.cornell.edu (Bruno A. Olshausen) Date: Tue, 31 Oct 1995 11:37:23 -0500 Subject: neuroanatomical database available Message-ID: <199510311637.LAA14619@redwood.psych.cornell.edu> The following software is available via ftp://v1.wustl.edu/pub/xanat/xanat-2.0.tar.Z There is also a homepage at http://redwood.psych.cornell.edu/bruno/xanat/xanat.html

XANAT 2.0 - A Graphical Anatomical Database By Bill Press and Bruno Olshausen Washington University School of Medicine Department of Anatomy and Neurobiology St. Louis, Missouri 63110

XANAT is a computer program that facilitates the analysis of neuroanatomical data by storing the results of numerous studies in a standardized format, and by providing various tools for performing summaries and comparisons of these studies. Data are entered by drawing injection and label sites directly onto canonical representations of the neuroanatomical structures of interest, along with providing descriptive text information. Searches may then be performed on the data by querying the database graphically according to injection or label site, or via text information (i.e., keyword search). Analyses may also be performed by accumulating data across multiple studies and displaying a color-coded map that graphically represents the total evidence for connectivity between regions. Data may be studied and compared free of areal boundaries (which often vary from one lab to the next), and instead with respect to standard landmarks, such as the position relative to well known neuroanatomical substrates, or stereotaxic coordinates. If desired, areal boundaries may be defined by the user to facilitate the interpretation of results. XANAT is written in C and is intended to run on unix workstations running the X11 window system. The workstation must have at least a modifiable 8-bit color map. The program has been successfully tested on Suns, SGIs, and IBM PCs (running Linux). Included with the distribution is an example dataset of pulvinar-cortical connectivity, which should prove useful in learning how to use the program.
From jordan at psyche.mit.edu Tue Oct 3 18:22:02 1995 From: jordan at psyche.mit.edu (Michael Jordan) Date: Tue, 3 Oct 95 18:22:02 EDT Subject: workshop announcement Message-ID: This is an announcement of a post-NIPS workshop on Learning in Bayesian Belief Networks and Other Graphical Models. The organizing committee for the workshop includes: Wray Buntine, Greg Cooper, Dan Geiger, David Heckerman, Geoffrey Hinton, Mike Jordan, Steffen Lauritzen, David Mackay, David Madigan, Radford Neal, Steve Omohundro, Judea Pearl, Stuart Russell, Peter Spirtes, and Ross Shachter. Many of these people will be giving presentations at the workshop. A short bibliography follows for those who might like to read up on Bayesian belief networks in anticipation of the workshop. Mike Jordan ------------------ Short Bibliography ------------------ The list below provides a few useful references, with an emphasis on recent review papers, tutorials, and textbooks. The list is not meant to be comprehensive along any dimension... Many additional pointers to the literature can be found on the Uncertainty in Artificial Intelligence homepage; see http://www.auai.org. If I had to pick two papers that I would most recommend for someone wanting to get up to speed quickly on belief networks, I'd recommend the Spiegelhalter, et al. paper and the Heckerman tutorial.
Mike

-----------------------

A good place to start to learn about the most popular algorithm for general inference in belief networks, as well as some of the basics on learning:

Spiegelhalter, D. J., Dawid, A. P., Lauritzen, S. L., & Cowell, R. G. (1993). Bayesian analysis in expert systems. Statistical Science, 8, 219-283.

If you want more details on the inference algorithm:

Lauritzen, S. L., & Spiegelhalter, D. J. (1988). Local computations with probabilities on graphical structures and their application to expert systems (with discussion). Journal of the Royal Statistical Society B, 50, 157-224.

A tutorial on the recent work on learning in belief networks:

Heckerman, D. (1995). A tutorial on learning Bayesian networks. [available through http://www.auai.org]

If you want more on learning:

Buntine, W. (1994). Operations for learning with graphical models. Journal of Artificial Intelligence Research, 2, 159-225. [available through http://www.auai.org]

A very readable general textbook on belief networks from a statistical perspective (focusing on ML estimation and model selection):

Whittaker, J. (1990). Graphical Models in Applied Multivariate Statistics. New York: John Wiley.

An introductory textbook:

Neapolitan, E. (1990). Probabilistic Reasoning in Expert Systems. New York: John Wiley.

The classical text on belief networks; emphasizes inference and AI issues:

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann.

A recent paper that unifies (almost) all of the extant algorithms for inference in belief networks:

Shachter, R. D., Anderson, S. K., & Szolovits, P. (1994). Global conditioning for probabilistic inference in belief networks. Proceedings of the Uncertainty in Artificial Intelligence Conference, 514-522.

From rsun at cs.ua.edu Thu Oct 5 10:11:57 1995 From: rsun at cs.ua.edu (Ron Sun) Date: Thu, 5 Oct 1995 09:11:57 -0500 Subject: No subject Message-ID: <9510051411.AA16982@athos.cs.ua.edu>

ANNOUNCING A NEW MAILING LIST: hybrid-list ------------------------------------------- As we discussed at the CSI workshop at IJCAI this August, we have now established this new mailing list for the specific purpose of exchanging information and ideas regarding hybrid models, especially models integrating symbolic and connectionist processes. Other hybrid models, such as fuzzy logic+neural networks and GA+NN, are also covered. This is an unmoderated list. Conference and workshop announcements, papers and technical reports, informed discussions of specific topics in hybrid model areas, and other pertinent messages are appropriate items for submission. Email your submission to hybrid-list at cs.ua.edu, which will be automatically forwarded to all the recipients of the list. Information regarding subscription is attached below. For questions and suggestions regarding this list, send email to rsun at cs.ua.edu (only if you have to). This mailing list has incorporated the old HYBRID list at Brown U. maintained by Michael Perrone (thanks to Michael), and includes the names of those who attended the 1995 CSI workshop or expressed interest in it. (To remove your name from the list, see the instruction at the end of this message.)
Regards, --Ron Sun ============================================================================== The University of Alabama Department of Computer Science has set up a list service for this: To subscribe to this list service, send an e-mail message to the userid "listproc at cs.ua.edu" with NO SUBJECT, but a one-line text message, as shown below: SUBSCRIBE hybrid-list YourFirstName YourLastName You should receive a response back indicating your addition to the list. After this, you can submit items to the list by simply e-mailing a message to the userid: "hybrid-list at cs.ua.edu". The message will automatically be sent to all individuals on the list. To unsubscribe from this list service, send an e-mail message to the userid "listproc at cs.ua.edu" with NO SUBJECT, but a one-line text message, as shown below: UNSUBSCRIBE hybrid-list ==============================================================================

From cas-cns at PARK.BU.EDU Thu Oct 5 13:01:56 1995 From: cas-cns at PARK.BU.EDU (BU CNS) Date: Thu, 05 Oct 1995 13:01:56 -0400 Subject: Boston University - Cognitive & Neural Systems Message-ID: <199510051701.NAA29047@cns.bu.edu> (A copy of this message has also been posted to the following newsgroups: comp.ai, comp.cog-eng, comp.software-eng, comp.ai.neural-nets, bu.general, bu.seminars, ne.seminars, news.announce.conferences) ************************************************************** DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS (CNS) AT BOSTON UNIVERSITY ************************************************************** Ennio Mingolla, Acting Chairman, 1995-96 Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive graduate training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1996, admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: DEPARTMENT OF COGNITIVE & NEURAL SYSTEMS, Boston University, 111 Cummington Street, 2nd Floor, Boston, MA 02215 (ON OR AFTER 10/30/95, PLEASE ADDRESS MAIL TO 677 BEACON STREET); 617/353-9481 (phone); 617/353-7755 (fax); or send via email your full name and mailing address to: rll at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis.
Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique offerings. It has developed a curriculum that features 15 interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Nine additional research courses are also offered. In these courses, one or two students meet regularly with one or two professors to pursue advanced reading and collaborative research. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS Department. The CNS Department prepares students for PhD thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department offers a colloquium series, seminars, conferences, and special interest groups which bring many additional scientists from both experimental and theoretical disciplines into contact with the students. The CNS Department is moving in October, 1995 into its own new four-story building, which features a full range of offices, laboratories, classrooms, library, lounge, and related facilities for exclusive CNS use.
1995-96 CAS MEMBERS and CNS FACULTY:

Jelle Atema - Professor of Biology; Director, Boston University Marine Program (BUMP). PhD, University of Michigan. Sensory physiology and behavior.
Aijaz Baloch - Research Associate of Cognitive and Neural Systems. PhD, Electrical Engineering, Boston University. Neural modeling of the role of visual attention in recognition, learning and motor control; computational vision; adaptive control systems; reinforcement learning.
Helen Barbas - Associate Professor, Department of Health Sciences, Boston University. PhD, Physiology/Neurophysiology, McGill University. Organization of the prefrontal cortex, evolution of the neocortex.
Jacob Beck - Research Professor of Cognitive and Neural Systems. PhD, Psychology, Cornell University. Visual perception, psychophysics, computational models.
Daniel H. Bullock - Associate Professor of Cognitive and Neural Systems and Psychology. PhD, Psychology, Stanford University. Real-time neural systems, sensory-motor learning and control, evolution of intelligence, cognitive development.
Gail A. Carpenter - Professor of Cognitive and Neural Systems and Mathematics; Director of Graduate Studies, Department of Cognitive and Neural Systems. PhD, Mathematics, University of Wisconsin, Madison. Pattern recognition, categorization, machine learning, differential equations.
Laird Cermak - Professor of Neuropsychology, School of Medicine; Professor of Occupational Therapy, Sargent College; Director, Memory Disorders Research Center, Boston Veterans Affairs Medical Center. PhD, Ohio State University.
Michael A. Cohen - Associate Professor of Cognitive and Neural Systems and Computer Science; Director, CAS/CNS Computation Labs. PhD, Psychology, Harvard University. Speech and language processing, measurement theory, neural modeling, dynamical systems.
H. Steven Colburn - Professor of Biomedical Engineering. PhD, Electrical Engineering, Massachusetts Institute of Technology. Audition, binaural interaction, signal processing models of hearing.
William D. Eldred III - Associate Professor of Biology. BS, University of Colorado; PhD, University of Colorado, Health Science Center. Visual neural biology.
Paolo Gaudiano - Assistant Professor of Cognitive and Neural Systems. PhD, Cognitive and Neural Systems, Boston University. Computational and neural models of vision and adaptive sensory-motor control.
Jean Berko Gleason - Professor of Psychology. AB, Radcliffe College; AM, PhD, Harvard University. Psycholinguistics.
Douglas Greve - Research Associate of Cognitive and Neural Systems. PhD, Cognitive and Neural Systems, Boston University.
Stephen Grossberg - Wang Professor of Cognitive and Neural Systems; Professor of Mathematics, Psychology, and Biomedical Engineering; Director, Center for Adaptive Systems; Chairman, Department of Cognitive and Neural Systems. PhD, Mathematics, Rockefeller University. Theoretical biology, theoretical psychology, dynamical systems, applied mathematics.
Frank Guenther - Assistant Professor of Cognitive and Neural Systems. PhD, Cognitive and Neural Systems, Boston University. Biological sensory-motor control, spatial representation, speech production.
Thomas G. Kincaid - Chairman and Professor of Electrical, Computer and Systems Engineering, College of Engineering. PhD, Electrical Engineering, Massachusetts Institute of Technology. Signal and image processing, neural networks, non-destructive testing.
Nancy Kopell - Professor of Mathematics. PhD, Mathematics, University of California at Berkeley. Dynamical systems, mathematical physiology, pattern formation in biological/physical systems.
Ennio Mingolla - Associate Professor of Cognitive and Neural Systems and Psychology; Acting Chairman 1995-96, Department of Cognitive and Neural Systems. PhD, Psychology, University of Connecticut. Visual perception, mathematical modeling of visual processes.
Alan Peters - Chairman and Professor of Anatomy and Neurobiology, School of Medicine. PhD, Zoology, Bristol University, United Kingdom. Organization of neurons in the cerebral cortex, effects of aging on the primate brain, fine structure of the nervous system.
Andrzej Przybyszewski - Senior Research Associate of Cognitive and Neural Systems. MSc, Technical Warsaw University; MA, University of Warsaw; PhD, Warsaw Medical Academy.
Adam Reeves - Adjunct Professor of Cognitive and Neural Systems; Professor of Psychology, Northeastern University. PhD, Psychology, City University of New York. Psychophysics, cognitive psychology, vision.
William Ross - Research Associate of Cognitive and Neural Systems. BSc, Cornell University; MA, PhD, Boston University.
Mark Rubin - Research Assistant Professor of Cognitive and Neural Systems; Research Physicist, Naval Air Warfare Center, China Lake, CA (on leave). PhD, Physics, University of Chicago. Neural networks for vision, pattern recognition, and motor control.
Robert Savoy - Adjunct Associate Professor of Cognitive and Neural Systems; Scientist, Rowland Institute for Science. PhD, Experimental Psychology, Harvard University. Computational neuroscience; visual psychophysics of color, form, and motion perception.
Eric Schwartz - Professor of Cognitive and Neural Systems; Electrical, Computer and Systems Engineering; and Anatomy and Neurobiology. PhD, High Energy Physics, Columbia University. Computational neuroscience, machine vision, neuroanatomy, neural modeling.
Robert Sekuler - Adjunct Professor of Cognitive and Neural Systems; Research Professor of Biomedical Engineering, College of Engineering, BioMolecular Engineering Research Center; Jesse and Louis Salvage Professor of Psychology, Brandeis University. AB, MA, Brandeis University; Sc.M., PhD, Brown University.
Allen Waxman - Adjunct Associate Professor of Cognitive and Neural Systems; Senior Staff Scientist, MIT Lincoln Laboratory. PhD, Astrophysics, University of Chicago. Visual system modeling, mobile robotic systems, parallel computing, optoelectronic hybrid architectures.
James Williamson - Research Associate of Cognitive and Neural Systems. PhD, Cognitive and Neural Systems, Boston University. Image processing and object recognition; particular interests are dynamic binding, self-organization, shape representation, and classification.
Jeremy Wolfe - Adjunct Associate Professor of Cognitive and Neural Systems; Associate Professor of Ophthalmology, Harvard Medical School; Psychophysicist, Brigham & Women's Hospital, Surgery Dept.;
Director of Psychophysical Studies, Center for Clinical Cataract Research. PhD, Massachusetts Institute of Technology.

From ajit at austin.ibm.com Thu Oct 5 16:40:53 1995 From: ajit at austin.ibm.com (Dingankar) Date: Thu, 5 Oct 1995 15:40:53 -0500 Subject: Dissertation available in Neuroprose Message-ID: <9510052040.AA14254@ding.austin.ibm.com> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/dingankar.thesis.ps.Z

BibTeX entry:

@PhdThesis{ajit-phd,
  author  = "Ajit Trimbak Dingankar",
  title   = "On Applications of Approximation Theory to Identification, Control and Classification",
  school  = "The University of Texas at Austin",
  year    = 1995,
  address = "Austin, Texas",
}

Abstract

Applications of approximation theory to some problems in identification of dynamic systems, their control, and to problems in signal classification are studied. First, an algorithm is given for constructing approximations in a wide variety of settings, and a corresponding error bound is derived. Then weak sufficient conditions for perfect classification of signals are studied. Next, the problem of approximating linear functionals with certain sums of integrals is studied, along with its relation to the approximation of nonlinear functionals. Then an approximation-theoretic characterization of continuity of nonlinear maps is given. As another application of function approximation, the problem of universally approximating controllers for discrete-time continuous plants is studied. Finally, error bounds for approximation of functions defined on finite-dimensional Hilbert spaces are given.

------------------------------------------------------------------------------ Ajit T. Dingankar | ajit at austin.ibm.com IBM Corporation, Internal Zip 4359 | Work: (512) 838-6850 11400 Burnet Road, Austin, TX 78758 | Fax : (512) 838-5882

From john at dcs.rhbnc.ac.uk Thu Oct 5 09:24:13 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 05 Oct 95 14:24:13 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199510051324.OAA07218@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): three new reports available

---------------------------------------- NeuroCOLT Technical Report NC-TR-95-051: ---------------------------------------- On the Computational Power of Continuous Time Neural Networks by Pekka Orponen, University of Helsinki, Finland

Abstract: We investigate the computational power of continuous-time neural networks with Hopfield-type units. We prove that polynomial-size networks with saturated-linear response functions are at least as powerful as polynomially space-bounded Turing machines.

---------------------------------------- NeuroCOLT Technical Report NC-TR-95-052: ---------------------------------------- Computational Machine Learning in Theory and Praxis by Ming Li, University of Waterloo, Canada, and Paul Vitanyi, CWI and Universiteit van Amsterdam, The Netherlands

Abstract: In the last few decades a computational approach to machine learning has emerged based on paradigms from recursion theory and the theory of computation. Such ideas include learning in the limit, learning by enumeration, and probably approximately correct (pac) learning. These models are usually not suitable in practical situations. In contrast, statistics-based inference methods have enjoyed a long and distinguished career.
Currently, Bayesian reasoning in various forms, minimum message length (MML), and minimum description length (MDL) are widely applied approaches. They are the tools to use with particular machine learning praxis such as simulated annealing, genetic algorithms, genetic programming, artificial neural networks, and the like. These statistical inference methods select the hypothesis which minimizes the sum of the length of the description of the hypothesis (also called `model') and the length of the description of the data relative to the hypothesis. It appears to us that the future of computational machine learning will include combinations of the approaches above, coupled with guarantees with respect to the time and memory resources used. Computational learning theory will move closer to practice, and the application of principles such as MDL requires further justification. Here, we survey some of the actors in this dichotomy between theory and praxis, we justify MDL via the Bayesian approach, and give a comparison between pac learning and MDL learning of decision trees.

---------------------------------------- NeuroCOLT Technical Report NC-TR-95-053: ---------------------------------------- On the relations between distributive computability and the BSS model by Sebastiano Vigna, University of Milan, Italy

Abstract: This paper presents an equivalence result between computability in the BSS model and in a suitable distributive category. It is proved that the class of functions $R^m\to R^n$ (with $n,m$ finite and $R$ a commutative, ordered ring) computable in the BSS model and the functions distributively computable over a natural distributive graph based on the operations of $R$ coincide. Using this result, a new structural characterization, based on iteration, of the same functions is given.

----------------------- The Report NC-TR-95-051 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-051.ps.Z
ftp> bye
% zcat nc-tr-95-051.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor

From listerrj at helios.aston.ac.uk Fri Oct 6 05:24:28 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Fri, 06 Oct 1995 10:24:28 +0100 Subject: Postdoctoral position in Neural Computing Message-ID: <27328.9510060924@sun.aston.ac.uk> ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK

Nonstationary Feature Extraction and Tracking for the Classification of Turning Points in Multivariate Time Series

The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 3-year postdoctoral research position. The investigation will involve generic problems of feature extraction in nonstationary environments, typically from univariate and multivariate time series.
Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks, dynamical systems theory, statistical pattern processing, or have relevant experience from a physics or electrical engineering background. *** Further details at http://neural-server.aston.ac.uk/ *** Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,986 UK pounds. These salary scales are currently under review, and are subject to annual increments. How to Apply ------------ Please send a full CV and publications list, together with the names of 4 referees, to: Professor D Lowe Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 6215 email: d.lowe at aston.ac.uk (email submission of postscript files is welcome) Closing date: 15 November 1995 ----------------------------------------------------------------------  From dwang at cis.ohio-state.edu Fri Oct 6 11:13:23 1995 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Fri, 6 Oct 1995 11:13:23 -0400 Subject: Preprint available on auditory scene analysis Message-ID: <199510061513.LAA01790@shirt.cis.ohio-state.edu> The following preprint is available via FTP/WWW: ------------------------------------------------------------------ Primitive Auditory Segregation Based on Oscillatory Correlation ------------------------------------------------------------------ DeLiang Wang Cognitive Science: Accepted for publication The Ohio State University Auditory scene analysis is critical for complex auditory processing. We study auditory segregation from the neural network perspective, and develop a framework for primitive auditory scene analysis. The architecture is a laterally coupled two-dimensional network of relaxation oscillators with a global inhibitor. One dimension represents time and another one represents frequency. We show that this architecture, plus systematic delay lines, can in real time group auditory features into a stream by phase synchrony and segregate different streams by desynchronization. The network demonstrates a set of psychological phenomena regarding primitive auditory scene analysis, including dependency on frequency proximity and the rate of presentation, sequential capturing, and competition among different perceptual organizations. We offer a neurocomputational theory - shifting synchronization theory - for explaining how auditory segregation might be achieved in the brain, and the psychological phenomenon of stream segregation. Possible extensions of the model are discussed. 
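(Editor's note: to give a rough feel for the relaxation-oscillator dynamics this line of work builds on, here is a schematic sketch of two excitatorily coupled oscillators of the Terman-Wang type, integrated by the Euler method in NumPy. It is not the paper's code or exact model; the parameter values, coupling scheme, and step size are illustrative assumptions only.)

import numpy as np

# Two coupled relaxation oscillators of the Terman-Wang type:
#   x' = 3x - x^3 + 2 - y + I,   y' = eps*(gamma*(1 + tanh(x/beta)) - y)
# Each unit's smoothed activity excites the other via the drive I.
eps, gamma, beta, dt = 0.02, 6.0, 0.1, 0.01
I_ext, w = 0.8, 0.4                      # external drive, coupling weight (assumed)
x = np.array([0.1, -0.5]); y = np.array([0.5, 0.6])

for step in range(60000):
    s = 1.0 / (1.0 + np.exp(-10 * x))    # smoothed output of each unit
    I = I_ext + w * s[::-1]              # each oscillator excites the other
    dx = 3 * x - x ** 3 + 2 - y + I
    dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
    x, y = x + dt * dx, y + dt * dy

print(x, y)  # the excitatory pair tends toward (near-)synchronous oscillation

Synchrony within one group of coupled oscillators, and desynchronization between groups, is the mechanism that the oscillatory-correlation framework exploits for stream segregation.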
(42 pages + one figure = 1.5MB + 600 KB) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu Directory: /pub/leon/Wang95 FTP-filenames: Wang.prep.ps.Z, fig5G.ps.Z or for WWW: http://www.cis.ohio-state.edu/~dwang Comments are most welcome - Please send to DeLiang Wang (dwang at cis.ohio-state.edu) ---------------------------------------------------------------------------- FTP instructions: To retrieve and print the files, use the following commands: unix> ftp ftp.cis.ohio-state.edu Name: anonymous Password: (your email address) ftp> binary ftp> cd /pub/leon/Wang95 ftp> get Wang.prep.ps.Z ftp> get fig5G.ps.Z ftp> quit unix> uncompress Wang.prep.ps.Z unix> uncompress fig5G.ps.Z unix> lpr {each of the two postscript files} (Wang.prep.ps may not ghostview well - some figures do not show up with my ghostview - but it should print ok) ----------------------------------------------------------------------------  From arbib at pollux.usc.edu Fri Oct 6 13:50:05 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Fri, 6 Oct 1995 10:50:05 -0700 Subject: A New Series of Virtual Textbooks on Neural Networks Message-ID: <199510061750.KAA11592@pollux.usc.edu> October 6, 1995 Yesterday, a visitor to my office, while speaking of his enthusiasm for "The Handbook of Brain Theory and Neural Networks", mentioned that some of his colleagues had criticized the fact that the [266] articles [in Part III] were arranged in alphabetical order, thus lacking the "logical order" to make the book easy to use for teaching. The purpose of this note is to answer such concerns. 1. The boring answer is that a Handbook is not a Textbook. Indeed, given that the 266 articles provide such a comprehensive overview - including detailed models of single neurons; analysis of a wide variety of neurobiological systems; connectionist studies; mathematical analyses of abstract neural networks; and technological applications of adaptive, artificial neural networks and related methodologies - it is hard to imagine a course that would cover the whole book, no matter in what order the articles were presented. 2. The exciting answer is that THE HANDBOOK IS A VIRTUAL LIBRARY OF TWENTY-THREE TEXTBOOKS!! Before the 266 articles of Part III come Part I and Part II. Part I provides an introductory textbook level introduction to Neural Networks. Part II provides 23 "road maps", each of which lists the articles on a particular theme, followed by an essay which offers a "logical order" in which to read these articles. 
Thus, the Handbook can be used to provide a "virtual textbook" on any one of the following 23 topics:

Applications of Neural Networks
Artificial Intelligence and Neural Networks
Biological Motor Control
Biological Networks
Biological Neurons
Computability and Complexity
Connectionist Linguistics
Connectionist Psychology
Control Theory and Robotics
Cooperative Phenomena
Development and Regeneration of Neural Networks
Dynamic Systems and Optimization
Implementation of Neural Networks
Learning in Artificial Neural Networks, Deterministic
Learning in Artificial Neural Networks, Statistical
Learning in Biological Systems
Mammalian Brain Regions
Mechanisms of Neural Plasticity
Motor Pattern Generators and Neuroethology
Primate Motor Control
Self-Organization in Neural Networks
Other Sensory Systems
Vision

In each case, the instructor can follow the road map to traverse the articles to provide full coverage of the topic, using the cross-references to choose supplementary material from within the Handbook, and the carefully selected list of readings at the end of each article to choose supplementary material from the general literature. As an appendix to this message, I include a sample road map, that on "Learning in Artificial Neural Networks, Deterministic". All the road maps are available on the Web at: http://www-mitpress.mit.edu/mitp/recent-books/comp/handbook-brain-theo.html If you have other queries about how best to use the Handbook, or suggestions for improving the Handbook, please feel free to contact me by email: arbib at pollux.usc.edu. With best wishes Michael Arbib

***** APPENDIX: The Road Map for "Learning in Artificial Neural Networks, Deterministic" from Part II of The Handbook of Brain Theory and Neural Networks (M.A. Arbib, Ed.), A Bradford Book, copyright 1995, The MIT Press.

LEARNING IN ARTIFICIAL NEURAL NETWORKS, DETERMINISTIC

[Articles in the Road Map, listed in Alphabetical Order.]

Adaptive Resonance Theory
Associative Networks
Backpropagation: Basics and New Developments
Convolutional Networks for Images, Speech, and Time-Series
Coulomb Potential Learning
Kolmogorov's Theorem
Learning by Symbolic and Neural Methods
Learning as Hill-Climbing in Weight Space
Learning as Adaptive Control of Synaptic Matrices
Modular Neural Net Systems, Training of
Neocognitron: A Model for Visual Pattern Recognition
Neurosmithing: Improving Neural Network Learning
Nonmonotonic Neuron Associative Memory
Pattern Recognition
Perceptrons, Adalines, and Backpropagation
Recurrent Networks: Supervised Learning
Reinforcement Learning
Topology-Modifying Neural Network Algorithms

[Articles in the Road Map, discussed in Logical Order.]

Much of our concern is with supervised learning, getting a network to behave in a way which successfully approximates some specified pattern of behavior or input-output relationship. In particular, much emphasis has been placed on feedforward networks, that is, networks which have no loops, so that the output of the net depends on its input alone, since there is then no internal state defined by reverberating activity. The most direct form of this is a synaptic matrix, a one-layer neural network for which input lines directly drive the output neurons and a "supervised Hebbian" rule sets synapses so that the network will exhibit specified input-output pairs in its response repertoire. This is addressed in the article on ASSOCIATIVE NETWORKS, which notes the problems that arise if the input patterns (the "keys" for associations) are not orthogonal vectors.
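(Editor's note: the following minimal NumPy sketch - an illustration, not code from the Handbook - stores pattern pairs in a synaptic matrix with the outer-product "supervised Hebbian" rule and shows the crosstalk that appears when the keys are not orthogonal; all dimensions and the overlap level are arbitrary choices.)

import numpy as np

# Store pattern pairs in a synaptic matrix by the outer-product rule:
# W = sum_p outer(v_p, k_p), with orthonormal keys k_p of dimension 8.
rng = np.random.default_rng(0)
keys = np.linalg.qr(rng.standard_normal((8, 3)))[0].T   # 3 orthonormal keys
vals = rng.standard_normal((3, 5))                      # associated outputs

W = sum(np.outer(v, k) for k, v in zip(keys, vals))

# Recall with an orthonormal key is exact: W @ k_p recovers v_p.
print(np.allclose(W @ keys[0], vals[0]))                # True

# With correlated keys, recall picks up crosstalk terms v_q*(k_q . k_p)
# from the other stored pairs, so the recalled vector is corrupted.
keys_c = keys + 0.3 * keys[[1, 2, 0]]                   # make the keys overlap
W_c = sum(np.outer(v, k) for k, v in zip(keys_c, vals))
print(np.linalg.norm(W_c @ keys_c[0] - vals[0]))        # recall error > 0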
Association also extends to recurrent networks obtained from one-layer networks by feedback connections from the output to the input, but in such systems of "dynamic memories" (e.g., Hopfield networks) there are no external inputs as such. Rather the "input" is the initial state of the network, and the "output" is the "attractor" or equilibrium state to which the network then settles. Unfortunately, the usual "attractor network" memory model, with neurons whose output is a sigmoid function of the linear combination of their inputs, has many spurious memories, i.e., equilibria other than the memorized patterns, and there is no way to decide whether a memorized pattern has been recalled or not. The article on NONMONOTONIC NEURON ASSOCIATIVE MEMORY shows that, if the output of each neuron is a nonmonotonic function of its input, the capacity of the network can be increased, and the network does not exhibit spurious memories: when the network fails to recall a correct memorized pattern, the state exhibits chaotic behavior instead of falling into a spurious memory. Historically, the earliest forms of supervised learning involved changing synaptic weights to oppose the error in a neuron with a binary output (the perceptron error-correction rule), or to minimize the sum of squares of errors of output neurons in a network with real-valued outputs (the Widrow-Hoff rule). This work is charted in the article on PERCEPTRONS, ADALINES AND BACKPROPAGATION, which also charts the extension of these classic ideas to multilayered feedforward networks. Multilayered networks pose the structural credit assignment problem: when an error is made at the output of a network, how is credit (or blame) to be assigned to neurons deep within the network? One of the most popular techniques is called backpropagation, whereby the error of output units is propagated back to yield estimates of how much a given "hidden unit" contributed to the output error. These estimates are used in the adjustment of synaptic weights to these units within the network. The article on BACKPROPAGATION: BASICS AND NEW DEVELOPMENTS places this idea in a broader mathematical and historical framework in which backpropagation is seen as a general method for calculating derivatives to adjust the weights of nonlinear systems, whether or not they are neural networks. The underlying theoretical grounding is that, given any function f: X -> Y for which X and Y are codable as input and output patterns of a neural network, then, as shown in the article on KOLMOGOROV'S THEOREM, f can be approximated arbitrarily well by a feedforward network with one layer of hidden units. The catch, of course, is that many, many hidden units may be required for a close fit. It is often an empirical question whether there exists a sufficiently good approximation achievable in principle by a network of a given size -- an approximation which a given learning rule may or may not find (it may, for example, get stuck in a local optimum rather than a global one). The article on NEUROSMITHING: IMPROVING NEURAL NETWORK LEARNING provides a number of "rules of thumb" to be used in applying backpropagation in trying to find effective settings for network size and for various coefficients in the learning rules.
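[As a concrete illustration of the credit-assignment machinery just described, here is a minimal sketch - illustrative, not drawn from any of the articles cited - of gradient-descent training of a one-hidden-layer sigmoid network on XOR, with the backward pass propagating the output error back to estimate each hidden unit's contribution:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy problem: XOR, the classic task a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through W2 to
    # estimate how much each hidden unit contributed to it (the
    # structural credit assignment step).
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]

Whether a given run converges depends on the random initialization - the "local optimum" caveat mentioned above - which is exactly where the rules of thumb in NEUROSMITHING come in.]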
One useful perspective for supervised learning views LEARNING AS HILL-CLIMBING IN WEIGHT SPACE, so that each "experience" adjusts the synaptic weights of the network to climb (or descend) a metaphorical hill for which "height" at a particular point in "weight space" corresponds to some measure of the performance of the network (or the organism or robot of which it is a part). When the aim is to minimize this measure, one of the basic techniques for learning is what mathematicians call "gradient descent"; optimization theory also provides alternative methods, such as conjugate gradients, which are likewise used in the neural network literature. REINFORCEMENT LEARNING describes a form of "semi-supervised" learning where the network is not provided with an explicit form of error at each time step but rather receives only generalized reinforcement ("you're doing well"; "that was bad!") which yields little immediate indication of how any neuron should change its behavior. Moreover, the reinforcement is intermittent, thus raising the temporal credit assignment problem: how is an action at one time to be credited for positive reinforcement at a later time? One solution is to build an "adaptive critic" which learns to evaluate actions of the network on the basis of how often they occur on a path leading to positive or negative reinforcement. Another perspective on supervised learning is presented in LEARNING AS ADAPTIVE CONTROL OF SYNAPTIC MATRICES, which views learning as a control problem (controlling synaptic matrices to yield a given network behavior) and then uses the adjoint equations of control theory to derive synaptic adjustment rules. Gradient descent methods have also been extended to adapt the synaptic weights of recurrent networks, as discussed in RECURRENT NETWORKS: SUPERVISED LEARNING, where the aim is to match the time course of network activity, rather than the (input, output) pairs of some training set. The task par excellence for supervised learning is pattern recognition, the problem of classifying objects, often represented as vectors or as strings of symbols, into categories. Historically, the field of pattern recognition started with early efforts in neural networks (see PERCEPTRONS, ADALINES AND BACKPROPAGATION). While neural networks played a less central role in pattern recognition for some years, recent progress has made them the method of choice for many applications. As PATTERN RECOGNITION demonstrates, multilayer networks, when properly designed, can learn complex mappings in high-dimensional spaces without requiring complicated hand-crafted feature extractors. To rely more on learning, and less on detailed engineering of feature extractors, it is crucial to tailor the network architecture to the task, incorporating prior knowledge to be able to learn complex tasks without requiring excessively large networks and training sets. Many specific architectures have been developed to solve particular types of learning problems. ADAPTIVE RESONANCE THEORY (ART) bases learning on internal expectations. When the external world fails to match an ART network's expectations or predictions, a search process selects a new category, representing a new hypothesis about what is important in the present environment. The neocognitron (see NEOCOGNITRON: A MODEL FOR VISUAL PATTERN RECOGNITION) was developed as a neural network model for visual pattern recognition which addresses the specific question "how can a pattern be recognized despite variations in size and position?"
by using a multilayer architecture in which local features are replicated in many different scales and locations. More generally, as shown in CONVOLUTIONAL NETWORKS FOR IMAGES, SPEECH, AND TIME SERIES, shift invariance in convolutional networks is obtained by forcing the replication of weight configurations across space. Moreover, the topology of the input is taken into account, enabling such networks to force the extraction of local features by restricting the receptive fields of hidden units to be local. COULOMB POTENTIAL LEARNING derives its name from its functional form's likeness to a Coulomb charge potential, replacing the linear separability of a simple perceptron with a network that is capable of constructing arbitrary nonlinear boundaries for classification tasks. We have already noted that networks that are too small cannot learn the desired input-output mapping. However, networks can also be too large. Just as a polynomial of too high a degree is not useful for curve-fitting, a network that is too large will fail to generalize well, and will require longer training times. Smaller networks, with fewer free parameters, enforce a smoothness constraint on the function found. For best performance, it is, therefore, desirable to find the smallest network that will "properly" fit the training data. The article TOPOLOGY-MODIFYING NEURAL NETWORK ALGORITHMS reviews algorithms which adjust network topology (i.e., adding or removing neurons during the learning process) to arrive at a network appropriate to a given task. The last two articles in this road map take a somewhat different viewpoint from that of adjusting the synaptic weights in a single network. MODULAR NEURAL NET SYSTEMS, TRAINING OF presents the idea that, although single neural networks are theoretically capable of learning complex functions, many problems are better solved by designing systems in which several modules cooperate to perform a global task, replacing the complexity of a large neural network by the cooperation of neural network modules whose size is kept small. The article on LEARNING BY SYMBOLIC AND NEURAL METHODS focuses on the distinction between symbolic learning based on producing discrete combinations of the features used to describe examples and neural approaches which adjust continuous, nonlinear weightings of their inputs. The article not only compares but also combines the two approaches, showing for example how symbolic knowledge may be used to set the initial state of an adaptive network. [This Road Map is then followed by one on "Learning in Artificial Neural Networks, Statistical"]  From terry at salk.edu Fri Oct 6 22:14:01 1995 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 6 Oct 95 19:14:01 PDT Subject: Neural Computation 7:6 Message-ID: <9510070214.AA28375@salk.edu> NEURAL COMPUTATION Vol 7, Issue 6, November 1995

Article:
An information-maximization approach to blind separation and blind deconvolution
    Anthony J. Bell and Terrence J.
    Sejnowski

Note:
A perceptron reveals the face of sex
    Michael Gray, David Lawrence, Beatrice Golomb, and Terrence Sejnowski

Letters:
Self-organization as an iterative kernel smoothing process
    Vladimir Cherkassky and Filip Mulier
On the distribution and the convergence of feature space in self-organizing maps
    Hujun Yin and Nigel Allinson
Sorting with self-organizing maps
    Marco Budinich
Introducing asymmetry into interneuron learning
    Colin Fyfe
Learning and generalization with minimerror, a temperature dependent learning algorithm
    Bruno Raffin and Mirta B. Gordon
Regularized neural networks: Some convergence rate results
    Halbert White and Valentina Corradi
The target switch algorithm: A constructive learning procedure for feedforward neural networks
    Colin Campbell and C. Perez Vicente
On the practical applicability of VC dimension bounds
    Sean B. Holden and Mahesan Niranjan
LeRec: A NN/HMM hybrid for on-line handwriting recognition
    Yoshua Bengio, Yann LeCun, Craig Nohl, and Chris Burges

----- NOTE: IN 1996 NEURAL COMPUTATION WILL PUBLISH 8 ISSUES ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $220 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 -----  From honavar at cs.iastate.edu Fri Oct 6 22:31:09 1995 From: honavar at cs.iastate.edu (Vasant Honavar) Date: Fri, 6 Oct 1995 21:31:09 -0500 (CDT) Subject: paper available: constructive learning algorithms Message-ID: <199510070231.VAA12809@ren.cs.iastate.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 1737 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/971f392b/attachment-0001.ksh From wec at bioele.nuee.nagoya-u.ac.jp Sat Oct 7 22:12:53 1995 From: wec at bioele.nuee.nagoya-u.ac.jp (Workshop on Evolutionary Computation) Date: Sat, 7 Oct 95 22:12:53 JST Subject: Call for participation - Online Workshop on EC Message-ID: <9510071312.AA13486@ursa.bioele.nuee.nagoya-u.ac.jp> [CFA] Online Workshop on Evolutionary Computation This is the call for participation for the Online Workshop on EC. The papers and discussions are visible on the Internet. http://www.bioele.nuee.nagoya-u.ac.jp/wec/index.html ********************************************************************* * * * CALL FOR PARTICIPATION * * * * First Online Workshop on Evolutionary Computation * * * * Oct. 9 (Mon) - Oct. 13 (Fri) * * * * On Internet (WWW (World Wide Web) ) Served by Nagoya University * * * * Sponsored by * * Research Group on ECOmp of * * the Society of Fuzzy Theory and Systems (SOFT) * * * ********************************************************************* We call for your participation in online discussions using the Internet. The goal of this workshop is to give its attendees opportunities to exchange information and ideas on various aspects of Evolutionary Computation and "save travel expenses" without having to visit foreign countries. We aim to have ample discussion time between the authors and attendees and make the discussions visible to everyone on the Internet.
The papers submitted to the workshop are: -----------------------------------------------------------------------------

*Artificial Life
1. Specialization Under Social Conditions in Shared Environments
2. Pattern Formation and Functionality in Swarm Models
3. Cooperative Cleaners : a Study in Ant-Robotics
4. Seeing In The Dark With Artificial Bats

*Evolutionary Programming
1. Evolutionary Programming in Perspective: The Top-Down View

*Fuzzy - Genetic Algorithms
1. A Study on Finding Fuzzy Rules for Semi-Active Suspension Controllers with Genetic Algorithm
2. Structural Learning of Fuzzy Rule from Noisy Examples
3. Fuzzy Connectives Based Crossover Operators to Model Genetic Algorithms Population Diversity
4. Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques

*Fuzzy Logic
1. Fuzzy Rules Reduction Method By Using Vector Composition Method & Global/Local Reasoning Method

*Genetic Algorithms
1. Orgy in the Computer: Multi-Parent Reproduction in Genetic Algorithms
2. An Evolutionary Algorithm for Parametric Array Signal Processing
3. Hill Climbing with Learning (An Abstraction of Genetic Algorithm)

*Knowledge - Genetic Algorithms
1. Genetic Algorithms in Knowledge Base Design
2. Inference of Stochastic Regular Grammars by Massively Parallel Genetic Algorithms
3. Skill Based Manipulation in Hierarchical Intelligent Control of Robots

*Neural Networks - Genetic Algorithms
1. Genetic Algorithm Enlarges the Capacity of Associative Memory

*Others
1. Indexed Bibliography of Genetic Algorithms in the Far East
2. Optimization with Evolution Strategy after Rechenberg with Smalltalk

------------------------------------------------------------------------------- These papers are visible on the Internet. The home page address is : http://www.bioele.nuee.nagoya-u.ac.jp/wec/index.html IMPORTANT DATES: Workshop Week: October 9 - 13, 1995 DISCUSSION PROCEDURE:
1. Read the abstracts on the page: http://www.bioele.nuee.nagoya-u.ac.jp/wec/papers/index.html
2. Copy the main texts (ps files) of the papers that interest you.
3. Send questions and comments to wec at bioele.nuee.nagoya-u.ac.jp (The steering committee will edit the questions from attendees and send them to the authors. It will receive answers from the authors and make the Q&A visible on the Internet.)
4. Read the answers from the authors on http://www.bioele.nuee.nagoya-u.ac.jp/wec/papers/index.html
5. Repeat items 3 and 4 above until you are satisfied.
OFFICIAL LANGUAGE: English REGISTRATION PROCEDURE: Those who are interested in the above papers are welcome to participate in this workshop at any time during the workshop week. Please visit our page or send an e-mail to the addresses below. http://www.bioele.nuee.nagoya-u.ac.jp/wec/index.html wec at bioele.nuee.nagoya-u.ac.jp Just tell us you will take part in the workshop. This registration is not a prerequisite for your participation. The steering committee will mail the registrants important information. For further information, contact: Takeshi Furuhashi Dept. of Information Electronics Nagoya University, Furo-cho, Chikusaku Nagoya 464-01, Japan Tel. +81-52-789-2792 Fax. +81-52-789-3166 E-mail furuhashi at nuee.nagoya-u.ac.jp ORGANIZATION: Advisory Committee Chair: T.Fukuda(Nagoya University) M.Gen(Ashikaga Institute of Tech.) I.Hayashi(Han-nan University) H.Ishibuchi(University of Osaka pref.) Y.Maeda(Osaka Electro-Comm. Univ.)
M.Sakawa(Hiroshima University) M.Sano(Hiroshima City University) T.Shibata(MEL, MITI & AI Lab, MIT) H.Shiizuka(Kogakuin University) K.Tanaka(Kanazawa University) N.Wakami(Matsushita Elec. Ind. Co., Ltd.) J.Watada(Osaka Institute of Tech.) Program Committee Chair: S.Nakanishi(Tokai University) T.Furuhashi(Nagoya University) K.Tanaka(Kanazawa University) Steering Committee Chair: T.Furuhashi(Nagoya University) T.Hashiyama(Nagoya University) K.Morikawa(Nagoya University) Y.Miyata(Nagoya University) --------------------------------------------------- Takeshi Furuhashi, Assoc. Professor Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, Japan Tel.+81-52-789-2792, Fax.+81-52-789-3166 ---------------------------------------------------  From payman at ebs330.eb.uah.edu Sat Oct 7 15:21:48 1995 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Sat, 7 Oct 95 14:21:48 CDT Subject: NEW: IEEE Neural Networks Council's Homepage Message-ID: <9510071921.AA21731@ebs330> Dear colleagues, The IEEE Neural Networks Council (NNC) has recently established a Homepage on the World Wide Web. The homepage provides instant information concerning conferences (both IEEE and non-IEEE), publications, research programs, and NNC administration. The homepage can be viewed (best using the Netscape browser) at http://www.ieee.org/nnc We are constantly seeking information on conferences with a neural network component, as well as information on computational intelligence research programs worldwide. The homepage is updated every two weeks. It is our aim to ensure a high degree of reliability for the homepage, both in terms of connection to it and accuracy of its contents, and to respond promptly to comments and suggestions from users and IEEE members regarding changes. If you have information which you would like included in the homepage, or simply have a comment or recommendation regarding it, please contact: Dr. Payman Arabshahi Editor-in-chief, IEEE NNC Homepage Dept. of Electrical & Computer Eng. University of Alabama in Huntsville Huntsville, AL 35899 Tel: (205) 895-6380 Fax: (205) 895-6803 E-mail: payman at ebs330.eb.uah.edu For updates to the list of Neural Network Conferences, please also cc Dr. Paul Bakker Associate Editor, IEEE NNC Homepage Intelligent Systems Division Electrotechnical Laboratory 1-1-4 Umezono, Tsukuba Ibaraki 305, Japan Tel: (81-298) 585-980 Fax: (81-298) 585-971 E-mail: bakker at etl.go.jp Help us help you with a novel way of accessing and using information that you need, this time on the Internet, through the World Wide Web. Check us out! Best regards, -- Payman Arabshahi Tel : (205) 895-6380 Dept. of Electrical & Computer Eng. Fax : (205) 895-6803 University of Alabama in Huntsville payman at ebs330.eb.uah.edu Huntsville, AL 35899 http://www.eb.uah.edu/ece/  From ingber at alumni.caltech.edu Tue Oct 10 15:12:42 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Tue, 10 Oct 1995 12:12:42 -0700 Subject: two papers on nonlinear aspects of macroscopic neocortex/EEG Message-ID: <199510101912.MAA13207@alumni.caltech.edu> The following two papers are available via WWW and anonymous FTP. The paper, "Nonlinear nonequilibrium non-quantum non-chaotic statistical mechanics of neocortical interactions," is available as file smni96_nonlinear.ps.Z. smni96_nonlinear is an invited BBS commentary on "Dynamics of the brain at global and microscopic scales: Neural networks and the EEG," by J.J. Wright and D.T.J. Liley.
The paper, "Adaptive simulated annealing of canonical momenta indicators of financial markets," is available as file markets96_momenta.ps.Z. Some extrapolations to and from EEG systems are also discussed, as first mentioned in smni95_lecture.ps.Z, outlining a project, performing recursive ASA optimization of "canonical momenta" indicators of subject's/patient's EEG nested in parameterized customized clinician's rules. markets96_momenta shows how canonical momenta indicators just by themselves can provide signals for profitable trading on S&P 500 data. This demonstrates that inefficiencies in nonlinear nonequilibrium markets can be used to advantage in trading, and that such canonical momenta can be considered to be at least useful supplemental indicators in other trading systems. Similar arguments are made for EEG analyses. Some additional information is available in the file ftp.alumni.caltech.edu:/pub/ingber/MISC.DIR/projects.txt Lester ======================================================================== Instructions for Retrieval of Code and Reprints Interactively Via WWW The archive can be accessed via WWW path http://www.alumni.caltech.edu/~ingber/ Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.alumni.caltech.edu [131.215.50.234] in the /pub/ingber directory. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Electronic Mail If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Additional Information Sorry, I cannot assume the task of mailing out hardcopies of code or papers. Limited help assisting people with their queries on my codes and papers is available only by electronic mail correspondence. Queries on commercial consulting can be made by contacting me via e-mail, mail, or calling 1.800.L.INGBER. Lester ======================================================================== /* RESEARCH ingber at alumni.caltech.edu * * INGBER ftp.alumni.caltech.edu:/pub/ingber * * LESTER http://www.alumni.caltech.edu/~ingber/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */  From dhw at santafe.edu Tue Oct 10 16:35:28 1995 From: dhw at santafe.edu (David Wolpert) Date: Tue, 10 Oct 95 14:35:28 MDT Subject: Paper announcements Message-ID: <9510102035.AA04144@sfi.santafe.edu> *** PAPER ANNOUNCEMENTS *** The following two papers may be of interest to people in the connectionist community. Both will appear in the proceedings of the 1995 conference on Maximum Entropy and Bayesian Methods. Retrieval instructions are at the bottom of this post. THE BOOTSTRAP IS INCONSISTENT WITH PROBABILITY THEORY by D. H. Wolpert Abstract: This paper proves that for no prior probability distribution does the bootstrap (BS) distribution equal the predictive distribution, for all Bernoulli trials of some fixed size. It then proves that for no prior will the BS give the same first two moments as the predictive distribution for all size trials. It ends with an investigation of whether the BS can get the variance correct. DETERMINING WHETHER TWO DATA SETS ARE FROM THE SAME DISTRIBUTION by D. H. 
Wolpert Abstract: This paper presents two Bayesian alternatives to the chi-squared test for determining whether a pair of categorical data sets were generated from the same underlying distribution. It then discusses such alternatives for the Kolmogorov-Smirnov test, which is often used when the data sets consist of real numbers. ============================================================ To retrieve these papers anonymous ftp to ftp.santafe.edu. The papers are in pub/dhw_ftp, under the titles maxent.95.boot.ps.Z and maxent.95.wv.ps.Z, respectively.  From mozer at neuron.cs.colorado.edu Tue Oct 10 15:14:30 1995 From: mozer at neuron.cs.colorado.edu (Mike Mozer) Date: Tue, 10 Oct 1995 13:14:30 -0600 Subject: job announcement Message-ID: <199510101914.NAA16869@neuron.cs.colorado.edu> University of Colorado, Boulder Department of Computer Science Applications are invited for a junior tenure-track faculty position in the areas of artificial intelligence or software and systems. The department is particularly interested in candidates in the areas of human-computer interaction, human and machine learning, neural networks, databases, computer networks, distributed systems, programming languages and software engineering. Applicants should have a Ph.D. in computer science or a related field and show strong promise in both research and teaching. The Computer Science Department at the University of Colorado has 22 faculty and about 200 graduate students. It has strong research programs in artificial intelligence, numerical and parallel computation, software and systems and theoretical computer science. The computing environment includes a multitude of computer workstations and a large variety of parallel computers. The department has been the recipient of two consecutive five-year Institutional Infrastructure (previously CER) grants from the National Science Foundation supporting its computing infrastructure and collaborative research among its faculty. Applicants should send a current curriculum vitae and the names of four references to Professor Gary J. Nutt, Chair, Department of Computer Science, Campus Box 430, University of Colorado, Boulder, CO 80309-0430. One-page statements of research and teaching interests would also be appreciated. Review of applications will begin Jan. 1, 1996, but all applications postmarked before March 1, 1996 are eligible for consideration. Earlier applications will receive first consideration. Appointment can begin as early as August 1996. The University of Colorado at Boulder strongly supports the principle of diversity. We are particularly interested in receiving applications from women, ethnic minorities, disabled persons, veterans and veterans of the Vietnam era. ------------------------------------------------------------------------------- You can contact me for further information. The search looks terribly diffuse, but the odds of hiring a neural net / machine learning person are good. -- Mike  From pmn at iau.dtu.dk Wed Oct 11 11:50:53 1995 From: pmn at iau.dtu.dk (Magnus Norgaard) Date: Wed, 11 Oct 95 11:50:53 MET Subject: NNSYSID toolbox available Message-ID: <199510111052.LAA15406@danpost.uni-c.dk> ------------------------------- ANNOUNCING: THE NNSYSID TOOLBOX ------------------------------- Neural Network Based System Identification Toolbox for use with MATLAB(r) Version 1.0 Magnus Norgaard Institute of Automation, Connect/Electronics Institute, Institute of Mathematical Modeling Technical University of Denmark Oct. 
4, 1995 The NNSYSID toolbox is a set of freeware tools for neural network based identification of nonlinear dynamic systems. The toolbox contains a number of m-files and MEX-files for training and evaluation of multilayer perceptron type neural networks within the MATLAB environment. There are functions for training ordinary feedforward networks as well as for identification of nonlinear dynamic systems and for time-series analysis. The toolbox requires at least MATLAB 4.2 with the signal processing toolbox, but it works completely independently of the neural network and system identification toolboxes provided by The MathWorks, Inc. WHAT THE TOOLBOX CONTAINS:
o Fast, robust, and easy-to-use training algorithms.
o A number of different model structures for modelling dynamic systems.
o Validation of trained networks.
o Estimation of generalization ability.
o Algorithms for determination of the optimal network architecture by pruning.
o Demonstration programs.
HOW TO GET IT: The toolbox can be obtained in one of the following ways:
o WWW: URL address: http://kalman.iau.dtu.dk/Projects/proj/nnsysid.html zip was used for compressing the toolbox. You must have unzip (UNIX) or pkunzip (DOS) to unpack it. From UNIX: unzip -a nntool.zip From DOS: pkunzip nntool.zip
o FTP: ftp eivind.ei.dtu.dk login: anonymous password: (Your e-mail address) cd dist/matlab/NNSYSID You will find two versions of the compressed toolbox: nntool.zip was packed and compressed with 'zip' nntool.tar.gz was packed with 'tar' and compressed with 'gzip' For the zip-version: get nntool.zip unzip -a nntool.zip (UNIX), or pkunzip nntool.zip (DOS) For the tar+gzip version: get nntool.tar.gz gzip -d nntool.tar.gz (UNIX only) tar xvf nntool.tar
After having unpacked the toolbox, read the files README and RELEASE on how to install the tools properly. A 90-page manual (in Postscript) is included with the toolbox. We do not offer any support if you run into problems! The toolbox is freeware - take it or leave it!!! THE TOOLBOX FUNCTIONS GROUPED BY SUBJECT:
FUNCTIONS FOR TRAINING A NETWORK:
marq : Levenberg-Marquardt method.
marq2 : Levenberg-Marquardt method. Works for fully connected networks only.
marqlm : Memory-saving implementation of the Levenberg-Marquardt method.
rpe : Recursive prediction error method.
FUNCTIONS FOR PRETREATING THE DATA:
dscale : Scale data to zero mean and variance one.
FUNCTIONS FOR TRAINING NETWORKS TO MODEL DYNAMIC SYSTEMS:
lipschit : Determine the lag space.
nnarmax1 : Identify a Neural Network ARMAX (or ARMA) model (Linear MA filter).
nnarmax2 : Identify a Neural Network ARMAX (or ARMA) model.
nnarx : Identify a Neural Network ARX (or AR) model.
nniol : Identify a Neural Network model suited for I-O linearization control.
nnoe : Identify a Neural Network Output Error model.
nnrarmx1 : Recursive counterpart to NNARMAX1.
nnrarmx2 : Recursive counterpart to NNARMAX2.
nnrarx : Recursive counterpart to NNARX.
nnssif : Identify a NN State Space Innovations form model.
FUNCTIONS FOR PRUNING NETWORKS:
netstruc : Extract weight matrices from matrix of parameter vectors.
nnprune : Prune models of dynamic systems with Optimal Brain Surgeon (OBS).
obdprune : Prune feed-forward networks with Optimal Brain Damage (OBD).
obsprune : Prune feed-forward networks with Optimal Brain Surgeon (OBS).
FUNCTIONS FOR EVALUATING TRAINED NETWORKS:
fpe : FPE estimate of the generalization error for feed-forward nets.
ifvalid : Validation of models generated by NNSSIF.
ioleval : Validation of models generated by NNIOL.
loo : Leave-One-Out estimate of generalization error for feed-forward nets.
nneval : Validation of feed-forward networks (trained by marq,rpe,bp).
nnfpe : FPE for I/O models of dynamic systems.
nnsim : Simulate model of dynamic system from control sequence alone.
nnvalid : Validation of I/O models of dynamic systems.
wrescale : Rescale weights of trained network.
MISCELLANEOUS FUNCTIONS:
Contents : Contents file.
drawnet : Draws a two layer neural network.
getgrad : Derivative of network outputs w.r.t. the weights.
pmntanh : Fast tanh function.
DEMOS:
test1 : Demonstrates different training methods on a curve fitting example.
test2 : Demonstrates the NNARX function.
test3 : Demonstrates the NNARMAX2 function.
test4 : Demonstrates the NNSSIF function.
test5 : Demonstrates the NNOE function.
test6 : Demonstrates the effect of regularization by weight decay.
test7 : Demonstrates pruning by OBS on the sunspot benchmark problem.
Enjoy - MN
+-----------------------------------+------------------------------+
| Magnus Norgaard                   | e-mail : pmn at iau.dtu.dk   |
| Institute of Automation           | Phone : +45 42 25 35 65      |
| Technical University of Denmark   | Fax : +45 45 88 12 95        |
| Building 326                      | http://kalman.iau.dtu.dk/    |
| DK-2800 Lyngby                    |    Staff/Magnus_Norgaard.html|
| Denmark                           |                              |
|___________________________________|______________________________|
 From janine.stook at era.co.uk Wed Oct 11 11:00:04 1995 From: janine.stook at era.co.uk (janine stook) Date: Wed, 11 Oct 95 15:00:04 GMT Subject: Producing Dependable Systems - Conference & Exhibition Message-ID: <9509118134.AA813448797@mailhost.era.co.uk> Conference & Exhibition Neural Networks - Producing Dependable Systems National Motorcycle Museum, Solihull, West Midlands, UK: Thursday 2 November 1995 The conference will look at the problem of producing dependable neural network computing systems from both theoretical and practical angles. The different approaches to producing and demonstrating dependable systems will be discussed during the day. Case studies will illustrate the state-of-the-art and draw out the lessons that can be applied from one area to another. With a blend of speakers drawn from industry and academia, this conference will be of interest to engineers, managers, technical directors and industrial scientists who wish to know more about the practical application of neural networks. For programme, booking and other details, please see: http://www.era.co.uk/neural.htm, or contact Janine Stook, Senior Conference Organiser, E-mail: conferences at era.co.uk Bookings can be accepted until the day of the event. I look forward to hearing from you. Regards. Janine Stook. Technical Services Division, ERA Technology Ltd, Cleeve Road Leatherhead, Surrey KT22 7SA, UK Tel: +44 (0)1372 367021 Fax: +44 (0)1372 377927  From cas-cns at PARK.BU.EDU Wed Oct 11 15:49:31 1995 From: cas-cns at PARK.BU.EDU (B.U. CAS/CNS) Date: Wed, 11 Oct 1995 15:49:31 -0400 Subject: New Faculty Position Available Message-ID: <199510111948.PAA19316@cns.bu.edu> (A copy of this message has also been posted to the following newsgroups: comp.ai, comp.cog-eng,comp.software-eng,comp.ai.neural-nets,bu.general,misc.jobs.offered,ne.jobs,sci.math,sci.cognitive,sci.psychology,sci.misc,sci.physics,sci.med.psychobiology) NEW FACULTY IN COGNITIVE AND NEURAL SYSTEMS AT BOSTON UNIVERSITY Boston University seeks an associate or full professor for its graduate Department of Cognitive and Neural Systems. Exceptionally qualified assistant professors will also be considered.
This department offers an integrated curriculum of psychological, neurobiological, and computational concepts, models, and methods in the fields of computational neuroscience, connectionist cognitive science, and neural network technology in which Boston University is a leader. Candidates should have an international research reputation, preferably including extensive analytic or computational research experience in modeling a broad range of nonlinear neural networks, especially in one or more of the areas: vision and image processing, adaptive pattern recognition, cognitive information processing, speech and language, and neural network technology. Send a complete curriculum vitae and three letters of recommendation to Search Committee, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston University, Boston, MA 02215. Boston University is an Equal Opportunity/Affirmative Action Employer. http://cns-web.bu.edu  From harnad at cogsci.soton.ac.uk Wed Oct 11 16:44:05 1995 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Wed, 11 Oct 95 21:44:05 +0100 Subject: Addictions: BBS Call for Commentators Message-ID: <14340.9510112044@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming target article on: RESOLVING THE CONTRADICTIONS OF ADDICTION by Gene Heyman (Psychology, Harvard) This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/bbs ftp://ftp.princeton.edu/pub/harnad/BBS ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ RESOLVING THE CONTRADICTIONS OF ADDICTION Gene M. Heyman Department of Psychology Harvard University Cambridge, MA 02138 gmh at wjh12.harvard.edu KEYWORDS: Addiction, compulsive behavior, disease, incentive- sensitization, reinforcement, rational choice, matching law ABSTRACT: Research findings on addiction are contradictory. According to biographical records and widely used diagnostic manuals, addicts use drugs compulsively. These accounts are consistent with genetic research and laboratory experiments in which repeated administration of addictive drugs caused changes in neural substrates associated with reward. However, epidemiological and experimental data show that the consequences of drug consumption can significantly modify drug intake in addicts. 
The disease model can account for the compulsive features of addiction, but not occasions in which price and punishment reduced drug consumption in addicts. Conversely, learning models of addiction can account for the influence of price and punishment, but not compulsive drug taking. The occasion for this paper is that recent developments in behavioral choice theory resolve the apparent contradictions in the addiction literature. The basic argument includes the following four statements. First, repeated consumption of an addictive drug decreases its future value and the future value of competing activities. Second, the frequency of an activity is a function of its relative (not absolute) value. This implies that an activity that reduces the values of competing behaviors can increase in frequency even if its own value also declines. Third, a recent experiment (Heyman & Tanz, 1995) shows that the effective reinforcement contingencies are relative to a frame of reference, and this frame of reference can change so as to favor optimal or sub-optimal choice. Fourth, if the frame of reference is local, reinforcement contingencies will favor excessive drug use, but if the frame of reference is global, the reinforcement contingencies will favor controlled drug use. The transition from a global to local frame of reference explains relapse and other compulsive features of addiction. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.heyman). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.heyman ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.heyman To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.heyman When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From rosen at unr.edu Thu Oct 12 04:14:58 1995 From: rosen at unr.edu (David B. 
Rosen) Date: Thu, 12 Oct 1995 01:14:58 -0700 Subject: Missing Data Workshop Announcement & Call for Presentations Message-ID: <199510120809.IAA13967@solstice.ccs.unr.edu> This is the first announcement and call for presentations for: MISSING DATA: METHODS AND MODELS A NIPS*95 Workshop Friday, December 1, 1995 INTRODUCTION Incomplete or missing data, typically unobserved or unavailable features in supervised learning, is an important problem often encountered in real-world data sets and applications. Assumptions about the missing-data mechanism are often not stated explicitly, for example independence between this mechanism and the values of the (missing or other) features themselves. In the important case of incomplete training data, one often discards incomplete rows or columns of the data matrix, throwing out some useful information along with the missing data. Ad hoc or univariate methods such as imputing the mean or mode are dangerous as they can sometimes give much worse results than simple discarding (a small illustration follows at the end of this announcement). Overcoming the problem of missing data often requires that we model not just the dependence of the output on the inputs, but the inputs among themselves as well. THE WORKSHOP This one-day workshop should provide a valuable opportunity to share and discuss methods and models used for missing data. There will be a number of short presentations, with discussion following each and continuing in greater depth after all are done. Particular classes of methods we hope will be discussed include:
* Discarding data, univariate imputation (filling-in) of missing values, etc.
* Single and multiple imputation based on other (non-missing) features.
* Mixture models for the joint (input-output) space.
* EM algorithm.
* Recurrent networks, iterative relaxation, auto-associative pattern completion.
* Methods specific to certain learning algorithms, e.g. trees, graphical models.
* Stochastic simulation and Bayesian posterior sampling.
Presenters so far include (alphabetically) Leo Breiman (tentative!), Zoubin Ghahramani, Bo Thiesson / Steffen Lauritzen (missing data in graphical models), and Volker Tresp. CALL FOR PRESENTATIONS Additional speakers are needed to present some of the methods mentioned above, or other topics of interest. If you would like to do so, or if you have additional suggestions to offer, please contact us as soon as possible. ORGANIZERS Harry Burke David Rosen New York Medical College, Department of Medicine, Valhalla NY 10595 USA FAX: +1.914.347.2419 FURTHER INFORMATION Check the workshop's Web page (the above is a snapshot of it) http://www.scs.unr.edu/~cbmr/nips/workshop-missing.html for further updates over time. It also has a link to the NIPS*95 home page. Those without direct access to the World Wide Web can use Agora, the email-based Web browser. Send the message HELP, or the message send http://www.scs.unr.edu/~cbmr/nips/workshop-missing.html to agora at www.undp.org . Any Subject header is ignored.
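[The promised illustration - a minimal sketch, not part of the workshop material; the data and numbers are made up - of why univariate (mean) imputation can be dangerous when features are correlated, while imputation conditioned on an observed feature preserves the regression structure:

import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)     # x2 correlated with x1
y = x1 + x2 + 0.1 * rng.normal(size=n) # true coefficients: [1, 1]

miss = rng.random(n) < 0.5             # x2 missing completely at random

# (a) Univariate (mean) imputation: ignores the x1-x2 correlation.
x2_mean = x2.copy()
x2_mean[miss] = x2[~miss].mean()

# (b) Conditional imputation: predict x2 from x1 on complete cases.
coefs = np.polyfit(x1[~miss], x2[~miss], 1)
x2_cond = x2.copy()
x2_cond[miss] = np.polyval(coefs, x1[miss])

def fit(x2_filled):
    # Ordinary least squares of y on [x1, imputed x2, intercept].
    A = np.column_stack([x1, x2_filled, np.ones(n)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[:2]

print("true coefficients:  [1.0, 1.0]")
print("mean imputation:   ", fit(x2_mean).round(2))   # biased estimates
print("cond. imputation:  ", fit(x2_cond).round(2))   # close to [1, 1]

Mean imputation leaves the imputed rows carrying no information about the x1-x2 dependence, so the fitted coefficients are pulled away from [1, 1]; imputing from the observed feature keeps them close to the truth.]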
-- David Rosen New York Medical College  From cohn at psyche.mit.edu Thu Oct 12 10:23:53 1995 From: cohn at psyche.mit.edu (David Cohn) Date: Thu, 12 Oct 95 10:23:53 EDT Subject: NIPS*95 program info available Message-ID: <9510121423.AA12917@psyche.mit.edu> Final Program NIPS*95 Neural Information Processing Systems: Natural and Synthetic November 27 - December 2 Denver, Colorado The final program for NIPS*95, along with other conference and registration information, is now available on the NIPS home page URL: http://www.cs.cmu.edu/Web/Groups/CNBC/nips/NIPS.html or by sending email to nips95 at mines.colorado.edu. Please note that the deadline for early registration is October 28th; registration costs rise by $50 after that date. --------------------------------------------------------------------- NIPS*95 Conference Highlights --------------------------------------------------------------------- Tutorials: November 27, 1995 Denver Marriott City Center, Denver, Colorado Formal Conference Sessions: November 28 - 30, 1995 Denver Marriott City Center, Denver, Colorado Post-Meeting Workshops: December 1-2, 1995 Marriott Hotel, Vail, Colorado NIPS is a single-track conference -- there will be no parallel sessions. Out of approximately 460 submitted papers, 30 will be presented as talks; another 110 will be presented as posters. All accepted papers will appear in the proceedings. A number of invited talks will survey active areas of research and lead off the sessions. These include: John H. McMasters, (Banquet Speaker) - Boeing Commercial Aircraft Company "Origins and Future of Flight: A Paleoecological Perspective" Bruce Rosen, Massachusetts General Hospital "Mapping Brain Function with Functional Magnetic Resonance Imaging" David Heckerman, Microsoft "Learning Bayesian Networks" Brian Ripley, Statistics, Oxford "Statistical Ideas for Selecting Network Architectures" Thomas McAvoy, University of Maryland "Application of Neural Networks in the Chemical Process Industries" Elizabeth Bates, UCSD, Cognitive Science Department "Brain Organization for Language in Children and Adults" --------------------------------------------------------------------- TUTORIAL PROGRAM --------------------------------------------------------------------- November 27, 1995 Session I: 9:30-11:30 a.m. "Functional Anatomy of Primate Vision" Gary Blasdel, Harvard Medical School "Neural Networks for Identification and Control" Kumpati Narendra, Yale University Session II: 1:00-3:00 p.m. "Cortical Circuits in a Multichip Communication Framework" Misha Mahowald, Institute for Neuroinformatics "Computational Learning and Statistical Prediction" Jerome Friedman, Stanford University Session III: 3:30-5:30 p.m. "Unsupervised Learning Procedures" Geoffrey Hinton, University of Toronto "Option Pricing in Modern Finance Theory and the Relevance of Artificial Neural Networks" Halbert White, University of California at San Diego --------------------------------------------------------------------- POST-MEETING WORKSHOPS --------------------------------------------------------------------- November 30 - December 2, 1995 The formal conference will be followed by post-meeting workshop sessions in Vail, Colorado. Registration for the workshops is optional. It includes the welcome reception, two continental breakfasts and one banquet dinner. The workshops will have morning (7:30-9:30 a.m.) and afternoon (4:30-6:30 p.m.) sessions each day and will be followed by a summary session at 7:00 p.m. on the final day. 
Early registration is strongly encouraged, as we may have to limit attendance. Early room reservations at Vail are also strongly encouraged. Below is a partial list of this year's workshops.

Noisy Time Series
Object Features for Visual Shape Representation
Neural Hardware Engineering
Benchmarking of NN Learning Algorithms
Symbolic Dynamics in Neural Processing
Prospects for Neural Human-Machine Interfaces
Neural Information and Coding
Modeling the Mind: Large Scale Research Projects
Vertebrate Neurophysiology and Neural Networks: can the teacher learn from the student?
Hybrid HMM/ANN Systems for Sequence Recognition
Robot Learning - Learning in the "Real World"
Transfer of Knowledge in Inductive Systems
The Dynamics of On-Line Learning
Optimization Problem Solving with Neural Nets
Neural Networks for Signal Processing
Statistical and Structural Models in Network Vision
Learning in Bayesian Networks and Other Graphical Models
Knowledge Acquisition, Consolidation, and Transfer within Neural Networks
Dealing with Incomplete Data in Classification and Regression
Topological Maps for Density Estimation, Regression and Classification

 From inns_www at PARK.BU.EDU Thu Oct 12 23:08:06 1995 From: inns_www at PARK.BU.EDU (INNS Web Staff) Date: Thu, 12 Oct 1995 23:08:06 -0400 Subject: WCNN'96: Call for Papers Message-ID: <307DD816.167EB0E7@cns.bu.edu> CALL FOR PAPERS The 1996 World Congress on Neural Networks San Diego, California September 15--20, 1996 Town & Country Hotel The following information is also available on the INNS WEB site: http://cns-web.bu.edu/INNS/index.html -------------------------------------------------------------------------------
Organizing Committee:
David Casasent, General Chair
Shun-Ichi Amari, President
Daniel L. Alkon, Program Chair
Walter J. Freeman, President, 1994
Bart Kosko, Program Chair
John G. Taylor, President, 1995

1995 INNS Officers:
President: John G. Taylor
President Elect: Shun-Ichi Amari
Past President: Walter J. Freeman
Secretary: Gail Carpenter
Treasurer: Judy Dayhoff
Headquarter Services: Stephanie Dickinson

1995 Governing Board:
Daniel L. Alkon, Christof Koch, James A. Anderson, Bart Kosko, David Casasent, Daniel S. Levine, Leon Cooper, Christoph von der Malsburg, Rolf Eckmiller, Alianna Maren, Francoise Fogelman-Soulie, Harold Szu, Kunihiko Fukushima, Paul Werbos, Stephen Grossberg, Bernard Widrow, Lotfi A. Zadeh
------------------------------------------------------------------------------- Call for Papers Papers must be received, in English, by January 15, 1996. There is a four-page limit, with a $25 per page fee for papers over four pages. Overlength charges can be paid by check (payable in U.S. Dollars and issued by a U.S. Correspondent Bank, to WCNN'96), Visa or MasterCard. (Should a paper be rejected, the fee will be refunded.) Papers must be on 8-1/2" x 11" white paper with 1" margins on all sides, one-column format, single spaced, in Times or similar type style of 10 points or larger, one side of paper only. Faxes are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s), and mailing and e-mail address(es), followed by blank space, abstract (up to 15 lines), and text.
The following information must be included in an accompanying cover letter for the paper to be reviewed: Full title of paper, corresponding author and presenting author name, address, telephone and fax numbers; 1st and 2nd choices of technical sessions (see below), oral or poster session preferred; and audio-visual requirements (for oral presentation only). Papers that do not meet these requirements or that arrive with insufficient funds will be returned. The original and five copies of each paper should be sent to: WCNN'96 Program Chairs 875 Kings Highway, Suite 200 Woodbury, NJ 08096-3172 U.S.A. The program committee will determine whether papers will be presented orally or as posters. All members of INNS in good standing have the right to designate in their cover letters one (1) 15-line abstract with themselves as an author for automatic acceptance for at least publication and a poster presentation; this is in addition to any full paper submissions. Biological and engineering papers are welcome for all sessions. Contributed papers are welcome for all twenty-six sessions, including special sessions. -------------------------------------------------------------------------------
SESSION TOPICS
1. Vision
2. Speech
3. Neurocontrol and Robotics
4. Supervised Learning
5. Unsupervised Learning
6. Pattern Recognition
7. Prediction and System Identification
8. Intelligent Systems
9. Computational Neuroscience
10. Signal Processing
11. Neurodynamics & Chaos
12. Hardware Implementation
13. Associative Memory and Reinforcement Learning
14. Applications
15. Mathematical Foundations
16. Evolutionary/Genetic/Annealing Algorithms
17. Neural and Fuzzy Systems
18. Fuzzy Approximation and Applications
19. Medical Applications
20. Industrial Applications

SPECIAL SESSIONS
A. Consciousness and Intentionality
B. Biological Neural Networks
C. Dynamical Systems in Financial Engineering
D. Power Industry Applications
E. Statistics & Neural Networks
F. INNS Special Interest Groups (SIGINNS)
-------------------------------------------------------------------------------  From nin at cns.brown.edu Fri Oct 13 12:03:52 1995 From: nin at cns.brown.edu (Nathan Intrator) Date: Fri, 13 Oct 95 12:03:52 EDT Subject: NIPS Workshop: Object Features for Visual Shape Representation Message-ID: <9510131603.AA12898@cns.brown.edu> First announcement of the following workshop. Updates and the call for presentations are on the web page. ---------------------------------------------------------------------- Object Features for Visual Shape Representation NIPS-95 Workshop: Saturday, Dec 2, 1995 Organizers: Shimon Edelman, Nathan Intrator http://www.physics.brown.edu/~nin/workshop95.html Overview Object recognition can be regarded as a comparison between the stimulus shape and a library of reference shapes stored in long-term memory. It is not likely that the visual system stores exact templates or snapshots of familiar objects, both for pragmatic reasons (the appearance of a 3D object depends on the viewing conditions, making a close match between the stimulus and a template unattainable), and because of computational limitations that have to do with the curse of dimensionality. Many competing approaches to the extraction of features useful for shape representation have been proposed in the past. The workshop will explore and compare some of these approaches. We shall be particularly interested in discussing different approaches for evaluating feature extraction rules: information theory, statistics, pattern recognition, etc.
We would like to elaborate on the goal of feature/information extraction in early visual cortex, the relevance of the statistics of the input environment to the study of learning rules, and comparisons among visual cortical plasticity models. Presentation of psychophysical and neurobiological data relevant to the feature issue will be encouraged. POTENTIAL PARTICIPANTS: connectionists / feature extraction people / vision researchers / neurobiologists working on perceptual learning
Invited Speakers
----------------
Joseph Atick, Rockefeller (Tentative)
Horace Barlow, Cambridge (Tentative)
Elie Bienenstock, CNRS, Brown
Ichiro Fujita, Osaka
Stu Geman, Brown
Tai Sing Lee, Harvard
Bruno A. Olshausen, Cornell
Tommy Poggio, MIT
Dan Ruderman, USC (Tentative)
Harel Shouval, Brown
 From payman at ebs330.eb.uah.edu Fri Oct 13 18:47:14 1995 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Fri, 13 Oct 95 17:47:14 CDT Subject: Computational Intelligence in Financial Engineering - CIFEr'96 Message-ID: <9510132247.AA24566@ebs330> Call for Papers Conference on Computational Intelligence for Financial Engineering CIFEr Conference =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Visit us on the World Wide Web for latest updates and information at http://www.ieee.org/nnc/conferences/cfp/cifer96.html The homepage can also be accessed via the "Conferences" page of the IEEE Neural Network Council's Homepage at http://www.ieee.org/nnc The deadlines mentioned in this CFP supersede those in the hard copy version. Our homepage will be updated on October 15 to reflect these new deadlines. -- Payman Arabshahi Electronic Publicity Chair, CIFEr'96 Tel : (205) 895-6380 Dept. of Electrical & Computer Eng. Fax : (205) 895-6803 University of Alabama in Huntsville payman at ebs330.eb.uah.edu Huntsville, AL 35899 http://www.eb.uah.edu/ece/ ---------------------------------------------------------------------- IEEE/IAFE 1996 Call for Papers Conference on Computational Intelligence for Financial Engineering CIFEr Conference March 24-26, 1996, New York City, Crowne Plaza Manhattan Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers The IEEE/IAFE CIFEr Conference is the second annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management.
Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following:
CONFERENCE TOPICS
-----------------
> Financial Engineering Applications:
Trading Systems
Forecasting
Hedging Strategies
Risk Management
Pricing of Structured Securities
Systemic Risk
Asset Allocation
Exotic Options
> Computer & Engineering Applications & Models:
Neural Networks
Probabilistic Reasoning
Fuzzy Systems and Rough Sets
Stochastic Processes
Dynamic Optimization
Time Series Analysis
Non-linear Dynamics
Evolutionary Computation
INSTRUCTIONS FOR AUTHORS, PANEL PROPOSALS, SPECIAL SESSIONS, TUTORIALS ---------------------------------------------------------------------- All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at Meeting Management by December 1, 1995. We intend to publish a book containing a selection of the best accepted papers. AUTHORS (FOR CONFERENCE ORAL SESSIONS) -------------------------------------- One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Meeting Management by December 1, 1995. Centered at the top of the first page should be the paper's complete title, author name(s), affiliation(s), and mailing address(es). Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously, and is not under consideration for publication elsewhere. In the letter accompanying the submission, the following information should be included: * Topic(s) * Full title of paper * Corresponding Author's name * Mailing address * Telephone and fax * E-mail (if available) * Presenter (If different from corresponding author, please provide name, mailing address, etc.) Authors will be notified of acceptance of the Extended Summary by January 10, 1996. Complete papers (up to a maximum of seven 8.5 inch by 11 inch pages) will be due by February 9, 1996, and will be published in the conference proceedings. SPECIAL SESSIONS ---------------- A limited number of special sessions will address subjects within the topical scope of the conference. Each special session will consist of four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include: * Topic(s) * Title of Special Session * Name, address, phone, fax, and email of the Session Organizer * List of paper titles with authors' names and addresses * One page of summaries of all papers Notification of acceptance of special session proposals will be on January 10, 1996. If a proposal for a special session is accepted, the authors will be required to submit a camera-ready copy of their paper for the conference proceedings by February 9, 1996. PANEL PROPOSALS --------------- Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities. Notification of acceptance of panel session proposals will be on January 10, 1996. TUTORIAL PROPOSALS ------------------ Proposals for tutorials addressing subjects within the topical scope of the conference will be considered.
Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered. Notification of acceptance of tutorial proposals will be on January 10, 1996. EXHIBIT INFORMATION ------------------- Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr secretariat, Barbara Klemm. SPONSORS -------- Sponsorship for the CIFEr Conference is being provided by the IAFE (International Association of Financial Engineers) and the IEEE Neural Networks Council. The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and application of computational techniques to finance. Early registration is $400 for IEEE (Institute of Electrical and Electronics Engineers, Neural Networks Council) and IAFE (International Association of Financial Engineers) members. For details contact Barbara Klemm at Meeting Management. INFORMATION ----------- CIFEr Secretariat: Meeting Management IEEE/IAFE Computational Intelligence for Financial Engineering 2603 Main Street, Suite #690 Irvine, California 92714 Tel: (714) 752-8205 or (800) 321-6338 Fax: (714) 752-7444 Email: 74710.2266 at compuserve.com Visit us on the World Wide Web for latest updates: http://www.ieee.org/nnc/conferences/cfp/cifer96.html ORGANIZING COMMITTEE -------------------- Keynote Speaker: Stephen Figlewski, Professor of Finance and Editor of the Journal of Derivatives Stern School of Business, New York University John M. Mulvey, Professor and Director Engineering Management Systems Princeton University, Princeton Conference Committee General Co-chairs: John Marshall, Professor of Financial Engineering Polytechnic University, New York, NY Robert Marks, Professor of Electrical Engineering, University of Washington, Seattle, WA Program Committee Co-chairs: Benjamin Melamed, Ph.D., Research Scientist RUTCOR-Rutgers University's Center for Operations Research Alan Tucker, Associate Professor of Finance Pace University, New York, NY International Liaison: Arnold Jang, Vice President, Intelligent Trading Systems Springfields Investments Advisory Company, Taipei, Taiwan Organizational Chair: Robert Golan, President Rough Knowledge Discovery Inc., Calgary, Alberta Finance Chair: Ingrid Marshall, Accountant Marshall & Marshall, Stroudsburg, PA Exhibits Chair: Steve Piche, Lead Scientist Pavillion Inc, Austin Program Co-Chair: Alan Tucker and Benjamin Melamed Program Committee: Phelim Boyle, Professor of Accounting University of Waterloo, Waterloo, Ontario Mark Broadie, Associate Professor of Finance Graduate School of Business Columbia University, New York, NY Jan Dash, Ph.D, Managing Director Smith Barney, New York, NY Stephen Figlewski, Professor of Finance New York University, New York, NY Roy S. Freedman, Ph.D, President Inductive Solutions, Inc, New York, NY Peter L.
Hammer, Professor and Director RUTCOR-Rutgers University's Center for Operations Research, New Brunswick, NJ Jimmy E. Hilliard, Professor of Finance University of Georgia, Athens, GA John Hull, Professor of Management University of Toronto, Toronto, Ontario Yuval Lirov, Ph.D., Vice President Lehman Brothers, Inc, New York, NY David G. Luenberger, Professor of Electrical Engineering Stanford University, Stanford, CA John M. Mulvey, Professor and Director Engineering Management Systems Princeton University, Princeton, NJ Jason Z. Wei, Associate Professor of Finance University of Saskatchewan, Saskatoon Robert E. Whaley, Professor of Business Futures and Options Research Center Duke University, Durham, NC Publicity Chair Michael Wolf, General Manager Financial Products, The Mathworks, Inc., Natick, MA Electronic Publicity Chair Payman Arabshahi, Assistant Professor of Electrical Engineering University of Alabama in Huntsville, Huntsville Conference Liaison Scott Mathews, Senior Associate Marshall, Tucker, and Associates, Edmonds, WA  From tony at discus.anu.edu.au Mon Oct 16 02:52:50 1995 From: tony at discus.anu.edu.au (Tony BURKITT) Date: Mon, 16 Oct 1995 16:52:50 +1000 Subject: ACNN'96: Call for Papers Message-ID: <199510160652.QAA04463@cslab.anu.edu.au> C A L L F O R P A P E R S ACNN'96 SEVENTH AUSTRALIAN CONFERENCE ON NEURAL NETWORKS 10th - 12th APRIL 1996 Australian National University Canberra, Australia The seventh Australian conference on neural networks will be held in Canberra on April 10th - 12th 1996 at the Australian National University. ACNN'96 is the annual national meeting of the Australian neural network community. It is a multi-disciplinary meeting and seeks contributions from Neuroscientists, Engineers, Computer Scientists, Mathematicians, Physicists and Psychologists. ACNN'96 will feature a number of invited speakers. The program will include lecture presentations and poster sessions. Proceedings will be printed and distributed to the attendees. The posters will be displayed for a significant period of time, and time will be allocated for authors to be present at their poster in the conference program. Pre-Conference Workshops and Tutorials Proposals for Pre-Conference Workshops and Tutorials are invited. These are to be held on Tuesday 9th April at the same venue as the conference. People wishing to organize such workshops or tutorials are invited to submit a precis at the same time as the submission deadline for papers, and these will be advertised. Invited Keynote Speakers ACNN'96 will feature a number of keynote speakers, including Professor Wolfgang Maass, Technical University Graz. Further details to be announced. Submission Categories The major categories for paper submissions include: 1. Computational Neuroscience: Integrative function of neural networks in Vision, Audition, Motor, Somatosensory and Autonomic functions; Synaptic function; Cellular information processing; 2. Theory: Learning; Generalisation; Complexity; Scaling; Stability; Dynamics; 3. Implementation: Hardware implementation of neural nets; Analog and digital VLSI implementation; Optical implementation; 4. Architectures and Learning Algorithms: New architectures and learning algorithms; Hierarchy; Modularity; Learning pattern sequences; Information integration; 5. 
Cognitive Science and AI: Computational models of perception and pattern recognition; Memory; Concept formation; Problem solving and reasoning; Visual and auditory attention; Language acquisition and production; Neural network implementation of expert systems; 6. Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, Machine vision; Motor control; Robotics; Forecasting; Medical. Initial Submission of Papers As this is a multi-disciplinary meeting, papers are required to be comprehensible to an informed researcher outside the particular stream of the author, in addition to the normal requirements of technical merit. Papers should be submitted as close as possible to final form and must not exceed six single A4 pages (2-column format). A cover page should be supplied giving the title of the paper, the name and affiliation of each author, together with the postal address, the e-mail address, and the phone and fax numbers of a designated contact author. The type font should be no smaller than 10 point except in footnotes. A serif font such as Times or New Century Schoolbook is preferred. A LaTeX style file and a LaTeX template file specifying the final format are available by ftp (in the directory ftp://syseng.anu.edu.au/pub/acnn96/paperformat). Five copies of the paper and the front cover sheet should be sent to: ACNN'96 Secretariat L.P.O. Box 228 Australian National University Canberra, ACT 2601 Australia Each manuscript should clearly indicate submission category (from the six listed) and author preference for oral or poster presentations. This initial submission must be in hard copy and must reach us by Friday, 1 December 1995. ACNN'96 will include a special poster session devoted to recent work and work-in-progress. Abstracts are solicited for this session (1 page limit), and may be submitted up to one week before the commencement of the conference. They will not be refereed or included in the proceedings, but will be distributed to attendees upon arrival. Students are especially encouraged to submit abstracts for this session.
Submission Deadlines
Friday, 1 December 1995: Deadline for receipt of paper submissions
Friday, 19 January 1996: Notification of acceptance
Friday, 16 February 1996: Final papers in camera-ready form for printing
Venue Huxley Lecture Theatre, Leonard Huxley Building, Mills Road, Australian National University, Canberra, Australia. ACNN'96 Organising Committee Peter Bartlett Australian National University Tony Burkitt Australian National University Bob Williamson Australian National University ACNN'96 Technical Program Committee Tom Downs University of Queensland Bill Gibson University of Sydney Andrew Heathcote University of Newcastle Marwan Jabri University of Sydney Adam Kowalczyk Telecom Research Laboratories Cyril Latimer University of Sydney Wee Sun Lee Australian National University M. V. Srinivasan Australian National University
Registrations
The registration fee to attend ACNN'96 is:
Full Time Students A$120.00
Academics A$260.00
Other A$380.00
A discount of 20% applies for advance registration. Registration forms must be posted before February 16th, 1996, to be entitled to the discount. To be eligible for the Full Time Student rate, a letter from the Head of Department as verification of enrollment is required. There is a registration form at the end of this document. Accommodation Delegates will have to make their own accommodation arrangements directly with the college or hotel of their choice.
A list of accommodation close to the conference venue is available (see "Further Information"). Further Information For further information and registration forms, contact: ACNN'96 Secretariat L.P.O. Box 228 Australian National University Canberra, ACT 2601 Australia Tel: 06 - 249 5645 WWW page: http://wwwsyseng.anu.edu.au/acnn96/ ftp site: syseng.anu.edu.au:pub/acnn96 or via email: acnn96 at anu.edu.au  From andreas at sabai.cs.colorado.edu Tue Oct 24 00:14:30 1995 From: andreas at sabai.cs.colorado.edu (Andreas Weigend) Date: Mon, 23 Oct 1995 22:14:30 -0600 (MDT) Subject: Noisy Time Series (2-day NIPS workshop) Message-ID: <199510240414.WAA28537@sabai.cs.colorado.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 3027 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/0e14ac8e/attachment-0001.ksh From wray at Heuristicrat.COM Mon Oct 23 20:32:35 1995 From: wray at Heuristicrat.COM (Wray Buntine) Date: Mon, 23 Oct 95 17:32:35 PDT Subject: ISIS: Information, Statistics and Induction in Science Message-ID: <9510240032.AA28178@euclid.Heuristicrat.COM> *** CALL FOR PAPERS *** ISIS: Information, Statistics and Induction in Science Melbourne, Australia, 20-23 August 1996 Conference Chair: David Dowe Co-chairs: Kevin Korb and Jonathan Oliver INVITED SPEAKERS: Henry Kyburg, Jr. (University of Rochester, NY) J. Ross Quinlan (Sydney University) Jorma J. Rissanen (IBM Almaden Research, San Jose, California) Ray Solomonoff (U.S.A.) PROGRAM COMMITTEE: Lloyd Allison, Mark Bedau, Hamparsum Bozdogan, Wray Buntine, Peter Cheeseman, Honghua Dai, David Dowe, Doug Fisher, Alex Gammerman, Clark Glymour, Randy Goebel, David Hand, Bill Harper, David Heckerman, Colin Howson, Lawrence Hunter, Frank Jackson, Max King, Kevin Korb, Henry Kyburg, Ming Li, Nozomu Matsubara, Aleksandar Milosavljevic, Richard Neapolitan, Jonathan Oliver, Michael Pazzani, J. Ross Quinlan, Glenn Shafer, Peter Slezak, Ray Solomonoff, Paul Thagard, Neil Thomason, Raul Valdes-Perez, Tim van Gelder, Paul Vitanyi, Chris Wallace, Geoff Webb, Xindong Wu, Jan Zytkow. Inquiries to: isis96 at cs.monash.edu.au David Dowe: dld at cs.monash.edu.au Kevin Korb: korb at cs.monash.edu.au or Jonathan Oliver: jono at cs.monash.edu.au Information is available on the WWW at: http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml This conference will explore the use of computational modelling to understand and emulate inductive processes in science. The problems involved in building and using such computer models reflect methodological and foundational concerns common to a variety of academic disciplines, especially statistics, artificial intelligence (AI) and the philosophy of science. This conference aims to bring together researchers from these and related fields to present new computational technologies for supporting or analysing scientific inference and to engage in collegial debate over the merits and difficulties underlying the various approaches to automating inductive and statistical inference. AREAS OF INTEREST. The following streams/subject areas are of particular interest to the organisers: Concept Formation and Classification. Minimum Encoding Length Inference Methods. Scientific Discovery. Theory Revision. Bayesian Methodology. Foundations of Statistics. Foundations of Social Science. Foundations of AI. CALL FOR PAPERS. Prospective authors should mail five copies of their papers to Dr. David Dowe, ISIS chair.
Alternatively, authors may submit by email to isis96 at cs.monash.edu.au. Email submissions must be in LaTeX (using the ISIS style guide [will be available at the ISIS WWW page]). Submitted papers should be in double-column format in 10 point font and not exceeding 10 pages. An additional page should display the title, author(s) and affiliation(s), abstract, keywords and identification of which of the eight areas of interest (see http://www.cs.monash.edu.au/~jono/ISIS/ISIS.Area.Interest.html) are most relevant to the paper. Refereeing will be blind; that is, this additional page will not be passed along to referees. The proceedings will be published; details have not yet been settled with the prospective publisher. Accepted papers must be presented by at least one author in attendance in order to be published. Papers should be sent to: Dr David Dowe ISIS chair Department of Computer Science Monash University Clayton Victoria 3168 Australia Phone: +61-3-9 905 5226 FAX: +61-3-9 905 5146 Email: isis96 at cs.monash.edu.au Submission (receipt) deadline: 11 March, 1996 Notification of acceptance: 10 June, 1996 Camera-ready copy (receipt) deadline: 15 July, 1996 CONFERENCE VENUE ISIS will be held at the Old Melbourne Hotel, 5-17 Flemington Rd. North Melbourne. The Old Melbourne Hotel is within easy walking distance of downtown Melbourne, Melbourne University, many restaurants (on Lygon Street) and the Melbourne Zoo. It is about fifteen to twenty minutes' drive from the airport. REGISTRATION A registration form will be available at the WWW site http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml, or by mail from the conference chair. Registration deadlines will be considered met provided a legible postmark is on or before the relevant date and airmail is used. Student registrations will be available at a discount (but prices have not yet been fixed). Relevant dates are: Early registration (at a discount): 3 June, 1996 Final registration: 1 July, 1996  From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Wed Oct 25 23:57:34 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Wed, 25 Oct 95 23:57:34 -0400 Subject: problem fixed re: Connectionists list Message-ID: <14308.814679854@DST.BOLTZ.CS.CMU.EDU> This past week we had a problem with the processing of the distribution list used for CONNECTIONISTS. As a result, several messages didn't get sent out to subscribers. The problem has been corrected, and we are now redistributing the messages that we believe got eaten. Apologies to anyone who receives a duplicate copy. -- Dave Touretzky & Lisa Saksida  From jagota at ponder.csci.unt.edu Tue Oct 17 13:10:48 1995 From: jagota at ponder.csci.unt.edu (Jagota Arun Kumar) Date: Tue, 17 Oct 95 12:10:48 -0500 Subject: NIPS*95 workshop on Optimization Message-ID: <9510171710.AA08474@ponder> Dear Connectionists: Attached is a description of the NIPS*95 workshop on optimization. For up-to-date information, including abstracts of talks, see the URL. We might be able to fit in one or two more talks. Send me a title and abstract by e-mail if you'd like to give a talk.
Arun Jagota ---------------------------------------------------------------------- OPTIMIZATION PROBLEM SOLVING WITH NEURAL NETS NIPS95 Workshop, Organizer: Arun Jagota Friday Dec 1 1995, 7:30--9:30 AM and 4:30--6:30 PM E-mail: jagota at cs.unt.edu Workshop URL: http://www.msci.memphis.edu/~jagota/NIPS95 Ever since the work of Hopfield and Tank, neural nets have found increasing use in the approximate solution of difficult optimization problems arising in many applications. Such neural nets are well-suited in principle to these problems, because they minimize, in parallel form, an energy function into which an optimization problem's objective and constraints can be mapped. Unfortunately, they often haven't worked well in practice, for two reasons. First, mapping the objective and constraints of a problem onto a single good energy function has turned out to be difficult for certain problems, for example for the Travelling Salesman Problem. Moreover, the ease or difficulty of mapping has turned out to be problem-dependent, making it difficult to find a good general mapping methodology. Second, the dynamical algorithms have often been limited to some form of local search or gradient-descent. In recent years, there have been significant advances on both fronts. Provably good mappings of several optimization problems have been found. Powerful dynamical algorithms that go beyond gradient-descent have also been developed, with ideas borrowed from different fields. Examples are Mean Field Annealing, Simulated Annealing, Projection Methods, and Randomized Multi-Start Algorithms. This workshop aims to take stock of the state of the art on this topic, and to study directions for future research and applications. Target Audience Both topics, neural nets and optimization, are of relevance to a wide range of disciplines and we hope that several of these will be represented at this workshop. These include Cognitive Science, Computer Science, Engineering, Mathematics, Neurobiology, Physics, Chemistry, and Psychology. Format Six to eight 30-minute talks, each including 5 minutes for discussion, with 30 minutes for discussion at the end.
Talks
The Complexity of Stability in Hopfield Networks
  Ian Parberry, University of North Texas
Title to be announced
  Anand Rangarajan, Yale University
Performance of Neural Network Algorithms for Maximum Clique on Highly Compressible Graphs
  Arun Jagota, University of North Texas
Population-based Incremental Learning
  Shumeet Baluja, Carnegie-Mellon University
How Good are Neural Networks Algorithms for the Travelling Salesman Problem?
  Marco Budinich, Dipartimento di Fisica, Via Valerio 2, 34127 Trieste ITALY
Relaxation Labeling Networks for the Maximum Clique Problem
  Marcello Pelillo, University of Venice, Italy
----------------------------------------------------------------------  From shawn_mikiten at biad23.uthscsa.edu Wed Oct 18 14:03:51 1995 From: shawn_mikiten at biad23.uthscsa.edu (shawn mikiten) Date: 18 Oct 1995 14:03:51 U Subject: BrainMap '95 Conference ann Message-ID: The upcoming BrainMap '95 Conference on December 3 & 4 will be in San Antonio, TX. Anyone involved in, or interested in, developing databases in brain mapping and/or behavior is welcome to apply.
If you have access to WWW the URL is: http://ric.uthscsa.edu/services/95  From robert at fit.qut.edu.au Fri Oct 20 01:55:04 1995 From: robert at fit.qut.edu.au (Robert Andrews) Date: Fri, 20 Oct 1995 15:55:04 +1000 Subject: Rule Extraction From ANNs - AISB96 Workshop Message-ID: <199510200555.PAA25350@ocean.fit.qut.edu.au> ============================================================= FIRST CALL FOR PAPERS AISB-96 WORKSHOP Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) University of Sussex, Brighton, England April 2, 1996 -------------------------------------------- RULE-EXTRACTION FROM TRAINED NEURAL NETWORKS -------------------------------------------- Robert Andrews Neurocomputing Research Centre Queensland University of Technology Brisbane 4001 Queensland, Australia Phone: +61 7 864-1656 Fax: +61 7 864-1969 E-mail: robert at fit.qut.edu.au Joachim Diederich Neurocomputing Research Centre Queensland University of Technology Brisbane 4001 Queensland, Australia Phone: +61 7 864-2143 Fax: +61 7 864-1801 E-mail: joachim at fit.qut.edu.au Lee Giles NEC Research Institute 4 Independence Way Princeton, NJ 08540 The objective of the workshop is to provide a discussion platform for researchers interested in Artificial Neural Networks (ANNs), Artificial Intelligence (AI) and Cognitive Science. The workshop should be of considerable interest to computer scientists and engineers as well as to cognitive scientists and people interested in ANN applications which require a justification of a classification or inference. INTRODUCTION It is becoming increasingly apparent that without some form of explanation capability, the full potential of trained Artificial Neural Networks may not be realised. The problem is an inherent inability to explain, in a comprehensible form, the process by which a given decision or output generated by an ANN has been reached. For Artificial Neural Networks to gain an even wider degree of user acceptance and to enhance their overall utility as learning and generalisation tools, it is highly desirable if not essential that an `explanation' capability becomes an integral part of the functionality of a trained ANN. Such a requirement is mandatory if, for example, the ANN is to be used in what are termed `safety critical' applications such as airlines and power stations. In these cases it is imperative that a system user be able to validate the output of the Artificial Neural Network under all possible input conditions. Further, the system user should be provided with the capability to determine the set of conditions under which an output unit within an ANN is active and when it is not, thereby providing some degree of transparency of the ANN solution. Craven & Shavlik (1994) define the task of rule extraction from neural networks as follows: "Given a trained neural network and the examples used to train it, produce a concise and accurate symbolic description of the network." The following discussion of the importance of rule-extraction algorithms is based on this definition. THE IMPORTANCE OF RULE-EXTRACTION ALGORITHMS Since rule extraction from trained Artificial Neural Networks comes at a cost in terms of resources and additional effort, an early imperative in any discussion is to delineate the reasons why rule extraction is an important, if not mandatory, extension of conventional ANN techniques.
The merits of including rule extraction techniques as an adjunct to conventional Artificial Neural Network techniques include: Data exploration and the induction of scientific theories Over time neural networks have proven to be extremely powerful tools for data exploration with the capability to discover previously unknown dependencies and relationships in data sets. As Craven and Shavlik (1994) observe, `a (learning) system may discover salient features in the input data whose importance was not previously recognised.' However, even if a trained Artificial Neural Network has learned interesting and possibly non-linear relationships, these relationships are encoded incomprehensibly as weight vectors within the trained ANN and hence cannot easily serve the generation of scientific theories. Rule-extraction algorithms significantly enhance the capabilities of ANNs to explore data to the benefit of the user. Provision of a `user explanation' capability Experience has shown that an explanation capability is considered to be one of the most important functions provided by symbolic AI systems. In particular, the salutary lesson from the introduction and operation of Knowledge Based systems is that the ability to generate even limited explanations (in terms of being meaningful and coherent) is absolutely crucial for the user-acceptance of such systems. In contrast to symbolic AI systems, Artificial Neural Networks have no explicit declarative knowledge representation. Therefore they have considerable difficulty in generating the required explanation structures. It is becoming increasingly apparent that the absence of an `explanation' capability in ANN systems limits the realisation of the full potential of such systems and it is this precise deficiency that the rule extraction process seeks to redress. Improving the generalisation of ANN solutions Where a limited or unrepresentative data set from the problem domain has been used in the ANN training process, it is difficult to determine when generalisation can fail even with evaluation methods such as cross-validation. By being able to express the knowledge embedded within the trained Artificial Neural Network as a set of symbolic rules, the rule-extraction process may provide an experienced system user with the capability to anticipate or predict a set of circumstances under which generalisation failure can occur. Alternatively the system user may be able to use the extracted rules to identify regions in input space which are not represented sufficiently in the existing ANN training set data and to supplement the data set accordingly. A CLASSIFICATION SCHEME FOR RULE EXTRACTION ALGORITHMS The method of classification proposed here is in terms of: (a) the expressive power of the extracted rules; (b) the `translucency' of the view taken within the rule extraction technique of the underlying Artificial Neural Network units; (c) the extent to which the underlying ANN incorporates specialised training regimes; (d) the `quality' of the extracted rules; and (e) the algorithmic `complexity' of the rule extraction/rule refinement technique. The `translucency' dimension of classification is of particular interest. It is designed to reveal the relationship between the extracted rules and the internal architecture of the trained ANN. It comprises two basic categories of rule extraction techniques viz `decompositional' and `pedagogical' and a third - labelled as `eclectic' - which combines elements of the two basic categories. 
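----------------------------------------------------------------------
An illustrative aside, not part of the CFP itself: the Python sketch below gives the flavour of the `pedagogical' category just introduced. The trained network is treated as an opaque oracle, sampled inputs are labelled by querying it, and a symbolic learner (here a depth-limited decision tree, using today's scikit-learn for brevity) is fitted to mimic the network's input/output behaviour. The hand-wired XOR-style `network', the uniform sampling scheme, and all names are assumptions invented purely for this example.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def trained_network(x):
    # Stand-in for a trained ANN: a fixed 2-2-1 tanh net whose output
    # agrees with XOR on the corners of the unit square.
    h = np.tanh(x @ np.array([[4.0, -4.0], [4.0, -4.0]])
                + np.array([-2.0, 6.0]))
    return (h @ np.array([4.0, 4.0]) - 6.0 > 0).astype(int)

# Query the black box on sampled inputs ("generate examples").
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(2000, 2))
y = trained_network(X)

# Fit a concise symbolic description of the network's function.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["x1", "x2"]))
print("fidelity to network:", (tree.predict(X) == y).mean())

The fidelity figure printed at the end is the usual quality measure for such extracted rule sets: the fraction of queries on which the symbolic description agrees with the network it was extracted from.
----------------------------------------------------------------------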
The distinguishing characteristic of the `decompositional' approach is that the focus is on extracting rules at the level of individual (hidden and output) units within the trained Artificial Neural Network. Hence the `view' of the underlying trained Artificial Neural Network is one of `transparency'. The `pedagogical' label is given to those rule extraction techniques which treat the trained ANN as a `black box', ie the view of the underlying trained Artificial Neural Network is `opaque'. The core idea in the `pedagogical' approach is to `view rule extraction as a learning task where the target concept is the function computed by the network and the input features are simply the network's input features'. Hence the `pedagogical' techniques aim to extract rules that map inputs directly into outputs. Where such techniques are used in conjunction with a symbolic learning algorithm, the basic motif is to use the trained Artificial Neural Network to generate examples for the learning algorithm. As indicated above, the proposed third category in this classification scheme comprises composites which incorporate elements of both the `decompositional' and `pedagogical' (or `black-box') rule extraction techniques. This is the `eclectic' group. Membership in this category is assigned to techniques which utilise knowledge about the internal architecture and/or weight vectors in the trained Artificial Neural Network to complement a symbolic learning algorithm. An ancillary problem to that of rule extraction from trained ANNs is that of using the ANN for the `refinement' of existing rules within symbolic knowledge bases. The goal in rule refinement is to use a combination of ANN learning and rule extraction techniques to produce a `better' (ie a `refined') set of symbolic rules which can then be applied back in the original problem domain. In the rule refinement process, the initial rule base (ie what may be termed `prior knowledge') is inserted into an ANN by programming some of the weights. The rule refinement process then proceeds in the same way as normal rule extraction viz (1) train the network on the available data set(s); and (2) extract (in this case the `refined') rules - with the proviso that the rule refinement process may involve a number of iterations of the training phase rather than a single pass.
DISCUSSION POINTS FOR WORKSHOP PARTICIPANTS
1. Decompositional vs. learning approaches to rule-extraction from ANNs. What are the advantages and disadvantages w.r.t. performance, solution time, computational complexity, problem domain etc.? Are decompositional approaches always dependent on a certain ANN architecture?
2. Rule-extraction from trained neural networks vs. symbolic induction. What are the relative strengths and weaknesses?
3. What are the most important criteria for rule quality?
4. What are the most suitable representation languages for extracted rules? How does the extraction problem vary across different languages?
5. What is the relationship between rule-initialisation (insertion) and rule-extraction? For instance, are these equivalent or complementary processes? How important is rule-refinement by neural networks?
6. Rule-extraction from trained neural networks and computational learning theory. Is generating a minimal rule-set which mimics an ANN a hard problem?
7. Does rule-initialisation result in faster learning and improved generalisation?
8. To what extent are existing extraction algorithms limited in their applicability?
How can these limitations be addressed?
9. Are there any interesting rule-extraction success stories? That is, problem domains in which the application of rule-extraction methods has resulted in an interesting or significant advance.
ACKNOWLEDGEMENT Many thanks to Mark Craven and Alan Tickle for comments on earlier versions of this proposal.
RELEVANT PUBLICATIONS
Andrews, R, Diederich, J and Tickle, A B `A survey and critique of techniques for extracting rules from trained artificial neural networks' To appear: Knowledge-Based Systems, 1995 (ftp:fit.qut.edu.au//pub/NRC/ps/QUTNRC-95-01-02.ps.Z)
Andrews, R and Geva, S `Rule extraction from a constrained error back propagation MLP' Proc. 5th Australian Conference on Neural Networks Brisbane Queensland (1994) pp 9-12
Andrews, R and Geva, S `Inserting and extracting knowledge from constrained error back propagation networks' Proc. 6th Australian Conference on Neural Networks Sydney NSW (1995)
Craven, M W and Shavlik, J W `Using sampling and queries to extract rules from trained neural networks' Machine Learning: Proceedings of the Eleventh International Conference (San Francisco CA) (1994) (in print)
Diederich, J `Explanation and artificial neural networks' International Journal of Man-Machine Studies Vol 37 (1992) pp 335-357
Fu, L M `Neural networks in computer intelligence' McGraw Hill (New York) (1994)
Fu, L M `Rule generation from neural networks' IEEE Transactions on Systems, Man, and Cybernetics Vol 28 No 8 (1994) pp 1114-1124
Gallant, S `Connectionist expert systems' Communications of the ACM Vol 31 No 2 (February 1988) pp 152-169
Giles, C L and Omlin, C W `Rule refinement with recurrent neural networks' Proc. of the IEEE International Conference on Neural Networks (San Francisco CA) (March 1993) pp 801-806
Giles, C L and Omlin, C W `Extraction, insertion, and refinement of symbolic rules in dynamically driven recurrent networks' Connection Science Vol 5 Nos 3 and 4 (1993) pp 307-328
Giles, C L, Miller, C B, Chen, D, Chen, H, Sun, G Z and Lee, Y C `Learning and extracting finite state automata with second-order recurrent neural networks' Neural Computation Vol 4 (1992) pp 393-405
Hayward, R, Pop, E and Diederich, J `Extracting rules for grammar recognition from Cascade-2 networks' Proceedings, IJCAI-95 Workshop on Machine Learning and Natural Language Processing
McMillan, C, Mozer, M C and Smolensky, P `The connectionist scientist game: rule extraction and refinement in a neural network' Proc. of the Thirteenth Annual Conference of the Cognitive Science Society (Hillsdale NJ) 1991
Omlin, C W, Giles, C L and Miller, C B `Heuristics for the extraction of rules from discrete time recurrent neural networks' Proc. of the International Joint Conference on Neural Networks (IJCNN'92) (Baltimore MD) Vol 1 (1992) pp 33
Pop, E, Hayward, R and Diederich, J `RULENEG: extracting rules from a trained ANN by stepwise negation' QUT NRC (December 1994)
Sestito, S and Dillon, T `Automated knowledge acquisition of rules with continuously valued attributes' Proc. 12th International Conference on Expert Systems and their Applications (AVIGNON'92) (Avignon France) (May 1992) pp 645-656.
Sestito, S and Dillon, T `Automated knowledge acquisition' Prentice Hall (Australia) (1994)
Thrun, S B `Extracting provably correct rules from artificial neural networks' Technical Report IAI-TR-93-5, Institut fur Informatik III, Universitat Bonn (1994)
Tickle, A B, Orlowski, M and Diederich, J `DEDEC: decision detection by rule extraction from neural networks' QUT NRC (September 1994)
Towell, G and Shavlik, J `The extraction of refined rules from knowledge-based neural networks' Machine Learning Vol 13 (1993) pp 71-101
Tresp, V, Hollatz, J and Ahmad, S `Network structuring and training using rule-based knowledge' Advances in Neural Information Processing Vol 5 (1993) pp 871-878
SUBMISSION OF WORKSHOP EXTENDED ABSTRACTS/PAPERS Authors are invited to submit 3 copies of either an extended abstract or full paper relating to one of the topic areas listed above. Papers should be written in English in single column format and should be limited to no more than eight (8) sides of A4 paper including figures and references. Centered at the top of the first page should be complete title, author name(s), affiliation(s), and mailing and email address(es), followed by blank space, abstract (15-20 lines), and text. Please include the following information in an accompanying cover letter: full title of paper; presenting author's name, address, and telephone and fax numbers; author's e-mail address. Submission deadline is January 15th, 1996, with notification to authors by 31st January, 1996. For further information, inquiries, and paper submissions please contact: Robert Andrews Queensland University of Technology GPO Box 2434 Brisbane Q. 4001. Australia. phone +61 7 864-1656 fax +61 7 864-1969 email robert at fit.qut.edu.au More information about the AISB-96 workshop series is available from: ftp: ftp.cogs.susx.ac.uk pub/aisb/aisb96 WWW: (http://www.cogs.susx.ac.uk/aisb/aisb96)
WORKSHOP PARTICIPATION CHARGES
The workshop fees are listed below. Note that these fees include lunch. Student charges are shown in brackets.
                    AISB MEMBERS    NON-AISB MEMBERS
1 Day Workshop      65 (45)         80
Late registration   85 (60)         100
PROGRAM COMMITTEE MEMBERS R. Andrews, Queensland University of Technology A. Tickle, Queensland University of Technology S. Sestito, DSTO, Australia J. Shavlik, University of Wisconsin =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Mr Robert Andrews School of Information Systems robert at fit.qut.edu.au Faculty of Information Technology R.Andrews at qut.edu.au Queensland University of Technology +61 7 864 1656 (voice) GPO Box 2434 _--_|\ +61 7 864 1969 (fax) Brisbane Q 4001 / QUT Australia \_.--._/ v =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  From austin at minster.cs.york.ac.uk Fri Oct 20 13:29:47 1995 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Fri, 20 Oct 95 13:29:47 Subject: No subject Message-ID: Statistical Modelling and Simulation of Neural Networks 2 Year Research Associate supported by the EPSRC under the ROPA scheme. Within the Advanced Computer Architecture Group Department of Computer Science University of York, York, Y01 5DD, UK. Applications are invited for a 2 year post aimed at investigating the properties and the application of a novel neural network method for parallel processing. The work will involve building probabilistic models and performing experimental evaluations of the network. The candidates will be expected to hold a PhD in a relevant subject and have a good understanding of probability theory and programming in C.
In addition, knowledge of neural networks and parallel processing would be an advantage, but not essential. The work will study the properties of a novel form of binary neural network, based on correlation matrix memories. In a recent paper (Austin 1995, available via WWW and included in the further particulars), it has been shown how a correlation matrix memory can be used to recall multiple data items in parallel. Although the technique has been shown to be feasible, its practical application requires a thorough analysis of the network's abilities through statistical modelling and computer simulation. The modelling is likely to require probabilistic methods, which will greatly add to the group's work on binary neural networks. The computer modelling will be undertaken in C, on the group's extensive network of Silicon Graphics machines. The successful applicant will join a thriving team of over 14 researchers working in neural networks, which specialises in research on binary weighted neural networks and their application in computer vision and knowledge based systems. The project is supported under the UK government ROPA scheme, and is available immediately for two years. Applications should be sent to the Personnel Office, University of York, York, YO1 5DD, by Friday 3rd of November 1995 quoting reference 6616. Salary will be up to 15,986 UK pounds. Further details can be obtained from the Personnel Office, or by contacting Dr. Jim Austin on 01904 432734, email austin at minster.york.ac.uk. Further details of the group's work can be found on the world wide web http://dcpu1.cs.york.ac.uk:6666/arch/acag.html  From mpolycar at ece.uc.edu Fri Oct 20 11:05:26 1995 From: mpolycar at ece.uc.edu (Marios Polycarpou) Date: Fri, 20 Oct 1995 11:05:26 -0400 (EDT) Subject: ISIC'96: Call for Papers Message-ID: <199510201505.LAA21722@zoe.ece.uc.edu> ************************************ CALL FOR PAPERS 11th IEEE International Symposium on Intelligent Control (ISIC) ************************************  Sponsored by the IEEE Control Systems Society and held in conjunction with The 1996 IEEE International Conference on Control Applications (CCA) and The IEEE Symposium on Computer-Aided Control System Design (CACSD) September 15-18, 1996 The Ritz-Carlton Hotel, Dearborn, Michigan, USA ISIC General Chair: Kevin M. Passino, The Ohio State University ISIC Program Chair: Jay A. Farrell, University of California, Riverside ISIC Publicity Chair: Marios Polycarpou, University of Cincinnati Intelligent control, the discipline where control algorithms are developed by emulating certain characteristics of intelligent biological systems, is being fueled by recent advancements in computing technology and is emerging as a technology that may open avenues for significant technological advances. For instance, fuzzy controllers, which provide for a simplistic emulation of human deduction, have been heuristically constructed to perform difficult nonlinear control tasks. Knowledge-based controllers developed using expert systems or planning systems have been used for hierarchical and supervisory control. Learning controllers, which provide for a simplistic emulation of human induction, have been used for the adaptive control of uncertain nonlinear systems. Neural networks have been used to emulate human memorization and learning characteristics to achieve high performance adaptive control for nonlinear systems.
Genetic algorithms that use the principles of biological evolution and "survival of the fittest" have been used for computer-aided design of control systems and to automate the tuning of controllers by evolving, in real time, populations of highly fit controllers. Topics in the field of intelligent control are gradually evolving, expanding on and merging with those of conventional control. For instance, recent work has focused on comparative cost-benefit analyses of conventional and intelligent control techniques using simulation and implementations. In addition, there has been recent activity focused on modeling and nonlinear analysis of intelligent control systems, particularly work focusing on stability analysis. Moreover, there has been a recent focus on the development of intelligent and conventional control systems that can achieve enhanced autonomous operation. Such intelligent autonomous controllers try to integrate conventional and intelligent control approaches to achieve levels of performance, reliability, and autonomous operation previously only seen in systems operated by humans. Papers are being solicited for presentation at ISIC and for publication in the Symposium Proceedings on topics such as:
- Architectures for intelligent control
- Hierarchical intelligent control
- Distributed intelligent systems
- Modeling intelligent systems
- Mathematical analysis of intelligent systems
- Knowledge-based systems
- Fuzzy systems / fuzzy control
- Neural networks / neural control
- Machine learning
- Genetic algorithms
- Applications / Implementations:
  - Automotive / vehicular systems
  - Robotics / Manufacturing
  - Process control
  - Aircraft / spacecraft
This year the ISIC is being held in conjunction with the 1996 IEEE International Conference on Control Applications and the IEEE Symposium on Computer-Aided Control System Design. Effectively this is one large conference at the beautiful Ritz-Carlton hotel. The programs will be held in parallel so that sessions from each conference can be attended by all. There will be one registration fee and each registrant will receive a complete set of proceedings. For more information, and information on how to submit a paper to the conference, see below. ++++++++++ Submissions: ++++++++++ Papers: Five copies of the paper (including an abstract) should be sent by Jan. 22, 1996 to: Jay A. Farrell, ISIC'96 College of Engineering ph: (909) 787-2159 University of California, Riverside fax: (909) 787-3188 Riverside, CA 92521 Jay_Farrell at qmail.ucr.edu Clearly indicate who will serve as the corresponding author and include a telephone number, fax number, email address, and full mailing address. Authors will be notified of acceptance by May 1996. Accepted papers, in final camera-ready form (maximum of 6 pages in the proceedings), will be due in June 1996. Invited Sessions: Proposals for invited sessions are being solicited and are due Jan. 22, 1996. The session organizers should contact the Program Chair by Jan. 1, 1996 to discuss their ideas and obtain information on the required invited session proposal format. Workshops and Tutorials: Proposals for pre-symposium workshops should be submitted by Jan. 22, 1996 to: Kevin M. Passino, ISIC'96 Dept. Electrical Engineering ph: (614) 292-5716 The Ohio State University fax: (614) 292-7596 2015 Neil Ave. passino at osu.edu Columbus, OH 43210-1272 Please contact K.M. Passino by Jan. 1, 1996 to discuss the content and required format for the workshop or tutorial proposal.
++++++++++++++++++++++++ Symposium Program Committee: ++++++++++++++++++++++++ James Albus, National Institute of Standards and Technology Karl Astrom, Lund Institute of Technology Matt Barth, University of California, Riverside Michael Branicky, Massachusetts Institute of Technology Edwin Chong, Purdue University Sebastian Engell, University of Dortmund Toshio Fukuda, Nagoya University Zhiqiang Gao, Cleveland State University Dimitry Gorinevsky, Measurex Devron Inc. Ken Hunt, Daimler-Benz AG Tag Gon Kim, KAIST Mieczyslaw Kokar, Northeastern University Ken Loparo, Case Western Reserve University Kwang Lee, The Pennsylvania State University Michael Lemmon, University of Notre Dame Frank Lewis, University of Texas at Arlington Ping Liang, University of California, Riverside Derong Liu, General Motors R&D Center Kumpati Narendra, Yale University Anil Nerode, Cornell University Marios Polycarpou, University of Cincinnati S. Joe Qin, Fisher-Rosemount Systems, Inc. Tariq Samad, Honeywell Technology Center George Saridis, Rensselaer Polytechnic Institute Jennie Si, Arizona State University Mark Spong, University of Illinois at Urbana-Champaign Jeffrey Spooner, Sandia National Laboratories Harry Stephanou, Rensselaer Polytechnic Institute Kimon Valavanis, University of Southwestern Louisiana Li-Xin Wang, Hong Kong University of Science and Tech. Gary Yen, USAF Phillips Laboratory ************************************************************************** * Prof. Marios M. Polycarpou | TEL: (513) 556-4763 * * University of Cincinnati | FAX: (513) 556-7326 * * Dept. Electrical & Computer Engineering | * * Cincinnati, Ohio 45221-0030 | Email: polycarpou at uc.edu * **************************************************************************  From mjo at cns.ed.ac.uk Fri Oct 20 11:45:16 1995 From: mjo at cns.ed.ac.uk (Mark Orr) Date: Fri, 20 Oct 1995 16:45:16 +0100 Subject: Paper available: Local Smoothing of RBF Networks Message-ID: <199510201545.QAA21458@garbo.cns.ed.ac.uk> The following paper has been accepted for presentation at the International Symposium on Neural Networks, Hsinchu, Taiwan, December 1995. LOCAL SMOOTHING OF RADIAL BASIS FUNCTION NETWORKS Mark J.L. Orr Centre for Cognitive Science Edinburgh University Abstract: A method of supervised learning is described which enhances generalisation performance by adaptive local smoothing in the input space. The method exploits the local nature of radial basis functions and employs multiple smoothing parameters optimised by generalised cross-validation. More traditional approaches have only a single smoothing parameter and produce a globally uniform smoothing but are demonstrably less effective unless the target function itself is uniformly smooth. A postscript version of a slightly longer version (9 pages instead of 6) can be retrieved by following the links "publications" and "Neural Networks" from the world wide web page: http://www.cns.ed.ac.uk/people/mark.html Alternatively the paper can be retrieved by anonymous ftp: ftp://scott.cogsci.ed.ac.uk/pub/mjo/isann95-long.ps.Z Size: 77KB compressed, 155KB uncompressed. Sorry, no hardcopies. 
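----------------------------------------------------------------------
An illustrative aside: the Python sketch below is not Orr's local-smoothing algorithm (for that, consult the paper itself), but the more traditional baseline the abstract argues against: radial basis function regression with a single global smoothing (ridge) parameter, selected by generalised cross-validation over a grid. The data, kernel width, centre placement and parameter grid are all assumptions invented for the example.

import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=60)

# Gaussian RBF design matrix over fixed, evenly spaced centres.
centres, width = np.linspace(0, 1, 15), 0.1
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

def gcv_score(lam):
    # GCV(lambda) = n * ||(I - H) y||^2 / (n - tr H)^2,
    # where H = Phi (Phi'Phi + lambda I)^-1 Phi' is the hat matrix.
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                              Phi.T)
    resid = y - H @ y
    return len(y) * (resid @ resid) / (len(y) - np.trace(H)) ** 2

lams = 10.0 ** np.arange(-8, 2)
best = min(lams, key=gcv_score)
print("lambda chosen by GCV:", best)

A single lambda chosen this way smooths uniformly over the whole input space; the paper's contribution, on this reading of the abstract, is to optimise multiple local smoothing parameters by the same GCV criterion, so that smoothing can adapt to where the target function is rough.
----------------------------------------------------------------------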
---- Mark J L Orr, Centre for Cognitive Science, Edinburgh University, 2, Buccleuch Place, Edinburgh EH8 9LW, Scotland, UK phone: (+44) (0) 131 650 4413 email: mjo at cns.ed.ac.uk  From lbl at nagoya.bmc.riken.go.jp Wed Oct 25 22:15:42 1995 From: lbl at nagoya.bmc.riken.go.jp (Bao-Liang Lu) Date: Thu, 26 Oct 1995 11:15:42 +0900 Subject: Paper available: Transformation of NLP Problems Using Neural Nets Message-ID: <9510260215.AA07409@xian> The following paper, to appear in Annals of Mathematics and Artificial Intelligence, is available via anonymous FTP. (This work was presented at the 1st Mathematics of Neural Networks and Applications (MANNA'95) Conference, 3-7 July 1995, Lady Margaret Hall, Oxford, UK) FTP-host:ftp.bmc.riken.go.jp FTP-file:pub/publish/Lu/lu-manna95.ps.Z ========================================================================== TITLE: Transformation of Nonlinear Programming Problems into Separable Ones Using Multilayer Neural Networks AUTHORS: Bao-Liang Lu (1) Koji Ito (1,2) ORGANISATIONS: (1) The Institute of Physical and Chemical Research (RIKEN) (2) Toyohashi University of Technology ABSTRACT: In this paper we present a novel method for transforming nonseparable nonlinear programming (NLP) problems into separable ones using multilayer neural networks. This method is based on a useful feature of multilayer neural networks, i.e., any nonseparable function can be approximately expressed as a separable one by a multilayer neural network. By use of this method, the nonseparable objective and (or) constraint functions in NLP problems can be approximated by multilayer neural networks, and therefore, any nonseparable NLP problem can be transformed into a separable one. The importance of this method lies in the fact that it provides us with a promising approach to using modified simplex methods to solve general NLP problems. (6 pages. No hard copies available.) Bao-Liang Lu --------------------------------------------- Bio-Mimetic Control Research Center, The Institute of Physical and Chemical Research (RIKEN) 3-8-31 Rokuban, Atsuta-ku, Nagoya 456, Japan Phone: +81-52-654-9137 Fax: +81-52-654-9138 Email: lbl at nagoya.bmc.riken.go.jp  From S.Renals at dcs.shef.ac.uk Thu Oct 26 11:08:14 1995 From: S.Renals at dcs.shef.ac.uk (S.Renals@dcs.shef.ac.uk) Date: Thu, 26 Oct 1995 16:08:14 +0100 Subject: Research Positions in Speech Recognition Message-ID: <199510261508.QAA12522@elvis.dcs.shef.ac.uk> As part of the EU-funded project SPRACH (Speech Recognition Algorithms for Connectionist Hybrids), two Research Associate positions, of three years' duration, are available at the Universities of Cambridge and Sheffield. Both positions are concerned with developing new methods for large vocabulary speech recognition. The Sheffield position will have an emphasis on statistical language modelling; the Cambridge position will have an emphasis on connectionist acoustic models. The research project will build on the recently completed Wernicke project. One of the outcomes of that project is the Abbot large vocabulary speech recognition system, which is available in a demonstration version at ftp://svr-ftp.eng.cam.ac.uk/pub/comp.speech/recognition/AbbotDemo/ The job adverts and application details are included below. For informal discussion contact either Steve Renals (s.renals at dcs.shef.ac.uk) or Tony Robinson (ajr at eng.cam.ac.uk).
Tony Robinson, Cambridge University Steve Renals, Sheffield University ----------------------------------------------------------------------- University of Sheffield Department of Computer Science, Speech and Hearing Group Research Associate in Continuous Speech Recognition Applications are invited for a Research Associate to work in the speech and hearing group in the area of continuous speech recognition. In particular, the post will involve the investigation of new methods of statistical language modelling, approaches to domain adaptation and the development and evaluation of demonstration systems. The project is funded as a Basic Research Project by the EU and will be of three years' duration, from December 1995. Candidates for the post will be expected to hold a postgraduate degree (preferably a PhD) in a relevant discipline, or to have acquired equivalent experience. The successful candidate will have had research experience in the area of statistical language modelling or connectionist/HMM-based speech recognition. Salary will be in the range £14,317 to £18,985. Informal enquiries about the post to Dr. Steve Renals (email: s.renals at dcs.shef.ac.uk; tel: +44-114-282-5575; fax: +44-114-278-0972). Further particulars and an application form are available from the Director of Human Resource Management, The University of Sheffield, Western Bank, Sheffield S10 2TN (tel: +44-114-282-4144; fax: +44-114-276-7897), citing Ref:R780. The closing date for applications is Friday 10 November 1995. The University of Sheffield follows an Equal Opportunity Policy. ----------------------------------------------------------------------- University of Cambridge Engineering Department, Speech Vision and Robotics group Research Associate in Large Vocabulary Connectionist Speech Recognition Applications are invited for a Research Assistantship in the use of connectionist models and hidden Markov models in large vocabulary automatic speech recognition. The project is funded by the EU and is of 36 months' duration. Candidates for this post will have a good first degree and preferably a postgraduate degree in a relevant discipline. The candidate is expected to have prior knowledge of connectionist/Markov model hybrids, large vocabulary recognition or a related area. The ability to manage a large software project, participate in international evaluations and liaise with industry would be advantageous. Salary will be in the range £14,317 to £19,848. Further details and an application form may be obtained by writing to Dr Tony Robinson, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, U.K., email ajr at eng.cam.ac.uk, phone +44-1223-332815, fax +44-1223-332662, or http://svr-www.eng.cam.ac.uk/~ajr. The deadline for applications is 26 November 1995. The University follows an equal opportunities policy.
-----------------------------------------------------------------------  From listerrj at helios.aston.ac.uk Thu Oct 26 14:01:12 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Thu, 26 Oct 1995 19:01:12 +0100 Subject: Postdoctoral Research Fellowship Message-ID: <2120.9510261801@sun.aston.ac.uk> ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK POSTDOCTORAL RESEARCH FELLOWSHIP -------------------------------- Neural Networks for Visualisation of High-Dimensional Data ---------------------------------------------------------- *** Full details at http://neural-server.aston.ac.uk/ *** The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2 year postdoctoral research position in the area of novel techniques for data visualisation. The emphasis of the research will be on theoretically well-founded approaches which are applicable to real-world data sets. A key starting point for the research will be the recent developments in latent variable techniques for density estimation. Potential candidates should have strong mathematical and computational skills, with a background in artificial neural networks, statistical pattern recognition, or a related field. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,986 UK pounds. The salary scale is subject to annual increments. How to Apply ------------ If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 4 referees, to: Professor C M Bishop Neural Computing Research Group Dept. of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 6215 e-mail: c.m.bishop at aston.ac.uk (e-mail submission of postscript files is welcome) Closing date: 20 November, 1995. ----------------------------------------------------------------------  From bogus@does.not.exist.com Fri Oct 27 05:19:58 1995 From: bogus@does.not.exist.com () Date: Fri, 27 Oct 1995 09:19:58 +0000 Subject: Post in Statistical Modelling of Neural Networks Message-ID: <9510270919.ZM696@minster.york.ac.uk> Statistical Modelling and Simulation of Neural Networks 2 Year Research Associate supported by the EPSRC under the ROPA scheme. Within the Advanced Computer Architecture Group Department of Computer Science University of York, York, YO1 5DD, UK. Applications are invited for a 2 year post aimed at investigating the properties and the application of a novel neural network method for parallel processing. The work will involve building probabilistic models and performing experimental evaluations of the network. Candidates will be expected to hold a PhD in a relevant subject and have a good understanding of probability theory and programming in C. In addition, knowledge of neural networks and parallel processing would be an advantage, but is not essential. The work will study the properties of a novel form of binary neural network, based on correlation matrix memories. In a recent paper (Austin, 1995; available via the WWW and included in the further particulars), it has been shown how a correlation matrix memory can be used to recall multiple data items in parallel.
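In outline (a minimal sketch in the spirit of such memories, not the formulation of the paper itself; the sizes and random codes are invented), a binary correlation matrix memory stores pattern pairs by OR-ing their outer products, and a superimposed input key recalls several stored responses in one matrix operation:

import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, k = 64, 64, 4             # layer sizes; k = set bits per code

def code(n, k):
    # A random sparse binary code with exactly k set bits.
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

pairs = [(code(n_in, k), code(n_out, k)) for _ in range(5)]

# Storage: one binary matrix, the OR of the outer products y x^T.
M = np.zeros((n_out, n_in), dtype=np.uint8)
for x, y in pairs:
    M |= np.outer(y, x)

# Parallel recall: superimpose two keys, sum the matched lines, and
# threshold at k (the number of set bits in ONE key), so that every
# stored response is guaranteed to reach threshold.
x_super = pairs[0][0] | pairs[1][0]
s = M.astype(np.int32) @ x_super.astype(np.int32)
recalled = (s >= k).astype(np.uint8)

expected = pairs[0][1] | pairs[1][1]
print("all stored bits recovered:", bool(np.all(recalled >= expected)))
print("spurious (ghost) bits    :", int(recalled.sum() - expected.sum()))

Thresholding guarantees recovery of the union of the stored responses; crosstalk between the superimposed keys can only add spurious bits, and how the spurious-bit rate grows with memory loading is exactly the kind of property that statistical modelling of the network must pin down.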
Although the technique has been shown to be feasible, its practical application requires a thorough analysis of the network's abilities through statistical modelling and computer simulation. The modelling is likely to require probabilistic methods, which will greatly add to the group's work on binary neural networks. The computer modelling will be undertaken in C, on the group's extensive network of Silicon Graphics machines. The successful applicant will join a thriving team of over 14 researchers working in neural networks, which specialises in research on binary weighted neural networks and their application in computer vision and knowledge based systems. The project is supported under the UK government ROPA scheme, and is available immediately for two years. Applications should be sent to the Personnel Office, University of York, York, YO1 5DD, POSTMARKED by Friday 3rd of November 1995, quoting reference 6616. Applications up to 10th November 1995 will be accepted, as long as they ARRIVE by that date. Salary will be up to 15,986 UK pounds. Further details can be obtained from the Personnel Office, or by contacting Dr. Jim Austin on 01904 432734, email austin at minster.york.ac.uk. Further details of the group's work can be found on the World Wide Web at http://dcpu1.cs.york.ac.uk:6666/arch/acag.html  From eann96 at lpac.ac.uk Fri Oct 27 05:46:22 1995 From: eann96 at lpac.ac.uk (Engineering Apps in Neural Nets 96) Date: Fri, 27 Oct 95 09:46:22 GMT Subject: EANN96-Second Call for Papers Message-ID: <20523.9510270946@pluto.lpac.ac.uk> International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 Second Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (200 to 400 words) should be sent to eann96 at lpac.ac.uk by 21 January 1996, by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 21 January 1996. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited. For more information, please see the WWW page at http://www.lpac.ac.uk/EANN96 Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee (to be confirmed, extended) G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B. Jervis (UK) E. Oja (Finland) H. Liljenström (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M.
Ishikawa (Japan) Sponsored by: London Parallel Applications Centre (LPAC), IEE UK RIG NN, British Institution of Electrical Engineers Professional Group C4 ---------------------------------+-------------------------------- Engineering Applications of Neural Networks'96 (EANN96) Further info: http://www.lpac.ac.uk/EANN96/ or Dimitris Tsaptsinos (D.Tsaptsinos at lpac.ac.uk) http://www.lpac.ac.uk/SEL-HPC/People/Dimitris ---------------------------------+--------------------------------  From gluck at pavlov.rutgers.edu Fri Oct 27 17:30:01 1995 From: gluck at pavlov.rutgers.edu (Mark Gluck) Date: Fri, 27 Oct 1995 17:30:01 -0400 Subject: Graduate Training in NEURAL COMPUTATION at Rutgers Univ. (NJ), Behav. & Neural Sci Ph.D. Message-ID: <199510272130.RAA09224@pavlov.rutgers.edu> Application Information for Ph.D. Program in BEHAVIORAL AND NEURAL SCIENCES at Rutgers University, Newark, New Jersey * Application target date is February 1, 1996 * ----------------------------------------------------------------- Additional information on our Ph.D. program, research facilities, and faculty can be obtained over the Internet at: http://www.cmbn.rutgers.edu/bns-home.html ----------------------------------------------------------------- The Behavioral and Neural Sciences (BNS) graduate program at Rutgers-Newark aims to provide students with a rigorous understanding of modern neuroscience with an emphasis on integrating behavioral and neural approaches to understanding brain function. The program emphasizes the multidisciplinary nature of this endeavor, and offers specific research training in Behavioral and Cognitive Neuroscience as well as Molecular, Cellular and Systems Neuroscience. These research areas represent different but complementary approaches to contemporary issues in behavioral and molecular neuroscience and can emphasize either human or animal studies. The BNS graduate program is composed of faculty from the Center for Molecular and Behavioral Neuroscience (CMBN), the Institute of Animal Behavior (IAB), the Department of Biological Sciences, the Department of Psychology, and the School of Nursing. Research training in the BNS program emphasizes integration across levels of analysis and traditional disciplinary boundaries. Basic research areas in Cellular and Molecular Neuroscience include the study of the basal forebrain, basal ganglia, hippocampus, visual and auditory systems and monoaminergic and neuroendocrine systems using electrophysiological, neurochemical, neuroanatomical and molecular biological approaches. Research in Cognitive and Behavioral Neuroscience includes the study of memory, language (both signed and spoken), reading, attention, motor control, vision, and animal behavior. Clinically relevant research areas are the study of the behavioral, physiological and pharmacological aspects of schizophrenia, Alzheimer's Disease, amnesia, epilepsy, Parkinson's disease and other movement disorders, and the molecular genetics of neuropsychiatric disorders. Other Information ----------------- At present the CMBN supports up to 40 students with 12-month renewable assistantships for a period of four years. The current stipend for first-year students is $12,750; this includes tuition remission and excellent healthcare benefits. In addition, the Johnson & Johnson pharmaceutical company's Foundation has provided four Excellence Awards which increase students' stipends by $5,000. Several other fellowships are offered. More information is available in our graduate brochure, available upon request.
The Rutgers-Newark campus is 20 minutes outside New York City, and close to other major university research centers at NYU, Columbia, SUNY, and Princeton, as well as major industrial research labs in Northern NJ, including AT&T, Bellcore, Siemens, and a host of pharmaceutical companies including Johnson & Johnson, Hoechst-Celanese, and Sandoz. Faculty Associated With Rutgers BNS Ph.D. Program ------------------------------------------------- FACULTY - RUTGERS Elizabeth Abercrombie (Ph.D., Princeton), neurotransmitters and behavior [CMBN] Colin Beer (Ph.D., Oxford), ethology [IAB] April Benasich (Ph.D., New York), infant perception and cognition [CMBN] Ed Bonder (Ph.D., Pennsylvania), cell biology [Biology] Linda Brzustowicz (M.D., Ph.D., Columbia), human genetics [CMBN] Gyorgy Buzsaki (Ph.D., Budapest), systems neuroscience [CMBN] Mei-Fang Cheng (Ph.D., Bryn Mawr), neuroethology/neurobiology [IAB] Ian Creese (Ph.D., Cambridge), neuropsychopharmacology [CMBN] Doina Ganea (Ph.D., Illinois Medical School), molecular immunology [Biology] Alan Gilchrist (Ph.D., Rutgers), visual perception [Psychology] Mark Gluck (Ph.D., Stanford), learning, memory and neural computation [CMBN] Ron Hart (Ph.D., Michigan), molecular neuroscience [Biology] G. Miller Jonakait (Ph.D., Cornell Medical College), neuroimmunology [Biology] Judy Kegl (Ph.D., M.I.T.), linguistics/neurolinguistics [CMBN] Barry Komisaruk (Ph.D., Rutgers), behavioral neurophysiology/pharmacology [IAB] Joan Morrell (Ph.D., Rochester), cellular neuroendocrinology [CMBN] Teresa Perney (Ph.D., Chicago), ion channel gene expression and function [CMBN] Howard Poizner (Ph.D., Northeastern), language and motor behavior [CMBN] Jay Rosenblatt (Ph.D., New York), maternal behavior [IAB] Anne Sereno (Ph.D., Harvard), attention and visual perception [CMBN] Maggie Shiffrar (Ph.D., Stanford), vision and motion perception [CMBN] Harold Siegel (Ph.D., Rutgers), neuroendocrine mechanisms [IAB] Ralph Siegel (Ph.D., McGill), neuropsychology of visual perception [CMBN] Jennifer Swann (Ph.D., Michigan), neuroendocrinology [Biology] Paula Tallal (Ph.D., Cambridge), neural basis of language development [CMBN] James Tepper (Ph.D., Colorado), basal ganglia neurophysiology and anatomy [CMBN] Beverly Whipple (Ph.D., Rutgers), women's health [Nursing] Laszlo Zaborszky (Ph.D., Hungarian Academy), neuroanatomy of forebrain [CMBN] ASSOCIATES OF CMBN Izrail Gelfand (Ph.D., Moscow State), biology of cells [Biology] Richard Katz (Ph.D., Bryn Mawr), psychopharmacology [Ciba Geigy] Barry Levin (M.D., Emory Medical), neurobiology David Tank (Ph.D., Cornell), neural plasticity [Bell Labs] For More Information or an Application -------------------------------------- If you are interested in applying to our graduate program, or possibly applying to one of the labs as a post-doc, research assistant or programmer, please contact us via one of the following: Dr. Gyorgy Buzsaki or Dr. Mark A. Gluck BNS Graduate Admissions CMBN, Rutgers University 197 University Ave. Newark, New Jersey 07102 Phone: (201) 648-1080 (Ext. 3221) Fax: (201) 648-1272 Email: buzsaki at axon.rutgers.edu or gluck at pavlov.rutgers.edu We will be happy to send you info on our research and graduate program, as well as set up a possible visit to the Neuroscience Center here at Rutgers-Newark. Please also see our WWW Homepage listed above, which contains extensive information on faculty research, degree requirements, local facilities, and more.  
From mm at santafe.edu Fri Oct 27 17:44:46 1995 From: mm at santafe.edu (Melanie Mitchell) Date: Fri, 27 Oct 95 15:44:46 MDT Subject: Postdoctoral fellowships at the Santa Fe Institute Message-ID: <9510272144.AA25294@sfi.santafe.edu> The Santa Fe Institute has an opening for one or more Postdoctoral Fellows beginning in September, 1996. The Institute's research program is devoted to the study of complex systems, especially complex adaptive systems. Systems and techniques currently under study include: the economy; the immune system; the brain; biomolecular sequence and structure; the origin of life; artificial life; models of evolution; adaptive computation and intelligent systems; complexity, entropy, and the physics of information; nonlinear modeling and prediction; the evolution of culture; the development of general-purpose simulation environments; and others. Postdoctoral Fellows work either on existing research projects or on projects of their own choosing. Candidates should have a Ph.D. (or expect to receive one before September 1996) and should have backgrounds in computer science, mathematics, economics, theoretical physics or chemistry, game theory, cognitive science, theoretical biology, dynamical systems theory, or related fields. A strong background in computational approaches is essential, as is an interest in interdisciplinary work. Evidence of this interest, in the form of previous research experience and publications, is important. Applicants should submit a curriculum vitae, list of publications, and statement of research interests, and arrange for three letters of recommendation. Incomplete applications will not be considered. All application materials must be received by February 15, 1996. Decisions will be made by April, 1996. Send applications to: Postdoctoral Committee, Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501. Send complete application packages only, preferably hard copy, to the above address. Include your e-mail address and/or fax number. SFI is an equal opportunity employer. Women and minorities are encouraged to apply. More information about SFI and its research program can be found at SFI's web site: http://www.santafe.edu.  From harnad at cogsci.soton.ac.uk Sat Oct 28 14:06:05 1995 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sat, 28 Oct 95 18:06:05 GMT Subject: Language Innateness: BBS Call for Commentators Message-ID: <8042.9510281806@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming target article on: INNATENESS, AUTONOMY, UNIVERSALITY? NEUROBIOLOGICAL APPROACHES TO LANGUAGE by Ralph-Axel Mueller This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. 
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ INNATENESS, AUTONOMY, UNIVERSALITY? NEUROBIOLOGICAL APPROACHES TO LANGUAGE Ralph-Axel Mueller PET Center, Children's Hospital of Michigan, Wayne State University, Detroit MI 48201-2196, USA rmueller at pet.wayne.edu KEYWORDS: brain development, dissociations, distributive representations, epigenesis, evolution, functional localization, individual variation, innateness, language. ABSTRACT: The concepts of the innateness, universality, species-specificity, and autonomy of the human language capacity have had an extreme impact on the psycholinguistic debate for over thirty years. These concepts are evaluated from several neurobiological perspectives, with an emphasis on the emergence of language and its decay due to brain lesion and progressive brain disease. Evidence of perceptuomotor homologies and preadaptations for human language in nonhuman primates suggests a gradual emergence of language during hominid evolution. Regarding ontogeny, the innate component of language capacity is likely to be polygenic and shared with other developmental domains. Dissociations between verbal and nonverbal development are probably rooted in the perceptuomotor specializations of neural substrates rather than the autonomy of a grammar module. Aphasiological data often assumed to suggest modular linguistic subsystems can be accounted for in terms of a neurofunctional model incorporating perceptuomotor-based regional specializations and distributivity of representations. Thus, dissociations between grammatical functors and content words are due to different conditions of acquisition and resulting differences in neural representation. Since human brains are characterized by multifactorial interindividual variability, strict universality of functional organization is biologically unrealistic. A theoretical alternative is proposed according to which (a) linguistic specialization of brain areas is due to epigenetic and probabilistic maturational events, not to genetic 'hard-wiring', and (b) linguistic knowledge is neurally represented in distributed cell assemblies whose topography reflects the perceptuomotor modalities involved in the acquisition and use of a given item of knowledge. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.mueller). Please do not prepare a commentary on this draft. 
Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.mueller ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.mueller To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.mueller When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From N.Sharkey at dcs.shef.ac.uk Mon Oct 30 09:28:16 1995 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Mon, 30 Oct 95 14:28:16 GMT Subject: one day seminar/colloquium Message-ID: <9510301428.AA08491@entropy.dcs.shef.ac.uk> ************************ SELF-LEARNING ROBOTS ************************ Organisers: Noel Sharkey John Hallam Computer Science AI Dept. Sheffield U. Edinburgh U. An Institution of Electrical Engineers One-day Seminar Savoy Place, London, UK: February 12th, 1996. This will be a one day seminar to examine the most recent developments in robot learning. There will be a number of invited international speakers, and there will also be opportunities for group (and personal) discussion at the event. INVITED SPEAKERS (alphabetical) Evolutionary Learning in Robots Dave Cliff (U.K.) Shaping robots: An experiment in Behavior Engineering Marco Dorigo (Italy) Learning subsumptions for an autonomous robot. Jan Heemskerk & Noel Sharkey (U.K.) Self-Organization in Robot Control Ulrich Nehmzow (U.K.) Toward Conscious Robots Martin Nilsson (Sweden) Evolving non-trivial behaviors on an autonomous robot. Stefano Nolfi (Italy) Robot spatial learning: insights from animal and human behaviour Tony Prescott (UK) Learning more from less data: Experiments with lifelong robot learning. Sebastian Thrun (Germany) Neural Reinforcement Learning for Behavior Synthesis. Claude Touzet (France) A robot arm is neurally controlled using monocular feedback Patrick van der Smagt (The Netherlands) Exploration in Reinforcement Learning Jeremy Wyatt, Gillian Hayes, and John Hallam (U.K.) Robust and Adaptive World Modelling for Mobile Robots Uwe Zimmer (Germany) REGISTRATION INFORMATION: Sarah Evans (at above address) or email: sevans at iee.org.uk Don't reply to this message for info, please - use the above email only.  
From listerrj at helios.aston.ac.uk Mon Oct 30 09:19:56 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Mon, 30 Oct 1995 14:19:56 +0000 Subject: Research Programmer Message-ID: <5409.9510301419@sun.aston.ac.uk> ------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK Research Programmer ------------------- * Full details at http://neural-server.aston.ac.uk/ * Applications are invited for the post of Research Programmer within the Neural Computing Research Group (NCRG) at Aston University. The NCRG is now the largest academic research group in this area in the UK, and has an extensive and lively programme of research ranging from the theoretical foundations of neural computing and pattern recognition through to industrial and commercial applications. The Group is based in spacious accommodation in the University's Main Building, and is well equipped with its own network of Silicon Graphics and Sun workstations, supported by a full-time system administrator. The successful candidate will work under the supervision of Professor Chris Bishop and Professor David Lowe and will be responsible for a range of software development and related activities. An early task will involve the development of a large C++ library of neural network software for use in many of the Group's projects. Another significant component will involve contributions to industrial and commercial research contracts, as well as providing software support to existing research projects. Additional responsibilities may include development of software for use in taught courses as part of the Group's MSc programme in Pattern Analysis and Neural Networks. The ideal candidate will have: * a good first degree in a numerate discipline * expertise in software development (preferably in C and C++) * a good understanding of neural networks * working knowledge of basic mathematics such as calculus and linear algebra * experience of working in a UNIX environment Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff: Chris Bishop Professor David Lowe Professor David Bounds Professor Geoffrey Hinton Visiting Professor Richard Rohwer Lecturer Alan Harget Lecturer Ian Nabney Lecturer David Saad Lecturer Chris Williams Lecturer together with the following Postdoctoral Research Fellows David Barber Paul Goldberg Alan McLachlan Herbert Wiklicky Huaiyu Zhu a full-time system administrator, and PhD and MSc research students. Conditions of Service --------------------- The appointment will be for an initial period of one year, with the possibility of subsequent renewal. Initial salary will be on the academic 1A or 1B scales, up to 15,986 UK pounds. How to Apply ------------ If you wish to be considered for this position, please send a full CV, together with the names and addresses of at least 3 referees, to: Hanni Sondermann Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: (+44 or 0) 121 333 4631 Fax: (+44 or 0) 121 333 6215 e-mail: h.e.sondermann at aston.ac.uk Closing date: 20 November 1995. ----------------------------------------------------------------------  From tds at ai.mit.edu Mon Oct 30 16:02:53 1995 From: tds at ai.mit.edu (Terence D. Sanger)
Date: Mon, 30 Oct 95 16:02:53 EST Subject: NIPS workshop opening Message-ID: <9510302102.AA08537@dentate.ai.mit.edu> Dear Connectionists, There has been an unexpected opening among the panelists for the NIPS workshop described below. If you would be interested in presenting some of your work and aiding a rousing discussion, please send me a brief abstract and a description of your current research. I am most interested in speakers with a background or current research in "wet-science" neurophysiology. Sorry for the late notice! Terry Sanger tds at ai.mit.edu ============================================================================= NIPS*95 Post-Conference Workshop "Vertebrate Neurophysiology and Neural Networks: Can the teacher learn from the student?" Results from neurophysiological investigations continue to guide the development of artificial neural network models that have been shown to have wide applicability in solving difficult computational problems. This workshop addresses the question of whether artificial neural network models can be applied to understanding neurophysiological results and guiding further experimental investigations. Recent work on close modelling of vertebrate neurophysiology will be presented, so as to give a survey of some of the results in this field. We will concentrate on examples for which artificial neural network models have been constructed to mimic the structure as well as the function of their biological counterparts. Clearly, this can be done at many different levels of abstraction. The goal is to discuss models that have explanatory and predictive power for neurophysiology. The following questions will serve as general discussion topics: 1. Do artificial neural network models have any relationship to ``real'' Neurophysiology? 2. Have any such models been used to guide new biological research? 3. Is Neurophysiology really useful for designing artificial networks, or does it just provide a vague ``inspiration''? 4. How faithfully do models need to address ultrastructural or membrane properties of neurons and neural circuits in order to generate realistic predictions of function? 5. Are there any artificial network models that have applicability across different regions of the central nervous system devoted to varied sensory and motor modalities? 6. To what extent do theoretical models address more than one of David Marr's levels of algorithmic abstraction (general approach, specific algorithm, and hardware implementation)? The workshop is planned as a single-day panel discussion including both morning and afternoon sessions. Two or three speakers per session will be asked to limit presentations of relevant research to 15 minutes. Each speaker will describe computational models of different vertebrate regions, and speakers are encouraged to present an overview of algorithms and results in a manner that will allow direct comparison between methods. Intense audience participation is actively encouraged. The intended audience includes researchers actively involved in neurophysiological modelling, as well as a general audience that can contribute viewpoints from different backgrounds within the Neural Networks field.  
From marks at neuro.usc.edu Mon Oct 30 14:04:28 1995 From: marks at neuro.usc.edu (Mark Seidenberg) Date: Mon, 30 Oct 1995 11:04:28 -0800 (PST) Subject: job opening at USC Message-ID: <199510301904.LAA07514@neuro.usc.edu> The Psychology Department at USC is searching for a person in the "cognitive and behavioral neuroscience" area, with a preference for someone who does neural network modeling. The position could also include appointments in computer science or neurobiology as appropriate. The person would join a strong neuroscience-cognitive science program here at USC, which includes Michael Arbib, Christoph von der Malsburg (part-time), myself, Irv Biederman, Richard F. Thompson, Michel Baudry, Larry Swanson, Ted Berger, and others. Text of the ad follows. I would be willing to answer inquiries from interested parties. ----- COGNITIVE AND BEHAVIORAL NEUROSCIENCE: The Psychology Department at the University of Southern California invites applications for a faculty position at the tenure-track assistant professor level, including but not limited to individuals with expertise in neural network modeling. We are particularly interested in applicants skilled in quantitative methods or computational modeling techniques. Teaching responsibilities would include courses in these areas. Interested candidates should submit a letter outlining their qualifications, curriculum vitae, recent publications and three letters of recommendation to: Cognitive and Behavioral Neuroscience Search Committee, Department of Psychology, University of Southern California, Los Angeles CA 90089-1061. Deadline for applications is January 1, 1996. We are an equal opportunity/affirmative action employer and strongly encourage applications from minorities and women. --- ____________________________________ Mark S. Seidenberg Neuroscience Program University of Southern California 3614 Watt Way Los Angeles, CA 90089-2520 Phone: 213-740-9174 Fax: 213-740-5687 ____________________________________  From maja at cs.brandeis.edu Mon Oct 30 19:40:07 1995 From: maja at cs.brandeis.edu (Maja Mataric) Date: Mon, 30 Oct 1995 19:40:07 -0500 Subject: NIPS*95 Post-Conference Workshop on Robot Learning Message-ID: <199510310040.TAA10752@garnet.cs.brandeis.edu> ---------------------------------------------------------------- CALL FOR PARTICIPATION: Robot Learning III -- Learning in the "Real World" A NIPS*95 Post-conference Workshop Vail, Colorado, Dec 1, 1995 ---------------------------------------------------------------- The goal of this one-day workshop is to provide a forum for researchers active in the area of robot learning. Due to the limited time available, we will focus on one major issue: the difficulty of going from theory and simulation to practice and actual implementation of robot learning. A wide variety of algorithms have been developed for learning in robots and, in simulation, many of them work quite well. However, physical robots are faced with sensor noise, control error, non-stationary environments, inconsistent feedback, and the need to operate robustly in real time. Most of these aspects are difficult to simulate accurately, yet they have a critical effect on learning performance. Unfortunately, very few of the developed learning algorithms have been empirically tested on actual robots, and of those even fewer have repeated the success found in simulated domains. Some of the specific questions we plan to discuss are: How can we handle noise in sensing and action without a priori models?
How do we build in a priori knowledge? How can we learn and explore in real time? How can we construct richer reward functions, incorporating feedback, shaping, multi-model reinforcement, etc.? This workshop is intended to serve as a follow-up to previous years' post-NIPS workshops on robot learning. The morning session of the workshop will consist of short presentations of problems faced when implementing learning in physical robots, followed by a general discussion guided by a moderator. The afternoon session will concentrate on actual implementations, with video (and hopefully live) demonstrations where possible. As time permits, we will also attempt to create an updated "Where do we go from here?" list, following the example of the previous years' workshops. The list will attempt to characterize the problems that must be solved next in order to make progress in applied robot learning. Talks by: Stefan Schaal, Georgia Tech, ATR "How Hard Is It To Balance a Real Pole With a Real Arm?" Sebastian Thrun, Carnegie Mellon University, "Learning More from Less Data: Experiments in Lifelong Robot Learning" Maja Mataric, Brandeis University "Complete Systems Learning in Dynamic Environments" Marcos Salganicoff, University of Delaware, A.I. Dupont Institute "Robots are from Mars, Learning Algorithms are from Venus: A practical guide to getting what you want in a relationship with your robot learning implementation" The target audience for the workshop is researchers who are interested in robot learning and robots in general. We expect to draw an eclectic audience, so every attempt will be made to ensure that presentations are accessible to people without any specific background in the field. ----------------------------------------------------------------------- Organized by: Maja Mataric, Brandeis University maja at cs.brandeis.edu David Cohn, MIT and Harlequin, Inc. cohn at harlequin.com  From bert at mbfys.kun.nl Tue Oct 31 11:24:48 1995 From: bert at mbfys.kun.nl (Bert Kappen) Date: Tue, 31 Oct 1995 17:24:48 +0100 Subject: No subject Message-ID: <199510311624.RAA26312@septimius.mbfys.kun.nl> University of Nijmegen, Postdoctoral Fellowship The Foundation for Neural Networks at the University of Nijmegen has a vacancy for a postdoctoral fellow for a theoretical research project. The project consists of the development of novel theory, techniques and implementations for perception and cognitive reasoning in a complex dynamic multi-sensory environment. The results will be explored both in robotics and in multi-media applications. The fundamental problems which could be addressed within the context of this project include, for instance, sub-symbolic/symbolic interfacing, learning in a changing environment, and probabilistic knowledge representation combining neural networks with AI techniques such as Bayes networks. The project includes a group of about 10 scientists, with participation from the University of Amsterdam (robotics group) and Utrecht (research group on 3-D computer vision), and is funded by the Japanese Real World Computing Program.
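As a toy illustration of the last of these problems (an example only, not the project's design; the classes and numbers are invented), the softmax output of a network can be treated as sensory evidence and fused with a symbolic prior by Bayes' rule:

import numpy as np

classes = ["cup", "can", "box"]

# Symbolic prior knowledge, e.g. from a task model.
prior = np.array([0.6, 0.3, 0.1])

# Stand-in for a trained network: logits for one camera frame.
logits = np.array([0.2, 1.5, -0.4])
evidence = np.exp(logits - logits.max())
evidence /= evidence.sum()             # softmax output for the frame

# Under a uniform class prior during training, the softmax output is
# proportional to the likelihood P(image | class), so Bayes' rule gives:
posterior = prior * evidence
posterior /= posterior.sum()
for c, p in zip(classes, posterior):
    print("P(%s | image) = %.3f" % (c, p))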
For more information about the research of the Foundation for Neural Networks and the neural networks research at the University of Nijmegen, please consult http://www.mbfys.kun.nl/SNN/ Applications for the position, which will be for one year with a possible extension to three years, should be sent before November 20 to: Foundation for Neural Networks, University of Nijmegen Geert Grooteplein 21, NL 6525 EZ Nijmegen, The Netherlands Email: snn at mbfys.kun.nl  From bruno at redwood.psych.cornell.edu Tue Oct 31 11:37:23 1995 From: bruno at redwood.psych.cornell.edu (Bruno A. Olshausen) Date: Tue, 31 Oct 1995 11:37:23 -0500 Subject: neuroanatomical database available Message-ID: <199510311637.LAA14619@redwood.psych.cornell.edu> The following software is available via ftp://v1.wustl.edu/pub/xanat/xanat-2.0.tar.Z There is also a homepage at http://redwood.psych.cornell.edu/bruno/xanat/xanat.html Xanat 2.0 A Graphical Anatomical Database By Bill Press and Bruno Olshausen Washington University School of Medicine Department of Anatomy and Neurobiology St. Louis, Missouri 63110 XANAT is a computer program that facilitates the analysis of neuroanatomical data by storing the results of numerous studies in a standardized format, and by providing various tools for performing summaries and comparisons of these studies. Data are entered by drawing injection and label sites directly onto canonical representations of the neuroanatomical structures of interest, along with providing descriptive text information. Searches may then be performed on the data by querying the database graphically according to injection or label site, or via text information (i.e., keyword search). Analyses may also be performed by accumulating data across multiple studies and displaying a color-coded map that graphically represents the total evidence for connectivity between regions. Data may be studied and compared free of areal boundaries (which often vary from one lab to the next), and instead with respect to standard landmarks, such as the position relative to well-known neuroanatomical substrates, or stereotaxic coordinates. If desired, areal boundaries may be defined by the user to facilitate the interpretation of results. XANAT is written in C and is intended to run on Unix workstations running the X11 window system. The workstation must have at least a modifiable 8-bit color map. The program has been successfully tested on Suns, SGIs, and IBM PCs (running Linux). Included with the distribution is an example dataset of pulvinar-cortical connectivity, which should prove useful in learning how to use the program.
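To make the accumulation step concrete, here is a toy sketch of the kind of summary described above (in Python, whereas XANAT itself is written in C; the grid, studies and keywords are invented for illustration). Each study contributes a label map drawn on a canonical grid, and summing the maps yields the total-evidence image that a colour map would display:

import numpy as np

GRID = (32, 32)                        # canonical 2-D representation

def label_map(rows, cols):
    # One study's labelled region, drawn here as a filled rectangle.
    m = np.zeros(GRID, dtype=np.uint8)
    m[rows[0]:rows[1], cols[0]:cols[1]] = 1
    return m

# Three hypothetical studies labelling overlapping territory.
studies = [
    {"desc": "study A, tracer X", "label": label_map((5, 15), (5, 15))},
    {"desc": "study B, tracer Y", "label": label_map((8, 18), (8, 18))},
    {"desc": "study C, tracer X", "label": label_map((10, 20), (2, 12))},
]

# Accumulate across studies: each grid cell counts how many studies
# report label there -- the "total evidence" for connectivity.
evidence = sum(s["label"].astype(np.int32) for s in studies)
peak = np.unravel_index(evidence.argmax(), GRID)
print("max evidence", int(evidence.max()), "at grid cell", peak)

# Keyword search over the stored text descriptions.
print("keyword 'tracer X':",
      [s["desc"] for s in studies if "tracer X" in s["desc"]])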