From sylee at eekaist.kaist.ac.kr Sun Aug 3 03:13:52 1997 From: sylee at eekaist.kaist.ac.kr (sylee) Date: Sun, 03 Aug 1997 16:13:52 +0900 Subject: Post Doc in Analog Neuro-Chip Message-ID: <33E42FB0.2644@ee.kaist.ac.kr> One or two Post Doc positions are open immediately at Korea Advanced Institute of Science and Technology. The Computation and Neural Systems Laboratory is working on neural systems, including neural architectures and learning algorithms, hardware implementation, and applications such as speech recognition. We have developed analog neuro-chips with on-chip learning capability, and are working on several applications of these chips. Currently we are inviting researchers for the following two projects. (1) Adaptive nonlinear equalizer A channel equalizer is a kind of adaptive filter that extracts the signal from a corrupted signal. Convolutional channels, random noise, and nonlinearity are considered to cause the signal corruption. Its architecture is similar to a single-layer Perceptron, which may also be generalized to multi-layer architectures. Therefore, the developed analog neuro-chip is well suited to this application. (2) Biosensor array It is part of the Biochip Project, a Star Project of KAIST. At the early stage of the Biochip Project, we will develop an integrated chip which consists of multiple biosensors, a liquid channel, and a signal-processing neuro-chip for robust decisions based on the multi-sensor output. The successful applicant will work on the signal-processing neuro-chip in close cooperation with biologists and MEMS researchers. The appointment period is one year, which may be extended up to 3 years. The monthly salary is about US$1400, which is enough to support a two-member family near the campus at Taeduk Science Town, Taejon, about 150 km south of Seoul. If interested, please send me a brief resume and list of publications. Prof. Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-3410 E-mail: sylee at ee.kaist.ac.kr   From hirai at is.tsukuba.ac.jp Sun Aug 3 21:08:37 1997 From: hirai at is.tsukuba.ac.jp (Yuzo Hirai) Date: Mon, 4 Aug 1997 10:08:37 +0900 (JST) Subject: Invited session in KES'98 Message-ID: <199708040108.KAA27134@poplar.is.tsukuba.ac.jp> Dear colleagues, I was asked to organize one session at the SECOND INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS (KES'98), 21st - 23rd April 1998, ADELAIDE, Australia. The title of the session is: "Hardware Implementation of Neural Networks." I encourage you to submit papers concerning microelectronic or optical implementation of neural networks. Experimental systems as well as commercially available systems in the following fields are welcome: * Learning and Classification * Feature Extraction and Dimension Reduction * Self-organization and Clustering * Principal Component Analysis and Independent Component Analysis * Associative Memory and Constraint Processing * Sensory Processing * Fault Tolerance in Hardware Neural Networks SUBMISSION OF PAPERS must be done according to the following rules: * Papers must be written in English (5 to 10 pages maximum). * Paper presentations are about 20 minutes each, including questions and discussions. * Include corresponding author with full name, address, telephone and fax numbers, E-Mail address. 
* Include presenter address and his/her 4-line resume for introduction purposes only. * Fax or E-Mail copies are not acceptable. * The conference proceedings will be published by IEEE, U.S.A. * Please submit one original and three copies of the camera-ready paper (A4 size), two-column format in Times or similar font style, 10 points with a one-inch margin on all four sides for review to: Professor Yuzo Hirai Institute of Information Sciences and Electronics University of Tsukuba Address: 1-1-1 Ten-nodai, Tsukuba, Ibaraki 305, Japan Tel: +81-298-53-5519 Fax: +81-298-53-5206 E-mail: hirai at is.tsukuba.ac.jp DEADLINE FOR THE RECEIPT OF PAPERS is 30th September 1997 For further information please contact Hirai. Best Regards, Yuzo Hirai   From devin at psy.uq.edu.au Mon Aug 4 02:10:05 1997 From: devin at psy.uq.edu.au (Devin McAuley) Date: Mon, 4 Aug 1997 16:10:05 +1000 (EST) Subject: Software Release: BrainWave neural network simulator (version 1.1) Message-ID: Dear colleagues, We are pleased to announce the release of BrainWave (version 1.1). BrainWave is a web-based neural network simulator, designed for teaching and research of connectionist models of cognition. It employs a highly graphical, direct manipulation interface - much like a drawing program - allowing students to focus on the models and not the interface. BrainWave is written in the Java programming language, meaning that it can be run directly from web browsers such as Netscape and Internet Explorer (on Windows 95, MacOS 7.0 and several Unix platforms). You can access BrainWave at http://psy.uq.edu.au/~brainwav The following algorithms are included in version 1.1. * Interactive Activation and Competition * Self Organizing Map * Backpropagation * Hebbian Learning * Hopfield Network * Simple-Recurrent Network * Phase-Resetting Oscillators * ALCOVE All models in BrainWave are accessible in a user-modifiable Networks menu, or can be loaded directly from local disk or a URL. Some of the models included in version 1.1 are * Letter Perception (McClelland & Rumelhart 1981) * Orientation Selectivity (von der Malsburg 1973) * Controlled and Automatic Processing - The Stroop Effect (Cohen, Dunbar & McClelland 1990) * Category Learning - ALCOVE (Kruschke 1993) * Episodic Memory - The Matrix Model (Humphreys, Bain & Pike 1989) * Language Acquisition - Simple Recurrent Network (Elman 1990) * Deep Dyslexia (Hinton & Shallice 1991) * Synchronous Fireflies Example (Buck & Buck 1976) Neural Networks by Example is an online workbook designed for use with the BrainWave simulator in the context of a course on neural networks or through a program of self-study. The chapters of Neural Networks by Example (available from the BrainWave home page) cover a set of neural architectures that have been instrumental in the development of the field and that illustrate key concepts in the area. Network architectures are introduced through a series of exercises using the simulator, highlighting important issues or concepts that the architectures demonstrate. The approach is very hands-on, and the best way to use the workbook is with a running copy of BrainWave. We hope that you find BrainWave a useful teaching and research tool. 
If you have any questions, please email us at brainwav at psy.uq.edu.au cheers, Simon Dennis and Devin McAuley   From cns-cas at cns.bu.edu Sun Aug 3 19:54:35 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Sun, 03 Aug 1997 19:54:35 -0400 Subject: CALL FOR PAPERS: 2nd International Conference on CNS Message-ID: <3.0.2.32.19970803195435.0071fdbc@cns.bu.edu> *****CALL FOR PAPERS***** SECOND INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS (CNS'98) May 27-30, 1998 Sponsored by the Center for Adaptive Systems and the Department of Cognitive and Neural Systems Boston University with financial support from the Defense Advanced Research Projects Agency and the Office of Naval Research CNS'98 will include invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is aimed at bringing together researchers in computational neuroscience, connectionist cognitive science, and artificial neural networks, among other disciplines, with a particular focus upon how intelligent systems adapt autonomously to a changing world. The First International Conference on Cognitive and Neural Systems was held on May 28-31, 1997 at Boston University. Its title was: Vision, Recognition, and Action: From Biology to Technology. Over 200 people from 18 countries attended this conference. Many participants asked that a sequel be held in 1998, and that the meeting scope be broadened. CNS'98 has been designed to achieve both goals. The meeting aims to be a forum for lively presentation and discussion of recent research that is relevant to modeling how the brain controls behavior, how the technology of intelligent systems can benefit from understanding human and animal intelligence, and how technology transfers between these two endeavors can be accomplished. The meeting's philosophy is to have a single oral or poster session at a time, so that all presented work is highly visible. Abstract submissions enable scientists and engineers to send in examples of their freshest work. Costs are kept at a minimum to enable the maximum number of people, including students, to attend, without compromising on the quality of tutorial notes, meeting proceedings, reception, and coffee breaks. Although Memorial Day falls on Saturday, May 30, it is observed on Monday, May 25, 1998. Contributions are welcomed on the following topics, among others. Contributors are requested to list a first and second choice from among these topics in their cover letter, and to say whether it is biological (B) or technological (T) work, when they submit their abstract, as described below. *vision *object recognition *image understanding *audition *speech and language *unsupervised learning *supervised learning *reinforcement and emotion *cognition, planning, and attention *sensory-motor control *spatial mapping and navigation *neural circuit models *neural system models *mathematics of neural systems *robotics *neuromorphic VLSI *hybrid systems (fuzzy, evolutionary, digital) *industrial applications *other Example: first choice: vision (B); second choice: neural system models (B). CALL FOR ABSTRACTS: Contributed abstracts by active modelers in cognitive science, computational neuroscience, artificial neural networks, artificial intelligence, and neuromorphic engineering are welcome. They must be received, in English, by January 31, 1998. 
Notification of acceptance will be given by February 28, 1998. A meeting registration fee of $45 for regular attendees and $30 for students must accompany each Abstract. See Registration Information below for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Registration fees of accepted abstracts will be returned on request only until April 1, 1998. Each Abstract should fit on one 8.5 x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), mailing, and email address(es) should begin each Abstract. An accompanying cover letter should include: Full title of Abstract, corresponding author and presenting author name, address, telephone, fax, and email address. Preference for oral or poster presentation should be noted. (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, and VCR facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. The original and 3 copies of each Abstract should be sent to: CNS'98, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. The program committee will determine whether papers will be accepted in an oral or poster presentation, or rejected. REGISTRATION INFORMATION: Since seating at the meeting is limited, early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to: CNS'98, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If paying by credit card, mail as above, or fax to (617) 353-7755, or email to cindy at cns.bu.edu. The registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. STUDENT FELLOWSHIPS: A limited number of fellowships for PhD candidates and postdoctoral fellows are available to at least partially defray meeting travel and living costs. The deadline for applying for fellowship support is January 31, 1998. Applicants will be notified by February 28, 1998. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Students who also submit an Abstract need to include the registration fee with their Abstract. Reimbursement checks will be distributed after the meeting. Their size will be determined by student need and the availability of funds. 
REGISTRATION FORM (Please Type or Print) Cognitive and Neural Systems Boston University Boston, Massachusetts Tutorials: May 27, 1998 Meeting: May 28-30, 1998 Mr/Ms/Dr/Prof: Name: Affiliation: Address: City, State, Postal Code: Phone and Fax: Email: The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. For registered participants in the conference, the regular tutorial registration fee is $25 and the student fee is $15. For attendees of only the conference, the regular registration fee is $45 and the student fee is $30. Two coffee breaks and a tutorial handout will be covered by the tutorial registration fee. CHECK ONE: [ ] $70 Conference plus Tutorial (Regular) [ ] $45 Conference plus Tutorial (Student) [ ] $45 Conference Only (Regular) [ ] $30 Conference Only (Student) [ ] $25 Tutorial Only (Regular) [ ] $15 Tutorial Only (Student) Method of Payment: [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Type of card: Name as it appears on the card: Account number: Expiration date: Signature and date: ****************************************   From peter.hansen at physiol.ox.ac.uk Mon Aug 4 06:18:59 1997 From: peter.hansen at physiol.ox.ac.uk (Peter Hansen) Date: Mon, 4 Aug 1997 11:18:59 +0100 (BST) Subject: 1997 Autumn School in Cognitive Neuroscience, Oxford Message-ID: AUTUMN SCHOOL IN COGNITIVE NEUROSCIENCE Oxford, 30 September to 3 October 1997 UNIVERSITY OF OXFORD OXFORD CENTRE FOR COGNITIVE NEUROSCIENCE The 1997 Annual Autumn School in Cognitive Neuroscience will be held in Oxford on the four days Tuesday 30 September to Friday 3 October. The School is intended primarily for doctoral students, other graduate students and postdoctoral scientists, at Oxford and at other universities, and also for third-year undergraduates who are considering the possibility of research in Neuroscience and would like to find out more about it. Each day will be devoted to a particular area of Cognitive Neuroscience. The preliminary programme is as follows. DAY 1 ATTENTION: FROM PERCEPTION TO SINGLE CELLS Lecturers: S Judge (Oxford), M Goldberg (Bethesda, USA), M Husain, J Driver (London), G Humphreys (Birmingham), J Duncan, C Spence (Cambridge), G Fink (Cologne) DAY 2 NEURAL TRANSPLANTATION AND RESTORATION OF FUNCTION Lecturers: J Gray, H Hodges, J Sinden (IOP, London) S Dunnett, R Franklin, C Svendsen, L Annett, A Rosser (Cambridge), G Raisman (NIMR, London), J Mallet (Paris) DAY 3 DYNAMIC IMAGING OF THE HUMAN BRAIN Lecturers: A Nobre, E Wilding, E Rolls, V Walsh (Oxford), P Fletcher, K Friston (London), R Mangun (Davis, USA), W Singer (Frankfurt) DAY 4 MOTOR FUNCTION: FUNCTIONAL IMAGING AND PSYCHOPHYSICAL APPROACHES Lecturers: J Stein, C Miall, P Matthews, R Passingham (Oxford), J Wann (Reading), P Haggard (UCL), D Brooks (Hammersmith), S Jackson (Bangor), G Stelmach (Phoenix, USA) This course is offered free of charge. A limited number of bursaries is available to graduates at UK universities outside Oxford, to assist with travel and accommodation expenses. 
For further information and application forms, see: http://www.physiol.ox.ac.uk/mcdp/autsch/ --- Dr Peter Hansen Oxford Centre for Cognitive Neuroscience Phone: (01865) 282163 Physiology Laboratory, Oxford University   From cg at eivind.imm.dtu.dk Mon Aug 4 12:21:44 1997 From: cg at eivind.imm.dtu.dk (Cyril Goutte) Date: Mon, 4 Aug 1997 18:21:44 +0200 (METDST) Subject: PhD thesis available. Message-ID: Dear Connectionists, I am pleased to announce that the manuscript of my thesis: STATISTICAL LEARNING AND REGULARISATION FOR REGRESSION is available through the WWW at the following URL: http://eivind.imm.dtu.dk/staff/goutte/PUBLIS/thesis.html --- Abstract : This thesis deals with the use of statistical learning and regularisation on regression problems, with a focus on time series modelling and system identification. Both linear models and non-linear neural networks are considered as particular modelling techniques. Linear and non-linear parametric regression are briefly introduced and their limit is shown using the bias-variance decomposition of the generalisation error. We then show that as such, those problems are ill-posed, and thus need to be regularised. Regularisation introduces a number of hyper-parameters, the setting of which is performed by estimating generalisation error. Several such methods are evoked in the course of this work. The use of these theoretical aspects is targeted towards two particular problems. First an iterative method relying on generalisation error to extract the relevant delays from time series data is presented. Then a particular regularisation functional is studied, that provides pruning of unnecessary parameters as well as a regularising effect. This last part uses Bayesian estimators, and a brief presentation of those estimators is also given in the thesis. --- Cyril. --- Cyril Goutte |> cg at imm.dtu.dk <| Tel: +45-4525 3921 (Fax: +45-4587 2599) Department of Mathematical Modelling - D.T.U., Bygn. 321 - DK-2800 Lyngby   From cardie at CS.Cornell.EDU Mon Aug 4 14:58:56 1997 From: cardie at CS.Cornell.EDU (Claire Cardie) Date: Mon, 4 Aug 1997 14:58:56 -0400 (EDT) Subject: Machine Learning Journal Special Issue on Natural Language Learning Message-ID: <199708041858.OAA21226@ewigkeit.cs.cornell.edu> CALL FOR PAPERS Machine Learning Journal Special Issue on Natural Language Learning The application of learning techniques to natural language processing has grown dramatically in recent years under the rubric of "corpus-based," "statistical," or "empirical" methods. However, most of this research has been conducted outside the traditional machine learning research community. This special issue is an attempt to bridge this divide by inviting researchers in all areas of natural language learning to communicate their recent results to a general machine learning audience. 
Papers are invited on learning applied to all natural language tasks including: * Syntax: Part-of-Speech tagging, parsing, language modeling, prepositional-phrase attachment, spelling correction, word segmentation * Semantics: Word-sense disambiguation, word clustering, lexicon acquisition, semantic analysis, database-query mapping * Discourse: Information extraction, anaphora resolution, discourse segmentation * Machine Translation: Bilingual text alignment, bilingual dictionary construction, lexical, syntactic, and semantic transfer and all learning approaches including: * Statistical: n-gram models, hidden Markov models, probabilistic context-free grammars, Bayesian networks * Symbolic: Decision trees, rule-based, case-based, inductive logic programming, automata and grammar induction * Neural-Network & Evolutionary: recurrent networks, self-organizing maps, genetic algorithms Experimental papers with significant results evaluating either engineering performance or cognitive-modeling validity on suitable corpora are invited. Papers will be evaluated by three reviewers, including at least two experts in the relevant area of natural language learning; however, they should be written to be reasonably accessible to a general machine learning audience. Schedule: December 1, 1997: Deadline for submissions March 1, 1998: Deadline for getting decisions back to authors May 1, 1998: Deadline for authors to submit final versions Fall 1998: Publication Submission Guidelines: 1) Manuscripts should conform to the formatting instructions in: http://www.cs.orst.edu/~tgd/mlj/info-for-authors.html The first author will be the primary contact unless otherwise stated. 2) Authors should send 5 copies of the manuscript to: Karen Cullen Machine Learning Editorial Office Attn: Special Issue on Natural Language Learning Kluwer Academic Press 101 Philip Drive Assinippi Park Norwell, MA 02061 617-871-6300 617-871-6528 (fax) kcullen at wkap.com and one copy to: Raymond J. Mooney Department of Computer Sciences Taylor Hall 2.124 University of Texas Austin, TX 78712-1188 (512) 471-9558 (512) 471-8885 (fax) mooney at cs.utexas.edu 3) Please also send an ASCII title page (title, authors, email, abstract, and keywords) and a postscript version of the manuscript to mooney at cs.utexas.edu. General Inquiries: Please address general inquiries to: mooney at cs.utexas.edu Up-to-date information will be maintained on the WWW at: http://www.cs.utexas.edu/users/ml/mlj-nll Co-Editors: Claire Cardie Cornell University cardie at cs.cornell.edu Raymond J. Mooney University of Texas at Austin mooney at cs.utexas.edu ------- End of forwarded message -------   From jose at tractatus.rutgers.edu Fri Aug 1 12:51:58 1997 From: jose at tractatus.rutgers.edu (Stephen Jose Hanson) Date: Fri, 01 Aug 1997 12:51:58 -0400 Subject: Cognitive Science Symposium on Modeling and Brain Imaging Message-ID: [ Moderator's note: Steve Hanson would like to suggest that a critical aspect of Brain Imaging in the future will be Neural Network (or system level) modeling. In order to stimulate discussion of this topic on the Connectionists list, he submitted the program for a symposium he's organized on the subject for next week's Cognitive Science conference. There are some interesting ideas here. Perhaps we'll also have a workshop on the topic at NIPS this year. Persons seeking more information about the Cognitive Science symposium may contact Steve at jose at psychology.rutgers.edu. 
-- Dave Touretzky ] ================================================================ 19th Annual Cognitive Science Society, Stanford 8/7-10/97 Brain Imaging: Models, Methods and High Level Cognition (8/8/97, 2pm - 4pm) (Organizer: Stephen Hanson) Brain Imaging methods hold the promise of being the new "brass instrument" for Psychology. These methods provide tantalizing snapshots of mental activity and function. Nonetheless, basic measurement questions arise as more complex mental functions are being inferred. Tensions arise in determining what is being measured during blood flow changes in the brain. And what is the role of computational models in representing, interpreting and understanding the nature of the mental function which brain imaging methods probe? The idea behind this symposium is to examine the tension between measurement and modeling in Brain Imaging, especially against the backdrop of high-level cognitive processes, such as reasoning, categorization and language. An important component of these techniques in the future might be in how they may utilize computational and mathematical models that are initially biased with prior beliefs about the relevant location estimators and temporal structure of the underlying mental process. "Functional Neuroimaging: A bridge between Cognitive and Neuro Sciences?". Tomas Paus MNI I will start by posing the question of whether one can marry cognitive and neuro-sciences, and what role functional neuroimaging can play here. I will ask, with Tulving, whether it is true that "we lack the requisite background knowledge to appreciate each other's excitement", and what can be done about it. I will then go on to outline the basic principles and the techniques of the research that deals with the brain/behavior relationship, pointing out crucial distinctions between "disruption" (i.e. lesion, stimulation, etc.) and "correlate" (i.e. unit activity, EEG, PET, fMRI) studies. I will review the basic principles of the current neuroimaging methods (concentrating on PET and fMRI, but also mentioning NIRS). At the end of this methodological section, I will again stress that, using neuroimaging, we measure brain correlates of behavior and, as such, we are limited in drawing any causal inferences about the brain/behavior relationship. This does not mean that we shouldn't be doing this kind of research though. It only means, in my mind, that we may need to focus on fairly simple cognitive processes, and that we absolutely need to constrain the interpretation of imaging data by specific a priori hypotheses based on the knowledge of brain anatomy, physiology, etc. In this context, I will also make a distinction between directed (or predicted) and exploratory search in the entire brain volume for significant changes in the signal. In the second half of the talk, I will concentrate on the issue of functional connectivity and how we can study it using PET (and fMRI). I will briefly mention results of our research on corollary discharges and on combining transcranial magnetic stimulation with PET. "Methods and Models in interpreting fMRI: The case of Independent Components of fMRI Images" Martin J. McKeown The Salk Institute Many current fMRI experiments use a block design in which the subject is requested to sequentially perform experimental and control tasks in an alternating sequence of 20-40-s blocks. 
The bulk of the fluctuations in the resultant time series recorded from each brain region (a "voxel") arise not from local task-related activations, but rather from machine noise, subtle subject movements, and heart and breathing rhythms. This tangled mixture of signals presents a formidable challenge for analytical methods attempting to tease apart task-related changes in the time courses of 5,000 - 25,000 voxels. Correlational and ANOVA-like analytical methods technically require narrow a priori assumptions that may not be valid in fMRI data. Moreover, activations arising from important cognitive processes like changes in subject task strategy or decreasing stimulus novelty cannot typically be tested for, as their time courses are not easily predicted in advance. Signal-processing strategies for analyzing fMRI experiments monitoring cognition are generally used without regard to basic neuropsychological principles, such as localization or connectionism. We propose that an appropriate criterion for the separation of fMRI data into cognitively and physiologically meaningful components is the determination of the separate groups of multi-focal anatomical brain areas that are activated synchronously during an fMRI trial. With this view, each scan obtained during an fMRI experiment can be considered as the mean activity plus the sum of enhancements (or suppressions) of activity from the possibly overlapping individual components. Using an Independent Component Analysis (ICA) algorithm, we demonstrate how the fMRI data from Stroop color-naming and attention orienting experiments can be separated into numerous spatially independent components, some of which demonstrate transient and sustained task-related activation during the behavioral experiment. Active areas of these task-related components correspond with regions implicated from PET and neuropsychological studies. Other components relate to machine noise, subtle head movements and presumed cardiac and breathing pulsations. Considering fMRI data to be the sum of independent areas activated with different time courses enables, with minimal a priori assumptions, the separation of artifacts from transient and sustained task-related activations. Determining the independent components of fMRI data appears to be a promising method for the analysis of cognitive experiments in normal and clinical populations. Sometimes Weak is Strong: Functional Imaging Analysis with Minimal Assumptions Benjamin Martin Bly and Mark Griswold Brain-Imaging Studies of Categorization by Rule or Family Membership Andrea Patalano and Edward Smith A PET Study of Deductive Versus Probabilistic Reasoning Stefano F. Cappa, Daniela Perani, Daniel Osherson, Tatiana Schnur, and Ferruccio Fazio Deductive versus probabilistic inferences are distinguished by normative theories, but it is still unknown whether these two forms of reasoning engage similar brain areas. In order to investigate the neurological correlates of reasoning, we have performed an activation study using positron emission tomography and 15O-water in normal subjects. Cerebral perfusion was assessed during a "logic task", in which they had to distinguish between valid and invalid arguments; a "probability task", in which they had to judge whether the conclusion had a greater chance of being true or false, supposing the truth of the premises; and a "meaning task", in which they had to evaluate the premises and the conclusions to determine whether any had anomalous content. 
The latter was used as a "baseline" task: identical arguments were evaluated either for validity, probability or anomaly. In the direct comparison of the two reasoning tasks, probabilistic reasoning increased regional cerebral blood flow (rCBF) in dorsolateral frontal regions, whereas deductive reasoning enhanced rCBF in associative occipital and parietal regions, with a right hemispheric prevalence. Compared to the meaning condition, which involved the same stimuli, both probabilistic and deductive reasoning increased rCBF in the cerebellum. These results are compatible with the idea that deductive reasoning has a geometrical character requiring visuo-spatial processing, while the involvement of the frontal lobe in probabilistic tasks is in agreement with neuropsychological evidence of impairment in cognitive estimation in patients with frontal lesions. The cerebellar activation found in both reasoning tasks may be related to the involvement of working memory. Neural Correlates of Mathematical Reasoning: An fMRI Study of Word-Problem Solving Bart Rypma, Vivek Prabhakaran, Jennifer A. L. Smith, John E. Desmond, Gary H. Glover, and John D. E. Gabrieli   From zhuh at santafe.edu Mon Aug 4 16:08:21 1997 From: zhuh at santafe.edu (Huaiyu Zhu) Date: Mon, 04 Aug 1997 14:08:21 -0600 Subject: Paper: Less predictable than random ... Message-ID: <33E636B5.3451@santafe.edu> The following paper has been submitted to Neural Computation: ftp://ftp.santafe.edu/pub/zhuh/anti.ps Anti-Predictable Sequences: Harder to Predict Than A Random Sequence Huaiyu Zhu Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, USA Wolfgang Kinzel Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, USA Institut für Theoretische Physik, Universität Würzburg, D-97074 Würzburg, Germany ABSTRACT For any discrete state sequence prediction algorithm $A$ it is always possible, using an algorithm $B$ no more complicated than $A$, to generate a sequence for which $A$'s prediction is always wrong. For any prediction algorithm $A$ and sequence $x$, there exists a sequence $y$ no more complicated than $x$, such that if $A$ performs better than random on $x$ then it will perform worse than random on $y$ by the same margin. An example of a simple neural network predicting a bit-sequence is used to illustrate this very general but not widely recognized phenomenon. This implies that any predictor with good performance must rely on some (usually implicitly) assumed prior distributions of the problem. -- Huaiyu Zhu Tel: 1 505 984 8800 ext 305 Santa Fe Institute Fax: 1 505 983 0751 1399 Hyde Park Road mailto:zhuh at santafe.edu Santa Fe, NM 87501 http://www.santafe.edu/~zhuh/ USA ftp://ftp.santafe.edu/pub/zhuh/   From maja at cs.brandeis.edu Tue Aug 5 11:34:02 1997 From: maja at cs.brandeis.edu (Maja Mataric) Date: Tue, 5 Aug 1997 11:34:02 -0400 (EDT) Subject: Extended Deadline Autonomous Robots CFP Message-ID: <199708051534.LAA02786@garnet.cs.brandeis.edu> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! NOTE: Due to popular demand and unwieldy schedules, we have moved and *finalized* the submission deadline to Sep 1, 1997. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
CALL FOR PAPERS Autonomous Robots Journal Special Issue on Learning in Autonomous Robots http://www.cs.buffalo.edu/~hexmoor/autonomous-robots.html Guest editors: Henry Hexmoor and Maja Mataric Submission Deadline: September 1, 1997 Autonomous Robots is an international journal published by Kluwer Academic Publishers, Editor-in-Chief: George Bekey Current applications of machine learning in robotics explore learning behaviors such as obstacle avoidance, navigation, gaze control, pick and place operations, manipulating everyday objects, walking, foraging, herding, and delivering objects. It is hoped that these are first steps toward robots that will learn to perform complex operations ranging from folding clothes, cleaning up toxic waste and oil spills, picking up after the children, de-mining, looking after a summer house, and imitating a human teacher, to overseeing a factory or a space mission. As builders of autonomous embedded agents, researchers in robot learning deal with learning schemes in the context of physical embodiment. Strides are being made to design programs that change their initial encoding of know-how to include new concepts as well as improvements in the associations of sensing to acting. Driven by concerns about the quality and quantity of training data and real-time issues such as sparse and low-quality feedback from the environment, robot learning is undergoing a search for quantification and evaluation mechanisms, as well as for methods for scaling up the complexity of learning tasks. This special issue of Autonomous Robots will focus on novel robot learning applications and quantification of learning in autonomous robots. We are soliciting papers describing finished work, preferably involving real manipulator or mobile robots. We invite submissions from all areas in AI and Machine Learning, Mobile Robotics, Machine Vision, Dexterous Manipulation, and Artificial Life that address robot learning. Submitted papers should be delivered by September 1, 1997. Potential authors intending to submit a manuscript can contact either guest editor for answers to any questions. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Both paper and electronic submission are possible, as described below. For paper submissions, send five (5) copies of submitted papers (hard-copy only) to: Dr. Henry Hexmoor Department of Computer Science State University of New York at Buffalo 226 Bell Hall Buffalo, NY 14260-2000 U.S.A. PHONE: 716-645-3197 FAX: 716-645-3464 For electronic submissions, use Postscript format, ftp the file to ftp.cs.buffalo.edu, and send an email notification to hexmoor at cs.buffalo.edu Detailed ftp instructions: compress your-paper (both Unix compress and gzip commands are ok) ftp ftp.cs.buffalo.edu (but check in case it has changed) give anonymous as your login name give your e-mail address as password set transmission to binary (just type the command BINARY) cd to users/hexmoor/ put your-paper send an email notification to hexmoor at cs.buffalo.edu to notify us that you transferred the paper Editorial Board: James Albus, NIST, USA Peter Bonasso, NASA Johnson Space Center, USA Enric Celaya, Institut de Robotica i Informatica Industrial, Spain Adam J. Cheyer, SRI International, USA Keith L. Doty, University of Florida, USA Marco Dorigo, Université Libre de Bruxelles, Belgium Judy Franklin, Mount Holyoke College, USA Rod Grupen, University of Mass, USA John Hallam, University of Edinburgh, UK Inman Harvey, COGS, Univ. 
of Sussex, UK Gillian Hayes, University of Edinburgh, UK James Hendler, University of Maryland, USA David Hinkle, Johns Hopkins University, USA R James Firby, University of Chicago, USA Ian Horswill, Northwestern University, USA Sven Koenig, Carnegie Mellon University, USA Kurt Konolige, SRI International, USA David Kortenkamp, NASA Johnson Space Center, USA Francois Michaud, Brandeis University, USA Robin R. Murphy, Colorado School of Mines, USA Jose del R. MILLAN, Joint Research Centre of the EU, Italy Amitabha Mukerjee, IIT, India David J. Musliner, Honeywell Technology Center, USA Ulrich Nehmzow, University of Manchester, UK Tim Smithers, Universidad del País Vasco, Spain Martin Nilsson, Swedish Institute of Computer Science, Sweden Stefano Nolfi, Institute of Psychology, C.N.R., Italy Tony J Prescott, University of Sheffield, UK Ashwin Ram, Georgia Institute of Technology, USA Alan C. Schultz, Naval Research Laboratory, USA Noel Sharkey, Sheffield University, UK Chris Thornton, UK Francisco J. Vico, Campus Universitario de Teatinos, Spain Brian Yamauchi, Naval Research Laboratory, USA Uwe R. Zimmer, Schloss Birlinghoven, Germany Relevant Dates: September 1, 1997 submission deadline November 15, 1997 review deadline December 1, 1997 acceptance/rejection notifications to the authors   From hd at harris.monmouth.edu Tue Aug 5 21:25:14 1997 From: hd at harris.monmouth.edu (Harris Drucker) Date: Tue, 5 Aug 97 21:25:14 EDT Subject: paper:boosting regression Message-ID: <9708060125.AA04381@harris.monmouth.edu.monmouth.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/drucker.boosting-regression.ps.Z The following paper on regression was presented at the Fourteenth International Conference on Machine Learning (1997), Morgan Kaufmann, publishers: Improving Regressors using Boosting Techniques Harris Drucker Monmouth University West Long Branch, NJ 07764 drucker at monmouth.edu Abstract In the regression context, boosting and bagging are techniques to build a committee of regressors that may be superior to a single regressor. We use regression trees as fundamental building blocks in bagging committee machines and boosting committee machines. Performance is analyzed on three non-linear functions and the Boston housing database. In all cases, boosting is at least equivalent to, and in most cases better than, bagging in terms of prediction error. If you do not have access to the proceedings, anonymous ftp from the above site may be used to retrieve this 9-page compressed paper. Sorry, no hard copies.   From jan at uran.informatik.uni-bonn.de Wed Aug 6 08:22:04 1997 From: jan at uran.informatik.uni-bonn.de (Jan Puzicha) Date: Wed, 6 Aug 1997 14:22:04 +0200 (MET DST) Subject: Preprints and Abstracts available online Message-ID: <199708061222.OAA13370@thalia.informatik.uni-bonn.de> This message has been posted to several lists. Sorry if you receive multiple copies. The following seven PREPRINTS are now available as abstracts and compressed postscript online via the WWW-Home-Page http://www-dbv.cs.uni-bonn.de/ of the |---------------------------------------------| |Computer Vision and Pattern Recognition Group| | of the University of Bonn, | | Prof. J. Buhmann, Germany. | |---------------------------------------------| 1.) Hansjörg Klock and Joachim M. Buhmann: Data Visualization by Multidimensional Scaling: A Deterministic Annealing Approach. Technical Report IAI-TR-96-8, Institut für Informatik III, University of Bonn. October 1996. 2.) Jan Puzicha, Thomas Hofmann and Joachim M. 
Buhmann: Deterministic Annealing: Fast Physical Heuristics for Real-Time Optimization of Large Systems. In: Proceedings of the 15th IMACS World Conference on Scientific Computation, Modelling and Applied Mathematics, Berlin, August 1997. 3.) Jan Puzicha and Joachim M. Buhmann: Multiscale Annealing for Real-Time Unsupervised Texture Segmentation. Technical Report IAI-TR-97-4, Institut für Informatik III, University of Bonn. April 1997. Accepted for presentation at the International Conference on Computer Vision (ICCV'98). 4.) Jan Puzicha, Thomas Hofmann and Joachim M. Buhmann: Non-parametric Similarity Measures for Unsupervised Texture Segmentation and Image Retrieval. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, pp. 267-272, San Juan, 1997. 5.) Thomas Hofmann, Jan Puzicha and Joachim M. Buhmann: An Optimization Approach to Unsupervised Hierarchical Texture Segmentation. Proceedings of the International Conference on Image Processing, Santa Barbara, 1997. 6.) Andreas Polzer, Hans-Jörg Klock and Joachim M. Buhmann: Video-Coding by Region-Based Motion Compensation and Spatio-Temporal Wavelet Transform. Proceedings of the International Conference on Image Processing, Santa Barbara, 1997. 7.) Hans-Jörg Klock, Andreas Polzer and Joachim M. Buhmann: Region-Based Motion Compensated 3D-Wavelet Transform Coding of Video. Proceedings of the International Conference on Image Processing, Santa Barbara, 1997. If you have any questions or remarks, please let me know. Best regards, Jan Puzicha -------------------------------------------------------------------- Jan Puzicha | email: jan at uran.cs.uni-bonn.de Institute f. Informatics III | jan at cs.uni-bonn.de University of Bonn | WWW : http://www.cs.uni-bonn.de/~jan | Roemerstrasse 164 | Tel. : +49 228 550-383 D-53117 Bonn | Fax : +49 228 550-382 --------------------------------------------------------------------   From Friedrich.Leisch at ci.tuwien.ac.at Wed Aug 6 06:10:28 1997 From: Friedrich.Leisch at ci.tuwien.ac.at (Friedrich Leisch) Date: Wed, 6 Aug 1997 12:10:28 +0200 Subject: CI BibTeX Collection -- Update Message-ID: <199708061010.MAA15951@galadriel.ci.tuwien.ac.at> The following volumes have been added to the collection of BibTeX files maintained by the Vienna Center for Computational Intelligence: Machine Learning 27 Neural Networks 10/3, Neural Computation 9/5, Neural Processing Letters 5/2 Advances in Neural Information Processing Systems 9 All files have been converted automatically from various source formats; please report any bugs you find. 
The complete collection can be downloaded from http://www.ci.tuwien.ac.at/docs/ci/bibtex_collection.html ftp://ftp.ci.tuwien.ac.at/pub/texmf/bibtex/ Best, Fritz -- Friedrich Leisch Institut fur Statistik Tel: (+43 1) 58801 4541 Technische Universitat Wien Fax: (+43 1) 504 14 98 Wiedner Hauptstrase 8-10/1071 Friedrich.Leisch at ci.tuwien.ac.at A-1040 Wien, Austria http://www.ci.tuwien.ac.at/~leisch PGP public key http://www.ci.tuwien.ac.at/~leisch/pgp.key   From nic at idsia.ch Thu Aug 7 08:46:29 1997 From: nic at idsia.ch (Nici Schraudolph) Date: Thu, 7 Aug 1997 14:46:29 +0200 Subject: paper available: On Centering Neural Network Weight Updates Message-ID: <19970807124629.AAA28359@kraut.idsia.ch> Dear colleagues, the following paper is now available by anonymous ftp from the locations ftp://ftp.idsia.ch/pub/nic/center.ps.gz and ftp://ftp.cnl.salk.edu/pub/schraudo/center.ps.gz On Centering Neural Network Weight Updates ------------------------------------------ by Nicol N. Schraudolph Technical Report IDSIA-19-97 IDSIA, Lugano 1997 It has long been known that neural networks can learn faster when their input and hidden unit activity is centered about zero; recently we have extended this approach to also encompass the centering of error signals (Schraudolph & Sejnowski, 1996). Here we generalize this notion to all factors involved in the weight update, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability. Best regards, -- Dr. Nicol N. Schraudolph Tel: +41-91-911-9838 IDSIA Fax: +41-91-911-9839 Corso Elvezia 36 CH-6900 Lugano http://www.idsia.ch/~nic/ Switzerland http://www.cnl.salk.edu/~schraudo/   From terry at salk.edu Thu Aug 7 17:59:47 1997 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 7 Aug 1997 14:59:47 -0700 (PDT) Subject: NIPS Workshop on Brain Imaging Message-ID: <199708072159.OAA18914@helmholtz.salk.edu> ---------------------- Call for Participants NIPS*97 Workshop on Brain Imaging December 5, 1997 Breckenridge, Colorado ---------------------- Title: Analysis of Brain Imaging Data Organizers: Scott Makeig and Terrence Sejnowski scott at salk.edu terry at salk.edu Abstract: The goal of this workshop is to bring together researchers who are interested in new techniques for analyzing brain recordings based on electroencephalography (EEG), event-related potentials (ERP), magnetoencephalography (MEG), optical recordings from cortex using voltage-sensitive dyes, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). This is a rapidly developing area of cognitive neuroscience and some new unsupervised learning techniques such as independent component analysis (ICA) are proving useful for analyzing data from humans and other primates. Both signal processing experts and neuroscientists will participate in this workshop. In addition to those who are interested in analysis of brain data, those interested in interfacing on-line recordings with practical devices are also welcome to participate. Format: The workshop will meet for 3 hours in the morning and 3 hours in the late afternoon on December 5. The sessions will include short (15 min) presentations by invited participants to help focus discussion. 
Participants from the signal processing and neuroscience communities are encouraged to attend and to actively participate in the discussion, contribute brief (5 min) presentations, and bring posters to the sessions. Invited Participants: Erkki Oja - Finland - Analysis of EEG Data using ICA Martin McKeown - Salk Institute - fMRI analysis using ICA Partha Mitra - Caltech - fMRI analysis using tapered spectral analysis Ehud Kaplan - PCA analysis of optical recordings Klaus Obermayer - Berlin, Germany - ICA analysis of optical recordings from visual cortex Karl Friston, Queen Square, London - Analysis of PET and fMRI data Henri Begleiter and David Chorlian (SUNY Health Center, NY) - ERP analysis Others who have expressed an interest in attending include: Brad Duckrow - Univ. Connecticut Alex Dimitrov - Univ. Chicago Andrew Oliver - UC London Klaus Prank - Hannover, Germany -----   From piuri at pine.ece.utexas.edu Fri Aug 8 14:37:26 1997 From: piuri at pine.ece.utexas.edu (Vincenzo Piuri) Date: Fri, 8 Aug 1997 20:37:26 +0200 Subject: ISATA98 CALL FOR PAPERS Message-ID: <199708081829.NAA10321@pine.ece.utexas.edu> ================================================================================ 31st ISATA International Symposium on Automotive Technology and Automation Dusseldorf, Germany, 2-5 June 1998 Chairman: Prof. Dr. Dieter Roller, Universitat Stuttgart, Germany ================================================================================ SPECIAL SESSION ON NEURAL IDENTIFICATION, PREDICTION AND CONTROL FOR AUTOMOTIVE EMBEDDED SYSTEMS CALL FOR PAPERS ================================================================================ The ISATA Symposium is an outstanding international forum addressing the most topical and important areas of research and development in a wide variety of fields relevant to the automotive industry. It brings together researchers and practitioners from both academia and industry. The wide range of research and applications topics covered by this meeting is divided and presented in 8 simultaneous program tracks: "Automotive mechatronics design and engineering", "Simulation, virtual reality and supercomputing automotive applications", "Advanced manufacturing in the automotive industry", "Materials for energy-efficient vehicles", "New propulsion systems and alternative fuel vehicles", "Automotive electronics and new products", "Logistics management and environmental aspects", and "Passenger comfort, road and vehicle safety". The special session on "Neural identification, prediction and control for automotive embedded systems" will be a forum for analyzing, comparing and evaluating the capabilities and the effectiveness of neural techniques with specific reference to the automotive area. The use of such technologies in heterogeneous embedded systems, composed of dedicated digital ASIC devices or microprocessor-based structures and neural components, is in fact becoming more and more attractive for realizing advanced, flexible, efficient and smart vehicles due to their "intelligent" and adaptive features. For example, typical application areas are fuel injection, engine efficiency, guide control, asset balancing, exhaust emissions control, assisted navigation, sensor enhancement and fusion, and system diagnosis. For the special session, papers are welcome on all aspects of neural network theory and applications in identification, prediction and control for the automotive industry, with specific reference to their use in heterogeneous embedded systems. 
In particular, papers are solicited on neural techniques and implementations referring to system modeling, control, prediction, learning, stability, optimization, adaptivity, sensor fusion, classification, instrumentation, diagnosis, neural devices, VLSI and FPGA realizations, integration of neural components in embedded systems, interfacing of neural components in digital systems, specification of heterogeneous embedded systems, design automation of neural devices and heterogeneous embedded systems, testing, CAD tools, real applications, and experimental results. Prospective authors should submit a short abstract (100-150 words) both to the Special Session Organizer and to the Secretariat, including title, author(s) and affiliation(s), by October 31, 1997. The name of the special session must be clearly specified in the submission. The contact author must be identified, with his complete affiliation, address, phone, fax and email. The short abstract is required to plan the review process. Submission of abstracts can also be performed by email or fax. Authors must then send the paper drafts to the Secretariat only by January 16, 1998: submission of drafts must be performed by mail only (email and fax submissions are not accepted). Refereeing will be performed on the draft papers. Submission of the draft paper implies, if accepted for presentation at the conference, the willingness to send the final version of the paper, to register at the conference and to present the paper. Notification of rejection or acceptance will be mailed by February 28, 1998. The final camera-ready version is due to the Secretariat by March 31, 1998. Organizer of the Special Session on Neural Identification, Prediction and Control for Automotive Embedded Systems prof. Vincenzo Piuri Department of Electronics and Information Politecnico di Milano piazza L. da Vinci 32 20133 Milano, Italy phone +39-2-2399-3623 fax +39-2-2399-3411 email piuri at elet.polimi.it Conference Secretariat: ISATA 31st ISATA Symposium 32A Queen Street Croydon, CRO 1SY, UK phone +44 181 681 3069 fax +44 181 686 1490 email 100270.1263 at compuserve.com web page http://www.isata.com ================================================================================ 
It requires a sophisticated interplay between perceptual systems that recognize the demonstrated skill, and motor systems, onto which the recognized skill must be mapped. Differences between teacher and learner emphasize the need for more abstract representations for imitation learning. Recent demonstrations of imitation-specific neurons in primate premotor cortex have even led to speculation that the development of imitation skills may have been a key milestone in the evolution of higher intelligence. Goal ---- The goal of this 1-day workshop is to identify and discuss the complex information processes of imitation learning in short presentations and panel discussions. The hope is to outline a strategy of how imitation learning could be studied systematically by bringing together researchers with a broad range of expertise. Topics of the workshop include: - how learning methods can profit from (i.e., can be biased by) a demonstration of a teacher, - how the recognition process of a demonstration could interact with the generative process of motor control (e.g., connecting to ideas of reciprocally constrained learning processes, as in Helmholtz machines), - how memory and attentional processes operate during imitation, - segmentation and recognition of the teacher's demonstration, - extracting the intent of a demonstration, - psychophysical experiments on imitation learning, - mapping imitation learning onto the functional structure in primate brains, - robot and simulation studies of imitation learning, - representations supporting imitation learning. Participants ------------ This workshop will bring together active researchers from neurobiology, computational neuroscience, behavioral sciences, cognitive science, machine learning, statistical learning, motor control, control theory, robotics, human-computer interaction, and related fields. A tentative list of speakers is given below. We are interested in additional contributors. If you would like to give a presentation in this workshop, please send email to nips97_imitation at hip.atr.co.jp describing the material you would like to present. Tentative Speakers ------------------ Bob Sekuler (Brandeis Univ.), Chris Atkeson (GaTech), Dana Ballard (Univ. of Rochester), Jean-Jacques Slotine (MIT), Jeff Siskind (NEC), Kenji Doya (JST), Maja Mataric (USC), Matt Brand (MIT), Mitsuo Kawato (ATR, Japan), Polly Pook (MIT), Sebastian Thrun (CMU), Stefan Schaal (JST & USC), Trevor Darrell (Interval), Yasuo Kuniyoshi (ETL, Japan), Zoubin Ghahramani (Univ. of Toronto). Organizers ---------- Stefan Schaal (JST & USC), Maja Mataric (USC), Chris Atkeson (GaTech) Location and More Information ----------------------------- The most up-to-date information about NIPS*97 can be found on the NIPS*97 Home Page (http://www.cs.cmu.edu/Groups/NIPS/NIPS.html) --------------------------------------------------------------------------- If you have comments or suggestions, send email to nips97_imitation at hip.atr.co.jp   From ft at uran.informatik.uni-bonn.de Mon Aug 11 10:49:21 1997 From: ft at uran.informatik.uni-bonn.de (Thorsten Froehlinghaus) Date: Mon, 11 Aug 1997 16:49:21 +0200 (MET DST) Subject: Stereo Images with Ground Truth for Benchmarking Message-ID: <199708111449.QAA14519@atlas.informatik.uni-bonn.de> Stereo Images with Ground Truth for Benchmarking At the University of Bonn, a synthetic but realistic stereo image pair has been generated together with ground truth disparity and occlusion maps. 
They are shown on the following web-page: http://www-dbv.cs.uni-bonn.de/~ft/stereo.html The stereo images and the ground truth maps can be employed to benchmark the performance of different stereo matching techniques. Every interested stereo-researcher is invited to use their matching technique to compute a dense disparity map for these images. I will then evaluate all contributed results and compare them with regard to their precision and reliability. Thorsten Froehlinghaus University of Bonn Dept. of Computer Science III E-Mail: ft at cs.bonn.edu   From mepp at almaden.ibm.com Sun Aug 10 11:33:06 1997 From: mepp at almaden.ibm.com (Mark Plutowski/Almaden/IBM) Date: 10 Aug 97 11:33:06 Subject: Call for papers: AI Review, Special Issue on Datamining the Internet Message-ID: <9708101833.AA2613@almlnsg0.almaden.ibm.com> Artificial Intelligence Review: Special Issue on Data Mining on the Internet The advent of the World Wide Web has caused a dramatic increase in usage of the Internet. The resulting growth in on-line information combined with the almost chaotic nature of the web necessitates the development of powerful yet computationally efficient algorithms to track and tame this constantly evolving complex system. While traditionally the data mining community has dealt with structured databases, web mining poses problems not only due to the lack of structure, but also due to the intrinsic distributed nature of the data. Furthermore, mining on the Internet involves also dealing with multi-media content consisting of not only natural language documents but also images, audio and video streams. Several interesting and potentially useful applications have already been developed by academic researchers and industry practitioners to address these challenges. It is important to learn from these initial endeavors, if we are to develop new algorithms and interesting applications. The purpose of this special issue is to provide a comprehensive state-of-the-art overview of the technical challenges and successes in mining of the Internet. Of particular interest are papers describing both the development of novel algorithms and applications. Topics of interest could include but are not limited to: * Resource Discovery * Collaborative Filtering * Information Filtering * Content Mining (text, images, video, etc.) * Information Extraction * User Profiling * Applications, e.g., one-to-one marketing In addition to the call for full-length papers, we request that any researchers working in this area submit abstracts and/or pointers to recently published applications for the purposes of compiling a comprehensive survey of the current state-of-the-art. The mission of Artificial Intelligence Review: The Artificial Intelligence Review serves as a forum for the work of researchers and application developers from Artificial Intelligence, Cognitive Science and related disciplines. The Review publishes state-of-the-art research and applications and critical evaluations of techniques and algorithms from the field. The Review also presents refereed survey and tutorial articles, as well as reviews and commentary on topics from these applications. **** Instructions for submitting papers *** Papers should be no more than 30 printed pages (approximately 15,000 words) with a 12-point font and 18-point spacing, including figures and tables. Papers must not have appeared in, nor be under consideration by other journals. 
Include a separate page specifying the paper's title and providing the address of the contact author for correspondence (including postal address, telephone number, fax number, and e-mail address). Send FOUR copies of each submission to the guest editor listed below. Papers in ascii or postscript form may be submitted electronically. Instructions for on-line submission are given below. ================================== Information For on-line submission ================================== Kluwer Academic Publishers allows on-line submission of scientific articles via ftp and e-mail. We will make this system more user-friendly by incorporating it into our KAPIS WWW server and using Netscape as the user-interface. This is currently being prepared and will be implemented by the end of this year. Below, please find the procedure that should be used until then. - an author sends an e-mail message to "submit at wkap.nl" containing the following line REQUEST SUBMISSIONFORM AIRE AIRE = Artificial Intelligence Review (the 4-letter code that is used at Kluwer) - the author receives the electronic submission form (see attachment) via e-mail with a dedicated file name filled in (and also the information that is given at point 4: the journal's four-letter code plus the full journal title) - the author fills in the submission form and sends it back to: "submit at wkap.nl" - at the same time, the author submits his/her article via anonymous ftp at the following address: "ftp.wkap.nl" in the subdirectory INCOMING/SUBMIT, using the dedicated file name with an appropriate extension - at Kluwer, the article is registered and taken into production in the usual way ======================================================================== ** Important Dates ** Papers Due: December 15, 1997 Acceptance Notification: March 1, 1998 Final Manuscript due: June 1, 1998 Guest Editor: Shivakumar Vaithyanathan, net.Mining, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (408)927-2465 (Phone) (408)927-2240 (Fax) e-mail: shiv at almaden.ibm.com   From frey at cs.toronto.edu Mon Aug 11 16:12:54 1997 From: frey at cs.toronto.edu (Brendan J. Frey) Date: Mon, 11 Aug 1997 16:12:54 -0400 Subject: Bayes nets for classification, compression and error-correction Message-ID: <97Aug11.161255edt.1012@neuron.ai.toronto.edu> Doctoral dissertation Bayesian Networks for Pattern Classification, Data Compression, and Channel Coding Brendan J. Frey http://www.cs.utoronto.ca/~frey Pattern classification, data compression, and channel coding are tasks that usually must deal with complex but structured natural or artificial systems. Patterns that we wish to classify are a consequence of a causal physical process. Images that we wish to compress are also a consequence of a causal physical process. Noisy outputs from a telephone line are corrupted versions of a signal produced by a structured man-made telephone modem. Not only are these tasks characterized by complex structure, but they also contain random elements. Graphical models such as Bayesian networks provide a way to describe the relationships between random variables in a stochastic system. In this thesis, I use Bayesian networks as an overarching framework to describe and solve problems in the areas of pattern classification, data compression, and channel coding. Results on the classification of handwritten digits show that Bayesian network pattern classifiers outperform other standard methods, such as the k-nearest neighbor method.
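To fix ideas, the generative approach to classification can be sketched in a few lines; this is only a minimal illustration (a single class node with conditionally independent binary observations, i.e. the simplest possible Bayesian network), not the models developed in the thesis, and the names and toy data below are hypothetical.

    # Minimal sketch of a generative Bayesian-network classifier: a class node C
    # with conditionally independent binary observations x_i (naive Bayes).
    # Classification picks argmax_c P(c) * prod_i P(x_i | c). Toy data only.
    import numpy as np

    def fit(X, y, n_classes, alpha=1.0):
        # Estimate P(c) and P(x_i = 1 | c) with Laplace smoothing.
        priors = np.zeros(n_classes)
        cond = np.zeros((n_classes, X.shape[1]))
        for c in range(n_classes):
            Xc = X[y == c]
            priors[c] = (len(Xc) + alpha) / (len(X) + alpha * n_classes)
            cond[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2.0 * alpha)
        return priors, cond

    def predict(X, priors, cond):
        # Log-posterior of each class for each row of X; return the argmax.
        log_p = np.log(priors) + X @ np.log(cond).T + (1 - X) @ np.log(1 - cond).T
        return np.argmax(log_p, axis=1)

    rng = np.random.default_rng(0)
    X_train = (rng.random((200, 64)) < 0.3).astype(float)   # fake binary "images"
    y_train = rng.integers(0, 10, size=200)                  # fake digit labels
    priors, cond = fit(X_train, y_train, n_classes=10)
    print(predict(X_train[:5], priors, cond))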
When Bayesian networks are used as source models for data compression, an exponentially large number of codewords are associated with each input pattern. It turns out that the code can still be used efficiently, if a new technique called ``bits-back coding'' is used. Several new error-correcting decoding algorithms are instances of ``probability propagation'' in various Bayesian networks. These new schemes are rapidly closing the gap between the performances of practical channel coding systems and Shannon's 50-year-old channel coding limit. The Bayesian network framework exposes the similarities between these codes and leads the way to a new class of ``trellis-constraint codes'' which also operate close to Shannon's limit. Brendan.   From ecm at casbah.acns.nwu.edu Tue Aug 12 10:22:26 1997 From: ecm at casbah.acns.nwu.edu (Edward Malthouse) Date: Tue, 12 Aug 1997 09:22:26 -0500 (CDT) Subject: Nonlinear Principal Components Analysis Message-ID: <199708121422.JAA22318@casbah.acns.nwu.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 1470 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c7ececb8/attachment.ksh From jdunn at cyllene.uwa.edu.au Tue Aug 12 00:17:16 1997 From: jdunn at cyllene.uwa.edu.au (John Dunn) Date: Tue, 12 Aug 1997 12:17:16 +0800 (WST) Subject: Eighth Australasian Mathematical Psychology Conference (AMPC97) Message-ID: <199708120417.MAA28639@cyllene.uwa.edu.au> Second Call for Papers Eighth Australasian Mathematical Psychology Conference (AMPC97) November 27-30, 1997 University of Western Australia Perth, W.A. Australia Conference organisers: John Dunn, Mike Kalish, Steve Lewandowsky Email to: mathpsych at psy.uwa.edu.au. ----------------------------------------------------- AMPC97 provides an opportunity for researchers interested in the application of mathematical analysis to psychology to meet and exchange views. Relevant domains include Experimental Psychology (particularly computational models), Cognitive Science, Connectionist Modelling, Scaling, Psychological Methods, Statistics and Test Theory. Papers are invited from researchers in all areas of mathematical psychology. Contributors are encouraged to propose topics for focused symposia, consisting of three to six papers, to present their work. The following symposia have been accepted. If you wish to present a paper at any of these, please contact the relevant convenor, listed below. Requests for additional symposia should be directed to the conference organisers at mathpsych at psy.uwa.edu.au. The current deadline for abstracts is August 31, 1997. 
----------------------------------------------------- Symposia ----------------------------------------------------- Local energy detection in vision David Badcock, Universtiy of Western Australia david at psy.uwa.edu.au Nonlinear dynamics Robert A M Gregson, Australian National University Robert.Gregson at anu.edu.au Associative learning John K Kruschke, Indiana University kruschke at croton.psych.indiana.edu Computational models of memory Stephan Lewandowsky, University of Western Australia lewan at psy.uwa.edu.au Knowledge representation Josef Lukas, University of Halle, Germany j.lukas at psych.uni-halle.de Choice, decision, and measurement Anthony A J Marley, McGill University tony at hebb.psych.mcgill.ca Face recognition Alice O'Toole & Herve Abdi, University of Texas otoole at utdallas.edu Models of response time Roger Ratcliff, Northwestern University roger at eccles.psych.nwu.edu ----------------------------------------------------- Special issue of the Australian Journal of Psychology ----------------------------------------------------- A special issue of the Australian Journal of Psychology dedicated to Mathematical Psychology will be forthcoming in late 1998 or early 1999. The conference organisers, John Dunn, Mike Kalish and Stephan Lewandowsky, will act as guest editors of this issue. All contributors to AMPC97 are invited to submit a paper which will be fully peer reviewed by researchers who are not contributing to the special issue. The aim of the special issue is to showcase the work of Australian mathematical psychologists and to demonstrate how this work is at the forefront of international developments. We are therefore particularly interested in papers arising from international collaboration, preferably those co-authored by researchers in Australia and abroad. ----------------------------------------------------- Details concerning the conference, registration, and submission of papers are available at the AMPC97 Web site at the following URL: http://www.psy.uwa.edu.au/mathpsych/ Registration for the conference and submission of abstracts need to be submitted in electronic form through the AMPC97 Web Site. -----------------------------------------------------   From becker at curie.psychology.mcmaster.ca Tue Aug 12 15:27:09 1997 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Tue, 12 Aug 1997 15:27:09 -0400 (EDT) Subject: nips97 workshop: models of episodic memory and hippocampus Message-ID: NIPS*97 workshop announcement COMPUTATIONAL MODELS OF EPISODIC MEMORY AND HIPPOCAMPAL FUNCTION The aim of this one-day workshop is to bring together computational modellers (not just of the neural network type) and experimentalists to share information on data and models. This will allow us to explore how these models can be better informed by experimental data, and how they may be used to guide experimental questions. Understanding the mechanisms underlying episodic memory, and the long-term storage of temporally ordered experience in general, remains a fascinating problem for both cogntive psychology and neuroscience. Interestingly, episodic memory has long been thought to involve the hippocampus, a structure in which recent experimental and modelling advances have provided insight at both the physiological and behavioral levels. 
The aim of this workshop is to bring together modellers and experimenters from a wide range of disciplines to define the key aspects of human behaviour (and possibly physiology and anatomy) that a model of epsisodic memory should account for, and to discuss the computational mechanisms that might support it. Modellers will span a wide range of approaches, but will all make contact in some way with human memory data. One or more of the experimenters will start the workshop off by addressing the data regarding episodic memory and hippocampal function from psychology, neuropsychology, functional imaging and animal physiology. The format of the rest of the workshop will be a mixture of lectures and discussions on the merits of various modelling approaches - of which neural networks are just one example. Each speaker will have approximately 25 minutes, including about 7-10 minutes for discussion. Tentative speaker list: Paul Fletcher Mike Hasselmo Jay McClelland Janet Wiles Chip Levy Andy Yonelinas Alan Pickering Jaap Murre Mike Kahana Bill Skaggs Neil Burgess Randy O'Reilly Sue Becker Date and location: Friday December 5, in Breckenridge, Colorado, at the site of the NIPS*97 workshops following the main conference in Denver (see http://www.cs.cmu.edu/Web/Groups/NIPS for details) Call for submissions: We may have time for a small number of contributed talks. Interested participants are asked to submit by email a title, abstract and summary of relevant publications to each of the organizers. Organizers: Sue Becker (becker at mcmaster.ca) Neil Burgess (n.burgess at ucl.ac.uk) Randy O'Reilly (oreilly at flies.mit.edu) Web page: http://claret.psychology.mcmaster.ca/becker/nips97wshop Abstracts will be added here soon. Summaries of talks will be published here after the workshop.   From giles at research.nj.nec.com Wed Aug 13 13:26:59 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 13 Aug 97 13:26:59 EDT Subject: paper on fuzzy automata and recurrent neural networks Message-ID: <9708131726.AA07646@alta> The following manuscript has been accepted in IEEE Transactions on Fuzzy Systems and is available at the WWW site listed below: www.neci.nj.nec.com/homepages/giles/papers/IEEE.TFS.fuzzy.automata.encoding.recurrent.net.ps.Z We apologize in advance for any multiple postings that may be received. *********************************************************************** Fuzzy Finite-State Automata Can Be Deterministically Encoded Into Recurrent Neural Networks Christian W. Omlin(1), Karvel K. Thornber(2), C. Lee~Giles(2,3) (1)Adaptive Computing Technologies, Troy, NY 12180 (2)NEC Research Institute, Princeton, NJ 08540 (3)UMIACS, U. of Maryland, College Park, MD 20742 ABSTRACT There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings, and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships; they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. 
Unlike in the case of deterministic finite-state automata (DFAs), FFAs are not in one particular state, rather each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-time, second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes a FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by correct string recognition of randomly generated FFAs. In particular, we examined how the networks' performance varies as a function of synaptic weight strengths Keywords: Fuzzy systems, fuzzy neural networks, recurrent neural networks, knowledge representation, automata, languages, nonlinear systems. - __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ==   From cmb35 at newton.cam.ac.uk Thu Aug 14 08:34:17 1997 From: cmb35 at newton.cam.ac.uk (C.M. Bishop) Date: Thu, 14 Aug 1997 13:34:17 +0100 Subject: Workshop on pulsed neural networks Message-ID: <199708141234.NAA13173@feynman> WORKSHOP ON PULSED NEURAL NETWORKS ---------------------------------- Isaac Newton Institute, Cambridge, U.K. 26 and 27 August, 1997 Organisers: Wolfgang Maass and Chris Bishop ****** FINAL PROGRAMME ****** This workshop draws together many aspects of pulsed neural networks including computational models, theoretical analyses, neuro-biological motivation and hardware implementations. A provisional programme, together with a list of abstracts, is given below. The dates of the workshop have been chosen so that participation can easily be combined with a trip to the First European Workshop on Neuromorphic Systems (EWNS-1), August 29-31, 1997, in Stirling, Scotland (for details see: http://www.cs.stir.ac.uk/~lss/Neuromorphic/Info1.html). If you would like to attend this workshop, please complete and return the registration form below. There is no registration fee, and accommodation for participants will be available (at reasonable cost) in Wolfson Court adjacent to the Institute. This workshop will form part of the six month programme at the Isaac Newton Institute on "Neural Networks and Machine Learning". For further information about the Institute and this programme see: http://www.newton.cam.ac.uk/ http://www.newton.cam.ac.uk/programs/nnm.html If you wish to be kept informed of other workshops and seminars taking place during the programme, please subscribe to the nnm mailing list: Send mail to majordomo at newton.cam.ac.uk with a message whose BODY (not subject -- which is irrelevant) contains the line 'subscribe nnm-list your_email_address' We look forward to seeing you in Cambridge. Wolfgang Maass Chris Bishop --------------------------------------------------------------------------- REGISTRATION FORM ----------------- (Please return to H.Dawson at newton.cam.ac.uk) Last Name:....................................Title:..................... Forenames:.................................................................... Address of Home Institution: ................................... ................................... ................................... ................................... ................................... Office Phone:........................ Home Phone:........................... Fax Number:.......................... E-mail:.............................. 
Date of Arrival:.................... Date of Departure:.................... If you would like accommodation in Wolfson Court at 22.50 UK pounds per night for bed and breakfast, please contact Heather Dawson (H.Dawson at newton.cam.ac.uk) as soon as possible. ------------------------------------------------------------------------------ FINAL PROGRAMME --------------------- Tuesday, August 26: 9:00 - 10:15 Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Motivation and Models for Spiking Neurons" 10:15 - 10:45 Coffee-Break 10:45 - 12:00 Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria) "Computation and Coding in Networks of Spiking Neurons" 12:00 - 14:00 Lunch 14:00 - 14:40 David Horn (Tel Aviv University, Israel) "Fast Temporal Encoding and Decoding with Spiking Neurons" 14:40 - 15:20 John Shawe-Taylor (Royal Holloway, University of London) "Neural Modelling and Implementation via Stochastic Computing" 15:20 - 16:00 Tea Break 16:00 - 16:40 Wolfgang Maass (Technische Universitaet Graz, Austria) "A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations" 16:40 - 17:20 Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System" 17:20 - 18:00 Poster-Spotlights (5 minutes each) 18:00 - 19:00 Poster-Session (with wine reception) 19:00 Barbecue dinner at the Isaac Newton Institute ------------------------ Wednesday, August 27 9:00 - 10:15 Tutorial by Alan F. Murray (University of Edinburgh) "Pulse-Based Computation in VLSI Neural Networks : Fundamentals" 10:15 - 10:40 Coffee-Break 10:40 - 11:20 Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland) "Communication and Computation using Spikes in Silicon Perceptive Systems" 11:20 - 12:00 David P.M. Northmore (University of Delaware, USA) "Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs" 12:00 - 14:00 Lunch (During lunch we will discuss plans for an edited book on pulsed neural nets) 14:00 - 14:40 Alister Hamilton (University of Edinburgh) "Pulse Based Signal Processing for Programmable Analogue VLSI" 14:40 - 15:20 Rodney Douglas (ETH Zurich, Switzerland) "A Communications Infrastructure for Neuromorphic Analog VLSI Systems" 15:20 - 15:40 Coffee-Break 15:40 - 17:00 Plenary Discussion: Artifical Pulsed Neural Nets: Prospects and Problems +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ABSTRACTS --------- (in the order of the talks) Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Motivation and Models for Spiking Neurons" In this introductory tutorial I will try to explain some basic ideas of and provide a common language for pulsed neural nets. To do so I will 0) motivate the idea of pulse coding as opposed to rate coding 1) discuss the relation between various simplified models of spiking neurons (integrate-and-fire, Hodgkin-Huxley) and argue that the Spike Response Model (=linear response kernels + threshold) is a suitable framework to think about such models. 2) discuss typical phenoma of the dynamics in populations of spiking neurons (oscillations, asynchronous states), provide stability arguments and introduce an integral equation for the population dynamics. 3) review the idea of feature binding and pattern segmentation by a 'synchronicity code'. 
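For readers who have not seen the integrate-and-fire model mentioned above, a minimal leaky integrate-and-fire simulation is sketched below; the parameter values are arbitrary illustrations and are not taken from the tutorial.

    # Leaky integrate-and-fire neuron (minimal sketch; arbitrary parameters).
    # The membrane potential integrates its input with a leak; when it crosses
    # threshold, a pulse is emitted and the potential is reset.
    import numpy as np

    def lif(input_current, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        v, spike_times = v_rest, []
        for step, current in enumerate(input_current):
            v += dt / tau * (-(v - v_rest) + current)   # tau dv/dt = -(v - v_rest) + I
            if v >= v_thresh:
                spike_times.append(step * dt)            # record the spike time (ms)
                v = v_reset                               # reset after the pulse
        return spike_times

    # A constant suprathreshold current produces regular firing.
    print(lif(np.full(2000, 1.5))[:10])   # first few spike times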
------------------------------------------------------------ Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria) "Computation and Coding in Networks of Spiking Neurons" This tutorial will provide an introduction to --- methods for encoding information in trains of pulses --- simplified computational models for networks of spiking neurons --- the computational power of networks of spiking neurons for concrete coding schemes --- computational consequences of synapses that are not static, but give different "weights" to different pulses in a pulse train --- relationships between models for networks of spiking neurons and classical neural network models. ------------------------------------------------------------- David Horn (Tel Aviv University, Israel) "Fast Temporal Encoding and Decoding with Spiking Neurons" We propose a simple theoretical structure of interacting integrate-and-fire neurons that can handle fast information processing, and may account for the fact that only a few neuronal spikes suffice to transmit information in the brain. Using integrate-and-fire neurons that are subjected to individual noise and to a common external input, we calculate their first passage time (FPT), or inter-spike interval. We suggest using a population average for evaluating the FPT that represents the desired information. Instantaneous lateral excitation among these neurons helps the analysis. By employing a second layer of neurons with variable connections to the first layer, we represent the strength of the input by the number of output neurons that fire, thus decoding the temporal information. Such a model can easily lead to a logarithmic relation as in Weber's law. The latter follows naturally from information maximization, if the input strength is statistically distributed according to an approximate inverse law. ------------------------------------------- John Shawe-Taylor (Royal Holloway, University of London) "Neural Modelling and Implementation via Stochastic Computing" 'Stochastic computing' studies computation performed by manipulating streams of random bits which represent real values via a frequency encoding. The paper will review results obtained in applying this approach to neural computation. The following topics will be covered: * Basic neural modelling * Implementation of feedforward networks and learning strategies * Generalization analysis in the statistical learning framework * Recurrent networks for combinatorial optimization, simulated and mean field annealing * Applications to graph colouring * Hardware implementation in FPGAs ------------------------------------------ Wolfgang Maass (Technische Universitaet Graz, Austria) "A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations" A simple extension of standard neural network models is introduced that provides a model for computations with pulses where both the pulse frequencies and correlations in pulse times between different pulse trains are computationally relevant. Such an extension appears to be useful since it has been shown that firing correlations play a significant computational role in many biological neural systems, and there have been attempts to transport this coding mechanism to artificial pulsed neural networks. Standard neural network models are only suitable for describing computations in terms of pulse rates. The resulting extended neural network models are still relatively simple, so that their computational power can be analyzed theoretically.
We prove rigorous separation results, which show that the use of pulse correlations in addition to pulse rates can increase the computational power of a neural network by a significant amount. ------------------------------------------------------------ Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System" Owls can locate sound sources in complete darkness with remarkable precision. This capability requires auditory information processing with a temporal precision of less than 5 microseconds. How is this possible, given that typical neurons are at least one order of magnitude slower? In this talk, an integrate-and-fire model is presented of a neuron in the auditory system of the barn owl. Given coherent input, the model neuron is capable of generating precisely timed output spikes. In order to make the input coherent, delay lines are tuned during an early period of the owl's development by an unsupervised learning procedure. This results in an adaptive system which develops a sensitivity to the exact timing of pulses arriving from the left and the right ear, a necessary step for the localization of external sound sources and hence prey. *************************************************************** (Abstracts of Posters: see the end of this listing) ************************************************************** ------------------------------------------------------------- Tutorial by Alan F. Murray (University of Edinburgh) "Pulse-Based Computation in VLSI Neural Networks : Fundamentals" This tutorial will present the techniques that underlie pulse generation, distribution and arithmetic in VLSI devices. The talk will concentrate on work performed in Edinburgh, but will include references to alternative approaches. Ancillary issues surrounding "neural" computation in analogue VLSI will be drawn out and the tutorial will include a brief introduction to MOSFET circuits and devices. ------------------------------------------------------------------ Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland) "Communication and Computation using Spikes in Silicon Perceptive Systems" This presentation deals with the principles, the main properties and some applications of a pulsed communication system adapted to the needs of the analog implementation of perceptive and sensory-motor systems. The interface takes advantage of the fact that activity in perception tasks is often sparsely distributed over a large number of elementary processing units (cells) and facilitates access to the communication channel for the more active cells. The resulting "open loop" communication architecture can advantageously be used to set up connections between distant cells on the same chip or point-to-point connections between cells on different chips. The system also lends itself to the simple circuit implementation of typically biological connectivity patterns, such as projection of the activity of one cell onto a region (its "projective field") of the next neural processing layer, which can be on a different chip in an actual implementation. Examples of possible applications will be drawn from the fields of vision and sensory-motor loops. ------------------------------------------------------------------ David P.M.
Northmore (University of Delaware, USA) "Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs" The dendrites of neurons probably play very important signal processing roles in the CNS, allowing large numbers of afferent spike trains to be differentially weighted and delayed, with linear and non-linear summation. Our VLSI neuromorphs capture these essential properties and demonstrate the kinds of computations involved in sensory processing. As recent neurobiology shows, dendrites also play a critical role in learning by back-propagating output spikes to recently active synapses, leading to changes in their efficacy. Using a spike distribution system we are exploring Hebbian learning in networks of neuromorphs. -------------------------------------------------- Alister Hamilton (University of Edinburgh) "Pulse Based Signal Processing for Programmable Analogue VLSI" VLSI implementations of Pulsed Neural Systems often require the use of standard signal processing functions and neural networks in order to process sensory data. This talk will introduce a new pulse based technique for implementing standard signal processing functions - the Palmo technique. The technique we have developed is fully programmable, and may be used to implement Field Programmable Mixed Signal Arrays - making it of great interest to the wider electronics community. --------------------------------------------------- Rodney Douglas (ETH Zurich, Switzerland) "A Communications Infrastructure for Neuromorphic Analog VLSI Systems" Analogs of peripheral sensory structures such as retinas and cochleas, and populations of neurons have been successfully implemented on single neuromorphic analog Very Large Scale Integration (aVLSI) chips. However, the amount of computation that can be performed on a single chip is limited. The construction of large neuromorphic systems requires a multi-chip communication framework optimized for neuromorphic aVLSI designs. We have developed one such framework. It is an asynchronous multiplexing communication network based on address event data representation (AER). In AER, analog signals from the neurons are encoded by pulse frequency modulation. These pulses are abstractly represented on a communication bus by the address of the neuron that generated it, and the timing of these address-event communicate analog information. The multiplexing used by the communication framework attempts to take advantage of the greater speed of silicon technology over biological neurons to compensate for more limited direct physical connectivity of aVLSI. The AER provides a large degree of flexibility for routing digital signals to arbitrary physical locations. ******************************************************************* POSTERS ******* Irit Opher and David Horn (Tel Aviv University, Israel) "Arrays of Pulse Coupled Neurons: Spontaneous Activity Patterns and Image Analysis" Arrays of interacting identical pulse coupled neurons can develop coherent firing patterns, such as moving stripes, rotating spirals and expanding concentric rings. We obtain all of them using a novel two variable description of integrate and fire neurons that allows for a continuum formulation of neural fields. One of these variables distinguishes between the two different states of refractoriness and depolarization and acquires topological meaning when it is turned into a field. Hence it leads to a topologic characterization of the ensuing solitary waves. 
These are limited to point-like excitations on a line and linear excitations, including all the examples quoted above, on a two-dimensional surface. A moving patch of firing activity is not an allowed solitary wave on our neural surface. Only the presence of strong inhomogeneity that destroys the neural field continuity, allows for the appearance of patchy incoherent firing patterns driven by excitatory interactions. Such a neural manifold can be used for image analysis, performing edge detection and scene segmentation, under different connectivities. Using either DOG or short range synaptic connections we obtain edge detection at times when the total activity of the system runs through a minimum. With generalized Hebbian connections the system develops temporal segmentation. Its separation power is limited to a small number of segments. ----------------------------------------------------------------- Berthold Ruf und Michael Schmitt (Technische Universitaet Graz, Austria) "Self-Organizing Maps of Spiking Neurons Using Temporal Coding" The basic idea of self-organizing maps (SOM) introduced by Kohonen, namely to map similar input patterns to contiguous locations in the output space, is not only of importance to artificial but also to biological systems, e.g. in the visual cortex. However, the standard formulation of the SOM and the corresponding learning rule are not suitable for biological systems. Here we show how networks of spiking neurons can be used to implement a variation of the SOM in temporal coding, which has the same characteristic behavior. In contrast to the standard formulation of the SOM our construction has the additional advantage that the winner among the competing neurons can be determined fast and locally. ---------------------------------------------------- Wolfgang Maass and Michael Schmitt (Technische Universitaet Graz, Austria) "On the Complexity of Learning for Networks of Spiking Neurons" In a network of spiking neurons a new set of parameters becomes relevant which has no counterpart in traditional neural network models: the time that a pulse needs to travel through a connection between two neurons (also known as ``delay'' of a connection). It is known that these delays are tuned in biological neural systems through a variety of mechanisms. We investigate the VC-dimension of networks of spiking neurons where the delays are viewed as ``programmable parameters'' and we prove tight bounds for this VC-dimension. Thus we get quantitative estimates for the diversity of functions that a network with fixed architecture can compute with different settings of its delays. It turns out that a network of spiking neurons with k adjustable delays is able to compute a much richer class of Boolean functions than a threshold circuit with k adjustable weights. The results also yield bounds for the number of training examples that an algorithm needs for tuning the delays of a network of spiking neurons. Results about the computational complexity of such algorithms are also given. ------------------------------------------------------------------ Wolfgang Maass and Thomas Natschlaeger (Technische Universitaet Graz, Austria) " Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in Temporal Coding" A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. 
It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of excitatory synapses. A corresponding layered architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs. ----------------------------------------------------------- Wolfgang Maass and Berthold Ruf (Technische Universitaet Graz, Austria) It was previously shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a spiking neuron have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand several implementations of pulsed neural nets in VLSI employ pulses that have the shape of step functions. We analyse the relevance of the shape of pulses for the computational power of formal models for pulsed neural nets. It turns out that the computational power is significantly higher if one employs pulses with a linearly increasing or decreasing segment. ******************   From michal at neuron.tau.ac.il Thu Aug 14 07:07:00 1997 From: michal at neuron.tau.ac.il (Michal Finkelman) Date: Thu, 14 Aug 1997 14:07:00 +0300 (IDT) Subject: D. Horn's 60th birthday - Symposium on Neural Comp. & Part. Physics Message-ID: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TEL AVIV UNIVERSITY THE RAYMOND & BEVERLY SACKLER FACULTY OF EXACT SCIENCES SCHOOL OF PHYSICS AND ASTRONOMY SYMPOSIUM ON PARTICLE PHYSICS AND NEURAL COMPUTATION ---------------------------------------------------- IN HONOR OF DAVID HORN'S 60TH BIRTHDAY -------------------------------------- Monday, October 27th 1997 (9:15 AM - 05:30 PM) Lev Auditorium, Tel-Aviv University PROGRAM ---------- 9:15 AM: Opening addresses: Nili Cohen, Rector of Tel-Aviv University Yuval Ne'eman (Tel Aviv) 9:30 - 10:30: Gabriele Veneziano (CERN) - From s-t-u Duality to S-T-U Duality 10:30 - 11:00: Coffee break 11:00 - 12:00: Fredrick J Gilman (Carnegie Mellon) - CP Violation 12:00 - 1:30: Lunch break 1:30 - 2:30: Leon N Cooper (Brown) - From Receptive Fields to the Cellular Basis for Learning and Memory Storage: A Unified Learning Hypothesis 2:30 - 3:30: John J Hopfield (Princeton) - How Can We Be So Smart? Information Representation and Neurobiological Computation. 3:30 - 4:00: Coffee break 4:00 - 5:00: Yakir Aharonov (Tel Aviv) - A New Approach to Quantum Mechanics 5:00 PM: David Horn - Closing Remarks %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Notes: 1. This announcement serves also as an invitation to enter the TAU campus on that date. 2. Colleages and friends who wish to attend the symposium are kindly requested to NOTIFY US IN ADVANCE by e-mailing to michal at neuron.tau.ac.il. 
fax: 972-3-6407932 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%   From ecm at casbah.acns.nwu.edu Thu Aug 14 18:05:11 1997 From: ecm at casbah.acns.nwu.edu (Edward Malthouse) Date: Thu, 14 Aug 1997 17:05:11 -0500 (CDT) Subject: Nonlinear Principal Components Analysis (fwd) Message-ID: <199708142205.RAA13356@casbah.acns.nwu.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 355 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f959c024/attachment.ksh From rsun at cs.ua.edu Fri Aug 15 01:22:16 1997 From: rsun at cs.ua.edu (Ron Sun) Date: Fri, 15 Aug 1997 00:22:16 -0500 Subject: CFP: AAAI 1998 Spring Symposium on Multimodal Reasoning Message-ID: <199708150522.AAA10798@sun.cs.ua.edu> ------ CFP: AAAI 1998 Spring Symposium on Multimodal Reasoning There are a number of AI reasoning modes or paradigms that have widespread application, e.g. case-based reasoning, constraint-based reasoning, model-based reasoning, rule-based reasoning. The symposium will encourage integration of these reasoning modes, and interaction among the corresponding research communities. Topics include, but are not limited to: *Combining reasoning methods in a single application *Using one form of reasoning to support or guide another *Compiling one form of reasoning experience into another form of reasoning knowledge *Transferring successful methods from one form of reasoning to another *Interoperability of applications based on different reasoning technology *Switching among alternative forms of reasoning *Comparing and evaluating reasoning alternatives for specific problem domains *Identifying categories, structures, or properties of knowledge or tasks for which different reasoning techniques are appropriate or advantageous *Systematically relating reasoning formalisms *Demonstrating practical advantages of a multimodal approach for real problems *Identifying and exploiting commonalities Papers grounded in specific problems or domains will be welcome. More general or theoretical insights will also be appropriate. The Symposium will encourage building on the specific experiences of the attendees towards general principles of multimodal reasoning architecture, multimodal both in the sense of combining modes, and in the sense of being relevant to multiple modes. Submissions Submit an abstract of a new paper or a summary of previous relevant work. Submissions should be no more than four pages, single column, 12 point type. Include an illustrative example. E-mail PostScript of submissions to multimodal at cs.unh.edu. The symposium web page is at: www.cs.unh.edu/ccc/mm/sym.html. 
General information about the Spring Symposia can be obtained at: http://aaai.org/Symposia/Spring/1998/sssparticipation-98.html Organizing Committee Eugene Freuder (chair), University of New Hampshire, ecf at cs.unh.edu Edwina Rissland, University of Massachusetts Peter Struss, Technical University of Munich Milind Tambe, University of Southern California Program Committee Rene Bakker, Telematics Research Centre Karl Branting, University of Wyoming Nick Cercone, University of Regina Ashok Goel, Georgia Institute of Technology Vineet Gupta, Xerox Palo Alto Research Center David Leake, University of Indiana Amnon Meisels, Ben Gurion University Robert Milne, Intelligent Applications Ltd Pearl Pu, Ecole Polytechnique F=E9d=E9rale de Lausanne Ron Sun, University of Alabama Jerzy Surma, Technical University of Wroclaw Katia Sycara, Carnegie Mellon University   From giles at research.nj.nec.com Mon Aug 18 13:02:10 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 18 Aug 97 13:02:10 EDT Subject: paper on intelligent methods for file system optimization Message-ID: <9708181702.AA00759@alta> The following paper, published at the Proceedings of the Fourteenth National Conference on Artificial Intelligence and the Ninth Innovative Applications of Artificial Intelligence Conference (August, 1997), is now available at the sites listed below: http://www.neci.nj.nec.com/homepages/giles/papers/AAAI-97.intelligent.file.organization.ps.Z http://envy.cs.umass.edu/People/kuvayev/index.html ftp://ftp.nj.nec.com/pub/giles/papers/AAAI-97.intelligent.file.organization.ps.Z We apologize in advance for any multiple postings that may be received. *************************************************************************** Intelligent Methods for File System Optimization L. Kuvayev(2), C. L. Giles(1,3), J. Philbin(1), H. Cejtin(1) (1)NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (2)Dept. of Computer Science, University of Massachusetts, Amherst, MA 01002 (3)Institute for Advanced Computer Studies, U. of Maryland, College Park, Md kuvayev at cs.umass.edu {giles,philbin,henry}@research.nj.nec.com ABSTRACT The speed of I/O components is a major limitation of the speed of all other major components in today's computer systems. Motivated by this, we investigated several algorithms for efficient and intelligent organization of files on a hard disk. Total access time may be decreased if files with temporal locality also have spatial locality. Three intelligent methods based on file type, frequency, and transition probabilities information showed up to 60% savings of total I/O time over the naive placement of files. More computationally intensive hill climbing and genetic algorithms approaches did not outperform statistical methods. The experiments were run on a real and simulated hard drive in single and multiple user environments. Keywords: file systems, reasoning about physical systems, Markov models, probabilistic reasoning, genetic algorithms. __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ==   From wulfram.gerstner at di.epfl.ch Tue Aug 19 02:42:00 1997 From: wulfram.gerstner at di.epfl.ch (Wulfram Gerstner) Date: Tue, 19 Aug 1997 08:42:00 +0200 Subject: ICANN_97 Oct.7-10: call for participation Message-ID: <199708190642.IAA07536@lamisun1.epfl.ch> Call for Participation: ICANN'97 in Lausanne (Switzerland). 
--------- ICANN'97 ------- International Conference on Artificial Neural Networks October 7-10 - Lausanne, Switzerland Tutorials on Tuesday, October 7 Plenary and parallel sessions, October 8-10 -- The 1997 Latsis Conference -- More details on the conference, including the full program and registration forms, can be found at http://www.epfl.ch/icann97/ Email icann97 at epfl.ch Fax +41 21 693-5656 _____________________________________________________________ Conference structure """""""""""""""""""" ICANN'97 is the 7th Annual Conference of the European Neural Network Society ENNS. The program includes plenary talks and 4 tracks of parallel sessions covering the domains of Theory, Biological Models, Applications, and Implementations. All posters are complemented by short oral poster spotlight presentations. The conference starts with a Tutorial day on October 7. Tutorials ^^^^^^^^^ Y. Abu-Mostafa (USA), P. Refenes (GB) Finance Applications X. Arreguit (CH) VLSI Implementations of Vision Systems J.L. van Hemmen (D), A. Kreiter (D) Cortical Oscillations M. Opper (D) Statistical Theories of Learning Invited plenary talks ^^^^^^^^^^^^^^^^^^^^ H. Bourlard, Martigny, CH, Speech recognition S. Grossberg, Boston, USA, Visual Perception H. Markram, Rehovot, Israel, Fast Synaptic Changes E. Oja, Espoo, Finland, Independent Comp. Analysis H. Ritter, Bielefeld, D, Self-Org. Maps for Robotics T. Roska, Budapest, HU, Cellular Neural Networks R. Sutton, Amherst, USA, Markov Decision Processes V. Vapnik, Holmdel, USA, Support Vector Machines E. Vittoz, Neuchatel, CH, Bioinspired Circuits Special Invited Sessions ^^^^^^^^^^^^^^^^^^^^^^^ Cortical Maps and Receptive Fields, Temporal Patterns and Brain Dynamics, Time Series Prediction, Adaptive Autonomous Agents Regular Sessions ^^^^^^^^^^^^^^^^^ THEORY: Learning, Signal Processing, Self Organization, Recurrent Networks, Perceptrons, Kernel-based Networks BIOLOGY: Coding, Synaptic Learning, Neural Maps, Vision APPLICATIONS: Forecasting, Monitoring, Pattern Recognition, Robotics, Identification and Control IMPLEMENTATIONS: Analog VLSI, Digital Implementations _____________________________________________________________ Registration """""""""""" The registration fee includes admission to all sessions, one copy of the proceedings, coffee breaks and 3 lunches, welcome drinks and the banquet. before August 30 -- after Regular registration fee 580 CHF -- 640 CHF Student (with lunch, no banquet, no proceedings) 270 CHF -- 330 CHF Tutorial day (October 7) 30 CHF -- 50 CHF Ask for a copy of the forms or look on the Web http://www.epfl.ch/icann97/ Proceedings are published by Springer, Lecture Notes in Computer Science Series _____________________________________________________________ Conference location and accommodation """"""""""""""""""""""""""""""""""""" The conference will be held at the EPFL (Swiss Federal Institute of Technology) in Lausanne. Lausanne is beautifully located on the shores of Lake Geneva, and can easily be accessed by train and plane. Hotels are in the 50 to 150 CHF range. Reservations are not handled by the conference organizers. Ask Fassbind Hotels, fax +41 21 323 0145 _____________________________________________________________ Organizers ^^^^^^^^^^ General Chairman: Wulfram Gerstner, Mantra-EPFL Co-chairmen: Alain Germond, Martin Hasler, J.D.
Nicoud, EPFL Registration secretariat: Andree Moinat, LRC-EPFL, tel +41 21 693-2661 FAX: +41 21 693 5656 ________________________________________________________ For the full program and other informations have a look at http://www.epfl.ch/icann97/ ________________________________________________________   From David_Redish at gs151.sp.cs.cmu.edu Thu Aug 21 11:17:39 1997 From: David_Redish at gs151.sp.cs.cmu.edu (David Redish) Date: Thu, 21 Aug 1997 11:17:39 -0400 Subject: Thesis available Message-ID: <23964.872176659@gs151.sp.cs.cmu.edu> My PhD thesis is now available on the WWW: BEYOND THE COGNITIVE MAP: Contributions to a Computational Neuroscience Theory of Rodent Navigation A. David Redish http://www.cs.cmu.edu/~dredish/pub/thesis.ps.gz The thesis is 452 pages (2.5M gzipped, 9.6M uncompressed). In addition to novel contributions (see below), the thesis includes a 100 page Experimental Review (Chapter 2), a 75 page Navigation overview, a 30 page overview of hippocampal theories, and over 500 references which some might find useful. I have appended the abstract for those interested. ***** NO HARDCOPIES AVAILABLE ******** I am sorry, but I due to the size of the thesis, I cannot send hardcopies. ------------------------------------------------------ Dr. A. David Redish Computer Science Department CMU graduated (!) student Center for the Neural Basis of Cognition (CNBC) http://www.cs.cmu.edu/~dredish ------------------------------------------------------------ BEYOND THE COGNITIVE MAP: Contributions to a Computational Neuroscience Theory of Rodent Navigation Ph.D Thesis A. David Redish Computer Science Department and Center for the Neural Basis of Cognition Carnegie Mellon University Rodent navigation is a unique domain for studying information processing in the brain because there is a vast literature of experimental results at many levels of description, including anatomical, behavioral, neurophysiological, and neuropharmacological. This literature provides many constraints on candidate theories. This thesis presents contributions to a theory of how rodents navigate as well as an overview of that theory and how it relates to the experimental literature. In the first half of the thesis, I present a review and overview of the rodent navigation literature, both experimental and theoretical. The key claim of the theory is that navigation can be divided into two categories: taxon/praxic navigation and locale navigation (O'Keefe and Nadel, 1978), and that locale navigation can be understood as an interaction between five subsystems: local view, head direction, path integration, place code, and goal memory (Redish and Touretzky, 1997). I bring ideas together from the extensive work done on rodent navigation over the last century to show how the interaction of these systems forms a comprehensive, computational theory of navigation. This comprehensive theory has implications for an understanding of the role of the hippocampus, suggesting that it shows three different modes: storage, recall, and replay. In the second half of the thesis, I show specific contributions to this overall theory. I report a simulation of the head direction system that can track multiple head direction speeds accurately. The simulations show that the theory implies that head direction tuning curves in the anterior thalamic nuclei should deform during rotations. This observation has been confirmed experimentally by Blair et al. (1997). 
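As a rough illustration of what tracking head direction at multiple turning speeds involves (this is not the attractor-network model developed in the thesis), here is a sketch of a ring of directionally tuned units whose activity bump follows an integrated angular-velocity signal and is read out with a population vector; all parameters are hypothetical.

    # Sketch: ring of head-direction units, angular-velocity integration, and
    # population-vector decoding. Hypothetical parameters; not the thesis model.
    import numpy as np

    N = 60
    preferred = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred headings

    def activity(theta, width=0.3):
        # Bump of activity centred on the current heading theta (radians).
        return np.exp(np.cos(preferred - theta) / width)

    def decode(act):
        # Population-vector estimate of the heading represented by `act`.
        return np.angle(np.sum(act * np.exp(1j * preferred))) % (2.0 * np.pi)

    dt = 0.01
    for omega in (0.5, 2.0, 8.0):                        # turning speeds in rad/s
        theta = 0.0
        for _ in range(200):
            theta = (theta + omega * dt) % (2.0 * np.pi)  # integrate angular velocity
        print(omega, round(theta, 2), round(decode(activity(theta)), 2))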
By examining the computational requirements and the anatomical data, I suggest that the anatomical locus of the path integrator is in a loop comprised of the subiculum, the parasubiculum, and the superficial entorhinal cortex. This contrasts with other hypotheses of the anatomical locus of path integration (e.g. hippocampus, McNaughton et al., 1996) and predicts that the hippocampus should not be involved in path integration. This prediction has been recently tested and confirmed by Alyan et al. (1997). I present simulations demonstrating the viability of the three-mode hippocampal proposal, including storage and recall of locations within single environments, with ambiguous inputs, and in multiple environments. I present simulations demonstrating the viability of the dual-role hippocampus (recall and replay), showing that the two modes can coexist within the hippocampus even though the two roles seem to require incompatible connection matrices. In addition, I present simulations of specific experiments, including: a simulation of the recent result from Barnes et al. (1997), showing that the model produces a bimodality in the correlations of representations of an environment in animals with deficient LTP (these simulations show that the Barnes et al. result does not necessarily imply that the intra-hippocampal connections are pre-wired to form separate charts, as suggested by Samsonovich, 1997); a simulation of Sharp et al.'s (1990) data on the interaction between entry point and external cues, showing the first simulations capable of replicating all the single place field conditions reported by Sharp et al.; simulations of Cheng (1986) and Margules and Gallistel (1988), showing the importance of disorientation in self-localization; simulations of Morris (1981), showing that the model can replicate navigation in the water maze; and simulations of Collett et al. (1986) and our own gerbil navigation results, showing that the model can replicate a number of reactions to different manipulations of landmark arrays.   From pcohen at cse.ogi.edu Thu Aug 21 19:31:00 1997 From: pcohen at cse.ogi.edu (Phil Cohen) Date: Thu, 21 Aug 1997 16:31:00 -0700 Subject: GESTURE RECOGNITION POSTDOCTORAL POSITION Message-ID: GESTURE RECOGNITION POSTDOCTORAL RESEARCHER The Center for Human-Computer Communication at the Oregon Graduate Institute of Science and Technology has a postdoctoral opening for a talented researcher interested in gesture recognition in the context of multimodal communication. We have considerable experience building multimodal systems, using speech and pen-based gestures, in which speech and gesture mutually compensate for one another's errors. The position would involve research and development in gesture recognition, both 2-D and 3-D, in a multimodal environment, as well as statistical language modeling. Experience with neural networks and HMMs is essential. Of course, background in handwriting recognition would be most appropriate. Salary and benefits for this position are competitive. To apply, submit a resume, names and contact information for 3 references, a brief statement of research career interests, and date of availability. Applications received by Sept. 1 will receive priority consideration. Qualified women and minorities are encouraged to apply. Please forward applications to: Gloria McCauley, Center Administrator Center for Human-Computer Communication Department of Computer Science Oregon Graduate Institute of Science & Technology P. O.
Box 91000 Portland, Oregon 97291 Email: mccauley at cse.ogi.edu FAX: (503) 690-1548 For FEDEX, please use shipping address at: Department of Computer Science and Engineering 20000 N.W. Walker Road Beaverton, Oregon 97006 GENERAL INFORMATION ABOUT CHCC, OGI & PORTLAND AREA CHCC is an internationally known research center in the Computer Science Dept. at the Oregon Graduate Institute of Science & Technology (OGI). We are a multidisciplinary group that includes computer scientists, psychologists, and linguists dedicated to advancing the science and technology of human-computer communication. Our research results and system designs are published broadly, receive attention at the highest levels of government and industry, and are supported by Intel, Microsoft, the National Science Foundation, DARPA, ONR, and other well-known federal and corporate sponsors. The work environment at CHCC includes a new state-of-the-art data collection and intelligent systems laboratory with Pentium and Pentium Pro workstations, SGIs, Sun Sparcstations, PowerMacs, PDAs and other portable devices, and video recording and production equipment. The Computer Science Dept. at OGI is one of the most rapidly growing in the U.S., with 18 faculty, over 100 Ph.D. and M.S. students, and a departmental research budget exceeding $6M per year. The department's research activities are organized around a number of centers of excellence, including the Center for Human-Computer Communication, the Center for Spoken Language Understanding, the Pacific Software Center, the Data Intensive Systems Center, and the Center for Information Technology. OGI is located 12 miles west of Portland, Oregon, and serves the high-technology educational needs of Intel, Tektronix, Mentor Graphics, and other local corporations. Portland is a rapidly growing metropolitan area with over 1.2 M people. It offers extensive cultural, culinary, and recreational opportunities such as sailing, windsurfing, skiing, hiking, and beach sports within an hour's drive. Further information about CHCC can be found at http://www.cse.ogi.edu/CHCC Philip R. Cohen Professor and Director Center for Human-Computer Communication Dept. of Computer Science and Engineering Oregon Graduate Institute of Science and Technology 20000 NW Walker Rd. Beaverton, OR 97006 Phone: 503-690-1326. Fax: 503-690-1548 WWW: http://www.cse.ogi.edu/CHCC   From David_Redish at gs151.sp.cs.cmu.edu Fri Aug 22 10:04:30 1997 From: David_Redish at gs151.sp.cs.cmu.edu (David Redish) Date: Fri, 22 Aug 1997 10:04:30 -0400 Subject: Thesis available In-Reply-To: Your message of "Thu, 21 Aug 1997 11:17:39 EDT." <23964.872176659@gs151.sp.cs.cmu.edu> Message-ID: <26593.872258670@gs151.sp.cs.cmu.edu> Some people have balked at printing 452 pages and requested that I reformat the thesis if possible. I have been able to reformat it single spaced and save approx. 1/4 (so it is now 340 pages). ------------------------------------------------------ A. David Redish Computer Science Department CMU graduated (!) student Center for the Neural Basis of Cognition (CNBC) http://www.cs.cmu.edu/~dredish ------------------------------------------------------------ >My PhD thesis is now available on the WWW: > > BEYOND THE COGNITIVE MAP: > Contributions to a Computational Neuroscience Theory > of Rodent Navigation > > A. David Redish > > http://www.cs.cmu.edu/~dredish/pub/thesis.ps.gz Now, 340 pages.   
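As a side note for readers new to these models: the head-direction simulations mentioned in the thesis abstract above are often described as ring attractors, in which a bump of activity on a ring of units encodes the current heading and an angular-velocity signal pushes the bump around the ring. The toy Python sketch below shows only that general idea; it is not code from the thesis, and the network size, connection kernel, gains, and update rule are all invented for illustration.

import numpy as np

n = 60                                                  # one unit every 6 degrees
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
diff = np.angle(np.exp(1j * (angles[:, None] - angles[None, :])))
w = np.exp(-diff**2 / 0.3) - 0.5                        # local excitation, global inhibition
rate = np.exp(-diff[0]**2 / 0.2)                        # initial bump of activity at 0 rad

def step(rate, ang_vel, dt=0.01):
    # Asymmetric term ~ -d(rate)/d(theta): a positive ang_vel pushes the bump forward.
    drive = w @ rate + ang_vel * (np.roll(rate, 1) - np.roll(rate, -1))
    return rate + dt * (-rate + np.tanh(np.maximum(drive, 0.0)))

for _ in range(500):                                    # constant turning signal
    rate = step(rate, ang_vel=2.0)
heading = np.angle(np.sum(rate * np.exp(1j * angles)))  # population-vector decode
print("decoded head direction (rad):", heading)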
From jlm at cnbc.cmu.edu Fri Aug 22 17:58:10 1997 From: jlm at cnbc.cmu.edu (Jay McClelland) Date: Fri, 22 Aug 1997 17:58:10 -0400 (EDT) Subject: Post Doc Opening: Network Solutions to Cognitive Tasks Message-ID: <199708222158.RAA23107@eagle.cnbc.cmu.edu> Post-Doctoral Opening: Computational Analysis of Neural Network Solutions to Cognitive Tasks I have three years of funding to support a post-doctoral fellow to study how task constraints and priors (both hard and soft constraints) jointly shape the representations that emerge in connectionist networks when they are applied to cognitive tasks in natural domains such as morphology, natural kind semantics, structure of language, and reading. I'm hoping to find someone familiar with the psychological, neuropsychological, and connectionist research in one or more of these areas who also possesses a firm understanding of the mathematics relevant to Bayesian/MDL formulations of network learning mechanisms. Interested applicants should send email containing a brief statement of interest, a CV, and the names, s-mail, and email addresses of three individuals who can be contacted for references (all ascii please!). Please also provide the same materials on paper plus copies of publications and preprints. Start date can be any time in the next 12 months. PLEASE INCLUDE THE SUBJECT LINE OF THIS MESSAGE IN THE SUBJECT OF YOUR ELECTRONIC REPLY. +-------------------------------------------------------------+ | James L. (Jay) McClelland | +-------------------------------------------------------------+ | Co-Director, Center for the Neural Basis of Cognition | | Professor of Psychology, Carnegie Mellon | | Adjunct Prof, Computer Science, Carnegie Mellon and | | Neuroscience, University of Pittsburgh | +-------------------------------------------------------------+ | jlm at cnbc.cmu.edu or | Room 115 | | mcclelland+ at cmu.edu | Mellon Institute | | 412-268-4000 (Voice) | 4400 Fifth Avenue | | 412-268-5060 (Fax) | Pittsburgh, PA 15213 | +-------------------------------------------------------------+ | Home page: http://www.cnbc.cmu.edu/people/mcclelland.html | +-------------------------------------------------------------+   From heiniw at challenge.dhp.nl Sat Aug 23 12:39:21 1997 From: heiniw at challenge.dhp.nl (Heini Withagen) Date: Sat, 23 Aug 1997 18:39:21 +0200 (MDT) Subject: PhD thesis on Analog Neural Hardware available Message-ID: <199708231639.SAA03661@challenge.dhp.nl> A non-text attachment was scrubbed... Name: not available Type: text Size: 2884 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f55a435e/attachment.ksh From tgd at CS.ORST.EDU Sun Aug 24 14:42:36 1997 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Sun, 24 Aug 1997 11:42:36 -0700 Subject: Hierarchical Reinforcement Learning (tech report) Message-ID: <199708241842.LAA07903@edison> The following technical report is available in gzipped postscript format from ftp://ftp.cs.orst.edu/pub/tgd/papers/tr-maxq.ps.gz Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition Thomas G. Dietterich Department of Computer Science Oregon State University Corvallis, OR 97331 Abstract This paper describes the MAXQ method for hierarchical reinforcement learning based on a hierarchical decomposition of the value function and derives conditions under which the MAXQ decomposition can represent the optimal value function. We show that for certain execution models, the MAXQ decomposition will produce better policies than Feudal Q learning.
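To make the idea of a hierarchical value-function decomposition concrete, here is a minimal Python sketch of the kind of recursive evaluation the abstract describes: the value of invoking a subtask is the value of the subtask itself plus a completion term for finishing the parent task. This is only an illustration, not code from the tech report; the class, table, and task names are invented, and only the greedy evaluation step Q(parent, s, child) = V(child, s) + C(parent, s, child) is shown.

class Subtask:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)    # empty for primitive actions
        self.V = {}                       # V[state]: expected reward of a primitive action
        self.C = {}                       # C[(child name, state)]: completion value under this parent

def q_value(parent, state, child):
    # Decomposed action value: Q(parent, s, child) = V(child, s) + C(parent, s, child)
    return value(child, state) + parent.C.get((child.name, state), 0.0)

def value(task, state):
    # Primitive tasks return a stored reward estimate; composite tasks take a greedy max.
    if not task.children:
        return task.V.get(state, 0.0)
    return max(q_value(task, state, child) for child in task.children)

# Tiny made-up example: a root task choosing between two primitive moves.
north, south = Subtask("north"), Subtask("south")
root = Subtask("root", [north, south])
north.V["s0"], south.V["s0"] = -1.0, -1.0
root.C[("north", "s0")], root.C[("south", "s0")] = 5.0, 2.0
print(value(root, "s0"))                  # -1.0 + 5.0 = 4.0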
From ataxr at IMAP1.ASU.EDU Fri Aug 22 22:07:31 1997 From: ataxr at IMAP1.ASU.EDU (Asim Roy) Date: Fri, 22 Aug 1997 22:07:31 -0400 (EDT) Subject: CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS? Message-ID: Dear Moderator, Please post this version of the note if you have not posted it already. I have reformatted it. My sincere apologies to those who get multiple copies. Asim Roy ---------------------------------- This note is to summarize the discussion that took place at ICNN'97 (International Conference on Neural Networks) in Houston in June on the topic of "Connectionist Learning: Is It Time to Reconsider the Foundations?" ICNN'97 was organized jointly by INNS (International Neural Network Society) and the IEEE Neural Network Council. The following persons were on the panel to discuss the questions being raised about classical connectionist learning: 1. Shunichi Amari 2. Eric Baum 3. Rolf Eckmiller 4. Lee Giles 5. Geoffrey Hinton 6. Teuvo Kohonen 7. Dan Levine 8. Jean Jacques Slotine 9. John Taylor 10. David Waltz 11. Paul Werbos 12. Nicolaos Karayiannis 13. Asim Roy Nick Karayiannis, General Chair of ICNN'97, moderated the panel discussion. Appendix 1 has the issues/questions being raised about classical connectionist learning. A general summary of the panel discussion as it relates to those questions is provided below. Appendix 2 provides a brief summary of what was said by individual panel members. In general, the individual panel members provided their own summaries. In some cases, they modified my draft of what they had said. This document took a while to prepare, given the fact that many of us go on vacation or to conferences during summer. A GENERAL SUMMARY OF THE PANEL DISCUSSION 1) On the issue of using memory for learning, many panel members strongly supported the idea and argued in its favor, saying that humans indeed store information in order to learn. Although there was no one who actually opposed the idea of using memory to learn, some still tend to believe that memoryless learning does indeed occur in certain situations, such as in the learning of motor skills. (I can argue very strongly that memory is indeed used in "every" learning situation, even to acquire motor skills! I plan to send out a memo on this shortly.) 2) On the question of global/local learning, many panelists agreed that global learning mechanisms are indeed used in the brain and pointed out the role of neuromodulators in transferring information to appropriate parts of the brain. Some others justified global mechanisms by saying that certain kinds of learning are only possible with "nonlocal" mechanisms. Again, although there was no one who vigorously opposed the idea of using global mechanisms to learn, some thought that some form of local learning may also be used by the brain in certain situations. 3) On the question of network design, several panelists argued that the brain must indeed know how to design networks in order to store/learn new knowledge and information. Some suggested that this design capability is derived from "experience" (as opposed to "inheritance" - David Waltz), while others mentioned "punishment/reward" mechanisms as its source (John Taylor) or implied it through the notion of "control of adaptivity" (Teuvo Kohonen).
Shunichi Amari emphasized the network design capability from a robotics point of view, while Eric Baum said that learning beyond inherited structures involves knowing how to design networks. Perhaps all of us agree that we do indeed inherit some network structures through evolution/inheritance. But I did not hear anybody argue that our algorithms should not include network design as one of its tasks. SOME PERSONAL REMARKS ON THIS DEBATE I have come out of this debate with deep respect for the field and for many of its highly distinguished and prominent scholars. It has never been an acrimonious debate. I think most of them had been very open minded in examining the facts and arguments against classical connectionist learning. I had vigorous arguments with some of them, but it was always friendly and very respectful. And I think we all had fun arguing about these things. I think it bodes well for the science. The culture of a scientific field depends very much on its topmost scholars. I couldn't be among a better set of scholars with higher levels of intellectual integrity. And to be honest, I was indeed pleasantly surprised when I was nominated for the INNS Governing Board membership by Prof. Shunichi Amari. After publicly challenging some of the core connectionist ideas, I was afraid that I was going to be a permanent outcast in this field. I hope that will not be true. I hope to be part of this field. I think the ICNN'97 debate was very significant and useful. First, it engaged some of the most distinguished scholars in the field. Second, there are some very significant statements from many of these scholars. Paul Werbos was the first to acknowledge that memory is indeed used for learning. I think that was an important first step in this debate. But then, there are many others. For example, Shunichi Amari's call for "a new type of mathematical theories of neural computation" is very significant indeed. And so is Teuvo Kohonen's acknowledgment of a "third level of 'control of synaptic plasticity' that was ignored in the past in connectionism." And note Dan Levine's statement that "it is indeed time to reconsider the foundations of connectionist learning," despite his emphasis that the work of the last thirty years should be built upon rather than discarded. And John Taylor's remark that "classical connectionism perhaps has too narrow a view of the brain" and that "connectionism should not be limited to traditional artificial neural networks." And Eric Baum's remarks on the brain being a multiagent system and on the limitations of classical connectionist in explaining this multiagent behavior. And Lee Giles' call for a "deeper foundation for intelligence processing." And David Waltz's story about the learning experiences of Marvin Minsky's dog will certainly be a classic. He helped to hammer in the point strongly that humans do indeed use memory to learn. WHAT DID THE DEBATE REALLY ACCOMPLISH? Overall, the debate has established the following: 1) It is no longer necessary for our learning algorithms to have local learning laws similar to the ones in back propagation or perceptron or Hopfield net. This will allow us to develop much more robust and powerful learning algorithms using means that may be "nonlocal" in nature. In other words, we should be free to develop new kinds of algorithms to design and train networks without the need to use a local learning law. 2) The learning algorithms can now have better access to information much as humans do. 
Humans actually have access to all kinds of information in order to learn. And they use memory to remember some of it so that they can use it in the thinking and learning process. The idea of "memoryless" learning in classical connectionist learning is unduly restrictive and completely unnatural. There is no biological or behavioral basis for it. So, our learning algorithms should now be allowed to store learning examples in order to learn and have access to other kinds of information. This will allow the algorithms to look at the information about a problem, understand its complexity and then design and train an appropriate net. All this can perhaps be summarized in one sentence: Overall, it fundamentally changes the nature of algorithms that we might call "brain-like." So Shunichi Amari's call for "a new type of mathematical theories of neural computation" couldn't have been more appropriate. In my opinion, the debate on connectionist learning does not end here - it is just the beginning. We should continue to ask critical questions and engage ourselves in vigorous debate. It doesn't make sense for a scientific field to work for years on building a theory that falls apart on first rigorous common sense examination. Many technological advances depend on this field. So we need to guard against these major pitfalls. Perhaps one of the existing newsgroups or a new one can accommodate such open debates, bringing together neuroscientists, cognitive scientists and connectionists. I don't think we can be isolated anymore. The Internet is helpful and allows us to communicate across disciplines on a worldwide basis. We should no longer be the lonely researcher with very restricted interactions. I was once a lonely researcher with many questions in my mind. So I went to Stanford University during my sabbatical and sat in David Rumelhart's and Bernie Widrow's classes to ask all kinds of questions. But there must be a better way to ask such outrageous questions. An important issue for the field is that of setting standards for our algorithms. It is imperative that we define some "external behavioral characteristics" for our so-called brain-like autonomous learning algorithms, whatever kind they may be. But I hope that this is at least a first step towards defining and developing a more rigorous science. We cannot continue to "babysit" our so-called learning algorithms. They need to be truly autonomous. With regards to all, Asim Roy Arizona State University ------------------------------------------------------------ APPENDIX 1 PANEL TITLE: "Connectionist Learning: Is it Time to Reconsider the Foundations?" ABSTRACT Classical connectionist learning is based on two key ideas. First, no training examples are to be stored by the learning algorithm in its memory (memoryless learning). It can use and perform whatever computations are needed on any particular training example, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. The second key idea is that of local learning - that the nodes of a network are autonomous learners. Local learning embodies the viewpoint that simple, autonomous learners, such as the single nodes of a network, can in fact produce complex behavior in a collective fashion. This second idea, in its purest form, implies a predefined net being provided to the algorithm for learning, such as in multilayer perceptrons. 
Recently, some questions have been raised about the validity of these classical ideas. The arguments against classical ideas are simple and compelling. For example, it is a common fact that humans do remember and recall information that is provided to them as part of learning. And the task of learning is considerably easier when one remembers relevant facts and information than when one doesn't. Second, strict local learning (e.g. basic back propagation type learning) is not a feasible idea for any system, biological or otherwise. It implies predefining a network "by the system" without having seen a single training example and without having any knowledge at all of the complexity of the problem. Again, there is no system that can do that in a meaningful way. The other fallacy of the local learning idea is that it acknowledges the existence of a "master" system that provides the design so that autonomous learners can learn. Recent work has shown that much better learning algorithms, in terms of computational properties (e.g. designing and training a network in polynomial time complexity, etc.), can be developed if we don't constrain them with the restrictions of classical learning. It is, therefore, perhaps time to reexamine the ideas of what we call "brain-like learning." This panel will attempt to address some of the following questions on classical connectionist learning: 1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? 2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? 3. Who designs the network inside an autonomous learning system such as the brain? --------------------------------------------------------- APPENDIX 2 BRIEF SUMMARY OF INDIVIDUAL REMARKS 1) DR. SHUNICHI AMARI: Dr. Amari focused mainly on the neural network design and modularity of learning. Classical connectionist learning has treated microscopic aspects of learning where local generalized Hebbian rule plays a fundamental role. However, each neuron works in a network so that learning signals may be synthesized in the network nonlocally. He also said that, based on microscopic local learning rules, more macroscopic structural learning emerges such that a number of experts differentiate to play different roles cooperatively. This is a basis for concept formation and symbolization of microscopic neural excitations. He stresses that we need a new type of mathematical theories of neural computation. --------------- Shun-ichi Amari is a Professor-Emeritus at the University of Tokyo and is now working as a director of Information Processing Group in RIKEN Frontier Research Program. He has worked on mathematical theories of neural networks for thirty years, and his current interest is, among others, applications of information geometry to manifolds of neural networks. He is the past president of the International Neural Network Society (INNS), a council member of Bernoulli Society for Mathematical Statistics and Probability, IEEE Fellow, a member of Scientists Council of Japan, and served as founding Coeditor-in-Chief of Neural Networks. He is recipient of Japan Academy Award, IEEE Emanuel R. Piore Award, IEEE Neural Networks Pioneer Award, and so on. ---------------------------------------------------------- 2) DR. ERIC BAUM: Dr. 
Baum remarked that a number of disciplines have independently reached a near consensus that the brain is a multiagent system that computes using interaction of modules large compared to neurons. These different disciplines offer different pictures of what the modules are and how they interact, and it is illuminating to compare these different insights. Evolutionary psychologists talk about modules evolving, as we have evolved different organs to perform different tasks. They have presented the Wason selection test, a psychophysical test of reasoning which seems to indicate that humans have a module specifically for reasoning about social interactions and cheating detection. Brain imaging presents a physical picture of modules interacting and will give great insight into the nature of how modules interact to compute. Stroke and other lesion victims give insights into deficits that can arise from damage to a single module. Lakoff and Johnson's observation that language is metaphorical can be viewed in modular terms. For example, time is money: you buy time, save time, invest your time wisely, live on borrowed time, etc. What is this but a manifestation of a module for valuable resource management that is applied in different contexts? Dr. Baum also remarked that evolution has built massive knowledge into us at birth. This knowledge guides our learning. Much of it is manifested in a detailed intermediate reward function (pleasure, pain) that guides us in reinforcement learning. There is copious evidence of built-in knowledge -- for example, consider the difference in personalities of a Labrador retriever and a sheepdog. Or consider experiments showing that monkeys, born in the lab without fear of snakes, can acquire fear of snakes from seeing a video of a monkey scared of snakes, yet they will not acquire a fear of flowers from seeing a video of a monkey recoiling from a flower (Mineka et al., Animal Learning and Behavior, 8:653, 1980). Thus learning is a two-phase process, learning during evolution followed by learning during life (and actually three-phase if you consider technology). Dr. Baum remarked that traditional neural theories do not seem to encompass this modular nature well. In his opinion, the critical question in managing the interaction of agents is ensuring that the individual agents all see the correct incentive. This, he feels, implies that the multiagent model is essentially an economic model, and he said that he is working in this direction. Other features of intelligence not well handled by standard connectionist approaches include metalearning and metacomputing. People are able to learn new concepts from a single example, which requires recursively applying one's knowledge to learning. Creatures need to be able to decide what to compute and when to stop computing and act, which again indicates a recursive nature of intelligence. It is not clear how to deal with these problems in a connectionist framework, but they seem natural within the context of multiagent economies. ------------- Eric Baum received B.A. and M.A. degrees in physics from Harvard University in 1978 and the Ph.D. in physics from Princeton University in 1982. He has since held positions at Berkeley, M.I.T., Caltech, J.P.L., and Princeton University and has for eight years now been a Senior Research Scientist in the Computer Science Division of the NEC Research Institute, Princeton N.J.
His primary research interests are in Cognition, Artificial Intelligence, Computational Learning Theory, and Neural Networks, but he has also been active in the nascent field of DNA Based Computers, co-chairing the first and chairing the second workshops on DNA Based Computers. His papers include: "Zero Cosmological Constant from Minimum Action", Physics Letters V 133B, p185 (1983) "What Size Net Gives Valid Generalization", with D. Haussler, Neural Computation v1 (1989) pp148-157 "Neural Net Algorithms that Learn in Polynomial Time from Examples and Queries", IEEE Transactions in Neural Networks, V2 No. 1 pp 5-19 (1991). "Best Play for Imperfect Players and Game Tree Search- Part 1 Theory" E. B. Baum and W. D. Smith, (submitted) "Where Genetic Algorithms Excel", E. B. Baum, D. Boneh, and C. Garrett, (submitted) "Toward a Model of Mind as a Laissez-Faire Economy of Idiots, Extended Abstract", Proceedings of the 13th International Conference on Machine Learning pp28-36, Morgan Kauffman (1996). ----------------------------------------------------- 3) DR. ROLF ECKMILLER: Dr. Eckmiller presented three theses about brain-like learning. First is the notion of factories or modular subsystems. Second, the neural networks belong to the geometrical or topological theory space and not in the algebraic or analytical theory space. Hence using notions of Von Neumann computing in our analysis might be equivalent to "barking up the wrong tree." Third, he called upon the research community to develop a new wave of neural computers - ones that can adapt weights and time delays, build new layers and structures, and build and integrate connections between various parts of the brain. He said that "biological systems are amathematical" and therefore needs new mathematical tools for analysis. ------------- Rolf Eckmiller was born in Berlin, Germany, in 1942. He received his M.Eng. and Dr. Eng. (with honors) degrees in electrical engineering from the Technical University of Berlin, in 1967 and 1971, respectively. Between 1967 and 1978, he worked in the fields of neurophysiology and neural net research at the Free University of Berlin, and received the habilitation for sensory and neurophysiology in 1976. From 1972 to 1973 and from 1977 to 1978, he was a visiting scientist at UC Berkeley and the Smith-Kettlewell Eye Research Foundation in San Francisco. From 1979 to 1992, he was professor at the University of Düsseldorf. Since 1992, he has been professor and head of the Division of Neuroinformatics, Department of Computer Science at the University of Bonn. His research interests include vision, eye movements in primates, neural nets for motor control in intelligent robots, and neurotechnology with emphasis on retina implants. ---------------------------------------------------------- 4) DR. LEE GILES: Dr. Giles opened his discussion by stating that the connectionist field has always been a very self-critical one that has always been receptive to new ideas. Furthermore, the topics proposed here have been discussed to some extent in the past but are important ones and certainly worth reevaluation. As an example, the idea of using memory in learning was one of the earliest ideas in neural networks and was proposed in the seminal 1943 paper of McCulloch and Pitts. Today, memory structures are used extensively in neural models concerned with temporal and sequence anaylsis. For example recurrent neural networks have successfully been used for such problems as time series prediction, signal processing, and control. 
In discrete-time recurrent networks, memory structure and usage are very important to both the network's performance and its computational power. Dr. Giles then stated that there is still a great deal of discrepancy between what our current models can do in theory and what they can do in practice, and that a deeper foundation for intelligence processing needs to be established. One approach is to look at hybrid systems, models that combine many different learning and intelligence paradigms - neural networks, AI, etc. - and develop the foundations of intelligent systems by exploring hybrid system fundamentals. As an example of what intelligent systems can't do but should be able to, Dr. Giles showed examples of a pattern classification problem taken from the book "Pattern Recognition" by M. Bongard. Here one sees six exemplar pictures from one class and six from another. The pattern classification task is to extract the rule(s) that differentiate one class from the other; a problem that humans can solve but that no machine currently seems close to solving. Bongard constructed 100 of these problems. Not only is learning involved, but so are reasoning and explanation. This problem by Bongard is an example of the types of problems we should be trying to solve, and the questions raised in solving it will give us insights into constructing and understanding intelligent systems. Reference: M. Bongard, "Pattern Recognition", Spartan Books, 1970. ---------------- C. Lee Giles is a Senior Research Scientist in Computer Science at NEC Research Institute, Princeton, NJ and Adjunct Faculty at the University of Maryland Institute for Advanced Computer Studies, College Park, Md. His current research interests are: novel applications of neural networks, machine learning and AI in the WWW, communications, computing and computers, multi-media, adaptive control, system identification, language processing, time series and finance; and dynamically-driven recurrent neural networks - their computational and processing capabilities and relationships to other adaptive, learning and intelligent paradigms. Dr. Giles was one of the founding members of the Governors Board of the International Neural Network Society and is a member of the IEEE Neural Networks Council Technical Committee. He has served or is currently serving on the editorial boards of IEEE Transactions on Neural Networks, IEEE Transactions on Knowledge and Data Engineering, Journal of Computational Intelligence in Finance, Journal of Parallel and Distributed Computing, Neural Networks, Neural Computation, Optical Computing and Processing, Applied Optics, and Academic Press. Dr. Giles is a Fellow of the IEEE, a member of AAAI, ACM, INNS, the OSA, and DIMACS - Rutgers University Center for Discrete Mathematics and Theoretical Computer Science. Previously, he was a Program Manager at the Air Force Office of Scientific Research in Washington, D.C., where he initiated and managed basic research programs in Neural Networks and in Optics in Computing and Processing. ----------------------------------------------------------- 5) DR. GEOFFREY HINTON: Dr. Hinton started by pointing out the weaknesses of the back-propagation algorithm in learning and in certain pattern recognition tasks. He then focused on the good properties of Bayesian networks and showed how well the Bayesian networks do on a certain pattern recognition task.
He believes that prescriptions from well-known researchers about necessary conditions on biologically realistic learning algorithms are of some sociological interest but are unlikely to lead to radically new ideas. -------- Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He is currently a fellow of the Canadian Institute for Advanced Research and professor of Computer Science and Psychology at the University of Toronto. He does research on ways of using neural networks for learning, memory, perception and symbol processing and has over 100 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that is now widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, and Helmholtz machines. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input. He serves on the editorial boards of the journals Artificial Intelligence, Neural Computation, and Cognitive Science. He is a fellow of the Royal Society of Canada and of the American Association for Artificial Intelligence and a former President of the Cognitive Science Society. --------------------------------------------------------- 6) DR. TEUVO KOHONEN: Dr. Kohonen felt that perhaps we should go back to the basics to answer some of the questions being raised about connectionist learning, especially concerning the right forms of transfer functions and learning laws. He then talked about three levels of neural functions. At the lowest level, he mentioned the idea of activation and inhibition as coming from the old views held in medical science. At the next level, the links between neurons get modified and change over time. This view was introduced to neural science by theorists. He then mentioned that many earlier and recent neurobiological findings reveal that there is a third level of control in the brain that controls the adaptivity of networks, thereby implying certain "nonlocal" brain mechanisms and their role in designing and training networks. He called this third level "control of synaptic plasticity," something that was ignored in the past in connectionism. He jokingly mentioned that his controversial views had developed along different lines over a long time since he comes "from another planet" (Finland, that is). The audience laughed and applauded him heartily. ------------ Teuvo Kohonen, Dr. Eng., Professor of the Academy of Finland, head of the Neural Networks Research Centre, Helsinki University of Technology, Finland. His research areas are associative memories, neural networks, and pattern recognition, in which he has published over 200 research papers and four monographs. His fifth book is on digital computers. Since the 1960s, Professor Kohonen has introduced several new concepts to neural computing: fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, the self-organizing feature maps, the learning vector quantization, and novel algorithms for symbol processing like the redundant hash addressing and dynamically expanding context. The best-known application of his work is the neural speech recognition system. Prof. Kohonen has also done design work for the electronics industry.
He is recipient of the Honorary Prize of Emil Aaltonen Foundation in 1983, the Cultural Prize of the Finnish Commercial Television (MTV) in 1984, the IEEE Neural Networks Council Pioneer Award in 1991, the International Neural Network Society Lifetime Achievement Award in 1992, Prize of the Finnish Cultural Foundation in 1994, Technical Achievement Award of the IEEE Signal Processing Society in 1995, Centennial Prize of the Finnish Association of Graduate Engineers in 1996, King-Sun Fu Prize in 1996, and others. He is Honorary Doctor of the University of York in U.K. and Abo Akademi in Finland, member of Academia Scientiarum et Artium Europaea, titular member of the Academie Europeenne des Sciences, des Arts et des Lettres, member of the Finnish Academy of Sciences and the Finnish Academy of Engineering Sciences, IEEE Fellow, and Honorary Member of the Pattern Recognition Society of Finland as well as the Finnish Society for Medical Physics and Medical Engineering. He was elected the First Vice President of the International Association for Pattern Recognition for the period 1982 - 84, and acted as the first President of the European Neural Network Society during 1991 - 92. -------------------------------------------------------- 7) DR. DANIEL LEVINE: Dr. Levine agreed that it was indeed time to reconsider the foundations of connectionist learning. He mentioned that he had been eager to defend classical connectionist ideas, but then changed his mind because of some his recent work on analogy formation and because of work in neuroscience on the role of neuromodulators and neurotransmitters. He was of the view that there has to be some "nonlocal" learning mechanisms at work, particularly because learning of analogies requires that we not only learn to associate two concepts as in traditional Hebbian learning, but learn the nature of the association. (Example: simply associating Houston to Texas isn't enough to tell us that Houston is "in" Texas.) Such nonlocal processes may, he added, provide more efficient mechanisms for property inheritance and property transfers. But Dr. Levine said that reconsidering the foundations of connectionism does not mean throwing out all existing work but building on it. Specifically, connectionist principles such as associative learning, competition, and resonance, that have been used in models of pattern recognition and classical conditioning can also be used in different combinations as building blocks in connectionist models of more complex cognitive processes. In these more complex networks, neuromodulation (via a transmitter "broadcast" from a distant source node) is likely to play an important role in selectively amplifying particular subprocesses based on context signals. -------------- DANIEL LEVINE is Professor of Psychology at the University of Texas at Arlington. Dr. Levine holds a Ph.D. in Applied Mathematics from the Massachusetts Institute of Technology and was a Postdoctoral Trainee in Physiology at the University of California at Los Angeles School of Medicine. His main recent area of research has been neural network models for the involvement of the frontal lobes in high-level cognitive tasks and in brain executive function, including their possible connections with the limbic system and basal ganglia. He has also recently published a network model of the effects of context on preference in multiattribute decision-making. 
Other areas in which he has published include models of attentional effects in Pavlovian conditioning, dynamics of nonlinear attractor networks, and models of visual illusions. Dr. Levine is author of the textbook, "Introduction to Neural and Cognitive Modeling," and senior editor of three books that have arisen out of conferences sponsored by the Dallas-Fort Worth-based Metroplex Institute for Neural Dynamics (M.I.N.D.). He has been on the editorial board of Neural Networks since 1988, serving as Book Review Editor from 1988 to 1995 and Newsletter editor from 1995 to the present. He has been a member of the INNS Board of Governors since 1995 and is a current candidate for President-Elect of INNS. He is a Program Co-Chair for the International Joint Conference on Neural Networks in 1997, sponsored by IEEE and INNS. -------------------------------------------------------- 8) DR. JEAN JACQUES SLOTINE: The issue of learning on an as-needed basis may not have yet received enough attention. Consider for example a robot manipulator, initially at rest under gravity forces, whose desired task is to just stay there; no control needs to be applied and no adaptation needs to occur, and this is indeed what a good adaptive controller, whether model-based, parametrized, or "neural," will do -- actually, doing anything else, e.g. moving so as to acquire parameter information, would distract it from its task. Conversely, if the robot is required to follow a desired trajectory so complicated that exact trajectory tracking necessarily requires exact learning of the robot dynamics, then the guaranteed tracking convergence of the same adaptive algorithm will automatically guarantee such learning. While these issues are now well understood in a feedback control context, they may be of interest in a more general setting, since learning often seems to be equated with learning a whole system model, rather than with faster, simpler, purely goal-directed learning. The issue of transmission or computing delays, and the constraints they impose on stable learning, also seems to deserve increased attention. ------------ Jean-Jacques Slotine was born in Paris in 1959, and received his Ph.D. from the Massachusetts Institute of Technology in 1983. After working at Bell Labs in the computer research department, in 1984 he joined the faculty at MIT, where he is now Professor of Mechanical Engineering and Information Sciences, Professor of Brain and Cognitive Sciences, and Director of the Nonlinear Systems Laboratory. He is the co-author of the textbooks "Robot Analysis and Control" (Wiley, 1986) and "Applied Nonlinear Control" (Prentice-Hall, 1991). ----------------------------------------------------------- 9) DR. JOHN TAYLOR: Dr. Taylor said that having worked at the Brain Institute in Germany for the last year, he now has a new and different view of connectionism. He said that classical connectionism perhaps has too narrow a view of the brain. He then mentioned that the brain has a modular structure with three basic regions (nonconscious regions, conscious regions, and regions for reasoning, decision-making and so on). According to him, the following are some of the important characteristics of the brain: 1) the use of time in discrete chunks, or packets, so that there are three regimes: one at a few tens of milliseconds, one on the order of seconds and the third at about a minute. The first of these is involved in sensory processing, the second in higher order processing and the third in frontal 'reasoning'.
The source of these longer times is as yet unknown but is very important. 2) the effects of neuromodulation from the punishment/reward system, which provides a global signal, 3) the distribution or break-down of complex tasks into sub-tasks which are themselves performed by smaller numbers of modules - the principle of divide and conquer! It is these networks which are now being uncovered by brain imaging; how they function in detail will be the next big task in neuroscience for the next century. 4) the use of a whole battery of neurotransmitters both for ongoing transmission of information and for learning changes brought about locally or globally. He emphasized that memory is indeed used in learning and that in addition to memory at the higher level (long term memory), there is working memory and memory in the time delays. With regard to the issue of global/local learning, he mentioned that neuromodulation possibly plays a role in passing global signals. As to the question of network design, he said the networks are designed by the brain and is a function of punishments and rewards coming from the environment. In closing, he articulated the view that connectionism should not be limited to traditional artificial neural networks, but must include new knowledge being discovered in computational neuroscience. -------------- John G. Taylor has been involved in Neural Networks since 1969, when he developed analysis of synaptic noise in neural transmission, which has more recently been turned into a neural chip (the pRAM) with on-chip learning. He is interested in a broad range of neural network questions, from theory of learning and the use of dynamical systems theory and spin glasses to cognitive understanding up to consciousness. He is presently Director of the Centre for Neural Networks, King's College London and a Guest Scientist at the Research Centre Juelich, where he is involved in developing new tools for analysing brain imaging data and performing experiments to detect the emergence of consciousness. He has published over 400 scientific papers in all, as well as over a dozen books and edited as many again. He was INNS President in 1995 and is currently European Editor-in-Chief of the journal 'Neural Networks', a Governor of INNS and a Vice-President of the European Neural Network Society. --------------------------------------------------------- 10) DR. DAVID WALTZ: Dr. Waltz articulated the viewpoint that brains indeed use memory to learn. He said that we do remember important experiences in life and then told a story about Marvin Minsky's dog and use of memory to learn (true story). Minsky's dog had formed the habit of chasing cars and biting their tires during the regular walks. One day, while trying to do this, she slipped the leash and got run over and injured by a car at a certain street corner. From then on, she was extremely reluctant to go near that particular street corner where the accident occurred, but continued to chase cars whenever possible (vivid memories and a wrong learning experience). While people (and animals) can generally learn better than this, vivid memories are probably shared by - and important to - all higher organisms. Dr. Waltz also emphasized the non-minimal nature of the brain in the sense that it tries to remember a lot of things in order to learn. For example, imagine that an intelligent system encounters a situation that leads to a very negative outcome, and then later encounters a similar situation that has a positive or neutral outcome. 
It is important that enough features of the original situation be remembered, so that the system can distinguish these situations in the future, and act accordingly. If the initial situation is not remembered, but has just been used to make synaptic weight changes, then the system will have no way to find features that could distinguish these cases in the future. So, with regard to the basic questions, he agreed that we do indeed use memory to learn in many cases, though not in every case (e.g. motor skills). On the network design issue, he said that some networks have been designed through evolution, but that other networks are indeed designed by the brain through "experience." On global/local learning, he speculated that perhaps both kinds exist. --------------- David Waltz is Vice President, Computer Science Research at the NEC Research Institute in Princeton, NJ, and an Adjunct Professor at Brandeis University. From 1984-93, he was Director of Advanced Information Systems at Thinking Machines Corporation and Professor of Computer Science at Brandeis. From 1974-83 he was a Professor of Electical and Computer Engineering at the University of Illinois at Urbana-Champaign. Dr. Waltz received SB, SM, and Ph.D. degrees from MIT, in 1965, 1968, and 1972 respectively. His research interests have included constraint propogation, massively parallel systems for relational and text databases, memory-based reasoning systems, protein structure prediction using hybrid neural net and memory-based methods, connectionist models for natural language processing, and natural language processing. He is President of the American Association of Artificial Intelligence and was elected a fellow of AAAI in 1990. He has served as President of ACM SIGART, Executive Editor of Cognitive Science, AI Editor for Communications of the ACM. He is a senior member of IEEE. --------------------------------------------------------- 11) DR. PAUL WERBOS: Dr. Werbos agreed strongly with the importance of memory-based learning. He argued that new neural network designs, using memory-based approaches, could help to solve the classical dilemma of learning speed versus generalization ability, which has plagued many practical applications of neural networks. He referred back to his idea of "syncretism," expressed in chapter 3 of the Handbook of Intelligent Control and in his paper on supervised learning in WCNN93 (and Roychowdhury's book). He believes that such mechanisms are essential to explaining certain capabilities in the neocortex, reflected in psychology. However, he does not argue that such mechanisms are present in ALL parts of the brain; for example, slower learning, based on simpler circuitry, does seem to occur in motor systems like the cerebellum. Higher motor systems, such as the neocortex/basal-ganglia/thalamus loops, clearly include memory-based learning; however, Houk, Ito and others have clearly shown that some degree of real-time weight-based learning does exist in the cerebellum. Recent experiments have hinted that even the cerebellum might be trained in part based on a replay of memories initially stored in the cerebral cortex; however, there are reasons to withhold judgment about that idea at the present time. Regarding local learning, he stressed that some of the broader discussions tend to mix up several different notions of "locality," each of which needs to be evaluated separately. 
Massively parallel distributed processing still remains a fundamental design principle, both of biological and practical importance. This in no way rules out the presence of some global signals such as "clocks," which are crucial to many designs, and which Llinas and others have found to be pervasive in the brain. Likewise, it does not rule out subsystems for "growing" and "pruning" connections, which are already well established in the connectionist literature (discussed in chapter 10 of the Handbook of Intelligent Control, and in many other places). Regarding the role of learning versus evolution, he does not see the same kind of "either-or" choice that many people assume. His views are expressed in detail in a new paper to appear in Pribram's new book on Values from Erlbaum, 1997, reflecting the new "decision block" or "three brain" design discussed at this conference. -------- Dr. Paul J. Werbos holds 4 degrees from Harvard University and the London School of Economics, covering economics, mathematical physics, decision and control, and the backpropagation algorithm. His 1974 PhD thesis presented the true backpropagation algorithm for the first time, permitting the efficient calculation of derivatives and adaptation of all kinds of nonlinear sparse structures, including neural networks; it has been reprinted in its entirety in his book, The Roots of Backpropagation, Wiley, 1994, along with several related seminal and tutorial papers. In these and other more recent papers, he has described how backpropagation may be incorporated into new intelligent control designs with extensive parallels to the structure of the human brain. See the hot links on www.nsf.gov/eng/ecs/enginsys.htm Dr. Werbos runs the Neuroengineering program and the SBIR Next Generation Vehicle program at the National Science Foundation. He is Past President of the International Neural Network Society (INNS), and is currently on the governing boards both of INNS and of the IEEE Society for Systems, Man and Cybernetics. Prior to NSF, he worked at the University of Maryland and the U.S. Department of Energy. He was born in 1947 near Philadelphia, Pennsylvania, has three children, and attends Quaker meetings. His publications range from neural networks through to quantum foundations, energy economics, and issues of consciousness. ---------------------------------------------------   From herbert.jaeger at gmd.de Mon Aug 25 09:10:49 1997 From: herbert.jaeger at gmd.de (Herbert Jaeger) Date: Mon, 25 Aug 1997 15:10:49 +0200 Subject: Beyond hidden Markov models: new techreports Message-ID: <34018456.7DFE@gmd.de> BEYOND HIDDEN MARKOV MODELS Two technical reports on stochastic time series modeling available ABSTRACT. Hidden Markov models (HMMs) provide widely used techniques for analysing discrete stochastic sequences. HMMs are induced from empirical data by gradient descent methods, which are computationally expensive, involve heuristic pre-estimation of model structure, and can get trapped in local optima. Furthermore, HMMs are mathematically not well understood. In particular, model equivalence cannot be characterised. A new class of stochastic models, "observable operator models" (OOMs), presents an advance over HMMs in the following respects: - OOMs are more general than HMMs, i.e. processes modeled by OOMs are a proper superclass of those modeled by HMMs. - Equivalence of OOMs can be characterized algebraically. - A *constructive* algorithm allows OOMs to be reconstructed from empirical time series. This algorithm is extremely fast and transparent (boiling down essentially to a single matrix inversion). - OOMs reveal fundamental connections of stochastic processes with information theory and dynamical systems theory. The basic mathematical theory of OOMs, and their relation to HMMs, is described in: Herbert Jaeger: Observable Operator Models and Conditioned Continuation Representations. Arbeitspapiere der GMD 1043, GMD, St. Augustin 1997 (38 pp). The induction algorithm, and a standardized graphical representation of OOM-generated processes, is described in: Herbert Jaeger: Observable Operator Models II: Interpretable models and model induction. Arbeitspapiere der GMD 1083, GMD, St. Augustin 1997 (33 pp) Both papers can be fetched electronically from the author's webpage (see below) or directly from the following ftp-site: ftp://ftp.gmd.de/GMD/ai-research/Publications/1997/ (files jaeger.97.{oom,oom2}.{ps.gz,pdf}) ---------------------------------------------------------------- Dr. Herbert Jaeger Phone +49-2241-14-2253 German National Research Center Fax +49-2241-14-2384 for Information Technology (GMD) email herbert.jaeger at gmd.de FIT.KI Schloss Birlinghoven D-53754 Sankt Augustin, Germany http://www.gmd.de/People/Herbert.Jaeger/ ----------------------------------------------------------------
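For readers unfamiliar with OOMs, the algebraic core of the model is easy to state: each symbol of the alphabet has an associated operator matrix, and the probability of a sequence is obtained by applying the operators in order to a starting vector and summing the components of the result. The Python sketch below illustrates only this basic idea; it is not code from the reports, and the matrices and starting vector are invented (chosen so that the sum of the operators has unit column sums, which is what makes the probabilities normalize).

import numpy as np

tau = {                                   # one observable operator per symbol
    "a": np.array([[0.4, 0.1],
                   [0.1, 0.2]]),
    "b": np.array([[0.2, 0.3],
                   [0.3, 0.4]]),
}
w0 = np.array([0.5, 0.5])                 # starting vector, components sum to 1

def sequence_probability(symbols):
    # P(s1 ... sn) = 1^T tau[sn] ... tau[s1] w0
    w = w0
    for s in symbols:
        w = tau[s] @ w
    return float(w.sum())

print(sequence_probability(["a", "b", "a"]))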
From gluck at pavlov.rutgers.edu Mon Aug 25 14:14:39 1997 From: gluck at pavlov.rutgers.edu (Mark A. Gluck) Date: Mon, 25 Aug 1997 10:14:39 -0800 Subject: RA or Postdoc Position in Computational Neuroscience at Rutgers-Newark Message-ID: We are looking to hire a good new person in my lab to work with us on the computational modelling of the memory circuits in hippocampus, cortex, and basal forebrain, etc., which are involved in both animal and human learning. The candidate should be a person well-trained in basic neural-net modelling and theory, who can program NN's like a wiz. Their neuroscience training is not key, as we can train someone in the biology and behavior. In fact, this could be a good opportunity for someone who is well trained with a computer science or EE background in NN's, but wants to explore and train for a career move into more biologically and behaviorally oriented modelling. We also have an active experimental animal lab as well as extensive neuropsychological studies of human memory, which can also provide important training opportunities in the experimental and empirical aspect of computational neuroscience. Depending on the applicant's background, we can hire a full-time RA/programmer who has an undergraduate background in NN's -- perhaps someone who is considering graduate school in a year or two. Alternatively, a postdoctoral position could be found for someone with a PhD in EE or CS with strong NN programming experience, who would be interested in a training situation that would prepare them to work in computational neuroscience, with an emphasis on empirically-constrained models and theories of brain function. More information on our lab and research can be found on the web page noted below. Anyone interested should email me with a cover letter stating his or her background and career goals. Rutgers-Newark has a large and growing group of connectionist modellers working in cognitive neuroscience and computational neuroscience. In addition to myself, this group includes Catherine Myers, Stephen Hanson, Mike Casey, Ralph Siegel, Michael Recce, and Ben Martin-Bly. - Mark Gluck _____________________________________________________________ Dr.
Mark A. Gluck, Associate Professor Center for Molecular & Behavioral Neuroscience Rutgers University 197 University Ave. Newark, New Jersey 07102 Phone: (973) 353-1080 (Ext. 3221) Fax: (973) 353-1272 Cellular: (917) 855-8906 Email: gluck at pavlov.rutgers.edu WWW Homepage: www.gluck.edu _____________________________________________________________   From karaali at ukraine.corp.mot.com Wed Aug 27 14:31:25 1997 From: karaali at ukraine.corp.mot.com (Orhan Karaali) Date: Wed, 27 Aug 1997 13:31:25 -0500 Subject: Speech Synthesis Position Message-ID: <199708271831.NAA10591@fiji.mot.com> MOTOROLA is a leading provider of wireless communications, semiconductors, and advanced electronic systems, components, and services. Motorola's Chicago Corporate Research Laboratory in Schaumburg, IL, is seeking a researcher to join its Speech Synthesis and Machine Learning Group. The Speech Synthesis and Machine Learning Group at Motorola has developed innovative neural network and signal processing technologies for speech synthesis and speech recognition applications. Position Description: The duties of the position include applied research as well as software design and development. Innovation in research, application of technology, and a high level of motivation are the standard for all members of the team. The ability to work within a group to quickly implement and evaluate algorithms in a rapid research/development cycle is essential. Candidate's Qualifications: * M.S. or Ph.D. in EE, CS or a related discipline * Strong programming skills in the C++ language and knowledge of object-oriented programming techniques * Good written and oral communication skills * Expertise in at least one of the four fields listed below: * Computational and corpus linguistics, text processing, SGML. * Neural networks, genetic algorithms, decision trees. * Signal and speech processing, speech coders, speech production. * WIN32 and graphics (Direct3D and OpenGL) programming. We offer an excellent salary and benefits package. For consideration, please send or fax your resume to: Motorola Corporate, Dept. T7100 1303 E. Algonquin Rd. Schaumburg, IL 60196 Fax: (847) 538-4688 Motorola is an Equal Employment Opportunity / Affirmative Action employer. We welcome and encourage diversity in our workforce. Proof of identity and eligibility to be employed in the United States is required. MOTOROLA What you never thought possible(TM)   From bert at mbfys.kun.nl Thu Aug 28 04:39:22 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Thu, 28 Aug 1997 10:39:22 +0200 Subject: Boltzmann Machine learning using mean field theory ... Message-ID: <199708280839.KAA27865@bertus> Dear Connectionists, The following article, "Boltzmann Machine learning using mean field theory and linear response correction," written by Hilbert Kappen and Paco Rodrigues, will appear in the proceedings of NIPS 1997 (ed. Michael Kearns) and can now be downloaded from ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NIPS.ps.Z Abstract: The learning process in Boltzmann Machines is computationally intractable. We present a new approximate learning algorithm for Boltzmann Machines, which is based on mean field theory and the linear response theorem. The computational complexity of the algorithm is cubic in the number of neurons. In the absence of hidden units, we show how the weights can be directly computed from the fixed point equation of the learning rules. We show that the solution of this method is close to optimal. Yours sincerely, Hilbert Kappen FTP INSTRUCTIONS unix% ftp ftp.mbfys.kun.nl Name: anonymous Password: (use your e-mail address) ftp> cd snn/pub/reports/ ftp> binary ftp> get Kappen.LR_NIPS.ps.Z ftp> bye unix% uncompress Kappen.LR_NIPS.ps.Z unix% lpr Kappen.LR_NIPS.ps
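To give a flavour of what the abstract above describes, here is a rough numerical sketch of the mean-field / linear-response idea: magnetizations are found by iterating the mean-field fixed-point equation, pairwise correlations are approximated by inverting the linear-response matrix, and the weights are then nudged toward the clamped statistics. It is not the authors' code; the network size, weights, and "clamped" statistics below are all made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 5
w = rng.normal(scale=0.3, size=(n, n))
w = (w + w.T) / 2.0
np.fill_diagonal(w, 0.0)
theta = rng.normal(scale=0.1, size=n)

# Mean-field fixed point: m_i = tanh(sum_j w_ij m_j + theta_i)
m = np.zeros(n)
for _ in range(200):
    m = np.tanh(w @ m + theta)

# Linear response: (chi^-1)_ij = delta_ij / (1 - m_i^2) - w_ij,
# and <s_i s_j> - m_i m_j is approximated by chi_ij.
chi = np.linalg.inv(np.diag(1.0 / (1.0 - m**2)) - w)
free_corr = chi + np.outer(m, m)

# One hypothetical learning step toward made-up clamped statistics.
clamped_corr = 0.1 * np.ones((n, n)) + 0.9 * np.eye(n)
clamped_mean = 0.2 * np.ones(n)
eta = 0.05
w = w + eta * (clamped_corr - free_corr)
np.fill_diagonal(w, 0.0)
theta = theta + eta * (clamped_mean - m)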
The paper can now be downloaded from ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NIPS.ps.Z Yours sincerely, Hilbert Kappen FTP INSTRUCTIONS unix% ftp ftp.mbfys.kun.nl Name: anonymous Password: (use your e-mail address) ftp> cd snn/pub/reports/ ftp> binary ftp> get Kappen.LR_NIPS.ps.Z ftp> bye unix% uncompress Kappen.LR_NIPS.ps.Z unix% lpr Kappen.LR_NIPS.ps   From dnoelle at cs.ucsd.edu Thu Aug 28 20:34:45 1997 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Thu, 28 Aug 1997 17:34:45 -0700 (PDT) Subject: TR on Attractor Networks Message-ID: <199708290034.RAA08151@hilbert.ucsd.edu> The following technical report is now available via both the World Wide Web and anonymous FTP: http://www.cse.ucsd.edu/users/dnoelle/publications/tr-s97/ ftp://ftp.cs.ucsd.edu:/pub/dnoelle/tr-s97.ps.Z Note that the web version includes a link to the PostScript version at the bottom of the page. Extreme Attraction: The Benefits of Corner Attractors ------------------------------------------------------ by David C. Noelle, Garrison W. Cottrell, and Fred R. Wilms Technical Report CS97-536 Department of Computer Science & Engineering University of California, San Diego Connectionist attractor networks have played a central role in many cognitive models involving associative memory and soft constraint satisfaction. While early attractor networks used step activation functions, permitting the construction of attractors for only binary (or bipolar) patterns, much recent work has focused on networks with continuous sigmoidal activation functions. The incorporation of sigmoidal processing elements allows for the use of expressive real vector representations in attractor networks. The empirical studies reported here, however, reveal that the learning performance of sigmoidal attractor networks is best when such general real vectors are avoided -- when training patterns are explicitly placed in the extreme corners of the network's activation space. Using binary (or bipolar) patterns produces benefits in the number of attractors learnable by a network, in the accuracy of the learned attractors, and in the amount of training required. These benefits persist under conditions of sparse patterns. Furthermore, these experiments show that the advantages of extreme-valued patterns are not solely effects of the large separation between training patterns afforded by corner attractors. Thank you for your consideration. -- David Noelle ----- Department of Computer Science & Engineering -- --------------------- Department of Cognitive Science --------------- --------------------- University of California, San Diego ----------- -- noelle at ucsd.edu -- http://www.cse.ucsd.edu/users/dnoelle/ --------   From ritter at psychology.nottingham.ac.uk Fri Aug 29 08:10:26 1997 From: ritter at psychology.nottingham.ac.uk (ritter@psychology.nottingham.ac.uk) Date: Fri, 29 Aug 1997 13:10:26 +0100 Subject: ECCM 98 - 1st Announcement Message-ID: <199708291210.NAA17518@vpsyc.psychology.nottingham.ac.uk> ------------------------------------------------------------------------- First announcement SECOND EUROPEAN CONFERENCE ON COGNITIVE MODELLING (ECCM-98) Nottingham, England, April 1-4 1998 ------------------------------------------------------------------------- GENERAL INFORMATION: The 2nd European Conference on Cognitive Modelling (ECCM-98) will be held in Nottingham, England, from April 1st to 4th 1998 (starting with a day of optional tutorials).
The conference will cover all areas of cognitive modelling, including symbolic and connectionist models, evolutionary computation, artificial neural networks, grammatical inference, reinforcement learning, and data sets designed to test models. Papers that present a running model and its comparison with data are particularly encouraged. This meeting is open for work on cognitive modelling using general architectures (such as Soar and ACT) as well as other kinds of simulation models. These meetings were introduced to establish interdisciplinary co-operation in the domain of cognitive modeling. The first meeting held in Berlin in November 1996 attracted about 60 researchers from Europe and USA working in the fields of artificial intelligence, cognitive psychology, computer linguistics and philosophy of mind. Program: The program will include presentations of papers, demo sessions, invited talks, discussion groups and tutorials on cognitive modeling in the fields of AI programming, classification, problem solving, reasoning, inference, learning, language processing and human-computer-interaction. As Nottingham is an area of 'high touristic value', it will also include a social evening out. Further details are available from: http://www.psychology.nottingham.ac.uk/staff/ritter/eccm98/ PROGRAM CHAIRS Richard Young (U. of Hertfordshire) and Frank Ritter (U. of Nottingham) LOCAL CHAIR: Frank Ritter (U. of Nottingham) IMPORTANT DATES: Submission deadline: 7 January 1998 Decision by: 6 February 1998 Early registration: 9 March 1998 Conference: 1-4 April 1998 A call for papers will be issued soon. -------------------------------------------------------------------   From smyth at sifnos.ics.uci.edu Fri Aug 29 19:32:44 1997 From: smyth at sifnos.ics.uci.edu (Padhraic Smyth) Date: Fri, 29 Aug 1997 16:32:44 -0700 Subject: TR available on stacked density estimation Message-ID: <9708291634.aa12018@paris.ics.uci.edu> FTP-host: ftp.ics.uci.edu FTP-filename: /pub/smyth/papers/stacking.ps.gz The following paper is now available online at: ftp://ftp.ics.uci.edu/pub/smyth/papers/stacking.ps.gz title: STACKED DENSITY ESTIMATION authors: Padhraic Smyth (UCI/JPL) and David Wolpert (NASA Ames) Abstract: In this paper, the technique of stacking, previously only used for supervised learning, is applied to unsupervised learning. Specifically, it is used for non-parametric multivariate density estimation, to combine finite mixture model and kernel density estimators. Experimental results on both simulated data and real world data sets clearly demonstrate that stacked density estimation outperforms other strategies such as choosing the single best model based on cross-validation, combining with uniform weights, and even the single best model chosen by ``cheating" by looking at the data used for independent testing. (This paper will also appear at NIPS97)   From jose at tractatus.rutgers.edu Fri Aug 29 15:30:43 1997 From: jose at tractatus.rutgers.edu (Stephen J.Hanson) Date: Fri, 29 Aug 1997 15:30:43 -0400 Subject: Rutgers Newark Psycholgy--Cognitive Scientist Message-ID: <34072363.F26EDA90@tractatus.rutgers.edu> COGNITIVE SCIENTIST Rutgers University-Newark Campus: The Department of Psychology anticipates making one tenure-track appointment in Cognitive Science at the Assistant Professor level. Candidates should have an active research program in one or more of the following areas: learning, action, high-level vision, and language. 
Our particular interest is in candidates who combine one or more of these research interests with mathematical and/or computational approaches. The position calls for candidates who are effective teachers at both the graduate and undergraduate levels. Review of applications will begin on December 15, 1997. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply. Send CV and three letters of recommendation to Professor S. J. Hanson, Chair, Department of Psychology - Cognitive Science Search, Rutgers University, Newark, NJ 07102. Email inquiries can be made to cogsci at psychology.rutgers.edu
From cns-cas at cns.bu.edu Sun Aug 3 19:54:35 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Sun, 03 Aug 1997 19:54:35 -0400 Subject: CALL FOR PAPERS: 2nd International Conference on CNS Message-ID: <3.0.2.32.19970803195435.0071fdbc@cns.bu.edu> *****CALL FOR PAPERS***** SECOND INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS (CNS'98) May 27-30, 1998 Sponsored by the Center for Adaptive Systems and the Department of Cognitive and Neural Systems Boston University with financial support from the Defense Advanced Research Projects Agency and the Office of Naval Research CNS'98 will include invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is aimed at bringing together researchers in computational neuroscience, connectionist cognitive science, and artificial neural networks, among other disciplines, with a particular focus upon how intelligent systems adapt autonomously to a changing world. The First International Conference on Cognitive and Neural Systems was held on May 28-31, 1997 at Boston University. Its title was: Vision, Recognition, and Action: From Biology to Technology. Over 200 people from 18 countries attended this conference. Many participants asked that a sequel be held in 1998, and that the meeting scope be broadened. CNS'98 has been designed to achieve both goals. The meeting aims to be a forum for lively presentation and discussion of recent research that is relevant to modeling how the brain controls behavior, how the technology of intelligent systems can benefit from understanding human and animal intelligence, and how technology transfers between these two endeavors can be accomplished. The meeting's philosophy is to have a single oral or poster session at a time, so that all presented work is highly visible. Abstract submissions enable scientists and engineers to send in examples of their freshest work. Costs are kept at a minimum to enable the maximum number of people, including students, to attend, without compromising on the quality of tutorial notes, meeting proceedings, reception, and coffee breaks. Although Memorial Day falls on Saturday, May 30, it is observed on Monday, May 25, 1998. Contributions are welcomed on the following topics, among others. Contributors are requested to list a first and second choice from among these topics in their cover letter, and to say whether it is biological (B) or technological (T) work, when they submit their abstract, as described below.
*vision *object recognition *image understanding *audition *speech and language *unsupervised learning *supervised learning *reinforcement and emotion *cognition, planning, and attention *sensory-motor control *spatial mapping and navigation *neural circuit models *neural system models *mathematics of neural systems *robotics *neuromorphic VLSI *hybrid systems (fuzzy, evolutionary, digital) *industrial applications *other Example: first choice: vision (B); second choice: neural system models (B). CALL FOR ABSTRACTS: Contributed abstracts by active modelers in cognitive science, computational neuroscience, artificial neural networks, artificial intelligence, and neuromorphic engineering are welcome. They must be received, in English, by January 31, 1998. Notification of acceptance will be given by February 28, 1998. A meeting registration fee of $45 for regular attendees and $30 for students must accompany each Abstract. See Registration Information below for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Registration fees of accepted abstracts will be returned on request only until April 1, 1998. Each Abstract should fit on one 8.5 x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), mailing, and email address(es) should begin each Abstract. An accompanying cover letter should include: Full title of Abstract, corresponding author and presenting author name, address, telephone, fax, and email address. Preference for oral or poster presentation should be noted. (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, and VCR facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. The original and 3 copies of each Abstract should be sent to: CNS'98, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. The program committee will determine whether papers will be accepted in an oral or poster presentation, or rejected. REGISTRATION INFORMATION: Since seating at the meeting is limited, early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to: CNS'98, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If paying by credit card, mail as above, or fax to (617) 353-7755, or email to cindy at cns.bu.edu. The registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. STUDENT FELLOWSHIPS: A limited number of fellowships for PhD candidates and postdoctoral fellows are available to at least partially defray meeting travel and living costs. The deadline for applying for fellowship support is January 31, 1998. Applicants will be notified by February 28, 1998. 
Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Students who also submit an Abstract need to include the registration fee with their Abstract. Reimbursement checks will be distributed after the meeting. Their size will be determined by student need and the availability of funds. REGISTRATION FORM (Please Type or Print) Cognitive and Neural Systems Boston University Boston, Massachusetts Tutorials: May 27, 1998 Meeting: May 28-30, 1998 Mr/Ms/Dr/Prof: Name: Affiliation: Address: City, State, Postal Code: Phone and Fax: Email: The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. For registered participants in the conference, the regular tutorial registration fee is $25 and the student fee is $15. For attendees of only the conference, the regular registration fee is $45 and the student fee is $30. Two coffee breaks and a tutorial handout will be covered by the tutorial registration fee. CHECK ONE: [ ] $70 Conference plus Tutorial (Regular) [ ] $45 Conference plus Tutorial (Student) [ ] $45 Conference Only (Regular) [ ] $30 Conference Only (Student) [ ] $25 Tutorial Only (Regular) [ ] $15 Tutorial Only (Student) Method of Payment: [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Type of card: Name as it appears on the card: Account number: Expiration date: Signature and date: ****************************************   From peter.hansen at physiol.ox.ac.uk Mon Aug 4 06:18:59 1997 From: peter.hansen at physiol.ox.ac.uk (Peter Hansen) Date: Mon, 4 Aug 1997 11:18:59 +0100 (BST) Subject: 1997 Autumn School in Cognitive Neuroscience, Oxford Message-ID: AUTUMN SCHOOL IN COGNITIVE NEUROSCIENCE Oxford, 30 September to 3 October 1997 UNIVERSITY OF OXFORD OXFORD CENTRE FOR COGNITIVE NEUROSCIENCE The 1997 Annual Autumn School in Cognitive Neuroscience will be held in Oxford on the four days Tuesday 30 September to Friday 3 October. The School is intended primarily for doctoral students, other graduate students and postdoctoral scientists, at Oxford and at other universities, and also for third-year undergraduates who are considering the possibility of research in Neuroscience and would like to find out more about it. Each day will be devoted to a particular area of Cognitive Neuroscience. The preliminary programme is as follows. 
DAY 1 ATTENTION: FROM PERCEPTION TO SINGLE CELLS Lecturers: S Judge (Oxford), M Goldberg (Bethesda, USA), M Husain, J Driver (London), G Humphreys (Birmingham), J Duncan, C Spence (Cambridge), G Fink (Cologne) DAY 2 NEURAL TRANSPLANTATION AND RESTORATION OF FUNCTION Lecturers: J Gray, H Hodges, J Sinden (IOP, London) S Dunnett, R Franklin, C Svendsen, L Annett, A Rosser (Cambridge), G Raisman (NIMR, London), J Mallet (Paris) DAY 3 DYNAMIC IMAGING OF THE HUMAN BRAIN Lecturers: A Nobre, E Wilding, E Rolls, V Walsh (Oxford), P Fletcher, K Friston (London), R Mangun (Davis, USA), W Singer (Frankfurt) DAY 4 MOTOR FUNCTION: FUNCTIONAL IMAGING AND PSYCHOPHYSICAL APPROACHES Lecturers: J Stein, C Miall, P Matthews, R Passingham (Oxford), J Wann (Reading), P Haggard (UCL), D Brooks (Hammersmith), S Jackson (Bangor), G Stelmach (Phoenix, USA) This course is offered free of charge. A limited number of bursaries is available to graduates at UK universities outside Oxford, to assist with travel and accommodation expenses. For further information and application forms, see: http://www.physiol.ox.ac.uk/mcdp/autsch/ --- Dr Peter Hansen Oxford Centre for Cognitive Neuroscience Phone: (01865) 282163 Physiology Laboratory, Oxford University   From cg at eivind.imm.dtu.dk Mon Aug 4 12:21:44 1997 From: cg at eivind.imm.dtu.dk (Cyril Goutte) Date: Mon, 4 Aug 1997 18:21:44 +0200 (METDST) Subject: PhD thesis available. Message-ID: Dear Connectionists, I am pleased to announce that the manuscript of my thesis: STATISTICAL LEARNING AND REGULARISATION FOR REGRESSION is available through the WWW at the following URL: http://eivind.imm.dtu.dk/staff/goutte/PUBLIS/thesis.html --- Abstract : This thesis deals with the use of statistical learning and regularisation on regression problems, with a focus on time series modelling and system identification. Both linear models and non-linear neural networks are considered as particular modelling techniques. Linear and non-linear parametric regression are briefly introduced and their limit is shown using the bias-variance decomposition of the generalisation error. We then show that as such, those problems are ill-posed, and thus need to be regularised. Regularisation introduces a number of hyper-parameters, the setting of which is performed by estimating generalisation error. Several such methods are evoked in the course of this work. The use of these theoretical aspects is targeted towards two particular problems. First an iterative method relying on generalisation error to extract the relevant delays from time series data is presented. Then a particular regularisation functional is studied, that provides pruning of unnecessary parameters as well as a regularising effect. This last part uses Bayesian estimators, and a brief presentation of those estimators is also given in the thesis. --- Cyril. --- Cyril Goutte |> cg at imm.dtu.dk <| Tel: +45-4525 3921 (Fax: +45-4587 2599) Department of Mathematical Modelling - D.T.U., Bygn. 
321 - DK-2800 Lyngby   From cardie at CS.Cornell.EDU Mon Aug 4 14:58:56 1997 From: cardie at CS.Cornell.EDU (Claire Cardie) Date: Mon, 4 Aug 1997 14:58:56 -0400 (EDT) Subject: Machine Learning Journal Special Issue on Natural Language Learning Message-ID: <199708041858.OAA21226@ewigkeit.cs.cornell.edu> CALL FOR PAPERS Machine Learning Journal Special Issue on Natural Language Learning The application of learning techniques to natural language processing has grown dramatically in recent years under the rubric of "corpus-based," "statistical," or "empirical" methods. However, most of this research has been conducted outside the traditional machine learning research community. This special issue is an attempt to bridge this divide by inviting researchers in all areas of natural language learning to communicate their recent results to a general machine learning audience. Papers are invited on learning applied to all natural language tasks including: * Syntax: Part-of-Speech tagging, parsing, language modeling, prepositional-phrase attachment, spelling correction, word segmentation * Semantics: Word-sense disambiguation, word clustering, lexicon acquisition, semantic analysis, database-query mapping * Discourse: Information extraction, anaphora resolution, discourse segmentation * Machine Translation: Bilingual text alignment, bilingual dictionary construction, lexical, syntactic, and semantic transfer and all learning approaches including: * Statistical: n-gram models, hidden Markov models, probabilistic context-free grammars, Bayesian networks * Symbolic: Decision trees, rule-based, case-based, inductive logic programming, automata and grammar induction * Neural-Network & Evolutionary: recurrent networks, self-organizing maps, genetic algorithms Experimental papers with significant results evaluating either engineering performance or cognitive-modeling validity on suitable corpora are invited. Papers will be evaluated by three reviewers, including at least two experts in the relevant area of natural language learning; however, they should be written to be reasonably accessible to a general machine learning audience. Schedule: December 1, 1997: Deadline for submissions March 1, 1998: Deadline for getting decisions back to authors May 1, 1998: Deadline for authors to submit final versions Fall 1998: Publication Submission Guidelines: 1) Manuscripts should conform to the formatting instructions in: http://www.cs.orst.edu/~tgd/mlj/info-for-authors.html The first author will be the primary contact unless otherwise stated. 2) Authors should send 5 copies of the manuscript to: Karen Cullen Machine Learning Editorial Office Attn: Special Issue on Natural Language Learning Kluwer Academic Press 101 Philip Drive Assinippi Park Norwell, MA 02061 617-871-6300 617-871-6528 (fax) kcullen at wkap.com and one copy to: Raymond J. Mooney Department of Computer Sciences Taylor Hall 2.124 University of Texas Austin, TX 78712-1188 (512) 471-9558 (512) 471-8885 (fax) mooney at cs.utexas.edu 3) Please also send an ASCII title page (title, authors, email, abstract, and keywords) and a postscript version of the manuscript to mooney at cs.utexas.edu. General Inquiries: Please address general inquiries to: mooney at cs.utexas.edu Up-to-date information will be maintained on the WWW at: http://www.cs.utexas.edu/users/ml/mlj-nll Co-Editors: Claire Cardie Cornell University cardie at cs.cornell.edu Raymond J. 
Mooney University of Texas at Austin mooney at cs.utexas.edu ------- End of forwarded message -------   From jose at tractatus.rutgers.edu Fri Aug 1 12:51:58 1997 From: jose at tractatus.rutgers.edu (Stephen Jose Hanson) Date: Fri, 01 Aug 1997 12:51:58 -0400 Subject: Cognitive Science Symposium on Modeling and Brain Imaging Message-ID: [ Moderator's note: Steve Hanson would like to suggest that a critical aspect of Brain Imaging in the future will be Neural Network (or system level) modeling. In order to stimulate discussion of this topic on the Connectionists list, he submitted the program for a symposium he's organized on the subject for next week's Cognitive Science conference. There are some interesting ideas here. Perhaps we'll also have a workshop on the topic at NIPS this year. Persons seeking more information about the Cognitive Science symposium may contact Steve at jose at psychology.rutgers.edu. -- Dave Touretzky ] ================================================================ 19th Annual Cognitive Science Society, Stanford 8/7-10/97 Brain Imaging: Models, Methods and High Level Cognition (8/8/97, 2pm - 4pm) (Organizer: Stephen Hanson) Brain Imaging methods hold the promise of being the new "brass instrument" for Psychology. These methods provide tantalizing snapshots of mental activity and function. Nonetheless, basic measurement questions arise as more complex mental functions are being inferred. Tensions arise in determining what is being measured during blood flow changes in the brain? And what is the role of computational models in representing, interpreting and understanding the nature of the mental function which brain imaging methods probe? The idea behind this symposium is to examine the tension between measurement and modeling in Brain Imaging especially against the backdrop of high level cognitive processes, such as reasoning, categorization and language. An important component of these techniques in the future might be in how they may utilize computational and mathematical models that are initially biased with prior beliefs about the relevant location estimators and temporal structure of the underlying mental process. "Functional Neuroimaging: A bridge between Cognitive and Neuro Sciences?". Tomas Paus MNI I will start by posing a question whether one can marry cognitive and neuro-sciences, and what role functional neuroimaging can play here. I will ask, with Tulving, whether it is true that "we lack the requisite background knowledge to appreciate each other's excitement", and what can be done about it. I will then go on to outline the basic principles and the techniques of the research that deals with the brain/behavior relationship, pointing out crucial distinctions between "disruption" (i.e. lesion, stimulation, etc.) and "correlate" (i.e. unit activity, EEG, PET, fMRI) studies. I will review the basic principles of the current neuroimaging methods (concentrating on PET and fMRI, but mentioning also NIRS). At the end of this methodological section, I will again stress that, using neuroimaging, we measure brain correlates of behavior and, as such, we are limited in drawing any causal inferences about the brain/behavior relationship. This does not mean that we shouldn't be doing this kind of research though. It only means, in my mind, that we may need to focus on fairly simple cognitive processes, and that we absolutely need to constrain the interpretation of imaging data by specific a priori hypotheses based on the knowledge of brain anatomy, physiology, etc.
In this context, I will also make a distinction between directed (or predicted) and exploratory search in the entire brain volume for significant changes in the signal. In the second half of the talk, I will concentrate on the issue of functional connectivity and how we can study it using PET (and fMRI). I will briefly mention results of our research on corollary discharges and on combining transcranial magnetic stimulation with PET. "Methods and Models in interpreting fMRI: The case of Independent Components of fMRI Images" Martin J. McKeown The Salk Institute Many current fMRI experiments use a block design in which the subject is requested to sequentially perform experimental and control tasks in an alternating sequence of 20-40-s blocks. The bulk of the fluctuations in the resultant time series recorded from each brain region (a "voxel") arise not from local task-related activations, but rather from machine noise, subtle subject movements, and heart and breathing rhythms. This tangled mixture of signals presents a formidable challenge for analytical methods attempting to tease apart task-related changes in the time courses of 5,000 - 25,000 voxels. Correlational and ANOVA-like analytical methods technically require narrow a priori assumptions that may not be valid in fMRI data. Moreover, activations arising from important cognitive processes like changes in subject task strategy or decreasing stimulus novelty cannot typically be tested for, as their time courses are not easily predicted in advance. Signal-processing strategies for analyzing fMRI experiments monitoring cognition are generally used without regard to basic neuropsychological principles, such as localization or connectionism. We propose that an appropriate criterion for the separation of fMRI data into cognitively and physiologically meaningful components is the determination of the separate groups of multi-focal anatomical brain areas that are activated synchronously during an fMRI trial. With this view, each scan obtained during an fMRI experiment can be considered as the mean activity plus the sum of enhancements (or suppressions) of activity from the possibly overlapping individual components. Using an Independent Component Analysis (ICA) algorithm, we demonstrate how the fMRI data from Stroop color-naming and attention orienting experiments can be separated into numerous spatially independent components, some of which demonstrate transient and sustained task-related activation during the behavioral experiment. Active areas of these task-related components correspond with regions implicated from PET and neuropsychological studies. Other components relate to machine noise, subtle head movements and presumed cardiac and breathing pulsations. Considering fMRI data to be the sum of independent areas activated with different time courses enables, with minimal a priori assumptions, the separation of artifacts from transient and sustained task-related activations. Determining the independent components of fMRI data appears to be a promising method for the analysis of cognitive experiments in normal and clinical populations. Sometimes Weak is Strong: Functional Imaging Analysis with Minimal Assumptions Benjamin Martin Bly and Mark Griswold Brain-Imaging Studies of Categorization by Rule or Family Membership Andrea Patalano and Edward Smith A PET Study of Deductive Versus Probabilistic Reasoning Stefano F.
Cappa, Daniela Perani, Daniel Osherson, Tatiana Schnur, and Ferruccio Fazio Deductive versus probabilistic inferences are distinguished by normative theories, but it is still unknown whether these two forms of reasoning engage similar brain areas. In order to investigate the neurological correlates of reasoning, we have performed an activation study using positron emission tomography and 15O-water in normal subjects. Cerebral perfusion was assessed during a "logic task", in which they had to distinguish between valid and invalid arguments; a "probability task", in which they had to judge whether the conclusion had a greater chance of being true or false, supposing the truth of the premises; and a "meaning task", in which they had to evaluate the premises and the conclusions to determine whether any had anomalous content. The latter was used as "baseline" task: identical arguments were evaluated either for validity, probability or anomaly. In the direct comparison of the two reasoning tasks, probabilistic reasoning increased regional cerebral blood flow (rCBF) in dorsolateral frontal regions, whereas deductive reasoning enhanced rCBF in associative occipital and parietal regions, with a right hemispheric prevalence. Compared to the meaning condition, which involved the same stimuli, both probabilistic and deductive reasoning increased rCBF in the cerebellum. These results are compatible with the idea that deductive reasoning has a geometrical character requiring visuo-spatial processing, while the involvement of the frontal lobe in probabilistic tasks is in agreement with neuropsychological evidence of impairment in cognitive estimation in patients with frontal lesions. The cerebellar activation found in both reasoning tasks may be related to the involvement of working memory. Neural Correlates of Mathematical Reasoning: An fMRI Study of Word-Problem Solving Bart Rypma, Vivek Prabhakaran, Jennifer A. L. Smith, John E. Desmond, Gary H. Glover, and John D. E. Gabrieli   From zhuh at santafe.edu Mon Aug 4 16:08:21 1997 From: zhuh at santafe.edu (Huaiyu Zhu) Date: Mon, 04 Aug 1997 14:08:21 -0600 Subject: Paper: Less predictable than random ... Message-ID: <33E636B5.3451@santafe.edu> The following paper has been submitted to Neural Computation: ftp://ftp.santafe.edu/pub/zhuh/anti.ps Anti-Predictable Sequences: Harder to Predict Than A Random Sequence Huaiyu Zhu Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, USA Wolfgang Kinzel Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, USA Institut f\"ur Theoretische Physik, Universit\"at, D-97074 W\"urzburg, Germany ABSTRACT For any discrete state sequence prediction algorithm $A$ it is always possible, using an algorithm $B$ no more complicated than $A$, to generate a sequence for which $A$'s prediction is always wrong. For any prediction algorithm $A$ and sequence $x$, there exists a sequence $y$ no more complicated than $x$, such that if $A$ performs better than random on $x$ then it will perform worse than random on $y$ by the same margin. An example of a simple neural network predicting a bit-sequence is used to illustrate this very general but not widely recognized phenomena. This implies that any predictor with good performance must rely on some (usually implicitly) assumed prior distributions of the problem. 
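A toy illustration of the construction (a reader's sketch with a simple majority-vote predictor standing in for the neural network example used in the paper): the generating algorithm simply asks the predictor for its guess at each step and emits the opposite bit, so it is no more complicated than the predictor itself yet makes it wrong on every symbol.

def majority_predictor(history):
    # Toy predictor A: guess the majority bit seen so far (1 on ties).
    return 1 if 2 * sum(history) >= len(history) else 0

def anti_sequence(predictor, length):
    # Algorithm B: generates a sequence on which `predictor` always errs.
    seq = []
    for _ in range(length):
        seq.append(1 - predictor(seq))  # emit the opposite of A's prediction
    return seq

seq = anti_sequence(majority_predictor, 20)
errors = sum(majority_predictor(seq[:t]) != seq[t] for t in range(len(seq)))
print(errors, "errors out of", len(seq))  # prints: 20 errors out of 20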
-- Huaiyu Zhu Tel: 1 505 984 8800 ext 305 Santa Fe Institute Fax: 1 505 983 0751 1399 Hyde Park Road mailto:zhuh at santafe.edu Santa Fe, NM 87501 http://www.santafe.edu/~zhuh/ USA ftp://ftp.santafe.edu/pub/zhuh/   From maja at cs.brandeis.edu Tue Aug 5 11:34:02 1997 From: maja at cs.brandeis.edu (Maja Mataric) Date: Tue, 5 Aug 1997 11:34:02 -0400 (EDT) Subject: Extended Deadline Autonomous Robots CFP Message-ID: <199708051534.LAA02786@garnet.cs.brandeis.edu> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! NOTE: Due to popular demand and unwieldy schedules, we have moved and *finalized* the submission deadline to Sep 1, 1997. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! CALL FOR PAPERS Autonomous Robots Journal Special Issue on Learning in Autonomous Robots http://www.cs.buffalo.edu/~hexmoor/autonomous-robots.html Guest editors: Henry Hexmoor and Maja Mataric Submission Deadline: September 1, 1997 Autonomous Robots is an international journal published by Kluwer Academic Publishers, Editor-in-Chief: George Bekey Current applications of machine learning in robotics explore learning behaviors such as obstacle avoidance, navigation, gaze control, pick and place operations, manipulating everyday objects, walking, foraging, herding, and delivering objects. It is hoped that these are first steps toward robots that will learn to perform complex operations ranging from folding clothes, cleaning up toxic waste and oil spills, picking up after the children, de-mining, look after a summer house, imitating a human teacher, or overseeing a factory or a space mission. As builders of autonomous embedded agents, researchers in robot learning deal with learning schemes in the context of physical embodiment. Strides are being made to design programs that change their initial encoding of know-how to include new concepts as well as improvements in the associations of sensing to acting. Driven by concerns about the quality and quantity of training data and real-time issues such as sparse and low-quality feedback from the environment, robot learning is undergoing a search for quantification and evaluation mechanisms, as well as for methods for scaling up the complexity of learning tasks. This special issue of Autonomous Robots will focus on novel robot learning applications and quantification of learning in autonomous robots. We are soliciting papers describing finished work preferably involving real manipulator or mobile robots. We invite submissions from all areas in AI and Machine Learning, Mobile Robotics, Machine Vision, Dexterous Manipulation, and Artificial Life that address robot learning. Submitted papers should be delivered by September 1, 1997. Potential authors intending to submit a manuscript can contact either guest editor for answer to any questions. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Both paper and electronic submission are possible, as described below. For paper submissions, send five (5) copies of submitted papers (hard-copy only) to: Dr. Henry Hexmoor Department of Computer Science State University of New York at Buffalo 226 Bell Hall Buffalo, NY 14260-2000 U.S.A. 
PHONE: 716-645-3197 FAX: 716-645-3464 For electronic submissions, use Postscript format, ftp the file to ftp.cs.buffalo.edu, and send an email notification to hexmoor at cs.buffalo.edu Detailed ftp instructions: compress your-paper (both Unix compress and gzip commands are ok) ftp ftp.cs.buffalo.edu (but check in case it has changed) give anonymous as your login name give your e-mail address as password set transmission to binary (just type the command BINARY) cd to users/hexmoor/ put your-paper send an email notification to hexmoor at cs.buffalo.edu to notify us that you transferred the paper Editoral Board: James Albus, NIST, USA Peter Bonasso, NASA Johnson Space Center, USA Enric Celaya, Institut de Robotica i Informatica Industrial, Spain Adam J. Cheyer, SRI International, USA Keith L. Doty, University of Florida, USA Marco Dorigo, Universite' Libre de Bruxelles, Belgium Judy Franklin, Mount Holyoke College, USA Rod Grupen, University of Mass, USA John Hallam, University of Edinburgh, UK Inman Harvey, COGS, Univ. of Sussex, UK Gillian Hayes, University of Edinburgh, UK James Hendler, University of Maryland, USA David Hinkle, Johns Hopkins University, USA R James Firby, University of Chicago, USA Ian Horswill, Northwestern University, USA Sven Koenig, Carnegie Mellon University, USA Kurt Konolige, SRI International, USA David Kortenkamp, NASA Johnson Space Center, USA Francois Michaud, Brandeis University, USA Robin R. Murphy, Colorado School of Mines, USA Jose del R. MILLAN, Joint Research Centre of the EU, Italy Amitabha Mukerjee, IIT, India David J. Musliner, Honeywell Technology Center, USA Ulrich Nehmzow, University of Manchester, UK Tim Smithers, Universidad del Pai's Vasco, Spain Martin Nilsson, Swedish Institute of Computer Science, Sweden Stefano Nolfi, Institute of Psychology, C.N.R., Italy Tony J Prescott, University of Sheffield, UK Ashwin Ram, Georgia Institute of Technology, USA Alan C. Schultz, Naval Research Laboratory, USA Noel Sharkey, Sheffield University, UK Chris Thornton, UK Francisco J. Vico, Campus Universitario de Teatinos, Spain Brian Yamauchi, Naval Research Laboratory, USA Uwe R. Zimmer, Schloss Birlinghoven, Germany Relevant Dates: September 1, 1997 submission deadline November 15, 1997 review deadline December 1, 1997 acceptance/rejection notifications to the authors   From hd at harris.monmouth.edu Tue Aug 5 21:25:14 1997 From: hd at harris.monmouth.edu (Harris Drucker) Date: Tue, 5 Aug 97 21:25:14 EDT Subject: paper:boosting regression Message-ID: <9708060125.AA04381@harris.monmouth.edu.monmouth.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/drucker.boosting-regression.ps.Z The following paper on regression was presented at the Fourteenth International Conference on Machine Learning(1997), Morgan Kaufmann, publishers: Improving Regressors using Boosting Techniques Harris Drucker Monmouth University West Long Branch, NJ 07764 drucker at monmouth.edu Abstract In the regression context, boosting and bagging are techniques to build a committee of regressors that may be superior to a single regressor. We use regression trees as fundamental building blocks in bagging committee machines and boosting committee machines. Performance is analyzed on three non-linear functions and the Boston housing database. In all cases, boosting is at least equivalent, and in most cases better than bagging in terms of prediction error. 
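For readers who want to try a comparison of this kind without reimplementing the method, here is a hedged sketch using scikit-learn, whose AdaBoostRegressor follows the AdaBoost.R2 procedure described in this paper; the synthetic Friedman #1 function below is only a stand-in, not necessarily one of the paper's exact benchmark functions.

from sklearn.datasets import make_friedman1
from sklearn.ensemble import AdaBoostRegressor, BaggingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic nonlinear regression problem with additive noise.
X, y = make_friedman1(n_samples=2000, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

tree = DecisionTreeRegressor(max_depth=4)  # regression trees as building blocks
boost = AdaBoostRegressor(tree, n_estimators=100, loss="linear", random_state=0)
bag = BaggingRegressor(tree, n_estimators=100, random_state=0)

for name, model in (("boosting", boost), ("bagging", bag)):
    model.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, model.predict(X_te)))

On problems of this kind the boosted committee usually matches or improves on the bagged one, in line with the result summarized above.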
If you do not have access to the proceedings, anonymous ftp from the above site may be used to retrieve this 9 page compressed paper. Sorry, no hard copies.   From jan at uran.informatik.uni-bonn.de Wed Aug 6 08:22:04 1997 From: jan at uran.informatik.uni-bonn.de (Jan Puzicha) Date: Wed, 6 Aug 1997 14:22:04 +0200 (MET DST) Subject: Preprints and Abstracts available online Message-ID: <199708061222.OAA13370@thalia.informatik.uni-bonn.de> This message has been posted to several lists. Sorry, if you receive multiple copies. The following seven PREPRINTS are now available as abstracts and compressed postscript online via the WWW-Home-Page http://www-dbv.cs.uni-bonn.de/ of the |---------------------------------------------| |Computer Vision and Pattern Recognition Group| | of the University of Bonn, | | Prof. J. Buhmann, Germany. | |---------------------------------------------| 1.) Hansjrg Klock and Joachim M. Buhmann: Data Visualization by MultidimensionalScaling: A Deterministic Annealing Approach. Technical Report IAI-TR-96-8, Institut fr Informatik III, University of Bonn. October 1996. 2.) Jan Puzicha, Thomas Hofmann and Joachim M. Buhmann: Deterministic Annealing: Fast Physical Heuristics for Real-Time Optimization of Large Systems. In: Proceedings of the 15th IMACS World Conference on Scientific Computation, Modelling and Applied Mathematics, Berlin, August 1997. 3.) Jan Puzicha and Joachim M. Buhmann: Multiscale Annealing for Real-Time Unsupervised Texture Segmentation. Technical Report IAI-TR-97-4, Institut fr Informatik III, University of Bonn. April 1997. Accepted for presentation at the International Congress on Computer Vision (ICCV'98). 4.) Jan Puzicha, Thomas Hofmann and Joachim M. Buhmann: Non-parametric Similarity Measures for Unsupervised Texture Segmentation and Image Retrieval. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, pp. 267-272, San Juan, 1997. 5.) Thomas Hofmann, Jan Puzicha and Joachim M. Buhmann: An Optimization Approach to Unsupervised Hierarchical Texture Segmentation. Proceedings of the International Conference on Image Processing, Santa Barbara, 1997. 6.) Andreas Polzer, Hans-Jrg Klock and Joachim M. Buhmann: Video-Coding by Region-Based Motion Compensation and Spatio-temporal wavelet transform. Proceedings of the International Conference on Image Processing, Santa Barbara, 1997. 7.) Hans-Jrg Klock, Andreas Polzer and Joachim M. Buhmann: Region-Based Motion Compensated 3D-Wavelet Transform Coding of Video. Proceedings of the International Conference on Image Processing, Santa Barbara, 1997. If you have any questions or remarks, please let me know. Best regards, Jan Puzicha -------------------------------------------------------------------- Jan Puzicha | email: jan at uran.cs.uni-bonn.de Institute f. Informatics III | jan at cs.uni-bonn.de University of Bonn | WWW : http://www.cs.uni-bonn.de/~jan | Roemerstrasse 164 | Tel. 
: +49 228 550-383 D-53117 Bonn | Fax : +49 228 550-382 --------------------------------------------------------------------   From Friedrich.Leisch at ci.tuwien.ac.at Wed Aug 6 06:10:28 1997 From: Friedrich.Leisch at ci.tuwien.ac.at (Friedrich Leisch) Date: Wed, 6 Aug 1997 12:10:28 +0200 Subject: CI BibTeX Collection -- Update Message-ID: <199708061010.MAA15951@galadriel.ci.tuwien.ac.at> The following volumes have been added to the collection of BibTeX files maintained by the Vienna Center for Computational Intelligence: Machine Learning 27 Neural Networks 10/3, Neural Computation 9/5, Neural Processing Letters 5/2 Advances in Neural Information Processing Systems 9 All files have been converted automatically from various source formats, please report any bugs you find. The complete collection can be downloaded from http://www.ci.tuwien.ac.at/docs/ci/bibtex_collection.html ftp://ftp.ci.tuwien.ac.at/pub/texmf/bibtex/ Best, Fritz -- Friedrich Leisch Institut fur Statistik Tel: (+43 1) 58801 4541 Technische Universitat Wien Fax: (+43 1) 504 14 98 Wiedner Hauptstrase 8-10/1071 Friedrich.Leisch at ci.tuwien.ac.at A-1040 Wien, Austria http://www.ci.tuwien.ac.at/~leisch PGP public key http://www.ci.tuwien.ac.at/~leisch/pgp.key   From nic at idsia.ch Thu Aug 7 08:46:29 1997 From: nic at idsia.ch (Nici Schraudolph) Date: Thu, 7 Aug 1997 14:46:29 +0200 Subject: paper available: On Centering Neural Network Weight Updates Message-ID: <19970807124629.AAA28359@kraut.idsia.ch> Dear colleagues, the following paper is now available by anonymous ftp from the locations ftp://ftp.idsia.ch/pub/nic/center.ps.gz and ftp://ftp.cnl.salk.edu/pub/schraudo/center.ps.gz On Centering Neural Network Weight Updates ------------------------------------------ by Nicol N. Schraudolph Technical Report IDSIA-19-97 IDSIA, Lugano 1997 It has long been known that neural networks can learn faster when their input and hidden unit activity is centered about zero; recently we have extended this approach to also encompass the centering of error signals (Schraudolph & Sejnowski, 1996). Here we generalize this notion to all factors involved in the weight update, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability. Best regards, -- Dr. Nicol N. 
Schraudolph Tel: +41-91-911-9838 IDSIA Fax: +41-91-911-9839 Corso Elvezia 36 CH-6900 Lugano http://www.idsia.ch/~nic/ Switzerland http://www.cnl.salk.edu/~schraudo/   From terry at salk.edu Thu Aug 7 17:59:47 1997 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 7 Aug 1997 14:59:47 -0700 (PDT) Subject: NIPS Workshop on Brain Imaging Message-ID: <199708072159.OAA18914@helmholtz.salk.edu> ---------------------- Call for Participants NIPS*97 Workshop on Brain Imaging December 5, 1997 Breckenridge, Colorado ---------------------- Title: Analysis of Brain Imaging Data Organizers: Scott Makeig and Terrence Sejnowski scott at salk.edu terry at salk.edu Abstract: The goal of this workshop is to bring together researchers who are interested in new techniques for analyzing brain recordings based on electroencephalography (EEG), event-related potentials (ERP), magnetoencephalography (MEG), optical recordings from cortex using voltage-sensitive dyes, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). This is a rapidly developing area of cognitive neuroscience and some new unsupervised learning techniques such as independent component analysis (ICA) are proving useful for analyzing data from humans and other primates. Both signal processing experts and neuroscientists will participate in this workshop. In addition to those who are interested in analysis of brain data, those interested in interfacing on-line recordings with practical devices are also welcome to participate. Format: The workshop will meet for 3 hours in the morning and 3 hours in the late afternoon on December 5. The sessions will include short (15 min) presentations by invited participants to help focus discussion. Participants from the signal processing and neuroscience communities are encouraged to attend and to actively participate in the discussion, contribute brief (5 min) presentations, and bring posters to the sessions. Invited Participants: Erkki Oja - Finland - Analysis of EEG Data using ICA Martin McKeown - Salk Institute - fMRI analysis using ICA Partha Mitra - Caltech - fMRI analysis using tapered spectral analysis Ehud Kaplan - PCA analysis of optical recordings Klaus Obermayer - Berlin, Germany - ICA analysis of optical recordings from visual cortex Karl Friston, Queens Square, London - Analysis of PET and fMRI data Henri Begleiter and David Chorlian (SUNY Health Center, NY) - ERP analysis Others who have expressed an interest in attending include: Brad Duckrow - Univ. Connecticut Alex Dimitrov - Univ. Chicago Andrew Oliver - UC London Klaus Prank - Hannover, Germany -----   From piuri at pine.ece.utexas.edu Fri Aug 8 14:37:26 1997 From: piuri at pine.ece.utexas.edu (Vincenzo Piuri) Date: Fri, 8 Aug 1997 20:37:26 +0200 Subject: ISATA98 CALL FOR PAPERS Message-ID: <199708081829.NAA10321@pine.ece.utexas.edu> ================================================================================ 31st ISATA International Symposium on Automotive Technology and Automation Dusseldorf, Germany, 2-5 June 1998 Chairman: Prof. Dr. 
Dieter Roller, Universitat Stuttgart, Germany ================================================================================ SPECIAL SESSION ON NEURAL IDENTIFICATION, PREDICTION AND CONTROL FOR AUTOMOTIVE EMBEDDED SYSTEMS CALL FOR PAPERS ================================================================================ The ISATA Symposium is an outstanding international forum addressing the most topical and important areas of research and development in a wide variety of fields relevant to the automotive industry. It brings together researchers and practitioners both from the academy and industry. The wide range of research and applications topics covered by this meeting is divided and presented in 8 simultaneous program tracks: "Automotive mechatronics design and engineering", "Simulation, virtual reality and supercomputing automotive applications", "Advanced manufacturing in the automotive industry", "Materials for energy- efficient vehicles", "New propulsion systems and alternative fuel vehicles", "Automotive electronics and new products", "Logistics management and environmental aspects", and "Passenger comfort, road and vehicle safety". The special session on "Neural identification, prediction and control for automotive embedded systems" will be a forum for analyzing, comparing and evaluating the capabilities and the effectiveness of neural techniques with specific reference to the automotive area. The use of such technologies in heterogeneous embedded system, composed by digital dedicated ASIC devices or microprocessor-based structures and neural components, is in fact becoming more and more attractive to realize advanced flexible, efficient and smart vehicles due to their "intelligent" and adaptive features. For example, typical application areas are fuel injection, engine efficiency, guide control, asset balancing, exhausted emissions control, assisted navigation, sensor enhancement and fusion, system diagnosis. For the special session, papers are welcome on all aspects of neural network theory and applications in identification, prediction and control for the automotive industry, with specific reference to their use in heterogeneous embedded systems. In particular, papers are solicited on neural techniques and implementations referring to system modeling, control, prediction, learning, stability, optimization, adaptivity, sensor fusion, classification, instrumentation, diagnosis, neural devices, VLSI and FPGA realizations, integration of neural components in embedded systems, interfacing of neural components in digital systems, specification of heterogeneous embedded systems, design automation of neural devices and heterogeneous embedded systems, testing, CAD tools, real applications, and experimental results. Perspective authors should submit a short abstract (100-150 words) both to the Special Session Organizer and to the Secretariat, including title, author(s) and affiliation(s), by October 31, 1997. The name of the special session must be clearly specified in the submission. The contact author must be identified, with his complete affiliation, address, phone, fax and email. The short abstract is required to plan the review process. Submission of abstracts can be performed also by email or fax. Authors must then send the paper drafts to the Secretariat only by January 16, 1998: submission of drafts must be performed by mail only (email and fax submissions are not accepted). Refereeing will be performed on the draft papers. 
Submission of the draft paper implies, if accepted for presentation at the conference, the willingness to send the final version of the paper, to register at the conference and to present the paper. Notification of rejection or acceptance will be mailed by February 28, 1998. The final camera ready version is due to the Secretariat by March 31, 1998. Organizer of the Special Session on Neural Identification, Prediction and Control for Automotive Embedded Systems prof. Vincenzo Piuri Department of Electronics and Information Politecnico di Milano piazza L. da Vinci 32 20133 Milano, Italy phone +39-2-2399-3623 fax +39-2-2399-3411 email piuri at elet.polimi.it Conference Secretariat: ISATA 31th ISATA Symposium 32A Queen Street Croydon, CRO 1SY, UK phone +44 181 681 3069 fax +44 181 686 1490 email 100270.1263 at compuserve.com web page http://www.isata.com ================================================================================   From sschaal at erato.atr.co.jp Sat Aug 9 12:02:49 1997 From: sschaal at erato.atr.co.jp (Stefan Schaal) Date: Sat, 9 Aug 97 12:02:49 JST Subject: NIPS*97 workshop on Imitation Learning, call for participation Message-ID: <9708090302.AA05867@power.hip.atr.co.jp> Call for participation for NIPS*97 workshop on: =========================================================================== Imitation Learning --------------------------------------------------------------------------- http://www.erato.atr.co.jp/nips97 NIPS*97 Post-Conference Workshop December 5-6, 1997, Beaver Run Resort (970 453-6000) in Breckenridge, Colorado Abstract -------- Imitation learning is an ability shared by humans and many different species of animals. In imitation learning, the learner observes a skill being demonstrated by a teacher, then attempts to imitate that skill, and finally refines the skill through trial and error learning. The ability of imitation learning provides the opportunity to profit from knowledge of others and to acquire new skills much more quickly. Effectively, imitation learning biases a learning system towards a good solution in order to significantly reduce the search space during trial by trial learning. The ability of imitation learning, however, is not trivial. It requires a sophisticated interplay between perceptual systems that recognize the demonstrated skill, and motor systems, onto which the recognized skill must be mapped. Differences between teacher and learner emphasize the need for more abstract representations for imitation learning. Recent demonstrations of imitation-specific neurons in primate premotor cortex have even lead to speculations that the development of imitation skills may have been a key milestone in the evolution of higher intelligence. Goal ---- The goal of this 1-day workshop is to identify and discuss the complex information processes of imitation learning in short presentations and panel discussions. The hope is to outline a strategy of how imitation learning could be studied systematically by bringing together researchers with a broad range of expertise. 
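To make the idea of biasing a learner with a demonstration concrete, here is a minimal sketch in Python; the toy chain task, the "teacher", and all parameter values are invented for illustration and are not taken from any of the workshop contributions. A tabular Q-learner has its value estimates nudged toward a demonstrated behaviour before the usual epsilon-greedy trial-and-error refinement begins.

import random

# Toy chain task: states 0..N-1, actions 0 (step left) and 1 (step right); reward only at
# the right-hand end. Task, parameters and the "teacher" are invented for illustration.
N, GAMMA, ALPHA, EPS = 8, 0.9, 0.5, 0.1
ACTIONS = (0, 1)

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

# Imitation phase: a demonstrated trajectory (the teacher always steps right) biases the
# initial value estimates, so the later search starts near a good solution.
for s, a in [(s, 1) for s in range(N - 1)]:
    Q[(s, a)] += 1.0

# Trial-and-error phase: ordinary epsilon-greedy Q-learning refines the biased estimates.
for episode in range(200):
    s, done = 0, False
    while not done:
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda b: Q[(0, b)]))   # greedy first move learned from the start state

With the demonstration bias the greedy policy reaches the goal from the very first episode; without the biasing step, early episodes wander until exploration happens to find the reward, which is the reduction of the search space described in the abstract above.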
Topics of the workshop include: - how learning methods can profit from (i.e., can be biased by) a demonstration of a teacher, - how the recognition process of a demonstration could interact with the generative process of motor control (e.g., connecting to ideas of reciprocally constrained learning processes, as in Helmholtz machines), - how memory and attentional processes operate during imitation, - segmentation and recognition of the teacher's demonstration, - extracting the intent of a demonstration, - psychophysical experiments on imitation learning, - mapping imitation learning onto the functional structure in primate brains, - robot and simulation studies of imitation learning, - representations supporting imitation learning. Participants ------------ This workshop will bring together active researchers from neurobiology, computational neuroscience, behavioral sciences, cognitive science, machine learning, statistical learning, motor control, control theory, robotics, human computer interaction, and related fields. A tentative list of speakers is given below. We are interested in additional contributors. If you would like to give a presentation in this workshop, please send email to nips97_imitation at hip.atr.co.jp describing the material you would like to present. Tentative Speakers ------------------ Bob Sekuler (Brandeis Univ.), Chris Atkeson (GaTech), Dana Ballard (Univ. of Rochester), Jean-Jacques Slotine (MIT), Jeff Siskind (NEC), Kenji Doya (JST), Maja Mataric (USC), Matt Brand (MIT), Mitsuo Kawato (ATR, Japan), Polly Pook (MIT), Sebastian Thrun (CMU), Stefan Schaal (JST & USC), Trevor Darell (Interval), Yasuo Kuniyoshi (ETL, Japan), Zoubin Ghahramani (Univ. of Toronto). Organizers ---------- Stefan Schaal (JST & USC), Maja Mataric (USC), Chris Atkeson (GaTech) Location and More Information ----------------------------- The most up-to-date information about NIPS*97 can be found on the NIPS*97 Home Page (http://www.cs.cmu.edu/Groups/NIPS/NIPS.html) --------------------------------------------------------------------------- If you have comments or suggestions, send email to nips97_imitation at hip.atr.co.jp   From ft at uran.informatik.uni-bonn.de Mon Aug 11 10:49:21 1997 From: ft at uran.informatik.uni-bonn.de (Thorsten Froehlinghaus) Date: Mon, 11 Aug 1997 16:49:21 +0200 (MET DST) Subject: Stereo Images with Ground Truth for Benchmarking Message-ID: <199708111449.QAA14519@atlas.informatik.uni-bonn.de> Stereo Images with Ground Truth for Benchmarking At the University of Bonn, a synthetic but realistic stereo image pair has been generated together with ground truth disparity and occlusion maps. They are shown on the following web page: http://www-dbv.cs.uni-bonn.de/~ft/stereo.html The stereo images and the ground truth maps can be employed to benchmark the performance of different stereo matching techniques. Every interested stereo researcher is invited to use their matching technique to compute a dense disparity map for these images. I will then evaluate all contributed results and compare them with regard to their precision and reliability. Thorsten Froehlinghaus University of Bonn Dept.
of Computer Science III E-Mail: ft at cs.bonn.edu   From mepp at almaden.ibm.com Sun Aug 10 11:33:06 1997 From: mepp at almaden.ibm.com (Mark Plutowski/Almaden/IBM) Date: 10 Aug 97 11:33:06 Subject: Call for papers: AI Review, Special Issue on Datamining the Internet Message-ID: <9708101833.AA2613@almlnsg0.almaden.ibm.com> Artificial Intelligence Review: Special Issue on Data Mining on the Internet The advent of the World Wide Web has caused a dramatic increase in usage of the Internet. The resulting growth in on-line information combined with the almost chaotic nature of the web necessitates the development of powerful yet computationally efficient algorithms to track and tame this constantly evolving complex system. While traditionally the data mining community has dealt with structured databases, web mining poses problems not only due to the lack of structure, but also due to the intrinsic distributed nature of the data. Furthermore, mining on the Internet involves also dealing with multi-media content consisting of not only natural language documents but also images, audio and video streams. Several interesting and potentially useful applications have already been developed by academic researchers and industry practitioners to address these challenges. It is important to learn from these initial endeavors, if we are to develop new algorithms and interesting applications. The purpose of this special issue is to provide a comprehensive state-of-the-art overview of the technical challenges and successes in mining of the Internet. Of particular interest are papers describing both the development of novel algorithms and applications. Topics of interest could include but are not limited to: * Resource Discovery * Collaborative Filtering * Information Filtering * Content Mining (text, images, video, etc.) * Information Extraction * User Profiling * Applications, e.g., one-to-one marketing In addition to the call for full-length papers, we request that any researchers working in this area submit abstracts and/or pointers to recently published applications for the purposes of compiling a comprehensive survey of the current state-of-the-art. The mission of Artificial Intelligence Review: The Artificial Intelligence Review serves as a forum for the work of researchers and application developers from Artificial Intelligence, Cognitive Science and related disciplines. The Review publishes state-of-the-art research and applications and critical evaluations of techniques and algorithms from the field. The Review also presents refereed survey and tutorial articles, as well as reviews and commentary on topics from these applications. **** Instructions for submitting papers *** Papers should be no more than 30 printed pages (approximately 15,000 words) with a 12-point font and 18-point spacing, including figures and tables. Papers must not have appeared in, nor be under consideration by other journals. Include a separate page specifying the paper's title and providing the address of the contact author for correspondence (including postal, telephone number, fax number, and e-mail address). Send FOUR copies of each submission to the guest editor listed below. Papers in ascii or postscript form may be submitted electronically. Instructions for on-line submission are given below. ================================== Information For on-line submission ================================== Kluwer Academic Publishers allows on-line submission of scientific articles via ftp and e-mail. 
We will make this system more user-friendly by incorporating it into our KAPIS WWW server and use Netscape as the user-interface. This is currently being prepared and will be implemented by the end of this year. Below, please find the procedure that should be used until then. - an author sends an e-mail message to "submit at wkap.nl" containing the following line REQUEST SUBMISSIONFORM AIRE AIRE = Artificial Intelligence Review (the 4-letter code that is used at Kluwer) - the author receives the electronic submission form (see attachment) via e-mail with a dedicated file name filled in (and also the information that is given at point 4: the journal's four-letter code plus the full journal title) - the author fills in the submission form and send it back to: "submit at wkap.nl" - at the same time, the author submits his/her article via anonymous ftp at the following address: "ftp.wkap.nl" in the subdirectory INCOMING/SUBMIT, using the dedicated file name with an appropriate extension - at Kluwer, the article is registrated and taken into production in the usual way ======================================================================== ** Important Dates ** Papers Due: December 15, 1997 Acceptance Notification: March 1, 1998 Final Manuscript due: June 1, 1998 Guest Editor: Shivakumar Vaithyanathan, net.Mining, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (408)927-2465 (Phone) (408)927-2240 (Fax) e-mail: shiv at almaden.ibm.com   From frey at cs.toronto.edu Mon Aug 11 16:12:54 1997 From: frey at cs.toronto.edu (Brendan J. Frey) Date: Mon, 11 Aug 1997 16:12:54 -0400 Subject: Bayes nets for classification, compression and error-correction Message-ID: <97Aug11.161255edt.1012@neuron.ai.toronto.edu> Doctoral dissertation Bayesian Networks for Pattern Classification, Data Compression, and Channel Coding Brendan J. Frey http://www.cs.utoronto.ca/~frey Pattern classification, data compression, and channel coding are tasks that usually must deal with complex but structured natural or artificial systems. Patterns that we wish to classify are a consequence of a causal physical process. Images that we wish to compress are also a consequence of a causal physical process. Noisy outputs from a telephone line are corrupted versions of a signal produced by a structured man-made telephone modem. Not only are these tasks characterized by complex structure, but they also contain random elements. Graphical models such as Bayesian networks provide a way to describe the relationships between random variables in a stochastic system. In this thesis, I use Bayesian networks as an overarching framework to describe and solve problems in the areas of pattern classification, data compression, and channel coding. Results on the classification of handwritten digits show that Bayesian network pattern classifiers outperform other standard methods, such as the k-nearest neighbor method. When Bayesian networks are used as source models for data compression, an exponentially large number of codewords are associated with each input pattern. It turns out that the code can still be used efficiently, if a new technique called ``bits-back coding'' is used. Several new error-correcting decoding algorithms are instances of ``probability propagation'' in various Bayesian networks. These new schemes are rapidly closing the gap between the performances of practical channel coding systems and Shannon's 50-year-old channel coding limit. 
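As a generic illustration of what "probability propagation" computes - this sketch is not taken from the thesis, and the transition matrix and likelihood values are invented - here is the sum-product recursion on the simplest Bayesian network, a two-state Markov chain with noisy observations:

import numpy as np

# Sum-product ("probability propagation") on the simplest Bayesian network: a two-state
# Markov chain with noisy observations. All numbers are invented for illustration.
T = np.array([[0.9, 0.1],            # T[i, j] = P(next state = j | current state = i)
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])
evidence = np.array([[0.7, 0.3],     # evidence[t, j] = P(observation at t | state = j)
                     [0.4, 0.6],
                     [0.1, 0.9]])

n = len(evidence)
fwd = np.zeros((n, 2))
bwd = np.ones((n, 2))

fwd[0] = prior * evidence[0]
for t in range(1, n):                          # forward messages
    fwd[t] = (fwd[t - 1] @ T) * evidence[t]
for t in range(n - 2, -1, -1):                 # backward messages
    bwd[t] = T @ (evidence[t + 1] * bwd[t + 1])

posterior = fwd * bwd
posterior /= posterior.sum(axis=1, keepdims=True)
print(posterior)                               # exact posterior marginals P(state_t | all observations)

On a chain the propagated messages give the exact posterior marginals; the decoding schemes mentioned above apply the same message-passing idea to much larger networks.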
The Bayesian network framework exposes the similarities between these codes and leads the way to a new class of ``trellis-constraint codes'' which also operate close to Shannon's limit. Brendan.   From ecm at casbah.acns.nwu.edu Tue Aug 12 10:22:26 1997 From: ecm at casbah.acns.nwu.edu (Edward Malthouse) Date: Tue, 12 Aug 1997 09:22:26 -0500 (CDT) Subject: Nonlinear Principal Components Analysis Message-ID: <199708121422.JAA22318@casbah.acns.nwu.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 1470 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c7ececb8/attachment-0001.ksh From jdunn at cyllene.uwa.edu.au Tue Aug 12 00:17:16 1997 From: jdunn at cyllene.uwa.edu.au (John Dunn) Date: Tue, 12 Aug 1997 12:17:16 +0800 (WST) Subject: Eighth Australasian Mathematical Psychology Conference (AMPC97) Message-ID: <199708120417.MAA28639@cyllene.uwa.edu.au> Second Call for Papers Eighth Australasian Mathematical Psychology Conference (AMPC97) November 27-30, 1997 University of Western Australia Perth, W.A. Australia Conference organisers: John Dunn, Mike Kalish, Steve Lewandowsky Email to: mathpsych at psy.uwa.edu.au. ----------------------------------------------------- AMPC97 provides an opportunity for researchers interested in the application of mathematical analysis to psychology to meet and exchange views. Relevant domains include Experimental Psychology (particularly computational models), Cognitive Science, Connectionist Modelling, Scaling, Psychological Methods, Statistics and Test Theory. Papers are invited from researchers in all areas of mathematical psychology. Contributors are encouraged to propose topics for focused symposia, consisting of three to six papers, to present their work. The following symposia have been accepted. If you wish to present a paper at any of these, please contact the relevant convenor, listed below. Requests for additional symposia should be directed to the conference organisers at mathpsych at psy.uwa.edu.au. The current deadline for abstracts is August 31, 1997. ----------------------------------------------------- Symposia ----------------------------------------------------- Local energy detection in vision David Badcock, Universtiy of Western Australia david at psy.uwa.edu.au Nonlinear dynamics Robert A M Gregson, Australian National University Robert.Gregson at anu.edu.au Associative learning John K Kruschke, Indiana University kruschke at croton.psych.indiana.edu Computational models of memory Stephan Lewandowsky, University of Western Australia lewan at psy.uwa.edu.au Knowledge representation Josef Lukas, University of Halle, Germany j.lukas at psych.uni-halle.de Choice, decision, and measurement Anthony A J Marley, McGill University tony at hebb.psych.mcgill.ca Face recognition Alice O'Toole & Herve Abdi, University of Texas otoole at utdallas.edu Models of response time Roger Ratcliff, Northwestern University roger at eccles.psych.nwu.edu ----------------------------------------------------- Special issue of the Australian Journal of Psychology ----------------------------------------------------- A special issue of the Australian Journal of Psychology dedicated to Mathematical Psychology will be forthcoming in late 1998 or early 1999. The conference organisers, John Dunn, Mike Kalish and Stephan Lewandowsky, will act as guest editors of this issue. 
All contributors to AMPC97 are invited to submit a paper which will be fully peer reviewed by researchers who are not contributing to the special issue. The aim of the special issue is to showcase the work of Australian mathematical psychologists and to demonstrate how this work is at the forefront of international developments. We are therefore particularly interested in papers arising from international collaboration, preferably those co-authored by researchers in Australia and abroad. ----------------------------------------------------- Details concerning the conference, registration, and submission of papers are available at the AMPC97 Web site at the following URL: http://www.psy.uwa.edu.au/mathpsych/ Registration for the conference and submission of abstracts must both be completed in electronic form through the AMPC97 Web site. -----------------------------------------------------   From becker at curie.psychology.mcmaster.ca Tue Aug 12 15:27:09 1997 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Tue, 12 Aug 1997 15:27:09 -0400 (EDT) Subject: nips97 workshop: models of episodic memory and hippocampus Message-ID: NIPS*97 workshop announcement COMPUTATIONAL MODELS OF EPISODIC MEMORY AND HIPPOCAMPAL FUNCTION The aim of this one-day workshop is to bring together computational modellers (not just of the neural network type) and experimentalists to share information on data and models. This will allow us to explore how these models can be better informed by experimental data, and how they may be used to guide experimental questions. Understanding the mechanisms underlying episodic memory, and the long-term storage of temporally ordered experience in general, remains a fascinating problem for both cognitive psychology and neuroscience. Interestingly, episodic memory has long been thought to involve the hippocampus, a structure in which recent experimental and modelling advances have provided insight at both the physiological and behavioral levels. The aim of this workshop is to bring together modellers and experimenters from a wide range of disciplines to define the key aspects of human behaviour (and possibly physiology and anatomy) that a model of episodic memory should account for, and to discuss the computational mechanisms that might support it. Modellers will span a wide range of approaches, but will all make contact in some way with human memory data. One or more of the experimenters will start the workshop off by addressing the data regarding episodic memory and hippocampal function from psychology, neuropsychology, functional imaging and animal physiology. The format of the rest of the workshop will be a mixture of lectures and discussions on the merits of various modelling approaches - of which neural networks are just one example. Each speaker will have approximately 25 minutes, including about 7-10 minutes for discussion. Tentative speaker list: Paul Fletcher Mike Hasselmo Jay McClelland Janet Wiles Chip Levy Andy Yonelinas Alan Pickering Jaap Murre Mike Kahana Bill Skaggs Neil Burgess Randy O'Reilly Sue Becker Date and location: Friday December 5, in Breckenridge, Colorado, at the site of the NIPS*97 workshops following the main conference in Denver (see http://www.cs.cmu.edu/Web/Groups/NIPS for details) Call for submissions: We may have time for a small number of contributed talks. Interested participants are asked to submit by email a title, abstract and summary of relevant publications to each of the organizers.
Organizers: Sue Becker (becker at mcmaster.ca) Neil Burgess (n.burgess at ucl.ac.uk) Randy O'Reilly (oreilly at flies.mit.edu) Web page: http://claret.psychology.mcmaster.ca/becker/nips97wshop Abstracts will be added here soon. Summaries of talks will be published here after the workshop.   From giles at research.nj.nec.com Wed Aug 13 13:26:59 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 13 Aug 97 13:26:59 EDT Subject: paper on fuzzy automata and recurrent neural networks Message-ID: <9708131726.AA07646@alta> The following manuscript has been accepted in IEEE Transactions on Fuzzy Systems and is available at the WWW site listed below: www.neci.nj.nec.com/homepages/giles/papers/IEEE.TFS.fuzzy.automata.encoding.recurrent.net.ps.Z We apologize in advance for any multiple postings that may be received. *********************************************************************** Fuzzy Finite-State Automata Can Be Deterministically Encoded Into Recurrent Neural Networks Christian W. Omlin(1), Karvel K. Thornber(2), C. Lee~Giles(2,3) (1)Adaptive Computing Technologies, Troy, NY 12180 (2)NEC Research Institute, Princeton, NJ 08540 (3)UMIACS, U. of Maryland, College Park, MD 20742 ABSTRACT There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings, and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships; they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike in the case of deterministic finite-state automata (DFAs), FFAs are not in one particular state, rather each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-time, second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes a FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by correct string recognition of randomly generated FFAs. In particular, we examined how the networks' performance varies as a function of synaptic weight strengths Keywords: Fuzzy systems, fuzzy neural networks, recurrent neural networks, knowledge representation, automata, languages, nonlinear systems. - __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ==   From cmb35 at newton.cam.ac.uk Thu Aug 14 08:34:17 1997 From: cmb35 at newton.cam.ac.uk (C.M. Bishop) Date: Thu, 14 Aug 1997 13:34:17 +0100 Subject: Workshop on pulsed neural networks Message-ID: <199708141234.NAA13173@feynman> WORKSHOP ON PULSED NEURAL NETWORKS ---------------------------------- Isaac Newton Institute, Cambridge, U.K. 
26 and 27 August, 1997 Organisers: Wolfgang Maass and Chris Bishop ****** FINAL PROGRAMME ****** This workshop draws together many aspects of pulsed neural networks including computational models, theoretical analyses, neuro-biological motivation and hardware implementations. A provisional programme, together with a list of abstracts, is given below. The dates of the workshop have been chosen so that participation can easily be combined with a trip to the First European Workshop on Neuromorphic Systems (EWNS-1), August 29-31, 1997, in Stirling, Scotland (for details see: http://www.cs.stir.ac.uk/~lss/Neuromorphic/Info1.html). If you would like to attend this workshop, please complete and return the registration form below. There is no registration fee, and accommodation for participants will be available (at reasonable cost) in Wolfson Court adjacent to the Institute. This workshop will form part of the six month programme at the Isaac Newton Institute on "Neural Networks and Machine Learning". For further information about the Institute and this programme see: http://www.newton.cam.ac.uk/ http://www.newton.cam.ac.uk/programs/nnm.html If you wish to be kept informed of other workshops and seminars taking place during the programme, please subscribe to the nnm mailing list: Send mail to majordomo at newton.cam.ac.uk with a message whose BODY (not subject -- which is irrelevant) contains the line 'subscribe nnm-list your_email_address' We look forward to seeing you in Cambridge. Wolfgang Maass Chris Bishop --------------------------------------------------------------------------- REGISTRATION FORM ----------------- (Please return to H.Dawson at newton.cam.ac.uk) Last Name:....................................Title:..................... Forenames:.................................................................... Address of Home Institution: ................................... ................................... ................................... ................................... ................................... Office Phone:........................ Home Phone:........................... Fax Number:.......................... E-mail:.............................. Date of Arrival:.................... Date of Departure:.................... If you would like accommodation in Wolfson Court at 22.50 UK pounds per night for bed and breakfast, please contact Heather Dawson (H.Dawson at newton.cam.ac.uk) as soon as possible. 
------------------------------------------------------------------------------ FINAL PROGRAMME --------------------- Tuesday, August 26: 9:00 - 10:15 Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Motivation and Models for Spiking Neurons" 10:15 - 10:45 Coffee-Break 10:45 - 12:00 Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria) "Computation and Coding in Networks of Spiking Neurons" 12:00 - 14:00 Lunch 14:00 - 14:40 David Horn (Tel Aviv University, Israel) "Fast Temporal Encoding and Decoding with Spiking Neurons" 14:40 - 15:20 John Shawe-Taylor (Royal Holloway, University of London) "Neural Modelling and Implementation via Stochastic Computing" 15:20 - 16:00 Tea Break 16:00 - 16:40 Wolfgang Maass (Technische Universitaet Graz, Austria) "A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations" 16:40 - 17:20 Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System" 17:20 - 18:00 Poster-Spotlights (5 minutes each) 18:00 - 19:00 Poster-Session (with wine reception) 19:00 Barbecue dinner at the Isaac Newton Institute ------------------------ Wednesday, August 27 9:00 - 10:15 Tutorial by Alan F. Murray (University of Edinburgh) "Pulse-Based Computation in VLSI Neural Networks : Fundamentals" 10:15 - 10:40 Coffee-Break 10:40 - 11:20 Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland) "Communication and Computation using Spikes in Silicon Perceptive Systems" 11:20 - 12:00 David P.M. Northmore (University of Delaware, USA) "Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs" 12:00 - 14:00 Lunch (During lunch we will discuss plans for an edited book on pulsed neural nets) 14:00 - 14:40 Alister Hamilton (University of Edinburgh) "Pulse Based Signal Processing for Programmable Analogue VLSI" 14:40 - 15:20 Rodney Douglas (ETH Zurich, Switzerland) "A Communications Infrastructure for Neuromorphic Analog VLSI Systems" 15:20 - 15:40 Coffee-Break 15:40 - 17:00 Plenary Discussion: Artifical Pulsed Neural Nets: Prospects and Problems +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ABSTRACTS --------- (in the order of the talks) Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Motivation and Models for Spiking Neurons" In this introductory tutorial I will try to explain some basic ideas of and provide a common language for pulsed neural nets. To do so I will 0) motivate the idea of pulse coding as opposed to rate coding 1) discuss the relation between various simplified models of spiking neurons (integrate-and-fire, Hodgkin-Huxley) and argue that the Spike Response Model (=linear response kernels + threshold) is a suitable framework to think about such models. 2) discuss typical phenoma of the dynamics in populations of spiking neurons (oscillations, asynchronous states), provide stability arguments and introduce an integral equation for the population dynamics. 3) review the idea of feature binding and pattern segmentation by a 'synchronicity code'. 
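To make the first tutorial concrete for readers new to these models, here is a minimal leaky integrate-and-fire simulation in Python; the parameter values are arbitrary illustrative choices and the code is not part of the tutorial material.

import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# integrates an input current, and emits a pulse whenever it crosses a threshold.
# All parameter values are arbitrary illustrative choices.
dt, tau = 0.1, 10.0                        # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0  # dimensionless voltage units
steps = int(200.0 / dt)                    # simulate 200 ms
I = 1.5 * np.ones(steps)                   # constant suprathreshold input current

v = v_rest
spike_times = []
for k in range(steps):
    v += (-(v - v_rest) + I[k]) * dt / tau     # leaky integration (forward Euler)
    if v >= v_thresh:                          # threshold crossing -> emit a pulse
        spike_times.append(k * dt)
        v = v_reset                            # reset after the spike

print(spike_times[:5])                         # regular firing; the rate is set by the input

With a constant suprathreshold current the model fires regularly; driving it with a time-varying current instead yields input-dependent spike times.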
------------------------------------------------------------ Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria) "Computation and Coding in Networks of Spiking Neurons" This tutorial will provide an introduction to --- methods for encoding information in trains of pulses --- simplified computational models for networks of spiking neurons --- the computational power of networks of spiking neurons for concrete coding schemes --- computational consequences of synapses that are not static, but but give different "weights" to different pulses in a pulse train --- relationships between models for networks of spiking neurons and classical neural network models. ------------------------------------------------------------- David Horn (Tel Aviv University, Israel) "Fast Temporal Encoding and Decoding with Spiking Neurons" We propose a simple theoretical structure of interacting integrate and fire neurons that can handle fast information processing, and may account for the fact that only a few neuronal spikes suffice to transmit information in the brain. Using integrate and fire neurons that are subjected to individual noise and to a common external input, we calculate their first passage time (FPT), or inter-spike interval. We suggest using a population average for evaluating the FPT that represents the desired information. Instantaneous lateral excitation among these neurons helps the analysis. By employing a second layer of neurons with variable connections to the first layer, we represent the strength of the input by the number of output neurons that fire, thus decoding the temporal information. Such a model can easily lead to a logarithmic relation as in Weber's law. The latter follows naturally from information maximization, if the input strength is statistically distributed according to an approximate inverse law. ------------------------------------------- John Shawe-Taylor (Royal Holloway, University of London) "Neural Modelling and Implementation via Stochastic Computing" 'Stochastic computing' studies computation performed by manipulating streams of random bits which represent real values via a frequency encoding. The paper will review results obtained in applying this approach to neural computation. The following topics will be covered: * Basic neural modelling * Implementation of feedforward networks and learning strategies * Generalization analysis in the statistical learning framework * Recurrent networks for combinatorial optimization, simulated and mean field annealing * Applications to graph colouring * Hardware implementation in FPGAs ------------------------------------------ Wolfgang Maass (Technische Universitaet Graz, Austria) "A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations" A simple extension of standard neural network models is introduced, that provides a model for computations with pulses where both the pulse frequencies and correlations in pulse times between different pulse trains are computationally relevant. Such extension appears to be useful since it has been shown that firing correlations play a significant computational role in many biological neural systems, and there exist attempts tp transport this coding mechanism to artifical pulsed neural networks. Standard neural network models are only suitable for describing computations in terms of pulse rates. The resulting extended neural network models are still relatively simple, so that their computational power can be analyzed theoretically. 
We prove rigorous separation results, which show that the use of pulse correlations in addition to pulse rates can increase the computational power of a neural network by a significant amount. ------------------------------------------------------------ Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) "Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System" Owls can locate sound sources in complete darkness with remarkable precision. This capability requires auditory information processing with a temporal precision of less than 5 microseconds. How is this possible, given that typical neurons are at least one order of magnitude slower? In this talk, an integrate-and-fire model is presented of a neuron in the auditory system of the barn owl. Given a coherent input, the model neuron is capable of generating precisely timed output spikes. In order to make the input coherent, delay lines are tuned during an early period of the owl's development by an unsupervised learning procedure. This results in an adaptive system which develops a sensitivity to the exact timing of pulses arriving from the left and the right ear, a necessary step for the localization of external sound sources and hence prey. *************************************************************** (Abstracts of Posters: see the end of this listing) ************************************************************** ------------------------------------------------------------- Tutorial by Alan F. Murray (University of Edinburgh) "Pulse-Based Computation in VLSI Neural Networks: Fundamentals" This tutorial will present the techniques that underlie pulse generation, distribution and arithmetic in VLSI devices. The talk will concentrate on work performed in Edinburgh, but will include references to alternative approaches. Ancillary issues surrounding "neural" computation in analogue VLSI will be drawn out and the tutorial will include a brief introduction to MOSFET circuits and devices. ------------------------------------------------------------------ Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland) "Communication and Computation using Spikes in Silicon Perceptive Systems" This presentation deals with the principles, the main properties and some applications of a pulsed communication system adapted to the needs of the analog implementation of perceptive and sensory-motor systems. The interface takes advantage of the fact that activity in perception tasks is often sparsely distributed over a large number of elementary processing units (cells) and facilitates access to the communication channel for the more active cells. The resulting "open loop" communication architecture can advantageously be used to set up connections between distant cells on the same chip or point-to-point connections between cells on different chips. The system also lends itself to the simple circuit implementation of typically biological connectivity patterns such as the projection of the activity of one cell onto a region (its "projective field") of the next neural processing layer, which can be on a different chip in an actual implementation. Examples of possible applications will be drawn from the fields of vision and sensory-motor loops. ------------------------------------------------------------------ David P.M.
Northmore (University of Delaware, USA) "Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs" The dendrites of neurons probably play very important signal processing roles in the CNS, allowing large numbers of afferent spike trains to be differentially weighted and delayed, with linear and non-linear summation. Our VLSI neuromorphs capture these essential properties and demonstrate the kinds of computations involved in sensory processing. As recent neurobiology shows, dendrites also play a critical role in learning by back-propagating output spikes to recently active synapses, leading to changes in their efficacy. Using a spike distribution system we are exploring Hebbian learning in networks of neuromorphs. -------------------------------------------------- Alister Hamilton (University of Edinburgh) "Pulse Based Signal Processing for Programmable Analogue VLSI" VLSI implementations of Pulsed Neural Systems often require the use of standard signal processing functions and neural networks in order to process sensory data. This talk will introduce a new pulse based technique for implementing standard signal processing functions - the Palmo technique. The technique we have developed is fully programmable, and may be used to implement Field Programmable Mixed Signal Arrays - making it of great interest to the wider electronics community. --------------------------------------------------- Rodney Douglas (ETH Zurich, Switzerland) "A Communications Infrastructure for Neuromorphic Analog VLSI Systems" Analogs of peripheral sensory structures such as retinas and cochleas, and populations of neurons have been successfully implemented on single neuromorphic analog Very Large Scale Integration (aVLSI) chips. However, the amount of computation that can be performed on a single chip is limited. The construction of large neuromorphic systems requires a multi-chip communication framework optimized for neuromorphic aVLSI designs. We have developed one such framework. It is an asynchronous multiplexing communication network based on address event data representation (AER). In AER, analog signals from the neurons are encoded by pulse frequency modulation. These pulses are abstractly represented on a communication bus by the address of the neuron that generated it, and the timing of these address-event communicate analog information. The multiplexing used by the communication framework attempts to take advantage of the greater speed of silicon technology over biological neurons to compensate for more limited direct physical connectivity of aVLSI. The AER provides a large degree of flexibility for routing digital signals to arbitrary physical locations. ******************************************************************* POSTERS ******* Irit Opher and David Horn (Tel Aviv University, Israel) "Arrays of Pulse Coupled Neurons: Spontaneous Activity Patterns and Image Analysis" Arrays of interacting identical pulse coupled neurons can develop coherent firing patterns, such as moving stripes, rotating spirals and expanding concentric rings. We obtain all of them using a novel two variable description of integrate and fire neurons that allows for a continuum formulation of neural fields. One of these variables distinguishes between the two different states of refractoriness and depolarization and acquires topological meaning when it is turned into a field. Hence it leads to a topologic characterization of the ensuing solitary waves. 
These are limited to point-like excitations on a line and linear excitations, including all the examples quoted above, on a two-dimensional surface. A moving patch of firing activity is not an allowed solitary wave on our neural surface. Only the presence of strong inhomogeneity that destroys the neural field continuity, allows for the appearance of patchy incoherent firing patterns driven by excitatory interactions. Such a neural manifold can be used for image analysis, performing edge detection and scene segmentation, under different connectivities. Using either DOG or short range synaptic connections we obtain edge detection at times when the total activity of the system runs through a minimum. With generalized Hebbian connections the system develops temporal segmentation. Its separation power is limited to a small number of segments. ----------------------------------------------------------------- Berthold Ruf und Michael Schmitt (Technische Universitaet Graz, Austria) "Self-Organizing Maps of Spiking Neurons Using Temporal Coding" The basic idea of self-organizing maps (SOM) introduced by Kohonen, namely to map similar input patterns to contiguous locations in the output space, is not only of importance to artificial but also to biological systems, e.g. in the visual cortex. However, the standard formulation of the SOM and the corresponding learning rule are not suitable for biological systems. Here we show how networks of spiking neurons can be used to implement a variation of the SOM in temporal coding, which has the same characteristic behavior. In contrast to the standard formulation of the SOM our construction has the additional advantage that the winner among the competing neurons can be determined fast and locally. ---------------------------------------------------- Wolfgang Maass and Michael Schmitt (Technische Universitaet Graz, Austria) "On the Complexity of Learning for Networks of Spiking Neurons" In a network of spiking neurons a new set of parameters becomes relevant which has no counterpart in traditional neural network models: the time that a pulse needs to travel through a connection between two neurons (also known as ``delay'' of a connection). It is known that these delays are tuned in biological neural systems through a variety of mechanisms. We investigate the VC-dimension of networks of spiking neurons where the delays are viewed as ``programmable parameters'' and we prove tight bounds for this VC-dimension. Thus we get quantitative estimates for the diversity of functions that a network with fixed architecture can compute with different settings of its delays. It turns out that a network of spiking neurons with k adjustable delays is able to compute a much richer class of Boolean functions than a threshold circuit with k adjustable weights. The results also yield bounds for the number of training examples that an algorithm needs for tuning the delays of a network of spiking neurons. Results about the computational complexity of such algorithms are also given. ------------------------------------------------------------------ Wolfgang Maass and Thomas Natschlaeger (Technische Universitaet Graz, Austria) " Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in Temporal Coding" A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. 
It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of excitatory synapses. A corresponding layered architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs. ----------------------------------------------------------- Wolfgang Maass and Berthold Ruf (Technische Universitaet Graz, Austria) It was previously shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a spiking neuron have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand several implementations of pulsed neural nets in VLSI employ pulses that have the shape of step functions. We analyse the relevance of the shape of pulses for the computational power of formal models for pulsed neural nets. It turns out that the computational power is significantly higher if one employs pulses with a linearly increasing or decreasing segment. ******************   From michal at neuron.tau.ac.il Thu Aug 14 07:07:00 1997 From: michal at neuron.tau.ac.il (Michal Finkelman) Date: Thu, 14 Aug 1997 14:07:00 +0300 (IDT) Subject: D. Horn's 60th birthday - Symposium on Neural Comp. & Part. Physics Message-ID: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TEL AVIV UNIVERSITY THE RAYMOND & BEVERLY SACKLER FACULTY OF EXACT SCIENCES SCHOOL OF PHYSICS AND ASTRONOMY SYMPOSIUM ON PARTICLE PHYSICS AND NEURAL COMPUTATION ---------------------------------------------------- IN HONOR OF DAVID HORN'S 60TH BIRTHDAY -------------------------------------- Monday, October 27th 1997 (9:15 AM - 05:30 PM) Lev Auditorium, Tel-Aviv University PROGRAM ---------- 9:15 AM: Opening addresses: Nili Cohen, Rector of Tel-Aviv University Yuval Ne'eman (Tel Aviv) 9:30 - 10:30: Gabriele Veneziano (CERN) - From s-t-u Duality to S-T-U Duality 10:30 - 11:00: Coffee break 11:00 - 12:00: Fredrick J Gilman (Carnegie Mellon) - CP Violation 12:00 - 1:30: Lunch break 1:30 - 2:30: Leon N Cooper (Brown) - From Receptive Fields to the Cellular Basis for Learning and Memory Storage: A Unified Learning Hypothesis 2:30 - 3:30: John J Hopfield (Princeton) - How Can We Be So Smart? Information Representation and Neurobiological Computation. 3:30 - 4:00: Coffee break 4:00 - 5:00: Yakir Aharonov (Tel Aviv) - A New Approach to Quantum Mechanics 5:00 PM: David Horn - Closing Remarks %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Notes: 1. This announcement serves also as an invitation to enter the TAU campus on that date. 2. Colleages and friends who wish to attend the symposium are kindly requested to NOTIFY US IN ADVANCE by e-mailing to michal at neuron.tau.ac.il. 
fax: 972-3-6407932 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%   From ecm at casbah.acns.nwu.edu Thu Aug 14 18:05:11 1997 From: ecm at casbah.acns.nwu.edu (Edward Malthouse) Date: Thu, 14 Aug 1997 17:05:11 -0500 (CDT) Subject: Nonlinear Principal Components Analysis (fwd) Message-ID: <199708142205.RAA13356@casbah.acns.nwu.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 355 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f959c024/attachment-0001.ksh From rsun at cs.ua.edu Fri Aug 15 01:22:16 1997 From: rsun at cs.ua.edu (Ron Sun) Date: Fri, 15 Aug 1997 00:22:16 -0500 Subject: CFP: AAAI 1998 Spring Symposium on Multimodal Reasoning Message-ID: <199708150522.AAA10798@sun.cs.ua.edu> ------ CFP: AAAI 1998 Spring Symposium on Multimodal Reasoning There are a number of AI reasoning modes or paradigms that have widespread application, e.g. case-based reasoning, constraint-based reasoning, model-based reasoning, rule-based reasoning. The symposium will encourage integration of these reasoning modes, and interaction among the corresponding research communities. Topics include, but are not limited to: *Combining reasoning methods in a single application *Using one form of reasoning to support or guide another *Compiling one form of reasoning experience into another form of reasoning knowledge *Transferring successful methods from one form of reasoning to another *Interoperability of applications based on different reasoning technology *Switching among alternative forms of reasoning *Comparing and evaluating reasoning alternatives for specific problem domains *Identifying categories, structures, or properties of knowledge or tasks for which different reasoning techniques are appropriate or advantageous *Systematically relating reasoning formalisms *Demonstrating practical advantages of a multimodal approach for real problems *Identifying and exploiting commonalities Papers grounded in specific problems or domains will be welcome. More general or theoretical insights will also be appropriate. The Symposium will encourage building on the specific experiences of the attendees towards general principles of multimodal reasoning architecture, multimodal both in the sense of combining modes, and in the sense of being relevant to multiple modes. Submissions Submit an abstract of a new paper or a summary of previous relevant work. Submissions should be no more than four pages, single column, 12 point type. Include an illustrative example. E-mail PostScript of submissions to multimodal at cs.unh.edu. The symposium web page is at: www.cs.unh.edu/ccc/mm/sym.html. 
General information about the Spring Symposia can be obtained at: http://aaai.org/Symposia/Spring/1998/sssparticipation-98.html Organizing Committee Eugene Freuder (chair), University of New Hampshire, ecf at cs.unh.edu Edwina Rissland, University of Massachusetts Peter Struss, Technical University of Munich Milind Tambe, University of Southern California Program Committee Rene Bakker, Telematics Research Centre Karl Branting, University of Wyoming Nick Cercone, University of Regina Ashok Goel, Georgia Institute of Technology Vineet Gupta, Xerox Palo Alto Research Center David Leake, University of Indiana Amnon Meisels, Ben Gurion University Robert Milne, Intelligent Applications Ltd Pearl Pu, Ecole Polytechnique Fédérale de Lausanne Ron Sun, University of Alabama Jerzy Surma, Technical University of Wroclaw Katia Sycara, Carnegie Mellon University   From giles at research.nj.nec.com Mon Aug 18 13:02:10 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 18 Aug 97 13:02:10 EDT Subject: paper on intelligent methods for file system optimization Message-ID: <9708181702.AA00759@alta> The following paper, published in the Proceedings of the Fourteenth National Conference on Artificial Intelligence and the Ninth Innovative Applications of Artificial Intelligence Conference (August, 1997), is now available at the sites listed below: http://www.neci.nj.nec.com/homepages/giles/papers/AAAI-97.intelligent.file.organization.ps.Z http://envy.cs.umass.edu/People/kuvayev/index.html ftp://ftp.nj.nec.com/pub/giles/papers/AAAI-97.intelligent.file.organization.ps.Z We apologize in advance for any multiple postings that may be received. *************************************************************************** Intelligent Methods for File System Optimization L. Kuvayev(2), C. L. Giles(1,3), J. Philbin(1), H. Cejtin(1) (1)NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (2)Dept. of Computer Science, University of Massachusetts, Amherst, MA 01002 (3)Institute for Advanced Computer Studies, U. of Maryland, College Park, Md kuvayev at cs.umass.edu {giles,philbin,henry}@research.nj.nec.com ABSTRACT The speed of I/O components is a major limitation on the speed of all other major components in today's computer systems. Motivated by this, we investigated several algorithms for efficient and intelligent organization of files on a hard disk. Total access time may be decreased if files with temporal locality also have spatial locality. Three intelligent methods based on file type, frequency, and transition probability information showed up to 60% savings of total I/O time over the naive placement of files. More computationally intensive hill climbing and genetic algorithm approaches did not outperform the statistical methods. The experiments were run on a real and a simulated hard drive in single and multiple user environments. Keywords: file systems, reasoning about physical systems, Markov models, probabilistic reasoning, genetic algorithms. __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ==   From wulfram.gerstner at di.epfl.ch Tue Aug 19 02:42:00 1997 From: wulfram.gerstner at di.epfl.ch (Wulfram Gerstner) Date: Tue, 19 Aug 1997 08:42:00 +0200 Subject: ICANN_97 Oct.7-10: call for participation Message-ID: <199708190642.IAA07536@lamisun1.epfl.ch> Call for Participation: ICANN'97 in Lausanne (Switzerland).
--------- ICANN'97 ------- International Conference on Artificial Neural Networks October 7-10 - Lausanne, Switzerland Tutorials on Tuesday, October 7 Plenary and parallel sessions, October 8-10 -- The 1997 Latsis Conference -- More details on the conference, including the full program and registration forms, can be found at http://www.epfl.ch/icann97/ Email icann97 at epfl.ch Fax +41 21 693-5656 _____________________________________________________________ Conference structure """""""""""""""""""" ICANN'97 is the 7th Annual Conference of the European Neural Network Society ENNS. The program includes plenary talks and 4 tracks of parallel sessions covering the domains of Theory, Biological Models, Applications, and Implementations. All posters are complemented by short oral poster spotlight presentations. The conference starts with a Tutorial day on October 7. Tutorials ^^^^^^^^^ Y. Abu-Mostafa (USA), P. Refenes (GB) Finance Applications X. Arreguit (CH) VLSI Implementations of Vision Systems J.L. van Hemmen (D), A. Kreiter (D) Cortical Oscillations M. Opper (D) Statistical Theories of Learning Invited plenary talks ^^^^^^^^^^^^^^^^^^^^ H. Bourlard, Martigny, CH, Speech recognition S. Grossberg, Boston, USA, Visual Perception H. Markram, Rehovot, Israel, Fast Synaptic Changes E. Oja, Espoo, Finland, Independent Comp. Analysis H. Ritter, Bielefeld, D, Self-Org. Maps for Robotics T. Roska, Budapest, HU, Cellular Neural Networks R. Sutton, Amherst, USA, Markov Decision Processes V. Vapnik, Holmdel, USA, Support Vector Machines E. Vittoz, Neuchatel, CH, Bioinspired Circuits Special Invited Sessions ^^^^^^^^^^^^^^^^^^^^^^^ Cortical Maps and Receptive Fields, Temporal Patterns and Brain Dynamics, Time Series Prediction, Adaptive Autonomous Agents Regular Sessions ^^^^^^^^^^^^^^^^^ THEORY: Learning, Signal Processing, Self Organization, Recurrent Networks, Perceptrons, Kernel-based Networks BIOLOGY: Coding, Synaptic Learning, Neural Maps, Vision APPLICATIONS: Forecasting, Monitoring, Pattern Recognition, Robotics, Identification and Control IMPLEMENTATIONS: Analog VLSI, Digital Implementations _____________________________________________________________ Registration """""""""""" The registration fee includes admission to all sessions, one copy of the proceedings, coffee breaks and 3 lunches, welcome drinks and banquet.

                                                    before August 30 -- after
 Regular registration fee                               580 CHF     -- 640 CHF
 Student (with lunch, no banquet, no proceedings)       270 CHF     -- 330 CHF
 Tutorial day (October 7)                                30 CHF     --  50 CHF

Ask for a copy of the forms or look on the Web http://www.epfl.ch/icann97/ Proceedings are published by Springer, Lecture Notes in Computer Science Series _____________________________________________________________ Conference location and accommodation """"""""""""""""""""""""""""""""""""" The conference will be held at the EPFL (Swiss Federal Institute of Technology) in Lausanne. Lausanne is beautifully located on the shores of Lake Geneva, and can easily be accessed by train and plane. Hotels are in the 50 to 150 CHF range. Reservation is not handled by the conference organizers. Ask Fassbind Hotels, fax +41 21 323 0145 _____________________________________________________________ Organizers ^^^^^^^^^^ General Chairman: Wulfram Gerstner, Mantra-EPFL Co-chairmen: Alain Germond, Martin Hasler, J.D.
Nicoud, EPFL Registration secretariat: Andree Moinat, LRC-EPFL, tel +41 21 693-2661 FAX: +41 21 693 5656 ________________________________________________________ For the full program and other informations have a look at http://www.epfl.ch/icann97/ ________________________________________________________   From David_Redish at gs151.sp.cs.cmu.edu Thu Aug 21 11:17:39 1997 From: David_Redish at gs151.sp.cs.cmu.edu (David Redish) Date: Thu, 21 Aug 1997 11:17:39 -0400 Subject: Thesis available Message-ID: <23964.872176659@gs151.sp.cs.cmu.edu> My PhD thesis is now available on the WWW: BEYOND THE COGNITIVE MAP: Contributions to a Computational Neuroscience Theory of Rodent Navigation A. David Redish http://www.cs.cmu.edu/~dredish/pub/thesis.ps.gz The thesis is 452 pages (2.5M gzipped, 9.6M uncompressed). In addition to novel contributions (see below), the thesis includes a 100 page Experimental Review (Chapter 2), a 75 page Navigation overview, a 30 page overview of hippocampal theories, and over 500 references which some might find useful. I have appended the abstract for those interested. ***** NO HARDCOPIES AVAILABLE ******** I am sorry, but I due to the size of the thesis, I cannot send hardcopies. ------------------------------------------------------ Dr. A. David Redish Computer Science Department CMU graduated (!) student Center for the Neural Basis of Cognition (CNBC) http://www.cs.cmu.edu/~dredish ------------------------------------------------------------ BEYOND THE COGNITIVE MAP: Contributions to a Computational Neuroscience Theory of Rodent Navigation Ph.D Thesis A. David Redish Computer Science Department and Center for the Neural Basis of Cognition Carnegie Mellon University Rodent navigation is a unique domain for studying information processing in the brain because there is a vast literature of experimental results at many levels of description, including anatomical, behavioral, neurophysiological, and neuropharmacological. This literature provides many constraints on candidate theories. This thesis presents contributions to a theory of how rodents navigate as well as an overview of that theory and how it relates to the experimental literature. In the first half of the thesis, I present a review and overview of the rodent navigation literature, both experimental and theoretical. The key claim of the theory is that navigation can be divided into two categories: taxon/praxic navigation and locale navigation (O'Keefe and Nadel, 1978), and that locale navigation can be understood as an interaction between five subsystems: local view, head direction, path integration, place code, and goal memory (Redish and Touretzky, 1997). I bring ideas together from the extensive work done on rodent navigation over the last century to show how the interaction of these systems forms a comprehensive, computational theory of navigation. This comprehensive theory has implications for an understanding of the role of the hippocampus, suggesting that it shows three different modes: storage, recall, and replay. In the second half of the thesis, I show specific contributions to this overall theory. I report a simulation of the head direction system that can track multiple head direction speeds accurately. The simulations show that the theory implies that head direction tuning curves in the anterior thalamic nuclei should deform during rotations. This observation has been confirmed experimentally by Blair et al. (1997). 
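As a toy illustration of what tracking head direction by integrating an angular-velocity signal involves - this caricature is not the model developed in the thesis, and the unit count, tuning sharpness and velocity profile are invented - a ring of direction-tuned units can carry a bump of activity that follows the integrated heading:

import numpy as np

# Toy head-direction ring: each unit has a preferred direction; the activity bump is centred
# on the current heading, obtained by integrating an angular-velocity signal. The unit count,
# tuning sharpness and velocity profile are invented for illustration.
n_units = 60
preferred = np.linspace(0.0, 2 * np.pi, n_units, endpoint=False)
kappa = 8.0                                   # tuning sharpness
dt = 0.01                                     # s

def bump(heading):
    """Population firing rates for a given heading (von Mises tuning curves)."""
    return np.exp(kappa * (np.cos(preferred - heading) - 1.0))

heading = 0.0
angular_velocity = np.concatenate([np.full(200, 1.0), np.full(200, -3.0)])  # rad/s, two speeds

for w in angular_velocity:
    heading = (heading + w * dt) % (2 * np.pi)    # path-integrate the velocity signal
    rates = bump(heading)                         # the bump follows the integrated heading

decoded = np.angle(np.sum(rates * np.exp(1j * preferred))) % (2 * np.pi)  # population-vector decode
print(heading, decoded)                           # the decoded bump position matches the heading

In this toy version the heading variable is updated directly and the bump merely displays it; the modelling questions addressed in the thesis concern how a neural population can perform that integration itself and how the tuning curves behave during rotations.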
By examining the computational requirements and the anatomical data, I suggest that the anatomical locus of the path integrator is in a loop comprised of the subiculum, the parasubiculum, and the superficial entorhinal cortex. This contrasts with other hypotheses of the anatomical locus of path integration (e.g. hippocampus, McNaughton et al., 1996) and predicts that the hippocampus should not be involved in path integration. This prediction has been recently tested and confirmed by Alyan et al. (1997). I present simulations demonstrating the viability of the three-mode hippocampal proposal, including storage and recall of locations within single environments, with ambiguous inputs, and in multiple environments. I present simulations demonstrating the viability of the dual-role hippocampus (recall and replay), showing that the two modes can coexist within the hippocampus even though the two roles seem to require incompatible connection matrices. In addition, I present simulations of specific experiments, including:
* a simulation of the recent result from Barnes et al. (1997), showing that the model produces a bimodality in the correlations of representations of an environment in animals with deficient LTP. These simulations show that the Barnes et al. result does not necessarily imply that the intra-hippocampal connections are pre-wired to form separate charts as suggested by Samsonovich (1997);
* a simulation of Sharp et al.'s (1990) data on the interaction between entry point and external cues, showing the first simulations capable of replicating all the single place field conditions reported by Sharp et al.;
* simulations of Cheng (1986) and Margules and Gallistel (1988), showing the importance of disorientation in self-localization;
* simulations of Morris (1981), showing that the model can replicate navigation in the water maze; and
* simulations of Collett et al. (1986) and our own gerbil navigation results, showing that the model can replicate a number of reactions to different manipulations of landmark arrays.
From pcohen at cse.ogi.edu Thu Aug 21 19:31:00 1997 From: pcohen at cse.ogi.edu (Phil Cohen) Date: Thu, 21 Aug 1997 16:31:00 -0700 Subject: GESTURE RECOGNITION POSTDOCTORAL POSITION Message-ID:
GESTURE RECOGNITION POSTDOCTORAL RESEARCHER The Center for Human-Computer Communication at the Oregon Graduate Institute of Science and Technology has a postdoctoral opening for a talented researcher interested in gesture recognition in the context of multimodal communication. We have considerable experience building multimodal systems, using speech and pen-based gestures, in which speech and gesture mutually compensate for one another's errors. The position would involve research and development in gesture recognition, both 2-D and 3-D, in a multimodal environment, as well as statistical language modeling. Experience with neural networks and HMMs is essential. Of course, background in handwriting recognition would be most appropriate. Salary and benefits for this position are competitive. To apply, submit a resume, names and contact information for 3 references, a brief statement of research career interests, and date of availability. Applications received by Sept. 1 will receive priority consideration. Qualified women and minorities are encouraged to apply. Please forward applications to: Gloria McCauley, Center Administrator Center for Human-Computer Communication Department of Computer Science Oregon Graduate Institute of Science & Technology P. O.
Box 91000 Portland, Oregon 97291 Email: mccauley at cse.ogi.edu FAX: (503) 690-1548 For FEDEX, please use shipping address at: Department of Computer Science and Engineering 20000 N.W. Walker Road Beaverton, Oregon 97006 GENERAL INFORMATION ABOUT CHCC, OGI & PORTLAND AREA CHCC is an internationally known research center in the Computer Science Dept. at the Oregon Graduate Institute of Science & Technology (OGI). We are a multidisciplinary group that includes computer scientists, psychologists, and linguists dedicated to advancing the science and technology of human-computer communication. Our research results and system designs are published broadly, receive attention at the highest levels of government and industry, and are supported by Intel, Microsoft, the National Science Foundation, DARPA, ONR, and other well-known federal and corporate sponsors. The work environment at CHCC includes a new state-of-the-art data collection and intelligent systems laboratory with Pentium and Pentium Pro workstations, SGIs, Sun Sparcstations, PowerMacs, PDAs and other portable devices, and video recording and production equipment. The Computer Science Dept. at OGI is one of the most rapidly growing in the U.S., with 18 faculty, over 100 Ph.D. and M.S. students, and a departmental research budget exceeding $6M per year. The department's research activities are organized around a number of centers of excellence, including the Center for Human-Computer Communication, the Center for Spoken Language Understanding, the Pacific Software Center, the Data Intensive Systems Center, and the Center for Information Technology. OGI is located 12 miles west of Portland, Oregon, and serves the high-technology educational needs of Intel, Tektronix, Mentor Graphics, and other local corporations. Portland is a rapidly growing metropolitan area with over 1.2 M people. It offers extensive cultural, culinary, and recreational opportunities such as sailing, windsurfing, skiing, hiking, and beach sports within an hour's drive. Further information about CHCC can be found at http://www.cse.ogi.edu/CHCC Philip R. Cohen Professor and Director Center for Human-Computer Communication Dept. of Computer Science and Engineering Oregon Graduate Institute of Science and Technology 20000 NW Walker Rd. Beaverton, OR 97006 Phone: 503-690-1326. Fax: 503-690-1548 WWW: http://www.cse.ogi.edu/CHCC   From David_Redish at gs151.sp.cs.cmu.edu Fri Aug 22 10:04:30 1997 From: David_Redish at gs151.sp.cs.cmu.edu (David Redish) Date: Fri, 22 Aug 1997 10:04:30 -0400 Subject: Thesis available In-Reply-To: Your message of "Thu, 21 Aug 1997 11:17:39 EDT." <23964.872176659@gs151.sp.cs.cmu.edu> Message-ID: <26593.872258670@gs151.sp.cs.cmu.edu> Some people have balked at printing 452 pages and requested that I reformat the thesis if possible. I have been able to reformat it single spaced and save approx. 1/4 (so it is now 340 pages). ------------------------------------------------------ A. David Redish Computer Science Department CMU graduated (!) student Center for the Neural Basis of Cognition (CNBC) http://www.cs.cmu.edu/~dredish ------------------------------------------------------------ >My PhD thesis is now available on the WWW: > > BEYOND THE COGNITIVE MAP: > Contributions to a Computational Neuroscience Theory > of Rodent Navigation > > A. David Redish > > http://www.cs.cmu.edu/~dredish/pub/thesis.ps.gz Now, 340 pages.   
From jlm at cnbc.cmu.edu Fri Aug 22 17:58:10 1997 From: jlm at cnbc.cmu.edu (Jay McClelland) Date: Fri, 22 Aug 1997 17:58:10 -0400 (EDT) Subject: Post Doc Opening: Network Solutions to Cognitive Tasks Message-ID: <199708222158.RAA23107@eagle.cnbc.cmu.edu>
Post-Doctoral Opening: Computational Analysis of Neural Network Solutions to Cognitive Tasks I have three years of funding to support a post-doctoral fellow to study how task constraints and priors (both hard and soft constraints) jointly shape the representations that emerge in connectionist networks when they are applied to cognitive tasks in natural domains such as morphology, natural kind semantics, structure of language, and reading. I'm hoping to find someone familiar with the psychological, neuropsychological, and connectionist research in one or more of these areas who also possesses a firm understanding of the mathematics relevant to Bayesian/MDL formulations of network learning mechanisms. Interested applicants should send email containing a brief statement of interest, a CV, and the names, smail, and email addresses of three individuals who can be contacted for references (all ascii please!). Please also provide the same materials on paper plus copies of publications and preprints. Start date can be any time in the next 12 months. PLEASE INCLUDE THE SUBJECT LINE OF THIS MESSAGE IN THE SUBJECT OF YOUR ELECTRONIC REPLY. +-------------------------------------------------------------+ | James L. (Jay) McClelland | +-------------------------------------------------------------+ | Co-Director, Center for the Neural Basis of Cognition | | Professor of Psychology, Carnegie Mellon | | Adjunct Prof, Computer Science, Carnegie Mellon and | | Neuroscience, University of Pittsburgh | +-------------------------------------------------------------+ | jlm at cnbc.cmu.edu or | Room 115 | | mcclelland+ at cmu.edu | Mellon Institute | | 412-268-4000 (Voice) | 4400 Fifth Avenue | | 412-268-5060 (Fax) | Pittsburgh, PA 15213 | +-------------------------------------------------------------+ | Home page: http://www.cnbc.cmu.edu/people/mcclelland.html | +-------------------------------------------------------------+
From heiniw at challenge.dhp.nl Sat Aug 23 12:39:21 1997 From: heiniw at challenge.dhp.nl (Heini Withagen) Date: Sat, 23 Aug 1997 18:39:21 +0200 (MDT) Subject: PhD thesis on Analog Neural Hardware available Message-ID: <199708231639.SAA03661@challenge.dhp.nl> A non-text attachment was scrubbed... Name: not available Type: text Size: 2884 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f55a435e/attachment-0001.ksh
From tgd at CS.ORST.EDU Sun Aug 24 14:42:36 1997 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Sun, 24 Aug 1997 11:42:36 -0700 Subject: Hierarchical Reinforcement Learning (tech report) Message-ID: <199708241842.LAA07903@edison>
The following technical report is available in gzipped postscript format from ftp://ftp.cs.orst.edu/pub/tgd/papers/tr-maxq.ps.gz Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition Thomas G. Dietterich Department of Computer Science Oregon State University Corvallis, OR 97331 Abstract This paper describes the MAXQ method for hierarchical reinforcement learning based on a hierarchical decomposition of the value function and derives conditions under which the MAXQ decomposition can represent the optimal value function.
We show that for certain execution models, the MAXQ decomposition will produce better policies than Feudal Q learning.   From ataxr at IMAP1.ASU.EDU Fri Aug 22 22:07:31 1997 From: ataxr at IMAP1.ASU.EDU (Asim Roy) Date: Fri, 22 Aug 1997 22:07:31 -0400 (EDT) Subject: CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS? Message-ID: Dear Moderator, Please post this version of the note if you have not posted it already. I have reformatted it. My sincere apologies to those who get multiple copies. Asim Roy ---------------------------------- This note is to summarize the discussion that took place at ICNN'97 (International Conference on Neural Networks) in Houston in June on the topic of "Connectionist Learning: Is It Time to Reconsider the Foundations?" ICNN'97 was organized jointly by INNS (International Neural Network Society) and the IEEE Neural Network Council. The following persons were on the panel to discuss the questions being raised about classical connectionist learning: 1. Shunichi Amari 2. Eric Baum 3. Rolf Eckmiller 4. Lee Giles 5. Geoffrey Hinton 6. Teuvo Kohonen 7. Dan Levine 8. Jean Jacques Slotine 9. John Taylor 10. David Waltz 11. Paul Werbos 12. Nicolaos Karayiannis 13. Asim Roy Nick Karayiannis, General Chair of ICNN'97, moderated the panel discussion. Appendix 1 has the issues/questions being raised about classical connectionist learning. A general summary of the panel discussion as it relates to those questions is provided below. Appendix 2 provides a brief summary of what was said by individual panel members. In general, the individual panel members provided their own summaries. In some cases, they modified my draft of what they had said. This document took a while to prepare, given the fact that many of us go on vacation or to conferences during summer. A GENERAL SUMMARY OF THE PANEL DISCUSSION 1) On the issue of using memory for learning, many panel members strongly supported the idea and argued in its favor, saying that humans indeed store information in order to learn. Although there was no one who actually opposed the idea of using memory to learn, some still tend to believe that memoryless learning does indeed occur in certain situations, such as in the learning of motor skills. (I can argue very strongly that memory is indeed used in "every" learning situation, even to acquire motor skills! I plan to send out a memo on this shortly.) 2) On the question of global/local learning, many panelists agreed that global learning mechanisms are indeed used in the brain and pointed out the role of neuromodulators in transferring information to appropriate parts of the brain. Some others justified global mechanisms by saying that certain kinds of learning are only possible with "nonlocal" mechanisms. Again, although there was no one who vigorously opposed the idea of using global mechanisms to learn, some thought that some form of local learning may also be used by the brain in certain situations. 3) On the question of network design, several panelists argued that the brain must indeed know how to design networks in order to store/learn new knowledge and information. Some suggested that this design capability is derived from "experience" (as opposed to "inheritance" - David Waltz), while others mentioned "punishment/reward" mechanisms as its source (John Taylor) or implied it through the notion of "control of adaptivity" (Teuvo Kohonen). 
Shunichi Amari emphasized the network design capability from a robotics point of view, while Eric Baum said that learning beyond inherited structures involves knowing how to design networks. Perhaps all of us agree that we do indeed inherit some network structures through evolution/inheritance. But I did not hear anybody argue that our algorithms should not include network design as one of their tasks. SOME PERSONAL REMARKS ON THIS DEBATE I have come out of this debate with deep respect for the field and for many of its highly distinguished and prominent scholars. It has never been an acrimonious debate. I think most of them had been very open-minded in examining the facts and arguments against classical connectionist learning. I had vigorous arguments with some of them, but it was always friendly and very respectful. And I think we all had fun arguing about these things. I think it bodes well for the science. The culture of a scientific field depends very much on its topmost scholars. I couldn't be among a better set of scholars with higher levels of intellectual integrity. And to be honest, I was indeed pleasantly surprised when I was nominated for the INNS Governing Board membership by Prof. Shunichi Amari. After publicly challenging some of the core connectionist ideas, I was afraid that I was going to be a permanent outcast in this field. I hope that will not be true. I hope to be part of this field. I think the ICNN'97 debate was very significant and useful. First, it engaged some of the most distinguished scholars in the field. Second, there are some very significant statements from many of these scholars. Paul Werbos was the first to acknowledge that memory is indeed used for learning. I think that was an important first step in this debate. But then, there are many others. For example, Shunichi Amari's call for "a new type of mathematical theories of neural computation" is very significant indeed. And so is Teuvo Kohonen's acknowledgment of a "third level of 'control of synaptic plasticity' that was ignored in the past in connectionism." And note Dan Levine's statement that "it is indeed time to reconsider the foundations of connectionist learning," despite his emphasis that the work of the last thirty years should be built upon rather than discarded. And John Taylor's remark that "classical connectionism perhaps has too narrow a view of the brain" and that "connectionism should not be limited to traditional artificial neural networks." And Eric Baum's remarks on the brain being a multiagent system and on the limitations of classical connectionism in explaining this multiagent behavior. And Lee Giles' call for a "deeper foundation for intelligence processing." And David Waltz's story about the learning experiences of Marvin Minsky's dog will certainly be a classic. He helped to hammer in the point strongly that humans do indeed use memory to learn. WHAT DID THE DEBATE REALLY ACCOMPLISH? Overall, the debate has established the following: 1) It is no longer necessary for our learning algorithms to have local learning laws similar to the ones in back propagation, the perceptron, or the Hopfield net. This will allow us to develop much more robust and powerful learning algorithms using means that may be "nonlocal" in nature. In other words, we should be free to develop new kinds of algorithms to design and train networks without the need to use a local learning law. 2) The learning algorithms can now have better access to information much as humans do.
Humans actually have access to all kinds of information in order to learn. And they use memory to remember some of it so that they can use it in the thinking and learning process. The idea of "memoryless" learning in classical connectionist learning is unduly restrictive and completely unnatural. There is no biological or behavioral basis for it. So, our learning algorithms should now be allowed to store learning examples in order to learn and have access to other kinds of information. This will allow the algorithms to look at the information about a problem, understand its complexity and then design and train an appropriate net. All this can perhaps be summarized in one sentence: Overall, it fundamentally changes the nature of algorithms that we might call "brain-like." So Shunichi Amari's call for "a new type of mathematical theories of neural computation" couldn't have been more appropriate. In my opinion, the debate on connectionist learning does not end here - it is just the beginning. We should continue to ask critical questions and engage ourselves in vigorous debate. It doesn't make sense for a scientific field to work for years on building a theory that falls apart on first rigorous common sense examination. Many technological advances depend on this field. So we need to guard against these major pitfalls. Perhaps one of the existing newsgroups or a new one can accommodate such open debates, bringing together neuroscientists, cognitive scientists and connectionists. I don't think we can be isolated anymore. The Internet is helpful and allows us to communicate across disciplines on a worldwide basis. We should no longer be the lonely researcher with very restricted interactions. I was once a lonely researcher with many questions in my mind. So I went to Stanford University during my sabbatical and sat in David Rumelhart's and Bernie Widrow's classes to ask all kinds of questions. But there must be a better way to ask such outrageous questions. An important issue for the field is that of setting standards for our algorithms. It is imperative that we define some "external behavioral characteristics" for our so-called brain-like autonomous learning algorithms, whatever kind they may be. But I hope that this is at least a first step towards defining and developing a more rigorous science. We cannot continue to "babysit" our so-called learning algorithms. They need to be truly autonomous. With regards to all, Asim Roy Arizona State University ------------------------------------------------------------ APPENDIX 1 PANEL TITLE: "Connectionist Learning: Is it Time to Reconsider the Foundations?" ABSTRACT Classical connectionist learning is based on two key ideas. First, no training examples are to be stored by the learning algorithm in its memory (memoryless learning). It can use and perform whatever computations are needed on any particular training example, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. The second key idea is that of local learning - that the nodes of a network are autonomous learners. Local learning embodies the viewpoint that simple, autonomous learners, such as the single nodes of a network, can in fact produce complex behavior in a collective fashion. This second idea, in its purest form, implies a predefined net being provided to the algorithm for learning, such as in multilayer perceptrons. 
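As a concrete picture of these two restrictions, consider the toy sketch below (an illustration only; the code and the names in it are not from the panel discussion or from any specific algorithm mentioned there). It shows a single linear-threshold node that learns in exactly this classical style: each training example is used once, for a purely local weight update based only on the node's own input, output, and target, and is then forgotten.

# Toy sketch of "memoryless, local" learning: one node, one example at a time,
# perceptron-style update from locally available signals only.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                        # 2 inputs + bias weight, owned by the node

def stream_of_examples(n):
    # Toy stream: label is +1 iff x1 + x2 > 1 (linearly separable).
    for _ in range(n):
        x = rng.uniform(0.0, 1.0, size=2)
        yield np.append(x, 1.0), (1.0 if x.sum() > 1.0 else -1.0)

for x, target in stream_of_examples(5000):
    y = 1.0 if w @ x > 0.0 else -1.0   # the node's own output
    w += 0.01 * (target - y) * x       # local, memoryless weight update
    # x and target are now discarded; nothing is stored for later reuse

print("learned weights:", w)

Nothing about any past example survives except the current weight vector, which is precisely the restriction that the note goes on to question.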
Recently, some questions have been raised about the validity of these classical ideas. The arguments against classical ideas are simple and compelling. For example, it is a common fact that humans do remember and recall information that is provided to them as part of learning. And the task of learning is considerably easier when one remembers relevant facts and information than when one doesn't. Second, strict local learning (e.g. basic back propagation type learning) is not a feasible idea for any system, biological or otherwise. It implies predefining a network "by the system" without having seen a single training example and without having any knowledge at all of the complexity of the problem. Again, there is no system that can do that in a meaningful way. The other fallacy of the local learning idea is that it acknowledges the existence of a "master" system that provides the design so that autonomous learners can learn. Recent work has shown that much better learning algorithms, in terms of computational properties (e.g. designing and training a network in polynomial time complexity, etc.), can be developed if we don't constrain them with the restrictions of classical learning. It is, therefore, perhaps time to reexamine the ideas of what we call "brain-like learning." This panel will attempt to address some of the following questions on classical connectionist learning: 1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? 2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? 3. Who designs the network inside an autonomous learning system such as the brain? --------------------------------------------------------- APPENDIX 2 BRIEF SUMMARY OF INDIVIDUAL REMARKS 1) DR. SHUNICHI AMARI: Dr. Amari focused mainly on the neural network design and modularity of learning. Classical connectionist learning has treated microscopic aspects of learning where local generalized Hebbian rule plays a fundamental role. However, each neuron works in a network so that learning signals may be synthesized in the network nonlocally. He also said that, based on microscopic local learning rules, more macroscopic structural learning emerges such that a number of experts differentiate to play different roles cooperatively. This is a basis for concept formation and symbolization of microscopic neural excitations. He stresses that we need a new type of mathematical theories of neural computation. --------------- Shun-ichi Amari is a Professor-Emeritus at the University of Tokyo and is now working as a director of Information Processing Group in RIKEN Frontier Research Program. He has worked on mathematical theories of neural networks for thirty years, and his current interest is, among others, applications of information geometry to manifolds of neural networks. He is the past president of the International Neural Network Society (INNS), a council member of Bernoulli Society for Mathematical Statistics and Probability, IEEE Fellow, a member of Scientists Council of Japan, and served as founding Coeditor-in-Chief of Neural Networks. He is recipient of Japan Academy Award, IEEE Emanuel R. Piore Award, IEEE Neural Networks Pioneer Award, and so on. ---------------------------------------------------------- 2) DR. ERIC BAUM: Dr. 
Baum remarked that a number of disciplines have independently reached a near consensus that the brain is a multiagent system that computes using the interaction of modules that are large compared to neurons. These different disciplines offer different pictures of what the modules are and how they interact, and it is illuminating to compare these different insights. Evolutionary psychologists talk about modules evolving, as we have evolved different organs to perform different tasks. They have presented the Wason selection test, a psychophysical test of reasoning which seems to indicate that humans have a module specifically for reasoning about social interactions and cheating detection. Brain imaging presents a physical picture of modules interacting and will give great insight into the nature of how modules interact to compute. Stroke and other lesion victims give insights into deficits that can arise from damage to a single module. Lakoff and Johnson's observation that language is metaphorical can be viewed in modular terms. For example, time is money: you buy time, save time, invest your time wisely, live on borrowed time, etc. What is this but a manifestation of a module for valuable resource management that is applied in different contexts? Dr. Baum also remarked that evolution has built massive knowledge into us at birth. This knowledge guides our learning. Much of it is manifested in a detailed intermediate reward function (pleasure, pain) that guides our reinforcement learning. There is copious evidence of built-in knowledge -- for example, consider the difference in personalities of a Labrador retriever and a sheepdog. Or, for example, consider experiments showing that monkeys, born in the lab without fear of snakes, can acquire fear of snakes from seeing a video of a monkey scared of snakes, yet they will not acquire a fear of flowers from seeing a video of a monkey recoiling from a flower (Mineka et al., Animal Learning and Behavior, 8:653, 1980). Thus learning is a two-phase process, learning during evolution followed by learning during life (and actually three-phase if you consider technology). Dr. Baum remarked that traditional neural theories do not seem to encompass this modular nature well. In his opinion, the critical question in managing the interaction of agents is ensuring that the individual agents all see the correct incentive. This, he feels, implies that the multiagent model is essentially an economic model, and he said that he is working in this direction. Other features of intelligence not well handled by standard connectionist approaches include metalearning and metacomputing. People are able to learn new concepts from a single example, which requires recursively applying one's knowledge to learning. Creatures need to be able to decide what to compute and when to stop computing and act, which again indicates a recursive nature of intelligence. It is not clear how one could ever deal with these problems in a connectionist framework, but they seem natural within the context of multiagent economies. ------------- Eric Baum received B.A. and M.A. degrees in physics from Harvard University in 1978 and a Ph.D. in physics from Princeton University in 1982. He has since held positions at Berkeley, M.I.T., Caltech, J.P.L., and Princeton University and has for eight years now been a Senior Research Scientist in the Computer Science Division of the NEC Research Institute, Princeton N.J.
His primary research interests are in Cognition, Artificial Intelligence, Computational Learning Theory, and Neural Networks, but he has also been active in the nascent field of DNA Based Computers, co-chairing the first and chairing the second workshops on DNA Based Computers. His papers include:
* "Zero Cosmological Constant from Minimum Action", Physics Letters, V 133B, p. 185 (1983)
* "What Size Net Gives Valid Generalization", with D. Haussler, Neural Computation, v1, pp. 148-157 (1989)
* "Neural Net Algorithms that Learn in Polynomial Time from Examples and Queries", IEEE Transactions on Neural Networks, V2, No. 1, pp. 5-19 (1991)
* "Best Play for Imperfect Players and Game Tree Search - Part 1 Theory", E. B. Baum and W. D. Smith (submitted)
* "Where Genetic Algorithms Excel", E. B. Baum, D. Boneh, and C. Garrett (submitted)
* "Toward a Model of Mind as a Laissez-Faire Economy of Idiots, Extended Abstract", Proceedings of the 13th International Conference on Machine Learning, pp. 28-36, Morgan Kaufmann (1996).
----------------------------------------------------- 3) DR. ROLF ECKMILLER: Dr. Eckmiller presented three theses about brain-like learning. First is the notion of factories or modular subsystems. Second, neural networks belong to the geometrical or topological theory space and not to the algebraic or analytical theory space. Hence using notions of Von Neumann computing in our analysis might be equivalent to "barking up the wrong tree." Third, he called upon the research community to develop a new wave of neural computers - ones that can adapt weights and time delays, build new layers and structures, and build and integrate connections between various parts of the brain. He said that "biological systems are amathematical" and therefore need new mathematical tools for analysis. ------------- Rolf Eckmiller was born in Berlin, Germany, in 1942. He received his M.Eng. and Dr. Eng. (with honors) degrees in electrical engineering from the Technical University of Berlin, in 1967 and 1971, respectively. Between 1967 and 1978, he worked in the fields of neurophysiology and neural net research at the Free University of Berlin, and received the habilitation for sensory and neurophysiology in 1976. From 1972 to 1973 and from 1977 to 1978, he was a visiting scientist at UC Berkeley and the Smith-Kettlewell Eye Research Foundation in San Francisco. From 1979 to 1992, he was professor at the University of Düsseldorf. Since 1992, he has been professor and head of the Division of Neuroinformatics, Department of Computer Science at the University of Bonn. His research interests include vision, eye movements in primates, neural nets for motor control in intelligent robots, and neurotechnology with emphasis on retina implants. ---------------------------------------------------------- 4) DR. LEE GILES: Dr. Giles opened his discussion by stating that the connectionist field has always been a very self-critical one that has always been receptive to new ideas. Furthermore, the topics proposed here have been discussed to some extent in the past but are important ones and certainly worth reevaluation. As an example, the idea of using memory in learning was one of the earliest ideas in neural networks and was proposed in the seminal 1943 paper of McCulloch and Pitts. Today, memory structures are used extensively in neural models concerned with temporal and sequence analysis. For example, recurrent neural networks have successfully been used for such problems as time series prediction, signal processing, and control.
In discrete time recurrent networks, memory structure and usage are very important both to the network's performance and to its computational power. Dr. Giles then stated that there is still a great deal of discrepancy between what our current models can do in theory and what they can do in practice, and that a deeper foundation for intelligence processing needs to be established. One approach is to look at hybrid systems, models that combine many different learning and intelligence paradigms - neural networks, AI, etc. - and develop the foundations of intelligent systems by exploring hybrid system fundamentals. As an example of what intelligent systems can't do but should be able to, Dr. Giles showed examples of a pattern classification problem taken from the book "Pattern Recognition" by M. Bongard. Here one sees six exemplar pictures from one class and six from another. The pattern classification task is to extract the rule(s) that differentiate one class from the other - a problem that humans can solve but that no machine currently seems close to solving. Bongard constructed 100 of these problems. Not only is learning involved, but so are reasoning and explanation. This problem by Bongard is an example of the types of problems we should be trying to solve, and the questions raised in solving it will give us insights into constructing and understanding intelligent systems. Reference: M. Bongard, "Pattern Recognition", Spartan Books, 1970. ---------------- C. Lee Giles is a Senior Research Scientist in Computer Science at NEC Research Institute, Princeton, NJ and Adjunct Faculty at the University of Maryland Institute for Advanced Computer Studies, College Park, Md. His current research interests are: novel applications of neural networks, machine learning and AI in the WWW, communications, computing and computers, multi-media, adaptive control, system identification, language processing, time series and finance; and dynamically-driven recurrent neural networks - their computational and processing capabilities and relationships to other adaptive, learning and intelligent paradigms. Dr. Giles was one of the founding members of the Governors Board of the International Neural Network Society and is a member of the IEEE Neural Networks Council Technical Committee. He has served or is currently serving on the editorial boards of IEEE Transactions on Neural Networks, IEEE Transactions on Knowledge and Data Engineering, Journal of Computational Intelligence in Finance, Journal of Parallel and Distributed Computing, Neural Networks, Neural Computation, Optical Computing and Processing, Applied Optics, and Academic Press. Dr. Giles is a Fellow of the IEEE, a member of AAAI, ACM, INNS, the OSA, and DIMACS - Rutgers University Center for Discrete Mathematics and Theoretical Computer Science. Previously, he was a Program Manager at the Air Force Office of Scientific Research in Washington, D.C., where he initiated and managed basic research programs in Neural Networks and in Optics in Computing and Processing. ----------------------------------------------------------- 5) DR. GEOFFREY HINTON: Dr. Hinton started by pointing out the weaknesses of the back-propagation algorithm in learning and in certain pattern recognition tasks. He then focused on the good properties of Bayesian networks and showed how well the Bayesian networks do on a certain pattern recognition task.
He believes that prescriptions from well-known researchers about necessary conditions on biologically realistic learning algorithms are of some sociological interest but are unlikely to lead to radically new ideas. -------- Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He is currently a fellow of the Canadian Institute for Advanced Research and professor of Computer Science and Psychology at the University of Toronto. He does research on ways of using neural networks for learning, memory, perception and symbol processing and has over 100 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that is now widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, and Helmholtz machines. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input. He serves on the editorial boards of the journals Artificial Intelligence, Neural Computation, and Cognitive Science. He is a fellow of the Royal Society of Canada and of the American Association for Artificial Intelligence and a former President of the Cognitive Science Society. --------------------------------------------------------- 6) DR. TEUVO KOHONEN: Dr. Kohonen felt that perhaps we should go back to the basics to answer some of the questions being raised about connectionist learning, especially concerning the right forms of transfer functions and learning laws. He then talked about three levels of neural functions. At the lowest level, he mentioned the idea of activation and inhibition as coming from the old views held in medical science. At the next level, the links between neurons get modified and change over time. This view was introduced to neural science by theorists. He then mentioned that many earlier and recent neurobiological findings reveal that there is another, third level of control in the brain that controls the adaptivity of networks, thereby implying certain "nonlocal" brain mechanisms and their role in designing and training networks. He called this third level "control of synaptic plasticity" and noted that it was ignored in the past in connectionism. He jokingly mentioned that his controversial views had developed along different lines over a long time since he is coming "from another planet" (Finland, that is). The audience laughed and applauded him heartily. ------------ Teuvo Kohonen, Dr. Eng., Professor of the Academy of Finland, head of the Neural Networks Research Centre, Helsinki University of Technology, Finland. His research areas are associative memories, neural networks, and pattern recognition, in which he has published over 200 research papers and four monographs. His fifth book is on digital computers. Since the 1960s, Professor Kohonen has introduced several new concepts to neural computing: fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, self-organizing feature maps, learning vector quantization, and novel algorithms for symbol processing like redundant hash addressing and dynamically expanding context. The best-known application of his work is the neural speech recognition system. Prof. Kohonen has also done design work for electronics industries.
He is the recipient of the Honorary Prize of the Emil Aaltonen Foundation in 1983, the Cultural Prize of the Finnish Commercial Television (MTV) in 1984, the IEEE Neural Networks Council Pioneer Award in 1991, the International Neural Network Society Lifetime Achievement Award in 1992, the Prize of the Finnish Cultural Foundation in 1994, the Technical Achievement Award of the IEEE Signal Processing Society in 1995, the Centennial Prize of the Finnish Association of Graduate Engineers in 1996, the King-Sun Fu Prize in 1996, and others. He is Honorary Doctor of the University of York in the U.K. and Abo Akademi in Finland, member of Academia Scientiarum et Artium Europaea, titular member of the Academie Europeenne des Sciences, des Arts et des Lettres, member of the Finnish Academy of Sciences and the Finnish Academy of Engineering Sciences, IEEE Fellow, and Honorary Member of the Pattern Recognition Society of Finland as well as the Finnish Society for Medical Physics and Medical Engineering. He was elected the First Vice President of the International Association for Pattern Recognition for the period 1982 - 84, and acted as the first President of the European Neural Network Society during 1991 - 92. -------------------------------------------------------- 7) DR. DANIEL LEVINE: Dr. Levine agreed that it was indeed time to reconsider the foundations of connectionist learning. He mentioned that he had been eager to defend classical connectionist ideas, but then changed his mind because of some of his recent work on analogy formation and because of work in neuroscience on the role of neuromodulators and neurotransmitters. He was of the view that there have to be some "nonlocal" learning mechanisms at work, particularly because learning of analogies requires that we not only learn to associate two concepts as in traditional Hebbian learning, but learn the nature of the association. (Example: simply associating Houston with Texas isn't enough to tell us that Houston is "in" Texas.) Such nonlocal processes may, he added, provide more efficient mechanisms for property inheritance and property transfers. But Dr. Levine said that reconsidering the foundations of connectionism does not mean throwing out all existing work but building on it. Specifically, connectionist principles such as associative learning, competition, and resonance, which have been used in models of pattern recognition and classical conditioning, can also be used in different combinations as building blocks in connectionist models of more complex cognitive processes. In these more complex networks, neuromodulation (via a transmitter "broadcast" from a distant source node) is likely to play an important role in selectively amplifying particular subprocesses based on context signals. -------------- DANIEL LEVINE is Professor of Psychology at the University of Texas at Arlington. Dr. Levine holds a Ph.D. in Applied Mathematics from the Massachusetts Institute of Technology and was a Postdoctoral Trainee in Physiology at the University of California at Los Angeles School of Medicine. His main recent area of research has been neural network models for the involvement of the frontal lobes in high-level cognitive tasks and in brain executive function, including their possible connections with the limbic system and basal ganglia. He has also recently published a network model of the effects of context on preference in multiattribute decision-making.
Other areas in which he has published include models of attentional effects in Pavlovian conditioning, dynamics of nonlinear attractor networks, and models of visual illusions. Dr. Levine is the author of the textbook "Introduction to Neural and Cognitive Modeling" and senior editor of three books that have arisen out of conferences sponsored by the Dallas-Fort Worth-based Metroplex Institute for Neural Dynamics (M.I.N.D.). He has been on the editorial board of Neural Networks since 1988, serving as Book Review Editor from 1988 to 1995 and Newsletter editor from 1995 to the present. He has been a member of the INNS Board of Governors since 1995 and is currently a candidate for President-Elect of INNS. He is a Program Co-Chair for the International Joint Conference on Neural Networks in 1997, sponsored by IEEE and INNS. -------------------------------------------------------- 8) DR. JEAN JACQUES SLOTINE: The issue of learning on an as-needed basis may not have yet received enough attention. Consider for example a robot manipulator, initially at rest under gravity forces, and whose desired task is to just stay there; no control needs to be applied and no adaptation needs to occur, and this is indeed what a good adaptive controller, whether model-based, parametrized, or "neural", will do -- actually, doing anything else, e.g. moving so as to acquire parameter information, would distract it from its task. Conversely, if the robot is required to follow a desired trajectory so complicated that exact trajectory tracking necessarily requires an exact learning of the robot dynamics, then the guaranteed tracking convergence of the same adaptive algorithm will automatically guarantee such learning. While these issues are now well understood in a feedback control context, they may be of interest in a more general setting, since learning often seems to be equated with learning a whole system model, rather than with faster, simpler, purely goal-directed learning. The issue of transmission or computing delays, and the constraints they impose on stable learning, also seems to deserve increased attention. ------------ Jean-Jacques Slotine was born in Paris in 1959, and received his Ph.D. from the Massachusetts Institute of Technology in 1983. After working at Bell Labs in the computer research department, in 1984 he joined the faculty at MIT, where he is now Professor of Mechanical Engineering and Information Sciences, Professor of Brain and Cognitive Sciences, and Director of the Nonlinear Systems Laboratory. He is the co-author of the textbooks "Robot Analysis and Control" (Wiley, 1986) and "Applied Nonlinear Control" (Prentice-Hall, 1991). ----------------------------------------------------------- 9) DR. JOHN TAYLOR: Dr. Taylor said that having worked at the Brain Institute in Germany for the last year, he now has a new and different view of connectionism. He said that classical connectionism perhaps has too narrow a view of the brain. He then mentioned that the brain has a modular structure with three basic regions (nonconscious regions, conscious regions, and regions for reasoning, decision-making and so on). According to him, the following are some of the important characteristics of the brain:
1) the use of time in discrete chunks, or packets, so that there are three regimes: one at a few tens of milliseconds, one on the order of seconds and the third at about a minute. The first of these is involved in sensory processing, the second in higher order processing and the third in frontal 'reasoning'.
The source of these longer times is as yet unknown but is very important.
2) the effects of neuromodulation from the punishment/reward system, which provides a global signal,
3) the distribution or break-down of complex tasks into sub-tasks which are themselves performed by smaller numbers of modules - the principle of divide and conquer! It is these networks which are now being uncovered by brain imaging; how they function in detail will be the next big task in neuroscience for the next century.
4) the use of a whole battery of neurotransmitters both for ongoing transmission of information and for learning changes brought about locally or globally.
He emphasized that memory is indeed used in learning and that in addition to memory at the higher level (long term memory), there is working memory and memory in the time delays. With regard to the issue of global/local learning, he mentioned that neuromodulation possibly plays a role in passing global signals. As to the question of network design, he said that the networks are designed by the brain as a function of punishments and rewards coming from the environment. In closing, he articulated the view that connectionism should not be limited to traditional artificial neural networks, but must include new knowledge being discovered in computational neuroscience. -------------- John G. Taylor has been involved in Neural Networks since 1969, when he developed an analysis of synaptic noise in neural transmission, which has more recently been turned into a neural chip (the pRAM) with on-chip learning. He is interested in a broad range of neural network questions, from theory of learning and the use of dynamical systems theory and spin glasses to cognitive understanding up to consciousness. He is presently Director of the Centre for Neural Networks, King's College London, and a Guest Scientist at the Research Centre Juelich, where he is involved in developing new tools for analysing brain imaging data and performing experiments to detect the emergence of consciousness. He has published over 400 scientific papers in all, as well as over a dozen books, and has edited as many again. He was INNS President in 1995 and is currently European Editor-in-Chief of the journal 'Neural Networks', a Governor of INNS and a Vice-President of the European Neural Network Society. --------------------------------------------------------- 10) DR. DAVID WALTZ: Dr. Waltz articulated the viewpoint that brains indeed use memory to learn. He said that we do remember important experiences in life and then told a story about Marvin Minsky's dog and the use of memory to learn (a true story). Minsky's dog had formed the habit of chasing cars and biting their tires during her regular walks. One day, while trying to do this, she slipped the leash and got run over and injured by a car at a certain street corner. From then on, she was extremely reluctant to go near that particular street corner where the accident occurred, but continued to chase cars whenever possible (vivid memories and a wrong learning experience). While people (and animals) can generally learn better than this, vivid memories are probably shared by - and important to - all higher organisms. Dr. Waltz also emphasized the non-minimal nature of the brain in the sense that it tries to remember a lot of things in order to learn. For example, imagine that an intelligent system encounters a situation that leads to a very negative outcome, and then later encounters a similar situation that has a positive or neutral outcome.
It is important that enough features of the original situation be remembered, so that the system can distinguish these situations in the future, and act accordingly. If the initial situation is not remembered, but has just been used to make synaptic weight changes, then the system will have no way to find features that could distinguish these cases in the future. So, with regard to the basic questions, he agreed that we do indeed use memory to learn in many cases, though not in every case (e.g. motor skills). On the network design issue, he said that some networks have been designed through evolution, but that other networks are indeed designed by the brain through "experience." On global/local learning, he speculated that perhaps both kinds exist. --------------- David Waltz is Vice President, Computer Science Research at the NEC Research Institute in Princeton, NJ, and an Adjunct Professor at Brandeis University. From 1984-93, he was Director of Advanced Information Systems at Thinking Machines Corporation and Professor of Computer Science at Brandeis. From 1974-83 he was a Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Dr. Waltz received SB, SM, and Ph.D. degrees from MIT, in 1965, 1968, and 1972, respectively. His research interests have included constraint propagation, massively parallel systems for relational and text databases, memory-based reasoning systems, protein structure prediction using hybrid neural net and memory-based methods, connectionist models for natural language processing, and natural language processing. He is President of the American Association for Artificial Intelligence and was elected a fellow of AAAI in 1990. He has served as President of ACM SIGART, Executive Editor of Cognitive Science, and AI Editor for Communications of the ACM. He is a senior member of IEEE. --------------------------------------------------------- 11) DR. PAUL WERBOS: Dr. Werbos agreed strongly with the importance of memory-based learning. He argued that new neural network designs, using memory-based approaches, could help to solve the classical dilemma of learning speed versus generalization ability, which has plagued many practical applications of neural networks. He referred back to his idea of "syncretism," expressed in chapter 3 of the Handbook of Intelligent Control and in his paper on supervised learning in WCNN93 (and Roychowdhury's book). He believes that such mechanisms are essential to explaining certain capabilities in the neocortex, reflected in psychology. However, he does not argue that such mechanisms are present in ALL parts of the brain; for example, slower learning, based on simpler circuitry, does seem to occur in motor systems like the cerebellum. Higher motor systems, such as the neocortex/basal-ganglia/thalamus loops, clearly include memory-based learning; however, Houk, Ito and others have clearly shown that some degree of real-time weight-based learning does exist in the cerebellum. Recent experiments have hinted that even the cerebellum might be trained in part based on a replay of memories initially stored in the cerebral cortex; however, there are reasons to withhold judgment about that idea at the present time. Regarding local learning, he stressed that some of the broader discussions tend to mix up several different notions of "locality," each of which needs to be evaluated separately.
Massively parallel distributed processing still remains a fundamental design principle, of both biological and practical importance. This in no way rules out the presence of some global signals such as "clocks," which are crucial to many designs, and which Llinas and others have found to be pervasive in the brain. Likewise, it does not rule out subsystems for "growing" and "pruning" connections, which are already well established in the connectionist literature (discussed in chapter 10 of the Handbook of Intelligent Control, and in many other places). Regarding the role of learning versus evolution, he does not see the same kind of "either-or" choice that many people assume. His views are expressed in detail in a new paper to appear in Pribram's new book on Values from Erlbaum, 1997, reflecting the new "decision block" or "three brain" design discussed at this conference. -------- Dr. Paul J. Werbos holds 4 degrees from Harvard University and the London School of Economics, covering economics, mathematical physics, decision and control, and the backpropagation algorithm. His 1974 PhD thesis presented the true backpropagation algorithm for the first time, permitting the efficient calculation of derivatives and adaptation of all kinds of nonlinear sparse structures, including neural networks; it has been reprinted in its entirety in his book, The Roots of Backpropagation, Wiley, 1994, along with several related seminal and tutorial papers. In these and other more recent papers, he has described how backpropagation may be incorporated into new intelligent control designs with extensive parallels to the structure of the human brain. See the hot links on www.nsf.gov/eng/ecs/enginsys.htm Dr. Werbos runs the Neuroengineering program and the SBIR Next Generation Vehicle program at the National Science Foundation. He is Past President of the International Neural Network Society (INNS), and is currently on the governing boards both of INNS and of the IEEE Society for Systems, Man and Cybernetics. Prior to NSF, he worked at the University of Maryland and the U.S. Department of Energy. He was born in 1947 near Philadelphia, Pennsylvania, has three children, and attends Quaker meetings. His publications range from neural networks through to quantum foundations, energy economics, and issues of consciousness. ---------------------------------------------------
From herbert.jaeger at gmd.de Mon Aug 25 09:10:49 1997 From: herbert.jaeger at gmd.de (Herbert Jaeger) Date: Mon, 25 Aug 1997 15:10:49 +0200 Subject: Beyond hidden Markov models: new techreports Message-ID: <34018456.7DFE@gmd.de>
BEYOND HIDDEN MARKOV MODELS Two technical reports on stochastic time series modeling available ABSTRACT. Hidden Markov models (HMMs) provide widely used techniques for analysing discrete stochastic sequences. HMMs are induced from empirical data by gradient descent methods, which are computationally expensive, involve heuristic pre-estimation of model structure, and can get trapped in local optima. Furthermore, HMMs are mathematically not well understood. In particular, model equivalence cannot be characterised. A new class of stochastic models, "observable operator models" (OOMs), presents an advance over HMMs in the following respects: - OOMs are more general than HMMs, i.e. processes modeled by OOMs are a proper superclass of those modeled by HMMs. - Equivalence of OOMs can be characterized algebraically. - A *constructive* algorithm allows one to reconstruct OOMs from empirical time series.
This algorithm is extremely fast and transparent (boiling down essentially to a single matrix inversion). - OOMs reveal fundamental connections of stochastic processes with information theory and dynamical systems theory. The basic mathematical theory of OOMs, and their relation to HMMs, is described in: Herbert Jaeger: Observable Operator Models and Conditioned Continuation Representations. Arbeitspapiere der GMD 1043, GMD, St. Augustin 1997 (38 pp). The induction algorithm, and a standardized graphical representation of OOM-generated processes, is described in: Herbert Jaeger: Observable Operator Models II: Interpretable models and model induction. Arbeitspapiere der GMD 1083, GMD, St. Augustin 1997 (33 pp) Both papers can be fetched electronically from the author's webpage (see below) or directly from the following ftp-site: ftp://ftp.gmd.de/GMD/ai-research/Publications/1997/ (files jaeger.97.{oom,oom2}.{ps.gz,pdf}) ---------------------------------------------------------------- Dr. Herbert Jaeger Phone +49-2241-14-2253 German National Research Center Fax +49-2241-14-2384 for Information Technology (GMD) email herbert.jaeger at gmd.de FIT.KI Schloss Birlinghoven D-53754 Sankt Augustin, Germany http://www.gmd.de/People/Herbert.Jaeger/ ----------------------------------------------------------------   From gluck at pavlov.rutgers.edu Mon Aug 25 14:14:39 1997 From: gluck at pavlov.rutgers.edu (Mark A. Gluck) Date: Mon, 25 Aug 1997 10:14:39 -0800 Subject: RA or Postdoc Position in Computational Neuroscience at Rutgers-Newark Message-ID: We are looking to hire a good new person in my lab to work with us on the computational modelling of the memory circuits in hippocampus and cortex and basal forebrain, etc., which are involved in both animal and human learning. The candidate should be a person well-trained in basic neural-net modelling and theory, who can program NN's like a wiz. Their neuroscience training is not key as we can train someone in the biology and behavior. In fact, this could be a good opportunity for someone who is well trained with a computer science or EE background in NN's, but wants to explore and train for a career move into more biologically and behaviorally oriented modelling. We also have an active experimental animal lab as well as extensive neuropsychological studies of human memory which can also provide important training opportunities in the experimental and empirical aspect of computational neuroscience. Depending on the applicant's background, we can hire either a full-time RA/programmer who has an undergraduate background in NN; perhaps someone who is considering graduate school in a year or two. Alternatively, a postdoctoral position could be found for someone with a PhD in EE or CS with strong NN programming experience, who would be interested in a training situation that would prepare them to work in computational neuroscience, with an emphasis on empirically-constrained models and theories of brain function. More information on our lab and research can be found on the web page noted below. Anyone interested should email me with a cover letter stating his or her background and career goals. Rutgers-Newark has a large and growing group of connectionist modellers working in cognitive neuroscience and computational neuroscience at Rutgers-Newark. In addition to myself, this group includes Catherine Myers, Stephen Hanson, Mike Casey, Ralph Siegel, Michael Recce, and Ben Martin-Bly. - Mark Gluck _____________________________________________________________ Dr. 
Dr. Mark A. Gluck, Associate Professor
Center for Molecular & Behavioral Neuroscience
Rutgers University
197 University Ave.
Newark, New Jersey 07102
Phone: (973) 353-1080 (Ext. 3221)
Fax: (973) 353-1272
Cellular: (917) 855-8906
Email: gluck at pavlov.rutgers.edu
WWW Homepage: www.gluck.edu
_____________________________________________________________

From karaali at ukraine.corp.mot.com Wed Aug 27 14:31:25 1997 From: karaali at ukraine.corp.mot.com (Orhan Karaali) Date: Wed, 27 Aug 1997 13:31:25 -0500 Subject: Speech Synthesis Position Message-ID: <199708271831.NAA10591@fiji.mot.com>

MOTOROLA is a leading provider of wireless communications, semiconductors, and advanced electronic systems, components, and services. Motorola's Chicago Corporate Research Laboratory in Schaumburg, IL, is seeking a researcher to join its Speech Synthesis and Machine Learning Group. The Speech Synthesis and Machine Learning Group at Motorola has developed innovative neural network and signal processing technologies for speech synthesis and speech recognition applications.

Position Description: The duties of the position include applied research as well as software design and development. Innovation in research, application of technology, and a high level of motivation are the standard for all members of the team. The ability to work within a group to quickly implement and evaluate algorithms in a rapid research/development cycle is essential.

Candidate Qualifications:
* M.S. or Ph.D. in EE, CS, or a related discipline
* Strong programming skills in the C++ language and knowledge of object-oriented programming techniques
* Good written and oral communication skills
* Expertise in at least one of the four fields listed below:
  * Computational and corpus linguistics, text processing, SGML.
  * Neural networks, genetic algorithms, decision trees.
  * Signal and speech processing, speech coders, speech production.
  * WIN32 and graphics (Direct3D and OpenGL) programming.

We offer an excellent salary and benefits package. For consideration, please send or fax your resume to: Motorola Corporate, Dept. T7100, 1303 E. Algonquin Rd., Schaumburg, IL 60196, Fax: (847) 538-4688.

Motorola is an Equal Employment Opportunity / Affirmative Action employer. We welcome and encourage diversity in our workforce. Proof of identity and eligibility to be employed in the United States is required.

MOTOROLA What you never thought possible(TM)

From bert at mbfys.kun.nl Thu Aug 28 04:39:22 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Thu, 28 Aug 1997 10:39:22 +0200 Subject: Boltzmann Machine learning using mean field theory ... Message-ID: <199708280839.KAA27865@bertus>

Dear Connectionists,

The following article

Boltzmann Machine learning using mean field theory and linear response correction

written by Hilbert Kappen and Paco Rodrigues

Abstract: The learning process in Boltzmann Machines is computationally intractable. We present a new approximate learning algorithm for Boltzmann Machines, which is based on mean field theory and the linear response theorem. The computational complexity of the algorithm is cubic in the number of neurons. In the absence of hidden units, we show how the weights can be directly computed from the fixed point equation of the learning rules. We show that the solution of this method is close to optimal.
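As a rough sketch of the fully visible case described in the abstract (this is not the authors' code; the regularization constant, the clipping of the means, and the toy data are assumptions), the weights can be read off directly from the mean-field and linear-response fixed-point equations roughly like this in Python:

import numpy as np

def mf_lr_boltzmann_fit(S, eps=1e-6):
    """S: (num_samples, num_units) array with entries in {-1, +1}.
    Returns couplings w and thresholds theta of a fully visible Boltzmann Machine,
    obtained directly from the mean-field / linear-response fixed-point equations."""
    m = np.clip(S.mean(axis=0), -0.98, 0.98)        # clamped means <s_i>
    C = S.T @ S / len(S) - np.outer(m, m)           # connected correlations <s_i s_j> - m_i m_j
    C_inv = np.linalg.inv(C + eps * np.eye(S.shape[1]))
    # Linear response relates correlations to couplings: (C^-1)_ij = delta_ij/(1 - m_i^2) - w_ij
    w = np.diag(1.0 / (1.0 - m ** 2)) - C_inv
    np.fill_diagonal(w, 0.0)                        # no self-couplings
    # Mean-field fixed point m_i = tanh(sum_j w_ij m_j + theta_i) then gives the thresholds
    theta = np.arctanh(m) - w @ m
    return w, theta

# Toy usage: unit 1 copies unit 0 ninety percent of the time; unit 2 is independent
rng = np.random.default_rng(0)
base = rng.choice([-1, 1], size=(2000, 1))
copy = np.where(rng.random((2000, 1)) < 0.9, base, -base)
S = np.hstack([base, copy, rng.choice([-1, 1], size=(2000, 1))])
w, theta = mf_lr_boltzmann_fit(S)
print(np.round(w, 2))   # expect a clearly positive coupling between units 0 and 1

The hidden-unit case and the accuracy analysis are the subject of the paper itself; the point here is only that, with all units visible, no iterative sampling is needed.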
The article, which will appear in the proceedings of NIPS 1997 (ed. Michael Kearns), can now be downloaded from: ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NIPS.ps.Z

Yours sincerely,
Hilbert Kappen

FTP INSTRUCTIONS

unix% ftp ftp.mbfys.kun.nl
Name: anonymous
Password: (use your e-mail address)
ftp> cd snn/pub/reports/
ftp> binary
ftp> get Kappen.LR_NIPS.ps.Z
ftp> bye
unix% uncompress Kappen.LR_NIPS.ps.Z
unix% lpr Kappen.LR_NIPS.ps

From dnoelle at cs.ucsd.edu Thu Aug 28 20:34:45 1997 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Thu, 28 Aug 1997 17:34:45 -0700 (PDT) Subject: TR on Attractor Networks Message-ID: <199708290034.RAA08151@hilbert.ucsd.edu>

The following technical report is now available via both the World Wide Web and anonymous FTP:

http://www.cse.ucsd.edu/users/dnoelle/publications/tr-s97/
ftp://ftp.cs.ucsd.edu:/pub/dnoelle/tr-s97.ps.Z

Note that the web version includes a link to the PostScript version at the bottom of the page.

Extreme Attraction: The Benefits of Corner Attractors
------------------------------------------------------
by David C. Noelle, Garrison W. Cottrell, and Fred R. Wilms
Technical Report CS97-536
Department of Computer Science & Engineering
University of California, San Diego

Connectionist attractor networks have played a central role in many cognitive models involving associative memory and soft constraint satisfaction. While early attractor networks used step activation functions, permitting the construction of attractors for only binary (or bipolar) patterns, much recent work has focused on networks with continuous sigmoidal activation functions. The incorporation of sigmoidal processing elements allows for the use of expressive real vector representations in attractor networks. The empirical studies reported here, however, reveal that the learning performance of sigmoidal attractor networks is best when such general real vectors are avoided -- when training patterns are explicitly placed in the extreme corners of the network's activation space. Using binary (or bipolar) patterns produces benefits in the number of attractors learnable by a network, in the accuracy of the learned attractors, and in the amount of training required. These benefits persist under conditions of sparse patterns. Furthermore, these experiments show that the advantages of extreme-valued patterns are not solely effects of the large separation between training patterns afforded by corner attractors.

Thank you for your consideration.

-- David Noelle ----- Department of Computer Science & Engineering --
--------------------- Department of Cognitive Science ---------------
--------------------- University of California, San Diego -----------
-- noelle at ucsd.edu -- http://www.cse.ucsd.edu/users/dnoelle/ --------
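As a toy illustration of the kind of comparison the abstract describes (a generic discrete-time sigmoidal network trained with a simple delta rule, not the architecture or the experiments of the report; the pattern offsets, learning rate, and noise level are assumptions), one can train the same network on corner-like versus interior targets and compare how well noisy starting states are pulled back to the stored patterns:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_fixed_points(patterns, epochs=3000, lr=0.2, seed=0):
    """Train W, b so that each pattern is an approximate fixed point of x <- sigmoid(W x + b)."""
    rng = np.random.default_rng(seed)
    n = patterns.shape[1]
    W = 0.01 * rng.standard_normal((n, n))
    b = np.zeros(n)
    for _ in range(epochs):
        for p in patterns:
            y = sigmoid(W @ p + b)
            delta = (y - p) * y * (1.0 - y)    # delta rule through the sigmoid
            W -= lr * np.outer(delta, p)
            b -= lr * delta
    return W, b

def recall_error(W, b, patterns, noise, rng, iters=50):
    """Start from noisy versions of the patterns, iterate the map, measure the residual error."""
    errs = []
    for p in patterns:
        x = np.clip(p + noise * rng.standard_normal(p.shape), 0.0, 1.0)
        for _ in range(iters):
            x = sigmoid(W @ x + b)
        errs.append(np.abs(x - p).mean())
    return float(np.mean(errs))

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(5, 20)).astype(float)
targets = {"corner": 0.9 * bits + 0.05,     # patterns near the corners of [0, 1]^n
           "interior": 0.4 * bits + 0.3}    # patterns well inside the hypercube
for name, pats in targets.items():
    W, b = train_fixed_points(pats)
    print(name, "mean recall error:", round(recall_error(W, b, pats, 0.1, rng), 3))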
From ritter at psychology.nottingham.ac.uk Fri Aug 29 08:10:26 1997 From: ritter at psychology.nottingham.ac.uk (ritter@psychology.nottingham.ac.uk) Date: Fri, 29 Aug 1997 13:10:26 +0100 Subject: ECCM 98 - 1st Announcement Message-ID: <199708291210.NAA17518@vpsyc.psychology.nottingham.ac.uk>

-------------------------------------------------------------------------
First announcement
SECOND EUROPEAN CONFERENCE ON COGNITIVE MODELLING (ECCM-98)
Nottingham, England, April 1-4 1998
-------------------------------------------------------------------------

GENERAL INFORMATION: The 2nd European Conference on Cognitive Modelling (ECCM-98) will be held in Nottingham, England, from April 1st to 4th 1998 (starting with a day of optional tutorials). The conference will cover all areas of cognitive modelling, including symbolic and connectionist models, evolutionary computation, artificial neural networks, grammatical inference, reinforcement learning, and data sets designed to test models. Papers that present a running model and its comparison with data are particularly encouraged. This meeting is open to work on cognitive modelling using general architectures (such as Soar and ACT) as well as other kinds of simulation models. These meetings were introduced to establish interdisciplinary co-operation in the domain of cognitive modelling. The first meeting, held in Berlin in November 1996, attracted about 60 researchers from Europe and the USA working in the fields of artificial intelligence, cognitive psychology, computational linguistics, and philosophy of mind.

PROGRAM: The program will include presentations of papers, demo sessions, invited talks, discussion groups, and tutorials on cognitive modelling in the fields of AI programming, classification, problem solving, reasoning, inference, learning, language processing, and human-computer interaction. As Nottingham is an area of 'high touristic value', it will also include a social evening out. Further details are available from: http://www.psychology.nottingham.ac.uk/staff/ritter/eccm98/

PROGRAM CHAIRS: Richard Young (U. of Hertfordshire) and Frank Ritter (U. of Nottingham)
LOCAL CHAIR: Frank Ritter (U. of Nottingham)

IMPORTANT DATES:
Submission deadline: 7 January 1998
Decision by: 6 February 1998
Early registration: 9 March 1998
Conference: 1-4 April 1998

A call for papers will be issued soon.

-------------------------------------------------------------------

From smyth at sifnos.ics.uci.edu Fri Aug 29 19:32:44 1997 From: smyth at sifnos.ics.uci.edu (Padhraic Smyth) Date: Fri, 29 Aug 1997 16:32:44 -0700 Subject: TR available on stacked density estimation Message-ID: <9708291634.aa12018@paris.ics.uci.edu>

FTP-host: ftp.ics.uci.edu
FTP-filename: /pub/smyth/papers/stacking.ps.gz

The following paper is now available online at: ftp://ftp.ics.uci.edu/pub/smyth/papers/stacking.ps.gz

Title: STACKED DENSITY ESTIMATION
Authors: Padhraic Smyth (UCI/JPL) and David Wolpert (NASA Ames)

Abstract: In this paper, the technique of stacking, previously only used for supervised learning, is applied to unsupervised learning. Specifically, it is used for non-parametric multivariate density estimation, to combine finite mixture model and kernel density estimators. Experimental results on both simulated data and real world data sets clearly demonstrate that stacked density estimation outperforms other strategies such as choosing the single best model based on cross-validation, combining with uniform weights, and even the single best model chosen by "cheating" by looking at the data used for independent testing. (This paper will also appear at NIPS97)
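As a rough sketch of the stacking idea for densities (not the authors' code; the particular component estimators, fold count, and EM-style weight update here are assumptions), one fits several density estimators, scores held-out folds, picks combination weights that maximize the held-out likelihood, and then refits the components on all of the data:

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import KFold

def stacked_density(X, n_folds=5, n_iter=200):
    """Combine a few density estimators with stacking weights chosen by held-out likelihood."""
    make_models = [lambda: GaussianMixture(n_components=2, random_state=0),
                   lambda: GaussianMixture(n_components=5, random_state=0),
                   lambda: KernelDensity(bandwidth=0.3)]
    K = len(make_models)
    P = np.zeros((len(X), K))                       # out-of-fold densities p_k(x_i)
    for train, test in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
        for k, make in enumerate(make_models):
            model = make().fit(X[train])
            P[test, k] = np.exp(model.score_samples(X[test]))
    w = np.full(K, 1.0 / K)                         # EM for the combination weights
    for _ in range(n_iter):
        r = P * w
        r /= r.sum(axis=1, keepdims=True)
        w = r.mean(axis=0)
    models = [make().fit(X) for make in make_models] # refit the components on all data
    return w, models

# Toy usage: data drawn from two clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(4, 0.5, (300, 2))])
w, models = stacked_density(X)
print(np.round(w, 3))   # stacking weights for the three component estimators

The stacked density is then the w-weighted mixture of the refitted components; the paper's comparisons against single-model selection and uniform weighting are, of course, its own.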
From jose at tractatus.rutgers.edu Fri Aug 29 15:30:43 1997 From: jose at tractatus.rutgers.edu (Stephen J. Hanson) Date: Fri, 29 Aug 1997 15:30:43 -0400 Subject: Rutgers Newark Psychology--Cognitive Scientist Message-ID: <34072363.F26EDA90@tractatus.rutgers.edu>

COGNITIVE SCIENTIST
Rutgers University-Newark Campus

The Department of Psychology anticipates making one tenure-track appointment in Cognitive Science at the Assistant Professor level. Candidates should have an active research program in one or more of the following areas: learning, action, high-level vision, and language. We are particularly interested in candidates who combine one or more of these research interests with mathematical and/or computational approaches. The position calls for candidates who are effective teachers at both the graduate and undergraduate levels. Review of applications will begin on December 15, 1997. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply.

Send a CV and three letters of recommendation to Professor S. J. Hanson, Chair, Department of Psychology - Cognitive Science Search, Rutgers University, Newark, NJ 07102. Email inquiries can be made to cogsci at psychology.rutgers.edu