From bill at nsma.arizona.edu Sun May 1 17:50:01 1994
From: bill at nsma.arizona.edu (Bill Skaggs)
Date: Sun, 01 May 1994 14:50:01 -0700 (MST)
Subject: Financial Forecasting Competition
Message-ID: <9405012150.AA23928@nsma.arizona.edu>

Motivated by the announcement on this list of the "First International Nonlinear Financial Forecasting Competition", I would like to raise a concern that has troubled me for some time. I wonder whether it is really socially responsible to work on these things, or to support such work.

As I understand it, the value of our financial system is that it efficiently channels money into the most productive enterprises, and away from enterprises that are likely to fail. If the system works properly, then the only way to make unusual amounts of money is to be unusually good at predicting the success or failure of enterprises.

What concerns me is that the financial forecasting techniques I have seen are not oriented toward predicting the failure or success of individual enterprises, but rather toward identifying and predicting global trends in the flow of money. I am skeptical that this is possible in the first place, but even if it is possible, it seems to me that to make money this way is to be a parasite upon the financial system, rather than to serve it. Furthermore, it is pretty clear that the only way to consistently make money with such a technique would be to keep it secret.

On the other hand, I have a great deal of respect for several of the people involved in the "Competition", and this leads me to wonder whether I might be missing some crucial point. Can anybody help me with this?

-- Bill

(I am also concerned that this message will provoke too many responses, and so I ask readers to think twice, or three times, before replying to Connectionists. Do you really have something to say that isn't going to be said by somebody else?
Of course I am happy to get email from anybody who cares to send it, and will summarize to Connectionists if it seems useful.)

From hendin at thunder.tau.ac.il Mon May 2 13:17:27 1994
From: hendin at thunder.tau.ac.il (Ofer Hendin)
Date: Mon, 2 May 1994 20:17:27 +0300 (IDT)
Subject: printing problems.
Message-ID:

Hi,

If you have problems printing the file "hendin.olfaction.ps" (the figures do not print, etc.), it is probably because the PostScript file is too large. The way to print it is to use the -s flag:

>> lpr -s -Pprinter filename.ps

Thanks, Ofer.
____________________________________________________________________
Ofer Hendin                          e-mail: hendin at thunder.tau.ac.il
School of Physics and Astronomy      Phone : +972 3 640 7452
Raymond and Beverly Sackler Faculty of Exact Sciences
Tel Aviv University
Tel Aviv 69978, Israel.
____________________________________________________________________

From BRAIN1 at taunivm.tau.ac.il Mon May 2 16:08:23 1994
From: BRAIN1 at taunivm.tau.ac.il (BRAIN1@taunivm.tau.ac.il)
Date: Mon, 02 May 94 16:08:23 IST
Subject: Bat-Sheva seminar on functional brain imaging
Message-ID:

% Dear Colleague,
%
% here follows the second announcement (plain TeX file) of the
%
% "BAT-SHEVA SEMINAR ON FUNCTIONAL BRAIN IMAGING"
%
% which will take place in Tel-Aviv
% June 5 to 10, 1994
%
% May we ask you to post the announcement?
%
% Many thanks and best regards,
%
% D. Horn    G. Navon
%
\nopagenumbers \magnification=1200
\def\sk{\vskip .2cm}
\hsize=13cm
\centerline{\bf BAT-SHEVA SEMINAR ON FUNCTIONAL BRAIN IMAGING}
\sk
\centerline{\bf Tel-Aviv, Israel, June 5 to 10, 1994}
\vskip 3cm
\centerline{\bf SECOND ANNOUNCEMENT}
\sk
The seminar will bring together experts on various techniques of functional brain imaging (PET, EEG, MEG, Optical, with particular emphasis on MRI). It will start with a day of tutorials at Tel-Aviv University. These will serve as technical and scientific introductions for participants from different disciplines.
It will continue in a resort hotel at the seashore with plenary lectures, describing recent advances in all the different techniques and comparing their merits and scientific results. The number of participants in the workshop will be limited.
\sk \vskip 1cm
{\bf Tutorial Sessions}, Sunday June 5th at Tel Aviv University:
Amos Korczyn: Introduction to mapping of the brain.
Peter Bendel: Technical introduction to MRI.
{\bf Invited Lectures} (at Dan Accadia Hotel, Hertzlia), Monday-Friday June 6--10:
John Belliveau: 1. Functional MRI. 2. Correlation between EEG and fMRI.
Alan S. Gevins: 1. High resolution EEG. 2. Sub-second networks of cognition in the human brain.
Amiram Grinvald: 1. Optical imaging of functional architecture based on the intrinsic signals. 2. Real-time optical imaging of electric activity.
Matti H\"am\"al\"ainen: MEG -- a tool for functional brain imaging: theory, instrumentation, results.
Seiji Ogawa: Basic mechanisms in fMRI.
Hillel Pratt: Imaging human brain activity from scalp recordings.
Marcus E. Raichle: 1. Introduction to neuroimaging. 2. Multimodal functional imaging. 3. PET studies of language and memory.
Robert Shulman: 1. Application of functional MRI to cognitive processes. 2. Principles of magnetic resonance spectroscopy of the brain. 3. Measurements of brain metabolism by MRS.
{\bf Schedule of activities:}
Sunday, June 5: Tutorials at Tel Aviv University.
Monday, June 6: Sessions One and Two at Dan Accadia Hotel. Evening: Get-together at the Hotel.
Tuesday, June 7: Sessions Three and Four.
Wednesday, June 8: Morning: Session Five. Afternoon: Organized Tour.
Thursday, June 9: Sessions Six and Seven. Evening: Dinner at Tel Aviv University.
Friday, June 10: Morning: Session Eight.
\sk
{\bf Information and registration}: Dan Knassim Ltd., P.O.B. 57005, Tel-Aviv 61570, Israel. Tel: 972-3-562 6470 Fax: 972-3-561 2303
\sk \vskip 2cm
\centerline {D. Horn~~~~~~~~G.
Navon}
\centerline {ADAMS SUPER-CENTER FOR BRAIN STUDIES}
\centerline {TEL-AVIV UNIVERSITY, TEL-AVIV, ISRAEL}
\centerline{ e-mail: brain1 at taunivm.tau.ac.il }
\vskip 2cm \sk \vfill\eject\end

From oby at TechFak.Uni-Bielefeld.DE Mon May 2 07:38:50 1994
From: oby at TechFak.Uni-Bielefeld.DE (oby@TechFak.Uni-Bielefeld.DE)
Date: Mon, 2 May 94 13:38:50 +0200
Subject: No subject
Message-ID: <9405021138.AA28671@gaukler.TechFak.Uni-Bielefeld.DE>

Research Position in Computational Neuroscience
Technische Fakultaet (computer science), University of Bielefeld, Germany

Our research group in computational neuroscience is part of the university's young CS department. We study the role of self-organization and pattern formation processes in neural development as well as information processing strategies employed by primate visual cortex. Recent publications include: Obermayer et al. (1990), Proc. Nat. Acad. Sci. USA 87, 8345-8349; Obermayer et al. (1992), Phys. Rev. A 45, 7568-7589; Obermayer and Blasdel (1993), J. Neurosci. 13, 4114-4129.

Recently we have started a project focusing on the role of long-range lateral connections in the visual cortex, which is part of an international collaboration involving a neurophysiology group at Harvard U. (USA) and a neuroanatomy group at London U. (GB). A graduate student position has become available for this project. The appointee is expected to choose a project from the following areas:

1. The development of computer software for neuroanatomical tracing and section alignment, for cell reconstruction, and for the automatic recognition of important features including vasculature, axons, and synaptic boutons.
2. Computer models of neuronal circuits based on the observed patterns of lateral connections, and a comparison of predicted filter properties with experimental data.
3. Computer models involving long-range connections, which should explore their possible role in adaptation and contextual effects.
Candidates should be familiar with C or C++ and should have a background in one of the following areas:

1. graphics programming and computer vision
2. neuroanatomy
3. neural modelling

The position can be made available beginning June 1st. Salary is equivalent to BAT IIa/2, but the position may be upgraded to a full BAT II position. Please send applications including CV, copies of certificates, and a statement of research interests to:

Dr. Klaus Obermayer
Technische Fakultaet, Universitaet Bielefeld
Universitaetsstrasse 25, 33615 Bielefeld
phone: 49-521-106-6058, fax: 49-521-106-6011
e-mail: oby at techfak.uni-bielefeld.de

From Daniel_Seligson at ccm11.sc.intel.com Tue May 3 20:55:13 1994
From: Daniel_Seligson at ccm11.sc.intel.com (Daniel_Seligson@ccm11.sc.intel.com)
Date: Tue, 03 May 94 16:55:13 PST
Subject: Job Posting at CuraGen
Message-ID: <9404037680.AA768009313@rnbsmt11.intel.com>

Other than the fact that my friends at CuraGen asked me to post this for them, there is no connection between Intel and CuraGen. Please direct all correspondence to the address below, or Greg Went (gwent at curagen.com). Thanks, Dan

CuraGen Corporation

CuraGen Corporation is a dynamic and expanding biotechnology company with a mission to systematically extract from the human genome those disease-related genes for which therapeutics can be successfully designed. CuraGen has assembled a research team with expertise in molecular biology, engineering physics, and computational methods. Close ties with several major academic laboratories complement our own research and facilities.

Structural Biology Division
Post-Doctoral/Research Scientist Position
Theoretical and Applied Computational Biology

CuraGen has devised a novel means and instrumentation for obtaining DNA fragmentation patterns in order to determine rapidly the composition of disease-specific genes. The analysis and interpretation of these patterns is essential to the success of the project.
We currently have an opening for a computational scientist to refine and implement an adaptive scheme for this task. Those with a Ph.D. in computer science, applied mathematics, biology, physics, or related disciplines are encouraged to apply. Significant programming accomplishments are essential; exposure to non-linear statistics, neural architectures, and biocomputing is desirable. There is a distinct possibility that a joint appointment with DIMACS (supported by the NSF) is available in conjunction with this position, which offers up to 50% discretionary time.

CuraGen offers a competitive compensation package including salary, benefits, and equity participation. Our location in a shoreline community between Boston and New York City, 12 minutes east of Yale University, presents excellent scientific, recreational, and cultural opportunities. CuraGen is an AA/Equal Opportunity Employer.

CuraGen Corporation
322 East Main Street
Branford, CT 06405
(203) 481 1104
(203) 481 1106 (FAX)

From bruno at lgn.wustl.edu Tue May 3 18:07:12 1994
From: bruno at lgn.wustl.edu (Bruno Olshausen)
Date: Tue, 3 May 94 17:07:12 CDT
Subject: Thesis available on neuroprose
Message-ID: <9405032207.AA01935@lgn.wustl.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/Thesis/olshausen.thesis.tar.Z

The file olshausen.thesis.tar.Z is now available for copying from the Neuroprose archive:

Neural Routing Circuits for Forming Invariant Representations of Visual Objects

Bruno A. Olshausen
Ph.D. Thesis
Computation and Neural Systems Program
California Institute of Technology

ABSTRACT: This thesis presents a biologically plausible model of an attentional mechanism for forming position- and scale-invariant representations of objects in the visual world.
The model relies on a set of {\em control neurons} to dynamically modify the synaptic strengths of intra-cortical connections so that information from a windowed region of primary visual cortex (V1) is selectively routed to higher cortical areas. Local spatial relationships (i.e., topography) within the attentional window are preserved as information is routed through the cortex, thus enabling attended objects to be represented in higher cortical areas within an object-centered reference frame that is position and scale invariant. The representation in V1 is modeled as a multiscale stack of sample nodes with progressively lower resolution at higher eccentricities. Large changes in the size of the attentional window are accomplished by switching between different levels of the multiscale stack, while positional shifts and small changes in scale are accomplished by translating and rescaling the window within a single level of the stack. The control signals for setting the position and size of the attentional window are hypothesized to originate from neurons in the pulvinar and in the deep layers of visual cortex. The dynamics of these control neurons are governed by simple differential equations that can be realized by neurobiologically plausible circuits. In pre-attentive mode, the control neurons receive their input from a low-level ``saliency map'' representing potentially interesting regions of a scene. During the pattern recognition phase, control neurons are driven by the interaction between top-down (memory) and bottom-up (retinal input) sources. The model respects key neurophysiological, neuroanatomical, and psychophysical data relating to attention, and it makes a variety of experimentally testable predictions. An appendix describes details of pulvinar anatomy and physiology. --------------------------- Thesis is 119 pages (10 preamble + 109 text), subdivided into five ps files (each is ordered last page first). It will look best in double-sided printing. 
You may need to run chmod 755 on each ps file in order to print using lpr -s. Hardcopies will be made available for $4.00.

From B344DSL at UTARLG.UTA.EDU Tue May 3 01:38:53 1994
From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU)
Date: Mon, 02 May 1994 23:38:53 -0600 (CST)
Subject: Final program and abstracts for MIND conference May 5-7
Message-ID: <01HBVR70GR42002G4Z@UTARLG.UTA.EDU>

CONTENTS
Announcement
Program Schedule
Abstracts of Presentations
Directions to Conference
Registration Form

CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS
Sponsored by the Metroplex Institute for Neural Dynamics (MIND) and the University of Texas at Arlington
Co-sponsored by the Departments of Mathematics and Psychology
MAY 5-7, 1994
UNIVERSITY OF TEXAS AT ARLINGTON
MAIN LIBRARY, 6TH FLOOR PARLOR

The topic of neural oscillation is currently of great interest to psychologists and neuroscientists alike. Recently it has been observed that neurons in separate areas of the brain will oscillate in synchrony in response to certain stimuli. One hypothesized function for such synchronized oscillations is to solve the "binding problem," that is, how is it that disparate features of objects (e.g., a person's face and their voice) are tied together into a single unitary whole? Some bold speculators (such as Francis Crick in his recent book, The Astonishing Hypothesis) even argue that synchronized neural oscillations form the basis for consciousness.

Further inquiries about the conference can be addressed to any of the conference organizers:

Professor Daniel S. Levine
Department of Mathematics, University of Texas at Arlington
411 S. Nedderman Drive
Arlington, TX 76019-0408
817-273-3598, fax: 817-794-5802
b344dsl at utarlg.uta.edu

Professor Vincent Brown
Department of Psychology, University of Texas at Arlington
Arlington, TX 76019
817-273-3247
b096vrb at utarlg.uta.edu

Mr.
Timothy Shirey
214-495-3500 or 214-422-4570
73353.3524 at compuserve.com

Please distribute this announcement to anyone you think may be interested in the conference.

SCHEDULE

Posters (ongoing throughout the conference): Govindarajan, Lin, Mobus, Penz, Rhoades, Tam, Young

Thursday:
9:00-9:15 Introduction by Peter Rosen, Dean of the College of Science
9:15-9:30 Introduction by Daniel S. Levine, Co-organizer of the conference
9:30-10:30 Mpitsos
10:30-11:15 Baxter
11:15-11:30 15 minute break
11:30-12:30 Stemmler
12:30-2:00 LUNCH
2:00-2:45 Thomas
2:45-3:45 Horn
3:45-4:00 15 minute break
4:00-5:00 Yuen
5:00-5:45 Gross

Friday:
8:30-9:30 Wong
9:30-10:30 Traub
10:30-10:45 15 minute break
10:45-11:30 Soltesz
11:30-12:15 Wolpert
12:15-2:00 LUNCH
2:00-2:45 (A.) Brown
2:45-3:45 Bulsara
3:45-4:00 15 minute break
4:00-5:00 Maren
5:00-5:45 Jagota

Saturday:
10:00-11:00 Baird
11:00-11:45 Park
11:45-12:00 15 minute break
12:00-12:45 DeMaris
12:45-1:45 LUNCH
1:45-2:45 Grunewald
2:45-3:30 Steyvers
3:30-3:45 15 minute break
3:45-5:00 Discussion (What Are Neural Oscillations Good For?)
(If weather permits, discussion may continue after 5PM outside library.)
7:30-? Trip to The Ballpark in Arlington to see Minnesota Twins at Texas Rangers

Titles and Abstracts of Talks and Posters (Alphabetical by First Author)

BILL BAIRD, UNIVERSITY OF CALIFORNIA/BERKELEY (BAIRD at MATH.BERKELEY.EDU)
"GRAMMATICAL INFERENCE BY ATTENTIONAL CONTROL OF SYNCHRONIZATION IN AN ARCHITECTURE OF COUPLED OSCILLATORY ASSOCIATIVE MEMORIES"

We show how a neural network "computer" architecture, inspired by observations of cerebral cortex and constructed from recurrently connected oscillatory associative memory modules, can employ selective "attentional" control of synchronization to direct the flow of communication and computation within the architecture to solve a grammatical inference problem.
The modules in the architecture learn connection weights between themselves which cause the system to evolve under a clocked "machine cycle" by a sequence of transitions of attractors within the modules, much as a digital computer evolves by transitions of its binary flip-flop states. The architecture thus employs the principle of "computing with attractors" used by macroscopic systems for reliable computation in the presence of noise. Even though it is constructed from a system of continuous nonlinear ordinary differential equations, the system can operate as a discrete-time symbol processing architecture, but with analog input and oscillatory subsymbolic representations. The discrete time steps (machine cycles) of the "Elman" network algorithm are implemented by rhythmic variation (clocking) of a bifurcation parameter. This holds input and "context" modules clamped at their attractors while hidden and output modules change state, then clamps hidden and output states while context modules are released to load those states as the new context for the next cycle of input. In this architecture, oscillation amplitude codes the information content or activity of a module (unit), whereas phase and frequency are used to "softwire" the network. Only synchronized modules communicate by exchanging amplitude information; the activity of non-resonating modules contributes incoherent crosstalk noise. The same hardware and connection matrix can thus subserve many different computations and patterns of interaction between modules. Attentional control is modeled as a special subset of the hidden modules with outputs which affect the resonant frequencies of other hidden modules. They perturb these frequencies to control synchrony among the other modules and direct the flow of computation (attention) to effect transitions between two subgraphs of a large automaton which the system emulates to generate a Reber grammar. 
The internal crosstalk noise is used to drive the required random transitions of the automaton.

DOUG BAXTER, CARMEN CANAVIER, H. LECHNER, UNIVERSITY OF TEXAS/HOUSTON, JOHN CLARK, RICE UNIVERSITY, AND JOHN BYRNE, UNIVERSITY OF TEXAS/HOUSTON (DBAXTER at NBA19.MED.UTH.TMC.EDU)
"COEXISTING STABLE OSCILLATORY STATES IN A MODEL NEURON SUGGEST NOVEL MECHANISMS FOR THE EFFECTS OF SYNAPTIC INPUTS AND NEUROMODULATORS"

Enduring changes in the electrical activity of individual neurons have generally been attributed to persistent modulation of one or more of the biophysical parameters that govern, directly or indirectly, neuronal membrane conductances. Particularly striking examples of these modulatory actions can be seen in the changes in the activity of bursting neurons exposed to modulatory transmitters or second messengers. An implicit assumption has been that once all parameters are fixed, the ultimate mode of electrical activity exhibited is determined. An alternative possibility is that several stable modes of activity coexist at a single set of parameters, and that transient synaptic inputs or transient perturbations of voltage-dependent conductances could switch the neuron from one stable mode of activity to another. Although coexisting stable oscillatory states are a well-known mathematical phenomenon, their appearance in a biologically plausible model of a neuron has not been previously reported. By using a realistic mathematical model and computer simulations of the R15 neuron in Aplysia, we identified a new and potentially fundamental role for nonlinear dynamics in information processing and learning and memory at the single-cell level. Transient synaptic input shifts the dynamic activity of the neuron between at least seven different patterns, or modes, of activity. These parameter-independent mode transitions are induced by a brief synaptic input, in some cases a single excitatory postsynaptic potential.
Once established, each mode persists indefinitely or until subsequent synaptic input perturbs the neuron into another mode of activity. Moreover, the transitions are dependent on the timing of the synaptic input relative to the phase of the ongoing activity. Such temporal specificity is a characteristic feature of associative memories. We have also investigated the ways in which changes in two model parameters, the anomalous rectifier conductance (gR) and the slow inward calcium conductance (gSI), affect not only the intrinsic activity of R15, but also the ability of the neuron to exhibit parameter-independent mode transitions. gR and gSI were selected since they are key determinants of bursting activity and also because they are known to be modulated by dopamine and serotonin. We have found that small changes in these parameters can annihilate some of the coexisting modes of electrical activity. For some values of the parameters only a single mode is exhibited. Thus, changing the values of gR and gSI can regulate the number of modes that the neuron can exhibit. Preliminary electrophysiological experiments indicate that these mechanisms are present in vitro. These combined experimental and modeling studies provide new insights into the role of nonlinear dynamics in information processing and storage at the level of the single neuron and indicate that individual neurons can have extensive parameter-independent plastic capabilities in addition to the more extensively analyzed parameter-dependent ones.

ANTHONY BROWN, DEFENSE RESEARCH AGENCY, UNITED KINGDOM (ABROWN at SIGNAL.DRA.HMG.GB)
"PRELIMINARY WORK ON THE DESIGN OF AN ANALOG OSCILLATORY NEURAL NETWORK"

Inspired by biological neural networks, our aim is to produce an efficient information processing architecture based upon analogue circuits. In the past, analogue circuits have suffered from a limited dynamic range caused by inter-device parameter variations.
Any analogue information processing system must therefore be based upon an adaptive architecture which can compensate for these variations. Our approach to designing an adaptive architecture is to mimic neuro-biological exemplars; we are therefore examining architectures based upon the Hebb learning rule. In neuro-biological systems the Hebb rule is associated with temporal correlations which arise in phase-locked oscillatory behaviour. The starting point for our new system is the Hopfield network. To modify the fixed point dynamics of such a network we have introduced a "hidden" layer of neurons. Each new neuron is connected to an existing neuron to form a pair which in isolation exhibits a decaying, oscillatory response to a stimulus. Several promising preliminary results have been obtained: sustained oscillations are stimulated by the "known" patterns which were used to determine the weight matrix. In contrast, "unknown" patterns result in a decaying oscillatory response, which can be reinforced for frequently occurring new input patterns to create a new characteristic response. Finally, a mixture of two known inputs will stimulate both characteristic oscillatory patterns separated by a constant phase lag. Overall, the introduction of oscillatory behaviour in an associative memory will both simplify the embodiment of the learning rule and introduce new modes of behaviour which can be exploited.

ADI BULSARA, NAVAL COMMAND, CONTROL, AND OCEAN SURVEILLANCE CENTER, SAN DIEGO (BULSARA at MANTA.NOSC.MIL)
"COMPLEXITY IN THE NEUROSCIENCES: SIGNALS, NOISE, NONLINEARITY, AND THE MEANDERINGS OF A THEORETICAL PHYSICIST"

We consider the interpretation of time series data from firing events in periodically stimulated sensory neurons. Theoretical models, representing the neurons as nonlinear dynamic switching elements subject to deterministic (taken to be time-periodic) signals buried in a Gaussian noise background, are developed.
The models considered include simple bistable dynamics which provide a good description of the noise-induced cooperative behavior in neurons on a statistical or coarse-grained level, but do not account for many important features (e.g. capacitative effects) of real neuron behavior, as well as very simple "integrate-fire" models which provide reasonable descriptions of capacitative behavior but attempt to duplicate refractoriness through the boundary conditions on the dynamics. Both these classes of models can be derived through a systematic reduction of the Hodgkin-Huxley equations (assumed to be the best available description of neural dynamics). Cooperative effects, e.g. "stochastic resonance", arising through the interplay of the noise and deterministic modulation, are examined, together with their possible implications in the features of Inter-Spike-Interval Histograms (ISIHs) that are ubiquitous in the neurophysiological literature. We explore the connection between stochastic resonance, usually defined at the level of the power spectral density of the response, and the cooperative behavior observed in the ISIH. For the simpler (integrate-fire-type) threshold model, a precise connection between the two statistical measures (the power spectral density and the ISIH) of the system response can be established; for the more complex (bistable) models, such a connection is, currently, somewhat tenuous.

DAVID DEMARIS, UNIVERSITY OF TEXAS/AUSTIN (DEMARIS at PINE.ECE.UTEXAS.EDU)
(TITLE TO BE ADDED)

A body of work on nonlinear oscillations in vision has emerged, both in the analysis of single unit inter-spike intervals and in a theory of perceptual encoding via spatio-temporal patterns. This paper considers other roles nonlinear oscillating networks may play in an active visual system. Kaneko's coupled map lattice models and extensions are examined for utility in explaining tasks in attention and monocular depth perception.
Visual cortex is considered as an array of coupled nonlinear oscillators (complex cell networks) forced by embedded simple cell detector networks of the Hubel and Wiesel type. In this model, self-organization of local and global bifurcation parameters may form spatial regions of heightened activity in attentional modules and form bounded dynamics regimes (domains) in visual modules related to binding and separation of figure and ground. This research is still in a rather speculative stage pending simulation studies; hence the aims of this talk are:

* Provide a brief introduction to the dynamics of spatially extended nonlinear systems such as coupled map lattices with self-organized control parameters, and how these may support perceptual activity and encoding.
* Review some recent work on underlying physiological mechanisms and measurements which support the use of nonlinear oscillator models.
* Describe visual phenomena in the areas of ambiguous depth perception, figure/ground feature discrimination, and spatial distortions. Discuss mechanisms in coupled map models which may account for these phenomena.

A demonstration of experiments involving cellular automata processing of Necker cube and Muller-Lyer figures is possible running on an IBM compatible PC.

SRIRAM GOVINDARAJAN AND VINCENT BROWN, UNIVERSITY OF TEXAS/ARLINGTON (SRIRAM at CSE.UTA.EDU)
"FEATURE BINDING AND ILLUSORY CONJUNCTIONS: PSYCHOLOGICAL CONSTRAINTS AND A MODEL"

(Abstract to be added)

GUENTER GROSS AND BARRY RHOADES, UNIVERSITY OF NORTH TEXAS (GROSS at MORTICIA.CNNS.UNT.EDU)
"SPONTANEOUS AND EVOKED OSCILLATORY BURSTING STATES IN CULTURED NETWORKS"

In monolayer networks derived from dissociated embryonic mouse spinal cord tissue, and maintained in culture for up to 9 months, oscillatory activity states are common in the burst domain and represent the most reproducible of all network behaviors.
Extensive observations of self-organized oscillatory activity indicate that such network states represent a generic feature of networks in culture and suggest that possibly all networks composed of mammalian neurons have a strong tendency to oscillate.

Native oscillatory states: The most characteristic pattern is a temporally variable but spatially coordinated bursting. Quasi-periodic oscillations are generally transient but coordinated among most of the electrodes recording spontaneous activity. Networks left undisturbed for several hours display a tendency to enter coordinated oscillatory states and to remain in these states for long periods of time.

Pharmacologically-induced oscillations: Blocking synaptic inhibition at glycine and GABA receptors increases spike rates, but generates a much different response pattern than that obtained from the excitatory transmitters. Whereas the latter produce excitation by disrupting existing patterns with increased spike and burst activity and only occasional transient oscillatory patterns, disinhibition brings the network into more tightly synchronized bursting with highly regular burst durations and periods in essentially all cultures. Such states can last for hours with minimal changes in burst variables. Other compounds such as 4-aminopyridine and cesium increase burst rate and regularity, in a manner qualitatively matched by elevating extracellular potassium. Cultures are much more sensitive to strychnine than to bicuculline. Whereas oscillatory behavior usually begins at 20-30 uM bicuculline, similar pattern changes are obtained with nanomolar to low micromolar quantities of strychnine. Burst fusion and intense spiking (produced by NMDA) have never been observed as a result of network disinhibition.
Extensive pharmacological manipulations of calcium and potassium channels have confirmed that spontaneous oscillations depend on potassium currents and intracellular Ca++ levels but not on calcium-dependent potassium conductances.

Electrically-induced oscillations: Networks can often be periodically driven by repetitive electrical stimulation at a single electrode. Repeated stimulus trains have also been observed to induce episodes of intense, coherent bursting lasting beyond the termination of the stimulus pattern. Such responses appear "epileptiform" and might be considered a cultured network parallel to electrical induction of an epileptic seizure in vivo.

Entrainment: Repetitive pulse train stimulation often causes the network burst patterns to organize and finally follow the temporal stimulation pattern. We have also found that networks in pharmacologically-induced periodic bursting modes can be entrained to a periodic single channel stimulation if the stimulus cycle is at or near a multiple of the spontaneous burst cycle period. The ability of a few axons at one electrode to entrain an entire network of 100-300 neurons is remarkable and invites studies of entrainment mechanisms in these networks.

ALEXANDER GRUNEWALD AND STEPHEN GROSSBERG, BOSTON UNIVERSITY (ALEX at CNS.BU.EDU)
"BINDING OF OBJECT REPRESENTATIONS BY SYNCHRONOUS CORTICAL DYNAMICS EXPLAINS TEMPORAL ORDER AND SPATIAL POOLING DATA"

A key problem in cognitive science concerns how the brain binds together parts of an object into a coherent visual object representation. One difficulty that this binding process needs to overcome is that different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a mechanism that resynchronizes cortical activities corresponding to the same retinal object.
A neural network model based on cooperation between oscillators via feedback from a subsequent processing stage is presented that is able to rapidly resynchronize desynchronized featural activities. Model properties help to explain perceptual framing data, including psychophysical data about temporal order judgments. These cooperative model interactions also simulate data concerning the reduction of threshold contrast as a function of stimulus length. The model hereby provides a unified explanation of temporal order and threshold contrast data as manifestations of a cortical binding process that can rapidly resynchronize image parts which belong together in visual object representations. DAVID HORN, TEL AVIV UNIVERSITY (HORN at VM.TAU.AC.IL) "SEGMENTATION AND BINDING IN OSCILLATORY NETWORKS" Segmentation and binding are cognitive operations which underlie the process of perception. They can be understood as taking place in the temporal domain, i.e. relying on features like simultaneity of neuronal firing. We analyze them in a system of oscillatory networks, consisting of Hebbian cell assemblies of excitatory neurons and inhibitory interneurons in which the oscillations are implemented by dynamical thresholds. We emphasize the importance of fluctuating input signals in producing binding and in enabling segmentation of a large set of common inputs. Segmentation properties can be studied by investigating the cyclic attractors of the system and the partial symmetries that they implement in a symmetric neural network. ARUN JAGOTA, MEMPHIS STATE UNIVERSITY, AND XIN WANG, UNIVERSITY OF CALIFORNIA/LOS ANGELES (JAGOTA at NEXT1.MSCI.MEMST.EDU) "OSCILLATIONS IN DISCRETE AND CONTINUOUS HOPFIELD NETWORKS" The first part of this talk deals with analyzing oscillatory behavior in discrete Hopfield networks with symmetric weights. It is well known that under synchronous updates, such networks admit cyclic behavior of order two but no higher. 
However, the two-cycles themselves are not known to have any useful characterization in general. By imposing certain restrictions on the weights, we obtain an exact characterization of the two-cycles in terms of properties of a certain graph underlying the network. This characterization has the following benefits. First, in small networks of this kind, all the two-cycles may be found merely by inspection of the underlying graph (which depends only on the weights). Second, this characterization suggests certain applications which exploit the two-cycles. We illustrate both of these benefits in detail. The second part of this talk deals with synthesizing chaotic or periodic oscillatory behavior in continuous Hopfield networks for the purpose of solving optimization problems. It is well known that certain dynamical rules for continuous Hopfield networks with symmetric weights exhibit convergent behavior to stable fixed points. Such convergent behavior is one reason for the use of these networks to solve optimization problems. Such behavior, however, also limits their performance in practice, as it is of the gradient-descent form, which often leads to sub-optimal local minima. As a potential remedy to this problem, we propose methods for injecting controllable chaos or periodic oscillations into the dynamical behavior of such networks. In particular, our methods allow chaotic or oscillatory behavior to be initiated and converted to convergent behavior at the turn of a "knob". This is in analogy with simulated annealing, where at high temperature the behavior is "random" and at low temperature relatively "convergent". We introduce chaos or periodic oscillations into the network in two ways: one via the external input to each neuron, and the other by replacing each neuron by two neurons arranged into a coupled oscillator. We present some experimental results on the performance of our networks, with and without the oscillations, on the Maximum Clique optimization problem.
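The order-two limit on synchronous-update cycles that this first part builds on is easy to demonstrate in a toy case. The following is a minimal illustrative sketch (my own, not the authors' construction), assuming a hypothetical two-neuron network with symmetric negative coupling:

```python
import numpy as np

def synchronous_update(W, s):
    """One parallel (synchronous) update of a +/-1 Hopfield state vector."""
    return np.where(W @ s >= 0, 1, -1)

# Symmetric weights with negative mutual coupling -- a textbook two-cycle case.
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])

s = np.array([1, 1])
trajectory = [tuple(int(x) for x in s)]
for _ in range(5):
    s = synchronous_update(W, s)
    trajectory.append(tuple(int(x) for x in s))

print(trajectory)  # the state flips between (1, 1) and (-1, -1): a cycle of order two
```

Under asynchronous (one-neuron-at-a-time) updates the same symmetric network would settle into a fixed point such as (1, -1); the two-cycle is specific to the parallel update rule, which is why the synchronous case is the interesting one here.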
SHIEN-FONG LIN, RASHI ABBAS, AND JOHN WIKSO, JR., VANDERBILT UNIVERSITY (LIN at MACPOST.VANDERBILT.EDU) "ONE-DIMENSIONAL MAGNETIC MEASUREMENT OF TWO-ORIGIN BIOELECTRIC CURRENT OSCILLATION" Squid giant axons, when placed in a low-calcium, high-sodium extracellular environment, abruptly enter a state of self-sustained oscillation. Such an oscillation exhibits a linear temperature dependence in frequency, can be entrained, and enters chaotic states with proper entrainment patterns. The origin of such an oscillation, although of significant implication for neural oscillation in general, has never been extensively studied experimentally. Specifically, one of the most intriguing problems was the scarcity of experimental evidence for symmetrical multiple oscillation origins in such a homogeneous one-dimensional structure. In this presentation, we report a novel non-invasive magnetic observation of a stable 2-origin self-sustained oscillation in squid giant axons. The oscillation showed a standing-wave pattern when observed in the spatial domain, and a proper geometry was required to sustain the 2-origin pattern. The origins were coupled and synchronized in phase. The results from model simulation using an explicit implementation of a propagating Hodgkin-Huxley axon allowed us to investigate the mechanisms underlying this behavior. The study clearly demonstrated the merits of magnetic methods in studying intricate neural oscillations. ALIANNA MAREN, ACCURATE AUTOMATION CORPORATION, AND E. SCHWARTZ, RADFORD UNIVERSITY (AJMAREN%AAC at OLDPAINT.ENGR.UTC.EDU) "A NEW METHOD FOR CROSS-SCALE INTERACTION USING AN ADAPTABLE BASIC PROCESSING ELEMENT" A new concept for the basic processing element in a neural network allows the characteristics of this element to change in response to changes at the neural network level.
This makes it possible to have "cross-scale interactions," that is, events at the neural network level influence not only the immediate network state, but also the response characteristics of individual processing elements. This novel approach forms the basis for creating a new class of neural networks, one in which the processing elements are responsive to spatial and historical context. This capability provides a valuable tool in advancing the overall richness and complexity of neural network behavior. The most evident advantage of this new approach is that neural networks can be made dependent, in a substantial way, upon past history for the present state. This property is most useful in applications where past history is important in determining present actions or interpretations. There is a major difference between this approach and most current methods for adapting neural networks to exert the influence of time or to provide "learning." This lies in the fact that most existing methods provide either a means for maintaining the activation due to initial stimulus (either with time-delay connections or with recurrent feedback), or provide a means for changing the values of connection weights ("learning"). The approach offered here is substantively different from existing approaches, in that changes are made to the response characteristics of the individual processing units themselves; they now respond differently to stimuli. The model for the new interpretation of the basic processing element comes from considering the basic element as a (statistically large) ensemble of interacting bistate processing units. By way of analogy to domains of neurons in biological systems, we call this ensemble, or basic processing element, an artificial neural domain. The neural domain is modeled at the ensemble level, not at the level of individual components. Using a simple statistical thermodynamics model, we arrive at ensemble characteristics. 
Ensemble, or domain, behavior is controlled not only by input activations but also by parameter values which are modified at the neural network level. This creates an avenue for cross-scale interaction. GEORGE MOBUS AND PAUL FISHER, UNIVERSITY OF NORTH TEXAS (MOBUS at PONDER.CSCI.UNT.EDU) "EDGE-OF-CHAOS-SEARCH: USING A QUASI-CHAOTIC OSCILLATOR CIRCUIT FOR FORAGING IN A MOBILE AUTONOMOUS ROBOT" A neural circuit that emulates some of the behavioral properties of central pattern generators (CPGs) in animals is used to control a stochastic search in a mobile, autonomous robot. When the robot is not being stimulated by signals that represent mission-support events, it searches its environment for such stimuli. The circuit generates a quasi-chaotic oscillation that causes the robot to weave back and forth like a drunken driver. Analysis of the weave pattern shows that the chaotic component yields sufficient novelty to cause the robot to conduct an effective search in a causally-controlled but non-stationary environment. Unlike a random-walk search, which may exhaust the robot's power resources before it accomplishes its mission, we show, through simulations, that a quasi-chaotic search approaches optimality in the sense that the robot is more likely to succeed in finding mission-critical events. The search patterns displayed by the robot resemble, qualitatively, those of foraging animals. When the robot senses a stimulus associated with a mission-support event, a combination of location and distance signals from other parts of the simulated brain converge on the CPG, causing it to transition to more ordered directional output. The robot orients relative to the stimulus and follows the stimulus gradient to the source. The possible role of chaotic CPGs and their transitions to ordered oscillation in searching non-stationary spaces is discussed, and we suggest generalizations to other search problems.
The role of learning causal associations as a prerequisite for successful search is also covered. GEORGE MPITSOS, OREGON STATE UNIVERSITY (GMPITSOS at SLUGO.HMSC.ORST.EDU) "ATTRACTOR GRADIENTS: ARCHITECTS OF NETWORK ORGANIZATION IN BIOLOGICAL SYSTEMS" Biological systems are composed of many components that must produce coherent adaptive responses. The interconnections between neurons in an assembly or between individuals in any population all pose similar questions, e.g.: How does one handle the many degrees of freedom to know how the system as a whole functions? What is the role of the individual component? Although individuals act using local rules, is there some global organizing principle that determines what these rules are? I raise the possibility that many simplifications occur if the system is dissipative; i.e., if it has an attractor such that it returns to a characteristic state in response to external perturbation. I ask what the dissipative process does to the system itself. What global organizing effects does it produce? Biological and artificial neural networks are used to describe dissipative processes and to address such questions. Although individual systems may express different details, the fact that attractors are generally applicable constructs suggests that the understanding of one complex system may give insight into similar problems of self-organization in others. Supported by AFOSR 92J-0140. NAM SEOG PARK, DAVE ROBERTSON, AND KEITH STENNING, UNIVERSITY OF EDINBURGH (NAMSEOG at AISB.EDINBURGH.AC.UK) "FROM DYNAMIC BINDINGS TO FURTHER SYMBOLIC KNOWLEDGE REPRESENTATION USING SYNCHRONOUS ACTIVITY OF NEURONS" A structured connectionist model using temporal synchrony has been proposed by Shastri and Ajjanagadde. This model has provided a mechanism which encodes rules and facts involving n-ary predicates and handles some types of dynamic variable binding using synchronous activity of neurons.
Although their mechanism is powerful enough to provide a solution to the dynamic variable binding problem, it also shows some limitations in dealing with some knowledge representation issues such as binding generation, consistency checking, and unification, which are important in enabling their model to achieve better symbolic processing capabilities. This paper shows how Shastri and Ajjanagadde's mechanism can be modified and extended to overcome those limitations. The modification is made by redefining a temporal property of one of the four types of node used in their model and replacing it with the newly defined one. Two layers of nodes are also added to enable a uniform layered connection between the antecedent and the consequent of various types of rule, which allows comparatively straightforward translation from the symbolic representation of rules to their connectionist representation. As a result, the modified system is able to tackle more knowledge representation issues while, at the same time, reducing the number of types of node required and retaining the merits of the original model. ANDREW PENZ, TEXAS INSTRUMENTS (PENZ at RESBLD.TI.COM) (TITLE AND ABSTRACT TO BE ADDED) BARRY RHOADES, UNIVERSITY OF NORTH TEXAS (RHOADES at MORTICIA.CNNS.UNT.EDU) "GLOBAL NEUROCHEMICAL DETERMINATION OF LOCAL EEG IN THE OLFACTORY BULB" Spatially coherent bursts of EEG oscillations are a dominant electrophysiological feature of the mammalian olfactory bulb, accompanying each inspiratory phase of the respiratory cycle in the waking state and recurring intermittently under moderate anesthesia. In the rat these oscillatory bursts are nearly sinusoidal, with a typical oscillation frequency of 50-60 Hz. The averaged evoked potential (AEP) to repetitive near-threshold-level electrical stimulation of either the primary olfactory nerve (PON) or lateral olfactory tract (LOT) has a dominant damped sinusoidal component at the same frequency.
These oscillations are generated by the negative feedback relationship between the mitral/tufted (MT) cell principal neurons and the GABAergic granule (G) cell interneurons at reciprocal dendro-dendritic synapses of the external plexiform layer (EPL). This EPL generator produces oscillations in mitral/tufted cell and granule cell ensembles, under the high input levels produced by inspiratory activation of the olfactory epithelium or electrical stimulation of the bulbar input or output tracts. The dependence of oscillations in the bulbar EEG and evoked potentials on local and regional alterations in GABAergic neurochemistry was investigated in barbiturate-anesthetized Sprague-Dawley rats. The main olfactory bulb, primary olfactory nerve (PON), and lateral olfactory tract (LOT) were surgically exposed, unilaterally. Basal EEG from both bulbs and AEPs from the exposed bulb in response to stimulation of the PON and LOT were recorded before and following both local microinjection and regional surface application of the GABA-active neurochemicals muscimol, picrotoxin, and bicuculline. Locally restricted microinjections profoundly altered AEP waveforms, but had negligible effects on the background EEG. Regional applications of the same neurochemicals at the same concentrations across the entire exposed bulbar surface produced discontinuous transitions in EEG oscillatory state. The temporal properties of the basal EEG recorded from a site on the bulbar surface could thus be altered only by GABAergic modification of G->MT cell synapses over a large region of the olfactory bulb. This provides neurochemical evidence that the temporally and spatially patterned oscillatory activity deriving from the interactions of mitral/tufted and granule cells is globally organized; i.e., that global oscillatory state overrides local neurochemistry in controlling background oscillations of local neuronal ensembles. This research was conducted in the laboratory of Walter J.
Freeman at the University of California, Berkeley, and supported primarily by funds from NIMH grant #MH06686. IVAN SOLTESZ, UNIVERSITY OF TEXAS HEALTH SCIENCES CENTER, DALLAS (SOLTESZ at UTSW.SWMED.EDU) (TITLE AND ABSTRACT TO BE ADDED) MARTIN STEMMLER, CALIFORNIA INSTITUTE OF TECHNOLOGY (STEMMLER at KLAB.CALTECH.EDU) "SYNCHRONIZATION AND OSCILLATIONS IN SPIKING NETWORKS" While cortical oscillations in the 30 to 70 Hz range are robust and commonly found in local field potential measurements in both cat and monkey visual cortex (Gray et al., 1990; Eckhorn et al., 1993), they are much less evident in single spike trains recorded from behaving monkeys (Young et al., 1982; Bair et al., 1994). We show that a simple neural network with spiking "units" and a plausible excitatory-inhibitory interconnection scheme can explain this discrepancy. The discharge patterns of single units are highly irregular and the associated single-unit power spectrum is flat with a dip at low frequencies, as observed in cortical recordings in the behaving monkey (Bair et al., 1994). However, if the local field potential, defined as the summed spiking activity of all "units" within a particular distance, is computed over an area large enough to include direct inhibitory interactions among cell pairs, a prominent peak around 30-50 Hz becomes visible. MARK STEYVERS, INDIANA UNIVERSITY, AND CEES VAN LEEUWEN, UNIVERSITY OF AMSTERDAM, THE NETHERLANDS (MSTEYVER at HERMES.PSYCH.INDIANA.EDU) "USE OF SYNCHRONIZED CHAOTIC OSCILLATIONS TO MODEL MULTISTABILITY IN PERCEPTUAL GROUPING" Computer simulations are presented to illustrate the utility of a new way of dynamic coupling in neural networks. It is demonstrated that oscillatory neural network activity can be synchronized even if the network remains in a chaotic state.
An advantage of chaotic synchronous oscillations over periodic ones is that chaos provides a very powerful and intrinsic mechanism for solving the binding problem and, at the same time, for capturing multistability in perception. The resulting switching-time distributions of a multistable grouping show qualitative similarities with experimentally obtained distributions. The chaotic oscillatory couplings were used to model the Gestalt laws of proximity, good continuation, and symmetry preference. In addition, interpretations provided by the model were shown to be subject to sequence influences. DAVID TAM, UNIVERSITY OF NORTH TEXAS (DTAM at UNT.EDU) "SPIKE TRAIN ANALYSIS FOR DETECTING SYNCHRONIZED FIRING AMONG NEURONS IN NETWORKS" A specialized spike train analysis method is introduced to detect synchronized firing between neurons. This conditional correlation technique is developed to detect the probability of firing and non-firing of neurons based on the pre- and post-conditional cross-intervals, and interspike intervals after the reference spike has fired. This statistical measure is an estimate of the conditional probability of firing of a spike in one neuron based on the probability of firing of another neuron after the reference spike has occurred. By examining the lag times of post-interspike intervals and post-cross intervals, synchronized coupling effects between the firing of the reference neuron and that of other neurons can be revealed. ELIZABETH THOMAS, WILLAMETTE COLLEGE (ETHOMAS at WILLAMETTE.EDU) "A COMPUTATIONAL MODEL OF SPINDLE OSCILLATIONS" A model of the thalamocortical system was constructed for the purpose of a computational analysis of spindle oscillations. The parameters used in the model were based on experimental measurements. The model included a reticular thalamic nucleus and a dorsal layer. The thalamic cells were capable of undergoing a low-threshold calcium-mediated spike.
The simulation was used to investigate the plausibility and ramifications of certain proposals that have been put forward for the production of spindle oscillations. An initial stimulus to the model reticular thalamic layer was found to give rise to activity resembling spindle oscillations. The emergent population oscillations were analyzed for factors that affected their frequency and amplitude. The role of cortical feedback to the pacemaking RE layer was also investigated. Finally, a non-linear dynamics analysis was conducted on the emergent population oscillations. This activity was found to yield a positive Lyapunov exponent and to define an attractor of low dimension. Excitatory feedback was found to decrease the dimensionality of the attractor at the reticular thalamic layer. ROGER TRAUB, IBM T.J. WATSON RESEARCH CENTER (TRAUB at WATSON.IBM.COM) "CELLULAR MECHANISMS OF SOME EPILEPTIC OSCILLATIONS" Cortical circuitry can express epileptic discharges (synchronized population oscillations) when a number of different system parameters are experimentally manipulated: blockade of fast synaptic inhibition; enhancement of NMDA conductances; or prolongation of non-NMDA conductances. Despite the differences in synaptic mechanism, the population output is, remarkably, stereotyped. We shall present data indicating that the stereotypy can be explained by three basic ideas: recurrent excitatory connections between pyramidal neurons, the ability of pyramidal dendrites to produce repetitive bursts, and the fact that experimental epilepsies engage one or another prolonged depolarizing synaptic current. SETH WOLPERT, UNIVERSITY OF MAINE (WOLPERT at EECE.MAINE.EDU) "MODELING NEURAL OSCILLATIONS USING VLSI-BASED NEUROMIMES" As a prelude to the VLSI implementation of a locomotory network, neuronal oscillators that utilize reciprocal inhibition (RI) and recurrent cyclic inhibition (RCI) were re-created for parametric characterization using comprehensive VLSI-based artificial nerve cells, or Neuromimes.
Two-phase RI oscillators consisting of a pair of self-exciting, mutually inhibiting neuronal analogs were implemented using both fixed and dynamic synaptic weighting, and cyclic inhibitory RCI ring networks of three and five cells with fixed synaptic weighting were characterized with respect to cell parameters representing resting cell membrane potential, resting threshold potential, refractory period, and tonic inhibition from an external source. For each of these parameters, the frequency at which an individual cell would self-excite was measured. The impact of that cell's self-excitatory frequency on the frequency of the total network was then assessed in a series of parametric tests. Results indicated that, while all four input parameters continuously and coherently affected the cellular frequency, one input parameter, duration of the cellular refractory period, had no effect on overall network frequency, even though the cellular frequency ranged over more than two orders of magnitude. These results would suggest that neuronal oscillators are sensitive to concentrations of the ionic species contributing to resting cell membrane potential and threshold, but are stable with respect to cellular conditions affecting refraction, such as the conditions in the sodium inactivation channels. ROBERT WONG, DOWNSTATE MEDICAL CENTER/BROOKLYN (NO E-MAIL; TELEPHONE 718-270-1339, FAX 718-270-2241) (TITLE AND ABSTRACT TO BE ADDED) DAVID YOUNG, LOUISIANA STATE UNIVERSITY (DYOUNG at MAX.EE.LSU.EDU) "OSCILLATIONS CREATED BY THE FRAGMENTED ACCESS OF DISTRIBUTED CONNECTIONIST REPRESENTATIONS" The rapid and efficient formation of transient interactions on a systems level is viewed as a necessary aspect of cognitive function. It is a principle behind the binding problem of symbolic rule-based reasoning, which has seen many recent connectionist approaches inspired by observations of synchronized neural oscillations in separate cortical regions of the brain.
However, the combinatorial complexity of linking the many possible interactions that may be needed exposes a serious limitation inherent in connectionist networks. As is well known, an artificial neural network constitutes a massively parallel device, yet above the most basic organizational level it effectively does only one thing at a time. This limitation is called the opacity of a neural network; it describes the limited ability to access the knowledge embodied in the connections of a network from outside the network. This talk presents two new results relevant to neural oscillations. Firstly, wider access to the information storage of feedback structures is achieved through composite attraction basins that represent a combination of other learned basins. Secondly, a dynamics of inactivity is introduced and is shown to support concurrent processes within the same structure. By quieting the activity of dissimilar network elements, system states are temporarily merged to form combined states of smaller dimension. The merged state will then proceed along a monotone decreasing path over an energy surface toward a composite basin, just as a single state will proceed toward a single basin. Since changes are not made to interconnection weights, specific instantiations of full dimension may be reconstructed from vector fragments. Moreover, the fragment size is dynamic and may be altered as the system operates. Based on this observation, a new dynamics of inactivity for feedback connectionist structures is presented, allowing the network to operate in a fragment-wise manner on learned distributed representations. The new mechanism is seen as having tracks of activation passing through an otherwise quiet system. The active fragment repeatedly passes through the distributed representation, setting up an oscillation. Inactive portions of the structure may then be utilized by other processes that are locally kept separate through phase differences and efferent coincidence.
Out-of-phase tracks may be brought into synchrony, thus allowing the interaction of disparate features of objects by lowering the inhibition of the neighboring elements involved. The feedback structure is less than fully connected globally but highly interconnected within local neighborhoods of network elements. Reduced global connectivity in an environment operating fragment-wise permits true concurrent behavior, as opposed to the local use of time-shared resources, which is not concurrent. A second structure is interwoven with and regulates the first through inhibitory stimulation. This relationship of the two networks agrees with the predicted regulatory influence that neurons with smooth dendritic arborizations have on pyramidal cells and stellate cells displaying spiny dendrites. GEOFFREY YUEN, NORTHWESTERN UNIVERSITY (YUEN at MILES.PHYSIO.NWU.EDU) "FROM THE ADJUSTABLE PATTERN GENERATOR MODEL OF THE CEREBELLUM TO BISTABILITY IN PURKINJE CELLS" Based on the anatomy and physiology of the cerebellum and red nucleus, the adjustable pattern generator (APG) model is a theory of movement control that emphasizes the quasi-feedforward nature of higher motor control processes. This is in contrast to the heavily feedback-based control processes at the level of the spinal cord. Thus, limb movement-related motor commands (i.e., high-frequency burst discharges) in red nucleus during awake-monkey experiments are postulated to be generated by endogenous CNS pattern generators rather than via continuous feedback from the periphery. The postulated endogenous movement-command CNS pattern generator includes neurons in the magnocellular red nucleus (RNm), deep cerebellar nuclei (i.e., nucleus interpositus (NI) for limb movements), and cerebellar Purkinje cells. Recurrent excitatory interactions between RNm and NI which give rise to burst discharges are modulated by the inhibitory outputs of cerebellar Purkinje cells.
Thus, dynamic burst durations and patterns are sculpted by learning-based inhibition from Purkinje cells, giving rise to appropriate movement command signals for different movements and circumstances. Intrinsic to the concept of a pattern generator is the existence of self-sustained activities. Aside from the reverberatory positive feedback circuit in the recurrent loop between the cerebellum and red nucleus, bistability in the membrane potentials of Purkinje cells can also support self-sustained activity. This concept of bistability is based on the phenomenon of plateau potentials as observed in Purkinje cell dendrites. This talk will concisely summarize the APG theory and circuitry, report on the results of its use in limb-movement control simulations, and describe current efforts to capture the biophysical basis of bistability in Purkinje cells. The bistability of cerebellar Purkinje cells also has particular significance for the control of oscillations in the recurrent excitatory circuits between red nucleus and deep cerebellar nuclei, as well as for movement control in general. With respect to the biophysical basis of dendritic bistability, we have carried out simulations and phase-plane analysis of the ionic currents which underlie dendritic plateau potentials in Purkinje cells. Here we shall report on the results of the phase-plane analyses of the systems based on high-threshold P-calcium, delayed rectifier potassium, and slow, calcium-mediated potassium channels. We gratefully acknowledge the support of the various aspects of this work by ONR (N-00014-93-1-0636 to G. L. F. Yuen), NIH (P50MH48185-01 to J. C. Houk) and NSF (NS-26915 to P. E. Houkberger). DIRECTIONS TO CONFERENCE AND EVENING ACTIVITIES To Those Attending the MIND conference on Oscillations in Neural Systems: For those of you who are baseball fans, or are perhaps just curious to see the new Ballpark in Arlington, we are arranging a trip to the game on Saturday May 7.
The Minnesota Twins are in town to take on the Texas Rangers. Game time is 7:30 pm. If we gather a large enough crowd, we can probably get a group discount. Please send a response if you are interested. The second, but not least, purpose of this message is to inform those of you arriving by car how to get to the motel and the UTA campus. If you are arriving by air, you need not read further. If you are entering Arlington from the north side via Interstate 30, you will exit on Cooper Street and travel south (after exiting you should cross back over the freeway to head south). You will drive about two or three miles to reach campus. You will pass Randol Mill and Division streets. UTA is about four blocks beyond Division Street. You should turn east (left) on Mitchell Street. If you get to Park Row, you have gone too far. To get to the Park Inn, continue past Mitchell one block to Benge. The Inn is on your right. If you are entering Arlington from the south side via Interstate 20, you should exit on Cooper Street and head north. You will drive three or four miles to reach campus. Some of the major streets you will pass are Arbrook, Arkansas and Pioneer Parkway, and Park Row. UTA is just beyond Park Row. You should turn east (right) on Mitchell Street. If you get to Border Street, you have gone too far. The Park Inn is two blocks north of Park Row on your left. Turn left on Benge Street. Once you are on Mitchell, continue east two blocks to West Street and turn left (north). Proceed one block to Nedderman. There are two parking lots at the corner of West and Nedderman. If possible, park in the north lot. You will now have the nursing building to the west and the business building to the north. To get to the library on foot from the parking lot, head west towards the nursing building. You will cross on a sidewalk with the nursing building to your left and the parking garage to your right. (DO NOT park in the parking garage. It costs an arm and a leg.
Parking in the other lot is free.) When you cross the street past the parking garage, the library is the building on the right. The Life Sciences Building will be on the left. The conference is on the sixth floor of the library, in the Parlor. Parking permits (free of charge) will be available at the conference registration table, as will campus maps. If you are staying at the Inn, it is probably easier to park at the Inn and then walk to campus (two blocks away). Campus maps will be available at the Park Inn desk. Hope the directions are clear. Vince Brown b096vrb at utarlg.uta.edu Registration and Travel Information Official Conference Motel: Park Inn 703 Benge Drive Arlington, TX 76013 1-800-777-0100 or 817-860-2323 A block of rooms has been reserved at the Park Inn for $35 a night (single or double). Room-sharing arrangements are possible. Reservations should be made directly through the motel. Official Conference Travel Agent: Airline reservations to Dallas-Fort Worth airport should be made through Dan Dipert Travel in Arlington, 1-800-443-5335. For those who wish to fly on American Airlines, a Star File account has been set up for a 5% discount off lowest available fares (two-week advance purchase, staying over Saturday night) or 10% off regular coach fare; arrangements for Star File reservations should be made through Dan Dipert. Please let the conference organizers know (by e-mail or telephone) when you plan to arrive: some people can be met at the airport (about 30 minutes from Arlington); others can call Super Shuttle at 817-329-2000 upon arrival for transportation to the Park Inn (about $14-$16 per person). Registration for the conference is $25 for students, $65 for non-student oral or poster presenters, and $85 for others. MIND members will have $20 (or $10 for students) deducted from the registration. A registration form is attached to this announcement. Registrants will receive the MIND monthly newsletter (by e-mail when possible) for the remainder of 1994.
REGISTRATION FOR MIND CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS, UNIVERSITY OF TEXAS AT ARLINGTON, MAY 5-7, 1994

Name ______________________________________________________________
Address ___________________________________________________________
        ___________________________________________________________
        ___________________________________________________________
E-Mail ____________________________________________________________
Telephone _________________________________________________________

Registration fee enclosed:
_____ $15 Student, member of MIND
_____ $25 Student
_____ $65 Non-student oral or poster presenter
_____ $65 Non-student member of MIND
_____ $85 All others

Will you be staying at the Park Inn? ____ Yes ____ No
Are you planning to share a room with someone you know? ____ Yes ____ No
If so, please list that person's name __________________________
If not, would you be interested in sharing a room with another conference attendee to be assigned? ____ Yes ____ No

PLEASE REMEMBER TO CALL THE PARK INN DIRECTLY FOR YOUR RESERVATION (WHETHER SINGLE OR DOUBLE) AT 1-800-777-0100 OR 817-860-2323.

From tenorio at ecn.purdue.edu Wed May 4 15:40:37 1994
From: tenorio at ecn.purdue.edu (tenorio@ecn.purdue.edu)
Date: Wed, 4 May 1994 14:40:37 -0500
Subject: Financial Forecasting Competition
Message-ID: <199405041940.OAA18035@dynamo.ecn.purdue.edu>

At the risk of clogging everyone's mailboxes, and as someone involved with the competition, I feel the need to answer Bill's concerns:

>Motivated by the announcement on this list of the "First
>International Nonlinear Financial Forecasting Competition",
>I would like to raise a concern that has troubled me for
>some time. I wonder whether it is really socially responsible
>to work on these things, or to support such work.
> ..
>
>What concerns me, is that the financial forecasting techniques
>I have seen are not oriented toward predicting the failure
>or success of individual enterprises, but rather toward
>identifying and predicting global trends in the flow of
>money.

The capacity to predict the failure or success of an enterprise is related more to analysis of the fundamentals of the economy than to technical analysis. In technical analysis, the supposition is made that price reflects all the information available to the market participants, as well as their expectations. I have not seen all the entries in the competition itself, but it is safe to assume that the predictors are not based on identifying "global trends in the flow of money." As a matter of fact, the competition is restricted to time series prediction processes where a single price series exists, apart from the reality of the world that generated it. You could say that this is the ultimate technical analysis test: a single price series alone. The problem posed here seems to be harder than the problem addressed by real traders, who have the benefit of "market sideviews," all the rumors, etc. Which brings us to the motivation behind this competition. This competition was meant as a follow-up to the work of Weigend and his colleagues and collaborators. The Santa Fe Competition results, although not geared towards economic time series, were basically all negative towards their predictability. Meanwhile, several software houses were making claims of great prediction accuracy, and selling commercial systems of supposedly great value based on non-linear techniques. It was very hard to know for sure whether or not such techniques are viable. The charter of the competition, in spite of its limited scope, is to sketch an answer to whether there is validity in the use of non-linear techniques on this type of time series.
There is a lot of smoke and mirrors, folklore, and even more bizarre tales when it comes to predictability of the markets. Our goal is to shed some light on this, at least from this restricted viewpoint. Second, a number of researchers have looked at the economy as the result of the interaction of many independent agents operating with similar policies. A sort of emergent behavior in a parallel distributed soup. Not very different from other systems such as biological, ecological, and neurological networks. If this is the case, predictability of one has some implications for predictability of the others, at least in a gross sense. Now, commercial investment firms use all forms of prediction schemes in the market: from fundamental analysis to looking at the planets. It is all done in the name of making money. We do not claim to have an answer to the "ethics" of different methods. Besides, they all seem to have the same (irk...) objective. For the market to work, there must be a seller and a buyer. Therefore someone must always be on the wrong side of the issue. So if company A did succeed and some investor lost a lot of money by not believing in its success, is this lack of belief, after his/her loss, to be blamed for "promoting the destruction of a company's future"? Is that judgment changed by the fact that we know the investor used fundamental analysis, whereas the "other guy" used technical analysis? How about if the roles were reversed?

>I am skeptical that this is possible in the first
>place, but even if it is possible, it seems to me that to
>make money this way is to be a parasite upon the financial
>system, rather than to serve it.

Some of us are skeptical too, thus the competition. But what is the reason for the fear of its results? Again, I do not claim the authority to make ethical judgments on our financial system, which is plagued by both fundamental and technical analysts.
From a strictly economic viewpoint, all market players are buying and selling risk. This form of financial insurance is just a more sophisticated form of insurance on goods, which has promoted the great prosperity of Western civilization over the last 500 years. Without insurance, ship owners would not have ventured onto the oceans after merchandise, etc. Without it today, we would not have simple things like the post office, UPS and the like, and farmers could not guarantee crop prices, etc. (For lack of health insurance, some voters elected our President.) It is all about minimizing risk, no matter how we do it. The problem is that the person who assumes your risk does not do it for anyone's blue eyes, but to make a living, and therefore will use whatever means are available. Our role is to minimize the risk of investment in technologies that will not produce a true predictive gain. But I would like to remind you that science cannot verify negative assertions; the scientific method is not adequate for that. Further, our resources are limited. The best we can hope for is to shed some light, and that is our charter as scientists. And the question is: "Where is the truth concerning the performance of these methods?"

From crg at ida.his.se Thu May 5 07:40:32 1994
From: crg at ida.his.se (Connectionist)
Date: Thu, 5 May 94 13:40:32 +0200
Subject: CFP: SCC-95 THE SECOND SWEDISH CONFERENCE ON CONNECTIONISM The Connectionist Research Group University of Skovde, SWEDEN
Message-ID: <9405051140.AA05414@mhost.ida.his.se>

March 2-4, 1995

CALL FOR PAPERS

SCOPE OF THE CONFERENCE

Understanding neural information processing properties characterizes the field of connectionism, also known as Artificial Neural Networks (ANN). The rapid growth, expansion and great popularity of connectionism is motivated by its new way of approaching and understanding the problems of artificial intelligence, and by its applicability in many real-world applications.
There are a number of subfields of connectionism, among which we distinguish the following. The importance of a "Theory of connectionism" cannot be overstressed. Interest in the theoretical analysis of neuronal models, and in the complex dynamics of network architectures, is growing rapidly. It is often argued that abstract neural network models are best understood by analysing their computational properties with respect to their biological counterparts. A clear theoretical approach to developing neural models also provides insight into the dynamics, learning, functionality and capabilities of different connectionist networks. "Cognitive connectionism" bridges the gap between the theory of connectionism and cognitive science by modelling higher-order brain functions from psychology using methods offered by connectionist models. The findings of this field are often evaluated by their neuropsychological validity and not by their functional applicability. Sometimes the field of connectionism is referred to as the "new AI". Its applicability in AI has spawned a belief that AI will benefit from a good understanding of neural information processing capabilities. The subfield "Connectionism and artificial intelligence" is also concerned with the distinction between connectionist and symbolic representations. The wide applicability and problem-solving abilities of neural networks are exposed in "Real-world computing". Robotics, vision, speech and neural hardware are some of the topics in this field. "The philosophy of connectionism" is concerned with such diverse questions as the mind-body problem and the relations between distributed representations, their semantics, and their implications for intelligent behaviour. Experimental studies in "Neurobiology" have implications for the validity and design of new artificial neural architectures. This branch of connectionism addresses topics such as self-organisation, modelling of cortex, and associative memory models.
A number of internationally renowned keynote speakers will be invited to give plenary talks on the subjects listed above.

GUIDELINES FOR PAPER SUBMISSIONS

Instructions for submission of manuscripts: Papers may be submitted, in three (3) copies, to one of the following sessions:

~ Theory of connectionism
~ Cognitive connectionism
~ Connectionism and artificial intelligence
~ Real-world computing
~ The philosophy of connectionism
~ Neurobiology

A note should state the principal author and e-mail address (if any). It should also indicate which session the paper is submitted to.

Length: Papers must be a maximum of ten (10) pages long (including figures and references), with a text area of 6.5 inches by 9 inches (including footnotes but excluding page numbers), in a 12-point font. Template and style files conforming to these specifications, for several text formatting programs, will be available to authors of accepted papers.

Deadline: Papers must be received by Thursday, September 1, 1994 to ensure reviewing. All submitted papers will be reviewed by members of the program committee on the basis of technical quality, research significance, novelty and clarity. The principal author will be notified of acceptance no later than Tuesday, October 18, 1994.

Proceedings: All accepted papers will appear in the conference proceedings.

CONFERENCE CHAIRS

Lars Niklasson, Mikael Boden
lars.niklasson at ida.his.se
mikael.boden at ida.his.se

TENTATIVE SPEAKERS

Michael Mozer, University of Colorado, USA
Ronan Reilly, University College Dublin, Ireland
Paul Smolensky, University of Colorado, USA
David Touretzky, Carnegie Mellon University, USA

This list is still being completed.

PROGRAM COMMITTEE

Jim Bower, California Inst. of Technology, USA
Harald Brandt, Ellemtel, Sweden
Ron Chrisley, University of Sussex, UK
Gary Cottrell, University of California, San Diego, USA
Georg Dorffner, University of Vienna, Austria
Tim van Gelder, National University of Australia, Australia
Agneta Gulz, University of Skovde, Sweden
Olle Gallmo, Uppsala University, Sweden
Tommy Garling, Goteborg University, Sweden
Dan Hammerstrom, Adaptive Solutions Inc., USA
Jim Hendler, University of Maryland, USA
Erland Hjelmquist, Goteborg University, Sweden
Anders Lansner, Royal Inst. of Techn., Stockholm, Sweden
Reiner Lenz, Linkoping University, Sweden
Ajit Narayanan, University of Exeter, UK
Jordan Pollack, Ohio State University, USA
Noel Sharkey, University of Sheffield, UK
Bertil Svensson, Chalmers Inst. of Technology, Sweden
Tere Vaden, University of Tampere, Finland

PLEASE ADDRESS ALL CORRESPONDENCE TO:

"SCC-95"
The Connectionist Research Group
University of Skovde
P.O. Box 408
541 28 Skovde, SWEDEN
E-mail: crg at ida.his.se

From kipp at nvl.army.mil Fri May 6 11:50:00 1994
From: kipp at nvl.army.mil (Teresa Kipp)
Date: Fri, 6 May 94 11:50 EDT
Subject: Recruitment of Ph.D Neural Net Scientists
Message-ID:

JOBS FOR TALENTED NEURAL NET Ph.D's
--------------------------------------

The Computer Vision Research Branch of the US Army Research Laboratory is composed of Ph.D's working in both theory and experimentation in the fields of theoretical computer science, probability and statistics. Our branch also has contracts with top theoretical computer scientists and mathematicians from various universities, providing continuous interaction through regular visits. Our research is the design of algorithms to recognize military targets in complex imagery generated by a variety of sensors. This research also includes commercial applications such as handwriting and face recognition. We are in the process of enlarging our branch by hiring neural net scientists on our in-house staff and by contracting with university neural net scientists.
Part of the new research effort is a comparison between neural net algorithms and the model-based algorithms with combinatorial trees presently designed by our branch, in order to stimulate cross-fertilization, hybrid systems, and the unification of these two approaches. Talented neural net Ph.D's are invited to submit a copy of their curriculum vitae by regular or electronic mail. Vitae sent by electronic mail are acceptable as either LaTeX or PostScript files. Send all communications to Ms. Teresa Kipp at any of the following addresses:

electronic mail to: kipp at nvl.army.mil

or send by regular mail to:

DEPARTMENT OF THE ARMY
US ARMY RESEARCH LABORATORY
AMSRL SS SK (T. KIPP)
10221 BURBECK RD STE 430
FT BELVOIR VA 22060-5806

or contact Ms. Kipp at (703)-704-3656.

From esann at dice.ucl.ac.be Sat May 7 17:21:18 1994
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Sat, 7 May 1994 23:21:18 +0200
Subject: ESANN'94 proceedings
Message-ID: <9405072117.AA04231@ns1.dice.ucl.ac.be>

________________________________________________
-------------------------
! PROCEEDINGS AVAILABLE !
-------------------------
________________________________________________

ESANN'94
European Symposium on Artificial Neural Networks
Brussels, April 20-21-22, 1994
________________________________________________

The second European Symposium on Artificial Neural Networks was held in Brussels (Belgium) on April 20-21-22. The conference presented a selection of high-quality papers on theoretical and mathematical aspects of neural networks, algorithms, and relations with classical methods of statistics, of information theory, and with biology. You will find enclosed the detailed program of the conference. The proceedings of this conference are available by sending the completed form below to the conference secretariat. Please use fax to avoid delays. The proceedings include all 44 papers presented during the conference.
A limited number of copies of the ESANN'93 proceedings are still available; you will find the list of papers included in these proceedings at the end of this e-mail.

Prices:
ESANN'94 proceedings: BEF 2500 (proceedings BEF 2000 + postage & packing BEF 500)
ESANN'93 proceedings: BEF 2000 (proceedings BEF 1500 + postage & packing BEF 500)

Postage & packing (BEF 500) is charged only once in the case of multiple orders.

_______________________________________________________________________

ESANN'94 and ESANN'93 proceedings: order form
_____________________________________________

Ms., Mr., Dr., Prof.: ................................................
Name: ................................................................
First Name: ..........................................................
Institution: .........................................................
.....................................................................
Address: .............................................................
.....................................................................
ZIP: .................................................................
Town: ................................................................
Country: .............................................................
VAT no.: .............................................................
tel: .................................................................
fax: .................................................................
E-mail: ..............................................................

Please send me ... copies of the ESANN'94 proceedings.
Please send me ... copies of the ESANN'93 proceedings.
Please send me an invoice: O Yes O No

Amount:
ESANN'94 proceedings: ... copies x BEF 2000 = BEF .....
ESANN'93 proceedings: ... copies x BEF 1500 = BEF .....
Postage & packing: BEF 500
_________
TOTAL BEF .....

Payment (please tick):

O Bank transfer, stating "ESANN - proceedings" and your name, made payable to:
Generale de Banque
ch. de Waterloo 1341A
B-1180 Brussels (Belgium)
acc. no. 210-0468648-93 of D facto (45 rue Masui, 1210 Brussels)
Bank transfers must be free of charges to the beneficiary; any transfer charges must be paid by the sender.

O Cheques/postal money orders made payable to:
D facto - 45 rue Masui - B-1210 Brussels - Belgium

Only orders accompanied by a cheque, a postal money order or the proof of bank transfer will be considered. The order form and payment must be sent to the conference secretariat:

D facto publications
ESANN proceedings
45 rue Masui
B-1210 Brussels
Belgium
tel: + 32 2 245 43 63
fax: + 32 2 245 46 94
______________________________________________________________________

The proceedings of ESANN'94 contain the following papers:
---------------------------------------------------------
"Concerning the formation of chaotic behaviour in recurrent neural networks" T. Kolb, K. Berns Forschungszentrum Informatik Karlsruhe (Germany) "Stability and bifurcation in an autoassociative memory model" W.G. Gibson, J. Robinson, C.M. Thomas University of Sydney (Australia) "Capabilities of a structured neural network. Learning and comparison with classical techniques" J. Codina, J. C. Aguado, J.M. Fuertes Universitat Politecnica de Catalunya (Spain) "Projection learning: alternative approaches to the computation of the projection" K. Weigl, M. Berthod INRIA Sophia Antipolis (France) "Stability bounds of momentum coefficient and learning rate in backpropagation algorithm" Z. Mao, T.C. Hsia University of California at Davis (USA) "Model selection for neural networks: comparing MDL and NIC" G. te Brake*, J.N. Kok*, P.M.B. Vitanyi** *Utrecht University, **Centre for Mathematics and Computer Science, Amsterdam (Netherlands) "Estimation of performance bounds in supervised classification" P. Comon*, J.L. Voz**, M.
Verleysen** *Thomson-Sintra Sophia Antipolis (France), **Université Catholique de Louvain, Louvain-la-Neuve (Belgium) "Input Parameters' estimation via neural networks" I.V. Tetko, A.I. Luik Institute of Bioorganic & Petroleum Chemistry Kiev (Ukraine) "Combining multi-layer perceptrons in classification problems" E. Filippi, M. Costa, E. Pasero Politecnico di Torino (Italy) "Diluted neural networks with binary couplings: a replica symmetry breaking calculation of the storage capacity" J. Iwanski, J. Schietse Limburgs Universitair Centrum (Belgium) "Storage capacity of the reversed wedge perceptron with binary connections" G.J. Bex, R. Serneels Limburgs Universitair Centrum (Belgium) "A general model for higher order neurons" F.J. Lopez-Aligue, M.A. Jaramillo-Moran, I. Acedevo-Sotoca, M.G. Valle Universidad de Extremadura, Badajoz (Spain) "A discriminative HCNN modeling" B. Petek University of Ljubljana (Slovenia) "Biologically plausible hybrid network design and motor control" G.R. Mulhauser University of Edinburgh (Scotland) "Analysis of critical effects in a stochastic neural model" W. Mommaerts, E.C. van der Meulen, T.S. Turova K.U. Leuven (Belgium) "Stochastic model of odor intensity coding in first-order olfactory neurons" J.P. Rospars*, P. Lansky** *INRA Versailles (France), **Academy of Sciences, Prague (Czech Republic) "Memory, learning and neuromediators" A.S. Mikhailov Fritz-Haber-Institut der MPG, Berlin (Germany), and Russian Academy of Sciences, Moscow (Russia) "An explicit comparison of spike dynamics and firing rate dynamics in neural network modeling" F. Chapeau-Blondeau, N. Chambet Université d'Angers (France) "A stop criterion for the Boltzmann machine learning algorithm" B. Ruf Carleton University (Canada) "High-order Boltzmann machines applied to the Monk's problems" M. Grana, V. Lavin, A. D'Anjou, F.X. Albizuri, J.A. Lozano UPV/EHU, San Sebastian (Spain) "A constructive training algorithm for feedforward neural networks with ternary weights" F.
Aviolat, E. Mayoraz Ecole Polytechnique Fédérale de Lausanne (Switzerland) "Synchronization in a neural network of phase oscillators with time delayed coupling" T.B. Luzyanina Russian Academy of Sciences, Moscow (Russia) "Reinforcement learning and neural reinforcement learning" S. Sehad, C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nîmes (France) "Improving piecewise linear separation incremental algorithms using complexity reduction methods" J.M. Moreno, F. Castillo, J. Cabestany Universitat Politecnica de Catalunya (Spain) "A comparison of two weight pruning methods" O. Fambon, C. Jutten Institut National Polytechnique de Grenoble (France) "Extending immediate reinforcement learning on neural networks to multiple actions" C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nîmes (France) "Incremental increased complexity training" J. Ludik, I. Cloete University of Stellenbosch (South Africa) "Approximation of continuous functions by RBF and KBF networks" V. Kurkova, K. Hlavackova Academy of Sciences of the Czech Republic "An optimized RBF network for approximation of functions" M. Verleysen*, K. Hlavackova** *Université Catholique de Louvain, Louvain-la-Neuve (Belgium), **Academy of Science of the Czech Republic "VLSI complexity reduction by piece-wise approximation of the sigmoid function" V. Beiu, J.A. Peperstraete, J. Vandewalle, R. Lauwereins K.U. Leuven (Belgium) "Dynamic pattern selection for faster learning and controlled generalization of neural networks" A. Röbel Technische Universität Berlin (Germany) "Noise reduction by multi-target learning" J.A. Bullinaria Edinburgh University (Scotland) "Variable binding in a neural network using a distributed representation" A. Browne, J. Pilkington South Bank University, London (UK) "A comparison of neural networks, linear controllers, genetic algorithms and simulated annealing for real time control" M. Chiaberge*, J.J. Merelo**, L.M. Reyneri*, A.
Prieto**, L. Zocca* *Politecnico di Torino (Italy), **Universidad de Granada (Spain) "Visualizing the learning process for neural networks" R. Rojas Freie Universität Berlin (Germany) "Stability analysis of diagonal recurrent neural networks" Y. Tan, M. Loccufier, R. De Keyser, E. Noldus University of Gent (Belgium) "Stochastics of on-line back-propagation" T. Heskes University of Illinois at Urbana-Champaign (USA) "A lateral contribution learning algorithm for multi MLP architecture" N. Pican*, J.C. Fort**, F. Alexandre* *INRIA Lorraine, **Université Nancy I (France) "Two or three things that we know about the Kohonen algorithm" M. Cottrell*, J.C. Fort**, G. Pagès*** Universités *Paris 1, **Nancy 1, ***Paris 6 (France) "Decoding functions for Kohonen maps" M. Alvarez, A. Varfis CEC Joint Research Center, Ispra (Italy) "Improvement of learning results of the selforganizing map by calculating fractal dimensions" H. Speckmann, G. Raddatz, W. Rosenstiel University of Tübingen (Germany) "A non linear Kohonen algorithm" J.-C. Fort*, G. Pagès** *Université Nancy 1, **Universités Pierre et Marie Curie, et Paris 12 (France) "Self-organizing maps based on differential equations" A. Kanstein, K. Goser Universität Dortmund (Germany) "Instabilities in self-organized feature maps with short neighbourhood range" R. Der, M. Herrmann Universität Leipzig (Germany) The proceedings of ESANN'93 contain the following papers: --------------------------------------------------------- "A modified trajectory reversing method for the stability analysis of neural networks" M. Loccufier, E. Noldus University of Ghent (Belgium) "A lateral inhibition network that emulates a winner-takes-all algorithm" B. Krekelberg, J.N. Kok Utrecht University (The Netherlands) "Tracking global minima using a range expansion algorithm" D. Gorse, A. Shepherd, J.G.
Taylor University College London (United Kingdom) "Embedding knowledge into stochastic learning automata for fast solution of binary constraint satisfaction problems" D. Kontoravdis, A. Likas, A. Stafylopatis National Technical University of Athens (Greece) "Parallel dynamics of extremely diluted neural networks" D. Bolle, B. Vinck, A. Zagrebnov K.U. Leuven (Belgium) "Enhanced unit training for piecewise linear separation incremental algorithms" J.M. Moreno, F. Castillo, J. Cabestany Universitat Politecnica de Catalunya (Spain) "Incremental evolution of neural network architectures for adaptive behaviour" D. Cliff, I. Harvey, P. Husbands University of Sussex (United Kingdom) "Efficient decomposition of comparison and its applications" V. Beiu, J. Peperstraete, J. Vandewalle, R. Lauwereins K.U. Leuven (Belgium) "Modelling biological learning from its generalization capacity" F.J. Vico, F. Sandoval, J. Almaraz Universidad de Malaga (Spain) "A learning and pruning algorithm for genetic Boolean neural networks" F. Gruau Centre d'Etudes Nucleaires de Grenoble (France) "Population coding in a theoretical biologically plausible network" G.R. Mulhauser University of Edinburgh (Scotland) "Physiological modelling of cochlear nucleus responses" C. Lorenzi* **, F. Berthommier**, N. Tirandaz* *Universite de Lyon 2, ** Universite Joseph Fourier - Grenoble (France) "The Purkinje unit of the cerebellum as a model of a stable neural network" P. Chauvet*, G. Chauvet* ** *Universite d'Angers (France), **University of Southern California USA) "A mental problem for the solution of the direct and inverse kinematic problem" H. Cruse, U. Steinkuhler, J. Deitert Univ. of Bielefeld (Germany) "Probabilistic decision trees and multilayered perceptrons" P. Bigot, M. Cosnard Ecole Normale Superieure de Lyon (France) "Comparison of optimized backpropagation algorithms" W. Schiffmann, M. Joost, R. 
Werner University of Koblenz (Germany) "Minimerror: a perceptron learning rule that finds the optimal weights" M.B. Gordon, D. Berchier Centre d'Etudes Nucleaires de Grenoble (France) "MLP modular networks for multi-class recognition" P. Sebire, B. Dorizzi Institut National des Telecommunications (France) "Place-to-time code transformation during saccades" B. Breznen Slovak Academy of Sciences (Czechoslovakia) "An efficient learning model for the neural integrator of the oculomotor system" J.-P. Draye*, G. Cheron** ***, G. Libert*, E. Godaux** *Fac. Poly. de Mons, **Univ. de Mons-Hainaut, ***Univ. Libre de Bruxelles (Belgium) "Motion processing in the retina: about a velocity matched filter" J. Herault, W. Beaudot Institut National Polytechnique de Grenoble (France) "Laplacian pyramids with multi-layer perceptrons interpolators" B. Simon, B. Macq, M. Verleysen Universite Catholique de Louvain (Belgium) "EEG paroxystic activity detected by neural networks after wavelet transform analysis" P. Clochon*, R. Caterini**, D. Clarencon**, V. Roman** *INSERM U 320 Caen, **CRSSA U 18 Grenoble-la-Tronche (France) "An algorithm to learn sequences with the connectionist sequential machine" O. Sarzeaud, N. Giambiasi Ecole pour les Etudes et la Recherche en Informatique et Electronique - Nimes (France) "Time series and neural network: a statistical method for weight elimination" M. Cottrell, B. Girard, Y. Girard, M. Mangeas Universite Paris I (France) "The filtered activation networks" L.S. Smith, K. Swingler University of Stirling (Scotland) "Supervised learning and associative memory by the random neural network" M. Mokhtari Universite Rene Descartes - Paris (France) "Mixture states in Potts neural networks" D. Bolle, J. Huyghebaert K.U. Leuven (Belgium) "Trajectory learning using hierarchy of oscillatory modules" N.B. Toomarian, P. Baldi California Institute of Technology (USA) "Locally implementable learning with isospectral matrix flows" J. Dehaene, J. Vandewalle K.U. 
Leuven (Belgium) "Once more about the information capacity of Hopfield network" A.A. Frolov*, D. Husek** *Russian Acad. of Sci. - Moscow (Russia), **Acad. of Sci. Czech Republic - Prague (Czech Republic) "Self-organization of a Kohonen network with quantized weights and an arbitrary one-dimensional stimuli distribution" P. Thiran Ecole Polytechnique Federale de Lausanne (Switzerland) "Optimal decision surfaces in LVQ1 classification of patterns" M. Verleysen, P. Thissen, J.-D. Legat Universite Catholique de Louvain (Belgium) "Three algorithms for searching the minimum distance in self-organizing maps" V. Tryba*, K. Goser** *SICAN GmbH Hannover, **Universitat Dortmund (Germany) "Voronoi tesselation, space quantization algorithms and numerical integration" G. Pages Universite Paris I & Universite Pierre et Marie Curie (France) "An intuitive characterization for the reference vectors of a Kohonen map" A. Varfis, C. Versino CEC Joint Research Center (Italy)

_____________________________
Michel Verleysen
D facto conference services
45 rue Masui
1210 Brussels
Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
E-mail: esann at dice.ucl.ac.be
_____________________________

From tenorio at ecn.purdue.edu Sat May 7 15:04:11 1994
From: tenorio at ecn.purdue.edu (tenorio@ecn.purdue.edu)
Date: Sat, 7 May 1994 14:04:11 -0500
Subject: Financial Forecasting Competition
Message-ID: <199405071904.OAA04234@dynamo.ecn.purdue.edu>

First I would like to apologize to all, and especially to Bill Skaggs, for not placing the appropriate emotional indicators in my message to mark tongue-in-cheek statements, such as: 8>o ;>) :>) Without these, the message could seem offensive, in spite of the fact that the sophistication level of the readers here is very high. My apologies to all.

Bill further wrote:

>But aren't you a little worried that the company that does best in
>the competition, even if only by chance, will take the results as an
>official sanction and use them in advertising?
>(Feel free to ignore this question if you think I've already cost
>you too much time.) Thanks again, -- Bill

And Steve wrote:

>Its always been curious to me, tho why you would expect people who have
>successful methods who may be making money in the market to reveal
>them publicly? Could this account for the negative results?
>
>Steve
>
>Stephen J. Hanson, Ph.D.
>Head, Learning Systems Department
>SIEMENS Research
>755 College Rd. East
>Princeton, NJ 08540

The panel is considering a number of different metrics to be used, rather than declaring the winner on a single metric, which could create a winner by chance. If all the predictions are poor, for example, declaring a winner would indeed be a moot point, but it would be very informative as to the difficulty of the problem. We walk a fine line there, and care must be taken. I don't know how to solve the problem of biased sampling, except to allow a non-disclosure entry that offers us a counterexample: evidence that someone can successfully predict the series. If someone were to claim after the competition that they can do the task, an interesting question would be: so why didn't you enter? Also, all these points are only valid if we are talking about time series of tradable instruments. Predicting other financial time series, such as interest rates, sales, etc., would still carry a lot of value, but with less flash.

Regarding the point about the set of parallel agents for tradable instruments in the previous message: All agents have the same policy but different settings. All know the current state of the world. An agent would:
- If the market moves up by a percentage P, buy; and sell if the market moves down by the same percentage.
- If an agent holds a position (long or short) and the market goes against them, close it at a certain percentage drop <= P.
- If the market goes in their favor, liquidate the position after a minimum move of 3P (percentages always measured from a small to a large number). Each agent is the same, but the percentages are very different. If there are more buyers than sellers, the price goes up a point for each extra buyer, and similarly on the sell side. Imagine a sentiment function that turns the sellers into buyers or vice versa. This function goes up (say, for being inclined to be a buyer) as the market moves up, up to a point, and then moves down, as the market may be perceived to be too expensive. At certain threshold points of this "market feeling function" decisions are made to buy and sell. This function has a parabolic shape and is recursive, similar to functions in the logistic family such as: x(t+1) = r * x(t) * (a - x(t)) which is known to yield chaotic behavior. The market is then a composite (sum of threshold versions) of such functions. The variations on P incorporate trading styles, information, and time-scale differences. The actual function is more like sin(x) on [-pi, pi]. To make the system more sophisticated, the agents may want a third alternative: going neutral before moving from sell to buy (hysteresis-like). A small number of agents can be made contrarians, by having the reverse behavior. Further, the market feeling function may also be a function of time, with a decay term associated with slow-moving markets. I plan to write such a simple simulator and place it on the net. If anyone is interested in beating me to writing the code, and willing to make it public, I'll help him/her with the task. About competitions of this kind: This is not a new idea at all. Makridakis (Journal of Forecasting) and others have run several competitions/comparisons among various techniques (mostly linear and in the financial area). Weigend et al. did the same for nonlinear techniques with several types of time series.
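Since the message invites someone to beat the author to the code, here is a minimal sketch of such a threshold-agent simulator. The parameter ranges, the Gaussian "news" shock, and the price floor are illustrative choices of mine, not part of the original description:

```python
import random

def simulate_market(n_agents=50, n_steps=200, seed=0):
    """Toy version of the agent market described above.

    Every agent follows the same policy, but with its own percentage P.
    """
    rng = random.Random(seed)
    agents = [{"P": rng.uniform(0.005, 0.05), "pos": 0, "entry": None}
              for _ in range(n_agents)]
    last = price = 100.0
    prices = [price]
    for _ in range(n_steps):
        buyers = sellers = 0
        move = (price - last) / last
        for a in agents:
            if a["pos"] == 0:
                # enter long on an up-move of P, short on a down-move of P
                if move >= a["P"]:
                    a["pos"], a["entry"] = 1, price
                    buyers += 1
                elif move <= -a["P"]:
                    a["pos"], a["entry"] = -1, price
                    sellers += 1
            else:
                ret = a["pos"] * (price - a["entry"]) / a["entry"]
                # stop out at a drop of P against; take profit after 3P in favor
                if ret <= -a["P"] or ret >= 3 * a["P"]:
                    if a["pos"] == 1:
                        sellers += 1
                    else:
                        buyers += 1
                    a["pos"], a["entry"] = 0, None
        last = price
        # one point per extra buyer (or seller), plus a small exogenous
        # "news" shock so the toy market does not sit still; floored at 1.0
        price = max(1.0, price + (buyers - sellers) + rng.gauss(0.0, 0.5))
        prices.append(price)
    return prices
```

Contrarian agents, a neutral (hysteresis) state, or a decaying sentiment term could be layered on the same loop.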
We will be learning a lot from their experiences as well. --ft. ____________________________________________________________________________ ________________________________________ ___________________________ Manoel Fernando Tenorio Parallel Distributed Structures Lab School of Electrical Engineering Purdue University W. Lafayette, IN 47907 Ph.: 317-494-3482 Fax: 317-494-6440 tenorio at ecn.purdue.edu ============================================================================ = From anoop at ipl.rpi.edu Mon May 9 08:32:21 1994 From: anoop at ipl.rpi.edu (Anoop K. Bhattacharjya) Date: Mon, 9 May 94 08:32:21 EDT Subject: vision by evolutionary optimization Message-ID: <9405091232.AA06654@ipl.rpi.edu> Reprints are available on request for the following paper: Bhattacharjya, A. K., and Roysam, B., "Joint Solution of Low, Intermediate and High-Level Vision Tasks by Evolutionary Optimization: Application to Computer Vision at Low SNR," IEEE Trans. Neural Networks, Vol. 5, No. 1, pp. 83-95, 1994. Please direct reprint requests to roysam at ecse.rpi.edu. An abstract of the paper is given below: ABSTRACT Methods for conducting model-based computer vision from low-SNR (~1 dB) image data are presented. Conventional algorithms break down in this regime due to a cascading of noise artifacts, and inconsistencies arising from the lack of optimal interaction between high- and low-level processing. These problems are addressed by solving low-level problems such as intensity estimation, segmentation, and boundary estimation jointly (synergistically) with intermediate-level problems such as the estimation of position, magnification and orientation, and high-level problems such as object identification and scene interpretation. This is achieved by formulating a single objective function that incorporates all the data and object models, and a hierarchy of constraints in a Bayesian framework.
All image processing operations, including those that exploit the low- and high-level variables to satisfy multi-level pattern constraints, result directly from a parallel multi-trajectory global optimization algorithm. Experiments with simulated low-count (7-9 photons/pixel) 2-D Poisson images demonstrate that, compared to non-joint methods, a joint solution not only results in more reliable scene interpretation, but also in superior estimation of low-level image variables. Typically, most object parameters are estimated to within 5% accuracy, even with overlap and partial occlusion. From swaney at cogsci.ucsd.edu Mon May 9 13:58:10 1994 From: swaney at cogsci.ucsd.edu (swaney@cogsci.ucsd.edu) Date: Mon, 9 May 1994 09:58:10 -0800 Subject: Cognitive Science position Message-ID: <9405091657.AA25407@cogsci.UCSD.EDU> ASSISTANT PROFESSOR POSITION IN COGNITIVE SCIENCE UNIVERSITY OF CALIFORNIA, SAN DIEGO The Department of Cognitive Science at the University of California, San Diego invites applications for a position at the assistant professor level (tenure-track) starting July 1, 1995 (contingent upon funding), with salary commensurate with the experience of the successful applicant and based on the UC pay scale. Applicants must have a PhD (or ABD) in an appropriate field and have research and teaching interests in higher-level human cognitive phenomena such as attention, memory, reasoning, or problem solving. Women and minorities are encouraged to apply. The University of California, San Diego is an affirmative action/equal opportunity employer. Applications received by September 1, 1994 will be assured thorough consideration; applications received thereafter will be considered until the position is filled.
Candidates should send a vita, reprints, a short letter describing their background and interests, and names and addresses of at least three references to: University of California, San Diego Search Committee Department of Cognitive Science 0515-G 9500 Gilman Drive La Jolla, CA 92093-0515 From battiti at volterra.science.unitn.it Mon May 9 10:23:32 1994 From: battiti at volterra.science.unitn.it (Roberto Battiti) Date: Mon, 9 May 94 16:23:32 +0200 Subject: preprints available: optimization & neural nets Message-ID: <9405091423.AA02899@volterra.science.unitn.it.noname> *** PREPRINTS AVAILABLE: *** OPTIMIZATION & NEURAL NETS The following technical reports are available by anonymous ftp from our local archive: volterra.science.unitn.it (130.186.34.16). The subjects are combinatorial and continuous optimization algorithms, and their application to neural nets. Two papers (battiti.neuro-hep.ps.Z, battiti.reactive-tabu-search.ps.Z) are also available from the neuroprose archive. A limited number of hardcopies can be obtained from: Roberto Battiti Dip. di Matematica Univ. di Trento 38050 Povo (Trento) - ITALY email: battiti at science.unitn.it or: Giampietro Tecchiolli Istituto per la Ricerca Scientifica e Tecnologica 38050 Povo (Trento) - ITALY email: tec at irst.it ________________________________________________________________________________ ARCHIVE-NN-1 title: The Reactive Tabu Search author: Roberto Battiti and Giampietro Tecchiolli number: UTM 405 Ottobre 1992 note: 27 pages, to appear in: ORSA Journal on Computing, 1994 abstract: We propose an algorithm for combinatorial optimization where an explicit check for the repetition of configurations is added to the basic scheme of Tabu search. In our Tabu scheme the appropriate size of the list is learned in an automated way by reacting to the occurrence of cycles.
In addition, if the search appears to be repeating an excessive number of solutions excessively often, the search is diversified by making a number of random moves proportional to a moving average of the cycle length. The reactive scheme is compared to a strict Tabu scheme that forbids the repetition of configurations, and to schemes with a fixed or randomly varying list size. From the implementation point of view, we show that Hashing or Digital Tree techniques can be used to search for repetitions in a time that is approximately constant. We present the results obtained for a series of computational tests on a benchmark function, on the 0-1 Knapsack Problem, and on the Quadratic Assignment Problem. FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/reactive-tabu-search.ps.Z ________________________________________________________________________________ ARCHIVE-NN-2 title: Local Search with Memory: Benchmarking RTS author: Roberto Battiti and Giampietro Tecchiolli number: UTM Ottobre 1993 note: 34 pages abstract: The purpose of this work is to present a version of the Reactive Tabu Search method (RTS) that is suitable for constrained problems, and to test RTS on a series of constrained and unconstrained Combinatorial Optimization tasks. The benchmark suite consists of many instances of the N-K model and of the Knapsack problem with various sizes and difficulties, defined with portable random number generators. The performance of RTS is compared with that of Repeated Local Minima Search, Simulated Annealing, Genetic Algorithms, and Neural Networks. In addition, the effects of different hashing schemes and of the presence of a simple `aspiration' criterion in the RTS algorithm are investigated.
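As a reading aid, the two mechanisms these abstracts describe, a roughly constant-time check for repeated configurations via hashing, and a list size that reacts to detected cycles, can be sketched as follows. The dictionary-based memory and the growth/shrink constants are illustrative assumptions of mine, not the authors' implementation:

```python
def make_memory():
    """Memory of visited configurations: hash -> iteration last seen."""
    return {}

def check_repetition(memory, config, iteration):
    """Return the cycle length if `config` was seen before, else None,
    and record the current visit. `config` must be hashable,
    e.g. a tuple of 0/1 values."""
    key = hash(config)
    cycle = iteration - memory[key] if key in memory else None
    memory[key] = iteration
    return cycle

def react(list_size, cycle, grow=1.1, shrink=0.9):
    """Adapt the prohibition (tabu list) size: grow it when a cycle was
    detected, let it shrink slowly otherwise; never below 1."""
    return max(1, int(list_size * (grow if cycle is not None else shrink)))
```

A tabu search loop would call `check_repetition` once per accepted move and feed the result to `react`; diversification by random moves would trigger when too many cycles accumulate.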
FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/rts-benchmark.ps.Z ________________________________________________________________________________ ARCHIVE-NN-3 title: Training Neural Nets with the Reactive Tabu Search author: Roberto Battiti and Giampietro Tecchiolli number: UTM 421 Novembre 1993 note: 45 pages, shorter version to appear in: IEEE Trans. on Neural Networks abstract: In this paper the task of training sub-symbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the Reactive Tabu Search (RTS) proposed by the authors and based on F. Glover's Tabu Search. An iterative optimization process based on a ``modified greedy search'' component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory in a limited portion of the search space. The possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse the ones applied in the most recent part of the search, for a prohibition period that is adapted in an automated way. The confinement is avoided and a proper exploration is obtained by activating a diversification strategy when too many configurations are repeated excessively often. The RTS method is applicable to non-differentiable functions, it is robust with respect to the random initialization and effective in continuing the search after local minima. The limited memory and processing required make RTS a competitive candidate for special-purpose VLSI implementations. We present and discuss four tests of the technique on feedforward and feedback systems. 
FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/rts-neural-nets.ps.Z ________________________________________________________________________________ ARCHIVE-NN-4 title: Learning with first, second, and no derivatives: a case study in High Energy Physics author: Roberto Battiti and Giampietro Tecchiolli note: 36 pages, to appear in Neurocomputing 6, 181-206, 1994 abstract: In this paper different algorithms for training multi-layer perceptron architectures are applied to a significant discrimination task in High Energy Physics. The One Step Secant technique is compared with On-Line Backpropagation, the 'Bold Driver' batch version, and Conjugate Gradient methods. In addition, a new algorithm (Affine Shaker) is proposed that uses stochastic search based on function values and affine transformations of the local search region. Although the Affine Shaker requires more CPU time to reach the maximum generalization, the technique can be interesting for special-purpose VLSI implementations and for non-differentiable functions. FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/neuro-hep.ps.Z ________________________________________________________________________________ ARCHIVE-NN-5 title: Simulated Annealing and Tabu Search in the Long Run: a Comparison on QAP Tasks author: Roberto Battiti and Giampietro Tecchiolli number: UTM 427 Febbraio 1994 note: 11 pages, to appear in: Computers & Mathematics with Applications abstract: Simulated Annealing (SA) and Tabu Search (TS) are compared on the Quadratic Assignment Problem. A recent work on the same benchmark suite argued that SA could achieve a reasonable solution quality with fewer function evaluations than TS. The discussion is extended by showing that the conclusions must be changed if the task is hard, if a very good approximation of the optimal solution is desired, or if CPU time is the relevant parameter.
In addition, a recently proposed version of TS (the Reactive Tabu Search) solves the problem of finding the proper list size with an automatic memory-based reaction mechanism. FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/rts-versus-sa.ps.Z ________________________________________________________________________________ ARCHIVE-NN-6 title: The continuous reactive tabu search: blending combinatorial optimization and stochastic search for global optimization author: Roberto Battiti and Giampietro Tecchiolli number: UTM 432 Maggio 1994 note: 28 pages abstract: A novel algorithm for the global optimization of functions (C-RTS) is presented, in which a combinatorial optimization method cooperates with a stochastic local minimizer. The combinatorial optimization component, based on the Reactive Tabu Search recently proposed by the authors, locates the most promising ``boxes,'' where starting points for the local minimizer are generated. In order to cover a wide spectrum of possible applications with no user intervention, the method is designed with adaptive mechanisms: the box size is adapted to the local structure of the function to be optimized, the search parameters are adapted to obtain a proper balance of diversification and intensification. The algorithm is compared with some existing algorithms, and the experimental results are presented for a suite of benchmark tasks. 
FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/crts.ps.Z ________________________________________________________________________________ From vg197 at neutrino.pnl.gov Mon May 9 20:24:40 1994 From: vg197 at neutrino.pnl.gov (Sherif Hashem) Date: Mon, 09 May 1994 17:24:40 -0700 (PDT) Subject: Thesis available: Optimal Linear Combinations of Neural Networks Message-ID: <9405100024.AA19885@neutrino.pnl.gov> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/hashem.thesis.ps.Z The file hashem.thesis.ps.Z is now available for copying from the Neuroprose archive: OPTIMAL LINEAR COMBINATIONS OF NEURAL NETWORKS Sherif Hashem Ph.D. Thesis Purdue University ABSTRACT: Neural network (NN) based modeling often involves trying multiple networks with different architectures, learning techniques, and training parameters in order to achieve ``acceptable'' model accuracy. Typically, one of the trained networks is chosen as ``best,'' while the rest are discarded. In this dissertation, using optimal linear combinations (OLCs) of the corresponding outputs of a number of NNs is proposed as an alternative to using a single network. Modeling accuracy is measured by mean squared error (MSE) with respect to the distribution of random inputs to the NNs. Optimality is defined by minimizing the MSE, with the resultant combination referred to as MSE-OLC. MSE-OLCs are investigated for four cases: allowing (or not) a constant term in the combination and requiring (or not) the combination-weights to sum to one. In each case, deriving the MSE-OLC is straightforward and the optimal combination-weights are simple, requiring modest matrix manipulations. In practice, the optimal combination-weights need to be estimated from observed data: observed inputs, the corresponding true responses, and the corresponding outputs of each component network. Given the data, estimating the optimal combination-weights is straightforward. 
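For concreteness, the unconstrained case with a constant term amounts to an ordinary least-squares fit of the true responses on the component networks' outputs. A minimal sketch, with function and variable names of my own choosing rather than the thesis's notation:

```python
import numpy as np

def mse_olc_weights(outputs, targets):
    """Estimate MSE-optimal combination weights for several networks.

    Illustrative sketch of the unconstrained case with a constant term
    (not code from the thesis).

    outputs: (n_samples, n_networks) array of component-network outputs
    targets: (n_samples,) array of true responses
    Returns w, where w[0] is the constant and w[1:] the combination weights.
    """
    # prepend a column of ones for the constant term
    X = np.column_stack([np.ones(len(outputs)), outputs])
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return w
```

The combined prediction is then `X @ w`. When the component outputs are nearly collinear, this least-squares step becomes ill-conditioned, which is exactly the problem the thesis's network-selection algorithms address.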
Collinearity among the outputs and/or the approximation errors of the component NNs sometimes degrades the generalization ability of the estimated MSE-OLC. To improve generalization in the presence of degrading collinearity, six algorithms for selecting subsets of the NNs for the MSE-OLC are developed and tested. Several examples, including a real-world problem and an empirical study, are discussed. The examples illustrate the importance of addressing collinearity and demonstrate significant improvements in model accuracy as a result of employing MSE-OLCs supported by the NN selection algorithms. --------------------------- The thesis is 126 pages (10 preamble + 116 text). To obtain a copy of the Postscript file: %ftp archive.cis.ohio-state.edu >Name: anonymous >Password: >cd pub/neuroprose/Thesis >binary >get hashem.thesis.ps.Z >quit Then: %uncompress hashem.thesis.ps.Z (The size of the uncompressed file is about 1.1 Mbyte) %lpr -s -Pprinter hashem.thesis.ps --------------------------- Hard copies may be requested from the School of Industrial Engineering, 1287 Grissom Hall, Purdue University, West Lafayette, IN 47907-1287, USA. (Refer to Technical Report SMS 94-4.) --Sherif Hashem =================================================================== Pacific Northwest Laboratory E-mail: s_hashem at pnl.gov 906 Battelle Boulevard Tel. (509) 375-6995 P.O. Box 999, MSIN K1-87 Fax. (509) 375-6631 Richland, WA 99352 USA =================================================================== From bishopc at sun.aston.ac.uk Tue May 10 08:35:22 1994 From: bishopc at sun.aston.ac.uk (bishopc) Date: Tue, 10 May 1994 12:35:22 +0000 Subject: Paper available by ftp Message-ID: <10405.9405101135@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/bishop.noise.ps.Z The following technical report is available by anonymous ftp.
------------------------------------------------------------------------ TRAINING WITH NOISE IS EQUIVALENT TO TIKHONOV REGULARIZATION Chris M Bishop Neural Computing Research Group Aston University Birmingham, B4 7ET, U.K. email: c.m.bishop at aston.ac.uk Neural Computing Research Group Report: NCRG/4290 (Accepted for publication in Neural Computation) Abstract It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive definite form which involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise. -------------------------------------------------------------------- ftp instructions: % ftp archive.cis.ohio-state.edu Name: anonymous password: your full email address ftp> cd pub/neuroprose ftp> binary ftp> get bishop.noise.ps.Z ftp> bye % uncompress bishop.noise.ps.Z % lpr bishop.noise.ps -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. 
of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From dayhoff at src.umd.edu Mon May 9 17:14:47 1994 From: dayhoff at src.umd.edu (Judith E. Dayhoff) Date: Mon, 9 May 1994 17:14:47 -0400 Subject: Final announcement for WCNN'94, with news Message-ID: <199405092114.RAA01547@newra.src.umd.edu> ************************** *UPDATED REGISTRATION INFORMATION *CALL FOR NOVEL RESULTS PAPERS ************************** WORLD CONGRESS ON NEURAL NETWORKS, SAN DIEGO, CALIFORNIA, JUNE 5-9, 1994 *** Industrial Exposition -- Giant-Screen Video 22 INNS University HALF-DAY Short Courses Six Plenary Talks *** Five Special Sessions Twenty Sessions of Invited and Contributed Talks At least 9 SIG sessions *** Sponsored and Organized by the International Neural Network Society (INNS) in cooperation with all other interested technical & professional societies. *** Table of Contents of This Announcement: 1. NEWS! NOVEL-RESULTS SUBMISSION to June 1; POSTDOC SPECIAL 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES 3. PLENARY TALKS 4. SPECIAL SESSIONS 5. INVITED/CONTRIBUTED SESSIONS 6. SHORT COURSES 7. TRAVEL ARRANGEMENTS 8. NOTE 9. REGISTRATION 10. HOTEL RESERVATIONS 11. STUDENT VOLUNTEERS! =================================================================== 1. NEWS! WCNN'94 has accepted over 600 papers, which are published in the Proceedings. An interdisciplinary and scientific approach to neural networks is maintained, with a balanced program across all application areas as well. Furthermore, WCNN'94 has two days (Sunday & Monday, June 5-6) filled with 22 half-day short courses (never given before) by all the INNS Governors who have participated this year.
Judging by these, WCNN'94 will indeed be a very exciting conference. To ease on-site registration congestion, the official deadline for WCNN'94 pre-registration has been extended to May 16, 1994. PostDocs may obtain the Student Rate by including a letter from their Supervisor with the Registration Form (or bring it to the Congress for a refund). FAX the Form (item 9) or email your questions to: Talley Associates (Att: WCNN'94 Melissa Bishop) Address: 875 Kings Highway, Suite 200 Woodbury, NJ 08096; Voice 609-845-1720; FAX 609-853-0411, e-mail: 74577.504 at compuserve.com SUBMIT YOUR NOVEL RESULTS UNTIL JUNE 1 FOR ON-SITE PUBLICATION In order to stimulate rapid growth in neural network research, we encourage the presentation of your newest results at the Congress. The deadline for the Proceedings has passed, but in answer to many requests here is good news: You may submit one original and three copies in the standard format to our Talley conference management (to be reviewed by the three members of the Organization Committee) for rapid separate publication at the Congress. The deadline is June 1, 1994. Notification will be made by fax, phone, or e-mail a few days after receipt of your paper. If accepted, your registration will be handled specially so that you enjoy the pre-registration savings. If a sufficient number of these Novel papers are accepted, a special session called "Novel Results" will be created during the Congress. Otherwise, a poster presentation will be guaranteed. Moreover, in order to promote the WCNN'94 education program, if you pay for one tuition, you can give a short course to one of your friends free of charge. If you sign up for two courses, you will get one extra free, and this bonus is likewise extended to your chosen friend as well. Finally, we mention that we have streamlined WCNN'94 meeting management by giving Talley direct control over the management of the Conference, without going through the Executive Office.
Many of you know the Talley management team from previous Congresses. You may use their FAX (609-853-0411) for registration; use the form at the end of this message. Signed: Paul Werbos, Bernard Widrow, Harold Szu P.S. Should you have any specific recommendations about ways to make WCNN'94 more successful, please contact any Governors that you know, or Dr. Harold Szu at (301) 390-3097; FAX (301) 390-3923; e-mail: hszu%ulysses at relay.nswc.navy.mil. *** To improve the structure of the Congress and achieve a more compact schedule for attendees, several changes have been made since the Preliminary Program: A. Short Courses Start Sunday Morning, June 5. All Saturday Short Courses have been moved to Monday, June 6, with the exception that Course [I] (J. Dayhoff) will be given Sunday 8AM - 12PM. To make room in the schedule for that change, Course [H] (W. Freeman) moves from Sunday to Monday 8AM - 12PM. On Monday the Short Courses are concurrent with the Exposition. [To Lecturers: Talley will reproduce Course Notes received no later than May 20.] B. The SPECIAL OFFER has been made more generous, to encourage students. For each of your Short Course registrations you can give a colleague in the same or lower-priced Registration Category a FREE Short Course! Enter his or her name on the Registration Form below ``TOTAL.'' The recipient of the gift should indicate ``Gift from [your name]'' at the time of registration. IF YOU HAVE ALREADY PRE-REGISTERED, arrange the gift now by FAX to 609-853-0411. =================================================================== 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES Monday June 6: Chair: Prof. Takeshi Yamakawa, Kyushu Inst. of Tech., Japan; Soo-Young Lee of KAIST, Korean Liaison; Pierre Martineau of M.&A., European Liaison; R. Hecht-Nielsen, HNC, Inc.; D. Hammerstrom, Adaptive Solutions, Inc.; Robert Pap, AAC; C. Kimasauskas, NeuralWare, Inc.; J. Sutherland, America, Ltd.
8 - 11 AM: In Video: Hardware-Software Video-demo talks, and Posters; 10 - 11 AM: Student Contest. The Contest is free-form, permitting many types of imaginative entry; Certificates and T-shirts will be awarded; no cash Grand Prize. 11 - Noon: Panel on Government Funding + Two New Lectures - in the Exposition Area: 12 - 1PM: Teuvo Kohonen: ``Exotic Applications of the Self-Organizing Map'' 5 - 6PM: Walter Freeman: ``Noncomputational Neural Networks'' =================================================================== 3. PLENARY TALKS: Lotfi Zadeh, UC Berkeley "Fuzzy Logic, Neural Networks, and Soft Computing" Per Bak, Brookhaven Nat. Lab. "Introduction to Self-Organized Criticality" Bernard Widrow, Stanford University "Adaptive Inverse Control" Melanie Mitchell, Santa Fe Institute "Genetic Algorithm Applications" Paul Werbos, NSF "Brain-Like Intelligence in Artificial Models: How Do We Really Get There?" John G. Taylor, King's College London "Capturing What It Is Like To Be: Modelling the Mind by Neural Networks" =================================================================== 4. SPECIAL SESSIONS "Biomedical Applications of Neural Networks," (Tuesday) David Brown, FDA; John Weinstein, NIH. "Commercial and Industrial Applications of Neural Networks," (Tuesday) B. Widrow, D. Hammerstrom, Ken Otwell, Ken Marko, Tariq Samad. "Financial and Economic Applications of Neural Networks," (Wednesday) Guido Deboeck, World Bank. "Neural Networks in Chemical Engineering," (Thursday) Am. Inst. of Chem. Eng.; Thom McAvoy. "Mind, Brain and Consciousness" (Thursday) by J. Taylor, "TBD", W. Freeman, "Some category confusions in using neural networks to model consciousness", and presentations by S. Grossberg, G. Roth, B. Libet, P. Werbos, C. Koch, D. Levine, etc. =================================================================== 5. 20 INVITED/CONTRIBUTED SESSIONS June 7 - 9 Co-Chaired by 20 INNS Governors & 20 Special Interest Group Chairpersons.
Also at least 9 Special Interest Group (SIG) Sessions are scheduled for Wednesday, June 8, from 8 - 9:30 pm, e.g. Neuroscience: D. Alkon, NIH; ATR/Biosensors: H. Hawkins, ONR, B. Telfer; Mental & Dysfunction: D. Levine; Power Eng.: D. Sobajic, EPRI; and others TBD. =================================================================== 6. SHORT COURSES 8am - 12pm Sunday, June 5 [M] Gail Carpenter, Boston University: Adaptive Resonance Theory [L] Bernard Widrow, Stanford University: Adaptive Filters, Adaptive Controls, Adaptive Neural Networks and Applications [I] Judith Dayhoff, University of Maryland: Neurodynamics of Temporal Processing [G] Shun-Ichi Amari, University of Tokyo: Learning Curves, Generalization Errors and Model Selection 1pm - 5pm Sunday, June 5 [U] Lotfi Zadeh, University of California, Berkeley: Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs [K] Paul Werbos, NSF: From Backpropagation to Real-Time Control [O] Stephen Grossberg, Boston University: Autonomous Neurodynamics: From Perception to Action [E] John Taylor, King's College, London: Stochastic Neural Computing: From Living Neurons to Hardware 6pm - 10pm Sunday, June 5 [V] Nicolai G. Rambidi, Int'l. Research Inst.
for Management Sciences: Image Processing and Pattern Recognition Based on Molecular Neural Networks [C] Christof Koch, California Institute of Technology: Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits 8am - 12pm Monday, June 6 [T] Melanie Mitchell, Santa Fe Institute: Genetic Algorithms, Theory and Applications [R] David Casasent, Carnegie Mellon University: Pattern Recognition and Neural Networks [H] Walter Freeman, University of California, Berkeley: Review of Neurobiology: From Single Neurons to Chaotic Dynamics of the Cerebral Cortex [P] Lee Giles, NEC Research Institute: Dynamically-driven Recurrent Networks: Models, Training Algorithms and Applications 1pm - 5pm Monday, June 6 [S] Per Bak, Brookhaven National Laboratory: Introduction to Self-Organized Criticality [D] Kunihiko Fukushima, Osaka University: Visual Pattern Recognition with Neural Networks [B] James A. Anderson, Brown University: Neural Network Computation as Viewed by Cognitive Science and Neuroscience [Q] Alianna Maren, Accurate Automation Corporation: Introduction to Neural Network Applications 6pm - 10pm Monday, June 6 [N] Takeshi Yamakawa, Kyushu Institute of Technology: What are the Differences and Similarities among Fuzzy, Neural, and Chaotic Systems? [A] Teuvo Kohonen, Helsinki University of Technology: Advances in the Theory and Applications of Self-Organizing Maps [J] Richard A. Andersen, Massachusetts Institute of Technology: Neurobiologically Plausible Network Models [F] Harold Szu, Naval Surface Warfare Center: Spatiotemporal Information Processing by Means of McCulloch-Pitts and Chaotic Neurons =================================================================== 7. TRAVEL RESERVATIONS: Executive Travel Associates (ETA) has been selected as the official travel company for the World Congress on Neural Networks.
ETA offers the lowest available fares on any airline at time of booking when you contact them at US phone number 202-828-3501 or toll free (in the US) at 800-562-0189 and identify yourself as a participant in the Congress. Flights booked on American Airlines or Delta Airlines, the official airlines for this meeting, will result in an additional discount. Please provide the booking agent you use with the AA code: Star #S0464FS =================================================================== 8. ** NOTE ** Neither WCNN'94 nor the Hotel can accept e-mail registration or reservations. The Hotel will accept phone and FAX reservations while rooms remain available. For WCNN'94 Registration, use surface/air mail or FAX. ********************************************************************** 9. REGISTRATION WCNN'94 at Town & Country Hotel, San Diego, California June 5 - 9, 1994 Phone:_______________ Name:_______________________________________ FAX:__________________ Address:____________________________________________________________ ____________________________________________________________ ___________________________________________________________ If your name badge is to read differently, indicate the changes here: REGISTRATION FEE (includes all sessions, plenaries, proceedings, reception, AND Industrial Exposition. Separate registration for Short Courses, below.)
                                    Before May 16, 1994   On-Site   FEE ENCLOSED
_ INNS Member (Member Number__________)    US$280          US$395    $_________
_ Non Members:                             US$380          US$495    $_________
_ Full Time Students AND PostDocs:         US$110          US$135    $_________
  (Include a letter from PostDoc Supervisor)
_ Spouse/Guest (Name:________________):    US$45           US$55     $_________
_ Or Neural Network Industrial Exposition -Only-: US$55    US$55     $_________

***************************************************
INNS UNIVERSITY SHORT COURSE REGISTRATION (must be received by May 16, 1994)

Circle paid selections:  A B C D E F G H I J K L M N O P Q R S T U V
Circle free selection (Pay for 2 short courses, get the third FREE):
                         A B C D E F G H I J K L M N O P Q R S T U V

SHORT COURSE FEE
_ INNS Members:          US$275   $_________
_ Non Members:           US$325   $_________
_ Full Time Students:    US$150   $_________

Congress + Short Course TOTAL:    $_________

For each paid course, nominate an accompanying person, registering in the same or lower category, for a free course: Mr./Ms.___________________
That person must also register by May 16, and indicate "Gift from [your name]" on the registration form.

METHODS OF PAYMENT
_ CHECK. All check payments made outside of the USA must be made on a USA bank in US dollars, payable to WCNN'94.
_ CREDIT CARDS. Only VISA and MasterCard accepted. Registrations sent by FAX or surface/air mail must include an authorized signature.
  ( ) Visa  ( ) M/C
  Name on Credit Card ______________________________________
  Credit Card Number  _______________________________________
  Exp. Date           ________________________________________________
  Authorized Signature: _____________________________________

FAX: 609-853-0411 or E-mail: 74577.504 at compuserve.com
then Mail to INNS/WCNN'94 c/o Talley Associates, 875 Kings Highway, Suite 200, Woodbury, NJ 08096

==========================================================================
10.
HOTEL RESERVATIONS

REGISTER AT TOWN & COUNTRY HOTEL, SAN DIEGO, CALIFORNIA (WCNN'94 Site)
Mail to Reservations, Town and Country Hotel, 500 Hotel Circle North, San Diego, CA 92108, USA; or FAX to 619-291-3584
Telephone: (800)772-8527 or (619)291-7131

INNS - WCNN'94
International Neural Network Society, World Congress on Neural Networks '94

_ Single: US$70 - US$95 plus applicable taxes
_ Double: US$80 - US$105 plus applicable taxes

Check in time: 3:00 pm. Check out time: 12:00 noon.

Room reservations will be available on a first-come, first-served basis until May 6, 1994. Reservations received after this date will be accepted on a space-available basis and cannot be guaranteed. Reservations after May 6 will also be subject to the rates prevailing at the time of the reservation. A confirmation of your reservation will be sent to you by the hotel. A first night's room deposit is required to confirm a reservation.

PRINT OR TYPE ALL INFORMATION.

Single________ Double_______
Arrival Date and approximate time:________________________________
Departure Date and approximate time:______________________________
Names of all occupants of room:____________________________________

RESERVATION CONFIRMATION SHOULD BE SENT TO:
Name:____________________
Address:____________________________________________________________
        ____________________________________________________________
City:____________________State/Province:_________________Country:__________

Type of Credit Card: (circle one) VISA/ MasterCard/ AmEx/ Diner's Club/ Discover/ Optima
Card Number:______________________________ Exp. Date____________________
Name as it appears on your Card:______________________________
Authorized Signature: ______________________________

Cancellation Policy: Deposits are refundable if the reservation is cancelled 48 hours in advance of the arrival date. Be sure to record your cancellation number.
Please indicate any disability which will require special assistance: _____________________________________________

FAX to 619-291-3584   e-mail: 74577.504 at compuserve.com

11. Student Volunteers

INNS has a tradition of supporting students in neural networks. Volunteer workers will receive free registration and certain expenses; however, no travel expenses can be covered. We still need at least 8 students to help at WCNN'94. Ms. Melissa Bishop will be the overall Coordinator; for details about work and compensation, please contact the Student Leader:

(1) Student Leader: Mr. Charles Hsu, Ph.D. Candidate, GWU (student of Prof. Mona Zaghloul, Chair of the GWU EE Dept.)
    WCNN'94 Oral Presentation in Session 14, Neurodynamics & Chaos, "Chaotic neurochips .." (with Zaghloul), Thursday 1:30-1:50 PM
    Address: Charles Hsu, 1600 S. Joyce St. #C710, Arlington VA 22202
    Phone: (202) 994-9390
    e-mail: charles at seas.gwu.edu

(2) Deputy Leader: Ms. Ding Jinghua, M.S. in NN from Tohoku Univ., Japan
    WCNN'94 Oral Presentation in Session #3, Speech & Language, Thurs. 8:00-8:20 AM, "Comp. Psych. Approach to Human Facial Language Communication to Robots"
    Address: Jinghua Ding, Berukasa 201, Tamagawagakuen 1-6-11, Machida-Shi, Tokyo, Japan
    Phone: 81-427-26-2628
    e-mail: lchj at ibis.iamp.tohoku.ac.jp

==========================================================================

From uzimmer at informatik.uni-kl.de Tue May 10 11:44:23 1994 From: uzimmer at informatik.uni-kl.de (Uwe R.
Zimmer, AG vP) Date: Tue, 10 May 94 16:44:23 +0100 Subject: Actual papers available (Visual Search, Navigation, Topologic Maps) Message-ID: <940510.164423.571@ag_vp_file_server.informatik.uni-kl.de>

Several recent papers about:
--------------------------------------------------------------
--- Learning, Robotics, Visual Search, Navigation, ---
--- Topologic Maps & Robust Mobile Robots ---
--- Neural Networks ---
--------------------------------------------------------------
are now available via FTP:

---------------------------------------------------------------------------
--- Connectionist Decision Systems for a Visual Search Problem
---------------------------------------------------------------------------
--- File name is : Zimmer.Visual_Search.ps.Z ---
IIZUKA `94, Fukuoka, Japan, August 1-7, 1994, Invited paper

Connectionist Decision Systems for a Visual Search Problem
Uwe R. Zimmer

Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammalian retina are not uniformly distributed. The focus of attention (the area of the retina with the highest density of sensory elements) therefore has to be directed so as to gather data efficiently according to certain criteria. The work discussed in this article applies a laser range finder instead of a silicon retina. The laser range finder is maximally focused at all times, so a low-resolution image of the total scene, available immediately from camera-like devices, cannot be used here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the current (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" and second: "Where should a new edge-scan start after an interruption?"
These two decision problems can be addressed by a range of decision systems. The correctness of the decisions depends strongly on the actual environment, and the underlying rules cannot be well initialized with a-priori knowledge. We therefore present a version of a reinforcement decision system, together with an overall scheme for efficiently controlling highly focused devices.

---------------------------------------------------------------------------
--- Navigation on Topologic Feature-Maps
---------------------------------------------------------------------------
--- File name is : Zimmer.Navigation.ps.Z ---
IIZUKA `94, Fukuoka, Japan, August 1-7, 1994

Navigation on Topologic Feature-Maps
Uwe R. Zimmer, Cornelia Fischer & Ewald von Puttkamer

Based on the idea of using topologic feature-maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topologic maps. The main features of this kind of navigation are the handling of very inaccurate position (and orientation) information, and the implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a-priori knowledge, a reinforcement-based model is used to translate navigator commands into motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations and navigator commands), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.
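For readers unfamiliar with the planning step mentioned in the abstract above, a minimal sketch of A* search over a small weighted graph follows. This is an illustration only, not the authors' implementation: the toy graph, the node names, and the Euclidean heuristic over assumed 2-D coordinates are all hypothetical.

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* search over a weighted adjacency-list graph.

    graph:  {node: [(neighbor, cost), ...]}
    coords: {node: (x, y)}, used only for the Euclidean heuristic
    Returns the node sequence from start to goal, or None if unreachable.
    """
    def h(n):
        # Admissible heuristic: straight-line distance to the goal.
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    # Priority queue entries: (f = g + h, g, node, path so far)
    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_set, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

# Toy "topologic map": four places A..D with rough 2-D positions.
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.0), ("D", 5.0)],
    "C": [("D", 1.0)],
}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
print(a_star(graph, coords, "A", "D"))  # -> ['A', 'B', 'C', 'D']
```

In a topologic feature-map the nodes would be learned sensor situations rather than named places, but the planning step is the same graph search.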
---------------------------------------------------------------------------
--- Realtime-learning on an Autonomous Mobile Robot with Neural Networks
---------------------------------------------------------------------------
--- File name is : Zimmer.Topologic.ps.Z ---
Euromicro `94 - RT-Workshop - Vaesteraas (Vasteras), Sweden, June 15-17, '94

Realtime-learning on an Autonomous Mobile Robot with Neural Networks
Uwe R. Zimmer & Ewald von Puttkamer

The problem discussed here is the use of neural network clustering techniques on a mobile robot in order to build qualitative topologic environment maps. This has to be done in real time, i.e. the internal world model has to be adapted to the flow of sensor samples without the possibility of stopping this data flow. Our experiments are done in a simulation environment as well as on a robot called ALICE.

------------------------------------------------------------------
FTP information (anonymous login):

FTP-Server is   : ftp.uni-kl.de
Mode is         : binary
Directory is    : reports_uni-kl/computer_science/mobile_robots/...
Subdirectory is : 1994/papers
File names are  : Zimmer.Navigation.ps.Z
                  Zimmer.Topologic.ps.Z
                  Zimmer.Visual_Search.ps.Z
Subdirectory is : 1993/papers
File names are  : Zimmer.learning_surfaces.ps.Z
                  Zimmer.SPIN-NFDS.ps.Z
Subdirectory is : 1992/papers
File name is    : Zimmer.rt_communication.ps.Z
Subdirectory is : 1991/papers
File names are  : Edlinger.Pos_Estimation.ps.Z
                  Edlinger.Eff_Navigation.ps.Z
                  Knieriemen.euromicro_91.ps.Z
                  Zimmer.albatross.ps.Z
.. or ...
FTP-Server is   : archive.cis.ohio-state.edu
Mode is         : binary
Directory is    : /pub/neuroprose
File names are  : zimmer.navigation.ps.z
                  zimmer.visual_search.ps.z
                  zimmer.learning_surfaces.ps.z
                  zimmer.spin-nfds.ps.z
.. or ...
FTP-Server is   : ag_vp_file_server.informatik.uni-kl.de
Mode is         : binary
Directory is    : Neural_Networks/Reports
File names are  : Zimmer.Navigation.ps.Z
                  Zimmer.Topologic.ps.Z
                  Zimmer.Visual_Search.ps.Z
                  Zimmer.Learning_Surfaces.ps.Z
                  Zimmer.SPIN-NFDS.ps.Z
------------------------------------------------------------------

-----------------------------------------------------
----- Uwe R. Zimmer --- University of Kaiserslautern
Computer Science Department | Research Group Prof. v. Puttkamer
67663 Kaiserslautern - Germany
P.O.Box: 3049 | Phone: +49 631 205 2624 | Fax: +49 631 205 2803

From arantza at cogs.susx.ac.uk Tue May 10 18:13:00 1994 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Tue, 10 May 94 18:13 BST Subject: CFP ECAL95 Message-ID:

CONFERENCE ANNOUNCEMENT AND CALL FOR PAPERS

3rd EUROPEAN CONFERENCE ON ARTIFICIAL LIFE - ECAL95
Granada, Spain, 4-6 June, 1995

Despite its short history, Artificial Life (AL) is already becoming a mature scientific field. By trying to discover the rules of life and extract its essence so that it can be implemented in different media, AL research is leading us to a better understanding of a large set of interesting biology-related problems. The Conference will be organized into Scientific Sessions, Demonstrations, Videos, and Commercial Exhibits. Scientific Sessions will consist of Lectures (invited), Oral Presentations of submitted papers, and Posters. The site of ECAL95 will be the city of Granada, located in the south of Spain, in the region of Andalucia. Granada was the last Arab stronghold in the Iberian Peninsula, and it preserves the heritage of that culture, including the legacy of marvelous constructions such as the Alhambra and the Gardens of the Generalife. ECAL95 will be organized in collaboration with the International Workshop on Artificial Neural Networks (IWANN95), to be held at Malaga (Costa del Sol, Spain), June 7-9, 1995.
Granada and Malaga are only one hour apart by car. Special registration rates will be offered to people wishing to attend both meetings.

Scientific Sessions and Topics

1. Foundations and Epistemology: Philosophical Issues. Emergence. Levels of Organization. Evolution of Hierarchical Systems. Evolvability. Computation and Dynamics. Ethical Problems.
2. Evolution: Self-organization. Pattern Formation. Prebiotic Evolution. Origins of Life. Evolution of Metabolism. Evolutionary Optimization. Fitness Landscapes. RNA Systems. Ecosystem Evolution. Biodiversity. Natural Selection and Sexual Selection. Units of Selection.
3. Adaptive and Cognitive Systems: Reaction, Neural and Immune Networks. Growth and Differentiation. Multicellular Development. Natural and Artificial Morphogenesis. Learning and Development. Communication.
4. Artificial Worlds: Simulation of Ecological and Evolving Systems. System-Environment Correlation. Sensor-Effector Coordination. Environment Design.
5. Robotics and Emulation of Animal Behavior: Sensory and Motor Activity. Mobile Agents. Adaptive Robots. Autonomous Robots. Evolutionary Robotics. Ethology.
6. Societies and Collective Behavior: Swarm Intelligence. Cooperation and Communication among Animals and Robots. Evolution of Social Behavior. Social Organizations. Division of Tasks.
7. Applications and Common Tools: Optimization. Problem Solving. Virtual Reality and Computer Graphics. Genetic Algorithms. Neural Networks. Fuzzy Logic. Evolutionary Computation. Genetic Programming.

Submission Instructions

Conference contributions can be either papers, posters, videos, or demonstrations. Authors should specify to which session their contributions are intended. The contributions will be made available in two formats:

1) Conference Proceedings, published by Springer-Verlag before the Conference, including all accepted papers. One copy of the book will be given to each ECAL95 participant.
2) Abstracts Book, for papers and other contributions (posters, videos, or demos).

For this purpose each contribution must include one Title/Abstract Page containing the following:
- Title
- Full name(s) of author(s)
- Address(es) of author(s) with phone, fax, and E-mail
- Extended abstract (100-200 words)
- Keywords

- Full papers: In addition to the Title/Abstract Page, manuscripts should not exceed 12 pages, including figures, in DIN-A4 format, with 2.5 cm (1 inch) margins all around, and no smaller than 10-point type in Times-Roman typeface. Camera-ready versions of the papers will be required after acceptance.
- Posters: Submit only the Title/Abstract Page.
- Demonstrations: In addition to the Title/Abstract Page, author(s) must specify the equipment needed for the demonstration. Robotic demonstrations are encouraged; approximately 250 m2 will be available for this purpose.
- Videos: 15 minutes maximum duration, VHS format. In addition to the Title/Abstract Page, author(s) must specify the recording standard (NTSC, PAL, or SECAM).

Submissions can be made in two different formats: hardcopy or electronic.
A) Hardcopy originals (4 copies) should be sent by the author(s) to the Program Secretary at the address below.
B) Electronic submission: an anonymous ftp directory has been created at the ECAL95 site (casip.ugr.es, /pub/ecal95/submissions). Only LaTeX and PostScript submissions will be accepted. The papers must be in the format specified above and must include everything needed to print them (e.g., fonts, macros, figures, etc.). LaTeX macros and more detailed instructions will be given upon request to the ECAL95 Program Secretary, or can be obtained by ftp from the ECAL95 site. For demonstrations and videos contact the Program Secretary.

Registration / Information

Program Secretary: Juan J. Merelo Dept.
Electronica | Facultad de Ciencias | Phone: +34-58-243162 Campus Fuentenueva | Fax: +34-58-243230 18071 Granada, Spain | E-mail: ecal95 at casip.ugr.es Access to ECAL95 site: casip.ugr.es (150.214.60.74) login: anonymous cd /pub/ecal95 Organization Committee Federico Moran U. Complutense Madrid (E) Chair Alvaro Moreno U. Pais Vasco, San Sebastian (E) Chair Arantza Etxeberria U. Sussex (UK) Julio Fernandez U. Pais Vasco, San Sebastian (E) George Kampis ELTE Univ. Budapest (H) Francisco Montero U. Complutense, Madrid (E) Tim Smithers U. Pais Vasco, San Sebastian (E) Carme Torras U. Politecnica Catalunya, Barcelona (E) Local Committee Alberto Prieto U. Granada (E) Chair Juan J. Merelo U. Granada (E) Secretary Julio Ortega U. Granada (E) Francisco J. Pelayo U. Granada (E) Program Committee Francisco Varela CNRS/CREA, Paris (F) Chair Juan J. Merelo U. Granada (E) Secretary Riccardo Antonini U. Carlos III, Madrid (E) Michael Arbib USC, Los Angeles, CA (USA) Randall D. Beer Case Western Reserve U., Cleveland, OH (USA) George Bekey USC, Los Angeles, CA (USA) Hugues Bersini ULB, Brussels (B) Paul Bourgine CEMAGREF, Antony (F) Rodney Brooks MIT, Cambridge, MA (USA) Scott Camazine Wissenschaftskolleg, Berlin (D) Peter Cariani MEEI, Boston, MA (USA) Michael Conrad Wayne State U., Detroit, MI (USA) Jaques Demongeot U. J. Fourier, La Tronche (F) Jean-Louis Deneubourg U. Libre de Bruxelles, Brussels (B) Michael Dyer UCLA, Los Angeles, CA (USA) Claus Emmeche U. of Rosekilde, (DK) Walter Fontana U. of Vienna, (A) Brian C. Goodwin Open U., Milton Keynes (UK) Pauline Hogeweg U. of Utrecht, (NL) Philip Husbands U. of Sussex, Brighton (UK) John Koza Stanford U., CA (USA) Chris Langton Santa Fe Institute, NM (USA) Pier L. Luisi ETHZ, Zurich (CH) Pattie Maes MIT, Cambridge, MA (USA) Pedro C. Marijuan U. Zaragoza, (E) Maja J. Mataric MIT, Cambridge, MA (USA) Enrique Melendez-Hevia U. 
La Laguna, Tenerife (E) Eric Minch Stanford U., CA (USA) Melanie Mitchell Santa Fe Institute, NM (USA) Jim D. Murray U. of Washington, Seattle, WA (USA) Juan C. Nuno U. Politecnica de Madrid, (E) Domenico Parisi CNR, Roma (I) Mukesh Patel Politecnico di Milano, Milan (I) Howard Pattee SUNY, Binghamton, NY (USA) Juli Pereto U. Valencia, (E) Rolf Pfeifer U. Zurich-Irchel, Zurich (CH) Steen Rasmussen LANL, Los Alamos, NM (USA) Robert Rosen Dalhousie U. Halifax (CA) Peter Schuster IMB, Jena (D) Luc Steels VUB, Brussels (B) John Stewart Institut Pasteur, Paris (F) Jon Umerez SUNY Binghamton, NY (USA) William C. Wimsatt U. of Chicago, (USA) Rene Zapata LIRM, Montpellier (F)

Official Language: English
Publisher: Springer-Verlag

Important dates:
January 9, 1995   Submission deadline
March 10          Notification of acceptance
March 24          Camera-ready due
March 31          Early registration deadline
May 4             Regular registration deadline
June 3            Reception and on-site registration
June 4-6          Conference dates

Sponsored by: Spanish RIG IEEE Neural Networks Council Silicon Graphics (Spain) Parque de las Ciencias de Granada EEC DGICYT (Spain) CICYT (Spain) Junta de Andalucia (Spain) EUDEMA

Organised by: Universidad de Granada Universidad Complutense de Madrid Universidad del Pais Vasco

From amari at sat.t.u-tokyo.ac.jp Wed May 11 18:27:56 1994 From: amari at sat.t.u-tokyo.ac.jp (Shun-ichi Amari) Date: Wed, 11 May 94 18:27:56 JST Subject: position available Message-ID: <9405110927.AA08940@mail.sat.t.u-tokyo.ac.jp>

Research Positions in Computational Neuroscience
----- Riken Frontier Research Program

The Institute of Physical and Chemical Research (RIKEN) will start a new eight-year Frontier Research Program on Neural Information Processing, beginning in October 1994. The Program includes three research laboratories, each consisting of one research leader and several researchers. They are laboratories for neural modeling, for neural information representations, and for artificial brain systems.
We will study fundamental principles underlying higher-order brain function from mathematical, information-theoretic, and systems-theoretic points of view. The three laboratories will cooperate in constructing various models of the brain, mathematically analyzing information principles in the brain, and designing artificial brain systems. We will maintain close contact with another Frontier Research Program on experimental neuroscience headed by Dr. M. Ito. We hope that the laboratories will be directed by outstanding leaders under international cooperation, with academic freedom and relatively generous research funds. Research positions, available from October 1994, are open on one-year contracts to researchers and post-doctoral fellows, extendable for at most five years. A laboratory leader position is also available for an outstanding established researcher on a three- to eight-year contract. The positions carry standard Japanese salaries. Those interested may send a curriculum vitae, a list of papers, the names of some references, and copies of one or two representative papers to the director of the Program: Dr. Shun-ichi Amari, Department of Mathematical Engineering and Information Physics, Faculty of Engineering, University of Tokyo, Bunkyo-ku, Tokyo 113, JAPAN tel. +81-3-3812-2111 ext. 6910 fax.
+81-3-5689-5752 amari at sat.t.u-tokyo.ac.jp

From sylee at eekaist.kaist.ac.kr Thu May 12 15:18:20 1994 From: sylee at eekaist.kaist.ac.kr (Soo-Young Lee) Date: Thu, 12 May 94 15:18:20 KST Subject: ICONIP'94-Seoul Extended Deadline and Registration Message-ID: <9405120618.AA01833@eekaist.kaist.ac.kr>

International Conference on Neural Information Processing
ICONIP'94-Seoul
October 17 - 20, 1994

******************************************
PAPER DEADLINE EXTENDED UNTIL MAY 31, 1994
PRE-REGISTRATION BY AUGUST 31, 1994
******************************************

Organized by Korean Association of Intelligent Information Processing
Sponsored by Asian-Pacific Neural Network Assembly
In Cooperation with
  International Neural Network Society
  IEEE Neural Network Council
  European Neural Network Society
  Japanese Neural Network Society

o Dates : October 17 (Mon.) - October 20 (Thur.), 1994
o Venue : The Swiss Grand Hotel, Seoul, Korea
  Tel : +82(Korea)-2(Seoul)-356-5656
  Fax : +82(Korea)-2(Seoul)-356-7799
o Official Language : The official language of the Conference is English, which will be used for all paper presentations and printed materials.

***************
CALL FOR PAPERS
***************

Topics of Interest: All areas of neural networks and related areas such as fuzzy logic, genetic algorithms, and chaos are included.
Neurobiological Systems
Neural Network Architectures
Network Dynamics
Cognitive Science
Self-Organization
Learning & Memory
Sensorimotor Systems
Time-Series Prediction
Optimization
Communication Applications
Power Electronics Applications
Image Processing & Vision
Speech Recognition & Language
Robotics & Control
Other Applications
Implementation (Electronic, Optical, and Bio-chips)
Hybrid Systems (Fuzzy Logic, Genetic Algorithms, Expert Systems, Chaos and AI)

*****************************************************************************
TECHNICAL PROGRAM

Plenary Talks
Igor Aleksander, Imperial College, UK: The Prospects for a Neural Artificial Consciousness
Kunihiko Fukushima, Osaka Univ., Japan: Neural Networks for Selective Looking
Harold Szu, Naval Surface Warfare Center, USA: Adaptive Wavelet Transforms by Wavenets

Invited Talks
Shun-ichi Amari, Univ. of Tokyo, Japan: Information Geometry of Stochastic Multilayer Perceptron
Walter Freeman, Univ. of California Berkeley, USA: (title not available yet)
Toshio Fukuda, Nagoya Univ., Japan: Planning and Behavioral Control of Intelligent Robot System with Fuzzy-Neuro-GA based Computational Intelligence
Dan Hammerstrom, Adaptive Solutions, USA: Silicon Cortex: The Impossible Dream?
Il-Song Han, Korea Telecom, Korea: URAN: A Hybrid Neural Network VLSI
Gerd Hausler, Univ. Erlangen, Germany: Chaos, Pattern Formation & Associative Memory with Nonlinear Pictorial Feedback
Masumi Ishikawa, Kyushu Inst. Tech., Japan: Structural Learning and Modular Networks
Mitsuo Kawato, ATR Human Information Processing Research Lab., Japan: Teaching by Showing for Task Level Robot Learning through Movement Pattern Perception
Eun-Soo Kim, Kwangwoon Univ., Korea: Target Image Processing Based on Neural Networks
Myung Won Kim, ETRI, Korea: Artificial Creativity: Its Computational Modeling and Potential Applications
Kazuo Kyuma, Mitsubishi Electric, Japan: Comparison of Electrical and Optical Hardware Implementation of Neural Networks
Francesco B.
Lauria, Universita di Napoli, Italy: On the Hebb Rule Implementation of a Boolean Neural Network
Soo-Young Lee, KAIST, Korea: Requirements and Perspectives of Neuro-Computers: How and Where Neuro-Computers Can Win Against General-Purpose Computers?
Sukhan Lee, Univ. of Southern California & JPL, USA: Theory and Application of Dual-Mode Dynamic Neural Networks
Yillbyung Lee, Yonsei Univ., Korea: Saccadic Eye Movement Signal Generation Modeling Using Recurrent Neural Network
Joseph Malpeli, Univ. of Illinois Urbana-Champaign, USA: A Thermodynamic Model of the Morphogenesis of the Primate Lateral Geniculate Nucleus
Robert Marks II, Univ. of Washington, USA: Evolutionary Inversion and Hausdorff Distance Evaluation of Trained Layered Perceptrons
Gen Matsumoto, Electrotechnical Lab., Japan: The Brain as a Computer
Nelson Morgan, Univ. California Berkeley, USA: Using a Million Connections for Continuous Speech Recognition
Yoichi Muraoka, Waseda Univ., Japan: "Kansei" Information Processing - Can a Neural Network Live up to this Challenge?
Kumpati Narendra, Yale Univ., USA: Switching and Tuning Using Multiple Neural Network Models
Andras Pellionisz, Silicon Valley, USA: (title not available yet)
John Taylor, King's College London, UK: Where is Neurobiological Modelling going to End Up?
Philip Treleaven, Univ. College London, UK: Intelligent Systems for Banking, Insurance and Retail: a Survey of UK Systems
Minoru Tsukada, Tamagawa Univ., Japan: Theoretical Model of the Hippocampal-Cortical Memory System Motivated by Physiological Functions
Alex Waibel, Carnegie-Mellon Univ., USA: Hybrid Connectionist and Classical Approaches in JANUS, an Advanced Speech-to-speech Translation System
Bo-Hyeun Wang, Goldstar Central Research Lab., Korea: Hybrid Location-Content Addressable Memories (HyLCAM) and Its Application to Character Recognition Problems
Andreas Weigend, Univ. of Colorado, USA: Predicting Predictability: How Well Can We Forecast the Future?
Paul Werbos, NSF, USA: Brain-Like Intelligence: How Far are We and How can We get There?
Youshou Wu, Tsinghua Univ., China: Strategy in Constructing a Large Scale Neural Network System for Chinese Character Recognition
Takeshi Yamakawa, Kyushu Inst. of Tech., Japan: Wavelet Neural Networks Realizing High Speed Learning

Tutorials (Oct. 17)
Igor Aleksander, Imperial College, UK: Weightless Neural Systems
Harold Szu, Naval Surface Warfare Center, USA: Chaos Theory, Applications, and Implementations
Alex Waibel, Carnegie-Mellon Univ., USA: Connectionist Models in Multi-modal User Interfaces
John Taylor, King's College London, UK: Automatic Target Recognition with Neural Networks
Andreas Weigend, Univ. of Colorado, USA: Avoiding Overfitting in Time-Series Prediction
Takeshi Yamakawa, Kyushu Inst. of Tech.: Fuzzy Logic: Theory, Hardware Implementation and Applications

ICONIP NEWs (Neural-net Evaluation Workshops)
In addition to regular conference sessions, special topical workshops, NEWs (Neural-net Evaluation Workshops), are planned to promote in-depth discussions during the conference period at the conference venue. Currently the following 5 NEWs are planned.

NEW on Financial Applications
  Co-chairs: Guido Deboeck, World Bank; Rhee-Man Kil, ETRI
NEW on Speech Recognition
  Co-chairs: Moon-Sung Han, SERI; Nelson Morgan, UC Berkeley
NEW on Image and Machine Vision
  Co-chairs: Eun-Soo Kim, Kwangwoon Univ.
NEW on Hybrid Systems
  Co-chairs: Hideyuki Takagi, Matsushita Electric Ind. Co.; Bo-Hyeun Wang, Goldstar Central Research Lab.
NEW on VLSI Implementations
  Co-chairs: Alister Hamilton, Edinburgh Univ.; Il-Song Han, Korea Telecom

*******************************************************************************
ORGANIZATION OF CONFERENCE

Conference Co-Chairs Shun-ichi Amari, Univ. of Tokyo, Japan In-Ku Kang, KCRA, Korea Seung-Taik Yang, ETRI, Korea International Advisory Committee Igor Aleksander, Imperial College, UK Marcelo H. Ang, Jr., Nat'l Univ. of Singapore, Singapore Michael A.
Arbib, USC, USA Yiannis Attikiouzel, Univ. of Western Australia, Australia Russel C. Eberhart, RTI, USA Walter Freeman, UC Berkeley, USA Toshio Fukuda, Nagoya Univ., Japan Marwan Jabri, Univ. of Sydney, Australia Nikola Kasabov, Univ. of Otago, New Zealand Teuvo Kohonen, Helsinki Univ. of Tech., Finland Ben I. Lin, Taiwan Nat'l Univ., Taiwan Cheng-Yuan Liou, Taiwan Nat'l Univ., Taiwan Teck-Seng Low, Nat'l Univ. of Singapore, Singapore Robert J. Marks II, Univ. of Washington, USA Gen Matsumoto, ETL, Japan Harold Szu, NSWC, USA Bernard Widrow, Stanford Univ., USA Youshou Wu, Tsinghua Univ., China Sha Zhong, Chinese Inst. Elec., China Domestic Advisory Committee Jeung-Nam Bien, Korea Fuzzy Mathematics & Systems Society Jung-Wan Cho, Center for Artificial Intelligence Research Kun-Moo Chung, Institute for Advanced Engineering Seong-Han Ha, Samsung Advanced Institute of Technology Seung-Hong Hong, The Korean Institute of Telematics & Electronics Heung-Soon Ihm, Hyundai Co. Ltd. Kyung-Chul Jang, Ministry of Science & Technology Chang-Soo Kim, Goldstar Co. Ltd. Chu-Shik Jhon, Research Institute of Advanced Computer Technology Jae-Kyoon Kim, Korean Institute of Communication Sciences Moon-Hyun Kim, System Engineering Research Institute Sang-Young Kim, The Electronic Times Se-Jong Kim, Ministry of Trade, Industry & Energy Yung-Taek Kim, Seoul Nat'l Univ. Cho-Sik Lee, The Korean Society for Cognitive Science Choong-Woong Lee, IEEE Korea Council Chung-Nim Lee, POSTECH Dong-Ho Lee, The Korean Institute of Electrical Engineers Suk-Ho Lee, Korea Information Science Society Yong-Bok Lee, Samsung Electronics Co., Ltd. Yong-Kyung Lee, Korea Telecom Seok-Keun Yoon, Ministry of Communication Si-Ryong Yu, Daewoo Electronics Co., Ltd. Organizing Committee Co-Chairs Sung-Yang Bang, POSTECH Kyu-Bock Cho, Hanseo Univ. Ho-Sun Chung, Kyungbook Nat'l Univ. Sub-Committee Chairs General Affairs : Eun-Soo Kim, Kwangwoon Univ. 
Finance : Sung-Kwon Kim, Samsung Electronics Co., Ltd.
Publicity : Sung-Kwon Park, Hanyang Univ.
Publication : Il-Song Han, Korea Telecom
Exhibition : Mun-Sung Han, SERI
Local Arrangement : Yillbyung Lee, Yonsei Univ.
Registration : Duck-Jin Chung, Inha Univ.
Tutorial : Soo-Ik Chae, Seoul Nat'l Univ.
Industrial Liaison : Gwang-Hyung Lee, Soongsil Univ.

Program Committee Co-Chairs
Kunihiko Fukushima, Osaka Univ., Japan
Stephen Grossberg, Boston Univ., USA
Myung-Won Kim, ETRI, Korea
Soo-Young Lee, KAIST, Korea
John Taylor, King's College, UK

Program Committee Members
Seung-Kwon Ahn, Goldstar Central Research Lab., Korea
Igor Aleksander, Imperial College, UK
Luis B. Almeida, INESC, Portugal
James Anderson, Brown Univ., USA
Kazuo Asakawa, Fujitsu Lab. Ltd., Japan
Yiannis Attikiouzel, Univ. of Western Australia, Australia
Eui-Young Cha, Pusan Nat'l Univ., Korea
Soo-Ik Chae, Seoul Nat'l Univ., Korea
Lai-Wan Chan, Chinese Univ. of Hong Kong, Hong Kong
Sung-Il Chien, Kyungbook Nat'l Univ., Korea
Sungzoon Cho, POSTECH, Korea
Yong-Beom Cho, Konkuk Univ., Korea
Jin-Young Choi, Seoul Nat'l Univ., Korea
Myung-Ryul Choi, Hanyang Univ., Korea
Duck-Jin Chung, Inha Univ., Korea
Hong Chung, POSTECH, Korea
Dante Del Corso, Politecnico di Torino, Italy
Yann Le Cun, AT&T Bell Lab., USA
Rolf Eckmiller, Univ. of Bonn, Germany
Francoise Fogelman-Soulie, Univ. of Paris Sud, France
Kunihiko Fukushima, Osaka Univ., Japan
Lee Giles, NEC Inst., USA
Stephen Grossberg, Boston Univ., USA
Yeong-Ho Ha, Kyungbook Nat'l Univ., Korea
Dan Hammerstrom, Adaptive Solutions Inc., USA
Il-Song Han, Korea Telecom, Korea
Mun-Sung Han, SERI, Korea
Stephen Hanson, Siemens Co. Research, USA
Yuzo Hirai, Univ. of Tsukuba, Japan
Young-Sik Hong, Dongguk Univ., Korea
Naohiro Ishii, Nagoya Inst. of Tech., Japan
Masumi Ishikawa, Kyushu Inst. of Tech., Japan
Akira Iwata, Nagoya Inst. of Tech., Japan
Ju-Seog Jang, Nat'l Fisheries Univ., Korea
B. Keith Jenkins, Univ. of Southern California, USA
Hong-Tae Jeon, Chung-Ang Univ., Korea
Yeun-Cheul Jeung, Samsung Co., Korea
Nikola-Kirilov Kasabov, Univ. of Otago, New Zealand
Rhee-Man Kil, ETRI, Korea
Byung-Ki Kim, Chunnam Nat'l Univ., Korea
Dae-Su Kim, Hanshin Univ., Korea
Eun-Soo Kim, Kwangwoon Univ., Korea
Eung-Soo Kim, Sunghwa Univ., Korea
Jai-Hi Kim, Yonsei Univ., Korea
Jin-Hyung Kim, KAIST, Korea
Jung-Hawn Kim, Univ. of Louisiana, USA
Moon-Won Kim, Naval Research Lab., USA
Kwang-Ill Koh, Goldstar Industrial Systems, Korea
Seong-Gon Kong, Soongsil Univ., Korea
Bart Kosko, Univ. of Southern California, USA
Kazuo Kyuma, Mitsubishi Electric Co., Japan
Francesco Lauria, Univ. Napoli, Italy
Bang-Won Lee, Samsung Electronics, Korea
Chan-Do Lee, Taejon Univ., Korea
Choon-Kil Lee, Seoul Nat'l Univ., Korea
Geun-Bae Lee, POSTECH, Korea
Gwang-Hyung Lee, Soongsil Univ., Korea
Huen-Joo Lee, Goldstar Central Research Lab., Korea
Hwang-Soo Lee, KAIST, Korea
Ke-Sig Lee, Samsung Group, Korea
Sukhan Lee, Univ. of Southern California, USA
Won-Don Lee, Chungnam Nat'l Univ., Korea
Yillbyung Lee, Yonsei Univ., Korea
Young-Jik Lee, ETRI, Korea
Cheng-Yuan Liou, Nat'l Taiwan Univ., Taiwan
Raymond Lister, Univ. of Queensland, Australia
Teck-Seng Low, Nat'l Univ. of Singapore, Singapore
Ho Chung Lui, Nat'l Univ. of Singapore, Singapore
Song-De Ma, Chinese Academy of Sciences, China
Maria Marinaro, Univ. of Salerno, Italy
Gen Matsumoto, Electrotechnical Lab., Japan
Gyu Moon, Hallim Univ., Korea
Pietro G. Morasso, Univ. of Genova, Italy
Takashi Nagano, Hosei Univ., Japan
Jong-Ho Nang, Sogang Univ., Korea
Kumpati S. Narendra, Yale Univ., USA
Jong-Hoon Oh, POSTECH, Korea
Erkki Oja, Helsinki Univ., Finland
Sigeru Omatu, Univ. of Tokushima, Japan
Eung-Gi Paek, Rockwell, USA
Cheol-Hoon Park, KAIST, Korea
Dong-Jo Park, KAIST, Korea
Sung-Kwon Park, Hanyang Univ., Korea
Andras Pellionisz, Silicon Valley Neurocomputing Inst., USA
Michael Perrone, Brown Univ., USA
Alberto Prieto, Univ. de Granada, Spain
Demetri Psaltis, California Inst. of Tech., USA
Hide-Aki Saito, Tamagawa Univ., Japan
Sebastian Seung, AT&T Bell Lab., USA
Jong-Han Shin, ETRI, Korea
Omori Takashi, Tokyo Univ. of Agri. & Tech., Japan
Shaohua Tan, Nat'l Univ. of Singapore, Singapore
Horia-Nicolai L. Teodorescu, Ecole Polytechnique Federale de Lausanne
Philip Treleaven, University College London, UK
Minoru Tsukada, Tamagawa Univ., Japan
Shiro Usui, Toyohashi Univ. of Technology, Japan
M. Vidyasagar, Centre for AI & Robotics, India
Bo-Hyeun Wang, Goldstar Central Research Lab., Korea
Lei Xu, Chinese Univ. of Hong Kong, Hong Kong
Pingfan Yan, Tsinghua Univ., China
Hyun-Seung Yang, KAIST, Korea
Young-Kyu Yang, SERI, Korea
Toyohiko Yatagai, Univ. of Tsukuba, Japan
Hyun-Soo Yoon, KAIST, Korea
Shuji Yoshizawa, Univ. of Tokyo, Japan
Byoung Tak Zhang, GMD, Germany
Jacek Zurada, Univ. of Louisville, USA

Information for Authors:
Original papers are solicited that describe unpublished work on neural networks or related topics such as fuzzy logic, genetic algorithms, and chaos. One original and five copies of each manuscript in English must be received by May 31, 1994 (extended due date). Submissions will be acknowledged on receipt. Submitted papers will be reviewed by the Program Committee, and corresponding authors will be informed of the decisions at the end of July 1994. No submitted material will be returned to authors. Electronic-mail and facsimile submissions are not acceptable.

Paper Format
Submitted manuscripts must be camera-ready on A4-size white paper with 2.5 cm margins on all four sides (24.5 cm x 16.0 cm printing area) and should not exceed 6 pages, including figures, tables, and references. A single-spaced, single-column format in Times or a similar font at 10-point size is recommended. The complete paper title, full author name(s), affiliation(s), and mailing address(es) should be centered at the top of the first page. An abstract of fewer than 150 words should follow.
Authors are encouraged to use LaTeX. The appropriate LaTeX style file and an example file can be obtained by FTP or e-mail. To get the files by FTP, use the following commands:

ftp cnsl.kaist.ac.kr
(LOGIN:) anonymous
(PASSWORD:)
cd paperformat
get ICONIP94.sty
get ICONIP94-example.tex
bye

If FTP is not convenient, send an e-mail message to "iconip94 at cnsl.kaist.ac.kr", of which the first 2 lines should be:

send ICONIP94.sty
send ICONIP94-example.tex

Non-LaTeX users may ask for an example of the paper layout by fax.

Accompanying Letter
In an accompanying letter, the following should be included:
full title of the paper
corresponding author name with mailing address, fax number, and e-mail address
presenting author name
technical sessions (first and second choices)
preferred presentation mode (oral or poster)
keywords (up to 5)
audio-visual requirements (overhead projector, slide projector, video)
The Program Committee may recommend a change of presentation mode, if necessary.

Conference Proceedings
All papers in the oral and poster sessions will be published in the Conference Proceedings, which will be available at the Conference. At least one author of each accepted paper must complete advance registration before August 31, 1994. Only manuscripts of authors who meet this requirement will be published in the Proceedings.

Paper Copyright
By submitting a paper, authors agree to the transfer of its copyright to the Conference Organizer for the proceedings. All submitted papers become the property of the Conference Organizer.

Oral Presentation
The official language of the Conference is English, and no translation will be provided. The time assigned for contributed talks will be 20 minutes (15 minutes for presentation and 5 minutes for discussion), and for invited talks 30 minutes (25 minutes for presentation and 5 minutes for discussion). An overhead projector and a 35mm slide projector will be available in the preview room.
Presenters may test their transparencies and slides.

Poster Presentation
A specific time (about 1.5 hours) will be allocated for poster sessions, with no oral sessions in parallel. Each author will be provided with a bulletin board 1.5 m high by 0.9 m wide. Authors are requested to remain in the vicinity of their board for the whole duration of the session to answer questions.
- One backboard panel is available per poster presentation. A board is 90 cm x 150 cm (W x H).
- The title, authors' names, and affiliations should occupy the top 20 cm of the poster panel.
- A blank square should be left at the upper left corner for the reference number.
- The poster should be prepared in English. The title should be brief, informative, and readable from 2 to 3 meters.
- Scotch tape, pins, paste, and scissors will be provided by the secretariat.

* Poster Sample
+----+-----------------------+-----------
|NO. |TITLE,NAME,AFFILIATION |   20cm
+----+-----------------------+-----------
|                            |
|                            |
|                            |
|                            |   150cm
|                            |
|                            |
|                            |
+----------------------------+-----------
              90cm

*****************************************************************************

o Letter of Invitation : Upon request to the Secretariat, a Letter of Invitation to ICONIP'94-Seoul will be sent to those who have fully prepaid. Please note that the Organizing Committee will not bear any financial obligations to any party as a result of its issue.

o Secretariat : All inquiries concerning the Conference should be addressed to the Secretariat:
ICONIP'94-Seoul Secretariat
c/o INTERCOM Convention Service, Inc. (Conference Agency)
SL. Kang Nam P.O. Box 641, Seoul 135-606, Korea
Tel : +82-2-515-1560/546-7065
Fax : +82-2-516-4807
E-mail : ICONIP at cair.kaist.ac.kr

o Important Due Dates
Extended deadline for paper submission: May 31, 1994
Notice of acceptance: July 31, 1994
Deadline for advance registration and hotel reservation: August 31, 1994

o Supporting Organizations
Samsung Electronics Co.
Goldstar Co., Ltd.
Hyundai Electronic Industry Company Ltd.
Daewoo Telecom Ltd.

From pjh at compsci.stirling.ac.uk Fri May 13 16:36:23 1994
From: pjh at compsci.stirling.ac.uk (Peter J.B. Hancock)
Date: 13 May 94 16:36:23 BST (Fri)
Subject: Call for papers
Message-ID: <9405131636.AA26197@uk.ac.stir.cs.nevis>

Our apologies if you receive this more than once...

FINAL Call for Papers
3rd Neural Computation and Psychology Workshop
University of Stirling, Scotland
31 August - 2 September 1994

This is the third in a series of workshops looking at the role of neural computational models in psychological research. The first, in Bangor in 1992, was on neurodynamics and psychology; last year's, in Edinburgh, concentrated on models of memory and language. This year's theme is models of perception: general vision, faces, olfaction, sound, music etc., though there will be at least one general session where the subjects will be determined by the papers proposed. There will be invited and contributed talks and posters. Invited speakers include Dr. Ray Meddis and Professor David Willshaw. It is hoped that proceedings will be published after the event.

The workshop will be limited to 75 participants to encourage an informal atmosphere. There will be 5 single-track sessions, starting on Wednesday morning and ending after lunch on Friday. Accommodation will be in student residences on campus, with the option of staying in the management centre hotel if wished. Stirling is situated in the centre of Scotland, with easy access by road, rail and air. For those wishing to spend the subsequent weekend walking, the Highlands are close at hand, and for those who prefer to be indoors, the Edinburgh International Festival and Fringe will still be in progress. We intend to keep costs low: accommodation will be about 16 pounds per night for bed and breakfast.

Papers will be selected on the basis of abstracts of at most 1000 words, sent by email or hardcopy to the first address below.
Extended deadline for submission: 15 June 1994. Participation from postgraduates is particularly encouraged. For further information contact: Peter Hancock, Department of Psychology, University of Stirling, FK9 4LA pjh at uk.ac.stir.cs, Telephone: (44) 0786-467659 Fax: (44) 0786 467641 Leslie Smith, Department of Computing Science and Mathematics, lss at uk.ac.stir.cs, Telephone: (44) 0786-467435, Fax: (44) 0786 464551 From hunt at DBresearch-berlin.de Mon May 16 16:48:00 1994 From: hunt at DBresearch-berlin.de (Dr. Ken Hunt) Date: Mon, 16 May 94 16:48 MET DST Subject: Neuro-Fuzzy Workshop Message-ID: CALL FOR PAPERS Workshop on Neuro-Fuzzy Systems ------------------------------- Hamburg, Germany, 8--9 September 1994 To be held following the International Conference on Intelligent Systems Engineering ISE 94, Hamburg, Germany (conference dates: September 5--8 1994). Background ---------- The ISE conference will be followed by a set of four workshops focussing on the relationships between methods of control engineering and artificial intelligence in the development of intelligent systems in engineering. The goal of the workshops is to support the exchange of background information and the development of a common understanding of the relative merits and limitations of the various approaches. The initiation of further collaborative actions is expected as a major outcome of the workshops with the long-term goal of bridging the gap between the two classes of approaches. These are intended to be highly interactive sessions leading to a better understanding of the motivation and perspectives of each discipline. The Neuro-Fuzzy Workshop ------------------------ Fuzzy systems and neural nets have been successfully applied to a wide range of application fields including signal processing, control, image analysis, pattern recognition and diagnostics. 
The combination of the two paradigms allows the merging of the sophisticated learning algorithms developed in the realm of neural nets with the representation of qualitative, cognitively transparent rules in fuzzy inference systems. Various architectures for hybrid neuro-fuzzy systems have been proposed: serial and hierarchical coupling of fuzzy systems and neural nets, and heterogeneous fuzzy-neural nets. This empirical work on neuro-fuzzy combinations has recently been underpinned by theoretical results establishing the direct functional equivalence of certain types of networks and a class of fuzzy systems.

The workshop aims to present a balanced overview of this rapidly expanding field from both a theoretical and an application-oriented viewpoint. Recent developments in design tools will also play an important role. Specific topics for the workshop include, but are not limited to:
- Functional equivalence of neural and fuzzy systems
- Structure selection
- Common training algorithms
- Transparency/interpretation of trained systems
- Knowledge-based neural networks
- Fuzzy-neural adaptive control
- Computational issues and implementation
- Description of industrial applications
- Design tools for hybrid architectures

Papers will be selected according to their quality, significance, originality, and their potential to generate discussion on the major theme of the workshop. Presentations should be specifically designed to support an exchange of ideas and to indicate areas where contributions from the other discipline are expected. Informal working notes will be distributed during the workshop; no copyright will be requested. A paper must not exceed 10000 words, excluding references and abstract. People who wish to attend the workshop without submitting a paper should send a letter describing their background and research interests by the paper submission deadline.
Submission:
----------
Please direct enquiries and submit papers or extended abstracts (3 copies please) to:

Dr K J Hunt
Systems Technology Research
Daimler-Benz AG
Alt-Moabit 91 b
D-10559 BERLIN
Germany
Tel: (030) 399 82 275  Int: + 49 30 399 82 275
FAX: (030) 399 82 107  Int: + 49 30 399 82 107
Email: hunt at DBresearch-berlin.de

Schedule:
--------
Submission deadline: July 1st, 1994
Notifications sent: July 31st, 1994
Workshop: September 8/9th, 1994

Organizing Committee:
--------------------
Ken Hunt (Daimler-Benz Research, Berlin) hunt at DBresearch-berlin.de
Roland Haas (Daimler-Benz Research, Berlin and TU-Clausthal) haas at DBresearch-berlin.de
Dietmar Moeller (Technische Universitaet Clausthal) moeller at fuzzy-labor.in.tu-clausthal.de

Other Workshops:
---------------
The three other workshops and contacts are:
Qualitative and Quantitative Approaches to Model-based Diagnosis: freitag at zfe.siemens.de
Advanced Planning and Scheduling: hje at robots.oxford.ac.uk
Architectures for Intelligent Systems: marin at iic.es

From pedreira at ele.puc-rio.br Mon May 16 17:27:16 1994
From: pedreira at ele.puc-rio.br (Carlos Eduardo Pedreira)
Date: Mon, 16 May 94 16:27:16 EST
Subject: 2 cfp Brazilian Cong. on Neural Networks
Message-ID: <9405161927.AA11956@Octans.ele.puc-rio.br>

Dear Neural Networkers,

Please note the new deadlines for paper submission.

Carlos E. Pedreira - Chairman of the National Council on Neural Networks

*****************************************************************************

1st BRAZILIAN CONGRESS (and 2nd SCHOOL) ON ARTIFICIAL NEURAL NETWORKS
October 24-27, 1994
Itajuba, Minas Gerais

SECOND CALL FOR PAPERS

The National Council on Neural Networks is pleased to announce the Federal Engineering School at Itajuba (EFEI) as the venue for the 1st Brazilian Congress / 2nd School on Artificial Neural Networks (ANN).
The objectives of the Congress are twofold: the first is to encourage communication among researchers whose work either draws support from, or complements, the theory and applications of ANN-related models; the second is to explore industrial applications of ANN technology that make systems more convenient. The program committee cordially invites interested authors to submit papers dealing with any aspect of research and applications related to the use of ANN models. Papers will be carefully reviewed, and only accepted papers will appear in the proceedings, to be available at the congress to all registrants.

Possible topic areas include (although not limited to) the following:
- foundations and mathematical issues
- learning and memory
- new architectures
- neurobiological systems
- hybrid systems
- hardware implementations
- applications to
  * robotics and automation
  * system control
  * signal processing
  * pattern recognition
  * forecasting
  * optimization

In addition to the paper presentations, there will be other activities such as:
- a short course on the theory of neural computation;
- tutorials on ANN applications in energy, business, and biomedicine, among others;
- a panel session entitled "Perspective and Reality of ANN";
- social programs and sight-seeing tours.

The short course and the tutorials are offered as an introduction for newcomers to the area. The lectures will be presented by reputed researchers and recognized practitioners. The short course will be on October 24th, and the tutorials will run along with the congress on October 25-27, 1994.

Papers can be written in Portuguese or English. An article shall not exceed 6 pages on 8.5 x 13 inch paper. Articles must use a double-column format, with header, footer and lateral margins equal to 1 inch. A 7 cm header margin is required on the first page. The suggested font is Times Roman, with 5 characters per centimetre. All papers must include an abstract.
PAPER SCHEDULE AT-A-GLANCE
Authors submit original plus 3 copies of papers by: June 30th, 1994
Notification of review committee's decision to be posted by: August 15th, 1994

INTERNATIONAL DISTINGUISHED SCHOLARS
Prof. Yoh-Han Pao - Case Western Reserve University
Prof. Manoel F. Tenorio - Purdue University
Prof. Dejan J. Sobajic - EPRI
Prof. Yaser Abu-Mostafa - Caltech
Dr. Steve Suddarth - AFOSR/NE

PROGRAM COMMITTEE
Prof. Alexandre P. Alves da Silva (Chairman) - EFEI
Prof. Armando Freitas da Rocha - UNICAMP
Prof. Manoel F. Tenorio - Purdue University
Prof. Nestor Caticha - USP
Prof. Carlos Eduardo Pedreira - PUC-Rio

2nd SCHOOL ON ANN CHAIR
Profa. Teresa B. Ludermir - UFPE

STEERING COMMITTEE
Prof. Germano Lambert Torres (Chairman) - EFEI
Prof. Luiz P. Caloba - COPPE/UFRJ
Prof. Luiz Eduardo Borges da Silva - EFEI
Dr. Eduardo Nery - CEMIG

NATIONAL COUNCIL ON NEURAL NETWORKS
Prof. Carlos Eduardo Pedreira - PUC-Rio
Prof. Teresa B. Ludermir - UFPE
Dr. Ricardo J. Machado - IBM-Brasil
Prof. Dante A.C. Barone - UFRGS
Prof. Luiz P. Caloba - UFRJ
Prof. Renato M. Sabbatini - UNICAMP

SPONSORED BY
IBM-Brasil
American Airlines
EFEI / FUPAI
FAPERJ
FAPEMIG
CEMIG

All correspondence should be conducted through:
Professor Alexandre P. Alves da Silva
EFEI / IEE
Campus Prof. J.R. Seabra
Av. BPS, 1303 - CEP 37500-000
Itajuba - MG
BRAZIL
Tel.: +55-35-629-1247
Fax.: +55-35-629-1187
E-mail: alex at efei.dcc.ufmg.br

From P.McKevitt at dcs.shef.ac.uk Tue May 17 15:01:52 1994
From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt)
Date: Tue, 17 May 94 15:01:52 BST
Subject: Integration of Natural Language and Vision Processing
Message-ID: <9405171401.AA27085@dcs.shef.ac.uk>

**** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE ****

PROGRAMME AND CALL FOR PARTICIPATION

AAAI-94 Workshop on Integration of Natural Language and Vision Processing
Twelfth National Conference on Artificial Intelligence (AAAI-94)
Seattle, Washington, USA
Tuesday/Wednesday, August 2nd/3rd, 1994

Chair: Paul Mc Kevitt, Department of Computer Science, University of Sheffield, ENGLAND, EU

WORKSHOP COMMITTEE:
Prof. Mike Brady (Oxford, England)
Prof. Jerry Feldman (ICSI, Berkeley, USA)
Prof. John Frisby (Sheffield, England)
Prof. Frank Harary (CRL, New Mexico, USA)
Dr. Eduard Hovy (USC ISI, Los Angeles, USA)
Dr. Mark Maybury (MITRE, Cambridge, USA)
Dr. Ryuichi Oka (RWC P, Tsukuba, Japan)
Prof. Derek Partridge (Exeter, England)
Dr. Terry Regier (ICSI, Berkeley, USA)
Prof. Roger Schank (ILS, Illinois, USA)
Prof. Noel Sharkey (Sheffield, England)
Dr. Oliviero Stock (IRST, Italy)
Prof. Dr. Wolfgang Wahlster (DFKI, Germany)
Prof. Yorick Wilks (Sheffield, England)

WORKSHOP DESCRIPTION:
There has been a recent move towards considering the integration of perception sources in Artificial Intelligence (AI) (see Dennett 1991 and Mc Kevitt (Guest Ed.) 1994). This workshop will focus on research involved in the integration of Natural Language Processing (NLP) and Vision Processing (VP). Although there has been much progress in developing theories, models and systems in the areas of NLP and VP, there has been little progress on integrating these two subareas of AI.
It is not clear why there has not already been much activity in integrating these two areas. Is it because of the long-time reductionist trend in science up until the recent emphasis on chaos theory, nonlinear systems, and emergent behaviour? Or is it because the people who have tended to work on NLP tend to be in other departments, or of a different ilk, from those who have worked on VP? We believe it is high time to bring together NLP and VP. We have already advertised a call for papers for a special volume of the Journal of AI Review to focus on their integration, and we have had a tremendous response. There will be three special issues focussing on theory and applications of NLP and VP and intelligent multimedia systems.

The workshop is of particular interest at this time because research in NLP and VP has advanced to the stage that each can benefit from integrated approaches. Also, such integration is important as people in NLP and VP can gain insight from each others' work.

References
Dennett, Daniel (1991) Consciousness explained. Harmondsworth: Penguin
Mc Kevitt, Paul (1994) (Guest Editor) Integration of Natural Language and Vision Processing. Special Volume 8(1,2,3) of AI Review Journal. Dordrecht: Kluwer (forthcoming)

WORKSHOP TOPICS:
The workshop will focus on these themes:
* Multimedia retrieval
* Multimedia document processing
* Speech, gesture and gaze
* Theory
* Multimedia presentation
* Spatial relations
* Multimedia interfaces
* Reference

PROGRAMME:

Tuesday, August 2nd, 1994
*************************

INTRODUCTION I:
8.45 `Introduction' Paul Mc Kevitt

MULTIMEDIA RETRIEVAL: (Chair: Neil C. Rowe)
9.00 `Domain-independent rules relating captions and pictures' Neil C. Rowe, Computer Science, U.S.
Naval Postgraduate School, Monterey CA, USA 9.30 `An image retrieval system that accepts natural language' Hiromasa NAKATANI and Yukihiro ITOH Department of Information and Knowledge Engineering, Shizuoka University, Hamamatsu, Japan 10.00 Break MULTIMEDIA DOCUMENT PROCESSING: (Chair: Rohini Srihari) 10.30 `Integrating text and graphical input to a knowledge base' Raman Rajagopalan Dept. of Computer Sciences, University of Texas at Austin, USA 11.00 `Photo understanding using visual constraints generated' from accompanying text Rohini Srihari Center of Excellence for Document Analysis and Recognition (CEDAR), SUNY Buffalo, NY, USA 11.30 Discussion SPEECH, GESTURE AND GAZE: (Chair: Jordi Robert-Ribes) 12.00 `Audiovisual recognition of speech units: a tentative functional model compatible with psychological data' Jordi Robert-Ribes, Michel Piquemal, Jean-Luc Schwartz & Pierre Escudier Institut de la Communication Parlee (ICP) Grenoble, France, EU 12.30 Discussion 12.45 LUNCH SITE DESCRIPTION (VIDEO): (Chair: Arnold G. Smith) 2.00 `The spoken image system: on the visual interpretation of verbal scene descriptions' Sean O Nuallain, Benoit Farley & Arnold G. Smith Dublin City University, Dublin, Ireland, EU & NRC, Ottawa, Canada THEORY: 2.20 `Behavioural descriptions from image sequences' Hilary Buxton and Richard Howarth School of Cognitive and Computing Sciences, University of Sussex & Department of Computing Science, QMW, University of London 2.50 `Visions of language' Paul Mc Kevitt Department of Computer Science, University of Sheffield, England, EU 3.15 Discussion 3.30 Break 4.00 `Language animation' A. Narayanan, L. Ford, D. Manuel, D. Tallis, and M. Yazdani Media Laboratory, Department of Computer Science, University of Exeter, England, EU 4.30 Discussion MULTIMEDIA PRESENTATION: (Chair: Arnold G. 
Smith) 4.45 `Assembly plan generation by integrating pictorial and textual information in an assembly illustration' Shoujie He, Norihiro Abe and Tadahiro Kitahashi Dept of Information Systems and Computer Science, National Univ. of Singapore, Singapore, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, Iizuka-shi, Japan & The Institute of Scientific and Industrial Research Osaka University, Osaka, Japan 5.15 `Multimedia presentation of interpreted visual data' Elisabeth Andre, Gerd Herzog & Thomas Rist DFKI & Universitaet des Saarlandes, Saarbruecken, Germany, EU 5.45 Discussion 6.00 OICHE MHAITH Wednesday, August 3rd, 1994 *************************** INTRODUCTION: 8.45 `Introduction' Paul Mc Kevitt SPATIAL RELATIONS I: (Chair: Jeffrey Mark Siskind) 9.00 `Propositional semantics in the WIP system' Patrick Olivier & Jun-ichi Tsujii Centre for Intelligent Systems University of Wales at Aberystwyth, Penglais, Wales, EU & Centre for Computational Linguistics, UMIST, Manchester, England, EU 9.30 `Spatial layout identification and incremental descriptions' Klaus-Peter Gapp & Wolfgang Maass Cognitive Science Program, Saarbruecken, Germany, EU 10.00 Break 10.30 `Axiomatic support for event perception' Jeffrey Mark Siskind Department of Computer Science, University of Toronto, Canada 11.00 Discussion SPATIAL RELATIONS II: (Chair: Stephan Kerpedjiev) 11.30 `A cognitive approach to an interlingua representation of spatial descriptions' Irina Reyero-Sans & Jun-ichi Tsujii Centre for Computational Linguistics, UMIST, Manchester, England, EU 12.00 `Describing spatial relations in weather reports through prepositions' Stephan Kerpedjiev, NOAA/ERL/Forecast Systems Laboratory, Boulder, Colorado, USA 12.30 Discussion 12.45 LUNCH MULTIMEDIA INTERFACES: (Chair: Yuri A. 
TIJERINO) 2.00 `Talking pictures: an empirical study into the usefulness of natural language output in a graphical interface' Carla Huls, Edwin Bos & Alice Dijkstra NICI, Nijmegen University, Nijmegen, The Netherlands & Unit of Experimental and Theoretical Psychology, Leiden University, The Netherlands 2.30 `From verbal and gestural input to 3-D visual feedback' Yuri A. TIJERINO, Tsutomu MIYASATO & Fumio KISHINO ATR Communication Systems Research Laboratories, Kyoto, Japan 3.00 Discussion 3.30 Break 4.00 `An integration of natural language and vision processing towards an agent-based future TV system' Yeun-Bae Kim, Masahiro Shibata & Masaki Hayashi NHK (Japan Broadcasting Corporation) Science & Technical Research Laboratories, Tokyo, Japan 4.30 Discussion REFERENCE: (Chair: Lawrence D. Roberts) 4.45 `An AI module for reference based on perception' John Moulton, Hartwick College, Oneonta, N.Y. USA and Lawrence D. Roberts, SUNY, Binghamton, N.Y. USA 5.15 `Instruction use by a vision-based mobile robot' Tomohiro Shibata, M. Inaba, & H. Inoue Department of Mechano Informatics, The University of Tokyo, Japan 5.45 Discussion 6.00 OICHE MHAITH PUBLICATION: Workshop notes/preprints will be published by AAAI. If there is sufficient interest we will publish a book on the workshop with AAAI Press. WORKSHOP CHAIR: Paul Mc Kevitt Department of Computer Science Regent Court University of Sheffield 211 Portobello Street GB- S1 4DP, Sheffield England, UK, EU. e-mail: p.mckevitt at dcs.shef.ac.uk fax: +44 742 780972 phone: +44 742 825572 (office) 825590 (secretary) ATTENDANCE: We hope to have an attendance between 30-50 people at the workshop. 
If you are interested in attending then please send the following form to p.mckevitt at dcs.shef.ac.uk as soon as possible: cut--------------------------------------------------------------------------- Name: Affiliation: Full Address: E-mail: cut---------------------------------------------------------------------------- REGISTRATION ENQUIRIES FOR AAAI CAN BE MADE TO: NCAI at aaai.org REGISTRATION FEE: Incorporated into the technical registration fee except for those who are workshop attendees only. **** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE **** **** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE **** From P.McKevitt at dcs.shef.ac.uk Tue May 17 15:16:07 1994 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Tue, 17 May 94 15:16:07 BST Subject: Speech and Natural Language Processing Message-ID: <9405171416.AA27396@dcs.shef.ac.uk> **** SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE **** **** SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE **** PROGRAMME AND CALL FOR PARTICIPATION AAAI-94 Workshop on Integration of Natural Language and Speech Processing Twelfth National Conference on Artificial Intelligence (AAAI-94) Seattle, Washington, USA Sunday/Monday, July 31st/August 1st, 1994 Chair: Paul Mc Kevitt Department of Computer Science University of Sheffield, ENGLAND, EU WORKSHOP COMMITTEE: Prof. Ole Bernsen (Roskilde, Denmark) Dr. Martin Cooke (Sheffield, England) Dr. Daniel Jurafsky (ICSI, Berkeley, USA) Dr. Steve Renals (Cambridge, England) Prof. Noel Sharkey (Sheffield, England) Dr. Eiichiro Sumita (ATR, Japan) Prof. Dr. Walther v.Hahn (Hamburg, Germany) Prof. Yorick Wilks (Sheffield, England) Prof. Dr. Wolfgang Wahlster (DFKI, Germany) Dr. Sheryl R. Young (CMU, USA) WORKSHOP DESCRIPTION: There has been a recent move towards considering the integration of perception sources in Artificial Intelligence (AI) (see Dennett 1991 and Mc Kevitt (Ed.) 1994). 
This workshop will focus on research involved in the integration of Natural Language Processing (NLP) and Speech Processing (SP). The aim here is to bring to the AI community results being presented at computational linguistics conferences (e.g. COLING/ACL) and speech conferences (e.g. ICASSP, ICSLP). Although there has been much progress in developing theories, models and systems in the areas of NLP and SP, we have just started to see progress on integrating these two subareas of AI. Most success has been with speech synthesis, and less with speech understanding. However, there are still a number of important questions to answer about the integration of speech and language processing. How is intentional information best gleaned from speech input? How does one cope with situations where there are multiple speakers in a dialogue with multiple intentions? How does discourse understanding occur in multi-speaker situations with noise? How does prosodic information help NLP systems? What corpora (e.g. the DARPA ATIS corpora, the MAP-TASK corpus from Edinburgh) exist for integrated data on speech and language?

The workshop is of particular interest at this time because research in NLP and SP has advanced to the stage that each can benefit from integrated approaches. Also, such integration is important as people in NLP and SP can gain insight from each others' work.
References

Dennett, Daniel (1991) Consciousness Explained. Harmondsworth: Penguin.
Mc Kevitt, Paul (1994) (Guest Editor) Integration of Natural Language and
  Vision Processing. Special Volume 8(1,2,3) of AI Review Journal.
  Dordrecht: Kluwer (forthcoming).

WORKSHOP TOPICS:

The workshop will focus on these themes:

 * Speech understanding
 * Dialogue & Discourse
 * Machine translation
 * Architectures
 * Site descriptions (Hamburg, JANUS-II, ATR, CMU)

PROGRAMME:

Sunday, July 31st, 1994
***********************

INTRODUCTION I:
 8.45  `Introduction'
       Paul Mc Kevitt

SPEECH UNDERSTANDING I: (Chair: Alberto Lavelli)
 9.00  `Left-to-Right analysis of spoken language'
       Bernd Seestaedt, Franz Kummert & Gerhard Sagerer
       University of Bielefeld, Germany, EU
 9.30  `An N-Best representation for bidirectional parsing strategies'
       Anna Corazza & Alberto Lavelli
       IRST, Trento, Italy, EU
10.00  Break
10.30  `Incorporation of phoneme-context-dependence in LR table through
       constraint propagation method'
       Hozumi TANAKA, Hui LI & Takenobu TOKUNAGA
       Tokyo Institute of Technology, Tokyo, Japan
11.00  Discussion

SPEECH UNDERSTANDING II: (Chair: Karen Ward)
11.15  `On the need for a theory of knowledge sources for spoken language
       understanding'
       Karen Ward & David G. Novick
       Oregon Graduate Institute of Science and Technology, Oregon, USA
11.45  `Misrecognition detection in speech recognition'
       Sheryl R. Young
       Department of Computer Science, Carnegie Mellon University, USA
12.15  Discussion
12.30  LUNCH

SITE DESCRIPTION I: (Chair: Nigel Ward)
 2.00  `An outline of the Verbmobil project with focus on the work at the
       University of Hamburg'
       J. Amtrup, Andreas Hauenstein, C. Pyka, V. Weber & S. Wermter
       University of Hamburg, Germany, EU

ARCHITECTURES I: (Chair: Nigel Ward)
 2.15  `An investigation of tightly coupled time synchronous speech language
       interfaces using a unification grammar'
       Andreas Hauenstein & Hans H. Weber
       University of Hamburg & University of Erlangen-Nuernberg, Germany, EU
 2.45  `An approach to tightly-coupled syntactic/semantic processing for
       speech understanding'
       Nigel Ward
       University of Tokyo, Japan
 3.15  Discussion
 3.30  Break

DIALOGUE & DISCOURSE I: (Chair: Jean Veronis)
 4.00  `Pragmatic linguistic constraint models for large-vocabulary speech
       processing'
       Eric Atwell and Paul Mc Kevitt
       University of Leeds & University of Sheffield, England, EU
 4.30  `SpeechActs: a testbed for continuous speech applications'
       Paul Martin & Andy Kehler
       Sun Microsystems Laboratories & Harvard University, USA
 5.00  `NL and speech in the Multext project'
       Jean Veronis, Daniel Hirst, Robert Espesser & Nancy Ide
       CNRS & Universite de Provence, Aix-en-Provence, France
 5.30  Discussion
 6.00  OICHE MHAITH

Monday, August 1st, 1994
************************

INTRODUCTION II:
 8.45  `Introduction'
       Paul Mc Kevitt

SITE DESCRIPTIONS II & III: (Chair: Eiichiro Sumita)
 9.00  `JANUS-II: research in spoken language translation'
       Alex Waibel
       Center for Machine Translation, Carnegie Mellon University, USA &
       University of Karlsruhe, Germany, EU
 9.15  `Work at ATR on spoken language translation'
       Dr. Eiichiro Sumita
       ATR Interpreting Telecommunications Research Laboratories, Kyoto, Japan

MACHINE TRANSLATION: (Chair: Bernhard Suhm)
 9.30  `Bilingual corpus for speech translation'
       Osamu FURUSE, Yasuhiro SOBASHIMA, Toshiyuki TAKEZAWA &
       Noriyoshi URATANI
       ATR Interpreting Telecommunications Research Laboratories, Kyoto, Japan
10.00  Break
10.30  `Speech-language integration in a multi-lingual speech translation
       system'
       Bernhard Suhm, Lori Levin, N. Coccaro, Jaime Carbonell, K. Horiguchi,
       R. Isotani, A. Lavie, L. Mayfield, C.P. Rose, C. Van Ess-Dykema &
       Alex Waibel
       Center for Machine Translation, Carnegie Mellon University, USA;
       ATR Interpreting Telecommunications Research Laboratories, Kyoto,
       Japan; U.S. Department of Defense; & University of Karlsruhe,
       Germany, EU
11.00  Discussion

ARCHITECTURES II: (Chair: Daniel Jurafsky)
11.30  `Towards an artificial agent as the kernel of a spoken dialogue
       system: a progress report'
       David Sadek, A. Ferrieux & A. Cozannet
       French Telecom, CNET, France, EU
12.00  `Integrating experimental models of syntax, phonology, and
       accent/dialect in a speech recognizer'
       Daniel Jurafsky, Chuck Wooters, Gary Tajchman, Jonathan Segal,
       Andreas Stolcke & Nelson Morgan
       ICSI and University of California at Berkeley, Berkeley, USA
12.30  Discussion
12.45  LUNCH

SITE DESCRIPTION IV: (Chair: Sheryl R. Young)
 2.00  `Work at CMU on spoken dialogue systems'
       Sheryl R. Young
       Department of Computer Science, Carnegie Mellon University, USA

DIALOGUE & DISCOURSE II: (Chair: Sheryl R. Young)
 2.15  `Speech recognition in multi-agent dialogue'
       Sheryl R. Young
       Department of Computer Science, Carnegie Mellon University, USA
 2.45  `A study of intonation and discourse structure in directions'
       Barbara J. Grosz, Julia Hirschberg & Christine H. Nakatani
       Harvard University & AT&T Bell Laboratories, USA
 3.15  Discussion
 3.30  Break

ARCHITECTURES III: (Chair: Mary P. Harper)
 4.00  `An integrative architecture for speech and language understanding'
       William Edmondson, Jon Iles & Paul Mc Kevitt
       University of Birmingham & University of Sheffield, England, EU
 4.30  `Integrating language models with speech recognition'
       Mary P. Harper, Leah H. Jamieson, Carl D. Mitchell, Goangshiuan Ying,
       SiriPong Potisuk, Pramila N. Srinivasan, Ruxin Chen, Carla B.
       Zoltowski, Laura L. McPheters, Bryan Pellom & Randall A. Helzerman
       School of Electrical Engineering, Purdue University, USA
 5.00  Discussion
 5.15  OICHE MHAITH

PUBLICATION:

Workshop notes/preprints will be published by AAAI. If there is sufficient
interest we will publish a book on the workshop with AAAI Press.
WORKSHOP CHAIR:

Paul Mc Kevitt
Department of Computer Science
Regent Court
University of Sheffield
211 Portobello Street
GB- S1 4DP, Sheffield
England, UK, EU.
e-mail: p.mckevitt at dcs.shef.ac.uk
fax:    +44 742 780972
phone:  +44 742 825572 (office) 825590 (secretary)

ATTENDANCE:

We hope to have an attendance of between 25 and 50 people at the workshop.
If you are interested in attending then please send the following form to
p.mckevitt at dcs.shef.ac.uk as soon as possible:

cut---------------------------------------------------------------------------
Name:
Affiliation:
Full Address:
E-mail:
cut----------------------------------------------------------------------------

REGISTRATION ENQUIRIES FOR AAAI CAN BE MADE TO: NCAI at aaai.org

REGISTRATION FEE: Incorporated into the technical registration fee except
for those who are workshop attendees only.

**** SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE ****

From kruschke at pallas.psych.indiana.edu Tue May 17 17:46:53 1994
From: kruschke at pallas.psych.indiana.edu (John Kruschke)
Date: Tue, 17 May 1994 16:46:53 -0500 (EST)
Subject: TR announcement: base rates in category learning
Message-ID:

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 3041 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/265a49df/attachment.ksh

From ym00 at crab.psy.cmu.edu Wed May 18 12:53:15 1994
From: ym00 at crab.psy.cmu.edu (Yuko Munakata)
Date: Wed, 18 May 94 12:53:15 EDT
Subject: TR: A PDP Framework for Object Permanence Tasks
Message-ID: <9405181653.AA26835@crab.psy.cmu.edu.psy.cmu.edu>

The following Technical Report is available both electronically, from our
FTP server, and in hard copy form. Instructions for obtaining copies may be
found at the end of this post.
========================================================================

      Now You See It, Now You Don't: A Gradualistic Framework for
       Understanding Infants' Successes and Failures in Object
                         Permanence Tasks

            Yuko Munakata, James L. McClelland,
            Mark H. Johnson, & Robert S. Siegler

                  Carnegie Mellon University

               Technical Report PDP.CNS.94.2
                        May, 1994

3.5-month-old infants seem to show an understanding of the concept of
object permanence when tested through looking-time measures. Why, then, do
infants fail to retrieve hidden objects until 8 months? Answers to this
question, and to questions of infants' successes and failures in general,
depend on one's conception of knowledge representations. Within a
monolithic approach to object permanence, means-ends deficits provide the
standard answer. However, the current experiments with 7-month-old infants
indicate that the means-ends accounts are incomplete. In the first two
studies, infants were trained to pull a towel or push a button to retrieve
a distant toy. Infants were then tested on trials with an opaque or
transparent screen in front of the toy. Trials without toys were also
included, and the difference between Toy and No-Toy trials in number of
retrieval responses was used as a measure of toy-guided retrieval. The
means-ends abilities required for toy-guided retrieval in the Transparent
and Opaque conditions were identical, yet toy-guided retrieval was more
frequent in the Transparent condition. A third experiment eliminated the
possibility that training on the retrieval of visible toys had led infants
to generalize better to the Transparent condition. To explain these data,
an account of the object permanence concept as a gradual strengthening of
representations of occluded objects is developed in the form of a
connectionist model.
The simulations demonstrate how a system might come to form internal
representations of occluded objects, how these representations could be
graded and strengthened, and how the gradedness of representations could
differentially impact looking and reaching behaviors.

=======================================================================

Retrieval information for pdp.cns TRs:

unix> ftp 128.2.248.152        # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.94.2.ps.Z
ftp> quit
unix> zcat pdp.cns.94.2.ps.Z | lpr   # or however you print postscript

NOTE: The compressed file is 292340 bytes long. Uncompressed, the file is
978382 bytes long. The printed version is 43 pages in total.

For those who do not have FTP access, physical copies can be requested from
Barbara Dorney.

From jari at vermis Wed May 18 05:17:00 1994
From: jari at vermis (Jari Kangas)
Date: Wed, 18 May 94 12:17:00 +0300
Subject: Thesis available: On the Analysis of Pattern Sequences by SOMs
Message-ID: <9405180917.AA07773@vermis>

FTP-host: vermis.hut.fi
FTP-file: pub/papers/kangas.thesis.ps.Z

The file kangas.thesis.ps.Z is now available for copying from the anonymous
ftp-site 'vermis.hut.fi' (130.233.168.57):

   On the Analysis of Pattern Sequences by Self-Organizing Maps

   Jari Kangas
   Dr.Tech. Thesis
   Helsinki University of Technology

Abstract: This thesis is organized in three parts. In the first part, the
Self-Organizing Map algorithm is introduced. The discussion focuses on the
analysis of the algorithm; it is shown that its nonlinear nature makes it
difficult to analyze except in some trivial cases. In the second part the
Self-Organizing Map algorithm is applied to several pattern sequence
analysis tasks. The first application is a voice quality analysis system.
It is shown that the Self-Organizing Map algorithm can be applied to voice
analysis by providing the visualization of certain deviations.
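[Editorial aside: for readers who have not seen the algorithm, the basic Self-Organizing Map update can be sketched in a few lines of Python. This is a toy illustration under made-up data and parameters, not code from the thesis.]

```python
import numpy as np

def train_som(data, n_units=10, n_iters=3000, lr0=0.5, sigma0=3.0, seed=0):
    """Basic 1-D Self-Organizing Map: repeatedly pick an input, find the
    best-matching unit, and pull that unit and its map neighbors toward the
    input, with a shrinking neighborhood and decaying learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))            # codebook vectors
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
        frac = 1.0 - t / n_iters
        lr = lr0 * frac                                 # decaying learning rate
        sigma = sigma0 * frac + 0.1                     # shrinking neighborhood
        d = np.abs(np.arange(n_units) - bmu)            # distance along the map
        h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))      # neighborhood function
        w += lr * h[:, None] * (x - w)                  # pull units toward x
    return w

# Toy data along a line; after training the units spread out along it,
# so neighboring units represent similar inputs.
t = np.linspace(0.0, 1.0, 200)
data = np.column_stack([t, t])
w = train_som(data)
err = np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data])
print(w.shape, round(err, 3))
```

The neighborhood function is what produces the topological property discussed next: because neighbors of the winning unit are updated together, nearby map units come to represent similar inputs.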
The key point in the applicability of the Self-Organizing Map algorithm is
the topological nature of the mapping; similar voice samples are mapped to
nearby locations in the map. The second application is a speech recognition
system. Through several experiments it is demonstrated that by collecting
some time-dependent features and using them in conjunction with the basic
Self-Organizing Map algorithm one can improve speech recognition accuracy
considerably. The applications described in the second part of the thesis
were rather straightforward, in that the sequential signal itself was
transformed for the analysis. In the third part of the thesis it is
demonstrated that the Self-Organizing Map algorithm itself can be extended
by identifying each map unit with an arbitrary operator capable of pattern
sequence processing. It is shown that such operator maps are applicable,
for example, to speech signal (waveform) categorization.

--------------------------------------
The thesis is 86 pages (8 preamble + 78 text).

To obtain a copy of the Postscript file:

% ftp vermis.hut.fi
> Name: anonymous
> Password:
> cd pub/papers
> binary
> get kangas.thesis.ps.Z
> quit
(The size of the compressed file is about 0.4 Mbyte.)

Then:

% uncompress kangas.thesis.ps.Z
(The size of the uncompressed file is about 1.2 Mbyte.)
% lpr -s -Pprinter kangas.thesis.ps
-------------------------------------

Jari Kangas
Helsinki University of Technology
Neural Networks Research Centre
Rakentajanaukio 2 C
FIN-02150 Espoo, FINLAND

From maass at igi.tu-graz.ac.at Thu May 19 06:03:20 1994
From: maass at igi.tu-graz.ac.at (Wolfgang Maass)
Date: Thu, 19 May 94 12:03:20 +0200
Subject: new paper in neuroprose
Message-ID: <9405191003.AA29968@figids01.tu-graz.ac.at>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/maass.perspectives.ps.Z

The file maass.perspectives.ps.Z is now available for copying from the
Neuroprose repository. It is a 37-page paper. Hardcopies are not available.
PERSPECTIVES OF CURRENT RESEARCH ABOUT THE COMPLEXITY
OF LEARNING ON NEURAL NETS

by Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universitaet Graz, A-8010 Graz, Austria
email: maass at igi.tu-graz.ac.at

Abstract:

This is a survey paper, which discusses within the framework of
computational learning theory the current state of knowledge and important
open problems in three areas of research about the complexity of learning
on neural nets:

 -- Efficient algorithms for neural nets that learn from mistakes
 -- Bounds for the number of examples needed to train neural nets
 -- PAC-learning on neural nets without a-priori assumptions about the
    learning problem

All relevant definitions are given in the paper, and no previous knowledge
about computational learning theory is assumed.

************************ How to obtain a copy ************************

Via Anonymous FTP:

unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get maass.perspectives.ps.Z
ftp> quit
unix> uncompress maass.perspectives.ps.Z
unix> lpr maass.perspectives.ps
(or what you normally do to print PostScript)

From rmeir at ee.technion.ac.il Thu May 19 14:37:25 1994
From: rmeir at ee.technion.ac.il (Ron Meir)
Date: Thu, 19 May 1994 16:37:25 -0200
Subject: Paper available by ftp
Message-ID: <199405191837.QAA02452@ee.technion.ac.il>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/meir.bias_variance.ps.Z

The following technical report is available by anonymous ftp. 18 printed
pages.

------------------------------------------------------------------------

Bias, Variance and the Combination of Estimators:
The Case of Linear Least Squares

Ronny Meir
Department of Electrical Engineering
Technion
Haifa 32000, Israel
rmeir at ee.technion.ac.il

We consider the effect of combining several least squares estimators on the
solution to a regression problem.
Computing the exact bias and variance curves as a function of the sample
size, we are able to quantitatively compare the effect of the combination
on the bias and variance separately, and thus on the expected error, which
is the sum of the two. First, we show that by splitting the data set into
several independent parts and training each estimator on a different
subset, the performance can in some cases be significantly improved. We
find three basic regions of interest. For a small number of noisy samples
the estimation quality is dramatically improved by combining several
independent estimators. For intermediate sample sizes, however, the effect
of combining estimators can in fact be deleterious, tending to increase the
bias too much. For large sample sizes both the single and the combined
estimator approach the same limit. Our results are derived analytically for
the case of linear least-squares regression, and are valid for systems of
large input dimensions. A definite conclusion of our work is that
substantial improvement in the quality of least-squares estimation is
possible by decreasing the variance at the cost of an increase in bias.
This gain is especially pronounced for small and noisy data sets. We
stress, however, that the approach of estimator combination is not a
panacea for constructing improved estimators and must be applied with care.
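[Editorial aside: the variance half of the trade-off described in this abstract is easy to see numerically. The sketch below is illustrative only, not the paper's analysis; in particular, each sub-estimator here receives its own fresh sample, so the bias penalty of splitting one fixed data set does not appear.]

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(n):
    """Least-squares slope for y = 2x + noise, from a fresh sample of size n."""
    x = rng.uniform(-1.0, 1.0, n)
    y = 2.0 * x + rng.normal(0.0, 1.0, n)
    return np.dot(x, y) / np.dot(x, x)

K, n, trials = 5, 20, 2000

# One estimator per trial vs. the average of K independent estimators per trial.
single = np.array([fit_slope(n) for _ in range(trials)])
combined = np.array([np.mean([fit_slope(n) for _ in range(K)])
                     for _ in range(trials)])

# Both estimators are unbiased here, so averaging K independent copies
# shrinks the variance by roughly a factor of K.
print(single.var(), combined.var())
```

When the K estimators instead share one fixed data set split K ways, each sees fewer samples, which is where the bias increase discussed in the abstract comes from.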
--------------------------------------------------------------------

ftp instructions:

% ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get meir.bias_variance.ps.Z
ftp> bye
% uncompress meir.bias_variance.ps.Z
% lpr meir.bias_variance.ps

From philh at cogs.susx.ac.uk Thu May 19 13:51:36 1994
From: philh at cogs.susx.ac.uk (Phil Husbands)
Date: Thu, 19 May 1994 18:51:36 +0100 (BST)
Subject: SAB94 Program and Registration
Message-ID:

CONFERENCE PROGRAM AND INVITATION TO PARTICIPATE
------------------------------------------------

FROM ANIMALS TO ANIMATS
Third International Conference on Simulation of Adaptive Behavior (SAB94)
Brighton, UK, August 8-12, 1994

The object of the conference is to bring together researchers in ethology,
psychology, ecology, cybernetics, artificial intelligence, robotics, and
related fields so as to further our understanding of the behaviors and
underlying mechanisms that allow animals and, potentially, robots to adapt
and survive in uncertain environments. The conference will focus
particularly on well-defined models, computer simulations, and built robots
in order to help characterize and compare various organizational principles
or architectures capable of inducing adaptive behavior in real or
artificial animals.

Technical Programme
===================
The full technical programme is given below. There will be a single track
of oral presentations, with poster sessions separately timetabled. There
will also be computer, video and robotic demonstrations.
Major topics covered will include:

  Individual and collective behavior
  Autonomous robots
  Neural correlates of behavior
  Hierarchical and parallel organizations
  Perception and motor control
  Emergent structures and behaviors
  Motivation and emotion
  Problem solving and planning
  Action selection and behavioral sequences
  Goal directed behavior
  Neural networks and evolutionary computation
  Ontogeny, learning and evolution
  Internal world models
  Characterization of environments and cognitive processes
  Applied adaptive behavior

Invited speakers
================
Prof. Michael Arbib, University of Southern California,
  "Rats Running and Humans Reaching: The Brain's Multiple Styles of Learning"
Prof. Rodney Brooks, MIT,
  "Coherent Behavior from Many Adaptive Processes"
Prof. Herbert Roitblat, University of Hawaii,
  "Mechanisms and Process in Animal Behaviour: Models of Animals,
  Animals as Models"
Prof. John Maynard Smith, University of Sussex,
  "The Evolution of Animal Signals"
Prof. Jean-Jacques Slotine, MIT,
  "Stability in Adaptation and Learning"

Proceedings
===========
The conference proceedings will be published by MIT Press/Bradford Books
and will be available at the conference.

Official Language: English
==========================

Demonstrations
==============
Computer, video and robotic demonstrations are invited. They should be of
work relevant to the conference. If you wish to offer a demonstration,
please send a letter with your registration form briefly describing your
contribution and indicating space and equipment requirements.

Registration
============
Registration details are given after the technical program. Full conference
details will be sent on registration.
CONFERENCE PROGRAM
------------------

Sunday 7th August
-----------------
Old Ship Hotel
7.00pm: Welcoming Reception and Registration

***All Conference Sessions in Brighton Conference Centre East Wing

Monday 8th August
-----------------
 9:00 Coffee and Late Registration
10:30 Conference opening
11:00 From SAB90 to SAB94: Four Years of Animat Research
      Jean-Arcady Meyer and Agnes Guillot, ENS, Paris
11:30 Invited Lecture: Mechanism and Process in Animal Behavior:
      Models of Animals, Animals as Models
      Herbert L. Roitblat, University of Hawaii
12:30 Lunch
14:00 Modeling the Role of Cerebellum in Prism Adaptation
      Michael A. Arbib, Nicolas Schweighofer, U. Southern California,
      and W. T. Thach, Washington University
14:30 Robotic Experiments in Cricket Phonotaxis
      Barbara Webb, University of Edinburgh
15:00 How to Watch Your Step: Biological Evidence and an Initial Model
      Patrick R. Green, University of Nottingham
15:30 On Why Better Robots Make It Harder
      Tim Smithers, Euskal Herriko Unibersitatea
16:00 Coffee
16:30 What is Cognitive and What is *Not* Cognitive?
      Frederick Toates, Open University
17:00 Action-Selection in Hamsterdam: Lessons from Ethology
      Bruce Blumberg, MIT
17:30 Behavioral Dynamics of Escape and Avoidance: A Neural Network Approach
      Nestor A. Schmajuk, Duke University
18:00 End.

Tuesday 9th August
------------------
09:00 Invited Lecture: The Evolution of Animal Signals
      John Maynard Smith, University of Sussex
10:00 Coffee
10:30 An Hierarchical Classifier System Implementing a Motivationally
      Autonomous Animat
      Jean-Yves Donnart and Jean-Arcady Meyer, ENS, Paris
11:00 Spatial Learning and Representation in Animats
      Tony J. Prescott, University of Sheffield
11:30 Location Recognition in Rats and Robots
      William D. Smart and John Hallam, University of Edinburgh
12:00 Emergent Functionality in Human Infants
      Julie C. Rutkowska, University of Sussex
12:30 Lunch
14:00 -----------POSTER AND DEMONSTRATION SESSION------------
      **Posters listed at the end of this schedule
      **Full demonstrations timetable available later
16:00 Coffee
16:30 Posters and Demos continue
20:00 End.

Wednesday 10th August
---------------------
09:00 Invited Lecture: Stability in Adaptation and Learning
      Jean-Jacques Slotine, MIT
10:00 Coffee
10:30 Connectionist Environment Modelling in a Real Robot
      William Chesters and G. M. Hayes, University of Edinburgh
11:00 A Hybrid Architecture for Learning Continuous Environmental Models
      in Maze Problems
      A. G. Pipe, T. C. Fogarty, and A. Winfield, University West of England
11:30 The Blind Breeding the Blind: Adaptive Behavior without Looking
      Peter M. Todd, Stewart W. Wilson, Rowland Institute,
      Anil B. Somayaji, and Holly Yanco, MIT
12:00 Memoryless Policies: Theoretical Limitations and Practical Results
      Michael L. Littman, Brown University
12:30 End.

Thursday 11th August
--------------------
09:00 Invited Lecture: Rats Running and Humans Reaching: The Brain's
      Multiple Styles of Learning
      Michael Arbib, University of Southern California
10:00 Coffee
10:30 A Comparison of Q-Learning and Classifier Systems
      Marco Dorigo and Hugues Bersini, Universite Libre de Bruxelles
11:00 Paying Attention to What's Important: Using Focus of Attention to
      Improve Unsupervised Learning
      Leonard N. Foner and Pattie Maes, MIT
11:30 Learning Efficient Reactive Behavioral Sequences from Basic Reflexes
      in a Goal-Directed Autonomous Robot
      Jos'e del R. Mill'an, European Commission Research Centre
12:00 A Topological Neural Map for On-Line Learning: Emergence of Obstacle
      Avoidance in a Mobile Robot
      Philippe Gaussier and Stephane Zrehen, EPFL
12:30 Lunch
14:00 A Distributed Adaptive Control System for a Quadruped Mobile Robot
      Bruce L. Digney and M. M. Gupta, University of Saskatchewan
14:30 Reinforcement Tuning of Action Synthesis and Selection in a
      'Virtual Frog'
      Simon Giszter, MIT
15:00 Achieving Rapid Adaptations in Robots by Means of External Tuition
      Ulrich Nehmzow and Brendan McGonigle, University of Edinburgh
15:30 Two-link Robot Brachiation with Connectionist Q-Learning
      Fuminori Saito and Toshio Fukada, Nagoya University
16:00 Coffee
16:30 Integrating Reactive, Sequential, and Learning Behavior Using
      Dynamical Neural Networks
      Brian Yamauchi and Randall Beer, Case Western Reserve University
17:00 Seeing The Light: Artificial Evolution, Real Vision
      Inman Harvey, Phil Husbands, and Dave Cliff, University of Sussex
17:30 End.

Friday 12th August
------------------
09:00 Invited Lecture: Coherent Behavior from Many Adaptive Processes
      Rodney A. Brooks, MIT
10:00 Coffee
10:30 Evolution of Corridor Following Behavior in a Noisy World
      Craig W. Reynolds, Electronic Arts
11:00 Protean Behavior in Dynamic Games: Arguments for the Co-Evolution
      of Pursuit-Evasion Tactics
      Geoffrey F. Miller and Dave Cliff, University of Sussex
11:30 Towards Robot Cooperation
      David McFarland, University of Oxford
12:00 A Case Study in the Behavior-Oriented Design of Autonomous Agents
      Luc Steels, University of Brussels
12:30 Lunch
14:00 Learning to Behave Socially
      Maja J. Mataric, MIT
14:30 Signalling and Territorial Aggression: An Investigation by Means of
      Synthetic Behavioral Ecology
      Peter de Bourcier and Michael Wheeler, University of Sussex
15:00 Panel Session
16:00 Coffee
16:30 SAB96 -- Discussion of SAB94, Planning of SAB96.
17:30 End.

----POSTERS-----to be presented on the afternoon of Tuesday 9th August.
----------------
Authors Note: Display space = 100cm * 150cm per poster

Insect Vision and Olfaction: Different Neural Architectures for Different
  Kinds of Sensory Signal?
  D. Osorio, University of Sussex, Wayne M. Getz, UC Berkeley,
  and Jurgen Rybak, FU-Berlin
The Interval Reduction Strategy for Monitoring Cupcake Problems
  Paul R. Cohen, Marc S. Atkin, and Eric A. Hansen,
  University of Massachusetts
Visual Control of Altitude and Speed in a Flying Agent
  Fabrizio Mura and Nicolas Franceschini, CNRS, Marseille
Organizing an Animat's Behavioural Repertoires Using Kohonen Feature Maps
  Nigel Ball, University of Cambridge
Action Selection for Robots in Dynamic Environments through
  Inter-Behaviour Bidding
  Michael Sahota, University of British Columbia
Using Second Order Neural Connections for Motivation of Behavioral Choices
  Gregory M. Werner, UCLA
A Place Navigation Algorithm Based on Elementary Computing Procedures and
  Associative Memories
  Simon Benhamou, CNRS Marseille, Pierre Bouvet, University of Geneva,
  and Bruno Poucet, CNRS Marseille
Self-Organizing Topographic Maps and Motor Planning
  Pietro Morasso and Vittorio Sanguineti, University of Genova
The Effect of Memory Length on the Foraging Behavior of a Lizard
  Sharoni Shafir and Jonathan Roughgarden, Stanford University
An Architecture for Learning to Behave
  Ashley M. Aitken, University of New South Wales
Reinforcement Learning for Homeostatic Endogenous Variables
  Hugues Bersini, Universite Libre de Bruxelles
An Architecture for Representing and Learning Behaviors by Trial and Error
  Pascal Blanchet, CRIN-CNRS/INRIA Lorraine
The Importance of Leaky Levels for Behavior-Based AI
  Gregory M. Saunders, John F. Kolen, and Jordan B. Pollack,
  Ohio State University
Reinforcement Learning with Dynamic Covering of State-Action Space:
  Partitioning Q-Learning
  Re'mi Munos and Jocelyn Patinel, CEMAGREF
The Five Neuron Trick: Using Classical Conditioning to Learn How to
  Seek Light
  Tom Scutt, University of Nottingham
Adaptation in Dynamic Environments Through a Minimal Probability of
  Exploration
  Gilles Venturini, Universite de Paris-Sud
Automatic Creation of An Autonomous Agent: Genetic Evolution of a
  Neural-Network Driven Robot
  Dario Floreano, University of Trieste, and Francesco Mondada, EPFL
The Effect of Parasitism on the Evolution of a Communication Protocol in
  an Artificial Life Simulation
  Phil Robbins, University of Greenwich
Integration of Reactive and Telerobotic Control in Multi-Agent Robotic
  Systems
  Ronald C. Arkin and Khaled S. Ali, Georgia Institute of Technology
MINIMEME: Of Life and Death in the Noosphere
  Stephane Bura, Universite Paris VI
Learning Coordinated Motions in a Competition for Food Between Ant Colonies
  Masao Kubo and Yukinori Kakazu, Hokkaido University
Emergent Colonization and Graph Partitioning
  Pascale Kuntz and Dominique Snyers, Telecom Bretagne
Diversity and Adaptation in Populations of Clustering Ants
  Erik D. Lumer, Universite Libre de Bruxelles, and Baldo Faieta,
  Zetes Electronics

---------------------------------------------------------------------------

Conference Committee
====================

Conference Chairs:

Philip HUSBANDS              Jean-Arcady MEYER           Stewart WILSON
School of Cognitive          Groupe de Bioinformatique   The Rowland Institute
and Comp. Sciences           Ecole Normale Superieure    for Science
University of Sussex         46 rue d'Ulm                100 Edwin H. Land Blvd.
Brighton BN1 9QH, UK         75230 Paris Cedex 05        Cambridge, MA 02142, USA
philh at cogs.susx.ac.uk    meyer at wotan.ens.fr      wilson at smith.rowland.org

Program Chair:

David CLIFF
School of Cognitive and Computing Sciences
University of Sussex
Brighton BN1 9QH, UK
davec at cogs.susx.ac.uk

Financial Chair: P. Husbands, H. Roitblat
Local Arrangements: I. Harvey, P. Husbands

Program Committee
=================
M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France;
L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; T. Collett, UK;
H. Cruse, Germany; J. Delius, Germany; J. Ferber, France;
N. Franceschini, France; S. Goss, Belgium; J. Halperin, Canada;
I. Harvey, UK; I. Horswill, USA; A. Houston, UK; L. Kaelbling, USA;
H. Klopf, USA; L-J. Lin, USA; P. Maes, USA; M. Mataric, USA;
D. McFarland, UK; G. Miller, UK; R. Pfeifer, Switzerland;
H. Roitblat, USA; J. Slotine, USA; O. Sporns, USA; J. Staddon, USA;
F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; D. Waltz, USA;
R. Williams, USA

Local Arrangements
==================
For general enquiries contact:

SAB94 Administration
COGS
University of Sussex
Falmer, Brighton, BN1 9QH
UK
Tel:   +44 (0)273 678448
Fax:   +44 (0)273 671320
Email: sab94 at cogs.susx.ac.uk

ftp
===
The SAB94 archive can be accessed by anonymous ftp.

% ftp ftp.cogs.susx.ac.uk
login: anonymous
password:
ftp> cd pub/sab94
ftp> mget *
ftp> quit

* Files available at present are: README, announcement, reg_document,
hotel_booking_form, program

Sponsors
========
Sponsors include:

British Telecom
University of Sussex
Applied AI Systems Inc
Uchidate Co., Ltd.
Mitsubishi Corporation
Brighton Council
The Renaissance Trust

Financial Support
=================
Limited financial support may be available to graduate students and young
researchers in the field. Applicants should submit a letter describing
their research, the year they expect to receive their degree, a letter of
recommendation from their supervisor, and confirmation that they have no
other sources of funds available. The number and size of awards will
depend on the amount of money available.

Venue
=====
The conference will be held at the Brighton Centre, the largest conference
venue in the town, situated on the seafront in Brighton's town centre and
adjacent to the 'Lanes' district.
Brighton is a thriving seaside resort, with many local attractions,
situated on the south coast of England. It is just a 50-minute train
journey from London, and 30 minutes from London Gatwick airport -- when
making travel arrangements we advise, where possible, using London Gatwick
in preference to London Heathrow.

Social Activities
=================
A welcome reception will be held on Sunday 7th August. The conference
banquet will take place on Thursday 11th August. There will also be
opportunities for sightseeing, wine cellar tours and a visit to Brighton's
Royal Pavilion.

Accommodation
=============
We have organised preferential rates for SAB94 delegates at several good
quality hotels along the seafront. All hotels are within easy walking
distance of the Brighton Centre. Costs vary from 29 pounds to 70 pounds
inclusive per night for bed and breakfast. An accommodation booking form
will be sent out to you on request, or can be obtained by ftp
(instructions above). Details of cheaper budget accommodation can be
obtained from the Brighton Accommodation Marketing Bureau
(Tel: +44 273 327560, Fax: +44 273 777409).

Insurance
=========
The SAB94 organisers and sponsors cannot accept liability for personal
injuries, or for loss or damage to property belonging to conference
participants or their guests. It is recommended that attendees take out
personal travel insurance.

Registration Fees
=================
Registration includes: the conference proceedings; technical program;
lunch each day (except Wednesday, when there will be no afternoon
sessions); welcome reception; free entry to Brighton's Royal Pavilion;
complimentary temporary membership of the Arts Club of Brighton.

-----------------------------------------------------------------------------
REGISTRATION FORM

3rd International Conference on the Simulation of Adaptive Behaviour (SAB94)
8-12 August 1994, Brighton Centre, Brighton, UK

Please complete the form below and send to the conference office with full
payment.
Name: ______________________________________________________________

Address: ___________________________________________________________

____________________________________________________________________

____________________________________________________________________

Country: ___________________________________________________________

Postal Code or Zip Code: ___________________________________________

Email: _____________________________________________________________

Telephone: ____________________________ Fax: _______________________

Professional Affiliation: __________________________________________

Name(s) of accompanying person(s):

1. _________________________________________________________________

2. _________________________________________________________________

Dietary needs: _____________________________________________________

Any other special needs: ___________________________________________

PAYMENTS
========
All payments must be made in pounds sterling.

Delegates:
==========
Tick if you will be attending the welcome reception on Sunday 7 August _____

Tick appropriate boxes.

                             Individual        Student
Early (before 15 May 1994)   200 pounds ( )    100 pounds ( )
Late (after 15 May 1994)     230 pounds ( )    115 pounds ( )
On site                      260 pounds ( )    130 pounds ( )
Banquet                       18 pounds ( )     18 pounds ( )

STUDENTS MUST SUBMIT PROOF OF THEIR STATUS ALONG WITH THEIR REGISTRATION FEE.

Accompanying persons:
=====================
Welcoming reception   10 pounds
Banquet               28 pounds

TOTAL PAYMENT

___________ Registration
___________ Banquet (delegate rate) (Please tick if vegetarian _____)
___________ Banquet (guest rate)    (Please tick if vegetarian _____)
___________ Reception (guests only)
___________ Donation to support student scholarship fund

METHOD OF PAYMENT
=================
Please make payable to "SAB94", pounds sterling only.
_____ Bank Draft or International Money Order: __________________ pounds
_____ Cheque: (drawn on a UK bank or Euro Cheque) __________________ pounds

Send to: SAB Administration, COGS, University of Sussex, Falmer, Brighton, BN1 9QH, UK

CANCELLATIONS
=============

The SAB Administration should be notified in writing of all cancellations. Cancellations received before 10 July will incur a 20% administration charge. We cannot accept any cancellations after that date.

---------------------------------------------------------------------------------------

From David_Redish at GS17.SP.CS.CMU.EDU Thu May 19 16:21:45 1994
From: David_Redish at GS17.SP.CS.CMU.EDU (David Redish)
Date: Thu, 19 May 94 16:21:45 -0400
Subject: Papers available by anonymous ftp
Message-ID:

A selection of papers by Redish, Wan, and Touretzky is now available by anonymous FTP. The three most recent titles are:

1. The Reaching Task: Evidence for Vector Arithmetic in the Motor System?
2. Computing Goal Locations from Place Codes
3. Neural Representation of Space in Rats and Robots

Other titles are also available. All of the papers are compressed using GZIP.

Access information:
host: b.gp.cs.cmu.edu (128.2.242.8)
directory: /afs/cs/user/dredish/pub

They are also available via Mosaic from
http://www.cs.cmu.edu:8001/afs/cs/user/dredish/Web/bibliography.url

============================================================
file: biocyb.ps.gz

The Reaching Task: Evidence for Vector Arithmetic in the Motor System?

A. David Redish and David S. Touretzky
To appear in Biological Cybernetics

During a reaching task, the population vector is an encoding of direction based on cells with cosine response functions. Scaling the response by a magnitude factor produces a vector encoding, enabling vector arithmetic to be performed by summation of firing rates.
We show that the response properties of selected populations of cells in MI and area 5 can be explained in terms of arithmetic relationships among load, goal, and motor command vectors. Our computer simulations show good agreement with single cell recording data.

============================================================
file: cogsci94.ps.gz

Computing Goal Locations from Place Codes

Hank S. Wan, David S. Touretzky, and A. David Redish
To appear in: Proceedings of the 16th Annual Conference of the Cognitive Science Society. Lawrence Erlbaum Associates.

A model based on coupled mechanisms for place recognition, path integration, and maintenance of head direction in rodents replicates a variety of neurophysiological and behavioral data. Here we consider a task described in \cite{collett86} in which gerbils were trained to find food equidistant from three identical landmarks arranged in an equilateral triangle. In probe trials with various manipulations of the landmark array, the model produces behaviors similar to those of the animals. We discuss computer simulations and an implementation of portions of the model on a mobile robot.

============================================================
file: wcci94.ps.gz

Neural Representation of Space in Rats and Robots

David S. Touretzky, Hank S. Wan, and A. David Redish
To appear in: J.M. Zurada and R. Marks, eds., Computational Intelligence: Imitating Life. IEEE Press, 1994.

We describe a computer model that reproduces many observed features of rat navigation behavior, including response properties of place cells and head direction cells. We discuss issues that arise when implementing models of this sort on a mobile robot.

From janetw at cs.uq.oz.au Fri May 20 16:42:23 1994
From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au)
Date: Fri, 20 May 94 15:42:23 EST
Subject: Tech report available - connectionist models and psychology
Message-ID: <9405200542.AA01283@client>

Please do not forward to other mailing lists.
The following technical report is available. Please send requests to janetw at cs.uq.oz.au. Paper copy only.

COLLECTED PAPERS FROM A SYMPOSIUM ON CONNECTIONIST MODELS AND PSYCHOLOGY
(Eds) Janet Wiles, Cyril Latimer and Catherine Stevens.

Technical Report No. 289
Department of Computer Science, University of Queensland, QLD 4072 Australia
February 1994
118 pages

--------------------------------------------------------------------------

Contents

Preface: Danny Latimer, Catherine Stevens, and Janet Wiles

SESSION 1. THE RATIONALE FOR PSYCHOLOGISTS USING MODELS
Introduction: Peter Slezak
Target paper: Danny Latimer. Computer Modeling of Cognitive Processes
Commentaries:
Max Coltheart. Connectionist Modelling and Cognitive Psychology
Sally Andrews. What Connectionist Models Can (and Cannot) Tell Us
George Oliphant. Connectionism, Psychology and Science

SESSION 2. CORRESPONDENCE BETWEEN HUMAN AND NETWORK PERFORMANCE
Introduction: Danny Latimer
Papers:
Kate Stevens. The In(put)s and Out(put)s of Comparing Human and Network Performance: Some Ideas on Representations, Activations and Weights
Graeme Halford. How Far Do Neural Network Models Account for Human Reasoning?
Simon Dennis. The Correspondence Between Psychological and Network Variables In Connectionist Models of Human Memory

SESSION 3. BASIC COMPUTATIONAL PROCESSES
Introduction: Steven Schwartz
Target paper: Janet Wiles. The Connectionist Modeler's Toolkit: A review of some basic processes over distributed memories
Commentaries:
Mike Johnson. On the search for metaphors
Zoltan Schreter. Distributed and Localist Representation in the Brain and in Connectionist Models
Discussion commentaries: Paul Bakker, Richard A. Heath, Andrew Heathcote, Steven Phillips, J. P.
Sutcliffe, Ellen Watson From webber at signal.dra.hmg.gb Fri May 20 04:04:21 1994 From: webber at signal.dra.hmg.gb (Chris Webber) Date: Fri, 20 May 94 09:04:21 +0100 Subject: NeuroProse preprint announcement Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/webber.self-org.ps.Z The file "webber.self-org.ps.Z" is available for copying from the Neuroprose preprint archive. 26 pages, 1946396 bytes compressed, 4117115 uncompressed. Preprint of article submitted to "Network" journal: -------------------------------------------------------- "Self-organisation of transformation-invariant detectors for constituents of perceptual patterns" Chris J S Webber Cambridge University, (Now at) UK Defence Research Agency A geometrical interpretation of the elementary constituents which make up perceptual patterns is proposed: if a number of different pattern- vectors lie approximately within the same plane in the pattern-vector space, those patterns can be interpreted as sharing a common constituent. Individual constituents are associated with individual planes of patterns: a pattern lying within an intersection of several such planes corresponds to a combination of several constituents. This interpretation can model patterns as hierarchical combinations of constituents that are themselves combinations of yet more elementary constituents. A neuron can develop transformation-invariances in its recognition-response by aligning its synaptic vector with one of the plane-normals: a pattern-vector's projection along the synaptic vector is then an invariant of all the patterns on the plane. In this way, discriminating detectors for individual constituents can self-organise through Hebbian adaptation. Transformation-invariances that can self-organise in multiple-level vision systems include shape-tolerance and local position-tolerance. These principles are illustrated with demonstrations of transformation-tolerant face-recognition. 
--------------------------------------------------------

From cairo at csc.umist.ac.uk Sat May 21 20:51:00 1994
From: cairo at csc.umist.ac.uk (Cairo L Nascimento Jr)
Date: Sat, 21 May 94 20:51:00 BST
Subject: Thesis available by anonymous ftp
Message-ID: <9404.9405211951@isabel.csc.umist.ac.uk>

From iris at halo.tau.ac.il Sun May 22 07:48:01 1994
From: iris at halo.tau.ac.il (Iris Ginzburg)
Date: Sun, 22 May 1994 14:48:01 +0300 (IDT)
Subject: Paper available by ftp
Message-ID:

**************************************************************
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/ginzburg.correlations.ps.Z

The following paper is available by anonymous ftp. 42 printed pages.

THEORY OF CORRELATIONS IN STOCHASTIC NEURAL NETWORKS

Iris Ginzburg
School of Physics and Astronomy
Tel-Aviv University, Tel-Aviv 69978, Israel

and

Haim Sompolinsky
Racah Institute of Physics and Center for Neural Computation
Hebrew University, Jerusalem 91904, Israel
and AT&T Bell Laboratories, Murray Hill, NJ 07974, USA

Submitted to Physical Review E, March 1994

ABSTRACT: One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however, the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus, a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop the theory of neuronal correlation functions in large networks comprising several highly connected subpopulations that obey stochastic dynamic rules.
When the networks are in asynchronous states, the cross-correlations are relatively weak, i.e., their amplitude relative to that of the auto-correlations is of order 1/N, N being the size of the interacting populations. Using the weakness of the cross-correlations, general equations are presented which express the matrix of cross-correlations in terms of the mean neuronal activities and the effective interaction matrix. The effective interactions are the synaptic efficacies multiplied by the gain of the postsynaptic neurons. The time-delayed cross-correlations can be expressed as a sum of exponentially decaying modes that correspond to the eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for the comparison between the contribution from the internal common input and that from the direct interactions to the correlations of monosynaptically coupled pairs. A closely related quantity is the linear response of the neurons to external time-dependent perturbations. We derive the form of the dynamic linear response function of neurons in the above architecture, in terms of the eigenmodes of the effective interaction matrix. The behavior of the correlations and the linear response when the system is near a bifurcation point is analyzed. Near a saddle-node bifurcation the correlation matrix is dominated by a single slowly decaying critical mode. Near a Hopf bifurcation the correlations exhibit weakly damped sinusoidal oscillations. The general theory is applied to the case of a randomly dilute network consisting of excitatory and inhibitory subpopulations, using parameters that mimic the local circuit of 1 cubic mm of rat neocortex. Both the effect of dilution and the influence of a nearby bifurcation to an oscillatory state are demonstrated.
To retrieve the compressed postscript file, do the following:

unix> ftp archive.cis.ohio-state.edu
ftp> login: anonymous
ftp> password: [your_full_email_address]
ftp> cd pub/neuroprose
ftp> binary
ftp> get ginzburg.correlations.ps.Z
ftp> bye
unix> uncompress ginzburg.correlations.ps.Z
unix> lpr -s ginzburg.correlations.ps
(or however you print postscript)

NOTE the -s flag in lpr. Since the file is rather large, some printers may truncate the file unless this flag is specified.

From massone at mimosa.eecs.nwu.edu Mon May 23 10:43:55 1994
From: massone at mimosa.eecs.nwu.edu (Lina Massone)
Date: Mon, 23 May 94 09:43:55 CDT
Subject: paper available
Message-ID: <9405231443.AA11110@mimosa.eecs.nwu.edu>

Preprints of the following paper are available upon request:

A Neural-Network System for Control of Eye Movements: Basic Mechanisms
Lina L.E. Massone
(to appear in Biological Cybernetics)

Abstract: This paper presents a neural-network-based system that can generate and control movements of the eyes. It is inspired by a number of experimental observations on the saccadic and gaze systems of monkeys and cats. Because of the generality of the approach undertaken, the system can be regarded as a demonstration of how parallel distributed processing principles, namely learning and attractor dynamics, can be integrated with experimental findings, as well as a biologically-inspired controller for a dexterous robotic orientation device. The system is composed of three parts: a dynamic motor map, a push-pull circuitry, and a plant. The dynamics of the motor map is generated by a multi-layer network that was trained to compute a bidimensional temporal-spatial transformation. Simulation results indicate (i) that the system is able to reproduce some of the properties observed in the biological system at the neural and movement levels, and (ii) that the dynamics of the motor map remains stereotyped even when the motor map is subject to abnormal stimulation patterns.
The latter result emphasizes the role of the topographic projection that connects the motor map to the push-pull circuitry in determining the features of the resulting movements.

Please email requests to: linda at eecs.nwu.edu (not to me!)

From seung at physics.att.com Mon May 23 11:47:05 1994
From: seung at physics.att.com (seung@physics.att.com)
Date: Mon, 23 May 94 11:47:05 EDT
Subject: preprint--rigorous learning curve bounds from statistical mechanics
Message-ID: <9405231547.AA05916@physics.att.com>

The following preprint is now available:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/seung.rigorous.ps.Z

Authors: D. Haussler, M. Kearns, H. S. Seung, N. Tishby
Title: Rigorous learning curve bounds from statistical mechanics
Size: 20 pages

Abstract: In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory.
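Several of the announcements above distribute papers as compressed PostScript (.ps.Z or .ps.gz) to be fetched by anonymous ftp in binary mode and then decompressed before printing. A minimal sketch of the decompression step, using a locally created stand-in file rather than a real download (the filename is illustrative only; for .Z files, "uncompress" or "gzip -d" plays the same role):

```shell
# Stand in for a downloaded paper: a tiny gzip-compressed PostScript file.
printf '%%!PS-Adobe-2.0\n' > paper.ps
gzip paper.ps             # yields paper.ps.gz, as fetched from an archive
gzip -d paper.ps.gz       # decompress; restores paper.ps, ready for printing
head -1 paper.ps          # the PostScript header confirms a clean round trip
```

After decompressing, "lpr -s paper.ps" (note the -s flag, which several posters recommend for large files) sends the file to a PostScript printer.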
From hilario at cui.unige.ch Mon May 23 08:46:57 1994
From: hilario at cui.unige.ch (Hilario Melanie)
Date: Mon, 23 May 1994 14:46:57 +0200
Subject: Please broadcast via connectionist-ml
Message-ID: <620*/S=hilario/OU=cui/O=unige/PRMD=switch/ADMD=arcom/C=ch/@MHS>

WORKSHOP PROGRAMME & CALL FOR PARTICIPATION

ECAI'94 Workshop on Combining Symbolic and Connectionist Processing
August 9, 1994 - Amsterdam, The Netherlands

Until a few years ago, the history of AI was marked by two parallel, often antagonistic streams of development -- classical or symbolic AI and connectionist processing. A recent research trend, premised on the complementarity of these two paradigms, strives to build hybrid systems which combine the advantages of both to overcome the limitations of each. For instance, attempts have been made to accomplish complex tasks by blending neural networks with rule-based or case-based reasoning. This workshop will be the first Europe-wide effort to bring together researchers active in the area, with a view to laying the groundwork for a theory and methodology of symbolic/connectionist integration (SCI).

Workshop Programme

HYBRID EXPERT SYSTEM SHELLS
9:00 - 9:30 A Study of the Hybrid System SYNHESYS
B. Orsier, B. Amy, V. Rialle, A. Giacometti, LIFIA-IMAG & ENST (France)
9:30 - 10:00 Cognitive and Computational Foundations for Symbolic-Connectionist Integration
R. Khosla, T. Dillon, La Trobe University, Melbourne (Australia)

MULTISTRATEGY LEARNING
10:00 - 10:30 Integration of Symbolic and Connectionist Learning to Ease Robot Programming and Control
M. Kaiser, J. Kreuziger, University of Karlsruhe (Germany)
10:30 - 11:00 A Hybrid Model of Psychological Experiments on Scientific Discovery
E. Hoenkamp, R.A. Sumida, University of Nijmegen (The Netherlands)
11:00 - 11:15 BREAK

THEORETICAL FOUNDATIONS
11:15 - 11:45 Tracking the Neuro-Symbolic Continuum: Learning by Explicitation
C. Thornton, University of Sussex (United Kingdom)
11:45 - 12:15 Symbol Grounding Revisited
E.
Prem, Austrian Institute for AI (Austria)
12:15 - 12:45 How Hybrid Should a Hybrid Model Be?
R. Cooper, B. Franks, University College & London School of Economics (United Kingdom)
12:45 - 14:15 LUNCH

LOGIC AND INFERENCING
14:15 - 14:45 Towards a New Massively Parallel Computational Model for Logic Programming
S. Hoelldobler, Y. Kalinke, University of Dresden (Germany)
14:45 - 15:15 Scheduling of Modular Architectures for Inductive Inference of Regular Grammars
M. Gori, M. Maggini, G. Soda, University of Florence (Italy)
15:15 - 15:45 A Connectionist Control Component for the Theorem Prover SETHEO
C. Goller, Technical University of Munich (Germany)
15:45 - 16:00 BREAK

NATURAL LANGUAGE PROCESSING
16:00 - 16:30 Metaphor and Memory: Symbolic and Connectionist Issues in Metaphor Comprehension
T. Veale, M. Keane, Trinity College (Eire)
16:30 - 17:00 Parsing Spontaneous Speech: A Hybrid Approach
T.S. Polzin, Carnegie Mellon University (USA)
17:00 - 17:30 A Symbolic-Connectionist Hybrid Abstract Generation System
M. Aretoulaki, J. Tsujii, UMIST, Manchester (United Kingdom)
17:30 - 17:45 BREAK

VISUAL PATTERN RECOGNITION
17:45 - 18:15 A Hybrid Model for Visual Perception Based on Dynamic Conceptual Space
A. Chella, M. Frixione, S. Gaglio, University of Palermo & IIASS-Salerno (Italy)
18:15 - 18:45 Hybrid Trees for Supervised Learning of Decision Rules
F. d'Alche-Buc, J.-P. Nadal, D. Zwierski, Laboratoires d'Electronique Philips (France)

Those who wish to attend the workshop should send a request describing their research interests and/or previous work in the field of SCI (maximum 1 page). Since attendance will be limited to ensure effective interaction, requests will be considered until the maximum number of participants is attained. Please note that all workshop participants are required to register for the main conference.

PROGRAM COMMITTEE
Bernard Amy (LIFIA-IMAG, Grenoble, France)
Patrick Gallinari (LAFORIA, University of Paris 6, France)
Franz Kurfess (Dept.
Neural Information Processing, University of Ulm, Germany)
Christian Pellegrini (CUI, University of Geneva, Switzerland)
Noel Sharkey (DCS, University of Sheffield, UK)
Alessandro Sperduti (CSD, University of Pisa, Italy)

CONTACT PERSON
Melanie Hilario
CUI - University of Geneva
24 rue General Dufour
CH-1211 Geneva 4
Voice: +41 22/705 7791
Fax: +41 22/320 2927
Email: hilario at cui.unige.ch

From koza at CS.Stanford.EDU Tue May 24 14:22:09 1994
From: koza at CS.Stanford.EDU (John Koza)
Date: Tue, 24 May 94 11:22:09 PDT
Subject: New Book and Videotape on Genetic Programming
Message-ID:

Genetic Programming II and the associated videotape are now available from the MIT Press.

GENETIC PROGRAMMING II: AUTOMATIC DISCOVERY OF REUSABLE PROGRAMS
by John R. Koza
Computer Science Department
Stanford University

It is often argued that the process of solving complex problems can be automated by first decomposing the problem into subproblems, then solving the presumably simpler subproblems, and then assembling the solutions to the subproblems into an overall solution to the original problem. The overall effort required to solve a problem can potentially be reduced to the extent that the decomposition process uncovers subproblems that are disproportionately easy to solve and to the extent that regularities in the problem environment permit multiple use of the solutions to the subproblems. Sadly, conventional techniques of machine learning and artificial intelligence provide no effective means for automatically executing this alluring three-step problem-solving process on a computer. GENETIC PROGRAMMING II describes a way to automatically implement this three-step problem-solving process by means of the recently developed technique of automatically defined functions in the context of genetic programming. Automatically defined functions enable genetic programming to define useful and reusable subroutines dynamically during a run.
This new technique is illustrated by solving, or approximately solving, example problems from the fields of Boolean function learning, symbolic regression, control, pattern recognition, robotics, classification, and molecular biology. In each example, the problem is automatically decomposed into subproblems; the subproblems are automatically solved; and the solutions to the subproblems are automatically assembled into a solution to the original problem. Leverage accrues because genetic programming with automatically defined functions repeatedly uses the solutions to the subproblems in the assembly of the solution to the overall problem. Moreover, genetic programming with automatically defined functions produces solutions that are simpler and smaller than the solutions obtained without automatically defined functions.

CONTENTS...
1. Introduction
2. Background on Genetic Algorithms, LISP, and Genetic Programming
3. Hierarchical Problem-Solving
4. Introduction to Automatically Defined Functions -- The Two-Boxes Problem
5. Problems that Straddle the Breakeven Point for Computational Effort
6. Boolean Parity Functions
7. Determining the Architecture of the Program
8. The Lawnmower Problem
9. The Bumblebee Problem
10. The Increasing Benefits of ADFs as Problems are Scaled Up
11. Finding an Impulse Response Function
12. Artificial Ant on the San Mateo Trail
13. Obstacle-Avoiding Robot
14. The Minesweeper Problem
15. Automatic Discovery of Detectors for Letter Recognition
16. Flushes and Four-of-a-Kinds in a Pinochle Deck
17. Introduction to Molecular Biology
18. Prediction of Transmembrane Domains in Proteins
19. Prediction of Omega Loops in Proteins
20. Lookahead Version of the Transmembrane Problem
21. Evolution of the Architecture of the Overall Program
22. Evolution of Primitive Functions
23. Evolutionary Selection of Terminals
24. Evolution of Closure
25. Simultaneous Evolution of Architecture, Primitive Functions, Terminals, Sufficiency, and Closure
26.
The Role of Representation and the Lens Effect
27. Conclusion
Appendix A: List of Special Symbols
Appendix B: List of Special Functions
Bibliography
Appendix C: List of Type Fonts
Appendix D: Default Parameters for Controlling Runs of Genetic Programming
Appendix E: Computer Implementation of Automatically Defined Functions
Appendix F: Annotated Bibliography of Genetic Programming
Appendix G: Electronic Newsletter, Public Repository, and FTP Site

Hardcover. 746 pages. ISBN 0-262-11189-6.

-----------------------------------------------------------------------

Genetic Programming II Videotape: The Next Generation
by John R. Koza

This videotape provides an explanation of automatically defined functions, the hierarchical approach to problem solving by means of genetic programming with automatically defined functions, and a visualization of computer runs for many of the problems discussed in Genetic Programming II. These problems include symbolic regression, the parity problem, the lawnmower problem, the bumblebee problem, the artificial ant, the impulse response problem, the minesweeper problem, the letter recognition problem, the transmembrane problem, and the omega loop problem.

VHS videotape. 62 minutes. Available in VHS NTSC, PAL, and SECAM formats. NTSC ISBN 0-262-61099-X. PAL ISBN 0-262-61100-7. SECAM ISBN 0-262-61101-5.

-----------------------------------------------------------------------

The following order form can be used to order copies of Genetic Programming I or II, videotapes I or II, and Kinnear's recent book.

Order Form

Send to: The MIT Press, 55 Hayward Street, Cambridge, MA 02142 USA

You may order by phone 1-800-356-0343 (toll-free); or by phone to 617-625-8569; or by Fax to 617-625-6660; or by e-mail to mitpress-orders at mit.edu

Please send the following:

___copies of book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John R.
Koza (KOZGII) @$55.00
___copies of book Genetic Programming II: Automatic Discovery of Reusable Programs by John R. Koza (KOZGH2) @$45.00
___copies of book Advances in Genetic Programming by K. E. Kinnear (KINDH) @$45.00
___copies of video Genetic Programming: The Movie in VHS NTSC Format (KOZGVV) @$34.95
___copies of video Genetic Programming: The Movie in VHS PAL Format (KOZGPV) @$44.95 each
___copies of video Genetic Programming: The Movie in VHS SECAM Format (KOZGSV) @$44.95
___copies of video Genetic Programming II Videotape: The Next Generation in VHS NTSC Format (KOZGV2) @$34.95
___copies of video Genetic Programming II Videotape: The Next Generation in VHS PAL Format (KOZGP2) @$44.95
___copies of video Genetic Programming II Videotape: The Next Generation in VHS SECAM Format (KOZGS2) @$44.95

Shipping and handling: Add $3.00 per item. Outside U.S. and Canada: add $6.00 per item for surface shipment or $22.00 per item for air.

Total for items ordered ________
Shipping and handling ________
Canadian customers add 7% GST ________
Total ________

[ ] Check or money order enclosed
[ ] Purchase order attached P.O. Number __________
[ ] Mastercard [ ] Visa Expiration date ___________ Card Number _____________________________

Ship to:
Name _____________________________________
Address ___________________________________
__________________________________________
__________________________________________
City ______________________________________
State ______________________ Zip or Postal Code___________
Country ___________________
Daytime Phone _____________________________

-----------------------------------

For orders in the UK, Eire, and Continental Europe, please contact the London office of the MIT Press at:

The MIT Press
14 Bloomsbury Square
London WC1A 2LP
England
Tel (071) 404 0712
Fax (071) 404 0610
e-mail 100315.1423 at compuserve.com

For orders in Australia, please contact:

Astam Books
57-61 John Street
Leichhardt, NSW 2040
Australia
Tel (02) 566 4400
Fax (02) 566
4411 Please note that prices may be higher outside the US. In all other areas of the world or in case of difficulty, please contact: The MIT Press International Department 55 Hayward Street, Cambridge, MA 02142 USA Tel 617 253 2887 Fax 617 253 1709 e-mail curtin at mit.edu From moody at chianti.cse.ogi.edu Tue May 24 19:46:28 1994 From: moody at chianti.cse.ogi.edu (John Moody) Date: Tue, 24 May 94 16:46:28 -0700 Subject: A Trivial but Fast Reinforcement Controller Message-ID: <9405242346.AA09905@chianti.cse.ogi.edu> The following paper is available via anonymous ftp: ========================================================================= File: moodyTresp94.reinforce.ps.Z To appear in Neural Computation, vol. 6, 1994. ------------------------------------------------------------------------- A Trivial but Fast Reinforcement Controller John Moody and Volker Tresp Abstract: We compare simulation results for the classic Barto-Sutton-Anderson pole balancer (which uses the Michie and Chambers ``boxes'' representation) with results for a reinforcement learning controller which employs a quadratic representation for both the adaptive critic element (ACE) and the associative search element (ASE). We find that this simple controller learns to balance the pole after a median of only 2 failures. This corresponds to a relative speed-up factor of over 7000 in simulated physical time. Moreover, the quality of the control, as measured by the residual kinetic energy of the cart/pole system after learning, is substantially better for the quadratic ACE/ASE controller. 
=========================================================================

Retrieval instructions are:

unix> ftp neural.cse.ogi.edu
login: anonymous
password: name at email.address
ftp> cd pub/neural
ftp> cd papers
ftp> get INDEX
ftp> binary
ftp> get moodyTresp94.reinforce.ps.Z
ftp> quit
unix> uncompress *.Z
unix> lpr *.ps

From moody at chianti.cse.ogi.edu Tue May 24 19:50:24 1994
From: moody at chianti.cse.ogi.edu (John Moody)
Date: Tue, 24 May 94 16:50:24 -0700
Subject: Summer School Lectures: Prediction Risk and Architecture Selection
Message-ID: <9405242350.AA09919@chianti.cse.ogi.edu>

The following paper is available via anonymous ftp:

=========================================================================
file: moody94.predictionrisk.ps.Z
Appears in:

From mav at psych.psy.uq.oz.au Mon May 23 19:28:48 1994
From: mav at psych.psy.uq.oz.au (Simon Dennis)
Date: Tue, 24 May 1994 09:28:48 +1000 (EST)
Subject: Thesis: Integrating Learning into Models of Human Memory
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/Thesis/dennis.thesis.ps.Z

The file dennis.thesis.ps.Z is now available for copying from the Neuroprose archive from the directory pub/neuroprose/Thesis.

The Integration of Learning into Models of Human Memory
Simon Dennis
Ph.D. Thesis
Department of Computer Science
University of Queensland

Since memory was first distinguished as a separate phenomenon from learning (Melton, 1963), researchers in the area have concentrated on the memory component. Mathematical models, such as SAM (Raaijmakers & Shiffrin, 1981; Gillund & Shiffrin, 1984), TODAM (Murdock, 1982), CHARM (Eich, 1982), Minerva II (Hintzman, 1984) and the matrix model (Pike, 1984; Humphreys, Bain & Pike, 1989), have focussed on the mechanisms of encoding, storage and retrieval.
The effects of variables such as retention time, number of presentations, spacing of presentations, type of retrieval test, nature of cues, encoding paradigm and the extent to which the study context is specified in the test instructions have been studied empirically and modelled. The learning component, which was the focus of the field for much of this century, has received less attention in recent years (Estes, 1991). Despite the extensive empirical database on learning phenomena (Postman, Burns & Hasher, 1970), attempts to model this data have been few. The attempts that do exist have concentrated on specifying algorithms by which experience might tune the parameters of existing memory models (Murdock, 1987) rather than attempting to explain how learning induces the representations, decision criteria and control processes of memory in the first instance. How people acquire these components of the memory system has important ramifications for the study of human retention. One of the most critical of these ramifications is the nature of the relationship between the environment and the mechanism of memory. Recent empirical work on the environment of memory has revealed a striking correspondence between the structure of the environment and the pattern of performance in human subjects (Anderson & Schooler, 1991). This thesis extends this work by studying the environment empirically, developing a learning mechanism and demonstrating that this learning mechanism behaves in a qualitatively similar fashion to human subjects when exposed to an environment that mirrors that with which the subjects contend. Analyses of the relevant environments of two touchstone phenomena -- the list strength effect in recognition and the word frequency effect in recognition -- were performed to establish the context in which interactive accounts of these phenomena must be set.
It was found that while low frequency words occur less often than high frequency words, they are more likely to recur within a context. In addition, the probability of recurrence was found to increase if a word had occurred frequently in the current context, but was not affected by the amount of repetition of words other than the target word. A learning or interactive model of human memory called the Hebbian Recurrent Network (HRN) has been developed. The HRN integrates work in the mathematical modelling of memory with that in error correcting connectionist networks by incorporating the matrix model (Pike, 1984; Humphreys, Bain & Pike, 1989) into the Simple Recurrent Network (SRN, Elman, 1989; Elman, 1990). The result is an architecture which has the desirable memory characteristics of the matrix model such as low interference and massive generalisation but which is able to learn appropriate encodings for items, decision criteria and the control functions of memory which have traditionally been chosen a priori in the mathematical memory literature. Simulations demonstrate that the HRN is well suited to the recognition task. When compared with the SRN, the HRN is able to learn longer lists, generalises from smaller training sets, and is not degraded significantly by increasing the vocabulary size. To demonstrate that the HRN learning mechanism is capable of addressing experimental behaviour, the phenomena studied environmentally were modelled with the HRN. The HRN showed a low frequency word advantage when it was presented with an environment in which high frequency words occurred more often, but low frequency words were more likely to recur within a context. In addition, the HRN showed a null list strength effect while retaining the list length and item strength effects when exposed to an environment in which the environmental results were embedded. 
By incorporating a learning mechanism and examining the environment in which memory models are situated it is possible to produce models that: (1) can start to address developmental phenomena; (2) can provide a mechanism to address learning-to-learn phenomena; (3) can address how internal states attain their meanings; (4) are easily extended to a wide variety of cognitive phenomena; and (5) account for the striking similarity between the environmental demands placed upon the memory system and the performance of human subjects. --------------------------- The thesis is 218 pages (22 preamble + 196 text).
Simon Dennis, Post Doctoral Research Fellow
Department of Psychology, The University of Queensland, QLD 4072, Australia
mav at psych.psy.uq.oz.au
From mackay at mrao.cam.ac.uk Thu May 26 10:21:00 1994 From: mackay at mrao.cam.ac.uk (David J.C. MacKay) Date: Thu, 26 May 94 10:21 BST Subject: The following preprint is now available by anonymous ftp. Message-ID: ======================================================================== Bayesian Neural Networks and Density Networks David J.C. MacKay University of Cambridge Cavendish Laboratory Madingley Road Cambridge CB3 0HE mackay at mrao.cam.ac.uk This paper reviews the Bayesian approach to learning in neural networks, then introduces a new adaptive model, the density network. This is a neural network for which target outputs are provided, but the inputs are unspecified. When a probability distribution is placed on the unknown inputs, a latent variable model is defined that is capable of discovering the underlying dimensionality of a data set. A Bayesian learning algorithm for these networks is derived and demonstrated with an application to the modelling of protein families. ======================================================================== The preprint may be obtained as follows:
ftp 131.111.48.8
(log in as "anonymous", giving your email address as password)
cd pub/mackay/density
binary
mget *.ps.Z
quit
uncompress *.ps.Z
This document is 12 pages long.
Sorry, hard copy is not available from the author. From KOKINOV at BGEARN.BITNET Wed May 25 16:05:57 1994 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Wed, 25 May 94 16:05:57 BG Subject: CogSci Summer School Message-ID: The Summer School features introductory and advanced courses in Cognitive Science, participant symposia, discussions, and student sessions. Participants will include university teachers and researchers, graduate and senior undergraduate students. International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max Planck Institute of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Local Organizers New Bulgarian University Bulgarian Academy of Sciences Bulgarian Cognitive Science Society Local Organizing Committee Boicho Kokinov - School Director Lilia Gurova, Vesselin Zaimov, Vassil Nikolov, Lora Likova, Marina Yoveva, Pasha Nikolova Courses Qualitative Spatial Reasoning Christian Freksa (Hamburg University, Germany) Computer Models of Analogy-Making Bob French (Indiana
University, USA) Social Cognition Rosaria Conte (CNR, Roma, Italy) Multi-Agent Systems Iain Craig (University of Warwick, England) Cognitive Aspects of Language Processing Amedeo Cappelli (CNR, Pisa, Italy) Catastrophic Forgetting in Connectionist Networks Bob French (Indiana University, USA) Dynamic Networks for Cognitive Modeling Peter Braspenning (University of Limburg, Holland) Models of Brain Functions Andre Holley (CNRS, Lyon, France) Foundations of Cognitive Science Encho Gerganov, Naum Yakimov, Boicho Kokinov, Viktor Grilihes (New Bulgarian University, Bulgaria) Participant Symposia Participants are invited to submit papers which will be presented (30 min) at the participant symposia. Authors should send full papers (8 single spaced pages) in triplicate or electronically (PostScript, RTF, or plain ASCII) by July 30. Selected papers will be published in the School's Proceedings after the School itself. Only papers presented at the School will be eligible for publication. Panel Discussions Integration of Methods and Approaches in Cognitive Science Trends in Cognitive Science Research Student Session At the student session, proposals for M.Sc. and Ph.D. theses will be discussed, as well as public defences of such theses (if presented). Fees (including participation, board and lodging) Advance Registration (payment in full, postmarked on or before June 15): $650 Late Registration (postmarked after June 15): $750 The fees should be transferred to the New Bulgarian University (for the Cognitive Science Summer School) at the Economic Bank (65 Maria Luisa Str., Sofia), bank account 911422735300-8, or paid at registration. A very limited number of grants for partial support of participants from East European countries is available.
Important dates: Send Application Form: now Deadline for Advance Registration: June 15 Deadline for Paper Submission: July 15 Inquiries, Applications, and Paper Submission should be sent to: Boicho Kokinov Cognitive Science Department New Bulgarian University 54, G.M.Dimitrov blvd. Sofia 1125, Bulgaria fax: (+3592) 73-14-95 e-mail: cogsci94 at adm.nbu.bg or kokinov at bgearn.bitnet Parallel Events The International Conference on Artificial Intelligence AIMSA'94 will be held in Sofia in the period September 21-24. Summer School on Information Technologies will be held in Sofia in the period September 16-20. --------------------------------------------------------------------------- International Summer School in Cognitive Science Sofia, September 12-24, 1994 Application Form Name: First Name: Status: faculty / graduate student / undergraduate student / other Affiliation: Country: Mailing address: e-mail address: fax: I intend to submit a paper: (title) From rjb at psy.ox.ac.uk Fri May 27 06:33:15 1994 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 27 May 1994 11:33:15 +0100 Subject: Positions available at the University of Oxford. Message-ID: <199405271033.LAA01144@sun02.mrc-bbc.ox.ac.uk> Four positions have just become available at the University of Oxford Psychology Department, at least two of which may be of interest to readers of connectionists. - Roland Baddeley (rjb at psy.ox.ac.uk) UNIVERSITY OF OXFORD DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY Posts in Visual Neuroscience The following posts are available as part of a long-term research programme combining neurophysiological and computational approaches to the functions of the temporal lobe visual cortical areas of primates. (1) Neurophysiologist (RS1A) to analyse the activity of single neurons in the temporal cortical visual areas of primates. (2) Computational neuroscientist (RS1A) to make formal models and/or analyse by simulation the functions of visual cortical areas and the hippocampus.
(3) Programmer (RS1B), preferably with an interest in computational neuroscience, and with experience in C and Unix. The salaries are on the RS1A (postdoctoral) scale 13,601-20,442 pounds, or the RS1B (graduate) scale 12,828-17,349 pounds, with support provided by a programme grant. (4) Neurophysiologist (RS1A or RS1B) to analyse the activity of single neurons in the temporal cortical visual areas of primates, with EC Human Capital and Mobility support for 18 months for a European non-UK citizen. Applications including the names of two referees, or enquiries, to Dr. Edmund T. Rolls, University of Oxford, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England (telephone 0865-271348). The University is an Equal Opportunities Employer. Email enquiries can be sent to Dr Rolls at erolls at psy.ox.ac.uk From David_Redish at GS17.SP.CS.CMU.EDU Fri May 27 10:50:22 1994 From: David_Redish at GS17.SP.CS.CMU.EDU (David Redish) Date: Fri, 27 May 94 10:50:22 -0400 Subject: Mosaic homepage for CNBC and NPC Message-ID: <26583.770050222@GS17.SP.CS.CMU.EDU> The Center for the Neural Basis of Cognition (CNBC) and the Neural Processes in Cognition Training Program (NPC) are joint projects of Carnegie Mellon University and the University of Pittsburgh.
There is now a Mosaic homepage for these programs at the following url: http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/CNBC.html Included in this homepage are
- summary information on the CNBC and the NPC training program
- information on how to apply to the NPC training program
- faculty, postdoc, and graduate student lists and research statements
- upcoming talks and colloquia
- resources available from people at the CNBC (such as the connectionists archives and local ftp sites for online tech reports)
------------------------------------------------------------
A short description of the programs follows: The Center for the Neural Basis of Cognition (CNBC) is a joint project of Carnegie Mellon University and the University of Pittsburgh, funded by a major gift from the R. K. Mellon Foundation. Created in 1994, the Center is dedicated to the study of the neural basis of cognitive processes, including learning and memory, language and thought, perception, attention, and planning.
Studies of the neural basis of normal adult cognition, cognitive development, and disorders of cognition all fall within the purview of the Center. In addition, the Center promotes the application of the results of the study of the neural basis of cognition to artificial intelligence, technology, and medicine. The Center will synthesize the disciplines of basic and clinical neuroscience, cognitive psychology, and computer science, combining neurobiological, behavioral, computational and brain imaging methods. The Neural Processes in Cognition training program (NPC) is a joint project between 15 departments at the University of Pittsburgh and its medical school and 2 departments at Carnegie Mellon University, funded by the National Science Foundation. Students receive instruction in neurobiology, psychology, mathematics and computer simulation. Students are trained to interpret the function as well as the phenomena of neuroscience and to work collaboratively with specialists in multiple disciplines. 
------------------------------------------------------------ David Redish Computer Science Carnegie Mellon University (NPC program) From David_Redish at GS17.SP.CS.CMU.EDU Sat May 28 07:41:50 1994 From: David_Redish at GS17.SP.CS.CMU.EDU (David Redish) Date: Sat, 28 May 94 07:41:50 -0400 Subject: NIPS*94 Mosaic homepage now available Message-ID: <27726.770125310@GS17.SP.CS.CMU.EDU> There is now a homepage for NIPS*94 at the following url: http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html Included in this homepage are: - the call for papers (html and ascii versions) - the call for workshops (html and ascii versions) - style files for papers When they become available, the following will also be added: - NIPS*94 program - NIPS*94 abstracts - NIPS*94 workshops - hotel and other local Denver information ------------------------------------------------------------ A short description of NIPS*94 follows: Neural Information Processing Systems -Natural and Synthetic- Monday, November 28 - Saturday, December 3, 1994 Denver, Colorado This is the eighth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov 28) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec 2-3). ------------------------------------------------------------ David Redish Carnegie Mellon University From hare at crl.ucsd.edu Sat May 28 09:57:17 1994 From: hare at crl.ucsd.edu (Mary Hare) Date: Sat, 28 May 94 06:57:17 PDT Subject: paper available Message-ID: <9405281357.AA12174@crl.ucsd.edu> The following paper is available by anonymous ftp from crl.ucsd.edu.
LEARNING AND MORPHOLOGICAL CHANGE Mary Hare Dept. of Psychology Birkbeck College, University of London hare at crl.ucsd.edu Jeffrey Elman Dept. of Cognitive Science U. of California, San Diego elman at crl.ucsd.edu ABSTRACT: This paper offers an account of change over time in English verb morphology, based on a connectionist approach to how morphological knowledge is acquired and used (Rumelhart and McClelland 1986, Plunkett and Marchman 1991, 1993). A technique is first described that was developed for modeling historical change in connectionist networks, then that technique is applied to model English verb inflection as it developed from the highly complex past tense system of Old English towards that of the modern language, with one predominant regular pattern and a limited number of irregular forms. The model relies on the fact that certain input-output mappings are easier than others to learn in a connectionist network. Highly frequent patterns, or those that share phonological regularities with a number of others, are learned more quickly and with lower error than low-frequency, highly irregular patterns (Seidenberg and McClelland 1989). A network is taught a data set representative of the verb classes of Old English, but learning is stopped before errors have been eliminated, and the output of this network is used as the teacher for a new network. As a result, the errors in the first network are passed on to become part of the data set of the second. As this sequence is repeated, those patterns that are hardest to learn lead to the most errors, and over time are 'regularized' to fit a more dominant pattern. The results of the network simulations are highly consistent with the major historical developments. These results are predicted from well-understood aspects of network dynamics, which therefore provide a rationale for the shape of the attested changes. 
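The iterated training regime in the abstract, where one network's imperfect output becomes the next network's training data, can be caricatured with a toy simulation. Here the network is abstracted to a probabilistic learner: an irregular form is retained only with enough exposure, and unlearned items surface with the dominant regular pattern. All frequencies, the exposure constant, and the verb inventory are invented for illustration and are not from the paper.

```python
import math, random

random.seed(0)

# Toy lexicon: each verb has a token frequency and a current past-tense
# pattern ("IRR" = some irregular class, "REG" = the regular pattern).
verbs = {f"v{i}": {"freq": f, "pattern": "IRR"}
         for i, f in enumerate([100, 40, 10, 4, 2, 1])}

def learn(data, exposure=0.05):
    """One generation: an irregular is mastered only with enough
    exposure; otherwise it is produced regularized, mimicking the
    early-stopped network whose errors default to the dominant map."""
    out = {}
    for v, info in data.items():
        if info["pattern"] == "IRR":
            p_mastered = 1 - math.exp(-exposure * info["freq"])
            pattern = "IRR" if random.random() < p_mastered else "REG"
        else:
            pattern = "REG"  # the regular mapping is always acquired
        out[v] = {"freq": info["freq"], "pattern": pattern}
    return out

data = verbs
for generation in range(20):
    data = learn(data)  # this generation's output teaches the next

survivors = [v for v, info in data.items() if info["pattern"] == "IRR"]
# Low-frequency irregulars regularize across generations, while the
# most frequent irregulars tend to survive, as in the historical data.
```

The actual simulations in the paper use connectionist networks whose error profile, not a hand-set probability, determines which mappings are transmitted faithfully.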
*************************** To obtain a copy ************************
unix> ftp crl.ucsd.edu
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuralnets
ftp> binary
ftp> get history.ps.Z
ftp> quit
unix> uncompress history.ps.Z
unix> lpr history.ps
(or what you normally do to print PostScript)
From rsun at cs.ua.edu Sat May 28 18:57:45 1994 From: rsun at cs.ua.edu (Ron Sun) Date: Sat, 28 May 1994 17:57:45 -0500 Subject: No subject Message-ID: <9405282257.AA11587@athos.cs.ua.edu> Preprint available: -------------------------------------------- title: Robust Reasoning: Integrating Rule-Based and Similarity-Based Reasoning Ron Sun Department of Computer Science The University of Alabama Tuscaloosa, AL 35487 rsun at cs.ua.edu -------------------------------------------- to appear in: Artificial Intelligence (AIJ), Spring 1995 --------------------------------------------- The paper attempts to account for common patterns in commonsense reasoning through integrating rule-based reasoning and similarity-based reasoning as embodied in connectionist models. Reasoning examples are analyzed and a diverse range of patterns is identified. A principled synthesis based on simple rules and similarities is performed, which unifies patterns that previously could not be accounted for without individually specialized mechanisms. A two-level connectionist architecture with dual representations is proposed as a computational mechanism for carrying out the theory. It is shown in detail how the common patterns can be generated by this mechanism. Finally, it is argued that the brittleness problem of rule-based models can be remedied in a principled way, with the theory proposed here.
This work demonstrates that combining rules and similarities can result in more robust reasoning models, and that many seemingly disparate patterns of commonsense reasoning are different manifestations of the same underlying process, which the integrated architecture captures and generates to a large extent. ----------------------------------------------------------------
* It is FTPable from aramis.cs.ua.edu in: /pub/tech-reports
* No hardcopy available.
* FTP procedure:
unix> ftp aramis.cs.ua.edu
Name: anonymous
Password: (email-address)
ftp> cd pub/tech-reports
ftp> binary
ftp> get sun.aij.ps.Z
ftp> quit
unix> uncompress sun.aij.ps.Z
unix> lpr sun.aij.ps
(or however you print PostScript)
-----------------------------------------------------------------
(A number of other publications are also available for FTP under pub/tech-reports)
================================================================
Dr. Ron Sun
Department of Computer Science, The University of Alabama, Tuscaloosa, AL 35487
phone: (205) 348-6363 fax: (205) 348-0219
rsun at athos.cs.ua.edu
================================================================
From franz at neuro.informatik.uni-ulm.de Mon May 30 11:29:33 1994 From: franz at neuro.informatik.uni-ulm.de (Franz Kurfess) Date: Mon, 30 May 94 17:29:33 +0200 Subject: Fall School on Connectionism and Neural Nets HeKoNN 94 (in German) Message-ID: <9405301529.AA00205@neuro.informatik.uni-ulm.de> Below please find the announcement and call for participation of HeKoNN 94, a fall school on connectionism and neural networks to take place October 10-14, 1994 near Muenster, Germany. The courses will be held in German, so this will not be of much interest for people who don't speak German.
Franz Kurfess HeKoNN 94 Fall School on Connectionism and Neural Networks (Herbstschule Konnektionismus und Neuronale Netze) Muenster, October 10-14, 1994. This coming October, the special interest groups "Konnektionismus" and "Neuronale Netze" of the GI (Gesellschaft fuer Informatik) are organizing a fall school on the subject of connectionism and neural networks. It offers introductory and in-depth courses on the following topics: foundations and statistics, implementations and applications, symbolic connectionism, and cognitive connectionism. Connectionist models and neural networks are, on the one hand, inspired by biological examples, in particular the human brain; on the other hand, they now also serve as practical mechanisms for solving concrete problems. This dichotomy creates the danger of many misunderstandings and unrealistic expectations, whether concerning their performance compared with conventional methods or the possibility of "rebuilding" biological systems. Since about the mid-eighties the backpropagation model has been familiar to many. Far less well known, however, are theoretical results on the properties of this and alternative procedures and on the reliability of the methods. It is clear today that neural networks are closely related to statistics, function approximation, and theoretical physics, and that many of the insights gained in those fields are applicable here as well. Beyond that, there is still a considerable deficit on questions concerning the biological, cognitive, and psychological aspects of neural networks. These involve concepts for modeling behaviour and thought processes on the basis of neural networks. Examples are the representation of "knowledge", in particular the grounding of internal representations in the corresponding objects of the real world, or the carrying out of simple inferences.
Such questions are not only of academic interest; they also arise in the interplay between more symbol-oriented knowledge processing, for example in expert systems, and more data-oriented procedures, for instance in pattern recognition. And it is exactly here that many of the difficulties of conventional artificial intelligence methods lie, for example in the areas of language or image processing. The fall school offers a comprehensive, interdisciplinary presentation of the field with particular emphasis on the questions above. Twenty active researchers have been recruited as lecturers, each giving an 8-hour course. The courses are divided into four areas: * Foundations and Statistics * Implementations and Applications * Symbolic Connectionism * Cognitive Connectionism In the first area, Foundations and Statistics, there are five courses explaining the foundations of connectionism and neural networks. These begin with a presentation of the commonly used models and an introduction of the corresponding terminology and algorithms. One course offers an introduction to the foundations, goals, and research questions of the field of connectionism and neural networks from the standpoint of artificial intelligence research. A second course discusses neural networks from the perspective of approximation theory and statistics, with particular weight on the properties of the various learning and optimization procedures for the common network types. Another course examines the suitability of artificial neural networks as models of biological processes, with emphasis on the degree of realism the networks require when simulating information processing in the nervous systems of living organisms.
Decisive for the practical use of neural networks in forecasting and process control is the reliability of the results. A special course on this question presents methods for estimating reliability, improving forecasting accuracy, and optimizing the network topology. Particularly promising here are genetic algorithms, which can determine a global optimum for functions with many local maxima. These procedures are discussed in a further course. The second area, Implementations and Applications, comprises four courses. These introduce SNNS and SESAME, two widely used public-domain simulation tools with graphical user interfaces, with practical hands-on computer exercises planned. A focus of these simulators is the flexible modification of existing network structures as well as the fast and safe modification of learning procedures. While SNNS places great value on its graphical interface under X-Windows for generating, visualizing, and modifying neural networks, SESAME's strength lies in a modular experiment design that ensures fast, flexible exchange of individual components and supports the construction of new algorithms through modularization. Another course presents the current state of the art in hardware implementations of neural networks: on the one hand, add-on boards for conventional workstations and special parallel computer systems; on the other, architectures based on application-specific microelectronic components, which can be either digital or analog. Finally, as an application, neural networks in robotics are discussed. Artificial neural networks appear well suited here because they can autonomously extract relevant information from training examples.
The focus is on whether this yields practical advantages over the classical analytical methods of robotics. The third area, Symbolic Connectionism, attempts to establish the connection between symbol-oriented methods of artificial intelligence and sub-symbolic methods, which mostly operate close to the sensors, that is, at data acquisition. Neural networks are often used with success precisely in this data-oriented domain, but they are not readily suited to performing manipulations on strings of symbols. An important question here is the transformation of raw data, such as images captured by a camera or acoustic signals registered by a microphone, into a symbolic form on which conventional tools such as expert systems can then build. Another important aspect is the extraction of knowledge from neural networks, for instance to explain their behaviour or to represent the information learned by the network in the form of rules. The last area, Cognitive Connectionism, deals with the use of neural networks as models of perception and thought processes. On the one hand, important fundamental problems of these models are discussed against a more philosophical background; on the other, approaches to modeling phenomena such as concept representation, learning, and memory are covered. Further topics concern the investigation and modeling of information-processing subsystems in the brain, such as language processing or the visual system. The courses in the four areas above are held in parallel; it is not necessary, however, to choose one area as a whole: participants can and should take courses from different areas.
The fall school will take place at the Jugendgaestehaus Aasee near Muenster, where both participants and lecturers will be accommodated. This is intended to provide the opportunity to continue discussing interesting questions in an informal atmosphere outside the actual courses. The school is aimed in particular at advanced students as well as practitioners from research and industry. The number of participants is limited to 100. The price for students has been kept relatively low (approx. 410 DM including course materials and full board). To ensure the level of the school, participants are selected on the basis of an application, taking into account prior knowledge, practical experience, and specific interest in questions of connectionism and neural networks. Registration deadline: - - - July 1, 1994 - - - The organizing and programme committee consists of Ingo Duwe (Univ. Bielefeld), Franz Kurfess (Univ. Ulm), Gerhard Paass (GMD Sankt Augustin, chair), Guenther Palm (Univ. Ulm), Helge Ritter (Univ. Bielefeld), and Stefan Vogel (Univ. Koeln). Further information is available by anonymous ftp from "ftp.gmd.de", directory "/Learning/neural/HeKoNN94", by electronic mail from "hekonn at neuro.informatik.uni-ulm.de", or from the conference secretariat: HeKoNN 94 c/o Birgit Lonsinger, Universitaet Ulm, Fakultaet fuer Informatik, Abteilung Neuroinformatik, D-89069 Ulm, Tel: 0731 502 4151, Fax: 0731 502 4156 From gjg at cns.edinburgh.ac.uk Mon May 30 18:04:58 1994 From: gjg at cns.edinburgh.ac.uk (Geoffrey Goodhill) Date: Mon, 30 May 94 18:04:58 BST Subject: New TSP Algorithm(?) Message-ID: <19609.9405301704@cns.ed.ac.uk> Below is an article from the Guardian 28.5.94, a UK quality newspaper, which might be of interest to readers of this list. It's about a new algorithm for the TSP that claims to be the best yet. There is apparently an article forthcoming in the Journal of Neural Computing.
I note two things: 1) Strange though it may seem to those of us who think that the purpose of publishing in reputable journals is to tell people what one has done, the article below suggests that the authors may not be going to tell us what the algorithm is. 2) Given that a number of inflated claims have been made for new TSP algorithms in the past based on comparisons with poor alternatives, I'd be interested to see proper comparisons to justify the authors' assertions. Geoff Goodhill ******************************************************************** SCIENCE WELL ON THE ROAD TO SALESMAN SOLUTION --------------------------------------------- Tim Radford reports on a near-answer to a deceptively simple mathematical question. A mathematical conundrum called the Travelling Salesman Problem may have lost its power to baffle, according to British Telecom scientists. They admit they have not exactly solved it: just arrived at a way to make a computer produce the best and fastest solution yet. The 60-year-old problem is terribly simple, but has occupied some of the world's most powerful brains - and most powerful computers - for years. It is this: a salesman wants to visit 3, or 4, or 10, or 100 places. What is the shortest, or fastest, route? The options multiply dramatically with the number of calls. A journey to 3 sites involves 6 possible routes. A journey to 10 offers 3,628,800 possible choices. A journey around 100 would involve 10^156. This is 1 followed by 156 zeros, almost equivalent to the number of atoms in the universe, squared. The almost-optimum solution, by Dr Shara Amin and Dr Jose Luis Fernandez-Villacanas Martin at the BT laboratories in Martlesham Heath, near Ipswich, Suffolk, can now be reached in 1.6 seconds for a 100-point journey. The absolute best would take days. Even a 1000-point problem can be solved in less than 3 minutes. The answer, they say, will be reported in the Journal of Neural Computing in July.
The algorithm they use will not be revealed, but there will be clues for other mathematicians on how to proceed. The algorithm (into which planners can slot variables such as a bank holiday in Oslo or engineering works between Birmingham and London) will help sales managers and, for that matter, telephone managers with a choice of calls. But the technique also has military uses. Imagine, said Professor Peter Cochrane, of the BT laboratories, a jet pilot under simultaneous attack from ground-to-air missiles and enemy aircraft. "Now you have a hyperspace problem. It is this: where do I steer, and who do I shoot at first, and in what order, to minimise the chances of me getting killed and maximise the amount of damage I can do to them?" But the immediate value would be in preparing the patterns on computer circuits, or searching in information "hyperspace", where the options can reach astronomical levels. Professor Cochrane sees the procedure as useful in automatic searches through networks and data libraries. "We are looking at giving information on demand, where you could have access to all the libraries in the world, all the museums in the world".
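Since the BT algorithm is not disclosed, a standard textbook baseline gives a sense of what such heuristics do: greedy nearest-neighbour construction followed by 2-opt local improvement on random cities. This is emphatically not the algorithm in the article; the city count and coordinates below are arbitrary.

```python
import math, random

# Nearest-neighbour + 2-opt baseline for the travelling salesman
# problem on random points in the unit square.
random.seed(42)
cities = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(start=0):
    """Greedy construction: always move to the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour):
    """Local improvement: reverse any segment that shortens the tour,
    repeating until no single reversal helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate) < tour_length(tour):
                    tour, improved = candidate, True
    return tour

greedy = nearest_neighbour()
better = two_opt(greedy)
# 2-opt only ever accepts strictly shorter tours, so `better` is
# never longer than the greedy starting tour.
```

Such local-search baselines reach a local optimum, not the guaranteed shortest tour, which is what makes proper comparisons against strong alternatives essential when a new heuristic claims superiority.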
It will continue in a resort hotel at the seashore with plenary lectures, describing recent advances in all different techniques and comparing their merits and scientific results. The number of participants in the workshop will be limited.
\sk \vskip 1cm
{\bf Tutorial Sessions}, Sunday June 5th at Tel Aviv University:

Amos Korczyn: Introduction to mapping of the brain.
Peter Bendel: Technical introduction to MRI.

{\bf Invited Lectures} (at Dan Accadia Hotel, Hertzlia) Monday-Friday June 6--10:

John Belliveau: 1. Functional MRI. 2. Correlation between EEG and fMRI.
Alan S.
Gevins: 1. High resolution EEG. 2. Sub-second networks of cognition in the human brain.
Amiram Grinvald: 1. Optical imaging of functional architecture based on the intrinsic signals. 2. Real-time optical imaging of electric activity.
Matti H\"am\"al\"ainen: MEG -- a tool for functional brain imaging: theory, instrumentation, results.
Seiji Ogawa: Basic mechanisms in fMRI.
Hillel Pratt: Imaging human brain activity from scalp recordings.
Marcus E. Raichle: 1. Introduction to neuroimaging. 2. Multimodal functional imaging. 3. PET studies of language and memory.
Robert Shulman: 1. Application of functional MRI to cognitive processes. 2. Principles of magnetic resonance spectroscopy of the brain. 3. Measurements of brain metabolism by MRS.

{\bf Schedule of activities:}

Sunday, June 5: Tutorials at Tel Aviv University
Monday, June 6: Sessions One and Two at Dan Accadia Hotel. Evening: Get-together at the Hotel
Tuesday, June 7: Sessions Three and Four.
Wednesday, June 8: Morning: Session Five. Afternoon: Organized Tour
Thursday, June 9: Sessions Six and Seven. Evening: Dinner at Tel Aviv University.
Friday, June 10: Morning: Session Eight.
\sk
{\bf Information and registration}: Dan Knassim Ltd., P.O.B. 57005, Tel-Aviv 61570, Israel. Tel: 972-3-562 6470 Fax: 972-3-561 2303
\sk \vskip 2cm
\centerline {D. Horn~~~~~~~~G. Navon}
\centerline {ADAMS SUPER-CENTER FOR BRAIN STUDIES}
\centerline {TEL-AVIV UNIVERSITY, TEL-AVIV, ISRAEL}
\centerline{ e-mail: brain1 at taunivm.tau.ac.il }
\vskip 2cm \sk \vfill\eject\end

From oby at TechFak.Uni-Bielefeld.DE Mon May 2 07:38:50 1994
From: oby at TechFak.Uni-Bielefeld.DE (oby@TechFak.Uni-Bielefeld.DE)
Date: Mon, 2 May 94 13:38:50 +0200
Subject: No subject
Message-ID: <9405021138.AA28671@gaukler.TechFak.Uni-Bielefeld.DE>

Research Position in Computational Neuroscience
Technische Fakultaet (computer science), University of Bielefeld, Germany

Our research group in computational neuroscience is part of the university's young CS-department.
We study the role of self-organization and pattern-formation processes in neural development, as well as the information-processing strategies employed by primate visual cortex. Recent publications include: Obermayer et al. (1990), Proc. Nat. Acad. Sci. USA 87, 8345-8349; Obermayer et al. (1992), Phys. Rev. A 45, 7568-7589; Obermayer and Blasdel (1993), J. Neurosci. 13, 4114-4129.

Recently we have started a project with a focus on the role of long-range lateral connections in the visual cortex, which is part of an international collaboration involving a neurophysiology group at Harvard U. (USA) and a neuroanatomy group at London U. (GB). A graduate student position has become available for this project. The appointee is expected to choose a project out of the following areas:

1. The development of computer software for neuroanatomical tracing and section alignment, for cell reconstruction, and for the automatic recognition of important features including vasculature, axons, and synaptic boutons.
2. Computer models of neuronal circuits based on the observed patterns of lateral connections, and a comparison of predicted filter properties with experimental data.
3. Computer models involving long-range connections, which should explore their possible role in adaptation and contextual effects.

Candidates should be familiar with C or C++ and should have a background in one of the following areas:

1. graphics programming and computer vision
2. neuroanatomy
3. neural modelling

The position can be made available beginning June 1st. Salary is equivalent to BATIIa/2, but the position may be upgraded to a full BATII position. Please send applications including CV, copies of certificates, and a statement of research interests to: Dr.
Klaus Obermayer
Technische Fakultaet, Universitaet Bielefeld, Universitaetsstrasse 25, 33615 Bielefeld
phone: 49-521-106-6058, fax: 49-521-106-6011
e-mail: oby at techfak.uni-bielefeld.de

From Daniel_Seligson at ccm11.sc.intel.com Tue May 3 20:55:13 1994
From: Daniel_Seligson at ccm11.sc.intel.com (Daniel_Seligson@ccm11.sc.intel.com)
Date: Tue, 03 May 94 16:55:13 PST
Subject: Job Posting at CuraGen
Message-ID: <9404037680.AA768009313@rnbsmt11.intel.com>

Other than the fact that my friends at CuraGen asked me to post this for them, there is no connection between Intel and CuraGen. Please direct all correspondence to the address below, or Greg Went (gwent at curagen.com). Thanks, Dan

CuraGen Corporation

CuraGen Corporation is a dynamic and expanding biotechnology company with a mission to systematically extract from the human genome those disease-related genes for which therapeutics can be successfully designed. CuraGen has assembled a research team with expertise in molecular biology, engineering physics, and computational methods. Close ties with several major academic laboratories complement our own research and facilities.

Structural Biology Division
Post-Doctoral/Research Scientist Position
Theoretical and Applied Computational Biology

CuraGen has devised a novel means and instrumentation for obtaining DNA fragmentation patterns in order to rapidly determine the composition of disease-specific genes. The analysis and interpretation of these patterns is essential to the success of the project. We currently have an opening for a computational scientist to refine and implement an adaptive scheme for this task. Those with a Ph.D. in computer science, applied mathematics, biology, physics, or related disciplines are encouraged to apply. Significant programming accomplishments are essential; exposure to non-linear statistics, neural architectures, and biocomputing is desirable.
There is a distinct possibility that a joint appointment with DIMACS (supported by the NSF) is available in conjunction with this position, which offers up to 50% discretionary time. CuraGen offers a competitive compensation package including salary, benefits, and equity participation. Our location in a shoreline community between Boston and New York City, 12 minutes east of Yale University, presents excellent scientific, recreational, and cultural opportunities.

CuraGen is an AA/Equal Opportunity Employer

CuraGen Corporation
322 East Main Street
Branford, CT 06405
(203) 481 1104
(203) 481 1106 (FAX)

From bruno at lgn.wustl.edu Tue May 3 18:07:12 1994
From: bruno at lgn.wustl.edu (Bruno Olshausen)
Date: Tue, 3 May 94 17:07:12 CDT
Subject: Thesis available on neuroprose
Message-ID: <9405032207.AA01935@lgn.wustl.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/Thesis/olshausen.thesis.tar.Z

The file olshausen.thesis.tar.Z is now available for copying from the Neuroprose archive:

Neural Routing Circuits for Forming Invariant Representations of Visual Objects

Bruno A. Olshausen
Ph.D. Thesis
Computation and Neural Systems Program
California Institute of Technology

ABSTRACT: This thesis presents a biologically plausible model of an attentional mechanism for forming position- and scale-invariant representations of objects in the visual world. The model relies on a set of {\em control neurons} to dynamically modify the synaptic strengths of intra-cortical connections so that information from a windowed region of primary visual cortex (V1) is selectively routed to higher cortical areas. Local spatial relationships (i.e., topography) within the attentional window are preserved as information is routed through the cortex, thus enabling attended objects to be represented in higher cortical areas within an object-centered reference frame that is position and scale invariant.
The representation in V1 is modeled as a multiscale stack of sample nodes with progressively lower resolution at higher eccentricities. Large changes in the size of the attentional window are accomplished by switching between different levels of the multiscale stack, while positional shifts and small changes in scale are accomplished by translating and rescaling the window within a single level of the stack. The control signals for setting the position and size of the attentional window are hypothesized to originate from neurons in the pulvinar and in the deep layers of visual cortex. The dynamics of these control neurons are governed by simple differential equations that can be realized by neurobiologically plausible circuits. In pre-attentive mode, the control neurons receive their input from a low-level ``saliency map'' representing potentially interesting regions of a scene. During the pattern recognition phase, control neurons are driven by the interaction between top-down (memory) and bottom-up (retinal input) sources. The model respects key neurophysiological, neuroanatomical, and psychophysical data relating to attention, and it makes a variety of experimentally testable predictions. An appendix describes details of pulvinar anatomy and physiology. --------------------------- Thesis is 119 pages (10 preamble + 109 text), subdivided into five ps files (each is ordered last page first). It will look best in double-sided printing. You may need to run chmod 755 on each ps file in order to print using lpr -s. Hardcopies will be made available for $4.00. 
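[Aside: the routing idea in the abstract above can be loosely illustrated with a toy 1-D resampling sketch. This is my own illustration, not the circuit model from the thesis: a "window" of adjustable position and size is resampled into a fixed-size output array, so a pattern presented at different positions and scales lands in the same window-centered frame.]

```python
import numpy as np

def route_window(v1, center, size, out_size=5):
    """Toy dynamic routing: resample a window of a 1-D 'V1' array into a
    fixed-size output, normalizing the attended pattern's position and scale."""
    xs = np.linspace(center - size / 2, center + size / 2, out_size)
    return np.interp(xs, np.arange(len(v1)), v1)

v1 = np.zeros(64)
v1[10:15] = [0, 1, 2, 1, 0]                            # pattern at one position/scale
v1[40:49] = [0, .5, 1, 1.5, 2, 1.5, 1, .5, 0]          # same pattern, shifted and 2x scaled

a = route_window(v1, center=12, size=4)
b = route_window(v1, center=44, size=8)
# a and b are identical: the window has normalized position and scale
```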
From B344DSL at UTARLG.UTA.EDU Tue May 3 01:38:53 1994
From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU)
Date: Mon, 02 May 1994 23:38:53 -0600 (CST)
Subject: Final program and abstracts for MIND conference May 5-7
Message-ID: <01HBVR70GR42002G4Z@UTARLG.UTA.EDU>

CONTENTS
Announcement
Program Schedule
Abstracts of Presentations
Directions to Conference
Registration Form

CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS

Sponsored by the Metroplex Institute for Neural Dynamics (MIND) and the University of Texas at Arlington
Co-sponsored by the Departments of Mathematics and Psychology

MAY 5-7, 1994
UNIVERSITY OF TEXAS AT ARLINGTON
MAIN LIBRARY, 6TH FLOOR PARLOR

The topic of neural oscillation is currently of great interest to psychologists and neuroscientists alike. Recently it has been observed that neurons in separate areas of the brain will oscillate in synchrony in response to certain stimuli. One hypothesized function for such synchronized oscillations is to solve the "binding problem": that is, how disparate features of objects (e.g., a person's face and their voice) are tied together into a single unitary whole. Some bold speculators (such as Francis Crick in his recent book, The Astonishing Hypothesis) even argue that synchronized neural oscillations form the basis for consciousness.

Further inquiries about the conference can be addressed to any of the conference organizers:

Professor Daniel S. Levine
Department of Mathematics, University of Texas at Arlington
411 S. Nedderman Drive
Arlington, TX 76019-0408
817-273-3598, fax: 817-794-5802
b344dsl at utarlg.uta.edu

Professor Vincent Brown
Department of Psychology, University of Texas at Arlington
Arlington, TX 76019
817-273-3247
b096vrb at utarlg.uta.edu

Mr. Timothy Shirey
214-495-3500 or 214-422-4570
73353.3524 at compuserve.com

Please distribute this announcement to anyone you think may be interested in the conference.
SCHEDULE

Posters (ongoing throughout the conference): Govindarajan, Lin, Mobus, Penz, Rhoades, Tam, Young

Thursday:
9:00-9:15 Introduction by Peter Rosen, Dean of the College of Science
9:15-9:30 Introduction by Daniel S. Levine, Co-organizer of the conference
9:30-10:30 Mpitsos
10:30-11:15 Baxter
11:15-11:30 15 minute break
11:30-12:30 Stemmler
12:30-2:00 LUNCH
2:00-2:45 Thomas
2:45-3:45 Horn
3:45-4:00 15 minute break
4:00-5:00 Yuen
5:00-5:45 Gross

Friday:
8:30-9:30 Wong
9:30-10:30 Traub
10:30-10:45 15 minute break
10:45-11:30 Soltesz
11:30-12:15 Wolpert
12:15-2:00 LUNCH
2:00-2:45 (A.) Brown
2:45-3:45 Bulsara
3:45-4:00 15 minute break
4:00-5:00 Maren
5:00-5:45 Jagota

Saturday:
10:00-11:00 Baird
11:00-11:45 Park
11:45-12:00 15 minute break
12:00-12:45 DeMaris
12:45-1:45 LUNCH
1:45-2:45 Grunewald
2:45-3:30 Steyvers
3:30-3:45 15 minute break
3:45-5:00 Discussion (What Are Neural Oscillations Good For?)
(If weather permits, discussion may continue after 5PM outside library.)
7:30-? Trip to The Ballpark in Arlington to see Minnesota Twins at Texas Rangers

Titles and Abstracts of Talks and Posters (Alphabetical by First Author)

BILL BAIRD, UNIVERSITY OF CALIFORNIA/BERKELEY (BAIRD at MATH.BERKELEY.EDU)
"GRAMMATICAL INFERENCE BY ATTENTIONAL CONTROL OF SYNCHRONIZATION IN AN ARCHITECTURE OF COUPLED OSCILLATORY ASSOCIATIVE MEMORIES"

We show how a neural network "computer" architecture, inspired by observations of cerebral cortex and constructed from recurrently connected oscillatory associative memory modules, can employ selective "attentional" control of synchronization to direct the flow of communication and computation within the architecture to solve a grammatical inference problem. The modules in the architecture learn connection weights between themselves which cause the system to evolve under a clocked "machine cycle" by a sequence of transitions of attractors within the modules, much as a digital computer evolves by transitions of its binary flip-flop states.
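[Editor's aside: the phase-locking that this kind of architecture relies on can be seen in a minimal two-oscillator sketch. This is a generic Kuramoto-style toy of mine, not Baird's architecture: with coupling on, two slightly detuned phase oscillators lock to a small phase gap; with coupling off, their phases drift apart at the detuning rate.]

```python
import numpy as np

def final_phase_gap(coupling, steps=3000, dt=0.01):
    """Euler-integrate two Kuramoto-style phase oscillators; return the
    wrapped absolute phase difference after steps*dt time units."""
    w = np.array([1.0, 1.2])      # slightly detuned natural frequencies
    theta = np.array([0.0, 2.0])  # arbitrary initial phases
    for _ in range(steps):
        diff = theta[::-1] - theta          # other oscillator's phase minus own
        theta = theta + dt * (w + coupling * np.sin(diff))
    return float(abs(np.angle(np.exp(1j * (theta[0] - theta[1])))))

print(final_phase_gap(1.0))  # coupled: locks to a small gap (~0.1 rad)
print(final_phase_gap(0.0))  # uncoupled: the gap just drifts with the detuning
```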
The architecture thus employs the principle of "computing with attractors" used by macroscopic systems for reliable computation in the presence of noise. Even though it is constructed from a system of continuous nonlinear ordinary differential equations, the system can operate as a discrete-time symbol processing architecture, but with analog input and oscillatory subsymbolic representations. The discrete time steps (machine cycles) of the "Elman" network algorithm are implemented by rhythmic variation (clocking) of a bifurcation parameter. This holds input and "context" modules clamped at their attractors while hidden and output modules change state, then clamps hidden and output states while context modules are released to load those states as the new context for the next cycle of input. In this architecture, oscillation amplitude codes the information content or activity of a module (unit), whereas phase and frequency are used to "softwire" the network. Only synchronized modules communicate by exchanging amplitude information; the activity of non-resonating modules contributes incoherent crosstalk noise. The same hardware and connection matrix can thus subserve many different computations and patterns of interaction between modules. Attentional control is modeled as a special subset of the hidden modules with outputs which affect the resonant frequencies of other hidden modules. They perturb these frequencies to control synchrony among the other modules and direct the flow of computation (attention) to effect transitions between two subgraphs of a large automaton which the system emulates to generate a Reber grammar. The internal crosstalk noise is used to drive the required random transitions of the automaton. DOUG BAXTER, CARMEN CANAVIER, H. 
LECHNER, UNIVERSITY OF TEXAS/HOUSTON, JOHN CLARK, RICE UNIVERSITY, AND JOHN BYRNE, UNIVERSITY OF TEXAS/HOUSTON (DBAXTER at NBA19.MED.UTH.TMC.EDU) "COEXISTING STABLE OSCILLATORY STATES IN A MODEL NEURON SUGGEST NOVEL MECHANISMS FOR THE EFFECTS OF SYNAPTIC INPUTS AND NEUROMODULATORS" Enduring changes in the electrical activity of individual neurons have generally been attributed to persistent modulation of one or more of the biophysical parameters that govern, directly or indirectly, neuronal membrane conductances. Particularly striking examples of these modulatory actions can be seen in the changes in the activity of bursting neurons exposed to modulatory transmitters or second messengers. An implicit assumption has been that once all parameters are fixed, the ultimate mode of electrical activity exhibited is determined. An alternative possibility is that several stable modes of activity coexist at a single set of parameters, and that transient synaptic inputs or transient perturbations of voltage-dependent conductances could switch the neuron from one stable mode of activity to another. Although coexisting stable oscillatory states are a well known mathematical phenomenon, their appearance in a biologically plausible model of a neuron has not been previously reported. By using a realistic mathematical model and computer simulations of the R15 neuron in Aplysia, we identified a new and potentially fundamental role for nonlinear dynamics in information processing and learning and memory at the single-cell level. Transient synaptic input shifts the dynamic activity of the neuron between at least seven different patterns, or modes, of activity. These parameter-independent mode transitions are induced by a brief synaptic input, in some cases a single excitatory postsynaptic potential. Once established, each mode persists indefinitely or until subsequent synaptic input perturbs the neuron into another mode of activity. 
Moreover, the transitions are dependent on the timing of the synaptic input relative to the phase of the ongoing activities. Such temporal specificity is a characteristic feature of associative memories. We have also investigated the ways in which changes in two model parameters, the anomalous rectifier conductance (gR) and the slow inward calcium conductance (gSI), affect not only the intrinsic activity of R15, but also the ability of the neuron to exhibit parameter-independent mode transitions. gR and gSI were selected since they are key determinants of bursting activity and also because they are known to be modulated by dopamine and serotonin. We have found that small changes in these parameters can annihilate some of the coexisting modes of electrical activity. For some values of the parameters only a single mode is exhibited. Thus, changing the value of gR and gSI can regulate the number of modes that the neuron can exhibit. Preliminary electrophysiological experiments indicate that these mechanisms are present in vitro. These combined experimental and modeling studies provide new insights into the role of nonlinear dynamics in information processing and storage at the level of the single neuron and indicate that individual neurons can have extensive parameter-independent plastic capabilities in addition to the more extensively analyzed parameter-dependent ones.

ANTHONY BROWN, DEFENSE RESEARCH AGENCY, UNITED KINGDOM (ABROWN at SIGNAL.DRA.HMG.GB)
"PRELIMINARY WORK ON THE DESIGN OF AN ANALOG OSCILLATORY NEURAL NETWORK"

Inspired by biological neural networks, our aim is to produce an efficient information processing architecture based upon analogue circuits. In the past, analogue circuits have suffered from a limited dynamic range caused by inter-device parameter variations. Any analogue information processing system must therefore be based upon an adaptive architecture which can compensate for these variations.
Our approach to designing an adaptive architecture is to mimic neuro-biological exemplars; we are therefore examining architectures based upon the Hebb learning rule. In neuro-biological systems the Hebb rule is associated with temporal correlations which arise in phase-locked oscillatory behaviour. The starting point for our new system is the Hopfield network. To modify the fixed point dynamics of such a network we have introduced a "hidden" layer of neurons. Each new neuron is connected to an existing neuron to form a pair which in isolation exhibits a decaying, oscillatory response to a stimulus. Several promising preliminary results have been obtained: Sustained oscillations are stimulated by the "known" patterns which were used to determine the weight matrix. In contrast, "unknown" patterns result in a decaying oscillatory response, which can be reinforced for frequently occurring new input patterns to create a new characteristic response. Finally, a mixture of two known inputs will stimulate both characteristic oscillatory patterns separated by a constant phase lag. Overall, the introduction of oscillatory behaviour in an associative memory will both simplify the embodiment of the learning rule and introduce new modes of behaviour which can be exploited.

ADI BULSARA, NAVAL COMMAND, CONTROL, AND OCEAN SURVEILLANCE CENTER, SAN DIEGO (BULSARA at MANTA.NOSC.MIL)
"COMPLEXITY IN THE NEUROSCIENCES: SIGNALS, NOISE, NONLINEARITY, AND THE MEANDERINGS OF A THEORETICAL PHYSICIST"

We consider the interpretation of time series data from firing events in periodically stimulated sensory neurons. Theoretical models, representing the neurons as nonlinear dynamic switching elements subject to deterministic (taken to be time-periodic) signals buried in a Gaussian noise background, are developed.
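[Editor's aside: the subthreshold-signal-plus-noise setup can be illustrated with a toy threshold detector. This sketch is mine, not one of the talk's models: with no noise the periodic signal never crosses threshold, while moderate noise produces crossings that cluster at the favorable phase of the cycle - the effect underlying stochastic resonance and the periodic structure of ISIHs.]

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_crossings(noise_std, n=20000, period=100, amp=0.8, thresh=1.0):
    """Count upward threshold crossings of a subthreshold sinusoid plus
    Gaussian noise; also return the fraction of crossings that fall in the
    positive half of the signal's cycle."""
    t = np.arange(n)
    x = amp * np.sin(2 * np.pi * t / period) + rng.normal(0.0, noise_std, n)
    up = (x[1:] >= thresh) & (x[:-1] < thresh)   # upward crossings
    if not up.any():
        return 0, 0.0
    in_positive_half = (t[1:][up] % period) < period / 2
    return int(up.sum()), float(in_positive_half.mean())

print(threshold_crossings(0.0))   # (0, 0.0): the signal alone never fires
count, frac = threshold_crossings(0.3)
# with moderate noise, many crossings appear, clustered near the signal peaks
```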
The models considered include simple bistable dynamics which provide a good description of the noise-induced cooperative behavior in neurons on a statistical or coarse-grained level, but do not account for many important features (e.g. capacitative effects) of real neuron behavior, as well as very simple "integrate-fire" models which provide reasonable descriptions of capacitative behavior but attempt to duplicate refractoriness through the boundary conditions on the dynamics. Both these classes of models can be derived through a systematic reduction of the Hodgkin-Huxley equations (assumed to be the best available description of neural dynamics). Cooperative effects, e.g. "stochastic resonance", arising through the interplay of the noise and deterministic modulation, are examined, together with their possible implications in the features of Inter-Spike-Interval Histograms (ISIHs) that are ubiquitous in the neurophysiological literature. We explore the connection between stochastic resonance, usually defined at the level of the power spectral density of the response, and the cooperative behavior observed in the ISIH. For the simpler (integrate-fire-type) threshold model, a precise connection between the two statistical measures (the power spectral density and the ISIH) of the system response can be established; for the more complex (bistable) models, such a connection is, currently, somewhat tenuous. DAVID DEMARIS, UNIVERSITY OF TEXAS/AUSTIN (DEMARIS at PINE.ECE.UTEXAS.EDU) (TITLE TO BE ADDED) A body of work on nonlinear oscillations in vision has emerged, both in the analysis of single unit inter-spike intervals and in a theory of perceptual encoding via spatio-temporal patterns. This paper considers other roles nonlinear oscillating networks may play in an active visual system. Kaneko's coupled map lattice models and extensions are examined for utility in explaining tasks in attention and monocular depth perception. 
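[Editor's aside: for readers unfamiliar with coupled map lattices, a minimal diffusively coupled logistic-map lattice - the generic textbook form, not DeMaris's model - can be sketched as:]

```python
import numpy as np

def cml_step(x, eps=0.3, a=1.7):
    """One synchronous update of a diffusively coupled logistic-map lattice:
    x_i <- (1-eps)*f(x_i) + (eps/2)*(f(x_{i-1}) + f(x_{i+1})),
    with f(x) = 1 - a*x^2 and periodic boundary conditions."""
    f = 1.0 - a * x**2
    return (1.0 - eps) * f + (eps / 2.0) * (np.roll(f, 1) + np.roll(f, -1))

x = np.linspace(-0.9, 0.9, 16)  # arbitrary initial lattice state
for _ in range(200):
    x = cml_step(x)
# the lattice stays bounded; domains of coherent vs. irregular sites can form
```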
Visual cortex is considered as an array of coupled nonlinear oscillators (complex cell networks) forced by embedded simple cell detector networks of the Hubel and Wiesel type. In this model, self-organization of local and global bifurcation parameters may form spatial regions of heightened activity in attentional modules and form bounded dynamics regimes (domains) in visual modules related to binding and separation of figure and ground. This research is still in a rather speculative stage pending simulation studies; hence the aims of this talk are:

* Provide a brief introduction to dynamics of spatially extended nonlinear systems such as coupled map lattices with self-organized control parameters and how these may support perceptual activity and encoding.
* Review some recent work on underlying physiological mechanisms and measurements which support the use of nonlinear oscillator models.
* Describe visual phenomena in the areas of ambiguous depth perception, figure/ground feature discrimination, and spatial distortions. Discuss mechanisms in coupled map models which may account for these phenomena.

A demonstration of experiments involving cellular automata processing of Necker cube and Muller/Lyer figures is possible running on an IBM compatible PC.

SRIRAM GOVINDARAJAN AND VINCENT BROWN, UNIVERSITY OF TEXAS/ARLINGTON (SRIRAM at CSE.UTA.EDU)
"FEATURE BINDING AND ILLUSORY CONJUNCTIONS: PSYCHOLOGICAL CONSTRAINTS AND A MODEL"

(Abstract to be added)

GUENTER GROSS AND BARRY RHOADES, UNIVERSITY OF NORTH TEXAS (GROSS at MORTICIA.CNNS.UNT.EDU)
"SPONTANEOUS AND EVOKED OSCILLATORY BURSTING STATES IN CULTURED NETWORKS"

In monolayer networks derived from dissociated embryonic mouse spinal cord tissue, and maintained in culture for up to 9 months, oscillatory activity states are common in the burst domain and represent the most reproducible of all network behaviors.
Extensive observations of self-organized oscillatory activity indicate that such network states represent a generic feature of networks in culture and suggest that possibly all networks comprised of mammalian neurons have a strong tendency to oscillate.

Native Oscillatory States: The most characteristic pattern is a temporally variable, but spatially coordinated bursting. Quasi-periodic oscillations are generally transient but coordinated among most of the electrodes recording spontaneous activity. Networks left undisturbed for several hours display a tendency to enter coordinated oscillatory states and to remain in these states for long periods of time.

Pharmacologically-induced oscillations: Blocking synaptic inhibition at glycine and GABA receptors increases spike rates, but generates a much different response pattern than that obtained from the excitatory transmitters. Whereas the latter produce excitation by disrupting existing patterns with increased spike and burst activity and only occasional transient oscillatory patterns, disinhibition brings the network into more tightly synchronized bursting with highly regular burst durations and periods in essentially all cultures. Such states can last for hours with minimal changes in burst variables. Other compounds such as 4-aminopyridine and cesium increase burst rate and regularity, in a manner qualitatively matched by elevating extracellular potassium. Cultures are much more sensitive to strychnine than to bicuculline. Whereas oscillatory behavior usually begins at 20-30 uM bicuculline, similar pattern changes are obtained with nanomolar to low micromolar quantities of strychnine. Burst fusion and intense spiking (produced by NMDA) have never been observed as a result of network disinhibition.
Extensive pharmacological manipulations of calcium and potassium channels have confirmed that spontaneous oscillations depend on potassium currents and intracellular Ca++ levels but not on calcium-dependent potassium conductances.

Electrically-induced oscillations: Networks can often be periodically driven by repetitive electrical stimulation at a single electrode. Repeated stimulus trains have also been observed to induce episodes of intense, coherent bursting lasting beyond the termination of the stimulus pattern. Such responses appear "epileptiform" and might be considered a cultured network parallel to electrical induction of an epileptic seizure in vivo.

Entrainment: Repetitive pulse train stimulation often causes the network burst patterns to organize and finally follow the temporal stimulation pattern. We have also found that networks in pharmacologically-induced periodic bursting modes can be entrained to a periodic single channel stimulation if the stimulus cycle is at or near a multiple of the spontaneous burst cycle period. The ability of a few axons at one electrode to entrain an entire network of 100-300 neurons is remarkable and invites studies of entrainment mechanisms in these networks.

ALEXANDER GRUNEWALD AND STEPHEN GROSSBERG, BOSTON UNIVERSITY (ALEX at CNS.BU.EDU)
"BINDING OF OBJECT REPRESENTATIONS BY SYNCHRONOUS CORTICAL DYNAMICS EXPLAINS TEMPORAL ORDER AND SPATIAL POOLING DATA"

A key problem in cognitive science concerns how the brain binds together parts of an object into a coherent visual object representation. One difficulty that this binding process needs to overcome is that different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a mechanism that resynchronizes cortical activities corresponding to the same retinal object.
A neural network model based on cooperation between oscillators via feedback from a subsequent processing stage is presented that is able to rapidly resynchronize desynchronized featural activities. Model properties help to explain perceptual framing data, including psychophysical data about temporal order judgments. These cooperative model interactions also simulate data concerning the reduction of threshold contrast as a function of stimulus length. The model hereby provides a unified explanation of temporal order and threshold contrast data as manifestations of a cortical binding process that can rapidly resynchronize image parts which belong together in visual object representations. DAVID HORN, TEL AVIV UNIVERSITY (HORN at VM.TAU.AC.IL) "SEGMENTATION AND BINDING IN OSCILLATORY NETWORKS" Segmentation and binding are cognitive operations which underlie the process of perception. They can be understood as taking place in the temporal domain, i.e. relying on features like simultaneity of neuronal firing. We analyze them in a system of oscillatory networks, consisting of Hebbian cell assemblies of excitatory neurons and inhibitory interneurons in which the oscillations are implemented by dynamical thresholds. We emphasize the importance of fluctuating input signals in producing binding and in enabling segmentation of a large set of common inputs. Segmentation properties can be studied by investigating the cyclic attractors of the system and the partial symmetries that they implement in a symmetric neural network. ARUN JAGOTA, MEMPHIS STATE UNIVERSITY, AND XIN WANG, UNIVERSITY OF CALIFORNIA/LOS ANGELES (JAGOTA at NEXT1.MSCI.MEMST.EDU) "OSCILLATIONS IN DISCRETE AND CONTINUOUS HOPFIELD NETWORKS" The first part of this talk deals with analyzing oscillatory behavior in discrete Hopfield networks with symmetric weights. It is well known that under synchronous updates, such networks admit cyclic behavior of order two but no higher. 
The two-cycles themselves are not known to have any useful characterizations in general however. By imposing certain restrictions on the weights, we obtain an exact characterization of the two-cycles in terms of properties of a certain graph underlying the network. This characterization has the following benefits. First, in small networks of this kind, all the two-cycles may be found merely by inspection of the underlying graph (which depends only on the weights). Second, this characterization suggests certain applications which exploit the two-cycles. We illustrate both of these benefits in detail. The second part of this talk deals with synthesizing chaotic or periodic oscillatory behavior in continuous Hopfield networks for the purposes of solving optimization problems. It is well known that certain dynamical rules for continuous Hopfield networks with symmetric weights exhibit convergent behavior to stable fixed points. Such convergent behavior is one reason for the use of these networks to solve optimization problems. Such behavior, however, also limits their performance in practice, as it is of the gradient-descent form, which often leads to sub-optimal local minima. As a potential remedy to this problem, we propose methods for injecting controllable chaos or periodic oscillations into the dynamical behavior of such networks. In particular, our methods allow chaotic or oscillatory behavior to be initiated and converted to convergent behavior at the turn of a "knob". This is in analogy with simulated annealing where at high temperature the behavior is "random" and at low temperature relatively "convergent". We introduce chaos or periodic oscillations into the network in two ways: one via the external input to each neuron, and the other by replacing each neuron by two neurons arranged into a coupled oscillator. We present some experimental results on the performance of our networks, with and without the oscillations, on the Maximum Clique optimization problem. 
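The bound invoked above — that a discrete Hopfield network with symmetric weights, under synchronous updates, admits cycles of length two but no higher — is easy to check numerically. A minimal sketch, assuming NumPy and a randomly generated symmetric weight matrix (illustrative only; not the networks or weight restrictions of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric weights with zero diagonal (hypothetical example weights).
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def sync_update(s):
    """Synchronous update: every unit simultaneously takes the sign of its local field."""
    return np.where(W @ s >= 0, 1, -1)

# Iterate from a random +/-1 state until some state repeats.
s = rng.choice([-1, 1], size=n)
seen = {}
t = 0
while tuple(s) not in seen:
    seen[tuple(s)] = t
    s = sync_update(s)
    t += 1

# Length of the terminal cycle; by the classical result for symmetric
# weights it is 1 (a fixed point) or 2, never more.
cycle_len = t - seen[tuple(s)]
assert cycle_len in (1, 2)
```

Under the additional weight restrictions the abstract proposes, the two-cycles become readable off a graph built from the weights; the sketch above only demonstrates the generic order-two bound.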
SHIEN-FONG LIN, RASHI ABBAS, AND JOHN WIKSO, JR., VANDERBILT UNIVERSITY (LIN at MACPOST.VANDERBILT.EDU) "ONE-DIMENSIONAL MAGNETIC MEASUREMENT OF TWO-ORIGIN BIOELECTRIC CURRENT OSCILLATION" Squid giant axons, when placed in a low-calcium, high-sodium extracellular environment, will abruptly enter a state of self-sustained oscillation. Such an oscillation exhibits a linear temperature dependence in frequency, can be entrained, and enters chaotic states with proper entrainment patterns. The origination of such an oscillation, although of significant implication for neural oscillation in general, has never been extensively studied experimentally. Specifically, one of the most intriguing problems was the scarcity of experimental evidence for symmetrical multiple oscillation origins in such a homogeneous one-dimensional structure. In this presentation, we report a novel non-invasive magnetic observation of a stable 2-origin self-sustained oscillation in squid giant axons. The oscillation showed a standing-wave pattern when observed in the spatial domain, and a proper geometry was required to sustain the 2-origin pattern. The origins were coupled and synchronized in phase. The results from model simulation using an explicit implementation of a propagating Hodgkin-Huxley axon allowed us to investigate the mechanisms underlying such behavior. The study clearly demonstrated the merits of magnetic methods in studying intricate neural oscillations.

ALIANNA MAREN, ACCURATE AUTOMATION CORPORATION, AND E. SCHWARTZ, RADFORD UNIVERSITY (AJMAREN%AAC at OLDPAINT.ENGR.UTC.EDU) "A NEW METHOD FOR CROSS-SCALE INTERACTION USING AN ADAPTABLE BASIC PROCESSING ELEMENT" A new concept for the basic processing element in a neural network allows the characteristics of this element to change in response to changes at the neural network level. 
This makes it possible to have "cross-scale interactions," that is, events at the neural network level influence not only the immediate network state, but also the response characteristics of individual processing elements. This novel approach forms the basis for creating a new class of neural networks, one in which the processing elements are responsive to spatial and historical context. This capability provides a valuable tool in advancing the overall richness and complexity of neural network behavior. The most evident advantage of this new approach is that neural networks can be made dependent, in a substantial way, upon past history for the present state. This property is most useful in applications where past history is important in determining present actions or interpretations. This approach differs in a major way from most current methods for adapting neural networks to exert the influence of time or to provide "learning": most existing methods either maintain the activation due to an initial stimulus (with time-delay connections or recurrent feedback) or change the values of connection weights ("learning"). The approach offered here is substantively different from existing approaches, in that changes are made to the response characteristics of the individual processing units themselves; they now respond differently to stimuli. The model for the new interpretation of the basic processing element comes from considering the basic element as a (statistically large) ensemble of interacting bistate processing units. By way of analogy to domains of neurons in biological systems, we call this ensemble, or basic processing element, an artificial neural domain. The neural domain is modeled at the ensemble level, not at the level of individual components. Using a simple statistical thermodynamics model, we arrive at ensemble characteristics. 
Ensemble, or domain, behavior is controlled not only by input activations but also by parameter values which are modified at the neural network level. This creates an avenue for cross-scale interaction.

GEORGE MOBUS AND PAUL FISHER, UNIVERSITY OF NORTH TEXAS (MOBUS at PONDER.CSCI.UNT.EDU) "EDGE-OF-CHAOS-SEARCH: USING A QUASI-CHAOTIC OSCILLATOR CIRCUIT FOR FORAGING IN A MOBILE AUTONOMOUS ROBOT" A neural circuit that emulates some of the behavioral properties of central pattern generators (CPGs) in animals is used to control a stochastic search in a mobile, autonomous robot. When the robot is not being stimulated by signals that represent mission-support events, it searches its environment for such stimuli. The circuit generates a quasi-chaotic oscillation that causes the robot to weave back and forth like a drunken driver. Analysis of the weave pattern shows that the chaotic component yields sufficient novelty to cause the robot to conduct an effective search in a causally-controlled but non-stationary environment. We show, through simulations, that unlike a random-walk search, which may exhaust the robot's power resources before it accomplishes its mission, a quasi-chaotic search approaches optimality in the sense that the robot is more likely to succeed in finding mission-critical events. The search patterns displayed by the robot resemble, qualitatively, those of foraging animals. When the robot senses a stimulus associated with a mission-support event, a combination of location and distance signals from other parts of the simulated brain converge on the CPG causing it to transition to more ordered directional output. The robot orients relative to the stimulus and follows the stimulus gradient to the source. The possible role of chaotic CPGs and their transitions to ordered oscillation in searching non-stationary spaces is discussed and we suggest generalizations to other search problems. 
The role of learning causal associations as a prerequisite for successful search is also covered.

GEORGE MPITSOS, OREGON STATE UNIVERSITY (GMPITSOS at SLUGO.HMSC.ORST.EDU) "ATTRACTOR GRADIENTS: ARCHITECTS OF NETWORK ORGANIZATION IN BIOLOGICAL SYSTEMS" Biological systems are composed of many components that must produce coherent adaptive responses. The interconnections between neurons in an assembly or between individuals in any population all pose similar questions, e.g.: How does one handle the many degrees of freedom to know how the system as a whole functions? What is the role of the individual component? Although individuals act using local rules, is there some global organizing principle that determines what these rules are? I raise the possibility that many simplifications occur if the system is dissipative; i.e., if it has an attractor such that it returns to a characteristic state in response to external perturbation. I ask: what does the dissipative process do to the system itself? What global organizing effects does it produce? Biological and artificial neural networks are used to describe dissipative processes and to address such questions. Although individual systems may express different details, the fact that attractors are generally applicable constructs suggests that the understanding of one complex system may give insight into similar problems of self-organization in others. Supported by AFOSR 92J-0140.

NAM SEOG PARK, DAVE ROBERTSON, AND KEITH STENNING, UNIVERSITY OF EDINBURGH (NAMSEOG at AISB.EDINBURGH.AC.UK) "FROM DYNAMIC BINDINGS TO FURTHER SYMBOLIC KNOWLEDGE REPRESENTATION USING SYNCHRONOUS ACTIVITY OF NEURONS" A structured connectionist model using temporal synchrony has been proposed by Shastri and Ajjanagadde. This model has provided a mechanism which encodes rules and facts involving n-ary predicates and handles some types of dynamic variable binding using synchronous activity of neurons. 
Although their mechanism is powerful enough to provide a solution to the dynamic variable binding problem, it also shows some limitations in dealing with some knowledge representation issues such as binding generation, consistency checking, and unification, which are important in enabling their model to achieve better symbolic processing capabilities. This paper shows how Shastri and Ajjanagadde's mechanism can be modified and extended to overcome those limitations. The modification is made by redefining a temporal property of one of the four types of node used in their model and replacing it with the newly defined one. Two layers of nodes are also added to enable a uniform layered connection between the antecedent and the consequent of various types of rule, which allows a comparatively straightforward translation from the symbolic representation of rules to their connectionist representation. As a result, the modified system is able to tackle more knowledge representation issues while, at the same time, reducing the number of node types required and retaining the merits of the original model.

ANDREW PENZ, TEXAS INSTRUMENTS (PENZ at RESBLD.TI.COM) (TITLE AND ABSTRACT TO BE ADDED)

BARRY RHOADES, UNIVERSITY OF NORTH TEXAS (RHOADES at MORTICIA.CNNS.UNT.EDU) "GLOBAL NEUROCHEMICAL DETERMINATION OF LOCAL EEG IN THE OLFACTORY BULB" Spatially coherent bursts of EEG oscillations are a dominant electrophysiological feature of the mammalian olfactory bulb, accompanying each inspiratory phase of the respiratory cycle in the waking state and recurring intermittently under moderate anesthesia. In the rat these oscillatory bursts are nearly sinusoidal, with a typical oscillation frequency of 50-60 Hz. The averaged evoked potential (AEP) to repetitive near threshold-level electrical stimulation of either the primary olfactory nerve (PON) or lateral olfactory tract (LOT) has a dominant damped sinusoidal component at the same frequency. 
These oscillations are generated by the negative feedback relationship between the mitral/tufted (MT) cell principal neurons and the GABAergic granule (G) cell interneurons at reciprocal dendro-dendritic synapses of the external plexiform layer (EPL). This EPL generator produces oscillations in mitral/tufted cells and granule cell ensembles, under the high input levels produced by inspiratory activation of the olfactory epithelium or electrical stimulation of the bulbar input or output tracts. The dependence of oscillations in the bulbar EEG and evoked potentials on local and regional alterations in GABAergic neurochemistry was investigated in barbiturate anesthetized Sprague-Dawley rats. The main olfactory bulb, primary olfactory nerve (PON) and lateral olfactory tract (LOT) were surgically exposed unilaterally. Basal EEG from both bulbs and AEPs from the exposed bulb in response to stimulation of the PON and LOT were recorded before and following both local microinjection and regional surface application of the GABA-active neurochemicals muscimol, picrotoxin, and bicuculline. Locally restricted microinjections profoundly altered AEP waveforms, but had negligible effects on the background EEG. Regional applications of the same neurochemicals at the same concentrations across the entire exposed bulbar surface produced discontinuous transitions in EEG oscillatory state. The temporal properties of the basal EEG recorded from a site on the bulbar surface could thus be altered only by GABAergic modification of G->MT cell synapses over a large region of the olfactory bulb. This provides neurochemical evidence that the temporally and spatially patterned oscillatory activity deriving from the interactions of mitral/tufted and granule cells is globally organized; i.e., that global oscillatory state overrides local neurochemistry in controlling background oscillations of local neuronal ensembles. This research was conducted in the laboratory of Walter J. 
Freeman at the University of California, Berkeley and supported primarily by funds from NIMH grant #MH06686.

IVAN SOLTESZ, UNIVERSITY OF TEXAS HEALTH SCIENCES CENTER, DALLAS (SOLTESZ at UTSW.SWMED.EDU) (TITLE AND ABSTRACT TO BE ADDED)

MARTIN STEMMLER, CALIFORNIA INSTITUTE OF TECHNOLOGY (STEMMLER at KLAB.CALTECH.EDU) "SYNCHRONIZATION AND OSCILLATIONS IN SPIKING NETWORKS" While cortical oscillations in the 30 to 70 Hz range are robust and commonly found in local field potential measurements in both cat and monkey visual cortex (Gray et al., 1990; Eckhorn et al., 1993), they are much less evident in single spike trains recorded from behaving monkeys (Young et al., 1982; Bair et al., 1994). We show that a simple neural network with spiking "units" and a plausible excitatory-inhibitory interconnection scheme can explain this discrepancy. The discharge patterns of single units are highly irregular and the associated single-unit power spectrum is flat with a dip at low frequencies, as observed in cortical recordings in the behaving monkey (Bair et al., 1994). However, if the local field potential, defined as the summed spiking activity of all "units" within a particular distance, is computed over an area large enough to include direct inhibitory interactions among cell pairs, a prominent peak around 30-50 Hz becomes visible.

MARK STEYVERS, INDIANA UNIVERSITY AND CEES VAN LEEUWEN, UNIVERSITY OF AMSTERDAM, THE NETHERLANDS (MSTEYVER at HERMES.PSYCH.INDIANA.EDU) "USE OF SYNCHRONIZED CHAOTIC OSCILLATIONS TO MODEL MULTISTABILITY IN PERCEPTUAL GROUPING" Computer simulations are presented to illustrate the utility of a new form of dynamic coupling in neural networks. It is demonstrated that oscillatory neural network activity can be synchronized even if the network remains in a chaotic state. 
An advantage of chaotic synchronous oscillations over periodic ones is that chaos provides a very powerful and intrinsic mechanism for solving the binding problem and, at the same time, for modeling multistability in perception. The resulting switching-time distributions of a multistable grouping show qualitative similarities with experimentally obtained distributions. The chaotic oscillatory couplings were used to model the Gestalt laws of proximity, good continuation and symmetry preference. In addition, interpretations provided by the model were shown to be subject to sequence influences.

DAVID TAM, UNIVERSITY OF NORTH TEXAS (DTAM at UNT.EDU) "SPIKE TRAIN ANALYSIS FOR DETECTING SYNCHRONIZED FIRING AMONG NEURONS IN NETWORKS" A specialized spike train analysis method is introduced to detect synchronized firing between neurons. This conditional correlation technique is developed to detect the probability of firing and non-firing of neurons based on the pre- and post-conditional cross-intervals, and interspike intervals after the reference spike has fired. This statistical measure is an estimation of the conditional probability of firing of a spike in a neuron based on the probability of firing of another neuron after the reference spike has occurred. By examining the lag times of post-interspike intervals and post-cross intervals, synchronized coupling effects between the firing of the reference neuron and that of other neurons can be revealed.

ELIZABETH THOMAS, WILLAMETTE COLLEGE (ETHOMAS at WILLAMETTE.EDU) "A COMPUTATIONAL MODEL OF SPINDLE OSCILLATIONS" A model of the thalamocortical system was constructed for the purpose of a computational analysis of spindle oscillations. The parameters used in the model were based on experimental measurements. The model included a reticular thalamic nucleus and a dorsal layer. The thalamic cells were capable of undergoing a low-threshold calcium-mediated spike. 
The simulation was used to investigate the plausibility and ramifications of certain proposals that have been put forward for the production of spindle oscillations. An initial stimulus to the model reticular thalamic layer was found to give rise to activity resembling spindle activity. The emergent population oscillations were analyzed for factors that affected their frequency and amplitude. The role of cortical feedback to the pacemaking RE layer was also investigated. Finally, a non-linear dynamics analysis was conducted on the emergent population oscillations. This activity was found to yield a positive Lyapunov exponent and define an attractor of low dimension. Excitatory feedback was found to decrease the dimensionality of the attractor at the reticular thalamic layer.

ROGER TRAUB, IBM T.J. WATSON RESEARCH CENTER (TRAUB at WATSON.IBM.COM) "CELLULAR MECHANISMS OF SOME EPILEPTIC OSCILLATIONS" Cortical circuitry can express epileptic discharges (synchronized population oscillations) when a number of different system parameters are experimentally manipulated: blockade of fast synaptic inhibition; enhancement of NMDA conductances; or prolongation of non-NMDA conductances. Despite the differences in synaptic mechanism, the population output is, remarkably, stereotyped. We shall present data indicating that the stereotypy can be explained by three basic ideas: recurrent excitatory connections between pyramidal neurons, the ability of pyramidal dendrites to produce repetitive bursts, and the fact that experimental epilepsies engage one or another prolonged depolarizing synaptic current.

SETH WOLPERT, UNIVERSITY OF MAINE (WOLPERT at EECE.MAINE.EDU) "MODELING NEURAL OSCILLATIONS USING VLSI-BASED NEUROMIMES" As a prelude to the VLSI implementation of a locomotory network, neuronal oscillators that utilize reciprocal inhibition (RI) and recurrent cyclic inhibition (RCI) were re-created for parametric characterization using comprehensive VLSI-based artificial nerve cells, or Neuromimes. 
Two-phase RI oscillators consisting of a pair of self-exciting, mutually inhibiting neuronal analogs were implemented using both fixed and dynamic synaptic weighting, and cyclic inhibitory RCI ring networks of three and five cells with fixed synaptic weighting were characterized with respect to cell parameters representing resting cell membrane potential, resting threshold potential, refractory period and tonic inhibition from an external source. For each of these parameters, the frequency at which an individual cell would self-excite was measured. The impact of that cell's self-excitatory frequency on the frequency of the total network was then assessed in a series of parametric tests. Results indicated that, while all four input parameters continuously and coherently affected the cellular frequency, one input parameter, duration of the cellular refractory period, had no effect on overall network frequency, even though the cellular frequency ranged over more than two orders of magnitude. These results would suggest that neuronal oscillators are sensitive to concentrations of the ionic species contributing to resting cell membrane potential and threshold, but are stable with respect to cellular conditions affecting refraction, such as the conditions in the sodium inactivation channels.

ROBERT WONG, DOWNSTATE MEDICAL CENTER/BROOKLYN (NO E-MAIL; TELEPHONE 718-270-1339, FAX 718-270-2241) (TITLE AND ABSTRACT TO BE ADDED)

DAVID YOUNG, LOUISIANA STATE UNIVERSITY (DYOUNG at MAX.EE.LSU.EDU) "OSCILLATIONS CREATED BY THE FRAGMENTED ACCESS OF DISTRIBUTED CONNECTIONIST REPRESENTATIONS" The rapid and efficient formation of transient interactions on a systems level is viewed as a necessary aspect of cognitive function. It is a principle behind the binding problem of symbolic rule-based reasoning which has seen many recent connectionist approaches inspired by observations of synchronized neural oscillations in separate cortical regions of the brain. 
However, the combinatorial complexity of linking each of the many possible interactions that may be needed exposes a serious limitation inherent to connectionist networks. As is well known, an artificial neural network constitutes a massively parallel device, yet above the most basic organizational level it effectively does only one thing at a time. This limitation is called the opacity of a neural network; it describes the ability to access the knowledge embodied in the connections of a network from outside the network. This talk presents two new results relevant to neural oscillations. Firstly, wider access to the information storage of feedback structures is achieved through composite attraction basins that represent a combination of other learned basins. Secondly, a dynamics of inactivity is introduced and is shown to support concurrent processes within the same structure. By quieting the activity of dissimilar network elements, system states are temporarily merged to form combined states of smaller dimension. The merged state will then proceed along a monotone decreasing path over an energy surface toward a composite basin just as a single state will proceed toward a single basin. Since changes are not made to interconnection weights, specific instantiations of full dimension may be reconstructed from vector fragments. Moreover, the fragment size is dynamic and may be altered as the system operates. Based on this observation, a new dynamics of inactivity for feedback connectionist structures is presented, allowing the network to operate in a fragment-wise manner on learned distributed representations. The new mechanism is seen as having tracks of activation passing through an otherwise quiet system. The active fragment repeatedly passes through the distributed representation, setting up an oscillation. Inactive portions of the structure may then be utilized by other processes that are locally kept separate through phase differences and efferent coincidence. 
Out-of-phase tracks may be brought into synchrony, thus allowing the interaction of disparate features of objects by lowering the inhibition of the neighboring elements involved. The feedback structure is less than fully connected globally but highly interconnected for local neighborhoods of network elements. Reduced global connectivity in an environment operating fragment-wise permits true concurrent behavior as opposed to the local use of time-shared resources, which is not concurrent. A second structure is interwoven with and regulates the first through inhibitory stimulation. This relationship of the two networks agrees with the predicted regulatory influence that neurons with smooth dendritic arborizations have on pyramidal cells and stellate cells displaying spiny dendrites.

GEOFFREY YUEN, NORTHWESTERN UNIVERSITY (YUEN at MILES.PHYSIO.NWU.EDU) "FROM THE ADJUSTABLE PATTERN GENERATOR MODEL OF THE CEREBELLUM TO BISTABILITY IN PURKINJE CELLS" Based on the anatomy and physiology of the cerebellum and red nucleus, the adjustable pattern generator (APG) model is a theory of movement control that emphasizes the quasi-feedforward nature of higher motor control processes. This is in contrast to the heavily feedback-based control processes on the level of the spinal cord. Thus, limb movement-related motor commands (i.e., high-frequency burst discharges) in red nucleus during awake-monkey experiments are postulated to be generated by endogenous CNS pattern generators rather than via continuous feedback from the periphery. The postulated endogenous movement-command CNS pattern generator includes neurons in magnocellular red nucleus (RNm), deep cerebellar nuclei (i.e., nucleus interpositus (NI) for limb movements) and cerebellar Purkinje cells. Recurrent excitatory interactions between RNm and NI which give rise to burst discharges are modulated by the inhibitory outputs of cerebellar Purkinje cells. 
Thus dynamic burst durations and patterns are sculpted by learning-based inhibition from Purkinje cells, giving rise to appropriate movement command signals under different movements and circumstances. Intrinsic to the concept of a pattern generator is the existence of self-sustained activities. Aside from the reverberatory positive feedback circuit in the recurrent loop between the cerebellum and red nucleus, bistability in the membrane potentials of Purkinje cells can also support self-sustained activity. This concept of bistability is based on the phenomena of plateau potentials as observed in Purkinje cell dendrites. This talk will concisely summarize the APG theory and circuitry, report on the results of its use in limb-movement control simulations and describe current efforts to capture the biophysical basis of bistability in Purkinje cells. The bistability of cerebellar Purkinje cells also has significance particularly for the control of oscillations in the recurrent excitatory circuits between red nucleus and deep cerebellar nuclei, as well as movement control in general. With respect to the biophysical basis of dendritic bistability, we have carried out simulations and phase-plane analysis of the ionic currents which underlie dendritic plateau potentials in Purkinje cells. Here we shall report on the results of the phase plane analyses of the systems based on high-threshold P-calcium, delayed rectifier potassium and slow, calcium-mediated potassium channels. We gratefully acknowledge the support of the various aspects of this work by ONR (N-00014-93-1-0636 to G. L. F. Yuen), NIH (P50MH48185-01 to J. C. Houk) and NSF (NS-26915 to P. E. Houkberger). DIRECTIONS TO CONFERENCE AND EVENING ACTIVITIES To Those Attending the MIND conference on Oscillations in Neural Systems: For those of you who are baseball fans, or are perhaps just curious to see the new Ballpark in Arlington, we are arranging a trip to the game on Saturday May 7. 
The Minnesota Twins are in town to take on the Texas Rangers. Game time is 7:30 pm. If we gather a large enough crowd, we can probably get a group discount. Please send a response if you are interested. The second, but not least, purpose of this message is to inform those of you arriving by car how to get to the motel and UTA campus. If you are arriving by air, you need not read further. If you are entering Arlington from the north side via Interstate 30, you will exit on Cooper Street and travel south (after exiting you should cross back over the freeway to head south). You will drive about two or three miles to reach campus. You will pass Randol Mill and Division streets. UTA is about four blocks beyond Division Street. You should turn east (left) on Mitchell Street. If you get to Park Row, you have gone too far. To get to the Park Inn, continue past Mitchell one block to Benge. The Inn is on your right. If you are entering Arlington from the south side via Interstate 20, you should exit on Cooper Street and head north. You will drive three or four miles to reach campus. Some of the major streets you will pass are Arbrook, Arkansas and Pioneer Parkway, and Park Row. UTA is just beyond Park Row. You should turn east (right) on Mitchell Street. If you get to Border Street, you have gone too far. The Park Inn is two blocks north of Park Row on your left. Turn left on Benge Street. Once you are on Mitchell, continue east two blocks to West Street and turn left (north). Proceed one block to Nedderman. There are two parking lots at the corner of West and Nedderman. If possible park in the north lot. You will now have the nursing building to the west and the business building to the north. To get to the library on foot from the parking lot, head west towards the nursing building. You will cross on a sidewalk with the nursing building to your left and the parking garage to your right. (DO NOT park in the parking garage. It costs an arm and a leg. 
Parking in the other lot is free.) When you cross the street past the parking garage, the library is the building on the right. The Life Sciences Building will be on the left. The conference is on the sixth floor of the library, in the Parlor. Parking permits (free of charge) will be available at the conference registration table, as will campus maps. If you are staying at the Inn, it is probably easier to park at the Inn and then walk to campus (two blocks away). Campus maps will be available at the Park Inn desk. Hope the directions are clear. Vince Brown b096vrb at utarlg.uta.edu

Registration and Travel Information

Official Conference Motel: Park Inn 703 Benge Drive Arlington, TX 76013 1-800-777-0100 or 817-860-2323 A block of rooms has been reserved at the Park Inn for $35 a night (single or double). Room sharing arrangements are possible. Reservations should be made directly through the motel. Official Conference Travel Agent: Airline reservations to Dallas-Fort Worth airport should be made through Dan Dipert travel in Arlington, 1-800-443-5335. For those who wish to fly on American Airlines, a Star File account has been set up for a 5% discount off lowest available fares (two week advance, staying over Saturday night) or 10% off regular coach fare; arrangements for Star File reservations should be made through Dan Dipert. Please let the conference organizers know (by e-mail or telephone) when you plan to arrive: some people can be met at the airport (about 30 minutes from Arlington), others can call Super Shuttle at 817-329-2000 upon arrival for transportation to the Park Inn (about $14-$16 per person). Registration for the conference is $25 for students, $65 for non-student oral or poster presenters, $85 for others. MIND members will have $20 (or $10 for students) deducted from the registration. A registration form is attached to this announcement. Registrants will receive the MIND monthly newsletter (on e-mail when possible) for the remainder of 1994. 
REGISTRATION FOR MIND CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS, UNIVERSITY OF TEXAS AT ARLINGTON, MAY 5-7, 1994

Name ______________________________________________________________
Address ___________________________________________________________
        ___________________________________________________________
        ___________________________________________________________
        ___________________________________________________________
E-Mail __________________________________________________________
Telephone _________________________________________________________

Registration fee enclosed:
_____ $15 Student, member of MIND
_____ $25 Student
_____ $65 Non-student oral or poster presenter
_____ $65 Non-student member of MIND
_____ $85 All others

Will you be staying at the Park Inn? ____ Yes ____ No
Are you planning to share a room with someone you know? ____ Yes ____ No
If so, please list that person's name __________________________
If not, would you be interested in sharing a room with another conference attendee to be assigned? ____ Yes ____ No

PLEASE REMEMBER TO CALL THE PARK INN DIRECTLY FOR YOUR RESERVATION (WHETHER SINGLE OR DOUBLE) AT 1-800-777-0100 OR 817-860-2323.

From tenorio at ecn.purdue.edu Wed May 4 15:40:37 1994 From: tenorio at ecn.purdue.edu (tenorio@ecn.purdue.edu) Date: Wed, 4 May 1994 14:40:37 -0500 Subject: Financial Forecasting Competition Message-ID: <199405041940.OAA18035@dynamo.ecn.purdue.edu> At the risk of clogging everyone's mailboxes, and as someone involved with the competition I feel the need to answer Bill's concerns:

>Motivated by the announcement on this list of the "First
>International Nonlinear Financial Forecasting Competition",
>I would like to raise a concern that has troubled me for
>some time. I wonder whether it is really socially responsible
>to work on these things, or to support such work.
> .. 
> >What concerns me, is that the financial forecasting techniques >I have seen are not oriented toward predicting the failure >or success of individual enterprises, but rather toward >identifying and predicting global trends in the flow of >money. The capacity to predict the failure or success of an enterprise is related more to analysis of the fundamentals of the economy than to technical analysis. In technical analysis, the supposition is made that price reflects all the information available to the market participants, as well as their expectations. I have not seen all the entries in the competition itself, but it is safe to assume that the predictors are not based on identifying "global trends in the flow of money." As a matter of fact, the competition is restricted to time series prediction processes where a single price series exists, apart from the reality of the world that generated it. You could say that this is the ultimate technical analysis test: a single price series alone. The problem posed here seems to be harder than the problem addressed by real traders, who have the benefit of "market sideviews" and all the rumors, etc. Which brings us to the motivation behind this competition. This competition was meant as a follow-up to the work of Weigend, colleagues and collaborators. The Santa Fe Competition results, although not geared toward economic time series, were basically all negative regarding their predictability. Meanwhile, several software houses were making claims of great prediction accuracy, as well as selling commercial systems of supposedly great value based on non-linear techniques. It seemed impossible to know for sure whether or not such techniques are viable. The charter of the competition, in spite of its limited scope, is to sketch an answer as to whether there is validity to the use of non-linear techniques on this type of time series. 
There is a lot of smoke and mirrors, folklore, and even more bizarre tales when it comes to predictability of the markets. Our goal is to shed some light on this, at least from this restricted viewpoint. Second, a number of researchers have looked at the economy as the result of the interaction of many independent agents operating with similar policies. A sort of emergent behavior in a parallel distributed soup. Not very different from other systems such as biological, ecological, and neurological networks. If this is the case, predictability of one has some implications for predictability of the others, at least in a gross sense. Now, commercial investment firms use all forms of prediction schemes in the market: from fundamental analysis to looking at the planets. It is all done in the name of making money. We do not claim to have an answer to the "ethics" of different methods. Besides, they all seem to have the same (irk...) objective. For the market to work, there must be a seller and a buyer. Therefore someone must always be on the wrong side of the issue. So if company A did succeed and some investor lost a lot of money by not believing in its success, is this lack of belief, after his/her loss, to be blamed as "promoting the destruction of a company's future"? Is that judgment changed by the fact that we know that the investor used fundamental analysis, whereas the "other guy" used technical analysis? How about if the roles were reversed?

>I am skeptical that this is possible in the first place, but even if it is possible, it seems to me that to make money this way is to be a parasite upon the financial system, rather than to serve it.

Some of us are skeptical too, thus the competition. But what is the reason for the fear of its results? Again, I do not claim the authority to make ethical judgments on our financial system, which is plagued by both fundamental and technical analysts. 
From a strictly economic viewpoint, all market players are buying and selling risk. This form of financial insurance is just a more sophisticated form of insurance on goods, which has promoted the great prosperity of western civilization in the last 500 years. Without insurance, ship owners would not venture the oceans after merchandise, etc. Without it today, we would not have simple things like the post office, UPS and the like, and farmers could not guarantee crop prices, etc. (For lack of health insurance some voters elected our President.) It is all about minimizing risk, no matter how we do it. The problem is that the person who assumes your risk does not do it for anyone's blue eyes, but to make a living, and therefore will use whatever means are available. Our role is to minimize the risk of investment in technologies that will not promote a true predictive gain. But I would like to remind you that science cannot verify negative assertions; the scientific method is not adequate for that. Further, our resources are limited. The best we can hope for is to shed some light, and that is our charter as scientists. And the question is: "Where is the Truth concerning the performance of these methods?" From crg at ida.his.se Thu May 5 07:40:32 1994 From: crg at ida.his.se (Connectionist) Date: Thu, 5 May 94 13:40:32 +0200 Subject: CFP: SCC-95 THE SECOND SWEDISH CONFERENCE ON CONNECTIONISM The Connectionist Research Group University of Skovde, SWEDEN Message-ID: <9405051140.AA05414@mhost.ida.his.se> March 2-4, 1995 CALL FOR PAPERS SCOPE OF THE CONFERENCE Understanding neural information processing properties characterizes the field of connectionism, also known as Artificial Neural Networks (ANN). The rapid growth, expansion and great popularity of connectionism are motivated by its new way of approaching and understanding the problems of artificial intelligence, and its applicability in many real-world applications. 
There are a number of subfields of connectionism, among which we distinguish the following. The importance of a "Theory of connectionism" cannot be overstressed. Interest in theoretical analysis of neuronal models, and in the complex dynamics of network architectures, is growing rapidly. It is often argued that abstract neural network models are best understood by analysing their computational properties with respect to their biological counterparts. A clear theoretical approach to developing neural models also provides insight into the dynamics, learning, functionality and probabilities of different connectionist networks. "Cognitive connectionism" bridges the gap between the theory of connectionism and cognitive science by modelling higher-order brain functions from psychology using methods offered by connectionist models. The findings of this field are often evaluated by their neuropsychological validity and not by their functional applicability. Sometimes the field of connectionism is referred to as the "new AI". Its applicability in AI has spawned a belief that AI will benefit from a good understanding of neural information processing capabilities. The subfield "Connectionism and artificial intelligence" is also concerned with the distinction between connectionist and symbolic representations. The wide applicability and problem-solving abilities of neural networks are exposed in "Real-world computing". Robotics, vision, speech and neural hardware are some of the topics in this field. "The philosophy of connectionism" is concerned with such diverse questions as the mind-body problem and the relations between distributed representations, their semantics and implications for intelligent behaviour. Experimental studies in "Neurobiology" have implications for the validity and design of new artificial neural architectures. This branch of connectionism addresses topics such as self-organisation, modelling of cortex, and associative memory models. 
A number of internationally renowned keynote speakers will be invited to give plenary talks on the subjects listed above. GUIDELINES FOR PAPER SUBMISSIONS Instructions for submission of manuscripts: Papers may be submitted, in three (3) copies, to one of the following sessions. ~ Theory of connectionism ~ Cognitive connectionism ~ Connectionism and artificial intelligence ~ Real-world computing ~ The philosophy of connectionism ~ Neurobiology A note should state the principal author and e-mail address (if any). It should also indicate which session the paper is submitted to. Length: Papers must be a maximum of ten (10) pages long (including figures and references), with a text area of 6.5 inches by 9 inches (including footnotes but excluding page numbers), in a 12-point font. Template and style files conforming to these specifications for several text formatting programs will be available to authors of accepted papers. Deadline: Papers must be received by Thursday, September 1, 1994 to ensure reviewing. All submitted papers will be reviewed by members of the program committee on the basis of technical quality, research significance, novelty and clarity. The principal author will be notified of acceptance no later than Tuesday, October 18, 1994. Proceedings: All accepted papers will appear in the conference proceedings. CONFERENCE CHAIRS Lars Niklasson, Mikael Boden lars.niklasson at ida.his.se mikael.boden at ida.his.se TENTATIVE SPEAKERS Michael Mozer University of Colorado, USA Ronan Reilly University College Dublin, Ireland Paul Smolensky University of Colorado, USA David Touretzky Carnegie Mellon University, USA This list is not yet complete. PROGRAM COMMITTEE Jim Bower California Inst. 
of Technology, USA Harald Brandt Ellemtel, Sweden Ron Chrisley University of Sussex, UK Gary Cottrell University of California, San Diego, USA Georg Dorffner University of Vienna, Austria Tim van Gelder Australian National University, Australia Agneta Gulz University of Skovde, Sweden Olle Gallmo Uppsala University, Sweden Tommy Garling Goteborg University, Sweden Dan Hammerstrom Adaptive Solutions Inc., USA Jim Hendler University of Maryland, USA Erland Hjelmquist Goteborg University, Sweden Anders Lansner Royal Inst. of Techn., Stockholm, Sweden Reiner Lenz Linkoping University, Sweden Ajit Narayanan University of Exeter, UK Jordan Pollack Ohio State University, USA Noel Sharkey University of Sheffield, UK Bertil Svensson Chalmers Inst. of Technology, Sweden Tere Vaden University of Tampere, Finland PLEASE ADDRESS ALL CORRESPONDENCE TO: "SCC-95" The Connectionist Research Group University of Skovde P.O. Box 408 541 28 Skovde, SWEDEN E-mail: crg at ida.his.se From kipp at nvl.army.mil Fri May 6 11:50:00 1994 From: kipp at nvl.army.mil (Teresa Kipp) Date: Fri, 6 May 94 11:50 EDT Subject: Recruitment of Ph.D Neural Net Scientists Message-ID: JOBS FOR TALENTED NEURAL NET Ph.D's -------------------------------------- The Computer Vision Research Branch of the US Army Research Laboratory is composed of Ph.D's working in both theory and experimentation in the fields of theoretical computer science, probability and statistics. Our branch also has contracts with top theoretical computer scientists and mathematicians from various universities, providing continuous interaction through regular visits. Our research is to design algorithms to recognize military targets in complex imagery generated by a variety of sensors. This research also includes commercial applications such as handwriting and face recognition. We are in the process of enlarging our branch by hiring neural net scientists on our in-house staff and by contracting with university neural net scientists. 
Part of the new research effort is the comparison between neural net algorithms and those presently designed by our branch, which are model-based with combinatorial trees, in order to stimulate cross-fertilization, hybrid systems, and the unification of these two approaches. Talented neural net Ph.D's are invited to submit a copy of their curriculum vitae by regular or electronic mail. Vitae sent by electronic mail are acceptable as either latex or postscript files. Send all communications to Ms. Teresa Kipp at any of the following addresses: electronic mail to: kipp at nvl.army.mil or send by regular mail to: DEPARTMENT OF THE ARMY US ARMY RESEARCH LABORATORY AMSRL SS SK (T. KIPP) 10221 BURBECK RD STE 430 FT BELVOIR VA 22060-5806 or contact Ms. Kipp at (703)-704-3656. From esann at dice.ucl.ac.be Sat May 7 17:21:18 1994 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Sat, 7 May 1994 23:21:18 +0200 Subject: ESANN'94 proceedings Message-ID: <9405072117.AA04231@ns1.dice.ucl.ac.be> ________________________________________________ ------------------------- ! PROCEEDINGS AVAILABLE ! ------------------------- ________________________________________________ ESANN ' 94 European Symposium on Artificial Neural Networks Brussels, April 20-21-22, 1994 ________________________________________________ The second European Symposium on Artificial Neural Networks was held in Brussels (Belgium) on April 20-21-22. The conference presented a selection of high-quality papers in the field of theoretical and mathematical aspects of neural networks, algorithms, relations with classical methods of statistics and of information theory, and with biology. You will find enclosed the detailed program of the conference. The proceedings of this conference are available by sending the completed form below to the conference secretariat. Please use fax to avoid delays. The proceedings include all 44 papers presented during the conference. 
A limited number of copies of the ESANN'93 proceedings are still available; you will find the list of papers included in these proceedings at the end of this e-mail. Prices: ESANN'94 proceedings : BEF 2500 (proceedings BEF 2000 + postage & packing BEF 500) ESANN'93 proceedings : BEF 2000 (proceedings BEF 1500 + postage & packing BEF 500) Postage & packing (BEF 500) must only be charged once in case of multiple orders. _______________________________________________________________________ ESANN'94 and ESANN'93 proceedings: order form _____________________________________________ Ms., Mr., Dr., Prof.: ................................................. Name: ................................................................ First Name: .......................................................... Institution: ......................................................... ..................................................................... ..................................................................... Address: .............................................................. ..................................................................... ..................................................................... ZIP: ................................................................. Town: ............................................................... Country: ............................................................ VAT no.: .............................................................. tel: ................................................................ fax: ................................................................ E-mail: ............................................................. Please send me ... copies of the ESANN'94 proceedings. Please send me ... copies of the ESANN'93 proceedings. Please send me an invoice: O Yes O No Amount: ESANN'94 proceedings : ... copies x BEF 2000 = BEF ..... ESANN'93 proceedings : ... copies x BEF 1500 = BEF ..... 
Postage & packing: BEF 500 _________ TOTAL BEF ..... Payment (please tick): O Bank transfer, stating "ESANN - proceedings" and your name, made payable to: Generale de Banque ch. de Waterloo 1341A B-1180 Brussels (Belgium) acc. no. 210-0468648-93 of D facto (45 rue Masui, 1210 Brussels) Bank transfers must be free of charges; any transfer charges are to be paid by the sender. O Cheques/postal money orders made payable to: D facto - 45 rue Masui - B-1210 Brussels - Belgium Only orders accompanied by a cheque, a postal money order or the proof of bank transfer will be considered. The order form and payment must be sent to the conference secretariat: D facto publications ESANN proceedings 45 rue Masui B-1210 Brussels Belgium tel: + 32 2 245 43 63 fax: + 32 2 245 46 94 ______________________________________________________________________ The proceedings of ESANN'94 contain the following papers: --------------------------------------------------------- "Concerning the formation of chaotic behaviour in recurrent neural networks" T. Kolb, K. Berns Forschungszentrum Informatik Karlsruhe (Germany) "Stability and bifurcation in an autoassociative memory model" W.G. Gibson, J. Robinson, C.M. Thomas University of Sydney (Australia) "Capabilities of a structured neural network. Learning and comparison with classical techniques" J. Codina, J. C. Aguado, J.M. Fuertes Universitat Politecnica de Catalunya (Spain) "Projection learning: alternative approaches to the computation of the projection" K. Weigl, M. Berthod INRIA Sophia Antipolis (France) "Stability bounds of momentum coefficient and learning rate in backpropagation algorithm" Z. Mao, T.C. Hsia University of California at Davis (USA) "Model selection for neural networks: comparing MDL and NIC" G. te Brake*, J.N. Kok*, P.M.B. Vitanyi** *Utrecht University, **Centre for Mathematics and Computer Science, Amsterdam (Netherlands) "Estimation of performance bounds in supervised classification" P. Comon*, J.L. Voz**, M. 
Verleysen** *Thomson-Sintra Sophia Antipolis (France), **Université Catholique de Louvain, Louvain-la-Neuve (Belgium) "Input Parameters' estimation via neural networks" I.V. Tetko, A.I. Luik Institute of Bioorganic & Petroleum Chemistry Kiev (Ukraine) "Combining multi-layer perceptrons in classification problems" E. Filippi, M. Costa, E. Pasero Politecnico di Torino (Italy) "Diluted neural networks with binary couplings: a replica symmetry breaking calculation of the storage capacity" J. Iwanski, J. Schietse Limburgs Universitair Centrum (Belgium) "Storage capacity of the reversed wedge perceptron with binary connections" G.J. Bex, R. Serneels Limburgs Universitair Centrum (Belgium) "A general model for higher order neurons" F.J. Lopez-Aligue, M.A. Jaramillo-Moran, I. Acedevo-Sotoca, M.G. Valle Universidad de Extremadura, Badajoz (Spain) "A discriminative HCNN modeling" B. Petek University of Ljubljana (Slovenia) "Biologically plausible hybrid network design and motor control" G.R. Mulhauser University of Edinburgh (Scotland) "Analysis of critical effects in a stochastic neural model" W. Mommaerts, E.C. van der Meulen, T.S. Turova K.U. Leuven (Belgium) "Stochastic model of odor intensity coding in first-order olfactory neurons" J.P. Rospars*, P. Lansky** *INRA Versailles (France), **Academy of Sciences, Prague (Czech Republic) "Memory, learning and neuromediators" A.S. Mikhailov Fritz-Haber-Institut der MPG, Berlin (Germany), and Russian Academy of Sciences, Moscow (Russia) "An explicit comparison of spike dynamics and firing rate dynamics in neural network modeling" F. Chapeau-Blondeau, N. Chambet Université d'Angers (France) "A stop criterion for the Boltzmann machine learning algorithm" B. Ruf Carleton University (Canada) "High-order Boltzmann machines applied to the Monk's problems" M. Grana, V. Lavin, A. D'Anjou, F.X. Albizuri, J.A. Lozano UPV/EHU, San Sebastian (Spain) "A constructive training algorithm for feedforward neural networks with ternary weights" F. 
Aviolat, E. Mayoraz Ecole Polytechnique Fédérale de Lausanne (Switzerland) "Synchronization in a neural network of phase oscillators with time delayed coupling" T.B. Luzyanina Russian Academy of Sciences, Moscow (Russia) "Reinforcement learning and neural reinforcement learning" S. Sehad, C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nîmes (France) "Improving piecewise linear separation incremental algorithms using complexity reduction methods" J.M. Moreno, F. Castillo, J. Cabestany Universitat Politecnica de Catalunya (Spain) "A comparison of two weight pruning methods" O. Fambon, C. Jutten Institut National Polytechnique de Grenoble (France) "Extending immediate reinforcement learning on neural networks to multiple actions" C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nîmes (France) "Incremental increased complexity training" J. Ludik, I. Cloete University of Stellenbosch (South Africa) "Approximation of continuous functions by RBF and KBF networks" V. Kurkova, K. Hlavackova Academy of Sciences of the Czech Republic "An optimized RBF network for approximation of functions" M. Verleysen*, K. Hlavackova** *Université Catholique de Louvain, Louvain-la-Neuve (Belgium), **Academy of Science of the Czech Republic "VLSI complexity reduction by piece-wise approximation of the sigmoid function" V. Beiu, J.A. Peperstraete, J. Vandewalle, R. Lauwereins K.U. Leuven (Belgium) "Dynamic pattern selection for faster learning and controlled generalization of neural networks" A. Röbel Technische Universität Berlin (Germany) "Noise reduction by multi-target learning" J.A. Bullinaria Edinburgh University (Scotland) "Variable binding in a neural network using a distributed representation" A. Browne, J. Pilkington South Bank University, London (UK) "A comparison of neural networks, linear controllers, genetic algorithms and simulated annealing for real time control" M. Chiaberge*, J.J. Merelo**, L.M. Reyneri*, A. 
Prieto**, L. Zocca* *Politecnico di Torino (Italy), **Universidad de Granada (Spain) "Visualizing the learning process for neural networks" R. Rojas Freie Universität Berlin (Germany) "Stability analysis of diagonal recurrent neural networks" Y. Tan, M. Loccufier, R. De Keyser, E. Noldus University of Gent (Belgium) "Stochastics of on-line back-propagation" T. Heskes University of Illinois at Urbana-Champaign (USA) "A lateral contribution learning algorithm for multi MLP architecture" N. Pican*, J.C. Fort**, F. Alexandre* *INRIA Lorraine, **Université Nancy I (France) "Two or three things that we know about the Kohonen algorithm" M. Cottrell*, J.C. Fort**, G. Pagès*** Universités *Paris 1, **Nancy 1, ***Paris 6 (France) "Decoding functions for Kohonen maps" M. Alvarez, A. Varfis CEC Joint Research Center, Ispra (Italy) "Improvement of learning results of the selforganizing map by calculating fractal dimensions" H. Speckmann, G. Raddatz, W. Rosenstiel University of Tübingen (Germany) "A non linear Kohonen algorithm" J.-C. Fort*, G. Pagès** *Université Nancy 1, **Universités Pierre et Marie Curie, et Paris 12 (France) "Self-organizing maps based on differential equations" A. Kanstein, K. Goser Universität Dortmund (Germany) "Instabilities in self-organized feature maps with short neighbourhood range" R. Der, M. Herrmann Universität Leipzig (Germany) The proceedings of ESANN'93 contain the following papers: --------------------------------------------------------- "A modified trajectory reversing method for the stability analysis of neural networks" M. Loccufier, E. Noldus University of Ghent (Belgium) "A lateral inhibition network that emulates a winner-takes-all algorithm" B. Krekelberg, J.N. Kok Utrecht University (The Netherlands) "Tracking global minima using a range expansion algorithm" D. Gorse, A. Shepherd, J.G. 
Taylor University College London (United Kingdom) "Embedding knowledge into stochastic learning automata for fast solution of binary constraint satisfaction problems" D. Kontoravdis, A. Likas, A. Stafylopatis National Technical University of Athens (Greece) "Parallel dynamics of extremely diluted neural networks" D. Bolle, B. Vinck, A. Zagrebnov K.U. Leuven (Belgium) "Enhanced unit training for piecewise linear separation incremental algorithms" J.M. Moreno, F. Castillo, J. Cabestany Universitat Politecnica de Catalunya (Spain) "Incremental evolution of neural network architectures for adaptive behaviour" D. Cliff, I. Harvey, P. Husbands University of Sussex (United Kingdom) "Efficient decomposition of comparison and its applications" V. Beiu, J. Peperstraete, J. Vandewalle, R. Lauwereins K.U. Leuven (Belgium) "Modelling biological learning from its generalization capacity" F.J. Vico, F. Sandoval, J. Almaraz Universidad de Malaga (Spain) "A learning and pruning algorithm for genetic Boolean neural networks" F. Gruau Centre d'Etudes Nucleaires de Grenoble (France) "Population coding in a theoretical biologically plausible network" G.R. Mulhauser University of Edinburgh (Scotland) "Physiological modelling of cochlear nucleus responses" C. Lorenzi* **, F. Berthommier**, N. Tirandaz* *Universite de Lyon 2, ** Universite Joseph Fourier - Grenoble (France) "The Purkinje unit of the cerebellum as a model of a stable neural network" P. Chauvet*, G. Chauvet* ** *Universite d'Angers (France), **University of Southern California (USA) "A mental problem for the solution of the direct and inverse kinematic problem" H. Cruse, U. Steinkuhler, J. Deitert Univ. of Bielefeld (Germany) "Probabilistic decision trees and multilayered perceptrons" P. Bigot, M. Cosnard Ecole Normale Superieure de Lyon (France) "Comparison of optimized backpropagation algorithms" W. Schiffmann, M. Joost, R. 
Werner University of Koblenz (Germany) "Minimerror: a perceptron learning rule that finds the optimal weights" M.B. Gordon, D. Berchier Centre d'Etudes Nucleaires de Grenoble (France) "MLP modular networks for multi-class recognition" P. Sebire, B. Dorizzi Institut National des Telecommunications (France) "Place-to-time code transformation during saccades" B. Breznen Slovak Academy of Sciences (Czechoslovakia) "An efficient learning model for the neural integrator of the oculomotor system" J.-P. Draye*, G. Cheron** ***, G. Libert*, E. Godaux** *Fac. Poly. de Mons, **Univ. de Mons-Hainaut, ***Univ. Libre de Bruxelles (Belgium) "Motion processing in the retina: about a velocity matched filter" J. Herault, W. Beaudot Institut National Polytechnique de Grenoble (France) "Laplacian pyramids with multi-layer perceptrons interpolators" B. Simon, B. Macq, M. Verleysen Universite Catholique de Louvain (Belgium) "EEG paroxystic activity detected by neural networks after wavelet transform analysis" P. Clochon*, R. Caterini**, D. Clarencon**, V. Roman** *INSERM U 320 Caen, **CRSSA U 18 Grenoble-la-Tronche (France) "An algorithm to learn sequences with the connectionist sequential machine" O. Sarzeaud, N. Giambiasi Ecole pour les Etudes et la Recherche en Informatique et Electronique - Nimes (France) "Time series and neural network: a statistical method for weight elimination" M. Cottrell, B. Girard, Y. Girard, M. Mangeas Universite Paris I (France) "The filtered activation networks" L.S. Smith, K. Swingler University of Stirling (Scotland) "Supervised learning and associative memory by the random neural network" M. Mokhtari Universite Rene Descartes - Paris (France) "Mixture states in Potts neural networks" D. Bolle, J. Huyghebaert K.U. Leuven (Belgium) "Trajectory learning using hierarchy of oscillatory modules" N.B. Toomarian, P. Baldi California Institute of Technology (USA) "Locally implementable learning with isospectral matrix flows" J. Dehaene, J. Vandewalle K.U. 
Leuven (Belgium) "Once more about the information capacity of Hopfield network" A.A. Frolov*, D. Husek** *Russian Acad. of Sci. - Moscow (Russia), **Acad. of Sci. Czech Republic - Prague (Czech Republic) "Self-organization of a Kohonen network with quantized weights and an arbitrary one-dimensional stimuli distribution" P. Thiran Ecole Polytechnique Federale de Lausanne (Switzerland) "Optimal decision surfaces in LVQ1 classification of patterns" M. Verleysen, P. Thissen, J.-D. Legat Universite Catholique de Louvain (Belgium) "Three algorithms for searching the minimum distance in self-organizing maps" V. Tryba*, K. Goser** *SICAN GmbH Hannover, **Universitat Dortmund (Germany) "Voronoi tesselation, space quantization algorithms and numerical integration" G. Pages Universite Paris I & Universite Pierre et Marie Curie (France) "An intuitive characterization for the reference vectors of a Kohonen map" A. Varfis, C. Versino CEC Joint Research Center (Italy) _____________________________ Michel Verleysen D facto conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 E-mail: esann at dice.ucl.ac.be _____________________________ From tenorio at ecn.purdue.edu Sat May 7 15:04:11 1994 From: tenorio at ecn.purdue.edu (tenorio@ecn.purdue.edu) Date: Sat, 7 May 1994 14:04:11 -0500 Subject: Financial Forecasting Competition Message-ID: <199405071904.OAA04234@dynamo.ecn.purdue.edu> First I would like to apologize to all, and especially to Bill Skaggs, for not placing the appropriate emotional indicators in my message to indicate tongue-in-cheek statements, such as: 8>o ;>) :>) Without these, the message could seem offensive, in spite of the fact that the sophistication level of the readers here is very high. My apologies to all. Bill further wrote: But aren't you a little worried that the company that does best in the competition, even if only by chance, will take the results as an official sanction and use them in advertising? 
(Feel free to ignore this question if you think I've already cost you too much time.) Thanks again, -- Bill And Steve wrote: >Its always been curious to me, tho why you would expect people who have >successful methods >who may be making money in the market to reveal >them publicly? Could this account for the negative results? > >Steve > > > > >Stephen J. Hanson, Ph.D. >Head, Learning Systems Department >SIEMENS Research >755 College Rd. East >Princeton, NJ 08540 The panel is considering a number of different metrics, rather than declaring the winner on a single metric, which could create a winner by chance. If all the predictions are, for example, poor, declaring a winner would indeed be a moot point, but it would be very informative as to the difficulty of the problem. We walk a fine line there and care must be taken. I don't know how to solve the problem of biased sampling, except to offer a non-disclosure entry, so that someone can give us a counterexample showing that the series can be successfully predicted. If someone were to claim after the competition that they can do the task, an interesting question would be: so why didn't you enter it? Also, all these points are only valid if we are talking about time series of tradable instruments. Predictions of other financial time series, such as interest rates, sales, etc., would still carry a lot of value, but with less flashiness. Regarding the point about the set of parallel agents for tradable instruments in the previous message: All agents have the same policy but different settings. All know about the current state of the world. An agent would: - If the market moves up by a percentage P, buy; sell if the market moves down by the same percentage. - If an agent is in a certain position (long or short) and the market goes against them, they would sell at a certain percentage drop <= P. 
- If the market goes in their favor, they would liquidate their position after a minimum move of 3P (percentages always measured from a small to a large number). Each agent is the same, but the percentages are very different. If there are more buyers than sellers, the price goes up a point for each extra buyer; similarly on the sell side. Imagine a sentiment function that turns sellers into buyers or vice-versa. This function goes up (say, toward being inclined to buy) as the market moves up, up to a point, and then moves down, as the market may be perceived to be too expensive. At certain threshold points of this "market feeling function" decisions are made to buy and sell. This function has a parabolic shape and is recursive, similar to functions in the logistic family such as: x(t+1) = r * x(t) * (a - x(t)) which is known to yield chaotic behavior. The market is then a composite (sum of thresholded versions) of such functions. The variations on P incorporate trading styles, information, and time scale differences. The actual function is more like sin(x) on [-pi, pi]. To make the system more sophisticated, the agents may want to have a third alternative by going neutral before moving from sell to buy (hysteresis-like). Some agents (a small number) can be made contrarians, by having the reverse behavior. Further, the market feeling function may also be a function of time, with a decay term associated with slow-moving markets. I plan to write such a simple simulator and place it on the net. If anyone is interested in beating me to writing the code, and willing to make it public, I'll help him/her with the task. About competitions of this kind: This is not a new idea at all. Makridakis (Journal of Forecasting) and others have run several competitions/comparisons among various techniques (mostly linear and in the financial area). Weigend et al. did the same for non-linear techniques with several types of time series. 
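[Ed.: since the message above invites readers to beat the author to the code, here is a minimal sketch of the threshold-agent market it describes. The parameter ranges, the flat/long/short bookkeeping, and the small random "news" shock (added so the market does not sit frozen at equilibrium) are all illustrative assumptions, not part of the original description.]

```python
import random

def simulate_market(n_agents=100, n_steps=200, seed=1):
    """Toy agent-based market in the spirit of the description above.

    Each agent has a personal threshold P: it opens a long position when
    the price has risen by P from its reference point (a short when it
    has fallen by P), cuts a losing position at a drop of P, and takes
    profit after a favorable move of 3P.
    """
    rng = random.Random(seed)
    price = 100.0
    prices = [price]
    agents = [{"P": rng.uniform(0.005, 0.05),  # personal threshold (assumed range)
               "pos": 0,                        # 0 flat, +1 long, -1 short
               "ref": price}                    # reference/entry price
              for _ in range(n_agents)]
    for _ in range(n_steps):
        buyers = sellers = 0
        for a in agents:
            move = (price - a["ref"]) / a["ref"]
            if a["pos"] == 0:
                if move >= a["P"]:            # market moved up by P: buy
                    a["pos"], a["ref"] = 1, price
                    buyers += 1
                elif move <= -a["P"]:         # market moved down by P: sell short
                    a["pos"], a["ref"] = -1, price
                    sellers += 1
            elif a["pos"] == 1:
                # stop-loss at -P, or take profit at +3P
                if move <= -a["P"] or move >= 3 * a["P"]:
                    a["pos"], a["ref"] = 0, price
                    sellers += 1
            else:  # short position: mirror rules
                if move >= a["P"] or move <= -3 * a["P"]:
                    a["pos"], a["ref"] = 0, price
                    buyers += 1
        # one point of price move per unit of excess demand, plus a small
        # exogenous shock so the market never sits still (an assumption)
        price = max(1.0, price + (buyers - sellers) + rng.uniform(-1.0, 1.0))
        prices.append(price)
    return prices
```

The contrarian agents, the neutral third state (the hysteresis-like alternative), and the time-decaying market feeling function mentioned above would be straightforward extensions of this sketch.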
We will be learning a lot from their experiences as well. --ft. ____________________________________________________________________________ Manoel Fernando Tenorio Parallel Distributed Structures Lab School of Electrical Engineering Purdue University W. Lafayette, IN 47907 Ph.: 317-494-3482 Fax: 317-494-6440 tenorio at ecn.purdue.edu ============================================================================ From anoop at ipl.rpi.edu Mon May 9 08:32:21 1994 From: anoop at ipl.rpi.edu (Anoop K. Bhattacharjya) Date: Mon, 9 May 94 08:32:21 EDT Subject: vision by evolutionary optimization Message-ID: <9405091232.AA06654@ipl.rpi.edu> Reprints are available on request for the following paper: Bhattacharjya, A. K., and Roysam, B., "Joint Solution of Low, Intermediate and High-Level Vision Tasks by Evolutionary Optimization: Application to Computer Vision at Low SNR," IEEE Trans. Neural Networks, Vol. 5, No. 1, pp. 83-95, 1994. Please direct reprint requests to roysam at ecse.rpi.edu. An abstract of the paper is given below: ABSTRACT Methods for conducting model-based computer vision from low-SNR (~1 dB) image data are presented. Conventional algorithms break down in this regime due to a cascading of noise artifacts, and inconsistencies arising from the lack of optimal interaction between high- and low-level processing. These problems are addressed by solving low-level problems such as intensity estimation, segmentation, and boundary estimation jointly (synergistically) with intermediate-level problems such as the estimation of position, magnification and orientation, and high-level problems such as object identification and scene interpretation. This is achieved by formulating a single objective function that incorporates all the data and object models, and a hierarchy of constraints in a Bayesian framework.
All image processing operations, including those that exploit the low and high-level variables to satisfy multi-level pattern constraints, result directly from a parallel multi-trajectory global optimization algorithm. Experiments with simulated low-count (7-9 photons/pixel) 2-D Poisson images demonstrate that compared to non-joint methods, a joint solution not only results in more reliable scene interpretation, but also in superior estimation of low-level image variables. Typically, most object parameters are estimated to within 5% accuracy, even with overlap and partial occlusion. From swaney at cogsci.ucsd.edu Mon May 9 13:58:10 1994 From: swaney at cogsci.ucsd.edu (swaney@cogsci.ucsd.edu) Date: Mon, 9 May 1994 09:58:10 -0800 Subject: Cognitive Science position Message-ID: <9405091657.AA25407@cogsci.UCSD.EDU> ASSISTANT PROFESSOR POSITION IN COGNITIVE SCIENCE UNIVERSITY OF CALIFORNIA, SAN DIEGO The Department of Cognitive Science at the University of California, San Diego invites applications for a position at the assistant professor level (tenure-track) starting July 1, 1995 (contingent upon funding), with salary commensurate with the experience of the successful applicant and based on the UC pay scale. Applicants must have a PhD (or ABD) in an appropriate field and have research and teaching interests in higher-level human cognition phenomena such as attention, memory, reasoning, or problem solving. Women and minorities are encouraged to apply. The University of California, San Diego is an affirmative action/equal opportunity employer. Applications received by September 1, 1994 will receive thorough consideration; applications received thereafter will be considered until the position is filled.
Candidates should send a vita, reprints, a short letter describing their background and interests, and the names and addresses of at least three references to: University of California, San Diego Search Committee Department of Cognitive Science 0515-G 9500 Gilman Drive La Jolla, CA 92093-0515 From battiti at volterra.science.unitn.it Mon May 9 10:23:32 1994 From: battiti at volterra.science.unitn.it (Roberto Battiti) Date: Mon, 9 May 94 16:23:32 +0200 Subject: preprints available: optimization & neural nets Message-ID: <9405091423.AA02899@volterra.science.unitn.it.noname> *** PREPRINTS AVAILABLE: *** OPTIMIZATION & NEURAL NETS The following technical reports are available by anonymous ftp at our local archive: volterra.science.unitn.it (130.186.34.16). The subjects are combinatorial and continuous optimization algorithms, and their application to neural nets. Two papers (battiti.neuro-hep.ps.Z, battiti.reactive-tabu-search.ps.Z) are also available from the neuroprose archive. A limited number of hardcopies can be obtained from: Roberto Battiti Dip. di Matematica Univ. di Trento 38050 Povo (Trento) - ITALY email: battiti at science.unitn.it or: Giampietro Tecchiolli Istituto per la Ricerca Scientifica e Tecnologica 38050 Povo (Trento) - ITALY email: tec at irst.it ________________________________________________________________________________ ARCHIVE-NN-1 title: The Reactive Tabu Search author: Roberto Battiti and Giampietro Tecchiolli number: UTM 405 Ottobre 1992 note: 27 pages, to appear in: ORSA Journal on Computing, 1994 abstract: We propose an algorithm for combinatorial optimization where an explicit check for the repetition of configurations is added to the basic scheme of Tabu search. In our Tabu scheme the appropriate size of the list is learned in an automated way by reacting to the occurrence of cycles.
In addition, if the search appears to be repeating an excessive number of solutions excessively often, then the search is diversified by making a number of random moves proportional to a moving average of the cycle length. The reactive scheme is compared to a strict Tabu scheme, which forbids the repetition of configurations, and to schemes with a fixed or randomly varying list size. From the implementation point of view we show that the Hashing or Digital Tree techniques can be used in order to search for repetitions in a time that is approximately constant. We present the results obtained for a series of computational tests on a benchmark function, on the 0-1 Knapsack Problem, and on the Quadratic Assignment Problem. FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/reactive-tabu-search.ps.Z ________________________________________________________________________________ ARCHIVE-NN-2 title: Local Search with Memory: Benchmarking RTS author: Roberto Battiti and Giampietro Tecchiolli number: UTM Ottobre 1993 note: 34 pages abstract: The purpose of this work is that of presenting a version of the Reactive Tabu Search method (RTS) that is suitable for constrained problems, and that of testing RTS on a series of constrained and unconstrained Combinatorial Optimization tasks. The benchmark suite consists of many instances of the N-K model and of the Knapsack problem with various sizes and difficulties, defined with portable random number generators. The performance of RTS is compared with that of Repeated Local Minima Search, Simulated Annealing, Genetic Algorithms, and Neural Networks. In addition, the effects of different hashing schemes and of the presence of a simple `aspiration' criterion in the RTS algorithm are investigated.
FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/rts-benchmark.ps.Z ________________________________________________________________________________ ARCHIVE-NN-3 title: Training Neural Nets with the Reactive Tabu Search author: Roberto Battiti and Giampietro Tecchiolli number: UTM 421 Novembre 1993 note: 45 pages, shorter version to appear in: IEEE Trans. on Neural Networks abstract: In this paper the task of training sub-symbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the Reactive Tabu Search (RTS) proposed by the authors and based on F. Glover's Tabu Search. An iterative optimization process based on a ``modified greedy search'' component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory in a limited portion of the search space. The possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse the ones applied in the most recent part of the search, for a prohibition period that is adapted in an automated way. The confinement is avoided and a proper exploration is obtained by activating a diversification strategy when too many configurations are repeated excessively often. The RTS method is applicable to non-differentiable functions, it is robust with respect to the random initialization and effective in continuing the search after local minima. The limited memory and processing required make RTS a competitive candidate for special-purpose VLSI implementations. We present and discuss four tests of the technique on feedforward and feedback systems. 
FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/rts-neural-nets.ps.Z ________________________________________________________________________________ ARCHIVE-NN-4 title: Learning with first, second, and no derivatives: a case study in High Energy Physics author: Roberto Battiti and Giampietro Tecchiolli note: 36 pages, to appear in Neurocomputing 6, 181-206, 1994 abstract: In this paper different algorithms for training multi-layer perceptron architectures are applied to a significant discrimination task in High Energy Physics. The One Step Secant technique is compared with On-Line Backpropagation, the 'Bold Driver' batch version, and Conjugate Gradient methods. In addition, a new algorithm (Affine Shaker) is proposed that uses stochastic search based on function values and affine transformations of the local search region. Although the Affine Shaker requires more CPU time to reach the maximum generalization, the technique can be interesting for special-purpose VLSI implementations and for non-differentiable functions. FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/neuro-hep.ps.Z ________________________________________________________________________________ ARCHIVE-NN-5 title: Simulated Annealing and Tabu Search in the Long Run: a Comparison on QAP Tasks author: Roberto Battiti and Giampietro Tecchiolli number: UTM 427 Febbraio 1994 note: 11 pages, to appear in: Computer and Mathematics with Applications abstract: Simulated Annealing (SA) and Tabu Search (TS) are compared on the Quadratic Assignment Problem. A recent work on the same benchmark suite argued that SA could achieve a reasonable solution quality with fewer function evaluations than TS. The discussion is extended by showing that the conclusions must be changed if the task is hard or a very good approximation of the optimal solution is desired, or if CPU time is the relevant parameter.
In addition, a recently proposed version of TS (the Reactive Tabu Search) solves the problem of finding the proper list size with an automatic memory-based reaction mechanism. FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/rts-versus-sa.ps.Z ________________________________________________________________________________ ARCHIVE-NN-6 title: The continuous reactive tabu search: blending combinatorial optimization and stochastic search for global optimization author: Roberto Battiti and Giampietro Tecchiolli number: UTM 432 Maggio 1994 note: 28 pages abstract: A novel algorithm for the global optimization of functions (C-RTS) is presented, in which a combinatorial optimization method cooperates with a stochastic local minimizer. The combinatorial optimization component, based on the Reactive Tabu Search recently proposed by the authors, locates the most promising ``boxes,'' where starting points for the local minimizer are generated. In order to cover a wide spectrum of possible applications with no user intervention, the method is designed with adaptive mechanisms: the box size is adapted to the local structure of the function to be optimized, the search parameters are adapted to obtain a proper balance of diversification and intensification. The algorithm is compared with some existing algorithms, and the experimental results are presented for a suite of benchmark tasks. 
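The reactive mechanism common to these abstracts (a tabu tenure that grows when a hash table detects repeated configurations, plus random-move diversification driven by the average cycle length) might be sketched roughly as below. All constants, the bit-flip neighborhood, and the function names here are my own guesses for illustration, not the authors' published settings.

```python
import random

def reactive_tabu_search(f, n_bits, max_iters=2000, seed=0):
    """Toy Reactive Tabu Search maximizing f over {0,1}^n with bit-flip moves."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], f(x)
    tenure = 1                      # reactive tabu list size
    last_flip = [-10**9] * n_bits   # iteration at which each bit was last flipped
    seen = {}                       # hashing: configuration -> last visit
    cycles = []                     # observed cycle lengths, for diversification
    for it in range(max_iters):
        key = tuple(x)
        if key in seen:             # repetition detected: grow the tenure
            cycles.append(it - seen[key])
            tenure = min(n_bits - 1, int(tenure * 1.1) + 1)
            if len(cycles) > 3:     # chronic cycling: random-move diversification
                avg = sum(cycles) // len(cycles)
                for _ in range(max(1, avg // 2)):
                    x[rng.randrange(n_bits)] ^= 1
                cycles.clear()
        else:
            tenure = max(1, tenure - 1)   # slowly relax when no cycles occur
        seen[key] = it
        # greedy move among non-tabu bit flips (tabu = flipped within `tenure`)
        candidates = [j for j in range(n_bits) if it - last_flip[j] > tenure]
        if not candidates:
            candidates = list(range(n_bits))
        def val_after(j):
            x[j] ^= 1
            v = f(x)
            x[j] ^= 1
            return v
        j = max(candidates, key=val_after)
        x[j] ^= 1
        last_flip[j] = it
        v = f(x)
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val
```

The hash-table lookup is what makes the repetition check roughly constant-time, as the ARCHIVE-NN-1 abstract notes.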
FTP-host: volterra.science.unitn.it FTP-file: pub/neuronit/crts.ps.Z ________________________________________________________________________________ From vg197 at neutrino.pnl.gov Mon May 9 20:24:40 1994 From: vg197 at neutrino.pnl.gov (Sherif Hashem) Date: Mon, 09 May 1994 17:24:40 -0700 (PDT) Subject: Thesis available: Optimal Linear Combinations of Neural Networks Message-ID: <9405100024.AA19885@neutrino.pnl.gov> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/hashem.thesis.ps.Z The file hashem.thesis.ps.Z is now available for copying from the Neuroprose archive: OPTIMAL LINEAR COMBINATIONS OF NEURAL NETWORKS Sherif Hashem Ph.D. Thesis Purdue University ABSTRACT: Neural network (NN) based modeling often involves trying multiple networks with different architectures, learning techniques, and training parameters in order to achieve ``acceptable'' model accuracy. Typically, one of the trained networks is chosen as ``best,'' while the rest are discarded. In this dissertation, using optimal linear combinations (OLCs) of the corresponding outputs of a number of NNs is proposed as an alternative to using a single network. Modeling accuracy is measured by mean squared error (MSE) with respect to the distribution of random inputs to the NNs. Optimality is defined by minimizing the MSE, with the resultant combination referred to as MSE-OLC. MSE-OLCs are investigated for four cases: allowing (or not) a constant term in the combination and requiring (or not) the combination-weights to sum to one. In each case, deriving the MSE-OLC is straightforward and the optimal combination-weights are simple, requiring modest matrix manipulations. In practice, the optimal combination-weights need to be estimated from observed data: observed inputs, the corresponding true responses, and the corresponding outputs of each component network. Given the data, estimating the optimal combination-weights is straightforward. 
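For intuition, the "straightforward" weight estimation the abstract mentions can be illustrated with an ordinary least-squares fit of the observed true responses on the component-network outputs. This is a sketch under my own assumptions (unconstrained case with a constant term); the thesis's exact estimators and its constrained variants may differ.

```python
import numpy as np

def estimate_olc_weights(component_outputs, targets, constant_term=True):
    """Estimate MSE-optimal linear combination weights for trained networks.
    component_outputs: (n_samples, n_networks) array of each network's output
    on the observed inputs; targets: (n_samples,) true responses.
    Returns (weights, intercept)."""
    Y = np.asarray(component_outputs, dtype=float)
    t = np.asarray(targets, dtype=float)
    if constant_term:
        Y = np.hstack([np.ones((Y.shape[0], 1)), Y])
    # lstsq solves the (possibly ill-conditioned) normal equations; severe
    # collinearity among the networks is exactly where this degrades, which
    # motivates the subset-selection algorithms described next.
    w, *_ = np.linalg.lstsq(Y, t, rcond=None)
    if constant_term:
        return w[1:], w[0]
    return w, 0.0

def combine(component_outputs, weights, intercept=0.0):
    """Apply the estimated combination to new component-network outputs."""
    return np.asarray(component_outputs, dtype=float) @ weights + intercept
```

The variant with weights constrained to sum to one could be obtained the same way with a Lagrange-multiplier correction.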
Collinearity among the outputs and/or the approximation errors of the component NNs sometimes degrades the generalization ability of the estimated MSE-OLC. To improve generalization in the presence of degrading collinearity, six algorithms for selecting subsets of the NNs for the MSE-OLC are developed and tested. Several examples, including a real-world problem and an empirical study, are discussed. The examples illustrate the importance of addressing collinearity and demonstrate significant improvements in model accuracy as a result of employing MSE-OLCs supported by the NN selection algorithms. --------------------------- The thesis is 126 pages (10 preamble + 116 text). To obtain a copy of the Postscript file: %ftp archive.cis.ohio-state.edu >Name: anonymous >Password: >cd pub/neuroprose/Thesis >binary >get hashem.thesis.ps.Z >quit Then: %uncompress hashem.thesis.ps.Z (The size of the uncompressed file is about 1.1 Mbyte) %lpr -s hashem.thesis.ps --------------------------- Hard copies may be requested from the School of Industrial Engineering, 1287 Grissom Hall, Purdue University, West Lafayette, IN 47907-1287, USA. (Refer to Technical Report SMS 94-4.) --Sherif Hashem =================================================================== Pacific Northwest Laboratory E-mail: s_hashem at pnl.gov 906 Battelle Boulevard Tel. (509) 375-6995 P.O. Box 999, MSIN K1-87 Fax. (509) 375-6631 Richland, WA 99352 USA =================================================================== From bishopc at sun.aston.ac.uk Tue May 10 08:35:22 1994 From: bishopc at sun.aston.ac.uk (bishopc) Date: Tue, 10 May 1994 12:35:22 +0000 Subject: Paper available by ftp Message-ID: <10405.9405101135@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/bishop.noise.ps.Z The following technical report is available by anonymous ftp.
------------------------------------------------------------------------ TRAINING WITH NOISE IS EQUIVALENT TO TIKHONOV REGULARIZATION Chris M Bishop Neural Computing Research Group Aston University Birmingham, B4 7ET, U.K. email: c.m.bishop at aston.ac.uk Neural Computing Research Group Report: NCRG/4290 (Accepted for publication in Neural Computation) Abstract It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive definite form which involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise. -------------------------------------------------------------------- ftp instructions: % ftp archive.cis.ohio-state.edu Name: anonymous password: your full email address ftp> cd pub/neuroprose ftp> binary ftp> get bishop.noise.ps.Z ftp> bye % uncompress bishop.noise.ps.Z % lpr bishop.noise.ps -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. 
of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From dayhoff at src.umd.edu Mon May 9 17:14:47 1994 From: dayhoff at src.umd.edu (Judith E. Dayhoff) Date: Mon, 9 May 1994 17:14:47 -0400 Subject: Final announcement for WCNN'94, with news Message-ID: <199405092114.RAA01547@newra.src.umd.edu> WCNN'94 ************************** *UPDATED REGISTRATION INFORMATION *CALL FOR NOVEL RESULTS PAPERS ************************** WORLD CONGRESS ON NEURAL NETWORKS, SAN DIEGO, CALIFORNIA, JUNE 5-9, 1994 *** Industrial Exposition -- Giant-Screen Video 22 INNS University HALF-DAY Short Courses Six Plenary Talks *** Five Special Sessions Twenty Sessions of Invited and Contributed Talks At least 9 SIG sessions *** Sponsored and Organized by the International Neural Network Society (INNS) in cooperation with all other interested technical & professional societies. *** Table of Contents of This Announcement: 1. NEWS! NOVEL-RESULTS SUBMISSION to June 1; POSTDOC SPECIAL 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES 3. PLENARY TALKS 4. SPECIAL SESSIONS 5. INVITED/CONTRIBUTED SESSIONS 6. SHORT COURSES 7. TRAVEL ARRANGEMENTS 8. NOTE 9. REGISTRATION 10. HOTEL RESERVATIONS 11. STUDENT VOLUNTEERS! =================================================================== 1. NEWS! WCNN'94 has accepted over 600 papers that are published in the Proceedings. An interdisciplinary and scientific approach to neural networks is maintained, with a balanced program in all application areas as well. Furthermore, WCNN'94 has two days (Sunday & Monday, June 5-6) filled with 22 half-day short courses (never given before) by all INNS Governors who have participated this year.
Judging by these, WCNN'94 will indeed be a very exciting conference. To ease on-site registration congestion, the official deadline for WCNN'94 pre-registration has been postponed to May 16, 1994. PostDocs may obtain the Student Rate by including a letter from their Supervisor with the Registration Form (or bring it to the Congress for a refund). FAX the Form (item 9) or email your questions to: Talley Associates (Att: WCNN'94 Melissa Bishop) Address: 875 Kings Highway, Suite 200 Woodbury, NJ 08096; Voice 609-845-1720; FAX 609-853-0411, e-mail: 74577.504 at compuserve.com SUBMIT YOUR NOVEL RESULTS UNTIL JUNE 1 FOR ON-SITE PUBLICATION In order to stimulate rapid growth in neural network research, we encourage the presentation of your newest results at the Congress. The deadline for Proceedings has passed, but in answer to many requests here is good news: You may submit one original and three copies in the standard format to our Talley conference management (to be reviewed by the three members of the Organization Committee) for rapid separate publication at the Congress. The deadline is June 1, 1994. Notification will be made by fax, phone, or e-mail a few days after receipt of your paper. If accepted, your registration will be handled specially so that you enjoy the pre-registration savings. If a sufficient number of these Novel papers are accepted, a special session called "Novel Results" will be created during the Congress. Otherwise, a poster presentation will be guaranteed. Moreover, in order to promote the WCNN'94 education program, you can give a short course to one of your friends free of charge if you pay for one tuition. If you sign up for two courses, you will get one extra free, and this bonus is likewise extended to your chosen friend as well. Finally, we mentioned that we have streamlined WCNN'94 meeting management by giving Talley direct control over the management of the Conference, without going through the Executive Office.
Many of you know the Talley management team from previous Congresses. You may use their FAX (609-853-0411) for registration; use the form at the end of this message. Signed: Paul Werbos, Bernard Widrow, Harold Szu P.S. Should you have any specific recommendations about ways to make WCNN'94 more successful, please contact any Governor you know, or Dr. Harold Szu at (301) 390-3097; FAX (301) 390-3923; e-mail: hszu%ulysses at relay.nswc.navy.mil. *** To improve the structure of the Congress and achieve a more compact schedule for attendees, several changes have been made since the Preliminary Program: A. Short Courses Start Sunday Morning June 5. All Saturday Short Courses have been moved to Monday June 6, with the exception that Course [I] (J. Dayhoff) will be given Sunday 8AM - 12PM. To make room in the schedule for that change, Course [H] (W. Freeman) moves from Sunday to Monday 8AM - 12PM. On Monday the Short Courses are concurrent with the Exposition. [To Lecturers: Talley will reproduce Course Notes received no later than May 20.] B. The SPECIAL OFFER has been made more generous, to encourage students. For each of your Short Course registrations you can give a colleague in the same or lower-priced Registration Category a FREE Short Course! Enter his or her name on the Registration Form below ``TOTAL.'' The recipient of the gift should indicate ``Gift from [your name]'' at the time of registration. IF YOU HAVE ALREADY PRE-REGISTERED, arrange the gift now by FAX to 609-853-0411. =================================================================== 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES Monday June 6: Chair: Prof. Takeshi Yamakawa, Kyushu Inst. of Tech., Japan; Soo-Young Lee of KAIST, Korean Liaison; Pierre Martineau of M.&A., European Liaison; R. Hecht-Nielsen, HNC, Inc.; D. Hammerstrom, Adaptive Solutions, Inc.; Robert Pap, AAC; C. Kimasauskas, NeuralWare, Inc.; J. Sutherland, America, Ltd.
8 - 11 AM: In Video: Hardware-Software Video-demo talks, and Posters; 10 - 11 AM: Student Contest. The Contest is free-form, permitting many types of imaginative entry; Certificates and T-shirts will be awarded; no cash Grand Prize. 11 - Noon: Panel on Government Funding + Two New Lectures - in the Exposition Area: 12 - 1PM: Teuvo Kohonen: ``Exotic Applications of the Self-Organizing Map'' 5 - 6PM: Walter Freeman: ``Noncomputational Neural Networks'' =================================================================== 3. PLENARY TALKS: Lotfi Zadeh, UC Berkeley "Fuzzy Logic, Neural Networks, and Soft Computing" Per Bak, Brookhaven Nat. Lab. "Introduction to Self-Organized Criticality" Bernard Widrow, Stanford University "Adaptive Inverse Control" Melanie Mitchell, Santa Fe Institute "Genetic Algorithm Applications" Paul Werbos, NSF "Brain-Like Intelligence in Artificial Models: How Do We Really Get There?" John G. Taylor, King's College London "Capturing What It Is Like To Be: Modelling the Mind by Neural Networks" =================================================================== 4. SPECIAL SESSIONS "Biomedical Applications of Neural Networks," (Tuesday) David Brown, FDA; John Weinstein, NIH. "Commercial and Industrial Applications of Neural Networks," (Tuesday) B. Widrow, D. Hammerstrom, Ken Otwell, Ken Marko, Tariq Samad. "Financial and Economic Applications of Neural Networks," (Wednesday) Guido Deboeck, World Bank. "Neural Networks in Chemical Engineering," (Thursday) Am. Inst. of Chem. Eng. Thom McAvoy. "Mind, Brain and Consciousness" (Thursday) by J. Taylor, "TBD", W. Freeman, "Some category confusions in using neural networks to model consciousness", and presentations by S. Grossberg, G. Roth, B. Libet, P. Werbos, C. Koch, D. Levine, etc. =================================================================== 5. 20 INVITED/CONTRIBUTED SESSIONS June 7 - 9 Co-chaired by 20 INNS Governors & 20 Special Interest Group Chairpersons.
Also at least 9 Special Interest Group (SIG) Sessions are scheduled for Wednesday, June 8 from 8 - 9:30 pm, e.g. Neuroscience: D. Alkon, NIH; ATR/Biosensors: H. Hawkins, ONR, B. Telfer; Mental & Dysfunction: D. Levine; Power Eng.: D. Sobajic, EPRI; and others TBD. =================================================================== 6. SHORT COURSES 8am - 12pm Sunday, June 5 [M] Gail Carpenter, Boston University: Adaptive Resonance Theory [L] Bernard Widrow, Stanford University: Adaptive Filters, Adaptive Controls, Adaptive Neural Networks and Applications [I] Judith Dayhoff, University of Maryland: Neurodynamics of Temporal Processing [G] Shun-Ichi Amari, University of Tokyo: Learning Curves, Generalization Errors and Model Selection 1pm - 5pm Sunday, June 5 [U] Lotfi Zadeh, University of California, Berkeley: Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs [K] Paul Werbos, NSF: From Backpropagation to Real-Time Control [O] Stephen Grossberg, Boston University: Autonomous Neurodynamics: From Perception to Action [E] John Taylor, King's College, London: Stochastic Neural Computing: From Living Neurons to Hardware 6pm - 10 pm Sunday, June 5 [V] Nicolai G. Rambidi, Int'l. Research Inst.
for Management Sciences: Image Processing and Pattern Recognition Based on Molecular Neural Networks [C] Christof Koch, California Institute of Technology: Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits 8am - 12pm Monday, June 6 [T] Melanie Mitchell, Santa Fe Institute: Genetic Algorithms, Theory and Applications [R] David Casasent, Carnegie Mellon University: Pattern Recognition and Neural Networks [H] Walter Freeman, University of California, Berkeley: Review of Neurobiology: From Single Neurons to Chaotic Dynamics of the Cerebral Cortex [P] Lee Giles, NEC Research Institute: Dynamically-driven Recurrent Networks: Models, Training Algorithms and Applications 1pm - 5pm Monday, June 6 [S] Per Bak, Brookhaven National Laboratory: Introduction to Self-Organized Criticality [D] Kunihiko Fukushima, Osaka University: Visual Pattern Recognition with Neural Networks [B] James A. Anderson, Brown University: Neural Networks Computation as Viewed by Cognitive Science and Neuroscience [Q] Alianna Maren, Accurate Automation Corporation: Introduction to Neural Network Applications 6pm - 10 pm Monday, June 6 [N] Takeshi Yamakawa, Kyushu Institute of Technology: What are the Differences and Similarities among Fuzzy, Neural, and Chaotic Systems? [A] Teuvo Kohonen, Helsinki University of Technology: Advances in the Theory and Applications of Self-Organizing Maps [J] Richard A. Andersen, Massachusetts Institute of Technology: Neurobiologically Plausible Network Models [F] Harold Szu, Naval Surface Warfare Center: Spatiotemporal Information Processing by Means of McCulloch-Pitts and Chaotic Neurons =================================================================== 7. TRAVEL RESERVATIONS: Executive Travel Associates (ETA) has been selected as the official travel company for the World Congress on Neural Networks.
ETA offers the lowest available fares on any airline at time of booking when you contact them at US phone number 202-828-3501 or toll free (in the US) at 800-562-0189 and identify yourself as a participant in the Congress. Flights booked on American Airlines or Delta Airlines, the official airlines for this meeting, will result in an additional discount. Please provide the booking agent you use with the AA code: Star #S0464FS =================================================================== 8. ** NOTE ** Neither WCNN'94 nor the Hotel can accept e-mail registration or reservations. The Hotel will accept phone and FAX reservations while rooms remain available. For WCNN'94 Registration, use surface/air mail or FAX. ********************************************************************** 9. REGISTRATION WCNN'94 at Town & Country Hotel, San Diego, California June 5 - 9, 1994 Phone:_______________ Name:_______________________________________ FAX:__________________ Address:____________________________________________________________ ____________________________________________________________ ___________________________________________________________ If your name badge is to read differently, indicate the changes here: REGISTRATION FEE (includes all sessions, plenaries, proceedings, reception, AND Industrial Exposition. Separate registration for Short Courses, below.)
Before May 16, 1994 On-Site FEE ENCLOSED _ INNS Member Member Number__________ US$280 US$395 $_________ _ Non Members: US$380 US$495 $_________ _ Full Time Students: US$110 US$135 $_________ AND PostDocs (Include a letter from PostDoc Supervisor) _ Spouse/Guest: US$45 US$55 $_________ Name:________________ Or Neural Network Industrial Exposition -Only- _ US$55 US$55 $_________ *************************************************** INNS UNIVERSITY SHORT COURSE REGISTRATION (must be received by May 16, 1994) Circle paid selections: A B C D E F G H I J K L M N O P Q R S T U V Circle free selection (Pay for 2 short courses, get the third FREE) A B C D E F G H I J K L M N O P Q R S T U V SHORT COURSE FEE _ INNS Members: US$275 $_________ _ Non Members: US$325 $_________ _ Full Time Students US$150 $_________ Congress + Short Course TOTAL: $_________ For each paid course, nominate an accompanying person, registering in the same or lower category, for a free course: Mr./Ms.___________________ That person must also register by May 16, and indicate "Gift from [your name]" on the registration form. METHODS OF PAYMENT _ $ CHECK. All check payments made outside of the USA must be made on a USA bank in US dollars, payable to WCNN'94 _ $ CREDIT CARDS. Only VISA and MasterCard accepted. Registrations sent by FAX or surface/air mail must include an authorized signature. ( ) Visa ( ) M/C Name on Credit Card ______________________________________ Credit Card Number _______________________________________ Exp. Date ________________________________________________ Authorized Signature: _____________________________________ FAX: 609-853-0411 or E-mail: 74577.504 at compuserve.com then Mail to INNS/WCNN'94 c/o Talley Associates, 875 Kings Highway, Suite 200 Woodbury, NJ 08096 ========================================================================== 10. 
HOTEL RESERVATIONS
REGISTER AT TOWN & COUNTRY HOTEL, SAN DIEGO, CALIFORNIA (WCNN'94 Site)
Mail to Reservations, Town and Country Hotel, 500 Hotel Circle North, San Diego, CA 92108, USA; or FAX to 619-291-3584. Telephone: (800)772-8527 or (619)291-7131.
INNS - WCNN'94, International Neural Network Society, World Congress on Neural Networks '94
_ Single: US$70 - US$95 plus applicable taxes
_ Double: US$80 - US$105 plus applicable taxes
Check-in time: 3:00 pm. Check-out time: 12:00 noon.
Room reservations will be available on a first-come, first-served basis until May 6, 1994. Reservations received after this date will be accepted on a space-available basis and cannot be guaranteed. Reservations after May 6 will also be subject to the rates prevailing at the time of the reservation. A confirmation of your reservation will be sent to you by the hotel. A first night's room deposit is required to confirm a reservation.
PRINT OR TYPE ALL INFORMATION.
Single________ Double_______
Arrival date and approximate time:________________________________
Departure date and approximate time:______________________________
Names of all occupants of room:____________________________________
RESERVATION CONFIRMATION SHOULD BE SENT TO:
Name:____________________
Address:____________________________________________________________
        ____________________________________________________________
City:____________________ State/Province:_________________ Country:__________
Type of Credit Card: (circle one) VISA / MasterCard / AmEx / Diner's Club / Discover / Optima
Card Number:______________________________ Exp. Date____________________
Name as it appears on your Card:______________________________
Authorized Signature: ______________________________
Cancellation Policy: Deposits are refundable if the reservation is cancelled 48 hours in advance of the arrival date. Be sure to record your cancellation number.
Please indicate any disability which will require special assistance: _____________________________________________
FAX to 619-291-3584; e-mail: 74577.504 at compuserve.com
11. Student Volunteers
INNS has a tradition of supporting students in neural networks. Volunteer workers will receive free registration and certain expenses; however, no travel expenses can be considered. We still need at least 8 students to help at WCNN'94. While Ms. Melissa Bishop will be the overall Coordinator, please contact the Student Leader for details about work and compensation:
(1) Student Leader: Mr. Charles Hsu, Ph.D. candidate, GWU (student of Prof. Mona Zaghloul, Chair of the GWU EE Dept.)
WCNN'94 Oral Presentation in Session 14, Neurodynamics & Chaos, "Chaotic neurochips .." (with Zaghloul), Thursday 1:30-1:50 PM
Address: Charles Hsu, 1600 S. Joyce St. #C710, Arlington VA 22202
Phone: (202) 994-9390; e-mail: charles at seas.gwu.edu
(2) Deputy Leader: Ms. Ding Jinghua, M.S. in NN from Tohoku Univ., Japan
WCNN'94 Oral Presentation in Session #3, Speech & Language, Thurs. 8:00-8:20 AM, "Comp. Psych. Approach to Human Facial Language Communication to Robots"
Address: Jinghua Ding, Berukasa 201, Tamagawagakuen 1-6-11, Machida-shi, Tokyo, Japan
Phone: 81-427-26-2628; e-mail: lchj at ibis.iamp.tohoku.ac.jp
==========================================================================
From uzimmer at informatik.uni-kl.de Tue May 10 11:44:23 1994 From: uzimmer at informatik.uni-kl.de (Uwe R.
Zimmer, AG vP) Date: Tue, 10 May 94 16:44:23 +0100 Subject: Actual papers available (Visual Search, Navigation, Topologic Maps) Message-ID: <940510.164423.571@ag_vp_file_server.informatik.uni-kl.de>
A couple of recent papers about:
--------------------------------------------------------------
---   Learning, Robotics, Visual Search, Navigation,        ---
---   Topologic Maps & Robust Mobile Robots                 ---
---   Neural Networks                                       ---
--------------------------------------------------------------
are now available via FTP:
---------------------------------------------------------------------------
--- Connectionist Decision Systems for a Visual Search Problem
---------------------------------------------------------------------------
--- File name is : Zimmer.Visual_Search.ps.Z ---
IIZUKA '94, Fukuoka, Japan, August 1-7, 1994, Invited paper
Connectionist Decision Systems for a Visual Search Problem
Uwe R. Zimmer
Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammalian retina are not uniformly distributed. The focus of attention (the area of the retina with the highest density of sensory elements) therefore has to be directed so as to efficiently gather data according to certain criteria. The work discussed in this article concentrates on applying a laser range finder instead of a silicon retina. The laser range finder is maximally focused at any time, but for that reason a low-resolution total-scene image, available from the outset with camera-like devices, cannot be used here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the current (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" and second: "Where should a new edge-scanning start, after being interrupted?".
These two decision problems might be solved by a range of decision systems. The correctness of the decisions depends largely on the actual environment, and the underlying rules cannot be well initialized with a priori knowledge. We will therefore present a version of a reinforcement decision system, together with an overall scheme for efficiently controlling highly focused devices.
---------------------------------------------------------------------------
--- Navigation on Topologic Feature-Maps
---------------------------------------------------------------------------
--- File name is : Zimmer.Navigation.ps.Z ---
IIZUKA '94, Fukuoka, Japan, August 1-7, 1994
Navigation on Topologic Feature-Maps
Uwe R. Zimmer, Cornelia Fischer & Ewald von Puttkamer
Based on the idea of using topologic feature-maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topologic maps. The main features of this kind of navigation are: handling of very inaccurate position (and orientation) information, as well as implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a priori knowledge, a reinforcement-based model is used for the translation of navigator commands into motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations and navigator commands), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.
---------------------------------------------------------------------------
--- Realtime-learning on an Autonomous Mobile Robot with Neural Networks
---------------------------------------------------------------------------
--- File name is : Zimmer.Topologic.ps.Z ---
Euromicro '94 - RT-Workshop - Vaesteraas (Vasteras), Sweden, June 15-17, '94
Realtime-learning on an Autonomous Mobile Robot with Neural Networks
Uwe R. Zimmer & Ewald von Puttkamer
The problem to be discussed here is the use of neural network clustering techniques on a mobile robot, in order to build qualitative topologic environment maps. This has to be done in realtime, i.e. the internal world model has to be adapted to the flow of sensor samples without the possibility of stopping this data flow. Our experiments are done in a simulation environment as well as on a robot, called ALICE.
------------------------------------------------------------------
FTP information (anonymous login):
FTP-Server is : ftp.uni-kl.de
Mode is       : binary
Directory is  : reports_uni-kl/computer_science/mobile_robots/...
Subdirectory is : 1994/papers
File names are  : Zimmer.Navigation.ps.Z
                  Zimmer.Topologic.ps.Z
                  Zimmer.Visual_Search.ps.Z
Subdirectory is : 1993/papers
File names are  : Zimmer.learning_surfaces.ps.Z
                  Zimmer.SPIN-NFDS.ps.Z
Subdirectory is : 1992/papers
File name is    : Zimmer.rt_communication.ps.Z
Subdirectory is : 1991/papers
File names are  : Edlinger.Pos_Estimation.ps.Z
                  Edlinger.Eff_Navigation.ps.Z
                  Knieriemen.euromicro_91.ps.Z
                  Zimmer.albatross.ps.Z
.. or ...
FTP-Server is : archive.cis.ohio-state.edu
Mode is       : binary
Directory is  : /pub/neuroprose
File names are : zimmer.navigation.ps.z
                 zimmer.visual_search.ps.z
                 zimmer.learning_surfaces.ps.z
                 zimmer.spin-nfds.ps.z
.. or ...
FTP-Server is : ag_vp_file_server.informatik.uni-kl.de
Mode is       : binary
Directory is  : Neural_Networks/Reports
File names are : Zimmer.Navigation.ps.Z
                 Zimmer.Topologic.ps.Z
                 Zimmer.Visual_Search.ps.Z
                 Zimmer.Learning_Surfaces.ps.Z
                 Zimmer.SPIN-NFDS.ps.Z
------------------------------------------------------------------
----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | Research Group Prof. v. Puttkamer | 67663 Kaiserslautern - Germany | P.O.Box: 3049 | Phone: +49 631 205 2624 | Fax: +49 631 205 2803
From arantza at cogs.susx.ac.uk Tue May 10 18:13:00 1994 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Tue, 10 May 94 18:13 BST Subject: CFP ECAL95 Message-ID:
CONFERENCE ANNOUNCEMENT AND CALL FOR PAPERS
3rd EUROPEAN CONFERENCE ON ARTIFICIAL LIFE - ECAL95
Granada, Spain, 4-6 June, 1995
Despite its short history, Artificial Life (AL) is already becoming a mature scientific field. By trying to discover the rules of life and extract its essence so that it can be implemented in different media, AL research is leading us to a better understanding of a large set of interesting biology-related problems. The Conference will be organized into Scientific Sessions, Demonstrations, Videos, and Commercial Exhibits. Scientific Sessions will consist of Lectures (invited), Oral Presentations of submitted papers, and Posters. The site of ECAL95 will be the city of Granada, located in the south of Spain, in the region of Andalucia. Granada was the last Arabic site in the Iberian Peninsula, and it retains the heritage of that culture, including the legacy of marvelous constructions such as the Alhambra and the Gardens of the Generalife. ECAL95 will be organized in collaboration with the International Workshop on Artificial Neural Networks (IWANN95), to be held at Malaga (Costa del Sol, Spain), June 7-9, 1995.
Granada and Malaga are only one hour apart by car. Special registration rates will be offered to people wishing to attend both meetings.
Scientific Sessions and Topics
1. Foundations and Epistemology: Philosophical Issues. Emergence. Levels of Organization. Evolution of Hierarchical Systems. Evolvability. Computation and Dynamics. Ethical Problems.
2. Evolution: Self-organization. Pattern Formation. Prebiotic Evolution. Origins of Life. Evolution of Metabolism. Evolutionary Optimization. Fitness Landscapes. RNA Systems. Ecosystem Evolution. Biodiversity. Natural Selection and Sexual Selection. Units of Selection.
3. Adaptive and Cognitive Systems: Reaction, Neural and Immune Networks. Growth and Differentiation. Multicellular Development. Natural and Artificial Morphogenesis. Learning and Development. Communication.
4. Artificial Worlds: Simulation of Ecological and Evolving Systems. System-Environment Correlation. Sensor-Effector Coordination. Environment Design.
5. Robotics and Emulation of Animal Behavior: Sensory and Motor Activity. Mobile Agents. Adaptive Robots. Autonomous Robots. Evolutionary Robotics. Ethology.
6. Societies and Collective Behavior: Swarm Intelligence. Cooperation and Communication among Animals and Robots. Evolution of Social Behavior. Social Organizations. Division of Tasks.
7. Applications and Common Tools: Optimization. Problem Solving. Virtual Reality and Computer Graphics. Genetic Algorithms. Neural Networks. Fuzzy Logic. Evolutionary Computation. Genetic Programming.
Submission Instructions
Conference contributions can be either papers, posters, videos, or demonstrations. Authors should specify to which session their contributions are intended. The contributions will be made available in two formats:
1) Conference Proceedings, published by Springer-Verlag before the Conference, including all accepted papers. One copy of the book will be given to each ECAL95 participant.
2) Abstracts Book, for papers and other contributions (posters, videos, or demos). For this purpose each contribution must include one Title/Abstract Page containing the following:
- Title
- Full name(s) of author(s)
- Address(es) of author(s) with phone, fax, and E-mail
- Extended abstract (100-200 words)
- Keywords
- Full papers: In addition to the Title/Abstract Page, manuscripts should not exceed 12 pages, including figures, in DIN-A4 format, with 2.5 cm (1 inch) margins all around, and no smaller than 10 point type in Times-Roman typeface. Camera-ready versions of the papers will be required after acceptance.
- Posters: Submit only the Title/Abstract Page.
- Demonstrations: In addition to the Title/Abstract Page, author(s) must specify the equipment needed for the demonstration. Robotic demonstrations are encouraged; approximately 250 m2 will be available for this purpose.
- Videos: 15 minutes maximum duration, VHS format. In addition to the Title/Abstract Page, author(s) must specify the recording standard (NTSC, PAL, or SECAM).
Submissions can be made in two different formats: hardcopy or electronic.
A) Hardcopy originals (4 copies) should be sent by the author(s) to the Program Secretary at the address below.
B) Electronic submission: an anonymous ftp directory has been created at the ECAL95 site (casip.ugr.es, /pub/ecal95/submissions). Only LaTeX and PostScript submissions will be accepted. The papers must be in the format specified above, and must include everything needed to print them (e.g., fonts, macros, figures, etc.). LaTeX macros and more detailed instructions will be given upon request to the ECAL95 Program Secretary, or can be obtained by ftp from the ECAL95 site. For demonstrations and videos contact the Program Secretary.
Registration / Information
Program Secretary: Juan J. Merelo, Dept.
Electronica, Facultad de Ciencias, Campus Fuentenueva, 18071 Granada, Spain. Phone: +34-58-243162; Fax: +34-58-243230; E-mail: ecal95 at casip.ugr.es
Access to ECAL95 site: casip.ugr.es (150.214.60.74), login: anonymous, cd /pub/ecal95
Organization Committee
Federico Moran, U. Complutense Madrid (E), Chair
Alvaro Moreno, U. Pais Vasco, San Sebastian (E), Chair
Arantza Etxeberria, U. Sussex (UK)
Julio Fernandez, U. Pais Vasco, San Sebastian (E)
George Kampis, ELTE Univ. Budapest (H)
Francisco Montero, U. Complutense, Madrid (E)
Tim Smithers, U. Pais Vasco, San Sebastian (E)
Carme Torras, U. Politecnica Catalunya, Barcelona (E)
Local Committee
Alberto Prieto, U. Granada (E), Chair
Juan J. Merelo, U. Granada (E), Secretary
Julio Ortega, U. Granada (E)
Francisco J. Pelayo, U. Granada (E)
Program Committee
Francisco Varela, CNRS/CREA, Paris (F), Chair
Juan J. Merelo, U. Granada (E), Secretary
Riccardo Antonini, U. Carlos III, Madrid (E)
Michael Arbib, USC, Los Angeles, CA (USA)
Randall D. Beer, Case Western Reserve U., Cleveland, OH (USA)
George Bekey, USC, Los Angeles, CA (USA)
Hugues Bersini, ULB, Brussels (B)
Paul Bourgine, CEMAGREF, Antony (F)
Rodney Brooks, MIT, Cambridge, MA (USA)
Scott Camazine, Wissenschaftskolleg, Berlin (D)
Peter Cariani, MEEI, Boston, MA (USA)
Michael Conrad, Wayne State U., Detroit, MI (USA)
Jacques Demongeot, U. J. Fourier, La Tronche (F)
Jean-Louis Deneubourg, U. Libre de Bruxelles, Brussels (B)
Michael Dyer, UCLA, Los Angeles, CA (USA)
Claus Emmeche, U. of Roskilde (DK)
Walter Fontana, U. of Vienna (A)
Brian C. Goodwin, Open U., Milton Keynes (UK)
Pauline Hogeweg, U. of Utrecht (NL)
Philip Husbands, U. of Sussex, Brighton (UK)
John Koza, Stanford U., CA (USA)
Chris Langton, Santa Fe Institute, NM (USA)
Pier L. Luisi, ETHZ, Zurich (CH)
Pattie Maes, MIT, Cambridge, MA (USA)
Pedro C. Marijuan, U. Zaragoza (E)
Maja J. Mataric, MIT, Cambridge, MA (USA)
Enrique Melendez-Hevia, U.
La Laguna, Tenerife (E)
Eric Minch, Stanford U., CA (USA)
Melanie Mitchell, Santa Fe Institute, NM (USA)
Jim D. Murray, U. of Washington, Seattle, WA (USA)
Juan C. Nuno, U. Politecnica de Madrid (E)
Domenico Parisi, CNR, Roma (I)
Mukesh Patel, Politecnico di Milano, Milan (I)
Howard Pattee, SUNY Binghamton, NY (USA)
Juli Pereto, U. Valencia (E)
Rolf Pfeifer, U. Zurich-Irchel, Zurich (CH)
Steen Rasmussen, LANL, Los Alamos, NM (USA)
Robert Rosen, Dalhousie U., Halifax (CA)
Peter Schuster, IMB, Jena (D)
Luc Steels, VUB, Brussels (B)
John Stewart, Institut Pasteur, Paris (F)
Jon Umerez, SUNY Binghamton, NY (USA)
William C. Wimsatt, U. of Chicago (USA)
Rene Zapata, LIRM, Montpellier (F)
Official Language: English
Publisher: Springer-Verlag
Important dates:
January 9, 1995   Submission deadline
March 10          Notification of acceptance
March 24          Camera-ready due
March 31          Early registration deadline
May 4             Regular registration deadline
June 3            Reception and on-site registration
June 4-6          Conference dates
Sponsored by: Spanish RIG IEEE Neural Networks Council, Silicon Graphics (Spain), Parque de las Ciencias de Granada, EEC, DGICYT (Spain), CICYT (Spain), Junta de Andalucia (Spain), EUDEMA
Organised by: Universidad de Granada, Universidad Complutense de Madrid, Universidad del Pais Vasco
From amari at sat.t.u-tokyo.ac.jp Wed May 11 18:27:56 1994 From: amari at sat.t.u-tokyo.ac.jp (Shun-ichi Amari) Date: Wed, 11 May 94 18:27:56 JST Subject: position available Message-ID: <9405110927.AA08940@mail.sat.t.u-tokyo.ac.jp>
Research Positions in Computational Neuroscience ----- RIKEN Frontier Research Program
The Institute of Physical and Chemical Research (RIKEN) will start a new eight-year Frontier Research Program on Neural Information Processing, beginning in October 1994. The Program includes three research laboratories, each consisting of one research leader and several researchers. They are laboratories for neural modeling, for neural information representations, and for artificial brain systems.
We will study fundamental principles underlying higher-order brain functioning from mathematical, information-theoretic, and systems-theoretic points of view. The three laboratories will cooperate in constructing various models of the brain, mathematically analyzing information principles in the brain, and designing artificial brain systems. We will have close contact with another Frontier Research Program on experimental neuroscience headed by Dr. M. Ito. We hope that the laboratories will be directed by outstanding leaders under international cooperation, keeping academic freedom, with relatively generous research funds. Research positions, available from October 1994, are open on one-year contracts to researchers and post-doctoral fellows, extendable for at most five years. A laboratory leader position is also available for an outstanding established researcher, on a three- to eight-year contract. The positions carry standard Japanese salaries. Those who are interested may send a curriculum vitae, a list of papers, some reference names, and copies of one or two representative papers to the director of the Program: Dr. Shun-ichi Amari, Department of Mathematical Engineering and Information Physics, Faculty of Engineering, University of Tokyo, Bunkyo-ku, Tokyo 113, JAPAN; tel. +81-3-3812-2111 ext. 6910; fax
+81-3-5689-5752; e-mail: amari at sat.t.u-tokyo.ac.jp
From sylee at eekaist.kaist.ac.kr Thu May 12 15:18:20 1994 From: sylee at eekaist.kaist.ac.kr (Soo-Young Lee) Date: Thu, 12 May 94 15:18:20 KST Subject: ICONIP'94-Seoul Extended Deadline and Registration Message-ID: <9405120618.AA01833@eekaist.kaist.ac.kr>
International Conference on Neural Information Processing
ICONIP'94-Seoul, October 17 - 20, 1994
******************************************
PAPER DEADLINE EXTENDED UNTIL MAY 31, 1994
PRE-REGISTRATION BY AUGUST 31, 1994
******************************************
Organized by the Korean Association of Intelligent Information Processing
Sponsored by the Asian-Pacific Neural Network Assembly
In cooperation with the International Neural Network Society, IEEE Neural Network Council, European Neural Network Society, and Japanese Neural Network Society
o Dates : October 17 (Mon.) - October 20 (Thur.), 1994
o Venue : The Swiss Grand Hotel, Seoul, Korea; Tel : +82(Korea)-2(Seoul)-356-5656; Fax : +82(Korea)-2(Seoul)-356-7799
o Official Language : The official language of the Conference is English, which will be used for all paper presentations and printed materials.
***************
CALL FOR PAPERS
***************
Topics of Interest: All areas of neural networks and related areas such as fuzzy logic, genetic algorithms, and chaos are included.
Neurobiological Systems / Neural Network Architectures / Network Dynamics / Cognitive Science / Self-Organization / Learning & Memory / Sensorimotor Systems / Time-Series Prediction / Optimization / Communication Applications / Power Electronics Applications / Image Processing & Vision / Speech Recognition & Language / Robotics & Control / Other Applications / Implementation (Electronic, Optical, and Bio-chips) / Hybrid Systems (Fuzzy Logic, Genetic Algorithms, Expert Systems, Chaos, and AI)
*****************************************************************************
TECHNICAL PROGRAM
Plenary Talks
Igor Aleksander, Imperial College, UK: The Prospects for a Neural Artificial Consciousness
Kunihiko Fukushima, Osaka Univ., Japan: Neural Networks for Selective Looking
Harold Szu, Naval Surface Warfare Center, USA: Adaptive Wavelet Transforms by Wavenets
Invited Talks
Shun-ichi Amari, Univ. of Tokyo, Japan: Information Geometry of Stochastic Multilayer Perceptron
Walter Freeman, Univ. of California Berkeley, USA: (title not available yet)
Toshio Fukuda, Nagoya Univ., Japan: Planning and Behavioral Control of Intelligent Robot System with Fuzzy-Neuro-GA based Computational Intelligence
Dan Hammerstrom, Adaptive Solutions, USA: Silicon Cortex: The Impossible Dream?
Il-Song Han, Korea Telecom, Korea: URAN: A Hybrid Neural Network VLSI
Gerd Hausler, Univ. Erlangen, Germany: Chaos, Pattern Formation & Associative Memory with Nonlinear Pictorial Feedback
Masumi Ishikawa, Kyushu Inst. Tech., Japan: Structural Learning and Modular Networks
Mitsuo Kawato, ATR Human Information Processing Research Lab., Japan: Teaching by Showing for Task Level Robot Learning through Movement Pattern Perception
Eun-Soo Kim, Kwangwoon Univ., Korea: Target Image Processing Based on Neural Networks
Myung Won Kim, ETRI, Korea: Artificial Creativity: Its Computational Modeling and Potential Applications
Kazuo Kyuma, Mitsubishi Electric, Japan: Comparison of Electrical and Optical Hardware Implementation of Neural Networks
Francesco B.
Lauria, Universita di Napoli, Italy: On the Hebb Rule Implementation of Boolean Neural Networks
Soo-Young Lee, KAIST, Korea: Requirements and Perspectives of Neuro-Computers: How and Where Neuro-Computers Can Win Against General-Purpose Computers?
Sukhan Lee, Univ. of Southern California & JPL, USA: Theory and Application of Dual-Mode Dynamic Neural Networks
Yillbyung Lee, Yonsei Univ., Korea: Saccadic Eye Movement Signal Generation Modeling Using Recurrent Neural Network
Joseph Malpeli, Univ. Illinois Urbana-Champaign, USA: A Thermodynamic Model of the Morphogenesis of the Primate Lateral Geniculate Nucleus
Robert Marks II, Univ. of Washington, USA: Evolutionary Inversion and Hausdorff Distance Evaluation of Trained Layered Perceptrons
Gen Matsumoto, Electrotechnical Lab., Japan: The Brain as a Computer
Nelson Morgan, Univ. California Berkeley, USA: Using a Million Connections for Continuous Speech Recognition
Yoichi Muraoka, Waseda Univ., Japan: "Kansei" Information Processing - Can a Neural Network Live up to this Challenge?
Kumpati Narendra, Yale Univ., USA: Switching and Tuning Using Multiple Neural Network Models
Andras Pellionisz, Silicon Valley, USA: (title not available yet)
John Taylor, King's College London, UK: Where is Neurobiological Modelling going to End Up?
Philip Treleaven, Univ. College London, UK: Intelligent Systems for Banking, Insurance and Retail: a Survey of UK Systems
Minoru Tsukada, Tamagawa Univ., Japan: Theoretical Model of the Hippocampal-Cortical Memory System Motivated by Physiological Functions
Alex Waibel, Carnegie-Mellon Univ., USA: Hybrid Connectionist and Classical Approaches in JANUS, an Advanced Speech-to-speech Translation System
Bo-Hyeun Wang, Goldstar Central Research Lab., Korea: Hybrid Location-Content Addressable Memories (HyLCAM) and Its Application to Character Recognition Problems
Andreas Weigend, Univ. of Colorado, USA: Predicting Predictability: How Well Can We Forecast the Future?
Paul Werbos, NSF, USA: Brain-Like Intelligence: How Far are We and How can We get There?
Youshou Wu, Tsinghua Univ., China: Strategy in Constructing a Large Scale Neural Network System for Chinese Character Recognition
Takeshi Yamakawa, Kyushu Inst. of Tech., Japan: Wavelet Neural Networks Realizing High Speed Learning
Tutorials (Oct. 17)
Igor Aleksander, Imperial College, UK: Weightless Neural Systems
Harold Szu, Naval Surface Warfare Center, USA: Chaos Theory, Applications, and Implementations
Alex Waibel, Carnegie-Mellon Univ., USA: Connectionist Models in Multi-modal User Interfaces
John Taylor, King's College London, UK: Automatic Target Recognition with Neural Networks
Andreas Weigend, Univ. of Colorado, USA: Avoiding Overfitting in Time-Series Prediction
Takeshi Yamakawa, Kyushu Inst. of Tech.: Fuzzy Logic: Theory, Hardware Implementation and Applications
ICONIP NEWs (Neural-net Evaluation Workshops)
In addition to the regular conference sessions, special topical workshops, NEWs (Neural-net Evaluation Workshops), are planned to promote in-depth discussions during the conference period at the conference venue. Currently the following 5 NEWs are planned.
NEW on Financial Applications - Co-chairs: Guido Deboeck, World Bank; Rhee-Man Kil, ETRI
NEW on Speech Recognition - Co-chairs: Moon-Sung Han, SERI; Nelson Morgan, UC Berkeley
NEW on Image and Machine Vision - Co-chairs: Eun-Soo Kim, Kwangwoon Univ.
NEW on Hybrid Systems - Co-chairs: Hideyuki Takagi, Matsushita Electric Ind. Co.; Bo-Hyun Wang, Goldstar Central Research Lab.
NEW on VLSI Implementations - Co-chairs: Alister Hamilton, Edinburgh Univ.; Il-Song Han, Korea Telecom
*******************************************************************************
ORGANIZATION OF CONFERENCE
Conference Co-Chairs
Shun-ichi Amari, Univ. of Tokyo, Japan
In-Ku Kang, KCRA, Korea
Seung-Taik Yang, ETRI, Korea
International Advisory Committee
Igor Aleksander, Imperial College, UK
Marcelo H. Ang, Jr., Nat'l Univ. of Singapore, Singapore
Michael A.
Arbib, USC, USA
Yiannis Attikiouzel, Univ. of Western Australia, Australia
Russell C. Eberhart, RTI, USA
Walter Freeman, UC Berkeley, USA
Toshio Fukuda, Nagoya Univ., Japan
Marwan Jabri, Univ. of Sydney, Australia
Nikola Kasabov, Univ. of Otago, New Zealand
Teuvo Kohonen, Helsinki Univ. of Tech., Finland
Ben I. Lin, Taiwan Nat'l Univ., Taiwan
Cheng-Yuan Liou, Taiwan Nat'l Univ., Taiwan
Teck-Seng Low, Nat'l Univ. of Singapore, Singapore
Robert J. Marks II, Univ. of Washington, USA
Gen Matsumoto, ETL, Japan
Harold Szu, NSWC, USA
Bernard Widrow, Stanford Univ., USA
Youshou Wu, Tsinghua Univ., China
Sha Zhong, Chinese Inst. Elec., China
Domestic Advisory Committee
Jeung-Nam Bien, Korea Fuzzy Mathematics & Systems Society
Jung-Wan Cho, Center for Artificial Intelligence Research
Kun-Moo Chung, Institute for Advanced Engineering
Seong-Han Ha, Samsung Advanced Institute of Technology
Seung-Hong Hong, The Korean Institute of Telematics & Electronics
Heung-Soon Ihm, Hyundai Co. Ltd.
Kyung-Chul Jang, Ministry of Science & Technology
Chang-Soo Kim, Goldstar Co. Ltd.
Chu-Shik Jhon, Research Institute of Advanced Computer Technology
Jae-Kyoon Kim, Korean Institute of Communication Sciences
Moon-Hyun Kim, System Engineering Research Institute
Sang-Young Kim, The Electronic Times
Se-Jong Kim, Ministry of Trade, Industry & Energy
Yung-Taek Kim, Seoul Nat'l Univ.
Cho-Sik Lee, The Korean Society for Cognitive Science
Choong-Woong Lee, IEEE Korea Council
Chung-Nim Lee, POSTECH
Dong-Ho Lee, The Korean Institute of Electrical Engineers
Suk-Ho Lee, Korea Information Science Society
Yong-Bok Lee, Samsung Electronics Co., Ltd.
Yong-Kyung Lee, Korea Telecom
Seok-Keun Yoon, Ministry of Communication
Si-Ryong Yu, Daewoo Electronics Co., Ltd.
Organizing Committee Co-Chairs
Sung-Yang Bang, POSTECH
Kyu-Bock Cho, Hanseo Univ.
Ho-Sun Chung, Kyungbook Nat'l Univ.
Sub-Committee Chairs
General Affairs : Eun-Soo Kim, Kwangwoon Univ.
Finance : Sung-Kwon Kim, Samsung Electronics Co., Ltd.
Publicity : Sung-Kwon Park, Hanyang Univ.
Publication : Il-Song Han, Korea Telecom
Exhibition : Mun-Sung Han, SERI
Local Arrangement : Yillbyung Lee, Yonsei Univ.
Registration : Duck-Jin Chung, Inha Univ.
Tutorial : Soo-Ik Chae, Seoul Nat'l Univ.
Industrial Liaison : Gwang-Hyung Lee, Soongsil Univ.
Program Committee Co-Chairs
Kunihiko Fukushima, Osaka Univ., Japan
Stephen Grossberg, Boston Univ., USA
Myung-Won Kim, ETRI, Korea
Soo-Young Lee, KAIST, Korea
John Taylor, King's College, UK
Program Committee Members
Seung-Kwon Ahn, Goldstar Central Research Lab., Korea
Igor Aleksander, Imperial College, UK
Luis B. Almeida, INESC, Portugal
James Anderson, Brown Univ., USA
Kazuo Asakawa, Fujitsu Lab. Ltd., Japan
Yiannis Attikiouzel, Univ. of Western Australia, Australia
Eui-Young Cha, Pusan Nat'l Univ., Korea
Soo-Ik Chae, Seoul Nat'l Univ., Korea
Lai-Wan Chan, Chinese Univ. of Hong Kong, Hong Kong
Sung-Il Chien, Kyungbook Nat'l Univ., Korea
Sungzoon Cho, POSTECH, Korea
Yong-Beom Cho, Konkuk Univ., Korea
Jin-Young Choi, Seoul Nat'l Univ., Korea
Myung-Ryul Choi, Hanyang Univ., Korea
Duck-Jin Chung, Inha Univ., Korea
Hong Chung, POSTECH, Korea
Dante Del Corso, Politecnico di Torino, Italy
Yann Le Cun, AT&T Bell Lab., USA
Rolf Eckmiller, Univ. of Bonn, Germany
Francoise Fogelman-Soulie, Univ. of Paris Sud, France
Kunihiko Fukushima, Osaka Univ., Japan
Lee Giles, NEC Inst., USA
Stephen Grossberg, Boston Univ., USA
Yeong-Ho Ha, Kyungbook Nat'l Univ., Korea
Dan Hammerstrom, Adaptive Solutions Inc., USA
Il-Song Han, Korea Telecom, Korea
Mun-Sung Han, SERI, Korea
Stephen Hanson, Siemens Corp. Research, USA
Yuzo Hirai, Univ. of Tsukuba, Japan
Young-Sik Hong, Dongguk Univ., Korea
Naohiro Ishii, Nagoya Inst. of Tech., Japan
Masumi Ishikawa, Kyushu Inst. of Tech., Japan
Akira Iwata, Nagoya Inst. of Tech., Japan
Ju-Seog Jang, Nat'l Fisheries Univ., Korea
B. Keith Jenkins, Univ.
of Southern California, USA
Hong-Tae Jeon, Chung-Ang Univ., Korea
Yeun-Cheul Jeung, Samsung Co., Korea
Nikola-Kirilov Kasabov, Univ. of Otago, New Zealand
Rhee-Man Kil, ETRI, Korea
Byung-Ki Kim, Chunnam Nat'l Univ., Korea
Dae-Su Kim, Hanshin Univ., Korea
Eun-Soo Kim, Kwangwoon Univ., Korea
Eung-Soo Kim, Sunghwa Univ., Korea
Jai-Hi Kim, Yonsei Univ., Korea
Jin-Hyung Kim, KAIST, Korea
Jung-Hawn Kim, Univ. of Louisiana, USA
Moon-Won Kim, Naval Research Lab., USA
Kwang-Ill Koh, Goldstar Industrial Systems, Korea
Seong-Gon Kong, Soongsil Univ., Korea
Bart Kosko, Univ. of Southern California, USA
Kazuo Kyuma, Mitsubishi Electric Co., Japan
Francesco Lauria, Univ. Napoli, Italy
Bang-Won Lee, Samsung Electronics, Korea
Chan-Do Lee, Taejon Univ., Korea
Choon-Kil Lee, Seoul Nat'l Univ., Korea
Geun-Bae Lee, POSTECH, Korea
Gwang-Hyung Lee, Soongsil Univ., Korea
Huen-Joo Lee, Goldstar Central Research Lab., Korea
Hwang-Soo Lee, KAIST, Korea
Ke-Sig Lee, Samsung Group, Korea
Sukhan Lee, Univ. of Southern California, USA
Won-Don Lee, Chungnam Nat'l Univ., Korea
Yillbyung Lee, Yonsei Univ., Korea
Young-Jik Lee, ETRI, Korea
Cheng-Yuan Liou, Nat'l Taiwan Univ., Taiwan
Raymond Lister, Univ. of Queensland, Australia
Teck-Seng Low, Nat'l Univ. of Singapore, Singapore
Ho Chung Lui, Nat'l Univ. of Singapore, Singapore
Song-De Ma, Chinese Academy of Sciences, China
Maria Marinaro, Univ. of Salerno, Italy
Gen Matsumoto, Electrotechnical Lab., Japan
Gyu Moon, Hallim Univ., Korea
Pietro G. Morasso, Univ. of Genova, Italy
Takashi Nagano, Hosei Univ., Japan
Jong-Ho Nang, Sogang Univ., Korea
Kumpati S. Narendra, Yale Univ., USA
Jong-Hoon Oh, POSTECH, Korea
Erkki Oja, Helsinki Univ., Finland
Sigeru Omatu, Univ. of Tokushima, Japan
Eung-Gi Paek, Rockwell, USA
Cheol-Hoon Park, KAIST, Korea
Dong-Jo Park, KAIST, Korea
Sung-Kwon Park, Hanyang Univ., Korea
Andras Pellionisz, Silicon Valley Neurocomputing Inst., USA
Michael Perrone, Brown Univ., USA
Alberto Prieto, Univ.
de Granada, Spain Demetri Psaltis, California Inst. of Tech., USA Hide-Aki Saito, Tamagawa Univ., Japan Sebastian Seung, AT&T Bell Lab., USA Jong-Han Shin, ETRI, Korea Omori Takashi, Tokyo Univ. of Agri. & Tech., Japan Shaohua Tan, Nat'l Univ. of Singapore, Singapore Horia-Nicolai L. Teodorescu, Ecole Polytechnique Federale de Lausanne, Switzerland Philip Treleaven, Univ. College London, UK Minoru Tsukada, Tamagawa Univ., Japan Shiro Usui, Toyohashi Univ. of Technology, Japan M. Vidyasagar, Centre for AI & Robotics, India Bo-Hyeun Wang, Goldstar Central Research Lab., Korea Lei Xu, Chinese Univ. of Hong Kong, Hong Kong Pingfan Yan, Tsinghua Univ., China Hyun-Seung Yang, KAIST, Korea Young-Kyu Yang, SERI, Korea Toyohiko Yatagai, Univ. of Tsukuba, Japan Hyun-Soo Yoon, KAIST, Korea Shuji Yoshizawa, Univ. of Tokyo, Japan Byoung Tak Zhang, GMD, Germany Jacek Zurada, Univ. of Louisville, USA

Information for Authors:
Original papers are solicited that describe unpublished work on neural networks or related topics such as fuzzy logic, genetic algorithms, and chaos. One original and five copies of each manuscript in English must be received by May 31, 1994 (extended due date). Submissions will be acknowledged on receipt. Submitted papers will be reviewed by the Program Committee, and corresponding authors will be informed of the decisions at the end of July 1994. No material submitted will be returned to authors. Electronic-mail or facsimile submissions are not acceptable.

Paper Format
Submitted manuscripts must be camera-ready on A4-size white paper with 2.5 cm margins on all four sides (24.5cm X 16.0cm printing area), and should not exceed 6 pages, including figures, tables, and references. A single-spaced, single-column format in Times or a similar font at 10-point size is recommended. Centered at the top of the first page should be the complete paper title, full author name(s), affiliation(s), and mailing address(es). An abstract of fewer than 150 words should follow. 
Authors are encouraged to use LaTeX. The appropriate LaTeX style file and example file can be obtained by FTP or e-mail. To get those files by FTP, use the following instructions:

ftp cnsl.kaist.ac.kr
(LOGIN:) anonymous
(PASSWORD:)
cd paperformat
get ICONIP94.sty
get ICONIP94-example.tex
bye

If this is not convenient, just send an e-mail message to "iconip94 at cnsl.kaist.ac.kr", of which the first 2 lines should be:

send ICONIP94.sty
send ICONIP94-example.tex

Non-LaTeX users may ask for an example of the paper layout by fax.

Accompanying Letter
In an accompanying letter, the following should be included:
- full title of the paper
- corresponding author name with mailing address, fax number, and e-mail address
- presenting author name
- technical sessions (first and second choices)
- preferred presentation mode (oral or poster)
- keywords (up to 5)
- audio-visual requirements (overhead projector, slide projector, video)
The Program Committee may recommend changing the presentation mode, if necessary.

Conference Proceedings
All papers from the oral and poster sessions will be published in the Conference Proceedings, which will be available at the Conference. At least one author of each accepted paper must make an advance registration before August 31, 1994; only manuscripts of authors satisfying this requirement will be published in the Proceedings.

Paper Copyright
By submitting a paper, authors agree to the transfer of its copyright to the Conference Organizer for the proceedings. All submitted papers become the property of the Conference Organizer.

Oral Presentation
The official language of the Conference is English, and no translation will be provided. The time assigned for contributed talks will be 20 minutes (15 minutes for presentation and 5 minutes for discussion), and for invited talks 30 minutes (25 minutes for presentation and 5 minutes for discussion). An overhead projector and a 35mm slide projector will be available in the preview room. 
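For batch retrieval, the interactive FTP session described above can be scripted. This is a sketch only: the host and file names are taken verbatim from the announcement (the 1994 host may no longer exist), and it assumes a classic ftp client that supports the -n flag; the password e-mail address is a placeholder.

```shell
# Sketch: write the announcement's FTP session to a command file and replay
# it non-interactively with the classic ftp client's -n (no auto-login) flag.
# Host and file names come from the announcement; substitute your own e-mail
# address as the anonymous-FTP password.
cat > iconip94.ftp <<'EOF'
open cnsl.kaist.ac.kr
user anonymous your-address@example.com
cd paperformat
get ICONIP94.sty
get ICONIP94-example.tex
bye
EOF
# Replay the session (not run here, since the 1994 host is assumed offline):
# ftp -n < iconip94.ftp
```

The e-mail interface described above remains an alternative if FTP access is unavailable.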
The presenters may test their transparencies and slides.

Poster Presentation
A specific time (about 1.5 hours) will be allocated for poster sessions, with no oral sessions in parallel. For poster sessions, each author will be provided with a bulletin board 1.5m high X 0.9m wide. Authors are requested to remain in the vicinity of the bulletin board for the whole duration of the session to answer questions.
- One backboard panel is available per poster presentation. A board is 90cm X 150cm (WxH).
- The title, authors' names, and affiliations should occupy the top 20cm of the poster panel.
- There should be a blank square at the upper left corner for the reference number.
- The poster should be prepared in English. The title should be brief, informative, and readable from 2 to 3 meters.
- Scotch tape, pins, paste and scissors will be provided by the secretariat.

* Poster Sample

+----+-----------------------+-----------
|NO. |TITLE,NAME,AFFILIATION |  20cm
+----+-----------------------+----
|                            |
|                            |
|                            |
|                            |  150cm
|                            |
|                            |
|                            |
+----------------------------+-----------
|            90cm            |

*****************************************************************************

o Letter of Invitation : Upon request to the Secretariat, a Letter of Invitation to ICONIP'94-Seoul will be sent to those who have fully prepaid. Please note that the Organizing Committee will not bear any financial obligations to any party as a result of its issuance.

o Secretariat : All inquiries concerning the Conference should be addressed to the Secretariat:
ICONIP'94-Seoul Secretariat
c/o INTERCOM Convention Service, Inc. (Conference Agency)
SL. Kang Nam P.O.Box 641, Seoul 135-606, Korea
Tel : +82-2-515-1560/546-7065
Fax : +82-2-516-4807
E-mail : ICONIP at cair.kaist.ac.kr

o Important Due Dates
Extended Deadline for Paper Submission                      May 31, 1994
Notice of Acceptance                                        July 31, 1994
Deadline for Advanced Registration and Hotel Reservation    August 31, 1994

o Supporting Organizations
Samsung Electronics Co.
Goldstar Co., Ltd. 
Hyundai Electronic Industry Company Ltd. Daewoo Telecom Ltd. From pjh at compsci.stirling.ac.uk Fri May 13 16:36:23 1994 From: pjh at compsci.stirling.ac.uk (Peter J.B. Hancock) Date: 13 May 94 16:36:23 BST (Fri) Subject: Call for papers Message-ID: <9405131636.AA26197@uk.ac.stir.cs.nevis> Our apologies if you receive this more than once... FINAL Call for papers 3rd Neural Computation and Psychology Workshop University of Stirling Scotland 31 August - 2 September 1994 This is the third of a series of workshops looking at the role of neural computational models in psychological research. The first, in Bangor in 1992, was on neurodynamics and psychology; last year's, in Edinburgh, concentrated on models of memory and language. This year's theme is models of perception: general vision, faces, olfaction, sound, music, etc., though there will be at least one general session where the subjects will be determined by the papers proposed. There will be invited and contributed talks and posters. Invited speakers include Dr. Ray Meddis and Professor David Willshaw. It is hoped that proceedings will be published after the event. The workshop will be limited to 75 participants to encourage an informal atmosphere. There will be 5 single-track sessions, starting on Wednesday morning and ending after lunch on Friday. Accommodation will be in student residences on campus, with the option of staying in the management centre hotel if desired. Stirling is situated in the centre of Scotland, with easy access by road, rail and air. For those wishing to spend the subsequent weekend walking, the Highlands are close at hand, and for those who prefer to be indoors, the Edinburgh International Festival and Fringe will still be in progress. We intend to keep costs low. Accommodation will be about 16 pounds per night Bed and Breakfast. Papers will be selected on the basis of abstracts of at most 1000 words, by email or hardcopy to the first address below. 
Extended deadline for submission: 15 June 1994. Participation from postgraduates is particularly encouraged. For further information contact: Peter Hancock, Department of Psychology, University of Stirling, FK9 4LA pjh at uk.ac.stir.cs, Telephone: (44) 0786-467659 Fax: (44) 0786 467641 Leslie Smith, Department of Computing Science and Mathematics, lss at uk.ac.stir.cs, Telephone: (44) 0786-467435, Fax: (44) 0786 464551 From hunt at DBresearch-berlin.de Mon May 16 16:48:00 1994 From: hunt at DBresearch-berlin.de (Dr. Ken Hunt) Date: Mon, 16 May 94 16:48 MET DST Subject: Neuro-Fuzzy Workshop Message-ID: CALL FOR PAPERS Workshop on Neuro-Fuzzy Systems ------------------------------- Hamburg, Germany, 8--9 September 1994 To be held following the International Conference on Intelligent Systems Engineering ISE 94, Hamburg, Germany (conference dates: September 5--8 1994). Background ---------- The ISE conference will be followed by a set of four workshops focussing on the relationships between methods of control engineering and artificial intelligence in the development of intelligent systems in engineering. The goal of the workshops is to support the exchange of background information and the development of a common understanding of the relative merits and limitations of the various approaches. The initiation of further collaborative actions is expected as a major outcome of the workshops with the long-term goal of bridging the gap between the two classes of approaches. These are intended to be highly interactive sessions leading to a better understanding of the motivation and perspectives of each discipline. The Neuro-Fuzzy Workshop ------------------------ Fuzzy systems and neural nets have been successfully applied to a wide range of application fields including signal processing, control, image analysis, pattern recognition and diagnostics. 
The combination of the two paradigms allows the merging of the sophisticated learning algorithms developed in the realm of neural nets with the representation of qualitative, cognitively transparent rules in fuzzy inference systems. Various architectures for hybrid neuro-fuzzy systems have been proposed: serial and hierarchical coupling of fuzzy systems and neural nets, and heterogeneous fuzzy-neural nets. This empirical work on neuro-fuzzy combinations has recently been underpinned by theoretical results establishing the direct functional equivalence of certain types of networks and a class of fuzzy systems. The workshop aims to present a balanced overview of this rapidly expanding field from both a theoretical and an application-oriented viewpoint. Recent developments in design tools will also play an important role. Specific topics for the workshop include, but are not limited to:
- Functional equivalence of neural and fuzzy systems
- Structure selection
- Common training algorithms
- Transparency/interpretation of trained systems
- Knowledge-based neural networks
- Fuzzy-neural adaptive control
- Computational issues and implementation
- Description of industrial applications
- Design tools for hybrid architectures
Papers will be selected according to their quality, significance, originality, and their potential to generate discussion on the major theme of the workshop. Presentations should be specifically designed to support an exchange of ideas and to indicate areas where contributions from the other discipline are expected. Informal working notes will be distributed during the workshop; no copyright will be requested. A paper must not exceed 10000 words, excluding references and abstract. People who wish to attend the workshop without submitting a paper should send a letter describing their background and research interests by the paper submission deadline. 
Submission:
----------
Please direct enquiries and submit papers or extended abstracts (3 copies please) to:

Dr K J Hunt
Systems Technology Research
Daimler-Benz AG
Alt-Moabit 91 b
D-10559 BERLIN
Germany
Tel: (030) 399 82 275 Int: + 49 30 399 82 275
FAX: (030) 399 82 107 Int: + 49 30 399 82 107
Email: hunt at DBresearch-berlin.de

Schedule:
--------
Submission deadline    July 1st, 1994
Notifications sent     July 31st, 1994
Workshop               September 8/9th, 1994

Organizing Committee:
--------------------
Ken Hunt (Daimler-Benz Research, Berlin) hunt at DBresearch-berlin.de
Roland Haas (Daimler-Benz Research, Berlin and TU-Clausthal) haas at DBresearch-berlin.de
Dietmar Moeller (Technische Universitaet Clausthal) moeller at fuzzy-labor.in.tu-clausthal.de

Other Workshops:
---------------
The three other workshops and contacts are:
Qualitative and Quantitative Approaches to Model-based Diagnosis: freitag at zfe.siemens.de
Advanced Planning and Scheduling: hje at robots.oxford.ac.uk
Architectures for Intelligent Systems: marin at iic.es

From pedreira at ele.puc-rio.br Mon May 16 17:27:16 1994 From: pedreira at ele.puc-rio.br (Carlos Eduardo Pedreira) Date: Mon, 16 May 94 16:27:16 EST Subject: 2 cfp Brazilian Cong. on Neural Networks Message-ID: <9405161927.AA11956@Octans.ele.puc-rio.br> Dear Neural Networkers, Please note the new deadlines for paper submission. Carlos E. Pedreira - Chairman of the National Council on Neural Networks ***************************************************************************** 1st BRAZILIAN CONGRESS (and 2nd SCHOOL) ON ARTIFICIAL NEURAL NETWORKS October 24-27, 1994 Itajuba, Minas Gerais SECOND CALL FOR PAPERS The National Council on Neural Networks is pleased to announce the Federal Engineering School at Itajuba (EFEI) as the venue for the 1st Brazilian Congress / 2nd School on Artificial Neural Networks (ANN). 
The objectives of the Congress are twofold: the first is to encourage communication among researchers whose work either draws support from, or complements, the theory and applications of ANN-related models; the second is to explore industrial applications of ANN technology that make systems more convenient. The program committee cordially invites interested authors to submit papers dealing with any aspect of research and applications related to the use of ANN models. Papers will be carefully reviewed, and only accepted papers will appear in the proceedings, which will be available at the congress to all registrants. Possible topic areas include (although they are not limited to) the following:
- foundations and mathematical issues
- learning and memory
- new architectures
- neurobiological systems
- hybrid systems
- hardware implementations
- applications to
  * robotics and automation
  * system control
  * signal processing
  * pattern recognition
  * forecasting
  * optimization
In addition to the paper presentations, there will be other activities such as:
- a short course on the theory of neural computation;
- tutorials on ANN applications in energy, business, and biomedicine, among others;
- a panel session entitled "Perspective and Reality of ANN";
- social programs and sight-seeing tours.
The short course and the tutorials are offered as an introduction for newcomers to the area. The lectures will be presented by renowned researchers and recognized practitioners. The short course will be on October 24th, and the tutorials will run, along with the congress, on October 25-27, 1994. Papers can be written in Portuguese or English. An article shall not exceed 6 pages on 8.5x13 inch paper. Articles must use a double-column format, with header, footer and lateral margins equal to 1 inch. A 7 cm header margin is required for the first page. The suggested font is Times Roman, with 5 characters per centimetre. All papers must include an abstract. 
PAPER SCHEDULE AT-A-GLANCE Authors submit original plus 3 copies of papers by: June 30th, 1994 Notification of review committee's decision to be posted by: August 15th, 1994 INTERNATIONAL DISTINGUISHED SCHOLARS Prof. Yoh-Han Pao - Case Western Reserve University Prof. Manoel F. Tenorio - Purdue University Prof. Dejan J. Sobajic - EPRI Prof. Yaser Abu-Mostafa - Caltech Dr. Steve Suddarth - AFOSR/NE PROGRAM COMMITTEE Prof. Alexandre P. Alves da Silva (Chairman) - EFEI Prof. Armando Freitas da Rocha - UNICAMP Prof. Manoel F. Tenorio - Purdue University Prof. Nestor Caticha - USP Prof. Carlos Eduardo Pedreira - PUC-Rio 2nd SCHOOL ON ANN CHAIR Profa. Teresa B. Ludermir - UFPE STEERING COMMITTEE Prof. Germano Lambert Torres (Chairman) - EFEI Prof. Luiz P. Caloba - COPPE/UFRJ Prof. Luiz Eduardo Borges da Silva - EFEI Dr. Eduardo Nery - CEMIG NATIONAL COUNCIL ON NEURAL NETWORKS Prof. Carlos Eduardo Pedreira - PUC-Rio Prof. Teresa B. Ludermir - UFPE Dr. Ricardo J. Machado - IBM-Brasil Prof. Dante A.C. Barone - UFRGS Prof. Luiz P. Caloba - UFRJ Prof. Renato M. Sabbatini - UNICAMP SPONSORED BY IBM-Brasil American Airlines EFEI / FUPAI FAPERJ FAPEMIG CEMIG All correspondence should be conducted through: Professor Alexandre P. Alves da Silva EFEI / IEE Campus Prof. J.R. Seabra Av. 
BPS, 1303 - CEP 37500-000 Itajuba - MG BRAZIL Tel.: +55-35-629-1247 Fax.: +55-35-629-1187 E-mail: alex at efei.dcc.ufmg.br From P.McKevitt at dcs.shef.ac.uk Tue May 17 15:01:52 1994 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Tue, 17 May 94 15:01:52 BST Subject: Integration of Natural Language and Vision Processing Message-ID: <9405171401.AA27085@dcs.shef.ac.uk> **** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE **** **** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE **** PROGRAMME AND CALL FOR PARTICIPATION AAAI-94 Workshop on Integration of Natural Language and Vision Processing Twelfth National Conference on Artificial Intelligence (AAAI-94) Seattle, Washington, USA Tuesday/Wednesday, August 2nd/3rd, 1994 Chair: Paul Mc Kevitt Department of Computer Science University of Sheffield, ENGLAND, EU WORKSHOP COMMITTEE: Prof. Mike Brady (Oxford, England) Prof. Jerry Feldman (ICSI, Berkeley, USA) Prof. John Frisby (Sheffield, England) Prof. Frank Harary (CRL, New Mexico, USA) Dr. Eduard Hovy (USC ISI, Los Angeles, USA) Dr. Mark Maybury (MITRE, Cambridge, USA) Dr. Ryuichi Oka (RWC P, Tsukuba, Japan) Prof. Derek Partridge (Exeter, England) Dr. Terry Regier (ICSI, Berkeley, USA) Prof. Roger Schank (ILS, Illinois, USA) Prof. Noel Sharkey (Sheffield, England) Dr. Oliviero Stock (IRST, Italy) Prof. Dr. Wolfgang Wahlster (DFKI, Germany) Prof. Yorick Wilks (Sheffield, England) WORKSHOP DESCRIPTION: There has been a recent move towards considering the integration of perception sources in Artificial Intelligence (AI) (see Dennett 1991 and Mc Kevitt (Guest Ed.) 1994). This workshop will focus on research involved in the integration of Natural Language Processing (NLP) and Vision Processing (VP). Although there has been much progress in developing theories, models and systems in the areas of NLP and VP there has been little progress on integrating these two subareas of Artificial Intelligence (AI). 
It is not clear why there has not already been much activity in integrating these two areas. Is it because of the long-standing reductionist trend in science up until the recent emphasis on chaos theory, nonlinear systems, and emergent behaviour? Or is it because the people who work on NLP tend to be in different departments, or of a different ilk, from those who work on VP? We believe it is high time to bring together NLP and VP. Already we have advertised a call for papers for a special volume of the Journal of AI Review to focus on their integration, and we have had a tremendous response. There will be three special issues focussing on theory and applications of NLP and VP and intelligent multimedia systems. The workshop is of particular interest at this time because research in NLP and VP has advanced to the stage that they can each benefit from integrated approaches. Also, such integration is important as people in NLP and VP can gain insight from each other's work. References Dennett, Daniel (1991) Consciousness Explained. Harmondsworth: Penguin Mc Kevitt, Paul (1994) (Guest Editor) Integration of Natural Language and Vision Processing. Special Volume 8(1,2,3) of AI Review Journal. Dordrecht: Kluwer (forthcoming) WORKSHOP TOPICS: The workshop will focus on these themes: * Multimedia retrieval * Multimedia document processing * Speech, gesture and gaze * Theory * Multimedia presentation * Spatial relations * Multimedia interfaces * Reference PROGRAMME: Tuesday, August 2nd, 1994 ************************* INTRODUCTION I: 8.45 `Introduction' Paul Mc Kevitt MULTIMEDIA RETRIEVAL: (Chair: Neil C. Rowe) 9.00 `Domain-independent rules relating captions and pictures' Neil C. Rowe Computer Science, U.S. 
Naval Postgraduate School, Monterey CA, USA 9.30 `An image retrieval system that accepts natural language' Hiromasa NAKATANI and Yukihiro ITOH Department of Information and Knowledge Engineering, Shizuoka University, Hamamatsu, Japan 10.00 Break MULTIMEDIA DOCUMENT PROCESSING: (Chair: Rohini Srihari) 10.30 `Integrating text and graphical input to a knowledge base' Raman Rajagopalan Dept. of Computer Sciences, University of Texas at Austin, USA 11.00 `Photo understanding using visual constraints generated' from accompanying text Rohini Srihari Center of Excellence for Document Analysis and Recognition (CEDAR), SUNY Buffalo, NY, USA 11.30 Discussion SPEECH, GESTURE AND GAZE: (Chair: Jordi Robert-Ribes) 12.00 `Audiovisual recognition of speech units: a tentative functional model compatible with psychological data' Jordi Robert-Ribes, Michel Piquemal, Jean-Luc Schwartz & Pierre Escudier Institut de la Communication Parlee (ICP) Grenoble, France, EU 12.30 Discussion 12.45 LUNCH SITE DESCRIPTION (VIDEO): (Chair: Arnold G. Smith) 2.00 `The spoken image system: on the visual interpretation of verbal scene descriptions' Sean O Nuallain, Benoit Farley & Arnold G. Smith Dublin City University, Dublin, Ireland, EU & NRC, Ottawa, Canada THEORY: 2.20 `Behavioural descriptions from image sequences' Hilary Buxton and Richard Howarth School of Cognitive and Computing Sciences, University of Sussex & Department of Computing Science, QMW, University of London 2.50 `Visions of language' Paul Mc Kevitt Department of Computer Science, University of Sheffield, England, EU 3.15 Discussion 3.30 Break 4.00 `Language animation' A. Narayanan, L. Ford, D. Manuel, D. Tallis, and M. Yazdani Media Laboratory, Department of Computer Science, University of Exeter, England, EU 4.30 Discussion MULTIMEDIA PRESENTATION: (Chair: Arnold G. 
Smith) 4.45 `Assembly plan generation by integrating pictorial and textual information in an assembly illustration' Shoujie He, Norihiro Abe and Tadahiro Kitahashi Dept of Information Systems and Computer Science, National Univ. of Singapore, Singapore, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, Iizuka-shi, Japan & The Institute of Scientific and Industrial Research Osaka University, Osaka, Japan 5.15 `Multimedia presentation of interpreted visual data' Elisabeth Andre, Gerd Herzog & Thomas Rist DFKI & Universitaet des Saarlandes, Saarbruecken, Germany, EU 5.45 Discussion 6.00 OICHE MHAITH Wednesday, August 3rd, 1994 *************************** INTRODUCTION: 8.45 `Introduction' Paul Mc Kevitt SPATIAL RELATIONS I: (Chair: Jeffrey Mark Siskind) 9.00 `Propositional semantics in the WIP system' Patrick Olivier & Jun-ichi Tsujii Centre for Intelligent Systems University of Wales at Aberystwyth, Penglais, Wales, EU & Centre for Computational Linguistics, UMIST, Manchester, England, EU 9.30 `Spatial layout identification and incremental descriptions' Klaus-Peter Gapp & Wolfgang Maass Cognitive Science Program, Saarbruecken, Germany, EU 10.00 Break 10.30 `Axiomatic support for event perception' Jeffrey Mark Siskind Department of Computer Science, University of Toronto, Canada 11.00 Discussion SPATIAL RELATIONS II: (Chair: Stephan Kerpedjiev) 11.30 `A cognitive approach to an interlingua representation of spatial descriptions' Irina Reyero-Sans & Jun-ichi Tsujii Centre for Computational Linguistics, UMIST, Manchester, England, EU 12.00 `Describing spatial relations in weather reports through prepositions' Stephan Kerpedjiev, NOAA/ERL/Forecast Systems Laboratory, Boulder, Colorado, USA 12.30 Discussion 12.45 LUNCH MULTIMEDIA INTERFACES: (Chair: Yuri A. 
TIJERINO) 2.00 `Talking pictures: an empirical study into the usefulness of natural language output in a graphical interface' Carla Huls, Edwin Bos & Alice Dijkstra NICI, Nijmegen University, Nijmegen, The Netherlands & Unit of Experimental and Theoretical Psychology, Leiden University, The Netherlands 2.30 `From verbal and gestural input to 3-D visual feedback' Yuri A. TIJERINO, Tsutomu MIYASATO & Fumio KISHINO ATR Communication Systems Research Laboratories, Kyoto, Japan 3.00 Discussion 3.30 Break 4.00 `An integration of natural language and vision processing towards an agent-based future TV system' Yeun-Bae Kim, Masahiro Shibata & Masaki Hayashi NHK (Japan Broadcasting Corporation) Science & Technical Research Laboratories, Tokyo, Japan 4.30 Discussion REFERENCE: (Chair: Lawrence D. Roberts) 4.45 `An AI module for reference based on perception' John Moulton, Hartwick College, Oneonta, N.Y. USA and Lawrence D. Roberts, SUNY, Binghamton, N.Y. USA 5.15 `Instruction use by a vision-based mobile robot' Tomohiro Shibata, M. Inaba, & H. Inoue Department of Mechano Informatics, The University of Tokyo, Japan 5.45 Discussion 6.00 OICHE MHAITH PUBLICATION: Workshop notes/preprints will be published by AAAI. If there is sufficient interest we will publish a book on the workshop with AAAI Press. WORKSHOP CHAIR: Paul Mc Kevitt Department of Computer Science Regent Court University of Sheffield 211 Portobello Street GB- S1 4DP, Sheffield England, UK, EU. e-mail: p.mckevitt at dcs.shef.ac.uk fax: +44 742 780972 phone: +44 742 825572 (office) 825590 (secretary) ATTENDANCE: We hope to have an attendance between 30-50 people at the workshop. 
If you are interested in attending then please send the following form to p.mckevitt at dcs.shef.ac.uk as soon as possible: cut--------------------------------------------------------------------------- Name: Affiliation: Full Address: E-mail: cut---------------------------------------------------------------------------- REGISTRATION ENQUIRIES FOR AAAI CAN BE MADE TO: NCAI at aaai.org REGISTRATION FEE: Incorporated into the technical registration fee except for those who are workshop attendees only. **** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE **** **** VISION AND LANGUAGE AND VISION AND LANGUAGE AND VISION AND LANGUAGE **** From P.McKevitt at dcs.shef.ac.uk Tue May 17 15:16:07 1994 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Tue, 17 May 94 15:16:07 BST Subject: Speech and Natural Language Processing Message-ID: <9405171416.AA27396@dcs.shef.ac.uk> **** SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE **** **** SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE **** PROGRAMME AND CALL FOR PARTICIPATION AAAI-94 Workshop on Integration of Natural Language and Speech Processing Twelfth National Conference on Artificial Intelligence (AAAI-94) Seattle, Washington, USA Sunday/Monday, July 31st/August 1st, 1994 Chair: Paul Mc Kevitt Department of Computer Science University of Sheffield, ENGLAND, EU WORKSHOP COMMITTEE: Prof. Ole Bernsen (Roskilde, Denmark) Dr. Martin Cooke (Sheffield, England) Dr. Daniel Jurafsky (ICSI, Berkeley, USA) Dr. Steve Renals (Cambridge, England) Prof. Noel Sharkey (Sheffield, England) Dr. Eiichiro Sumita (ATR, Japan) Prof. Dr. Walther v.Hahn (Hamburg, Germany) Prof. Yorick Wilks (Sheffield, England) Prof. Dr. Wolfgang Wahlster (DFKI, Germany) Dr. Sheryl R. Young (CMU, USA) WORKSHOP DESCRIPTION: There has been a recent move towards considering the integration of perception sources in Artificial Intelligence (AI) (see Dennett 1991 and Mc Kevitt (Ed.) 1994). 
This workshop will focus on research involved in the integration of Natural Language Processing (NLP) and Speech Processing (SP). The aim here is to bring to the AI community results being presented at computational linguistics conferences (e.g. COLING/ACL) and speech conferences (e.g. ICASSP, ICSLP). Although there has been much progress in developing theories, models and systems in the areas of NLP and SP, we have just started to see progress on integrating these two subareas of AI. Most success has been with speech synthesis, less with speech understanding. However, there are still a number of important questions to answer about the integration of speech and language processing. How is intentional information best gleaned from speech input? How does one cope with situations where there are multiple speakers in a dialogue with multiple intentions? How does discourse understanding occur in multi-speaker situations with noise? How does prosodic information help NLP systems? What corpora (e.g. DARPA ATIS corpora, MAP-TASK corpus from Edinburgh) exist for integrated data on speech and language? The workshop is of particular interest at this time because research in NLP and SP has advanced to the stage that they can each benefit from integrated approaches. Also, such integration is important as people in NLP and SP can gain insight from each other's work. 
WORKSHOP TOPICS: The workshop will focus on these themes:
* Speech understanding
* Dialogue & Discourse
* Machine translation
* Architectures
* Site descriptions (Hamburg, JANUS-II, ATR, CMU)

PROGRAMME:

Sunday, July 31st, 1994
***********************

INTRODUCTION I:
8.45 `Introduction'
     Paul Mc Kevitt

SPEECH UNDERSTANDING I: (Chair: Alberto Lavelli)
9.00 `Left-to-Right analysis of spoken language'
     Bernd Seestaedt, Franz Kummert & Gerhard Sagerer, University of Bielefeld, Germany, EU
9.30 `An N-Best representation for bidirectional parsing strategies'
     Anna Corazza & Alberto Lavelli, IRST, Trento, Italy, EU
10.00 Break
10.30 `Incorporation of phoneme-context-dependence in LR table through constraint propagation method'
     Hozumi TANAKA, Hui LI & Takenobu TOKUNAGA, Tokyo Institute of Technology, Tokyo, Japan
11.00 Discussion

SPEECH UNDERSTANDING II: (Chair: Karen Ward)
11.15 `On the need for a theory of knowledge sources for spoken language understanding'
     Karen Ward & David G. Novick, Oregon Graduate Institute of Science and Technology, Oregon, USA
11.45 `Misrecognition detection in speech recognition'
     Sheryl R. Young, Department of Computer Science, Carnegie Mellon University, USA
12.15 Discussion
12.30 LUNCH

SITE DESCRIPTION I: (Chair: Nigel Ward)
2.00 `An outline of the Verbmobil project with focus on the work at the University of Hamburg'
     J. Amtrup, Andreas Hauenstein, C. Pyka, V. Weber & S. Wermter, University of Hamburg, Germany, EU

ARCHITECTURES I: (Chair: Nigel Ward)
2.15 `An investigation of tightly coupled time synchronous speech language interfaces using a unification grammar'
     Andreas Hauenstein & Hans H. Weber, University of Hamburg & University of Erlangen-Nuernberg, Germany, EU
2.45 `An approach to tightly-coupled syntactic/semantic processing for speech understanding'
     Nigel Ward, University of Tokyo, Japan
3.15 Discussion
3.30 Break

DIALOGUE & DISCOURSE I: (Chair: Jean Veronis)
4.00 `Pragmatic linguistic constraint models for large-vocabulary speech processing'
     Eric Atwell and Paul Mc Kevitt, University of Leeds & University of Sheffield, England, EU
4.30 `SpeechActs: a testbed for continuous speech applications'
     Paul Martin & Andy Kehler, Sun Microsystems Laboratories & Harvard University, USA
5.00 `NL and speech in the Multext project'
     Jean Veronis, Daniel Hirst, Robert Espesser & Nancy Ide, CNRS & Universite de Provence, Aix-en-Provence, France
5.30 Discussion
6.00 OICHE MHAITH

Monday, August 1st, 1994
************************

INTRODUCTION II:
8.45 `Introduction'
     Paul Mc Kevitt

SITE DESCRIPTIONS II & III: (Chair: Eiichiro Sumita)
9.00 `JANUS-II: research in spoken language translation'
     Alex Waibel, Center for Machine Translation, Carnegie Mellon University, USA & University of Karlsruhe, Germany, EU
9.15 `Work at ATR on spoken language translation'
     Dr. Eiichiro Sumita, ATR Interpreting Telecommunications Research Laboratories, Kyoto, Japan

MACHINE TRANSLATION: (Chair: Bernhard Suhm)
9.30 `Bilingual corpus for speech translation'
     Osamu FURUSE, Yasuhiro SOBASHIMA, Toshiyuki TAKEZAWA & Noriyoshi URATANI, ATR Interpreting Telecommunications Research Laboratories, Kyoto, Japan
10.00 Break
10.30 `Speech-language integration in a multi-lingual speech translation system'
     Bernhard Suhm, Lori Levin, N. Coccaro, Jaime Carbonell, K. Horiguchi, R. Isotani, A. Lavie, L. Mayfield, C.P. Rose, C. Van Ess-Dykema & Alex Waibel,
     Center for Machine Translation, Carnegie Mellon University, USA; ATR Interpreting Telecommunications Research Laboratories, Kyoto, Japan; U.S. Department of Defense; & University of Karlsruhe, Germany, EU
11.00 Discussion

ARCHITECTURES II: (Chair: Daniel Jurafsky)
11.30 `Towards an artificial agent as the kernel of a spoken dialogue system: a progress report'
     David Sadek, A. Ferrieux & A. Cozannet, France Telecom, CNET, France, EU
12.00 `Integrating experimental models of syntax, phonology, and accent/dialect in a speech recognizer'
     Daniel Jurafsky, Chuck Wooters, Gary Tajchman, Jonathan Segal, Andreas Stolcke & Nelson Morgan, ICSI and University of California at Berkeley, Berkeley, USA
12.30 Discussion
12.45 LUNCH

SITE DESCRIPTION IV: (Chair: Sheryl R. Young)
2.00 `Work at CMU on spoken dialogue systems'
     Sheryl R. Young, Department of Computer Science, Carnegie Mellon University, USA

DIALOGUE & DISCOURSE II: (Chair: Sheryl R. Young)
2.15 `Speech recognition in multi-agent dialogue'
     Sheryl R. Young, Department of Computer Science, Carnegie Mellon University, USA
2.45 `A study of intonation and discourse structure in directions'
     Barbara J. Grosz, Julia Hirschberg & Christine H. Nakatani, Harvard University & AT&T Bell Laboratories, USA
3.15 Discussion
3.30 Break

ARCHITECTURES III: (Chair: Mary P. Harper)
4.00 `An integrative architecture for speech and language understanding'
     William Edmondson, Jon Iles & Paul Mc Kevitt, University of Birmingham & University of Sheffield, England, EU
4.30 `Integrating language models with speech recognition'
     Mary P. Harper, Leah H. Jamieson, Carl D. Mitchell, Goangshiuan Ying, SiriPong Potisuk, Pramila N. Srinivasan, Ruxin Chen, Carla B. Zoltowski, Laura L. McPheters, Bryan Pellom & Randall A. Helzerman, School of Electrical Engineering, Purdue University, USA
5.00 Discussion
5.15 OICHE MHAITH

PUBLICATION: Workshop notes/preprints will be published by AAAI. If there is sufficient interest we will publish a book on the workshop with AAAI Press.
WORKSHOP CHAIR:
Paul Mc Kevitt
Department of Computer Science
Regent Court
University of Sheffield
211 Portobello Street
GB- S1 4DP, Sheffield
England, UK, EU.
e-mail: p.mckevitt at dcs.shef.ac.uk
fax: +44 742 780972
phone: +44 742 825572 (office) 825590 (secretary)

ATTENDANCE: We hope to have an attendance of between 25 and 50 people at the workshop. If you are interested in attending then please send the following form to p.mckevitt at dcs.shef.ac.uk as soon as possible:

cut---------------------------------------------------------------------------
Name:
Affiliation:
Full Address:
E-mail:
cut----------------------------------------------------------------------------

REGISTRATION ENQUIRIES FOR AAAI CAN BE MADE TO: NCAI at aaai.org

REGISTRATION FEE: Incorporated into the technical registration fee except for those who are workshop attendees only.

**** SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE AND SPEECH AND LANGUAGE ****

From kruschke at pallas.psych.indiana.edu Tue May 17 17:46:53 1994
From: kruschke at pallas.psych.indiana.edu (John Kruschke)
Date: Tue, 17 May 1994 16:46:53 -0500 (EST)
Subject: TR announcement: base rates in category learning
Message-ID:

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 3041 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/2384e49f/attachment.ksh

From ym00 at crab.psy.cmu.edu Wed May 18 12:53:15 1994
From: ym00 at crab.psy.cmu.edu (Yuko Munakata)
Date: Wed, 18 May 94 12:53:15 EDT
Subject: TR: A PDP Framework for Object Permanence Tasks
Message-ID: <9405181653.AA26835@crab.psy.cmu.edu.psy.cmu.edu>

The following Technical Report is available both electronically from our own FTP server and in hard copy form. Instructions for obtaining copies may be found at the end of this post.
========================================================================

Now You See It, Now You Don't: A Gradualistic Framework for
Understanding Infants' Successes and Failures in Object Permanence Tasks

Yuko Munakata, James L. McClelland, Mark H. Johnson, & Robert S. Siegler
Carnegie Mellon University

Technical Report PDP.CNS.94.2
May, 1994

3.5-month-old infants seem to show an understanding of the concept of object permanence when tested through looking-time measures. Why, then, do infants fail to retrieve hidden objects until 8 months? Answers to this question, and to questions of infants' successes and failures in general, depend on one's conception of knowledge representations. Within a monolithic approach to object permanence, means-ends deficits provide the standard answer. However, the current experiments with 7-month-old infants indicate that the means-ends accounts are incomplete. In the first two studies, infants were trained to pull a towel or push a button to retrieve a distant toy. Infants were then tested on trials with an opaque or transparent screen in front of the toy. Trials without toys were also included, and the difference between Toy and No-Toy trials in number of retrieval responses was used as a measure of toy-guided retrieval. The means-ends abilities required for toy-guided retrieval in the Transparent and Opaque conditions were identical, yet toy-guided retrieval was more frequent in the Transparent condition. A third experiment eliminated the possibility that training on the retrieval of visible toys had led infants to generalize better to the Transparent condition. To explain these data, an account of the object permanence concept as a gradual strengthening of representations of occluded objects is developed in the form of a connectionist model.
The simulations demonstrate how a system might come to form internal representations of occluded objects, how these representations could be graded and strengthened, and how the gradedness of representations could differentially impact upon looking and reaching behaviors.

=======================================================================

Retrieval information for pdp.cns TRs:

unix> ftp 128.2.248.152        # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.94.2.ps.Z
ftp> quit
unix> zcat pdp.cns.94.2.ps.Z | lpr   # or however you print postscript

NOTE: The compressed file is 292340 bytes long. Uncompressed, the file is 978382 bytes long. The printed version is 43 pages long.

For those who do not have FTP access, physical copies can be requested from Barbara Dorney.

From jari at vermis Wed May 18 05:17:00 1994
From: jari at vermis (Jari Kangas)
Date: Wed, 18 May 94 12:17:00 +0300
Subject: Thesis available: On the Analysis of Pattern Sequences by SOMs
Message-ID: <9405180917.AA07773@vermis>

FTP-host: vermis.hut.fi
FTP-file: pub/papers/kangas.thesis.ps.Z

The file kangas.thesis.ps.Z is now available for copying from the anonymous ftp-site 'vermis.hut.fi' (130.233.168.57):

On the Analysis of Pattern Sequences by Self-Organizing Maps
Jari Kangas
Dr.Tech. Thesis
Helsinki University of Technology

Abstract: This thesis is organized in three parts. In the first part, the Self-Organizing Map algorithm is introduced. The discussion focuses on the analysis of the Self-Organizing Map algorithm. It is shown that the nonlinear nature of the algorithm makes it difficult to analyze the algorithm except in some trivial cases. In the second part the Self-Organizing Map algorithm is applied to several pattern sequence analysis tasks. The first application is a voice quality analysis system. It is shown that the Self-Organizing Map algorithm can be applied to voice analysis by providing the visualization of certain deviations.
The key point in the applicability of the Self-Organizing Map algorithm is the topological nature of the mapping; similar voice samples are mapped to nearby locations in the map. The second application is a speech recognition system. Through several experiments it is demonstrated that by collecting some time dependent features and using them in conjunction with the basic Self-Organizing Map algorithm one can improve the speech recognition accuracy considerably. The applications described in the second part of the thesis were rather straightforward, in that the sequential signal itself was transformed for the analysis. In the third part of the thesis it is demonstrated that the Self-Organizing Map algorithm itself can be extended by identifying each Map unit with an arbitrary operator with capabilities for pattern sequence processing. It is shown that the operator maps are applicable, for example, to speech signal (waveform) categorization.

--------------------------------------
The thesis is 86 pages (8 preamble + 78 text).

To obtain a copy of the Postscript file:

% ftp vermis.hut.fi
> Name: anonymous
> Password:
> cd pub/papers
> binary
> get kangas.thesis.ps.Z
(The size of the compressed file is about 0.4 Mbyte)
> quit

Then:

% uncompress kangas.thesis.ps.Z
(The size of the uncompressed file is about 1.2 Mbyte)
% lpr -s -Pprinter kangas.thesis.ps

-------------------------------------
Jari Kangas
Helsinki University of Technology
Neural Networks Research Centre
Rakentajanaukio 2 C
FIN-02150 Espoo, FINLAND

From maass at igi.tu-graz.ac.at Thu May 19 06:03:20 1994
From: maass at igi.tu-graz.ac.at (Wolfgang Maass)
Date: Thu, 19 May 94 12:03:20 +0200
Subject: new paper in neuroprose
Message-ID: <9405191003.AA29968@figids01.tu-graz.ac.at>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/maass.perspectives.ps.Z

The file maass.perspectives.ps.Z is now available for copying from the Neuroprose repository. This is a 37-page paper. Hardcopies are not available.
PERSPECTIVES OF CURRENT RESEARCH ABOUT
THE COMPLEXITY OF LEARNING ON NEURAL NETS

by Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universitaet Graz, A-8010 Graz, Austria
email: maass at igi.tu-graz.ac.at

Abstract: This is a survey paper which discusses, within the framework of computational learning theory, the current state of knowledge and important open problems in three areas of research about the complexity of learning on neural nets:

-- Efficient algorithms for neural nets that learn from mistakes
-- Bounds for the number of examples needed to train neural nets
-- PAC-learning on neural nets without a priori assumptions about the learning problem

All relevant definitions are given in the paper, and no previous knowledge about computational learning theory is assumed.

************************ How to obtain a copy ************************

Via Anonymous FTP:

unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get maass.perspectives.ps.Z
ftp> quit
unix> uncompress maass.perspectives.ps.Z
unix> lpr maass.perspectives.ps
(or what you normally do to print PostScript)

From rmeir at ee.technion.ac.il Thu May 19 14:37:25 1994
From: rmeir at ee.technion.ac.il (Ron Meir)
Date: Thu, 19 May 1994 16:37:25 -0200
Subject: Paper available by ftp
Message-ID: <199405191837.QAA02452@ee.technion.ac.il>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/meir.bias_variance.ps.Z

The following technical report is available by anonymous ftp. 18 printed pages.

------------------------------------------------------------------------

Bias, Variance and the Combination of Estimators:
The Case of Linear Least Squares

Ronny Meir
Department of Electrical Engineering
Technion, Haifa 32000, Israel
rmeir at ee.technion.ac.il

We consider the effect of combining several least squares estimators on the solution to a regression problem.
Computing the exact bias and variance curves as a function of the sample size, we are able to quantitatively compare the effect of the combination on the bias and variance separately, and thus on the expected error, which is the sum of the two. First, we show that by splitting the data set into several independent parts and training each estimator on a different subset, the performance can in some cases be significantly improved. We find three basic regions of interest. For a small number of noisy samples the estimation quality is dramatically improved by combining several independent estimators. For intermediate sample sizes, however, the effect of combining estimators can in fact be deleterious, tending to increase the bias too much. For large sample sizes both the single and the combined estimator approach the same limit. Our results are derived analytically for the case of linear least-squares regression, and are valid for systems of large input dimension. A definite conclusion of our work is that substantial improvement in the quality of least-squares estimation is possible by decreasing the variance at the cost of an increase in bias. This gain is especially pronounced for small and noisy data sets. We stress, however, that the approach of estimator combination is not a panacea for constructing improved estimators and must be applied with care.
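[The splitting scheme described in the abstract is easy to simulate. The sketch below is illustrative only, not the paper's analytical treatment: the input dimension, sample size, number of subsets, and noise level are arbitrary choices made for the example. It fits one least-squares estimator on all the data and compares it against the average of three estimators trained on independent thirds of the same data.]

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    # least-squares weight estimate; the pseudo-inverse also covers the
    # ill-conditioned case that arises when a subset is small
    return np.linalg.pinv(X) @ y

def estimation_error(w_hat, w_true):
    # squared error of the estimated weight vector
    return float(np.sum((w_hat - w_true) ** 2))

d, n, k = 10, 60, 3            # input dimension, sample size, number of subsets
w_true = rng.standard_normal(d)

X = rng.standard_normal((n, d))
y = X @ w_true + 0.5 * rng.standard_normal(n)

# single estimator trained on all n samples
w_single = fit_ols(X, y)

# k estimators, each trained on an independent part of the data, then averaged
parts = np.array_split(np.arange(n), k)
w_combined = np.mean([fit_ols(X[p], y[p]) for p in parts], axis=0)

err_single = estimation_error(w_single, w_true)
err_combined = estimation_error(w_combined, w_true)
print(f"single:   {err_single:.4f}")
print(f"combined: {err_combined:.4f}")
```

[On any one draw either estimator may come out ahead; the paper's point concerns how the expected error decomposes into bias and variance as the sample size varies.]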
--------------------------------------------------------------------
ftp instructions:

% ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get meir.bias_variance.ps.Z
ftp> bye
% uncompress meir.bias_variance.ps.Z
% lpr meir.bias_variance.ps

From philh at cogs.susx.ac.uk Thu May 19 13:51:36 1994
From: philh at cogs.susx.ac.uk (Phil Husbands)
Date: Thu, 19 May 1994 18:51:36 +0100 (BST)
Subject: SAB94 Program and Registration
Message-ID:

CONFERENCE PROGRAM AND INVITATION TO PARTICIPATE
------------------------------------------------

FROM ANIMALS TO ANIMATS
Third International Conference on Simulation of Adaptive Behavior (SAB94)
Brighton, UK, August 8-12, 1994

The object of the conference is to bring together researchers in ethology, psychology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on well-defined models, computer simulations, and built robots in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals.

Technical Programme
===================
The full technical programme is given below. There will be a single track of oral presentations, with poster sessions separately timetabled. There will also be computer, video and robotic demonstrations.
Major topics covered will include:

Individual and collective behavior
Autonomous robots
Neural correlates of behavior
Hierarchical and parallel organizations
Perception and motor control
Emergent structures and behaviors
Motivation and emotion
Problem solving and planning
Action selection and behavioral sequences
Goal directed behavior
Neural networks and evolutionary computation
Ontogeny, learning and evolution
Internal world models
Characterization of environments and cognitive processes
Applied adaptive behavior

Invited speakers
================
Prof. Michael Arbib, University of Southern California, "Rats Running and Humans Reaching: The Brain's Multiple Styles of Learning"
Prof. Rodney Brooks, MIT, "Coherent Behavior from Many Adaptive Processes"
Prof. Herbert Roitblat, University of Hawaii, "Mechanisms and Process in Animal Behaviour: Models of Animals, Animals as Models"
Prof. John Maynard Smith, University of Sussex, "The Evolution of Animal Signals"
Prof. Jean-Jacques Slotine, MIT, "Stability in Adaptation and Learning"

Proceedings
===========
The conference proceedings will be published by MIT Press/Bradford Books and will be available at the conference.

Official Language: English
==========================

Demonstrations
==============
Computer, video and robotic demonstrations are invited. They should be of work relevant to the conference. If you wish to offer a demonstration, please send a letter with your registration form briefly describing your contribution and indicating space and equipment requirements.

Registration
============
Registration details are given after the technical program. Full conference details will be sent on registration.
CONFERENCE PROGRAM
------------------

Sunday 7th August
-----------------
Old Ship Hotel
7.00pm: Welcoming Reception and Registration

***All Conference Sessions in Brighton Conference Centre East Wing

Monday 8th August
-----------------
9:00  Coffee and Late Registration
10:30 Conference opening
11:00 From SAB90 to SAB94: Four Years of Animat Research
      Jean-Arcady Meyer and Agnes Guillot, ENS, Paris
11:30 Invited Lecture: Mechanism and Process in Animal Behavior: Models of Animals, Animals as Models
      Herbert L. Roitblat, University of Hawaii
12:30 Lunch
14:00 Modeling the Role of Cerebellum in Prism Adaptation
      Michael A. Arbib, Nicolas Schweighofer, U. Southern California and W. T. Thach, Washington University
14:30 Robotic Experiments in Cricket Phonotaxis
      Barbara Webb, University of Edinburgh
15:00 How to Watch Your Step: Biological Evidence and an Initial Model
      Patrick R. Green, University of Nottingham
15:30 On Why Better Robots Make It Harder
      Tim Smithers, Euskal Herriko Unibertsitatea
16:00 Coffee
16:30 What is Cognitive and What is *Not* Cognitive?
      Frederick Toates, Open University
17:00 Action-Selection in Hamsterdam: Lessons from Ethology
      Bruce Blumberg, MIT
17:30 Behavioral Dynamics of Escape and Avoidance: A Neural Network Approach
      Nestor A. Schmajuk, Duke University
18:00 End.

Tuesday 9th August
------------------
09:00 Invited Lecture: The Evolution of Animal Signals
      John Maynard Smith, University of Sussex
10:00 Coffee
10:30 An Hierarchical Classifier System Implementing a Motivationally Autonomous Animat
      Jean-Yves Donnart and Jean-Arcady Meyer, ENS, Paris
11:00 Spatial Learning and Representation in Animats
      Tony J. Prescott, University of Sheffield
11:30 Location Recognition in Rats and Robots
      William D. Smart and John Hallam, University of Edinburgh
12:00 Emergent Functionality in Human Infants
      Julie C. Rutkowska, University of Sussex
12:30 Lunch
14:00 -----------POSTER AND DEMONSTRATION SESSION------------
      **Posters listed at the end of this schedule
      **Full demonstrations timetable available later
16:00 Coffee
16:30 Posters and demonstrations continue
20:00 End.

Wednesday 10th August
---------------------
09:00 Invited Lecture: Stability in Adaptation and Learning
      Jean-Jacques Slotine, MIT
10:00 Coffee
10:30 Connectionist Environment Modelling in a Real Robot
      William Chesters and G. M. Hayes, University of Edinburgh
11:00 A Hybrid Architecture for Learning Continuous Environmental Models in Maze Problems
      A. G. Pipe, T. C. Fogarty, and A. Winfield, University of the West of England
11:30 The Blind Breeding the Blind: Adaptive Behavior without Looking
      Peter M. Todd, Stewart W. Wilson, Rowland Institute, Anil B. Somayaji, and Holly Yanco, MIT
12:00 Memoryless Policies: Theoretical Limitations and Practical Results
      Michael L. Littman, Brown University
12:30 End.

Thursday 11th August
--------------------
09:00 Invited Lecture: Rats Running and Humans Reaching: The Brain's Multiple Styles of Learning
      Michael Arbib, University of Southern California
10:00 Coffee
10:30 A Comparison of Q-Learning and Classifier Systems
      Marco Dorigo and Hugues Bersini, Universite Libre de Bruxelles
11:00 Paying Attention to What's Important: Using Focus of Attention to Improve Unsupervised Learning
      Leonard N. Foner and Pattie Maes, MIT
11:30 Learning Efficient Reactive Behavioral Sequences from Basic Reflexes in a Goal-Directed Autonomous Robot
      José del R. Millán, European Commission Research Centre
12:00 A Topological Neural Map for On-Line Learning: Emergence of Obstacle Avoidance in a Mobile Robot
      Philippe Gaussier and Stephane Zrehen, EPFL
12:30 Lunch
14:00 A Distributed Adaptive Control System for a Quadruped Mobile Robot
      Bruce L. Digney and M. M. Gupta, University of Saskatchewan
14:30 Reinforcement Tuning of Action Synthesis and Selection in a 'Virtual Frog'
      Simon Giszter, MIT
15:00 Achieving Rapid Adaptations in Robots by Means of External Tuition
      Ulrich Nehmzow and Brendan McGonigle, University of Edinburgh
15:30 Two-link Robot Brachiation with Connectionist Q-Learning
      Fuminori Saito and Toshio Fukuda, Nagoya University
16:00 Coffee
16:30 Integrating Reactive, Sequential, and Learning Behavior Using Dynamical Neural Networks
      Brian Yamauchi and Randall Beer, Case Western Reserve University
17:00 Seeing The Light: Artificial Evolution, Real Vision
      Inman Harvey, Phil Husbands, and Dave Cliff, University of Sussex
17:30 End.

Friday 12th August
------------------
09:00 Invited Lecture: Coherent Behavior from Many Adaptive Processes
      Rodney A. Brooks, MIT
10:00 Coffee
10:30 Evolution of Corridor Following Behavior in a Noisy World
      Craig W. Reynolds, Electronic Arts
11:00 Protean Behavior in Dynamic Games: Arguments for the Co-Evolution of Pursuit-Evasion Tactics
      Geoffrey F. Miller and Dave Cliff, University of Sussex
11:30 Towards Robot Cooperation
      David McFarland, University of Oxford
12:00 A Case Study in the Behavior-Oriented Design of Autonomous Agents
      Luc Steels, University of Brussels
12:30 Lunch
14:00 Learning to Behave Socially
      Maja J. Mataric, MIT
14:30 Signalling and Territorial Aggression: An Investigation by Means of Synthetic Behavioral Ecology
      Peter de Bourcier and Michael Wheeler, University of Sussex
15:00 Panel Session
16:00 Coffee
16:30 SAB96 -- Discussion of SAB94, Planning of SAB96.
17:30 End.

----POSTERS-----to be presented on the afternoon of Tuesday 9th August.
----------------
Authors Note: Display space = 100cm * 150cm per poster

Insect Vision and Olfaction: Different Neural Architectures for Different Kinds of Sensory Signal?
  D. Osorio, University of Sussex, Wayne M. Getz, UC Berkeley and Jurgen Rybak, FU-Berlin
The Interval Reduction Strategy for Monitoring Cupcake Problems
  Paul R. Cohen, Marc S. Atkin, and Eric A. Hansen, University of Massachusetts
Visual Control of Altitude and Speed in a Flying Agent
  Fabrizio Mura and Nicolas Franceschini, CNRS, Marseille
Organizing an Animat's Behavioural Repertoires Using Kohonen Feature Maps
  Nigel Ball, University of Cambridge
Action Selection for Robots in Dynamic Environments through Inter-Behaviour Bidding
  Michael Sahota, University of British Columbia
Using Second Order Neural Connections for Motivation of Behavioral Choices
  Gregory M. Werner, UCLA
A Place Navigation Algorithm Based on Elementary Computing Procedures and Associative Memories
  Simon Benhamou, CNRS Marseille, Pierre Bouvet, University of Geneva, and Bruno Poucet, CNRS Marseille
Self-Organizing Topographic Maps and Motor Planning
  Pietro Morasso and Vittorio Sanguineti, University of Genova
The Effect of Memory Length on the Foraging Behavior of a Lizard
  Sharoni Shafir and Jonathan Roughgarden, Stanford University
An Architecture for Learning to Behave
  Ashley M. Aitken, University of New South Wales
Reinforcement Learning for Homeostatic Endogenous Variables
  Hugues Bersini, Universite Libre de Bruxelles
An Architecture for Representing and Learning Behaviors by Trial and Error
  Pascal Blanchet, CRIN-CNRS/INRIA Lorraine
The Importance of Leaky Levels for Behavior-Based AI
  Gregory M. Saunders, John F. Kolen, and Jordan B. Pollack, Ohio State University
Reinforcement Learning with Dynamic Covering of State-Action Space: Partitioning Q-Learning
  Rémi Munos and Jocelyn Patinel, CEMAGREF
The Five Neuron Trick: Using Classical Conditioning to Learn How to Seek Light
  Tom Scutt, University of Nottingham
Adaptation in Dynamic Environments Through a Minimal Probability of Exploration
  Gilles Venturini, Universite de Paris-Sud
Automatic Creation of An Autonomous Agent: Genetic Evolution of a Neural-Network Driven Robot
  Dario Floreano, University of Trieste, and Francesco Mondada, EPFL
The Effect of Parasitism on the Evolution of a Communication Protocol in an Artificial Life Simulation
  Phil Robbins, University of Greenwich
Integration of Reactive and Telerobotic Control in Multi-Agent Robotic Systems
  Ronald C. Arkin and Khaled S. Ali, Georgia Institute of Technology
MINIMEME: Of Life and Death in the Noosphere
  Stephane Bura, Universite Paris VI
Learning Coordinated Motions in a Competition for Food Between Ant Colonies
  Masao Kubo and Yukinori Kakazu, Hokkaido University
Emergent Colonization and Graph Partitioning
  Pascale Kuntz and Dominique Snyers, Telecom Bretagne
Diversity and Adaptation in Populations of Clustering Ants
  Erik D. Lumer, Universite Libre de Bruxelles, and Baldo Faieta, Zetes Electronics

---------------------------------------------------------------------------
Conference Committee
====================
Conference Chairs:
Philip HUSBANDS, School of Cognitive and Comp. Sciences, University of Sussex, Brighton BN1 9QH, UK (philh at cogs.susx.ac.uk)
Jean-Arcady MEYER, Groupe de Bioinformatique, Ecole Normale Superieure, 46 rue d'Ulm, 75230 Paris Cedex 05 (meyer at wotan.ens.fr)
Stewart WILSON, The Rowland Institute for Science, 100 Edwin H. Land Blvd., Cambridge, MA 02142, USA (wilson at smith.rowland.org)

Program Chair:
David CLIFF, School of Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, UK (davec at cogs.susx.ac.uk)

Financial Chair: P. Husbands, H. Roitblat
Local Arrangements: I. Harvey, P. Husbands

Program Committee
=================
M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France; L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; T. Collett, UK; H. Cruse, Germany; J. Delius, Germany; J. Ferber, France; N. Franceschini, France; S. Goss, Belgium; J. Halperin, Canada; I. Harvey, UK; I. Horswill, USA; A. Houston, UK; L. Kaelbling, USA; H. Klopf, USA; L-J. Lin, USA; P. Maes, USA; M. Mataric, USA; D. McFarland, UK; G. Miller, UK; R. Pfeifer, Switzerland; H. Roitblat, USA; J. Slotine, USA; O. Sporns, USA; J. Staddon, USA; F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; D. Waltz, USA; R. Williams, USA

Local Arrangements
==================
For general enquiries contact:
SAB94 Administration
COGS
University of Sussex
Falmer, Brighton, BN1 9QH
UK
Tel: +44 (0)273 678448
Fax: +44 (0)273 671320
Email: sab94 at cogs.susx.ac.uk

ftp
===
The SAB94 archive can be accessed by anonymous ftp.

% ftp ftp.cogs.susx.ac.uk
login: anonymous
password:
ftp> cd pub/sab94
ftp> get *
ftp> quit

* Files available at present are:
README
announcement
reg_document
hotel_booking_form
program

Sponsors
========
Sponsors include:
British Telecom
University of Sussex
Applied AI Systems Inc
Uchidate Co., Ltd.
Mitsubishi Corporation
Brighton Council
The Renaissance Trust

Financial Support
=================
Limited financial support may be available to graduate students and young researchers in the field. Applicants should submit a letter describing their research, the year they expect to receive their degree, a letter of recommendation from their supervisor, and confirmation that they have no other sources of funds available. The number and size of awards will depend on the amount of money available.

Venue
=====
The conference will be held at the Brighton Centre, the largest conference venue in the town, situated on the seafront in Brighton's town centre and adjacent to the 'Lanes' district.
Brighton is a thriving seaside resort, with many local attractions, situated on the south coast of England. It is just a 50 minute train journey from London, and 30 minutes from London Gatwick airport -- when making travel arrangements we advise, where possible, using London Gatwick in preference to London Heathrow.

Social Activities
=================
A welcome reception will be held on Sunday 7th August. The conference banquet will take place on Thursday 11th August. There will also be opportunities for sightseeing, wine cellar tours and a visit to Brighton's Royal Pavilion.

Accommodation
=============
We have organised preferential rates for SAB94 delegates at several good quality hotels along the seafront. All hotels are within easy walking distance of the Brighton Centre. Costs vary from 29 to 70 pounds per night, inclusive, for bed and breakfast. An accommodation booking form will be sent out to you on request, or can be obtained by ftp (instructions above). Details of cheaper budget accommodation can be obtained from Brighton Accommodation Marketing Bureau (Tel: +44 273 327560 Fax: +44 273 777409).

Insurance
=========
The SAB94 organisers and sponsors cannot accept liability for personal injuries, or for loss of or damage to property belonging to conference participants or their guests. It is recommended that attendees take out personal travel insurance.

Registration Fees
=================
Registration includes: the conference proceedings; technical program; lunch each day (except Wednesday when there will be no afternoon sessions); welcome reception; free entry to Brighton's Royal Pavilion; complimentary temporary membership of the Arts Club of Brighton.

-----------------------------------------------------------------------------
REGISTRATION FORM

3rd International Conference on the Simulation of Adaptive Behaviour (SAB94)
8-12 August 1994, Brighton Centre, Brighton, UK

Please complete the form below and send to the conference office with full payment.
Name: ______________________________________________________________
Address: ___________________________________________________________
____________________________________________________________________
____________________________________________________________________
Country: ___________________________________________________________
Postal Code or Zip Code: ___________________________________________
Email: _____________________________________________________________
Telephone: ____________________________ Fax: _______________________
Professional Affiliation: __________________________________________
Name(s) of accompanying person(s):
1. _________________________________________________________________
2. _________________________________________________________________
Dietary needs: _____________________________________________________
Any other special needs: ___________________________________________

PAYMENTS
========
All payments must be made in pounds sterling.

Delegates:
==========
Tick if you will be attending the welcome reception on Sunday 7 August _____

Tick appropriate boxes.
                             Individual       Student
Early (before 15 May 1994)   200 pounds ( )   100 pounds ( )
Late (after 15 May 1994)     230 pounds ( )   115 pounds ( )
On site                      260 pounds ( )   130 pounds ( )
Banquet                       18 pounds ( )    18 pounds ( )

STUDENTS MUST SUBMIT PROOF OF THEIR STATUS ALONG WITH THEIR REGISTRATION FEE.

Accompanying persons:
=====================
Welcoming reception  10 pounds
Banquet              28 pounds

TOTAL PAYMENT
___________ Registration
___________ Banquet (delegate rate) (Please tick if vegetarian _____)
___________ Banquet (guest rate) (Please tick if vegetarian _____)
___________ Reception (guests only)
___________ Donation to support student scholarship fund

METHOD OF PAYMENT
=================
Please make payable to "SAB94", pounds sterling only.
_____ Bank Draft or International Money Order: __________________ pounds
_____ Cheque (drawn on a UK bank or Euro Cheque): __________________ pounds

Send to:
SAB Administration
COGS
University of Sussex
Falmer, Brighton, BN1 9QH
UK

CANCELLATIONS
=============
The SAB Administration should be notified in writing of all cancellations. Cancellations received before 10 July will incur a 20% administration charge. We cannot accept any cancellations after that date.

---------------------------------------------------------------------------------------

From David_Redish at GS17.SP.CS.CMU.EDU Thu May 19 16:21:45 1994
From: David_Redish at GS17.SP.CS.CMU.EDU (David Redish)
Date: Thu, 19 May 94 16:21:45 -0400
Subject: Papers available by anonymous ftp
Message-ID:

A selection of papers by Redish, Wan, and Touretzky are now available by anonymous FTP. The three most recent titles are:

1. The Reaching Task: Evidence for Vector Arithmetic in the Motor System?
2. Computing Goal Locations from Place Codes
3. Neural Representation of Space in Rats and Robots

Other titles are also available. All of the papers are compressed using GZIP.

Access information:
host: b.gp.cs.cmu.edu (128.2.242.8)
directory: /afs/cs/user/dredish/pub

They are also available via Mosaic from
http://www.cs.cmu.edu:8001/afs/cs/user/dredish/Web/bibliography.url

============================================================
file: biocyb.ps.gz

The Reaching Task: Evidence for Vector Arithmetic in the Motor System?
A. David Redish and David S. Touretzky
To appear in Biological Cybernetics

During a reaching task, the population vector is an encoding of direction based on cells with cosine response functions. Scaling the response by a magnitude factor produces a vector encoding, enabling vector arithmetic to be performed by summation of firing rates.
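[The cosine-tuned population vector just described can be sketched numerically. The following is an illustration of the standard population-vector idea, not the authors' model; the cell count, baseline rate, and evenly spaced preferred directions are assumptions made for the sketch.]

```python
import numpy as np

n_cells = 180
# preferred directions evenly spaced on the unit circle, one per cell
angles = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

BASELINE = 10.0

def firing_rates(v):
    # cosine tuning: baseline plus the dot product of each cell's preferred
    # direction with the encoded vector, so the rates carry both the
    # direction and the magnitude of v
    return BASELINE + preferred @ v

def decode(rates):
    # population vector: weight each preferred direction by the cell's
    # mean-subtracted rate and sum; for evenly spaced preferred directions
    # the sum of outer products p p^T equals (n/2) I, hence the 2/n scale
    centered = rates - rates.mean()
    return (2.0 / n_cells) * preferred.T @ centered

v1 = np.array([1.0, 0.5])
v2 = np.array([-0.25, 0.75])

# decoding recovers the encoded vector, and summing the firing rates of two
# populations decodes to the vector sum -- arithmetic via rate summation
assert np.allclose(decode(firing_rates(v1)), v1)
assert np.allclose(decode(firing_rates(v1) + firing_rates(v2)), v1 + v2)
```

[Because cosine tuning makes each rate linear in the encoded vector, adding two populations' rates and decoding yields the vector sum, which is the arithmetic the abstract refers to.]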
We show that the response properties of selected populations of cells in MI and area 5 can be explained in terms of arithmetic relationships among load, goal, and motor command vectors. Our computer simulations show good agreement with single cell recording data. ============================================================ file: cogsci94.ps.gz Computing Goal Locations from Place Codes Hank S. Wan, David S. Touretzky, and A. David Redish To appear in: Proceedings of the 16th Annual Conference of the Cognitive Science Society. Lawrence Erlbaum Associates. A model based on coupled mechanisms for place recognition, path integration, and maintenance of head direction in rodents replicates a variety of neurophysiological and behavioral data. Here we consider a task described by Collett et al. (1986) in which gerbils were trained to find food equidistant from three identical landmarks arranged in an equilateral triangle. In probe trials with various manipulations of the landmark array, the model produces behaviors similar to those of the animals. We discuss computer simulations and an implementation of portions of the model on a mobile robot. ============================================================ file: wcci94.ps.gz Neural Representation of Space in Rats and Robots David S. Touretzky, Hank S. Wan, and A. David Redish To appear in: J.M. Zurada and R. Marks, eds., Computational Intelligence: Imitating Life. IEEE Press, 1994. We describe a computer model that reproduces many observed features of rat navigation behavior, including response properties of place cells and head direction cells. We discuss issues that arise when implementing models of this sort on a mobile robot. From janetw at cs.uq.oz.au Fri May 20 16:42:23 1994 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Fri, 20 May 94 15:42:23 EST Subject: Tech report available - connectionist models and psychology Message-ID: <9405200542.AA01283@client> Please do not forward to other mailing lists.
The following technical report is available. Please send requests to janetw at cs.uq.oz.au. Paper copy only. COLLECTED PAPERS FROM A SYMPOSIUM ON CONNECTIONIST MODELS AND PSYCHOLOGY (Eds) Janet Wiles, Cyril Latimer and Catherine Stevens. Technical Report No. 289 Department of Computer Science, University of Queensland, QLD 4072 Australia February 1994 118 pages -------------------------------------------------------------------------- Contents Preface: Danny Latimer, Catherine Stevens, and Janet Wiles SESSION 1. THE RATIONALE FOR PSYCHOLOGISTS USING MODELS Introduction: Peter Slezak Target paper: Danny Latimer. Computer Modeling of Cognitive Processes Commentaries: Max Coltheart. Connectionist Modelling and Cognitive Psychology Sally Andrews. What Connectionist Models Can (and Cannot) Tell Us George Oliphant. Connectionism, Psychology and Science SESSION 2. CORRESPONDENCE BETWEEN HUMAN AND NETWORK PERFORMANCE Introduction: Danny Latimer Papers: Kate Stevens. The In(put)s and Out(put)s of Comparing Human and Network Performance: Some Ideas on Representations, Activations and Weights Graeme Halford. How Far Do Neural Network Models Account for Human Reasoning? Simon Dennis. The Correspondence Between Psychological and Network Variables In Connectionist Models of Human Memory SESSION 3. BASIC COMPUTATIONAL PROCESSES Introduction: Steven Schwartz Target paper: Janet Wiles. The Connectionist Modeler's Toolkit: A review of some basic processes over distributed memories Commentaries: Mike Johnson. On the search for metaphors Zoltan Schreter. Distributed and Localist Representation in the Brain and in Connectionist Models Discussion commentaries: Paul Bakker, Richard A. Heath, Andrew Heathcote, Steven Phillips, J. P. 
Sutcliffe, Ellen Watson From webber at signal.dra.hmg.gb Fri May 20 04:04:21 1994 From: webber at signal.dra.hmg.gb (Chris Webber) Date: Fri, 20 May 94 09:04:21 +0100 Subject: NeuroProse preprint announcement Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/webber.self-org.ps.Z The file "webber.self-org.ps.Z" is available for copying from the Neuroprose preprint archive. 26 pages, 1946396 bytes compressed, 4117115 uncompressed. Preprint of article submitted to "Network" journal: -------------------------------------------------------- "Self-organisation of transformation-invariant detectors for constituents of perceptual patterns" Chris J S Webber Cambridge University, (Now at) UK Defence Research Agency A geometrical interpretation of the elementary constituents which make up perceptual patterns is proposed: if a number of different pattern- vectors lie approximately within the same plane in the pattern-vector space, those patterns can be interpreted as sharing a common constituent. Individual constituents are associated with individual planes of patterns: a pattern lying within an intersection of several such planes corresponds to a combination of several constituents. This interpretation can model patterns as hierarchical combinations of constituents that are themselves combinations of yet more elementary constituents. A neuron can develop transformation-invariances in its recognition-response by aligning its synaptic vector with one of the plane-normals: a pattern-vector's projection along the synaptic vector is then an invariant of all the patterns on the plane. In this way, discriminating detectors for individual constituents can self-organise through Hebbian adaptation. Transformation-invariances that can self-organise in multiple-level vision systems include shape-tolerance and local position-tolerance. These principles are illustrated with demonstrations of transformation-tolerant face-recognition. 
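The geometric claim above, that a pattern-vector's projection along the plane-normal is the same for every pattern lying in the plane, can be checked with a small numerical sketch. The vectors and offset below are arbitrary illustrative choices, not taken from the preprint.

```python
import numpy as np

rng = np.random.default_rng(0)

# An (affine) plane of pattern-vectors: a fixed offset c*n along the unit
# plane-normal n, plus arbitrary combinations of two in-plane directions
# u and v. All vectors here are illustrative choices.
n = np.array([1.0, 0.0, 0.0, 0.0])   # plane-normal, playing the synaptic vector
u = np.array([0.0, 1.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0, 1.0])
c = 0.7                               # common constituent: offset along n

patterns = [c * n + rng.normal() * u + rng.normal() * v for _ in range(5)]

# The projection along the synaptic vector is identical for every pattern
# on the plane: a transformation-invariant detector response.
responses = [float(p @ n) for p in patterns]
```

A neuron whose synaptic vector aligns with `n` therefore responds equally to all patterns sharing that constituent, however they vary within the plane.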
-------------------------------------------------------- From cairo at csc.umist.ac.uk Sat May 21 20:51:00 1994 From: cairo at csc.umist.ac.uk (Cairo L Nascimento Jr) Date: Sat, 21 May 94 20:51:00 BST Subject: Thesis available by anonymous ftp Message-ID: <9404.9405211951@isabel.csc.umist.ac.uk> From iris at halo.tau.ac.il Sun May 22 07:48:01 1994 From: iris at halo.tau.ac.il (Iris Ginzburg) Date: Sun, 22 May 1994 14:48:01 +0300 (IDT) Subject: Paper available by ftp Message-ID: ************************************************************** FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/ginzburg.correlations.ps.Z The following paper is available by anonymous ftp. 42 printed pages THEORY OF CORRELATIONS IN STOCHASTIC NEURAL NETWORKS Iris Ginzburg School of Physics and Astronomy Tel-Aviv University, Tel-Aviv 69978, Israel and Haim Sompolinsky Racah Institute of Physics and Center for Neural Computation Hebrew University, Jerusalem 91904, Israel and AT&T Bell Laboratories, Murray Hill, NJ 07974, USA Submitted to Physical Review E, March 1994 ABSTRACT: One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however, the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus, a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop the theory of neuronal correlation functions in large networks comprising several highly connected subpopulations that obey stochastic dynamic rules.
When the networks are in asynchronous states, the cross-correlations are relatively weak, i.e., their amplitude relative to that of the auto-correlations is of order 1/N, N being the size of the interacting populations. Using the weakness of the cross-correlations, we present general equations that express the matrix of cross-correlations in terms of the mean neuronal activities and the effective interaction matrix. The effective interactions are the synaptic efficacies multiplied by the gain of the postsynaptic neurons. The time-delayed cross-correlations can be expressed as a sum of exponentially decaying modes that correspond to the eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for a comparison between the contribution of internal common input and that of direct interactions to the correlations of monosynaptically coupled pairs. A closely related quantity is the linear response of the neurons to external time-dependent perturbations. We derive the form of the dynamic linear response function of neurons in the above architecture, in terms of the eigenmodes of the effective interaction matrix. The behavior of the correlations and the linear response when the system is near a bifurcation point is analyzed. Near a saddle-node bifurcation the correlation matrix is dominated by a single slowly decaying critical mode. Near a Hopf bifurcation the correlations exhibit weakly damped sinusoidal oscillations. The general theory is applied to the case of a randomly dilute network consisting of excitatory and inhibitory subpopulations, using parameters that mimic the local circuit of 1 cubic mm of rat neocortex. Both the effect of dilution and the influence of a nearby bifurcation to an oscillatory state are demonstrated.
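Schematically, the two central statements of the abstract can be written as follows. The symbols and indices here are my paraphrase of the abstract's wording, not equations copied from the paper:

```latex
% Weak cross-correlations in the asynchronous state: amplitude relative
% to the auto-correlations scales as 1/N for distinct neurons i, j.
\frac{C_{ij}}{\sqrt{C_{ii}\,C_{jj}}} \sim O\!\left(\frac{1}{N}\right),
\qquad i \neq j
% Time-delayed cross-correlations as a sum of exponentially decaying
% modes, one per eigenvector of the effective interaction matrix:
C_{ij}(\tau) \;=\; \sum_{n} a^{ij}_{n}\, e^{-\lambda_{n}\tau}
```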
To retrieve the compressed postscript file, do the following:

unix> ftp archive.cis.ohio-state.edu
ftp> login: anonymous
ftp> password: [your_full_email_address]
ftp> cd pub/neuroprose
ftp> binary
ftp> get ginzburg.correlations.ps.Z
ftp> bye
unix> uncompress ginzburg.correlations.ps.Z
unix> lpr -s ginzburg.correlations.ps (or however you print postscript)

NOTE the -s flag in lpr. Since the file is rather large, some printers may truncate it unless this flag is specified.

From massone at mimosa.eecs.nwu.edu Mon May 23 10:43:55 1994 From: massone at mimosa.eecs.nwu.edu (Lina Massone) Date: Mon, 23 May 94 09:43:55 CDT Subject: paper available Message-ID: <9405231443.AA11110@mimosa.eecs.nwu.edu> Preprints of the following paper are available upon request: A Neural-Network System for Control of Eye Movements: Basic Mechanisms Lina L.E. Massone (to appear in Biological Cybernetics) Abstract: This paper presents a neural-network-based system that can generate and control movements of the eyes. It is inspired by a number of experimental observations on the saccadic and gaze systems of monkeys and cats. Because of the generality of the approach, the system can be regarded both as a demonstration of how parallel distributed processing principles, namely learning and attractor dynamics, can be integrated with experimental findings, and as a biologically-inspired controller for a dexterous robotic orientation device. The system is composed of three parts: a dynamic motor map, a push-pull circuitry, and a plant. The dynamics of the motor map is generated by a multi-layer network that was trained to compute a bidimensional temporal-spatial transformation. Simulation results indicate (i) that the system is able to reproduce some of the properties observed in the biological system at the neural and movement levels, and (ii) that the dynamics of the motor map remains stereotyped even when the motor map is subject to abnormal stimulation patterns.
The latter result emphasizes the role of the topographic projection that connects the motor map to the push-pull circuitry in determining the features of the resulting movements. Please email requests to: linda at eecs.nwu.edu (not to me!) From seung at physics.att.com Mon May 23 11:47:05 1994 From: seung at physics.att.com (seung@physics.att.com) Date: Mon, 23 May 94 11:47:05 EDT Subject: preprint--rigorous learning curve bounds from statistical mechanics Message-ID: <9405231547.AA05916@physics.att.com> The following preprint is now available: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/seung.rigorous.ps.Z Authors: D. Haussler, M. Kearns, H. S. Seung, N. Tishby Title: Rigorous learning curve bounds from statistical mechanics Size: 20 pages Abstract: In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory.
From hilario at cui.unige.ch Mon May 23 08:46:57 1994 From: hilario at cui.unige.ch (Hilario Melanie) Date: Mon, 23 May 1994 14:46:57 +0200 Subject: Please broadcast via connectionist-ml Message-ID: <620*/S=hilario/OU=cui/O=unige/PRMD=switch/ADMD=arcom/C=ch/@MHS> WORKSHOP PROGRAMME & CALL FOR PARTICIPATION ECAI'94 Workshop on Combining Symbolic and Connectionist Processing August 9, 1994 - Amsterdam, The Netherlands Until a few years ago, the history of AI was marked by two parallel, often antagonistic streams of development -- classical or symbolic AI and connectionist processing. A recent research trend, premised on the complementarity of these two paradigms, strives to build hybrid systems which combine the advantages of both to overcome the limitations of each. For instance, attempts have been made to accomplish complex tasks by blending neural networks with rule-based or case-based reasoning. This workshop will be the first Europe-wide effort to bring together researchers active in the area, with a view to laying the groundwork for a theory and methodology of symbolic/connectionist integration (SCI). Workshop Programme HYBRID EXPERT SYSTEM SHELLS 9:00 - 9:30 A Study of the Hybrid System SYNHESYS B. Orsier, B. Amy, V. Rialle, A. Giacometti LIFIA-IMAG & ENST (France) 9:30 - 10:00 Cognitive and Computational Foundations for Symbolic-Connectionist Integration R. Khosla, T. Dillon La Trobe University, Melbourne (Australia) MULTISTRATEGY LEARNING 10:00 - 10:30 Integration of Symbolic and Connectionist Learning to Ease Robot Programming and Control M. Kaiser, J. Kreuziger University of Karlsruhe (Germany) 10:30 - 11:00 A Hybrid Model of Psychological Experiments on Scientific Discovery E. Hoenkamp, R.A. Sumida University of Nijmegen (The Netherlands) 11:00 - 11:15 BREAK THEORETICAL FOUNDATIONS 11:15 - 11:45 Tracking the Neuro-Symbolic Continuum: Learning by Explicitation C. Thornton University of Sussex (United Kingdom) 11:45 - 12:15 Symbol Ground Revisited E.
Prem Austrian Institute for AI (Austria) 12:15 - 12:45 How Hybrid Should a Hybrid Model Be? R. Cooper, B. Franks University College & London School of Economics (United Kingdom) 12:45 - 14:15 LUNCH LOGIC AND INFERENCING 14:15 - 14:45 Towards a New Massively Parallel Computational Model for Logic Programming S. Hoelldobler, Y. Kalinke University of Dresden (Germany) 14:45 - 15:15 Scheduling of Modular Architectures for Inductive Inference of Regular Grammars M. Gori, M. Maggini, G. Soda University of Florence (Italy) 15:15 - 15:45 A Connectionist Control Component for the Theorem Prover SETHEO C. Goller Technical University of Munich (Germany) 15:45 - 16:00 BREAK NATURAL LANGUAGE PROCESSING 16:00 - 16:30 Metaphor and Memory: Symbolic and Connectionist Issues in Metaphor Comprehension T. Veale, M. Keane Trinity College (Eire) 16:30 - 17:00 Parsing Spontaneous Speech: A Hybrid Approach T.S. Polzin Carnegie Mellon University (USA) 17:00 - 17:30 A Symbolic-Connectionist Hybrid Abstract Generation System M. Aretoulaki, J. Tsujii UMIST, Manchester (United Kingdom) 17:30 - 17:45 BREAK VISUAL PATTERN RECOGNITION 17:45 - 18:15 A Hybrid Model for Visual Perception Based on Dynamic Conceptual Space A. Chella, M. Frixione, S. Gaglio University of Palermo & IIASS-Salerno (Italy) 18:15 - 18:45 Hybrid Trees for Supervised Learning of Decision Rules F. d'Alche-Buc, J.-P. Nadal, D. Zwierski Laboratoires d'Electronique Philips (France) Those who wish to attend the workshop should send a request describing their research interests and/or previous work in the field of SCI (maximum 1 page). Since attendance will be limited to ensure effective interaction, requests will be considered until the maximum number of participants is attained. Please note that all workshop participants are required to register for the main conference. PROGRAM COMMITTEE Bernard Amy (LIFIA-IMAG, Grenoble, France) Patrick Gallinari (LAFORIA, University of Paris 6, France) Franz Kurfess (Dept.
Neural Information Processing, University of Ulm, Germany) Christian Pellegrini (CUI, University of Geneva, Switzerland) Noel Sharkey (DCS, University of Sheffield, UK) Alessandro Sperduti (CSD, University of Pisa, Italy) CONTACT PERSON Melanie Hilario CUI - University of Geneva 24 rue General Dufour CH-1211 Geneva 4 Voice: +41 22/705 7791 Fax: +41 22/320 2927 Email: hilario at cui.unige.ch From koza at CS.Stanford.EDU Tue May 24 14:22:09 1994 From: koza at CS.Stanford.EDU (John Koza) Date: Tue, 24 May 94 11:22:09 PDT Subject: New Book and Videotape on Genetic Programming Message-ID: Genetic Programming II and the associated videotape are now available from the MIT Press. GENETIC PROGRAMMING II: AUTOMATIC DISCOVERY OF REUSABLE PROGRAMS by John R. Koza Computer Science Department Stanford University It is often argued that the process of solving complex problems can be automated by first decomposing the problem into subproblems, then solving the presumably simpler subproblems, and then assembling the solutions to the subproblems into an overall solution to the original problem. The overall effort required to solve a problem can potentially be reduced to the extent that the decomposition process uncovers subproblems that are disproportionately easy to solve and to the extent that regularities in the problem environment permit multiple use of the solutions to the subproblems. Sadly, conventional techniques of machine learning and artificial intelligence provide no effective means for automatically executing this alluring three-step problem-solving process on a computer. GENETIC PROGRAMMING II describes a way to automatically implement this three-step problem-solving process by means of the recently developed technique of automatically defined functions in the context of genetic programming. Automatically defined functions enable genetic programming to define useful and reusable subroutines dynamically during a run.
This new technique is illustrated by solving, or approximately solving, example problems from the fields of Boolean function learning, symbolic regression, control, pattern recognition, robotics, classification, and molecular biology. In each example, the problem is automatically decomposed into subproblems; the subproblems are automatically solved; and the solutions to the subproblems are automatically assembled into a solution to the original problem. Leverage accrues because genetic programming with automatically defined functions repeatedly uses the solutions to the subproblems in the assembly of the solution to the overall problem. Moreover, genetic programming with automatically defined functions produces solutions that are simpler and smaller than the solutions obtained without automatically defined functions.

CONTENTS...
1. Introduction
2. Background on Genetic Algorithms, LISP, and Genetic Programming
3. Hierarchical Problem-Solving
4. Introduction to Automatically Defined Functions - The Two-Boxes Problem
5. Problems that Straddle the Breakeven Point for Computational Effort
6. Boolean Parity Functions
7. Determining the Architecture of the Program
8. The Lawnmower Problem
9. The Bumblebee Problem
10. The Increasing Benefits of ADFs as Problems are Scaled Up
11. Finding an Impulse Response Function
12. Artificial Ant on the San Mateo Trail
13. Obstacle-Avoiding Robot
14. The Minesweeper Problem
15. Automatic Discovery of Detectors for Letter Recognition
16. Flushes and Four-of-a-Kinds in a Pinochle Deck
17. Introduction to Molecular Biology
18. Prediction of Transmembrane Domains in Proteins
19. Prediction of Omega Loops in Proteins
20. Lookahead Version of the Transmembrane Problem
21. Evolution of the Architecture of the Overall Program
22. Evolution of Primitive Functions
23. Evolutionary Selection of Terminals
24. Evolution of Closure
25. Simultaneous Evolution of Architecture, Primitive Functions, Terminals, Sufficiency, and Closure
26. The Role of Representation and the Lens Effect
27. Conclusion
Bibliography
Appendix A: List of Special Symbols
Appendix B: List of Special Functions
Appendix C: List of Type Fonts
Appendix D: Default Parameters for Controlling Runs of Genetic Programming
Appendix E: Computer Implementation of Automatically Defined Functions
Appendix F: Annotated Bibliography of Genetic Programming
Appendix G: Electronic Newsletter, Public Repository, and FTP Site

Hardcover. 746 pages. ISBN 0-262-11189-6.

-----------------------------------------------------------------------

Genetic Programming II Videotape: The Next Generation by John R. Koza

This videotape provides an explanation of automatically defined functions, the hierarchical approach to problem solving by means of genetic programming with automatically defined functions, and a visualization of computer runs for many of the problems discussed in Genetic Programming II. These problems include symbolic regression, the parity problem, the lawnmower problem, the bumblebee problem, the artificial ant, the impulse response problem, the minesweeper problem, the letter recognition problem, the transmembrane problem, and the omega loop problem. VHS videotape. 62 minutes. Available in VHS NTSC, PAL, and SECAM formats. NTSC ISBN 0-262-61099-X. PAL ISBN 0-262-61100-7. SECAM ISBN 0-262-61101-5.

-----------------------------------------------------------------------

The following order form can be used to order copies of Genetic Programming I or II, videotapes I or II, and Kinnear's recent book.

Order Form

Send to: The MIT Press, 55 Hayward Street, Cambridge, MA 02142 USA

You may order by phone 1-800-356-0343 (toll-free); or by phone to 617-625-8569; or by fax to 617-625-6660; or by e-mail to mitpress-orders at mit.edu

Please send the following:
___ copies of book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John R.
Koza (KOZGII) @ $55.00
___ copies of book Genetic Programming II: Automatic Discovery of Reusable Programs by John R. Koza (KOZGH2) @ $45.00
___ copies of book Advances in Genetic Programming by K. E. Kinnear (KINDH) @ $45.00
___ copies of video Genetic Programming: The Movie in VHS NTSC Format (KOZGVV) @ $34.95
___ copies of video Genetic Programming: The Movie in VHS PAL Format (KOZGPV) @ $44.95 each
___ copies of video Genetic Programming: The Movie in VHS SECAM Format (KOZGSV) @ $44.95
___ copies of video Genetic Programming II Videotape: The Next Generation in VHS NTSC Format (KOZGV2) @ $34.95
___ copies of video Genetic Programming II Videotape: The Next Generation in VHS PAL Format (KOZGP2) @ $44.95
___ copies of video Genetic Programming II Videotape: The Next Generation in VHS SECAM Format (KOZGS2) @ $44.95

Shipping and handling: Add $3.00 per item. Outside U.S. and Canada: add $6.00 per item for surface shipment or $22.00 per item for air.

Total for items ordered ________
Shipping and handling ________
Canadian customers add 7% GST ________
Total ________

[ ] Check or money order enclosed
[ ] Purchase order attached, P.O. Number __________
[ ] Mastercard [ ] Visa Expiration date ___________ Card Number _____________________________

Ship to:
Name _____________________________________
Address ___________________________________
__________________________________________
__________________________________________
City ______________________________________
State ______________________ Zip or Postal Code ___________
Country ___________________
Daytime Phone _____________________________

-----------------------------------

For orders in the UK, Eire, and Continental Europe, please contact the London office of the MIT Press at: The MIT Press, 14 Bloomsbury Square, London WC1A 2LP, England. Tel (071) 404 0712, Fax (071) 404 0610, e-mail 100315.1423 at compuserve.com

For orders in Australia, please contact: Astam Books, 57-61 John Street, Leichhardt, NSW 2040, Australia. Tel (02) 566 4400, Fax (02) 566 4411

Please note that prices may be higher outside the US. In all other areas of the world or in case of difficulty, please contact: The MIT Press, International Department, 55 Hayward Street, Cambridge, MA 02142 USA. Tel 617 253 2887, Fax 617 253 1709, e-mail curtin at mit.edu

From moody at chianti.cse.ogi.edu Tue May 24 19:46:28 1994 From: moody at chianti.cse.ogi.edu (John Moody) Date: Tue, 24 May 94 16:46:28 -0700 Subject: A Trivial but Fast Reinforcement Controller Message-ID: <9405242346.AA09905@chianti.cse.ogi.edu> The following paper is available via anonymous ftp: ========================================================================= File: moodyTresp94.reinforce.ps.Z To appear in Neural Computation, vol. 6, 1994. ------------------------------------------------------------------------- A Trivial but Fast Reinforcement Controller John Moody and Volker Tresp Abstract: We compare simulation results for the classic Barto-Sutton-Anderson pole balancer (which uses the Michie and Chambers "boxes" representation) with results for a reinforcement learning controller which employs a quadratic representation for both the adaptive critic element (ACE) and the associative search element (ASE). We find that this simple controller learns to balance the pole after a median of only 2 failures. This corresponds to a relative speed-up factor of over 7000 in simulated physical time. Moreover, the quality of the control, as measured by the residual kinetic energy of the cart/pole system after learning, is substantially better for the quadratic ACE/ASE controller.
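For readers unfamiliar with quadratic critics, here is a hypothetical minimal sketch of a value function of the form V(x) = x^T W x trained by temporal differences. It is a generic illustration under my own assumptions (learning rate, discount factor), not the Moody-Tresp ACE/ASE controller itself:

```python
import numpy as np

# Hypothetical sketch of a quadratic value representation, in the spirit
# of the ACE described above (assumed parameters; not the authors' code).
def quadratic_value(W, x):
    """Critic estimate V(x) = x^T W x for state vector x."""
    return float(x @ W @ x)

def td_update(W, x, x_next, reward, gamma=0.95, lr=0.1):
    """One temporal-difference step on the weight matrix W."""
    delta = reward + gamma * quadratic_value(W, x_next) - quadratic_value(W, x)
    # The gradient of x^T W x with respect to W is the outer product x x^T.
    return W + lr * delta * np.outer(x, x)

W = np.zeros((2, 2))
x, x_next = np.array([1.0, 0.0]), np.array([0.0, 1.0])
W = td_update(W, x, x_next, reward=1.0)
```

The appeal of such a representation, as the abstract suggests, is that a low-dimensional parametric critic can be far more sample-efficient than a discretized "boxes" state table.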
=========================================================================

Retrieval instructions are:

unix> ftp neural.cse.ogi.edu
login: anonymous
password: name at email.address
ftp> cd pub/neural
ftp> cd papers
ftp> get INDEX
ftp> binary
ftp> get moodyTresp94.reinforce.ps.Z
ftp> quit
unix> uncompress *.Z
unix> lpr *.ps

From moody at chianti.cse.ogi.edu Tue May 24 19:50:24 1994 From: moody at chianti.cse.ogi.edu (John Moody) Date: Tue, 24 May 94 16:50:24 -0700 Subject: Summer School Lectures: Prediction Risk and Architecture Selection Message-ID: <9405242350.AA09919@chianti.cse.ogi.edu> The following paper is available via anonymous ftp: ========================================================================= file: moody94.predictionrisk.ps.Z Appears in: From mav at psych.psy.uq.oz.au Mon May 23 19:28:48 1994 From: mav at psych.psy.uq.oz.au (Simon Dennis) Date: Tue, 24 May 1994 09:28:48 +1000 (EST) Subject: Thesis: Integrating Learning into Models of Human Memory Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/dennis.thesis.ps.Z The file dennis.thesis.ps.Z is now available for copying from the Neuroprose archive from the directory pub/neuroprose/Thesis. The Integration of Learning into Models of Human Memory Simon Dennis Ph.D. Thesis Department of Computer Science University of Queensland Since memory was first distinguished as a separate phenomenon from learning (Melton, 1963), researchers in the area have concentrated on the memory component. Mathematical models, such as SAM (Raaijmakers & Shiffrin, 1981; Gillund & Shiffrin, 1984), TODAM (Murdock, 1982), CHARM (Eich, 1982), Minerva II (Hintzman, 1984) and the matrix model (Pike, 1984; Humphreys, Bain & Pike, 1989), have focussed on the mechanisms of encoding, storage and retrieval.
The effects of variables such as retention time, number of presentations, spacing of presentations, type of retrieval test, nature of cues, encoding paradigm and the extent to which the study context is specified in the test instructions have been studied empirically and modelled. The learning component, which was the focus of the field for much of this century, has received less attention in recent years (Estes, 1991). Despite the extensive empirical database on learning phenomena (Postman, Burns & Hasher, 1970), attempts to model this data have been few. The attempts that do exist have concentrated on specifying algorithms by which experience might tune the parameters of existing memory models (Murdock, 1987) rather than attempting to explain how learning induces the representations, decision criteria and control processes of memory in the first instance. How people acquire these components of the memory system has important ramifications for the study of human retention. One of the most critical of these ramifications is the nature of the relationship between the environment and the mechanism of memory. Recent empirical work on the environment of memory has revealed a striking correspondence between the structure of the environment and the pattern of performance in human subjects (Anderson & Schooler, 1991). This thesis extends this work by studying the environment empirically, developing a learning mechanism and demonstrating that this learning mechanism behaves in a qualitatively similar fashion to human subjects when exposed to an environment that mirrors the one with which the subjects contend. Analyses of the relevant environments of two touchstone phenomena, the list strength effect in recognition and the word frequency effect in recognition, were performed to establish the context in which interactive accounts of these phenomena must be set.
It was found that while low frequency words occur less often than high frequency words, they are more likely to recur within a context. In addition, the probability of recurrence was found to increase if a word had occurred frequently in the current context, but was not affected by the amount of repetition of words other than the target word. A learning or interactive model of human memory called the Hebbian Recurrent Network (HRN) has been developed. The HRN integrates work in the mathematical modelling of memory with that in error correcting connectionist networks by incorporating the matrix model (Pike, 1984; Humphreys, Bain & Pike, 1989) into the Simple Recurrent Network (SRN, Elman, 1989; Elman, 1990). The result is an architecture which has the desirable memory characteristics of the matrix model such as low interference and massive generalisation but which is able to learn appropriate encodings for items, decision criteria and the control functions of memory which have traditionally been chosen a priori in the mathematical memory literature. Simulations demonstrate that the HRN is well suited to the recognition task. When compared with the SRN, the HRN is able to learn longer lists, generalises from smaller training sets, and is not degraded significantly by increasing the vocabulary size. To demonstrate that the HRN learning mechanism is capable of addressing experimental behaviour, the phenomena studied environmentally were modelled with the HRN. The HRN showed a low frequency word advantage when it was presented with an environment in which high frequency words occurred more often, but low frequency words were more likely to recur within a context. In addition, the HRN showed a null list strength effect while retaining the list length and item strength effects when exposed to an environment in which the environmental results were embedded. 
By incorporating a learning mechanism and examining the environment in which memory models are situated it is possible to produce models that: (1) can start to address developmental phenomena; (2) can provide a mechanism to address learning-to-learn phenomena; (3) can address how internal states attain their meanings; (4) are easily extended to a wide variety of cognitive phenomena; and (5) account for the striking similarity between the environmental demands placed upon the memory system and the performance of human subjects.

---------------------------

The thesis is 218 pages (22 preamble + 196 text).

Simon Dennis
Post Doctoral Research Fellow
Department of Psychology
The University of Queensland
QLD 4072 Australia
mav at psych.psy.uq.oz.au

From mackay at mrao.cam.ac.uk Thu May 26 10:21:00 1994 From: mackay at mrao.cam.ac.uk (David J.C. MacKay) Date: Thu, 26 May 94 10:21 BST Subject: The following preprint is now available by anonymous ftp. Message-ID: ======================================================================== Bayesian Neural Networks and Density Networks David J.C. MacKay University of Cambridge Cavendish Laboratory Madingley Road Cambridge CB3 0HE mackay at mrao.cam.ac.uk This paper reviews the Bayesian approach to learning in neural networks, then introduces a new adaptive model, the density network. This is a neural network for which target outputs are provided, but the inputs are unspecified. When a probability distribution is placed on the unknown inputs, a latent variable model is defined that is capable of discovering the underlying dimensionality of a data set. A Bayesian learning algorithm for these networks is derived and demonstrated with an application to the modelling of protein families. ======================================================================== The preprint may be obtained as follows:

ftp 131.111.48.8
anonymous (your email)
cd pub/mackay/density
binary
mget *.ps.Z
quit
uncompress *.ps.Z

This document is 12 pages long.
Sorry, hard copy is not available from the author. From KOKINOV at BGEARN.BITNET Wed May 25 16:05:57 1994 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Wed, 25 May 94 16:05:57 BG Subject: CogSci Summer School Message-ID: The Summer School features introductory and advanced courses in Cognitive Science, participant symposia, discussions, and student sessions. Participants will include university teachers and researchers, graduate and senior undergraduate students. International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max-Planck Institute of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Local Organizers New Bulgarian University Bulgarian Academy of Sciences Bulgarian Cognitive Science Society Local Organizing Committee Boicho Kokinov - School Director Lilia Gurova, Vesselin Zaimov, Vassil Nikolov, Lora Likova, Marina Yoveva, Pasha Nikolova Courses Qualitative Spatial Reasoning Christian Freksa (Hamburg University, Germany) Computer Models of Analogy-Making Bob French (Indiana
University, USA) Social Cognition Rosaria Conte (CNR, Roma, Italy) Multi-Agent Systems Iain Craig (University of Warwick, England) Cognitive Aspects of Language Processing Amedeo Cappelli (CNR, Pisa, Italy) Catastrophic Forgetting in Connectionist Networks Bob French (Indiana University, USA) Dynamic Networks for Cognitive Modeling Peter Braspenning (University of Limburg, Holland) Models of Brain Functions Andre Holley (CNRS, Lyon, France) Foundations of Cognitive Science Encho Gerganov, Naum Yakimov, Boicho Kokinov, Viktor Grilihes (New Bulgarian University, Bulgaria) Participant Symposia Participants are invited to submit papers which will be presented (30 min) at the participant symposia. Authors should send full papers (8 single-spaced pages) in triplicate or electronically (PostScript, RTF, or plain ASCII) by July 30. Selected papers will be published in the School's Proceedings after the School itself. Only papers presented at the School will be eligible for publication. Panel Discussions Integration of Methods and Approaches in Cognitive Science Trends in Cognitive Science Research Student Session At the student session, proposals for M.Sc. and Ph.D. theses will be discussed, as well as public defences of such theses (if presented). Fees (including participation, board and lodging) Advance Registration (payment in full, postmarked on or before June 15): $650 Late Registration (postmarked after June 15): $750 The fees should be transferred to the New Bulgarian University (for the Cognitive Science Summer School) at the Economic Bank (65 Maria Luisa Str., Sofia) - bank account 911422735300-8 - or paid at registration. A very limited number of grants for partial support of participants from East European countries is available.
Important dates: Send Application Form: now Deadline for Advance Registration: June 15 Deadline for Paper Submission: July 15 Inquiries, Applications, and Paper Submissions should be sent to: Boicho Kokinov Cognitive Science Department New Bulgarian University 54, G.M.Dimitrov blvd. Sofia 1125, Bulgaria fax: (+3592) 73-14-95 e-mail: cogsci94 at adm.nbu.bg or kokinov at bgearn.bitnet Parallel Events The International Conference on Artificial Intelligence - AIMSA'94 - will be held in Sofia, September 21-24. The Summer School on Information Technologies will be held in Sofia, September 16-20. --------------------------------------------------------------------------- International Summer School in Cognitive Science Sofia, September 12-24, 1994 Application Form Name: First Name: Status: faculty / graduate student / undergraduate student / other Affiliation: Country: Mailing address: e-mail address: fax: I intend to submit a paper: (title) From rjb at psy.ox.ac.uk Fri May 27 06:33:15 1994 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 27 May 1994 11:33:15 +0100 Subject: Positions available at the University of Oxford. Message-ID: <199405271033.LAA01144@sun02.mrc-bbc.ox.ac.uk> Four positions have just become available at the University of Oxford Psychology Department, at least two of which may be of interest to readers of connectionists. - Roland Baddeley (rjb at psy.ox.ac.uk) UNIVERSITY OF OXFORD DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY Posts in Visual Neuroscience The following posts are available as part of a long-term research programme combining neurophysiological and computational approaches to the functions of the temporal lobe visual cortical areas of primates. (1) Neurophysiologist (RS1A) to analyse the activity of single neurons in the temporal cortical visual areas of primates. (2) Computational neuroscientist (RS1A) to make formal models and/or analyse by simulation the functions of visual cortical areas and the hippocampus.
(3) Programmer (RS1B), preferably with an interest in computational neuroscience, and with experience in C and Unix. The salaries are on the RS1A (postdoctoral) scale 13,601-20,442 pounds, or the RS1B (graduate) scale 12,828-17,349 pounds, with support provided by a programme grant. (4) Neurophysiologist (RS1A or RS1B) to analyse the activity of single neurons in the temporal cortical visual areas of primates, with EC Human Capital and Mobility support for 18 months for a European non-UK citizen. Applications including the names of two referees, or enquiries, to Dr. Edmund T. Rolls, University of Oxford, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England (telephone 0865-271348). The University is an Equal Opportunities Employer. Email enquiries can be sent to Dr Rolls at erolls at psy.ox.ac.uk From David_Redish at GS17.SP.CS.CMU.EDU Fri May 27 10:50:22 1994 From: David_Redish at GS17.SP.CS.CMU.EDU (David Redish) Date: Fri, 27 May 94 10:50:22 -0400 Subject: Mosaic homepage for CNBC and NPC Message-ID: <26583.770050222@GS17.SP.CS.CMU.EDU> The Center for the Neural Basis of Cognition (CNBC) and the Neural Processes in Cognition Training Program (NPC) are joint projects of Carnegie Mellon University and the University of Pittsburgh.
There is now a Mosaic homepage for these programs at the following url: http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/CNBC.html Included in this homepage are - summary information on the CNBC and the NPC training program - information on how to apply to the NPC training program - faculty, postdoc, and graduate student lists and research statements - upcoming talks and colloquia - resources available from people at the CNBC (such as the connectionists archives and local ftp sites for online tech reports) ------------------------------------------------------------ A short description of the programs follows: The Center for the Neural Basis of Cognition (CNBC) is a joint project of Carnegie Mellon University and the University of Pittsburgh, funded by a major gift from the R. K. Mellon Foundation. Created in 1994, the Center is dedicated to the study of the neural basis of cognitive processes, including learning and memory, language and thought, perception, attention, and planning.
Studies of the neural basis of normal adult cognition, cognitive development, and disorders of cognition all fall within the purview of the Center. In addition, the Center promotes the application of the results of the study of the neural basis of cognition to artificial intelligence, technology, and medicine. The Center will synthesize the disciplines of basic and clinical neuroscience, cognitive psychology, and computer science, combining neurobiological, behavioral, computational and brain imaging methods. The Neural Processes in Cognition training program (NPC) is a joint project between 15 departments at the University of Pittsburgh and its medical school and 2 departments at Carnegie Mellon University, funded by the National Science Foundation. Students receive instruction in neurobiology, psychology, mathematics and computer simulation. Students are trained to interpret the function as well as the phenomena of neuroscience and to work collaboratively with specialists in multiple disciplines. 
------------------------------------------------------------ David Redish Computer Science Carnegie Mellon University (NPC program) From David_Redish at GS17.SP.CS.CMU.EDU Sat May 28 07:41:50 1994 From: David_Redish at GS17.SP.CS.CMU.EDU (David Redish) Date: Sat, 28 May 94 07:41:50 -0400 Subject: NIPS*94 Mosaic homepage now available Message-ID: <27726.770125310@GS17.SP.CS.CMU.EDU> There is now a homepage for NIPS*94 at the following url: http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html Included in this homepage are: - the call for papers (html and ascii versions) - the call for workshops (html and ascii versions) - style files for papers When they become available, the following will also be added: - NIPS*94 program - NIPS*94 abstracts - NIPS*94 workshops - hotel and other local Denver information ------------------------------------------------------------ A short description of NIPS*94 follows: Neural Information Processing Systems -Natural and Synthetic- Monday, November 28 - Saturday, December 3, 1994 Denver, Colorado This is the eighth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov 28) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec 2-3). ------------------------------------------------------------ David Redish Carnegie Mellon University From hare at crl.ucsd.edu Sat May 28 09:57:17 1994 From: hare at crl.ucsd.edu (Mary Hare) Date: Sat, 28 May 94 06:57:17 PDT Subject: paper available Message-ID: <9405281357.AA12174@crl.ucsd.edu> The following paper is available by anonymous ftp from crl.ucsd.edu.
LEARNING AND MORPHOLOGICAL CHANGE Mary Hare Dept. of Psychology Birkbeck College, University of London hare at crl.ucsd.edu Jeffrey Elman Dept. of Cognitive Science U. of California, San Diego elman at crl.ucsd.edu ABSTRACT: This paper offers an account of change over time in English verb morphology, based on a connectionist approach to how morphological knowledge is acquired and used (Rumelhart and McClelland 1986, Plunkett and Marchman 1991, 1993). A technique is first described that was developed for modeling historical change in connectionist networks, then that technique is applied to model English verb inflection as it developed from the highly complex past tense system of Old English towards that of the modern language, with one predominant regular pattern and a limited number of irregular forms. The model relies on the fact that certain input-output mappings are easier than others to learn in a connectionist network. Highly frequent patterns, or those that share phonological regularities with a number of others, are learned more quickly and with lower error than low-frequency, highly irregular patterns (Seidenberg and McClelland 1989). A network is taught a data set representative of the verb classes of Old English, but learning is stopped before errors have been eliminated, and the output of this network is used as the teacher for a new network. As a result, the errors in the first network are passed on to become part of the data set of the second. As this sequence is repeated, those patterns that are hardest to learn lead to the most errors, and over time are 'regularized' to fit a more dominant pattern. The results of the network simulations are highly consistent with the major historical developments. These results are predicted from well-understood aspects of network dynamics, which therefore provide a rationale for the shape of the attested changes. 
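The iterated-learning technique described in the abstract — train a network, stop before errors are eliminated, and use its imperfect outputs as the teacher for the next generation — can be sketched abstractly. The toy task and linear "network" below are stand-ins chosen for brevity, not the paper's verb-inflection model.

```python
# Sketch of modelling historical change by iterated learning: each
# generation's network is trained only briefly, so its residual errors
# become part of the data set taught to the next generation.
# The random mapping and linear learner are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_items, dim = 20, 8
X = rng.standard_normal((n_items, dim))   # "verb" input forms
Y = rng.standard_normal((n_items, dim))   # target output forms

def train_briefly(X, Y, steps=30, lr=0.05):
    # gradient descent on squared error, stopped well before convergence
    W = np.zeros((dim, dim))
    for _ in range(steps):
        W += lr * X.T @ (Y - X @ W) / n_items
    return W

targets = Y
for generation in range(5):
    W = train_briefly(X, targets)
    targets = X @ W          # the learner's output teaches the next generation

# the transmitted mapping drifts away from the original data set
drift = np.linalg.norm(targets - Y)
print(drift > 0)
```

In the paper's setting, the hardest-to-learn (low-frequency, irregular) mappings produce the largest residual errors, so it is exactly these that drift toward the dominant regular pattern over generations.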
*************************** To obtain a copy ************************ unix> ftp crl.ucsd.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuralnets ftp> binary ftp> get history.ps.Z ftp> quit unix> uncompress history.ps.Z unix> lpr history.ps (or what you normally do to print PostScript) From rsun at cs.ua.edu Sat May 28 18:57:45 1994 From: rsun at cs.ua.edu (Ron Sun) Date: Sat, 28 May 1994 17:57:45 -0500 Subject: No subject Message-ID: <9405282257.AA11587@athos.cs.ua.edu> Preprint available: -------------------------------------------- title: Robust Reasoning: Integrating Rule-Based and Similarity-Based Reasoning Ron Sun Department of Computer Science The University of Alabama Tuscaloosa, AL 35487 rsun at cs.ua.edu -------------------------------------------- to appear in: Artificial Intelligence (AIJ), Spring 1995 --------------------------------------------- The paper attempts to account for common patterns in commonsense reasoning by integrating rule-based reasoning and similarity-based reasoning as embodied in connectionist models. Reasoning examples are analyzed and a diverse range of patterns is identified. A principled synthesis based on simple rules and similarities is performed, unifying patterns that previously could be accounted for only by individual specialized mechanisms. A two-level connectionist architecture with dual representations is proposed as a computational mechanism for carrying out the theory. It is shown in detail how the common patterns can be generated by this mechanism. Finally, it is argued that the brittleness problem of rule-based models can be remedied in a principled way by the theory proposed here.
This work demonstrates that combining rules and similarities can result in more robust reasoning models, and that many seemingly disparate patterns of commonsense reasoning are in fact different manifestations of the same underlying process, which the integrated architecture captures to a large extent. ---------------------------------------------------------------- * It is FTPable from aramis.cs.ua.edu in: /pub/tech-reports * No hardcopy available. * FTP procedure: unix> ftp aramis.cs.ua.edu Name: anonymous Password: (email-address) ftp> cd pub/tech-reports ftp> binary ftp> get sun.aij.ps.Z ftp> quit unix> uncompress sun.aij.ps.Z unix> lpr sun.aij.ps (or however you print postscript) ----------------------------------------------------------------- (A number of other publications are also available for FTP under pub/tech-reports) ================================================================ Dr. Ron Sun Department of Computer Science phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 rsun at athos.cs.ua.edu ================================================================ From franz at neuro.informatik.uni-ulm.de Mon May 30 11:29:33 1994 From: franz at neuro.informatik.uni-ulm.de (Franz Kurfess) Date: Mon, 30 May 94 17:29:33 +0200 Subject: Fall School on Connectionism and Neural Nets HeKoNN 94 (in German) Message-ID: <9405301529.AA00205@neuro.informatik.uni-ulm.de> Below please find the announcement and call for participation for HeKoNN 94, a fall school on connectionism and neural networks to take place October 10-14, 1994 near Muenster, Germany. The courses will be held in German, so this will not be of much interest to people who don't speak German.
Franz Kurfess HeKoNN 94 Fall School on Connectionism and Neural Networks (Herbstschule Konnektionismus und Neuronale Netze) Muenster, October 10-14, 1994 This coming October, the special interest groups "Konnektionismus" and "Neuronale Netze" of the GI (Gesellschaft fuer Informatik, the German Informatics Society) are organizing a fall school on connectionism and neural networks. It offers introductory and in-depth courses on the following topics: foundations and statistics, implementations and applications, symbolic connectionism, and cognitive connectionism. Connectionist models and neural networks are on the one hand inspired by biological examples, in particular the human brain, yet on the other hand now also serve as practical mechanisms for solving concrete problems. This dichotomy invites all kinds of misunderstandings and unrealistic expectations, whether about their performance compared to conventional methods or about the possibility of "rebuilding" biological systems. Since the mid-eighties the backpropagation model has been familiar to many, but far too little known are the theoretical results on the properties of this and alternative procedures, and on the reliability of these methods. It is clear today that neural networks are closely related to statistics, function approximation, and theoretical physics, and that many insights gained in those fields apply here as well. Beyond this, there is still a considerable deficit on questions concerning the biological, cognitive, and psychological aspects of neural networks. At issue are concepts for modelling behaviour and thought processes on the basis of neural networks. Examples include the representation of "knowledge", in particular the grounding of internal representations in the corresponding objects of the real world, and the execution of simple inferences.
Such questions are not only of academic interest; they also arise in the interplay between more symbol-oriented knowledge processing, e.g. in expert systems, and more data-oriented procedures, for instance in pattern recognition. And this is exactly where many difficulties of conventional artificial intelligence methods lie, for example in the areas of language or image processing. The fall school offers a comprehensive, interdisciplinary treatment of the field, with particular emphasis on the questions above. Twenty active researchers have been recruited as lecturers, each giving an 8-hour course. The courses are divided into four areas: * Foundations and Statistics * Implementations and Applications * Symbolic Connectionism * Cognitive Connectionism The first area -- Foundations and Statistics -- comprises five courses explaining the foundations of connectionism and neural networks. These first present the commonly used models and introduce the corresponding terminology and algorithms. One course offers an introduction to the foundations, goals, and research questions of the field of connectionism and neural networks from the standpoint of artificial intelligence research. A second course discusses neural networks from the perspective of approximation theory and statistics, with particular weight on the properties of the various learning and optimization procedures for the common network types. Another course examines the suitability of artificial neural networks as models of biological processes, with emphasis on the degree of realism required of the networks when simulating information processing in the nervous systems of living organisms.
Decisive for the practical use of neural networks in forecasting and process control is the reliability of the results. A dedicated course on this question presents methods for estimating reliability, improving forecasting accuracy, and optimizing the network topology. Particularly promising here are genetic algorithms, which can determine a global optimum for functions with many local maxima; these procedures are discussed in a further course. The second area -- Implementations and Applications -- contains four courses. Two of them present SNNS and SESAME, two widely used public-domain simulation tools with graphical user interfaces, with hands-on computer exercises planned. A focus of these simulators is the flexible modification of existing network structures as well as the fast and safe modification of learning procedures. While SNNS puts great weight on its graphical interface under X-Windows for generating, visualizing, and modifying neural networks, SESAME's emphasis lies on a modular experiment design that ensures fast, flexible exchange of individual components and supports the construction of new algorithms through modularization. Another course presents the current state of the art in hardware implementations of neural networks: on the one hand, add-on boards for conventional workstations and special parallel computer systems; on the other, architectures based on application-specific microelectronic devices, which can be either digital or analog. Finally, as an application, neural networks in robotics are discussed. Artificial neural networks appear well suited here because they can extract relevant information from training examples on their own.
The central question is whether this yields practical advantages over the classical analytical methods of robotics. The third area -- Symbolic Connectionism -- attempts to establish the connection between the symbol-oriented methods of artificial intelligence and sub-symbolic methods, which mostly operate close to the sensors, i.e. at data acquisition. Neural networks are often used with success precisely in this data-near domain, but are not readily suited to performing manipulations on strings of symbols. An important question here is the transformation of raw data, such as images captured by a camera or acoustic signals registered by a microphone, into a symbolic form on which conventional tools such as expert systems can then build. Another important aspect is the extraction of knowledge from neural networks, for instance to explain their behaviour or to represent the information the network has learned in the form of rules. The last area, finally -- Cognitive Connectionism -- deals with the use of neural networks as models of perception and thought processes. On the one hand, important fundamental problems of these models are discussed against a more philosophical background; on the other, approaches to modelling phenomena such as concept representation, learning, and memory are covered. Further topics concern the study and modelling of information-processing subsystems in the brain, such as language processing or the visual system. The courses in the four areas run in parallel; however, it is not necessary to choose one area as a whole -- participants can and should select courses from different areas.
The fall school will take place at the Jugendgaestehaus Aasee near Muenster, where both participants and lecturers will be accommodated. This is intended to make it possible to continue discussing interesting questions in a relaxed atmosphere outside the actual courses. The school is aimed in particular at advanced students as well as practitioners from research and industry. The number of participants is limited to 100. The fee for students has been kept relatively low (about DM 410, including course materials and full board). To ensure the level of the school, participants are selected on the basis of an application; prior knowledge, practical experience, and specific interest in questions of connectionism and neural networks will be taken into account. The application deadline is - - - July 1, 1994 - - - The organizing and program committee consists of Ingo Duwe, Uni Bielefeld; Franz Kurfess, Uni Ulm; Gerhard Paass, GMD Sankt Augustin (chair); Guenther Palm, Uni Ulm; Helge Ritter, Uni Bielefeld; Stefan Vogel, Uni Koeln. Further information is available by anonymous ftp from "ftp.gmd.de", directory "/Learning/neural/HeKoNN94", by electronic mail from "hekonn at neuro.informatik.uni-ulm.de", or from the school office: HeKoNN 94 c/o Birgit Lonsinger, Universitaet Ulm Fakultaet fuer Informatik Abteilung Neuroinformatik D-89069 Ulm Tel: 0731 502 4151 Fax: 0731 502 4156 From gjg at cns.edinburgh.ac.uk Mon May 30 18:04:58 1994 From: gjg at cns.edinburgh.ac.uk (Geoffrey Goodhill) Date: Mon, 30 May 94 18:04:58 BST Subject: New TSP Algorithm(?) Message-ID: <19609.9405301704@cns.ed.ac.uk> Below is an article from the Guardian 28.5.94, a UK quality newspaper, which might be of interest to readers of this list. It's about a new algorithm for the TSP that claims to be the best yet. There is apparently an article forthcoming in the Journal of Neural Computing.
I note two things: 1) Strange though it may seem to those of us who think that the purpose of publishing in reputable journals is to tell people what one has done, the article below suggests that the authors may not be going to tell us what the algorithm is. 2) Given that a number of inflated claims have been made for new TSP algorithms in the past based on comparisons with poor alternatives, I'd be interested to see proper comparisons to justify the authors' assertions. Geoff Goodhill ******************************************************************** SCIENCE WELL ON THE ROAD TO SALESMAN SOLUTION --------------------------------------------- Tim Radford reports on a near-answer to a deceptively simple mathematical question. A mathematical conundrum called the Travelling Salesman Problem may have lost its power to baffle, according to British Telecom Scientists. They admit they have not exactly solved it: just arrived at a way to make a computer produce the best and fastest solution yet. The 60-year-old problem is terribly simple, but has occupied some of the world's most powerful brains - and most powerful computers - for years. It is this: a salesman wants to visit 3, or 4, or 10, or 100 places. What is the shortest, or fastest, route? The options multiply dramatically with the number of calls. A journey to 3 sites involves 6 possible routes. A journey to 10 offers 3,628,800 possible choices. A journey around 100 would involve 10^156. This is 1 followed by 156 zeros, almost equivalent to the number of atoms in the universe - squared. The almost-optimum solution, by Dr Shara Amin and Dr Jose Luis Fernandez-Villacanas Martin at the BT laboratories in Martlesham Heath, near Ipswich, Suffolk, can now be reached in 1.6 seconds for a 100-point journey. The absolute best would take days. Even a 1000-point problem can be solved in less than 3 minutes. The answer, they say, will be reported in the Journal of Neural Computing in July. 
The algorithm they use will not be revealed, but there will be clues for other mathematicians on how to proceed. The algorithm - into which planners can slot variables such as a bank holiday in Oslo or engineering works between Birmingham and London - will help sales managers and, for that matter, telephone managers with a choice of calls. But the technique also has military uses. Imagine, said Professor Peter Cochrane, of the BT laboratories, a jet pilot under simultaneous attack from ground-to-air missiles and enemy aircraft. "Now you have a hyperspace problem. It is this: where do I steer, and who do I shoot at first, and in what order, to minimise the chances of me getting killed and maximise the amount of damage I can do to them?" But the immediate value would be in preparing the patterns on computer circuits, or searching in information "hyperspace", where the options can reach astronomical levels. Professor Cochrane sees the procedure as useful in automatic searches through networks and data libraries. "We are looking at giving information on demand, where you could have access to all the libraries in the world, all the museums in the world".
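The combinatorics quoted in the article are plain factorials: visiting n places in some order gives n! possible routes (3! = 6, 10! = 3,628,800). Since BT's algorithm is not described, nothing below reflects it; the nearest-neighbour heuristic is simply the kind of cheap baseline that, as Goodhill notes, any serious comparison should improve upon.

```python
# Check the article's route counts, and build a nearest-neighbour tour:
# a greedy baseline that always moves to the closest unvisited point.
import math
import random

print(math.factorial(3))    # 6
print(math.factorial(10))   # 3628800

def nearest_neighbour_tour(points):
    unvisited = points[1:]
    tour = [points[0]]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited,
                  key=lambda p: (p[0] - last[0])**2 + (p[1] - last[1])**2)
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(100)]
tour = nearest_neighbour_tour(cities)
print(len(tour))            # 100: every city visited exactly once
```

Exhaustive search over all 100! orderings is hopeless, which is why heuristics like this run in seconds; the open question raised above is how much closer to optimal the unrevealed BT method actually gets.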