From mcasey at euclid.ucsd.edu Sun Apr 2 06:24:13 1995
From: mcasey at euclid.ucsd.edu (Mike Casey)
Date: Sun, 2 Apr 1995 03:24:13 -0700 (PDT)
Subject: Thesis Available "Computation In Discrete-Time Dynamical Systems"
Message-ID: <9504021024.AA04461@euclid>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 2351 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f394d149/attachment.ksh

From listerrj at helios.aston.ac.uk Mon Apr 3 05:13:40 1995
From: listerrj at helios.aston.ac.uk (listerrj)
Date: Mon, 3 Apr 1995 10:13:40 +0100 (BST)
Subject: Research studentship at Aston
Message-ID: <28938.9504030913@sun.aston.ac.uk>

RESEARCH STUDENTSHIP OPPORTUNITY
================================

NEURAL COMPUTING RESEARCH GROUP
-------------------------------
ASTON UNIVERSITY
----------------
UK
--

NEURAL NETWORKS APPLIED TO IGNITION TIMING AND AUTOMATIC CALIBRATION
====================================================================

An opportunity exists for a research student to be involved in a collaborative research programme between the Neural Computing Research Group, Aston University, and SAGEM in the general area of applying neural networks to the ignition timing and calibration of gasoline internal combustion engines. This is a three-year programme led by Professor David Lowe, and the candidate would be expected to register for a PhD with the University.

The ideal student would be computationally literate (preferably in C/C++) on UNIX and PC systems and have good mathematical and/or engineering abilities. An awareness of the importance of applying advanced technology and implementing ideas as engineering products is essential. In addition, the ideal candidate would have some knowledge of and interest in internal combustion engines and relevant sensor technology.
Further information and details may be obtained from:

Prof C Bishop
Research Admissions Tutor
University of Aston
Aston Triangle
Birmingham B4 7ET, UK
email: c.m.bishop at aston.ac.uk
tel: (+44/0)121 359 3611 ext 4268

For more information about the Neural Computing Research Group see our World Wide Web pages at: http://neural-server.aston.ac.uk/

-----------------------------end-------------------------------------------

From g.gaskell at psychology.bbk.ac.uk Mon Apr 3 11:15:00 1995
From: g.gaskell at psychology.bbk.ac.uk (Gareth Gaskell)
Date: Mon, 3 Apr 95 11:15 BST
Subject: Paper: Phonological Representations in Speech Perception
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/gaskell.phonrep.ps.Z

The following paper (28 pages) is available in the neuroprose archive. This paper is due to be published in Cognitive Science and examines the phonological representations and processes involved in the perception of speech from a connectionist viewpoint.

A Connectionist Model of Phonological Representation in Speech Perception
Gareth Gaskell, Mary Hare & William Marslen-Wilson

Abstract: A number of recent studies have examined the effects of phonological variation on the perception of speech. These studies show that both the lexical representations of words and the mechanisms of lexical access are organized so that natural, systematic variation is tolerated by the perceptual system, while a general intolerance of random deviation is maintained. Lexical abstraction distinguishes between phonetic features that form the invariant core of a word and those that are susceptible to variation. Phonological inference relies on the context of surface changes to retrieve the underlying phonological form. In this paper we present a model of these processes in speech perception, based on connectionist learning techniques. A simple recurrent network was trained on the mapping from the variant surface form of speech to the underlying form.
Once trained, the network exhibited features of both abstraction and inference in its processing of normal speech, and predicted that similar behavior will be found in the perception of nonsense words. This prediction was confirmed in subsequent research (Gaskell & Marslen-Wilson, 1994).

To retrieve the file:

ftp archive.cis.ohio-state.edu
login: anonymous
password:
ftp> cd /pub/neuroprose
ftp> binary
ftp> get gaskell.phonrep.ps.Z
ftp> bye
uncompress gaskell.phonrep.ps.Z
lpr gaskell.phonrep.ps [or whatever you normally do to print]

Gareth Gaskell
Centre for Speech and Language, Birkbeck College, London
g.gaskell at psyc.bbk.ac.uk

From reiner at isy.liu.se Mon Apr 3 03:59:33 1995
From: reiner at isy.liu.se (Reiner Lenz)
Date: Mon, 3 Apr 95 09:59:33 +0200
Subject: No subject
Message-ID: <9504030759.AA22685@rainier.isy.liu.se>

jaaskelainen at joyl.joensuu.fi
Subject: Paper on Neuroprose: lenz.colorpca.ps.Z, Unsupervised Filtering of Color Spectra

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/lenz.colorpac.ps.Z
HTTP: ftp://archive.cis.ohio-state.edu/pub/neuroprose/lenz.colorpac.ps.Z

The following paper has been placed in the Neuroprose archive at Ohio State University:

Title: Unsupervised Filtering of Color Spectra

Reiner Lenz, Mats \"Osterberg, Dept. EE, Link\"oping University, S-58183 Link\"oping, Sweden
Jouni Hiltunen, Timo Jaaskelainen, Dept. Physics, University of Joensuu, FIN-80101 Joensuu, Finland
Jussi Parkkinen, Dept. Information Technology, Lappeenranta University of Technology, FIN-53851 Lappeenranta, Finland

Abstract
We describe how unsupervised neural networks can be used to extract features from databases of reflectance spectra. These databases try to sample color space in a way which reflects the properties of human color perception. In our construction of neural networks we first identify desirable properties of the network.
These properties are then incorporated into an energy function, and finally a learning rule is derived using optimization methods to find weight matrices which lead to minimal values of the energy function. We describe several energy functions and the performance of the resulting networks on the databases of reflectance spectra. The experiments show that the weight matrix for one of the systems is very similar to the eigenvector system, whereas the second type of system tries to rotate the eigenvector system in such a way that the resulting filters partition the spectrum into different bands. We will also show how the additional constraint of positive filter coefficients can be incorporated into the design.

It will appear in the Proc. Scandinavian Conference on Image Analysis, Uppsala, 1995. (8 pages. No hard copies available.)

_______________________________
More information about the unsupervised network used in the paper can be found in the PhD thesis: M. \"Osterberg: Quality functions for parallel selective principal component analysis. ISBN 91-7871-411-7
_______________________________

"Kleinphi macht auch Mist"

Reiner Lenz | Dept. EE. |
| Linkoeping University | email: reiner at isy.liu.se
| S-58183 Linkoeping/Sweden |

From eplunix!peter at eddie.mit.edu Mon Apr 3 16:39:48 1995
From: eplunix!peter at eddie.mit.edu (eplunix!peter@eddie.mit.edu)
Date: Mon, 03 Apr 95 16:39:48 EDT
Subject: The neural coding problem
Message-ID: <9504031748.aa18232@eddie.mit.edu>

In a very useful note Marius Usher recently (3/31/95) brought up the neural coding problem for discussion:

> Perhaps the most crucial question in the study of cortical function is
> whether the brain uses a mean rate code or a temporal code.
> Recently a number of models have been proposed in order to account
> for the variability of spike trains (discussed by Softky and Koch, 1993).
> As it seems, each of these models can account for variability, despite their
> very different assumptions and implications regarding the "neural code".
> We are writing this note in order to highlight the specific predictions in
> which these models differ, hoping in particular to direct the attention of
> experimentalists to the "missing" data required to disambiguate between
> these theoretical models and their implications about the neural code.

For the last five years I have been investigating population interspike interval codes in the auditory system which may subserve perception of the low pitches of complex tones (periodicity pitch) and the discrimination of phonetic elements. As a consequence, I have given a great deal of thought to how central auditory structures might use the wealth of timing information which is available in the auditory periphery. Here are some thoughts that may facilitate the more general discussion of the neural coding problem:

1. Very many neural codes based on temporal patterns or times-of-arrival (including interneural synchrony codes) are possible, but only a very small subset of the possible codes, particularly of the higher-order pattern codes, have yet been seriously considered, either experimentally or theoretically. We should not rule out more complex codes on the basis of not finding evidence for the simplest or most obvious ones.

2. Neural codes are generally not mutually exclusive. A given spike train can be interpreted in different ways by different neural assemblies downstream. Thus discharge rates could be used by some cell populations, temporal patterns by another, and patterns of spike latencies by another. A possible example of this can be found in the ascending auditory pathway of many mammals, where there are several brainstem pathways which subserve auditory localization.
Some pathways appear to convey binaural level differences encoded in discharge rates, while others appear to convey interaural time differences encoded in spike latencies and interneural synchronies. There are almost undoubtedly real neural architectures which gracefully fuse both rate- and time-based information (cf. Licklider), but very few "duplex" models have been proposed.

3. Often several aspects of neural discharge covary. For example, in the peripheral auditory system the roles of discharge rates and temporal patterns are hard to separate, since both kinds of information are present together in nearly all auditory populations.

4. While information can be encoded in the discharges of individual neurons, it seems likely from reliability considerations that information is encoded in the activity of populations of neurons. We are very familiar by now with the possibility of distributed rate codes, but it is also possible to have distributed temporal codes. An example of a distributed synchrony code is the "volley principle" in the auditory system. An example of a distributed temporal pattern code would be a population interspike interval code, where the all-order interval distribution of a population conveys information concerning stimulus periodicities. Distributed temporal codes can be either synchronous or asynchronous. Every hypothetical neural code has a corresponding hypothetical processing architecture.

5. Deciding whether a particular pattern of discharge is a "code" (i.e. that it has a functional role in the representation, transmission, and processing of information) is a difficult problem, since there are only a few systems whose function is understood well enough to see immediately what role a given putative encoding would play. Possibly the most direct way to demonstrate that a given discharge pattern has functional significance is to impose a particular pattern of activity on a neural population, e.g.
by electrical stimulation, and to observe the perceptual and behavioral consequences. Specific electrical time patterns are known to evoke particular sensations in many diverse sensory modalities: audition (single-channel cochlear implants, Eddington), somatoception (Mountcastle), gustation (Covey, DiLorenzo), and even color vision (Young).

The next best thing is to look for correspondences between neurophysiology and psychophysics by comparing how closely a putative neural representation covaries with the percepts/behaviors which the representation hypothetically subserves. On the perceptual side this is a stimulus-coding problem -- does the code covary with perceptual performances? If discharge rates saturate and representations based on rates are degraded at high stimulus intensities while perceptual discriminations are unchanged or even improve, then this is evidence against a functional role for rate representations (in the absence of elaborate compensatory mechanisms, which then must be found and incorporated into the representation's description).

6. Putative codes can be ruled out by showing that the information needed to perform a particular perceptual or behavioral task is not present in the discharge activity of a particular population. It is important not to erect "straw man" codes when trying to rule out possible coding schemes. In general, the kinds of temporal codes thus far considered in the literature have been only the most simple and obvious ones, and much more consideration needs to be given to population synchrony codes (a la Abeles' synfire codes) and asynchronous temporal pattern codes (a la Abeles' neurophysiological results).

7. If one finds a correspondence between discharge rates and some perceptually- or behaviorally-relevant distinction, this does not necessarily rule out a time code.
Because rate codes have been the conventional assumption of most of neuroscience, physiological investigations often stop when scientists find what they are looking for, i.e. rate-based correspondences with perception or behavior. However, underlying the rate-based responses may be complex time patterns of excitation and inhibition that correspond better to the psychophysics than the rate code itself. Arguably this is the case in explaining frequency selectivity in the peripheral auditory system: while one can point to rate-based "frequency-channels" in the auditory nerve, the interspike interval distributions of the auditory nerve fibers yield much more robust and higher-quality information (Goldstein) which, like the percepts, does not degrade at higher stimulus sound pressure levels. (A similar situation exists in the fly visual system -- Bialek, Reichardt.)

8. Long temporal integration times do not preclude temporal coding. In the auditory system there are a number of reasons to believe that the time window for fusing sounds is on the order of 5-10 msec (e.g. Chistovitch), whereas there is a longer build-up process associated with the apparent loudnesses of sounds of short durations (Zwislocki, Chistovitch). (We have many examples of rate-based processing models in the literature, but a dearth of time-based ones -- I will therefore outline a possible temporal integration mechanism as an example.) Let us suppose that we have a complex acoustic stimulus with a low pitch, say a vowel with a fundamental frequency F0 (voice pitch) of 100 Hz. The most frequent interspike interval in the population of auditory nerve fibers and (probably) most cochlear nucleus populations will be 1/F0 = 10 msec. At the auditory cortex, this voice pitch will be seen in periodicities of auditory evoked potentials (e.g.
Steinschneider et al), so there are evidently populations of neurons which are discharging either singly or in spatially-distributed volleys at intervals of 10 msec. There are probably other units which have discharge periodicities related to 10 msec but which are not synchronized relative to the rest of the population. There are many recurrent pathways within the auditory cortices and the thalamus in which spike trains containing a disproportionate number of these intervals can circulate. It is not then hard to imagine a temporal cross-correlation process between intervals circulating in these loops and incoming temporal patterns; as 10 msec intervals are differentially facilitated based on their prevalence in the reverberating loops, this kind of structure would produce an asynchronous build-up of 10 msec intervals over longer time windows. It's only a sketch, but such mechanisms do not seem to be out of the question.

Marius Usher also gave an example in favor of rate coding:

> Proponents of the coincidence detection principle may need to find an
> explanation for the wealth of evidence showing integration in the perceptual
> system. For example, the Bloch law (Loftus and Ruthruff 1993) shows that, for
> stimuli of duration shorter than 100 msec, perceptual discrimination depends
> only on the INTEGRAL of the stimulus (a high contrast 10 msec stimuli,
> produces exactly the same affect on perception as a 20 msec stimuli
> of half contrast).

While I am not a visual scientist, I do know that images of very short duration (tachistoscopically presented) can be recognized, and that, as in the auditory system, the time windows for perceptual fusion are much shorter than those for integration of intensity-related information. There are many alternatives to rate-based models of intensity discrimination, but these are generally underdeveloped.
Two general classes of alternative models would be latency-based models (latency, latency variances) and temporal correlation models (population interval models). The latency codes need gating/reset mechanisms in addition to buildup loops. Apparently the latency of visual evoked potentials corresponds well to subjective brightness (see S.S. Stevens, "Sensory power functions and neural events," in Principles of Receptor Physiology, Loewenstein, ed., Springer-Verlag, 1971), so there are some examples, even in vision, of possible codes not based on rate. In addition, Reichardt et al and Bialek et al found evidence for temporal cross-correlation operations in insect motion detection, and Chung, Raymond, and Lettvin found interspike intervals corresponding to various luminance conditions in the frog.

At the risk of heresy, I think that there could be a general temporal correlation theory of vision, particularly of form vision. I would think that a great deal of spatio-temporal structure would be imposed on retinal responses whenever a structured image moves across the receptor array, which, as I understand it, is the normal state of affairs -- the eye is always moving, even in "fixation". I have never understood how a rate-based model of visual form accounts for this (and what is it that is integrated over 100 msec if the image is constantly moving?). For an alternative temporal model of visual form it would be useful to know exactly how reliable the latency distributions of vertebrate retinal ganglion cells are in response to edges crossing their fields, and how much temporal cross-correlation information might exist between ganglion cells in a local region. Does anyone know offhand if (where) such data exist? If the temporal correlations in the retinal responses are what matter, then higher-contrast stimuli should produce more spatio-temporal structure.
It may be the case that the 10 msec high-contrast stimulus generates as many temporal cross-correlations as the 20 msec low-contrast stimulus, and that these cross-correlation patterns are integrated at higher levels through recurrent temporal cross-correlation. It would also be worth checking whether the rate model corresponds to the psychophysics under a wide variety of conditions (very low and very high light levels, in the presence of visual noise, chromatic light, etc.). In the auditory system, auditory nerve discharge rate models work fairly well for moderate sound pressure levels but not very well for levels near threshold or high levels.

9. (last thought) There are also many perceptual phenomena which are not easily explained using average rate models, but which are explicable in terms of temporal codes. Some of these are: the low pitch of unresolved harmonics, the pitch of repetition noise, the pitch of AM noise, the perception of different vibratory frequencies in somatoception, achromatic color (Benham's Top), the Pulfrich effect (latency & perception of depth), the perception of visual "spatial beats," which is the analogue of the "missing fundamental" in audition (Hammett & Smith), and all of the electrical stimulation examples alluded to above. Bekesy reportedly was able to localize stimuli differing in arrival time by 1-2 msec in many different modalities (e.g. audition, somatoception, taste, olfaction), presumably on the basis of latency differences (Bower, Bekesy). There is really a bewildering array of physiological and psychophysical evidence that needs to be examined in some sort of systematic way for correspondences. I've started to collect and collate the disparate evidence for temporal coding, and I have yet to find a sensory modality for which there is not some evidence available in the literature.
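[The population all-order interval code described in points 4 and 8 above is easy to illustrate numerically. The sketch below is an editorial toy, not code from any of the studies cited; the fiber count, the assumption that every model fiber fires on every stimulus cycle, the +/-0.2 msec jitter, and the 1-msec bin width are all arbitrary choices. It pools all-order interspike intervals across a simulated population phase-locked to a 100 Hz stimulus and confirms that the most frequent interval in the pooled distribution sits at 1/F0 = 10 msec.]

```python
import random
from collections import Counter

def all_order_intervals(spike_times, max_interval=30.0):
    """All forward differences between spike pairs (msec), up to max_interval."""
    intervals = []
    for i, t1 in enumerate(spike_times):
        for t2 in spike_times[i + 1:]:
            d = t2 - t1
            if d > max_interval:
                break              # spike_times is sorted, so later pairs are longer
            intervals.append(d)
    return intervals

def population_interval_histogram(trains, bin_ms=1.0):
    """Pool all-order intervals across the whole population into msec bins."""
    counts = Counter()
    for train in trains:
        for d in all_order_intervals(sorted(train)):
            counts[round(d / bin_ms) * bin_ms] += 1
    return counts

# Toy population: 50 model fibers, each phase-locked to a 100 Hz stimulus
# (10 msec period) with +/- 0.2 msec of timing jitter per spike.
rng = random.Random(0)
period_ms = 10.0                   # 1/F0 for F0 = 100 Hz
trains = [[k * period_ms + rng.uniform(-0.2, 0.2) for k in range(100)]
          for _ in range(50)]

hist = population_interval_histogram(trains)
peak_interval = max(hist, key=hist.get)
print(peak_interval)               # -> 10.0, the most frequent all-order
                                   #    interval equals the period 1/F0
```

[In real auditory nerve data the firing is probabilistic and intervals appear at all multiples of 1/F0, but the peak at 1/F0 itself is the feature this kind of code exploits; here the 10 msec bin dominates simply because consecutive-cycle pairs outnumber the longer-lag pairs.]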
The neural coding problem is fundamental because until we understand the nature of the neural signals involved, we may miss those aspects of neural activity which are essential to the functional organization of the system.

Dr. Peter Cariani
Eaton Peabody Laboratory of Auditory Physiology
Massachusetts Eye & Ear Infirmary
243 Charles St., Boston, MA 02114 USA
4/3/95
email: eplunix!peter at eddie.mit.edu
tel: (617) 573-4243
FAX: (617) 720-4408

References
--------------------------------------------------------------------------

Abeles, M., H. Bergman, E. Margalit, and E. Vaadia. "Spatiotemporal firing patterns in the frontal cortex of behaving monkeys." J. Neurophysiol. 70 (4 1993): 1629-1638. See also Abeles et al in Concepts in Neuroscience, 4(2): 131-158, 1993.

Bekesy, Georg von. "Olfactory analogue to directional hearing." Journal of Applied Physiology 19 (3 1964a): 369-373.

Bekesy, Georg von. "Rhythmical variations accompanying gustatory stimulation observed by means of localization phenomena." Journal of General Physiology 47 (5 1964b): 809-825.

Bialek, W., F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. "Reading a neural code." Science 252 (28 June 1991): 1854-1856. Fly vision.

Bower, T. G. R. "The evolution of sensory systems." In Perception: Essays in Honor of James J. Gibson, ed. Robert B. MacLeod and Herbert Pick Jr. 141-152. Ithaca: Cornell University Press, 1974. (Bekesy anecdotes)

Bullock, T.H. "Signals and neural coding." In The Neurosciences: A Study Program, ed. G.C. Quarton, T. Melnechuck, and F.O. Schmitt. 347-352. New York: Rockefeller University Press, 1967. General review.

Cariani, P. and B. Delgutte. "Interspike interval distributions of auditory nerve fibers in response to variable-pitch complex stimuli." Assoc. Res. Otolaryng. (ARO) Abstr. (1992).

Cariani, P. and B. Delgutte. "The pitch of complex sounds is simply coded in interspike interval distributions of auditory nerve fibers." Soc. Neurosci. Abstr. 18 (1992): 383.

Cariani, P. "As if time really mattered: temporal strategies for neural coding of sensory information." Communication and Cognition - Artificial Intelligence, 1995, 12(1-2): 161-229. Preprinted in: Origins: Brain and Self-Organization, K. Pribram, ed., Lawrence Erlbaum Assoc., 1994; 208-252.

Chistovitch, L. A. "Central auditory processing of peripheral vowel spectra." J. Acoust. Soc. Am. 77(3): 789-805. Time window for fusion of spectral shapes.

Chung, S.H., S.A. Raymond, and J.Y. Lettvin. "Multiple meaning in single visual units." Brain Behav Evol 3 (1970): 72-101. Interval codes in frog vision.

Covey, Ellen. "Temporal Neural Coding in Gustation." Ph.D. thesis, Duke University, 1980. Time pattern codes in rodent taste.

Delgutte, B. and P. Cariani. "Coding of the pitch of harmonic and inharmonic complex tones in the interspike intervals of auditory nerve fibers." In The Processing of Speech, ed. M.E.H. Schouten. Berlin: Mouton de Gruyter, 1992.

Di Lorenzo, Patricia M. and Gerald S. Hecht. "Perceptual consequences of electrical stimulation in the gustatory system." Behavioral Neuroscience 107 (1993): 130-138. Time pattern codes in rodent taste.

Eddington, D. K., W.H. Dobelle, D. E. Brackman, M. G. Mladejovsky, and J. Parkin. "Place and periodicity pitch by stimulation of multiple scala tympani electrodes in deaf volunteers." Trans. Am. Soc. Artif. Intern. Organs XXIV (1978).

Festinger, Leon, Mark R. Allyn, and Charles W. White. "The perception of color with achromatic stimulation." Vision Res. 11 (1971): 591-612.

Goldstein, J. L. and P. Srulovicz. "Auditory-nerve spike intervals as an adequate basis for aural frequency measurement." In Psychophysics and Physiology of Hearing, ed. E.F. Evans and J.P. Wilson. London: Academic Press, 1977.

Hammett, S.T. and Smith, A.T. "Temporal beats in the human visual system." Vision Research 34(21): 2833-2840. Missing (spatial) fundamentals.

Kozak, W.M. and H.J. Reitboeck. "Color-dependent distribution of spikes in single optic tract fibers of the cat." Vision Research 14 (1974): 405-419.

Kozak, W.M., H.J. Reitboeck, and F. Meno. "Subjective color sensations elicited by moving patterns: effect of luminance." In Seeing Contour and Colour, ed. J.J. Kulikowski and C.M. Dickenson. 294-310. New York: Pergamon Press, 1989.

Licklider, J.C.R. "A duplex theory of pitch perception." Experientia VII (4 1951): 128-134. Mixed time-place autocorrelation model.

Licklider, J.C.R. "Three auditory theories." In Psychology: A Study of a Science. Study I. Conceptual and Systematic, ed. Sigmund Koch. 41-144. Volume I. Sensory, Perceptual, and Physiological Formulations. New York: McGraw-Hill, 1959.

Macrides, F. "Dynamic aspects of central olfactory processing." In Chemical Signals in Vertebrates, ed. D. Muller-Schwarze and M. M. Mozell. 207-229. 3. New York: Plenum, 1977. Time patterns in smell. See also more recent work by Gilles Laurent in insect olfaction: Science, 265: 1872-75, Sept 23, 1994.

Macrides, Foteos and Stephan L. Chorover. "Olfactory bulb units: activity correlated with inhalation cycles and odor quality." Science 175 (7 January 1972): 84-86. Temporal code for smell.

Morrell, F. "Electrical signs of sensory coding." In The Neurosciences: A Study Program, ed. G.C. Quarton, T. Melnechuck, and F.O. Schmitt. 452-469. New York: Rockefeller University Press, 1967. Review.

Mountcastle, Vernon. "The problem of sensing and the neural coding of sensory events." In The Neurosciences: A Study Program, ed. G.C. Quarton, T. Melnechuk, and F.O. Schmitt. New York: Rockefeller University Press, 1967. Review.

Mountcastle, Vernon. "Temporal order determinants in a somesthetic frequency discrimination: sequential order coding." Annals New York Acad. Sci. 682 (1993): 151-170. Problem of vibration discrimination/neural representation.

Mountcastle, V.B., W.H. Talbot, H. Sakata, and J. Hyvarinen. "Cortical neuronal mechanisms in flutter-vibration studied in unanesthetized monkeys. Neuronal periodicity and frequency discrimination." J. Neurophysiol. 32 (1969): 452-485.

Reichardt, Werner. "Autocorrelation, a principle for the evaluation of sensory information by the central nervous system." In Sensory Communication, ed. Walter A. Rosenblith. 303-317. New York: MIT Press/Wiley, 1961. See also Egelhaaf & Borst. "A look into the cockpit of the fly: visual orientation, algorithms, and identified neurons." J. Neuroscience, Nov. 1993, 13(11): 4563-4574.

Uttal, W.R. The Psychobiology of Sensory Coding. New York: Harper and Row, 1973. Review of the coding problem.

Young, R.A. "Some observations on temporal coding of color vision: psychophysical results." Vision Research 17 (1977): 957-965. Electrical temporal pattern stimulation produces colored phosphenes.

Zwislocki, J. "Theory of temporal auditory summation." J. Acoust. Soc. Am. 32(8) (1960): 1046-60.

From cohn at psyche.mit.edu Mon Apr 3 13:49:56 1995
From: cohn at psyche.mit.edu (David Cohn)
Date: Mon, 3 Apr 95 13:49:56 EDT
Subject: reminder: Active Learning position papers due April 14
Message-ID: <9504031749.AA08795@psyche.mit.edu>

AAAI Fall Symposium on Active Learning
November 10-12, at MIT

An active learning system is one that can influence the training data it receives by actions or queries to its environment. Potential participants in the AAAI Fall Symposium on Active Learning are invited to submit a short position paper (at most two pages) discussing what they could contribute to a dialogue on active learning and/or what they hope to learn by participating.

Send papers to arrive by April 14, 1995 to:

David D. Lewis (lewis at research.att.com)
AT&T Bell Laboratories
600 Mountain Ave.; Room 2C-408
Murray Hill, NJ 07974-0636; USA

Electronic mail submissions are strongly preferred. The full Call for Participation is available on the World Wide Web or by contacting me or lewis at research.att.com.
-David Cohn (cohn at psyche.mit.edu)
Co-chair, AAAI FSS on Active Learning

From ucganlb at ucl.ac.uk Tue Apr 4 05:42:10 1995
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Tue, 04 Apr 95 10:42:10 +0100
Subject: rate vs temporal coding
Message-ID: <254412.9504040942@link-1.ts.bcc.ac.uk>

How I enjoyed Peter Cariani's comments on rate vs. temporal coding! In our lab there are also some data supporting the role of temporal coding, concerning rat hippocampus and navigation (O'Keefe & Recce, 1993), and some (simplistic) models of it, combining rate and temporal coding (Burgess et al., 1993 & 1994). A paper by Nicolelis et al. (1993) also points to temporal coding in the rat's thalamic representation of (sensory stimulation to) its face.

All the best, Neil Burgess

Burgess N, O'Keefe J \& Recce M (1993) `Using hippocampal `place cells' for navigation, exploiting phase coding', in: S. J. Hanson, C. L. Giles and J. D. Cowan (eds.) {\it Advances in Neural Information Processing Systems 5}, 929-936. San Mateo, CA: Morgan Kaufmann. (or neuroprose/burgess.hipnav.ps.Z)

Burgess N, Recce M \& O'Keefe J (1994) `A model of hippocampal function', {\it Neural Networks}, {\bf 7}, 1065-1081. (or neuroprose/burgess.hipmod.ps.Z; http://rana.usc.edu:8376/~aguazzel/cs664/Burgess/paper.html)

Nicolelis M A L, Lin R C S, Woodward D J \& Chapin J K (1993) `Dynamic and distributed properties of many-neuron ensembles in the ventral posterior medial thalamus of awake rats', {\it Proc. Natl. Acad. Sci. USA} {\bf 90} 2212-2216.

O'Keefe J \& Recce M (1993) `Phase relationship between hippocampal place units and the EEG theta rhythm', {\it Hippocampus} {\bf 3} 317-330.

From pja at barbarian.endicott.ibm.com Tue Apr 4 11:20:00 1995
From: pja at barbarian.endicott.ibm.com (Peter J. Angeline)
Date: Tue, 4 Apr 1995 11:20:00 -0400
Subject: CFP for Genetic Programming Workshop at ICGA95
Message-ID: <9504041520.AA14075@barbarian.endicott.ibm.com>

Call for Participation
Advances in Genetic Programming Workshop at ICGA-95
and _Advances in Genetic Programming II_ to be published by MIT Press

An informal workshop on Genetic Programming is planned for ICGA-95. This workshop is intended to bring together conference attendees interested in Genetic Programming in a more informal format, to foster discussion and review the most recent work in the field. The workshop will consist of several presentations by researchers currently working on various advanced topics in Genetic Programming. As with the GP workshop at ICGA-93, an edited book of papers archiving current state-of-the-art research in Genetic Programming is also planned. This book will comprise papers chiefly from the workshop but will also include additional original submitted work. We are currently soliciting submissions both for presentation at the workshop and for publication in the edited book.

Appropriate topics include, but are not restricted to, the following:

o Theory of genetic programming
o Extensions to Genetic Programming
o Evolution of Modular GP structures
o Comparisons between Genetic Programming and other techniques
o Coevolution in GP
o Hybrid algorithms using GP elements
o Novel applications of Genetic Programming

Authors interested in presenting at the workshop and/or being considered for inclusion in the book should submit an extended abstract describing their work to one of the workshop organizers listed below no later than June 6, 1995. Abstracts describing work-in-progress will be considered but will be given a lower priority than abstracts reporting results. Extended abstracts should be no longer than 3 pages in 12 pt. font, including graphs and references, when printed.
Please submit postscript and/or ASCII via email if possible, although hardcopies will also be accepted. Abstracts describing work accepted for presentation at ICGA-95 should NOT be submitted. Please mark your abstract with the phrase "WORKSHOP", "BOOK" or "WORKSHOP AND BOOK" to signify your submission's status. All unmarked submissions will be considered for inclusion in both the workshop and the book. Authors of abstracts tentatively accepted for the book must submit a full paper describing their work to the editors on or before August 1, 1995. Authors will be notified of final acceptance into the edited book on August 28, 1995. Additional information concerning preparation of the paper for the edited book will be sent to participants after final acceptance. Camera-ready papers will be due September 19, with publication in spring of 1996. Abstracts for both the workshop and the edited book will be selected for their originality, clarity and contribution to Genetic Programming.

Important Dates:
----------------
June 6, 1995      Extended abstracts due to workshop organizers
June 25, 1995     Notification of abstracts selected for the workshop and
                  tentative acceptance of abstracts for the edited book
July 15-19, 1995  ICGA-95 Conference
August 1, 1995    Papers for book must be RECEIVED by editors
August 28, 1995   Final notification of acceptance into edited book sent
Sept. 19, 1995    Final camera-ready copies must be RECEIVED

Workshop Organizers / Editors:

Peter J.
Angeline Loral Federal Systems MD 0210 1801 State Route 17C Owego, NY 13827 pja at lfs.loral.com Kim Kinnear Adaptive Computing Technology 62 Picnic Rd Boxboro, MA 01719 kinnear at adapt.com From bill at nsma.arizona.edu Wed Apr 5 02:32:20 1995 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Tue, 04 Apr 1995 23:32:20 -0700 (MST) Subject: discussion on variability and neural codes Message-ID: <9504050632.AA17789@nsma.arizona.edu> Marius Usher and Martin Stemmler write: > Perhaps the most crucial question in the study of cortical function > is whether the brain uses a mean rate code or a temporal code. I would like to argue that we should refrain from putting the question in these terms, because it is not productive. But first I should say that my criticism applies only to this one sentence that Marius and Martin wrote, not to the rest of their presentation, which I think was quite sophisticated and insightful. The problem with posing the question in terms of a mean rate code versus a temporal code is that the reality is clearly somewhere in between. A mean rate code is one in which shifting the time of an action potential makes no difference; a temporal code is one in which shifting the time does make a difference. For any code actually used in the brain, though, shifting the time of an action potential will make a difference if and only if the shift is sufficiently large. This is actually pretty obvious: shifting a spike by 1 nanosecond surely won't make a difference anywhere in the brain, but shifting by 1 year surely will make a difference everywhere. The right question to ask is how large a shift it takes to make a difference. 
We can think of this in terms of a plot of the following form:

  Effect
    |
    |                      ***************************
    |           ***********
    |    *******
    |  ***
    | **
    | **
    | *
    | *
    | *
    | *
    |*
    |*
    |_____________________________________________________________
                          Spike Time Shift

Thus, for any sort of imaginable code, the effect of shifting a spike will increase linearly for very small shifts, and will eventually saturate at a level beyond which further time shifts have no greater effect. (The saturation level is equal to the effect of deleting the spike entirely.) Of course, complicated things may happen in between. Instead of asking whether we are looking at a mean rate code or a temporal code (which is a meaningless question), we should ask what the shape of the time-shift-vs-effect curve is, and in particular, what the largest and smallest time constants are. Note that, although the shape of the curve may change if the effect is quantified in a different way, the time constants are likely to remain similar. In some parts of the brain, the auditory system in particular, time constants in the submillisecond range are clearly present. In a wide range of systems, though, including Bialek's fly motion cells, Wyeth Bair's MT cells, and the hippocampal place cells our group has been working with, the smallest time constants seem to be in the 10--20 msec range. As a cynic might perhaps expect, this range is perfectly positioned for both the mean rate and temporal coding camps to seize on as evidence to support their views, thereby confusing the issue almost beyond hope. I think we will make better progress in understanding neural coding if we can get beyond this simplistic dichotomy. In summary: Ask not whether 'tis a rate code or a time code; ask rather what the time constant is.
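The shape of such a curve can be probed numerically. The sketch below is purely illustrative (not from the post): it drives a hypothetical leaky-integrator neuron with a spike train, shifts one spike by varying amounts, and measures the "effect" as the L1 distance between the original and shifted response traces. The time constant, spike times and distance measure are all assumptions.

```python
import math

def trace(spike_times, tau=20.0, t_end=300.0, dt=0.5):
    """Response trace of a leaky integrator (time constant tau, in ms)
    driven by unit impulses at the given spike times."""
    v, out, i, t = 0.0, [], 0, 0.0
    spikes = sorted(spike_times)
    while t < t_end:
        while i < len(spikes) and spikes[i] <= t:
            v += 1.0
            i += 1
        out.append(v)
        v *= math.exp(-dt / tau)
        t += dt
    return out

def effect(shift, base=(10.0, 30.0, 50.0, 70.0, 90.0), dt=0.5):
    """Downstream effect of shifting one spike by `shift` ms, measured
    as the L1 distance between the two response traces."""
    shifted = list(base)
    shifted[2] += shift
    a, b = trace(base, dt=dt), trace(shifted, dt=dt)
    return sum(abs(x - y) for x, y in zip(a, b)) * dt

# Small shifts produce small effects that grow roughly linearly, then
# saturate once the shifted spike no longer overlaps the influence of
# its original position.
curve = [effect(s) for s in (0.5, 5.0, 50.0, 200.0)]
```

With these assumed parameters the curve rises and then flattens, and its knee sits near the integrator's time constant, which is the quantity the post says we should be asking about.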
-- Bill

From Roland.Baddeley at psy.ox.ac.uk Wed Apr 5 09:26:00 1995 From: Roland.Baddeley at psy.ox.ac.uk (Roland Baddeley) Date: Wed, 5 Apr 1995 14:26:00 +0100 Subject: Paper available on exploratory projection pursuit Message-ID: <199504051326.OAA28015@axp01.mrc-bbc.ox.ac.uk>

The following 21-page manuscript is now available by ftp:

FTP-host: axp01.mrc-bbc.ox.ac.uk
FTP-filename: /pub/pub/users/rjb/fyfe_project.ps.Z

Hardcopies are not available.

----------------------------------------------------------------------------

Non-linear Data Structure Extraction Using Simple Hebbian Networks

Colin Fyfe, Dept of Computer Science, University of Strathclyde, Scotland email: fyfe_ci0 at paisley.ac.uk

and Roland Baddeley, Department of Physiology and Experimental Psychology, University of Oxford, England OX1 3UD. email: Roland.Baddeley at psy.ox.ac.uk

Abstract: We present a class of neural networks based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing Principal Component Analysis (PCA). In this paper, this is extended to Exploratory Projection Pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on Non-linear PCA. This paper is to appear in Biological Cybernetics.
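As a concrete illustration of the negative-feedback idea in the abstract, here is a minimal single-unit sketch (my own toy example, not the authors' code): the output is fed back to subtract its own reconstruction from the input, and plain Hebbian learning on the residual then converges toward the first principal component, with no explicit normalisation, decay or clipping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a zero-mean 2-D cloud elongated along (1, 1)/sqrt(2).
n = 2000
z = rng.normal(size=n)
x = np.stack([z + 0.1 * rng.normal(size=n),
              z + 0.1 * rng.normal(size=n)], axis=1)
x -= x.mean(axis=0)

# One output unit with negative feedback of activation:
#   y = w . x ;  e = x - y * w   (feedback subtracts the reconstruction)
#   dw = eta * y * e             (simple Hebbian rule on the residual)
# Note there is no explicit weight normalisation, decay or clipping.
w = rng.normal(scale=0.1, size=2)
eta = 0.01
for _ in range(20):
    for xi in x:
        y = w @ xi
        e = xi - y * w
        w += eta * y * e

# The weight vector should align with the first principal component
# and settle near unit length.
pc1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
alignment = abs(w @ pc1) / np.linalg.norm(w)
```

Expanding the update gives dw = eta * (y*x - y*y*w), so the feedback term plays the stabilising role that explicit weight normalisation plays in other Hebbian schemes.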
________________________________________
Roland Baddeley
Department of Psychology and Physiology
South Parks Road
University of Oxford
Oxford, England OX1 3UD
________________________________________

From FRYRL at f1groups.fsd.jhuapl.edu Wed Apr 5 10:30:00 1995 From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.) Date: Wed, 05 Apr 95 10:30:00 EDT Subject: Temporal Information Message-ID: <2F821D00@fsdsmtpgw.fsd.jhuapl.edu>

The establishment of what actually comprises information in biological systems is an essential problem, since this determination provides the basis for the analytical evaluation of the information-processing capability of neural structures. In response to the question "What comprises information to a neuron?" consider the answer that those quantities which are observable or measurable by a neuron represent information. Hence what is information to one neuron may not necessarily be information to another. Now, as current discussions have pointed out, there are many possibilities regarding what exactly these measurable quantities might consist of in the way of rate encoding, time-of-arrival, and so on, or even possibly combinations thereof. Consider the following simplistic perspective. Observable quantities may be measured in both space and time, both of which can conceptually be thought of as being quantized in a neural context. Spatial quantization occurs due to the specificity of synaptic (or perhaps axonal, according to current understandings of some neural structures) input for a given neuron. The synaptic efficacies can be viewed as a Hermitian measurement operator giving rise to the somatic measured quantity. In a dual sense, time is also quantized if the critical temporal measurement is the time-of-arrival of specific action potentials, which either do or do not exist at a given instant in time.
The term "instant" used here obviously must be considered in regard to "Bill's" question of what the critical time constant or constants are for the subject neural assemblies. There is empirical evidence that there are adaptation mechanisms in place which serve to modulate time-of-arrival, giving rise to a delay vector having a one-to-one correspondence with the efficacy vector. From this perspective there is a dual time-space dependency on at least some of the quantities observable by an individual neuron. The observable quantity would then consist of a_n*x(t-tau_n), where a_n is the learned connection strength and tau_n is the learned delay. This has been the basis for my research, in which I have been applying the basic Shannon measures of entropy, mutual information, and relative entropy to the study of neural structures which are optimal in an information-theoretic sense; I have publications and papers on this, some of which exist in the neuroprose repository. With this view, the sets {a_n} and {tau_n} are seen to represent Lagrange vectors which serve to maximize the mutual information between neural inputs and output. This is of course a personal perspective, and obviously there may be many other temporal modalities for the inter-neuron exchange of information. It can be argued, however, that the above modality is in many ways the simplest. Analytically, it seems a very tractable perspective as opposed to rate, latencies, etc.

Robert Fry
Johns Hopkins University / Applied Physics Laboratory
Johns Hopkins Road
Laurel, MD 20723

From eplunix!peter at eddie.mit.edu Wed Apr 5 11:43:23 1995 From: eplunix!peter at eddie.mit.edu (eplunix!peter@eddie.mit.edu) Date: Wed, 05 Apr 95 11:43:23 EDT Subject: Codes and time constants Message-ID: <9504051243.aa04074@eddie.mit.edu>

Regarding mean rate vs.
temporal codes, Bill Skaggs (4/4/95) commented: > Instead of asking whether we are looking at a mean rate code or a > temporal code (which is a meaningless question), we should ask what the > shape of the time shift-vs-effect curve is, and in particular, what the > largest and smallest time constants are. Note that, although the shape > of the curve may change if the effect is quantified in a different way, > the time constants are likely to remain similar. I know this is the way that many people think of the problem of rate vs. temporal codes, but it can lead to the conflation of codes and concepts which, in my opinion, really are different and should be kept distinct. The issue of time constants is related to the temporal precision needed to convey information via some code (what distortions in spike time pattern are sufficient to change one message into another?). The issue of what spike train patterns convey the message is complementary to the issue of precision. (e.g. the average power of a signal is a different property than its Fourier spectrum, regardless of what sampling rate is used to specify the signal.) An average rate code means that the average number of spikes within a given temporal integration window is all that counts in determining the message, i.e. rearranging spike time patterns without changing the number of spikes within a window should have no effect (otherwise we would have something more elaborate than a pure mean rate code). A temporal code is one in which the message sent is determined by: 1) the time patterns of spikes (e.g. the complex Fourier spectrum of the spike pattern) or 2) the particular spike arrival times relative to some reference event (e.g. the return time after an echolocation call, or absolute time-of-arrival relative to that of other spikes in spike trains produced by other neurons -- interneural synchrony). 
For a temporal pattern code, if the time patterns of spike arrivals are scrambled without changing the mean rate, then the message is altered. In the Covey/DiLorenzo electrical stimulation experiments in the gustatory system that I cited in the previous message, a particular time pattern of electrical pulses evokes behavioral signs in a rat that it tastes a sweet substance, whereas scrambling the patterns while maintaining the average number of pulses evokes no such signs. The gustatory system is slow, so the temporal precision of the code is probably in the tens of milliseconds, but nevertheless, the time pattern does appear to be the coding vehicle, since its disruption evidently has perceptual consequences. The differences between rate-based and timing-based codes can also be seen from the perspective of the decoding operations required of each. The neural operations needed to interpret rate codes are rate-integration processes (all other things being equal, the longer the window the higher the precision), whereas those needed to interpret temporal codes are coincidence and delay processes (the shorter the coincidence windows and the more reliable the delay mechanisms, the higher the precision). In my opinion, this is why the discussion of whether most cortical pyramidal neurons are performing rate-integrations vs. coincidence detections (and yes, on what time scales they might be doing these things) is so crucial. It might even be possible for a given neuron to be doing both, albeit on different time scales, since particular time patterns of coincidence can be embedded in spike trains also driven by Poisson-like inputs (what information-processing operations a neuron carries out depend upon the way(s) its output is interpreted by other parts of the system). This is why the detailed pattern analysis of Abeles et al and Lestienne & Strehler is probably needed in addition to statistical approaches based on stationary processes. 
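The complementarity of the two decoding styles can be made concrete with two toy readouts (an illustrative sketch, not a model from the post): both see spike trains with identical spike counts, but only the coincidence-based readout distinguishes a train phase-locked to a reference from a scrambled one. All numbers here are assumptions.

```python
import random

random.seed(1)

def rate_readout(spikes, window=(0.0, 500.0)):
    """Mean-rate decoder: only the spike count in the window matters."""
    lo, hi = window
    return sum(lo <= t < hi for t in spikes)

def coincidence_readout(train, ref, width=2.0):
    """Temporal decoder: counts spikes in `train` that fall within `width`
    ms of some spike in the reference train (a crude coincidence detector)."""
    return sum(any(abs(t - u) <= width for u in ref) for t in train)

ref = [10.0 * k for k in range(1, 50)]                # 49 reference spikes
locked = [t + random.gauss(0.0, 0.5) for t in ref]    # phase-locked train
scrambled = sorted(random.uniform(0.0, 500.0) for _ in range(49))

r_locked, r_scrambled = rate_readout(locked), rate_readout(scrambled)
c_locked = coincidence_readout(locked, ref)
c_scrambled = coincidence_readout(scrambled, ref)
# Same mean rate, very different temporal readouts.
```

Scrambling the spike times preserves the rate readout but collapses the coincidence readout, which is the distinction the electrical-stimulation experiments above exploit behaviorally.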
It should be noted that average rates are one dimensional, scalar codes, whilst temporal codes can support spike trains conveying more than one independent signal type (multiplexing). I think that this property of temporal codes has potentially very great implications for the design of artificial neural networks, if only because a multiplicity of orthogonal signals allows one to keep different kinds of information from interfering with each other. Dr. Peter Cariani Eaton Peabody Laboratory of Auditory Physiology Massachusetts Eye & Ear Infirmary 243 Charles St., Boston, MA 02114 USA email: eplunix!peter at eddie.mit.edu tel: (617) 573-4243 FAX: (617) 720-4408 From marshall at cs.unc.edu Wed Apr 5 16:24:15 1995 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Wed, 5 Apr 1995 16:24:15 -0400 Subject: Paper available: Context, uncertainty, multiplicity, & scale Message-ID: <199504052024.QAA03996@marshall.cs.unc.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/marshall.context.ps.Z This paper is available via anonymous-ftp from the Neuroprose archive. It is scheduled to appear in Neural Networks 8(3). This is a revision (April 1994) of the previously-distributed version (February 1993), with some new sections. -------------------------------------------------------------------------- ADAPTIVE PERCEPTUAL PATTERN RECOGNITION BY SELF-ORGANIZING NEURAL NETWORKS: CONTEXT, UNCERTAINTY, MULTIPLICITY, AND SCALE JONATHAN A. MARSHALL Department of Computer Science, CB 3175, Sitterson Hall University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A. Telephone 919-962-1887, e-mail marshall at cs.unc.edu ABSTRACT. A new context-sensitive neural network, called an "EXIN" (excitatory+inhibitory) network, is described. EXIN networks self-organize in complex perceptual environments, in the presence of multiple superimposed patterns, multiple scales, and uncertainty. 
The networks use a new inhibitory learning rule, in addition to an excitatory learning rule, to allow superposition of multiple simultaneous neural activations (multiple winners), under strictly regulated circumstances, instead of forcing winner-take-all pattern classifications. The multiple activations represent uncertainty or multiplicity in perception and pattern recognition. Perceptual scission (breaking of linkages) between independent category groupings thus arises and allows effective global context-sensitive segmentation constraint satisfaction, and exclusive credit attribution. A Weber Law neuron-growth rule lets the network learn and classify input patterns despite variations in their spatial scale. Applications of the new techniques include segmentation of superimposed auditory or biosonar signals, segmentation of visual regions, and representation of visual transparency. KEYWORDS. Masking fields, Anti-Hebbian learning, Distributed coding, Adaptive constraint satisfaction, Decorrelators, Excitatory+inhibitory (EXIN) learning, Transparency, Segmentation. 46 pages. Thanks to Jordan Pollack for maintaining the Neuroprose archive! 
--------------------------------------------------------------------------

From jari at vermis.hut.fi Thu Apr 6 08:59:57 1995 From: jari at vermis.hut.fi (Jari Kangas) Date: Thu, 6 Apr 1995 15:59:57 +0300 Subject: Updated version 3.1 of SOM_PAK Message-ID: <199504061259.PAA00366@vermis>

************************************************************************

                               SOM_PAK

                 The Self-Organizing Map Program Package

                      Version 3.1 (April 7, 1995)

                           Prepared by the
                      SOM Programming Team of the
                   Helsinki University of Technology
            Laboratory of Computer and Information Science
                  Rakentajanaukio 2 C, SF-02150 Espoo
                               FINLAND

                       Copyright (c) 1992-1995

************************************************************************

Updated public-domain programs for Self-Organizing Map (SOM) algorithms are available via anonymous FTP on the Internet. A new book on SOM and LVQ (Learning Vector Quantization) has also recently come out:

Teuvo Kohonen. Self-Organizing Maps (Springer Series in Information Sciences, Vol 30, 1995). 362 pp. Price (hardcover only) USD 49.50 or DEM 98,-. ISBN 3-540-58600-8.

In short, the Self-Organizing Map (SOM) defines a 'nonlinear projection' of the probability density function of the high-dimensional input data onto a two-dimensional display. The SOM places a number of reference vectors into an input data space to approximate its data set in an ordered fashion, and thus implements a kind of nonparametric, nonlinear regression. This package contains all the programs necessary for the application of Self-Organizing Map algorithms in arbitrarily complex data visualization tasks. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the SOM programs we have tried to use as simple code as possible.
Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows:

1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it.
2. Log in as user "anonymous" with your own e-mail address as password.
3. Change remote directory to "/pub/som_pak".
4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files.

The som_pak program package includes the following files:

- Documentation:
  README              short description of the package and installation instructions
  som_doc.ps          documentation in (c) PostScript format
  som_doc.ps.Z        same as above but compressed
  som_doc.txt         documentation in ASCII format

- Source file archives:
  som_p3r1.exe        Self-extracting MS-DOS archive file
  som_pak-3.1.tar     UNIX tape archive file
  som_pak-3.1.tar.Z   same as above but compressed

An example of FTP access is given below:

unix> ftp cochlea.hut.fi (or 130.233.168.48)
Name: anonymous
Password:
ftp> cd /pub/som_pak
ftp> binary
ftp> get som_pak-3.1.tar.Z
ftp> quit
unix> uncompress som_pak-3.1.tar.Z
unix> tar xvfo som_pak-3.1.tar

See file README for further installation instructions. All comments concerning this package should be addressed to som at cochlea.hut.fi.
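For readers who want the algorithm rather than the package, a minimal SOM update loop can be sketched as follows. This is an illustrative toy (my parameters, not SOM_PAK's defaults or its ANSI C code): each input pulls its best-matching reference vector, and its map neighbours, toward itself, with a learning rate and neighbourhood radius that shrink over training.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, dim, n_steps = 10, 2, 2000
codebook = rng.uniform(size=(n_units, dim))   # reference vectors on a 1-D map
data = rng.uniform(size=(500, dim))

def qerror(cb):
    """Average squared distance from each sample to its best-matching unit."""
    return float(np.mean([((cb - x) ** 2).sum(axis=1).min() for x in data]))

err_before = qerror(codebook)
for step in range(n_steps):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))  # best-matching unit
    frac = step / n_steps
    alpha = 0.5 * (1.0 - frac)                # decaying learning rate
    sigma = 3.0 * (1.0 - frac) + 0.5          # shrinking neighbourhood radius
    d = np.abs(np.arange(n_units) - bmu)      # distance along the map
    h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    codebook += alpha * h[:, None] * (x - codebook)
err_after = qerror(codebook)
```

The neighbourhood kernel h is what makes this an ordered, topology-preserving quantization rather than plain competitive learning; quantization error should drop relative to the random initial codebook.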
************************************************************************

From Roland.Baddeley at psy.ox.ac.uk Fri Apr 7 08:09:23 1995 From: Roland.Baddeley at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 7 Apr 1995 13:09:23 +0100 Subject: Incorrect location for paper on exploratory projection pursuit Message-ID: <199504071209.NAA12204@axp01.mrc-bbc.ox.ac.uk>

Unfortunately, as pointed out by a number of people, I added one too many pub's to the directory location of the paper "Non-linear Data Structure Extraction Using Simple Hebbian Networks". The address should therefore be:

FTP-host: axp01.mrc-bbc.ox.ac.uk
FTP-filename: /pub/users/rjb/fyfe_project.ps.Z

NOT /pub/pub/users/rjb/fyfe_project.ps.Z

Sorry for any problem caused by having too many pubs,

Roland Baddeley

=================================================================

To reiterate, the paper was:

"Non-linear Data Structure Extraction Using Simple Hebbian Networks"

Colin Fyfe, Dept of Computer Science, University of Strathclyde, Scotland email: fyfe_ci0 at paisley.ac.uk

and Roland Baddeley, Department of Physiology and Experimental Psychology, University of Oxford, England OX1 3UD. email: Roland.Baddeley at psy.ox.ac.uk

Abstract: We present a class of neural networks based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing Principal Component Analysis (PCA). In this paper, this is extended to Exploratory Projection Pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on Non-linear PCA.
This paper is to appear in Biological Cybernetics. ________________________________________ Roland Baddeley Department of Psychology and Physiology South Parks Road University of Oxford Oxford, England 0X1 3UD ________________________________________ From jamie at atlas.ex.ac.uk Fri Apr 7 06:55:28 1995 From: jamie at atlas.ex.ac.uk (jamie@atlas.ex.ac.uk) Date: Fri, 7 Apr 95 11:55:28 +0100 Subject: Temporal Codes In-Reply-To: ml-connectionists-request@EDU.CMU.CS.SRV.TELNET-1's message of Thu, 6 Apr 1995 08:53:35 -0400 Message-ID: <22185.9504071055@sirius.dcs.exeter.ac.uk> Peter Cariani (4/5/95) wrote: >It should be noted that average rates are one dimensional, scalar codes, >whilst temporal codes can support spike trains conveying more than one >independent signal type (multiplexing). I think that this property of >temporal codes has potentially very great implications for the design of >artificial neural networks, if only because a multiplicity of orthogonal >signals allows one to keep different kinds of information from interfering >with each other. I also think this is an important point. In particular, the variable binding problem is precisely the problem of keeping information about one entity from interfering with information about another. For example, keeping the representation of a red square and a blue triangle from being interpreted as a red triangle and a blue square. Another reason for expecting temporal codes to be used for representing such binding information is that a neuron will generally react the same way to a given input regardless of the absolute time at which the input occurs. This property can be used to argue that a temporal synchrony representation (Cariani's temporal code, option 2) of variable bindings inherently implies systematicity. I know of no other code that can be argued to imply such generalization across entities. 
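The binding-by-synchrony idea can be caricatured in a few lines (a toy illustration with made-up phase values, not a model from the literature): feature units belonging to the same object fire in the same phase of a cycle, so the pairings red-square and blue-triangle are carried by timing, while the set of active units alone is ambiguous.

```python
# Each active feature unit fires at the phase (ms within a 10 ms cycle)
# of the object it currently describes; the phase values are arbitrary.
phase = {"red": 2.0, "square": 2.0,       # object 1: a red square
         "blue": 7.0, "triangle": 7.0}    # object 2: a blue triangle

def bound(f1, f2, tol=1.0):
    """Two features are bound iff their units fire in synchrony."""
    return abs(phase[f1] - phase[f2]) <= tol

# The active set {red, blue, square, triangle} is the same for a
# red square + blue triangle as for a red triangle + blue square;
# only the relative timing distinguishes the two scenes.
```

Because a unit's response does not depend on absolute time, the same synchrony mechanism applies unchanged to any new pairing of entities, which is the systematicity property argued for above.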
Of course, an argument for expecting variable binding information to be represented in a temporal code is in no way an argument against expecting other kinds of information to be represented in other codes. These kinds of computational considerations do, however, tell us something about what kind of information to look for in what kind of code.

James Henderson
Department of Computer Science
University of Exeter
Exeter EX4 4PT, UK

From jagota at next2.msci.memst.edu Sat Apr 8 14:09:02 1995 From: jagota at next2.msci.memst.edu (Arun Jagota) Date: Sat, 8 Apr 1995 13:09:02 -0500 Subject: HKP exercises Message-ID: <199504081809.AA06512@next2>

Dear Connectionists: The text "Introduction to the Theory of Neural Computation" by Hertz, Krogh, and Palmer does not come with exercises, so I have compiled some of my own, in connection with a neural nets course I am teaching this term (for the second time from HKP). Professor Palmer, with whom I discussed this, liked the idea of making this list available on Connectionists. If you would like this compilation of "classroom tested" exercises and some computer projects, please send me electronic mail at jagota at next2.msci.memst.edu. The list is available only by e-mail at present so I can keep track of responses. Currently it is a little biased, reflecting my background and interests. To correct this bias and to expand the list, I invite submissions of new material (via e-mail to me). Submissions should pertain to the material as covered in HKP. The list is especially lacking exercises on Chapters SEVEN (Recurrent Networks), EIGHT (Unsupervised Hebbian Learning), and NINE (Unsupervised Competitive Learning). Submissions in LaTeX would be especially convenient for me. The final list, including all "accepted" submissions, will be forwarded also to Professor Palmer, who has indicated he might evolve it into a possibly larger list. All submissions I decide to include will be acknowledged.
Arun Jagota Math Sciences, University of Memphis From edelman at wisdom.weizmann.ac.il Sun Apr 9 09:31:34 1995 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Sun, 9 Apr 1995 13:31:34 GMT Subject: TR available: representation by similarity to prototypes Message-ID: <199504091331.NAA07037@lachesis.wisdom.weizmann.ac.il> FTP-host: eris.wisdom.weizmann.ac.il (132.76.80.53) FTP-filename: /pub/cs-tr-95-11.ps.Z 30 pages, with 23 figures; about 7 MB uncompressed, 1.4 MB compressed ---------------------------------------------------------------------- On Similarity to Prototypes in 3D Object Representation Sharon Duvdevani-Bar and Shimon Edelman Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science A representational scheme under which the ranking between represented dissimilarities is isomorphic to the ranking between the corresponding shape dissimilarities can support perfect shape classification, because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. We discuss the computational requirements of rank-preserving representation, and examine its plausibility within a prototype-based framework of shape vision. ---------------------------------------------------------------------- -Shimon Shimon Edelman, Applied Math & CS, Weizmann Institute http://www.wisdom.weizmann.ac.il/~edelman/shimon.html Cyber Rights Now: Accept No Compromise From jagota at next2.msci.memst.edu Mon Apr 10 12:47:10 1995 From: jagota at next2.msci.memst.edu (Arun Jagota) Date: Mon, 10 Apr 1995 11:47:10 -0500 Subject: HKP followup Message-ID: <199504101647.AA16003@next2> Dear Connectionists: My apologies for a followup post on Connectionists; however I think it useful. So far I have received about 140 requests for HKP exercises. (I anticipated a large response and gave my email address where I don't receive mail from anywhere else.) 
I am glad this effort is of interest to so many people, and thank all who have responded or will. However, I have received only one set of contributions so far. If you have potentially usable HKP-type questions sitting somewhere in your directories, especially on chapters 6 to 9, please do consider sending them to me. I am willing to sift through them, and clean them up if necessary. (Only in English, however, please.) I will send the HKP list to everyone who requested it some time next week (Apr 16-21). Regards,

Arun Jagota Math Sciences, University of Memphis

From pja at barbarian.endicott.ibm.com Mon Apr 10 13:04:26 1995 From: pja at barbarian.endicott.ibm.com (Peter J. Angeline) Date: Mon, 10 Apr 1995 13:04:26 -0400 Subject: Student Travel Assistance to ICGA Message-ID: <9504101704.AA12914@barbarian.endicott.ibm.com>

There is a limited amount of money set aside for student travel assistance to this year's ICGA. Below is the relevant information. I encourage everyone to look at the ICGA95 homepage at URL: http://www.aic.nrl.navy.mil:80/galist/icga95/ for additional details. I've included some information below. -pete

+----------------------------------------------------------------------------+
| Peter J. Angeline, PhD       |                                             |
| Advanced Technologies Dept.  |                                             |
| Loral Federal Systems        |                                             |
| State Route 17C              |    I have nothing to say,                   |
| Mail Drop 0210               |    and I am saying it.                      |
| Owego, NY 13827-3994         |                                             |
| Voice: (607)751-4109         |        - John Cage                          |
| Fax: (607)751-6025           |                                             |
| Email: pja at lfs.loral.com  |                                             |
+----------------------------------------------------------------------------+

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Travel Assistance Information
-----------------------------

The Naval Center for Applied Research in Artificial Intelligence at the Naval Research Laboratory and the ICGA-95 conference committee have provided a limited amount of funds for student travel assistance to ICGA-95.
Travel assistance will be granted to those students demonstrating a need and will be limited to partial compensation for travel expenses to the conference site. No other expenses will be considered.

Important Information
---------------------

To Receive Travel Assistance:

o Make a formal request for travel assistance funds to the conference financial chair. Contact information is below. Email is the preferred method of applying for funds.
o Have your advisor forward a letter (email or fax) to the conference financial chair verifying your current status as a student and certifying that sufficient travel funds are not available. Please have them include their email address for verification.
o Both of these must be sent by May 22, 1995.
o Notification of travel assistance will be sent by 5/29/95.
o Once you receive notification of a travel assistance award, send confirmation to the financial chair that you will attend the conference. Confirmation must be received no later than 6/8/95. Unconfirmed travel grants may be reallocated to other applicants!
o Register for the conference as soon as possible! Registering for the conference early will increase your chances of receiving funds.

Summary of Important Dates
--------------------------

April 5, 1995:     Notifications mailed to authors.
May 22, 1995:      Request for assistance and advisor letter sent to financial chair.
May 29, 1995:      Notification of travel grants sent to applicants.
June 11, 1995:     Last day for early registration.
June 12, 1995:     Confirmation of attendance must be sent to financial chair.
July 15-20, 1995:  Conference Dates.

FAQs
----

Q: My company won't cover my airfare. Can I apply for travel assistance?
o Funds are limited to assisting only students.

Q: When will I receive my travel assistance grant?
o Funds will be distributed at the conference registration desk during conference check-in.

Q: How much money will I get?
o Funds will be allocated based on need and distance traveled to the conference site (Pittsburgh, PA USA), NOT ON COST OF TRANSPORTATION. You should make your travel plans early so you can get the best deal. Travel grants WILL NOT cover the full cost of travel expenses, so that as many students as possible can receive assistance. Q: If I request travel assistance after the deadline, will I still have a chance of receiving funds? o There is a chance, but it depends on how many applicants who met the deadline do not use their travel grants. You will be limited to the amount of assistance those applicants were originally offered. It pays to complete your request early! Q: Can I ask for a waiver of registration fees? o No. Assistance is only for travel costs. ICGA-95 student registration fees are among the lowest for a conference of this size. In a sense, all student registration is already being subsidized by the conference. Watch this space for additional information and updates! If you have any questions, please feel free to contact the financial chair: Peter J. Angeline Voice: (607)751-4109 Fax: (607)751-6025 Email: pja at lfs.loral.com From peg at cs.rochester.edu Mon Apr 10 08:44:14 1995 From: peg at cs.rochester.edu (peg@cs.rochester.edu) Date: Mon, 10 Apr 1995 08:44:14 -0400 Subject: Ballard et al., "Deictic Codes ... Embodiment of Cognition" Message-ID: <199504101244.IAA27809@artery.cs.rochester.edu> Title: Deictic Codes for the Embodiment of Cognition Authors: Dana H. Ballard, Mary M. Hayhoe, and Polly K. Pook ftp.cs.rochester.edu, directory pub/papers/ai http://www.cs.rochester.edu/trs/ai-trs.html filename: 95.NRLTR1.Deictic_codes_for_the_embodiment_of_cognition.ps.gz To describe phenomena that occur at different time scales, computational models of the brain must necessarily incorporate different levels of abstraction. 
We argue that at time scales of approximately one-third of a second, orienting movements of the body play a crucial role in cognition and form a useful computational level, termed the embodiment level. At this level, the constraints of the body determine the nature of cognitive operations, since the natural sequentiality of body movements can be matched to the natural computational economies of sequential decision systems. The way this is done is through a system of implicit reference termed deictic, whereby pointing movements are used to bind objects in the world to cognitive programs. We show how deictic bindings enable the solution of natural tasks and argue that one of the central features of cognition, working memory, can be related to moment-by-moment dispositions of body features such as eye movements and hand movements. From akaho at etl.go.jp Mon Apr 10 23:01:08 1995 From: akaho at etl.go.jp (Shotaro Akaho) Date: Tue, 11 Apr 1995 12:01:08 +0900 Subject: TR available on "Mixture model for image understanding and the EM algorithm" Message-ID: <9504110301.AA12519@etlsu12.etl.go.jp> The following technical report is available via anonymous ftp. FTP-host: etlport.etl.go.jp FTP-filename: /pub/akaho/ETL-TR-95-13E.ps.Z "Mixture model for image understanding and the EM algorithm" Shotaro Akaho Abstract: We present a mixture model that can be applied to the recognition of multiple objects in an image plane. The model is built from submodules of arbitrary shape. Each submodule is a probability density function over data points with scale and shift parameters, and the submodules are combined with mixture weights. We derive an EM (Expectation-Maximization) algorithm to estimate these parameters. We also modify the algorithm for the case in which the data points are restricted to an attention window. 
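[Editorial note: the estimation problem in the abstract above can be sketched in miniature. Below is a hedged 1-D illustration of EM for a two-component mixture whose submodules carry shift (mean) and scale (spread) parameters; the data, initial values, and iteration count are invented for illustration and are not taken from the report.]

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "image": two objects at different shifts and scales.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])

K = 2
mu = np.array([-1.0, 1.0])      # shift parameters (initial guesses)
sigma = np.array([1.0, 1.0])    # scale parameters
pi = np.full(K, 1.0 / K)        # mixture weights

def gauss(x, mu, sigma):
    # Densities of each data point under each submodule: shape (N, K).
    return np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility of each submodule for each point.
    r = pi * gauss(x, mu, sigma)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate shift, scale, and weights from the responsibilities.
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / len(x)
```

With well-separated objects, the shifts converge near the generating values (-2 and 3); the attention-window variant in the report further renormalizes each density over the window, which this sketch omits.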
----------------------------------------------------------------- To retrieve from etlport.etl.go.jp: unix> ftp etlport.etl.go.jp Name (etlport.etl.go.jp:akaho): anonymous Password: (use your email address) ftp> cd pub/akaho ftp> binary ftp> get ETL-TR-95-13E.ps.Z ftp> quit unix> uncompress ETL-TR-95-13E.ps.Z unix> lpr ETL-TR-95-13E.ps -- Shotaro AKAHO (akaho at etl.go.jp) Electrotechnical Laboratory (ETL) Information Science Division, Mathematical Informatics Section 1-1-4 Umezono, Tsukuba-shi, Ibaraki, 305 Japan From tony at hebb.psych.mcgill.ca Tue Apr 11 13:00:48 1995 From: tony at hebb.psych.mcgill.ca (Tony Marley) Date: Tue, 11 Apr 1995 13:00:48 -0400 (EDT) Subject: Postdoctoral Positions with A. A. J. Marley, Department of Psychology, McGill University Message-ID: (I apologize if you receive multiple copies of this announcement. I have mailed it to several different, but overlapping, lists) POSSIBLE POSTDOCTORAL POSITIONS WITH PROFESSOR A. A. J. MARLEY Department of Psychology, McGill University I have funds available for one, possibly two, postdoctoral fellows to begin working with me as soon as mutually arrangeable, in the first instance for 12 months, with the strong possibility of extension for a second 12 months. I am seeking candidates especially in two areas: 1. Mathematical, Simulation, and Experimental Work In Absolute Identification, Categorization, and Comparative Judgment. Recently my colleagues and I have developed and tested "neural" and random walk models in the above areas. We plan to continue this work, and would especially like to further develop the mathematical aspects of the models, and to discover "critical" tests of our basic ideas. A further possible position exists to work with myself and Yves Lacouture, who is at Universite Laval. This latter position is especially suited to a (unilingual or multilingual) French speaker. 2. 
Characterization Theorems and Stochastic Models in the Mathematical Social Sciences This is an interdisciplinary project involving mathematicians, statisticians, and social scientists. We are developing results concerning theories of aggregation, characterization of choice models, entropy approaches in the mathematical social sciences, etc. For each of these positions a strong background in mathematics (mathematical modeling) and/or computer science (computer modeling) is extremely important. If you are interested, please send me a curriculum vitae, statement of research interests, and three letters of reference (preferably all by email or by fax - number below). Tony Marley A. A. J. (Tony) Marley Department of Psychology McGill University 1205 Avenue Dr. Penfield Montreal, Quebec H3Y 2L2 CANADA email: tony at hebb.psych.mcgill.ca tel: (514) 398-6102 fax: (514) 398-4896 From C.Campbell at bristol.ac.uk Tue Apr 11 07:20:20 1995 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Tue, 11 Apr 1995 12:20:20 +0100 (BST) Subject: Fifth Irish Neural Networks Conference Message-ID: <9504111120.AA11426@zeus.bris.ac.uk> Please forward ... FIFTH IRISH NEURAL NETWORK CONFERENCE St. Patrick's College, Maynooth, Ireland September 11-13, 1995 ***FINAL CALL FOR PAPERS*** Papers are solicited for the Fifth Irish Neural Network Conference. They can be in any area of theoretical or applied neural computing including, for example: Learning algorithms Cognitive modelling Neurobiology Natural language processing Vision Signal processing Time series analysis Hardware implementations Selected papers from the conference proceedings will be published in the journal Neural Computing and Applications (Springer International). The conference is the fifth in a series previously held at Queen's University, Belfast and University College, Dublin. An extended abstract of not more than 500 words should be sent to: Dr. John Keating, Re: Neural Networks Conference, Dept. of Computer Science, St. 
Patrick's College, Maynooth, Co. Kildare, IRELAND e-mail: JNKEATING at VAX1.MAY.IE NOTE: If submitting by postal mail please make sure to include your e-mail address. The deadline for receipt of abstracts is 1st May 1995. Authors will be contacted regarding acceptance by 1st June 1995. Full papers will be required by 31st August 1995. ================================================================== FIFTH IRISH NEURAL NETWORKS CONFERENCE REGISTRATION FORM Name: __________________________________________________ Address: __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ e-mail: ______________________ fax: ______________________ REGISTRATION FEE Before August 1, 1995: IR#50 After August 1, 1995: IR#60 Fee enclosed: IR#________ The registration fee covers the cost of the conference proceedings and the session coffee breaks. METHOD OF PAYMENT Payment should be in Irish Pounds in the form of a cheque or banker's draft made payable to INNC'95. =================================================================== FIFTH IRISH NEURAL NETWORKS CONFERENCE ACCOMMODATION FORM Accommodation and meals are available on campus. The rooms are organised into apartments of 6 bedrooms. Each apartment has a bathroom, shower, and a fully equipped dining room/kitchen. The room rate is IR#12 per night (excl. breakfast; breakfast is IR#3 for continental and IR#4 for Full Irish). Name: ___________________________________________________ Address: ___________________________________________________ ___________________________________________________ ___________________________________________________ __________________________________________________ e-mail: ______________________ fax: ______________________ Arrival date: ______________________ Departure date: ______________________ No. 
of nights: ________ Please fill out a separate copy of the accommodation form for each individual requiring accommodation. If you have any queries, contact John Keating at JNKEATING at VAX1.MAY.IE The second day of the conference (Tuesday 12th September) is a half-day and includes an excursion to Newgrange and Dublin during the afternoon. The cost of this excursion is IR#10. I will be going on the excursion on Tues. afternoon yes/no (please delete as appropriate). ================================================================== Return fees with completed registration/ accommodation forms to: Dr John Keating, Re: Neural Networks Conference, Dept. of Computer Science, St. Patrick's College, Maynooth, Co. Kildare, IRELAND Unfortunately, we cannot accept registration or accommodation bookings by e-mail. =================================================================== Fifth Irish Neural Networks Conference - Paper format The format for accepted submissions will be as follows: LENGTH: 8 pages maximum. PAPER size: European A4 MARGINS: 2cms all round PAGE LAYOUT: Title, author(s), affiliation and e-mail address should be centred on the first page. No running heads or page numbers should be included. TEXT: Should be 10pt and preferably Times Roman. From ghosh at pine.ece.utexas.edu Tue Apr 11 17:31:31 1995 From: ghosh at pine.ece.utexas.edu (Joydeep Ghosh) Date: Tue, 11 Apr 1995 16:31:31 -0500 Subject: Papers on Ridge Polynomial Networks and on Generalization Message-ID: <199504112131.QAA24900@pine.ece.utexas.edu> =========================== Paper announcement ======================== The following two papers are available via anonymous ftp: FTP-host: www.lans.ece.utexas.edu (128.83.52.78) filenames: /pub/papers/rpn_paper.ps.Z and /pub/papers/struc_adapt_jann94.ps.Z More conveniently, they can be retrieved from the HOME PAGE of the LAB. FOR ARTIFICIAL NEURAL SYSTEMS (LANS) at Univ. 
of Texas, Austin: http://www.lans.ece.utexas.edu where, under "selected publications", the abstracts of more than 40 papers can be viewed and the corresponding .ps.Z files can be downloaded. ===================================================================== RIDGE POLYNOMIAL NETWORKS (to appear, IEEE Trans. Neural Networks) Yoan Shin and Joydeep Ghosh This paper presents a polynomial connectionist network called RIDGE POLYNOMIAL NETWORK (RPN) that can uniformly approximate any continuous function on a compact set in multi-dimensional input space $Re^{d}$, with arbitrary degree of accuracy. This network provides a more efficient and regular architecture compared to ordinary higher-order feedforward networks while maintaining their fast learning property. The ridge polynomial network is a generalization of the pi-sigma network and uses a special form of ridge polynomials. It is shown that any multivariate polynomial can be represented in this form, and realized by an RPN. Approximation capability of the RPNs is shown by this representation theorem and the Weierstrass polynomial approximation theorem. The RPN provides a natural mechanism for incremental network growth. Simulation results on a surface fitting problem, the classification of high-dimensional data and the realization of a multivariate polynomial function are given to highlight the capability of the network. In particular, a constructive learning algorithm developed for the network is shown to yield smooth generalization and steady learning. ===================================================================== STRUCTURAL ADAPTATION AND GENERALIZATION IN SUPERVISED FEED-FORWARD NETWORKS (Jl. of Artificial Neural Networks, 1(4), 1994, pp. 431-458.) Joydeep Ghosh and Kagan Tumer This work explores diverse techniques for improving the generalization ability of supervised feed-forward neural networks via structural adaptation, and introduces a new network structure with sparse connectivity. 
Pruning methods, which start from a large network and trim it until a satisfactory solution is reached, are studied first. Then, construction methods, which build a network from a simple initial configuration, are presented. A survey of related results from the disciplines of function approximation theory, nonparametric statistical inference and estimation theory leads to methods for principled architecture selection and estimation of prediction error. A network based on sparse connectivity is proposed as an alternative approach to adaptive networks. The generalization ability of this network is improved by partly decoupling the outputs. We perform numerical simulations and provide comparative results for both classification and regression problems to show the generalization abilities of the sparse network. ===========================repeat FTP info ======================== FTP-host: www.lans.ece.utexas.edu (128.83.52.78) filenames: /pub/papers/rpn_paper.ps.Z and /pub/papers/struc_adapt_jann94.ps.Z ************* SORRY, NO HARD COPIES *********** From patrick at magi.ncsl.nist.gov Tue Apr 11 10:35:30 1995 From: patrick at magi.ncsl.nist.gov (Patrick Grother) Date: Tue, 11 Apr 95 10:35:30 EDT Subject: New Very Large NIST OCR Database Message-ID: <9504111435.AA01488@magi.ncsl.nist.gov> +--------------------------+ | NIST Special Database 19 | +--------------------------+ Handprinted Forms and Characters Database Special Database 19 contains NIST's entire corpus of training materials for handprinted document and character recognition. It provides Handprinted Sample Forms from 3600 writers, 810000 character images isolated from their forms, ground truth classifications for those images, reference forms for further data collection, and software utilities for image management and handling. It supersedes Special Databases 3 and 7. 
+ "Final" accumulation of NIST's handprinted sample data + Full page HSF forms from 3600 writers + Separate digit, upper and lower case, and free text fields + over 800000 images with hand checked classifications + Binary images Scanned at 11.8 dots per mm ( 300 dpi ) + Updated CCITT IV Compression Source Code + Database management utilities + The images of Special Database 19 form a superset of the images of two previous releases: Special Databases 3 and 7 which are now discontinued. The database is NIST's largest and probably final release of images intended for handprint document processing and OCR research. The full page images are the default input to the "NIST FORM-BASED HANDPRINT RECOGNITION SYSTEM", a public domain release of end to end page recognition software. Special Database 19 is available as a 5.25 inch CD-ROM in the ISO-9660 format. For sales contact: Standard Reference Data National Institute of Standards and Technology Building 221, Room A323 Gaithersburg, MD 20899 Voice: (301) 975-2208 FAX: (301) 926-0416 email: srdata at enh.nist.gov For technical details contact: Patrick Grother Visual Image Processing Group National Institute of Standards and Technology Building 225, Room A216 Gaithersburg, Maryland 20899 Voice: (301) 975-4157 FAX: (301) 840-1357 email: patrick at magi.ncsl.nist.gov From heckerma at microsoft.com Tue Apr 11 11:51:48 1995 From: heckerma at microsoft.com (David Heckerman) Date: Tue, 11 Apr 95 11:51:48 TZ Subject: Bayesian networks Message-ID: <9504120001.AA08979@netmail2.microsoft.com> A Bayesian network (a.k.a. belief network) is a graphical, modular representation of a joint probability distribution over a set of variables. A Bayesian network is often easy to build directly from domain or expert knowledge and also can be learned from data, making it an excellent representation language in which to combine domain knowledge and data. 
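[Editorial note: the modularity just described can be made concrete with a toy sketch. The rain/sprinkler/wet-grass network below is a standard textbook example, not one of the applications in the CACM issue, and all of its probability values are invented; the point is only that the joint distribution factors into one local conditional table per node, and that inference is summation over that factored joint.]

```python
# Hypothetical three-variable network: Rain -> Sprinkler, {Rain, Sprinkler} -> WetGrass.
# The joint factors as P(R) * P(S | R) * P(W | R, S), one local table per node.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler_given_rain = {True: 0.01, False: 0.4}            # P(S=True | R)
P_wet_given = {(True, True): 0.99, (True, False): 0.90,      # P(W=True | R, S)
               (False, True): 0.90, (False, False): 0.00}

def joint(r, s, w):
    # Evaluate the factored joint at one assignment of (R, S, W).
    ps = P_sprinkler_given_rain[r]
    pw = P_wet_given[(r, s)]
    return (P_rain[r]
            * (ps if s else 1.0 - ps)
            * (pw if w else 1.0 - pw))

def p_rain_given_wet():
    # Inference by enumeration: P(R=True | W=True), summing out the sprinkler.
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    return num / den
```

Enumeration is exponential in general; the tutorials discuss how the graph structure supports both more efficient inference and learning the tables from data.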
The March issue of CACM contains a tutorial on the representation as well as three articles on applications. Also, I've written a tutorial on learning Bayesian networks containing many pointers to the literature. The tutorial (in part) will appear in the forthcoming collection "Advances in Knowledge Discovery and Data Mining" edited by U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy. It can be obtained via anonymous ftp from research.microsoft.com (file /pub/tech-reports/winter94-95/tr-95-06.ps) or via my home page http://www.research.microsoft.com/research/dtg/heckerma/heckerma.html. David From john at dcs.rhbnc.ac.uk Wed Apr 12 12:20:56 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Wed, 12 Apr 95 17:20:56 +0100 Subject: MSc in Computational Intelligence Message-ID: <199504121620.RAA18977@platon.cs.rhbnc.ac.uk> MSc in COMPUTATIONAL INTELLIGENCE at the Computer Science Department Royal Holloway, University of London We offer a new twelve-month MSc in Computational Intelligence covering a wide range of subjects: Neural Computing Inference Systems Probabilistic Reasoning Constraint Networks Simulated Annealing Neurochips and VLSI Equational Reasoning Computer Vision Concurrent Programming Object-Oriented Programming Connectionist Expert Systems Computational Learning Theory Royal Holloway is one of the largest colleges of the University of London and is located on a beautiful wooded campus. 
For further information email: cims at dcs.rhbnc.ac.uk or write to: Course Director, MSc in Computational Intelligence Computer Science Department Royal Holloway, University of London EGHAM, Surrey, TW20 0EX Tel: +44 (0)1784 333421 Fax: +44 (0)1784 443420 From HMSKERK at rulfsw.fsw.LeidenUniv.nl Wed Apr 12 12:07:31 1995 From: HMSKERK at rulfsw.fsw.LeidenUniv.nl (Jan Heemskerk) Date: Wed, 12 Apr 1995 17:07:31 +0100 (MET) Subject: Neural hardware overview Message-ID: <01HP9C37KJ90B7JDXY@rulfsw.LeidenUniv.nl> A preliminary version of the paper "Overview of neural hardware" is now available by anonymous ftp from our ftp-site: ftp.mrc-apu.cam.ac.uk directory name pub/nn/murre filename: neurhard.ps (23 pages) This is a draft version based on Chapter 3 in: Heemskerk, J.N.H. (1995). Neurocomputers for Brain-Style Processing. Design, Implementation and Application. PhD thesis, Unit of Experimental and Theoretical Psychology, Leiden University, The Netherlands. ABSTRACT Neural hardware has undergone rapid development during the last few years. This paper presents an overview of neural hardware projects within industries and academia. It describes digital, analog, and hybrid neurochips and accelerator boards as well as large-scale neurocomputers built from general purpose processors and communication elements. Special attention is given to multiprocessor projects that focus on scalability, flexibility, and adaptivity of the design and thus seem suitable for brain-style (cognitive) processing. The sources used for this overview are taken from journal papers, conference proceedings, data sheets, and ftp-sites, and present an up-to-date overview of current state-of-the-art neural hardware implementations. From mcasey at euclid.ucsd.edu Thu Apr 13 08:14:03 1995 From: mcasey at euclid.ucsd.edu (Mike Casey) Date: Thu, 13 Apr 1995 05:14:03 -0700 (PDT) Subject: TRs on Dynamical Systems and RNNs Available Message-ID: <9504131214.AA18138@euclid> A non-text attachment was scrubbed... 
Name: not available Type: text Size: 3278 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c32e06ad/attachment.ksh From josh at faline.bellcore.com Thu Apr 13 15:27:17 1995 From: josh at faline.bellcore.com (Joshua Alspector) Date: Thu, 13 Apr 1995 15:27:17 -0400 Subject: neural nets in telecom workshop Message-ID: <199504131927.PAA03544@faline.bellcore.com> International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 Organizing Committee General Chair: Josh Alspector, Bellcore Program Chair: Rod Goodman, Caltech Publications Chair: Timothy X Brown, Bellcore Treasurer: Anthony Jayakumar, Bellcore Publicity: Atul Chhabra, NYNEX Lee Giles, NEC Research Institute Local Arrangements: Miklos Boda, Ellemtel Bengt Asker, Ericsson Program Committee: Harald Brandt, Ellemtel Tzi-Dar Chiueh, National Taiwan U Francoise Fogelman, SLIGOS Tony Reeder, British Telecom Larry Jackel, AT&T Bell Laboratories Thomas John, Southwestern Bell Adam Kowalczyk, Telecom Australia S Y Kung, Princeton University Tadashi Sone, NTT INNS Liaison: Bernard Widrow, Stanford University IEEE Liaison: Steve Weinstein, NEC Conference Administrator: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South Street Morristown, NJ 07960, USA voice: (201)829-4993 fax: (201)829-5888 bg1 at faline.bellcore.com Dear Colleague: You are invited to an international workshop on applications of neural networks and other intelligent systems to problems in telecommunications and information networking. This is the second workshop in a series that began in Princeton, New Jersey on October 18-20, 1993. 
Topics include: Network Management Congestion Control Adaptive Equalization Speech Recognition Language ID/Translation Information Filtering Dynamic Routing Software Engineering Fraud Detection Financial and Market Prediction Fault Identification and Prediction Character Recognition Adaptive Control Data Compression This conference will take place at a time of the year when the beautiful city of Stockholm is at its best. The conference will take place in the facilities of the Royal Swedish Academy of Engineering Sciences, right in the middle of Stockholm. There will be several hotels in different categories to choose from in the neighborhood. One evening, there will be a boat tour in the famous archipelago, which will include dinner. We enclose an advance program for the workshop as well as information for registration and hotels. A hard-cover proceedings volume will be available at the workshop. There is further information on the IWANNT home page at: ftp://ftp.bellcore.com/pub/iwannt/iwannt.html I hope to see you at the workshop. Sincerely, Josh Alspector, General Chair -------------------------------------------------------------------------- -------------------------------------------------------------------------- Preliminary Program Monday, May 22, 1995: 7:00 Registration and Coffee Session 1: 8:30 J. Alspector, Welcome 8:45 Invited Speaker: Bernt Ericson, VP Ericsson Research and Technology 9:30 C. Cortes, L. D. Jackel, W-P Chiang, Predicting Failures of Telecommunication Paths: Limits on Learning Machine Accuracy Imposed by Data Quality 10:00 Break 10:30 C. S. Hood, C. Ji, An Intelligent Monitoring Hierarchy for Network Management 11:00 A. Holst, A. Lansner, A Higher Order Bayesian Neural Network for Classification and Diagnosis 11:30 T. Sone, A Strong Combination of Neural Networks and Deep Reasoning in Fault Diagnosis 12:00 J. Connor, L. Brothers, J. 
Alspector, Neural Network Detection of Fraudulent Calling Card Patterns 12:30 Lunch Session 2: 13:30 D. S. Reay, Non-Linear Channel Equalisation Using Associative Memory Neural Networks 14:00 A. Jayakumar, J. Alspector, Experimental Analog Neural Network Based Decision Feedback Equalizer for Digital Mobile Radio 14:30 M. Junius, O. Kennemann, Intelligent Techniques for the GSM Handover Process 15:00 Break 15:30 P. Campbell, H. Ferrá, A. Kowalczyk, C. Leckie, P. Sember, Neural Networks in Real Time Decision Making 16:00 P. Chardaire, A. Kapsalis, J. W. Mann, V. J. Rayward-Smith, G. D. Smith, Applications of Genetic Algorithms in Telecommunications 16:30 S. Bengio, F. Fessant, D. Collobert, A Connectionist System for Medium-Term Horizon Time Series Prediction Tuesday, May 23, 1995: 8:00 Coffee and Registration Session 3: 8:30 Invited Speaker: Martin Hyndman, Derwent Information, Neural Network Patenting 9:00 B. de Vries, C. W. Che, R. Crane, J. Flanagan, Q. Lin, J. Pearson, Neural Network Speech Enhancement for Noise Robust Speech Recognition 9:30 Break 10:00 E. Barnard, R. Cole, M. Fanty, P. Vermeulen, Real-World Speech Recognition with Neural Networks 10:30 A. K. Chhabra, V. Misra, Experiments with Statistical Connectionist Methods and Hidden Markov Models for Recognition of Text in Telephone Company Drawings 11:00 R. A. Bustos, T. D. Gedeon, Learning Synonyms and Related Concepts in Document Collections 11:30 N. Karunanithi, J. Alspector, A Feature-Based Neural Network Movie Selection Approach 12:00 H. Brandt, ATM Basics Tutorial 12:30 Lunch: Tuesday PM 13:30 Poster Session: A. Artés-Rodríguez, F. González-Serrano, A. Figueiras-Vidal, L. Weruaga-Prieto, Compensation of Bandpass Nonlinearities by Look-Up-Tables and CMAC P. Barson, N. Davey, S. Field, R. Frank, D. S. W. Tansley, Dynamic Competitive Learning Applied to the Clone Detection Problem R. Battiti, A. Sartori, G. Tecchiolli, P. Tonella, A. Zorat, Neural Compression: An Integrated Application to EEG Signals E. 
Bayro-Corrochano, R. Espinoza-Soliz, Neural Network Based Approach for External Telephone Network Management M. Berger, Fast Channel Assignment in Cellular Radio Systems M. J. Bradley, P. Mars, Analysis of Recurrent Neural Networks as Digital Communication Channel Equalizer T. Brown, A Technique for Mapping Optimization Solutions into Hardware M. Collobert, D. Collobert, A Neural System to Detect Faulty Components on Complex Boards in Digital Switches F. Comellas, J. Ozón, Graph Coloring Algorithms for Assignment Problems in Radio Networks M. Dixon, M. Bellgard, G. R. Cole, A Neural Network Algorithm to Solve the Routing Problem in Communication Networks A. P. Engelbrecht, I. Cloete, Dimensioning of Telephone Networks Using a Neural Network as Traffic Distribution Approximator A. D. Estrella, E. Casilari, A. Jurado, F. Sandoval, ATM Traffic Neural Control: Multiservice Call Admission and Policing Function S. Fredrickson, L. Tarassenko, Text-Independent Speaker Recognition Using Radial Basis Functions N. Kasabov, Hybrid Environments for Building Comprehensive AI and the Task of Speech Recognition K. Kohara, Selective Presentation Learning for Forecasting by Neural Networks H. C. Lau, K. Y. Szeto, K. Y. M. Wong, D. Y. Yeung, A Hybrid Expert System for Error Message Classification F. Mekuria, T. Fjällbrant, Neural Networks for Efficient Adaptive Vector Quantization of Signals A. F. Nejad, T. D. Gedeon, Analyser Neural Networks: An Empirical Study in Revealing Regularities of Complex Systems A. Varma, R. Antonucci, A Neural-Network Controller for Scheduling Packet Transmissions in a Crossbar Switch M. B. Zaremba, K.-Q. Liao, G. Chan, M. Gaudreau, Link Bandwidth Allocation in Multiservice Networks Using Neural Technology 16:30 Boat Tour and Dinner Wednesday, May 24, 1995: 8:00 Coffee and Registration Session 4: 8:30 Invited Speaker: Per Roland, Karolinska Institute, The Real Neural Network 9:30 S. Field, N. Davey, R. 
Frank, A Complexity Analysis of Telecommunications Software Using Neural Nets 10:00 Break 10:30 T-D Chiueh, L-K Bu, Theory and Implementation of an Analog Network that Solves the Shortest Path Problem 11:00 E. Nordström, J. Carlström, A Reinforcement Learning Scheme for Adaptive Link Allocation in ATM Networks 12:00 W. K. F. Lor, K. Y. M. Wong, Decentralized Neural Dynamic Routing in Circuit-Switched Networks 12:30 Lunch Session 5: 13:30 A. Garcia-Lopera, A. Ariza Quintana, F. Sandoval Hernandez, Modular Neural Control of Buffered Banyan Networks 14:00 A. Murgu, Adaptive Flow Control in Multistage Communications Networks Based on a Sliding Window Learning Algorithm 14:30 T. Brown, A Technique for Classifying Network States 15:00 Break 15:30 R. M. Goodman, B. E. Ambrose, Learning Telephone Network Trunk Reservation Congestion Control Using Neural Networks 16:00 A. Faragó, M. Boda, H. Brandt, T. Henk, T. Trn, J. Bíró, Virtual Lookahead - a New Approach to Train Neural Nets for Solving On-Line Decision Problems 16:30 O. Gällmo, L. Asplund, Reinforcement Learning by Construction of Hypothetical Targets 17:00 Adjourn ------------------------------------------------------------------------ ------------------------------------------------------------------------ Site The conference will be held at the IVA or Royal Swedish Academy of Engineering Sciences. The location is a mixture of old and new. The conference will take place in modern facilities built in 1983, while the lunches are held in a beautiful dining room from the turn of the century. IVA is situated at Grev Turegatan 14 (Count Ture's street) which is very central. Close by is Sturegallerian, a fine shopping center, located in a number of buildings from the early 20th century. A few hundred meters' walk will take you to Nybroviken, where you may take a ferry to the big outdoor museum Skansen. 
The same distance in the other direction will bring you to Hamngatan and Sergels torg, right in the middle of the shopping district. How to Get There From willicki at cs.aston.ac.uk Thu Apr 13 13:10:46 1995 From: willicki at cs.aston.ac.uk (Chris Williams) Date: Thu, 13 Apr 1995 18:10:46 +0100 Subject: TR available on "Mixture model for image understanding and the EM algorithm" Message-ID: <21963.9504131710@sun.aston.ac.uk> Re: the paper "Mixture model for image understanding and the EM algorithm" recently announced by Shotaro Akaho. I believe the idea of using a mixture model parameterized by scale and translation parameters is very similar to the "constrained mixture models" we have been using for character recognition for some years. Basically, the idea is to create a template out of a number of Gaussians; in the character recognition case the Gaussians will be spaced out along the stroke; each one is like a spray-can ink generator. This template can be scaled or translated by applying the transformation to each of the Gaussian centres. In fact our model went further than this in that it allowed deformable templates, so that the Gaussians could be moved away from their "home locations" in the object-based frame, at a cost. We also allowed a full 2x2 affine transformation plus translation rather than just translation and scaling, and used a "noise model" to reject outlier/noise data points. We used a method based on the EM algorithm for fitting these templates to data. For the non-deformable case there is (as Akaho points out) a direct EM algorithm. This is mentioned in Appendix B of my thesis. The fitting algorithm converged to the desired solution (i.e. one which looks correct -- this is the advantage of working in 2d :-)) in around 99% of the cases when only a single character was present. We have run some experiments with two templates and two objects and found that we only got convergence to the desired solution if the starting point was rather close to it. 
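[Editorial note: the "direct EM algorithm" for the non-deformable case mentioned above admits a compact sketch. The following is purely illustrative (translation only, equal isotropic variances, uniform weights; the offsets, noise level, and initialisation are invented, not taken from the papers discussed): a template of Gaussian "ink generators" at fixed offsets is fitted to 2-D ink points, with a closed-form M-step for the translation.]

```python
import numpy as np

rng = np.random.default_rng(1)
# A toy "stroke" template: K Gaussian ink generators at fixed offsets in the
# object-based frame (the offsets are arbitrary, for illustration only).
offsets = np.stack([np.linspace(0.0, 4.0, 5), np.linspace(0.0, 2.0, 5)], axis=1)
sigma = 0.3                                    # common isotropic spray-can width

# Synthetic ink: the template placed at an unknown translation.
true_t = np.array([5.0, -3.0])
which = rng.integers(0, len(offsets), size=400)
X = true_t + offsets[which] + rng.normal(0.0, sigma, size=(400, 2))

# Moment-matching initialisation (EM only converges from nearby starts,
# as the experiments described above suggest).
t = X.mean(axis=0) - offsets.mean(axis=0)
for _ in range(30):
    # E-step: responsibility of each template Gaussian for each ink point.
    d2 = ((X[:, None, :] - t - offsets[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    logr = -0.5 * d2 / sigma**2
    logr -= logr.max(axis=1, keepdims=True)    # guard against underflow
    r = np.exp(logr)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form translation = mean of back-projected points.
    t = (r[:, :, None] * (X[:, None, :] - offsets[None, :, :])).sum(axis=(0, 1)) / len(X)
```

Adding scale or a full affine transform enlarges the M-step to a small least-squares problem; the deformable case additionally re-estimates the per-Gaussian home locations under a deformation penalty.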
I should also comment that Eric Mjolsness and his colleagues have been doing some similar work, although they have used an explicit match-matrix to encode the correspondence between datapoints and model points; the mixture model can be obtained by integrating out one of the row and column constraints. [ref: e.g. Chien-Ping Lu and Eric Mjolsness, NIPS 6, 985-992; also NIPS 7 (forthcoming), and earlier work back to TR YALEU-DCS-TR-854 in 1990]

Our refs:

[early paper]

@incollection(hinton-williams-revow-92,
  author    = "Hinton, G.~E. and Williams, C.~K.~I. and Revow, M.~D.",
  title     = "Adaptive elastic models for hand-printed character recognition",
  editor    = "J. E. Moody and S. J. Hanson and R. P. Lippmann",
  booktitle = "Advances in Neural Information Processing Systems 4",
  year      = "1992",
  publisher = "Morgan Kaufmann",
  address   = "San Mateo, CA")

[up-to-date work]

* a paper submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence in 1994: pami.ps.Z (36 pages, 0.3 Mb)
* my PhD thesis: thesis.ps.Z (95 pages, 0.6 Mb)

both available from the Toronto ftp server:

unix> ftp ftp.cs.toronto.edu   (or 128.100.3.6, or 128.100.1.105)
      (log in as "anonymous", e-mail address as password)
ftp> binary
ftp> cd pub/ckiw
ftp> get thesis.ps.Z
ftp> get pami.ps.Z
ftp> quit

Regards

Chris Williams
c.k.i.williams at aston.ac.uk
Department of Computer Science and Applied Maths
Aston University, Birmingham B4 7ET, England
tel: +44 121 359 3621 x 4382  fax: +44 121 333 6215

From terry at salk.edu Thu Apr 13 19:06:11 1995 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 13 Apr 95 16:06:11 PDT Subject: Neural Comp 7:3 - Abstracts on WWW Message-ID: <9504132306.AA13731@salk.edu> Neural Computation Abstracts are now available on WWW: URL: http://www-mitpress.mit.edu/

-----

NEURAL COMPUTATION May 1995 Volume 7 Number 3

Review:
Models of orientation and ocular dominance columns in the visual cortex: A critical comparison
E. Erwin, K. Obermayer and K.
Schulten

Letters:

How precise is neuronal synchronization?
Peter König, Andreas K. Engel, Pieter R. Roelfsema and Wolf Singer

Quantitative analysis of electrotonic structure and membrane properties of NMDA-activated lamprey spinal neurons
C. R. Murphey, L. E. Moore and J. T. Buchanan

Reduced representation by neural networks with restricted receptive fields
Marco Idiart, Barry Berk and L. F. Abbott

Stochastic single neurons
Toru Ohira and Jack D. Cowan

Memory recall by quasi-fixed-point attractors in oscillator neural networks
Tomoki Fukai and Masatoshi Shiino

Learning population codes by minimizing description length
Richard S. Zemel and Geoffrey Hinton

Competition and multiple cause models
Peter Dayan and Richard S. Zemel

Bayesian self-organization driven by prior probability distributions
Alan L. Yuille, Stelios M. Smirnakis and Lei Xu

Adaptive voting rules for k-NN classifiers
R. Rovatti, R. Ragazzoni, Zs. M. Kovàcs and R. Guerrieri

Regularisation in the selection of radial basis function centres
Mark J. L. Orr

Bootstrapping confidence intervals for clinical input variable effects in a network trained to identify the presence of acute myocardial infarction
William G. Baxt and Halbert White

-----

SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues)

______ $40 Student and Retired
______ $68 Individual
______ $180 Institution

Add $22 for postage and handling outside USA (+7% GST for Canada). Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each to individuals; add $5 for postage per issue outside USA (+7% GST for Canada).

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From oliensis at research.nj.nec.com Fri Apr 14 14:38:31 1995 From: oliensis at research.nj.nec.com (John Oliensis) Date: Fri, 14 Apr 1995 14:38:31 -0400 Subject: NECI Vision Workshop: www proceedings Message-ID: <199504141838.OAA01131@iris63> NECI VISION WORKSHOP FEB. 27 - MAR. 10, 1995 NECI Research Institute 4 Independence Way Princeton, NJ 08540 DESCRIPTION The NECI Vision Workshop brought together vision psychologists and computer vision researchers for a two week period to exchange ideas. The meeting was oriented toward discussion with a relaxed schedule of presentations. Foci of discussion included object recognition, recovery of structure and motion, subjective contours, perceptual inference, and low level vision. ATTENDEES Bill Bialek (NECI), Heinrich Bulthoff (Max-Planck), Brian Burns (Teleos), Jacob Feldman (Rutgers), Ingemar Cox (NECI), David Forsyth (Berkeley), Jonas Garding (KTH, Sweden), Richard Hartley (GE), David Jacobs (NECI), Allan Jepson (U. of Toronto), Dan Kersten, (U. of Minnesota), David Knill (U. of Pennsylvania), Tony Lindeberg (KTH, Sweden), Mike Langer (McGill), Zili Liu (NECI), Larry Maloney (NYU), Steve Maybank (GEC/U. of Oxford), John Oliensis (NECI), Pietro Perona (Cal Tech), Jean Ponce (U. of Illinois), Harpreet Sawhney (IBM), Bob Shapley (NYU), Stefano Soatto (Cal Tech), Mike Tarr (Yale), Shimon Ullman (Weizmann), Bill Warren (Brown), Lance Williams (NECI), Alan Yuille (Harvard), Steve Zucker (McGill). 
PROCEEDINGS

The www "proceedings" for the NECI Vision Workshop is available at: http://www.neci.nj.nec.com/homepages/oliensis/NECI_Vision_Workshop.html

From njm at cupido.inesc.pt Sun Apr 16 08:32:51 1995 From: njm at cupido.inesc.pt (njm@cupido.inesc.pt) Date: Sun, 16 Apr 95 13:32:51 +0100 Subject: 2nd CFP: Neural Nets & Genetic Algorithms Workshop Message-ID: <9504161232.AA26132@cupido.inesc.pt>

________________________________________________________
--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
NEURAL NETWORKS AND GENETIC ALGORITHMS
--------------------------------------------------------
A subsection of the:
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
________________________________________________________
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)
--------------------------------------------
REMEMBER: SUBMISSION DEADLINE: May 2, 1995
--------------------------------------------

INTRODUCTION
~~~~~~~~~~~~
The workshop on Fuzzy Logic and Neural Networks in Engineering, running during the Seventh Portuguese Conference on Artificial Intelligence (EPIA'95), includes a subsection on Neural Networks and Genetic Algorithms. This subsection of the workshop will be devoted to models that simulate human reasoning and behaviour using combinations of genetic algorithms (GAs) and neural networks (NNs). Recently, disciplines such as AI, engineering, robotics and artificial life have seen rising interest in hybrid methodologies combining neural networks and genetic algorithms, which enable the modelling of more realistic, flexible and adaptive behaviour and learning. So far such hybrid models have proved very promising for investigating and characterizing the nature of complex reasoning and control behaviour.
Participants are expected to base their contributions on current research, and the workshop emphasis will be on wide-ranging discussions of the feasibility and application of such hybrid models. This part of the workshop is intended to promote the exchange of ideas and approaches in these areas and for these methods, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations.

COORDINATION OF THIS SUBSECTION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mukesh Patel
Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH)
P.O. Box 1385, GR 711 10 Heraklion, Crete, Greece
Voice: +30 (81) 39 16 35
Fax: +30 (81) 39 16 01/09
Email: mukesh at ics.forth.gr

Submission requirements, attendance and deadline information are the same as for the workshop, whose Call for Papers is enclosed below. Further inquiries may be addressed either to the subsection coordinator or to the workshop address.

=============================================================================
=============================================================================
=============================================================================

--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)

INTRODUCTION
~~~~~~~~~~~~
The Seventh Portuguese Conference on Artificial Intelligence (EPIA'95) will be held at Funchal, Madeira Island, Portugal, on October 3-6, 1995. As in previous editions ('89, '91, and '93), EPIA'95 will be run as an international conference, English being the official language. The scientific program includes tutorials, invited lectures, demonstrations, and paper presentations.
The Conference will include three parallel workshops, on Expert Systems, Fuzzy Logic and Neural Networks, and Applications of A.I. to Robotics and Vision Systems. These workshops will run simultaneously (see below) and consist of invited talks, panels, paper presentations and poster sessions. The Fuzzy Logic and Neural Networks in Engineering workshop may last one, two or three days, depending on the quantity and quality of submissions.

FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The search for systems that simulate human reasoning under uncertainty has created a strong research community. In particular, Fuzzy Logic and Neural Networks have been a source of synergies among researchers of both areas, aiming to develop theoretical approaches and applications for characterizing and experimenting with such kinds of reasoning. The workshop is intended to promote the exchange of ideas and approaches in those areas, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations. The organization committee invites you to participate, submitting papers together with videos, demonstrations or running systems to illustrate relevant issues and applications.

EXHIBITIONS
~~~~~~~~~~~
In order to illustrate and support the theoretical presentations, the organization will provide adequate conditions (space and facilities) for exhibitions regarding the three workshops mentioned. These exhibitions can include running software systems (several platforms are available), video presentations (PAL-G VHS), robotic systems (such as robotic insects and autonomous robots), and posters. On the one hand, this space will allow the presentation of results and real-world applications of the research developed by our community; on the other, it will serve as a source of motivation to students and young researchers.
SUBMISSION REQUIREMENTS
~~~~~~~~~~~~~~~~~~~~~~~
Authors are asked to submit five (5) copies of their papers to the submissions address by May 2, 1995. Notification of acceptance or rejection will be mailed to the first (or designated) author on June 5, 1995, and camera-ready copies for inclusion in the workshop proceedings will be due on July 3, 1995. Each copy of a submitted paper should include a separate title page giving the names, addresses, phone numbers and email addresses (where available) of all authors, and a list of keywords identifying the subject area of the paper. Papers should be a maximum of 16 pages, printed on A4 paper in 12-point type with a maximum of 38 lines per page and 75 characters per line (corresponding to LaTeX article style, 12 pt). Double-sided submissions are preferred. Electronic or faxed submissions will not be accepted. Further inquiries should be addressed to the inquiries address.

ATTENDANCE
~~~~~~~~~~
Each workshop will be limited to at most fifty people. In addition to presenters of papers and posters, there will be space for a limited number of other participants chosen on the basis of a one- to two-page research summary, which should include a list of relevant publications, along with an electronic mail address if possible. A set of working notes will be available prior to the commencement of the workshops. Registration information will be available in June 1995. Please write for registration information to the inquiries address.

DEADLINES
~~~~~~~~~
Papers submission: ................. May 2, 1995
Notification of acceptance: ........ June 5, 1995
Camera Ready Copies Due: ...........
July 3, 1995

PROGRAM-CHAIR
~~~~~~~~~~~~~
Jose Tome (IST, Portugal)

ORGANIZING-CHAIR
~~~~~~~~~~~~~~~~
Luis Custodio (IST, Portugal)

SUBMISSION AND INQUIRIES ADDRESS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EPIA'95
Fuzzy Logic & Neural Networks Workshop
INESC, Apartado 13069
1000 Lisboa
Portugal
Voice: +351 (1) 310-0325
Fax: +351 (1) 525843
Email: epia95-FLNNWorkshop at inesc.pt

PLANNING TO ATTEND
~~~~~~~~~~~~~~~~~~
People planning to submit a paper and/or to attend the workshop are asked to complete and return the following form (by fax or email) to the inquiries address, stating their intention. This will help the workshop organizers to estimate the facilities needed and will enable all interested people to receive updated information.

+----------------------------------------------------------------+
| REGISTRATION OF INTEREST |
| (Fuzzy Logic & Neural Networks Workshop) |
| |
| Title . . . . . Name . . . . . . . . . . . . . . . . . . . . |
| Institution . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Country . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . . |
| Email address. . . . . . . . . . . . . . . . . . . . . . . . . |
| I intend to submit a paper (yes/no). . . . . . . . . . . . . . |
| I intend to participate only (yes/no). . . . . . . . . . . . . |
| I will travel with ... guests |
+----------------------------------------------------------------+

From JCONNOR at lbs.lon.ac.uk Mon Apr 17 20:46:55 1995 From: JCONNOR at lbs.lon.ac.uk (Jerry Connor) Date: Mon, 17 Apr 1995 20:46:55 BST Subject: NNCM-95, SECOND ANNOUNCEMENT Message-ID: SECOND ANNOUNCEMENT AND CALL FOR PAPERS NNCM-95 Third International Conference On NEURAL NETWORKS IN THE CAPITAL MARKETS Thursday-Friday, October 12-13, 1995 with tutorials on Wednesday, October 11, 1995.
The Langham Hilton, London, England. (Note deadline for camera-ready full papers, to be published in hardback conference proceedings.) Neural networks are now emerging as a major modeling methodology in financial engineering. Because of the overwhelming interest in the NNCM workshops held in London in 1993 and Pasadena in 1994, the third annual NNCM conference will be held on October 12-13, 1995, in London. NNCM*95 will take a critical look at state-of-the-art neural network applications in finance. This is a research meeting where original, high-quality contributions to the field are presented and discussed. In addition, a day of introductory tutorials (Wednesday, October 11) will be included to familiarise audiences of different backgrounds with financial engineering, neural networks, and the mathematical aspects of the field.

Application areas include:
+ Bond and stock valuation and trading
+ Foreign exchange rate prediction and trading
+ Commodity price forecasting
+ Risk management
+ Tactical asset allocation
+ Portfolio management
+ Option pricing
+ Trading strategies

Technical areas include, but are not limited to:
+ Neural networks
+ Nonparametric statistics
+ Econometrics
+ Pattern recognition
+ Time series analysis
+ Model selection
+ Signal processing and control
+ Genetic and evolutionary algorithms
+ Fuzzy systems
+ Expert systems
+ Machine learning

Instructions for Authors

Authors who wish to present a paper should mail a copy of their extended abstract (4 pages, single-sided, single-spaced) typed on A4 paper to the secretariat no later than May 31, 1995. Submissions will be refereed by no fewer than four referees, and authors will be notified of acceptance by 14 June 1995. Authors of accepted papers will be mailed guidelines for producing final camera-ready papers, which are due 12 July 1995. The accepted papers will be published in a hardback conference proceedings by World Scientific.
Separate registration is required using the attached registration form. Authors are encouraged to submit abstracts as soon as possible.

Registration

To register, complete the registration form and mail it to the secretariat. Please note that attendance is limited and will be allocated on a "first-come, first-served" basis.

Secretariat: For further information, please contact the NNCM-95 secretariat: Ms Busola Oguntula, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK e-mail: boguntula at lbs.lon.ac.uk phone: (+44) (0171) 262 50 50 fax: (+44) (0171) 724 78 75

Location: The main conference will be held at The Langham Hilton, which is situated near Regent's Park and is a short walk from Baker Street Underground Station. Further directions, including a map, will be sent to all registrants.

Programme Committee
Dr A. Refenes, London Business School (Chairman)
Dr Y. Abu-Mostafa, Caltech
Dr A. Atiya, Cairo University
Dr N. Biggs, London School of Economics
Dr D. Bunn, London Business School
Dr M. Jabri, University of Sydney
Dr B. LeBaron, University of Wisconsin
Dr A. Lo, MIT Sloan School
Dr J. Moody, Oregon Graduate Institute
Dr C. Pedreira, Catholic University PUC-Rio
Dr M. Steiner, Universitaet Munster
Dr A. Timmermann, University of California, San Diego
Dr A. Weigend, University of Colorado
Dr H.
White, University of California, San Diego Hotel Accommodation: Convenient hotels include: The Langham Hilton 1 Portland Place London W1N 4JA Tel: (+44) (0171) 636 10 00 Fax: (+44) (0171) 323 23 40 Sherlock Holmes Hotel 108 Baker Street, London NW1 1LB Tel: (+44) (0171) 486 61 61 Fax: (+44) (0171) 486 08 84 The White House Hotel Albany St., Regent's Park, London NW1 Tel: (+44) (0171) 387 12 00 Fax: (+44) (0171) 388 00 91 --------------------------Registration Form -------------------------- -- NNCM-95 Registration Form Third International Conference on Neural Networks in the Capital Markets October 12-13 1995 Name:____________________________________________________ Affiliation:_____________________________________________ Mailing Address: ________________________________________ _________________________________________________________ Telephone:_______________________________________________ ****Please circle the applicable fees and write the total below**** Main Conference (October 12-13): (British Pounds) Registration fee 450 Discounted fee for academicians 250 (letter on university letterhead required) Discounted fee for full-time students 100 (letter from registrar or faculty advisor required) Tutorials (October 11): You must be registered for the main conference in order to register for the tutorials. 
(British Pounds) Morning Session Only 100 Afternoon Session Only 100 Both Sessions 150 Full-time students 50 (letter from registrar or faculty advisor required) TOTAL: _________ Payment may be made by: (please tick) ____ Check payable to London Business School ____ VISA ____Access ____American Express Card Number:___________________________________ From terry at salk.edu Mon Apr 17 18:25:43 1995 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 17 Apr 95 15:25:43 PDT Subject: Telluride DEADLINE April 24 Message-ID: <9504172225.AA04122@salk.edu> FINAL CALL FOR PARTICIPATION IN A WORKSHOP ON "NEUROMORPHIC ENGINEERING" JUNE 25 - JULY 8, 1995 TELLURIDE, COLORADO DEADLINE for application is April 24, 1995. Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one two-week workshop that will be held in Telluride, Colorado in 1995. The first Telluride Workshop on Neuromorphic Engineering was held in July, 1994 and was sponsored by the NSF. A summary of the 94 workshop and a list of participants is available over MOSAIC: http://www.klab.caltech.edu/~timmer/telluride.html OR http://www.salk.edu/~bryan/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new discipline based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on ``active" participation, with demonstration systems and hands-on-experience for all participants. 
Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this two week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. The focus of the first week is on exploring neuromorphic systems through the medium of analog VLSI and will be organized by Rodney Douglas (Oxford) and Misha Mahowald (Oxford). Sessions will cover methods for the design and fabrication of multi-chip neuromorphic systems. This framework is suitable both for creating analogs of specific biological systems, which can serve as a modeling environment for biologists, and as a tool for engineers to create cooperative circuits based on biological principles. The workshop will provide the community with a common formal language for describing neuromorphic systems. Equipment will be available for participants to evaluate existing neuromorphic chips (including silicon retina, silicon neurons, oculomotor system). The second week of the course will be on vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Sessions will cover issues of sensory-motor integration in the mammalian brain. Special emphasis will be placed on understanding neural algorithms used by the brain which can provide insights into constructing electrical circuits which can accomplish similar tasks. 
Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. These researchers will also be asked to bring their own demonstrations, classroom experiments, and software for computer models. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe, which in turn provides drive signals to the three motors of the head. FORMAT: Time will be divided between lectures, practical labs, and interest group meetings. There will be three lectures in the morning that cover issues that are important to the community in general. In general, one lecture will be neurobiological, one computational, and one on analog VLSI. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Participants are encouraged to bring their own material to share with others. After dinner, participants will get together more informally to hear lectures and demonstrations. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 4 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on.
Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrators to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware problems. We will have a network of SUN workstations running UNIX and PCs running Windows and Linux. Up to $500 will be reimbursed for domestic travel and all housing expenses will be provided. Participants are expected to pay for food and incidental expenses and are expected to stay for the duration of this two-week workshop. A limited number of travel awards will be available for international travel. PARTIAL LIST OF INVITED LECTURERS: Richard Andersen, Caltech. Chris Atkeson, Georgia Tech. Dana Ballard, Rochester. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. Steve Deiss, Applied NeuroDynamics, San Diego. Chris Diorio, Caltech. Rodney Douglas, Oxford and Zurich. John Elias, Delaware University. Mary Hayhoe, Rochester. Christof Koch, Caltech. Shih-Chii Liu, Caltech and Rockwell. Jack Loomis, UC Santa Barbara. Jonathan Mills, Indiana University. Misha Mahowald, Oxford and Zurich. Mark Tilden, Los Alamos: multi-legged robots. Terry Sejnowski, Salk Institute and UC San Diego. Mona Zaghloul, George Washington University. HOW TO APPLY: The deadline for receipt of applications is April 24, 1995. Applicants should be at the level of graduate students or above (i.e. postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories).
We actively encourage qualified women and minority candidates to apply.

Applications should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One-page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

Complete applications should be sent to: Prof. Terrence Sejnowski, The Salk Institute, 10010 North Torrey Pines Road, San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified during the week of May 7, 1995. ----- From l.s.smith at cs.stir.ac.uk Tue Apr 18 11:29:46 1995 From: l.s.smith at cs.stir.ac.uk (Dr L S Smith (Staff)) Date: Tue, 18 Apr 1995 16:29:46 +0100 Subject: New book: Neural Computation and Psychology Message-ID: <199504181529.QAA18251@katrine.cs.stir.ac.uk> Newly published book available. Order it from your bookshop! Neural Computation and Psychology eds: Leslie S. Smith, Peter J.B. Hancock. Proceedings of the 3rd Neural Computation and Psychology Workshop (NCPW3), Stirling, Scotland, 31 August - 2 September 1994 Springer-Verlag Workshops in Computing Series: published in collaboration with the British Computer Society. ISBN 3-540-19948-9 Published April 1995 Contents: Preface. Cognition. Symbolic and subsymbolic approaches to cognition. David Willshaw (Centre for Cognitive Science, University of Edinburgh). Mapping across domains without feedback: a neural network model of transfer of implicit knowledge. Zoltan Dienes, Gerry T.M. Altmann, Shi-Ji Gao (Lab of Experimental Psychology, University of Sussex). Modelling reaction times. John A. Bullinaria (Dept of Psychology, University of Edinburgh). Chunking: an interpretation bottleneck. Jon Slack (Department of Psychology, University of Kent). Learning, relearning and recall for two strengths of learning in neural networks 'aged' by simulated dendritic attrition. R.
Cartwright, G.W. Humphreys (School of Psychology, University of Birmingham). Perception. Learning invariances via spatio-temporal constraints. James V. Stone (Cognitive and Computing Sciences, University of Sussex). Topographic map formation as statistical inference. Roland Baddeley (Dept of Psychology, University of Oxford). Edge enhancement and exploratory projection pursuit. Colin Fyfe, Roland Baddeley (Dept of Computer Science, University of Strathclyde; Dept of Psychology, University of Oxford). The "perceptual magnet" effect: a model based on self-organizing feature maps. M. Herrmann, H.-U. Bauer, R. Der (Nordita, Copenhagen; Inst. f. Theor. Physik, Universitaet Frankfurt; and Inst. f. Informatik, Universitaet Leipzig). How local cortical processors that maximize coherent variation could lay foundations for representation proper. W.A. Phillips, Jim Kay and D. Smyth (Dept of Psychology, University of Stirling; SASS, Aberdeen; Dept of Psychology, University of Stirling). Audition and Vision. Using complementary streams in a model that learns representations of abstract diatonic pitch. Niall Griffith (Dept of Computer Science, University of Exeter). Data-driven sound interpretation: its application to voiced sounds. Leslie S. Smith (Dept of Computing Science, University of Stirling). Computer simulation of gestalt auditory grouping by frequency proximity. Michael W. Beauvois, Ray Meddis (IRCAM, Paris, and Dept of Human Sciences, Loughborough University of Technology). Mechanisms of visual search: an implementation of guided search. K.J. Smith, G.W. Humphreys (School of Psychology, University of Birmingham). Categorical perception as an acquired phenomenon: what are the implications? James M. Beale, Frank C. Keil (Dept of Psychology, Cornell University). Sequence Learning. A computational account of phonologically mediated free recall. Peter J. Lovatt, Dimitrios Bairaktaris (CCCN, Dept of Computing Science, University of Stirling).
Interactions between knowledge sources in a dual-route connectionist model of spelling. David W. Glasspool, George Houghton, Tim Shallice (University College, London). Author Index.

____________________________________________________
Dr Leslie S. Smith
Dept of Computing and Mathematics, Univ of Stirling
Stirling FK9 4LA, Scotland
lss at cs.stir.ac.uk (NeXTmail welcome)
Tel (44) 1786 467435  Fax (44) 1786 464551
www http://www.cs.stir.ac.uk/~lss/

From pierre.demartines at csemne.ch Tue Apr 18 10:14:00 1995 From: pierre.demartines at csemne.ch (pierre.demartines@csemne.ch) Date: Tue, 18 Apr 1995 16:14:00 +0200 Subject: French Doctoral Thesis available: Nonlinear Data Analysis through Self-Organizing Neural Networks Message-ID: <199504181414.QAA16521@grillet.csemne.ch>

Hello, it is my pleasure to inform you of the availability of my doctoral dissertation (in French only) on "Data Analysis through Self-Organizing Neural Networks". You can get it by FTP from the TIRFLab ftp server (Grenoble, France).

FTP-host: tirf.inpg.fr
FTP-name: anonymous
FTP-passwd: anything (your email, for instance)
FTP-file: /pub/demartin/demartin.phd94.ps.Z (2.2 MB compressed, 8.7 MB uncompressed, 214 pages)

-----------------------------------------------------------------

DATA ANALYSIS THROUGH SELF-ORGANIZING NEURAL NETWORKS

Keywords
--------
Data structure (submanifold), Self-Organizing Maps (Kohonen), fractal dimension, dimension reduction, nonlinear projection, unfolding, "VQP" algorithm, diffeomorphism, interpolation, extrapolation, Multidimensional Scaling, Nonlinear Mapping, industrial applications.

Abstract
--------
Data understanding is often based on retrieving hidden information from a large number of collected variables. It is a search for linear or nonlinear dependencies between these observed variables, and consists in reducing them to a small number of parameters.
A classical method, widely used for this purpose, is Principal Component Analysis (PCA). Unfortunately, PCA is exclusively linear, and fails to reduce data that are redundant in a nonlinear way.

Kohonen's Self-Organizing Maps are a type of artificial neural network whose functionality can be viewed as a nonlinear extension of PCA: data samples are mapped onto a grid of neurons. A major drawback of these maps, however, is their a priori fixed shape (generally a square or a rectangle), which is rarely adapted to the shape of the parametric space to be represented.

We relax this constraint with a new algorithm, called ``Vector Quantization and Projection'' (VQP). It is a kind of self-organizing map whose output space is continuous and automatically takes the relevant shape. From a mathematical point of view, VQP searches for a diffeomorphism between the raw data set and an unknown parametric representation to be found. More intuitively, it unfolds the data structure towards a low-dimensional space, whose dimension is the number of degrees of freedom of the observed phenomenon and can be determined through fractal analysis of the data set.

To illustrate the generality of VQP, we give a wide range of application examples (real or simulated) in several domains, such as data fusion, graph matching, industrial process monitoring and analysis, fault detection in devices, and adaptive routing in telecommunications.

----------------

ANALYSE DE DONNEES PAR RESEAUX DE NEURONES AUTO-ORGANISES

Mots-cles
---------
Structure de donnees (variete), cartes auto-organisantes (Kohonen), dimension fractale, reduction de dimension, projection non-lineaire, depliage, algorithme "VQP", diffeomorphisme, interpolation, extrapolation, "Multidimensional Scaling", "Nonlinear Mapping", applications industrielles.
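For readers who want to experiment, the unfolding idea in the abstract above can be sketched as a weighted distance-preserving projection. This is a hypothetical illustration of the principle only, not the VQP implementation (VQP additionally includes a vector-quantization stage and its own update rule); the weighting, learning rate, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points on a half-circle in the plane (intrinsic dimension 1).
t = np.sort(rng.uniform(0.0, np.pi, 60))
X = np.column_stack([np.cos(t), np.sin(t)])

# 1-D output coordinates to be found; small random initialization.
y = 0.1 * rng.standard_normal(len(X))

def stress(y):
    """Pairwise mismatch between input- and output-space distances,
    weighted toward short input distances so the arc is unfolded
    locally rather than flattened globally."""
    total = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d_in = np.linalg.norm(X[i] - X[j])
            total += np.exp(-d_in) * (d_in - abs(y[i] - y[j])) ** 2
    return total

s_before = stress(y)
for _ in range(300):  # crude gradient descent on the stress
    g = np.zeros_like(y)
    for i in range(len(X)):
        d_in = np.linalg.norm(X[i] - X, axis=1)
        d_out = np.abs(y[i] - y)
        # derivative of the weighted pairwise stress with respect to y[i]
        g[i] = (2.0 * np.exp(-d_in) * (d_out - d_in) * np.sign(y[i] - y)).sum()
    y -= 0.002 * g
s_after = stress(y)  # stress decreases as the arc is unfolded onto the line
```

After the descent, the 1-D coordinates y approximate arc length along the curve, which is the "automatic shape" intuition of the abstract.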
Resume
------
Chercher a comprendre des donnees, c'est souvent chercher a trouver de l'information cachee dans un gros volume de mesures redondantes. C'est chercher des dependances, lineaires ou non, entre les variables observees pour pouvoir resumer ces dernieres par un petit nombre de parametres. Une methode classique, l'Analyse en Composantes Principales (ACP), est abondamment employee dans ce but. Malheureusement, il s'agit d'une methode exclusivement lineaire, qui est donc incapable de reveler les dependances non lineaires entre les variables. Les cartes auto-organisantes de Kohonen sont des reseaux de neurones artificiels dont la fonction peut etre vue comme une extension de l'ACP aux cas non-lineaires. L'espace parametrique est represente par une grille de neurones, dont la forme, generalement carree ou rectangulaire, doit malheureusement etre choisie a priori. Cette forme est souvent inadaptee a celle de l'espace parametrique recherche. Nous liberons cette contrainte avec un nouvel algorithme, nomme ``Vector Quantization and Projection'' (VQP), qui est une sorte de carte auto-organisante dont l'espace de sortie est continu et prend automatiquement la forme adequate. Sur le plan mathematique, VQP peut etre defini comme la recherche d'un diffeomorphisme entre l'espace brut des donnees et un espace parametrique inconnu a trouver. Plus intuitivement, il s'agit d'un depliage de la structure des donnees vers un espace de plus petite dimension. Cette dimension, qui correspond au nombre de degres de liberte du phenomene etudie, peut etre determinee par des methodes d'analyse fractale du nuage de donnees.
Afin d'illustrer la generalite de l'approche VQP, nous donnons une serie d'exemples d'applications, simulees ou reelles, dans des domaines varies qui vont de la fusion de donnees a l'appariement de graphes, en passant par l'analyse ou la surveillance de procedes industriels, la detection de defauts dans des machines et le routage adaptatif en telecommunications.

-----------------------------------------------------------------

FTP INSTRUCTIONS:
unix> ftp tirf.inpg.fr (or 192.70.29.33)
Name: anonymous
Password:
ftp> cd pub/demartin
ftp> binary
ftp> get demartin.phd94.ps.Z
ftp> quit
unix> uncompress demartin.phd94.ps.Z

-------------------------------------------------------------------- Pierre Demartines email: demartin at csemne.ch C.S.E.M. Phone: (41) 38 205 252 Maladiere 71 Fax: (41) 38 205 770 CH-2007 Neuchatel Mosaic: ftp://tirf.inpg.fr/pub/HTML/tirf.html Switzerland --------------------------------------------------------------------

From mike at PSYCH.UALBERTA.CA Tue Apr 18 22:49:11 1995 From: mike at PSYCH.UALBERTA.CA (Mike Dawson) Date: Tue, 18 Apr 1995 20:49:11 -0600 Subject: Biological Computation Project WWW Message-ID: A non-text attachment was scrubbed... Name: not available Type: text Size: 733 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/094a88d7/attachment.ksh

From nips95 at mines.colorado.edu Wed Apr 19 03:07:41 1995 From: nips95 at mines.colorado.edu (NIPS Conference Office) Date: Wed, 19 Apr 95 03:07:41 -0400 Subject: reminder: May 20 NIPS submission deadline Message-ID:

CALL FOR PAPERS Neural Information Processing Systems Natural and Synthetic Monday, Nov. 27 - Saturday, Dec. 2, 1995 Denver, Colorado

This is the ninth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation.
The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov. 27) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec. 1-2).

Major categories for paper submission, with example subcategories, are as follows:

Neuroscience: systems physiology, signal and noise analysis, oscillations, synchronization, mechanisms of inhibition and neuromodulation, synaptic plasticity, computational models

Theory: computational learning theory, complexity theory, dynamical systems, statistical mechanics, probability and statistics, approximation and estimation theory

Implementation: analog and digital VLSI, novel neuro-devices, neurocomputing systems, optical, simulation tools, parallelism

Algorithms and Architectures: learning algorithms, decision trees, constructive/pruning algorithms, localized basis functions, recurrent networks, genetic algorithms, combinatorial optimization, performance comparisons

Visual Processing: image recognition, coding and classification, stereopsis, motion detection and tracking, visual psychophysics

Speech, Handwriting and Signal Processing: speech recognition, coding and synthesis, handwriting recognition, adaptive equalization, nonlinear noise removal, auditory scene analysis

Applications: time-series prediction, medical diagnosis, financial analysis, DNA/protein sequence analysis, music processing, expert systems, database mining

Cognitive Science & AI: natural language, human learning and memory, perception and psychophysics, symbolic reasoning

Control, Navigation, and Planning: robotic motor control, process control, navigation, path planning, exploration, dynamic programming, reinforcement learning

Review Criteria: All submitted papers will be thoroughly refereed on the basis of technical quality, novelty, significance, and clarity.
Submissions should contain new results that have not been published previously. Authors should not be dissuaded from submitting recent work, as there will be an opportunity after the meeting to revise accepted manuscripts before submitting final camera-ready copy.

Paper Format: Submitted papers may be up to eight pages in length, including figures and references. The page limit will be strictly enforced, and any submission exceeding eight pages will not be considered. Authors are encouraged (but not required) to use the NIPS style files obtainable by anonymous FTP at the sites given below. Papers must include physical and e-mail addresses of all authors, and MUST indicate one of the nine major categories listed above. Authors may also indicate a subcategory, and their preference, if any, for oral or poster presentation; this preference will play no role in paper acceptance. Unless otherwise indicated, correspondence will be sent to the first author.

Submission Instructions: Send six copies of submitted papers to the address below; electronic or FAX submission is not acceptable. Include one additional copy of the abstract only, to be used for preparation of the abstracts booklet distributed at the meeting. Submissions mailed first-class from within the US or Canada, or sent from overseas via Federal Express/Airborne/DHL or similar carrier, must be POSTMARKED by May 20, 1995. All other submissions must ARRIVE by this date.

Mail submissions to:
Michael Mozer
NIPS*95 Program Chair
Department of Computer Science
University of Colorado
Colorado Avenue and Regent Drive
Boulder, CO 80309-0430 USA

Mail general inquiries/requests for registration material to:
NIPS*95 Registration, Dept.
of Mathematical and Computer Sciences, Colorado School of Mines, Golden, CO 80401 USA; FAX: (303) 273-3875; e-mail: nips95 at mines.colorado.edu

Sites for LaTeX style files: Copies of "nips.tex" and "nips.sty" are available via anonymous ftp at helper.systems.caltech.edu (131.215.68.12) in /pub/nips, and at b.gp.cs.cmu.edu (128.2.242.8) in /usr/dst/public/nips. The style files and other conference information may also be retrieved via the World Wide Web at http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS.html

NIPS*95 Organizing Committee: General Chair, David S. Touretzky, CMU; Program Chair, Michael Mozer, U. Colorado; Publications Chair, Michael Hasselmo, Harvard; Tutorial Chair, Jack Cowan, U. Chicago; Workshops Chair, Michael Perrone, IBM; Publicity Chair, David Cohn, MIT; Local Arrangements, Manavendra Misra, Colorado School of Mines; Treasurer, John Lazzaro, Berkeley.

DEADLINE FOR SUBMISSIONS IS MAY 20, 1995 (POSTMARKED) -please post-

From nips95 at mines.colorado.edu Wed Apr 19 03:08:45 1995 From: nips95 at mines.colorado.edu (NIPS Conference Office) Date: Wed, 19 Apr 95 03:08:45 -0400 Subject: NIPS workshop proposals due May 20 Message-ID:

CALL FOR PROPOSALS NIPS*95 Post Conference Workshops December 1 and 2, 1995 Vail, Colorado

Following the regular program of the Neural Information Processing Systems 1995 conference, workshops on current topics in neural information processing will be held on December 1 and 2, 1995, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited.
Past topics have included: active learning and control, architectural issues, attention, Bayesian analysis, benchmarking neural network applications, computational complexity issues, computational neuroscience, fast training techniques, genetic algorithms, music, neural network dynamics, optimization, recurrent nets, rules and connectionist models, self-organization, sensory biophysics, speech, time series prediction, vision and audition, implementations, and grammars.

The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics. Representation of alternative viewpoints and panel-style discussions are particularly encouraged.

Individuals proposing to chair a workshop will have responsibilities including: 1) arranging short informal presentations by experts working on the topic, 2) moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the "gong show"), and 3) writing a brief summary.

Submission Instructions: Interested parties should submit a short proposal for a workshop of interest postmarked by May 20, 1995. (Express mail is not necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, the proposed length of the workshop (one day or two days), and the planned format. The proposal should motivate why the topic is of interest or controversial, why it should be discussed, and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications, and evidence of scholarship in the field of interest.
Submissions should include contact name, address, email address, phone number and fax number if available.

Mail proposals to:
Michael P. Perrone
NIPS*95 Workshops Chair
IBM T.J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598
(email: mpp at watson.ibm.com)

PROPOSALS MUST BE POSTMARKED BY MAY 20, 1995 -Please Post-

From pierre.demartines at csemne.ch Wed Apr 19 04:26:00 1995 From: pierre.demartines at csemne.ch (pierre.demartines@csemne.ch) Date: Wed, 19 Apr 1995 10:26:00 +0200 Subject: French Doctoral Thesis available: Nonlinear Data Analysis through Self-Organizing Neural Networks Message-ID: <199504190826.KAA17936@gervans.csemne.ch>

Ahem... As some people told me, I forgot to set correct read access on my dissertation's PostScript file (shame on me...). It is fixed now; my apologies for the trouble. For non-French readers, the same anonymous-ftp server (tirf.inpg.fr or 192.70.29.33, directory pub/demartin) offers some short papers on the subject (demartin.{gretsi95,nimes93,iwann93}.ps.Z). The shortest (2 pages) and most up-to-date one is gretsi95; the IWANN'93 paper is somewhat out of date. In any case, the thesis contains many figures and equations that are readable by everybody.

-------------------------------------------------------------------- Pierre Demartines email: demartin at csemne.ch C.S.E.M. Phone: (41) 38 205 252 Maladiere 71 Fax: (41) 38 205 770 CH-2007 Neuchatel Mosaic: ftp://tirf.inpg.fr/pub/HTML/tirf.html Switzerland --------------------------------------------------------------------

From kak at gate.ee.lsu.edu Wed Apr 19 12:46:35 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Wed, 19 Apr 95 11:46:35 CDT Subject: Paper Message-ID: <9504191646.AA15786@gate.ee.lsu.edu>

The following paper has just been published: S.C. Kak, On quantum neural computing. INFORMATION SCIENCES, vol. 83, pp. 143-160, 1995.
----------------------------------------------------
Abstract: This paper examines the notion of quantum neural computing in the context of several new directions in neural network research. In particular, we consider new neuron and network models that lead to rapid training, chaotic dynamics in neuron assemblies, models of attention and awareness, and cytoskeletal microtubule information processing.
-----------------------------------------------------

You can get a copy of the LaTeX file by anonymous ftp from gate.ee.lsu.edu; the directory is ftp/pub/kak and the filename is q.tex

From hu at eceserv0.ece.wisc.edu Wed Apr 19 15:08:25 1995 From: hu at eceserv0.ece.wisc.edu (Yu Hu) Date: Wed, 19 Apr 1995 14:08:25 -0500 Subject: CFP: Int'l Symp. on ANN, Dec.18-20, 1995, Taiwan, ROC (86 lines) Message-ID: <199504191908.AA27828@eceserv0.ece.wisc.edu>

FIRST ANNOUNCEMENT AND CALL FOR PAPERS
--------------------------------------
1995 International Symposium on Artificial Neural Networks
December 18-20, 1995, Hsinchu, Taiwan, Republic of China

Sponsored by National Chiao-Tung University in cooperation with Ministry of Education, Taiwan R.O.C. National Science Council, Taiwan R.O.C. IEEE Signal Processing Society

Call for Papers
------------------
The third of a series of International Symposia on Artificial Neural Networks will be held at National Chiao-Tung University, Hsinchu, Taiwan in December of 1995. Papers are solicited for, but not limited to, the following topics:

Associative Memory             Robotics
Electrical Neurocomputers      Sensation & Perception
Image/Speech Processing        Sensory/Motor Control Systems
Machine Vision                 Supervised Learning
Neurocognition                 Unsupervised Learning
Neurodynamics                  Fuzzy Neural Systems
Optical Neurocomputers         Mathematical Methods
Optimization                   Other Applications

Prospective authors are invited to submit 4 copies of extended summaries of no more than 4 pages.
All manuscripts should be written in English, single-spaced and single-column, on 8.5" by 11" white paper. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone/fax numbers, and email address if applicable. The indicated corresponding author will receive an acknowledgement of his/her submission. Camera-ready full papers of accepted manuscripts will be published in a hard-bound proceedings and distributed at the symposium.

For more information, please consult the WWW (Mosaic) site http://www.ee.washington.edu/isann95.html, or use anonymous ftp from pierce.ee.washington.edu/pub/isann95/read.me (128.95.31.129).

For submission from USA and Europe:
Professor Yu-Hen Hu
Dept. of Electrical and Computer Engineering
Univ. of Wisconsin - Madison, Madison, WI 53706-1691
Phone: (608) 262-6724, Fax: (608) 262-1267
Email: hu at engr.wisc.edu

For submission from Asia and Other Areas:
Professor Sin-Horng Chen
Dept. of Communication Engineering
National Chiao-Tung Univ., Hsinchu, Taiwan
Phone: (886) 35-712121 ext. 54522, Fax: (886) 35-710116
Email: isann95 at cc.nctu.edu.tw

************************* SCHEDULE *************************
Submission of extended summary: July 15
Notification of acceptance: September 30
Submission of photo-ready paper: October 31
Advanced registration, before: November 10

ORGANIZATION

General Co-Chairs:
Hsin-Chia Fu, National Chiao-Tung University, Hsinchu, Taiwan (hcfu at csie.nctu.edu.tw)
Jenq-Neng Hwang, University of Washington, Seattle, Washington, USA (hwang at ee.washington.ed)

Program Co-Chairs:
Sin-Horng Chen, National Chiao-Tung University, Hsinchu, Taiwan (schen at cc.nctu.edu.tw)
Yu-Hen Hu, University of Wisconsin, Madison, Wisconsin, USA (hu at engr.wisc.edu)

Advisory Board Co-Chairs: Sun-Yuan Kung, C. Y.
Wu. Affiliations: Sun-Yuan Kung, Princeton University, Princeton, New Jersey, USA; C. Y. Wu, National Science Council, Taipei, Taiwan, ROC.

From kak at gate.ee.lsu.edu Thu Apr 20 10:47:17 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Thu, 20 Apr 95 09:47:17 CDT Subject: Paper announcement Message-ID: <9504201447.AA23093@gate.ee.lsu.edu>

I regret that the directory listing for the anonymous ftp of my paper was in error in the announcement yesterday. The correct listing is given below:

S.C. Kak, On quantum neural computing. INFORMATION SCIENCES, vol. 83, pp. 143-160, 1995.

-- Abstract: This paper examines the notion of quantum neural computing in the context of several new directions in neural network research. In particular, we consider new neuron and network models that lead to rapid training, chaotic dynamics in neuron assemblies, models of attention and awareness, and cytoskeletal microtubule processing. Several characteristics of quantum neural computing are examined. --

To obtain the .ps file of the paper, use anonymous ftp at gate.ee.lsu.edu. Go to directory pub, subdirectory kak; the .ps file is named q.ps ftp://gate.ee.lsu.edu/pub/kak/q.ps

A more comprehensive (three times larger) report, to appear in ``Advances in Imaging and Electron Physics'', may also be obtained using anonymous ftp. The compressed postscript file is named a.ps.Z ftp://gate.ee.lsu.edu/pub/kak/a.ps.Z

From uzimmer at informatik.uni-kl.de Thu Apr 20 10:54:44 1995 From: uzimmer at informatik.uni-kl.de (Uwe R.
Zimmer, AG vP) Date: Thu, 20 Apr 95 15:54:44 +0100 Subject: Paper available (mobile robots, self-localization) Message-ID: <950420.155444.1722@ag-vp-file-server.informatik.uni-kl.de>

A report on a current mobile robot project, concerning basic mobile robot tasks, is available via ftp or (together with some other reports) from the following WWW server: http://ag-vp-www.informatik.uni-kl.de/

--------------------------------------------------------------------------- --- Self-Localization in Dynamic Environments ---------------------------------------------------------------------------

FTP-Server is: ftp.uni-kl.de
Mode is: binary
Directory is: reports_uni-kl/computer_science/mobile_robots/1995/papers
File name is: Zimmer.Self-Loc.ps.gz

IEEE/SOFT International Workshop BIES'95 May 30 - 31, 1995, Tokyo, Japan

Self-Localization in Dynamic Environments Uwe R. Zimmer

Self-localization in unknown environments, i.e. the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article constructs a qualitative, topological world model as a basis for self-localization. A central aspect is reliability, with emphasis on error tolerance and stability. The proposed techniques place only weak demands on the kind and quality of the sensors employed, as well as on the kinematic precision of the mobile platform. Hard real-time constraints can be handled thanks to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot "ALICE".

keywords: artificial neural networks, mobile robots, self-localization, self-organization, world-modelling (8 pages with photos and other figures)

----------------------------------------------------- ----- Uwe R.
Zimmer --- University of Kaiserslautern - Computer Science Department | 67663 Kaiserslautern - Germany | ------------------------------.--------------------------------. Phone:+49 631 205 2624 | Fax:+49 631 205 2803 |

From jon at maths.flinders.edu.au Fri Apr 21 09:13:14 1995 From: jon at maths.flinders.edu.au (Jonathan Baxter) Date: Fri, 21 Apr 1995 22:43:14 +0930 Subject: Paper Available: Learning Internal Representations Message-ID: <199504211313.AA12304@calvin.maths.flinders.edu.au>

The following paper is available by anonymous ftp from calvin.maths.flinders.edu.au (129.96.32.2) /pub/jon/repcolt.ps.Z It is a (hopefully not too lossy) compression of part of my thesis and will appear in the proceedings of COLT '95. Instructions for retrieval are at the end of this message.

Title: Learning Internal Representations (10 pages)
Author: Jonathan Baxter

Abstract: Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for {\em automatically} learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate {\em internal representation} for a learning environment and then using that representation to bias the learner's hypothesis space for the learning of future tasks drawn from the same environment. An internal representation must be learnt by sampling from {\em many similar tasks}, not just a single task as occurs in ordinary machine learning. It is proved that the number of examples $m$ {\em per task} required to ensure good generalisation from a representation learner obeys $m = O(a+b/n)$ where $n$ is the number of tasks being learnt and $a$ and $b$ are constants. If the tasks are learnt independently ({\em i.e.} without a common representation) then $m=O(a+b)$.
It is argued that for learning environments such as speech and character recognition $b\gg a$ and hence representation learning in these environments can potentially yield a drastic reduction in the number of examples required per task. It is also proved that if $n = O(b)$ (with $m=O(a+b/n)$) then the representation learnt will be good for learning novel tasks from the same environment, and that the number of examples required to generalise well on a novel task will be reduced to $O(a)$ (as opposed to $O(a+b)$ if no representation is used). It is shown that gradient descent can be used to train neural network representations and the results of an experiment are reported in which a neural network representation was learnt for an environment consisting of {\em translationally invariant} Boolean functions. The experiment provides strong qualitative support for the theoretical results.

FTP Instructions:
unix> ftp calvin.maths.flinders.edu.au (or 129.96.32.2)
login: anonymous
password: (your e-mail address)
ftp> cd pub/jon
ftp> binary
ftp> get repcolt.ps.Z
ftp> quit
unix> uncompress repcolt.ps.Z
unix> lpr repcolt.ps (or however you print)

From rao at cs.rochester.edu Sat Apr 22 16:05:32 1995 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 22 Apr 1995 16:05:32 -0400 Subject: Paper Available: Face Recognition using Spatial Filters and Sparse Distributed Memory Message-ID: <199504222005.QAA27006@panda.cs.rochester.edu>

The following paper is currently available via ftp: Rajesh P. N. Rao and Dana H. Ballard, "Natural Basis Functions and Topographic Memory for Face Recognition", IJCAI'95 (to appear).
ftp://cs.rochester.edu/pub/u/rao/papers/ijcai95.ps.Z Abstract: Recent work regarding the statistics of natural images has revealed that the dominant eigenvectors of arbitrary natural images closely approximate various oriented derivative-of-Gaussian functions; these functions have also been shown to provide the best fit to the receptive field profiles of cells in the primate striate cortex. We propose a scheme for expression-invariant face recognition that employs a fixed set of these ``natural'' basis functions to generate multiscale iconic representations of human faces. Using a fixed set of basis functions obviates the need for recomputing eigenvectors (a step that was necessary in some previous approaches employing principal component analysis (PCA) for recognition) while at the same time retaining the redundancy-reducing properties of PCA. A face is represented by a set of iconic representations automatically extracted from an input image. The description thus obtained is stored in a topographically-organized sparse distributed memory that is based on a model of human long-term memory first proposed by Kanerva. We describe experimental results for an implementation of the method on a pipeline image processor that is capable of achieving near real-time recognition by exploiting the processor's frame-rate convolution capability for indexing purposes. --------- Rajesh Rao Internet: rao at cs.rochester.edu Dept. 
of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ From jagota at next2.msci.memst.edu Sun Apr 23 14:29:04 1995 From: jagota at next2.msci.memst.edu (Arun Jagota) Date: Sun, 23 Apr 1995 13:29:04 -0500 Subject: HKP exercises (ftp) Message-ID: <199504231829.AA20899@next2> Dear Connectionists: The HKP exercise list (version 1), which has been sent by email to those who requested it, is now also available by anonymous ftp: ftp ftp.cs.buffalo.edu > cd users/jagota > binary > get HKP.ps.Z The same directory has an uncompressed version HKP.ps also. Arun Jagota, Math Sciences, University of Memphis From u095 at unb.ca Mon Apr 24 22:55:47 1995 From: u095 at unb.ca (Kamat) Date: Mon, 24 Apr 1995 23:55:47 -0300 (ADT) Subject: paper available "Symbolic vs Vector Space Learning" Message-ID: FTP-host: jupiter.csd.unb.ca FTP-filename: /pub/symbol/vector.ps.Z ---------------------------------------------------------------------------- Dear Connectionists, The following paper has been accepted for publication in Pattern Recognition Letters and is available through anonymous ftp. ---------------------------------------------------------------------------- CAN A VECTOR SPACE BASED LEARNING MODEL DISCOVER INDUCTIVE CLASS GENERALIZATION IN A SYMBOLIC ENVIRONMENT? Lev Goldfarb, John Abela, Virendra C. Bhavsar and Vithal N. Kamat Faculty of Computer Science University of New Brunswick Fredericton, N.B., Canada E3B 5A3 E-mail: goldfarb, x45i, bhavsar, u095 at unb.ca Abstract We outline a general framework for inductive learning based on the recently proposed evolving transformation system model. Mathematical foundations of this framework include two basic components: a set of operations (on objects) and the corresponding geometry defined by means of these operations. 
According to the framework, to perform inductive learning in a symbolic environment, the set of operations (class features) may need to be dynamically updated, and this requires that the geometric component allows for an evolving topology. In symbolic systems, as defined in this paper, the geometric component allows for a dynamic change in topology, whereas finite-dimensional numeric systems (vector spaces) can essentially have only one natural topology. This fact should form the basis of a complete formal proof that, in a symbolic setting, the vector space based models, e.g. artificial neural networks, cannot capture inductive generalization. Since the presented argument indicates that the symbolic learning process is more powerful than the numeric process, it appears that only the former should be properly called an inductive learning process. Keywords: Inductive learning, inductive generalization, vector space learning models, artificial neural networks, symbolic models, evolving transformation system, learning topologies. ------------------------------------------------------------------------- FTP-host: jupiter.csd.unb.ca FTP-filename: /pub/symbol/vector.ps.Z ------------------------------------------------------------------------- ftp instructions: % ftp jupiter.csd.unb.ca Name: anonymous password: your full email address ftp> cd pub/symbol ftp> binary ftp> get vector.ps.Z ftp> bye % uncompress vector.ps.Z % lpr vector.ps -------------------------------------------------------------------- Vithal N. Kamat Tel. (506) 453-4566 Faculty of Computer Science Fax. 
(506) 453-3566 University of New Brunswick E-mail: u095 at unb.ca Fredericton, N.B., CANADA E3B 5A3 --------------------------------------------------------------------

From saad at castle.ed.ac.uk Tue Apr 25 19:16:26 1995 From: saad at castle.ed.ac.uk (D Saad) Date: Tue, 25 Apr 95 19:16:26 BST Subject: TR announcement: On-Line Learning in Soft Committee Machines Message-ID: <9504251916.aa03297@uk.ac.ed.castle>

FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/saad.online.ps.Z

The file saad.online.ps.Z is now available for copying from the Neuroprose repository:

On-Line Learning in Soft Committee Machines (33 pages)
David Saad - Department of Physics, University of Edinburgh, Edinburgh EH9 3JZ, UK.
Sara A. Solla - CONNECT, The Niels Bohr Institute, Blegdamsvej 17, Copenhagen 2100, Denmark.

The paper has been submitted for publication in Phys. Rev. E; a letter describing the main results is to appear in Phys. Rev. Lett.

Abstract:
--------
The problem of on-line learning in two-layer neural networks is studied within the framework of statistical mechanics. A fully connected committee machine with $K$ hidden units is trained by gradient descent to perform a task defined by a teacher committee machine with $M$ hidden units acting on randomly drawn inputs. The approach, based on a direct averaging over the activation of the hidden units, results in a set of first-order differential equations which describe the dynamical evolution of the overlaps among the various hidden units and allow for a computation of the generalization error. The equations of motion are obtained analytically for general $K$ and $M$, and provide a new and powerful tool used here to study a variety of realizable, over-realizable, and unrealizable learning scenarios, and to analyze the role of the learning rate in controlling the evolution and convergence of the learning process.
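For readers who want to see the student-teacher setting of the abstract concretely, a minimal on-line simulation of a soft committee machine is sketched below. This is an illustration only, not code from the paper: the network sizes, learning rate, random seed, and the common g(x) = erf(x/sqrt(2)) activation convention are all assumptions, and the generalization error is estimated by Monte Carlo rather than from the order-parameter equations of motion.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
erf = np.vectorize(math.erf)    # elementwise error function

N, K, M = 50, 2, 2              # input dimension, student / teacher hidden units
eta = 0.5                       # learning rate (illustrative choice)

B = rng.standard_normal((M, N))            # fixed teacher weights
W = 0.01 * rng.standard_normal((K, N))     # student weights, small random init

def committee(weights, x):
    """Soft committee output: sum_i erf(w_i . x / sqrt(2))."""
    return float(erf(weights @ x / math.sqrt(2.0)).sum())

def eg(W, n_test=1000):
    """Monte Carlo estimate of the generalization error <(sigma - y)^2>/2."""
    X = rng.standard_normal((n_test, N))
    return float(np.mean([(committee(W, x) - committee(B, x)) ** 2
                          for x in X])) / 2.0

e_start = eg(W)
for _ in range(20000):                     # one fresh random example per step
    x = rng.standard_normal(N)
    delta = committee(W, x) - committee(B, x)
    u = W @ x / math.sqrt(2.0)             # student pre-activations
    gprime = (2.0 / math.sqrt(math.pi)) * np.exp(-u ** 2)   # erf'(u)
    # on-line gradient step on the per-example error, with the eta/N scaling
    W -= (eta / N) * delta * np.outer(gprime, x / math.sqrt(2.0))
e_end = eg(W)                              # error decreases in this realizable (K = M) case
```

Plotting e_end-style estimates against the number of examples per input dimension reproduces, qualitatively, the plateau-and-specialization dynamics that the paper's differential equations describe analytically.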
From mel at quake.usc.edu Tue Apr 25 01:23:36 1995 From: mel at quake.usc.edu (Bartlett Mel) Date: Tue, 25 Apr 1995 13:23:36 +0800 Subject: POST-DOC POSITION Message-ID: <9504252023.AA14539@quake.usc.edu> ----- POST-DOCTORAL RESEARCH POSITION AVAILABLE ----- A post-doctoral position is now available in the laboratory of Dr. Bartlett Mel in the Biomedical Engineering Department at the University of Southern California. This position is for collaborative work on an NSF-funded project involving the study of synaptic learning in neurons with complex dendritic trees. Applicants should have a good background in neuroscience and strong computational and mathematical skills. The position is for one year, with the possibility of renewal for a second year. Salary is around $30,000. PROJECT OVERVIEW - A growing body of neuroanatomical, physiological, and computational modeling work is consistent with the idea that activity-independent synapse formation, coupled with activity-dependent (Hebbian) synapse stabilization, could lead to the development of correlation-based spatial structure in the synaptic contacts onto the dendrites of INDIVIDUAL neurons, by analogy with the correlation-induced spatial maps formed across POPULATIONS of neurons (e.g. Miller 1994). Given the likely capacity for nonlinear subunit processing within dendritic trees (see Mel 1994), this additional putative "layer" of synaptic organization is likely to have profound consequences for neurobiological function, such as the development of complex receptive field properties in sensory neurons, as well as memory capacity in the context of supervised associative learning.
WHAT THE APPLICANT WOULD DO includes at least two of the following: (1) detailed biophysical modeling of synaptic plasticity at the single-neuron level, (2) abstract modeling of individual dendritic neurons and populations of neurons in both supervised and unsupervised neurobiological learning contexts, and (3) mathematical analysis of the computational capacities of dendritic neurons. In addition, the applicant would participate in a collaboration with experimental neuroscience colleagues (Drs. Nancy Desmond and William Levy at the University of Virginia), who will be conducting experiments in rat hippocampus that relate closely to the above issues. OTHER PROJECTS currently ongoing in the lab include (i) the construction of a large-scale neurally-inspired system for 3-D visual object recognition, (ii) psychophysical experiments involving human visual perception and memory in collaboration with Dr. Kaz Morikawa, and (iii) biophysical-level modeling of the temporal response characteristics of dendritic neurons, in collaboration with Dr. Ernst Niebur at Caltech. ELSEWHERE AT USC, the Neural, Informational, and Behavioral Sciences (NIBS) graduate program encompasses several dozen neuroscience, psychology, computer science and engineering faculty interested in all aspects of brain and behavioral function. A few of these include Michael Arbib, Michel Beaudry, George Bekey, Ted Berger, Irving Biederman, Christof von der Malsburg, Larry Swanson, Armand Tanguay, and Richard Thompson. Several excellent seminar series run throughout the year, and a daily NIBS tea provides a focal point for informal interaction. THE UNIVERSITY OF SOUTHERN CALIFORNIA is the oldest and largest private research university in the western US, and is among the top 10 private universities receiving federal funds for research and development in the country. The University is situated in the center of an unusually diverse metropolis (Los Angeles) surrounded by stunning natural scenery.
Los Angeles may be the only city in the world in which it is possible to climb a 10,000 ft. peak in the morning, picnic on the beach for lunch, receive "aromatherapy" in the afternoon, dine in a fabulous Santa Monica restaurant, catch the LA Philharmonic at the Hollywood Bowl, and then drown one's late-night existentialist thoughts at a West-Side coffeehouse. APPLICATIONS SHOULD INCLUDE (1) a CV and cover letter detailing the applicant's background, motivations, and qualifications, (2) at least two letters of recommendation, and (3) a maximum of three relevant publications, sent to Dr. Bartlett Mel Biomedical Engineering Department USC, 1451 Los Angeles, CA 90089 (213)740-0334 Applicants must be U.S. citizens or permanent residents. Applications are encouraged from minorities and women. USC is an equal opportunity/affirmative action employer. DEADLINE for submission is June 1, 1995. BIBLIOGRAPHY Mel, B.W. (1994) Information processing in dendritic trees. Neural Computation, 6, 1031-1085. Miller, K. (1994) A model for the development of simple-cell receptive fields, and the ordered arrangement of orientation columns through activity-dependent competition of on- and off-center inputs. J. Neurosci., 14, 409-441. From M.West at statslab.cam.ac.uk Wed Apr 26 17:09:00 1995 From: M.West at statslab.cam.ac.uk (Mike West) Date: Wed, 26 Apr 95 17:09 BST Subject: No subject Message-ID: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1996 Joint Statistical Meetings, Chicago, 4-8 August 96 Call for Papers: ASA Section on BAYESIAN STATISTICAL SCIENCE The latest issue of Amstat News contains a call for Invited Paper Session suggestions and proposals from Dick Gunst, the ASA Program Chair. This is a follow-up call from SBSS. The Section will have at least one Invited Session, possibly more, including sessions co-sponsored by other sections. Proposals and suggestions received will also be considered for Special Contributed Paper Sessions.
At this stage, suggestions and ideas for sessions need not identify a full list of speakers and discussants, but you should provide a general idea of the topic and focus. The theme for the 1996 meetings is "Challenging the Frontiers of Knowledge Using Statistical Science", intended to highlight new statistical developments at the forefront of the discipline -- theory, methods, applications, and cross-disciplinary activities. Suggestions for invited sessions should relate to this theme, involving topics of real novelty and importance, new directions of development in Bayesian statistics, and reflecting the current vibrancy of the discipline. Invited sessions typically have three speakers and one discussant, though the format can vary from one to three speakers, or comprise a panel discussion. Novel format suggestions are welcome. Special Contributed Sessions typically have four or five speakers plus a discussant. Please contact me with suggestions and ideas for sessions. The deadline for receipt at ASA of all invited paper sessions is soon: July 1 1995. Mike West 1996 SBSS Program Chair Email me at: January--July 6th 1995: m.west at statslab.cam.ac.uk After July 7th 1995: mw at isds.duke.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From ingber at alumni.caltech.edu Wed Apr 26 18:24:12 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Wed, 26 Apr 1995 15:24:12 -0700 Subject: New book: Neocortical Dynamics and Human EEG Rhythms Message-ID: <199504262224.PAA28361@alumni.caltech.edu> Neocortical Dynamics and Human EEG Rhythms P.L. Nunez Oxford University Press, 1995 >From the US, order by calling (800)451-7556. From outside the US, call (919)677-0977. From Europe, call Oxford U Press in London. 
An ascii file of the full preface can be obtained via ftp: Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber/MISC.DIR [ftp>] get nunez95_preface.txt [ftp>] quit This archive also can be accessed via WWW path http://alumni.caltech.edu/~ingber ======================================================================== CONTENTS 1. Quantitative States of Neocortex (PL Nunez) 2. Toward a Physics of Neocortex (PL Nunez) 3. Mind, Brain, and Electroencephalography (PL Nunez) 4. Physiologic, Medical, and Cognitive Correlates of Electroencephalography (KL Pilgreen) 5. Dynamics of Electrical Activity of the Brain, Local Networks, and Modulating Systems (FH Lopes da Silva) 6. Steady-State Visually Evoked Potentials, Brain Resonances, and Cognitive Processes (RB Silberstein) 7. Neuroelectric Measures of Mind (AS Gevins and BA Cutillo) 8. Discrete Linear Systems of Physics and Brain (PL Nunez) 9. Continuous Linear Systems of Physics and Brain (PL Nunez) 10. Nonlinear Phenomena and Chaos (PL Nunez) 11. Global Contributions to EEG Dynamics (PL Nunez) 12. Experimental Connections Between EEG Data and the Global Wave Theory (PL Nunez) 13. Neuromodulation of Neocortical Dynamics (RB Silberstein) 14. Statistical Mechanics of Multiple Scales of Neocortical Interactions (L Ingber) APPENDIX (PL Nunez) ======================================================================== /* RESEARCH E-Mail: ingber at alumni.caltech.edu * * INGBER WWW: http://alumni.caltech.edu/~ingber * * LESTER Archive: ftp.alumni.caltech.edu:/pub/ingber * * Prof. Lester Ingber _ P.O. 
Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From u095 at unb.ca Wed Apr 26 21:17:11 1995 From: u095 at unb.ca (Kamat) Date: Wed, 26 Apr 1995 22:17:11 -0300 (ADT) Subject: Symbolic vs Vector Space Learning Message-ID: Dear Connectionists, It appears that many neural net researchers do not agree with the main point of the paper [*] I posted two days ago: that there is a fundamental difference between the **appropriate** underlying mathematical models for neural nets and symbolic learning machines. One of us will be working on a formal proof of the above statement, and the issue is so critical that it might be useful (to say the least) to discuss it on this mailing list. Vithal [*] L. Goldfarb, J. Abela, V. C. Bhavsar and V. N. Kamat, Can a vector space based learning model discover inductive class generalization in a symbolic environment? (to be published in Pattern Recognition Letters). ============================================================================ Vithal N. Kamat, PhD Student, AI group, Faculty of Computer Science, University of New Brunswick, PO Box 4400, Fredericton, NB, E3B 5A3, CANADA. u095 at unb.ca. Fax: (506) 453-3566 My URL is http://ccortex.cs.unb.ca:8080/~kamat/kamat.html. ============================================================================ From ken at phy.ucsf.edu Wed Apr 26 22:24:40 1995 From: ken at phy.ucsf.edu (Ken Miller) Date: Wed, 26 Apr 1995 19:24:40 -0700 Subject: faculty position in computational neuroscience at UC Davis Message-ID: <9504270224.AA02578@coltrane.ucsf.edu> The following notice appeared on a more obscure list; it seems appropriate to redistribute it here. Ken Miller ken at phy.ucsf.edu p.s. Please don't write to me about the job; I don't know anything more about it.
----------------------------------------------------------------- From khbritten at ucdavis.edu Tue Apr 25 12:48:14 1995 From: khbritten at ucdavis.edu (khbritten@ucdavis.edu) Date: Tue, 25 Apr 1995 09:48:14 -0700 Subject: faculty position, last-minute notice Message-ID: TENURE-TRACK FACULTY POSITION The Center for Neuroscience and the Department of Psychology at the University of California, Davis, invite applications for a tenure-track position at the assistant professor level in the area of computational neuroscience. Candidates specializing in analytical approaches and predictive modeling of perceptual and/or motor circuitry in vertebrates are encouraged to apply. Postdoctoral experience is desirable. Ideal candidates would also incorporate neurophysiological, neuroanatomical, psychophysical, and/or cognitive-experimental techniques to test models. The appointee will be expected to teach undergraduate and graduate level courses in his/her areas of expertise. The University of California is interested in candidates who are committed to the highest standards of scholarship and professional activities. This position is open until filled, but applications must be received by May 1, 1995 to be assured full consideration. Applicants should submit curriculum vitae, bibliography, a brief description of research interests and the names of at least three references to: Michael S. Gazzaniga, Ph.D., Director, Center for Neuroscience, University of California, Davis, CA 95616 The University of California is an Equal Opportunity/Affirmative Action Employer with a strong institutional commitment to the achievement of diversity among its faculty and staff. 
From wimw at mbfys.kun.nl Thu Apr 27 06:16:19 1995 From: wimw at mbfys.kun.nl (Wim Wiegerinck) Date: Thu, 27 Apr 1995 12:16:19 +0200 Subject: 3rd SNN Neural Network Symposium Message-ID: <199504271016.MAA03517@septimius.mbfys.kun.nl> NEURAL NETWORKS AND ARTIFICIAL INTELLIGENCE 3rd SNN Neural Network Symposium September 14-15, 1995 Nijmegen, the Netherlands Call for Papers Deadline 21 May 1995 -------------------------------------------------------------- On September 14 and 15, SNN will organize its annual Symposium on Neural Networks in the University Auditorium and Conference Centre of the University of Nijmegen. The topic of the conference is "Neural Networks and Artificial Intelligence". The term "Artificial Intelligence" is often associated with "traditional AI" methodology. Here it is used in its literal sense, indicating the problem of creating intelligence by artificial means, regardless of the method being used. The aim of the conference is two-fold: to give an overview of new developments in neurobiology and the cognitive sciences that may lead to novel computational paradigms, and to give an overview of recent achievements on some of the major conceptual problems of artificial intelligence and data modelling. Specific sessions are: - Robotics (hierarchical motor control, exploration, trajectory planning, active vision) - Attention (computational models, learning paradigms) - Biological models (memory, perception, motor control, oscillations) - Data interpretation (statistical theory, confidences of networks, Bayesian approach, rule extraction) - Cognitive models (memory, reasoning, language interpretation) The conference consists of 4 half-day single-track sessions. Each session will consist of 2 invited contributions and 2 or 3 contributions selected from the submitted papers. In addition there will be poster sessions. We expect approximately 250 attendants and 50 contributed papers. The proceedings will be published by Springer-Verlag.
INVITED SPEAKERS S. Thrun and J. Buhmann (University of Bonn) Neural networks for map building: How RHINO navigates in offices! F. Groen (University of Amsterdam) Time-varying images and visual servoing D. MacKay (University of Cambridge) Developments in Probabilistic Modelling with Neural Networks B.D. Ripley (University of Oxford) Statistical ideas for selecting network architectures L. van Hemmen (University of Munchen) Spiky neurons and models of synchrony A. Herz (University of Oxford) Rapid local synchronization of action potentials R. Eckhorn (University of Marburg) Segmentation coding in the visual system B. van Dijk (University of Amsterdam) Synchrony and plasticity in the visual cortex PROGRAM COMMITTEE Aertsen (Weizmann Institute, Israel), Amari (Tokyo University), Buhmann (University of Bonn), van Dijk (University of Amsterdam), Eckhorn (University of Marburg), Gielen (University of Nijmegen), van Hemmen (University of Munchen), Herz (University of Oxford), Heskes (University of Nijmegen), Kappen (University of Nijmegen), Krose (University of Amsterdam), Lopez (University of Madrid), Martinetz (Siemens Research, Munchen), MacKay (University of Cambridge), von Seelen (University of Bochum, Germany), Taylor (King's College, London) INSTRUCTIONS FOR SUBMISSION OF MANUSCRIPTS Please submit 4 copies of an extended abstract of at most 4 pages by regular mail to SNN at the address below, BEFORE MAY 21 1995. FAX OR EMAIL SUBMISSIONS ARE NOT ACCEPTED. All manuscripts must be written in English. Indicate whether the manuscript is intended for oral or poster presentation. With each submitted manuscript, indicate the name of the principal author, the mail and email address, telephone and fax numbers, and the session the manuscript is submitted to. Authors will be notified about acceptance of their contribution as an oral or poster presentation by June 15 1995.
Authors of accepted contributions are requested to submit a 4 page camera-ready contribution for the proceedings BEFORE JULY 1 1995. WE ADVISE THAT SUBMITTED MANUSCRIPTS ALREADY FOLLOW THE FINAL LAYOUT. Therefore, please observe these instructions carefully:
1. Text and illustrations should fill, but not extend beyond, an area of 120 x 195 mm.
2. Use a 10pt font size with line spacing of 2 pts.
3. Use a serifed font (e.g. Times).
4. Text should be justified on both left and right margins, not ragged.
5. Do not use running heads.
6. Do not use page numbers.
7. The title should be written in capital letters 2 cm from the top of the first page, followed by the authors' names and addresses and the abstract.
8. In the text, do not indent headings or captions.
9. Insert all tables, figures, and figure captions in the text at their final positions.
10. For references in the text, use numbers in square brackets.
A LaTeX style file is held on the CTAN archive at Aston University (UK). The files can be retrieved by anonymous ftp from ftp.tex.ac.uk, where the files wicread.me, wicsadv.org, wicsadv.tex, wicsbook.org and wicsbook.sty are held in the directory /pub/archive/macros/latex209/contrib/springer/wics INDUSTRIAL TRACK There will be an industrial conference entitled NEURAL NETWORKS AT WORK which runs concurrently with the scientific conference. A selection of the best working and 'money making' applications of neural networks in Europe will be presented. The industrial track is organized as part of the activities of the Esprit project SIENNA, whose aim is to conduct a survey of successful and unsuccessful applications of neural networks in the European market. For additional information and a detailed program, please contact SNN at the address below. VENUE The conference will be held at the University Auditorium and Conference Centre of the University of Nijmegen.
Instructions on how to get to the University Auditorium and Conference Centre will be sent to you with your conference registration. CONFERENCE REGISTRATION Before/after June 1 1995, the registration fee for the scientific track is NLG 250/300 for the two days and includes coffee/tea and lunches. One day registration is NLG 200. Scientific registration gives access to the scientific oral and poster presentations, and includes a copy of the scientific proceedings. Before/after June 1 1995, the registration fee for the industrial track is NLG 400/450 per day and includes coffee/tea and lunches. One day registration is NLG 300. Industrial registration gives access to the industrial track presentations as well as the scientific oral and poster presentations, and includes a copy of the scientific and the industrial proceedings. Full-time PhD or Master students may register at a reduced rate. Before/after July 15 1995, the registration fee for students is NLG 125/150 for the two days and includes coffee/tea and lunches. Student registration gives access to the scientific oral and poster presentations. Students must send a copy of their university registration card or a letter from their supervisor together with the registration form. Methods of payment are outlined in the enclosed registration form. To those who have completed the registration form with remittance of the appropriate fees, a receipt will be sent. This receipt should be presented at the registration desk at the conference. Payment must have been received by us before the conference. If not, you will have to pay in cash or with personal cheques at the conference. At the conference, CREDIT CARDS CAN NOT BE ACCEPTED. CANCELLATION Notification of cancellation must be sent in writing to the Conference Organizing Bureau of the University of Nijmegen (see address below). Cancellations received before July 1 will be refunded, excluding an administrative fee of NLG 50,-. 
Cancellations received after July 1 will not be refunded, but the proceedings will be mailed. ACCOMMODATIONS Hotel reservations will be made for you as indicated on the registration form. Payment can be made at arrival or departure of the hotel (depends on the hotel policy). All hotels are in central Nijmegen and within a ten minute bus ride from the University. The Foundation for Neural Networks and the Conference Organizing bureau cannot be held responsible for hotel reservations and related costs. LIABILITY SNN cannot be held liable for any personal accident or damage to the private property of participants during the conference. ADDRESSES Send SUBMISSIONS to: Prof.dr. C. Gielen, Dr. H. Kappen, Mrs. M. Rengers Foundation for Neural Networks (SNN) University of Nijmegen PObox 9101 6500 HB Nijmegen, The Netherlands tel: +31 80 614245 fax: +31 80 541435 email: snn at mbfys.kun.nl Send your REGISTRATION to: University of Nijmegen Conference organization bureau POBox 9111 6500 HN Nijmegen, The Netherlands tel: +31 80 615968 or 612184 fax: +31 80 567956 --------------------------------------------------------------------- REGISTRATION FORM Name: .......................................................... Mr/Mrs Affiliation: .................................................... ................................................................ Address: ........................................................ ................................................................ Zipcode/City: ................................................... Country: ........................................................ Conference Registration () I will participate at the SNN symposium Amount () industrial registration 2 days ...... () industrial registration 1 day: 14/15 september*) ...... () academic registration 2 days ...... () academic registration 1 day: 14/15 september*) ...... () student registration ...... 
*) please strike what is not applicable () I intend to present an oral or poster presentation for the scientific track Title: ............................................................ ................................................................... Session: .......................................................... ................................................................... () A 4 page abstract has been submitted () Bank transfer has been made (FREE OF BANK CHARGES) to SNN conferences, Credit Lyonnais Nederland NV Nijmegen, on bank account number 637984838, swift code CRLIJNL2RS. Amount: ................... () Charge my credit card for the amount of .................... () VISA () Master Card () American Express Card no.: Expiry date: Signature: () Please make hotel reservations in my name: Date of arrival: Date of departure: Single/double room (strike what is not applicable) single double (prices per night) () Hotel Mercure 145 165 () Hotel Apollo 90 120 () Hotel Catharina 43,50 87 (with shower and toilet) 57,50 115 --------------------------------------------------------------------- The symposium information can also be found on the World Wide Web: http://www.mbfys.kun.nl/SNN/Symposium/ From biehl at Physik.Uni-Wuerzburg.DE Thu Apr 27 17:05:57 1995 From: biehl at Physik.Uni-Wuerzburg.DE (Michael Biehl) Date: Thu, 27 Apr 95 17:05:57 MESZ Subject: paper available: Learning from Noisy Data... 
Message-ID: <199504271505.RAA00356@wptx08.physik.uni-wuerzburg.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/biehl.noisy.ps.Z The following paper has been placed in the Neuroprose archive (see above for ftp-host) and is now available as a compressed postscript file named biehl.noisy.ps.Z (5 pages of output) email address: biehl at physik.uni-wuerzburg.de **** Hardcopies cannot be provided **** ------------------------------------------------------------------ "Learning from Noisy Data: An Exactly Solvable Model" Michael Biehl, Peter Riegler, and Martin Stechert Institut fuer Theoretische Physik Julius-Maximilians-Universitaet Am Hubland D-97074 Wuerzburg Germany --------------------------------------------------------------------- Abstract: Exact results are derived for the learning of a linearly separable rule with a single-layer perceptron. We consider two sources of noise in the training data: the random inversion of the example outputs, and weight noise in the teacher network. In both scenarios we investigate on-line learning schemes which utilize only the latest in a sequence of uncorrelated random examples for an update of the student weights. We study Hebbian learning as well as on-line algorithms which achieve an optimal decrease of the generalization error. The latter realize an asymptotic decay of the generalization error that coincides, apart from prefactors, with the one found for off-line schemes. ---------------------------------------------------------------------- From lpease at admin.ogi.edu Thu Apr 27 17:32:07 1995 From: lpease at admin.ogi.edu (lpease@admin.ogi.edu) Date: Thu, 27 Apr 95 14:32:07 PDT Subject: Neural Networks short course Message-ID: <9504272132.AA26111@admin.ogi.edu> ^*^*^*^*^*^*^*^*^*^*^**^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* Linda M. Pease, Director lpease at admin.ogi.edu Office of Continuing Education Oregon Graduate Institute of Science & Technology 20000 N.W.
Walker Road, Beaverton OR 97006 USA (shipping) P.O. Box 91000, Portland, OR 97291-1000 USA (mailing) +1-503-690-1259 +1-503-690-1686 fax "The future belongs to those who believe in the beauty of their dreams" -Eleanor Roosevelt ^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* Oregon Graduate Institute of Science & Technology, Office of Continuing Education, offers the short course: NEURAL NETWORKS: ALGORITHMS AND APPLICATIONS June 12-16, 1995, at the OGI campus near Portland, Oregon. Course Organizer: John E. Moody Lead Instructor: Hong Pi With Lectures By: Dan Hammerstrom Todd K. Leen John E. Moody Thorsteinn S. Rognvaldsson Eric A. Wan Artificial neural networks (ANN) have emerged as a new information processing technique and an effective computational model for solving pattern recognition and completion, feature extraction, optimization, and function approximation problems. This course introduces participants to the neural network paradigms and their applications in pattern classification; system identification; signal processing and image analysis; control engineering; diagnosis; time series prediction; financial analysis and trading; and speech recognition. Designing a neural network application involves steps from data preprocessing to network tuning and selection. This course, with many examples, application demos and hands-on lab practice, will familiarize the participants with the techniques necessary for building successful applications. About 50 percent of the class time is assigned to lab sessions. The simulations will be based on Matlab, the Matlab Neural Net Toolbox, and other software running on 486 PCs. Prerequisites: Linear algebra and calculus. Previous experience with using Matlab is helpful, but not required. 
Who will benefit: Technical professionals, business analysts and other individuals who wish to gain a basic understanding of the theory and algorithms of neural computation and/or are interested in applying ANN techniques to real-world, data-driven modeling problems. Course Objectives: After completing the course, students will: - Understand the basic neural network paradigms - Be familiar with the range of ANN applications - Have a good understanding of the techniques for designing successful applications - Gain hands-on experience with ANN modeling. Course Outline Neural Networks: Biological and Artificial The biological inspiration. History of neural computing. Types of architectures and learning algorithms. Application areas. Simple Perceptrons and Adalines Decision surfaces. Perceptron and Adaline learning rules. Stochastic gradient descent. Lab experiments. Multi-Layer Feed-Forward Networks I Multi-Layer Perceptrons. Back-propagation learning. Generalization. Early stopping. Network performance analysis. Lab experiments. Multi-Layer Feed-Forward Networks II Radial basis function networks. Projection pursuit regression. Variants of back-propagation. Levenberg-Marquardt optimization. Lab experiments. Network Performance Optimization Network pruning techniques. Input variable selection. Sensitivity analysis. Regularization. Lab experiments. Neural Networks for Pattern Recognition and Classification Nonparametric classification. Logistic regression. Bayesian approach. Statistical inference. Relation to other classification methods. Self-Organized Networks and Unsupervised Learning K-means clustering. Kohonen feature mapping. Learning vector quantization. Adaptive principal components analysis. Exploratory projection pursuit. Applications. Lab experiments. Time Series Prediction with Neural Networks Linear time series models. Nonlinear approaches. Case studies: economic and financial time series analysis. Lab experiments.
Neural Networks for Adaptive Control Nonlinear modeling in control. Neural network representations for dynamical systems. Reinforcement learning. Applications. Lab experiments. Massively Parallel Implementation of Neural Nets on the Desktop Architecture and application demos of the Adaptive Solutions' CNAPS System. Current State of Research and Future Directions About the Instructors Dan Hammerstrom received the B.S. degree in Electrical Engineering, with distinction, from Montana State University, the M.S. degree in Electrical Engineering from Stanford University, and the Ph.D. degree in Electrical Engineering from the University of Illinois. He was on the faculty of Cornell University from 1977 to 1980 as an assistant professor. From 1980 to 1985 he worked for Intel, where he participated in the development and implementation of the iAPX-432 and i960 and, as a consultant, the iWarp systolic processor that was jointly developed by Intel and Carnegie Mellon University. He is an associate professor at Oregon Graduate Institute, where he is pursuing research in massively parallel VLSI architectures, and is the founder and Chief Technical Officer of Adaptive Solutions, Inc. He is the architect of the Adaptive Solutions CNAPS neurocomputer. Dr. Hammerstrom's research interests are in the area of VLSI architectures for pattern recognition. Todd K. Leen is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. He received his Ph.D. in theoretical physics from the University of Wisconsin in 1982. From 1982-1987 he worked at IBM Corporation, and then pursued research in mathematical biology at Good Samaritan Hospital's Neurological Sciences Institute. He joined OGI in 1989. Dr. Leen's current research interests include neural learning, algorithms and architectures, stochastic optimization, model constraints and pruning, and neural and non-neural approaches to data representation and coding.
He is particularly interested in fast, local modeling approaches, and applications to image and speech processing. Dr. Leen served as theory program chair for the 1993 Neural Information Processing Systems (NIPS) conference, and workshops chair for the 1994 NIPS conference. John E. Moody is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. His current research focuses on neural network learning theory and algorithms in its many manifestations. He is particularly interested in statistical learning theory, the dynamics of learning, and learning in dynamical contexts. Key application areas of his work are adaptive signal processing, adaptive control, time series analysis, forecasting, economics and finance. Moody has authored over 35 scientific papers, more than 25 of which concern the theory, algorithms, and applications of neural networks. Prior to joining the Oregon Graduate Institute, Moody was a member of the Computer Science and Neuroscience faculties at Yale University. Moody received his Ph.D. and M.A. degrees in Theoretical Physics from Princeton University, and graduated Summa Cum Laude with a B.A. in Physics from the University of Chicago. Hong Pi is a senior research associate at Oregon Graduate Institute. He received his Ph.D. in theoretical physics from the University of Wisconsin. His research interests include nonlinear modeling, neural network algorithms and applications. Thorsteinn S. Rognvaldsson received the Ph.D. degree in theoretical physics from Lund University, Sweden, in 1994. His research interests are neural networks for prediction and classification. He is currently a postdoctoral research associate at Oregon Graduate Institute. Eric A. Wan, Assistant Professor of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, received his Ph.D. in electrical engineering from Stanford University in 1994.
His research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, system identification, and adaptive control. He is a member of IEEE, INNS, Tau Beta Pi, Sigma Xi, and Phi Beta Kappa.

Course Dates: M-F, June 12-16, 1995, 8:30am-5pm
Course fee: $1695 (includes instruction, course materials, labs, break refreshments and lunches, Monday night reception and Thursday night dinner)

For a complete course brochure contact:
Linda M. Pease, Director
Office of Continuing Education
Oregon Graduate Institute of Science & Technology
PO Box 91000
Portland, OR 97291-1000
+1-503-690-1259
+1-503-690-1686 (fax)
e-mail: continuinged at admin.ogi.edu
WWW home page: http://www.ogi.edu

From ken at phy.ucsf.edu Thu Apr 27 18:37:49 1995
From: ken at phy.ucsf.edu (Ken Miller)
Date: Thu, 27 Apr 1995 15:37:49 -0700
Subject: yet another faculty job opening, in theoretical visual neuroscience
Message-ID: <9504272237.AA02912@coltrane.ucsf.edu>

Hi Folks, I seem to be making a habit of this, but here's another computational neuroscience job that just appeared on the same list as the last one. Once again, thought I'd redistribute it. Once again, please don't write to me about the job, I don't know anything more about it.

Ken
ken at phy.ucsf.edu

----------------------------------------------------------------------
-> Date: Thu, 27 Apr 95 12:42:45 EDT
-> From: msl at cns.NYU.EDU (Michael Landy)
-> Subject: Job Posting
-> New York University Center for Neural Science and Courant Institute
-> of Mathematical Sciences
-> As part of its Sloan Theoretical Neuroscience Program, the Center
-> for Neural Science at New York University, together with the
-> Courant Institute of Mathematical Sciences, is planning to hire an
-> Assistant Professor (tenure-track) in the field of Theoretical
-> Visual Neuroscience.
-> Applicants should have a background in
-> mathematics, physics, and/or computer science with a proven record
-> of research in visual science or neuroscience. Applications
-> (deadline June 30, 1995) should include a CV, the names and
-> addresses of at least three individuals willing to write letters of
-> reference, and a statement of research interests.
-> Send to:
-> Sloan Search Committee,
-> Center for Neural Science, New York University,
-> 4 Washington Place, New York NY 10003.
-> New York University is an affirmative action/equal opportunity employer.

From P.McKevitt at dcs.shef.ac.uk Sat Apr 29 11:58:35 1995
From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt)
Date: Sat, 29 Apr 95 11:58:35 BST
Subject: IEE COLLOQ. LONDON MAY 15TH: GROUNDING REPRESENTATIONS (MURPHY)
Message-ID: <9504291058.AA17092@dcs.shef.ac.uk>

==============================================================================
GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS
==============================================================================

NOTE: There has been a programme change below, and a new speaker (Elisabeth André: DFKI, Germany and Sheffield, England) has been added.
------------------------------------------------------------------------------

PROGRAMME AND CALL FOR PARTICIPATION

GROUNDING REPRESENTATIONS: Integration of sensory information in Natural Language Processing, Artificial Intelligence and Neural Networks

IEE COLLOQUIUM
IEE Computing and Control Division [Professional group: C4 (Artificial Intelligence)]
in association with:
British Computer Society Specialist Group on Expert Systems
and The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

MONDAY, MAY 15th, 1995
**********************
at the IEE, Savoy Place, London, ENGLAND

Chairs: NOEL SHARKEY and PAUL MC KEVITT
Department of Computer Science, University of Sheffield, England

WORKSHOP DESCRIPTION:

Perhaps the most famous criticism of traditional Artificial Intelligence is that computer programs use symbols that are arbitrarily interpretable (see Searle, 1980 for the Chinese Room and Harnad, 1990 for the symbol grounding problem). We could, for example, use the word "apple" to mean anything from a "common fruit" to a "pig's nose". All the computer knows is the relationship between this symbol and the others that we have given it. The question is, how is it possible to move from this notion of meaning, as the relationship between arbitrary symbols, to a notion of "intrinsic" meaning? In other words, how do we provide meaning by grounding computer symbols or representations in the physical world? The aim of this colloquium is to take a broad look at many of the important issues in relating machine intelligence to the world and to make accessible some of the most recent research in integrating information from different modalities. For example, why is it important to have symbol or representation grounding, and what is the role of the emerging neural network technology? One approach has been to link intelligence to the sensory world through visual systems or robotic devices such as MURPHY.
Another approach is work on systems that integrate information from different modalities such as vision and language. Yet another approach has been to examine how the human brain relates sensory, motor and other information. It looks like we may at long last be getting a handle on the age-old CHINESE ROOM and SYMBOL GROUNDING problems. Hence this colloquium has as its focus "grounding representations".

The colloquium will occur over one day and will focus on three themes: (1) Biology and development; (2) Computational models; and (3) Symbol grounding. The target audience of this colloquium will include Engineers and Scientists in Neural Networks and Artificial Intelligence, Developmental Psychologists, Cognitive Scientists, Philosophers of mind, Biologists and all of those interested in the application of Artificial Intelligence to real world problems.

PROGRAMME: Monday, May 15th, 1995
************************

INTRODUCTION:
 9.00 REGISTRATION + SUSTENANCE
10.00 `An introduction'
      NOEL SHARKEY (Department of Computer Science, University of Sheffield, ENGLAND)

COMPUTATIONAL MODELS:
10.30 `From visual data to multimedia presentations'
      ELISABETH ANDRÉ (German Research Center for Artificial Intelligence (DFKI), Saarbrücken, GERMANY) & (Department of Computer Science, University of Sheffield, ENGLAND)
11.00 `Natural language and exploration of an information space'
      OLIVIERO STOCK (Istituto per la Ricerca Scientifica e Tecnologica, IRST) (Trento, ITALY)
11.30 `How visual salience influences natural language descriptions'
      WOLFGANG MAASS (Cognitive Science Programme) (Universität des Saarlandes, Saarbrücken, GERMANY)
12.00 DISCUSSION
12.30 LUNCH

GROUNDING SYMBOLS:
 2.00 `Grounding symbols in sensorimotor categories with neural networks'
      STEVAN HARNAD (Department of Psychology, University of Southampton, ENGLAND)
 2.30 `Some observations on symbol-grounding from a combined symbolic/connectionist viewpoint'
      JOHN BARNDEN (Computing Research Laboratory, New Mexico, USA) &
(Department of Computer Science, University of Reading, ENGLAND)
 3.00 Sustenance Break
 3.30 `On grounding language with neural networks'
      GEORG DORFFNER (Austrian Institute for Artificial Intelligence, Vienna, AUSTRIA)

PANEL DISCUSSION AND QUESTIONS:
 4.00 `Grounding representations'
      Chairs + Invited speakers

S/IN S/IN:
 4.30 `De brief/comments'
      PAUL MC KEVITT (Department of Computer Science, University of Sheffield, ENGLAND)
 5.00 O/ICHE MHA/ITH
*****************************

PUBLICATION: We intend to publish the proceedings of this colloquium as a book.

ADDRESSES

IEE CONTACT:
Sarah Leong, Groups Officer
The Institution of Electrical Engineers (IEE)
Savoy Place
GB- WC2R OBL, London
England, UK, EU.
E-mail: SLeong at iee.org.uk (Sarah Leong)
E-mail: mbarrett at iee.org.uk (Martin Barrett)
E-mail: dpenrose at iee.org.uk (David Penrose)
WWW: http://www.iee.org.uk
Ftp: ftp.iee.org.uk
Fax: +44 (0) 171-497-3633
Phone: +44 (0) 171-240-1871 (general)
Phone: +44 (0) 171-344-8423 (direct)

LOCATION:
The Institution of Electrical Engineers (IEE)
Savoy Place
GB- WC2R OBL, London
England, UK, EU.

ACADEMIC CONTACT:
Paul Mc Kevitt
Department of Computer Science
Regent Court, 211 Portobello Street
University of Sheffield
GB- S1 4DP, Sheffield
England, UK, EU.
E-mail: p.mckevitt at dcs.shef.ac.uk
WWW: http://www.dcs.shef.ac.uk/
WWW: http://www.shef.ac.uk/
Ftp: ftp.dcs.shef.ac.uk
Fax: +44 (0) 114-278-0972
Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary)

REGISTRATION: Registration forms are available from SARAH LEONG at the above address and should be sent to the following address (it is NOT possible to register by E-mail):
Colloquium Bookings
Institution of Electrical Engineers (IEE)
PO Box 96
Stevenage
GB- SG1 2SD Herts
England, UK, EU.
Fax: +44 (0) 143 874 2792
Receipt Enquiries: +44 (0) 143 876 7243
Registration enquiries: +44 (0) 171 240 1871 x.2206

PRE-REGISTRATION IS ADVISED ALTHOUGH YOU CAN REGISTER ON THE DAY OF THE EVENT.
________________________________________________________________________
R E G I S T R A T I O N   C O S T S
________________________________________________________________________
(ALL FIGURES INCLUDE VAT)
IEE MEMBERS: 44.00
NON-IEE MEMBERS: 74.00
IEE MEMBERS (Retired, Unemployed, Students): FREE
NON-IEE MEMBERS (Retired, Unemployed, Students): 22.00
LUNCH TICKET: 4.70

MEMBERS: Members of the IEEIE, The British Computer Society and the Society for the Study of Artificial Intelligence and Simulation of Behaviour and Eurel Member Associations will be admitted at Members' rates.

==============================================================================
GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS
==============================================================================

From g.gaskell at psychology.bbk.ac.uk Mon Apr 3 11:15:00 1995
From: g.gaskell at psychology.bbk.ac.uk (Gareth Gaskell)
Date: Mon, 3 Apr 95 11:15 BST
Subject: Paper: Phonological Representations in Speech Perception
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/gaskell.phonrep.ps.Z

The following paper (28 pages) is available in the neuroprose archive. This paper is due to be published in Cognitive Science and examines the phonological representations and processes involved in the perception of speech from a connectionist viewpoint.

A Connectionist Model of Phonological Representation in Speech Perception
Gareth Gaskell, Mary Hare & William Marslen-Wilson

Abstract: A number of recent studies have examined the effects of phonological variation on the perception of speech. These studies show that both the lexical representations of words and the mechanisms of lexical access are organized so that natural, systematic variation is tolerated by the perceptual system, while a general intolerance of random deviation is maintained. Lexical abstraction distinguishes between phonetic features that form the invariant core of a word and those that are susceptible to variation. Phonological inference relies on the context of surface changes to retrieve the underlying phonological form. In this paper we present a model of these processes in speech perception, based on connectionist learning techniques. A simple recurrent network was trained on the mapping from the variant surface form of speech to the underlying form.
Once trained, the network exhibited features of both abstraction and inference in its processing of normal speech, and predicted that similar behavior will be found in the perception of nonsense words. This prediction was confirmed in subsequent research (Gaskell & Marslen-Wilson, 1994).

To retrieve the file:
ftp archive.cis.ohio-state.edu
login: anonymous
password:
ftp> cd /pub/neuroprose
ftp> binary
ftp> get gaskell.phonrep.ps.Z
ftp> bye
uncompress gaskell.phonrep.ps.Z
lpr gaskell.phonrep.ps [or whatever you normally do to print]

Gareth Gaskell
Centre for Speech and Language, Birkbeck College, London
g.gaskell at psyc.bbk.ac.uk

From reiner at isy.liu.se Mon Apr 3 03:59:33 1995
From: reiner at isy.liu.se (Reiner Lenz)
Date: Mon, 3 Apr 95 09:59:33 +0200
Subject: No subject
Message-ID: <9504030759.AA22685@rainier.isy.liu.se>

jaaskelainen at joyl.joensuu.fi
Subject: Paper on Neuroprose: lenz.colorpca.ps.Z, Unsupervised Filtering of Color Spectra

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/lenz.colorpac.ps.Z
HTTP: ftp://archive.cis.ohio-state.edu/pub/neuroprose/lenz.colorpac.ps.Z

The following paper has been placed in the Neuroprose archive at Ohio State University:

Title: Unsupervised Filtering of Color Spectra
Reiner Lenz, Mats Österberg, Dept. EE, Linköping University, S-58183 Linköping, Sweden
Jouni Hiltunen, Timo Jaaskelainen, Dept. Physics, University of Joensuu, FIN-80101 Joensuu, Finland
Jussi Parkkinen, Dept. Information Technology, Lappeenranta University of Technology, FIN-53851 Lappeenranta, Finland

Abstract: We describe how unsupervised neural networks can be used to extract features from databases of reflectance spectra. These databases try to sample color space in a way which reflects the properties of human color perception. In our construction of neural networks we first identify desirable properties of the network.
These properties are then incorporated into an energy function, and finally a learning rule is derived using optimization methods to find weight matrices which lead to minimal values of the energy function. We describe several energy functions and the performance of the resulting networks on the databases of reflectance spectra. The experiments show that the weight matrix for one of the systems is very similar to the eigenvector system, whereas the second type of system tries to rotate the eigenvector system in such a way that the resulting filters partition the spectrum into different bands. We will also show how the additional constraint of positive filter coefficients can be incorporated into the design. It will appear in the Proc. Scandinavian Conference on Image Analysis, Uppsala, 1995. (8 pages. No hard copies available.)

_______________________________

More information about the unsupervised network used in the paper can be found in the PhD thesis: M. Österberg: Quality functions for parallel selective principal component analysis. ISBN 91-7871-411-7

_______________________________

"Kleinphi macht auch Mist"

Reiner Lenz
Dept. EE, Linköping University
S-58183 Linköping, Sweden
email: reiner at isy.liu.se

From eplunix!peter at eddie.mit.edu Mon Apr 3 16:39:48 1995
From: eplunix!peter at eddie.mit.edu (eplunix!peter@eddie.mit.edu)
Date: Mon, 03 Apr 95 16:39:48 EDT
Subject: The neural coding problem
Message-ID: <9504031748.aa18232@eddie.mit.edu>

In a very useful note Marius Usher recently (3/31/95) brought up the neural coding problem for discussion:

> Perhaps the most crucial question in the study of cortical function is
> whether the brain uses a mean rate code or a temporal code.
> Recently a number of models have been proposed in order to account
> for the variability of spike trains (discussed by Softky and Koch, 1993).
> As it seems, each of these models can account for variability, despite their
> very different assumptions and implications regarding the "neural code".
> We are writing this note in order to highlight the specific predictions in
> which these models differ, hoping in particular to direct the attention of
> experimentalists to the "missing" data required to disambiguate between
> these theoretical models and their implications about the neural code.

For the last five years I have been investigating population interspike interval codes in the auditory system which may subserve perception of the low pitches of complex tones (periodicity pitch) and the discrimination of phonetic elements. As a consequence, I have given a great deal of thought to how central auditory structures might use the wealth of timing information which is available in the auditory periphery. Here are some thoughts that may facilitate the more general discussion of the neural coding problem:

1. Very many neural codes based on temporal patterns or times-of-arrival (including interneural synchrony codes) are possible, but only a very small subset of the possible codes, particularly of the higher-order pattern codes, have yet been seriously considered, either experimentally or theoretically. We should not rule out more complex codes on the basis of not finding evidence for the simplest or most obvious ones.

2. Neural codes are generally not mutually exclusive. A given spike train can be interpreted in different ways by different neural assemblies downstream. Thus discharge rates could be used by some cell populations, temporal patterns by others, and patterns of spike latencies by still others. A possible example of this can be found in the ascending auditory pathway of many mammals, where there are several brainstem pathways which subserve auditory localization.
Some pathways appear to convey binaural level differences encoded in discharge rates while others appear to convey interaural time differences encoded in spike latencies and interneural synchronies. There are almost undoubtedly real neural architectures which gracefully fuse both rate and time-based information (cf. Licklider), but very few "duplex" models have been proposed.

3. Often several aspects of neural discharge covary. For example, in the peripheral auditory system the roles of discharge rates and temporal patterns are hard to separate, since both kinds of information are present together in nearly all auditory populations.

4. While information can be encoded in the discharges of individual neurons, it seems likely from reliability considerations that information is encoded in the activity of populations of neurons. We are very familiar by now with the possibility of distributed rate codes, but it is also possible to have distributed temporal codes. An example of a distributed synchrony code is the "volley principle" in the auditory system. An example of a distributed temporal pattern code would be a population interspike interval code, where the all-order interval distribution of a population conveys information concerning stimulus periodicities. Distributed temporal codes can be either synchronous or asynchronous. Every hypothetical neural code has a corresponding hypothetical processing architecture.

5. Deciding whether a particular pattern of discharge is a "code" (i.e. that it has a functional role in the representation, transmission, and processing of information) is a difficult problem, since there are only a few systems whose function is understood well enough to see immediately what role a given putative encoding would play. Possibly the most direct way to demonstrate that a given discharge pattern has functional significance is to impose a particular pattern of activity on a neural population, e.g.
by electrical stimulation, and to observe the perceptual and behavioral consequences. Specific electrical time patterns are known to evoke particular sensations in many diverse sensory modalities: audition (single-channel cochlear implants, Eddington), somatoception (Mountcastle), gustation (Covey, DiLorenzo), and even color vision (Young). The next best thing is to look for correspondences between neurophysiology and psychophysics by comparing how closely a putative neural representation covaries with the percepts/behaviors which the representation hypothetically subserves. On the perceptual side this is a stimulus-coding problem -- does the code covary with perceptual performances? If discharge rates saturate and representations based on rates are degraded at high stimulus intensities when perceptual discriminations are unchanged or even improve, then this is evidence against a functional role for rate representations (in lieu of elaborate compensatory mechanisms, which then must be found and incorporated into the representation's description).

6. Putative codes can be ruled out by showing that the information needed to perform a particular perceptual or behavioral task is not present in the discharge activity of a particular population. It is important not to erect "straw man" codes when trying to rule out possible coding schemes. In general, the kinds of temporal codes thus far considered in the literature have been only the most simple and obvious ones, and much more consideration needs to be given to population synchrony codes (a la Abeles' synfire codes) and asynchronous temporal pattern codes (a la Abeles' neurophysiological results).

7. If one finds a correspondence between discharge rates and some perceptually- or behaviorally-relevant distinction, this does not necessarily rule out a time code.
Because rate-codes have been the conventional assumption of most of neuroscience, physiological investigations often stop when scientists find what they are looking for, i.e. rate-based correspondences with perception or behavior. However, underlying the rate-based responses may be complex time patterns of excitation and inhibition that may better correspond to the psychophysics than the rate code itself. Arguably this is the case in explaining frequency selectivity in the peripheral auditory system: while one can point to rate-based "frequency-channels" in the auditory nerve, the interspike interval distributions of the auditory nerve fibers yield much more robust and higher quality information (Goldstein) which, like the percepts, does not degrade with higher stimulus sound pressure levels. A similar situation exists in the fly visual system (Bialek, Reichardt).

8. Long temporal integration times do not preclude temporal coding. In the auditory system there are a number of reasons to believe that the time window for fusing sounds is on the order of 5-10 msec (e.g. Chistovitch), whereas there is a longer build-up process associated with the apparent loudnesses of sounds of short durations (Zwislocki, Chistovitch). (We have many examples of rate-based processing models in the literature, but a dearth of time-based ones -- I will therefore outline a possible temporal integration mechanism as an example.) Let us suppose that we have a complex acoustic stimulus with a low pitch, say a vowel with a fundamental frequency F0 (voice pitch) of 100 Hz. The most frequent interspike interval in the population of auditory nerve fibers and (probably) most cochlear nucleus populations will be 1/F0 = 10 msec. At the auditory cortex, this voice pitch will be seen in periodicities of auditory evoked potentials (e.g.
Steinschneider et al), so there are evidently populations of neurons which are discharging either singly or in spatially-distributed volleys at intervals of 10 msec. There are probably other units which have discharge periodicities related to 10 msec which are not synchronized relative to the rest of the population. There are many recurrent pathways within the auditory cortices and the thalamus where spike trains containing a disproportionate number of these intervals can circulate. It is not then hard to imagine a temporal cross-correlation process between intervals circulating in these loops and incoming temporal patterns, and as 10 msec intervals are differentially facilitated based on their prevalence in the reverberating loops, this kind of structure would produce an asynchronous build-up of 10 msec intervals over longer time windows. It's only a sketch, but such mechanisms do not seem to be out of the question.

Marius Usher also gave an example in favor of rate-coding:

> Proponents of the coincidence detection principle may need to find an
> explanation for the wealth of evidence showing integration in the perceptual
> system. For example, the Bloch law (Loftus and Ruthruff 1993) shows that, for
> stimuli of duration shorter than 100 msec, perceptual discrimination depends
> only on the INTEGRAL of the stimulus (a high contrast 10 msec stimuli,
> produces exactly the same affect on perception as a 20 msec stimuli
> of half contrast).

While I am not a visual scientist, I do know that images of very short duration (tachistoscopically presented) can be recognized, and that, as in the auditory system, the time windows for perceptual fusion are much shorter than for integration of intensity-related information. There are many alternatives to rate-based models of intensity discrimination, but these are generally underdeveloped.
Two general classes of alternative models would be latency-based models (latency, latency variances) and temporal correlation models (population interval models). The latency codes need gating/reset mechanisms in addition to buildup loops. Apparently the latency of visual evoked potentials corresponds well to subjective brightness (see S.S. Stevens, "Sensory power functions and neural events," in Principles of Receptor Physiology, Loewenstein, ed., Springer-Verlag, 1971), so that there are some examples, even in vision, of possible codes not based on rate. In addition, Reichardt et al and Bialek et al found evidence for temporal cross-correlation operations in insect motion detection, and Chung, Raymond, and Lettvin found interspike intervals corresponding to various luminance conditions in the frog.

At the risk of heresy, I think that there could be a general temporal correlation theory of vision, particularly of form vision. I would think that there would be a great deal of spatio-temporal structure which would be imposed on retinal responses whenever a structured image moves across the receptor array, which, as I understand it, is the normal state of affairs -- the eye is always moving, even in "fixation". I have never understood how a rate-based model of visual form accounts for this (and what is it that is integrated over 100 msec if the image is constantly moving?). For an alternative temporal model of visual form it would be useful to know exactly how reliable the latency distributions of vertebrate retinal ganglion cells are to edges crossing their fields and how much temporal cross-correlation information might exist between ganglion cells in a local region. Does anyone know offhand if (where) such data exists? If the temporal correlations in the retinal responses are what matter, then higher contrast stimuli should produce more spatio-temporal structure.
It may be the case that the 10 msec high-contrast stimulus may generate as many temporal cross-correlations as the 20 msec low-contrast stimulus, and that these cross-correlation patterns are integrated at higher levels through recurrent temporal cross-correlation. It would also be worth checking whether the rate-model corresponds to the psychophysics under a wide variety of conditions (very low and very high light levels, in the presence of visual noise, chromatic light, etc.). In the auditory system, auditory nerve discharge rate models work fairly well for moderate sound pressure levels but not very well for levels near threshold or high levels.

9. (last thought) There also are many perceptual phenomena which are not easily explained using average rate models, but which are explicable in terms of temporal codes. Some of these are: the low pitch of unresolved harmonics, the pitch of repetition noise, the pitch of AM noise, the perception of different vibratory frequencies in somatoception, achromatic color (Benham's Top), the Pulfrich effect (latency & perception of depth), the perception of visual "spatial beats" which is the analogue of the "missing fundamental" in audition (Hammett & Smith), and all of the electrical stimulation examples alluded to above. Bekesy reportedly was able to localize stimuli differing in arrival time by 1-2 msec using many different modalities (e.g. audition, somatoception, taste, olfaction), presumably on the basis of latency differences (Bower, Bekesy). There is really a bewildering array of physiological and psychophysical evidence that needs to be examined in some sort of systematic way for correspondences. I've started to collect and collate the disparate evidence for temporal coding, and I have yet to find a sensory modality for which there is not some evidence available in the literature.
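The population all-order interval idea discussed above (points 4 and 8) can be illustrated with a toy simulation. This is my own sketch, not code from the original post: spike trains loosely phase-locked to a 100 Hz fundamental are pooled across simulated fibers, all-order interspike intervals (all pairwise later-minus-earlier spike times within a train) are histogrammed, and the most common short interval is read out, which should fall near the 1/F0 = 10 msec period. All parameter values (firing probability, jitter, fiber count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
F0 = 100.0        # fundamental frequency, Hz (voice pitch in the example above)
DUR = 1.0         # simulated duration, s
N_FIBERS = 50     # number of simulated fibers (illustrative)
P_FIRE = 0.6      # probability a fiber fires on a given stimulus cycle
JITTER = 0.0005   # spike-time jitter, s

def spike_train():
    """One fiber: at most one spike per cycle, loosely phase-locked to F0."""
    cycles = np.arange(0.0, DUR, 1.0 / F0)
    fired = cycles[rng.random(cycles.size) < P_FIRE]
    return np.sort(fired + rng.normal(0.0, JITTER, fired.size))

def all_order_intervals(t, max_iv=0.015):
    """All positive pairwise spike-time differences below max_iv seconds."""
    d = t[None, :] - t[:, None]   # d[i, j] = t[j] - t[i]
    d = d[d > 0.0]
    return d[d < max_iv]

# Pool all-order intervals across the population of fibers.
pop = np.concatenate([all_order_intervals(spike_train())
                      for _ in range(N_FIBERS)])

# Histogram the pooled intervals and read out the most frequent one.
hist, edges = np.histogram(pop, bins=np.arange(0.0, 0.015, 0.0005))
i = int(np.argmax(hist))
peak_ms = 1000.0 * 0.5 * (edges[i] + edges[i + 1])
print(f"most frequent all-order interval: {peak_ms:.2f} ms")  # near 1/F0 = 10 ms
```

With a longer analysis window, peaks would also appear at integer multiples of 1/F0; a pitch estimator of this kind would take the first major peak of the pooled interval distribution as the pitch period.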
The neural coding problem is fundamental because until we understand the nature of the neural signals involved, we may miss those aspects of neural activity which are essential to the functional organization of the system.

Dr. Peter Cariani
Eaton Peabody Laboratory of Auditory Physiology
Massachusetts Eye & Ear Infirmary
243 Charles St., Boston, MA 02114 USA
4/3/95
email: eplunix!peter at eddie.mit.edu
tel: (617) 573-4243
FAX: (617) 720-4408

References
--------------------------------------------------------------------------

Abeles, M., H. Bergman, E. Margalit, and E. Vaadia. "Spatiotemporal firing patterns in the frontal cortex of behaving monkeys." J. Neurophysiol. 70 (4 1993): 1629-1638. See also Abeles et al in Concepts in Neuroscience, 4(2): 131-158, 1993.

Bekesy, Georg von. "Olfactory analogue to directional hearing." Journal of Applied Physiology 19 (3 1964a): 369-373.

Bekesy, Georg von. "Rhythmical variations accompanying gustatory stimulation observed by means of localization phenomena." Journal of General Physiology 47 (5 1964b): 809-825.

Bialek, W., F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. "Reading a neural code." Science 252 (28 June 1991): 1854-1856. Fly vision.

Bower, T. G. R. "The evolution of sensory systems." In Perception: Essays in Honor of James J. Gibson, ed. Robert B. MacLeod and Herbert Pick Jr. 141-152. Ithaca: Cornell University Press, 1974. (Bekesy anecdotes)

Bullock, T.H. "Signals and neural coding." In The Neurosciences: A Study Program, ed. G.C. Quarton, T. Melnechuck, and F.O. Schmitt. 347-352. New York: Rockefeller University Press, 1967. General review.

Cariani, P. and B. Delgutte. "Interspike interval distributions of auditory nerve fibers in response to variable-pitch complex stimuli." Assoc. Res. Otolaryng. (ARO) Abstr. (1992).

Cariani, P. and B. Delgutte. "The pitch of complex sounds is simply coded in interspike interval distributions of auditory nerve fibers." Soc. Neurosci. Abstr. 18 (1992): 383.

Cariani, P.
As if time really mattered: temporal strategies for neural coding of sensory information. Communication and Cognition - Artificial Intelligence, 1995, 12(1-2): 161-229. Preprinted in: Origins: Brain and Self-Organization, K. Pribram, ed., Lawrence Erlbaum Assoc., 1994; 208-252.
Chistovich, L. A. "Central auditory processing of peripheral vowel spectra." J. Acoust. Soc. Am. 77 (3 1985): 789-805. Time window for fusion of spectral shapes.
Chung, S.H., S.A. Raymond, and J.Y. Lettvin. "Multiple meaning in single visual units." Brain Behav Evol 3 (1970): 72-101. Interval codes in frog vision.
Covey, Ellen. "Temporal Neural Coding in Gustation." Ph.D. thesis, Duke University, 1980. Time pattern codes in rodent taste.
Delgutte, B. and P. Cariani. "Coding of the pitch of harmonic and inharmonic complex tones in the interspike intervals of auditory nerve fibers." In The Processing of Speech, ed. M.E.H. Schouten. Berlin: Mouton de Gruyter, 1992.
Di Lorenzo, Patricia M. and Gerald S. Hecht. "Perceptual consequences of electrical stimulation in the gustatory system." Behavioral Neuroscience 107 (1993): 130-138. Time pattern codes in rodent taste.
Eddington, D. K., W.H. Dobelle, D. E. Brackman, M. G. Mladejovsky, and J. Parkin. "Place and periodicity pitch by stimulation of multiple scala tympani electrodes in deaf volunteers." Trans. Am. Soc. Artif. Intern. Organs XXIV (1978).
Festinger, Leon, Mark R. Allyn, and Charles W. White. "The perception of color with achromatic stimulation." Vision Res. 11 (1971): 591-612.
Goldstein, J. L. and P. Srulovicz. "Auditory-nerve spike intervals as an adequate basis for aural frequency measurement." In Psychophysics and Physiology of Hearing, ed. E.F. Evans and J.P. Wilson. London: Academic Press, 1977.
Hammett, S.T. and A.T. Smith. "Temporal beats in the human visual system." Vision Research 34 (21 1994): 2833-2840. Missing (spatial) fundamentals.
Kozak, W.M. and H.J. Reitboeck. "Color-dependent distribution of spikes in single optic tract fibers of the cat."
Vision Research 14 (1974): 405-419.
Kozak, W.M., H.J. Reitboeck, and F. Meno. "Subjective color sensations elicited by moving patterns: effect of luminance." In Seeing Contour and Colour, ed. J.J. Kulikowski and C.M. Dickinson. 294-310. New York: Pergamon Press, 1989.
Licklider, J.C.R. "A duplex theory of pitch perception." Experientia VII (4 1951): 128-134. Mixed time-place autocorrelation model.
Licklider, J.C.R. "Three auditory theories." In Psychology: A Study of a Science. Study I. Conceptual and Systematic, ed. Sigmund Koch. 41-144. Volume I. Sensory, Perceptual, and Physiological Formulations. New York: McGraw-Hill, 1959.
Macrides, F. "Dynamic aspects of central olfactory processing." In Chemical Signals in Vertebrates, ed. D. Muller-Schwarze and M. M. Mozell. 207-229. 3. New York: Plenum, 1977. Time patterns in smell. See also more recent work by Gilles Laurent on insect olfaction: Science, 265: 1872-1875, Sept 23, 1994.
Macrides, Foteos and Stephan L. Chorover. "Olfactory bulb units: activity correlated with inhalation cycles and odor quality." Science 175 (7 January 1972): 84-86. Temporal code for smell.
Morrell, F. "Electrical signs of sensory coding." In The Neurosciences: A Study Program, ed. G.C. Quarton, T. Melnechuk, and F.O. Schmitt. 452-469. New York: Rockefeller University Press, 1967. Review.
Mountcastle, Vernon. "The problem of sensing and the neural coding of sensory events." In The Neurosciences: A Study Program, ed. G.C. Quarton, T. Melnechuk, and F.O. Schmitt. New York: Rockefeller University Press, 1967. Review.
Mountcastle, Vernon. "Temporal order determinants in a somesthetic frequency discrimination: sequential order coding." Annals New York Acad. Sci. 682 (1993): 151-170. Problem of vibration discrimination/neural representation.
Mountcastle, V.B., W.H. Talbot, H. Sakata, and J. Hyvarinen. "Cortical neuronal mechanisms in flutter-vibration studied in unanesthetized monkeys. Neuronal periodicity and frequency discrimination." J.
Neurophysiol. 32 (1969): 452-485.
Reichardt, Werner. "Autocorrelation, a principle for the evaluation of sensory information by the central nervous system." In Sensory Communication, ed. Walter A. Rosenblith. 303-317. New York: MIT Press/Wiley, 1961. See also Egelhaaf & Borst, "A look into the cockpit of the fly: visual orientation, algorithms, and identified neurons." J. Neuroscience 13 (11 1993): 4563-4574.
Uttal, W.R. The Psychobiology of Sensory Coding. New York: Harper and Row, 1973. Review of the coding problem.
Young, R.A. "Some observations on temporal coding of color vision: psychophysical results." Vision Research 17 (1977): 957-965. Electrical temporal pattern stimulation produces colored phosphenes.
Zwislocki, J. "Theory of temporal auditory summation." J. Acoust. Soc. Am. 32 (8 1960): 1046-1060.
From cohn at psyche.mit.edu Mon Apr 3 13:49:56 1995 From: cohn at psyche.mit.edu (David Cohn) Date: Mon, 3 Apr 95 13:49:56 EDT Subject: reminder: Active Learning position papers due April 14 Message-ID: <9504031749.AA08795@psyche.mit.edu> AAAI Fall Symposium on Active Learning November 10-12, at MIT An active learning system is one that can influence the training data it receives by actions or queries to its environment. Potential participants in the AAAI Fall Symposium on Active Learning are invited to submit a short position paper (at most two pages) discussing what they could contribute to a dialogue on active learning and/or what they hope to learn by participating. Send papers to arrive by April 14, 1995 to: David D. Lewis (lewis at research.att.com) AT&T Bell Laboratories 600 Mountain Ave.; Room 2C-408 Murray Hill, NJ 07974-0636; USA Electronic mail submissions are strongly preferred. The full Call for Participation is available via world wide web at or by contacting me or lewis at research.att.com.
-David Cohn (cohn at psyche.mit.edu) Co-chair, AAAI FSS on Active Learning From ucganlb at ucl.ac.uk Tue Apr 4 05:42:10 1995 From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London) Date: Tue, 04 Apr 95 10:42:10 +0100 Subject: rate vs temporal coding Message-ID: <254412.9504040942@link-1.ts.bcc.ac.uk> How I enjoyed Peter Cariani's comments on rate vs. temporal coding! In our lab there are also some data supporting a role for temporal coding, concerning the rat hippocampus and navigation (O'Keefe & Recce, 1993), and some (simplistic) models of it, combining rate and temporal coding (Burgess et al., 1993 & 1994). A paper by Nicolelis et al. (1993) also points to temporal coding in the rat's thalamic representation of (sensory stimulation to) its face. All the best, Neil
Burgess N, O'Keefe J \& Recce M (1993) `Using hippocampal `place cells' for navigation, exploiting phase coding', in: S. J. Hanson, C. L. Giles and J. D. Cowan (eds.) {\it Advances in Neural Information Processing Systems 5}, 929-936. San Mateo, CA: Morgan Kaufmann. (or neuroprose/burgess.hipnav.ps.Z)
Burgess N, Recce M \& O'Keefe J (1994) `A model of hippocampal function', {\it Neural Networks}, {\bf 7}, 1065-1081. (or neuroprose/burgess.hipmod.ps.Z; http://rana.usc.edu:8376/~aguazzel/cs664/Burgess/paper.html)
Nicolelis M A L, Lin R C S, Woodward D J \& Chapin J K (1993) `Dynamic and distributed properties of many-neuron ensembles in the ventral posterior medial thalamus of awake rats', {\it Proc. Natl. Acad. Sci. USA} {\bf 90} 2212-2216.
O'Keefe J \& Recce M (1993) `Phase relationship between hippocampal place units and the EEG theta rhythm', {\it Hippocampus} {\bf 3} 317-330.
From pja at barbarian.endicott.ibm.com Tue Apr 4 11:20:00 1995 From: pja at barbarian.endicott.ibm.com (Peter J.
Angeline) Date: Tue, 4 Apr 1995 11:20:00 -0400 Subject: CFP for Genetic Programming Workshop at ICGA95 Message-ID: <9504041520.AA14075@barbarian.endicott.ibm.com> Call for Participation Advances in Genetic Programming Workshop at ICGA-95 and _Advances in Genetic Programming II_ to be published by MIT Press An informal workshop on Genetic Programming is planned for ICGA-95. This workshop is intended to bring together conference attendees interested in Genetic Programming in a more informal format, to foster discussion and review the most recent work in the field. The workshop will consist of several presentations by researchers currently working on various advanced topics in Genetic Programming. As with the GP workshop at ICGA-93, an edited book of papers archiving current state-of-the-art research in Genetic Programming is also planned. This book will be composed chiefly of papers from the workshop but will also include additional original submitted work. We are currently soliciting submissions both for presentation at the workshop and for publication in the edited book. Appropriate topics include, but are not restricted to, the following:
o Theory of genetic programming
o Extensions to Genetic Programming
o Evolution of modular GP structures
o Comparisons between Genetic Programming and other techniques
o Coevolution in GP
o Hybrid algorithms using GP elements
o Novel applications of Genetic Programming
Authors interested in presenting at the workshop and/or being considered for inclusion in the book should submit an extended abstract describing their work to one of the workshop organizers listed below no later than June 6, 1995. Abstracts describing work in progress will be considered but will be given lower priority than abstracts reporting results. Extended abstracts should be no longer than 3 pages in 12 pt. font, including graphs and references, when printed.
Please submit PostScript and/or ASCII via email if possible, although hardcopies will also be accepted. Abstracts describing work accepted for presentation at ICGA-95 should NOT be submitted. Please mark your abstract with the phrase "WORKSHOP", "BOOK" or "WORKSHOP AND BOOK" to signify your submission's status. All unmarked submissions will be considered for inclusion in both the workshop and the book. Authors of abstracts tentatively accepted for the book must submit a full paper describing their work to the editors on or before August 1, 1995. Authors will be notified of final acceptance to the edited book on August 28, 1995. Additional information concerning preparation of the paper for the edited book will be sent to participants after final acceptance. Camera-ready papers will be due September 19, with publication in spring of 1996. Abstracts for both the workshop and the edited book will be selected for their originality, clarity and contribution to Genetic Programming. Important Dates: ----------------
June 6, 1995       Extended abstracts due to workshop organizers
June 25, 1995      Notification of abstracts selected for the workshop and tentative acceptance of abstracts for the edited book
July 15-19, 1995   ICGA-95 Conference
August 1, 1995     Papers for book must be RECEIVED by editors
August 28, 1995    Final notification of acceptance to edited book sent
Sept. 19, 1995     Final camera-ready copies must be RECEIVED
Workshop Organizers / Editors: Peter J.
Angeline Loral Federal Systems MD 0210 1801 State Route 17C Owego, NY 13827 pja at lfs.loral.com Kim Kinnear Adaptive Computing Technology 62 Picnic Rd Boxboro, MA 01719 kinnear at adapt.com From bill at nsma.arizona.edu Wed Apr 5 02:32:20 1995 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Tue, 04 Apr 1995 23:32:20 -0700 (MST) Subject: discussion on variability and neural codes Message-ID: <9504050632.AA17789@nsma.arizona.edu> Marius Usher and Martin Stemmler write: > Perhaps the most crucial question in the study of cortical function > is whether the brain uses a mean rate code or a temporal code. I would like to argue that we should refrain from putting the question in these terms, because it is not productive. But first I should say that my criticism applies only to this one sentence that Marius and Martin wrote, not to the rest of their presentation, which I think was quite sophisticated and insightful. The problem with posing the question in terms of a mean rate code versus a temporal code is that the reality is clearly somewhere in between. A mean rate code is one in which shifting the time of an action potential makes no difference; a temporal code is one in which shifting the time does make a difference. For any code actually used in the brain, though, shifting the time of an action potential will make a difference if and only if the shift is sufficiently large. This is actually pretty obvious: shifting a spike by 1 nanosecond surely won't make a difference anywhere in the brain, but shifting by 1 year surely will make a difference everywhere. The right question to ask is how large a shift it takes to make a difference. 
We can think of this in terms of a plot of the following form:

        |
 Effect |                              ***************************
        |                   ***********
        |            *******
        |         ***
        |       **
        |     **
        |    *
        |   *
        |  *
        | *
        |*
        |*
        |_____________________________________________________________
                              Spike Time Shift

Thus, for any sort of imaginable code, the effect of shifting a spike will increase linearly for very small shifts, and will eventually saturate at a level beyond which further time shifts have no greater effect. (The saturation level is equal to the effect of deleting the spike entirely.) Of course, complicated things may happen in between. Instead of asking whether we are looking at a mean rate code or a temporal code (which is a meaningless question), we should ask what the shape of the time-shift-vs-effect curve is, and in particular, what the largest and smallest time constants are. Note that, although the shape of the curve may change if the effect is quantified in a different way, the time constants are likely to remain similar. In some parts of the brain, the auditory system in particular, time constants in the submillisecond range are clearly present. In a wide range of systems, though, including Bialek's fly motion cells, Wyeth Bair's MT cells, and the hippocampal place cells our group has been working with, the smallest time constants seem to be in the 10-20 msec range. As a cynic might perhaps expect, this range is perfectly positioned for both the mean rate and temporal coding camps to seize on as evidence to support their views, thereby confusing the issue almost beyond hope. I think we will make better progress in understanding neural coding if we can get beyond this simplistic dichotomy. In summary: Ask not whether 'tis a rate code or a time code; ask rather what the time constant is.
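[Ed.: the curve Bill describes can be sketched numerically. The following is an illustrative construction, not taken from any study cited in this thread: the "effect" of shifting a single spike is measured as the L2 distance between the postsynaptic traces of the original and shifted spike, using an assumed smooth alpha-function kernel with an assumed 10 msec time constant.]

```python
import numpy as np

# Illustrative sketch of the time-shift-vs-effect curve (assumed model):
# the "effect" of shifting one spike is the L2 distance between the
# postsynaptic traces produced by the original and the shifted spike.

dt = 0.1                                   # ms per sample
tau = 10.0                                 # ms, kernel time constant (assumed)
t = np.arange(0, 200, dt)
kernel = (t / tau) * np.exp(1 - t / tau)   # alpha function, peak 1 at t = tau

def effect_of_shift(shift_ms):
    """L2 distance between a spike's trace and the same spike shifted."""
    k = int(round(shift_ms / dt))
    shifted = np.zeros_like(kernel)
    shifted[k:] = kernel[:len(kernel) - k]
    return np.sqrt(np.sum((kernel - shifted) ** 2) * dt)

# For shifts much smaller than tau the effect grows roughly linearly; for
# shifts much longer than tau it saturates near sqrt(2) times the kernel's
# norm, i.e. near the effect of deleting the spike entirely.
small, large = effect_of_shift(0.1), effect_of_shift(100.0)
```

Doubling a small shift roughly doubles the effect, while shifts beyond a few time constants all produce essentially the same (saturated) effect, reproducing the shape of the ASCII plot above.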
-- Bill From Roland.Baddeley at psy.ox.ac.uk Wed Apr 5 09:26:00 1995 From: Roland.Baddeley at psy.ox.ac.uk (Roland Baddeley) Date: Wed, 5 Apr 1995 14:26:00 +0100 Subject: Paper available on exploratory projection pursuit Message-ID: <199504051326.OAA28015@axp01.mrc-bbc.ox.ac.uk> The following 21-page manuscript is now available by ftp: FTP-host: axp01.mrc-bbc.ox.ac.uk FTP-filename: /pub/pub/users/rjb/fyfe_project.ps.Z Hardcopies are not available. ---------------------------------------------------------------------------- Non-linear Data Structure Extraction Using Simple Hebbian Networks Colin Fyfe, Dept of Computer Science, University of Strathclyde, Scotland email: fyfe_ci0 at paisley.ac.uk and Roland Baddeley, Department of Physiology and Experimental Psychology, University of Oxford, England OX1 3UD. email: Roland.Baddeley at psy.ox.ac.uk Abstract: We present a class of neural networks based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing Principal Component Analysis (PCA). In this paper, this is extended to Exploratory Projection Pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA. This paper is to appear in Biological Cybernetics.
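[Ed.: a minimal sketch of the negative-feedback idea in the abstract above, reduced to a single output unit. This reduction is my own, not the paper's multi-unit EPP network: feeding the activation back subtractively before a plain Hebbian update reproduces Oja's rule, and the weight vector converges to the first principal component with no explicit normalisation, decay, or clipping. The data and learning rate are invented for the demo.]

```python
import numpy as np

# Single-unit sketch of a negative-feedback Hebbian network (an
# illustrative reduction, not the paper's multi-unit EPP model).
# Feedforward: y = w . x; negative feedback: residual e = x - y*w;
# Hebbian update on the residual: dw = eta * y * e (Oja's rule).

rng = np.random.default_rng(0)
n, d = 3000, 5
X = rng.standard_normal((n, d))
X[:, 0] *= 3.0                    # dominant direction along the first axis

w = rng.standard_normal(d) * 0.1
eta = 0.005
for _ in range(10):               # a few passes over the data
    for x in X:
        y = w @ x                 # feedforward activation
        e = x - y * w             # negative feedback of the activation
        w += eta * y * e          # simple Hebbian learning on the residual

# w ends up close to unit length, aligned with the leading principal axis,
# even though nothing in the rule normalises or clips the weights.
```

The self-stabilisation is the point: the subtractive feedback term supplies exactly the decay that other Hebbian schemes must add by hand.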
________________________________________ Roland Baddeley Department of Psychology and Physiology South Parks Road University of Oxford Oxford, England OX1 3UD ________________________________________ From FRYRL at f1groups.fsd.jhuapl.edu Wed Apr 5 10:30:00 1995 From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.) Date: Wed, 05 Apr 95 10:30:00 EDT Subject: Temporal Information Message-ID: <2F821D00@fsdsmtpgw.fsd.jhuapl.edu> Establishing what actually comprises information in biological systems is an essential problem, since this determination provides the basis for the analytical evaluation of the information-processing capability of neural structures. In response to the question "What comprises information to a neuron?" consider the answer that those quantities which are observable or measurable by a neuron represent information. Hence what is information to one neuron may not necessarily be information to another. Now, as current discussions have pointed out, there are many possibilities regarding what exactly these measurable quantities might consist of in the way of rate encoding, time-of-arrival, and so on, or even combinations thereof. Consider the following simplistic perspective. Observable quantities may be measured in both space and time, both of which can conceptually be thought of as being quantized in a neural context. Spatial quantization occurs due to the specificity of synaptic inputs (or perhaps axonal inputs, according to current understanding of some neural structures) to a given neuron. The synaptic efficacies can be viewed as a Hermitian measurement operator giving rise to the somatic measured quantity. In a dual sense, time is also quantized if time-of-arrival is the critical temporal measurement quantity of specific action potentials, which either do or do not exist at a given instant in time.
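[Ed.: a toy rendering of this dual space-time measurement, my construction rather than Fry's derivation: treat the observable as a weighted, delayed input a*x(t - tau) and choose the delay that makes the delayed input most informative about the output. Correlation stands in for mutual information here; for jointly Gaussian signals I = -0.5*log(1 - rho^2), which is monotone in |rho|. Signals and the delay are invented for the demo.]

```python
import numpy as np

# Toy sketch of selecting a learned delay (illustrative construction):
# the observable is a weighted, delayed input a * x(t - tau), and tau is
# chosen by grid search to maximize how informative the delayed input is
# about the output, with |correlation| standing in for mutual information.

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)                      # input signal (invented)
true_delay = 7
y = 0.8 * np.roll(x, true_delay) + 0.1 * rng.standard_normal(2000)

def informativeness(tau):
    """|correlation| between the input delayed by tau and the output."""
    return abs(np.corrcoef(np.roll(x, tau), y)[0, 1])

best_tau = max(range(30), key=informativeness)     # grid search over delays
# best_tau recovers the delay the output actually depends on.
```

A learned {tau_n} in Fry's sense would be the per-synapse analogue of this search, carried out by an adaptation mechanism rather than by enumeration.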
The term "instant" used here obviously must be considered in regard to "Bill's" question of what the critical time constants are for the subject neural assemblies. There is empirical evidence that there are adaptation mechanisms in place which serve to modulate time-of-arrival, giving rise to a delay vector having a one-to-one correspondence with the efficacy vector. From this perspective there is a dual time-space dependency in at least some of the quantities observable by an individual neuron. The observable quantity would then consist of a_n*x(t-tau_n), where a_n is the learned connection strength and tau_n is the learned delay. This has been the basis for my research, in which I have been applying the basic Shannon measures of entropy, mutual information, and relative entropy to the study of neural structures which are optimal in an information-theoretic sense; some of my publications and papers on this exist in the neuroprose repository. With this view, the sets {a_n} and {tau_n} are seen to represent Lagrange vectors which serve to maximize the mutual information between neural inputs and output. This is of course a personal perspective, and obviously there may be many other temporal modalities for the inter-neuron exchange of information. It can be argued, however, that the above modality is in many ways the simplest. Analytically, it seems a very tractable perspective as opposed to rates, latencies, etc. Robert Fry Johns Hopkins University / Applied Physics Laboratory Johns Hopkins Road Laurel, MD 20723 From eplunix!peter at eddie.mit.edu Wed Apr 5 11:43:23 1995 From: eplunix!peter at eddie.mit.edu (eplunix!peter@eddie.mit.edu) Date: Wed, 05 Apr 95 11:43:23 EDT Subject: Codes and time constants Message-ID: <9504051243.aa04074@eddie.mit.edu> Regarding mean rate vs.
temporal codes, Bill Skaggs (4/4/95) commented: > Instead of asking whether we are looking at a mean rate code or a > temporal code (which is a meaningless question), we should ask what the > shape of the time shift-vs-effect curve is, and in particular, what the > largest and smallest time constants are. Note that, although the shape > of the curve may change if the effect is quantified in a different way, > the time constants are likely to remain similar. I know this is the way that many people think of the problem of rate vs. temporal codes, but it can lead to the conflation of codes and concepts which, in my opinion, really are different and should be kept distinct. The issue of time constants is related to the temporal precision needed to convey information via some code (what distortions in spike time pattern are sufficient to change one message into another?). The issue of what spike train patterns convey the message is complementary to the issue of precision. (e.g. the average power of a signal is a different property than its Fourier spectrum, regardless of what sampling rate is used to specify the signal.) An average rate code means that the average number of spikes within a given temporal integration window is all that counts in determining the message, i.e. rearranging spike time patterns without changing the number of spikes within a window should have no effect (otherwise we would have something more elaborate than a pure mean rate code). A temporal code is one in which the message sent is determined by: 1) the time patterns of spikes (e.g. the complex Fourier spectrum of the spike pattern) or 2) the particular spike arrival times relative to some reference event (e.g. the return time after an echolocation call, or absolute time-of-arrival relative to that of other spikes in spike trains produced by other neurons -- interneural synchrony). 
For a temporal pattern code, if the time patterns of spike arrivals are scrambled without changing the mean rate, then the message is altered. In the Covey/DiLorenzo electrical stimulation experiments in the gustatory system that I cited in the previous message, a particular time pattern of electrical pulses evokes behavioral signs in a rat that it tastes a sweet substance, whereas scrambling the patterns while maintaining the average number of pulses evokes no such signs. The gustatory system is slow, so the temporal precision of the code is probably in the tens of milliseconds, but nevertheless, the time pattern does appear to be the coding vehicle, since its disruption evidently has perceptual consequences. The differences between rate-based and timing-based codes can also be seen from the perspective of the decoding operations required of each. The neural operations needed to interpret rate codes are rate-integration processes (all other things being equal, the longer the window the higher the precision), whereas those needed to interpret temporal codes are coincidence and delay processes (the shorter the coincidence windows and the more reliable the delay mechanisms, the higher the precision). In my opinion, this is why the discussion of whether most cortical pyramidal neurons are performing rate-integrations vs. coincidence detections (and yes, on what time scales they might be doing these things) is so crucial. It might even be possible for a given neuron to be doing both, albeit on different time scales, since particular time patterns of coincidence can be embedded in spike trains also driven by Poisson-like inputs (what information-processing operations a neuron carries out depend upon the way(s) its output is interpreted by other parts of the system). This is why the detailed pattern analysis of Abeles et al and Lestienne & Strehler is probably needed in addition to statistical approaches based on stationary processes. 
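[Ed.: the rate/temporal distinction above can be made concrete with two toy decoders. Both are my constructions, not the decoders used in the experiments Cariani cites: a rate decoder that only counts spikes in the window, and a coincidence decoder tuned to a spike-time template. Scrambling the spike times while preserving the count leaves the first readout unchanged and degrades the second, which is the logic of the Covey/Di Lorenzo manipulation.]

```python
import numpy as np

# Toy decoders illustrating the rate vs. temporal-pattern distinction
# (invented decoders and spike trains, not from the cited experiments):
# a rate decoder counts spikes in a window; a temporal decoder counts
# coincidences with a spike-time template within a tolerance.

rng = np.random.default_rng(2)
window = 100.0                                   # ms
spikes = np.sort(rng.uniform(0, window, 20))     # original spike train

def rate_readout(train):
    return len(train) / window                   # spikes per ms

def temporal_readout(train, template, tol=1.0):
    # count spikes coinciding with the template pattern within +/- tol ms
    return sum(np.any(np.abs(template - s) <= tol) for s in train)

template = spikes.copy()                         # decoder tuned to this pattern
scrambled = np.sort(rng.uniform(0, window, len(spikes)))  # same count, new times

# rate_readout is identical for the two trains; temporal_readout is not.
```

The same count passes through the rate decoder untouched, while the coincidence decoder sees the scrambled train as a different, weaker message.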
It should be noted that average rates are one dimensional, scalar codes, whilst temporal codes can support spike trains conveying more than one independent signal type (multiplexing). I think that this property of temporal codes has potentially very great implications for the design of artificial neural networks, if only because a multiplicity of orthogonal signals allows one to keep different kinds of information from interfering with each other. Dr. Peter Cariani Eaton Peabody Laboratory of Auditory Physiology Massachusetts Eye & Ear Infirmary 243 Charles St., Boston, MA 02114 USA email: eplunix!peter at eddie.mit.edu tel: (617) 573-4243 FAX: (617) 720-4408 From marshall at cs.unc.edu Wed Apr 5 16:24:15 1995 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Wed, 5 Apr 1995 16:24:15 -0400 Subject: Paper available: Context, uncertainty, multiplicity, & scale Message-ID: <199504052024.QAA03996@marshall.cs.unc.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/marshall.context.ps.Z This paper is available via anonymous-ftp from the Neuroprose archive. It is scheduled to appear in Neural Networks 8(3). This is a revision (April 1994) of the previously-distributed version (February 1993), with some new sections. -------------------------------------------------------------------------- ADAPTIVE PERCEPTUAL PATTERN RECOGNITION BY SELF-ORGANIZING NEURAL NETWORKS: CONTEXT, UNCERTAINTY, MULTIPLICITY, AND SCALE JONATHAN A. MARSHALL Department of Computer Science, CB 3175, Sitterson Hall University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A. Telephone 919-962-1887, e-mail marshall at cs.unc.edu ABSTRACT. A new context-sensitive neural network, called an "EXIN" (excitatory+inhibitory) network, is described. EXIN networks self-organize in complex perceptual environments, in the presence of multiple superimposed patterns, multiple scales, and uncertainty. 
The networks use a new inhibitory learning rule, in addition to an excitatory learning rule, to allow superposition of multiple simultaneous neural activations (multiple winners), under strictly regulated circumstances, instead of forcing winner-take-all pattern classifications. The multiple activations represent uncertainty or multiplicity in perception and pattern recognition. Perceptual scission (breaking of linkages) between independent category groupings thus arises and allows effective global context-sensitive segmentation constraint satisfaction, and exclusive credit attribution. A Weber Law neuron-growth rule lets the network learn and classify input patterns despite variations in their spatial scale. Applications of the new techniques include segmentation of superimposed auditory or biosonar signals, segmentation of visual regions, and representation of visual transparency. KEYWORDS. Masking fields, Anti-Hebbian learning, Distributed coding, Adaptive constraint satisfaction, Decorrelators, Excitatory+inhibitory (EXIN) learning, Transparency, Segmentation. 46 pages. Thanks to Jordan Pollack for maintaining the Neuroprose archive! 
-------------------------------------------------------------------------- From jari at vermis.hut.fi Thu Apr 6 08:59:57 1995 From: jari at vermis.hut.fi (Jari Kangas) Date: Thu, 6 Apr 1995 15:59:57 +0300 Subject: Updated version 3.1 of SOM_PAK Message-ID: <199504061259.PAA00366@vermis>

************************************************************************
*                                                                      *
*                               SOM_PAK                                *
*                                                                      *
*                                 The                                  *
*                                                                      *
*                          Self-Organizing Map                         *
*                                                                      *
*                            Program Package                           *
*                                                                      *
*                      Version 3.1 (April 7, 1995)                     *
*                                                                      *
*                           Prepared by the                            *
*                      SOM Programming Team of the                     *
*                   Helsinki University of Technology                  *
*            Laboratory of Computer and Information Science            *
*                 Rakentajanaukio 2 C, SF-02150 Espoo                  *
*                               FINLAND                                *
*                                                                      *
*                        Copyright (c) 1992-1995                       *
*                                                                      *
************************************************************************

Updated public-domain programs for Self-Organizing Map (SOM) algorithms are available via anonymous FTP on the Internet. A new book on SOM and LVQ (Learning Vector Quantization) has also recently come out: Teuvo Kohonen, Self-Organizing Maps (Springer Series in Information Sciences, Vol. 30, 1995). 362 pp. Price (hardcover only) USD 49.50 or DEM 98,-. ISBN 3-540-58600-8. In short, the Self-Organizing Map (SOM) defines a 'nonlinear projection' of the probability density function of the high-dimensional input data onto the two-dimensional display. The SOM places a number of reference vectors into an input data space to approximate its data set in an ordered fashion, and thus implements a kind of nonparametric, nonlinear regression. This package contains all the programs necessary for the application of Self-Organizing Map algorithms in an arbitrarily complex data visualization task. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the SOM programs we have tried to use as simple code as possible.
Therefore the programs are supposed to compile on various machines without any specific modifications made to the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows:

1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it.
2. Log in as user "anonymous" with your own e-mail address as password.
3. Change remote directory to "/pub/som_pak".
4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files.

The som_pak program package includes the following files:

- Documentation:
  README             short description of the package and installation instructions
  som_doc.ps         documentation in PostScript format
  som_doc.ps.Z       same as above but compressed
  som_doc.txt        documentation in ASCII format

- Source file archives:
  som_p3r1.exe       self-extracting MS-DOS archive file
  som_pak-3.1.tar    UNIX tape archive file
  som_pak-3.1.tar.Z  same as above but compressed

An example of FTP access is given below:

unix> ftp cochlea.hut.fi (or 130.233.168.48)
Name: anonymous
Password:
ftp> cd /pub/som_pak
ftp> binary
ftp> get som_pak-3.1.tar.Z
ftp> quit
unix> uncompress som_pak-3.1.tar.Z
unix> tar xvfo som_pak-3.1.tar

See the file README for further installation instructions. All comments concerning this package should be addressed to som at cochlea.hut.fi.
************************************************************************ From Roland.Baddeley at psy.ox.ac.uk Fri Apr 7 08:09:23 1995 From: Roland.Baddeley at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 7 Apr 1995 13:09:23 +0100 Subject: Incorrect location for paper on exploratory projection pursuit Message-ID: <199504071209.NAA12204@axp01.mrc-bbc.ox.ac.uk> Unfortunately, as pointed out by a number of people, I added one too many pubs to the directory location of the paper "Non-linear Data Structure Extraction Using Simple Hebbian Networks". The address should therefore be: FTP-host: axp01.mrc-bbc.ox.ac.uk FTP-filename: /pub/users/rjb/fyfe_project.ps.Z NOT /pub/pub/users/rjb/fyfe_project.ps.Z Sorry for any problems caused by having too many pubs, Roland Baddeley ================================================================= To reiterate, the paper was: "Non-linear Data Structure Extraction Using Simple Hebbian Networks" Colin Fyfe, Dept of Computer Science, University of Strathclyde, Scotland email: fyfe_ci0 at paisley.ac.uk and Roland Baddeley, Department of Physiology and Experimental Psychology, University of Oxford, England OX1 3UD. email: Roland.Baddeley at psy.ox.ac.uk Abstract: We present a class of neural networks based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing Principal Component Analysis (PCA). In this paper, this is extended to Exploratory Projection Pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA.
This paper is to appear in Biological Cybernetics. ________________________________________ Roland Baddeley Department of Psychology and Physiology South Parks Road University of Oxford Oxford, England OX1 3UD ________________________________________ From jamie at atlas.ex.ac.uk Fri Apr 7 06:55:28 1995 From: jamie at atlas.ex.ac.uk (jamie@atlas.ex.ac.uk) Date: Fri, 7 Apr 95 11:55:28 +0100 Subject: Temporal Codes In-Reply-To: ml-connectionists-request@EDU.CMU.CS.SRV.TELNET-1's message of Thu, 6 Apr 1995 08:53:35 -0400 Message-ID: <22185.9504071055@sirius.dcs.exeter.ac.uk> Peter Cariani (4/5/95) wrote: >It should be noted that average rates are one dimensional, scalar codes, >whilst temporal codes can support spike trains conveying more than one >independent signal type (multiplexing). I think that this property of >temporal codes has potentially very great implications for the design of >artificial neural networks, if only because a multiplicity of orthogonal >signals allows one to keep different kinds of information from interfering >with each other. I also think this is an important point. In particular, the variable binding problem is precisely the problem of keeping information about one entity from interfering with information about another. For example, keeping the representation of a red square and a blue triangle from being interpreted as a red triangle and a blue square. Another reason for expecting temporal codes to be used for representing such binding information is that a neuron will generally react the same way to a given input regardless of the absolute time at which the input occurs. This property can be used to argue that a temporal synchrony representation (Cariani's temporal code, option 2) of variable bindings inherently implies systematicity. I know of no other code that can be argued to imply such generalization across entities.
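Henderson's red-square/blue-triangle example can be made concrete with a toy synchrony scheme (purely illustrative; the phase-slot encoding below is invented here): feature units that fire in the same time slot are read as bound to the same entity, so the conjunctions cannot be scrambled.

```python
def bind_by_phase(firings):
    """Group feature units by the time slot (phase) in which they fire.

    firings: dict mapping feature name -> phase index.  Features that
    fire in the same phase are bound into one entity; features in
    different phases never mix, which is how synchrony blocks the
    illusory "red triangle" reading.
    """
    entities = {}
    for feature, phase in firings.items():
        entities.setdefault(phase, set()).add(feature)
    return [frozenset(fs) for _, fs in sorted(entities.items())]

# "red square" fires in slot 0, "blue triangle" in slot 1.
scene = {"red": 0, "square": 0, "blue": 1, "triangle": 1}
bound = bind_by_phase(scene)
# bound holds the two entities {red, square} and {blue, triangle}.
```

Note how the grouping rule is indifferent to the absolute time at which a slot occurs, which is the property the message uses to argue for systematicity.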
Of course, an argument for expecting variable binding information to be represented in a temporal code is in no way an argument against expecting other kinds of information to be represented in other codes. These kinds of computational considerations do, however, tell us something about what kind of information to look for in what kind of code. James Henderson Department of Computer Science University of Exeter Exeter EX4 4PT, UK From jagota at next2.msci.memst.edu Sat Apr 8 14:09:02 1995 From: jagota at next2.msci.memst.edu (Arun Jagota) Date: Sat, 8 Apr 1995 13:09:02 -0500 Subject: HKP exercises Message-ID: <199504081809.AA06512@next2> Dear Connectionists: The text "Introduction to the Theory of Neural Computation" by Hertz, Krogh, and Palmer does not come with exercises, so I have compiled some of my own, in connection with a neural nets course I am teaching this term (for the second time from HKP). Professor Palmer, with whom I discussed this, liked the idea of making this list available on Connectionists. If you would like this compilation of "classroom tested" exercises and some computer projects, please send me electronic mail at jagota at next2.msci.memst.edu. The list is available only by e-mail at present so I can keep track of responses. Currently it is a little biased, reflecting my background and interests. To correct this bias and to expand the list, I invite submissions of new material (via e-mail to me). Submissions should pertain to the material as covered in HKP. The list is especially lacking exercises on Chapters SEVEN (Recurrent Networks), EIGHT (Unsupervised Hebbian Learning), and NINE (Unsupervised Competitive Learning). Submissions in LaTeX would be especially convenient for me. The final list, including all "accepted" submissions, will be forwarded also to Professor Palmer, who has indicated he might evolve it into a larger list. All submissions I decide to include will be acknowledged.
Arun Jagota Math Sciences, University of Memphis From edelman at wisdom.weizmann.ac.il Sun Apr 9 09:31:34 1995 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Sun, 9 Apr 1995 13:31:34 GMT Subject: TR available: representation by similarity to prototypes Message-ID: <199504091331.NAA07037@lachesis.wisdom.weizmann.ac.il> FTP-host: eris.wisdom.weizmann.ac.il (132.76.80.53) FTP-filename: /pub/cs-tr-95-11.ps.Z 30 pages, with 23 figures; about 7 MB uncompressed, 1.4 MB compressed ---------------------------------------------------------------------- On Similarity to Prototypes in 3D Object Representation Sharon Duvdevani-Bar and Shimon Edelman Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science A representational scheme under which the ranking between represented dissimilarities is isomorphic to the ranking between the corresponding shape dissimilarities can support perfect shape classification, because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. We discuss the computational requirements of rank-preserving representation, and examine its plausibility within a prototype-based framework of shape vision. ---------------------------------------------------------------------- -Shimon Shimon Edelman, Applied Math & CS, Weizmann Institute http://www.wisdom.weizmann.ac.il/~edelman/shimon.html Cyber Rights Now: Accept No Compromise From jagota at next2.msci.memst.edu Mon Apr 10 12:47:10 1995 From: jagota at next2.msci.memst.edu (Arun Jagota) Date: Mon, 10 Apr 1995 11:47:10 -0500 Subject: HKP followup Message-ID: <199504101647.AA16003@next2> Dear Connectionists: My apologies for a followup post on Connectionists; however I think it useful. So far I have received about 140 requests for HKP exercises. (I anticipated a large response and gave my email address where I don't receive mail from anywhere else.) 
I am glad this effort is of interest to so many people, and thank all who responded or will respond. However, I have received only one set of contributions so far. If you have potentially usable HKP-type questions sitting somewhere in your directories, especially on chapters 6 to 9, please do consider sending them to me. I am willing to sift through them, and clean them up if necessary. (Only in English, however, please.) I will send the HKP list to all who requested it some time next week (Apr 16-21). Regards, Arun Jagota Math Sciences, University of Memphis From pja at barbarian.endicott.ibm.com Mon Apr 10 13:04:26 1995 From: pja at barbarian.endicott.ibm.com (Peter J. Angeline) Date: Mon, 10 Apr 1995 13:04:26 -0400 Subject: Student Travel Assistance to ICGA Message-ID: <9504101704.AA12914@barbarian.endicott.ibm.com> There is a limited amount of money set aside for student travel assistance to this year's ICGA. The relevant information is included below; I encourage everyone to look at the ICGA95 homepage at URL: http://www.aic.nrl.navy.mil:80/galist/icga95/ for additional details. -pete

Peter J. Angeline, PhD
Advanced Technologies Dept.
Loral Federal Systems
State Route 17C, Mail Drop 0210
Owego, NY 13827-3994
Voice: (607)751-4109 Fax: (607)751-6025
Email: pja at lfs.loral.com

"I have nothing to say, and I am saying it." - John Cage

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Travel Assistance Information ----------------------------- The Naval Center for Applied Research in Artificial Intelligence at the Naval Research Laboratory and the ICGA-95 conference committee have provided a limited amount of funds for student travel assistance to ICGA-95.
Travel assistance will be granted to those students demonstrating a need and will be limited to partial compensation for travel expenses to the conference site. No other expenses will be considered. Important Information --------------------- To Receive Travel Assistance: o Make a formal request for travel assistance funds to the conference financial chair. Contact information is below. Email is the preferred method of applying for funds. o Have your advisor forward a letter (email or fax) to the conference financial chair verifying your current status as a student and certifying that sufficient travel funds are not available. Please have them include their email address for verification. o Both of these must be sent by May 22, 1995. o Notification of travel assistance will be sent by 5/29/95. o Once you receive notification of a travel assistance award, send confirmation to the financial chair that you will attend the conference. Confirmation must be received no later than 6/8/95. Unconfirmed travel grants may be reallocated to other applicants! o Register for the conference as soon as possible! Registering for the conference early will increase your chances of receiving funds. Summary of Important Dates -------------------------- April 5, 1995: Notifications mailed to authors. May 22, 1995: Request for assistance and advisor letter sent to financial chair. May 29, 1995: Notification of travel grants sent to applicants. June 11, 1995: Last day for early registration. June 12, 1995: Confirmation of attendance must be sent to financial chair. July 15-20, 1995: Conference Dates. FAQs ---- Q: My company won't cover my airfare. Can I apply for travel assistance? o Funds are limited to assisting only students. Q: When will I receive my travel assistance grant? o Funds will be distributed at the conference registration desk during conference check-in. Q: How much money will I get? 
o Funds will be allocated based on need and distance traveled to the conference site (Pittsburgh, PA USA), NOT ON COST OF TRANSPORTATION. You should make your travel plans early so you can get the best deal. Travel grants WILL NOT cover the full cost of the travel expenses, so that as many students as possible can receive assistance. Q: If I request travel assistance after the deadline, will I still have a chance of receiving funds? o There is a chance, but it depends on how many people who met the deadline don't use their travel grants. You will also be limited to the amount of assistance those applicants were offered. It pays to complete your request early! Q: Can I ask for a waiver of registration fees? o No. Assistance is only for travel costs. ICGA-95 student registration fees are among the lowest for a conference of this size. In a sense, all student registration is already being subsidized by the conference. Watch this space for additional information and updates! If you have any questions, please feel free to contact the financial chair: Peter J. Angeline Voice: (607)751-4109 Fax: (607)751-6025 Email: pja at lfs.loral.com From peg at cs.rochester.edu Mon Apr 10 08:44:14 1995 From: peg at cs.rochester.edu (peg@cs.rochester.edu) Date: Mon, 10 Apr 1995 08:44:14 -0400 Subject: Ballard et al., "Deictic Codes ... Embodiment of Cognition" Message-ID: <199504101244.IAA27809@artery.cs.rochester.edu> Title: Deictic Codes for the Embodiment of Cognition Authors: Dana H. Ballard, Mary M. Hayhoe, and Polly K. Pook ftp.cs.rochester.edu, directory pub/papers/ai http://www.cs.rochester.edu/trs/ai-trs.html filename: 95.NRLTR1.Deictic_codes_for_the_embodiment_of_cognition.ps.gz To describe phenomena that occur at different time scales, computational models of the brain must necessarily incorporate different levels of abstraction.
We argue that at time scales of approximately one-third of a second, orienting movements of the body play a crucial role in cognition and form a useful computational level, termed the embodiment level. At this level, the constraints of the body determine the nature of cognitive operations, since the natural sequentiality of body movements can be matched to the natural computational economies of sequential decision systems. The way this is done is through a system of implicit reference termed deictic, whereby pointing movements are used to bind objects in the world to cognitive programs. We show how deictic bindings enable the solution of natural tasks and argue that one of the central features of cognition, working memory, can be related to moment-by-moment dispositions of body features such as eye movements and hand movements. From akaho at etl.go.jp Mon Apr 10 23:01:08 1995 From: akaho at etl.go.jp (Shotaro Akaho) Date: Tue, 11 Apr 1995 12:01:08 +0900 Subject: TR available on "Mixture model for image understanding and the EM algorithm" Message-ID: <9504110301.AA12519@etlsu12.etl.go.jp> The following technical report is available via anonymous ftp. FTP-host: etlport.etl.go.jp FTP-filename: /pub/akaho/ETL-TR-95-13E.ps.Z "Mixture model for image understanding and the EM algorithm" Shotaro Akaho Abstract: We present a mixture model that can be applied to the recognition of multiple objects in an image plane. The model consists of submodules, which may have any shape. Each submodule is a probability density function of data points with scale and shift parameters, and the submodules are combined with weight probabilities. We present the EM (Expectation-Maximization) algorithm to estimate those parameters. We also modify the algorithm for the case in which the data points are restricted to an attention window.
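The EM procedure sketched in the abstract can be illustrated with a deliberately simplified one-dimensional case (our sketch, not the report's algorithm: one shift parameter per component plus mixing weights, with the scale held fixed).

```python
import math
import random

def em_gaussian_mixture(xs, k=2, iters=50, sigma=1.0):
    """EM for a 1-D mixture of k Gaussians with a known, shared sigma.

    Estimates each component's shift (mean) and the mixing weights;
    a scale parameter could be re-estimated in the same M-step fashion.
    """
    lo, hi = min(xs), max(xs)
    mus = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]  # spread-out init
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each data point.
        resp = []
        for x in xs:
            ps = [pis[j] * math.exp(-((x - mus[j]) ** 2) / (2 * sigma ** 2))
                  for j in range(k)]
            z = sum(ps)
            resp.append([p / z for p in ps])
        # M-step: re-estimate shifts and mixing weights.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            pis[j] = nj / len(xs)
    return mus, pis

# Two well-separated clusters with a 3:1 weight ratio.
rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(150)] + \
       [rng.gauss(6.0, 1.0) for _ in range(50)]
mus, pis = em_gaussian_mixture(data)
```

Restricting the data to an attention window, as the report's modification does, would change the normalisation in both steps; the sketch ignores that refinement.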
-----------------------------------------------------------------
To retrieve from etlport.etl.go.jp:
unix> ftp etlport.etl.go.jp
Name (etlport.etl.go.jp:akaho): anonymous
Password: (use your email address)
ftp> cd pub/akaho
ftp> binary
ftp> get ETL-TR-95-13E.ps.Z
ftp> quit
unix> uncompress ETL-TR-95-13E.ps.Z
unix> lpr ETL-TR-95-13E.ps
-- Shotaro AKAHO (akaho at etl.go.jp) Electrotechnical Laboratory (ETL) Information Science Division, Mathematical Informatics Section 1-1-4 Umezono, Tsukuba-shi, Ibaraki, 305 Japan From tony at hebb.psych.mcgill.ca Tue Apr 11 13:00:48 1995 From: tony at hebb.psych.mcgill.ca (Tony Marley) Date: Tue, 11 Apr 1995 13:00:48 -0400 (EDT) Subject: Postdoctoral Positions with A. A. J. Marley, Department of Psychology, McGill University Message-ID: (I apologize if you receive multiple copies of this announcement. I have mailed it to several different, but overlapping, lists) POSSIBLE POSTDOCTORAL POSITIONS WITH PROFESSOR A. A. J. MARLEY Department of Psychology, McGill University I have funds available for one, possibly two, postdoctoral fellows to begin working with me as soon as mutually arrangeable, in the first instance for 12 months, with the strong possibility of extension for a second 12 months. I am seeking candidates especially in two areas: 1. Mathematical, Simulation, and Experimental Work In Absolute Identification, Categorization, and Comparative Judgment. Recently my colleagues and I have developed and tested "neural" and random walk models in the above areas. We plan to continue this work, and would especially like to further develop the mathematical aspects of the models, and to discover "critical" tests of our basic ideas. A further possible position exists to work with myself and Yves Lacouture, who is at Université Laval. This latter position is especially suited to a (unilingual or multilingual) French speaker. 2.
Characterization Theorems and Stochastic Models in the Mathematical Social Sciences. This is an interdisciplinary project involving mathematicians, statisticians, and social scientists. We are developing results concerning theories of aggregation, characterization of choice models, entropy approaches in the mathematical social sciences, etc. For each of these positions a strong background in mathematics (mathematical modeling) and/or computer science (computer modeling) is extremely important. If you are interested, please send me a vita, statement of research interests, and three letters of reference (preferably all by email or by fax - number below). Tony Marley A. A. J. (Tony) Marley Department of Psychology McGill University 1205 Avenue Dr. Penfield Montreal, Quebec H3Y 2L2 CANADA email: tony at hebb.psych.mcgill.ca tel: (514) 398-6102 fax: (514) 398-4896 From C.Campbell at bristol.ac.uk Tue Apr 11 07:20:20 1995 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Tue, 11 Apr 1995 12:20:20 +0100 (BST) Subject: Fifth Irish Neural Networks Conference Message-ID: <9504111120.AA11426@zeus.bris.ac.uk> Please forward ... FIFTH IRISH NEURAL NETWORK CONFERENCE St. Patrick's College, Maynooth, Ireland September 11-13, 1995 ***FINAL CALL FOR PAPERS*** Papers are solicited for the Fifth Irish Neural Network Conference. They can be in any area of theoretical or applied neural computing including, for example: learning algorithms, cognitive modelling, neurobiology, natural language processing, vision, signal processing, time series analysis, hardware implementations. Selected papers from the conference proceedings will be published in the journal Neural Computing and Applications (Springer International). The conference is the fifth in a series previously held at Queen's University, Belfast and University College, Dublin. An extended abstract of not more than 500 words should be sent to: Dr. John Keating, Re: Neural Networks Conference, Dept. of Computer Science St.
Patrick's College, Maynooth, Co. Kildare, IRELAND e-mail: JNKEATING at VAX1.MAY.IE NOTE: If submitting by postal mail please make sure to include your e-mail address. The deadline for receipt of abstracts is 1st May 1995. Authors will be contacted regarding acceptance by 1st June 1995. Full papers will be required by 31st August 1995.
==================================================================
FIFTH IRISH NEURAL NETWORKS CONFERENCE REGISTRATION FORM
Name: __________________________________________________
Address: __________________________________________________
__________________________________________________
__________________________________________________
__________________________________________________
e-mail: ______________________ fax: ______________________
REGISTRATION FEE
Before August 1, 1995: IR£50
After August 1, 1995: IR£60
Fee enclosed: IR£________
The registration fee covers the cost of the conference proceedings and the session coffee breaks.
METHOD OF PAYMENT
Payment should be in Irish Pounds in the form of a cheque or banker's draft made payable to INNC'95.
===================================================================
FIFTH IRISH NEURAL NETWORKS CONFERENCE ACCOMMODATION FORM
Accommodation and meals are available on campus. The rooms are organised into apartments of 6 bedrooms. Each apartment has a bathroom, shower, and a fully equipped dining room/kitchen. The room rate is IR£12 per night (excl. breakfast; breakfast is IR£3 for continental and IR£4 for Full Irish).
Name: ___________________________________________________
Address: ___________________________________________________
___________________________________________________
___________________________________________________
__________________________________________________
e-mail: ______________________ fax: ______________________
Arrival date: ______________________ Departure date: ______________________
No.
of nights: ________ Please fill out a separate copy of the accommodation form for each individual requiring accommodation. If you have any queries, contact John Keating at JNKEATING at VAX1.MAY.IE The second day of the conference (Tuesday 12th September) is a half-day and includes an excursion to Newgrange and Dublin during the afternoon. The cost of this excursion is IR£10. I will be going on the excursion on Tues. afternoon yes/no (please delete as appropriate). ================================================================== Return fees with completed registration/accommodation forms to: Dr John Keating, Re: Neural Networks Conference, Dept. of Computer Science, St. Patrick's College, Maynooth, Co. Kildare, IRELAND Unfortunately, we cannot accept registration or accommodation bookings by e-mail. =================================================================== Fifth Irish Neural Networks Conference - Paper format The format for accepted submissions will be as follows: LENGTH: 8 pages maximum. PAPER size: European A4 MARGINS: 2cms all round PAGE LAYOUT: Title, author(s), affiliation and e-mail address should be centred on the first page. No running heads or page numbers should be included. TEXT: Should be 10pt and preferably Times Roman. From ghosh at pine.ece.utexas.edu Tue Apr 11 17:31:31 1995 From: ghosh at pine.ece.utexas.edu (Joydeep Ghosh) Date: Tue, 11 Apr 1995 16:31:31 -0500 Subject: Papers on Ridge Polynomial Networks and on Generalization Message-ID: <199504112131.QAA24900@pine.ece.utexas.edu> =========================== Paper announcement ======================== The following two papers are available via anonymous ftp: FTP-host: www.lans.ece.utexas.edu (128.83.52.78) filenames: /pub/papers/rpn_paper.ps.Z and /pub/papers/struc_adapt_jann94.ps.Z More conveniently, they can be retrieved from the HOME PAGE of the LAB. FOR ARTIFICIAL NEURAL SYSTEMS (LANS) at Univ.
of Texas, Austin: http://www.lans.ece.utexas.edu where, under "selected publications", the abstracts of more than 40 papers can be viewed and the corresponding .ps.Z files can be downloaded. ===================================================================== RIDGE POLYNOMIAL NETWORKS (to appear, IEEE Trans. Neural Networks) Yoan Shin and Joydeep Ghosh This paper presents a polynomial connectionist network called RIDGE POLYNOMIAL NETWORK (RPN) that can uniformly approximate any continuous function on a compact set in multi-dimensional input space $Re^{d}$, with arbitrary degree of accuracy. This network provides a more efficient and regular architecture compared to ordinary higher-order feedforward networks while maintaining their fast learning property. The ridge polynomial network is a generalization of the pi-sigma network and uses a special form of ridge polynomials. It is shown that any multivariate polynomial can be represented in this form, and realized by an RPN. Approximation capability of the RPNs is shown by this representation theorem and the Weierstrass polynomial approximation theorem. The RPN provides a natural mechanism for incremental network growth. Simulation results on a surface fitting problem, the classification of high-dimensional data and the realization of a multivariate polynomial function are given to highlight the capability of the network. In particular, a constructive learning algorithm developed for the network is shown to yield smooth generalization and steady learning. ===================================================================== STRUCTURAL ADAPTATION AND GENERALIZATION IN SUPERVISED FEED-FORWARD NETWORKS (Jl. of Artificial Neural Networks, 1(4), 1994, pp. 431-458.) Joydeep Ghosh and Kagan Tumer This work explores diverse techniques for improving the generalization ability of supervised feed-forward neural networks via structural adaptation, and introduces a new network structure with sparse connectivity.
Pruning methods, which start from a large network and proceed by trimming it until a satisfactory solution is reached, are studied first. Then, construction methods, which build a network from a simple initial configuration, are presented. A survey of related results from the disciplines of function approximation theory, nonparametric statistical inference and estimation theory leads to methods for principled architecture selection and estimation of prediction error. A network based on sparse connectivity is proposed as an alternative approach to adaptive networks. The generalization ability of this network is improved by partly decoupling the outputs. We perform numerical simulations and provide comparative results for both classification and regression problems to show the generalization abilities of the sparse network. ===========================repeat FTP info ======================== FTP-host: www.lans.ece.utexas.edu (128.83.52.78) filenames: /pub/papers/rpn_paper.ps.Z and /pub/papers/struc_adapt_jann94.ps.Z ************* SORRY, NO HARD COPIES *********** From patrick at magi.ncsl.nist.gov Tue Apr 11 10:35:30 1995 From: patrick at magi.ncsl.nist.gov (Patrick Grother) Date: Tue, 11 Apr 95 10:35:30 EDT Subject: New Very Large NIST OCR Database Message-ID: <9504111435.AA01488@magi.ncsl.nist.gov> +--------------------------+ | NIST Special Database 19 | +--------------------------+ Handprinted Forms and Characters Database Special Database 19 contains NIST's entire corpus of training materials for handprinted document and character recognition. It publishes Handprinted Sample Forms from 3600 writers, 810000 character images isolated from their forms, ground truth classifications for those images, reference forms for further data collection, and software utilities for image management and handling. It supersedes Special Databases 3 and 7.
+ "Final" accumulation of NIST's handprinted sample data
+ Full page HSF forms from 3600 writers
+ Separate digit, upper and lower case, and free text fields
+ Over 800000 images with hand-checked classifications
+ Binary images scanned at 11.8 dots per mm (300 dpi)
+ Updated CCITT IV Compression Source Code
+ Database management utilities
+ The images of Special Database 19 form a superset of the images of two previous releases, Special Databases 3 and 7, which are now discontinued.

The database is NIST's largest and probably final release of images intended for handprint document processing and OCR research. The full page images are the default input to the "NIST FORM-BASED HANDPRINT RECOGNITION SYSTEM", a public domain release of end-to-end page recognition software. Special Database 19 is available as a 5.25 inch CD-ROM in the ISO-9660 format. For sales contact: Standard Reference Data, National Institute of Standards and Technology, Building 221, Room A323, Gaithersburg, MD 20899 Voice: (301) 975-2208 FAX: (301) 926-0416 email: srdata at enh.nist.gov For technical details contact: Patrick Grother, Visual Image Processing Group, National Institute of Standards and Technology, Building 225, Room A216, Gaithersburg, Maryland 20899 Voice: (301) 975-4157 FAX: (301) 840-1357 email: patrick at magi.ncsl.nist.gov From heckerma at microsoft.com Tue Apr 11 11:51:48 1995 From: heckerma at microsoft.com (David Heckerman) Date: Tue, 11 Apr 95 11:51:48 TZ Subject: Bayesian networks Message-ID: <9504120001.AA08979@netmail2.microsoft.com> A Bayesian network (a.k.a. belief network) is a graphical, modular representation of a joint probability distribution over a set of variables. A Bayesian network is often easy to build directly from domain or expert knowledge and also can be learned from data, making it an excellent representation language in which to combine domain knowledge and data.
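As a toy illustration of that modularity (an invented example, not from the message): in the classic rain/sprinkler/wet-grass network, each node stores only a conditional table given its parents, and the joint distribution is the product of those tables, so small networks can be queried by brute-force enumeration.

```python
from itertools import product

# Conditional probability tables for the three-node network
# Rain -> Sprinkler and {Rain, Sprinkler} -> WetGrass.
# The numbers are textbook-style illustrative values, chosen arbitrarily.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | R=True)
               False: {True: 0.4, False: 0.6}}    # P(S | R=False)
P_wet = {(True, True): 0.99, (True, False): 0.8,  # P(W=True | R, S)
         (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    """The factorization the network encodes: P(r,s,w) = P(r)P(s|r)P(w|r,s)."""
    pw = P_wet[(r, s)] if w else 1.0 - P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[r][s] * pw

# Inference by enumeration: P(Rain=True | WetGrass=True), about 0.358.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
posterior = num / den
```

Enumeration is exponential in the number of variables; the algorithms surveyed in the tutorials mentioned below exploit the graph structure to do better, but the factorization itself is the point here.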
The March issue of CACM contains a tutorial on the representation as well as three articles on applications. Also, I've written a tutorial on learning Bayesian networks containing many pointers to the literature. The tutorial (in part) will appear in the forthcoming collection "Advances in Knowledge Discovery and Data Mining" edited by U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy. It can be obtained via anonymous ftp at research.microsoft.com:/pub/tech-reports/winter94-95/tr-95-06.ps or via my home page http://www.research.microsoft.com/research/dtg/heckerma/heckerma.html. David From john at dcs.rhbnc.ac.uk Wed Apr 12 12:20:56 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Wed, 12 Apr 95 17:20:56 +0100 Subject: MSc in Computational Intelligence Message-ID: <199504121620.RAA18977@platon.cs.rhbnc.ac.uk> MSc in COMPUTATIONAL INTELLIGENCE at the Computer Science Department, Royal Holloway, University of London We offer a new twelve-month MSc in Computational Intelligence covering a wide range of subjects: Neural Computing, Inference Systems, Probabilistic Reasoning, Constraint Networks, Simulated Annealing, Neurochips and VLSI, Equational Reasoning, Computer Vision, Concurrent Programming, Object-Oriented Programming, Connectionist Expert Systems, Computational Learning Theory. Royal Holloway is one of the largest colleges of the University of London and is located on a beautiful wooded campus.
For further information email: cims at dcs.rhbnc.ac.uk or write to: Course Director, MSc in Computational Intelligence, Computer Science Department, Royal Holloway, University of London, EGHAM, Surrey, TW20 0EX Tel: +44 (0)1784 333421 Fax: +44 (0)1784 443420 From HMSKERK at rulfsw.fsw.LeidenUniv.nl Wed Apr 12 12:07:31 1995 From: HMSKERK at rulfsw.fsw.LeidenUniv.nl (Jan Heemskerk) Date: Wed, 12 Apr 1995 17:07:31 +0100 (MET) Subject: Neural hardware overview Message-ID: <01HP9C37KJ90B7JDXY@rulfsw.LeidenUniv.nl> A preliminary version of the paper "Overview of neural hardware" is now available by anonymous ftp from our ftp-site: ftp.mrc-apu.cam.ac.uk directory name pub/nn/murre filename: neurhard.ps (23 pages) This is a draft version based on Chapter 3 in: Heemskerk, J.N.H. (1995). Neurocomputers for Brain-Style Processing. Design, Implementation and Application. PhD thesis, Unit of Experimental and Theoretical Psychology, Leiden University, The Netherlands. ABSTRACT Neural hardware has undergone rapid development during the last few years. This paper presents an overview of neural hardware projects within industry and academia. It describes digital, analog, and hybrid neurochips and accelerator boards as well as large-scale neurocomputers built from general purpose processors and communication elements. Special attention is given to multiprocessor projects that focus on scalability, flexibility, and adaptivity of the design and thus seem suitable for brain-style (cognitive) processing. The sources used for this overview are taken from journal papers, conference proceedings, data sheets, and ftp-sites, and present an up-to-date overview of current state-of-the-art neural hardware implementations. From mcasey at euclid.ucsd.edu Thu Apr 13 08:14:03 1995 From: mcasey at euclid.ucsd.edu (Mike Casey) Date: Thu, 13 Apr 1995 05:14:03 -0700 (PDT) Subject: TRs on Dynamical Systems and RNNs Available Message-ID: <9504131214.AA18138@euclid> A non-text attachment was scrubbed...
Name: not available Type: text Size: 3278 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c32e06ad/attachment-0001.ksh From josh at faline.bellcore.com Thu Apr 13 15:27:17 1995 From: josh at faline.bellcore.com (Joshua Alspector) Date: Thu, 13 Apr 1995 15:27:17 -0400 Subject: neural nets in telecom workshop Message-ID: <199504131927.PAA03544@faline.bellcore.com>

International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995

Organizing Committee
General Chair: Josh Alspector, Bellcore
Program Chair: Rod Goodman, Caltech
Publications Chair: Timothy X Brown, Bellcore
Treasurer: Anthony Jayakumar, Bellcore
Publicity: Atul Chhabra, NYNEX; Lee Giles, NEC Research Institute
Local Arrangements: Miklos Boda, Ellemtel; Bengt Asker, Ericsson
Program Committee: Harald Brandt, Ellemtel; Tzi-Dar Chiueh, National Taiwan U; Françoise Fogelman, SLIGOS; Tony Reeder, British Telecom; Larry Jackel, AT&T Bell Laboratories; Thomas John, Southwestern Bell; Adam Kowalczyk, Telecom Australia; S Y Kung, Princeton University; Tadashi Sone, NTT
INNS Liaison: Bernard Widrow, Stanford University
IEEE Liaison: Steve Weinstein, NEC
Conference Administrator: Betty Greer, IWANNT*95, Bellcore, MRE 2P-295, 445 South Street, Morristown, NJ 07960, USA voice: (201)829-4993 fax: (201)829-5888 bg1 at faline.bellcore.com

Dear Colleague: You are invited to an international workshop on applications of neural networks and other intelligent systems to problems in telecommunications and information networking. This is the second workshop in a series that began in Princeton, New Jersey on October 18-20, 1993.
Topics include: Network Management, Congestion Control, Adaptive Equalization, Speech Recognition, Language ID/Translation, Information Filtering, Dynamic Routing, Software Engineering, Fraud Detection, Financial and Market Prediction, Fault Identification and Prediction, Character Recognition, Adaptive Control, Data Compression.

This conference will take place at a time of the year when the beautiful city of Stockholm is at its best. It will be held in the facilities of the Royal Swedish Academy of Engineering Sciences, right in the middle of Stockholm. There will be several hotels in different categories to choose from in the neighborhood. One evening, there will be a boat tour in the famous archipelago which will include dinner. We enclose an advance program for the workshop as well as information for registration and hotels. There will be a hardcover proceedings available at the workshop. There is further information on the IWANNT home page at: ftp://ftp.bellcore.com/pub/iwannt/iwannt.html I hope to see you at the workshop. Sincerely, Josh Alspector, General Chair

--------------------------------------------------------------------------
Preliminary Program

Monday, May 22, 1995:
7:00 Registration and Coffee
Session 1:
8:30 J. Alspector, Welcome
8:45 Invited Speaker: Bernt Ericson, VP Ericsson Research and Technology
9:30 C. Cortes, L. D. Jackel, W-P Chiang, Predicting Failures of Telecommunication Paths: Limits on Learning Machine Accuracy Imposed by Data Quality
10:00 Break
10:30 C. S. Hood, C. Ji, An Intelligent Monitoring Hierarchy for Network Management
11:00 A. Holst, A. Lansner, A Higher Order Bayesian Neural Network for Classification and Diagnosis
11:30 T. Sone, A Strong Combination of Neural Networks and Deep Reasoning in Fault Diagnosis
12:00 J. Connor, L. Brothers, J.
Alspector, Neural Network Detection of Fraudulent Calling Card Patterns 12:30 Lunch Session 2: 13:30 D. S. Reay, Non-Linear Channel Equalisation Using Associative Memory Neural Networks 14:00 A. Jayakumar, J. Alspector, Experimental Analog Neural Network Based Decision Feedback Equalizer for Digital Mobile Radio 14:30 M. Junius, O. Kennemann, Intelligent Techniques for the GSM Handover Process 15:00 Break 15:30 P. Campbell, H. Ferr, A. Kowalczyk, C. Leckie, P. Sember, Neural Networks in Real Time Decision Making 16:00 P. Chardaire, A. Kapsalis, J. W. Mann, V. J. Rayward-Smith, G. D. Smith, Applications of Genetic Algorithms in Telecommunications 16:30 S. Bengio, F. Fessant, D. Collobert, A Connectionist System for Medium-Term Horizon Time Series Prediction Tuesday, May 23, 1995: 8:00 Coffee and Registration Session 3: 8:30 Invited Speaker: Martin Hyndman, Derwent Information, Neural Network Patenting 9:00 B. de Vries, C. W. Che, R. Crane, J. Flanagan, Q. Lin, J. Pearson, Neural Network Speech Enhancement for Noise Robust Speech Recognition 9:30 Break 10:00 E. Barnard, R. Cole, M. Fanty, P. Vermeulen, Real-World Speech Recognition with Neural Networks 10:30 A. K. Chhabra, V. Misra, Experiments with Statistical Connectionist Methods and Hidden Markov Models for Recognition of Text in Telephone Company Drawings 11:00 R. A. Bustos, T. D. Gedeon, Learning Synonyms and Related Concepts in Document Collections 11:30 N. Karunanithi, J. Alspector, A Feature-Based Neural Network Movie Selection Approach 12:00 H. Brandt, ATM Basics Tutorial 12:30 Lunch: Tuesday PM 13:30 Poster Session: A. Arts-Rodrguez, F. Gonzlez-Serrano, A Figueiras-Vidal, L. Weruaga-Prieto, Compensation of Bandpass Nonlinearities by Look-Up-Tables and CMAC P. Barson, N. Davey, S. Field, R. Frank, D. S. W. Tansley, Dynamic Competitive Learning Applied to the Clone Detection Problem R. Battiti, A. Sartori, G. Tecchiolli, P. Tonella, A. Zorat, Neural Compression: An Integrated Application to EEG Signals E. 
Bayro-Corrochano, R. Espinoza-Soliz, Neural Network Based Approach for External Telephone Network Management M. Berger, Fast Channel Assignment in Cellular Radio Systems M. J. Bradley, P. Mars, Analysis of Recurrent Neural Networks as Digital Communication Channel Equalizer T. Brown, A Technique for Mapping Optimization Solutions into Hardware M. Collobert, D. Collobert, A Neural System to Detect Faulty Components on Complex Boards in Digital Switches F. Comellas, J. Ozn, Graph Coloring Algorithms for Assignment Problems in Radio Networks M. Dixon, M. Bellgard, G. R. Cole, A Neural Network Algorithm to Solve the Routing Problem in Communication Networks A. P. Engelbrecht, I. Cloete, Dimensioning of Telephone Networks Using a Neural Network as Traffic Distribution Approximator A. D. Estrella, E. Casilari, A. Jurado, F. Sandoval, ATM Traffic Neural Control: Multiservice Call Admission and Policing Function S. Fredrickson, L. Tarassenko, Text-Independent Speaker Recognition Using Radial Basis Functions N. Kasabov, Hybrid Environments for Building Comprehensive AI and the Task of Speech Recognition K. Kohara, Selective Presentation Learning for Forecasting by Neural Networks H. C. Lau, K. Y. Szeto, K. Y. M. Wong, D. Y. Yeung, A Hybrid Expert System for Error Message Classification F. Mekuria, T. Fjllbrant, Neural Networks for Efficient Adaptive Vector Quantization of Signals A. F. Nejad, T. D. Gedeon, Analyser Neural Networks:An Empirical Study in Revealing Regularities of Complex Systems A. Varma, R. Antonucci, A Neural-Network Controller for Scheduling Packet Transmissions in a Crossbar Switch M. B. Zaremba, K.-Q. Liao, G. Chan, M. Gaudreau, Link Bandwidth Allocation in Multiservice Networks Using Neural Technology 16:30 Boat Tour and Dinner Wednesday, May 24, 1995: 8:00 Coffee and Registration Session 4: 8:30 Invited Speaker: Per Roland, Karolinska Institute, The Real Neural Network 9:30 S. Field, N. Davey, R. 
Frank, A Complexity Analysis of Telecommunications Software Using Neural Nets 10:00 Break 10:30 T-D Chiueh, L-K Bu, Theory and Implementation of an Analog Network that Solves the Shortest Path Problem 11:00 E. Nordstrm, J. Carlstrm, A Reinforcement Learning Scheme for Adaptive Link Allocation in ATM Networks 12:00 W. K. F. Lor, K. Y. M. Wong, Decentralized Neural Dynamic Routing in Circuit-Switched Networks 12:30 Lunch Session 5: 13:30 A. Garcia-Lopera, A. Ariza Quintana, F. Sandoval Hernandez, Modular Neural Control of Buffered Banyan Networks 14:00 A. Murgu, Adaptive Flow Control in Multistage Communications Networks Based on a Sliding Window Learning Algorithm 14:30 T. Brown, A Technique for Classifying Network States 15:00 Break 15:30 R. M. Goodman, B. E. Ambrose, Learning Telephone Network Trunk Reservation Congestion Control Using Neural Networks 16:00 A. Farag, M. Boda, H. Brandt, T. Henk, T. Trn, J. Br, Virtual Lookahead - a New Approach to Train Neural Nets for Solving On-Line Decision Problems 16:30 O. Gallmo, L. Asplund, Reinforcement Learning by Construction of Hypothetical Targets 17:00 Adjourn ------------------------------------------------------------------------ ------------------------------------------------------------------------ Site The conference will be held at the IVA or Royal Swedish Academy of Engineering Sciences. The location is a mixture of old and new. The conference will take place in modern facilties built in 1983, while the lunches are held in a beautiful dining room from the turn of the century. IVA is situated at Grev Turegatan 14 (Count Ture's street) which is very central. Close by is Sturegallerian, a fine shopping center, located in a number of buildings from the early 20th century. A few hundred meters walk will take you to Nybroviken, where you may take a ferry to the big outdoor museum Skansen. 
The same distance in the other direction will bring you to Hamngatan and Sergels torg, right in the middle of the shopping district.

How to Get There

From willicki at cs.aston.ac.uk Thu Apr 13 13:10:46 1995
From: willicki at cs.aston.ac.uk (Chris Williams)
Date: Thu, 13 Apr 1995 18:10:46 +0100
Subject: TR available on "Mixture model for image understanding and the EM algorithm"
Message-ID: <21963.9504131710@sun.aston.ac.uk>

Re: the paper "Mixture model for image understanding and the EM algorithm" recently announced by Shotaro Akaho.

I believe the idea of using a mixture model parameterized by scale and translation parameters is very similar to the "constrained mixture models" we have been using for character recognition for some years. Basically, the idea is to create a template out of a number of Gaussians; in the character recognition case the Gaussians will be spaced out along the stroke; each one is like a spray-can ink generator. This template can be scaled or translated by applying the transformation to each of the Gaussian centres. In fact our model went further than this in that it allowed deformable templates, so that the Gaussians could be moved away from their "home locations" in the object-based frame, at a cost. We also allowed a full 2x2 affine transformation plus translation rather than just translation and scaling, and used a "noise model" to reject outlier/noise data points.

We used a method based on the EM algorithm for fitting these templates to data. For the non-deformable case there is (as Akaho points out) a direct EM algorithm. This is mentioned in Appendix B of my thesis. The fitting algorithm converged to the desired solution (i.e. one which looks correct -- this is the advantage of working in 2d :-)) in around 99% of the cases when only a single character was present. We have run some experiments with two templates and two objects and found that we only got convergence to the desired solution if the starting point was rather close to it.
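[Editorial sketch: for readers who want to see the shape of the direct EM algorithm for the non-deformable case, here is a minimal illustration restricted to a scalar scale plus translation. This is not code from the thesis: the isotropic noise model with fixed sigma, the equal mixing weights, the moment-based initialization, and all function and variable names are assumptions of this sketch; the full model described above adds a 2x2 affine part, deformations, and an outlier model.]

```python
import numpy as np

def fit_template_em(points, template, sigma=0.1, n_iter=50):
    """EM fit of a scale s and translation t such that a mixture of
    isotropic Gaussians centred at s*template + t explains the points.
    Equal mixing weights and fixed sigma; a sketch only (no affine
    part, no deformation, no outlier model)."""
    # Moment-based initialization (a heuristic, not part of EM itself)
    t = points.mean(axis=0) - template.mean(axis=0)
    s = points.std(axis=0).mean() / template.std(axis=0).mean()
    for _ in range(n_iter):
        centres = s * template + t                       # (K, 2)
        # E-step: responsibility of each centre k for each point i,
        # proportional to exp(-||x_i - c_k||^2 / (2 sigma^2))
        d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        logr = -d2 / (2.0 * sigma ** 2)
        logr -= logr.max(axis=1, keepdims=True)          # numerical safety
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)                # (N, K)
        # M-step: closed-form weighted least squares for s and t,
        # minimizing sum_ik r_ik ||x_i - s*h_k - t||^2
        w = r.sum()
        mx = (r[..., None] * points[:, None, :]).sum((0, 1)) / w
        mh = (r[..., None] * template[None, :, :]).sum((0, 1)) / w
        dx = points[:, None, :] - mx                     # (N, 1, 2)
        dh = template[None, :, :] - mh                   # (1, K, 2)
        s = (r * (dx * dh).sum(-1)).sum() / (r * (dh * dh).sum(-1)).sum()
        t = mx - s * mh
    return s, t
```

With well-separated centres and a reasonable starting point this typically converges in a handful of iterations; as the message notes, with several templates or objects the likelihood surface has local optima and the starting point matters.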
I should also comment that Eric Mjolsness and his colleagues have been doing some similar work, although they have used an explicit match-matrix to encode the correspondence between datapoints and model points; the mixture model can be obtained by integrating out one of the row and column constraints. [ref: e.g. Chien-Ping Lu and Eric Mjolsness, NIPS 6, 985-992; also NIPS 7 (forthcoming), and earlier work back to a TR, YALEU-DCS-TR-854, in 1990]

Our Refs:

[early paper]

@incollection(hinton-williams-revow-92,
  author    = "Hinton, G.~E. and Williams, C.~K.~I. and Revow, M.~D.",
  title     = "Adaptive elastic models for hand-printed character recognition",
  editor    = "J. E. Moody and S. J. Hanson and R. P. Lippmann",
  booktitle = "Advances in Neural Information Processing Systems 4",
  year      = "1992",
  publisher = "Morgan Kaufmann",
  address   = "San Mateo, CA"
)

[up-to-date work]

* a paper submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence in 1994: pami.ps.Z (36 pages, 0.3 Mb)
* my PhD thesis: thesis.ps.Z (95 pages, 0.6 Mb)

Both are available from the Toronto ftp server:

unix> ftp ftp.cs.toronto.edu    (or 128.100.3.6, or 128.100.1.105)
      (log in as "anonymous", e-mail address as password)
ftp> binary
ftp> cd pub/ckiw
ftp> get thesis.ps.Z
ftp> get pami.ps.Z
ftp> quit

Regards,

Chris Williams
c.k.i.williams at aston.ac.uk
Department of Computer Science and Applied Maths
Aston University, Birmingham B4 7ET, England
tel: +44 121 359 3621 x 4382
fax: +44 121 333 6215

From terry at salk.edu Thu Apr 13 19:06:11 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 13 Apr 95 16:06:11 PDT
Subject: Neural Comp 7:3 - Abstracts on WWW
Message-ID: <9504132306.AA13731@salk.edu>

Neural Computation Abstracts are now available on WWW:

URL: http://www-mitpress.mit.edu/

-----

NEURAL COMPUTATION, May 1995, Volume 7, Number 3

Review:

Models of orientation and ocular dominance columns in the visual cortex: A critical comparison
E. Erwin, K. Obermayer and K. Schulten

Letters:

How precise is neuronal synchronization?
Peter König, Andreas K. Engel, Pieter R. Roelfsema and Wolf Singer

Quantitative analysis of electrotonic structure and membrane properties of NMDA-activated lamprey spinal neurons
C. R. Murphey, L. E. Moore and J. T. Buchanan

Reduced representation by neural networks with restricted receptive fields
Marco Idiart, Barry Berk and L. F. Abbott

Stochastic single neurons
Toru Ohira and Jack D. Cowan

Memory recall by quasi-fixed-point attractors in oscillator neural networks
Tomoki Fukai and Masatoshi Shiino

Learning population codes by minimizing description length
Richard S. Zemel and Geoffrey Hinton

Competition and multiple cause models
Peter Dayan and Richard S. Zemel

Bayesian self-organization driven by prior probability distributions
Alan L. Yuille, Stelios M. Smirnakis and Lei Xu

Adaptive voting rules for k-NN classifiers
R. Rovatti, R. Ragazzoni, Zs. M. Kovács and R. Guerrieri

Regularisation in the selection of radial basis function centres
Mark J. L. Orr

Bootstrapping confidence intervals for clinical input variable effects in a network trained to identify the presence of acute myocardial infarction
William G. Baxt and Halbert White

-----

SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues)

______ $40  Student and Retired
______ $68  Individual
______ $180 Institution

Add $22 for postage and handling outside USA (+7% GST for Canada).

Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779  e-mail: hiscox at mitvma.mit.edu

-----

From oliensis at research.nj.nec.com Fri Apr 14 14:38:31 1995
From: oliensis at research.nj.nec.com (John Oliensis)
Date: Fri, 14 Apr 1995 14:38:31 -0400
Subject: NECI Vision Workshop: www proceedings
Message-ID: <199504141838.OAA01131@iris63>

NECI VISION WORKSHOP
FEB. 27 - MAR. 10, 1995

NECI Research Institute
4 Independence Way
Princeton, NJ 08540

DESCRIPTION

The NECI Vision Workshop brought together vision psychologists and computer vision researchers for a two-week period to exchange ideas. The meeting was oriented toward discussion, with a relaxed schedule of presentations. Foci of discussion included object recognition, recovery of structure and motion, subjective contours, perceptual inference, and low-level vision.

ATTENDEES

Bill Bialek (NECI), Heinrich Bulthoff (Max-Planck), Brian Burns (Teleos), Jacob Feldman (Rutgers), Ingemar Cox (NECI), David Forsyth (Berkeley), Jonas Garding (KTH, Sweden), Richard Hartley (GE), David Jacobs (NECI), Allan Jepson (U. of Toronto), Dan Kersten (U. of Minnesota), David Knill (U. of Pennsylvania), Tony Lindeberg (KTH, Sweden), Mike Langer (McGill), Zili Liu (NECI), Larry Maloney (NYU), Steve Maybank (GEC/U. of Oxford), John Oliensis (NECI), Pietro Perona (Caltech), Jean Ponce (U. of Illinois), Harpreet Sawhney (IBM), Bob Shapley (NYU), Stefano Soatto (Caltech), Mike Tarr (Yale), Shimon Ullman (Weizmann), Bill Warren (Brown), Lance Williams (NECI), Alan Yuille (Harvard), Steve Zucker (McGill).
PROCEEDINGS

The www "proceedings" for the NECI Vision Workshop is available at:

http://www.neci.nj.nec.com/homepages/oliensis/NECI_Vision_Workshop.html

From njm at cupido.inesc.pt Sun Apr 16 08:32:51 1995
From: njm at cupido.inesc.pt (njm@cupido.inesc.pt)
Date: Sun, 16 Apr 95 13:32:51 +0100
Subject: 2nd CFP: Neural Nets & Genetic Algorithms Workshop
Message-ID: <9504161232.AA26132@cupido.inesc.pt>

________________________________________________________
--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
NEURAL NETWORKS AND GENETIC ALGORITHMS
--------------------------------------------------------
A subsection of the:
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
________________________________________________________
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)

--------------------------------------------
REMEMBER: SUBMISSION DEADLINE: May 2, 1995
--------------------------------------------

INTRODUCTION
~~~~~~~~~~~~

The workshop on Fuzzy Logic and Neural Networks in Engineering, running during the Seventh Portuguese Conference on Artificial Intelligence (EPIA'95), includes a subsection on Neural Networks and Genetic Algorithms. This subsection of the workshop will be devoted to models of simulating human reasoning and behaviour based on GA and NN combinations. Recently, in disciplines such as AI, Engineering, Robotics and Artificial Life, there has been a rise of interest in hybrid methodologies such as combinations of neural networks and genetic algorithms, which enable the modelling of more realistic, flexible and adaptive behaviour and learning. So far such hybrid models have proved very promising in investigating and characterizing the nature of complex reasoning and control behaviour.
Participants are expected to base their contribution on current research, and the workshop emphasis will be on wide-ranging discussions of the feasibility and application of such hybrid models. This part of the workshop is intended to promote the exchange of ideas and approaches in these areas and for these methods, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations.

COORDINATION OF THIS SUBSECTION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Mukesh Patel
Institute of Computer Science,
Foundation for Research and Technology-Hellas (FORTH)
P.O. Box 1385, GR 711 10 Heraklion, Crete, Greece
Voice: +30 (81) 39 16 35
Fax: +30 (81) 39 16 01/09
Email: mukesh at ics.forth.gr

The submission requirements, attendance and deadline information are the same as for the workshop, whose Call for Papers is enclosed. Further inquiries may be addressed either to the subsection coordinator or to the workshop address.

=============================================================================

--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)

INTRODUCTION
~~~~~~~~~~~~

The Seventh Portuguese Conference on Artificial Intelligence (EPIA'95) will be held at Funchal, Madeira Island, Portugal, on October 3-6, 1995. As in previous editions ('89, '91, and '93), EPIA'95 will be run as an international conference, English being the official language. The scientific program includes tutorials, invited lectures, demonstrations, and paper presentations.
The Conference will include three parallel workshops, on Expert Systems, Fuzzy Logic and Neural Networks, and Applications of A.I. to Robotics and Vision Systems. These workshops will run simultaneously (see below) and consist of invited talks, panels, paper presentations and poster sessions. The Fuzzy Logic and Neural Networks in Engineering workshop may last 1, 2 or 3 days, depending on the quantity and quality of submissions.

FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The search for systems simulating human reasoning under uncertainty has created a strong research community. In particular, Fuzzy Logic and Neural Networks have been a source of synergies among researchers of both areas, aiming at developing theoretical approaches and applications towards the characterization of, and experimentation with, such kinds of reasoning. The workshop is intended to promote the exchange of ideas and approaches in those areas, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations. The organization committee invites you to participate, submitting papers together with videos, demonstrations or running systems, to illustrate relevant issues and applications.

EXHIBITIONS
~~~~~~~~~~~

In order to illustrate and support theoretical presentations, the organization will provide adequate conditions (space and facilities) for exhibitions for the three workshops mentioned. These exhibitions can include running software systems (several platforms are available), video presentations (PAL-G VHS system), robotics systems (such as robotic insects and autonomous robots), and posters. On the one hand, this space will allow the presentation of results and real-world applications of the research developed by our community; on the other, it will serve as a source of motivation to students and young researchers.
SUBMISSION REQUIREMENTS
~~~~~~~~~~~~~~~~~~~~~~~

Authors are asked to submit five (5) copies of their papers to the submissions address by May 2, 95. Notification of acceptance or rejection will be mailed to the first (or designated) author on June 5, 95, and camera-ready copies for inclusion in the workshop proceedings will be due on July 3, 95. Each copy of a submitted paper should include a separate title page giving the names, addresses, phone numbers and email addresses (where available) of all authors, and a list of keywords identifying the subject area of the paper. Papers should be a maximum of 16 pages, printed on A4 paper in 12 point type with a maximum of 38 lines per page and 75 characters per line (corresponding to LaTeX article style, 12 pt). Double-sided submissions are preferred. Electronic or faxed submissions will not be accepted. Further inquiries should be addressed to the inquiries address.

ATTENDANCE
~~~~~~~~~~

Each workshop will be limited to at most fifty people. In addition to presenters of papers and posters, there will be space for a limited number of other participants, chosen on the basis of a one- to two-page research summary which should include a list of relevant publications, along with an electronic mail address if possible. A set of working notes will be available prior to the commencement of the workshops. Registration information will be available in June 1995. Please write for registration information to the inquiries address.

DEADLINES
~~~~~~~~~

Paper submission: .................. May 2, 1995
Notification of acceptance: ........ June 5, 1995
Camera-ready copies due: ........... July 3, 1995

PROGRAM-CHAIR
~~~~~~~~~~~~~

Jose Tome (IST, Portugal)

ORGANIZING-CHAIR
~~~~~~~~~~~~~~~~

Luis Custodio (IST, Portugal)

SUBMISSION AND INQUIRIES ADDRESS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

EPIA'95
Fuzzy Logic & Neural Networks Workshop
INESC, Apartado 13069
1000 Lisboa, Portugal
Voice: +351 (1) 310-0325
Fax: +351 (1) 525843
Email: epia95-FLNNWorkshop at inesc.pt

PLANNING TO ATTEND
~~~~~~~~~~~~~~~~~~

People planning to submit a paper and/or to attend the workshop are asked to complete the following form and return it (by fax or email) to the inquiries address, stating their intention. This will help the workshop organizers to estimate the facilities needed and will enable all interested people to receive updated information.

+----------------------------------------------------------------+
|                  REGISTRATION OF INTEREST                      |
|          (Fuzzy Logic & Neural Networks Workshop)              |
|                                                                |
| Title . . . . . Name . . . . . . . . . . . . . . . . . . . .   |
| Institution . . . . . . . . . . . . . . . . . . . . . . . . .  |
| Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Country . . . . . . . . . . . . . . . . . . . . . . . . . . .  |
| Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . . |
| Email address. . . . . . . . . . . . . . . . . . . . . . . . . |
| I intend to submit a paper (yes/no). . . . . . . . . . . . . . |
| I intend to participate only (yes/no). . . . . . . . . . . . . |
| I will travel with ... guests                                  |
+----------------------------------------------------------------+

From JCONNOR at lbs.lon.ac.uk Mon Apr 17 20:46:55 1995
From: JCONNOR at lbs.lon.ac.uk (Jerry Connor)
Date: Mon, 17 Apr 1995 20:46:55 BST
Subject: NNCM-95, SECOND ANNOUNCEMENT
Message-ID:

SECOND ANNOUNCEMENT AND CALL FOR PAPERS

NNCM-95
Third International Conference On
NEURAL NETWORKS IN THE CAPITAL MARKETS

Thursday-Friday, October 12-13, 1995
with tutorials on Wednesday, October 11, 1995.
The Langham Hilton, London, England.

(Note deadline for camera-ready full papers, to be published in a hardback conference proceedings.)

Neural networks are now emerging as a major modeling methodology in financial engineering. Because of the overwhelming interest in the NNCM workshops held in London in 1993 and Pasadena in 1994, the third annual NNCM conference will be held on October 12-13, 1995, in London. NNCM*95 will take a critical look at state-of-the-art neural network applications in finance. This is a research meeting where original, high-quality contributions to the field are presented and discussed. In addition, a day of introductory tutorials (Wednesday, October 11) will be included to familiarise audiences of different backgrounds with financial engineering, neural networks, and the mathematical aspects of the field.

Application areas include:
+ Bond and stock valuation and trading
+ Foreign exchange rate prediction and trading
+ Commodity price forecasting
+ Risk management
+ Tactical asset allocation
+ Portfolio management
+ Option pricing
+ Trading strategies

Technical areas include, but are not limited to:
+ Neural networks
+ Nonparametric statistics
+ Econometrics
+ Pattern recognition
+ Time series analysis
+ Model selection
+ Signal processing and control
+ Genetic and evolutionary algorithms
+ Fuzzy systems
+ Expert systems
+ Machine learning

Instructions for Authors

Authors who wish to present a paper should mail a copy of their extended abstract (4 pages, single-sided, single-spaced) typed on A4 paper to the secretariat no later than May 31, 1995. Submissions will be refereed by no fewer than four referees, and authors will be notified of acceptance by 14 June 1995. Authors of accepted papers will be mailed guidelines for producing final camera-ready papers, which are due 12 July 1995. The accepted papers will be published in a hardback conference proceedings by World Scientific.
Separate registration is required using the attached registration form. Authors are encouraged to submit abstracts as soon as possible.

Registration

To register, complete the registration form and mail it to the secretariat. Please note that attendance is limited and will be allocated on a "first-come, first-served" basis.

Secretariat: For further information, please contact the NNCM-95 secretariat:

Ms Busola Oguntula, London Business School
Sussex Place, Regent's Park, London NW1 4SA, UK
e-mail: boguntula at lbs.lon.ac.uk
phone: (+44) (0171) 262 50 50
fax: (+44) (0171) 724 78 75

Location: The main conference will be held at The Langham Hilton, which is situated near Regent's Park and is a short walk from Baker Street Underground Station. Further directions, including a map, will be sent to all registrants.

Programme Committee

Dr A. Refenes, London Business School (Chairman)
Dr Y. Abu-Mostafa, Caltech
Dr A. Atiya, Cairo University
Dr N. Biggs, London School of Economics
Dr D. Bunn, London Business School
Dr M. Jabri, University of Sydney
Dr B. LeBaron, University of Wisconsin
Dr A. Lo, MIT Sloan School
Dr J. Moody, Oregon Graduate Institute
Dr C. Pedreira, Catholic University PUC-Rio
Dr M. Steiner, Universitaet Muenster
Dr A. Timmermann, University of California, San Diego
Dr A. Weigend, University of Colorado
Dr H. White, University of California, San Diego

Hotel Accommodation: Convenient hotels include:

The Langham Hilton
1 Portland Place, London W1N 4JA
Tel: (+44) (0171) 636 10 00
Fax: (+44) (0171) 323 23 40

Sherlock Holmes Hotel
108 Baker Street, London NW1 1LB
Tel: (+44) (0171) 486 61 61
Fax: (+44) (0171) 486 08 84

The White House Hotel
Albany St., Regent's Park, London NW1
Tel: (+44) (0171) 387 12 00
Fax: (+44) (0171) 388 00 91

--------------------------Registration Form --------------------------

NNCM-95 Registration Form
Third International Conference on Neural Networks in the Capital Markets
October 12-13 1995

Name:____________________________________________________
Affiliation:_____________________________________________
Mailing Address: ________________________________________
_________________________________________________________
Telephone:_______________________________________________

****Please circle the applicable fees and write the total below****

Main Conference (October 12-13): (British Pounds)
Registration fee                          450
Discounted fee for academicians           250
  (letter on university letterhead required)
Discounted fee for full-time students     100
  (letter from registrar or faculty advisor required)

Tutorials (October 11): You must be registered for the main conference in order to register for the tutorials.

(British Pounds)
Morning Session Only       100
Afternoon Session Only     100
Both Sessions              150
Full-time students          50
  (letter from registrar or faculty advisor required)

TOTAL: _________

Payment may be made by: (please tick)
____ Check payable to London Business School
____ VISA   ____ Access   ____ American Express
Card Number:___________________________________

From terry at salk.edu Mon Apr 17 18:25:43 1995
From: terry at salk.edu (Terry Sejnowski)
Date: Mon, 17 Apr 95 15:25:43 PDT
Subject: Telluride DEADLINE April 24
Message-ID: <9504172225.AA04122@salk.edu>

FINAL CALL FOR PARTICIPATION IN A WORKSHOP ON
"NEUROMORPHIC ENGINEERING"
JUNE 25 - JULY 8, 1995
TELLURIDE, COLORADO

DEADLINE for application is April 24, 1995.

Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one two-week workshop that will be held in Telluride, Colorado in 1995. The first Telluride Workshop on Neuromorphic Engineering was held in July 1994 and was sponsored by the NSF. A summary of the '94 workshop and a list of participants is available over MOSAIC:

http://www.klab.caltech.edu/~timmer/telluride.html
OR
http://www.salk.edu/~bryan/telluride.html

GOALS:

Carver Mead introduced the term "Neuromorphic Engineering" for a new discipline based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants.
Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this two-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems.

The focus of the first week is on exploring neuromorphic systems through the medium of analog VLSI and will be organized by Rodney Douglas (Oxford) and Misha Mahowald (Oxford). Sessions will cover methods for the design and fabrication of multi-chip neuromorphic systems. This framework is suitable both for creating analogs of specific biological systems, which can serve as a modeling environment for biologists, and as a tool for engineers to create cooperative circuits based on biological principles. The workshop will provide the community with a common formal language for describing neuromorphic systems. Equipment will be available for participants to evaluate existing neuromorphic chips (including silicon retina, silicon neurons, and an oculomotor system).

The second week of the course will be on vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Sessions will cover issues of sensory-motor integration in the mammalian brain. Special emphasis will be placed on understanding neural algorithms used by the brain which can provide insights into constructing electrical circuits that can accomplish similar tasks.
Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. These researchers will also be asked to bring their own demonstrations, classroom experiments, and software for computer models. Demonstrations will include a robot-head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe, which in turn provides drive signals to the three motors of the head.

FORMAT:

Time will be divided between lectures, practical labs, and interest group meetings. There will be three lectures in the morning that cover issues that are important to the community in general; typically, one lecture will be neurobiological, one computational, and one on analog VLSI. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Participants are encouraged to bring their own material to share with others. After dinner, participants will get together more informally to hear lectures and demonstrations.

LOCATION AND ARRANGEMENTS:

The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 4 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on.
Participants are not required to have previous experience in analog VLSI circuit design, computational or machine vision, systems-level neurophysiology, or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware problems. We will have a network of SUN workstations running UNIX and PCs running Windows and Linux. Up to $500 will be reimbursed for domestic travel, and all housing expenses will be provided. Participants are expected to pay for food and incidental expenses and are expected to stay for the duration of this two-week workshop. A limited number of travel awards will be available for international travel. PARTIAL LIST OF INVITED LECTURERS: Richard Andersen, Caltech. Chris Atkeson, Georgia Tech. Dana Ballard, Rochester. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. Steve Deiss, Applied NeuroDynamics, San Diego. Chris Diorio, Caltech. Rodney Douglas, Oxford and Zurich. John Elias, Delaware University. Mary Hayhoe, Rochester. Christof Koch, Caltech. Shih-Chii Liu, Caltech and Rockwell. Jack Loomis, UC Santa Barbara. Jonathan Mills, Indiana University. Misha Mahowald, Oxford and Zurich. Mark Tilden, Los Alamos: Multi-legged Robots. Terry Sejnowski, Salk Institute and UC San Diego. Mona Zaghloul, George Washington University. HOW TO APPLY: The deadline for receipt of applications is April 24, 1995. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories).
We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One-page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified during the week of May 7, 1995. ----- From l.s.smith at cs.stir.ac.uk Tue Apr 18 11:29:46 1995 From: l.s.smith at cs.stir.ac.uk (Dr L S Smith (Staff)) Date: Tue, 18 Apr 1995 16:29:46 +0100 Subject: New book: Neural Computation and Psychology Message-ID: <199504181529.QAA18251@katrine.cs.stir.ac.uk> Newly-published book available. Order it from your bookshop! Neural Computation and Psychology eds: Leslie S. Smith, Peter J.B. Hancock. Proceedings of the 3rd Neural Computation and Psychology Workshop (NCPW3), Stirling, Scotland, 31 August - 2 September 1994 Springer-Verlag Workshops in Computing Series: published in collaboration with the British Computer Society. ISBN 3-540-19948-9 Published April 1995 Contents: Preface Cognition. Symbolic and subsymbolic approaches to cognition. David Willshaw (Centre for Cognitive Science, University of Edinburgh). Mapping across domains without feedback: a neural network model of transfer of implicit knowledge. Zoltan Dienes, Gerry T.M. Altmann, Shi-Ji Gao (Lab of Experimental Psychology, University of Sussex). Modelling reaction times. John A. Bullinaria (Dept of Psychology, University of Edinburgh). Chunking: an interpretation bottleneck. Jon Slack (Department of Psychology, University of Kent). Learning, relearning and recall for two strengths of learning in a neural network 'aged' by simulated dendritic attrition. R.
Cartwright, G.W. Humphreys (School of Psychology, University of Birmingham). Perception. Learning invariances via spatio-temporal constraints. James V. Stone (Cognitive and Computing Sciences, University of Sussex). Topographic map formation as statistical inference. Roland Baddeley (Dept of Psychology, University of Oxford). Edge enhancement and exploratory projection pursuit. Colin Fyfe, Roland Baddeley (Dept of Computer Science, University of Strathclyde, Dept of Psychology, University of Oxford). The "perceptual magnet" effect: a model based on self-organizing feature maps. M. Herrmann, H.-U. Bauer, R. Der (Nordita, Copenhagen, Inst. f. Theor. Physik, Universitaet Frankfurt, and Inst. f. Informatik, Universitaet Leipzig). How local cortical processors that maximize coherent variation could lay foundations for representation proper. W.A. Phillips, Jim Kay and D. Smyth (Dept of Psychology, University of Stirling, SASS, Aberdeen, Dept of Psychology, University of Stirling). Audition and Vision. Using complementary streams in a model that learns representations of abstract diatonic pitch. Niall Griffith (Dept of Computer Science, University of Exeter). Data-driven sound interpretation: its application to voiced sounds. Leslie S. Smith (Dept of Computing Science, University of Stirling). Computer simulation of gestalt auditory grouping by frequency proximity. Michael W. Beauvois, Ray Meddis (IRCAM, Paris, and Dept of Human Sciences, Loughborough University of Technology). Mechanisms of visual search: an implementation of guided search. K.J. Smith, G.W. Humphreys (School of Psychology, University of Birmingham). Categorical perception as an acquired phenomenon: what are the implications? James M. Beale, Frank C. Keil (Dept of Psychology, Cornell University). Sequence Learning. A computational account of phonologically mediated free recall. Peter J. Lovatt, Dimitrios Bairaktaris (CCCN, Dept of Computing Science, University of Stirling).
Interactions between knowledge sources in a dual-route connectionist model of spelling. David W. Glasspool, George Houghton, Tim Shallice (University College, London). Author Index. ____________________________________________________ Dr Leslie S. Smith Dept of Computing and Mathematics, Univ of Stirling Stirling FK9 4LA Scotland lss at cs.stir.ac.uk (NeXTmail welcome) Tel (44) 1786 467435 Fax (44) 1786 464551 www http://www.cs.stir.ac.uk/~lss/ From pierre.demartines at csemne.ch Tue Apr 18 10:14:00 1995 From: pierre.demartines at csemne.ch (pierre.demartines@csemne.ch) Date: Tue, 18 Apr 1995 16:14:00 +0200 Subject: French Doctoral Thesis available: Nonlinear Data Analysis through Self-Organizing Neural Networks Message-ID: <199504181414.QAA16521@grillet.csemne.ch> Hello, It is my pleasure to inform you about the availability of my doctoral dissertation (in French only) on "Data Analysis through Self-Organizing Neural Networks". You can get it by FTP from the TIRFLab ftp server (Grenoble, France). FTP-host: tirf.inpg.fr FTP-name: anonymous FTP-passwd: anything (your email, for instance) FTP-file: /pub/demartin/demartin.phd94.ps.Z (2.2 MB compressed, 8.7 MB uncompressed, 214 pages) ----------------------------------------------------------------- DATA ANALYSIS THROUGH SELF-ORGANIZED NEURAL NETWORKS Keywords -------- Data structure (submanifold), Self-Organizing Maps (Kohonen), Fractal Dimension, Dimension Reduction, Nonlinear Projection, Unfolding, "VQP" algorithm, diffeomorphism, interpolation, extrapolation, Multidimensional Scaling, Nonlinear Mapping, Industrial Applications. Abstract -------- Understanding data often amounts to retrieving hidden information from a large volume of collected variables. It is a search for linear or nonlinear dependencies between these observed variables, with the aim of reducing them to a small number of parameters.
A classical method, widely used for this purpose, is Principal Component Analysis (PCA). Unfortunately, this method is strictly linear and fails to reduce data that are redundant in a nonlinear way. Kohonen's Self-Organizing Maps are a type of artificial neural network whose functionality can be viewed as a nonlinear extension of PCA: data samples are mapped onto a grid of neurons. A major drawback of these maps, however, is their a priori defined shape (generally a square or a rectangle), which is rarely adapted to the shape of the parametric space to be represented. We relax this constraint with a new algorithm, called ``Vector Quantization and Projection'' (VQP). It is a kind of self-organizing map whose output space is continuous and automatically takes the appropriate shape. From a mathematical point of view, VQP is the search for a diffeomorphism between the raw data set and an unknown parametric representation to be found. More intuitively, it is an unfolding of the data structure towards a low-dimensional space, whose dimension is the number of degrees of freedom of the observed phenomenon and can be determined through fractal analysis of the data set. In order to illustrate the generality of VQP, we give a wide range of application examples (real or simulated) in several domains, such as data fusion, graph matching, industrial process monitoring and analysis, fault detection in devices, and adaptive routing in telecommunications. ---------------- ANALYSE DE DONNEES PAR RESEAUX DE NEURONES AUTO-ORGANISES Mots-cles --------- Structure de donnees (variete), cartes auto-organisantes (Kohonen), dimension fractale, reduction de dimension, projection non-lineaire, depliage, algorithme "VQP", diffeomorphisme, interpolation, extrapolation, "Multidimensional Scaling", "Nonlinear Mapping", applications industrielles.
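[Editor's note] The unfolding described in the abstract can be sketched as a toy distance-preserving projection. The following Python sketch is merely in the spirit of VQP (short input-space distances are reproduced in a continuous output space), not Demartines' actual algorithm; the function name and all parameters are invented for illustration.

```python
import numpy as np

def unfold(X, dim=2, epochs=500, lr=0.1, radius=1.0, seed=0):
    """Toy distance-preserving projection in the spirit of VQP:
    output points are moved so that small input-space distances
    are reproduced in the output space (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    Y = rng.normal(scale=0.1, size=(n, dim))     # random initial projection
    for _ in range(epochs):
        i = rng.integers(n)                      # pick a reference point
        dX = np.linalg.norm(X - X[i], axis=1)    # input-space distances to it
        dY = np.maximum(np.linalg.norm(Y - Y[i], axis=1), 1e-9)
        w = (dX < radius).astype(float)          # match local distances only
        w[i] = 0.0                               # the reference point stays put
        # Move every other point along the line joining it to Y[i]
        # until its output distance matches its input distance.
        Y += lr * (w * (dX - dY) / dY)[:, None] * (Y - Y[i])
    return Y

# Toy data: a curved 1-D arc embedded in 2-D, projected onto one dimension.
t = np.linspace(0, np.pi, 50)
X = np.c_[np.cos(t), np.sin(t)]
Y = unfold(X, dim=1)
```

A faithful unfolding should stretch the arc out along the line, approaching the arc length rather than collapsing onto the chord as a linear projection would.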
Resume ------ Chercher a comprendre des donnees, c'est souvent chercher a trouver de l'information cachee dans un gros volume de mesures redondantes. C'est chercher des dependances, lineaires ou non, entre les variables observees pour pouvoir resumer ces dernieres par un petit nombre de parametres. Une methode classique, l'Analyse en Composantes Principales (ACP), est abondamment employee dans ce but. Malheureusement, il s'agit d'une methode exclusivement lineaire, qui est donc incapable de reveler les dependances non lineaires entre les variables. Les cartes auto-organisantes de Kohonen sont des reseaux de neurones artificiels dont la fonction peut etre vue comme une extension de l'ACP aux cas non-lineaires. L'espace parametrique est represente par une grille de neurones, dont la forme, generalement carree ou rectangulaire, doit malheureusement etre choisie a priori. Cette forme est souvent inadaptee a celle de l'espace parametrique recherche. Nous liberons cette contrainte avec un nouvel algorithme, nomme ``Vector Quantization and Projection'' (VQP), qui est une sorte de carte auto-organisante dont l'espace de sortie est continu et prend automatiquement la forme adequate. Sur le plan mathematique, VQP peut etre defini comme la recherche d'un diffeomorphisme entre l'espace brut des donnees et un espace parametrique inconnu a trouver. Plus intuitivement, il s'agit d'un depliage de la structure des donnees vers un espace de plus petite dimension. Cette dimension, qui correspond au nombre de degres de liberte du phenomene etudie, peut etre determinee par des methodes d'analyse fractale du nuage de donnees.
Afin d'illustrer la generalite de l'approche VQP, nous donnons une serie d'exemples d'applications, simulees ou reelles, dans des domaines varies qui vont de la fusion de donnees a l'appariement de graphes, en passant par l'analyse ou la surveillance de procedes industriels, la detection de defauts dans des machines et le routage adaptatif en telecommunications. ----------------------------------------------------------------- FTP INSTRUCTIONS: unix> ftp tirf.inpg.fr (or 192.70.29.33) Name: anonymous Password: ftp> cd pub/demartin ftp> binary ftp> get demartin.phd94.ps.Z ftp> quit unix> uncompress demartin.phd94.ps.Z -------------------------------------------------------------------- Pierre Demartines email: demartin at csemne.ch C.S.E.M. Phone: (41) 38 205 252 Maladiere 71 Fax: (41) 38 205 770 CH-2007 Neuchatel Mosaic: ftp://tirf.inpg.fr/pub/HTML/tirf.html Switzerland -------------------------------------------------------------------- From mike at PSYCH.UALBERTA.CA Tue Apr 18 22:49:11 1995 From: mike at PSYCH.UALBERTA.CA (Mike Dawson) Date: Tue, 18 Apr 1995 20:49:11 -0600 Subject: Biological Computation Project WWW Message-ID: A non-text attachment was scrubbed... Name: not available Type: text Size: 733 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/094a88d7/attachment-0001.ksh From nips95 at mines.colorado.edu Wed Apr 19 03:07:41 1995 From: nips95 at mines.colorado.edu (NIPS Conference Office) Date: Wed, 19 Apr 95 03:07:41 -0400 Subject: reminder: May 20 NIPS submission deadline Message-ID: CALL FOR PAPERS Neural Information Processing Systems Natural and Synthetic Monday, Nov. 27 - Saturday, Dec. 2, 1995 Denver, Colorado This is the ninth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation.
The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov. 27) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec. 1-2). Major categories for paper submission, with example subcategories, are as follows: Neuroscience: systems physiology, signal and noise analysis, oscillations, synchronization, mechanisms of inhibition and neuromodulation, synaptic plasticity, computational models Theory: computational learning theory, complexity theory, dynamical systems, statistical mechanics, probability and statistics, approximation and estimation theory Implementation: analog and digital VLSI, novel neuro-devices, neurocomputing systems, optical, simulation tools, parallelism Algorithms and Architectures: learning algorithms, decision trees, constructive/pruning algorithms, localized basis functions, recurrent networks, genetic algorithms, combinatorial optimization, performance comparisons Visual Processing: image recognition, coding and classification, stereopsis, motion detection and tracking, visual psychophysics Speech, Handwriting and Signal Processing: speech recognition, coding and synthesis, handwriting recognition, adaptive equalization, nonlinear noise removal, auditory scene analysis Applications: time-series prediction, medical diagnosis, financial analysis, DNA/protein sequence analysis, music processing, expert systems, database mining Cognitive Science & AI: natural language, human learning and memory, perception and psychophysics, symbolic reasoning Control, Navigation, and Planning: robotic motor control, process control, navigation, path planning, exploration, dynamic programming, reinforcement learning Review Criteria: All submitted papers will be thoroughly refereed on the basis of technical quality, novelty, significance, and clarity.
Submissions should contain new results that have not been published previously. Authors should not be dissuaded from submitting recent work, as there will be an opportunity after the meeting to revise accepted manuscripts before submitting final camera-ready copy. Paper Format: Submitted papers may be up to eight pages in length, including figures and references. The page limit will be strictly enforced, and any submission exceeding eight pages will not be considered. Authors are encouraged (but not required) to use the NIPS style files obtainable by anonymous FTP at the sites given below. Papers must include physical and e-mail addresses of all authors, and MUST indicate one of the nine major categories listed above. Authors may also indicate a subcategory, and their preference, if any, for oral or poster presentation; this preference will play no role in paper acceptance. Unless otherwise indicated, correspondence will be sent to the first author. Submission Instructions: Send six copies of submitted papers to the address below; electronic or FAX submission is not acceptable. Include one additional copy of the abstract only, to be used for preparation of the abstracts booklet distributed at the meeting. Submissions mailed first-class from within the US or Canada, or sent from overseas via Federal Express/Airborne/DHL or similar carrier must be POSTMARKED by May 20, 1995. All other submissions must ARRIVE by this date. Mail submissions to: Michael Mozer NIPS*95 Program Chair Department of Computer Science University of Colorado Colorado Avenue and Regent Drive Boulder, CO 80309-0430 USA Mail general inquiries/requests for registration material to: NIPS*95 Registration Dept.
of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA FAX: (303) 273-3875 e-mail: nips95 at mines.colorado.edu Sites for LaTeX style files: Copies of "nips.tex" and "nips.sty" are available via anonymous ftp at helper.systems.caltech.edu (131.215.68.12) in /pub/nips, and at b.gp.cs.cmu.edu (128.2.242.8) in /usr/dst/public/nips. The style files and other conference information may also be retrieved via the World Wide Web at http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS.html NIPS*95 Organizing Committee: General Chair, David S. Touretzky, CMU; Program Chair, Michael Mozer, U. Colorado; Publications Chair, Michael Hasselmo, Harvard; Tutorial Chair, Jack Cowan, U. Chicago; Workshops Chair, Michael Perrone, IBM; Publicity Chair, David Cohn, MIT; Local Arrangements, Manavendra Misra, Colorado School of Mines; Treasurer, John Lazzaro, Berkeley. DEADLINE FOR SUBMISSIONS IS MAY 20, 1995 (POSTMARKED) -please post- From nips95 at mines.colorado.edu Wed Apr 19 03:08:45 1995 From: nips95 at mines.colorado.edu (NIPS Conference Office) Date: Wed, 19 Apr 95 03:08:45 -0400 Subject: NIPS workshop proposals due May 20 Message-ID: CALL FOR PROPOSALS NIPS*95 Post Conference Workshops December 1 and 2, 1995 Vail, Colorado Following the regular program of the Neural Information Processing Systems 1995 conference, workshops on current topics in neural information processing will be held on December 1 and 2, 1995, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited.
Past topics have included: active learning and control, architectural issues, attention, Bayesian analysis, benchmarking neural network applications, computational complexity issues, computational neuroscience, fast training techniques, genetic algorithms, music, neural network dynamics, optimization, recurrent nets, rules and connectionist models, self-organization, sensory biophysics, speech, time series prediction, vision and audition, implementations, and grammars. The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics. Representation of alternative viewpoints and panel-style discussions are particularly encouraged. Individuals proposing to chair a workshop will have responsibilities including: 1) arranging short informal presentations by experts working on the topic, 2) moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the "gong show"), and 3) writing a brief summary. Submission Instructions: Interested parties should submit a short proposal for a workshop of interest postmarked by May 20, 1995. (Express mail is not necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, the proposed length of the workshop (one day or two days), and the planned format. They should motivate why the topic is of interest or controversial, why it should be discussed, and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications, and evidence of scholarship in the field of interest.
Submissions should include contact name, address, email address, phone number and fax number if available. Mail proposals to: Michael P. Perrone NIPS*95 Workshops Chair IBM T.J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 (email: mpp at watson.ibm.com) PROPOSALS MUST BE POSTMARKED BY MAY 20, 1995 -Please Post- From pierre.demartines at csemne.ch Wed Apr 19 04:26:00 1995 From: pierre.demartines at csemne.ch (pierre.demartines@csemne.ch) Date: Wed, 19 Apr 1995 10:26:00 +0200 Subject: French Doctoral Thesis available: Nonlinear Data Analysis through Self-Organizing Neural Networks Message-ID: <199504190826.KAA17936@gervans.csemne.ch> Ahem... As some people told me, I forgot to set the correct read access on my dissertation's PostScript file (shame on me...). It's OK now. Excuse me for the trouble. For non-French readers, the same anonymous-ftp server (tirf.inpg.fr or 192.70.29.33, directory pub/demartin) holds some short papers on the subject (demartin.{gretsi95,nimes93,iwann93}.ps.Z). In fact, the shortest (2 pages) and most up-to-date one is gretsi95. The IWANN'93 one is a bit dated. In any case, the thesis contains many figures and equations that are readable by everybody. -------------------------------------------------------------------- Pierre Demartines email: demartin at csemne.ch C.S.E.M. Phone: (41) 38 205 252 Maladiere 71 Fax: (41) 38 205 770 CH-2007 Neuchatel Mosaic: ftp://tirf.inpg.fr/pub/HTML/tirf.html Switzerland -------------------------------------------------------------------- From kak at gate.ee.lsu.edu Wed Apr 19 12:46:35 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Wed, 19 Apr 95 11:46:35 CDT Subject: Paper Message-ID: <9504191646.AA15786@gate.ee.lsu.edu> The following paper has just been published: S.C. Kak, On quantum neural computing. INFORMATION SCIENCES, vol. 83, pp. 143-160, 1995.
---------------------------------------------------- Abstract: This paper examines the notion of quantum neural computing in the context of several new directions in neural network research. In particular, we consider new neuron and network models that lead to rapid training, chaotic dynamics in neuron assemblies, models of attention and awareness, and cytoskeletal microtubule information processing. ----------------------------------------------------- You can get a copy of the LaTeX file by anonymous ftp from gate.ee.lsu.edu The directory is ftp/pub/kak and the filename is q.tex From hu at eceserv0.ece.wisc.edu Wed Apr 19 15:08:25 1995 From: hu at eceserv0.ece.wisc.edu (Yu Hu) Date: Wed, 19 Apr 1995 14:08:25 -0500 Subject: CFP: Int'l Symp. on ANN, Dec.18-20, 1995, Taiwan, ROC (86 lines) Message-ID: <199504191908.AA27828@eceserv0.ece.wisc.edu> FIRST ANNOUNCEMENT AND CALL FOR PAPERS -------------------------------------- 1995 International Symposium on Artificial Neural Networks December 18-20, 1995, Hsinchu, Taiwan, Republic of China Sponsored by National Chiao-Tung University in cooperation with Ministry of Education, Taiwan R.O.C. National Science Council, Taiwan R.O.C. IEEE Signal Processing Society Call for Papers ------------------ The third in a series of International Symposia on Artificial Neural Networks will be held at the National Chiao-Tung University, Hsinchu, Taiwan in December 1995. Papers are solicited for, but not limited to, the following topics: Associative Memory Robotics Electrical Neurocomputers Sensation & Perception Image/Speech Processing Sensory/Motor Control Systems Machine Vision Supervised Learning Neurocognition Unsupervised Learning Neurodynamics Fuzzy Neural Systems Optical Neurocomputers Mathematical Methods Optimization Other Applications Prospective authors are invited to submit 4 copies of extended summaries of no more than 4 pages.
All manuscripts should be written in English, single-spaced, in a single column, on 8.5" by 11" white paper. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone/fax numbers, and email address if applicable. The indicated corresponding author will receive an acknowledgement of his/her submission. Camera-ready full papers of accepted manuscripts will be published in a hard-bound proceedings and distributed at the symposium. For more information, please consult the Mosaic URL http://www.ee.washington.edu/isann95.html, or use anonymous ftp from pierce.ee.washington.edu/pub/isann95/read.me (128.95.31.129). For submission from USA and Europe: Professor Yu-Hen Hu Dept. of Electrical and Computer Engineering Univ. of Wisconsin - Madison, Madison, WI 53706-1691 Phone: (608) 262-6724, Fax: (608) 262-1267 Email: hu at engr.wisc.edu For submission from Asia and Other Areas: Professor Sin-Horng Chen Dept. of Communication Engineering National Chiao-Tung Univ., Hsinchu, Taiwan Phone: (886) 35-712121 ext. 54522, Fax: (886) 35-710116 Email: isann95 at cc.nctu.edu.tw ************************* SCHEDULE ************************* Submission of extended summary: July 15 Notification of acceptance: September 30 Submission of photo-ready paper: October 31 Advanced registration, before: November 10 ORGANIZATION General Co-Chairs Hsin-Chia Fu Jenq-Neng Hwang National Chiao-Tung University University of Washington Hsinchu, Taiwan Seattle, Washington, USA hcfu at csie.nctu.edu.tw hwang at ee.washington.ed Program Co-Chairs Sin-Horng Chen Yu-Hen Hu National Chiao-Tung University University of Wisconsin Hsinchu, Taiwan Madison, Wisconsin, USA schen at cc.nctu.edu.tw hu at engr.wisc.edu Advisory Board Co-Chair Sun-Yuan Kung C. Y.
Wu Princeton University National Science Council Princeton, New Jersey, US Taipei, Taiwan, ROC From kak at gate.ee.lsu.edu Thu Apr 20 10:47:17 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Thu, 20 Apr 95 09:47:17 CDT Subject: Paper announcement Message-ID: <9504201447.AA23093@gate.ee.lsu.edu> I regret that the directory listing for the anonymous ftp of my paper was in error in yesterday's announcement. The correct listing is given below: S.C. Kak, On quantum neural computing. INFORMATION SCIENCES, vol. 83, pp. 143-160, 1995. -- Abstract: This paper examines the notion of quantum neural computing in the context of several new directions in neural network research. In particular, we consider new neuron and network models that lead to rapid training, chaotic dynamics in neuron assemblies, models of attention and awareness, and cytoskeletal microtubule processing. Several characteristics of quantum neural computing are examined. -- To obtain the .ps file of the paper, use anonymous ftp at gate.ee.lsu.edu. Get into directory pub and subdirectory kak. The .ps file is named q.ps ftp://gate.ee.lsu.edu/pub/kak/q.ps A more comprehensive (three times larger) report, to appear in ``Advances in Imaging and Electron Physics'', may also be obtained using anonymous ftp. The compressed postscript file is named a.ps.Z ftp://gate.ee.lsu.edu/pub/kak/a.ps.Z From uzimmer at informatik.uni-kl.de Thu Apr 20 10:54:44 1995 From: uzimmer at informatik.uni-kl.de (Uwe R.
Zimmer, AG vP) Date: Thu, 20 Apr 95 15:54:44 +0100 Subject: Paper available (mobile robots, self-localization) Message-ID: <950420.155444.1722@ag-vp-file-server.informatik.uni-kl.de> A report on a current project concerning basic mobile robot tasks is available via ftp or (together with some other reports) from the following WWW server: WWW-Server is: http://ag-vp-www.informatik.uni-kl.de/ --------------------------------------------------------------------------- --- Self-Localization in Dynamic Environments --------------------------------------------------------------------------- FTP-Server is: ftp.uni-kl.de Mode is : binary Directory is : reports_uni-kl/computer_science/mobile_robots/1995/papers File name is : Zimmer.Self-Loc.ps.gz IEEE/SOFT International Workshop BIES'95 May 30 - 31, 1995, Tokyo, Japan Self-Localization in Dynamic Environments Uwe R. Zimmer Self-localization in unknown environments, i.e. the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article is the construction of a qualitative, topological world model as a basis for self-localization. As a central aspect, reliability in terms of error tolerance and stability is emphasized. The proposed techniques place only weak demands on the kind and quality of the sensors employed, as well as on the kinematic precision of the mobile platform. Hard real-time constraints can be handled thanks to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot "ALICE". keywords: artificial neural networks, mobile robots, self-localization, self-organization, world-modelling (8 pages with photos and other figures) ----------------------------------------------------- ----- Uwe R.
Zimmer --- University of Kaiserslautern - Computer Science Department | 67663 Kaiserslautern - Germany | ------------------------------.--------------------------------. Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | From jon at maths.flinders.edu.au Fri Apr 21 09:13:14 1995 From: jon at maths.flinders.edu.au (Jonathan Baxter) Date: Fri, 21 Apr 1995 22:43:14 +0930 Subject: Paper Available: Learning Internal Representations Message-ID: <199504211313.AA12304@calvin.maths.flinders.edu.au> The following paper is available by anonymous ftp from calvin.maths.flinders.edu.au (129.96.32.2) /pub/jon/repcolt.ps.Z It is a (hopefully lossy) compression of part of my thesis and will appear in the proceedings of COLT '95. Instructions for retrieval are at the end of this message. Title: Learning Internal Representations (10 pages) Author: Jonathan Baxter Abstract: Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for {\em automatically} learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate {\em internal representation} for a learning environment and then using that representation to bias the learner's hypothesis space for the learning of future tasks drawn from the same environment. An internal representation must be learnt by sampling from {\em many similar tasks}, not just a single task as occurs in ordinary machine learning. It is proved that the number of examples $m$ {\em per task} required to ensure good generalisation from a representation learner obeys $m = O(a+b/n)$ where $n$ is the number of tasks being learnt and $a$ and $b$ are constants. If the tasks are learnt independently ({\em i.e.} without a common representation) then $m=O(a+b)$.
It is argued that for learning environments such as speech and character recognition $b\gg a$ and hence representation learning in these environments can potentially yield a drastic reduction in the number of examples required per task. It is also proved that if $n = O(b)$ (with $m=O(a+b/n)$) then the representation learnt will be good for learning novel tasks from the same environment, and that the number of examples required to generalise well on a novel task will be reduced to $O(a)$ (as opposed to $O(a+b)$ if no representation is used). It is shown that gradient descent can be used to train neural network representations and the results of an experiment are reported in which a neural network representation was learnt for an environment consisting of {\em translationally invariant} Boolean functions. The experiment provides strong qualitative support for the theoretical results. FTP Instructions: unix> ftp calvin.maths.flinders.edu.au (or 129.96.32.2) login: anonymous password: (your e-mail address) ftp> cd pub/jon ftp> binary ftp> get repcolt.ps.Z ftp> quit unix> uncompress repcolt.ps.Z unix> lpr repcolt.ps (or however you print) From rao at cs.rochester.edu Sat Apr 22 16:05:32 1995 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 22 Apr 1995 16:05:32 -0400 Subject: Paper Available: Face Recognition using Spatial Filters and Sparse Distributed Memory Message-ID: <199504222005.QAA27006@panda.cs.rochester.edu> The following paper is currently available via ftp: Rajesh P. N. Rao and Dana H. Ballard, "Natural Basis Functions and Topographic Memory for Face Recognition", IJCAI'95 (to appear).
ftp://cs.rochester.edu/pub/u/rao/papers/ijcai95.ps.Z Abstract: Recent work regarding the statistics of natural images has revealed that the dominant eigenvectors of arbitrary natural images closely approximate various oriented derivative-of-Gaussian functions; these functions have also been shown to provide the best fit to the receptive field profiles of cells in the primate striate cortex. We propose a scheme for expression-invariant face recognition that employs a fixed set of these ``natural'' basis functions to generate multiscale iconic representations of human faces. Using a fixed set of basis functions obviates the need for recomputing eigenvectors (a step that was necessary in some previous approaches employing principal component analysis (PCA) for recognition) while at the same time retaining the redundancy-reducing properties of PCA. A face is represented by a set of iconic representations automatically extracted from an input image. The description thus obtained is stored in a topographically-organized sparse distributed memory that is based on a model of human long-term memory first proposed by Kanerva. We describe experimental results for an implementation of the method on a pipeline image processor that is capable of achieving near real-time recognition by exploiting the processor's frame-rate convolution capability for indexing purposes. --------- Rajesh Rao Internet: rao at cs.rochester.edu Dept. 
of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ From jagota at next2.msci.memst.edu Sun Apr 23 14:29:04 1995 From: jagota at next2.msci.memst.edu (Arun Jagota) Date: Sun, 23 Apr 1995 13:29:04 -0500 Subject: HKP exercises (ftp) Message-ID: <199504231829.AA20899@next2> Dear Connectionists: The HKP exercise list (version 1), which has been sent by email to those who requested it, is now also available by anonymous ftp: ftp ftp.cs.buffalo.edu > cd users/jagota > binary > get HKP.ps.Z The same directory has an uncompressed version HKP.ps also. Arun Jagota, Math Sciences, University of Memphis From u095 at unb.ca Mon Apr 24 22:55:47 1995 From: u095 at unb.ca (Kamat) Date: Mon, 24 Apr 1995 23:55:47 -0300 (ADT) Subject: paper available "Symbolic vs Vector Space Learning" Message-ID: FTP-host: jupiter.csd.unb.ca FTP-filename: /pub/symbol/vector.ps.Z ---------------------------------------------------------------------------- Dear Connectionists, The following paper has been accepted for publication in Pattern Recognition Letters and is available through anonymous ftp. ---------------------------------------------------------------------------- CAN A VECTOR SPACE BASED LEARNING MODEL DISCOVER INDUCTIVE CLASS GENERALIZATION IN A SYMBOLIC ENVIRONMENT? Lev Goldfarb, John Abela, Virendra C. Bhavsar and Vithal N. Kamat Faculty of Computer Science University of New Brunswick Fredericton, N.B., Canada E3B 5A3 E-mail: goldfarb, x45i, bhavsar, u095 at unb.ca Abstract We outline a general framework for inductive learning based on the recently proposed evolving transformation system model. Mathematical foundations of this framework include two basic components: a set of operations (on objects) and the corresponding geometry defined by means of these operations. 
According to the framework, to perform inductive learning in a symbolic environment, the set of operations (class features) may need to be dynamically updated, and this requires that the geometric component allows for an evolving topology. In symbolic systems, as defined in this paper, the geometric component allows for a dynamic change in topology, whereas finite-dimensional numeric systems (vector spaces) can essentially have only one natural topology. This fact should form the basis of a complete formal proof that, in a symbolic setting, the vector space based models, e.g. artificial neural networks, cannot capture inductive generalization. Since the presented argument indicates that the symbolic learning process is more powerful than the numeric process, it appears that only the former should be properly called an inductive learning process. Keywords: Inductive learning, inductive generalization, vector space learning models, artificial neural networks, symbolic models, evolving transformation system, learning topologies. ------------------------------------------------------------------------- FTP-host: jupiter.csd.unb.ca FTP-filename: /pub/symbol/vector.ps.Z ------------------------------------------------------------------------- ftp instructions: % ftp jupiter.csd.unb.ca Name: anonymous password: your full email address ftp> cd pub/symbol ftp> binary ftp> get vector.ps.Z ftp> bye % uncompress vector.ps.Z % lpr vector.ps -------------------------------------------------------------------- Vithal N. Kamat Tel. (506) 453-4566 Faculty of Computer Science Fax. 
(506) 453-3566 University of New Brunswick E-mail: u095 at unb.ca Fredericton, N.B., CANADA E3B 5A3 -------------------------------------------------------------------- From saad at castle.ed.ac.uk Tue Apr 25 19:16:26 1995 From: saad at castle.ed.ac.uk (D Saad) Date: Tue, 25 Apr 95 19:16:26 BST Subject: TR announcement: On-Line Learning in Soft Committee Machines Message-ID: <9504251916.aa03297@uk.ac.ed.castle> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/saad.online.ps.Z The file saad.online.ps.Z is now available for copying from the Neuroprose repository: On-Line Learning in Soft Committee Machines (33 pages) David Saad - Department of Physics, University of Edinburgh, Edinburgh EH9 3JZ, UK. Sara A. Solla - CONNECT, The Niels Bohr Institute, Blegdamsvej 17, Copenhagen 2100, Denmark. The paper has been submitted for publication in Phys. Rev. E; a letter describing the main results is to appear in Phys. Rev. Lett. Abstract: -------- The problem of on-line learning in two-layer neural networks is studied within the framework of statistical mechanics. A fully connected committee machine with $K$ hidden units is trained by gradient descent to perform a task defined by a teacher committee machine with $M$ hidden units acting on randomly drawn inputs. The approach, based on a direct averaging over the activation of the hidden units, results in a set of first-order differential equations which describe the dynamical evolution of the overlaps among the various hidden units and allow for a computation of the generalization error. The equations of motion are obtained analytically for general $K$ and $M$, and provide a new and powerful tool used here to study a variety of realizable, over-realizable, and unrealizable learning scenarios, and to analyze the role of the learning rate in controlling the evolution and convergence of the learning process. 
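The stochastic process analyzed in the abstract above can also be mimicked numerically. The sketch below is a toy illustration, not the paper's analytical treatment: a soft committee machine student is trained on-line by gradient descent to imitate a teacher committee machine on randomly drawn inputs. Here tanh stands in for the error-function activation common in this literature, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                      # input dimension
K = M = 2                   # student / teacher hidden units (realizable case)
eta, steps = 0.5, 20_000    # learning rate, number of on-line examples

B = rng.standard_normal((M, N)) / np.sqrt(N)   # teacher weights (fixed)
W = rng.standard_normal((K, N)) / np.sqrt(N)   # student weights (trained)

def output(V, x):
    """Soft committee machine: unweighted sum of hidden-unit activations."""
    return np.tanh(V @ x).sum()

def gen_error(n_test=2000):
    """Monte-Carlo estimate of the generalization error (quadratic loss)."""
    X = rng.standard_normal((n_test, N))
    return 0.5 * np.mean([(output(W, x) - output(B, x)) ** 2 for x in X])

eg_before = gen_error()
for _ in range(steps):
    x = rng.standard_normal(N)              # each example is seen only once
    delta = output(W, x) - output(B, x)     # student error on this example
    # on-line gradient step on the quadratic loss, with the usual 1/N scaling
    W -= (eta / N) * delta * np.outer(1.0 - np.tanh(W @ x) ** 2, x)
eg_after = gen_error()
```

In the paper this single-run stochastic dynamics is averaged analytically, yielding the closed differential equations for the hidden-unit overlaps from which the generalization error follows; the simulation above is the corresponding raw process.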
From mel at quake.usc.edu Tue Apr 25 01:23:36 1995 From: mel at quake.usc.edu (Bartlett Mel) Date: Tue, 25 Apr 1995 13:23:36 +0800 Subject: POST-DOC POSITION Message-ID: <9504252023.AA14539@quake.usc.edu> ----- POST-DOCTORAL RESEARCH POSITION AVAILABLE ----- A post-doctoral position is now available in the laboratory of Dr. Bartlett Mel in the Biomedical Engineering Department at the University of Southern California. This position is for collaborative work on an NSF-funded project involving the study of synaptic learning in neurons with complex dendritic trees. Applicants should have a good background in neuroscience and strong computational and mathematical skills. The position is for one year, with the possibility of renewal for a second year. Salary is around $30,000. PROJECT OVERVIEW - A growing body of neuroanatomical, physiological, and computational modeling work is consistent with the idea that activity-independent synapse formation coupled with activity-dependent (Hebbian) synapse stabilization could lead to the development of correlation-based spatial structure of the synaptic contacts onto the dendrites of INDIVIDUAL neurons, by analogy with correlation-induced spatial maps formed across POPULATIONS of neurons (e.g. Miller 1994). Given the likely capacity for nonlinear subunit processing within dendritic trees (see Mel 1994), this additional putative "layer" of synaptic organization is likely to have profound consequences for neurobiological function, such as the development of complex receptive field properties in sensory neurons, as well as memory capacity in the context of supervised associative learning. 
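The correlation-based (Hebbian) stabilization invoked in the project overview can be illustrated, in a form far simpler than the biophysical setting of the project, by Oja's rule: a Hebbian update with a decay term that steers a single linear unit's weights toward the dominant direction of the input correlations. Everything below (dimensions, rates, the correlated direction u) is a hypothetical toy example, not part of the project described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta, steps = 8, 0.005, 20_000
u = np.ones(N) / np.sqrt(N)        # direction along which inputs correlate

def sample():
    # isotropic noise plus a strong shared component along u
    return rng.standard_normal(N) + 3.0 * rng.standard_normal() * u

w = rng.standard_normal(N)
w /= np.linalg.norm(w)
for _ in range(steps):
    x = sample()
    y = w @ x                      # "postsynaptic" activity
    # Oja's rule: Hebbian term y*x, plus a decay that keeps |w| bounded
    w += eta * y * (x - y * w)

alignment = abs(w @ u) / np.linalg.norm(w)   # ~1 when w picks out u
```

The weight vector ends up aligned with u, the principal axis of the input correlation matrix, which is the single-unit analogue of the correlation-driven structure discussed in the overview.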
WHAT THE APPLICANT WOULD DO includes at least two of the following: (1) detailed biophysical modeling of synaptic plasticity at the single-neuron level, (2) abstract modeling of individual dendritic neurons and populations of neurons in both supervised and unsupervised neurobiological learning contexts, and (3) mathematical analysis of the computational capacities of dendritic neurons. In addition, the applicant would participate in a collaboration with experimental neuroscience colleagues (Drs. Nancy Desmond and William Levy at the University of Virginia), who will be conducting experiments in rat hippocampus that relate closely to the above issues. OTHER PROJECTS currently ongoing in the lab include (i) the construction of a large-scale neurally-inspired system for 3-D visual object recognition, (ii) psychophysical experiments involving human visual perception and memory in collaboration with Dr. Kaz Morikawa, and (iii) biophysical-level modeling of the temporal response characteristics of dendritic neurons, in collaboration with Dr. Ernst Niebur at Caltech. ELSEWHERE AT USC, the Neural, Informational, and Behavioral Sciences (NIBS) graduate program encompasses several dozen neuroscience, psychology, computer science and engineering faculty interested in all aspects of brain and behavioral function. A few of these include Michael Arbib, Michel Baudry, George Bekey, Ted Berger, Irving Biederman, Christof von der Malsburg, Larry Swanson, Armand Tanguay, and Richard Thompson. Several excellent seminar series run throughout the year, and a daily NIBS tea provides a focal point for interaction. THE UNIVERSITY OF SOUTHERN CALIFORNIA is the oldest and largest private research university in the western US, and is among the top 10 private universities receiving federal funds for research and development in the country. The University is situated in the center of an unusually diverse metropolis (Los Angeles) surrounded by stunning natural scenery. 
Los Angeles may be the only city in the world in which it is possible to climb a 10,000 ft. peak in the morning, picnic on the beach for lunch, receive "aromatherapy" in the afternoon, dine in a fabulous Santa Monica restaurant, catch the LA Philharmonic at the Hollywood Bowl, and then drown one's late-night existentialist thoughts at a West-Side coffeehouse. APPLICATIONS SHOULD INCLUDE (1) a CV and cover letter detailing the applicant's background, motivations, and qualifications, (2) at least two letters of recommendation, and (3) a maximum of three relevant publications, sent to Dr. Bartlett Mel Biomedical Engineering Department USC, 1451 Los Angeles, CA 90089 (213)740-0334 Applicants must be U.S. citizens or permanent residents. Applications are encouraged from minorities and women. USC is an equal opportunity/affirmative action employer. DEADLINE for submission is June 1, 1995. BIBLIOGRAPHY Mel, B.W. (1994) Information processing in dendritic trees. Neural Computation, 6, 1031-1085. Miller, K. (1994) A model for the development of simple-cell receptive fields, and the ordered arrangement of orientation columns through activity-dependent competition of on- and off-center inputs. J. Neurosci., 14, 409-441. From M.West at statslab.cam.ac.uk Wed Apr 26 17:09:00 1995 From: M.West at statslab.cam.ac.uk (Mike West) Date: Wed, 26 Apr 95 17:09 BST Subject: No subject Message-ID: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1996 Joint Statistical Meetings, Chicago, 4-8 August 96 Call for Papers: ASA Section on BAYESIAN STATISTICAL SCIENCE The latest issue of Amstat News contains a call for Invited Paper Session suggestions and proposals from Dick Gunst, the ASA Program Chair. This is a follow-up call from SBSS. The Section will have at least one Invited Session, possibly more, including sessions co-sponsored by other sections. Proposals and suggestions received will also be considered for Special Contributed Paper Sessions. 
At this stage, suggestions and ideas for sessions need not identify a full list of speakers and discussants, but you should provide a general idea of the topic and focus. The theme for the 1996 meetings is "Challenging the Frontiers of Knowledge Using Statistical Science", intended to highlight new statistical developments at the forefront of the discipline -- theory, methods, applications, and cross-disciplinary activities. Suggestions for invited sessions should relate to this theme, involving topics of real novelty and importance, new directions of development in Bayesian statistics, and reflecting the current vibrancy of the discipline. Invited sessions typically have three speakers and one discussant, though the format can vary from one to three speakers, or comprise a panel discussion. Novel format suggestions are welcome. Special Contributed Sessions typically have four or five speakers plus a discussant. Please contact me with suggestions and ideas for sessions. The deadline for receipt at ASA of all invited paper sessions is soon: July 1 1995. Mike West 1996 SBSS Program Chair Email me at: January--July 6th 1995: m.west at statslab.cam.ac.uk After July 7th 1995: mw at isds.duke.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From ingber at alumni.caltech.edu Wed Apr 26 18:24:12 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Wed, 26 Apr 1995 15:24:12 -0700 Subject: New book: Neocortical Dynamics and Human EEG Rhythms Message-ID: <199504262224.PAA28361@alumni.caltech.edu> Neocortical Dynamics and Human EEG Rhythms P.L. Nunez Oxford University Press, 1995 >From the US, order by calling (800)451-7556. From outside the US, call (919)677-0977. From Europe, call Oxford U Press in London. 
An ascii file of the full preface can be obtained via ftp. Interactively [brackets signify machine prompts]:
[your_machine%] ftp ftp.alumni.caltech.edu
[Name (...):] anonymous
[Password:] your_e-mail_address
[ftp>] cd pub/ingber/MISC.DIR
[ftp>] get nunez95_preface.txt
[ftp>] quit
This archive can also be accessed via WWW path http://alumni.caltech.edu/~ingber ======================================================================== CONTENTS
1. Quantitative States of Neocortex (PL Nunez)
2. Toward a Physics of Neocortex (PL Nunez)
3. Mind, Brain, and Electroencephalography (PL Nunez)
4. Physiologic, Medical, and Cognitive Correlates of Electroencephalography (KL Pilgreen)
5. Dynamics of Electrical Activity of the Brain, Local Networks, and Modulating Systems (FH Lopes da Silva)
6. Steady-State Visually Evoked Potentials, Brain Resonances, and Cognitive Processes (RB Silberstein)
7. Neuroelectric Measures of Mind (AS Gevins and BA Cutillo)
8. Discrete Linear Systems of Physics and Brain (PL Nunez)
9. Continuous Linear Systems of Physics and Brain (PL Nunez)
10. Nonlinear Phenomena and Chaos (PL Nunez)
11. Global Contributions to EEG Dynamics (PL Nunez)
12. Experimental Connections Between EEG Data and the Global Wave Theory (PL Nunez)
13. Neuromodulation of Neocortical Dynamics (RB Silberstein)
14. Statistical Mechanics of Multiple Scales of Neocortical Interactions (L Ingber)
APPENDIX (PL Nunez)
======================================================================== /* RESEARCH E-Mail: ingber at alumni.caltech.edu * * INGBER WWW: http://alumni.caltech.edu/~ingber * * LESTER Archive: ftp.alumni.caltech.edu:/pub/ingber * * Prof. Lester Ingber _ P.O. 
Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From u095 at unb.ca Wed Apr 26 21:17:11 1995 From: u095 at unb.ca (Kamat) Date: Wed, 26 Apr 1995 22:17:11 -0300 (ADT) Subject: Symbolic vs Vector Space Learning Message-ID: Dear Connectionists, It appears that many neural net researchers do not agree with the main point of the paper [*] I posted two days ago: that there is a fundamental difference between the **appropriate** underlying mathematical models for neural nets and symbolic learning machines. Since one of us will be working on a formal proof of the above statement, and the issue is so critical, it might be useful (to say the least) to discuss it on this mailing list. Vithal [*] L. Goldfarb, J. Abela, V. C. Bhavsar and V. N. Kamat, Can a vector space based learning model discover inductive class generalization in a symbolic environment? (to be published in Pattern Recognition Letters). ============================================================================ Vithal N. Kamat, PhD Student, AI group, Faculty of Computer Science, University of New Brunswick, PO Box 4400, Fredericton, NB, E3B 5A3, CANADA. u095 at unb.ca. Fax:(506) 453-3566 My URL is http://ccortex.cs.unb.ca:8080/~kamat/kamat.html. ============================================================================ From ken at phy.ucsf.edu Wed Apr 26 22:24:40 1995 From: ken at phy.ucsf.edu (Ken Miller) Date: Wed, 26 Apr 1995 19:24:40 -0700 Subject: faculty position in computational neuroscience at UC Davis Message-ID: <9504270224.AA02578@coltrane.ucsf.edu> The following notice appeared on a more obscure list; it seems appropriate to redistribute it here. Ken Miller ken at phy.ucsf.edu p.s. please don't write to me about the job; I don't know anything more about it. 
----------------------------------------------------------------- From khbritten at ucdavis.edu Tue Apr 25 12:48:14 1995 From: khbritten at ucdavis.edu (khbritten@ucdavis.edu) Date: Tue, 25 Apr 1995 09:48:14 -0700 Subject: faculty position, last-minute notice Message-ID: TENURE-TRACK FACULTY POSITION The Center for Neuroscience and the Department of Psychology at the University of California, Davis, invite applications for a tenure-track position at the assistant professor level in the area of computational neuroscience. Candidates specializing in analytical approaches and predictive modeling of perceptual and/or motor circuitry in vertebrates are encouraged to apply. Postdoctoral experience is desirable. Ideal candidates would also incorporate neurophysiological, neuroanatomical, psychophysical, and/or cognitive-experimental techniques to test models. The appointee will be expected to teach undergraduate and graduate level courses in his/her areas of expertise. The University of California is interested in candidates who are committed to the highest standards of scholarship and professional activities. This position is open until filled, but applications must be received by May 1, 1995 to be assured full consideration. Applicants should submit curriculum vitae, bibliography, a brief description of research interests and the names of at least three references to: Michael S. Gazzaniga, Ph.D., Director, Center for Neuroscience, University of California, Davis, CA 95616 The University of California is an Equal Opportunity/Affirmative Action Employer with a strong institutional commitment to the achievement of diversity among its faculty and staff. 
From wimw at mbfys.kun.nl Thu Apr 27 06:16:19 1995 From: wimw at mbfys.kun.nl (Wim Wiegerinck) Date: Thu, 27 Apr 1995 12:16:19 +0200 Subject: 3rd SNN Neural Network Symposium Message-ID: <199504271016.MAA03517@septimius.mbfys.kun.nl> NEURAL NETWORKS AND ARTIFICIAL INTELLIGENCE 3rd SNN Neural Network Symposium September 14-15, 1995 Nijmegen, the Netherlands Call for Papers Deadline 21 May 1995 -------------------------------------------------------------- On September 14 and 15 SNN will organize its annual Symposium on Neural Networks in the University Auditorium and Conference Centre of the University of Nijmegen. The topic of the conference is "Neural Networks and Artificial Intelligence". The term "Artificial Intelligence" is often associated with "traditional AI" methodology. Here it is used in its literal sense, referring to the problem of creating intelligence by artificial means, regardless of the method used. The aim of the conference is two-fold: to give an overview of new developments in neurobiology and the cognitive sciences that may lead to novel computational paradigms, and to give an overview of recent achievements on some of the major conceptual problems of artificial intelligence and data modelling. Specific sessions are:
- Robotics (hierarchical motor control, exploration, trajectory planning, active vision)
- Attention (computational models, learning paradigms)
- Biological models (memory, perception, motor control, oscillations)
- Data interpretation (statistical theory, confidences of networks, Bayesian approach, rule extraction)
- Cognitive models (memory, reasoning, language interpretation)
The conference consists of four half-day single-track sessions. Each session will consist of 2 invited contributions and 2 or 3 contributions selected from the submitted papers. In addition there will be poster sessions. We expect approximately 250 attendants and 50 contributed papers. The proceedings will be published by Springer-Verlag. 
INVITED SPEAKERS
S. Thrun and J. Buhmann (University of Bonn) Neural networks for map building: How RHINO navigates in offices!
F. Groen (University of Amsterdam) Time-varying images and visual servoing
D. MacKay (University of Cambridge) Developments in Probabilistic Modelling with Neural Networks
B.D. Ripley (University of Oxford) Statistical ideas for selecting network architectures
L. van Hemmen (University of Munchen) Spiky neurons and models of synchrony
A. Herz (University of Oxford) Rapid local synchronization of action potentials
R. Eckhorn (University of Marburg) Segmentation coding in the visual system
B. van Dijk (University of Amsterdam) Synchrony and plasticity in the visual cortex
PROGRAM COMMITTEE Aertsen (Weizmann Institute, Israel), Amari (Tokyo University), Buhmann (University of Bonn), van Dijk (University of Amsterdam), Eckhorn (University of Marburg), Gielen (University of Nijmegen), van Hemmen (University of Munchen), Herz (University of Oxford), Heskes (University of Nijmegen), Kappen (University of Nijmegen), Krose (University of Amsterdam), Lopez (University of Madrid), Martinetz (Siemens Research, Munchen), MacKay (University of Cambridge), von Seelen (University of Bochum, Germany), Taylor (King's College, London) INSTRUCTIONS FOR SUBMISSION OF MANUSCRIPTS Please submit 4 copies of an extended abstract of at most 4 pages by regular mail to SNN at the address below, BEFORE MAY 21 1995. FAX OR EMAIL SUBMISSIONS ARE NOT ACCEPTED. All manuscripts must be written in English. Indicate whether the manuscript is intended for oral or poster presentation. With each submitted manuscript, indicate the name of the principal author, the mail and email address, telephone and fax numbers and the session the manuscript is submitted to. Authors will be notified about acceptance of their contribution as an oral or poster presentation by June 15 1995. 
Authors of accepted contributions are requested to submit a 4-page camera-ready version for the proceedings BEFORE JULY 1 1995. WE ADVISE THAT SUBMITTED MANUSCRIPTS ALREADY FOLLOW THE FINAL LAYOUT. Therefore, please observe these instructions carefully:
1. Text and illustrations should fill, but not extend beyond, an area of 120 x 195 mm.
2. Use a 10pt font size with line spacing of 2 pts.
3. Use a serifed font (e.g. Times).
4. Text should be justified on both left and right margins, not ragged.
5. Do not use running heads.
6. Do not use page numbers.
7. The title should be written in capital letters 2 cm from the top of the first page, followed by the authors' names and addresses and the abstract.
8. In the text, do not indent headings or captions.
9. Insert all tables, figures, and figure captions in the text at their final positions.
10. For references in the text, use numbers in square brackets.
A LaTeX style file is held on the CTAN archive at Aston University (UK). The files can be retrieved by anonymous ftp from ftp.tex.ac.uk where the files wicread.me wicsadv.org wicsadv.tex wicsbook.org and wicsbook.sty are held in the directory /pub/archive/macros/latex209/contrib/springer/wics INDUSTRIAL TRACK There will be an industrial conference entitled NEURAL NETWORKS AT WORK which runs concurrently with the scientific conference. A selection of the best working and 'money making' applications of neural networks in Europe will be presented. The industrial track is organized as part of the activities of the Esprit project SIENNA, which aims to survey successful and unsuccessful applications of neural networks in the European market. For additional information and a detailed program, please contact SNN at the address below. VENUE The conference will be held at the University Auditorium and Conference Centre of the University of Nijmegen. 
Instructions on how to get to the University Auditorium and Conference Centre will be sent to you with your conference registration. CONFERENCE REGISTRATION Before/after June 1 1995, the registration fee for the scientific track is NLG 250/300 for the two days and includes coffee/tea and lunches. One day registration is NLG 200. Scientific registration gives access to the scientific oral and poster presentations, and includes a copy of the scientific proceedings. Before/after June 1 1995, the registration fee for the industrial track is NLG 400/450 per day and includes coffee/tea and lunches. One day registration is NLG 300. Industrial registration gives access to the industrial track presentations as well as the scientific oral and poster presentations, and includes a copy of the scientific and the industrial proceedings. Full-time PhD or Master students may register at a reduced rate. Before/after July 15 1995, the registration fee for students is NLG 125/150 for the two days and includes coffee/tea and lunches. Student registration gives access to the scientific oral and poster presentations. Students must send a copy of their university registration card or a letter from their supervisor together with the registration form. Methods of payment are outlined in the enclosed registration form. To those who have completed the registration form with remittance of the appropriate fees, a receipt will be sent. This receipt should be presented at the registration desk at the conference. Payment must have been received by us before the conference. If not, you will have to pay in cash or with personal cheques at the conference. At the conference, CREDIT CARDS CAN NOT BE ACCEPTED. CANCELLATION Notification of cancellation must be sent in writing to the Conference Organizing Bureau of the University of Nijmegen (see address below). Cancellations received before July 1 will be refunded, excluding an administrative fee of NLG 50,-. 
Cancellations received after July 1 will not be refunded, but the proceedings will be mailed. ACCOMMODATIONS Hotel reservations will be made for you as indicated on the registration form. Payment can be made at arrival or departure of the hotel (depends on the hotel policy). All hotels are in central Nijmegen and within a ten minute bus ride from the University. The Foundation for Neural Networks and the Conference Organizing bureau cannot be held responsible for hotel reservations and related costs. LIABILITY SNN cannot be held liable for any personal accident or damage to the private property of participants during the conference. ADDRESSES Send SUBMISSIONS to: Prof.dr. C. Gielen, Dr. H. Kappen, Mrs. M. Rengers Foundation for Neural Networks (SNN) University of Nijmegen PObox 9101 6500 HB Nijmegen, The Netherlands tel: +31 80 614245 fax: +31 80 541435 email: snn at mbfys.kun.nl Send your REGISTRATION to: University of Nijmegen Conference organization bureau POBox 9111 6500 HN Nijmegen, The Netherlands tel: +31 80 615968 or 612184 fax: +31 80 567956 --------------------------------------------------------------------- REGISTRATION FORM Name: .......................................................... Mr/Mrs Affiliation: .................................................... ................................................................ Address: ........................................................ ................................................................ Zipcode/City: ................................................... Country: ........................................................ Conference Registration () I will participate at the SNN symposium Amount () industrial registration 2 days ...... () industrial registration 1 day: 14/15 september*) ...... () academic registration 2 days ...... () academic registration 1 day: 14/15 september*) ...... () student registration ...... 
*) please strike what is not applicable () I intend to present an oral or poster presentation for the scientific track Title: ............................................................ ................................................................... Session: .......................................................... ................................................................... () A 4 page abstract has been submitted () Bank transfer has been made (FREE OF BANK CHARGES) to SNN conferences, Credit Lyonnais Nederland NV Nijmegen, on bank account number 637984838, swift code CRLIJNL2RS. Amount: ................... () Charge my credit card for the amount of .................... () VISA () Master Card () American Express Card no.: Expiry date: Signature: () Please make hotel reservations in my name: Date of arrival: Date of departure: Single/double room (strike what is not applicable) single double (prices per night) () Hotel Mercure 145 165 () Hotel Apollo 90 120 () Hotel Catharina 43,50 87 (with shower and toilet) 57,50 115 --------------------------------------------------------------------- The symposium information can also be found on the World Wide Web: http://www.mbfys.kun.nl/SNN/Symposium/ From biehl at Physik.Uni-Wuerzburg.DE Thu Apr 27 17:05:57 1995 From: biehl at Physik.Uni-Wuerzburg.DE (Michael Biehl) Date: Thu, 27 Apr 95 17:05:57 MESZ Subject: paper available: Learning from Noisy Data... 
Message-ID: <199504271505.RAA00356@wptx08.physik.uni-wuerzburg.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/biehl.noisy.ps.Z The following paper has been placed in the Neuroprose archive (see above for ftp-host) and is now available as a compressed postscript file named biehl.noisy.ps.Z (5 pages of output) email address: biehl at physik.uni-wuerzburg.de **** Hardcopies cannot be provided **** ------------------------------------------------------------------ "Learning from Noisy Data: An Exactly Solvable Model" Michael Biehl, Peter Riegler, and Martin Stechert Institut fuer Theoretische Physik Julius-Maximilians-Universitaet Am Hubland D-97074 Wuerzburg Germany --------------------------------------------------------------------- Abstract: Exact results are derived for the learning of a linearly separable rule with a single-layer perceptron. We consider two sources of noise in the training data: random inversion of the example outputs, and weight noise in the teacher network, respectively. In both scenarios we investigate on-line learning schemes which utilize only the latest in a sequence of uncorrelated random examples for an update of the student weights. We study Hebbian learning as well as on-line algorithms which achieve an optimal decrease of the generalization error. The latter realize an asymptotic decay of the generalization error that coincides, apart from prefactors, with the one found for off-line schemes. ---------------------------------------------------------------------- From lpease at admin.ogi.edu Thu Apr 27 17:32:07 1995 From: lpease at admin.ogi.edu (lpease@admin.ogi.edu) Date: Thu, 27 Apr 95 14:32:07 PDT Subject: Neural Networks short course Message-ID: <9504272132.AA26111@admin.ogi.edu> ^*^*^*^*^*^*^*^*^*^*^**^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^* Linda M. Pease, Director lpease at admin.ogi.edu Office of Continuing Education Oregon Graduate Institute of Science & Technology 20000 N.W. 
Walker Road, Beaverton OR 97006 USA (shipping)
P.O. Box 91000, Portland, OR 97291-1000 USA (mailing)
+1-503-690-1259    +1-503-690-1686 fax

"The future belongs to those who believe in the beauty of their dreams"
                                                    -Eleanor Roosevelt
^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*^*

Oregon Graduate Institute of Science & Technology, Office of Continuing Education, offers the short course:

NEURAL NETWORKS: ALGORITHMS AND APPLICATIONS

June 12-16, 1995, at the OGI campus near Portland, Oregon.

Course Organizer: John E. Moody
Lead Instructor:  Hong Pi

With Lectures By:
  Dan Hammerstrom
  Todd K. Leen
  John E. Moody
  Thorsteinn S. Rognvaldsson
  Eric A. Wan

Artificial neural networks (ANN) have emerged as a new information processing technique and an effective computational model for solving pattern recognition and completion, feature extraction, optimization, and function approximation problems. This course introduces participants to the neural network paradigms and their applications in pattern classification; system identification; signal processing and image analysis; control engineering; diagnosis; time series prediction; financial analysis and trading; and speech recognition.

Designing a neural network application involves steps from data preprocessing to network tuning and selection. This course, with many examples, application demos and hands-on lab practice, will familiarize the participants with the techniques necessary for building successful applications. About 50 percent of the class time is assigned to lab sessions. The simulations will be based on Matlab, the Matlab Neural Net Toolbox, and other software running on 486 PCs.

Prerequisites: Linear algebra and calculus. Previous experience using Matlab is helpful, but not required.
Who will benefit: Technical professionals, business analysts and other individuals who wish to gain a basic understanding of the theory and algorithms of neural computation and/or are interested in applying ANN techniques to real-world, data-driven modeling problems.

Course Objectives: After completing the course, students will:
- Understand the basic neural network paradigms
- Be familiar with the range of ANN applications
- Have a good understanding of the techniques for designing successful applications
- Gain hands-on experience with ANN modeling.

Course Outline

Neural Networks: Biological and Artificial
  The biological inspiration. History of neural computing. Types of architectures and learning algorithms. Application areas.

Simple Perceptrons and Adalines
  Decision surfaces. Perceptron and Adaline learning rules. Stochastic gradient descent. Lab experiments.

Multi-Layer Feed-Forward Networks I
  Multi-Layer Perceptrons. Back-propagation learning. Generalization. Early stopping. Network performance analysis. Lab experiments.

Multi-Layer Feed-Forward Networks II
  Radial basis function networks. Projection pursuit regression. Variants of back-propagation. Levenberg-Marquardt optimization. Lab experiments.

Network Performance Optimization
  Network pruning techniques. Input variable selection. Sensitivity analysis. Regularization. Lab experiments.

Neural Networks for Pattern Recognition and Classification
  Nonparametric classification. Logistic regression. Bayesian approach. Statistical inference. Relation to other classification methods.

Self-Organized Networks and Unsupervised Learning
  K-means clustering. Kohonen feature mapping. Learning vector quantization. Adaptive principal components analysis. Exploratory projection pursuit. Applications. Lab experiments.

Time Series Prediction with Neural Networks
  Linear time series models. Nonlinear approaches. Case studies: economic and financial time series analysis. Lab experiments.
Neural Networks for Adaptive Control
  Nonlinear modeling in control. Neural network representations for dynamical systems. Reinforcement learning. Applications. Lab experiments.

Massively Parallel Implementation of Neural Nets on the Desktop
  Architecture and application demos of the Adaptive Solutions' CNAPS System.

Current State of Research and Future Directions

About the Instructors

Dan Hammerstrom received the B.S. degree in Electrical Engineering, with distinction, from Montana State University, the M.S. degree in Electrical Engineering from Stanford University, and the Ph.D. degree in Electrical Engineering from the University of Illinois. He was on the faculty of Cornell University from 1977 to 1980 as an assistant professor. From 1980 to 1985 he worked for Intel, where he participated in the development and implementation of the iAPX-432 and i960 and, as a consultant, the iWarp systolic processor that was jointly developed by Intel and Carnegie Mellon University. He is an associate professor at Oregon Graduate Institute, where he is pursuing research in massively parallel VLSI architectures, and is the founder and Chief Technical Officer of Adaptive Solutions, Inc. He is the architect of the Adaptive Solutions CNAPS neurocomputer. Dr. Hammerstrom's research interests are in the area of VLSI architectures for pattern recognition.

Todd K. Leen is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. He received his Ph.D. in theoretical Physics from the University of Wisconsin in 1982. From 1982 to 1987 he worked at IBM Corporation, and then pursued research in mathematical biology at Good Samaritan Hospital's Neurological Sciences Institute. He joined OGI in 1989. Dr. Leen's current research interests include neural learning, algorithms and architectures, stochastic optimization, model constraints and pruning, and neural and non-neural approaches to data representation and coding.
He is particularly interested in fast, local modeling approaches, and applications to image and speech processing. Dr. Leen served as theory program chair for the 1993 Neural Information Processing Systems (NIPS) conference, and workshops chair for the 1994 NIPS conference.

John E. Moody is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. His current research focuses on neural network learning theory and algorithms in its many manifestations. He is particularly interested in statistical learning theory, the dynamics of learning, and learning in dynamical contexts. Key application areas of his work are adaptive signal processing, adaptive control, time series analysis, forecasting, economics and finance. Moody has authored over 35 scientific papers, more than 25 of which concern the theory, algorithms, and applications of neural networks. Prior to joining the Oregon Graduate Institute, Moody was a member of the Computer Science and Neuroscience faculties at Yale University. Moody received his Ph.D. and M.A. degrees in Theoretical Physics from Princeton University, and graduated summa cum laude with a B.A. in Physics from the University of Chicago.

Hong Pi is a senior research associate at Oregon Graduate Institute. He received his Ph.D. in theoretical physics from the University of Wisconsin. His research interests include nonlinear modeling, neural network algorithms and applications.

Thorsteinn S. Rognvaldsson received the Ph.D. degree in theoretical physics from Lund University, Sweden, in 1994. His research interests are neural networks for prediction and classification. He is currently a postdoctoral research associate at Oregon Graduate Institute.

Eric A. Wan, Assistant Professor of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, received his Ph.D. in electrical engineering from Stanford University in 1994.
His research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, system identification, and adaptive control. He is a member of IEEE, INNS, Tau Beta Pi, Sigma Xi, and Phi Beta Kappa. Course Dates: M-F, June 12-16, 1995, 8:30am-5pm Course fee: $1695 (includes instruction, course materials, labs, break refreshments and lunches, Monday night reception and Thursday night dinner) For a complete course brochure contact: Linda M. Pease, Director Office of Continuing Education Oregon Graduate Institute of Science & Technology PO Box 91000 Portland, OR 97291-1000 +1-503-690-1259 +1-503-690-1686 (fax) e-mail: continuinged at admin.ogi.edu WWW home page: http://www.ogi.edu From ken at phy.ucsf.edu Thu Apr 27 18:37:49 1995 From: ken at phy.ucsf.edu (Ken Miller) Date: Thu, 27 Apr 1995 15:37:49 -0700 Subject: yet another faculty job opening, in theoretical visual neuroscience Message-ID: <9504272237.AA02912@coltrane.ucsf.edu> Hi Folks, I seem to be making a habit of this, but here's another computational neuroscience job that just appeared on the same list as the last one. Once again, thought I'd redistribute it. Once again, please don't write to me about the job, I don't know anything more about it. Ken ken at phy.ucsf.edu ---------------------------------------------------------------------- -> Date: Thu, 27 Apr 95 12:42:45 EDT -> From: msl at cns.NYU.EDU (Michael Landy) -> Subject: Job Posting -> New York University Center for Neural Science and Courant Institute -> of Mathematical Sciences -> As part of its Sloan Theoretical Neuroscience Program, the Center -> for Neural Science at New York University, together with the -> Courant Institute of Mathematical Sciences, is planning to hire an -> Assistant Professor (tenure-track) in the field of Theoretical -> Visual Neuroscience. 
Applicants should have a background in -> mathematics, physics, and/or computer science with a proven record -> of research in visual science or neuroscience. Applications -> (deadline June 30, 1995) should include a CV, the names and -> addresses of at least three individuals willing to write letters of -> reference, and a statement of research interests. -> Send to: -> Sloan Search Committee, -> Center for Neural Science, New York University, -> 4 Washington Place, New York NY 10003. -> New York University is an affirmative action/equal opportunity employer. From P.McKevitt at dcs.shef.ac.uk Sat Apr 29 11:58:35 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Sat, 29 Apr 95 11:58:35 BST Subject: IEE COLLOQ. LONDON MAY 15TH: GROUNDING REPRESENTATIONS (MURPHY) Message-ID: <9504291058.AA17092@dcs.shef.ac.uk> ============================================================================== GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS ============================================================================== ------------------------------------------------------------------------------ NOTE: Please note that there has been a programme change below and a new speaker (Elisabeth Andr/e: DFKI, Germany and Sheffield, England) added in. 
------------------------------------------------------------------------------

PROGRAMME AND CALL FOR PARTICIPATION

GROUNDING REPRESENTATIONS:
Integration of sensory information in Natural Language Processing,
Artificial Intelligence and Neural Networks

IEE COLLOQUIUM
IEE Computing and Control Division
[Professional group: C4 (Artificial Intelligence)]

in association with:
British Computer Society Specialist Group on Expert Systems
and
The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

MONDAY, MAY 15th, 1995
**********************
at the IEE Colloquium, Savoy Place, London, ENGLAND

Chairs: NOEL SHARKEY and PAUL MC KEVITT
Department of Computer Science, University of Sheffield, England

WORKSHOP DESCRIPTION:

Perhaps the most famous criticism of traditional Artificial Intelligence is that computer programs use symbols that are arbitrarily interpretable (see Searle, 1980 for the Chinese Room and Harnad, 1990 for the symbol grounding problem). We could, for example, use the word "apple" to mean anything from a "common fruit" to a "pig's nose". All the computer knows is the relationship between this symbol and the others that we have given it. The question is how it is possible to move from this notion of meaning, as the relationship between arbitrary symbols, to a notion of "intrinsic" meaning. In other words, how do we provide meaning by grounding computer symbols or representations in the physical world?

The aim of this colloquium is to take a broad look at many of the important issues in relating machine intelligence to the world and to make accessible some of the most recent research in integrating information from different modalities. For example, why is it important to have symbol or representation grounding, and what is the role of the emerging neural network technology? One approach has been to link intelligence to the sensory world through visual systems or robotic devices such as MURPHY.
Another approach is work on systems that integrate information from different modalities such as vision and language. Yet another approach has been to examine how the human brain relates sensory, motor and other information. It looks like we may at long last be getting a handle on the age-old CHINESE ROOM and SYMBOL GROUNDING problems. Hence this colloquium has as its focus "grounding representations". The colloquium will occur over one day and will focus on three themes: (1) Biology and development; (2) Computational models; and (3) Symbol grounding.

The target audience of this colloquium will include Engineers and Scientists in Neural Networks and Artificial Intelligence, Developmental Psychologists, Cognitive Scientists, Philosophers of Mind, Biologists and all of those interested in the application of Artificial Intelligence to real world problems.

PROGRAMME: Monday, May 15th, 1995
*********************************

INTRODUCTION:
 9.00 REGISTRATION + SUSTENANCE
10.00 `An introduction'
      NOEL SHARKEY (Department of Computer Science, University of Sheffield, ENGLAND)

COMPUTATIONAL MODELS:
10.30 `From visual data to multimedia presentations'
      ELISABETH ANDRÉ (German Research Center for Artificial Intelligence (DFKI), Saarbrücken, GERMANY, and Department of Computer Science, University of Sheffield, ENGLAND)
11.00 `Natural language and exploration of an information space'
      OLIVIERO STOCK (Istituto per la Ricerca Scientifica e Tecnologica (IRST), Trento, ITALY)
11.30 `How visual salience influences natural language descriptions'
      WOLFGANG MAASS (Cognitive Science Programme, Universitaet des Saarlandes, Saarbruecken, GERMANY)
12.00 DISCUSSION
12.30 LUNCH

GROUNDING SYMBOLS:
 2.00 `Grounding symbols in sensorimotor categories with neural networks'
      STEVAN HARNAD (Department of Psychology, University of Southampton, ENGLAND)
 2.30 `Some observations on symbol-grounding from a combined symbolic/connectionist viewpoint'
      JOHN BARNDEN (Computing Research Laboratory, New Mexico, USA) &
(Department of Computer Science, University of Reading, ENGLAND) 3.00 Sustenance Break 3.30 `On grounding language with neural networks' GEORG DORFFNER (Austrian Institute for Artificial Intelligence, Vienna, AUSTRIA) PANEL DISCUSSION AND QUESTIONS: 4.00 `Grounding representations' Chairs + Invited speakers S/IN S/IN: 4.30 `De brief/comments' PAUL MC KEVITT (Department of Computer Science, University of Sheffield, ENGLAND) 5.00 O/ICHE MHA/ITH ***************************** PUBLICATION: We intend to publish a book on this Colloquium Proceedings. ADDRESSES IEE CONTACT: Sarah Leong Groups Officer The Institution of Electrical Engineers (IEE) Savoy Place GB- WC2R OBL, London England, UK, EU. E-mail: SLeong at iee.org.uk (Sarah Leong) E-mail: mbarrett at iee.org.uk (Martin Barrett) E-mail: dpenrose at iee.org.uk (David Penrose) WWW: http://www.iee.org.uk Ftp: ftp.iee.org.uk FaX: +44 (0) 171-497-3633 Phone: +44 (0) 171-240-1871 (general) Phone: +44 (0) 171-344-8423 (direct) LOCATION: The Institution of Electrical Engineers (IEE) Savoy Place GB- WC2R OBL, London England, UK, EU. ACADEMIC CONTACT: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary) REGISTRATION: Registration forms are available from SARAH LEONG at the above address and should be sent to the following address: (It is NOT possible to register by E-mail.) Colloquium Bookings Institution of Electrical Engineers (IEE) PO Box 96 Stevenage GB- SG1 2SD Herts England, UK, EU. Fax: +44 (0) 143 874 2792 Receipt Enquiries: +44 (0) 143 876 7243 Registration enquiries: +44 (0) 171 240 1871 x.2206 PRE-REGISTRATION IS ADVISED ALTHOUGH YOU CAN REGISTER ON THE DAY OF THE EVENT. 
________________________________________________________________________
R E G I S T R A T I O N   C O S T S
________________________________________________________________________
(ALL FIGURES INCLUDE VAT)

IEE MEMBERS                                          44.00
NON-IEE MEMBERS                                      74.00
IEE MEMBERS (Retired, Unemployed, Students)           FREE
NON-IEE MEMBERS (Retired, Unemployed, Students)      22.00
LUNCH TICKET                                          4.70

MEMBERS: Members of the IEEIE, The British Computer Society and the Society for the Study of Artificial Intelligence and Simulation of Behaviour and Eurel Member Associations will be admitted at Members' rates.

==============================================================================
GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS
==============================================================================