From rsun at cecs.missouri.edu Wed May 1 10:49:29 2002 From: rsun at cecs.missouri.edu (rsun@cecs.missouri.edu) Date: Wed, 1 May 2002 09:49:29 -0500 Subject: temporal abstraction Message-ID: <200205011449.g41EnTR27802@ari1.cecs.missouri.edu> For autonomously creating temporal abstractions (open-loop or closed-loop policies), see also: R. Sun and C. Sessions, "Self-segmentation of sequences: automatic formation of hierarchies of sequential behaviors." IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol.30, No.3, pp.403-418, 2000. http://www.cecs.missouri.edu/~rsun/sun.smc00.ps http://www.cecs.missouri.edu/~rsun/sun.smc00.pdf The paper presents an approach to hierarchical reinforcement learning that does not rely on a priori domain-specific knowledge of hierarchical structures. It involves learning to segment action sequences to create hierarchical structures (for example, for dealing with partially observable Markov decision processes using multiple limited-memory or memoryless modules). Segmentation is based on reinforcement received during task execution, with the different levels of control communicating by sharing the reinforcement estimates each obtains. The algorithm segments action sequences to reduce non-Markovian temporal dependencies, and seeks out proper configurations of long- and short-range dependencies to facilitate the learning of the overall task. R. Sun and C. Sessions, "Learning plans without a priori knowledge." Adaptive Behavior, Vol.8, No.3/4, pp.225-253, 2000. (The paper has just appeared; publication of the journal was significantly delayed.) http://www.cecs.missouri.edu/~rsun/sun.ab00.ps This paper is concerned with the autonomous learning of plans in probabilistic domains without a priori domain-specific knowledge. In contrast to existing reinforcement learning algorithms that generate only reactive plans, and existing probabilistic planning algorithms that require a substantial amount of a priori knowledge in order to plan, a two-stage bottom-up process is devised: first, reinforcement learning/dynamic programming is applied, without a priori domain-specific knowledge, to acquire a reactive plan; then explicit plans are extracted from the reactive plan. Several options for plan extraction are examined, each based on a beam search that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning/dynamic programming. Some completeness and soundness results are given. Examples in several domains are discussed that together demonstrate the working of the proposed model.
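To make the plan-extraction step concrete, here is a minimal sketch (illustrative only, not the code from the paper) of extracting an explicit open-loop plan from a learned value function by restricted beam search over temporal projections. The model interface (transitions[s][a] -> list of (next_state, probability)) and the value table V are assumptions made for the example.

def extract_plan(start, V, transitions, beam_width=3, horizon=5):
    """Beam search over action sequences, scored by expected value under V."""
    # Each beam entry: (expected value, plan so far, state distribution).
    beam = [(V[start], [], {start: 1.0})]
    for _ in range(horizon):
        candidates = []
        for _, plan, dist in beam:
            actions = {a for s in dist for a in transitions[s]}
            for a in actions:
                # Temporal projection: push the state distribution through a.
                new_dist = {}
                for s, p in dist.items():
                    for s2, q in transitions[s].get(a, []):
                        new_dist[s2] = new_dist.get(s2, 0.0) + p * q
                score = sum(p * V[s] for s, p in new_dist.items())
                candidates.append((score, plan + [a], new_dist))
        # Restricted projection: keep only the best beam_width partial plans.
        beam = sorted(candidates, key=lambda c: -c[0])[:beam_width]
    return max(beam, key=lambda c: c[0])[1]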
Or go through my Web page at http://www.cecs.missouri.edu/~rsun Cheers, ---Ron From Pascal.Fries at fcdonders.kun.nl Wed May 1 05:32:36 2002 From: Pascal.Fries at fcdonders.kun.nl (Pascal Fries) Date: Wed, 01 May 2002 11:32:36 +0200 Subject: Ph.D. positions Message-ID: Ph.D. positions F. C. Donders Centre for Cognitive Neuroimaging University of Nijmegen Nijmegen, The Netherlands Two Ph.D. positions are available to study the role of neuronal coherence in human cognition in the Neurophysiology group of the F. C. Donders Centre for Cognitive Neuroimaging, headed by Pascal Fries (see Fries et al. Science 2001; Fries et al. Nature Neuroscience, 2001; Fries et al. J. Neurosci. 2002). The F.C. Donders Centre is a recently established research centre for cognitive neuroscience (see www.kun.nl/fcdonders). The centre provides, in one location, a 151-channel whole-head MEG system with integrated and coregistered EEG, several 128-channel EEG scanning facilities and two (f)MRI scanners (1.5 T and 3 T). All facilities are dedicated to research in cognitive neuroscience. The centre has an international staff and communication is in English. The Ph.D. candidates will be trained in the use of all imaging technologies available at the Centre and will use them in combination to work on projects concerned with the functional role of neuronal coherence in human visuo-motor and cross-modal integration. The ideal candidates have a good knowledge of systems and cognitive neuroscience, experience in programming (e.g. Matlab), an understanding of the basics of signal processing and a commitment to scientific excellence. The real candidates have some of those features :-). The positions are for 4 years. Please send e-mail applications to: Pascal.Fries at fcdonders.kun.nl Pascal Fries, M.D., Ph.D. Principal Investigator, Neurophysiology group F.C. Donders Centre for Cognitive Neuroimaging University of Nijmegen Trigon Bldg.; Rm. 0.40 Adelbertusplein 1 6525 EK Nijmegen, The Netherlands Tel: (+31) (0)24 36 10657 Fax: (+31) (0)24 36 10652 e-mail: pascal.fries at fcdonders.kun.nl website: http://www.kun.nl/fcdonders/ From clinton at compneuro.umn.edu Thu May 2 15:44:34 2002 From: clinton at compneuro.umn.edu (Kathleen Clinton) Date: Thu, 02 May 2002 14:44:34 -0500 Subject: NEURON Workshop Message-ID: <3CD19722.2080002@compneuro.umn.edu> ****************************** NEURON Workshop Announcement ****************************** Michael Hines and Ted Carnevale of Yale University will conduct a three- to five-day workshop on NEURON, a simulation environment for modeling neural systems. The workshop will be held Monday to Friday, September 9-13, 2002 at the University of Minnesota Digital Technology Center in Minneapolis, Minnesota. Registration is open to students and researchers from academic, government, and commercial organizations. Space is limited, and registrations will be accepted on a first-come, first-served basis. The workshop is sponsored by the University of Minnesota Computational Neuroscience Program, which is supported by a National Science Foundation Integrative Graduate Education and Research Training grant and by the University of Minnesota Graduate School, Institute of Technology, Medical School, and Supercomputing Institute for Digital Simulation and Advanced Computation. **Topics and Format** Participants may attend the workshop for three or five days. The first three days cover material necessary for the most common applications in neuroscience research and education. The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Windows and Linux platforms will be used. Days 1 - 3 "Fundamentals of Using the NEURON Simulation Environment" The first three days will cover the material that is required for informed use of the NEURON simulation environment. The emphasis will be on applying the graphical interface, which enables maximum productivity and conceptual control over models while at the same time reducing or eliminating the need to write code. Participants will be building their own models from the start of the course. By the end of the third day they will be well prepared to use NEURON on their own to explore a wide range of neural phenomena.
Topics will include:
Integration methods
--accuracy, stability, and computational efficiency
--fixed order, fixed timestep integration
--global and local variable order, variable timestep integration
Strategies for increasing computational efficiency.
Using NEURON's graphical interface to
--construct models of individual neurons with architectures that range from the simplest spherical cell to detailed models based on quantitative morphometric data (the CellBuilder).
--construct models that combine neurons with electronic instrumentation (i.e. capacitors, resistors, amplifiers, current sources and voltage sources) (the Linear Circuit Builder).
--construct network models that include artificial neurons, model cells with anatomical and biophysical properties, and hybrid nets with both kinds of cells (the Network Builder).
--control simulations.
--display simulation results as functions of time and space.
--analyze simulation results.
--analyze the electrotonic properties of neurons.
Adding new biophysical mechanisms.
Uses of the Vector class, such as
--synthesizing custom stimuli
--analyzing experimental data
--recording and analyzing simulation results
Managing modeling projects.
Days 4 and 5 "Beyond the GUI" The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Topics will include:
Advanced use of the CellBuilder, Network Builder, and Linear Circuit Builder.
When and how to modify model specification, initialization, and NEURON's main computational loop.
Exploiting special features of the Network Connection class for efficient implementation of use-dependent synaptic plasticity.
Using NEURON's tools for optimizing models.
Parallelizing computations.
Using new features of the extracellular mechanism for
--extracellular stimulation and recording
--implementation of gap junctions and ephaptic interactions
Developing new GUI tools.
**Registration** For academic or government employees the registration fee is $175 for the first three days and $270 for the full five days. These fees are $350 and $540, respectively, for commercial participants. Registration forms can be obtained at www.compneuro.umn.edu/NEURONregistration.html or from the workshop coordinator, Kathleen Clinton, at clinton at compneuro.umn.edu or (612) 625-8424. **Lodging** Out-of-town participants may stay at the Radisson Metrodome, 615 Washington Avenue SE in Minneapolis. It is within walking distance of the Digital Technology Center, located in Walter Library. Participants are responsible for making their own hotel reservations. When making reservations, participants should state that they are attending the NEURON Workshop. A small block of rooms is available until August 16, 2002. Reservations can be arranged by contacting Kathleen Clinton at clinton at compneuro.umn.edu or (612) 625-8424. From laura.bonzano at dibe.unige.it Mon May 6 05:52:49 2002 From: laura.bonzano at dibe.unige.it (Laura Bonzano) Date: Mon, 6 May 2002 11:52:49 +0200 Subject: NeuroEngineering Workshop and advanced School Message-ID: <00a201c1f4e3$c7f2dba0$6959fb82@bio.dibe.unige.it> Dear list members, I'm happy to announce the second edition of the "NeuroEngineering Workshop and advanced School", organized by: a) Prof. Sergio Martinoia, Neuroengineering and Bio-nanoTechnologies Group, Department of Biophysical and Electronic Engineering (DIBE), University of Genova, Italy, and b) Prof.
Pietro Morasso, Department of Communications, Computer and System Sciences (DIST), University of Genova, Italy, and funded by the University of Genova, Italy. It will take place June 10-13, 2002, in Villa Cambiaso, Genova. For more information, please visit our site: http://www.bio.dibe.unige.it/ Please forward this to anyone potentially interested. Best Regards, Laura Bonzano Apologies if you receive this more than once. ---------------------------------------------------------------- Laura Bonzano, Ph.D. Student Neuroengineering and Bio-nanoTechnology - NBT Department of Biophysical and Electronic Engineering - DIBE Via All'Opera Pia 11A, 16145, GENOA, ITALY Phone: +39-010-3532765 Fax: +39-010-3532133 URL: http://www.bio.dibe.unige.it/ E-mail: laura.bonzano at dibe.unige.it From finton at cs.wisc.edu Mon May 6 19:51:40 2002 From: finton at cs.wisc.edu (David J. Finton) Date: Mon, 6 May 2002 18:51:40 -0500 (CDT) Subject: Dissertation on cognitive economy and reinforcement learning Message-ID: Dear Connectionists: I am pleased to announce the availability of my Ph.D. dissertation for download: ____________________________________________________________________ Cognitive Economy and the Role of Representation in On-Line Learning PDF: http://www.cs.wisc.edu/~finton/thesis/main.pdf PS: http://www.cs.wisc.edu/~finton/thesis/main.ps.gz ____________________________________________________________________ The dissertation is 265 pages, and the downloadable files are 1.79 MB (PDF version) and 1.13 MB (gzipped PostScript version). An abstract follows. --David Finton finton at cs.wisc.edu http://www.cs.wisc.edu/~finton/ Abstract ________ How can an intelligent agent learn an effective representation of its world? This dissertation applies the psychological principle of cognitive economy to the problem of representation in reinforcement learning. Psychologists have shown that humans cope with difficult tasks by simplifying the task domain, focusing on relevant features, and generalizing over states of the world which are "the same" with respect to the task. This dissertation defines a principled set of requirements for representations in reinforcement learning, by applying these principles of cognitive economy to the agent's need to choose the correct actions in its task. The dissertation formalizes the principle of cognitive economy into algorithmic criteria for feature extraction in reinforcement learning. To do this, it develops mathematical definitions of feature importance, sound decisions, state compatibility, and necessary distinctions, in terms of the rewards expected by the agent in the task. The analysis shows how the representation determines the apparent values of the agent's actions, and proves that the state compatibility criteria presented here result in representations which satisfy a criterion for task learnability. The dissertation reports on experiments that illustrate one implementation of these ideas in a system which constructs its representation as it goes about learning the task. Results with the puck-on-a-hill task and the pole-balancing task show that the ideas are sound and can be of practical benefit. The principal contributions of this dissertation are a new framework for thinking about feature extraction in terms of cognitive economy, and a demonstration of the effectiveness of an algorithm based on this new framework. From mlyons at atr.co.jp Tue May 7 04:21:37 2002 From: mlyons at atr.co.jp (Michael J.
Lyons) Date: Tue, 7 May 2002 17:21:37 +0900 Subject: Job opportunity at ATR Message-ID: Dear Connectionists, I would be grateful if you could post this to any appropriate departmental mailing lists. Thank you in advance, Michael Lyons ATR Media Information Science Labs http://www.mis.atr.co.jp/~mlyons -- Opening for Visiting Researcher at ATR in Kyoto, Japan ------------------------------------------------------ There is an opening for a Visiting Researcher at the ATR Media Information Science Laboratories. Applicants should have a Masters or PhD in computer science, electrical engineering or a related discipline, and research experience and interests in the areas of computer vision, machine learning, human-computer interaction, and affective computing. Excellent software development skills and knowledge of image and signal processing techniques are a must. The initial appointment is for 1 year, renewable up to 4 years, based on performance. The salary and benefits package at ATR (which includes subsidized housing) is attractive by academic standards. The Advanced Telecommunications Research Labs is a basic research institute located at the birthplace of Japanese culture, close to the cities of Kyoto, Osaka, and Nara. Opportunities for cultural and outdoor activities abound. Approximately 20% of the researchers are from overseas and foreign staff support is excellent. Japanese language skills are not needed for this position. Applicants should send a one-page resume, a list of publications, and names and contact information for 3 referees (PDF format only) by e-mail to: Michael J. Lyons, PhD Senior Researcher ATR Media Information Sciences mlyons at atr.co.jp http://www.mis.atr.co.jp/~mlyons From duff at envy.cs.umass.edu Tue May 7 11:13:54 2002 From: duff at envy.cs.umass.edu (Michael Duff) Date: Tue, 7 May 2002 11:13:54 -0400 (EDT) Subject: Ph.D. thesis available Message-ID: Dear Connectionists, The following Ph.D. thesis has been made available: Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes Michael O. Duff Department of Computer Science University of Massachusetts, Amherst The thesis may be retrieved from: http://envy.cs.umass.edu/People/duff/diss.html ----------------------------------------------- Abstract In broad terms, this dissertation is about decision making under uncertainty. At each stage, a decision-making agent operating in an uncertain world takes an action that elicits a reinforcement signal and causes the state of the world (or agent) to change. The agent's goal is to maximize the total reward it derives over its entire duration of operation---an interval that may require the agent to strike a delicate balance between two sometimes conflicting impulses: (1) greedy exploitation of its current world model, and (2) exploration of its world to gain information that can refine the world model and improve the agent's policy. Over the years, a number of researchers have formulated this problem mathematically---"adaptive control processes," "dual control," "value of information," and "optimal learning" all address essentially the same issue and share a basic Bayesian framework that is well-suited for modeling the role of information and for defining what a solution is. Unfortunately, classical procedures for computing policies that optimally balance exploitation with exploration are intractable and have only been able to address problems that have a very small number of physical states and short planning horizons. This dissertation proposes computational procedures that retain the Bayesian formulation, but sidestep intractability by employing Monte Carlo simulation, function approximation, and diffusion modeling of information-state dynamics.
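As a concrete illustration of the information-state idea (a standard construction offered here for illustration, not code from the thesis): with independent Dirichlet priors over the unknown transition probabilities, the agent's posterior is summarized by a table of transition counts, and Monte-Carlo planning can sample candidate models from it. The class and method names below are hypothetical.

import numpy as np

class BayesAdaptiveBelief:
    def __init__(self, n_states, n_actions, prior=1.0):
        # alpha[s, a, s2]: Dirichlet parameters for P(s2 | s, a).
        self.alpha = np.full((n_states, n_actions, n_states), prior)

    def update(self, s, a, s2):
        """Bayes update of the information state after observing (s, a) -> s2."""
        self.alpha[s, a, s2] += 1.0

    def expected_model(self, s, a):
        """Posterior mean of the transition distribution for (s, a)."""
        return self.alpha[s, a] / self.alpha[s, a].sum()

    def sample_model(self, s, a):
        """One Monte-Carlo sample of the unknown dynamics, as used when
        planning by simulation over information-state trajectories."""
        return np.random.dirichlet(self.alpha[s, a])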
From alistair at robots.ox.ac.uk Tue May 7 07:23:32 2002 From: alistair at robots.ox.ac.uk (Alistair McEwan) Date: Tue, 7 May 2002 12:23:32 +0100 (BST) Subject: Post Graduate Studentship in Silicon Cochlea Design Message-ID: UNIVERSITY OF OXFORD DEPARTMENT OF ENGINEERING SCIENCE Post Graduate Studentship in Silicon Cochlea Design http://www.eng.ox.ac.uk/~wpcadm2/jobs/scadvert.html Applications are invited for a 3-year EPSRC-funded research studentship in the Microelectronic Circuits and Analogue Devices Research Group within the Department of Engineering Science. The position is available to commence at any time prior to 1 October 2002. The successful candidate will have a first or upper-second-class degree in Electronics or a similar subject. The area of the research is biologically inspired information processing, in particular auditory signal processing in the cochlea. The aim of the project, supervised by Dr. Steve Collins, is to develop a new design of silicon cochlea. This work, which is part of a wider EPSRC-funded project, will be undertaken in collaboration with Professor Leslie Smith at the University of Stirling, who has highlighted the benefits of exploiting temporal correlations in the auditory system. Work at Oxford will concentrate on designing, building and testing a silicon cochlea, in particular circuits that perform the critical functions identified by simulation work undertaken at the University of Stirling; the cochlea will then be integrated into a larger system at Stirling. Following previous work, the basic filtering action will be based upon subthreshold gm-C filters. However, the output of these filters will be represented as a stream of pulses, rather than an analogue voltage. This will both avoid the problems arising from using positive feedback in existing systems and create the opportunity to exploit temporal correlations in the power within different frequency bands. Several different designs will be examined in collaboration with Stirling to determine the one which produces the required functionality most efficiently. This design will then be implemented in a prototype system which will be characterised in detail at Oxford, with the functionality tested at Stirling. For more details contact Steve Collins (steve.collins at eng.ox.ac.uk) or visit the group website http://www.robots.ox.ac.uk/~mcad/ Further particulars may be found on the web at http://www.eng.ox.ac.uk/~wpcadm2/jobs/scfp.html Please quote SS/SC/DF/02/021 in all correspondence. The closing date for applications is 4th June 2002. The University is an Equal Opportunity Employer. From mvzaanen at science.uva.nl Wed May 8 03:44:49 2002 From: mvzaanen at science.uva.nl (Menno van Zaanen) Date: Wed, 8 May 2002 09:44:49 +0200 (CEST) Subject: PhD Thesis available Message-ID: Dear Connectionists, My PhD thesis, which I hope may be of interest to some of you, has been made available: Bootstrapping Structure into Language: Alignment-Based Learning Menno M.
van Zaanen School of Computing University of Leeds Leeds, UK It can be found at: http://www.science.uva.nl/~mvzaanen/docs/t_leeds.ps http://www.science.uva.nl/~mvzaanen/docs/t_leeds.ps.gz or via my homepage: http://www.science.uva.nl/~mvzaanen/ Abstract: This thesis introduces a new unsupervised learning framework, called Alignment-Based Learning, which is based on the alignment of sentences and Harris's (1951) notion of substitutability. Instances of the framework can be applied to an untagged, unstructured corpus of natural language sentences, resulting in a labelled, bracketed version of that corpus. Firstly, the framework aligns all sentences in the corpus in pairs, resulting in a partition of the sentences consisting of parts of the sentences that are equal in both sentences and parts that are unequal. Unequal parts of sentences can be seen as being substitutable for each other, since substituting one unequal part for the other results in another valid sentence. The unequal parts of the sentences are thus considered to be possible (possibly overlapping) constituents, called hypotheses. Secondly, the selection learning phase considers all hypotheses found by the alignment learning phase and selects the best of these. The hypotheses are selected based on the order in which they were found, or based on a probabilistic function. The framework can be extended with a grammar extraction phase. This extended framework is called parseABL. Instead of returning a structured version of the unstructured input corpus, like the ABL system, this system also returns a stochastic context-free or tree substitution grammar. Different instances of the framework have been tested on the English ATIS corpus, the Dutch OVIS corpus and the Wall Street Journal corpus. One of the interesting results, apart from the encouraging numerical results, is that all instances can (and do) learn recursive structures.
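A toy sketch of the alignment learning phase described above (illustrative only; ABL itself uses edit-distance-based alignment rather than this standard-library helper): aligning two sentences yields the parts that are equal and the unequal, substitutable parts, and the latter become hypotheses.

from difflib import SequenceMatcher

def hypotheses(sent1, sent2):
    """Return the unequal (substitutable) word spans of two sentences."""
    w1, w2 = sent1.split(), sent2.split()
    out = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, w1, w2).get_opcodes():
        if tag != "equal":  # an unequal part: candidate constituents
            out.append((w1[i1:i2], w2[j1:j2]))
    return out

print(hypotheses("show me flights from Boston to Dallas",
                 "show me flights from Denver to Atlanta"))
# -> [(['Boston'], ['Denver']), (['Dallas'], ['Atlanta'])]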
Best regards, Menno van Zaanen +-------------------------------------+ | Menno van Zaanen | "The more it stays the same, | mvzaanen at science.uva.nl | the less it changes." | http://www.science.uva.nl/~mvzaanen | -Spinal Tap From philh at cogs.susx.ac.uk Fri May 10 14:09:48 2002 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Fri, 10 May 2002 19:09:48 +0100 Subject: lectureship/senior lectureship neural computation Message-ID: <3CDBFEDE.D51C48DE@cogs.susx.ac.uk> LECTURESHIP/SENIOR LECTURESHIP IN NEURAL COMPUTATION, Ref 359. £25,455 to £32,537 or £34,158 to £38,603 per annum. Applications are invited for a permanent faculty position within the Computer Science and Artificial Intelligence Subject Group of the School of Cognitive and Computing Sciences. The expected start date is 1 October 2002 or as soon as possible thereafter. The candidate is expected to have expertise in the area of neural computation and a proven track record of research in a relevant field. The successful applicant will be expected to expand the existing high research profile of the neural computation group and to teach at both undergraduate and masters levels. Informal enquiries may be made to Des Watson (+44 1273 678045, email desw at cogs.susx.ac.uk) or Phil Husbands (+44 1273 678556, email philh at cogs.susx.ac.uk). Details of the School are available at http://www.cogs.susx.ac.uk/ Closing date: Friday 31 May 2002. Application details are available from and should be returned to the Staffing Services Office, Sussex House, University of Sussex, Falmer, Brighton, BN1 9RH. Tel 01273 678706, Fax 01273 877401, email recruitment at sussex.ac.uk. Details of all posts can be found via the University website: http://www.susx.ac.uk/Units/staffing An Equal Opportunity Employer From jt at lanl.gov Fri May 10 12:46:45 2002 From: jt at lanl.gov (James Theiler) Date: Fri, 10 May 2002 10:46:45 -0600 (MDT) Subject: postdoctoral position in machine learning at los alamos Message-ID: POSTDOCTORAL POSITION IN MACHINE LEARNING THEORY AND APPLICATIONS Space and Remote Sensing Sciences Group Los Alamos National Laboratory Candidates are sought for a postdoctoral position in the Space and Remote Sensing Sciences Group at Los Alamos National Laboratory in New Mexico, USA. The job will involve developing and applying state-of-the-art machine learning techniques to practical problems in multispectral image feature identification, and in multichannel time series analysis.
Prospective candidates should have a strong mathematical background, good oral and written communication skills, and a demonstrated ability to perform independent and creative research. Familiarity with modern statistical machine learning techniques such as support vector machines, boosting, Gaussian processes or Bayesian methods is essential. Experience with other machine learning paradigms including neural networks and genetic algorithms is also desirable. The candidate should be able to program competently in a language such as C, C++, Java, Matlab, etc. Experience with image or signal processing is a plus, and some knowledge of remote sensing or space physics would also be useful. The Space and Remote Sensing Sciences Group is part of the Nonproliferation and International Security Division at LANL. Its mission is to develop and apply remote sensing technologies to a variety of problems of national and international interest, including nonproliferation, detection of nuclear explosions, safeguarding nuclear materials, climate studies, environmental monitoring, volcanology, space sciences, and astrophysics. Los Alamos is a small and very friendly town situated 7200 ft up in the scenic Jemez mountains in northern New Mexico. The climate is very pleasant and opportunities for outdoor recreation are numerous (skiing, hiking, biking, climbing, etc). The Los Alamos public school system is excellent. LANL provides a very constructive working environment with abundant resources and support, and the opportunity to work with intelligent and creative people on a variety of interesting projects. Post-doc starting salaries are usually in the range $50-60K depending on experience, and assistance is provided with relocation expenses. The initial contract offered would be for two years, with good possibilities for contract extensions. The ability to get a US Department of Energy 'Q' clearance (which normally requires US citizenship) is helpful but not essential. Applicants must have received their PhD within the last five years. Interested candidates should visit the Jobs at LANL website at http://www.hr.lanl.gov/FindJob/index.stm, click on "Postdoctoral", and look for Job #201857. It is preferred that you apply through the website, but if you have any questions, contact James Theiler, by e-mail: jt at lanl.gov; or snail mail: Los Alamos National Laboratory, Mail Stop D-436, Los Alamos, NM 87545, USA. Direct applications should include a full resume with a list of two or three references, and a cover letter explaining why you think you would make a good candidate. Plain text, Postscript, or PDF attachments are fine. We can also read MS Word, but we don't like to. jt --------------------------------------------- James Theiler jt at lanl.gov MS-D436, NIS-2, LANL Los Alamos, NM 87545 ----- Space and Remote Sensing Sciences ----- From nnk at his.atr.co.jp Tue May 14 04:20:17 2002 From: nnk at his.atr.co.jp (Neural Networks Japan Office) Date: Tue, 14 May 2002 17:20:17 +0900 Subject: Neural Networks 15(3) Message-ID: NEURAL NETWORKS 15(3) Contents - Volume 15, Number 3 - 2002 ------------------------------------------------------------------ CURRENT OPINION: Three creatures named 'forward model'. A. Karniel CONTRIBUTED ARTICLES: ***** Neuroscience and Neuropsychology ***** A control model of the movement of attention. J. G. Taylor, M. Rogers A local and neurobiologically plausible method of learning correlated patterns. G. 
Athithan ***** Mathematical and Computational Analysis ***** Learning generative models of natural images. J. M. Wu, Z. H. Lin Optimal design of regularization term and regularization parameter by subspace information criterion. M. Sugiyama, H. Ogawa Parameter setting of the Hopfield network applied to TSP. P. M. Talavan, J. Yanez Transformations of sigma-pi nets: obtaining reflected functions by reflecting weight matrices. R. S. Neville, S. Eldridge On the capabilities of neural networks using limited precision weights. S. Draghici Exponential stability of Cohen-Grossberg neural network. L. Wang, X. Zou ***** Engineering and Design ***** A dynamically coupled neural oscillator network for image segmentation. K. Chen, D. Wang A deterministic annealing algorithm for approximating a solution of the max-bisection problem. C. Dang, L. He, I. K. Hui ***** Technology and Applications ***** AANN an alternative to GMM for pattern recognition. B. Yegnanarayana, S. P. Kishore CURRENT EVENTS ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment. 
----------------------------------------------------------------------------
Membership Type       INNS              ENNS                JNNS
----------------------------------------------------------------------------
membership with       $80 (regular)     SEK 660 (regular)   Y 13,000 (regular)
Neural Networks                                             (plus 2,000
                                                            enrollment fee)
                      $20 (student)     SEK 460 (student)   Y 11,000 (student)
                                                            (plus 2,000
                                                            enrollment fee)
----------------------------------------------------------------------------
membership without    $30               SEK 200             not available to
Neural Networks                                             non-students
                                                            (subscribe through
                                                            another society);
                                                            Y 5,000 (student)
                                                            (plus 2,000
                                                            enrollment fee)
----------------------------------------------------------------------------
Name: _____________________________________
Title: _____________________________________
Address: _____________________________________
_____________________________________
_____________________________________
Phone: _____________________________________
Fax: _____________________________________
Email: _____________________________________
Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
OR [ ] Charge my VISA or MasterCard
card number ____________________________
expiration date ________________________
INNS Membership, 19 Mantua Road, Mount Royal NJ 08061, USA; 856 423 0162 (phone); 856 423 3420 (fax); innshq at talley.com; http://www.inns.org
ENNS Membership, University of Skovde, P.O. Box 408, 531 28 Skovde, Sweden; 46 500 44 83 37 (phone); 46 500 44 83 99 (fax); enns at ida.his.se; http://www.his.se/ida/enns
JNNS Membership, c/o Professor Takashi Nagano, Faculty of Engineering, Hosei University, 3-7-2, Kajinocho, Koganei-shi, Tokyo 184-8584, Japan; 81 42 387 6350 (phone and fax); jnns at k.hosei.ac.jp; http://jnns.inf.eng.tamagawa.ac.jp/home-j.html
----------------------------------------------------------------- From tt at cs.dal.ca Tue May 14 08:43:53 2002 From: tt at cs.dal.ca (Thomas T. Trappenberg) Date: Tue, 14 May 2002 09:43:53 -0300 Subject: New book on computational neuroscience Message-ID: <001c01c1fb45$01980620$2743ad81@oyster> Dear Colleagues, It is my pleasure to announce the availability of my book Fundamentals of Computational Neuroscience, Oxford University Press, ISBN 0-19-851583-9 (see http://www.oup.co.uk/isbn/0-19-851583-9). This book contains introductory reviews ranging from the basic mechanisms of single neurons to system-level organization in the brain. The emphasis is thereby on the relation of simplified models of single neurons and neuronal networks to information processing in the brain. I hope that this will be useful for students who are seeking an introduction to this area, as well as to those wondering about the relation of abstract neural networks to information processing in the brain. The book can be ordered in Europe through the Oxford University Press web site. The web catalogs of the other OUP divisions will be updated soon. Meanwhile you can order the book by calling the OUP representative in your area and, of course, through other well-known sources. I will make some teaching material available through my web site, and hope for your suggestions and comments. Sincerely, Thomas Trappenberg ------------------------------------------------- Dr. Thomas P.
Trappenberg, Associate Professor, Faculty of Computer Science, Dalhousie University, 6050 University Avenue, Halifax, Nova Scotia, Canada B3H 1W5. Phone: (902) 494-3087 Fax: (902) 492-1517 Email: tt at cs.dal.ca From masulli at disi.unige.it Tue May 14 13:48:42 2002 From: masulli at disi.unige.it (Francesco Masulli) Date: Tue, 14 May 2002 17:48:42 +0000 Subject: School on ENSEMBLE METHODS FOR LEARNING MACHINES Vietri sul Mare 22-28 September 2002 Message-ID: <02051417484206.00959@portofino.disi.unige.it> 7th Course of the "International School on Neural Nets Eduardo R. Caianiello" on ENSEMBLE METHODS FOR LEARNING MACHINES IIASS, Vietri sul Mare (Salerno), ITALY, 22-28 September 2002 web page: http://www.iiass.it/school2002 JOINTLY ORGANIZED BY IIASS - International Institute for Advanced Scientific Studies E.R. Caianiello, Vietri sul Mare (SA), Italy, and EMFCSC - Ettore Majorana Foundation and Center for Scientific Culture, Erice (TR), Italy. AIMS In the last decade, ensemble methods have been shown to be effective in many application domains and constitute one of the main current directions in Machine Learning research. This school will address, from a theoretical and empirical viewpoint, several important questions concerning the combination of Learning Machines. In particular, different approaches to the problem which have been proposed in the context of Machine Learning, Neural Networks, and Statistical Pattern Recognition will be discussed. Moreover, special emphasis will be given to theoretical and practical tools for developing ensemble methods and evaluating their applications on real-world domains, such as Remote Sensing, Bioinformatics and the Medical field. SPONSORS GNCS-Gruppo Nazionale per il Calcolo Scientifico, IEEE Neural Networks Council, INNS-International Neural Network Society, SIREN-Italian Neural Networks Society, University of Salerno, Italy. DIRECTORS OF THE COURSE: Nathan Intrator (USA) and Francesco Masulli (Italy). DIRECTORS OF THE SCHOOL: Michael Jordan (USA) and Maria Marinaro (Italy). LECTURERS Leo Breiman, University of California at Berkeley, California, USA; Lorenzo Bruzzone, University of Trento, Trento, Italy; Thomas G. Dietterich, Oregon State University, Oregon, USA; Cesare Furlanello, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy; Giuseppina C. Gini, Politecnico di Milano, Milano, Italy; Tin Kam Ho, Bell Laboratories, New Jersey, USA; Nathan Intrator, Brown University, Providence, Rhode Island, USA; Ludmila I. Kuncheva, University of Wales, Bangor, UK; Francesco Masulli, University of Pisa, Italy; Stefano Merler, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy; Fabio Roli, University of Cagliari, Cagliari, Italy; Giorgio Valentini, University of Genova, Italy. PLACE International Institute for Advanced Scientific Studies E.R. Caianiello (IIASS), Via Pellegrino 19, 84019 Vietri sul Mare, Salerno (Italy). POETIC TOUCH Vietri (from "Veteri", its ancient Roman name) sul Mare ("on sea") is located within walking distance from Salerno and marks the beginning of the Amalfi coast. Short rides take you to Positano, Sorrento, Pompei, Herculaneum, Paestum, Vesuvius, or, by boat, the islands of Capri, Ischia, and Procida. Velia (the ancient "Elea" of Zeno and Parmenides) is a hundred kilometers farther down the coast. GENERALITIES Recently, driven by application needs, multiple classifier combinations have evolved into a practical and effective solution for real-world pattern recognition tasks.
The idea appears in various disciplines (including Machine Learning, Neural Networks, Pattern Recognition, and Statistics) under several names: hybrid methods, combining decisions, multiple experts, mixture of experts, sensor fusion and many more. In some cases, the combination is motivated by the simple observation that classifier performance is not uniform across the input space and different classifiers excel in different regions. Under a Bayesian framework, integrating over the distribution of experts leads naturally to expert combination. The generalization capabilities of ensembles of learning machines have been interpreted in the framework of Statistical Learning Theory and in the related theory of Large Margin Classifiers. There are several ways to use more than one classifier in a classification problem. A first "averaging" approach consists of generating multiple hypotheses from a single or multiple learning algorithms, and combining them through majority voting or different linear and nonlinear combinations. A "feature-oriented" approach is based on different methods to build ensembles of learning machines by subdividing the input space (e.g., random subspace methods, multiple sensor fusion, feature transformation fusion). "Divide-and-conquer" approaches isolate the regions in input space on which each classifier performs well, and direct new input accordingly, or subdivide a complex learning problem into a set of simpler subproblems, recombining them using suitable decoding methods. A "sequential-resampling" approach builds multiple classifier systems using bootstrap methods, in order to reduce variance (bagging) or to jointly reduce bias and unbiased variance (boosting). There are fundamental questions that need to be addressed for a practical use of this collection of approaches: What are the theoretical tools to interpret this multiplicity of ensemble methods, possibly in a unified framework? What is gained and lost in a combination of experts, and when is it preferable to alternative approaches? What types of data are best suited to expert combination? What types of experts are best suited for combination? What are optimal training methods for experts which are expected to participate in a collective decision? What combination strategies are best suited to a particular problem and to a particular distribution of the data? What are the statistical methods and the appropriate benchmark data to evaluate multiclassifier systems? The school will address some of the above questions from a theoretical and empirical viewpoint and will teach students about this exciting and very promising field using current state-of-the-art data sets for pattern recognition, classification and regression. The main goals of the school are: 1. Offer an overview of the main research issues of ensemble methods from the different and complementary perspectives of Machine Learning, Neural Networks, Statistics and Pattern Recognition. 2. Offer theoretical tools to analyze the diverse approaches, and critically evaluate their applications. 3. Offer practical and theoretical tools to develop new ensemble methods and analyze their application to real-world problems. FORMAT The meeting will follow the usual format of tutorials and panel discussions together with poster sessions for contributed papers. A demo lab with four Linux workstations will be available to the participants for testing and comparing ensemble methods.
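As a concrete illustration of the "averaging" and "sequential-resampling" approaches described above (a minimal sketch, assuming scikit-learn-style base learners with fit/predict methods and integer class labels; it is not course material):

import numpy as np

def bagged_vote(base_learner, X, y, X_test, n_members=25, seed=0):
    """Bagging: train members on bootstrap resamples, combine by majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
        member = base_learner()
        member.fit(X[idx], y[idx])
        votes.append(member.predict(X_test))
    votes = np.asarray(votes, dtype=int)             # (n_members, n_test)
    # Majority vote over the ensemble members for each test point.
    return np.array([np.bincount(col).argmax() for col in votes.T])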
There will be an 11 Mb/s wireless network available so that participants arriving with their laptops and an appropriate wireless card can stay connected while at the meeting area. DURATION Participants are expected to arrive in time for the evening meal on Sunday Sept 22nd and depart on Saturday Sept 28th. Sessions will take place from Monday 23rd to Friday 27th. PROCEEDINGS The proceedings will be published in the form of a book containing tutorial chapters written by the lecturers and possibly shorter papers from other participants. One free copy of the book will be distributed to each participant after the school. LANGUAGE The official language of the school will be English. POSTER SUBMISSION There will be a poster session for contributed presentations from participants. Proposals consisting of a one-page abstract for review by the organizers should be submitted with applications. REGISTRATION FEE Master and PhD Students: 650,00 Euro; Academic Participants (govt/univ): 800,00 Euro; Industrial Participants: 1.100,00 Euro. The fee includes accommodation (3-star hotel, double room), meals and a copy of the proceedings of the school. Transportation is not included. A supplement of 20 Euro per night should be paid for a single room. Members of sponsoring organizations will receive a discount of 50 Euro on the registration fee. A few scholarships are available for students who are otherwise unable to participate in the school. Payment details will be notified with acceptance of applications. ELIGIBILITY The school is open to all suitably qualified scientists. People with only a few years of experience in the field should include a recommendation letter from their supervisor. APPLICATION PROCEDURE Important Dates: Application deadline: June 20, 2002; Notification of acceptance: July 10, 2002; Registration fee payment deadline: July 20, 2002; School: Sept 22-28, 2002. Places are limited to a maximum of 60 participants in addition to the lecturers. These will be allocated on a first come, first served basis.
**********************************************************************
APPLICATION FORM
Title: ...............................................................
Family Name: .........................................................
Other Names: .........................................................
Mailing Address (include institution or company name if appropriate):
.....................................................................
.....................................................................
.....................................................................
Phone: ...................... Fax: ...................................
E-mail: ..............................................................
Date of Arrival: .....................................................
Date of Departure: ...................................................
Are you sending the abstract of a poster? yes/no (delete the alternative which does not apply)
Are you applying for a scholarship? yes/no (delete the alternative which does not apply)
If yes, please include a justification letter for the scholarship request.
*****************************************************************
Please send the application form together with the recommendation letter by electronic mail to: iiass.vietri at tin.it, subject: summer school; or by fax to: +39 089 761 189 (attn. Prof. M.
Marinaro); or by ordinary mail to the address below: IIASS, Via Pellegrino 19, I-84019 Vietri sul Mare (Sa), Italy. WEB PAGE OF THE COURSE The web page of the course is http://www.iiass.it/school2002 and will contain all updates related to the course. At http://www.iiass.it/school2002/ensemble-lab.html a web portal on ENSEMBLE METHODS is under development, including pointers to relevant papers, databases and software. Contributions to this portal are kindly requested from all researchers involved in this area. Please send all contributions to Giorgio Valentini (valenti at disi.unige.it). FOR FURTHER INFORMATION PLEASE CONTACT: Prof. Francesco Masulli, DISI & INFM, Via Dodecaneso 35, 16146 Genova (Italy); email: masulli at ge.infm.it; tel: +39 010 353 6604; fax: +39 010 353 6699. From christof at teuscher.ch Wed May 15 12:00:30 2002 From: christof at teuscher.ch (Christof Teuscher) Date: Wed, 15 May 2002 18:00:30 +0200 Subject: [Turing Day] - Last Call for Participation Message-ID: <3CE2861E.2471C176@teuscher.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. ================================================================ **************************************************************** SECOND AND LAST CALL FOR PARTICIPATION **************************************************************** ** Turing Day ** Computing science 90 years from the birth of Alan M. Turing. Friday, June 28, 2002 Swiss Federal Institute of Technology Lausanne (EPFL) Lausanne, Switzerland http://lslwww.epfl.ch/turingday **************************************************************** Purpose: -------- Alan Mathison Turing, born on June 23, 1912, is considered one of the most creative thinkers of the 20th century. His interests, from computing to the mind through information science and biology, span many of the emerging themes of the 21st century. On June 28, 2002, we commemorate the 90th anniversary of Alan Mathison Turing's birth in the form of a one-day workshop held at the Swiss Federal Institute of Technology in Lausanne. The goal of this special day is to remember Alan Turing and to revisit his contributions to computing science. The workshop will consist of a series of invited talks given by internationally renowned experts in the field. Invited Speakers: -----------------
B. Jack Copeland - "Artificial Intelligence and the Turing Test"
Martin Davis - "The Church-Turing Thesis: Has it been Superseded?"
Andrew Hodges - "What would Alan Turing have done after 1954?"
Douglas R. Hofstadter - "The Strange Loop -- from Epimenides to Cantor to Russell to Richard to Goedel to Turing to Tarski to von Neumann to Crick and Watson"
Tony Sale - "What did Turing do at Bletchley Park?"
Jonathan Swinton - "Watching the Daisies Grow: Turing and Fibonacci Phyllotaxis"
Gianluca Tempesti - "The Turing Machine Redux: A Bio-Inspired Implementation"
Christof Teuscher - "Connectionism, Turing, and the Brain"
Special Events: ---------------
o Display and demonstration of an original Enigma machine
o Exhibition of historical computers (Bolo's Computer Museum)
o Demonstration of Turing's neural networks on the BioWall
o Demonstration of a self-replicating universal Turing machine
For up-to-date information and registration, consult the Turing Day web-site: http://lslwww.epfl.ch/turingday We are looking forward to seeing you in beautiful Lausanne!
Sincerely, - Christof Teuscher ---------------------------------------------------------------- Christof Teuscher Swiss Federal Institute of Technology Lausanne (EPFL) christof at teuscher.ch http://www.teuscher.ch/christof ---------------------------------------------------------------- Turing Day: http://lslwww.epfl.ch/turingday IPCAT2003: http://lslwww.epfl.ch/ipcat2003 ---------------------------------------------------------------- From mbethge at physik.uni-bremen.de Thu May 16 11:37:47 2002 From: mbethge at physik.uni-bremen.de (Matthias Bethge) Date: Thu, 16 May 2002 17:37:47 +0200 Subject: paper available Message-ID: <3CE3D24B.5E377465@physik.uni-bremen.de> Dear Connectionists, the following preprint is available for download: http://www-neuro.physik.uni-bremen.de/~mbethge/publications.html Matthias Bethge, David Rotermund and Klaus Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Computation, in press. Abstract: Efficient coding has been proposed as a first principle explaining neuronal response properties in the central nervous system. The shape of optimal codes, however, strongly depends on the natural limitations of the particular physical system. Here we investigate how optimal neuronal encoding strategies are influenced by the finite number of neurons $N$ (place constraint), the limited decoding time window length $T$ (time constraint), the maximum neuronal firing rate $f_{max}$ (power constraint) and the maximal average rate $\langle f \rangle_{max}$ (energy constraint). While Fisher information provides a general lower bound for the mean squared error of unbiased signal reconstruction, its use to characterize the coding precision is limited. Analyzing simple examples, we illustrate some typical pitfalls and thereby show that Fisher information provides a valid measure for the precision of a code only if the dynamic range $(f_{min}T, f_{max}T)$ is sufficiently large. In particular, we demonstrate that the optimal width of Gaussian tuning curves depends on the available decoding time $T$. Within the broader class of unimodal tuning functions, it turns out that the shape of a Fisher-optimal coding scheme is not unique. We resolve this ambiguity by taking the minimum mean square error into account, which leads to flat tuning curves. The tuning width, however, turns out to be determined by energy constraints rather than by the principle of efficient coding. -- Matthias Bethge @ Institute of Theoretical Physics, www: http://www-neuro.physik.uni-bremen.de/~mbethge, Tel.: (+49)421-218-4460, Fax: (+49)421-218-9104
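For reference, the general lower bound mentioned in the abstract above is the Cramér-Rao inequality: for any unbiased estimator $\hat{\theta}(r)$ of a stimulus parameter $\theta$ from the response $r$,

$$ \langle (\hat{\theta} - \theta)^2 \rangle \;\geq\; \frac{1}{J(\theta)}, \qquad J(\theta) = \left\langle \left( \frac{\partial}{\partial \theta} \ln p(r \mid \theta) \right)^2 \right\rangle, $$

where $J(\theta)$ is the Fisher information. The abstract's point is that this bound characterizes the achievable coding precision only when the dynamic range $(f_{min}T, f_{max}T)$ is sufficiently large.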
From mbethge at physik.uni-bremen.de Fri May 17 05:06:58 2002 From: mbethge at physik.uni-bremen.de (Matthias Bethge) Date: Fri, 17 May 2002 11:06:58 +0200 Subject: broken links Message-ID: <3CE4C832.E77A63AD@physik.uni-bremen.de> To all who did not succeed in downloading the preprint 'Optimal short-term population coding: when Fisher information fails' from my homepage: for some reason (which I don't really know) all links on my homepage to the preprints were broken. Now the problem is fixed. Sorry for the inconvenience and please try again: http://www-neuro.physik.uni-bremen.de/~mbethge/publications.html Good luck Matthias -- Matthias Bethge @ Institute of Theoretical Physics, www: http://www-neuro.physik.uni-bremen.de/~mbethge, Tel.: (+49)421-218-4460, Fax: (+49)421-218-9104 From terry at salk.edu Wed May 22 14:15:58 2002 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 22 May 2002 11:15:58 -0700 (PDT) Subject: NEURAL COMPUTATION 14:6 In-Reply-To: <199703060454.UAA08685@helmholtz.salk.edu> Message-ID: <200205221815.g4MIFwZ14239@purkinje.salk.edu> Neural Computation - Contents - Volume 14, Number 6 - June 1, 2002 ARTICLE Cosine Tuning Minimizes Motor Errors Emanuel Todorov NOTES SMEM Algorithm Is Not Fully Compatible with Maximum-Likelihood Framework Akihiro Minagawa, Norio Tagawa, Toshiyuki Tanaka A Note on the Decomposition Methods for Support Vector Regression Shuo-Peng Liao, Hsuan-Tien Lin and Chih-Jen Lin LETTERS An Image Analysis Algorithm for Dendritic Spines Ingrid Y. Y. Koh, W. Brent Lindquist, Karen Zito, Esther A. Nimchinsky and Karel Svoboda Multiplicative Synaptic Normalization and a Non-Linear Hebb Rule Underlie a Neurotrophic Model of Competitive Synaptic Plasticity T. Elliott and N.R. Shadbolt Energy-Efficient Coding with Discrete Stochastic Events Susanne Schreiber, Christian K. Machens, Andreas V.M. Herz and Simon B. Laughlin Multiple Model-Based Reinforcement Learning Kenji Doya, Kazuyuki Samejima, Ken-ichi Katagiri and Mitsuo Kawato A Bayesian Approach to the Stereo Correspondence Problem Jenny C. A. Read Learning Curves for Gaussian Process Regression: Approximations and Bounds Peter Sollich and Anason Halees A Global Optimum Approach for One-Layer Neural Networks Enrique Castillo, Oscar Fontenla-Romero, Bertha Guijarro-Berdinas and Amparo Alonso-Betanzos MLP in Layer-Wise Form with Applications to Weight Decay Tommi Karkkainen Local Overfitting Control via Leverages Gaetan Monari and Gerard Dreyfus ----- ON-LINE - http://neco.mitpress.org/
SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES
                  USA     Canada*   Other Countries
Student/Retired   $60     $64.20    $108
Individual        $88     $94.16    $136
Institution       $506    $451.42   $554
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From adr at adrlab.ahc.umn.edu Sat May 25 13:06:22 2002 From: adr at adrlab.ahc.umn.edu (A David Redish) Date: Sat, 25 May 2002 12:06:22 -0500 Subject: MClust-3.0 Message-ID: <200205251706.g4PH6Ne3007636@adrlab.ahc.umn.edu> MClust-3.0 Announcing the release of a new version of the MClust spike-sorting toolbox. MClust is a Matlab toolbox which enables a user to perform manual clustering on single-electrode, stereotrode, and tetrode recordings taken with a variety of recording systems. The current system ships with engines for loading Neuralynx ".dat" files, but one of the new features of MClust-3.0 is a modular loading-engine system, and it should be easy to write new loading engines for other data formats. The MClust web page can be found at http://www.cbc.umn.edu/~redish/MClust The MClust toolbox is freeware, but you will need Matlab 6.0 or higher to run it. It has been tested under the Windows family of operating systems, but ports to other operating systems should be simple. Further details (such as the copyright notice, author credits, and disclaimer) are available from the website above. New features available in MClust-3.0 include more modularity and new cutting engines. Loading engines are easily added, which will greatly facilitate using MClust with other data formats. As before, spike features are easily added. Additional features are also shipped with the new version.
MClust-3.0 now includes automated/assisted cluster cutting engines, including BubbleClust (P. Lipa, University of Arizona, Tucson AZ) and KlustaKwik (K. Harris, Rutgers University, Newark NJ). After preprocessing data (using a new batch-processing system), clusters cut with BubbleClust or KlustaKwik can be selected, merged, split, and generally touched-up using the new selection/decision windows and the original manual cutting engine. Manual cutting is still, of course, always an option. ----------------------------------------------------- A. David Redish redish at ahc.umn.edu Assistant Professor http://www.cbc.umn.edu/~redish Department of Neuroscience, University of Minnesota 6-145 Jackson Hall 321 Church St SE Minneapolis MN 55455 ----------------------------------------------------- From T.J.Prescott at sheffield.ac.uk Tue May 28 12:00:54 2002 From: T.J.Prescott at sheffield.ac.uk (Tony Prescott) Date: Tue, 28 May 2002 17:00:54 +0100 Subject: Robotics as Theoretical Biology Workshop Message-ID: (apologies for repeat postings) SECOND CALL FOR PARTICIPATION WORKSHOP ON ROBOTICS AS THEORETICAL BIOLOGY AUGUST 10TH, 2002, EDINBURGH, SCOTLAND. PART OF SAB '02: THE 7TH MEETING OF THE INTERNATIONAL SOCIETY FOR SIMULATION OF ADAPTIVE BEHAVIOR AUGUST 4TH-11TH, 2002, EDINBURGH, SCOTLAND. We now have a full program of invited talks from international researchers in robotics, biology, and neuroscience, which includes lots of time for discussion. We are also accepting 'submissions' from workshop attendees in a simple form: If you have a poster from any previous conference that you think is relevant, we'd like to have it displayed, and include an abstract for it in our workshop notes. THE DEADLINE FOR ABSTRACTS IS THE 15TH OF JUNE. We're hoping to make a good show for the field, so please think about coming, bringing a poster, and spreading the word for others to do the same. See the web-page http://www.shef.ac.uk/~abrg/sab02/index.shtml for more details. Best Wishes, Tony Prescott & Barbara Webb (workshop co-organisers) From ngoddard at anc.ed.ac.uk Thu May 30 02:36:16 2002 From: ngoddard at anc.ed.ac.uk (Nigel Goddard) Date: Thu, 30 May 2002 07:36:16 +0100 Subject: Faculty position in language modeling - deadline soon! Message-ID: <3CF5C860.EB579E2A@anc.ed.ac.uk> UNIVERSITY OF EDINBURGH DIVISION OF INFORMATICS AND DEPARTMENT OF PSYCHOLOGY This position may be of interest to neurocognitive modelers of language function. Deadline for applications is June 7th. The Department of Psychology and the Division of Informatics invite applications from highly-qualified candidates for a 3-year Lectureship to be jointly held in Psychology and Informatics. You must be able to teach existing courses in both departments, and one or more of the following areas: Cognitive Modelling, Computational Psycholinguistics, Cognitive Neuroscience, Computational Neuroscience, Human-Computer Interaction, Experimental Methods. You should demonstrate a world-class research record and both interest and ability in teaching. You will be an experimentalist with a firm grounding in theory and computation. Informal enquiries to Professor Bonnie Webber, Division of Informatics, +44 131 650 4190 or to hod.Psych at ed.ac.uk, Department of Psychology, Ph: +44 131 650 3440, Fax: +44 131 650 3461. Salary range: £20,470 - £24,435 p.a. or £25,455 - £32,537 p.a.
Please quote reference no: 311414JW Closing date: 7 June 2002 For further particulars see http://www.jobs.ed.ac.uk/jobs/index.cfm?action=jobdet&jobid=1024 and for an application pack visit our website http://www.jobs.ed.ac.uk or telephone the recruitment line on +44 131 650 2511 -- ==================================================================== Dr. Nigel Goddard, Institute for Adaptive and Neural Computation, Division of Informatics, University of Edinburgh, C19, 5 Forrest Hill, Edinburgh EH1 2QL, Scotland http://www.streetmap.co.uk/streetmap.dll?Postcode2Map?code=EH1+2QL Office: +44 (0)131 650 3087 mobile: +44 (0)787 967 1811 mailto:Nigel.Goddard at ed.ac.uk http://anc.ed.ac.uk/~ngoddard FAX->email [USA] (603) 698 5854 [UK] (0870) 130 5014 Calendar: http://calendar.yahoo.com/public/nigel_goddard ==================================================================== From ken at phy.ucsf.edu Thu May 30 15:36:26 2002 From: ken at phy.ucsf.edu (Ken Miller) Date: Thu, 30 May 2002 12:36:26 -0700 Subject: Paper available: Analysis of LGN Input and Orientation Tuning Message-ID: <15606.32570.37668.510564@coltrane.ucsf.edu> The following paper is available as ftp://ftp.keck.ucsf.edu/pub/ken/troyer_etal02.pdf or from http://www.keck.ucsf.edu/~ken (click on 'publications', then on 'Models of neuronal integration and circuitry') This is a preprint of an article that has now appeared as: Journal of Neurophysiology 87, 2741-2752 (2002). LGN Input to Simple Cells and Contrast-Invariant Orientation Tuning: An Analysis Todd W. Troyer, Anton E. Krukowski and Kenneth D. Miller Abstract: We develop a new analysis of the LGN input to a cortical simple cell, demonstrating that this input is the sum of two terms, a linear term and a nonlinear term. In response to a drifting grating, the linear term represents the temporal modulation of input, and the nonlinear term represents the mean input. The nonlinear term, which grows with stimulus contrast, has been neglected in many previous models of simple cell response. We then analyze two scenarios by which contrast-invariance of orientation tuning may arise. In the first scenario, at larger contrasts, the nonlinear part of the LGN input, in combination with strong push-pull inhibition, counteracts the nonlinear effects of cortical spike threshold, giving the result that orientation tuning scales with contrast. In the second scenario, at low contrasts, the nonlinear component of LGN input is negligible, and noise smooths the nonlinearity of spike threshold so that the input-output function approximates a power-law function. These scenarios can be combined to yield contrast-invariant tuning over the full range of stimulus contrast. The model clarifies the contribution of LGN nonlinearities to the orientation tuning of simple cells, and demonstrates how these nonlinearities may impact different models of contrast-invariant tuning. Ken Kenneth D. Miller telephone: (415) 476-8217 Associate Professor fax: (415) 476-4929 Dept. of Physiology, UCSF internet: ken at phy.ucsf.edu 513 Parnassus www: http://www.keck.ucsf.edu/~ken San Francisco, CA 94143-0444 From erik at bbf.uia.ac.be Thu May 30 12:46:47 2002 From: erik at bbf.uia.ac.be (Erik De Schutter) Date: Thu, 30 May 2002 18:46:47 +0200 Subject: CNS*2002: early registration Message-ID: The early registration for the CNS*2002 meeting closes on June 10th.
Final program, preprint versions of the submitted papers and registration facilities can all be accessed through our webserver at http://www.neuroinf.org/CNS.shtml Eleventh Annual Computational Neuroscience Meeting CNS*2002 July 21 - July 25, 2002 Chicago, Illinois USA CNS*2002 will be held in Chicago from Sunday, July 21, 2002 to Thursday, July 25 in the Congress Plaza Hotel & Convention Center. This is a historic hotel located on Lake Michigan in downtown Chicago. General sessions will be Sunday-Wednesday; Thursday will be a full day of workshops at the University of Chicago. The conference dinner will be Wednesday night, followed by the rock-n-roll jam session.

INVITED SPEAKERS:
Ad Aertsen (Albert-Ludwigs-University, Germany)
Leah Keshet (University of British Columbia, Canada)
Alex Thomson (University College London, UK)

ORGANIZING COMMITTEE:
Program chair: Erik De Schutter (University of Antwerp, Belgium)
Local organizer: Philip Ulinski (University of Chicago, USA)
Workshop organizer: Maneesh Sahani (Gatsby Computational Neuroscience Unit, UK)
Government Liaison: Dennis Glanzman (NIMH/NIH, USA)
Program Committee:
Upinder Bhalla (National Centre for Biological Sciences, India)
Avrama Blackwell (George Mason University, USA)
Victoria Booth (New Jersey Institute of Technology, USA)
Alain Destexhe (CNRS Gif-sur-Yvette, France)
John Hertz (Nordita, Denmark)
David Horn (University of Tel Aviv, Israel)
Barry Richmond (NIMH, USA)
Steven Schiff (George Mason University, USA)
Todd Troyer (University of Maryland, USA)

From tewon at salk.edu Thu May 30 20:18:04 2002 From: tewon at salk.edu (Te-Won Lee) Date: Thu, 30 May 2002 17:18:04 -0700 Subject: JMLR special issue on ICA Message-ID: <015701c20838$a288f2b0$0693ef84@redmond.corp.microsoft.com> Journal of Machine Learning Research Special Issue on "Independent Component Analysis" Guest Editors: Te-Won Lee, Jean-Francois Cardoso, Erkki Oja, Shun-Ichi Amari CALL FOR PAPERS We invite papers on Independent Component Analysis (ICA) and Blind Source Separation (BSS) for a special issue in the Journal of Machine Learning Research (on-line publication and subsequent publication from MIT Press). In recent years, ICA has received attention from many research areas including statistical signal processing, machine learning, neural networks, information theory and exploratory data analysis. Applications of ICA algorithms in speech signal processing and biomedical signal processing are growing and maturing, and ICA methods are also considered in many other fields where this novel data analysis technique provides new insights. Recent approaches to ICA such as variational methods, kernel methods and tensor methods have led to new theoretical insights. They permit us to relax some of the constraints in the traditional ICA assumptions, yielding new algorithms and increasing the domains of application. Certain nonlinear mixing systems can be inverted, more sources than the number of sensors can be recovered, and further understanding of the convergence properties and gradient optimizations is now available. The ICA framework is an interdisciplinary research area. The combination of ideas from machine learning and statistical signal processing is a developing avenue of research, and ICA is a first step in this new direction. We invite original contributions that explore theoretical and practical issues related to ICA.
Possible topics include:

Theory and Algorithms:
- Bayesian methods
- Information theoretic approaches
- High order statistics
- Convolutive mixtures
- Convergence and stability issues
- Graphical models
- Nonlinear mixing
- Undercomplete mixtures
- Sparse coding

Methodology and Applications:
- Biomedical applications
- Speech signal processing
- Image processing
- Performance comparisons
- Model validation
- Dimension reduction and visualization
- Learning features in high dimensional data

Important Dates:
- Submission: October 1st, 2002
- Decision: January 1st, 2003
- Final: March 1st, 2003

Submission procedure: see http://rhythm.ucsd.edu/~tewon/JMLR.html For further details or enquiries, send mail to tewon at inc.ucsd.edu Links: http://www-sig.enst.fr/~ica99/ http://www.cis.hut.fi/ica2000 http://www.ica2001.org http://ica2003.jp From cia at brain.riken.go.jp Fri May 31 13:01:51 2002 From: cia at brain.riken.go.jp (Andrzej Cichocki) Date: Sat, 01 Jun 2002 02:01:51 +0900 Subject: New monograph Message-ID: <3CF7AC7F.4020701@brain.riken.go.jp> [Our sincere apologies if you receive multiple copies of this email] The following book is now available: ADAPTIVE BLIND SIGNAL and IMAGE PROCESSING: Learning Algorithms and Applications A. Cichocki, S. Amari Published by John Wiley & Sons, Chichester UK, April 2002, 586 Pages. The book covers the following areas: Independent Component Analysis (ICA), blind source separation (BSS), blind recovery, blind signal extraction (BSE), multichannel blind deconvolution, blind equalization, second and higher order statistics, blind spatial and temporal decorrelation, robust whitening, blind filtering, matrix factorizations, robust principal component analysis, minor component analysis, sparse representations, automatic dimension reduction, feature extraction in high dimensional data, noise reduction and related problems. Moreover, some interesting benchmarks are available to compare the performance of various unsupervised learning algorithms. More information about the book can be found on the web pages: http://www.bsp.brain.riken.go.jp/ICAbookPAGE/ http://www.wiley.com/cda/product/0,,0471607916,00.html and in the brief summary below. Andrzej Cichocki Laboratory for Advanced Brain Signal Processing, Riken BSI 2-1 Hirosawa, Wako-shi, Saitama 351-0198, JAPAN E-mail: cia at bsp.brain.riken.go.jp URL: http://www.bsp.brain.riken.go.jp/ Summary of the book Chapter 1: Introduction to Blind Signal Processing: Problems and Applications Blind Signal Processing (BSP) is now one of the hottest and most exciting topics in the fields of neural computation, advanced statistics, and signal processing, with solid theoretical foundations and many potential applications. In fact, BSP has become a very important topic of research and development in many areas, especially biomedical engineering, medical imaging, speech enhancement, remote sensing, communication systems, exploration seismology, geophysics, econometrics, data mining, neural networks, etc. Blind signal processing techniques principally do not use any training data and do not assume a priori knowledge about the parameters of the convolutive, filtering and mixing systems. BSP includes three major areas: Blind Signal Separation and Extraction (BSS/BSE), Independent Component Analysis (ICA), and Multichannel Blind Deconvolution (MBD) and Equalization, which are the main subjects of the book. This chapter formulates the fundamental problems of BSP, gives important definitions, and describes the basic mathematical and physical models.
Moreover, several potential and promising applications are reviewed. Keywords: Blind Source Separation (BSS), Blind Source Extraction (BSE), Independent Component Analysis (ICA), Multichannel Blind Deconvolution (MBD), Basic definitions and models, Applications. Chapter 2: Solving a System of Linear Algebraic Equations and Related Problems In modern signal and image processing fields like biomedical engineering, computer tomography (image reconstruction from projections), automatic control, robotics, speech and communication, linear parametric estimation, models such as auto-regressive moving-average (ARMA) and linear prediction (LP) have been extensively utilized. In fact, such models can be mathematically described by an overdetermined system of linear algebraic equations. Such systems of equations are often contaminated by noise or errors; thus, if some a priori information about the error is available, the problem arises of finding a solution that is optimal and robust with respect to noise. On the other hand, wide classes of extrapolation, reconstruction, estimation, approximation, interpolation and inverse problems can be converted to minimum norm problems of solving underdetermined systems of linear equations. Generally speaking, in signal processing applications, the overdetermined system of linear equations describes filtering, enhancement, deconvolution and identification problems, while the underdetermined case describes inverse and extrapolation problems. This chapter provides a tutorial on the problem of solving large overdetermined and underdetermined systems of linear equations, especially when there is uncertainty in parameter values and/or the systems are contaminated by noise. Special emphasis is placed on on-line fast adaptive and iterative algorithms for arbitrary noise statistics. This chapter also gives several illustrative examples that demonstrate the characteristics of the developed novel algorithms. Keywords: Least Squares (LS) problem, Extended Total Least Squares (TLS), Data Least Squares (DLS), Least Absolute Deviation (LAD), 1-norm solution, Solving systems of linear equations with non-negativity constraints, Non-negative Matrix Factorization (NMF), Regularization, Sparse signal representation, Sparse solutions, Minimum Fuel Problem (MFP), Focuss algorithms, Amari-Hopfield recurrent neural networks for on-line solutions. Chapter 3: Principal/Minor Component Analysis and Related Problems Neural networks with unsupervised learning algorithms organize themselves in such a way that they can detect or extract useful features, regularities, or correlations of data or signals, or separate or decorrelate some signals, with little or no prior knowledge of the desired results. Normalized (constrained) Hebbian and anti-Hebbian learning rules are simple variants of basic unsupervised learning algorithms; in particular, learning algorithms for principal component analysis (PCA), singular value decomposition (SVD) and minor component analysis (MCA) belong to this class of unsupervised rules. Recently, many efficient and powerful adaptive algorithms have been developed for PCA, MCA and SVD and their extensions. The main objective of this chapter is a derivation and overview of the most important adaptive algorithms. Keywords: PCA, MCA, SVD, Subspace methods, Automatic dimensionality reduction, AIC and MDL criteria, Power method, Robust PCA, Multistage PCA for blind source separation.
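[To make the connection between the PCA machinery of Chapter 3 and the prewhitening of Chapter 4 concrete, here is a minimal numpy sketch of the standard eigendecomposition route; it illustrates the textbook procedure and is not code taken from the book or its CD-ROM:

import numpy as np

def pca_whiten(X, n_components=None):
    # X: samples x channels. Center, eigendecompose the sample covariance,
    # optionally keep the leading components, and rescale so that the
    # whitened data has (approximately) identity covariance.
    X = X - X.mean(axis=0)
    cov = X.T @ X / (X.shape[0] - 1)
    evals, evecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    idx = np.argsort(evals)[::-1]             # reorder descending
    evals, evecs = evals[idx], evecs[:, idx]
    if n_components is not None:              # automatic dimension reduction
        evals, evecs = evals[:n_components], evecs[:, :n_components]
    V = evecs / np.sqrt(evals)                # whitening transform
    return X @ V, V

After this step the remaining unmixing problem of the later chapters reduces to finding an orthogonal rotation, which is why whitening is such a common preprocessing stage.]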
Chapter 4: Blind Decorrelation and Second Order Statistics for Robust Blind Identification Temporal, spatial and spatio-temporal decorrelations play important roles in signal processing. These techniques are based only on second-order statistics (SOS). They are the basis for modern subspace methods of spectrum analysis and array processing, and are often used in a preprocessing stage in order to improve convergence properties of adaptive systems, to eliminate redundancy or to reduce noise. Spatial decorrelation or prewhitening is often considered as a necessary (but not sufficient) condition for the stronger stochastic independence criteria. After prewhitening, the BSS or ICA tasks usually become somewhat easier and well-posed (less ill-conditioned), because the subsequent separating (unmixing) system is described by an orthogonal matrix for real-valued signals and a unitary matrix for complex-valued signals and weights. Furthermore, spatio-temporal and time-delayed decorrelation can be used to identify the mixing matrix and perform blind source separation of colored sources. In this chapter, we discuss and analyze a number of efficient and robust adaptive and batch algorithms for spatial whitening, orthogonalization, and spatio-temporal and time-delayed blind decorrelation. Moreover, we discuss several promising robust algorithms for blind identification and blind source separation of non-stationary and/or colored sources. Keywords: Robust whitening, Robust orthogonalization, Gram-Schmidt orthogonalization, Second order statistics (SOS) blind identification, Multistage EVD/SVD for BSS, Simultaneous diagonalization, Joint approximative diagonalization, SOBI and JADE algorithms, Blind source separation for non-stationary signals, Natural gradient, Atick-Redlich formula, Gradient descent with Frobenius norm constraint. Chapter 5: Sequential Blind Signal Extraction There are three main objectives of this chapter: (a) To present simple neural networks (processing units) and propose unconstrained extraction and deflation criteria that do not require either a priori knowledge of source signals or the whitening of mixed signals. These criteria lead to simple, efficient, purely local and biologically plausible learning rules (e.g., Hebbian/anti-Hebbian type learning algorithms). (b) To prove that the proposed criteria have no spurious equilibria. In other words, most of the learning rules discussed in this chapter always reach the desired solutions, regardless of initial conditions (see the appendices for proofs). (c) To demonstrate with computer simulations the validity and high performance for practical use of the derived learning algorithms. Two different models and approaches are used in this chapter. The first approach, based on higher order statistics (HOS), assumes that the sources are mutually statistically independent and non-Gaussian (except at most one); as criteria of independence, we use some measures of non-Gaussianity. The second approach, based on second order statistics (SOS), assumes that source signals have some temporal structure, i.e., the sources are colored with different autocorrelation functions or, equivalently, spectra of different shape. Special emphasis will be given to blind source extraction (BSE) in the case when sensor signals are corrupted by additive noise, using a bank of band-pass filters.
Keywords: Basic criteria for blind source extraction, Kurtosis, Gray function, Cascade neural network, Deflation procedures, KuickNet, Fixed-point algorithms, Blind extraction with reference signal, Linear predictor and band-pass filters for BSS, Statistical analysis, Log likelihood, Extraction of sources from convolutive mixture, Stability, Global convergence. Chapter 6: Natural Gradient Approach to Independent Component Analysis In this chapter, fundamental signal processing and information theoretic approaches are presented together with learning algorithms for the problem of adaptive blind source separation (BSS) and Independent Component Analysis (ICA). We discuss recent developments of adaptive learning algorithms based on the natural gradient approach in the general linear, orthogonal and Stiefel manifolds. Mutual information, Kullback-Leibler divergence, and several promising schemes are discussed and reviewed in this chapter, especially for signals with various unknown distributions and an unknown number of sources. Emphasis is given to an information-theoretical and information-geometrical unifying approach, adaptive filtering models and associated on-line adaptive nonlinear learning algorithms. We discuss the optimal choice of nonlinear activation functions for various distributions, e.g., Gaussian, Laplacian, impulsive and uniformly-distributed signals, based on a generalized-Gaussian-distributed model. Furthermore, families of efficient and flexible algorithms that exploit non-stationarity of signals are also derived (a small illustrative implementation of the basic natural gradient update is sketched after the Chapter 7 summary below). Keywords: Kullback-Leibler divergence, Natural gradient concept, Derivation and analysis of natural gradient algorithms, Local stability analysis, Nonholonomic constraints, Generalized Gaussian and Cauchy distributions, Pearson model, Natural gradient algorithms for non-stationary sources, Extraction of arbitrary group of sources, Semi-orthogonality constraints, Stiefel manifolds. Chapter 7: Locally Adaptive Algorithms for ICA and their Implementations The main purpose of this chapter is to describe and survey models, and to present a family of practical and efficient associated adaptive or locally adaptive learning algorithms which have special advantages of efficiency and/or simplicity and allow straightforward electronic implementations. Some of the described algorithms have special advantages in the cases of noisy, badly scaled or ill-conditioned signals. The developed algorithms are extended for the case when the number of sources and their statistics are unknown. Finally, the problem of an optimal choice of nonlinear activation function and general local stability conditions are also discussed. In particular, we focus on simple locally adaptive Hebbian/anti-Hebbian learning algorithms, and their implementations using multi-layer neural networks are proposed. Keywords: Modified Jutten-Herault algorithm, robust local algorithms for ICA/BSS, Multi-layer network for ICA, Flexible ICA for unknown number of sources, Generalized EASI algorithms, and Generalized stability conditions.
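[For readers who want the core update of Chapters 6 and 7 spelled out, the following is a minimal sketch of the widely used natural gradient rule dW = lr * (I - E[phi(y) y^T]) W, with a tanh score function appropriate for super-Gaussian sources. It illustrates the standard algorithm under the usual assumptions (whitened inputs, batch updates) and is not code from the monograph:

import numpy as np

def natural_gradient_ica(Z, lr=0.01, n_iter=500):
    # Z: whitened observations, samples x channels.
    # Batch natural gradient updates of the unmixing matrix W.
    n, d = Z.shape
    W = np.eye(d)
    for _ in range(n_iter):
        Y = Z @ W.T                       # current source estimates y = W z
        phi = np.tanh(Y)                  # score function (super-Gaussian model)
        G = np.eye(d) - phi.T @ Y / n     # I - E[phi(y) y^T]
        W += lr * G @ W                   # natural gradient step
    return W

Because the gradient is multiplied by W on the right, the update is equivariant: its convergence behaviour does not depend on the conditioning of the unknown mixing matrix, which is one of the main selling points of the natural gradient approach.]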
Chapter 8: Robust Techniques for BSS and ICA with Noisy Data In this chapter we focus mainly on approaches to blind separation of sources when the measured signals are contaminated by large additive noise. We extend existing adaptive algorithms with equivariant properties in order to considerably reduce the bias caused by measurement noise for the estimation of mixing and separating matrices. Moreover, we propose dynamical recurrent neural networks for simultaneous estimation of the unknown mixing matrix and source signals, and reduction of noise in the extracted output signals. The optimal choice of nonlinear activation functions for various noise distributions, assuming a generalized-Gaussian-distributed noise model, is also discussed. Computer simulations of selected techniques are provided that confirm their usefulness and good performance. The main objective of this chapter is to present several approaches and derive learning algorithms that are more robust with respect to noise than the techniques described in the previous chapters, or that can reduce the noise in the estimated output vector of independent components. Keywords: Bias removal techniques, Wiener filters with references convolutive noise, Noise cancellation and reduction, Cumulants based cost functions and equivariant algorithms, Blind source separation with more sensors than sources, Robust extraction of arbitrary group of sources, Recurrent neural network for noisy data, Amari-Hopfield neural network. Chapter 9: Multichannel Blind Deconvolution: Natural Gradient Approach The main objective of this chapter is to review and extend existing adaptive natural gradient algorithms for various multichannel blind deconvolution models. Blind separation/deconvolution of source signals has been a subject under consideration for more than two decades. There are significant potential applications of blind separation/deconvolution in various fields, for example, wireless telecommunication systems, sonar and radar systems, audio and acoustics, image enhancement and biomedical signal processing (EEG/MEG signals). In these applications, single or multiple unknown but independent temporal signals propagate through a mixing and filtering medium. The blind source separation/deconvolution problem is concerned with recovering independent sources from sensor outputs without assuming any a priori knowledge of the original signals, except certain statistical features. In this chapter, using various models and assumptions, we present relatively simple and efficient adaptive and batch algorithms for blind deconvolution and equalization for single-input/multiple-output (SIMO) and multiple-input/multiple-output (MIMO) dynamical minimum phase and non-minimum phase systems. The basic relationships between standard ICA/BSS (Independent Component Analysis and Blind Source Separation) and multichannel blind deconvolution are discussed in detail. They enable us to extend the algorithms derived in the previous chapters, in particular the natural gradient approaches for instantaneous mixtures, to convolutive dynamical models. We also derive a family of equivariant algorithms and analyze their stability and convergence properties. Furthermore, a Lie group and Riemannian metric are introduced on the manifold of FIR filters and, using the isometry of the Riemannian metric, the natural gradient on the FIR manifold is described. Based on the minimization of mutual information, we then present a natural gradient algorithm for the causal minimum phase finite impulse response (FIR) multichannel filter. Using information back-propagation, we also discuss an efficient implementation of the learning algorithm for the non-causal FIR filters. Computer simulations are also presented to illustrate the validity and good learning performance of the described algorithms.
Keywords: Basic models for blind equalization and multichannel deconvolution, Fractionally Sampled system, SIMO and MIMO models, Equalization criteria, Separation-deconvolution criteria, Relationships between BSS/ICA and multichannel blind deconvolution (MBD), Natural gradient algorithms for MBD, Information Back-propagation. Chapter 10: Estimating Functions and Superefficiency for ICA and Deconvolution Chapter 10 introduces the method of estimating functions to elucidate the common structures in most of the ICA/BSS and MBD algorithms. We use information geometry for this purpose, and define estimating functions in semiparametric statistical models which include unknown functions as parameters. Differences in most existing algorithms are only in the choices of estimating functions. We then give error analysis and stability analysis in terms of estimating functions. This makes it possible to design various adaptive methods for choosing unknown parameters included in estimating functions, which control accuracy and stability. The Newton method is automatically derived by the standardized estimating functions. First, the standard BSS/ICA problem is formulated in the framework of the semiparametric model and a family of estimating functions. Furthermore, the present chapter discusses and further extends the convergence and efficiency of the batch estimator and of natural gradient learning for blind separation/deconvolution via the semiparametric statistical model, using estimating functions and standardized estimating functions derived from the efficient score functions elucidated recently by Amari et al. We present the geometrical properties of the manifold of the FIR filters based on the Lie group structure and formulate the multichannel blind deconvolution problem within the framework of the semiparametric model, deriving a family of estimating functions for blind deconvolution. We then analyze the efficiency of the batch estimator based on estimating functions, obtaining its convergence rate. Finally, we show that both batch learning and on-line natural gradient learning are superefficient under given nonsingular conditions. Keywords: Estimating functions, Semiparametric statistical models, Superefficiency, Likelihood, Score functions, Batch estimator, Information geometry, Stability analysis. Chapter 11: Blind Filtering and Separation Using a State-Space Approach The state-space description of dynamical systems is a powerful and flexible generalized model for blind separation and deconvolution or, more generally, for filtering and separation. There are several reasons why the state-space models are advantageous for blind separation and filtering. Although transfer function models in the Z-domain or the frequency domain are equivalent to the state-space models in the time domain for any linear, stable time-invariant dynamical system, using transfer functions directly, it is difficult to exploit the internal representation of real dynamical systems. The main advantage of the state-space description is that it not only gives the internal description of a system, but there are various equivalent canonical types of state-space realizations for a system, such as balanced realization and observable canonical forms. In particular, it is possible to parameterize some specific classes of models which are of interest in applications. In addition, it is relatively easy to tackle the stability problem of state-space systems using the Kalman filter.
Moreover, the state-space model enables a much more general description than the standard finite impulse response (FIR) convolutive filtering models discussed in Chapter 9. In fact, all the known filtering models, such as the AR, MA, ARMA, ARMAX and Gamma filtering, could also be considered as special cases of flexible state-space models. In this chapter, we briefly review adaptive learning algorithms based on the natural gradient approach and give some perspective and new insight into multiple-input multiple-output blind separation and filtering in the state-space framework. Keywords: Linear basic state space model, Natural gradient algorithm for state space model, Estimation of output and state space matrices, Comparison of various algorithms, Kalman filter, Two stage blind separation/filtering approach. Chapter 12: Nonlinear State Space Models - Semi-Blind Signal Processing In this chapter we attempt to extend and generalize the results discussed in the previous chapters to nonlinear dynamical models. However, the problem is not only very challenging but intractable in the general case without a priori knowledge about the nonlinear mixing and filtering process. Therefore, in this chapter we consider very briefly only some simplified nonlinear models. In addition, we assume that some information about the mixing and separating system and source signals is available. In practice, special nonlinear dynamical models are considered in order to simplify the problem and solve it efficiently for specific applications. Specific examples include the Wiener model, the Hammerstein model and Nonlinear Autoregressive Moving Average models. Keywords: Semi-blind separation and filtering, Wiener and Hammerstein models, Nonlinear Autoregressive Moving Average (NARMA) model, Hyper radial basis function (HRBF) neural network. Appendix A: Mathematical Preliminaries In this appendix some mathematical background needed for a complete understanding of the text is quickly reviewed. Many useful definitions and formulas for matrix algebra and matrix differentiation are given. Keywords: Matrix inverse update rules, Matrix differentiation, Differentiations of scalar cost function with respect to a vector, Trace, Matrix differentiation of trace of matrices, Matrix expectation, Properties of determinant, Moore-Penrose pseudo inverse, Discrimination measures, Distance measures. Appendix B: Glossary of Symbols and Abbreviations Appendix B contains the list of basic symbols, notation and abbreviations used in the book. REFERENCES The list of references contains more than 1350 publications. CD-ROM The accompanying CD-ROM includes an electronic, interactive version of the book with hyperlinks, full-color figures and text. A black and white electronic version with hyperlinks is also provided. In addition, a user-friendly MATLAB demo package implementing a family of ICA and BSS/BSE algorithms is included. From terry at salk.edu Fri May 31 20:19:39 2002 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 31 May 2002 17:19:39 -0700 (PDT) Subject: NEURAL COMPUTATION 14:7 In-Reply-To: <200204021833.g32IXx834531@purkinje.salk.edu> Message-ID: <200206010019.g510Jd343973@purkinje.salk.edu>

Neural Computation - Contents - Volume 14, Number 7 - July 1, 2002

ARTICLE
A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks
Javier R. Movellan, Paul Mineiro, R. J. Williams

NOTES
Are Visual Cortex Maps Optimized For Coverage?
Miguel A. Carreira-Perpinan and Geoffrey J. Goodhill

Kernel-Based Topographic Map Formation by Local Density Modeling
Marc M. Van Hulle

LETTERS
A Simple Model of Long-Term Spike Train Regularization
Relly Brandman and Mark E. Nelson

Spatiotemporal Spike Encoding of a Continuous External Signal
Naoki Masuda and Kazuyuki Aihara

Attractor Reliability Reveals Deterministic Structure in Neuronal Spike Trains
P.H.E. Tiesinga, J.-M. Fellous and Terrence J. Sejnowski

Traveling Waves of Excitation in Neural Field Models: Equivalence of Rate Descriptions and Integrate-and-Fire Dynamics
Daniel Cremers and Andreas V.M. Herz

Attentional Recruitment of Inter-Areal Recurrent Networks for Selective Gain Control
Richard H. R. Hahnloser, Rodney J. Douglas, and Klaus Hepp

CCCP Algorithms to Minimize the Bethe and Kikuchi Free Energies: Convergent Alternatives to Belief Propagation
A. L. Yuille

Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent
Nicol N. Schraudolph

Representation and Extrapolation in Multilayer Perceptrons
Antony Browne

Using Noise to Compute Error Surfaces in Connectionist Networks: A Novel Means of Reducing Catastrophic Forgetting
Robert French and Nick Chater

-----
ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES
                 USA   Canada*   Other Countries
Student/Retired  $60   $64.20    $108
Individual       $88   $94.16    $136
Institution      $506  $451.42   $554
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu
-----

From Dan Thu May 30 12:16:24 2002 From: Dan (Dan) Date: Thu, 30 May 2002 18:16:24 +0200 Subject: Use Cogprints! Message-ID: Dear fellow-members of this list, This is a personal message to every one of you. It is about making our papers available to one another and to other people interested in pragmatics. As most of you probably know, there is an easy-to-use, free, electronic self-archiving service, Cogprints, created by Stevan Harnad, where you can archive your own papers, whether published or not, refereed or not, and where you can, of course, read or download the papers of others. Cogprints has no competitor in its domain and is complementary to academic institutions' electronic archives. It describes itself as follows: CogPrints is a service to two constituencies: For AUTHORS, it provides a way to make their pre-refereeing preprints and their refereed, published reprints available to the world scholarly and scientific community on a scale that is impossible in paper. For READERS, it provides free worldwide access to the primary scholarly and scientific research literature on a scale that is likewise impossible in paper. CogPrints is an electronic archive for papers in any area of Psychology, Neuroscience, and Linguistics, and many areas of Computer Science (e.g., artificial intelligence, robotics, vision, learning, speech, neural networks), Philosophy (e.g., mind, language, knowledge, science, logic), Biology (e.g., ethology, behavioral ecology, sociobiology, behaviour genetics, evolutionary theory), Medicine (e.g., Psychiatry, Neurology, human genetics, Imaging), Anthropology (e.g., primatology, cognitive ethnology, archeology, paleontology), as well as any other portions of the physical, social and mathematical sciences that are pertinent to the study of cognition. It has a Pragmatics category with 45 archived papers at present, but I am, I believe, the only one from this list to have put papers there.
Just think of this: If all the researchers on this list would archive a copy of their own papers (past, present and future) at Cogprints (whether or not they are already archived at another institutional or personal site), Francisco Yus' bibliographic service on RT would be complemented with a de facto relevance theory archive. Moreover all our papers would reach a larger readership and be easily accessible to everyone, researchers, students etc. around the world. So I beseech you, yes YOU, to START ARCHIVING YOUR PAPERS AT COGPRINTS NOW! Go to http://cogprints.soton.ac.uk/ , look at the FAQ page http://cogprints.soton.ac.uk/faq.html and the help page http://cogprints.soton.ac.uk/help/ , register and start uploading! (Once you have learnt the routine, which may take you a good half hour, uploading a paper takes, in my experience, about 10 minutes.) I would like to see dozens of RT papers there in the coming weeks, and soon hundreds. Wouldn't you? Well, it is in YOUR hands. Cheers, Dan ----------------------------- Dan Sperber Institut Jean Nicod http://www.institutnicod.org 1bis avenue de Lowendal 75007 Paris, France web site: http://www.dan.sperber.com ------------------------------
From clinton at compneuro.umn.edu Thu May 2 15:44:34 2002 From: clinton at compneuro.umn.edu (Kathleen Clinton) Date: Thu, 02 May 2002 14:44:34 -0500 Subject: NEURON Workshop Message-ID: <3CD19722.2080002@compneuro.umn.edu> ****************************** NEURON Workshop Announcement ****************************** Michael Hines and Ted Carnevale of Yale University will conduct a three to five day workshop on NEURON, a computer code that simulates neural systems. The workshop will be held from Monday to Friday, September 9-13, 2002 at the University of Minnesota Digital Technology Center in Minneapolis, Minnesota. Registration is open to students and researchers from academic, government, and commercial organizations. Space is limited, and registrations will be accepted on a first-come, first-served basis.
The workshop is sponsored by the University of Minnesota Computational Neuroscience Program which is supported by a National Science Foundation-Integrative Graduate Education and Research Training grant and the University of Minnesota Graduate School, Institute of Technology, Medical School, and Supercomputing Institute for Digital Simulation and Advanced Computation.

**Topics and Format**

Participants may attend the workshop for three or five days. The first three days cover material necessary for the most common applications in neuroscience research and education. The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Windows and Linux platforms will be used.

Days 1 - 3 "Fundamentals of Using the NEURON Simulation Environment"

The first three days will cover the material that is required for informed use of the NEURON simulation environment. The emphasis will be on applying the graphical interface, which enables maximum productivity and conceptual control over models while at the same time reducing or eliminating the need to write code. Participants will be building their own models from the start of the course. By the end of the third day they will be well prepared to use NEURON on their own to explore a wide range of neural phenomena.

Topics will include:
Integration methods
--accuracy, stability, and computational efficiency
--fixed order, fixed timestep integration
--global and local variable order, variable timestep integration
Strategies for increasing computational efficiency.
Using NEURON's graphical interface to
--construct models of individual neurons with architectures that range from the simplest spherical cell to detailed models based on quantitative morphometric data (the CellBuilder).
--construct models that combine neurons with electronic instrumentation (i.e. capacitors, resistors, amplifiers, current sources and voltage sources) (the Linear Circuit Builder).
--construct network models that include artificial neurons, model cells with anatomical and biophysical properties, and hybrid nets with both kinds of cells (the Network Builder).
--control simulations.
--display simulation results as functions of time and space.
--analyze simulation results.
--analyze the electrotonic properties of neurons.
Adding new biophysical mechanisms.
Uses of the Vector class such as
--synthesizing custom stimuli
--analyzing experimental data
--recording and analyzing simulation results
Managing modeling projects.

Days 4 and 5 "Beyond the GUI"

The fourth and fifth days deal with advanced topics for users whose projects may require problem-specific customizations. Topics will include:
Advanced use of the CellBuilder, Network Builder, and Linear Circuit Builder.
When and how to modify model specification, initialization, and NEURON's main computational loop.
Exploiting special features of the Network Connection class for efficient implementation of use-dependent synaptic plasticity.
Using NEURON's tools for optimizing models.
Parallelizing computations.
Using new features of the extracellular mechanism for
--extracellular stimulation and recording
--implementation of gap junctions and ephaptic interactions
Developing new GUI tools.

**Registration**

For academic or government employees the registration fee is $175 for the first three days and $270 for the full five days. These fees are $350 and $540, respectively, for commercial participants.
Registration forms can be obtained at www.compneuro.umn.edu/NEURONregistration.html or from the workshop coordinator, Kathleen Clinton, at clinton at compneuro.umn.edu or (612) 625-8424. **Lodging** Out-of-town participants may stay at the Radisson Metrodome, 615 Washington Avenue SE in Minneapolis. It is within walking distance of the Digital Technology Center, located in Walter Library. Participants are responsible for making their own hotel reservations. When making reservations, participants should state that they are attending the NEURON Workshop. A small block of rooms is available until August 16, 2002. Reservations can be arranged by contacting Kathleen Clinton at clinton at compneuro.umn.edu or (612) 625-8424. From laura.bonzano at dibe.unige.it Mon May 6 05:52:49 2002 From: laura.bonzano at dibe.unige.it (Laura Bonzano) Date: Mon, 6 May 2002 11:52:49 +0200 Subject: NeuroEngineering Workshop and advanced School Message-ID: <00a201c1f4e3$c7f2dba0$6959fb82@bio.dibe.unige.it> Dear list members, I'm happy to announce the second edition of the "NeuroEngineering Workshop and advanced School", organized by:
- prof. Sergio Martinoia, Neuroengineering and Bio-nanoTechnologies Group, Department of Biophysical and Electronic Engineering (DIBE), University of Genova, Italy
- prof. Pietro Morasso, Department of Communications, Computer and System Sciences (DIST), University of Genova, Italy
and funded by the University of Genova, Italy. It will take place June 10-13, 2002 in Villa Cambiaso, Genova. For more information, please visit our site: http://www.bio.dibe.unige.it/ Please transmit to anyone potentially interested. Best Regards, Laura Bonzano Apologies if you receive this more than once. ---------------------------------------------------------------- Laura Bonzano, Ph.D. Student Neuroengineering and Bio-nanoTechnology - NBT Department of Biophysical and Electronic Engineering - DIBE Via All'Opera Pia 11A, 16145, GENOA, ITALY Phone: +39-010-3532765 Fax: +39-010-3532133 URL: http://www.bio.dibe.unige.it/ E-mail: laura.bonzano at dibe.unige.it From finton at cs.wisc.edu Mon May 6 19:51:40 2002 From: finton at cs.wisc.edu (David J. Finton) Date: Mon, 6 May 2002 18:51:40 -0500 (CDT) Subject: Dissertation on cognitive economy and reinforcement learning Message-ID: Dear Connectionists: I am pleased to announce the availability of my Ph.D. dissertation for download: ____________________________________________________________________ Cognitive Economy and the Role of Representation in On-Line Learning PDF: http://www.cs.wisc.edu/~finton/thesis/main.pdf PS: http://www.cs.wisc.edu/~finton/thesis/main.ps.gz ____________________________________________________________________ The dissertation is 265 pages, and the downloadable files are 1.79 MB (PDF version) and 1.13 MB (gzipped PostScript version). An abstract follows. --David Finton finton at cs.wisc.edu http://www.cs.wisc.edu/~finton/ Abstract ________ How can an intelligent agent learn an effective representation of its world? This dissertation applies the psychological principle of cognitive economy to the problem of representation in reinforcement learning. Psychologists have shown that humans cope with difficult tasks by simplifying the task domain, focusing on relevant features and generalizing over states of the world which are ``the same'' with respect to the task.
This dissertation defines a principled set of requirements for representations in reinforcement learning, by applying these principles of cognitive economy to the agent's need to choose the correct actions in its task. The dissertation formalizes the principle of cognitive economy into algorithmic criteria for feature extraction in reinforcement learning. To do this, it develops mathematical definitions of feature importance, sound decisions, state compatibility, and necessary distinctions, in terms of the rewards expected by the agent in the task. The analysis shows how the representation determines the apparent values of the agent's actions, and proves that the state compatibility criteria presented here result in representations which satisfy a criterion for task learnability. The dissertation reports on experiments that illustrate one implementation of these ideas in a system which constructs its representation as it goes about learning the task. Results with the puck-on-a-hill task and the pole-balancing task show that the ideas are sound and can be of practical benefit. The principal contributions of this dissertation are a new framework for thinking about feature extraction in terms of cognitive economy, and a demonstration of the effectiveness of an algorithm based on this new framework. From mlyons at atr.co.jp Tue May 7 04:21:37 2002 From: mlyons at atr.co.jp (Michael J. Lyons) Date: Tue, 7 May 2002 17:21:37 +0900 Subject: Job opportunity at ATR Message-ID: Dear Connectionists, I would be grateful if you could post this to any appropriate departmental mailing lists. Thank you in advance, Michael Lyons ATR Media Information Science Labs http://www.mis.atr.co.jp/~mlyons -- Opening for Visiting Researcher at ATR in Kyoto, Japan ------------------------------------------------------ There is an opening for a Visiting Researcher at the ATR Media Information Science Laboratories. Applicants should have a Masters or PhD in computer science, electrical engineering or a related discipline, and research experience and interests in the areas of computer vision, machine learning, human-computer interaction, and affective computing. Excellent software development skills and knowledge of image and signal processing techniques are a must. The initial appointment is for 1 year, renewable up to 4 years, based on performance. The salary and benefits package at ATR (which includes subsidized housing) is attractive by academic standards. The Advanced Telecommunications Research Labs is a basic research institute located at the birthplace of Japanese culture, close to the cities of Kyoto, Osaka, and Nara. Opportunities for cultural and outdoor activities abound. Approximately 20% of the researchers are from overseas and foreign staff support is excellent. Japanese language is not needed for this position. Applicants should send a 1-page resume, list of publications, and names and contact information for 3 referees (PDF format only) by e-mail to: Michael J. Lyons, PhD Senior Researcher ATR Media Information Sciences mlyons at atr.co.jp http://www.mis.atr.co.jp/~mlyons From duff at envy.cs.umass.edu Tue May 7 11:13:54 2002 From: duff at envy.cs.umass.edu (Michael Duff) Date: Tue, 7 May 2002 11:13:54 -0400 (EDT) Subject: Ph.D. thesis available Message-ID: Dear Connectionists, The following Ph.D. thesis has been made available: Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes Michael O.
Duff Department of Computer Science University of Massachusetts, Amherst The thesis may be retrieved from: http://envy.cs.umass.edu/People/duff/diss.html ----------------------------------------------- Abstract In broad terms, this dissertation is about decision making under uncertainty. At each stage, a decision-making agent operating in an uncertain world takes an action that elicits a reinforcement signal and causes the state of the world (or agent) to change. The agent's goal is to maximize the total reward it derives over its entire duration of operation---an interval that may require the agent to strike a delicate balance between two sometimes conflicting impulses: (1) greedy exploitation of its current world model, and (2) exploration of its world to gain information that can refine the world model and improve the agent's policy. Over the years, a number of researchers have formulated this problem mathematically---"adaptive control processes," "dual control," "value of information," and "optimal learning" all address essentially the same issue and share a basic Bayesian framework that is well-suited for modeling the role of information and for defining what a solution is. Unfortunately, classical procedures for computing policies that optimally balance exploitation with exploration are intractable and have only been able to address problems that have a very small number of physical states and short planning horizons. This dissertation proposes computational procedures that retain the Bayesian formulation, but sidestep intractability by employing Monte-Carlo simulation, function approximation, and diffusion modeling of information-state dynamics.
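[A tiny worked instance of the classical formulation the abstract refers to: for a two-armed Bernoulli bandit with Beta priors, the Bayes posterior counts form an information state, and exact dynamic programming over these hyperstates yields the optimal exploration/exploitation trade-off. The sketch below is illustrative, not code from the thesis, and is tractable only at this toy scale --- which is exactly the intractability the dissertation aims to sidestep:

from functools import lru_cache

@lru_cache(maxsize=None)
def value(h, s):
    # s = (a1, b1, a2, b2): Beta posterior counts for the two arms.
    # Returns the Bayes-optimal expected reward over the next h pulls.
    if h == 0:
        return 0.0
    best = 0.0
    for arm in (0, 1):
        a, b = s[2 * arm], s[2 * arm + 1]
        p = a / (a + b)                                # posterior mean of success
        succ = s[:2 * arm] + (a + 1, b) + s[2 * arm + 2:]   # Bayes update on success
        fail = s[:2 * arm] + (a, b + 1) + s[2 * arm + 2:]   # Bayes update on failure
        best = max(best, p * (1.0 + value(h - 1, succ))
                         + (1.0 - p) * value(h - 1, fail))
    return best

print(value(10, (1, 1, 1, 1)))  # optimal expected reward over 10 pulls, uniform priors

The number of hyperstates grows combinatorially with the horizon and the number of physical states, which is why Monte-Carlo simulation and function approximation become attractive.]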
This will both avoid the problems arising from using positive feedback in existing systems and create the opportunity to exploit temporal correlations in the power within different frequency bands. Several different designs will be looked at in collaboration with Stirling to determine the one which produces the required functionality most efficiently. This design will then be implemented in a prototype system which will be characterised in detail at Oxford and the functionality tested at Stirling. For more details contact Steve Collins (steve.collins at eng.ox.ac.uk) or visit the group website http://www.robots.ox.ac.uk/~mcad/ Further particulars may be found on the web http://www.eng.ox.ac.uk/~wpcadm2/jobs/scfp.html Please quote SS/SC/DF/02/021 in all correspondence. The closing date for applications is 4th June 2002. The University is an Equal Opportunity Employer. From mvzaanen at science.uva.nl Wed May 8 03:44:49 2002 From: mvzaanen at science.uva.nl (Menno van Zaanen) Date: Wed, 8 May 2002 09:44:49 +0200 (CEST) Subject: PhD Thesis available Message-ID: Dear Connectionists, My PhD thesis, which I hope may be of interest to some of you, has been made available: Bootstrapping Structure into Language: Alignment-Based Learning Menno M. van Zaanen School of Computing University of Leeds Leeds, UK It can be found at: http://www.science.uva.nl/~mvzaanen/docs/t_leeds.ps http://www.science.uva.nl/~mvzaanen/docs/t_leeds.ps.gz or via my homepage: http://www.science.uva.nl/~mvzaanen/ Abstract: This thesis introduces a new unsupervised learning framework, called Alignment-Based Learning, which is based on the alignment of sentences and Harris's (1951) notion of substitutability. Instances of the framework can be applied to an untagged, unstructured corpus of natural language sentences, resulting in a labelled, bracketed version of that corpus. Firstly, the framework aligns all sentences in the corpus in pairs, resulting in a partition of the sentences consisting of parts of the sentences that are equal in both sentences and parts that are unequal. Unequal parts of sentences can be seen as being substitutable for each other, since substituting one unequal part for the other results in another valid sentence. The unequal parts of the sentences are thus considered to be possible (possibly overlapping) constituents, called hypotheses. Secondly, the selection learning phase considers all hypotheses found by the alignment learning phase and selects the best of these. The hypotheses are selected based on the order in which they were found, or based on a probabilistic function. The framework can be extended with a grammar extraction phase. This extended framework is called parseABL. Instead of returning a structured version of the unstructured input corpus, like the ABL system, this system also returns a stochastic context-free or tree substitution grammar. Different instances of the framework have been tested on the English ATIS corpus, the Dutch OVIS corpus and the Wall Street Journal corpus. One of the interesting results, apart from the encouraging numerical results, is that all instances can (and do) learn recursive structures. Best regards, Menno van Zaanen +-------------------------------------+ | Menno van Zaanen | "The more it stays the same, | mvzaanen at science.uva.nl | the less it changes."
| http://www.science.uva.nl/~mvzaanen | -Spinal Tap From philh at cogs.susx.ac.uk Fri May 10 14:09:48 2002 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Fri, 10 May 2002 19:09:48 +0100 Subject: lectureship/senior lectureship neural computation Message-ID: <3CDBFEDE.D51C48DE@cogs.susx.ac.uk> LECTURESHIP/SENIOR LECTURESHIP Ref 359 IN NEURAL COMPUTATION £25,455 to £32,537 or £34,158 to £38,603 per annum Applications are invited for a permanent faculty position within the Computer Science and Artificial Intelligence Subject Group of the School of Cognitive and Computing Sciences. The expected start date is 1 October 2002 or as soon as possible thereafter. The candidate is expected to have expertise in the area of neural computation and a proven track record of research in a relevant field. The successful applicant will be expected to expand the existing high research profile of the neural computation group and to teach at both undergraduate and masters levels.
Informal enquiries may be made to Des Watson (+44 1273 678045, email desw at cogs.susx.ac.uk) or Phil Husbands (+44 1273 678556, email philh at cogs.susx.ac.uk). Details of the School are available at http://www.cogs.susx.ac.uk/ Closing date: Friday 31 May 2002. Application details are available from and should be returned to the Staffing Services Office, Sussex House, University of Sussex, Falmer, Brighton, BN1 9RH. Tel 01273 678706, Fax 01273 877401, email recruitment at sussex.ac.uk. Details of all posts can be found via the University website: http://www.susx.ac.uk/Units/staffing An Equal Opportunity Employer From jt at lanl.gov Fri May 10 12:46:45 2002 From: jt at lanl.gov (James Theiler) Date: Fri, 10 May 2002 10:46:45 -0600 (MDT) Subject: postdoctoral position in machine learning at los alamos Message-ID: POSTDOCTORAL POSITION IN MACHINE LEARNING THEORY AND APPLICATIONS Space and Remote Sensing Sciences Group Los Alamos National Laboratory Candidates are sought for a postdoctoral position in the Space and Remote Sensing Sciences Group at Los Alamos National Laboratory in New Mexico, USA. The job will involve developing and applying state of the art machine learning techniques to practical problems in multispectral image feature identification, and in multichannel time series analysis. Prospective candidates should have a strong mathematical background, good oral and written communication skills, and a demonstrated ability to perform independent and creative research. Familiarity with modern statistical machine learning techniques such as support vector machines, boosting, Gaussian processes or Bayesian methods is essential. Experience with other machine learning paradigms including neural networks and genetic algorithms is also desirable. The candidate should be able to program competently in a language such as C, C++, Java, Matlab, etc. Experience with image or signal processing is a plus, and some knowledge of remote sensing or space physics would also be useful. The Space and Remote Sensing Sciences Group is part of the Nonproliferation and International Security Division at LANL. Its mission is to develop and apply remote sensing technologies to a variety of problems of national and international interest, including nonproliferation, detection of nuclear explosions, safeguarding nuclear materials, climate studies, environmental monitoring, volcanology, space sciences, and astrophysics. Los Alamos is a small and very friendly town situated 7200 ft up in the scenic Jemez mountains in northern New Mexico. The climate is very pleasant and opportunities for outdoor recreation are numerous (skiing, hiking, biking, climbing, etc). The Los Alamos public school system is excellent. LANL provides a very constructive working environment with abundant resources and support, and the opportunity to work with intelligent and creative people on a variety of interesting projects. Post-doc starting salaries are usually in the range $50-60K depending on experience, and assistance is provided with relocation expenses. The initial contract offered would be for two years, with good possibilities for contract extensions. The ability to get a US Department of Energy 'Q' clearance (which normally requires US citizenship) is helpful but not essential. Applicants must have received their PhD within the last five years.
Interested candidates should visit the Jobs at LANL website at http://www.hr.lanl.gov/FindJob/index.stm, click on "Postdoctoral", and look for Job #201857. It is preferred that you apply through the website, but if you have any questions, contact James Theiler, by e-mail: jt at lanl.gov; or snail mail: Los Alamos National Laboratory, Mail Stop D-436, Los Alamos, NM 87545, USA. Direct applications should include a full resume with a list of two or three references, and a cover letter explaining why you think you would make a good candidate. Plain text, Postscript, or PDF attachments are fine. We can also read MS Word, but we don't like to. jt --------------------------------------------- James Theiler jt at lanl.gov MS-D436, NIS-2, LANL Los Alamos, NM 87545 ----- Space and Remote Sensing Sciences ----- From nnk at his.atr.co.jp Tue May 14 04:20:17 2002 From: nnk at his.atr.co.jp (Neural Networks Japan Office) Date: Tue, 14 May 2002 17:20:17 +0900 Subject: Neural Networks 15(3) Message-ID: NEURAL NETWORKS 15(3) Contents - Volume 15, Number 3 - 2002 ------------------------------------------------------------------ CURRENT OPINION: Three creatures named 'forward model'. A. Karniel CONTRIBUTED ARTICLES: ***** Neuroscience and Neuropsychology ***** A control model of the movement of attention. J. G. Taylor, M. Rogers A local and neurobiologically plausible method of learning correlated patterns. G. Athithan ***** Mathematical and Computational Analysis ***** Learning generative models of natural images. J. M. Wu, Z. H. Lin Optimal design of regularization term and regularization parameter by subspace information criterion. M. Sugiyama, H. Ogawa Parameter setting of the Hopfield network applied to TSP. P. M. Talavan, J. Yanez Transformations of sigma-pi nets: obtaining reflected functions by reflecting weight matrices. R. S. Neville, S. Eldridge On the capabilities of neural networks using limited precision weights. S. Draghici Exponential stability of Cohen-Grossberg neural network. L. Wang, X. Zou ***** Engineering and Design ***** A dynamically coupled neural oscillator network for image segmentation. K. Chen, D. Wang A deterministic annealing algorithm for approximating a solution of the max-bisection problem. C. Dang, L. He, I. K. Hui ***** Technology and Applications ***** AANN an alternative to GMM for pattern recognition. B. Yegnanarayana, S. P. Kishore CURRENT EVENTS ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. 
Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.
-----------------------------------------------------------------------------
Membership Type       INNS             ENNS                JNNS
-----------------------------------------------------------------------------
membership with       $80 (regular)    SEK 660 (regular)   Y 13,000 (regular)
Neural Networks                                              (plus 2,000 enrollment fee)
                      $20 (student)    SEK 460 (student)   Y 11,000 (student)
                                                             (plus 2,000 enrollment fee)
-----------------------------------------------------------------------------
membership without    $30              SEK 200             not available to non-students
Neural Networks                                              (subscribe through another society)
                                                            Y 5,000 (student)
                                                              (plus 2,000 enrollment fee)
-----------------------------------------------------------------------------
Name: _____________________________________ Title: _____________________________________ Address: _____________________________________ _____________________________________ _____________________________________ Phone: _____________________________________ Fax: _____________________________________ Email: _____________________________________ Payment: [ ] Check or money order enclosed, payable to INNS or ENNS OR [ ] Charge my VISA or MasterCard card number ____________________________ expiration date ________________________ INNS Membership 19 Mantua Road Mount Royal NJ 08061 USA 856 423 0162 (phone) 856 423 3420 (fax) innshq at talley.com http://www.inns.org ENNS Membership University of Skovde P.O. Box 408 531 28 Skovde Sweden 46 500 44 83 37 (phone) 46 500 44 83 99 (fax) enns at ida.his.se http://www.his.se/ida/enns JNNS Membership c/o Professor Takashi Nagano Faculty of Engineering Hosei University 3-7-2, Kajinocho, Koganei-shi Tokyo 184-8584 Japan 81 42 387 6350 (phone and fax) jnns at k.hosei.ac.jp http://jnns.inf.eng.tamagawa.ac.jp/home-j.html ----------------------------------------------------------------- From tt at cs.dal.ca Tue May 14 08:43:53 2002 From: tt at cs.dal.ca (Thomas T. Trappenberg) Date: Tue, 14 May 2002 09:43:53 -0300 Subject: New book on computational neuroscience Message-ID: <001c01c1fb45$01980620$2743ad81@oyster> Dear Colleagues, It is my pleasure to announce the availability of my book Fundamentals of Computational Neuroscience, Oxford University Press, ISBN 0-19-851583-9 (see http://www.oup.co.uk/isbn/0-19-851583-9) This book contains introductory reviews from the basic mechanisms of single neurons to system level organizations in the brain. The emphasis is on the relation of simplified models of single neurons and neuronal networks to the information processing in the brain. I hope that this will be useful for students who are seeking some introduction to this area, as well as to those wondering about the relation of abstract neural networks to information processing in the brain. The book can be ordered in Europe through the Oxford University Press web site (see the link above). The web catalogs of the other OUP divisions will be updated soon. Meanwhile you can order the book by calling the OUP representative in your area, and, of course, use other well-known sources.
I will make some teaching material available through my web site, and hope for your suggestions and comments. Sincerely, Thomas Trappenberg ------------------------------------------------- Dr. Thomas P. Trappenberg Associate Professor Faculty of Computer Science Dalhousie University 6050 University Avenue Halifax, Nova Scotia Canada B3H 1W5 Phone: (902) 494-3087 Fax: (902) 492-1517 Email: tt at cs.dal.ca From masulli at disi.unige.it Tue May 14 13:48:42 2002 From: masulli at disi.unige.it (Francesco Masulli) Date: Tue, 14 May 2002 17:48:42 +0000 Subject: School on ENSEMBLE METHODS FOR LEARNING MACHINES Vietri sul Mare 22-28 September 2002 Message-ID: <02051417484206.00959@portofino.disi.unige.it> 7th Course of the "International School on Neural Nets Eduardo R. Caianiello" on ENSEMBLE METHODS FOR LEARNING MACHINES IIASS-Vietri sul Mare (Salerno)-ITALY 22-28 September 2002 web page: http://www.iiass.it/school2002 JOINTLY ORGANIZED BY IIASS - International Institute for Advanced Scientific Studies E.R. Caianiello, Vietri sul Mare (SA), Italy, and EMFCSC - Ettore Majorana Foundation and Center for Scientific Culture, Erice (TR), Italy AIMS In the last decade, ensemble methods have been shown to be effective in many application domains and constitute one of the main current directions in Machine Learning research. This school will address, from a theoretical and empirical viewpoint, several important questions concerning the combination of Learning Machines. In particular, different approaches to the problem which have been proposed in the context of Machine Learning, Neural Networks, and Statistical Pattern Recognition will be discussed. Moreover, a special stress will be given to theoretical and practical tools to develop ensemble methods and evaluate their applications on real-world domains, such as Remote Sensing, Bioinformatics and Medicine. SPONSORS GNCS-Gruppo Nazionale per il Calcolo Scientifico IEEE-Neural Networks Council INNS-International Neural Network Society SIREN-Italian Neural Networks Society University of Salerno, Italy DIRECTORS OF THE COURSE: Nathan Intrator (USA) and Francesco Masulli (Italy) DIRECTORS OF THE SCHOOL: Michael Jordan (USA) and Maria Marinaro (Italy) LECTURERS Leo Breiman, University of California at Berkeley, California, USA Lorenzo Bruzzone, University of Trento, Trento, Italy Thomas G. Dietterich, Oregon State University, Oregon, USA Cesare Furlanello, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy Giuseppina C. Gini, Politecnico di Milano, Milano, Italy Tin Kam Ho, Bell Laboratories, New Jersey, USA Nathan Intrator, Brown University, Providence, Rhode Island, USA Ludmila I. Kuncheva, University of Wales, Bangor, UK Francesco Masulli, University of Pisa, Italy Stefano Merler, Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy Fabio Roli, University of Cagliari, Cagliari, Italy Giorgio Valentini, University of Genova, Italy PLACE International Institute for Advanced Scientific Studies E.R. Caianiello (IIASS) Via Pellegrino 19, 84019 Vietri sul Mare, Salerno (Italy) POETIC TOUCH Vietri (from "Veteri", its ancient Roman name) sul Mare ("on sea") is located within walking distance from Salerno and marks the beginning of the Amalfi coast. Short rides take you to Positano, Sorrento, Pompei, Herculaneum, Paestum, Vesuvius, or, by boat, the islands of Capri, Ischia, and Procida. Velia (the ancient "Elea" of Zeno and Parmenides) is a hundred kilometers farther down along the coast.
GENERALITIES Recently, driven by application needs, multiple classifier combinations have evolved into a practical and effective solution for real-world pattern recognition tasks. The idea appears in various disciplines (including Machine Learning, Neural Networks, Pattern Recognition, and Statistics) under several names: hybrid methods, combining decisions, multiple experts, mixture of experts, sensor fusion and many more. In some cases, the combination is motivated by the simple observation that classifier performance is not uniform across the input space and different classifiers excel in different regions. Under a Bayesian framework, integrating over the expert distribution leads naturally to expert combination. The generalization capabilities of ensembles of learning machines have been interpreted in the framework of Statistical Learning Theory and in the related theory of Large Margin Classifiers. There are several ways to use more than one classifier in a classification problem. A first "averaging" approach consists of generating multiple hypotheses from a single or multiple learning algorithms, and combining them through majority voting or different linear and nonlinear combinations. A "feature-oriented" approach is based on different methods to build ensembles of learning machines by subdividing the input space (e.g., random subspace methods, multiple sensors fusion, feature transformation fusion). "Divide-and-conquer" approaches isolate the regions in input space on which each classifier performs well, and direct new input accordingly, or subdivide a complex learning problem into a set of simpler subproblems, recombining them using suitable decoding methods. A "sequential-resampling" approach builds multiple classifier systems using bootstrap methods in order to reduce variance (bagging) or jointly bias and unbiased variance (boosting). There are fundamental questions that need to be addressed for a practical use of this collection of approaches: What are the theoretical tools for interpreting, possibly in a unified framework, this multiplicity of ensemble methods? What is gained and lost in a combination of experts, and when is it preferable to alternative approaches? What types of data are best suited to expert combination? What types of experts are best suited for combinations? What are optimal training methods for experts which are expected to participate in a collective decision? What combination strategies are best suited to a particular problem and to a particular distribution of the data? What are the statistical methods and the appropriate benchmark data to evaluate multiclassifier systems? The school will address some of the above questions from a theoretical and empirical viewpoint and will teach students about this exciting and very promising field using current state-of-the-art data sets for pattern recognition, classification and regression. The main goals of the school are: 1. Offer an overview of the main research issues of ensemble methods from the different and complementary perspectives of Machine Learning, Neural Networks, Statistics and Pattern Recognition. 2. Offer theoretical tools to analyze the diverse approaches, and critically evaluate their applications. 3. Offer practical and theoretical tools to develop new ensemble methods and analyze their application on real-world problems. FORMAT The meeting will follow the usual format of tutorials and panel discussions together with poster sessions for contributed papers.
A demo lab with four Linux workstations will be available to the participants for testing and comparing ensemble methods. There will be an 11 Mb/s wireless network available so that students arriving with their laptops and an appropriate wireless communication card can stay connected while at the meeting area. DURATION Participants are expected to arrive in time for the evening meal on Sunday Sept 22nd and depart on Saturday Sept 28th. Sessions will take place from Monday 23rd to Friday 27th. PROCEEDINGS The proceedings will be published in the form of a book containing tutorial chapters written by the lecturers and possibly shorter papers from other participants. One free copy of the book will be distributed to each participant after the school. LANGUAGE The official language of the school will be English. POSTER SUBMISSION There will be a poster session for contributed presentations from participants. Proposals consisting of a one-page abstract for review by the organizers should be submitted with applications. REGISTRATION FEE Master and PhD Students: 650,00 Euro Academic Participants (govt/univ): 800,00 Euro Industrial Participants: 1.100,00 Euro The fee includes accommodation (3-star hotel, double room), meals and a copy of the proceedings of the school. Transportation is not included. A supplement of 20 Euro per night should be paid for a single room. Members of sponsoring organizations will receive a discount of 50 Euro on the registration fee. A few scholarships are available for students who are otherwise unable to participate in the school. Payment details will be notified with acceptance of applications. ELIGIBILITY The school is open to all suitably qualified scientists. People with few years of experience in the field should include a recommendation letter from their supervisor. APPLICATION PROCEDURE Important Dates: Application deadline: June 20 2002 Notification of acceptance: July 10 2002 Registration fee payment deadline: July 20 2002 School Sept 22-28 2002 Places are limited to a maximum of 60 participants in addition to the lecturers. These will be allocated on a first come, first served basis. ********************************************************************** APPLICATION FORM Title: ............................................................... Family Name: ......................................................... Other Names: ......................................................... Mailing Address (include institution or company name if appropriate): ..................................................................... ..................................................................... ..................................................................... ..................................................................... ..................................................................... Phone:......................Fax:...................................... E-mail: .............................................................. Date of Arrival: .................................................... Date of Departure: ................................................... Are you sending the abstract of a poster? yes/no (delete the alternative which does not apply) Are you applying for a scholarship? yes/no (delete the alternative which does not apply) If yes, please include a justification letter for the scholarship request.
***************************************************************** Please send the application form together with the recommendation letter by electronic mail to: iiass.vietri at tin.it, subject: summer school; or by fax to: +39 089 761 189 (att.ne Prof. M. Marinaro) or by ordinary mail to the address below: IIASS Via Pellegrino 19, I-84019 Vietri sul Mare (Sa) Italy WEB PAGE OF THE COURSE The web page of the course is http://www.iiass.it/school2002 and will contain all the updates related to the course. At http://www.iiass.it/school2002/ensemble-lab.html a web portal to ENSEMBLE METHODS is in development, including pointers to relevant papers, databases and software. Contributions to this portal are kindly requested from all researchers involved in this area. Please send all contributions to Giorgio Valentini (valenti at disi.unige.it). FOR FURTHER INFORMATION PLEASE CONTACT: Prof. Francesco Masulli DISI&INFM email: masulli at ge.infm.it Via Dodecaneso 35 fax: +39 010 353 6699 16146 Genova (Italy) tel: +39 010 353 6604 From christof at teuscher.ch Wed May 15 12:00:30 2002 From: christof at teuscher.ch (Christof Teuscher) Date: Wed, 15 May 2002 18:00:30 +0200 Subject: [Turing Day] - Last Call for Participation Message-ID: <3CE2861E.2471C176@teuscher.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. ================================================================ **************************************************************** SECOND AND LAST CALL FOR PARTICIPATION **************************************************************** ** Turing Day ** Computing science 90 years from the birth of Alan M. Turing. Friday, June 28, 2002 Swiss Federal Institute of Technology Lausanne (EPFL) Lausanne, Switzerland http://lslwww.epfl.ch/turingday **************************************************************** Purpose: -------- Alan Mathison Turing, born on June 23, 1912, is considered one of the most creative thinkers of the 20th century. His interests, from computing to the mind through information science and biology, span many of the emerging themes of the 21st century. On June 28, 2002, we commemorate the 90th anniversary of Alan Mathison Turing's birthday in the form of a one-day workshop held at the Swiss Federal Institute of Technology in Lausanne. The goal of this special day is to remember Alan Turing and to revisit his contributions to computing science. The workshop will consist of a series of invited talks given by internationally-renowned experts in the field. Invited Speakers: ----------------- B. Jack Copeland - "Artificial Intelligence and the Turing Test" Martin Davis - "The Church-Turing Thesis: Has it been Superseded?" Andrew Hodges - "What would Alan Turing have done after 1954?" Douglas R. Hofstadter - "The Strange Loop -- from Epimenides to Cantor to Russell to Richard to Goedel to Turing to Tarski to von Neumann to Crick and Watson" Tony Sale - "What did Turing do at Bletchley Park?"
Jonathan Swinton - "Watching the Daisies Grow: Turing and Fibonacci Phyllotaxis" Gianluca Tempesti - "The Turing Machine Redux: A Bio-Inspired Implementation" Christof Teuscher - "Connectionism, Turing, and the Brain" Special Events: --------------- o Display and demonstration of an original Enigma machine o Exhibition of historical computers (Bolo's Computer Museum) o Demonstration of Turing's neural networks on the BioWall o Demonstration of a self-replicating universal Turing machine For up-to-date information and registration, consult the Turing Day web-site: http://lslwww.epfl.ch/turingday We are looking forward to seeing you in beautiful Lausanne! Sincerely, - Christof Teuscher ---------------------------------------------------------------- Christof Teuscher Swiss Federal Institute of Technology Lausanne (EPFL) christof at teuscher.ch http://www.teuscher.ch/christof ---------------------------------------------------------------- Turing Day: http://lslwww.epfl.ch/turingday IPCAT2003: http://lslwww.epfl.ch/ipcat2003 ---------------------------------------------------------------- From mbethge at physik.uni-bremen.de Thu May 16 11:37:47 2002 From: mbethge at physik.uni-bremen.de (Matthias Bethge) Date: Thu, 16 May 2002 17:37:47 +0200 Subject: paper available Message-ID: <3CE3D24B.5E377465@physik.uni-bremen.de> Dear Connectionists, the following preprint is available for downloading: http://www-neuro.physik.uni-bremen.de/~mbethge/publications.html Matthias Bethge, David Rotermund and Klaus Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Computation, in press. Abstract: Efficient coding has been proposed as a first principle explaining neuronal response properties in the central nervous system. The shape of optimal codes, however, strongly depends on the natural limitations of the particular physical system. Here we investigate how optimal neuronal encoding strategies are influenced by the finite number of neurons $N$ (place constraint), the limited decoding time window length $T$ (time constraint), the maximum neuronal firing rate $f_{max}$ (power constraint) and the maximal average rate $\langle f \rangle_{max}$ (energy constraint). While Fisher information provides a general lower bound for the mean squared error of unbiased signal reconstruction, its use to characterize the coding precision is limited. Analyzing simple examples, we illustrate some typical pitfalls and thereby show that Fisher information provides a valid measure for the precision of a code only if the dynamic range $(f_{min}T, f_{max}T)$ is sufficiently large. In particular, we demonstrate that the optimal width of Gaussian tuning curves depends on the available decoding time $T$. Within the broader class of unimodal tuning functions it turns out that the shape of a Fisher-optimal coding scheme is not unique. We resolve this ambiguity by taking the minimum mean square error into account, which leads to flat tuning curves. The tuning width, however, turns out to be determined by energy constraints rather than by the principle of efficient coding. -- ________________________________________________________ Matthias Bethge @ Institute of Theoretical Physics www: http://www-neuro.physik.uni-bremen.de/~mbethge Tel.
: (+49)421-218-4460 /\_______________ Fax : (+49)421-218-9104 /\/ _______________________________ /\/ \/ From mbethge at physik.uni-bremen.de Fri May 17 05:06:58 2002 From: mbethge at physik.uni-bremen.de (Matthias Bethge) Date: Fri, 17 May 2002 11:06:58 +0200 Subject: broken links Message-ID: <3CE4C832.E77A63AD@physik.uni-bremen.de> To all who did not succeed in downloading the preprint 'Optimal short-term population coding: when Fisher information fails' from my homepage: for some reason (which I don't really know), all links on my homepage to the preprints were broken. Now the problem is fixed. Sorry for the inconvenience and please try again: http://www-neuro.physik.uni-bremen.de/~mbethge/publications.html Good luck Matthias -- ________________________________________________________ Matthias Bethge @ Institute of Theoretical Physics www: http://www-neuro.physik.uni-bremen.de/~mbethge Tel. : (+49)421-218-4460 /\_______________ Fax : (+49)421-218-9104 /\/ _______________________________ /\/ \/ From terry at salk.edu Wed May 22 14:15:58 2002 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 22 May 2002 11:15:58 -0700 (PDT) Subject: NEURAL COMPUTATION 14:6 In-Reply-To: <199703060454.UAA08685@helmholtz.salk.edu> Message-ID: <200205221815.g4MIFwZ14239@purkinje.salk.edu> Neural Computation - Contents - Volume 14, Number 6 - June 1, 2002 ARTICLE Cosine Tuning Minimizes Motor Errors Emanuel Todorov NOTES SMEM Algorithm Is Not Fully Compatible with Maximum-Likelihood Framework Akihiro Minagawa, Norio Tagawa, Toshiyuki Tanaka A Note on the Decomposition Methods for Support Vector Regression Shuo-Peng Liao, Hsuan-Tien Lin and Chih-Jen Lin LETTERS An Image Analysis Algorithm for Dendritic Spines Ingrid Y. Y. Koh, W. Brent Lindquist, Karen Zito, Esther A. Nimchinsky and Karel Svoboda Multiplicative Synaptic Normalization and a Non-Linear Hebb Rule Underlie a Neurotrophic Model of Competitive Synaptic Plasticity T. Elliott and N.R. Shadbolt Energy-Efficient Coding with Discrete Stochastic Events Susanne Schreiber, Christian K. Machens, Andreas V.M. Herz and Simon B. Laughlin Multiple Model-Based Reinforcement Learning Kenji Doya, Kazuyuki Samejima, Ken-ichi Katagiri and Mitsuo Kawato A Bayesian Approach to the Stereo Correspondence Problem Jenny C. A. Read Learning Curves for Gaussian Process Regression: Approximations and Bounds Peter Sollich and Anason Halees A Global Optimum Approach for One-Layer Neural Networks Enrique Castillo, Oscar Fontenla-Romero, Bertha Guijarro-Berdinas and Amparo Alonso-Betanzos MLP in Layer-Wise Form with Applications to Weight Decay Tommi Karkkainen Local Overfitting Control via Leverages Gaetan Monari and Gerard Dreyfus ----- ON-LINE - http://neco.mitpress.org/
SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES
                  USA     Canada*   Other Countries
Student/Retired   $60     $64.20    $108
Individual        $88     $94.16    $136
Institution       $506    $451.42   $554
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From adr at adrlab.ahc.umn.edu Sat May 25 13:06:22 2002 From: adr at adrlab.ahc.umn.edu (A David Redish) Date: Sat, 25 May 2002 12:06:22 -0500 Subject: MClust-3.0 Message-ID: <200205251706.g4PH6Ne3007636@adrlab.ahc.umn.edu> MClust-3.0 Announcing the release of a new version of the MClust spike-sorting toolbox. MClust is a Matlab toolbox which enables a user to perform manual clustering on single-electrode, stereotrode, and tetrode recordings taken with a variety of recording systems.
The current system ships with engines for loading Neuralynx ".dat" files, but one of the new features of MClust-3.0 is a modular loading-engine system and it should be easy to write new loading engines for other data formats. The MClust web page can be found at http://www.cbc.umn.edu/~redish/MClust The MClust toolbox is freeware, but you will need Matlab 6.0 or higher to run it. It has been tested under the Windows family of operating systems, but ports to other operating systems should be simple. Further details (such as the copyright notice, author credits, and disclaimer) are available from the website above. New features available in MClust-3.0 include more modularity and new cutting-engines. Loading-engines are easily added. This will greatly facilitate using MClust with other data formats. As before, spike-features are easily added. Additional features are also shipped with the new version. MClust-3.0 now includes automated/assisted cluster cutting engines, including BubbleClust (P. Lipa, University of Arizona, Tucson AZ) and KlustaKwik (K. Harris, Rutgers University, Newark NJ). After preprocessing data (using a new batch-processing system), clusters cut with BubbleClust or KlustaKwik can be selected, merged, split, and generally touched-up using the new selection/decision windows and the original manual cutting engine. Manual cutting is still, of course, always an option. ----------------------------------------------------- A. David Redish redish at ahc.umn.edu Assistant Professor http://www.cbc.umn.edu/~redish Department of Neuroscience, University of Minnesota 6-145 Jackson Hall 321 Church St SE Minneapolis MN 55455 ----------------------------------------------------- From T.J.Prescott at sheffield.ac.uk Tue May 28 12:00:54 2002 From: T.J.Prescott at sheffield.ac.uk (Tony Prescott) Date: Tue, 28 May 2002 17:00:54 +0100 Subject: Robotics as Theoretical Biology Workshop Message-ID: (apologies for repeat postings) SECOND CALL FOR PARTICIPATION WORKSHOP ON ROBOTICS AS THEORETICAL BIOLOGY AUGUST 10TH, 2002, EDINBURGH, SCOTLAND. PART OF SAB '02: THE 7TH MEETING OF THE INTERNATIONAL SOCIETY FOR SIMULATION OF ADAPTIVE BEHAVIOR AUGUST 4TH-11TH, 2002, EDINBURGH, SCOTLAND. We now have a full program of invited talks from international researchers in robotics, biology, and neuroscience, which includes lots of time for discussion. We are also accepting 'submissions' from workshop attendees in a simple form: If you have a poster from any previous conference that you think is relevant, we'd like to have it displayed, and include an abstract for it in our workshop notes. THE DEADLINE FOR ABSTRACTS IS THE 15TH OF JUNE. We're hoping to make a good show for the field, so please think about coming, bringing a poster, and spreading the word for others to do the same. See the web-page http://www.shef.ac.uk/~abrg/sab02/index.shtml for more details. Best Wishes, Tony Prescott & Barbara Webb (workshop co-organisers) From ngoddard at anc.ed.ac.uk Thu May 30 02:36:16 2002 From: ngoddard at anc.ed.ac.uk (Nigel Goddard) Date: Thu, 30 May 2002 07:36:16 +0100 Subject: Faculty position in language modeling - deadline soon! Message-ID: <3CF5C860.EB579E2A@anc.ed.ac.uk> UNIVERSITY OF EDINBURGH DIVISION OF INFORMATICS AND DEPARTMENT OF PSYCHOLOGY This position may be of interest to neurocognitive modelers of language function. Deadline for applications is June 7th.
The Department of Psychology and the Division of Informatics invite applications from highly-qualified candidates for a 3-year Lectureship to be jointly held in Psychology and Informatics. You must be able to teach existing courses in both departments, and one or more of the following areas: Cognitive Modelling, Computational Psycholinguistics, Cognitive Neuroscience, Computational Neuroscience, Human-Computer Interaction, Experimental Methods. You should demonstrate a world-class research record and both interest and ability in teaching. You will be an experimentalist with a firm grounding in theory and computation. Informal enquiries to Professor Bonnie Webber, Division of Informatics, +44 131 650 4190 or to hod.Psych at ed.ac.uk, Department of Psychology, Ph: +44 131 650 3440, Fax: +44 131 650 3461. Salary range: £20,470 - £24,435 p.a. or £25,455 - £32,537 p.a. Please quote reference no: 311414JW Closing date: 7 June 2002 For further particulars see http://www.jobs.ed.ac.uk/jobs/index.cfm?action=jobdet&jobid=1024 and for an application pack visit our website http://www.jobs.ed.ac.uk or telephone the recruitment line on +44 131 650 2511 -- ==================================================================== Dr. Nigel Goddard, Institute for Adaptive and Neural Computation, Division of Informatics, University of Edinburgh, C19, 5 Forrest Hill, Edinburgh EH1 2QL, Scotland http://www.streetmap.co.uk/streetmap.dll?Postcode2Map?code=EH1+2QL Office: +44 (0)131 650 3087 mobile: +44 (0)787 967 1811 mailto:Nigel.Goddard at ed.ac.uk http://anc.ed.ac.uk/~ngoddard FAX->email [USA] (603) 698 5854 [UK] (0870) 130 5014 Calendar: http://calendar.yahoo.com/public/nigel_goddard ==================================================================== From ken at phy.ucsf.edu Thu May 30 15:36:26 2002 From: ken at phy.ucsf.edu (Ken Miller) Date: Thu, 30 May 2002 12:36:26 -0700 Subject: Paper available: Analysis of LGN Input and Orientation Tuning Message-ID: <15606.32570.37668.510564@coltrane.ucsf.edu> The following paper is available as ftp://ftp.keck.ucsf.edu/pub/ken/troyer_etal02.pdf or from http://www.keck.ucsf.edu/~ken (click on 'publications', then on 'Models of neuronal integration and circuitry') This is a preprint of an article that has now appeared as: Journal of Neurophysiology 87, 2741-2752 (2002). LGN Input to Simple Cells and Contrast-Invariant Orientation Tuning: An Analysis Todd W. Troyer, Anton E. Krukowski and Kenneth D. Miller Abstract: We develop a new analysis of the LGN input to a cortical simple cell, demonstrating that this input is the sum of two terms, a linear term and a nonlinear term. In response to a drifting grating, the linear term represents the temporal modulation of input, and the nonlinear term represents the mean input. The nonlinear term, which grows with stimulus contrast, has been neglected in many previous models of simple cell response. We then analyze two scenarios by which contrast-invariance of orientation tuning may arise. In the first scenario, at larger contrasts, the nonlinear part of the LGN input, in combination with strong push-pull inhibition, counteracts the nonlinear effects of cortical spike threshold, giving the result that orientation tuning scales with contrast. In the second scenario, at low contrasts, the nonlinear component of LGN input is negligible, and noise smooths the nonlinearity of spike threshold so that the input-output function approximates a power-law function.
These scenarios can be combined to yield contrast-invariant tuning over the full range of stimulus contrast. The model clarifies the contribution of LGN nonlinearities to the orientation tuning of simple cells, and demonstrates how these nonlinearities may impact different models of contrast-invariant tuning. Ken Kenneth D. Miller telephone: (415) 476-8217 Associate Professor fax: (415) 476-4929 Dept. of Physiology, UCSF internet: ken at phy.ucsf.edu 513 Parnassus www: http://www.keck.ucsf.edu/~ken San Francisco, CA 94143-0444 From erik at bbf.uia.ac.be Thu May 30 12:46:47 2002 From: erik at bbf.uia.ac.be (Erik De Schutter) Date: Thu, 30 May 2002 18:46:47 +0200 Subject: CNS*2002: early registration Message-ID: The early registration for the CNS*2002 meeting closes on June 10th. Final program, preprint versions of the submitted papers and registration facilities can all be accessed through our webserver at http://www.neuroinf.org/CNS.shtml Eleventh Annual Computational Neuroscience Meeting CNS*2002 July 21 - July 25, 2002 Chicago, Illinois USA CNS*2002 will be held in Chicago from Sunday, July 21, 2002 to Thursday, July 25 in the Congress Plaza Hotel & Convention Center. This is a historic hotel located on Lake Michigan in downtown Chicago. General sessions will be Sunday-Wednesday; Thursday will be a full day of workshops at the University of Chicago. The conference dinner will be Wednesday night, followed by the rock-n-roll jam session. INVITED SPEAKERS: Ad Aertsen (Albert-Ludwigs-University, Germany) Leah Keshet (University of British Columbia, Canada) Alex Thomson (University College London, UK) ORGANIZING COMMITTEE: Program chair: Erik De Schutter (University of Antwerp, Belgium) Local organizer: Philip Ulinski (University of Chicago, USA) Workshop organizer: Maneesh Sahani (Gatsby Computational Neuroscience Unit, UK) Government Liaison: Dennis Glanzman (NIMH/NIH, USA) Program Committee: Upinder Bhalla (National Centre for Biological Sciences, India) Avrama Blackwell (George Mason University, USA) Victoria Booth (New Jersey Institute of Technology, USA) Alain Destexhe (CNRS Gif-sur-Yvette, France) John Hertz (Nordita, Denmark) David Horn (University of Tel Aviv, Israel) Barry Richmond (NIMH, USA) Steven Schiff (George Mason University, USA) Todd Troyer (University of Maryland, USA) From tewon at salk.edu Thu May 30 20:18:04 2002 From: tewon at salk.edu (Te-Won Lee) Date: Thu, 30 May 2002 17:18:04 -0700 Subject: JMLR special issue on ICA Message-ID: <015701c20838$a288f2b0$0693ef84@redmond.corp.microsoft.com> Journal of Machine Learning Research Special Issue on "Independent Component Analysis" Guest Editors: Te-Won Lee, Jean-Francois Cardoso, Erkki Oja, Shun-Ichi Amari CALL FOR PAPERS We invite papers on Independent Component Analysis (ICA) and Blind Source Separation (BSS) for a special issue in the Journal of Machine Learning Research (on-line publication and subsequent publication from MIT Press). In recent years, ICA has received attention from many research areas including statistical signal processing, machine learning, neural networks, information theory and exploratory data analysis. Applications of ICA algorithms in speech signal processing and biomedical signal processing are growing and maturing, and ICA methods are also considered in many other fields where this novel data analysis technique provides new insights. Recent approaches to ICA such as variational methods, kernel methods and tensor methods have led to new theoretical insights.
They permit us to relax some of the constraints in the traditional ICA assumptions, yielding new algorithms and increasing the domains of application. Certain nonlinear mixing systems can be inverted, more sources than the number of sensors can be recovered, and further understanding of the convergence properties and gradient optimizations is now available. The ICA framework is an interdisciplinary research area. The combination of ideas from machine learning and statistical signal processing is a developing avenue of research, and ICA is a first step in this new direction. We invite original contributions that explore theoretical and practical issues related to ICA. A list of possible topics includes: Theory and Algorithms Bayesian methods Information theoretic approaches High order statistics Convolutive mixtures Convergence and stability issues Graphical models Nonlinear mixing Undercomplete mixtures Sparse coding Methodology and Applications Biomedical applications Speech signal processing Image processing Performance comparisons Model validation Dimension reduction and visualization Learning features in high dimensional data Important Dates: - Submission: October, 1st 2002 - Decision: January, 1st 2003 - Final: March, 1st 2003 Submission procedure: see http://rhythm.ucsd.edu/~tewon/JMLR.html For further details or enquiries, send mail to tewon at inc.ucsd.edu Links: http://www-sig.enst.fr/~ica99/ http://www.cis.hut.fi/ica2000 http://www.ica2001.org http://ica2003.jp From cia at brain.riken.go.jp Fri May 31 13:01:51 2002 From: cia at brain.riken.go.jp (Andrzej Cichocki) Date: Sat, 01 Jun 2002 02:01:51 +0900 Subject: New monograph Message-ID: <3CF7AC7F.4020701@brain.riken.go.jp> [Our sincere apologies if you receive multiple copies of this email] The following book is now available: ADAPTIVE BLIND SIGNAL and IMAGE PROCESSING: Learning Algorithms and Applications A. Cichocki, S. Amari Published by John Wiley & Sons, Chichester UK, April 2002, 586 Pages. The book covers the following areas: Independent Component Analysis (ICA), blind source separation (BSS), blind recovery, blind signal extraction (BSE), multichannel blind deconvolution, blind equalization, second and higher order statistics, blind spatial and temporal decorrelation, robust whitening, blind filtering, matrix factorizations, robust principal component analysis, minor component analysis, sparse representations, automatic dimension reduction, feature extraction in high dimensional data, noise reduction and related problems. Moreover, some interesting benchmarks are available to compare the performance of various unsupervised learning algorithms. More information about the book can be found on the web pages: http://www.bsp.brain.riken.go.jp/ICAbookPAGE/ http://www.wiley.com/cda/product/0,,0471607916,00.html and in the brief summary below. Andrzej Cichocki Laboratory for Advanced Brain Signal Processing, Riken BSI 2-1 Hirosawa, Wako-shi, Saitama 351-0198, JAPAN E-mail: cia at bsp.brain.riken.go.jp URL: http://www.bsp.brain.riken.go.jp/ Summary of the book Chapter 1: Introduction to Blind Signal Processing: Problems and Applications Blind Signal Processing (BSP) is now one of the hottest and most exciting topics in the fields of neural computation, advanced statistics, and signal processing, with solid theoretical foundations and many potential applications.
In fact, BSP has become a very important topic of research and development in many areas, especially biomedical engineering, medical imaging, speech enhancement, remote sensing, communication systems, exploration seismology, geophysics, econometrics, data mining, neural networks, etc. The blind signal processing techniques principally do not use any training data and do not assume a priori knowledge about parameters of convolutive, filtering and mixing systems. BSP includes three major areas: Blind Signal Separation and Extraction (BSS/BSE), Independent Component Analysis (ICA), and Multichannel Blind Deconvolution (MBD) and Equalization, which are the main subjects of the book. This chapter formulates the fundamental problems of BSP, gives important definitions, and describes the basic mathematical and physical models. Moreover, several potential and promising applications are reviewed. Keywords: Blind Source Separation (BSS), Blind Source Extraction (BSE), Independent Component Analysis (ICA), Multichannel Blind Deconvolution (MBD), Basic definitions and models, Applications. Chapter 2: Solving a System of Linear Algebraic Equations and Related Problems In modern signal and image processing fields like biomedical engineering, computer tomography (image reconstruction from projections), automatic control, robotics, speech and communication, linear parametric estimation, models such as auto-regressive moving-average (ARMA) and linear prediction (LP) have been extensively utilized. In fact, such models can be mathematically described by an overdetermined system of linear algebraic equations. Such systems of equations are often contaminated by noise or errors, thus the problem arises of finding an optimal solution that is robust with respect to noise, if some a priori information about the error is available. On the other hand, wide classes of extrapolation, reconstruction, estimation, approximation, interpolation and inverse problems can be converted to minimum norm problems of solving underdetermined systems of linear equations. Generally speaking, in signal processing applications, the overdetermined system of linear equations describes filtering, enhancement, deconvolution and identification problems, while the underdetermined case describes inverse and extrapolation problems.
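[Editor's note: the two regimes contrasted above have simple closed-form baselines, sketched below in NumPy for orientation. This is not code from the book; the matrices and right-hand sides are illustrative placeholders, and the robust, regularized and constrained variants listed in the chapter's keywords replace these closed forms with iterative schemes.]

import numpy as np

rng = np.random.default_rng(0)

# Overdetermined case: 8 noisy equations, 3 unknowns; no exact solution
# in general, so minimize ||A x - b||_2 (ordinary least squares).
A = rng.normal(size=(8, 3))
b = rng.normal(size=8)
x_ls = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations: (A^T A) x = A^T b

# Underdetermined case: 3 equations, 8 unknowns; infinitely many exact
# solutions, so pick the one of minimum 2-norm.
C = rng.normal(size=(3, 8))
d = rng.normal(size=3)
x_mn = C.T @ np.linalg.solve(C @ C.T, d)   # x = C^T (C C^T)^{-1} d

print(np.linalg.norm(A @ x_ls - b))   # least-squares residual
print(np.allclose(C @ x_mn, d))       # True: constraints satisfied exactly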
Chapter 3: Principal/Minor Component Analysis and Related Problems Neural networks with unsupervised learning algorithms organize themselves in such a way that they can detect or extract useful features, regularities, correlations of data or signals or separate or decorrelate some signals with little or no prior knowledge of the desired results. Normalized (constrained) Hebbian and anti-Hebbian learning rules are simple variants of basic unsupervised learning algorithms; in particular, learning algorithms for principal component analysis (PCA), singular value decomposition (SVD) and minor component analysis (MCA) belong to this class of unsupervised rules. Recently, many efficient and powerful adaptive algorithms have been developed for PCA, MCA and SVD and their extensions The main objective of this chapter is a derivation and overview of the most important adaptive algorithms. Keywords: PCA, MCA, SVD, Subspace methods, Automatic dimensionality reduction, AIC and MDL criteria, Power method, Robust PCA, Multistage PCA for blind source separation. Chapter 4: Blind Decorrelation and Second Order Statistics for Robust Blind Identification Temporal, spatial and spatio-temporal decorrelations play important roles in signal processing. These techniques are based only on second-order statistics (SOS). They are the basis for modern subspace methods of spectrum analysis and array processing and often used in a preprocessing stage in order to improve convergence properties of adaptive systems, to eliminate redundancy or to reduce noise. Spatial decorrelation or prewhitening is often considered as a necessary (but not sufficient) condition for the stronger stochastic independence criteria. After prewhitening, the BSS or ICA tasks usually become somewhat easier and well-posed (less ill-conditioned), because the subsequent separating (unmixing) system is described by an orthogonal matrix for real-valued signals and a unitary matrix for complex-valued signals and weights. Furthermore, spatio-temporal and time-delayed decorrelation can be used to identify the mixing matrix and perform blind source separation of colored sources. In this chapter, we discuss and analyze a number of efficient and robust adaptive and batch algorithms for spatial whitening, orthogonalization, spatio-temporal and time-delayed blind decorrelation. Moreover, we discuss several promising robust algorithms for blind identification and blind source separation of non-stationary and/or colored sources. Keywords: Robust whitening, Robust orthogonalization, Gram-Schmidt orthogonalization,, Second order statistics (SOS) blind identification, Multistage EVD/SVD for BSS, Simultaneous diagonalization, Joint approximative diagonalization, SOBI and JADE algorithms, Blind source separation for non-stationary signals, Natural gradient, Atick-Redlich formula, Gradient descent with Frobenius norm constraint. Chapter 5: Sequential Blind Signal Extraction There are three main objectives of this chapter: (a) To present simple neural networks (processing units) and propose unconstrained extraction and deflation criteria that do not require either a priori knowledge of source signals or the whitening of mixed signals. These criteria lead to simple, efficient, purely local and biologically plausible learning rules (e.g., Hebbian/anti-Hebbian type learning algorithms). (b) To prove that the proposed criteria have no spurious equilibriums. 
Chapter 5: Sequential Blind Signal Extraction

There are three main objectives of this chapter: (a) to present simple neural networks (processing units) and propose unconstrained extraction and deflation criteria that require neither a priori knowledge of the source signals nor the whitening of the mixed signals; these criteria lead to simple, efficient, purely local and biologically plausible learning rules (e.g., Hebbian/anti-Hebbian type learning algorithms); (b) to prove that the proposed criteria have no spurious equilibria; in other words, most of the learning rules discussed in this chapter always reach the desired solutions, regardless of initial conditions (see the appendices for proofs); and (c) to demonstrate, with computer simulations, the validity and high performance of the derived learning algorithms for practical use. Two different models and approaches are used in this chapter. The first approach is based on higher-order statistics (HOS); it assumes that the sources are mutually statistically independent and non-Gaussian (except at most one), and uses measures of non-Gaussianity as criteria of independence. The second approach, based on second-order statistics (SOS), assumes that the source signals have some temporal structure, i.e., the sources are colored with different autocorrelation functions or, equivalently, differently shaped spectra. Special emphasis is given to blind source extraction (BSE) in the case when the sensor signals are corrupted by additive noise, using a bank of band-pass filters.

Keywords: Basic criteria for blind source extraction, Kurtosis, Gray function, Cascade neural network, Deflation procedures, KuickNet, Fixed-point algorithms, Blind extraction with reference signal, Linear predictor and band-pass filters for BSS, Statistical analysis, Log likelihood, Extraction of sources from convolutive mixture, Stability, Global convergence.

Chapter 6: Natural Gradient Approach to Independent Component Analysis

In this chapter, fundamental signal processing and information-theoretic approaches are presented, together with learning algorithms, for the problem of adaptive blind source separation (BSS) and Independent Component Analysis (ICA). We discuss recent developments of adaptive learning algorithms based on the natural gradient approach on the general linear, orthogonal and Stiefel manifolds. Mutual information, Kullback-Leibler divergence, and several promising schemes are discussed and reviewed, especially for signals with various unknown distributions and an unknown number of sources. Emphasis is given to an information-theoretical and information-geometrical unifying approach, adaptive filtering models and associated on-line adaptive nonlinear learning algorithms. We discuss the optimal choice of nonlinear activation functions for various distributions, e.g., Gaussian, Laplacian, impulsive and uniformly-distributed signals, based on a generalized-Gaussian-distribution model. Furthermore, families of efficient and flexible algorithms that exploit the non-stationarity of signals are also derived.

Keywords: Kullback-Leibler divergence, Natural gradient concept, Derivation and analysis of natural gradient algorithms, Local stability analysis, Nonholonomic constraints, Generalized Gaussian and Cauchy distributions, Pearson model, Natural gradient algorithms for non-stationary sources, Extraction of arbitrary group of sources, Semi-orthogonality constraints, Stiefel manifolds.
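[The natural gradient ICA update in its best-known form is W <- W + eta (I - f(y) y^T) W. The following is my own toy illustration of that rule with a tanh nonlinearity, which suits super-Gaussian sources; it is a sketch, not code from the book.]

import numpy as np

rng = np.random.default_rng(2)

# Toy problem: two super-Gaussian (Laplacian) sources, square mixing.
S = rng.laplace(size=(2, 20000))
A = rng.standard_normal((2, 2))
X = A @ S

W = np.eye(2)                 # demixing matrix estimate
eta = 0.01                    # learning rate
batch = 200

for epoch in range(20):       # a few passes over the data
    for t in range(0, X.shape[1], batch):
        Y = W @ X[:, t:t + batch]
        f = np.tanh(Y)        # activation suited to super-Gaussian sources
        # Natural gradient update: W += eta * (I - E[f(y) y^T]) W
        W += eta * (np.eye(2) - (f @ Y.T) / Y.shape[1]) @ W

# If separation has succeeded, W @ A is close to a scaled permutation
# matrix (sources are recovered up to order and scale).
print(np.round(W @ A, 2))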
Chapter 7: Locally Adaptive Algorithms for ICA and their Implementations

The main purpose of this chapter is to describe and overview models, and to present a family of practical and efficient associated adaptive or locally adaptive learning algorithms, which have the advantages of efficiency and/or simplicity and straightforward electronic implementation. Some of the described algorithms are particularly advantageous for noisy, badly scaled or ill-conditioned signals. The developed algorithms are extended to the case when the number of sources and their statistics are unknown. Finally, the problem of the optimal choice of nonlinear activation functions, together with general local stability conditions, is also discussed. In particular, we focus on simple locally adaptive Hebbian/anti-Hebbian learning algorithms, and implementations using multi-layer neural networks are proposed.

Keywords: Modified Jutten-Herault algorithm, Robust local algorithms for ICA/BSS, Multi-layer network for ICA, Flexible ICA for unknown number of sources, Generalized EASI algorithms, Generalized stability conditions.

Chapter 8: Robust Techniques for BSS and ICA with Noisy Data

In this chapter we focus mainly on approaches to blind separation of sources when the measured signals are contaminated by large additive noise. We extend existing adaptive algorithms with equivariant properties in order to considerably reduce the bias caused by measurement noise in the estimation of the mixing and separating matrices. Moreover, we propose dynamical recurrent neural networks for simultaneous estimation of the unknown mixing matrix and source signals, and for reduction of noise in the extracted output signals. The optimal choice of nonlinear activation functions for various noise distributions, assuming a generalized-Gaussian noise model, is also discussed. Computer simulations of selected techniques are provided that confirm their usefulness and good performance. The main objective of this chapter is to present several approaches and derive learning algorithms that are more robust with respect to noise than the techniques described in the previous chapters, or that can reduce the noise in the estimated output vector of independent components.

Keywords: Bias removal techniques, Wiener filters with references, Convolutive noise, Noise cancellation and reduction, Cumulant-based cost functions and equivariant algorithms, Blind source separation with more sensors than sources, Robust extraction of arbitrary group of sources, Recurrent neural network for noisy data, Amari-Hopfield neural network.
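[One of the simplest bias-removal ideas named in the keywords can be sketched directly: if the additive noise is white with known (or estimated) variance sigma^2, its contribution sigma^2 I can be subtracted from the sample covariance before whitening. A toy illustration of that one step, mine rather than the book's:]

import numpy as np

rng = np.random.default_rng(3)

n, T, sigma = 3, 50000, 0.5
S = rng.laplace(size=(n, T))
A = rng.standard_normal((n, n))
X = A @ S + sigma * rng.standard_normal((n, T))   # noisy sensor signals

# The naive covariance is biased: E[xx^T] = A E[ss^T] A^T + sigma^2 I.
R_noisy = (X @ X.T) / T
R_unbiased = R_noisy - sigma**2 * np.eye(n)       # bias removal

# Whitening built from the bias-corrected covariance targets the
# noise-free mixture covariance rather than the noisy one.
lam, V = np.linalg.eigh(R_unbiased)
Q = np.diag(lam ** -0.5) @ V.T
print(np.round(Q @ R_unbiased @ Q.T, 3))          # ~ identity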
Chapter 9: Multichannel Blind Deconvolution: Natural Gradient Approach

The main objective of this chapter is to review and extend existing adaptive natural gradient algorithms for various multichannel blind deconvolution models. Blind separation/deconvolution of source signals has been a subject under consideration for more than two decades. There are significant potential applications of blind separation/deconvolution in various fields, for example, wireless telecommunication systems, sonar and radar systems, audio and acoustics, image enhancement and biomedical signal processing (EEG/MEG signals). In these applications, single or multiple unknown but independent temporal signals propagate through a mixing and filtering medium. The blind source separation/deconvolution problem is concerned with recovering independent sources from sensor outputs without assuming any a priori knowledge of the original signals, except certain statistical features. In this chapter, using various models and assumptions, we present relatively simple and efficient adaptive and batch algorithms for blind deconvolution and equalization for single-input/multiple-output (SIMO) and multiple-input/multiple-output (MIMO) dynamical minimum-phase and non-minimum-phase systems. The basic relationships between standard ICA/BSS (Independent Component Analysis and Blind Source Separation) and multichannel blind deconvolution are discussed in detail; they enable us to extend the algorithms derived in the previous chapters, in particular the natural gradient approaches for instantaneous mixtures, to convolutive dynamical models. We also derive a family of equivariant algorithms and analyze their stability and convergence properties. Furthermore, a Lie group and a Riemannian metric are introduced on the manifold of FIR filters and, using the isometry of the Riemannian metric, the natural gradient on the FIR manifold is described. Based on the minimization of mutual information, we then present a natural gradient algorithm for causal minimum-phase finite impulse response (FIR) multichannel filters. Using information back-propagation, we also discuss an efficient implementation of the learning algorithm for non-causal FIR filters. Computer simulations are also presented to illustrate the validity and good learning performance of the described algorithms.

Keywords: Basic models for blind equalization and multichannel deconvolution, Fractionally sampled systems, SIMO and MIMO models, Equalization criteria, Separation-deconvolution criteria, Relationships between BSS/ICA and multichannel blind deconvolution (MBD), Natural gradient algorithms for MBD, Information back-propagation.
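[The ICA/MBD relationship mentioned above can be made concrete: stacking time-delayed copies of the sensor signals turns a convolutive (FIR) mixture into an instantaneous mixture with a block-Toeplitz mixing matrix, to which instantaneous BSS machinery can then be applied. A small numerical check of this identity; the dimensions and data are my own sketch, not the book's code.]

import numpy as np

rng = np.random.default_rng(4)

m, n, P, L, T = 2, 3, 4, 5, 1000    # sources, sensors, filter taps, stacking depth, samples
H = rng.standard_normal((P, n, m))  # FIR mixing filters H_0 .. H_{P-1}
S = rng.laplace(size=(m, T))

# Convolutive mixture: x(t) = sum_p H_p s(t - p)
X = np.zeros((n, T))
for p in range(P):
    X[:, p:] += H[p] @ S[:, :T - p]

# Block-Toeplitz matrix mapping stacked delayed sources to stacked
# delayed observations: x_bar(t) = H_bar @ s_bar(t).
Hbar = np.zeros((n * L, m * (L + P - 1)))
for i in range(L):
    for p in range(P):
        Hbar[i*n:(i+1)*n, (i+p)*m:(i+p+1)*m] = H[p]

t = 100                             # any t large enough for full history
x_bar = np.concatenate([X[:, t - i] for i in range(L)])
s_bar = np.concatenate([S[:, t - j] for j in range(L + P - 1)])
print(np.allclose(x_bar, Hbar @ s_bar))   # True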
Chapter 10: Estimating Functions and Superefficiency for ICA and Deconvolution

Chapter 10 introduces the method of estimating functions to elucidate the common structure of most ICA/BSS and MBD algorithms. We use information geometry for this purpose, and define estimating functions in semiparametric statistical models which include unknown functions as parameters. Most existing algorithms differ only in their choices of estimating functions. We then give error analysis and stability analysis in terms of estimating functions. This makes it possible to design various adaptive methods for choosing the unknown parameters included in estimating functions, which control accuracy and stability. The Newton method is automatically derived from the standardized estimating functions. First, the standard BSS/ICA problem is formulated in the framework of the semiparametric model with a family of estimating functions. The chapter then discusses and further extends the convergence and efficiency of the batch estimator and of natural gradient learning for blind separation/deconvolution, via the semiparametric statistical model, estimating functions and standardized estimating functions derived using the efficient score functions elucidated recently by Amari et al. We present the geometrical properties of the manifold of FIR filters based on the Lie group structure, formulate the multichannel blind deconvolution problem within the framework of the semiparametric model, and derive a family of estimating functions for blind deconvolution. We then analyze the efficiency of the batch estimator based on estimating functions, obtaining its convergence rate. Finally, we show that both batch learning and on-line natural gradient learning are superefficient under given nonsingularity conditions.

Keywords: Estimating functions, Semiparametric statistical models, Superefficiency, Likelihood, Score functions, Batch estimator, Information geometry, Stability analysis.

Chapter 11: Blind Filtering and Separation Using a State-Space Approach

The state-space description of dynamical systems is a powerful and flexible generalized model for blind separation and deconvolution, or more generally for filtering and separation. There are several reasons why state-space models are advantageous for blind separation and filtering. Although transfer-function models in the z-domain or the frequency domain are equivalent to state-space models in the time domain for any linear, stable, time-invariant dynamical system, it is difficult to exploit the internal representation of real dynamical systems using transfer functions directly. The main advantage of the state-space description is that it not only gives an internal description of a system; there are also various equivalent canonical types of state-space realizations for a system, such as balanced realizations and observable canonical forms. In particular, it is possible to parameterize some specific classes of models which are of interest in applications. In addition, it is relatively easy to tackle the stability problem of state-space systems using the Kalman filter. Moreover, the state-space model enables a much more general description than the standard finite impulse response (FIR) convolutive filtering models discussed in Chapter 9. In fact, all the known filtering models, such as AR, MA, ARMA, ARMAX and Gamma filtering, can also be considered special cases of flexible state-space models. In this chapter, we briefly review adaptive learning algorithms based on the natural gradient approach and give some perspective and new insight into multiple-input multiple-output blind separation and filtering in the state-space framework.

Keywords: Linear basic state space model, Natural gradient algorithm for state space model, Estimation of output and state space matrices, Comparison of various algorithms, Kalman filter, Two-stage blind separation/filtering approach.

Chapter 12: Nonlinear State Space Models - Semi-Blind Signal Processing

In this chapter we attempt to extend and generalize the results discussed in the previous chapters to nonlinear dynamical models. However, the problem is not only very challenging but intractable in the general case without a priori knowledge about the nonlinear mixing and filtering process. Therefore, in this chapter we consider, very briefly, only some simplified nonlinear models. In addition, we assume that some information about the mixing and separating system and the source signals is available. In practice, special nonlinear dynamical models are considered in order to simplify the problem and solve it efficiently for specific applications. Specific examples include the Wiener model, the Hammerstein model and Nonlinear Autoregressive Moving Average models.

Keywords: Semi-blind separation and filtering, Wiener and Hammerstein models, Nonlinear Autoregressive Moving Average (NARMA) model, Hyper radial basis function (HRBF) neural network.

Appendix A: Mathematical Preliminaries

This appendix quickly reviews some of the mathematical background needed for a complete understanding of the text. Many useful definitions and formulas for matrix algebra and matrix differentiation are given.

Keywords: Matrix inverse update rules, Matrix differentiation, Differentiation of scalar cost functions with respect to a vector, Trace, Matrix differentiation of the trace of matrices, Matrix expectation, Properties of determinants, Moore-Penrose pseudo-inverse, Discrimination measures, Distance measures.
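[Two matrix-differentiation identities of the kind Appendix A collects, stated here for orientation (both are standard results; the second is the workhorse behind the log-likelihood derivations in the ICA chapters):]

\frac{\partial}{\partial \mathbf{W}} \operatorname{tr}(\mathbf{W}\mathbf{A}) = \mathbf{A}^{T},
\qquad
\frac{\partial}{\partial \mathbf{W}} \log \left| \det \mathbf{W} \right| = \left( \mathbf{W}^{T} \right)^{-1} = \mathbf{W}^{-T}.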
Appendix B: Glossary of Symbols and Abbreviations

Appendix B contains the list of basic symbols, notation and abbreviations used in the book.

REFERENCES

The list of references contains more than 1350 publications.

CD-ROM

The accompanying CD-ROM includes an electronic, interactive version of the book with hyperlinks, full-color figures and text. A black-and-white electronic version with hyperlinks is also provided. In addition, a user-friendly MATLAB demo package for running a family of ICA and BSS/BSE algorithms is included.

From terry at salk.edu  Fri May 31 20:19:39 2002
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 31 May 2002 17:19:39 -0700 (PDT)
Subject: NEURAL COMPUTATION 14:7
In-Reply-To: <200204021833.g32IXx834531@purkinje.salk.edu>
Message-ID: <200206010019.g510Jd343973@purkinje.salk.edu>

Neural Computation - Contents - Volume 14, Number 7 - July 1, 2002

ARTICLE

A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks
Javier R. Movellan, Paul Mineiro, R. J. Williams

NOTES

Are Visual Cortex Maps Optimized For Coverage?
Miguel A. Carreira-Perpinan and Geoffrey J. Goodhill

Kernel-Based Topographic Map Formation by Local Density Modeling
Marc M. Van Hulle

LETTERS

A Simple Model of Long-Term Spike Train Regularization
Relly Brandman and Mark E. Nelson

Spatiotemporal Spike Encoding of a Continuous External Signal
Naoki Masuda and Kazuyuki Aihara

Attractor Reliability Reveals Deterministic Structure in Neuronal Spike Trains
P.H.E. Tiesinga, J.-M. Fellous and Terrence J. Sejnowski

Traveling Waves of Excitation in Neural Field Models: Equivalence of Rate Descriptions and Integrate-and-Fire Dynamics
Daniel Cremers and Andreas V.M. Herz

Attentional Recruitment of Inter-Areal Recurrent Networks for Selective Gain Control
Richard H. R. Hahnloser, Rodney J. Douglas, and Klaus Hepp

CCCP Algorithms to Minimize the Bethe and Kikuchi Free Energies: Convergent Alternatives to Belief Propagation
A. L. Yuille

Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent
Nicol N. Schraudolph

Representation and Extrapolation in Multilayer Perceptrons
Antony Browne

Using Noise to Compute Error Surfaces in Connectionist Networks: A Novel Means of Reducing Catastrophic Forgetting
Robert French and Nick Chater

-----

ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES

                     USA      Canada*    Other Countries
Student/Retired      $60      $64.20     $108
Individual           $88      $94.16     $136
Institution          $506     $451.42    $554

* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889  FAX: (617) 577-1545  journals-orders at mit.edu

-----

From Dan  Thu May 30 12:16:24 2002
From: Dan (Dan)
Date: Thu, 30 May 2002 18:16:24 +0200
Subject: Use Cogprints!
Message-ID: 

Dear fellow-members of this list,

This is a personal message to every one of you. It is about making our papers available to one another and to other people interested in pragmatics. As most of you probably know, there is an easy-to-use, free, electronic self-archiving service, Cogprints, created by Stevan Harnad, where you can archive your own papers, whether published or not, refereed or not, and where you can, of course, read or download the papers of others. Cogprints has no competitor in its domain and is complementary to academic institutions' electronic archives. It describes itself as follows:

CogPrints is a service to two constituencies: For AUTHORS, it provides a way to make their pre-refereeing preprints and their refereed, published reprints available to the world scholarly and scientific community on a scale that is impossible in paper.
For READERS, it provides free worldwide access to the primary scholarly and scientific research literature on a scale that is likewise impossible in paper. CogPrints is an electronic archive for papers in any area of Psychology, Neuroscience, and Linguistics, and many areas of Computer Science (e.g., artificial intelligence, robotics, vision, learning, speech, neural networks), Philosophy (e.g., mind, language, knowledge, science, logic), Biology (e.g., ethology, behavioral ecology, sociobiology, behaviour genetics, evolutionary theory), Medicine (e.g., Psychiatry, Neurology, human genetics, Imaging), Anthropology (e.g., primatology, cognitive ethnology, archeology, paleontology), as well as any other portions of the physical, social and mathematical sciences that are pertinent to the study of cognition.

It has a Pragmatics category with 45 archived papers at present, but I am, I believe, the only one from this list to have put papers there. Just think of this: if all the researchers on this list were to archive a copy of their own papers (past, present and future) at Cogprints (whether or not they are already archived at another institutional or personal site), Francisco Yus' bibliographic service on RT would be complemented with a de facto relevance theory archive. Moreover, all our papers would reach a larger readership and be easily accessible to everyone, researchers, students, etc. around the world.

So I beseech you, yes YOU, to START ARCHIVING YOUR PAPERS AT COGPRINTS NOW! Go to http://cogprints.soton.ac.uk/ , look at the FAQ page http://cogprints.soton.ac.uk/faq.html and the help page http://cogprints.soton.ac.uk/help/ , register and start uploading! (Once you have learnt the routine, which may take you a good half hour, uploading a paper takes, in my experience, about 10 minutes.) I would like to see dozens of RT papers there in the coming weeks, and soon hundreds. Wouldn't you? Well, it is in YOUR hands.

Cheers,

Dan

-----------------------------
Dan Sperber
Institut Jean Nicod
http://www.institutnicod.org
1bis avenue de Lowendal
75007 Paris, France
web site: http://www.dan.sperber.com
------------------------------