From blair at it.uq.edu.au Wed Jan 7 00:09:11 1998
From: blair at it.uq.edu.au (Alan Blair)
Date: Wed, 7 Jan 1998 15:09:11 +1000 (EST)
Subject: Research positions available - Brisbane, Australia
Message-ID:

Project: Dynamical recognizers and complexity classes of languages
Keywords: neural networks, dynamical systems, language induction

The Cognitive Science group at The University of Queensland is seeking qualified applicants for projects in the area of neural networks and formal language learning.

1. Postdoctoral fellow or senior research assistant (1 year, with possible extension for a second year)

2. Research assistant (6 months, or half-time for 12 months)

Qualifications required: Excellent programming skills in Java (or C) and Matlab, experience with neural network simulations. Background in dynamical systems, formal language theory and/or linguistics would be an advantage. The positions may be suitable for masters or PhD students.

Location: The University of Queensland is located in Brisbane, Australia. Unfortunately, these grants are unable to provide travel funds, so successful applicants would be responsible for their own travel arrangements.
Applicants are encouraged to contact the principal investigators via email, and to send expressions of interest, current CV and names and contact details of three referees by 31 Jan 1998 to:

Dr Janet Wiles (away from email Jan 1-12)
Dr Alan Blair <blair at cs.uq.edu.au>
fax: +61 7 3365 1999
phone: +61 7 3365 2902

-------------------------------------------------------------
Dr Janet Wiles                                         _-_|\
Cognitive Science Group                               /  *
Dept of Computer Science & Electrical Engineering     \_.-._/
The University of Queensland, 4072 AUSTRALIA               v
http://www.cs.uq.edu.au/MENU/RESEARCH_GROUPS/csrg/cogsci.html
-------------------------------------------------------------

From tho at james.hut.fi Wed Jan 7 10:02:08 1998
From: tho at james.hut.fi (Timo Honkela)
Date: Wed, 7 Jan 1998 17:02:08 +0200 (EET)
Subject: Thesis: Self-Organizing Maps in Natural Language Processing
Message-ID:

The following Dr.Phil. thesis is available at

http://www.cis.hut.fi/~tho/thesis/honkela.ps.Z (compressed postscript)
http://www.cis.hut.fi/~tho/thesis/honkela.ps (postscript)
http://www.cis.hut.fi/~tho/thesis/ (html)

----------------------------------------------------------------------
SELF-ORGANIZING MAPS IN NATURAL LANGUAGE PROCESSING

Timo Honkela
Helsinki University of Technology
Neural Networks Research Centre
P.O.Box 2200 (Rakentajanaukio 2C)
FIN-02015 HUT, Finland
Timo.Honkela at hut.fi

Kohonen's Self-Organizing Map (SOM) is one of the most popular artificial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing. The approach based on the word category maps is compared with the methods that are widely used in artificial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining, for which a specific SOM-based method called WEBSOM has been developed. The WEBSOM method organizes a document collection on a map display that provides an overview of the collection and facilitates interactive browsing.

-------------------------------------------------------------
Timo Honkela        Timo.Honkela at hut.fi  http://www.cis.hut.fi/~tho/
Neural Networks Research Centre and Nat Lang Proc, Helsinki Univ of Technology
P.O.Box 2200, FIN-02015 HUT, Finland
Tel. +358-9-451 3275, Fax +358-9-451 3277
-------------------------------------------------------------

From becker at curie.psychology.mcmaster.ca Wed Jan 7 14:14:14 1998
From: becker at curie.psychology.mcmaster.ca (Sue Becker)
Date: Wed, 7 Jan 1998 14:14:14 -0500 (EST)
Subject: graduate training opportunities
Message-ID:

Neuroscience Graduate Training Opportunities

The Center for Neural Systems at McMaster University is a multidisciplinary research group whose faculty members span the departments of Psychology, Biology, Electrical and Computer Engineering, and Computer Science. Students in the CNS group register in the graduate program of their thesis advisor's home department. The CNS provides an exciting intellectual environment and excellent shared facilities for studying the nervous system at a variety of levels, ranging from molecular to systems and theoretical. Our laboratories employ the most advanced techniques, including neural pathway tracing, brain imaging, computational modelling, electrophysiology and genetics.
Application materials for Graduate Studies in Neuroscience are available on our web page, http://www.science.mcmaster.ca/Psychology/behave.comp.neuro.html or by writing to the Center for Neural Systems, McMaster University, Department of Psychology, 1280 Main Street West, Hamilton, Ontario.

From oby at cs.tu-berlin.de Thu Jan 8 11:12:01 1998
From: oby at cs.tu-berlin.de (Klaus Obermayer)
Date: Thu, 8 Jan 1998 17:12:01 +0100 (MET)
Subject: paper available
Message-ID: <199801081612.RAA26533@pollux.cs.tu-berlin.de>

Dear Connectionists,

The following tech-report and paper are available online:

--------------------------------------------------------------------
A Model for the Intracortical Origin of Orientation Preference and Tuning in Macaque Striate Cortex

Peter Adorjan^1, Jonathan B. Levitt^2, Jennifer S. Lund^2, and Klaus Obermayer^1

^1 CS Department, Technical University of Berlin, Berlin, Germany
^2 Institute for Ophthalmology, UCL, London, UK

We report results of numerical simulations for a model of orientation selectivity in macaque striate cortex. In contrast to previous models, where the initial orientation bias is generated by convergent geniculate input to simple cells and subsequently sharpened by lateral circuits, this approach is based on anisotropic intracortical excitatory connections which provide both the initial orientation bias and the subsequent amplification. Our study shows that the emerging response properties are similar to those observed experimentally; hence the hypothesis of an intracortical generation of orientation bias is a sensible alternative to the notion of an afferent bias created by convergent geniculocortical projection patterns. In contrast to models based on an afferent orientation bias, however, the ``intracortical hypothesis'' predicts that orientation tuning gradually evolves from an initially nonoriented response, and that orientation tuning is completely lost when the recurrent excitation is blocked. New experiments must be designed to decide unambiguously between the two hypotheses.

TU Berlin Technical Report, TR 98-1
http://kon.cs.tu-berlin.de/publications/#techrep

-------------------------------------------------------------------------
Development and Regeneration of the Retinotectal Map in Goldfish: A Computational Study

C. Weber^1, H. Ritter^2, J. Cowan^3, and K. Obermayer^1

^1 CS Department, Technical University of Berlin, Berlin, Germany
^2 Technische Fakultaet, University of Bielefeld, Germany
^3 Departments of Mathematics and Neurology, The University of Chicago, IL, USA

We present a simple computational model to study the interplay of activity-dependent and intrinsic processes thought to be involved in the formation of topographic neural projections. Our model consists of two input layers which project to one target layer. The connections between layers are described by a set of synaptic weights. These weights develop according to three interacting developmental rules: (i) an intrinsic fiber-target interaction which generates chemospecific adhesion between afferent fibers and target cells, (ii) an intrinsic fiber-fiber interaction which generates mutual selective adhesion between the afferent fibers, and (iii) an activity-dependent fiber-fiber interaction which implements Hebbian learning. Additionally, constraints are imposed to keep the synaptic weights finite. The model is applied to a set of eleven experiments on the regeneration of the retinotectal projection in goldfish. We find that the model is able to reproduce the outcome of an unprecedented range of experiments with the same set of model parameters, including details of the size of receptive and projective fields.
We expect this mathematical framework to be a useful tool for the analysis of developmental processes in general.

Phil. Trans. Roy. Soc. Lond. B 352, 1603-1623 (1997)
http://kon.cs.tu-berlin.de/publications/#journals

From bmg at numbat.cs.rmit.edu.au Thu Jan 8 05:27:38 1998
From: bmg at numbat.cs.rmit.edu.au (B Garner)
Date: Thu, 8 Jan 1998 21:27:38 +1100 (EST)
Subject: two papers
Message-ID: <199801081027.VAA03715@numbat.cs.rmit.edu.au>

These published papers are available at the following WWW sites:

http://yallara.cs.rmit.edu.au/~bmg/algA.rtf
http://yallara.cs.rmit.edu.au/~bmg/algB.rtf

**************************************************************************
A symbolic solution for adaptive feedforward neural networks found with a new training algorithm

B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia.

ABSTRACT

Traditional adaptive feedforward neural network (NN) training algorithms find numerical values for the weights and thresholds. In this paper it is shown that a NN composed of linear threshold gates (LTGs) can function as a fully trained neural network without finding numerical values for the weights and thresholds. This surprising result is demonstrated by presenting a new training algorithm for this type of NN that resolves the network into constraints which describe all the numeric values the NN's weights and thresholds can take. The constraints do not require a numerical solution for the network to function as a fully trained NN which can generalize. The solution is said to be symbolic, as a numerical solution is not required.

***************************************************************************
A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology

B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia.

ABSTRACT

There has been some interest in developing neural network training algorithms that determine their own architecture. A training algorithm for adaptive feedforward neural networks (NN) composed of Linear Threshold Gates (LTGs) is presented here that determines its own architecture and trains in a single pass. This algorithm produces what is said to be a symbolic solution, as it resolves the relationships between the weights and the thresholds into constraints which do not need to be solved numerically. The network has been shown to behave as a fully trained neural network which generalizes, and the possibility that the algorithm has polynomial time complexity is discussed. The algorithm uses binary data during training.

Bernadette

=============================================================================
Bernadette Garner                    He shall fall down a pit called Because,
bmg at numbat.cs.rmit.edu.au         and there he shall perish with the dogs
http://yallara.cs.rmit.edu.au/~bmg/  of Reason          - Aleister Crowley
=============================================================================

From reggia at cs.umd.edu Thu Jan 8 14:11:50 1998
From: reggia at cs.umd.edu (James A. Reggia)
Date: Thu, 8 Jan 1998 14:11:50 -0500 (EST)
Subject: Travel Fellowships, Neural Modeling Brain/Cognitive Disorders Meeting
Message-ID: <199801081911.OAA12824@avion.cs.umd.edu>

We are happy to announce that funding is expected for a few TRAVEL FELLOWSHIPS to THE SECOND INTERNATIONAL WORKSHOP ON NEURAL MODELING OF BRAIN AND COGNITIVE DISORDERS, held on June 4-6, 1998 at the University of Maryland, College Park, just outside of Washington, DC. Preference in awarding this travel support will be given to students, post-docs or residents with posters accepted for presentation at the meeting. The focus of this meeting will be on lesioning neural models to study disorders in neurology, neuropsychology and psychiatry, e.g., Alzheimer's disease, amnesia, aphasia, depression, epilepsy, neglect, parkinsonism, schizophrenia, and stroke.
Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single-page submission should include title, author(s), contact information (address and email/fax), and abstract. One-inch margins and a type size of at least 10 points should be used. Abstracts will be reviewed by the Program Committee; those accepted will be published in the workshop proceedings. Six copies of the camera-ready abstract should be mailed TO ARRIVE by February 3, 1998 to James Reggia, Dept. of Computer Science, A.V. Williams Bldg., University of Maryland, College Park, MD 20742 USA.

The latest information about the meeting can be found at http://www.cs.umd.edu/~reggia/workshop/

To receive registration materials (distributed most likely in February), please send your name, address, email address, phone number and fax number to Cecilia Kullman, UMIACS, A. V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. (Tel: (301) 405-0304, Fax: (301) 314-9658, email: cecilia at umiacs.umd.edu).

From bmg at numbat.cs.rmit.edu.au Fri Jan 9 06:38:03 1998
From: bmg at numbat.cs.rmit.edu.au (B Garner)
Date: Fri, 9 Jan 1998 22:38:03 +1100 (EST)
Subject: two papers
Message-ID: <199801091138.WAA18834@numbat.cs.rmit.edu.au>

A number of people have requested that the following published papers be made available in postscript format. They are now available at the WWW site:

http://yallara.cs.rmit.edu.au/~bmg/algA.ps
http://yallara.cs.rmit.edu.au/~bmg/algB.ps

I apologize if you have received multiple copies of this posting.

**************************************************************************
A symbolic solution for adaptive feedforward neural networks found with a new training algorithm

B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia.

***************************************************************************
A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology

B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia.
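The ``symbolic solution'' idea in Garner's abstracts above can be illustrated for a single linear threshold gate: training can be viewed as collecting one inequality per example, and any numeric weight/threshold pair satisfying those constraints reproduces the trained function. The toy below is only a sketch of that idea, not Garner's actual algorithm; all names in it are invented.

```python
# A single LTG computes: output = 1 iff w . x >= T.
# Instead of solving for numeric weights, we record one linear constraint
# per training example ("w.x >= T" for label 1, "w.x < T" for label 0).
# ANY (w, T) satisfying all constraints reproduces the training function.
# Toy target here: 2-input AND, with binary inputs as in the abstracts.

def constraints_from_examples(examples):
    # Each (x, label) pair simply becomes a symbolic constraint.
    return [(x, label) for x, label in examples]

def satisfies(w, T, constraints):
    # Check whether a candidate numeric assignment meets every constraint.
    for x, label in constraints:
        s = sum(wi * xi for wi, xi in zip(w, x))
        if label == 1 and not s >= T:
            return False
        if label == 0 and not s < T:
            return False
    return True

and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
cs = constraints_from_examples(and_examples)

# Many different numeric solutions satisfy the same symbolic constraints:
print(satisfies((1, 1), 2, cs))        # True
print(satisfies((0.6, 0.7), 1.2, cs))  # True
print(satisfies((1, 1), 1, cs))        # False: would fire on (0,1) and (1,0)
```

The point of the sketch is that the constraint set, not any particular numeric solution, is what the training data determines.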
Bernadette

=============================================================================
Bernadette Garner                    He shall fall down a pit called Because,
bmg at numbat.cs.rmit.edu.au         and there he shall perish with the dogs
http://yallara.cs.rmit.edu.au/~bmg/  of Reason          - Aleister Crowley
=============================================================================

From Melanie.Hilario at cui.unige.ch Fri Jan 9 08:34:20 1998
From: Melanie.Hilario at cui.unige.ch (Melanie Hilario)
Date: Fri, 09 Jan 1998 14:34:20 +0100
Subject: CFP: ECML'98 WS - Upgrading Learning to the Meta-Level
Message-ID: <34B6275C.2A73@cui.unige.ch>

[Our apologies if you receive multiple copies of this CFP]

Call for Papers

ECML'98 Workshop
UPGRADING LEARNING TO THE META-LEVEL: MODEL SELECTION AND DATA TRANSFORMATION

To be held in conjunction with the 10th European Conference on Machine Learning
Chemnitz, Germany, April 24, 1998
http://www.cs.bris.ac.uk/~cgc/ecml98-ws.html

Motivation and Technical Description

Over the past decade, machine learning (ML) techniques have successfully started the transition from research laboratories to the real world. The number of fielded applications has grown steadily, evidence that industry needs and uses ML techniques. However, most successful applications are custom-designed and the result of skillful use of human expertise. This is due, in part, to the large, ever-increasing number of available ML models, their relative complexity, and the lack of systematic methods for discriminating among them. Current data mining tools are only as powerful/useful as their users. They provide multiple techniques within a single system, but the selection and combination of these techniques are external to the system and performed by the user. This makes it difficult and costly for non-initiated users to access the much-needed technology directly.

The problem of model selection is that of choosing the appropriate learning method/model for a given application task. It is currently a matter of consensus that there are no universally superior models and methods for learning. The key question in model selection is not which learning method is better than the others, but under which precise conditions a given method is better than others for a given task. The problem of data transformation is distinct but inseparable from model selection. Data often need to be cleaned and transformed before applying (or even selecting) a learning algorithm. Here again, the hurdle is that of choosing the appropriate method for the specific transformation required. In both the learning and data pre-processing phases, users often resort to a trial-and-error process to select the most suitable model. Clearly, trying all possible options is impractical, and choosing the option that appears most promising often yields a sub-optimal solution. Hence, an informed search process is needed to reduce the amount of experimentation while avoiding the pitfalls of local optima. Informed search requires meta-knowledge, which is not available to non-initiated, industrial end-users.

Objectives and Scope

The aim of this workshop is to explore the different ways of acquiring and using the meta-knowledge needed to address the model selection and data transformation problems. For some researchers, the choice of learning and data transformation methods should be fully automated if machine learning and data mining systems are to be of any use to non-specialists. Others claim that full automation of the learning process is not within the reach of current technology. Still others doubt that it is even desirable. An intermediate solution is the design of assistant systems which aim less to replace the user than to help him make the right choices or, failing that, to guide him through the space of experiments. Whichever the proposed solution, there seems to be an implicit agreement that meta-knowledge should be integrated seamlessly into the learning tool.
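The trial-and-error model selection described above can be made concrete with a deliberately tiny sketch: candidate models are fitted on training data, and the one with the lowest held-out validation error is kept. The model family and the toy data below are invented for illustration and are not taken from the CFP.

```python
# Minimal model selection by held-out validation.
# Two candidate "models": predict the training mean, or fit a line.
# The meta-level decision is simply "keep the lowest validation error".

def fit_mean(train):
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train):
    # Ordinary least squares for y = a + b*x, via the closed form.
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def validation_error(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def select_model(candidates, train, valid):
    fitted = {name: fit(train) for name, fit in candidates.items()}
    errors = {name: validation_error(m, valid) for name, m in fitted.items()}
    best = min(errors, key=errors.get)
    return best, fitted[best]

# Toy data drawn from y = 2x + 1 (noise-free, so the result is deterministic).
train = [(x, 2 * x + 1) for x in range(8)]
valid = [(x + 0.5, 2 * (x + 0.5) + 1) for x in range(8)]
best, model = select_model({"mean": fit_mean, "linear": fit_linear}, train, valid)
print(best)  # -> linear
```

Exhaustive search over two candidates is trivial; the CFP's point is precisely that with many models and transformations this search needs meta-knowledge to stay tractable.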
This workshop is intended to bring together researchers who have attempted to use meta-level approaches to automate or guide decision-making at all stages of the learning process. One broad line of research is the static use of prior (meta-)knowledge. Knowledge-based approaches to model selection have been explored in both symbolic and neural network learning. For instance, prior knowledge of invariances has been used to select the appropriate neural network architecture for optical character recognition problems. Another research avenue aims at augmenting and/or refining meta-knowledge dynamically across different learning experiences. Meta-learning approaches have been attempted to automate model selection (as in VBMS and StatLog) as well as model arbitration and model combination (as in JAM). Contributions are sought on any of the above--or other--approaches from all main sub-fields of machine learning, including neural networks, symbolic machine learning and inductive logic programming. The results of this workshop will extend those of prior workshops, such as the ECML95 Workshop on Learning at the Knowledge Level and the ICML97 Workshop on Machine Learning Applications in the Real World, as well as complement those of the upcoming AAAI98/ICML98 Workshop on the Methodology of Applying Machine Learning.

Format and Schedule

The workshop will consist of one invited talk, a number of refereed contributions and small group discussions. The idea is to bring researchers together to present current work and identify future areas of research and development. This is intended to be a one-day workshop and the proposed schedule is as follows:

 9:00  Welcome
10:00  Paper session (5 x 30 mins)
12:30  Lunch
 1:30  Paper session (3 x 30 mins)
 3:00  Summary: the issues/the future
 3:15  Small group discussions (3-4 groups)
 4:00  Reports from each group
 4:45  Closing remarks
 5:00  End

Timetable

The following timetable will be strictly adhered to:

* Registration of interest: starting now (email to Christophe G-C; please specify intention to attend/intention to submit a paper)
* Submission of paper: 6 March 1998 (electronic postscript only, to either organiser: Christophe G-C, Melanie H)
* Notification of acceptance: 20 March 1998
* Camera-ready: 28 March 1998

Program Committee

Submitted papers will be reviewed by at least two independent referees from the following program committee:

Pavel Brazdil, University of Porto
Robert Engels, University of Karlsruhe
Dieter Fensel, University of Karlsruhe
Jean-Gabriel Ganascia, Universite Pierre et Marie Curie
Christophe Giraud-Carrier, University of Bristol
Ashok Goel, Georgia Institute of Technology
Melanie Hilario, University of Geneva
Igor Kononenko, University of Ljubljana
Dunja Mladenic, Josef Stefan Institute, Slovenia
Gholaremza Nakhaizadeh, Daimler-Benz
Ashwin Ram, Georgia Institute of Technology
Colin Shearer, Integrated Solutions Ltd
Walter van de Welde, Riverland Next Generation
Maarten van Someren, University of Amsterdam
Gerhard Widmer, Austrian Institute for Artificial Intelligence Research

Accepted papers will be published in the workshop proceedings and contributors will be allocated 30 minutes for an oral presentation during the workshop.
Organisers

Christophe Giraud-Carrier
Department of Computer Science
University of Bristol
Bristol, BS8 1UB
United Kingdom
Tel: +44-117-954-5145
Fax: +44-117-954-5208
Email: cgc at cs.bris.ac.uk

Melanie Hilario
Computer Science Department
University of Geneva
24, Rue General-Dufour
CH-1211 Geneva 4
Switzerland
Tel: +41-22-705-7791
Fax: +41-22-705-7780
Email: Melanie.Hilario at cui.unige.ch

From shimone at cogs.susx.ac.uk Fri Jan 9 07:03:59 1998
From: shimone at cogs.susx.ac.uk (Shimon Edelman)
Date: Fri, 9 Jan 1998 12:03:59 +0000 (GMT)
Subject: preprint on visual recognition and categorization
Message-ID:

---------------------------------------------------------------------
Visual recognition and categorization on the basis of similarities to multiple class prototypes

Sharon Duvdevani-Bar and Shimon Edelman

Available directly via FTP
ftp://eris.wisdom.weizmann.ac.il/pub/recog+categ.ps.Z
or via this Web page
http://eris.wisdom.weizmann.ac.il/~edelman/archive.html

Abstract: One of the difficulties of object recognition stems from the need to overcome the variability in object appearance caused by factors such as illumination and pose. The influence of these factors can be countered by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Difficulties of another kind arise in daily life situations that require categorization, rather than recognition, of objects. We show that, although categorization cannot rely on interpolation between stored examples, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes is computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
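The similarity-to-prototypes scheme in the abstract above can be illustrated with a toy classifier: each category is represented by a handful of prototype vectors, and a new instance is assigned to the category of its best-matching prototype. The Gaussian similarity measure and the 2-D feature space below are assumptions made for illustration, not necessarily what the paper uses.

```python
import math

def similarity(x, p, sigma=1.0):
    # Gaussian similarity between a feature vector and a prototype
    # (an assumed similarity measure; the paper may use another).
    d2 = sum((a - b) ** 2 for a, b in zip(x, p))
    return math.exp(-d2 / (2 * sigma ** 2))

def categorize(x, prototypes_by_category):
    # Score each category by its best-matching prototype; pick the top score.
    scores = {cat: max(similarity(x, p) for p in ps)
              for cat, ps in prototypes_by_category.items()}
    return max(scores, key=scores.get)

# Hypothetical prototypes in a 2-D "shape space", two per category.
prototypes = {
    "cat_A": [(0.0, 0.0), (1.0, 0.0)],
    "cat_B": [(5.0, 5.0), (6.0, 5.0)],
}
print(categorize((0.4, 0.2), prototypes))  # -> cat_A
```

No interpolation between stored examples is needed: a few representative prototypes per category suffice to place a novel instance, which is the abstract's central computational claim.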
---------------------------------------------------------------------
[get it now] Comments welcome.

-Shimon

Shimon Edelman, School of Cognitive and Computing Sciences
University of Sussex, Falmer, Brighton BN1 9QH, UK
http://www.cogs.susx.ac.uk/users/shimone  +44 1273 678659

From pazienza at info.utovrm.it Fri Jan 9 08:27:22 1998
From: pazienza at info.utovrm.it (Maria Teresa Pazienza)
Date: Fri, 9 Jan 1998 15:27:22 +0200
Subject: Job offer
Message-ID:

The AI group of the Department of Computer Science, Systems and Production, University of Rome Tor Vergata (ITALY), long involved in research on Lexical Acquisition, Machine Learning and the engineering of adaptive NLP systems, is looking for an experienced computer scientist interested in joining the AI group in Rome for an international (state-of-the-art research & development) project in the area of text processing for Information Extraction and Filtering. The project's specific goal is multilingual (English, Spanish and Italian) extraction of information from Web documents. The AI group that will host the candidate has a long-lasting tradition in the engineering of NLP systems, and is currently integrating its existing systems for Lexical Acquisition and Text Processing in Italian and English into an industrial prototype.

A qualified candidate should preferably have a PhD degree in Computer Science, Software Engineering or Computational Linguistics, with extensive programming experience in at least one of the following languages: C++, Java or Prolog. A strong background in the software engineering of large-scale text processing systems, and familiarity with innovative approaches to natural language (statistical as well as symbolic methods), are highly preferred. Although this is not a job for theoreticians only, a specific talent for research problems, experimental studies and familiarity with empirical methods in NL are a plus. The candidate should know UNIX very well and in general be a skilled programmer. Knowledge of Java programming under Windows NT is relevant, but not necessary.

The position corresponds to a contract with the University of Roma, Tor Vergata for (at least) one year (March 1998-March 1999): salary and conditions are equivalent to the position of a researcher in the University. To apply for this position, please contact/send a curriculum by fax or e-mail to:

Maria Teresa Pazienza
Department of Computer Science, Systems and Production
University of Roma, Tor Vergata
Via di Tor Vergata
00133 Roma (ITALY)
fax: +39 6 72597460
tel: +39 6 72597378
e-mail: pazienza at info.utovrm.it

--------------------------------------------------
prof. Maria Teresa Pazienza
Dept. of Computer Science, Systems and Production
University of Roma, Tor Vergata
Via di Tor Vergata
00133 ROMA (ITALY)
tel +39 6 72597378, fax +39 6 72597460
e_mail: pazienza at info.utovrm.it
http://babele.info.utovrm.it/
--------------------------------------------------

From zoubin at cs.toronto.edu Fri Jan 9 17:24:05 1998
From: zoubin at cs.toronto.edu (Zoubin Ghahramani)
Date: Fri, 9 Jan 1998 17:24:05 -0500
Subject: paper: Hierarchical Factor Analysis and Topographic Maps
Message-ID: <98Jan9.172413edt.1352@neuron.ai.toronto.edu>

The following paper is now available at:

ftp://ftp.cs.toronto.edu/pub/zoubin/nips97.ps.gz
http://www.cs.toronto.edu/~zoubin

----------------------------------------------------------------------
Hierarchical Non-linear Factor Analysis and Topographic Maps

Zoubin Ghahramani and Geoffrey E. Hinton
Department of Computer Science
University of Toronto

We first describe a hierarchical, generative model that can be viewed as a non-linear generalisation of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information.
We then show how to incorporate lateral connections into the generative model. The model extracts a sparse, distributed, hierarchical representation of depth from simplified random-dot stereograms, and the localised disparity detectors in the first hidden layer form a topographic map. When presented with image patches from natural scenes, the model develops topographically organised local feature detectors.

To appear in Jordan, M.I., Kearns, M.J., and Solla, S.A. (eds.), Advances in Neural Information Processing Systems 10. MIT Press: Cambridge, MA, 1998.

From kehagias at egnatia.ee.auth.gr Sat Jan 10 15:23:22 1998
From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias)
Date: Sat, 10 Jan 1998 12:23:22 -0800
Subject: new paper on Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems
Message-ID: <34B7D8BA.2804@egnatia.ee.auth.gr>

The following paper will appear in AUTOMATICA. While it is not strictly about neural networks, the presented analysis of credit assignment convergence, for a multi-model scheme, may be of interest to people working with modular neural networks, mixtures of experts and so on.

Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems
Authors: V. Petridis and Ath. Kehagias
Source: Automatica (to appear)
Link: http://skiron.control.ee.auth.gr/~kehagias/97epeke.htm

Abstract: Many methods have been developed to solve the problem of parameter estimation for dynamical systems (Ljung, 1987). Of particular interest is the case of on-line algorithms which are used to estimate time-varying parameters. Here we present such an algorithm which assumes a nonlinear dynamical system. The system is time-varying: its parameter changes values according to a Markovian model switching mechanism. The algorithm starts with a finite number of models, each corresponding to one of the parameter values, and selects the ``phenomenologically best'' parameter value, namely the one which produces the best fit to the observed behavior of the system. Our algorithm is related to the Partition Algorithm (PA) presented in (Hilborn & Lainiotis, 1969; Lainiotis, 1971; Lainiotis & Plataniotis, 1994; Sims, Lainiotis & Magill, 1969). The PA is suitable for the parameter estimation of a linear dynamical system with Gaussian noise in the input and output; no provision is made for model switching. Under these assumptions, an algorithm is developed for exact computation of the models' posterior probabilities; these are used for Maximum a Posteriori (MAP) estimation of the unknown parameter. This method has been used extensively in a number of applications, including parameter estimation and system identification (Kehagias, 1991; Lainiotis & Plataniotis, 1994; Petridis, 1981). Our algorithm is more general than the PA: it applies to nonlinear systems and requires no probabilistic assumptions regarding the noise. Furthermore, while there are several convergence studies of the PA without a switching mechanism (Anderson & Moore, 1979; Kehagias, 1991; Tugnait, 1980), as far as we know, the analysis presented here is the first one that handles the Markovian switching assumption. A rigorous convergence analysis is also presented.

Thanasis Kehagias
Research Associate, Dept. of Electrical and Computer Eng, Aristotle Un., Thessaloniki
Ass. Prof., Dept.
of Mathematics and Computer Sci., American College of Thessaloniki http://skiron.control.ee.auth.gr/~kehagias/index.htm From kehagias at egnatia.ee.auth.gr Sun Jan 11 23:03:05 1998 From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias) Date: Sun, 11 Jan 1998 20:03:05 -0800 Subject: correction to new paper URL Message-ID: <34B995F9.59FA@egnatia.ee.auth.gr> I apologize for reposting, but there was an error in the URL I gave in the announcement of my paper; it should be http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm The paper will appear in AUTOMATICA. While it is not strictly about neural networks, the presented analysis of credit assignment convergence, for a multimodel scheme, may be of interest for people working with modular neural networks, mixtures of experts and so on. Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Authors: V. Petridis and Ath. Kehagias Source: Automatica (to appear) Link: http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm Abstract: Many methods have been developed to solve the problem of parameter stimation for dynamical systems (Ljung, 1987). Of particular interest is the case of on-line algorithms which are used to estimate time-varying parameters. Here we present such an algorithm which assumes a nonlinear dynamical system. The system is time-varying: its parameter changes values according to a Markovian model switching mechanism. The algorithm starts with a finite number of models, each corresponding to one of the parameter values, and selects the ``phenomenologically best'' parameter value; namely the one which produces the best fit to the observed behavior of the system. Our algorithm is related to the Partition Algorith (PA) presented in (Hilborn & Lainiotis, 1969; Lainiotis, 1971; Lainiotis & Plataniotis, 1994; Sims, Lainiotis & Magill, 1969). 
PA is suitable for the parameter estimation of a linear dynamical system with Gaussian noise in the input and output; no provision is made for model switching. Under these assumptions, an algorithm is developed for exact computation of the models' posterior probabilities; these are used for Maximum a Posteriori (MAP) estimation of the unknown parameter. This method has been used extensively in a number of applications, including parameter estimation and system identification (Kehagias, 1991; Lainiotis & Plataniotis, 1994; Petridis, 1981). Our algorithm is more general than the PA: it applies to nonlinear systems and requires no probabilistic assumptions regarding the noise. Furthermore, while there are several convergence studies of the PA without a switching mechanism (Anderson & Moore, 1979; Kehagias, 1991; Tugnait, 1980), as far as we know, the analysis presented here is the first one that handles the Markovian switching assumption. A rigorous convergence analysis is also presented. Thanasis Kehagias, Research Associate, Dept. of Electrical and Computer Eng, Aristotle Un., Thessaloniki Ass. Prof., Dept. of Mathematics and Computer Sci., American College of Thessaloniki http://skiron.control.ee.auth.gr/~kehagias/index.htm -- Athanasios Kehagias, Research Associate, Dept. of Electrical and Computer Eng., Aristotle University of Thessaloniki, GR54006, Thessaloniki, GREECE and Assistant Professor, Dept. of Mathematics and Computer Science, American College of Thessaloniki, P.O. 
Box 21021, GR 55510 Pylea, Thessaloniki, GREECE home page: http://skiron.control.ee.auth.gr/~kehagias/index.htm email: kehagias at egnatia.ee.auth.gr, kehagias at ac.anatolia.edu.gr From blair at it.uq.edu.au Wed Jan 7 00:09:11 1998 From: blair at it.uq.edu.au (Alan Blair) Date: Wed, 7 Jan 1998 15:09:11 +1000 (EST) Subject: Research positions available - Brisbane, Australia Message-ID: Project: Dynamical recognizers and complexity classes of languages Keywords: neural networks, dynamical systems, language induction The Cognitive Science group at The University of Queensland is seeking qualified applicants for projects in the area of neural networks and formal language learning. 1. Postdoctoral fellow or senior research assistant (1 year with possible extension for a second year) 2. Research assistant (6 months or half-time for 12 months) Qualifications required: Excellent programming skills in Java (or C) and Matlab, experience with neural network simulations. Background in dynamical systems, formal language theory and/or linguistics would be an advantage. The positions may be suitable for masters or PhD students. Location: The University of Queensland is located in Brisbane, Australia. Unfortunately, these grants are unable to provide travel funds, so successful applicants would be responsible for their own travel arrangements. 
Applicants are encouraged to contact the principal investigators via email, and to send expressions of interest, current CV and names and contact details of three referees by 31 Jan 1998 to Dr Janet Wiles (away from email jan 1-12) Dr Alan Blair < blair at cs.uq.edu.au> fax: +61 7 3365 1999 phone: +61 7 3365 2902 ------------------------------------------------------------- Dr Janet Wiles _-_|\ Cognitive Science Group / * Dept of Computer Science & Electrical Engineering \_.-._/ The University of Queensland, 4072 AUSTRALIA v http://www.cs.uq.edu.au/MENU/RESEARCH_GROUPS/csrg/cogsci.html ------------------------------------------------------------- From tho at james.hut.fi Wed Jan 7 10:02:08 1998 From: tho at james.hut.fi (Timo Honkela) Date: Wed, 7 Jan 1998 17:02:08 +0200 (EET) Subject: Thesis: Self-Organizing Maps in Natural Language Processing Message-ID: The following Dr.Phil. thesis is available at http://www.cis.hut.fi/~tho/thesis/honkela.ps.Z (compressed postscript) http://www.cis.hut.fi/~tho/thesis/honkela.ps (postscript) http://www.cis.hut.fi/~tho/thesis/ (html) ---------------------------------------------------------------------- SELF-ORGANIZING MAPS IN NATURAL LANGUAGE PROCESSING Timo Honkela Helsinki University of Technology Neural Networks Research Centre P.O.Box 2200 (Rakentajanaukio 2C) FIN-02015 HUT, Finland Timo.Honkela at hut.fi Kohonen's Self-Organizing Map (SOM) is one of the most popular artificial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing. 
The approach based on the word category maps is compared with the methods that are widely used in artificial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining for which a specific SOM-based method called the WEBSOM has been developed. The WEBSOM method organizes a document collection on a map display that provides an overview of the collection and facilitates interactive browsing. ------------------------------------------------------------- Timo Honkela, Timo.Honkela at hut.fi, http://www.cis.hut.fi/~tho/ Neural Networks Research Centre and Nat Lang Proc, Helsinki Univ of Technology, P.O.Box 2200, FIN-02015 HUT, Finland Tel. +358-9-451 3275, Fax +358-9-451 3277 ------------------------------------------------------------- From becker at curie.psychology.mcmaster.ca Wed Jan 7 14:14:14 1998 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Wed, 7 Jan 1998 14:14:14 -0500 (EST) Subject: graduate training opportunities Message-ID: Neuroscience Graduate Training Opportunities The Center for Neural Systems at McMaster University is a multidisciplinary research group, whose faculty members span the departments of Psychology, Biology, Electrical and Computer Engineering, and Computer Science. Students in the CNS group register in the graduate program of their thesis advisor's home department. The CNS provides an exciting intellectual environment and excellent shared facilities for studying the nervous system at a variety of levels ranging from molecular to systems and theoretical. Our laboratories employ the most advanced techniques, including neural pathway tracing, brain imaging, computational modelling, electrophysiology and genetics.
Application materials for Graduate Studies in Neuroscience are available on our web page, http://www.science.mcmaster.ca/Psychology/behave.comp.neuro.html or by writing to the Center for Neural Systems, McMaster University, Department of Psychology, 1280 Main Street West, Hamilton, Ontario. From oby at cs.tu-berlin.de Thu Jan 8 11:12:01 1998 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Thu, 8 Jan 1998 17:12:01 +0100 (MET) Subject: paper available Message-ID: <199801081612.RAA26533@pollux.cs.tu-berlin.de> Dear Connectionists, The following tech-report and paper are available online: -------------------------------------------------------------------- A Model for the Intracortical Origin of Orientation Preference and Tuning in Macaque Striate Cortex Peter Adorjan^1, Jonathan B. Levitt^2, Jennifer S. Lund^2, and Klaus Obermayer^1 ^1 CS Department, Technical University of Berlin, Berlin, Germany, ^2 Institute for Ophthalmology, UCL, London, UK We report results of numerical simulations for a model of orientation selectivity in macaque striate cortex. In contrast to previous models, where the initial orientation bias is generated by convergent geniculate input to simple cells and subsequently sharpened by lateral circuits, this approach is based on anisotropic intracortical excitatory connections which provide both the initial orientation bias and the subsequent amplification. Our study shows that the emerging response properties are similar to the response properties which are observed experimentally, hence the hypothesis of an intracortical generation of orientation bias is a sensible alternative to the notion of an afferent bias by convergent geniculocortical projection patterns. 
In contrast to models based on an afferent orientation bias, however, the ``intracortical hypothesis'' predicts that orientation tuning gradually evolves from an initially nonoriented response, and that orientation tuning is lost completely when the recurrent excitation is blocked; new experiments must be designed to decide unambiguously between the two hypotheses. TU Berlin Technical Report, TR 98-1, http://kon.cs.tu-berlin.de/publications/#techrep ------------------------------------------------------------------------- Development and Regeneration of the Retinotectal Map in Goldfish: A Computational Study C. Weber^1, H. Ritter^2, J. Cowan^3, and K. Obermayer^1 ^1 CS Department, Technical University of Berlin, Berlin, Germany, ^2 Technische Fakultaet, University of Bielefeld, Germany, ^3 Departments of Mathematics and Neurology, The University of Chicago, IL, USA We present a simple computational model to study the interplay of activity dependent and intrinsic processes thought to be involved in the formation of topographic neural projections. Our model consists of two input layers which project to one target layer. The connections between layers are described by a set of synaptic weights. These weights develop according to three interacting developmental rules: (i) an intrinsic fiber-target interaction which generates chemospecific adhesion between afferent fibers and target cells, (ii) an intrinsic fiber-fiber interaction which generates mutual selective adhesion between the afferent fibers and (iii) an activity-dependent fiber-fiber interaction which implements Hebbian learning. Additionally, constraints are imposed to keep synaptic weights finite. The model is applied to a set of eleven experiments on the regeneration of the retinotectal projection in goldfish. We find that the model is able to reproduce the outcome of an unprecedented range of experiments with the same set of model parameters, including details of the size of receptive and projective fields.
We expect this mathematical framework to be a useful tool for the analysis of developmental processes in general. Phil. Trans. Roy. Soc. Lond. B 352, 1603-1623 (1997) http://kon.cs.tu-berlin.de/publications/#journals From bmg at numbat.cs.rmit.edu.au Thu Jan 8 05:27:38 1998 From: bmg at numbat.cs.rmit.edu.au (B Garner) Date: Thu, 8 Jan 1998 21:27:38 +1100 (EST) Subject: two papers Message-ID: <199801081027.VAA03715@numbat.cs.rmit.edu.au> These published papers are available at the following WWW sites http://yallara.cs.rmit.edu.au/~bmg/algA.rtf http://yallara.cs.rmit.edu.au/~bmg/algB.rtf ************************************************************************** A symbolic solution for adaptive feedforward neural networks found with a new training algorithm B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT Traditional adaptive feedforward neural network (NN) training algorithms find numerical values for the weights and thresholds. In this paper it is shown that a NN composed of linear threshold gates (LTGs) can function as a fully trained neural network without finding numerical values for the weights and thresholds. This surprising result is demonstrated by presenting a new training algorithm for this type of NN that resolves the network into constraints which describe all the numeric values the NN's weights and thresholds can take. The constraints do not require a numerical solution for the network to function as a fully trained NN which can generalize. The solution is said to be symbolic as a numerical solution is not required. *************************************************************************** A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT There has been some interest in developing neural network training algorithms that determine their own architecture.
A training algorithm for adaptive feedforward neural networks (NN) composed of Linear Threshold Gates (LTGs) is presented here that determines its own architecture and trains in a single pass. This algorithm produces what is said to be a symbolic solution as it resolves the relationships between the weights and the thresholds into constraints which need not be solved numerically. The network has been shown to behave as a fully trained neural network which generalizes, and the possibility that the algorithm has polynomial time complexity is discussed. The algorithm uses binary data during training. Bernadette ============================================================================= Bernadette Garner He shall fall down a pit called Because, and there bmg at numbat.cs.rmit.edu.au he shall perish with the dogs of Reason http://yallara.cs.rmit.edu.au/~bmg/ - Aleister Crowley ============================================================================= From reggia at cs.umd.edu Thu Jan 8 14:11:50 1998 From: reggia at cs.umd.edu (James A. Reggia) Date: Thu, 8 Jan 1998 14:11:50 -0500 (EST) Subject: Travel Fellowships, Neural Modeling Brain/Cognitive Disorders Meeting Message-ID: <199801081911.OAA12824@avion.cs.umd.edu> We are happy to announce that funding is expected for a few TRAVEL FELLOWSHIPS to THE SECOND INTERNATIONAL WORKSHOP ON NEURAL MODELING OF BRAIN AND COGNITIVE DISORDERS held on June 4 - 6, 1998 at the University of Maryland, College Park, just outside of Washington, DC. Preference will be given in awarding this travel support to students, post-docs or residents with posters accepted for presentation at the meeting. The focus of this meeting will be on lesioning neural models to study disorders in neurology, neuropsychology and psychiatry, e.g., Alzheimer's disease, amnesia, aphasia, depression, epilepsy, neglect, parkinsonism, schizophrenia, and stroke.
Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single page submission should include title, author(s), contact information (address and email/fax), and abstract. One inch margins and a typesize of at least 10 points should be used. Abstracts will be reviewed by the Program Committee; those accepted will be published in the workshop proceedings. Six copies of the camera-ready abstract should be mailed TO ARRIVE by February 3, 1998 to James Reggia, Dept. of Computer Science, A.V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. The latest information about the meeting can be found at http://www.cs.umd.edu/~reggia/workshop/ To receive registration materials (distributed most likely in February), please send your name, address, email address, phone number and fax number to Cecilia Kullman, UMIACS, A. V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. (Tel: (301) 405-0304, Fax: (301) 314-9658, and email: cecilia at umiacs.umd.edu). From bmg at numbat.cs.rmit.edu.au Fri Jan 9 06:38:03 1998 From: bmg at numbat.cs.rmit.edu.au (B Garner) Date: Fri, 9 Jan 1998 22:38:03 +1100 (EST) Subject: two papers Message-ID: <199801091138.WAA18834@numbat.cs.rmit.edu.au> A number of people have requested that the following published papers be made available in postscript format. They are now available at WWW site http://yallara.cs.rmit.edu.au/~bmg/algA.ps http://yallara.cs.rmit.edu.au/~bmg/algB.ps I apologize if you have received multiple copies of this posting. ************************************************************************** A symbolic solution for adaptive feedforward neural networks found with a new training algorithm B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. *************************************************************************** A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. (Abstracts identical to the earlier posting above.)
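The constraint-based training idea in the Garner abstracts can be illustrated with a small sketch. This is a hypothetical toy, not the algorithm from the papers: for a single linear threshold gate with weights w and threshold T, each binary training pair contributes one symbolic inequality (weighted sum >= T for target 1, < T for target 0), and the collected inequalities characterize every numeric assignment of weights and threshold consistent with the data, without ever solving for numbers.

```python
from itertools import product

def ltg_constraints(samples):
    """Collect symbolic constraints for one linear threshold gate.

    Each (inputs, target) pair with binary inputs yields one inequality
    relating the weighted sum to the threshold T; no numeric solution
    is computed. Illustrative sketch only.
    """
    constraints = []
    for x, t in samples:
        terms = " + ".join(f"w{i}*{xi}" for i, xi in enumerate(x))
        rel = ">=" if t == 1 else "<"
        constraints.append(f"{terms} {rel} T")
    return constraints

# Example: the constraint set for AND on two inputs.
and_samples = [((a, b), int(a and b)) for a, b in product((0, 1), repeat=2)]
for c in ltg_constraints(and_samples):
    print(c)
```

Any weights and threshold satisfying all four inequalities implement AND, which is the sense in which such a constraint set can stand in for a numerically trained gate.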
Bernadette ============================================================================= Bernadette Garner He shall fall down a pit called Because, and there bmg at numbat.cs.rmit.edu.au he shall perish with the dogs of Reason http://yallara.cs.rmit.edu.au/~bmg/ - Aleister Crowley ============================================================================= From Melanie.Hilario at cui.unige.ch Fri Jan 9 08:34:20 1998 From: Melanie.Hilario at cui.unige.ch (Melanie Hilario) Date: Fri, 09 Jan 1998 14:34:20 +0100 Subject: CFP: ECML'98 WS - Upgrading Learning to the Meta-Level Message-ID: <34B6275C.2A73@cui.unige.ch> [Our apologies if you receive multiple copies of this CFP] Call for Papers ECML'98 Workshop UPGRADING LEARNING TO THE META-LEVEL: MODEL SELECTION AND DATA TRANSFORMATION To be held in conjunction with the 10th European Conference on Machine Learning Chemnitz, Germany, April 24, 1998 http://www.cs.bris.ac.uk/~cgc/ecml98-ws.html Motivation and Technical Description Over the past decade, machine learning (ML) techniques have successfully started the transition from research laboratories to the real world. The number of fielded applications has grown steadily, evidence that industry needs and uses ML techniques. However, most successful applications are custom-designed and the result of skillful use of human expertise. This is due, in part, to the large, ever increasing number of available ML models, their relative complexity and the lack of systematic methods for discriminating among them. Current data mining tools are only as powerful/useful as their users. They provide multiple techniques within a single system, but the selection and combination of these techniques are external to the system and performed by the user. This makes it difficult and costly for non-initiated users to access the much needed technology directly. The problem of model selection is that of choosing the appropriate learning method/model for a given application task.
It is currently a matter of consensus that there are no universally superior models and methods for learning. The key question in model selection is not which learning method is better than the others, but under which precise conditions a given method is better than others for a given task. The problem of data transformation is distinct but inseparable from model selection. Data often need to be cleaned and transformed before applying (or even selecting) a learning algorithm. Here again, the hurdle is that of choosing the appropriate method for the specific transformation required. In both the learning and data pre-processing phases, users often resort to a trial-and-error process to select the most suitable model. Clearly, trying all possible options is impractical, and choosing the option that appears most promising often yields a sub-optimal solution. Hence, an informed search process is needed to reduce the amount of experimentation while avoiding the pitfalls of local optima. Informed search requires meta-knowledge, which is not available to non-initiated, industrial end-users. Objectives and Scope The aim of this workshop is to explore the different ways of acquiring and using the meta-knowledge needed to address the model selection and data transformation problems. For some researchers, the choice of learning and data transformation methods should be fully automated if machine learning and data mining systems are to be of any use to non specialists. Others claim that full automation of the learning process is not within the reach of current technology. Still others doubt that it is even desirable. An intermediate solution is the design of assistant systems which aim less to replace the user than to help him make the right choices or, failing that, to guide him through the space of experiments. Whichever the proposed solution, there seems to be an implicit agreement that meta-knowledge should be integrated seamlessly into the learning tool.
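The trial-and-error selection described above is, in practice, often approximated by cross-validated search over a small pool of candidates. The sketch below is purely illustrative (the candidate models and data are invented, and this is not one of the meta-learning systems discussed at the workshop): each candidate is scored by held-out accuracy and the best scorer is selected.

```python
def cross_val_accuracy(fit, predict, data, k=5):
    """Mean held-out accuracy of one candidate model over k folds."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        model = fit(train)
        scores.append(sum(predict(model, x) == y for x, y in test) / len(test))
    return sum(scores) / k

def select_model(candidates, data):
    """Return the name of the candidate with the best cross-validated score."""
    return max(candidates,
               key=lambda c: cross_val_accuracy(c[1], c[2], data))[0]

# Two toy candidates: always predict the majority class, or threshold the input.
def fit_majority(train):
    return int(sum(y for _, y in train) * 2 >= len(train))

def fit_threshold(train):
    return None  # no parameters to fit in this toy

candidates = [
    ("majority", fit_majority, lambda m, x: m),
    ("threshold", fit_threshold, lambda m, x: int(x > 0.5)),
]
data = [(i / 10, int(i / 10 > 0.5)) for i in range(10)]
print(select_model(candidates, data))  # the threshold rule fits perfectly
```

Exhaustive cross-validation over every option is exactly the impractical search the CFP points to; meta-knowledge is what would prune this candidate pool before any experiment is run.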
This workshop is intended to bring together researchers who have attempted to use meta-level approaches to automate or guide decision-making at all stages of the learning process. One broad line of research is the static use of prior (meta-)knowledge. Knowledge-based approaches to model selection have been explored in both symbolic and neural network learning. For instance, prior knowledge of invariances has been used to select the appropriate neural network architecture for optical character recognition problems. Another research avenue aims at augmenting and/or refining meta-knowledge dynamically across different learning experiences. Meta-learning approaches have been attempted to automate model selection (as in VBMS and StatLog) as well as model arbitration and model combination (as in JAM). Contributions are sought on any of the above--or other--approaches from all main sub-fields of machine learning, including neural networks, symbolic machine learning and inductive logic programming. The results of this workshop will extend those of prior workshops, such as the ECML95 Workshop on Learning at the Knowledge Level and the ICML97 Workshop on Machine Learning Applications in the Real World, as well as complement those of the upcoming AAAI98/ICML98 Workshop on the Methodology of Applying Machine Learning. Format and Schedule The workshop will consist of one invited talk, a number of refereed contributions and small group discussions. The idea is to bring researchers together to present current work and identify future areas of research and development. This is intended to be a one-day workshop and the proposed schedule is as follows. 
9:00 Welcome 10:00 Paper session (5 x 30mins) 12:30 Lunch 1:30 Paper session (3 x 30mins) 3:00 Summary: the issues/the future 3:15 Small group discussions (3-4 groups) 4:00 Reports from each group 4:45 Closing remarks 5:00 End Timetable The following timetable will be strictly adhered to: * Registration of interest: starting now (email to: Christophe G-C, please specify intention to attend/intention to submit a paper) * Submission of paper: 6 March 1998 (electronic postscript only to either organiser: Christophe G-C, Melanie H) * Notification of acceptance: 20 March 1998 * Camera-ready: 28 March 1998 Program Committee Submitted papers will be reviewed by at least two independent referees from the following program committee. Pavel Brazdil, University of Porto Robert Engels, University of Karlsruhe Dieter Fensel, University of Karlsruhe Jean-Gabriel Ganascia, Universite Pierre et Marie Curie Christophe Giraud-Carrier, University of Bristol Ashok Goel, Georgia Institute of Technology Melanie Hilario, University of Geneva Igor Kononenko, University of Ljubljana Dunja Mladenic, Jozef Stefan Institute, Slovenia Gholamreza Nakhaeizadeh, Daimler-Benz Ashwin Ram, Georgia Institute of Technology Colin Shearer, Integrated Solutions Ltd Walter Van de Velde, Riverland Next Generation Maarten van Someren, University of Amsterdam Gerhard Widmer, Austrian Institute for Artificial Intelligence Research Accepted papers will be published in the workshop proceedings and contributors will be allocated 30 minutes for an oral presentation during the workshop.
Organisers Christophe Giraud-Carrier Department of Computer Science University of Bristol Bristol, BS8 1UB United Kingdom Tel: +44-117-954-5145 Fax: +44-117-954-5208 Email: cgc at cs.bris.ac.uk Melanie Hilario Computer Science Department University of Geneva 24, Rue General-Dufour CH-1211 Geneva 4 Switzerland Tel: +41-22-705-7791 Fax: +41-22-705-7780 Email: Melanie.Hilario at cui.unige.ch From shimone at cogs.susx.ac.uk Fri Jan 9 07:03:59 1998 From: shimone at cogs.susx.ac.uk (Shimon Edelman) Date: Fri, 9 Jan 1998 12:03:59 +0000 (GMT) Subject: preprint on visual recognition and categorization Message-ID: --------------------------------------------------------------------- Visual recognition and categorization on the basis of similarities to multiple class prototypes Sharon Duvdevani-Bar and Shimon Edelman Available directly via FTP ftp://eris.wisdom.weizmann.ac.il/pub/recog+categ.ps.Z or via this Web page http://eris.wisdom.weizmann.ac.il/~edelman/archive.html Abstract: One of the difficulties of object recognition stems from the need to overcome the variability in object appearance caused by factors such as illumination and pose. The influence of these factors can be countered by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Difficulties of another kind arise in daily life situations that require categorization, rather than recognition, of objects. We show that, although categorization cannot rely on interpolation between stored examples, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes is computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies. 
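The similarity-to-prototypes scheme in the Duvdevani-Bar and Edelman abstract lends itself to a compact sketch. This is an editorial toy with invented feature vectors, not the representation used in the paper: a new instance is assigned to the category containing its most similar stored prototype.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def categorize(x, prototypes):
    """Assign x to the category whose prototypes it resembles most.

    `prototypes` maps a category name to a list of stored prototype
    vectors; similarity to a category is the best match over its members.
    """
    return max(prototypes,
               key=lambda c: max(cosine(x, p) for p in prototypes[c]))

# Hypothetical 3-d feature vectors for two categories.
prototypes = {
    "bird": [(1.0, 0.9, 0.1), (0.9, 1.0, 0.2)],
    "fish": [(0.1, 0.0, 1.0), (0.2, 0.1, 0.9)],
}
print(categorize((0.8, 0.7, 0.3), prototypes))  # closer to the bird prototypes
```

Storing several prototypes per category, rather than one average, is what lets such a scheme cover categories whose members vary widely, which is the point the abstract makes about categorization versus view interpolation.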
--------------------------------------------------------------------- [get it now] Comments welcome. -Shimon Shimon Edelman, School of Cognitive and Computing Sciences University of Sussex, Falmer, Brighton BN1 9QH, UK http://www.cogs.susx.ac.uk/users/shimone +44 1273 678659 From pazienza at info.utovrm.it Fri Jan 9 08:27:22 1998 From: pazienza at info.utovrm.it (Maria Teresa Pazienza) Date: Fri, 9 Jan 1998 15:27:22 +0200 Subject: Job offer Message-ID: The AI group of the Department of Computer Science, Systems and Production, University of Rome Tor Vergata (ITALY), long involved in research on Lexical Acquisition, Machine Learning and the Engineering of adaptive NLP systems, is looking for an experienced computer scientist interested in joining the AI group in Rome for an international (state-of-the-art research & development) project in the area of text processing for Information Extraction and Filtering. The project's specific goal is multilingual (English, Spanish and Italian) extraction of information from Web documents. The AI group that will host the candidate has a long lasting tradition in the engineering of NLP systems, and is currently integrating its existing systems for Lexical Acquisition and Text Processing in Italian and English into an industrial prototype. A qualified candidate should preferably have a PhD degree in Computer Science, Software Engineering or Computational Linguistics, with extensive programming experience in at least one of the following languages: C++, Java and Prolog. A strong background in software engineering of large-scale text processing systems and an aptitude for innovative approaches to natural language (statistical as well as symbolic methods) are highly preferred. Although this is not a job for theoreticians only, a specific talent for research problems, experimental studies and familiarity with empirical methods in NL is a plus. The candidate should know UNIX very well and in general be a skilled programmer.
Knowledge of Java programming under Windows NT is relevant, but not necessary. The position corresponds to a contract with the University of Roma, Tor Vergata for (at least) one year (March 1998-March 1999): salary and conditions are equivalent to the position of a researcher at the University. To apply for this position, please contact/send a curriculum by fax / e-mail to: Maria Teresa Pazienza Department of Computer Science, Systems and Production University of Roma, Tor Vergata Via di Tor Vergata 00133 Roma, (ITALY) fax : +39 6 72597460 tel : +39 6 72597378 e-mail: pazienza at info.utovrm.it -------------------------------------------------- prof. Maria Teresa Pazienza Dept. of Computer Science, Systems and Production University of Roma, Tor Vergata Via di Tor Vergata 00133 ROMA (ITALY) tel +39 6 72597378 fax +39 6 72597460 e_mail: pazienza at info.utovrm.it http://babele.info.utovrm.it/ -------------------------------------------------- From zoubin at cs.toronto.edu Fri Jan 9 17:24:05 1998 From: zoubin at cs.toronto.edu (Zoubin Ghahramani) Date: Fri, 9 Jan 1998 17:24:05 -0500 Subject: paper: Hierarchical Factor Analysis and Topographic Maps Message-ID: <98Jan9.172413edt.1352@neuron.ai.toronto.edu> The following paper is now available at: ftp://ftp.cs.toronto.edu/pub/zoubin/nips97.ps.gz http://www.cs.toronto.edu/~zoubin ---------------------------------------------------------------------- Hierarchical Non-linear Factor Analysis and Topographic Maps Zoubin Ghahramani and Geoffrey E. Hinton Department of Computer Science University of Toronto We first describe a hierarchical, generative model that can be viewed as a non-linear generalisation of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information.
We then show how to incorporate lateral connections into the generative model. The model extracts a sparse, distributed, hierarchical representation of depth from simplified random-dot stereograms, and the localised disparity detectors in the first hidden layer form a topographic map. When presented with image patches from natural scenes, the model develops topographically organised local feature detectors.

To appear in Jordan, M.I., Kearns, M.J., and Solla, S.A. (eds.), Advances in Neural Information Processing Systems 10. MIT Press: Cambridge, MA, 1998.

From kehagias at egnatia.ee.auth.gr Sat Jan 10 15:23:22 1998
From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias)
Date: Sat, 10 Jan 1998 12:23:22 -0800
Subject: new paper on Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems
Message-ID: <34B7D8BA.2804@egnatia.ee.auth.gr>

The following paper will appear in AUTOMATICA. While it is not strictly about neural networks, the presented analysis of credit assignment convergence, for a multi-model scheme, may be of interest to people working with modular neural networks, mixtures of experts and so on.

Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems
Authors: V. Petridis and Ath. Kehagias
Source: Automatica (to appear)
Link: http://skiron.control.ee.auth.gr/~kehagias/97epeke.htm

Abstract: Many methods have been developed to solve the problem of parameter estimation for dynamical systems (Ljung, 1987). Of particular interest is the case of on-line algorithms which are used to estimate time-varying parameters. Here we present such an algorithm which assumes a nonlinear dynamical system. The system is time-varying: its parameter changes values according to a Markovian model switching mechanism.
The algorithm starts with a finite number of models, each corresponding to one of the parameter values, and selects the ``phenomenologically best'' parameter value, namely the one which produces the best fit to the observed behavior of the system. Our algorithm is related to the Partition Algorithm (PA) presented in (Hilborn & Lainiotis, 1969; Lainiotis, 1971; Lainiotis & Plataniotis, 1994; Sims, Lainiotis & Magill, 1969). The PA is suitable for parameter estimation of a linear dynamical system with Gaussian noise in the input and output; no provision is made for model switching. Under these assumptions, an algorithm is developed for exact computation of the models' posterior probabilities; these are used for Maximum a Posteriori (MAP) estimation of the unknown parameter. This method has been used extensively in a number of applications, including parameter estimation and system identification (Kehagias, 1991; Lainiotis & Plataniotis, 1994; Petridis, 1981). Our algorithm is more general than the PA: it applies to nonlinear systems and requires no probabilistic assumptions regarding the noise. Furthermore, while there are several convergence studies of the PA without a switching mechanism (Anderson & Moore, 1979; Kehagias, 1991; Tugnait, 1980), as far as we know, the analysis presented here is the first one that handles the Markovian switching assumption. A rigorous convergence analysis is also presented.

Thanasis Kehagias, Research Associate, Dept. of Electrical and Computer Eng., Aristotle Un., Thessaloniki; Ass. Prof., Dept.
of Mathematics and Computer Sci., American College of Thessaloniki
http://skiron.control.ee.auth.gr/~kehagias/index.htm

From kehagias at egnatia.ee.auth.gr Sun Jan 11 23:03:05 1998
From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias)
Date: Sun, 11 Jan 1998 20:03:05 -0800
Subject: correction to new paper URL
Message-ID: <34B995F9.59FA@egnatia.ee.auth.gr>

I apologize for reposting, but there was an error in the URL I gave in the announcement of my paper; it should be http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm

The paper will appear in AUTOMATICA. While it is not strictly about neural networks, the presented analysis of credit assignment convergence, for a multi-model scheme, may be of interest to people working with modular neural networks, mixtures of experts and so on.

Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems
Authors: V. Petridis and Ath. Kehagias
Source: Automatica (to appear)
Link: http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm

Abstract: Many methods have been developed to solve the problem of parameter estimation for dynamical systems (Ljung, 1987). Of particular interest is the case of on-line algorithms which are used to estimate time-varying parameters. Here we present such an algorithm which assumes a nonlinear dynamical system. The system is time-varying: its parameter changes values according to a Markovian model switching mechanism. The algorithm starts with a finite number of models, each corresponding to one of the parameter values, and selects the ``phenomenologically best'' parameter value, namely the one which produces the best fit to the observed behavior of the system. Our algorithm is related to the Partition Algorithm (PA) presented in (Hilborn & Lainiotis, 1969; Lainiotis, 1971; Lainiotis & Plataniotis, 1994; Sims, Lainiotis & Magill, 1969).
PA is suitable for the parameter estimation of a linear dynamical system with Gaussian noise in the input and output; no provision is made for model switching. Under these assumptions, an algorithm is developed for exact computation of the models' posterior probabilities; these are used for Maximum a Posteriori (MAP) estimation of the unknown parameter. This method has been used extensively in a number of applications, including parameter estimation and system identification (Kehagias, 1991; Lainiotis & Plataniotis, 1994; Petridis, 1981). Our algorithm is more general than the PA: it applies to nonlinear systems and requires no probabilistic assumptions regarding the noise. Furthermore, while there are several convergence studies of the PA without a switching mechanism (Anderson & Moore, 1979; Kehagias, 1991; Tugnait, 1980), as far as we know, the analysis presented here is the first one that handles the Markovian switching assumption. A rigorous convergence analysis is also presented.

Thanasis Kehagias, Research Associate, Dept. of Electrical and Computer Eng., Aristotle Un., Thessaloniki; Ass. Prof., Dept. of Mathematics and Computer Sci., American College of Thessaloniki
http://skiron.control.ee.auth.gr/~kehagias/index.htm

--
Athanasios Kehagias, Research Associate, Dept. of Electrical and Computer Eng., Aristotle University of Thessaloniki, GR54006, Thessaloniki, GREECE
and Assistant Professor, Dept. of Mathematics and Computer Science, American College of Thessaloniki, P.O. Box 21021, GR 55510 Pylea, Thessaloniki, GREECE
home page: http://skiron.control.ee.auth.gr/~kehagias/index.htm
email: kehagias at egnatia.ee.auth.gr, kehagias at ac.anatolia.edu.gr

From jfgf at eng.cam.ac.uk Mon Jan 12 10:46:08 1998
From: jfgf at eng.cam.ac.uk (J.F.
Gomes De Freitas)
Date: Mon, 12 Jan 1998 15:46:08 +0000 (GMT)
Subject: Tech Report on regularisation in sequential learning
Message-ID:

Hi,

A technical report on regularisation in sequential learning is now available at ftp://svr-ftp.eng.cam.ac.uk/pub/reports/freitas_tr307.ps.gz (SVR - Cambridge). The paper covers topics such as Bayesian inference with hierarchical models, extended Kalman filtering, regularisation, adaptive learning rates and automatic relevance determination. It is a longer version of a recent NIPS publication; feedback would be greatly appreciated. I hope you find it interesting.

ABSTRACT: In this paper, we show that a hierarchical Bayesian modelling approach to sequential learning leads to many interesting attributes, such as regularisation and automatic relevance determination. We identify three inference levels within this hierarchy, namely model selection, parameter estimation and noise estimation. In environments where data arrives sequentially, techniques such as cross-validation for regularisation or model selection are not possible. The Bayesian approach, with extended Kalman filtering at the parameter estimation level, allows for regularisation within a minimum variance framework. A multi-layer perceptron is used to implement the extended Kalman filter's nonlinear measurement mapping. We describe several algorithms at the noise estimation level, which allow us to implement adaptive regularisation and automatic relevance determination of model inputs and basis functions. An important contribution of this paper is to show the theoretical links between adaptive noise estimation in extended Kalman filtering, multiple adaptive learning rates and multiple smoothing regularisation coefficients.
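The link the abstract draws between extended Kalman filtering and adaptive learning rates can be illustrated with a minimal sketch. Everything below (the one-layer tanh model, the `ekf_step` function, the noise constants) is an illustrative assumption, not the report's actual implementation: the network weights are treated as the state of a random-walk dynamical system, the network itself is the nonlinear measurement mapping, and the Kalman gain plays the role of a per-weight learning rate.

```python
import numpy as np

def ekf_step(w, P, x, y, Q, R):
    """One extended Kalman filter update, treating network weights as the state.

    State model:        w_k = w_{k-1} + noise,  noise ~ N(0, Q)
    Measurement model:  y_k = tanh(w_k . x_k) + noise,  noise ~ N(0, R)
    """
    P = P + Q                    # predict: random-walk state transition
    h = np.tanh(w @ x)           # predicted network output
    H = (1.0 - h ** 2) * x       # Jacobian of the measurement wrt the weights
    S = H @ P @ H + R            # innovation variance (scalar output)
    K = P @ H / S                # Kalman gain: a per-weight "learning rate"
    w = w + K * (y - h)          # correct the weights with the innovation
    P = P - np.outer(K, H @ P)   # shrink the weight uncertainty
    return w, P

# Fit a toy one-layer model y = tanh(w . x) to sequentially arriving data.
rng = np.random.default_rng(0)
w_true = np.array([0.8, -0.5])
w, P = np.zeros(2), np.eye(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = np.tanh(w_true @ x) + 0.01 * rng.normal()
    w, P = ekf_step(w, P, x, y, Q=1e-5 * np.eye(2), R=1e-2)
```

Raising the process noise Q lets the filter track time-varying weights faster at the cost of noisier estimates, which is the learning-rate/regularisation trade-off the report analyses at the noise estimation level.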
Thanks,
Nando de Freitas
_______________________________________________________________________________
JFG de Freitas (Nando)
Speech, Vision and Robotics Group
Information Engineering
Cambridge University CB2 1PZ
England
Tel (01223) 302323 (H) (01223) 332754 (W)
_______________________________________________________________________________

From Corinne.Ledoux at cts-fs1.du.se Mon Jan 12 12:51:56 1998
From: Corinne.Ledoux at cts-fs1.du.se (Corinne Ledoux)
Date: Mon, 12 Jan 1998 18:51:56 +0100
Subject: Special session on Neural Networks applied to transport
Message-ID: <3.0.1.32.19980112185156.007c4100@cts.du.se>

NIMES-98 Conference on Complex Systems, Intelligent Systems & Interfaces
25-27 May 1998, Nimes, France
Special session on Neural Networks and Transport
Call for papers

The 1st call for the Nimes-98 Conference on Complex Systems, Intelligent Systems & Interfaces, to be held in Nimes, 25-27 May 1998, has just been issued. A special session will concentrate on neural network techniques used to solve specific traffic problems in air, road, maritime and railway transport. For this special session, papers in the following topic areas are welcome:

* Real-time control systems: freeway and corridor control, incident detection and management, signalized junctions and networks, traffic control systems
* Vehicular navigation and control: vehicle location monitoring, vehicle control, driver behaviour modelling
* Planning and modelling techniques: traffic flow, simulation models, dynamic traffic models, forecasting
* Sensor data processing: data fusion, vehicle identification/classification, traffic pattern analysis
* Benefits of advanced technologies: safety impacts, environmental impacts

There is a two-stage reviewing procedure with the following schedule:

6 February 1998: Submission of abstracts (250 words) including title, author(s), affiliation(s).
The contact author must be identified with his or her complete affiliation, address, phone, fax and e-mail address. Seven copies of the abstract must be sent by post to the Secretariat.

23 February 1998: Notification of acceptance/rejection
23 March 1998: Submission of the full paper

Scientific Committee:
Gerard Scemama (INRETS, France)
Sophie Midenet (INRETS, France)
Corinne Ledoux (CTS, Sweden)
Mark Dougherty (CTS, Sweden)
Stephen G. Ritchie (University of California, Irvine)

Conference Secretariat:
EC2 & Developpement
51-59, Rue Ledru-Rolin
94200 Ivry-sur-Seine, France
Tel: +33 1 45 15 27 53
Fax: +33 1 45 15 27 54
e-mail: jeanclaude.rault at utopia.eunet.fr

___________________________________________________
Dr. Corinne Ledoux
CTS - Dalarna University
S 781 88 Borlange, Sweden
e-mail: Corinne.Ledoux at cts.du.se
Phone: +46 23 77 85 46
Fax: +46 23 77 85 01
http://www.du.se/cts
___________________________________________________

From rojas at inf.fu-berlin.de Wed Jan 14 10:29:00 1998
From: rojas at inf.fu-berlin.de (Raul Rojas)
Date: Wed, 14 Jan 98 10:29 MET
Subject: Call for participation IK-98
Message-ID:

LAST CALL FOR PARTICIPATION: IK-98
CALL FOR POSTERS
SECOND SPRING SCHOOL ON ARTIFICIAL INTELLIGENCE, NEURO- AND COGNITIVE SCIENCE
March 7 - March 14, 1998, Guenne am Moehnesee, Germany
http://www.tzi.uni-bremen.de/ik98/

IK-98 is a one-week intensive spring school on artificial intelligence and brain research. The courses are offered by researchers working in the fields of symbolic Artificial Intelligence, Neural Networks, Brain Sciences and Cognitive Science. The main topic of IK-98 is "Language and Communication". Several courses will deal with the neurological basis of speech, speech recognition, linguistic aspects, natural language processing, etc. We will have several invited talks dealing with brain imaging, connectionist simulation of speech acquisition, and intelligent agents.
We invite all participants of IK-98 to present their research results during the evening poster sessions. The main conference language is German (although some courses will be held in English). The program, with the courses to be offered, follows below.

INVITATION TO PARTICIPATE: 2nd INTERDISCIPLINARY COLLEGE (IK-98)
SPRING SCHOOL "INTELLIGENCE AND BRAIN"
March 7-14, 1998, Guenne am Moehnesee, Germany
http://www.tzi.uni-bremen.de/ik98/

>> What is the Interdisciplinary College?

The Interdisciplinary College (IK) is an intensive interdisciplinary spring school on the general theme "Intelligence and Brain". Its umbrella disciplines are neuroscience, cognitive science, artificial intelligence and neuroinformatics. Distinguished lecturers from these disciplines teach fundamentals, introduce methodological approaches and discuss current research questions. A coordinated spectrum of basic, theory and special courses, together with cross-disciplinary events, some including practical exercises, is aimed at students and researchers from both academia and industry.

In recent years there has been an interdisciplinary awakening in Germany. It reached a first high point in autumn 1996 at the conference "Wege ins Hirn" (http://www.hlrz.kfa-juelich.de/~peters/WegeInsHirn/), where it was also decided to establish the IK as the successor of the well-known AI spring schools (KIFS). The organizers included the German professional societies of the disciplines involved, which secured the IK as an institution. At the first IK in spring 1997 this spirit carried over to participants and lecturers alike; the courses and the atmosphere met with great, often enthusiastic, approval. The IK now takes place annually.
>> Organizers

IK-98 is organized by the AI division of the Gesellschaft fuer Informatik (GI) in cooperation with: FG Neuronale Netze of the GI; GMD - Forschungszentrum Informationstechnik GmbH; DFG Research Training Group "Signalketten in Lebenden Systemen"; European Neural Network Society (ENNS); German Chapter of the ENNS (GNNS); Gesellschaft fuer Kognitionswissenschaft e.V.; and Neurowissenschaftliche Gesellschaft e.V.

>> Venue

The conference venue is the family education centre "Heinrich-Luebke-Haus" in Guenne (Sauerland). The house lies secluded on the Moehnesee in the Arnsberger Wald nature park, and participants are accommodated on site. Everything encourages a concentrated, convivial exchange among participants, including in the evenings after the regular courses.

>> Main theme

The special focus of IK-98 is "Language and Communication", which several advanced courses will examine from the perspectives of different disciplines.

>> Poster gallery

In parallel with the courses, participants will have the opportunity to present their own research in an informal poster gallery.

>> Courses and lecturers

Basic courses:
G1 Neurobiology (Gerhard Roth)
G2 Artificial Neural Networks - Theory and Practice (Guenther Palm)
G3 Introduction to AI (Ipke Wachsmuth)
G4 Cognitive Systems - An Introduction to Cognitive Science (Gerhard Strube)

>> Theory courses:
T1 The Complex Real Neuron (Helmut Schwegler)
T2 Connectionist Speech Recognition (Herve Bourlard)
T3 Perception of Temporal Structures - Especially in Speech (Robert F.
Port)
T4 Language Structure - Brain Architecture; Language Processing - Brain Processes (Helmut Schnelle)
T5 Optimization Strategies for Neural Learning Methods (Helge Ritter)

>> Special courses:
S1 Hybrid Connectionist and Symbolic Approaches to Natural Language Processing (Stefan Wermter)
S2 Intelligent Agents for Multimedia Interfaces (Wolfgang Wahlster, Elisabeth Andre)
S3 Neurobiology of the Auditory System (Guenter Ehret)
S4 Language Production (Thomas Pechmann)

>> Cross-disciplinary courses:
D1 Fuzzy and Neuro Systems (Rudolf Kruse, Detlev Nauck)
D2 Temporal Cognition (Ernst Poeppel, Till Roenneberg)
D3 The Origins and Evolution of Language and Meaning (Luc Steels)
D4 Control of Movement in Biological Systems and Navigation of Mobile Robots (Josef Schmitz, Thomas Christaller)
D5 Optimizing Neural Networks through Learning and Evolution (Heinz Braun)
D6 Coordination of Language and Action (Wolfgang Heydrich, Hannes Rieser)
D7 Dynamics of Spiking Neurons and Temporal Coding (Andreas Herz)

>> Evening program

In visionary, fiery and/or bold after-dinner talks, outstanding researchers will invite controversy: Angela D. Friederici, Jerome Feldman, Robert F. Port, Hans-Dieter Burkhard.

>> Course materials

Written documentation for all courses will be handed out to all participants as a collected volume.

>> Advisory board

To make the aims of the Interdisciplinary College known and represented in the various German research communities, an advisory board of renowned scientists has been formed: Wolfgang Banzhaf, Wilfried Brauer, Armin B. Cremers, Christian Freksa, Otthein Herzog, Wolfgang Hoeppner, Hanspeter Mallot, Thomas Metzinger, Heiko Neumann, Hermann Ney, Guenther Palm, Ernst Poeppel, Wolfgang Prinz, Burghard Rieger, Helge Ritter, Claus Rollinger, Werner von Seelen, Hans Spada, Gerhard Strube, Helmut Schwegler, Ipke Wachsmuth, Wolfgang Wahlster.
>> Organizing committee

Thomas Christaller, Bernhard Froetschl, Christopher Habel, Herbert Jaeger, Anthony Jameson, Frank Pasemann, Bjoern-Olaf Peters, Annegret Pfoh, Raul Rojas (general chair), Gerhard Roth, Kerstin Schill, Werner Tack.

>> Conference office

Christine Harms, c/o GMD, Schloss Birlinghoven, D-53754 Sankt Augustin, phone 02241-14-2473, fax 02241-14-2472, email harms at gmd.de

>> Further information

Detailed information on the background and program of IK-98 is available on its homepage (http://www.tzi.uni-bremen.de/ik98/).

From juergen at idsia.ch Wed Jan 14 05:43:47 1998
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Wed, 14 Jan 1998 11:43:47 +0100
Subject: IDSIA: 1997 JOURNAL PUBLICATIONS; JOB OPENINGS
Message-ID: <199801141043.LAA00645@ruebe.idsia.ch>

This is a list of journal papers published or accepted during 1997, (co)authored by members of the Swiss AI research institute IDSIA. Many additional 1997 book chapters, conference papers etc. can be found on IDSIA's individual home pages: www.idsia.ch/people.html
________________________________________________________________________

1. P. Badeau (Univ. Blaise Pascal) & M. Gendreau, F. Guertin, J.-Y. Potvin (Univ. Montreal) & E. D. Taillard (IDSIA): A parallel tabu search heuristic for the vehicle routing problem with time windows. Transportation Research-C 5, 109-122, 1997. A parallel implementation of an adaptive memory programme. http://www.idsia.ch/~eric/articles.dir/crt95_84.ps.Z

2. M. Dorigo (IRIDIA) & L. M. Gambardella (IDSIA): Ant Colony System: A Cooperative Learning Approach to the TSP. IEEE Transactions On Evolutionary Computation, 1(1):53-66, 1997. ACS consists of cooperating ant-like agents. In TSP applications, ACS is compared to some of the best algorithms for symmetric and asymmetric TSPs. ftp://ftp.idsia.ch/pub/luca/papers/ACS-EC97.ps.gz

3. M. Dorigo (IRIDIA) & L. M. Gambardella (IDSIA): Ant Colony For the Traveling Salesman Problem.
BioSystems 43:73-81, 1997. ftp://ftp.idsia.ch/pub/luca/papers/ACS-BIO97.ps.gz

4. B. L. Golden (Univ. Maryland) & G. Laporte (Ecole des Hautes Etudes Commerciales de Montreal) & E. Taillard (IDSIA): An Adaptive Memory Heuristic For a Class of Vehicle Routing Problems with Minimax Objective. Computers & Operations Research 24, 1997, 445-452. A study of the capacitated vehicle routing problem (CVRP), CVRP with multiple use of vehicles (MUV), and the m-TSP with MUV. A novel method produces excellent solutions within reasonable time. http://www.idsia.ch/~eric/articles.dir/crt95_74.ps.Z

5. P. Hansen (Univ. Montreal) & N. Mladenovic (Univ. Montreal) & E. D. Taillard (IDSIA): Heuristic solution of the multisource Weber problem as a p-median problem. Accepted by Operations Research Letters, 1997. We examine a heuristic method that has been forgotten for more than 30 years. It is very appropriate for small to medium size multisource Weber problems. http://www.idsia.ch/~eric/articles.dir/localloc1.ps.Z

6. S. Hochreiter (TU Munich) & J. Schmidhuber (IDSIA): Flat Minima. Neural Computation, 9(1):1-43, 1997. An MDL-based, Bayesian argument suggests that flat minima of the error function are essential because they correspond to "simple", low-complexity neural nets and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. An efficient algorithm called "flat minimum search" outperforms other widely used methods on stock market prediction tasks. ftp://ftp.idsia.ch/pub/juergen/fm.ps.gz

7. S. Hochreiter (TU Munich) & J. Schmidhuber (IDSIA): Long Short-Term Memory. Neural Computation, 9(8):1681-1726. A novel recurrent net algorithm with update complexity O(1) per weight and time step. LSTM can solve hard problems unsolvable by previous neural net algorithms. ftp://ftp.idsia.ch/pub/juergen/lstm.ps.gz

8. R. P. Salustowicz (IDSIA) & J.
Schmidhuber (IDSIA): Probabilistic Incremental Program Evolution. Evolutionary Computation 5(2):123-141, 1997. A novel method for evolving programs by stochastic search in program space. Comparisons to "genetic programming", applications to partially observable environments. ftp://ftp.idsia.ch/pub/juergen/PIPE.ps.gz

9. R. P. Salustowicz (IDSIA) & M. Wiering (IDSIA) & J. Schmidhuber (IDSIA): Learning team strategies: soccer case studies. Machine Learning, accepted 1997. Multiagent learning: each soccer team's players share action set and policy. We compare TD-Q learning and Probabilistic Incremental Program Evolution (PIPE). ftp://ftp.idsia.ch/pub/juergen/soccer.ps.gz

10. J. Schmidhuber (IDSIA) & J. Zhao (IDSIA) & M. Wiering (IDSIA): Shifting Inductive Bias with Success-Story Algorithm, Adaptive Levin Search, and Incremental Self-Improvement. Machine Learning 28:105-130, 1997. We focus on searching program space and "learning to learn" in changing, partially observable environments. ftp://ftp.idsia.ch/pub/juergen/bias.ps.gz

11. J. Schmidhuber (IDSIA): Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability. Neural Networks 10(5):857-873, 1997. Review of basic concepts of Kolmogorov complexity theory relevant to machine learning. Toy experiments with a Levin search variant lead to better generalization performance than more traditional neural net algorithms. ftp://ftp.idsia.ch/pub/juergen/loconet.ps.gz

12. J. Schmidhuber (IDSIA): Low-Complexity Art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2):97-103, MIT Press, 1997. Low-complexity art is the computer-age equivalent of simple art: art with low Kolmogorov complexity. With example cartoons and attempts at using MDL to explain what's "beautiful". ftp://ftp.idsia.ch/pub/juergen/locoart.ps.gz

13. E. D. Taillard (IDSIA) & P. Badeau (Univ. Blaise Pascal) & M. Gendreau, F. Guertin, J.Y. Potvin (Univ.
Montreal): A tabu search heuristic for the vehicle routing problem with soft time windows. Transportation Science 31, 170-186, 1997. An efficient neighbourhood structure for vehicle routing problems, implemented in an adaptive memory programme for dealing with soft time windows. http://www.idsia.ch/~eric/articles.dir/crt95_66.ps.Z

14. M. Wiering (IDSIA) & J. Schmidhuber (IDSIA). HQ-Learning. Adaptive Behavior, 6(2), accepted 1997. A hierarchical extension of Q(lambda)-learning designed to solve certain types of partially observable Markov decision problems (POMDPs). HQ automatically decomposes POMDPs into sequences of simpler subtasks that can be solved by memoryless policies. ftp://ftp.idsia.ch/pub/juergen/hq.ps.gz

------------------------JOB OPENINGS AT IDSIA---------------------------

We have a few openings for postdocs, research associates, and outstanding PhD students, as well as one system manager position. See www.idsia.ch for details and application instructions. In case you are not interested in any of the current openings but would like to be considered for future ones, please send HARDCOPIES (no email!) of a statement of research interests plus an outline of your research project, CV, list of publications, and your 3 best papers. Also send a BRIEF email message listing the email addresses of three references. Please use your full name as the subject header. EXAMPLE: subject: John_Smith

We are looking forward to receiving your application!
_________________________________________________
Juergen Schmidhuber
research director
IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland
juergen at idsia.ch
http://www.idsia.ch/~juergen

From bert at mbfys.kun.nl Wed Jan 14 11:40:50 1998
From: bert at mbfys.kun.nl (Bert Kappen)
Date: Wed, 14 Jan 1998 17:40:50 +0100
Subject: Boltzmann Machine learning using mean field theory ...
Message-ID: <199801141640.RAA24864@bertus>

Dear Connectionists,

The following article will appear in the 1998 NIPS proceedings, edited by Michael Kearns.
This version contains some significant improvements over the earlier version.

Boltzmann Machine learning using mean field theory and linear response correction
written by (Hil)bert Kappen and Paco Rodrigues

We present a new approximate learning algorithm for Boltzmann Machines, using a systematic expansion of the Gibbs free energy to second order in the weights. The linear response correction to the correlations is given by the Hessian of the Gibbs free energy. The computational complexity of the algorithm is cubic in the number of neurons. We compare the performance of the exact BM learning algorithm with first order (Weiss) mean field theory and second order (TAP) mean field theory. The learning task consists of a fully connected Ising spin glass model on 10 neurons. We conclude that (1) the method works well for paramagnetic problems, (2) the TAP correction gives a significant improvement over the Weiss mean field theory, both for paramagnetic and spin glass problems, and (3) the inclusion of diagonal weights improves the Weiss approximation for paramagnetic problems, but not for spin glass problems.
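To give a concrete feel for this style of algorithm, here is a rough numerical sketch of a first-order (Weiss) mean-field learning step with a linear-response estimate of the correlations, for a fully visible network of +/-1 units. The function names, the damped fixed-point iteration and the learning rate are illustrative assumptions; the paper's actual algorithm, including the second-order (TAP) correction and the treatment of diagonal weights, differs in detail.

```python
import numpy as np

def mean_field_stats(W, theta, n_iter=200):
    """Weiss mean-field magnetizations and linear-response correlations."""
    n = len(theta)
    m = np.zeros(n)
    for _ in range(n_iter):                          # fixed point m = tanh(W m + theta)
        m = 0.5 * m + 0.5 * np.tanh(W @ m + theta)   # damped iteration for stability
    # Linear response: chi^{-1}_ij = delta_ij / (1 - m_i^2) - W_ij
    A = np.diag(1.0 / (1.0 - m ** 2)) - W
    chi = np.linalg.inv(A)                           # chi_ij ~ <s_i s_j> - m_i m_j
    C = chi + np.outer(m, m)                         # estimated second moments
    np.fill_diagonal(C, 1.0)                         # s_i^2 = 1 for +/-1 spins
    return m, C

def learn_step(W, theta, m_data, C_data, lr=0.05):
    """One approximate-gradient step matching model statistics to data statistics."""
    m, C = mean_field_stats(W, theta)
    dW = lr * (C_data - C)                           # <s_i s_j>_clamped - <s_i s_j>_free
    np.fill_diagonal(dW, 0.0)                        # no self-couplings here
    return W + dW, theta + lr * (m_data - m)

# Demo: fit the model's mean-field statistics to empirical spin statistics.
rng = np.random.default_rng(1)
S = rng.choice([-1.0, 1.0], size=(500, 4))           # +/-1 spin data
m_data, C_data = S.mean(axis=0), (S.T @ S) / len(S)
W, theta = np.zeros((4, 4)), np.zeros(4)
err_before = np.abs(C_data - mean_field_stats(W, theta)[1]).sum()
for _ in range(200):
    W, theta = learn_step(W, theta, m_data, C_data)
err_after = np.abs(C_data - mean_field_stats(W, theta)[1]).sum()
```

Because the correlations come from inverting an n x n matrix, each learning step costs O(n^3) in the number of neurons, consistent with the cubic complexity quoted in the abstract.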
This article can now be downloaded from ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NIPS.ps.Z

Best regards,
Hilbert Kappen

FTP INSTRUCTIONS

unix% ftp ftp.mbfys.kun.nl
Name: anonymous
Password: (use your e-mail address)
ftp> cd snn/pub/reports/
ftp> binary
ftp> get Kappen.LR_NIPS.ps.Z
ftp> bye
unix% uncompress Kappen.LR_NIPS.ps.Z
unix% lpr Kappen.LR_NIPS.ps

From harnad at cogsci.soton.ac.uk Wed Jan 14 15:47:58 1998
From: harnad at cogsci.soton.ac.uk (Stevan Harnad)
Date: Wed, 14 Jan 1998 20:47:58 +0000 (GMT)
Subject: Consciousness and Connectionism: BBS Call for Commentators
Message-ID:

Below is the abstract of a forthcoming BBS target article on:

A CONNECTIONIST THEORY OF PHENOMENAL EXPERIENCE
by Gerard O'Brien and John Opie

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to:

Behavioral and Brain Sciences
Department of Psychology
University of Southampton
Highfield, Southampton SO17 1BJ
UNITED KINGDOM

http://www.princeton.edu/~harnad/bbs/
http://www.cogsci.soton.ac.uk/bbs/
ftp://ftp.princeton.edu/pub/harnad/BBS/
ftp://ftp.cogsci.soton.ac.uk/pub/bbs/
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates.
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract.

____________________________________________________________________

A CONNECTIONIST THEORY OF PHENOMENAL EXPERIENCE

Gerard O'Brien and John Opie
Department of Philosophy
The University of Adelaide
South Australia 5005
AUSTRALIA

KEYWORDS: computation, connectionism, consciousness, dissociation, mental representation, phenomenal experience

ABSTRACT: When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys, or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches VEHICLE and PROCESS theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind: the theory that takes human cognition to be a species of symbol manipulation. Two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order.
First, a number of theorists have recently been highly critical of the experimental methodologies used in the dissociation studies -- so critical, in fact, that it is no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, computationalism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike computationalism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist-vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. This hypothesis leads us to reassess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
The URLs you can use to get to the BBS Archive:

    http://www.princeton.edu/~harnad/bbs/
    http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.obrien.html
    ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.obrien
    ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.obrien
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

To retrieve a file by ftp from an Internet site, type either:
    ftp ftp.princeton.edu
or
    ftp 128.112.128.1

When you are asked for your login, type:
    anonymous

Enter password as queried (your password is your actual userid:
yourlogin at yourhost.whatever.whatever - be sure to include the "@")

    cd /pub/harnad/BBS

To show the available files, type:
    ls

Next, retrieve the file you want with:
    get bbs.obrien

When you have the file(s) you want, type:
    quit

From harnad at coglit.soton.ac.uk Wed Jan 14 14:12:48 1998
From: harnad at coglit.soton.ac.uk (S.Harnad)
Date: Wed, 14 Jan 1998 19:12:48 GMT
Subject: Lexical Access: BBS Call for Commentators
Message-ID: <199801141912.TAA24108@amnesia.psy.soton.ac.uk>

Below is the abstract of a forthcoming BBS target article on:

    A THEORY OF LEXICAL ACCESS IN SPEECH PRODUCTION
    by Willem J.M. Levelt, Ardi Roelofs, and Antje S. Meyer

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ A THEORY OF LEXICAL ACCESS IN SPEECH PRODUCTION Willem J.M. Levelt Max Planck Institute for Psycholinguistics P.O. Box 310 6500 AH Nijmegen The Netherlands pim at mpi.nl Ardi Roelofs Max Planck Institute for Psycholinguistics P.O. Box 310 6500 AH Nijmegen The Netherlands Antje S. Meyer Max Planck Institute for Psycholinguistics P.O. Box 310 6500 AH Nijmegen The Netherlands KEYWORDS: speaking, lexical access, conceptual preparation, lexical selection, morphological encoding, phonological encoding, syllabification, articulation, self-monitoring, lemma, morpheme, phoneme, speech error, magnetic encephalography, readiness potential, brain imaging ABSTRACT: Preparing words in speech production is normally a fast and accurate process. 
We generate words at a rate of two or three per second in fluent conversation, and overtly naming a clear picture of an object can easily be initiated within 600 ms after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding and articulation itself. In addition, the speaker exerts some degree of output control by monitoring self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model called WEAVER++. Both the theory and the computational model have been developed in conjunction with reaction time experiments, particularly in picture naming and related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory also opens new ways of approaching the cerebral organization of speech production by way of high-resolution temporal imaging.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
The URLs you can use to get to the BBS Archive:

    http://www.princeton.edu/~harnad/bbs/
    http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.levelt.html
    ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.levelt
    ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.levelt
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

To retrieve a file by ftp from an Internet site, type either:
    ftp ftp.princeton.edu
or
    ftp 128.112.128.1

When you are asked for your login, type:
    anonymous

Enter password as queried (your password is your actual userid:
yourlogin at yourhost.whatever.whatever - be sure to include the "@")

    cd /pub/harnad/BBS

To show the available files, type:
    ls

Next, retrieve the file you want with:
    get bbs.levelt

When you have the file(s) you want, type:
    quit

From seung at physics.bell-labs.com Mon Jan 12 15:52:43 1998
From: seung at physics.bell-labs.com (Sebastian Seung)
Date: Mon, 12 Jan 1998 15:52:43 -0500
Subject: preprints available
Message-ID: <199801122052.PAA01030@heungbu.div111.lucent.com>

The following preprints are now available at http://www.bell-labs.com/user/seung

----------------------------------------------------------------------
Learning continuous attractors in recurrent networks
H. S. Seung

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn.
From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation. http://www.bell-labs.com/user/seung/papers/continuous.ps.gz [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)] ---------------------------------------------------------------------- Minimax and Hamiltonian dynamics of excitatory-inhibitory networks H. S. Seung, T. J. Richardson, J. C. Lagarias, and J. J. Hopfield A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics. http://www.bell-labs.com/user/seung/papers/minimax.ps.gz [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)] ---------------------------------------------------------------------- Learning generative models with the up-propagation algorithm J.-H. Oh and H. S. Seung Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits. http://www.bell-labs.com/user/seung/papers/up-prop.ps.gz [To appear in Adv. Neural Info. Proc. Syst. 
10 (1998)] ---------------------------------------------------------------------- The rectified Gaussian distribution N. D. Socci, D. D. Lee, and H. S. Seung A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds. http://www.bell-labs.com/user/seung/papers/rg.ps.gz [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)] ---------------------------------------------------------------------- Pattern analysis and synthesis in attractor neural networks H. S. Seung The representation of hidden variable models by attractor neural networks is studied. Memories are stored in a dynamical attractor that is a continuous manifold of fixed points, as illustrated by linear and nonlinear networks with hidden neurons. Pattern analysis and synthesis are forms of pattern completion by recall of a stored memory. Analysis and synthesis in the linear network are performed by bottom-up and top-down connections. In the nonlinear network, the analysis computation additionally requires rectification nonlinearity and inner product inhibition between hidden neurons. http://www.bell-labs.com/user/seung/papers/pattern.ps.gz [In Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, Proceedings of TANC'97. 
Springer-Verlag (1997)]

From act at uow.edu.au Thu Jan 15 04:34:27 1998
From: act at uow.edu.au (Ah Chung Tsoi)
Date: Thu, 15 Jan 1998 20:34:27 +1100 (EST)
Subject: Research Associate in Data Mining and Knowledge Discovery
Message-ID: <199801150934.UAA20863@wumpus.its.uow.edu.au>

The following advertisement will appear in The Australian, 21st January, 1998.

UNIVERSITY OF WOLLONGONG
FACULTY OF INFORMATICS
ARC Funded Research Associate in Data Mining and Knowledge Discovery
Three Year Fixed Term Appointment

Applications are called for from suitably qualified persons to participate in an Australian Research Council (ARC) funded position in data mining and knowledge discovery in collaboration with the Health Insurance Commission. This three-year project will carry out generic research into applying and developing a number of data mining and knowledge discovery techniques to study the behavioural patterns of Diagnostic Imaging practitioners. It is expected that the successful candidate will have completed a PhD, or be about to submit a thesis for a PhD, in one or more of the following areas: artificial neural networks; expert systems; data analysis; artificial intelligence; fuzzy systems; statistics; graphical models. Preference will be given to candidates who have some relevant postdoctoral experience. It is expected that the successful candidate will work closely with the Health Insurance Commission, in particular with medically qualified experts in Diagnostic Imaging. The successful candidate will be based in Wollongong, working with a dynamic team in neural networks and artificial intelligence in the Faculty of Informatics, under the direction of the Dean, Professor A. C. Tsoi. Dependent upon the qualifications and experience of the successful applicant, appointment will be as an Associate Fellow in the salary range A$42,413 to A$45,527.
Further information can be obtained by contacting Professor Ah Chung Tsoi, Dean, Faculty of Informatics, University of Wollongong; Email: ah_chung_tsoi at uow.edu.au, Phone: +61 2 42 21 38 43; Fax: +61 2 42 21 48 43. Closing date 13 February 1998. Applications should quote reference number CM98- contain details of qualifications, employment history, research interest, publications and names and addresses (including fax number or email address) of five referees and be forwarded to the Personnel Officer. Please mark envelope ``Confidential Appointment''. Mail address: University of Wollongong, Northfields Ave, Wollongong, NSW 2522 Australia. The University of Wollongong is an equal opportunity employer. ========================================= From weaveraj at helios.aston.ac.uk Thu Jan 15 09:18:06 1998 From: weaveraj at helios.aston.ac.uk (Andrew Weaver) Date: Thu, 15 Jan 1998 14:18:06 +0000 Subject: Full Time Lectureship Post, Aston University, UK Message-ID: <11722.199801151418@sun.aston.ac.uk> FULL TIME LECTURESHIP NEURAL COMPUTING RESEARCH GROUP ASTON UNIVERSITY, UK The group currently comprises five full time members of staff (David Lowe, Manfred Opper, David Saad, Ian Nabney and Chris Williams), 8 Postdoctoral Research Fellows, a Research Programmer, a Research Coordinator, 11 PhD students and 10 MSc research students. Current research contracts total approximately ukp1.5 million. We are seeking an additional highly motivated, enthusiastic individual to join our research team in the general areas of artificial neural networks, biomedical signal analysis, nonlinear pattern and time series processing and machine vision. The individual will also be expected to contribute to the graduate and undergraduate taught programmes. Further information on the activities and interests of the group may be obtained from the website http://www.ncrg.aston.ac.uk/ Terms of appointment will depend on the background and experience of particular candidates. 
The minimum period for which appointments are made is normally three years, with the possibility of renewal or transfer to continuing appointments. Salary will be in the range ukp16,045 to ukp27,985 per annum, and exceptionally ukp31,269 per annum (Lecturer Grade A & B), according to qualifications and experience. The appointment is part of a wider research expansion in the Electronic Engineering and Computer Science Division at Aston which includes `intelligent' databases, future internet technology and telecommunications network modelling. Candidates interested in this wider area are also encouraged to make further enquiries. Interested individuals should email a current C.V. including the contact details of at least three referees to:-- Professor David Lowe Head of Computer Science email: d.lowe at aston.ac.uk Neural Computing Research Group www: http://www.ncrg.aston.ac.uk/ Aston University tel: (+44/0) 121 333 4631 Aston Triangle fax: (+44/0) 121 333 4586 Birmingham B4 7ET UK Closing Date:- 12th March 1998. From hadley at cs.sfu.ca Thu Jan 15 16:16:40 1998 From: hadley at cs.sfu.ca (Bob Hadley) Date: Thu, 15 Jan 1998 13:16:40 -0800 (PST) Subject: paper available: "Connectionism, Novel Skill Combinations, Cognitive Architecture" Message-ID: <199801152116.NAA10281@css.cs.sfu.ca> FTP-host: ftp.fas.sfu.ca FTP-filename: pub/cs/hadley/skills.ps Total pages: 32 at 1.2 spacing. Connectionism and Novel Combinations of Skills: Implications for Cognitive Architecture by Robert F. Hadley Technical Report SFU CMPT TR 1998-01 ABSTRACT In the late 1980s, there were many who heralded the emergence of connectionism as a new paradigm -- one which would eventually displace the classically symbolic methods then dominant in AI and Cognitive Science. At present, there remain influential connectionists who continue to defend connectionism as a more realistic paradigm for modelling cognition, at all levels of abstraction, than the classical algorithmic methods of AI. 
Not infrequently, one encounters arguments along these lines: given what we know about neurophysiology, it is just not plausible to suppose that our brains possess an architecture anything like classical von Neumann machines. Our brains are not digital computers, and so, cannot support a classical architecture. In this paper, I advocate a middle ground. I assume, for argument's sake, that some form(s) of connectionism can provide reasonably approximate models -- at least for lower-level cognitive processes. Given this assumption, I argue on theoretical and empirical grounds that MOST human mental skills must reside in *separate* connectionist modules or ``sub-networks''. Ultimately, it is argued that the basic tenets of connectionism, in conjunction with the fact that humans often employ novel combinations of skill modules in rule following and problem solving, lead to the plausible conclusion that, in certain domains, high level cognition requires some form of classical architecture. During the course of argument, it emerges that only an architecture with classical structure could support the novel patterns of *information flow* and interaction that would exist among the relevant set of modules. Such a classical architecture might very well reside in the abstract levels of a hybrid system whose lower-level modules are purely connectionist. N.B. "Classical architecture" here derives from models found in computer science. My arguments are not the same as those given by Fodor and Pylyshyn, 1988. ----------------------------------------------------------------------- The paper, ``Connectionism and Novel Combinations ...'' can be obtained via ftp by doing the following: ftp ftp.fas.sfu.ca When asked for your name, type the word: anonymous When asked for a password, use your e-mail address. 
Then, you should change directories as follows:
    cd pub
    cd cs
    cd hadley
and then do a get, as in:
    get skills.ps
To exit from ftp, type:
    quit

From kirchmai at informatik.tu-muenchen.de Fri Jan 16 08:08:59 1998
From: kirchmai at informatik.tu-muenchen.de (Clemens Kirchmair)
Date: Fri, 16 Jan 1998 14:08:59 +0100 (MET)
Subject: WORKSHOP PROGRAM: Fuzzy-Neuro-Systems '98
Message-ID:

    ----------------------------------
    |    Fuzzy-Neuro Systems '98     |
    | - Computational Intelligence - |
    |                                |
    |   5th International Workshop   |
    |     March, 19 - 20, 1998       |
    ----------------------------------
     Technische Universitaet Muenchen

Gesellschaft fuer Informatik e.V.
Fachausschuss 1.2 "Inferenzsysteme"
Technische Universitaet Muenchen
Institut fuer Informatik

Fuzzy-Neuro Systems '98 is the fifth event of a well-established series of workshops with international participation. Its aim is to give an overview of the state of the art in research and development of fuzzy systems and artificial neural networks. Another aim is to highlight applications of these methods and to forge innovative links between theory and application by means of creative discussions. Fuzzy-Neuro Systems '98 is being organized by the Technical Committee 1.2 "Inference Systems" (Fachausschuss 1.2 "Inferenzsysteme") of the German Informatics Society GI (Gesellschaft fuer Informatik e. V.) and the Institut fuer Informatik, Technische Universitaet Muenchen, in cooperation with Siemens AG and with the support of Kratzer Automatisierung GmbH. The workshop takes place at the Technische Universitaet Muenchen in Munich from March 19 to 20, 1998.

PROGRAM
-------

Wednesday, March 18, 1998

18:00 Informal Get-Together
      Registration
21:00 End of reception and registration

Thursday, March 19, 1998

 8:00 Registration
 9:00 Formal Opening
      President, TU Muenchen
      Dekan, Institut fuer Informatik, TU Muenchen
      Workshop Chair
 9:15 Invited Lecture 1: Sets, Fuzzy Sets and Rough Sets
      Zdzislaw Pawlak, Warsaw University of Technology, Poland
      Chairman: W. Brauer, TU Muenchen
10:00 Session 1: Fuzzy Control
      Chairman: R. Isermann, TU Darmstadt
      - Indirect Adaptive Sugeno Fuzzy Control
        J. Abonyi, L. Nagy, S. Ferenc, University of Veszprem, Veszprem, Hungary
      - Simultaneous Creation of Fuzzy Sets and Rules for Hierarchical Fuzzy Systems
        R. Holve, FORWISS, Erlangen, Germany
10:50 Coffee break - Presentation of Posters
11:10 Session 2: Neural Networks for Classification
      Chairman: K. Obermayer, TU Berlin
      - Hybrid Systems for Time Series Classification
        C. Neukirchen, G. Rigoll, Gerhard-Mercator-Universitaet, Duisburg
      - How Parallel Plug-in Classifiers Optimally Contribute to the Overall System
        W. Utschick, J.A. Nossek, TU Muenchen
12:00 Invited Lecture 2: Is Readability Compatible with Accuracy?
      Hugues Bersini, Universite Libre de Bruxelles, Belgium
      Chairman: J. Hollatz, Siemens AG, Muenchen
12:45 Lunch
14:00 Session 3: Fuzzy Logic in Data Analysis
      Chairman: C. Freksa, Universitaet Hamburg
      - Fuzzy Topographic Kernel Clustering
        T. Graepel, K. Obermayer, TU Berlin
      - Dynamic Data Analysis: Similarity Between Trajectories
        A. Joentgen, L. Mikenina, R. Weber, H.-J. Zimmermann, RWTH Aachen
      - Spatial Reasoning with Uncertain Data Using Stochastic Relaxation
        R. Moratz, C. Freksa, Universitaet Hamburg
      - Noise Clustering For Partially Supervised Classifier Design
        C. Otte, P. Jensch, Universitaet Oldenburg
      - Fuzzy c-Mixed Prototypes Clustering
        C. Stutz, TU Muenchen; T.A. Runkler, Siemens AG, Muenchen
16:00 Coffee break - Presentation of Posters
16:30 Invited Lecture 3: Neural Network Architectures for Time Series Prediction with Applications to Financial Data Forecasting
      Hans-Georg Zimmermann, Siemens AG, Muenchen
      Chairman: R. Rojas, FU Berlin
17:15 Session 4: Fuzzy-Neuro Systems
      Chairman: R. Kruse, Universitaet Magdeburg
      - A Neuro-Fuzzy Approach to Feedforward Modeling of Nonlinear Time Series
        T. Briegel, V. Tresp, Siemens AG, Muenchen
      - A Learning Algorithm for Fuzzy Neural Nets
        T. Feuring, Westfaelische Wilhelms-Universitaet Muenster; James J. Buckley, University of Alabama at Birmingham, Birmingham, USA
      - Improving a priori Control Knowledge by Reinforcement Learning
        M. Spott, M. Riedmiller, Universitaet Karlsruhe
18:30 End of First Day
20:00 Conference Dinner

Friday, March 20, 1998

 9:00 Session 5: Applications
      Chairman: G. Nakhaeizadeh, Daimler Benz AG, Forschung + Technik, Ulm
      - Batch Recipe Optimization with Neural Networks and Genetic Algorithms
        K. Eder, Kratzer Automatisierung GmbH, Unterschleissheim
      - Robust Tuning of Power System Stabilizers by an Accelerated Fuzzy-Logic Based Genetic Algorithm
        M. Khederzadeh, Power and Water Institute of Technology, Tehran, Iran
      - Relating Chemical Structure to Activity: An Application of the Neural Folding Architecture
        T. Schmitt, C. Goller, TU Muenchen
      - Optimization of a Fuzzy System Using Evolutionary Algorithms
        Q. Zhuang, M. Kreutz, J. Gayko, Ruhr-Universitaet Bochum
10:40 Coffee break - Presentation of Posters
11:00 Invited Lecture 4: Advanced Fuzzy-Concepts and Applications
      Harro Kiendl, Universitaet Dortmund
      Chairman: K. Eder, Kratzer Automatisierung GmbH, Unterschleissheim
11:45 Session 6: Theory and Foundations of Fuzzy-Logic
      Chairman: P. Klement, Universitaet Linz, Austria
      - Rule Weights in Fuzzy Systems
        D. Nauck, R. Kruse, Universitaet Magdeburg
      - Sliding-Mode-Based Analysis of Fuzzy Gain Schedulers - The MIMO Case
        R. Palm, Siemens AG, Muenchen; D. Driankov, University of Linkoeping, Sweden
      - Qualitative Operators For Dealing With Uncertainty
        H. Seridi, Universite de Reims, France; F. Bannay-Dupin, Universite d'Angers, France; H. Akdag, Universite P. & M. Curie, Paris, France
13:00 Lunch
14:00 Session 7: Theory and Foundations of Neural Networks
      Chairman: A. Grauel, Universitaet Paderborn
      - Prestructured Recurrent Neural Networks
        T. Brychcy, TU Muenchen
      - Formalizing Neural Networks
        I. Fischer, University of Erlangen; M. Koch, Technical University of Berlin; M.R. Berthold, University of California, Berkeley, USA
      - Correlation and Regression Based Neuron Pruning Strategies
        M. Rychetsky, S. Ortmann, C. Labeck, M. Glesner, TU Darmstadt
15:15 Invited Lecture 5: Soft Computing: the Synergistic Interaction of Fuzzy, Neural, and Evolutionary Computation
      Piero P. Bonissone, General Electric Corporate R&D Artificial Intelligence Laboratory, Schenectady, USA
      Chairman: S. Gottwald, Universitaet Leipzig
16:00 Closing Remarks and Invitation to FNS'99

Posters
-------

- Comparing Fuzzy Graphs
  M.R. Berthold, University of California, Berkeley, USA; K.-P. Huber, Universitaet Karlsruhe
- A Numerical Approach to Approximate Reasoning via a Symbolic Interface. Application to Image Classification
  A. Borgi, H. Akdag, Universite P. & M. Curie, Paris, France; J.-M. Bazin, Universite de Reims, France
- Entropy-Controlled Probabilistic Search
  M. David, J. Gottlieb, I. Kupka, TU Clausthal
- Ensembles of Evolutionary Created Artificial Neural Networks
  C.M. Friedrich, Universitaet Witten/Herdecke
- Design and Implementation of a Flexible Simulation Tool for Hybrid Problem Solving
  H. Geiger, IBV and TU Muenchen; J. Pfalzgraf, K. Frank, T. Neuboeck, J. Weichenberger, Universitaet Salzburg, Austria; A. Buecherl, TU Muenchen
- A Fuzzy Invariant Indexing Technique for Object Recognition under Partial Occlusion
  T. Graf, A. Knoll, A. Wolfram, Universitaet Bielefeld
- Fuzzy Causal Networks
  R. Hofmann, V. Tresp, Siemens AG, Muenchen
- Dynamic Data Analysis: Problem Description And Solution Approaches
  A. Joentgen, L. Mikenina, R. Weber, H.-J. Zimmermann, RWTH Aachen
- Filtering and Compressing Information by Neural Information Processor
  R. Kamimura, Tokai University, Japan
- A Fuzzy Local Map with Asymmetric Smoothing Using Voronoi Diagrams
  B. Lang, Siemens AG, Muenchen
- Fuzzy Interface with Prior Concepts and Non-convex Regularization
  J.C. Lemm, Universitaet Muenster
- Modeling and Simulating a Time-Dependent Physical System Using Fuzzy Techniques and a Recurrent Neural Network
  A. Nuernberger, A. Radetzky, R. Kruse, Universitaet Magdeburg
- The Kohonen Network Incorporating Explicit Statistics and Its Application to the Traveling Salesman Problem
  B.J. Oommen, Carleton University, Ottawa, Canada
- Automated Feature Selection Strategies: An Experimental Comparison Improving Engine Knock Detection
  S. Ortmann, M. Rychetsky, M. Glesner, TU Darmstadt
- A Fuzzy-Neuro System for Reconstruction of Multi-Sensor Information
  S. Petit-Renaud, T. Deneux, Universite de Technologie de Compiegne, Compiegne, France
- RACE: Relational Alternating Cluster Estimation and the Wedding Table Problem
  T.A. Runkler, Siemens AG, Muenchen; J.C. Bezdek, University of West Florida, Pensacola, USA
- Neural Networks Handle Technological Information for Milling if Training Data is Carefully Preprocessed
  G. Schulz, D. Fichtner, A. Nestler, J. Hoffmann, TU Dresden
- Medically Motivated Testbed for Reinforcement Learning in Neural Architectures
  D. Surmeli, G. Koehler, H.-M. Gross, TU Ilmenau
- Adaptive Input-Space Clustering for Continuous Learning Tasks
  M. Tagscherer, P. Protzel, FORWISS, Erlangen
- A Criminalistic And Forensic Application Of Neural Networks
  A. Tenhagen, T. Feuring, W.-M. Lippe, G. Henke, H. Lahl, WWU-Muenster
- A Classical and a Fuzzy System Based Algorithm for the Simulation of the Waste Humidity in a Landfill
  M. Theisen, M. Glesner, TU Darmstadt
- FuNN, A Fuzzy Neural Logic Model
  R. Yasdi, GMD - Forschungszentrum Informationstechnik, Sankt Augustin
- An Efficient Model for Learning Systems of High-Dimensional Input within Local Scenarios
  J. Zhang, V. Schwert, Universitaet Bielefeld
- Optimization of a Fuzzy Controller for a Driver Assistant System
  Q. Zhuang, J. Gayko, M. Kreutz, Ruhr-Universitaet-Bochum

Program Committee
-----------------

Prof. Dr. W. Banzhaf, Universitaet Dortmund
Dr. M. Berthold, Universitaet Karlsruhe
Prof. Dr. Dr. h.c. W. Brauer, TU Muenchen (Chairman)
Prof. Dr. G. Brewka, Universitaet Leipzig
Dr. K. Eder, Kratzer Automatisierung GmbH, Unterschleissheim
Prof. Dr. C. Freksa, Universitaet Hamburg
Prof. Dr. M. Glesner, TU Darmstadt
Prof. Dr. S. Gottwald, Universitaet Leipzig
Prof. Dr. A. Grauel, Universitaet Paderborn/Soest
Prof. Dr. H.-M. Gross, TU Ilmenau
Dr. A. Guenter, Universitaet Bremen
Dr. J. Hollatz, Siemens AG, Muenchen
Prof. Dr. R. Isermann, TU Darmstadt
Prof. Dr. P. Klement, Universitaet Linz, Austria
Prof. Dr. R. Kruse, Universitaet Magdeburg (Vice Chairman)
Prof. Dr. B. Mertsching, Universitaet Hamburg
Prof. Dr. G. Nakhaeizadeh, Daimler Benz AG, Forschung + Technik, Ulm
Prof. Dr. K. Obermayer, TU Berlin
Prof. Dr. G. Palm, Universitaet Ulm
Dr. R. Palm, Siemens AG, Muenchen
Dr. L. Peters, GMD - Forschungszentrum Informationstechnik GmbH, Sankt Augustin
Prof. Dr. F. Pichler, Universitaet Linz, Austria
Dr. P. Protzel, FORWISS, Erlangen
Prof. Dr. B. Reusch, Universitaet Dortmund
Prof. Dr. Rigoll, Universitaet Duisburg
Prof. Dr. R. Rojas, Freie Universitaet Berlin
Prof. Dr. B. Schuermann, Siemens AG, Muenchen (Vice Chairman)
Prof. Dr. W. von Seelen, Universitaet Bochum
Prof. Dr. H. Thiele, Universitaet Dortmund
Prof. Dr. W. Wahlster, Universitaet Saarbruecken
Prof. Dr. H.-J. Zimmermann, RWTH Aachen

Organization Committee
----------------------

Prof. Dr. Dr. h.c. W. Brauer (Chairman)
Dieter Bartmann
Till Brychcy
Clemens Kirchmair
Technische Universitaet Muenchen
Tel.: 0 89/2 89-2 84 19
Fax: 0 89/2 89-2 84 83

Dr. Juergen Hollatz, Siemens AG, Muenchen (Vice Chairman)
Christine Harms, - ccHa -, Sankt Augustin

Conference Site
---------------

TU Muenchen
Barerstrasse 23
Entrance: Arcisstrasse
Lecture hall S0320
D-80333 Muenchen

Workshop Secretariat
--------------------

Christine Harms
c/o GMD / FNS'98
Schloss Birlinghoven
D-53754 Sankt Augustin
Tel.: ++49 2241 14-24 73
Fax: ++49 2241 14-24 72
email: christine.harms at gmd.de

Registration
------------

Please make your (binding) reservation by sending the enclosed registration form to the conference secretariat. Confirmation will be given after receipt of the registration form.
Conference Fees: (see registration form)
industry rate: 495,- DM
university rate: 345,- DM
GI members: 295,- DM
authors: 295,- DM
students (up to age of 26): 60,- DM (*)
*) excluding proceedings and conference dinner.

A surcharge of DM 100,- is payable for registration after February 18, 1998. Services of Gesellschaft fuer Informatik e. V. (GI) are VAT-free according to German law p. 4 Nr. 22a UStG.

Payment (see registration form)
-------
[ ] I have transferred the whole amount of DM________ to Gesellschaft fuer Informatik (GI), Sparkasse Bonn, Account No.: 39 479, Bankcode: 380 500 00, Ref: SK-Fuzzy-98
[ ] I enclose a Eurocheque amounting to DM________ made payable to Gesellschaft fuer Informatik
[ ] Please debit my [ ] Diners [ ] Visa [ ] Euro/Mastercard
Cardnumber: Expiration date: Cardholder:

Social events
-------------
Informal get-together: March 18, 1998, 18.00 - 21.00
Conference dinner: Thursday, March 19, 1998.

Accommodation
-------------
A limited number of rooms has been reserved at the FORUM/Penta Hotel at the special rate of
single room DM 175,-
double room DM 200,-

FORUM Hotel
Hochstrasse 3
D-81669 Muenchen

Cancellation
------------
If cancellation is received up to February 17, 1998, a 75% refund will be given. For cancellations received afterwards, no refunds can be guaranteed.

WWW-Homepage
------------
URL: http://wwwbrauer.informatik.tu-muenchen.de/~fns98/

----- snip, snip -----

Registration form for Fuzzy-Neuro Systems '98
---------------------------------------------
Please register me as follows

Conference Fees:
----------------
[ ] industry rate: 495,- DM
[ ] university rate: 345,- DM
[ ] GI member No. 295,- DM
[ ] authors: 295,- DM
[ ] students (up to age of 26): 60,- DM (*)
*) excluding proceedings and conference dinner

Accommodation:
--------------
I would like to make a binding reservation at the FORUM/Penta Hotel
[ ] single room DM 175,-
[ ] double room DM 200,- (together with ____________________________)
Arrival date ______________________________
Departure date ___________________________
Payment directly at the hotel. Hotel bookings have to be made by February 17, 1998. After that we cannot guarantee any bookings.

Conference dinner:
------------------
[ ] I intend to participate in the conference dinner
...... extra ticket for conference dinner DM 50,-.

Payment:
--------
[ ] I have transferred the whole amount of DM________ to Gesellschaft fuer Informatik (GI), Sparkasse Bonn, Account No.: 39 479, Bankcode: 380 500 00, Reference: SK-Fuzzy-98
[ ] I enclose a Eurocheque amounting to DM_________ made payable to Gesellschaft fuer Informatik
[ ] Please debit my [ ] Diners [ ] Visa [ ] Euro/Mastercard
Cardnumber:______________Expiration date:_________
Cardholder:_______________________________________

If cancellation is received up to February 17, 1998, a 75% refund will be given. For cancellations received afterwards, no refunds can be guaranteed.

Date:___________ Signature:__________________

Sender:
-------
Last Name (Mr. / Mrs. / Ms. Title): ________________________________________
First Name: ________________________________________
Affiliation: ________________________________________
Street/POB: ________________________________________
Zip/Postal Code/City: ________________________________________
Country: ________________________________________
Phone/Fax: ________________________________________
E-mail: ________________________________________

If you would like to take part in the workshop, please send the completed registration form to

Christine Harms
c/o GMD / FNS'98
Schloss Birlinghoven
D-53754 Sankt Augustin
Tel.: ++49 2241 14-24 73
Fax: ++49 2241 14-24 72
email: christine.harms at gmd.de

From bdevries at sarnoff.com Fri Jan 16 12:55:10 1998
From: bdevries at sarnoff.com (Aalbert De Vries x2456)
Date: Fri, 16 Jan 1998 12:55:10 -0500
Subject: NNSP98 Call for Papers
Message-ID: <34BF9EFE.D192D090@sarnoff.com>

CALL FOR PAPERS
===============

*=====================================================*
* THE 1998 IEEE SIGNAL PROCESSING SOCIETY WORKSHOP ON *
*                                                     *
*       NEURAL NETWORKS FOR SIGNAL PROCESSING         *
*=====================================================*

August 31 - September 3, 1998
Submission of extended summary : February 26, 1998
=================

Isaac Newton Institute for Mathematical Sciences, Cambridge, England

The 1998 IEEE Workshop on Neural Networks for Signal Processing is the seventh in the series of workshops. Cambridge is a historic town, housing one of the leading Universities and several research institutions. In the Summer it is a beautiful place and attracts a large number of visitors. It is easily reached by train and road from the airports in London. The combination of these makes it an ideal setting to host this workshop. The Isaac Newton Institute for Mathematical Sciences is based in Cambridge, adjoining the University and the Colleges. It was founded in 1992, and is devoted to the study of all branches of Mathematics.
The Institute runs programmes that last for up to six months on various topics in the mathematical sciences. Past programmes of relevance to this proposal include Computer Vision, Financial Mathematics and the current programme on Neural Networks and Machine Learning (July - December, 1997). One of the programmes at the Institute in July-December 1998 is Nonlinear and Nonstationary Signal Processing. Hence hosting this conference at the Institute will benefit the participants in many ways.

4. Accommodations

Accommodation will be at Robinson College, Cambridge. Robinson is one of the new Colleges in Cambridge, and uses its facilities to host conferences during the summer months. It can accommodate about 300 guests in comfortable rooms. The College is within walking distance of the Cambridge city center and the Newton Institute.

5. Organization

General Chairs
Prof. Tony CONSTANTINIDES (Imperial)
Prof. Sun-Yuan KUNG (Princeton)

Vice-Chair
Dr Bill Fitzgerald (Cambridge)

Finance Chair
Dr Christophe Molina (Anglia)

Proceedings Chair
Dr Elizabeth J. Wilson (Raytheon Co.)

Publicity Chairs
Dr Bert de Vries (Sarnoff)
Dr Jonathan Chambers (Imperial)

Program Chair
Dr Mahesan Niranjan (Cambridge)

Program Committee
Tulay ADALI
Andrew BACK
Jean-Francois CARDOSO
Bert DE VRIES
Lee GILES
Federico GIROSSI
Yu Hen HU
Jenq-Neng HWANG
Jan LARSEN
Yann LECUN
David LOWE
Christophe MOLINA
Visakan KADIRKAMANATHAN
Shigeru KATAGIRI
Gary KUHN
Elias MANOLAKOS
Mahesan NIRANJAN
Dragan OBRADOVIC
Erkki OJA
Kuldip PALIWAL
Lionel TARASSENKO
Volker TRESP
Marc VAN HULLE
Andreas WEIGEND

Papers describing original research are solicited in the areas described below. All submitted papers will be reviewed by members of the Programme Committee.

6. Technical Areas

Paradigms: artificial neural networks, Markov models, graphical models, dynamical systems, nonlinear signal processing, and wavelets

Application areas: speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition

Theories: generalization, design algorithms, optimization, probabilistic inference, parameter estimation, and network architectures

Implementations: parallel and distributed implementation, hardware design, and other general implementation technologies

7. Schedule

Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Dr. Mahesan Niranjan, Cambridge University Engineering Department, Cambridge CB2 1PZ, England, (Tel.) +44 1223 332720, (Fax.) +44 1223 332662, (e-mail) niranjan at eng.cam.ac.uk. More information relating to the workshop will be available at http://www.sarnoff.com/conferences/nnsp98.htm and http://www-svr.eng.cam.ac.uk/nnsp98.
Submissions to:

Dr Mahesan Niranjan
IEEE NNSP'98
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ
England

***** Important Dates ******

Submission of extended summary : February 26, 1998
Notification of acceptance : April 6, 1998
Submission of photo-ready accepted paper : May 3, 1998
Advanced registration, before : June 30, 1998

==============================================================

From geoff at giccs.georgetown.edu Fri Jan 16 20:58:11 1998
From: geoff at giccs.georgetown.edu (Geoff Goodhill)
Date: Fri, 16 Jan 1998 20:58:11 -0500
Subject: NIPS Preprints available
Message-ID: <199801170158.UAA00584@fathead.giccs.georgetown.edu>

The following 3 papers from Georgetown University will appear in the 1998 NIPS proceedings, and are now available from http://www.giccs.georgetown.edu/~alex and ~geoff respectively:

NEURAL BASIS OF OBJECT-CENTERED REPRESENTATIONS

Sophie Deneve and Alexandre Pouget
Georgetown Institute for Cognitive and Computational Sciences

We present a neural model that can perform eye movements to a particular side of an object regardless of the position and orientation of the object in space, a generalization of a task which has recently been used by Olson and Gettner to investigate the neural structure of object-centered representations. Our model uses an intermediate representation in which units have oculocentric receptive fields (just like collicular neurons) whose gain is modulated by the side of the object to which the movement is directed, as well as the orientation of the object. We show that these gain modulations are consistent with Olson and Gettner's single-cell recordings in the supplementary eye field. This demonstrates that it is possible to perform an object-centered task without a representation involving an object-centered map, viz., without neurons whose receptive fields are defined in object-centered coordinates.
We also show that the same approach can account for object-centered neglect, a situation in which patients with a right parietal lesion neglect the left side of objects regardless of the orientation of the objects. A MATHEMATICAL MODEL OF AXON GUIDANCE BY DIFFUSIBLE FACTORS Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences In the developing nervous system, gradients of target-derived diffusible factors play an important role in guiding axons to appropriate targets. In this paper, the shape that such a gradient might have is calculated as a function of distance from the target and the time since the start of factor production. Using estimates of the relevant parameter values from the experimental literature, the spatiotemporal domain in which a growth cone could detect such a gradient is derived. For large times, a value for the maximum guidance range of about 1 mm is obtained. This value fits well with experimental data. For smaller times, the analysis predicts that guidance over longer ranges may be possible. This prediction remains to be tested. GRADIENTS FOR RETINOTECTAL MAPPING Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences The initial activity-independent formation of a topographic map in the retinotectal system has long been thought to rely on the matching of molecular cues expressed in gradients in the retina and the tectum. However, direct experimental evidence for the existence of such gradients has only emerged since 1995. The new data has provoked the discussion of a new set of models in the experimental literature. Here, the capabilities of these models are analyzed, and the gradient shapes they predict in vivo are derived. 
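The axon-guidance abstract above turns on how steep a diffusive gradient is across a growth cone at a given distance from the target. As a rough numerical sketch (not the paper's actual derivation), the textbook solution for diffusion from a point source producing factor at a constant rate can be evaluated directly; the diffusion coefficient, production rate and growth-cone width below are illustrative assumptions, not values taken from the paper.

```python
import math

def concentration(r_cm, t_s, D=1e-6, q=1.0):
    """Concentration at distance r (cm) and time t (s) from a point source
    that produces factor at constant rate q from t = 0, with diffusion
    coefficient D (cm^2/s).  Standard result:
        C(r, t) = q / (4*pi*D*r) * erfc(r / (2*sqrt(D*t)))
    """
    return q / (4.0 * math.pi * D * r_cm) * math.erfc(r_cm / (2.0 * math.sqrt(D * t_s)))

def fractional_drop(r_cm, t_s, width_cm=10e-4):
    """Fractional concentration change across a growth cone of the given
    width (default 10 microns) -- a proxy for gradient detectability."""
    c_near = concentration(r_cm, t_s)
    c_far = concentration(r_cm + width_cm, t_s)
    return (c_near - c_far) / c_near

# Over this range the fractional change across a growth cone falls off with
# distance, so a fixed detection threshold (often taken to be a few percent)
# implies a maximum guidance range.
one_day = 24 * 3600.0
for r_mm in (0.5, 1.0, 2.0):
    drop = fractional_drop(r_mm * 0.1, one_day)  # 1 mm = 0.1 cm
    print(f"r = {r_mm} mm: fractional drop across growth cone = {drop:.4f}")
```

With these illustrative parameters the fractional drop shrinks as distance grows, which is the qualitative mechanism behind a maximum guidance range of the order of a millimeter.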
From cchang at cns.bu.edu Sat Jan 17 16:18:53 1998
From: cchang at cns.bu.edu (Carolina Chang)
Date: Sat, 17 Jan 1998 16:18:53 -0500 (EST)
Subject: CFP: Biomimetic Robotics
Message-ID:

CALL FOR PAPERS
---------------

Special session of ISIC/CIRA/ISAS'98 on

BIOMIMETIC ROBOTICS

Co-chairs: Carolina Chang and Paolo Gaudiano
Boston University Neurobotics Lab
Dept. of Cognitive and Neural Systems

September 14-17, 1998
Gaithersburg, Maryland, U.S.A.

SUBMISSION DEADLINE: February 27, 1998

http://neurobotics.bu.edu/conferences/CIRA98/

It has been argued that today's supercomputers are able to process information at a rate comparable to that of simple invertebrates. And yet, even ignoring physical constraints, no existing algorithm running on the fastest supercomputer could enable a robot to fly around a room, avoid obstacles, land upside down on the ceiling, feed, reproduce, and perform many of the other simple tasks that a housefly learns to perform without external training or supervision. The apparent simplicity with which flies and even much simpler biological organisms manage to survive in a constantly changing environment suggests that a potentially fruitful avenue of research is that of understanding the mechanisms adopted by biological systems for perception and control, and applying what is learned to robots. While we may not yet be able to make a computer function as flexibly as a housefly, there have been many promising starts in that direction. The goal of this special session is to present recent results in "biomimetic robotics", or the application of biological principles to robotics. The term "biological" in this case should be taken broadly to refer to any aspect of biological function, including, for example, psychological theories or detailed models of neural function. We are soliciting submissions that describe biomimetic applications in any branch of robotics. Preference will be given to applications that utilize real systems, be they robotic or biological.
SUBMISSION PROCEDURE
--------------------

All submissions must be made in electronic format (postscript or MS Word preferred) as described below. Submissions must be formatted as specified in the call for papers for the joint conference ISIC/CIRA/ISAS'98 (see http://isd.cme.nist.gov/proj/is98/index.html): Papers should be limited to 6 pages including abstract, figures, and tables (i.e., two column format, 10pt Times font, and 8.5x11" paper). Authors who plan to submit a paper by the February 27 deadline are encouraged to contact C. Chang or P. Gaudiano by electronic mail (cchang at bu.edu, gaudiano at bu.edu) as soon as possible. Notification of acceptance and the author's kit will be mailed by May 8, 1998. The full paper typed in camera-ready form must be received by June 12, 1998. Final instructions for camera-ready copy submission will be in the author's kit. To submit an electronic copy of your manuscript, please prepare a postscript or MS Word version of the paper, including all figures, and upload it to the anonymous ftp site (instructions below if needed):

ftp://neurobotics.bu.edu/pub/biomimetic

To expedite uploading, your document may be compressed using gzip, pkzip, winzip, or any other commonly used compression scheme. As soon as you have uploaded your file to our ftp site, please send e-mail to cchang at bu.edu indicating the filename and who will serve as the corresponding author, and include the title, the name of the author(s), affiliation, address, telephone number, fax, and e-mail address.

FTP UPLOADING INSTRUCTIONS
--------------------------

Connect to the neurobotics ftp server using the "ftp" command or using one of the Windows/Mac ftp programs. Use the login name "anonymous" or "ftp", and send your e-mail address as password. For instance, on a UNIX system you would do the following:

ftp neurobotics.bu.edu
Connected to neurobotics.bu.edu.
220 neurobotics.bu.edu FTP server (Version wu-2.4.2-academ[BETA-12](1) Wed Mar 5 12:37:21 EST 1997) ready.
Name (neurobotics.bu.edu:gaudiano): ftp
331 Guest login ok, send your complete e-mail address as password.
Password: (your e-mail)
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd pub/biomimetic
250 CWD command successful.
ftp> bin
200 Type set to I.
ftp> put your_file_name.ps.gz
local: your_file_name.ps.gz remote: your_file_name.ps.gz
200 PORT command successful.
150 Opening BINARY mode data connection for your_file_name.ps.gz .
226 Transfer complete.
2312 bytes sent in 0.0204 secs (1.1e+02 Kbytes/sec)
ftp> quit
221 Goodbye.

For Mac and Windows systems there are many ftp programs with a graphical user interface that should simplify this process. Please note that permissions are set in such a way that you cannot view the contents of the ftp directory even after you have uploaded your file. For additional information about this special session, please send e-mail to cchang at bu.edu or gaudiano at bu.edu. For all information about ISIC/CIRA/ISAS'98 please consult the conference web page at http://isd.cme.nist.gov/proj/is98/index.html

From smc at decsai.ugr.es Sun Jan 18 20:02:58 1998
From: smc at decsai.ugr.es (Serafin Moral)
Date: Mon, 19 Jan 1998 01:02:58 +0000
Subject: UAI'98 Second Call for Papers
Message-ID: <34C2A642.1340AEB9@decsai.ugr.es>

We apologize if you receive multiple copies of this message. Please distribute to interested persons.

*******************************************************************
NEW UPDATED INFORMATION ABOUT UAI-98 CONFERENCE
*******************************************************************

>>>> New revised deadline to receive full papers.
>>>> Length of submitted papers has been clarified.
For more details about the updates given below, please visit the UAI-98 WWW page at http://www.uai98.cbmi.upmc.edu

===========================================================
S E C O N D   C A L L   F O R   P A P E R S
===========================================================

** U A I - 98 **

THE FOURTEENTH ANNUAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE

July 24-26, 1998
University of Wisconsin Business School
Madison, Wisconsin, USA
=======================================

++++++++++++++++++++++++++++++
Important Dates
++++++++++++++++++++++++++++++

>> Abstract and paper submission data received by: Monday, February 23, 1998
>> Postscript files of the papers received by: Thursday, February 26, 1998
>> Notification of acceptance by: Friday, April 10, 1998
>> Camera-ready copy due: Friday, May 8, 1998
>> Conference dates: July 24, 25, 26, 1998
>> Advanced tutorials on Uncertain Reasoning: Monday, July 27, 1998

These deadlines are strict. The period for reviewer assignment has been reduced to a minimum, so deadline extensions will not be possible.

*************************************************************************

Submitted papers must be at most 20 pages of 12pt Latex article style or equivalent, which is approximately 7,000 words. Accepted papers will be limited to 8 pages (with two additional pages allowed for a fee) in the UAI proceedings style, which is available at ftp://decsai.ugr.es/pub/utai/other/smc/proceedings.sty for Latex users. An 8-page paper in the proceedings style with no figures has a word count that typically is in the range of 6,000 to 7,000 words.
The paper abstract and data should be sent using the electronic form at the following address: http://decsai.ugr.es/~smc/uai98/send.html

To submit a paper, send an electronic version of the paper (Postscript format) to the following address: uai98 at cbmi.upmc.edu

The subject line of this message should be: $.ps, where $ is an identifier created from the last name of the first author, followed by the first initial of the author's first name. Multiple submissions by the same first author should be indicated by adding a number (e.g., pearlj2.ps) to the end of the identifier. Authors unable to submit papers electronically should send 5 hard copies of the complete paper to one of the Program Chairs (for their postal addresses, see http://www.uai98.cbmi.upmc.edu).

*********************************************************************

Conference E-mail Address: uai98 at cbmi.upmc.edu
Program Co-chairs: Gregory F. Cooper and Serafin Moral
Conference Chair: Prakash P. Shenoy

From at at cogsci.soton.ac.uk Sun Jan 18 15:26:06 1998
From: at at cogsci.soton.ac.uk (Adriaan Tijsseling)
Date: Sun, 18 Jan 1998 20:26:06 +0000
Subject: PAPER on category learning in backprop nets.
Message-ID:

The following paper is available electronically from our web server:

http://www.soton.ac.uk/~coglab/simcat.ps.gz

========================================================================

Warping Similarity Space in Category Learning by Backprop Nets

Adriaan Tijsseling & Stevan Harnad
Cognitive Science Centre, University of Southampton, UK
http://www.soton.ac.uk/~coglab/

Presented at the "Interdisciplinary Workshop on Similarity and Categorisation", University of Edinburgh, November 1997.

Two previous neural network simulations of categorical perception (CP) have been replicated. The results have been subjected to more rigorous analysis.
The findings still support the claim that backpropagation networks exhibit CP effects, but the claim is weakened so as to incorporate the crucial role of the way auto-association training influences the organization of hidden-unit representations. This, however, appears to be more in accordance with CP effects in human subjects.

======================================================================

From terry at salk.edu Mon Jan 19 02:47:42 1998
From: terry at salk.edu (Terry Sejnowski)
Date: Sun, 18 Jan 1998 23:47:42 -0800 (PST)
Subject: Telluride Deadline Feb 1
Message-ID: <199801190747.XAA10560@helmholtz.salk.edu>

"NEUROMORPHIC ENGINEERING WORKSHOP"
JUNE 29 - JULY 19, 1998
TELLURIDE, COLORADO

*** Deadline for application is February 1, 1998 ***

Avis COHEN (University of Maryland)
Rodney DOUGLAS (University of Zurich and ETH, Zurich/Switzerland)
Christof KOCH (California Institute of Technology)
Terrence SEJNOWSKI (Salk Institute and UCSD)
Shihab SHAMMA (University of Maryland)

We invite applications for a three week summer workshop that will be held in Telluride, Colorado from Monday, June 29 to Sunday, July 19, 1998. The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html We strongly encourage interested parties to browse through these reports and photo albums.

GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems.
The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems.

FORMAT: The three week summer workshop will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Khoalas), hands-on projects, and special interest groups. Participants are required to take part in and possibly complete at least one of the projects proposed (soon to be defined). They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general.
Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons, and after dinner. The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the three weeks of the workshop. The first week covers basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with analog VLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems. We will also feature a tutorial on the use of small, mobile robots, focusing on Khoalas, as an ideal platform for vision, auditory and sensory-motor circuits. Projects that are carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generator, robotics, multichip communication, analog VLSI and learning. The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable.
The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPG's and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics. The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The workshop will take place at the Telluride Elementary School located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. 
However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshop will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, Macs and PCs running LINUX and Windows95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three week workshop.

FINANCIAL ARRANGEMENT: We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are expected to pay a $250 workshop fee. In cases of real hardship, this can be waived. Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from National Laboratories and Industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed.

HOW TO APPLY: The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply.

Applications should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

Complete applications should be sent to:

Terrence J. Sejnowski
The Salk Institute
10010 North Torrey Pines Road
San Diego, CA 92037
email: terry at salk.edu
FAX: (619) 587 0417

Applicants will be notified around March 15, 1998.

From leila at ida.his.se Mon Jan 19 06:00:56 1998
From: leila at ida.his.se (Leila Khammari)
Date: Mon, 19 Jan 1998 12:00:56 +0100
Subject: CFP ICANN 98
Message-ID: <34C33268.EC354B76@ida.his.se>

This is being mailed to multiple mailing lists. Please accept our apologies if you receive multiple copies.
_________________________________________________________________

CALL FOR PAPERS

8th INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS (ICANN 98)

September 2-4, 1998, Skoevde, Sweden

Submission deadline March 25, 1998.

http://www.his.se/ida/icann98/
_________________________________________________________________

INVITED SPEAKERS (to be completed)

John Barnden, University of Birmingham, UK
Chris Bishop, Microsoft Research, Cambridge, UK
Rodney Brooks, MIT, Cambridge, USA
Leif Finkel, University of Pennsylvania, USA
Phil Husbands, University of Sussex, UK
Teuvo Kohonen, Helsinki Univ. of Technology, Finland
David MacKay, Cavendish Laboratory, Cambridge, UK
Barak Pearlmutter, University of New Mexico, USA
Ulrich Rueckert, Universitaet Paderborn, Germany
David Rumelhart, Stanford University, USA
Bernd Schuermann, Siemens, Germany
_________________________________________________________________

ICANN 98, the 8th International Conference on Artificial Neural Networks, is held 2-4 September 1998 in Skoevde, Sweden. ICANN, the conference series of the European Neural Network Society and Europe's premier meeting in the field, sets out to extensively cover the whole spectrum of ANN-related research, ranging from industrial applications to biological systems.
Carrying the heritage from Helsinki (1991), Brighton (1992), Amsterdam (1993), Sorrento (1994), Paris (1995), Bochum (1996) and Lausanne (1997), ICANN 98 includes a qualified scientific program, consisting of oral and poster presentations, a special 'Industry and Research' panel session, an exhibition with products related to ANNs, a set of tutorials on basic topics in ANN research and development, and on top of all this a relaxed and exciting social program. The conference is hosted and organized by the Connectionist Research Group at Hoegskolan Skoevde in collaboration with the Swedish Neural Network Society (SNNS) and the European Neural Network Society (ENNS). It is supported by the International Neural Network Society (INNS), the Asian Pacific Neural Network Assembly (APNNA), the IEEE Neural Network Council and the IEE.
________________________________________________________________

SCOPE

ICANN 98 aims to cover all aspects of ANN research, broadly divided into six areas, corresponding to separate organizational modules for which contributions are sought.

THEORY: This module covers the broad area of theory. Topics include, but are not limited to: Model issues; Unsupervised/supervised learning issues; "Life-long" learning; Inference; Signal Processing; Neurocontrol; Analysis; Combinatorial optimization.

APPLICATIONS: We particularly encourage submissions covering novel ANN applications in, for example, the following areas: Pattern recognition; Time series prediction; Optimization; Data analysis/Data mining; Telecommunications; Control; Speech and signal processing; Vision and image processing.

COMPUTATIONAL NEUROSCIENCE AND BRAIN THEORY: This module covers computational models of biological neural systems, functions, techniques and tools, e.g.
Sensory and perceptual processing; Sensory fusion; Motor pattern generation and control; Computational neuroethology; Plasticity in the nervous system; Neuromodulation; Cognitive neuroscience; Behavior selection; Decision making; Cortical associative memory; Neuromorphic computer architectures. CONNECTIONIST COGNITIVE SCIENCE AND AI: This module covers the use of ANNs for modeling cognitive capacities and the relation between ANNs and AI, e.g. Vision and perception; Recognition and categorization; Development; Representational issues; Reasoning, problem solving and planning; Language and speech; Cognitive plausibility; Connectionism, Hybridism and Symbolism; Philosophical aspects and implications. AUTONOMOUS ROBOTICS AND ADAPTIVE BEHAVIOR: This module covers ANNs for adaptive control of autonomous robots as well as modeling of animal behavior. Possible topics include: Adaptive behavior in biological/artificial autonomous agents; ANN learning methods for adaptation, control and navigation; Multi-agent systems; Representational issues; Dynamics of agent-environment interaction; Biologically/ethologically inspired robotics; ANNs in evolutionary robotics and Artificial Life; Cognitive robotics. HARDWARE/IMPLEMENTATION: This module covers ANN hardware and implementational issues. Possible topics include: Analog/digital implementations; Pulse stream networks; On chip learning; Systems and architectures; Hardware implants/coupling silicon to biological nerves; Vision and image processing. In addition to the modules mentioned above we aim to further promote contacts between researchers and industry. To achieve this, a special panel session on 'Industry and Research' will be organized. Within this session, a number of speakers will be invited to present usage of state-of-the-art ANN technology in Japan, Europe and the USA. 
We also intend to invite potential funding agencies, in order to get their view on what kind of ANN research will be considered for funding in the future. _________________________________________________________________ SUBMISSION Prospective authors are invited to submit papers for oral or poster presentation by March 25, 1998. For details please see: http://www.his.se/ida/icann98/ or contact the conference secretariat (see below). All papers accepted for oral or poster presentation will appear in the conference proceedings, which will be published by Springer-Verlag. _________________________________________________________________ PROGRAM COMMITTEE (to be completed) Bengt Asker, Independent Consultant, Sweden Lars Asplund, Uppsala University, Sweden Randall Beer, Santa Fe Institute & Case Western Reserve University, USA Chris Bishop, Microsoft Research, Cambridge, UK Miklos Boda, Ericsson Telecom AB, Stockholm, Sweden Mikael Boden, Hoegskolan Skoevde, Sweden Valentino Braitenberg, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany Harald Brandt, Ericsson Telecom AB, Stockholm, Sweden Abhay Bulsari, AB Nonlinear Solutions OY, Finland Bo Cartling, Royal Institute of Technology, Sweden Ron Chrisley, University of Sussex, UK Erik De Schutter, University of Antwerp, Belgium Georg Dorffner, University of Vienna, Austria Rolf Eckmiller, University of Bonn, Germany Dario Floreano, Swiss Federal Institute of Technology, Lausanne, Switzerland Francoise Fogelman Soulie, SLIGOS, France Wulfram Gerstner, Centre for Neuro-Mimetic Systems, Lausanne, Switzerland John Hertz, Nordita, Denmark Pentti Kanerva, SICS, Sweden Bert Kappen, University of Nijmegen, The Netherlands Anders Lansner, Royal Inst. 
of Technology, Sweden Klaus-Robert Mueller, GMD First, Germany Ajit Narayanan, University of Exeter, UK Lars Niklasson, Hoegskolan Skoevde, Sweden Stefano Nolfi, National Research Council, Rome, Italy Erkki Oja, Helsinki University of Technology, Finland Guenther Palm, University of Ulm, Germany Jordan Pollack, Brandeis University, USA Ronan Reilly, University College Dublin, Ireland Brian Ripley, Oxford University, UK Thorsteinn Roegnvaldsson, Hoegskolan Halmstad, Sweden Bernd Schuermann, Siemens AG, Munich, Germany Noel Sharkey, University of Sheffield, UK Olli Simula, Helsinki University of Technology, Finland Jonas Sjoeberg, Chalmers University of Technology, Sweden Gunnar Sjoedin, SICS, Sweden Bertil Svensson, Hoegskolan Halmstad & Chalmers University of Technology, Sweden Jun Tani, Sony CSL Inc., Tokyo, Japan Carme Torras, Universitat Politecnica de Catalunya, Barcelona, Spain Tim van Gelder, University of Melbourne, Australia Francisco Varela, LENA - CNRS, Paris, France Eric A. Wan, Oregon Graduate Institute, USA Florentin Woergoetter, Ruhr-Universitaet Bochum, Germany Tom Ziemke, Hoegskolan Skoevde, Sweden _________________________________________________________________ DATES TO REMEMBER March 25, 1998 - submissions must be received May 6, 1998 - notification of acceptance May 28, 1998 - final camera ready papers must be received September 1, 1998 - ICANN 98 tutorials September 2-4, 1998 - ICANN 98 takes place _________________________________________________________________ CONFERENCE SECRETARIAT ICANN 98 Hoegskolan Skoevde P.O. 
Box 408 S-541 28 Skoevde SWEDEN Email: icann98 at ida.his.se Telefax: +46 (0)500-46 47 25 http://www.his.se/ida/icann98/ From kia at particle.kth.se Mon Jan 19 15:49:56 1998 From: kia at particle.kth.se (Karina Waldemark) Date: Mon, 19 Jan 1998 21:49:56 +0100 Subject: VI-DYNN'98 Call for papers Message-ID: <34C3BC74.AE9E7F9A@particle.kth.se> ------------------------------------------------------------------------ 2nd call for papers: VI-DYNN'98 Workshop on Virtual Intelligence - Dynamic Neural Networks Stockholm June 22-26, 1998 Royal Institute of Technology, KTH Stockholm, Sweden ------------------------------------------------------------------------ Abstracts due: February 28, 1998 ------------------------------------------------------------------------ VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn VI-DYNN'98 will combine the DYNN emphasis on biologically inspired neural network models, especially Pulse Coupled Neural Networks (PCNN), with the practical applications emphasis of the VI workshops. In particular we will focus on why, how, and where to use biologically inspired neural systems. For example, we will learn how to adapt such systems to sensors such as digital X-ray imaging devices, CCDs and SAR, and examine questions of accuracy, speed, etc. Developments in research on biological neural systems, such as the mammalian visual system, and how smart sensors can benefit from this knowledge will also be presented. Pulse Coupled Neural Networks (PCNN) are among the most exciting recent developments in the field of artificial neural networks (ANN), showing great promise for pattern recognition and other applications. PCNN-type models are much more closely related to real biological neural systems than most ANNs, yet many researchers in the field of ANN pattern recognition are unfamiliar with them. VI-DYNN'98 will continue in the spirit of the Virtual Intelligence workshop series. 
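[Editor's sketch] For readers unfamiliar with PCNNs, the basic neuron dynamics mentioned in the announcement can be sketched in a few lines of Python. All parameter values and the 1-D toy stimulus below are illustrative assumptions, not taken from the workshop material; real PCNN image work uses 2-D kernels and tuned constants.

```python
import numpy as np

def pcnn_step(S, F, L, Y, theta, beta=0.2, aF=0.9, aL=0.8, aT=0.95,
              VF=0.1, VL=0.2, VT=5.0):
    """One iteration of a minimal 1-D pulse-coupled neural network.
    Feeding (F) and linking (L) inputs decay and collect pulses from the
    two neighbours; a neuron fires when its internal activity
    U = F * (1 + beta * L) exceeds its decaying threshold theta."""
    nbr = np.roll(Y, 1) + np.roll(Y, -1)   # pulses from the two neighbours
    F = aF * F + S + VF * nbr
    L = aL * L + VL * nbr
    U = F * (1.0 + beta * L)
    Y = (U > theta).astype(float)
    theta = aT * theta + VT * Y            # threshold jumps after a pulse
    return F, L, Y, theta

# A bright segment on a dark background pulses in synchrony, which is
# what makes PCNN dynamics attractive for segmentation.
S = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
F = np.zeros_like(S)
L = np.zeros_like(S)
Y = np.zeros_like(S)
theta = np.ones_like(S)
history = []
for _ in range(10):
    F, L, Y, theta = pcnn_step(S, F, L, Y, theta)
    history.append(Y.copy())
fired = np.array(history)   # rows: time steps, columns: neurons
```

Running this, the three bright neurons emit their pulses on the same time steps while the dark background stays silent, illustrating the synchronization the workshop blurb alludes to.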
----------------------------------------------------------------- VI-DYNN'98 Topics: Dynamic NN Fuzzy Systems Spiking Neurons Rough Sets Brain Image Genetic Algorithms Virtual Reality ------------------------------------------------------------------ Applications: Medical Defense & Space Others ------------------------------------------------------------------- Special sessions: PCNN - Pulse Coupled Neural Networks: exciting new artificial neural networks related to real biological neural systems PCNN applications: pattern recognition, image processing, digital x-ray imaging devices, CCDs & SAR Biologically inspired neural network models: why, how and where to use them The mammalian visual system: how smart sensors can benefit from this knowledge The Electronic Nose ------------------------------------------------------------------------ International Organizing Committee: John L. Johnson (MICOM, USA), Jason M. Kinser (George Mason U., USA) Thomas Lindblad (KTH, Sweden) Robert Lorenz (Univ. Wisconsin, USA) Mary Lou Padgett (Auburn U., USA), Robert T. Savely (NASA, Houston) Manuel Samuelides (CERT-ONERA, Toulouse, France) John Taylor (King's College, UK) Simon Thorpe (CERI-CNRS, Toulouse, France) ------------------------------------------------------------------------ Local Organizing Committee: Thomas Lindblad (KTH) - Conf. Chairman Clark S. Lindsey (KTH) - Conf. Secretary Kenneth Agehed (KTH) Joakim Waldemark (KTH) Karina Waldemark (KTH) Nina Weil (KTH) Moyra Mann - registration officer --------------------------------------------------------------------- Contact: Thomas Lindblad (KTH) - Conf. Chairman email: lindblad at particle.kth.se Phone: [+46] - (0)8 - 16 11 09 Clark S. Lindsey (KTH) - Conf. Secretary email: lindsey at particle.kth.se Phone: [+46] - (0)8 - 16 10 74 Switchboard: [+46] - (0)8 - 16 10 00 Fax: [+46] - (0)8 - 15 86 74 VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn -- ------------------------------------------------------ This is an email from: Ph.D. 
Karina Waldemark ------------------------------------------------------ Royal Institute of Technology, KTH Physics Department Frescati Particle Physics and Instrumentation group ------------------------------------------------------ Email: kia at particle.kth.se Snail mail: Frescativagen 24 S-104 05 Stockholm SWEDEN Phone: [+46] - (0)8 - 16 10 81 Switchboard: [+46] - (0)8 - 16 10 00 Fax : [+46] - (0)8 - 15 86 74 Please visit MY HOME PAGE at http://msia02.msi.se/~kia/kia.html From smagt at dlr.de Tue Jan 20 05:43:59 1998 From: smagt at dlr.de (Patrick van der Smagt) Date: Tue, 20 Jan 1998 11:43:59 +0100 Subject: abstracts of NIPS cerebellar workshop available Message-ID: <34C47FEF.FEBD362C@robotic.dlr.de> Abstracts of the NIPS*97 workshop "Can Artificial Cerebellar Models Compete to Control Robots?" can now be downloaded from the Web at http://www.op.dlr.de/FF-DR-RS/CONFERENCES/nips-workshop/ The bibliographical information: P. van der Smagt and D. Bullock (editors), 1997 Extended Abstracts of the NIPS*97 Workshop `Can Artificial Cerebellar Models Compete to Control Robots?' DLR Technical Report # 515-97-28 (38 pages) Contents: * Patrick van der Smagt, "Dynamic control in new robot structures: Can we learn nonlinear functions of high dimensionality?" * Daniel Bullock, "Cerebellar learning for context sensitive and critically timed coordination of multiple action channels" * Gordon Kraft, "Optimized Weight Smoothing for CMAC Neural Networks" * Jose Contreras-Vidal and Juan Lopez-Coronado, "Adaptive Cerebellar Control of Opponent Muscles" * Mitsuo Kawato, "Multiple Internal Models in the Cerebellum" * Andrew Fagg, Leo Zelevinsky, Andrew Barto, and James Houk, "Using Crude Corrective Movements to Learn Accurate Motor Programs for Reaching" * Mark E. 
Nelson, "Adaptive Motor Control Without a Cerebellum" * Jacob Spoelstra, Michael Arbib, and Nicolas Schweighofer, "Cerebellar control of a simulated biomimetic manipulator for fast movements" * Marwan Jabri, Olivier Coenen, Jerry Huang, and Terrence Sejnowski, "Sensorimotor integration and control" Patrick van der Smagt -- dr Patrick van der Smagt phone +49 8153 281152 DLR/Institute of Robotics and System Dynamics fax +49 8153 281134 P.O. Box 1116, 82230 Wessling, Germany email From espaa at soc.plym.ac.uk Wed Jan 21 07:10:08 1998 From: espaa at soc.plym.ac.uk (espaa) Date: Wed, 21 Jan 1998 12:10:08 GMT Subject: Call for Paper PAA Journal Message-ID: <334F974762@scfs3.soc.plym.ac.uk> CALL FOR PAPERS PATTERN ANALYSIS AND APPLICATIONS journal http://www.soc.plym.ac.uk/soc/sameer/paa.htm Springer-Verlag Limited Springer Verlag Ltd is launching a new journal - Pattern Analysis and Applications (PAA) - in Spring 1998. Original Papers are now invited for the journal covering the following areas of interest: Aims and Scope of PAA: The journal publishes high quality articles in areas of fundamental research in pattern analysis and applications. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. The main aim of the journal is to publish high quality research in intelligent pattern analysis in computer science and engineering. In addition, the journal will also publish articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control. 
The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition and uncertainty management in applied pattern recognition are particularly solicited. The journal encourages the submission of original case studies on applied pattern recognition which describe important research in the area. The journal also solicits reviews on novel pattern analysis benchmarks, evaluation of pattern analysis tools, and important research activities at international centres of excellence working in pattern analysis. Audience: Researchers in computer science and engineering; research and development personnel in industry; researchers in application areas where pattern analysis is used; and researchers working on novel pattern recognition and analysis techniques and their specific applications. 
________________________________________________________________ Editorial Board: Sukhan Lee University of Southern California, USA Eric Saund Xerox Palo Alto Research Center, USA Narendra Ahuja University of Illinois - Urbana Champaign, USA Andras Lorincz University of Szeged, Hungary William Freeman Mitsubishi Electric Research Lab., USA Jianchang Mao IBM, USA Xuedong Huang Microsoft Corporation, USA Haruo Asada Toshiba Corporation, Japan Jim Keller University of Missouri-Columbia, USA Kurt Hornik Technical University of Vienna, Austria K K Biswas Indian Institute of Technology, India Frederick Jelinek Johns Hopkins University, USA Adnan Amin University of New South Wales, Australia Jussi Parkkinen Lappeenranta University of Technology, Finland Hans Guesgen University of Auckland, New Zealand Gregory Hager Yale University, USA Gerhard Ritter University of Florida, USA Gabor Herman University of Pennsylvania, USA Ravi Kothari University of Cincinnati, USA Tan Chew Lim National University of Singapore, Singapore Horst Bischof Technical University of Vienna, Austria Larry Spitz Daimler-Benz Research & Tech., USA Torfinn Taxt University of Bergen, Norway Ching Y Suen Concordia University, Canada Terry Caelli Curtin University of Technology, Australia Eric Ristad Princeton University, USA Andreas Dengel German Research Centre for AI GmbH, Germany Henri Prade Universite Paul Sabatier, France Alexander Franz Sony Computer Science Lab Technology, Japan Dan Adam Technion-Israel Inst. 
of Technology, Israel John MacIntyre University of Sunderland, UK Robert Duin Delft University of Technology, Netherlands Hsi-Jian Lee National Chiao Tung University, Taiwan Steven Salzberg Johns Hopkins University, USA Ruggero Milanese University of Geneva, Switzerland Masayuki Nakajima Tokyo Institute of Technology, Japan Melanie Mitchell Santa Fe Institute, USA Madan Singh UMIST, UK James Duncan Yale University, USA Sanjoy Mitter MIT, USA Mari Ostendorf Boston University, USA Steve Young University of Cambridge, UK Alan Bovik University of Texas at Austin, USA Michael Brady University of Oxford, UK Simon Kasif University of Illinois at Chicago, USA David G Stork RICOH Silicon Valley, USA ______________________________________________________ Send your submissions to: Sameer Singh Editor-in-Chief, Pattern Analysis and Applications School of Computing University of Plymouth Plymouth PL4 8AA UK Full information about the journal and detailed instructions for Call for Papers can be found at the PAA web site. From eppler at hpe.fzk.de Wed Jan 21 08:54:59 1998 From: eppler at hpe.fzk.de (Wolfgang Eppler) Date: Wed, 21 Jan 1998 14:54:59 +0100 Subject: Interpretation and Optimization of Neural Systems Message-ID: <34C5FE33.1B0259B6@hpe.fzk.de> Call for Papers Invited Session "Interpretation and Optimization of Neural Systems" at EUFIT '98, Aachen, Germany, September 7 - 10, 1998 Neural networks are known to be black boxes. Their internal structure is determined by training algorithms that do not care about human comprehension. Some neural networks can be analyzed easily, e.g. self-organizing feature maps, whose weights are prototype vectors in the input space, or radial basis function networks, whose weights and biases are the centers and variances of Gaussian regions. But even for multi-layer perceptrons there exist methods to better understand the interior of a network. 
Some approaches use fuzzy rules or other symbolic methods; others use geometrical interpretations. This ability to analyze a network is important when neural networks are used in domains with high security requirements. A network's unknown response to extreme values, and input spaces that are difficult to test, make such analysis and manipulation tools necessary. The manipulation of a network after training may help to optimize the generalization capability of the solution found. Genetic algorithms and graphical approaches are two examples of techniques for achieving these objectives. The invited session addresses these optimization and interpretation techniques. Both new theoretical methods and demonstrations of tools are welcome. Deadline for abstract: February 1, 1998 Deadline for camera ready paper: March 31, 1998 Address: Prof. Dr. H. Gemmeke Forschungszentrum Karlsruhe, FZK (Research Centre Karlsruhe) POB 3640 76021 Karlsruhe Germany or Fax: ++49 7247 82 3560 Tel: ++49 7247 82 5537 or email: eppler at hpe.fzk.de Please contact: Wolfgang Eppler, Tel: ++49 7247 82 5537, email: eppler at hpe.fzk.de Prof. Dr. Hartmut Gemmeke, Tel: ++49 7247 82 5635, email: gemmeke at hpe.fzk.de From jose at tractatus.rutgers.edu Wed Jan 21 12:26:22 1998 From: jose at tractatus.rutgers.edu (Stephen Hanson) Date: Wed, 21 Jan 1998 12:26:22 -0500 Subject: POSTDOC immediately available. RUTGERS Newark PSYCHOLOGY DEPARTMENT Message-ID: <34C62FBE.258E6EB8@tractatus.rutgers.edu> The Department of Psychology of Rutgers University-Newark Campus -- POSTDOCTORAL Position A PostDoctoral position that can be filled *immediately* running through Fall 98/Spring 99 with a possibility of a second year renewal. Area of specialization in connectionist modeling with applications to categorization, recurrent networks, brain imaging or more generally cognitive neuroscience. Review of applications will begin immediately -- but applications will continue to be accepted until the position is filled. 
Starting date is flexible in the Spring 98 semester. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply. Send CV to Professor S. J. Hanson, Chair, Department of Psychology - Post Doc Search, Rutgers University, Newark, NJ 07102. Email enquiries can be made to jose at psychology.rutgers.edu; please include POSTDOC in the subject heading. -- Stephen J. Hanson Professor & Chair Department of Psychology Smith Hall Rutgers University Newark, NJ 07102 voice: 1-973-353-5440 x5095 fax: 1-973-353-1171 http://psychology.rutgers.edu email: jose at kreizler.rutgers.edu cellular: 1-201-757-2589 From baluja at jprc.com Wed Jan 21 17:03:36 1998 From: baluja at jprc.com (Shumeet Baluja) Date: Wed, 21 Jan 1998 17:03:36 -0500 Subject: Paper Available on Rotation Invariant Face Detection Message-ID: <199801212203.RAA16887@india.jprc.com> Rotation Invariant Neural Network-Based Face Detection by: Henry Rowley Shumeet Baluja Takeo Kanade Abstract: In this paper, we present a neural network-based face detection system. Unlike similar systems which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; the first is a ``router'' network which processes each input window to determine its orientation and then uses this information to prepare the window for one or more ``detector'' networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces which are rotated out of the image plane, such as profiles and semi-profiles. This is Technical Report: CMU-CS-97-201 Available from: http://www.cs.cmu.edu/~har/faces.html and http://www.cs.cmu.edu/~baluja/techreps.html Questions and comments are welcome. 
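[Editor's sketch] The router/detector decomposition described in the abstract can be illustrated with a small Python sketch. The `router` and `detector` callables below are toy stand-ins for the trained networks, and the derotation step handles only multiples of 90 degrees for simplicity; none of these names come from the paper itself.

```python
import numpy as np

def derotate(window, angle_deg):
    """Undo an estimated in-plane rotation. This sketch only handles
    multiples of 90 degrees via np.rot90; the real system resamples the
    window at arbitrary angles before running the detector."""
    k = int(round(angle_deg / 90.0)) % 4
    return np.rot90(window, k=-k)       # rotate back by -angle

def route_and_detect(window, router, detector):
    """Two-stage pipeline from the abstract: the router estimates the
    window's orientation, the window is rotated back upright, and a
    single upright-face detector makes the final decision."""
    angle = router(window)              # estimated in-plane rotation (deg)
    upright = derotate(window, angle)
    return detector(upright)

# Toy example: the "face" is a bright top-left pixel; the input window
# shows it rotated 90 degrees counter-clockwise (bright bottom-left).
window = np.zeros((20, 20))
window[-1, 0] = 1.0
score = route_and_detect(window,
                         router=lambda w: 90.0,            # pretend estimate
                         detector=lambda w: float(w[0, 0]))
# score == 1.0: the derotated window puts the bright pixel back at (0, 0)
```

The design point this illustrates is why the router exists at all: a single upright detector plus a cheap orientation estimate replaces training one detector per rotation angle.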
From hagai at phy.ucsf.EDU Wed Jan 21 19:04:29 1998 From: hagai at phy.ucsf.EDU (Hagai Attias) Date: Thu, 22 Jan 98 00:04:29 +0000 Subject: Paper available: blind source separation Message-ID: <199801220804.AAA20240@phy.ucsf.EDU> A new paper on blind separation of mixed and convolved sources is available at: http://keck.ucsf.edu/~hagai/papers.html ------------------------------------------------------- BLIND SOURCE SEPARATION AND DECONVOLUTION: THE DYNAMIC COMPONENT ANALYSIS ALGORITHM Hagai Attias and Christoph E. Schreiner University of California, San Francisco hagai at phy.ucsf.edu (Neural Computation 1998, in press) We present a novel family of unsupervised learning algorithms for blind separation of mixed and convolved sources. Our approach, termed `dynamic component analysis' (DCA), is based on formulating the separation problem as a learning task of a spatio-temporal generative model. The resulting learning rules achieve separation by exploiting high-order spatio-temporal statistics of the observed data. Using an extension of the relative-gradient concept to the spatio-temporal case, we derive different rules by learning generative models in the frequency and time domains; a hybrid frequency/time model leads to the best performance. These algorithms generalize independent component analysis (ICA) to the case of convolutive mixtures, and exhibit superior performance on instantaneous mixtures. In addition, our approach can incorporate information about the mixing situation when available, resulting in a `semi-blind' separation algorithm. Finally, the spatio-temporal redundancy reduction performed by DCA algorithms is shown to be equivalent to information-rate maximization through a simple network. 
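[Editor's sketch] DCA generalizes ICA to convolutive mixtures. For the instantaneous special case, the relative-gradient (natural-gradient) learning rule that this line of work builds on can be sketched as below; the mixing matrix, step size, iteration count, and Laplacian source model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two super-Gaussian (Laplacian) sources, instantaneously mixed.
n = 20000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # "unknown" mixing matrix
X = A @ S                           # observed signals

# Relative-gradient ICA: W <- W + lr * (I - E[phi(y) y^T]) W,
# with phi = tanh as the score function for super-Gaussian sources.
W = np.eye(2)
lr = 0.1
for _ in range(200):
    Y = W @ X
    W = W + lr * (np.eye(2) - np.tanh(Y) @ Y.T / n) @ W

# At a separating solution, W @ A is close to a scaled permutation:
# each recovered channel carries essentially one source.
P = W @ A
```

Multiplying the gradient by W on the right (the "relative" part) makes the update equivariant in the mixing matrix, which is what makes the rule cheap and stable compared with the ordinary gradient.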
From PHKYWONG at usthk.ust.hk Fri Jan 23 01:39:13 1998 From: PHKYWONG at usthk.ust.hk (PHKYWONG@usthk.ust.hk) Date: Fri, 23 Jan 1998 14:39:13 +0800 Subject: New Book: Theoretical Aspects of Neural Computation Message-ID: <01ISPX7VCY2A9110EM@usthk.ust.hk> Announcing a new book by Springer (order information attached): Theoretical Aspects of Neural Computation ----------------------------------------- A Multidisciplinary Perspective [Proceedings of Hong Kong International Workshop(TANC'97)] Kwok-Yee Michael Wong, Irwin King and Dit-Yan Yeung (Eds.) Over the past decade or so, neural computation has emerged as a research area with active involvement by researchers from a number of different disciplines, including computer science, engineering, mathematics, neurobiology, physics, and statistics. The Hong Kong International Workshop, TANC'97, brought together researchers with a diverse background to review the current status of neural computation research. Three aspects of neural computation have been emphasized: neuroscience aspects, computational and mathematical aspects, and statistical physics aspects. This book contains 29 contributions from frontier researchers in these fields. Thoroughly re-edited, and in some cases revised post-workshop, these papers collated into this review volume provide a top-class reference summary of the state-of-the-art work done in this field. Table of Contents ================= The Natural Gradient Learning Algorithm for Neural Networks Shun-ichi Amari Regression with Gaussian Processes: Average Case Performance Manfred Opper Bayesian Ying-Yang System and Theory as A Unified Statistical Learning Approach (II): From Unsupervised Learning to Supervised Learning and Temporal Modeling Lei Xu Bayesian Ying-Yang System and Theory as A Unified Statistical Learning Approach: (III) Models and Algorithms for Dependence Reduction, Data Dimension Reduction, ICA and Supervised Learning Lei Xu Optimal Bayesian Online Learning Ole Winther and Sara A. 
Solla Several Aspects of Pruning Methods in Recursive Least Square Algorithms for Neural Networks Chi-Sing Leung, Pui-Fai Sum, Ah-Chung Tsoi, and Lai-Wan Chan Experts or an Ensemble? A Statistical Mechanics Perspective of Multiple Neural Network Approaches Jong-Hoon Oh and Kukjin Kang Mean Field Theory of Learning in Pruned Perceptrons K. Y. Michael Wong Solving Inverse Problems by Bayesian Iterative Inversion of Neural Networks Jenq-Neng Hwang Stochastic Orientation of the Generating Distribution in Very Fast Simulated Reannealing Bruce E. Rosen Graph Partitioning Using Homotopy Based Fast Annealed Neural Networks Jian-Jun Xue and Xiao-Hu Yu Modelling Synfire Processing John Hertz Ideal Observers of Visual Object Recognition Zili Liu Primary Cortical Dynamics for Visual Grouping Zhaoping Li Architecture of Cortex Revealed by Divided Attention Experiments Ching Elizabeth Ho Information Merging in Neural Modelling Massimo Battisti, Pietro Burrascano and Dario Pirollo Signal Recognition Based on Wavelet and Wavelet Neural Network Yao-Jun Wu, Xi-Zhi Shi and Ming Xu Chaos Theory in EEG Analysis Hongkui Jing and Shijun Chen A Neurocomputational Model of Figure-Ground Discrimination by Relative Motion Aike Guo, Haijian Sun, and Lin Liu Feature Selectivity in a Cortical Module with Short-Range Excitation David Hansel and Haim Sompolinsky A Psychophysical Experiment to Test the Efficient Stereo Coding Theory Danmei Chen and Zhaoping Li An Exact Solution for On-Line Learning of Smooth Functions Sara A. Solla Unsupervised Learning by Examples: On-line Versus Off-line C. Van den Broeck A Simple Perceptron that Learns Non-Monotonic Rules Jun-ichi Inoue, Hidetoshi Nishimori and Yoshiyuki Kabashima The Stability of Asymmetric Hopfield Networks With Nonnegative Weights Jinwen Ma Fully Connected Q-Ising Neural Networks: A General Scheme for Discussing Parallel Dynamics D. Bollé, G. Jongen and G. M. 
Shim Recurrent Sampling Models Peter Dayan Low-Complexity Coding and Decoding Sepp Hochreiter and Jürgen Schmidhuber Pattern Analysis and Synthesis in Attractor Neural Networks H. Sebastian Seung Author Index Subject Index ============================================================================ ORDERING INFORMATION: ISBN 981-3083-70-0 Book price: US$49.00 (excluding postage charges) Please find the ordering information at the website: http://www.springer.com.sg under Books on Computer Science. Alternatively, please fax your orders to: [1] Singapore sales (65) 84 20 107 for clients in SE Asia (email: orders at cyberway.com.sg; tel: (65) 84 20 112) or [2] Hong Kong sales (852) 2724 2366 for clients in N. Asia (email: joes at springer.com.hk; tel: (852) 2723 9698) From Dave_Touretzky at cs.cmu.edu Fri Jan 23 04:08:53 1998 From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu) Date: Fri, 23 Jan 1998 04:08:53 -0500 Subject: summer undergrad. program in cognitive/computational neuroscience Message-ID: <10831.885546533@skinner.boltz.cs.cmu.edu> The Center for the Neural Basis of Cognition, a joint program of the University of Pittsburgh and Carnegie Mellon University, offers an annual summer program for a small number of qualified undergraduates interested in studying cognitive or computational neuroscience. The program offers undergraduates ten weeks of intensive involvement in laboratory research supervised by one of the program's faculty. The program also includes weekly journal club meetings and a series of lectures and laboratory tours designed to give students a broad exposure to cognitive and computational neuroscience topics. Students' individual research experiences are planned in consultation with the training program's Director. 
Potential laboratory environments include single unit recording, neuroanatomy, brain imaging, computer simulation of biological or cognitive phenomena, robotics, and neuropsychological or behavioral assessment of clinical subjects. Students selected to participate in the program will receive a $2500 stipend, plus housing and a modest travel allowance. Support is provided by the National Science Foundation and the Center for the Neural Basis of Cognition. The application deadline this year is February 15, 1998. The program begins in early June and lasts for 10 weeks. For additional information about the program or to obtain application materials, visit our web site at http://www.cnbc.cmu.edu/Training/summer From sml%essex.ac.uk at seralph21.essex.ac.uk Tue Jan 27 04:09:08 1998 From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas) Date: Tue, 27 Jan 1998 09:09:08 +0000 Subject: paper and applet on chromosome coding for NN evolution Message-ID: <34CDA434.48C4@essex.ac.uk> Dear Connectionists, The following paper and applet are available from my website: http://esewww.essex.ac.uk/~sml entitled: A comparison of matrix rewriting versus direct encoding for evolving neural networks A.A. Siddiqui and S.M. Lucas Proceedings of IEEE International Conference on Evolutionary Computation, 1998 (to appear) Abstract: The intuitive expectation is that the scheme used to encode the neural network in the chromosome should be critical to the success of evolving neural networks to solve difficult problems. In 1990 Kitano (Complex Systems, vol 4, pp 461 - 476) published an encoding scheme based on context-free parallel matrix rewriting. The method allowed compact, finite chromosomes to grow neural networks of potentially infinite size. Results were presented that demonstrated superior evolutionary properties of the matrix rewriting method compared to a simple direct encoding. 
In this paper, we present results that contradict those findings, and demonstrate that a genetic algorithm (GA) using a direct encoding can find good individuals just as efficiently as a GA using matrix rewriting. The applet allows you to attempt to reproduce the results presented in the paper and to extend the comparison to other datasets. Best regards, Simon Lucas -- ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk http://esewww.essex.ac.uk/~sml secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From diane at cs.cmu.edu Tue Jan 27 16:37:01 1998 From: diane at cs.cmu.edu (Diane Stidle) Date: Tue, 27 Jan 1998 16:37:01 -0500 Subject: CONALD Conference Announcement Message-ID: <3.0.2.32.19980127163701.01014da8@ux5.sp.cs.cmu.edu> CONFERENCE ON AUTOMATED LEARNING AND DISCOVERY (CONALD) June 11-13, 1998 at Carnegie Mellon University, Pittsburgh, PA The Conference on Automated Learning and Discovery (CONALD'98) will bring together leading researchers from scientific disciplines concerned with learning from data. It will cover scientific research at the intersection of statistics, computer science, artificial intelligence, databases, social sciences and language technologies. The goal of this meeting is to explore new, unified research directions in this cross-disciplinary field. The conference features eight one-day cross-disciplinary workshops, interleaved with seven invited plenary talks by renowned statisticians, computer scientists, and cognitive scientists. The workshops will address issues such as: What is the state of the art, what can we do and what is missing? What are promising research directions? What are the most promising opportunities for cross-disciplinary research? 
CONALD differs from other meetings in the field in its broad, interdisciplinary scope. The goal of CONALD is to characterize the state-of-the-art in automated learning and discovery, and to identify promising cross-disciplinary research directions. The format will be very much tailored towards open discussions and free exchange of ideas. This meeting will be summarized by a written report that will be made available to the scientific community and NSF. ___Plenary speakers________________________________________________ * Tom Dietterich * Stuart Geman * David Heckerman * Michael Jordan * Daryl Pregibon * Herb Simon * Robert Tibshirani ___Workshops_______________________________________________________ * Visual Methods for the Study of Massive Data Sets organized by Bill Eddy and Steve Eick * Learning Causal Bayesian Networks organized by Richard Scheines and Larry Wasserman * Discovery in Natural and Social Science organized by Raul Valdes-Perez * Mixed-Media Databases organized by Shumeet Baluja, Christos Faloutsos, Alex Hauptmann, and Michael Witbrock * Learning from Text and the Web organized by Yiming Yang, Jaime Carbonell, Steve Fienberg, and Tom Mitchell * Robot Exploration and Learning organized by Howie Choset, Maja Mataric and Sebastian Thrun * Machine Learning and Reinforcement Learning for Manufacturing organized by Sridhar Mahadevan and Andrew Moore * Large-Scale Consumer Databases organized by Mike Meyer, Teddy Seidenfeld and Kannan Srinivasan ___Deadline_for_paper_submissions__________________________________ * February 16, 1998 ___More_information________________________________________________ * Web: http://www.cs.cmu.edu/~conald * E-mail: conald at cs.cmu.edu For submission instructions, consult our Web page or contact the organizers of the specific workshop. A limited number of travel stipends will be available. The conference will be sponsored by CMU's newly created Center for Automated Learning and Discovery. 
Additional financial support will be provided by the National Science Foundation (NSF). From terry at salk.edu Wed Jan 28 06:33:38 1998 From: terry at salk.edu (terry@salk.edu) Date: Wed, 28 Jan 1998 03:33:38 -0800 (PST) Subject: NEURAL COMPUTATION 10:2 Message-ID: <199801281133.DAA01530@hebb.salk.edu> Neural Computation - Contents Volume 10, Number 2 - February 15, 1998 ARTICLE Natural Gradient Works Efficiently in Learning Shun-ichi Amari NOTES Adding Lateral Inhibition to a Simple Feedforward Network Enables it to Perform Exclusive-Or Leslie S. Smith Combined Learning and Use for a Mixture Model Equivalent to the RBF Classifier David J. Miller and Hasan S. Uyar LETTERS Modeling the Surround of MT Cells and Their Selectivity for Surface Orientation in Depth Specified by Motion Lin Liu and Marc M. van Hulle A Self-Organizing Neural Network Architecture for Navigation Using Optic Flow Seth Cameron, Stephen Grossberg, and Frank H. Guenther Analysis of Direction Selectivity Arising From Recurrent Cortical Interactions Paul Mineiro and David Zipser Statistically Efficient Estimation Using Population Coding Alexandre Pouget, Kechen Zhang, Sophie Deneve, and Peter E. Latham Probabilistic Interpretation of Population Codes Richard S. Zemel, Peter Dayan, and Alexandre Pouget Stable and Rapid Recurrent Processing in Realistic Autoassociative Memories Francesco P. Battaglia and Alessandro Treves Synaptic Runaway in Associative Networks and the Pathogenesis of Schizophrenia Asnat Greenstein-Messica and Eytan Ruppin On Numerical Simulations of Integrate-and-Fire Neural Networks D. Hansel, G. Mato, C. Meunier, and L. Neltner A Floating Gate MOS Implementation of Resistive Fuse T. Matsumoto, T. Sawaji, T. Sakai, and H.
Nagai ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES

                  USA     Canada*   Other Countries
Student/Retired   $50     $53.50    $78
Individual        $82     $87.74    $110
Institution       $285    $304.95   $318

* includes 7% GST (Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add 7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From tom.ziemke at ida.his.se Wed Jan 28 02:53:49 1998 From: tom.ziemke at ida.his.se (Tom Ziemke) Date: Wed, 28 Jan 1998 08:53:49 +0100 Subject: CFP - Autonomous Robotics and Adaptive Behaviour at ICANN 98 Message-ID: <199801280753.IAA27796@tor.ida.his.se> ------------------------------------------------------- This is being mailed to multiple mailing lists. Please accept our apologies if you receive multiple copies. ------------------------------------------------------- CALL FOR PAPERS ------------------------------------------- AUTONOMOUS ROBOTICS and ADAPTIVE BEHAVIOUR ------------------------------------------- A module of ICANN 98, the 8th International Conference on Artificial Neural Networks, Skoevde, Sweden, 2-4 Sept. 1998 ----------------- INVITED SPEAKERS ----------------- Invited speakers for this module are * Rodney Brooks, MIT AI Lab, Cambridge, USA * Phil Husbands, COGS, University of Sussex, UK ------ SCOPE ------ The adaptivity and flexibility of artificial neural networks (ANNs), as well as their capacity for learning and self-organisation, make them ideal candidates for the control of autonomous robots. For the same reasons, ANNs and ANN-controlled robots are also increasingly being used to model biological mechanisms underlying the adaptive behaviour of animals.
Shared characteristics of biological and artificial autonomous agents include embodiment, situatedness, and the requirement to exhibit adaptive behaviour in sensorimotor interaction with dynamic environments. This module of ICANN 98 covers the use of ANN techniques for adaptive control of autonomous robots as well as ANN/robotic models of animal behaviour. Possible topics include, but are not restricted to: * adaptive behaviour in biological and artificial autonomous agents * ANN learning methods for adaptation, control and navigation * communication, cooperation and collective behaviour in multi-agent systems * the role of representation in embodied/situated/autonomous systems * dynamics of agent-environment interaction * biologically and ethologically inspired agent modelling * evolutionary robotics and artificial life * cognitive robotics and embodied cognition ------------------------------------ PROGRAMME COMMITTEE for this module ------------------------------------ * Randall Beer * Valentino Braitenberg * Dario Floreano * Stefano Nolfi * Jordan Pollack * Noel Sharkey * Jun Tani * Carme Torras * Francisco Varela * Tom Ziemke ----------- SUBMISSION ----------- Prospective authors are invited to submit papers for oral or poster presentation by MARCH 25, 1998. For details please see: http://www.his.se/ida/icann98/submission or contact the conference secretariat (see below). ------------ PUBLICATION ------------ All papers accepted for oral or poster presentation will appear in the ICANN 98 proceedings published by Springer-Verlag. The organizers have also arranged to edit a journal special issue which will include (in extended form) selected papers from this track. -------------------- FURTHER INFORMATION -------------------- This module is organized by Noel Sharkey, University of Sheffield, UK, and Tom Ziemke, Univ. of Skoevde, Sweden.
For further information concerning this module please see the web page http://www.ida.his.se/ida/icann98/robotics or contact Tom Ziemke University of Skoevde Dept. of Computer Science P.O. Box 408 S-541 28 Skoevde SWEDEN tom at ida.his.se fax +46 - (0)500 - 46 47 25 tel +46 - (0)500 - 46 47 30 For further information concerning ICANN 98 please see the web site http://www.his.se/ida/icann98/ or contact the conference secretariat ICANN 98 Leila Khammari University of Skoevde P.O. Box 408 S-541 28 Skoevde SWEDEN icann98 at ida.his.se fax +46 - (0)500 - 46 47 25 ---------------- IMPORTANT DATES ---------------- March 25, 1998 - submissions must be received May 6, 1998 - notification of acceptance or rejection May 28, 1998 - final camera-ready papers are due Sept. 2-4, 1998 - ICANN 98 takes place From jimmy at ecowar.demon.co.uk Wed Jan 28 13:03:30 1998 From: jimmy at ecowar.demon.co.uk (Jimmy Shadbolt) Date: Wed, 28 Jan 1998 18:03:30 +0000 Subject: research position - market prediction Message-ID: Econostat Ltd Hennerton House Wargrave Berks RG10 8PD United Kingdom We would like to invite applications for a research position at Econostat. The research team is involved principally in the prediction of monthly returns in the global bond and equity markets. All methods of prediction are investigated - regression, neural networks, genetic algorithms, Bayesian analysis, and anything else you can suggest. Original work is encouraged (and necessary!).

Position: Quantitative Research Analyst
Job Description: Research and development of expected return models
Applications to: Jimmy Shadbolt jimmy at ecowar.demon.co.uk
Start Date: IMMEDIATE

Qualifications
First degree in a numerate discipline (maths, engineering, physics, statistics, etc).
PhD (or MSc) in one of econometrics, mathematical statistics, applied mathematics or a related field of study.
Strong interest in financial economics, as evidenced by research topic

Training and experience
Experience in econometrics, modern regression or optimisation methods
Programming in C/C++ and/or Splus
User experience in PC (word processing and spreadsheet) and Unix environments

Aptitude and Ability
Good oral and writing skills
Creative and problem-solving approach to research

Personal Attributes
Ability to work without close supervision as a member of a team
Flexibility to meet changing opportunities in a dynamic research environment

-- Jimmy Shadbolt From tani at csl.sony.co.jp Thu Jan 29 06:38:33 1998 From: tani at csl.sony.co.jp (Jun.Tani (SONY CSL)) Date: Thu, 29 Jan 98 20:38:33 +0900 Subject: TR: Self-organizing levels of articulation in sensory-motor systems. Message-ID: <9801291138.AA04434@tani.csl.sony.co.jp> Dear Connectionists, The following technical paper is available at http://www.csl.sony.co.jp/person/tani.html or directly at: ftp://ftp.csl.sony.co.jp/CSL/CSL-Papers/97/SCSL-TR-97-008.ps.Z --------------------------------------------------------------------------- Self-Organization of Modules and Their Hierarchy in Robot Learning Problems: A Dynamical Systems Approach Jun Tani and Stefano Nolfi (Sony CSL Technical Report: SCSL-TR-97-008) ABSTRACT: This paper discusses how modular and hierarchical structures can be self-organized dynamically in a robot learning paradigm. We develop an on-line learning scheme -- the so-called mixture of recurrent neural net (RNN) experts -- in which a set of RNN modules becomes self-organized as experts in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching between winning expert modules, responding to structural changes in the sensory-motor flow, actually corresponds to the temporal segmentation of behavior.
Meanwhile, another mixture of RNNs at a higher level learns the sequences of module switching occurring in the lower level, by which articulation at a more abstract level is achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. The simulated robot equipped with range sensors traveled around rooms of different shapes. It was shown that modules corresponding to concepts such as turning right and left at corners, going straight along corridors and encountering junctions are self-organized in the lower level network. The modules corresponding to traveling in different rooms are self-organized in the higher level network. The robot succeeded in learning to perceive the world as articulated at multiple levels through its recursive interactions. -------------------------------------------------------------------------- Jun TANI, Ph.D Senior Researcher Sony Computer Science Laboratory Inc. Takanawa Muse Building, 3-14-13 Higashi-gotanda, Shinagawa-ku, Tokyo, 141 JAPAN email: tani at csl.sony.co.jp http://www.csl.sony.co.jp/person/tani.html Fax +81-3-5448-4273 Tel +81-3-5448-4380 From niall.griffith at ul.ie Thu Jan 29 05:29:58 1998 From: niall.griffith at ul.ie (Niall Griffith) Date: Thu, 29 Jan 1998 10:29:58 GMT Subject: PhD studentships: Connectionist Models of Musical Processes Message-ID: <9801291029.AA03089@shannon.csis.ul.ie> Please post this to those who would be interested. Thanks. Connectionist Models in Computational Musicology etc. ----------------------------------------------------- Centre for Computational Musicology and Computer Music Department of Computer Science and Information Systems University of Limerick Research studentships leading to a PhD.
in: Connectionist Models in Computational Musicology, Computer Music or Cognitive Musicology Applications are invited from students interested in working towards a Doctorate in the area of Computer Music, Computational Musicology or Cognitive Musicology, developing models of musical processes using neural networks and related machine learning techniques (GAs, Reinforcement Learning). Initially students will register for a Master's by research and subsequently be re-registered for a doctorate. The student(s) will be supervised by Dr. Niall Griffith, who has an interest in models that learn about musical structure and that can use what has been learned. Limerick is musically very active with the Irish World Music Centre and the Centre for Computational Musicology and Computer Music. The CCMCM offers a Masters in Music Technology. Current projects include a collaboration with members of the Irish World Music Centre and the Interaction Design Centre at UL in designing and implementing a "wired" dance floor that can track, represent and analyse dance steps. This project is ongoing and involves the floor as a performance medium, a compositional and analytical tool, and a choreographic aid. Network models will be used extensively in this project and others. Applicants should have a 2.1 honours degree in a relevant subject (e.g. Music, Cognitive Science, Computer Science, Psychology), though experience and other qualifications will be taken into account. Programming skills are an advantage. If you are curious, please visit the following web sites at UL... http://www.csis.ul.ie /* CSIS home page http://www.csis.ul.ie/ccmmc /* Centre for Computational Musicology http://www.ul.ie/~pal/litefoot /* LiteFoot home page Contact: Niall Griffith, Department of CSIS University of Limerick.
email: niall.griffith at ul.ie Telephone: +353 61 202785 Fax: +353 61 330876 From Yves.Moreau at esat.kuleuven.ac.be Thu Jan 29 13:12:11 1998 From: Yves.Moreau at esat.kuleuven.ac.be (Yves Moreau) Date: Thu, 29 Jan 1998 19:12:11 +0100 Subject: TR: Embedding Recurrent Neural Networks into Predator-Prey Models Message-ID: <34D0C67B.9A0BA590@esat.kuleuven.ac.be> Dear Connectionists, The following technical report is available via ftp or the World Wide Web: EMBEDDING RECURRENT NEURAL NETWORKS INTO PREDATOR-PREY MODELS Yves Moreau and Joos Vandewalle, K.U.Leuven ESAT-SISTA K.U.Leuven, Elektrotechniek-ESAT, Technical report ESAT-SISTA TR98-02 ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/moreau/reports/lotka_volterra_tr98-02.ps Comments are more than welcome! ABSTRACT ======== We study changes of coordinates that allow the embedding of the ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models ---also called Lotka-Volterra systems. We do this by transforming the equations for the neural network first into quasi-monomial form, where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our result shows that Lotka-Volterra systems are universal approximators of dynamical systems, just as continuous-time neural networks are. Keywords: Continuous-time neural networks, Equivalence of dynamical systems, Lotka-Volterra systems, Predator-prey models, Quasi-monomial forms.
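For readers unfamiliar with the terminology in the abstract above, the general shapes of the three classes of systems it mentions can be written as follows (the notation here is ours, chosen for illustration, and may differ in detail from the report's):

```latex
% Continuous-time recurrent neural network:
\dot{x}_i \;=\; -x_i \;+\; \sum_{j} w_{ij}\,\sigma(x_j) \;+\; c_i

% Quasi-monomial form: the vector field is a linear combination of
% products of powers of the variables,
\dot{x}_i \;=\; x_i \Big( \lambda_i \;+\; \sum_{j} A_{ij} \prod_{k} x_k^{B_{jk}} \Big)

% Lotka-Volterra (predator-prey) system: the special case B = I,
% where each species' growth rate is linear in the populations,
\dot{y}_i \;=\; y_i \Big( \lambda_i \;+\; \sum_{j} A_{ij}\, y_j \Big)
```

The report's contribution is the chain of coordinate changes that takes the first form into the last, at the cost of extra dimensions.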
-------------------------------------------------------------- To get it from the World Wide Web, point your browser at: ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/moreau/reports/lotka_volterra_tr98-02.ps To get it via FTP: ftp ftp.esat.kuleuven.ac.be cd pub/SISTA/moreau/reports get lotka_volterra_tr98-02.ps -------------------- Yves Moreau Department of Electrical Engineering Katholieke Universiteit Leuven Leuven, Belgium email: moreau at esat.kuleuven.ac.be homepage: http://www.esat.kuleuven.ac.be/~moreau publications: http://www.esat.kuleuven.ac.be/~moreau/publication_list.html From plaut at cmu.edu Thu Jan 29 13:50:39 1998 From: plaut at cmu.edu (David Plaut) Date: Thu, 29 Jan 1998 13:50:39 -0500 Subject: Preprint: Modeling phonological development Message-ID: <3744.886099839@eagle.cnbc.cmu.edu> The following preprint is available via the Web or anonymous ftp. The Emergence of Phonology from the Interplay of Speech Comprehension and Production: A Distributed Connectionist Approach David C. Plaut Christopher T. Kello Carnegie Mellon University and the Center for the Neural Basis of Cognition To appear in B. MacWhinney (Ed.), The emergence of language. Mahwah, NJ: Erlbaum. A distributed connectionist framework for phonological development is proposed in which phonological representations are not predefined but emerge under the pressure of mediating among acoustic, semantic, and articulatory representations in the service of both comprehension and production. Within the framework, articulatory feedback during speech production is derived from the acoustic consequences of the system's own articulations via a learned forward model of the physical mapping relating articulation to acoustics. An implementation of the framework, in the form of a discrete-time simple recurrent network, learned to comprehend, imitate, and intentionally name a corpus of 400 monosyllabic words, and its errors in development showed tendencies similar to those of young children.
Although this is only a first step, the results suggest that the approach may ultimately form the basis for a comprehensive account of phonological development. [25 pages] URL: http://www.cnbc.cmu.edu/~plaut/papers/PlautKelloINPRESSchap.phon.ps.gz FTP-host: cnbc.cmu.edu FTP-file: pub/user/plaut/papers/PlautKelloINPRESSchap.phon.ps.gz (Note: an uncompressed version can be found under papers/uncompressed/) -Dave =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= David Plaut Center for the Neural Basis of Cognition and Mellon Institute 115, CNBC Departments of Psychology and Computer Science Carnegie Mellon University MI 115I, 412/268-5145 (fax -5060) 4400 Fifth Ave., Pittsburgh PA 15213-2683 http://www.cnbc.cmu.edu/~plaut "Doubt is not a pleasant condition but certainty is an absurd one." -Voltaire =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From udah057 at bay.cc.kcl.ac.uk Fri Jan 30 11:36:37 1998 From: udah057 at bay.cc.kcl.ac.uk (John Taylor) Date: Fri, 30 Jan 1998 16:36:37 GMT Subject: POSTDOCTORAL POSITION IN HYBRID PROBLEMS Message-ID: <199801301636.QAA13485@mail.kcl.ac.uk> POSTDOCTORAL POSITION IN HYBRID PROBLEMS Applications are invited for a 3-year position in the PHYSTA (TMR: Training and Mobility) EC Network at the Centre for Neural Networks, Department of Mathematics, King's College, Strand, London, WC2R 2LS, UK to work with Prof JG Taylor. The research will be to develop modular and hierarchical hybrid neural network architectures which are able to create, in an adaptive manner, subsymbolic representations especially for emotionally-loaded inputs based on speech and image analysis. These will then be related to processing at a symbolic level involved with the inputs. The applicant should have a PhD preferably in Neural Networks, if possible with experience in applying neural networks to problems in vision, speech or a related area.
The applicant will also be expected to make short trips to the laboratories of the other partners in the TMR Network (S Gielen, Nijmegen; B Apolloni, Milan; S Kollias, Athens are the relevant ones). Please send your CV by reply. From chalmers at paradox.ucsc.edu Sat Jan 31 18:54:47 1998 From: chalmers at paradox.ucsc.edu (David Chalmers) Date: Sat, 31 Jan 98 15:54:47 PST Subject: Toward a Science of Consciousness 1998 Message-ID: <9801312354.AA29934@paradox.ucsc.edu.lingdomain> TOWARD A SCIENCE OF CONSCIOUSNESS 1998 TUCSON, ARIZONA APRIL 27 - MAY 2, 1998 Program details are now available for the third Tucson conference on "Toward a Science of Consciousness". The conference will take place from Monday April 27 to Saturday May 2, 1998, at the Tucson Convention Center and Music Hall, sponsored by the University of Arizona. Included below is an outline of plenary sessions and speakers, a list of concurrent sessions, and registration details. The information below is subject to slight revision. Note that the deadline for early registration is FEBRUARY 2. More details (including information on pre-conference workshops, poster sessions, abstracts, lodging, detailed schedule, and so on) can be found on the conference web sites at: http://www.consciousness.arizona.edu/page.htm (general information) http://www.zynet.co.uk/imprint/Tucson/ (abstracts, etc) PROGRAM COMMITTEE: David Chalmers, Stuart Hameroff, Alfred Kaszniak, Christof Koch, Marilyn Schlitz, Alwyn Scott, Petra Stoerig, Keith Sutherland, Michael Winkelman ---------------------------------------------------------------------- PLENARY SESSIONS Monday April 27 PL1: THE SELF * G. Strawson: The self. * M.S. Gazzaniga: The mind's past. * J. Shear: Experiential clarification of `The problem of self'. PL2: IMPLICIT PROCESSES * A. Greenwald: Simple mental feats that require conscious cognition (because unconscious cognition can't do them.) * P. Merikle: Is there memory for events during anesthesia? 
PL3: PATHWAYS OF VISUAL CONSCIOUSNESS * D. Milner: Unconscious visual processing for action: neuropsychological evidence. * M. Goodale: Unconscious visual processing for action: evidence from normal observers. * M. Mishkin: On the neural basis of visual awareness. Tuesday April 28 PL4: SLEEP AND DREAMING * B. McNaughton: Sleep and dreaming [working title] * J.A. Hobson: Neuropsychology of dreaming consciousness. * S. LaBerge: Lucid dreaming: psychophysiological studies of consciousness during REM sleep. PL5: INTEGRATIVE PERSPECTIVES * C. Koch: Visual awareness and the frontal lobes. * M. Tye: Representation and consciousness. PL6: COLOR AND CONSCIOUSNESS * K. Nordby: A 'colorful' life in black and white. * C.L. Hardin: Color quality and color structure. * M. Nida-Rumelin: Color and consciousness [working title] Wednesday April 29 PL7: TRANSPERSONAL PSYCHOLOGY * F. Vaughan: Essential dimensions of consciousness: Objective, subjective and intersubjective. * H. Hunt: Transpersonal and cognitive psychologies of consciousness: A necessary and reciprocal dialogue. * M. Schlitz: Transpersonal consciousness? Assessing the evidence. PL8: EMOTIONAL EXPERIENCE * A. Kaszniak: Conscious experience and autonomic response to emotion following frontal lobe damage. * R.D. Lane: Subregions within the anterior cingulate cortex may differentially participate in phenomenal and reflective consciousness awareness of emotion. Thursday April 30 PL9: EVOLUTION AND FUNCTION OF CONSCIOUSNESS I * S. Mithen: Handaxes: Some hard evidence regarding the evolution of the mind and consciousness * N. Humphrey: Cave painting, autism and the evolution of the human mind. * Third speaker TBA PL10: EVOLUTION AND FUNCTION OF CONSCIOUSNESS II * A.G. Cairns-Smith: If qualia evolved... * R.L. Gregory: What do qualia do? PL11: THE EXPLANATORY GAP * J. Levine: Conceivability, possibility, and the explanatory gap * C. McGinn: The explanatory gap [working title] * G. 
Rosenberg: On the intrinsic nature of the physical. Friday May 1 PL12: CULTURE AND CONSCIOUSNESS * A. Zajonc: Goethe and the science of consciousness: Toward a scientist's phenomenology of mind. * C. Laughlin: Biogenetic structural theory and the neurophenomenology of consciousness. * M. Winkelman: The fundamental properties of systems with consciousness. PL13: BLINDSIGHT * P. Stoerig, A. Cowey, R. Goebel: Blindsight and its neuronal basis. * S. Zeki: Blindsight [working title] PL14: SPACE, TIME, AND CONSCIOUSNESS * L. Smolin: Space, time and consciousness [working title] * P. Hut: Exploring actuality, through experiment and experience. * K. Yasue: Consciousness and photon dynamics in the brain. Saturday May 2 PL15: NEURAL CORRELATES OF CONSCIOUSNESS * B. Baars: Is a real psychoscope possible? Inferring when brain scans show us conscious experiences. * A. Revonsuo: How to take consciousness seriously in cognitive neuroscience. * J.B. Newman: Beyond pandemonium: the role of the reticular core in unifying the stream of consciousness PL16: AESTHETICS AND CONSCIOUSNESS * C.W. Tyler: The structure of interpersonal consciousness in art. * Second speaker TBA --------------------------------------------------------------- CONCURRENT SESSIONS Each session will have five speakers. For more details, see the conference web site. 
Monday April 27 C1: Qualia C2: Neural correlates of consciousness C3: Implicit cognition C4: Time C5: Isomorphism between phenomenology and neuroscience C6: Crosscultural perspectives Tuesday April 28 C7: Materialism and dualism C8: The function of consciousness C9: Attention and vision C10: Quantum biology and consciousness C11: Parapsychology C12: Consciousness and literature C13: Awareness, attention, and memory during sleep Thursday April 30 C14: The concept of consciousness C15: Computational and cognitive models C16: Blindsight C17: Evolution of consciousness C18: Altered states of consciousness C19: First-, second-, and third-person perspectives C20: Unconscious influences on motivational/affective awareness Friday May 1 C21: Unity of consciousness and the self C22: Ethics C23: Sleep and dreaming C24: Consciousness and physical reality C25: Emotion and volition C26: Art, music, and consciousness --------------------------------------------------------------- REGISTRATION FORM Toward a Science of Consciousness 1998 81ULCON227 (Please print in block letters or type.) Mr./Mrs./Ms./Dr. __________________________________________ Organization ______________________________________________ Address ___________________________________________________ City ________________________ State/Province______________ Postal Code__________________ Country_____________________ Daytime phone _______________ Fax ________________________ E-mail ____________________________________________________ Conference Fees ___ Early registration (payment received before February 2) $250.00 ___ Registration fee (payment received after February 2) $325.00 ___ Early student registration fee (payment before February 2) $100.00 Please include a copy of your current student ID. 
___ Student registration fee (for current, full-time students) $150.00 Other Fees ___ Banquet, April 29, White Stallion Ranch $55.00 ___ Guest at banquet (Name ___________________________) $55.00 Meal choice: ___ chicken ___ salmon ___ vegetarian Pre-conference Workshops Saturday, April 25 All day ___ Observing the Mind: Basic Training in Skilled Means (C. Tart) $90.00 ___ Dream Interpretation (D. Roomy) $90.00 Morning ___ Health, Healing and Consciousness (Kohatsu/Koffler/Lee) $45.00 Afternoon ___ "Global workspace" capacity in the brain (B. Baars) $45.00 Sunday, April 26 Morning ___ Overview of Tucson III (V. Shamas) $45.00 ___ Quantum Theory, Reality and Consciousness (P. Pylkkanen) $45.00 ___ Exceptional Experience in Sports (R. White & S. Brown) $45.00 Afternoon ___ The Mammalian Visual System (C. Koch) $45.00 ___ Exploring Consciousness with Lucid Dreaming (S. LaBerge) $45.00 ___ Consciousness and the Binding Problem (A. Revonsuo) $45.00 Field Trips ___ Sabino Canyon $45.00 ___ Tubac and San Xavier $45.00 Total $_____ Payment Information Total payment $_________ ___ Check enclosed (in dollars from US bank), payable to Extended University ___ Credit card Visa____ MasterCard____ Account number ___________________________________________ Expiration date _______ Signature ________________________ ___ Purchase Order (enclose please) There are four ways to register. Payment or purchase order must accompany registration. PHONE: Call 520-621-7724 from 8:00 a.m.-5:00 p.m. MST, Monday-Friday. VISA and MasterCard accepted. FAX: Fax this form to 520-621-3269. Fax lines are open 24 hours. VISA and MasterCard accepted. MAIL: Send this form with payment to The University of Arizona Extended University; Attention: Registration; P.O. Box 210158; Tucson, AZ 85721-0158. E-MAIL: Send to extuniv at ccit.arizona.edu. Please include conference name, your name, priority code from the mail panel, address, daytime phone. 
Include full details of workshop options and total amount to be charged to your credit card. Give VISA or MasterCard number and expiration date. Cancellation policy: If you cancel your registration in writing by March 27, you'll receive a refund less a $35 cancellation fee. Non-attendance does not constitute a cancellation. If you have a disability and require accommodation, please contact us at the time of registration at 520-621-7724. From tho at james.hut.fi Wed Jan 7 10:02:08 1998 From: tho at james.hut.fi (Timo Honkela) Date: Wed, 7 Jan 1998 17:02:08 +0200 (EET) Subject: Thesis: Self-Organizing Maps in Natural Language Processing Message-ID: The following Dr.Phil. thesis is available at http://www.cis.hut.fi/~tho/thesis/honkela.ps.Z (compressed postscript) http://www.cis.hut.fi/~tho/thesis/honkela.ps (postscript) http://www.cis.hut.fi/~tho/thesis/ (html) ---------------------------------------------------------------------- SELF-ORGANIZING MAPS IN NATURAL LANGUAGE PROCESSING Timo Honkela Helsinki University of Technology Neural Networks Research Centre P.O.Box 2200 (Rakentajanaukio 2C) FIN-02015 HUT, Finland Timo.Honkela at hut.fi Kohonen's Self-Organizing Map (SOM) is one of the most popular artificial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing.
The approach based on the word category maps is compared with the methods that are widely used in artificial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining for which a specific SOM-based method called the WEBSOM has been developed. The WEBSOM method organizes a document collection on a map display that provides an overview of the collection and facilitates interactive browsing. ------------------- --------------------------- Timo Honkela Timo.Honkela at hut.fi http://www.cis.hut.fi/~tho/ Neural Networks Research Centre, Helsinki Univ of Technology and P.O.Box 2200 FIN-02015 HUT, Finland Nat Lang Proc Tel. +358-9-451 3275, Fax +358-9-451 3277 From becker at curie.psychology.mcmaster.ca Wed Jan 7 14:14:14 1998 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Wed, 7 Jan 1998 14:14:14 -0500 (EST) Subject: graduate training opportunities Message-ID: Neuroscience Graduate Training Opportunities The Center for Neural Systems at McMaster University is a multidisciplinary research group, whose faculty members span the departments of Psychology, Biology, Electrical and Computer Engineering, and Computer Science. Students in the CNS group register in the graduate program of their thesis advisor's home department. The CNS provides an exciting intellectual environment and excellent shared facilities for studying the nervous system at a variety of levels ranging from molecular to systems and theoretical. Our laboratories employ the most advanced techniques, including neural pathway tracing, brain imaging, computational modelling, electrophysiology and genetics. 
Application materials for Graduate Studies in Neuroscience are available on our web page, http://www.science.mcmaster.ca/Psychology/behave.comp.neuro.html or by writing to the Center for Neural Systems, McMaster University, Department of Psychology, 1280 Main Street West, Hamilton, Ontario. From oby at cs.tu-berlin.de Thu Jan 8 11:12:01 1998 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Thu, 8 Jan 1998 17:12:01 +0100 (MET) Subject: paper available Message-ID: <199801081612.RAA26533@pollux.cs.tu-berlin.de> Dear Connectionists, The following tech-report and paper are available online: -------------------------------------------------------------------- A Model for the Intracortical Origin of Orientation Preference and Tuning in Macaque Striate Cortex Peter Adorjan^1, Jonathan B. Levitt^2, Jennifer S. Lund^2, and Klaus Obermayer^1 ^1 CS Department, Technical University of Berlin, Berlin, Germany, ^2 Institute for Ophthalmology, UCL, London, UK We report results of numerical simulations for a model of orientation selectivity in macaque striate cortex. In contrast to previous models, where the initial orientation bias is generated by convergent geniculate input to simple cells and subsequently sharpened by lateral circuits, this approach is based on anisotropic intracortical excitatory connections which provide both the initial orientation bias and the subsequent amplification. Our study shows that the emerging response properties are similar to the response properties which are observed experimentally, hence the hypothesis of an intracortical generation of orientation bias is a sensible alternative to the notion of an afferent bias by convergent geniculocortical projection patterns. 
In contrast to models based on an afferent orientation bias, however, the ``intracortical hypothesis'' predicts that orientation tuning gradually evolves from an initially nonoriented response, and that orientation tuning is completely lost when the recurrent excitation is blocked; new experiments must be designed to decide unambiguously between the two hypotheses. TU Berlin Technical Report, TR 98-1, http://kon.cs.tu-berlin.de/publications/#techrep ------------------------------------------------------------------------- Development and Regeneration of the Retinotectal Map in Goldfish: A Computational Study C. Weber^1, H. Ritter^2, J. Cowan^3, and K. Obermayer^1 ^1 CS Department, Technical University of Berlin, Berlin, Germany, ^2 Technische Fakultaet, University of Bielefeld, Germany, ^3 Departments of Mathematics and Neurology, The University of Chicago, IL, USA We present a simple computational model to study the interplay of activity-dependent and intrinsic processes thought to be involved in the formation of topographic neural projections. Our model consists of two input layers which project to one target layer. The connections between layers are described by a set of synaptic weights. These weights develop according to three interacting developmental rules: (i) an intrinsic fiber-target interaction which generates chemospecific adhesion between afferent fibers and target cells, (ii) an intrinsic fiber-fiber interaction which generates mutual selective adhesion between the afferent fibers and (iii) an activity-dependent fiber-fiber interaction which implements Hebbian learning. Additionally, constraints are imposed to keep synaptic weights finite. The model is applied to a set of eleven experiments on the regeneration of the retinotectal projection in goldfish. We find that the model is able to reproduce the outcome of an unprecedented range of experiments with the same set of model parameters, including details of the size of receptive and projective fields. 
We expect this mathematical framework to be a useful tool for the analysis of developmental processes in general. Phil. Trans. Roy. Soc. Lond. B 352, 1603-1623 (1997) http://kon.cs.tu-berlin.de/publications/#journals From bmg at numbat.cs.rmit.edu.au Thu Jan 8 05:27:38 1998 From: bmg at numbat.cs.rmit.edu.au (B Garner) Date: Thu, 8 Jan 1998 21:27:38 +1100 (EST) Subject: two papers Message-ID: <199801081027.VAA03715@numbat.cs.rmit.edu.au> These published papers are available at the following WWW sites http://yallara.cs.rmit.edu.au/~bmg/algA.rtf http://yallara.cs.rmit.edu.au/~bmg/algB.rtf ************************************************************************** A symbolic solution for adaptive feedforward neural networks found with a new training algorithm B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT Traditional adaptive feedforward neural network (NN) training algorithms find numerical values for the weights and thresholds. In this paper it is shown that a NN composed of linear threshold gates (LTGs) can function as a fully trained neural network without finding numerical values for the weights and thresholds. This surprising result is demonstrated by presenting a new training algorithm for this type of NN that resolves the network into constraints that describe all the numeric values the NN's weights and thresholds can take. The constraints do not require a numerical solution for the network to function as a fully trained NN which can generalize. The solution is said to be symbolic as a numerical solution is not required. *************************************************************************** A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT There has been some interest in developing neural network training algorithms that determine their own architecture. 
A training algorithm for adaptive feedforward neural networks (NN) composed of Linear Threshold Gates (LTGs) is presented here that determines its own architecture and trains in a single pass. This algorithm produces what is said to be a symbolic solution as it resolves the relationships between the weights and the thresholds into constraints that do not need to be solved numerically. The network has been shown to behave as a fully trained neural network which generalizes, and the possibility that the algorithm has polynomial time complexity is discussed. The algorithm uses binary data during training. Bernadette ============================================================================= Bernadette Garner He shall fall down a pit called Because, and there bmg at numbat.cs.rmit.edu.au he shall perish with the dogs of Reason http://yallara.cs.rmit.edu.au/~bmg/ - Aleister Crowley ============================================================================= From reggia at cs.umd.edu Thu Jan 8 14:11:50 1998 From: reggia at cs.umd.edu (James A. Reggia) Date: Thu, 8 Jan 1998 14:11:50 -0500 (EST) Subject: Travel Fellowships, Neural Modeling Brain/Cognitive Disorders Meeting Message-ID: <199801081911.OAA12824@avion.cs.umd.edu> We are happy to announce that funding is expected for a few TRAVEL FELLOWSHIPS to THE SECOND INTERNATIONAL WORKSHOP ON NEURAL MODELING OF BRAIN AND COGNITIVE DISORDERS held on June 4 - 6, 1998 at the University of Maryland, College Park, just outside of Washington, DC. Preference will be given in awarding this travel support to students, post-docs or residents with posters accepted for presentation at the meeting. The focus of this meeting will be on lesioning neural models to study disorders in neurology, neuropsychology and psychiatry, e.g., Alzheimer's disease, amnesia, aphasia, depression, epilepsy, neglect, parkinsonism, schizophrenia, and stroke. 
Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single page submission should include title, author(s), contact information (address and email/fax), and abstract. One inch margins and a typesize of at least 10 points should be used. Abstracts will be reviewed by the Program Committee; those accepted will be published in the workshop proceedings. Six copies of the camera-ready abstract should be mailed TO ARRIVE by February 3, 1998 to James Reggia, Dept. of Computer Science, A.V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. The latest information about the meeting can be found at http://www.cs.umd.edu/~reggia/workshop/ To receive registration materials (distributed most likely in February), please send your name, address, email address, phone number and fax number to Cecilia Kullman, UMIACS, A. V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. (Tel: (301) 405-0304, Fax: (301) 314-9658, and email: cecilia at umiacs.umd.edu). From bmg at numbat.cs.rmit.edu.au Fri Jan 9 06:38:03 1998 From: bmg at numbat.cs.rmit.edu.au (B Garner) Date: Fri, 9 Jan 1998 22:38:03 +1100 (EST) Subject: two papers Message-ID: <199801091138.WAA18834@numbat.cs.rmit.edu.au> A number of people have requested that the following published papers be made available in postscript format. They are now available at WWW site http://yallara.cs.rmit.edu.au/~bmg/algA.ps http://yallara.cs.rmit.edu.au/~bmg/algB.ps I apologize if you have received multiple copies of this posting. ************************************************************************** A symbolic solution for adaptive feedforward neural networks found with a new training algorithm B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. *************************************************************************** A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. 
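As a hypothetical illustration of the ``symbolic solution'' idea in the abstracts above (this is not the papers' training algorithm), each binary training pattern for a single linear threshold gate can be recorded as a linear constraint on the weights and threshold; the collected constraints then describe every numeric assignment that realizes the target function:

```python
# Hypothetical sketch: training a single linear threshold gate (LTG)
# symbolically.  A pattern with target 1 requires the weighted sum to
# reach the threshold T; a pattern with target 0 requires it to fall
# below T.  No numeric solution is computed.
def ltg_constraints(samples):
    """samples: list of (binary input tuple, target bit) pairs."""
    constraints = []
    for x, t in samples:
        lhs = " + ".join(f"w{i}" for i, xi in enumerate(x) if xi) or "0"
        rel = ">=" if t == 1 else "<"
        constraints.append(f"{lhs} {rel} T")
    return constraints

# Constraints describing every weight/threshold assignment realizing AND:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for c in ltg_constraints(and_data):
    print(c)
```

Any weights and threshold satisfying all four printed constraints implement AND, which is the sense in which the constraint set, rather than one particular numeric solution, characterizes the trained gate.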
Bernadette From Melanie.Hilario at cui.unige.ch Fri Jan 9 08:34:20 1998 From: Melanie.Hilario at cui.unige.ch (Melanie Hilario) Date: Fri, 09 Jan 1998 14:34:20 +0100 Subject: CFP: ECML'98 WS - Upgrading Learning to the Meta-Level Message-ID: <34B6275C.2A73@cui.unige.ch> [Our apologies if you receive multiple copies of this CFP] Call for Papers ECML'98 Workshop UPGRADING LEARNING TO THE META-LEVEL: MODEL SELECTION AND DATA TRANSFORMATION To be held in conjunction with the 10th European Conference on Machine Learning Chemnitz, Germany, April 24, 1998 http://www.cs.bris.ac.uk/~cgc/ecml98-ws.html Motivation and Technical Description Over the past decade, machine learning (ML) techniques have successfully started the transition from research laboratories to the real world. The number of fielded applications has grown steadily, evidence that industry needs and uses ML techniques. However, most successful applications are custom-designed and the result of skillful use of human expertise. This is due, in part, to the large, ever-increasing number of available ML models, their relative complexity and the lack of systematic methods for discriminating among them. Current data mining tools are only as powerful/useful as their users. They provide multiple techniques within a single system, but the selection and combination of these techniques are external to the system and performed by the user. This makes it difficult and costly for non-initiated users to access the much-needed technology directly. The problem of model selection is that of choosing the appropriate learning method/model for a given application task. 
It is currently a matter of consensus that there are no universally superior models and methods for learning. The key question in model selection is not which learning method is better than the others, but under which precise conditions a given method is better than others for a given task. The problem of data transformation is distinct but inseparable from model selection. Data often need to be cleaned and transformed before applying (or even selecting) a learning algorithm. Here again, the hurdle is that of choosing the appropriate method for the specific transformation required. In both the learning and data pre-processing phases, users often resort to a trial-and-error process to select the most suitable model. Clearly, trying all possible options is impractical, and choosing the option that appears most promising often yields a sub-optimal solution. Hence, an informed search process is needed to reduce the amount of experimentation while avoiding the pitfalls of local optima. Informed search requires meta-knowledge, which is not available to non-initiated, industrial end-users. Objectives and Scope The aim of this workshop is to explore the different ways of acquiring and using the meta-knowledge needed to address the model selection and data transformation problems. For some researchers, the choice of learning and data transformation methods should be fully automated if machine learning and data mining systems are to be of any use to non-specialists. Others claim that full automation of the learning process is not within the reach of current technology. Still others doubt that it is even desirable. An intermediate solution is the design of assistant systems which aim less to replace the user than to help him make the right choices or, failing that, to guide him through the space of experiments. Whichever the proposed solution, there seems to be an implicit agreement that meta-knowledge should be integrated seamlessly into the learning tool. 
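A minimal sketch of the search problem described above (hypothetical code, not one of the systems discussed at the workshop): score a few candidate learning methods on held-out data and select the best, rather than hand-tuning by trial and error.

```python
# Hypothetical model-selection sketch: compare candidate methods by
# leave-one-out accuracy on a toy 1-D dataset and keep the winner.
def loo_accuracy(fit_predict, data):
    """Leave-one-out accuracy of a fit_predict(train, x) -> label method."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += fit_predict(train, x) == y
    return hits / len(data)

def majority(train, x):
    # Baseline: ignore x, predict the most frequent training label.
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def nearest(train, x):
    # 1-nearest-neighbour on the 1-D feature.
    return min(train, key=lambda p: abs(p[0] - x))[1]

data = [(0.1, "a"), (0.2, "a"), (0.3, "a"), (0.4, "a"), (0.9, "b"), (1.0, "b")]
candidates = {"majority": majority, "1-nn": nearest}
scores = {name: loo_accuracy(m, data) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

Exhaustive scoring is affordable for two candidates on six points; the workshop's premise is precisely that it is not affordable in general, which is where meta-knowledge comes in to prune the search.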
This workshop is intended to bring together researchers who have attempted to use meta-level approaches to automate or guide decision-making at all stages of the learning process. One broad line of research is the static use of prior (meta-)knowledge. Knowledge-based approaches to model selection have been explored in both symbolic and neural network learning. For instance, prior knowledge of invariances has been used to select the appropriate neural network architecture for optical character recognition problems. Another research avenue aims at augmenting and/or refining meta-knowledge dynamically across different learning experiences. Meta-learning approaches have been attempted to automate model selection (as in VBMS and StatLog) as well as model arbitration and model combination (as in JAM). Contributions are sought on any of the above--or other--approaches from all main sub-fields of machine learning, including neural networks, symbolic machine learning and inductive logic programming. The results of this workshop will extend those of prior workshops, such as the ECML95 Workshop on Learning at the Knowledge Level and the ICML97 Workshop on Machine Learning Applications in the Real World, as well as complement those of the upcoming AAAI98/ICML98 Workshop on the Methodology of Applying Machine Learning. Format and Schedule The workshop will consist of one invited talk, a number of refereed contributions and small group discussions. The idea is to bring researchers together to present current work and identify future areas of research and development. This is intended to be a one-day workshop and the proposed schedule is as follows. 
9:00 Welcome 10:00 Paper session (5 x 30mins) 12:30 Lunch 1:30 Paper session (3 x 30mins) 3:00 Summary: the issues/the future 3:15 Small group discussions (3-4 groups) 4:00 Reports from each group 4:45 Closing remarks 5:00 End Timetable The following timetable will be strictly adhered to: * Registration of interest: starting now (email to: Christophe G-C, please specify intention to attend/intention to submit a paper) * Submission of paper: 6 March 1998 (electronic postscript only to either organiser: Christophe G-C, Melanie H) * Notification of acceptance: 20 March 1998 * Camera-ready: 28 March 1998 Program Committee Submitted papers will be reviewed by at least two independent referees from the following program committee. Pavel Brazdil, University of Porto Robert Engels, University of Karlsruhe Dieter Fensel, University of Karlsruhe Jean-Gabriel Ganascia, Universite Pierre et Marie Curie Christophe Giraud-Carrier, University of Bristol Ashok Goel, Georgia Institute of Technology Melanie Hilario, University of Geneva Igor Kononenko, University of Ljubljana Dunja Mladenic, Jozef Stefan Institute, Slovenia Gholamreza Nakhaeizadeh, Daimler-Benz Ashwin Ram, Georgia Institute of Technology Colin Shearer, Integrated Solutions Ltd Walter Van de Velde, Riverland Next Generation Maarten van Someren, University of Amsterdam Gerhard Widmer, Austrian Research Institute for Artificial Intelligence Accepted papers will be published in the workshop proceedings and contributors will be allocated 30 minutes for an oral presentation during the workshop. 
Organisers Christophe Giraud-Carrier Department of Computer Science University of Bristol Bristol, BS8 1UB United Kingdom Tel: +44-117-954-5145 Fax: +44-117-954-5208 Email: cgc at cs.bris.ac.uk Melanie Hilario Computer Science Department University of Geneva 24, Rue General-Dufour CH-1211 Geneva 4 Switzerland Tel: +41-22-705-7791 Fax: +41-22-705-7780 Email: Melanie.Hilario at cui.unige.ch From shimone at cogs.susx.ac.uk Fri Jan 9 07:03:59 1998 From: shimone at cogs.susx.ac.uk (Shimon Edelman) Date: Fri, 9 Jan 1998 12:03:59 +0000 (GMT) Subject: preprint on visual recognition and categorization Message-ID: --------------------------------------------------------------------- Visual recognition and categorization on the basis of similarities to multiple class prototypes Sharon Duvdevani-Bar and Shimon Edelman Available directly via FTP ftp://eris.wisdom.weizmann.ac.il/pub/recog+categ.ps.Z or via this Web page http://eris.wisdom.weizmann.ac.il/~edelman/archive.html Abstract: One of the difficulties of object recognition stems from the need to overcome the variability in object appearance caused by factors such as illumination and pose. The influence of these factors can be countered by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Difficulties of another kind arise in daily life situations that require categorization, rather than recognition, of objects. We show that, although categorization cannot rely on interpolation between stored examples, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes is computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies. 
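The prototype-based scheme in the abstract above can be sketched as follows (a hypothetical toy example with invented categories and numbers, not the authors' representation): a new instance is assigned to the category whose best-matching stored prototype is most similar to it.

```python
# Hypothetical sketch of categorization by similarity to multiple
# class prototypes: similarity is a Gaussian-like function of squared
# distance, and each category is represented by several prototypes.
import math

def similarity(x, p):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)))

def categorize(x, prototypes):
    """prototypes: {category: [prototype vectors]}.  Pick the category
    whose best-matching prototype is most similar to x."""
    return max(prototypes,
               key=lambda c: max(similarity(x, p) for p in prototypes[c]))

protos = {
    "cup":   [(1.0, 0.2), (0.8, 0.4)],
    "chair": [(0.1, 0.9), (0.2, 1.1)],
}
print(categorize((0.9, 0.3), protos))
```

Using the maximum over a category's prototypes, rather than a single average, is what lets a category cover instances that no one prototype resembles on its own.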
--------------------------------------------------------------------- [get it now] Comments welcome. -Shimon Shimon Edelman, School of Cognitive and Computing Sciences University of Sussex, Falmer, Brighton BN1 9QH, UK http://www.cogs.susx.ac.uk/users/shimone +44 1273 678659 From pazienza at info.utovrm.it Fri Jan 9 08:27:22 1998 From: pazienza at info.utovrm.it (Maria Teresa Pazienza) Date: Fri, 9 Jan 1998 15:27:22 +0200 Subject: Job offer Message-ID: The AI group of the Department of Computer Science, Systems and Production, University of Rome Tor Vergata (ITALY), long involved in research on Lexical Acquisition, Machine Learning and the Engineering of adaptive NLP systems, is looking for an experienced computer scientist interested in joining the AI group in Rome for an international (state-of-the-art research & development) project in the area of text processing for Information Extraction and Filtering. The project's specific goal is multilingual (English, Spanish and Italian) extraction of information from Web documents. The AI group that will host the candidate has a long-standing tradition in the engineering of NLP systems, and is currently integrating its existing systems for Lexical Acquisition and Text Processing in Italian and English into an industrial prototype. A qualified candidate should preferably have a PhD degree in Computer Science, Software Engineering or Computational Linguistics, with extensive programming experience in at least one of the following languages: C++, Java or Prolog. A strong background in the software engineering of large-scale text processing systems and an aptitude for innovative approaches in natural language processing (statistical as well as symbolic methods) are highly preferred. Although this is not a job only for theoreticians, a specific talent for research problems, experimental studies and familiarity with empirical methods in NL are a plus. The candidate should know UNIX very well and in general be a skilled programmer. 
Knowledge of Java programming under Windows NT is relevant, but not necessary. The position corresponds to a contract of the University of Roma, Tor Vergata for (at least) one year (March 1998-March 1999): salary and conditions are equivalent to the position of a researcher at the University. To apply for this position, please contact/send a curriculum by fax / e-mail to: Maria Teresa Pazienza Department of Computer Science, Systems and Production University of Roma, Tor Vergata Via di Tor Vergata 00133 Roma, (ITALY) fax : +39 6 72597460 tel : +39 6 72597378 e-mail: pazienza at info.utovrm.it -------------------------------------------------- prof. Maria Teresa Pazienza Dept. of Computer Science, Systems and Production University of Roma, Tor Vergata Via di Tor Vergata 00133 ROMA (ITALY) tel +39 6 72597378 fax +39 6 72597460 e_mail: pazienza at info.utovrm.it http://babele.info.utovrm.it/ -------------------------------------------------- From zoubin at cs.toronto.edu Fri Jan 9 17:24:05 1998 From: zoubin at cs.toronto.edu (Zoubin Ghahramani) Date: Fri, 9 Jan 1998 17:24:05 -0500 Subject: paper: Hierarchical Factor Analysis and Topographic Maps Message-ID: <98Jan9.172413edt.1352@neuron.ai.toronto.edu> The following paper is now available at: ftp://ftp.cs.toronto.edu/pub/zoubin/nips97.ps.gz http://www.cs.toronto.edu/~zoubin ---------------------------------------------------------------------- Hierarchical Non-linear Factor Analysis and Topographic Maps Zoubin Ghahramani and Geoffrey E. Hinton Department of Computer Science University of Toronto We first describe a hierarchical, generative model that can be viewed as a non-linear generalisation of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information. 
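As a point of reference for the non-linear, hierarchical model described above, the linear special case, ordinary factor analysis viewed as a generative model, can be sketched in a few lines (a hypothetical illustration with invented dimensions, not the paper's code):

```python
# Hypothetical sketch of linear factor analysis as a generative model:
# hidden causes z are mapped top-down through loadings Lambda, and
# independent sensor noise is added.  The implied covariance of the
# observations is Lambda Lambda^T + diag(psi).
import numpy as np

rng = np.random.default_rng(1)
n_factors, n_obs = 2, 5
Lambda = rng.normal(size=(n_obs, n_factors))  # factor loadings
psi = 0.1 * np.ones(n_obs)                    # per-sensor noise variances

def generate(n):
    z = rng.normal(size=(n, n_factors))               # top-down causes
    eps = rng.normal(size=(n, n_obs)) * np.sqrt(psi)  # sensor noise
    return z @ Lambda.T + eps                         # observed data

X = generate(10000)
emp = np.cov(X, rowvar=False)                 # empirical covariance
model = Lambda @ Lambda.T + np.diag(psi)      # model-implied covariance
print(np.abs(emp - model).max())
```

The paper's model replaces this single linear layer with a hierarchy of non-linear layers plus lateral connections; the linear sketch only shows the generative, top-down reading that the abstract builds on.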
We then show how to incorporate lateral connections into the generative model. The model extracts a sparse, distributed, hierarchical representation of depth from simplified random-dot stereograms and the localised disparity detectors in the first hidden layer form a topographic map. When presented with image patches from natural scenes, the model develops topographically organised local feature detectors. To appear in Jordan, M.I., Kearns, M.J., and Solla, S.A. Advances in Neural Information Processing Systems 10. MIT Press: Cambridge, MA, 1998. From kehagias at egnatia.ee.auth.gr Sat Jan 10 15:23:22 1998 From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias) Date: Sat, 10 Jan 1998 12:23:22 -0800 Subject: new paper on Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Message-ID: <34B7D8BA.2804@egnatia.ee.auth.gr> The following paper will appear in AUTOMATICA. While it is not strictly about neural networks, the presented analysis of credit assignment convergence, for a multimodel scheme, may be of interest for people working with modular neural networks, mixtures of experts and so on. Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Authors: V. Petridis and Ath. Kehagias Source: Automatica (to appear) Link: http://skiron.control.ee.auth.gr/~kehagias/97epeke.htm Abstract: Many methods have been developed to solve the problem of parameter estimation for dynamical systems (Ljung, 1987). Of particular interest is the case of on-line algorithms which are used to estimate time-varying parameters. Here we present such an algorithm which assumes a nonlinear dynamical system. The system is time-varying: its parameter changes values according to a Markovian model switching mechanism. 
The algorithm starts with a finite number of models, each corresponding to one of the parameter values, and selects the ``phenomenologically best'' parameter value; namely the one which produces the best fit to the observed behavior of the system. Our algorithm is related to the Partition Algorithm (PA) presented in (Hilborn & Lainiotis, 1969; Lainiotis, 1971; Lainiotis & Plataniotis, 1994; Sims, Lainiotis & Magill, 1969). PA is suitable for the parameter estimation of a linear dynamical system with Gaussian noise in the input and output; no provision is made for model switching. Under these assumptions, an algorithm is developed for exact computation of the models' posterior probabilities; these are used for Maximum a Posteriori (MAP) estimation of the unknown parameter. This method has been used extensively in a number of applications, including parameter estimation and system identification (Kehagias, 1991; Lainiotis & Plataniotis, 1994; Petridis, 1981). Our algorithm is more general than the PA: it applies to nonlinear systems and requires no probabilistic assumptions regarding the noise. Furthermore, while there are several convergence studies of the PA without a switching mechanism (Anderson & Moore, 1979; Kehagias, 1991; Tugnait, 1980), as far as we know, the analysis presented here is the first one that handles the Markovian switching assumption. A rigorous convergence analysis is also presented. Thanasis Kehagias, Research Associate, Dept. of Electrical and Computer Eng, Aristotle Un., Thessaloniki Ass. Prof., Dept. 
of Mathematics and Computer Sci., American College of Thessaloniki http://skiron.control.ee.auth.gr/~kehagias/index.htm From kehagias at egnatia.ee.auth.gr Sun Jan 11 23:03:05 1998 From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias) Date: Sun, 11 Jan 1998 20:03:05 -0800 Subject: correction to new paper URL Message-ID: <34B995F9.59FA@egnatia.ee.auth.gr> I apologize for reposting, but there was an error in the URL I gave in the announcement of my paper; it should be http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Authors: V. Petridis and Ath. Kehagias Source: Automatica (to appear) Link: http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm -- Athanasios Kehagias, Research Associate, Dept. of Electrical and Computer Eng., Aristotle University of Thessaloniki, GR54006, Thessaloniki, GREECE and Assistant Professor, Dept. of Mathematics and Computer Science, American College of Thessaloniki, P.O. 
Box 21021, GR 55510 Pylea, Thessaloniki, GREECE home page: http://skiron.control.ee.auth.gr/~kehagias/index.htm email: kehagias at egnatia.ee.auth.gr, kehagias at ac.anatolia.edu.gr From blair at it.uq.edu.au Wed Jan 7 00:09:11 1998 From: blair at it.uq.edu.au (Alan Blair) Date: Wed, 7 Jan 1998 15:09:11 +1000 (EST) Subject: Research positions available - Brisbane, Australia Message-ID: Project: Dynamical recognizers and complexity classes of languages Keywords: neural networks, dynamical systems, language induction The Cognitive Science group at The University of Queensland is seeking qualified applicants for projects in the area of neural networks and formal language learning. 1. Postdoctoral fellow or senior research assistant (1 year with possible extension for a second year) 2. Research assistant (6 months or half-time for 12 months) Qualifications required: Excellent programming skills in Java (or C) and Matlab, experience with neural network simulations. Background in dynamical systems, formal language theory and/or linguistics would be an advantage. The positions may be suitable for masters or PhD students. Location: The University of Queensland is located in Brisbane, Australia. Unfortunately, these grants are unable to provide travel funds, so successful applicants would be responsible for their own travel arrangements. 
Applicants are encouraged to contact the principal investigators via email, and to send expressions of interest, current CV and names and contact details of three referees by 31 Jan 1998 to Dr Janet Wiles (away from email jan 1-12) Dr Alan Blair < blair at cs.uq.edu.au> fax: +61 7 3365 1999 phone: +61 7 3365 2902 ------------------------------------------------------------- Dr Janet Wiles _-_|\ Cognitive Science Group / * Dept of Computer Science & Electrical Engineering \_.-._/ The University of Queensland, 4072 AUSTRALIA v http://www.cs.uq.edu.au/MENU/RESEARCH_GROUPS/csrg/cogsci.html ------------------------------------------------------------- From tho at james.hut.fi Wed Jan 7 10:02:08 1998 From: tho at james.hut.fi (Timo Honkela) Date: Wed, 7 Jan 1998 17:02:08 +0200 (EET) Subject: Thesis: Self-Organizing Maps in Natural Language Processing Message-ID: The following Dr.Phil. thesis is available at http://www.cis.hut.fi/~tho/thesis/honkela.ps.Z (compressed postscript) http://www.cis.hut.fi/~tho/thesis/honkela.ps (postscript) http://www.cis.hut.fi/~tho/thesis/ (html) ---------------------------------------------------------------------- SELF-ORGANIZING MAPS IN NATURAL LANGUAGE PROCESSING Timo Honkela Helsinki University of Technology Neural Networks Research Centre P.O.Box 2200 (Rakentajanaukio 2C) FIN-02015 HUT, Finland Timo.Honkela at hut.fi Kohonen's Self-Organizing Map (SOM) is one of the most popular artificial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing. 
The approach based on the word category maps is compared with the methods that are widely used in artificial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining, for which a specific SOM-based method called the WEBSOM has been developed. The WEBSOM method organizes a document collection on a map display that provides an overview of the collection and facilitates interactive browsing.
---------------------------------------------
Timo Honkela, Timo.Honkela at hut.fi, http://www.cis.hut.fi/~tho/
Neural Networks Research Centre and Nat Lang Proc, Helsinki Univ of Technology
P.O.Box 2200, FIN-02015 HUT, Finland
Tel. +358-9-451 3275, Fax +358-9-451 3277
---------------------------------------------
From becker at curie.psychology.mcmaster.ca Wed Jan 7 14:14:14 1998 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Wed, 7 Jan 1998 14:14:14 -0500 (EST) Subject: graduate training opportunities Message-ID: Neuroscience Graduate Training Opportunities The Center for Neural Systems at McMaster University is a multidisciplinary research group, whose faculty members span the departments of Psychology, Biology, Electrical and Computer Engineering, and Computer Science. Students in the CNS group register in the graduate program of their thesis advisor's home department. The CNS provides an exciting intellectual environment and excellent shared facilities for studying the nervous system at a variety of levels ranging from molecular to systems and theoretical. Our laboratories employ the most advanced techniques, including neural pathway tracing, brain imaging, computational modelling, electrophysiology and genetics.
Application materials for Graduate Studies in Neuroscience are available on our web page, http://www.science.mcmaster.ca/Psychology/behave.comp.neuro.html or by writing to the Center for Neural Systems, McMaster University, Department of Psychology, 1280 Main Street West, Hamilton, Ontario. From oby at cs.tu-berlin.de Thu Jan 8 11:12:01 1998 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Thu, 8 Jan 1998 17:12:01 +0100 (MET) Subject: paper available Message-ID: <199801081612.RAA26533@pollux.cs.tu-berlin.de> Dear Connectionists, The following tech report and paper are available online: -------------------------------------------------------------------- A Model for the Intracortical Origin of Orientation Preference and Tuning in Macaque Striate Cortex Peter Adorjan^1, Jonathan B. Levitt^2, Jennifer S. Lund^2, and Klaus Obermayer^1 ^1 CS Department, Technical University of Berlin, Berlin, Germany, ^2 Institute of Ophthalmology, UCL, London, UK We report results of numerical simulations for a model of orientation selectivity in macaque striate cortex. In contrast to previous models, where the initial orientation bias is generated by convergent geniculate input to simple cells and subsequently sharpened by lateral circuits, this approach is based on anisotropic intracortical excitatory connections which provide both the initial orientation bias and the subsequent amplification. Our study shows that the emerging response properties are similar to those observed experimentally; hence the hypothesis of an intracortical generation of orientation bias is a sensible alternative to the notion of an afferent bias by convergent geniculocortical projection patterns.
In contrast to models based on an afferent orientation bias, however, the ``intracortical hypothesis'' predicts that orientation tuning gradually evolves from an initially nonoriented response, and that orientation tuning is completely lost when the recurrent excitation is blocked; new experiments must be designed to decide unambiguously between the two hypotheses. TU Berlin Technical Report, TR 98-1, http://kon.cs.tu-berlin.de/publications/#techrep ------------------------------------------------------------------------- Development and Regeneration of the Retinotectal Map in Goldfish: A Computational Study C. Weber^1, H. Ritter^2, J. Cowan^3, and K. Obermayer^1 ^1 CS Department, Technical University of Berlin, Berlin, Germany, ^2 Technische Fakultaet, University of Bielefeld, Germany, ^3 Departments of Mathematics and Neurology, The University of Chicago, IL, USA We present a simple computational model to study the interplay of activity-dependent and intrinsic processes thought to be involved in the formation of topographic neural projections. Our model consists of two input layers which project to one target layer. The connections between layers are described by a set of synaptic weights. These weights develop according to three interacting developmental rules: (i) an intrinsic fiber-target interaction which generates chemospecific adhesion between afferent fibers and target cells, (ii) an intrinsic fiber-fiber interaction which generates mutual selective adhesion between the afferent fibers and (iii) an activity-dependent fiber-fiber interaction which implements Hebbian learning. Additionally, constraints are imposed to keep synaptic weights finite. The model is applied to a set of eleven experiments on the regeneration of the retinotectal projection in goldfish. We find that the model is able to reproduce the outcome of an unprecedented range of experiments with the same set of model parameters, including details of the size of receptive and projective fields.
We expect this mathematical framework to be a useful tool for the analysis of developmental processes in general. Phil. Trans. Roy. Soc. Lond. B 352, 1603-1623 (1997) http://kon.cs.tu-berlin.de/publications/#journals From bmg at numbat.cs.rmit.edu.au Thu Jan 8 05:27:38 1998 From: bmg at numbat.cs.rmit.edu.au (B Garner) Date: Thu, 8 Jan 1998 21:27:38 +1100 (EST) Subject: two papers Message-ID: <199801081027.VAA03715@numbat.cs.rmit.edu.au> These published papers are available at the following WWW sites http://yallara.cs.rmit.edu.au/~bmg/algA.rtf http://yallara.cs.rmit.edu.au/~bmg/algB.rtf ************************************************************************** A symbolic solution for adaptive feedforward neural networks found with a new training algorithm B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT Traditional adaptive feedforward neural network (NN) training algorithms find numerical values for the weights and thresholds. In this paper it is shown that a NN composed of linear threshold gates (LTGs) can function as a fully trained neural network without finding numerical values for the weights and thresholds. This surprising result is demonstrated by presenting a new training algorithm for this type of NN that resolves the network into constraints which describe all the numeric values the NN's weights and thresholds can take. The constraints do not require a numerical solution for the network to function as a fully trained NN which can generalize. The solution is said to be symbolic as a numerical solution is not required. *************************************************************************** A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT There has been some interest in developing neural network training algorithms that determine their own architecture.
A training algorithm for adaptive feedforward neural networks (NN) composed of Linear Threshold Gates (LTGs) is presented here that determines its own architecture and trains in a single pass. This algorithm produces what is said to be a symbolic solution as it resolves the relationships between the weights and the thresholds into constraints which do not need to be solved numerically. The network has been shown to behave as a fully trained neural network which generalizes, and the possibility that the algorithm has polynomial time complexity is discussed. The algorithm uses binary data during training. Bernadette
=============================================================================
Bernadette Garner, bmg at numbat.cs.rmit.edu.au, http://yallara.cs.rmit.edu.au/~bmg/
"He shall fall down a pit called Because, and there he shall perish with the dogs of Reason" - Aleister Crowley
=============================================================================
From reggia at cs.umd.edu Thu Jan 8 14:11:50 1998 From: reggia at cs.umd.edu (James A. Reggia) Date: Thu, 8 Jan 1998 14:11:50 -0500 (EST) Subject: Travel Fellowships, Neural Modeling Brain/Cognitive Disorders Meeting Message-ID: <199801081911.OAA12824@avion.cs.umd.edu> We are happy to announce that funding is expected for a few TRAVEL FELLOWSHIPS to THE SECOND INTERNATIONAL WORKSHOP ON NEURAL MODELING OF BRAIN AND COGNITIVE DISORDERS held on June 4 - 6, 1998 at the University of Maryland, College Park, just outside of Washington, DC. Preference will be given in awarding this travel support to students, post-docs or residents with posters accepted for presentation at the meeting. The focus of this meeting will be on lesioning neural models to study disorders in neurology, neuropsychology and psychiatry, e.g., Alzheimer's disease, amnesia, aphasia, depression, epilepsy, neglect, parkinsonism, schizophrenia, and stroke.
Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single-page submission should include title, author(s), contact information (address and email/fax), and abstract. One-inch margins and a type size of at least 10 points should be used. Abstracts will be reviewed by the Program Committee; those accepted will be published in the workshop proceedings. Six copies of the camera-ready abstract should be mailed TO ARRIVE by February 3, 1998 to James Reggia, Dept. of Computer Science, A.V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. The latest information about the meeting can be found at http://www.cs.umd.edu/~reggia/workshop/ To receive registration materials (distributed most likely in February), please send your name, address, email address, phone number and fax number to Cecilia Kullman, UMIACS, A. V. Williams Bldg., University of Maryland, College Park, MD 20742 USA. (Tel: (301) 405-0304, Fax: (301) 314-9658, and email: cecilia at umiacs.umd.edu). From bmg at numbat.cs.rmit.edu.au Fri Jan 9 06:38:03 1998 From: bmg at numbat.cs.rmit.edu.au (B Garner) Date: Fri, 9 Jan 1998 22:38:03 +1100 (EST) Subject: two papers Message-ID: <199801091138.WAA18834@numbat.cs.rmit.edu.au> A number of people have requested that the following published papers be made available in postscript format. They are now available at WWW site http://yallara.cs.rmit.edu.au/~bmg/algA.ps http://yallara.cs.rmit.edu.au/~bmg/algB.ps I apologize if you have received multiple copies of this posting. ************************************************************************** A symbolic solution for adaptive feedforward neural networks found with a new training algorithm B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT Traditional adaptive feedforward neural network (NN) training algorithms find numerical values for the weights and thresholds.
In this paper it is shown that a NN composed of linear threshold gates (LTGs) can function as a fully trained neural network without finding numerical values for the weights and thresholds. This surprising result is demonstrated by presenting a new training algorithm for this type of NN that resolves the network into constraints which describe all the numeric values the NN's weights and thresholds can take. The constraints do not require a numerical solution for the network to function as a fully trained NN which can generalize. The solution is said to be symbolic as a numerical solution is not required. *************************************************************************** A training algorithm for Adaptive Feedforward Neural Networks that determines its own topology B. M. Garner, Department of Computer Science, RMIT, Melbourne, Australia. ABSTRACT There has been some interest in developing neural network training algorithms that determine their own architecture. A training algorithm for adaptive feedforward neural networks (NN) composed of Linear Threshold Gates (LTGs) is presented here that determines its own architecture and trains in a single pass. This algorithm produces what is said to be a symbolic solution as it resolves the relationships between the weights and the thresholds into constraints which do not need to be solved numerically. The network has been shown to behave as a fully trained neural network which generalizes, and the possibility that the algorithm has polynomial time complexity is discussed. The algorithm uses binary data during training.
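The constraint-based training idea in the two abstracts above can be illustrated with a toy sketch. This is an invented illustration, not the paper's algorithm: the AND gate, the integer search range, and all function names are assumptions made here for concreteness.

```python
from itertools import product

# Illustrative sketch only (not the algorithm from the papers above):
# a single linear threshold gate (LTG) outputs 1 iff w . x >= t.
# "Training" records one inequality per example instead of picking
# numeric weights; the constraint set describes every (w, t) that
# realizes the trained behaviour.

def satisfies(w, t, constraints):
    """Check whether weights w and threshold t meet every recorded inequality."""
    for x, target in constraints:
        s = sum(wi * xi for wi, xi in zip(w, x))
        if target == 1 and not s >= t:
            return False
        if target == 0 and not s < t:
            return False
    return True

# Symbolic "training" of an AND gate: four examples -> four inequalities.
constraints = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Enumerate a small integer range to show that the constraint set
# characterises all admissible numeric instantiations of the gate.
solutions = [(w1, w2, t)
             for w1, w2, t in product(range(-2, 3), repeat=3)
             if satisfies((w1, w2), t, constraints)]
```

Within this small range only w1 = w2 = 1, t = 2 survives; widening the range admits infinitely many equivalent instantiations, which is the sense in which the constraints themselves, rather than any one numeric solution, constitute the trained network.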
Bernadette
=============================================================================
Bernadette Garner, bmg at numbat.cs.rmit.edu.au, http://yallara.cs.rmit.edu.au/~bmg/
"He shall fall down a pit called Because, and there he shall perish with the dogs of Reason" - Aleister Crowley
=============================================================================
From Melanie.Hilario at cui.unige.ch Fri Jan 9 08:34:20 1998 From: Melanie.Hilario at cui.unige.ch (Melanie Hilario) Date: Fri, 09 Jan 1998 14:34:20 +0100 Subject: CFP: ECML'98 WS - Upgrading Learning to the Meta-Level Message-ID: <34B6275C.2A73@cui.unige.ch> [Our apologies if you receive multiple copies of this CFP] Call for Papers ECML'98 Workshop UPGRADING LEARNING TO THE META-LEVEL: MODEL SELECTION AND DATA TRANSFORMATION To be held in conjunction with the 10th European Conference on Machine Learning Chemnitz, Germany, April 24, 1998 http://www.cs.bris.ac.uk/~cgc/ecml98-ws.html Motivation and Technical Description Over the past decade, machine learning (ML) techniques have successfully started the transition from research laboratories to the real world. The number of fielded applications has grown steadily, evidence that industry needs and uses ML techniques. However, most successful applications are custom-designed and the result of skillful use of human expertise. This is due, in part, to the large, ever-increasing number of available ML models, their relative complexity and the lack of systematic methods for discriminating among them. Current data mining tools are only as powerful/useful as their users. They provide multiple techniques within a single system, but the selection and combination of these techniques are external to the system and performed by the user. This makes it difficult and costly for non-initiated users to access the much-needed technology directly. The problem of model selection is that of choosing the appropriate learning method/model for a given application task.
It is currently a matter of consensus that there are no universally superior models and methods for learning. The key question in model selection is not which learning method is better than the others, but under which precise conditions a given method is better than others for a given task. The problem of data transformation is distinct but inseparable from model selection. Data often need to be cleaned and transformed before applying (or even selecting) a learning algorithm. Here again, the hurdle is that of choosing the appropriate method for the specific transformation required. In both the learning and data pre-processing phases, users often resort to a trial-and-error process to select the most suitable model. Clearly, trying all possible options is impractical, and choosing the option that appears most promising often yields a sub-optimal solution. Hence, an informed search process is needed to reduce the amount of experimentation while avoiding the pitfalls of local optima. Informed search requires meta-knowledge, which is not available to non-initiated, industrial end-users. Objectives and Scope The aim of this workshop is to explore the different ways of acquiring and using the meta-knowledge needed to address the model selection and data transformation problems. For some researchers, the choice of learning and data transformation methods should be fully automated if machine learning and data mining systems are to be of any use to non-specialists. Others claim that full automation of the learning process is not within the reach of current technology. Still others doubt that it is even desirable. An intermediate solution is the design of assistant systems which aim less to replace the user than to help him make the right choices or, failing that, to guide him through the space of experiments. Whichever the proposed solution, there seems to be an implicit agreement that meta-knowledge should be integrated seamlessly into the learning tool.
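The trial-and-error baseline described above (fit each candidate, keep the one with the best held-out performance) can be sketched minimally as follows. The two candidate models, the data, and all names here are invented for illustration; they are not from the CFP.

```python
# Minimal illustration (invented, not from the CFP): selecting between
# two candidate models by validation error -- the trial-and-error
# baseline that meta-knowledge aims to improve upon.

def mean_model(train):
    # Baseline candidate: always predict the mean of the training targets.
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def linear_model(train):
    # Second candidate: 1-D least-squares fit y = a*x + b (closed form).
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def validation_error(model, data):
    # Mean squared error on held-out data.
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def select(candidates, train, valid):
    # Fit every candidate, return the name of the one with lowest error.
    fitted = [(name, fit(train)) for name, fit in candidates]
    return min(fitted, key=lambda nf: validation_error(nf[1], valid))[0]

train = [(0.0, 0.1), (1.0, 1.9), (2.0, 4.1), (3.0, 5.9)]
valid = [(4.0, 8.0), (5.0, 10.1)]
best = select([("mean", mean_model), ("linear", linear_model)], train, valid)
```

An informed, meta-level approach would replace this exhaustive fit-and-validate loop with meta-knowledge predicting which candidate suits the task, reducing the amount of experimentation as argued above.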
This workshop is intended to bring together researchers who have attempted to use meta-level approaches to automate or guide decision-making at all stages of the learning process. One broad line of research is the static use of prior (meta-)knowledge. Knowledge-based approaches to model selection have been explored in both symbolic and neural network learning. For instance, prior knowledge of invariances has been used to select the appropriate neural network architecture for optical character recognition problems. Another research avenue aims at augmenting and/or refining meta-knowledge dynamically across different learning experiences. Meta-learning approaches have been used to automate model selection (as in VBMS and StatLog) as well as model arbitration and model combination (as in JAM). Contributions are sought on any of the above--or other--approaches from all main sub-fields of machine learning, including neural networks, symbolic machine learning and inductive logic programming. The results of this workshop will extend those of prior workshops, such as the ECML95 Workshop on Learning at the Knowledge Level and the ICML97 Workshop on Machine Learning Applications in the Real World, as well as complement those of the upcoming AAAI98/ICML98 Workshop on the Methodology of Applying Machine Learning. Format and Schedule The workshop will consist of one invited talk, a number of refereed contributions and small group discussions. The idea is to bring researchers together to present current work and identify future areas of research and development. This is intended to be a one-day workshop and the proposed schedule is as follows.
9:00 Welcome
10:00 Paper session (5 x 30 mins)
12:30 Lunch
1:30 Paper session (3 x 30 mins)
3:00 Summary: the issues/the future
3:15 Small group discussions (3-4 groups)
4:00 Reports from each group
4:45 Closing remarks
5:00 End

Timetable

The following timetable will be strictly adhered to:

* Registration of interest: starting now (email to Christophe G-C; please specify intention to attend / intention to submit a paper)
* Submission of paper: 6 March 1998 (electronic postscript only, to either organiser: Christophe G-C or Melanie H)
* Notification of acceptance: 20 March 1998
* Camera-ready copy: 28 March 1998

Program Committee

Submitted papers will be reviewed by at least two independent referees from the following program committee.

Pavel Brazdil, University of Porto
Robert Engels, University of Karlsruhe
Dieter Fensel, University of Karlsruhe
Jean-Gabriel Ganascia, Universite Pierre et Marie Curie
Christophe Giraud-Carrier, University of Bristol
Ashok Goel, Georgia Institute of Technology
Melanie Hilario, University of Geneva
Igor Kononenko, University of Ljubljana
Dunja Mladenic, Jozef Stefan Institute, Slovenia
Gholamreza Nakhaeizadeh, Daimler-Benz
Ashwin Ram, Georgia Institute of Technology
Colin Shearer, Integrated Solutions Ltd
Walter Van de Velde, Riverland Next Generation
Maarten van Someren, University of Amsterdam
Gerhard Widmer, Austrian Research Institute for Artificial Intelligence

Accepted papers will be published in the workshop proceedings and contributors will be allocated 30 minutes for an oral presentation during the workshop.
Organisers

Christophe Giraud-Carrier
Department of Computer Science, University of Bristol
Bristol, BS8 1UB, United Kingdom
Tel: +44-117-954-5145 Fax: +44-117-954-5208
Email: cgc at cs.bris.ac.uk

Melanie Hilario
Computer Science Department, University of Geneva
24, Rue General-Dufour, CH-1211 Geneva 4, Switzerland
Tel: +41-22-705-7791 Fax: +41-22-705-7780
Email: Melanie.Hilario at cui.unige.ch

From shimone at cogs.susx.ac.uk Fri Jan 9 07:03:59 1998 From: shimone at cogs.susx.ac.uk (Shimon Edelman) Date: Fri, 9 Jan 1998 12:03:59 +0000 (GMT) Subject: preprint on visual recognition and categorization Message-ID: --------------------------------------------------------------------- Visual recognition and categorization on the basis of similarities to multiple class prototypes Sharon Duvdevani-Bar and Shimon Edelman Available directly via FTP ftp://eris.wisdom.weizmann.ac.il/pub/recog+categ.ps.Z or via this Web page http://eris.wisdom.weizmann.ac.il/~edelman/archive.html Abstract: One of the difficulties of object recognition stems from the need to overcome the variability in object appearance caused by factors such as illumination and pose. The influence of these factors can be countered by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Difficulties of another kind arise in daily life situations that require categorization, rather than recognition, of objects. We show that, although categorization cannot rely on interpolation between stored examples, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes is computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
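Categorization by similarities to multiple class prototypes, as in the abstract above, can be sketched roughly as follows. This is a hypothetical reading, not the paper's actual model: the Gaussian similarity measure, the summed class score, and the toy data are all assumptions made here for illustration.

```python
import math

# Hypothetical sketch of prototype-based categorization: a new instance
# is assigned to the class whose stored prototypes it resembles most in
# aggregate, with no interpolation between stored views of one object.

def similarity(x, p, sigma=1.0):
    # Gaussian similarity (an assumed choice) between vectors x and p.
    d2 = sum((a - b) ** 2 for a, b in zip(x, p))
    return math.exp(-d2 / (2 * sigma ** 2))

def categorize(x, prototypes):
    # prototypes: {class_name: [prototype vectors]}
    # Score of a class = summed similarity of x to that class's prototypes.
    scores = {c: sum(similarity(x, p) for p in ps)
              for c, ps in prototypes.items()}
    return max(scores, key=scores.get)

# Invented two-class, two-prototype example in a 2-D feature space.
protos = {"cat": [(0.0, 0.0), (0.5, 0.2)],
          "dog": [(3.0, 3.0), (3.5, 2.8)]}
label = categorize((0.3, 0.1), protos)
```

The point of the sketch is only structural: several representative members per category suffice to place a new instance, which is what makes the scheme workable when interpolation between stored examples is unavailable.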
--------------------------------------------------------------------- [get it now] Comments welcome. -Shimon Shimon Edelman, School of Cognitive and Computing Sciences University of Sussex, Falmer, Brighton BN1 9QH, UK http://www.cogs.susx.ac.uk/users/shimone +44 1273 678659 From pazienza at info.utovrm.it Fri Jan 9 08:27:22 1998 From: pazienza at info.utovrm.it (Maria Teresa Pazienza) Date: Fri, 9 Jan 1998 15:27:22 +0200 Subject: Job offer Message-ID: The AI group of the Department of Computer Science, Systems and Production, University of Rome Tor Vergata (ITALY), long involved in research on Lexical Acquisition, Machine Learning and the Engineering of adaptive NLP systems, is looking for an experienced computer scientist interested in joining the AI group in Rome for an international (state-of-the-art research & development) project in the area of text processing for Information Extraction and Filtering. The project's specific goal is multilingual (English, Spanish and Italian) extraction of information from Web documents. The AI group that will host the candidate has a long-lasting tradition in the engineering of NLP systems, and is currently integrating its existing systems for Lexical Acquisition and Text Processing in Italian and English into an industrial prototype. A qualified candidate should preferably have a PhD degree in Computer Science, Software Engineering or Computational Linguistics, with extensive programming experience in at least one of the following languages: C++, Java and Prolog. A strong background in software engineering of large-scale text processing systems and an aptitude for innovative approaches to natural language (statistical as well as symbolic methods) are highly preferred. Although this is not a job for theoreticians only, a specific talent for research problems, experimental studies and familiarity with the empirical methods in NL are a plus. Candidates should know UNIX very well and in general be skilled programmers.
Knowledge of Java programming under Windows NT is a relevant aspect, but not necessary. The position corresponds to a contract of the University of Roma, Tor Vergata for (at least) one year (March 1998-March 1999): salary and conditions are equivalent to the position of a researcher in the University. To apply for this position, please contact/send a curriculum by fax / e-mail to: Maria Teresa Pazienza Department of Computer Science, Systems and Production University of Roma, Tor Vergata Via di Tor Vergata 00133 Roma, (ITALY) fax : +39 6 72597460 tel : +39 6 72597378 e-mail: pazienza at info.utovrm.it -------------------------------------------------- prof. Maria Teresa Pazienza Dept. of Computer Science, Systems and Production University of Roma, Tor Vergata Via di Tor Vergata 00133 ROMA (ITALY) tel +39 6 72597378 fax +39 6 72597460 e_mail: pazienza at info.utovrm.it http://babele.info.utovrm.it/ -------------------------------------------------- From zoubin at cs.toronto.edu Fri Jan 9 17:24:05 1998 From: zoubin at cs.toronto.edu (Zoubin Ghahramani) Date: Fri, 9 Jan 1998 17:24:05 -0500 Subject: paper: Hierarchical Factor Analysis and Topographic Maps Message-ID: <98Jan9.172413edt.1352@neuron.ai.toronto.edu> The following paper is now available at: ftp://ftp.cs.toronto.edu/pub/zoubin/nips97.ps.gz http://www.cs.toronto.edu/~zoubin ---------------------------------------------------------------------- Hierarchical Non-linear Factor Analysis and Topographic Maps Zoubin Ghahramani and Geoffrey E. Hinton Department of Computer Science University of Toronto We first describe a hierarchical, generative model that can be viewed as a non-linear generalisation of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information.
We then show how to incorporate lateral connections into the generative model. The model extracts a sparse, distributed, hierarchical representation of depth from simplified random-dot stereograms and the localised disparity detectors in the first hidden layer form a topographic map. When presented with image patches from natural scenes, the model develops topographically organised local feature detectors. To appear in Jordan, M.I., Kearns, M.J., and Solla, S.A. Advances in Neural Information Processing Systems 10. MIT Press: Cambridge, MA, 1998. From kehagias at egnatia.ee.auth.gr Sat Jan 10 15:23:22 1998 From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias) Date: Sat, 10 Jan 1998 12:23:22 -0800 Subject: new paper on Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Message-ID: <34B7D8BA.2804@egnatia.ee.auth.gr> The following paper will appear in AUTOMATICA. While it is not strictly about neural networks, the presented analysis of credit assignment convergence, for a multimodel scheme, may be of interest for people working with modular neural networks, mixtures of experts and so on. Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Authors: V. Petridis and Ath. Kehagias Source: Automatica (to appear) Link: http://skiron.control.ee.auth.gr/~kehagias/97epeke.htm Abstract: Many methods have been developed to solve the problem of parameter estimation for dynamical systems (Ljung, 1987). Of particular interest is the case of on-line algorithms which are used to estimate time-varying parameters. Here we present such an algorithm which assumes a nonlinear dynamical system. The system is time-varying: its parameter changes values according to a Markovian model switching mechanism.
The algorithm starts with a finite number of models, each corresponding to one of the parameter values, and selects the ``phenomenologically best'' parameter value; namely the one which produces the best fit to the observed behavior of the system. Our algorithm is related to the Partition Algorithm (PA) presented in (Hilborn & Lainiotis, 1969; Lainiotis, 1971; Lainiotis & Plataniotis, 1994; Sims, Lainiotis & Magill, 1969). PA is suitable for the parameter estimation of a linear dynamical system with Gaussian noise in the input and output; no provision is made for model switching. Under these assumptions, an algorithm is developed for exact computation of the models' posterior probabilities; these are used for Maximum a Posteriori (MAP) estimation of the unknown parameter. This method has been used extensively in a number of applications, including parameter estimation and system identification (Kehagias, 1991; Lainiotis & Plataniotis, 1994; Petridis, 1981). Our algorithm is more general than the PA: it applies to nonlinear systems and requires no probabilistic assumptions regarding the noise. Furthermore, while there are several convergence studies of the PA without a switching mechanism (Anderson & Moore, 1979; Kehagias, 1991; Tugnait, 1980), as far as we know, the analysis presented here is the first one that handles the Markovian switching assumption. A rigorous convergence analysis is also presented. Thanasis Kehagias, Research Associate, Dept. of Electrical and Computer Eng, Aristotle Un., Thessaloniki Ass. Prof., Dept.
of Mathematics and Computer Sci., American College of Thessaloniki http://skiron.control.ee.auth.gr/~kehagias/index.htm From kehagias at egnatia.ee.auth.gr Sun Jan 11 23:03:05 1998 From: kehagias at egnatia.ee.auth.gr (Thanasis Kehagias) Date: Sun, 11 Jan 1998 20:03:05 -0800 Subject: correction to new paper URL Message-ID: <34B995F9.59FA@egnatia.ee.auth.gr> I apologize for reposting, but there was an error in the URL I gave in the announcement of my paper; it should be http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm Title: A Multi-model Algorithm for Parameter Estimation of Time-varying Nonlinear Systems Authors: V. Petridis and Ath. Kehagias Source: Automatica (to appear) Link: http://skiron.control.ee.auth.gr/~kehagias/THN/97epeke.htm -- Athanasios Kehagias, Research Associate, Dept. of Electrical and Computer Eng., Aristotle University of Thessaloniki, GR54006, Thessaloniki, GREECE and Assistant Professor, Dept. of Mathematics and Computer Science, American College of Thessaloniki, P.O. Box 21021, GR 55510 Pylea, Thessaloniki, GREECE home page: http://skiron.control.ee.auth.gr/~kehagias/index.htm email: kehagias at egnatia.ee.auth.gr, kehagias at ac.anatolia.edu.gr From jfgf at eng.cam.ac.uk Mon Jan 12 10:46:08 1998 From: jfgf at eng.cam.ac.uk (J.F.
Gomes De Freitas) Date: Mon, 12 Jan 1998 15:46:08 +0000 (GMT) Subject: Tech Report on regularisation in sequential learning Message-ID: Hi. A technical report on regularisation in sequential learning is now available at ftp://svr-ftp.eng.cam.ac.uk/pub/reports/freitas_tr307.ps.gz (SVR - Cambridge) The paper covers topics such as Bayesian inference with hierarchical models, extended Kalman filtering, regularisation, adaptive learning rates and automatic relevance determination. It is a longer version of a recent NIPS publication and feedback would be greatly appreciated. I hope you find it interesting. ABSTRACT: In this paper, we show that a hierarchical Bayesian modelling approach to sequential learning leads to many interesting attributes such as regularisation and automatic relevance determination. We identify three inference levels within this hierarchy, namely model selection, parameter estimation and noise estimation. In environments where data arrives sequentially, techniques such as cross-validation to achieve regularisation or model selection are not possible. The Bayesian approach, with extended Kalman filtering at the parameter estimation level, allows for regularisation within a minimum variance framework. A multi-layer perceptron is used to generate the extended Kalman filter nonlinear measurement mapping. We describe several algorithms at the noise estimation level, which allow us to implement adaptive regularisation and automatic relevance determination of model inputs and basis functions. An important contribution of this paper is to show the theoretical links between adaptive noise estimation in extended Kalman filtering, multiple adaptive learning rates and multiple smoothing regularisation coefficients.
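The parameter-estimation level of the hierarchy rests on a standard construction: treat the network weights as the state of an extended Kalman filter, with the network output as the nonlinear measurement. A minimal sketch for a single weight follows; the toy model y = tanh(w*x), the noise variances and the priors are illustrative assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Network": a single weight, y = tanh(w * x) + noise.
w_true = 0.8
Q = 1e-5   # process noise: models a (slowly) drifting weight
R = 0.01   # measurement noise variance

w_hat, P = 0.0, 1.0   # EKF state: weight estimate and its variance
for _ in range(500):
    x = rng.uniform(-2.0, 2.0)
    y = np.tanh(w_true * x) + np.sqrt(R) * rng.standard_normal()

    P = P + Q                                  # predict (random-walk model)
    H = x * (1.0 - np.tanh(w_hat * x) ** 2)    # linearised measurement map
    S = H * P * H + R                          # innovation variance
    K = P * H / S                              # Kalman gain
    w_hat = w_hat + K * (y - np.tanh(w_hat * x))
    P = (1.0 - K * H) * P                      # update variance
```

The process-noise term Q is what gives the filter an adaptive-learning-rate flavour: it keeps the variance P, and hence the gain, from collapsing to zero.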
Thanks Nando de Freitas _______________________________________________________________________________ JFG de Freitas (Nando) Speech, Vision and Robotics Group Information Engineering Cambridge University CB2 1PZ England Tel (01223) 302323 (H) (01223) 332754 (W) _______________________________________________________________________________ From Corinne.Ledoux at cts-fs1.du.se Mon Jan 12 12:51:56 1998 From: Corinne.Ledoux at cts-fs1.du.se (Corinne Ledoux) Date: Mon, 12 Jan 1998 18:51:56 +0100 Subject: Special session on Neural Networks applied to transport Message-ID: <3.0.1.32.19980112185156.007c4100@cts.du.se> NIMES-98 Conference on Complex Systems, Intelligent Systems & Interfaces 25-27 May 1998, Nimes, France Special session on Neural Networks and Transport Call for papers The 1st call for the Nimes-98 Conference on Complex Systems, Intelligent Systems & Interfaces, to be held in Nimes, 25-27 May, 1998, has just been issued. A special session will concentrate on the neural network techniques used to solve specific traffic problems in air, road, maritime and railway transport. For this special session, papers based upon the following topic areas are welcome: * Real time Control Systems: freeway and corridor control, incident detection and management, signalized junctions and networks, traffic control systems, * Vehicular Navigation and Control: vehicle location monitoring, vehicle control, driver behaviour modelling, * Planning and modelling techniques: traffic flow, simulation models, dynamic traffic models, forecasting, * Sensor data processing: data fusion, vehicle identification/classification, traffic pattern analysis * Benefits of advanced technologies: safety impacts, environmental impacts. There is a two-stage reviewing procedure with the following schedule: 6 February 1998 Submission of abstracts (250 words) including title, author(s), affiliation(s).
The contact author must be identified with his or her complete affiliation, address, phone, fax and e-mail address. Seven copies of the abstract have to be sent by post to the Secretariat. 23 February 1998 Notification of acceptance/rejection 23 March 1998 Submission of the full paper Scientific Committee Gerard Scemama (INRETS, France) Sophie Midenet (INRETS, France) Corinne Ledoux (CTS, Sweden) Mark Dougherty (CTS, Sweden) Stephen G. Ritchie (University of California, Irvine) Conference Secretariat: EC2 & Developpement 51-59, Rue Ledru-Rolin 94200 Ivry-sur-Seine, France Tel: +33 1 45 15 27 53 Fax: +33 1 45 15 27 54 e-mail: jeanclaude.rault at utopia.eunet.fr ___________________________________________________ Dr. Corinne Ledoux CTS - Dalarna University S 781 88 Borlange, Sweden e-mail: Corinne.Ledoux at cts.du.se Phone: +46 23 77 85 46 Fax: +46 23 77 85 01 http://www.du.se/cts ___________________________________________________ From rojas at inf.fu-berlin.de Wed Jan 14 10:29:00 1998 From: rojas at inf.fu-berlin.de (Raul Rojas) Date: Wed, 14 Jan 98 10:29 MET Subject: Call for participation IK-98 Message-ID: LAST CALL FOR PARTICIPATION: IK-98 CALL FOR POSTERS SECOND SPRING SCHOOL ON ARTIFICIAL INTELLIGENCE, NEURO- AND COGNITIVE SCIENCE March 7 - March 14, 1998, Guenne am Moehnesee, Germany http://www.tzi.uni-bremen.de/ik98/ IK-98 is a one-week intensive spring school on artificial intelligence and brain research. The courses are offered by researchers working in the fields of symbolic Artificial Intelligence, Neural Networks, Brain Sciences and Cognitive Science. The main topic of IK-98 is "Language and Communication". Several courses will deal with the neurological basis of speech, speech recognition, linguistic aspects, natural language processing, etc. We will have several invited talks dealing with brain imaging, connectionist simulation of speech acquisition, and intelligent agents.
We invite all participants of IK-98 to present their research results during the evening poster sessions. The main conference language is German (although some courses will be held in English). The program in German, with the courses to be offered, follows below. EINLADUNG ZUR TEILNAHME: 2. INTERDISZIPLINAERES KOLLEG IK-98 FRUEHJAHRSSCHULE "INTELLIGENZ UND GEHIRN" 7.3.-14.3.98 Guenne am Moehnesee, Germany http://www.tzi.uni-bremen.de/ik98/ >> Was ist das Interdisziplinaere Kolleg? Das Interdisziplinaere Kolleg (IK) ist eine intensive interdisziplinaere Fruehjahrsschule zum Generalthema "Intelligenz und Gehirn". Die Schirmwissenschaften des IK sind die Neurowissenschaft, die Kognitionswissenschaft, die Kuenstliche Intelligenz und die Neuroinformatik. Angesehene Dozenten aus diesen Disziplinen vermitteln Grundlagenkenntnisse, fuehren in methodische Vorgehensweisen ein und erlaeutern aktuelle Forschungsfragen. Ein abgestimmtes Spektrum von Grund-, Theorie- und Spezialkursen sowie disziplinuebergreifenden Veranstaltungen, teilweise mit praktischen Uebungen, richtet sich an Studenten und Forscher aus dem akademischen und industriellen Bereich. In den letzten Jahren gab es in Deutschland einen interdisziplinaeren Aufbruch. Er fand im Herbst 1996 einen ersten Hoehepunkt in der Tagung Wege ins Hirn (http://www.hlrz.kfa-juelich.de/~peters/WegeInsHirn/). Dort wurde auch beschlossen, das IK als Nachfolgerin der bekannten KI-Fruehjahrsschulen (KIFS) auszurichten. Als Veranstalter fungierten unter anderem die deutschen Fachverbaende der beteiligten Disziplinen. Dadurch wurde das IK auch als Institution abgesichert. Diese Aufbruchstimmung ging beim ersten IK im Fruehjahr 1997 auf die Teilnehmenden und Dozierenden ueber. Die Kurse und die Atmosphaere fanden grossen, oft sogar enthusiastischen Anklang. Das IK findet nun alljaehrlich statt.
>> Veranstalter Das IK98 wird veranstaltet vom Fachbereich KI der Gesellschaft fuer Informatik (GI) in Kooperation mit: FG Neuronale Netze der GI; GMD - Forschungszentrum Informationstechnik GmbH; DFG-Graduiertenkolleg "Signalketten in Lebenden Systemen"; European Neural Network Society (ENNS); German Chapter der ENNS (GNNS); Gesellschaft fuer Kognitionswissenschaft e.V.; und Neurowissenschaftliche Gesellschaft e.V. >> Veranstaltungsort Das Tagungsheim ist die Familienbildungsstaette "Heinrich-Luebke-Haus" in Guenne (Sauerland). Dieses Haus liegt abgeschieden am Moehnesee im Naturpark Arnsberger Wald. Die Teilnehmer sind im Tagungsheim untergebracht. Alles foerdert einen konzentrierten, geselligen Austausch zwischen den Teilnehmern auch abends nach den eigentlichen Kursveranstaltungen. >> Schwerpunktthema Das IK-98 hat als besonderen Schwerpunkt das Thema "Sprache und Kommunikation", das in mehreren weiterfuehrenden Kursen von unterschiedlichen Disziplinen her beleuchtet wird. >> Postergalerie Parallel zu den Kursen haben die Teilnehmer die Moeglichkeit, ihre Forschungen in einer informellen Postergalerie vorzustellen. >> Kurse und Dozenten Grundkurse G1 Neurobiologie (Gerhard Roth) G2 Kuenstliche Neuronale Netze - Theorie und Praxis (Guenther Palm) G3 Einfuehrung in die KI (Ipke Wachsmuth) G4 Kognitive Systeme - Eine Einfuehrung in die Kognitionswissenschaft (Gerhard Strube) >> Theoriekurse T1 Das komplexe reale Neuron (Helmut Schwegler) T2 Connectionist Speech Recognition (Herve Bourlard) T3 Perception of Temporal Structures - Especially in Speech (Robert F.
Port) T4 Sprachstruktur - Hirnarchitektur ; Sprachverarbeitung - Hirnprozesse (Helmut Schnelle) T5 Optimierungsstrategien fuer neuronale Lernverfahren (Helge Ritter) >> Spezialkurse S1 Hybride konnektionistische und symbolische Ansaetze zur Verarbeitung natuerlicher Sprache (Stefan Wermter) S2 Intelligente Agenten fuer Multimedia-Schnittstellen (Wolfgang Wahlster, Elisabeth Andre) S3 Neurobiologie des Hoersystems (Guenter Ehret) S4 Sprachproduktion (Thomas Pechmann) >> Disziplinuebergreifende Kurse D1 Fuzzy und Neurosysteme (Rudolf Kruse, Detlev Nauck) D2 Zeitliche Kognition (Ernst Poeppel, Till Roenneberg) D3 The origins and evolution of language and meaning (Luc Steels) D4 Kontrolle von Bewegung in biologischen Systemen und Navigation mobiler Roboter (Josef Schmitz, Thomas Christaller) D5 Optimieren neuronaler Netze durch Lernen und Evolution (Heinz Braun) D6 Koordination von Sprache und Handlung (Wolfgang Heydrich, Hannes Rieser) D7 Dynamik spikender Neurone und Zeitliche Kodierung (Andreas Herz) >> Abendprogramm In visionaeren, feurigen und/oder kuehnen "after-dinner-talks" werden herausragende Forscher und Forscherinnen zu Kontroversen einladen: Angela D. Friederici, Jerome Feldman, Robert F. Port, Hans-Dieter Burkhard. >> Kursunterlagen Zu allen Kursen wird es schriftliche Dokumentationen geben, welche als Sammelband allen Teilnehmern ausgehaendigt werden. >> Beirat Um die Anliegen des Interdisziplinaeren Kollegs in den verschiedenen deutschen Forscherkreisen bekanntzumachen und zu vertreten, hat sich folgender Beirat aus namhaften Wissenschaftlern gebildet: Wolfgang Banzhaf, Wilfried Brauer, Armin B. Cremers, Christian Freksa, Otthein Herzog, Wolfgang Hoeppner, Hanspeter Mallot, Thomas Metzinger, Heiko Neumann, Hermann Ney, Guenther Palm, Ernst Poeppel, Wolfgang Prinz, Burghard Rieger, Helge Ritter, Claus Rollinger, Werner von Seelen, Hans Spada, Gerhard Strube, Helmut Schwegler, Ipke Wachsmuth, Wolfgang Wahlster. 
>> Organisationskomitee Thomas Christaller, Bernhard Froetschl, Christopher Habel, Herbert Jaeger, Anthony Jameson, Frank Pasemann, Bjoern-Olaf Peters, Annegret Pfoh, Raul Rojas (Gesamtleitung), Gerhard Roth, Kerstin Schill, Werner Tack. >> Tagungsbuero Christine Harms, c/o GMD, Schloss Birlinghoven, D-53754 Sankt Augustin, Telefon 02241- 14-2473, Fax 02241-14-2472, email harms at gmd.de >> Weitere Informationen Detaillierte Infos zum Hintergrund und dem Tagungsprogramm des IK-98 sind auf dessen Internet-homepage (http://www.tzi.uni-bremen.de/ik98/) abrufbar. From juergen at idsia.ch Wed Jan 14 05:43:47 1998 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 14 Jan 1998 11:43:47 +0100 Subject: IDSIA: 1997 JOURNAL PUBLICATIONS; JOB OPENINGS Message-ID: <199801141043.LAA00645@ruebe.idsia.ch> This is a list of journal papers published or accepted during 1997, (co)authored by members of the Swiss AI research institute IDSIA. Many additional 1997 book chapters, conference papers etc. can be found in IDSIA's individual home pages: www.idsia.ch/people.html ________________________________________________________________________ 1. P. Badeau (Univ. Blaise Pascal) & M. Gendreau, F. Guertin, J.-Y. Potvin (Univ. Montreal) & E. D. Taillard (IDSIA): A parallel tabu search heuristic for the vehicle routing problem with time windows. Transportation Research-C 5, 109-122, 1997. A parallel implementation of an adaptive memory programme. http://www.idsia.ch/~eric/articles.dir/crt95_84.ps.Z 2. M. Dorigo (IRIDIA) & L. M. Gambardella (IDSIA): Ant Colony System: A Cooperative Learning Approach to the TSP. IEEE Transactions On Evolutionary Computation, 1(1):53-66, 1997. ACS consists of cooperating ant-like agents. In TSP applications, ACS is compared to some of the best algorithms for symmetric and asymmetric TSPs. ftp://ftp.idsia.ch/pub/luca/papers/ACS-EC97.ps.gz 3. M. Dorigo (IRIDIA) & L. M. Gambardella (IDSIA): Ant Colony For the Traveling Salesman Problem. 
BioSystems 43:73-81, 1997. ftp://ftp.idsia.ch/pub/luca/papers/ACS-BIO97.ps.gz 4. B. L. Golden (Univ. Maryland) & G. Laporte (Ecole des Hautes Etudes Commerciales de Montreal) & E. Taillard (IDSIA): An Adaptive Memory Heuristic For a Class of Vehicle Routing Problems with Minimax Objective. Computers & Operations Research 24, 1997, 445-452. A study of the capacitated vehicle routing problem (CVRP), CVRP with multiple use of vehicles (MUV), and the m-TSP with MUV. A novel method produces excellent solutions within reasonable time. http://www.idsia.ch/~eric/articles.dir/crt95_74.ps.Z 5. P. Hansen (Univ. Montreal) & N. Mladenovic (Univ. Montreal) & E. D. Taillard (IDSIA): Heuristic solution of the multisource Weber problem as a p-median problem. Accepted by Operations Research Letters, 1997. We examine a heuristic method that has been forgotten for more than 30 years. It is very appropriate for small to medium size multisource Weber problems. http://www.idsia.ch/~eric/articles.dir/localloc1.ps.Z 6. S. Hochreiter (TU Munich) & J. Schmidhuber (IDSIA): Flat Minima. Neural Computation, 9(1):1-43, 1997. An MDL-based, Bayesian argument suggests that flat minima of the error function are essential because they correspond to "simple", low-complexity neural nets and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. An efficient algorithm called "flat minimum search" outperforms other widely used methods on stock market prediction tasks. ftp://ftp.idsia.ch/pub/juergen/fm.ps.gz 7. S. Hochreiter (TU Munich) & J. Schmidhuber (IDSIA): Long Short-Term Memory. Neural Computation, 9(8):1681-1726. A novel recurrent net algorithm with update complexity O(1) per weight and time step. LSTM can solve hard problems unsolvable by previous neural net algorithms. ftp://ftp.idsia.ch/pub/juergen/lstm.ps.gz 8. R. P. Salustowicz (IDSIA) & J. 
Schmidhuber (IDSIA): Probabilistic Incremental Program Evolution. Evolutionary Computation 5(2):123-141, 1997. A novel method for evolving programs by stochastic search in program space. Comparisons to "genetic programming", applications to partially observable environments. ftp://ftp.idsia.ch/pub/juergen/PIPE.ps.gz 9. R. P. Salustowicz (IDSIA) & M. Wiering (IDSIA) & J. Schmidhuber (IDSIA): Learning team strategies: soccer case studies. Machine Learning, accepted 1997. Multiagent learning: each soccer team's players share action set and policy. We compare TD-Q learning and Probabilistic Incremental Program Evolution (PIPE). ftp://ftp.idsia.ch/pub/juergen/soccer.ps.gz 10. J. Schmidhuber (IDSIA) & J. Zhao (IDSIA) & M. Wiering (IDSIA): Shifting Inductive Bias with Success-Story Algorithm, Adaptive Levin Search, and Incremental Self-Improvement. Machine Learning 28:105-130, 1997. We focus on searching program space and "learning to learn" in changing, partially observable environments. ftp://ftp.idsia.ch/pub/juergen/bias.ps.gz 11. J. Schmidhuber (IDSIA): Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability. Neural Networks 10(5):857-873, 1997. Review of basic concepts of Kolmogorov complexity theory relevant to machine learning. Toy experiments with a Levin search variant lead to better generalization performance than more traditional neural net algorithms. ftp://ftp.idsia.ch/pub/juergen/loconet.ps.gz 12. J. Schmidhuber (IDSIA). Low-Complexity Art: Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2):97-103, MIT Press, 1997. Low-complexity art is the computer-age equivalent of simple art: art with low Kolmogorov complexity. With example cartoons and attempts at using MDL to explain what's "beautiful". ftp://ftp.idsia.ch/pub/juergen/locoart.ps.gz 13. E. D. Taillard (IDSIA) & P. Badeau (Univ. Blaise Pascal) & M. Gendreau, F. Guertin, J.Y. Potvin (Univ. 
Montreal): A tabu search heuristic for the vehicle routing problem with soft time windows. Transportation Science 31, 170-186, 1997. An efficient neighbourhood structure for vehicle routing problems implemented in an adaptive memory programme for dealing with soft time windows. http://www.idsia.ch/~eric/articles.dir/crt95_66.ps.Z 14. M. Wiering (IDSIA) & J. Schmidhuber (IDSIA). HQ-Learning. Adaptive Behavior, 6(2), accepted 1997. A hierarchical extension of Q(lambda)-learning designed to solve certain types of partially observable Markov decision problems (POMDPs). HQ automatically decomposes POMDPs into sequences of simpler subtasks that can be solved by memoryless policies. ftp://ftp.idsia.ch/pub/juergen/hq.ps.gz ------------------------JOB OPENINGS AT IDSIA--------------------------- We have a few openings for postdocs, research associates, and outstanding PhD students, as well as one system manager position. See www.idsia.ch for details and application instructions. In case you are not interested in any of the current openings but would like to be considered for future ones, please send HARDCOPIES (no email!) of a statement of research interests plus an outline of your research project, CV, list of publications, and your 3 best papers. Also send a BRIEF email message listing email addresses of three references. Please use your full name as subject header. EXAMPLE: subject: John_Smith We are looking forward to receiving your application! _________________________________________________ Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch http://www.idsia.ch/~juergen From bert at mbfys.kun.nl Wed Jan 14 11:40:50 1998 From: bert at mbfys.kun.nl (Bert Kappen) Date: Wed, 14 Jan 1998 17:40:50 +0100 Subject: Boltzmann Machine learning using mean field theory ... Message-ID: <199801141640.RAA24864@bertus> Dear Connectionists, The following article will appear in the NIPS proceedings of 1998, ed. Michael Kearns.
This version contains some significant improvements over the earlier version. Boltzmann Machine learning using mean field theory and linear response correction written by (Hil)bert Kappen and Paco Rodrigues We present a new approximate learning algorithm for Boltzmann Machines, using a systematic expansion of the Gibbs free energy to second order in the weights. The linear response correction to the correlations is given by the Hessian of the Gibbs free energy. The computational complexity of the algorithm is cubic in the number of neurons. We compare the performance of the exact BM learning algorithm with first order (Weiss) mean field theory and second order (TAP) mean field theory. The learning task consists of a fully connected Ising spin glass model on 10 neurons. We conclude that 1) the method works well for paramagnetic problems 2) the TAP correction gives a significant improvement over the Weiss mean field theory, both for paramagnetic and spin glass problems and 3) that the inclusion of diagonal weights improves the Weiss approximation for paramagnetic problems, but not for spin glass problems. 
This article can now be downloaded from ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NIPS.ps.Z Best regards, Hilbert Kappen FTP INSTRUCTIONS unix% ftp ftp.mbfys.kun.nl Name: anonymous Password: (use your e-mail address) ftp> cd snn/pub/reports/ ftp> binary ftp> get Kappen.LR_NIPS.ps.Z ftp> bye unix% uncompress Kappen.LR_NIPS.ps.Z unix% lpr Kappen.LR_NIPS.ps From harnad at cogsci.soton.ac.uk Wed Jan 14 15:47:58 1998 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Wed, 14 Jan 1998 20:47:58 +0000 (GMT) Subject: Consciousness and Connectionism: BBS Call for Commentators Message-ID: Below is the abstract of a forthcoming BBS target article on: A CONNECTIONIST THEORY OF PHENOMENAL EXPERIENCE by Gerard O'Brien and John Opie This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ A CONNECTIONIST THEORY OF PHENOMENAL EXPERIENCE Gerard O'Brien and John Opie Department of Philosophy The University of Adelaide South Australia 5005 AUSTRALIA KEYWORDS: computation, connectionism, consciousness, dissociation, mental representation, phenomenal experience ABSTRACT: When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys, or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches VEHICLE and PROCESS theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind: the theory that takes human cognition to be a species of symbol manipulation. Two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. 
First, a number of theorists have recently been highly critical of the experimental methodologies used in the dissociation studies -- so critical, in fact, that it is no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, computationalism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike computationalism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist-vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized pdp networks. This hypothesis leads us to reassess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. 
The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.obrien.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.obrien ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.obrien gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.howe When you have the file(s) you want, type: quit From harnad at coglit.soton.ac.uk Wed Jan 14 14:12:48 1998 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Wed, 14 Jan 1998 19:12:48 GMT Subject: Lexical Access: BBS Call for Commentators Message-ID: <199801141912.TAA24108@amnesia.psy.soton.ac.uk> Below is the abstract of a forthcoming BBS target article on: A THEORY OF LEXICAL ACCESS IN SPEECH PRODUCTION by Willem J.M. Levelt, Ardi Roelofs, and Antje S. Meyer This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. 
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ A THEORY OF LEXICAL ACCESS IN SPEECH PRODUCTION Willem J.M. Levelt Max Planck Institute for Psycholinguistics P.O. Box 310 6500 AH Nijmegen The Netherlands pim at mpi.nl Ardi Roelofs Max Planck Institute for Psycholinguistics P.O. Box 310 6500 AH Nijmegen The Netherlands Antje S. Meyer Max Planck Institute for Psycholinguistics P.O. Box 310 6500 AH Nijmegen The Netherlands KEYWORDS: speaking, lexical access, conceptual preparation, lexical selection, morphological encoding, phonological encoding, syllabification, articulation, self-monitoring, lemma, morpheme, phoneme, speech error, magnetic encephalography, readiness potential, brain imaging ABSTRACT: Preparing words in speech production is normally a fast and accurate process. 
We generate two or three words per second in fluent conversation, and overtly naming a clear picture of an object can easily be initiated within 600 ms after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding and articulation itself. In addition, the speaker exerts some degree of output control by monitoring self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model called WEAVER++. Both the theory and the computational model have been developed in conjunction with reaction time experiments, particularly in picture naming and related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory also opens new ways of approaching the cerebral organization of speech production by way of high-resolution temporal imaging.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
The URLs you can use to get to the BBS Archive:

    http://www.princeton.edu/~harnad/bbs/
    http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.levelt.html
    ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.levelt
    ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.levelt
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

The ftp retrieval procedure is the same as described for the previous article.

From seung at physics.bell-labs.com Mon Jan 12 15:52:43 1998
From: seung at physics.bell-labs.com (Sebastian Seung)
Date: Mon, 12 Jan 1998 15:52:43 -0500
Subject: preprints available
Message-ID: <199801122052.PAA01030@heungbu.div111.lucent.com>

The following preprints are now available at

    http://www.bell-labs.com/user/seung

----------------------------------------------------------------------
Learning continuous attractors in recurrent networks
H. S. Seung

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn.
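The completion mechanism described in this abstract can be illustrated in a deliberately simplified form: a linear recurrent network whose attractor is a line through the origin, so every point on the line is a fixed point. The weights below are set analytically (a projection onto the line) rather than learned, and all numbers are made up, so this is a toy sketch of the idea, not the paper's algorithm:

```python
import numpy as np

# A line through the origin in R^4 serves as a continuous attractor:
# every point on it is a fixed point of the linear dynamics x <- W x.
u = np.array([1.0, 2.0, 3.0, 4.0])
u /= np.linalg.norm(u)
W = np.outer(u, u)          # projection onto the line spanned by u

# Pattern completion: clamp the observed entries and let the
# recurrent dynamics fill in the missing ones.
target = 2.0 * np.array([1.0, 2.0, 3.0, 4.0])   # a point on the manifold
observed = np.array([True, True, False, False])

x = np.where(observed, target, 0.0)
for _ in range(300):
    x = W @ x                       # relax toward the attractor
    x[observed] = target[observed]  # re-impose the observed entries

print(np.round(x, 3))   # close to [2, 4, 6, 8]
```

The iteration alternates between projecting onto the attractor and onto the constraint set of observed values, so it converges to the point on the manifold consistent with the observations, i.e. completion behaves like regression onto the manifold rather than recall of a discrete stored pattern.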
From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation.

    http://www.bell-labs.com/user/seung/papers/continuous.ps.gz
    [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]

----------------------------------------------------------------------
Minimax and Hamiltonian dynamics of excitatory-inhibitory networks
H. S. Seung, T. J. Richardson, J. C. Lagarias, and J. J. Hopfield

A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics.

    http://www.bell-labs.com/user/seung/papers/minimax.ps.gz
    [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]

----------------------------------------------------------------------
Learning generative models with the up-propagation algorithm
J.-H. Oh and H. S. Seung

Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.

    http://www.bell-labs.com/user/seung/papers/up-prop.ps.gz
    [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]

----------------------------------------------------------------------
The rectified Gaussian distribution
N. D. Socci, D. D. Lee, and H. S. Seung

A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds.

    http://www.bell-labs.com/user/seung/papers/rg.ps.gz
    [To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]

----------------------------------------------------------------------
Pattern analysis and synthesis in attractor neural networks
H. S. Seung

The representation of hidden variable models by attractor neural networks is studied. Memories are stored in a dynamical attractor that is a continuous manifold of fixed points, as illustrated by linear and nonlinear networks with hidden neurons. Pattern analysis and synthesis are forms of pattern completion by recall of a stored memory. Analysis and synthesis in the linear network are performed by bottom-up and top-down connections. In the nonlinear network, the analysis computation additionally requires rectification nonlinearity and inner product inhibition between hidden neurons.

    http://www.bell-labs.com/user/seung/papers/pattern.ps.gz
    [In Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, Proceedings of TANC'97.
    Springer-Verlag (1997)]

From act at uow.edu.au Thu Jan 15 04:34:27 1998
From: act at uow.edu.au (Ah Chung Tsoi)
Date: Thu, 15 Jan 1998 20:34:27 +1100 (EST)
Subject: Research Associate in Data Mining and Knowledge Discovery
Message-ID: <199801150934.UAA20863@wumpus.its.uow.edu.au>

The following advertisement will appear in The Australian, 21st January, 1998.

UNIVERSITY OF WOLLONGONG
FACULTY OF INFORMATICS

ARC Funded Research Associate in Data Mining and Knowledge Discovery
Three Year Fixed Term Appointment

Applications are called for from suitably qualified persons to participate in an Australian Research Council (ARC) funded position in data mining and knowledge discovery, in collaboration with the Health Insurance Commission. This three-year project will carry out generic research, applying and developing a number of data mining and knowledge discovery techniques to study the behavioural patterns of Diagnostic Imaging practitioners.

It is expected that the successful candidate will have completed a PhD, or be about to submit a PhD thesis, in one or more of the following areas: artificial neural networks; expert systems; data analysis; artificial intelligence; fuzzy systems; statistics; graphical models. Preference will be given to candidates who have some relevant postdoctoral experience. It is expected that the successful candidate will work closely with the Health Insurance Commission, in particular with medically qualified experts in Diagnostic Imaging.

The successful candidate will be based in Wollongong, working with a dynamic team in neural networks and artificial intelligence in the Faculty of Informatics, under the direction of the Dean, Professor A. C. Tsoi. Depending on the qualifications and experience of the successful applicant, the appointment will be as an Associate Fellow in the salary range A$42,413 to A$45,527.
Further information can be obtained by contacting Professor Ah Chung Tsoi, Dean, Faculty of Informatics, University of Wollongong; Email: ah_chung_tsoi at uow.edu.au; Phone: +61 2 42 21 38 43; Fax: +61 2 42 21 48 43.

Closing date: 13 February 1998.

Applications should quote reference number CM98-, contain details of qualifications, employment history, research interests, publications, and the names and addresses (including fax number or email address) of five referees, and be forwarded to the Personnel Officer. Please mark the envelope ``Confidential Appointment''. Mail address: University of Wollongong, Northfields Ave, Wollongong, NSW 2522, Australia.

The University of Wollongong is an equal opportunity employer.

=========================================

From weaveraj at helios.aston.ac.uk Thu Jan 15 09:18:06 1998
From: weaveraj at helios.aston.ac.uk (Andrew Weaver)
Date: Thu, 15 Jan 1998 14:18:06 +0000
Subject: Full Time Lectureship Post, Aston University, UK
Message-ID: <11722.199801151418@sun.aston.ac.uk>

FULL TIME LECTURESHIP
NEURAL COMPUTING RESEARCH GROUP
ASTON UNIVERSITY, UK

The group currently comprises five full time members of staff (David Lowe, Manfred Opper, David Saad, Ian Nabney and Chris Williams), 8 Postdoctoral Research Fellows, a Research Programmer, a Research Coordinator, 11 PhD students and 10 MSc research students. Current research contracts total approximately ukp1.5 million.

We are seeking an additional highly motivated, enthusiastic individual to join our research team in the general areas of artificial neural networks, biomedical signal analysis, nonlinear pattern and time series processing, and machine vision. The individual will also be expected to contribute to the graduate and undergraduate taught programmes. Further information on the activities and interests of the group may be obtained from the website http://www.ncrg.aston.ac.uk/

Terms of appointment will depend on the background and experience of particular candidates.
The minimum period for which appointments are made is normally three years, with the possibility of renewal or transfer to continuing appointments. Salary will be in the range ukp16,045 to ukp27,985 per annum, and exceptionally ukp31,269 per annum (Lecturer Grade A & B), according to qualifications and experience.

The appointment is part of a wider research expansion in the Electronic Engineering and Computer Science Division at Aston which includes `intelligent' databases, future internet technology and telecommunications network modelling. Candidates interested in this wider area are also encouraged to make further enquiries.

Interested individuals should email a current C.V., including the contact details of at least three referees, to:

    Professor David Lowe
    Head of Computer Science
    Neural Computing Research Group
    Aston University
    Aston Triangle
    Birmingham B4 7ET, UK
    email: d.lowe at aston.ac.uk
    www: http://www.ncrg.aston.ac.uk/
    tel: (+44/0) 121 333 4631
    fax: (+44/0) 121 333 4586

Closing Date: 12th March 1998.

From hadley at cs.sfu.ca Thu Jan 15 16:16:40 1998
From: hadley at cs.sfu.ca (Bob Hadley)
Date: Thu, 15 Jan 1998 13:16:40 -0800 (PST)
Subject: paper available: "Connectionism, Novel Skill Combinations, Cognitive Architecture"
Message-ID: <199801152116.NAA10281@css.cs.sfu.ca>

FTP-host: ftp.fas.sfu.ca
FTP-filename: pub/cs/hadley/skills.ps
Total pages: 32 at 1.2 spacing

    Connectionism and Novel Combinations of Skills:
    Implications for Cognitive Architecture

    by Robert F. Hadley
    Technical Report SFU CMPT TR 1998-01

ABSTRACT

In the late 1980s, there were many who heralded the emergence of connectionism as a new paradigm -- one which would eventually displace the classically symbolic methods then dominant in AI and Cognitive Science. At present, there remain influential connectionists who continue to defend connectionism as a more realistic paradigm for modelling cognition, at all levels of abstraction, than the classical algorithmic methods of AI.
Not infrequently, one encounters arguments along these lines: given what we know about neurophysiology, it is just not plausible to suppose that our brains possess an architecture anything like classical von Neumann machines. Our brains are not digital computers, and so, cannot support a classical architecture.

In this paper, I advocate a middle ground. I assume, for argument's sake, that some form(s) of connectionism can provide reasonably approximate models -- at least for lower-level cognitive processes. Given this assumption, I argue on theoretical and empirical grounds that MOST human mental skills must reside in *separate* connectionist modules or ``sub-networks''. Ultimately, it is argued that the basic tenets of connectionism, in conjunction with the fact that humans often employ novel combinations of skill modules in rule following and problem solving, lead to the plausible conclusion that, in certain domains, high level cognition requires some form of classical architecture. During the course of argument, it emerges that only an architecture with classical structure could support the novel patterns of *information flow* and interaction that would exist among the relevant set of modules. Such a classical architecture might very well reside in the abstract levels of a hybrid system whose lower-level modules are purely connectionist.

N.B. "Classical architecture" here derives from models found in computer science. My arguments are not the same as those given by Fodor and Pylyshyn, 1988.

-----------------------------------------------------------------------

The paper, ``Connectionism and Novel Combinations ...'', can be obtained via ftp by doing the following:

    ftp ftp.fas.sfu.ca

When asked for your name, type the word: anonymous
When asked for a password, use your e-mail address.
Then, you should change directories as follows:

    cd pub
    cd cs
    cd hadley

and then do a get, as in:

    get skills.ps

To exit from ftp, type: quit

From kirchmai at informatik.tu-muenchen.de Fri Jan 16 08:08:59 1998
From: kirchmai at informatik.tu-muenchen.de (Clemens Kirchmair)
Date: Fri, 16 Jan 1998 14:08:59 +0100 (MET)
Subject: WORKSHOP PROGRAM: Fuzzy-Neuro-Systems '98
Message-ID:

    ----------------------------------
    |    Fuzzy-Neuro Systems '98     |
    | - Computational Intelligence - |
    |                                |
    |   5th International Workshop   |
    |     March, 19 - 20, 1998       |
    ----------------------------------

    Technische Universitaet Muenchen

    Gesellschaft fuer Informatik e.V.
    Fachausschuss 1.2 "Inferenzsysteme"

    Technische Universitaet Muenchen
    Institut fuer Informatik

Fuzzy-Neuro Systems '98 is the fifth event of a well established series of workshops with international participation. Its aim is to give an overview of the state of the art in research and development of fuzzy systems and artificial neural networks. Another aim is to highlight applications of these methods and to forge innovative links between theory and application by means of creative discussions.

Fuzzy-Neuro Systems '98 is being organized by the Technical Committee 1.2 "Inference Systems" (Fachausschuss 1.2 "Inferenzsysteme") of the German Informatics Society GI (Gesellschaft fuer Informatik e.V.) and the Institut fuer Informatik, Technische Universitaet Muenchen, in cooperation with Siemens AG and with the support of Kratzer Automatisierung GmbH. The workshop takes place at the Technische Universitaet Muenchen in Munich from March 19 to 20, 1998.

PROGRAM
-------

Wednesday, March 18, 1998

18:00 Informal Get-Together, Registration
21:00 End of reception and registration

Thursday, March 19, 1998

 8:00 Registration
 9:00 Formal Opening
      President, TU Muenchen
      Dekan, Institut fuer Informatik, TU Muenchen
      Workshop Chair
 9:15 Invited Lecture 1: Sets, Fuzzy Sets and Rough Sets
      Zdzislaw Pawlak, Warsaw University of Technology, Poland
      Chairman: W. Brauer, TU Muenchen
10:00 Session 1: Fuzzy Control
      Chairman: R. Isermann, TU Darmstadt
      - Indirect Adaptive Sugeno Fuzzy Control
        J. Abonyi, L. Nagy, S. Ferenc, University of Veszprem, Veszprem, Hungary
      - Simultaneous Creation of Fuzzy Sets and Rules for Hierarchical Fuzzy Systems
        R. Holve, FORWISS, Erlangen, Germany
10:50 Coffee break - Presentation of Posters
11:10 Session 2: Neural Networks for Classification
      Chairman: K. Obermayer, TU Berlin
      - Hybrid Systems for Time Series Classification
        C. Neukirchen, G. Rigoll, Gerhard-Mercator-Universitaet, Duisburg
      - How Parallel Plug-in Classifiers Optimally Contribute to the Overall System
        W. Utschick, J.A. Nossek, TU Muenchen
12:00 Invited Lecture 2: Is Readability Compatible with Accuracy?
      Hugues Bersini, Universite Libre de Bruxelles, Belgium
      Chairman: J. Hollatz, Siemens AG, Muenchen
12:45 Lunch
14:00 Session 3: Fuzzy Logic in Data Analysis
      Chairman: C. Freksa, Universitaet Hamburg
      - Fuzzy Topographic Kernel Clustering
        T. Graepel, K. Obermayer, TU Berlin
      - Dynamic Data Analysis: Similarity Between Trajectories
        A. Joentgen, L. Mikenina, R. Weber, H.-J. Zimmermann, RWTH Aachen
      - Spatial Reasoning with Uncertain Data Using Stochastic Relaxation
        R. Moratz, C. Freksa, Universitaet Hamburg
      - Noise Clustering For Partially Supervised Classifier Design
        C. Otte, P. Jensch, Universitaet Oldenburg
      - Fuzzy c-Mixed Prototypes Clustering
        C. Stutz, TU Muenchen; T.A. Runkler, Siemens AG, Muenchen
16:00 Coffee break - Presentation of Posters
16:30 Invited Lecture 3: Neural Network Architectures for Time Series Prediction with Applications to Financial Data Forecasting
      Hans-Georg Zimmermann, Siemens AG, Muenchen
      Chairman: R. Rojas, FU Berlin
17:15 Session 4: Fuzzy-Neuro Systems
      Chairman: R. Kruse, Universitaet Magdeburg
      - A Neuro-Fuzzy Approach to Feedforward Modeling of Nonlinear Time Series
        T. Briegel, V. Tresp, Siemens AG, Muenchen
      - A Learning Algorithm for Fuzzy Neural Nets
        T. Feuring, Westfaelische Wilhelms-Universitaet Muenster; James J. Buckley, University of Alabama at Birmingham, Birmingham, USA
      - Improving a priori Control Knowledge by Reinforcement Learning
        M. Spott, M. Riedmiller, Universitaet Karlsruhe
18:30 End of First Day
20:00 Conference Dinner

Friday, March 20, 1998

 9:00 Session 5: Applications
      Chairman: G. Nakhaeizadeh, Daimler Benz AG, Forschung + Technik, Ulm
      - Batch Recipe Optimization with Neural Networks and Genetic Algorithms
        K. Eder, Kratzer Automatisierung GmbH, Unterschleissheim
      - Robust Tuning of Power System Stabilizers by an Accelerated Fuzzy-Logic Based Genetic Algorithm
        M. Khederzadeh, Power and Water Institute of Technology, Tehran, Iran
      - Relating Chemical Structure to Activity: An Application of the Neural Folding Architecture
        T. Schmitt, C. Goller, TU Muenchen
      - Optimization of a Fuzzy System Using Evolutionary Algorithms
        Q. Zhuang, M. Kreutz, J. Gayko, Ruhr-Universitaet Bochum
10:40 Coffee break - Presentation of Posters
11:00 Invited Lecture 4: Advanced Fuzzy-Concepts and Applications
      Harro Kiendl, Universitaet Dortmund
      Chairman: K. Eder, Kratzer Automatisierung GmbH, Unterschleissheim
11:45 Session 6: Theory and Foundations of Fuzzy-Logic
      Chairman: P. Klement, Universitaet Linz, Austria
      - Rule Weights in Fuzzy Systems
        D. Nauck, R. Kruse, Universitaet Magdeburg
      - Sliding-Mode-Based Analysis of Fuzzy Gain Schedulers - The MIMO Case
        R. Palm, Siemens AG, Muenchen; D. Driankov, University of Linkoeping, Sweden
      - Qualitative Operators For Dealing With Uncertainty
        H. Seridi, Universite de Reims, France; F. Bannay-Dupin, Universite d'Angers, France; H. Akdag, Universite P. & M. Curie, Paris, France
13:00 Lunch
14:00 Session 7: Theory and Foundations of Neural Networks
      Chairman: A. Grauel, Universitaet Paderborn
      - Prestructured Recurrent Neural Networks
        T. Brychcy, TU Muenchen
      - Formalizing Neural Networks
        I. Fischer, University of Erlangen; M. Koch, Technical University of Berlin; M.R. Berthold, University of California, Berkeley, USA
      - Correlation and Regression Based Neuron Pruning Strategies
        M. Rychetsky, S. Ortmann, C. Labeck, M. Glesner, TU Darmstadt
15:15 Invited Lecture 5: Soft Computing: the Synergistic Interaction of Fuzzy, Neural, and Evolutionary Computation
      Piero P. Bonissone, General Electric Corporate R&D Artificial Intelligence Laboratory, Schenectady, USA
      Chairman: S. Gottwald, Universitaet Leipzig
16:00 Closing Remarks and Invitation to FNS'99

Posters
-------

- Comparing Fuzzy Graphs
  M.R. Berthold, University of California, Berkeley, USA; K.-P. Huber, Universitaet Karlsruhe
- A Numerical Approach to Approximate Reasoning via a Symbolic Interface. Application to Image Classification
  A. Borgi, H. Akdag, Universite P. & M. Curie, Paris, France; J.-M. Bazin, Universite de Reims, France
- Entropy-Controlled Probabilistic Search
  M. David, J. Gottlieb, I. Kupka, TU Clausthal
- Ensembles of Evolutionary Created Artificial Neural Networks
  C.M. Friedrich, Universitaet Witten/Herdecke
- Design and Implementation of a Flexible Simulation Tool for Hybrid Problem Solving
  H. Geiger, IBV and TU Muenchen; J. Pfalzgraf, K. Frank, T. Neuboeck, J. Weichenberger, Universitaet Salzburg, Austria; A. Buecherl, TU Muenchen
- A Fuzzy Invariant Indexing Technique for Object Recognition under Partial Occlusion
  T. Graf, A. Knoll, A. Wolfram, Universitaet Bielefeld
- Fuzzy Causal Networks
  R. Hofmann, V. Tresp, Siemens AG, Muenchen
- Dynamic Data Analysis: Problem Description And Solution Approaches
  A. Joentgen, L. Mikenina, R. Weber, H.-J. Zimmermann, RWTH Aachen
- Filtering and Compressing Information by Neural Information Processor
  R. Kamimura, Tokai University, Japan
- A Fuzzy Local Map with Asymmetric Smoothing Using Voronoi Diagrams
  B. Lang, Siemens AG, Muenchen
- Fuzzy Interface with Prior Concepts and Non-convex Regularization
  J.C. Lemm, Universitaet Muenster
- Modeling and Simulating a Time-Dependent Physical System Using Fuzzy Techniques and a Recurrent Neural Network
  A. Nuernberger, A. Radetzky, R. Kruse, Universitaet Magdeburg
- The Kohonen Network Incorporating Explicit Statistics and Its Application to the Traveling Salesman Problem
  B.J. Oommen, Carleton University, Ottawa, Canada
- Automated Feature Selection Strategies: An Experimental Comparison Improving Engine Knock Detection
  S. Ortmann, M. Rychetsky, M. Glesner, TU Darmstadt
- A Fuzzy-Neuro System for Reconstruction of Multi-Sensor Information
  S. Petit-Renaud, T. Deneux, Universite de Technologie de Compiegne, Compiegne, France
- RACE: Relational Alternating Cluster Estimation and the Wedding Table Problem
  T.A. Runkler, Siemens AG, Muenchen; J.C. Bezdek, University of West Florida, Pensacola, USA
- Neural Networks Handle Technological Information for Milling if Training Data is Carefully Preprocessed
  G. Schulz, D. Fichtner, A. Nestler, J. Hoffmann, TU Dresden
- Medically Motivated Testbed for Reinforcement Learning in Neural Architectures
  D. Surmeli, G. Koehler, H.-M. Gross, TU Ilmenau
- Adaptive Input-Space Clustering for Continuous Learning Tasks
  M. Tagscherer, P. Protzel, FORWISS, Erlangen
- A Criminalistic And Forensic Application Of Neural Networks
  A. Tenhagen, T. Feuring, W.-M. Lippe, G. Henke, H. Lahl, WWU-Muenster
- A Classical and a Fuzzy System Based Algorithm for the Simulation of the Waste Humidity in a Landfill
  M. Theisen, M. Glesner, TU Darmstadt
- FuNN, A Fuzzy Neural Logic Model
  R. Yasdi, GMD - Forschungszentrum Informationstechnik, Sankt Augustin
- An Efficient Model for Learning Systems of High-Dimensional Input within Local Scenarios
  J. Zhang, V. Schwert, Universitaet Bielefeld
- Optimization of a Fuzzy Controller for a Driver Assistant System
  Q. Zhuang, J. Gayko, M. Kreutz, Ruhr-Universitaet Bochum

Program Committee
-----------------

Prof. Dr. W. Banzhaf, Universitaet Dortmund
Dr. M. Berthold, Universitaet Karlsruhe
Prof. Dr. Dr. h.c. W. Brauer, TU Muenchen (Chairman)
Prof. Dr. G. Brewka, Universitaet Leipzig
Dr. K. Eder, Kratzer Automatisierung GmbH, Unterschleissheim
Prof. Dr. C. Freksa, Universitaet Hamburg
Prof. Dr. M. Glesner, TU Darmstadt
Prof. Dr. S. Gottwald, Universitaet Leipzig
Prof. Dr. A. Grauel, Universitaet Paderborn/Soest
Prof. Dr. H.-M. Gross, TU Ilmenau
Dr. A. Guenter, Universitaet Bremen
Dr. J. Hollatz, Siemens AG, Muenchen
Prof. Dr. R. Isermann, TU Darmstadt
Prof. Dr. P. Klement, Universitaet Linz, Austria
Prof. Dr. R. Kruse, Universitaet Magdeburg (Vice Chairman)
Prof. Dr. B. Mertsching, Universitaet Hamburg
Prof. Dr. G. Nakhaeizadeh, Daimler Benz AG, Forschung + Technik, Ulm
Prof. Dr. K. Obermayer, TU Berlin
Prof. Dr. G. Palm, Universitaet Ulm
Dr. R. Palm, Siemens AG, Muenchen
Dr. L. Peters, GMD - Forschungszentrum Informationstechnik GmbH, Sankt Augustin
Prof. Dr. F. Pichler, Universitaet Linz, Austria
Dr. P. Protzel, FORWISS, Erlangen
Prof. Dr. B. Reusch, Universitaet Dortmund
Prof. Dr. Rigoll, Universitaet Duisburg
Prof. Dr. R. Rojas, Freie Universitaet Berlin
Prof. Dr. B. Schuermann, Siemens AG, Muenchen (Vice Chairman)
Prof. Dr. W. von Seelen, Universitaet Bochum
Prof. Dr. H. Thiele, Universitaet Dortmund
Prof. Dr. W. Wahlster, Universitaet Saarbruecken
Prof. Dr. H.-J. Zimmermann, RWTH Aachen

Organization Committee
----------------------

Prof. Dr. Dr. h.c. W. Brauer (Chairman)
Dieter Bartmann
Till Brychcy
Clemens Kirchmair
Technische Universitaet Muenchen
Tel.: 0 89/2 89-2 84 19
Fax: 0 89/2 89-2 84 83

Dr. Juergen Hollatz, Siemens AG, Muenchen (Vice Chairman)
Christine Harms, - ccHa -, Sankt Augustin

Conference Site
---------------

TU Muenchen
Barerstrasse 23
Entrance: Arcisstrasse
Lecture hall S0320
D-80333 Muenchen

Workshop Secretariat
--------------------

Christine Harms
c/o GMD / FNS'98
Schloss Birlinghoven
D-53754 Sankt Augustin
Tel.: ++49 2241 14-24 73
Fax: ++49 2241 14-24 72
email: christine.harms at gmd.de

Registration
------------

Please make your (binding) reservation by sending the enclosed registration form to the conference secretariat. Confirmation will be given after receipt of the registration form.
Conference Fees (see registration form)
---------------------------------------

    industry rate:                 495,- DM
    university rate:               345,- DM
    GI members:                    295,- DM
    authors:                       295,- DM
    students (up to age of 26):     60,- DM (*)

    *) excluding proceedings and conference dinner

A surcharge of DM 100,- is payable for registration after February 18, 1998. Services of Gesellschaft fuer Informatik e.V. (GI) are VAT-free according to German law, p. 4 Nr. 22a UStG.

Payment (see registration form)
-------

[ ] I have transferred the whole amount of DM________ to
    Gesellschaft fuer Informatik (GI), Sparkasse Bonn
    Account No.: 39 479, Bankcode: 380 500 00, Ref: SK-Fuzzy-98
[ ] I enclose a Eurocheque amounting to DM________ made payable to
    Gesellschaft fuer Informatik
[ ] Please debit my [ ] Diners [ ] Visa [ ] Euro/Mastercard
    Cardnumber:              Expiration date:
    Cardholder:

Social events
-------------

Informal get-together: March 18, 1998, 18.00 - 21.00
Conference dinner: Thursday, March 19, 1998

Accommodation
-------------

A limited number of rooms has been reserved at the FORUM/Penta Hotel at the special rate of

    single room DM 175,-
    double room DM 200,-

    FORUM Hotel
    Hochstrasse 3
    D-81669 Muenchen

Cancellation
------------

If cancellation is received up to February 17, 1998, a 75% refund will be given. For cancellations received afterwards, no refunds can be guaranteed.

WWW-Homepage
------------

URL: http://wwwbrauer.informatik.tu-muenchen.de/~fns98/

----- snip, snip -----

Registration form for Fuzzy-Neuro Systems '98
---------------------------------------------

Please register me as follows:

Conference Fees:
----------------

[ ] industry rate:                495,- DM
[ ] university rate:              345,- DM
[ ] GI member No. ______:         295,- DM
[ ] authors:                      295,- DM
[ ] students (up to age of 26):    60,- DM (*)

    *) excluding proceedings and conference dinner

Accommodation:
--------------

I would like to make a binding reservation at the FORUM/Penta Hotel

[ ] single room DM 175,-
[ ] double room DM 200,- (together with ____________________________)

Arrival date ______________________________
Departure date ____________________________

Payment directly at the hotel. Hotel booking has to be made by February 17, 1998. After that we cannot guarantee any bookings.

Conference dinner:
------------------

[ ] I intend to participate in the conference dinner
    ...... extra ticket for conference dinner DM 50,-

Payment:
--------

[ ] I have transferred the whole amount of DM________ to
    Gesellschaft fuer Informatik (GI), Sparkasse Bonn
    Account No.: 39 479, Bankcode: 380 500 00, Reference: SK-Fuzzy-98
[ ] I enclose a Eurocheque amounting to DM_________ made payable to
    Gesellschaft fuer Informatik
[ ] Please debit my [ ] Diners [ ] Visa [ ] Euro/Mastercard
    Cardnumber:______________ Expiration date:_________
    Cardholder:_______________________________________

If cancellation is received up to February 17, 1998, a 75% refund will be given. For cancellations received afterwards, no refunds can be guaranteed.

Date:___________ Signature:__________________

Sender:
-------

Last Name (Mr. / Mrs. / Ms. / Title): ________________________________________
First Name:           ________________________________________
Affiliation:          ________________________________________
Street/POB:           ________________________________________
Zip/Postal Code/City: ________________________________________
Country:              ________________________________________
Phone/Fax:            ________________________________________
E-mail:               ________________________________________

If you would like to take part in the workshop, please send the completed registration form to:

    Christine Harms
    c/o GMD / FNS'98
    Schloss Birlinghoven
    D-53754 Sankt Augustin
    Tel.: ++49 2241 14-24 73
    Fax: ++49 2241 14-24 72
    email: christine.harms at gmd.de

From bdevries at sarnoff.com Fri Jan 16 12:55:10 1998
From: bdevries at sarnoff.com (Aalbert De Vries x2456)
Date: Fri, 16 Jan 1998 12:55:10 -0500
Subject: NNSP98 Call for Papers
Message-ID: <34BF9EFE.D192D090@sarnoff.com>

CALL FOR PAPERS
===============

*=====================================================*
* THE 1998 IEEE SIGNAL PROCESSING SOCIETY WORKSHOP ON *
*                                                     *
*        NEURAL NETWORKS FOR SIGNAL PROCESSING        *
*=====================================================*

August 31 - September 3, 1998
Isaac Newton Institute for Mathematical Sciences, Cambridge, England

Submission of extended summary: February 26, 1998

The 1998 IEEE Workshop on Neural Networks for Signal Processing is the seventh in the series of workshops. Cambridge is a historic town, housing one of the leading universities and several research institutions. In the summer it is a beautiful place, and a large number of visitors come here. It is easily reached by train and road from the airports in London. The combination of these makes it an ideal setting to host this workshop.

The Isaac Newton Institute for Mathematical Sciences is based in Cambridge, adjoining the University and the Colleges. It was founded in 1992, and is devoted to the study of all branches of Mathematics.
The Institute runs programmes that last for upto six months on various topics in mathematical sciences. Past programmes of relevance to this proposal include Computer Vision, Financial Mathematics and the current programme on Neural Networks and Machine Learning (July - December, 1997). One of the programmes at the Institute in July-December 1998 is Nonlinear and Nonstationary Signal Processing. Hence hosting this conference at the Institute will benefit the participants in many ways. 4. Accommodations Accommodation will be at Robinson College, Cambridge. Robinson is one of the new Colleges in Cambridge, and uses its facilities to host conferences during the summer months. It can accommodate about 300 guests in comfortable rooms. The College is within walking distance to the Cambridge city center and the Newton Institute. 5. Organization General Chairs Prof. Tony CONSTANTINIDES (Imperial) Prof. Sun-Yuan KUNG (Princeton) Vice-Chair Dr Bill Fitzgerald (Cambridge) Finance Chair Dr Christophe Molina (Anglia) Proceeding Chair Dr Elizabeth J. Wilson (Raytheon Co.) Publicity Chairs Dr Bert de Vries (Sarnoff) Dr Jonathan Chambers (Imperial) Program Chair Dr Mahesan Niranjan (Cambridge) Program Committee Tulay ADALI Andrew BACK Jean-Francois CARDOSO Bert DE VRIES Lee GILES Federico GIROSSI Yu Hen HU Jenq-Neng HWANG Jan LARSEN Yann LECUN David LOWE Christophe MOLINA Visakan KADIRKAMANATHAN Shigeru KATAGIRI Gary KUHN Elias MANOLAKOS Mahesan NIRANJAN Dragan OBRADOVIC Erkki OJA Kuldip PALIWAL Lionel TARASSENKO Volker TRESP Marc VAN HULLE Andreas WEIGEND Papers describing original research are solicited in the areas described below. All submitted papers will be reviewed by members of the Programme Committee. 6. 
Technical Areas Paradigms artificial neural networks, Markov models, graphical models, dynamical systems, nonlinear signal processing, and wavelets Application areas speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition Theories generalization, design algorithms, optimization, probabilistic inference, parameter estimation, and network architectures Implementations parallel and distributed implementation, hardware design, and other general implementation technologies 7. Schedule Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, addresses, telephone and fax numbers, and email addresses, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by the IEEE and distributed at the workshop. For further information, please contact Dr. Mahesan Niranjan, Cambridge University Engineering Department, Cambridge CB2 1PZ, England, (Tel.) +44 1223 332720, (Fax.) +44 1223 332662, (e-mail) niranjan at eng.cam.ac.uk. More information relating to the workshop will be available at http://www.sarnoff.com/conferences/nnsp98.htm and http://www-svr.eng.cam.ac.uk/nnsp98. 
Submissions to: Dr Mahesan Niranjan IEEE NNSP'98 Cambridge University Engineering Department Trumpington Street, Cambridge CB2 1PZ England ***** Important Dates ****** Submission of extended summary : February 26, 1998 Notification of acceptance : April 6, 1998 Submission of photo-ready accepted paper : May 3, 1998 Advanced registration, before : June 30, 1998 ============================================================== From geoff at giccs.georgetown.edu Fri Jan 16 20:58:11 1998 From: geoff at giccs.georgetown.edu (Geoff Goodhill) Date: Fri, 16 Jan 1998 20:58:11 -0500 Subject: NIPS Preprints available Message-ID: <199801170158.UAA00584@fathead.giccs.georgetown.edu> The following 3 papers from Georgetown University will appear in the 1998 NIPS proceedings, and are now available from http://www.giccs.georgetown.edu/~alex and ~geoff respectively: NEURAL BASIS OF OBJECT-CENTERED REPRESENTATIONS Sophie Deneve and Alexandre Pouget Georgetown Institute for Cognitive and Computational Sciences We present a neural model that can perform eye movements to a particular side of an object regardless of the position and orientation of the object in space, a generalization of a task recently used by Olson and Gettner to investigate the neural structure of object-centered representations. Our model uses an intermediate representation in which units have oculocentric receptive fields -- just like collicular neurons -- whose gain is modulated by the side of the object to which the movement is directed, as well as the orientation of the object. We show that these gain modulations are consistent with Olson and Gettner's single-cell recordings in the supplementary eye field. This demonstrates that it is possible to perform an object-centered task without a representation involving an object-centered map, viz., without neurons whose receptive fields are defined in object-centered coordinates. 
We also show that the same approach can account for object-centered neglect, a situation in which patients with a right parietal lesion neglect the left side of objects regardless of the orientation of the objects. A MATHEMATICAL MODEL OF AXON GUIDANCE BY DIFFUSIBLE FACTORS Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences In the developing nervous system, gradients of target-derived diffusible factors play an important role in guiding axons to appropriate targets. In this paper, the shape that such a gradient might have is calculated as a function of distance from the target and the time since the start of factor production. Using estimates of the relevant parameter values from the experimental literature, the spatiotemporal domain in which a growth cone could detect such a gradient is derived. For large times, a value for the maximum guidance range of about 1 mm is obtained. This value fits well with experimental data. For smaller times, the analysis predicts that guidance over longer ranges may be possible. This prediction remains to be tested. GRADIENTS FOR RETINOTECTAL MAPPING Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences The initial activity-independent formation of a topographic map in the retinotectal system has long been thought to rely on the matching of molecular cues expressed in gradients in the retina and the tectum. However, direct experimental evidence for the existence of such gradients has only emerged since 1995. The new data has provoked the discussion of a new set of models in the experimental literature. Here, the capabilities of these models are analyzed, and the gradient shapes they predict in vivo are derived. 
From cchang at cns.bu.edu Sat Jan 17 16:18:53 1998 From: cchang at cns.bu.edu (Carolina Chang) Date: Sat, 17 Jan 1998 16:18:53 -0500 (EST) Subject: CFP: Biomimetic Robotics Message-ID: CALL FOR PAPERS --------------- Special session of ISIC/CIRA/ISAS'98 on BIOMIMETIC ROBOTICS Co-chairs: Carolina Chang and Paolo Gaudiano Boston University Neurobotics Lab Dept. of Cognitive and Neural Systems September 14-17, 1998 Gaithersburg, Maryland, U.S.A. SUBMISSION DEADLINE: February 27, 1998 http://neurobotics.bu.edu/conferences/CIRA98/ It has been argued that today's supercomputers are able to process information at a rate comparable to that of simple invertebrates. And yet, even ignoring physical constraints, no existing algorithm running on the fastest supercomputer could enable a robot to fly around a room, avoid obstacles, land upside down on the ceiling, feed, reproduce, and perform many of the other simple tasks that a housefly learns to perform without external training or supervision. The apparent ease with which flies and even much simpler biological organisms manage to survive in a constantly changing environment suggests that a potentially fruitful avenue of research is understanding the mechanisms adopted by biological systems for perception and control, and applying what is learned to robots. While we may not yet be able to make a computer function as flexibly as a housefly, there have been many promising starts in that direction. The goal of this special session is to present recent results in "biomimetic robotics", or the application of biological principles to robotics. The term "biological" in this case should be taken broadly to refer to any aspect of biological function, including, for example, psychological theories or detailed models of neural function. We are soliciting submissions that describe biomimetic applications in any branch of robotics. Preference will be given to applications that utilize real systems, be they robotic or biological. 
SUBMISSION PROCEDURE -------------------- All submissions must be made in electronic format (postscript or MS Word preferred) as described below. Submissions must be formatted as specified in the call for papers for the joint conference ISIC/CIRA/ISAS'98 (see http://isd.cme.nist.gov/proj/is98/index.html): Papers should be limited to 6 pages including abstract, figures, and tables (i.e., two-column format, 10pt Times font, and 8.5x11" paper). Authors who plan to submit a paper by the February 27 deadline are encouraged to contact C. Chang or P. Gaudiano by electronic mail (cchang at bu.edu, gaudiano at bu.edu) as soon as possible. Notification of acceptance and the author's kit will be mailed by May 8, 1998. The full paper typed in camera-ready form must be received by June 12, 1998. Final instructions for camera-ready copy submission will be in the author's kit. To submit an electronic copy of your manuscript, please prepare a postscript or MS Word version of the paper, including all figures, and upload it to the anonymous ftp site (instructions below if needed): ftp://neurobotics.bu.edu/pub/biomimetic To expedite uploading, your document may be compressed using gzip, pkzip, winzip, or any other commonly used compression scheme. As soon as you have uploaded your file to our ftp site, please send e-mail to cchang at bu.edu indicating the filename and who will serve as the corresponding author, and including the title, the name(s) of the author(s), affiliation, address, telephone number, fax, and e-mail address. FTP UPLOADING INSTRUCTIONS -------------------------- Connect to the neurobotics ftp server using the "ftp" command or using one of the Windows/Mac FTP programs. Use the login name "anonymous" or "ftp", and send your e-mail address as password. For instance, on a UNIX system you would do the following: ftp neurobotics.bu.edu Connected to neurobotics.bu.edu. 220 neurobotics.bu.edu FTP server (Version wu-2.4.2-academ[BETA-12](1) Wed Mar 5 12:37:21 EST 1997) ready. 
Name (neurobotics.bu.edu:gaudiano): ftp 331 Guest login ok, send your complete e-mail address as password. Password: (your e-mail) 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd pub/biomimetic 250 CWD command successful. ftp> bin 200 Type set to I. ftp> put your_file_name.ps.gz local: your_file_name.ps.gz remote: your_file_name.ps.gz 200 PORT command successful. 150 Opening BINARY mode data connection for your_file_name.ps.gz . 226 Transfer complete. 2312 bytes sent in 0.0204 secs (1.1e+02 Kbytes/sec) ftp> quit 221 Goodbye. For Mac and Windows systems there are many ftp programs with a graphical user interface that should simplify this process. Please note that permissions are set in such a way that you cannot view the contents of the ftp directory even after you have uploaded your file. For additional information about this special session, please send e-mail to cchang at bu.edu or gaudiano at bu.edu. For all information about ISIC/CIRA/ISAS'98 please consult the conference web page at http://isd.cme.nist.gov/proj/is98/index.html From smc at decsai.ugr.es Sun Jan 18 20:02:58 1998 From: smc at decsai.ugr.es (Serafin Moral) Date: Mon, 19 Jan 1998 01:02:58 +0000 Subject: UAI'98 Second Call for Papers Message-ID: <34C2A642.1340AEB9@decsai.ugr.es> We apologize if you receive multiple copies of this message. Please distribute to interested persons. ******************************************************************* NEW UPDATED INFORMATION ABOUT UAI-98 CONFERENCE ******************************************************************* >>>> New revised deadline to receive full papers. >>>> Length of submitted papers has been clarified. 
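The anonymous-FTP upload described in the Biomimetic Robotics call above can also be scripted instead of typed interactively. Below is a minimal sketch using Python's standard ftplib and gzip modules; the host and directory are taken from the announcement, while the filename and e-mail address are placeholders, and since the 1998 server is long gone the upload function is defined but not invoked:

```python
import ftplib
import gzip
import shutil

def compress(path):
    """Gzip-compress the PostScript file, as the call suggests, and return the new name."""
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path

def upload(filename, host="neurobotics.bu.edu", email="you@example.edu"):
    """Follow the announcement's steps: anonymous login with an e-mail
    address as password, cd to pub/biomimetic, binary-mode STOR."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user="ftp", passwd=email)   # login name "anonymous" or "ftp"
        ftp.cwd("pub/biomimetic")
        with open(filename, "rb") as fh:
            ftp.storbinary("STOR " + filename, fh)  # binary mode, like "bin" + "put"

# Demonstrate only the compression step here; upload("your_file_name.ps.gz")
# would perform the transfer if the server were still reachable.
with open("your_file_name.ps", "wb") as fh:
    fh.write(b"%!PS-Adobe-2.0\n")  # tiny stand-in PostScript file
print(compress("your_file_name.ps"))  # -> your_file_name.ps.gz
```

The script mirrors the interactive session shown in the call: anonymous login, change directory, binary transfer.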
For more details about the updates given below, please visit the UAI-98 WWW page at http://www.uai98.cbmi.upmc.edu =========================================================== S E C O N D C A L L F O R P A P E R S =========================================================== ** U A I - 98 ** THE FOURTEENTH ANNUAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE July 24-26, 1998 University of Wisconsin Business School Madison, Wisconsin, USA ======================================= ++++++++++++++++++++++++++++++ Important Dates ++++++++++++++++++++++++++++++ >> Abstract and paper submission data received by: Monday, February 23, 1998 >> Postscript files of the papers received by: Thursday, February 26, 1998 >> Notification of acceptance by: Friday, April 10, 1998 >> Camera-ready copy due: Friday, May 8, 1998 >> Conference dates: July 24, 25, 26, 1998 >> Advanced tutorials on Uncertain Reasoning: Monday, July 27, 1998 These deadlines are strict: the period for reviewer assignment has been reduced to a minimum, so deadline extensions will not be possible. ************************************************************************* Submitted papers must be at most 20 pages of 12pt Latex article style or equivalent, which is approximately 7,000 words. Accepted papers will be limited to 8 pages (with two additional pages allowed for a fee) in the UAI proceedings style, which is available at ftp://decsai.ugr.es/pub/utai/other/smc/proceedings.sty for Latex users. An 8-page paper in the proceedings style with no figures typically has a word count in the range of 6,000 to 7,000 words. 
The paper abstract and data should be sent by using the electronic form at the following address: http://decsai.ugr.es/~smc/uai98/send.html To submit a paper, send an electronic version of the paper (Postscript format) to the following address: uai98 at cbmi.upmc.edu The subject line of this message should be: $.ps, where $ is an identifier created from the last name of the first author, followed by the first initial of the author's first name. Multiple submissions by the same first author should be indicated by adding a number (e.g., pearlj2.ps) to the end of the identifier. Authors unable to submit papers electronically should send 5 hard copies of the complete paper to one of the Program Chairs (for their postal addresses, see http://www.uai98.cbmi.upmc.edu). ********************************************************************* Conference E-mail Address: uai98 at cbmi.upmc.edu Program Co-chairs: Gregory F. Cooper and Serafin Moral Conference Chair: Prakash P. Shenoy From at at cogsci.soton.ac.uk Sun Jan 18 15:26:06 1998 From: at at cogsci.soton.ac.uk (Adriaan Tijsseling) Date: Sun, 18 Jan 1998 20:26:06 +0000 Subject: PAPER on category learning in backprop nets. Message-ID: The following paper is available electronically from our web server: http://www.soton.ac.uk/~coglab/simcat.ps.gz ======================================================================== Warping Similarity Space in Category Learning by Backprop Nets Adriaan Tijsseling & Stevan Harnad Cognitive Science Centre, University of Southampton, UK http://www.soton.ac.uk/~coglab/ Presented at the "Interdisciplinary Workshop on Similarity and Categorisation", University of Edinburgh, November 1997. Two previous neural network simulations of categorical perception (CP) have been replicated. The results have been subjected to more rigorous analysis. 
The findings still support the claim that backpropagation networks exhibit CP effects, but the claim is weakened so as to incorporate the crucial role that auto-association training plays in the organization of hidden unit representations. This, however, appears to be more in accordance with CP effects in human subjects. ====================================================================== From terry at salk.edu Mon Jan 19 02:47:42 1998 From: terry at salk.edu (Terry Sejnowski) Date: Sun, 18 Jan 1998 23:47:42 -0800 (PST) Subject: Telluride Deadline Feb 1 Message-ID: <199801190747.XAA10560@helmholtz.salk.edu> "NEUROMORPHIC ENGINEERING WORKSHOP" JUNE 29 - JULY 19, 1998 TELLURIDE, COLORADO *** Deadline for application is February 1, 1998 *** Avis COHEN (University of Maryland) Rodney DOUGLAS (University of Zurich and ETH, Zurich/Switzerland) Christof KOCH (California Institute of Technology) Terrence SEJNOWSKI (Salk Institute and UCSD) Shihab SHAMMA (University of Maryland) We invite applications for a three-week summer workshop that will be held in Telluride, Colorado from Monday, June 29 to Sunday, July 19, 1998. The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html We strongly encourage interested parties to browse through these reports and photo albums. GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. 
The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems. FORMAT: The three-week summer workshop will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Koalas), hands-on projects, and special interest groups. Participants are required to take part in, and possibly complete, at least one of the projects proposed (soon to be defined). They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. 
Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons, and after dinner. The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the course of the three weeks. The first week covers the basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with analog VLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems. We will also feature a tutorial on the use of small, mobile robots, focusing on Koalas, as an ideal platform for vision, auditory and sensory-motor circuits. Projects that are carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generator, robotics, multichip communication, analog VLSI and learning. The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. 
The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPG's and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics. The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The workshop will take place at the Telluride Elementary School located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. 
However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, and Macs and PCs running Linux and Windows 95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop. FINANCIAL ARRANGEMENTS: We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are expected to pay a $250 workshop fee. In cases of real hardship, this can be waived. Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from national laboratories and industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed. HOW TO APPLY: The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One-page summary of background and interests relevant to the workshop. 4. 
Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Terrence J. Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around March 15, 1998. From leila at ida.his.se Mon Jan 19 06:00:56 1998 From: leila at ida.his.se (Leila Khammari) Date: Mon, 19 Jan 1998 12:00:56 +0100 Subject: CFP ICANN 98 Message-ID: <34C33268.EC354B76@ida.his.se> This is being mailed to multiple mailing lists. Please accept our apologies if you receive multiple copies. _________________________________________________________________ CALL FOR PAPERS 8th INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS (ICANN 98) September 2-4, 1998, Skoevde, Sweden Submission deadline March 25, 1998. http://www.his.se/ida/icann98/ _________________________________________________________________ INVITED SPEAKERS (to be completed) John Barnden, University of Birmingham, UK Chris Bishop, Microsoft Research, Cambridge, UK Rodney Brooks, MIT, Cambridge, USA Leif Finkel, University of Pennsylvania, USA Phil Husbands, University of Sussex, UK Teuvo Kohonen, Helsinki Univ. of Technology, Finland David MacKay, Cavendish Laboratory, Cambridge, UK Barak Pearlmutter, University of New Mexico, USA Ulrich Rueckert, Universitaet Paderborn, Germany David Rumelhart, Stanford University, USA Bernd Schuermann, Siemens, Germany _________________________________________________________________ ICANN 98, the 8th International Conference on Artificial Neural Networks, will be held 2-4 September 1998 in Skoevde, Sweden. ICANN, the conference series of the European Neural Network Society and Europe's premier meeting in the field, sets out to extensively cover the whole spectrum of ANN-related research, ranging from industrial applications to biological systems. 
Carrying the heritage from Helsinki (1991), Brighton (1992), Amsterdam (1993), Sorrento (1994), Paris (1995), Bochum (1996) and Lausanne (1997), ICANN 98 includes a high-quality scientific program, consisting of oral and poster presentations, a special 'Industry and Research' panel session, an exhibition with products related to ANNs, a set of tutorials on basic topics in ANN research and development, and on top of all this a relaxed and exciting social program. The conference is hosted and organized by the Connectionist Research Group at Hoegskolan Skoevde in collaboration with the Swedish Neural Network Society (SNNS) and the European Neural Network Society (ENNS). It is supported by the International Neural Network Society (INNS), the Asian Pacific Neural Network Assembly (APNNA), the IEEE Neural Network Council and the IEE. ________________________________________________________________ SCOPE ICANN 98 aims to cover all aspects of ANN research, broadly divided into six areas, corresponding to separate organizational modules for which contributions are sought. THEORY: This module covers the broad area of theory. Topics include, but are not limited to: Model issues; Unsupervised/supervised learning issues; "Life-long" learning; Inference; Signal Processing; Neurocontrol; Analysis; Combinatorial optimization. APPLICATIONS: We particularly encourage submissions covering novel ANN applications in, for example, the following areas: Pattern recognition; Time series prediction; Optimization; Data analysis/Data mining; Telecommunications; Control; Speech and signal processing; Vision and image processing. COMPUTATIONAL NEUROSCIENCE AND BRAIN THEORY: This module covers computational models of biological neural systems, functions, techniques and tools, e.g. 
Sensory and perceptual processing; Sensory fusion; Motor pattern generation and control; Computational neuroethology; Plasticity in the nervous system; Neuromodulation; Cognitive neuroscience; Behavior selection; Decision making; Cortical associative memory; Neuromorphic computer architectures. CONNECTIONIST COGNITIVE SCIENCE AND AI: This module covers the use of ANNs for modeling cognitive capacities and the relation between ANNs and AI, e.g. Vision and perception; Recognition and categorization; Development; Representational issues; Reasoning, problem solving and planning; Language and speech; Cognitive plausibility; Connectionism, Hybridism and Symbolism; Philosophical aspects and implications. AUTONOMOUS ROBOTICS AND ADAPTIVE BEHAVIOR: This module covers ANNs for adaptive control of autonomous robots as well as modeling of animal behavior. Possible topics include: Adaptive behavior in biological/artificial autonomous agents; ANN learning methods for adaptation, control and navigation; Multi-agent systems; Representational issues; Dynamics of agent-environment interaction; Biologically/ethologically inspired robotics; ANNs in evolutionary robotics and Artificial Life; Cognitive robotics. HARDWARE/IMPLEMENTATION: This module covers ANN hardware and implementational issues. Possible topics include: Analog/digital implementations; Pulse stream networks; On chip learning; Systems and architectures; Hardware implants/coupling silicon to biological nerves; Vision and image processing. In addition to the modules mentioned above we aim to further promote contacts between researchers and industry. To achieve this, a special panel session on 'Industry and Research' will be organized. Within this session, a number of speakers will be invited to present usage of state-of-the-art ANN technology in Japan, Europe and the USA. 
We also intend to invite potential funding agencies, in order to get their view on what kind of ANN research will be considered for funding in the future. _________________________________________________________________ SUBMISSION Prospective authors are invited to submit papers for oral or poster presentation by March 25, 1998. For details please see: http://www.his.se/ida/icann98/ or contact the conference secretariat (see below). All papers accepted for oral or poster presentation will appear in the conference proceedings, which will be published by Springer-Verlag. _________________________________________________________________ PROGRAM COMMITTEE (to be completed) Bengt Asker, Independent Consultant, Sweden Lars Asplund, Uppsala University, Sweden Randall Beer, Santa Fe Institute & Case Western Reserve University, USA Chris Bishop, Microsoft Research, Cambridge, UK Miklos Boda, Ericsson Telecom AB, Stockholm, Sweden Mikael Boden, Hoegskolan Skoevde, Sweden Valentino Braitenberg, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany Harald Brandt, Ericsson Telecom AB, Stockholm, Sweden Abhay Bulsari, AB Nonlinear Solutions OY, Finland Bo Cartling, Royal Institute of Technology, Sweden Ron Chrisley, University of Sussex, UK Erik De Schutter, University of Antwerp, Belgium Georg Dorffner, University of Vienna, Austria Rolf Eckmiller, University of Bonn, Germany Dario Floreano, Swiss Federal Institute of Technology, Lausanne, Switzerland Francoise Fogelman Soulie, SLIGOS, France Wulfram Gerstner, Centre for Neuro-Mimetic Systems, Lausanne, Switzerland John Hertz, Nordita, Denmark Pentti Kanerva, SICS, Sweden Bert Kappen, University of Nijmegen, The Netherlands Anders Lansner, Royal Inst. 
of Technology, Sweden Klaus-Robert Mueller, GMD First, Germany Ajit Narayanan, University of Exeter, UK Lars Niklasson, Hoegskolan Skoevde, Sweden Stefano Nolfi, National Research Council, Rome, Italy Erkki Oja, Helsinki University of Technology, Finland Guenther Palm, University of Ulm, Germany Jordan Pollack, Brandeis University, USA Ronan Reilly, University College Dublin, Ireland Brian Ripley, Oxford University, UK Thorsteinn Roegnvaldsson, Hoegskolan Halmstad, Sweden Bernd Schuermann, Siemens AG, Munich, Germany Noel Sharkey, University of Sheffield, UK Olli Simula, Helsinki University of Technology, Finland Jonas Sjoeberg, Chalmers University of Technology, Sweden Gunnar Sjoedin, SICS, Sweden Bertil Svensson, Hoegskolan Halmstad & Chalmers University of Technology, Sweden Jun Tani, Sony CSL Inc., Tokyo, Japan Carme Torras, Universitat Politecnica de Catalunya, Barcelona, Spain Tim van Gelder, University of Melbourne, Australia Francisco Varela, LENA - CNRS, Paris, France Eric A. Wan, Oregon Graduate Institute, USA Florentin Woergoetter, Ruhr-Universitaet Bochum, Germany Tom Ziemke, Hoegskolan Skoevde, Sweden _________________________________________________________________ DATES TO REMEMBER March 25, 1998 - submissions must be received May 6, 1998 - notification of acceptance May 28, 1998 - final camera-ready papers must be received September 1, 1998 - ICANN 98 tutorials September 2-4, 1998 - ICANN 98 takes place _________________________________________________________________ CONFERENCE SECRETARIAT ICANN 98 Hoegskolan Skoevde P.O. 
Box 408 S-541 28 Skoevde SWEDEN Email: icann98 at ida.his.se Telefax: +46 (0)500-46 47 25 http://www.his.se/ida/icann98/ From kia at particle.kth.se Mon Jan 19 15:49:56 1998 From: kia at particle.kth.se (Karina Waldemark) Date: Mon, 19 Jan 1998 21:49:56 +0100 Subject: VI-DYNN'98 Call for papers Message-ID: <34C3BC74.AE9E7F9A@particle.kth.se> ------------------------------------------------------------------------ 2nd call for papers: VI-DYNN'98 Workshop on Virtual Intelligence - Dynamic Neural Networks Stockholm June 22-26, 1998 Royal Institute of Technology, KTH Stockholm, Sweden ------------------------------------------------------------------------ Abstracts due: February 28, 1998 ------------------------------------------------------------------------ VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn VI-DYNN'98 will combine the DYNN emphasis on biologically inspired neural network models, especially Pulse Coupled Neural Networks (PCNN), with the practical applications emphasis of the VI workshops. In particular we will focus on why, how, and where to use biologically inspired neural systems. For example, we will learn how to adapt such systems to sensors such as digital X-ray imaging devices, CCDs, and SAR, and examine questions of accuracy, speed, etc. Developments in research on biological neural systems, such as the mammalian visual system, and how smart sensors can benefit from this knowledge will also be presented. Pulse Coupled Neural Networks (PCNN) are among the most exciting recent developments in the field of artificial neural networks (ANN), showing great promise for pattern recognition and other applications. PCNN-type models are much more closely related to real biological neural systems than most ANNs, yet many researchers in the field of ANN-based pattern recognition are unfamiliar with them. VI-DYNN'98 will continue in the spirit of the Virtual Intelligence workshop series.
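[Editorial note: for readers unfamiliar with PCNNs, a minimal sketch of the basic dynamics follows. It uses one common Eckhorn-style feeding/linking formulation; the 3x3 coupling kernel and all constants are illustrative assumptions, not taken from any VI-DYNN'98 material, and published PCNN variants differ in these details.]

```python
import numpy as np

def pcnn_step(S, F, L, Y, theta,
              alpha_F=0.1, alpha_L=0.3, alpha_T=0.2,
              V_F=0.5, V_L=0.5, V_T=20.0, beta=0.2):
    """One iteration of a basic Eckhorn-style PCNN layer.

    S: input stimulus (image); F/L: feeding and linking fields;
    Y: binary pulse output; theta: dynamic threshold.
    Kernel and constants are illustrative choices."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    # convolve last pulse image Y with K (zero-padded, pure numpy)
    pad = np.pad(Y, 1)
    conv = sum(K[i, j] * pad[i:i + Y.shape[0], j:j + Y.shape[1]]
               for i in range(3) for j in range(3))
    F = np.exp(-alpha_F) * F + V_F * conv + S      # feeding field (leaky + input)
    L = np.exp(-alpha_L) * L + V_L * conv          # linking field (neighbour coupling)
    U = F * (1.0 + beta * L)                       # internal activity
    Y = (U > theta).astype(float)                  # pulse generator
    theta = np.exp(-alpha_T) * theta + V_T * Y     # threshold: decay + refractory jump
    return F, L, Y, theta

# run a few iterations on a toy 8x8 stimulus with a bright square
S = np.zeros((8, 8)); S[2:6, 2:6] = 1.0
F = np.zeros_like(S); L = np.zeros_like(S)
Y = np.zeros_like(S); theta = np.ones_like(S)
for _ in range(10):
    F, L, Y, theta = pcnn_step(S, F, L, Y, theta)
```

The qualitative behaviour of interest is that neurons under the stimulus fire in synchronized waves and then go refractory as the threshold jumps, which is what makes the pulse trains useful as image signatures.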
----------------------------------------------------------------- VI-DYNN'98 Topics: Dynamic NN Fuzzy Systems Spiking Neurons Rough Sets Brain Image Genetic Algorithms Virtual Reality ------------------------------------------------------------------ Applications: Medical Defense & Space Others ------------------------------------------------------------------- Special sessions: PCNN - Pulse Coupled Neural Networks: exciting new artificial neural networks related to real biological neural systems PCNN applications: pattern recognition, image processing, digital x-ray imaging devices, CCDs & SAR Biologically inspired neural network models: why, how and where to use them The mammalian visual system: how smart sensors can benefit from this knowledge The Electronic Nose ------------------------------------------------------------------------ International Organizing Committee: John L. Johnson (MICOM, USA), Jason M. Kinser (George Mason U., USA), Thomas Lindblad (KTH, Sweden), Robert Lorenz (Univ. Wisconsin, USA), Mary Lou Padgett (Auburn U., USA), Robert T. Savely (NASA, Houston), Manuel Samuelides (CERT-ONERA, Toulouse, France), John Taylor (Kings College, UK), Simon Thorpe (CERI-CNRS, Toulouse, France) ------------------------------------------------------------------------ Local Organizing Committee: Thomas Lindblad (KTH) - Conf. Chairman Clark S. Lindsey (KTH) - Conf. Secretary Kenneth Agehed (KTH) Joakim Waldemark (KTH) Karina Waldemark (KTH) Nina Weil (KTH) Moyra Mann - registration officer --------------------------------------------------------------------- Contact: Thomas Lindblad (KTH) - Conf. Chairman email: lindblad at particle.kth.se Phone: [+46] - (0)8 - 16 11 09 Clark S. Lindsey (KTH) - Conf. Secretary email: lindsey at particle.kth.se Phone: [+46] - (0)8 - 16 10 74 Switchboard: [+46] - (0)8 - 16 10 00 Fax: [+46] - (0)8 - 15 86 74 VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn -- ------------------------------------------------------ This is an email from: Ph.D.
Karina Waldemark ------------------------------------------------------ Royal Institute of Technology, KTH Physics Department Frescati Particle Physics and Instrumentation group ------------------------------------------------------ Email: kia at particle.kth.se Snail mail: Frescativagen 24 S-104 05 Stockholm SWEDEN Phone: [+46] - (0)8 - 16 10 81 Switchboard: [+46] - (0)8 - 16 10 00 Fax : [+46] - (0)8 - 15 86 74 Please visit MY HOME PAGE at http://msia02.msi.se/~kia/kia.html From smagt at dlr.de Tue Jan 20 05:43:59 1998 From: smagt at dlr.de (Patrick van der Smagt) Date: Tue, 20 Jan 1998 11:43:59 +0100 Subject: abstracts of NIPS cerebellar workshop available Message-ID: <34C47FEF.FEBD362C@robotic.dlr.de> Abstracts of the NIPS*97 workshop "Can Artificial Cerebellar Models Compete to Control Robots?" can now be downloaded from the Web at http://www.op.dlr.de/FF-DR-RS/CONFERENCES/nips-workshop/ The bibliographical information: P. van der Smagt and D. Bullock (editors), 1997 Extended Abstracts of the NIPS*97 Workshop `Can Artificial Cerebellar Models Compete to Control Robots?' DLR Technical Report # 515-97-28 (38 pages) Contents: * Patrick van der Smagt, "Dynamic control in new robot structures: Can we learn nonlinear functions of high dimensionality?" * Daniel Bullock, "Cerebellar learning for context sensitive and critically timed coordination of multiple action channels" * Gordon Kraft, "Optimized Weight Smoothing for CMAC Neural Networks" * Jose Contreras-Vidal and Juan Lopez-Coronado, "Adaptive Cerebellar Control of Opponent Muscles" * Mitsuo Kawato, "Multiple Internal Models in the Cerebellum" * Andrew Fagg, Leo Zelevinsky, Andrew Barto, and James Houk, "Using Crude Corrective Movements to Learn Accurate Motor Programs for Reaching" * Mark E. 
Nelson, "Adaptive Motor Control Without a Cerebellum" * Jacob Spoelstra, Michael Arbib, and Nicolas Schweighofer, "Cerebellar control of a simulated biomimetic manipulator for fast movements" * Marwan Jabri, Olivier Coenen, Jerry Huang, and Terrence Sejnowski, "Sensorimotor integration and control" Patrick van der Smagt -- dr Patrick van der Smagt phone +49 8153 281152 DLR/Institute of Robotics and System Dynamics fax +49 8153 281134 P.O. Box 1116, 82230 Wessling, Germany email From espaa at soc.plym.ac.uk Wed Jan 21 07:10:08 1998 From: espaa at soc.plym.ac.uk (espaa) Date: Wed, 21 Jan 1998 12:10:08 GMT Subject: Call for Paper PAA Journal Message-ID: <334F974762@scfs3.soc.plym.ac.uk> CALL FOR PAPERS PATTERN ANALYSIS AND APPLICATIONS journal http://www.soc.plym.ac.uk/soc/sameer/paa.htm Springer-Verlag Limited Springer Verlag Ltd is launching a new journal - Pattern Analysis and Applications (PAA) - in Spring 1998. Original Papers are now invited for the journal covering the following areas of interest: Aims and Scope of PAA: The journal publishes high quality articles in areas of fundamental research in pattern analysis and applications. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. The main aim of the journal is to publish high quality research in intelligent pattern analysis in computer science and engineering. In addition, the journal will also publish articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control. 
The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition, and uncertainty management in applied pattern recognition are particularly solicited. The journal encourages the submission of original case studies on applied pattern recognition which describe important research in the area. The journal also solicits reviews on novel pattern analysis benchmarks, evaluation of pattern analysis tools, and important research activities at international centres of excellence working in pattern analysis. Audience: Researchers in computer science and engineering. Research and development personnel in industry. Researchers and practitioners in application areas where pattern analysis is used, and researchers working on novel pattern recognition and analysis techniques and their specific applications.
________________________________________________________________ Editorial Board: Sukhan Lee University of Southern California, USA Eric Saund Xerox Palo Alto Research Center, USA Narendra Ahuja University of Illinois - Urbana Champaign, USA Andras Lorincz University of Szeged, Hungary William Freeman Mitsubishi Electric Research Lab., USA Jianchang Mao IBM, USA Xuedong Huang Microsoft Corporation, USA Haruo Asada Toshiba Corporation, Japan Jim Keller University of Missouri-Columbia, USA Kurt Hornik Technical University of Vienna, Austria K K Biswas Indian Institute of Technology, India Frederick Jelinek Johns Hopkins University, USA Adnan Amin University of New South Wales, Australia Jussi Parkkinen Lappeenranta University of Technology, Finland Hans Guesgen University of Auckland, New Zealand Gregory Hager Yale University, USA Gerhard Ritter University of Florida, USA Gabor Herman University of Pennsylvania, USA Ravi Kothari University of Cincinnati, USA Tan Chew Lim National University of Singapore, Singapore Horst Bischof Technical University of Vienna, Austria Larry Spitz Daimler-Benz Research & Tech., USA Torfinn Taxt University of Bergen, Norway Ching Y Suen Concordia University, Canada Terry Caelli Curtin University of Technology, Australia Eric Ristad Princeton University, USA Andreas Dengel German Research Centre for AI GmbH, Germany Henri Prade Universite Paul Sabatier, France Alexander Franz Sony Computer Science Lab Technology, Japan Dan Adam Technion-Israel Inst.
of Technology, Israel John MacIntyre University of Sunderland, UK Robert Duin Delft University of Technology, Netherlands Hsi-Jian Lee National Chiao Tung University, Taiwan Steven Salzberg Johns Hopkins University, USA Ruggero Milanese University of Geneva, Switzerland Masayuki Nakajima Tokyo Institute of Technology, Japan Melanie Mitchell Santa Fe Institute, USA Madan Singh UMIST, UK James Duncan Yale University, USA Sanjoy Mitter MIT, USA Mari Ostendorf Boston University, USA Steve Young University of Cambridge, UK Alan Bovik University of Texas at Austin, USA Michael Brady University of Oxford, UK Simon Kasif University of Illinois at Chicago, USA David G Stork RICOH Silicon Valley, USA ______________________________________________________ Send your submissions to: Sameer Singh Editor-in-Chief, Pattern Analysis and Applications School of Computing University of Plymouth Plymouth PL4 8AA UK Full information about the journal and detailed instructions for Call for Papers can be found at the PAA web site. From eppler at hpe.fzk.de Wed Jan 21 08:54:59 1998 From: eppler at hpe.fzk.de (Wolfgang Eppler) Date: Wed, 21 Jan 1998 14:54:59 +0100 Subject: Interpretation and Optimization of Neural Systems Message-ID: <34C5FE33.1B0259B6@hpe.fzk.de> Call for Papers Invited Session "Interpretation and Optimization of Neural Systems" at EUFIT '98, Aachen, Germany, September 7 - 10, 1998 Neural networks are known to be black boxes. Their internal structure is determined by training algorithms that do not care about human comprehension. Some neural networks can be analyzed easily, e.g. self-organized feature maps, with their weights being prototype vectors in the input space, or radial basis function networks, with their weights and biases being centers and variances of Gaussian regions. But even for multi-layer perceptrons there exist methods to better understand the interior of a network.
Some approaches use fuzzy rules or other symbolic methods; others use geometrical interpretations. This analysis capability is important when neural networks are used in domains with high safety requirements. The unknown response of a network to extreme values, and input spaces that are difficult to test, make such analysis and manipulation tools necessary. The manipulation of a network after training may help to optimize the generalization capability of the solution found. Genetic algorithms and graphical approaches are example methods for achieving these objectives. The invited session addresses these different optimization and interpretation techniques. Both new theoretical methods and demonstrations of tools are welcome. Deadline for abstract: February 1, 1998 Deadline for camera ready paper: March 31, 1998 Address: Prof. Dr. H. Gemmeke Forschungszentrum Karlsruhe, FZK (Research Centre Karlsruhe) POB 3640 76021 Karlsruhe Germany or Fax: ++49 7247 82 3560 Tel: ++49 7247 82 5537 or email: eppler at hpe.fzk.de Please contact: Wolfgang Eppler, Tel: ++49 7247 82 5537, email: eppler at hpe.fzk.de Prof. Dr. Hartmut Gemmeke, Tel: ++49 7247 82 5635, email: gemmeke at hpe.fzk.de From jose at tractatus.rutgers.edu Wed Jan 21 12:26:22 1998 From: jose at tractatus.rutgers.edu (Stephen Hanson) Date: Wed, 21 Jan 1998 12:26:22 -0500 Subject: POSTDOC immediately available. RUTGERS Newark PSYCHOLOGY DEPARTMENT Message-ID: <34C62FBE.258E6EB8@tractatus.rutgers.edu> The Department of Psychology of Rutgers University-Newark Campus -- POSTDOCTORAL Position A postdoctoral position is available that can be filled *immediately*, running through Fall 98/Spring 99, with a possibility of a second-year renewal. Area of specialization: connectionist modeling with applications to categorization, recurrent networks, brain imaging, or more generally cognitive neuroscience. Review of applications will begin immediately, but applications will continue to be accepted until the position is filled.
Starting date is flexible within the Spring 98 semester. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply. Send CV to Professor S. J. Hanson, Chair, Department of Psychology - Post Doc Search, Rutgers University, Newark, NJ 07102. Email enquiries can be made to jose at psychology.rutgers.edu; please include POSTDOC in the subject heading. -- Stephen J. Hanson Professor & Chair Department of Psychology Smith Hall Rutgers University Newark, NJ 07102 voice: 1-973-353-5440 x5095 fax: 1-973-353-1171 http://psychology.rutgers.edu email: jose at kreizler.rutgers.edu cellular: 1-201-757-2589 From baluja at jprc.com Wed Jan 21 17:03:36 1998 From: baluja at jprc.com (Shumeet Baluja) Date: Wed, 21 Jan 1998 17:03:36 -0500 Subject: Paper Available on Rotation Invariant Face Detection Message-ID: <199801212203.RAA16887@india.jprc.com> Rotation Invariant Neural Network-Based Face Detection by: Henry Rowley Shumeet Baluja Takeo Kanade Abstract: In this paper, we present a neural network-based face detection system. Unlike similar systems, which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; the first is a ``router'' network which processes each input window to determine its orientation and then uses this information to prepare the window for one or more ``detector'' networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces which are rotated out of the image plane, such as profiles and semi-profiles. This is Technical Report CMU-CS-97-201, available from: http://www.cs.cmu.edu/~har/faces.html and http://www.cs.cmu.edu/~baluja/techreps.html Questions and comments are welcome.
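[Editorial note: the router/detector pipeline described in the abstract can be sketched schematically as below. The random-weight "networks", the 20x20 window size, the 10-degree angle bins, and the crude nearest-neighbour derotation are all illustrative assumptions, not the paper's trained models or actual architecture.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Stub "router" and "detector" nets: random-weight linear layers standing in
# for the trained networks in the paper (placeholders only).
W_router = rng.normal(0.0, 0.1, (36, 400))   # 36 units = 10-degree angle bins (assumed)
W_detect = rng.normal(0.0, 0.1, (1, 400))

def router_angle(window):
    """Estimate in-plane rotation: the strongest-responding unit votes
    for its 10-degree bin (the paper's router uses a finer scheme)."""
    act = W_router @ window.ravel()
    return 10.0 * np.argmax(act)

def derotate(window, angle_deg):
    """Nearest-neighbour rotation of a square window back to upright."""
    n = window.shape[0]
    c = (n - 1) / 2.0
    a = np.deg2rad(angle_deg)
    out = np.zeros_like(window)
    for i in range(n):
        for j in range(n):
            # inverse-map each output pixel to its source location
            x, y = i - c, j - c
            si = int(round(c + np.cos(a) * x - np.sin(a) * y))
            sj = int(round(c + np.sin(a) * x + np.cos(a) * y))
            if 0 <= si < n and 0 <= sj < n:
                out[i, j] = window[si, sj]
    return out

def detect_face(window):
    """Full pipeline: route -> derotate -> detect (sigmoid score in [0, 1])."""
    angle = router_angle(window)
    upright = derotate(window, angle)
    return 1.0 / (1.0 + np.exp(-(W_detect @ upright.ravel())[0]))

score = detect_face(rng.random((20, 20)))
```

The point of the two-stage design is that the detector only ever has to learn upright faces; rotation invariance comes entirely from routing each window through a derotation step first.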
From hagai at phy.ucsf.EDU Wed Jan 21 19:04:29 1998 From: hagai at phy.ucsf.EDU (Hagai Attias) Date: Thu, 22 Jan 98 00:04:29 +0000 Subject: Paper available: blind source separation Message-ID: <199801220804.AAA20240@phy.ucsf.EDU> A new paper on blind separation of mixed and convolved sources is available at: http://keck.ucsf.edu/~hagai/papers.html ------------------------------------------------------- BLIND SOURCE SEPARATION AND DECONVOLUTION: THE DYNAMIC COMPONENT ANALYSIS ALGORITHM Hagai Attias and Christoph E. Schreiner University of California, San Francisco hagai at phy.ucsf.edu (Neural Computation 1998, in press) We present a novel family of unsupervised learning algorithms for blind separation of mixed and convolved sources. Our approach, termed `dynamic component analysis' (DCA), is based on formulating the separation problem as a learning task of a spatio-temporal generative model. The resulting learning rules achieve separation by exploiting high-order spatio-temporal statistics of the observed data. Using an extension of the relative-gradient concept to the spatio-temporal case, we derive different rules by learning generative models in the frequency and time domains; a hybrid frequency/time model leads to the best performance. These algorithms generalize independent component analysis (ICA) to the case of convolutive mixtures, and exhibit superior performance on instantaneous mixtures. In addition, our approach can incorporate information about the mixing situation when available, resulting in a `semi-blind' separation algorithm. Finally, the spatio-temporal redundancy reduction performed by DCA algorithms is shown to be equivalent to information-rate maximization through a simple network.
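[Editorial note: for a concrete feel for the relative-gradient learning mentioned in the abstract, here is a minimal sketch of its best-known special case: natural-gradient ICA on an instantaneous two-source mixture. This is standard ICA, not the DCA algorithm itself (DCA extends it to convolved sources); the mixing matrix, tanh nonlinearity, and learning rate are illustrative choices.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Two super-Gaussian (Laplacian) sources, instantaneously mixed.
n = 5000
S = np.vstack([rng.laplace(size=n), rng.laplace(size=n)])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])        # "unknown" mixing matrix
X = A @ S                         # observed mixtures

W = np.eye(2)                     # unmixing matrix, initialized at identity
eta = 0.02
for _ in range(2000):
    Y = W @ X
    # relative/natural-gradient ICA update:
    #   dW = eta * (I - E[f(y) y^T]) W, with f(y) = tanh(y)
    # (right-multiplying by W is what makes this the *relative* gradient)
    C = (np.tanh(Y) @ Y.T) / n
    W += eta * (np.eye(2) - C) @ W

Y = W @ X   # recovered sources, up to permutation and scaling
```

At a separating solution W @ A approaches a scaled permutation matrix, so each recovered component correlates strongly with exactly one true source.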
From PHKYWONG at usthk.ust.hk Fri Jan 23 01:39:13 1998 From: PHKYWONG at usthk.ust.hk (PHKYWONG@usthk.ust.hk) Date: Fri, 23 Jan 1998 14:39:13 +0800 Subject: New Book: Theoretical Aspects of Neural Computation Message-ID: <01ISPX7VCY2A9110EM@usthk.ust.hk> Announcing a new book by Springer (order information attached): Theoretical Aspects of Neural Computation ----------------------------------------- A Multidisciplinary Perspective [Proceedings of the Hong Kong International Workshop (TANC'97)] Kwok-Yee Michael Wong, Irwin King and Dit-Yan Yeung (Eds.) Over the past decade or so, neural computation has emerged as a research area with active involvement by researchers from a number of different disciplines, including computer science, engineering, mathematics, neurobiology, physics, and statistics. The Hong Kong International Workshop, TANC'97, brought together researchers with a diverse background to review the current status of neural computation research. Three aspects of neural computation have been emphasized: neuroscience aspects, computational and mathematical aspects, and statistical physics aspects. This book contains 29 contributions from frontier researchers in these fields. Thoroughly re-edited, and in some cases revised post-workshop, these papers, collated into this review volume, provide a top-class reference summary of the state-of-the-art work done in this field. Table of Contents ================= The Natural Gradient Learning Algorithm for Neural Networks Shun-ichi Amari Regression with Gaussian Processes: Average Case Performance Manfred Opper Bayesian Ying-Yang System and Theory as A Unified Statistical Learning Approach (II): From Unsupervised Learning to Supervised Learning and Temporal Modeling Lei Xu Bayesian Ying-Yang System and Theory as A Unified Statistical Learning Approach: (III) Models and Algorithms for Dependence Reduction, Data Dimension Reduction, ICA and Supervised Learning Lei Xu Optimal Bayesian Online Learning Ole Winther and Sara A.
Solla Several Aspects of Pruning Methods in Recursive Least Square Algorithms for Neural Networks Chi-Sing Leung, Pui-Fai Sum, Ah-Chung Tsoi, and Lai-Wan Chan Experts or an Ensemble? A Statistical Mechanics Perspective of Multiple Neural Network Approaches Jong-Hoon Oh and Kukjin Kang Mean Field Theory of Learning in Pruned Perceptrons K. Y. Michael Wong Solving Inverse Problems by Bayesian Iterative Inversion of Neural Networks Jenq-Neng Hwang Stochastic Orientation of the Generating Distribution in Very Fast Simulated Reannealing Bruce E. Rosen Graph Partitioning Using Homotopy Based Fast Annealed Neural Networks Jian-Jun Xue and Xiao-Hu Yu Modelling Synfire Processing John Hertz Ideal Observers of Visual Object Recognition Zili Liu Primary Cortical Dynamics for Visual Grouping Zhaoping Li Architecture of Cortex Revealed by Divided Attention Experiments Ching Elizabeth Ho Information Merging in Neural Modelling Massimo Battisti, Pietro Burrascano and Dario Pirollo Signal Recognition Based on Wavelet and Wavelet Neural Network Yao-Jun Wu, Xi-Zhi Shi and Ming Xu Chaos Theory in EEG Analysis Hongkui Jing and Shijun Chen A Neurocomputational Model of Figure-Ground Discrimination by Relative Motion Aike Guo, Haijian Sun, and Lin Liu Feature Selectivity in a Cortical Module with Short-Range Excitation David Hansel and Haim Sompolinsky A Psychophysical Experiment to Test the Efficient Stereo Coding Theory Danmei Chen and Zhaoping Li An Exact Solution for On-Line Learning of Smooth Functions Sara A. Solla Unsupervised Learning by Examples: On-line Versus Off-line C. Van den Broeck A Simple Perceptron that Learns Non-Monotonic Rules Jun-ichi Inoue, Hidetoshi Nishimori and Yoshiyuki Kabashima The Stability of Asymmetric Hopfield Networks With Nonnegative Weights Jinwen Ma Fully Connected Q-Ising Neural Networks: A General Scheme for Discussing Parallel Dynamics D. Bolle, G. Jongen and G. M.
Shim Recurrent Sampling Models Peter Dayan Low-Complexity Coding and Decoding Sepp Hochreiter and Juergen Schmidhuber Pattern Analysis and Synthesis in Attractor Neural Networks H. Sebastian Seung Author Index Subject Index ============================================================================ ORDERING INFORMATION: ISBN 981-3083-70-0 Book price: US$49.00 (excluding postage charges) Please find the ordering information at the website: http://www.springer.com.sg under Books on Computer Science. Alternatively, please fax your orders to: [1] Singapore sales (65) 84 20 107 for clients in SE Asia (email: orders at cyberway.com.sg; tel: (65) 84 20 112) or [2] Hong Kong sales (852) 2724 2366 for clients in N. Asia (email: joes at springer.com.hk; tel: (852) 2723 9698) From Dave_Touretzky at cs.cmu.edu Fri Jan 23 04:08:53 1998 From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu) Date: Fri, 23 Jan 1998 04:08:53 -0500 Subject: summer undergrad. program in cognitive/computational neuroscience Message-ID: <10831.885546533@skinner.boltz.cs.cmu.edu> The Center for the Neural Basis of Cognition, a joint program of the University of Pittsburgh and Carnegie Mellon University, offers an annual summer program for a small number of qualified undergraduates interested in studying cognitive or computational neuroscience. The program offers undergraduates ten weeks of intensive involvement in laboratory research supervised by one of the program's faculty. The program also includes weekly journal club meetings and a series of lectures and laboratory tours designed to give students a broad exposure to cognitive and computational neuroscience topics. Students' individual research experiences are planned in consultation with the training program's Director.
Potential laboratory environments include single unit recording, neuroanatomy, brain imaging, computer simulation of biological or cognitive phenomena, robotics, and neuropsychological or behavioral assessment of clinical subjects. Students selected to participate in the program will receive a $2500 stipend, plus housing and a modest travel allowance. Support is provided by the National Science Foundation and the Center for the Neural Basis of Cognition. The application deadline this year is February 15, 1998. The program begins in early June and lasts for 10 weeks. For additional information about the program or to obtain application materials, visit our web site at http://www.cnbc.cmu.edu/Training/summer From sml%essex.ac.uk at seralph21.essex.ac.uk Tue Jan 27 04:09:08 1998 From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas) Date: Tue, 27 Jan 1998 09:09:08 +0000 Subject: paper and applet on chromosome coding for NN evolution Message-ID: <34CDA434.48C4@essex.ac.uk> Dear Connectionists, The following paper and applet are available from my website: http://esewww.essex.ac.uk/~sml entitled: A comparison of matrix rewriting versus direct encoding for evolving neural networks A.A. Siddiqui and S.M. Lucas Proceedings of IEEE International Conference on Evolutionary Computation, 1998 (to appear) Abstract: The intuitive expectation is that the scheme used to encode the neural network in the chromosome should be critical to the success of evolving neural networks to solve difficult problems. In 1990 Kitano (Complex Systems, vol 4, pp 461 - 476) published an encoding scheme based on context-free parallel matrix rewriting. The method allowed compact, finite chromosomes to grow neural networks of potentially infinite size. Results were presented that demonstrated superior evolutionary properties of the matrix rewriting method compared to a simple direct encoding.
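[Editorial note: a minimal sketch of the parallel matrix-rewriting idea just summarized. The rule table below is hand-written for illustration; in Kitano's scheme the rules themselves are encoded on the chromosome and evolved.]

```python
import numpy as np

# Toy rewrite rules: each symbol expands to a 2x2 block of symbols.
# Terminals '0' and '1' reproduce themselves, so development terminates
# in a 0/1 connectivity matrix. (Illustrative rules, not evolved ones.)
rules = {
    'S': [['A', 'B'], ['C', 'D']],
    'A': [['1', '0'], ['0', '1']],
    'B': [['0', '0'], ['1', '0']],
    'C': [['1', '0'], ['0', '0']],
    'D': [['0', '1'], ['1', '1']],
    '0': [['0', '0'], ['0', '0']],
    '1': [['1', '1'], ['1', '1']],
}

def develop(seed='S', steps=2):
    """Parallel matrix rewriting: on each step every cell is replaced by
    its 2x2 expansion, so the matrix doubles in size per step."""
    M = [[seed]]
    for _ in range(steps):
        size = len(M)
        new = [[None] * (2 * size) for _ in range(2 * size)]
        for i in range(size):
            for j in range(size):
                block = rules[M[i][j]]
                for di in range(2):
                    for dj in range(2):
                        new[2 * i + di][2 * j + dj] = block[di][dj]
        M = new
    return np.array([[int(c) for c in row] for row in M])

conn = develop('S', steps=2)   # 4x4 connectivity matrix for a 4-node net
```

A fixed-length rule table thus grows a 2^k x 2^k connectivity matrix in k steps, which is the sense in which a compact, finite chromosome can encode networks of potentially unbounded size.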
In this paper, we present results that contradict those findings, and demonstrate that a genetic algorithm (GA) using a direct encoding can find good individuals just as efficiently as a GA using matrix rewriting. The applet allows you to attempt to reproduce the results presented in the paper and to extend the comparison to other datasets. Best regards, Simon Lucas -- ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk http://esewww.essex.ac.uk/~sml secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From diane at cs.cmu.edu Tue Jan 27 16:37:01 1998 From: diane at cs.cmu.edu (Diane Stidle) Date: Tue, 27 Jan 1998 16:37:01 -0500 Subject: CONALD Conference Announcement Message-ID: <3.0.2.32.19980127163701.01014da8@ux5.sp.cs.cmu.edu> CONFERENCE ON AUTOMATED LEARNING AND DISCOVERY (CONALD) June 11-13, 1998 at Carnegie Mellon University, Pittsburgh, PA The Conference on Automated Learning and Discovery (CONALD'98) will bring together leading researchers from scientific disciplines concerned with learning from data. It will cover scientific research at the intersection of statistics, computer science, artificial intelligence, databases, social sciences and language technologies. The goal of this meeting is to explore new, unified research directions in this cross-disciplinary field. The conference features eight one-day cross-disciplinary workshops, interleaved with seven invited plenary talks by renowned statisticians, computer scientists, and cognitive scientists. The workshops will address issues such as: What is the state of the art, what can we do, and what is missing? What are promising research directions? What are the most promising opportunities for cross-disciplinary research?
CONALD differs from other meetings in the field in its broad, interdisciplinary scope. The goal of CONALD is to characterize the state-of-the-art in automated learning and discovery, and to identify promising cross-disciplinary research directions. The format will be very much tailored towards open discussions and free exchange of ideas. This meeting will be summarized by a written report that will be made available to the scientific community and NSF. ___Plenary speakers________________________________________________ * Tom Dietterich * Stuart Geman * David Heckerman * Michael Jordan * Daryl Pregibon * Herb Simon * Robert Tibshirani ___Workshops_______________________________________________________ * Visual Methods for the Study of Massive Data Sets organized by Bill Eddy and Steve Eick * Learning Causal Bayesian Networks organized by Richard Scheines and Larry Wasserman * Discovery in Natural and Social Science organized by Raul Valdes-Perez * Mixed-Media Databases organized by Shumeet Baluja, Christos Faloutsos, Alex Hauptmann, and Michael Witbrock * Learning from Text and the Web organized by Yiming Yang, Jaime Carbonell, Steve Fienberg, and Tom Mitchell * Robot Exploration and Learning organized by Howie Choset, Maja Mataric and Sebastian Thrun * Machine Learning and Reinforcement Learning for Manufacturing organized by Sridhar Mahadevan and Andrew Moore * Large-Scale Consumer Databases organized by Mike Meyer, Teddy Seidenfeld and Kannan Srinivasan ___Deadline_for_paper_submissions__________________________________ * February 16, 1998 ___More_information________________________________________________ * Web: http://www.cs.cmu.edu/~conald * E-mail: conald at cs.cmu.edu For submission instructions, consult our Web page or contact the organizers of the specific workshop. A limited number of travel stipends will be available. The conference will be sponsored by CMU's newly created Center for Automated Learning and Discovery. 
Additional financial support will be provided by the National Science Foundation (NSF). From terry at salk.edu Wed Jan 28 06:33:38 1998 From: terry at salk.edu (terry@salk.edu) Date: Wed, 28 Jan 1998 03:33:38 -0800 (PST) Subject: NEURAL COMPUTATION 10:2 Message-ID: <199801281133.DAA01530@hebb.salk.edu> Neural Computation - Contents Volume 10, Number 2 - February 15, 1998 ARTICLE Natural Gradient Works Efficiently in Learning Shun-ichi Amari NOTES Adding Lateral Inhibition to a Simple Feedforward Network Enables it to Perform Exclusive-Or Leslie S. Smith Combined Learning and Use for a Mixture Model Equivalent to the RBF Classifier David J. Miller and Hasan S. Uyar LETTERS Modeling the Surround of MT Cells and Their Selectivity for Surface Orientation in Depth Specified by Motion Lin Liu and Marc M. van Hulle A Self-Organizing Neural Network Architecture for Navigation Using Optic Flow Seth Cameron, Stephen Grossberg, and Frank H. Guenther Analysis of Direction Selectivity Arising From Recurrent Cortical Interactions Paul Mineiro and David Zipser Statistically Efficient Estimation Using Population Coding Alexandre Pouget, Kechen Zhang, Sophie Deneve, and Peter E. Latham Probabilistic Interpretation of Population Codes Richard S. Zemel, Peter Dayan, and Alexandre Pouget Stable and Rapid Recurrent Processing in Realistic Autoassociative Memories Francesco P. Battaglia and Alessandro Treves Synaptic Runaway in Associative Networks and the Pathogenesis of Schizophrenia Asnat Greenstein-Messica and Eytan Ruppin On Numerical Simulations of Integrate-and-Fire Neural Networks D. Hansel, G. Mato, C. Meunier, and L. Neltner A Floating Gate MOS Implementation of Resistive Fuse T. Matsumoto, T. Sawaji, T. Sakai, and H.
Nagai ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES
                 USA    Canada*   Other Countries
Student/Retired  $50    $53.50    $78
Individual       $82    $87.74    $110
Institution      $285   $304.95   $318
* includes 7% GST (Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From tom.ziemke at ida.his.se Wed Jan 28 02:53:49 1998 From: tom.ziemke at ida.his.se (Tom Ziemke) Date: Wed, 28 Jan 1998 08:53:49 +0100 Subject: CFP - Autonomous Robotics and Adaptive Behaviour at ICANN 98 Message-ID: <199801280753.IAA27796@tor.ida.his.se> ------------------------------------------------------- This is being mailed to multiple mailing lists. Please accept our apologies if you receive multiple copies. ------------------------------------------------------- CALL FOR PAPERS ------------------------------------------- AUTONOMOUS ROBOTICS and ADAPTIVE BEHAVIOUR ------------------------------------------- A module of ICANN 98, the 8th International Conference on Artificial Neural Networks, Skoevde, Sweden, 2-4 Sept. 1998 ----------------- INVITED SPEAKERS ----------------- Invited speakers for this module are * Rodney Brooks, MIT AI Lab, Cambridge, USA * Phil Husbands, COGS, University of Sussex, UK ------ SCOPE ------ The adaptivity and flexibility of artificial neural networks (ANNs), as well as their capacity for learning and self-organisation, make them ideal candidates for the control of autonomous robots. For the same reasons, ANNs and ANN-controlled robots are also increasingly being used to model biological mechanisms underlying the adaptive behaviour of animals.
Shared characteristics of biological and artificial autonomous agents include embodiment, situatedness, and the requirement to exhibit adaptive behaviour in sensorimotor interaction with dynamic environments. This module of ICANN 98 covers the use of ANN techniques for adaptive control of autonomous robots as well as ANN/robotic models of animal behaviour. Possible topics include, but are not restricted to: * adaptive behaviour in biological and artificial autonomous agents * ANN learning methods for adaptation, control and navigation * communication, cooperation and collective behaviour in multi-agent systems * the role of representation in embodied/situated/autonomous systems * dynamics of agent-environment interaction * biologically and ethologically inspired agent modelling * evolutionary robotics and artificial life * cognitive robotics and embodied cognition ------------------------------------ PROGRAMME COMMITTEE for this module ------------------------------------ * Randall Beer * Valentino Braitenberg * Dario Floreano * Stefano Nolfi * Jordan Pollack * Noel Sharkey * Jun Tani * Carme Torras * Francisco Varela * Tom Ziemke ----------- SUBMISSION ----------- Prospective authors are invited to submit papers for oral or poster presentation by MARCH 25, 1998. For details please see: http://www.his.se/ida/icann98/submission or contact the conference secretariat (see below). ------------ PUBLICATION ------------ All papers accepted for oral or poster presentation will appear in the ICANN 98 proceedings published by Springer-Verlag. The organizers have also arranged to edit a journal special issue which will include (in extended form) selected papers from this track. -------------------- FURTHER INFORMATION -------------------- This module is organized by Noel Sharkey, University of Sheffield, UK, and Tom Ziemke, Univ. of Skoevde, Sweden.
For further information concerning this module please see the web page http://www.ida.his.se/ida/icann98/robotics or contact Tom Ziemke University of Skoevde Dept. of Computer Science P.O. Box 408 S-541 28 Skoevde SWEDEN tom at ida.his.se fax +46 - (0)500 - 46 47 25 tel +46 - (0)500 - 46 47 30 For further information concerning ICANN 98 please see the web site http://www.his.se/ida/icann98/ or contact the conference secretariat ICANN 98 Leila Khammari University of Skoevde P.O. Box 408 S-541 28 Skoevde SWEDEN icann98 at ida.his.se fax +46 - (0)500 - 46 47 25 ---------------- IMPORTANT DATES ---------------- March 25, 1998 - submissions must be received May 6, 1998 - notification of acceptance or rejection May 28, 1998 - final camera-ready papers are due Sept. 2-4, 1998 - ICANN 98 takes place From jimmy at ecowar.demon.co.uk Wed Jan 28 13:03:30 1998 From: jimmy at ecowar.demon.co.uk (Jimmy Shadbolt) Date: Wed, 28 Jan 1998 18:03:30 +0000 Subject: research position - market prediction Message-ID: Econostat Ltd Hennerton House Wargrave Berks RG10 8PD United Kingdom We would like to invite applications for a research position at Econostat. The research team is involved principally in the prediction of monthly returns in the global bond and equity markets. All methods of prediction are investigated - regression, neural networks, genetic algorithms, Bayesian analysis, and anything else you can suggest. Original work is encouraged (and necessary!). Position Quantitative Research Analyst Job Description Research and development of expected return models Applications to Jimmy Shadbolt jimmy at ecowar.demon.co.uk Start Date IMMEDIATE Qualifications First degree in numerate discipline (maths, engineering, physics, statistics, etc). PhD (or MSc) in one of econometrics, mathematical statistics, applied mathematics or other related field of study. 
Strong interest in financial economics, as evidenced by research topic Training and experience Experience in econometrics, modern regression or optimisation methods Programming in C/C++ and/or Splus User experience in PC (word processing and spreadsheet) and Unix environments Aptitude and Ability Good oral and writing skills Creative and problem solving approach to research Personal Attributes Ability to work without close supervision as a member of a team Flexibility to meet changing opportunities in a dynamic research environment -- Jimmy Shadbolt From tani at csl.sony.co.jp Thu Jan 29 06:38:33 1998 From: tani at csl.sony.co.jp (Jun.Tani (SONY CSL)) Date: Thu, 29 Jan 98 20:38:33 +0900 Subject: TR: Self-organizing levels of articulation in sensory-motor systems. Message-ID: <9801291138.AA04434@tani.csl.sony.co.jp> Dear Connectionists, The following technical paper is available at http://www.csl.sony.co.jp/person/tani.html or directly at: ftp://ftp.csl.sony.co.jp/CSL/CSL-Papers/97/SCSL-TR-97-008.ps.Z --------------------------------------------------------------------------- Self-Organization of Modules and Their Hierarchy in Robot Learning Problems: A Dynamical Systems Approach Jun Tani and Stefano Nolfi (Sony CSL Technical Report: SCSL-TR-97-008) ABSTRACT: This paper discusses how modular and hierarchical structures can be self-organized dynamically in a robot learning paradigm. We develop an on-line learning scheme -- the so-called mixture of recurrent neural net (RNN) experts -- in which a set of RNN modules becomes self-organized as experts in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching between winning expert modules, responding to structural changes in the sensory-motor flow, actually corresponds to the temporal segmentation of behavior.
Meanwhile, another mixture of RNNs at a higher level learns the sequences of module switching occurring in the lower level, by which articulation at a still more abstract level is achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. The simulated robot equipped with range sensors traveled around rooms of different shapes. It was shown that modules corresponding to concepts such as turning right and left at corners, going straight along corridors and encountering junctions are self-organized in the lower level network. The modules corresponding to traveling in different rooms are self-organized in the higher level network. The robot succeeded in learning to perceive the world as articulated at multiple levels through its recursive interactions. -------------------------------------------------------------------------- Jun TANI, Ph.D Senior Researcher Sony Computer Science Laboratory Inc. Takanawa Muse Building, 3-14-13 Higashi-gotanda, Shinagawa-ku, Tokyo, 141 JAPAN email: tani at csl.sony.co.jp http://www.csl.sony.co.jp/person/tani.html Fax +81-3-5448-4273 Tel +81-3-5448-4380 From niall.griffith at ul.ie Thu Jan 29 05:29:58 1998 From: niall.griffith at ul.ie (Niall Griffith) Date: Thu, 29 Jan 1998 10:29:58 GMT Subject: PhD studentships: Connectionist Models of Musical Processes Message-ID: <9801291029.AA03089@shannon.csis.ul.ie> Please post this to those who would be interested. Thanks. Connectionist Models in Computational Musicology etc. ----------------------------------------------------- Centre for Computational Musicology and Computer Music Department of Computer Science and Information Systems University of Limerick Research studentships leading to a PhD.
in: Connectionist Models in Computational Musicology, Computer Music or Cognitive Musicology Applications are invited from students interested in working towards a Doctorate in the area of Computer Music, Computational Musicology or Cognitive Musicology, developing models of musical processes using neural networks and related machine learning techniques (GAs, Reinforcement Learning). Initially students will register for a Master's by research and subsequently be re-registered for a doctorate. The student(s) will be supervised by Dr. Niall Griffith who has an interest in models that learn about musical structure and that can use what has been learned. Limerick is musically very active with the Irish World Music Centre and the Centre for Computational Musicology and Computer Music. The CCMCM offers a Masters in Music Technology. Current projects include a collaboration with members of the Irish World Music Centre and the Interaction Design Centre at UL in designing and implementing a "wired" dance floor that can track, represent and analyse dance steps. This project is ongoing and involves the floor as a performance medium, a compositional and analytical tool and a choreographic aid. Network models will be used extensively in this project and others. Applicants should have a 2.1 honours degree in a relevant subject (e.g. Music, Cognitive Science, Computer Science, Psychology), though experience and other qualifications will be taken into account. Programming skills are an advantage. If you are curious then please visit the following web sites at UL... http://www.csis.ul.ie /* CSIS home page http://www.csis.ul.ie/ccmmc /* Centre for Computational Musicology http://www.ul.ie/~pal/litefoot /* LiteFoot home page Contact: Niall Griffith, Department of CSIS University of Limerick.
email: niall.griffith at ul.ie Telephone: +353 61 202785 Fax: +353 61 330876 From Yves.Moreau at esat.kuleuven.ac.be Thu Jan 29 13:12:11 1998 From: Yves.Moreau at esat.kuleuven.ac.be (Yves Moreau) Date: Thu, 29 Jan 1998 19:12:11 +0100 Subject: TR: Embedding Recurrent Neural Networks into Predator-Prey Models Message-ID: <34D0C67B.9A0BA590@esat.kuleuven.ac.be> Dear Connectionists, The following technical report is available via ftp or the World Wide Web: EMBEDDING RECURRENT NEURAL NETWORKS INTO PREDATOR-PREY MODELS Yves Moreau and Joos Vandewalle, K.U.Leuven ESAT-SISTA K.U.Leuven, Elektrotechniek-ESAT, Technical report ESAT-SISTA TR98-02 ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/moreau/reports/lotka_volterra_tr98-02.ps Comments are more than welcome! ABSTRACT ======== We study changes of coordinates that allow the embedding of the ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models -- also called Lotka-Volterra systems. We do this by transforming the equations for the neural network first into quasi-monomial form, where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our result shows that Lotka-Volterra systems are universal approximators of dynamical systems, just as continuous-time neural networks are. Keywords: Continuous-time neural networks, Equivalence of dynamical systems, Lotka-Volterra systems, Predator-prey models, Quasi-monomial forms.
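[Editorial aside: for readers unfamiliar with the two classes of equations named in the abstract above, they can be sketched as follows. This is a schematic reminder, not taken from the report itself; the particular CTRNN form, the symbols, and the dimensions shown are illustrative assumptions.]

```latex
% A continuous-time recurrent neural network (one standard form),
% with state x, weights w_{ij}, biases b_i, and sigmoid sigma:
\[
  \dot{x}_i = -x_i + \sum_{j=1}^{n} w_{ij}\,\sigma(x_j) + b_i,
  \qquad i = 1,\dots,n.
\]
% The target Lotka-Volterra (predator-prey) form, typically in a
% larger number of variables m > n after the change of coordinates:
\[
  \dot{y}_k = y_k \Bigl( \lambda_k + \sum_{l=1}^{m} A_{kl}\, y_l \Bigr),
  \qquad k = 1,\dots,m.
\]
```

The point of the embedding, as the abstract states, is that the first variables of the larger Lotka-Volterra system behave equivalently to the original network's state, so analysis techniques developed for Lotka-Volterra systems carry over.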
-------------------------------------------------------------- To get it from the World Wide Web, point your browser at: ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/moreau/reports/lotka_volterra_tr98-02.ps To get it via FTP: ftp ftp.esat.kuleuven.ac.be cd pub/SISTA/moreau/reports get lotka_volterra_tr98-02.ps -------------------- Yves Moreau Department of Electrical Engineering Katholieke Universiteit Leuven Leuven, Belgium email: moreau at esat.kuleuven.ac.be homepage: http://www.esat.kuleuven.ac.be/~moreau publications: http://www.esat.kuleuven.ac.be/~moreau/publication_list.html From plaut at cmu.edu Thu Jan 29 13:50:39 1998 From: plaut at cmu.edu (David Plaut) Date: Thu, 29 Jan 1998 13:50:39 -0500 Subject: Preprint: Modeling phonological development Message-ID: <3744.886099839@eagle.cnbc.cmu.edu> The following preprint is available via the Web or anonymous ftp. The Emergence of Phonology from the Interplay of Speech Comprehension and Production: A Distributed Connectionist Approach David C. Plaut Christopher T. Kello Carnegie Mellon University and the Center for the Neural Basis of Cognition To appear in B. MacWhinney (Ed.), The emergence of language. Mahwah, NJ: Erlbaum. A distributed connectionist framework for phonological development is proposed in which phonological representations are not predefined but emerge under the pressure of mediating among acoustic, semantic, and articulatory representations in the service of both comprehension and production. Within the framework, articulatory feedback during speech production is derived from the acoustic consequences of the system's own articulations via a learned forward model of the physical mapping relating articulation to acoustics. An implementation of the framework, in the form of a discrete-time simple recurrent network, learned to comprehend, imitate, and intentionally name a corpus of 400 monosyllabic words, and its errors in development showed tendencies similar to those of young children.
Although only a first step, the results suggest that the approach may ultimately form the basis for a comprehensive account of phonological development. [25 pages] URL: http://www.cnbc.cmu.edu/~plaut/papers/PlautKelloINPRESSchap.phon.ps.gz FTP-host: cnbc.cmu.edu FTP-file: pub/user/plaut/papers/PlautKelloINPRESSchap.phon.ps.gz (Note: an uncompressed version can be found under papers/uncompressed/) -Dave =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= David Plaut Center for the Neural Basis of Cognition and Mellon Institute 115, CNBC Departments of Psychology and Computer Science Carnegie Mellon University MI 115I, 412/268-5145 (fax -5060) 4400 Fifth Ave., Pittsburgh PA 15213-2683 http://www.cnbc.cmu.edu/~plaut "Doubt is not a pleasant condition but certainty is an absurd one." -Voltaire =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From udah057 at bay.cc.kcl.ac.uk Fri Jan 30 11:36:37 1998 From: udah057 at bay.cc.kcl.ac.uk (John Taylor) Date: Fri, 30 Jan 1998 16:36:37 GMT Subject: POSTDOCTORAL POSITION IN HYBRID PROBLEMS Message-ID: <199801301636.QAA13485@mail.kcl.ac.uk> POSTDOCTORAL POSITION IN HYBRID PROBLEMS Applications are invited for a 3-year position in the PHYSTA (TMR: Training and Mobility) EC Network at the Centre for Neural Networks, Department of Mathematics, King's College, Strand, London, WC2R 2LS, UK to work with Prof JG Taylor. The research will be to develop neural network modular and hierarchical hybrid architectures which are able to create, in an adaptive manner, subsymbolic representations especially for emotionally-loaded inputs based on speech and image analysis. These will then be related to processing at a symbolic level involved with the inputs. The applicant should have a PhD preferably in Neural Networks, if possible with experience in applying neural networks to problems in vision, speech or a related area.
The applicant will also be expected to make short trips to the laboratories of the other partners in the TMR Network (S Gielen, Nijmegen; B Apolloni, Milan; S Kollias, Athens are the relevant ones). Please send your CV by reply. From chalmers at paradox.ucsc.edu Sat Jan 31 18:54:47 1998 From: chalmers at paradox.ucsc.edu (David Chalmers) Date: Sat, 31 Jan 98 15:54:47 PST Subject: Toward a Science of Consciousness 1998 Message-ID: <9801312354.AA29934@paradox.ucsc.edu.lingdomain> TOWARD A SCIENCE OF CONSCIOUSNESS 1998 TUCSON, ARIZONA APRIL 27 - MAY 2, 1998 Program details are now available for the third Tucson conference on "Toward a Science of Consciousness". The conference will take place from Monday April 27 to Saturday May 2, 1998, at the Tucson Convention Center and Music Hall, sponsored by the University of Arizona. Included below is an outline of plenary sessions and speakers, a list of concurrent sessions, and registration details. The information below is subject to slight revision. Note that the deadline for early registration is FEBRUARY 2. More details (including information on pre-conference workshops, poster sessions, abstracts, lodging, detailed schedule, and so on) can be found on the conference web sites at: http://www.consciousness.arizona.edu/page.htm (general information) http://www.zynet.co.uk/imprint/Tucson/ (abstracts, etc) PROGRAM COMMITTEE: David Chalmers, Stuart Hameroff, Alfred Kaszniak, Christof Koch, Marilyn Schlitz, Alwyn Scott, Petra Stoerig, Keith Sutherland, Michael Winkelman ---------------------------------------------------------------------- PLENARY SESSIONS Monday April 27 PL1: THE SELF * G. Strawson: The self. * M.S. Gazzaniga: The mind's past. * J. Shear: Experiential clarification of `The problem of self'. PL2: IMPLICIT PROCESSES * A. Greenwald: Simple mental feats that require conscious cognition (because unconscious cognition can't do them.) * P. Merikle: Is there memory for events during anesthesia? 
PL3: PATHWAYS OF VISUAL CONSCIOUSNESS * D. Milner: Unconscious visual processing for action: neuropsychological evidence. * M. Goodale: Unconscious visual processing for action: evidence from normal observers. * M. Mishkin: On the neural basis of visual awareness. Tuesday April 28 PL4: SLEEP AND DREAMING * B. McNaughton: Sleep and dreaming [working title] * J.A. Hobson: Neuropsychology of dreaming consciousness. * S. LaBerge: Lucid dreaming: psychophysiological studies of consciousness during REM sleep. PL5: INTEGRATIVE PERSPECTIVES * C. Koch: Visual awareness and the frontal lobes. * M. Tye: Representation and consciousness. PL6: COLOR AND CONSCIOUSNESS * K. Nordby: A 'colorful' life in black and white. * C.L. Hardin: Color quality and color structure. * M. Nida-Rumelin: Color and consciousness [working title] Wednesday April 29 PL7: TRANSPERSONAL PSYCHOLOGY * F. Vaughan: Essential dimensions of consciousness: Objective, subjective and intersubjective. * H. Hunt: Transpersonal and cognitive psychologies of consciousness: A necessary and reciprocal dialogue. * M. Schlitz: Transpersonal consciousness? Assessing the evidence. PL8: EMOTIONAL EXPERIENCE * A. Kaszniak: Conscious experience and autonomic response to emotion following frontal lobe damage. * R.D. Lane: Subregions within the anterior cingulate cortex may differentially participate in phenomenal and reflective consciousness awareness of emotion. Thursday April 30 PL9: EVOLUTION AND FUNCTION OF CONSCIOUSNESS I * S. Mithen: Handaxes: Some hard evidence regarding the evolution of the mind and consciousness * N. Humphrey: Cave painting, autism and the evolution of the human mind. * Third speaker TBA PL10: EVOLUTION AND FUNCTION OF CONSCIOUSNESS II * A.G. Cairns-Smith: If qualia evolved... * R.L. Gregory: What do qualia do? PL11: THE EXPLANATORY GAP * J. Levine: Conceivability, possibility, and the explanatory gap * C. McGinn: The explanatory gap [working title] * G. 
Rosenberg: On the intrinsic nature of the physical. Friday May 1 PL12: CULTURE AND CONSCIOUSNESS * A. Zajonc: Goethe and the science of consciousness: Toward a scientist's phenomenology of mind. * C. Laughlin: Biogenetic structural theory and the neurophenomenology of consciousness. * M. Winkelman: The fundamental properties of systems with consciousness. PL13: BLINDSIGHT * P. Stoerig, A. Cowey, R. Goebel: Blindsight and its neuronal basis. * S. Zeki: Blindsight [working title] PL14: SPACE, TIME, AND CONSCIOUSNESS * L. Smolin: Space, time and consciousness [working title] * P. Hut: Exploring actuality, through experiment and experience. * K. Yasue: Consciousness and photon dynamics in the brain. Saturday May 2 PL15: NEURAL CORRELATES OF CONSCIOUSNESS * B. Baars: Is a real psychoscope possible? Inferring when brain scans show us conscious experiences. * A. Revonsuo: How to take consciousness seriously in cognitive neuroscience. * J.B. Newman: Beyond pandemonium: the role of the reticular core in unifying the stream of consciousness PL16: AESTHETICS AND CONSCIOUSNESS * C.W. Tyler: The structure of interpersonal consciousness in art. * Second speaker TBA --------------------------------------------------------------- CONCURRENT SESSIONS Each session will have five speakers. For more details, see the conference web site. 
Monday April 27 C1: Qualia C2: Neural correlates of consciousness C3: Implicit cognition C4: Time C5: Isomorphism between phenomenology and neuroscience C6: Crosscultural perspectives Tuesday April 28 C7: Materialism and dualism C8: The function of consciousness C9: Attention and vision C10: Quantum biology and consciousness C11: Parapsychology C12: Consciousness and literature C13: Awareness, attention, and memory during sleep Thursday April 30 C14: The concept of consciousness C15: Computational and cognitive models C16: Blindsight C17: Evolution of consciousness C18: Altered states of consciousness C19: First-, second-, and third-person perspectives C20: Unconscious influences on motivational/affective awareness Friday May 1 C21: Unity of consciousness and the self C22: Ethics C23: Sleep and dreaming C24: Consciousness and physical reality C25: Emotion and volition C26: Art, music, and consciousness --------------------------------------------------------------- REGISTRATION FORM Toward a Science of Consciousness 1998 81ULCON227 (Please print in block letters or type.) Mr./Mrs./Ms./Dr. __________________________________________ Organization ______________________________________________ Address ___________________________________________________ City ________________________ State/Province______________ Postal Code__________________ Country_____________________ Daytime phone _______________ Fax ________________________ E-mail ____________________________________________________ Conference Fees ___ Early registration (payment received before February 2) $250.00 ___ Registration fee (payment received after February 2) $325.00 ___ Early student registration fee (payment before February 2) $100.00 Please include a copy of your current student ID. 
___ Student registration fee (for current, full-time students) $150.00 Other Fees ___ Banquet, April 29, White Stallion Ranch $55.00 ___ Guest at banquet (Name ___________________________) $55.00 Meal choice: ___ chicken ___ salmon ___ vegetarian Pre-conference Workshops Saturday, April 25 All day ___ Observing the Mind: Basic Training in Skilled Means (C. Tart) $90.00 ___ Dream Interpretation (D. Roomy) $90.00 Morning ___ Health, Healing and Consciousness (Kohatsu/Koffler/Lee) $45.00 Afternoon ___ "Global workspace" capacity in the brain (B. Baars) $45.00 Sunday, April 26 Morning ___ Overview of Tucson III (V. Shamas) $45.00 ___ Quantum Theory, Reality and Consciousness (P. Pylkkanen) $45.00 ___ Exceptional Experience in Sports (R. White & S. Brown) $45.00 Afternoon ___ The Mammalian Visual System (C. Koch) $45.00 ___ Exploring Consciousness with Lucid Dreaming (S. LaBerge) $45.00 ___ Consciousness and the Binding Problem (A. Revonsuo) $45.00 Field Trips ___ Sabino Canyon $45.00 ___ Tubac and San Xavier $45.00 Total $_____ Payment Information Total payment $_________ ___ Check enclosed (in dollars from US bank), payable to Extended University ___ Credit card Visa____ MasterCard____ Account number ___________________________________________ Expiration date _______ Signature ________________________ ___ Purchase Order (enclose please) There are four ways to register. Payment or purchase order must accompany registration. PHONE: Call 520-621-7724 from 8:00 a.m.-5:00 p.m. MST, Monday-Friday. VISA and MasterCard accepted. FAX: Fax this form to 520-621-3269. Fax lines are open 24 hours. VISA and MasterCard accepted. MAIL: Send this form with payment to The University of Arizona Extended University; Attention: Registration; P.O. Box 210158; Tucson, AZ 85721-0158. E-MAIL: Send to extuniv at ccit.arizona.edu. Please include conference name, your name, priority code from the mail panel, address, daytime phone. 
Include full details of workshop options and total amount to be charged to your credit card. Give VISA or MasterCard number and expiration date. Cancellation policy: If you cancel your registration in writing by March 27, you'll receive a refund less a $35 cancellation fee. Non-attendance does not constitute a cancellation. If you have a disability and require accommodation, please contact us at the time of registration at 520-621-7724.