From Ronan.Reilly at ucd.ie Wed Jul 1 14:06:33 1998 From: Ronan.Reilly at ucd.ie (Ronan G. Reilly) Date: Wed, 01 Jul 1998 18:06:33 +0000 (GMT) Subject: Post-Doc position in Dublin Message-ID: <0EVF00J4HE77X2@hermes.ucd.ie> ************************************* * LCG TMR Network * * Learning Computational Grammars * * * ************************************* ****************************************************************** * POSTDOCTORAL RESEARCH OPPORTUNITY AT UNIVERSITY COLLEGE DUBLIN * ****************************************************************** LCG (Learning Computational Grammars) is a research network funded by the EC Training and Mobility of Researchers programme (TMR). The LCG network involves seven European partners. The research goal of the network is the application of machine learning techniques to extending a variety of computational grammars. The particular focus of UCD's research will be on the use of artificial neural network learning algorithms. See http://www.let.rug.nl/~nerbonne/tmr/lcg.html for more details. There will be three years of postdoctoral funding available in the Department of Computer Science at University College Dublin, tenable immediately. The ideal postdoctoral candidates will have research experience in the use of ANNs in natural language processing. As the funding is provided by the EU Training and Mobility of Researchers programme, there are some restrictions on who may benefit from it: * Candidates must be aged 35 or younger * Candidates must be nationals of an EU country, Norway, Switzerland or Iceland * Candidates must have studied or be studying for a Doctoral Degree * Candidates must not be Irish nationals, nor have worked in Ireland for 18 of the last 24 months If you are interested and eligible, e-mail your CV (RTF, ASCII, or PS versions only) and the names and addresses of two referees to the address below. Your CV should include a list of recent publications. Please also outline in 2-3 pages your interest in LCG, how it is related to work you have done, and what special expertise you bring to the problem. --------------------------------------------- Ronan G. Reilly, PhD Department of Computer Science University College Dublin Belfield Dublin 4 IRELAND http://cs-www.ucd.ie/staff/html/ronan.htm e-mail: Ronan.Reilly at ucd.ie Tel. : +353-1-706 2475 Fax : +353-1-269 7262 From cjcb at molson.ho.lucent.com Thu Jul 2 14:50:49 1998 From: cjcb at molson.ho.lucent.com (Chris Burges) Date: Thu, 2 Jul 1998 14:50:49 -0400 Subject: Kernel geometry, invariance, and support vector machines Message-ID: <199807021850.OAA28197@cottontail.lucent.com> The following paper is available at http://svm.research.bell-labs.com/SVMdoc.html Geometry and Invariance in Kernel Based Methods C.J.C. Burges, Bell Laboratories, Lucent Technologies To Appear In: Advances in Kernel Methods - Support Vector Learning, Eds. B. Schoelkopf, C. Burges, A. Smola, MIT Press, Cambridge, USA, 1998 We explore the questions of (1) how to describe the intrinsic geometry of the manifolds which occur naturally in methods, such as support vector machines (SVMs), in which the choice of kernel specifies a nonlinear mapping of one's data to a Hilbert space; and (2) how one can find kernels which are locally invariant under some given symmetry.
The motivation for exploring the geometry of support vector methods is to gain a better intuitive understanding of the manifolds to which one's data is being mapped, and hence of the support vector method itself: we show, for example, that the Riemannian metric induced on the manifold by its embedding can be expressed in closed form in terms of the kernel. The motivation for looking for classes of kernels which instantiate local invariances is to find ways to incorporate known symmetries of the problem into the model selection (i.e. kernel selection) phase of the problem. A useful by-product of the geometry analysis is a necessary test which any proposed kernel must pass if it is to be a support vector kernel (i.e. a kernel which satisfies Mercer's positivity condition); as an example, we use this to show that the hyperbolic tangent kernel (for which the SVM is a two-layer neural network) violates Mercer's condition for various values of its parameters, a fact noted previously only experimentally. A basic result of the invariance analysis is that directly imposing a symmetry on the class of kernels effectively results in a preprocessing step, in which the preprocessed data lies in a space whose dimension is reduced by the number of generators of the symmetry group. Any desired kernels can then be used on the preprocessed data. We give a detailed example of vertical translation invariance for pixel data, where the binning of the data into pixels has some interesting consequences. The paper comprises two parts: Part 1 studies the geometry of the kernel mapping, and Part 2 the incorporation of invariances by choice of kernel. From rreilly at elecmag3.ucd.ie Thu Jul 2 13:12:33 1998 From: rreilly at elecmag3.ucd.ie (Richard Reilly) Date: Thu, 02 Jul 1998 18:12:33 +0100 Subject: Classifier Design for Online Handwriting Recognition Message-ID: <2.2.32.19980702171233.00b22710@elecmag3.ucd.ie> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PhD/MEngSc research opportunity DSP Group, Electronic and Electrical Engineering Dept, UCD "Classifier Design for Online Handwriting Recognition" 02 July 1998 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Position available on ongoing project in online handwriting recognition. Project funded by Forbairt/EU. Tasks This project adopts a HW/SW codesign approach to the OHR task, aiming at implementations suitable for the constraints of PDA platform. The Preprocessing (normalisation, feature extraction and optional segmentation) is performed by a custom hardware module which is fully configurable by ROM and software. The hardware development is with Synopsys compiler. The main functions of Recognition and Postprocessing are implemented in software, using Hidden Markov Model and Dynamic Programming methods. Fuzzy logic may also be applied. C/VHDL Cosimulation will be used to define the system parameters i.e. feature set, preprocessor and classifier functions and to benchmark the system accuracy. Overall performance will then be validated on the UNIPEN database of online samples. The development platform is a Solaris environment on a Sun Ultra using the Tcl/Tk GUI. The classifier designer will investigate the following issues: - which classifiers perform well based on preprocessor capabilities and PDA constraints? - what exact preprocessor functionality is required for the feature sets, with a view to maximising invariance of the classifier to writing style and production? 
- what types of postprocessing (lexical, syntactic, user-dependent) are most efficient? - extensibility and adaptivity to multiple users, writing styles and languages - integration of support for gestures Requirements - working knowledge of DSP, pattern recognition esp. HMM, DP, fuzzy methods in application fields such as speech, handwriting, video - experience in C (/C++), UNIX, GUI (e.g. Tcl/Tk or Motif), MatLab - optional experience in VLSI design with VHDL/Verilog, Synopsys compiler and cosimulation with C highly desirable. Resources available The DSP lab contains a Sun Ultra/Enterprise 450 running Solaris and Pentium/Pro workstations using Win95. More information on this project: http://wwdsp.ucd.ie/~stephenm Candidates should have a good honours degree (H2.1 or H1) in a suitable area. Further details by contacting Dr Richard Reilly, Department of Electronic and Electrical Engineering Department, University College, Dublin 4, Ireland. Ph: 353 1 706 1960; Fax: 353 1 283 0921; e-mail: Richard.Reilly at ucd.ie http://wwdsp.ucd.ie From geoff at giccs.georgetown.edu Mon Jul 6 12:51:47 1998 From: geoff at giccs.georgetown.edu (Geoff Goodhill) Date: Mon, 6 Jul 1998 12:51:47 -0400 Subject: Postdoc position available Message-ID: <199807061651.MAA09660@fathead.giccs.georgetown.edu> POSTDOCTORAL POSITION - COMPUTATIONAL NEUROSCIENCE Georgetown Institute for Cognitive and Computational Sciences Georgetown University Washington DC A postdoctoral position is available from August 1st 1998 in the lab of Dr Geoff Goodhill for an NSF-funded project investigating models of cortical map formation. Experience in computational neuroscience and knowledge of C/C++ is required. The position is for one year in the first instance. The aim of the project is to understand how genetic and activity-dependent factors combine to determine map structure in primary visual cortex. We have so far written a simulator in C++ / OpenGL running on an SGI Octane workstation. We now plan to exploit this simulator by using it to help reveal the key biological parameters controlling map structure, and to guide appropriate mathematical analyses. More information about the lab can be found at http://www.giccs.georgetown.edu/labs/cns Applicants should send a CV, a letter of interest, and names and addresses (including email) of at least two referees to: Dr Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences Georgetown University Medical Center 3970 Reservoir Road NW Washington DC 20007 Tel: (202) 687 6889 Fax: (202) 687 0617 Email: geoff at giccs.georgetown.edu From tgd at CS.ORST.EDU Mon Jul 6 16:57:16 1998 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Mon, 6 Jul 1998 13:57:16 -0700 (PDT) Subject: Hierarchical Reinforcement Learning with MAXQ Message-ID: <199807062057.NAA28564@edison.CS.ORST.EDU> Two papers on hierarchical reinforcement learning are available. One is a conference paper that will appear this summer at the International Conference on Machine Learning. The other is a journal-length version with full technical details and discussion. ====================================================================== Dietterich, T. G. (to appear). The MAXQ method for hierarchical reinforcement learning. 1998 International Conference on Machine Learning. URL: ftp://ftp.cs.orst.edu/pub/tgd/papers/ml98-maxq.ps.gz Abstract: This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. 
The MAXQ decomposition has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived. The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary ``flat'' Q learning. Finally, the paper discusses some interesting issues that arise in hierarchical reinforcement learning including the hierarchical credit assignment problem and non-hierarchical execution of the MAXQ hierarchy. Note: This version has some errors corrected compared to the version that appears in the proceedings. In particular, Figure 1 is fixed. ====================================================================== Dietterich, T. G. (Submitted). Hierarchical reinforcement learning with the MAXQ value function decomposition URL: ftp://ftp.cs.orst.edu/pub/tgd/papers/mlj-maxq.ps.gz Abstract: This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived. The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary ``flat'' Q learning. These results and experiments are extended to support state abstraction and non-hierarchical execution. The paper concludes with a discussion of design tradeoffs in hierarchical reinforcement learning. ====================================================================== From jer at mannanetwork.com Tue Jul 7 10:50:22 1998 From: jer at mannanetwork.com (Joel Ratsaby) Date: Tue, 07 Jul 1998 17:50:22 +0300 Subject: Job Post Message-ID: <35A235AE.D4C708DE@mannanetwork.com> Manna Network Technologies, a leader in the relationship management field, is seeking experienced AI developers for an exciting project in real-time machine learning. The work involves the development and implementation of advanced machine learning algorithms. Ideal candidates will have a Master's in Computer Science or Electrical Engineering with a background and experience in applied machine learning and statistical pattern recognition. Knowledge and experience in Bayesian networks and a good background in object oriented programming and Java are a plus. For more information please send a resume to manpower at mannanetwork.com or have a look at our web site www.mannanetwork.com Joel Ratsaby From krose at wins.uva.nl Tue Jul 7 03:43:21 1998 From: krose at wins.uva.nl (Ben Krose) Date: Tue, 07 Jul 1998 09:43:21 +0200 Subject: Ph.D. positions open Message-ID: <35A1D199.7450AD9A@wins.uva.nl> I would like to announce the following open positions: 2 Ph. D. positions (AIO) available. The "Intelligent Autonomous Systems (IAS)" group of the Computer Science Department, University of Amsterdam, is looking for two enthusiastic, motivated students for two AIO positions.
Project 1: "Learning group behaviour in multiple robot systems", with as case study "Robot soccer". Project 2: "Classification of radar profiles with neural networks" Information: about the group: http://www.wins.uva.nl/research/ias/ about the projects: http://www.wins.uva.nl/research/learn/ additional: groen at wins.uva.nl. krose at wins.uva.nl, tel: 020 5257463 For project 1 we look for a student in the field of Computer Science, specialised in Artificial Intelligence. For project 2 we look for a student in the field of Computer Science, Physics or Electrical engineering. An AIO position is a paid four years appointment as a research trainee with the explicit aim that a Ph.D. thesis be produced in those four years. The monthly salary will start at dfl 2151 gross, increasing to a maximum of dfl 3841 gross in the fourth year. The appointment is for a period of maximally four years and should result in a Ph.D. A training and supervision plan will be drawn up, stipulating the aims and content of the proposed research project and the teaching obligations involved. Applications should include a CV, a statement of interest (1-2 pages), a recent relevant paper (if available), and a list of three or more references. Applications should be addressed before August 1, 1998 to Ms. Elvira Smeets Dept. of Computer Science University of Amsterdam Kruislaan 403, 1098 SJ Amsterdam The Netherlands --- Ben Kr\"ose Department of Computer Science University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, NL. tel: +31 20 525 7520/7463/(7490 fax) http://carol.wins.uva.nl/~krose/ From lorraine.dingley at mrc-apu.cam.ac.uk Tue Jul 7 11:35:28 1998 From: lorraine.dingley at mrc-apu.cam.ac.uk (Lorraine Dingley) Date: Tue, 7 Jul 1998 15:35:28 +0000 Subject: Post-doc position Message-ID: POST-DOCTORAL POSITION IN COMPUTATIONAL MODELLING OF COGNITIVE-AFFECTIVE PROCESSES. MRC COGNITION AND BRAIN SCIENCES UNIT, CAMBRIDGE, UK Applications are invited for a post-doctoral scientist to join the Cognition & Emotion group. This new position has been created to facilitate the development of computationally explicit models of affective processes, or their neural bases, and related research. Applicants should have experience in computational modelling and have a strong interest in affect and/or affective disorders. The successful applicant would be expected to develop collaborations with other scientists in the group, and would also have the opportunity to develop their own research in a related area. Appointment would be for an initial period of 3 years, with the possibility of transfer to a career track position. Starting salary would be in the range range =A315,000 to =A324,000 per annum supported by a performance related pay scheme and MRC pension scheme. Further information can be obtained from Dr Andrew Mathews (phone 01223 355294) or the CBU website http://www.mrc-apu.cam.ac.uk/. 
Applications including 2 copies of a full CV, a brief description of research interests, and names and addresses of two professional referees should be sent by 11th August 1998 quoting reference CBU/CM to: Johanna Webb Personnel MRC Centre Hills Road Cambridge CB2 2QH MEDICAL RESEARCH COUNCIL operates a non-smoking policy and is an Equal Opportunity Employer From stefan.wermter at sunderland.ac.uk Wed Jul 8 08:10:45 1998 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Wed, 08 Jul 1998 13:10:45 +0100 Subject: job/phd topics neural networks, language processing, hybrid systems Message-ID: <35A361C5.94A42950@sunderland.ac.uk> I would appreciate it very much if you could forward this to relevant potentially interested students and researchers in your research group. Besides the researcher A position there are also additional possible PhD topics available. For more details see http://osiris.sunderland.ac.uk/~cs0stw/Projects/suggested_topics_titles or http://osiris.sunderland.ac.uk/~cs0stw/ in general. ------------------------------------------ Researcher A in Neural and Intelligent Systems (reference number CIRG28) Applications are invited for a three year research assistant position in the School of Computing and Information Systems investigating the development of hybrid neural/symbolic techniques for intelligent processing. This is an exciting new project which aims at developing new environments for integrating neural networks and symbolic processing. You will play a key role in the development of such hybrid subsymbolic/symbolic environments. It is intended to apply the developed hybrid environments in areas such as natural language processing, intelligent information extraction, or the integration of speech/language in multimedia applications. You should have a degree in a computing discipline and will be able to register for a higher degree (Mphil, PhD). A demonstrated interest in artificial neural networks, software engineering skills and programming experience are essential (preferably including a subset of C, C++, CommonLisp, Java, GUI). Experience and interest in neural network software and simulators would be an advantage (e.g. Planet, SNNS, Tlearn, Matlab, etc). Salary is according to the researcher A scale (currently up to 13,871 pounds, under revision, this is about 42000 DM or $23000). Application forms and further particulars are available from the Personell department under +44 191 515 and extensions 2055, 2429, 2054, 2046, or 2425 or E-Mail employee.recruitment at sunderland.ac.uk quoting the reference number CIRG28. For informal enquiries please contact Professor Stefan Wermter, e-mail: Stefan.Wermter at sunderland.ac.uk. Closing date: 10 July 1998. The successful candidate is expected to start the job as soon as possible. ******************************************** Professor Stefan Wermter Research Chair in Intelligent Systems University of Sunderland Dept. 
of Computing & Information Systems St Peters Way Sunderland SR6 0DD United Kingdom phone: +44 191 515 3279 fax: +44 191 515 2781 email: stefan.wermter at sunderland.ac.uk http://osiris.sunderland.ac.uk/~cs0stw/ ******************************************** From dwang at cis.ohio-state.edu Wed Jul 8 17:28:20 1998 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Wed, 8 Jul 1998 17:28:20 -0400 (EDT) Subject: Tech report on speech segregation Message-ID: <199807082128.RAA10262@shirt.cis.ohio-state.edu> The following technical report is available via FTP/WWW: ------------------------------------------------------------------ "Separation of Speech from Interfering Sounds Based on Oscillatory Correlation" Technical Report #24, June 1998 The Ohio State University Center for Cognitive Science ------------------------------------------------------------------ DeLiang L. Wang, The Ohio State University Guy J. Brown, University of Sheffield A multi-stage neural model is proposed for an auditory scene analysis task - segregating speech from interfering sound sources. The core of the model is a two-layer oscillator network that performs stream segregation on the basis of oscillatory correlation. In the oscillatory correlation framework, a stream is represented by a population of synchronized relaxation oscillators, each of which corresponds to an auditory feature, and different streams are represented by desynchronized oscillator populations. Lateral connections between oscillators encode harmonicity, and proximity in frequency and time. Prior to the oscillator network are a model of the auditory periphery and a stage in which mid-level auditory representations are formed. The model has been systematically evaluated using a corpus of voiced speech mixed with interfering sounds, and produces improvements in terms of signal-to-noise ratio for every mixture. Furthermore, the pattern of improvements seems consistent with human performance. The performance of our model is compared with other studies on computational auditory scene analysis. A number of issues including biological plausibility and real-time implementation are also discussed. (28 pages, 384 KB compressed) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu Directory: /pub/leon/Brown Filename: ccs98.ps.gz for WWW: http://www.cis.ohio-state.edu/~dwang/reports.html (Some pages may not show up in postscript display, but should print OK) Send comments to DeLiang Wang (dwang at cis.ohio-state.edu) From janet at dcs.rhbnc.ac.uk Thu Jul 9 08:31:12 1998 From: janet at dcs.rhbnc.ac.uk (Janet Hales) Date: Thu, 09 Jul 98 13:31:12 +0100 Subject: Special event:COMPUTATIONAL INTELLIGENCE DAY Message-ID: <199807091231.NAA08652@platon.cs.rhbnc.ac.uk> Apologies for any multiple copies of this message you may have received via other lists. Wednesday 9 September 1998 COMPUTATIONAL INTELLIGENCE: THE IMPORTANCE OF BEING LEARNABLE **************************************************************** A one day research seminar Computer Learning Research Centre, Department of Computer Science Royal Holloway, University of London, 30th anniversary year (1997-98) The fourth of a series of one-day colloquia organised by the research groups in the Department. The programme includes invited lectures on key research topics in this rapidly developing field with the main themes of Machine Learning and Inductive Inference. Speakers include Jorma Rissanen, Ray Solomonoff, Vladimir Vapnik, Chris Wallace and Alexei Chervonenkis. 
The occasion will also mark the foundation of the Computer Learning Research Centre at Royal Holloway, University of London. Speakers: Professor Jorma Rissanen, Almaden Research Center, IBM Research Division, San Jose, CA, U.S.A: STOCHASTIC COMPLEXITY (provisional title) Professor Ray Solomonoff, Oxbridge Research Inc, Cambridge, Mass, U.S.A.: HOW TO TEACH A MACHINE Professor Vladimir Vapnik, AT&T Labs - Research, U.S.A. and Professor of Computer Science and Statistics, Royal Holloway, University of London: LEARNING THEORY AND PROBLEMS OF STATISTICS Professor Chris Wallace, Univ of Monash, Victoria, Australia: and Visiting Professor in the Dept of Computer Science, Royal Holloway, University of London: BREVITY IS THE SOUL OF SCIENCE Professor Alexei Chervonenkis, Institute of Control Sciences, Moscow: THE HISTORY OF THE SUPPORT VECTOR METHOD Provisional Programme: 10.30 am Coffee and Welcome - Alex Gammerman, Head of Department Morning: Theme - MACHINE LEARNING 11.00 am Vladimir Vapnik 11.50 am Alexei Chervonenkis 12.40 pm Lunch Afternoon: Theme - INDUCTIVE INFERENCE 2.10 pm Jorma Rissanen 3.00 pm Chris Wallace 3.50 pm Tea 4.40 pm Ray Solomonoff 5.30 pm Close All welcome - advance booking essential - for further information and to book a place please contact Janet Hales: Janet Hales, Departmental Events Co-ordinator Dept of Computer Science Royal Holloway University of London Tel 01784 443432 Fax 01784 439786 Email: J.Hales at dcs.rhbnc.ac.uk Location maps, info etc.: http://www.cs.rhbnc.ac.uk/location/ Further information is also available from: http://www.dcs.rhbnc.ac.uk/events/compintday.shtml From honavar at cs.iastate.edu Thu Jul 9 14:11:52 1998 From: honavar at cs.iastate.edu (Vasant Honavar) Date: Thu, 9 Jul 1998 13:11:52 -0500 (CDT) Subject: Call for Papers: Special Issue of the Machine Learning Journal on Automata Induction, Grammar Inference, and Language Acquisition In-Reply-To: <199804060934.TAA04155@reid.anu.edu.au> from "Peter Bartlett" at Apr 6, 98 07:34:04 pm Message-ID: <199807091811.NAA28149@ren.cs.iastate.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 5960 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/65c41751/attachment.ksh From cpoon at hstbme.mit.edu Thu Jul 9 15:28:26 1998 From: cpoon at hstbme.mit.edu (Chi-Sang Poon) Date: Thu, 9 Jul 1998 15:28:26 -0400 (EDT) Subject: Postdoc position - visual recognition Message-ID: POSTDOCTORAL POSITION IN VISUAL RECOGNITION Harvard-MIT Division of Health Sciences and Technology Massachusetts Institute of Technology A postdoctoral position is available immediately on a multidisciplinary project to develop a silicon vision chip for pattern recognition using analog very-large-scale integrated circuits. The goal of this sub-project is to develop a neural network architecture that is optimal for analog VLSI implementation. Research approach involves computer modeling of the human visual system and translation of these models into analog neural network architectures in cooperation with other team members in VLSI design. (See: Poon and Shah, Hebbian learning in parallel and modular memories, Biol. Cybern. 78:79-86, 1998). Experience in vision science and neural networks required. Position is initially for one year and renewable depending on progress and availability of funds. Applications are evaluated immediately upon receipt. Send CV, statement of professional interests and goals, and names of three references to: Chi-Sang Poon, Ph.D. 
Harvard-MIT Division of Health Sciences and Technology Rm 20A-126 M.I.T. Cambridge, MA 02139 Tel: 617-258-5405 Fax: 617-258-7906 email: cpoon at mit.edu From pmck at limsi.fr Fri Jul 10 09:19:32 1998 From: pmck at limsi.fr (Paul Mc-Kevitt) Date: Fri, 10 Jul 98 15:19:32 +0200 Subject: MIND-III: SPATIAL COGNITION, DUBLIN, IRELAND, AUG 17-19 Message-ID: <9807101319.AA23879@m72.limsi.fr> LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL MIND III: Annual Conference of the Cognitive Science Society of Ireland Theme: Spatial Cognition Dublin City University, Dublin, Ireland August 17-19, 1998 You are invited to participate in the Annual Conference of the CSSI, on the Theme of Spatial Cognition, at Dublin City University from August 17-19, 1998. This conference will bring together researchers from different Cognitive Science disciplines (Psychology, Computer Science, Linguistics, and Cognitive Geography) who are studying different aspects of spatial cognition. The conference will provide a forum for researchers to share insights about different aspects of spatial cognition and from the perspective of different disciplines. The academic programme will begin at 9:00 a.m. on August 17th and end on 19th. The social programme will include a barbecue and ceili (traditional Irish Dance) on Tuesday 18th and a tour and concert on Wednesday after the end of the academic programme. For information on registration and accommodation, please visit the web page at: http://www.psych.ucsb.edu/~hegarty/cssi/ The deadline for early registration is July 15th (after that the price increases significantly). For questions about the programme, contact Mary Hegarty: hegarty at psych.ucsb.edu For questions about registration and local arrangements, contact Sean O Nuallain: sonualla at compapp.dcu.ie PROGRAMME COMMITTEE: Ruth Byrne, Trinity College Dublin Jerome Feldman, University of California, Berkeley Mary Hegarty, University of California, Santa Barbara (Program Chair) Christopher Habel, University of Hamburg George Lakoff, University of California, Berkeley Robert H. Logie, University of Aberdeen Jack Loomis, University of California, Santa Barbara Paul Mc Kevitt, Aalborg University and University of Sheffield Daniel R. Montello, University of California, Santa Barbara N. Hari Naryanan, Auburn University and Georgia Institute of Technology Patrick Olivier, University of Wales, Aberystwyth Sean O Nuallain, Dublin City University (Co-Chair) Terry Regier, University of Chicago Keith Stenning, Edinburgh University Michael Spivey, Cornell University Arnold Smith, National Research Council, Canada Barbara Tversky, Stanford University PROGRAMME KEYNOTE SPEAKERS: Michel Denis, Groupe Cognition Humaine, LIMSI-CNRS, Universite de Paris-Sud Andrew Frank, Department of Geoinformation, Technical University Wien TALK PRESENTATIONS: ENVIRONMENTAL SPATIAL COGNITION G. Allen, University of South Carolina Men and women, maps and minds: Cognitive bases of sex-related differences in reading and interpreting maps C. Christou & H. Bulthoff, Max-Planck Institute for Biological Cybernetics, Tubingen Using virtual environments to study spatial encoding D. Jacobson, R. Kitchin, T. Garling, R. Golledge & M. Blades, University of California, Santa Barbara, Queens University of Belfast, Gotenborg University Learning a complex urban route without sight: Comparing naturalistic versus laboratory measures P. Peruch, F. Gaunet, C. Thinus-Blanc, M-D. 
Giroudo, CNRS, Marseille & CNRS-College de France, Paris Real and imagined perspective changes in visual versus locomotor navigation M. J. Sholl, Boston College The accessibility of metric relations in self-to-object and object-to-object systems LANGUAGE AND SPACE T. Baguley & S. J. Payne, Loughborough University and Cardiff University of Wales Given-new versus new-given? An analysis of reading times for spatial descriptions K. C. Coventry & M. Prat-Sala, University of Plymouth The interplay between geometry and function in the comprehension of spatial propositions J. Gurney & E. Kipple, Army Research Laboratory, Adelphi, MD Composing conceptual structure for spoken natural language in a virtual reality environment S. Huang, National Taiwan University Spatial Representation in a language without prepositions S. Taub, Gallaudet University Iconic spatial language in ASL: Concrete and metaphorical applications C. Vorwerg, University of Bielefeld Production and understanding of direction terms as a categorization process COMPUTATION AND SPATIAL COGNITION M. Eisenberg & A. Eisenberg, University of Colorado Designing real-time software advisors for 3-d spatial operations J. Gasos & A. Saffiotti, IRIDIA, Universite Libre de Brruxelles Fuzzy sets for the representation of uncertain spatial knowledge in autonomous robots R. K. Lindsay, University of Michigan Discovering Diagrammatic Demonstrations P. McKevitt, Aalborg University and University of Sheffield CHAMELEON meets spatial cognition D. R. Montello, M.F.Goodchild, P. Fohl & J. Gottsegen, University of California, Santa Barbara Implementing fuzzy spatial queries: Problem statement and behavioral science methods S. O Nuallain & J. Kelleher, Dublin City University Spoken Image meets VRML and JAVA SPATIAL REASONING AND PROBLEM SOLVING M. Gattis, Max Planck Institute for Psychological Research, Munich Mapping relational structure in visual reasoning J. N. McGregor, T. C. Ormerod & E. P. Chronicle, University of Victoria and Lancaster University Spatial and conceptual factors in human performance on the traveling salesperson problem P.D.Pearson, R. H.Logie & K.J. Gilhooly, University of Aberdeen Verbal representations and spatial manipulation during mental synthesis L. Rozenblit, M. Spivey & J. Wojslawowicz Mechanical reasoning about gear-and-belt systems: Do eye-movements predict performance? C. Sophian & M. Crosby, University of Hawaii at Manoa Ratios that even young children understand: The case of spatial proportions THEORETICAL PERSPECTIVES: R. H. Logie, Department of Aberdeen Constraints on visuo-spatial working memory N. H. Narayanan, Auburn University Exploring virtual information landscapes: Spatial cognition meets information visualization A. Smith, National Research Council, Canada Spatial cognition without spatial concepts C. Speed & D. G.Tobin, University of Plymouth Space under stress: Spatial understanding and new media technologies M. Tiressa, A. Caressa and G.Geminiani, Universita di Torino & Universita di Padova A theoretical framework for the study of spatial cognition POSTER PRESENTATIONS M. Betrancourt, A. Pellegrin & L. Tardif, Research Institut, INRIA Rhone-Alpes Using a spatial display to represent the temporal structure of multimedia documents M. Bollaert, LIMSI-CNRS, University de Paris-Sud A connectionist model of mental imagery K Borner & C Vorwerg, University of Bielefeld Applying VR technology to the study of spatial perception and cognition A. Caressa, A. Abrigliano & G. 
Geminiani, Universita de Padova & Universita di Torino. Describers and explorers: A method to investigate cognitive maps. E. P. Chronicle, T. C. Ormerod & J. McGregor. Lancaster University and University of Victoria When insight just won't come: The failure of visual cues in the nine-dot problem. R. Coates, C.J. Hamilton & T. Heffernan, University of Teeside and University of Northumbria at Newcastle In Search of the visual and spatial characteristics of visuo-spatial working memory G. Fernandez, LMSI-CNRS Individual differences in the processing of route directions R. Hornig, B. Claus & K. Eyferth, Technical University of Berlin In search for an overall organizing principle in spatial mental models: A question of inference M-C. Grobety, M. Morand & F. Schenk Cognitive Mapping across visually disconnected environments N. Gotts, University of Wales, Aberystwyth Describing the topology of spherical regions using the "RCC" formalism X. Guilarova, Moscow M.V. Lomosonov State University Polysemy of adjective "round" via Lakoff's radical category structuring J. S. Longstaff, Laban Center, London Cognitive Structures of Kinesthetic Space: Reevaluating Rudolph Labans Choreutics U. Schmid, S. Wiebrock & F. Wysotzki, Technical University of Berlin Modeling spatial inferences in text understanding LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL From eric at research.nj.nec.com Fri Jul 10 10:44:24 1998 From: eric at research.nj.nec.com (Eric B. Baum) Date: Fri, 10 Jul 1998 10:44:24 -0400 Subject: postdoc or visitor position and new papers available Message-ID: <199807101444.KAA09335@yin> I am seeking applicants for a Post-doc or Visiting Scientist position in my group at the NEC Research Institute in Princeton NJ. USA The position is for 1 year. To apply please send cv, cover letter and list of references to: Eric Baum, NEC Research Institute, 4 Independence Way, Princeton NJ 08540,USA or PREFERABLY by internet to eric at research.nj.nec.com Our research is focused on artificial economies of agents that reinforcement learn. Two new papers (and an extended abstract of one of these) are available on my web page http://www.neci.nj.nec.com:80/homepages/eric/eric.html The abstracts of these papers are appended below. ------------------------------------------------------------------- ------------------------------------------------------------------- "Manifesto for an Evolutionary Economics of Intelligence" ( PostScript file 58 pages) Draft last modified July, 1998. to appear in "Neural Networks and Machine Learning" Editor C. M. Bishop, Springer-Verlag (1998). We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must become rational in the sense that they learn to solve a piece of the problem. We describe how to use economic principles to assign credit and ensure that a collection of rational agents will collaborate on reinforcement learning. We also catalog several catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economies, and artificial intelligence programs, when an appropriate economic structure is not enforced. We conjecture that such evolutionary economies can allow learning in feasible time scales, starting from a collection of agents which have little knowledge and hence are irrational, by dividing and conquering complex problems. 
We support this with two implementations of learning models based on these principles. The first of these systems has empirically learned to solve Blocks World problems involving arbitrary numbers of blocks. The second has demonstrated meta-learning-- it learns better ways of creating new agents, modifying its own learning algorithm to escape from local optima trapping competing approaches. We describe how economic models can naturally address problems at the meta-level, meta-learning and meta-computation, that are necessary for high intelligence; discuss the evolutionary origins and nature of biological intelligence; and compare, analyze, contrast, and report experiments on competing techniques including hillclimbing, genetic algorithms, genetic programming, and temporal difference learning. ---------------------------------------------------------- "Toward Code Evolution by Artificial Economies" E. B. Baum and Igor Durdanovic ( PostScript file 53 pages) Draft last modified July 8, 1998. ( PostScript file 9 pages) Extended Abstract,July 8, 1998. We have begun exploring code evolution by artificial economies. We implemented a reinforcement learning machine called Hayek2 consisting of agents, written in a machine language inspired by Ray's Tierra, that interact economically. Hayek2 succeeds in evolving code to solve Blocks World problems, and has been more effective at this than our hillclimbing program and our genetic program. Our hillclimber and our genetic program, in turn, both performed creditably, learning solutions as strong as a simple search program that utilizes substantial hand-coded domain knowledge. We made some efforts to optimize our hillclimbing program and it has features that may be of independent interest. Our genetic program exhibited strong gains from crossover compared to a version utilizing other macro-mutations. The relative strength of crossover and macro-mutations is a hotly debated issue within the GP community, and ours is the first unequivocal demonstration of which we are aware where crossover is much better than ``headless chicken mutation''. We have demonstrated meta-learning: Hayek2 succeeds in discovering new meta-level agents that improve its performance, getting it out of plateaus in which it has otherwise gotten stuck. Hayek2's performance benefitted from improvements in the algorithm deciding how meta-agents gave capital to their offspring, from improvements in how creation of intellectal property is rewarded, from improvements in how meta-agents are paid by their offspring, from assessing a rent for computational time that was proportional to total demand, and from improvements in the language, including strong typing to bias the search for useful agents and expanding the representational power of the meta-instructions using pattern based instructions. ------------------------------------- Eric Baum NEC Research Institute, 4 Independence Way, Princeton NJ 08540 PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric at research.nj.nec.com http://www.neci.nj.nec.com:80/homepages/eric/eric.html From marks at gizmo.usc.edu Sun Jul 12 19:57:44 1998 From: marks at gizmo.usc.edu (Mark Seidenberg) Date: Sun, 12 Jul 1998 15:57:44 -0800 Subject: Paper: grammaticality in connectionist nets Message-ID: A preprint of the following paper is available at http://siva.usc.edu/coglab/papers.html The Emergence of Grammaticality in Connectionist Networks Joseph Allen Mark S. Seidenberg University of Southern California To appear in B. 
MacWhinney (Ed.), The emergence of language. Mahwah, NJ: Erlbaum. In generative linguistics, knowing a language is equated with knowing a grammar. It is sometimes suggested that connectionist networks can provide an alternative account of linguistic knowledge, one that does not incorporate standard notions of grammar. Any such alternative account owes an explanation for how people can distinguish grammatical sentences from ungrammatical ones. We describe some experiments with an attractor network that processed 5-8 word sentences, mapping from form to meaning (comprehension) and from meaning to form (production). The model was trained on 10 types of sentences used in a classic study of aphasic language by Linebarger, Schwartz & Saffran (1983). The network performed qualitatively differently on novel grammatical vs. ungrammatical sentences (e.g., "He came to my town" vs. "He came my town"). The model was also tested on sentences analogous to Chomsky's famous "Colorless green ideas sleep furiously," which patterned with other grammatical sentences. We also examined the model's performance under damage and found that, like Linebarger et al.'s patients, the model could still distinguish between several kinds of grammatical and ungrammatical sentences even though its capacity to comprehend and produce these utterances was impaired. Although the model's coverage of English grammar is limited, it illustrates how the distinction between grammatical and ungrammatical sentences can be realized in a network that does not incorporate a grammar. The model provides a basis for understanding how people make grammaticality judgments and explains the dissociation between the abilities to use language and make grammaticality judgments seen in some aphasic patients. ____________________________________ Mark S. Seidenberg Neuroscience Program University of Southern California 3614 Watt Way Los Angeles, CA 90089-2520 Phone: 213-740-9174 Fax: 213-740-5687 http://www-rcf.usc.edu/~seidenb http://siva.usc.edu/coglab ____________________________________ From kirsten at faceit.com Mon Jul 13 10:29:31 1998 From: kirsten at faceit.com (Kirsten Rudolph) Date: Mon, 13 Jul 1998 10:29:31 -0400 Subject: ad for several jobs - please post Message-ID: <01bdae6a$a6538620$7a00a8c0@ES300.faceit.com> JOB OPPORTUNITIES at VISIONICS CORPORATION Visionics Corporation announces the availability of several positions. Visionics is the leading commercial innovator in facial recognition technology and is committed to maintaining the superiority of its technology and to increasing the range of its applications. To that end, the company is continuing to invest heavily in its growing team. We offer a stimulating work environment and the opportunity for rapid career development, as well as competitive compensation, a health plan and a generous incentive program including stock options. Research in Computer Vision and Pattern Recognition. Candidates are expected to have experience in neural networks, image processing, numerical analysis, and C/C++ programming and to have demonstrated a track record of research on real world visual pattern recognition problems. Experience in biometrics is a definite plus. Digital Signal Processing. We are seeking an engineer or computer scientist in the field of embedded technology. The successful candidate will primarily be involved in porting C++ applications to suitable DSP platforms, using C and/or assembly languages. 
Candidates are expected to have extensive experience in the Windows operating system and familiarity with Visual C++, MFC and neural nets. C code optimization and algorithmic experience are a plus. The positions are at Visionics' headquarters in Exchange Place, New Jersey, which is just across the Hudson from the World Trade Center - 4 minutes away via PATH train. (See our view of Wall Street on our web page: http://www.faceit.com). Interested individuals should e-mail resumes to kirsten at faceit.com or fax them to (201)332-9313, to the attention of Kirsten Rudolph. Visionics is an equal opportunity employer. Minority and women candidates are encouraged to apply. From uwe.zimmer at gmd.de Mon Jul 13 13:32:13 1998 From: uwe.zimmer at gmd.de (Uwe R. Zimmer) Date: Mon, 13 Jul 1998 19:32:13 +0200 Subject: PostDoc position at GMD Message-ID: <35AA449F.69A4171F@gmd.de> -------------------------------------------------------- Post-Doctoral Research Position in Autonomous Underwater Robotics Research -------------------------------------------------------- A new postdoctoral position is open at the German National Research Center for Information Technology (GMD) in St. Augustin, Germany, and will be filled at the earliest convenience. The Institute for Autonomous Intelligent Systems (SET), building on its background in autonomous cognitive robotics and system design technologies, is currently setting up autonomous underwater robotics research activities. Investigated questions are: - How to localize and move in six DoF without global correlation? - Interpretation / integration / abstraction / compression of complex sensor signals? - How to build models of previously unknown environments? - Direct sensor-actuator prediction - How to coordinate multiple loosely coupled robots? Real six degrees of freedom, real dynamic environments and real autonomy (which is required in most setups here) place these questions in a fruitful area. The overall goal is of course not 'just' implementing prototype systems, but to gain a better understanding of autonomy and situatedness. Modeling, adaptation, clustering, prediction, communication, or - from the perspective of robotics - spatial and behavioral modeling, localization, navigation, and exploration are cross-topics addressed in most questions. Techniques employed in our institute up to now include dynamical systems, connectionist approaches, behavior-based techniques, rule based systems, and systems theory. Experiments are based on physical robots (not yet underwater!). Thus the discussion of experimental setups, and particularly the meaning of embodiment, have become topics in themselves. If the above challenges have raised your interest, please consider our expectations regarding the ideal candidate: - Ph.D. / doctoral degree in computer sciences, electrical engineering, physics, mathematics, biology, or related disciplines. - Experience in experimenting with autonomous systems - Theoretical foundations in mathematics, control, connectionism, dynamical systems, or systems theory - Interest in joining an international team of motivated researchers Furthermore, it is expected that the candidate develops and introduces her/his own perspective on the topic, and pushes the goals of the whole group at the same time. GMD is an equal opportunity employer. The position is for two years in the first instance. For any further information, and applications (including addresses of referees, two recent publications, and a letter of interest) please contact: Uwe R.
Zimmer (address below) ___________________________________________ ____________________________| Dr. Uwe R. Zimmer - GMD ___| Schloss Birlinghoven | 53754 St. Augustin, Germany | _______________________________________________________________. Voice: +49 2241 14 2373 - Fax: +49 2241 14 2384 | http://www.gmd.de/People/Uwe.Zimmer/ | From ah_chung_tsoi at uow.edu.au Tue Jul 14 00:02:54 1998 From: ah_chung_tsoi at uow.edu.au (Ah Chung Tsoi) Date: Tue, 14 Jul 1998 14:02:54 +1000 Subject: Workshop on data structures processing Message-ID: <35AAD86E.D5FFEDF7@uow.edu.au> -- Ah Chung Tsoi Phone: +61 2 42 21 38 43 Dean Fax: +61 2 42 21 48 43 Faculty of Informatics email: ah_chung_tsoi at uow.edu.au University of Wollongong Northfields Avenue Wollongong NSW 2522 AUSTRALIA -------------- next part -------------- Adaptive Processing of Structures A Joint Australian and Italian Workshop Friday, 7th August, 1998 Faculty of Informatics University of Wollongong Northfields Avenue Wollongong New South Wales Australia Many practical problems can be more conveniently modelled using data structures, e.g., lists, trees. For example, in image understanding, it is more convenient to model the relationship among the objects in the image by a tree structure. Similarly, for document understanding, again it is more convenient to model various segments of the document using data structures. As another example, in Chemistry, the structure of molecules is often easily expressed in terms of a tree. In these applications, there are a number of practical problems which need to be solved, viz., if there are many data structures describing the problem, is it possible 1. To classify an unknown structure according to whether it is similar to any of the previous data structures in the known set 2. To predict what the data structure may be. For example, in an autonomous robot navigation problem, the robot may not be endowed with a map of the environment, but instead rely on past traverses of the environment to identify landmarks. One question is: how does the robot know whether it has visited the same place before? Such a problem can be formulated as a classification of the tree structures describing the past experience in traversing the environment, i.e., as finding whether the new tree structure describing a particular experience has occurred before. There are a number of methods for processing this type of classification problem. For example, one may use syntactical pattern recognition to model the structure, and to classify an unknown structure accordingly. Recently, there has been a substantial amount of work on using neural networks to model data structures. For example, Jordan Pollack has applied a special case of a multilayer perceptron, commonly known as an auto-associator, to model data structures (the RAAM model). In this approach, a multilayer perceptron has the same input and output variables, and the dimension of the hidden layer is smaller than that of the input. It is known that this MLP structure can form an internal representation of the input variables in the hidden layer. Pollack developed the RAAM model for encoding tree structures with labels on the leaves, but this model cannot handle labels on the internal nodes. This approach was extended to handle arbitrary labelled graphs by Alessandro Sperduti (the LRAAM). Both RAAM and LRAAM can encode structures by using a fixed-size architecture; however, they do not classify them.
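As a minimal numerical sketch of the auto-associator idea just described - a single narrow hidden layer trained to reproduce its own input - the following Python/numpy fragment may help. The layer sizes, toy data and training loop are illustrative assumptions only; they are not taken from Pollack's RAAM or from the workshop material, and the recursive application to trees is not shown.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 4                       # hidden layer narrower than input/output

W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy binary patterns standing in for node labels / subtree codes (invented data)
X = rng.integers(0, 2, size=(64, n_in)).astype(float)

lr = 0.5
for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)                # compressed internal representation
    Y = sigmoid(H @ W2 + b2)                # reconstruction of the input
    dY = (Y - X) * Y * (1.0 - Y)            # backprop through the output layer
    dH = (dY @ W2.T) * H * (1.0 - H)        # backprop through the hidden layer
    W2 -= lr * H.T @ dY / len(X)
    b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X)
    b1 -= lr * dH.mean(axis=0)

print("mean squared reconstruction error:", np.mean((Y - X) ** 2))

In the RAAM this compression step is applied recursively: the encoder half compresses the (already compressed) representations of a node's children into a representation of the node itself, and the decoder half reconstructs them.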
More recently, a number of groups have proposed the idea of using a recursive neuron to model data structures, e.g., Alessandro Sperduti and Antonina Starita (IEEE Transactions on Neural Networks, 1997), and Christoph Goller and Andreas Kuechler. This work allows us to tackle problems in classification and regression of structured objects, e.g., directed ordered acyclic graphs (DOAGs). In this workshop, we wish to introduce the audience to this exciting new development, as it promises to be one of the major breakthroughs in the representation of data structures, as well as in processing them. It can be applied wherever a data structure is a convenient method for representing the underlying problem. Such areas include, apart from molecular chemistry, robot navigation, document processing, image processing, internet user behaviour modelling, and natural language processing. This workshop will be given by the originators and developers of this approach, viz., Alessandro Sperduti, Marco Gori, Paolo Frasconi. There is a group working on this problem in the Faculty of Informatics, the University of Wollongong, supported by an Australian Research Council large grant. The visits of Alessandro Sperduti and Marco Gori are supported by an out-of-cycle large grant. The intention is to introduce Australian researchers to these exciting new methods, and to promote the application of such techniques to a much wider setting. The program of the workshop will be as follows: 9:30 - 9:45 Introduction Ah Chung Tsoi 9:45 - 10:45 Adaptive data structure modelling problems Alessandro Sperduti 10:45 - 11:15 Coffee break 11:15 - 12:15 General theory of data modelling by adaptive data structure methods Paolo Frasconi 12:15 - 1:15 Lunch 1:15 - 2:15 Properties of adaptive data structure modelling Marco Gori 2:15 - 3:15 Applications of adaptive data structure methods Alessandro Sperduti 3:15 - 3:45 Coffee break 3:45 - 5:00 Forum discussion Ah Chung Tsoi As this is supported by an Australian Research Council grant, there will be no registration fee for attending the workshop. However, attendees are responsible for their own lunch. Intended participants are encouraged to indicate their intention by emailing Professor Ah Chung Tsoi, ahchung at uow.edu.au, for morning and afternoon coffee/tea catering purposes. From morten at gizmo.usc.edu Tue Jul 14 14:44:13 1998 From: morten at gizmo.usc.edu (Morten Christiansen) Date: Tue, 14 Jul 1998 11:44:13 -0700 (PDT) Subject: Paper: A connectionist model of recursion Message-ID: A preprint of the following paper to appear in Cognitive Science is available at http://www-rcf.usc.edu/~mortenc/nn-rec.html Toward a connectionist model of recursion in human linguistic performance Morten H. Christiansen University of Southern California Nick Chater University of Warwick Abstract Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such structures. We present a connectionist model of human performance in processing recursive language structures. The model is trained on simple artificial languages. We find that the qualitative performance profile of the model matches human behavior, both on the relative difficulty of center-embedding and cross-dependency, and between the processing of these complex recursive structures and right-branching recursive constructions.
We analyze how these differences in performance are reflected in the internal representations of the model by performing discriminant analyses on these representation both before and after training. Furthermore, we show how a network trained to process recursive structures can also generate such structures in a probabilistic fashion. This work suggests a novel explanation of people's limited recursive performance, without assuming the existence of a mentally represented competence grammar allowing unbounded recursion. --Morten Christiansen ------------------------------------------------------------------------- Morten H. Christiansen, Ph.D. | Phone: +1 (213) 740-6299 NIBS Program | Fax: +1 (213) 740-5687 University of Southern California | Email: morten at gizmo.usc.edu University Park MC-2520 | WWW: http://www-rcf.usc.edu/~mortenc/ Los Angeles, CA 90089-2520 | Office: Hedco Neurosciences Bldg. B11 ------------------------------------------------------------------------- From henders at linc.cis.upenn.edu Tue Jul 14 16:35:39 1998 From: henders at linc.cis.upenn.edu (Jamie Henderson) Date: Tue, 14 Jul 1998 16:35:39 -0400 (EDT) Subject: papers on "Simple Synchrony Networks" and natural language parsing Message-ID: <199807142035.QAA08532@linc.cis.upenn.edu> The following two papers on learning natural language parsing using an architecture that applies Temporal Synchrony Variable Binding to Simple Recurrent Networks can be retrieved from the following web site: http://www.dcs.ex.ac.uk/~jamie/ Keywords: Simple Recurrent Networks, variable binding, synchronous oscillations, natural language, grammar induction, syntactic parsing, representing structure, systematicity. - - - - - - - - - - Simple Synchrony Networks: Learning to Parse Natural Language with Temporal Synchrony Variable Binding Peter Lane and James Henderson University of Exeter Abstract: The Simple Synchrony Network (SSN) is a new connectionist architecture, incorporating the insights of Temporal Synchrony Variable Binding (TSVB) into Simple Recurrent Networks. The use of TSVB means SSNs can output representations of structures, and can learn generalisations over the constituents of these structures (as required by systematicity). This paper describes the SSN and an associated training algorithm, and demonstrates SSNs' generalisation abilities through results from training SSNs to parse real natural language sentences. (6 pages) In Proceedings of the 1998 International Conference on Artificial Neural Networks (ICANN`98), Skovde, Sweden, 1998. - - - - - - - - - - A Connectionist Architecture for Learning to Parse James Henderson and Peter Lane University of Exeter Abstract: We present a connectionist architecture and demonstrate that it can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents, and can learn generalizations over syntactic constituents, thereby addressing the sparse data problems of previous connectionist architectures. We apply these Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approaches that of statistical methods for this task. (7 pages) In Proceedings of 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics (COLING-ACL`98), University of Montreal, Canada, 1998. 
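The Simple Synchrony Network described in the two papers above is built on the Simple Recurrent Network (SRN). For orientation only, a minimal SRN forward pass in Python/numpy is sketched below; the sizes, random weights and toy input are invented for illustration, and the Temporal Synchrony Variable Binding machinery of the SSN is not shown.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 10, 20, 10          # e.g. one-hot word/tag codes (invented sizes)

W_xh = rng.normal(0.0, 0.1, (n_in, n_hidden))
W_hh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
W_hy = rng.normal(0.0, 0.1, (n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def srn_forward(sequence):
    # Elman-style SRN: the new hidden state is computed from the current
    # input and a copy of the previous hidden state (the "context" layer).
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = sigmoid(x @ W_xh + h @ W_hh)
        outputs.append(softmax(h @ W_hy))   # e.g. a distribution over the next tag
    return outputs

# toy "sentence": five random one-hot word/tag vectors
sentence = [np.eye(n_in)[i] for i in rng.integers(0, n_in, 5)]
predictions = srn_forward(sentence)
print("most likely next tag after the last word:", int(np.argmax(predictions[-1])))

Roughly speaking, the TSVB extension lets units fire in distinct phases of a processing cycle, so that activity in a given phase can be associated with a particular constituent; none of that machinery appears in the plain SRN sketch above.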
------------------------------- Dr James Henderson Department of Computer Science University of Exeter Exeter EX4 4PT, U.K. http://www.dcs.ex.ac.uk/~jamie/ ------------------------------- From Andreas.Eisele at xrce.xerox.com Wed Jul 15 05:03:01 1998 From: Andreas.Eisele at xrce.xerox.com (Andreas Eisele) Date: Wed, 15 Jul 1998 11:03:01 +0200 Subject: research opportunity at XRCE, Grenoble Message-ID: <199807150903.LAA11479@montendry.grenoble.xrce.xerox.com> Apologies for cross-posting... ************************************* * LCG TMR Network * * Learning Computational Grammars * * * ************************************* **************************************************************************** * PRE/POSTDOCTORAL RESEARCH OPPORTUNITY AT XEROX RESEARCH CENTRE, GRENOBLE * **************************************************************************** LCG (Learning Computational Grammars) is a research network funded by the EC Training and Mobility of Researchers programme (TMR). The LCG network involves seven European partners. The research goal of the network is to use machine learning techniques to extend a variety of computational grammars. The particular focus of XRCE's research will be on the integration of explicit grammatical knowledge with example-based evidence, such as a set of stored analyses in data oriented parsing, for example. See http://www.let.rug.nl/~nerbonne/tmr/lcg.html for more details. Up to three years of pre- or postdoctoral funding are available at XRCE, starting immediately. The ideal candidate will have research experience or strong interest in NLP and machine learning (including statistical techniques). As the funding is provided by the EU Training and Mobility of researchers programme there are some restrictions on who may benefit from it: * Candidates must be aged 35 or younger * Candidates must be Nationals of an EU country, Norway, Iceland, Switzerland or Israel * Candidates must have studied or be studying for a Doctoral Degree * Candidates cannot be French Nationals * Candidates cannot have worked in France more than 18 of the last 24 months If you are interested and eligible, e-mail your CV and the names and addresses of two referees to the address below. Your CV should include a list of recent publications. Please also outline in 2-3 pages your interest in LCG, how it is related to work you have done, and any special expertise you bring to the problem. --------------------------------------------- Andreas Eisele Andreas.Eisele at xrce.xerox.com Multilingual Theory and Technology Xerox Research Centre Europe Phone: +33 (0)4 76 61 50 86 6 chemin de Maupertuis Fax: +33 (0)4 76 61 50 99 F-38240 Meylan, France URL: http://www.xrce.xerox.com From berthold at ICSI.Berkeley.EDU Wed Jul 15 17:13:07 1998 From: berthold at ICSI.Berkeley.EDU (Michael Berthold) Date: Wed, 15 Jul 1998 14:13:07 -0700 (PDT) Subject: Announcing IDA-99 Message-ID: <199807152113.OAA24043@fondue.ICSI.Berkeley.EDU> Announcing IDA-99 Third International Symposium on Intelligent Data Analysis Center for Mathematics and Computer Science, Amsterdam, The Netherlands 9th-11th August 1999 Call for papers =============== IDA-99 will take place in Amsterdam from 9th to 11th August 1999, and is organised by Leiden University in cooperation with AAAI and NVKI. It will consist of a stimulating program of tutorials, invited talks by leading international experts in intelligent data analysis, contributed papers, poster sessions, and an exciting social program. 
We plan to have a special issue of the Intelligent Data Analysis journal with extended versions of a number of papers presented during the symposium. Objective ========= For many years the intersection of computing and data analysis contained menu-based statistics packages and not much else. Recently, statisticians have embraced computing, computer scientists are using statistical theories and methods, and researchers in all corners are inventing algorithms to find structure in vast online datasets. Data analysts now have access to tools for exploratory data analysis, decision tree induction, causal induction, function finding, constructing customised reference distributions, and visualisation, and there are intelligent assistants to advise on matters of design and analysis. There are tools for traditional, relatively small samples and also for enormous datasets. In all, the scope for probing data in new and penetrating ways has never been so exciting. Our aim is for IDA-99 to bring together a wide variety of researchers concerned with extracting knowledge from data, including people from statistics, machine learning, neural networks, computer science, pattern recognition, database management, and other areas. The strategies adopted by people from these areas are often different, and a synergy results if this is recognised. IDA-99 is intended to stimulate interaction between these different areas, so that more powerful tools emerge for extracting knowledge from data and a better understanding is developed of the process of intelligent data analysis. It is the third symposium on Intelligent Data Analysis after the successful symposia Intelligent Data Analysis 97 (http://www.dcs.bbk.ac.uk/ida97.html/) and Intelligent Data Analysis 95. Topics ====== Contributed papers are invited on any relevant topic, including, but not restricted to: APPLICATION & TOOLS: analysis of different kinds of data (e.g., censored, temporal etc) applications (e.g., commerce, engineering, finance, legal, manufacturing, medicine, public policy, science) assistants, intelligent agents for data analysis evaluation of IDA systems human-computer interaction in IDA IDA systems and tools information extraction, information retrieval THEORY & GENERAL PRINCIPLES: analysis of IDA algorithms classification, projection, regression, optimization, clustering data cleaning data pre-processing experiment design model specification, selection, estimation reasoning under uncertainty search statistical strategy uncertainty and noise in data ALGORITHMS & TECHNIQUES: Bayesian inference and influence diagrams bootstrap and randomization causal modeling data mining decision analysis exploratory data analysis fuzzy, neural and evolutionary appraoches knowledge-based analysis machine learning statistical pattern recognition visualization Submissions =========== Participants who wish to present a paper are requested to submit a manuscript, not exceeding 10 single-spaced pages. We strongly encourage authors to format their manuscript using Springer-Verlag's Advice to Authors (http://www.springer.de/comp/lncs/authors.html) for the Preparation of Contributions to LNCS Proceedings. This submission format is identical to the one for the final camera-ready copy of accepted papers. In addition, we request a separate page detailing the paper title, authors names, postal and email addresses, phone and fax numbers. Email submissions in Postscript form are encouraged. Otherwise, five hard copies of the manuscripts should be submitted. 
Submissions should be sent to the IDA-99 Program Chair. either electronically to: ida99 at wi.leidenuniv.nl or by hard copy to: Prof. dr J.N. Kok Department of Computer Science Leiden University P.O. Box 9512 2300 RA Leiden The Netherlands The address for courier services is Prof. dr J.N. Kok Department of Computer Science Leiden University Niels Bohrweg 1 2333 CA Leiden The Netherlands Important Dates =============== February 1st, 1999 Deadline for submitting papers April 15th, 1999 Notification of acceptance May 15th, 1999 Deadline for submission of final papers Review ====== All submissions will be reviewed on the basis of relevance, originality, significance, soundness and clarity. At least two referees will review each submission independently and final decisions will be made by program chairs, in consultation with relevant reviewers. Publications ============ The proceedings will be published in the Lecture Notes in Computer Science series of Springer (http://www.springer.de/comp/lncs/). The proceedings of Intelligent Data Analysis 97 appeared in this series as LNCS 1280 (http://www.springer.de/comp/lncs/volumes/1280.htm). Location ======== The symposium will use the facilities of the Center for Mathematics and Computer Science in Amsterdam (CWI -- http://www.cwi.nl/). There is an auditorium for more than 200 participants and several other rooms for parallel sessions. CWI is situated on the Wetenschappelijk Centrum Watergraafsmeer (WCW) campus in the eastern part of Amsterdam. Instructions about how to get to CWI can be found at: http://www.cwi.nl/cwi/about/directions.html On the campus there are several other research institutes and parts of the University of Amsterdam (http://www.uva.nl/english). Social Event ============ For the social event we are thinking about a boat trip through the canals of Amsterdam, with a special dinner in the centre of the city. We will provide each participant with a ``social package'', including a list of restaurants, bars, maps of town (http://www.channels.nl/themap.html), public transport information, timetables of trains (http://www.ns.nl/), etc. There are special boats and trams that circle along the touristic attractions of Amsterdam and hence it will be easy for the participants to find their way. Further information can be found in the The Internet Guide to Amsterdam (http://www.cwi.nl/~steven/amsterdam.html), panoramic pictures are also available (http://www.cwi.nl/~behr/PanoramaUK/Panorama.html). Exhibitions =========== IDA-99 welcomes demonstrations of software and publications related to intelligent data analysis. 
IDA-99 Organisation =================== General Chair: David Hand, Open University, UK Program Chair: Joost Kok, Leiden University, The Netherlands Program Co-Chairs: Michael Berthold, University of California, USA Doug Fisher, Vanderbilt University Members: Niall Adams, Open University, UK Pieter Adriaans, Syllogic, The Netherlands Russell Almond, Research Statistics Group, US Thomas Baeck, Informatik Centrum Dortmund, Germany Riccardo Bellazzi, University of Pavia, Italy Paul Cohen, University of Massachusetts, US Paul Darius, Leuven University, Belgium Tom Dietterich, Oregon State University, US Gerard van den Eijkel, Delft University of Technology, The Netherlands Fazel Famili, National Research Council, Canada Karl Froeschl, Univ of Vienna, Austria Linda van der Gaag, Utrecht University, The Netherlands Alex Gammerman, Royal Holloway London, UK Jaap van den Herik, University Maastricht, The Netherlands Larry Hunter, National Library of Medicine, US David Jensen, University of Massachusetts, US Bert Kappen, Nijmegen University, The Netherlands Hans Lenz, Free University of Berlin, Germany Frank Klawonn, University of Applied Sciences Emden, Germany Bing Liu, National University, Singapore Xiaohui Liu, Birkbeck College, UK David Madigan, University of Washington, US Heikki Mannila, Helsinki University, Finland Wayne Oldford, Waterloo, Canada Erkki Oja, Helsinki University of Technology, Finland Albert Prat, Technical University of Catalunya, Spain Luc de Raedt, KU Leuven, Belgium Rosanna Schiavo, University of Venice, Italy Jude Shavlik, University of Wisconsin, US Roberta Siciliano, University of Naples, Italy Arno Siebes, Center for Mathematics and Computer Science, The Netherlands Rosaria Silipo, International Computer Science Institute, US Floor Verdenius, ATO-DLO, The Netherlands Stefan Wrobel, GMD, Germany Jan Zytkow, Wichita State University, US (To be extended) Enquiries ========= Latest information regarding IDA-99 will be available on the World Wide Web Server of the Leiden Institute for Advanced Computer Science: http://www.wi.leidenuniv.nl/~ida99/ From giles at research.nj.nec.com Wed Jul 15 15:29:44 1998 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 15 Jul 1998 15:29:44 -0400 (EDT) Subject: Student research position Message-ID: <199807151929.PAA05809@alta> The NEC Research Institute in Princeton, NJ has an immediate opening for a student research position in the area of autonomous citation indexing (ACI) and machine learning, graph analysis or user profiling. ACI autonomously creates citation indexes which can provide a number of advantages over existing citation indexes and digital libraries for scholarly dissemination and feedback. Some recent related papers are listed below and those and others can be found at the listed Web site. Candidates must have experience in research and be able to effectively communicate research results. Ideal candidates will have knowledge of a number of the following: machine learning, graph analysis, user profiling, information retrieval, digital libraries, WWW, Perl, C++, and HTML. Interested applicants should apply by email, mail or fax including their resumes and any specific interests to: Dr. C. Lee Giles NEC Research Institute 4 Independence Way Princeton, NJ 08540 Fax: 609-951-2482 citeseer at research.nj.nec.com http://www.neci.nj.nec.com/homepages/giles/ Recent publications: C.L. Giles, K. Bollacker, S. Lawrence, CiteSeer: An Automatic Citation Indexing System, DL'98, The 3rd ACM Conference on Digital Libraries, pp. 
89-98, 1998 [one of eight papers short listed for best paper award]. S. Lawrence, C.L. Giles, Context and Page Analysis for Improved Web Search, IEEE Internet Computing, (accepted). S. Lawrence, C.L. Giles, Searching the World Wide Web, SCIENCE, 280, p. 98, 1998. S. Lawrence, C.L. Giles, The Inquirus Meta Search Engine, 7th International World Wide Web Conference, p. 95, 1998. -- __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles == From ericr at ee.usyd.edu.au Wed Jul 15 21:20:34 1998 From: ericr at ee.usyd.edu.au (Eric Ronco) Date: Thu, 16 Jul 1998 11:20:34 +1000 Subject: a probable human motor control strategy Message-ID: <199807160120.LAA16620@merlot.ee.usyd.edu.au.usyd.edu.au> Dear connectionists, This is to let you know of the existence of a new technical report in the on-line database of Sydney University: http://merlot.ee.usyd.edu.au/tech_rep TITLE: Open-Loop Intermittent Feedback Optimal Control: a probable human motor control strategy Authors: Eric Ronco Keywords: Human motor control; Predictive control; Intermittent control; Open-loop control Abstract: Recent studies on human motor control have been largely influenced by two important statements: (1) sensory feedback is too slow to be involved, at least in fast motor control actions; (2) a learned internal model of the system plays an important role in motor control. As a result, the human motor control system is often described as open-loop, and particularly as a system inverse. System inverse control is limited by too many problems to be a plausible candidate. Instead, an alternative between open-loop and feedback control is proposed here: "open-loop intermittent feedback optimal control". In this scheme, a prediction of the future behaviour of the system, which requires feedback information and a system model, is used to determine a sequence of actions that is run open-loop. The prediction of a new control sequence is performed intermittently (due to computational demand and slow sensory feedback) but with a sufficient frequency to ensure small control errors. The inverted pendulum on a cart is used to illustrate the viability of this scheme. bye Eric ------------------------------------------------------------------- Eric Ronco, PhD Tel: +61 2 9351 7680 Dt of Electrical Engineering Fax: +61 2 9351 5132 Bldg J13, Sydney University Email: ericr at ee.usyd.edu.au NSW 2006, Australia http://www.ee.usyd.edu.au/~ericr ------------------------------------------------------------------- From ericr at ee.usyd.edu.au Wed Jul 15 21:20:37 1998 From: ericr at ee.usyd.edu.au (Eric Ronco) Date: Thu, 16 Jul 1998 11:20:37 +1000 Subject: A Globally Valid Continuous-Time GPC Through Successive System Linearisations Message-ID: <199807160120.LAA16622@merlot.ee.usyd.edu.au.usyd.edu.au> Dear connectionists, This is to let you know of the existence of a new technical report in the on-line database of Sydney University: http://merlot.ee.usyd.edu.au/tech_rep TITLE: A Globally Valid Continuous-Time GPC Through Successive System Linearisations Authors: Eric Ronco, Taner Arsan, Peter J. Gawthrop and David J. Hill Keywords: Predictive control; Global control; Non-linear system; Off-equilibrium system linearisation Abstract: In a Model Based Predictive Controller (MBPC), an optimal set of control signals over a defined time period is determined by predicting the future system behaviour and minimising a non-linear cost function.
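To make the receding-horizon idea concrete, the rough sketch below implements a generic model-based predictive loop for an assumed toy linear model and quadratic cost. It is not the continuous-time GPC of the report; the model, cost, and horizon are invented for illustration.

import numpy as np
from scipy.optimize import minimize

# Generic receding-horizon (model-predictive) control loop on a toy linear model.
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # assumed discrete-time model: x' = A x + B u
B = np.array([[0.0], [0.1]])
H = 10                                    # prediction horizon (arbitrary)

def predicted_cost(u_seq, x0):
    """Quadratic cost of applying the control sequence u_seq from state x0."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B.flatten() * u
        cost += x @ x + 0.01 * u * u      # penalise state deviation and control effort
    return cost

def mbpc_step(x0):
    """Optimise the control sequence over the horizon, apply only its first element."""
    res = minimize(predicted_cost, np.zeros(H), args=(x0,))
    return res.x[0]

x = np.array([1.0, 0.0])                  # initial state
for _ in range(20):                       # closed loop: re-optimise at every step
    u = mbpc_step(x)
    x = A @ x + B.flatten() * u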
The significant amount of computation involved in a non-linear function optimisation often precludes fast control actions. A successive off-equilibrium system linearisation approach is proposed here to enhance fast control action in a state-space "Continuous-time Generalised Predictive Controller" (CGPC). This idea is also extended to achieve non-linear observations of the system states. It is shown from simulations of an inverted pendulum on a cart that this strategy is much more effective than a non-linear CGPC in terms of control system analysis, control quality, computation demand and noise robustness. bye Eric ------------------------------------------------------------------- Eric Ronco, PhD Tel: +61 2 9351 7680 Dt of Electrical Engineering Fax: +61 2 9351 5132 Bldg J13, Sydney University Email: ericr at ee.usyd.edu.au NSW 2006, Australia http://www.ee.usyd.edu.au/~ericr ------------------------------------------------------------------- From M.Usher at ukc.ac.uk Thu Jul 16 05:42:29 1998 From: M.Usher at ukc.ac.uk (M.Usher@ukc.ac.uk) Date: Thu, 16 Jul 1998 10:42:29 +0100 Subject: paper on binding and synchrony Message-ID: <199807160942.KAA15337@snipe.ukc.ac.uk> Researchers who are interested in the issue of neural synchrony as a mechanism for binding of visual information might enjoy reading the following article, which appeared last week in Nature (9 July, 1998, pp. 179-182). In this article we report results of psychophysical experiments that support a causal relation between synchrony and processes of visual grouping and segmentation. The article can also be accessed on our web-site: http://www.ukc.ac.uk/psychology/people/usherm/ Marius Usher Lecturer in Psychology University of Kent, Canterbury, UK Visual synchrony affects binding and segmentation in perception Marius Usher and Nick Donnelly ABSTRACT The visual system analyses information by decomposing complex objects into simple components (visual features) widely distributed across the cortex. When several objects are simultaneously present in the visual field, a mechanism is required to group (bind) together visual features belonging to each object and to separate (segment) them from features of other objects. An attractive scheme for binding visual features into a coherent percept If synchrony plays a major role in binding, one should expect that grouping and segmentation are facilitated in visual displays that induce stimulus dependent synchrony by temporal manipulations. We report data demonstrating that visual grouping is indeed facilitated when elements of one percept are presented simultaneously and are temporally separated (on a scale below the integration time of the visual system from elements of another percept is due to a global mechanism of grouping caused by synchronous neural activation and not to a local mechanism of motion computation. From Luis.Almeida at inesc.pt Thu Jul 16 11:43:27 1998 From: Luis.Almeida at inesc.pt (Luis B. Almeida) Date: Thu, 16 Jul 1998 16:43:27 +0100 Subject: preprint: nonlinear blind source separation Message-ID: <35AE1F9F.4EA986B4@ilusion.inesc.pt> The following preprint is available. It corresponds to a paper submitted to ICA'99, the International Workshop on Independent Component Analysis and Blind Source Separation, to be held in Aussois, France, in January 1999. Separation of nonlinear mixtures using pattern repulsion Goncalo C. Marques and Luis B. Almeida Abstract Blind source separation is currently a topic of great research interest.
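For readers new to the area, the standard linear formulation that the following sentences refer to can be written as below; the notation is assumed here for illustration and is not taken from the paper.

\[
  x(t) = A\,s(t), \qquad y(t) = W\,x(t),
\]

where x is the vector of observed mixtures, s the vector of mutually independent sources, and A the unknown mixing matrix; separation consists of finding an unmixing matrix W that makes the components of y as statistically independent as possible, ideally recovering the sources up to permutation and scaling.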
Most of the separation methods that have been developed are applicable only to the separation of linear mixtures. In this paper we derive a separation method for nonlinear mixtures, inspired by an analogy with the concept of repulsion among physical particles. Two examples of nonlinear separation are presented. The paper is available at ftp://146.193.2.131/pub/lba/papers/aussois.ps.gz (compressed postscript, 340 kB) and ftp://146.193.2.131/pub/lba/papers/aussois.ps (uncompressed postscript, 2.3 MB) Comments are welcome. Luis B. Almeida Phone: +351-1-3100246,+351-1-3544607 INESC Fax: +351-1-3145843 R. Alves Redol, 9 E-mail: lba at inesc.pt 1000 Lisboa, Portugal http://ilusion.inesc.pt/~lba/ ------------------------------------------------------------------------ *** Indonesia is killing innocent people in East Timor *** see http://amadeus.inesc.pt/~jota/Timor/ From marco at neuron.dii.unisi.it Thu Jul 16 11:50:23 1998 From: marco at neuron.dii.unisi.it (Marco Gori) Date: Thu, 16 Jul 1998 17:50:23 +0200 (MET DST) Subject: ECAI'98 tutorial - T11 Message-ID: ========================================================================== ECAI'98 Tutorial on Connectionist Models for Processing Structured Information Brighton, UK - August 25, 1998 Marco Gori http://www-dii.ing.unisi.it/~marco ========================================================================== This tutorial covers the problem of adaptive processing of graphs. Classic models for learning sequential information (e.g. recurrent neural networks) are properly extended to learn data organized as graphs. For instance, algorithms like Backpropagation Through Time (BPTT) are nicely extended to the case of data structures. People interested in this tutorial can find an abstract at http://www.cogs.susx.ac.uk/ecai98/tw/T11.html Additional information on the topic of adaptive processing of structured information can be found at http://www.dsi.unifi.it/~paolo/datas If you are interested in the tutorial, please don't hesitate to contact me. I'll be pleased to ask any question on the topic. For any information concerning the registration, you can access at the ECAI'98 web site (http://www.cogs.susx.ac.uk/ecai98/index.html) -- Marco Gori. ============================================ Marco Gori Dipartimento di Ingegneria dell'Informazione Universita' di Siena Via Roma 56 - 53100 Siena (Italy) Tel: : +39 577 26.36.10 Fax : +39 577 26.36.02 E-mail : marco at ing.unisi.it WWW : http://www-dii.ing.unisi.it/~marco ============================================= From ted.carnevale at yale.edu Thu Jul 16 13:34:18 1998 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Thu, 16 Jul 1998 13:34:18 -0400 Subject: NEURON course at 1998 SFN meeting Message-ID: <35AE399A.11B9@yale.edu> Short Course Announcement Using the NEURON Simulation Environment --------------------------------------- A Satellite Symposium to the Society for Neuroscience Meeting Los Angeles, CA Saturday, Nov. 7, 1998 9 AM - 5 PM Speakers: N.T. Carnevale, M.L. Hines, J.W. Moore, and G.M. Shepherd This 1 day course with lectures and live demonstrations will present information essential for teaching and research applications of NEURON. It emphasizes practical issues that are key to the most productive use of this powerful and convenient modeling tool. Each registrant will receive a CD-ROM with software, plus a comprehensive set of notes that includes material which has not yet appeared elsewhere in print. Coffee breaks and lunch will be provided. 
There will be a registration fee to cover these and related costs, e.g. AV equipment rental and handout materials. This will be in the ballpark of other 1 day courses; exact figure depends on factors we haven't yet learned from SFN. >>> Registration is limited to 55 individuals <<< on a first-come, first-serve basis Deadlines Early registration: Friday, September 25, 1998 Late registration: Friday, October 9, 1998 NO on-site registration will be accepted. For more information and the electronic registration form see the course's WWW pages at http://www.neuron.yale.edu/sfn98.html and http://neuron.duke.edu/sfn98.html --Ted Supported in part by the National Science Foundation. Opinions expressed are those of the authors and not necessarily those of the Foundation. From istvan at usl.edu Thu Jul 16 17:04:09 1998 From: istvan at usl.edu (Dr. Istvn S. N. Berkeley) Date: Thu, 16 Jul 1998 16:04:09 -0500 Subject: Draft papers available on-line References: <199807142035.QAA08532@linc.cis.upenn.edu> Message-ID: <35AE6AC9.5CDE@USL.edu> The following draft papers and writings, concerning matters connectionist are available on-line at the URL below http:www.ucs.usl.edu/~isb9112/papers.html Berkeley, I. "Connectionism Reconsidered: Minds, Machines and Models" Abstract: In this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research. Berkeley, I. "What the #$*%! is a Subsymbol?" Abstract: In 1988, Smolensky proposed that connectionist processing systems should be understood as operating at what he termed the 'subsymbolic' level. Subsymbolic systems should be understood by comparing them to symbolic systems, in Smolensky's view. Up until recently, there have been real problems with analyzing and interpreting the operation of connectionist systems which have undergone training. However, recently published work on a network trained on a set of logic problems originally studied by Bechtel and Abrahamsen (1991)seems to offer the potential to provide a detailed, empirically based answer to questions about the nature of subsymbols. In this paper, the network analysis procedure and the results obtained using it are discussed. This provides the basis for a suprising insight into the nature of subsymbols. (Note: There are currently some problems with the figures associated with this paper, hopefully this will be fixed soon.) Berkeley, I. "Some Myths of Connectionism". Abstract: This paper considers a number of claims about connectionist systems which are often encoutered in the philosophical literature. 
The myths discussed here include the claim that connectionist systems are, in some sense, biological or neural, the claim that connectionist systems are compatible with real-time processing constraints, the claim that connectionist systems exhibit graceful degradation and the claim that connectionist systems are good generalizers. In the case of each of these claims, it is argued that there is a mythical component and, as such, claims of this kind should not be accepted by philosophers without appropriate qualification. (N.B. This paper is aimed more at a philosophical audience, than a technical one). Berkeley, I. "A Revisionist History of Connectionism". Abstract: An alternative perspective on the history of connectionism is offered in this short paper. In this paper a number of claims and conclusions often encountered in standard versions of the history are disputed. This paper also includes some new material from the people involved in the history of AI and Cognitive Science. Berkeley, I. "An Introduction to Connectionism". This is a useful teaching resource, which provides a reasonably accessible and non-technical introduction to the basic components and concepts which are important to backpropogation systems. It includes a description of some of the kinds of processing units which can be employed. All the best, Istvan -- Istvan S. N. Berkeley Ph.D, E-mail: istvan at USL.edu, Philosophy, The University of Southwestern Louisiana, USL P. O. Box 43770, Lafayette, LA 70504-3770, USA. Tel:(318) 482 6807, Fax: (318) 482 6195, http://www.ucs.usl.edu/~isb9112 ***** Learn about the new Cognitive Science Ph.D. program at USL, visit http://cognition.usl.edu ***** From juergen at idsia.ch Fri Jul 17 09:49:58 1998 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Fri, 17 Jul 1998 15:49:58 +0200 Subject: reinforcement learning economy Message-ID: <199807171349.PAA15994@ruebe.idsia.ch> This message is triggered by Eric Baum's recent announcement of his interesting papers on evolutionary economies for reinforcement learning, "Hayek machine", and metalearning. I would like to mention that several related ideas are expressed in an old paper from 1987 [1]. Pages 23-51 of [1] are devoted to "Prototypical Self-referential Associating Learning Mechanisms (PSALM1 - PSALM3). Hayek2 (the most recent Hayek variant) is somewhat reminiscent of PSALM3, where competing/cooperating reinforcement learning agents bid for executing actions. Winners may receive external reward for achieving goals. Agents are supposed to learn the credit assignment process itself (metalearning). For this purpose they can execute actions for collectively constructing and connecting and modifying agents and for transferring credit (reward) to agents. A crucial difference between PSALM3 and Hayek2 may be that PSALM3 does not strictly enforce individual property rights. For instance, agents may steal money from other agents and temporally use it in a way that does not contribute to the system's overall progress. On the other hand, to the best of my knowledge, PSALMs are the first machine learning systems that enforce the important constraint of total credit conservation (except for consumption and external reward) - this constraint is not enforced in Holland's landmark bucket brigade classifier economy (1985), which may cause inflation and other problems. Reference [1] also inspired a slightly more recent but less general approach enforcing money conservation, where money is "weight substance" of a reinforcement learning neural net [2]. 
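As a rough illustration of the credit-conservation constraint discussed above — not PSALM3, not the Hayek machine, and not the bucket brigade, just a toy auction with invented rules — total money in the following sketch changes only through consumption (the winning bid) and external reward.

import random

# Toy credit-conserving bidding economy (simplified assumptions, for illustration).
class Agent:
    def __init__(self, name, money=10.0):
        self.name, self.money = name, money

def auction_step(agents, external_reward):
    bids = {a: random.uniform(0.0, a.money) for a in agents}  # each agent stakes part of its wealth
    winner = max(bids, key=bids.get)                          # highest bidder gets to act
    winner.money -= bids[winner]                              # the bid is consumed (leaves the system)
    winner.money += external_reward                           # reward only if the action pays off externally
    return winner

agents = [Agent(f"a{i}") for i in range(5)]
for t in range(3):
    w = auction_step(agents, external_reward=1.0)
    print(t, w.name, round(sum(a.money for a in agents), 2))  # total moves only by reward - consumption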
Pages 7-13 of [1] are devoted to an alternative "Genetic Programming" (GP) approach that recursively applies metalevel GP to the task of finding better program-modifying programs on lower levels - the goal is to use GP for improving GP. It may be worth mentioning that this was suggested long before GP itself (invented by Cramer in 1985) was popularized in the 1990s. It should be stated that reference [1] does not meet the scientific standards of a journal publication - it is the first paper I ever wrote in a foreign language (as an undergraduate). But despite its age (it was first distributed more than a decade ago) it may still be of at least historic interest due to renewed attention to market models and metalearning (and also GP). Unfortunately there is no digital version, but if you are interested I will send you a hardcopy (this may take some time depending on demand). [1] J. Schmidhuber. Evolutionary Principles in Self-Referential Learning. On Learning how to Learn: The Meta-Meta-Meta...-Hook. Diploma thesis, Tech. Univ. Munich, 1987 [2] J. Schmidhuber. The Neural Bucket Brigade: A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403-412, 1989, http://www.idsia.ch/~juergen/onlinepub.html _________________________________________________ Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch www.idsia.ch/~juergen From bfrey at dendrite.beckman.uiuc.edu Fri Jul 17 09:44:10 1998 From: bfrey at dendrite.beckman.uiuc.edu (Brendan Frey) Date: Fri, 17 Jul 1998 08:44:10 -0500 Subject: -> new book <- Message-ID: <199807171344.IAA07449@dendrite.beckman.uiuc.edu> ------------------------------------------------------ New book now available ------------------------------------------------------ Graphical Models for Machine Learning and Digital Communication Brendan Frey, MIT PRESS Information: www.cs.toronto.edu/~frey/book.html ------------------------------------------------------ neural networks graphical models Pearl's algorithm Monte Carlo variational methods wake-sleep algorithm generalized EM error-correcting coding bits-back coding => Helmholtz machines: unsupervised neural net probability models (like ICA, but with *dependent* components) => BEST error-correcting algorithm (turbodecoding) *** simple explanation using Pearl's algorithm => data compression using latent variable models Brendan. From mcauley.11 at osu.edu Fri Jul 17 11:38:29 1998 From: mcauley.11 at osu.edu (Devin McAuley) Date: Fri, 17 Jul 1998 11:38:29 -0400 (EDT) Subject: Tutorial workshop on connectionist models (preliminary announcement) Message-ID: <199807171538.LAA12905@mail4.uts.ohio-state.edu> A Tutorial Workshop on Connectionist Models of Cognition Saturday October 3rd - Sunday October 4th, 1998 Sponsored by theDepartment of Psychology and the Cognitive Science Center The Ohio State University Columbus, Ohio 43210 The Department of Psychology and the Cognitive Science Center at Ohio State University are pleased to announce a tutorial workshop on Connectionist Models of Cognition. The workshop is open to both faculty and students interested in the application of parallel distributed processing models to cognitive phenomena either in a teaching or research capacity. Participants will gain hands-on modeling experience with a variety of standard architectures and have the opportunity to begin developing their own models under the guidance of workshop staff. 
A background in connectionist models is not required to participate in this workshop. The intensive 2-day workshop will consist of tutorial lectures on specific connectionist models followed by structured modeling sessions in the computer lab. The network architectures covered will include interactive-activation networks, backpropagation networks, simple-recurrent networks, Hopfield networks, self-organizing feature maps, and coupled-oscillator models. Lab sessions will consist of a series of exercises using the BrainWave connectionist simulator. BrainWave is a web-friendly software tool for teaching connectionist models based on a paint-program metaphor. Developed for an undergraduate course on connectionist models, it targets novices and requires a minimal level of technical skill to use. The BrainWave package (and on-line teaching materials) have been included as components of undergraduate courses in psychology, cognitive science, linguistics, and computer science at universities in the United States, the United Kingdom, and Australia. By the end of this workshop, participants will have a general introduction to the field of neural networks and the BrainWave simulator, and have experience running simulations with several connectionist models of cognitive phenomena that have been influential in cognitive science. For faculty teaching cognitive modeling, the workshop provides the opportunity to gain familiarity with the BrainWave simulator and course materials. For researchers, it offers an excellent opportunity for professional development. The registration fee for the students attending the workshop is $50 and for all others it is $150. Included in the registration for the 2-day workshop are 1. A single-user copy of the BrainWave software package. 2. All course materials. The workshop coordinators are Dr. Devin McAuley (Department of Psychology, Ohio State University) and Dr. Simon Dennis (School of Psychology, University of Queensland). The Cognitive Science Center faculty sponsor is Professor Mari Jones (Department of Psychology, Ohio State University). Send general inquiries and registration forms to: Connectionist Models Workshop (c/o Devin McAuley) Department of Psychology 142 Townshend Hall The Ohio State University Columbus, Ohio 43210 USA Email: brainwav at psy.uq.edu.au Fax: 614-292-5601 * The registration deadline is September 15th (numbers are limited) * ---------------------------------------------------------------------------- Workshop Registration Form NAME ____________________________________________ EMAIL ____________________________________________ ADDRESS ____________________________________________ ____________________________________________ ____________________________________________ ____________________________________________ CHECK ONE STUDENT _____ ($50) ACADEMIC _____ ($150) INDUSTRY _____ ($150) Payment should be made to: the Department of Psychology, The Ohio State University ------------------------------------------------------------------- J. Devin McAuley, PhD Department of Psychology 142 Townshend Hall The Ohio State University Columbus, Ohio 43210 Email: mcauley.11 at osu.edu Phone: 1-614-292-4320 From giles at research.nj.nec.com Fri Jul 17 15:36:03 1998 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 17 Jul 1998 15:36:03 -0400 (EDT) Subject: Book on "Adaptive Processing of Sequences and Data Structures" Message-ID: <199807171936.PAA08666@alta> The following edited book might be of interest. 
"Adaptive Processing of Sequences and Data Structures" (eds.) C. Lee Giles, Marco Gori LECTURE NOTES IN COMPUTER SCIENCE: ARTIFICIAL INTELLIGENCE, Springer Verlag, 1998. This is a collection of tutorial papers presented at the International Summer School on Neural Networks, "E.R. Caianiello", Vietri sul Mare, Salerno, Italy, September 6-13, 1997. Ordering information can be had from: http://www.springer.de/comp/lncs/tutorial.html Chapters sections, titles and authors are: Architectures and Learning in Recurrent Neural Networks Recurrent Neural Network Architectures: An Overview A.C. Tsoi Gradient Based Learning Methods A.C. Tsoi Diagrammatic Methods for Deriving and Relating Temporal Neural Network Algorithms E.A. Wan and F. Beaufays Processing of Data Structures An Introduction to Learning Structured Information P. Frasconi Neural Networks for Processing Data Structures A. Sperduti The Loading Problem: Topics in Complexity M. Gori Probabilistic Models Learning Dynamic Bayesian Networks Z. Ghahramani Probabilistic Models of Neuronal Spike Trains P. Baldi Temporal Models in Blind Source Separation L.C. Parra Analog vs Symbolic Computation Recursive Neural Networks and Automata M. Maggini The Neural Network Pushdown Automaton: Architecture, Dynamics and Training G.Z. Sun, C.L. Giles and H.H. Chen Neural Dynamics with Stochasticity H. T. Siegelmann Applications Parsing the Stream of Time: The Value of Event-Based Segmentation in a Complex Real-World Control Problem M.C.Mozer and D. Miller Hybrid HMM/ANN Systems for Speech Recognition: Overview and New Research Directions H. Bourlard and N. Morgan Predictive Models for Sequence Modelling, Application to Speech and Character Recognition P. Gallinari Best regards, Lee Giles Marco Gori __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles == From rich at cs.umass.edu Mon Jul 20 11:54:13 1998 From: rich at cs.umass.edu (Rich Sutton) Date: Mon, 20 Jul 1998 10:54:13 -0500 Subject: Technical Report Announcement: Reinforcement Learning with Temporal Abstraction Message-ID: We are pleased to announce the public availability of the following technical report: Between MDPs and semi-MDPs: Learning, planning, and representing knowledge at multiple temporal scales. by Richard S. Sutton, Doina Precup, and Satinder Singh Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key challenges for Artificial Intelligence. this paper we develop an approach to these problems based on the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action to include {\it options}---whole courses of behavior that may be temporally extended, stochastic, and contingent on events. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Options may be given a priori, learned by experience, or both. They may be used interchangeably with actions in a variety of planning and learning methods. The theory of semi-Markov decision processes (SMDPs) can be applied to model the consequences of options and as a basis for planning and learning methods using them. In this paper we develop these connections, building on prior work by Bradtke and Duff (1995), Parr (in prep.) and others. 
Our main novel results concern the interface between the MDP and SMDP levels of analysis. We show how a set of options can be altered by changing only their termination conditions to improve over SMDP methods with no additional cost. We also introduce {\it intra-option} temporal-difference methods that are able to learn from fragments of an option's execution. Finally, we propose a notion of subgoal which can be used to improve the options themselves. Overall, we argue that options and their models provide hitherto missing aspects of a powerful, clear, and expressive framework for representing and organizing knowledge. ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/SPS-98.ps.gz 39 pages, 1.8 MBytes. From georgiou at wiley.csusb.edu Mon Jul 20 03:49:58 1998 From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu) Date: Mon, 20 Jul 1998 00:49:58 -0700 (PDT) Subject: ICCIN'98: Student funding available, deadline: August 1 Message-ID: <199807200749.AAA23416@wiley.csusb.edu> 3rd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE (ICCIN'98) Sheraton Imperial Hotel & Convention Center Research Triangle Park, North Carolina October 24-28, 1998 (Tutorials are on October 23) New Deadline for Student Paper Submissions: August 1 ARO Financial Support Available The organizers of ICCIN '98 and the Joint Conference on Information Sciences are pleased to announce that the Mathematical and Computer Sciences Division of the Army Research Office has become a sponsor of JCIS'98. Registration and travel stipends for students with accepted papers to ICCIN '98 and JCIS'98 will be made available. To accommodate this late breaking good news, the deadline for student papers to ICCIN '98 and JCIS '98 will be extended to August 1, 1998. All papers will undergo rapid peer review. A student paper constitutes a paper with significant student involvement; it need not be a paper where the student is the first or primary author. For the purposes of this announcement, "students" include, but are not limited to grad students, post-docs and research fellows. Some funds may be available for students who wish to attend the conference but who do not present research findings. However, preference will be given to those students actually presenting research findings on work in computational intelligence, neural networks, control and systems theory and other topics relevant to the JCIS '98. Students who have already submitted papers and would like to be considered for funding please send a note to georgiou at csci.csusb.edu . Conference Co-chairs: Subhash C. Kak, Louisiana State University Jeffrey P. Sutton, Harvard University Plenary Speakers include the following: +-----------------------------------------------------------------------+ |James Anderson |Panos J. Antsaklis |John Baillieul |Walter Freeman | |-----------------+-------------------+----------------+----------------| |David Fogel |Stephen Grossberg |Stuart Hameroff |Yu Chi Ho | |-----------------+-------------------+----------------+----------------| |Thomas S.Huang |George J. Klir |Teuvo Kohonen |John Koza | |-----------------+-------------------+----------------+----------------| |Richard G. Palmer|Zdzislaw Pawlak |Karl Pribram |Azriel Rosenfeld| |-----------------+-------------------+----------------+----------------| |Julius T. Tou |I.Burhan Turksen |Paul J. Werbos |A.K.C.Wong | |-----------------+-------------------+----------------+----------------| |Lotfi A. 
Zadeh |Hans J.Zimmermann | | | +-----------------------------------------------------------------------+ This conference is part of the Fourth Joint Conference Information Sciences. Areas for which papers are sought include: o Artificial Life o Artificially Intelligent NNs o Associative Memory o Cognitive Science o Computational Intelligence o Efficiency/Robustness Comparisons o Evolutionary Computation for Neural Networks o Feature Extraction & Pattern Recognition o Implementations (electronic, Optical, Biochips) o Intelligent Control o Learning and Memory o Neural Network Architectures o Neurocognition o Neurodynamics o Neuro-Quantum Information Processing o Optimization o Parallel Computer Applications o Quantum Neurocomputing o Theory of Evolutionary Computation Papers will be accepted based on summaries. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text, with figures and tables included. Required deposits and other information: http://www.ee.duke.edu/~gu/JCIS98/conf.html Send 3 copies of summaries to: George M. Georgiou Computer Science Department California State University San Bernardino, CA 92407-2397 U.S.A. georgiou at csci.csusb.edu Tutorial and other registration information can be found in the announcement of the Fourth Joint Conference Information Sciences: http://www.ee.duke.edu/~gu/JCIS98/ ICCIN'98 Conference Web site: http://www.csci.csusb.edu/iccin From oby at cs.tu-berlin.de Mon Jul 20 16:23:46 1998 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Mon, 20 Jul 1998 22:23:46 +0200 (MET DST) Subject: preprints available Message-ID: <199807202023.WAA28537@pollux.cs.tu-berlin.de> Dear Connectionists, attached please find abstracts and preprint-locations of two short manuscripts on modelling the development of cortical maps. Cheers Klaus ---------------------------------------------------------------------- Prof. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Informatik 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/ ---------------------------------------------------------------------- ---------------------------------------------------------------------- M. Stetter^1, E. W. Lang^2, and K. Obermayer^1 1 FB Informatik, TU Berlin 2 Physik-Department, U. Regensburg Unspecific long-term potentiation can evoke functional segregation in a model of area 17. Recently it has been shown in rat hippocampus that the synapse specificity of Hebbian long-term potentiation breaks down at short distances below 100 mum . Using a neural network model we show that this unspecific component of long term potentiation can be responsible for the robust formation and maintainance of cortical organization during activity driven development. When the model is applied to the formation of orientation and ocular dominance in visual cortex, we find that the addition of an unspecific component to standard Hebbian learning - in combination with a tendency of left-eye and right-eye driven synapses to initially group together on the postsynaptic neuron - induces the simultaneous emergence and stabilization of ocular dominance and of segregated, oriented ON-/OFF-subfields. Since standard Hebbian learning cannot account for the simultaneous stabilization of both structures, unspecific LTP thus induces a qualitatively new behaviour. 
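A rough sketch of a Hebbian update augmented with an unspecific component of the kind discussed here is given below; the functional form, the neighbourhood structure and all constants are assumptions made for illustration and are not the model equations of the paper.

import numpy as np

ETA, KAPPA = 0.01, 0.3          # specific and unspecific learning rates (made up)

def hebb_with_unspecific_ltp(W, pre, post, neighbours):
    """W[i, j]: weight from presynaptic unit j to postsynaptic unit i.
    neighbours[j]: indices of synapses clustered near synapse j (assumed given)."""
    specific = ETA * np.outer(post, pre)              # standard Hebbian term
    unspecific = np.zeros_like(W)
    for j in range(W.shape[1]):
        for k in neighbours[j]:
            # potentiation of synapse j also potentiates nearby synapse k,
            # regardless of k's own presynaptic activity
            unspecific[:, k] += KAPPA * specific[:, j]
    return W + specific + unspecific

# Tiny demo with arbitrary activities and a made-up neighbourhood map.
W = np.zeros((3, 4))
print(hebb_with_unspecific_ltp(W, pre=np.array([1., 0., 1., 0.]),
                               post=np.array([1., 1., 0.]),
                               neighbours={0: [1], 1: [0], 2: [3], 3: [2]}))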
Since unspecific LTP only acts between synapses which are locally clustered in space, our results imply that details of the local grouping of synapses on the dendritic arbors of postsynaptic cells can considerably influence the formation of the cortical functional organization at the systems level. to appear in: Neuroreport 1998 available at: http://ni.cs.tu-berlin.de/publications/#journals ------------------------------------------------------------------------ C. Piepenbrock and K. Obermayer FB Informatik, TU Berlin Effects of lateral competition in the primary visual cortex on the development of topographic projections and ocular dominance maps We present a Hebbian model for the development of cortical maps in the striate cortex that includes a parameter which represents the degree of lateral competition for activity between neurons. It has two well known models as limiting cases: for weak competition we obtain a Correlation Based Learning (CBL) model and for strong lateral competition we recover the Self Organizing Map (SOM). Receptive fields develop through very different mechanisms in these models: CBL is driven by the second order statistics of the input stimuli, while SOM maps stimulus features. We study the influence of lateral interactions for intermediate changes in competition. Increasing the competition for binocular localized stimuli we find a transition from an unorganized map to a topographic projection and subsequently to a topographic map with ocular dominance columns. to appear in: In Computational Neuroscience: Trends in Research 1998, 1998. available at: http://ni.cs.tu-berlin.de/publications/#conference From dhw at santafe.edu Mon Jul 20 16:54:04 1998 From: dhw at santafe.edu (dhw@santafe.edu) Date: Mon, 20 Jul 1998 14:54:04 -0600 Subject: Job Announcement at NASA Ames Research Center Message-ID: <199807202054.OAA20722@santafe.santafe.edu> ** PLEASE CIRCULATE ** RESEARCH POSITION AT NASA AMES The Computational Inductive Inference Group at NASA Ames Research Center invites applications for a researcher to work on its "Collective Intelligence" project. This multi-disciplinary effort is led by David Wolpert, and currently has 5 researchers. It is concerned with systems that involve large collections of sophisticated machine learning algorithms operating, without centralized control, so as to achieve a global objective. The current domains being investigated are packet routing, automated parallelization, and protocell modeling. The position will be at either the post-doctoral and/or staff researcher level, depending on qualifications. Applicants must have a doctoral degree and an outstanding research record. Experience in one or more of the fields of game theory, economics, machine learning / statistical inference (especially reinforcement learning), or multi-agent systems is required. Experience in network routing, biophysics, population biology, and/or parallel algorithms is a plus. Candidates should send a curriculum vitae (including a list of publications), their citizenship status and the names of at least three references to Hal Duncan, hduncan at mail.arc.nasa.gov, 650-604-4767, MS N269-2, NASA Ames Research Center, Moffett Field, CA, 94035. Electronic submissions are preferred. Do NOT send information to David Wolpert. 
From herbert.jaeger at gmd.de Tue Jul 21 08:36:40 1998 From: herbert.jaeger at gmd.de (Herbert Jaeger) Date: Tue, 21 Jul 1998 14:36:40 +0200 Subject: Postdoc Position in Mobile Robotics Message-ID: <35B48B2D.551D@gmd.de> ------------------------------------------------------------------- Post-doctoral position at the German Nation Research Center for Information Technology (GMD): Integrating bottom-up learning and planning in a mobile robot that uses stochastic models for predicting the consequences of action ------------------------------------------------------------------- Within GMD's annual postdoc programme (details at http://ik.gmd.de/PD-97-98.html), a research position on integrating bottom-up learning and conceptual-level planning in mobile robots is offered in the Cognitive Robotics research group (http://www.gmd.de/FIT/KI/CogRob/) at the GMD Institute for System Design (SET). Among many other topics related to the design of mobile robots, the group pursues the integration of bottom-up, behavior-based techniques with symbolic planning capabilities. It is not intended merely to interface a classical planning module with an otherwise behavior-based control system. Instead, the symbolic information processing must grow out of the dynamics of the robot-environment interaction. This goal is to be achieved by combining several techniques: (i) a stochastic learning component which enables the robot to learn to predict the consequences of its actions, (ii) a dynamical-systems based method ("Dual Dynamics") for specifying the behaviors of a robot, (iii) an architecture for integrating (i) and (ii) with a symbolic planning system on the grounds of conceptual knowledge that emerges from the robot's prediction learning. The complete system will be implemented on a B14 robot, our RoboCup robots, or underwater robots. Details can be found in several short papers (ftp://ftp.gmd.de/GMD/ais/publications/ {1997/jaeger-christaller.97.dd.ps.gz, 1998/jaeger.98.emcsr.ps.gz, 1998/hertzberg.98.ddplan.ps.gz}). We are looking for a post-doc researcher who finds this a fascinating scientific challenge, and who would like to take responsibilities for the success of this endeavor. We emphasize that this is not a narrowly defined project position - post-doctoral grantholders at GMD can put their own scientific stamp on their work. Qualifications: The candidate must have a doctoral degree in computer science, engineering, physics, mathematics, biology, or related areas. Furthermore, we would be happy about as many as possible from among the following qualifications: - experience in robotics, - experience with stochastic system analysis and control, - familiarity with dynamical systems in general, ODE's in particular, - a background in symbolic, - interest in the fundamental questions of cognitive agents research. Women are encouraged to apply. Handicapped people with comparable qualifications are given preferential treatment. Please direct inquiries and applications to Herbert Jaeger at the address given below. Formal applications should be prepared according to the requirements specified at http://ik.gmd.de/PD-97-98.html. Specifically, a short research plan must be included. >>> The deadline for applications is September 15, 1998. ---------------------------------------------------------------- Dr. 
Herbert Jaeger Phone +49-2241-14-2253 German National Research Center Fax +49-2241-14-2384 for Information Technology (GMD) email herbert.jaeger at gmd.de SET.KI Schloss Birlinghoven D-53754 Sankt Augustin, Germany http://www.gmd.de/People/Herbert.Jaeger/ ---------------------------------------------------------------- From benferha at irit.fr Tue Jul 21 12:27:12 1998 From: benferha at irit.fr (Salem BENFERHAT) Date: Tue, 21 Jul 1998 18:27:12 +0200 (MET DST) Subject: ETAI-new area "Decision and Reasoning under Uncertainty" Message-ID: <199807211627.SAA02487@elsac.irit.fr> Dear Colleagues, We are very pleased to announce the creation of a new area of the Electronic Transactions on Artificial Intelligence (ETAI), named Decision and Reasoning under Uncertainty (DRU), area of which we are in charge. To see how ETAI operates and what its present state is, the best is to take a look at the ETAI webpage http://www.ida.liu.se/ext/etai/. A brief summary is also given below. This new area covers researches on reasoning and decision under uncertainty both on the methodological and on the applicative sides. Significant papers are invited from the whole spectrum of uncertainty in Artificial Intelligence researches. See below for the call for papers. The reviewing process in ETAI-DRU area differs from the traditional one. Each sumbitted paper goes through the following steps (see below for a complete description of the reviewing process): 1. The paper is posted at the ETAI-website and is announced to DRU-community via a newsletter. This starts a three months online (public) discussions. 2. After these three months discussions, the author decides whether he/she wishes to have the paper refereed for the ETAI, or not. 3. If yes, the article is sent to confidential referees (timing is short, typically 3 weeks). The article is either accepted or not accepted for the ETAI. Research papers which are under public discussions are located at : "http://www.ida.liu.se/ext/etai/received/dru/received.html". Thus, we have already received a first paper from David Poole entitled "Decision Theory, the Situation Calculus and Conditional Plans". To contribute to the discussion or debate section concerning submitted papers, please send your questions or comments as an email to the area editors (benferhat at irit.fr, prade at irit.fr). Besides the reviewing process, we also organizes News Journals and Newsletters. The Newsletter is sent out by e-mail regularly and is the media through which information is distributed rapidly. The News Journal is published 3 or 4 times per year as a digest of the information that was sent out in the past Newsletters. Newsletters will i) inform about new submitted papers (open to discussions), ii) announce Conferences, Books, Journal issues, PhD thesis and technical reports, Career Opportunities and Training, and softwares dealing with uncertainty, and iii) include discussions on uncertainty reasoning. To include some announcements in one of the previous categories, please send an email to benferhat at irit.fr and Prade at irit.fr. Lastly, in order to maintain a list of people working on Decision and reasoning under uncertainty, and in order to continue receiving Newsletters and News Journals, please send the following information by email: Last and first name, affiliation, email address, personal web-page. Please, feel free to ask any questions about ETAI-DRU area. We hope that you may consider the possibility of contributing papers and discussions to the ETAI-DRU areas in the future. 
Best regards Salem Benferhat and Henri Prade ________________________________________________ 1. Call for papers Significant papers are invited from the whole spectrum of uncertainty research in Artificial Intelligence. Topics of interest include, but are not restricted to: Methods: Probability theory (Bayesian or not), Belief function theory, Upper and lower probabilities, Possibility theory, Fuzzy sets, Rough sets, Measures of information, ... Problems: Approximate reasoning, Decision-making under uncertainty, Planning under uncertainty, Uncertainty issues in learning and data mining, Algorithms for uncertain reasoning, Formal languages for representing uncertain information, Belief revision and plausible reasoning under uncertainty, Data fusion, Diagnosis, Inference under uncertainty, expert systems, Cognitive modelling and uncertainty, Practical applications, ... Area editors: Salem Benferhat, Henri Prade, IRIT, Toulouse, France Area editorial committee * Fahiem Bacchus, Univ. of Waterloo, Canada * Bernadette Bouchon-Meunier, LIP6, Univ. of Paris VI, France * Ronen I. Brafman, Stanford, USA * Roger Cooke, Tech. Univ. Delft, The Netherlands * Didier Dubois, IRIT, Toulouse, France * Francesc Esteva, IIIA-CSIC, Bellaterra, Spain * Finn V. Jensen, Aalborg Univ., Denmark * Jurg Kohlas, Univ. of Fribourg, Switzerland * Rudolf Kruse, Univ. of Magdeburg, Germany * Serafin Moral, Univ. of Granada, Spain * Prakash P. Shenoy, Univ. of Kansas, USA * Philippe Smets, IRIDIA, Free Univ. of Brussels, Belgium * Marek J. Druzdzel, Univ. of Pittsburgh, USA * Lech Polkowski, Warsaw Univ. of Technology, Poland 2. A brief summary of ETAI The following gives some information about ETAI (for complete information, take a look at the web page: http://www.ida.liu.se/ext/etai/). In a certain sense, ETAI is an electronic journal. However, it is not simply a traditional journal gone electronic. The differences may be summarized by the following table describing the functions performed by a conventional journal and by the ETAI:

                                     Conventional Journal          ETAI
  Distribution of the article        A major function              Not our business
  Reviewing and quality assurance    A major function              A major function
  Debate about published results     Difficult and not much done   A major function
  Publication of on-line software    Impossible                    Welcomed and already started
  Bibliographic services             Difficult and not much done   A major function

The basic service of a conventional (paper) journal is to have the article typeset, printed, and sent to the subscribers. The ETAI stays completely away from that process: it assumes the existence of First Publication Archives (similar to "Preprint Archives", but with a guarantee that the articles remain unchanged for an extended period of time). The ETAI only deals with URLs pointing to articles that have been published (but without international peer review) in First Publication Archives. Reviewing and quality control is a major topic for the ETAI, as for conventional journals. However, the ETAI pioneers the principle of a posteriori reviewing: the reviewing and acceptance process takes place after the article has been published. This has a number of consequences, but the major advantage from the point of view of the author is that he or she retains the priority right of the article and its results per the original date of publication, and independently of reviewing delays and possible reviewing mistakes.
Reviewing in ETAI also differs from conventional journal reviewing in that it uses a succession of several "filters", rather than one single reviewing pass, and in that it is set up so as to encourage self-control on the side of the authors. The intention is that ETAI's quality control shall be considerably more strict and reliable than what is done in conventional journals. Besides the reviewing process, the ETAI also organizes News Journals (or newsletters) in each of its speciality areas. News Journals are fora for information about current events (workshops, etc), but they will also contain debate about recently published research results. Naturally, the on-line medium is much more appropriate for debate than what a conventional journal is. Compared to mailgroups, the News Journals offer a more persistent and reputable forum of discussion. Discussion contributions are preserved in such a way that they are accessible and referencable for the future. In other words, they also are to be considered as "published". One additional type of contributions in News Journals is for links to software that is available and can be run over the net. This is particularly valuable for software which can be run directly from a web page. Already the first issue of an ETAI News Journal publishes two such on-line software contributions. The creation of bibliographies, finally, is a traditional activity in research, but it is impractical in paper-based media since by their very nature, bibliographies ought to be updated as new articles arrive. The on-line maintenance of specialized bibliographies within each of its topic areas is a natural function in the ETAI. Generally speaking, it is clear that the electronic medium lends itself to a different grouping of functionalities than what is natural or even possible in the paper-based technology. For example, the bibliographic database underlying ETAI's bibliographic services is well integrated with the reviewing process and with the News Journals where new contributions to the literature are first reported. Similarly, debate items pertaining to a particular article will be accessible from the entry for the article itself. The ETAI therefore represents a novel approach to electronic publishing. We do not simply inherit the patterns from the older technology, but instead we have rethought the structure of scientific communication in order to make the best possible use of international computer networks as well as electronic document and database technologies. 3. The ETAI Reviewing process The reviewing process is the core activity of a scientific journal, electronic or not. The ETAI wishes to make full use of the electronic medium in order to obtain higher quality reviewing as a service to authors and readers alike. The reviewing will therefore be organized in the following, novel way. From the point of view of the author and the individual article, this works as follows: . The author(s) write(s) the article, and prepares it in postscript and/or PDF format. It is recommended to use ETAI style files. . An informal contact with the area editor may be appropriate in order to check that the article follows the basic formal criteria. . The author arranges to have the article published in a first publication archive (university E-Press, preprint archive, etc) in such a way that it is identified now and forever with a specific URL. . The article represented by the URL is submitted to the relevant ETAI area for inclusion in a news journal. . 
The area editor screens it and approves (hopefully) the inclusion of the article. The article is subjected to open reviewing for at least six months. The author is likely to make changes to the article based on the feedback. The author then decides where to go for acceptance. If he or she chooses e.g. JAIR or some other venue rather than the ETAI, there is nothing more to do. If the author submits the article (now probably modified) for acceptance by the ETAI, then the area editor appoints two referees and gives them a short amount of time to do their job. Then the area editor decides based on the statements of the referees. See above regarding time of submission and time of decision. Even after ETAI acceptance, the article "hangs around" and may engage additional debate.

From giulio at dist.unige.it Thu Jul 23 02:27:31 1998 From: giulio at dist.unige.it (Giulio Sandini) Date: Thu, 23 Jul 1998 08:27:31 +0200 Subject: PhD Studentships in Cognitive Neuroscience and Robotics Message-ID: <35B6D7D3.42DC@dist.unige.it> PhD Studentships in Cognitive Neuroscience and Robotics Available The LIRA-Lab of the University of Genova in Italy is looking for PhD students interested in investigating the development of sensori-motor coordination through the implementation of artificial systems. LIRA-Lab is a small multidisciplinary group of people with backgrounds in biology and engineering and with long-standing experience with biologically motivated robots and systems. It is located in Genova, a medium-size city along the Italian Riviera, and operates within the Department of Computer Science of the Faculty of Engineering. More information about the lab's activity and references to past guests and students can be found at: http://www.lira.dist.unige.it:81 The successful candidate should have an undergraduate degree and should be interested in developing software and hardware models of sensori-motor mechanisms with particular emphasis on eye-head-hand coordination. Fellowships are supported by the European Union and must be assigned to young European students (according to the rules, Italian candidates and students with non-European passports are not eligible). The duration is not less than three years. Salary is in accordance with the guidelines suggested by the EU within the "Training and Mobility of Researchers" activity and is on the order of 18,000 Euro per year. Interested candidates should contact: Prof. Giulio Sandini DIST - University of Genova Via Opera Pia, 13 16145 Genova - Italy Fax: +39 10 353.2154 e-mail: sandini at dist.unige.it

From jhf at stat.Stanford.EDU Thu Jul 23 19:26:34 1998 From: jhf at stat.Stanford.EDU (Jerome H. Friedman) Date: Thu, 23 Jul 1998 16:26:34 -0700 (PDT) Subject: Paper on boosting. Message-ID: <199807232326.QAA14698@rgmiller.Stanford.EDU> *** Technical Report Available *** Additive Logistic Regression: a Statistical View of Boosting Jerome Friedman (jhf at stat.stanford.edu) Trevor Hastie (trevor at stat.stanford.edu) Robert Tibshirani (tibs at utstat.toronto.edu) ABSTRACT Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms often can be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced.
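To make the reweighting-and-voting scheme just described concrete, here is a minimal sketch in the style of discrete AdaBoost with decision stumps (an illustration under assumed conventions, not the authors' code and not the additive logistic regression procedures developed in the report; labels y are assumed to be in {-1, +1}):

    import numpy as np

    def fit_stump(X, y, w):
        # Pick the feature, threshold and sign with the lowest weighted error.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] <= thr, sign, -sign)
                    err = np.sum(w * (pred != y))
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        return best

    def adaboost(X, y, n_rounds=20):
        n = len(y)
        w = np.full(n, 1.0 / n)          # example weights, reweighted each round
        ensemble = []
        for _ in range(n_rounds):
            err, j, thr, sign = fit_stump(X, y, w)
            err = max(err, 1e-12)
            alpha = 0.5 * np.log((1.0 - err) / err)   # this classifier's vote weight
            pred = np.where(X[:, j] <= thr, sign, -sign)
            w *= np.exp(-alpha * y * pred)            # upweight misclassified examples
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def predict(ensemble, X):
        # Weighted majority vote of the sequence of weak classifiers.
        score = sum(a * np.where(X[:, j] <= t, s, -s) for a, j, t, s in ensemble)
        return np.sign(score)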
We show that this seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to that of boosting. Direct multi-class generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multi-class generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally making it more suitable to large scale data mining applications. Available by ftp from: "ftp://stat.stanford.edu/pub/friedman/boost.ps.Z" or "ftp://utstat.toronto.edu/pub/tibs/boost.ps.Z" Comments welcomed. From geoff at giccs.georgetown.edu Fri Jul 24 18:37:33 1998 From: geoff at giccs.georgetown.edu (Geoff Goodhill) Date: Fri, 24 Jul 1998 18:37:33 -0400 Subject: Paper available Message-ID: <199807242237.SAA01938@fathead.giccs.georgetown.edu> Dear Colleagues, The following paper, which has just appeared in Network 9, 419-432, is available in postscript form from http://www.giccs.georgetown.edu/~geoff/pubs.html Thanks, Geoff Goodhill ----- The influence of neural activity and intracortical connections on the periodicity of ocular dominance stripes Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences Washington DC Abstract Several factors may interact to determine the periodicity of ocular dominance stripes in cat and monkey visual cortex. Previous theoretical work has suggested roles for the width of cortical interactions and the strength of between-eye correlations. Here, a model based on an explicit optimization is presented that allows a thorough characterization of how these and other parameters of the afferent input could affect ocular dominance stripe periodicity. The principle conclusions are that increasing the width of within-eye correlations leads to wider columns, and, surprisingly, that increasing the width of cortical interactions can sometimes lead to narrower columns. From nkasabov at commerce.otago.ac.nz Mon Jul 27 13:46:17 1998 From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov) Date: Mon, 27 Jul 1998 13:46:17 NZST-12NZDT Subject: Two post-doc positions in connectionist systems Message-ID: <199807270147.NAA29139@arwen.otago.ac.nz> POSTDOCTORAL RESEARCH FELLOWS (TWO POSITIONS) CONNECTIONIST-BASED INFORMATION SYSTEMS Applications are invited for two positions of Postdoctoral Research Fellow within the Department of Information Science, University of Otago. Initially this is a two year fixed term appointment with the possibility of renewal. The successful applicants should have a PhD degree in Information Science, Computer Science, Electrical Engineering or Mathematics. 
Knowledge on contemporary methods and techniques for information processing and intelligent information systems (neural networks, fuzzy logic, evolutionary systems), excellent programming skills (Java, C++, Delphi), experience working with operating systems for PC and SUN platforms in a distributed network environment, is desirable. In collaboration with other universities in New Zealand and abroad, the department is working on a research project "Connectionist-Based Intelligent Information Systems" funded by the New Zealand Foundation for Research, Science and Technology (FRST). This project involves programming and using AI systems (neural networks, rule-based systems, fuzzy systems, evolutionary computation, rule extraction algorithms) for single platform or in a distributed platform-independent environment on the WWW. It also aims at applying developed methodologies and tools for adaptive speech recognition, language translation, image processing, adaptive control and data mining in bioinformatics. Commencing salary will be within the ranges available for each position. Further enquiries should be directed Associate Professor Nik Kasabov, +64-3-479 8319, email: nkasabov at otago.ac.nz, http: kel.otago.ac.nz/vacancies/. Reference number AG98/25 Closing date: Friday 21 August 1998 METHOD OF APPLICATION Further details regarding this position, the University and the application procedure are available from the Deputy Director, Personnel Services, University of Otago, PO Box 56, Dunedin, New Zealand (fax +64-3-474 1607) and from the Dunedin branch of the New Zealand Employment Service. Applicants should send two copies of their curriculum vitae together with the names, addresses and fax numbers of three referees, to the Deputy Director of Personnel Services by the specified closing date, quoting the appropriate reference number. Equal opportunity in employment is University policy. --------------------------------------------------------- Assoc.Professor Dr Nikola Kasabov phone:+64 3 479 8319 Department of Information Science fax: +64 3 479 8311 University of Otago,P.O.Box 56 nkasabov at otago.ac.nz Dunedin, New Zealand WWW http://divcom.otago.ac.nz:800/com/infosci/kel/home.htm ---------------------------------------------------------- From giles at research.nj.nec.com Mon Jul 27 13:48:52 1998 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 27 Jul 1998 13:48:52 -0400 Subject: journal paper on neural networks and NLP Message-ID: <199807271748.NAA22553@alta> The following journal paper on natural language learning and neural networks has been accepted in IEEE Transactions on Knowledge and Data Engineering and is now available at: http://www.neci.nj.nec.com/homepages/lawrence/ http://www.neci.nj.nec.com/homepages/giles/ The labeled corpus data set for this study is also available at: http://www.neci.nj.nec.com/homepages/sandiway/pappi/rnn/index.html ************************************************************************** "Natural Language Grammatical Inference with Recurrent Neural Networks" Steve Lawrence (1), C. 
Lee Giles (1,2), Sandiway Fong (1) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA {lawrence,giles,sandiway}@research.nj.nec.com ABSTRACT This paper examines the inductive inference of a complex grammar with neural networks -- specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky, in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability, and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior which is often not present with smaller grammars, and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through-time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated. Keywords: recurrent neural networks, natural language processing, grammatical inference, government-and-binding theory, gradient descent, simulated annealing, principles-and-parameters framework, automata extraction. ********************************************************************** A previously published related book chapter: Steve Lawrence, Sandiway Fong, C. Lee Giles, "Natural Language Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods," in Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing, Lecture Notes in AI, edited by Stefan Wermter, Ellen Riloff and Gabriele Scheler, Springer Verlag, New York, pp. 33-47, 1996. is available upon request. __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles == From terry at salk.edu Tue Jul 28 18:08:10 1998 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 28 Jul 1998 15:08:10 -0700 (PDT) Subject: NEURAL COMPUTATION 10:6 Message-ID: <199807282208.PAA16497@helmholtz.salk.edu> Neural Computation - Contents Volume 10, Number 6 - August 15, 1998 ARTICLES Chaotic Balanced State in a Model Of Cortical Circuits C. van Vreeswijk and H. Somplinsky Blind Source Separation and Deconvolution: The Dynamic Component Analysis Algorithm H. Attias and C. E. Schreiner NOTES Bias/Variance Decompositions for Likelihood-Based Estimators Tom Heskes The Influence Function of Principal Component Analysis by Self-Organizing Rule Isao Higuchi and Shinto Eguchi A Sparse Representation for Function Approximation Tomaso Poggio and Federico Girosi LETTERS An Equivalence Between Sparse Approximation and Support Vector Machines Federico Girosi Extended Kalman Filter-Based Pruning Method for Recurrent Neural Networks John Sum, Lai-wan Chan, Chi-sing Leung, and Gilbert H. 
Young Transform-Invariant Recognition by Association in a Recurrent Network Nestor Parga and Edmund Rolls Retrieval Dynamics in Oscillator Neural Networks Toshio Aoyagi and Katsunori Kitano A Fast And Robust Cluster Update Algorithm For Image Segmentation In Spin-Lattice Models Without Annealing--Visual Latencies Revisited Ralf Opara and Florentin Worgotter Probability Density Methods for Smooth Function Approximation and Learning in Populations of Tuned Spiking Neurons Terence David Sanger A Potts Neuron Approach To Communication Routing Jari Hakkinen, Martin Lagerholm, Carsten Peterson, and Bo Soderberg ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES

                     USA       Canada*    Other Countries
  Student/Retired    $50       $53.50     $78
  Individual         $82       $87.74     $110
  Institution        $285      $304.95    $318

  * includes 7% GST

(Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu -----

From Tony.Plate at MCS.VUW.AC.NZ Wed Jul 29 02:29:56 1998 From: Tony.Plate at MCS.VUW.AC.NZ (Tony Plate) Date: Wed, 29 Jul 1998 18:29:56 +1200 Subject: two papers on interpreting NNs and Gaussian process models Message-ID: <199807290629.SAA26113@rialto.mcs.vuw.ac.nz> The following two papers on interpreting neural networks and Gaussian process models are available for download from http://www.mcs.vuw.ac.nz/~tap/publications.html ---------- Accuracy versus interpretability in flexible modeling: implementing a tradeoff using Gaussian process models Tony A. Plate School of Mathematical and Computing Sciences Victoria University of Wellington, Wellington, New Zealand Tony.Plate at vuw.ac.nz To appear in Behaviourmetrika, special issue on Analysis of knowledge representations in neural network models. Abstract: One of the widely acknowledged drawbacks of flexible statistical models is that the fitted models are often extremely difficult to interpret. However, if flexible models are constrained to be additive the fitted models are much easier to interpret, as each input can be considered independently. The problem with additive models is that they cannot provide an accurate model if the phenomenon being modeled is not additive. This paper shows that a tradeoff between accuracy and additivity can be implemented easily in Gaussian process models, which are a type of flexible model closely related to feedforward neural networks. One can fit a series of Gaussian process models that begins with the completely flexible model and is constrained to be progressively more additive, and thus progressively more interpretable. Observations of how the degree of non-additivity and the test error change as the models become more additive give insight into the importance of interactions in a particular model. Fitted models in the series can also be interpreted graphically with a technique for visualizing the effects of inputs in non-additive models that was adapted from plots for generalized additive models. This visualization technique shows the overall effects of different inputs and also shows which inputs are involved in interactions and how strong those interactions are.
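One way to picture the accuracy/additivity tradeoff described in this abstract is a Gaussian process whose covariance mixes an additive kernel (a sum of one-dimensional RBF kernels, one per input) with a full product kernel that allows interactions. The sketch below is an assumption-laden illustration, not the paper's implementation: the mixing weight lam, the RBF kernel choice and the function names are all assumptions; lam=0 gives a purely additive model, lam=1 a fully flexible one.

    import numpy as np

    def rbf_1d(a, b, length=1.0):
        # One-dimensional squared-exponential kernel between two vectors of inputs.
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    def mixed_kernel(X1, X2, lam=0.5, length=1.0):
        per_dim = [rbf_1d(X1[:, j], X2[:, j], length) for j in range(X1.shape[1])]
        additive = sum(per_dim)                # additive part: input effects add up
        product = np.prod(per_dim, axis=0)     # product part: permits interactions
        return (1.0 - lam) * additive + lam * product

    def gp_predict(X_train, y_train, X_test, lam=0.5, noise=0.1):
        # Posterior mean of a GP regression model with the mixed covariance.
        K = mixed_kernel(X_train, X_train, lam) + noise ** 2 * np.eye(len(y_train))
        Ks = mixed_kernel(X_test, X_train, lam)
        return Ks @ np.linalg.solve(K, y_train)

Fitting a series of such models while decreasing lam, and watching how the test error changes, is one simple way to gauge how much the interactions matter.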
---------- Visualizing the function computed by a feedforward neural network Tony Plate (Victoria University of Wellington) Joel Bert (University of British Columbia) John Grace (University of British Columbia) Pierre Band (Health Canada) Computer Science Technical Report CS-TR-98-5 Victoria University of Wellington, Wellington, New Zealand Abstract: A method for visualizing the function computed by a feedforward neural network is presented. It is most suitable for models with continuous inputs and a small number of outputs, where the output function is reasonably smooth, as in regression or probabilistic classification tasks. The visualization makes readily apparent the effects of each input and the way in which the functions deviates from a linear function. The visualization can also assist in identifying interactions in the fitted model. The method uses only the input-output relationship and thus can be applied to any predictive statistical model, including bagged and committee models, which are otherwise difficult to interpret. The visualization method is demonstrated on a neural-network model of how the risk of lung cancer is affected by smoking and drinking. -- Tony Plate, Computer Science Voice: +64-4-495-5233 ext 8578 School of Mathematical and Computing Sciences Fax: +64-4-495-5232 Victoria University, PO Box 600, Wellington, New Zealand tap at mcs.vuw.ac.nz http://www.mcs.vuw.ac.nz/~tap From ed at itctx.com Sun Jul 26 06:37:27 1998 From: ed at itctx.com (ed@itctx.com) Date: Sun, 26 Jul 1998 06:37:27 -0400 Subject: Job: US-FL-Orlando, Data modeler/developer Message-ID: <104B43AAE1D2D111B27600A024CF0E2811438A@KEG> Intelligent Technologies Corporation (ITC) is a market leader in intelligent fraud detection software for the healthcare, insurance, and financial industries and has several openings as a result of its recent growth. DATA MODELER/DEVELOPER We are looking for a candidate with applied mathematics/ engineering/CS degree with several years of experience in data modeling and software development in a UNIX C/C++ environment. The ideal candidate will have a broad background covering the following skill sets. - Applications of neural networks, genetic algorithms, fuzzy logic and statistics. - C, C++, Java programming. - Experience in UNIX - Good communication skills. We offer excellent compensation plus a complete benefits package including an employee stock options purchase plan. For consideration, send your resume including salary history to: ITC, Job Code N, 455 Douglas Ave., Suite 2155-23, Altamonte Springs, FL 32714. Or fax to (407) 862-2490. Or e-mail to: ed at itcTX.com. Check out our Web site at: http://www.itcTX.com Ed DeRouin, Ph. D. Chief Scientist Intelligent Technologies Corp. From harnad at coglit.soton.ac.uk Sun Jul 26 13:38:43 1998 From: harnad at coglit.soton.ac.uk (Stevan Harnad) Date: Sun, 26 Jul 1998 18:38:43 +0100 (BST) Subject: CogPrints: Archive your Preprints and Reprints Message-ID: To all biobehavioral and cognitive scientists: You are invited to archive your preprints and reprints in the CogPrints electronic archive. You will find that it is extremely easy. The Archive covers all the Cognitive Sciences: Psychology, Neuroscience, Biology, Computer Science, Linguistics and Philosophy CogPrints is completely free for everyone, both authors and readers, thanks to a subsidy from the Electronic Libraries Programme of the Joint Information Systems of the United Kingdom and the collaboration of the NSF-supported Physics Eprint Archive at Los Alamos. 
CogPrints has just been opened for public automatic archiving. This means authors can now deposit their own papers automatically. The first wave of papers had been invited and hand-archived by CogPrints in order to set a model of the form and content of CogPrints. To see the high level of contributors and contributions: http://cogprints.soton.ac.uk/ To archive your own papers automatically: http://cogprints.soton.ac.uk/author.html All authors are encouraged to archive their papers on their home servers as well. For further information: admin at coglit.soton.ac.uk From moreno at eel.upc.es Wed Jul 29 04:38:13 1998 From: moreno at eel.upc.es (Juan Manuel Moreno Arostegui) Date: Wed, 29 Jul 1998 10:38:13 +0200 Subject: CFP - Microneuro'99 Message-ID: <002401bdbacc$3ab0c540$98315393@muddy.upc.es> Apologies if you receive multiple copies of this message ------------------------------------------------------------------------ MicroNeuro'99 7th International Conference on Microelectronics for Neural, Fuzzy and Bio-inspired Systems Granada, Spain, April 7-9, 1999 MicroNeuro99 is the seventh of a series of international conferences previously held in Dortmund, Mnchen, Edinburgh, Torino, Lausanne and Dresden, in each of which around a hundred specialists have participated. The conference is dedicated to hardware implementations of artificial neural networks, fuzzy and neuro-fuzzy systems, and related bio-inspired computing architectures. The program will focus upon all aspects related to the hardware implementation of these systems, with special emphasis on specific VLSI analog, digital and pulse-coded circuits. DEADLINES Submissions of papers and demo proposals November 5, 1998 Notification of acceptance December 20, 1998 Conference April 7-9, 1999 INFORMATION: General Chair A. Prieto, Univ. Granada, E aprieto at ugr.es Japan Chair T.Shibata, Univ. Tokyo, J shibata at ee.t.u-tokyo.ac.jp USA Chair Andreas G. Andreou, Johns Hopkins Univ. Baltimore, USA andreou at jhu.edu Fax +34 958 248993 mneuro99 at atc.ugr.es http://atc.ugr.es/mneuro99 From pelillo at dsi.unive.it Thu Jul 30 20:59:23 1998 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Fri, 31 Jul 1998 02:59:23 +0200 (MET DST) Subject: Call for Papers: EMMCVPR'99 Message-ID: <199807310059.CAA20424@oink.dsi.unive.it> A non-text attachment was scrubbed... Name: not available Type: text Size: 4553 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f4f2d9fb/attachment.ksh From suem at soc.plym.ac.uk Fri Jul 31 12:07:45 1998 From: suem at soc.plym.ac.uk (Sue McCabe) Date: Fri, 31 Jul 1998 17:07:45 +0100 Subject: Research studentship advert Message-ID: <1.5.4.32.19980731160745.00738878@soc.plym.ac.uk> Centre for Neural and Adaptive Systems School of Computing University of Plymouth, UK Research Studentship : The development of an integrated model of auditory scene analysis The Centre for Neural and Adaptive Systems investigates computational neural models of major brain processes underlying intelligent behaviour and uses these models to provide inspiration for the development of novel neural computing architectures and systems for intelligent sensory information processing, condition monitoring, autonomous control and robotics. A Research Student is sought by the Centre to participate in its on-going research programme in the area of auditory scene analysis. 
There are now a number of neural network models which address parts of the general problem, such as auditory streaming, pitch perception and sound localisation, and it is intended to develop a comprehensive model of auditory scene analysis which integrates all these aspects in a way which is consistent with current psychophysical and neurophysiological data. Candidates for the studentship will be expected to hold, or be about to receive, a good Honours or Masters degree in a relevant discipline and should demonstrate a strong interest in the neural network modelling of sensory processing and perception. Candidates should also have or expect to acquire a good working knowledge of fundamental neuroscience principles and good computational modelling skills. The studentship provides tuition fees and maintenance support at a level consistent with UK research council studentships. Additional income may be available through part time teaching. Further details about the Studentship can be obtained by telephoning Dr Sue McCabe on (+44) (0) 1752 232610, or by e-mail to sue at soc.plym.ac.uk. Dr Sue McCabe Centre for Neural and Adaptive Systems School of Computing University of Plymouth Plymouth PL4 8AA England tel: +44 17 52 23 26 10 fax: +44 17 52 23 25 40 e-mail: suem at soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/research/neural/index.html

From POCASIP at aol.com Fri Jul 31 20:19:33 1998 From: POCASIP at aol.com (POCASIP@aol.com) Date: Fri, 31 Jul 1998 20:19:33 EDT Subject: Looking for a Signal and Image Processing; CMOS Design; Adaptive Control expert Message-ID: <3ab6ec5f.35c25f16@aol.com> The Advanced Signal and Image Processing Laboratory of POC-IOS (POC stands for Physical Optics Corporation, and IOS stands for Intelligent Optical Systems) is looking for a candidate who has expertise in at least two of the following areas: 1. Signal and Image Processing, 2. CMOS ASIC Design, and 3. Nonlinear Adaptive Control + Knowledge of board-level electronic design, neural networks, and programming fluency in C and C++ are important assets. + Experience in solving real-world problems in a wide variety of applications is a definite plus. The Advanced Signal and Image Processing Laboratory has activities in, among others, the areas of adaptive control, hazardous waste analysis, skin injury diagnosis, silicon wafer inspection, food quality control, and target recognition, using neural computation implementations in software and hardware (both CMOS electronics and optics). POC-IOS is a rapidly growing dynamic high-tech company with a focus on optical sensors and information processing. It employs about 25 people, more than half of whom are scientists from a variety of backgrounds. We are located in Torrance, California, which is a pleasant seaside town with a high standard of living and year-round perfect weather. Please send your application including curriculum vitae, and three references, in ASCII only, by e-mail to POCASIP at aol.com Emile Fiesler
From geoff at giccs.georgetown.edu Mon Jul 6 12:51:47 1998 From: geoff at giccs.georgetown.edu (Geoff Goodhill) Date: Mon, 6 Jul 1998 12:51:47 -0400 Subject: Postdoc position available Message-ID: <199807061651.MAA09660@fathead.giccs.georgetown.edu> POSTDOCTORAL POSITION - COMPUTATIONAL NEUROSCIENCE Georgetown Institute for Cognitive and Computational Sciences Georgetown University Washington DC A postdoctoral position is available from August 1st 1998 in the lab of Dr Geoff Goodhill for an NSF-funded project investigating models of cortical map formation. Experience in computational neuroscience and knowledge of C/C++ is required. The position is for one year in the first instance. The aim of the project is to understand how genetic and activity-dependent factors combine to determine map structure in primary visual cortex. We have so far written a simulator in C++ / OpenGL running on an SGI Octane workstation. We now plan to exploit this simulator by using it to help reveal the key biological parameters controlling map structure, and to guide appropriate mathematical analyses. More information about the lab can be found at http://www.giccs.georgetown.edu/labs/cns Applicants should send a CV, a letter of interest, and names and addresses (including email) of at least two referees to: Dr Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences Georgetown University Medical Center 3970 Reservoir Road NW Washington DC 20007 Tel: (202) 687 6889 Fax: (202) 687 0617 Email: geoff at giccs.georgetown.edu

From tgd at CS.ORST.EDU Mon Jul 6 16:57:16 1998 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Mon, 6 Jul 1998 13:57:16 -0700 (PDT) Subject: Hierarchical Reinforcement Learning with MAXQ Message-ID: <199807062057.NAA28564@edison.CS.ORST.EDU> Two papers on hierarchical reinforcement learning are available. One is a conference paper that will appear this summer at the International Conference on Machine Learning. The other is a journal-length version with full technical details and discussion. ====================================================================== Dietterich, T. G. (to appear). The MAXQ method for hierarchical reinforcement learning. 1998 International Conference on Machine Learning. URL: ftp://ftp.cs.orst.edu/pub/tgd/papers/ml98-maxq.ps.gz Abstract: This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived.
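As background for the hierarchical Q-learning algorithm the abstract goes on to describe, here is a minimal sketch of the ordinary "flat" tabular Q-learning baseline it is compared against (illustration only, not the MAXQ algorithm; the environment interface names reset, step and actions are assumptions made for the sketch):

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
        # env.reset() -> state, env.step(a) -> (next_state, reward, done),
        # env.actions is a list of actions (assumed interface).
        Q = defaultdict(float)
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                if random.random() < epsilon:
                    a = random.choice(env.actions)          # explore
                else:
                    a = max(env.actions, key=lambda a: Q[(s, a)])  # exploit
                s2, r, done = env.step(a)
                target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in env.actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])   # temporal-difference update
                s = s2
        return Q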
The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary ``flat'' Q learning. Finally, the paper discusses some interesting issues that arise in hierarchical reinforcement learning including the hierarchical credit assignment problem and non-hierarchical execution of the MAXQ hierarchy. Note: This version has some errors corrected compared to the version that appears in the proceedings. In particular, Figure 1 is fixed. ====================================================================== Dietterich, T. G. (Submitted). Hierarchical reinforcement learning with the MAXQ value function decomposition URL: ftp://ftp.cs.orst.edu/pub/tgd/papers/mlj-maxq.ps.gz Abstract: This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived. The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary ``flat'' Q learning. These results and experiments are extended to support state abstraction and non-hierarchical execution. The paper concludes with a discussion of design tradeoffs in hierarchical reinforcement learning. ====================================================================== From jer at mannanetwork.com Tue Jul 7 10:50:22 1998 From: jer at mannanetwork.com (Joel Ratsaby) Date: Tue, 07 Jul 1998 17:50:22 +0300 Subject: Job Post Message-ID: <35A235AE.D4C708DE@mannanetwork.com> Manna Network Technologies a leader in the relationship management field is seeking experienced AI developers for a exciting project in real time machine learning. The work involves the development and implementation of advanced machine learning algorithms. Ideal candidates will have a Masters in Computer Science or Electrical Engineering with a background and experience in applied machine learning, statistical pattern recognition. Knowledge and experience in Bayesian networks and a good background in object oriented programming and Java are a plus. For more information please send resume to manpower at mannanetwork.com or have a look at our web site www.mannanetwork.com Joel Ratsaby From krose at wins.uva.nl Tue Jul 7 03:43:21 1998 From: krose at wins.uva.nl (Ben Krose) Date: Tue, 07 Jul 1998 09:43:21 +0200 Subject: Ph.D. positions open Message-ID: <35A1D199.7450AD9A@wins.uva.nl> I would like to announce the following open positions: 2 Ph. D. positions (AIO) available. The "Intelligent Autonomous Systems (IAS)" group of the Computer Science Department, University of Amsterdam, is looking for two enthousiastic, motivated students for two AIO positions. Project 1: "Learning group behaviour in multiple robot systems", with as case study "Robot soccer". Project 2: "Classification of radar profiles with neural networks" Information: about the group: http://www.wins.uva.nl/research/ias/ about the projects: http://www.wins.uva.nl/research/learn/ additional: groen at wins.uva.nl. 
krose at wins.uva.nl, tel: 020 5257463 For project 1 we look for a student in the field of Computer Science, specialised in Artificial Intelligence. For project 2 we look for a student in the field of Computer Science, Physics or Electrical Engineering. An AIO position is a paid four-year appointment as a research trainee with the explicit aim that a Ph.D. thesis be produced in those four years. The monthly salary will start at dfl 2151 gross, increasing to a maximum of dfl 3841 gross in the fourth year. The appointment is for a period of maximally four years and should result in a Ph.D. A training and supervision plan will be drawn up, stipulating the aims and content of the proposed research project and the teaching obligations involved. Applications should include a CV, a statement of interest (1-2 pages), a recent relevant paper (if available), and a list of three or more references. Applications should be addressed before August 1, 1998 to Ms. Elvira Smeets Dept. of Computer Science University of Amsterdam Kruislaan 403, 1098 SJ Amsterdam The Netherlands --- Ben Kröse Department of Computer Science University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, NL. tel: +31 20 525 7520/7463/(7490 fax) http://carol.wins.uva.nl/~krose/

From lorraine.dingley at mrc-apu.cam.ac.uk Tue Jul 7 11:35:28 1998 From: lorraine.dingley at mrc-apu.cam.ac.uk (Lorraine Dingley) Date: Tue, 7 Jul 1998 15:35:28 +0000 Subject: Post-doc position Message-ID: POST-DOCTORAL POSITION IN COMPUTATIONAL MODELLING OF COGNITIVE-AFFECTIVE PROCESSES. MRC COGNITION AND BRAIN SCIENCES UNIT, CAMBRIDGE, UK Applications are invited for a post-doctoral scientist to join the Cognition & Emotion group. This new position has been created to facilitate the development of computationally explicit models of affective processes, or their neural bases, and related research. Applicants should have experience in computational modelling and have a strong interest in affect and/or affective disorders. The successful applicant would be expected to develop collaborations with other scientists in the group, and would also have the opportunity to develop their own research in a related area. Appointment would be for an initial period of 3 years, with the possibility of transfer to a career track position. Starting salary would be in the range £15,000 to £24,000 per annum, supported by a performance-related pay scheme and MRC pension scheme. Further information can be obtained from Dr Andrew Mathews (phone 01223 355294) or the CBU website http://www.mrc-apu.cam.ac.uk/. Applications including 2 copies of a full CV, a brief description of research interests, and names and addresses of two professional referees should be sent by 11th August 1998 quoting reference CBU/CM to: Johanna Webb Personnel MRC Centre Hills Road Cambridge CB2 2QH MEDICAL RESEARCH COUNCIL operates a non-smoking policy and is an Equal Opportunity Employer

From stefan.wermter at sunderland.ac.uk Wed Jul 8 08:10:45 1998 From: stefan.wermter at sunderland.ac.uk (Stefan Wermter) Date: Wed, 08 Jul 1998 13:10:45 +0100 Subject: job/phd topics neural networks, language processing, hybrid systems Message-ID: <35A361C5.94A42950@sunderland.ac.uk> I would appreciate it very much if you could forward this to relevant potentially interested students and researchers in your research group. Besides the researcher A position there are also additional possible PhD topics available.
For more details see http://osiris.sunderland.ac.uk/~cs0stw/Projects/suggested_topics_titles or http://osiris.sunderland.ac.uk/~cs0stw/ in general. ------------------------------------------ Researcher A in Neural and Intelligent Systems (reference number CIRG28) Applications are invited for a three year research assistant position in the School of Computing and Information Systems investigating the development of hybrid neural/symbolic techniques for intelligent processing. This is an exciting new project which aims at developing new environments for integrating neural networks and symbolic processing. You will play a key role in the development of such hybrid subsymbolic/symbolic environments. It is intended to apply the developed hybrid environments in areas such as natural language processing, intelligent information extraction, or the integration of speech/language in multimedia applications. You should have a degree in a computing discipline and will be able to register for a higher degree (Mphil, PhD). A demonstrated interest in artificial neural networks, software engineering skills and programming experience are essential (preferably including a subset of C, C++, CommonLisp, Java, GUI). Experience and interest in neural network software and simulators would be an advantage (e.g. Planet, SNNS, Tlearn, Matlab, etc). Salary is according to the researcher A scale (currently up to 13,871 pounds, under revision, this is about 42000 DM or $23000). Application forms and further particulars are available from the Personell department under +44 191 515 and extensions 2055, 2429, 2054, 2046, or 2425 or E-Mail employee.recruitment at sunderland.ac.uk quoting the reference number CIRG28. For informal enquiries please contact Professor Stefan Wermter, e-mail: Stefan.Wermter at sunderland.ac.uk. Closing date: 10 July 1998. The successful candidate is expected to start the job as soon as possible. ******************************************** Professor Stefan Wermter Research Chair in Intelligent Systems University of Sunderland Dept. of Computing & Information Systems St Peters Way Sunderland SR6 0DD United Kingdom phone: +44 191 515 3279 fax: +44 191 515 2781 email: stefan.wermter at sunderland.ac.uk http://osiris.sunderland.ac.uk/~cs0stw/ ******************************************** From dwang at cis.ohio-state.edu Wed Jul 8 17:28:20 1998 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Wed, 8 Jul 1998 17:28:20 -0400 (EDT) Subject: Tech report on speech segregation Message-ID: <199807082128.RAA10262@shirt.cis.ohio-state.edu> The following technical report is available via FTP/WWW: ------------------------------------------------------------------ "Separation of Speech from Interfering Sounds Based on Oscillatory Correlation" Technical Report #24, June 1998 The Ohio State University Center for Cognitive Science ------------------------------------------------------------------ DeLiang L. Wang, The Ohio State University Guy J. Brown, University of Sheffield A multi-stage neural model is proposed for an auditory scene analysis task - segregating speech from interfering sound sources. The core of the model is a two-layer oscillator network that performs stream segregation on the basis of oscillatory correlation. In the oscillatory correlation framework, a stream is represented by a population of synchronized relaxation oscillators, each of which corresponds to an auditory feature, and different streams are represented by desynchronized oscillator populations. 
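To make the oscillatory correlation idea concrete, here is a toy sketch using coupled phase oscillators (an illustrative assumption, not the two-layer relaxation-oscillator network of the paper): units carrying the same stream label are coupled positively and pull into synchrony, while units with different labels are coupled negatively and drift apart in phase.

    import numpy as np

    def simulate(groups, steps=2000, dt=0.01, k=2.0):
        # groups[i] is the stream label of oscillator i; all share one frequency.
        n = len(groups)
        theta = 2 * np.pi * np.random.rand(n)
        omega = np.ones(n)
        W = np.array([[k if groups[i] == groups[j] else -k for j in range(n)]
                      for i in range(n)])
        np.fill_diagonal(W, 0.0)
        for _ in range(steps):
            # Kuramoto-style coupling: positive weights synchronize, negative desynchronize.
            coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
            theta += dt * (omega + coupling)
        return np.mod(theta, 2 * np.pi)

    phases = simulate([0, 0, 0, 1, 1, 1])
    # Oscillators 0-2 typically end up tightly clustered in phase, with 3-5
    # forming a second, separate cluster: two "streams" in the oscillatory sense.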
Lateral connections between oscillators encode harmonicity, and proximity in frequency and time. Prior to the oscillator network are a model of the auditory periphery and a stage in which mid-level auditory representations are formed. The model has been systematically evaluated using a corpus of voiced speech mixed with interfering sounds, and produces improvements in terms of signal-to-noise ratio for every mixture. Furthermore, the pattern of improvements seems consistent with human performance. The performance of our model is compared with other studies on computational auditory scene analysis. A number of issues including biological plausibility and real-time implementation are also discussed. (28 pages, 384 KB compressed) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu Directory: /pub/leon/Brown Filename: ccs98.ps.gz for WWW: http://www.cis.ohio-state.edu/~dwang/reports.html (Some pages may not show up in postscript display, but should print OK) Send comments to DeLiang Wang (dwang at cis.ohio-state.edu) From janet at dcs.rhbnc.ac.uk Thu Jul 9 08:31:12 1998 From: janet at dcs.rhbnc.ac.uk (Janet Hales) Date: Thu, 09 Jul 98 13:31:12 +0100 Subject: Special event:COMPUTATIONAL INTELLIGENCE DAY Message-ID: <199807091231.NAA08652@platon.cs.rhbnc.ac.uk> Apologies for any multiple copies of this message you may have received via other lists. Wednesday 9 September 1998 COMPUTATIONAL INTELLIGENCE: THE IMPORTANCE OF BEING LEARNABLE **************************************************************** A one day research seminar Computer Learning Research Centre, Department of Computer Science Royal Holloway, University of London, 30th anniversary year (1997-98) The fourth of a series of one-day colloquia organised by the research groups in the Department. The programme includes invited lectures on key research topics in this rapidly developing field with the main themes of Machine Learning and Inductive Inference. Speakers include Jorma Rissanen, Ray Solomonoff, Vladimir Vapnik, Chris Wallace and Alexei Chervonenkis. The occasion will also mark the foundation of the Computer Learning Research Centre at Royal Holloway, University of London. Speakers: Professor Jorma Rissanen, Almaden Research Center, IBM Research Division, San Jose, CA, U.S.A: STOCHASTIC COMPLEXITY (provisional title) Professor Ray Solomonoff, Oxbridge Research Inc, Cambridge, Mass, U.S.A.: HOW TO TEACH A MACHINE Professor Vladimir Vapnik, AT&T Labs - Research, U.S.A. 
and Professor of Computer Science and Statistics, Royal Holloway, University of London: LEARNING THEORY AND PROBLEMS OF STATISTICS Professor Chris Wallace, Univ of Monash, Victoria, Australia: and Visiting Professor in the Dept of Computer Science, Royal Holloway, University of London: BREVITY IS THE SOUL OF SCIENCE Professor Alexei Chervonenkis, Institute of Control Sciences, Moscow: THE HISTORY OF THE SUPPORT VECTOR METHOD Provisional Programme: 10.30 am Coffee and Welcome - Alex Gammerman, Head of Department Morning: Theme - MACHINE LEARNING 11.00 am Vladimir Vapnik 11.50 am Alexei Chervonenkis 12.40 pm Lunch Afternoon: Theme - INDUCTIVE INFERENCE 2.10 pm Jorma Rissanen 3.00 pm Chris Wallace 3.50 pm Tea 4.40 pm Ray Solomonoff 5.30 pm Close All welcome - advance booking essential - for further information and to book a place please contact Janet Hales: Janet Hales, Departmental Events Co-ordinator Dept of Computer Science Royal Holloway University of London Tel 01784 443432 Fax 01784 439786 Email: J.Hales at dcs.rhbnc.ac.uk Location maps, info etc.: http://www.cs.rhbnc.ac.uk/location/ Further information is also available from: http://www.dcs.rhbnc.ac.uk/events/compintday.shtml From honavar at cs.iastate.edu Thu Jul 9 14:11:52 1998 From: honavar at cs.iastate.edu (Vasant Honavar) Date: Thu, 9 Jul 1998 13:11:52 -0500 (CDT) Subject: Call for Papers: Special Issue of the Machine Learning Journal on Automata Induction, Grammar Inference, and Language Acquisition In-Reply-To: <199804060934.TAA04155@reid.anu.edu.au> from "Peter Bartlett" at Apr 6, 98 07:34:04 pm Message-ID: <199807091811.NAA28149@ren.cs.iastate.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 5960 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/65c41751/attachment-0001.ksh From cpoon at hstbme.mit.edu Thu Jul 9 15:28:26 1998 From: cpoon at hstbme.mit.edu (Chi-Sang Poon) Date: Thu, 9 Jul 1998 15:28:26 -0400 (EDT) Subject: Postdoc position - visual recognition Message-ID: POSTDOCTORAL POSITION IN VISUAL RECOGNITION Harvard-MIT Division of Health Sciences and Technology Massachusetts Institute of Technology A postdoctoral position is available immediately on a multidisciplinary project to develop a silicon vision chip for pattern recognition using analog very-large-scale integrated circuits. The goal of this sub-project is to develop a neural network architecture that is optimal for analog VLSI implementation. Research approach involves computer modeling of the human visual system and translation of these models into analog neural network architectures in cooperation with other team members in VLSI design. (See: Poon and Shah, Hebbian learning in parallel and modular memories, Biol. Cybern. 78:79-86, 1998). Experience in vision science and neural networks required. Position is initially for one year and renewable depending on progress and availability of funds. Applications are evaluated immediately upon receipt. Send CV, statement of professional interests and goals, and names of three references to: Chi-Sang Poon, Ph.D. Harvard-MIT Division of Health Sciences and Technology Rm 20A-126 M.I.T. 
Cambridge, MA 02139 Tel: 617-258-5405 Fax: 617-258-7906 email: cpoon at mit.edu From pmck at limsi.fr Fri Jul 10 09:19:32 1998 From: pmck at limsi.fr (Paul Mc-Kevitt) Date: Fri, 10 Jul 98 15:19:32 +0200 Subject: MIND-III: SPATIAL COGNITION, DUBLIN, IRELAND, AUG 17-19 Message-ID: <9807101319.AA23879@m72.limsi.fr> LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL MIND III: Annual Conference of the Cognitive Science Society of Ireland Theme: Spatial Cognition Dublin City University, Dublin, Ireland August 17-19, 1998 You are invited to participate in the Annual Conference of the CSSI, on the Theme of Spatial Cognition, at Dublin City University from August 17-19, 1998. This conference will bring together researchers from different Cognitive Science disciplines (Psychology, Computer Science, Linguistics, and Cognitive Geography) who are studying different aspects of spatial cognition. The conference will provide a forum for researchers to share insights about different aspects of spatial cognition and from the perspective of different disciplines. The academic programme will begin at 9:00 a.m. on August 17th and end on 19th. The social programme will include a barbecue and ceili (traditional Irish Dance) on Tuesday 18th and a tour and concert on Wednesday after the end of the academic programme. For information on registration and accommodation, please visit the web page at: http://www.psych.ucsb.edu/~hegarty/cssi/ The deadline for early registration is July 15th (after that the price increases significantly). For questions about the programme, contact Mary Hegarty: hegarty at psych.ucsb.edu For questions about registration and local arrangements, contact Sean O Nuallain: sonualla at compapp.dcu.ie PROGRAMME COMMITTEE: Ruth Byrne, Trinity College Dublin Jerome Feldman, University of California, Berkeley Mary Hegarty, University of California, Santa Barbara (Program Chair) Christopher Habel, University of Hamburg George Lakoff, University of California, Berkeley Robert H. Logie, University of Aberdeen Jack Loomis, University of California, Santa Barbara Paul Mc Kevitt, Aalborg University and University of Sheffield Daniel R. Montello, University of California, Santa Barbara N. Hari Naryanan, Auburn University and Georgia Institute of Technology Patrick Olivier, University of Wales, Aberystwyth Sean O Nuallain, Dublin City University (Co-Chair) Terry Regier, University of Chicago Keith Stenning, Edinburgh University Michael Spivey, Cornell University Arnold Smith, National Research Council, Canada Barbara Tversky, Stanford University PROGRAMME KEYNOTE SPEAKERS: Michel Denis, Groupe Cognition Humaine, LIMSI-CNRS, Universite de Paris-Sud Andrew Frank, Department of Geoinformation, Technical University Wien TALK PRESENTATIONS: ENVIRONMENTAL SPATIAL COGNITION G. Allen, University of South Carolina Men and women, maps and minds: Cognitive bases of sex-related differences in reading and interpreting maps C. Christou & H. Bulthoff, Max-Planck Institute for Biological Cybernetics, Tubingen Using virtual environments to study spatial encoding D. Jacobson, R. Kitchin, T. Garling, R. Golledge & M. Blades, University of California, Santa Barbara, Queens University of Belfast, Gotenborg University Learning a complex urban route without sight: Comparing naturalistic versus laboratory measures P. Peruch, F. Gaunet, C. Thinus-Blanc, M-D. Giroudo, CNRS, Marseille & CNRS-College de France, Paris Real and imagined perspective changes in visual versus locomotor navigation M. J. 
Sholl, Boston College The accessibility of metric relations in self-to-object and object-to-object systems LANGUAGE AND SPACE T. Baguley & S. J. Payne, Loughborough University and Cardiff University of Wales Given-new versus new-given? An analysis of reading times for spatial descriptions K. C. Coventry & M. Prat-Sala, University of Plymouth The interplay between geometry and function in the comprehension of spatial propositions J. Gurney & E. Kipple, Army Research Laboratory, Adelphi, MD Composing conceptual structure for spoken natural language in a virtual reality environment S. Huang, National Taiwan University Spatial Representation in a language without prepositions S. Taub, Gallaudet University Iconic spatial language in ASL: Concrete and metaphorical applications C. Vorwerg, University of Bielefeld Production and understanding of direction terms as a categorization process COMPUTATION AND SPATIAL COGNITION M. Eisenberg & A. Eisenberg, University of Colorado Designing real-time software advisors for 3-d spatial operations J. Gasos & A. Saffiotti, IRIDIA, Universite Libre de Brruxelles Fuzzy sets for the representation of uncertain spatial knowledge in autonomous robots R. K. Lindsay, University of Michigan Discovering Diagrammatic Demonstrations P. McKevitt, Aalborg University and University of Sheffield CHAMELEON meets spatial cognition D. R. Montello, M.F.Goodchild, P. Fohl & J. Gottsegen, University of California, Santa Barbara Implementing fuzzy spatial queries: Problem statement and behavioral science methods S. O Nuallain & J. Kelleher, Dublin City University Spoken Image meets VRML and JAVA SPATIAL REASONING AND PROBLEM SOLVING M. Gattis, Max Planck Institute for Psychological Research, Munich Mapping relational structure in visual reasoning J. N. McGregor, T. C. Ormerod & E. P. Chronicle, University of Victoria and Lancaster University Spatial and conceptual factors in human performance on the traveling salesperson problem P.D.Pearson, R. H.Logie & K.J. Gilhooly, University of Aberdeen Verbal representations and spatial manipulation during mental synthesis L. Rozenblit, M. Spivey & J. Wojslawowicz Mechanical reasoning about gear-and-belt systems: Do eye-movements predict performance? C. Sophian & M. Crosby, University of Hawaii at Manoa Ratios that even young children understand: The case of spatial proportions THEORETICAL PERSPECTIVES: R. H. Logie, Department of Aberdeen Constraints on visuo-spatial working memory N. H. Narayanan, Auburn University Exploring virtual information landscapes: Spatial cognition meets information visualization A. Smith, National Research Council, Canada Spatial cognition without spatial concepts C. Speed & D. G.Tobin, University of Plymouth Space under stress: Spatial understanding and new media technologies M. Tiressa, A. Caressa and G.Geminiani, Universita di Torino & Universita di Padova A theoretical framework for the study of spatial cognition POSTER PRESENTATIONS M. Betrancourt, A. Pellegrin & L. Tardif, Research Institut, INRIA Rhone-Alpes Using a spatial display to represent the temporal structure of multimedia documents M. Bollaert, LIMSI-CNRS, University de Paris-Sud A connectionist model of mental imagery K Borner & C Vorwerg, University of Bielefeld Applying VR technology to the study of spatial perception and cognition A. Caressa, A. Abrigliano & G. Geminiani, Universita de Padova & Universita di Torino. Describers and explorers: A method to investigate cognitive maps. E. P. Chronicle, T. C. Ormerod & J. McGregor. 
Lancaster University and University of Victoria When insight just won't come: The failure of visual cues in the nine-dot problem. R. Coates, C.J. Hamilton & T. Heffernan, University of Teeside and University of Northumbria at Newcastle In Search of the visual and spatial characteristics of visuo-spatial working memory G. Fernandez, LMSI-CNRS Individual differences in the processing of route directions R. Hornig, B. Claus & K. Eyferth, Technical University of Berlin In search for an overall organizing principle in spatial mental models: A question of inference M-C. Grobety, M. Morand & F. Schenk Cognitive Mapping across visually disconnected environments N. Gotts, University of Wales, Aberystwyth Describing the topology of spherical regions using the "RCC" formalism X. Guilarova, Moscow M.V. Lomosonov State University Polysemy of adjective "round" via Lakoff's radical category structuring J. S. Longstaff, Laban Center, London Cognitive Structures of Kinesthetic Space: Reevaluating Rudolph Labans Choreutics U. Schmid, S. Wiebrock & F. Wysotzki, Technical University of Berlin Modeling spatial inferences in text understanding LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL From eric at research.nj.nec.com Fri Jul 10 10:44:24 1998 From: eric at research.nj.nec.com (Eric B. Baum) Date: Fri, 10 Jul 1998 10:44:24 -0400 Subject: postdoc or visitor position and new papers available Message-ID: <199807101444.KAA09335@yin> I am seeking applicants for a Post-doc or Visiting Scientist position in my group at the NEC Research Institute in Princeton NJ. USA The position is for 1 year. To apply please send cv, cover letter and list of references to: Eric Baum, NEC Research Institute, 4 Independence Way, Princeton NJ 08540,USA or PREFERABLY by internet to eric at research.nj.nec.com Our research is focused on artificial economies of agents that reinforcement learn. Two new papers (and an extended abstract of one of these) are available on my web page http://www.neci.nj.nec.com:80/homepages/eric/eric.html The abstracts of these papers are appended below. ------------------------------------------------------------------- ------------------------------------------------------------------- "Manifesto for an Evolutionary Economics of Intelligence" ( PostScript file 58 pages) Draft last modified July, 1998. to appear in "Neural Networks and Machine Learning" Editor C. M. Bishop, Springer-Verlag (1998). We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must become rational in the sense that they learn to solve a piece of the problem. We describe how to use economic principles to assign credit and ensure that a collection of rational agents will collaborate on reinforcement learning. We also catalog several catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economies, and artificial intelligence programs, when an appropriate economic structure is not enforced. We conjecture that such evolutionary economies can allow learning in feasible time scales, starting from a collection of agents which have little knowledge and hence are irrational, by dividing and conquering complex problems. We support this with two implementations of learning models based on these principles. The first of these systems has empirically learned to solve Blocks World problems involving arbitrary numbers of blocks. 
The second has demonstrated meta-learning-- it learns better ways of creating new agents, modifying its own learning algorithm to escape from local optima trapping competing approaches. We describe how economic models can naturally address problems at the meta-level, meta-learning and meta-computation, that are necessary for high intelligence; discuss the evolutionary origins and nature of biological intelligence; and compare, analyze, contrast, and report experiments on competing techniques including hillclimbing, genetic algorithms, genetic programming, and temporal difference learning. ---------------------------------------------------------- "Toward Code Evolution by Artificial Economies" E. B. Baum and Igor Durdanovic ( PostScript file 53 pages) Draft last modified July 8, 1998. ( PostScript file 9 pages) Extended Abstract, July 8, 1998. We have begun exploring code evolution by artificial economies. We implemented a reinforcement learning machine called Hayek2 consisting of agents, written in a machine language inspired by Ray's Tierra, that interact economically. Hayek2 succeeds in evolving code to solve Blocks World problems, and has been more effective at this than our hillclimbing program and our genetic program. Our hillclimber and our genetic program, in turn, both performed creditably, learning solutions as strong as a simple search program that utilizes substantial hand-coded domain knowledge. We made some efforts to optimize our hillclimbing program and it has features that may be of independent interest. Our genetic program exhibited strong gains from crossover compared to a version utilizing other macro-mutations. The relative strength of crossover and macro-mutations is a hotly debated issue within the GP community, and ours is the first unequivocal demonstration of which we are aware where crossover is much better than ``headless chicken mutation''. We have demonstrated meta-learning: Hayek2 succeeds in discovering new meta-level agents that improve its performance, getting it out of plateaus in which it has otherwise gotten stuck. Hayek2's performance benefitted from improvements in the algorithm deciding how meta-agents gave capital to their offspring, from improvements in how creation of intellectual property is rewarded, from improvements in how meta-agents are paid by their offspring, from assessing a rent for computational time that was proportional to total demand, and from improvements in the language, including strong typing to bias the search for useful agents and expanding the representational power of the meta-instructions using pattern-based instructions. ------------------------------------- Eric Baum NEC Research Institute, 4 Independence Way, Princeton NJ 08540 PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric at research.nj.nec.com http://www.neci.nj.nec.com:80/homepages/eric/eric.html From marks at gizmo.usc.edu Sun Jul 12 19:57:44 1998 From: marks at gizmo.usc.edu (Mark Seidenberg) Date: Sun, 12 Jul 1998 15:57:44 -0800 Subject: Paper: grammaticality in connectionist nets Message-ID: A preprint of the following paper is available at http://siva.usc.edu/coglab/papers.html The Emergence of Grammaticality in Connectionist Networks Joseph Allen Mark S. Seidenberg University of Southern California To appear in B. MacWhinney (Ed.), The emergence of language. Mahwah, NJ: Erlbaum. In generative linguistics, knowing a language is equated with knowing a grammar.
It is sometimes suggested that connectionist networks can provide an alternative account of linguistic knowledge, one that does not incorporate standard notions of grammar. Any such alternative account owes an explanation for how people can distinguish grammatical sentences from ungrammatical ones. We describe some experiments with an attractor network that processed 5-8 word sentences, mapping from form to meaning (comprehension) and from meaning to form (production). The model was trained on 10 types of sentences used in a classic study of aphasic language by Linebarger, Schwartz & Saffran (1983). The network performed qualitatively differently on novel grammatical vs. ungrammatical sentences (e.g., "He came to my town" vs. "He came my town"). The model was also tested on sentences analogous to Chomsky's famous "Colorless green ideas sleep furiously," which patterned with other grammatical sentences. We also examined the model's performance under damage and found that, like Linebarger et al.'s patients, the model could still distinguish between several kinds of grammatical and ungrammatical sentences even though its capacity to comprehend and produce these utterances was impaired. Although the model's coverage of English grammar is limited, it illustrates how the distinction between grammatical and ungrammatical sentences can be realized in a network that does not incorporate a grammar. The model provides a basis for understanding how people make grammaticality judgments and explains the dissociation between the abilities to use language and make grammaticality judgments seen in some aphasic patients. ____________________________________ Mark S. Seidenberg Neuroscience Program University of Southern California 3614 Watt Way Los Angeles, CA 90089-2520 Phone: 213-740-9174 Fax: 213-740-5687 http://www-rcf.usc.edu/~seidenb http://siva.usc.edu/coglab ____________________________________ From kirsten at faceit.com Mon Jul 13 10:29:31 1998 From: kirsten at faceit.com (Kirsten Rudolph) Date: Mon, 13 Jul 1998 10:29:31 -0400 Subject: ad for several jobs - please post Message-ID: <01bdae6a$a6538620$7a00a8c0@ES300.faceit.com> JOB OPPORTUNITIES at VISIONICS CORPORATION Visionics Corporation announces the availability of several positions. Visionics is the leading commercial innovator in facial recognition technology and is committed to maintaining the superiority of its technology and to increasing the range of its applications. To that end, the company is continuing to invest heavily in its growing team. We offer a stimulating work environment and the opportunity for rapid career development, as well as competitive compensation, a health plan and a generous incentive program including stock options. Research in Computer Vision and Pattern Recognition. Candidates are expected to have experience in neural networks, image processing, numerical analysis, and C/C++ programming and to have demonstrated a track record of research on real world visual pattern recognition problems. Experience in biometrics is a definite plus. Digital Signal Processing. We are seeking an engineer or computer scientist in the field of embedded technology. The successful candidate will primarily be involved in porting C++ applications to suitable DSP platforms, using C and/or assembly languages. Candidates are expected to have extensive experience in the Windows operating system and familiarity with Visual C++, MFC and neural nets. C code optimization and algorithmic experience are a plus. 
The positions are at Visionics' headquarters in Exchange Place, New Jersey, which is just across the Hudson from the World Trade Center - 4 minutes away via PATH train. (See our view of Wall Street on our web page: http://www.faceit.com). Interested individuals should e-mail resumes to kirsten at faceit.com or fax them to (201)332-9313, to the attention of Kirsten Rudolph. Visionics is an equal opportunity employer. Minority and women candidates are encouraged to apply. From uwe.zimmer at gmd.de Mon Jul 13 13:32:13 1998 From: uwe.zimmer at gmd.de (Uwe R. Zimmer) Date: Mon, 13 Jul 1998 19:32:13 +0200 Subject: PostDoc position at GMD Message-ID: <35AA449F.69A4171F@gmd.de> -------------------------------------------------------- Post-Doctoral Research Position in Autonomous Underwater Robotics Research -------------------------------------------------------- A new postdoctoral position is open at the German National Research Center for Information Technology (GMD) in St. Augustin, Germany and will be filled at the earliest convenience. The Institute for Autonomous Intelligent Systems (SET), building on its background in autonomous cognitive robotics and system design technologies, is currently setting up autonomous underwater robotics research activities. Investigated questions are: - How to localize and move in six DoF without global correlation? - Interpretation / integration / abstraction / compression of complex sensor signals? - How to build models of previously unknown environments? - Direct sensor-actuator prediction - How to coordinate multiple loosely coupled robots? Real six degrees of freedom, real dynamic environments and real autonomy (which is required in most setups here) place these questions in a fruitful area. The overall goal is of course not 'just' implementing prototype systems, but to get a better understanding of autonomy and situatedness. Modeling, adaptation, clustering, prediction, communication, or - from the perspective of robotics - spatial and behavioral modeling, localization, navigation, and exploration are cross-topics addressed in most questions. Techniques employed in our institute up to now include dynamical systems, connectionist approaches, behavior-based techniques, rule-based systems, and systems theory. Experiments are based on physical robots (not yet underwater!). Thus the discussion of experimental setups, and particularly of the meaning of embodiment, has become a topic in itself. If the above challenges have raised your interest, please proceed to our expectations regarding the ideal candidate: - Ph.D. / doctoral degree in computer sciences, electrical engineering, physics, mathematics, biology, or related disciplines. - Experience in experimenting with autonomous systems - Theoretical foundations in mathematics, control, connectionism, dynamical systems, or systems theory - Interest in joining an international team of motivated researchers Furthermore it is expected that the candidate evolves/introduces her/his own perspective on the topic, and pushes the goals of the whole group at the same time. GMD is an equal opportunity employer. The position is for two years in the first instance. For any further information, and applications (including addresses of referees, two recent publications, and a letter of interest) please contact: Uwe R. Zimmer (address below) ___________________________________________ ____________________________| Dr. Uwe R. Zimmer - GMD ___| Schloss Birlinghoven | 53754 St.
Augustin, Germany | _______________________________________________________________. Voice: +49 2241 14 2373 - Fax: +49 2241 14 2384 | http://www.gmd.de/People/Uwe.Zimmer/ | From ah_chung_tsoi at uow.edu.au Tue Jul 14 00:02:54 1998 From: ah_chung_tsoi at uow.edu.au (Ah Chung Tsoi) Date: Tue, 14 Jul 1998 14:02:54 +1000 Subject: Workshop on data structures processing Message-ID: <35AAD86E.D5FFEDF7@uow.edu.au> -- Ah Chung Tsoi Phone: +61 2 42 21 38 43 Dean Fax: +61 2 42 21 48 43 Faculty of Informatics email: ah_chung_tsoi at uow.edu.au University of Wollongong Northfields Avenue Wollongong NSW 2522 AUSTRALIA -------------- next part -------------- Adaptive Processing of Structures A Joint Australian and Italian Workshop Friday, 7th August, 1998 Faculty of Informatics University of Wollongong Northfields Avenue Wollongong New South Wales Australia Many practical problems can be more conveniently modelled using data structures, e.g., lists, trees. For example, in image understanding, it is more convenient to model the relationship among the objects in the image by a tree structure. Similarly, for document understanding, it is more convenient to model the various segments of the document using data structures. As another example, in chemistry, the structure of a molecule is often easily expressed as a tree. In these applications, there are a number of practical problems which need to be solved, viz., if there are many data structures describing the problem, is it possible (1) to classify an unknown structure according to whether it is similar to any of the previous data structures in the known set, and (2) to predict what the data structure may be? For example, consider an autonomous robot navigation problem in which the robot is not endowed with a map of the environment, but instead relies on past traversals of the environment to identify landmarks. One question is: how does the robot know whether it has visited the same place before? Such a problem can be formulated as a classification of the tree structures describing the past experience in traversing the environment, i.e., as finding whether the new tree structure describing a particular experience has occurred before. There are a number of methods for processing this type of classification problem. For example, one may use syntactic pattern recognition to model the structure, and to classify an unknown structure accordingly. Recently, there has been a substantial amount of work on using neural networks to model data structures. For example, Jordan Pollack has applied a special case of a multilayer perceptron, commonly known as an auto-associator, or RAAM, to model data structures. In this, he used a multilayer perceptron with the same input and output variables, where the dimension of the hidden layer is smaller than that of the input. It is known that this MLP structure can form an internal representation of the input variables in the hidden layer. Pollack developed the RAAM model for encoding tree structures with labels on the leaves, but this model cannot handle labels on the internal nodes. This approach was extended to handle any labeled graphs by Alessandro Sperduti. Both RAAM and LRAAM can encode structures by using a fixed-size architecture; however, they do not classify them.
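The auto-associator idea can be sketched concretely (an illustrative toy only, not Pollack's RAAM, which recursively compresses the representations of a tree's children): an MLP is trained to reproduce its input through a smaller hidden layer, whose activations then serve as a compressed code.

# Minimal auto-associator sketch (the classic 8-3-8 "encoder" problem).
# Illustrative assumptions throughout: toy sizes, random one-hot patterns,
# plain batch backpropagation of the squared reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
X = np.eye(8)                                  # eight one-hot input patterns
n_in, n_hidden = 8, 3                          # hidden layer smaller than input

W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, n_in)); b2 = np.zeros(n_in)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(30000):
    h = sigmoid(X @ W1 + b1)                   # compressed internal representation
    y = sigmoid(h @ W2 + b2)                   # reconstruction of the input
    d_y = (y - X) * y * (1.0 - y)              # backprop through the output layer
    d_h = (d_y @ W2.T) * h * (1.0 - h)         # backprop into the hidden layer
    W2 -= lr * h.T @ d_y; b2 -= lr * d_y.sum(0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(0)

y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("mean squared reconstruction error:", float(np.mean((y - X) ** 2)))  # should be small
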
More recently, a number of groups have proposed the idea of using a recursive neuron to model data structures, e.g., Alessandro Sperduti and Antonina Starita (IEEE Transactions on Neural Networks, 1997), and Christoph Goller and Andreas Kuechler. This work allows us to tackle problems in classification and regression of structured objects, e.g., directed ordered acyclic graphs (DOAGs). In this workshop, we wish to introduce the audience to this exciting new development, as it promises to be one of the major breakthroughs in the representation of data structures, as well as in processing them. It can be applied wherever a data structure is a convenient method for representing the underlying problem. Apart from molecular chemistry, robot navigation, document processing and image processing, these include many other areas, e.g., internet user behaviour modelling and natural language processing. This workshop will be given by the originators and developers of this approach, viz., Alessandro Sperduti, Marco Gori and Paolo Frasconi. There is a group working on this problem in the Faculty of Informatics, the University of Wollongong, supported by an Australian Research Council large grant. The visits of Alessandro Sperduti and Marco Gori are supported by an out-of-cycle large grant. The intention is to introduce Australian researchers to these exciting new methods, and to promote the application of such techniques in a much wider setting. The program of the workshop will be as follows:
9:30 - 9:45 Introduction Ah Chung Tsoi
9:45 - 10:45 Adaptive data structure modelling problems Alessandro Sperduti
10:45 - 11:15 Coffee break
11:15 - 12:15 General theory of data modelling by adaptive data structure methods Paolo Frasconi
12:15 - 1:15 Lunch
1:15 - 2:15 Properties of adaptive data structure modelling Marco Gori
2:15 - 3:15 Applications of adaptive data structure methods Alessandro Sperduti
3:15 - 3:45 Coffee break
3:45 - 5:00 Forum discussion Ah Chung Tsoi
As this is supported by an Australian Research Council grant, there will be no registration fee for attending the workshop. However, attendees are responsible for their own lunch. Intended participants are encouraged to indicate their intention by emailing Professor Ah Chung Tsoi, ahchung at uow.edu.au, for morning and afternoon coffee/tea catering purposes. From morten at gizmo.usc.edu Tue Jul 14 14:44:13 1998 From: morten at gizmo.usc.edu (Morten Christiansen) Date: Tue, 14 Jul 1998 11:44:13 -0700 (PDT) Subject: Paper: A connectionist model of recursion Message-ID: A preprint of the following paper to appear in Cognitive Science is available at http://www-rcf.usc.edu/~mortenc/nn-rec.html Toward a connectionist model of recursion in human linguistic performance Morten H. Christiansen University of Southern California Nick Chater University of Warwick Abstract Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such structures. We present a connectionist model of human performance in processing recursive language structures. The model is trained on simple artificial languages. We find that the qualitative performance profile of the model matches human behavior, both on the relative difficulty of center-embedding and cross-dependency, and between the processing of these complex recursive structures and right-branching recursive constructions.
We analyze how these differences in performance are reflected in the internal representations of the model by performing discriminant analyses on these representations both before and after training. Furthermore, we show how a network trained to process recursive structures can also generate such structures in a probabilistic fashion. This work suggests a novel explanation of people's limited recursive performance, without assuming the existence of a mentally represented competence grammar allowing unbounded recursion. --Morten Christiansen ------------------------------------------------------------------------- Morten H. Christiansen, Ph.D. | Phone: +1 (213) 740-6299 NIBS Program | Fax: +1 (213) 740-5687 University of Southern California | Email: morten at gizmo.usc.edu University Park MC-2520 | WWW: http://www-rcf.usc.edu/~mortenc/ Los Angeles, CA 90089-2520 | Office: Hedco Neurosciences Bldg. B11 ------------------------------------------------------------------------- From henders at linc.cis.upenn.edu Tue Jul 14 16:35:39 1998 From: henders at linc.cis.upenn.edu (Jamie Henderson) Date: Tue, 14 Jul 1998 16:35:39 -0400 (EDT) Subject: papers on "Simple Synchrony Networks" and natural language parsing Message-ID: <199807142035.QAA08532@linc.cis.upenn.edu> The following two papers on learning natural language parsing using an architecture that applies Temporal Synchrony Variable Binding to Simple Recurrent Networks can be retrieved from the following web site: http://www.dcs.ex.ac.uk/~jamie/ Keywords: Simple Recurrent Networks, variable binding, synchronous oscillations, natural language, grammar induction, syntactic parsing, representing structure, systematicity. - - - - - - - - - - Simple Synchrony Networks: Learning to Parse Natural Language with Temporal Synchrony Variable Binding Peter Lane and James Henderson University of Exeter Abstract: The Simple Synchrony Network (SSN) is a new connectionist architecture, incorporating the insights of Temporal Synchrony Variable Binding (TSVB) into Simple Recurrent Networks. The use of TSVB means SSNs can output representations of structures, and can learn generalisations over the constituents of these structures (as required by systematicity). This paper describes the SSN and an associated training algorithm, and demonstrates SSNs' generalisation abilities through results from training SSNs to parse real natural language sentences. (6 pages) In Proceedings of the 1998 International Conference on Artificial Neural Networks (ICANN`98), Skovde, Sweden, 1998. - - - - - - - - - - A Connectionist Architecture for Learning to Parse James Henderson and Peter Lane University of Exeter Abstract: We present a connectionist architecture and demonstrate that it can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents, and can learn generalizations over syntactic constituents, thereby addressing the sparse data problems of previous connectionist architectures. We apply these Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approach those of statistical methods for this task. (7 pages) In Proceedings of the 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics (COLING-ACL`98), University of Montreal, Canada, 1998.
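For readers unfamiliar with the base architecture, a minimal Simple Recurrent (Elman) Network forward pass over a tag sequence might look as follows (an illustrative sketch with assumed toy sizes and untrained weights; the Temporal Synchrony Variable Binding extension that makes an SSN is not shown):

# Minimal Simple Recurrent Network (Elman) forward pass over word-tag indices.
# Illustrative sketch only; sizes and weights are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_tags, n_hidden, n_out = 10, 16, 4            # assumed toy sizes

W_in = rng.normal(0, 0.1, (n_tags, n_hidden))
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
W_out = rng.normal(0, 0.1, (n_hidden, n_out))

def forward(tag_sequence):
    """Return one output vector per input position."""
    h = np.zeros(n_hidden)                     # context layer, initially empty
    outputs = []
    for tag in tag_sequence:
        x = np.zeros(n_tags); x[tag] = 1.0     # one-hot encoding of the tag
        h = np.tanh(x @ W_in + h @ W_rec)      # new state depends on current
                                               # input and previous state
        outputs.append(h @ W_out)              # per-position output (untrained here)
    return np.array(outputs)

print(forward([0, 3, 7, 2]).shape)             # (4, 4): one output per word tag
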
------------------------------- Dr James Henderson Department of Computer Science University of Exeter Exeter EX4 4PT, U.K. http://www.dcs.ex.ac.uk/~jamie/ ------------------------------- From Andreas.Eisele at xrce.xerox.com Wed Jul 15 05:03:01 1998 From: Andreas.Eisele at xrce.xerox.com (Andreas Eisele) Date: Wed, 15 Jul 1998 11:03:01 +0200 Subject: research opportunity at XRCE, Grenoble Message-ID: <199807150903.LAA11479@montendry.grenoble.xrce.xerox.com> Apologies for cross-posting... ************************************* * LCG TMR Network * * Learning Computational Grammars * * * ************************************* **************************************************************************** * PRE/POSTDOCTORAL RESEARCH OPPORTUNITY AT XEROX RESEARCH CENTRE, GRENOBLE * **************************************************************************** LCG (Learning Computational Grammars) is a research network funded by the EC Training and Mobility of Researchers programme (TMR). The LCG network involves seven European partners. The research goal of the network is to use machine learning techniques to extend a variety of computational grammars. The particular focus of XRCE's research will be on the integration of explicit grammatical knowledge with example-based evidence, such as a set of stored analyses in data oriented parsing, for example. See http://www.let.rug.nl/~nerbonne/tmr/lcg.html for more details. Up to three years of pre- or postdoctoral funding are available at XRCE, starting immediately. The ideal candidate will have research experience or strong interest in NLP and machine learning (including statistical techniques). As the funding is provided by the EU Training and Mobility of researchers programme there are some restrictions on who may benefit from it: * Candidates must be aged 35 or younger * Candidates must be Nationals of an EU country, Norway, Iceland, Switzerland or Israel * Candidates must have studied or be studying for a Doctoral Degree * Candidates cannot be French Nationals * Candidates cannot have worked in France more than 18 of the last 24 months If you are interested and eligible, e-mail your CV and the names and addresses of two referees to the address below. Your CV should include a list of recent publications. Please also outline in 2-3 pages your interest in LCG, how it is related to work you have done, and any special expertise you bring to the problem. --------------------------------------------- Andreas Eisele Andreas.Eisele at xrce.xerox.com Multilingual Theory and Technology Xerox Research Centre Europe Phone: +33 (0)4 76 61 50 86 6 chemin de Maupertuis Fax: +33 (0)4 76 61 50 99 F-38240 Meylan, France URL: http://www.xrce.xerox.com From berthold at ICSI.Berkeley.EDU Wed Jul 15 17:13:07 1998 From: berthold at ICSI.Berkeley.EDU (Michael Berthold) Date: Wed, 15 Jul 1998 14:13:07 -0700 (PDT) Subject: Announcing IDA-99 Message-ID: <199807152113.OAA24043@fondue.ICSI.Berkeley.EDU> Announcing IDA-99 Third International Symposium on Intelligent Data Analysis Center for Mathematics and Computer Science, Amsterdam, The Netherlands 9th-11th August 1999 Call for papers =============== IDA-99 will take place in Amsterdam from 9th to 11th August 1999, and is organised by Leiden University in cooperation with AAAI and NVKI. It will consist of a stimulating program of tutorials, invited talks by leading international experts in intelligent data analysis, contributed papers, poster sessions, and an exciting social program. 
We plan to have a special issue of the Intelligent Data Analysis journal with extended versions of a number of papers presented during the symposium. Objective ========= For many years the intersection of computing and data analysis contained menu-based statistics packages and not much else. Recently, statisticians have embraced computing, computer scientists are using statistical theories and methods, and researchers in all corners are inventing algorithms to find structure in vast online datasets. Data analysts now have access to tools for exploratory data analysis, decision tree induction, causal induction, function finding, constructing customised reference distributions, and visualisation, and there are intelligent assistants to advise on matters of design and analysis. There are tools for traditional, relatively small samples and also for enormous datasets. In all, the scope for probing data in new and penetrating ways has never been so exciting. Our aim is for IDA-99 to bring together a wide variety of researchers concerned with extracting knowledge from data, including people from statistics, machine learning, neural networks, computer science, pattern recognition, database management, and other areas. The strategies adopted by people from these areas are often different, and a synergy results if this is recognised. IDA-99 is intended to stimulate interaction between these different areas, so that more powerful tools emerge for extracting knowledge from data and a better understanding is developed of the process of intelligent data analysis. It is the third symposium on Intelligent Data Analysis after the successful symposia Intelligent Data Analysis 97 (http://www.dcs.bbk.ac.uk/ida97.html/) and Intelligent Data Analysis 95. Topics ====== Contributed papers are invited on any relevant topic, including, but not restricted to: APPLICATION & TOOLS: analysis of different kinds of data (e.g., censored, temporal, etc.) applications (e.g., commerce, engineering, finance, legal, manufacturing, medicine, public policy, science) assistants, intelligent agents for data analysis evaluation of IDA systems human-computer interaction in IDA IDA systems and tools information extraction, information retrieval THEORY & GENERAL PRINCIPLES: analysis of IDA algorithms classification, projection, regression, optimization, clustering data cleaning data pre-processing experiment design model specification, selection, estimation reasoning under uncertainty search statistical strategy uncertainty and noise in data ALGORITHMS & TECHNIQUES: Bayesian inference and influence diagrams bootstrap and randomization causal modeling data mining decision analysis exploratory data analysis fuzzy, neural and evolutionary approaches knowledge-based analysis machine learning statistical pattern recognition visualization Submissions =========== Participants who wish to present a paper are requested to submit a manuscript, not exceeding 10 single-spaced pages. We strongly encourage authors to format their manuscript using Springer-Verlag's Advice to Authors (http://www.springer.de/comp/lncs/authors.html) for the Preparation of Contributions to LNCS Proceedings. This submission format is identical to the one for the final camera-ready copy of accepted papers. In addition, we request a separate page detailing the paper title, authors' names, postal and email addresses, phone and fax numbers. Email submissions in Postscript form are encouraged. Otherwise, five hard copies of the manuscripts should be submitted.
Submissions should be sent to the IDA-99 Program Chair. either electronically to: ida99 at wi.leidenuniv.nl or by hard copy to: Prof. dr J.N. Kok Department of Computer Science Leiden University P.O. Box 9512 2300 RA Leiden The Netherlands The address for courier services is Prof. dr J.N. Kok Department of Computer Science Leiden University Niels Bohrweg 1 2333 CA Leiden The Netherlands Important Dates =============== February 1st, 1999 Deadline for submitting papers April 15th, 1999 Notification of acceptance May 15th, 1999 Deadline for submission of final papers Review ====== All submissions will be reviewed on the basis of relevance, originality, significance, soundness and clarity. At least two referees will review each submission independently and final decisions will be made by program chairs, in consultation with relevant reviewers. Publications ============ The proceedings will be published in the Lecture Notes in Computer Science series of Springer (http://www.springer.de/comp/lncs/). The proceedings of Intelligent Data Analysis 97 appeared in this series as LNCS 1280 (http://www.springer.de/comp/lncs/volumes/1280.htm). Location ======== The symposium will use the facilities of the Center for Mathematics and Computer Science in Amsterdam (CWI -- http://www.cwi.nl/). There is an auditorium for more than 200 participants and several other rooms for parallel sessions. CWI is situated on the Wetenschappelijk Centrum Watergraafsmeer (WCW) campus in the eastern part of Amsterdam. Instructions about how to get to CWI can be found at: http://www.cwi.nl/cwi/about/directions.html On the campus there are several other research institutes and parts of the University of Amsterdam (http://www.uva.nl/english). Social Event ============ For the social event we are thinking about a boat trip through the canals of Amsterdam, with a special dinner in the centre of the city. We will provide each participant with a ``social package'', including a list of restaurants, bars, maps of town (http://www.channels.nl/themap.html), public transport information, timetables of trains (http://www.ns.nl/), etc. There are special boats and trams that circle along the touristic attractions of Amsterdam and hence it will be easy for the participants to find their way. Further information can be found in the The Internet Guide to Amsterdam (http://www.cwi.nl/~steven/amsterdam.html), panoramic pictures are also available (http://www.cwi.nl/~behr/PanoramaUK/Panorama.html). Exhibitions =========== IDA-99 welcomes demonstrations of software and publications related to intelligent data analysis. 
IDA-99 Organisation =================== General Chair: David Hand, Open University, UK Program Chair: Joost Kok, Leiden University, The Netherlands Program Co-Chairs: Michael Berthold, University of California, USA Doug Fisher, Vanderbilt University Members: Niall Adams, Open University, UK Pieter Adriaans, Syllogic, The Netherlands Russell Almond, Research Statistics Group, US Thomas Baeck, Informatik Centrum Dortmund, Germany Riccardo Bellazzi, University of Pavia, Italy Paul Cohen, University of Massachusetts, US Paul Darius, Leuven University, Belgium Tom Dietterich, Oregon State University, US Gerard van den Eijkel, Delft University of Technology, The Netherlands Fazel Famili, National Research Council, Canada Karl Froeschl, Univ of Vienna, Austria Linda van der Gaag, Utrecht University, The Netherlands Alex Gammerman, Royal Holloway London, UK Jaap van den Herik, University Maastricht, The Netherlands Larry Hunter, National Library of Medicine, US David Jensen, University of Massachusetts, US Bert Kappen, Nijmegen University, The Netherlands Hans Lenz, Free University of Berlin, Germany Frank Klawonn, University of Applied Sciences Emden, Germany Bing Liu, National University, Singapore Xiaohui Liu, Birkbeck College, UK David Madigan, University of Washington, US Heikki Mannila, Helsinki University, Finland Wayne Oldford, Waterloo, Canada Erkki Oja, Helsinki University of Technology, Finland Albert Prat, Technical University of Catalunya, Spain Luc de Raedt, KU Leuven, Belgium Rosanna Schiavo, University of Venice, Italy Jude Shavlik, University of Wisconsin, US Roberta Siciliano, University of Naples, Italy Arno Siebes, Center for Mathematics and Computer Science, The Netherlands Rosaria Silipo, International Computer Science Institute, US Floor Verdenius, ATO-DLO, The Netherlands Stefan Wrobel, GMD, Germany Jan Zytkow, Wichita State University, US (To be extended) Enquiries ========= Latest information regarding IDA-99 will be available on the World Wide Web Server of the Leiden Institute for Advanced Computer Science: http://www.wi.leidenuniv.nl/~ida99/ From giles at research.nj.nec.com Wed Jul 15 15:29:44 1998 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 15 Jul 1998 15:29:44 -0400 (EDT) Subject: Student research position Message-ID: <199807151929.PAA05809@alta> The NEC Research Institute in Princeton, NJ has an immediate opening for a student research position in the area of autonomous citation indexing (ACI) and machine learning, graph analysis or user profiling. ACI autonomously creates citation indexes which can provide a number of advantages over existing citation indexes and digital libraries for scholarly dissemination and feedback. Some recent related papers are listed below and those and others can be found at the listed Web site. Candidates must have experience in research and be able to effectively communicate research results. Ideal candidates will have knowledge of a number of the following: machine learning, graph analysis, user profiling, information retrieval, digital libraries, WWW, Perl, C++, and HTML. Interested applicants should apply by email, mail or fax including their resumes and any specific interests to: Dr. C. Lee Giles NEC Research Institute 4 Independence Way Princeton, NJ 08540 Fax: 609-951-2482 citeseer at research.nj.nec.com http://www.neci.nj.nec.com/homepages/giles/ Recent publications: C.L. Giles, K. Bollacker, S. Lawrence, CiteSeer: An Automatic Citation Indexing System, DL'98, The 3rd ACM Conference on Digital Libraries, pp. 
89-98, 1998 [one of eight papers short listed for best paper award]. S. Lawrence, C.L. Giles, Context and Page Analysis for Improved Web Search, IEEE Internet Computing, (accepted). S. Lawrence, C.L. Giles, Searching the World Wide Web, SCIENCE, 280, p. 98. 1998. S. Lawrence, C.L. Giles, The Inquirus Meta Search Engine, 7th International World Wide Web Conference, p. 95, 1998. -- __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles == From ericr at ee.usyd.edu.au Wed Jul 15 21:20:34 1998 From: ericr at ee.usyd.edu.au (Eric Ronco) Date: Thu, 16 Jul 1998 11:20:34 +1000 Subject: a probable human motor control strategy Message-ID: <199807160120.LAA16620@merlot.ee.usyd.edu.au.usyd.edu.au> Dear connectionists, This is to let you know of the existence of a new technical report in the on-line database of Sydney University: http://merlot.ee.usyd.edu.au/tech_rep TITLE: Open-Loop Intermittent Feedback Optimal Control: a probable human motor control strategy Authors: Eric Ronco Keywords: Human motor control; Predictive control; Intermittent control; Open-loop control Abstract: Recent studies on human motor control have been largely influenced by two important statements: (1) Sensory feedback is too slow to be involved at least in fast motor control actions; (2) Learned internal models of the system play an important role in motor control. As a result, the human motor control system is often described as open-loop and particularly as a system inverse. System inverse control is limited by too many problems to be a plausible candidate. Instead, an alternative between open-loop and feedback control is proposed here: the "open-loop intermittent feedback optimal control". In this scheme, a prediction of the future behaviour of the system, which requires feedback information and a system model, is used to determine a sequence of actions which is run open-loop. The prediction of a new control sequence is performed intermittently (due to computational demand and slow sensory feedback) but with a sufficient frequency to ensure small control errors. The inverted pendulum on a cart is used to illustrate the viability of this scheme. bye Eric ------------------------------------------------------------------- Eric Ronco, PhD Tel: +61 2 9351 7680 Dept of Electrical Engineering Fax: +61 2 9351 5132 Bldg J13, Sydney University Email: ericr at ee.usyd.edu.au NSW 2006, Australia http://www.ee.usyd.edu.au/~ericr ------------------------------------------------------------------- From ericr at ee.usyd.edu.au Wed Jul 15 21:20:37 1998 From: ericr at ee.usyd.edu.au (Eric Ronco) Date: Thu, 16 Jul 1998 11:20:37 +1000 Subject: A Globally Valid Continuous-Time GPC Through Successive System Linearisations Message-ID: <199807160120.LAA16622@merlot.ee.usyd.edu.au.usyd.edu.au> Dear connectionists, This is to let you know of the existence of a new technical report in the on-line database of Sydney University: http://merlot.ee.usyd.edu.au/tech_rep TITLE: A Globally Valid Continuous-Time GPC Through Successive System Linearisations Authors: Eric Ronco, Taner Arsan, Peter J. Gawthrop and David J. Hill Keywords: Predictive control; Global control; Non-linear system; Off-equilibrium system linearisation Abstract: In a Model Based Predictive Controller (MBPC) an optimal set of control signals over a defined time period is determined by predicting the future system behaviour and minimising a non-linear cost function.
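The receding-horizon idea behind MBPC can be sketched as follows (a generic illustration under assumed toy dynamics and a crude random-search optimiser, not the continuous-time GPC of the report): a model predicts the effect of candidate control sequences over a horizon, the cheapest sequence is selected, and only its first element is applied before the optimisation is repeated.

# Generic receding-horizon predictive control loop (illustrative sketch only).
# The plant model, cost weights, and random-shooting optimiser are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def model(x, u):
    # assumed toy nonlinear plant: x' = f(x, u)
    return 0.9 * x + 0.2 * np.sin(x) + u

def cost(xs, us):
    # penalise deviation from the set point (zero) and control effort
    return np.sum(xs ** 2) + 0.01 * np.sum(us ** 2)

def mbpc_step(x0, horizon=10, candidates=500):
    best_u, best_c = None, np.inf
    for _ in range(candidates):               # crude random-shooting optimiser
        us = rng.uniform(-1.0, 1.0, horizon)
        x, xs = x0, []
        for u in us:                          # roll the model forward
            x = model(x, u)
            xs.append(x)
        c = cost(np.array(xs), us)
        if c < best_c:
            best_c, best_u = c, us
    return best_u[0]                          # apply only the first control

x = 2.0
for t in range(30):
    x = model(x, mbpc_step(x))                # receding horizon: re-optimise each step
print("final state:", x)                      # should be driven towards zero
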
The significant amount of computation involved in a non-linear function optimisation often precludes fast control action. A successive off-equilibrium system linearisation approach is proposed here to enhance fast control action in a state space "Continuous-time Generalised Predictive Controller" (CGPC). This idea is also extended to achieve non-linear observations of the system states. It is shown from simulations of an inverted pendulum on a cart that this strategy is much more effective than a non-linear CGPC in terms of control system analysis, control quality, computation demand and noise robustness. bye Eric ------------------------------------------------------------------- Eric Ronco, PhD Tel: +61 2 9351 7680 Dept of Electrical Engineering Fax: +61 2 9351 5132 Bldg J13, Sydney University Email: ericr at ee.usyd.edu.au NSW 2006, Australia http://www.ee.usyd.edu.au/~ericr ------------------------------------------------------------------- From M.Usher at ukc.ac.uk Thu Jul 16 05:42:29 1998 From: M.Usher at ukc.ac.uk (M.Usher@ukc.ac.uk) Date: Thu, 16 Jul 1998 10:42:29 +0100 Subject: paper on binding and synchrony Message-ID: <199807160942.KAA15337@snipe.ukc.ac.uk> Researchers who are interested in the issue of neural synchrony as a mechanism for binding of visual information might enjoy reading the following article, which appeared last week in Nature (9 July 1998, pp. 179-182). In this article we report results of psychophysical experiments that support a causal relation between synchrony and processes of visual grouping and segmentation. The article can also be accessed on our web site: http://www.ukc.ac.uk/psychology/people/usherm/ Marius Usher Lecturer in Psychology University of Kent, Canterbury, UK Visual synchrony affects binding and segmentation in perception Marius Usher and Nick Donnelly ABSTRACT The visual system analyses information by decomposing complex objects into simple components (visual features) widely distributed across the cortex. When several objects are simultaneously present in the visual field, a mechanism is required to group (bind) together visual features belonging to each object and to separate (segment) them from features of other objects. An attractive scheme for binding visual features into a coherent percept is to synchronize the neural activity evoked by features belonging to the same object. If synchrony plays a major role in binding, one should expect that grouping and segmentation are facilitated in visual displays that induce stimulus-dependent synchrony by temporal manipulations. We report data demonstrating that visual grouping is indeed facilitated when elements of one percept are presented simultaneously and are temporally separated (on a scale below the integration time of the visual system) from elements of another percept. Further experiments indicate that this facilitation is due to a global mechanism of grouping caused by synchronous neural activation and not to a local mechanism of motion computation. From Luis.Almeida at inesc.pt Thu Jul 16 11:43:27 1998 From: Luis.Almeida at inesc.pt (Luis B. Almeida) Date: Thu, 16 Jul 1998 16:43:27 +0100 Subject: preprint: nonlinear blind source separation Message-ID: <35AE1F9F.4EA986B4@ilusion.inesc.pt> The following preprint is available. It corresponds to a paper submitted to ICA'99, the International Workshop on Independent Component Analysis and Blind Source Separation, to be held in Aussois, France, in January 1999. Separation of nonlinear mixtures using pattern repulsion Goncalo C. Marques and Luis B. Almeida Abstract Blind source separation is currently a topic of great research interest.
Most of the separation methods that have been developed are applicable only to the separation of linear mixtures. In this paper we derive a separation method for nonlinear mixtures, inspired by an analogy with the concept of repulsion among physical particles. Two examples of nonlinear separation are presented. The paper is available at ftp://146.193.2.131/pub/lba/papers/aussois.ps.gz (compressed postscript, 340 kB) and ftp://146.193.2.131/pub/lba/papers/aussois.ps (uncompressed postscript, 2.3 MB) Comments are welcome. Luis B. Almeida Phone: +351-1-3100246,+351-1-3544607 INESC Fax: +351-1-3145843 R. Alves Redol, 9 E-mail: lba at inesc.pt 1000 Lisboa, Portugal http://ilusion.inesc.pt/~lba/ ------------------------------------------------------------------------ *** Indonesia is killing innocent people in East Timor *** see http://amadeus.inesc.pt/~jota/Timor/ From marco at neuron.dii.unisi.it Thu Jul 16 11:50:23 1998 From: marco at neuron.dii.unisi.it (Marco Gori) Date: Thu, 16 Jul 1998 17:50:23 +0200 (MET DST) Subject: ECAI'98 tutorial - T11 Message-ID: ========================================================================== ECAI'98 Tutorial on Connectionist Models for Processing Structured Information Brighton, UK - August 25, 1998 Marco Gori http://www-dii.ing.unisi.it/~marco ========================================================================== This tutorial covers the problem of adaptive processing of graphs. Classic models for learning sequential information (e.g. recurrent neural networks) are properly extended to learn data organized as graphs. For instance, algorithms like Backpropagation Through Time (BPTT) are nicely extended to the case of data structures. People interested in this tutorial can find an abstract at http://www.cogs.susx.ac.uk/ecai98/tw/T11.html Additional information on the topic of adaptive processing of structured information can be found at http://www.dsi.unifi.it/~paolo/datas If you are interested in the tutorial, please don't hesitate to contact me. I'll be pleased to ask any question on the topic. For any information concerning the registration, you can access at the ECAI'98 web site (http://www.cogs.susx.ac.uk/ecai98/index.html) -- Marco Gori. ============================================ Marco Gori Dipartimento di Ingegneria dell'Informazione Universita' di Siena Via Roma 56 - 53100 Siena (Italy) Tel: : +39 577 26.36.10 Fax : +39 577 26.36.02 E-mail : marco at ing.unisi.it WWW : http://www-dii.ing.unisi.it/~marco ============================================= From ted.carnevale at yale.edu Thu Jul 16 13:34:18 1998 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Thu, 16 Jul 1998 13:34:18 -0400 Subject: NEURON course at 1998 SFN meeting Message-ID: <35AE399A.11B9@yale.edu> Short Course Announcement Using the NEURON Simulation Environment --------------------------------------- A Satellite Symposium to the Society for Neuroscience Meeting Los Angeles, CA Saturday, Nov. 7, 1998 9 AM - 5 PM Speakers: N.T. Carnevale, M.L. Hines, J.W. Moore, and G.M. Shepherd This 1 day course with lectures and live demonstrations will present information essential for teaching and research applications of NEURON. It emphasizes practical issues that are key to the most productive use of this powerful and convenient modeling tool. Each registrant will receive a CD-ROM with software, plus a comprehensive set of notes that includes material which has not yet appeared elsewhere in print. Coffee breaks and lunch will be provided. 
There will be a registration fee to cover these and related costs, e.g. AV equipment rental and handout materials. This will be in the ballpark of other 1-day courses; the exact figure depends on factors we haven't yet learned from SFN. >>> Registration is limited to 55 individuals <<< on a first-come, first-served basis Deadlines Early registration: Friday, September 25, 1998 Late registration: Friday, October 9, 1998 NO on-site registration will be accepted. For more information and the electronic registration form see the course's WWW pages at http://www.neuron.yale.edu/sfn98.html and http://neuron.duke.edu/sfn98.html --Ted Supported in part by the National Science Foundation. Opinions expressed are those of the authors and not necessarily those of the Foundation. From istvan at usl.edu Thu Jul 16 17:04:09 1998 From: istvan at usl.edu (Dr. Istvan S. N. Berkeley) Date: Thu, 16 Jul 1998 16:04:09 -0500 Subject: Draft papers available on-line References: <199807142035.QAA08532@linc.cis.upenn.edu> Message-ID: <35AE6AC9.5CDE@USL.edu> The following draft papers and writings concerning matters connectionist are available on-line at the URL below http://www.ucs.usl.edu/~isb9112/papers.html Berkeley, I. "Connectionism Reconsidered: Minds, Machines and Models" Abstract: In this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research. Berkeley, I. "What the #$*%! is a Subsymbol?" Abstract: In 1988, Smolensky proposed that connectionist processing systems should be understood as operating at what he termed the 'subsymbolic' level. Subsymbolic systems should be understood by comparing them to symbolic systems, in Smolensky's view. Until recently, there have been real problems with analyzing and interpreting the operation of connectionist systems which have undergone training. However, recently published work on a network trained on a set of logic problems originally studied by Bechtel and Abrahamsen (1991) seems to offer the potential to provide a detailed, empirically based answer to questions about the nature of subsymbols. In this paper, the network analysis procedure and the results obtained using it are discussed. This provides the basis for a surprising insight into the nature of subsymbols. (Note: There are currently some problems with the figures associated with this paper; hopefully this will be fixed soon.) Berkeley, I. "Some Myths of Connectionism". Abstract: This paper considers a number of claims about connectionist systems which are often encountered in the philosophical literature.
The myths discussed here include the claim that connectionist systems are, in some sense, biological or neural, the claim that connectionist systems are compatible with real-time processing constraints, the claim that connectionist systems exhibit graceful degradation and the claim that connectionist systems are good generalizers. In the case of each of these claims, it is argued that there is a mythical component and, as such, claims of this kind should not be accepted by philosophers without appropriate qualification. (N.B. This paper is aimed more at a philosophical audience, than a technical one). Berkeley, I. "A Revisionist History of Connectionism". Abstract: An alternative perspective on the history of connectionism is offered in this short paper. In this paper a number of claims and conclusions often encountered in standard versions of the history are disputed. This paper also includes some new material from the people involved in the history of AI and Cognitive Science. Berkeley, I. "An Introduction to Connectionism". This is a useful teaching resource, which provides a reasonably accessible and non-technical introduction to the basic components and concepts which are important to backpropogation systems. It includes a description of some of the kinds of processing units which can be employed. All the best, Istvan -- Istvan S. N. Berkeley Ph.D, E-mail: istvan at USL.edu, Philosophy, The University of Southwestern Louisiana, USL P. O. Box 43770, Lafayette, LA 70504-3770, USA. Tel:(318) 482 6807, Fax: (318) 482 6195, http://www.ucs.usl.edu/~isb9112 ***** Learn about the new Cognitive Science Ph.D. program at USL, visit http://cognition.usl.edu ***** From juergen at idsia.ch Fri Jul 17 09:49:58 1998 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Fri, 17 Jul 1998 15:49:58 +0200 Subject: reinforcement learning economy Message-ID: <199807171349.PAA15994@ruebe.idsia.ch> This message is triggered by Eric Baum's recent announcement of his interesting papers on evolutionary economies for reinforcement learning, "Hayek machine", and metalearning. I would like to mention that several related ideas are expressed in an old paper from 1987 [1]. Pages 23-51 of [1] are devoted to "Prototypical Self-referential Associating Learning Mechanisms (PSALM1 - PSALM3). Hayek2 (the most recent Hayek variant) is somewhat reminiscent of PSALM3, where competing/cooperating reinforcement learning agents bid for executing actions. Winners may receive external reward for achieving goals. Agents are supposed to learn the credit assignment process itself (metalearning). For this purpose they can execute actions for collectively constructing and connecting and modifying agents and for transferring credit (reward) to agents. A crucial difference between PSALM3 and Hayek2 may be that PSALM3 does not strictly enforce individual property rights. For instance, agents may steal money from other agents and temporally use it in a way that does not contribute to the system's overall progress. On the other hand, to the best of my knowledge, PSALMs are the first machine learning systems that enforce the important constraint of total credit conservation (except for consumption and external reward) - this constraint is not enforced in Holland's landmark bucket brigade classifier economy (1985), which may cause inflation and other problems. Reference [1] also inspired a slightly more recent but less general approach enforcing money conservation, where money is "weight substance" of a reinforcement learning neural net [2]. 
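To make the credit-conservation constraint concrete, the toy sketch below implements a hypothetical bidding economy in that spirit: the agent that wins an auction pays its bid to the agent that acted before it, and new credit enters the system only as external reward. It is purely illustrative; the agent class, bidding policy, and numbers are assumptions, not the PSALM, Hayek, or neural bucket brigade mechanisms themselves.

# A toy, hypothetical sketch of a credit-conserving bidding economy in the
# spirit of the bucket-brigade / Hayek-style systems discussed above (NOT the
# actual PSALM or Hayek implementations; all names and policies are illustrative).

class Agent:
    def __init__(self, name, wealth):
        self.name = name
        self.wealth = wealth

    def bid(self):
        # Illustrative bidding policy: offer a fixed fraction of current wealth.
        return 0.1 * self.wealth

def auction_step(agents, previous_winner, external_reward=0.0):
    """Highest bidder acts. Its bid is paid to the agent that acted before it,
    so credit only moves between agents; the only new credit is external reward."""
    winner = max(agents, key=lambda a: a.bid())
    if previous_winner is not None:
        payment = winner.bid()
        winner.wealth -= payment
        previous_winner.wealth += payment   # credit flows back along the chain
    winner.wealth += external_reward        # sole source of fresh credit
    return winner

agents = [Agent(f"agent{i}", wealth=10.0 + i) for i in range(3)]
prev = None
for t in range(5):
    reward = 1.0 if t == 4 else 0.0         # external reward only at the goal
    prev = auction_step(agents, prev, reward)

# Total credit is conserved up to the external reward: 10 + 11 + 12 + 1 = 34.
print(sum(a.wealth for a in agents))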
Pages 7-13 of [1] are devoted to an alternative "Genetic Programming" (GP) approach that recursively applies metalevel GP to the task of finding better program-modifying programs on lower levels - the goal is to use GP for improving GP. It may be worth mentioning that this was suggested long before GP itself (invented by Cramer in 1985) was popularized in the 1990s. It should be stated that reference [1] does not meet the scientific standards of a journal publication - it is the first paper I ever wrote in a foreign language (as an undergraduate). But despite its age (it was first distributed more than a decade ago) it may still be of at least historic interest due to renewed attention to market models and metalearning (and also GP). Unfortunately there is no digital version, but if you are interested I will send you a hardcopy (this may take some time depending on demand). [1] J. Schmidhuber. Evolutionary Principles in Self-Referential Learning. On Learning how to Learn: The Meta-Meta-Meta...-Hook. Diploma thesis, Tech. Univ. Munich, 1987 [2] J. Schmidhuber. The Neural Bucket Brigade: A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403-412, 1989, http://www.idsia.ch/~juergen/onlinepub.html _________________________________________________ Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch www.idsia.ch/~juergen From bfrey at dendrite.beckman.uiuc.edu Fri Jul 17 09:44:10 1998 From: bfrey at dendrite.beckman.uiuc.edu (Brendan Frey) Date: Fri, 17 Jul 1998 08:44:10 -0500 Subject: -> new book <- Message-ID: <199807171344.IAA07449@dendrite.beckman.uiuc.edu> ------------------------------------------------------ New book now available ------------------------------------------------------ Graphical Models for Machine Learning and Digital Communication Brendan Frey, MIT PRESS Information: www.cs.toronto.edu/~frey/book.html ------------------------------------------------------ neural networks graphical models Pearl's algorithm Monte Carlo variational methods wake-sleep algorithm generalized EM error-correcting coding bits-back coding => Helmholtz machines: unsupervised neural net probability models (like ICA, but with *dependent* components) => BEST error-correcting algorithm (turbodecoding) *** simple explanation using Pearl's algorithm => data compression using latent variable models Brendan. From mcauley.11 at osu.edu Fri Jul 17 11:38:29 1998 From: mcauley.11 at osu.edu (Devin McAuley) Date: Fri, 17 Jul 1998 11:38:29 -0400 (EDT) Subject: Tutorial workshop on connectionist models (preliminary announcement) Message-ID: <199807171538.LAA12905@mail4.uts.ohio-state.edu> A Tutorial Workshop on Connectionist Models of Cognition Saturday October 3rd - Sunday October 4th, 1998 Sponsored by theDepartment of Psychology and the Cognitive Science Center The Ohio State University Columbus, Ohio 43210 The Department of Psychology and the Cognitive Science Center at Ohio State University are pleased to announce a tutorial workshop on Connectionist Models of Cognition. The workshop is open to both faculty and students interested in the application of parallel distributed processing models to cognitive phenomena either in a teaching or research capacity. Participants will gain hands-on modeling experience with a variety of standard architectures and have the opportunity to begin developing their own models under the guidance of workshop staff. 
A background in connectionist models is not required to participate in this workshop. The intensive 2-day workshop will consist of tutorial lectures on specific connectionist models followed by structured modeling sessions in the computer lab. The network architectures covered will include interactive-activation networks, backpropagation networks, simple-recurrent networks, Hopfield networks, self-organizing feature maps, and coupled-oscillator models. Lab sessions will consist of a series of exercises using the BrainWave connectionist simulator. BrainWave is a web-friendly software tool for teaching connectionist models based on a paint-program metaphor. Developed for an undergraduate course on connectionist models, it targets novices and requires a minimal level of technical skill to use. The BrainWave package (and on-line teaching materials) have been included as components of undergraduate courses in psychology, cognitive science, linguistics, and computer science at universities in the United States, the United Kingdom, and Australia. By the end of this workshop, participants will have a general introduction to the field of neural networks and the BrainWave simulator, and have experience running simulations with several connectionist models of cognitive phenomena that have been influential in cognitive science. For faculty teaching cognitive modeling, the workshop provides the opportunity to gain familiarity with the BrainWave simulator and course materials. For researchers, it offers an excellent opportunity for professional development. The registration fee for the students attending the workshop is $50 and for all others it is $150. Included in the registration for the 2-day workshop are 1. A single-user copy of the BrainWave software package. 2. All course materials. The workshop coordinators are Dr. Devin McAuley (Department of Psychology, Ohio State University) and Dr. Simon Dennis (School of Psychology, University of Queensland). The Cognitive Science Center faculty sponsor is Professor Mari Jones (Department of Psychology, Ohio State University). Send general inquiries and registration forms to: Connectionist Models Workshop (c/o Devin McAuley) Department of Psychology 142 Townshend Hall The Ohio State University Columbus, Ohio 43210 USA Email: brainwav at psy.uq.edu.au Fax: 614-292-5601 * The registration deadline is September 15th (numbers are limited) * ---------------------------------------------------------------------------- Workshop Registration Form NAME ____________________________________________ EMAIL ____________________________________________ ADDRESS ____________________________________________ ____________________________________________ ____________________________________________ ____________________________________________ CHECK ONE STUDENT _____ ($50) ACADEMIC _____ ($150) INDUSTRY _____ ($150) Payment should be made to: the Department of Psychology, The Ohio State University ------------------------------------------------------------------- J. Devin McAuley, PhD Department of Psychology 142 Townshend Hall The Ohio State University Columbus, Ohio 43210 Email: mcauley.11 at osu.edu Phone: 1-614-292-4320 From giles at research.nj.nec.com Fri Jul 17 15:36:03 1998 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 17 Jul 1998 15:36:03 -0400 (EDT) Subject: Book on "Adaptive Processing of Sequences and Data Structures" Message-ID: <199807171936.PAA08666@alta> The following edited book might be of interest. 
"Adaptive Processing of Sequences and Data Structures" (eds.) C. Lee Giles, Marco Gori LECTURE NOTES IN COMPUTER SCIENCE: ARTIFICIAL INTELLIGENCE, Springer Verlag, 1998. This is a collection of tutorial papers presented at the International Summer School on Neural Networks, "E.R. Caianiello", Vietri sul Mare, Salerno, Italy, September 6-13, 1997. Ordering information can be had from: http://www.springer.de/comp/lncs/tutorial.html Chapters sections, titles and authors are: Architectures and Learning in Recurrent Neural Networks Recurrent Neural Network Architectures: An Overview A.C. Tsoi Gradient Based Learning Methods A.C. Tsoi Diagrammatic Methods for Deriving and Relating Temporal Neural Network Algorithms E.A. Wan and F. Beaufays Processing of Data Structures An Introduction to Learning Structured Information P. Frasconi Neural Networks for Processing Data Structures A. Sperduti The Loading Problem: Topics in Complexity M. Gori Probabilistic Models Learning Dynamic Bayesian Networks Z. Ghahramani Probabilistic Models of Neuronal Spike Trains P. Baldi Temporal Models in Blind Source Separation L.C. Parra Analog vs Symbolic Computation Recursive Neural Networks and Automata M. Maggini The Neural Network Pushdown Automaton: Architecture, Dynamics and Training G.Z. Sun, C.L. Giles and H.H. Chen Neural Dynamics with Stochasticity H. T. Siegelmann Applications Parsing the Stream of Time: The Value of Event-Based Segmentation in a Complex Real-World Control Problem M.C.Mozer and D. Miller Hybrid HMM/ANN Systems for Speech Recognition: Overview and New Research Directions H. Bourlard and N. Morgan Predictive Models for Sequence Modelling, Application to Speech and Character Recognition P. Gallinari Best regards, Lee Giles Marco Gori __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles == From rich at cs.umass.edu Mon Jul 20 11:54:13 1998 From: rich at cs.umass.edu (Rich Sutton) Date: Mon, 20 Jul 1998 10:54:13 -0500 Subject: Technical Report Announcement: Reinforcement Learning with Temporal Abstraction Message-ID: We are pleased to announce the public availability of the following technical report: Between MDPs and semi-MDPs: Learning, planning, and representing knowledge at multiple temporal scales. by Richard S. Sutton, Doina Precup, and Satinder Singh Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key challenges for Artificial Intelligence. this paper we develop an approach to these problems based on the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action to include {\it options}---whole courses of behavior that may be temporally extended, stochastic, and contingent on events. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Options may be given a priori, learned by experience, or both. They may be used interchangeably with actions in a variety of planning and learning methods. The theory of semi-Markov decision processes (SMDPs) can be applied to model the consequences of options and as a basis for planning and learning methods using them. In this paper we develop these connections, building on prior work by Bradtke and Duff (1995), Parr (in prep.) and others. 
Our main novel results concern the interface between the MDP and SMDP levels of analysis. We show how a set of options can be altered by changing only their termination conditions to improve over SMDP methods with no additional cost. We also introduce {\it intra-option} temporal-difference methods that are able to learn from fragments of an option's execution. Finally, we propose a notion of subgoal which can be used to improve the options themselves. Overall, we argue that options and their models provide hitherto missing aspects of a powerful, clear, and expressive framework for representing and organizing knowledge. ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/SPS-98.ps.gz 39 pages, 1.8 MBytes. From georgiou at wiley.csusb.edu Mon Jul 20 03:49:58 1998 From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu) Date: Mon, 20 Jul 1998 00:49:58 -0700 (PDT) Subject: ICCIN'98: Student funding available, deadline: August 1 Message-ID: <199807200749.AAA23416@wiley.csusb.edu> 3rd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE (ICCIN'98) Sheraton Imperial Hotel & Convention Center Research Triangle Park, North Carolina October 24-28, 1998 (Tutorials are on October 23) New Deadline for Student Paper Submissions: August 1 ARO Financial Support Available The organizers of ICCIN '98 and the Joint Conference on Information Sciences are pleased to announce that the Mathematical and Computer Sciences Division of the Army Research Office has become a sponsor of JCIS'98. Registration and travel stipends for students with accepted papers to ICCIN '98 and JCIS'98 will be made available. To accommodate this late breaking good news, the deadline for student papers to ICCIN '98 and JCIS '98 will be extended to August 1, 1998. All papers will undergo rapid peer review. A student paper constitutes a paper with significant student involvement; it need not be a paper where the student is the first or primary author. For the purposes of this announcement, "students" include, but are not limited to grad students, post-docs and research fellows. Some funds may be available for students who wish to attend the conference but who do not present research findings. However, preference will be given to those students actually presenting research findings on work in computational intelligence, neural networks, control and systems theory and other topics relevant to the JCIS '98. Students who have already submitted papers and would like to be considered for funding please send a note to georgiou at csci.csusb.edu . Conference Co-chairs: Subhash C. Kak, Louisiana State University Jeffrey P. Sutton, Harvard University Plenary Speakers include the following: +-----------------------------------------------------------------------+ |James Anderson |Panos J. Antsaklis |John Baillieul |Walter Freeman | |-----------------+-------------------+----------------+----------------| |David Fogel |Stephen Grossberg |Stuart Hameroff |Yu Chi Ho | |-----------------+-------------------+----------------+----------------| |Thomas S.Huang |George J. Klir |Teuvo Kohonen |John Koza | |-----------------+-------------------+----------------+----------------| |Richard G. Palmer|Zdzislaw Pawlak |Karl Pribram |Azriel Rosenfeld| |-----------------+-------------------+----------------+----------------| |Julius T. Tou |I.Burhan Turksen |Paul J. Werbos |A.K.C.Wong | |-----------------+-------------------+----------------+----------------| |Lotfi A. 
Zadeh |Hans J.Zimmermann | | | +-----------------------------------------------------------------------+ This conference is part of the Fourth Joint Conference Information Sciences. Areas for which papers are sought include: o Artificial Life o Artificially Intelligent NNs o Associative Memory o Cognitive Science o Computational Intelligence o Efficiency/Robustness Comparisons o Evolutionary Computation for Neural Networks o Feature Extraction & Pattern Recognition o Implementations (electronic, Optical, Biochips) o Intelligent Control o Learning and Memory o Neural Network Architectures o Neurocognition o Neurodynamics o Neuro-Quantum Information Processing o Optimization o Parallel Computer Applications o Quantum Neurocomputing o Theory of Evolutionary Computation Papers will be accepted based on summaries. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text, with figures and tables included. Required deposits and other information: http://www.ee.duke.edu/~gu/JCIS98/conf.html Send 3 copies of summaries to: George M. Georgiou Computer Science Department California State University San Bernardino, CA 92407-2397 U.S.A. georgiou at csci.csusb.edu Tutorial and other registration information can be found in the announcement of the Fourth Joint Conference Information Sciences: http://www.ee.duke.edu/~gu/JCIS98/ ICCIN'98 Conference Web site: http://www.csci.csusb.edu/iccin From oby at cs.tu-berlin.de Mon Jul 20 16:23:46 1998 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Mon, 20 Jul 1998 22:23:46 +0200 (MET DST) Subject: preprints available Message-ID: <199807202023.WAA28537@pollux.cs.tu-berlin.de> Dear Connectionists, attached please find abstracts and preprint-locations of two short manuscripts on modelling the development of cortical maps. Cheers Klaus ---------------------------------------------------------------------- Prof. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Informatik 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/ ---------------------------------------------------------------------- ---------------------------------------------------------------------- M. Stetter^1, E. W. Lang^2, and K. Obermayer^1 1 FB Informatik, TU Berlin 2 Physik-Department, U. Regensburg Unspecific long-term potentiation can evoke functional segregation in a model of area 17. Recently it has been shown in rat hippocampus that the synapse specificity of Hebbian long-term potentiation breaks down at short distances below 100 mum . Using a neural network model we show that this unspecific component of long term potentiation can be responsible for the robust formation and maintainance of cortical organization during activity driven development. When the model is applied to the formation of orientation and ocular dominance in visual cortex, we find that the addition of an unspecific component to standard Hebbian learning - in combination with a tendency of left-eye and right-eye driven synapses to initially group together on the postsynaptic neuron - induces the simultaneous emergence and stabilization of ocular dominance and of segregated, oriented ON-/OFF-subfields. Since standard Hebbian learning cannot account for the simultaneous stabilization of both structures, unspecific LTP thus induces a qualitatively new behaviour. 
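For readers who want a concrete picture of the "unspecific" component, the toy update below potentiates not only an active synapse but also its near neighbours on the dendrite, independently of their own presynaptic activity. This is a deliberately simplified, hypothetical rendering of the idea; the function, parameters, and spill-over rule are assumptions, not the model equations used in the paper.

# Toy, hypothetical rendering of Hebbian learning with an unspecific LTP component:
# potentiation at one synapse spills over to synapses within a local neighbourhood,
# independent of their own presynaptic activity. Not the paper's actual model.
import numpy as np

def hebbian_with_unspecific_ltp(w, pre, post, eta=0.01, spill=0.3, radius=2):
    """w: weights of synapses ordered by position on the dendrite.
    pre: presynaptic activities (same order); post: postsynaptic activity."""
    specific = eta * pre * post                  # standard Hebbian term
    unspecific = np.zeros_like(w)
    for i, dw in enumerate(specific):
        lo, hi = max(0, i - radius), min(len(w), i + radius + 1)
        unspecific[lo:hi] += spill * dw          # spill-over to nearby synapses
    return w + specific + unspecific

w = np.zeros(10)
pre = np.zeros(10)
pre[4] = 1.0                                     # only synapse 4 is active
w = hebbian_with_unspecific_ltp(w, pre, post=1.0)
print(np.round(w, 4))   # synapse 4 potentiated most; neighbours 2-6 also potentiated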
Since unspecific LTP only acts between synapses which are locally clustered in space, our results imply that details of the local grouping of synapses on the dendritic arbors of postsynaptic cells can considerably influence the formation of the cortical functional organization at the systems level. to appear in: Neuroreport 1998 available at: http://ni.cs.tu-berlin.de/publications/#journals ------------------------------------------------------------------------ C. Piepenbrock and K. Obermayer FB Informatik, TU Berlin Effects of lateral competition in the primary visual cortex on the development of topographic projections and ocular dominance maps We present a Hebbian model for the development of cortical maps in the striate cortex that includes a parameter which represents the degree of lateral competition for activity between neurons. It has two well known models as limiting cases: for weak competition we obtain a Correlation Based Learning (CBL) model and for strong lateral competition we recover the Self Organizing Map (SOM). Receptive fields develop through very different mechanisms in these models: CBL is driven by the second order statistics of the input stimuli, while SOM maps stimulus features. We study the influence of lateral interactions for intermediate changes in competition. Increasing the competition for binocular localized stimuli we find a transition from an unorganized map to a topographic projection and subsequently to a topographic map with ocular dominance columns. to appear in: In Computational Neuroscience: Trends in Research 1998, 1998. available at: http://ni.cs.tu-berlin.de/publications/#conference From dhw at santafe.edu Mon Jul 20 16:54:04 1998 From: dhw at santafe.edu (dhw@santafe.edu) Date: Mon, 20 Jul 1998 14:54:04 -0600 Subject: Job Announcement at NASA Ames Research Center Message-ID: <199807202054.OAA20722@santafe.santafe.edu> ** PLEASE CIRCULATE ** RESEARCH POSITION AT NASA AMES The Computational Inductive Inference Group at NASA Ames Research Center invites applications for a researcher to work on its "Collective Intelligence" project. This multi-disciplinary effort is led by David Wolpert, and currently has 5 researchers. It is concerned with systems that involve large collections of sophisticated machine learning algorithms operating, without centralized control, so as to achieve a global objective. The current domains being investigated are packet routing, automated parallelization, and protocell modeling. The position will be at either the post-doctoral and/or staff researcher level, depending on qualifications. Applicants must have a doctoral degree and an outstanding research record. Experience in one or more of the fields of game theory, economics, machine learning / statistical inference (especially reinforcement learning), or multi-agent systems is required. Experience in network routing, biophysics, population biology, and/or parallel algorithms is a plus. Candidates should send a curriculum vitae (including a list of publications), their citizenship status and the names of at least three references to Hal Duncan, hduncan at mail.arc.nasa.gov, 650-604-4767, MS N269-2, NASA Ames Research Center, Moffett Field, CA, 94035. Electronic submissions are preferred. Do NOT send information to David Wolpert. 
From herbert.jaeger at gmd.de Tue Jul 21 08:36:40 1998 From: herbert.jaeger at gmd.de (Herbert Jaeger) Date: Tue, 21 Jul 1998 14:36:40 +0200 Subject: Postdoc Position in Mobile Robotics Message-ID: <35B48B2D.551D@gmd.de> ------------------------------------------------------------------- Post-doctoral position at the German Nation Research Center for Information Technology (GMD): Integrating bottom-up learning and planning in a mobile robot that uses stochastic models for predicting the consequences of action ------------------------------------------------------------------- Within GMD's annual postdoc programme (details at http://ik.gmd.de/PD-97-98.html), a research position on integrating bottom-up learning and conceptual-level planning in mobile robots is offered in the Cognitive Robotics research group (http://www.gmd.de/FIT/KI/CogRob/) at the GMD Institute for System Design (SET). Among many other topics related to the design of mobile robots, the group pursues the integration of bottom-up, behavior-based techniques with symbolic planning capabilities. It is not intended merely to interface a classical planning module with an otherwise behavior-based control system. Instead, the symbolic information processing must grow out of the dynamics of the robot-environment interaction. This goal is to be achieved by combining several techniques: (i) a stochastic learning component which enables the robot to learn to predict the consequences of its actions, (ii) a dynamical-systems based method ("Dual Dynamics") for specifying the behaviors of a robot, (iii) an architecture for integrating (i) and (ii) with a symbolic planning system on the grounds of conceptual knowledge that emerges from the robot's prediction learning. The complete system will be implemented on a B14 robot, our RoboCup robots, or underwater robots. Details can be found in several short papers (ftp://ftp.gmd.de/GMD/ais/publications/ {1997/jaeger-christaller.97.dd.ps.gz, 1998/jaeger.98.emcsr.ps.gz, 1998/hertzberg.98.ddplan.ps.gz}). We are looking for a post-doc researcher who finds this a fascinating scientific challenge, and who would like to take responsibilities for the success of this endeavor. We emphasize that this is not a narrowly defined project position - post-doctoral grantholders at GMD can put their own scientific stamp on their work. Qualifications: The candidate must have a doctoral degree in computer science, engineering, physics, mathematics, biology, or related areas. Furthermore, we would be happy about as many as possible from among the following qualifications: - experience in robotics, - experience with stochastic system analysis and control, - familiarity with dynamical systems in general, ODE's in particular, - a background in symbolic, - interest in the fundamental questions of cognitive agents research. Women are encouraged to apply. Handicapped people with comparable qualifications are given preferential treatment. Please direct inquiries and applications to Herbert Jaeger at the address given below. Formal applications should be prepared according to the requirements specified at http://ik.gmd.de/PD-97-98.html. Specifically, a short research plan must be included. >>> The deadline for applications is September 15, 1998. ---------------------------------------------------------------- Dr. 
Herbert Jaeger Phone +49-2241-14-2253 German National Research Center Fax +49-2241-14-2384 for Information Technology (GMD) email herbert.jaeger at gmd.de SET.KI Schloss Birlinghoven D-53754 Sankt Augustin, Germany http://www.gmd.de/People/Herbert.Jaeger/ ---------------------------------------------------------------- From benferha at irit.fr Tue Jul 21 12:27:12 1998 From: benferha at irit.fr (Salem BENFERHAT) Date: Tue, 21 Jul 1998 18:27:12 +0200 (MET DST) Subject: ETAI-new area "Decision and Reasoning under Uncertainty" Message-ID: <199807211627.SAA02487@elsac.irit.fr> Dear Colleagues, We are very pleased to announce the creation of a new area of the Electronic Transactions on Artificial Intelligence (ETAI), named Decision and Reasoning under Uncertainty (DRU), an area of which we are in charge. To see how ETAI operates and what its present state is, the best is to take a look at the ETAI webpage http://www.ida.liu.se/ext/etai/. A brief summary is also given below. This new area covers research on reasoning and decision under uncertainty, on both the methodological and the applicative sides. Significant papers are invited from the whole spectrum of uncertainty in Artificial Intelligence research. See below for the call for papers. The reviewing process in the ETAI-DRU area differs from the traditional one. Each submitted paper goes through the following steps (see below for a complete description of the reviewing process): 1. The paper is posted at the ETAI website and is announced to the DRU community via a newsletter. This starts a three-month online (public) discussion. 2. After these three months of discussion, the author decides whether he/she wishes to have the paper refereed for the ETAI, or not. 3. If yes, the article is sent to confidential referees (timing is short, typically 3 weeks). The article is either accepted or not accepted for the ETAI. Research papers which are under public discussion are located at: "http://www.ida.liu.se/ext/etai/received/dru/received.html". We have already received a first paper, from David Poole, entitled "Decision Theory, the Situation Calculus and Conditional Plans". To contribute to the discussion or debate section concerning submitted papers, please send your questions or comments by email to the area editors (benferhat at irit.fr, prade at irit.fr). Besides the reviewing process, we also organize News Journals and Newsletters. The Newsletter is sent out by e-mail regularly and is the medium through which information is distributed rapidly. The News Journal is published 3 or 4 times per year as a digest of the information that was sent out in the past Newsletters. Newsletters will i) inform about newly submitted papers (open to discussion), ii) announce Conferences, Books, Journal issues, PhD theses and technical reports, Career Opportunities and Training, and software dealing with uncertainty, and iii) include discussions on uncertainty reasoning. To include an announcement in one of the previous categories, please send an email to benferhat at irit.fr and Prade at irit.fr. Lastly, in order to maintain a list of people working on decision and reasoning under uncertainty, and in order to continue receiving Newsletters and News Journals, please send the following information by email: last and first name, affiliation, email address, personal web page. Please feel free to ask any questions about the ETAI-DRU area. We hope that you may consider the possibility of contributing papers and discussions to the ETAI-DRU area in the future.
Best regards, Salem Benferhat and Henri Prade ________________________________________________ 1. Call for papers Significant papers are invited from the whole spectrum of uncertainty in Artificial Intelligence research. Topics of interest include, but are not restricted to: Methods: Probability theory (Bayesian or not), Belief function theory, Upper and lower probabilities, Possibility theory, Fuzzy sets, Rough sets, Measures of information, ... Problems: Approximate reasoning, Decision-making under uncertainty, Planning under uncertainty, Uncertainty issues in learning and data mining, Algorithms for uncertain reasoning, Formal languages for representing uncertain information, Belief revision and plausible reasoning under uncertainty, Data fusion, Diagnosis, Inference under uncertainty, expert systems, Cognitive modelling and uncertainty, Practical applications, ... Area editors: Salem Benferhat, Henri Prade, IRIT, Toulouse, France Area editorial committee * Fahiem Bacchus, Univ. of Waterloo, Canada * Bernadette Bouchon-Meunier, LIP6, Univ. of Paris VI, France * Ronen I. Brafman, Stanford, USA * Roger Cooke, Tech. Univ. Delft, The Netherlands * Didier Dubois, IRIT, Toulouse, France * Francesc Esteva, IIIA-CSIC, Bellaterra, Spain * Finn V. Jensen, Aalborg Univ., Denmark * Jurg Kohlas, Univ. of Fribourg, Switzerland * Rudolf Kruse, Univ. of Magdeburg, Germany * Serafin Moral, Univ. of Granada, Spain * Prakash P. Shenoy, Univ. of Kansas, USA * Philippe Smets, IRIDIA, Free Univ. of Brussels, Belgium * Marek J. Druzdzel, Univ. of Pittsburgh, USA * Lech Polkowski, Warsaw Univ. of Technology, Poland 2. A brief summary of ETAI The following gives some information about ETAI (for complete information, take a look at the web page: http://www.ida.liu.se/ext/etai/). In a certain sense, ETAI is an electronic journal. However, it is not simply a traditional journal gone electronic. The differences may be summarized by the following table describing the functions performed by a conventional journal and by the ETAI:

                                   Conventional Journal            ETAI
  Distribution of the article      A major function                Not our business
  Reviewing and quality assurance  A major function                A major function
  Debate about published results   Difficult and not much done     A major function
  Publication of on-line software  Impossible                      Welcomed and already started
  Bibliographic services           Difficult and not much done     A major function

The basic service of a conventional (paper) journal is to have the article typeset, printed, and sent to the subscribers. The ETAI stays completely away from that process: it assumes the existence of First Publication Archives (similar to "Preprint Archives", but with a guarantee that the articles remain unchanged for an extended period of time). The ETAI only deals with URLs pointing to articles that have been published (but without international peer review) in First Publication Archives. Reviewing and quality control is a major topic for the ETAI, as for conventional journals. However, the ETAI pioneers the principle of a posteriori reviewing: the reviewing and acceptance process takes place after the article has been published. This has a number of consequences, but the major advantage from the point of view of the author is that he or she retains the priority right of the article and its results as of the original date of publication, and independently of reviewing delays and possible reviewing mistakes.
Reviewing in ETAI also differs from conventional journal reviewing in that it uses a succession of several "filters", rather than one single reviewing pass, and in that it is set up so as to encourage self-control on the side of the authors. The intention is that ETAI's quality control shall be considerably more strict and reliable than what is done in conventional journals. Besides the reviewing process, the ETAI also organizes News Journals (or newsletters) in each of its speciality areas. News Journals are fora for information about current events (workshops, etc), but they will also contain debate about recently published research results. Naturally, the on-line medium is much more appropriate for debate than what a conventional journal is. Compared to mailgroups, the News Journals offer a more persistent and reputable forum of discussion. Discussion contributions are preserved in such a way that they are accessible and referencable for the future. In other words, they also are to be considered as "published". One additional type of contributions in News Journals is for links to software that is available and can be run over the net. This is particularly valuable for software which can be run directly from a web page. Already the first issue of an ETAI News Journal publishes two such on-line software contributions. The creation of bibliographies, finally, is a traditional activity in research, but it is impractical in paper-based media since by their very nature, bibliographies ought to be updated as new articles arrive. The on-line maintenance of specialized bibliographies within each of its topic areas is a natural function in the ETAI. Generally speaking, it is clear that the electronic medium lends itself to a different grouping of functionalities than what is natural or even possible in the paper-based technology. For example, the bibliographic database underlying ETAI's bibliographic services is well integrated with the reviewing process and with the News Journals where new contributions to the literature are first reported. Similarly, debate items pertaining to a particular article will be accessible from the entry for the article itself. The ETAI therefore represents a novel approach to electronic publishing. We do not simply inherit the patterns from the older technology, but instead we have rethought the structure of scientific communication in order to make the best possible use of international computer networks as well as electronic document and database technologies. 3. The ETAI Reviewing process The reviewing process is the core activity of a scientific journal, electronic or not. The ETAI wishes to make full use of the electronic medium in order to obtain higher quality reviewing as a service to authors and readers alike. The reviewing will therefore be organized in the following, novel way. From the point of view of the author and the individual article, this works as follows: . The author(s) write(s) the article, and prepares it in postscript and/or PDF format. It is recommended to use ETAI style files. . An informal contact with the area editor may be appropriate in order to check that the article follows the basic formal criteria. . The author arranges to have the article published in a first publication archive (university E-Press, preprint archive, etc) in such a way that it is identified now and forever with a specific URL. . The article represented by the URL is submitted to the relevant ETAI area for inclusion in a news journal. . 
The area editor screens it and approves (hopefully) the inclusion of the article. . The article is subjected to open reviewing during at least six months. The author is likely to make changes to the article based on the feedback. . The author decides where he wants to go for acceptance. If she chooses e.g. JAIR or some other place except ETAI, nothing more to do. If the author submits the article (now probably modified) to for acceptance by the ETAI, then the area editor appoints two referees and gives them a short amount of time to do their job. Then the area editor decides based on the statements of the referees. See above regarding time of submission and time of decision. . Even after ETAI acceptance, the article "hangs around" and may engage additional debate. From giulio at dist.unige.it Thu Jul 23 02:27:31 1998 From: giulio at dist.unige.it (Giulio Sandini) Date: Thu, 23 Jul 1998 08:27:31 +0200 Subject: PhD Studentiships in Cognitive Neuroscience and Robotics Message-ID: <35B6D7D3.42DC@dist.unige.it> PhD Studentiships in Cognitive Neuroscience and Robotics Available The LIRA-Lab of the University of Genova in Italy is looking for PhD students interested in investigating the development of sensori-motor coordination through the implementation of artificial systems. LIRA-Lab is a small multidisciplinary group of people with backgrounds in biology and engineering and with a long-lasting experience on biologically motivated robots and systems . It is located in Genova, a medium-size city along the Italian Riviera, and operates within the Department of Computer Science of the Faculty of Engineering. More information about the lab's activity and references to past guests and students can be found at: http://www.lira.dist.unige.it:81 The successful candidate should have an undergraduate degree and should be interested in developing software and hardware models of sensori-motor mechanisms with particular emphasis on eye-head-hand coordination. Fellowships are supported by the European Union and must be assigned to young European students (According to the rules, Italian candidates and students with non-European passport are not eligible). The duration is not less than three years. Salary is in accordance with the guidelines suggested by the EU within the "Training and Mobility of Researcher" activity and it is of the order 18000 Euro per year. Interested candidates should contact: Prof. Giulio Sandini DIST - University of Genova Via Opera Pia, 13 16145 Genova - Italy Fax: +39 10 353.2154 e-mail: sandini at dist.unige.it From jhf at stat.Stanford.EDU Thu Jul 23 19:26:34 1998 From: jhf at stat.Stanford.EDU (Jerome H. Friedman) Date: Thu, 23 Jul 1998 16:26:34 -0700 (PDT) Subject: Paper on boosting. Message-ID: <199807232326.QAA14698@rgmiller.Stanford.EDU> *** Technical Report Available *** Additive Logistic Regression: a Statistical View of Boosting Jerome Friedman (jhf at stat.stanford.edu) Trevor Hastie (trevor at stat.stanford.edu) Robert Tibshirani (tibs at utstat.toronto.edu) ABSTRACT Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms often can be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. 
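The reweight-and-vote procedure just described can be made concrete with a short sketch. The code below is a generic discrete AdaBoost-style loop over decision stumps, assuming labels in {-1, +1}; it illustrates the procedure being analysed, not the additive logistic regression algorithms proposed by the authors, and all function names and parameters are illustrative.

# Generic AdaBoost-style sketch: fit weak learners on reweighted data, then take a
# weighted vote. Illustrates the procedure analysed in the paper; it is not the
# authors' additive-logistic algorithms. Labels are assumed to be in {-1, +1}.
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = (np.inf, 0, 0.0, 1)                      # (error, feature, threshold, sign)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > t, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, sign)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                         # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, sign = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)       # vote weight of this learner
        pred = sign * np.where(X[:, j] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)              # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)

# Toy check on a problem that a single stump cannot solve exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)
model = adaboost(X, y, rounds=20)
print("training accuracy:", np.mean(predict(model, X) == y))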
We show that this seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to that of boosting. Direct multi-class generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multi-class generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally making it more suitable to large scale data mining applications. Available by ftp from: "ftp://stat.stanford.edu/pub/friedman/boost.ps.Z" or "ftp://utstat.toronto.edu/pub/tibs/boost.ps.Z" Comments welcomed. From geoff at giccs.georgetown.edu Fri Jul 24 18:37:33 1998 From: geoff at giccs.georgetown.edu (Geoff Goodhill) Date: Fri, 24 Jul 1998 18:37:33 -0400 Subject: Paper available Message-ID: <199807242237.SAA01938@fathead.giccs.georgetown.edu> Dear Colleagues, The following paper, which has just appeared in Network 9, 419-432, is available in postscript form from http://www.giccs.georgetown.edu/~geoff/pubs.html Thanks, Geoff Goodhill ----- The influence of neural activity and intracortical connections on the periodicity of ocular dominance stripes Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences Washington DC Abstract Several factors may interact to determine the periodicity of ocular dominance stripes in cat and monkey visual cortex. Previous theoretical work has suggested roles for the width of cortical interactions and the strength of between-eye correlations. Here, a model based on an explicit optimization is presented that allows a thorough characterization of how these and other parameters of the afferent input could affect ocular dominance stripe periodicity. The principle conclusions are that increasing the width of within-eye correlations leads to wider columns, and, surprisingly, that increasing the width of cortical interactions can sometimes lead to narrower columns. From nkasabov at commerce.otago.ac.nz Mon Jul 27 13:46:17 1998 From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov) Date: Mon, 27 Jul 1998 13:46:17 NZST-12NZDT Subject: Two post-doc positions in connectionist systems Message-ID: <199807270147.NAA29139@arwen.otago.ac.nz> POSTDOCTORAL RESEARCH FELLOWS (TWO POSITIONS) CONNECTIONIST-BASED INFORMATION SYSTEMS Applications are invited for two positions of Postdoctoral Research Fellow within the Department of Information Science, University of Otago. Initially this is a two year fixed term appointment with the possibility of renewal. The successful applicants should have a PhD degree in Information Science, Computer Science, Electrical Engineering or Mathematics. 
Knowledge on contemporary methods and techniques for information processing and intelligent information systems (neural networks, fuzzy logic, evolutionary systems), excellent programming skills (Java, C++, Delphi), experience working with operating systems for PC and SUN platforms in a distributed network environment, is desirable. In collaboration with other universities in New Zealand and abroad, the department is working on a research project "Connectionist-Based Intelligent Information Systems" funded by the New Zealand Foundation for Research, Science and Technology (FRST). This project involves programming and using AI systems (neural networks, rule-based systems, fuzzy systems, evolutionary computation, rule extraction algorithms) for single platform or in a distributed platform-independent environment on the WWW. It also aims at applying developed methodologies and tools for adaptive speech recognition, language translation, image processing, adaptive control and data mining in bioinformatics. Commencing salary will be within the ranges available for each position. Further enquiries should be directed Associate Professor Nik Kasabov, +64-3-479 8319, email: nkasabov at otago.ac.nz, http: kel.otago.ac.nz/vacancies/. Reference number AG98/25 Closing date: Friday 21 August 1998 METHOD OF APPLICATION Further details regarding this position, the University and the application procedure are available from the Deputy Director, Personnel Services, University of Otago, PO Box 56, Dunedin, New Zealand (fax +64-3-474 1607) and from the Dunedin branch of the New Zealand Employment Service. Applicants should send two copies of their curriculum vitae together with the names, addresses and fax numbers of three referees, to the Deputy Director of Personnel Services by the specified closing date, quoting the appropriate reference number. Equal opportunity in employment is University policy. --------------------------------------------------------- Assoc.Professor Dr Nikola Kasabov phone:+64 3 479 8319 Department of Information Science fax: +64 3 479 8311 University of Otago,P.O.Box 56 nkasabov at otago.ac.nz Dunedin, New Zealand WWW http://divcom.otago.ac.nz:800/com/infosci/kel/home.htm ---------------------------------------------------------- From giles at research.nj.nec.com Mon Jul 27 13:48:52 1998 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 27 Jul 1998 13:48:52 -0400 Subject: journal paper on neural networks and NLP Message-ID: <199807271748.NAA22553@alta> The following journal paper on natural language learning and neural networks has been accepted in IEEE Transactions on Knowledge and Data Engineering and is now available at: http://www.neci.nj.nec.com/homepages/lawrence/ http://www.neci.nj.nec.com/homepages/giles/ The labeled corpus data set for this study is also available at: http://www.neci.nj.nec.com/homepages/sandiway/pappi/rnn/index.html ************************************************************************** "Natural Language Grammatical Inference with Recurrent Neural Networks" Steve Lawrence (1), C. 
Lee Giles (1,2), Sandiway Fong (1) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA {lawrence,giles,sandiway}@research.nj.nec.com ABSTRACT This paper examines the inductive inference of a complex grammar with neural networks -- specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky, in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability, and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior which is often not present with smaller grammars, and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through-time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated. Keywords: recurrent neural networks, natural language processing, grammatical inference, government-and-binding theory, gradient descent, simulated annealing, principles-and-parameters framework, automata extraction. ********************************************************************** A previously published related book chapter: Steve Lawrence, Sandiway Fong, C. Lee Giles, "Natural Language Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods," in Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing, Lecture Notes in AI, edited by Stefan Wermter, Ellen Riloff and Gabriele Scheler, Springer Verlag, New York, pp. 33-47, 1996. is available upon request. __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles == From terry at salk.edu Tue Jul 28 18:08:10 1998 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 28 Jul 1998 15:08:10 -0700 (PDT) Subject: NEURAL COMPUTATION 10:6 Message-ID: <199807282208.PAA16497@helmholtz.salk.edu> Neural Computation - Contents Volume 10, Number 6 - August 15, 1998 ARTICLES Chaotic Balanced State in a Model Of Cortical Circuits C. van Vreeswijk and H. Somplinsky Blind Source Separation and Deconvolution: The Dynamic Component Analysis Algorithm H. Attias and C. E. Schreiner NOTES Bias/Variance Decompositions for Likelihood-Based Estimators Tom Heskes The Influence Function of Principal Component Analysis by Self-Organizing Rule Isao Higuchi and Shinto Eguchi A Sparse Representation for Function Approximation Tomaso Poggio and Federico Girosi LETTERS An Equivalence Between Sparse Approximation and Support Vector Machines Federico Girosi Extended Kalman Filter-Based Pruning Method for Recurrent Neural Networks John Sum, Lai-wan Chan, Chi-sing Leung, and Gilbert H. 
Young Transform-Invariant Recognition by Association in a Recurrent Network Nestor Parga and Edmund Rolls Retrieval Dynamics in Oscillator Neural Networks Toshio Aoyagi and Katsunori Kitano A Fast And Robust Cluster Update Algorithm For Image Segmentation In Spin-Lattice Models Without Annealing--Visual Latencies Revisited Ralf Opara and Florentin Worgotter Probability Density Methods for Smooth Function Approximation and Learning in Populations of Tuned Spiking Neurons Terence David Sanger A Potts Neuron Approach To Communication Routing Jari Hakkinen, Martin Lagerholm, Carsten Peterson, and Bo Soderberg ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES USA Canada* Other Countries Student/Retired $50 $53.50 $78 Individual $82 $87.74 $110 Institution $285 $304.95 $318 * includes 7% GST (Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From Tony.Plate at MCS.VUW.AC.NZ Wed Jul 29 02:29:56 1998 From: Tony.Plate at MCS.VUW.AC.NZ (Tony Plate) Date: Wed, 29 Jul 1998 18:29:56 +1200 Subject: two papers on interpreting NNs and Guassian process models Message-ID: <199807290629.SAA26113@rialto.mcs.vuw.ac.nz> The following two papers on interpreting neural networks and Guassian process models are available for download from http://www.mcs.vuw.ac.nz/~tap/publications.html ---------- Accuracy versus interpretability in flexible modeling: implementing a tradeoff using Gaussian process models} Tony A. Plate School of Mathematical and Computing Sciences Victoria University of Wellington, Wellington, New Zealand Tony.Plate at vuw.ac.nz To appear in Behaviourmetrika, special issue on Analysis of knowledge representations in neural network models. Abstract: One of the widely acknowledged drawbacks of flexible statistical models is that the fitted models are often extremely difficult to interpret. However, if flexible models are constrained to be additive the fitted models are much easier to interpret, as each input can be considered independently. The problem with additive models is that they cannot provide an accurate model if the phenomenon being modeled is not additive. This paper shows that a tradeoff between accuracy and additivity can be implemented easily in Gaussian process models, which are a type of flexible model closely related to feedforward neural networks. One can fit a series of Gaussian process models that begins with the completely flexible and are constrained to be progressively more additive, and thus progressively more interpretable. Observations of how the degree of non-additivity and the test error change as the models become more additive give insight into the importance of interactions in a particular model. Fitted models in the series can also be interpreted graphically with a technique for visualizing the effects of inputs in non-additive models that was adapted from plots for generalized additive models. This visualization technique shows the overall effects of different inputs and also shows which inputs are involved in interactions and how strong those interactions are. 
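One simple way to realise such an accuracy/additivity tradeoff, offered here only as an illustrative sketch and not necessarily the parameterisation used in the paper, is to mix an additive kernel (a sum of one-dimensional RBF kernels) with a full-interaction kernel (their product) and move a single mixing weight between the two extremes. All names and settings below are assumptions.

# Hypothetical sketch of trading off additivity against flexibility in a Gaussian
# process, by mixing an additive kernel (sum of 1-D RBFs) with a full product RBF
# kernel. Illustrates the idea only; not the paper's exact model or code.
import numpy as np

def rbf_1d(a, b, lengthscale=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def mixed_kernel(X, Z, alpha=0.5):
    """alpha=1 -> purely additive (interpretable); alpha=0 -> full interactions."""
    add = sum(rbf_1d(X[:, j], Z[:, j]) for j in range(X.shape[1]))
    full = np.ones((X.shape[0], Z.shape[0]))
    for j in range(X.shape[1]):
        full *= rbf_1d(X[:, j], Z[:, j])
    return alpha * add + (1.0 - alpha) * full

def gp_predict(Xtrain, ytrain, Xtest, alpha=0.5, noise=0.1):
    K = mixed_kernel(Xtrain, Xtrain, alpha) + noise * np.eye(len(Xtrain))
    Ks = mixed_kernel(Xtest, Xtrain, alpha)
    return Ks @ np.linalg.solve(K, ytrain)          # GP posterior mean

# Toy data with an interaction term: the purely additive model (alpha=1) misses x0*x1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = X[:, 0] + X[:, 1] + X[:, 0] * X[:, 1]
for alpha in (1.0, 0.5, 0.0):
    err = np.mean((gp_predict(X, y, X, alpha) - y) ** 2)
    print(f"alpha={alpha}: training MSE {err:.4f}")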
---------- Visualizing the function computed by a feedforward neural network Tony Plate (Victoria University of Wellington) Joel Bert (University of British Columbia) John Grace (University of British Columbia) Pierre Band (Health Canada) Computer Science Technical Report CS-TR-98-5 Victoria University of Wellington, Wellington, New Zealand Abstract: A method for visualizing the function computed by a feedforward neural network is presented. It is most suitable for models with continuous inputs and a small number of outputs, where the output function is reasonably smooth, as in regression or probabilistic classification tasks. The visualization makes readily apparent the effects of each input and the way in which the function deviates from a linear function. The visualization can also assist in identifying interactions in the fitted model. The method uses only the input-output relationship and thus can be applied to any predictive statistical model, including bagged and committee models, which are otherwise difficult to interpret. The visualization method is demonstrated on a neural-network model of how the risk of lung cancer is affected by smoking and drinking. -- Tony Plate, Computer Science Voice: +64-4-495-5233 ext 8578 School of Mathematical and Computing Sciences Fax: +64-4-495-5232 Victoria University, PO Box 600, Wellington, New Zealand tap at mcs.vuw.ac.nz http://www.mcs.vuw.ac.nz/~tap From ed at itctx.com Sun Jul 26 06:37:27 1998 From: ed at itctx.com (ed@itctx.com) Date: Sun, 26 Jul 1998 06:37:27 -0400 Subject: Job: US-FL-Orlando, Data modeler/developer Message-ID: <104B43AAE1D2D111B27600A024CF0E2811438A@KEG> Intelligent Technologies Corporation (ITC) is a market leader in intelligent fraud detection software for the healthcare, insurance, and financial industries and has several openings as a result of its recent growth. DATA MODELER/DEVELOPER We are looking for a candidate with an applied mathematics/engineering/CS degree and several years of experience in data modeling and software development in a UNIX C/C++ environment. The ideal candidate will have a broad background covering the following skill sets: - Applications of neural networks, genetic algorithms, fuzzy logic and statistics. - C, C++, Java programming. - Experience in UNIX. - Good communication skills. We offer excellent compensation plus a complete benefits package including an employee stock option purchase plan. For consideration, send your resume including salary history to: ITC, Job Code N, 455 Douglas Ave., Suite 2155-23, Altamonte Springs, FL 32714. Or fax to (407) 862-2490. Or e-mail to: ed at itcTX.com. Check out our Web site at: http://www.itcTX.com Ed DeRouin, Ph.D. Chief Scientist Intelligent Technologies Corp. From harnad at coglit.soton.ac.uk Sun Jul 26 13:38:43 1998 From: harnad at coglit.soton.ac.uk (Stevan Harnad) Date: Sun, 26 Jul 1998 18:38:43 +0100 (BST) Subject: CogPrints: Archive your Preprints and Reprints Message-ID: To all biobehavioral and cognitive scientists: You are invited to archive your preprints and reprints in the CogPrints electronic archive. You will find that it is extremely easy. The Archive covers all the Cognitive Sciences: Psychology, Neuroscience, Biology, Computer Science, Linguistics and Philosophy. CogPrints is completely free for everyone, both authors and readers, thanks to a subsidy from the Electronic Libraries Programme of the Joint Information Systems Committee of the United Kingdom and the collaboration of the NSF-supported Physics Eprint Archive at Los Alamos.
CogPrints has just been opened for public automatic archiving. This means authors can now deposit their own papers automatically. The first wave of papers was invited and hand-archived by CogPrints in order to set a model for the form and content of the archive. To see the high level of contributors and contributions: http://cogprints.soton.ac.uk/ To archive your own papers automatically: http://cogprints.soton.ac.uk/author.html All authors are encouraged to archive their papers on their home servers as well. For further information: admin at coglit.soton.ac.uk From moreno at eel.upc.es Wed Jul 29 04:38:13 1998 From: moreno at eel.upc.es (Juan Manuel Moreno Arostegui) Date: Wed, 29 Jul 1998 10:38:13 +0200 Subject: CFP - Microneuro'99 Message-ID: <002401bdbacc$3ab0c540$98315393@muddy.upc.es> Apologies if you receive multiple copies of this message ------------------------------------------------------------------------ MicroNeuro'99 7th International Conference on Microelectronics for Neural, Fuzzy and Bio-inspired Systems Granada, Spain, April 7-9, 1999 MicroNeuro'99 is the seventh in a series of international conferences previously held in Dortmund, München, Edinburgh, Torino, Lausanne and Dresden, in each of which around a hundred specialists participated. The conference is dedicated to hardware implementations of artificial neural networks, fuzzy and neuro-fuzzy systems, and related bio-inspired computing architectures. The program will focus upon all aspects related to the hardware implementation of these systems, with special emphasis on specific VLSI analog, digital and pulse-coded circuits. DEADLINES Submission of papers and demo proposals: November 5, 1998 Notification of acceptance: December 20, 1998 Conference: April 7-9, 1999 INFORMATION: General Chair A. Prieto, Univ. Granada, E aprieto at ugr.es Japan Chair T. Shibata, Univ. Tokyo, J shibata at ee.t.u-tokyo.ac.jp USA Chair Andreas G. Andreou, Johns Hopkins Univ. Baltimore, USA andreou at jhu.edu Fax +34 958 248993 mneuro99 at atc.ugr.es http://atc.ugr.es/mneuro99 From pelillo at dsi.unive.it Thu Jul 30 20:59:23 1998 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Fri, 31 Jul 1998 02:59:23 +0200 (MET DST) Subject: Call for Papers: EMMCVPR'99 Message-ID: <199807310059.CAA20424@oink.dsi.unive.it> A non-text attachment was scrubbed... Name: not available Type: text Size: 4553 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/f4f2d9fb/attachment-0001.ksh From suem at soc.plym.ac.uk Fri Jul 31 12:07:45 1998 From: suem at soc.plym.ac.uk (Sue McCabe) Date: Fri, 31 Jul 1998 17:07:45 +0100 Subject: Research studentship advert Message-ID: <1.5.4.32.19980731160745.00738878@soc.plym.ac.uk> Centre for Neural and Adaptive Systems School of Computing University of Plymouth, UK Research Studentship: The development of an integrated model of auditory scene analysis The Centre for Neural and Adaptive Systems investigates computational neural models of major brain processes underlying intelligent behaviour and uses these models to provide inspiration for the development of novel neural computing architectures and systems for intelligent sensory information processing, condition monitoring, autonomous control and robotics. A Research Student is sought by the Centre to participate in its on-going research programme in the area of auditory scene analysis.
There are now a number of neural network models which address parts of the general problem, such as auditory streaming, pitch perception and sound localisation. The intention is to develop a comprehensive model of auditory scene analysis which integrates all these aspects in a way that is consistent with current psychophysical and neurophysiological data. Candidates for the studentship will be expected to hold, or be about to receive, a good Honours or Masters degree in a relevant discipline and should demonstrate a strong interest in the neural network modelling of sensory processing and perception. Candidates should also have, or expect to acquire, a good working knowledge of fundamental neuroscience principles and good computational modelling skills. The studentship provides tuition fees and maintenance support at a level consistent with UK research council studentships. Additional income may be available through part-time teaching. Further details about the Studentship can be obtained by telephoning Dr Sue McCabe on (+44) (0) 1752 232610, or by e-mail to sue at soc.plym.ac.uk. Dr Sue McCabe Centre for Neural and Adaptive Systems School of Computing University of Plymouth Plymouth PL4 8AA England tel: +44 17 52 23 26 10 fax: +44 17 52 23 25 40 e-mail: suem at soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/research/neural/index.html From POCASIP at aol.com Fri Jul 31 20:19:33 1998 From: POCASIP at aol.com (POCASIP@aol.com) Date: Fri, 31 Jul 1998 20:19:33 EDT Subject: Looking for a Signal and Image Processing; CMOS Design; Adaptive Control expert Message-ID: <3ab6ec5f.35c25f16@aol.com> The Advanced Signal and Image Processing Laboratory of POC-IOS (POC stands for Physical Optics Corporation, and IOS stands for Intelligent Optical Systems) is looking for a candidate who has expertise in at least two of the following areas: 1. Signal and Image Processing, 2. CMOS ASIC Design, and 3. Nonlinear Adaptive Control. + Knowledge of board-level electronic design, neural networks, and programming fluency in C and C++ are important assets. + Experience in solving real-world problems in a wide variety of applications is a definite plus. The Advanced Signal and Image Processing Laboratory has activities in, among others, the areas of adaptive control, hazardous waste analysis, skin injury diagnosis, silicon wafer inspection, food quality control, and target recognition, using neural computation implementations in software and hardware (both CMOS electronics and optics). POC-IOS is a rapidly growing, dynamic high-tech company with a focus on optical sensors and information processing. It employs about 25 people, more than half of whom are scientists from a variety of backgrounds. We are located in Torrance, California, which is a pleasant seaside town with a high standard of living and year-round perfect weather. Please send your application, including a curriculum vitae and three references, in ASCII only, by e-mail to POCASIP at aol.com. Emile Fiesler