From w.wezenbeek at elsevier.nl Tue Sep 2 09:39:11 1997
From: w.wezenbeek at elsevier.nl (Wilma van Wezenbeek)
Date: Tue, 02 Sep 1997 15:39:11 +0200
Subject: New address Editor-in-Chief Neurocomputing
Message-ID:

An embedded and charset-unspecified text was scrubbed...
Name: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/6598a804/attachment.ksh

From honavar at cs.iastate.edu Tue Sep 2 22:48:52 1997
From: honavar at cs.iastate.edu (Vasant Honavar)
Date: Tue, 2 Sep 1997 21:48:52 -0500 (CDT)
Subject: Call for Papers: ICGI-98
Message-ID: <199709030248.VAA02592@ren.cs.iastate.edu>

Dear Colleague:

A preliminary call for papers for ICGI-98 follows. This is being mailed to multiple mailing lists. My apologies if you receive multiple copies of this message as a result.

Vasant Honavar
honavar at cs.iastate.edu
----------------------------------------------------------------------------
Preliminary Call for Papers
http://www.cs.iastate.edu/~honavar/icgi98.html

Fourth International Colloquium on Grammatical Inference (ICGI-98)
Program Co-Chairs: Vasant Honavar and Giora Slutzki, Iowa State University
July 12-14, 1998
Iowa State University, Ames, Iowa, USA
----------------------------------------------------------------------------
In cooperation with the IEEE Systems, Man, and Cybernetics Society and the ACL Special Interest Group on Natural Language Learning (and possibly other organizations)
----------------------------------------------------------------------------
Index
* Introduction
* Conference Format
* Topics of Interest
* Program Committee
* Local Arrangements Committee
* Submission of Papers
* Submission of Tutorial Proposals
----------------------------------------------------------------------------
Introduction

Grammatical inference, variously referred to as automata induction, grammar induction, and automatic language acquisition, is the process of learning grammars and languages from data.
Machine learning of grammars finds a variety of applications in syntactic pattern recognition, adaptive intelligent agents, diagnosis, computational biology, systems modelling, prediction, natural language acquisition, and data mining and knowledge discovery. Traditionally, grammatical inference has been studied by researchers in several research communities, including information theory, formal languages, automata theory, language acquisition, computational linguistics, machine learning, pattern recognition, computational learning theory, and neural networks. Perhaps one of the first attempts to bring together researchers working on grammatical inference for an interdisciplinary exchange of research results took place under the aegis of the First Colloquium on Grammatical Inference, held at the University of Essex in the United Kingdom in April 1993. This was followed by the (second) International Colloquium on Grammatical Inference, held at Alicante in Spain, the proceedings of which were published by Springer-Verlag as volume 862 of the Lecture Notes in Artificial Intelligence, and the Third International Colloquium on Grammatical Inference, held at Montpellier in France, the proceedings of which were published by Springer-Verlag as volume 1147 of the Lecture Notes in Artificial Intelligence. Following the success of these events and of the Workshop on Automata Induction, Grammatical Inference, and Language Acquisition, held in conjunction with the International Conference on Machine Learning at Nashville in the United States in July 1997, the Fourth International Colloquium on Grammatical Inference will be held from July 12 through July 14, 1998, at Iowa State University in the United States.
----------------------------------------------------------------------------
Topics of Interest

The conference seeks to provide a forum for presentation and discussion of original research papers on all aspects of grammatical inference including, but not limited to:

* Different models of grammar induction: e.g., learning from examples, learning using examples and queries, incremental versus non-incremental learning, distribution-free models of learning, learning under various distributional assumptions (e.g., simple distributions), impossibility results, complexity results, characterizations of representational and search biases of grammar induction algorithms.

* Algorithms for induction of different classes of languages and automata: e.g., regular, context-free, and context-sensitive languages, interesting subsets of the above under additional syntactic constraints, tree and graph grammars, picture grammars, multi-dimensional grammars, attributed grammars, parameterized models, etc.

* Theoretical and experimental analysis of different approaches to grammar induction, including artificial neural networks, statistical methods, symbolic methods, information-theoretic approaches, minimum description length, complexity-theoretic approaches, heuristic methods, etc.

* Broader perspectives on grammar induction: e.g., acquisition of grammar in conjunction with language semantics, semantic constraints on grammars, language acquisition by situated agents and robots, acquisition of language constructs that describe objects and events in space and time, developmental and evolutionary constraints on language acquisition, etc.

* Demonstrated or potential applications of grammar induction in natural language acquisition, computational biology, structural pattern recognition, information retrieval, text processing, adaptive intelligent agents, systems modelling and control, and other domains.
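Several of the algorithmic topics above share a common starting point: building a prefix tree acceptor (PTA) from positive example strings, the structure that state-merging algorithms for regular-language inference (e.g., RPNI) then generalize. The sketch below is a generic textbook-style construction in Python, not code from any ICGI paper; the example strings are invented.

```python
# Build a prefix tree acceptor (PTA) from positive example strings.
# State 0 is the root (empty prefix); each distinct prefix gets one state.

def build_pta(positive):
    """Return (transitions, accepting) for a prefix tree acceptor."""
    transitions = {}          # (state, symbol) -> state
    accepting = set()
    next_state = 1
    for word in positive:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)  # the state reached by a full example accepts
    return transitions, accepting

def accepts(transitions, accepting, word):
    """Run the PTA on a word; reject on any missing transition."""
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

trans, acc = build_pta(["ab", "abb", "b"])
print(accepts(trans, acc, "ab"))   # True: seen as a positive example
print(accepts(trans, acc, "a"))    # False: "a" is only a prefix of an example
```

The PTA accepts exactly the training set; the interesting inference work lies in merging its states to generalize beyond it.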
----------------------------------------------------------------------------
Program Committee (Tentative)

The following people have agreed to serve on the program committee. Several other individuals are yet to confirm their participation.

R. Berwick, MIT, USA
M. Brent, Johns Hopkins University, USA
C. Cardie, Cornell University, USA
W. Daelemans, Tilburg University, Netherlands
D. Dowe, Monash University, Australia
D. Estival, University of Melbourne, Australia
J. Feldman, International Computer Science Institute, Berkeley, USA
L. Giles, NEC Research Institute, Princeton, USA
J. Gregor, University of Tennessee, USA
C. de la Higuera, LIRMM, France
T. Knuutila, University of Turku, Finland
E. Makinen, University of Tampere, Finland
L. Miclet, ENSSAT, Lannion, France
G. Nagaraja, Indian Institute of Technology, Bombay, India
H. Ney, University of Technology, Aachen, Germany
J. Nicolas, IRISA, France
R. Parekh, Iowa State University, USA
L. Pitt, University of Illinois at Urbana-Champaign, USA
D. Powers, Flinders University, Australia
L. Reeker, National Science Foundation, USA
C. Samuelsson, Lucent Technologies, USA
A. Sharma, University of New South Wales, Australia
E. Vidal, U. Politecnica de Valencia, Spain
----------------------------------------------------------------------------
Local Arrangements Committee

Dale Grosvenor, Iowa State University, USA
K. Balakrishnan, Iowa State University, USA
R. Parekh, Iowa State University, USA
J. Yang, Iowa State University, USA
----------------------------------------------------------------------------
Conference Format and Proceedings

The conference will include oral and possibly poster presentations of accepted papers, a small number of tutorials, and invited talks. All accepted papers will appear in the conference proceedings, to be published by a major publisher.
(Negotiations are underway with Springer-Verlag regarding the publication of the ICGI-98 proceedings as a volume of the Lecture Notes in Artificial Intelligence, a subseries of the Lecture Notes in Computer Science.)
----------------------------------------------------------------------------
Submission of Papers

Postscript versions of papers, no more than 12 pages long (including figures, tables, and references) and prepared according to the formatting guidelines, should be submitted electronically to icgi98-submissions at cs.iastate.edu. The formatting guidelines (including commonly used word-processor macros and templates) will be placed online shortly. In those rare instances where authors are unable to submit postscript versions of their papers electronically, we will try to accommodate them. Each paper will be rigorously refereed by at least 2 reviewers for technical soundness, originality, and clarity of presentation.

Deadlines

The relevant schedule for paper submissions is as follows:
* March 1, 1998: Deadline for receipt of manuscripts
* April 21, 1998: Notification of acceptance
* May 15, 1998: Camera-ready copies due
----------------------------------------------------------------------------
Submission of Proposals for Tutorials

The conference will include a small number of short (2-hour) tutorials on selected topics in grammatical inference. Some examples of possible tutorial topics are: Hidden Markov Models, Grammatical Inference Applications in Computational Biology, and PAC Learnability of Grammars. This list is meant only to be suggestive, not exhaustive.
Those interested in presenting a tutorial should submit a proposal (in plain text format) by electronic mail to icgi-submissions at cs.iastate.edu, including:
* A brief abstract (300 words or less) describing the topics to be covered
* A brief description of the target audience and their expected background
* A brief curriculum vitae including the proposer's relevant qualifications and publications

The relevant schedule for tutorials is as follows:
* March 1, 1998: Deadline for receipt of tutorial proposals
* April 1, 1998: Notification of acceptance
* May 15, 1998: Tutorial notes due
----------------------------------------------------------------------------

From aku at cts-fs1.du.se Wed Sep 3 05:45:14 1997
From: aku at cts-fs1.du.se (Andreas Kuehnle)
Date: Wed, 3 Sep 1997 11:45:14 +0200
Subject: Post-doctoral position
Message-ID: <9709030945.AA11521@cts.du.se>

Post-doctoral position in image processing and neural networks.

Available: Nov. 1, 1997
Duration: 1-2 years
At: CTS - Center for Research on Transportation and Society, Borlange, Sweden

CTS is an interdisciplinary transport research institute attached to Dalarna University, located 2 hours northwest of Stockholm. Research areas at CTS are transport economics, transport sociology, and transport technology.

Description of position: The position involves a complete camera / image processing / neural network classifier chain and will be used to determine road conditions. The project is financed by the Swedish National Road Administration.

Candidate profile:
- Ph.D. in image processing or signal processing
- Knowledge of statistics
- Programming in C or Matlab
- Willingness to acquire images outdoors in inclement weather (cold, rain, snow)

Please send applications to:
Andreas Kuehnle
CTS, Dalarna University
78188 Borlange, Sweden
Email: Andreas.Kuehnle at cts.du.se

From websom at nodulus.hut.fi Wed Sep 3 06:50:59 1997
From: websom at nodulus.hut.fi (websom-project)
Date: Wed, 3 Sep 1997 13:50:59 +0300
Subject: News about SOM and WEBSOM
Message-ID: <199709031050.NAA28028@nodulus.hut.fi>
----------------------------------------------------------------------
SOM:

The second revised edition of the book "Self-Organizing Maps" (Teuvo Kohonen, Springer, 1997) is available. See the page http://james.hut.fi/nnrc/new_book.html for further information.

The collection of studies on the Self-Organizing Map (SOM) and Learning Vector Quantization (LVQ) now contains 2952 papers. The bibliography can be accessed through the page http://james.hut.fi/nnrc/refs/
----------------------------------------------------------------------
WEBSOM - Self-Organizing Map for Internet Exploration:

The WEBSOM method for organizing textual document collections has undergone considerable development. The clustering of documents by topic has improved significantly. New demos can be accessed via the WEBSOM home page http://websom.hut.fi/websom/

There are maps of:
- over a million documents from over 80 Usenet newsgroups
- the abstracts of WSOM'97 (Workshop on Self-Organizing Maps)
- three Usenet newsgroups separately: comp.ai.neural-nets, sci.lang, and sci.cognitive

A bibliography of WEBSOM papers can be found on the home page.
----------------------------------------------------------------------

From padams at brain.neurobio.sunysb.edu Wed Sep 3 12:00:25 1997
From: padams at brain.neurobio.sunysb.edu (Paul R.
Adams [Neurobiology])
Date: Wed, 3 Sep 1997 16:00:25 +0000
Subject: Postdoctoral position in Computational Neuroscience at Stony Brook
Message-ID:

Dear Colleague,

We are pleased to announce the availability of a 1-year postdoctoral appointment in computational neurobiology, to be held at Stony Brook and financed by the Swartz Fund for Computational Neuroscience. The position is described in the attached announcement, which is also appearing as an advertisement in "Science". Please note that preference will be given to theoretical scientists who will work on projects closely linked to experimental neuroscience, and that suitable candidates will be eligible for future Swartz Fund support. Please draw this opportunity to the attention of any of your colleagues to whom it might be of interest.

Sincerely,
P. Adams, L. Mendell, R. Shrock and S. McLaughlin
Scientific Committee, Stony Brook component of the Swartz Fund

** Stony Brook University Program in Computational Neuroscience **

The Swartz Foundation has recently announced the establishment of the Swartz Fund for Computational Neuroscience to promote collaborative studies between neuroscientists at SUNY-Stony Brook and at Cold Spring Harbor Laboratories and researchers working in the mathematical, physical and computer sciences. The aim is to understand the algorithms that the brain actually uses. Stony Brook and Cold Spring Harbor have strong programs in the areas cited above, and support from the Swartz Fund will allow the growth of interactive research in brain theory. In the first year, the Stony Brook component of this initiative will hire 1-2 scientists at the postdoctoral level to work either with an existing faculty member, or independently but in association with existing faculty. The salary will be highly competitive. Candidates should submit a one-page summary of their research interests and goals, a CV, and the names of 3 referees. For full consideration, applications should be received by September 30, 1997.
The fellow will be eligible for further Swartz Fund support. Further information about this position can be obtained at http://www.neurobio.sunysb.edu or from any of the following individuals:
- Paul Adams (Neurobiology, to whom applications should be sent; PAdams at neurobio.sunysb.edu)
- Lorne Mendell (Neurobiology; LMendell at neurobio.sunysb.edu)
- Robert Shrock (Institute of Theoretical Physics; shrock at insti.physics.sunysb.edu)
- Stuart McLaughlin (Physiology and Biophysics; smcl at epo.som.sunysb.edu)
*******************************************

From Pregenz at dpmi.tu-graz.ac.at Thu Sep 4 05:44:07 1997
From: Pregenz at dpmi.tu-graz.ac.at (Martin)
Date: Thu, 4 Sep 1997 10:44:07 +0100
Subject: Stability problems with LVQ algorithms
Message-ID:

We have observed stability problems with LVQ algorithms in our application on EEG data. Analysis of these problems resulted in general conditions under which the codebook vectors drift away from the data with both LVQ3 and LVQ1 training. A paper which demonstrates the problems on simple 2-dimensional data is in preparation. A first draft can be downloaded from:
FTP-host: fdpmial03.tu-graz.ac.at
FTP-filename: /pub/outgoing/lvq_stab.ps.Z (135kB)
              /pub/outgoing/lvq_stab.ps (733kB)

The problems are also outlined in my dissertation: "Distinction Sensitive Learning Vector Quantization", University of Technology, Graz, Austria, 1997. Comments are welcome to pregenz at dpmi.tu-graz.ac.at.

Martin Pregenzer

From giro-ci0 at wpmail.paisley.ac.uk Thu Sep 4 08:58:47 1997
From: giro-ci0 at wpmail.paisley.ac.uk (Mark Girolami)
Date: Thu, 04 Sep 1997 12:58:47 +0000
Subject: I&ANN'98 Workshop - Final Call for Papers
Message-ID:

Final Call for Papers.
International Workshop on INDEPENDENCE & ARTIFICIAL NEURAL NETWORKS / I&ANN'98
University of La Laguna, Tenerife, Spain
February 9-10, 1998
http://www.compusmart.ab.ca/icsc/iann98.htm

This workshop will take place immediately prior to the International ICSC Symposium on ENGINEERING OF INTELLIGENT SYSTEMS / EIS'98 at the University of La Laguna, Tenerife, Spain, February 11-13, 1998: http://www.compusmart.ab.ca/icsc/eis98.htm
******************************************
TOPICS

Recent ANN research has developed from networks which find correlations in data sets to the more ambitious goal of finding independent components of data sets. The workshop will concentrate on those neural networks which find independent components and on the associated networks whose learning rules use contextual information to organize their learning. The topic falls into (at least) three main streams of current ANN research:

1. Independent Component Analysis, which has most recently been successfully applied to the problem of "blind separation of sources", such as the recovery of a single voice from a mixture/convolution of voices. Such methods normally use either information-theoretic criteria or higher-order statistics to perform the separation.

2. Identification of independent sources: the seminal experiment in this field is the identification of single bars from an input grid containing mixtures of bars. Factor analysis (or generative models) has been a popular recent method for this problem.

3. Using contextual information to identify structure in data.

We envisage a single-track program over two days (February 9-10, 1998) with many opportunities for informal discussion.
******************************************
INTERNATIONAL SCIENTIFIC COMMITTEE
- Luis Almeida, INESC, Portugal
- Tony Bell, Salk Institute, USA
- Andrew Cichocki, RIKEN Institute, Japan
- Colin Fyfe, University of Paisley, U.K.
- Mark Girolami, University of Paisley, U.K.
- Peter Hancock, University of Stirling, U.K.
- Juha Karhunen, Helsinki University of Technology, Finland
- Jim Kay, University of Glasgow, U.K.
- Erkki Oja, Helsinki University of Technology, Finland
******************************************
INFORMATION FOR PARTICIPANTS AND AUTHORS

Registrations are available for the workshop only (February 9-10, 1998), or combined with the EIS'98 symposium (February 11-13, 1998). The registration fee for the 2-day workshop is estimated at approximately Ptas. 37,000 per person and includes:
- Use of facilities and equipment
- Lunches, dinners and coffee breaks
- Welcome wine & cheese party
- Proceedings in print (workshop only)
- Proceedings on CD-ROM (workshop and EIS'98 conference)
- Daily transportation between hotels in Santa Cruz and the workshop site

The regular registration fee for the EIS'98 symposium (February 11-13, 1998) is estimated at Ptas. 59,000 per person, but a reduction will be offered to workshop participants. Separate proceedings will be printed for the workshop, but all respective papers will also be included on the CD-ROM covering the I&ANN'98 workshop and the EIS'98 symposium. As a bonus, workshop participants will thus automatically also receive the conference proceedings (CD-ROM version). We anticipate that the proceedings will be published as a special issue of a journal.
******************************************
SUBMISSION OF PAPERS

Prospective authors are requested to send a 4-6 page report of their work for evaluation by the International Scientific Committee. All reports must be written in English, starting with a succinct statement of the problem, the results achieved, their significance, and a comparison with previous work. The report should also include:
- Title of workshop (I&ANN'98)
- Title of proposed paper
- Authors' names, affiliations, addresses
- Name of author to contact for correspondence
- E-mail address and fax number of contact author
- Topics which best describe the paper (max. 5 keywords)

Submissions may be made by airmail or electronic mail to:
Dr. Colin Fyfe
Department of Computing and Information Systems
The University of Paisley
High Street, Paisley, PA1 2BE, Scotland
Email: fyfe0ci at paisley.ac.uk
Fax: +44-141-848-3542
******************************************
SUBMISSION DEADLINE

It is the intention of the organizers to have the proceedings available for the delegates. Consequently, the submission deadline of September 15, 1997 has to be strictly respected.
******************************************
IMPORTANT DATES
Submission of reports: September 15, 1997
Notification of acceptance: October 15, 1997
Delivery of full papers: November 15, 1997
I&ANN'98 Workshop: February 9-10, 1998
EIS'98 Conference: February 11-13, 1998
******************************************
LOCAL ARRANGEMENTS

For details about local arrangements, please consult the EIS'98 website at http://www.compusmart.ab.ca/icsc/eis98.htm
******************************************
FURTHER INFORMATION

For further information please contact:
- Dr. Colin Fyfe, Department of Computing and Information Systems, The University of Paisley, High Street, Paisley PA1 2BE, Scotland; E-mail: fyfe0ci at paisley.ac.uk; Fax: +44-141-848-3542
or
- ICSC Canada, International Computer Science Conventions, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada; E-mail: icsc at compusmart.ab.ca; Fax: +1-403-387-4329; WWW: http://www.compusmart.ab.ca/icsc

From giles at research.nj.nec.com Thu Sep 4 10:30:40 1997
From: giles at research.nj.nec.com (Lee Giles)
Date: Thu, 4 Sep 97 10:30:40 EDT
Subject: 1995 Citation Impact Factors
Message-ID: <9709041430.AA14972@alta>

Impact factors change from year to year. It is worth comparing these to those for 1994 (on my homepage). The 1996 citation reports are not available yet. These numbers come from the 1995 JCR (Journal Citation Reports); for more details please see the Journal Citation Reports.
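For concreteness, the impact factor for a given year is conventionally computed as the number of citations in that year to a journal's articles from the two preceding years, divided by the number of citable articles the journal published in those two years. A minimal sketch in Python, with invented figures (not the real counts behind any journal listed in this message):

```python
# JCR-style impact factor: citations in year Y to articles published in
# years Y-1 and Y-2, divided by the number of citable articles published
# in Y-1 and Y-2. All counts below are invented for illustration.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical journal: 300 citations in 1995 to the 250 articles it
# published in 1993-1994.
print(round(impact_factor(300, 250), 3))  # -> 1.2
```

This per-article normalization is what lets small and large journals be compared on the same scale.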
The impact factor is a measure of the frequency with which the average article in a journal has been cited in a particular year. This is a normalized measure, so that small and large journals can be compared. For more details see Journal Citation Reports. Below are impact factors for journals listed under Computer Science, Artificial Intelligence. (Note that I have added in some additional journals that may be of interest in AI.) The abbreviations are those used by JCR.

JOURNAL ABBREVIATION    IMPACT FACTOR
COGNITIVE BRAIN RES     2.222
IEEE T PATTERN ANAL     1.940
NEURAL COMPUT           1.700
IEEE T NEURAL NETWOR    1.581
ARTIF INTELL            1.560
INT J COMPUT VISION     1.490
MACH LEARN              1.264
NEURAL NETWORKS         1.262
CHEMOMETR INTELL LAB    1.158
KNOWL ACQUIS            1.143
IEEE T ROBOTIC AUTOM    1.019
NETWORK-COMP NEURAL     0.950
AI MAG                  0.857
ARTIF INTELL MED        0.850
EXPERT SYST APPL        0.790
IEEE T KNOWL DATA EN    0.720
INT J APPROX REASON     0.630
PATTERN RECOGN          0.621
NEUROCOMPUTING          0.609
INT J INTELL SYST       0.606
IEEE EXPERT             0.597
IMAGE VISION COMPUT     0.484
DECIS SUPPORT SYST      0.442
APPL ARTIF INTELL       0.431
PATTERN RECOGN LETT     0.431
AI EDAM                 0.386
ENG APPL ARTIF INTEL    0.295
J AUTOM REASONING       0.247
J EXP THEOR ARTIF IN    0.238
ARTIF INTELL ENG        0.226
INT J SOFTW ENG KNOW    0.212
KNOWL-BASED SYST        0.167
ARTIF INTELL REV        0.114
AI APPLICATIONS         0.089
J INTELL ROBOT SYST     0.086
J INTELL MANUF          0.063
APPL INTELL             0.050
COMUT ARTIF INTELL      0.045
AVTOM VYCHISL TEKH+     0.044
NEURAL PROCESS LETT     0.000

On my homepage I have the 1994 citation impact factors for AI and AI-related journals, plus citation impact factors for some other journal categories. This information is copyrighted by Journal Citation Reports.

Regards,
Lee Giles
__
C.
Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 / www.neci.nj.nec.com/homepages/giles.html
==

From nat at clarity.Princeton.EDU Thu Sep 4 12:31:43 1997
From: nat at clarity.Princeton.EDU (Nathalie Japkowicz)
Date: Thu, 4 Sep 1997 12:31:43 -0400 (EDT)
Subject: NIPS*97 Workshop on Autoassociators
Message-ID: <199709041631.MAA12023@confusion.Princeton.EDU>

***************************** NIPS*97 Workshop Announcement *****************************

Advances in Autoencoder/Autoassociator-Based Computations
Date: Friday, December 5, 1997
Location: Breckenridge, Colorado
Organizers: Nathalie Japkowicz, Mark A. Gluck and Stephen J. Hanson

Workshop Description:

Autoencoders/autoassociators have had a troubled history. At first believed to have great potential for image compression and speech processing, they were subsequently shown not to outperform Principal Component Analysis (PCA), a linear dimensionality-reduction method, even in the presence of nonlinearities in their hidden layer. Nonetheless, because of their intriguing nature, their study was pursued, and it was shown that under certain circumstances they are capable of performing various types of nonlinear dimensionality reduction. More recently, they have also proven very useful for estimating the reliability of learning algorithms, for novelty detection, and for cognitive modeling of the hippocampus and of natural-language grammar acquisition. Furthermore, they appear promising for time-series analysis when used in recursive mode. Despite their various successes, however, autoencoders/autoassociators have had a difficult time re-establishing themselves fully, and the extent of their capabilities remains controversial.
The purpose of this workshop is to attempt to revive this extremely powerful, yet conceptually simple, device in order to generate a more elaborate understanding of its inner workings and to explore the various practical and cognitive applications for which it can be useful. More specifically, we hope to bring together theoretical and experimental researchers from both engineering and cognitive science backgrounds who have studied autoencoders/autoassociators, used them in various ways, or designed and studied novel autoencoder/autoassociator-based architectures of interest. We believe that by enabling the sharing of ideas and experiences, this forum will help us understand autoencoders/autoassociators better and generate new research directions. In addition, autoencoder/autoassociator-based systems will be compared to other types of autoassociative schemes, and their strengths and weaknesses evaluated against them.

List of Speakers:
* Pierre Baldi
* Horst Bischof / Andreas Weingessel / Kurt Hornik
* Kostas Diamantaras
* Zoubin Ghahramani / Sam Roweis / Geoffrey Hinton
* Mohamad Hassoun / Agus Sudjianto
* Robert Hecht-Nielsen
* Nathalie Japkowicz / Stephen J. Hanson / Mark A. Gluck
* Erkki Oja
* Jordan Pollack
* Holger Schwenk
* Andreas Weingessel / Horst Bischof / Kurt Hornik
(An additional two or three speakers may join the program.)

Titles and abstracts of the talks are posted at: http://paul.rutgers.edu/~nat/Workshop/workshop.html

From rafal at idsia.ch Thu Sep 4 14:21:11 1997
From: rafal at idsia.ch (Rafal Salustowicz)
Date: Thu, 4 Sep 1997 20:21:11 +0200 (MET DST)
Subject: Probabilistic Incremental Program Evolution
Message-ID:

PROBABILISTIC INCREMENTAL PROGRAM EVOLUTION
Rafal Salustowicz and Juergen Schmidhuber
IDSIA, Switzerland
To appear in Evolutionary Computation 5(2)

Probabilistic Incremental Program Evolution (PIPE) is a novel technique for automatic program synthesis.
We combine probability vector coding of program instructions, Population-Based Incremental Learning, and tree-coded programs like those used in some variants of Genetic Programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. In each iteration it uses the best program to refine the distribution. Thus, it stochastically generates better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime.

ftp://ftp.idsia.ch/pub/rafal/PIPE.ps.gz
http://www.idsia.ch/~rafal/research.html

Short version: Probabilistic Incremental Program Evolution: Stochastic Search Through Program Space. In van Someren, M., Widmer, G., editors, Machine Learning: ECML-97, pages 213-220, Lecture Notes in Artificial Intelligence 1224, Springer-Verlag, Berlin Heidelberg.
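Of the ingredients named in the abstract, Population-Based Incremental Learning (PBIL) is the easiest to sketch. The toy below applies PBIL to plain bitstrings on the OneMax problem (maximize the number of 1-bits); PIPE itself adapts a distribution over tree-coded programs, which this sketch does not attempt. Population size, learning rate, and generation count are arbitrary choices, not values from the paper.

```python
import random

# Minimal PBIL sketch: keep a probability vector over bits, sample a
# population from it, and shift the vector toward the best individual.
# Fitness here is OneMax (the number of 1-bits).

def pbil_onemax(n_bits=16, pop_size=20, lr=0.1, generations=200, seed=0):
    rng = random.Random(seed)
    probs = [0.5] * n_bits                 # probability of bit i being 1
    for _ in range(generations):
        population = [[1 if rng.random() < p else 0 for p in probs]
                      for _ in range(pop_size)]
        best = max(population, key=sum)    # fittest individual of this generation
        # Move each probability a fraction lr toward the best individual's bit.
        probs = [(1 - lr) * p + lr * b for p, b in zip(probs, best)]
    return probs

final = pbil_onemax()
print([round(p, 2) for p in final])  # probabilities should be close to 1.0
```

As in PIPE, only the best individual of each generation refines the distribution; the rest of the population is discarded after evaluation.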
(ftp://ftp.idsia.ch/pub/rafal/ECML_PIPE.ps.gz)

Rafal Salustowicz
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA)
Corso Elvezia 36, 6900 Lugano, Switzerland
E-mail: rafal at idsia.ch, raf at cs.tu-berlin.de, raf at psych.stanford.edu
Tel (IDSIA): +41 91 91198-38 / Tel (office): +41 91 91198-32 / Fax: +41 91 91198-39
WWW: http://www.idsia.ch/~rafal

From giles at research.nj.nec.com Thu Sep 4 17:42:37 1997
From: giles at research.nj.nec.com (Lee Giles)
Date: Thu, 4 Sep 97 17:42:37 EDT
Subject: Paper available: statistical distribution of NN results
Message-ID: <9709042142.AA15822@alta>

The following manuscript has been accepted by IEEE Transactions on Neural Networks and is available at the WWW site listed below:
www.neci.nj.nec.com/homepages/giles/papers/IEEE.TNN.statistical.dist.of.trials.ps.Z

We apologize in advance for any multiple postings that may be received.

On the Distribution of Performance from Multiple Neural Network Trials
Steve Lawrence (1), Andrew D. Back (2), Ah Chung Tsoi (3), C. Lee Giles (1,4)
(1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA
(2) Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan
(3) Faculty of Informatics, University of Wollongong, Northfields Avenue, Wollongong NSW 2522, Australia
(4) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA

ABSTRACT

The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions.
However, in many cases the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance for more complex target functions. We propose new guidelines for reporting performance which provide more information about the actual distribution.

Keywords: neural networks, gradient training, backpropagation, error analysis, convergence, Gaussian distribution, probability distributions, statistical methods, box whiskers, Kolmogorov-Smirnov test, Mackey-Glass, phoneme classification.

__
C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 / www.neci.nj.nec.com/homepages/giles.html
==

From jdunn at cyllene.uwa.edu.au Fri Sep 5 01:15:46 1997
From: jdunn at cyllene.uwa.edu.au (John Dunn)
Date: Fri, 5 Sep 1997 13:15:46 +0800 (WST)
Subject: Eighth Australasian Mathematical Psychology Conference
Message-ID: <199709050515.NAA13274@cyllene.uwa.edu.au>

Third Call for Papers
Eighth Australasian Mathematical Psychology Conference
November 27-30, 1997
University of Western Australia, Perth, W.A., Australia

The Eighth Australasian Mathematical Psychology Conference (AMPC97) will be held at the University of Western Australia, Nedlands, W.A. 6907, from Thursday November 27 to Sunday November 30, 1997.
Details concerning the conference, registration, and submission of papers are available at the AMPC97 Web site: http://www.psy.uwa.edu.au/mathpsych/ International visitors to the conference may wish to log their itinerary with the World-wide Academic Visitor Exchange (WAVE) at: http://www.psy.uwa.edu.au/wave/ If you have already submitted an abstract to this conference, please feel free to ignore this message. ----------------------------------------------------- EXTENSION OF DEADLINE Please note that the deadline for submission of abstracts has been extended (for the first and only time) to September 30, 1997. ----------------------------------------------------- SYMPOSIA The following symposia have been accepted. If you wish to present a paper at any of these, please contact the relevant convenor, listed below. Requests for additional symposia should be directed to the conference organisers at mathpsych at psy.uwa.edu.au. Local energy detection in vision David Badcock, University of Western Australia david at psy.uwa.edu.au Nonlinear dynamics Robert A M Gregson, Australian National University Robert.Gregson at anu.edu.au Associative learning John K Kruschke, Indiana University kruschke at croton.psych.indiana.edu Computational models of memory Stephan Lewandowsky, University of Western Australia lewan at psy.uwa.edu.au Knowledge representation Josef Lukas, University of Halle, Germany j.lukas at psych.uni-halle.de Choice, decision, and measurement Anthony A J Marley, McGill University tony at hebb.psych.mcgill.ca Face recognition Alice O'Toole & Herve Abdi, University of Texas otoole at utdallas.edu Models of response time Roger Ratcliff, Northwestern University roger at eccles.psych.nwu.edu ----------------------------------------------------- From Simon.N.CUMMING at British-Airways.com Fri Sep 5 12:37:58 1997 From: Simon.N.CUMMING at British-Airways.com (Simon.N.CUMMING@British-Airways.com) Date: 05 Sep 1997 16:37:58 Z Subject: ANNOUNCEMENT: Two Day Conference 
in Bath, UK. 15-16Sep97 Message-ID: <"BSC400A1 970905163755510421*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/s=CUMMING/g=SIMON/i=N/"@MHS> ANNOUNCEMENT: NEURAL COMPUTING APPLICATIONS FORUM Two Day Conference in Bath, UK. 15-16th September 1997 In Collaboration with the Engineering and Physical Sciences Research Council. NEURAL COMPUTING APPLICATIONS FORUM (NCAF) ------------------------------------------ The purpose of NCAF is to promote widespread exploitation of neural computing technology by: - providing a focus for neural network practitioners. - disseminating information on all aspects of neural computing. - encouraging close co-operation between industrialists and academics. NCAF holds four two-day conferences per year, in the UK, with speakers from commercial and industrial organisations and universities. The focus of the talks is on practical issues in the application of neural network technology and related methods to solving real-world problems. The September event, in the beautiful and historic city of Bath, will be held in collaboration with EPSRC, and thanks to their support the costs of attending will be limited to accommodation and the social event. This meeting reports and discusses progress on a widespread programme of applications-directed research on neural computing which has been running in the UK for two years: the "KEY QUESTIONS" Programme. A brief glance at the programme will indicate that we have obtained the services of the foremost practitioners in the UK, and this will be complemented by poster sessions. Be assured, the "Industrial" contingent will be equally strong. Don't miss this first-rate opportunity to meet all the important people from both industry and academia.
PROGRAMME DETAILS: ----------------- The "Key Questions" Programme: Purpose and Context Chair: David Bounds (Recognition Systems) Static Pattern Processing ------------------------- Neural Networks for Visualisation of High-Dimensional Data Chris Bishop (Aston University) Autonomous Training Algorithms for Neural Networks Peter Rockett (Sheffield University) Neural Networks to Support Molecular Structure Matching Jim Austin (York University) Novelty Detection ----------------- Novelty Detection Lionel Tarassenko (Oxford University) Strategies for Applying Neural Computing to Time-varying Signals and their Application to Tyre Quality Control Tom Harris (Brunel University) Assessment of Neural Network Systems ------------------------------------ Performance Assessment and Generalisability of Neural Networks David Hand (Open University) Validation and Verification of Neural Network Systems Chris Bishop (Aston University) Neurocomputational models of auditory perception and speech recognition Sue McCabe (Plymouth University) A model based approach to optimal engine control Julian Mason (Cambridge Consultants) Non-Stationary Time Series Analysis ----------------------------------- Neural Computing for Non-stationary Medical Signal Processing Mahesan Niranjan (Cambridge University) Dynamic Traffic Monitoring using Neural Networks Howard Kirby (Leeds University) Non-stationary Feature Extraction and Tracking for the Classification of Turning Points in Multi-variate Time Series David Lowe (Aston University) Control and Systems Identification ---------------------------------- Approximation and Control of Industrial Nonlinear Dynamic Processes Julian Morris (Newcastle University) Local Model Networks for Dynamic Modelling and Internal Model Control of Industrial Plant George Irwin (Queen's University, Belfast) Neurofuzzy Model Building Algorithms and their Application in Non-stationary Environments Martin Brown (Southampton University) Issues raised by the Key Questions 
Programme -------------------------------------------- Neural network modelling of nickel based super alloys Joy Jones (Cambridge University) ===================================================================== SOCIAL PROGRAMME: ---------------- 15 September 1997: (EVENING) Conference Dinner in the Pump Rooms followed by a guided tour of the Roman Baths plus "Four Weighings and a Funeral" with Graham 'Rottweiler' Hesketh (Rolls-Royce) -------------------------------------------------------------------- REGISTRATION DETAILS: -------------------- Please note that attendance at this conference is free. Usual rates for meeting attendance are: for members, 20 pounds per day conference fee, excluding meals, social events and accommodation; for non-members, 100 pounds per day excluding meals, social events and accommodation. Thanks to support from EPSRC the only items that will be charged for this meeting are: accommodation and social events. There is, however, still a need to register. To register please e-mail ncafsec at brunel.ac.uk, giving your name and organisation, correspondence address, e-mail address and telephone number. Non-members are strongly encouraged to join (see membership details below). Please indicate if you wish to become a member. Please indicate if you require Bed and Breakfast Accommodation (25 pounds per night) for the nights of 14 and / or 15 September, and whether you wish to join the Conference Dinner and Tour of Roman Baths (25 pounds). NCAF MEMBERSHIP DETAILS: ------------------------ All amounts are in pounds Sterling, per annum. All members receive a quarterly newsletter and are eligible to vote at the AGM (but see note on corporate membership). Full (Corporate) Membership : 300 pounds (allows any number of people in the member organisation to attend meetings at member rates; voting rights are restricted to one, named, individual. Includes automatic subscription to the journal Neural Computing and Applications.) 
Individual Membership : 170 pounds (allows one, named, individual to attend meetings at member rates; includes journal) Associate Membership: 110 pounds: includes subscription to the journal and newsletter but does not cover admission to the meetings. Reduced (Student) Membership : 65 pounds including Journal; 30 pounds without journal. Applications for student membership should be accompanied by a copy of a current full-time student ID card, UB40, etc. For more information, or to become a member, Please contact NCAF, P.O. box 73, EGHAM, Surrey, UK. TW20 0YZ. Or telephone Sally Francis or Tom Harris on +44 1784 477271 or email ncafsec at brunel.ac.uk From rjb at psy.ox.ac.uk Mon Sep 8 07:32:45 1997 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Mon, 08 Sep 1997 12:32:45 +0100 Subject: Two papers available on 1) neural coding, and 2) psychophysics Message-ID: <3413E25D.2847@psy.ox.ac.uk> The following two papers are available. The first is on the nature of the neuronal code (rate coding, information theory and all that), and the second is on using neural nets to analyse psychophysical experiments. They can now both be found on my web page (hope they are of interest): http://www.mrc-bbc.ox.ac.uk/~rjb/ =================================================================== Title: Responses of neurons in primary and inferior temporal visual cortices to natural scenes Baddeley, Abbott, Booth, Sengpiel, Freeman, Wakeman, and Rolls (in press) To appear in Proceedings of the Royal Society B Abstract: The primary visual cortex (V1) is the first cortical area to receive visual input, and inferior temporal (IT) areas are among the last along the ventral visual pathway. We recorded, in area V1 of anaesthetised cat and area IT of awake macaque monkey, responses of neurons to videos of natural scenes. Responses were analysed to test various hypotheses concerning the nature of neural coding in these two regions. 
A variety of spike-train statistics were measured including spike-count distributions, interspike interval distributions, coefficients of variation, power spectra, Fano factors and different sparseness measures. All statistics showed non-Poisson characteristics and several revealed self-similarity of the spike trains. Spike-count distributions were approximately exponential in both visual areas for eight different videos and for counting windows ranging from 50 ms to 5 seconds. The results suggest that the neurons maximise their information carrying capacity while maintaining a fixed long-term-average firing rate (Baddeley 1996; Levy 1996), or equivalently, minimise their average firing rate for a fixed information carrying capacity. =================================================================== Insights into motion perception by observer modelling Roland Baddeley and Srimant P. Tripathy Journal of the Optical Society of America (In press) The statistical efficiency of human observers performing a simplified version of the motion detection task used by Newsome et al. is high but not perfect. This reduced efficiency may be because of noise internal to the observers, or it may be because the observers are using strategies that are different from that used by an ideal machine. We therefore investigated which of three simple models best accounts for the observers' performance. The models compared were: 1) a motion detector that uses the proportion of dots in the first frame that move coherently (as would an ideal machine), 2) a model that bases its decision on the number of dots that move, 3) a model that differentially weights motions occurring at different locations in the visual field (for instance differentially weighting the point of fixation and the periphery). We compared these models by explicitly modelling the human observers' performance.
We recorded the exact stimulus configuration on each trial together with the observer's response, and, for the different models, we found the parameters that best predicted the observer's performance in a least squares sense. We then used N-fold cross-validation to compare the models and hence the associated hypotheses. Our results show that the performance of observers is based on the proportion of dots moving, not the absolute number, and that there was no evidence for any differential spatial weighting. Whilst this method of modelling the observers' response is only demonstrated for one simple psychophysical paradigm, it is general and can be applied to any psychophysical framework where the entire stimulus can be recorded. -- Dr Roland Baddeley WWW: http://www.mrc-bbc.ox.ac.uk/~rjb/ Mail: Psychology Dept, Oxford email: rjb at psy.ox.ac.uk University, South Parks phone: +44-1865-271914 Road, Oxford, OX1 3UD, UK fax: +44-1865-272488 From jagota at cse.ucsc.edu Mon Sep 8 19:26:35 1997 From: jagota at cse.ucsc.edu (Arun Jagota) Date: Mon, 8 Sep 1997 16:26:35 -0700 Subject: volunteers needed for NIPS Message-ID: <199709082326.QAA06554@bristlecone.cse.ucsc.edu> Several student volunteers are needed for assistance with various tasks at NIPS*97. Volunteering can be a fun and educational experience. Furthermore, in exchange for about 9 hours of work, volunteers receive free registration to the component of NIPS (tutorials, conference, workshops) they volunteer their time towards. Check out http://www.cse.ucsc.edu/~jagota for detailed instructions on how to volunteer, details on the various tasks, etc. Or contact me at jagota at cse.ucsc.edu Arun Jagota From cmb35 at newton.cam.ac.uk Tue Sep 9 05:47:10 1997 From: cmb35 at newton.cam.ac.uk (C.M. Bishop) Date: Tue, 09 Sep 1997 10:47:10 +0100 Subject: Two papers available on-line Message-ID: <199709090947.KAA13549@feynman> Two Papers Available Online: PROBABILISTIC PRINCIPAL COMPONENT ANALYSIS (NCRG/97/010) Michael E.
Tipping and Christopher M. Bishop Neural Computing Research Group Aston University, Birmingham B4 7ET, U.K. http://neural-server.aston.ac.uk/Papers/postscript/NCRG_97_010.ps.Z Abstract: Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA. MIXTURES OF PRINCIPAL COMPONENT ANALYSERS (NCRG/97/003) Michael E. Tipping and Christopher M. Bishop Neural Computing Research Group Aston University, Birmingham B4 7ET, U.K. http://neural-server.aston.ac.uk/Papers/postscript/NCRG_97_003.ps.Z Abstract: Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm.
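As an illustration of the probabilistic PCA model announced above: the maximum-likelihood estimates also have a closed form in terms of the eigendecomposition of the sample covariance, which the following numpy sketch demonstrates (function name, variable names, and dimensions are my own invention, not code from the papers):

```python
import numpy as np

def ppca_ml(X, q):
    """Illustrative closed-form ML fit of probabilistic PCA.

    X : (n, d) data matrix; q : number of principal axes.
    Returns the loading matrix W (d, q) and the isotropic noise
    variance sigma2 (the mean of the d - q discarded eigenvalues).
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)              # centre the data
    S = (Xc.T @ Xc) / n                  # sample covariance
    evals, evecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]
    sigma2 = evals[q:].mean()            # ML estimate of the noise variance
    # Principal axes, scaled by the signal part of each retained eigenvalue
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2
```

Under the model the data covariance is W W' + sigma2 I; the EM algorithm mentioned in the abstract reaches the same solution iteratively, without forming the sample covariance explicitly.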
We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. --- ooo --- A complete, searchable database of publications from the Neural Computing Research Group at Aston can be found by going to the Group home page http://www.ncrg.aston.ac.uk/ and selecting `Publications' --- ooo --- From wojtek at msmail4.hac.com Tue Sep 9 22:46:16 1997 From: wojtek at msmail4.hac.com (Przytula, Krzysztof W) Date: 9 Sep 1997 18:46:16 -0800 Subject: job opening at Hughes Research Message-ID: RESEARCH OPPORTUNITIES AT HUGHES RESEARCH LABORATORIES Hughes Research Laboratories has an immediate opening for a Research Staff Member to join a team of scientists in the Signal and Image Processing Department. Team members in this department have developed novel, state-of-the-art sensor fusion systems, neural networks, time-frequency transforms, and image compression algorithms for use in both commercial and military applications. The successful candidate will investigate advanced signal and image processing techniques for information fusion and pattern recognition applications. Current work is focused on the application of information fusion techniques, neural networks, and computer vision techniques for automatic target recognition, automotive safety, and pattern recognition applications. Specific duties will include theoretical analysis, algorithm design, and software simulation. Candidates are expected to have a Ph.D. in Electrical Engineering, Applied Mathematics, or Computer Science. Strong analytical skills and demonstrated ability to perform creative research, along with experience in signal and image processing, information fusion, or pattern recognition are required. Practical experience with Matlab, C, or C++ is essential. Good communication and teamwork skills are key to success.
Based on government restrictions regarding the export of technical data, U.S. citizenship or resident alien status is required. Overlooking the Pacific Ocean and the coastal community of Malibu, the Research Laboratories provides an ideal environment for you to make the most of your scientific abilities. Our organization offers a competitive salary and benefits package. Additional information may be obtained from Lynn Ross. For immediate consideration, send your resume to: Lynn W. Ross Department RM21 Hughes Research Laboratories 3011 Malibu Canyon Road Malibu, CA 90265 FAX: (310) 317-5651 Internet: lross at msmail4.hac.com Proof of legal right to work in the United States required. An Equal Opportunity Employer. From rao at cs.rochester.edu Wed Sep 10 01:40:20 1997 From: rao at cs.rochester.edu (Rajesh Rao) Date: Wed, 10 Sep 1997 01:40:20 -0400 Subject: Tech Report: Space-Time Receptive Fields from Natural Images Message-ID: <199709100540.BAA22625@corvette.cs.rochester.edu> The following technical report on learning space-time receptive fields from natural images is available on the WWW page: http://www.cs.rochester.edu/u/rao/ or via anonymous ftp (see instructions below). Comments and suggestions welcome (This message has been cross-posted - my apologies to those who received it more than once). -- Rajesh Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-5492 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ =========================================================================== Efficient Encoding of Natural Time Varying Images Produces Oriented Space-Time Receptive Fields Rajesh P.N. Rao and Dana H.
Ballard Technical Report 97.4 National Resource Laboratory for the Study of Brain and Behavior Department of Computer Science, University of Rochester August 1997 The receptive fields of neurons in the mammalian primary visual cortex are oriented not only in the domain of space, but in most cases, also in the domain of space-time. While the orientation of a receptive field in space determines the selectivity of the neuron to image structures at a particular orientation, a receptive field's orientation in space-time characterizes important additional properties such as velocity and direction selectivity. Previous studies have focused on explaining the spatial receptive field properties of visual neurons by relating them to the statistical structure of static natural images. In this report, we examine the possibility that the distinctive spatiotemporal properties of visual cortical neurons can be understood in terms of a statistically efficient strategy for encoding natural time varying images. We describe an artificial neural network that attempts to accurately reconstruct its spatiotemporal input data while simultaneously reducing the statistical dependencies between its outputs. The network utilizes spatiotemporally summating neurons and learns efficient sparse distributed representations of its spatiotemporal input stream by using recurrent lateral inhibition and a simple threshold nonlinearity for rectification of neural responses. When exposed to natural time varying images, neurons in a simulated network developed localized receptive fields oriented in both space and space-time, similar to the receptive fields of neurons in the primary visual cortex. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/space-time.ps.Z WWW URL: http://www.cs.rochester.edu/u/rao/ 26 pages; 1040K compressed. 
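The network summarized in the abstract above combines accurate reconstruction with redundancy reduction through recurrent lateral inhibition and a threshold nonlinearity. As a rough, generic caricature of that style of computation (with invented names and sizes, and not the report's actual architecture), one can iterate rectified responses that inhibit each other in proportion to receptive-field overlap:

```python
import numpy as np

def sparse_responses(W, x, threshold=0.1, step=0.1, iters=200):
    """Generic sketch of sparse coding via recurrent lateral inhibition.

    W : (k, d) array of unit-norm receptive-field vectors.
    x : (d,) input (e.g. a flattened space-time image patch).
    Units compete through lateral inhibition proportional to their
    overlap (W @ W.T) and are rectified by a simple threshold
    nonlinearity, yielding a sparse, decorrelated response vector.
    """
    drive = W @ x                         # feedforward input to each unit
    G = W @ W.T - np.eye(W.shape[0])      # lateral inhibition between overlapping units
    u = np.zeros(W.shape[0])              # membrane-like internal state
    for _ in range(iters):
        r = np.maximum(u - threshold, 0.0)    # threshold rectification
        u += step * (drive - u - G @ r)       # leaky integration with inhibition
    return np.maximum(u - threshold, 0.0)
```

With orthogonal receptive fields the inhibition term vanishes and each unit simply reports its thresholded feedforward drive; with overlapping fields, strongly driven units suppress their competitors, which is one simple route to the sparse distributed codes the report describes.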
========================================================================== Anonymous ftp instructions: >ftp ftp.cs.rochester.edu Connected to anon.cs.rochester.edu. 220 anon.cs.rochester.edu FTP server (Version wu-2.4(3)) ready. Name: [type 'anonymous' here] 331 Guest login ok, send your complete e-mail address as password. Password: [type your e-mail address here] ftp> cd /pub/u/rao/papers/ ftp> binary ftp> get space-time.ps.Z ftp> bye From atick at monaco.rockefeller.edu Wed Sep 10 09:02:45 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Wed, 10 Sep 1997 09:02:45 -0400 Subject: Research Positions in Pattern Recognition Message-ID: <9709100902.ZM12127@monaco.rockefeller.edu> Research Positions In Pattern Recognition and Image Analysis Visionics Corporation has several openings for research scientists and engineers in the field of Pattern Recognition and Image Analysis. Candidates are expected to have a Ph.D. or a Master's degree in Computer Science, Applied Mathematics, Electrical Engineering, Physics or Computational Neuroscience and to have demonstrated research abilities in computer vision, artificial neural networks, image processing, computational neuroscience or pattern recognition. The successful candidates will join the growing R&D team of Visionics in working on developing real-world pattern recognition algorithms-- especially video recognition (such as real time face recognition). The job will be at Visionics' new headquarters at 1 Exchange Place in Jersey City, New Jersey. This facility is immediately across the Hudson from the World Trade Center in Manhattan and has impressive views of the Manhattan skyline and the Statue of Liberty on the Hudson. It is 5 minutes away from downtown Manhattan by PATH train and is easily accessible from anywhere in New Jersey. Visionics offers a competitive salary and benefits package and a chance for rapid career advancement with one of the fastest growing research teams.
Visionics is the developer of the FaceIt face recognition technology. The FaceIt engine is currently in use in dozens of large-scale products and applications in security, access control, and banking around the world. For more information about Visionics please visit our webpage at http://www.faceit.com For immediate consideration, send your resume via: (1) Fax to Annette Isler Human Resources Visionics Corporation 1 Exchange Place Jersey City, NJ 07302 Fax: (201) 332 9313. or you can (2) Email it to jobs at faceit.com Visionics is an equal opportunity employer. -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From Frederic.Alexandre at loria.fr Wed Sep 10 03:48:45 1997 From: Frederic.Alexandre at loria.fr (Frederic Alexandre) Date: Wed, 10 Sep 1997 09:48:45 +0200 (MET DST) Subject: SPECIAL SESSION AT NEURAP'98: Knowledge-Based Applications of ANNs Message-ID: <199709100748.JAA18385@wernicke.loria.fr> NEURAP'98 Fourth International Conference on Neural Networks & their Applications March 11-12-13, 1998, Marseille, France CALL FOR PAPERS Deadline: December 1, 1997 SPECIAL SESSION AT NEURAP'98 Knowledge-Based Applications of Artificial Neural Networks This special session aims to address issues concerning the integration of knowledge-based systems and artificial neural networks. Such hybrid systems hold the promise of combining the best features of each approach for constructing intelligent systems. Various strategies can be envisaged, from the coupling of existing symbolic and connectionist models to the design of specific models for that purpose. Each of these strategies highlights particular properties and offers an original view of this new domain, which we wish to discuss and compare during this special session. We are especially interested in applications of this technology to real-world problems.
Topics can include, but are not limited to: - Methods for incorporating prior knowledge into feed-forward and recurrent connectionist architectures - Combination of classical knowledge-based and connectionist models - Exploration of relationships between symbolic Machine Learning algorithms, Knowledge Acquisition, Knowledge-Based Systems and Knowledge-Based Neural Networks - Learning algorithms for Knowledge-Based Neural Networks - Rule extraction and refinement - Neural expert systems - Implementation issues: Modularity, Parallel and Distributed implementations, etc. - Applications to real world problems Organizers: Ian Cloete ( ian at cs.sun.ac.za ) and Frederic Alexandre ( falex at loria.fr ) Instructions: To ensure a lively conference, all papers will be presented in plenary session (short presentations) and also through posters. However, to guarantee the high quality of the presented works, the selection is based on full paper proposals. Depending on the maturity of the work presented, there will be short papers of 4 pages for ongoing research, and long papers of up to 8 A4 pages for mature research results (double columns, Times 10-point font size). All papers will be printed in full in the proceedings and must be in English. Please send 4 copies to the conference secretariat before December 1, 1997. Notification will be sent on February 3, 1998 and the camera-ready version is due on February 20, 1998. PLEASE NOTE: All papers must be clearly marked indicating inclusion in this Special Session.
Secretariat & Information NEURAP'98 DIAM - IUSPIM, University of Aix-Marseille III, Domaine Universitaire de St Jerome, Avenue Escadrille Normandie-Niemen, 13397 Marseille Cedex 20, France Tel.: ++ 33 4 91 05 60 60 Fax: ++ 33 4 91 05 60 33 Email: Claude.Touzet at iuspim.u-3mrs.fr URL: http://www.iuspim.u-3mrs.fr/neurap98.htm or: http://dsi.ing.unifi.it/neural/neurap/cfp98.html For information on the conference, please visit http://www.iuspim.u-3mrs.fr/neurap98.htm or dsi.ing.unifi.it/neural/neurap/cfp98.html From terry at salk.edu Wed Sep 10 14:14:16 1997 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 10 Sep 1997 11:14:16 -0700 (PDT) Subject: NEURAL COMPUTATION 9:7 Message-ID: <199709101814.LAA23940@helmholtz.salk.edu> Neural Computation - Contents Volume 9, Number 7 - October 1, 1997 ARTICLE Mean-Field Theory For Batched-TD(lambda) Fernando J. Pineda LETTER Redundancy Reduction and Independent Component Analysis: Conditions on Cumulants and Adaptive Approaches Jean-Pierre Nadal and Nestor Parga Adaptive Online Learning Algorithms for Blind Separation: Maximum Entropy and Minimum Mutual Information Howard Hua Yang and Shun-ichi Amari A Fast Fixed-Point Algorithm for Independent Component Analysis Aapo Hyvarinen and Erkki Oja Dimension Reduction by Local Principal Component Analysis Nandakishore Kambhatla and Todd K. Leen A Constructive, Incremental-Learning Network for Mixture Modeling and Classification James R. Williamson Shape Quantization And Recognition With Randomized Trees Donald Geman and Yali Amit Airline Crew Scheduling With Potts Neurons Martin Lagerholm, Carsten Peterson, and Bo Soderberg Online Learning in Radial Basis Function Networks Jason A. S.
Freeman and David Saad Errata Image Segmentation Based on Oscillatory Correlation DeLiang Wang and David Terman ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $250 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-8 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 ----- From Dimitris.Dracopoulos at brunel.ac.uk Wed Sep 10 14:27:30 1997 From: Dimitris.Dracopoulos at brunel.ac.uk (Dimitris.Dracopoulos@brunel.ac.uk) Date: Wed, 10 Sep 1997 19:27:30 +0100 Subject: New Book Announcement Message-ID: <22393.199709101827@hebe.brunel.ac.uk> Just published: ``Evolutionary Learning Algorithms for Nonlinear Adaptive Control'' by Dimitris C. Dracopoulos Springer Verlag, Perspectives in Neural Computing Series, Series Editor: J. Taylor, August 1997, ISBN: 3-540-76161-6, GBP24.99 Neural networks and evolutionary algorithms are constantly expanding their field of application to a variety of new domains. One area of particular interest is their applicability to control and adaptive control systems: the limitations of the classical control theory combined with the need for greater robustness, adaptivity and ``intelligence'' make neurocontrol and evolutionary control algorithms an attractive (and in some cases, the only) alternative. After an introduction to neural networks and genetic algorithms, this volume describes in detail how neural networks and evolutionary techniques (specifically genetic algorithms and genetic programming) can be applied to the adaptive control of complex dynamic systems (including chaotic ones).
A number of examples are presented and useful tips are given for the application of the techniques described. The fundamentals of dynamic systems theory and classical adaptive control are also given. More details can be found in: http://www.brunel.ac.uk/~csstdcd/learning_control_book.html From kogan at rutcor.rutgers.edu Wed Sep 10 12:17:54 1997 From: kogan at rutcor.rutgers.edu (Alex Kogan) Date: Wed, 10 Sep 1997 12:17:54 -0400 Subject: ARTIFICIAL INTELLIGENCE AND MATHEMATICS'98, SECOND CALL FOR PAPERS Message-ID: <9709101217.ZM16244@horn.rutgers.edu> SECOND CALL FOR PAPERS ------------------------------------------------------------------------- Fifth International Symposium on ARTIFICIAL INTELLIGENCE AND MATHEMATICS ------------------------------------------------------------------------- January 4-6, 1998, Fort Lauderdale, Florida http://rutcor.rutgers.edu/~amai Email: amai at rutcor.rutgers.edu ------------------------------------------------------------------------- APPROACH OF THE SYMPOSIUM The International Symposium on Artificial Intelligence and Mathematics is the fifth of a biennial series. Our goal is to foster interactions among mathematics, theoretical computer science, and artificial intelligence. The meeting includes paper presentation, invited speakers, and special topic sessions. Topic sessions in the past have covered computational learning theory, nonmonotonic reasoning, and computational complexity issues in AI. The editorial board of the Annals of Mathematics and Artificial Intelligence serves as the permanent Advisory Committee for the series. 
------------------------------------------------------------------------- INVITED TALKS will be given by * Robert Aumann (Hebrew University, Israel) * Joe Halpern (Cornell University) * Pat Hayes (University of West Florida) * Scott Kirkpatrick (IBM, Yorktown Heights) * William McCune (Argonne National Laboratory) ------------------------------------------------------------------------- SUBMISSIONS Authors must e-mail a short abstract (up to 200 words) in plain text format to amai at rutcor.rutgers.edu by SEPTEMBER 23, 1997, and either e-mail postscript files or TeX/LaTeX source files (including all necessary macros) of their extended abstracts (up to 10 double-spaced pages) to amai at rutcor.rutgers.edu or send five copies to Endre Boros RUTCOR, Rutgers University P.O. Box 5062 New Brunswick, NJ 08903 USA or, if using FEDEX or another fast delivery service, to Endre Boros RUTCOR, Busch Campus, Rutgers University Brett and Bartholomew Roads Piscataway, NJ 08854 USA to be received by SEPTEMBER 30, 1997. Authors will be notified of acceptance or rejection by OCTOBER 31, 1997. The final versions of the accepted extended abstracts, for inclusion in the conference volume, are due by NOVEMBER 30, 1997. Authors of accepted papers will be invited to submit within one month after the Symposium a final full-length version of their paper to be considered for inclusion in a thoroughly refereed volume of the series Annals of Mathematics and Artificial Intelligence, J.C. Baltzer Scientific Publishing Co.; for earlier volumes, see Vol. I, Vol. II, Vol. III.
------------------------------------------------------------------------- IMPORTANT DATES Abstracts received: September 23, 1997 Extended abstracts due: September 30, 1997 Authors notified: October 31, 1997 Final versions received: November 30, 1997 AI & Math Symposium: January 4-6, 1998 ------------------------------------------------------------------------- SPONSORS The Symposium is partially supported by the Annals of Math and AI, Florida Atlantic University, and the Florida-Israel Institute. Other support is pending. If additional funding is secured, partial travel subsidies may be available to junior researchers. ------------------------------------------------------------------------- General Chair: Martin Golumbic, Bar-Ilan University, Ramat Gan Conference Chair: Frederick Hoffman, Florida Atlantic University Program Co-Chairs: Endre Boros, Rutgers University Russ Greiner, Siemens Corporate Research Inc. Publicity Chair: Alex Kogan, Rutgers University Program Committee (others pending): * Martin Anthony (London School of Economics, England) * Peter Auer (Technical University of Graz, Austria) * Fahiem Bacchus (Univ. Waterloo, Canada) * Peter Bartlett (Australian National University) * Peter van Beek (University of Alberta, Canada) * Jimi Crawford (i2 Technologies) * Adnan Darwiche (American Univ., Lebanon) * Rina Dechter (UC Irvine) * Thomas Eiter (University of Giessen, Germany) * Boi Faltings (EPFL, Switzerland) * Ronen Feldman (Bar-Ilan University, Ramat Gan) * John Franco (University of Cincinnati) * Eugene Freuder (University of New Hampshire) * Giorgio Gallo (University of Pisa, Italy) * Hector Geffner (Universidad Simón Bolívar, Venezuela) * Georg Gottlob (Technical University of Vienna, Austria) * Adam Grove (NEC Research) * Peter L. 
Hammer (Rutgers University) * David Heckerman (Microsoft Corporation) * Michael Kaminski (Technion, Israel) * Henry Kautz (AT&T) * Helene Kirchner (CNRS-INRIA, Nancy, France) * Richard Korf (UCLA) * Gerhard Lakemeyer (Aachen, Germany) * Jean-Claude Latombe (Stanford) * Maurizio Lenzerini (University of Rome, Italy) * Alon Levy (AT&T) * Fangzhen Lin (Hong Kong University of Science and Technology) * Alan Mackworth (UBC) * Heikki Mannila (University of Helsinki, Finland) * Eddy Mayoraz (IDIAP, Switzerland) * Anil Nerode (Cornell) * Jeff Rosenschein (Hebrew University, Israel) * Elisha Sacks (Purdue) * Dale Schuurmans (University of Pennsylvania) * Bart Selman (AT&T) * Eduardo D. Sontag (Rutgers University) * Ewald Speckenmeyer (University of Koeln, Germany) * Moshe Vardi (Rice) * Paul Vitanyi (CWI, The Netherlands) ------------------------------------------------------------------------- INFORMATION Hotel The Symposium will be held at the Embassy Suites in Fort Lauderdale: Embassy Suites Hotel 1100 S.E. 17th Street Fort Lauderdale, FL 33316 Spacious, newly refurbished two-room suites are available at the reduced rate of $129 single or double occupancy - includes separate living room and bedroom, microwave, refrigerator, coffee maker, two TVs, two voice mail telephones with dual lines, data ports, queen size sofa sleeper in living room. You get complimentary full cooked-to-order breakfast with free newspaper, complimentary manager's cocktail reception each evening, complimentary 24 hour transportation to and from Ft. Lauderdale airport, and free parking. For reservations, call 1-800-362-2779 or 954-527-2700, by December 1, 1997. Airline Delta Airlines is our Conference airline - and their discounts have improved. Call them at 1-800-241-6760, and give FAU's file number: 102789A. Car Rental Avis Rent A Car is our Conference car rental agency, offering us special rates. Call 1-800-331-1600 (in Canada, 1-800-879-2847), and mention the Symposium. 
Further information and future announcements can be obtained from the Conference Web Site at http://rutcor.rutgers.edu/~amai or by (e)mail to Professor Frederick Hoffman Florida Atlantic University, Department of Mathematics PO Box 3091, Boca Raton, FL 33431, USA hoffman at acc.fau.edu For a list of where we have announced this conference, see http://rutcor.rutgers.edu/~amai/lists-news.html From Dimitris.Dracopoulos at brunel.ac.uk Wed Sep 10 14:37:40 1997 From: Dimitris.Dracopoulos at brunel.ac.uk (Dimitris.Dracopoulos@brunel.ac.uk) Date: Wed, 10 Sep 1997 19:37:40 +0100 Subject: typo error: New Book Announcement Message-ID: <22582.199709101837@hebe.brunel.ac.uk> The correct title of the previous book announcement is: ``Evolutionary Learning Algorithms for Neural Adaptive Control'' by Dimitris C. Dracopoulos Springer Verlag, Perspectives in Neural Computing Series, Series Editor: J. Taylor, August 1997, ISBN: 3-540-76161-6, GBP24.99 Apologies for the mistake. From zhang at salk.edu Thu Sep 11 16:55:00 1997 From: zhang at salk.edu (zhang@salk.edu) Date: Thu, 11 Sep 1997 13:55:00 -0700 (PDT) Subject: preprint available Message-ID: <199709112055.NAA05153@bohr.salk.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 3248 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/8d6fde6f/attachment.ksh From roweis at cns.caltech.edu Thu Sep 11 02:21:50 1997 From: roweis at cns.caltech.edu (Sam Roweis) Date: Wed, 10 Sep 1997 23:21:50 -0700 (PDT) Subject: Modified PCA Model Message-ID: <9709110628.AA11399@tao.cns.caltech.edu> Paper Available Online ------------------------ For those interested in modifications to basic principal component analysis (PCA) models, especially those which define a proper probability model in the data space, I am making available a paper which will appear in NIPS'97. 
This paper introduces a new model called _sensible_ principal component analysis (SPCA) which is closely related to the independent work of Michael Tipping and Chris Bishop recently announced on this list. The paper also shows how this new model as well as the regular PCA model can be efficiently learned using an EM algorithm. There are two basic ideas in the paper: 1) Given some data, a very simple thing to do is to build a single Gaussian density model using the sample mean and covariance. The likelihood of new data can be easily obtained by evaluation under this Gaussian. PCA identifies the axes of such a Gaussian model. But if we throw away some of the principal components by projecting into a subspace of high variance, we no longer have a proper density model. (The squared distance of new data from the subspace is one measure of "novelty" but it is insensitive to translations within the principal subspace.) We can recapture a proper probability model by replacing all of the unwanted principal components with a single spherical noise model whose scale can be estimated from the data without ever having to find the unwanted components. Because this noise model is characterized by only a single scalar, such a model is almost as economical as regular PCA but, like factor analysis, defines a proper probability density model. 2) Traditional methods for finding the first few principal components of a dataset can be quite costly when there are many datapoints and/or the data space is of very high dimension. These methods can also fail when the sample covariance matrix is not of full rank and cannot deal easily with missing data values. Using the EM algorithms presented in the paper to fit a regular PCA model or an SPCA model can be significantly more efficient and require less data than other techniques. These algorithms also deal naturally with missing data values, even when all datapoints have missing coordinates. 
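[Editor's note] The EM iteration described above is compact enough to sketch directly. The following NumPy fragment is an illustrative reconstruction from the description here, with my own function name and details; it covers only the regular-PCA case (the paper itself should be consulted for SPCA and missing-data handling):

```python
import numpy as np

def em_pca(Y, k, n_iter=50, seed=0):
    """EM iteration for the principal subspace (illustrative sketch).

    Y : (d, n) array of n centered d-dimensional datapoints.
    k : number of leading components to extract.
    Returns an orthonormal (d, k) basis of the estimated principal subspace.
    """
    d, n = Y.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, k))              # random initial basis
    for _ in range(n_iter):
        # E-step: least-squares coordinates of the data in the current basis
        X = np.linalg.solve(W.T @ W, W.T @ Y)
        # M-step: new basis that best reconstructs the data from X
        W = Y @ X.T @ np.linalg.inv(X @ X.T)
    # orthonormalize; only the span of W matters, not W itself
    Q, _ = np.linalg.qr(W)
    return Q
```

Note that each iteration costs O(dnk) and the d-by-d sample covariance is never formed, which is the source of the efficiency claims above.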
EM ALGORITHMS FOR PCA AND SPCA (to appear in NIPS97) Sam Roweis Computation and Neural Systems 139-74 California Institute of Technology ftp://hope.caltech.edu/pub/roweis/Empca/empca.ps or ftp://hope.caltech.edu/pub/roweis/Empca/empca.ps.gz Abstract I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally efficient and does not require computing the sample covariance of the data. It also naturally accommodates missing information. I also introduce a new variation of PCA known as {\em sensible} principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I include results of simulations showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to thousands of datapoints in hundreds of dimensions. Sam roweis at cns.caltech.edu http://hope.caltech.edu/roweis From scott at salk.edu Fri Sep 12 19:02:04 1997 From: scott at salk.edu (scott@salk.edu) Date: Fri, 12 Sep 1997 16:02:04 -0700 (PDT) Subject: Paper and software for Independent Component Analysis of Evoked Brain Responses Message-ID: <199709122302.QAA06503@kuffler.salk.edu> "Blind separation of auditory event-related brain responses into independent components" S. Makeig, T-P. Jung, D. Ghahremani, A.J. Bell & T.J. Sejnowski (In press, PNAS) Advance copies of this paper are available for online review and/or download (151K). Independent component analysis (ICA) is a method for decomposing multichannel data into a sum of temporally independent components. In the paper, we apply an enhanced version of the ICA algorithm of Bell & Sejnowski (1995) to decomposition of brain responses to auditory targets in a vigilance experiment. 
We demonstrate the nature and stability of the decomposition and discuss its utility for analysis of event-related response potentials. The URL is: http://www.cnl.salk.edu/~scott/PNAS.html ================================================================ Matlab Toolbox for Independent Component Analysis of Electrophysiological Data by Scott Makeig Tony Bell, Tzyy-Ping Jung, Colin Humphries, Te-Won Lee Terrence Sejnowski Computational Neurobiology Laboratory Salk Institute, La Jolla CA A toolbox of routines written under Matlab for Independent Component Analysis (ICA) and display of electrophysiological (EEG or MEG) data is available for download. This software implements the ICA algorithm of Bell & Sejnowski (1995) for use with multichannel physiological data, particularly event-related or spontaneous EEG (or MEG) data. The algorithm separates data into a sum of components whose time courses are maximally independent of one another and whose spatial projections to the scalp are fixed throughout the analysis epoch. The decomposition routine (runica.m) also can implement an extended ICA algorithm (Lee, Girolami and Sejnowski) for separating mixtures of sub-Gaussian as well as sparse (super-Gaussian) components. Applications to ERP and EEG data including comparison of conditions and elimination of artifacts have been addressed in a series of papers and abstracts available through a related bibliography page. Another page answers Frequently Asked Questions about applying ICA to psychophysiological data. Graphics routines include general-purpose functions for viewing either averaged or spontaneous EEG data and for making and viewing animations of shifting scalp distributions. Other routines are useful for sorting and displaying the time courses, scalp topographies, and scalp projections of ICA components. A demonstration routine (icademo.m) and directory page (ica.m) are included. The software has been written under Matlab 4.2c. 
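[Editor's note] For orientation, the core natural-gradient infomax update of Bell & Sejnowski (1995) fits in a few lines. The NumPy sketch below is only an illustration of the kind of update the toolbox implements, with made-up names and step sizes; it is no substitute for runica.m:

```python
import numpy as np

def infomax_ica(X, lr=0.05, n_iter=400):
    """Natural-gradient infomax ICA (illustrative sketch).

    X : (c, n) array, c channels by n timepoints, zero-mean.
    Returns an unmixing matrix W; the rows of W @ X are the estimated
    maximally independent component time courses.
    """
    c, n = X.shape
    W = np.eye(c)
    for _ in range(n_iter):
        U = W @ X                            # current source estimates
        Y = 1.0 / (1.0 + np.exp(-U))         # logistic nonlinearity
        # infomax update; note 1 - 2*sigmoid(u) = -tanh(u/2),
        # suited to super-Gaussian (sparse) sources
        W += lr * (np.eye(c) + (1.0 - 2.0 * Y) @ U.T / n) @ W
    return W
```

The fixed spatial projections mentioned above correspond to the columns of the inverse of the converged W.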
A version for Matlab 5.0 will be released later. To download the toolbox in Unix (compress) or PC (zip) formats (~155K): http://www.cnl.salk.edu/~scott/ica-download-form.html For further on-line information: http://www.cnl.salk.edu/~scott/icafaq.html - frequently asked questions http://www.cnl.salk.edu/~scott/icabib.html - bibliography of applications Email: scott at salk.edu Scott Makeig ___________________________________________________________________ Scott Makeig http://www.cnl.salk.edu/~scott (619) 553-8414 Comp. Neurobiol. Lab., Salk Institute | scott at salk.edu UCSD Department of Neurosciences | smakeig at ucsd.edu Naval Health Research Center | makeig at nhrc.navy.mil From cns-cas at cns.bu.edu Sun Sep 14 22:02:20 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Sun, 14 Sep 1997 22:02:20 -0400 Subject: CALL FOR PAPERS - Deadline Oct 31, 1997! Message-ID: <3.0.3.32.19970914220220.00d6357c@cns.bu.edu> *****CALL FOR PAPERS***** 1998 Special Issue of Neural Networks NEURAL CONTROL AND ROBOTICS: BIOLOGY AND TECHNOLOGY Planning and executing movements is of great importance in both biological and mechanical systems. This Special Issue will bring together a broad range of invited and contributed articles that describe progress in understanding the biology and technology of movement control. Movement control covers a wide range of topics, from integration of different types of sensory information, to flexible planning of movements, to generation of motor commands, to compensation for internal and external perturbations. Of particular importance are the coordinate transformations, memory systems, and attentional and volitional mechanisms needed to implement movement control. Neural control is the study of how biological systems have solved these problems with joints, muscles, and brains. Robotics is the attempt to build mechanical systems that can solve these problems under constraints of size, weight, robustness, and cost. 
This Special Issue welcomes high quality articles from both fields and seeks to explore the possible synergies between them. CO-EDITORS: Professor Rodney Brooks, Massachusetts Institute of Technology Professor Stephen Grossberg, Boston University Dr. Lance Optican, National Institutes of Health SUBMISSION: Deadline for submission: October 31, 1997 Notification of acceptance: January 31, 1998 Format: no longer than 10,000 words; APA format ADDRESS FOR SUBMISSION: Professor Stephen Grossberg Boston University Department of Cognitive and Neural Systems 677 Beacon Street Boston, Massachusetts 02215 From atick at monaco.rockefeller.edu Mon Sep 15 08:45:14 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Mon, 15 Sep 1997 08:45:14 -0400 Subject: Network:CNS Table of Contents Message-ID: <9709150845.ZM15475@monaco.rockefeller.edu> NETWORK: COMPUTATION IN NEURAL SYSTEMS Volume 8, Issue 3, August 1997 Table of Contents (on line version can be accessed by those with institutional subscription at http://www.iop.org/Journals/ne ) Pages: V1--V18, 239--354 VIEWPOINT V1 Hebbian learning, its correlation catastrophe, and unlearning J L van Hemmen PAPERS 239 An association matrix model of context-specific vertical vergence adaptation J W McCandless and C M Schor 259 Learning low-dimensional representations via the usage of multiple-class labels N Intrator and S Edelman 283 Optimal ensemble averaging of neural networks U Naftaly, N Intrator and D Horn 297 Hidden Markov modelling of simultaneously recorded cells in the associative cortex of behaving monkeys I Gat, N Tishby and M Abeles 323 Recursive algorithms for principal component extraction A Chi-Sing Leung, Kwok-Wo Wong and Ah Chung Tsoi 335 Spatial-frequency analysis in the perception of perspective depth K Sakai and L H Finkel 353 BOOK REVIEW Spikes: Exploring the Neural Code F Rieke, D Warland, R de Ruyter van Steveninck and W Bialek (reviewed by D Reich) 
-------------------------------------------------------------- -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From horwitz at alw.nih.gov Mon Sep 15 15:06:40 1997 From: horwitz at alw.nih.gov (Barry Horwitz) Date: Mon, 15 Sep 1997 15:06:40 -0400 Subject: Postdoctoral position Message-ID: <341D873B.5913@alw.nih.gov> Below is a notice concerning a postdoctoral position in our lab for someone interested in neural modeling and functional neuroimaging. ------------------------------------------- National Institute on Aging Postdoctoral Fellowship IN Neural Modeling of Human Functional neuroImaging data A postdoctoral fellowship is available for developing and applying computational neuroscience modeling methods to in vivo human functional neuroimaging data, obtained from positron emission tomography and functional magnetic resonance imaging. The goal is to understand the relation between functional neuroimaging data, with its low spatial and temporal resolution, and the underlying electrophysiological behavior of multiple interconnected neuronal populations. Knowledge of neural modeling techniques and extensive programming experience are required. PhD or MD degree required. The position is in the Laboratory of Neurosciences, Brain Aging and Dementia Section, National Institute on Aging, Bethesda, Md, USA. Starting salary commensurate with training and experience. For further information, contact: Dr. Barry Horwitz, Bldg. 10, Rm. 6C-414, Lab. Neurosciences, National Institutes of Health, Bethesda, MD 20892, USA. Tel. 301-594-7755; FAX: 301-402-0595; Email: horwitz at alw.nih.gov. 
From jlm at cnbc.cmu.edu Mon Sep 15 14:43:20 1997 From: jlm at cnbc.cmu.edu (Jay McClelland) Date: Mon, 15 Sep 1997 14:43:20 -0400 (EDT) Subject: OPENING FOR RA IN STUDIES OF ADULT BRAIN PLASTICITY Message-ID: <199709151843.OAA01376@eagle.cnbc.cmu.edu> OPENING FOR RA IN STUDIES OF ADULT BRAIN PLASTICITY An immediate opening exists for a research assistant who will be involved in a project to study strategies that promote learning and brain plasticity, and their basis and use in enhancing literacy, under the supervision of Professors Julie Fiez (University of Pittsburgh) and Jay McClelland (Carnegie Mellon). The studies test a new model that attempts to account for the apparent stability of language processing deficits in the face of massive exposure to natural speech, while at the same time explaining why interventions using exaggerated speech appear to be effective in remediation. The assistant will have the opportunity to contribute to behavioral experiments and functional neuroimaging studies. Specific responsibilities will include subject recruitment, subject testing, data analysis, participation in the development of stimulus materials and experimental designs, library research, and general administrative support. This position is a full-time paid research position, and funding is available for up to three years. This is not intended as a vehicle for graduate student support, but the opening may provide a unique opportunity for recent college graduates interested in becoming involved in cognitive neuroscience research, including those who may wish to pursue graduate study in the future. The holder of this position will have access to the facilities and resources of the Center for the Neural Basis of Cognition, a joint project of Carnegie Mellon University and the University of Pittsburgh. Interested applicants should send a short statement of qualifications by email to fiez+ at pitt.edu, or by postal mail to: Professor Julie Fiez 605 LRDC, Dept. 
Psychology 3939 O'Hara Street University of Pittsburgh Pittsburgh, PA 15260 From cns-cas at cns.bu.edu Mon Sep 15 10:40:05 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Mon, 15 Sep 1997 10:40:05 -0400 Subject: Graduate Training in Cognitive and Neural Systems Message-ID: <3.0.3.32.19970915104005.0071b3b8@cns.bu.edu> A non-text attachment was scrubbed... Name: not available Type: text/enriched Size: 20855 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/cb55d5e9/attachment.bin From marwan at ee.usyd.edu.au Tue Sep 16 05:29:29 1997 From: marwan at ee.usyd.edu.au (Marwan Jabri) Date: Tue, 16 Sep 1997 19:29:29 +1000 (EST) Subject: U2000 Postdocs at University of Sydney, Australia Message-ID: This message will be of interest to postdoc seekers. See http://www.usyd.edu.au/su/reschols/appkit/U2000FNT.HTM for information about the U2000 Research Fellowships at the University of Sydney. I can provide more information if you are interested in applying in the fields of neural computing and neuromorphic engineering. The closing date is October 9. See the web site above for the advertisement, conditions, and methods of application, including the application form. 
Marwan Jabri ------------ Marwan Jabri Professor in Adaptive Systems Dept of Electrical Engineering, The University of Sydney NSW 2006, Australia Tel: (+61-2) 9351-2240, Fax:(+61-2) 9351-7209, Mobile: (+61) 414-512240 Email: marwan at sedal.usyd.edu.au, http://www.sedal.usyd.edu.au/~marwan/ From esann at dice.ucl.ac.be Tue Sep 16 06:32:00 1997 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Tue, 16 Sep 1997 12:32:00 +0200 Subject: ESANN 98 - call for papers Message-ID: <199709161021.MAA17643@ns1.dice.ucl.ac.be> --------------------------------------------------- | 6th European Symposium | | on Artificial Neural Networks | | | | ESANN 98 | | | | Bruges - April 22-23-24, 1998 | | | | First announcement and call for papers | --------------------------------------------------- The call for papers for the ESANN 98 conference is now available on the Web: http://www.dice.ucl.ac.be/esann For those of you who maintain WWW pages including lists of related ANN sites: we would appreciate it if you could add the above URL to your list; thank you very much! We try as much as possible to avoid multiple sendings of this call for papers; please accept our apologies if you receive this e-mail twice despite our precautions. You will find below a short version of this call for papers, without the instructions to authors (available on the Web). If you have difficulty connecting to the Web, please send an e-mail to esann at dice.ucl.ac.be and we will send you a full version of the call for papers. Scope and topics ---------------- Since its first edition in 1993, the European Symposium on Artificial Neural Networks has become the reference for researchers on the fundamentals and theoretical aspects of artificial neural networks. Each year, around 100 specialists from all parts of the world attend ESANN to present their latest results and comprehensive surveys, and to discuss future developments in this field. 
The ESANN 98 conference will focus on fundamental aspects of ANNs: theory, models, learning algorithms, mathematical aspects, approximation of functions, classification, control, time-series prediction, statistics, signal processing, vision, self-organization, vector quantization, evolutive learning, psychological computations, biological plausibility,... Papers on links and comparisons between ANNs and other domains of research, such as statistics, data analysis, signal processing, biology, psychology, evolutive learning, bio-inspired systems,... are also encouraged. Papers will be presented orally (no parallel sessions). If the number of high-quality accepted papers is too high, some poster sessions could be organized. All posters will be complemented by a short oral presentation during a plenary session. It is important to mention that it is the topic of a paper, not its quality, that will decide whether it fits better into an oral or a poster session. Posters will be held to the same quality standard as oral presentations, and both will be printed in the same way in the proceedings. Nevertheless, authors may indicate on the author submission form that they are only willing to present their paper orally. 
The following is a non-exhaustive list of topics which will be covered during ESANN'98: * theory * models and architectures * mathematics * learning algorithms * vector quantization * self-organization * RBF networks * Bayesian classification * recurrent networks * approximation of functions * time series forecasting * adaptive control * statistical data analysis * independent component analysis * signal processing * natural and artificial vision * cellular neural networks * fuzzy neural networks * hybrid networks * identification of non-linear dynamic systems * biologically plausible artificial networks * bio-inspired systems * formal models of biological phenomena * neurobiological systems * cognitive psychology * adaptive behavior * evolutive learning The ESANN'98 conference is organized with the support of the IEEE Region 8, the IEEE Benelux Section, and the Universite Catholique de Louvain (UCL, Louvain-la-Neuve, Belgium). Special sessions ---------------- Six special sessions will be organized by renowned scientists in their respective fields. Papers submitted to these sessions are reviewed according to the same rules as any other submission. Authors who submit papers to one of these sessions are invited to mention it on the author submission form; nevertheless, submissions to the special sessions must follow the same format, instructions and deadlines as any other submission, and must be sent to the same address. The special sessions organized during ESANN 98 are: * Self-organizing maps for data analysis Marie Cottrell (Univ. Paris 1 Sorbonne, France) & Eric de Bodt (U.C. Louvain-la-Neuve, Belgium) * ANN for the processing of facial information Manuel Grana (UPV San Sebastian, Spain) * Radial basis networks Leonardo Reyneri (Polit. di Torino, Italy) * Cellular neural networks (CNN) technology - theory and application Tamas Roska (Hungarian Academy of Science, Hungary) * Neural networks for control Joos Vandewalle & Johan Suykens (K.U. 
Leuven, Belgium) * ANN for speech processing Christian Wellekens (Eurecom Sophia-Antipolis, France) The ESANN'98 conference is organized the week after CNNA'98, the 5th IEEE International Workshop of Cellular Neural Networks and their Applications, in London (UK). For details, see http://www.sbu.ac.uk/eeie/CNNA98 Location -------- The conference will be held in Bruges (also called "Venice of the North"), one of the most beautiful medieval towns in Europe. Bruges can be reached by train from Brussels in less than one hour (frequent trains). The town of Bruges is known worldwide for its architectural style, its canals, and its pleasant atmosphere. The conference will be organized in a hotel within walking distance of the town center. There is no obligation for the participants to stay in this hotel. Hotels of all levels of comfort and price are available in Bruges; rooms can be booked in the conference hotel, or in another one (50 m from the first one) at a preferential rate through the conference secretariat. A list of other smaller hotels is also available. The conference will be held at the Novotel hotel, Katelijnestraat 65B, 8000 Brugge, Belgium. Deadlines --------- Submission of papers December 1, 1997 Notification of acceptance January 31, 1998 Symposium April 22-23-24, 1998 Conference secretariat ---------------------- Michel Verleysen D facto conference services phone: + 32 2 203 43 63 45 rue Masui Fax: + 32 2 203 42 94 B - 1000 Brussels (Belgium) E-mail: esann at dice.ucl.ac.be http://www.dice.ucl.ac.be/esann Reply form ---------- If you wish to receive the final program of ESANN'98, for any address change, or to add one of your colleagues to our database, please send this form to the conference secretariat. ------------------------ cut here ----------------------- ------------------ ESANN'98 reply form ------------------ Name: ................................................. 
First Name: ............................................ University or Company: ................................. ................................. Address: .............................................. .............................................. .............................................. ZIP: ........ Town: ................................ Country: ............................................... Tel: ................................................... Fax: ................................................... E-mail: ................................................ ------------------------ cut here ----------------------- Please send this form to : D facto conference services 45 rue Masui B - 1000 Brussels e-mail: esann at dice.ucl.ac.be Steering and local committee (to be confirmed) ---------------------------------------------- François Blayo Univ. Paris I (F) Marie Cottrell Univ. Paris I (F) Jeanny Herault INPG Grenoble (F) Henri Leich Fac. Polytech. Mons (B) Bernard Manderick Vrije Univ. Brussel (B) Eric Noldus Univ. Gent (B) Jean-Pierre Peters FUNDP Namur (B) Joos Vandewalle KUL Leuven (B) Michel Verleysen UCL Louvain-la-Neuve (B) Scientific committee (to be confirmed) -------------------------------------- Edoardo Amaldi Cornell Univ. (USA) Agnes Babloyantz Univ. Libre Bruxelles (B) Herve Bourlard FPMS Mons (B) Joan Cabestany Univ. Polit. de Catalunya (E) Holk Cruse Universitat Bielefeld (D) Eric de Bodt UCL Louvain-la-Neuve (B) Dante Del Corso Politecnico di Torino (I) Wlodek Duch Nicholas Copernicus Univ. (PL) Marc Duranton Philips / LEP (F) Jean-Claude Fort Universite Nancy I (F) Bernd Fritzke Ruhr-Universitat Bochum (D) Stan Gielen Univ. of Nijmegen (NL) Karl Goser Universitat Dortmund (D) Manuel Grana UPV San Sebastian (E) Anne Guerin-Dugue INPG Grenoble (F) Martin Hasler EPFL Lausanne (CH) Christian Jutten INPG Grenoble (F) Vera Kurkova Acad. of Science of the Czech Rep. (CZ) Petr Lansky Acad. of Science of the Czech Rep. 
(CZ) Hans-Peter Mallot Max-Planck Institut (D) Eddy Mayoraz IDIAP Martigny (CH) Jean-Arcady Meyer Ecole Normale Superieure Paris (F) Jose Mira Mira UNED (E) Pietro Morasso Univ. of Genoa (I) Jean-Pierre Nadal Ecole Normale Superieure Paris (F) Erkki Oja Helsinki University of Technology (FIN) Gilles Pages Universite Paris VI (F) Helene Paugam-Moisy Ecole Normale Superieure Lyon (F) Alberto Prieto Universidad de Granada (E) Ronan Reilly University College Dublin (IRE) Tamas Roska Hungarian Academy of Science (H) Jean-Pierre Rospars INRA Versailles (F) John Stonham Brunel University (UK) John Taylor King's College London (UK) Claude Touzet IUSPIM Marseilles (F) Marc Van Hulle KUL Leuven (B) Christian Wellekens Eurecom Sophia-Antipolis (F) _____________________________ _____________________________ D facto publications - Michel Verleysen conference services Univ. Cath. de Louvain - DICE 45 rue Masui 3, pl. du Levant 1000 Brussels B-1348 Louvain-la-Neuve Belgium Belgium tel: +32 2 203 43 63 tel: +32 10 47 25 51 fax: +32 2 203 42 94 fax: +32 10 47 25 98 esann at dice.ucl.ac.be verleysen at dice.ucl.ac.be http://www.dice.ucl.ac.be/esann _____________________________ _____________________________ From robbie at bcs.rochester.edu Tue Sep 16 08:24:16 1997 From: robbie at bcs.rochester.edu (Robbie Jacobs) Date: Tue, 16 Sep 1997 08:24:16 -0400 (EDT) Subject: faculty position available Message-ID: <199709161224.IAA21369@broca.bcs.rochester.edu> The Department of Brain and Cognitive Sciences at the University of Rochester invites applications for a tenure-track position as Assistant Professor in the area of cognition broadly defined (the ad for the position is below). People with interests in computational modeling of cognitive processes are encouraged to apply. DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES, UNIVERSITY OF ROCHESTER: invites applications for a tenure-track position as Assistant Professor in the area of cognition broadly defined. 
Research interests could include domains traditional to cognitive psychology, such as learning, memory, reasoning, perception, motor control, and language, and could also include areas such as cognitive neuroscience, neuropsychology, computational modeling, and cognitive development. Candidates with strong research and teaching interests in any of these areas should send a vita, research and teaching statement, representative reprints, and at least three letters of recommendation to: Cognitive Search, Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY 14627-0268. Applicants can learn about the Department of Brain and Cognitive Sciences by referring to our pages on the world wide web (http://www.bcs.rochester.edu). The deadline for complete applications is November 15, 1997. Applications from women and members of minority groups are especially welcome. University of Rochester is an equal opportunity employer. From oby at cs.tu-berlin.de Tue Sep 16 12:38:09 1997 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Tue, 16 Sep 1997 18:38:09 +0200 (MET DST) Subject: Preprint available Message-ID: <199709161638.SAA22840@pollux.cs.tu-berlin.de> Dear connectionists The following preprint is now available online at: http://kon.cs.tu-berlin.de/publications/#conference An annealed self-organizing map for source channel coding M. Burger, T. Graepel, and K. Obermayer CS Department, Technical University of Berlin, Berlin, Germany Abstract: We derive and analyse robust optimization schemes for noisy vector quantization on the basis of deterministic annealing. Starting from a cost function for central clustering that incorporates distortions from channel noise we develop a soft topographic vector quantization algorithm (STVQ) which is based on the maximum entropy principle and which performs a maximum-likelihood estimate in an expectation-maximization (EM) fashion. 
Annealing in the temperature parameter $\beta$ leads to phase transitions in the existing code vector representation during the cooling process, for which we calculate critical temperatures and modes as a function of eigenvectors and eigenvalues of the covariance matrix of the data and the transition matrix of the channel noise. A whole family of vector quantization algorithms is derived from STVQ, among them a deterministic annealing scheme for Kohonen's self-organizing map (SOM). This algorithm, which we call SSOM, is then applied to vector quantization of image data to be sent via a noisy binary symmetric channel. The algorithm's performance is compared to those of LBG and STVQ. While it is naturally superior to LBG, which does not take into account channel noise, its results compare very well to those of STVQ, which is computationally much more demanding. (This paper will appear at NIPS97) From Pregenz at dpmi.tu-graz.ac.at Thu Sep 18 12:02:00 1997 From: Pregenz at dpmi.tu-graz.ac.at (Martin) Date: Thu, 18 Sep 1997 17:02:00 +0100 Subject: Thesis available Message-ID: <2490B8D696F@dpmi.tu-graz.ac.at> >>> FEATURE SELECTION >>> CLASSIFICATION PROBLEMS My PhD thesis: "Distinction Sensitive Learning Vector Quantization (DSLVQ)" (TU-Graz, 120 pages) is now available for anonymous ftp from: FTP-host: fdpmial03.tu-graz.ac.at FTP-filename: /pub/outgoing/dslvq.ps.Z (900 kbyte) /pub/outgoing/dslvq.ps (4.4 mbyte) Martin Pregenzer --------------------------- Abstract This thesis introduces a new feature selection method: Distinction Sensitive Learning Vector Quantization (DSLVQ). DSLVQ is not based on the individual testing of different candidate feature subsets; the relevance of the features is deduced from the implicit problem representation through an exemplar-based classification method.
While most of the common feature selection methods require repeated training of the target classifier on selected feature subsets, only a single learning process is necessary with DSLVQ. This makes the new method exceptionally quick. The DSLVQ algorithm is motivated theoretically and evaluated empirically. On a very complex and high-dimensional artificial data set it is shown that DSLVQ can separate relevant and irrelevant features reliably. A real-world application of DSLVQ is the selection of optimal frequency bands for an EEG-based Brain Computer Interface (BCI). DSLVQ has been used to individually adapt the filter settings for each subject. This can improve the performance of the BCI. LVQ classifiers: conditions under which stability problems with different training algorithms can occur are outlined in chapter 4 of this thesis, or in a 15-page draft paper which can be downloaded from the same site (lvq_stab.ps.Z / lvq_stab.ps). From austin at minster.cs.york.ac.uk Wed Sep 17 18:11:16 1997 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Wed, 17 Sep 97 18:11:16 Subject: No subject Message-ID: Parallel Neural Hardware for High Performance Pattern Matching 3 Year research post An individual with a keen interest in designing high performance computing hardware is required to join a team working on the design of a very high performance pattern matching machine (search engine). The post involves the design from logic gate level, through parallel sub-systems to the low level software drivers of a machine based on a fine grained implementation of binary neural network based methods. We are looking for candidates with applicable experience in digital hardware design and implementation of low level software in C and C++. Knowledge of FPGA devices and CAD support tools and the design of high performance hardware will be necessary. Knowledge of neural networks and parallel methods is an advantage but not essential.
To support the post, first-class facilities and technical support are available. Working with a team of researchers and technicians, the project's major aim is to scale up our present pattern matching architecture (AURA) to a flexible parallel architecture to solve a wide range of problems being developed in conjunction with British Aerospace, The Post Office and other major international companies. Example applications are matching in very large textual, financial and structural databases. The post offers a unique opportunity to individuals who have a proven record of achievement to be involved in the development of a leading-edge technology with exciting prospects for the future. The post is funded by the UK's EPSRC (government funding council) in collaboration with The Post Office Research Group and British Aerospace plc, under the direction of Dr. Jim Austin. Details of this project, further details of the post and related publications can be found on our web page at http://www.cs.york.ac.uk/~arch/nn/aura.html The research is to be undertaken within the Advanced Computer Architectures Group, under the supervision of Dr. Jim Austin and in partnership with Ken Lees. The Department is one of the leading Computer Science Departments in the UK, and is rated 5* for research. In recognition of this, the University has provided us with a brand new building which will open in Oct. 1997. The group is internationally known for its work in binary neural networks and their high-speed hardware implementation. It has extensive computer and technical support. The successful applicant can look forward to joining a very active and thriving research team. Applications are invited from individuals with a good honours degree and a PhD in a relevant subject. The salary will be in the range 15,159 - 20,103 UK pounds. Applications should be sent to the Personnel Office, University of York, York YO1 5DD, (email: jobs at york.ac.uk) quoting reference number 6049.
The closing date for applications is 1 October 1997. From simon.schultz at psy.ox.ac.uk Sat Sep 20 15:19:08 1997 From: simon.schultz at psy.ox.ac.uk (Simon Schultz) Date: Sat, 20 Sep 1997 20:19:08 +0100 Subject: Preprint available Message-ID: <342421AC.102F@psy.ox.ac.uk> Dear Connectionists, Preprints of the following paper are available via WWW. It is to appear in a Cambridge Univ. Press book in 1998 entitled "Information Theory and the Brain", edited by R. Baddeley, P. Hancock and P. Foldiak. QUANTITATIVE ANALYSIS OF A SCHAFFER COLLATERAL MODEL S. Schultz(1), S. Panzeri(1), E.T. Rolls(1) and A. Treves(2) (1) Department of Experimental Psychology, University of Oxford, UK. (2) Programme in Neuroscience, SISSA, Trieste, Italy. Abstract: Advances in techniques for the formal analysis of neural networks have introduced the possibility of detailed quantitative analyses of brain circuitry. This paper applies a method for calculating mutual information to the analysis of the Schaffer collateral connections between regions CA3 and CA1 of the hippocampus. Attention is given to the introduction of further details of anatomy and physiology to the calculation: in particular, the distribution of the number of connections CA1 neurons receive from CA3, and the graded nature of the firing-rate distribution in region CA3. 16 pages, 5 figures. 
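For readers unfamiliar with the information-theoretic quantity such an analysis estimates, here is a minimal numerical sketch of the mutual information between two discrete variables. The joint probabilities below are invented for illustration only; the paper's calculation is analytical and model-based, not a toy table like this.

```python
from math import log2

# Toy joint distribution p(x, y) over two binary variables -- purely
# illustrative numbers, not data from the paper.
p_xy = {(0, 0): 0.30, (0, 1): 0.10,
        (1, 0): 0.05, (1, 1): 0.55}

# Marginals p(x) and p(y)
p_x = {x: p_xy[(x, 0)] + p_xy[(x, 1)] for x in (0, 1)}
p_y = {y: p_xy[(0, y)] + p_xy[(1, y)] for y in (0, 1)}

# I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )
mi = sum(p * log2(p / (p_x[x] * p_y[y]))
         for (x, y), p in p_xy.items())
print(f"{mi:.4f} bits")  # about 0.36 bits
```

If x and y were independent, every ratio inside the log would be 1 and the sum would be exactly zero; the more the joint table deviates from the product of its marginals, the larger the value.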
http://www.mrc-bbc.ox.ac.uk/~schultz/sch.ps.gz
--
-----------------------------------------------------------------------
Simon Schultz
Department of Experimental Psychology, University of Oxford
South Parks Rd., Oxford OX1 3UD
also: Corpus Christi College, Oxford OX1 4JF
Phone: +44-1865-271419 Fax: +44-1865-310447
http://www.mrc-bbc.ox.ac.uk/~schultz/
-----------------------------------------------------------------------
From pmitra at bell-labs.com Mon Sep 22 01:01:29 1997 From: pmitra at bell-labs.com (Partha Mitra) Date: Mon, 22 Sep 1997 01:01:29 -0400 Subject: Just established: new neuroscience archives Message-ID: <3425FBA9.2F48@bell-labs.com> A neuroscience hierarchy has been established on the e-print archive at xxx.lanl.gov. This is an established archival site, sponsored by the NSF, that currently receives over 20,000 preprints and reprints each year, mostly in physics. Data-rich research communities, such as astrophysics, are able to publish important sets of data (or links to the larger sets) in a central, easily accessible location. Some background information about the impact that the archive has had on physics and some related fields is available at http://xxx.lanl.gov/blurb/sep96news.html The site is primarily a means for efficient distribution and archival. Submissions are not reviewed (although "response" papers can be submitted), and so it does not replace conventional journals. Experience with the existing archives suggests that submissions tend to maintain a high level of scholarship. We suggest that there are a number of benefits that the neuroscience community might expect to obtain from this archive:
* The archives provide a means of rapid and widely accessible dissemination of research results in the form of preprints or reprints.
* Shorter notes, algorithms, or interesting data sets (with a short descriptive abstract) can be published independently of long papers and thus speed the distribution of important information.
* The archive provides a central resource to make available important raw data sets for further scrutiny. For lengthy data sets, a short description with appropriate hyperlinks will be appropriate.
* Technical reports, either on computational, instrumental, or preparatory procedures, can be made widely available.
* Conference proceedings papers that aren't indexed or that don't receive wide distribution will be easily accessible.
* The archive is mirrored in 14 different countries spread across the globe, with the network still growing. In particular, in countries where the costs of institutional subscriptions to journals are impractical, the archive has become the primary research source.
There are three subcategories in the Neuroscience group, following the example of the Journal of Neuroscience:
* 'neuro-cel' for Molecular/Cellular Neuroscience
* 'neuro-sys' for Behavioral/Systems Neuroscience
* 'neuro-dev' for Developmental Neuroscience
A computational or theoretical category has been initially avoided in order to keep theoretical, modeling and analysis studies mixed in with the experimental work. To reach the front pages for the archives connect to http://xxx.lanl.gov/archive/neuro-cel or http://xxx.lanl.gov/archive/neuro-sys or http://xxx.lanl.gov/archive/neuro-dev Papers may be uploaded in a variety of formats (TeX and LaTeX are preferred; PostScript that is generated as a print file by Microsoft Word is acceptable), which are then automatically converted to various file types for downloading. Instructions for both submission and retrieval are available from the web site. In particular, see http://xxx.lanl.gov/faq/. --------- The neuroscience archives were initiated as a subcategory of the existing e-print archives at LANL during the Analysis of Neural Data Workgroup (see http://sdphln.ucsd.edu/research/neurodata/) at the Marine Biological Laboratories, sponsored by the National Institute of Mental Health.
The e-print archives at the Los Alamos National Laboratory are supported by the U.S. National Science Foundation. ---------- From bersini at ulb.ac.be Mon Sep 22 11:20:30 1997 From: bersini at ulb.ac.be (Hugues Bersini) Date: Mon, 22 Sep 1997 17:20:30 +0200 Subject: The FAMIMO LOMOLOCO Workshop: 25th March 1998 Message-ID: <34268CBB.BAD6AD8A@ulb.ac.be> Following the IFAC workshop on Intelligent Components for Vehicles (ICV'98): One-day Workshop on LOMOLOCO (Local Modelling Local Control) approaches in Non-Linear Control in the context of the FAMIMO ESPRIT Project. In recent years, we have seen an increasing interest in divide-and-conquer approaches in control. The basic idea is simple, and widely used in many scientific communities dealing with neural networks, fuzzy systems, non-linear control, statistical approaches, etc. A variety of ways have been proposed to approximate and control unknown or partially known non-linear systems by some form of combination (smooth or crisp) of simple and local models (low-order polynomials, frequently constant or linear) which only act in some local region of the state space. In process control, this comes very close to the classical strategy known as "gain scheduling", which the influence of fuzzy control has revived in a growing number of applications. Neural network practitioners have proposed (in place of global approximators) "lazy" or "local" types of model and controller, in which learning takes place only on the basis of the data present in the restricted region occupied by the local agent. One basic and obvious advantage of this approach is that it allows us to make use of theoretical results and analytical tools from the fields of statistical linear analysis, linear control, linear regression, etc., in a local, iterative and distributed way. The main issues to be addressed during the workshop will be: - the discovery of the local models i.e.
the modelling part - how to derive local control laws from the local models i.e. how to adapt the different control strategies (regulation, tracking, predictive and optimal control) to this local and distributed approach - how to combine the local models and local controllers: smooth or crisp, and switching strategies in general - how to extend linear stability analysis to the global combination of the local controllers - lazy modelling and lazy control FAMIMO (Fuzzy Algorithms for Multi-Input Multi-Output processes) is the second ESPRIT European initiative (following FALCON) aiming at designing fuzzy systems for the reliable control of high-dimensional complex processes. LOMOLOCO approaches are among the strategies most analysed and tested by the FAMIMO partners, who will present their work on this subject at the workshop. These partners are: the Lund Institute of Technology: Prof. Astrom and Prof. Arzen; the Delft University of Technology: Prof. Verbruggen and Prof. Babuska; the IRIDIA laboratory of the Universite Libre de Bruxelles: Prof. Bersini; SIEMENS Automotive SA: Serge Boverie; the AICIA Institute of Seville: Prof. Ollero. In addition, we expect the presence (to be confirmed) of Prof. Tanaka, Prof. Johanssen, Prof. Palm and some others to present their latest work on the LOMOLOCO approaches in control. We would like to encourage European researchers to attend this workshop, which will take place the day after the Seville workshop, and to present work that they believe to be related to its main theme. We therefore invite people to submit a full copy of their work (6 pages maximum) before 8 December to the address of Hugues Bersini, indicated below. All organisational aspects of the workshop are the same as for the Seville workshop (http://www.esi.us.es/ISA/icv98/). For any information and submission: Hugues Bersini IRIDIA CP 194/6 ULB 50, av.
Franklin Roosevelt 1050 Bruxelles tel: +32 2 650 27 33 fax: +32 2 650 27 15 email: bersini at ulb.ac.be See also: http://iridia.ulb.ac.be/FAMIMO/Workshop.html From Simon.N.CUMMING at British-Airways.com Tue Sep 23 09:20:00 1997 From: Simon.N.CUMMING at British-Airways.com (Simon.N.CUMMING@British-Airways.com) Date: 23 Sep 1997 13:20:00 Z Subject: ANNOUNCEMENT: NCAF CONFERENCE, JAN 98 Message-ID: <"BSC400A1 970923131957580261*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/s=CUMMING/g=SIMON/i=N/"@MHS> NEURAL COMPUTING APPLICATIONS FORUM (NCAF) ------------------------------------------ The purpose of NCAF is to promote widespread exploitation of neural computing technology by:
- providing a focus for neural network practitioners.
- disseminating information on all aspects of neural computing.
- encouraging close co-operation between industrialists and academics.
NCAF holds four two-day conferences per year in the UK, with speakers from commercial and industrial organisations and universities. The focus of the talks is on practical issues in the application of neural network technology and related methods to solving real-world problems. NEXT MEETING - 21-22 JANUARY 1998, MALVERN, UK. ---------------------------------------------- The January meeting will be sponsored by DERA (Defence Evaluation and Research Agency, see http://www.dera.gov.uk/dera.htm). It will be a joint meeting with the annual DERA Artificial Intelligence Conference held at DERA Malvern on Wednesday 21st and Thursday 22nd January 1998. Please note that this will be a one-and-a-half day meeting starting at lunchtime on the 21st. The 1½ days will be packed with applications-oriented papers as usual. There will also be adequate time for networking with other practitioners, during coffee, lunch and the evening event. PROVISIONAL PROGRAMME --------------------- to include...
Multi-Sensor Arrays - Mahesan Niranjan (Cambridge University)
Application of Perceptron Neural Networks to Tool State Classification in a Metal Turning Process - Dimla E Dimla, Jnr (University of Wolverhampton)
Classification of Molten Steel Quality using Pattern Recognition Techniques - Mark Brookes (British Steel)
Neurocomputational Models of Auditory Perception and Speech Recognition - Sue McCabe (Plymouth University)
Transformation-Invariant Feature Recognition by Neural Self-Organisation - Chris Webber (DERA)
A Homogeneity Analysis Approach to Nonlinear Principal Components Analysis - Andrew Webb (DERA)
NCAF Annual General Meeting
SOCIAL PROGRAMME: ---------------- Murder-Mystery and dinner at the Abbey Hotel, Great Malvern on the evening of the 21st. plus, The Puzzle Corner is a mystery too with Graham 'Rottweiler' Hesketh (Rolls-Royce) REGISTRATION: ------------ Please note that due to MoD security restrictions it is absolutely essential that you register early for this meeting. Payment may follow later if necessary but it is vital that your name is known in advance. To register please contact NCAF by email at ncafsec at brunel.ac.uk or phone Sally Francis on (+44)(0)1784 477271 or fax 472879 From emil at uivt.cas.cz Tue Sep 23 10:27:43 1997 From: emil at uivt.cas.cz (Emil Pelikan) Date: Tue, 23 Sep 1997 16:27:43 +0200 Subject: PASE'97 workshop Message-ID: <01BCC83D.9F3B11A0@pc_pelikan.uivt.cas.cz> 6th International Workshop on Parallel Applications in Statistics and Economics PASE '97 Computers in Finance and Economics Marianske Lazne, Czech Republic November 9 - 12, 1997 -------------------------------------------------------------------------------------------------- PURPOSE OF THE WORKSHOP: The purpose of this workshop is to bring together researchers interested in innovative information processing systems and their applications in the areas of statistics, finance and economics.
The focus will be on in-depth presentations of state-of-the-art methods and applications as well as on communicating current research topics. This workshop is intended for industrial and academic persons seeking new ways to work with computers in finance and economics. Topics include:
Statistical Methods for Data Analysis and Forecasting
Neural Networks in Finance and Econometrics
Data Quality and Data Integrity
Data Integration and Data Warehousing
Risk Management Applications
Real Time Decision Support Systems
Banking and Financial Information on the Internet
The presentations at the workshop will be made available to a broader audience by publishing the results in a special issue of Neural Network World - International Journal on Neural and Mass-Parallel Computing and Information Systems. ------------------------------------------------------------------------------------------------- WORKSHOP SITE: The workshop will be held in the Czech Republic in Marianske Lazne (Marienbad), one of the world's most beautiful and most visited spa towns. ------------------------------------------------------------------------------------------------- IN-DEPTH PRESENTATIONS P. Cianchi, G. Congiu, L. Landi, A. Piattoli, Universita di Firenze and Quasar S.p.a. Firenze: Financial Model Definition and Execution in a Real Time System Fully Integrated with the Market G. Darbellay and M. Slama, Institute of Computer Science, Prague: How Non-linear is your Time Series? - A New Method and a Case Study R. Dave, G. Stahl*, G. Ballocchi, M.C. Lundin, R.B. Olsen, Olsen & Associates, Zurich and *German Banking Supervisory Office, Berlin: Volatility Conditional Correlations Between Financial Markets: An Empirical Study with Impact on Risk Management Strategies G. Deboeck, World Bank: Financial Applications of Self-Organizing Maps: Investment Maps for Emerging Markets M. Mehta, Citibank, Bombay and Saudi American Bank, Riyadh: Neural Network Directed Trading in Financial Markets Using High Frequency Data M.
Miksa, Bank Sal Oppenheim, Frankfurt: SONNET: Sal Oppenheim Neural Trader Th. Poddig, Department for Finance, University of Bremen: Developing Forecasting Models for Integrated Financial Markets using Artificial Neural Networks B. Seifert, Oxford Forecasting, Oxford: The 'Divide-and-Rule' Algorithm for Optimizing Functions of Many Variables R. Schnidrig*, D. Würtz, M. Hanf, *Finance Online GmbH and Swiss Center for Scientific Computing, ETH Zürich: Realtime Computer Simulation of a Foreign Exchange Trading Room G. Stahl, German Banking Supervisory Office, Berlin: Backtesting Using a Generalization of the Traffic-Light-Approach J.G. Taylor, Department of Mathematics, University of London: Perception by Neural Networks ---------------------------------------------------------------------------------------------------- SOCIAL EVENTS In keeping with the tradition of the PASE workshop, a sightseeing program as well as other social events will be organized. --------------------------------------------------------------------------------------------------- POSTERS AND DEMONSTRATIONS For soft- or hardware demonstrations please contact the organizers. There is also a limited possibility to present post-deadline contributions. For further information please get in contact with Martin Hanf or Helga Labermeier. --------------------------------------------------------------------------------------------------- WORKSHOP FEE University Sfr 330.- (after Oct. 15th Sfr 380.-) Profit-making company Sfr 690.- (after Oct. 15th Sfr 740.-) The fee includes the "Welcome Party" on Sunday evening, lunches on Monday, Tuesday and Wednesday, the "Workshop Dinner" on Tuesday evening, the special issue of the Neural Network World journal and the proceedings.
----------------------------------------------------------------------------------------------------- CONTACT ADDRESSES
Martin Hanf or Helga Labermeier
SCSC, CLU B1, ETH Zentrum
CH-8092 Zurich, Switzerland
E-Mail: pase at scsc.ethz.ch
FAX: +41.1.632.1104
Emil Pelikan
ICS AS CR
Pod vodarenskou vezi 2
182 07 Prague 8, Czech Republic
E-Mail: emil at uivt.cas.cz
FAX: +4202 8585789
For accommodation and local arrangements ask: Milena Zeithemlova (Action M agency) Vrsovicka 68 101 00 Prague 10 Czech Republic Phone: +42.02.6731 2334 FAX: +42.02.6731 0503 E-Mail: actionm at cuni.cz Latest information about PASE will be available from http://www.uivt.cas.cz/PASE97 From espaa at soc.plym.ac.uk Wed Sep 24 06:58:41 1997 From: espaa at soc.plym.ac.uk (espaa) Date: Wed, 24 Sep 1997 10:58:41 GMT Subject: Pattern Analysis and Applications Journal Message-ID: <68E887170F3@scfs3.soc.plym.ac.uk> Springer Verlag Ltd is launching a new journal - Pattern Analysis and Applications (PAA) - in Spring 1998. Aims and Scope of PAA: The journal publishes high quality articles in areas of fundamental research in pattern analysis and applications. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. The main aim of the journal is to publish high quality research in intelligent pattern analysis in computer science and engineering. In addition, the journal will also publish articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control.
The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition and uncertainty management in applied pattern recognition are particularly solicited. The journal encourages the submission of original case-studies on applied pattern recognition which describe important research in the area. The journal also solicits reviews on novel pattern analysis benchmarks, evaluation of pattern analysis tools, and important research activities at international centres of excellence working in pattern analysis. Audience: Researchers in computer science and engineering. Research and Development Personnel in industry. Researchers in application areas where pattern analysis is used, and researchers in the area of novel pattern recognition and analysis techniques and their specific applications. Full information about the journal and detailed instructions for Call for Papers can be found at the PAA web site: http://www.soc.plym.ac.uk/soc/sameer/paa.htm Best regards Barbara Davies School of Computing University of Plymouth From hsd20 at newton.cam.ac.uk Wed Sep 24 05:33:55 1997 From: hsd20 at newton.cam.ac.uk (Heather S.
Dawson) Date: Wed, 24 Sep 1997 10:33:55 +0100 (BST) Subject: Statistical Analysis of DNA and Protein Sequences Message-ID: A Newton Institute Workshop Statistical Analysis of DNA and Protein Sequences 20 - 24 October 1997 Organisers: D Haussler (UCSC), R Durbin (Sanger) and C M Bishop (Aston) With the Human Genome Project and other model organism genome sequencing projects now in full swing, there is a growing need for statistical analysis of databases containing DNA, RNA and protein sequences. Problems that need to be addressed include finding genes and other important functional elements in DNA sequences, modelling and classifying these elements, and modelling and classifying the RNA and protein sequences that are derived from them. Neural networks and hidden Markov models are two types of models that have been proposed to satisfy this need. The intent of this workshop is to explore the strengths and weaknesses of these and related techniques for the analysis of DNA and protein sequences, from both a mathematical and an empirical viewpoint. Provisional list of speakers includes: Stephen Altschul (NCBI) Phil Green (Seattle) Pierre Baldi (Caltech) David Haussler (UCSC) Philipp Bucher (Lausanne) Liisa Holm (EBI) Soren Brunak (Denmark) Anders Krogh (Denmark) Bill Bruno (Los Alamos) Alan Lapedes (Santa Fe Inst) Chris Burge (Stanford) Chip Lawrence (Albany, NY) Cyrus Chothia (Cambridge) Jun Liu (Stanford) Richard Durbin (Sanger Centre) Graeme Mitchison (Cambridge) Sean Eddy (St Louis) Victor Solovyev (Sanger Centre) Mikhail Gelfand (Moscow) Gary Stormo (Colorado) Nick Goldman (Cambridge) This workshop will form a component of the Newton Institute programme on Neural Networks and Machine Learning, organised by C M Bishop, D Haussler, G E Hinton, M Niranjan and L G Valiant. Further information about the programme is available via the WWW at http://www.newton.cam.ac.uk/programs/nnm.html The workshop will commence at 4:30 p.m. 
on Monday 20 October and end at lunch time on Friday 24 October. Location and Costs: The workshop will be held in the Isaac Newton Institute. Since space at the Institute is limited, participants are strongly encouraged to register early. There will be a registration fee of 60 UK pounds which will include the Conference Dinner on Thursday 23 October as well as morning coffee and afternoon tea throughout the week. Note that accommodation can be difficult to find in Cambridge at this time of year. A limited number of rooms have been reserved and will be offered to registrants on a first-come first-served basis. Registration forms are available from the workshop Web Page at http://www.newton.cam.ac.uk/programs/nnmdna.html Completed registration forms should be sent to Heather Dawson at the Newton Institute, or by e-mail to h.dawson at newton.cam.ac.uk The deadline for registration and housing is 26 September 1997 From allan at ohnishi.nuie.nagoya-u.ac.jp Thu Sep 25 01:10:08 1997 From: allan at ohnishi.nuie.nagoya-u.ac.jp (Allan Kardec Barros) Date: Thu, 25 Sep 1997 14:10:08 +0900 Subject: home-page on ICA Message-ID: <3429F22F.ACEB0976@ohnishi.nuie.nagoya-u.ac.jp> Dear connectionists, Recently, there has been a boom of papers dealing with a new technique called "independent component analysis" (ICA) for blind separation/deconvolution of signals. ICA is an elegant and powerful solution to the problem of source separation, with a broad range of applications (biomedical, communication, speech processing, etc.). To help people interested in ICA, many people have created home-pages on the topic. I volunteered to make available on the net a "bibliography" where one could find "everything" about ICA. Since April 18, 1997, there have been 892 accesses. Of course, it is still far from being complete, even though I have made every effort to keep it updated. You are kindly invited to visit, or to send me your publication on the topic.
I have also added links to people who are doing research on ICA. On their pages, you can find demos, algorithms and other information. The page is: http://www.ohnishi.nuie.nagoya-u.ac.jp/~allan/ICA -- Allan Kardec Barros Ohnishi Lab., Dep. of Info. Eng. School of Engineering, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01 JAPAN From takane at takane2.psych.mcgill.ca Thu Sep 25 08:55:17 1997 From: takane at takane2.psych.mcgill.ca (Yoshio Takane) Date: Thu, 25 Sep 1997 08:55:17 -0400 (EDT) Subject: No subject Message-ID: <199709251255.IAA01398@takane2.psych.mcgill.ca> *****CALL FOR PAPERS***** BEHAVIORMETRIKA, an English journal published in Japan to promote the development and dissemination of quantitative methodology for analysis of human behavior, is planning to publish a special issue on ANALYSIS OF KNOWLEDGE REPRESENTATIONS IN NEURAL NETWORK (NN) MODELS broadly construed. I have been asked to serve as the guest editor for the special issue and would like to invite all potential contributors to submit high quality articles for possible publication in the issue. In statistics, information extracted from the data is stored in estimates of model parameters. In regression analysis, for example, information in observed predictor variables useful in prediction is summarized in estimates of regression coefficients. Due to the linearity of the regression model, interpretation of the estimated coefficients is relatively straightforward. In NN models, knowledge acquired from training samples is represented by the weights indicating the strength of connections between neurons. However, due to the nonlinear nature of the model, interpretation of these weights is extremely difficult, if not impossible. Consequently, NN models have largely been treated as black boxes. This special issue is intended to break the ground by bringing together various attempts to understand internal representations of knowledge in NN models.
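The contrast drawn above (readable regression coefficients versus opaque network weights) can be made concrete with a toy fit. The data-generating rule and all numbers below are invented for illustration and are not taken from the call:

```python
import random

random.seed(0)

# Synthetic data from a known linear rule: y = 2*x1 - 3*x2 + small noise.
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
y = [2.0 * x1 - 3.0 * x2 + random.gauss(0, 0.01) for x1, x2 in X]

# Ordinary least squares via the 2x2 normal equations (X^T X) b = X^T y.
s11 = sum(x1 * x1 for x1, _ in X)
s12 = sum(x1 * x2 for x1, x2 in X)
s22 = sum(x2 * x2 for _, x2 in X)
t1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
t2 = sum(x2 * yi for (_, x2), yi in zip(X, y))
det = s11 * s22 - s12 * s12
b1 = (s22 * t1 - s12 * t2) / det
b2 = (s11 * t2 - s12 * t1) / det

# The fitted coefficients are directly readable "knowledge": each one
# states how y responds to one predictor, and recovers the true rule.
print(round(b1, 2), round(b2, 2))  # close to 2 and -3
```

For a network computing, say, y = W2 · tanh(W1 · x), no individual entry of W1 or W2 admits such a one-line reading, which is exactly the interpretability gap the special issue addresses.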
Papers are invited on network analysis including: * Methods of analyzing basic mechanisms of NN models * Examples of successful network analysis * Comparison among different network architectures in their knowledge representation (e.g., BP vs Cascade Correlation) * Comparison with statistical approaches * Visualization of high-dimensional functions * Regularization methods to improve the quality of knowledge representation * Model selection in NN models * Assessment of stability and generalizability of knowledge in NN models * Effects of network topology, data encoding scheme, algorithm, environmental bias, etc. on network performance * Implementing prior knowledge in NN models SUBMISSION: Deadline for submission: January 31, 1998 Deadline for the first round of reviews: April 30, 1998 Deadline for submission of the final version: August 31, 1998 Number of copies of a manuscript to be submitted: four Format: no longer than 10,000 words; APA style ADDRESS FOR SUBMISSION: Professor Yoshio Takane Department of Psychology McGill University 1205 Dr. Penfield Avenue Montreal QC H3A 1B1 CANADA email: takane at takane2.psych.mcgill.ca tel: 514 398 6125 fax: 514 398 4896

From bert at mbfys.kun.nl Thu Sep 25 03:26:30 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Thu, 25 Sep 1997 09:26:30 +0200 Subject: Job openings neural networks and music Message-ID: <199709250726.JAA03891@vitellius.mbfys.kun.nl> Quantization of Temporal Patterns by Neural Networks A number of positions will become available for a research project on the quantization of musical time using neural networks. Quantization is the process of separating the categorical, discrete time components (durations as notated in the musical score) from the continuous deviations present in a musical performance.
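As a concrete, deliberately naive illustration of the quantization task just described, the sketch below snaps hypothetical performed onset times to a fixed score grid, splitting each onset into a categorical score position and a continuous expressive deviation. The onset data and grid size are invented for illustration; the project's connectionist models are of course far more sophisticated than grid snapping.

```python
# Performed onset times in seconds (hypothetical data);
# assume the score grid is an eighth note of 0.25 s.
performance = [0.02, 0.27, 0.49, 0.77, 1.01, 1.23]
grid = 0.25

durations = []   # categorical part: score positions in grid units
deviations = []  # continuous part: expressive timing residuals
for onset in performance:
    slot = round(onset / grid)          # nearest grid position
    durations.append(slot)
    deviations.append(onset - slot * grid)

print(durations)                         # [0, 1, 2, 3, 4, 5]
print([round(d, 2) for d in deviations]) # small +/- residuals
```

The hard part, which the grid-snapping baseline ignores and the advertised research tackles, is that tempo drifts and the grid itself must be inferred from the performance.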
Besides its fundamental aspects (connectionist models of categorical rhythm perception and their empirical validation), the project has an important practical focus: it aims at developing a robust component for automatic music transcription systems. The research will be carried out jointly at the Nijmegen Institute for Cognition and Information (NICI) and at the Laboratory for Medical Physics and Biophysics (SNN), University of Nijmegen. The project is funded by the Technology Foundation (STW). The following positions are vacant: Research Assistant (vacancy number 118) The task of the research assistant (OIO) will be to design adaptive methods for learning rhythm perception. Possible methods that will be considered are neural networks, hidden Markov models and probabilistic graphical models. The candidate needs an excellent background in physics, mathematics or computer science. This work will lead to a PhD degree in Physics. Additional expertise in either music or psychology is an advantage. Appointment will be full-time for four years or 8/10 for five years. Gross salary will be NLG 2135 per month in the first year, increasing to NLG 3812 in the fourth year, based on full-time employment. This researcher will become a member of the neural network research group at the Laboratory for Medical Physics and Biophysics and will be employed by STW. This group consists of 8 researchers and PhD students and conducts theoretical and applied research on neural networks. Postdoctoral Researcher (vacancy number 21.7.97) The postdoc will investigate an existing connectionist model for quantization, design and evaluate the prototype system, and supervise the implementation of its components. Extensions of the model to handle tempo-tracking, polyphony and possibly beat induction are foreseen. Furthermore, the theoretical results obtained in the post-graduate project need to be integrated and put to practical use.
We look for a cognitive scientist with experience in both experimental methods and computational modeling, preferably using Lisp. Experience with quantization and relaxation networks is an advantage. Appointment will be full-time for three years, with a possible extension. Depending on experience, the salary will be between NLG 3882 and NLG 8201 gross per month. This researcher will become a member of the Music, Mind, Machine project team at the NICI. Music Technologist/Programmer (vacancy number 21.8.97) This technical assistant will be responsible for setting up the hardware and software, conducting the recording of performance data, the construction of user interfaces, interfaces to existing music notation software, and the implementation of an Internet version of the prototype. We look for an engineer with experience in MIDI and related music technologies, Apple Macintosh, World-Wide Web and Internet, and/or Lisp programming. Appointment will probably be half-time for three years, with a possible extension. The salary, depending on experience, will be between NLG 3694 and NLG 5603 gross per month. This researcher will become a member of the Music, Mind, Machine project team at the NICI. More information Details about the context of the research can be found at http://www.nici.kun.nl/mmm and http://www.mbfys.kun.nl/SNN. Employment will begin in early 1998. Nijmegen University intends to employ a proportionate number of women and men in all positions in the faculty. Women are therefore urgently invited to apply. Applications (three copies) should include a curriculum vitae, a statement of the candidate's professional interests and goals, and one copy of recent work (e.g., thesis, computer program, article). Applications for the OIO position (Vacancy 118) should be sent before October 25 to the Personnel Department of the Faculty of Natural Sciences, University of Nijmegen, Toernooiveld 1, 6525 ED Nijmegen (Vacancy 118).
Applications for the post-doctoral position and music technologist (Vacancies 21.7.97 and 21.8.97) should be sent before November 15 to the Department of Personnel & Organization, Faculty of Social Sciences, University Nijmegen, P.O.Box 9104, 6500 HE Nijmegen, The Netherlands. Please mark envelope and letter with the vacancy number. From judithr at wccf.mit.edu Thu Sep 25 06:44:39 1997 From: judithr at wccf.mit.edu (Judith) Date: Thu, 25 Sep 1997 11:44:39 +0100 Subject: position available/MIT Message-ID: MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF BRAIN & COGNITIVE SCIENCES The MIT Department of Brain and Cognitive Sciences anticipates making a tenure-track appointment in computational brain and cognitive science at the Assistant Professor level. Candidates should have a strong mathematical background and an active research interest in the mathematical modeling of specific biophysical, neural or cognitive phenomena. Individuals whose research focuses on learning and memory at the level of neurons and networks of neurons are especially encouraged to apply. Responsibilities include graduate and undergraduate teaching and research supervision. Applications should include a brief cover letter stating the candidate's research and teaching interests, a vita, three letters of recommendation and representative reprints. Send applications by December 21, 1997 to: Chair, Faculty Search Committee/Computational Neuroscience Department of Brain & Cognitive Sciences, E25-406 MIT 77 Massachusetts Avenue Cambridge, MA 02139-4307 MIT is an Affirmative Action/Equal Opportunity Employer. Qualified women and minority candidates are encouraged to apply. From mel at quake.usc.edu Thu Sep 25 18:51:33 1997 From: mel at quake.usc.edu (Bartlett Mel) Date: Thu, 25 Sep 1997 15:51:33 -0700 Subject: NIPS*97 Travel Grants Message-ID: <342AEAF5.1E4D@quake.usc.edu> Neural Information Processing Systems Conference NIPS*97 ****** Deadline Extended to Oct. 
1 ******** Limited funds will be available to support the travel of young investigators, post-doctoral fellows, and graduate students to NIPS. Awards will be based on both merit and need. The amount of aid will generally not exceed $400 per domestic participant or $600 per international attendee. Conference registration is not covered by travel awards. To apply, send (1) a one-page resume, and (2) a one-page (max) justification. In particular, state whether the applicant is an author/presenter on a paper submitted or accepted to NIPS*97, and/or whether the applicant is an official participant in a NIPS workshop. Include a return email address for notification. Send applications to (or use the US mail address below): mel at quake.usc.edu Applications must arrive by Oct. 1 to be considered. Notification will be sent by email in late October. ------- Bartlett Mel NIPS*97 Treasurer -- Bartlett W. Mel (213)740-0334, -3397(lab) Assistant Professor of Biomedical Engineering (213)740-0343 fax University of Southern California, OHE 500 mel at quake.usc.edu US Mail: BME Department, MC 1451, USC, Los Angeles, CA 90089 Fedex: 3650 McClintock Ave, 500 Olin Hall, LA, CA 90089

From tony at salk.edu Tue Sep 23 09:08:51 1997 From: tony at salk.edu (Tony Bell) Date: Tue, 23 Sep 1997 06:08:51 -0700 Subject: NIPS*97 Program Message-ID: <3427BF63.717@salk.edu> --------------------------------------------------------------------- The following is a complete list of papers accepted to NIPS*97, along with the preliminary schedule. Michael Kearns NIPS*97 Program Chair ***** TUESDAY, DECEMBER 2: MORNING ORAL SESSION I ***** DNA^2 DNA Computation: A Potential Killer Application?
Richard Lipton, Princeton University and Bellcore Research (Invited Talk) Incorporating Contextual Information in White Blood Cell Identification Xubo Song and Joseph Sill, California Institute of Technology Harvey Kasdan, International Remote Imaging Systems (Oral Presentation) Extended ICA Removes Artifacts from Electroencephalographic Recordings Tzyy-Ping Jung, Colin Humphries, and Te-Won Lee, Salk Institute Scott Makeig, Naval Health Research Center Martin J. McKeown, Salk Institute Vicente Iragui, UC San Diego Terrence J. Sejnowski, Salk Institute (Oral Presentation) A Solution for Missing Data in Recurrent Neural Networks with an Application to Blood Glucose Prediction Volker Tresp and Thomas Briegel, Siemens (Poster Spotlight) Reinforcement Learning for Call Admission Control and Routing in Integrated Service Networks Peter Marbach, Massachusetts Institute of Technology Oliver Mihatsch, Siemens Miriam Schulte, Technische Universitat Munchen John N. Tsitsiklis, Massachusetts Institute of Technology (Poster Spotlight) Intrusion Detection with Neural Networks Jake Ryan, Risto Miikkulainen, and Meng-Jang Lin, University of Texas at Austin (Poster Spotlight) Structure Driven Image Database Retrieval Jeremy S. De Bonet and Paul Viola, Massachusetts Institute of Technology (Poster Spotlight) Analog VLSI Model of Intersegmental Coordination with Nearest-neighbor Coupling Girish N. Patel, Jeremy H. Holleman and Stephen P. DeWeerth, Georgia Institute of Technology (Poster Spotlight) ***** TUESDAY, DECEMBER 2: MORNING ORAL SESSION II ***** A Framework for Multiple-Instance Learning Oded Maron and Tomas Lozano-Perez, Massachusetts Institute of Technology (Oral Presentation) Hierarchical Non-linear Factor Analysis and Topographic Maps Zoubin Ghahramani and Geoffrey E. Hinton, University of Toronto (Oral Presentation) Learning Continuous Attractors in Recurrent Networks H. S.
Seung, Bell Labs Lucent Technologies (Oral Presentation) Classification by Pairwise Coupling Trevor Hastie, Stanford University Robert Tibshirani, University of Toronto (Poster Spotlight) Agnostic Clustering of Markovian Sequences Ran El-Yaniv, Shai Fine and Naftali Tishby, Hebrew University (Poster Spotlight) EM Algorithms for PCA and SPCA Sam Roweis, California Institute of Technology (Poster Spotlight) Training Methods for Adaptive Boosting of Neural Networks for Character Recognition Holger Schwenk and Yoshua Bengio, Universite de Montreal (Poster Spotlight) Learning to Order Things William W. Cohen, Robert E. Schapire and Yoram Singer, AT&T Labs (Poster Spotlight) On Efficient Heuristic Ranking of Hypotheses Steve Chien, Andre Stechert and Darren Mutz, Jet Propulsion Laboratory (Poster Spotlight) Estimating Dependency Structure as a Hidden Variable Marina Meila and Michael I. Jordan, Massachusetts Institute of Technology (Poster Spotlight) ***** TUESDAY, DECEMBER 2: AFTERNOON ORAL SESSION ***** Learning in Rational Agents Stuart Russell, UC Berkeley (Invited Talk) Visual Navigation in a Robot Using Zig-zag Behavior M. Anthony Lewis, University of Illinois (Oral Presentation) Nonparametric Model-based Reinforcement Learning Christopher G. Atkeson, Georgia Institute of Technology (Oral Presentation) Reinforcement Learning with Hierarchies of Machines Ron Parr and Stuart Russell, UC Berkeley (Oral Presentation) On the Infeasibility of Training Neural Networks with Small Squared Error Van H. Vu, Yale University (Poster Spotlight) Data Dependent Structural Risk Minimization for Perceptron Decision Trees John Shawe-Taylor and Nello Cristianini, University of London (Poster Spotlight) Generalization in Decision Trees and DNF: Does Size Matter? Mostefa Golea and Peter L. Bartlett, Australian National University Wee Sun Lee, Australian Defence Force Academy (Poster Spotlight) The Asymptotic Convergence Rate of Q-learning Cs.
Szepesvari, Jozsef Attila University (Poster Spotlight) An Improved Policy Iteration Algorithm for Partially Observable MDPs Eric A. Hansen, University of Massachusetts (Poster Spotlight) ***** TUESDAY, DECEMBER 2: EVENING POSTER SESSION ***** (Note: contributed papers presented during Tuesday's oral sessions will also have posters Tuesday evening.) Gradients for Retinotectal Mapping Geoffrey J. Goodhill, Georgetown University Incremental Learning with Sample Queries Joel Ratsaby, Nu Age Products Analytical Study of the Interplay Between Architecture and Predictability Avner Priel, Ido Kanter and D.A. Kessler, Bar Ilan University A 1,000-Neuron System with One Million 7-bit Physical Interconnections Yuzo Hirai, University of Tsukuba Finite Sample Bounds for Non-linear Time Series Prediction Ron Meir, Technion Graph Matching with Hierarchical Discrete Relaxation Richard C. Wilson and Edwin R. Hancock, University of York Recovering Perspective Pose with a Dual Step EM Algorithm A.D.J. Cross and E.R. Hancock, University of York Bidirectional Retrieval from Associative Memory Friedrich T. Sommer and G. Palm, University of Ulm Synchronized Auditory and Cognitive 40 Hz Attentional Streams, and the Impact of Rhythmic Expectation on Auditory Scene Analysis Bill Baird, UC Berkeley Ensemble and Modular Approaches for Face Detection: A Comparison Raphael Feraud and Olivier Bernier, France Telecom Hybrid Reinforcement Learning and its Application to Biped Robot Control Satoshi Yamada, Akira Watanabe and Michio Nakashima, Mitsubishi S-Map: A Network with a Simple Self-organization Algorithm for Generative Topographic Mapping Kimmo Kiviluoto and Erkki Oja, Helsinki University of Technology New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit Aapo Hyvarinen, Helsinki University of Technology Combining Classifiers Using Correspondence Analysis Christopher J.
Merz, UC Irvine A Neural Network Model of Naive Preference and Filial Imprinting in the Domestic Chick Lucy E. Hadden, UC San Diego MELONET I: Neural Nets for Inventing Baroque-style Chorale Variations Dominik Hornel, Universitat Fridericiana Karlsruhe (TH) Automatic Aircraft Recovery via Reinforcement Learning: Initial Experiments Jeffrey F. Monaco and David G. Ward, Barron Associates, Inc. Andrew G. Barto, University of Massachusetts Unsupervised On-line Learning of Decision Trees for Hierarchical Data Analysis Marcus Held and Joachim M. Buhmann, Rheinische Friedrich-Wilhelms-Universitat An Annealed Self-organizing Map for Source Channel Coding Matthias Burger, Thore Graepel and Klaus Obermayer, Technical University of Berlin Bach in a Box - Real-time Harmony Randall R. Spangler, Rodney M. Goodman, and Jim Hawkins, California Institute of Technology Function Approximation with the Sweeping Hinge Algorithm Don R. Hush and Fernando Lozano, University of New Mexico Bill Horne, MakeWaves, Inc. 
Incorporating Test Inputs into Learning Zehra Cataltepe and Malik Magdon-Ismail, California Institute of Technology Just One View: Invariances in Inferotemporal Cell Tuning Maximilian Riesenhuber and Tomaso Poggio, Massachusetts Institute of Technology Weight Space Structure and the Storage Capacity of a Fully-connected Committee Machine Yuansheng Xiong, Pohang Institute of Science and Technology Chulan Kwon, Myong Ji University Jong-Hoon Oh, Bell Labs Lucent Technologies Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting Paul Rodriguez, UC San Diego Janet Wiles, University of Queensland Multi-modular Associative Memory Nir Levy, David Horn and Eytan Ruppin, Tel-Aviv University On Parallel Versus Serial Processing: A Computational Study of Visual Search Eyal Cohen and Eytan Ruppin, Tel-Aviv University Self-similarity Properties of Natural Images Antonio Turiel, German Mato, and Nestor Parga, Universidad Autonoma de Madrid Jean-Pierre Nadal, Ecole Normale Superieure Experiences with Bayesian Learning in a Real World Application Peter Sykacek and Georg Dorffner, Austrian Research Institute for Artificial Intelligence Peter Rapplesberger, University of Vienna Josef Zeitlhofer, AKH Vienna An Analog VLSI Neural Network for Phase-based Computer Vision Bertram E. Shi and Kwok Fai Hui, Hong Kong University of Science and Technology An Application of Reversible-jump MCMC to Multivariate Spherical Gaussian Mixtures Alan D. Marrs, Defence Evaluation & Research Agency Regularisation in Sequential Learning Algorithms Joao FG de Freitas, Mahesan Niranjan and Andrew H. Gee, Cambridge University A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure Daniel H. Lange, Hava T. Siegelmann, Hillel Pratt and Gideon F. 
Inbar, Israel Institute of Technology Selecting Weighting Factors in Logarithmic Opinion Pools Tom Heskes, University of Nijmegen Shared Context Probabilistic Transducers Yoshua Bengio, Universite de Montreal Samy Bengio, CIRANO Jean-Francois Isabelle, NOVASYS Yoram Singer, AT&T Labs Perturbative M-sequences for Auditory Systems Identification Mark Kvale and Christoph E. Schreiner, UC San Francisco Instabilities in Eye Movement Control: A Model of Periodic Alternating Nystagmus Ernst R. Dow and Thomas J. Anastasio, University of Illinois The Observer-observation Dilemma in Neuro-forecasting Hans Georg Zimmermann and Ralph Neuneier, Siemens Enhancing Q-learning for Optimal Asset Allocation Ralph Neuneier, Siemens An Analog VLSI Model of the Fly Elementary Motion Detector Reid R. Harrison and Christof Koch, California Institute of Technology Synaptic Transmission: An Information-theoretic Perspective Amit Manwani and Christof Koch, California Institute of Technology Phase Transitions and Perceptual Organization of Video Sequences Yair Weiss, Massachusetts Institute of Technology Adaptive Choice of Grid and Time in Reinforcement Learning Stephan Pareigis, Christian-Albrechts-Universitat Kiel The Rectified Gaussian Distribution N.D. Socci, D.D. Lee and H.S. Seung, Bell Labs Lucent Technologies Two Approaches to Optimal Annealing Todd K. Leen, Oregon Graduate Institute Bernhard Schottky and David Saad, Aston University On the Separation of Signals from Neighboring Cells in Tetrode Recordings Maneesh Sahani, John S. Pezaris and Richard A. 
Andersen, California Institute of Technology Detection of First and Second Order Motion Alexander Grunewald, California Institute of Technology Heiko Neumann, Universitat Ulm Silicon Retina with Adaptive Filtering Properties Shih-Chii Liu, California Institute of Technology ***** WEDNESDAY, DECEMBER 3: MORNING ORAL SESSION I ***** Relative Loss Bounds, the Minimum Relative Entropy Principle and EM Manfred Warmuth, UC Santa Cruz (Invited Talk) Saddle Point and Hamiltonian Structure in Excitatory-inhibitory Networks H.S. Seung and T.J. Richardson, Bell Labs Lucent Technologies J.C. Lagarias, AT&T Labs J.J. Hopfield, Princeton University (Oral Presentation) From Regularization Operators to Support Vector Kernels Alexander J. Smola, GMD Bernhard Scholkopf, Max Planck Institute (Oral Presentation) Globally Optimal On-line Learning Rules Magnus Rattray and David Saad, Aston University (Poster Spotlight) Asymptotics for Regularization Petri Koistinen, University of Helsinki (Poster Spotlight) Prior Knowledge in Support Vector Kernels Bernhard Scholkopf, Max Planck Institute Patrice Simard and Vladimir Vapnik, AT&T Labs Alexander J. Smola, GMD (Poster Spotlight) Optimization of the Drift for Nonequilibrium Diffusion Networks Paul Mineiro, Javier Movellan, and R.J. Williams, UC San Diego (Poster Spotlight) A Revolution: Belief Propagation in Graphs With Cycles Brendan Frey, University of Toronto David J. C. MacKay, Cambridge University (Poster Spotlight) ***** WEDNESDAY, DECEMBER 3: MORNING ORAL SESSION II ***** Using Expectation to Guide Processing: A Study of Three Real-world Applications Shumeet Baluja, Carnegie Mellon University (Oral Presentation) Bayesian Robustification for Audio Visual Fusion in Non-stationary Environments Javier Movellan and Paul Mineiro, UC San Diego (Oral Presentation) Spectrotemporal Receptive Fields for Neurons in the Primary Auditory Cortex of the Awake Primate R.C. deCharms and M.M.
Merzenich, UC San Francisco (Oral Presentation) Active Data Clustering Thomas Hofmann, Massachusetts Institute of Technology Joachim M. Buhmann, Universitat Bonn (Poster Spotlight) Learning Nonlinear Overcomplete Representations for Efficient Coding Michael S. Lewicki and Terrence J. Sejnowski, Salk Institute (Poster Spotlight) A Non-parametric Multi-scale Statistical Model for Natural Images Jeremy S. De Bonet and Paul Viola, Massachusetts Institute of Technology (Poster Spotlight) Modeling Acoustic Correlations by Factor Analysis Lawrence Saul and Mazin Rahim, AT&T Labs (Poster Spotlight) Blind Separation of Radio Signals in Fading Channels Kari Torkkola, Motorola (Poster Spotlight) Features as Sufficient Statistics D. Geiger, A. Rudra and L. Maloney, New York University (Poster Spotlight) Bayesian Model of Surface Perception William T. Freeman, Mitsubishi Electric Research Lab Paul A. Viola, Massachusetts Institute of Technology (Poster Spotlight) 2D Observers for Human 3D Object Recognition? Zili Liu, NEC Research Institute Daniel Kersten, University of Minnesota (Poster Spotlight) ***** WEDNESDAY, DECEMBER 3: AFTERNOON ORAL SESSION ***** Computing with Action Potentials John Hopfield, Princeton University (Invited Talk) Coding of Naturalistic Stimuli by Auditory Midbrain Neurons H. Attias and C.E. Schreiner, UC San Francisco (Oral Presentation) Refractoriness and Neural Precision Michael J. Berry II and Markus Meister, Harvard University (Oral Presentation) A Mathematical Model of Axon Guidance by Diffusible Factors Geoffrey J. Goodhill, Georgetown University (Oral Presentation) Neural Basis of Object-centered Representations Sophie Deneve and Alexandre Pouget, Georgetown University (Poster Spotlight) A Model of Early Visual Processing Laurent Itti, Jochen Braun, Dale K. 
Lee and Christof Koch, California Institute of Technology (Poster Spotlight) Effects of Spike Timing Underlying Binocular Integration and Rivalry in a Neural Model of Early Visual Cortex Erik D. Lumer, Universite Libre de Bruxelles (Poster Spotlight) Statistical Models of Conditioning Peter Dayan and Theresa Long, Massachusetts Institute of Technology (Poster Spotlight) ***** WEDNESDAY, DECEMBER 3: EVENING POSTER SESSION ***** (Note: contributed papers presented during Wednesday's oral sessions will also have posters Wednesday evening.) Serial Order in Reading Aloud: Connectionist Models and Neighborhood Structure Jeanne C. Milostan and Garrison W. Cottrell, UC San Diego The Error Coding Method and PaCT's Gareth James and Trevor Hastie, Stanford University Linear Concepts and Hidden Variables: an Empirical Study Adam J. Grove, NEC Research Institute, Dan Roth, Weizmann Institute of Science A Hippocampal Model of Recognition Memory Randall C. O'Reilly, Massachusetts Institute of Technology Kenneth A. Norman, Harvard University James L. McClelland, Carnegie Mellon University The Efficiency and the Robustness of Natural Gradient Descent Learning Rule Howard Hua Yang and Shun-ichi Amari, RIKEN Derive Serial Updating Rule for Blind Separation from the Method of Scoring Howard Hua Yang, RIKEN Unconscious Inference and the Up-propagation Algorithm Jong-Hoon Oh and H. Sebastian Seung, Bell Labs Lucent Technologies Nonlinear Markov Networks for Continuous Variables Reimar Hofmann and Volker Tresp, Siemens Modelling Seasonality and Trends in Daily Rainfall Data Peter M. Williams, University of Sussex Task and Spatial Frequency Effects on Face Specialization Matthew N. Dailey and Garrison W. Cottrell, UC San Diego A Simple and Fast Neural Network Approach to Stereovision Rolf D. 
Henkel, University of Bremen Wavelet Models for Video Time Series Sheng Ma and Chuanyi Ji, Rensselaer Polytechnic Institute Multiple Threshold Neural Logic Vasken Bohossian and Jehoshua Bruck, California Institute of Technology Reinforcement Learning for Continuous Stochastic Control Problems Remi Munos, CEMAGREF Paul Bourgine, Ecole Polytechnique Independent Component Analysis for Identification of Artifacts in Magnetoencephalographic Recordings Ricardo Vigario, Veikko Jousmaki, Matti Hamalainen, Riitta Hari and Erkki Oja, Helsinki University of Technology Hybrid NN/HMM-based Speech Recognition with a Discriminant Neural Feature Extraction Daniel Willett and Gerhard Rigoll, Gerhard Mercator University Use of a Multi-layer Perceptron to Predict Malignancy in Ovarian Tumors Herman Verrelst, Yves Moreau, and Joos Vandewalle, Katholieke Universiteit Leuven Dirk Timmerman, University Hospitals Leuven Correlates of Attention in a Model of Dynamic Visual Recognition Rajesh P.N. Rao, University of Rochester Approximating Posterior Distributions in Belief Networks Using Mixtures Christopher M. Bishop and Neil Lawrence, Aston University Tommi Jaakkola and Michael I. Jordan, Massachusetts Institute of Technology Regression with Input-dependent Noise: a Gaussian Process Treatment Paul W. Goldberg, Christopher K.I. Williams and Christopher M. Bishop, Aston University Online Learning from Finite Training Sets in Non-linear Networks Peter Sollich, University of Edinburgh David Barber, Aston University Radial Basis Functions: a Bayesian Treatment David Barber and Bernhard Schottky, Aston University Computing with Stochastic Dynamic Synapses Wolfgang Maass, Technische Universitaet Graz Anthony M. Zador, Salk Institute Learning to Schedule Straight-line Code J. Eliot B. Moss, Paul E. 
Utgoff, John Cavazos, Doina Precup and Darko Stefanovic, University of Massachusetts Carla Brodley and David Scheeff, Purdue University, Multi-time Models for Temporally Abstract Planning Doina Precup and Richard S. Sutton, University of Massachusetts Inferring Sparse, Overcomplete Image Codes Using an Efficient Coding Framework Michael S. Lewicki, Salk Institute Bruno A. Olshausen, UC Davis Mapping a Manifold Joshua B. Tenenbaum, Massachusetts Institute of Technology Monotonic Networks Joseph Sill, California Institute of Technology The Canonical Distortion Measure in Feature Space and 1-NN Classification Jonathan Baxter and Peter Bartlett, Australian National University Relative Loss Bounds for Multidimensional Regression Problems Jyrki Kivinen, University of Helsinki Manfred K. Warmuth, UC Santa Cruz A General Purpose Image Processing Chip: Orientation Detection Ralph Etienne-Cummings and Donghui Cai, Southern Illinois University Receptive Field Formation in Natural Scene Environments: Comparison of Kurtosis, Skewness, and the Quadratic form of BCM Brian Blais, N. Intrator, H. Shouval and Leon N. Cooper, Brown University How to Dynamically Merge Markov Decision Processes Satinder Singh, University of Colorado David Cohn, Harlequin, Inc. Comparison of Human and Machine Word Recognition M. Schenkel, C. Latimer and M. Jabri, University of Sydney Analysis of Drifting Dynamics with Neural Network Hidden Markov Models J. Kohlmorgen, K.-R. Muller, GMD K. Pawelzik, MPI f. Stromungsforschung, A Neural Network Based Head Tracking System D.D. Lee and H.S. Seung, Bell Labs Lucent Technologies Multiresolution Tangent Distance for Affine-invariant Classification Nuno Vasconcelos and Andrew Lippman, Massachusetts Institute of Technology Modeling a Complex Cell in an Awake Macaque During Natural Image Viewing William E. Vinje and Jack L. 
Gallant, UC Berkeley Local Dimensionality Reduction Stefan Schaal, Georgia Institute of Technology Sethu Vijayakumar, Tokyo Institute of Technology Christopher G. Atkeson, Georgia Institute of Technology Using Helmholtz Machines to Analyze Multi-channel Neuronal Recordings Virginia R. de Sa, R. Christopher deCharms and Michael M. Merzenich, UC San Francisco RCC Cannot Compute Certain FSA, Even with Arbitrary Transfer Functions Mark Ring, GMD Competitive On-line Linear Regression V. Vovk, University of London Generalized Prioritized Sweeping David Andre, Nir Friedman and Ronald Parr, UC Berkeley Toward a Single-cell Account of Binocular Disparity Tuning: An Energy Model May be Hiding in Your Dendrites Bartlett W. Mel, University of Southern California Daniel L. Ruderman, The Salk Institute Kevin A. Archie, University of Southern California Hippocampal Model of Rat Spatial Abilities Using Temporal Difference Learning David J. Foster and Richard G.M. Morris, Edinburgh University Peter Dayan, Massachusetts Institute of Technology Boltzmann Machine Learning Using Mean Field Theory and Linear Response Correction H.J. Kappen, University of Nijmegen ***** THURSDAY, DECEMBER 4: MORNING ORAL SESSION I ***** Odor Coding by the Olfactory System: Distributed Processing in Biological and Artificial Systems John S. Kauer, Tufts University (Invited Talk) Adaptation in Speech Motor Control John F. Houde, UC San Francisco Michael I. Jordan, Massachusetts Institute of Technology (Oral Presentation) A Superadditive-impairment Theory of Optic Aphasia Michael C. Mozer and Mark Sitton, University of Colorado Martha Farah, University of Pennsylvania (Oral Presentation) Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report Thomas K. Landauer and Darrell Laham, University of Colorado at Boulder Peter Foltz, New Mexico State University (Oral Presentation) ***** THURSDAY, DECEMBER 4: MORNING ORAL SESSION II ***** Local Independent Component Analysis Juan K.
Lin, University of Chicago (Oral Presentation) Stacked Density Estimation Padhraic Smyth, UC Irvine David Wolpert, IBM Almaden Research (Oral Presentation) Ensemble Learning for Multi-layer Networks David Barber and Christopher M. Bishop, Aston University (Oral Presentation) ***** END OF CONFERENCE --- ADJOURN TO WORKSHOPS ***** From aho at nada.kth.se Fri Sep 26 11:35:25 1997 From: aho at nada.kth.se (Anders Holst) Date: Fri, 26 Sep 1997 17:35:25 +0200 (MET DST) Subject: PhD thesis available Message-ID: <199709261535.RAA15645@aho.nada.kth.se> The following PhD thesis is available at: ftp://ftp.nada.kth.se/SANS/reports/ps/aho-thesis.ps.gz http://www.nada.kth.se/~aho/thesis.html -------------------------------------------------------------------- THE USE OF A BAYESIAN NEURAL NETWORK MODEL FOR CLASSIFICATION TASKS Anders Holst Studies of Artificial Neural Systems Dept. of Numerical Analysis and Computing Science Royal Institute of Technology, S-100 44 Stockholm, Sweden Abstract This thesis deals with a Bayesian neural network model. The focus is on how to use the model for automatic classification, i.e. on how to train the neural network to classify objects from some domain, given a database of labeled examples from the domain. The original Bayesian neural network is a one-layer network implementing a naive Bayesian classifier. It is based on the assumption that different attributes of the objects appear independently of each other. This work has been aimed at extending the original Bayesian neural network model, mainly focusing on three different aspects. First the model is extended to a multi-layer network, to relax the independence requirement. This is done by introducing a hidden layer of complex columns, groups of units which take input from the same set of input attributes. Two different types of complex column structures in the hidden layer are studied and compared. 
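The one-layer naive Bayesian classifier described above has a concrete linear reading: the log posterior is the log prior plus a sum of per-attribute log likelihoods, so with one-hot inputs it is a single weighted layer whose biases are log priors and whose weights are log likelihoods. The following toy sketch (invented data and NumPy code, not from the thesis) builds such a "network" directly from counts:

```python
import numpy as np

# Toy labeled data: two binary attributes, two classes.
X = np.array([[1, 1], [1, 0], [1, 1], [0, 0], [0, 1], [0, 0]])
y = np.array([0, 0, 0, 1, 1, 1])

classes = np.unique(y)
# Bias of output unit c is log P(c).
bias = np.log(np.array([(y == c).mean() for c in classes]))
# W[c, i, v] = log P(attribute i takes value v | class c),
# Laplace-smoothed so unseen combinations get nonzero probability.
W = np.zeros((2, 2, 2))
for c in classes:
    Xc = X[y == c]
    for i in range(2):
        for v in (0, 1):
            W[c, i, v] = np.log(((Xc[:, i] == v).sum() + 1) / (len(Xc) + 2))

def classify(x):
    # One "layer": bias plus a sum of weights picked out by the inputs.
    scores = bias + np.array([sum(W[c, i, x[i]] for i in range(2))
                              for c in classes])
    return int(np.argmax(scores))

print(classify([1, 1]))  # -> 0
print(classify([0, 0]))  # -> 1
```

The thesis's complex columns relax exactly the independence assumption this sketch relies on, by letting groups of attributes share a hidden unit.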
An information theoretic measure is used to decide which input attributes to consider together in complex columns. Ideas from Bayesian statistics are also used, as a means to estimate from data the probabilities required to set up the weights and biases in the neural network. The use of uncertain evidence and continuous valued attributes in the Bayesian neural network is also treated. Both require the network to handle graded inputs, i.e. probability distributions over some discrete attributes given as input. Continuous valued attributes can then be handled by using mixture models. In effect, each mixture model converts a set of continuous valued inputs to a discrete number of probabilities for the component densities in the mixture model. Finally a query-reply system based on the Bayesian neural network is described. It constitutes a kind of expert system shell on top of the network. Rather than requiring all attributes to be given at once, the system can ask for the attributes relevant for the classification. Information theory is used to select the attributes to ask for. The system also offers an explanatory mechanism, which can give simple explanations of the state of the network, in terms of which inputs mean the most for the outputs. These extensions to the Bayesian neural network model are evaluated on a set of different databases, both realistic and synthetic, and the classification results are compared to those of various other classification methods on the same databases. The conclusion is that the Bayesian neural network model compares favorably to other methods for classification. In this work much inspiration has been taken from various branches of machine learning. The goal has been to combine the different ideas into one consistent and useful neural network model.
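[Editor's note: an illustrative aside, not from the thesis. The original one-layer network implements a naive Bayesian classifier, whose weights can be read as log conditional probabilities and whose biases as log class priors; classification is then a single weighted sum over the active inputs followed by an argmax. A minimal sketch under that standard formulation, with all names and the toy data hypothetical:]

```python
import math
from collections import defaultdict

# Naive Bayes viewed as a one-layer network:
#   bias[c]           = log P(class = c)
#   weight[c][(i, v)] = log P(attribute_i = v | class = c)
# Classification is a weighted sum over active inputs plus the bias,
# followed by argmax. Illustration only, not the thesis implementation.

def train(examples, labels, alpha=1.0):
    classes = set(labels)
    n_attrs = len(examples[0])
    counts = {c: defaultdict(int) for c in classes}
    class_counts = defaultdict(int)
    for x, y in zip(examples, labels):
        class_counts[y] += 1
        for i, v in enumerate(x):
            counts[y][(i, v)] += 1
    values = [set(x[i] for x in examples) for i in range(n_attrs)]
    total = len(labels)
    bias = {c: math.log(class_counts[c] / total) for c in classes}
    weight = {c: {(i, v): math.log((counts[c][(i, v)] + alpha)
                                   / (class_counts[c] + alpha * len(values[i])))
                  for i in range(n_attrs) for v in values[i]}
              for c in classes}
    return bias, weight

def classify(bias, weight, x):
    # One-layer feedforward pass; unseen (attribute, value) pairs
    # contribute weight 0.0 in this simplified sketch.
    scores = {c: bias[c] + sum(weight[c].get((i, v), 0.0)
                               for i, v in enumerate(x))
              for c in bias}
    return max(scores, key=scores.get)

X = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "hot")]
y = ["no", "no", "yes", "yes"]
bias, weight = train(X, y)
print(classify(bias, weight, ("rainy", "mild")))  # -> yes
```

The complex columns of the thesis would replace single attributes with joint attributes over groups of inputs; the sketch above covers only the one-layer baseline.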
A main theme throughout is to utilize independencies between attributes, to decrease the number of free parameters, and thus to increase the generalization capability of the method. Significant contributions are the method used to combine the outputs from mixture models over different subspaces of the domain, and the use of Bayesian estimation of parameters in the expectation maximization method during training of the mixture models. Keywords: Artificial neural network, Bayesian neural network, Machine learning, Classification task, Dependency structure, Mixture model, Query-reply system, Explanatory mechanism. From icsc at compusmart.ab.ca Fri Sep 26 12:25:04 1997 From: icsc at compusmart.ab.ca (ICSC Canada) Date: Fri, 26 Sep 1997 10:25:04 -0600 Subject: NC'98 / CFP Message-ID: <3.0.1.32.19970926102504.006beb6c@mail.compusmart.ab.ca> ANNOUNCEMENT / CALL FOR PAPERS International ICSC/IFAC Symposium on NEURAL COMPUTATION / NC'98 To be held at the Technical University of Vienna September 23 - 25, 1998 http://www.compusmart.ab.ca/icsc/nc98.htm SPONSORS - IFAC International Federation of Automatic Control - IEEE Institute of Electrical & Electronics Engineers Section Austria - OeCG Oesterreichische Computer Gesellschaft - OeVE Oesterreichischer Verband fuer Elektrotechnik - Siemens AG, Vienna, Austria - TU University of Technology, Vienna - ICSC International Computer Science Conventions, Canada/Switzerland ************************************************* SYMPOSIUM ORGANIZATION - HONORARY CHAIR Prof. Tomaso Poggio Co-Director Center for Biological and Computational Learning Massachusetts Institute of Technology, E25-201 Cambridge, MA 02139 / USA Email: tp-temp at ai.mit.edu Web Site: http://www.ai.mit.edu/people/poggio/ - SYMPOSIUM CHAIR Dr. 
Michael Heiss ECANSE Siemens AG Austria Gudrunstrasse 11 A-1100 Vienna / Austria Phone: +43-1-1707-47149 Fax: +43-1-1707-56256 Email: michael.heiss at siemens.at m.heiss at ieee.org Web Site: http://www.siemens.at/~ecanse/heiss.html - SYMPOSIUM ORGANIZER ICSC International Computer Science Conventions P.O. Box 279 Millet, AB T0C 1Z0 / Canada Phone: +1-403-387-3546 Fax: +1-403-387-4329 Email: icsc at compusmart.ab.ca Web Site: http://www.compusmart.ab.ca/icsc - PROGRAM COMMITTEE Peter G. Anderson, USA Shai Ben-David, Israel Horst Bischof, Austria Hans H. Bothe, Germany Martin Brown, U.K. Frans M. Coetzee, USA Juan Lopez Coronado, Spain Georg Dorffner, Austria K. Fukushima, Japan Wulfram Gerstner, Switzerland Stan Gielen, Netherlands C. Lee Giles, USA C.J. Harris, U.K. J. Herault, France Kurt Hornik, Austria Nikola Kasabov, New Zealand Bart Kosko, USA Fa-Long Luo, Germany Wolfgang Maass, Austria Giuseppe Martinelli, Italy Fazel Naghdy, Australia Sankar K. Pal, India Y.H. Pao, USA Martin Pottmann, USA Raul Rojas, Germany Tariq Samad, USA V. David Sanchez A., USA Robert Sanner, USA Bernd Schuermann, Germany J.S. Shawe-Taylor, U.K. Peter Sincak, Slovakia George D. Smith, U.K. Nigel Steele, U.K. Piotr Szczepaniak, Poland Csaba Szepesvari, Hungary Henning Tolle, Germany S. Usui, Japan J. Vandewalle, Belgium A. Weinmann, Austria T. Yamakawa, Japan The symposium is organized under the honorary patronage of Prof. Alexander Weinmann, University of Technology, Vienna. ************************************************* NATIONAL ORGANIZING COMMITTEE M. Budil - T. Gruenberger - B. Knapp - M. Kuehrer - W. Reinisch - E. Thurner - C. Stroh ************************************************* PURPOSE OF THE CONFERENCE NC'98 will bring together scientists and practitioners in the field of Artificial Neural Networks.
The conference will concentrate on the computational aspects of Neural Networks in the widest sense, from convergence theory, numerical aspects and hybrid systems through to neural software and hardware. In addition, the conference will provide a venue for the discussion of commercial and other practical applications of the technology. It is hoped that participants will have a common interest in, and a fascination for, self-learning systems in nature, their theoretical modeling and interpretation, and their computational implementation. ************************************************* TOPICS Contributions are sought in areas based on the list below, which is indicative only. Contributions from new application areas will be particularly welcome. Neural Network Theory - Mathematical, computational background - Mathematical theories of networks in dynamical systems - Neural network architectures and algorithms - Neural models for cognitive science and brain functions - Convergence - Numerical aspects - Statistical properties - Artificial associative memories - Self-organization - Hybrid systems - Neuro-Fuzzy - Knowledge extraction from neural networks - Genetic algorithms - Optimization - Radial basis function networks - CMAC - Pre-filtering and pre-selection of input-data - Chaos-theoretical methods for data evaluation - Vector quantization Tools and Hardware Implementation - Rapid prototyping tools - Graphical programming tools - Simulation and analysis tools - Heuristics for neural network design - Neuro-computers - Electronic and optic implementations - Neuro-chips - Spiking-Neurons - Parallel processing Applications - Pattern recognition - Signal processing - Neural filters - Speech recognition - Robotics - Control - System Identification - Time series prediction - Sales forecast - Electrical power load forecast - X-ray image analysis ************************************************* SCIENTIFIC PROGRAM At present, the following plenary talks have been
scheduled: - 'Learning Sparse Representations' Prof. Tomaso Poggio, Honorary Chairman of NC'98, Massachusetts Institute of Technology, Cambridge MA, USA - 'Spiking Neurons' Prof. Wolfgang Maass, Technical University Graz, Austria - 'Models of Visual Awareness in a Multi-Module Neural Network' Prof. Igor Aleksander, Imperial College of Science, Technology and Medicine, London, U.K. NC'98 will include other invited plenary talks, contributed sessions, invited sessions, poster sessions, workshops and demonstrations. Various projects are under consideration. ************************************************* WORKSHOPS, TUTORIALS AND OTHER CONTRIBUTIONS Proposals should be submitted as soon as possible to the Symposium Chairman or the Symposium Organizer. Deadline for proposals: January 31, 1998. Tutorials and workshops will partly be organized prior to the conference on September 21-22, 1998. ************************************************* EXHIBITION An exhibition may be arranged at the conference site, displaying products, services and literature related to the conference topics and technology. Interested exhibitors are requested to contact the Symposium Organizers for further information. Conference participants may display any written information related to the conference topic at the information desk (papers, call for papers, product information). For non-participants we offer a handling charge of US$ 100 for each 5 kilograms. ************************************************* SUBMISSION OF PAPERS Prospective authors are requested to send an abstract of 1000 - 2000 words for review by the International Program Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work.
The abstract should also include: - Title of conference (NC'98) - Type of paper (regular, demonstration, poster, tutorial or invited) - Title of proposed paper - Authors' names, affiliations, addresses - Name of author to contact for correspondence - E-mail address and fax # of contact author - Topics which best describe the paper (max. 5 keywords) Contributions are welcome from those working in industry and having experience in the topics of this conference as well as from academics. The conference language is English. Tutorial papers and demonstrations are also encouraged. It is recommended to submit abstracts by electronic mail to icsc at compusmart.ab.ca or else by fax or mail (2 copies) to the following address: ICSC Canada P.O. Box 279 Millet, Alberta T0C 1Z0 Canada Email: icsc at compusmart.ab.ca Fax: +1-403-387-4329 Submissions for poster presentations (minimum 4 pages) are accepted until June 30, 1998. These papers will neither be reviewed nor included in the proceedings, and later amendments are still possible. ************************************************* DEMONSTRATION SESSIONS Instead of submitting a paper, proposals for a 20-minute demonstration of a practical application can also be submitted. A regular 220V power outlet and a VHS-video recorder will be available in the demonstration room. The proposal should follow the above abstract submission guidelines. Only the abstract of the demonstration will be printed in the conference proceedings. ************************************************* POSTER SESSIONS Poster presentations are encouraged for people who wish to receive peer feedback on research which is not yet ready for publication. Practical examples of applied research are particularly welcome. Poster sessions will allow the presentation and discussion of respective papers.
Papers for poster presentations should contain at least 4 pages and be submitted to the symposium organizers by June 30, 1998 to be included in the conference program. These papers will neither be reviewed nor included in the proceedings, and later amendments are thus still possible. ************************************************* INVITED SESSIONS Prospective organizers of invited sessions are requested to send a session proposal (consisting of 4-5 invited paper abstracts, the recommended session-chair and co-chair, as well as a short statement describing the title and the purpose of the session) to the Symposium Chairman or the Symposium Organizer. All invited sessions should start with a tutorial paper. ************************************************* SIEMENS BEST PRESENTATION AWARD The best presentation of each session will be honored with a best presentation award. ************************************************* PUBLICATIONS Conference proceedings (including all accepted papers) will be published by ICSC Academic Press and be available for the delegates at the symposium in printed form or on CD-ROM. Authors of a selected number of innovative papers will be invited to submit extended manuscripts for publication in a special issue of the Elsevier journal Neurocomputing. ************************************************* IMPORTANT DATES - Submission of Abstracts: January 31, 1998 - Notification of Acceptance: March 31, 1998 - Delivery of full papers: May 31, 1998 - Submission of poster presentations: June 30, 1998 - Tutorials and Workshops: September 21-22, 1998 - NC'98 Symposium: September 23-25, 1998 ************************************************* ACCOMMODATION Accommodation at reasonable rates will be available at nearby hotels. Full details will follow with the letters of acceptance.
************************************************* SOCIAL AND TOURIST ACTIVITIES A social program, including a reception and a "Heurigen Dinner", will be organized and also be available for accompanying persons. Discover why Vienna is one of the most favored conference cities worldwide. The month of September is the best time for a visit to Vienna. Visit the Stephansdom, relax at the Prater, where you can take a ride on the Riesenrad, eat the world's best pastries (e.g. the original Sacher-Torte) in one of the famous coffee houses, listen to an opera at the Vienna State Opera, or visit one of the many museums, such as the famous Klimt Collection in the Belvedere Castle. Post-conference tours to the Wachau, Salzburg or Prague may be organized. Please mail your preferences to the symposium organizer. Further tourist information is available from http://www.aaf.or.at/tom/Wien/ http://info.wien.at/e/index.htm http://info.wien.at/e/wienkart.htm http://austria-info.at/ http://www.magwien.gv.at/gismap/cgi-bin/wwwgis/adrsuche/ ************************************************* FURTHER INFORMATION Full updated information is available from http://www.compusmart.ab.ca/icsc/nc98.htm or contact - ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada E-mail: icsc at compusmart.ab.ca Fax: +1-403-387-4329 Phone: +1-403-387-3546 - Dr. Michael Heiss, Symposium Chair NC'98 PSE NLT2 ECANSE, Siemens AG Austria, Gudrunstrasse 11, A-1100 Vienna, Austria Email: michael.heiss at siemens.at or: m.heiss at ieee.org Fax: +43-1-1707-56256 Phone: +43-1-1707-47149 From j.cheng at ulst.ac.uk Fri Sep 26 17:29:13 1997 From: j.cheng at ulst.ac.uk (Jie Cheng) Date: Fri, 26 Sep 1997 21:29:13 -0000 Subject: Software Announcement Message-ID: <199709262131.AA21998@iserve1.infj.ulst.ac.uk> ANNOUNCEMENT ========================================================================= A belief network learning system is now available for download. It includes a wizard-like interface and a construction engine.
Name: Belief Network Power Constructor
Version: 1.0 Beta 1
Platforms: 32-bit Windows systems (Windows 95/NT)
Input: A data set with discrete values in the fields (attributes) and optional domain knowledge (attribute ordering, partial ordering, direct causes and effects).
Output: A network structure of the data set.

Main Features:
1. Easy to use. It gathers the necessary input information through 5 simple steps.
2. Accessible. Supports most of the popular desktop database and spreadsheet formats, including Ms-Access, dBase, Foxpro, Paradox, Excel and text file formats. It also supports remote database servers like ORACLE and SQL-SERVER through ODBC.
3. Reusable. The engine is an ActiveX DLL, so you can easily integrate it into your own belief network, data mining or knowledge base system for Windows 95/NT.
4. Efficient. The engine constructs belief networks using conditional independence (CI) tests. In general, it requires O(N^4) CI tests; when the attribute ordering is known, the complexity drops to O(N^2), where N is the number of attributes (fields).
5. Reliable. A modified mutual information calculation is used as the CI test, making it more reliable when the data set is not large.
6. Supports domain knowledge. Complete ordering, partial ordering, and causes and effects can be used to constrain the search space and therefore speed up the construction process.
7. Running time is linear in the number of records.

The system can be downloaded from: http://193.61.148.131/jcheng/bnpc.htm

Suggestions and comments are welcome.

----------------------------------------------------
Jie Cheng email: j.cheng at ulst.ac.uk
16J24, Faculty of Informatics, UUJ, UK.
BT37 0QB Tel: +44 1232 366500 Fax: +44 1232 366068 http://193.61.148.131/jcheng/ ---------------------------------------------------- From georg at ai.univie.ac.at Mon Sep 29 15:06:56 1997 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Mon, 29 Sep 1997 21:06:56 +0200 Subject: CFP: NN and Adaptive Systems Message-ID: <342FFC50.59E2B600@ai.univie.ac.at> CALL FOR PAPERS for the symposium ====================================================== Artificial Neural Networks and Adaptive Systems ====================================================== chairs: Horst-Michael Gross, Germany, and Georg Dorffner, Austria as part of the Fourteenth European Meeting on Cybernetics and Systems Research April 14-17, 1998 University of Vienna, Vienna, Austria For this symposium, papers on any theoretical or practical aspect of artificial neural networks are invited. Special focus, however, will be put on the issue of adaptivity both in practical engineering applications and in applications of neural networks to the modeling of human behavior. By adaptivity we mean the capability of a neural network to adjust itself to changing environments. For this, a careful distinction is made between "learning" to devise weight matrices for a neural network before it is applied (and usually left unchanged) on the one hand, and "true" adaptivity of a given neural network to constantly changing conditions, i.e. real-time learning in non-stationary environments, on the other hand. The following is a - by no means exhaustive - list of possible topics in this realm: - online learning of neural network applications facing changing data distributions - transfer of neural network solutions to related but different domains - application of neural networks for adaptive autonomous systems - "phylogenetic" vs. "ontogenetic" adaptivity (e.g. adaptivity of connectivity and architecture vs. adaptivity of coupling parameters or weights) - short term vs.
long term adaptation - adaptive reinforcement learning - adaptive pattern recognition - localized vs. distributed approximation (in terms of overlap of decision regions) and adaptivity Preference will be given to contributions that address such issues of adaptivity, but - as mentioned initially - other original work on neural networks is also welcome. Deadline for submissions (10 single-spaced A4 pages, maximum 43 lines, max. line length 160 mm, 12 point) is =============================================== October 26, 1997 =============================================== (Note that this deadline has been extended w.r.t. the original EMCSR deadline) Papers should be sent to: I. Ghobrial-Willmann or G. Helscher Austrian Society for Cybernetic Studies A-1010 Vienna 1, Schottengasse 3 (Austria) Phone: +43-1-53532810 Fax: +43-1-5320652 E-mail: sec at ai.univie.ac.at For more information on the whole EMCSR conference, see the Web page http://www.ai.univie.ac.at/emcsr/ or contact the above address. Hope to see you in Vienna! From drl at eng.cam.ac.uk Mon Sep 29 15:30:48 1997 From: drl at eng.cam.ac.uk (drl@eng.cam.ac.uk) Date: Mon, 29 Sep 1997 20:30:48 +0100 (BST) Subject: Quality Assurance in Maternity Care project Message-ID: <199709291930.28462@opal.eng.cam.ac.uk.eng.cam.ac.uk> (Apologies for cross-posting) The Quality Assurance in Maternity Care (QAMC) project is a 3-year investigation into neural network and other methods for predicting obstetrical risk. The project is funded by the European Union BIOMED program and the data processing centre is Cambridge, England. The QAMC project has made use of 771,571 cases from the Scottish Morbidity Record and its findings should be of interest to researchers in the clinical, statistical, connectionist and data-mining communities. Much effort has gone into developing new methods for feature selection in large databases of discrete valued information.
This research and all of the project's other publications are available via the project Web page: http://svr-www.eng.cam.ac.uk/projects/qamc Furthermore, the Web page provides interactive access to estimated and observed rates of incidence of a particular adverse pregnancy outcome: failure to progress in labour. The project is actively seeking feedback and we would welcome your comments (which can be submitted and read via the above Web page). We hope you find this information of benefit.

--
David R. Lovell (drl at eng.cam.ac.uk)
Research Associate, Depts of Engineering and Obstetrics & Gynaecology
Q.A.M.C. (Quality Assurance in Maternity Care)
University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK.
Tel: +44 1223 332 754
http://svr-www.eng.cam.ac.uk/~drl

From S.Singh-1 at plymouth.ac.uk Mon Sep 29 19:23:37 1997 From: S.Singh-1 at plymouth.ac.uk (Sameer Singh) Date: Mon, 29 Sep 1997 19:23:37 BST Subject: PhD studentship Message-ID: <30C31E731BA@cs_fs15.csd.plym.ac.uk> University of Plymouth, UK School of Computing PhD Research Studentship Salary: See below Applications are now invited for a PhD studentship in the School of Computing in the area of unstructured information processing and extraction using intelligent techniques such as neural networks. The research project will be carried out in collaboration with Ranco Controls Ltd., Plymouth, a world leading manufacturer of control equipment. The project will also collaborate with the School of Electronic, Communication and Electrical Engineering. You should have a background in computer science or engineering with a good honours degree, and preferably with a Masters qualification. The project requires good knowledge in areas including information systems, artificial intelligence and C/C++. The studentship covers the tuition fee and a maintenance grant of Pounds 5510 per year. Application forms and further details are available from the School Office on +44-1752-232 541.
Further information and informal enquiries on the project should be directed to Dr. Sameer Singh, School of Computing, University of Plymouth, UK (tel: +44-1752-232 612, fax: +44-1752-232 540, e-mail: s1singh at plym.ac.uk). Closing date: 17 October, 1997 Promoting equal opportunities A Leading Centre for Teaching and Research From payman at fermi.jpl.nasa.gov Mon Sep 29 14:43:30 1997 From: payman at fermi.jpl.nasa.gov (Payman Arabshahi) Date: Mon, 29 Sep 1997 11:43:30 -0700 Subject: Paper available: Adaptive fuzzy min-max estimation Message-ID: <199709291843.LAA01486@fermi.jpl.nasa.gov> The following paper is now available online via: http://dsp.jpl.nasa.gov/~payman (under "Publications") or via anonymous ftp: ftp://dsp.jpl.nasa.gov/pub/payman/tcas9701.ps.gz (564842 bytes gzip compressed or 2306003 bytes uncompressed)

---
Payman Arabshahi
Jet Propulsion Laboratory
4800 Oak Grove Drive, MS 238-343
Pasadena, CA 91109
Tel: (818) 393-6054
Fax: (818) 393-1717
Email: payman at jpl.nasa.gov
--------------------------------------------------------------------------

TITLE: Pointer adaptation and pruning of min-max fuzzy inference and estimation.
AUTHORS: P. Arabshahi, R.J. Marks, S. Oh, T.P. Caudell, and J.J. Choi.
SOURCE: IEEE Transactions on Circuits and Systems II - Analog and Digital Signal Processing, vol. 44, no. 9, Sept. 1997, pp. 696-709.
ABSTRACT: A new technique for adaptation of fuzzy membership functions in a fuzzy inference system is proposed. The pointer technique relies upon the isolation of the specific membership functions that contributed to the final decision, followed by the updating of these functions' parameters using steepest descent. The error measure used is thus backpropagated from output to input, through the min and max operators used during the inference stage. This is possible because the operations of min and max are continuous differentiable functions and, therefore, can be placed in a chain of partial derivatives for steepest descent backpropagation adaptation. Interestingly, the partials of min and max act as "pointers", with the result that only the function that gave rise to the min or max is adapted; the others are not. To illustrate, let alpha = max[beta(1), beta(2), ..., beta(N)]. Then partial alpha / partial beta(n) = 1 when beta(n) is the maximum and is otherwise zero. We apply this property to the fine tuning of membership functions of fuzzy min-max decision processes and illustrate with an estimation example. The adaptation process can reveal the need for reducing the number of membership functions. Under the assumption that the inference surface is in some sense smooth, the process of adaptation can reveal overdetermination of the fuzzy system in two ways. First, if two membership functions come sufficiently close to each other, they can be fused into a single membership function. Second, if a membership function becomes too narrow, it can be deleted. In both cases, the number of fuzzy IF-THEN rules is reduced. In certain cases, the overall performance of the fuzzy system can be improved by this adaptive pruning.
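[Editor's note: an illustrative aside, not the authors' code. The "pointer" property described in the abstract, that the partial derivative of max is 1 for the winning input and 0 for all others, so a gradient step adapts only the membership value that produced the maximum, can be demonstrated in a few lines (all names hypothetical):]

```python
# For alpha = max(beta_1, ..., beta_N), the partial derivative of alpha
# with respect to beta_n is 1 when beta_n is the maximum and 0 otherwise.
# A steepest-descent step backpropagated through max therefore updates
# only the "winning" beta; the others are untouched.

def max_with_grad(betas):
    alpha = max(betas)
    winner = betas.index(alpha)
    grads = [1.0 if i == winner else 0.0 for i in range(len(betas))]
    return alpha, grads

def pointer_update(betas, error, rate=0.1):
    # One gradient step: the partials act as "pointers" selecting which
    # parameter receives the update.
    _, grads = max_with_grad(betas)
    return [b - rate * error * g for b, g in zip(betas, grads)]

betas = [0.2, 0.9, 0.5]
alpha, grads = max_with_grad(betas)
print(alpha, grads)  # -> 0.9 [0.0, 1.0, 0.0]
print(pointer_update(betas, error=1.0, rate=0.5))  # only betas[1] changes
```

The min operator behaves symmetrically, with the minimizing input receiving the unit partial derivative.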
--------------------------------------------------------------------------

From honavar at cs.iastate.edu Tue Sep 2 22:48:52 1997
From: honavar at cs.iastate.edu (Vasant Honavar)
Date: Tue, 2 Sep 1997 21:48:52 -0500 (CDT)
Subject: Call for Papers: ICGI-98
Message-ID: <199709030248.VAA02592@ren.cs.iastate.edu>

Dear Colleague: A preliminary call for papers for ICGI-98 follows. This is being mailed to multiple mailing lists. My apologies if you receive multiple copies of this message as a result.

Vasant Honavar
honavar at cs.iastate.edu

----------------------------------------------------------------------------

Preliminary Call for Papers
http://www.cs.iastate.edu/~honavar/icgi98.html

Fourth International Colloquium on Grammatical Inference (ICGI-98)
Program Co-Chairs: Vasant Honavar and Giora Slutzki, Iowa State University
July 12-14, 1998
Iowa State University, Ames, Iowa, USA.
---------------------------------------------------------------------------- In cooperation with IEEE Systems, Man, and Cybernetics Society ACL Special Interest Group on Natural Language Learning (and possibly other organizations) ---------------------------------------------------------------------------- Index * Introduction * Conference Format * Topics of Interest * Program Committee * Local Arrangements Committee * Submission of Papers * Submission of Tutorial Proposals ---------------------------------------------------------------------------- Introduction Grammatical Inference, variously referred to as automata induction, grammar induction, and automatic language acquisition, refers to the process of learning grammars and languages from data. Machine learning of grammars finds a variety of applications in syntactic pattern recognition, adaptive intelligent agents, diagnosis, computational biology, systems modelling, prediction, natural language acquisition, data mining and knowledge discovery. Traditionally, grammatical inference has been studied by researchers in several research communities including: Information Theory, Formal Languages, Automata Theory, Language Acquisition, Computational Linguistics, Machine Learning, Pattern Recognition, Computational Learning Theory, Neural Networks, etc. Perhaps one of the first attempts to bring together researchers working on grammatical inference for an interdisciplinary exchange of research results took place under the aegis of the First Colloquium on Grammatical Inference held at the University of Essex in the United Kingdom in April 1993.
This was followed by the (second) International Colloquium on Grammatical Inference, held at Alicante in Spain, the proceedings of which were published by Springer-Verlag as volume 862 of the Lecture Notes in Artificial Intelligence, and the Third International Colloquium on Grammatical Inference, held at Montpellier in France, the proceedings of which were published by Springer-Verlag as volume 1147 of the Lecture Notes in Artificial Intelligence. Following the success of these events and the Workshop on Automata Induction, Grammatical Inference, and Language Acquisition, held in conjunction with the International Conference on Machine Learning in Nashville in the United States in July 1997, the Fourth International Colloquium on Grammatical Inference will be held from July 12 through July 14, 1998, at Iowa State University in the United States. ---------------------------------------------------------------------------- Topics of Interest The conference seeks to provide a forum for presentation and discussion of original research papers on all aspects of grammatical inference including, but not limited to: * Different models of grammar induction: e.g., learning from examples, learning using examples and queries, incremental versus non-incremental learning, distribution-free models of learning, learning under various distributional assumptions (e.g., simple distributions), impossibility results, complexity results, characterizations of representational and search biases of grammar induction algorithms. * Algorithms for induction of different classes of languages and automata: e.g., regular, context-free, and context-sensitive languages, interesting subsets of the above under additional syntactic constraints, tree and graph grammars, picture grammars, multi-dimensional grammars, attributed grammars, parameterized models, etc.
* Theoretical and experimental analysis of different approaches to grammar induction including artificial neural networks, statistical methods, symbolic methods, information-theoretic approaches, minimum description length, and complexity-theoretic approaches, heuristic methods, etc. * Broader perspectives on grammar induction -- e.g., acquisition of grammar in conjunction with language semantics, semantic constraints on grammars, language acquisition by situated agents and robots, acquisition of language constructs that describe objects and events in space and time, developmental and evolutionary constraints on language acquisition, etc. * Demonstrated or potential applications of grammar induction in natural language acquisition, computational biology, structural pattern recognition, information retrieval, text processing, adaptive intelligent agents, systems modelling and control, and other domains. ---------------------------------------------------------------------------- Program Committee (Tentative) The following people have agreed to serve on the program committee. Several other individuals are yet to confirm their participation. R. Berwick, MIT, USA M. Brent, Johns Hopkins University, USA C. Cardie, Cornell University, USA W. Daelemans, Tilburg University, Netherlands D. Dowe, Monash University, Australia D. Estival, University of Melbourne, Australia J. Feldman, International Computer Science Institute, Berkeley, USA L. Giles, NEC Research Institute, Princeton, USA J. Gregor, University of Tennessee, USA C. de la Higuera, LIRMM, France T. Knuutila, University of Turku, Finland E. Makinen, University of Tampere, Finland L. Miclet, ENSSAT, Lannion, France. G. Nagaraja, Indian Institute of Technology, Bombay, India H. Ney, University of Technology, Aachen, Germany J. Nicolas, IRISA, France R. Parekh, Iowa State University, USA L. Pitt, University of Illinois at Urbana-Champaign, USA D. Powers, Flinders University, Australia L. 
Reeker, National Science Foundation, USA C. Samuelsson, Lucent Technologies, USA A. Sharma, University of New South Wales, Australia. E. Vidal, U. Politecnica de Valencia, Spain ---------------------------------------------------------------------------- Local Arrangements Committee Dale Grosvenor, Iowa State University, USA. K. Balakrishnan, Iowa State University, USA. R. Parekh, Iowa State University, USA J. Yang, Iowa State University, USA. ---------------------------------------------------------------------------- Conference Format and Proceedings The conference will include oral and possibly poster presentations of accepted papers, a small number of tutorials and invited talks. All accepted papers will appear in the conference proceedings to be published by a major publisher. (Negotiations are underway with Springer-Verlag regarding the publication of the ICGI-98 proceedings as a volume in their Lecture Notes in Artificial Intelligence, a subseries of the Lecture Notes in Computer Science). ---------------------------------------------------------------------------- Submission of Papers Postscript versions of papers, no more than 12 pages long (including figures, tables, and references) and prepared according to the formatting guidelines, should be submitted electronically to icgi98-submissions at cs.iastate.edu. The formatting guidelines (including commonly used word-processor macros and templates) will be placed online shortly. In those rare instances where authors might be unable to submit postscript versions of their papers electronically, we will try to accommodate them. Each paper will be rigorously refereed by at least two reviewers for technical soundness, originality, and clarity of presentation. Deadlines The relevant schedule for paper submissions is as follows: * March 1, 1998. Deadline for receipt of manuscripts * April 21, 1998. Notification of acceptance * May 15, 1998.
Camera-ready copies due ---------------------------------------------------------------------------- Submission of Proposals for Tutorials The conference will include a small number of short (2-hour) tutorials on selected topics in grammatical inference. Some examples of possible tutorial topics are: Hidden Markov Models, Grammatical Inference Applications in Computational Biology, and PAC Learnability of Grammars. This list is meant only to be suggestive and not exhaustive. Those interested in presenting a tutorial should submit a proposal (in plain text format) to icgi-submissions at cs.iastate.edu by electronic mail. The proposal should include: * A brief abstract (300 words or less) describing the topics to be covered * A brief description of the target audience and their expected background * A brief curriculum vitae including the proposer's relevant qualifications and publications The relevant schedule for tutorials is as follows: * March 1, 1998. Deadline for receipt of tutorial proposals * April 1, 1998. Notification of acceptance * May 15, 1998. Tutorial notes due ---------------------------------------------------------------------------- From aku at cts-fs1.du.se Wed Sep 3 05:45:14 1997 From: aku at cts-fs1.du.se (Andreas Kuehnle) Date: Wed, 3 Sep 1997 11:45:14 +0200 Subject: Post-doctoral position Message-ID: <9709030945.AA11521@cts.du.se> Post-doctoral position in image processing and neural networks. Available: Nov. 1, 1997 Duration: 1-2 years At: CTS - Center for Research on Transportation and Society, Borlange, Sweden. CTS is an interdisciplinary transport research institute attached to Dalarna University, located 2 hours northwest of Stockholm. Research areas at CTS are transport economics, transport sociology and transport technology. Description of position: The position involves a complete processing chain (camera, image processing, neural network classifier) that will be used to determine road conditions. The project is financed by the Swedish National Road Administration.
Candidate profile: Ph.D. in image processing or signal processing Knowledge of statistics Programming in C or Matlab Willingness to acquire images outdoors in inclement weather (cold, rain, snow) Please send applications to: Andreas Kuehnle CTS Dalarna University 78188 Borlange Sweden Email: Andreas.Kuehnle at cts.du.se From websom at nodulus.hut.fi Wed Sep 3 06:50:59 1997 From: websom at nodulus.hut.fi (websom-project) Date: Wed, 3 Sep 1997 13:50:59 +0300 Subject: News about SOM and WEBSOM Message-ID: <199709031050.NAA28028@nodulus.hut.fi> ---------------------------------------------------------------------- SOM: The second revised edition of the book "Self-Organizing Maps" (Teuvo Kohonen, Springer, 1997) is available. See the page http://james.hut.fi/nnrc/new_book.html for further information. The collection of studies on the Self-Organizing Map (SOM) and Learning Vector Quantization (LVQ) now contains 2952 papers. The bibliography can be accessed through the page http://james.hut.fi/nnrc/refs/ ---------------------------------------------------------------------- WEBSOM - Self-Organizing Map for Internet Exploration: The WEBSOM method for organizing textual document collections has undergone considerable development. The clustering of documents by topic has improved significantly. New demos can be accessed via the WEBSOM home page http://websom.hut.fi/websom/ There are maps of - over a million documents from over 80 Usenet newsgroups - the abstracts of WSOM'97 (Workshop on Self-Organizing Maps) - three Usenet newsgroups separately: comp.ai.neural-nets, sci.lang, and sci.cognitive A bibliography of WEBSOM papers can be found on the home page. ---------------------------------------------------------------------- From padams at brain.neurobio.sunysb.edu Wed Sep 3 12:00:25 1997 From: padams at brain.neurobio.sunysb.edu (Paul R.
Adams [Neurobiology]) Date: Wed, 3 Sep 1997 16:00:25 +0000 Subject: Postdoctoral position in Computational Neuroscience at Stony Bro Message-ID: Dear Colleague, We are pleased to announce the availability of a one-year postdoctoral appointment in computational neurobiology, to be held at Stony Brook, and financed by the Swartz Fund for Computational Neuroscience. The position is described in the attached announcement, which is also appearing in an advertisement in "Science". Please note that preference will be given to theoretical scientists who will work on projects closely linked to experimental neuroscience, and that suitable candidates will be eligible for future Swartz Fund support. Please draw this opportunity to the attention of any of your colleagues to whom it might be of interest. Sincerely, P. Adams, L. Mendell, R. Shrock and S. McLaughlin. Scientific Committee, Stony Brook component of Swartz Fund ** Stony Brook University Program in Computational Neuroscience ** The Swartz Foundation has recently announced the establishment of the Swartz Fund for Computational Neuroscience to promote collaborative studies between neuroscientists at SUNY-Stony Brook and at Cold Spring Harbor Laboratory and researchers working in the mathematical, physical and computer sciences. The aim is to understand the algorithms that the brain actually uses. Stony Brook and Cold Spring Harbor have strong programs in the areas cited above, and support from the Swartz Fund will allow the growth of interactive research in brain theory. In the first year the Stony Brook component of this initiative will hire 1-2 scientists at the postdoctoral level to work either with an existing faculty member, or independently but in association with existing faculty. The salary will be highly competitive. Candidates should submit a one-page summary of their research interests and goals, a CV and the names of 3 referees. For full consideration, applications should be received by September 30, 1997.
The fellow will be eligible for further Swartz Fund support. Further information about this position can be obtained at http://www.neurobio.sunysb.edu or from any of the following individuals: Paul Adams (Neurobiology, to whom applications should be sent; PAdams at neurobio.sunysb.edu), Lorne Mendell (Neurobiology; LMendell at neurobio.sunysb.edu), Robert Shrock (Institute of Theoretical Physics; shrock at insti.physics.sunysb.edu), Stuart McLaughlin (Physiology and Biophysics; smcl at epo.som.sunysb.edu) ******************************************* From Pregenz at dpmi.tu-graz.ac.at Thu Sep 4 05:44:07 1997 From: Pregenz at dpmi.tu-graz.ac.at (Martin) Date: Thu, 4 Sep 1997 10:44:07 +0100 Subject: Stability problems with LVQ algorithms. Message-ID: We have observed stability problems with LVQ algorithms in our application on EEG data. Analysis of these problems resulted in general conditions under which the codebook vectors drift away from the data with both LVQ3 and LVQ1 training. A paper which demonstrates the problems on simple 2-dimensional data is in preparation. A first draft can be downloaded from: FTP-host: fdpmial03.tu-graz.ac.at FTP-filename: /pub/outgoing/lvq_stab.ps.Z (135kB) /pub/outgoing/lvq_stab.ps (733kB) The problems are also outlined in my dissertation: "Distinction Sensitive Learning Vector Quantization", Graz, Austria, University of Technology, 1997. Comments are welcome and can be sent to pregenz at dpmi.tu-graz.ac.at. Martin Pregenzer From giro-ci0 at wpmail.paisley.ac.uk Thu Sep 4 08:58:47 1997 From: giro-ci0 at wpmail.paisley.ac.uk (Mark Girolami) Date: Thu, 04 Sep 1997 12:58:47 +0000 Subject: I&ANN'98 Workshop - Final Call for Papers Message-ID: Final Call for Papers.
International Workshop on INDEPENDENCE & ARTIFICIAL NEURAL NETWORKS / I&ANN'98 at the University of La Laguna, Tenerife, Spain February 9-10, 1998 http://www.compusmart.ab.ca/icsc/iann98.htm This workshop will take place immediately prior to the International ICSC Symposium on ENGINEERING OF INTELLIGENT SYSTEMS / EIS'98 at the University of La Laguna, Tenerife, Spain February 11-13, 1998 http://www.compusmart.ab.ca/icsc/eis98.htm ****************************************** TOPICS Recent ANN research has developed from those networks which find correlations in data sets to the more ambitious goal of finding independent components of data sets. The workshop will concentrate on those neural networks which find independent components and the associated networks whose learning rules use contextual information to organize their learning. The topic then falls into (at least) three main streams of current ANN research: 1. Independent Component Analysis, which has most recently been successfully applied to the problem of "blind separation of sources" such as the recovery of a single voice from a mixture/convolution of voices. Such methods normally use either information-theoretic criteria or higher-order statistics to perform the separation. 2. Identification of independent sources: The seminal experiment in this field is the identification of single bars from an input grid containing mixtures of bars. Factor Analysis (or generative models) has been a recent popular method for this problem. 3. Using contextual information to identify structure in data. We envisage a single-track program over two days (February 9 - 10, 1998) with many opportunities for informal discussion. ****************************************** INTERNATIONAL SCIENTIFIC COMMITTEE - Luis Almeida, INESC, Portugal - Tony Bell, Salk Institute, USA - Andrzej Cichocki, RIKEN Institute, Japan - Colin Fyfe, University of Paisley, U.K. - Mark Girolami, University of Paisley, U.K.
- Peter Hancock, University of Stirling, U.K. - Juha Karhunen, Helsinki University of Technology, Finland - Jim Kay, University of Glasgow, U.K. - Erkki Oja, Helsinki University of Technology, Finland ****************************************** INFORMATION FOR PARTICIPANTS AND AUTHORS Registrations are available for the workshop only (February 9 - 10, 1998), or combined with the EIS'98 symposium (February 11 - 13, 1998). The registration fee for the 2-day workshop is estimated at approximately Ptas. 37,000 per person and includes: - Use of facilities and equipment - Lunches, dinners and coffee breaks - Welcome wine & cheese party - Proceedings in print (workshop only) - Proceedings on CD-ROM (workshop and EIS'98 conference) - Daily transportation between hotels in Santa Cruz and workshop site The regular registration fee for the EIS'98 symposium (February 11-13, 1998) is estimated at Ptas. 59,000 per person, but a reduction will be offered to workshop participants. Separate proceedings will be printed for the workshop, but all respective papers will also be included on the CD-ROM, covering the I&ANN'98 workshop and the EIS'98 symposium. As a bonus, workshop participants will thus automatically also receive the conference proceedings (CD-ROM version). We anticipate that the proceedings will be published as a special issue of a journal. ****************************************** SUBMISSION OF PAPERS Prospective authors are requested to send a 4-6 page report of their work for evaluation by the International Scientific Committee. All reports must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. The report should also include: - Title of workshop (I&ANN'98) - Title of proposed paper - Authors names, affiliations, addresses - Name of author to contact for correspondence - E-mail address and fax # of contact author - Topics which best describe the paper (max. 
5 keywords) Submissions may be made by airmail or electronic mail to: Dr. Colin Fyfe Department of Computing and Information Systems The University of Paisley High Street Paisley, PA1 2BE Scotland Email: fyfe0ci at paisley.ac.uk Fax: +44-141-848-3542 ****************************************** SUBMISSION DEADLINE It is the intention of the organizers to have the proceedings available for the delegates. Consequently, the submission deadline of September 15, 1997 has to be strictly respected. ****************************************** IMPORTANT DATES Submission of reports: September 15, 1997 Notification of acceptance: October 15, 1997 Delivery of full papers: November 15, 1997 I&ANN'98 Workshop: February 9 - 10, 1998 EIS'98 Conference: February 11 - 13, 1998 ****************************************** LOCAL ARRANGEMENTS For details about local arrangements, please consult the EIS'98 website at http://www.compusmart.ab.ca/icsc/eis98.htm ****************************************** FURTHER INFORMATION For further information please contact: - Dr. Colin Fyfe Department of Computing and Information Systems The University of Paisley High Street Paisley PA1 2BE Scotland E-mail: fyfe0ci at paisley.ac.uk Fax: +44-141-848-3542 or - ICSC Canada International Computer Science Conventions P.O. Box 279 Millet, Alberta T0C 1Z0 Canada E-mail: icsc at compusmart.ab.ca Fax: +1-403-387-4329 WWW: http://www.compusmart.ab.ca/icsc From giles at research.nj.nec.com Thu Sep 4 10:30:40 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 4 Sep 97 10:30:40 EDT Subject: 1995 Citation Impact Factors Message-ID: <9709041430.AA14972@alta> Impact factors change from year to year. It is worth comparing these to those from 1994 (on my homepage). The 1996 citation reports are not available yet. These numbers come from the 1995 JCR (Journal Citation Reports); for more details please see the Journal Citation Reports.
The impact factor is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is a normalized measure, so that small and large journals can be compared. For more details see Journal Citation Reports. Below are impact factors for journals listed under Computer Science, Artificial Intelligence. (Note: I have added some additional journals that may be of interest in AI.) The abbreviations are those used by JCR.

JOURNAL ABBREVIATION      IMPACT FACTOR
COGNITIVE BRAIN RES       2.222
IEEE T PATTERN ANAL       1.940
NEURAL COMPUT             1.700
IEEE T NEURAL NETWOR      1.581
ARTIF INTELL              1.560
INT J COMPUT VISION       1.490
MACH LEARN                1.264
NEURAL NETWORKS           1.262
CHEMOMETR INTELL LAB      1.158
KNOWL ACQUIS              1.143
IEEE T ROBOTIC AUTOM      1.019
NETWORK-COMP NEURAL       0.950
AI MAG                    0.857
ARTIF INTELL MED          0.850
EXPERT SYST APPL          0.790
IEEE T KNOWL DATA EN      0.720
INT J APPROX REASON       0.630
PATTERN RECOGN            0.621
NEUROCOMPUTING            0.609
INT J INTELL SYST         0.606
IEEE EXPERT               0.597
IMAGE VISION COMPUT       0.484
DECIS SUPPORT SYST        0.442
APPL ARTIF INTELL         0.431
PATTERN RECOGN LETT       0.431
AI EDAM                   0.386
ENG APPL ARTIF INTEL      0.295
J AUTOM REASONING         0.247
J EXP THEOR ARTIF IN      0.238
ARTIF INTELL ENG          0.226
INT J SOFTW ENG KNOW      0.212
KNOWL-BASED SYST          0.167
ARTIF INTELL REV          0.114
AI APPLICATIONS           0.089
J INTELL ROBOT SYST       0.086
J INTELL MANUF            0.063
APPL INTELL               0.050
COMUT ARTIF INTELL        0.045
AVTOM VYCHISL TEKH+       0.044
NEURAL PROCESS LETT       0.000

On my homepage I have the 1994 citation impact factors for AI and AI-related journals, plus citation impact factors for some other journal categories. This information is copyrighted by Journal Citation Reports. Regards Lee Giles __ C.
Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From nat at clarity.Princeton.EDU Thu Sep 4 12:31:43 1997 From: nat at clarity.Princeton.EDU (Nathalie Japkowicz) Date: Thu, 4 Sep 1997 12:31:43 -0400 (EDT) Subject: NIPS*97 Workshop on Autoassociators Message-ID: <199709041631.MAA12023@confusion.Princeton.EDU> ***************************** NIPS*97 Workshop Announcement ***************************** ************************************************************************ Advances in Autoencoder/Autoassociator-Based Computations Date: Friday December 5, 1997 Location: Breckenridge, Colorado Organizers: Nathalie Japkowicz, Mark A. Gluck and Stephen J. Hanson ************************************************************************ Workshop Description: --------------------- Autoencoders/Autoassociators have had a troubled history. At first believed to have great potential for image compression and speech processing, they were subsequently shown not to outperform Principal Component Analysis (PCA), a linear dimensionality-reduction method, even in the presence of nonlinearities in their hidden layer. Nonetheless, because of their intriguing nature, their study was pursued and it was shown that under certain circumstances, they are capable of performing various types of nonlinear dimensionality-reduction tasks. More recently, they were also shown to perform well at estimating the reliability of learning algorithms, at novelty detection, and at cognitive modeling of the hippocampus and of natural language grammar acquisition. Furthermore, they appear promising for time-series analyses when used in recursive mode. Despite their various successes, however, autoencoders/autoassociators have had a difficult time re-establishing themselves fully and the extent of their capabilities remains controversial.
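[Editorial aside, not part of the announcement: the relationship to PCA mentioned above can be made concrete. With linear units and squared reconstruction error, training an autoencoder recovers, up to an invertible transformation, the same principal subspace that PCA finds. The following is a minimal sketch under invented data, dimensions, and training schedule:]

```python
import numpy as np

# Minimal linear autoencoder sketch (illustrative only): encode 5-D data
# into a 2-D hidden layer and reconstruct. With linear units and squared
# error the optimum spans the PCA principal subspace.
rng = np.random.default_rng(0)

# Synthetic 5-D data lying near a 2-D subspace.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 5))

W_enc = 0.1 * rng.normal(size=(5, 2))  # encoder: 5-D input -> 2-D code
W_dec = 0.1 * rng.normal(size=(2, 5))  # decoder: 2-D code -> 5-D output
lr = 0.02

def recon_error(X, W_enc, W_dec):
    resid = X @ W_enc @ W_dec - X      # reconstruction residual
    return float((resid ** 2).mean())

initial = recon_error(X, W_enc, W_dec)
for _ in range(500):
    code = X @ W_enc                   # hidden representation
    resid = code @ W_dec - X
    grad_dec = 2 * code.T @ resid / X.size
    grad_enc = 2 * X.T @ (resid @ W_dec.T) / X.size
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = recon_error(X, W_enc, W_dec)   # decreases as the subspace is found
```

[This is also the sense of the "nonlinearities in their hidden layer" remark above: adding a nonlinearity only in the hidden layer of such a single-hidden-layer network does not improve on the PCA solution.]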
The purpose of this workshop is to attempt to revive these powerful yet conceptually simple devices in order to develop a deeper understanding of their inner workings and to explore the various practical and cognitive applications for which they can be useful. More specifically, we are hoping to bring together theoretical and experimental researchers from both engineering and cognitive science backgrounds, who have studied autoencoders/autoassociators, used them in various ways, or designed and studied novel autoencoder/autoassociator-based architectures of interest. We believe that by enabling the sharing of ideas and experiences, this forum will help researchers understand autoencoders/autoassociators better and generate new research directions. In addition, autoencoder/autoassociator-based systems will be compared to other types of autoassociative schemes and their strengths and weaknesses evaluated against them. List of Speakers: ------------------ * Pierre Baldi * Horst Bischof/Andreas Weingessel/Kurt Hornik * Kostas Diamantaras * Zoubin Ghahramani/Sam Roweis/Geoffrey Hinton * Mohamad Hassoun/Agus Sudjianto * Robert Hecht-Nielsen * Nathalie Japkowicz/Stephen J. Hanson/Mark A. Gluck * Erkki Oja * Jordan Pollack * Holger Schwenk * Andreas Weingessel/Horst Bischof/Kurt Hornik (An additional two or three speakers will perhaps join the program) Titles and abstracts of the talks --------------------------------- Posted at: http://paul.rutgers.edu/~nat/Workshop/workshop.html From rafal at idsia.ch Thu Sep 4 14:21:11 1997 From: rafal at idsia.ch (Rafal Salustowicz) Date: Thu, 4 Sep 1997 20:21:11 +0200 (MET DST) Subject: Probabilistic Incremental Program Evolution Message-ID: PROBABILISTIC INCREMENTAL PROGRAM EVOLUTION Rafal Salustowicz Juergen Schmidhuber IDSIA, Switzerland To appear in Evolutionary Computation 5(2) Probabilistic Incremental Program Evolution (PIPE) is a novel technique for automatic program synthesis.
We combine probability vector coding of program instructions, Population-Based Incremental Learning, and tree-coded programs like those used in some variants of Genetic Programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. In each iteration it uses the best program to refine the distribution. Thus, it stochastically generates better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime. ftp://ftp.idsia.ch/pub/rafal/PIPE.ps.gz http://www.idsia.ch/~rafal/research.html Short version: Probabilistic Incremental Program Evolution: Stochastic Search Through Program Space. In van Someren, M. and Widmer, G., editors, Machine Learning: ECML-97, pages 213-220, Lecture Notes in Artificial Intelligence 1224, Springer-Verlag Berlin Heidelberg.
(ftp://ftp.idsia.ch/pub/rafal/ECML_PIPE.ps.gz) ****************************************************************************** * Rafal Salustowicz * * Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) * * Corso Elvezia 36 e-mail: rafal at idsia.ch * * 6900 Lugano ============== * * Switzerland raf at cs.tu-berlin.de * * Tel (IDSIA) : +41 91 91198-38 raf at psych.stanford.edu * * Tel (office): +41 91 91198-32 * * Fax : +41 91 91198-39 WWW: http://www.idsia.ch/~rafal * ****************************************************************************** From giles at research.nj.nec.com Thu Sep 4 17:42:37 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 4 Sep 97 17:42:37 EDT Subject: paper available: statistical distribution of NN results Message-ID: <9709042142.AA15822@alta> The following manuscript has been accepted in IEEE Transactions on Neural Networks and is available at the WWW site listed below: www.neci.nj.nec.com/homepages/giles/papers/IEEE.TNN.statistical.dist.of.trials.ps.Z We apologize in advance for any multiple postings that may be received. *********************************************************************** On the Distribution of Performance from Multiple Neural Network Trials Steve Lawrence(1), Andrew D. Back(2), Ah Chung Tsoi(3), C. Lee Giles(1,4) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA. (2) Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan. (3) Faculty of Informatics, University of Wollongong, Northfields Avenue, Wollongong NSW 2522, Australia. (4) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA. ABSTRACT The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. 
However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance for more complex target functions. We propose new guidelines for reporting performance which provide more information about the actual distribution. Keywords: neural networks, gradient training, backpropagation, error analysis, convergence, gaussian distribution, probability distributions, statistical methods, box whiskers, kolmogorov-smirnov test, mackey-glass, phoneme classification. __ C. Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From jdunn at cyllene.uwa.edu.au Fri Sep 5 01:15:46 1997 From: jdunn at cyllene.uwa.edu.au (John Dunn) Date: Fri, 5 Sep 1997 13:15:46 +0800 (WST) Subject: Eighth Australasian Mathematical Psychology Conference Message-ID: <199709050515.NAA13274@cyllene.uwa.edu.au> Third Call for Papers Eighth Australasian Mathematical Psychology Conference November 27-30, 1997 University of Western Australia Perth, W.A. Australia The Eighth Australasian Mathematical Psychology Conference (AMPC97), will be held at the University of Western Australia, Nedlands, W.A. 6907, from Thursday November 27 to Sunday November 30, 1997. 
Details concerning the conference, registration, and submission of papers are available at the AMPC97 Web site: http://www.psy.uwa.edu.au/mathpsych/ International visitors to the conference may wish to log their itinerary with the World-wide Academic Visitor Exchange (WAVE) at: http://www.psy.uwa.edu.au/wave/ If you have already submitted an abstract to this conference, please feel free to ignore this message. ----------------------------------------------------- EXTENSION OF DEADLINE Please note that the deadline for submission of abstracts has been extended (for the first and only time) to September 30, 1997. ----------------------------------------------------- SYMPOSIA The following symposia have been accepted. If you wish to present a paper at any of these, please contact the relevant convenor, listed below. Requests for additional symposia should be directed to the conference organisers at mathpsych at psy.uwa.edu.au. Local energy detection in vision David Badcock, University of Western Australia david at psy.uwa.edu.au Nonlinear dynamics Robert A M Gregson, Australian National University Robert.Gregson at anu.edu.au Associative learning John K Kruschke, Indiana University kruschke at croton.psych.indiana.edu Computational models of memory Stephan Lewandowsky, University of Western Australia lewan at psy.uwa.edu.au Knowledge representation Josef Lukas, University of Halle, Germany j.lukas at psych.uni-halle.de Choice, decision, and measurement Anthony A J Marley, McGill University tony at hebb.psych.mcgill.ca Face recognition Alice O'Toole & Herve Abdi, University of Texas otoole at utdallas.edu Models of response time Roger Ratcliff, Northwestern University roger at eccles.psych.nwu.edu ----------------------------------------------------- From Simon.N.CUMMING at British-Airways.com Fri Sep 5 12:37:58 1997 From: Simon.N.CUMMING at British-Airways.com (Simon.N.CUMMING@British-Airways.com) Date: 05 Sep 1997 16:37:58 Z Subject: ANNOUNCEMENT: Two Day Conference 
in Bath, UK. 15-16Sep97 Message-ID: <"BSC400A1 970905163755510421*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/s=CUMMING/g=SIMON/i=N/"@MHS> ANNOUNCEMENT: NEURAL COMPUTING APPLICATIONS FORUM Two Day Conference in Bath, UK. 15-16th September 1997 In Collaboration with the Engineering and Physical Sciences Research Council. NEURAL COMPUTING APPLICATIONS FORUM (NCAF) ------------------------------------------ The purpose of NCAF is to promote widespread exploitation of neural computing technology by: - providing a focus for neural network practitioners. - disseminating information on all aspects of neural computing. - encouraging close co-operation between industrialists and academics. NCAF holds four two-day conferences per year, in the UK, with speakers from commercial and industrial organisations and universities. The focus of the talks is on practical issues in the application of neural network technology and related methods to solving real-world problems. The September event, in the beautiful and historic city of Bath, will be held in collaboration with EPSRC, and thanks to their support the costs of attending will be limited to accommodation and the social event. This meeting reports and discusses progress on a widespread programme of applications-directed research on neural computing which has been running in the UK for two years: the "KEY QUESTIONS" Programme. A brief glance at the programme will indicate that we have obtained the services of the foremost practitioners in the UK, and this will be complemented by poster sessions. Be assured, the "Industrial" contingent will be equally strong. Don't miss this first-rate opportunity to meet all the important people from both industry and academia.
PROGRAMME DETAILS:
-----------------
The "Key Questions" Programme: Purpose and Context - Chair: David Bounds (Recognition Systems)

Static Pattern Processing
-------------------------
Neural Networks for Visualisation of High-Dimensional Data - Chris Bishop (Aston University)
Autonomous Training Algorithms for Neural Networks - Peter Rockett (Sheffield University)
Neural Networks to Support Molecular Structure Matching - Jim Austin (York University)

Novelty Detection
-----------------
Novelty Detection - Lionel Tarassenko (Oxford University)
Strategies for Applying Neural Computing to Time-varying Signals and their Application to Tyre Quality Control - Tom Harris (Brunel University)

Assessment of Neural Network Systems
------------------------------------
Performance Assessment and Generalisability of Neural Networks - David Hand (Open University)
Validation and Verification of Neural Network Systems - Chris Bishop (Aston University)
Neurocomputational models of auditory perception and speech recognition - Sue McCabe (Plymouth University)
A model-based approach to optimal engine control - Julian Mason (Cambridge Consultants)

Non-Stationary Time Series Analysis
-----------------------------------
Neural Computing for Non-stationary Medical Signal Processing - Mahesan Niranjan (Cambridge University)
Dynamic Traffic Monitoring using Neural Networks - Howard Kirby (Leeds University)
Non-stationary Feature Extraction and Tracking for the Classification of Turning Points in Multi-variate Time Series - David Lowe (Aston University)

Control and Systems Identification
----------------------------------
Approximation and Control of Industrial Nonlinear Dynamic Processes - Julian Morris (Newcastle University)
Local Model Networks for Dynamic Modelling and Internal Model Control of Industrial Plant - George Irwin (Queen's University, Belfast)
Neurofuzzy Model Building Algorithms and their Application in Non-stationary Environments - Martin Brown (Southampton University)

Issues raised by the Key Questions
Programme -------------------------------------------- Neural network modelling of nickel based super alloys Joy Jones (Cambridge University) ===================================================================== SOCIAL PROGRAMME: ---------------- 15 September 1997: (EVENING) Conference Dinner in the Pump Rooms followed by a guided tour of the Roman Baths plus "Four Weighings and a Funeral" with Graham 'Rottweiler' Hesketh (Rolls-Royce) -------------------------------------------------------------------- REGISTRATION DETAILS: -------------------- Please note that attendance at this conference is free. Usual rates for meeting attendance are: for members, 20 pounds per day conference fee, excluding meals, social events and accommodation; for non-members, 100 pounds per day excluding meals, social events and accommodation. Thanks to support from EPSRC the only items that will be charged for this meeting are: accommodation and social events. There is, however, still a need to register. To register please e-mail ncafsec at brunel.ac.uk, giving your name and organisation, correspondence address, e-mail address and telephone number. Non-members are strongly encouraged to join (see membership details below). Please indicate if you wish to become a member. Please indicate if you require Bed and Breakfast Accommodation (25 pounds per night) for the nights of 14 and / or 15 September, and whether you wish to join the Conference Dinner and Tour of Roman Baths (25 pounds). NCAF MEMBERSHIP DETAILS: ------------------------ All amounts are in pounds Sterling, per annum. All members receive a quarterly newsletter and are eligible to vote at the AGM (but see note on corporate membership). Full (Corporate) Membership : 300 pounds (allows any number of people in the member organisation to attend meetings at member rates; voting rights are restricted to one, named, individual. Includes automatic subscription to the journal Neural Computing and Applications.) 
Individual Membership: 170 pounds (allows one named individual to attend
meetings at member rates; includes the journal.)

Associate Membership: 110 pounds (includes subscription to the journal and
newsletter but does not cover admission to the meetings.)

Reduced (Student) Membership: 65 pounds including the journal; 30 pounds
without. Applications for student membership should be accompanied by a copy
of a current full-time student ID card, UB40, etc.

For more information, or to become a member, please contact NCAF, P.O. Box
73, Egham, Surrey TW20 0YZ, UK; or telephone Sally Francis or Tom Harris on
+44 1784 477271; or email ncafsec at brunel.ac.uk

From rjb at psy.ox.ac.uk Mon Sep 8 07:32:45 1997
From: rjb at psy.ox.ac.uk (Roland Baddeley)
Date: Mon, 08 Sep 1997 12:32:45 +0100
Subject: Two papers available on 1) neural coding, and 2) psychophysics
Message-ID: <3413E25D.2847@psy.ox.ac.uk>

The following two papers are available. The first is on the nature of the
neuronal code (rate coding, information theory and all that), and the second
is on using neural nets to analyse psychophysical experiments. They can now
both be found on my web page (I hope they are of interest):
http://www.mrc-bbc.ox.ac.uk/~rjb/

===================================================================

Title: Responses of neurons in primary and inferior temporal visual cortices
to natural scenes
Baddeley, Abbott, Booth, Sengpiel, Freeman, Wakeman, and Rolls
(in press, to appear in Proceedings of the Royal Society B)

Abstract: The primary visual cortex (V1) is the first cortical area to
receive visual input, and inferior temporal (IT) areas are among the last
along the ventral visual pathway. We recorded, in area V1 of anaesthetised
cat and area IT of awake macaque monkey, responses of neurons to videos of
natural scenes. Responses were analysed to test various hypotheses concerning
the nature of neural coding in these two regions.
A variety of spike-train statistics were measured, including spike-count
distributions, interspike interval distributions, coefficients of variation,
power spectra, Fano factors and different sparseness measures. All statistics
showed non-Poisson characteristics and several revealed self-similarity of
the spike trains. Spike-count distributions were approximately exponential in
both visual areas for eight different videos and for counting windows ranging
from 50 ms to 5 seconds. The results suggest that the neurons maximise their
information carrying capacity while maintaining a fixed long-term-average
firing rate, or equivalently, minimise their average firing rate for a fixed
information carrying capacity.

===================================================================

Insights into motion perception by observer modelling
Roland Baddeley and Srimant P. Tripathy
Journal of the Optical Society of America (in press)

The statistical efficiency of human observers performing a simplified version
of the motion detection task used by Newsome et al. is high but not perfect.
This reduced efficiency may be because of noise internal to the observers, or
because the observers are using strategies different from that used by an
ideal machine. We therefore investigated which of three simple models best
accounts for the observers' performance. The models compared were: 1) a
motion detector that uses the proportion of dots in the first frame that move
coherently (as would an ideal machine); 2) a model that bases its decision on
the number of dots that move; 3) a model that differentially weights motions
occurring at different locations in the visual field (for instance,
differentially weighting the point of fixation and the periphery). We
compared these models by explicitly modelling the human observers'
performance.
We recorded the exact stimulus configuration on each trial together with the
observer's response, and, for the different models, we found the parameters
that best predicted the observer's performance in a least-squares sense. We
then used N-fold cross-validation to compare the models and hence the
associated hypotheses. Our results show that the performance of observers is
based on the proportion of dots moving, not the absolute number, and that
there was no evidence for any differential spatial weighting. Whilst this
method of modelling the observers' response is only demonstrated for one
simple psychophysical paradigm, it is general and can be applied to any
psychophysical framework where the entire stimulus can be recorded.

--
Dr Roland Baddeley
Mail: Psychology Dept, Oxford University, South Parks Road, Oxford, OX1 3UD, UK
WWW: http://www.mrc-bbc.ox.ac.uk/~rjb/
email: rjb at psy.ox.ac.uk
phone: +44-1865-271914
fax: +44-1865-272488

From jagota at cse.ucsc.edu Mon Sep 8 19:26:35 1997
From: jagota at cse.ucsc.edu (Arun Jagota)
Date: Mon, 8 Sep 1997 16:26:35 -0700
Subject: volunteers needed for NIPS
Message-ID: <199709082326.QAA06554@bristlecone.cse.ucsc.edu>

Several student volunteers are needed for assistance with various tasks at
NIPS*97. Volunteering can be a fun and educational experience. Furthermore,
in exchange for about 9 hours of work, volunteers receive free registration
to the component of NIPS (tutorials, conference, workshops) they volunteer
their time towards. Check out http://www.cse.ucsc.edu/~jagota for detailed
instructions on how to volunteer, details on the various tasks, etc. Or
contact me at jagota at cse.ucsc.edu

Arun Jagota

From cmb35 at newton.cam.ac.uk Tue Sep 9 05:47:10 1997
From: cmb35 at newton.cam.ac.uk (C.M. Bishop)
Date: Tue, 09 Sep 1997 10:47:10 +0100
Subject: Two papers available on-line
Message-ID: <199709090947.KAA13549@feynman>

Two Papers Available Online:

PROBABILISTIC PRINCIPAL COMPONENT ANALYSIS (NCRG/97/010)
Michael E.
Tipping and Christopher M. Bishop
Neural Computing Research Group
Aston University, Birmingham B4 7ET, U.K.
http://neural-server.aston.ac.uk/Papers/postscript/NCRG_97_010.ps.Z

Abstract: Principal component analysis (PCA) is a ubiquitous technique for
data analysis and processing, but one which is not based upon a probability
model. In this paper we demonstrate how the principal axes of a set of
observed data vectors may be determined through maximum-likelihood estimation
of parameters in a latent variable model closely related to factor analysis.
We consider the properties of the associated likelihood function, giving an
EM algorithm for estimating the principal subspace iteratively, and discuss
the advantages conveyed by the definition of a probability density function
for PCA.

MIXTURES OF PRINCIPAL COMPONENT ANALYSERS (NCRG/97/003)
Michael E. Tipping and Christopher M. Bishop
Neural Computing Research Group
Aston University, Birmingham B4 7ET, U.K.
http://neural-server.aston.ac.uk/Papers/postscript/NCRG_97_003.ps.Z

Abstract: Principal component analysis (PCA) is one of the most popular
techniques for processing, compressing and visualising data, although its
effectiveness is limited by its global linearity. While nonlinear variants of
PCA have been proposed, an alternative paradigm is to capture data complexity
by a combination of local linear PCA projections. However, conventional PCA
does not correspond to a probability density, and so there is no unique way
to combine PCA models. Previous attempts to formulate mixture models for PCA
have therefore to some extent been ad hoc. In this paper, PCA is formulated
within a maximum-likelihood framework, based on a specific form of Gaussian
latent variable model. This leads to a well-defined mixture model for
probabilistic principal component analysers, whose parameters can be
determined using an EM algorithm.
We discuss the advantages of this model in the context of clustering, density
modelling and local dimensionality reduction, and we demonstrate its
application to image compression and handwritten digit recognition.

--- ooo ---

A complete, searchable database of publications from the Neural Computing
Research Group at Aston can be found by going to the Group home page
http://www.ncrg.aston.ac.uk/ and selecting `Publications'.

--- ooo ---

From wojtek at msmail4.hac.com Tue Sep 9 22:46:16 1997
From: wojtek at msmail4.hac.com (Przytula, Krzysztof W)
Date: 9 Sep 1997 18:46:16 -0800
Subject: job opening at Hughes Research
Message-ID:

RESEARCH OPPORTUNITIES AT HUGHES RESEARCH LABORATORIES

Hughes Research Laboratories has an immediate opening for a Research Staff
Member to join a team of scientists in the Signal and Image Processing
Department. Team members in this department have developed novel,
state-of-the-art sensor fusion systems, neural networks, time-frequency
transforms, and image compression algorithms for use in both commercial and
military applications. The successful candidate will investigate advanced
signal and image processing techniques for information fusion and pattern
recognition applications. Current work is focused on the application of
information fusion techniques, neural networks, and computer vision
techniques to automatic target recognition, automotive safety, and pattern
recognition applications. Specific duties will include theoretical analysis,
algorithm design, and software simulation.

Candidates are expected to have a Ph.D. in Electrical Engineering, Applied
Mathematics, or Computer Science. Strong analytical skills and a demonstrated
ability to perform creative research, along with experience in signal and
image processing, information fusion, or pattern recognition, are required.
Practical experience with Matlab, C, or C++ is essential. Good communication
and teamwork skills are keys to success.
Owing to government restrictions regarding the export of technical data, U.S.
citizenship or resident alien status is required.

Overlooking the Pacific Ocean and the coastal community of Malibu, the
Research Laboratories provide an ideal environment for you to make the most
of your scientific abilities. Our organization offers a competitive salary
and benefits package. Additional information may be obtained from Lynn Ross.
For immediate consideration, send your resume to:

Lynn W. Ross
Department RM21
Hughes Research Laboratories
3011 Malibu Canyon Road
Malibu, CA 90265
FAX: (310) 317-5651
Internet: lross at msmail4.hac.com

Proof of legal right to work in the United States required. An Equal
Opportunity Employer.

From rao at cs.rochester.edu Wed Sep 10 01:40:20 1997
From: rao at cs.rochester.edu (Rajesh Rao)
Date: Wed, 10 Sep 1997 01:40:20 -0400
Subject: Tech Report: Space-Time Receptive Fields from Natural Images
Message-ID: <199709100540.BAA22625@corvette.cs.rochester.edu>

The following technical report on learning space-time receptive fields from
natural images is available on the WWW page:
http://www.cs.rochester.edu/u/rao/
or via anonymous ftp (see instructions below). Comments and suggestions
welcome. (This message has been cross-posted - my apologies to those who
received it more than once.)

-- Rajesh Rao
Dept. of Computer Science          Internet: rao at cs.rochester.edu
University of Rochester            VOX: (716) 275-5492
Rochester NY 14627-0226            FAX: (716) 461-2018
WWW: http://www.cs.rochester.edu/u/rao/

===========================================================================

Efficient Encoding of Natural Time Varying Images Produces Oriented
Space-Time Receptive Fields

Rajesh P.N. Rao and Dana H.
Ballard

Technical Report 97.4
National Resource Laboratory for the Study of Brain and Behavior
Department of Computer Science, University of Rochester
August 1997

The receptive fields of neurons in the mammalian primary visual cortex are
oriented not only in the domain of space, but in most cases, also in the
domain of space-time. While the orientation of a receptive field in space
determines the selectivity of the neuron to image structures at a particular
orientation, a receptive field's orientation in space-time characterizes
important additional properties such as velocity and direction selectivity.
Previous studies have focused on explaining the spatial receptive field
properties of visual neurons by relating them to the statistical structure of
static natural images. In this report, we examine the possibility that the
distinctive spatiotemporal properties of visual cortical neurons can be
understood in terms of a statistically efficient strategy for encoding
natural time varying images. We describe an artificial neural network that
attempts to accurately reconstruct its spatiotemporal input data while
simultaneously reducing the statistical dependencies between its outputs. The
network utilizes spatiotemporally summating neurons and learns efficient
sparse distributed representations of its spatiotemporal input stream by
using recurrent lateral inhibition and a simple threshold nonlinearity for
rectification of neural responses. When exposed to natural time varying
images, neurons in a simulated network developed localized receptive fields
oriented in both space and space-time, similar to the receptive fields of
neurons in the primary visual cortex.

Retrieval information:
FTP-host: ftp.cs.rochester.edu
FTP-pathname: /pub/u/rao/papers/space-time.ps.Z
WWW URL: http://www.cs.rochester.edu/u/rao/
26 pages; 1040K compressed.
==========================================================================

Anonymous ftp instructions:

  > ftp ftp.cs.rochester.edu
  Connected to anon.cs.rochester.edu.
  220 anon.cs.rochester.edu FTP server (Version wu-2.4(3)) ready.
  Name: [type 'anonymous' here]
  331 Guest login ok, send your complete e-mail address as password.
  Password: [type your e-mail address here]
  ftp> cd /pub/u/rao/papers/
  ftp> get space-time.ps
  ftp> bye

From atick at monaco.rockefeller.edu Wed Sep 10 09:02:45 1997
From: atick at monaco.rockefeller.edu (Joseph Atick)
Date: Wed, 10 Sep 1997 09:02:45 -0400
Subject: Research Positions in Pattern Recognition
Message-ID: <9709100902.ZM12127@monaco.rockefeller.edu>

Research Positions in Pattern Recognition and Image Analysis

Visionics Corporation has several openings for research scientists and
engineers in the field of Pattern Recognition and Image Analysis. Candidates
are expected to have a Ph.D. or a Masters degree in Computer Science, Applied
Mathematics, Electrical Engineering, Physics or Computational Neuroscience,
and to have demonstrated research abilities in computer vision, artificial
neural networks, image processing, computational neuroscience or pattern
recognition.

The successful candidates will join the growing R&D team of Visionics in
developing real-world pattern recognition algorithms, especially for video
recognition (such as real-time face recognition). The job will be at
Visionics' new headquarters at 1 Exchange Place in Jersey City, New Jersey.
This facility is immediately across the Hudson from the World Trade Center in
Manhattan and has impressive views of the Manhattan skyline and the Statue of
Liberty on the Hudson. It is 5 minutes away from downtown Manhattan by PATH
train and is easily accessible from anywhere in New Jersey. Visionics offers
a competitive salary and benefits package and a chance for rapid career
advancement with one of the fastest growing research teams.
Visionics is the developer of the FaceIt face recognition technology. The
FaceIt engine is currently in use in dozens of large-scale products and
applications in security, access control, and banking around the world. For
more information about Visionics, please visit our webpage at
http://www.faceit.com

For immediate consideration, send your resume via:

(1) Fax to
    Annette Isler
    Human Resources
    Visionics Corporation
    1 Exchange Place
    Jersey City, NJ 07302
    Fax: (201) 332 9313

or

(2) Email to jobs at faceit.com

Visionics is an equal opportunity employer.

-- Joseph J. Atick
Rockefeller University
1230 York Avenue
New York, NY 10021
Tel: 212 327 7421
Fax: 212 327 7422

From Frederic.Alexandre at loria.fr Wed Sep 10 03:48:45 1997
From: Frederic.Alexandre at loria.fr (Frederic Alexandre)
Date: Wed, 10 Sep 1997 09:48:45 +0200 (MET DST)
Subject: SPECIAL SESSION AT NEURAP'98: Knowledge-Based Applications of ANNs
Message-ID: <199709100748.JAA18385@wernicke.loria.fr>

NEURAP'98
Fourth International Conference on Neural Networks & their Applications
March 11-12-13, 1998, Marseille, France

CALL FOR PAPERS
Deadline: December 1, 1997

SPECIAL SESSION AT NEURAP'98
Knowledge-Based Applications of Artificial Neural Networks

This special session aims to address issues concerning the integration of
knowledge-based systems and artificial neural networks. Such hybrid systems
hold the promise of combining the best features of each approach for
constructing intelligent systems. Various strategies can be envisaged, from
the coupling of existing symbolic and connectionist models to the design of
specific models for that purpose. Each of these strategies highlights
different properties and offers an original view of this new domain, which we
wish to discuss and compare during this special session. We are especially
interested in applications of this technology to real-world problems.
Topics can include, but are not limited to:
- Methods for incorporating prior knowledge into feed-forward and recurrent
  connectionist architectures
- Combination of classical knowledge-based and connectionist models
- Exploration of relationships between symbolic Machine Learning algorithms,
  Knowledge Acquisition, Knowledge-Based Systems and Knowledge-Based Neural
  Networks
- Learning algorithms for Knowledge-Based Neural Networks
- Rule extraction and refinement
- Neural expert systems
- Implementation issues: Modularity, Parallel and Distributed
  implementations, etc.
- Applications to real world problems

Organizers: Ian Cloete ( ian at cs.sun.ac.za ) and Frederic Alexandre
( falex at loria.fr )

Instructions: To ensure a lively conference, all papers will be presented in
plenary session (short presentations) and also through posters. However, to
guarantee the high quality of the presented work, selection is based on full
paper proposals. Depending on the maturity of the work presented, there will
be short papers (4 pages) for on-going research and long papers (up to 8 A4
pages) for mature research results, in double columns with a Times 10-point
font. All papers will be printed in full in the proceedings and must be in
English. Please send 4 copies to the conference secretariat before December
1, 1997. Notification will be sent on February 3, 1998, and the camera-ready
version is due on February 20, 1998.

PLEASE NOTE: All papers must be clearly marked to indicate inclusion in this
Special Session.
Secretariat & Information

NEURAP'98
DIAM - IUSPIM, University of Aix-Marseille III,
Domaine Universitaire de St Jerome,
Avenue Escadrille Normandie-Niemen,
13397 Marseille Cedex 20, France
Tel.: ++ 33 4 91 05 60 60
Fax: ++ 33 4 91 05 60 33
Email: Claude.Touzet at iuspim.u-3mrs.fr
URL: http://www.iuspim.u-3mrs.fr/neurap98.htm
or: http://dsi.ing.unifi.it/neural/neurap/cfp98.html

For information on the conference, please visit
http://www.iuspim.u-3mrs.fr/neurap98.htm or
http://dsi.ing.unifi.it/neural/neurap/cfp98.html

From terry at salk.edu Wed Sep 10 14:14:16 1997
From: terry at salk.edu (Terry Sejnowski)
Date: Wed, 10 Sep 1997 11:14:16 -0700 (PDT)
Subject: NEURAL COMPUTATION 9:7
Message-ID: <199709101814.LAA23940@helmholtz.salk.edu>

Neural Computation - Contents Volume 9, Number 7 - October 1, 1997

ARTICLE

Mean-Field Theory For Batched-TD(lambda)
  Fernando J. Pineda

LETTERS

Redundancy Reduction and Independent Component Analysis: Conditions on
Cumulants and Adaptive Approaches
  Jean-Pierre Nadal and Nestor Parga

Adaptive Online Learning Algorithms for Blind Separation: Maximum Entropy and
Minimum Mutual Information
  Howard Hua Yang and Shun-ichi Amari

A Fast Fixed-Point Algorithm for Independent Component Analysis
  Aapo Hyvarinen and Erkki Oja

Dimension Reduction by Local Principal Component Analysis
  Nandakishore Kambhatla and Todd K. Leen

A Constructive, Incremental-Learning Network for Mixture Modeling and
Classification
  James R. Williamson

Shape Quantization And Recognition With Randomized Trees
  Donald Geman and Yali Amit

Airline Crew Scheduling With Potts Neurons
  Martin Lagerholm, Carsten Peterson, and Bo Soderberg

Online Learning in Radial Basis Function Networks
  Jason A. S.
Freeman and David Saad

ERRATA

Image Segmentation Based on Oscillatory Correlation
  DeLiang Wang and David Terman

-----

ABSTRACTS - http://mitpress.mit.edu/NECO/

SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES
______ $50 Student and Retired
______ $78 Individual
______ $250 Institution

Add $28 for postage and handling outside USA (+7% GST for Canada). Back
issues from Volumes 1-8 are regularly available for $28 each to institutions
and $14 each to individuals. Add $5 for postage per issue outside USA (+7%
GST for Canada).

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779

-----

From Dimitris.Dracopoulos at brunel.ac.uk Wed Sep 10 14:27:30 1997
From: Dimitris.Dracopoulos at brunel.ac.uk (Dimitris.Dracopoulos@brunel.ac.uk)
Date: Wed, 10 Sep 1997 19:27:30 +0100
Subject: New Book Announcement
Message-ID: <22393.199709101827@hebe.brunel.ac.uk>

Just published:

``Evolutionary Learning Algorithms for Nonlinear Adaptive Control''
by Dimitris C. Dracopoulos
Springer Verlag, Perspectives in Neural Computing Series,
Series Editor: J. Taylor, August 1997, ISBN: 3-540-76161-6, GBP24.99

Neural networks and evolutionary algorithms are constantly expanding their
field of application to a variety of new domains. One area of particular
interest is their applicability to control and adaptive control systems: the
limitations of classical control theory, combined with the need for greater
robustness, adaptivity and ``intelligence'', make neurocontrol and
evolutionary control algorithms an attractive (and in some cases, the only)
alternative. After an introduction to neural networks and genetic algorithms,
this volume describes in detail how neural networks and evolutionary
techniques (specifically genetic algorithms and genetic programming) can be
applied to the adaptive control of complex dynamic systems (including chaotic
ones).
A number of examples are presented, and useful tips are given for the
application of the techniques described. The fundamentals of dynamic systems
theory and classical adaptive control are also covered.

More details can be found at:
http://www.brunel.ac.uk/~csstdcd/learning_control_book.html

From kogan at rutcor.rutgers.edu Wed Sep 10 12:17:54 1997
From: kogan at rutcor.rutgers.edu (Alex Kogan)
Date: Wed, 10 Sep 1997 12:17:54 -0400
Subject: ARTIFICIAL INTELLIGENCE AND MATHEMATICS'98, SECOND CALL FOR PAPERS
Message-ID: <9709101217.ZM16244@horn.rutgers.edu>

SECOND CALL FOR PAPERS
-------------------------------------------------------------------------
Fifth International Symposium on
ARTIFICIAL INTELLIGENCE AND MATHEMATICS
-------------------------------------------------------------------------
January 4-6, 1998, Fort Lauderdale, Florida
http://rutcor.rutgers.edu/~amai
Email: amai at rutcor.rutgers.edu
-------------------------------------------------------------------------

APPROACH OF THE SYMPOSIUM

The International Symposium on Artificial Intelligence and Mathematics is the
fifth of a biennial series. Our goal is to foster interactions among
mathematics, theoretical computer science, and artificial intelligence. The
meeting includes paper presentations, invited speakers, and special topic
sessions. Topic sessions in the past have covered computational learning
theory, nonmonotonic reasoning, and computational complexity issues in AI.
The editorial board of the Annals of Mathematics and Artificial Intelligence
serves as the permanent Advisory Committee for the series.
-------------------------------------------------------------------------

INVITED TALKS will be given by

* Robert Aumann (Hebrew University, Israel)
* Joe Halpern (Cornell University)
* Pat Hayes (University of West Florida)
* Scott Kirkpatrick (IBM, Yorktown Heights)
* William McCune (Argonne National Laboratory)

-------------------------------------------------------------------------

SUBMISSIONS

Authors must e-mail a short abstract (up to 200 words) in plain text format
to amai at rutcor.rutgers.edu by SEPTEMBER 23, 1997, and either e-mail
postscript files or TeX/LaTeX source files (including all necessary macros)
of their extended abstracts (up to 10 double-spaced pages) to
amai at rutcor.rutgers.edu, or send five copies to

Endre Boros
RUTCOR, Rutgers University
P.O. Box 5062
New Brunswick, NJ 08903 USA

or, if using FEDEX or another fast delivery service, to

Endre Boros
RUTCOR, Busch Campus, Rutgers University
Brett and Bartholomew Roads
Piscataway, NJ 08854 USA

to be received by SEPTEMBER 30, 1997. Authors will be notified of acceptance
or rejection by OCTOBER 31, 1997. The final versions of the accepted extended
abstracts, for inclusion in the conference volume, are due by NOVEMBER 30,
1997.

Authors of accepted papers will be invited to submit, within one month after
the Symposium, a final full-length version of their paper to be considered
for inclusion in a thoroughly refereed volume of the series Annals of
Mathematics and Artificial Intelligence, J.C. Baltzer Scientific Publishing
Co.; for earlier volumes, see Vol. I, Vol. II, Vol. III.
-------------------------------------------------------------------------

IMPORTANT DATES

Abstracts received: September 23, 1997
Extended abstracts due: September 30, 1997
Authors notified: October 31, 1997
Final versions received: November 30, 1997
AI & Math Symposium: January 4-6, 1998

-------------------------------------------------------------------------

SPONSORS

The Symposium is partially supported by the Annals of Math and AI, Florida
Atlantic University, and the Florida-Israel Institute. Other support is
pending. If additional funding is secured, partial travel subsidies may be
available to junior researchers.

-------------------------------------------------------------------------

General Chair: Martin Golumbic, Bar-Ilan University, Ramat Gan
Conference Chair: Frederick Hoffman, Florida Atlantic University
Program Co-Chairs: Endre Boros, Rutgers University
                   Russ Greiner, Siemens Corporate Research Inc.
Publicity Chair: Alex Kogan, Rutgers University

Program Committee (others pending):
* Martin Anthony (London School of Economics, England)
* Peter Auer (Technical University of Graz, Austria)
* Fahiem Bacchus (Univ. Waterloo, Canada)
* Peter Bartlett (Australian National University)
* Peter van Beek (University of Alberta, Canada)
* Jimi Crawford (i2 Technologies)
* Adnan Darwiche (American Univ., Lebanon)
* Rina Dechter (UC Irvine)
* Thomas Eiter (University of Giessen, Germany)
* Boi Faltings (EPFL, Switzerland)
* Ronen Feldman (Bar-Ilan University, Ramat Gan)
* John Franco (University of Cincinnati)
* Eugene Freuder (University of New Hampshire)
* Giorgio Gallo (University of Pisa, Italy)
* Hector Geffner (Universidad Simón Bolívar, Venezuela)
* Georg Gottlob (Technical University of Vienna, Austria)
* Adam Grove (NEC Research)
* Peter L.
Hammer (Rutgers University)
* David Heckerman (Microsoft Corporation)
* Michael Kaminski (Technion, Israel)
* Henry Kautz (AT&T)
* Helene Kirchner (CNRS-INRIA, Nancy, France)
* Richard Korf (UCLA)
* Gerhard Lakemeyer (Aachen, Germany)
* Jean-Claude Latombe (Stanford)
* Maurizio Lenzerini (University of Rome, Italy)
* Alon Levy (AT&T)
* Fangzhen Lin (Hong Kong University of Science and Technology)
* Alan Mackworth (UBC)
* Heikki Mannila (University of Helsinki, Finland)
* Eddy Mayoraz (IDIAP, Switzerland)
* Anil Nerode (Cornell)
* Jeff Rosenschein (Hebrew University, Israel)
* Elisha Sacks (Purdue)
* Dale Schuurmans (University of Pennsylvania)
* Bart Selman (AT&T)
* Eduardo D. Sontag (Rutgers University)
* Ewald Speckenmeyer (University of Koeln, Germany)
* Moshe Vardi (Rice)
* Paul Vitanyi (CWI, The Netherlands)

-------------------------------------------------------------------------

INFORMATION

Hotel

The Symposium will be held at the Embassy Suites in Fort Lauderdale:

Embassy Suites Hotel
1100 S.E. 17th Street
Fort Lauderdale, FL 33316

Spacious, newly refurbished two-room suites are available at the reduced rate
of $129, single or double occupancy. Each includes a separate living room and
bedroom, microwave, refrigerator, coffee maker, two TVs, two voice mail
telephones with dual lines, data ports, and a queen-size sofa sleeper in the
living room. You get a complimentary full cooked-to-order breakfast with free
newspaper, a complimentary manager's cocktail reception each evening,
complimentary 24-hour transportation to and from Ft. Lauderdale airport, and
free parking. For reservations, call 1-800-362-2779 or 954-527-2700 by
December 1, 1997.

Airline

Delta Airlines is our Conference airline - and their discounts have improved.
Call them at 1-800-241-6760 and give FAU's file number: 102789A.

Car Rental

Avis Rent A Car is our Conference car rental agency, offering us special
rates. Call 1-800-331-1600 (in Canada, 1-800-879-2847) and mention the
Symposium.
Further information and future announcements can be obtained from the
Conference Web Site at http://rutcor.rutgers.edu/~amai or by (e)mail to

Professor Frederick Hoffman
Florida Atlantic University, Department of Mathematics
PO Box 3091, Boca Raton, FL 33431, USA
hoffman at acc.fau.edu

For a list of where we have announced this conference, see
http://rutcor.rutgers.edu/~amai/lists-news.html

From Dimitris.Dracopoulos at brunel.ac.uk Wed Sep 10 14:37:40 1997
From: Dimitris.Dracopoulos at brunel.ac.uk (Dimitris.Dracopoulos@brunel.ac.uk)
Date: Wed, 10 Sep 1997 19:37:40 +0100
Subject: typo error: New Book Announcement
Message-ID: <22582.199709101837@hebe.brunel.ac.uk>

The correct title for the previous book announcement is:

``Evolutionary Learning Algorithms for Neural Adaptive Control''
by Dimitris C. Dracopoulos
Springer Verlag, Perspectives in Neural Computing Series,
Series Editor: J. Taylor, August 1997, ISBN: 3-540-76161-6, GBP24.99

Apologies for the mistake.

From zhang at salk.edu Thu Sep 11 16:55:00 1997
From: zhang at salk.edu (zhang@salk.edu)
Date: Thu, 11 Sep 1997 13:55:00 -0700 (PDT)
Subject: preprint available
Message-ID: <199709112055.NAA05153@bohr.salk.edu>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 3248 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/8d6fde6f/attachment-0001.ksh

From roweis at cns.caltech.edu Thu Sep 11 02:21:50 1997
From: roweis at cns.caltech.edu (Sam Roweis)
Date: Wed, 10 Sep 1997 23:21:50 -0700 (PDT)
Subject: Modified PCA Model
Message-ID: <9709110628.AA11399@tao.cns.caltech.edu>

Paper Available Online
------------------------

For those interested in modifications to basic principal component analysis
(PCA) models, especially those which define a proper probability model in the
data space, I am making available a paper which will appear in NIPS'97.
This paper introduces a new model called _sensible_ principal component analysis (SPCA) which is closely related to the independent work of Michael Tipping and Chris Bishop recently announced on this list. The paper also shows how this new model as well as the regular PCA model can be efficiently learned using an EM algorithm. There are two basic ideas in the paper: 1) Given some data, a very simple thing to do is to build a single Gaussian density model using the sample mean and covariance. The likelihood of new data can be easily obtained by evaluation under this Gaussian. PCA identifies the axes of such a Gaussian model. But if we throw away some of the principal components by projecting into a subspace of high variance, we no longer have a proper density model. (The squared distance of new data from the subspace is one measure of "novelty" but it is insensitive to translations within the principal subspace.) We can recapture a proper probability model by replacing all of the unwanted principal components with a single spherical noise model whose scale can be estimated from the data without ever having to find the unwanted components. Because this noise model is characterized by only a single scalar, such a model is almost as economical as regular PCA but, like factor analysis, defines a proper probability density model. 2) Traditional methods for finding the first few principal components of a dataset can be quite costly when there are many datapoints and/or the data space is of very high dimension. These methods can also fail when the sample covariance matrix is not of full rank and cannot deal easily with missing data values. Using the EM algorithms presented in the paper to fit a regular PCA model or an SPCA model can be significantly more efficient and require less data than other techniques. These algorithms also deal naturally with missing data values, even when all datapoints have missing coordinates. 
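The EM iteration for regular PCA described in point 2 above alternates between projecting the data into the current estimate of the subspace and refitting the subspace by least squares. As a rough NumPy sketch (our own minimal illustration with our own function name, initialisation and toy data, not the paper's code):

```python
import numpy as np

def em_pca(Y, k, n_iter=50):
    """Fit a k-dimensional principal subspace to zero-mean data Y (d x n)
    by the EM iteration for PCA: E-step projects the data onto the current
    basis, M-step refits the basis by least squares. Initialisation and
    iteration count are our own arbitrary choices."""
    d, n = Y.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, k))              # random initial basis
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Y)    # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)     # M-step: new basis
    Q, _ = np.linalg.qr(W)                       # orthonormalise for comparison
    return Q

# Toy check: data drawn with one dominant variance direction; the recovered
# basis should align with the leading eigenvector of the sample covariance.
rng = np.random.default_rng(1)
C = np.diag([10.0, 1.0, 0.1])
Y = rng.multivariate_normal(np.zeros(3), C, size=2000).T
Y -= Y.mean(axis=1, keepdims=True)
Q = em_pca(Y, 1)
print(np.abs(Q[:, 0]))  # dominated by the first coordinate
```

Note that, as the announcement emphasises, each iteration touches only the d x n data matrix (cost O(dnk)); the d x d sample covariance is never formed, which is the source of the efficiency claim for high-dimensional data.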
EM ALGORITHMS FOR PCA AND SPCA (to appear in NIPS97) Sam Roweis Computation and Neural Systems 139-74 California Institute of Technology ftp://hope.caltech.edu/pub/roweis/Empca/empca.ps or ftp://hope.caltech.edu/pub/roweis/Empca/empca.ps.gz Abstract I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally efficient and does not require computing the sample covariance of the data. It also naturally accommodates missing information. I also introduce a new variation of PCA known as {\em sensible} principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I include results of simulations showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to thousands of datapoints in hundreds of dimensions. Sam roweis at cns.caltech.edu http://hope.caltech.edu/roweis From scott at salk.edu Fri Sep 12 19:02:04 1997 From: scott at salk.edu (scott@salk.edu) Date: Fri, 12 Sep 1997 16:02:04 -0700 (PDT) Subject: Paper and software for Independent Component Analysis of Evoked Brain Responses Message-ID: <199709122302.QAA06503@kuffler.salk.edu> "Blind separation of auditory event-related brain responses into independent components" S. Makeig, T-P. Jung, D. Ghahremani, A.J. Bell & T.J. Sejnowski (In press, PNAS) Advance copies of this paper are available for online review and/or download (151K). Independent component analysis (ICA) is a method for decomposing multichannel data into a sum of temporally independent components. In the paper, we apply an enhanced version of the ICA algorithm of Bell & Sejnowski (1995) to decomposition of brain responses to auditory targets in a vigilance experiment. 
We demonstrate the nature and stability of the decomposition and discuss its utility for analysis of event-related response potentials. The URL is: http://www.cnl.salk.edu/~scott/PNAS.html ================================================================ Matlab Toolbox for Independent Component Analysis of Electrophysiological Data by Scott Makeig Tony Bell, Tzyy-Ping Jung, Colin Humphries, Te-Won Lee Terrence Sejnowski Computational Neurobiology Laboratory Salk Institute, La Jolla CA A toolbox of routines written under Matlab for Independent Component Analysis (ICA) and display of electrophysiological (EEG or MEG) data is available for download. This software implements the ICA algorithm of Bell & Sejnowski (1995) for use with multichannel physiological data, particularly event-related or spontaneous EEG (or MEG) data. The algorithm separates data into a sum of components whose time courses are maximally independent of one another and whose spatial projections to the scalp are fixed throughout the analysis epoch. The decomposition routine (runica.m) also can implement an extended ICA algorithm (Lee, Girolami and Sejnowski) for separating mixtures of sub-Gaussian as well as sparse (super-Gaussian) components. Applications to ERP and EEG data including comparison of conditions and elimination of artifacts have been addressed in a series of papers and abstracts available through a related bibliography page. Another page answers Frequently Asked Questions about applying ICA to psychophysiological data. Graphics routines include general-purpose functions for viewing either averaged or spontaneous EEG data and for making and viewing animations of shifting scalp distributions. Other routines are useful for sorting and displaying the time courses, scalp topographies, and scalp projections of ICA components. A demonstration routine (icademo.m) and directory page (ica.m) are included. The software has been written under Matlab 4.2c. 
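For readers who want the gist of the core computation, the natural-gradient form of the Bell & Sejnowski (1995) infomax rule can be sketched in a few lines of NumPy. This is an illustrative reconstruction with names and step sizes of my own choosing, not the toolbox's actual runica.m code:

```python
import numpy as np

def infomax_ica(X, lr=0.05, n_iter=500, seed=0):
    """Natural-gradient infomax ICA for super-Gaussian sources
    (after Bell & Sejnowski 1995, with Amari's natural gradient).
    X: (channels, samples) array, assumed zero-mean and whitened.
    Returns an unmixing matrix W such that U = W @ X estimates the
    maximally independent components.  Sketch only, not runica.m."""
    n_ch, n_samp = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(n_ch) + 0.01 * rng.standard_normal((n_ch, n_ch))
    I = np.eye(n_ch)
    for _ in range(n_iter):
        U = W @ X                                     # current source estimates
        # tanh is the score function for a super-Gaussian source prior
        W += lr * (I - np.tanh(U) @ U.T / n_samp) @ W
    return W
```

The extended algorithm mentioned above additionally switches the sign of the nonlinear term per component so that sub-Gaussian sources can also be separated.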
A version for Matlab 5.0 will be released later. To download the toolbox in Unix (compress) or PC (zip) formats (~155K): http://www.cnl.salk.edu/~scott/ica-download-form.html For further on-line information: http://www.cnl.salk.edu/~scott/icafaq.html - frequently asked questions http://www.cnl.salk.edu/~scott/icabib.html - bibliography of applications Email: scott at salk.edu Scott Makeig ___________________________________________________________________ Scott Makeig http://www.cnl.salk.edu/~scott (619) 553-8414 Comp. Neurobiol. Lab., Salk Institute | scott at salk.edu UCSD Department of Neurosciences | smakeig at ucsd.edu Naval Health Research Center | makeig at nhrc.navy.mil From cns-cas at cns.bu.edu Sun Sep 14 22:02:20 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Sun, 14 Sep 1997 22:02:20 -0400 Subject: CALL FOR PAPERS - Deadline Oct 31, 1997! Message-ID: <3.0.3.32.19970914220220.00d6357c@cns.bu.edu> *****CALL FOR PAPERS***** 1998 Special Issue of Neural Networks NEURAL CONTROL AND ROBOTICS: BIOLOGY AND TECHNOLOGY Planning and executing movements is of great importance in both biological and mechanical systems. This Special Issue will bring together a broad range of invited and contributed articles that describe progress in understanding the biology and technology of movement control. Movement control covers a wide range of topics, from integration of different types of sensory information, to flexible planning of movements, to generation of motor commands, to compensation for internal and external perturbations. Of particular importance are the coordinate transformations, memory systems, and attentional and volitional mechanisms needed to implement movement control. Neural control is the study of how biological systems have solved these problems with joints, muscles, and brains. Robotics is the attempt to build mechanical systems that can solve these problems under constraints of size, weight, robustness, and cost. 
This Special Issue welcomes high quality articles from both fields and seeks to explore the possible synergies between them. CO-EDITORS: Professor Rodney Brooks, Massachusetts Institute of Technology Professor Stephen Grossberg, Boston University Dr. Lance Optican, National Institutes of Health SUBMISSION: Deadline for submission: October 31, 1997 Notification of acceptance: January 31, 1998 Format: no longer than 10,000 words; APA format ADDRESS FOR SUBMISSION: Professor Stephen Grossberg Boston University Department of Cognitive and Neural Systems 677 Beacon Street Boston, Massachusetts 02215 From atick at monaco.rockefeller.edu Mon Sep 15 08:45:14 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Mon, 15 Sep 1997 08:45:14 -0400 Subject: Network:CNS Table of Contents Message-ID: <9709150845.ZM15475@monaco.rockefeller.edu> NETWORK: COMPUTATION IN NEURAL SYSTEMS Volume 8, Issue 3, August 1997 Table of Contents (on line version can be accessed by those with institutional subscription at http://www.iop.org/Journals/ne ) Pages: V1--V18, 239--354 VIEWPOINT V1 Hebbian learning, its correlation catastrophe, and unlearning J L van Hemmen PAPERS 239 An association matrix model of context-specific vertical vergence adaptation J W McCandless and C M Schor 259 Learning low-dimensional representations via the usage of multiple-class labels N Intrator and S Edelman 283 Optimal ensemble averaging of neural networks U Naftaly, N Intrator and D Horn 297 Hidden Markov modelling of simultaneously recorded cells in the associative cortex of behaving monkeys I Gat, N Tishby and M Abeles 323 Recursive algorithms for principal component extraction A Chi-Sing Leung, Kwok-Wo Wong and Ah Chung Tsoi 335 Spatial-frequency analysis in the perception of perspective depth K Sakai and L H Finkel 353 BOOK REVIEW Spikes: Exploring the Neural Code F Rieke, D Warland, R de Ruyter van Steveninck and W Bialek (reviewed by D Reich) 
-------------------------------------------------------------- -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From horwitz at alw.nih.gov Mon Sep 15 15:06:40 1997 From: horwitz at alw.nih.gov (Barry Horwitz) Date: Mon, 15 Sep 1997 15:06:40 -0400 Subject: Postdoctoral position Message-ID: <341D873B.5913@alw.nih.gov> Below is a notice concerning a postdoctoral position in our lab for someone interested in neural modeling and functional neuroimaging. ------------------------------------------- National Institute on Aging Postdoctoral Fellowship IN Neural Modeling of Human Functional neuroImaging data A postdoctoral fellowship is available for developing and applying computational neuroscience modeling methods to in vivo human functional neuroimaging data, obtained from positron emission tomography and functional magnetic resonance imaging. The goal is to understand the relation between functional neuroimaging data, with its low spatial and temporal resolution, and the underlying electrophysiological behavior of multiple interconnected neuronal populations. Knowledge of neural modeling techniques and extensive programming experience are required. PhD or MD degree required. The position is in the Laboratory of Neurosciences, Brain Aging and Dementia Section, National Institute on Aging, Bethesda, Md, USA. Starting salary commensurate with training and experience. For further information, contact: Dr. Barry Horwitz, Bldg. 10, Rm. 6C-414, Lab. Neurosciences, National Institutes of Health, Bethesda, MD 20892, USA. Tel. 301-594-7755; FAX: 301-402-0595; Email: horwitz at alw.nih.gov. 
From jlm at cnbc.cmu.edu Mon Sep 15 14:43:20 1997 From: jlm at cnbc.cmu.edu (Jay McClelland) Date: Mon, 15 Sep 1997 14:43:20 -0400 (EDT) Subject: OPENING FOR RA IN STUDIES OF ADULT BRAIN PLASTICITY Message-ID: <199709151843.OAA01376@eagle.cnbc.cmu.edu> OPENING FOR RA IN STUDIES OF ADULT BRAIN PLASTICITY An immediate opening exists for a research assistant who will be involved in a project to study strategies that promote learning and brain plasticity, and their basis and use in enhancing literacy, under the supervision of Professors Julie Fiez, University of Pittsburgh and Jay McClelland, Carnegie Mellon. The studies test a new model that attempts to account for the apparent stability of language processing deficits in the face of massive exposure to natural speech, while at the same time explaining why interventions using exaggerated speech appear to be effective in remediation. The assistant will have the opportunity to contribute to behavioral experiments and functional neuroimaging studies. Specific responsibilities will include subject recruitment, subject testing, data analysis, participation in the development of stimulus materials and experimental designs, library research, and general administrative support. This position is a full-time paid research position, and funding is available for up to three years. This is not intended as a vehicle for graduate student support, but the opening may provide a unique opportunity for recent college graduates interested in becoming involved in cognitive neuroscience research, including those who may wish to pursue graduate study in the future. The holder of this position will have access to the facilities and resources of the Center for the Neural Basis of Cognition, a joint project of the Carnegie-Mellon University and the University of Pittsburgh. Interested applicants should send a short statement of qualifications by email to fiez+ at pitt.edu, or by smail to: Professor Julie Fiez 605 LRDC, Dept. 
Psychology 3939 O'Hara Street University of Pittsburgh Pittsburgh, PA 15260 From cns-cas at cns.bu.edu Mon Sep 15 10:40:05 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Mon, 15 Sep 1997 10:40:05 -0400 Subject: Graduate Training in Cognitive and Neural Systems Message-ID: <3.0.3.32.19970915104005.0071b3b8@cns.bu.edu> A non-text attachment was scrubbed... Name: not available Type: text/enriched Size: 20855 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/cb55d5e9/attachment-0001.bin From marwan at ee.usyd.edu.au Tue Sep 16 05:29:29 1997 From: marwan at ee.usyd.edu.au (Marwan Jabri) Date: Tue, 16 Sep 1997 19:29:29 +1000 (EST) Subject: U2000 Postdocs at University of Sydney, Australia Message-ID: This message is of interest to postdoc seekers. See http://www.usyd.edu.au/su/reschols/appkit/U2000FNT.HTM for information about the U2000 Research Fellowships at the University of Sydney. I can provide more information if you are interested in applying in the fields of neural computing and neuromorphic engineering. The closing date is October 9. See the web site above for the advertisement, conditions, and methods of application, including the application form.
Marwan Jabri ------------ Marwan Jabri Professor in Adaptive Systems Dept of Electrical Engineering, The University of Sydney NSW 2006, Australia Tel: (+61-2) 9351-2240, Fax:(+61-2) 9351-7209, Mobile: (+61) 414-512240 Email: marwan at sedal.usyd.edu.au, http://www.sedal.usyd.edu.au/~marwan/ From esann at dice.ucl.ac.be Tue Sep 16 06:32:00 1997 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Tue, 16 Sep 1997 12:32:00 +0200 Subject: ESANN 98 - call for papers Message-ID: <199709161021.MAA17643@ns1.dice.ucl.ac.be> --------------------------------------------------- | 6th European Symposium | | on Artificial Neural Networks | | | | ESANN 98 | | | | Bruges - April 22-23-24, 1998 | | | | First announcement and call for papers | --------------------------------------------------- The call for papers for the ESANN 98 conference is now available on the Web: http://www.dice.ucl.ac.be/esann For those of you who maintain WWW pages including lists of related ANN sites: we would appreciate it if you could add the above URL to your list; thank you very much! We try as much as possible to avoid multiple sendings of this call for papers; however, please accept our apologies if you receive this e-mail twice despite our precautions. You will find below a short version of this call for papers, without the instructions to authors (available on the Web). If you have difficulty connecting to the Web, please send an e-mail to esann at dice.ucl.ac.be and we will send you a full version of the call for papers. Scope and topics ---------------- Since its first edition in 1993, the European Symposium on Artificial Neural Networks has become the reference for researchers on fundamentals and theoretical aspects of artificial neural networks. Each year, around 100 specialists from all parts of the world attend ESANN, in order to present their latest results and comprehensive surveys, and to discuss the future developments in this field.
The ESANN 98 conference will focus on fundamental aspects of ANNs: theory, models, learning algorithms, mathematical aspects, approximation of functions, classification, control, time-series prediction, statistics, signal processing, vision, self-organization, vector quantization, evolutive learning, psychological computations, biological plausibility,... Papers on links and comparisons between ANNs and other domains of research (such as statistics, data analysis, signal processing, biology, psychology, evolutive learning, bio-inspired systems,...) are also encouraged. Papers will be presented orally (no parallel sessions). If the number of high-quality accepted papers is too high, some poster sessions could be organized. All posters will be complemented by a short oral presentation during a plenary session. It is important to mention that it is the topic of a paper, not its quality, that will decide whether it better fits into an oral or a poster session. Posters will be held to the same quality standard as oral presentations, and both will be printed in the same way in the proceedings. Nevertheless, authors may indicate on the author submission form that they are only willing to present their paper orally.
The following is a non-exhaustive list of topics which will be covered during ESANN'98: * theory * models and architectures * mathematics * learning algorithms * vector quantization * self-organization * RBF networks * Bayesian classification * recurrent networks * approximation of functions * time series forecasting * adaptive control * statistical data analysis * independent component analysis * signal processing * natural and artificial vision * cellular neural networks * fuzzy neural networks * hybrid networks * identification of non-linear dynamic systems * biologically plausible artificial networks * bio-inspired systems * formal models of biological phenomena * neurobiological systems * cognitive psychology * adaptive behavior * evolutive learning The ESANN'98 conference is organized with the support of the IEEE Region 8, the IEEE Benelux Section, and the Universite Catholique de Louvain (UCL, Louvain-la-Neuve, Belgium). Special sessions ---------------- Six special sessions will be organized by renowned scientists in their respective fields. Papers submitted to these sessions are reviewed according to the same rules as any other submission. Authors who submit papers to one of these sessions are invited to mention it on the author submission form; nevertheless, submissions to the special sessions must follow the same format, instructions and deadlines as any other submission, and must be sent to the same address. The special sessions organized during ESANN 98 are: * Self-organizing maps for data analysis Marie Cottrell (Univ. Paris 1 Sorbonne, France) & Eric de Bodt (U.C. Louvain-la-Neuve, Belgium) * ANN for the processing of facial information Manuel Grana (UPV San Sebastian, Spain) * Radial basis networks Leonardo Reyneri (Polit. di Torino, Italy) * Cellular neural networks (CNN) technology - theory and application Tamas Roska (Hungarian Academy of Science, Hungary) * Neural networks for control Joos Vandewalle & Johan Suykens (K.U.
Leuven, Belgium) * ANN for speech processing Christian Wellekens (Eurecom Sophia-Antipolis, France) The ESANN'98 conference is organized the week after CNNA'98, the 5th IEEE International Workshop on Cellular Neural Networks and their Applications, in London (UK). For details, see http://www.sbu.ac.uk/eeie/CNNA98 Location -------- The conference will be held in Bruges (also called "Venice of the North"), one of the most beautiful medieval towns in Europe. Bruges can be reached by train from Brussels in less than one hour (frequent trains). The town of Bruges is known worldwide, and famous for its architectural style, its canals, and its pleasant atmosphere. The conference will be organized in a hotel located near the center of the town (walking distance). There is no obligation for the participants to stay in this hotel. Hotels of all levels of comfort and price are available in Bruges; it is possible to book a room in the conference hotel, or in another one (50 m from the first) at a preferential rate through the conference secretariat. A list of other smaller hotels is also available. The conference will be held at the Novotel hotel, Katelijnestraat 65B, 8000 Brugge, Belgium. Deadlines --------- Submission of papers December 1, 1997 Notification of acceptance January 31, 1998 Symposium April 22-23-24, 1998 Conference secretariat ---------------------- Michel Verleysen, D facto conference services, 45 rue Masui, B-1000 Brussels (Belgium); phone: +32 2 203 43 63; fax: +32 2 203 42 94; E-mail: esann at dice.ucl.ac.be; http://www.dice.ucl.ac.be/esann Reply form ---------- If you wish to receive the final program of ESANN'98, for any address change, or to add one of your colleagues to our database, please send this form to the conference secretariat. ------------------------ cut here ----------------------- ------------------ ESANN'98 reply form ------------------ Name: .................................................
First Name: ............................................ University or Company: ................................. ................................. Address: .............................................. .............................................. .............................................. ZIP: ........ Town: ................................ Country: ............................................... Tel: ................................................... Fax: ................................................... E-mail: ................................................ ------------------------ cut here ----------------------- Please send this form to: D facto conference services 45 rue Masui B - 1000 Brussels e-mail: esann at dice.ucl.ac.be Steering and local committee (to be confirmed) ---------------------------------------------- François Blayo Univ. Paris I (F) Marie Cottrell Univ. Paris I (F) Jeanny Herault INPG Grenoble (F) Henri Leich Fac. Polytech. Mons (B) Bernard Manderick Vrije Univ. Brussel (B) Eric Noldus Univ. Gent (B) Jean-Pierre Peters FUNDP Namur (B) Joos Vandewalle KUL Leuven (B) Michel Verleysen UCL Louvain-la-Neuve (B) Scientific committee (to be confirmed) -------------------------------------- Edoardo Amaldi Cornell Univ. (USA) Agnes Babloyantz Univ. Libre Bruxelles (B) Herve Bourlard FPMS Mons (B) Joan Cabestany Univ. Polit. de Catalunya (E) Holk Cruse Universitat Bielefeld (D) Eric de Bodt UCL Louvain-la-Neuve (B) Dante Del Corso Politecnico di Torino (I) Wlodek Duch Nicholas Copernicus Univ. (PL) Marc Duranton Philips / LEP (F) Jean-Claude Fort Universite Nancy I (F) Bernd Fritzke Ruhr-Universitat Bochum (D) Stan Gielen Univ. of Nijmegen (NL) Karl Goser Universitat Dortmund (D) Manuel Grana UPV San Sebastian (E) Anne Guerin-Dugue INPG Grenoble (F) Martin Hasler EPFL Lausanne (CH) Christian Jutten INPG Grenoble (F) Vera Kurkova Acad. of Science of the Czech Rep. (CZ) Petr Lansky Acad. of Science of the Czech Rep.
(CZ) Hans-Peter Mallot Max-Planck Institut (D) Eddy Mayoraz IDIAP Martigny (CH) Jean Arcady Meyer Ecole Normale Superieure Paris (F) Jose Mira Mira UNED (E) Pietro Morasso Univ. of Genoa (I) Jean-Pierre Nadal Ecole Normale Superieure Paris (F) Erkki Oja Helsinki University of Technology (FIN) Gilles Pages Universite Paris VI (F) Helene Paugam-Moisy Ecole Normale Superieure Lyon (F) Alberto Prieto Universidad de Granada (E) Ronan Reilly University College Dublin (IRE) Tamas Roska Hungarian Academy of Science (H) Jean-Pierre Rospars INRA Versailles (F) John Stonham Brunel University (UK) John Taylor King's College London (UK) Claude Touzet IUSPIM Marseilles (F) Marc Van Hulle KUL Leuven (B) Christian Wellekens Eurecom Sophia-Antipolis (F) _____________________________ D facto publications - conference services, 45 rue Masui, 1000 Brussels, Belgium; tel: +32 2 203 43 63; fax: +32 2 203 42 94; esann at dice.ucl.ac.be; http://www.dice.ucl.ac.be/esann Michel Verleysen, Univ. Cath. de Louvain - DICE, 3, pl. du Levant, B-1348 Louvain-la-Neuve, Belgium; tel: +32 10 47 25 51; fax: +32 10 47 25 98; verleysen at dice.ucl.ac.be _____________________________ From robbie at bcs.rochester.edu Tue Sep 16 08:24:16 1997 From: robbie at bcs.rochester.edu (Robbie Jacobs) Date: Tue, 16 Sep 1997 08:24:16 -0400 (EDT) Subject: faculty position available Message-ID: <199709161224.IAA21369@broca.bcs.rochester.edu> The Department of Brain and Cognitive Sciences at the University of Rochester invites applications for a tenure-track position as Assistant Professor in the area of cognition broadly defined (the ad for the position is below). People with interests in computational modeling of cognitive processes are encouraged to apply. DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES, UNIVERSITY OF ROCHESTER: invites applications for a tenure-track position as Assistant Professor in the area of cognition broadly defined.
Research interests could include domains traditional to cognitive psychology, such as learning, memory, reasoning, perception, motor control, and language, and could also include areas such as cognitive neuroscience, neuropsychology, computational modeling, and cognitive development. Candidates with strong research and teaching interests in any of these areas should send a vita, research and teaching statement, representative reprints, and at least three letters of recommendation to: Cognitive Search, Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY 14627-0268. Applicants can learn about the Department of Brain and Cognitive Sciences by referring to our pages on the world wide web (http://www.bcs.rochester.edu). The deadline for complete applications is November 15, 1997. Applications from women and members of minority groups are especially welcome. University of Rochester is an equal opportunity employer. From oby at cs.tu-berlin.de Tue Sep 16 12:38:09 1997 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Tue, 16 Sep 1997 18:38:09 +0200 (MET DST) Subject: Preprint available Message-ID: <199709161638.SAA22840@pollux.cs.tu-berlin.de> Dear connectionists The following preprint is now available online at: http://kon.cs.tu-berlin.de/publications/#conference An annealed self-organizing map for source channel coding M. Burger, T. Graepel, and K. Obermayer CS Department, Technical University of Berlin, Berlin, Germany Abstract: We derive and analyse robust optimization schemes for noisy vector quantization on the basis of deterministic annealing. Starting from a cost function for central clustering that incorporates distortions from channel noise we develop a soft topographic vector quantization algorithm (STVQ) which is based on the maximum entropy principle and which performs a maximum-likelihood estimate in an expectation-maximization (EM) fashion. 
Annealing in the temperature parameter $\beta$ leads to phase transitions in the existing code vector representation during the cooling process, for which we calculate critical temperatures and modes as a function of eigenvectors and eigenvalues of the covariance matrix of the data and the transition matrix of the channel noise. A whole family of vector quantization algorithms is derived from STVQ, among them a deterministic annealing scheme for Kohonen's self-organizing map (SOM). This algorithm, which we call SSOM, is then applied to vector quantization of image data to be sent via a noisy binary symmetric channel. The algorithm's performance is compared to that of LBG and STVQ. While it is naturally superior to LBG, which does not take into account channel noise, its results compare very well to those of STVQ, which is computationally much more demanding. (This paper will appear at NIPS97) From Pregenz at dpmi.tu-graz.ac.at Thu Sep 18 12:02:00 1997 From: Pregenz at dpmi.tu-graz.ac.at (Martin) Date: Thu, 18 Sep 1997 17:02:00 +0100 Subject: Thesis available Message-ID: <2490B8D696F@dpmi.tu-graz.ac.at> >>> FEATURE SELECTION >>> CLASSIFICATION PROBLEMS My PhD thesis: "Distinction Sensitive Learning Vector Quantization (DSLVQ)" (TU-Graz, 120 pages) is now available for anonymous ftp from: FTP-host: fdpmial03.tu-graz.ac.at FTP-filename: /pub/outgoing/dslvq.ps.Z (900 kbyte) /pub/outgoing/dslvq.ps (4.4 mbyte) Martin Pregenzer --------------------------- Abstract This thesis introduces a new feature selection method: Distinction Sensitive Learning Vector Quantization (DSLVQ). DSLVQ is not based on the individual testing of different candidate feature subsets; the relevance of the features is deduced from the implicit problem representation through an exemplar based classification method.
While most of the common feature selection methods require repeated training of the target classifier on selected feature subsets, only a single learning process is necessary with DSLVQ. This makes the new method exceptionally quick. The DSLVQ algorithm is motivated theoretically and evaluated empirically. On a very complex and high-dimensional artificial data set it is shown that DSLVQ can separate relevant and irrelevant features reliably. A real-world application of DSLVQ is the selection of optimal frequency bands for an EEG-based Brain Computer Interface (BCI). DSLVQ has been used to individually adapt the filter settings for each subject. This can improve the performance of the BCI. For the LVQ classifier, conditions under which stability problems with different training algorithms can occur are outlined in chapter 4 of this thesis and in a 15-page draft paper which can be downloaded from the same site (lvq_stab.ps.Z / lvq_stab.ps). From austin at minster.cs.york.ac.uk Wed Sep 17 18:11:16 1997 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Wed, 17 Sep 97 18:11:16 Subject: No subject Message-ID: Parallel Neural Hardware for High Performance Pattern Matching 3 Year research post An individual with a keen interest in designing high-performance computing hardware is required to join a team working on the design of a very high performance pattern matching machine (search engine). The post involves the design, from logic gate level through parallel sub-systems to the low-level software drivers, of a machine based on a fine-grained implementation of binary neural network based methods. We are looking for candidates with applicable experience in digital hardware design and implementation of low-level software in C and C++. Knowledge of FPGA devices and CAD support tools and the design of high-performance hardware will be necessary. Knowledge of neural networks and p methods is an advantage but not essential.
The post is supported by first-class facilities and technical support. Working with a team of researchers and technicians, the successful candidate will contribute to the project's major aim: to scale up our present pattern matching architecture (AURA) to a flexible parallel architecture to solve a wide range of problems being developed in conjunction with British Aerospace, The Post Office and other major international companies. Example applications are matching in very large textual, financial and structural databases. The post offers a unique opportunity for individuals who have a proven record of achievement to be involved in the development of a leading-edge technology with exciting prospects for the future. The post is funded by the UK's EPSRC (government funding council) in collaboration with The Post Office Research Group and British Aerospace plc, under the direction of Dr. Jim Austin. Details of this project, further details of the post and related publications can be found on our web page at http://www.cs.york.ac.uk/~arch/nn/aura.html The research is to be undertaken within the Advanced Computer Architectures Group, under the supervision of Dr. Jim Austin and in partnership with Ken Lees. The Department is one of the leading Computer Science Departments in the UK and is rated 5* for research; in recognition of this, the University has provided us with a brand new building which will open in Oct. 1997. The group is internationally known for its work in binary neural networks and their high-speed hardware implementation. It has extensive computer and technical support. The successful applicant can look forward to joining a very active and thriving research team. Applications are invited from individuals with a good honours degree and a PhD in a relevant area. Salary: 15,159 - 20,103 UK pounds. Applications should be sent to the Personnel Office, University of York, York YO1 5DD (email: jobs at york.ac.uk), quoting reference number 6049.
The closing date for applications is 1 October 1997. From simon.schultz at psy.ox.ac.uk Sat Sep 20 15:19:08 1997 From: simon.schultz at psy.ox.ac.uk (Simon Schultz) Date: Sat, 20 Sep 1997 20:19:08 +0100 Subject: Preprint available Message-ID: <342421AC.102F@psy.ox.ac.uk> Dear Connectionists, Preprints of the following paper are available via WWW. It is to appear in a Cambridge Univ. Press book in 1998 entitled "Information Theory and the Brain", edited by R. Baddeley, P. Hancock and P. Foldiak. QUANTITATIVE ANALYSIS OF A SCHAFFER COLLATERAL MODEL S. Schultz(1), S. Panzeri(1), E.T. Rolls(1) and A. Treves(2) (1) Department of Experimental Psychology, University of Oxford, UK. (2) Programme in Neuroscience, SISSA, Trieste, Italy. Abstract: Advances in techniques for the formal analysis of neural networks have introduced the possibility of detailed quantitative analyses of brain circuitry. This paper applies a method for calculating mutual information to the analysis of the Schaffer collateral connections between regions CA3 and CA1 of the hippocampus. Attention is given to the introduction of further details of anatomy and physiology to the calculation: in particular, the distribution of the number of connections CA1 neurons receive from CA3, and the graded nature of the firing-rate distribution in region CA3. 16 pages, 5 figures. 
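The paper above computes mutual information analytically for the Schaffer collateral model; purely as an illustration of the quantity being calculated (the toy numbers below are hypothetical and not from the paper), mutual information between a discrete input and output can be estimated from a joint probability table:

```python
import numpy as np

def mutual_information(joint):
    """I(S;R) in bits from a joint probability table p(s, r)."""
    joint = joint / joint.sum()                    # normalize, just in case
    ps = joint.sum(axis=1, keepdims=True)          # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)          # marginal p(r)
    mask = joint > 0                               # skip zero-probability cells
    return float((joint[mask] * np.log2(joint[mask] / (ps @ pr)[mask])).sum())

# Toy example: two input patterns, two output firing levels.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(round(mutual_information(p), 3))  # prints 0.278
```

The analytical calculations in the paper go far beyond this discrete plug-in estimate, but the quantity being maximized or measured is the same.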
http://www.mrc-bbc.ox.ac.uk/~schultz/sch.ps.gz -- ----------------------------------------------------------------------- Simon Schultz, Department of Experimental Psychology, University of Oxford, South Parks Rd., Oxford OX1 3UD; also: Corpus Christi College, Oxford OX1 4JF. Phone: +44-1865-271419 Fax: +44-1865-310447 http://www.mrc-bbc.ox.ac.uk/~schultz/ ----------------------------------------------------------------------- From pmitra at bell-labs.com Mon Sep 22 01:01:29 1997 From: pmitra at bell-labs.com (Partha Mitra) Date: Mon, 22 Sep 1997 01:01:29 -0400 Subject: Just established: new neuroscience archives Message-ID: <3425FBA9.2F48@bell-labs.com> A neuroscience hierarchy has been established on the e-print archive at xxx.lanl.gov. This is an established archival site, sponsored by the NSF, that currently receives over 20,000 preprints and reprints each year, mostly in physics. Data-rich research communities, such as astrophysics, are able to publish important sets of data (or links to the larger sets) in a central, easily accessible location. Some background information about the impact that the archive has had on physics and some related fields is available at http://xxx.lanl.gov/blurb/sep96news.html The site is primarily a means for efficient distribution and archiving. Submissions are not reviewed (although "response" papers can be submitted), and so it does not replace conventional journals. Experience with the existing archives suggests that submissions tend to maintain a high level of scholarship. We suggest that there are a number of benefits that the neuroscience community might expect to obtain from this archive: * The archives provide a means of rapid and widely accessible dissemination of research results in the form of preprints or reprints. * Shorter notes, algorithms, or interesting data sets (with a short descriptive abstract) can be published independently of long papers and thus speed the distribution of important information.
* The archive provides a central resource to make available important raw data sets for further scrutiny. For lengthy data sets, a short description with hyperlinks to the full data will suffice. * Technical reports, either on computational, instrumental, or preparatory procedures, can be made widely available. * Conference proceedings papers that aren't indexed or that don't receive wide distribution will be easily accessible. * The archive is mirrored in 14 different countries spread across the globe, with the network still growing. In particular, in countries where the costs of institutional journal subscriptions are prohibitive, the archive has become the primary research source. There are three subcategories in the Neuroscience group, following the example of the Journal of Neuroscience: * 'neuro-cel' for Molecular/Cellular Neuroscience * 'neuro-sys' for Behavioral/Systems Neuroscience * 'neuro-dev' for Developmental Neuroscience A computational or theoretical category was initially avoided in order to keep theoretical, modeling and analysis studies mixed in with the experimental work. To reach the front pages for the archives connect to http://xxx.lanl.gov/archive/neuro-cel or http://xxx.lanl.gov/archive/neuro-sys or http://xxx.lanl.gov/archive/neuro-dev Papers may be uploaded in a variety of formats (TeX and LaTeX are preferred; PostScript that is generated as a print file by Microsoft Word is acceptable), which are then automatically converted to various file types for downloading. Instructions for both submission and retrieval are available from the web site. In particular, see http://xxx.lanl.gov/faq/. --------- The neuroscience archives were initiated as a subcategory of the existing e-print archives at LANL during the Analysis of Neural Data Workgroup (see http://sdphln.ucsd.edu/research/neurodata/) at the Marine Biological Laboratories, sponsored by the National Institute of Mental Health.
The e-print archives at the Los Alamos National Laboratory are supported by the U.S. National Science Foundation. ---------- From bersini at ulb.ac.be Mon Sep 22 11:20:30 1997 From: bersini at ulb.ac.be (Hugues Bersini) Date: Mon, 22 Sep 1997 17:20:30 +0200 Subject: The FAMIMO LOMOLOCO Workshop: 25th March 1998 Message-ID: <34268CBB.BAD6AD8A@ulb.ac.be> Following the IFAC workshop on Intelligent Components for Vehicles (ICV'98): One-day Workshop on LOMOLOCO (Local Modelling Local Control) approaches in Non-Linear Control in the context of the FAMIMO ESPRIT Project. In recent years, we have seen an increasing interest in divide-and-conquer approaches in control. The basic idea is simple, and widely used in many scientific communities dealing with neural networks, fuzzy systems, non-linear control, statistical approaches, etc. A variety of ways have been proposed to approximate and control unknown or partially known non-linear systems by some form of combination (smooth or crisp) of simple and local models (low-order polynomials, frequently constant or linear) which only act in some local region of the state space. In process control, this comes very close to the classical strategy known as "gain scheduling", which the influence of fuzzy control has revived in a growing number of applications. Neural network practitioners have proposed (in place of global approximators) "lazy" or "local" types of model and controller, in which learning takes place only on the basis of the data present in the restricted region occupied by the local agent. One basic and obvious advantage of this approach is that it allows the use of theoretical results and analytical tools from the fields of linear statistical analysis, linear control, linear regression, etc., in a local, iterative and distributed way. The main issues to be addressed during the workshop will be: - the discovery of the local models, i.e.
the modelling part - how to derive local control laws from the local models, i.e. how to adapt the different control strategies (regulation, tracking, predictive and optimal control) to this local and distributed approach - how to combine the local models and local controllers: smooth or crisp, and switching strategies in general - how to extend linear stability analysis to the global combination of the local controllers - lazy modelling and lazy control FAMIMO (Fuzzy Algorithms for Multi-Input Multi-Output processes) is the second ESPRIT European initiative (following FALCON) aiming at designing fuzzy systems for the reliable control of high-dimensional complex processes. LOMOLOCO approaches are among the strategies most analysed and tested by FAMIMO partners, who will present their work on this subject at the workshop. These partners are: The Lund Institute of Technology: Prof. Astrom and Prof. Arzen The Delft University of Technology: Prof. Verbruggen and Prof. Babuska The IRIDIA laboratory of the Universite Libre de Bruxelles: Prof. Bersini SIEMENS Automotive SA: Serge Boverie The AICIA Institute of Seville: Prof. Ollero. We also expect the presence (to be confirmed) of Prof. Tanaka, Prof. Johanssen, Prof. Palm and some others to present their latest work on LOMOLOCO approaches in control. We would like to encourage European researchers to attend this workshop, which will take place the day after the Seville workshop, and to present work that they believe to be related to its main theme. We therefore invite people to submit a full copy of their work (6 pages maximum) before the 8th of December to the address of Hugues Bersini, indicated below. All organisational aspects of the workshop are the same as for the Seville workshop (http://www.esi.us.es/ISA/icv98/). For any information and submission: Hugues Bersini IRIDIA CP 194/6 ULB 50, av.
Franklin Roosevelt 1050 Bruxelles tel: +32 2 650 27 33 fax: +32 2 650 27 15 email: bersini at ulb.ac.be See also: http://iridia.ulb.ac.be/FAMIMO/Workshop.html From Simon.N.CUMMING at British-Airways.com Tue Sep 23 09:20:00 1997 From: Simon.N.CUMMING at British-Airways.com (Simon.N.CUMMING@British-Airways.com) Date: 23 Sep 1997 13:20:00 Z Subject: ANNOUNCEMENT: NCAF CONFERENCE, JAN 98 Message-ID: <"BSC400A1 970923131957580261*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/s=CUMMING/g=SIMON/i=N/"@MHS> NEURAL COMPUTING APPLICATIONS FORUM (NCAF) ------------------------------------------ The purpose of NCAF is to promote widespread exploitation of neural computing technology by: - providing a focus for neural network practitioners. - disseminating information on all aspects of neural computing. - encouraging close co-operation between industrialists and academics. NCAF holds four two-day conferences per year in the UK, with speakers from commercial and industrial organisations and universities. The focus of the talks is on practical issues in the application of neural network technology and related methods to solving real-world problems. NEXT MEETING - 21-22 JANUARY 1998, MALVERN, UK. ---------------------------------------------- The January meeting will be sponsored by DERA (Defence Evaluation and Research Agency, see http://www.dera.gov.uk/dera.htm) It will be a joint meeting with the annual DERA Artificial Intelligence Conference held at DERA Malvern on Wednesday 21st and Thursday 22nd January 1998. Please note that this will be a one-and-a-half day meeting starting at lunchtime on the 21st. The one and a half days will be packed with applications-oriented papers as usual. There will also be adequate time for networking with other practitioners during coffee, lunch and the evening event. PROVISIONAL PROGRAMME --------------------- to include...
Multi-Sensor Arrays - Mahesan Niranjan (Cambridge University)
Application of Perceptron Neural Networks to Tool State Classification in a Metal Turning Process - Dimla E Dimla, Jnr (University of Wolverhampton)
Classification of Molten Steel Quality using Pattern Recognition Techniques - Mark Brookes (British Steel)
Neurocomputational Models of Auditory Perception and Speech Recognition - Sue McCabe (Plymouth University)
Transformation-Invariant Feature Recognition by Neural Self-Organisation - Chris Webber (DERA)
A Homogeneity Analysis Approach to Nonlinear Principal Components Analysis - Andrew Webb (DERA)
NCAF Annual General Meeting
SOCIAL PROGRAMME: ---------------- Murder-Mystery and dinner at the Abbey Hotel, Great Malvern on the evening of the 21st. plus, The Puzzle Corner is a mystery too with Graham 'Rottweiler' Hesketh (Rolls-Royce) REGISTRATION: ------------ Please note that due to MoD security restrictions it is absolutely essential that you register early for this meeting. Payment may follow later if necessary but it is vital that your name is known in advance. To register please contact NCAF by email at ncafsec at brunel.ac.uk or phone Sally Francis on (+44)(0)1784 477271 or fax 472879 From emil at uivt.cas.cz Tue Sep 23 10:27:43 1997 From: emil at uivt.cas.cz (Emil Pelikan) Date: Tue, 23 Sep 1997 16:27:43 +0200 Subject: PASE'97 workshop Message-ID: <01BCC83D.9F3B11A0@pc_pelikan.uivt.cas.cz> 6th International Workshop on Parallel Applications in Statistics and Economics PASE '97 Computers in Finance and Economics Marianske Lazne, Czech Republic November 9 - 12, 1997 -------------------------------------------------------------------------------------------------- PURPOSE OF THE WORKSHOP: The purpose of this workshop is to bring together researchers interested in innovative information processing systems and their applications in the areas of statistics, finance and economics.
The focus will be on in-depth presentations of state-of-the-art methods and applications as well as on communicating current research topics. This workshop is intended for industrial and academic persons seeking new ways to work with computers in finance and economics. Topics include:
Statistical Methods for Data Analysis and Forecasting
Neural Networks in Finance and Econometrics
Data Quality and Data Integrity
Data Integration and Data Warehousing
Risk Management Applications
Real Time Decision Support Systems
Banking and Financial Information on the Internet
The presentations at the workshop will be made available to a broader audience by publishing the results in a special issue of Neural Network World - International Journal on Neural and Mass-Parallel Computing and Information Systems. ------------------------------------------------------------------------------------------------- WORKSHOP SITE: The workshop will be held in the Czech Republic in Marianske Lazne (Marienbad), one of the world's most beautiful and most visited spa towns. ------------------------------------------------------------------------------------------------- IN-DEPTH PRESENTATIONS
P. Cianchi, G. Congiu, L. Landi, A. Piattoli, Universita di Firenze and Quasar S.p.a. Firenze: Financial Model Definition and Execution in a Real Time System Fully Integrated with the Market
G. Darbellay and M. Slama, Institute of Computer Science, Prague: How Non-linear is your Time Series? - A New Method and a Case Study
R. Dave, G. Stahl*, G. Ballocchi, M.C. Lundin, R.B. Olsen, Olsen & Associates, Zurich and *German Banking Supervisory Office, Berlin: Volatility Conditional Correlations Between Financial Markets: An Empirical Study with Impact on Risk Management Strategies
G. Deboeck, World Bank: Financial Applications of Self-Organizing Maps: Investment Maps for Emerging Markets
M. Mehta, Citibank, Bombay and Saudi American Bank, Riyadh: Neural Network Directed Trading in Financial Markets Using High Frequency Data
M. Miksa, Bank Sal Oppenheim, Frankfurt: SONNET: Sal Oppenheim Neural Trader
Th. Poddig, Department for Finance, University of Bremen: Developing Forecasting Models for Integrated Financial Markets using Artificial Neural Networks
B. Seifert, Oxford Forecasting, Oxford: The 'Divide-and-Rule' Algorithm for Optimizing Functions of Many Variables
R. Schnidrig*, D. Würtz, M. Hanf, *Finance Online GmbH and Swiss Center for Scientific Computing, ETH Zürich: Realtime Computer Simulation of a Foreign Exchange Trading Room
G. Stahl, German Banking Supervisory Office, Berlin: Backtesting Using a Generalization of the Traffic-Light-Approach
J.G. Taylor, Department of Mathematics, University of London: Perception by Neural Networks
---------------------------------------------------------------------------------------------------- SOCIAL EVENTS In keeping with the tradition of the PASE workshop, a sightseeing program as well as other social events will be organized. --------------------------------------------------------------------------------------------------- POSTERS AND DEMONSTRATIONS For soft- or hardware demonstrations please contact the organizers. There is also a limited possibility to present post-deadline contributions. For further information please get in contact with Martin Hanf or Helga Labermeier. --------------------------------------------------------------------------------------------------- WORKSHOP FEE University Sfr 330.- (after Oct. 15th Sfr 380.-) Profit-making company Sfr 690.- (after Oct. 15th Sfr 740.-) The fee includes the "Welcome Party" on Sunday evening, lunches on Monday, Tuesday and Wednesday, the "Workshop Dinner" on Tuesday evening, the special issue of the Neural Network World journal and the proceedings.
----------------------------------------------------------------------------------------------------- CONTACT ADDRESSES
Zurich: Martin Hanf or Helga Labermeier, SCSC, CLU B1, ETH Zentrum, CH-8092 Zurich, Switzerland. E-Mail: pase at scsc.ethz.ch FAX: +41.1.632.1104
Prague: Emil Pelikan, ICS AS CR, Pod vodarenskou vezi 2, 182 07 Prague 8, Czech Republic. E-Mail: emil at uivt.cas.cz FAX: +4202 8585789
For accommodation and local arrangements ask: Milena Zeithemlova (Action M agency), Vrsovicka 68, 101 00 Prague 10, Czech Republic. Phone: +42.02.6731 2334 FAX: +42.02.6731 0503 E-Mail: actionm at cuni.cz Latest information about PASE will be available from http://www.uivt.cas.cz/PASE97 From espaa at soc.plym.ac.uk Wed Sep 24 06:58:41 1997 From: espaa at soc.plym.ac.uk (espaa) Date: Wed, 24 Sep 1997 10:58:41 GMT Subject: Pattern Analysis and Applications Journal Message-ID: <68E887170F3@scfs3.soc.plym.ac.uk> Springer Verlag Ltd is launching a new journal - Pattern Analysis and Applications (PAA) - in Spring 1998. Aims and Scope of PAA: The journal publishes high quality articles in areas of fundamental research in pattern analysis and applications. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. The main aim of the journal is to publish high quality research in intelligent pattern analysis in computer science and engineering. In addition, the journal will also publish articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control.
The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition and uncertainty management in applied pattern recognition are particularly solicited. The journal encourages the submission of original case studies on applied pattern recognition which describe important research in the area. The journal also solicits reviews on novel pattern analysis benchmarks, evaluation of pattern analysis tools, and important research activities at international centres of excellence working in pattern analysis. Audience: Researchers in computer science and engineering; research and development personnel in industry; researchers in application areas where pattern analysis is used; and researchers working on novel pattern recognition and analysis techniques and their specific applications. Full information about the journal and detailed instructions for the Call for Papers can be found at the PAA web site: http://www.soc.plym.ac.uk/soc/sameer/paa.htm Best regards Barbara Davies School of Computing University of Plymouth From hsd20 at newton.cam.ac.uk Wed Sep 24 05:33:55 1997 From: hsd20 at newton.cam.ac.uk (Heather S.
Dawson) Date: Wed, 24 Sep 1997 10:33:55 +0100 (BST) Subject: Statistical Analysis of DNA and Protein Sequences Message-ID: A Newton Institute Workshop Statistical Analysis of DNA and Protein Sequences 20 - 24 October 1997 Organisers: D Haussler (UCSC), R Durbin (Sanger) and C M Bishop (Aston) With the Human Genome Project and other model organism genome sequencing projects now in full swing, there is a growing need for statistical analysis of databases containing DNA, RNA and protein sequences. Problems that need to be addressed include finding genes and other important functional elements in DNA sequences, modelling and classifying these elements, and modelling and classifying the RNA and protein sequences that are derived from them. Neural networks and hidden Markov models are two types of models that have been proposed to satisfy this need. The intent of this workshop is to explore the strengths and weaknesses of these and related techniques for the analysis of DNA and protein sequences, from both a mathematical and an empirical viewpoint. Provisional list of speakers includes:
Stephen Altschul (NCBI)
Phil Green (Seattle)
Pierre Baldi (Caltech)
David Haussler (UCSC)
Philipp Bucher (Lausanne)
Liisa Holm (EBI)
Soren Brunak (Denmark)
Anders Krogh (Denmark)
Bill Bruno (Los Alamos)
Alan Lapedes (Santa Fe Inst)
Chris Burge (Stanford)
Chip Lawrence (Albany, NY)
Cyrus Chothia (Cambridge)
Jun Liu (Stanford)
Richard Durbin (Sanger Centre)
Graeme Mitchison (Cambridge)
Sean Eddy (St Louis)
Victor Solovyev (Sanger Centre)
Mikhail Gelfand (Moscow)
Gary Stormo (Colorado)
Nick Goldman (Cambridge)
This workshop will form a component of the Newton Institute programme on Neural Networks and Machine Learning, organised by C M Bishop, D Haussler, G E Hinton, M Niranjan and L G Valiant. Further information about the programme is available via the WWW at http://www.newton.cam.ac.uk/programs/nnm.html The workshop will commence at 4:30 p.m.
on Monday 20 October and end at lunchtime on Friday 24 October. Location and Costs: The workshop will be held in the Isaac Newton Institute. Since space at the Institute is limited, participants are strongly encouraged to register early. There will be a registration fee of 60 UK pounds which will include the Conference Dinner on Thursday 23 October as well as morning coffee and afternoon tea throughout the week. Note that accommodation can be difficult to find in Cambridge at this time of year. A limited number of rooms have been reserved and will be offered to registrants on a first-come first-served basis. Registration forms are available from the workshop Web Page at http://www.newton.cam.ac.uk/programs/nnmdna.html Completed registration forms should be sent to Heather Dawson at the Newton Institute, or by e-mail to h.dawson at newton.cam.ac.uk The deadline for registration and housing is 26 September 1997 From allan at ohnishi.nuie.nagoya-u.ac.jp Thu Sep 25 01:10:08 1997 From: allan at ohnishi.nuie.nagoya-u.ac.jp (Allan Kardec Barros) Date: Thu, 25 Sep 1997 14:10:08 +0900 Subject: home-page on ICA Message-ID: <3429F22F.ACEB0976@ohnishi.nuie.nagoya-u.ac.jp> Dear connectionists, Recently there has been a boom of papers dealing with a new technique called "independent component analysis" (ICA) for blind separation/deconvolution of signals. ICA is an elegant and powerful solution to the problem of source separation, with a broad range of applications (biomedical, communications, speech processing, etc.). To help people interested in ICA, many people have created home pages on the topic. I volunteered to make available on the net a "bibliography" where one can find "everything" about ICA. Since April 18, 1997 there have been 892 accesses. Of course, it is still far from complete, even though I have made every effort to keep it updated. You are kindly invited to visit, or to send me your publication on the topic.
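The ICA literature collected on the page offers many algorithms; purely as a toy illustration of the blind-separation idea (not any specific published method, and with entirely hypothetical signals), two mixed channels can be separated by whitening them and then searching for the rotation that maximizes non-Gaussianity, measured here by kurtosis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two independent non-Gaussian sources: a square-ish wave and uniform noise.
s1 = np.sign(np.sin(np.linspace(0, 40, n)))
s2 = rng.uniform(-1, 1, n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.5, 1.0]])  # unknown mixing matrix
X = A @ S                                # observed mixtures

# Whiten the mixtures: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = np.diag(d ** -0.5) @ E.T @ X

def kurt(y):
    # Excess kurtosis; zero for Gaussian data, nonzero for the sources.
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

def rot(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

# For two whitened channels, separation reduces to a 1-D search over
# rotation angles for the most non-Gaussian projections.
best = max(np.linspace(0, np.pi / 2, 180),
           key=lambda t: sum(abs(kurt(r)) for r in rot(t) @ Z))
Y = rot(best) @ Z  # recovered sources, up to order, sign and scale
```

Real ICA algorithms (gradient- or fixed-point-based) scale this idea to many channels; the brute-force angle search is only feasible here because the toy problem is two-dimensional.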
I have also added links to people who are researching ICA. On their pages, you can find demos, algorithms and other information. The page is: http://www.ohnishi.nuie.nagoya-u.ac.jp/~allan/ICA -- Allan Kardec Barros Ohnishi Lab., Dep. of Info. Eng. School of Engineering, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01 JAPAN From takane at takane2.psych.mcgill.ca Thu Sep 25 08:55:17 1997 From: takane at takane2.psych.mcgill.ca (Yoshio Takane) Date: Thu, 25 Sep 1997 08:55:17 -0400 (EDT) Subject: No subject Message-ID: <199709251255.IAA01398@takane2.psych.mcgill.ca> *****CALL FOR PAPERS***** BEHAVIORMETRIKA, an English journal published in Japan to promote the development and dissemination of quantitative methodology for analysis of human behavior, is planning to publish a special issue on ANALYSIS OF KNOWLEDGE REPRESENTATIONS IN NEURAL NETWORK (NN) MODELS, broadly construed. I have been asked to serve as the guest editor for the special issue and would like to invite all potential contributors to submit high quality articles for possible publication in the issue. In statistics, information extracted from the data is stored in estimates of model parameters. In regression analysis, for example, information in observed predictor variables useful in prediction is summarized in estimates of regression coefficients. Due to the linearity of the regression model, interpretation of the estimated coefficients is relatively straightforward. In NN models, knowledge acquired from training samples is represented by the weights indicating the strength of connections between neurons. However, due to the nonlinear nature of the model, interpretation of these weights is extremely difficult, if not impossible. Consequently, NN models have largely been treated as black boxes. This special issue is intended to break new ground by bringing together various attempts to understand internal representations of knowledge in NN models.
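The regression-versus-network contrast drawn above can be made concrete with a small hedged sketch (all data and numbers hypothetical): when data are generated by a known linear rule, ordinary least squares recovers coefficients that read directly as effect sizes, whereas a trained network would spread the same knowledge across entangled hidden-layer weights.

```python
import numpy as np

# Hypothetical toy data generated by a known rule: y = 3*x1 - 2*x2 + noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares with an intercept column: the fitted coefficients
# summarize the predictive information and map directly onto the rule.
design = np.c_[X, np.ones(len(X))]
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
# beta[0] is close to 3 and beta[1] close to -2: each number has a clear
# meaning. In an NN model the same knowledge would be distributed across
# nonlinear hidden units, with no single weight admitting such a reading.
```

This asymmetry is exactly what the special issue targets: tools for extracting coefficient-like interpretations from trained network weights.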
Papers are invited on network analysis including: * Methods of analyzing basic mechanisms of NN models * Examples of successful network analysis * Comparison among different network architectures in their knowledge representation (e.g., BP vs Cascade Correlation) * Comparison with statistical approaches * Visualization of high dimensional functions * Regularization methods to improve the quality of knowledge representation * Model selection in NN models * Assessment of stability and generalizability of knowledge in NN models * Effects of network topology, data encoding scheme, algorithm, environmental bias, etc. on network performance * Implementing prior knowledge in NN models SUBMISSION: Deadline for submission: January 31, 1998 Deadline for the first round reviews: April 30, 1998 Deadline for submission of the final version: August 31, 1998 Number of copies of a manuscript to be submitted: four Format: no longer than 10,000 words; APA style ADDRESS FOR SUBMISSION: Professor Yoshio Takane Department of Psychology McGill University 1205 Dr. Penfield Avenue Montreal QC H3A 1B1 CANADA email: takane at takane2.psych.mcgill.ca tel: 514 398 6125 fax: 514 398 4896 From bert at mbfys.kun.nl Thu Sep 25 03:26:30 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Thu, 25 Sep 1997 09:26:30 +0200 Subject: Job openings neural networks and music Message-ID: <199709250726.JAA03891@vitellius.mbfys.kun.nl> Quantization of Temporal Patterns by Neural Networks A number of positions will become available for a research project on the quantization of musical time using neural networks. Quantization is the process of separating the categorical, discrete time components (the durations as notated in the musical score) from the continuous deviations present in a musical performance.
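As a toy illustration of the quantization idea just defined (deliberately far simpler than the connectionist models the project targets, and with hypothetical onset times), one can snap performed onsets to the nearest point of an assumed metrical grid and keep the residuals as the expressive timing deviations:

```python
def quantize(onsets, grid=0.25):
    """Split performed onset times (in beats) into score positions on a
    metrical grid plus continuous timing deviations."""
    score = [round(t / grid) * grid for t in onsets]       # categorical part
    deviations = [t - q for t, q in zip(onsets, score)]    # continuous part
    return score, deviations

performed = [0.02, 0.51, 0.98, 1.24, 1.52]  # hypothetical performance data
score, dev = quantize(performed)
# score -> [0.0, 0.5, 1.0, 1.25, 1.5]
```

The research problem is precisely that a fixed grid like this fails under tempo changes and ambiguous rhythms, which is what the adaptive, learning-based methods in the project are meant to handle.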
In addition to its fundamental aspects (connectionist models of categorical rhythm perception and their empirical validation), the project has an important practical focus and aims at developing a robust component for automatic music transcription systems. The research will be realized jointly at the Nijmegen Institute for Cognition and Information (NICI) and at the Laboratory of Medical Physics and Biophysics (SNN), University of Nijmegen. The project is funded by the Technology Foundation (STW). The following positions are vacant: Research Assistant (vacancy number 118) The task of the research assistant (OIO) will be to design adaptive methods for learning rhythm perception. Possible methods that will be considered are neural networks, hidden Markov models and probabilistic graphical models. The candidate needs an excellent background in physics, mathematics or computer science. This work will lead to a PhD degree in Physics. Additional expertise in either music or psychology is an advantage. Appointment will be full-time for four years or 8/10 for 5 years. Gross salary will be NLG 2135 per month in the first year, increasing to NLG 3812 in the fourth year, based on full-time employment. This researcher will become a member of the neural network research group at the Laboratory of Medical Physics and Biophysics and will be employed by STW. This group consists of 8 researchers and PhD students and conducts theoretical and applied research on neural networks. Postdoctoral Researcher (vacancy number 21.7.97) The postdoc will investigate an existing connectionist model for quantization, will design and evaluate the prototype system, and will supervise the implementation of its components. Extensions of the model to handle tempo-tracking, polyphony and possibly beat induction are foreseen. Furthermore, the theoretical results obtained in the post-graduate project need to be integrated and put to practical use.
We look for a cognitive scientist with experience in both experimental methods and in computational modeling, preferably using Lisp. Experience with quantization and relaxation networks is an advantage. Appointment will be full-time for three years, with a possible extension. Depending on experience, the salary will be between NLG 3882 and NLG 8201 gross per month. This researcher will become member of the Music, Mind, Machine project team at the NICI. Music Technologist/Programmer (vacancy number 21.8.97) This technical assistant will be responsible for setting up the hard- and software and conducting the recording of performance data, the construction of user-interfaces, interfaces to existing music notation software, and the implementation of an Internet version of the prototype. We look for an engineer with experience in MIDI and related music technologies, Apple Macintosh, World-Wide Web and Internet, and/or Lisp programming. Appointment will probably be half-time for three years, with a possible extension. The salary, depending on experience, will be between NLG 3694 and NLG 5603 gross per month. This researcher will become member of the Music, Mind, Machine project team at the NICI. More information Details about the context of the research can be found at http://www.nici.kun.nl/mmm and http://www.mbfys.kun.nl/SNN. Employment will begin in early 1998. Nijmegen University intends to employ a proportionate number of women and men in all positions in the faculty. Women are therefore urgently invited to apply. Applications (three copies) should include a curriculum vitae and a statement of the candidate's professional interests and goals, and one copy of recent work (e.g., thesis, computer program, article). Applications for the OIO position (Vacancy 118) should be sent before October 25 to the Personnel Department of the Faculty of Natural Sciences, University of Nijmegen, Toernooiveld 1, 6525 ED Nijmegen (Vacancy 118). 
Applications for the post-doctoral position and music technologist (Vacancies 21.7.97 and 21.8.97) should be sent before November 15 to the Department of Personnel & Organization, Faculty of Social Sciences, University Nijmegen, P.O.Box 9104, 6500 HE Nijmegen, The Netherlands. Please mark envelope and letter with the vacancy number. From judithr at wccf.mit.edu Thu Sep 25 06:44:39 1997 From: judithr at wccf.mit.edu (Judith) Date: Thu, 25 Sep 1997 11:44:39 +0100 Subject: position available/MIT Message-ID: MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF BRAIN & COGNITIVE SCIENCES The MIT Department of Brain and Cognitive Sciences anticipates making a tenure-track appointment in computational brain and cognitive science at the Assistant Professor level. Candidates should have a strong mathematical background and an active research interest in the mathematical modeling of specific biophysical, neural or cognitive phenomena. Individuals whose research focuses on learning and memory at the level of neurons and networks of neurons are especially encouraged to apply. Responsibilities include graduate and undergraduate teaching and research supervision. Applications should include a brief cover letter stating the candidate's research and teaching interests, a vita, three letters of recommendation and representative reprints. Send applications by December 21, 1997 to: Chair, Faculty Search Committee/Computational Neuroscience Department of Brain & Cognitive Sciences, E25-406 MIT 77 Massachusetts Avenue Cambridge, MA 02139-4307 MIT is an Affirmative Action/Equal Opportunity Employer. Qualified women and minority candidates are encouraged to apply. From mel at quake.usc.edu Thu Sep 25 18:51:33 1997 From: mel at quake.usc.edu (Bartlett Mel) Date: Thu, 25 Sep 1997 15:51:33 -0700 Subject: NIPS*97 Travel Grants Message-ID: <342AEAF5.1E4D@quake.usc.edu> Neural Information Processing Systems Conference NIPS*97 ****** Deadline Extended to Oct. 
1 ******** Limited funds will be available to support the travel of young investigators, post-doctoral fellows, and graduate students to NIPS. Awards will be based on both merit and need. The amount of aid will generally not exceed $400 for domestic participants or $600 for international attendees. Conference registration is not covered by travel awards. To apply, send (1) a one-page resume, and (2) a one-page (max) justification. In particular, state whether the applicant is an author/presenter on a paper submitted or accepted to NIPS*97, and/or whether the applicant is an official participant in a NIPS workshop. Include a return email address for notification. Send applications to (or use US mail address below): mel at quake.usc.edu Applications must arrive by Oct. 1 to be considered. Notification will be sent by email in late October. ------- Bartlett Mel NIPS*97 Treasurer -- Bartlett W. Mel (213)740-0334, -3397(lab) Assistant Professor of Biomedical Engineering (213)740-0343 fax University of Southern California, OHE 500 mel at quake.usc.edu US Mail: BME Department, MC 1451, USC, Los Angeles, CA 90089 Fedex: 3650 McClintock Ave, 500 Olin Hall, LA, CA 90089 From tony at salk.edu Tue Sep 23 09:08:51 1997 From: tony at salk.edu (Tony Bell) Date: Tue, 23 Sep 1997 06:08:51 -0700 Subject: NIPS*97 Program Message-ID: <3427BF63.717@salk.edu> --------------------------------------------------------------------- The following is a complete list of papers accepted to NIPS*97, along with the preliminary schedule. Michael Kearns NIPS*97 Program Chair ***** TUESDAY, DECEMBER 2: MORNING ORAL SESSION I ***** DNA^2 DNA Computation: A Potential Killer Application? 
Richard Lipton, Princeton University and Bellcore Research (Invited Talk) Incorporating Contextual Information in White Blood Cell Identification Xubo Song and Joseph Sill, California Institute of Technology Harvey Kasdan, International Remote Imaging Systems (Oral Presentation) Extended ICA Removes Artifacts from Electroencephalographic Recordings Tzyy-Ping Jung, Colin Humphries, and Te-Won Lee, Salk Institute Scott Makeig, Naval Health Research Center Martin J. McKeown, Salk Institute Vicente Iragui, UC San Diego Terrence J. Sejnowski, Salk Institute (Oral Presentation) A Solution for Missing Data in Recurrent Neural Networks with an Application to Blood Glucose Prediction Volker Tresp and Thomas Briegel, Siemens (Poster Spotlight) Reinforcement Learning for Call Admission Control and Routing in Integrated Service Networks Peter Marbach, Massachusetts Institute of Technology Oliver Mihatsch, Siemens Miriam Schulte, Technische Universitat Munchen John N. Tsitsiklis, Massachusetts Institute of Technology (Poster Spotlight) Intrusion Detection with Neural Networks Jake Ryan, Risto Miikkulainen, and Meng-Jang Lin, University of Texas at Austin (Poster Spotlight) Structure Driven Image Database Retrieval Jeremy S. De Bonet and Paul Viola, Massachusetts Institute of Technology (Poster Spotlight) Analog VLSI Model of Intersegmental Coordination with Nearest-neighbor Coupling Girish N. Patel, Jeremy H. Holleman and Stephen P. DeWeerth, Georgia Institute of Technology (Poster Spotlight) ***** TUESDAY, DECEMBER 2: MORNING ORAL SESSION II ***** A Framework for Multiple-Instance Learning Oded Maron and Tomas Lozano-Perez, Massachusetts Institute of Technology (Oral Presentation) Hierarchical Non-linear Factor Analysis and Topographic Maps Zoubin Ghahramani and Geoffrey E. Hinton, University of Toronto (Oral Presentation) Learning Continuous Attractors in Recurrent Networks H. S. 
Seung, Bell Labs Lucent Technologies (Oral Presentation) Classification by Pairwise Coupling Trevor Hastie, Stanford University Robert Tibshirani, University of Toronto (Poster Spotlight) Agnostic Clustering of Markovian Sequences Ran El-Yaniv, Shai Fine and Naftali Tishby, Hebrew University (Poster Spotlight) EM Algorithms for PCA and SPCA Sam Roweis, California Institute of Technology (Poster Spotlight) Training Methods for Adaptive Boosting of Neural Networks for Character Recognition Holger Schwenk and Yoshua Bengio, Universite de Montreal (Poster Spotlight) Learning to Order Things William W. Cohen, Robert E. Schapire and Yoram Singer, AT&T Labs (Poster Spotlight) On Efficient Heuristic Ranking of Hypotheses Steve Chien, Andre Stechert and Darren Mutz, Jet Propulsion Laboratory (Poster Spotlight) Estimating Dependency Structure as a Hidden Variable Marina Meila and Michael I. Jordan, Massachusetts Institute of Technology (Poster Spotlight) ***** TUESDAY, DECEMBER 2: AFTERNOON ORAL SESSION ***** Learning in Rational Agents Stuart Russell, UC Berkeley (Invited Talk) Visual Navigation in a Robot Using Zig-zag Behavior M. Anthony Lewis, University of Illinois (Oral Presentation) Nonparametric Model-based Reinforcement Learning Christopher G. Atkeson, Georgia Institute of Technology (Oral Presentation) Reinforcement Learning with Hierarchies of Machines Ron Parr and Stuart Russell, UC Berkeley (Oral Presentation) On the Infeasibility of Training Neural Networks with Small Squared Error Van H. Vu, Yale University (Poster Spotlight) Data Dependent Structural Risk Minimization for Perceptron Decision Trees John Shawe-Taylor and Nello Cristianini, University of London (Poster Spotlight) Generalization in Decision Trees and DNF: Does Size Matter? Mostefa Golea and Peter L. Bartlett, Australian National University Wee Sun Lee, Australian Defence Force Academy (Poster Spotlight) The Asymptotic Convergence Rate of Q-learning Cs. 
Szepesvari, Jozsef Attila University (Poster Spotlight) An Improved Policy Iteration Algorithm for Partially Observable MDPs Eric A. Hansen, University of Massachusetts (Poster Spotlight) ***** TUESDAY, DECEMBER 2: EVENING POSTER SESSION ***** (Note: contributed papers presented during Tuesday's oral sessions will also have posters Tuesday evening.) Gradients for Retinotectal Mapping Geoffrey J. Goodhill, Georgetown University Incremental Learning with Sample Queries Joel Ratsaby, Nu Age Products Analytical Study of the Interplay Between Architecture and Predictability Avner Priel, Ido Kanter and D.A. Kessler, Bar Ilan University A 1,000-Neuron System with One Million 7-bit Physical Interconnections Yuzo Hirai, University of Tsukuba Finite Sample Bounds for Non-linear Time Series Prediction Ron Meir, Technion Graph Matching with Hierarchical Discrete Relaxation Richard C. Wilson and Edwin R. Hancock, University of York Recovering Perspective Pose with a Dual Step EM Algorithm A.D.J. Cross and E.R. Hancock, University of York Bidirectional Retrieval from Associative Memory Friedrich T. Sommer and G. Palm, University of Ulm Synchronized Auditory and Cognitive 40 Hz Attentional Streams, and the Impact of Rhythmic Expectation on Auditory Scene Analysis Bill Baird, UC Berkeley Ensemble and Modular Approaches for Face Detection: A Comparison Raphael Feraud and Olivier Bernier, France Telecom Hybrid Reinforcement Learning and its Application to Biped Robot Control Satoshi Yamada, Akira Watanabe and Michio Nakashima, Mitsubishi S-Map: A Network with a Simple Self-organization Algorithm for Generative Topographic Mapping Kimmo Kiviluoto and Erkki Oja, Helsinki University of Technology New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit Aapo Hyvarinen, Helsinki University of Technology Combining Classifiers Using Correspondence Analysis Christopher J. 
Merz, UC Irvine A Neural Network Model of Naive Preference and Filial Imprinting in the Domestic Chick Lucy E. Hadden, UC San Diego MELONET I: Neural Nets for Inventing Baroque-style Chorale Variations Dominik Hornel, Universitat Fridericiana Karlsruhe (TH) Automatic Aircraft Recovery via Reinforcement Learning: Initial Experiments Jeffrey F. Monaco and David G. Ward, Barron Associates, Inc. Andrew G. Barto, University of Massachusetts Unsupervised On-line Learning of Decision Trees for Hierarchical Data Analysis Marcus Held and Joachim M. Buhmann, Rheinische Friedrich-Wilhelms-Universitat An Annealed Self-organizing Map for Source Channel Coding Matthias Burger, Thore Graepel and Klaus Obermayer, Technical University of Berlin Bach in a Box - Real-time Harmony Randall R. Spangler, Rodney M. Goodman, and Jim Hawkins, California Institute of Technology Function Approximation with the Sweeping Hinge Algorithm Don R. Hush and Fernando Lozano, University of New Mexico Bill Horne, MakeWaves, Inc. 
Incorporating Test Inputs into Learning Zehra Cataltepe and Malik Magdon-Ismail, California Institute of Technology Just One View: Invariances in Inferotemporal Cell Tuning Maximilian Riesenhuber and Tomaso Poggio, Massachusetts Institute of Technology Weight Space Structure and the Storage Capacity of a Fully-connected Committee Machine Yuansheng Xiong, Pohang Institute of Science and Technology Chulan Kwon, Myong Ji University Jong-Hoon Oh, Bell Labs Lucent Technologies Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting Paul Rodriguez, UC San Diego Janet Wiles, University of Queensland Multi-modular Associative Memory Nir Levy, David Horn and Eytan Ruppin, Tel-Aviv University On Parallel Versus Serial Processing: A Computational Study of Visual Search Eyal Cohen and Eytan Ruppin, Tel-Aviv University Self-similarity Properties of Natural Images Antonio Turiel, German Mato, and Nestor Parga, Universidad Autonoma de Madrid Jean-Pierre Nadal, Ecole Normale Superieure Experiences with Bayesian Learning in a Real World Application Peter Sykacek and Georg Dorffner, Austrian Research Institute for Artificial Intelligence Peter Rappelsberger, University of Vienna Josef Zeitlhofer, AKH Vienna An Analog VLSI Neural Network for Phase-based Computer Vision Bertram E. Shi and Kwok Fai Hui, Hong Kong University of Science and Technology An Application of Reversible-jump MCMC to Multivariate Spherical Gaussian Mixtures Alan D. Marrs, Defence Evaluation & Research Agency Regularisation in Sequential Learning Algorithms Joao FG de Freitas, Mahesan Niranjan and Andrew H. Gee, Cambridge University A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure Daniel H. Lange, Hava T. Siegelmann, Hillel Pratt and Gideon F. 
Inbar, Israel Institute of Technology Selecting Weighting Factors in Logarithmic Opinion Pools Tom Heskes, University of Nijmegen Shared Context Probabilistic Transducers Yoshua Bengio, Universite de Montreal Samy Bengio, CIRANO Jean-Francois Isabelle, NOVASYS Yoram Singer, AT&T Labs Perturbative M-sequences for Auditory Systems Identification Mark Kvale and Christoph E. Schreiner, UC San Francisco Instabilities in Eye Movement Control: A Model of Periodic Alternating Nystagmus Ernst R. Dow and Thomas J. Anastasio, University of Illinois The Observer-observation Dilemma in Neuro-forecasting Hans Georg Zimmermann and Ralph Neuneier, Siemens Enhancing Q-learning for Optimal Asset Allocation Ralph Neuneier, Siemens An Analog VLSI Model of the Fly Elementary Motion Detector Reid R. Harrison and Christof Koch, California Institute of Technology Synaptic Transmission: An Information-theoretic Perspective Amit Manwani and Christof Koch, California Institute of Technology Phase Transitions and Perceptual Organization of Video Sequences Yair Weiss, Massachusetts Institute of Technology Adaptive Choice of Grid and Time in Reinforcement Learning Stephan Pareigis, Christian-Albrechts-Universitat Kiel The Rectified Gaussian Distribution N.D. Socci, D.D. Lee and H.S. Seung, Bell Labs Lucent Technologies Two Approaches to Optimal Annealing Todd K. Leen, Oregon Graduate Institute Bernhard Schottky and David Saad, Aston University On the Separation of Signals from Neighboring Cells in Tetrode Recordings Maneesh Sahani, John S. Pezaris and Richard A. 
Andersen, California Institute of Technology Detection of First and Second Order Motion Alexander Grunewald, California Institute of Technology Heiko Neumann, Universitat Ulm Silicon Retina with Adaptive Filtering Properties Shih-Chii Liu, California Institute of Technology ***** WEDNESDAY, DECEMBER 3: MORNING ORAL SESSION I ***** Relative Loss Bounds, the Minimum Relative Entropy Principle and EM Manfred Warmuth, UC Santa Cruz (Invited Talk) Saddle Point and Hamiltonian Structure in Excitatory-inhibitory Networks H.S. Seung and T.J. Richardson, Bell Labs Lucent Technologies J.C. Lagarias, AT&T Labs J.J. Hopfield, Princeton University (Oral Presentation) From Regularization Operators to Support Vector Kernels Alexander J. Smola, GMD Bernhard Scholkopf, Max Planck Institute (Oral Presentation) Globally Optimal On-line Learning Rules Magnus Rattray and David Saad, Aston University (Poster Spotlight) Asymptotics for Regularization Petri Koistinen, University of Helsinki (Poster Spotlight) Prior Knowledge in Support Vector Kernels Bernhard Scholkopf, Max Planck Institute Patrice Simard and Vladimir Vapnik, AT&T Labs Alexander J. Smola, GMD (Poster Spotlight) Optimization of the Drift for Nonequilibrium Diffusion Networks Paul Mineiro, Javier Movellan, and R.J. Williams, UC San Diego (Poster Spotlight) A Revolution: Belief Propagation in Graphs With Cycles Brendan Frey, University of Toronto David J. C. MacKay, Cambridge University (Poster Spotlight) ***** WEDNESDAY, DECEMBER 3: MORNING ORAL SESSION II ***** Using Expectation to Guide Processing: A Study of Three Real-world Applications Shumeet Baluja, Carnegie Mellon University (Oral Presentation) Bayesian Robustification for Audio Visual Fusion in Non-stationary Environments Javier Movellan and Paul Mineiro, UC San Diego (Oral Presentation) Spectrotemporal Receptive Fields for Neurons in the Primary Auditory Cortex of the Awake Primate R.C. deCharms and M.M. 
Merzenich, UC San Francisco (Oral Presentation) Active Data Clustering Thomas Hofmann, Massachusetts Institute of Technology Joachim M. Buhmann, Universitat Bonn (Poster Spotlight) Learning Nonlinear Overcomplete Representations for Efficient Coding Michael S. Lewicki and Terrence J. Sejnowski, Salk Institute (Poster Spotlight) A Non-parametric Multi-scale Statistical Model for Natural Images Jeremy S. De Bonet and Paul Viola, Massachusetts Institute of Technology (Poster Spotlight) Modeling Acoustic Correlations by Factor Analysis Lawrence Saul and Mazin Rahim, AT&T Labs (Poster Spotlight) Blind Separation of Radio Signals in Fading Channels Kari Torkkola, Motorola (Poster Spotlight) Features as Sufficient Statistics D. Geiger, A. Rudra and L. Maloney, New York University (Poster Spotlight) Bayesian Model of Surface Perception William T. Freeman, Mitsubishi Electric Research Lab Paul A. Viola, Massachusetts Institute of Technology (Poster Spotlight) 2D Observers for Human 3D Object Recognition? Zili Liu, NEC Research Institute Daniel Kersten, University of Minnesota (Poster Spotlight) ***** WEDNESDAY, DECEMBER 3: AFTERNOON ORAL SESSION ***** Computing with Action Potentials John Hopfield, Princeton University (Invited Talk) Coding of Naturalistic Stimuli by Auditory Midbrain Neurons H. Attias and C.E. Schreiner, UC San Francisco (Oral Presentation) Refractoriness and Neural Precision Michael J. Berry II and Markus Meister, Harvard University (Oral Presentation) A Mathematical Model of Axon Guidance by Diffusible Factors Geoffrey J. Goodhill, Georgetown University (Oral Presentation) Neural Basis of Object-centered Representations Sophie Deneve and Alexandre Pouget, Georgetown University (Poster Spotlight) A Model of Early Visual Processing Laurent Itti, Jochen Braun, Dale K. 
Lee and Christof Koch, California Institute of Technology (Poster Spotlight) Effects of Spike Timing Underlying Binocular Integration and Rivalry in a Neural Model of Early Visual Cortex Erik D. Lumer, Universite Libre de Bruxelles (Poster Spotlight) Statistical Models of Conditioning Peter Dayan and Theresa Long, Massachusetts Institute of Technology (Poster Spotlight) ***** WEDNESDAY, DECEMBER 3: EVENING POSTER SESSION ***** (Note: contributed papers presented during Wednesday's oral sessions will also have posters Wednesday evening.) Serial Order in Reading Aloud: Connectionist Models and Neighborhood Structure Jeanne C. Milostan and Garrison W. Cottrell, UC San Diego The Error Coding Method and PaCT's Gareth James and Trevor Hastie, Stanford University Linear Concepts and Hidden Variables: an Empirical Study Adam J. Grove, NEC Research Institute, Dan Roth, Weizmann Institute of Science A Hippocampal Model of Recognition Memory Randall C. O'Reilly, Massachusetts Institute of Technology Kenneth A. Norman, Harvard University James L. McClelland, Carnegie Mellon University The Efficiency and the Robustness of Natural Gradient Descent Learning Rule Howard Hua Yang and Shun-ichi Amari, RIKEN Derive Serial Updating Rule for Blind Separation from the Method of Scoring Howard Hua Yang, RIKEN Unconscious Inference and the Up-propagation Algorithm Jong-Hoon Oh and H. Sebastian Seung, Bell Labs Lucent Technologies Nonlinear Markov Networks for Continuous Variables Reimar Hofmann and Volker Tresp, Siemens Modelling Seasonality and Trends in Daily Rainfall Data Peter M. Williams, University of Sussex Task and Spatial Frequency Effects on Face Specialization Matthew N. Dailey and Garrison W. Cottrell, UC San Diego A Simple and Fast Neural Network Approach to Stereovision Rolf D. 
Henkel, University of Bremen Wavelet Models for Video Time Series Sheng Ma and Chuanyi Ji, Rensselaer Polytechnic Institute Multiple Threshold Neural Logic Vasken Bohossian and Jehoshua Bruck, California Institute of Technology Reinforcement Learning for Continuous Stochastic Control Problems Remi Munos, CEMAGREF Paul Bourgine, Ecole Polytechnique Independent Component Analysis for Identification of Artifacts in Magnetoencephalographic Recordings Ricardo Vigario, Veikko Jousmaki, Matti Hamalainen, Riitta Hari and Erkki Oja, Helsinki University of Technology Hybrid NN/HMM-based Speech Recognition with a Discriminant Neural Feature Extraction Daniel Willett and Gerhard Rigoll, Gerhard Mercator University Use of a Multi-layer Perceptron to Predict Malignancy in Ovarian Tumors Herman Verrelst, Yves Moreau, and Joos Vandewalle, Katholieke Universiteit Leuven Dirk Timmerman, University Hospitals Leuven Correlates of Attention in a Model of Dynamic Visual Recognition Rajesh P.N. Rao, University of Rochester Approximating Posterior Distributions in Belief Networks Using Mixtures Christopher M. Bishop and Neil Lawrence, Aston University Tommi Jaakkola and Michael I. Jordan, Massachusetts Institute of Technology Regression with Input-dependent Noise: a Gaussian Process Treatment Paul W. Goldberg, Christopher K.I. Williams and Christopher M. Bishop, Aston University Online Learning from Finite Training Sets in Non-linear Networks Peter Sollich, University of Edinburgh David Barber, Aston University Radial Basis Functions: a Bayesian Treatment David Barber and Bernhard Schottky, Aston University Computing with Stochastic Dynamic Synapses Wolfgang Maass, Technische Universitaet Graz Anthony M. Zador, Salk Institute Learning to Schedule Straight-line Code J. Eliot B. Moss, Paul E. 
Utgoff, John Cavazos, Doina Precup and Darko Stefanovic, University of Massachusetts Carla Brodley and David Scheeff, Purdue University, Multi-time Models for Temporally Abstract Planning Doina Precup and Richard S. Sutton, University of Massachusetts Inferring Sparse, Overcomplete Image Codes Using an Efficient Coding Framework Michael S. Lewicki, Salk Institute Bruno A. Olshausen, UC Davis Mapping a Manifold Joshua B. Tenenbaum, Massachusetts Institute of Technology Monotonic Networks Joseph Sill, California Institute of Technology The Canonical Distortion Measure in Feature Space and 1-NN Classification Jonathan Baxter and Peter Bartlett, Australian National University Relative Loss Bounds for Multidimensional Regression Problems Jyrki Kivinen, University of Helsinki Manfred K. Warmuth, UC Santa Cruz A General Purpose Image Processing Chip: Orientation Detection Ralph Etienne-Cummings and Donghui Cai, Southern Illinois University Receptive Field Formation in Natural Scene Environments: Comparison of Kurtosis, Skewness, and the Quadratic form of BCM Brian Blais, N. Intrator, H. Shouval and Leon N. Cooper, Brown University How to Dynamically Merge Markov Decision Processes Satinder Singh, University of Colorado David Cohn, Harlequin, Inc. Comparison of Human and Machine Word Recognition M. Schenkel, C. Latimer and M. Jabri, University of Sydney Analysis of Drifting Dynamics with Neural Network Hidden Markov Models J. Kohlmorgen, K.-R. Muller, GMD K. Pawelzik, MPI f. Stromungsforschung, A Neural Network Based Head Tracking System D.D. Lee and H.S. Seung, Bell Labs Lucent Technologies Multiresolution Tangent Distance for Affine-invariant Classification Nuno Vasconcelos and Andrew Lippman, Massachusetts Institute of Technology Modeling a Complex Cell in an Awake Macaque During Natural Image Viewing William E. Vinje and Jack L. 
Gallant, UC Berkeley Local Dimensionality Reduction Stefan Schaal, Georgia Institute of Technology Sethu Vijayakumar, Tokyo Institute of Technology Christopher G. Atkeson, Georgia Institute of Technology Using Helmholtz Machines to Analyze Multi-channel Neuronal Recordings Virginia R. de Sa, R. Christopher deCharms and Michael M. Merzenich, UC San Francisco RCC Cannot Compute Certain FSA, Even with Arbitrary Transfer Functions Mark Ring, GMD Competitive On-line Linear Regression V. Vovk, University of London Generalized Prioritized Sweeping David Andre, Nir Friedman and Ronald Parr, UC Berkeley Toward a Single-cell Account of Binocular Disparity Tuning: An Energy Model May be Hiding in Your Dendrites Bartlett W. Mel, University of Southern California Daniel L. Ruderman, The Salk Institute Kevin A. Archie, University of Southern California Hippocampal Model of Rat Spatial Abilities Using Temporal Difference Learning David J. Foster and Richard G.M. Morris, Edinburgh University Peter Dayan, Massachusetts Institute of Technology Boltzmann Machine Learning Using Mean Field Theory and Linear Response Correction H.J. Kappen, University of Nijmegen ***** THURSDAY, DECEMBER 4: MORNING ORAL SESSION I ***** Odor Coding by the Olfactory System: Distributed Processing in Biological and Artificial Systems John S. Kauer, Tufts University (Invited Talk) Adaptation in Speech Motor Control John F. Houde, UC San Francisco Michael I. Jordan, Massachusetts Institute of Technology (Oral Presentation) A Superadditive-impairment Theory of Optic Aphasia Michael C. Mozer and Mark Sitton, University of Colorado Martha Farah, University of Pennsylvania (Oral Presentation) Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report Thomas K. Landauer and Darrell Laham, University of Colorado at Boulder Peter Foltz, New Mexico State University (Oral Presentation) ***** THURSDAY, DECEMBER 4: MORNING ORAL SESSION II ***** Local Independent Component Analysis Juan K. 
Lin, University of Chicago (Oral Presentation) Stacked Density Estimation Padhraic Smyth, UC Irvine David Wolpert, IBM Almaden Research (Oral Presentation) Ensemble Learning for Multi-layer Networks David Barber and Christopher M. Bishop, Aston University (Oral Presentation) ***** END OF CONFERENCE --- ADJOURN TO WORKSHOPS ***** From aho at nada.kth.se Fri Sep 26 11:35:25 1997 From: aho at nada.kth.se (Anders Holst) Date: Fri, 26 Sep 1997 17:35:25 +0200 (MET DST) Subject: PhD thesis available Message-ID: <199709261535.RAA15645@aho.nada.kth.se> The following PhD thesis is available at: ftp://ftp.nada.kth.se/SANS/reports/ps/aho-thesis.ps.gz http://www.nada.kth.se/~aho/thesis.html -------------------------------------------------------------------- THE USE OF A BAYESIAN NEURAL NETWORK MODEL FOR CLASSIFICATION TASKS Anders Holst Studies of Artificial Neural Systems Dept. of Numerical Analysis and Computing Science Royal Institute of Technology, S-100 44 Stockholm, Sweden Abstract This thesis deals with a Bayesian neural network model. The focus is on how to use the model for automatic classification, i.e. on how to train the neural network to classify objects from some domain, given a database of labeled examples from the domain. The original Bayesian neural network is a one-layer network implementing a naive Bayesian classifier. It is based on the assumption that different attributes of the objects appear independently of each other. This work has been aimed at extending the original Bayesian neural network model, mainly focusing on three different aspects. First the model is extended to a multi-layer network, to relax the independence requirement. This is done by introducing a hidden layer of complex columns, groups of units which take input from the same set of input attributes. Two different types of complex column structures in the hidden layer are studied and compared. 
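The one-layer naive Bayesian classifier the abstract starts from can be sketched in a few lines. The following Python fragment is purely illustrative and not taken from the thesis (function names and toy data are hypothetical): the class log-priors play the role of biases and the log conditional probabilities the role of weights, which are simply summed thanks to the attribute-independence assumption.

```python
import math
from collections import defaultdict

def train_naive_bayes(examples, alpha=1.0):
    """examples: list of (attribute_tuple, label); alpha: Laplace smoothing."""
    class_counts = defaultdict(int)
    attr_counts = defaultdict(int)   # (label, position, value) -> count
    values = defaultdict(set)        # position -> observed attribute values
    for attrs, label in examples:
        class_counts[label] += 1
        for i, v in enumerate(attrs):
            attr_counts[(label, i, v)] += 1
            values[i].add(v)
    total = sum(class_counts.values())
    return class_counts, attr_counts, values, total, alpha

def classify(model, attrs):
    class_counts, attr_counts, values, total, alpha = model
    best, best_score = None, -math.inf
    for label, n in class_counts.items():
        # "bias": log prior; "weights": log conditionals, summed because the
        # attributes are assumed independent given the class
        score = math.log(n / total)
        for i, v in enumerate(attrs):
            num = attr_counts[(label, i, v)] + alpha
            den = n + alpha * len(values[i])
            score += math.log(num / den)
        if score > best_score:
            best, best_score = label, score
    return best

data = [(("red", "round"), "apple"), (("yellow", "long"), "banana"),
        (("red", "round"), "apple"), (("yellow", "round"), "apple")]
model = train_naive_bayes(data)
print(classify(model, ("red", "round")))   # -> apple
```

The complex columns described above relax exactly the independence assumption encoded in the inner sum, by grouping dependent attributes into joint units.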
An information-theoretic measure is used to decide which input attributes to consider together in complex columns. Ideas from Bayesian statistics are also used, as a means to estimate from data the probabilities required to set up the weights and biases in the neural network. The use of uncertain evidence and of continuous-valued attributes in the Bayesian neural network is also treated. Both require the network to handle graded inputs, i.e. probability distributions over some discrete attributes given as input. Continuous-valued attributes can then be handled by using mixture models. In effect, each mixture model converts a set of continuous-valued inputs into a discrete set of probabilities over the component densities in the mixture model. Finally, a query-reply system based on the Bayesian neural network is described. It constitutes a kind of expert system shell on top of the network. Rather than requiring all attributes to be given at once, the system can ask for the attributes relevant to the classification. Information theory is used to select the attributes to ask for. The system also offers an explanatory mechanism, which can give simple explanations of the state of the network, in terms of which inputs mean the most for the outputs. These extensions to the Bayesian neural network model are evaluated on a set of different databases, both realistic and synthetic, and the classification results are compared to those of various other classification methods on the same databases. The conclusion is that the Bayesian neural network model compares favorably to other methods for classification. In this work much inspiration has been taken from various branches of machine learning. The goal has been to combine the different ideas into one consistent and useful neural network model. 
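The information-theoretic attribute selection described for the query-reply system can be illustrated with a standard information-gain computation; this Python sketch is hypothetical (the names and toy data are illustrative, and the thesis may use a different measure): the system asks next for the attribute whose values most reduce the expected entropy over class labels.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr_index):
    """Expected entropy reduction from observing one attribute.
    examples: list of (attribute_tuple, label)."""
    labels = [y for _, y in examples]
    by_value = defaultdict(list)
    for attrs, y in examples:
        by_value[attrs[attr_index]].append(y)
    remainder = sum(len(ys) / len(examples) * entropy(ys)
                    for ys in by_value.values())
    return entropy(labels) - remainder

def next_question(examples, asked):
    """Index of the most informative attribute not yet asked about."""
    n_attrs = len(examples[0][0])
    candidates = [i for i in range(n_attrs) if i not in asked]
    return max(candidates, key=lambda i: information_gain(examples, i))

data = [(("red", "round"), "apple"), (("yellow", "long"), "banana"),
        (("green", "round"), "apple"), (("yellow", "round"), "banana")]
print(next_question(data, asked=set()))   # -> 0 (color separates the classes)
```

In this toy data, color determines the class exactly (gain 1 bit) while shape does not, so the system would ask for color first.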
A main theme throughout is to utilize independencies between attributes, to decrease the number of free parameters, and thus to increase the generalization capability of the method. Significant contributions are the method used to combine the outputs from mixture models over different subspaces of the domain, and the use of Bayesian estimation of parameters in the expectation maximization method during training of the mixture models. Keywords: Artificial neural network, Bayesian neural network, Machine learning, Classification task, Dependency structure, Mixture model, Query-reply system, Explanatory mechanism. From icsc at compusmart.ab.ca Fri Sep 26 12:25:04 1997 From: icsc at compusmart.ab.ca (ICSC Canada) Date: Fri, 26 Sep 1997 10:25:04 -0600 Subject: NC'98 / CFP Message-ID: <3.0.1.32.19970926102504.006beb6c@mail.compusmart.ab.ca> ANNOUNCEMENT / CALL FOR PAPERS International ICSC/IFAC Symposium on NEURAL COMPUTATION / NC'98 To be held at the Technical University of Vienna September 23 - 25, 1998 http://www.compusmart.ab.ca/icsc/nc98.htm SPONSORS - IFAC International Federation of Automatic Control - IEEE Institute of Electrical & Electronics Engineers Section Austria - OeCG Oesterreichische Computer Gesellschaft - OeVE Oesterreichischer Verband fuer Elektrotechnik - Siemens AG, Vienna, Austria - TU University of Technology, Vienna - ICSC International Computer Science Conventions, Canada/Switzerland ************************************************* SYMPOSIUM ORGANIZATION - HONORARY CHAIR Prof. Tomaso Poggio Co-Director Center for Biological and Computational Learning Massachusetts Institute of Technology, E25-201 Cambridge, MA 02139 / USA Email: tp-temp at ai.mit.edu Web Site: http://www.ai.mit.edu/people/poggio/ - SYMPOSIUM CHAIR Dr. 
Michael Heiss ECANSE Siemens AG Austria Gudrunstrasse 11 A-1100 Vienna / Austria Phone: +43-1-1707-47149 Fax: +43-1-1707-56256 Email: michael.heiss at siemens.at m.heiss at ieee.org Web Site: http://www.siemens.at/~ecanse/heiss.html - SYMPOSIUM ORGANIZER ICSC International Computer Science Conventions P.O. Box 279 Millet, AB T0C 1Z0 / Canada Phone: +1-403-387-3546 Fax: +1-403-387-4329 Email: icsc at compusmart.ab.ca Web Site: http://www.compusmart.ab.ca/icsc - PROGRAM COMMITTEE Peter G. Anderson, USA Shai Ben-David, Israel Horst Bischof, Austria Hans H. Bothe, Germany Martin Brown, U.K. Frans M. Coetzee, USA Juan Lopez Coronado, Spain Georg Dorffner, Austria K. Fukushima, Japan Wulfram Gerstner, Switzerland Stan Gielen, Netherlands C. Lee Giles, USA C.J. Harris, U.K. J. Herault, France Kurt Hornik, Austria Nikola Kasabov, New Zealand Bart Kosko, USA Fa-Long Luo, Germany Wolfgang Maass, Austria Giuseppe Martinelli, Italy Fazel Naghdy, Australia Sankar K. Pal, India Y.H. Pao, USA Martin Pottmann, USA Raul Rojas, Germany Tariq Samad, USA V. David Sanchez A., USA Robert Sanner, USA Bernd Schuermann, Germany J.S. Shawe-Taylor, U.K. Peter Sincak, Slovakia George D. Smith, U.K. Nigel Steele, U.K. Piotr Szczepaniak, Poland Csaba Szepesvari, Hungary Henning Tolle, Germany S. Usui, Japan J. Vandewalle, Belgium A. Weinmann, Austria T. Yamakawa, Japan The symposium is organized under the honorary patronage of Prof. Alexander Weinmann, University of Technology, Vienna. ************************************************* NATIONAL ORGANIZING COMMITTEE M. Budil - T. Gruenberger - B. Knapp - M. Kuehrer - W. Reinisch - E. Thurner - C. Stroh ************************************************* PURPOSE OF THE CONFERENCE NC'98 will bring together scientists and practitioners in the field of Artificial Neural Networks. 
The conference will concentrate on the computational aspects of Neural Networks in the widest sense, from convergence theory, numerical aspects and hybrid systems through to neural software and hardware. In addition, the conference will provide a venue for the discussion of commercial and other practical applications of the technology. It is hoped that participants will have a common interest in, and a fascination for, self-learning systems in nature, their theoretical modeling and interpretation, and their computational implementation. ************************************************* TOPICS Contributions are sought in areas based on the list below, which is indicative only. Contributions from new application areas will be particularly welcome. Neural Network Theory - Mathematical, computational background - Mathematical theories of networks in dynamical systems - Neural network architectures and algorithms - Neural models for cognitive science and brain functions - Convergence - Numerical aspects - Statistical properties - Artificial associative memories - Self-organization - Hybrid systems - Neuro-Fuzzy - Knowledge extraction from neural networks - Genetic algorithms - Optimization - Radial basis function networks - CMAC - Pre-filtering and pre-selection of input data - Chaos-theoretical methods for data evaluation - Vector quantization Tools and Hardware Implementation - Rapid prototyping tools - Graphical programming tools - Simulation and analysis tools - Heuristics for neural network design - Neuro-computers - Electronic and optic implementations - Neuro-chips - Spiking neurons - Parallel processing Applications - Pattern recognition - Signal processing - Neural filters - Speech recognition - Robotics - Control - System identification - Time series prediction - Sales forecasting - Electrical power load forecasting - X-ray image analysis ************************************************* SCIENTIFIC PROGRAM At present, the following plenary talks have been 
scheduled:
- 'Learning Sparse Representations', Prof. Tomaso Poggio, Honorary Chairman of NC'98, Massachusetts Institute of Technology, Cambridge MA, USA
- 'Spiking Neurons', Prof. Wolfgang Maass, Technical University Graz, Austria
- 'Models of Visual Awareness in a Multi-Module Neural Network', Prof. Igor Aleksander, Imperial College of Science, Technology and Medicine, London, U.K.

NC'98 will include other invited plenary talks, contributed sessions, invited sessions, poster sessions, workshops and demonstrations. Various projects are under consideration.

*************************************************

WORKSHOPS, TUTORIALS AND OTHER CONTRIBUTIONS

Proposals should be submitted as soon as possible to the Symposium Chairman or the Symposium Organizer. Deadline for proposals: January 31, 1998. Tutorials and workshops will in part be organized prior to the conference, on September 21-22, 1998.

*************************************************

EXHIBITION

An exhibition may be arranged at the conference site, displaying products, services and literature related to the conference topics. Interested exhibitors are requested to contact the Symposium Organizers for further information. Conference participants may display any written information related to the conference topic at the information desk (papers, calls for papers, product information). For non-participants we offer a handling charge of US$ 100 for each 5 kilograms.

*************************************************

SUBMISSION OF PAPERS

Prospective authors are requested to send an abstract of 1000 - 2000 words for review by the International Program Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work.
The abstract should also include:
- Title of conference (NC'98)
- Type of paper (regular, demonstration, poster, tutorial or invited)
- Title of proposed paper
- Authors' names, affiliations, addresses
- Name of author to contact for correspondence
- E-mail address and fax # of contact author
- Topics which best describe the paper (max. 5 keywords)

Contributions are welcome from those working in industry and having experience in the topics of this conference, as well as from academics. The conference language is English. Tutorial papers and demonstrations are also encouraged. It is recommended to submit abstracts by electronic mail to icsc at compusmart.ab.ca or else by fax or mail (2 copies) to the following address:

ICSC Canada
P.O. Box 279
Millet, Alberta T0C 1Z0
Canada
Email: icsc at compusmart.ab.ca
Fax: +1-403-387-4329

Submissions for poster presentations (minimum 4 pages) are accepted until June 30, 1998. These papers will neither be reviewed nor included in the proceedings, so later amendments are still possible.

*************************************************

DEMONSTRATION SESSIONS

Instead of submitting a paper, proposals for a 20-minute demonstration of a practical application can also be submitted. A regular 220V power outlet and a VHS video recorder will be available in the demonstration room. The proposal should follow the above abstract submission guidelines. Only the abstract of the demonstration will be printed in the conference proceedings.

*************************************************

POSTER SESSIONS

Poster presentations are encouraged for people who wish to receive peer feedback on research which is not yet ready for publication. Practical examples of applied research are particularly welcome. Poster sessions will allow the presentation and discussion of the respective papers.
Papers for poster presentations should contain at least 4 pages and be submitted to the symposium organizers by June 30, 1998 to be included in the conference program. These papers will neither be reviewed nor included in the proceedings, so later amendments are still possible.

*************************************************

INVITED SESSIONS

Prospective organizers of invited sessions are requested to send a session proposal (consisting of 4-5 invited paper abstracts, the recommended session chair and co-chair, as well as a short statement describing the title and the purpose of the session) to the Symposium Chairman or the Symposium Organizer. All invited sessions should start with a tutorial paper.

*************************************************

SIEMENS BEST PRESENTATION AWARD

The best presentation of each session will be honored with a best presentation award.

*************************************************

PUBLICATIONS

Conference proceedings (including all accepted papers) will be published by ICSC Academic Press and will be available for the delegates at the symposium in printed form or on CD-ROM. Authors of a selected number of innovative papers will be invited to submit extended manuscripts for publication in a special issue of the Elsevier journal Neurocomputing.

*************************************************

IMPORTANT DATES
- Submission of abstracts: January 31, 1998
- Notification of acceptance: March 31, 1998
- Delivery of full papers: May 31, 1998
- Submission of poster presentations: June 30, 1998
- Tutorials and workshops: September 21-22, 1998
- NC'98 Symposium: September 23-25, 1998

*************************************************

ACCOMMODATION

Accommodation at reasonable rates will be available at nearby hotels. Full details will follow with the letters of acceptance.
*************************************************

SOCIAL AND TOURIST ACTIVITIES

A social program, including a reception and a "Heurigen Dinner", will be organized and will also be available for accompanying persons. Discover why Vienna is one of the most favored conference cities worldwide. The month of September is the best time for a visit to Vienna. Visit the Stephansdom, relax at the Prater where you can take a trip on the Riesenrad, eat the world's best pastries (e.g. the original Sacher-Torte) in one of the famous coffee houses, listen to an opera at the Vienna State Opera, or visit one of the many museums, such as the famous Klimt collection in the Belvedere Castle. Post-conference tours to the Wachau, Salzburg or Prague may be organized. Please mail your preferences to the symposium organizer. Further tourist information is available from:
http://www.aaf.or.at/tom/Wien/
http://info.wien.at/e/index.htm
http://info.wien.at/e/wienkart.htm
http://austria-info.at/
http://www.magwien.gv.at/gismap/cgi-bin/wwwgis/adrsuche/

*************************************************

FURTHER INFORMATION

Full updated information is available from http://www.compusmart.ab.ca/icsc/nc98.htm or contact:
- ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada; E-mail: icsc at compusmart.ab.ca; Fax: +1-403-387-4329; Phone: +1-403-387-3546
- Dr. Michael Heiss, Symposium Chair NC'98, PSE NLT2 ECANSE, Siemens AG Austria, Gudrunstrasse 11, A-1100 Vienna, Austria; Email: michael.heiss at siemens.at or m.heiss at ieee.org; Fax: +43-1-1707-56256; Phone: +43-1-1707-47149

From j.cheng at ulst.ac.uk Fri Sep 26 17:29:13 1997 From: j.cheng at ulst.ac.uk (Jie Cheng) Date: Fri, 26 Sep 1997 21:29:13 -0000 Subject: Software Announcement Message-ID: <199709262131.AA21998@iserve1.infj.ulst.ac.uk>

ANNOUNCEMENT
=========================================================================
A belief network learning system is now available for download. It includes a wizard-like interface and a construction engine.
Name: Belief Network Power Constructor
Version: 1.0 Beta 1
Platforms: 32-bit Windows systems (Windows 95/NT)
Input: A data set with discrete values in the fields (attributes) and optional domain knowledge (attribute ordering, partial ordering, direct causes and effects).
Output: A network structure of the data set.

Main Features:
1. Easy to use. It gathers the necessary input information through 5 simple steps.
2. Accessible. Supports most of the popular desktop database and spreadsheet formats, including MS Access, dBase, FoxPro, Paradox, Excel and text file formats. It also supports remote database servers like ORACLE and SQL Server through ODBC.
3. Reusable. The engine is an ActiveX DLL, so you can easily integrate it into your own belief network, data mining or knowledge base system for Windows 95/NT.
4. Efficient. The engine constructs belief networks by using conditional independence (CI) tests. In general, it requires CI tests of complexity O(N^4); when the attribute ordering is known, the complexity is O(N^2), where N is the number of attributes (fields).
5. Reliable. A modified mutual information calculation method is used as the CI test to make it more reliable when the data set is not large enough.
6. Supports domain knowledge. Complete ordering, partial ordering, and causes and effects can be used to constrain the search space and therefore speed up the construction process.
7. Running time is linear in the number of records.

The system can be downloaded from the web site: http://193.61.148.131/jcheng/bnpc.htm

Suggestions and comments are welcome.
----------------------------------------------------
Jie Cheng
email: j.cheng at ulst.ac.uk
16J24, Faculty of Informatics, UUJ, UK.
BT37 0QB
Tel: 44 1232 366500
Fax: 44 1232 366068
http://193.61.148.131/jcheng/
----------------------------------------------------

From georg at ai.univie.ac.at Mon Sep 29 15:06:56 1997 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Mon, 29 Sep 1997 21:06:56 +0200 Subject: CFP: NN and Adaptive Systems Message-ID: <342FFC50.59E2B600@ai.univie.ac.at>

CALL FOR PAPERS for the symposium
======================================================
Artificial Neural Networks and Adaptive Systems
======================================================
chairs: Horst-Michael Gross, Germany, and Georg Dorffner, Austria

as part of the Fourteenth European Meeting on Cybernetics and Systems Research, April 14-17, 1998, University of Vienna, Vienna, Austria

For this symposium, papers on any theoretical or practical aspect of artificial neural networks are invited. Special focus, however, will be put on the issue of adaptivity, both in practical engineering applications and in applications of neural networks to the modeling of human behavior. By adaptivity we mean the capability of a neural network to adjust itself to changing environments. For this, a careful distinction is made between "learning" - devising weight matrices for a neural network before it is applied (and usually left unchanged) - on the one hand, and "true" adaptivity of a given neural network to constantly changing conditions - i.e. real-time learning in nonstationary environments - on the other. The following is a - by no means exhaustive - list of possible topics in this realm:
- online learning of neural network applications facing changing data distributions
- transfer of neural network solutions to related but different domains
- application of neural networks for adaptive autonomous systems
- "phylogenetic" vs. "ontogenetic" adaptivity (e.g. adaptivity of connectivity and architecture vs. adaptivity of coupling parameters or weights)
- short term vs.
long term adaptation
- adaptive reinforcement learning
- adaptive pattern recognition
- localized vs. distributed approximation (in terms of overlap of decision regions) and adaptivity

Preference will be given to contributions that address such issues of adaptivity, but - as mentioned initially - other original work on neural networks is also welcome.

Deadline for submissions (10 single-spaced A4 pages, maximum 43 lines, max. line length 160 mm, 12 point) is
===============================================
October 26, 1997
===============================================
(Note that this deadline has been extended w.r.t. the original EMCSR deadline.)

Papers should be sent to:
I. Ghobrial-Willmann or G. Helscher
Austrian Society for Cybernetic Studies
A-1010 Vienna 1, Schottengasse 3 (Austria)
Phone: +43-1-53532810
Fax: +43-1-5320652
E-mail: sec at ai.univie.ac.at

For more information on the whole EMCSR conference, see the Web page http://www.ai.univie.ac.at/emcsr/ or contact the above address. Hope to see you in Vienna!

From drl at eng.cam.ac.uk Mon Sep 29 15:30:48 1997 From: drl at eng.cam.ac.uk (drl@eng.cam.ac.uk) Date: Mon, 29 Sep 1997 20:30:48 +0100 (BST) Subject: Quality Assurance in Maternity Care project Message-ID: <199709291930.28462@opal.eng.cam.ac.uk.eng.cam.ac.uk>

(Apologies for cross-posting) The Quality Assurance in Maternity Care (QAMC) project is a 3-year investigation into neural network and other methods for predicting obstetrical risk. The project is funded by the European Union BIOMED program and the data processing centre is Cambridge, England. The QAMC project has made use of 771571 cases from the Scottish Morbidity Record and its findings should be of interest to researchers in the clinical, statistical, connectionist and data-mining communities. Much effort has gone into developing new methods for feature selection in large databases of discrete-valued information.
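One standard baseline for this kind of task is to rank discrete attributes by their mutual information with the outcome and keep the top-scoring ones. The sketch below is purely illustrative (the QAMC project's own methods may differ; the function names and toy data are invented for this example):

```python
# Hedged sketch: mutual-information ranking of discrete features against a
# discrete outcome, one common approach to feature selection in
# discrete-valued databases. Not the QAMC project's code.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in nats for two paired sequences of discrete values."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with probabilities as counts/n
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

# Toy data: feature A is perfectly predictive of the outcome, B is not.
outcome = [0, 0, 0, 1, 1, 1]
feat_a  = [0, 0, 0, 1, 1, 1]
feat_b  = [0, 1, 0, 1, 0, 1]

scores = {"A": mutual_information(feat_a, outcome),
          "B": mutual_information(feat_b, outcome)}
# A selector would keep the top-k features by score.
print(sorted(scores, key=scores.get, reverse=True))  # ['A', 'B']
```

On large databases the counts would come from a single pass over the records, so the ranking cost stays linear in the number of cases.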
This research and all of the project's other publications are available via the project Web page: http://svr-www.eng.cam.ac.uk/projects/qamc Furthermore, the Web page provides interactive access to estimated and observed rates of incidence of a particular adverse pregnancy outcome: failure to progress in labour. The project is actively seeking feedback and we would welcome your comments (which can be submitted and read via the above web page). We hope you find this information of benefit.
--
David R. Lovell (drl at eng.cam.ac.uk)
Research Associate, Depts of Engineering and Obstetrics & Gynaecology
Q.A.M.C. - Quality Assurance in Maternity Care
University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK.
Tel: +44 1223 332 754
http://svr-www.eng.cam.ac.uk/~drl

From S.Singh-1 at plymouth.ac.uk Mon Sep 29 19:23:37 1997 From: S.Singh-1 at plymouth.ac.uk (Sameer Singh) Date: Mon, 29 Sep 1997 19:23:37 BST Subject: PhD studentship Message-ID: <30C31E731BA@cs_fs15.csd.plym.ac.uk>

University of Plymouth, UK
School of Computing
PhD Research Studentship
Salary: See below

Applications are now invited for a PhD studentship in the School of Computing in the area of unstructured information processing and extraction using intelligent techniques such as neural networks. The research project will be carried out in collaboration with Ranco Controls Ltd., Plymouth, a world-leading manufacturer of control equipment. The project will also collaborate with the School of Electronic, Communication and Electrical Engineering. You should have a background in computer science or engineering with a good honours degree, and preferably with a Masters qualification. The project requires good knowledge in areas including information systems, artificial intelligence and C/C++. The studentship covers the tuition fee and a maintenance of Pounds 5510 per year. Application forms and further details are available from the School Office on +44-1752-232 541.
Further information and informal enquiries on the project should be directed to Dr. Sameer Singh, School of Computing, University of Plymouth, UK (tel: +44-1752-232 612, fax: +44-1752-232 540, e-mail: s1singh at plym.ac.uk). Closing date: 17 October, 1997. Promoting equal opportunities. A Leading Centre for Teaching and Research.

From payman at fermi.jpl.nasa.gov Mon Sep 29 14:43:30 1997 From: payman at fermi.jpl.nasa.gov (Payman Arabshahi) Date: Mon, 29 Sep 1997 11:43:30 -0700 Subject: Paper available: Adaptive fuzzy min-max estimation Message-ID: <199709291843.LAA01486@fermi.jpl.nasa.gov>

The following paper is now available online via: http://dsp.jpl.nasa.gov/~payman (under "Publications") or via anonymous ftp: ftp://dsp.jpl.nasa.gov/pub/payman/tcas9701.ps.gz (564842 bytes gzip compressed or 2306003 bytes uncompressed)
---
Payman Arabshahi
Jet Propulsion Laboratory
4800 Oak Grove Drive
MS 238-343
Pasadena, CA 91109
Tel: (818) 393-6054
Fax: (818) 393-1717
Email: payman at jpl.nasa.gov
--------------------------------------------------------------------------
TITLE: Pointer adaptation and pruning of min-max fuzzy inference and estimation.
AUTHORS: Arabshahi, P., Marks, R. J., Oh, S., Caudell, T. P., and Choi, J. J.
SOURCE: IEEE Transactions on Circuits and Systems II - Analog and Digital Signal Processing. Vol. 44, no. 9, Sept. 1997, p. 696-709.
ABSTRACT: A new technique for adaptation of fuzzy membership functions in a fuzzy inference system is proposed. The pointer technique relies upon the isolation of the specific membership functions that contributed to the final decision, followed by the updating of these functions' parameters using steepest descent. The error measure used is thus backpropagated from output to input, through the min and max operators used during the inference stage. This occurs because the operations of min and max are continuous differentiable functions and, therefore, can be placed in a chain of partial derivatives for steepest descent backpropagation adaptation. Interestingly, the partials of min and max act as "pointers", with the result that only the function that gave rise to the min or max is adapted; the others are not. To illustrate, let alpha = max[beta(1), beta(2), ..., beta(N)]. Then the partial derivative of alpha with respect to beta(n) equals 1 when beta(n) is the maximum and is otherwise zero. We apply this property to the fine tuning of membership functions of fuzzy min-max decision processes and illustrate with an estimation example. The adaptation process can reveal the need for reducing the number of membership functions. Under the assumption that the inference surface is in some sense smooth, the process of adaptation can reveal overdetermination of the fuzzy system in two ways. First, if two membership functions come sufficiently close to each other, they can be fused into a single membership function. Second, if a membership function becomes too narrow, it can be deleted. In both cases, the number of fuzzy IF-THEN rules is reduced. In certain cases, the overall performance of the fuzzy system can be improved by this adaptive pruning.
--------------------------------------------------------------------------
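The "pointer" property of the max operator described in the abstract can be demonstrated in a few lines. This is a minimal sketch, not the authors' code; the function name is invented for illustration:

```python
# Illustrative sketch of the "pointer" property: for
# alpha = max(beta_1, ..., beta_N), d(alpha)/d(beta_n) = 1 only for the
# beta_n that attains the maximum, and 0 otherwise. A gradient step
# therefore adapts only the membership value that determined the decision.

def max_with_grad(betas):
    """Return alpha = max(betas) and the subgradient d(alpha)/d(beta_n)."""
    k = max(range(len(betas)), key=lambda i: betas[i])
    grad = [0.0] * len(betas)
    grad[k] = 1.0  # the "pointer": only the winning input receives gradient
    return betas[k], grad

betas = [0.2, 0.9, 0.5]
alpha, grad = max_with_grad(betas)
print(alpha)  # 0.9
print(grad)   # [0.0, 1.0, 0.0]
```

Backpropagating an error through this max would thus update only the parameters of the second membership function, leaving the others untouched, which is exactly the selective adaptation the abstract describes. The min operator behaves symmetrically.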