From svc at lacasa.com Sat Jun 1 15:11:02 1996
From: svc at lacasa.com (Stephen V. Coggeshall)
Date: Sat, 1 Jun 1996 13:11:02 -0600
Subject: Message for distribution
Message-ID: <199606011911.NAA02583@ives.lacasa.com>

A small start-up company is looking for the right employees to work on a variety of problems in financial and other modeling. The Center for Adaptive Systems Applications (CASA, ~20 employees) has existed since June 1995 as a spin-off from the Los Alamos National Laboratory. The company is based in Los Alamos, NM.

We are looking for postdoc-level researchers with strong skills in computer science (C, C++), math, and adaptive computing (neural nets). Experience in pattern recognition, neural nets, clustering algorithms, radial basis functions, etc., as well as large data set manipulation, is highly desired.

Please send resumes to:

The Center for Adaptive Systems Applications
c/o Frankie Gomez
901 18th Street
Los Alamos, NM 87544
mfg at lacasa.com

From allan at daimi.aau.dk Sun Jun 2 03:56:26 1996
From: allan at daimi.aau.dk (Allan Ove Kjeldberg)
Date: Sun, 2 Jun 1996 09:56:26 +0200
Subject: Paper available: "Training Neural Networks by means of Genetic Algorithms Working on Very Long Chromosomes" by Peter Gravild Korning
Message-ID: <199606020756.JAA16340@carbon.daimi.aau.dk>

The paper korning.nnga.ps.Z is now available for copying from the Neuroprose repository:

"Training Neural Networks by means of Genetic Algorithms Working on Very Long Chromosomes"
Peter Gravild Korning
University of Aarhus
Denmark

The paper addresses the problem of training neural nets by use of a genetic algorithm, where the GA constitutes a general technique, an alternative to e.g. back-propagation. Attempts to do this have failed in the past, primarily because the mean square error function known from back-propagation has been used as the fitness function for the genetic algorithm. I have invented a new fitness function which is simple but very powerful. Unlike the mean square error function, it takes into account the holistic nature of the GA's search, and the results are very promising.

All critique and all comments/suggestions are very welcome (please use the mail address aragorn at daimi.aau.dk or korning at cbs.dtu.dk).

Peter Gravild Korning

ABSTRACT: In the neural network/genetic algorithm community, rather limited success in the training of neural networks by genetic algorithms has been reported. Whitley et al. (1991) claim that, due to "the multiple representations problem", genetic algorithms will not be able to effectively train multilayer perceptrons whose chromosomal weight representation exceeds 300 bits. In the following paper, by use of a "real-life" problem known to be non-trivial, and by comparison with "classic" neural net training methods, I will try to show that the modest success of applying genetic algorithms to the training of perceptrons is caused not so much by "the multiple representations problem" as by the fact that available problem-specific knowledge is often ignored, thus making the problem unnecessarily tough for the genetic algorithm to solve. Special success is obtained by the use of a new fitness function, which takes into account the fact that the search performed by a genetic algorithm is holistic, and not local as is usually the case when perceptrons are trained by traditional methods.
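To make the setup concrete, here is a minimal sketch in Python (illustrative only, not the paper's code; Korning's holistic fitness function is not reproduced here, and all names and parameter values are arbitrary choices). A flat real-valued chromosome encodes the weights of a one-hidden-layer perceptron, and a simple GA with uniform crossover, Gaussian mutation, and elitism evolves it. The mean square error fitness shown is the baseline the paper argues against; an alternative fitness function can be passed in as a drop-in replacement.

import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(w, X, n_in, n_hid):
    # Unpack a flat chromosome into a one-hidden-layer perceptron.
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid)
    j = i + n_hid
    b1 = w[i:j]
    k = j + n_hid
    W2 = w[j:k]
    b2 = w[k]
    h = np.tanh(X @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def mse_fitness(w, X, y, n_in, n_hid):
    # Baseline fitness: negative mean square error (higher is better).
    # This is the fitness the paper argues is ill-suited to GA search.
    return -np.mean((mlp_forward(w, X, n_in, n_hid) - y) ** 2)

def evolve(X, y, n_in, n_hid, pop=50, gens=200, sigma=0.1, fitness=mse_fitness):
    n_w = n_in * n_hid + 2 * n_hid + 1   # total number of weights and biases
    population = rng.normal(0.0, 1.0, (pop, n_w))
    for _ in range(gens):
        fit = np.array([fitness(w, X, y, n_in, n_hid) for w in population])
        parents = population[np.argsort(fit)[::-1][: pop // 2]]  # top half
        # Uniform crossover between random parent pairs, then Gaussian mutation.
        mums = parents[rng.integers(0, len(parents), pop)]
        dads = parents[rng.integers(0, len(parents), pop)]
        mask = rng.random((pop, n_w)) < 0.5
        population = np.where(mask, mums, dads) + rng.normal(0.0, sigma, (pop, n_w))
        population[0] = parents[0]   # elitism: carry the best individual over
    return parents[0]

For example, evolve(X, y, n_in=2, n_hid=5) would search for the 21 weights of a 2-input, 5-hidden-unit net fitting a target vector y on inputs X.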
From radu_d at atm.neuro.pub.ro Sun Jun 2 03:10:59 1996
From: radu_d at atm.neuro.pub.ro (Radu Dogaru)
Date: Sun, 2 Jun 1996 10:10:59 +0300 (EET DST)
Subject: Two papers available
Message-ID: <199606020711.KAA01738@atm.neuro.pub.ro>

The following papers are available via anonymous FTP from the "neuroprose" archive:

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/dogaru_r.icnn95.ps.Z
FTP-file: pub/neuroprose/dogaru_r.icnn96.ps.Z

--------------------------------------------------------------------------------
1
File name: dogaru_r.icnn95.ps.Z
Title: "Chaotic resonance theory, a new approach for pattern storage and retrieval in neural networks"
Authors: Radu Dogaru, A.T. Murgan
Abstract: A method for designing recurrent neural networks capable of oscillating chaotically and of synchronizing was developed. A new neural model is proposed, based on replacing locally tuned units (like RBF neurons) with small chaotic resonators. The overall behavior is similar to that obtained in an ART clustering network, while there is only one connection between layers of neurons, all inter-layer information being coded as one-dimensional chaotic signals.
Pages: 5
Reference: Proceedings ICNN'95 (Perth, Australia, December '95), Vol. 6, pp. 3048-3052
--------------------------------------------------------------------------------
2
File name: dogaru_r.icnn96.ps.Z
Title: "Searching for robust chaos in discrete-time neural networks using weight space exploration"
Authors: Radu Dogaru, A.T. Murgan, Stefan Ortmann, Manfred Glesner
Abstract: A new method for the analysis and design of recurrent neural networks is described. The method is based on weight space exploration and displays, for large populations of neural networks, particular maps related to entropic and sensitivity descriptors of the dynamic behaviors. A sensitivity descriptor was introduced in order to easily obtain information about chaotic dynamics in large populations of recurrent neural networks. Robust chaos is a behavior characterized by obtaining the same class of chaotic signals when particular weights of the networks are allowed to vary within a compact domain.
Pages: 6
Reference: Accepted for publication in Proceedings ICNN'96 (Washington D.C., 3-6 June 1996)
-----------------------------------------------------------------------------------------
NO HARD-COPIES AVAILABLE!

For any comments or additional information please contact:
Dr. Radu Dogaru
"Politehnica" University of Bucharest, Romania
E-mail: radu_d at atm.neuro.pub.ro or radu_d at lmn.pub.ro

From ATAXR at asuvm.inre.asu.edu Mon Jun 3 03:12:35 1996
From: ATAXR at asuvm.inre.asu.edu (Asim Roy)
Date: Mon, 03 Jun 1996 00:12:35 -0700 (MST)
Subject: Connectionists Learning - Some New Ideas/Questions
Message-ID: <01I5GB1UPVJ68X2WJV@asu.edu>

This is an attempt to respond to some of the questions raised in regard to the network design issue in our new learning theory. I have included all of the responses relevant to the remaining issues, except one by Daniel Crespin, which was too long.

A. "Task A. Perform Network Design Task"

There is criticism of our learning theory on the grounds that humans inherit a pre-designed learning architecture and that this architecture has been designed and structured through the process of evolution over millions of years. I think this is a biological fact beyond dispute. The relevant question is what parts of the learning system can feasibly come pre-designed and what parts cannot.
For example, we know that the architecture of the brain includes separate areas (partitions) for vision, emotion, long-term memory, and so on. Thus we inherit a partition of the brain based on the functions it is expected to perform. This is a level of the architecture that is indeed pre-designed. A learning mechanism is also prepackaged with this architecture that knows this functional organization of the brain. Within these partitions is available a collection of biological neurons (cells) and their connections. There is also preprocessing in certain parts of the system, like the vision system. I don't think our theory is in conflict with these basic facts/ideas at all. Our learning theory relates to "work" performed by the learning mechanism within this functional organization of the brain.

For example, on the lighter side of this argument, this learning mechanism may have to design and train appropriate networks that incorporate knowledge about Windows 95. None of the Windows 95 knowledge could have been inherited from our biological ancestors, despite their millions of years of learning. So the net for Windows 95 could not have come pre-designed to us. No system, biological or otherwise, can design an appropriate net for a problem about which it has no knowledge. When I was born, my parents knew nothing about computers. So neither they nor their ancestors could have pre-designed nets for me to learn about computers, unless we bring up the notion of "fixed general purpose nets/modules" being available in the brain for any kind of learning situation. These fixed general purpose nets could come with a fixed set of neurons. But the basic problem with that notion is its conflict with the idea of "learning" itself. The essence of "learning" is "generalization", as discussed in the previous response on issues related to generalization. Since learning is generalization, and since generalization is attempting to design the smallest possible net, the idea of "fixed pre-designed" nets is incompatible with the notion of "learning", whether in biological systems or otherwise. Learning within fixed pre-designed nets is not "learning" at all and can be dangerous indeed. Since we run the risk of simply over- or underfitting the training data in fixed-size nets, we may not "learn" anything in such pre-designed nets, and we might be in grave danger as a species if we did so on a continuous basis. We could not have survived this long as a species by doing this - that is, by not being able to "generalize and learn". So, in general, problem-specific nets could not feasibly come pre-designed to us.

From thrun+ at heaven.learning.cs.cmu.edu Tue Jun 4 13:17:08 1996
From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu)
Date: Tue, 4 Jun 96 13:17:08 EDT
Subject: Call for papers: Special issue Machine Learning
Message-ID:

-------------------------------------------------------------------------------
                             Call for papers
      Special Issue of the Machine Learning Journal on Inductive Transfer
-------------------------------------------------------------------------------
              Lorien Pratt and Sebastian Thrun, Guest Editors
-------------------------------------------------------------------------------

Many recent machine learning efforts are focusing on the question of how to learn in an environment in which more than one task is performed by a system.
As in human learning, related tasks can build on one another, tasks that are learned simultaneously can cross-fertilize, and learning can occur at multiple levels, where the learning process itself is a learned skill. Learning in such an environment can have a number of benefits, including speedier learning of new tasks, a reduced number of training examples for new tasks, and improved accuracy. These benefits are especially apparent in complex applied tasks, where the combinatorics of learning are often otherwise prohibitive.

Current efforts in this quickly growing research area include investigation of methods that facilitate learning multiple tasks simultaneously, methods that determine the degree to which two related tasks can benefit from each other, and methods that extract and apply abstract representations from a source task to a new, related target task. The situation where the target task is a specialization of the source task is an important special case. The study of such methods has broad application, including a natural fit to data mining systems, which extract regularities from heterogeneous data sources under the guidance of a human user and can benefit from the additional bias afforded by inductive transfer.

We solicit papers on inductive transfer and learning to learn for an upcoming Special Issue of the Machine Learning Journal. Please send six (6) copies of your manuscript postmarked by July 15, 1996 to:

Dr. Lorien Pratt
MCS Dept.
CSM
Golden, CO 80401
USA

One (1) additional copy should be mailed to:

Karen Cullen
Attn: Special Issue on Inductive Transfer
MACHINE LEARNING Editorial Office
Kluwer Academic Publishers
101 Philip Drive
Assinippi Park
Norwell, MA 02061
USA

Manuscripts should be limited to at most 12000 words. Please also note that Machine Learning is now accepting submission of final copy in electronic form. Authors may want to adhere to the journal formatting standards for paper submissions as well. There is a LaTeX style file and related files available via anonymous ftp from ftp.std.com. Look in Kluwer/styles/journals for the files README, kbsfonts.sty, kbsjrnl.ins, kbsjrnl.sty, kbssamp.tex, and kbstmpl.tex, or the file kbsstyles.tar.Z, which contains them all.

Please see http://vita.mines.edu:3857/1s/lpratt/transfer.html for more information on inductive transfer. Papers will be quickly reviewed for a target publication date in the first quarter of 1997.

From maja at garnet.cs.brandeis.edu Tue Jun 4 17:05:02 1996
From: maja at garnet.cs.brandeis.edu (Maja Mataric)
Date: Tue, 4 Jun 1996 17:05:02 -0400
Subject: call for participation
Message-ID: <199606042105.RAA16416@garnet.cs.brandeis.edu>

***********************CONFERENCE INFORMATION*******************************

From Animals to Animats
The Fourth International Conference on Simulation of Adaptive Behavior (SAB96)
September 9th-13th, 1996
North Falmouth, Massachusetts, USA

FULL DETAILS ON THE WEB PAGE: http://www.cs.brandeis.edu/conferences/sab96
GENERAL CONTACT: sab96 at cs.brandeis.edu

The objective of this conference is to bring together researchers in ethology, psychology, ecology, artificial intelligence, artificial life, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow natural and artificial animals to adapt and survive in uncertain environments.
***************************PROGRAM**************************************

The conference will consist of a single track of invited speakers, refereed presentations and posters, and demonstrations. The invited speakers are James Albus, Jelle Atema, Daniel Dennett, Randy Gallistel, Scott Kelso, and David Touretzky.

***********************REGISTRATION IS OPEN*****************************

EARLY REGISTRATION DEADLINE IS JUNE 30, 1996
Regular $350; discounts for early registration, students, and members of the International Society for Adaptive Behavior.

***************************HOTEL****************************************

HOTEL DEADLINE FOR BLOCKED ROOMS IS AUGUST 8TH
The entire conference will take place at the Sea Crest in North Falmouth on Cape Cod. All participants are responsible for making their own reservations with the hotel.

************************STUDENT TRAVEL GRANTS****************************

APPLICATION DEADLINE: JUNE 24th
Thanks to funding from the Office of Naval Research and the Bauer Foundation, we will be able to assist about 15 students by lowering their costs for attending the conference. $400 scholarships will be available for the selected North American students, and $600 scholarships for selected overseas students.

*************************************************************************
ALL DETAILS ON THE WEB PAGE: http://www.cs.brandeis.edu/conferences/sab96

From radu_d at atm.neuro.pub.ro Wed Jun 5 01:20:09 1996
From: radu_d at atm.neuro.pub.ro (Radu Dogaru)
Date: Wed, 5 Jun 1996 08:20:09 +0300 (EET DST)
Subject: Two papers available
Message-ID: <199606050520.IAA02499@atm.neuro.pub.ro>

An error occurred in the last message regarding the files available in the "neuroprose" archive. The error concerns the file names, which are "dogaru...." rather than "dogaru_r...." as given in the original announcement. I repeat the message below with the error corrected.

-------------------------------------------------------------------------------
The following papers are available via anonymous FTP from the "neuroprose" archive:

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/dogaru.icnn95.ps.Z
FTP-file: pub/neuroprose/dogaru.icnn96.ps.Z

--------------------------------------------------------------------------------
1
File name: dogaru.icnn95.ps.Z
Title: "Chaotic resonance theory, a new approach for pattern storage and retrieval in neural networks"
Authors: Radu Dogaru, A.T. Murgan
Abstract: A method for designing recurrent neural networks capable of oscillating chaotically and of synchronizing was developed. A new neural model is proposed, based on replacing locally tuned units (like RBF neurons) with small chaotic resonators. The overall behavior is similar to that obtained in an ART clustering network, while there is only one connection between layers of neurons, all inter-layer information being coded as one-dimensional chaotic signals.
Pages: 5
Reference: Proceedings ICNN'95 (Perth, Australia, December '95), Vol. 6, pp. 3048-3052
--------------------------------------------------------------------------------
2
File name: dogaru.icnn96.ps.Z
Title: "Searching for robust chaos in discrete-time neural networks using weight space exploration"
Authors: Radu Dogaru, A.T. Murgan, Stefan Ortmann, Manfred Glesner
Abstract: A new method for the analysis and design of recurrent neural networks is described.
The method is based on weight space exploration and displays, for large populations of neural networks, particular maps related to entropic and sensitivity descriptors of the dynamic behaviors. A sensitivity descriptor was introduced in order to easily obtain information about chaotic dynamics in large populations of recurrent neural networks. Robust chaos is a behavior characterized by obtaining the same class of chaotic signals when particular weights of the networks are allowed to vary within a compact domain.
Pages: 6
Reference: Accepted for publication in Proceedings ICNN'96 (Washington D.C., 3-6 June 1996)
-----------------------------------------------------------------------------------------
NO HARD-COPIES AVAILABLE!

For any comments or additional information please contact:
Dr. Radu Dogaru
"Politehnica" University of Bucharest, Romania
E-mail: radu_d at atm.neuro.pub.ro or radu_d at lmn.pub.ro

From harnad at cogsci.soton.ac.uk Wed Jun 5 12:26:45 1996
From: harnad at cogsci.soton.ac.uk (Stevan Harnad)
Date: Wed, 5 Jun 96 17:26:45 +0100
Subject: Neural Constructivism Manifesto: BBS Call for Commentators
Message-ID: <8520.9606051626@cogsci.ecs.soton.ac.uk>

Below is the abstract of a forthcoming BBS target article on:

THE NEURAL BASIS OF COGNITIVE DEVELOPMENT: A CONSTRUCTIVIST MANIFESTO
by Steven R. Quartz and Terrence J. Sejnowski

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send e-mail to:

bbs at soton.ac.uk

or write to:

Behavioral and Brain Sciences
Department of Psychology
University of Southampton
Highfield, Southampton SO17 1BJ
UNITED KINGDOM

http://www.princeton.edu/~harnad/bbs.html
http://cogsci.soton.ac.uk/bbs
ftp://ftp.princeton.edu/pub/harnad/BBS
ftp://cogsci.soton.ac.uk/pub/harnad/BBS
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide web) according to the instructions that follow after the abstract.

____________________________________________________________________

THE NEURAL BASIS OF COGNITIVE DEVELOPMENT: A CONSTRUCTIVIST MANIFESTO

Steven R. Quartz
Computational Neurobiology Laboratory, and The Sloan Center for Theoretical Neurobiology, The Salk Institute for Biological Studies, 10010 North Torrey Pines Rd., La Jolla, CA 92037
steve at salk.edu

Terrence J. Sejnowski
Howard Hughes Medical Institute, The Salk Institute for Biological Studies, and Department of Biology, University of California, San Diego, La Jolla, CA 92037
terry at salk.edu

KEYWORDS: neural development; cognitive development; constructivism; selectionism; mathematical learning theory; evolution; learnability.
ABSTRACT: How do minds emerge from developing brains? According to "neural constructivism," the representational features of cortex are built from the dynamic interaction between neural growth mechanisms and environmentally derived neural activity. Contrary to popular selectionist models that emphasize regressive mechanisms, the neurobiological evidence suggests that this growth is a progressive increase in the representational properties of cortex. The interaction between the environment and neural growth results in a flexible type of learning: "constructive learning" minimizes the need for prespecification in accordance with recent neurobiological evidence that the developing cerebral cortex is largely free of domain-specific structure. Instead, the representational properties of cortex are built by the nature of the problem domain confronting it. This uniquely powerful and general learning strategy undermines the central assumption of classical learnability theory, that the learning properties of a system can be deduced from a fixed computational architecture. Neural constructivism suggests that the evolutionary emergence of neocortex in mammals is a progression toward more flexible representational structures, in contrast to the popular view of cortical evolution as an increase in innate, specialized circuits. Human cortical postnatal development is also more extensive and protracted than generally supposed, suggesting that cortex has evolved so as to maximize the capacity of environmental structure to shape its structure and function through constructive learning.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.quartz). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

-------------------------------------------------------------

These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive:

http://www.princeton.edu/~harnad/bbs.html
http://cogsci.soton.ac.uk/~harnad/bbs.html
ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.quartz
ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.quartz
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

To retrieve a file by ftp from an Internet site, type either:
   ftp ftp.princeton.edu
or
   ftp 128.112.128.1

When you are asked for your login, type:
   anonymous

Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@"), then:
   cd /pub/harnad/BBS

To show the available files, type:
   ls

Next, retrieve the file you want with (for example):
   get bbs.quartz

When you have the file(s) you want, type:
   quit

----------

Where the above procedure is not available, there are two fileservers, ftpmail at decwrl.dec.com and bitftp at pucc.bitnet, that will do the transfer for you. To one or the other of them, send the following one line message:
   help
for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you).
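For those who prefer a scripted retrieval to the interactive session above, here is a small convenience sketch using Python's standard ftplib module (illustrative only, not part of the official BBS instructions); the host, directory, and filename are the ones given above.

from ftplib import FTP

def fetch_bbs_draft(host="ftp.princeton.edu",
                    directory="/pub/harnad/BBS",
                    filename="bbs.quartz"):
    # Anonymous FTP: by convention the password is your e-mail address.
    ftp = FTP(host)
    ftp.login(user="anonymous", passwd="yourlogin@yourhost.whatever.whatever")
    ftp.cwd(directory)
    with open(filename, "wb") as f:
        # Retrieve the file in binary mode, writing blocks as they arrive.
        ftp.retrbinary("RETR " + filename, f.write)
    ftp.quit()

fetch_bbs_draft()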
-------------------------------------------------------------

From nschraud at evotec.de Thu Jun 6 16:12:55 1996
From: nschraud at evotec.de (Nici Schraudolph)
Date: Thu, 6 Jun 1996 22:12:55 +0200
Subject: update announcement - NC.bib
Message-ID: <199606062012.WAA14311@nix.evotec.de>

I have recently updated my BibTeX database NC.bib to include all articles in Neural Computation up to volume 7. The updated file is available by anonymous ftp from the following locations:

USA    - ftp://ftp.cnl.salk.edu/pub/schraudo/NC.bib.gz
Europe - ftp://nix.evotec.de/pub/nschraud/NC.bib.gz

It has also been incorporated into the following BibTeX collections:

Center for Computational Intelligence, TU Wien:
http://www.ci.tuwien.ac.at/docs/ci/bibtex_collection.html

The Collection of Computer Science Bibliographies:
http://liinwww.ira.uka.de/bibliography/

Happy citing,
--
Dr. Nicol N. Schraudolph    Tel: +49-40-56081-284
Evotec Biosystems GmbH      Fax: +49-40-56081-222
Grandweg 64                 Home: +49-40-430-3381
22529 Hamburg, Germany      http://www.cnl.salk.edu/~schraudo/

From katagiri at hip.atr.co.jp Fri Jun 7 00:51:22 1996
From: katagiri at hip.atr.co.jp (Shigeru Katagiri)
Date: Fri, 07 Jun 1996 13:51:22 +0900
Subject: Announcement of NNSP96
Message-ID: <9606070451.AA26184@hector>

1996 IEEE Signal Processing Society Workshop on
Neural Networks for Signal Processing (NNSP96)
September 4-6, Seika, Kyoto, Japan

The workshop information, including the CFP, advance program, registration, hotel reservation, and other related issues, is available on the NNSP96 WWW home page, "http://www.hip.atr.co.jp/~katagiri/nnsp96_home.html". You are encouraged to register early. The early registration deadline is August 10, 1996.

- Shigeru KATAGIRI (katagiri at hip.atr.co.jp), Program Chair, NNSP96
- Masae SHIOJI (shioji at hip.atr.co.jp), Secretary, NNSP96

From rosca at cs.rochester.edu Fri Jun 7 18:50:43 1996
From: rosca at cs.rochester.edu (rosca@cs.rochester.edu)
Date: Fri, 7 Jun 1996 18:50:43 -0400
Subject: Papers available: modular genetic programming
Message-ID: <199606072250.SAA00488@ash.cs.rochester.edu>

The following papers on the discovery of subroutines in Genetic Programming are now available for retrieval via ftp.

Ftp: ftp://ftp.cs.rochester.edu/pub/u/rosca/gp
WWW: http://www.cs.rochester.edu/u/rosca/research.html

Comments and suggestions are welcome.

Justinian
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Justinian Rosca                  Internet: rosca at cs.rochester.edu
University of Rochester          Office: (716) 275-1174
Department of Computer Science   Fax: (716) 461-2018
Rochester, NY 14627-0226         WWW: http://www.cs.rochester.edu/u/rosca/

EVOLUTION-BASED DISCOVERY OF HIERARCHICAL BEHAVIORS
J.P. Rosca and D.H. Ballard
AAAI-96, The MIT Press, 1996
ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aaai.ps.gz (7 pages; 100k compressed)

Abstract: The complexity of policy learning in a reinforcement learning task grows primarily with the number of observations. Unfortunately, the number of observations may be unacceptably high even for simple problems. In order to cope with the scale-up problem we adopt procedural representations of policies. Procedural representations have two advantages. First, they are implicit, allowing for good inductive generalization over a very large set of input states. Second, they facilitate modularization. In this paper we compare several randomized algorithms for learning modular procedural representations.
The main algorithm, called Adaptive Representation through Learning (ARL), is a genetic programming extension that relies on the discovery of subroutines. ARL is suitable for learning hierarchies of subroutines and for constructing policies for complex tasks. When the learning problem cannot be solved because the specification is too loose and the domain is not well understood, ARL will discover regularities in the problem environment in the form of subroutines, which often make the problem easier to solve. ARL was successfully tested on a typical reinforcement learning problem of controlling an agent in a dynamic and non-deterministic environment, where the discovered subroutines correspond to agent behaviors.

DISCOVERY OF SUBROUTINES IN GENETIC PROGRAMMING
J.P. Rosca and D.H. Ballard
In: Advances in Genetic Programming II
Edited by P. Angeline and K. Kinnear Jr.
MIT Press, 1996.
ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aigp2.dsgp.ps.gz (25 pages; 439k compressed)

Abstract: A fundamental problem in learning from observation and interaction with an environment is defining a good representation, that is, a representation which captures the underlying structure and functionality of the domain. This chapter discusses an extension of the genetic programming (GP) paradigm based on the idea that subroutines obtained from blocks of good representations act as building blocks and may enable a faster evolution of even better representations. This GP extension algorithm is called adaptive representation through learning (ARL). It has built-in mechanisms for (1) the creation of new subroutines through discovery and generalization of blocks of code, and (2) the deletion of subroutines. The set of evolved subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL was successfully tested on the problem of controlling an agent in a dynamic and non-deterministic environment. Results with the automatic discovery of subroutines show the potential of the GP technique to scale up better to complex problems.

From hu at eceserv0.ece.wisc.edu Fri Jun 7 22:38:37 1996
From: hu at eceserv0.ece.wisc.edu (Yu Hu)
Date: Fri, 7 Jun 1996 21:38:37 -0500
Subject: IEEE Trans. SP Special issue CFP: Neural Network Signal Processing
Message-ID: <199606080238.AA10364@eceserv0.ece.wisc.edu>

********************************************************************
*                        CALL FOR PAPERS                           *
*                                                                  *
*   A Special Issue of IEEE Transactions on Signal Processing:     *
*     Applications of Neural Networks to Signal Processing         *
*                                                                  *
********************************************************************

Expected Publication Date: November 1997
Issue Submission Deadline: December 1, 1996
Guest Editors: A. G. Constantinides, Simon Haykin, Yu Hen Hu, Jenq-Neng Hwang, Shigeru Katagiri, Sun-Yuan Kung, T. A. Poggio

Significant progress has been made in applying artificial neural network (ANN) techniques to signal processing. From a signal processing perspective, it is imperative to understand how neural network based algorithms relate to more conventional approaches in terms of performance, cost, and practical implementation issues. Questions like these demand honest, pragmatic, innovative, and imaginative answers. This special issue offers a unique forum for researchers and practitioners in this field to present their views on these important questions.
We seek the highest-quality manuscripts which focus on the signal processing aspects of a neural network based algorithm, application, or implementation. Topics of interest include, but are not limited to:

Neural network based signal detection, classification, and understanding algorithms.
Nonlinear system identification, signal prediction, modeling, adaptive filtering, and neural network learning algorithms.
Neural network applications to biomedical signal processing, including medical imaging, electrocardiogram (ECG), EEG, and related topics.
Signal processing algorithms for biological neural system modeling.
Comparison of neural network based approaches with conventional signal processing algorithms for solving real-world signal processing tasks.
Real-world signal processing applications based on neural networks.
Fast and parallel algorithms for efficient implementation of neural network based signal processing systems.

Prospective authors are encouraged to SUBMIT MANUSCRIPTS BY DECEMBER 1, 1996 to:

Professor Yu-Hen Hu                             E-mail: hu at engr.wisc.edu
Univ. of Wisconsin - Madison                    Phone: (608) 262-6724
Dept. of Electrical and Computer Engineering    Fax: (608) 262-1267
1415 Engineering Drive
Madison, WI 53706-1691 U.S.A.

On the cover letter, indicate that the manuscript is submitted to the special issue on neural networks for signal processing. All manuscripts should conform to the submission guidelines detailed in the "Information for Authors" printed in each issue of the IEEE Transactions on Signal Processing. Specifically, the length of each manuscript should not exceed 30 double-spaced pages.

SCHEDULE
Manuscript received by: December 1, 1996
Completion of initial review: March 31, 1997
Final manuscript received by: June 30, 1997
Expected publication date: November 1997

DISTINGUISHED GUEST EDITORS
Prof. A. G. Constantinides, Imperial College, UK, a.constantinides at romeo.ic.ac.uk
Prof. Simon Haykin, McMaster University, Canada, haykin at synapse.crl.mcmaster.ca
Prof. Yu Hen Hu, Univ. of Wisconsin, U.S.A., hu at engr.wisc.edu
Prof. Jenq-Neng Hwang, University of Washington, U.S.A., hwang at ee.washington.edu
Dr. Shigeru Katagiri, ATR, JAPAN, katagiri at hip.atr.co.jp
Prof. Sun-Yuan Kung, Princeton University, U.S.A., kung at princeton.edu
Prof. T. A. Poggio, Massachusetts Inst. of Tech., U.S.A., tp-temp at ai.mit.edu

From goldfarb at unb.ca Sun Jun 9 22:21:00 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Sun, 9 Jun 1996 23:21:00 -0300 (ADT)
Subject: Call for papers for a Special Issue of Pattern Recognition: What is inductive learning?
Message-ID:

My apologies if you receive multiple copies of this message. Please post it.

************************************************************************

Call for papers

Special Issue of Pattern Recognition
(The Journal of the Pattern Recognition Society)

WHAT IS INDUCTIVE LEARNING: ON THE FOUNDATIONS OF PATTERN RECOGNITION, AI, AND COGNITIVE SCIENCE

Guest editor: Lev Goldfarb
Faculty of Computer Science
University of New Brunswick
Fredericton, N.B., Canada

The "shape" of AI (and, partly, of cognitive science), as it stands now, has been molded largely by the three founding schools (at the Massachusetts Institute of Technology, Carnegie-Mellon and Stanford Universities). This "shape" now stands fragmented into several ill-defined research agendas with no clear basic SCIENTIFIC problems (in the classical understanding of the term) as their focus.
It appears that four factors have contributed to this situation: inability to focus on the central cognitive process(es), lack of understanding of the structure of advanced scientific models, failure to see the distinction between computational/logical and mathematical models, and the relative abundance of research funds for AI during the last 35 years. The resulting research agendas have prevented AI from cooperatively evolving into a scientific discipline with some central "real" problems, inspired by the basic cognitive/biological processes, as its focus. The candidates for such basic processes could come only from the central/common perceptual processes, which were only much later employed by the "higher", e.g. language, processes: the period during which the "higher" level processes evolved is insignificant compared to that in which the development of the perceptual processes took place (compare also the anatomical development of the brain, which does not show any basic changes with the development of the "higher" processes).

Moreover, the partisan tradition in the development of AI may also have inspired the recent "connectionist revolution" as well as other smaller "revolutions", e.g. that related to "genetic" learning. As a result, in particular, even the most "reputable" connectionist histories of the field of pattern recognition, which was formed more than three decades ago and to which connectionism properly belongs, show amazing ignorance of the major developments in the parent (pattern recognition) field: the emergence of two important and formally quite irreconcilable recognition paradigms--vector space and syntactic. The latter ignorance is even more instructive in view of the fact that many engineers who got involved with the field of pattern recognition through the connectionist "movement" are also ignorant of the above two paradigms, which were discovered and developed within the largely applied/engineering parent field of pattern recognition.

As far as the inception of a scientific field is concerned, it should be quite clear that the initial choice of the basic scientific problem(s) is of decisive importance. This is particularly true for cognitive modeling, where the path from the model to the experiment and the reverse path are much more complex than was the case, for example, at the inception of physics. In this connection, a very important question arises, which will be addressed in the special issue: what form will the future/adequate cognitive models take? Furthermore, maybe, as many cognitive scientists argue, since we are in a prescientific stage, we should simply continue to collect more and more data and not worry about the future models. The answer to the last argument is quite clear to me: look very carefully at the "data" and the corresponding experiments and you will note that no data can even be collected without an underlying model, which always includes both formal and informal components. In other words, we cannot avoid models (especially in cognitive science, where the path from the model to the experiment will be much longer and more complex than is the case in all other sciences).
Therefore, paraphrasing Friedrich Engels's thought on the role of philosophy in science, one can say that there is absolutely no way to do a scientific experiment without an underlying model, and the difference between a good scientist and a bad one has to do with the degree to which each realizes this dependence and actively participates in the selection of the corresponding model. It goes without saying that, at the inception of the science, the decision on which cognitive process one must focus initially should precede the selection of the model for the process.

As to the choice of the basic scientific problem, or basic cognitive process, it appears that the really central cognitive process is that of inductive learning, as has been noted by many great philosophers of the past four centuries (e.g., Bacon, Descartes, Pascal, Locke, Hume, Kant, Mill, Russell, Quine) and even earlier (e.g., Aristotle). The insistence of such outstanding physiologists and neurophysiologists as Helmholtz and Barlow on the central role of inductive learning processes is also well known. However, in view of the difficulties associated with developing an adequate inductive learning model, researchers in AI, and to a somewhat lesser extent in cognitive science, have decided to view inductive learning not as a central process at all, i.e., they decided to "dissolve" the problem.

It became clear to me that the above difficulties are related to the development of a genuinely new (symbolic) mathematical framework that can SATISFACTORILY define the concept of INDUCTIVE CLASS REPRESENTATION (ICR), i.e., the nature of encoding an essentially infinite data set on the basis of a small finite set. (The best-known example of ICR, also critical to the development of mathematics, is the classical Peano representation of the set of natural numbers--one element plus one operation--used in mathematical induction.) Thus, the main differences between inductive learning models should be viewed in light of the differences between the formal means, i.e. mathematical structures, offered by various models for representing the class inductively. I will also argue (in one of the papers) that the classical mathematical (numeric) models, including the vector space and probabilistic models, offer inadequate axiomatic frameworks for capturing the concept of ICR.

As Peter Gardenfors aptly remarked in his 1990 paper, "induction has been called 'the scandal of philosophy' [and] unless more consideration is given to the question of which form of knowledge representation is appropriate for mechanized inductive inferences, I'm afraid that induction may become a scandal of AI as well." I strongly believe that all attempts to "dissolve" the inductive learning processes are futile and, moreover, that these processes are central cognitive processes for all levels of processing; hence the earlier workshop in Toronto (May 20-21) under the same title and the present Special Issue.

I invite all researchers seriously interested in the scientific foundations of cognitive science, AI, or pattern recognition to submit papers addressing, in addition to other relevant issues, the following questions:

* What is the role of mathematics in cognitive science, AI, and pattern recognition?
* Are there any central cognitive processes?
* What is inductive learning?
* What is inductive class representation (ICR)?
* Are there several basic inductive learning processes?
* Are the inductive learning processes central?
* What are the relations between inductive learning processes and the known physical processes?
* What is the relationship between the measurement processes and inductive learning processes (e.g., the retina as a structured measurement device)?
* What is the role of inductive learning in sensation and perception (vision, hearing, etc.)?
* What is the relation between inductive learning, categorization, and pattern recognition?
* What is the relation between supervised/inductive learning and unsupervised learning?
* What is the role of inductive learning processes in language acquisition?
* What are the relationships, if any, between the inductive class representation (ICR) and the basic object representation (from the class)?
* What are the differences between the mathematical structures employed by the known inductive learning models for capturing the corresponding ICRs?
* What is the role of inductive learning in memory and knowledge representation?
* What are the relations, if any, between the ICR and mental models and frames?

When preparing the manuscript, please conform to the standard submission requirements given in the journal Pattern Recognition, which can be faxed or mailed if necessary. Hardcopies (4) of each submission should be mailed to:

Lev Goldfarb
Faculty of Computer Science
University of New Brunswick
P.O. Box 4400                     E-mail: goldfarb at unb.ca
Fredericton, N.B. E3B 5A3         Tel: 506-453-4566
Canada                            Fax: 506-453-3566

by the SUBMISSION DEADLINE, August 20, 1996. The review process should take about 4-5 weeks and will take into account the relevance, quality, and originality of the contribution. Potential contributors are encouraged to contact me with any questions they might have.

**************************************************************************

-- Lev Goldfarb
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.html

From john at dcs.rhbnc.ac.uk Mon Jun 10 10:42:05 1996
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Mon, 10 Jun 96 15:42:05 +0100
Subject: RESEARCH ASSISTANT POST
Message-ID: <199606101442.PAA05889@platon.cs.rhbnc.ac.uk>

---------------------------------------------------------------
RESEARCH ASSISTANT POST
Royal Holloway/London School of Economics, University of London
---------------------------------------------------------------

Jonathan Baxter has been working at Royal Holloway and the London School of Economics on an EPSRC (a UK funding council) funded project entitled `The Canonical Metric in Machine Learning'. Attached below is an abstract of the project, which has a further year to run. Jonathan is resigning from the project to move to the Australian National University, where he will continue to work along similar lines. EPSRC have given us permission to recruit a replacement, to start any time between 5th July and 5th January 1997 and to run for a further 12 months, provided that they are suitable for the work involved.

If you know anyone who would be interested, it would be very helpful if they could visit London before Jonathan leaves on 5th July. Jonathan, Martin and I will be attending COLT and so could discuss the project in detail with anyone interested at that time. The rate of pay is 19600 pounds p.a., paid half through each institution. The slant taken could be towards more implementational work or alternatively (and perhaps preferably, in view of the funding committee being mathematical) more theoretical. Anyone with an interest should not hesitate to contact us for more information.
Best wishes,
John Shawe-Taylor, Martin Anthony and Jonathan Baxter

----------

Canonical Metric in Machine Learning (Abstract and progress)

The performance of a learning algorithm is fundamentally limited by the features it uses. Thus discovering sets of good features is of major importance in machine learning research. The principal aim of this project is to further our theoretical knowledge of the feature discovery process, and also to implement practical solutions for feature discovery in problems such as character recognition and speech recognition.

The theoretical aspect of the project builds on work by the previous research assistant (Jonathan Baxter) showing that if a learner is embedded within an environment of related tasks, then the learner can learn features that are appropriate for learning all tasks in the environment. This process can be viewed as "Learning to Learn". One can also show that the environment of learning problems induces a natural metric (the "canonical metric") on the input space of the learner. Knowledge of this metric enables the learner to perform optimal quantization of the input space, and hence to learn optimally within the environment. The main theoretical focus of the project is to further investigate the theory of the canonical metric and its relation to learning.

We are currently applying these theoretical ideas to the problem of Japanese character recognition, and so far we have achieved notable success. The practical part of the project will be to continue these investigations and also to investigate applications to speech recognition.

Further information on the background material to this project may be found in NeuroCOLT technical reports 95-45, 95-46 and 95-47. Also see Jonathan Baxter's talk at this year's COLT.

From alpaydin at boun.edu.tr Tue Jun 11 06:33:41 1996
From: alpaydin at boun.edu.tr (Ethem Alpaydin)
Date: Tue, 11 Jun 1996 14:33:41 +0400 (MEDT)
Subject: Call for Participation: Tainn'96
In-Reply-To:
Message-ID:

Pre-S. We're sorry if you receive multiple copies of this message.

* Please post * Please forward * Please post * Please forward *

Call for Participation

TAINN'96, Istanbul
5th Turkish Symposium on Artificial Intelligence and Neural Networks

To be held at Istanbul Technical University, Macka Campus
June 27-28, 1996

Jointly organized by Bogazici University and Istanbul Technical University

Invited talk by Prof. Teuvo Kohonen, Helsinki University of Technology

Full program, registration and accommodation information can be received by e-mail or on the web:
Email: tainn96 at boun.edu.tr
URL: http://www.cmpe.boun.edu.tr/~tainn96

From listerrj at helios.aston.ac.uk Wed Jun 12 10:41:56 1996
From: listerrj at helios.aston.ac.uk (Richard Lister)
Date: Wed, 12 Jun 1996 15:41:56 +0100
Subject: PhD Studentship Available
Message-ID: <4719.199606121441@sun.aston.ac.uk>

----------------------------------------------------------------------
Neural Computing Research Group
-------------------------------
Dept of Computer Science and Applied Mathematics
Aston University, Birmingham, UK

PhD STUDENTSHIP AVAILABLE
-------------------------

*** Full details at http://www.ncrg.aston.ac.uk/ ***

A studentship exists for a project which is jointly funded by the UK EPSRC and by British Aerospace under the Total Technology scheme. The student will be expected to follow the Neural Computing MSc by Research degree for the first year, and will also be expected to pass four modules from the MBA course.
The funding covers tuition fees and living expenses for three years, and the student is expected to gain a PhD in Neural Computing at the end of this period. The project supervisor will be Professor David Lowe and the studentship will be based at Aston University, Birmingham, UK.

Structural Characterisation of Wake EEG Signals
-----------------------------------------------

A student is required to carry out research in the interdisciplinary area of multichannel EEG signal characterisation using statistical pattern recognition and artificial neural networks. The aim of the project is to investigate the degree to which attentiveness or vigilance may be characterised through an analysis of wake EEG signals. The problem domain is one of extraction and interpretation of structure in an environment in which there is little or no labelled data and a poor signal-to-noise ratio. Macrostate unsupervised clustering of multivariate EEG data is a difficult problem area and requires a high level of competence across discipline boundaries. The project calls for developing skills in linear and non-linear signal processing, biomedical data interpretation, statistical clustering methodology and artificial neural networks. The student will have to be mathematically and computationally proficient.

How to apply
------------

This award is made on a competitive basis and students should ensure that applications reach the Neural Computing Research Group at Aston University by Wednesday June 19th 1996. An electronic version of the application form is available on our Web pages at http://www.ncrg.aston.ac.uk/ . Interviews will be held on Friday 21st June 1996 and candidates must ensure that they are available for interview. Successful candidates will be notified by telephone and/or email if they are to be called for interview. Candidates will be notified of the outcome on Monday 24th June 1996.

----------------------------------------------------------------------

From adali at engr.umbc.edu Thu Jun 13 12:17:00 1996
From: adali at engr.umbc.edu (Tulay Adali)
Date: Thu, 13 Jun 1996 12:17:00 -0400 (EDT)
Subject: CFP: Spec. Issue on Apps. of NNets in Biomedical Imaging/Image Processing
Message-ID: <199606131617.QAA06071@akdeniz.engr.umbc.edu>

---------------------------------------------------------------------
                          CALL FOR PAPERS
---------------------------------------------------------------------
        Special Issue on Applications of Neural Networks in
              Biomedical Imaging/Image Processing
---------------------------------------------------------------------

We invite papers on applications of artificial neural networks in biomedical imaging and biomedical image processing to appear in a special issue of the Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology. Possible areas of application include (but are not restricted to): pattern recognition and feature extraction for computer-aided diagnosis and prognosis, analysis (quantification, segmentation, edge detection, etc.), restoration, compression, registration, reconstruction, and quality evaluation of medical images. Of particular interest are methods that take advantage of the multimodal nature of biomedical image data, which is available in most studies, as well as techniques developed for sequences of biomedical images such as those for dynamic PET and functional MRI data.
Schedule:
Manuscript submission deadline: August 15, 1996
Notification of acceptance: January 15, 1997
Final manuscript submission deadline: March 1, 1997
Expected publication date: Third quarter of 1997

Prospective authors should follow the regular guidelines for publications submitted to journals of Kluwer Academic Publishers, except that the manuscripts should be submitted to Tulay Adali, guest editor of the special issue. Submission instructions for the journal can be found at http://www.kwap.nl.

Guest Editor:
TULAY ADALI
Department of Computer Science and Electrical Engineering
University of Maryland Baltimore County
Baltimore, MD 21228-5398
Tel: (410) 455-3521
Fax: (410) 455-3969
E-mail: adali at engr.umbc.edu

For updates and a list of references in the area, refer to: http://www.engr.umbc.edu/~adali/biomednn.html

From csj at ccms.ntu.edu.tw Thu Jun 13 03:28:34 1996
From: csj at ccms.ntu.edu.tw (Sao-Jie Chen)
Date: Thu, 13 Jun 1996 15:28:34 +0800
Subject: call for papers: ANNCSSP'96 (Taiwan)
Message-ID: <199606130728.PAA24823@ccms.ntu.edu.tw>

Submitted by: Prof. Von-Wun Soo

************************************************************************
                      SECOND CALL FOR PAPERS
  1996 International Symposium on Multi-Technology Information Processing
  A Joint Symposium of Artificial Neural Networks, Circuits and Systems,
                       and Signal Processing
                       December 16-18, 1996
              Hsin-Chu, Taiwan, Republic of China
************************************************************************

Submission of extended summary by July 15, 1996.

Sponsored by:
National Tsing Hua University (NTHU)
Ministry of Education, Taiwan R.O.C.
National Science Council, Taiwan R.O.C.

in Cooperation with:
IEEE Signal Processing Society
IEEE Circuits and Systems Society (pending)
IEEE Neural Networks Council
IEEE Taiwan Section
Taiwanese Association for Artificial Intelligence

ORGANIZATION
General Co-chairs: H. C. Wang, NTHU; Y. H. Hu, U. of Wisconsin
Advisory Board Co-chairs: W. T. Chen, NTHU; S. Y. Kung, Princeton U.
Vice Co-chairs: H. C. Hu, NCTU; J. N. Hwang, U. of Washington
Program Co-chairs: V. W. Soo, NTHU; C. H. Lee, AT&T

Call For Papers

The International Symposium on Multi-Technology Information Processing (ISMIP'96), a joint symposium of artificial neural networks, circuits and systems, and signal processing, will be held at National Tsing Hua University, Hsin-Chu, Taiwan, Republic of China. This conference is an expansion of the previous series of International Symposia on Artificial Neural Networks (ISANN). The main purpose of this conference is to offer a forum showcasing the latest advancements in modern information processing technologies. It will include recent innovative research results on theories, algorithms, architectures, systems, and hardware implementations that lead to intelligent information processing. The technical program will feature opening keynote addresses, invited plenary talks, and technical presentations of refereed papers. The official language is English.

Papers are solicited for, but not limited to, the following topics:
1. Associative Memory
2. Digital and Analog Neurocomputers
3. Fuzzy Neural Systems
4. Supervised/Unsupervised Learning
5. Robotics
6. Sensory/Motor Control
7. Image Processing
8. Pattern Recognition
9. Language/Speech Processing
10. Digital Signal Processing
11. VLSI Architectures
12. Non-linear Circuits
13. Multimedia Information Processing
14. Optimization
15. Mathematical Methods
16. Visual Signal Processing
17. Content-based Signal Processing
18. Applications
Prospective authors are invited to submit 4 copies of extended summaries of no more than 4 pages. All manuscripts must be written in English, single-spaced, in single-column format, on 8.5" by 11" white paper. The top of the first page of the paper should include a title, authors' names, affiliations, address, telephone/fax numbers, and e-mail address if applicable. The indicated corresponding author will receive an acknowledgement of his/her submission. Camera-ready full papers of accepted manuscripts will be published in a hard-bound proceedings and distributed at the symposium.

For more information, please consult the URL http://pierce.ee.washington.edu/~nnsp/ismip96.html

Authors are invited to send submissions to one of the program co-chairs:

For submissions from USA and Europe:
Dr. Chin-Hui Lee
Bell Labs, Lucent Technologies
600 Mountain Avenue
Murray Hill, NJ 07974, USA
E-Mail: chl at research.bell-labs.com
Phone: 908-582-5226
Fax: 908-582-7308

For submissions from Asia and the rest of the world:
Prof. V. W. Soo
Dept. of Computer Science
National Tsing Hua University
Hsin Chu, Taiwan 30043, ROC
E-Mail: soo at cs.nthu.edu.tw
Phone: 886-35-731068
Fax: 886-35-723694

Schedule
Submission of extended summary: July 15, 1996
Notification of acceptance: September 30, 1996
Submission of camera-ready paper: October 31, 1996
Advance registration before: November 15, 1996

From Dimitris.Dracopoulos at trurl.brunel.ac.uk Fri Jun 14 16:19:16 1996
From: Dimitris.Dracopoulos at trurl.brunel.ac.uk (Dimitris Dracopoulos)
Date: Fri, 14 Jun 1996 14:19:16 -0600
Subject: Advanced MSc in Neural & Evolutionary Systems (London)
Message-ID: <9606141419.ZM1649@trurl.brunel.ac.uk>

The Department of Computer Science, Brunel University, London, will run a new advanced MSc in Neural & Evolutionary Systems in 1996/1997. The MSc covers introductory and more advanced material in the areas of neural networks, genetic algorithms, genetic programming, artificial life, parallel computing, etc. The MSc in Neural and Evolutionary Systems is supported by the Centre of Neural and Evolutionary Systems (CNES) in the Department of Computer Science and Information Systems.

More information about this MSc can be found at: http://http2.brunel.ac.uk:8080/~csstdcd/NES_Msc.html

Applications will be considered until late June (or, at the latest, the beginning of July). For application forms please contact:

Admissions Secretary
Department of Computer Science and Information Systems
Brunel University
London
Uxbridge
Middlesex UB8 3PH
United Kingdom
Telephone: +44 (0)1895 274000 ext. 2394
Fax: +44 (0)1895 251686
Email: cs-msc-courses at brunel.ac.uk

--
Dr Dimitris C. Dracopoulos
Department of Computer Science
Brunel University          Telephone: +44 1895 274000 ext. 2120
London                     Fax: +44 1895 251686
Uxbridge                   E-mail: Dimitris.Dracopoulos at brunel.ac.uk
Middlesex UB8 3PH
United Kingdom

From pkso at tattoo.ed.ac.uk Fri Jun 14 16:00:21 1996
From: pkso at tattoo.ed.ac.uk (P Sollich)
Date: Fri, 14 Jun 96 16:00:21 BST
Subject: Paper: Query learning in committee machine
Message-ID: <9606141600.aa05180@uk.ac.ed.tattoo>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/sollich.queries_comm_machine.ps.Z

Dear connectionists,

the following preprint is now available for copying from the neuroprose repository:

Learning from Minimum Entropy Queries in a Large Committee Machine
Peter Sollich
Department of Physics
University of Edinburgh
Edinburgh EH9 3JZ, U.K.
From LUCIANO at etsiig.uniovi.es Mon Jun 17 06:23:00 1996
From: LUCIANO at etsiig.uniovi.es (Luciano Sanchez)
Date: Mon, 17 Jun 1996 11:23:00 +0100 (GMT)
Subject: New book in Spanish
Message-ID: <01I60HUOKW5UCTZKPB@etsiig.uniovi.es>

Last month a new book on metaheuristics (simulated annealing, tabu search, GRASP, genetic algorithms) and the application of neural networks in optimization was published. The book is in Spanish. The list of authors includes Prof. Fred Glover (U. of Colorado), author of the tabu search technique, and other professors from Alabama, the American University of Beirut, Tecnologico de Monterrey (Mexico), U. La Plata (Argentina), and Oviedo (Spain). All the material is original and was written for this book. More info is available at the Web address: http://www.ing.unlp.edu.ar/cetad/mos/libro.html

TITLE: Optimizacion heuristica y redes neuronales
AUTHORS: Adenso Diaz, Fred Glover, H. Ghaziri, J.L. Gonzalez, M. Laguna, P. Moscato, F. Tseng
PREFACE BY: Gerald Thompson (Carnegie Mellon U.)
PUBLISHER: Editorial Paraninfo, Madrid (fax: + (34) 1.445.6218)
PAGES: 235
ISBN: 84-283-2269-4

TABLE OF CONTENTS (ABSTRACTED):
CHAPTER 1. INTRODUCTION TO METAHEURISTICS (complexity, classification of heuristics, ...)
CHAPTER 2. SIMULATED ANNEALING (physical analogy, convergence, number partitioning & SA, applications, ...)
CHAPTER 3. GENETIC ALGORITHMS (basics, schema theorem, deceptive problems, applications, ...)
CHAPTER 4. TABU SEARCH (types of memory, structure and strategy, path relinking, ...)
CHAPTER 5. GRASP (GREEDY RANDOMIZED ADAPTIVE SEARCH PROCEDURES) (design, local procedures, applications, ...)
CHAPTER 6. NEURAL NETWORKS (architectures, NN & optimization, Kohonen, Hopfield, elastic net, applications, ...)
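As a pointer for readers new to the techniques covered in the book, here is a minimal sketch of the textbook simulated-annealing loop underlying Chapter 2. This is the generic Metropolis scheme, not code from the book; the geometric cooling schedule, the parameter values, and the function names are illustrative assumptions.

    import math
    import random

    def simulated_annealing(cost, neighbour, x0, t0=1.0, alpha=0.95, steps=10000):
        """Generic simulated annealing: always accept improving moves,
        accept worsening moves with probability exp(-delta/T), and cool
        the temperature T geometrically."""
        x, best = x0, x0
        t = t0
        for _ in range(steps):
            y = neighbour(x)                 # user-supplied random move
            delta = cost(y) - cost(x)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y                        # Metropolis acceptance rule
            if cost(x) < cost(best):
                best = x
            t *= alpha                       # geometric cooling (an assumption)
        return best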
From rafal at mech.gla.ac.uk Mon Jun 17 05:29:06 1996
From: rafal at mech.gla.ac.uk (rafal@mech.gla.ac.uk)
Date: Mon, 17 Jun 1996 10:29:06 +0100 (BST)
Subject: New book on neurocontrol
Message-ID: <7485.199606170929@gryphon.mech.gla.ac.uk>

Contributed by: Rafal Zbikowski, PhD
Control Group, Department of Mechanical Engineering, Glasgow University, Glasgow G12 8QQ, Scotland, UK
rafal at mech.gla.ac.uk

NEW BOOK ON NEUROCONTROL

``Neural Adaptive Control Technology''
R. Zbikowski and K. J. Hunt (Editors)
World Scientific, 1996
ISBN 981-02-2557-1, hard bound, 340pp, subject index

Full details:
http://www.mech.gla.ac.uk/~nactftp/nact.html
http://www.singnet.com.sg/~wspclib/Books/compsci/3021.html

Summary
^^^^^^^
This book is an outgrowth of the workshop on Neural Adaptive Control Technology, NACT I, held in 1995 in Glasgow. Selected workshop participants were asked to substantially expand and revise their contributions to make them into full papers. The workshop was organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework, called NACT, a collaboration between Daimler-Benz (Germany) and the University of Glasgow (Scotland). A major aim of the NACT project is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed are being evaluated on concrete industrial problems from Daimler-Benz. In the book, emphasis is put on the development of a sound theory of neural adaptive control for non-linear control systems, firmly anchored in the engineering context of industrial practice. The contributors are therefore both renowned academics and practitioners from major industrial users of neurocontrol.

Contents
^^^^^^^^
Part I: Neural Adaptive Control Technology
Chapter 1: J. C. Kalkkuhl and K. J. Hunt (Daimler-Benz AG), ``Discrete-time Neural Model Structures for Continuous Nonlinear Systems: Fundamental Properties and Control Aspects''
Chapter 2: P. J. Gawthrop (University of Glasgow), ``Continuous-Time Local Model Networks''
Chapter 3: R. Żbikowski and A. Dzieliński (University of Glasgow), ``Nonuniform Sampling Approach to Control Systems Modelling with Feedforward Neural Networks''

Part II: Non-linear Control Fundamentals for Neural Networks
Chapter 4: W. Respondek (Polish Academy of Sciences), ``Geometric Methods in Nonlinear Control Theory: A Survey''
Chapter 5: T. Kaczorek (Warsaw University of Technology), ``Local Reachability, Local Controllability and Observability of a Class of 2-D Bilinear Systems''
Chapter 6: T. A. Johansen and M. M. Polycarpou (SINTEF and University of Cincinnati), ``Stable Adaptive Control of a General Class of Non-linear Systems''

Part III: Neural Techniques and Applications
Chapter 7: J-M. Renders and M. Saerens (Université Libre de Bruxelles), ``Robust Adaptive Neurocontrol of MIMO Continuous-time Processes Based on the $e_1$-modification Scheme''
Chapter 8: I. Rivals and L. Personnaz (École Supérieure de Physique et de Chimie Industrielles), ``Black-Box Modeling with State-Space Neural Networks''
Chapter 9: D. A. Sofge and D. L. Elliott (NeuroDyne, Inc.), ``An Approach to Intelligent Identification and Control of Nonlinear Dynamical Systems''
Chapter 10: W. S. Mischo (Darmstadt Institute of Technology), ``How to Adapt in Neurocontrol: A Decision for CMAC''
Chapter 11: G. T. Lines and T. Kavli (SINTEF Instrumentation), ``The Equivalence of Spline Models and Fuzzy Logic Applied to Model Construction and Interpretation''
Index

Preface
^^^^^^^
This book is an outgrowth of the workshop on Neural Adaptive Control Technology, NACT I, held on May 18-19, 1995 in Glasgow. However, this book is not simply the conference proceedings. Instead, selected workshop participants were asked to substantially expand and revise their contributions to make them into full papers. Before the contents of the book are discussed, it seems in order to briefly sketch the background and purpose of the workshop. The event was organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework, called NACT, a collaboration between Daimler-Benz (Germany) and the University of Glasgow (Scotland). The NACT project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems are exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed are being evaluated on concrete industrial problems from within the Daimler-Benz group of companies. This context dictated the focus of the workshop and guided the editors in the choice of the papers and their subsequent reshaping into substantive book chapters. Thus, emphasis is put on the development of a sound theory of neural adaptive control for non-linear control systems, firmly anchored in the engineering context of industrial practice. The contributors are therefore both renowned academics and practitioners from major industrial users of neurocontrol.

The book naturally divides into three parts. Part I is devoted to the theoretical and practical results on neural adaptive control technology resulting from the NACT project. Chapter 1 by J. C. Kalkkuhl and K. J. Hunt analyses several important fundamental issues so far largely ignored in the neurocontrol context. The issue of prime importance, and thus treated first, is that of the discretisation of continuous-time models. Physical plants are continuous-time, but the practicality of digital implementations requires discrete-time representations. A careful discussion is presented exposing the limitations of NARMAX models, widely used in neurocontrol. This sets the stage for the finite-element method approach to the approximation of NARMAX models. Chapter 2 is written by Peter J. Gawthrop, a well-known contributor to adaptive control, in particular continuous-time self-tuning. Following this line, the continuous-time version of Local Model Networks/Controllers is introduced. The structure has some surprising connections to the state observation problem. This leads to the important distinction between local and global states of the model. The exposition is accompanied by simulations of essentially non-linear systems. Chapter 3 by R. Żbikowski and A. Dzieliński introduces and describes in some detail the nonuniform multi-dimensional sampling approach to neurocontrol with feedforward neural networks. The authors argue that this is a natural theoretical framework for practical control engineering problems, because the measured data representing the NARMA model come as multidimensional samples. The dynamics of the underlying system manifest themselves in the nonuniformity of the data, and thus the irregular spread of the samples is an essential feature of the representation. A novel method of neural modelling of NARMA systems is given. Important practical issues of distortions caused by approximation of the Fourier transform are addressed. A tutorial survey of the theory of Paley-Wiener functions, with emphasis on the neural modelling aspects, completes the presentation.
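For reference, the NARMA structure that recurs throughout Part I can be written, in one common discrete-time form,

    y(k) = F( y(k-1), ..., y(k-n_y), u(k-1), ..., u(k-n_u) ),

where u is the input, y the output, and F an unknown non-linear map, realised in the neural setting by a feedforward network; NARMAX models additionally include past noise terms e(k-1), ..., e(k-n_e). The orders n_y, n_u, n_e are illustrative symbols used here for exposition, not the book's notation.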
Part II is devoted to results of non-linear control relevant to the theory of neurocontrol. It opens with Chapter 4 by Witold Respondek, whose pioneering work at the beginning of the 1980s resulted in the explosive development (lasting to this day) of geometric methods in non-linear control. In fact, `geometric control' has practically become synonymous with non-linear control. Recently, this rich and consistent theory has been used more often in the context of neurocontrol, because of the need for a control framework for (non-linear) neural models. Being an important and active participant in the development of the geometric approach, Respondek offers valuable insights into the underlying mathematics, while never compromising on rigour. His lucid style is supported by numerous illustrations and examples, making the chapter a readable and informative introduction. It is a welcome feature for a subject dominated by presentations often deprived of geometric feeling and overloaded with distracting technicalities. Chapter 5 gives another theoretical perspective, this time from Tadeusz Kaczorek, a major contributor to the theory of 2-D control systems. Chapter 6 by T. A. Johansen and M. M. Polycarpou deals with the important, yet often neglected, issue of stability of adaptive control of non-linear systems.

Part III presents various aspects of neural control and its applications. It starts with Chapter 7 by J-M. Renders and M. Saerens. These authors develop local stability results for an adaptive neurocontrol strategy. Weight adaptation is based upon Lyapunov stability theory. The next chapter, Chapter 8, is co-authored by the well-known neural networks researchers I. Rivals and L. Personnaz, who consider state-space models for neurocontrol as an alternative to the predominant input-output approach. Chapter 9 brings the intelligent control perspective on neural issues by one of the major players in the field, Donald A. Sofge. Chapter 10 by W. S. Mischo (from Henning Tolle's CMAC school) presents theory and applications of CMAC-type memories for learning control. Finally, Chapter 11 by G. Lines and T. Kavli presents the results of applying spline-based adaptive methods to the development of dynamics models. The interpretation of the spline models as fuzzy systems is also examined.

Rafał Żbikowski, Kenneth Hunt
Glasgow, Berlin: December 1995

From jourjine at arktur.zfe.siemens.de Tue Jun 18 10:06:45 1996
From: jourjine at arktur.zfe.siemens.de (Dr. Alexander Jourjine)
Date: Tue, 18 Jun 1996 16:06:45 +0200
Subject: applied theory position at SNAT, Dresden, Germany
Message-ID: <199606181406.QAA22431@kapella.nisced>

A position is open in the algorithm development group of the Applications Software Dept. at Siemens Nixdorf Advanced Technologies GmbH, a wholly owned subsidiary of Siemens Nixdorf AG. We work on face recognition, 3D object recognition, and failure prediction/novelty detection products.
An ideal candidate would be a theoretical physicist or a mathematician with exposure to, or original research in, some of the following areas:
1. Mathematical physics: differential equations, integral equations, probability theory, functional theory, differential geometry of manifolds.
2. Dynamical systems and chaos theory.
3. Optimization techniques: Monte Carlo, etc.
4. Mathematical foundations of neural networks/GAs.

Familiarity with image processing and/or IDL is a plus. We work in Unix and PC environments. The work involves close collaboration with professional software developers to conceive, design, and develop a range of commercial products. Publications, conference attendance, and patenting are encouraged. The work style is informal, but there is a lot of pressure to finish things on time. The working language is English. Salary depends on qualifications. Some knowledge of German is helpful. The position is available immediately. For further questions please e-mail Alex Jourjine at jourjine.drs at sni.de.

From ingber at ingber.com Tue Jun 18 14:23:00 1996
From: ingber at ingber.com (Lester Ingber)
Date: Tue, 18 Jun 1996 11:23:00 -0700
Subject: Papers: Canonical Momenta Indicators of EEG as well as markets
Message-ID: <199606181523.IAA20598@alumni.caltech.edu>

Papers: Canonical Momenta Indicators of EEG as well as markets

I have some preliminary results of calculations in progress, using Adaptive Simulated Annealing (ASA code in my archive), developing Canonical Momenta Indicators (CMI) from a large EEG study, where the raw data is modeled using my Statistical Mechanics of Neocortical Interactions (SMNI) model. The CMI give somewhat enhanced signal-to-noise resolution over the raw data, and are candidates to be further processed as is ordinary EEG data. The current preliminary results are in smni96_lecture.ps.Z [1300K]

%A L. Ingber
%T Statistical mechanics of neocortical interactions (SMNI)
%R SMNI Lecture Plates
%I Lester Ingber Research
%C McLean, VA
%D 1996
%O URL http://www.ingber.com/smni96_lecture.ps.Z

These plates contain recent preliminary results on Canonical Momenta Indicators (CMI) in the later sections. Under WWW, smni96_lecture.html permits viewing as a series of gif files. There are several smni... papers in my archive giving more details of the use of ASA and of SMNI. The direct application of SMNI to EEG data is relatively recent and described in smni91_eeg.ps.Z [500K]

%A L. Ingber
%T Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography
%J Phys. Rev. A
%N 6
%V 44
%P 4017-4060
%D 1991
%O URL http://www.ingber.com/smni91_eeg.ps.Z

These methods were applied to S&P 500 cash-futures markets data as well, as reported in some markets... papers in my archive, and there too the CMI give better signal-to-noise resolution than just the raw data. A short explanation is given in markets96_brief.ps.Z [40K]

%A L. Ingber
%T Trading markets with canonical momenta and adaptive simulated annealing
%R Report
%I Lester Ingber Research
%C McLean, VA
%D 1996
%O URL http://www.ingber.com/markets96_brief.ps.Z

Under WWW, markets96_brief.html permits viewing as a series of gif files. This paper gives relatively non-technical descriptions of ASA and canonical momenta, and their applications to markets and EEG. The paper was solicited by and then accepted for publication in AI in Finance, but that journal subsequently ceased all publication. Shorter versions are published in Expert Analytica and to be published in A User's Manual to Computerized Trading, I. Nelken and M.G.
Jurik, Eds. Lester ======================================================================== Instructions for Retrieval of Code and Reprints Interactively Via WWW The archive can be accessed via WWW path http://www.ingber.com/ Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.ingber.com [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Files have the same WWW and FTP paths under the main / directory; i.e., http://www.ingber.com/a_directory/a_file and ftp://ftp.ingber.com/a_directory/a_file reference the same file. Electronic Mail If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at ftpmail.ramona.vix.com (was ftpmail at decwrl.dec.com), and send only the word "help" in the body of the message. Additional Information Sorry, I cannot assume the task of mailing out hardcopies of code or papers. Limited help assisting people with their queries on my codes and papers is available only by electronic mail correspondence. Lester ======================================================================== /* RESEARCH ingber at ingber.com * * INGBER ftp://ftp.ingber.com * * LESTER http://www.ingber.com/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From horvitz at MICROSOFT.com Tue Jun 18 20:48:43 1996 From: horvitz at MICROSOFT.com (Eric Horvitz) Date: Tue, 18 Jun 1996 17:48:43 -0700 Subject: UAI-96 program and registration information Message-ID: ========================================================= P R O G R A M A N D R E G I S T R A T I O N ========================================================= ** U A I 96 ** THE TWELFTH ANNUAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE August 1-4, 1996 Reed College Portland, Oregon, USA ======================================= UAI WWW page at http://cuai-96.microsoft.com/ The effective handling of uncertainty is critical in designing, understanding, and evaluating computational systems tasked with making intelligent decisions. For over a decade, the Conference on Uncertainty in Artificial Intelligence (UAI) has served as a central meeting on advances in methods for reasoning under uncertainty in computer-based systems. The conference serves as an annual international forum for exchanging results on the use of principled methods to solve difficult challenges with inference, learning and decision making under uncertainty. * * * UAI 96 events include a full-day course on uncertain reasoning on the day before the main UAI 96 conference (Wednesday, July 31) at Reed College. Details on the course are available at: http://cuai-96.microsoft.com/tutor.htm * * * On Sunday, August 4, we will hold a UAI-KDD Special Joint Session on Learning, Probability, and Graphical Models at the Portland Convention Center. See information on the program below. * * * UAI-96 will begin shortly before KDD-96 (http://www-aig.jpl.nasa.gov/kdd96/), AAAI-96 (http://www.aaai.org/Conferences/National/1996/aaai96.html), and the AAAI workshops, and will be in close proximity to these meetings. 
* * * Refer to the UAI-96 WWW home page for late-breaking information: http://cuai-96.microsoft.com/ ========================================================== ** UAI-96 Conference Program ** ========================================================== ** Wednesday, July 31, 1996 ** Conference and Course Registration 8:00-8:30am Full-Day Course on Uncertain Reasoning 8:35-5:30pm (See: http://cuai-96.microsoft.com/ for course details) ========================================================== ** Thursday, August 1, 1996 ** Plenary Session I: Perspectives on Inference 8:45-10:15am Toward a Market Model for Bayesian Inference D. Pennock and M. Wellman A unifying framework for several probabilistic inference algorithms R. Dechter Computing upper and lower bounds on likelihoods in intractable networks T. Jaakkola and M. Jordan (Outstanding Student Paper Award) Query DAGs: A practical paradigm for implementing belief-network inference A. Darwiche and G. Provan Break 10:15-10:30am Plenary Session II: Applications of Uncertain Reasoning 10:30-12:00am MIDAS: An Influence Diagram for Management of Mildew in Winter Wheat A. Jensen and F. Jensen Optimal Factory Scheduling under Uncertainty using Stochastic Dominance A* P. Wurman and M. Wellman Supply Restoration in Power Distribution Systems --- A Case Study in Integrating Model-Based Diagnosis and Repair Planning S. Thiebaux, M. Cordier, O. Jehl, J. Krivine Network Engineering for Complex Belief Networks S. Mahoney and K. Laskey * Panel Discussion: "Reports from the front: Real-world experiences with uncertain reasoning systems" 12:00-12:45pm Moderator: Bruce D'Ambrosio Lunch 12:45-2:00pm Plenary Session III: Representation and Independence 2:00-3:40pm Context-Specific Independence in Bayesian Networks C. Boutilier, N. Friedman, M. Goldszmidt, D. Koller Binary Join Trees P. Shenoy Why is diagnosis using belief networks insensitive to imprecision in probabilities? M. Henrion, M. Pradhan, K. Huang, B. del Favero, G. Provan, P. O'Rorke On separation criterion and recovery algorithm for chain graphs Milan Studeny Poster Session I: Overview Presentations 3:40-4:00pm Poster Session I 4:00-6:00pm Inference Using Message Propagation and Topology Transformation in Vector Gaussian Continuous Networks S. Alag and A. Agogino Constraining Influence Diagram Structure by Generative Planning: An Application to the Optimization of Oil Spill Response J. Agosta An Alternative Markov Property for Chain Graphs S. Andersson, D. Madigan, and M. Perlman Object Recognition with Imperfect Perception and Redundant Description C. Barrouil and J. Lemaire A Sufficiently Fast Algorithm for Finding Close to Optimal Junction Trees A. Becker and D. Geiger Efficient Approximations for the Marginal Likelihood of Incomplete Data Given a Bayesian Network D. Chickering and D. Heckerman Independence with Lower and Upper Probabilities L. Chrisman Topological Parameters for Time-Space Tradeoff R. Dechter A Qualitative Markov Assumption and its Implications for Belief Change N. Friedman and J. Halpern A Probabilistic Model for Sensor Validation P. Ibarguengoytia and L. Sucar Bayesian Learning of Loglinear Models for Neural Connectivity K. Laskey and L. Martignon Geometric Implications of the Naive Bayes Assumption M. Peot Optimal Monte Carlo Estimation of Belief Network Inference M. Pradhan and P. Dagum On Coarsening and Feedback K. Reiser and Y. Chen A Discovery Algorithm for Directed Cyclic Graphs Thomas Richardson Efficient Enumeration of Instantiations in Bayesian Networks S. 
Srinivas and P. Nayak UAI-96 Meeting on Bayes Net Interchange Format 7:30-9:30pm (More information: http://cuai-96.microsoft.com/bnif.htm) ========================================================== ** Friday, August 2, 1996 Plenary Session IV: Time, Persistence, and Causality 8:45-10:15am A Structurally and Temporally Extended Bayesian Belief Network Model: Definitions, Properties, and Modelling Techniques C. Aliferis and G. Cooper Identifying independencies in causal graphs with feedback J. Pearl and R. Dechter Topics in Decision-Theoretic Troubleshooting: Repair and Experiment J. Breese and D. Heckerman A Polynomial-Time Algorithm for Deciding Equivalence of Directed Cyclic Graphical Models T. Richardson (Outstanding Student Paper Award) Break 10:15-10:30am Plenary Session V: Planning and Action under Uncertainty 10:30-12:00pm A Measure of Decision Flexibility R. Shachter and M. Mandelbaum A Graph-Theoretic Analysis of Information Value K. Poh and E. Horvitz Sound Abstraction of Probabilistic Actions in The Constraint Mass Assignment Framework A. Doan and P.Haddawy Flexible Policy Construction by Information Refinement M. Horsch and D. Poole * Panel Discussion: "Automated construction of models: Why, How, When?" 12:00-12:45pm Moderator: Daphne Koller Lunch 12:45-2:00pm Plenary Session VI: Qualitative Reasoning and Abstraction of Probability 2:00-3:30pm Generalized Qualitative Probability D. Lehmann Uncertain Inferences and Uncertain Conclusions H. Kyburg, Jr. Arguing for Decisions: A Qualitative Model of Decision Making B. Bonet and H. Geffner Defining Relative Likelihood in Partially Ordered Preferential Structures J. Halpern Poster Session II: Overview Presentations 3:40-4:00pm Poster Session II 4:00-6:00pm An Algorithm for Finding Minimum d-Separating Sets in Belief Networks S. Acid and L. de Campos Plan Development using Local Probabilistic Models E. Atkins, E. Durfee, K. Shin Entailment in Probability of Thresholded Generalizations D. Bamber Coping with the Limitations of Rational Inference in the Framework of Possibility Theory S. Benferhat, D. Dubois, H. Prade Decision-Analytic Approaches to Operational Decision Making: Application and Observation T. Chavez Learning Equivalence Classes of Bayesian Network Structures D. Chickering Propagation of 2-Monotone Lower Probabilities on an Undirected Graph L. Chrisman Quasi-Bayesian Strategies for Efficient Plan Generation: Application to the Planning to Observe Problem F. Cozman and E. Krotkov Some Experiments with Real-Time Decision Algorithms B. D'Ambrosio and S. Burgess An Evaluation of Structural Parameters for Probabilistic Reasoning: Results on Benchmark Circuits Y. El Fattah and R. Dechter Learning Bayesian Networks with Local Structure N. Friedman M. Goldszmidt Theoretical Foundations for Abstraction-Based Probabilistic Planning V. Ha and P. Haddawy Probabilistic Disjunctive Logic Programming L. Ngo A Framework for Decision-Theoretic Planning I: Combining the Situation Calculus, Conditional Plans, Probability and Utility D. Poole Coherent Knowledge Processing at Maximum Entropy by SPIRIT W. Roedder and C. Meyer Real-Time Estimation of Bayesian Networks R. Welch Testing Implication of Probabilistic Dependencies S.K.M. Wong UAI-96 Banquet and Invited Talk 7:30-9:30pm ========================================================= ** Saturday, August 3, 1996 ** Plenary Session VII: Developments in Belief and Possibility 8:45-10:00am Belief Revision in the Possibilistic Setting with Uncertain Inputs D. Dubois and H. 
Prade Approximations for Decision Making in the Dempster-Shafer Theory of Evidence M. Bauer Possible World Partition Sequences: A Unifying Framework for Uncertain Reasoning C. Teng Break 10:00-10:15am Plenary Session VIII: Learning and Uncertainty 10:15-11:45pm Asymptotic model selection for directed networks with hidden variables D. Geiger, D. Heckerman, C. Meek On the Sample Complexity of Learning Bayesian Networks N. Friedman and Z. Yakhini Learning Conventions in Multiagent Stochastic Domains using Likelihood Estimates C. Boutilier Critical Remarks on Single Link Search in Learning Belief Networks Y. Xiang, S.K.M Wong, N. Cercone * Panel Discussion: "Learning and Uncertainty: The Next Steps" 11:45-12:30pm Moderator: G. Cooper Lunch 12:30-2:00pm Plenary Session IX: Advances in Approximate Inference 2:00-3:45pm Computational complexity reduction for BN2O networks using similarity of states A. Kozlov and J. Singh Sample-and-Accumulate Algorithms for Belief Updating in Bayes Networks E. Santos Jr., S. Shimony, E. Williams Tail Simulation in Bayesian Networks E. Castillo, C. Solares, P. Gomez Efficient Search-Based Inference for Noisy-OR Belief Networks: TopEpsilon K. Huang and M. Henrion Break 3:45-4:00pm * Panel Discussion: "UAI by 2005: Reflections on critical problems, directions, and likely achievements for the next decade" 4:00-5:00pm Moderator: Eric Horvitz Report on the Bayes Net Interchange Format Meeting 5:00-5:20 UAI Planning Meeting 5:30-6:00 ================================================================ ** Sunday, August 4, 1996 ** UAI-KDD Special Joint Sessions Portland Convention Center Selected talks on learning graphical models from the UAI and KDD proceedings. UAI badges will be honored at the Portland Convention Center for the joint session. Plenary Session X: Learning, Probability, and Graphical Models I 8:30-12:00pm KDD: Knowledge Discovery and Data Mining: Toward a Unifying Framework U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth UAI: Efficient Approximations for the Marginal Likelihood of Incomplete Data Given a Bayesian Network D. Chickering and D. Heckerman KDD: Clustering using Monte Carlo Cross-Validation P. Smyth UAI: Learning Equivalence Classes of Bayesian Network Structures D. Chickering Break 9:45-10:05am Plenary Session XI: Learning, Probability, and Graphical Models II 10:05-12:00pm UAI: Learning Bayesian Networks with Local Structure N. Friedman and M. Goldszmidt KDD: Rethinking the Learning of Belief Network Probabilities R. Musick UAI: Bayesian Learning of Loglinear Models for Neural Connectivity K. Laskey and L. Martignon KDD: Harnessing Graphical Structure in Markov Chain Monte Carlo Learning P. Stolorz ================================================================ Organization: Program Cochairs: ================= Eric Horvitz Microsoft Research, 9S Redmond, WA 98052 Phone: (206) 936 2127 Fax: (206) 936 0502 Email: horvitz at microsoft.com WWW: http://www.research.microsoft.com/research/dtg/horvitz/ Finn Jensen Department of Mathematics and Computer Science Aalborg University Fredrik Bajers Vej 7,E DK-9220 Aalborg OE Denmark Phone: +45 98 15 85 22 (ext. 
5024) Fax: +45 98 15 81 29 Email: fvj at iesd.auc.dk WWW: http://www.iesd.auc.dk/cgi-bin/photofinger?fvj General Conference Chair (General conference inquiries): ======================== Steve Hanks Department of Computer Science and Engineering, FR-35 University of Washington Seattle, WA 98195 Tel: (206) 543 4784 Fax: (206) 543 2969 Email: hanks at cs.washington.edu UAI Program Committee ====================== Fahiem Bacchus, University of Waterloo, Cananda Salem Benferhat, IRIT Universite Paul Sabatier, France Philippe Besnard, IRISA, France Mark Boddy, Honeywell Technology Center, USA Piero Bonissone, General Electric Research Laboratory, USA Craig Boutilier, University of British Columbia, Canada Jack Breese, Microsoft Research, USA Wray Buntine, Thinkbank, USA Luis M. de Campos, Universidad de Granada, Spain Enrique Castillo, Universidad de Cantabria, Spain Eugene Charniak, Brown University, USA Greg Cooper, University of Pittsburgh, USA Bruce D'Ambrosio, Oregon State University, USA Paul Dagum, Stanford University, USA Adnan Darwiche, Rockwell Science Center, USA Tom Dean, Brown University, USA Denise Draper, University of Washington, USA Marek Druzdzel, University of Pittsburgh, USA Didier Dubois, IRIT Universite Paul Sabatier, France Ward Edwards, University of Southern California, USA Kazuo Ezawa, AT&T Labs, USA Nir Friedman, Stanford University, USA Robert Fung, Prevision, USA Linda van der Gaag, Utrecht University, Netherlands Hector Geffner, Universidad Simon Bolivar, Venezuela Dan Geiger, Technion, Israel Lluis Godo, Campus Universitat Autonoma Barcelona, Spain Robert Goldman, Honeywell Technology Center, USA Moises Goldszmidt, SRI International, USA Adam Grove, NEC Research Institute, USA Peter Haddawy, University of Wisconsin-Milwaukee, USA Petr Hajek, Academy of Sciences, Czech Republic Joseph Halpern, IBM Almaden Research Center, USA Steve Hanks, University of Washington, USA Othar Hansson, Thinkbank, USA Peter Hart, Ricoh California Research Center, USA David Heckerman, Microsoft Research, USA Max Henrion, Lumina, USA Frank Jensen, Hugin Expert A/S, Denmark Michael Jordan, MIT, USA Leslie Pack Kaelbling, Brown University, USA Uffe Kjaerulff, Aalborg University, Denmark Daphne Koller, Stanford University, USA Paul Krause, Imperial Cancer Research Fund, UK Rudolf Kruse, University of Braunschweig, Germany Henry Kyburg, University of Rochester, USA Jerome Lang, IRIT Universite Paul Sabatier, France Kathryn Laskey, George Mason University, USA Paul Lehner, George Mason University, USA John Lemmer, Rome Laboratory, USA Tod Levitt, IET, USA Ramon Lopez de Mantaras, Spanish Scientific Research Council, Spain David Madigan, University of Washington, USA Christopher Meek, Carnegie Mellon University, USA Serafin Moral, Universidad de Granada, Spain Eric Neufeld, University of Saskatchewan, Canada Ann Nicholson, Monash University, Australia Ramesh Patil, Information Sciences Institute, USC, USA Judea Pearl, University of California, Los Angeles, USA Kim Leng Poh, National University of Singapore David Poole, University of British Columbia, Canada Henri Prade, IRIT Universite Paul Sabatier, France Greg Provan, Institute for Learning Systems, USA Enrique Ruspini, SRI International, USA Romano Scozzafava, Dip. 
Me.Mo.Mat., Rome, Italy Ross Shachter, Stanford University, USA Prakash Shenoy, University of Kansas, USA Philippe Smets, IRIDIA Universite libre de Bruxelles, Belgium David Spiegelhalter, Cambridge University, UK Peter Spirtes, Carnegie Mellon University, USA Milan Studeny, Academy of Sciences, Czech Republic Sampath Srinivas, Microsoft, USA Jaap Suermondt, Hewlett Packard Laboratories, USA Marco Valtorta, University of South Carolina, USA Michael Wellman, University of Michigan, USA Nic Wilson, Oxford Brookes University, UK Yang Xiang, University of Regina, Canada Hong Xu, IRIDIA Universite libre de Bruxelles, Belgium John Yen, Texas A&M University, USA Lian Wen Zhang, Hong Kong University of Science & Technology ============================================================== UAI-96 REGISTRATION FORM ============================================================== Please return this form via email to hanks at cs.washington.edu or use the web-based registration form available at the UAI-96 home page at http://cuai-96.microsoft.com to register online. ================================= Registrant Information ================================= Name: __________________________________ Affiliation: ____________________________ Address: ____________________________ ____________________________ ____________________________ Phone: ____________________________ Email address: ____________________________ ================================= Registration information ================================= ____ Register me for the conference Non-student $275 Student $150 Students, please supply: Advisor's name and Email address: _______________________________________ ____ Register me for the full-day course on uncertain reasoning (July 31) Non-student with conference registration $85 without conference registration $135 Student with conference registration $35 without conference registration $50 ================================= Dormitory accomodation ================================= Singles, doubles, and triples are available. All include a private bedroom; doubles and triples share a bathroom. Rates: Single $26.50 per night Double $21.50 per night Triple $16.50 per night _____________ Arrival date (earliest July 30) _____________ Departure date (latest August 4) I am paying for _____ people for _____ nights at a daily rate of _________ for a total of _________ _______________________ Sharing with (doubles and triples only) _______________________ Sharing with (triples only) =============================== Meal service =============================== Conference registration includes the conference banquet on August 2nd. Reed college offers a package of three lunches during the conference for a total of $24. ____________ Please register me for the lunch service ================================= Payment Summary ================================= $______________ Conference registration $______________ Full-day course registration $______________ Lodging charges $______________ Meal charges $______________ TOTAL AMOUNT __________ Please charge my ____ Visa ____ MasterCard ___________________ Card Number ______________ Expiration date _________ I will send a check via surface mail. 
Address for checks:
Steve Hanks
Department of Computer Science and Engineering
University of Washington, Box 352350
Seattle, WA 98195-2350

_________ I will pay at the conference

============================================================
For questions about arrangements and registration issues, contact Steve Hanks (hanks at cs.washington.edu). For questions about the program, contact Eric Horvitz (horvitz at microsoft.com) or Finn Jensen (fvj at iesd.auc.dk).
============================================================

From piuri at elet.polimi.it Mon Jun 17 14:19:30 1996
From: piuri at elet.polimi.it (Vincenzo Piuri)
Date: Mon, 17 Jun 1996 20:19:30 +0200
Subject: Journal of Integrated Computer-Aided Engineering
Message-ID: <9606171801.AA12772@ipmel2.elet.polimi.it>

==============================================================================
JOURNAL OF INTEGRATED COMPUTER-AIDED ENGINEERING
John Wiley & Sons Inc., Publisher
SPECIAL ISSUE ON NEURAL TECHNIQUES FOR INDUSTRIAL APPLICATIONS
==============================================================================
>>> CALL FOR PAPERS <<<
==============================================================================

The Journal of Integrated Computer-Aided Engineering has planned a special issue on Neural Techniques for Industrial Applications. Interested authors are invited to submit manuscripts based on their recent results on any aspect of identification, prediction, and control in industrial applications, including, but not limited to, theoretical foundations of neural computation, neural models, network optimization, learning procedures, sensitivity analysis, experimental results, dedicated architectures, software simulation, implementations, embedded systems, and practical application cases. One of the main focuses of the Journal is in fact the integration of new and emerging computing technologies for innovative solutions to engineering problems.

Submitted manuscripts should not have been previously published in journals or books, nor be currently under consideration elsewhere. All manuscripts should include a cover page containing: the title of the paper, full name(s) and affiliation(s) of the author(s), complete surface and electronic (if available) address(es), and telephone and fax number(s). The corresponding author should be clearly identified on the title page. The manuscript should include a 300-word abstract and a list of keywords characterising the paper's contents.

Deadlines:
November 30, 1996: five copies of the manuscript
March 30, 1997: notification of acceptance/rejection
May 1, 1997: final version of the manuscript, including the original artwork, author(s)' biographical information, and signed copyright forms

Please submit your manuscript(s), or direct your questions, to either of the Guest Editors:

Vincenzo Piuri
Department of Electronics and Information
Politecnico di Milano
Piazza L. da Vinci 32, 20133 Milano, Italy
Phone +39-2-2399-3606 Fax +39-2-2399-3411
Email piuri at elet.polimi.it

Cesare Alippi
C.S.I.S.E.I.
CNR - National Research Council
Piazza L. da Vinci 32, 20133 Milano, Italy
Phone +39-2-2399-3512 Fax +39-2-2399-3411
Email alippi at elet.polimi.it
==============================================================================

From nikola at prosun.first.gmd.de Wed Jun 19 04:28:07 1996
From: nikola at prosun.first.gmd.de (Nikola Serbedzija)
Date: Wed, 19 Jun 96 10:28:07 +0200
Subject: CFP: Session on Neurosimulation - 15th IMACS World Congress
Message-ID: <9606190828.AA18836@prosun.first.gmd.de>

================================================================
15th IMACS WORLD CONGRESS on Scientific Computation, Modelling and Applied Mathematics
Berlin, Germany, 24-29 August 1997
Sponsored by DFG, IEEE, IFAC, IFIP, IFORS, IMEKO.
General Chair: Prof. A. Sydow (GMD FIRST Berlin, Germany)
Honorary Chair: Prof. R. Vichnevetsky (Rutgers University, USA)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CALL FOR PAPERS for the Organized Session on "Simulation of Artificial Neural Networks"
Session Organizers: Gerd Kock and Nikola Serbedzija
================================================================

The aim of this session is to reflect current techniques and trends in the simulation of artificial neural networks (ANNs). Both software and hardware approaches are solicited. Topics of interest are, but not limited to:

* General Aspects of Neural Simulations
- design issues for simulation tools
- inherent parallelism of ANNs
- general-purpose neural simulations
- special-purpose neural simulations
- fault tolerance aspects
* Parallel Implementation of ANNs
- data parallel implementations
- control parallel implementations
* Hardware Emulation of ANNs
- silicon technology
- optical technology
- molecular technology
* General Simulation Tools
- graphic/menu-based tools
- module libraries
- specific programming languages
* Applications
- applications using/demanding parallel implementations or hardware emulations
- applications using/demanding analysis tools or graphical representations provided by simulation tools
* Hybrid Systems
- the topics above are also of interest with respect to related hybrid systems (neuro-fuzzy, genetic algorithms)

Authors interested in this session are invited to submit 3 copies of an extended summary (about 4 pages) of the paper to one of the session organizers by December 1st, 1996. Submissions may also be made by email. Notification of acceptance/rejection will be mailed by February 28th, 1997. The authors of accepted papers will also receive detailed instructions for the preparation of final manuscripts. The submission must contain the following information: the name(s) of the author(s), title(s), affiliation(s), and complete address(es) (including email, phone, fax). In addition, the author responsible for communication has to be indicated.

Important dates:
---------------
December 1st, 1996: Extended summary due
February 28th, 1997: Notification of acceptance/rejection
April 30th, 1997: Camera-ready paper due

Addresses to send contributions:
-------------------------------
Dr. Gerd Kock
GMD FIRST, Rudower Chaussee 5, D-12489 Berlin, Germany
e-mail: gerd at first.gmd.de
tel: +49 30 / 6392 1863
Dr. Nikola Serbedzija
GMD FIRST, Rudower Chaussee 5, D-12489 Berlin, Germany
e-mail: nikola at first.gmd.de
tel: +49 30 / 6392 1873

================================================================
IMACS, the International Association for Mathematics and Computers in Simulation, is an organization of professionals and scientists concerned with computers, computation and applied mathematics, in particular as they apply to the simulation of systems. This includes numerical analysis, mathematical modelling, approximation theory, computer hardware and software, programming languages and compilers. IMACS also concerns itself with the general philosophy of scientific computation and applied mathematics, and with their impact on society and on disciplinary and interdisciplinary research. IMACS is one of the international scientific associations (with IFAC, IFORS, IFIP and IMEKO) represented in FIACC, the five international organizations in the area of computers, automation, instrumentation and the relevant branches of applied mathematics. Of the five, IMACS (which changed its name from AICA in 1976) is the oldest, having been founded in 1956. For more information about the 15th IMACS WORLD CONGRESS see the WWW page http://www.first.gmd.de/imacs97/
================================================================

From piuri at elet.polimi.it Wed Jun 19 15:44:30 1996
From: piuri at elet.polimi.it (Vincenzo Piuri)
Date: Wed, 19 Jun 1996 21:44:30 +0200
Subject: IMACS97 - call for papers
Message-ID: <9606191944.AA28211@ipmel2.elet.polimi.it>

================================================================
15th IMACS WORLD CONGRESS on Scientific Computation, Modelling and Applied Mathematics
Berlin, Germany, 24-29 August 1997
Sponsored by IMACS, DFG, IEEE, IFAC, IFIP, IFORS, IMEKO.
General Chair: Prof. A. Sydow, GMD FIRST, Germany
Honorary Chair: Prof. R. Vichnevetsky, Rutgers University, USA
================================================================
Call for Papers for the Special Sessions on Neural Technologies for Control and Signal/Image Processing
================================================================

The aim of this meeting is to survey the state of the art and explore future trends in various aspects of computational engineering, encompassing both theory and applications. Due to the increasing interest in industry and in applied research, three special sessions on neural technologies will be organized, covering theoretical aspects, implementations, and applications concerning system control and signal/image processing.

Session 1: "Neural Architectures and Implementations"
Topics of interest are, but not limited to: theoretical aspects of neural architectures, optimization of neural architectures, design and implementation of digital, analog and mixed structures, software implementations, fault tolerance, architectural sensitivity and design issues. Particular emphasis will be given to architectures and implementations for control and signal processing.

Session 2: "Neural Applications for Identification and Control"
Topics of interest are, but not limited to: theoretical models for identification and control, methodologies for neural identification, strategies for neural control, architectures for identification and control, forecasting, applications, adaptability, sensitivity.
Session 3: "Neural Techniques for Signal/Image Processing"
Topics of interest are, but not limited to: theoretical models for signal/image analysis, methodologies for neural signal/image analysis, dedicated architectures, software implementations, optimization of neural paradigms, applications.

Authors interested in the above Special Sessions are invited to submit an extended summary (about 5 pages) or the preliminary version of the paper to the Special Session Organizer by November 30, 1996. The submission must contain the following information: title, authors' names and affiliations, the name and complete address (including affiliation, mail address, phone, fax, and email) of the contact author, and the name of the special session. Submission can also be done by fax, email (plain postscript uuencoded files only), or ftp (name your postscript file after the first author, connect by anonymous ftp to ipmel2.elet.polimi.it, put your file in the directory pub/papers/Vincenzo.Piuri/imacs97, and send an email to the session organizer announcing the submission). Rejection or preliminary acceptance will be mailed by December 15, 1996. Final acceptance will be mailed by February 28, 1997. The final camera-ready version of the paper is due by April 30, 1997.

Prof. Vincenzo Piuri
Organizer of the Special Sessions on Neural Technologies
Department of Electronics and Information
Politecnico di Milano
Piazza L. da Vinci 32, 20133 Milano, Italy
fax +39-2-2399-3411
email piuri at elet.polimi.it
================================================================

From M.J.van_der_Heijden at Physiology.MedFac.LeidenUniv.nl Thu Jun 20 16:06:42 1996
From: M.J.van_der_Heijden at Physiology.MedFac.LeidenUniv.nl (Marcel J. van der Heyden)
Date: Thu, 20 Jun 1996 13:06:42 -0700
Subject: Proceedings of the HELNET Workshops on Neural Networks
Message-ID: <31C9AF52.6FDA@rullf2.MedFac.LeidenUniv.nl>

HELNET International Workshop on Neural Networks
Proceedings Volume I/II (1994/1995)
M.J. van der Heyden, J. Mrsic-Floegel and K. Weigl (eds)
** http://www.leidenuniv.nl/medfac/fff/groepc/chaos/helnet/ **

The HELNET workshops are informal meetings primarily targeted towards young researchers from neural networks and related fields. They are traditionally organised a few days prior to the ICANN conferences. Participants are offered the opportunity to present and extensively discuss their work, as well as more general topics from the neural network field. The HELNET proceedings treat a large variety of topics, from the formal description of networks to neurobiology, and from conceptual viewpoints to commercial applications. This collection of papers thus gives a comprehensive overview of current and ongoing research on neural networks. The proceedings can be browsed on-line at: http://www.leidenuniv.nl/medfac/fff/groepc/chaos/helnet/ On-line ordering is also provided, using credit card details or having an invoice sent.

----------------------------------------------------------------------
Table of Contents:

Development of Spatio-Temporal Receptive Fields for Motion Detection in a Linsker Type Model (S. Wimbauer, W. Gerstner and J.L. van Hemmen)
The Dependence on Size and Calcium Dynamics of Motoneuron Firing Properties: A Model Study (Marcel J. van der Heyden, A.A.J. Hilgevoord and L.J. Bour)
Annealing in Minimal Free Energy Vector Quantization (D.R. Dersch and P. Tavan)
Projection Learning: A Critical Review of Practical Aspects (Konrad Weigl)
Transforming Hard Problems into Linearly Separable Ones with Incremental Radial Basis Function Networks (B. Fritzke)
Why are Neural Nets not Intelligent? (Harald Huening)
Aspects of Information Detection using Entropy (Janko Mrsic-Floegel)
Generating a Fractal Image by Programmed Cell Death: a Biological Communication Strategy for Parallel Computers (David W.N. Sharp)
The Impossibility to Localize Electrical Activity in the Brain from EEG-Recordings by Means of Artificial Neural Networks (Sylvia C. Pont and Bob W. van Dijk)
Analysis of Electronic Circuits with Evolutionary Strategies (Harald Gerlach and Joerg D. Becker)
Neural Networks and Statistics: A Brief Overview (Marcel J. van der Heyden)
Self-Controlling Chaos in Neuromodules (Nico Stollenwerk)
The Effects of Feature Selection on Backpropagation in Feed-Forward Neural Networks (Selwyn Piramuthu)
Exploring the Role of Emotion in the Design of Autonomous Systems (Raju S. Bapi)
A Fusion of Game-Theory Based Learning and Projection Learning for Image Classification (Konrad Weigl and Shan Yu)
Codierung eines Problems in die Sprache der Evolution [Encoding a Problem in the Language of Evolution] (Harald Gerlach)
Modelling the Wiener Cascade Using Time Delayed and Recurrent Neural Networks (M.G. Wagner, I.M. Thompson, S. Manchanda, P.G. Hearne and G.R.R. Greene)
Niche Memories for Temporal Sequence Processing: Learning, Recognition and Tracking Using Neural Representations (Janko Mrsic-Floegel)
Stretching the Limits of Learning Without Modules (Antal van den Bosch and Ton Weijters)
The Future for Weightless Systems (Nick Bradshaw)
A Way to Improve the Error Correction Capability of Hopfield Associative Memory in the Case of Saturation (Dmitry O. Gorodnichy)

From electro at batc.allied.com Fri Jun 21 16:46:14 1996
From: electro at batc.allied.com (electro@batc.allied.com)
Date: Fri, 21 Jun 1996 14:46:14 CST
Subject: Self-organizing Systems Research
Message-ID: <31CAFC07-00000001@proton>

I am looking for recent research papers that describe algorithms and/or methods that enable a totally interconnected network of processing nodes to self-organize (i.e., determine an optimal configuration by eliminating redundant nodes, minimizing node connections, and establishing node priorities) in order to meet a predefined set of spatial and functional criteria. I am somewhat familiar with Kohonen's "Self-Organizing Maps" and I would be interested in applications of these maps and other algorithms. Thanks in advance for any information that you can provide.

Richard A. Burne
electro at batc.allied.com
AlliedSignal Inc.

From shultz at psych.mcgill.ca Fri Jun 21 10:56:46 1996
From: shultz at psych.mcgill.ca (Tom Shultz)
Date: Fri, 21 Jun 1996 10:56:46 -0400
Subject: Papers available on cognitive development, knowledge and learning, cognitive consistency

The following four papers are available at the WWW site for LNSC (Laboratory for Natural and Simulated Cognition at McGill University): http://www.psych.mcgill.ca/labs/lnsc/html/Lab-Home.html Alternatively, ftp addresses are given for each paper.

Shultz, T. R., Schmidt, W. C., Buckingham, D., & Mareschal, D. (1995). Modeling cognitive development with a generative connectionist algorithm (pp. 205-261). In T. J. Simon & G. S. Halford (Eds.), Developing cognitive competence: New approaches to process modeling. Hillsdale, NJ: Erlbaum.

One of the key unsolved problems in cognitive development is the precise specification of developmental transition mechanisms. In this chapter, we focus on the applicability of a specific generative connectionist algorithm, cascade-correlation (Fahlman & Lebiere, 1990), as a process model of transition mechanisms. Generative connectionist algorithms build their own network topologies as they learn, allowing them to simulate both qualitative and quantitative developmental changes. We compare and contrast cascade-correlation, Piaget's notions of assimilation and accommodation, Papert's little-known but historically relevant genetron model, conventional back-propagation networks, and rule-based models. Specific cascade-correlation models of a wide range of developmental phenomena are presented. These include the balance scale task; concepts of potency and resistance in causal reasoning; seriation; integration of the concepts of distance, time, and velocity; and personal pronouns. Descriptions of these simulations stress the degree to which the models capture the essential known psychological phenomena, generate new testable predictions, and provide explanatory insights. In several cases, the simulation results underscore clear advantages of connectionist modeling techniques. Abstraction across the various models yields a set of domain-general constraints for cognitive development. Particular domain-specific constraints are identified. Finally, the models demonstrate that connectionist approaches can be successful even on relatively high-level cognitive tasks.

ftp: ego.psych.mcgill.ca/pub/shultz/cog-comp.ps.gz
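For background on the cascade-correlation algorithm used in these models: candidate hidden units are trained to maximize the magnitude of the correlation between the candidate's output and the network's residual output error, and the best candidate is then frozen and installed as a new hidden unit. In Fahlman and Lebiere's (1990) formulation, the quantity maximized is (up to notation)

    S = \sum_o | \sum_p (V_p - \bar{V}) (E_{p,o} - \bar{E}_o) |,

where V_p is the candidate's activation on training pattern p, E_{p,o} is the residual error at output o, and bars denote averages over patterns.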
--------------------------------------------------------------------------------

Shultz, T. R., & Lepper, M. R. (1995). Cognitive dissonance reduction as constraint satisfaction. Technical Report No. 2195, McGill Papers in Cognitive Science, McGill University, Montréal.

A constraint satisfaction neural network model (the consonance model) simulated data from the two major cognitive dissonance paradigms of insufficient justification and free choice. In several cases, the model fit the human data better than did cognitive dissonance theory. The superior fits were due to the inclusion of constraints that were not part of dissonance theory and to the increased precision inherent in this computational approach. Predictions generated by the model for a free choice between undesirable alternatives were confirmed in a new psychological experiment. The success of the consonance model underscores important, unforeseen similarities between what had formerly been regarded as the rather exotic process of dissonance reduction and a variety of other, more mundane psychological processes. Many of these processes can be understood as the progressive application of constraints supplied by beliefs and attitudes.

ftp: ego.psych.mcgill.ca/pub/shultz/cog-diss.ps.gz

--------------------------------------------------------------------------------

Shultz, T. R., Oshima-Takane, Y., & Takane, Y. (1995). Analysis of unstandardized contributions in cross connected networks. In D. Touretzky, G. Tesauro, & T. K. Leen (Eds.), Advances in Neural Information Processing Systems 7 (pp. 601-608). Cambridge, MA: MIT Press.

Understanding knowledge representations in neural nets has been a difficult problem. Principal components analysis (PCA) of contributions (products of sending activations and connection weights) has yielded valuable insights into knowledge representations, but much of this work has focused on the correlation matrix of contributions. The present work shows that analyzing the variance-covariance matrix of contributions yields more valid insights by taking account of weights.

ftp: ego.psych.mcgill.ca/pub/shultz/contcros.ps.gz
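To make the notion of a contribution concrete, here is a minimal numpy sketch (an illustration under assumed layer shapes and variable names, not the authors' code): contributions are the products of sending activations and connection weights, and PCA on their variance-covariance matrix, unlike PCA on their correlation matrix, retains the scale information carried by the weights.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5))  # sending activations: patterns x senders (assumed)
    W = rng.standard_normal((5, 3))    # connection weights: senders x receivers (assumed)

    # Contribution of sender i to receiver j on pattern p is A[p, i] * W[i, j];
    # flatten to one column per connection.
    C = (A[:, :, None] * W[None, :, :]).reshape(len(A), -1)

    # PCA on the correlation matrix standardizes each connection,
    # discarding the magnitudes of the weights ...
    eig_corr = np.linalg.eigvalsh(np.corrcoef(C, rowvar=False))
    # ... whereas PCA on the variance-covariance matrix keeps them.
    eig_cov = np.linalg.eigvalsh(np.cov(C, rowvar=False))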
--------------------------------------------------------------------------------

Tetewsky, S. J., Shultz, T. R., & Takane, Y. (1995). Training regimens and function compatibility: Implications for understanding the effects of knowledge on concept learning. Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society (pp. 304-309). Hillsdale, NJ: Erlbaum.

Previous research has indicated that breaking a task into subtasks can both facilitate and interfere with learning in neural networks. Although these results appear to be contradictory, they actually reflect some underlying principles governing learning in neural networks. Using the cascade-correlation learning algorithm, we devised a concept learning task that would let us specify the conditions under which subtasking would facilitate or interfere with learning. The results indicated that subtasking facilitated learning when the initial subtask involved learning a function compatible with that characterizing the rest of the task, and inhibited learning when the initial subtask involved a function incompatible with the rest of the task. These results are discussed with regard to their implications for understanding the effect of knowledge on concept learning.

ftp: ego.psych.mcgill.ca/pub/shultz/regimens.ps.gz

--------------------------------------------------------------------------------

A variety of other papers in the areas of cognitive development, knowledge and learning, analyzing knowledge representations in neural networks, and cognitive consistency can be found at the LNSC site.

Tom

------------------------------------------------------------------------
Thomas R. Shultz, Professor, Department of Psychology, McGill University
1205 Penfield Ave., Montreal, Quebec, Canada H3A 1B1
Phone: 514-398-6139 Fax: 514-398-4896
Email: shultz at psych.mcgill.ca
WWW: http://www.psych.mcgill.ca/labs/lnsc/html/Lab-Home.html
------------------------------------------------------------------------

From harmonme at aa.wpafb.af.mil Tue Jun 25 10:47:58 1996
From: harmonme at aa.wpafb.af.mil (Mance E. Harmon)
Date: Tue, 25 Jun 96 10:47:58 -0400
Subject: Tech Report Available
Message-ID: <960625104756.32258@ethel.aa.wpafb.af.mil.0>

Multi-Agent Residual Advantage Learning With General Function Approximation

Mance E. Harmon
Wright Laboratory, WL/AACF
2241 Avionics Circle
Wright-Patterson AFB, Ohio 45433-7318
harmonme at aa.wpafb.af.mil

Leemon C. Baird III
U.S.A.F. Academy
2354 Fairchild Dr., Suite 6K41
USAFA, Colorado 80840-6234
baird at cs.usafa.af.mil

ABSTRACT: A new algorithm, advantage learning, is presented that improves on advantage updating by requiring that a single function be learned rather than two. Furthermore, advantage learning requires only a single type of update, the learning update, while advantage updating requires two different types of updates, a learning update and a normalization update. The reinforcement learning system uses the residual form of advantage learning. An application of reinforcement learning to a Markov game is presented. The test-bed has continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. On each time step, each player chooses one of two possible actions: turn left or turn right, resulting in a 90 degree instantaneous change in the aircraft's heading. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. Speed of learning is increased by a new algorithm, Incremental Delta-Delta (IDD), which extends Jacobs' (1988) Delta-Delta for use in incremental training, and differs from Sutton's Incremental Delta-Bar-Delta (1992) in that it does not require the use of a trace and is amenable for use with general function approximation systems. The advantage learning algorithm for optimal control is modified for Markov games in order to find the minimax point, rather than the maximum. Empirical results gathered using the missile/aircraft test-bed validate theory that suggests residual forms of reinforcement learning algorithms converge to a local minimum of the mean squared Bellman residual when using general function approximation systems. Also, to our knowledge, this is the first time an approximate second-order method has been used with residual algorithms. Empirical results are presented comparing convergence rates with and without the use of IDD for the reinforcement learning test-bed described above and for a supervised learning test-bed. The results of these experiments demonstrate that IDD increased the rate of convergence and resulted in an order of magnitude lower total asymptotic error than when using backpropagation alone.

Available at http://www.aa.wpafb.af.mil/~harmonme
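The report itself gives the exact update rules; purely as a generic illustration of the "residual" idea the abstract refers to, here is a sketch of one residual-gradient step on the mean squared Bellman residual for a simplified linear value function (all names and constants are assumptions for exposition; the report's algorithm operates on an advantage function stored in a sigmoidal network, not this linear V).

    import numpy as np

    def residual_step(w, phi_s, phi_s2, r, gamma=0.9, lr=0.01):
        """One gradient-descent step on E = 0.5 * (r + gamma*V(s') - V(s))^2
        for a linear value function V(s) = w . phi(s). The gradient flows
        through BOTH V(s) and V(s'), which is what makes the method
        'residual' and lets it descend on the Bellman residual itself."""
        delta = r + gamma * (w @ phi_s2) - (w @ phi_s)   # Bellman residual
        grad = delta * (gamma * phi_s2 - phi_s)          # dE/dw
        return w - lr * grad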
On each time step, each player chooses one of two possible actions, turn left or turn right, resulting in a 90 degree instantaneous change in the aircraft's heading. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. Speed of learning is increased by a new algorithm, Incremental Delta-Delta (IDD), which extends Jacobs' (1988) Delta-Delta for use in incremental training, and differs from Sutton's Incremental Delta-Bar-Delta (1992) in that it does not require the use of a trace and is amenable to use with general function approximation systems. The advantage learning algorithm for optimal control is modified for Markov games in order to find the minimax point, rather than the maximum. Empirical results gathered using the missile/aircraft test-bed validate theory that suggests residual forms of reinforcement learning algorithms converge to a local minimum of the mean squared Bellman residual when using general function approximation systems. Also, to our knowledge, this is the first time an approximate second-order method has been used with residual algorithms. Empirical results are presented comparing convergence rates with and without the use of IDD for the reinforcement learning test-bed described above and for a supervised learning test-bed. The results of these experiments demonstrate that IDD increased the rate of convergence and resulted in an order of magnitude lower total asymptotic error than when using backpropagation alone.

Available at http://www.aa.wpafb.af.mil/~harmonme

From fritzke at neuroinformatik.ruhr-uni-bochum.de Tue Jun 25 13:14:54 1996
From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke)
Date: Tue, 25 Jun 1996 19:14:54 +0200 (MET DST)
Subject: Java neural network software available
Message-ID: <9606251714.AA29703@hermes.neuroinformatik.ruhr-uni-bochum.de>

This is to announce the availability of a Java implementation of the following algorithms and neural network models:
- Hard Competitive Learning (standard algorithm)
- Neural Gas (Martinetz and Schulten 1991)
- Neural Gas with Competitive Hebbian Learning (Martinetz and Schulten 1991)
- Competitive Hebbian Learning (Martinetz and Schulten 1991, Martinetz 1993)
- Growing Neural Gas (Fritzke 1995)
(A toy sketch of the simplest of these, hard competitive learning, appears below.)

The software (written by my student Hartmut Loos) is distributed under the GNU General Public License. It allows the user to experiment with the different methods using various probability distributions. All model parameters can be set interactively. A teach mode is provided to observe the models in "slow motion" if so desired.

The software can be accessed at

http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html

where it is embedded as a Java applet into a Web page and is downloaded for immediate execution when you visit this page (if you have a slow link or only ftp access, please see below).
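For readers who just want the flavor of the simplest of these methods, standard hard competitive learning amounts to a winner-take-all update that fits in a few lines. The sketch below is plain Python rather than the applet's Java, every name in it is invented, and it reconstructs only the textbook rule, not the DemoGNG code:

import numpy as np

def hard_competitive_learning(data, n_units=10, epochs=50, lr=0.05, seed=0):
    # Winner-take-all adaptation: each input moves only its nearest
    # reference vector a fraction `lr` of the way toward itself.
    rng = np.random.default_rng(seed)
    units = data[rng.choice(len(data), size=n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            units[winner] += lr * (x - units[winner])
    return units

# Example: place 10 units on a noisy ring-shaped distribution.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=1000)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
ring += 0.1 * rng.standard_normal(ring.shape)
print(hard_competitive_learning(ring))

The other models in the list refine exactly this loop: Neural Gas adapts all units with a rank-dependent rate, and Growing Neural Gas additionally inserts and deletes units during training.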
I have prepared an accompanying html-paper entitled "Some competitive learning methods" (yes, there may be catchier titles 8v) ) describing the implemented models in detail:

http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper/

It is possible to download the complete software and a Postscript version of the paper at the following addresses:

ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz
ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/DemoGNG-1.00.tar.gz

Please send comments regarding the paper to me and comments regarding the Java software to Hartmut (loos at neuroinformatik.ruhr-uni-bochum.de).

Enjoy,
Bernd Fritzke

Acknowledgment: This work was done in the research group of Christoph von der Malsburg at Bochum. I would like to thank him for several helpful discussions and the excellent working environment he has created there.

--
Bernd Fritzke * Institut für Neuroinformatik   Tel. +49-234 7007845
Ruhr-Universität Bochum * Germany              FAX. +49-234 7094210
WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html

From malsburg at neuroinformatik.ruhr-uni-bochum.de Tue Jun 25 05:37:26 1996
From: malsburg at neuroinformatik.ruhr-uni-bochum.de (malsburg@neuroinformatik.ruhr-uni-bochum.de)
Date: Tue, 25 Jun 96 11:37:26 +0200
Subject: Announcement: Web pages on vision, robotics and neural networks
Message-ID: <9606250937.AA09845@circe.neuroinformatik.ruhr-uni-bochum.de>

This is to announce the availability of several new WWW pages describing work performed in my research group at Bochum, Germany. The pages can be accessed (preferably with a frame-capable browser) at

http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research.html

and cover the following topics:

COMPUTER VISION - segmentation - invariant object recognition - scene analysis - face recognition - tracking
ROBOTICS - visually guided grasping - adaptive kinematics
BIOLOGICALLY MOTIVATED MODELS - segmentation - object recognition
GROWING SELF-ORGANIZING NETWORKS - clustering - classification - topology learning

Please send any comments or questions directly to the respective authors.

Christoph von der Malsburg
Institute for Neural Computation
Ruhr-University Bochum
Germany

From iconip96 at cs.cuhk.hk Mon Jun 24 10:12:52 1996
From: iconip96 at cs.cuhk.hk (iconip96)
Date: Mon, 24 Jun 1996 22:12:52 +0800 (HKT)
Subject: ICONIP'96 - Call for Participation
Message-ID: <199606241412.WAA01725@cs.cuhk.hk>

***********************************
ICONIP'96 -- CALL FOR PARTICIPATION
http://www.cs.cuhk.hk/iconip96
***********************************

1996 International Conference on Neural Information Processing
The Annual Conference of the Asian Pacific Neural Network Assembly
ICONIP'96, September 24 - 27, 1996
Hong Kong Convention and Exhibition Center, Wan Chai, Hong Kong

In cooperation with
IEEE/NNC - IEEE Neural Networks Council
INNS - International Neural Network Society
ENNS - European Neural Network Society
JNNS - Japanese Neural Network Society
CNNC - China Neural Networks Council
======================================================================

The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing.
The conference also serves to stimulate local and regional interest in neural information processing and its potential applications to industries indigenous to this region.

The conference consists of a one-day Financial Engineering tutorial given by well-known experts in the field and a three-day program with only four parallel sessions. The overall program covers major topics in neural information processing, reflects the latest progress with a good balance between scientific studies and industrial applications, and features neural information processing approaches to financial engineering. The conference contains a high-quality contributed program and a very strong invited program. For the contributed program, we received 314 submissions from 33 countries, including the Asia-Pacific region, Europe, and North and South America. Each submitted paper was sent to three experts in the related fields from all over the world for reviewing. With their rigorous and timely efforts, a high-quality technical program has been achieved, with an overall acceptance rate of around 60% (20% oral presentation, 20% spotlight presentation, 20% poster presentation). A spotlight presentation is a poster presentation plus a 5-minute oral presentation that highlights the contribution of the poster paper. For the invited program, we have 5 keynote talks, 3 honored talks, and 22 invited talks given by well-known international neural information processing scientists and experts. The invited program also features 8 special sessions on current topics of interest.

Hong Kong is one of the most dynamic and international cities in the world. She is a major financial center and has world-class facilities, easy accessibility, exciting entertainment, delicious cuisine and high levels of service and professionalism. Hong Kong people are known for their talent and drive. Come to Hong Kong! Visit this Eastern Pearl in this historical period, in transition from a British colony to a special administrative region of China in 1997. Hong Kong is a place where you can find both traditional Chinese culture and Western culture. The mid-autumn festival will be on the last day of the conference. You will experience how the people of Hong Kong celebrate this festival by eating delicious moon-cakes and lighting up marvelous paper lanterns everywhere in the evening. You will also see the nearly completed Tsing Ma Bridge -- the world's longest span road-rail suspension bridge. Do not miss this opportunity to take part in ICONIP'96 and visit Hong Kong.

**********************************
Tutorials On Financial Engineering
**********************************
1. Professor John Moody, Oregon Graduate Institute, USA
"Time Series Modeling: Classical and Nonlinear Approaches"
2. Professor Halbert White, University of California, San Diego, USA
"Option Pricing In Modern Finance Theory and the Relevance Of Artificial Neural Networks"
3. Professor A-P. N. Refenes, London Business School, UK
"Neural Networks in Financial Engineering"

*************
Keynote Talks
*************
1. Professor Shun-ichi Amari, Tokyo University
"Information Geometry of Neural Networks"
2. Professor Yaser Abu-Mostafa, California Institute of Technology, USA
"The Bin Model for Learning and Generalization"
3. Professor Leo Breiman, University of California, Berkeley, USA
"Democratizing Predictors"
4. Professor Christoph von der Malsburg, Ruhr-Universität Bochum, Germany
"Scene Analysis Based on Dynamic Links" (tentatively)
5.
Professor Erkki Oja, Helsinki University of Technology, Finland
"Blind Signal Separation by Neural Networks"

**************
Honored Talks
**************
1. Professor Rolf Eckmiller, University of Bonn, Germany
"Concerning the Development of Retina Implants with Neural Nets"
2. Professor Mitsuo Kawato, ATR Human Information Processing Research Lab, Japan
"Generalized Linear Model Analysis of Cerebellar Motor Learning"
3. Professor Kunihiko Fukushima, Osaka University, Japan
"Neural Network Model of Spatial Memory"

*************
INVITED TALKS
*************
Yoshua Bengio, Robert L. Fry, Hecht-Nielsen, Nathan Intrator, Arun Jagota, Bart Kosko*, John Moody, Barak Pearlmutter, A-P. N. Refenes, Michael P. Perrone, Juergen Schmidhuber, Sara A. Solla, Harold Szu*, Andreas Weigend*, Halbert White, Alan L. Yuille, Laiwan Chan, Nikola Kasabov, Soo-Young Lee, Yousou Wu, Ah Chung Tsoi
*Tentatively Confirmed

*****************************
CONFERENCE REGISTRATION FEE
*****************************
On & Before July 1, 1996   Member       HKD $2,800
On & Before July 1, 1996   Non-Member   HKD $3,200
Late & On-Site             Member       HKD $3,200
Late & On-Site             Non-Member   HKD $3,600
Student Registration Fee                HKD $1,000

For registration and other related ICONIP'96 information please browse our WWW site at http://www.cs.cuhk.hk/iconip96. Please send your inquiries to:

ICONIP'96 Secretariat
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
Fax (852) 2603-5024
E-mail: iconip96 at cs.cuhk.hk
http://www.cs.cuhk.hk/iconip96

From smagt at dlr.de Wed Jun 26 07:38:28 1996
From: smagt at dlr.de (Patrick van der Smagt)
Date: Wed, 26 Jun 1996 13:38:28 +0200
Subject: postprint: local vs. global learning
Message-ID: <31D12134.4EFA@dlr.de>

This paper appeared last November at ICNN'95.

P. van der Smagt and F. Groen, "Approximation with neural networks: Between local and global approximation." In Proceedings of the 1995 International Conference on Neural Networks, pp. II:1060-II:1064 (invited paper).

Abstract: We investigate neural network based approximation methods. These methods depend on the locality of the basis functions. After discussing local and global basis functions, we propose a multi-resolution hierarchical method. The various resolutions are stored at various levels in a tree. At the root of the tree, a global approximation is kept; the leaves store the learning samples themselves. Intermediate nodes store intermediate representations. In order to find an optimal partitioning of the input space, self-organising maps (SOMs) are used. The proposed method has implementational problems reminiscent of those encountered in many-particle simulations. We will investigate the parallel implementation of this method, using parallel hierarchical methods for many-particle simulations as a starting point.

Search for "global" on http://www.op.dlr.de/FF-DR-RS/Smagt/papers/

From PAOLAD at iss.infn.it Tue Jun 25 13:38:40 1996
From: PAOLAD at iss.infn.it (PAOLAD@iss.infn.it)
Date: Tue, 25 Jun 1996 19:38:40 +0200 (WET-DST)
Subject: workshop on neural networks (Italy, September 23-27)
Message-ID: <960625193840.1600136@iss.infn.it>

WORKSHOP ON NEURAL NETWORKS: FROM BIOLOGY TO HARDWARE IMPLEMENTATIONS
Grand Hotel Chia Laguna, Chia (Sardinia, Italy), September 23-27 1996

Organizing Committee: D. Amit, W. Bialek, P. Del Giudice, E.T.
Rolls

The Workshop, cross-disciplinary in character, will focus mostly on the following topics:
- the dynamical interpretation of cross correlations
- what can be extracted from information theory analysis of spike trains
- the potentialities of multiple recordings
- the role of the timing of the spikes
- the role of hardware implementations in the context of biological modelling

Partial list of speakers:
L Abbott (Brandeis University)
D Amit (The Hebrew University and Rome University)
W Bialek (NEC Research Institute)
R Douglas (ETH, Zurich)
D Kleinfeld (University of California)
M Nicolelis (Duke University Medical Center)
B Richmond (National Institute of Mental Health, Bethesda)
E Rolls (Oxford University)
M Shadlen (University of Washington School of Medicine)
S Thorpe (Centre de Recherche Cerveau et Cognition, Toulouse)
M Tsodyks (Weizmann Institute)
E Vaadia (Hadassah Medical School, The Hebrew University)
F Van Der Velde (Leiden University)
A Villa (Lausanne University)

The planned schedule is meant to be 'discussion oriented' and will be approximately as follows: two invited talks per session, each followed by a discussion, and a set of poster contributions for each session, also followed by a discussion. Contributed papers will only be accepted as posters; all accepted contributions will be published in the proceedings volume.

The registration fee is 220 US$, to be paid upon arrival. A limited number of fellowships will be available for participants coming from countries belonging to the European Union or Israel.

Chia is a beautiful place at the southern tip of Sardinia. It is a drive of about one hour from Cagliari airport, which has connecting flights from/to Rome, Florence, etc. Bus transportation from Cagliari airport to the Workshop venue will be organized.

Information about the workshop will be available at the following URL: http://wwwtera.iss.infn.it/workshop/chia96.html

Here follows the registration form, to be returned by e-mail or fax BEFORE JULY 15 to the scientific secretary at the following address: paolad at vaxsan.iss.infn.it (Dr Paola Di Ciaccio), Fax: ++39-6-4462872. Participants willing to present a poster contribution should include a one-page abstract for evaluation by the organizing committee.

------------------------- CUT HERE --------------------------------

WORKSHOP ON NEURAL NETWORKS: FROM BIOLOGY TO HARDWARE IMPLEMENTATIONS
CHIA, SEPTEMBER 23 - 27, 1996

Name ..........................................................
Address .........................................................
................................................................
................................................................
Telephone .....................................................
Fax ...........................................................
E-mail .............................................................

o I am interested in attending the Workshop
o I am interested in presenting a poster contribution.
(please include a one-page abstract in LaTeX or text-only format)
Applicants will be informed as to their acceptance before July 25.
o I ask for financial support

Please return this form or send the above information to the scientific secretary by July 15, 1996.
E-mail: PAOLAD at VAXSAN.ISS.INFN.IT
Fax: ++39-6-4462872

From smagt at dlr.de Thu Jun 27 03:57:26 1996
From: smagt at dlr.de (Patrick van der Smagt)
Date: Thu, 27 Jun 1996 09:57:26 +0200
Subject: PostPrint: Many-Particle Decomposition and SOMs
Message-ID: <31D23EE6.6AE9@dlr.de>

P. van der Smagt and B.
Kröse, Using Many-Particle Decomposition to get a Parallel Self-Organising Map. In Proceedings of the 1995 Conference on Computer Science in the Netherlands, J. van Vliet (editor), pages 241-249, 1995.

Abstract: We propose a method for decreasing the computational complexity of self-organising maps. The method uses a partitioning of the neurons into disjoint clusters. Teaching of the neurons occurs on a cluster basis instead of on a neuron basis. For teaching an N-neuron network with N' samples, the computational complexity decreases from O(NN') to O(N log N'). Furthermore, we introduce a measure for the amount of order in a self-organising map, and show that the introduced algorithm behaves as well as the original algorithm.

Address: search for "decomposition" on http://www.op.dlr.de/FF-DR-RS/Smagt/papers/

--
dr Patrick van der Smagt                          phone +49 8153 281152
DLR/Institute of Robotics and Systems Dynamics    fax +49 8153 281134
P.O. Box 1116, 82230 Wessling, Germany            email

From thrun+ at heaven.learning.cs.cmu.edu Thu Jun 27 10:46:22 1996
From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu)
Date: Thu, 27 Jun 96 10:46:22 EDT
Subject: NEW BOOK: Robot Learning
Message-ID:

I have the pleasure of announcing the following book:

**** Recent Advances in Robot Learning ****

edited by
Judy A. Franklin, GTE Laboratories, Waltham, MA, USA
Tom M. Mitchell, Carnegie Mellon University, Pittsburgh, PA, USA
Sebastian Thrun, Carnegie Mellon University, Pittsburgh, PA, USA

Reprinted from MACHINE LEARNING, 23:2-3
THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE, VOLUME 368

Recent Advances in Robot Learning contains seven papers on robot learning written by leading researchers in the field. As the selection of papers illustrates, the field of robot learning is both active and diverse. A variety of machine learning methods, ranging from inductive logic programming to reinforcement learning, is being applied to many subproblems in robot perception and control, often with objectives as diverse as parameter calibration and concept formulation. While no unified robot learning framework has yet emerged to cover the variety of problems and approaches described in these papers and other publications, a clear set of shared issues underlies many robot learning problems.
- Machine learning, when applied to robotics, is situated: it is embedded in a real-world system that tightly integrates perception, decision making and execution.
- Since robot learning involves decision making, there is an inherent active learning issue.
- Robotic domains are usually complex, yet the expense of using actual robotic hardware often prohibits the collection of large amounts of training data.
- Most robotic systems are real-time systems. Decisions must be made within critical or practical time constraints.
These characteristics present challenges and constraints to the learning system. Since these characteristics are shared by other important real-world application domains, robotics is a highly attractive area for research on machine learning.

Recent Advances in Robot Learning is an edited volume of peer-reviewed original research comprising seven invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 23, Numbers 2 and 3).

Kluwer Academic Publishers, Boston
Date of publishing: June 1996
224 pp.
Hardbound, ISBN: 0-7923-9745-2
Prices: NLG 175.00, USD 94.00, GBP 66.75

=============================================================================
CONTENTS

o Real-World Robotics: Learning To Plan for Robust Execution, by Scott W. Bennett and Gerald F. DeJong
o Robot Programming by Demonstration (RPD): Supporting the Induction by Human Interaction, by Stefan Muench, Ruediger Dillmann, Siegfried Bocionek, and Michael Sassin
o Performance Improvement of Robot Continuous-Path Operation through Iterative Learning Using Neural Networks, by Peter C.Y. Chen, James K. Mills, and Kenneth C. Smith
o Learning Controllers for Industrial Robots, by C. Baroglio, A. Giordana, M. Kaiser, M. Nuttin, and R. Piola
o Active Learning for Vision-Based Robot Grasping, by Marcos Salganicoff, Lyle H. Ungar, and Ruzena Bajcsy
o Purposive Behavior Acquisition for a Real Robot by Vision-Based Reinforcement Learning, by Minoru Asada, Shoichi Noda, Sukoya Tawaratsumita, and Koh Hosoda
o Learning Operational Concepts from Sensor Data of a Mobile Robot, by Volker Klingspor, Katharina J. Morik, and Anke Rieger
=============================================================================

See http://www.cs.cmu.edu/~thrun/papers/franklin.book.html for more information (paper abstracts, order form).

From JIgnacio at grial.uc3m.es Thu Jun 27 11:03:54 1996
From: JIgnacio at grial.uc3m.es (JIgnacio@grial.uc3m.es)
Date: Thu, 27 Jun 1996 17:03:54 +0200
Subject: No subject
Message-ID:

REGISTRATION FORM FOR THE FIRST INTERNATIONAL WORKSHOP ON MACHINE LEARNING, FORECASTING, AND OPTIMIZATION 1996
WORKSHOP TO BE HELD ON JULY 10-12, 1996 AT UNIVERSIDAD CARLOS III DE MADRID

Tutorials

Tutorials will be given in Spanish. Check off one tutorial for each period of time. A tutorial will take place provided that at least 10 people register for it.

Wednesday, July 10, 1996

9:00 AM - 11:00 AM
__ Discovery of relations in large databases [Descubrimiento de relaciones en grandes Bases de Datos]: Aurora Perez (Universidad Politecnica de Madrid) and Angela Ribeiro (Instituto de Automatica Industrial, CSIC)
__ Dynamic prediction: time series [Prediccion dinamica: series temporales]: David Rios (Universidad Politecnica de Madrid)

11:30 AM - 1:30 PM
__ Inductive learning applied to medicine [Aprendizaje inductivo aplicado a la medicina]: Cesar Montes (Universidad Politecnica de Madrid)
__ Artificial intelligence techniques applied to finance [Tecnicas de Inteligencia Artificial aplicadas a las finanzas]: Ignacio Olmeda (Universidad de Alcala de Henares)

3:00 PM - 5:00 PM
__ Statistical data analysis [Analisis estadistico de datos]: Jacinto Gonzalez Pachon (Universidad Politecnica de Madrid)
__ Genetic programming for control problems [Programacion Genetica en problemas de control]: Javier Segovia (Universidad Politecnica de Madrid)

Preliminary Workshop Schedule

Thursday, July 11, 1996
-----------------------
9:30 Registration
10:00 General presentation
10:10 Invited Talk: Manuela Veloso (Carnegie Mellon University)
11:30 Coffee Break
11:45 Session on Mathematical and Integrated Approaches
- "Nonparametric Estimation of Fully Nonlinear Models for Assets Returns", Ignacio Olmeda and Eugenio Fernandez
- "Representation Changes in Combinatorial Problems: Pigeonhole Principle versus Integer Programming Relaxation", Yury V. Smirnov and Manuela M.
Veloso - "Integrating Reasoning Information to Domain Knowledge in the Neural Learning Process", Saida Benlarbi and Kacem Zeroual - "Parameter Optimization in ART2 Neural Network, using Genetic Algorithms", Enrique Muro 1:15 Lunch 2:15 Invited Talk: Esther Ruiz (Universidad Carlos III de Madrid) 3:30 Session on Genetic Algorithms and Programming Approaches - "Automatic Generation of Turing Machines by a Genetic Approach", Julio Tanomaru and Akio Azuma - "GAGS, a Flexible Object Oriented Library for Evolutionary Computation", J.J. Merelo and A. Prieto - "An Application of Genetic Algorithms and Heuristic Techniques in Scheduling", Celia Gutierrez, Jose M. Lazaro and Joseba Zubia - "Classifiers Systems for Learning Reactions in Robotic Systems", Araceli Sanchis, Jose M. Molina and Pedro Isasi - "A Comparison of Forecast Accuracy between Genetic Programming and other Forecastors: A Loss-Differential Approach", Shu-Heng Chen and Chia-Hsuan Yeh Friday, July 12, 1996 --------------------- 10:00 Invited Talk: Juan J. Merelo (Universidad de Granada) 11:30 Coffee Break 11:45 Session on Neural Network Approaches - "Factor Analysis in Social Science: An Artificial Neural Network Perspective", Rafael Calvo - "Neural Network Forecast of Intraday Futures and Cash Returns", Pedro Isasi, Ignacio Olmeda, Eugenio Fernandez and Camino Fernandez - "Self-Organizing Feature Maps for Location and Scheduling", S. Lozano, F. Guerrero, J. Larra~neta and L. Onieva - "A Neural Network Hierarchical Model for Speech Recognition based on Biological Plausability", J.M. Ferrandez, D. del Valle, V. Rodellar and P. Gomez 1:15 Lunch 2:15 Invited Talk: Alicia Perez (Boston College) 3:30 Session on Symbolic Machine Learning Approaches - "Statistical Variable Interaction: Focusing Multiobjective Optimization in Machine Learning", Eduardo Perez and Larry Rendell - "A Multi-Agent Model for Decision Making and Learning", Jose I. Giraldez and Daniel Borrajo - "An Approximation to Generic Knowledge Discovery in Database Systems", Aurora Perez and Angela Ribeiro - "Learning to Forecast by Explaining the Consequences of Actions", Tristan Cazenave - "Basic Computational Processes in Machine Learning", Jesus G. Boticario and Jose Mira 5:15 Panel Workshop Fees Workshop fees are: Paid before June 30 Paid after June 30 ------------------- ------------------ Regular rate 25.000 pts. 35.000 pts. Speakers rate 15.000 pts. 25.000 pts. Students rate 5.000 pts. 10.000 pts. Carlos III members 2.000 pts. 5.000 pts. (students/teachers/staff) The workshop fee includes a copy of the proceedings, coffee breaks, and Thursday and Friday lunches. Students must send legible proof of full-time student status. Tutorial fees are: Paid before June 30 Paid after June 30 ------------------- ------------------ Regular rate 30.000 pts. 40.000 pts. University rate 15.000 pts. 20.000 pts. Students rate 2.500 pts. 5.000 pts. Carlos III members 2.000 pts. 3.000 pts. (students/teachers/staff) The tutorials fee includes the assistance to three non-parallel tutorials, documentation, and coffee breaks. Registration Please, send the completed application form to: - by email: dborrajo at grial.uc3m.es, or isasi at gaia.uc3m.es - by airmail: MALFO96, Attention D. Borrajo/P. Isasi Universidad Carlos III de Madrid c/ Butarque, 15 28911 Leganes, Madrid. 
Spain

Registration Form

Last name:__________________________________________________________________
First name:_________________________________________________________________
Title:______________________________________________________________________
Affiliation:________________________________________________________________
Address:____________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________
Phone (include country and area code):______________________________________
Fax (include country and area code):________________________________________
E-mail:_____________________________________________________________________
Workshop fee:_________
Tutorial fee:_________ (please indicate which three tutorials you wish to attend)
Total amount:_________

Send a check made payable to "Universidad Carlos III de Madrid" or electronic funds transfer to:
Account holder name: Universidad Carlos III de Madrid
Bank: Caja de Madrid
Branch address: c/ Juan de la Cierva s/n, Getafe
Bank code: 2038
Branch number: 2452
Account number: 6000085134
Control Digit: 05

In the case of a transfer, please indicate that it is code 396 (Inscripciones de Malfo96) and send us a copy of the receipt. No refunds will be made; however, we will transfer your registration to a person you designate upon notification.

Signature:__________________________________________________________________

From juergen at idsia.ch Thu Jun 27 14:55:29 1996
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Thu, 27 Jun 96 20:55:29 +0200
Subject: 3 IDSIA papers
Message-ID: <9606271855.AA02460@fava.idsia.ch>

3 related papers available, all based on a recent, novel, general reinforcement learning paradigm that allows for metalearning and incremental self-improvement (IS).

____________________________________________________________________

SIMPLE PRINCIPLES OF METALEARNING
Juergen Schmidhuber & Jieyu Zhao & Marco Wiering
Technical Report IDSIA-69-96, June 27, 1996
23 pages, 195 K compressed, 662 K uncompressed

The goal of metalearning is to generate useful shifts of inductive bias by adapting the current learning strategy in a "useful" way. Our learner leads a single life during which actions are continually executed according to the system's internal state and current policy (a modifiable, probabilistic algorithm mapping environmental inputs and internal states to outputs and new internal states). An action is considered a learning algorithm if it can modify the policy. Effects of learning processes on later learning processes are measured using reward/time ratios. Occasional backtracking enforces success histories of still valid policy modifications corresponding to histories of lifelong reward accelerations. The principle allows for plugging in a wide variety of learning algorithms. In particular, it allows for embedding the learner's policy modification strategy within the policy itself (self-reference). To demonstrate the principle's feasibility in cases where traditional reinforcement learning fails, we test it in complex, non-Markovian, changing environments ("POMDPs"). One of the tasks involves more than 10^13 states, two learners that both cooperate and compete, and strongly delayed reinforcement signals (initially separated by more than 300,000 time steps).
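The backtracking rule in the abstract above can be given a concrete shape. The toy Python class below is a reconstruction from the abstract alone, assuming a stack of checkpointed policy modifications with undo callbacks; all names and the undo mechanism are invented for illustration, and this is not the authors' implementation:

class SuccessStoryBacktracker:
    # Toy version of the idea that each surviving policy modification
    # must be followed by faster lifelong reward accumulation than the
    # previous surviving one.  Reconstruction, not the authors' code.

    def __init__(self):
        # Virtual bottom entry: the start of the learner's life.
        self._stack = [(0.0, 0.0, lambda: None)]

    def record(self, t, cumulative_reward, undo):
        # Checkpoint a policy modification made at time t; `undo`
        # restores the policy to its state before the modification.
        self._stack.append((t, cumulative_reward, undo))

    def backtrack(self, t_now, reward_now):
        # Pop (and undo) recent modifications until the reward/time
        # ratio measured from each surviving checkpoint to the present
        # strictly increases from the bottom of the stack to the top.
        def rate(t, r):
            return (reward_now - r) / max(t_now - t, 1e-12)
        while len(self._stack) > 1:
            t_top, r_top, undo = self._stack[-1]
            t_prev, r_prev, _ = self._stack[-2]
            if rate(t_top, r_top) > rate(t_prev, r_prev):
                break          # the success story still holds
            self._stack.pop()
            undo()             # this modification did not pay off

Any learning algorithm can then be "plugged in" as an action that makes a modification and records it here; only modifications that keep accelerating lifelong reward survive later calls to backtrack.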
____________________________________________________________________

A GENERAL METHOD FOR INCREMENTAL SELF-IMPROVEMENT AND MULTI-AGENT LEARNING IN UNRESTRICTED ENVIRONMENTS
Juergen Schmidhuber
To appear in X. Yao, editor, Evolutionary Computation: Theory and Applications. Scientific Publ. Co., Singapore, 1996 (based on "On learning how to learn learning strategies", TR FKI-198-94, TUM 1994).
30 pages, 146 K compressed, 386 K uncompressed.

____________________________________________________________________

INCREMENTAL SELF-IMPROVEMENT FOR LIFETIME MULTI-AGENT REINFORCEMENT LEARNING
Jieyu Zhao & Juergen Schmidhuber
To appear in Proc. SAB'96, MIT Press, Cambridge MA, 1996.
10 pages, 107 K compressed, 429 K uncompressed.

A spin-off paper of the TR above. It includes another experiment: a multi-agent system consisting of 3 co-evolving, IS-based animats chasing each other learns interesting, stochastic predator and prey strategies. (Another spin-off paper is: M. Wiering and J. Schmidhuber. Solving POMDPs using Levin search and EIRA. To be presented by MW at ML'96.)

____________________________________________________________________

To obtain copies, use ftp, or try the web:
http://www.idsia.ch/~juergen/onlinepub.html
FTP-host: ftp.idsia.ch
FTP-filenames: /pub/juergen/meta.ps.gz /pub/juergen/ec96.ps.gz /pub/jieyu/sab96.ps.gz

____________________________________________________________________

Juergen Schmidhuber & Jieyu Zhao & Marco Wiering
http://www.idsia.ch
IDSIA

From lross at msmail4.hac.com Thu Jun 27 20:40:23 1996
From: lross at msmail4.hac.com (Ross, Lynn W)
Date: 27 Jun 1996 16:40:23 -0800
Subject: Job Postings
Message-ID:

The Information Sciences Laboratory at HRL has an immediate opening for a scientist to join our team of researchers investigating advanced techniques in non-traditional and advanced signal and image processing, compression, and optimization. If you have experience in the following areas, we would like to hear from you:
o pattern recognition
o optimization
o wavelet applications
o compression
o neural networks
o decision aids
o information fusion
o signal processing

In addition, the successful candidate will have a PhD in electrical engineering, applied math or computer science, plus extensive knowledge of software programming in C or C++. Excellent communication abilities are required to effectively interact with the scientific and academic communities, with Hughes business units, and within our small, dynamic research team environment. Our research staff members are encouraged to invent, patent and publish on a regular basis. Our ideal location and competitive salary and benefits package contribute to a work environment designed to optimize creative research.

Please send your resume to: Lynn W. Ross, #BSISL, Hughes Research Laboratories, 3011 Malibu Canyon Road, Malibu, CA 90265. FAX: 310-317-5651. Email: lross at hrl.com. To learn more about HRL, visit our Web page at http://www.hrl.com. Proof of legal right to work in the United States required. We are an Equal Opportunity Employer.

From alex at salk.edu Fri Jun 28 11:16:21 1996
From: alex at salk.edu (Alexandre Pouget)
Date: Fri, 28 Jun 96 08:16:21 PDT
Subject: postdoctoral position
Message-ID: <9606281516.AA03622@salk.edu>

Postdoctoral position in Computational Neuroscience
Institute of Computational and Cognitive Sciences
Georgetown University, Washington DC

A post-doctoral position will be available in fall 1996 at the Institute of Computational and Cognitive Sciences at Georgetown University.
Candidates should have hands-on experience in computational neuroscience and a keen interest in cognitive science at large. Research will focus primarily on models of sensory-motor transformations and multisensory integration in humans and monkeys. Candidates will be expected to interact closely with other members of the laboratory involved in testing stroke patients with spatial perception disorders. Other modeling projects will be considered, in particular in the general area of neural representations. The institute offers a variety of laboratories in the field of cognitive neuroscience using investigation techniques such as single-cell and optical recordings in behaving monkeys and bats, a 7T fMRI for animal studies and a 1.5T fMRI for human subjects.

Please send a CV, a summary of relevant research experience and at least 2 letters of recommendation to: Dr. Alexandre Pouget, Institute for Cognitive and Computational Sciences, Georgetown University, New Research Building, Room EP04, 3970 Reservoir Road NW, Washington DC, 20007-2197, or send email to: alex at salk.edu.

From mccallum at cs.rochester.edu Sat Jun 29 19:34:28 1996
From: mccallum at cs.rochester.edu (Andrew McCallum)
Date: Sat, 29 Jun 1996 19:34:28 -0400
Subject: Paper on RL, feature selection, hidden state
Message-ID: <199606292334.TAA19728@slate.cs.rochester.edu>

The following paper on reinforcement learning, hidden state and feature selection is available by FTP. Comments and suggestions are welcome.

"Learning to Use Selective Attention and Short-Term Memory"
Andrew Kachites McCallum
(to appear in SAB'96)

Abstract

This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or "memory-based") learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [Ron et al 94], Parti-game [Moore 94], G-algorithm [Chapman and Kaelbling 91], and Variable Resolution Dynamic Programming [Moore 91]. It builds on Utile Suffix Memory [McCallum 95], which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal states---far fewer than the 2500^3 states that would have resulted from a fixed-sized history-window approach.
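The "utile distinction test" mentioned at the end of the abstract can be pictured roughly as follows. The sketch assumes each tree leaf stores instances tagged with a sampled future discounted return, and uses a two-sample Kolmogorov-Smirnov test to decide whether a candidate distinction separates those returns; the function name, data layout, and thresholds are all invented for illustration, and this is not a transcription of U-Tree:

from scipy.stats import ks_2samp

def utile_split(instances, candidate, n_min=5, p_threshold=0.05):
    # `instances`: (history, future_discounted_return) pairs stored at
    # a tree leaf.  `candidate`: a predicate over histories, e.g.
    # "percept bit 3 was on two steps ago".  Accept the split only if
    # it separates the return distributions significantly.
    left = [r for h, r in instances if candidate(h)]
    right = [r for h, r in instances if not candidate(h)]
    if min(len(left), len(right)) < n_min:
        return False                 # too little evidence to decide
    return ks_2samp(left, right).pvalue < p_threshold

Because a candidate may test either a perceptual feature or a remembered past percept, accepted splits of this kind grow exactly the mixture of selective attention and short-term memory the abstract describes.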
Retrieval information:
FTP-host: ftp.cs.rochester.edu
FTP-pathname: /pub/papers/robotics/96.mccallum-sab.ps.gz
URL: ftp://ftp.cs.rochester.edu/pub/papers/robotics/96.mccallum-sab.ps.gz

From comp.ai.neural-nets at DST.BOLTZ.CS.CMU.EDU Sun Jun 30 03:40:20 1996
From: comp.ai.neural-nets at DST.BOLTZ.CS.CMU.EDU (forwarded)
Date: Sun, 30 Jun 96 03:40:20 EDT
Subject: ECAI NNSK Workshop Program and Call for Participation
Message-ID:

ECAI'96 Workshop on NEURAL NETWORKS AND STRUCTURED KNOWLEDGE (NNSK)
August 12, 1996
during the 12th European Conference on Artificial Intelligence, August 12-16, 1996 in Budapest, Hungary

Call for Participation
-------------------------------------------------------------------------------
The latest information can be retrieved from the NNSK WWW page
http://www.informatik.uni-ulm.de/fakultaet/abteilungen/ni/ECAI-96/NNSK.html

BACKGROUND
----------
Neural networks are mostly used for tasks dealing with information presented in vector or matrix form, without a rich internal structure reflecting relations between different entities. In some application areas, e.g. speech processing or forecasting, certain types of networks have been investigated for their ability to represent sequences of input data. Whereas approaches to using neural networks for the representation and processing of structured knowledge have been around for quite some time, especially in the area of connectionism, they frequently suffer from problems with expressiveness, knowledge acquisition, adaptivity and learning, or human interpretation. In recent years much progress has been made in the theoretical understanding and the construction of neural systems capable of representing and processing structured knowledge in an adequate way, while maintaining essential capabilities of neural networks such as learning, tolerance of noise, treatment of inconsistencies, and parallel operation.

The goal of this workshop is twofold: on one hand, existing mechanisms are critically examined with respect to their suitability for the acquisition, representation, processing and interpretation of structured knowledge. On the other hand, new approaches, especially concerning the design of systems based on such mechanisms, are presented, with particular emphasis on their application to realistic problems.

PRELIMINARY WORKSHOP PROGRAM
----------------------------
8:30 - 8:50 INTRODUCTION (F. Kurfess)
8:50 - 10:10 SYMBOLIC INFERENCE IN CONNECTIONIST SYSTEMS
8:50 Semantic Knowledge in General Neural Units: Issues of Representation (J. de L. Pereira Castro)
9:10 Implementation of a SHRUTI Knowledge Representation and Reasoning System (R. Hayward, J. Diederich)
9:30 A Connectionist Representation of Symbolic Components, Dynamic Bindings and Basic Inference Operations (N. Seog Park, D. Robertson)
9:50 Logical Inference and Inductive Learning (A.S. d'Avila Garcez, G. Zaverucha, L.A.V. de Carvalho)
10:10 - 10:20 Discussion
10:30 - 11:00 Break
11:00 - 11:40 EXPLOITING PROBLEM-INHERENT STRUCTURED META-KNOWLEDGE
11:00 Declarative Heuristics for Neural Network Design (M. Vuilleumier, M. Hilario)
11:20 Sign Recognition as a Support to Robot Navigation (G. Adorni, G. Destri, M. Gori, M. Mordonini)
11:40 - 12:00 Discussion
12:00 - 13:45 Break
13:45 - 14:45 SUPERVISED INDUCTIVE INFERENCE ON STRUCTURED DOMAINS
13:45 Inductive Inference from Noisy Examples: The Rule-Noise Dilemma and the Hybrid Finite State Filter (M. Gori, M. Maggini, G. Soda)
14:05 Inductive Learning in Symbolic Domains Using Structure-Driven Recurrent Neural Networks (A.
Kuechler, C. Goller)
14:25 Neural Networks for the Classification of Structures (A. Sperduti)
14:45 - 15:15 Discussion
15:15 - 15:45 Break
15:45 - 16:25 INFERRING HIERARCHIES
15:45 Inferring Hierarchical Categories with ART-Based Modular Neural Networks (G. Bartfai)
16:05 A Tree-Structured Approach to Medical Diagnosis Tasks (J. Rahmel, P. Hahn)
16:25 - 16:45 Discussion
16:45 - 17:30 General Discussion and Closing

DISCUSSION THEMES
-----------------
In addition to discussions centered around the presentations, we want to foster an exchange of ideas and opinions about issues relevant for representing and processing structured knowledge with neural networks.

1. Are symbols ultimately necessary for knowledge, or are they an artefact? Can we provide symbol-less methods that achieve some kind of knowledge processing facility? With respect to the limited discussion time at the workshop, we would like to put the emphasis on the technical and practical aspects (experiments, methods), not so much on the underlying philosophical thoughts.

2. Why do we need structured knowledge? Because the world is structured? Because our cognition is systematic (Fodor & Pylyshyn's argument)? For efficiency reasons? And should structure then be explicitly represented?

3. Should we try to use neural networks for the representation and processing of structured knowledge, or are we simply wasting our time? After all, there are well-founded methods and techniques in traditional, symbol-oriented AI. If we should try, what are good reasons? o knowledge acquisition o learning, adaptability o generalization o performance o robustness o uncertainty o inconsistency o scalability o learning times o formal properties (correctness, completeness) o understandability o modularity

4. What are the characteristics of application domains/tasks where NN models and methods are more suitable than other approaches (e.g. Inductive Logic Programming) when dealing with structured knowledge?

5. Learning and generalization on a structured domain -- what does this mean? Are there different levels of generalization capabilities, and what can be achieved by NN models?

6. Are any of the approaches relevant for cognitive processes, e.g. memory, reasoning, language?

7. Is there evidence for the use of symbols in biological neural networks? When and where do symbols appear?

8. How difficult is it to build larger systems? They may consist of several NNSK modules, or constitute hybrid systems together with symbol-oriented modules.

9. Should we try to establish formal relations between neural methods and symbolic methods? An example might be Hoelldobler and Kalinke's or Pinkas' work (a toy sketch of one such translation appears below, after the registration note). o equivalence o transformation o complexity

10. Should we try to model basic functions known from symbolic methods, or develop neural ones from scratch? Or is it like trying to build flying machines modeled after birds, instead of what we know as airplanes? An example: unification; is it necessary for a reasoning system, or might a radically different approach be better?

11. What are the relations between knowledge-based methods from the fields of neural networks, machine learning, and statistics?

PARTICIPATION AND REGISTRATION
------------------------------
A number of places are available for those who wish to attend the workshop without giving an oral presentation. Potential attendees are requested to send a statement of interest to the Workshop Chair (franz at cis.njit.edu). Please note that attendees of workshops must register for the main ECAI conference.
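As a concrete instance of the kind of formal relation raised in theme 9, the following toy sketch (Python, with invented data structures; not the published construction) mimics the Hoelldobler/Kalinke style of translation: each clause of a propositional logic program acts as an AND unit over its body atoms, each atom's output unit acts as an OR over its clause units, and iterating the resulting two-layer step computes the program's immediate-consequence operator:

def tp_step(program):
    # `program`: list of (head_atom, [body_atoms]) clauses.  Each clause
    # behaves as an AND unit over its body atoms; an atom's output unit
    # is an OR over the clause units whose head it is.
    def step(state):
        return {head for head, body in program
                if all(atom in state for atom in body)}
    return step

# Example program:  a <- .   b <- a.   c <- a, b.
step = tp_step([("a", []), ("b", ["a"]), ("c", ["a", "b"])])
state = set()
for _ in range(3):
    state = step(state)   # iterate T_P from the empty interpretation
print(sorted(state))      # ['a', 'b', 'c'] -- the program's least model

Reaching the program's least model as a fixpoint of such a network is exactly the sort of equivalence result theme 9 asks about.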
ORGANIZING COMMITTEE
--------------------
Franz Kurfess (chair), New Jersey Institute of Technology, Newark, USA
Daniel Memmi, LEIBNIZ-IMAG, Grenoble, France
Andreas Kuechler, University of Ulm, Germany
Arnaud Giacometti, Université de Tours, France

CONTACT
-------
Prof. Franz Kurfess
Computer and Information Sciences Dept.
New Jersey Institute of Technology
Newark, NJ 07102, U.S.A.
Voice: +1/201-596-5767
Fax: +1/201-596-5767
E-mail: franz at cis.njit.edu

PROGRAM COMMITTEE
-----------------
Venkat Ajjanagadde - University of Minnesota, Minneapolis
Ethem Alpaydin - Bogazici University
Joan Cabestany - University of Catalunya
Joachim Diederich - Queensland University of Technology
Georg Dorffner - Universitaet Wien
C. Lee Giles - NEC Research Institute
Marco Gori - University of Florence
Melanie Hilario - University of Geneva (co-chair)
Steffen Hoelldobler - TU Dresden
Mirek Kubat - University of Ottawa
Wolfgang Maass - Technische Universitaet Graz
Ernst Niebur - Johns Hopkins University
Guenther Palm - University of Ulm
Lokendra Shastri - International Computer Science Institute, Berkeley
Hava Siegelman - Technion (Israel Institute of Technology)
Alessandro Sperduti - University of Pisa (co-chair)
Chris J. Thornton - University of Sussex
Pages: 6 Reference: Accepted to be published in Proceedings ICNN'96 (Washington D.C.,3-6 June 1996) ----------------------------------------------------------------------------------------- NO HARD-COPIES AVAILABLE ! For any comment or additional information please contact: Dr. Radu Dogaru "Politehnica" University of Bucharest, Romania E-mail: radu_d at atm.neuro.pub.ro or radu_d at lmn.pub.ro From ATAXR at asuvm.inre.asu.edu Mon Jun 3 03:12:35 1996 From: ATAXR at asuvm.inre.asu.edu (Asim Roy) Date: Mon, 03 Jun 1996 00:12:35 -0700 (MST) Subject: Connectionists Learning - Some New Ideas/Questions Message-ID: <01I5GB1UPVJ68X2WJV@asu.edu> This is an attempt to respond to some of the questions raised in regards to the network design issue in our new learning theory. I have included all of the responses relevant to all of the remaining issues, except one by Daniel Crespin, which was too long. A. "Task A. Perform Network Design Task" There is criticism of our learning theory on the grounds that humans inherit a pre-designed learning architecture and that this architecture has been designed and structured through the process of evolution over millions of years. I think this is a biological fact beyond dispute. The relevant question is what parts of the learning system can feasibly come pre-designed and what parts cannot. For example, we know that the architecture of the brain includes separate areas (partitions) for vision, emotion and long term memory and so on. Thus we inherit a partition of the brain based on the functions it is expected to perform. This is a level of the architecture that is indeed pre-designed. A learning mechanism is also prepackaged with this architecture that knows this functional organization of the brain. And within these partitions is available a collection of biological neurons (cells) and their connections. There is also preprocessing in certain parts of the system like the vision system. I don't think our theory is in conflict with these basic facts/ideas at all. Our learning theory relates to "work" performed by the learning mechanism within this functional organization of the brain. For example, on a lighter side of this argument, this learning mechanism may have to design and train appropriate networks that incorporate knowledge about Windows 95. None of the Windows 95 knowledge could have been inherited from our biological ancestors, despite their millions of years of learning. So the net for Windows 95 could not have come pre-designed to us. No system, biological or otherwise, can design an appropriate net for a problem about which it has no knowledge. When I was born, my parents knew nothing about computers. So neither they nor their ancestors could have pre-designed nets for me to learn about computers, unless we bring up the notion of "fixed general purpose nets/modules" being available in the brain for any kind of learning situation. These fixed general purpose nets could come with a fixed set of neurons. But the basic problem with that notion is its conflict with the idea of "learning" itself. The essence of "learning" is "generalization" and was discussed in the previous response on issues related to generalization. Since learning is generalization and since generalization is attempting to design the smallest possible net, the idea of "fixed pre-designed" nets is incompatible with the notion of "learning", whether it is in biological systems or otherwise. Learning within fixed pre-designed nets is not "learning" at all and can be dangerous indeed. 
Since we run the risk of simply over or underfitting the training data in fixed size nets, we may not "learn" anything in such pre-designed nets and we might be in grave danger as a species if we did so on a continuous basis. We could not have survived this long as a species by doing this - that is, by not being able to "generalize and learn". So, in general, problem-specific nets could not feasibly come pre- designed to us. From thrun+ at heaven.learning.cs.cmu.edu Tue Jun 4 13:17:08 1996 From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu) Date: Tue, 4 Jun 96 13:17:08 EDT Subject: Call for papers: Special issue Machine Learning Message-ID: ------------------------------------------------------------------------------- Call for papers Special Issue of the Machine Learning Journal on Inductive Transfer ------------------------------------------------------------------------------- Lorien Pratt and Sebastian Thrun, Guest Editors ------------------------------------------------------------------------------- Many recent machine learning efforts are focusing on the question of how to learn in an environment in which more than one task is performed by a system. As in human learning, related tasks can build on one another, tasks that are learned simultaneously can cross-fertilize, and learning can occur at multiple levels, where the learning process itself is a learned skill. Learning in such an environment can have a number of benefits, including speedier learning of new tasks, a reduced number of training examples for new tasks, and improved accuracy. These benefits are especially apparent in complex applied tasks, where the combinatorics of learning are often otherwise prohibitive. Current efforts in this quickly growing research area include investigation of methods that facilitate learning multiple tasks simultaneously, those that determine the degree to which two related tasks can benefit from each other, and methods that extract and apply abstract representations from a source task to a new, related, target task. The situation where the target task is a specialization of the source task is an important special case. The study of such methods has broad application, including a natural fit to data mining systems, which extract regularities from heterogeneous data sources under the guidance of a human user, and can benefit from the additional bias afforded by inductive transfer. We solicit papers on inductive transfer and learning to learn for an upcoming Special Issue of the Machine Learning Journal. Please send six (6) copies of your manuscript postmarked by July 15, 1996 to: Dr. Lorien Pratt MCS Dept. CSM Golden, CO 80401 USA One (1) additional copy should be mailed to: Karen Cullen Attn: Special Issue on Inductive Transfer MACHINE LEARNING Editorial Office Kluwer Academic Publishers 101 Philip Drive Assinippi Park Norwell, MA 02061 USA Manuscripts should be limited to at most 12000 words. Please also note that Machine Learning is now accepting submission of final copy in electronic form. Authors may want to adhere to the journal formatting standards for paper submissions as well. There is a latex style file and related files available via anonymous ftp from ftp.std.com. Look in Kluwer/styles/journals for the files README, kbsfonts.sty, kbsjrnl.ins, kbsjrnl.sty, kbssamp.tex, and kbstmpl.tex, or the file kbsstyles.tar.Z, which contains them all. Please see http://vita.mines.edu:3857/1s/lpratt/transfer.html for more information on inductive transfer. 
Papers will be quickly reviewed for a target publication date in the first quarter of 1997.

From maja at garnet.cs.brandeis.edu Tue Jun 4 17:05:02 1996 From: maja at garnet.cs.brandeis.edu (Maja Mataric) Date: Tue, 4 Jun 1996 17:05:02 -0400 Subject: call for participation Message-ID: <199606042105.RAA16416@garnet.cs.brandeis.edu>

***********************CONFERENCE INFORMATION******************************* From Animals to Animats The Fourth International Conference on Simulation of Adaptive Behavior (SAB96) September 9th-13th, 1996 North Falmouth, Massachusetts, USA FULL DETAILS ON THE WEB PAGE: http://www.cs.brandeis.edu/conferences/sab96 GENERAL CONTACT: sab96 at cs.brandeis.edu The objective of this conference is to bring together researchers in ethology, psychology, ecology, artificial intelligence, artificial life, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow natural and artificial animals to adapt and survive in uncertain environments. ***************************PROGRAM************************************** The conference will consist of a single track of invited speakers, refereed presentations and posters, and demonstrations. The invited speakers are James Albus, Jelle Atema, Daniel Dennett, Randy Gallistel, Scott Kelso, and David Touretzky. ***********************REGISTRATION IS OPEN***************************** EARLY REGISTRATION DEADLINE IS JUNE 30, 1996 Regular $350; Discounts for early registration, students and members of the International Society for Adaptive Behavior. ***************************HOTEL**************************************** HOTEL DEADLINE FOR BLOCKED ROOMS IS AUGUST 8TH The entire conference will take place at the Sea Crest in North Falmouth on Cape Cod. All participants are responsible for making their own reservations with the hotel. ************************STUDENT TRAVEL GRANTS**************************** APPLICATION DEADLINE: JUNE 24th Thanks to funding from the Office of Naval Research and the Bauer Foundation, we will be able to assist about 15 students by lowering their costs for attending the conference. $400 scholarships will be available for the selected North American students, and $600 scholarships for selected overseas students. ************************************************************************* ALL DETAILS ON THE WEB PAGE: http://www.cs.brandeis.edu/conferences/sab96

From radu_d at atm.neuro.pub.ro Wed Jun 5 01:20:09 1996 From: radu_d at atm.neuro.pub.ro (Radu Dogaru) Date: Wed, 5 Jun 1996 08:20:09 +0300 (EET DST) Subject: Two papers available Message-ID: <199606050520.IAA02499@atm.neuro.pub.ro>

An error occurred in the last message regarding the files available in the "neuroprose" archive: the correct file names begin with "dogaru...." rather than "dogaru_r..." as given in the original announcement. I will repeat the message with the error corrected. ------------------------------------------------------------------------------- The following papers are available via anonymous FTP from the "neuroprose" archive: FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/dogaru.icnn95.ps.Z FTP-file: pub/neuroprose/dogaru.icnn96.ps.Z -------------------------------------------------------------------------------- 1 File name: dogaru.icnn95.ps.Z Title: "Chaotic resonance theory, a new approach for pattern storage and retrieval in neural networks" Authors: Radu Dogaru, A.T. Murgan
Abstract: A method was developed for designing recurrent neural networks capable of oscillating chaotically and synchronizing. A new neural model is proposed based on replacing locally tuned units (like RBF neurons) with small chaotic resonators. The overall behavior is similar to that obtained in ART clustering networks, while there is only one connection between layers of neurons, all inter-layer information being coded as one-dimensional chaotic signals. Pages: 5 Reference: Proceedings ICNN'95 (Perth, Australia, December '95), Vol. 6, pp. 3048-3052 -------------------------------------------------------------------------------- 2 File name: dogaru.icnn96.ps.Z Title: "Searching for robust chaos in discrete-time neural networks using weight space exploration" Authors: Radu Dogaru, A.T. Murgan, Stefan Ortmann, Manfred Glesner Abstract: A new method for the analysis and design of recurrent neural networks is described. The method is based on weight space exploration and displays, for large populations of neural networks, particular maps related to entropic and sensitivity descriptors of the dynamic behaviors. A sensitivity descriptor was introduced in order to easily obtain information about chaotic dynamics in large populations of recurrent neural networks. Robust chaos is a behavior characterized by obtaining the same class of chaotic signals when particular weights of the networks are allowed to vary within a compact domain. Pages: 6 Reference: Accepted to be published in Proceedings ICNN'96 (Washington D.C., 3-6 June 1996) ----------------------------------------------------------------------------------------- NO HARD-COPIES AVAILABLE! For any comment or additional information please contact: Dr. Radu Dogaru "Politehnica" University of Bucharest, Romania E-mail: radu_d at atm.neuro.pub.ro or radu_d at lmn.pub.ro
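A toy rendition of the "weight space exploration" idea in the second abstract (my sketch under invented details, not the authors' method): scan one direction of the weight space of a small discrete-time recurrent net and estimate a sensitivity descriptor, here a largest-Lyapunov-exponent proxy obtained from the divergence of two nearby trajectories; positive values suggest chaotic dynamics.

    import numpy as np

    def sensitivity(W, steps=500, eps=1e-8):
        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, W.shape[0])
        y = x + eps * rng.standard_normal(W.shape[0])
        total = 0.0
        for _ in range(steps):
            x, y = np.tanh(W @ x), np.tanh(W @ y)   # recurrent update
            d = np.linalg.norm(y - x)
            total += np.log(d / eps)
            y = x + (eps / d) * (y - x)             # renormalise the separation
        return total / steps

    rng = np.random.default_rng(1)
    W0 = rng.standard_normal((4, 4))
    for gain in (0.5, 1.5, 4.0):                    # one axis through weight space
        print("gain %.1f -> sensitivity %+.3f" % (gain, sensitivity(gain * W0)))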
From harnad at cogsci.soton.ac.uk Wed Jun 5 12:26:45 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Wed, 5 Jun 96 17:26:45 +0100 Subject: Neural Constructivism Manifesto: BBS Call for Commentators Message-ID: <8520.9606051626@cogsci.ecs.soton.ac.uk>

Below is the abstract of a forthcoming BBS target article on: THE NEURAL BASIS OF COGNITIVE DEVELOPMENT: A CONSTRUCTIVIST MANIFESTO by Steven R. Quartz and Terrence J. Sejnowski This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/bbs ftp://ftp.princeton.edu/pub/harnad/BBS ftp://cogsci.soton.ac.uk/pub/harnad/BBS gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________

THE NEURAL BASIS OF COGNITIVE DEVELOPMENT: A CONSTRUCTIVIST MANIFESTO Steven R. Quartz and Terrence J. Sejnowski Computational Neurobiology Laboratory, and The Sloan Center for Theoretical Neurobiology, The Salk Institute for Biological Studies, 10010 North Torrey Pines Rd. La Jolla, CA 92037 steve at salk.edu Howard Hughes Medical Institute, The Salk Institute for Biological Studies, and Department of Biology, University of California, San Diego, La Jolla, CA 92037. terry at salk.edu KEYWORDS: neural development; cognitive development; constructivism; selectionism; mathematical learning theory; evolution; learnability. ABSTRACT: How do minds emerge from developing brains? According to "neural constructivism," the representational features of cortex are built from the dynamic interaction between neural growth mechanisms and environmentally derived neural activity. Contrary to popular selectionist models that emphasize regressive mechanisms, the neurobiological evidence suggests that this growth is a progressive increase in the representational properties of cortex. The interaction between the environment and neural growth results in a flexible type of learning: "constructive learning" minimizes the need for prespecification in accordance with recent neurobiological evidence that the developing cerebral cortex is largely free of domain-specific structure. Instead, the representational properties of cortex are built by the nature of the problem domain confronting it. This uniquely powerful and general learning strategy undermines the central assumption of classical learnability theory, that the learning properties of a system can be deduced from a fixed computational architecture. Neural constructivism suggests that the evolutionary emergence of neocortex in mammals is a progression toward more flexible representational structures, in contrast to the popular view of cortical evolution as an increase in innate, specialized circuits. Human cortical postnatal development is also more extensive and protracted than generally supposed, suggesting that cortex has evolved so as to maximize the capacity of environmental structure to shape its structure and function through constructive learning. --------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.quartz). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc.
Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/~harnad/bbs.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.quartz ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.quartz gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.quartz When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- From nschraud at evotec.de Thu Jun 6 16:12:55 1996 From: nschraud at evotec.de (Nici Schraudolph) Date: Thu, 6 Jun 1996 22:12:55 +0200 Subject: update announcement - NC.bib Message-ID: <199606062012.WAA14311@nix.evotec.de> I have recently updated my BibTeX database NC.bib to include all articles in Neural Computation up to volume 7. The updated file is available by anonymous ftp from the following locations: USA - ftp://ftp.cnl.salk.edu/pub/schraudo/NC.bib.gz Europe - ftp://nix.evotec.de/pub/nschraud/NC.bib.gz It has also been incorporated into the following BibTeX collections: Center for Computational Intelligence, TU Wien: http://www.ci.tuwien.ac.at/docs/ci/bibtex_collection.html The Collection of Computer Science Bibliographies http://liinwww.ira.uka.de/bibliography/ Happy citing, -- Dr. Nicol N. Schraudolph Tel: +49-40-56081-284 Evotec Biosystems GmbH Fax: +49-40-56081-222 Grandweg 64 Home: +49-40-430-3381 22529 Hamburg Germany http://www.cnl.salk.edu/~schraudo/ From katagiri at hip.atr.co.jp Fri Jun 7 00:51:22 1996 From: katagiri at hip.atr.co.jp (Shigeru Katagiri) Date: Fri, 07 Jun 1996 13:51:22 +0900 Subject: Announcement of NNSP96 Message-ID: <9606070451.AA26184@hector> 1996 IEEE Signal Processing Society Workshop on Neural Networks for Signal Processing (NNSP96) September 4-6, Seika, Kyoto, Japan The workshop information, including CFP, Advance Program, Registration, Hotel Reservation, and Other Related Issues, is available on the NNSP96 WWW Home Page, ``http://www.hip.atr.co.jp/~katagiri/nnsp96_home.html''. You are encouraged to make an early registration. The early registration deadline is August 10, 1996. - Shigeru KATAGIRI (katagiri at hip.atr.co.jp) Program Chair, NNSP96 - Masae SHIOJI (shioji at hip.atr.co.jp) Secretary, NNSP96 From rosca at cs.rochester.edu Fri Jun 7 18:50:43 1996 From: rosca at cs.rochester.edu (rosca@cs.rochester.edu) Date: Fri, 7 Jun 1996 18:50:43 -0400 Subject: Papers available: modular genetic programming Message-ID: <199606072250.SAA00488@ash.cs.rochester.edu> The following papers on discovery of subroutines in Genetic Programming are now available for retrieval via ftp. Ftp: ftp://ftp.cs.rochester.edu/pub/u/rosca/gp WWW: http://www.cs.rochester.edu/u/rosca/research.html Comments and suggestions are welcome. 
Justinian -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Justinian Rosca Internet: rosca at cs.rochester.edu University of Rochester Office: (716) 275-1174 Department of Computer Science Fax: (716) 461-2018 Rochester, NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rosca/

EVOLUTION-BASED DISCOVERY OF HIERARCHICAL BEHAVIORS J.P. Rosca and D.H. Ballard AAAI-96, The MIT Press, 1996 ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aaai.ps.gz (7 pages; 100k compressed) Abstract: The complexity of policy learning in a reinforcement learning task grows primarily with the number of observations. Unfortunately, the number of observations may be unacceptably high even for simple problems. In order to cope with the scale-up problem we adopt procedural representations of policies. Procedural representations have two advantages. First, they are implicit, allowing for good inductive generalization over a very large set of input states. Second, they facilitate modularization. In this paper we compare several randomized algorithms for learning modular procedural representations. The main algorithm, called Adaptive Representation through Learning (ARL), is a genetic programming extension that relies on the discovery of subroutines. ARL is suitable for learning hierarchies of subroutines and for constructing policies for complex tasks. When the learning problem cannot be solved because the specification is too loose and the domain is not well understood, ARL will discover regularities in the problem environment in the form of subroutines, which often make the problem easier to solve. ARL was successfully tested on a typical reinforcement learning problem of controlling an agent in a dynamic and non-deterministic environment, where the discovered subroutines correspond to agent behaviors.

DISCOVERY OF SUBROUTINES IN GENETIC PROGRAMMING J.P. Rosca and D.H. Ballard In: Advances in Genetic Programming II Edited by P. Angeline and K. Kinnear Jr. MIT Press, 1996. ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aigp2.dsgp.ps.gz (25 pages; 439k compressed) Abstract: A fundamental problem in learning from observation and interaction with an environment is defining a good representation, that is, a representation that captures the underlying structure and functionality of the domain. This chapter discusses an extension of the genetic programming (GP) paradigm based on the idea that subroutines obtained from blocks of good representations act as building blocks and may enable a faster evolution of even better representations. This GP extension algorithm is called adaptive representation through learning (ARL). It has built-in mechanisms for (1) the creation of new subroutines through discovery and generalization of blocks of code, and (2) the deletion of subroutines. The set of evolved subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL was successfully tested on the problem of controlling an agent in a dynamic and non-deterministic environment. Results with the automatic discovery of subroutines show the potential to better scale up the GP technique to complex problems.
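A toy of the flavor of subroutine discovery (my sketch, not the ARL algorithm itself): identify the most frequently reused subtree in a small population of expression trees. ARL goes much further, generalizing such blocks into parameterized subroutines, adding them to the function set, and deleting those whose utility drops during evolution.

    from collections import Counter

    def subtrees(tree):
        # Yield every subtree of a nested-tuple expression tree.
        yield tree
        if isinstance(tree, tuple):
            for child in tree[1:]:
                yield from subtrees(child)

    population = [
        ("add", ("mul", "x", "x"), ("mul", "x", "x")),
        ("sub", ("mul", "x", "x"), "y"),
        ("add", "y", ("mul", "x", "x")),
    ]

    counts = Counter(t for program in population
                     for t in subtrees(program) if isinstance(t, tuple))
    block, uses = counts.most_common(1)[0]
    print("candidate subroutine:", block, "reused", uses, "times")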
From hu at eceserv0.ece.wisc.edu Fri Jun 7 22:38:37 1996 From: hu at eceserv0.ece.wisc.edu (Yu Hu) Date: Fri, 7 Jun 1996 21:38:37 -0500 Subject: IEEE Trans. SP Special issue CFP: Neural Network Signal Processing Message-ID: <199606080238.AA10364@eceserv0.ece.wisc.edu>

******************************************************************** * CALL FOR PAPERS * * * * A Special Issue of IEEE Transactions on Signal Processing: * * Applications of Neural Networks to Signal Processing * * * ******************************************************************** Expected Publication Date: November 1997 Issue Submission Deadline: December 1, 1996 Guest Editors: A. G. Constantinides, Simon Haykin, Yu Hen Hu, Jenq-Neng Hwang, Shigeru Katagiri, Sun-Yuan Kung, T. A. Poggio

Significant progress has been made applying artificial neural network (ANN) techniques to signal processing. From a signal processing perspective, it is imperative to understand how neural-network-based algorithms relate to more conventional approaches in terms of performance, cost, and practical implementation issues. Questions like these demand honest, pragmatic, innovative, and imaginative answers. This special issue offers a unique forum for researchers and practitioners in this field to present their views on these important questions. We seek the highest-quality manuscripts which focus on the signal processing aspects of a neural-network-based algorithm, application, or implementation. Topics of interest include, but are not limited to: Neural-network-based signal detection, classification, and understanding algorithms. Nonlinear system identification, signal prediction, modeling, adaptive filtering, and neural network learning algorithms. Neural network applications to biomedical signal processing, including medical imaging, electrocardiogram (ECG), EEG, and related topics. Signal processing algorithms for biological neural system modeling. Comparison of neural-network-based approaches with conventional signal processing algorithms for solving real-world signal processing tasks. Real-world signal processing applications based on neural networks. Fast and parallel algorithms for efficient implementation of neural-network-based signal processing systems. Prospective authors are encouraged to SUBMIT MANUSCRIPTS BY DECEMBER 1, 1996 to: Professor Yu-Hen Hu E-mail: hu at engr.wisc.edu Univ. of Wisconsin - Madison, Phone: (608) 262-6724 Dept. of Electrical and Computer Engineering Fax: (608) 262-1267 1415 Engineering Drive Madison, WI 53706-1691 U.S.A. In the cover letter, indicate that the manuscript is submitted to the special issue on neural networks for signal processing. All manuscripts should conform to the submission guidelines detailed in the "information for authors" printed in each issue of the IEEE Transactions on Signal Processing. Specifically, the length of each manuscript should not exceed 30 double-spaced pages. SCHEDULE Manuscript received by: December 1, 1996 Completion of initial review: March 31, 1997 Final manuscript received by: June 30, 1997 Expected publication date: November, 1997 DISTINGUISHED GUEST EDITORS Prof. A. G. Constantinides, Imperial College, UK, a.constantinides at romeo.ic.ac.uk Prof. Simon Haykin, McMaster University, Canada, haykin at synapse.crl.mcmaster.ca Prof. Yu Hen Hu, Univ. of Wisconsin, U.S.A., hu at engr.wisc.edu Prof. Jenq-Neng Hwang, University of Washington, U.S.A., hwang at ee.washington.edu Dr. Shigeru Katagiri, ATR, JAPAN, katagiri at hip.atr.co.jp Prof. Sun-Yuan Kung, Princeton University, U.S.A., kung at princeton.edu Prof. T. A. Poggio, Massachusetts Inst. of Tech., U.S.A., tp-temp at ai.mit.edu
From goldfarb at unb.ca Sun Jun 9 22:21:00 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Sun, 9 Jun 1996 23:21:00 -0300 (ADT) Subject: Call for papers for a Special Issue of Pattern Recognition: What is inductive learning? Message-ID:

My apologies if you receive multiple copies of this message. Please, post it. ************************************************************************ Call for papers ----------------- Special Issue of Pattern Recognition (The Journal of the Pattern Recognition Society) WHAT IS INDUCTIVE LEARNING: ON THE FOUNDATIONS OF PATTERN RECOGNITION, AI, AND COGNITIVE SCIENCE Guest editor: Lev Goldfarb Faculty of Computer Science University of New Brunswick Fredericton, N.B., Canada

The "shape" of AI (and, partly, of cognitive science), as it stands now, has been molded largely by the three founding schools (at the Massachusetts Institute of Technology, Carnegie-Mellon and Stanford Universities). This "shape" now stands fragmented into several ill-defined research agendas with no clear basic SCIENTIFIC problems (in the classical understanding of the term) as their focus. It appears that four factors have contributed to this situation: inability to focus on the central cognitive process(es), lack of understanding of the structure of advanced scientific models, failure to see the distinction between the computational/logical and mathematical models, and the relative abundance of research funds for AI during the last 35 years. The resulting research agendas have prevented AI from cooperatively evolving into a scientific discipline with some central "real" problems that are inspired by the basic cognitive/biological processes. The candidates for such basic processes could come only from the central/common perceptual processes, which were only much later employed by the "higher", e.g., language, processes: the period during which the "higher" level processes have evolved is insignificant compared to that in which the development of the perceptual processes took place (compare also the anatomical development of the brain, which does not show any basic changes with the development of the "higher" processes). Moreover, the partisan tradition in the development of AI may have also inspired the recent "connectionist revolution" as well as other smaller "revolutions", e.g., that related to "genetic" learning. As a result, in particular, even the most "reputable" connectionist histories of the field of pattern recognition, which was formed more than three decades ago and to which connectionism properly belongs, show amazing ignorance of the major developments in the parent (pattern recognition) field: the emergence of two important and formally quite irreconcilable recognition paradigms--vector space and syntactic. The latter ignorance is even more instructive in view of the fact that many engineers who got involved with the field of pattern recognition through the connectionist "movement" are also ignorant of the above two paradigms that were discovered and developed within the largely applied/engineering parent field of pattern recognition. As far as the inception of a scientific field is concerned, it should be quite clear that the initial choice of the basic scientific problem(s) is of decisive importance. This is particularly true for cognitive modeling, where the path from the model to the experiment and the reverse path are much more complex than was the case, for example, at the inception of physics.
In this connection, a very important question arises, which will be addressed in the special issue: What form will the future/adequate cognitive models take? Furthermore, maybe, as many cognitive scientists argue, since we are in a prescientific stage, we should simply continue to collect more and more data and not worry about the future models. The answer to the last argument is quite clear to me: look very carefully at the "data" and the corresponding experiments and you will note that no data can even be collected without an underlying model, which always includes both formal and informal components. In other words, we cannot avoid models (especially in cognitive science, where the path from the model to the experiment will be much longer and more complex than is the case in all other sciences). Therefore, paraphrasing Friedrich Engels's thought on the role of philosophy in science, one can say that there is absolutely no way to do a scientific experiment without an underlying model, and the difference between a good scientist and a bad one has to do with the degree to which each realizes this dependence and actively participates in the selection of the corresponding model. It goes without saying that, at the inception of the science, the decision on which cognitive process one must focus initially should precede the selection of the model for the process. As to the choice of the basic scientific problem, or basic cognitive process, it appears that the really central cognitive process is that of inductive learning, as was noted by many great philosophers of the past four centuries (e.g., Bacon, Descartes, Pascal, Locke, Hume, Kant, Mill, Russell, Quine) and even earlier (e.g., Aristotle). The insistence of such outstanding physiologists and neurophysiologists as Helmholtz and Barlow on the central role of inductive learning processes is also well known. However, in view of the difficulties associated with developing an adequate inductive learning model, researchers in AI and to a somewhat lesser extent in cognitive science have decided to view inductive learning not as a central process at all, i.e., they decided to "dissolve" the problem. It became clear to me that the above difficulties are related to the development of a genuinely new (symbolic) mathematical framework that can SATISFACTORILY define the concept of INDUCTIVE CLASS REPRESENTATION (ICR), i.e., the nature of encoding an essentially infinite data set on the basis of a small finite set. (The best-known example of an ICR, and the one most critical to the development of mathematics, is the classical Peano representation of the set of natural numbers--one element plus one operation--used in mathematical induction.) Thus, the main differences between inductive learning models should be viewed in light of the differences between the formal means, i.e. mathematical structures, offered by various models for representing the class inductively. I will also argue (in one of the papers) that the classical mathematical (numeric) models, including the vector space and probabilistic models, offer inadequate axiomatic frameworks for capturing the concept of ICR. As Peter Gardenfors aptly remarked in his 1990 paper, "induction has been called 'the scandal of philosophy' [and] unless more consideration is given to the question of which form of knowledge representation is appropriate for mechanized inductive inferences, I'm afraid that induction may become a scandal of AI as well."
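To make the Peano example above concrete, a minimal formal sketch (my notation, not the author's): the finite data (0, s) inductively represent the infinite class as the smallest set containing 0 and closed under the successor operation s,

    \[
      \mathbb{N} \;=\; \bigcap \{\, C : 0 \in C \ \text{and}\ s(C) \subseteq C \,\}
      \;=\; \{\, 0,\ s(0),\ s(s(0)),\ \dots \,\},
    \]

with the induction principle \( \bigl(P(0) \wedge \forall n\,(P(n) \Rightarrow P(s(n)))\bigr) \Rightarrow \forall n\, P(n) \). The question raised here is what mathematical structure could play the role of (0, s) for classes learned from perceptual data.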
I strongly believe that all attempts to "dissolve" the inductive learning processes are futile and, moreover, that these processes are central cognitive processes for all levels of processing, hence the earlier workshop in Toronto (May 20-21) under the same title and the present Special Issue. I invite all researchers seriously interested in the scientific foundations of cognitive science, AI, or pattern recognition to submit papers addressing, in addition to other relevant issues, the following questions: * What is the role of mathematics in cognitive science, AI, and pattern recognition? * Are there any central cognitive processes? * What is inductive learning? * What is inductive class representation (ICR)? * Are there several basic inductive learning processes? * Are the inductive learning processes central? * What are the relations between inductive learning processes and the known physical processes? * What is the relationship between the measurement processes and inductive learning processes (e.g., retina as a structured measurement device)? * What is the role of inductive learning in sensation and perception (vision, hearing, etc.)? * What is the relation between inductive learning, categorization, and pattern recognition? * What is the relation between supervised/inductive learning and unsupervised learning? * What is the role of inductive learning processes in language acquisition? * What are the relationships, if any, between the inductive class representation (ICR) and the basic object representation (from the class)? * What are the differences between the mathematical structures employed by the known inductive learning models for capturing the corresponding ICRs? * What is the role of inductive learning in memory and knowledge representation? * What are the relations, if any, between the ICR and mental models and frames? When preparing the manuscript, please conform to the standard submission requirements given in the journal Pattern Recognition, which could be faxed or mailed if necessary. Hardcopies (4) of each submission should be mailed to Lev Goldfarb Faculty of Computer Science University of New Brunswick P.O. Box 4400 E-mail: goldfarb at unb.ca Fredericton, N.B. E3B 5A3 Tel: 506-453-4566 Canada Fax: 506-453-3566 by the SUBMISSION DEADLINE, August 20, 1996. The review process should take about 4-5 weeks and will take into account the relevance, quality, and originality of the contribution. Potential contributors are encouraged to contact me with any questions they might have. ************************************************************************** -- Lev Goldfarb http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.html

From john at dcs.rhbnc.ac.uk Mon Jun 10 10:42:05 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 10 Jun 96 15:42:05 +0100 Subject: RESEARCH ASSISTANT POST Message-ID: <199606101442.PAA05889@platon.cs.rhbnc.ac.uk>

--------------------------------------------------------------- RESEARCH ASSISTANT POST Royal Holloway/London School of Economics, University of London --------------------------------------------------------------- Jonathan Baxter has been working at Royal Holloway and London School of Economics on an EPSRC (a UK funding council) funded project entitled `The Canonical Metric in Machine Learning'. Attached below is an abstract of the project, which has a further year to run. Jonathan is resigning from the project to move to the Australian National University, where he will continue to work along similar lines.
EPSRC have given us permission to recruit a replacement to start any time between 5th July and 5th January 97 and to run for a further 12 months provided that they are suitable for the work involved. If you know anyone who would be interested, it would be very helpful if they could visit London before Jonathan leaves on 5th July. Jonathan, Martin and I will be attending COLT and so could discuss the project in detail with anyone interested at that time. The rate of pay is 19600 pounds pa paid half through each institution. The slant taken could be towards more implementational work or alternatively (and perhaps preferably in view of the funding committee being mathematical) more theoretical. Anyone with an interest should not hesitate to contact us for more information. Best wishes John Shawe-Taylor, Martin Anthony and Jonathan Baxter ---------- Canonical Metric in Machine Learning (Abstract and progress) The performance of a learning algorithm is fundamentally limited by the features it uses. Thus discovering sets of good features is of major importance in machine learning research. The principal aim of this project is to further our theoretical knowledge of the feature discovery process, and also to implement practical solutions for feature discovery in problems such as character recognition and speech recognition. The theoretical aspect of the project builds on work by the previous research assistant (Jonathan Baxter) showing that if a learner is embedded within an environment of related tasks then the learner can learn features that are appropriate for learning all tasks in the environment. This process can be viewed as "Learning to Learn". One can also show that the environment of learning problems induces a natural metric (the "canonical metric") on the input space of the learner. Knowledge of this metric enables the learner to perform optimal quantization of the input space, and hence to learn optimally within the environment. The main theoretical focus of the project is to further investigate the theory of the canonical metric and its relation to learning. We are currently applying these theoretical ideas to the problem of Japanese character recognition and so far we have achieved notable success. The practical part of the project will be to continue these investigations and to also investigate applications to speech recognition. Further information on the background material to this project may be found in Neurocolt technical reports 95-45, 95-46 and 95-47. Also see Jonathan Baxter's talk at this year's COLT.
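A hedged sketch of the canonical metric idea described in the abstract above (the task family and all details here are invented for illustration): an environment of related tasks induces a distance on inputs, d(x, x') = average over tasks f of |f(x) - f(x')|; inputs that every task treats alike end up close under d, so quantizing the input space with it should lose little for learning.

    import numpy as np

    rng = np.random.default_rng(0)
    # an "environment" of related tasks; sine frequencies stand in for task variation
    tasks = [lambda x, a=a: np.sin(a * x) for a in rng.uniform(0.5, 2.0, 50)]
    xs = np.linspace(-3.0, 3.0, 7)

    def canonical_distance(x1, x2):
        return float(np.mean([abs(f(x1) - f(x2)) for f in tasks]))

    D = np.array([[canonical_distance(a, b) for b in xs] for a in xs])
    print(np.round(D, 2))   # distance matrix a quantizer could cluster against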
From alpaydin at boun.edu.tr Tue Jun 11 06:33:41 1996 From: alpaydin at boun.edu.tr (Ethem Alpaydin) Date: Tue, 11 Jun 1996 14:33:41 +0400 (MEDT) Subject: Call for Participation: Tainn'96 In-Reply-To: Message-ID:

Pre-S. We're sorry if you receive multiple copies of this message. * Please post * Please forward * Please post * Please forward * Call for Participation TAINN'96, Istanbul 5th Turkish Symposium on Artificial Intelligence and Neural Networks To be held at Istanbul Technical University, Macka Campus June 27 - 28, 1996 Jointly organized by Bogazici University and Istanbul Technical University Invited talk by Prof Teuvo Kohonen, Helsinki University of Technology Full program, registration and accommodation information can be received by e-mail: tainn96 at boun.edu.tr URL: http://www.cmpe.boun.edu.tr/~tainn96

From listerrj at helios.aston.ac.uk Wed Jun 12 10:41:56 1996 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Wed, 12 Jun 1996 15:41:56 +0100 Subject: PhD Studentship Available Message-ID: <4719.199606121441@sun.aston.ac.uk>

---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK PhD STUDENTSHIP AVAILABLE ------------------------- *** Full details at http://www.ncrg.aston.ac.uk/ *** A studentship exists for a project which is jointly funded by the UK EPSRC and by British Aerospace under the Total Technology scheme. The student will be expected to follow the Neural Computing MSc by Research degree for the first year, and will also be expected to pass four modules from the MBA course. The funding covers tuition fees and living expenses for three years and the student is expected to gain a PhD in Neural Computing at the end of this period. The project supervisor will be Professor David Lowe and the studentship will be based at Aston University, Birmingham, UK. Structural Characterisation of Wake EEG Signals ----------------------------------------------- A student is required to carry out research in the interdisciplinary area of multichannel EEG signal characterisation using statistical pattern recognition and artificial neural networks. The aim of the project is to investigate the degree to which attentiveness or vigilance may be characterised through an analysis of wake EEG signals. The problem domain is one of extraction and interpretation of structure in an environment in which there is little or no labelled data and in which there is a poor signal-to-noise ratio. Macrostate unsupervised clustering of multivariate EEG data is a difficult problem area and requires a high level of competence across discipline boundaries. The project calls for developing skills in linear and non-linear signal processing, biomedical data interpretation, statistical clustering methodology and artificial neural networks. The student will have to be mathematically and computationally proficient. How to apply ------------ This award is made on a competitive basis and students should ensure that applications reach the Neural Computing Research Group at Aston University by Wednesday June 19th 1996. An electronic version of the application form is available on our Web Pages at http://www.ncrg.aston.ac.uk/ . Interviews will be held on Friday 21st June 1996 and candidates must ensure that they are available for interview. Successful candidates will be notified by telephone and/or email if they are to be called for interview. Candidates will be notified of the outcome on Monday 24th June 1996.
----------------------------------------------------------------------

From adali at engr.umbc.edu Thu Jun 13 12:17:00 1996 From: adali at engr.umbc.edu (Tulay Adali) Date: Thu, 13 Jun 1996 12:17:00 -0400 (EDT) Subject: CFP: Spec. Issue on Apps. of NNets in Biomedical Imaging/Image Processing Message-ID: <199606131617.QAA06071@akdeniz.engr.umbc.edu>

--------------------------------------------------------------------- CALL FOR PAPERS --------------------------------------------------------------------- Special Issue on Applications of Neural Networks in Biomedical Imaging/Image Processing --------------------------------------------------------------------- We invite papers on applications of artificial neural networks in biomedical imaging and biomedical image processing to appear in a special issue of the Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology. Some possible areas of application are (but are not restricted to): pattern recognition and feature extraction for computer-aided diagnosis and prognosis, analysis (quantification, segmentation, edge detection, etc.), restoration, compression, registration, reconstruction, and quality evaluation of medical images. Of particular interest are methods that take advantage of the multimodal nature of biomedical image data, which is available in most studies, as well as techniques developed for sequences of biomedical images such as those for dynamic PET and functional MRI data. Schedule: Manuscript submission deadline: August 15, 1996 Notification of acceptance: January 15, 1997 Final manuscript submission deadline: March 1, 1997 Expected publication date: Third quarter of 1997 Prospective authors should follow the regular guidelines for publications submitted to journals of Kluwer Academic Publishers except that the manuscripts should be submitted to Tulay Adali, guest editor of the special issue. Submission instructions for the journal can be found at http://www.kwap.nl. Guest Editor: TULAY ADALI Department of Computer Science and Electrical Engineering University of Maryland Baltimore County Baltimore, MD 21228-5398 Tel: (410) 455-3521 Fax: (410) 455-3969 E-mail: adali at engr.umbc.edu For updates and a list of references in the area refer to: http://www.engr.umbc.edu/~adali/biomednn.html.

From csj at ccms.ntu.edu.tw Thu Jun 13 03:28:34 1996 From: csj at ccms.ntu.edu.tw (Sao-Jie Chen) Date: Thu, 13 Jun 1996 15:28:34 +0800 Subject: call for papers: ANNCSSP'96 (Taiwan) Message-ID: <199606130728.PAA24823@ccms.ntu.edu.tw>

Submitted by: Prof. Von-Wun Soo ************************************************************************ SECOND CALL FOR PAPERS 1996 International Symposium on Multi-Technology Information Processing A Joint Symposium of Artificial Neural Networks, Circuits and Systems, and Signal Processing December 16-18, 1996 Hsin-Chu, Taiwan, Republic of China ************************************************************************ Submission of extended summary by July 15, 1996. Sponsored by: National Tsing Hua University (NTHU) Ministry of Education, Taiwan R.O.C. National Science Council, Taiwan R.O.C. in Cooperation with IEEE Signal Processing Society IEEE Circuits and Systems Society (pending) IEEE Neural Networks Council IEEE Taiwan Section Taiwanese Association for Artificial Intelligence ORGANIZATION General Co-chairs: H. C. Wang, NTHU, Y. H. Hu, U. of Wisconsin Advisory board Co-chairs: W. T. Chen, NTHU S. Y. Kung, Princeton U. Vice Co-chairs: H. C. Hu, NCTU J. N. Hwang, U. of Washington
Program Co-chairs: V. W. Soo, NTHU C. H. Lee, AT&T Call For Papers The International Symposium on Multi-Technology Information Processing (ISMIP'96), a joint symposium of artificial neural networks, circuits and systems, and signal processing, will be held at National Tsing Hua University, Hsin Chu, Taiwan, Republic of China. This conference is an expansion of the previous International Symposium on Artificial Neural Networks (ISANN) series. The main purpose of this conference is to offer a forum showcasing the latest advancements in modern information processing technologies. It will include recent innovative research results on theories, algorithms, architectures, systems, and hardware implementations that lead to intelligent information processing. The technical program will feature opening keynote addresses, invited plenary talks, and technical presentations of refereed papers. The official language is English. Papers are solicited for, but not limited to, the following topics: 1. Associative Memory 2. Digital and Analog Neurocomputers 3. Fuzzy Neural Systems 4. Supervised/Unsupervised Learning 5. Robotics 6. Sensory/Motor Control 7. Image Processing 8. Pattern Recognition 9. Language/Speech Processing 10. Digital Signal Processing 11. VLSI Architectures 12. Non-linear Circuits 13. Multimedia information processing 14. Optimization 15. Mathematical Methods 16. Visual signal processing 17. Content based signal processing 18. Applications Prospective authors are invited to submit 4 copies of extended summaries of no more than 4 pages. All manuscripts must be written in English, single-spaced, single-column, on 8.5" by 11" white paper. The top of the first page of the paper should include a title, authors' names, affiliations, address, telephone/fax numbers, and email address if applicable. The indicated corresponding author will receive an acknowledgement of his/her submission. Camera-ready full papers of accepted manuscripts will be published in a hard-bound proceedings and distributed at the symposium. For more information, please consult the URL http://pierce.ee.washington.edu/~nnsp/ismip96.html Authors are invited to send submissions to one of the program co-chairs: For submissions from USA and Europe: Dr. Chin-Hui Lee Bell Labs, Lucent Technologies 600 Mountain Avenue Murray Hill, NJ 07974, USA E-Mail: chl at research.bell-labs.com Phone: 908-582-5226 Fax: 908-582-7308 For submissions from Asia and the rest of the world: Prof. V. W. Soo Dept. of Computer Science National Tsing Hua University Hsin Chu, Taiwan 30043, ROC E-Mail: soo at cs.nthu.edu.tw Phone: 886-35-731068 Fax: 886-35-723694 Schedule Submission of extended summary: July 15, 1996. Notification of acceptance: September 30, 1996. Submission of camera-ready paper: October 31, 1996. Advance registration before: November 15, 1996.

From Dimitris.Dracopoulos at trurl.brunel.ac.uk Fri Jun 14 16:19:16 1996 From: Dimitris.Dracopoulos at trurl.brunel.ac.uk (Dimitris Dracopoulos) Date: Fri, 14 Jun 1996 14:19:16 -0600 Subject: Advanced MSc in Neural & Evolutionary Systems (London) Message-ID: <9606141419.ZM1649@trurl.brunel.ac.uk>

The Department of Computer Science, Brunel University, London will run a new advanced MSc in Neural & Evolutionary Systems in 1996/1997. The MSc covers introductory and more advanced material in the areas of neural networks, genetic algorithms, genetic programming, artificial life, parallel computing, etc.
The MSc in Neural and Evolutionary Systems is supported by the Centre of Neural and Evolutionary Systems (CNES), in the Department of Computer Science and Information Systems. More information about this MSc can be found at: http://http2.brunel.ac.uk:8080/~csstdcd/NES_Msc.html Applications will be considered until late June (or, at the latest, the beginning of July). For application forms please contact: Admissions Secretary Department of Computer Science and Information Systems Brunel University London Uxbridge Middlesex UB8 3PH United Kingdom Telephone: +44 (0)1895 274000 ext. 2394 Fax: +44 (0)1895 251686 Email: cs-msc-courses at brunel.ac.uk -- Dr Dimitris C. Dracopoulos Department of Computer Science Brunel University Telephone: +44 1895 274000 ext. 2120 London Fax: +44 1895 251686 Uxbridge E-mail: Dimitris.Dracopoulos at brunel.ac.uk Middlesex UB8 3PH United Kingdom

From pkso at tattoo.ed.ac.uk Fri Jun 14 16:00:21 1996 From: pkso at tattoo.ed.ac.uk (P Sollich) Date: Fri, 14 Jun 96 16:00:21 BST Subject: Paper: Query learning in committee machine Message-ID: <9606141600.aa05180@uk.ac.ed.tattoo>

FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/sollich.queries_comm_machine.ps.Z Dear connectionists, the following preprint is now available for copying from the neuroprose repository: Learning from Minimum Entropy Queries in a Large Committee Machine Peter Sollich Department of Physics University of Edinburgh Edinburgh EH9 3JZ, U.K. ABSTRACT In supervised learning, the redundancy contained in random examples can be avoided by learning from queries. Using statistical mechanics, we study learning from minimum entropy queries in a large tree-committee machine. The generalization error decreases exponentially with the number of training examples, providing a significant improvement over the algebraic decay for random examples. The connection between entropy and generalization error in multi-layer networks is discussed, and a computationally cheap algorithm for constructing queries is suggested and analysed. Has appeared in Physical Review E, 53, R2060--R2063, 1996 (4 pages). Comments and/or feedback are welcome. Peter Sollich PS: Sorry - no hardcopies available. -------------------------------------------------------------------------- Peter Sollich Department of Physics University of Edinburgh e-mail: P.Sollich at ed.ac.uk Kings Buildings phone: +44 - (0)131 - 650 5293 Mayfield Road Edinburgh EH9 3JZ, U.K. --------------------------------------------------------------------------
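A toy in the spirit of the entropy-reducing queries studied above (a query-by-committee proxy, not the paper's statistical-mechanics analysis; all details invented): among a pool of candidate inputs, query the one on which an ensemble of candidate networks disagrees most, since a split vote is where a label removes the most uncertainty.

    import numpy as np

    rng = np.random.default_rng(0)
    teacher = rng.standard_normal(5)           # unknown target to be learned
    committee = rng.standard_normal((20, 5))   # stand-in for version-space samples
    pool = rng.standard_normal((200, 5))       # candidate query inputs

    votes = np.sign(pool @ committee.T)              # +/-1 label per (input, member)
    disagreement = 1.0 - np.abs(votes.mean(axis=1))  # 1.0 means an evenly split vote
    best = int(np.argmax(disagreement))
    print("query input %d, disagreement %.2f, teacher label %+d"
          % (best, disagreement[best], int(np.sign(pool[best] @ teacher))))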
From LUCIANO at etsiig.uniovi.es Mon Jun 17 06:23:00 1996 From: LUCIANO at etsiig.uniovi.es (Luciano Sanchez) Date: Mon, 17 Jun 1996 11:23:00 +0100 (GMT) Subject: New book in Spanish Message-ID: <01I60HUOKW5UCTZKPB@etsiig.uniovi.es>

Last month a new book was published on metaheuristics (simulated annealing, tabu search, GRASP, genetic algorithms) and the application of NNs in optimization. The book is in Spanish. The list of authors includes Prof. Fred Glover (U of Colorado), author of the tabu search technique, and other professors from Alabama, the American University of Beirut, Tecnologico de Monterrey (Mexico), U La Plata (Argentina) and Oviedo (Spain). All the material is original and written for this book. More information can be found at the Web address: http://www.ing.unlp.edu.ar/cetad/mos/libro.html TITLE: Optimizacion heuristica y redes neuronales AUTHORS: Adenso Diaz, Fred Glover, H. Ghaziri, J.L. Gonzalez, M. Laguna, P. Moscato, F. Tseng PREFACE BY Gerald Thompson (Carnegie Mellon U) PUBLISHER: Editorial Paraninfo, Madrid (fax: + (34) 1.445.6218) PAGES: 235 ISBN: 84-283-2269-4 TABLE OF CONTENTS (ABSTRACTED): CHAPTER 1. INTRODUCTION TO METAHEURISTICS (Complexity, classification of heuristics,..) CHAPTER 2. SIMULATED ANNEALING (Physical analogy, convergency, number partitioning & SA, applic,..) CHAPTER 3. GENETIC ALGORITHMS (Basics, schema theorem, defective problems, applications,...) CHAPTER 4. TABU SEARCH (Types of memory, structure and strategy, path relinking,...) CHAPTER 5. GRASP (GREEDY RANDOMIZED ADAPTIVE SEARCH PROCEDURES) (Design, local procedures, applications,...) CHAPTER 6. NEURAL NETWORKS (Architectures, NN & opt., Kohonen, Hopfield, elastic net., appl...)

From rafal at mech.gla.ac.uk Mon Jun 17 05:29:06 1996 From: rafal at mech.gla.ac.uk (rafal@mech.gla.ac.uk) Date: Mon, 17 Jun 1996 10:29:06 +0100 (BST) Subject: New book on neurocontrol Message-ID: <7485.199606170929@gryphon.mech.gla.ac.uk>

Contributed by: Rafal Zbikowski, PhD Control Group, Department of Mechanical Engineering, Glasgow University, Glasgow G12 8QQ, Scotland, UK rafal at mech.gla.ac.uk NEW BOOK ON NEUROCONTROL ``Neural Adaptive Control Technology'' R Zbikowski and K J Hunt (Editors) World Scientific, 1996 ISBN 981-02-2557-1 hard bound, 340pp, subject index Full details: http://www.mech.gla.ac.uk/~nactftp/nact.html http://www.singnet.com.sg/~wspclib/Books/compsci/3021.html Summary ^^^^^^^ This book is an outgrowth of the workshop on Neural Adaptive Control Technology, NACT I, held in 1995 in Glasgow. Selected workshop participants were asked to substantially expand and revise their contributions to make them into full papers. The workshop was organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework, called NACT, a collaboration between Daimler-Benz (Germany) and the University of Glasgow (Scotland). A major aim of the NACT project is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed are being evaluated on concrete industrial problems from Daimler-Benz. In the book, emphasis is put on the development of a sound theory of neural adaptive control for non-linear control systems, firmly anchored in the engineering context of industrial practice. Therefore, the contributors are both renowned academics and practitioners from major industrial users of neurocontrol. Contents ^^^^^^^^ Part I Neural Adaptive Control Technology Chapter 1 J. C. Kalkkuhl and K. J. Hunt (Daimler-Benz AG) ``Discrete-time Neural Model Structures for Continuous Nonlinear Systems: Fundamental Properties and Control Aspects'' Chapter 2 P.~J.~Gawthrop (University of Glasgow) ``Continuous-Time Local Model Networks'' Chapter 3 R. {\.Z}bikowski and A. Dzieli{\'n}ski (University of Glasgow) ``Nonuniform Sampling Approach to Control Systems Modelling with Feedforward Neural Networks'' Part II Non-linear Control Fundamentals for Neural Networks Chapter 4 W. Respondek (Polish Academy of Sciences) ``Geometric Methods in Nonlinear Control Theory: A Survey'' Chapter 5 T. Kaczorek (Warsaw University of Technology) ``Local Reachability, Local Controllability and Observability of a Class of 2-D Bilinear Systems'' Chapter 6 T. A. Johansen and M. M. Polycarpou (SINTEF and University of Cincinnati) ``Stable Adaptive Control of a General Class of Non-linear Systems'' Part III Neural Techniques and Applications
Chapter 7 J-M. Renders and M. Saerens (Universit{\'e} Libre de Bruxelles) ``Robust Adaptive Neurocontrol of MIMO Continuous-time Processes Based on the $e_1$-modification Scheme'' Chapter 8 I. Rivals and L. Personnaz ({\'E}cole Superieure de Physique et de Chimie Industrielles) ``Black-Box Modeling with State-Space Neural Networks'' Chapter 9 D. A. Sofge and D. L. Elliott (NeuroDyne, Inc.) ``An Approach to Intelligent Identification and Control of Nonlinear Dynamical Systems'' Chapter 10 W. S. Mischo (Darmstadt Institute of Technology) ``How to Adapt in Neurocontrol: A Decision for CMAC'' Chapter 11 G. T. Lines and T. Kavli (SINTEF Instrumentation) ``The Equivalence of Spline Models and Logic Applied to Model Construction and Interpretation'' Index

Preface ^^^^^^^ This book is an outgrowth of the workshop on Neural Adaptive Control Technology, NACT I, held on May 18--19, 1995 in Glasgow. However, this book is not simply the conference proceedings. Instead, selected workshop participants were asked to substantially expand and revise their contributions to make them into full papers. Before the contents of the book are discussed, it seems in order to briefly sketch the background and purpose of the workshop. The event was organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework, called NACT, a collaboration between Daimler-Benz (Germany) and the University of Glasgow (Scotland). The NACT project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems are exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed are being evaluated on concrete industrial problems from within the Daimler-Benz group of companies. This context dictated the focus of the workshop and guided the editors in the choice of the papers and their subsequent reshaping into substantive book chapters. Thus, emphasis is put on the development of a sound theory of neural adaptive control for non-linear control systems, firmly anchored in the engineering context of industrial practice. Therefore, the contributors are both renowned academics and practitioners from major industrial users of neurocontrol. The book naturally divides into three parts. Part I is devoted to the theoretical and practical results on neural adaptive control technology resulting from the NACT project. Chapter 1 by J.~C.~Kalkkuhl and K.~J.~Hunt analyses several important fundamental issues so far largely ignored in the neurocontrol context. The issue of prime importance, and thus treated first, is that of the discretisation of continuous-time models. The physical plants are continuous-time, but practical digital implementation requires discrete-time representations. A careful discussion is presented exposing the limitations of NARMAX models, widely used in neurocontrol. This sets the stage for the finite-element method approach to approximation of NARMAX models. Chapter~2 is written by Peter J.~Gawthrop, a well-known contributor to adaptive control, in particular, continuous-time self-tuning. Following this line, the continuous-time version of Local Model Networks/Controllers is introduced. The structure has some surprising connections to the state observation problem. This leads to the important distinction between local and global states of the model.
The exposition is accompanied by simulations of essentially non-linear systems. Chapter 3 by R.~{\.Z}bikowski and A.~Dzieli{\'n}ski introduces and describes in some detail the nonuniform multi-dimensional sampling approach to neurocontrol with feedforward neural networks. The authors argue that this is a natural theoretical framework for practical control engineering problems, because the measured data representing the NARMA model come as multidimensional samples. The dynamics of the underlying system manifest themselves in the nonuniformity of the data, and thus the irregular spread of the samples is an essential feature of the representation. A novel method of neural modelling of NARMA systems is given. Important practical issues of distortions caused by approximation of the Fourier transform are addressed. A tutorial survey of the theory of Paley-Wiener functions, with emphasis on the neural modelling aspects, completes the presentation. Part II is devoted to results of non-linear control relevant to the theory of neurocontrol. It opens with Chapter 4 by Witold Respondek, whose pioneering work in the beginning of the 1980s resulted in explosive development (lasting to this day) of geometric methods in non-linear control. In fact, `geometric control' has practically become synonymous with non-linear control. Recently, the rich and consistent theory has been used more often in the context of neurocontrol, because of the need for a control framework for (non-linear) neural models. Being an important and active participant in the development of the geometric approach, Respondek offers valuable insights into the underlying mathematics, while never compromising on rigour. His lucid style is supported by numerous illustrations and examples, making the chapter a readable and informative introduction. It is a welcome feature for a subject dominated by presentations often deprived of geometric feeling and overloaded with distracting technicalities. Chapter 5 gives another theoretical perspective, this time from Tadeusz Kaczorek, a major contributor to the theory of 2-D control systems. Chapter 6 by T.~A.~Johansen and M.~M.~Polycarpou deals with the important, yet often neglected, issue of stability of adaptive control of non-linear systems. Part III presents various aspects of neural control and its applications. It starts with Chapter 7 by J-M.~Renders and M.~Saerens. These authors develop local stability results for an adaptive neurocontrol strategy. Weight adaptation is based upon Lyapunov stability theory. The next chapter, Chapter 8, is co-authored by the well-known neural networks researchers I.~Rivals and L.~Personnaz, who consider state-space models for neurocontrol as an alternative to the predominant input-output approach. Chapter 9 brings the intelligent control perspective on neural issues by one of the major players in the field, Donald A.~Sofge. Chapter 10 by W.~S.~Mischo (from Henning Tolle's CMAC school) presents theory and applications of CMAC-type memories for learning control. Finally, Chapter 11 by G.~Lines and T.~Kavli presents the results of applying spline-based adaptive methods for the development of dynamics models. The interpretation of the spline models as fuzzy systems is also examined. Rafa{\l} \.Zbikowski, Kenneth Hunt Glasgow, Berlin: December, 1995
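As an aside on the NARMAX structures discussed in the preface, a minimal sketch of the kind of discrete-time model meant (plant, architecture and data invented for illustration; the book analyses the fundamental properties of such models, not this particular fit): y(t) is predicted from lagged outputs and inputs by a small one-hidden-layer network trained by gradient descent on squared error.

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, 300)                  # excitation input
    y = np.zeros(300)
    for t in range(2, 300):                      # "plant" to be identified
        y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1])

    X = np.stack([y[1:-1], y[:-2], u[1:-1]], axis=1)   # regressors for y(t)
    T = y[2:]

    W1 = 0.1 * rng.standard_normal((3, 8))
    W2 = 0.1 * rng.standard_normal(8)
    for _ in range(2000):
        H = np.tanh(X @ W1)
        e = H @ W2 - T                           # one-step prediction error
        W2 -= 0.05 * H.T @ e / len(T)
        W1 -= 0.05 * X.T @ ((e[:, None] * W2) * (1 - H ** 2)) / len(T)
    print("one-step RMS error: %.4f"
          % np.sqrt(np.mean((np.tanh(X @ W1) @ W2 - T) ** 2)))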
Alexander Jourjine) Date: Tue, 18 Jun 1996 16:06:45 +0200 Subject: applied theory position at SNAT, Dresden, Germany Message-ID: <199606181406.QAA22431@kapella.nisced> A position is open in the algorithm development group of the Applications Software Dept. at Siemens Nixdorf Advanced Technologies GmbH, a fully owned subsidiary of Siemens Nixdorf AG. We work on face recognition, 3D object recognition, and failure prediction/novelty detection products. An ideal candidate would be a theoretical physicist or a mathematician with exposure to or original research in some of the following areas. 1. Mathematical physics. - Differential eqs - Integral eqs. - Prob. theory - Functional theory - Differential geometry of manifolds. 2. Dynamical systems and chaos theory 3. Optimization techniques: Monte Carlo etc. 4. Math. foundations of neural networks/GA Familiarity with image processing and/or IDL is a plus. We work in Unix and PC environments. The work involves a close collaboration with professional software developers to conceive, design and develop a range of commercial products. Publications, conference attendance, and patenting are encouraged. Work style is informal but there is a lot of pressure to finish things on time. Working language is English. Salary depends on qualifications. Some knowledge of German is helpful. Position is available immediately. For further questions please e-mail Alex Jourjine at jourjine.drs at sni.de.
From ingber at ingber.com Tue Jun 18 14:23:00 1996 From: ingber at ingber.com (Lester Ingber) Date: Tue, 18 Jun 1996 11:23:00 -0700 Subject: Papers: Canonical Momenta Indicators of EEG as well as markets Message-ID: <199606181523.IAA20598@alumni.caltech.edu> Papers: Canonical Momenta Indicators of EEG as well as markets I have some preliminary results of calculations in progress, using Adaptive Simulated Annealing (ASA code in my archive), developing Canonical Momenta Indicators (CMI) from a large EEG study, where the raw data is modeled using my Statistical Mechanics of Neocortical Interactions (SMNI) model. The CMI give somewhat enhanced signal-to-noise resolution over the raw data, and are candidates to be further processed as is ordinary EEG data. The current preliminary results are in smni96_lecture.ps.Z [1300K] %A L. Ingber %T Statistical mechanics of neocortical interactions (SMNI) %R SMNI Lecture Plates %I Lester Ingber Research %C McLean, VA %D 1996 %O URL http://www.ingber.com/smni96_lecture.ps.Z These plates contain recent preliminary results on Canonical Momenta Indicators (CMI) in the later sections. Under WWW, smni96_lecture.html permits viewing as a series of gif files. There are several smni... papers in my archive giving more details of the use of ASA and of SMNI. The direct application of SMNI to EEG data is relatively recent and described in smni91_eeg.ps.Z [500K] %A L. Ingber %T Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography %J Phys. Rev. A %N 6 %V 44 %P 4017-4060 %D 1991 %O URL http://www.ingber.com/smni91_eeg.ps.Z These methods were applied to S&P 500 cash-futures markets data as well, as reported in some markets... papers in my archive, and there too the CMI give better signal-to-noise resolution than just the raw data. A short explanation is given in markets96_brief.ps.Z [40K] %A L.
Ingber %T Trading markets with canonical momenta and adaptive simulated annealing %R Report %I Lester Ingber Research %C McLean, VA %D 1996 %O URL http://www.ingber.com/markets96_brief.ps.Z Under WWW, markets96_brief.html permits viewing as a series of gif files. This paper gives relatively non-technical descriptions of ASA and canonical momenta, and their applications to markets and EEG. The paper was solicited by and then accepted for publication in AI in Finance, but that journal subsequently ceased all publication. Shorter versions have been published in Expert Analytica and will be published in A User's Manual to Computerized Trading, I. Nelken and M.G. Jurik, Eds. Lester ======================================================================== Instructions for Retrieval of Code and Reprints Interactively Via WWW The archive can be accessed via WWW path http://www.ingber.com/ Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.ingber.com [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Files have the same WWW and FTP paths under the main / directory; i.e., http://www.ingber.com/a_directory/a_file and ftp://ftp.ingber.com/a_directory/a_file reference the same file. Electronic Mail If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at ftpmail.ramona.vix.com (was ftpmail at decwrl.dec.com), and send only the word "help" in the body of the message. Additional Information Sorry, I cannot assume the task of mailing out hardcopies of code or papers. Limited help assisting people with their queries on my codes and papers is available only by electronic mail correspondence. Lester ======================================================================== /* RESEARCH ingber at ingber.com * * INGBER ftp://ftp.ingber.com * * LESTER http://www.ingber.com/ * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */
From horvitz at MICROSOFT.com Tue Jun 18 20:48:43 1996 From: horvitz at MICROSOFT.com (Eric Horvitz) Date: Tue, 18 Jun 1996 17:48:43 -0700 Subject: UAI-96 program and registration information Message-ID: ========================================================= P R O G R A M A N D R E G I S T R A T I O N ========================================================= ** U A I 96 ** THE TWELFTH ANNUAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE August 1-4, 1996 Reed College Portland, Oregon, USA ======================================= UAI WWW page at http://cuai-96.microsoft.com/ The effective handling of uncertainty is critical in designing, understanding, and evaluating computational systems tasked with making intelligent decisions. For over a decade, the Conference on Uncertainty in Artificial Intelligence (UAI) has served as a central meeting on advances in methods for reasoning under uncertainty in computer-based systems. The conference serves as an annual international forum for exchanging results on the use of principled methods to solve difficult challenges with inference, learning and decision making under uncertainty. * * * UAI 96 events include a full-day course on uncertain reasoning on the day before the main UAI 96 conference (Wednesday, July 31) at Reed College.
Details on the course are available at: http://cuai-96.microsoft.com/tutor.htm * * * On Sunday, August 4, we will hold a UAI-KDD Special Joint Session on Learning, Probability, and Graphical Models at the Portland Convention Center. See information on the program below. * * * UAI-96 will begin shortly before KDD-96 (http://www-aig.jpl.nasa.gov/kdd96/), AAAI-96 (http://www.aaai.org/Conferences/National/1996/aaai96.html), and the AAAI workshops, and will be in close proximity to these meetings. * * * Refer to the UAI-96 WWW home page for late-breaking information: http://cuai-96.microsoft.com/ ========================================================== ** UAI-96 Conference Program ** ========================================================== ** Wednesday, July 31, 1996 ** Conference and Course Registration 8:00-8:30am Full-Day Course on Uncertain Reasoning 8:35am-5:30pm (See: http://cuai-96.microsoft.com/ for course details) ========================================================== ** Thursday, August 1, 1996 ** Plenary Session I: Perspectives on Inference 8:45-10:15am Toward a Market Model for Bayesian Inference D. Pennock and M. Wellman A unifying framework for several probabilistic inference algorithms R. Dechter Computing upper and lower bounds on likelihoods in intractable networks T. Jaakkola and M. Jordan (Outstanding Student Paper Award) Query DAGs: A practical paradigm for implementing belief-network inference A. Darwiche and G. Provan Break 10:15-10:30am Plenary Session II: Applications of Uncertain Reasoning 10:30am-12:00pm MIDAS: An Influence Diagram for Management of Mildew in Winter Wheat A. Jensen and F. Jensen Optimal Factory Scheduling under Uncertainty using Stochastic Dominance A* P. Wurman and M. Wellman Supply Restoration in Power Distribution Systems --- A Case Study in Integrating Model-Based Diagnosis and Repair Planning S. Thiebaux, M. Cordier, O. Jehl, J. Krivine Network Engineering for Complex Belief Networks S. Mahoney and K. Laskey * Panel Discussion: "Reports from the front: Real-world experiences with uncertain reasoning systems" 12:00-12:45pm Moderator: Bruce D'Ambrosio Lunch 12:45-2:00pm Plenary Session III: Representation and Independence 2:00-3:40pm Context-Specific Independence in Bayesian Networks C. Boutilier, N. Friedman, M. Goldszmidt, D. Koller Binary Join Trees P. Shenoy Why is diagnosis using belief networks insensitive to imprecision in probabilities? M. Henrion, M. Pradhan, K. Huang, B. del Favero, G. Provan, P. O'Rorke On separation criterion and recovery algorithm for chain graphs Milan Studeny Poster Session I: Overview Presentations 3:40-4:00pm Poster Session I 4:00-6:00pm Inference Using Message Propagation and Topology Transformation in Vector Gaussian Continuous Networks S. Alag and A. Agogino Constraining Influence Diagram Structure by Generative Planning: An Application to the Optimization of Oil Spill Response J. Agosta An Alternative Markov Property for Chain Graphs S. Andersson, D. Madigan, and M. Perlman Object Recognition with Imperfect Perception and Redundant Description C. Barrouil and J. Lemaire A Sufficiently Fast Algorithm for Finding Close to Optimal Junction Trees A. Becker and D. Geiger Efficient Approximations for the Marginal Likelihood of Incomplete Data Given a Bayesian Network D. Chickering and D. Heckerman Independence with Lower and Upper Probabilities L. Chrisman Topological Parameters for Time-Space Tradeoff R. Dechter A Qualitative Markov Assumption and its Implications for Belief Change N. Friedman and J.
Halpern A Probabilistic Model for Sensor Validation P. Ibarguengoytia and L. Sucar Bayesian Learning of Loglinear Models for Neural Connectivity K. Laskey and L. Martignon Geometric Implications of the Naive Bayes Assumption M. Peot Optimal Monte Carlo Estimation of Belief Network Inference M. Pradhan and P. Dagum On Coarsening and Feedback K. Reiser and Y. Chen A Discovery Algorithm for Directed Cyclic Graphs Thomas Richardson Efficient Enumeration of Instantiations in Bayesian Networks S. Srinivas and P. Nayak UAI-96 Meeting on Bayes Net Interchange Format 7:30-9:30pm (More information: http://cuai-96.microsoft.com/bnif.htm) ========================================================== ** Friday, August 2, 1996 ** Plenary Session IV: Time, Persistence, and Causality 8:45-10:15am A Structurally and Temporally Extended Bayesian Belief Network Model: Definitions, Properties, and Modelling Techniques C. Aliferis and G. Cooper Identifying independencies in causal graphs with feedback J. Pearl and R. Dechter Topics in Decision-Theoretic Troubleshooting: Repair and Experiment J. Breese and D. Heckerman A Polynomial-Time Algorithm for Deciding Equivalence of Directed Cyclic Graphical Models T. Richardson (Outstanding Student Paper Award) Break 10:15-10:30am Plenary Session V: Planning and Action under Uncertainty 10:30-12:00pm A Measure of Decision Flexibility R. Shachter and M. Mandelbaum A Graph-Theoretic Analysis of Information Value K. Poh and E. Horvitz Sound Abstraction of Probabilistic Actions in The Constraint Mass Assignment Framework A. Doan and P. Haddawy Flexible Policy Construction by Information Refinement M. Horsch and D. Poole * Panel Discussion: "Automated construction of models: Why, How, When?" 12:00-12:45pm Moderator: Daphne Koller Lunch 12:45-2:00pm Plenary Session VI: Qualitative Reasoning and Abstraction of Probability 2:00-3:30pm Generalized Qualitative Probability D. Lehmann Uncertain Inferences and Uncertain Conclusions H. Kyburg, Jr. Arguing for Decisions: A Qualitative Model of Decision Making B. Bonet and H. Geffner Defining Relative Likelihood in Partially Ordered Preferential Structures J. Halpern Poster Session II: Overview Presentations 3:40-4:00pm Poster Session II 4:00-6:00pm An Algorithm for Finding Minimum d-Separating Sets in Belief Networks S. Acid and L. de Campos Plan Development using Local Probabilistic Models E. Atkins, E. Durfee, K. Shin Entailment in Probability of Thresholded Generalizations D. Bamber Coping with the Limitations of Rational Inference in the Framework of Possibility Theory S. Benferhat, D. Dubois, H. Prade Decision-Analytic Approaches to Operational Decision Making: Application and Observation T. Chavez Learning Equivalence Classes of Bayesian Network Structures D. Chickering Propagation of 2-Monotone Lower Probabilities on an Undirected Graph L. Chrisman Quasi-Bayesian Strategies for Efficient Plan Generation: Application to the Planning to Observe Problem F. Cozman and E. Krotkov Some Experiments with Real-Time Decision Algorithms B. D'Ambrosio and S. Burgess An Evaluation of Structural Parameters for Probabilistic Reasoning: Results on Benchmark Circuits Y. El Fattah and R. Dechter Learning Bayesian Networks with Local Structure N. Friedman and M. Goldszmidt Theoretical Foundations for Abstraction-Based Probabilistic Planning V. Ha and P. Haddawy Probabilistic Disjunctive Logic Programming L. Ngo A Framework for Decision-Theoretic Planning I: Combining the Situation Calculus, Conditional Plans, Probability and Utility D.
Poole Coherent Knowledge Processing at Maximum Entropy by SPIRIT W. Roedder and C. Meyer Real-Time Estimation of Bayesian Networks R. Welch Testing Implication of Probabilistic Dependencies S.K.M. Wong UAI-96 Banquet and Invited Talk 7:30-9:30pm ========================================================= ** Saturday, August 3, 1996 ** Plenary Session VII: Developments in Belief and Possibility 8:45-10:00am Belief Revision in the Possibilistic Setting with Uncertain Inputs D. Dubois and H. Prade Approximations for Decision Making in the Dempster-Shafer Theory of Evidence M. Bauer Possible World Partition Sequences: A Unifying Framework for Uncertain Reasoning C. Teng Break 10:00-10:15am Plenary Session VIII: Learning and Uncertainty 10:15-11:45am Asymptotic model selection for directed networks with hidden variables D. Geiger, D. Heckerman, C. Meek On the Sample Complexity of Learning Bayesian Networks N. Friedman and Z. Yakhini Learning Conventions in Multiagent Stochastic Domains using Likelihood Estimates C. Boutilier Critical Remarks on Single Link Search in Learning Belief Networks Y. Xiang, S.K.M. Wong, N. Cercone * Panel Discussion: "Learning and Uncertainty: The Next Steps" 11:45-12:30pm Moderator: G. Cooper Lunch 12:30-2:00pm Plenary Session IX: Advances in Approximate Inference 2:00-3:45pm Computational complexity reduction for BN2O networks using similarity of states A. Kozlov and J. Singh Sample-and-Accumulate Algorithms for Belief Updating in Bayes Networks E. Santos Jr., S. Shimony, E. Williams Tail Simulation in Bayesian Networks E. Castillo, C. Solares, P. Gomez Efficient Search-Based Inference for Noisy-OR Belief Networks: TopEpsilon K. Huang and M. Henrion Break 3:45-4:00pm * Panel Discussion: "UAI by 2005: Reflections on critical problems, directions, and likely achievements for the next decade" 4:00-5:00pm Moderator: Eric Horvitz Report on the Bayes Net Interchange Format Meeting 5:00-5:20 UAI Planning Meeting 5:30-6:00 ================================================================ ** Sunday, August 4, 1996 ** UAI-KDD Special Joint Sessions Portland Convention Center Selected talks on learning graphical models from the UAI and KDD proceedings. UAI badges will be honored at the Portland Convention Center for the joint session. Plenary Session X: Learning, Probability, and Graphical Models I 8:30-9:45am KDD: Knowledge Discovery and Data Mining: Toward a Unifying Framework U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth UAI: Efficient Approximations for the Marginal Likelihood of Incomplete Data Given a Bayesian Network D. Chickering and D. Heckerman KDD: Clustering using Monte Carlo Cross-Validation P. Smyth UAI: Learning Equivalence Classes of Bayesian Network Structures D. Chickering Break 9:45-10:05am Plenary Session XI: Learning, Probability, and Graphical Models II 10:05-12:00pm UAI: Learning Bayesian Networks with Local Structure N. Friedman and M. Goldszmidt KDD: Rethinking the Learning of Belief Network Probabilities R. Musick UAI: Bayesian Learning of Loglinear Models for Neural Connectivity K. Laskey and L. Martignon KDD: Harnessing Graphical Structure in Markov Chain Monte Carlo Learning P.
Stolorz ================================================================ Organization: Program Cochairs: ================= Eric Horvitz Microsoft Research, 9S Redmond, WA 98052 Phone: (206) 936 2127 Fax: (206) 936 0502 Email: horvitz at microsoft.com WWW: http://www.research.microsoft.com/research/dtg/horvitz/ Finn Jensen Department of Mathematics and Computer Science Aalborg University Fredrik Bajers Vej 7,E DK-9220 Aalborg OE Denmark Phone: +45 98 15 85 22 (ext. 5024) Fax: +45 98 15 81 29 Email: fvj at iesd.auc.dk WWW: http://www.iesd.auc.dk/cgi-bin/photofinger?fvj General Conference Chair (General conference inquiries): ======================== Steve Hanks Department of Computer Science and Engineering, FR-35 University of Washington Seattle, WA 98195 Tel: (206) 543 4784 Fax: (206) 543 2969 Email: hanks at cs.washington.edu UAI Program Committee ====================== Fahiem Bacchus, University of Waterloo, Canada Salem Benferhat, IRIT Universite Paul Sabatier, France Philippe Besnard, IRISA, France Mark Boddy, Honeywell Technology Center, USA Piero Bonissone, General Electric Research Laboratory, USA Craig Boutilier, University of British Columbia, Canada Jack Breese, Microsoft Research, USA Wray Buntine, Thinkbank, USA Luis M. de Campos, Universidad de Granada, Spain Enrique Castillo, Universidad de Cantabria, Spain Eugene Charniak, Brown University, USA Greg Cooper, University of Pittsburgh, USA Bruce D'Ambrosio, Oregon State University, USA Paul Dagum, Stanford University, USA Adnan Darwiche, Rockwell Science Center, USA Tom Dean, Brown University, USA Denise Draper, University of Washington, USA Marek Druzdzel, University of Pittsburgh, USA Didier Dubois, IRIT Universite Paul Sabatier, France Ward Edwards, University of Southern California, USA Kazuo Ezawa, AT&T Labs, USA Nir Friedman, Stanford University, USA Robert Fung, Prevision, USA Linda van der Gaag, Utrecht University, Netherlands Hector Geffner, Universidad Simon Bolivar, Venezuela Dan Geiger, Technion, Israel Lluis Godo, Campus Universitat Autonoma Barcelona, Spain Robert Goldman, Honeywell Technology Center, USA Moises Goldszmidt, SRI International, USA Adam Grove, NEC Research Institute, USA Peter Haddawy, University of Wisconsin-Milwaukee, USA Petr Hajek, Academy of Sciences, Czech Republic Joseph Halpern, IBM Almaden Research Center, USA Steve Hanks, University of Washington, USA Othar Hansson, Thinkbank, USA Peter Hart, Ricoh California Research Center, USA David Heckerman, Microsoft Research, USA Max Henrion, Lumina, USA Frank Jensen, Hugin Expert A/S, Denmark Michael Jordan, MIT, USA Leslie Pack Kaelbling, Brown University, USA Uffe Kjaerulff, Aalborg University, Denmark Daphne Koller, Stanford University, USA Paul Krause, Imperial Cancer Research Fund, UK Rudolf Kruse, University of Braunschweig, Germany Henry Kyburg, University of Rochester, USA Jerome Lang, IRIT Universite Paul Sabatier, France Kathryn Laskey, George Mason University, USA Paul Lehner, George Mason University, USA John Lemmer, Rome Laboratory, USA Tod Levitt, IET, USA Ramon Lopez de Mantaras, Spanish Scientific Research Council, Spain David Madigan, University of Washington, USA Christopher Meek, Carnegie Mellon University, USA Serafin Moral, Universidad de Granada, Spain Eric Neufeld, University of Saskatchewan, Canada Ann Nicholson, Monash University, Australia Ramesh Patil, Information Sciences Institute, USC, USA Judea Pearl, University of California, Los Angeles, USA Kim Leng Poh, National University of Singapore David Poole, University of
British Columbia, Canada Henri Prade, IRIT Universite Paul Sabatier, France Greg Provan, Institute for Learning Systems, USA Enrique Ruspini, SRI International, USA Romano Scozzafava, Dip. Me.Mo.Mat., Rome, Italy Ross Shachter, Stanford University, USA Prakash Shenoy, University of Kansas, USA Philippe Smets, IRIDIA Universite libre de Bruxelles, Belgium David Spiegelhalter, Cambridge University, UK Peter Spirtes, Carnegie Mellon University, USA Milan Studeny, Academy of Sciences, Czech Republic Sampath Srinivas, Microsoft, USA Jaap Suermondt, Hewlett Packard Laboratories, USA Marco Valtorta, University of South Carolina, USA Michael Wellman, University of Michigan, USA Nic Wilson, Oxford Brookes University, UK Yang Xiang, University of Regina, Canada Hong Xu, IRIDIA Universite libre de Bruxelles, Belgium John Yen, Texas A&M University, USA Lian Wen Zhang, Hong Kong University of Science & Technology ============================================================== UAI-96 REGISTRATION FORM ============================================================== Please return this form via email to hanks at cs.washington.edu or use the web-based registration form available at the UAI-96 home page at http://cuai-96.microsoft.com to register online. ================================= Registrant Information ================================= Name: __________________________________ Affiliation: ____________________________ Address: ____________________________ ____________________________ ____________________________ Phone: ____________________________ Email address: ____________________________ ================================= Registration information ================================= ____ Register me for the conference Non-student $275 Student $150 Students, please supply: Advisor's name and Email address: _______________________________________ ____ Register me for the full-day course on uncertain reasoning (July 31) Non-student with conference registration $85 without conference registration $135 Student with conference registration $35 without conference registration $50 ================================= Dormitory accommodation ================================= Singles, doubles, and triples are available. All include a private bedroom; doubles and triples share a bathroom. Rates: Single $26.50 per night Double $21.50 per night Triple $16.50 per night _____________ Arrival date (earliest July 30) _____________ Departure date (latest August 4) I am paying for _____ people for _____ nights at a daily rate of _________ for a total of _________ _______________________ Sharing with (doubles and triples only) _______________________ Sharing with (triples only) =============================== Meal service =============================== Conference registration includes the conference banquet on August 2nd. Reed College offers a package of three lunches during the conference for a total of $24. ____________ Please register me for the lunch service ================================= Payment Summary ================================= $______________ Conference registration $______________ Full-day course registration $______________ Lodging charges $______________ Meal charges $______________ TOTAL AMOUNT __________ Please charge my ____ Visa ____ MasterCard ___________________ Card Number ______________ Expiration date _________ I will send a check via surface mail.
Address for checks: Steve Hanks Department of Computer Science and Engineering University of Washington Box 352350 Seattle, WA 98195-2350 _________ I will pay at the conference ============================================================ For questions about arrangements and registration issues, contact Steve Hanks (hanks at cs.washington.edu). For questions about the program, contact Eric Horvitz (horvitz at microsoft.com) or Finn Jensen (fvj at iesd.auc.dk). ============================================================
From piuri at elet.polimi.it Mon Jun 17 14:19:30 1996 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Mon, 17 Jun 1996 20:19:30 +0200 Subject: Journal of Integrated Computer-Aided Engineering Message-ID: <9606171801.AA12772@ipmel2.elet.polimi.it> ============================================================================== JOURNAL OF INTEGRATED COMPUTER-AIDED ENGINEERING John Wiley & Sons Inc. Publisher SPECIAL ISSUE ON NEURAL TECHNIQUES FOR INDUSTRIAL APPLICATIONS ============================================================================== >>> CALL FOR PAPERS <<< ============================================================================== The Journal of Integrated Computer-Aided Engineering has planned a special issue on Neural Techniques for Industrial Applications. Interested authors are invited to submit manuscripts based on their recent results on any aspect of Identification, Prediction and Control in industrial applications, including, but not limited to, theoretical foundations of neural computation, neural models, network optimization, learning procedures, sensitivity analysis, experimental results, dedicated architectures, software simulation, implementations, embedded systems, and practical application cases. One of the main focuses of the Journal is in fact the integration of new and emerging computing technologies for innovative solutions of engineering problems. Submitted manuscripts should not have been previously published in journals or books, nor be currently under consideration elsewhere. All manuscripts should include a covering page containing: the title of the paper, full name(s) and affiliation(s) of the author(s), complete surface and electronic (if available) address(es), telephone and fax number(s). The corresponding author should be clearly identified on the title page. The manuscript should include a 300-word abstract and a list of keywords characterising the paper contents. Deadlines: November 30, 1996 five copies of the manuscript March 30, 1997 notification of acceptance/rejection. May 1, 1997 final version of the manuscript including the original artwork, author(s) biographical information, and signed copyright forms Please submit your manuscript(s), or direct your questions, to either of the Guest Editors: Vincenzo Piuri Department of Electronics and Information Politecnico di Milano Piazza L. da Vinci 32 20133 Milano, Italy Phone +39-2-2399-3606 Fax +39-2-2399-3411 Email piuri at elet.polimi.it Cesare Alippi C.S.I.S.E.I. CNR - National Research Council Piazza L.
da Vinci 32 20133 Milano, Italy Phone +39-2-2399-3512 Fax +39-2-2399-3411 Email alippi at elet.polimi.it ==============================================================================
From nikola at prosun.first.gmd.de Wed Jun 19 04:28:07 1996 From: nikola at prosun.first.gmd.de (Nikola Serbedzija) Date: Wed, 19 Jun 96 10:28:07 +0200 Subject: CFP: Session on Neurosimulation - 15th IMACS World Congress Message-ID: <9606190828.AA18836@prosun.first.gmd.de> ================================================================ 15th IMACS WORLD CONGRESS on Scientific Computation, Modelling and Applied Mathematics Berlin, Germany, 24-29 August 1997 Sponsored by DFG, IEEE, IFAC, IFIP, IFORS, IMEKO. General Chair: Prof. A. Sydow (GMD FIRST Berlin, Germany) Honorary Chair: Prof. R. Vichnevetsky (Rutgers University, USA) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - CALL FOR PAPERS for the Organized Session on "Simulation of Artificial Neural Networks" Session Organizers: Gerd Kock and Nikola Serbedzija ================================================================ The aim of this session is to reflect the current techniques and trends in the simulation of artificial neural networks (ANNs). Both software and hardware approaches are solicited. Topics of interest are, but not limited to: * General Aspects of Neural Simulations - design issues for simulation tools - inherent parallelism of ANNs - general-purpose neural simulations - special-purpose neural simulations - fault tolerance aspects * Parallel Implementation of ANNs - data parallel implementations - control parallel implementations * Hardware Emulation of ANNs - silicon technology - optical technology - molecular technology * General Simulation Tools - graphic/menu based tools - module libraries - specific programming languages * Applications - applications using/demanding parallel implementations or hardware emulations - applications using/demanding analysis tools or graphical representations provided by simulation tools * Hybrid Systems - the topics from above are of interest also with respect to related hybrid systems (neuro-fuzzy, genetic algorithms) Authors interested in this session are invited to submit 3 copies of an extended summary (about 4 pages) of the paper to one of the session organizers by December 1st, 1996. Submissions can also be made by email. The notification of acceptance/rejection will be mailed by February 28th, 1997. The authors of accepted papers will also receive detailed instructions for the preparation of the final manuscripts. The submission must contain the following information: the name(s) of the author(s), title(s), affiliation(s), complete address(es) (including email, phone, fax). In addition, the author responsible for communication has to be indicated. Important dates: --------------- December 1st, 1996 Extended summary due February 28th, 1997 Notification of acceptance/rejection April 30th, 1997 Camera-ready paper due Addresses to send contributions: ------------------------------ Dr. Gerd Kock, GMD FIRST, Rudower Chaussee 5, D - 12489 Berlin, Germany, e-mail: gerd at first.gmd.de, tel: +49 30 / 6392 1863 Dr.
Nikola Serbedzija, GMD FIRST, Rudower Chaussee 5, D - 12489 Berlin, Germany, e-mail: nikola at first.gmd.de, tel: +49 30 / 6392 1873 ================================================================ IMACS The International Association for Mathematics and Computers in Simulation is an organization of professionals and scientists concerned with computers, computation and applied mathematics, in particular, as they apply to the simulation of systems. This includes numerical analysis, mathematical modelling, approximation theory, computer hardware and software, programming languages and compilers. IMACS also concerns itself with the general philosophy of scientific computation and applied mathematics, and with their impact on society and on disciplinary and interdisciplinary research. IMACS is one of the international scientific associations (with IFAC, IFORS, IFIP and IMEKO) represented in FIACC, the five international organizations in the area of computers, automation, instrumentation and the relevant branches of applied mathematics. Of the five, IMACS (which changed its name from AICA in 1976) is the oldest and was founded in 1956. For more information about the 15th IMACS WORLD CONGRESS turn to WWW page http://www.first.gmd.de/imacs97/ ================================================================
From piuri at elet.polimi.it Wed Jun 19 15:44:30 1996 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Wed, 19 Jun 1996 21:44:30 +0200 Subject: IMACS97 - call for papers Message-ID: <9606191944.AA28211@ipmel2.elet.polimi.it> ================================================================ 15th IMACS WORLD CONGRESS on Scientific Computation, Modelling and Applied Mathematics Berlin, Germany, 24-29 August 1997 Sponsored by IMACS, DFG, IEEE, IFAC, IFIP, IFORS, IMEKO. General Chair: prof. A. Sydow, GMD FIRST, Germany Honorary Chair: prof. R. Vichnevetsky, Rutgers University, USA ================================================================ Call for Papers for the Special Sessions on Neural Technologies for Control and Signal/Image Processing ================================================================ The aim of this meeting is to present the state of the art and explore the future trends of various aspects of computational engineering, encompassing both theory and applications. Due to the increasing interest in industry and in applied research, three special sessions on neural technologies will be organized on theoretical aspects, implementations and applications concerning system control and signal/image processing. Session 1: "Neural Architectures and Implementations" Topics of interest are, but not limited to: theoretical aspects of neural architectures, optimization of neural architectures, design and implementation of digital, analog and mixed structures, software implementations, fault tolerance, architectural sensitivity and design issues. Particular emphasis will be given to architectures and implementations for control and signal processing. Session 2: "Neural Applications for Identification and Control" Topics of interest are, but not limited to: theoretical models for identification and control, methodologies for neural identification, strategies for neural control, architectures for identification and control, forecasting, applications, adaptability, sensitivity.
Session 3: "Neural Techniques for Signal/Image Processing" Topics of interest are, but not limited to: theoretical models for signal/image analysis, methodologies for neural signal/image analysis, dedicated architectures, software implementations, optimization of neural paradigms, applications. Authors interested in the above Special Sessions are invited to submit an extended summary (about 5 pages) or the preliminary version of the paper to the Special Session Organizer by November 30, 1996. The submission must contain the following information: title, authors' names and affiliations, the name and the complete address (including affiliation, mail address, phone, fax, and email) of the contact author, and the name of the special session. Submissions can also be made by fax, email (plain postscript uuencoded files only), or ftp (name your postscript file after the first author, connect by anonymous ftp to ipmel2.elet.polimi.it, put your file in the directory pub/papers/Vincenzo.Piuri/imacs97, and send an email to the session organizer announcing the submission). Rejection or preliminary acceptance will be mailed by December 15, 1996. Final acceptance will be mailed by February 28, 1997. The final camera-ready version of the paper is due by April 30, 1997. Prof. Vincenzo Piuri Organizer of the Special Sessions on Neural Technologies Department of Electronics and Information Politecnico di Milano piazza L. da Vinci 32 20133 Milano, Italy fax +39-2-2399-3411 email piuri at elet.polimi.it ================================================================
From M.J.van_der_Heijden at Physiology.MedFac.LeidenUniv.nl Thu Jun 20 16:06:42 1996 From: M.J.van_der_Heijden at Physiology.MedFac.LeidenUniv.nl (Marcel J. van der Heyden) Date: Thu, 20 Jun 1996 13:06:42 -0700 Subject: Proceedings of the HELNET Workshops on Neural Networks Message-ID: <31C9AF52.6FDA@rullf2.MedFac.LeidenUniv.nl> HELNET International Workshop on Neural Networks Proceedings Volume I/II (1994/1995) M.J. van der Heyden, J. Mrsic-Floegel and K. Weigl (eds) ** http://www.leidenuniv.nl/medfac/fff/groepc/chaos/helnet/ ** The HELNET workshops are informal meetings primarily targeted towards young researchers in neural networks and related fields. They are traditionally organised a few days prior to the ICANN conferences. Participants are offered the opportunity to present and extensively discuss their work as well as more general topics from the neural network field. The HELNET proceedings treat a large variety of topics: from the formal description of networks to neurobiology, and from conceptual viewpoints to commercial applications. Thus, this collection of papers gives a comprehensive overview of current and ongoing research on neural networks. The proceedings can be browsed on-line at: http://www.leidenuniv.nl/medfac/fff/groepc/chaos/helnet/ On-line ordering is also provided, either using credit card details or having an invoice sent. ---------------------------------------------------------------------- Table of Contents: Development of Spatio-Temporal Receptive Fields for Motion Detection in a Linsker Type Model (S. Wimbauer, W. Gerstner and J.L. van Hemmen) The Dependence on Size and Calcium Dynamics of Motoneuron Firing Properties: A Model Study (Marcel J. van der Heyden, A.A.J. Hilgevoord and L.J. Bour) Annealing in Minimal Free Energy Vector Quantization (D.R. Dersch and P.
Tavan) Projection Learning: A Critical Review of Practical Aspects (Konrad Weigl) Transforming Hard Problems into Linearly Separable ones with Incremental Radial Basis Function Networks (B. Fritzke) Why are Neural Nets not Intelligent? (Harald Huening) Aspects of Information Detection using Entropy (Janko Mrsic-Floegel) Generating a Fractal Image by Programmed Cell Death: a Biological Communication Strategy for Parallel Computers (David W.N. Sharp) The Impossibility to Localize Electrical Activity in the Brain from EEG-Recordings by Means of Artificial Neural Networks (Sylvia C. Pont and Bob W. van Dijk) Analysis of Electronic Circuits with Evolutionary Strategies (Harald Gerlach and Joerg D. Becker) Neural Networks and Statistics: A Brief Overview (Marcel J. van der Heyden) Self-Controlling Chaos in Neuromodules (Nico Stollenwerk) The Effects of Feature Selection on Backpropagation in Feed-Forward Neural Networks (Selwyn Piramuthu) Exploring the Role of Emotion in the Design of Autonomous Systems (Raju S. Bapi) A Fusion of Game-Theory Based Learning and Projection Learning for Image Classification (Konrad Weigl and Shan Yu) Codierung eines Problems in die Sprache der Evolution [Coding a Problem into the Language of Evolution] (Harald Gerlach) Modelling the Wiener Cascade Using Time Delayed and Recurrent Neural Networks (M.G. Wagner, I.M. Thompson, S. Manchanda, P.G. Hearne and G.R.R. Greene) Niche memories for Temporal Sequence Processing: learning, recognition and tracking using neural representations (Janko Mrsic-Floegel) Stretching the Limits of Learning Without Modules (Antal van den Bosch and Ton Weijters) The Future for Weightless Systems (Nick Bradshaw) A Way to Improve Error Correction Capability of Hopfield Associative Memory in the Case of Saturation (Dmitry O. Gorodnichy)
From electro at batc.allied.com Fri Jun 21 16:46:14 1996 From: electro at batc.allied.com (electro@batc.allied.com) Date: Fri, 21 Jun 1996 14:46:14 CST Subject: Self-organizing Systems Research Message-ID: <31CAFC07-00000001@proton> I am looking for recent research papers that describe algorithms and/or methods that enable a totally interconnected network of processing nodes to self-organize (i.e., determine an optimal configuration by eliminating redundant nodes, minimizing node connections, and establishing node priorities) in order to meet a predefined set of spatial and functional criteria. I am somewhat familiar with Kohonen's "Self-Organizing Maps" and I would be interested in applications of these maps and other algorithms. Thanks in advance for any information that you can provide. Richard A. Burne electro at batc.allied.com AlliedSignal Inc.
From shultz at psych.mcgill.ca Fri Jun 21 10:56:46 1996 From: shultz at psych.mcgill.ca (Tom Shultz) Date: Fri, 21 Jun 1996 10:56:46 -0400 Subject: Papers available on cognitive development, knowledge and learning, cognitive consistency Message-ID: The following four papers are available at the WWW site for LNSC (Laboratory for Natural and Simulated Cognition at McGill University). http://www.psych.mcgill.ca/labs/lnsc/html/Lab-Home.html Alternatively, ftp addresses are given for each paper. Shultz, T. R., Schmidt, W. C., Buckingham, D., & Mareschal, D. (1995). Modeling cognitive development with a generative connectionist algorithm (pp. 205-261). In T. J. Simon & G. S. Halford (Eds.), Developing cognitive competence: New approaches to process modeling. Hillsdale, NJ: Erlbaum. One of the key unsolved problems in cognitive development is the precise specification of developmental transition mechanisms.
In this chapter, we focus on the applicability of a specific generative connectionist algorithm, cascade-correlation (Fahlman & Lebiere, 1990), as a process model of transition mechanisms. Generative connectionist algorithms build their own network topologies as they learn, allowing them to simulate both qualitative and quantitative developmental changes. We compare and contrast cascade-correlation, Piaget's notions of assimilation and accommodation, Papert's little known but historically relevant genetron model, conventional back-propagation networks, and rule-based models. Specific cascade-correlation models of a wide range of developmental phenomena are presented. These include the balance scale task; concepts of potency and resistance in causal reasoning; seriation; integration of the concepts of distance, time, and velocity; and personal pronouns. Descriptions of these simulations stress the degree to which the models capture the essential known psychological phenomena, generate new testable predictions, and provide explanatory insights. In several cases, the simulation results underscore clear advantages of connectionist modeling techniques. Abstraction across the various models yields a set of domain-general constraints for cognitive development. Particular domain-specific constraints are identified. Finally, the models demonstrate that connectionist approaches can be successful even on relatively high-level cognitive tasks. ftp: ego.psych.mcgill.ca/pub/shultz/cog-comp.ps.gz -------------------------------------------------------------------------------- Shultz, T. R., & Lepper, M. R. (1995). Cognitive dissonance reduction as constraint satisfaction. Technical Report No. 2195, McGill Papers in Cognitive Science, McGill University, Montréal. A constraint satisfaction neural network model (the consonance model) simulated data from the two major cognitive dissonance paradigms of insufficient justification and free-choice. In several cases, the model fit the human data better than did cognitive dissonance theory. Superior fits were due to the inclusion of constraints that were not part of dissonance theory and to the increased precision inherent to this computational approach. Predictions generated by the model for a free-choice between undesirable alternatives were confirmed in a new psychological experiment. The success of the consonance model underscores important, unforeseen similarities between what had been formerly regarded as the rather exotic process of dissonance reduction and a variety of other, more mundane psychological processes. Many of these processes can be understood as the progressive application of constraints supplied by beliefs and attitudes. ftp: ego.psych.mcgill.ca/pub/shultz/cog-diss.ps.gz -------------------------------------------------------------------------------- Shultz, T. R., Oshima-Takane, Y., & Takane, Y. (1995). Analysis of unstandardized contributions in cross connected networks. In D. Touretzky, G. Tesauro, & T. K. Leen, (Eds). Advances in Neural Information Processing Systems 7 (pp. 601-608). Cambridge, MA: MIT Press. Understanding knowledge representations in neural nets has been a difficult problem. Principal components analysis (PCA) of contributions (products of sending activations and connection weights) has yielded valuable insights into knowledge representations, but much of this work has focused on the correlation matrix of contributions.
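(To make "contributions" concrete, here is a minimal sketch with invented activations and weights, not data from any of these simulations, contrasting PCA on the correlation matrix with PCA on the variance-covariance matrix:)

import numpy as np

# Hypothetical data: activations of 5 sending units over 100 patterns,
# and the weights from those senders to a receiving unit.
rng = np.random.default_rng(0)
activations = rng.normal(size=(100, 5))
weights = rng.normal(size=5)

# Contributions: products of sending activations and connection weights.
contributions = activations * weights

# PCA via the eigendecomposition of each matrix. The correlation matrix
# standardises each contribution, discarding the scale of the weights;
# the variance-covariance matrix keeps that scale.
corr_eigvals = np.linalg.eigvalsh(np.corrcoef(contributions, rowvar=False))
cov_eigvals = np.linalg.eigvalsh(np.cov(contributions, rowvar=False))
print("correlation PCA eigenvalues:", corr_eigvals)
print("covariance PCA eigenvalues: ", cov_eigvals)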
The present work shows that analyzing the variance-covariance matrix of contributions yields more valid insights by taking account of weights. ftp: ego.psych.mcgill.ca/pub/shultz/contcros.ps.gz -------------------------------------------------------------------------------- Tetewsky, S. J., Shultz, T. R., & Takane, Y. (1995). Training regimens and function compatibility: Implications for understanding the effects of knowledge on concept learning. Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society (pp. 304-309). Hillsdale, NJ: Erlbaum. Previous research has indicated that breaking a task into subtasks can both facilitate and interfere with learning in neural networks. Although these results appear to be contradictory, they actually reflect some underlying principles governing learning in neural networks. Using the cascade-correlation learning algorithm, we devised a concept learning task that would let us specify the conditions under which subtasking would facilitate or interfere with learning. The results indicated that subtasking facilitated learning when the initial subtask involved learning a function compatible with that characterizing the rest of the task, and inhibited learning when the initial subtask involved a function incompatible with the rest of the task. These results were then discussed with regard to their implications for understanding the effect of knowledge on concept learning. ftp: ego.psych.mcgill.ca/pub/shultz/regimens.ps.gz -------------------------------------------------------------------------------- A variety of other papers in the areas of cognitive development, knowledge and learning, analyzing knowledge representations in neural networks, and cognitive consistency can be found at the LNSC site. Tom ------------------------------------------------------------------------ Thomas R. Shultz, Professor, Department of Psychology, McGill University 1205 Penfield Ave., Montreal, Quebec, Canada H3A 1B1 Phone: 514-398-6139 Fax: 514-398-4896 Email: shultz at psych.mcgill.ca WWW: http://www.psych.mcgill.ca/labs/lnsc/html/Lab-Home.html ------------------------------------------------------------------------
From harmonme at aa.wpafb.af.mil Tue Jun 25 10:47:58 1996 From: harmonme at aa.wpafb.af.mil (Mance E. Harmon) Date: Tue, 25 Jun 96 10:47:58 -0400 Subject: Tech Report Available Message-ID: <960625104756.32258@ethel.aa.wpafb.af.mil.0> Multi-Agent Residual Advantage Learning With General Function Approximation Mance E. Harmon Wright Laboratory WL/AACF 2241 Avionics Circle Wright-Patterson AFB, Ohio 45433-7318 harmonme at aa.wpafb.af.mil Leemon C. Baird III U.S.A.F. Academy 2354 Fairchild Dr. Suite 6K41 USAFA, Colorado 80840-6234 baird at cs.usafa.af.mil ABSTRACT A new algorithm, advantage learning, is presented that improves on advantage updating by requiring that a single function be learned rather than two. Furthermore, advantage learning requires only a single type of update, the learning update, while advantage updating requires two different types of updates, a learning update and a normalization update. The reinforcement learning system uses the residual form of advantage learning. An application of reinforcement learning to a Markov game is presented. The test-bed has continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile.
On each time step, each player chooses one of two possible actions: turn left or turn right, resulting in a 90-degree instantaneous change in the aircraft's heading. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. Speed of learning is increased by a new algorithm, Incremental Delta-Delta (IDD), which extends Jacobs' (1988) Delta-Delta for use in incremental training, and differs from Sutton's Incremental Delta-Bar-Delta (1992) in that it does not require the use of a trace and is amenable for use with general function approximation systems. The advantage learning algorithm for optimal control is modified for Markov games in order to find the minimax point, rather than the maximum. Empirical results gathered using the missile/aircraft test-bed validate the theory that suggests residual forms of reinforcement learning algorithms converge to a local minimum of the mean squared Bellman residual when using general function approximation systems. Also, to our knowledge, this is the first time an approximate second-order method has been used with residual algorithms. Empirical results are presented comparing convergence rates with and without the use of IDD for the reinforcement learning test-bed described above and for a supervised learning test-bed. The results of these experiments demonstrate that IDD increased the rate of convergence and resulted in an order of magnitude lower total asymptotic error than when using backpropagation alone. Available at http://www.aa.wpafb.af.mil/~harmonme
From fritzke at neuroinformatik.ruhr-uni-bochum.de Tue Jun 25 13:14:54 1996 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Tue, 25 Jun 1996 19:14:54 +0200 (MET DST) Subject: Java neural network software available Message-ID: <9606251714.AA29703@hermes.neuroinformatik.ruhr-uni-bochum.de> This is to announce the availability of a Java implementation of the following algorithms and neural network models: - Hard Competitive Learning (standard algorithm) - Neural Gas (Martinetz and Schulten 1991) - Neural Gas with Competitive Hebbian Learning (Martinetz and Schulten 1991) - Competitive Hebbian Learning (Martinetz and Schulten 1991, Martinetz 1993) - Growing Neural Gas (Fritzke 1995) The software (written by my student Hartmut Loos) is distributed under the GNU General Public License. It allows you to experiment with the different methods using various probability distributions. All model parameters can be set interactively. A teach mode is provided to observe the models in "slow-motion" if so desired. The software can be accessed at http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html where it is embedded as a Java applet in a Web page and is downloaded for immediate execution when you visit this page (if you have a slow link or only ftp access, please see below).
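(For readers unfamiliar with these models, the simplest of them, standard hard competitive learning, just moves the reference vector of the winning unit a little towards each input. A minimal sketch with invented data and parameters, written in Python rather than Java and not taken from the package:)

import numpy as np

def hard_competitive_learning(data, n_units=10, epochs=20, lr=0.05, seed=0):
    """Standard hard competitive learning: only the winning unit adapts."""
    rng = np.random.default_rng(seed)
    # Initialise the reference vectors on randomly chosen inputs.
    centers = data[rng.choice(len(data), size=n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.sum((centers - x) ** 2, axis=1))
            centers[winner] += lr * (x - centers[winner])  # move winner towards x
    return centers

# Example: four units settle on a two-cluster toy distribution.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc, 0.1, size=(200, 2)) for loc in ((0.0, 0.0), (1.0, 1.0))])
print(hard_competitive_learning(data, n_units=4))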
I have prepared an accompanying html-paper entitled "Some competitive learning methods" (yes, there may be catchier titles 8v) ) describing the implemented models in detail: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper/ It is possible to download the complete software and a Postscript version of the paper at the following addresses: ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/DemoGNG-1.00.tar.gz Please send comments regarding the paper to me and comments regarding the Java software to Hartmut (loos at neuroinformatik.ruhr-uni-bochum.de). Enjoy, Bernd Fritzke Acknowledgment: This work was done in the research group of Christoph von der Malsburg at Bochum. I would like to thank him for several helpful discussions and the excellent working environment he has created there. -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. +49-234 7007845 Ruhr-Universit"at Bochum * Germany FAX. +49-234 7094210 WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html
From malsburg at neuroinformatik.ruhr-uni-bochum.de Tue Jun 25 05:37:26 1996 From: malsburg at neuroinformatik.ruhr-uni-bochum.de (malsburg@neuroinformatik.ruhr-uni-bochum.de) Date: Tue, 25 Jun 96 11:37:26 +0200 Subject: Announcement: Web pages on vision, robotics and neural networks Message-ID: <9606250937.AA09845@circe.neuroinformatik.ruhr-uni-bochum.de> This is to announce the availability of several new WWW pages describing work performed in my research group at Bochum, Germany. The pages can be accessed (preferably with a frame-capable browser) at http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research.html and cover the following topics: COMPUTER VISION - segmentation - invariant object recognition - scene analysis - face recognition - tracking ROBOTICS - visually guided grasping - adaptive kinematics BIOLOGICALLY MOTIVATED MODELS - segmentation - object recognition GROWING SELF-ORGANIZING NETWORKS - clustering - classification - topology learning Please send any comments or questions directly to the respective authors. Christoph von der Malsburg Institute for Neural Computation Ruhr-University Bochum Germany
From iconip96 at cs.cuhk.hk Mon Jun 24 10:12:52 1996 From: iconip96 at cs.cuhk.hk (iconip96) Date: Mon, 24 Jun 1996 22:12:52 +0800 (HKT) Subject: ICONIP'96 - Call for Participation Message-ID: <199606241412.WAA01725@cs.cuhk.hk> *********************************** ICONIP'96 -- CALL FOR PARTICIPATION http://www.cs.cuhk.hk/iconip96 *********************************** 1996 International Conference on Neural Information Processing The Annual Conference of the Asian Pacific Neural Network Assembly ICONIP'96, September 24 - 27, 1996 Hong Kong Convention and Exhibition Center, Wan Chai, Hong Kong In cooperation with IEEE / NNC --IEEE Neural Networks Council INNS - International Neural Network Society ENNS - European Neural Network Society JNNS - Japanese Neural Network Society CNNC - China Neural Networks Council ====================================================================== The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing.
The conference also serves to stimulate local and regional interest in neural information processing and its potential applications to industries indigenous to this region. The conference consists of a one-day Financial Engineering tutorial given by well-known experts in the field and a three-day program with only four parallel sessions. The overall program covers major topics in neural information processing and reflects the latest progress, with a good balance between scientific studies and industrial applications, as well as featured neural information processing approaches to Financial Engineering. The conference contains a high-quality contributed program and a very strong invited program. For the contributed program, we received 314 submissions from 33 countries, including Asia-Pacific areas, Europe, North and South America. Each submitted paper was sent to three experts in the related fields from all over the world for review. With their rigorous and timely efforts, a high-quality technical program has been achieved, with an overall acceptance rate of around 60% (20% oral presentation, 20% spotlight presentation, 20% poster presentation). The spotlight presentation is a poster presentation plus a 5-minute oral presentation which highlights the contribution of the poster paper. For the invited program, we have 5 keynote speakers, 3 honored speakers, and 22 invited speakers, all well-known international neural information processing scientists and experts. The invited program also features 8 special sessions on interesting current topics. Hong Kong is one of the most dynamic and international cities in the world. It is a major financial center and has world-class facilities, easy accessibility, exciting entertainment, delicious cuisine and high levels of service and professionalism. Hong Kong people are known for their talent and aggressiveness. Come to Hong Kong! Visit this Eastern Pearl in this historical period, in transition from a British colony into a special administrative region of China in 1997. Hong Kong is a place where you can find both traditional Chinese culture and Western culture. The mid-autumn festival will be on the last day of the conference. You will experience how the people in Hong Kong celebrate this festival by eating delicious moon-cakes and lighting up marvelous paper lanterns everywhere in the evening. You will also see the nearly completed Tsing Ma Bridge -- the world's longest span road-rail suspension bridge. Do not miss this opportunity to take part in ICONIP'96 and visit Hong Kong. ********************************** Tutorials On Financial Engineering ********************************** 1. Professor John Moody, Oregon Graduate Institute, USA "Time Series Modeling: Classical and Nonlinear Approaches" 2. Professor Halbert White, University of California, San Diego, USA "Option Pricing In Modern Finance Theory and the Relevance Of Artificial Neural Networks" 3. Professor A-P. N. Refenes, London Business School, UK "Neural Networks in Financial Engineering" ************* Keynote Talks ************* 1. Professor Shun-ichi Amari, Tokyo University. "Information Geometry of Neural Networks" 2. Professor Yaser Abu-Mostafa, California Institute of Technology, USA "The Bin Model for Learning and Generalization" 3. Professor Leo Breiman, University of California, Berkeley, USA "Democratizing Predictors" 4. Professor Christoph von der Malsburg, Ruhr-Universitat Bochum, Germany "Scene Analysis Based on Dynamic Links" (tentatively) 5.
5. Professor Erkki Oja, Helsinki University of Technology, Finland
   "Blind Signal Separation by Neural Networks"

**************
Honored Talks
**************

1. Professor Rolf Eckmiller, University of Bonn, Germany
   "Concerning the Development of Retina Implants with Neural Nets"
2. Professor Mitsuo Kawato, ATR Human Information Processing Research Lab, Japan
   "Generalized Linear Model Analysis of Cerebellar Motor Learning"
3. Professor Kunihiko Fukushima, Osaka University, Japan
   "Neural Network Model of Spatial Memory"

*************
INVITED TALKS
*************

Yoshua Bengio, Robert L. Fry, Robert Hecht-Nielsen, Nathan Intrator, Arun Jagota, Bart Kosko*, John Moody, Barak Pearlmutter, A-P. N. Refenes, Michael P. Perrone, Juergen Schmidhuber, Sara A. Solla, Harold Szu*, Andreas Weigend*, Halbert White, Alan L. Yuille, Laiwan Chan, Nikola Kasabov, Soo-Young Lee, Yousou Wu, Ah Chung Tsoi
*Tentatively Confirmed

*****************************
CONFERENCE REGISTRATION FEE
*****************************

On & Before July 1, 1996   Member       HKD $2,800
On & Before July 1, 1996   Non-Member   HKD $3,200
Late & On-Site             Member       HKD $3,200
Late & On-Site             Non-Member   HKD $3,600
Student Registration Fee                HKD $1,000

For registration and other related ICONIP'96 information please browse our WWW site at http://www.cs.cuhk.hk/iconip96. Please send your inquiries to:

ICONIP'96 Secretariat
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
Fax: (852) 2603-5024
E-mail: iconip96 at cs.cuhk.hk
http://www.cs.cuhk.hk/iconip96

From smagt at dlr.de Wed Jun 26 07:38:28 1996
From: smagt at dlr.de (Patrick van der Smagt)
Date: Wed, 26 Jun 1996 13:38:28 +0200
Subject: postprint: local vs. global learning
Message-ID: <31D12134.4EFA@dlr.de>

This paper appeared last November at ICNN'95.

P. van der Smagt and F. Groen, "Approximation with neural networks: Between local and global approximation." In Proceedings of the 1995 International Conference on Neural Networks, pp. II:1060-II:1064 (invited paper).

Abstract: We investigate neural network based approximation methods. These methods depend on the locality of the basis functions. After discussing local and global basis functions, we propose a multi-resolution hierarchical method. The various resolutions are stored at various levels in a tree. At the root of the tree, a global approximation is kept; the leaves store the learning samples themselves. Intermediate nodes store intermediate representations. In order to find an optimal partitioning of the input space, self-organising maps (SOMs) are used. The proposed method has implementational problems reminiscent of those encountered in many-particle simulations. We will investigate the parallel implementation of this method, using parallel hierarchical methods for many-particle simulations as a starting point.

Search for "global" on http://www.op.dlr.de/FF-DR-RS/Smagt/papers/
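The abstract above compresses the key data structure into a few sentences; a minimal sketch may make it concrete. The Python fragment below is not the authors' code: the SOM-based partitioning of the paper is replaced here by a simple median split, and all names and parameters are illustrative. It stores a linear fit at every tree node and the raw learning samples at the leaves, so a query can be answered at any resolution from global (root) to local (leaf).

import numpy as np

class Node:
    """Tree node covering a region of input space.

    Each node keeps a linear least-squares fit of the samples in its
    region (a coarse view of that region); leaves also keep the raw
    samples themselves (the finest resolution)."""
    def __init__(self, X, y, depth, max_depth=4, min_samples=8):
        self.X, self.y = X, y
        # Linear fit y ~ [X, 1] w for this region.
        A = np.hstack([X, np.ones((len(X), 1))])
        self.w, *_ = np.linalg.lstsq(A, y, rcond=None)
        self.children = None
        if depth < max_depth and len(X) > min_samples:
            # Stand-in for the paper's SOM-based partitioning: split the
            # region at the median of the widest input dimension.
            d = np.argmax(X.max(axis=0) - X.min(axis=0))
            m = np.median(X[:, d])
            left = X[:, d] <= m
            if left.any() and (~left).any():
                self.split_dim, self.split_val = d, m
                self.children = (
                    Node(X[left], y[left], depth + 1, max_depth, min_samples),
                    Node(X[~left], y[~left], depth + 1, max_depth, min_samples),
                )

    def predict(self, x, level=None):
        """Descend to the requested resolution level (None = finest)."""
        if self.children is not None and (level is None or level > 0):
            child = self.children[0 if x[self.split_dim] <= self.split_val else 1]
            return child.predict(x, None if level is None else level - 1)
        if self.children is None and level is None:
            # Leaf at the finest resolution: answer with the nearest
            # stored learning sample (local approximation).
            return self.y[np.argmin(((self.X - x) ** 2).sum(axis=1))]
        # Intermediate resolution: use this region's linear fit.
        return np.hstack([x, 1.0]) @ self.w

# Toy usage: approximate a 1-D function at two resolutions.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
tree = Node(X, y, depth=0)
print(tree.predict(np.array([1.0]), level=0))   # global fit at the root
print(tree.predict(np.array([1.0])))            # finest resolution

The tree shape also hints at why the authors mention many-particle simulations: queries and updates touch one root-to-leaf path, which is the same access pattern hierarchical N-body codes parallelize.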
From PAOLAD at iss.infn.it Tue Jun 25 13:38:40 1996
From: PAOLAD at iss.infn.it (PAOLAD@iss.infn.it)
Date: Tue, 25 Jun 1996 19:38:40 +0200 (WET-DST)
Subject: workshop on neural networks (Italy, September 23-27)
Message-ID: <960625193840.1600136@iss.infn.it>

WORKSHOP ON NEURAL NETWORKS: FROM BIOLOGY TO HARDWARE IMPLEMENTATIONS
Grand Hotel Chia Laguna, Chia (Sardinia, Italy), September 23-27 1996
Organizing Committee: D. Amit, W. Bialek, P. Del Giudice, E.T. Rolls

The Workshop, cross-disciplinary in character, will be focused mostly on the following topics:
- the dynamical interpretation of cross correlations
- what can be extracted from information theory analysis of spike trains
- the potentialities of multiple recordings
- the role of the timing of the spikes
- the role of hardware implementations in the context of biological modelling

Partial list of speakers:
L Abbott (Brandeis University)
D Amit (The Hebrew University and Rome University)
W Bialek (NEC Research Institute)
R Douglas (ETH, Zurich)
D Kleinfeld (University of California)
M Nicolelis (Duke University Medical Center)
B Richmond (National Institute of Mental Health, Bethesda)
E Rolls (Oxford University)
M Shadlen (University of Washington School of Medicine)
S Thorpe (Centre de Recherche Cerveau et Cognition, Toulouse)
M Tsodyks (Weizmann Institute)
E Vaadia (Hadassah Medical School, The Hebrew University)
F Van Der Velde (Leiden University)
A Villa (Lausanne University)

The planned schedule is meant to be 'discussion oriented' and will be approximately as follows: two invited talks per session, each followed by a discussion, and a set of poster contributions for each session, also followed by a discussion. Contributed papers will only be accepted as posters; all accepted contributions will be published in the proceedings volume.

The registration fee is 220 US$, to be paid upon arrival. A limited number of fellowships will be available for participants coming from countries belonging to the European Union or Israel.

Chia is a beautiful place at the southern tip of Sardinia. It is about a one-hour drive from Cagliari airport, which has connecting flights to/from Rome, Florence, etc. Bus transportation from Cagliari airport to the Workshop venue will be organized.

Information about the workshop will be available at the following URL: http://wwwtera.iss.infn.it/workshop/chia96.html

Here follows the registration form, to be returned by e-mail or fax BEFORE JULY 15 to the scientific secretary at the following address: paolad at vaxsan.iss.infn.it (Dr Paola Di Ciaccio), Fax: ++39-6-4462872. Participants willing to present a poster contribution should include a one-page abstract for evaluation by the organizing committee.

------------------------- CUT HERE --------------------------------

WORKSHOP ON NEURAL NETWORKS: FROM BIOLOGY TO HARDWARE IMPLEMENTATIONS
CHIA, SEPTEMBER 23 - 27, 1996

Name ..........................................................
Address .......................................................
...............................................................
...............................................................
Telephone .....................................................
Fax ...........................................................
E-mail ........................................................

o I am interested in attending the Workshop
o I am interested in presenting a poster contribution.
  (please include a one-page abstract in LaTeX or text-only format)
  Applicants will be informed as to their acceptance before July 25.
o I ask for financial support

Please return this form or send the above information to the scientific secretary by July 15, 1996.
E-mail: PAOLAD at VAXSAN.ISS.INFN.IT
Fax : ++39-6-4462872

From smagt at dlr.de Thu Jun 27 03:57:26 1996
From: smagt at dlr.de (Patrick van der Smagt)
Date: Thu, 27 Jun 1996 09:57:26 +0200
Subject: PostPrint: Many-Particle Decomposition and SOMs
Message-ID: <31D23EE6.6AE9@dlr.de>

P. van der Smagt and B. Kröse,
Using Many-Particle Decomposition to get a Parallel Self-Organising Map. In Proceedings of the 1995 Conference on Computer Science in the Netherlands, J. van Vliet (editor), pages 241-249. 1995.

Abstract: We propose a method for decreasing the computational complexity of self-organising maps. The method uses a partitioning of the neurons into disjoint clusters. Teaching of the neurons occurs on a cluster basis instead of on a neuron basis. For teaching an N-neuron network with N' samples, the computational complexity decreases from O(NN') to O(N log N'). Furthermore, we introduce a measure for the amount of order in a self-organising map, and show that the introduced algorithm behaves as well as the original algorithm.

Address: search for "decomposition" on http://www.op.dlr.de/FF-DR-RS/Smagt/papers/

--
dr Patrick van der Smagt                         phone +49 8153 281152
DLR/Institute of Robotics and Systems Dynamics   fax   +49 8153 281134
P.O. Box 1116, 82230 Wessling, Germany           email
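To make the cluster decomposition in the abstract above concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not the paper's algorithm: the neurons are partitioned into fixed blocks of the map (the paper derives its partition differently), and the grid size, learning rate, and neighbourhood width are invented for the example. The point it demonstrates is the cost structure: the winner search goes through K cluster centroids first and then only through the N/K neurons of the winning cluster, i.e. O(K + N/K) distance computations per sample instead of O(N).

import numpy as np

rng = np.random.default_rng(1)

# A small 2-D SOM: N = rows*cols neurons with weight vectors in R^2.
rows, cols, dim = 20, 20, 2
W = rng.random((rows * cols, dim))                 # neuron weights
grid = np.array([(r, c) for r in range(rows) for c in range(cols)])

# Partition the neurons into disjoint clusters (here: 4x4 blocks).
K_side = 4
cluster_id = (grid[:, 0] // (rows // K_side)) * K_side + grid[:, 1] // (cols // K_side)
clusters = [np.flatnonzero(cluster_id == k) for k in range(K_side * K_side)]

def train_step(x, sigma=2.0, lr=0.1):
    """One decomposed SOM update for sample x.

    Instead of scanning all N neurons for the best matching unit,
    first pick the closest cluster by its centroid, then search only
    within that cluster."""
    centroids = np.array([W[idx].mean(axis=0) for idx in clusters])
    k = np.argmin(((centroids - x) ** 2).sum(axis=1))
    members = clusters[k]
    bmu = members[np.argmin(((W[members] - x) ** 2).sum(axis=1))]
    # Neighbourhood update restricted to the winning cluster
    # (teaching "on a cluster basis", in the words of the abstract).
    d2 = ((grid[members] - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    W[members] += lr * h[:, None] * (x - W[members])

for _ in range(2000):
    train_step(rng.random(dim))

In a serious implementation the centroids would be cached and refreshed incrementally rather than recomputed per sample; they are recomputed here only to keep the sketch short.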
From thrun+ at heaven.learning.cs.cmu.edu Thu Jun 27 10:46:22 1996
From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu)
Date: Thu, 27 Jun 96 10:46:22 EDT
Subject: NEW BOOK: Robot Learning
Message-ID:

I have the pleasure of announcing the following book:

**** Recent Advances in Robot Learning ****

edited by
Judy A. Franklin, GTE Laboratories, Waltham, MA, USA
Tom M. Mitchell, Carnegie Mellon University, Pittsburgh, PA, USA
Sebastian Thrun, Carnegie Mellon University, Pittsburgh, PA, USA

Reprinted from MACHINE LEARNING, 23:2-3
THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE, VOLUME 368

Recent Advances in Robot Learning contains seven papers on robot learning written by leading researchers in the field. As the selection of papers illustrates, the field of robot learning is both active and diverse. A variety of machine learning methods, ranging from inductive logic programming to reinforcement learning, is being applied to many subproblems in robot perception and control, often with objectives as diverse as parameter calibration and concept formulation. While no unified robot learning framework has yet emerged to cover the variety of problems and approaches described in these papers and other publications, a clear set of shared issues underlies many robot learning problems.

- Machine learning, when applied to robotics, is situated: it is embedded in a real-world system that tightly integrates perception, decision making and execution.
- Since robot learning involves decision making, there is an inherent active learning issue.
- Robotic domains are usually complex, yet the expense of using actual robotic hardware often prohibits the collection of large amounts of training data.
- Most robotic systems are real-time systems. Decisions must be made within critical or practical time constraints.

These characteristics present challenges and constraints to the learning system. Since these characteristics are shared by other important real-world application domains, robotics is a highly attractive area for research on machine learning.

Recent Advances in Robot Learning is an edited volume of peer-reviewed original research comprising seven invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 23, Numbers 2 and 3).

Kluwer Academic Publishers, Boston
Date of publishing: June 1996
224 pp. Hardbound
ISBN: 0-7923-9745-2
Prices: NLG 175.00 / USD 94.00 / GBP 66.75

=============================================================================
CONTENTS

o Real-World Robotics: Learning To Plan for Robust Execution, by Scott W. Bennett and Gerald F. DeJong
o Robot Programming by Demonstration (RPD): Supporting the Induction by Human Interaction, by Stefan Muench, Ruediger Dillmann, Siegfried Bocionek, and Michael Sassin
o Performance Improvement of Robot Continuous-Path Operation through Iterative Learning Using Neural Networks, by Peter C.Y. Chen, James K. Mills, and Kenneth C. Smith
o Learning Controllers for Industrial Robots, by C. Baroglio, A. Giordana, M. Kaiser, M. Nuttin, and R. Piola
o Active Learning for Vision-Based Robot Grasping, by Marcos Salganicoff, Lyle H. Ungar, and Ruzena Bajcsy
o Purposive Behavior Acquisition for a Real Robot by Vision-Based Reinforcement Learning, by Minoru Asada, Shoichi Noda, Sukoya Tawaratsumita, and Koh Hosoda
o Learning Operational Concepts from Sensor Data of a Mobile Robot, by Volker Klingspor, Katharina J. Morik, and Anke Rieger
=============================================================================

See http://www.cs.cmu.edu/~thrun/papers/franklin.book.html for more information (paper abstracts, order form).

From JIgnacio at grial.uc3m.es Thu Jun 27 11:03:54 1996
From: JIgnacio at grial.uc3m.es (JIgnacio@grial.uc3m.es)
Date: Thu, 27 Jun 1996 17:03:54 +0200
Subject: No subject
Message-ID:

REGISTRATION FORM FOR
FIRST INTERNATIONAL WORKSHOP ON MACHINE LEARNING, FORECASTING, AND OPTIMIZATION 1996
WORKSHOP TO BE HELD ON JULY 10-12, 1996 AT UNIVERSIDAD CARLOS III DE MADRID

Tutorials

Tutorials will be given in Spanish. Check off one tutorial for each period of time. A tutorial will be given provided that at least 10 people register for it.

Wednesday, July 10, 1996

9:00 AM - 11:00 AM
__ Discovering relations in large databases (Descubrimiento de relaciones en grandes Bases de Datos): Aurora Perez (Universidad Politecnica de Madrid) and Angela Ribeiro (Instituto de Automatica Industrial, CSIC)
__ Dynamic forecasting: time series (Prediccion dinamica: series temporales): David Rios (Universidad Politecnica de Madrid)

11:30 AM - 1:30 PM
__ Inductive learning applied to medicine (Aprendizaje inductivo aplicado a la medicina): Cesar Montes (Universidad Politecnica de Madrid)
__ Artificial Intelligence techniques applied to finance (Tecnicas de Inteligencia Artificial aplicadas a las finanzas): Ignacio Olmeda (Universidad de Alcala de Henares)

3:00 PM - 5:00 PM
__ Statistical data analysis (Analisis estadistico de datos): Jacinto Gonzalez Pachon (Universidad Politecnica de Madrid)
__ Genetic Programming for control problems (Programacion Genetica en problemas de control): Javier Segovia (Universidad Politecnica de Madrid)

Preliminary Workshop Schedule

Thursday, July 11, 1996
-----------------------
 9:30 Registration
10:00 General presentation
10:10 Invited Talk: Manuela Veloso (Carnegie Mellon University)
11:30 Coffee Break
11:45 Session on Mathematical and Integrated Approaches
      - "Nonparametric Estimation of Fully Nonlinear Models for Assets Returns", Ignacio Olmeda and Eugenio Fernandez
      - "Representation Changes in Combinatorial Problems: Pigeonhole Principle versus Integer Programming Relaxation", Yury V. Smirnov and Manuela M. Veloso
      - "Integrating Reasoning Information to Domain Knowledge in the Neural Learning Process", Saida Benlarbi and Kacem Zeroual
      - "Parameter Optimization in ART2 Neural Network, using Genetic Algorithms", Enrique Muro
 1:15 Lunch
 2:15 Invited Talk: Esther Ruiz (Universidad Carlos III de Madrid)
 3:30 Session on Genetic Algorithms and Programming Approaches
      - "Automatic Generation of Turing Machines by a Genetic Approach", Julio Tanomaru and Akio Azuma
      - "GAGS, a Flexible Object Oriented Library for Evolutionary Computation", J.J. Merelo and A. Prieto
      - "An Application of Genetic Algorithms and Heuristic Techniques in Scheduling", Celia Gutierrez, Jose M. Lazaro and Joseba Zubia
      - "Classifiers Systems for Learning Reactions in Robotic Systems", Araceli Sanchis, Jose M. Molina and Pedro Isasi
      - "A Comparison of Forecast Accuracy between Genetic Programming and other Forecasters: A Loss-Differential Approach", Shu-Heng Chen and Chia-Hsuan Yeh

Friday, July 12, 1996
---------------------
10:00 Invited Talk: Juan J. Merelo (Universidad de Granada)
11:30 Coffee Break
11:45 Session on Neural Network Approaches
      - "Factor Analysis in Social Science: An Artificial Neural Network Perspective", Rafael Calvo
      - "Neural Network Forecast of Intraday Futures and Cash Returns", Pedro Isasi, Ignacio Olmeda, Eugenio Fernandez and Camino Fernandez
      - "Self-Organizing Feature Maps for Location and Scheduling", S. Lozano, F. Guerrero, J. Larrañeta and L. Onieva
      - "A Neural Network Hierarchical Model for Speech Recognition based on Biological Plausibility", J.M. Ferrandez, D. del Valle, V. Rodellar and P. Gomez
 1:15 Lunch
 2:15 Invited Talk: Alicia Perez (Boston College)
 3:30 Session on Symbolic Machine Learning Approaches
      - "Statistical Variable Interaction: Focusing Multiobjective Optimization in Machine Learning", Eduardo Perez and Larry Rendell
      - "A Multi-Agent Model for Decision Making and Learning", Jose I. Giraldez and Daniel Borrajo
      - "An Approximation to Generic Knowledge Discovery in Database Systems", Aurora Perez and Angela Ribeiro
      - "Learning to Forecast by Explaining the Consequences of Actions", Tristan Cazenave
      - "Basic Computational Processes in Machine Learning", Jesus G. Boticario and Jose Mira
 5:15 Panel

Workshop Fees

Workshop fees are:
                           Paid before June 30   Paid after June 30
Regular rate               25.000 pts.           35.000 pts.
Speakers rate              15.000 pts.           25.000 pts.
Students rate               5.000 pts.           10.000 pts.
Carlos III members          2.000 pts.            5.000 pts.
(students/teachers/staff)

The workshop fee includes a copy of the proceedings, coffee breaks, and Thursday and Friday lunches. Students must send legible proof of full-time student status.

Tutorial fees are:
                           Paid before June 30   Paid after June 30
Regular rate               30.000 pts.           40.000 pts.
University rate            15.000 pts.           20.000 pts.
Students rate               2.500 pts.            5.000 pts.
Carlos III members          2.000 pts.            3.000 pts.
(students/teachers/staff)

The tutorial fee includes attendance at three non-parallel tutorials, documentation, and coffee breaks.

Registration

Please send the completed application form to:
- by email: dborrajo at grial.uc3m.es, or isasi at gaia.uc3m.es
- by airmail: MALFO96, Attention D. Borrajo/P. Isasi
  Universidad Carlos III de Madrid
  c/ Butarque, 15
  28911 Leganes, Madrid,
  Spain

Registration Form

Last name:__________________________________________________________________
First name:_________________________________________________________________
Title:______________________________________________________________________
Affiliation:________________________________________________________________
Address:____________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________
Phone (include country and area code):______________________________________
Fax (include country and area code):________________________________________
E-mail:_____________________________________________________________________
Workshop fee:_________
Tutorial fee:_________ (please indicate which three tutorials you wish to attend)
Total amount:_________

Send a check made payable to "Universidad Carlos III de Madrid" or an electronic funds transfer to:

Account holder name: Universidad Carlos III de Madrid
Bank: Caja de Madrid
Branch address: c/ Juan de la Cierva s/n, Getafe
Bank code: 2038
Branch number: 2452
Account number: 6000085134
Control Digit: 05

In the case of a transfer, please indicate that it is code 396 (MALFO96 registrations) and send us a copy of the receipt. No refunds will be made; however, we will transfer your registration to a person you designate upon notification.

Signature:__________________________________________________________________

From juergen at idsia.ch Thu Jun 27 14:55:29 1996
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Thu, 27 Jun 96 20:55:29 +0200
Subject: 3 IDSIA papers
Message-ID: <9606271855.AA02460@fava.idsia.ch>

3 related papers available, all based on a recent, novel, general reinforcement learning paradigm that allows for metalearning and incremental self-improvement (IS).

____________________________________________________________________

SIMPLE PRINCIPLES OF METALEARNING
Juergen Schmidhuber & Jieyu Zhao & Marco Wiering
Technical Report IDSIA-69-96, June 27, 1996
23 pages, 195 K compressed, 662 K uncompressed

The goal of metalearning is to generate useful shifts of inductive bias by adapting the current learning strategy in a "useful" way. Our learner leads a single life during which actions are continually executed according to the system's internal state and current policy (a modifiable, probabilistic algorithm mapping environmental inputs and internal states to outputs and new internal states). An action is considered a learning algorithm if it can modify the policy. Effects of learning processes on later learning processes are measured using reward/time ratios. Occasional backtracking enforces success histories of still valid policy modifications, corresponding to histories of lifelong reward accelerations. The principle allows for plugging in a wide variety of learning algorithms. In particular, it allows for embedding the learner's policy modification strategy within the policy itself (self-reference). To demonstrate the principle's feasibility in cases where traditional reinforcement learning fails, we test it in complex, non-Markovian, changing environments ("POMDPs"). One of the tasks involves more than 10^13 states, two learners that both cooperate and compete, and strongly delayed reinforcement signals (initially separated by more than 300,000 time steps).
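The backtracking idea in the abstract above can be illustrated with a toy sketch. The Python fragment below is a simplified reading of the principle, not the IDSIA implementation, and all class and method names are invented for illustration: it keeps a stack of policy modifications, and at evaluation time undoes any modification that is not followed by a higher reward-per-time ratio than the one measured since its predecessor, so only a "success history" of modifications survives.

import random

class SelfModifyingLearner:
    """Toy version of the backtracking principle (not IDSIA's code)."""
    def __init__(self, n_actions=4):
        self.policy = [1.0] * n_actions   # unnormalized action weights
        self.t, self.R = 0, 0.0           # lifetime and total reward
        self.stack = []                   # (time, reward, index, old value)

    def act(self):
        """Sample an action in proportion to the policy weights."""
        total = sum(self.policy)
        r, acc = random.uniform(0, total), 0.0
        for a, w in enumerate(self.policy):
            acc += w
            if r <= acc:
                return a
        return len(self.policy) - 1

    def modify_policy(self):
        """A 'learning action': perturb one policy component, pushing
        enough information on the stack to undo the change later."""
        i = random.randrange(len(self.policy))
        self.stack.append((self.t, self.R, i, self.policy[i]))
        self.policy[i] *= random.uniform(0.5, 2.0)

    def record(self, reward):
        self.t += 1
        self.R += reward

    def backtrack(self):
        """Undo modifications that did not accelerate reward intake."""
        while self.stack:
            t1, R1, i, old = self.stack[-1]
            ratio_since = (self.R - R1) / max(1, self.t - t1)
            if len(self.stack) >= 2:
                t0, R0, *_ = self.stack[-2]
                baseline = (self.R - R0) / max(1, self.t - t0)
            else:
                baseline = self.R / max(1, self.t)
            if ratio_since > baseline:
                break                     # top modification still pays off
            self.stack.pop()
            self.policy[i] = old          # undo it

Calling backtrack() at checkpoints guarantees the invariant described in the abstract: every modification still on the stack has been followed by a faster average reward intake than the modification before it.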
____________________________________________________________________

A GENERAL METHOD FOR INCREMENTAL SELF-IMPROVEMENT AND MULTI-AGENT LEARNING IN UNRESTRICTED ENVIRONMENTS
Juergen Schmidhuber
To appear in X. Yao, editor, Evolutionary Computation: Theory and Applications. Scientific Publ. Co., Singapore, 1996 (based on "On learning how to learn learning strategies", TR FKI-198-94, TUM 1994).
30 pages, 146 K compressed, 386 K uncompressed.

____________________________________________________________________

INCREMENTAL SELF-IMPROVEMENT FOR LIFETIME MULTI-AGENT REINFORCEMENT LEARNING
Jieyu Zhao & Juergen Schmidhuber
To appear in Proc. SAB'96, MIT Press, Cambridge MA, 1996.
10 pages, 107 K compressed, 429 K uncompressed.

A spin-off paper of the TR above. It includes another experiment: a multi-agent system consisting of 3 co-evolving, IS-based animats chasing each other learns interesting, stochastic predator and prey strategies. (Another spin-off paper is: M. Wiering and J. Schmidhuber. Solving POMDPs using Levin search and EIRA. To be presented by MW at ML'96.)

____________________________________________________________________

To obtain copies, use ftp, or try the web: http://www.idsia.ch/~juergen/onlinepub.html
FTP-host: ftp.idsia.ch
FTP-filenames: /pub/juergen/meta.ps.gz /pub/juergen/ec96.ps.gz /pub/jieyu/sab96.ps.gz

____________________________________________________________________

Juergen Schmidhuber & Jieyu Zhao & Marco Wiering
http://www.idsia.ch
IDSIA

From lross at msmail4.hac.com Thu Jun 27 20:40:23 1996
From: lross at msmail4.hac.com (Ross, Lynn W)
Date: 27 Jun 1996 16:40:23 -0800
Subject: Job Postings
Message-ID:

The Information Sciences Laboratory at HRL has an immediate opening for a scientist to join our team of researchers investigating advanced techniques in non-traditional signal and image processing, compression, and optimization. If you have experience in the following areas, we would like to hear from you:

o pattern recognition
o optimization
o wavelet applications
o compression
o neural networks
o decision aids
o information fusion
o signal processing

In addition, the successful candidate will have a PhD in electrical engineering, applied math, or computer science, plus extensive knowledge of software programming in C or C++. Excellent communication abilities are required to effectively interact with the scientific and academic communities, with Hughes business units, and within our small, dynamic research team environment. Our research staff members are encouraged to invent, patent and publish on a regular basis. Our ideal location and competitive salary and benefits package contribute to a work environment designed to optimize creative research.

Please send your resume to:
Lynn W. Ross, #BSISL
Hughes Research Laboratories
3011 Malibu Canyon Road
Malibu, CA 90265
FAX: 310-317-5651
email: lross at hrl.com

To learn more about HRL, visit our Web page at http://www.hrl.com. Proof of legal right to work in the United States required. We are an Equal Opportunity Employer.

From alex at salk.edu Fri Jun 28 11:16:21 1996
From: alex at salk.edu (Alexandre Pouget)
Date: Fri, 28 Jun 96 08:16:21 PDT
Subject: postdoctoral position
Message-ID: <9606281516.AA03622@salk.edu>

Postdoctoral position in Computational Neuroscience
Institute of Computational and Cognitive Sciences
Georgetown University, Washington DC

A post-doctoral position will be available in fall 1996 at the Institute of Computational and Cognitive Sciences at Georgetown University.
Candidates should have hands-on experience in computational neuroscience and a keen interest in cognitive science at large. Research will focus primarily on models of sensory-motor transformations and multisensory integration in humans and monkeys. Candidates will be expected to interact closely with other members of the laboratory involved in testing stroke patients with spatial perception disorders. Other modeling research projects will be considered, in particular in the general area of neural representations.

The institute offers a variety of laboratories in the field of cognitive neuroscience, using investigation techniques such as single-cell and optical recordings in behaving monkeys and bats, a 7T fMRI for animal studies and a 1.5T fMRI for human subjects.

Please send a CV, a summary of relevant research experience, and at least 2 letters of recommendation to: Dr. Alexandre Pouget, Institute for Cognitive and Computational Sciences, Georgetown University, New Research Building, Room EP04, 3970 Reservoir Road NW, Washington DC, 20007-2197, or send email to: alex at salk.edu.

From mccallum at cs.rochester.edu Sat Jun 29 19:34:28 1996
From: mccallum at cs.rochester.edu (Andrew McCallum)
Date: Sat, 29 Jun 1996 19:34:28 -0400
Subject: Paper on RL, feature selection, hidden state
Message-ID: <199606292334.TAA19728@slate.cs.rochester.edu>

The following paper on reinforcement learning, hidden state and feature selection is available by FTP. Comments and suggestions are welcome.

"Learning to Use Selective Attention and Short-Term Memory"
Andrew Kachites McCallum
(to appear in SAB'96)

Abstract

This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or ``memory-based'') learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [Ron et al 94], Parti-game [Moore 94], G-algorithm [Chapman and Kaelbling 91], and Variable Resolution Dynamic Programming [Moore 91]. It builds on Utile Suffix Memory [McCallum 95], which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal states---far fewer than the 2500^3 states that would have resulted from a fixed-sized history-window approach.

Retrieval information:
FTP-host: ftp.cs.rochester.edu
FTP-pathname: /pub/papers/robotics/96.mccallum-sab.ps.gz
URL: ftp://ftp.cs.rochester.edu/pub/papers/robotics/96.mccallum-sab.ps.gz
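The abstract mentions "robust statistical tests for separating noise from task structure" as the mechanism behind the utile distinction test. As a toy illustration of that ingredient only (a stand-in using a Kolmogorov-Smirnov test, not McCallum's U-Tree code; the function name, threshold and data are all invented for the example), the sketch below accepts a candidate distinction only when the future discounted returns it separates differ significantly:

import numpy as np
from scipy.stats import ks_2samp

def utile_distinction(returns_a, returns_b, alpha=0.05):
    """Decide whether a candidate distinction (e.g. splitting a state
    on one more feature, or one more step of history) is 'utile': keep
    it only if the two groups of future discounted returns it induces
    come from visibly different distributions."""
    stat, p = ks_2samp(returns_a, returns_b)
    return p < alpha

# Toy usage: returns recorded in one leaf, grouped by a candidate
# feature value.  Group a clusters around 1.0 and group b around 3.0,
# so that distinction is worth adding; a pure-noise split is not.
rng = np.random.default_rng(2)
a = 1.0 + 0.5 * rng.standard_normal(200)
b = 3.0 + 0.5 * rng.standard_normal(200)
print(utile_distinction(a, b))                                      # True: split
print(utile_distinction(a, 1.0 + 0.5 * rng.standard_normal(200)))   # False: don't

Growing the tree only where such a test fires is what keeps the agent at 143 internal states instead of the 2500^3 of a fixed history window: distinctions that do not change the return distribution are never represented.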
From comp.ai.neural-nets at DST.BOLTZ.CS.CMU.EDU Sun Jun 30 03:40:20 1996
From: comp.ai.neural-nets at DST.BOLTZ.CS.CMU.EDU (forwarded)
Date: Sun, 30 Jun 96 03:40:20 EDT
Subject: ECAI NNSK Workshop Program and Call for Participation
Message-ID:

ECAI'96 Workshop on NEURAL NETWORKS AND STRUCTURED KNOWLEDGE (NNSK)
August 12, 1996
during the 12th European Conference on Artificial Intelligence
August 12-16, 1996 in Budapest, Hungary

Call for Participation
-------------------------------------------------------------------------------
The latest information can be retrieved from the NNSK WWW page:
http://www.informatik.uni-ulm.de/fakultaet/abteilungen/ni/ECAI-96/NNSK.html

BACKGROUND
----------
Neural networks are mostly used for tasks dealing with information presented in vector or matrix form, without a rich internal structure reflecting relations between different entities. In some application areas, e.g. speech processing or forecasting, types of networks have been investigated for their ability to represent sequences of input data. Approaches to using neural networks for the representation and processing of structured knowledge have been around for quite some time, especially in the area of connectionism, but they frequently suffer from problems with expressiveness, knowledge acquisition, adaptivity and learning, or human interpretation. In recent years much progress has been made in the theoretical understanding and the construction of neural systems capable of representing and processing structured knowledge in an adequate way, while maintaining essential capabilities of neural networks such as learning, tolerance of noise, treatment of inconsistencies, and parallel operation.

The goal of this workshop is twofold. On the one hand, existing mechanisms are critically examined with respect to their suitability for the acquisition, representation, processing and interpretation of structured knowledge. On the other hand, new approaches, especially concerning the design of systems based on such mechanisms, are presented, with particular emphasis on their application to realistic problems.

PRELIMINARY WORKSHOP PROGRAM
----------------------------
 8:30 -  8:50 INTRODUCTION (F. Kurfess)

 8:50 - 10:10 SYMBOLIC INFERENCE IN CONNECTIONIST SYSTEMS
 8:50 Semantic Knowledge in General Neural Units: Issues of Representation (J. de L. Pereira Castro)
 9:10 Implementation of a SHRUTI Knowledge Representation and Reasoning System (R. Hayward, J. Diederich)
 9:30 A Connectionist Representation of Symbolic Components, Dynamic Bindings and Basic Inference Operations (N. Seog Park, D. Robertson)
 9:50 Logical Inference and Inductive Learning (A.S. d'Avila Garcez, G. Zaverucha, L.A.V. de Carvalho)
10:10 - 10:20 Discussion
10:30 - 11:00 Break

11:00 - 11:40 EXPLOITING PROBLEM-INHERENT STRUCTURED META-KNOWLEDGE
11:00 Declarative Heuristics for Neural Network Design (M. Vuilleumier, M. Hilario)
11:20 Sign Recognition as a Support to Robot Navigation (G. Adorni, G. Destri, M. Gori, M. Mordonini)
11:40 - 12:00 Discussion
12:00 - 13:45 Break

13:45 - 14:45 SUPERVISED INDUCTIVE INFERENCE ON STRUCTURED DOMAINS
13:45 Inductive Inference from Noisy Examples: The Rule-Noise Dilemma and the Hybrid Finite State Filter (M. Gori, M. Maggini, G. Soda)
14:05 Inductive Learning in Symbolic Domains Using Structure-Driven Recurrent Neural Networks (A. Kuechler, C. Goller)
14:25 Neural Networks for the Classification of Structures (A. Sperduti)
14:45 - 15:15 Discussion
15:15 - 15:45 Break

15:45 - 16:25 INFERRING HIERARCHIES
15:45 Inferring Hierarchical Categories with ART-Based Modular Neural Networks (G. Bartfai)
16:05 A Tree-Structured Approach to Medical Diagnosis Tasks (J. Rahmel, P. Hahn)
16:25 - 16:45 Discussion

16:45 - 17:30 General Discussion and Closing

DISCUSSION THEMES
-----------------
In addition to discussions centered around the presentations, we want to foster an exchange of ideas and opinions about issues relevant to representing and processing structured knowledge with neural networks.

1. Are symbols ultimately necessary for knowledge, or are they an artefact? Can we provide symbol-less methods that achieve some kind of knowledge processing facility? Given the limited discussion time at the workshop, we would like to put the emphasis on the technical and practical aspects (experiments, methods), not so much on the underlying philosophical questions.

2. Why do we need structured knowledge? Because the world is structured? Because our cognition is systematic (Fodor & Pylyshyn's argument)? For efficiency reasons? And should structure then be explicitly represented?

3. Should we try to use neural networks for the representation and processing of structured knowledge, or are we simply wasting our time? After all, there are well-founded methods and techniques in traditional, symbol-oriented AI. If we should try, what are good reasons?
   o knowledge acquisition
   o learning, adaptability
   o generalization
   o performance
   o robustness
   o uncertainty
   o inconsistency
   o scalability
   o learning times
   o formal properties (correctness, completeness)
   o understandability
   o modularity

4. What are the characteristics of application domains/tasks where NN models and methods are more suitable than other approaches (e.g. Inductive Logic Programming) when dealing with structured knowledge?

5. Learning and generalization on a structured domain -- what does this mean? Are there different levels of generalization capability, and what can be achieved by NN models?

6. Are any of the approaches relevant for cognitive processes, e.g. memory, reasoning, language?

7. Is there evidence for the use of symbols in biological neural networks? When and where do symbols appear?

8. How difficult is it to build larger systems? They may consist of several NNSK modules, or constitute hybrid systems together with symbol-oriented modules.

9. Should we try to establish formal relations between neural methods and symbolic methods? An example might be Hoelldobler and Kalinke's or Pinkas' work.
   o equivalence
   o transformation
   o complexity

10. Should we try to model basic functions known from symbolic methods, or develop neural ones from scratch? Or is it like trying to build flying machines modeled after birds, instead of what we know as airplanes? An example: unification; is it necessary for a reasoning system, or might a radically different approach be better?

11. What are the relations between knowledge-based methods from the fields of neural networks, machine learning, and statistics?

PARTICIPATION AND REGISTRATION
------------------------------
A number of places are available for those who wish to attend the workshop without giving an oral presentation. Potential attendees are requested to send a statement of interest to the Workshop Chair (franz at cis.njit.edu). Please note that attendees of workshops must register for the main ECAI conference.
ORGANIZING COMMITTEE
--------------------
Franz Kurfess (chair) - New Jersey Institute of Technology, Newark, USA
Daniel Memmi - LEIBNIZ-IMAG, Grenoble, France
Andreas Kuechler - University of Ulm, Germany
Arnaud Giacometti - Université de Tours, France

CONTACT
-------
Prof. Franz Kurfess
Computer and Information Sciences Dept.
New Jersey Institute of Technology
Newark, NJ 07102, U.S.A.
Voice : +1/201-596-5767
Fax : +1/201-596-5767
E-mail: franz at cis.njit.edu

PROGRAM COMMITTEE
-----------------
Venkat Ajjanagadde - University of Minnesota, Minneapolis
Ethem Alpaydin - Bogazici University
Joan Cabestany - University of Catalunya
Joachim Diederich - Queensland University of Technology
Georg Dorffner - Universitaet Wien
C. Lee Giles - NEC Research Institute
Marco Gori - University of Florence
Melanie Hilario - University of Geneva (co-chair)
Steffen Hoelldobler - TU Dresden
Mirek Kubat - University of Ottawa
Wolfgang Maass - Technische Universitaet Graz
Ernst Niebur - Johns Hopkins University
Guenther Palm - University of Ulm
Lokendra Shastri - International Computer Science Institute, Berkeley
Hava Siegelmann - Technion (Israel Institute of Technology)
Alessandro Sperduti - University of Pisa (co-chair)
Chris J. Thornton - University of Sussex