From Connectionists-Request at cs.cmu.edu Wed Mar 1 00:06:00 1995 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Wed, 01 Mar 95 00:06:00 EST Subject: Bi-monthly Reminder Message-ID: <835.794034360@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Lisa Saksida --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user anonymous with password your username. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following url: http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transfering files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In your subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available "Solving Towers of Hanoi with ART-4" Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu. ^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu". From jari at vermis.hut.fi Wed Mar 1 02:46:28 1995 From: jari at vermis.hut.fi (Jari Kangas) Date: Wed, 1 Mar 1995 09:46:28 +0200 Subject: New version (v3.0) of SOM_PAK Message-ID: <199503010746.JAA16971@vermis> ************************************************************************ * * * SOM_PAK * * * * The * * * * Self-Organizing Map * * * * Program Package * * * * Version 3.0 (March 1, 1995) * * * * Prepared by the * * SOM Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1992-1995 * * * ************************************************************************ Updated public-domain programs for Self-Organizing Map (SOM) algorithms are available via anonymous FTP on the Internet. A new book on SOM and LVQ (Learning Vector Quantization) has also recently come out: Teuvo Kohonen. Self-Organizing Maps (Springer Series in Information Science, Vol 30, 1995). In short, Self-Organizing Map (SOM) defines a 'nonlinear projection' of the probability density function of the high-dimensional input data onto the two-dimensional display. SOM places a number of reference vectors into an input data space to approximate to its data set in an ordered fashion, and thus implements a kind of nonparametric, nonlinear regression. This package contains all necessary programs for the application of Self-Organizing Map algorithms in an arbitrary complex data visualization task. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the SOM programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/som_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. The som_pak program package includes the following files: - Documentation: README short description of the package and installation instructions som_doc.ps documentation in (c) PostScript format som_doc.ps.Z same as above but compressed som_doc.txt documentation in ASCII format - Source file archives: som_p3r0.exe Self-extracting MS-DOS archive file som_pak-3.0.tar UNIX tape archive file som_pak-3.0.tar.Z same as above but compressed An example of FTP access is given below unix> ftp cochlea.hut.fi (or 130.233.168.48) Name: anonymous Password: ftp> cd /pub/som_pak ftp> binary ftp> get som_pak-3.0.tar.Z ftp> quit unix> uncompress som_pak-3.0.tar.Z unix> tar xvfo som_pak-3.0.tar See file README for further installation instructions. All comments concerning this package should be addressed to som at cochlea.hut.fi. ************************************************************************ From bishopc at helios.aston.ac.uk Wed Mar 1 03:29:38 1995 From: bishopc at helios.aston.ac.uk (bishopc) Date: Wed, 1 Mar 1995 08:29:38 +0000 Subject: NCAF Spring Conference Message-ID: <14359.9503010829@sun.aston.ac.uk> -------------------------------------------------------------------- NEURAL COMPUTING APPLICATIONS FORUM NCAF Two-Day Conference: "Practical Applications and Techniques of Neural Networks" (sponsored by IBM and Neuroptics) 12 and 13 April, 1995 Robinson College, Cambridge, UK 12 April 1995 ------------- Invited Guest Tutorial: Pattern Recognition Using Hidden Markov Models Prof Steve Young, Cambridge University Keynote Talk: The IBM ZISC Chip Guy Paillet, Neuroptics Consulting Neural Computing: The Key Answers? Prof David Bounds, Aston University (EPSRC Neural Computing Coordinator) Neural Networks for Analysis of EEG David Siegwart, Oxford University High Speed Car Number Plate Recognition (with demonstration!) Steve Gull, Cambridge University Workshop: Practicalities of Training Networks An interactive workshop with opportunities for questions and discussion from the audience. Champagne Reception hosted by Neuroptics and IBM The ZISC Banquet After dinner speaker: Robert Worden, Logica Cambridge 13 April 1995 ------------- Neural Interpretation of Foetal Heart Rate Traces Richard Shaw, Cambridge University Classification of Wood Quality John Keating, St Patrick's, Ireland Medical Diagnosis Using ARTMap for Autonomous Learning Robert Harrison, Sheffield University Control of a Jet Engine Ian Nabney, Aston University Detection of Organised Credit Card Fraud Iain Strachan, AEA Technology Recurrent Networks for Very Large Vocabulary Speech Recognition Tony Robinson, Cambridge University Interpretation and Knowledge Discovery in MLPs Marilyn Vaughn, RMCS, Cranfield University Solving Folding Optimisation Problems with Self-Organising Networks Shara Amin, British Telecom Using Neural Networks for Property Valuation Howard James, Portsmouth University An Evaluation of the Neocognitron David Lovell, Cambridge University ------------------------------------------------------------------- NEURAL COMPUTING APPLICATIONS FORUM The Neural Computing Applications Forum (NCAF) was formed in 1990 and has since come to provide the principal mechanism for exchange of ideas and information between academics and industrialists in the UK on all aspects of neural networks and their practical applications. NCAF organises four 2-day conferences each year, which are attended by well over 100 participants. It has its own international journal `Neural Computing and Applications' which is published quarterly by Springer-Verlag, and it produces a quarterly newsletter `Networks'. Annual membership rates (Pounds Stirling): Company: 300 Individual: 170 Associate 110 Student: 65 Membership includes free registration at all four annual conferences, a subscription to the journal `Neural Computing and Applications', and a subscription to `Networks'. Associate membership includes the journal and newsletter but does not include admission to the conferences (for which a separate fee must be paid) and is intended primarily for overseas members who are unable to attend most of the conferences. For further information: Tel: +44 (0)784 477271 Fax: +44 (0)784 472879 Chris M Bishop (Chairman, NCAF) -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From jbaxter at colossus.cs.adelaide.edu.au Wed Mar 1 06:53:51 1995 From: jbaxter at colossus.cs.adelaide.edu.au (Jon Baxter) Date: Wed, 1 Mar 1995 22:23:51 +1030 (CST) Subject: Paper Available: The canonical metric in vector quantization Message-ID: <9503011153.AA27533@colossus.cs.adelaide.edu.au> The following paper is available via anonymous ftp from calvin.maths.flinders.edu.au:://pub/jon/quant.ps.Z FTP instructions are given at the end of the message. Title: The Canonical Metric For Vector Quantization, 8 pages. Author: Jonathan Baxter Abstract: To measure the quality of a set of vector quantization points a means of measuring the distance between two points is required. Common metrics such as the {\em Hamming} and {\em Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is argued that there often exists a natural {\em environment} of functions to the quantization process (for example, the word classifiers in speech recognition and the character classifiers in character recognition) and that such an enviroment induces a {\em canonical metric} on the space being quantized. It is proved that optimizing the {\em reconstruction error} with respect to the canonical metric gives rise to optimal approximations of the functions in the environment, so that the canonical metric can be viewed as embodying all the essential information relevant to learning the functions in the environment. Techniques for {\em learning} the canonical metric are discussed, in particular the relationship between learning the canonical metric and {\em internal representation learning}. FTP Instructions: unix> ftp calvin.maths.flinders.edu.au (or 129.96.32.2) login: anonymous password: (your e-mail address) ftp> cd pub/jon ftp> binary ftp> get quant.ps.Z ftp> quit unix> uncompress quant.ps.Z unix> lpr quant.ps (or however you print) From njm at cupido.inesc.pt Wed Mar 1 11:28:36 1995 From: njm at cupido.inesc.pt (njm@cupido.inesc.pt) Date: Wed, 01 Mar 95 16:28:36 +0000 Subject: 2nd CFP: EPIA'95 Message-ID: <9503011628.AA00912@cupido.inesc.pt> EPIA'95 - 2nd CALL FOR PAPERS SEVENTH PORTUGUESE CONFERENCE ON ARTIFICIAL INTELLIGENCE Funchal, Madeira Island, Portugal October 3-6, 1995 (Under the auspices of the Portuguese Association for AI) SUBMISSION DEADLINE: March 20, 1995 The Seventh Portuguese Conference on Artificial Intelligence (EPIA'95) will be held at Funchal, Madeira Island, Portugal, on October 3-6, 1995. As in previous issues ('89, '91, and '93), EPIA'95 will be run as an international conference, English being the official language. The scientific program encompasses tutorials, invited lectures, demonstrations, and paper presentations. Five well known researchers will present invited lectures. The conference is devoted to all areas of Artificial Intelligence and will cover both theoretical and foundational issues and applications as well. Parallel workshops on Expert Systems, Fuzzy Logic and Neural Networks, and Applications of A.I. to Robotics and Vision Systems will run simultaneously (see below). INVITED LECTURERS ~~~~~~~~~~~~~~~~~ In this issue of the conference,four special invited lectures will promote a debate on the very foundations of Artificial Intelligence, its approaches and results. It is an honour to announce the invited lecturers and the corresponding talks: "Why Human Brains Can't Really Think", by Marvin Minsky (MIT-USA); "Planning and Learning in Intelligent Agents", by Manuela Veloso (CMU-USA); "The Connectionist Paradigm and AI", by Borges de Almeida (IST-Portugal); "The Evolutionist Approach - Past, Present, and Future of AI", by Rodney Brooks (MIT-USA). TUTORIALS ~~~~~~~~~ In this issue of the conference, four tutorials will be delivered: "Artificial Life and Autonomous Robots", by Luc Steels (VUB AI Lab-Belgium); "Virtual Reality - The AI perspective", by David Hogg (Univ. of Leeds-UK); "Introduction to Artificial Intelligence", by Ernesto Costa (Univ. of Coimbra-Portugal); (in Portuguese) "Design of Expert Systems", by Ernesto Morgado (IST-Portugal); (in Portuguese) SUBMISSION OF PAPERS ~~~~~~~~~~~~~~~~~~~~ Authors must submit five (5) complete printed copies of their papers to the "EPIA'95 submission address". Fax or electronic submissions will not be accepted. Submissions must be printed on A4 or 8 1/2"x11" paper using 12 point type. Each page must have a maximum of 38 lines and an average of 75 characters per line (corresponding to the LaTeX article-style, 12 point). Double-sided printing is strongly encouraged. The body of submitted papers must be at most 12 pages, including title, abstract, figures, tables, and diagrams, but excluding the title page and bibliography. ELECTRONIC ABSTRACT ~~~~~~~~~~~~~~~~~~~ In addition to submitting the paper copies, authors should send to epia95-abstracts at inesc.pt a short (200 words) electronic abstract of their paper to aid the reviewing process. The electronic abstract must be in plain ASCII text (no LaTeX)) in the following format: TITLE: FIRST AUTHOR: <last name, first name> EMAIL: <email of the first author> FIRST ADDRESS: <first author address> COAUTHORS: <their names, if any> KEYWORDS: <keywords separated by commas> ABSTRACT: <text of the abstract> Authors are requested to select 1-3 appropriate keywords from the list below. Authors are welcome to add additional keywords descriptors as needed. Applications, agent-oriented programming, automated reasoning, belief revision, case-based reasoning, common sense reasoning, constraint satisfaction, distributed AI, expert systems, genetic algorithms, knowledge representation, logic programming, machine learning, natural language understanding, nonmonotonic reasoning, planning, qualitative reasoning, real-time systems, robotics, spatial reasoning, theorem proving, theory of computation, tutoring systems. REVIEW OF PAPERS ~~~~~~~~~~~~~~~~ Submissions will be judged on significance, originality, quality and clarity. Reviewing will be blind to the identities of the authors. This requires that authors exercise some care not to identify themselves in their papers. Each copy of the paper must have a title page, separated from the body of the paper, including the title of the paper, the names and addresses of all authors, a list of content areas (see above) and any acknowledgments. The second page should include the same title, a short abstract of less than 200 words, and the exact same contents areas, but not the names nor affiliations of the authors. This page may include text of the paper. The references should include all published literature relevant to the paper, including previous works of the authors, but should not include unpublished works of the authors. When referring to one's own work, use the third person. For example, say "previously, Peter [17] has shown that ...". Try to avoid including any information in the body of the paper or references that would identify the authors or their institutions. Such information can be added to the final camera-ready version for publication. Please do not staple the title page to the body of the paper. Submitted papers must be unpublished. PUBLICATION ~~~~~~~~~~~ The proceedings will be published by Springer-Verlag (lecture notes in A.I. series). Authors will be required to transfer copyright of their paper to Springer-Verlag. ASSOCIATED WORKSHOPS ~~~~~~~~~~~~~~~~~~~~ In the framework of the conference three workshops will be organized: Applications of Expert Systems, Fuzzy Logic and Neural Networks in Engineering, and Applications of Artificial Intelligence to Robotics and Vision Systems. Real world applications, running systems, and demos are welcome. CONFERENCE & PROGRAM CO-CHAIRS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Carlos Pinto-Ferreira Nuno Mamede Instituto Superior Tecnico Instituto Superior Tecnico ISR, Av. Rovisco Pais INESC, Apartado 13069 1000 Lisboa, Portugal 1000 Lisboa, Portugal Voice: +351 (1) 8475105 Voice: +351 (1) 310-0234 Fax: +351 (1) 3523014 Fax: +351 (1) 525843 Email: cpf at kappa.ist.utl.pt Email: njm at inesc.pt PROGRAM COMMITTEE ~~~~~~~~~~~~~~~~~ Antonio Porto (Portugal) Lauiri Carlson (Finland) Benjamin Kuipers (USA) Luc Steels (Belgium) Bernhard Nebel (Germany) Luigia Aiello (Italy) David Makinson (Germany) Luis Moniz Pereira (Portugal) Erik Sandewall (Sweden) Luis Monteiro (Portugal) Ernesto Costa (Portugal) Manuela Veloso (USA) Helder Coelho (Portugal) Maria Cravo (Portugal) Joao Martins (Portugal) Miguel Filgueiras (Portugal) John Self (UK) Yoav Shoham (USA) Jose Carmo (Portugal) Yves Kodratoff (France) DEADLINES ~~~~~~~~~ Papers Submission: ................. March 20, 1995 Notification of acceptance: ........ May 15, 1995 Camera Ready Copies Due: ........... June 12, 1995 SUBMISSION & INQUIRIES ADDRESS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ EPIA95 INESC, Apartado 13069 1000 Lisboa, Portugal Voice: +351 (1) 310-0325 Fax: +351 (1) 525843 Email: epia95 at inesc.pt SUPPORTERS ~~~~~~~~~~ Banco Nacional Ultramarino Governo Regional da Madeira Instituto Superior Tecnico SISCOG - Sistemas Cognitivos INESC CITMA IBM TAPair Portugal PLANNING TO ATTEND ~~~~~~~~~~~~~~~~~~ People planning to submit a paper or/and to attend the conference or attend a workshop are asked to complete and return the following form (by fax or email) to the inquiries address standing their intention. It will help the conference organizers to estimate the facilities needed for the conference and will enable all interested people to receive updated information. +----------------------------------------------------------------+ | REGISTRATION OF INTEREST | | | | Title . . . . . Name . . . . . . . . . . . . . . . . . . . . | | Institution . . . . . . . . . . . . . . . . . . . . . . . . . | | Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . | | Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . | | Country . . . . . . . . . . . . . . . . . . . . . . . . . . . | | Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . . | | Email address. . . . . . . . . . . . . . . . . . . . . . . . . | | I intend to submit a paper (yes/no). . . . . . . . . . . . . . | | I intend to participate only (yes/no). . . . . . . . . . . . . | | I will travel with ... guests | +----------------------------------------------------------------+ From ajit at uts.cc.utexas.edu Wed Mar 1 13:05:47 1995 From: ajit at uts.cc.utexas.edu (Ajit Dingankar) Date: Wed, 1 Mar 1995 12:05:47 -0600 Subject: A Note on Error Bounds for Approximation in Inner Product Spaces Message-ID: <199503011805.MAA30724@curly.cc.utexas.edu> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.error-bounds.ps.Z BiBTeX entry: @ARTICLE{atd15, AUTHOR = "Dingankar, A. T. and Sandberg, I. W.", TITLE = "{A Note on Error Bounds for Approximation in Inner Product Spaces}", JOURNAL = "Circuits, Systems and Signal Processing", VOLUME = {}, NUMBER = {}, PAGES = {}, YEAR = {1996}, } A Note on Error Bounds for Approximation in Inner Product Spaces ---------------------------------------------------------------- ABSTRACT In a recent paper a method is described for constructing certain approximations to a general element in the closure of the convex hull of a subset of an inner product space. This is of interest in connection with neural networks. Here we give an algorithm that generates simpler approximants with somewhat less computational cost. From esann at dice.ucl.ac.be Wed Mar 1 11:18:49 1995 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Wed, 1 Mar 1995 18:18:49 +0200 Subject: Neural Processing Letters: abstracts on WWW Message-ID: <199503011715.SAA03612@ns1.dice.ucl.ac.be> ------------------------- Neural Processing Letters Abstracts on WWW server ------------------------- According to the large number of requests that we received, all abstracts of papers in published issues of Neural Processing Letters are now available on the WWW and FTP servers of the journal. You may connect to these servers at the following addresses: - FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL - WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html Subscription to the journal is also now possible by credit card. If you have no access to these servers, or for any other information (subscriptions, instructions for authors, free sample copies,...), please don't hesitate to contact directly the publisher: D facto publications 45 rue Masui B-1210 Brussels Belgium Phone: + 32 2 245 43 63 Fax: + 32 2 245 46 94 _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________ From ajit at uts.cc.utexas.edu Wed Mar 1 13:14:20 1995 From: ajit at uts.cc.utexas.edu (Ajit Dingankar) Date: Wed, 1 Mar 1995 12:14:20 -0600 Subject: On Approximation of Linear Functionals on L_p Spaces Message-ID: <199503011814.MAA18317@curly.cc.utexas.edu> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.linear-functionals.ps.Z BiBTeX entry: @ARTICLE{atd16, AUTHOR = "Sandberg, I. W. and Dingankar, A. T.", TITLE = "{On Approximation of Linear Functionals on $L_p$ Spaces}", JOURNAL = "IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications", VOLUME = {}, NUMBER = {}, PAGES = {}, YEAR = "1995", } On Approximation of Linear Functionals on L_p Spaces ---------------------------------------------------- ABSTRACT In a recent paper certain approximations to continuous nonlinear functionals defined on an $L_p$ space $ (1 < p < \infty) $ are shown to exist. These approximations may be realized by sigmoidal neural networks employing a linear input layer that implements finite sums of integrals of a certain type. In another recent paper similar approximation results are obtained using elements of a general class of continuous linear functionals. In this note we describe a connection between these results by showing that every continuous linear functional on a compact subset of $L_p$ may be approximated uniformly by certain finite sums of integrals. We also describe the relevance of this result to the approximation of continuous nonlinear functionals with neural networks. From mas at isca.pdial.interpath.net Wed Mar 1 12:52:38 1995 From: mas at isca.pdial.interpath.net (Mary Ann Sullivan) Date: Wed, 1 Mar 1995 12:52:38 -0500 Subject: CALL FOR PAPERS: ISCA INT'L CONF ON COMPUTER APPLICATIONS IN INDUSTRY & ENGINEERING Nov. 29 - Dec. 1, 1995 Honolulu, Hawaii Message-ID: <mailman.746.1149540310.24850.connectionists@cs.cmu.edu> A non-text attachment was scrubbed... Name: not available Type: multipart/mixed Size: 3717 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/5ac13381/attachment.bin From thodberg at nn.dmri.dk Wed Mar 1 14:02:22 1995 From: thodberg at nn.dmri.dk (Hans Henrik Thodberg) Date: Wed, 1 Mar 1995 20:02:22 +0100 Subject: TR: Ace-of-Bayes with ARD Message-ID: <9503011902.AA02531@dmri.dk> The following 33 pages long manuscript is now available by ftp: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/thodberg.bayes-ard.ps.Z or URL (WWW): ftp://archive.cis.ohio-state.edu/pub/neuroprose/thodberg.bayes-ard.ps.Z Hardcopies are not avaliable. ---------------------------------------------------------------------------- A Review of Bayesian Neural Networks with an Application to Near Infrared Spectroscopy. Hans Henrik Thodberg The Danish Meat Research Institute Abstract MacKay's Bayesian framework for backpropagation is a practical and powerful means to improve the generalisation ability of neural networks. It is based on a Gaussian approximation to the posterior weight distribution. The framework is extended, reviewed and demonstrated in a pedagogical way. The notation is simplified using the ordinary weight decay parameter, and a detailed and explicit procedure for adjusting several weight decay parameters is given. Bayesian backprop is applied in the prediction of fat content in minced meat from near infrared spectra. It outperforms ``early stopping'' as well as quadratic regression. The evidence of a committee of differently trained networks is computed, and the corresponding improved generalisation is verified. The error bars on the predictions of the fat content are computed. There are three contributors: The random noise, the uncertainty in the weights, and the deviation among the committee members. The Bayesian framework is compared to Moody's GPE. Finally, MacKay and Neal's Automatic Relevance Determination, in which the weight decay parameters depend on the input number, is applied to the data with improved results. ---------------------------------------------------------------------------- The manuscript is a revised version of thodberg.ace-of-bayes.ps.Z which is also in neuroprose. The main changes are the following: Pruning has been taken out (it is treated in a separate paper), the treatment of committees is extended, and there is a new section demonstrating the powerful Automatic Relevance Determination. The data used in the paper are now available by ftp. The paper is submitted to IEEE Trans. on Neural Networks. Comments are welcome! ---------------------------------------------------------------------------- Hans Henrik Thodberg Email(NEW!!): thodberg at nn.dmri.dk Danish Meat Research Institute Phone: (+45) 42 36 12 00 Maglegaardsvej 2, Postboks 57 Fax: (+45) 42 36 48 36 DK-4000 Roskilde, Denmark ---------------------------------------------------------------------------- From Volker.Tresp at zfe.siemens.de Wed Mar 1 15:06:21 1995 From: Volker.Tresp at zfe.siemens.de (Volker Tresp) Date: Wed, 1 Mar 1995 21:06:21 +0100 Subject: 2 Papers available on combining estimators and missing data Message-ID: <199503012006.AA11490@train.zfe.siemens.de> The -2- files tresp.combining.ps.Z and tresp.effic_miss.ps.Z can now be copied from Neuroprose. The papers are 8 and 9 pages long. Hardcopies copies are not available. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/tresp.combining.ps.Z COMBINING ESTIMATORS USING NON-CONSTANT WEIGHTING FUNCTIONS by Volker Tresp and Michiaki Taniguchi Abstract: This paper discusses the linearly weighted combination of estimators in which the weighting functions are dependent on the input. We show that the weighting functions can be derived either by evaluating the input dependent variance of each estimator or by estimating how likely it is that a given estimator has seen data in the region of the input space close to the input pattern. The latter solution is closely related to the mixture of experts approach and we show how learning rules for the mixture of experts can be derived from the theory about learning with missing features. The presented approaches are modular since the weighting functions can easily be modified (no retraining) if more estimators are added. Furthermore, it is easy to incorporate estimators which were not derived from data such as expert systems or algorithms. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/tresp.effic_miss.ps.Z EFFICIENT METHODS FOR DEALING WITH MISSING DATA IN SUPERVISED LEARNING by Volker Tresp, Ralph Neuneier, and Subutai Ahmad Abstract: We present efficient algorithms for dealing with the problem of missing inputs (incomplete feature vectors) during training and recall. Our approach is based on the approximation of the input data distribution using Parzen windows. For recall, we obtain closed form solutions for arbitrary feedforward networks. For training, we show how the backpropagation step for an incomplete pattern can be approximated by a weighted averaged backpropagation step. The complexity of the solutions for training and recall is independent of the number of missing features. We verify our theoretical results using one classification and one regression problem. The papers will appear in G. Tesauro, D. S. Touretzky and T. K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. ________________________________________ Volker Tresp Siemens AG ZFE T SN4 81730 Munich Germany email: Volker.Tresp at zfe.siemens.de Phone: +49 89 636 49408 Fax: +49 89 636 3320 ________________________________________ From delliott at src.umd.edu Thu Mar 2 03:16:30 1995 From: delliott at src.umd.edu (David L. Elliott) Date: Thu, 2 Mar 1995 03:16:30 -0500 Subject: NN paper (identification of systems) available by anon ftp Message-ID: <199503020816.DAA23586@newra.src.umd.edu> The following paper, which has been accepted for an invited session at the 1995 American Control Conference, will be available until the conference (June 21 1995) by anonymous ftp to the ftp server for the Institute for Systems Research ftp://ftp.isr.umd.edu/pub/ISRTR/ps Filename: TR95-17.ps Title: "Reconstruction of nonlinear systems with delay lines and feedforward networks" Author: David L. Elliott, ISR, Univ. of Maryland, College Park Abstract: Nonlinear system theory ideas have led to a method for approximating the dynamics of a nonlinear system in a bounded region of its state space, by training a feedforward neural network which is then reconfigured in recursive mode to provide a stand-alone simulator of the original system. The input layer of the neural network contains time-delayed samples of one or more system outputs and control inputs. Autonomous systems can also be simulated in this way by providing impulse inputs. My apology for the 0.9M filesize-- the paper is only 5 pages, but assembling diverse PS text and figures was inefficient. Printing time is short, however. Criticisms and comments will be especially helpful if sent before June 1, and will be very welcome. David From srw1001 at eng.cam.ac.uk Thu Mar 2 08:51:45 1995 From: srw1001 at eng.cam.ac.uk (srw1001@eng.cam.ac.uk) Date: Thu, 2 Mar 95 13:51:45 GMT Subject: Paper availble : Non-linear Prediction using Hierarchical Mixtures of Experts. Message-ID: <9503021351.22738@fear.eng.cam.ac.uk> The following paper is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department and the Neuroprose archives. NON-LINEAR PREDICTION OF ACOUSTIC VECTORS USING HIERARCHICAL MIXTURES OF EXPERTS. Steve Waterhouse and Tony Robinson Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract In this paper we consider speech coding as a problem of speech modelling. In particular, prediction of parameterised speech over short time segments is performed using the Hierarchical Mixture of Experts (HME) \cite{JordanJacobs94}. The HME gives two advantages over traditional non-linear function approximators such as the Multi-Layer Perceptron (MLP); a statistical understanding of the operation of the predictor and provision of information about the performance of the predictor in the form of likelihood information and local error bars. These two issues are examined on both toy and real world problems of regression and time series prediction. In the speech coding context, we extend the principle of combining local predictions via the HME to a Vector Quantization scheme in which fixed local codebooks are combined on-line for each observation. To appear in Advances in Neural Information Processing Systems 7, edited by Gerald Tesauro, David Touretzky, and Todd Leen. ************************ How to obtain a copy ************************ a) via ftp from Cambridge University SVR: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get waterhouse_nips94.ps.Z ftp> quit unix> uncompress waterhouse_nips94.ps.Z unix> lpr waterhouse_nips94.ps (or however you print PostScript) b) via ftp from neuroprose archive: unix> ftp Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose/reports ftp> binary ftp> get waterhouse.nips94.ps.Z ftp> quit unix> uncompress waterhouse.nips94.ps.Z unix> lpr waterhouse.nips94.ps (or however you print PostScript) c) or email me: srw1001 at eng.cam.ac.uk d) (easiest) access my WWW page http://svr-www.eng.cam.ac.uk/~srw1001, where the file is symlinked. ----------------------------------------------------- Steve Waterhouse, Information Engineering, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK. Email: srw1001 at eng.cam.ac.uk Phone : (0223) 332800 World Wide Web: http://svr-www.eng.cam.ac.uk/~srw1001 From srw1001 at eng.cam.ac.uk Thu Mar 2 08:52:39 1995 From: srw1001 at eng.cam.ac.uk (srw1001@eng.cam.ac.uk) Date: Thu, 2 Mar 95 13:52:39 GMT Subject: Paper availble : Classification using Hierarchical Mixtures of Experts. Message-ID: <9503021352.22748@fear.eng.cam.ac.uk> The following paper is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department and the Neuroprose archives. CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS Steve Waterhouse and Tony Robinson Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract There has recently been widespread interest in the use of multiple models for classification and regression in the statistics and neural networks communities. The Hierarchical Mixture of Experts (HME) \cite{JordanJacobs94} has been successful in a number of regression problems, yielding significantly faster training through the use of the Expectation Maximisation algorithm. In this paper we extend the HME to classification and results are reported for three common classification benchmark tests: Exclusive-Or, N-input Parity and Two Spirals. Reference : In Proc. 1994 IEEE Workshop on Neural Networks for Signal Processing, pp 177-186. ************************ How to obtain a copy ************************ a) via ftp from Cambridge University SVR: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get waterhouse_hme.ps.Z ftp> quit unix> uncompress waterhouse_hme.ps.Z unix> lpr waterhouse_hme.ps (or however you print PostScript) b) via ftp from neuroprose archive: unix> ftp Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose/reports ftp> binary ftp> get waterhouse.hme_classification.ps.Z ftp> quit unix> uncompress waterhouse.hme_classification.ps.Z unix> lpr waterhouse.hme_classification.ps (or however you print PostScript) c) or email me: srw1001 at eng.cam.ac.uk d) (easiest) access my WWW page http://svr-www.eng.cam.ac.uk/~srw1001, where the file is symlinked. ----------------------------------------------------- Steve Waterhouse, Information Engineering, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK. Email: srw1001 at eng.cam.ac.uk Phone : (0223) 332800 World Wide Web: http://svr-www.eng.cam.ac.uk/~srw1001 From Francoise.Fogelman at laforia.ibp.fr Thu Mar 2 04:06:01 1995 From: Francoise.Fogelman at laforia.ibp.fr (FOGELMAN Francoise + 33 1 41 28 41 70) Date: Thu, 2 Mar 1995 10:06:01 +0100 Subject: ICANN'95 Message-ID: <199503020906.KAA20561@erato.ibp.fr> *************************************************************************** UPDATED BROCHURE *************************************************************************** ***************************************************************************** XXX XXXX X XX XX XX XX XX XXXXX XXXXXX X X X X XXX XX XXX XX XX X XX X X X XXXXX XX X XX XX X XX XXXXX XXXXX X X X X XX XXX XX XXX XX XXX XXX XXXX X X XX XX XX XX XXXX XXXXX PARIS, OCTOBER 9-13, 1995 Maison de la Chimie NEURAL NETWORKS AND THEIR APPLICATIONS ***************************************************************************** SCIENTIFIC CONFERENCE INDUSTRIAL CONFERENCE TUTORIALS & EXHIBITION organized by EUROPEAN NEURAL NETWORK SOCIETY ***************************************************************************** INFORMATION ***************************************************************************** Over the last four years, the ENNS - European Neural Network Society - has held its annual conference ICANN in Helsinki (1991), Brighton (1992), Amsterdam (1993) and Sorrento (1994). This conference has become the foremost meeting for the European neural network scientific community. In 1995, ENNS will hold the ICANN meeting in Paris. The format of this conference will include a scientific conference, an industrial conference, tutorials, industrial forums and an industrial exhibition. Our challenge, in organizing this conference, is to achieve the highest scientific quality for papers presented at the scientific conference (through a strict selection procedure), together with the most convincing set of applications presented at the industrial conference (only operational, top-level applications will be considered). Papers should stress the rationale of the Neural Network approach and provide a comparison with other techniques. We thus hope to demonstrate that Neural Networks are indeed a very deep and exciting field of research, as well as a most efficient, profitable technique for the industry. To achieve these goals, we seek contributions from all the scientists, both from academy and industry, who share our interests and our quality requirements. ***************************************************************************** CALL FOR PAPERS The conference will cover the following domains : SCIENTIFIC CONFERENCE * theory * algorithms & architectures * implementations (hardware & software) * cognitive sciences & AI * neurobiology * applications identification & control image processing & vision OCR speech & signal processing prediction optimization INDUSTRIAL CONFERENCE This conference will cover two main categories: on the one hand, descriptions of tools and methods and their use in real-life cases and, on the other, descriptions of concrete applications in industry and the sector of services. All fields of application are eligible. Special sessions will be organized on specific areas of industry such as: * banking, finance & insurance * telecommunications * teledetection * process engineering, control and monitoring * oil industry * power industry * food processing * transportation * robotics * speech processing * document processing, OCR, text retrieval & indexing * VLSI & dedicated hardware * forecasting & marketing * technical diagnosis * non destructive testing * medicine * defense LOCATION The conference will be held in la Maison de la Chimie, right in the center of Paris, near les Invalides. Built in 1707, for Frederic-Maurice de la Tour, Comte d'Auvergne, Lieutenant General to King Louis XIV, the Mansion has today become a Congress Center equipped with all the modern facilities. INSTRUCTIONS TO AUTHORS Length of papers: not exceeding 6 pages in A4 format (i. e. about 8,000 characters). An electronic format will be made available at : ftp lix.polytechnique.fr login: anonymous password : your e-mail address in the directory /pub/ICANN95/out, read file README for instructions. If you want to leave messages or enquiries, you can also use : in the directory /pub/ICANN95/in, read file README for instructions. Seven copies of the papers should reach the Conference Secretariat at the address below by ****** APRIL 15 1995 ***** : ICANN'95 1 avenue Newton bp 207 92 142 CLAMART Cedex France Fax: +33 - 1 - 41 28 45 84 Submitted papers should be accompanied by a cover page giving: * the title of the paper and the author(s) name(s), * the author's address, phone number and extension, fax number and, if possible, e-mail address, * a 10-line abstract together with a list of key-words, * an indication of which conference the paper should be included in: scientific or industrial LANGUAGE Papers submitted for the scientific conference should be in English. Papers submitted for the industrial conference may be either in English or French. TUTORIALS Tutorials will be organized. The Program Committee is open to proposals for tutorials covering industrial applications. Suggestions should describe the content of the tutorial (in 150-200 words) and the instructor's expertise and experience in the field concerned. The deadline for reception is MAY 15 1995. EXHIBITION From wolpert at psyche.mit.edu Thu Mar 2 16:06:40 1995 From: wolpert at psyche.mit.edu (Daniel Wolpert) Date: Thu, 2 Mar 95 16:06:40 EST Subject: Two NIPS preprints on motor control Message-ID: <9503022106.AA18741@psyche.mit.edu> The following two papers will appear in G. Tesauro, D.S. Touretzky and T.K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. The papers combine computational and psychophysical approaches to human motor control. Daniel Wolpert wolpert at psyche.mit.edu ----------------------------------------------------------------------- Forward dynamic models in human motor control: Psychophysical evidence Daniel Wolpert, Zoubin Ghahramani & Michael Jordan Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 Based on computational principles, with as yet no direct experimental validation, it has been proposed that the central nervous system (CNS) uses an internal model to simulate the dynamic behavior of the motor system in planning, control and learning. We present experimental results and simulations based on a novel approach that investigates the temporal propagation of errors in the sensorimotor integration process. Our results provide direct support for the existence of an internal model. FTP-host: psyche.mit.edu FTP-filename: /pub/wolpert/forward.ps.Z URL: ftp://psyche.mit.edu/pub/wolpert/forward.ps.Z 8 pages long [163K compressed]. ----------------------------------------------------------------------- Computational structure of coordinate transformations: A generalization study Zoubin Ghahramani, Daniel Wolpert & Michael Jordan Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 One of the fundamental properties that both neural networks and the central nervous system share is the ability to learn and generalize from examples. While this property has been studied extensively in the neural network literature it has not been thoroughly explored in human perceptual and motor learning. We have chosen a coordinate transformation system---the visuomotor map which transforms visual coordinates into motor coordinates---to study the generalization effects of learning new input--output pairs. Using a paradigm of computer controlled altered visual feedback, we have studied the generalization of the visuomotor map subsequent to both local and context-dependent remappings. A local remapping of one or two input-output pairs induced a significant global, yet decaying, change in the visuomotor map, suggesting a representation for the map composed of units with large functional receptive fields. Our study of context-dependent remappings indicated that a single point in visual space can be mapped to two different finger locations depending on a context variable---the starting point of the movement. Furthermore, as the context is varied there is a gradual shift between the two remappings, consistent with two visuomotor modules being learned and gated smoothly with the context. FTP-host: psyche.mit.edu FTP-filename: /pub/wolpert/coord.ps.Z URL: ftp://psyche.mit.edu/pub/wolpert/coord.ps.Z 8 pages long [218K compressed]. From rjb at psy.ox.ac.uk Fri Mar 3 09:16:15 1995 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 3 Mar 1995 14:16:15 GMT Subject: Job available at University of Oxford Message-ID: <199503031416.OAA05484@axp01.mrc-bbc.ox.ac.uk> A new position available that may interest reader of Connectionists. Please feel free to pass to other boards or colleagues, and please send replies to erolls at psy.ox.ac.uk and not to this address. Roland Baddeley UNIVERSITY OF OXFORD DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY Post in Computational Neuroscience The following post is available as part of a long-term research programme combining computational and neurophysiological approaches to the brain mechanisms of vision and memory: Computational neuroscientist (RS1A) to make formal network models and/or analyse by neural network simulation the functions of visual cortical areas and the hippocampus. The salary is on the RS1A (postdoctoral) scale 13,941-20,953 pounds, with support provided by a Programme Grant, and is available from April 1995. Applications including the names of two referees, or enquiries for further details, to Dr. Edmund T. Rolls, University of Oxford, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England (telephone 01865-271348, email erolls at psy.ox.ac.uk). The University is an Equal Opportunities Employer. An introduction to some of the work is provided in the following: Rolls,E.T. and Treves,A. (1994) Neural networks in the brain involved in memory and recall. Progress in Brain Research 102: 335-341. or Treves,A. and Rolls,E.T. (1994) A computational analysis of the role of the hippocampus in memory. Hippocampus 4: 374-391. Rolls,E.T. (1994) Brain mechanisms for invariant visual recognition and learning. Behavioural Processes 33: 113- 138. or Rolls,E.T. (1995) Learning mechanisms in the temporal lobe visual cortex. Behavioural Brain Research 66: 177-185. From CHRIS at gauss.cam.wits.ac.za Sat Mar 4 18:09:28 1995 From: CHRIS at gauss.cam.wits.ac.za (Christopher Gordon) Date: Sat, 4 Mar 1995 18:09:28 GMT+0200 Subject: TR Available: The Use of Cross-Validation in Neural Network Ext Message-ID: <6F911CD32FC@gauss.cam.wits.ac.za> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/gordon.extrapolation.ps.Z -------------------------------------------------------------------- The following 12 page paper is now available by ftp: The Use of Cross-Validation in Neural Network Extrapolation of Forest Tree Growth. C. Gordon, University of the Witwatersrand. Presented at PRASA (1994). Abstract: A back-propagation multilayer Artificial Neural Network (ANN) is used to model the mean growth rate of different plots of Pine trees. Cross-validation is found to be essential in providing a good extrapolation. Predictions are made for different plots which have different initial densities and different artificial thinning regimes. Comparisons are made with a standard nonlinear regression solution. * FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: (email address) ftp> cd pub/neuroprose ftp> binary ftp> get gordon.extrapolation.ps.Z ftp> quit unix> uncompress gordon.extrapolation.ps.Z unix> lpr gordon.extrapolation.ps (or however you print postscript) ****************************************************************** * Christopher Gordon * * Department of Applied Maths * * University of the Witwatersrand * * Internet : CHRIS at gauss.cam.wits.ac.za * * Snail Mail : PO Box 65684, Benmore 2010, South Africa * * Tel: (011)716-3229(W) * * * ****************************************************************** From seung at physics.att.com Fri Mar 3 15:01:43 1995 From: seung at physics.att.com (seung@physics.att.com) Date: Fri, 3 Mar 95 15:01:43 EST Subject: preprint: Local and Global Convergence of On-line Learning Message-ID: <9503032001.AA27097@physics.att.com> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/barkai.local.ps.Z The file barkai.local.ps.Z is now available at ftp://archive.cis.ohio-state.edu/pub/neuroprose/barkai.local.ps.Z Local and Global Convergence of On-Line Learning N. Barkai Racah Inst. of Physics, Hebrew Univ. of Jerusalem H. S. Seung AT&T Bell Laboratories H. Sompolinsky Racah Inst. of Physics, Hebrew Univ. of Jerusalem AT&T Bell Laboratories We study the performance of an generalized perceptron algorithm for learning realizable dichotomies, with an error-dependent adaptive learning rate. The asymptotic scaling form of the solution to the associated Markov equations is derived, assuming certain smoothness conditions. We show that the system converges to the optimal solution and the generalization error asymptotically obeys a univeral inverse power law in the number of examples. The system is capable of escaping from local minima, and adapts rapidly to shifts in the target function. The general theory is illustrated for the perceptron and committee machine. From alex at CompApp.DCU.IE Sat Mar 4 04:55:00 1995 From: alex at CompApp.DCU.IE (alex@CompApp.DCU.IE) Date: Sat, 4 Mar 95 09:55:00 GMT Subject: Call For Papers Message-ID: <9503040955.AA14068@janitor.compapp.dcu.ie> PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST! Call for Papers for the Fourth International Conference on The COGNITIVE SCIENCE of NATURAL LANGUAGE PROCESSING Dublin City University, 5-7 July 1995 Subject Areas: This is a non-exclusive list of subjects which fall within the scope of CSNLP. It is intended as a guide only. * Corpus-based NLP * Connectionist NLP * Statistical and knowledge-based MT * Linguistic knowledge representation * Cognitive linguistics * Declarative approaches to NLP * NLG and NLU * Dialogue and discourse * Human language processing * Text linguistics * Evaluation of NLP * Hybrid approaches to NLP Submissions may deal with theoretical issues, applications, databases or other aspects of CSNLP, but the importance of cognitive aspects should be borne in mind. Papers should report original substantive research. Theme: The Role of Syntax There is currently considerable debate regarding the place and importance of syntax in NLP. Papers dealing with this matter will be given preference. Invited Speakers: The following speakers have agreed to give keynote talks: Mark Steedman, University of Pennsylvania Alison Henry, University of Ulster Registration and Accommodation: The registration fee will be IR#60, and will include proceedings, lunches and one evening meal. Accommodation can be reserved in the campus residences at DCU. A single room is IR#16 per night, with full Irish breakfast an additional IR#4. Accommodation will be "First come, first served": there is a heavy demand for campus rooms in the summer. There are also several hotels and B&B establishments nearby: addresses will be provided on request. To register, contact Alex Monaghan at the addresses given below. Payment in advance is possible but not obligatory. Please state gender (for accommodation purposes) and any unusual dietary requirements. Submission of Abstracts: Those wishing to present a paper at CSNLP should submit a 400-word abstract to arrive not later than 10/4/95. Abstracts should give the author's full name and address, with Email address if possible, and should be sent to: CSNLP Alex Monaghan School of Computer Applications Dublin City University Dublin 9 Ireland Email submissions are preferred, plain ASCII text please to: --------- alex at compapp.dcu.ie (internet) Completed papers should be around 8 pages long, although longer papers will be considered if requested. Camera-ready copy must be submitted to arrive in Dublin by 19/6/94. No particular conference style will be imposed, but papers should be legible (12pt laser printed) and well-structured. Deadlines: 10th April --- Submission of 400-word abstract 1st May --- Notification of acceptance 19th June --- Deadline for receipt of camera-ready paper (c.8 pages) 26th June --- Final date for registration, accommodation, meals etc. From CHRIS at gauss.cam.wits.ac.za Tue Mar 7 20:06:22 1995 From: CHRIS at gauss.cam.wits.ac.za (Christopher Gordon) Date: Tue, 7 Mar 1995 20:06:22 GMT+0200 Subject: TR Available: The Use of Cross-Validation in Neural Network Message-ID: <74306A56168@gauss.cam.wits.ac.za> I apologize to anyone who tried to unsuccesfully uncompress the file gordon.extrapolation.ps.Z which is advertized below. It was compressed with gzip not unix compress. gzip can be used to uncompress it. I will be replacing it with a unix compressed version as soon as possible. I will post another message when I have do so. > FTP-host: archive.cis.ohio-state.edu > FTP-filename: /pub/neuroprose/gordon.extrapolation.ps.Z > > -------------------------------------------------------------------- > > The following 12 page paper is now available by ftp: > > The Use of Cross-Validation in Neural Network > Extrapolation of Forest Tree Growth. > > C. Gordon, University of the Witwatersrand. > > Presented at PRASA (1994). > > Abstract: > A back-propagation multilayer Artificial Neural > Network (ANN) is used to model the mean growth > rate of different plots of Pine trees. > Cross-validation is found to be essential in > providing a good extrapolation. Predictions are > made for different plots which have different > initial densities and different artificial thinning > regimes. Comparisons are made with a > standard nonlinear regression solution. > > > * FTP procedure: > unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) > Name: anonymous > Password: (email address) > ftp> cd pub/neuroprose > ftp> binary > ftp> get gordon.extrapolation.ps.Z > ftp> quit ****************************************************************** * Christopher Gordon * * Department of Applied Maths * * University of the Witwatersrand * * Internet : CHRIS at gauss.cam.wits.ac.za * * Snail Mail : PO Box 65684, Benmore 2010, South Africa * * Tel: (011)716-3229(W) * * * ****************************************************************** From bernardo at esaii.upc.es Tue Mar 7 18:12:46 1995 From: bernardo at esaii.upc.es (Bernardo Morcego) Date: Tue, 7 Mar 1995 18:12:46 UTC+0200 Subject: Modular NN Message-ID: <273*/S=bernardo/OU=esaii/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS> Dear connectionists, We are interested in the study and developement of Modular Artificial Neural Networks. Our aim is to study the influence of the interaction between modules in the learning process and show the beneffits of building networks using Neural Modules. We use the resulting archi- tectures in non-linear dynamic systems identification. We found several applications in which the use of modules was carried out in a building block fashion: each module had a complex function to learn and, after training, was assembled with the others. But this is not exactly what we are interested in. Our object is the design of neural network modules and the study of their relationships and the bibliography we were able to find is given at the end of this message. We would be very grateful if anyone aware of related work could send us any pointer. All recieved replies will be collected and summarized. Thanks in advance, Bernardo. ----------------------------------------------------------------------------- Bernardo Morcego (ESAII) FIB - Universitat Politecnica de Catalunya FAX: (34-3) 401 70 40 c/Pau Gargallo 5 Tel: (34-3) 401 69 92 08028 Barcelona (Spain) email bernardo at esaii.upc.es ----------------------------------------------------------------------------- Y. Bennani, P. Gallinari, Task Decomposition Through a Modular Con- nectionist Architecture: a Talker Identification System, Artificial Neural Networks 2, I Aleksander and J. Taylor (Eds), Elsevier, 1992. E.J.W Boers, H. Kuiper, Biological Metaphors and the Design of Modular Artificial Neural Networks, Master's Thesis, Leiden University, 1992. F. Fogelman, E. Viennet, B. Lamy, Multi-Modular Neural Network Archi- tectures: Applications in Optical Character and Human Face Recognition, Int. Journal of Pattern Recognition and Artificial Intelligence vol 7, No 4: 721-755, 1993. F. Gruau, D. Whitley, The Cellular Developmental of Neural Networks: the Interaction of Learning and Evolution, Ecole Normale Superieure de Lyon, Research Report 93-04, 1993. B.L.M. Happel, J.M.J Murre, Designing Modular Network Architectures Using a Genetic Algorithm, Artificial Neural Networks 2, I Aleksander and J. Taylor (Eds), Elsevier, 1992. R.A. Jacobs, Task Decomposition Through Competition in a Modular Con- nectionist Architecture, University of Massachusetts, COINS TR 90-44, 1990. R.A. Jacobs and M.I. Jordan, Learning Piecewise Control Strategies in a Modular Neural Network Architecture, IEEE Trans. on Systems, Man, and Cybernetics, Vol 23, No 2, 1993. R. Miikkulainen and M.G. Dyer, Natural Language Processing with Modular PDP Networks and Distributed Lexicon, Cognitive Science 15(3): 343 - 399, 1991. D.C. Plaut, Double Dissociation without Modularity: Evidence from Connectionist Neuropsychology, to appear in Journal of Clinical and Experimental Neuropsycology, 1994. Frank Smieja, The Pandemonium System of Reflective Agents, 1993. A. Waibel, Modular Construction of Time-Delay Neural Networks for Speech Recognition, Neural Computation 1, 39-46, 1989. ------------------------------------------------------------------------------ From konig at ICSI.Berkeley.EDU Tue Mar 7 16:07:05 1995 From: konig at ICSI.Berkeley.EDU (Yochai Konig) Date: Tue, 7 Mar 1995 13:07:05 -0800 Subject: TR announcement Message-ID: <199503072107.NAA07049@icsib35.ICSI.Berkeley.EDU> Hi, A revised and corrected version of the following TR is available from our FTP site. The changes all are concerned with some technical modifications of our convergence proof, particularly for Theorem 3. --Yochai =========================== TR announcement ================================= REMAP: Recursive Estimation and Maximization of A Posteriori Probabilities ---Applications to Transition-based Connectionist Speech Recognition--- by H. Bourlard, Y. Konig & N. Morgan Intl. Computer Science Institute 1947 Center Street, Suite 600 Berkeley, CA 94704 email: bourlard,konig,morgan at icsi.berkeley.edu ICSI Technical Report TR-94-064 Abstract In this paper, we describe the theoretical formulation of REMAP, an approach for the training and estimation of posterior probabilities using a recursive algorithm that is reminiscent of the EM (Expectation Maximization) algorithm [dempster77] for the estimation of data likelihoods. Although very general, the method is developed in the context of a statistical model for transition-based speech recognition using Artificial Neural Networks (ANN) to generate probabilities for hidden Markov models (HMMs). In the new approach, we use local conditional posterior probabilities of transitions to estimate global posterior probabilities of word sequences given acoustic speech data. Although we still use ANNs to estimate posterior probabilities, the network is trained with targets that are themselves estimates of local posterior probabilities. These targets are iteratively re-estimated by the REMAP equivalent of the forward and backward recursions of the Baum-Welch algorithm [baum70,baum72] to guarantee regular increase (up to a local maximum) of the global posterior probability. Convergence of the whole scheme is proven. Unlike most previous hybrid HMM/ANN systems that we and others have developed, the new formulation determines the most probable word sequence, rather than the utterance corresponding to the most probable state sequence. Also, in addition to using all possible state sequences, the proposed training algorithm uses posterior probabilities at both local and global levels and is discriminant in nature. The postscript file of the full technical report (68 pages) can be copied from our (anonymous) ftp site as follows: ftp ftp.icsi.berkeley.edu username= anonymous passw= your email address cd pub/techreports/1994 binary get tr-94-064.ps.Z ===================================================================== From goodman at unr.edu Wed Mar 8 00:54:02 1995 From: goodman at unr.edu (Phil Goodman) Date: Tue, 7 Mar 1995 21:54:02 -0800 (PST) Subject: PostDoc in NN Programming Message-ID: <199503080554.AA18040@equinox.unr.edu> ******* Position Announcement ******* POSTDOCTORAL FELLOWSHIP IN ARTIFICIAL NEURAL NETWORK PROGRAMMING Center for Biomedical Modeling Research University of Nevada, Lake Tahoe/Reno LOCATION: The University of Nevada Center for Biomedical Modeling Research (CBMR), located at the base of the Sierra Nevada Mountains near Lake Tahoe, is an interdisciplinary research institute involving the Departments of Medicine, Electrical Engineering, and Computer Science. Under federal funding, CBMR faculty and collaborators apply neural network and advanced probabilistic/statistical concepts to large health care databases. In particular, they are developing methods to: (1) improve the accuracy of predicting surgical mortality, (2) interpret nonlinearities and interactions among predictors, and (3) manage missing data. QUALIFICATIONS: Candidates considered for this position must have: (1) strong programming skills in the C language, (2) substantial operating experience in a UNIX environment, (3) familiarity with multilayer perceptron neural network architectures, and (4) a PhD in a related field. In addition, experience in probability, statistical regression, optimization, and/or functional approximation theory will be considered favorably. EFFORT: Approximately 80% of effort will be devoted programming neural network software. The remaining 20% of effort will be available for literature review, manuscript preparation, and attending scientific conferences (supported). DURATION: This fellowship will begin in April, 1995. The duration will be 12 months, extendible upon mutual agreement for an additional 6 months. APPLICATION: If interested, please send the following by plain-text electronic mail (preferred), surface mail, or FAX: (1) a cover letter detailing your interests and qualifications, (2) your resume, and, (3) the names and phone numbers (or email addresses) of three references. ___________________________________________________________________________ Philip H. Goodman, MD, MS E-mail: goodman at unr.edu Associate Professor of Medicine, Electrical Engineering, & Computer Science Director University of Nevada Center for Biomedical Modeling Research World-Wide Web: http://www.scs.unr.edu/~cbmr/ Washoe Medical Center, 77 Pringle Way, Reno, Nevada 89520 USA Voice: +1 702 328-4869 FAX: +1 702 328-4871 ___________________________________________________________________________ The University of Nevada, Reno is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, color, religion, sex, age, creed, national origin, veteran status, physical or mental disability, and in accordance with University policy, sexual orientation, in any program or activity it operates. The University of Nevada employs only United States citizens and aliens lawfully authorized to work in the United States. From juergen at idsia.ch Wed Mar 8 05:53:31 1995 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 8 Mar 95 11:53:31 +0100 Subject: research positions available Message-ID: <9503081053.AA00750@fava.idsia.ch> +------------------------------+ | Research positions available | +------------------------------+ Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) in beautiful Lugano (Switzerland) is starting up research projects in (1) reinforcement learning and evolutionary computation, and (2) supervised and unsupervised neural nets. We offer research positions at various levels (beginning now): 1. 1 postdoc or up to 2 PhD students, 2. A few undergrad students. The initial contracts will last until the end of this year, with a possible extension of 2 more years. Temporary visits (for a few months) by visiting researchers/students are also possible. Send CV, list of publications (if applicable), brief statement of research interests, and names/email addresses of references to: juergen at idsia.ch Postscript preferred. DEADLINE: March 31 1995. Earlier applications preferred. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ To check whether you share our research interests, have a look at these recent papers: 1. ``On learning how to learn learning strategies'' FTP-host: flop.informatik.tu-muenchen.de (131.159.8.35) FTP-filename: /pub/fki/fki-198-94.ps.gz (use gunzip to uncompress) 2. ``Flat minimum search finds simple nets'' (fki-200-94.ps.gz) 3. ``Semilinear predictability minimization produces orientation sensitive edge detectors'' (fki-201-94.ps.gz) 4. ``Ant_Q: A Reinforcement Learning Approach to Combinatorial Optimization'' FTP-host: fava.idsia.ch (192.132.252.1) FTP-filename: /pub/ant_q/TR.09-ANT-Q.ps.gz Papers 1-3 and others (and a list of publications) can also be retrieved from http://papa.informatik.tu-muenchen.de/mitarbeiter/schmidhu.html or from http://www.idsia.ch/people/juergen.html A significant part of the initial research will focus on the ``incremental self-improvement paradigm'' described in paper 1. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ New address since March 1: Juergen Schmidhuber IDSIA Corso Elvezia 36 6900-Lugano Switzerland juergen at idsia.ch From large at cis.ohio-state.edu Wed Mar 8 09:02:54 1995 From: large at cis.ohio-state.edu (Edward Large) Date: Wed, 8 Mar 1995 09:02:54 -0500 Subject: Papers on dynamical systems, NNs, and music cognition on neuroprose Message-ID: <199503081402.JAA25675@liberia.cis.ohio-state.edu> Dynamic Representation of Musical Structure (132 pages) Edward W. Large The Ohio State University PhD dissertation FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/large.diss.ps.Z ABSTRACT: The problem of how the human brain perceives and represents complex, temporally structured sequences of events is central to cognitive science. Music is an ideal domain for the addressing this issue. Music provides a rich source of data, generated by a natural human activity, in which complex sequential and temporal relationships abound. Understanding how musical structure may be coded as dynamic patterns of activation in artificial neural networks is the goal of this dissertation. Implications for other domains, including speech perception, are discussed. This work addresses two questions that are important in understanding the representation of structured sequences. The first is the acquisition and representation of structural relationships among events, important in representing sequences with long distance temporal dependencies, and in learning structured systems of communication. The second is the representation of temporal relationships among events, that is important in recognizing and representing sequences independent of presentation rate, while retaining sensitivity to relative timing relationships. These two issues are intimately related, and this dissertation addresses the nature of this relationship. Two research projects are described. The first models the acquisition and representation of structural relationships among events in musical sequences, addressing issues of style acquisition and musical variation. An artificial neural network encodes the rhythmic organization and pitch contents of simple melodies. As the network learns to encode melodies, structurally more important events dominate less important events, as described by reductionist theories of music. The second project addresses the perception of temporal structure in musical sequences, specifically the perception of beat and meter. An entrainment model is proposed. An oscillator tracks periodic components of complex rhythmic patterns, resulting in a dynamical system model of beat perception. The self-organizing response of a group of oscillators embodies the perception of metrical structure. Resonance and the Perception of Musical Meter (37 pages) Edward W. Large and John F. Kolen The Ohio State University Connection Science, 6 (1), 177 - 208. Reprint from the recent Connection Science special issue on music and creativity. FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/large.resonance.ps.Z ABSTRACT: Many connectionist approaches to musical expectancy and music composition let the question of "What next?" overshadow the equally important question of "When next?". One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listener's internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase- and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy. Reduced Memory Representations for Music (39 pages) Edward W. Large and Caroline Palmer The Ohio State University Jordan B. Pollack Brandeis University Preprint of an article to appear in Cognitive Science. FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/large.reduced.ps.Z ABSTRACT: We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggest that perceived constancy across musical variation is a natural result of a reductionist mechanism for producing memory representations. From jang at mathworks.com Wed Mar 8 13:45:22 1995 From: jang at mathworks.com (jang@mathworks.com) Date: Wed, 8 Mar 1995 13:45:22 -0500 Subject: Review paper on neuro-fuzzy modeling and control available Message-ID: <199503081845.NAA26161@localhost> Hi, The following review paper (29 pages) on neuro-fuzzy modeling and control is now available for anonymous ftp from the MathWorks ftp site: FTP-host --> ftp.mathworks.com FTP-file --> /pub/doc/papers/neuro-fuzzy.ps URL --> ftp://ftp.mathworks.com/pub/doc/papers/neuro-fuzzy.ps It's about 1.2 MB, so be sure to use -s option when printing. This paper is an extended version of the one appears in the proceedings of the IEEE, special issue on fuzzy logic, March 1995. Title and abstract of the paper are listed below. Title: Neuro-Fuzzy Modeling and Control Abstract: Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The fuzzy models under the framework of adaptive networks is called ANFIS (Adaptive-Network-based Fuzzy Inference System), which possess certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed. o o ) ==== J.-S. Roger Jang ========= o /| / ===== jang at mathworks.com ==== The MathWorks, Inc. o /\ \/: info at mathworks.com 24 Prime Park Way (@ ) ): http://www.mathworks.com Natick, MA 01760-1500 \/ /\: ftp.mathworks.com ==== Tel: 508-653-1396 ext 4567==== \| \ ====== Fax: 508-653-6971 ==== \ ) From wray at ptolemy-ethernet.arc.nasa.gov Wed Mar 8 15:52:55 1995 From: wray at ptolemy-ethernet.arc.nasa.gov (Wray Buntine) Date: Wed, 8 Mar 95 12:52:55 PST Subject: Modularity in neural networks Message-ID: <9503082052.AA27517@ptolemy.arc.nasa.gov> Dear connectionists, Just want to bring peoples attention to the fact that the Bayesian network community (a close relative of the neural network community) takes modularity as one of its foundations. I discuss this more, giving some connections to neural nets, in: URL: ftp://ack.arc.nasa.gov/pub/buntine/kdd2.ps.Z FTP: extract from above @incollection{Buntine.KDD2, AUTHOR = "W.L. Buntine", TITLE = "Graphical Models for Discovering Knowledge", BOOKTITLE = "Knowledge Discovery in Databases (Volume 2)", EDITOR = "U. M. Fayyad and G. Piatetsky-Shapiro and P. Smyth and R. S. Uthurasamy", PUBLISHER = "MIT Press", YEAR = "1995" } Here is an article that talks about these general ideas in more detail. @article{howard:km, AUTHOR = "R.A. Howard", TITLE = "Knowledge maps", JOURNAL = "Management Science", VOLUME = 35, PAGES = "903--922" , YEAR = 1989, NUMBER = 8 } Hopefully, David Heckerman will talk about this more at Snowbird, Machines that Learn this April. Wray Buntine NASA Ames Research Center phone: (415) 604 3389 Mail Stop 269-2 fax: (415) 604 3594 Moffett Field, CA, 94035-1000 email: wray at kronos.arc.nasa.gov From tom at csc1.prin.edu Wed Mar 8 16:02:30 1995 From: tom at csc1.prin.edu (Tom Fuller) Date: Wed, 8 Mar 1995 15:02:30 -0600 Subject: No subject Message-ID: <199503082102.PAA28073@spectre.prin.edu> The file fuller.scl.ps.Z is now available for copying from the Neuroprose repository: Supervised Competitive Learning Thomas H. Fuller, Jr. and Takayuki D. Kimura, Abstract: Supervised Competitive Learning (SCL) assembles a set of learning modules into a supervised learning system to address the stability-plasticity dilemma. Each learning module acts as a similarity detector for a prototype, and includes prototype resetting (akin to that of ART) to respond to new prototypes. SCL has usually employed backpropagation networks as the learning modules. It has been tested with two feature abstractors: about 30 energy-based features, and a combination of energy-based and graphical features (about 60). About 75 subjects have been involved. In recent testing (15 college students), SCL recognized 99% (energy features only) of test digits, 91% (energy) and 96.6% (energy/graphical) of test letters, and 85% of test gestures (energy/graphical). SCL has also been tested with fuzzy sets as learning modules for recognizing handwritten digits and handwritten gestures, recognizing 97% of test digits, and 91% of test gestures. This is Technical Report WUCS-93-45 at Washington University in St. Louis, which has some hardcopies available. It is reprinted from Journal of Intelligent Material Systems and Structures (March 1994. pages 232-246) This work was supported by the Kumon Machine Project. Department of Computer Science Washington University Campus Box 1045 One Brookings Drive St. Louis, MO 63130-4899 Here's a sample retrieval session: unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive FTP server (Version wu-2.4(2) Mon Apr 18 14:41:30 EDT 1994) ready. Name (archive.cis.ohio-state.edu:me): anonymous 331 Guest login ok, send your complete e-mail address as password. Password: me at here.edu 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get fuller.scl.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for fuller.scl.ps.Z (138497 bytes). 226 Transfer complete. 138497 bytes received in 28.29 seconds (4.78 Kbytes/s) ftp> bye 221 Goodbye. unix> uncompress fuller.scl.ps.Z unix> <send fuller.scl.ps to favorite viewer or printer> From gp at isds.Duke.EDU Wed Mar 8 16:30:15 1995 From: gp at isds.Duke.EDU (Giovanni Parmigiani) Date: Wed, 8 Mar 1995 16:30:15 -0500 Subject: No subject Message-ID: <mailman.748.1149540311.24850.connectionists@cs.cmu.edu> ============================================================================== CALL FOR PAPERS -- CALL FOR TRAVEL FUNDING APPLICATIONS ============================================================================== INTERNATIONAL WORKSHOP ON MODEL UNCERTAINTY AND MODEL ROBUSTNESS ---------------------------------------------------------------- Bath, England, June 30th-July 1st or 2nd 1995 ============================================= The information below and future updates will be available at the ISDS WWW site (http://www.isds.duke.edu). GOALS OF THE WORKSHOP In recent years, advances in statistical methodology and computing have made available powerful modeling tools in a variety of areas. Along with the added modeling flexibility, increasing attention is being paid to the relationship between modeling assumptions and results. Debates on the effect of modeling assumptions on crucial scientific and policy prediction, such as global warming and the health impact of toxic waste, have reached the mass media. This international workshop on model uncertainty and model robustness, blending methodology and case studies, will have the following goals: a) to help elicit current issues and methods from a host of different application areas and disciplines; b) to promote wider utilization of worthy practical approaches and solutions developed in specific fields; c) to advance understanding of the relative merits of existing tools and approaches; and d) to identify directions for future methodological developments. PROGRAM AND CALL FOR PAPERS The program of the Workshop will include both talks and poster presentations. The talks will be invited, and will be organized in 7-9 sessions (depending on the number of participants), each including 2 related presentations and one discussion. Ample time will be allowed for floor discussion. No parallel sessions will be planned, to encourage interaction among participants with different interests and background. If there are enough talks for 9 sessions the meeting will run from Friday morning June 30th to Sunday morning July 2nd, otherwise Friday morning to Saturday afternoon July 1st (this will be decided by early April). There will be a Workshop banquet Saturday evening July 1st. A poster session will take place the evening of June 30th, and will provide a venue for discussing contributions that cannot be included in the daytime sessions, and for further informal interaction. The poster session will be open to contributors. We are actively seeking relevant posters. WORKSHOP TOPICS The foundational basis of the Workshop is eclectic -- we intend to contrast Bayesian, frequentist, practical, and theoretical viewpoints. * Overview of current directions in model uncertainty and robustness in model specification, in statistics and the physical and social sciences * Model uncertainty and model robustness in specific areas (linear and generalised linear models, time series, imaging, spatial statistics, design of experiments, meta-analysis, graphical models, survival analysis, decision modelling, risk analysis) * Case studies, emphasising scientific and policy implications of different approaches for handling uncertainty in model choice * Comparison of alternative methodological approaches to model uncertainty; foundations * Practical implementation of inference under strong uncertainty about model choice; accounting for model uncertainty by hierarchical modelling; mixture modelling * Variable selection problems. Deterministic and stochastic algorithms for searching model specification spaces; convergence issues * Prior specification in Bayesian approaches. Diffuse and informative priors on structure and parameters; elicitation of priors and utilities; choice of scale on which to elicit/infer/predict; specification of non-standard covariance structures * Methodology for model choice, information, and related topics. Bayes factors, their use, interpretation, computation and role in model building. AIC, BIC and other model specification criteria * Model uncertainty and model criticism. Diagnostics and influence measures. Cross-validation and predictive validation * Computation/algorithms; software for illustrating the mapping from modelling assumptions to conclusions; graphical methods for assessing uncertainty in model choice; communication of model uncertainty to practitioners * Modelling via exchangeability (E) and conditional independence (CI). Uncertainty about E/CI assumptions Further details about the Workshop program and participants will be made available at the ISDS www site (http://www.isds.duke.edu) as soon as they become available. REGISTRATION AND TRAVEL GRANT FOR US AND OTHER PARTICIPANTS The Workshop organisers have applied for an NSF Group Travel Grant for participants from the USA to attend the Workshop, and an EPSRC grant to support the travel expenses of people from other countries. The EPSRC grant includes modest support for non-UK participants to visit other places in the UK, to work with colleagues and give seminars, while they are away from their home institutions. Interactive registration and grant application forms will be available at the ISDS www site (http://www.isds.duke.edu) in the immediate future. Alternatively, email or paper versions of the forms can be requested by contacting Giovanni Parmigiani (email gp at isds.duke.edu or fax +1-919-684-8594). PROCEEDINGS A World Wide Web version of the proceedings of the Workshop will be created at the ISDS www site. Papers will be made available as soon they are sent to us. Instructions for submissions will be posted. Alternatively, please contact Giovanni Parmigiani. WORKSHOP LOCATION Bath is an elegant city of about 80,000 people, built to a unified Georgian architectural plan in the eighteenth century; many of its buildings look today much as they did 250 years ago. It is the only World Heritage City in the UK, and offers amenities both urban (concerts, drama, films, a broad range of international shops) and rural (good walking and interesting villages in the beautiful countryside at the south end of the Cotswolds and beyond). The Workshop will be held in an old courtroom that dates from the 1780s, with the banquet taking place in a Georgian ballroom. Hotel accommodations range from moderate to five-star, many in restored period buildings near the city center. The city is about an hour and a half from Heathrow and Gatwick by car, and is served by a good rail link to London (85 minutes away). We look forward to seeing you at the Workshop. The Organizing Committee: David Draper (University of Bath), Giovanni Parmigiani (ISDS, Duke University), Mike West (ISDS, Duke University). From tdenoeux at hds.univ-compiegne.fr Thu Mar 9 04:32:13 1995 From: tdenoeux at hds.univ-compiegne.fr (tdenoeux@hds.univ-compiegne.fr) Date: Thu, 9 Mar 1995 10:32:13 +0100 Subject: paper on k nearest neighbors and D-S theory Message-ID: <199503090932.KAA15632@asterix.hds.univ-compiegne.fr> Announcement: The following paper to appear in IEEE Transactions on Systems, Man and Cybernetics, 25 (05) is available by anonymous ftp: ftp ftp.hds.univ-compiegne.fr cd /pub/diagnostic get knnds.ps.Z uncompress knnds.ps title: A k-nearest neighbor classification rule based on Dempster-Shafer Theory author: Thierry Denoeux ABSTRACT In this paper, the problem of classifying an unseen pattern on the basis of its nearest neighbors in a recorded data set is addressed from the point of view of Dempster-Shafer theory. Each neighbor of a sample to be classified is considered as an item of evidence that supports certain hypotheses regarding the class membership of that pattern. The degree of support is defined as a function of the distance between the two vectors. The evidence of the k nearest neighbors is then pooled by means of Dempster's rule of combination. This approach provides a global treatment of such issues as ambiguity and distance rejection, and imperfect knowledge regarding the class membership of training patterns. The effectiveness of this classification scheme as compared to the voting and distance-weighted k-NN procedures is demonstrated using several sets of simulated and real-world data. +------------------------------------------------------------------------+ | tdenoeux at hds.univ-compiegne.fr Thierry DENOEUX | | Departement Genie Informatique | | Centre de Recherches de Royallieu | | tel (+33) 44 23 44 96 Universite de Technologie de Compiegne | | fax (+33) 44 23 44 77 B.P. 649 | | 60206 COMPIEGNE CEDEX | | France | +------------------------------------------------------------------------+ From gerhard at ai.univie.ac.at Thu Mar 9 12:47:19 1995 From: gerhard at ai.univie.ac.at (Gerhard Widmer) Date: Thu, 9 Mar 95 12:47:19 MET Subject: IJCAI-95 Workshop on AI & Music Message-ID: <199503091147.MAA14630@museum.ai.univie.ac.at> In view of a recent issue of "Connection Science" on the topic "Music and Creativity" the following announcement may be of interest to the connectionst community: SECOND AND LAST CALL FOR PAPERS !!! IJCAI-95 WORKSHOP ON ARTIFICIAL INTELLIGENCE AND MUSIC (Specialized topic: "AI MODELS OF STRUCTURAL MUSIC UNDERSTANDING") to be held in the context of the International Joint Conference on Artificial Intelligence (IJCAI-95) Montreal, Quebec Artificial Intelligence and Music (AIM) has become a stable field of research which is recognized both in the music and the AI communities as a valuable and promising research direction. In the AI arena, there has been a series of international workshops on this topic at major AI conferences (e.g., AAAI-88; IJCAI-89; ECAI-90; ECAI-92). The most recent indications of the growing recognition of AIM in the AI community were the special track on "AI and the Arts" at the AAAI-94 conference (in which the majority of papers dealt with AI & Music) and a real-time interactive music performance at AAAI-94's Art Exhibit. The purpose of this workshop is to discuss, in an informal setting, current topics and results in research on Artificial Intelligence and Music, in particular problems and approaches related to AI models of structural music understanding. TOPICS OF INTEREST Previous workshops on AI & Music were rather broad in scope. Given the advances in research that have taken place in the meantime, we are now in a position to define a highly focused theme for this workshop, which will provide for a coherent and focused scientific discussion. The specialized topic of the IJCAI-95 workshop on AI and Music --- "AI Models of Structural Music Understanding" --- refers to all aspects of structured music perception and processing that are amenable to computer modelling, e.g., beat induction, structure recognition and abstraction, real-time perception and pattern induction, as well as to research on the role of these abilities in various domains of musical competence (listening, composition, improvisation, performance, learning). The following short list of issues exemplifies the types of topics to be discussed: - AI models of musical structure perception - AI models of perception of / representation of / reasoning about musical time - empirical investigations with AI programs based on structural music theories - real-time vs. non-real-time models of music comprehension - music understanding and creativity Contributions by workshop participants should both have a substantial AI component and be well-founded in music theory and musicology. PARTICIPATION AND SUBMISSION OF PAPERS To maintain a genuine workshop atmosphere, participation is limited to at most 30 persons. Participants will be selected by the organizing committee (see below), based on submitted papers. Participants will be expected to actively contribute to the workshop by either presenting a talk or taking part in panel and/or open discussions. Researchers interested in participating in the workshop are invited to submit extended abstracts (up to 5 pages) on completed or ongoing research related to the above-mentioned topics. Submissions may be sent by e-mail (self-contained LaTex or PostScript files) or as hardcopies (in triplicate) to the workshop organizer (address see below). E-mail submission is highly encouraged. The submissions will be reviewed by members of the organizing committee. Accepted papers will be published in the form of official IJCAI-95 workshop notes. Participants selected for giving a talk at the workshop will be asked to submit a full-length paper for the workshop notes (deadlines see below). If you want to demonstrate an operational AIM system, please contact the workshop organizer and supply detailed information about type of system, equipment required, etc. We cannot guarantee at this point that system demonstrations will be possible at the workshop, but we will try to do our best. *** IMPORTANT NOTICE *** IJCAI regulations require that participants register for the main IJCAI-95 conference. In addition, IJCAI charges a fee of US$ 50,- for workshop participation. For more information about IJCAI-95, please contact the IJCAI Conference Management at American Association for Artificial Intelligence (AAAI) 445 Burgess Drive Menlo Park, CA 94025 Tel: +1 - 415 - 328-3123 Fax: +1 - 415 - 321-4457 e-mail: ijcai at aaai.org or consult the IJCAI WWW page (http://ijcai.org/). For more information about the workshop, please contact the workshop organizer (address see below). IMPORTANT DATES: Abstracts/papers due by: March 18, 1995 Notification of acceptance: April 8, 1995 Camera-ready version of final paper due: April 24, 1995 Date of workshop: Monday, Aug. 21, 1995 Main IJCAI-95 conference: Aug. 21 - 25, 1995 ORGANIZING COMMITTEE: Roger Dannenberg Computer Science Department Carnegie Mellon University Pittsburgh, PA, USA Bruce Pennycook Faculty of Music McGill University Montreal, Canada Geber Ramalho LAFORIA-CNRS Universite' Paris VI Paris, France Brian K. Smith School of Education and Social Policy & The Institute for the Learning Sciences Northwestern University Evanston, IL, USA Gerhard Widmer Department of Medical Cybernetics and Artificial Intelligence University of Vienna and Austrian Research Institute for Artificial Intelligence Vienna, Austria WORKSHOP ORGANIZER: Please send abstracts/papers or any questions to Gerhard Widmer Austrian Research Institute for Artificial Intelligence Schottengasse 3 A-1010 Vienna Austria Phone: +43 - 1 - 53532810 Fax: +43 - 1 - 5320652 e-mail: gerhard at ai.univie.ac.at From kak at gate.ee.lsu.edu Thu Mar 9 17:33:26 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Thu, 9 Mar 95 16:33:26 CST Subject: No subject Message-ID: <9503092233.AA03133@gate.ee.lsu.edu> Second Annual Joint Conference on Information Sciences September 28 - October 1, 1995 HONORARY CONFERENCE CHAIRS Lotfi A. Zadeh & Azriel Rosenfeld First Annual Conference on Computational Intelligence & Neurosciences Co-Chairs: Subhash C. Kak & Jeffrey P. Sutton ADVISORY BOARD Jim Anderson Earl Dowell Erol Gelenbe Kaoru Hirota George Klir Teuvo Kohonen Gregory Lockhead Zdzislaw Pawlak C. V. Ramamoorthy Herb Rauch John E. R. Staddon Masaki Togai Victor Van Beuren Max Woodbury Stephen S. Yau Lotfi A. Zadeh H. Zimmerman PROGRAM COMMITTEES First Annual Conference on Computational Intelligence & Neurosciences Robert Erickson George Georgiou David Hislop Michael Huerta Subhash C. Kak Stephen Koslow Sridhar Narayan Slater E. Newman Gregory Lockhead Richard Palmer David C. Rubin Nestor Schmajuk David W. Smith John Staddon Jeffrey P. Sutton Harold Szu L.E.H. Trainor Abraham Waksman Paul Werbos M. L. Wolbarsht Max Woodbury TIME SCHEDULE & VENUE September 28 to October 1, 1995. The beautiful ``Shell Island'' Hotels of Wrightsville Beach, North Carolina, USA. Tel: 800-689-6765 KEYNOTE SPEAKERS, PLENARY SPEAKERS The following distinguished, leading researchers have already accepted our invitation: Jim Anderson Zdzislaw Pawlak Azriel Rosenfeld L. E. H. Trainor Lotfi A. Zadeh The other leading researchers including: Stephen Grossbery, John Holland, John Staddon, Michio Sugeno, etc, are considering our invitations. ****************** * PUBLICATIONS * ****************** The joint conference publishes one Proceedings on Summaries which consist of all papers accepted by all three program committees. The JCIS Proceedings will be made available on Sept. 28, 1995. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text, (1 page minimum) with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by June 25,1995. A deposit of $150 check must be included to guarantee the publication of your 4 pages summary in the Proceedings. $150 can be deducted from registration fee later. It is very important to mark ``plan A'' or ``plan B'' or ``plan C'' on your manuscript. The conference will make the choice for you if you forget to do so. Final version of the full length paper must be submitted by October 1, 1995. Four (4) copies of the full length paper shall be prepared according to the ``information for Authors'' appearing at the back cover of Information Sciences, an International Journal (Elsevier Publishing Co.). A full paper shall not exceed 20 pages including figures and tables. All full length papers will be reviewed by experts in their respective fields. Revised papers will be due on April 15, 1996. Accepted papers will appear in the hard-covered proceeding (book) to be published by a publisher or Information Sciences Journal (INS journal now has three publications: Informatics and Computer Sciences, Intelligent Systems, Applications). All fully registered conference attendees will receive a copy of proceeding (summary) on September 28, 1995; a free one-year subscription (paid by this conference) of Information Sciences Journal - Applications. Lastly, the right to purchase either or all of Vol.I, Vol.II, Vol.III of Advances in FT & T hard-covered, deluxe, professional books at 1/2 price. (1) Deadline for summaries submission: June 25, 1995. (2) Reviewing: June 25 - Aug. 1, 1995. (3) Decision & notification date: August 5, 1995. (4) Absolute deadline for summaries: August 25, 1995. (5) Deadline for full length paper: October 1, 1995. -------------------------------------- | ANNOUNCEMENT AND CALL FOR PAPERS | -------------------------------------- General Information The Joint Conference on Information Sciences consists of three international conferences. All interested attendees including researchers, organizers, speakers, exhibitors, students and other participants should register either in Plan A: Fourth International Conference on Fuzzy Theory & Technology or Plan B: Second International Conference on Computer Theory & Informatics and Plan C: First International Conference on Computational Intelligence & Neurosciences. First Annual Conference on Computational Intelligence & Neurosciences The conference will consist of both plenary sessions and contributory sessions, focusing on topics of critical interest and the direction of future research. For contributory sessions, full papers are being solicited. We also welcome you to contact Jeffrey P. Sutton for the interesting formate he has proposes in ``Symposium on Neural Computing''. Several interesting sessions Have already been organized, e.g., ``From Animals to Robots'' by Nestor Schmajuk, ``Intelligent Control'' by Chris Tseng, and ``Computational Science Meets Neurobiology'' by Sridhar Narayan. Other example topics include, but are not limited to the following: * Neural Network Architectures * Artificially Intelligent Neural Networks * Artificial Life * Associative Memory * Computational Intelligence * Cognitive Science * Fuzzy Neural Systems * Relations between Fuzzy Logic and Neural Networks * Theory of Evaluationary Computation * Efficiency/Robustness Comparisons with Other Direct Search Algorithms * Parallel Computer Applications * Integration of Fuzzy Logic and Evolutionary Computing * Evaluationary Computation for Neural Networks * Fuzzy Logic in Evolutionary Algorithms * Neurocognition * Neurodynamics * Optimization * Feature Extraction & Pattern Recognition * Learning and Memory * Implementations (electronic, Optical, Biochips) * Intelligent Control (1) The Human Brain Project: Neuroinformatics The Human Brain Project is a broad-based, long-term research initiative which supports research and development of advanced technologies to make avaiable to neuroscientists and behavioral scientists an array of information tools for the 21st Century. These include novel database and querying capabilities for the full range of structural and fuctional information about the brain and behavior, as well as technologies for managing, integrating and sharing information over networks. These will also provide the means for electronic collaboration. This initiative was launched in 1993, and is supported by 14 federal organizations across 5 agencies (NIH, NASA, NSF, DOE and DOD). Over 40 brain and behavioral scientists, informatio and computer scientists, engineers, mathematicians and statisticians are funded by the Human Brain Project. The diversity and volume of data generated by brain and behavioral science is testing the limits of informatics and associated technologies; the Human Brain Project is expanding these limits. Such advances will be applicable to a wide range of problems, as brain and behavioral research encompasses a multitude of disciplinary approaches and data types. And, since brain and behavioral research is generated around the world, the potential of this initiative will be fully realized only when these tools become global resources. Thus, the strategies and technologies that are developed as part of the Human Brain Project will serve as models for other complex information domains, with implications far beyond the brain research community. It is our intention to creat such a discussion forum by inviting various governmental, industrial and acdemic researchers to attend. For further information please contact Co-chair Subhash C. Kak (kak at max.ee.lsu.edu, Voice: 504 388 5552, FAX: 504 388 5200). (2) Symposium on Neural Computing It is important that researchers in information science keep abreast of rapid achievements being made in neuroscience. Mathematical and computer approaches aid in understanding biological information processing, and information science stands to gain by examining strategies used by the nervous system to solve complex problems. The aim of this symposium is to identify areas where future research in information science should focus based on recent advances in neural computing. A discussion format will be used. It will be supplemented with a few key plenary lectures on computational and systems neuroscience. No prior background in neurobiology is assumed. A summary statement of the goals elucidated in the discussions will be prepared. Your participation in this growing area of information science is most welcome. N.B. Professors Jim Anderson (Brown) and L.E.H. Trainor (Toronto) are both interested in contributing. Jeffrey know several other prominent researchers in the field who would likely contribute if asked. For further information about this symposium, please contact Jeffrey P. Sutton (sutton at ai.mit.edu, Voice: 617 726 6766, FAX: 617 726 4078). *************** * TUTORIALS * *************** Several mini-courses are scheduled for sign-up. Please take note that any one of them may be cancelled or combined with other mini-courses due to the lack of attendance. Cost of each mini-course is $120 up to 7/15/95 & $160 after 7/15/95, the same cost for all mini-course. No. Name of Mini-Course Instructor Time ------------------------------------------------------------------------ A Languages and Compilers for J. Ramanujan 6:30 pm - 9 pm Distributed Memory Machine Sept. 28 ------------------------------------------------------------------------- B Pattern Recognition Theory H. D. Cheng 6:30 pm - 9 pm Sept. 28 ------------------------------------------------------------------------- C Fuzzy Set Theory George Klir 2:00pm - 4:30pm Sept. 29 ------------------------------------------------------------------------- D Neural Network Theory Richard Palmer 2:00pm - 4:30pm Sept. 29 ------------------------------------------------------------------------- E Fuzzy Expert Systems I. B. Turksen 6:30 pm - 9 pm Sept. 29 ------------------------------------------------------------------------- F Intelligent Control Systems Chris Tseng 6:30 pm - 9 pm Sept. 29 ------------------------------------------------------------------------- G Neural Network Applications Subhash Kak 2:00pm - 4:30pm Sept. 30 ------------------------------------------------------------------------- H Pattern Recognition Applications Edward K. Wong 2:00pm - 4:30pm Sept. 30 ------------------------------------------------------------------------- I Fuzzy Logic & NN Integration Marcus Thint 9:30am - 12:00 Oct. 1 ------------------------------------------------------------------------- J Rough Set Theory Tsau Young Lin 9:30am - 12:00 Oct. 1 ------------------------------------------------------------------------- ***************** * EXHIBITIONS * ***************** Once again, Intelligent Machines, Inc. will demonstrate their highly successful new software ``O'inca''-a FL-NN, Fuzzy-Neuro Design Framework. Elsevier Publishing Co. will lead major publishers for another successful exhibits. Dr. Hua Li of Texas Tech. Univ. will exhibit his hardware & software research results. In addition, we plan to celebrate 30th Anniversary of Fuzzy Theory & Technology through a special exhibit of historical aluable. This exhibition does not represent any society, it does represent, however, some personal collections. Any person has some interesting or collectibles to share ith the conference, please contact Paul P. Wang (ppw at ee.duke.edu). Interested Vendors should contact: Dr. Rhett George Department of Electrical Engineering Duke University Durham, NC 27708 Telephone: 919 660 5242 FAX: 919 660 5293 rtg at ee.duke.edu ******************************************** * JCIS'95 REGISTRATION FEES & INFORMATION * ******************************************** Up to 7/15/95 After 7/15/95 Full Registration $275.00 $395.00 Student Registration $100.00 $160.00 Tutorial(per Mini-Course) $120.00 $160.00 Exhibit Boot Fee $300.00 $400.00 One Day Fee(no pre-reg. discount) $195.00 $ 85.00 (Student) Above fees applicable to both Plan A & Plan B & Plan C FULL CONFERENCE REGISTRATION: Includes admission to all sessions, exhibit area, coffee, tea and soda. A copy of conference proceedings (summary) at conference and one year subscription of Information Sciences - Applications, An International Journal, published by Elsevier Publishing Co. In addition, the right to purchase the hard-cover deluxe books at 1/2 price. Award Banquet on Sept. 30, 1995 is included through Full Registration. One day registration does not include banquet, but one year IS Joural - C subscription is included for one-day full registration only. Tutorials are not included. STUDENT CONFERENCE REGISTRATION: For full-time students only. A letter from your department is required. You must present a current student ID with picture. A copy of conference proceedings (summary) is included. Admission to all sessions, exhibit area, area, coffee,tea and soda. The right to purchase the hard-cover deluxe books at 1/2 price. Free subscription of IS Journal - Applications, however, is not included. TUTORIALS REGISTRATION: Any person can register for the Tutorials. A copy of lecture notes for the course registered is included. Coffee, tea and soda are included. The summary and free subscription of IS Journal - Applications is, However, not included. The right to purchase the hard-cover deluxe books is included. VARIOUS CONFERENCE CONTACTS: Tutorial Conference Information Paul P. Wang Jerry C.Y. Tyan Kitahiro Kaneda ppw at ee.duke.edu ctyan at ee.duke.edu hiro at ee.duke.edu Tel. (919)660-5271 Tel. (919)660-5233 Tel. (919)660-5233 660-5259 Coordinates Overall Administration Local Arrangement Chair Xiliang Gu Sridhar Narayan gu at ee.duke.edu Dept. of Mathematical Sciences Tel. (919)660-5233 Wilmington, NC 28403 (919)383-5936 U. S. A. narayan at cms.uncwil.edu Tel: 910 395 3671 (work) 910 395 5378 (home) *********************** * TRAVEL ARRANGEMENTS * *********************** The Travel Center of Durham, Inc. has been designated the officeal travel provider. Special domestic fares have been arranged and The Travel Center is prepared to book all flight travel. Domestic United States and Canada: 1-800-334-1085 International FAX: 919-687-0903 ********************** * HOTEL ARRANGEMENTS * ********************** SHELL ISLAND RESORT HOTELS 2700 N. LUMINA AVE. WRIGHTSVILLE BEACH, NC 28480 U. S. A. This is the conference site and lodging. A block of suites (double rooms) have been reserved for JCIS'95 attendees with discounted rate. All prices listed here are for double occupancies. $100.00 + 9% Tax (Sun.- Thur.) $115.00 + 9% Tax (Fri. - Sat.) $10.00 for each additional person over 2 people per room. We urge you to make reservation early. Free transportation from and to Wilmington, N. C. Airport is available for ``Shell Island'' Resort Hotel Guests. However, you must make reservation for this free service. Please contact: Carvie Gillikin, Director of Sales Voice: 1-800-689-6765 or: 910-256-8696 FAX: 910-256-0154 --------------------------------------------------- | SPONSORS: | | Machine Intelligence and Fuzzy Logic Laboratory | | Dept. of Electrical Engineering | | Duke University | | | | Elserier Science Publishing Inc. | | New York, N.Y. | | | --------------------------------------------------- ------------------------------------------------------------------------------- CONFERENCE REGISTRATION FORM It is important to choose only one plan; Participation Plan A or Plan B or Plan C. The only difference in the privelege for the choice is both Plan B and Plan C do not participate in Lotfi A. Zadeh Best Paper Competition. [ ] I wish to receive further information. [ ] I intend to participate in the conference. [ ] I intend to present my paper to regular session. [ ] I intend to register in tutorial(s). Mane: Dr./Mr./Mrs. _________________________________________________ Address: ___________________________________________________________ Country: ___________________________________________________________ Phone:________________ Fax: _______________ E-mail: ________________ Affiliation(for Badge): ____________________________________________ Participation Plan: [ ]A [ ]B [ ]C Up to 7/15/95 After 7/15/95 Full Registration [ ]$275.00 [ ]$395.00 Student Registration [ ]$100.00 [ ]$160.00 Tutorial(per Mini-Course) [ ]$120.00 [ ]$160.00 Exhibit Boot Fee [ ]$300.00 [ ]$400.00 One Day Fee(no pre-reg. discount) [ ]$195.00 [ ]$ 85.00 (Student) Total Enclosed(U.S. Dollars): ________________ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $ Please make check payable and mail to: $ $ FT & T $ $ c/o. Paul P. Wang $ $ Dept. of Electrical Engineering $ $ Duke University $ $ Durham, NC 27708 $ $ U. S. A. $ $ $ $ All foreign payments must be made by $ $ draft on a US Bank in US dollars. No $ $ credit cards or purchase order can be $ $ accepted. $ $ $ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From M.West at statslab.cam.ac.uk Thu Mar 9 05:23:00 1995 From: M.West at statslab.cam.ac.uk (Mike West) Date: Thu, 9 Mar 95 10:23 GMT Subject: No subject Message-ID: <m0rmfN8-000PprC@lion.statslab.cam.ac.uk> INTERNATIONAL WORKSHOP ON MIXTURES Aussois, France, September 17-21 1995 A Workshop on Statistical Mixture Modelling will be held in Aussois, France in September this year. The meeting is co-sponsored by French agencies CNRS, ADRES and INRIA, the universities of Rouen and Grenoble, and Duke University. Additional sponsorship is expected from the US National Science Foundation, principally in terms of a group travel grant for US based researchers. The Workshop is held in recognition of the recent growth and development in the theory and, in particular, applications of statistical methods based on mixtures of distributions. The high level of recent and current research activity reflects the growing appreciation of the key roles played by mixture models in many complex modelling and inference problems, and is driven, in part, by advances in computational statistical technology. The meeting provides a forum for reviewing and publicising widely dispersed research activities in mixture modelling, stimulating fertilisation of theoretical, methodological and computational research directions for the near future, and focusing attention on the wide variety of significant applied problems in complex stochastic systems that are inherently structered in mixture terms. Example workshop topics might include: mixtures in density estimation, regression and time series; mixtures in statistical image modelling and analysis, neural networks, graphical models and networks; stochastic simulation for mixture analysis; clustering and classification problems; model selection and combination; alternative approaches to inference in mixtures; latent variables and incomplete data problems; and applications of mixtures in various scientific areas. The workshop will bring together senior researchers, new researchers and students from various backgrounds to promote exchange and interactions on the frontiers of statistical mixture modelling and to highlight the development of statistical technology across these fields. The meeting will consist of invited and contributed talks, posters, discussion and round-table sessions. Talks will be given in morning (9am-1pm) and evening (5pm-8pm) sessions, with the midday period (2.30-4.30pm) for contributed poster sessions, informal discussions and round-tables. Activities at the meeting will be publicised though World Wide Web access to abstracts of papers and posters presented. The Organising Committee of the Workshop consists of Christian Robert (Universite de Rouen) and Gilles Celeaux (Inria, Grenoble, France), Professor Kathryn Roeder (Carnegie Mellon University), and Mike West (ISDS, Duke University). The venue is the CNRS Paul Langevin Conference Center at Aussois in the French Alps. Attendance is to be strictly capped at 80 delegates on a first-come, first-served basis. Registration details and forms are available from Christian Robert, at robert at bayes.univ-rouen.fr. Informal enquiries can also be sent to Mike West, at M.West at statslab.cam.ac.uk, from whom applications for NSF travel grant support can also be obtained. From somers at ai.mit.edu Fri Mar 10 00:02:49 1995 From: somers at ai.mit.edu (David Somers) Date: Fri, 10 Mar 95 00:02:49 EST Subject: Paper Available: Emergent Model of Orientation Selectivity Message-ID: <9503100502.AA05295@vidi> *****************Pre-print Available via FTP ******************* FTP-host: ftp.ai.mit.edu FTP-filename: /pub/users/somers/orient-jneurosci.ps.Z URL ftp://ftp.ai.mit.edu/pub/users/somers/orient-jneurosci.ps.Z An Emergent Model of Orientation Selectivity In Cat Visual Cortical Simple Cells (41 pages) David Somers, Sacha Nelson, and Mriganka Sur, MIT, Dept. of Brain & Cognitive Sciences To appear in: The Journal of Neuroscience ABSTRACT It is well known that visual cortical neurons respond vigorously to a limited range of stimulus orientations, while their primary afferent inputs, neurons in the lateral geniculate nucleus (LGN) respond well to all orientations. Mechanisms based on intracortical inhibition and/or converging thalamocortical afferents have previously been suggested to underlie the generation of cortical orientation selectivity; however, these models conflict with experimental data. Here, a 1:4 scale model of a $1700\mu\mbox{m}$ by $200\mu\mbox{m}$ region of layer IV of cat primary visual cortex (area 17) is presented in order to demonstrate that local intracortical excitation may provide the dominant source of orientation selective input. In agreement with experiment, model cortical cells exhibit sharp orientation selectivity despite receiving strong iso--orientation inhibition, weak cross- -orientation inhibition, no shunting inhibition, and weakly tuned thalamocortical excitation. Sharp tuning is provided by recurrent cortical excitation. As this tuning signal arises from the same pool of neurons that it excites, orientation selectivity in the model is shown to be an emergent property of the cortical feedback circuitry. In the model, as in experiment, sharpness of orientation tuning is independent of stimulus contrast and persists with silencing of ON--type subfields. The model also provides a unified account of intracellular and extracellular inhibitory blockade experiments which had previously appeared to conflict over the role of inhibition. It is suggested that intracortical inhibition acts non-specifically and indirectly to maintain the selectivity of individual neurons by balancing strong intracortical excitation at the columnar level. David C. Somers Dept. of Brain & Cognitive Sciences MIT, E25-618 45 Carleton St. Cambridge, MA 02139 ftp://ftp.ai.mit.edu/pub/users/somers/orient-jneurosci.ps.Z From lkhansen at eivind.ei.dtu.dk Fri Mar 10 09:44:21 1995 From: lkhansen at eivind.ei.dtu.dk (Lars Kai Hansen) Date: Fri, 10 Mar 1995 15:44:21 +0100 Subject: workshop Message-ID: <9503101444.AA03032@ei.dtu.dk> Telluride Summer Research Center Box 2255, Telluride, Colorado 81435, USA. Re.: Telluride Summer Research Center Workshop on Neural Networks 1995. Secretariats: CONNECT, Electronics Institute, B. 349 Technical University of Denmark DK-2800 Lyngby Denmark Phone: +45 4525 3889 Fax: +45 4588 0117 Email: lkhansen at ei.dtu.dk Interdisciplinary Research Center Dept. of Mathematical Sciences, San Diego State University. San Diego California 92182, USA Phone: +1 619 594 7204 Fax: +1 619 594 6746 Email: salamon at math.sdsu.edu Dept. WNI Universitaire Campus Limburgs Universitair Centrum B-3590 Diepenbeek, Belgium Phone: 32 011 268214 Fax: 32 011 268299 Email: chris at luc.ac.be March 10, 1995 Dear prospective participant, Enclosed find the announcement of a 1995 Workshop on Neural Networks which will complement the other workshops in the 1995 program at the Telluride Summer Research Center. The workshop will take place from July 2 to July 9, and will be devoted to aspects of neural networks. While the list of topics will vary to reflect the interests of the participants, the list will include: THEORY: Generalization, ensembles, unsupervised learning. Extremely ill-posed learning and high dimensional data sets. APPLICATIONS: Medical imaging (PET) and time series processing. The format of the sections is to take a prepared one hour talk and turn it into a careful and detailed cross examination lasting 4-5 hours. This format has proven in the past years to be an outstanding way to rapidly understand new ideas and we invite you to share in the learning experience that viewing each other's work under such scrutiny can provide. The discussions aim to be constructive rather than critical, and all possible attempts will be made to relate the topics to the open problems of the workshop. Individual participants are expected to provide their own salaries. Academic and research institutions have recognized the value of participation in the Center's activities and have been willing to pay for work performed at the Center. The Summer Research Center helps arrange for housing thereby obtaining reduced rates for the participants and families. >>>>>> OBS!: Respond promtly to increase the chances of getting inexpensive housing!! A registration fee is charged all participants (100$ for one week, 125$ for two or more weeks). The fee may be waived in special circumstances. Anybody requiring official letters of invitation should contact the organizers specifying their needs. The Center offers an excellent way to meet collaborators from other institutions through coordinated visits. Extended stays are thus recommended. The Telluride locale, situated in an unspoiled valley near 4200 m high mountain peaks in the south west corner of Colorado, offers ample outdoor attractions making such stays enjoyable for the families as well. If you think you may be intested or would like to remain on our mailing list for future years, please indicate so on the enclosed form and return it to one of the organizers before April 5'th 1995. If you know of other colleagues who may be interested, please pass a copy to them or include their name and address. Sincerely, Lars Kai Hansen, Peter Salamon, and Christian Van den Broeck. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<< Telluride Summer Research Center Box 2255, Telluride, Colorado 81435, USA. ANNOUNCEMENT OF A WORKSHOP ON Neural networks in high dimensions. July 2, - July 9, 1995 TELLURIDE SUMMER RESEARCH CENTER TELLURIDE, COLORADO 81435 USA Topics will include: Generalization, Ensembles, Unsupervised Learning, Medical Imaging () I would like to come to the center for the period: () I am interested in the workshop, but I cannot commit myself at this moment () I am not interested for this summer, but keep me on your mailing list. Name: Institution: Address: Phone, Fax, Email: Current interests: Lodging requirements (# persons): >>>>>>>> APPLICATION DEADLINE APRIL 5'th 1995 <<<<<<<<<< Organizers: Peter Salamon email: salamon at math.sdsu.edu Lars Kai Hansen email: lkhansen at ei.dtu.dk Christian Van den Broeck email: chris at luc.ac.be From NEUROCOG at vms.cis.pitt.edu Fri Mar 10 12:12:26 1995 From: NEUROCOG at vms.cis.pitt.edu (NEUROCOG@vms.cis.pitt.edu) Date: Fri, 10 Mar 1995 13:12:26 -0400 (EDT) Subject: Undergraduate Summer Research in Neural Basis of Cognition Message-ID: <01HNZ08DPQOO9GXHV5@vms.cis.pitt.edu> * * * * * * * * * * * * UNDERGRADUATE SUMMER RESEARCH * * * * * * * * * * * * * * * * * * * * * * * * IN COGNITIVE NEUROSCIENCE * * * * * * * * * * * * * * FULL DESCRIPTION OF THE PROGRAM TO BE FOUND AT: http://neurocog.lrdc.pitt.edu/npc (www) The Neural Processes in Cognition Training Program at the University of Pittsburgh and Carnegie Mellon University has several positions available for qualified undergraduates interested in studying cognitive neuroscience. Cognitive neuroscience is a growing interdisciplinary area of study (see Science, 1993, v. 261, pp 1805-7) that interprets cognitive functions in terms of neuroanatomical and neurophysiological data and computer simulations. Undergraduate students participating in the summer program will be expected to spend ten weeks of intensive involvement in laboratory research supervised by one of the program's faculty. The summer program also includes weekly journal clubs and a series of informal lectures. Students receive a $2500 stipend provided by National Science Foundation support. Each student's specific plan of research will be determined in consultation with the training program's Director. Potential laboratory environments include single unit recording, neuroanatomy, computer simulation of biological and cognitive effects, neuropsychological assessment, behavioral assessment, and brain imaging. Applications are encouraged from highly motivated undergraduate students with interests in biology, psychology, engineering, physics, mathematics or computer science. Application deadline is April 25, 1995. To apply, request application materials by email at neurocog at vms.cis.pitt.edu or phone 412-624-7064 or write to the address below. The materials provide a listing of faculty research interests to consider. Applicants are strongly encouraged to identify a particular faculty member of interest. The application includes a statement of interest, a recent school transcript, one faculty letter of recommendation and a selection of one or two areas of research interests. Send requests and application materials to: Professor Walter Schneider, Program Director Neural Processes in Cognition University of Pittsburgh 3939 O'Hara Street Pittsburgh, PA 15260 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * For additional information on the World Wide Web open URL http://neurocog.lrdc.pitt.edu/npc * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * From RPRAGHUPATHI at oavax.csuchico.edu Sat Mar 11 14:27:48 1995 From: RPRAGHUPATHI at oavax.csuchico.edu (RPRAGHUPATHI@oavax.csuchico.edu) Date: Sat, 11 Mar 1995 12:27:48 -0700 (PDT) Subject: call for papers - please circulate! Message-ID: <01HO0CWZ6KV800KWRK@oavax.csuchico.edu> CALL FOR PAPERS "NEURAL NETWORKS IN BUSINESS" HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 29 JANUARY 3 - 6, 1996, MAUI, HAWAII Papers are invited for the minitrack on NEURAL NETWORKS IN BUSINESS as part of The Information Systems track at the Hawaii International Conference on System Sciences (HICSS). Researchers and practitioners doing work in neural network applications in the different functional areas of business such as marketing, production, HRM, software engineering, finance, accounting, health care, and law are invited to submit papers for consideration for this minitrack at HICSS. Papers must focus on applications in new areas, describe the methodology, lessons learnt, and future research issues. Comparisons of neural network performance to traditional models are also encouraged. The task modeled must be important and relevant to business situations, the data used real, and the implementation complete. Mini-track coordinators: W. "RP" Raghupathi California State University Department of Accounting and Management Information Systems Chico, CA 95929-011 Phone: (916) 898-4825 Fax: (916) 898-4584 E-mail: RPRAGHUPATHI at OAVAX.CSUCHICO.EDU and Shashi Shekhar University of Minnesota Computer Science Department 4-192 EE/CS Building 200 Union Street S.E. Minneapoli, MN 55455-0159 Phone: (612) 624-8307 Fax: (612) 625-0572 E-mail: shekhar at cs.umn.edu Instructions for submitting papers: 1. Submit 6 (six) copies of the full paper, consisting of 20-25 pages double-spaced including title page, abstract, references and diagrams directly to either of the minitrack coordinators. 2. Do not submit the paper to more than one minitrack. Paper should contain original material and not be previously published or currently submitted for consideration elsewhere. 3. Each paper must have a title page which includes the title, full name of all authors, and their complete addresses including affiliation(s), telephone numbers(s), and e-mail address(es). 4. The first page of the paper should include the title and a 300-word abstract. DEADLINES: MARCH 15, 1995: Abstracts may be submitted to minitrack coordinators (or track coordinators) for guidance and indication of appropriate content. Authors unfamiliar with HICSS or who wish additional guidance are encouraged to contact any coordinator to discuss potential papers. JUNE 1, 1995: Six (6) copies of the full papers must be submitted to the appropriate minitrack or track coordinators. AUG. 31, 1995: Notification of accepted papers mailed to authors. OCT. 1, 1995: Accepted manuscripts, camera-ready, sent to minitrack coordinators. One author from each paper must be registered by this time. NOV. 15, 1995: All other registrations must be received. Registration received after this deadline may not be accepted due to space limitations. Hotel reservations should also be made by this time. The "Neural Networks in Business" minitrack is part of the Information Systems Track. There are two other major tracks in the conference: software technology and Digital Documents. The Information Systems Track itself has several minitracks that focus on a variety of research topics in Collaboration Technology, Decision Support and Knowledge-Based Systems, and Organizational Systems and Technology. For more information contact: Jay F. Nunamaker, Jr. E-mail: nunamaker at bpa.arizona.edu (602) 621-4475 FAX: (602) 621-2433 Ralph H. Sprague, Jr. E-mail:sprague at uhunix.uhcc.hawaii.edu (808) 956-7082 FAX: (808) 956-9889 For more information on other tracks, please contact: Software Technology Track: Hesham El-Rewini E-mail: rewini at unocss.unomaha.edu Bruce D. Shriver E-mail: b.shriver at genesis2.com Digital Documents Track: M. Stuart Lynn E-mail: msylnn at cpa.org For more information on the conference, please contact the conference coordinator: Pamela Harrington E-mail: hicss at uhunix.uhcc.hawaii.edu (808) 956-7396 FAX: (808) 956-3766 From maggini at mcculloch.ing.unifi.it Tue Mar 14 03:32:59 1995 From: maggini at mcculloch.ing.unifi.it (Marco Maggini) Date: Tue, 14 Mar 95 09:32:59 +0100 Subject: AI*IA 2nd Call for Papers Message-ID: <9503140832.AA20548@mcculloch.ing.unifi.it> 2nd CALL FOR PAPERS AI*IA 95 Fourth Congress of the Italian Association for Artificial Intelligence Firenze, October 11-13, 1995 (Palazzo dei Congressi) IMPORTANT DATES ------------------------------ Deadline for submission April 10, 1995 Notification of acceptance June 11, 1995 Camera-ready copies due July 7, 1995 Congress October 11-13, 1995 THEME ----------- The Congress of the Italian Association for Artificial Intelligence is the most relevant national event in the field of Artificial Intelligence for both researchers interested in the methodological aspects and practitioners involved in applications. Papers will include both long and short presentations on substantial, original and previously unpublished research in all aspects of AI, including, but not limited to: - Automated Reasoning - Knowledge Representation - Architectures and Languages for AI - Machine Learning - Natural Language - Planning and Robotics - Qualitative Reasoning - Perception and Vision - Distributed Artificial Intelligence - Cognitive Modeling - Connectionist Models. PAPER SUBMISSION ------------------------ Five (5) copies of original papers not exceeding 5000 words (about 10 single spaced pages) MUST BE POSTMARKED ON OR BEFORE MONDAY APRIL 10, 1995 to: Prof. Giovanni Soda Dipartimento di Sistemi e Informatica Universita'di Firenze via S. Marta,3 50139 Firenze (Italy) E-mail: giovanni at ingfi1.ing.unifi.it Each copy of the paper must include a cover sheet, separate from the body of the paper, including: title of the paper, full name, postal addresses, phone number, fax numbers, e-mail adresses, if any, of all authors, an abstract of 100-200 words and a set of keywords giving the area/subarea of the paper and describing the topic of the paper. NO ELECTRONIC SUBMISSION WILL BE ACCEPTED. ATTENDANCE ----------------------- At least one author of the papers selected for oral and/or poster presentation must attend the Conference. GENERAL INFORMATION -------------------------------------- This CFP and the latest information regarding AI*IA95 can be found in the World Wide Web under http://www-dsi.ing.unifi.it/ai/aiia95 or obtained sending an e-mail to aiia95 at ingfi1.ing.unifi.it. PROGRAM COMMITTEE ------------------------------------ Giovanni Soda (Universita' di Firenze) (Chair) Program Committee Members for the Scientific Track: --------------------------------------------------------------------------- Stefania BANDINI ( Universita' di Milano) Amedeo CAPPELLI (CNR - Pisa) Amedeo CESTA (CNR - Roma) Marco COLOMBETTI (Politecnico di Milano) Mauro DI MANZO (Universita' di Genova) Floriana ESPOSITO (Universita' di Bari) Massimo GALLANTI (CISE - Milano) Fausto GIUNCHIGLIA (Irst e Universita' di Trento) Marco GORI (Universita' di Firenze) Leonardo LESMO (Universita' di Torino) Daniele NARDI (Universita' di Roma) Enrico PAGELLO (Universita' di Padova) Vito ROBERTO (Universita' di Udine) Oliviero STOCK (IRST - Trento) Giuseppe TRAUTTEUR (Universita' di Napoli) Franco TURINI (Universita' di Pisa) Program Committee Members for the Application Track ------------------------------------------------------------------------------ Franco CANEPA (Imit - Novara) Giannetto LEVIZZARI (Centro Ricerche FIAT - Orbassano (TO)) Fabio MALABOCCHIA (Cselt - Torino) Fulvio MARCOZ (Alenia - Roma) Renato PETRIOLI (Fondazione Bordoni - Roma) Roberto SERRA (Ferruzzi Finanziaria - Ravenna) Lorenzo TOMADA (Agip - Milano) Organizing Committee Members ---------------------------------------------- Carlo BIAGIOLI (IDG - Firenze) Francesca CESARINI (Universita' di Firenze) Marco GORI (Universita' di Firenze) Elisabetta GRAZZINI(Universita' di Firenze) From maggini at mcculloch.ing.unifi.it Tue Mar 14 02:48:21 1995 From: maggini at mcculloch.ing.unifi.it (Marco Maggini) Date: Tue, 14 Mar 95 08:48:21 +0100 Subject: Neurocomputing Journal (Special Issue) Message-ID: <9503140748.AA20227@mcculloch.ing.unifi.it> ========================================================== CALL FOR PAPER Special Issue on Recurrent Networks for Sequence Processing in the Neurocomputing Journal (Elsevier) M. Gori, M. Mozer, A.C. Tsoi, and R.L. Watrous (Eds) ========================================================== I'm sorry to announce that, unlike what indicated in previous electronic delivering of this call for paper, following the Neurocomputing editorial policy, prospective authors shouldn't submit the manuscript to one of the Guest Editors, but to the Editor-in-Chief of the journal by March 30, 1995 at the following address: Dr. V. David Sanchez A. Neurocomputing - Editor-in-Chief German Aerospace Research Establishment DLR Oberpfaffenhofen Institute for Robotics and System Dynamics P.O. Box 1116 D-82230 Wessling, Germany e-mail: df1y at dv.op.dlr.de Manuscripts that have already been sent to one of the Guest Editors needn't to be sent also to the Editor-in-Chief. Marco Gori, Ph.D. Associate Professor of Computer Science, Dipartimento di Sistemi e Informatica Universita' di Firenze Via S. Marta, 3 - 50139 Firenze (Italy) voice: +39 (55) 479-6265 fax: +39 (55) 479-6363 email: marco at mcculloch.ing.unifi.it WWW: http://www-dsi.ing.unifi.it/~marco From ruppin at math.tau.ac.il Tue Mar 14 08:39:30 1995 From: ruppin at math.tau.ac.il (Eithan Rupin) Date: Tue, 14 Mar 1995 15:39:30 +0200 Subject: Workshop on Modeling Brain Disorders Message-ID: <199503141339.PAA20225@virgo.math.tau.ac.il> NEURAL MODELING OF COGNITIVE AND BRAIN DISORDERS Workshop, June 8 - 10, 1995 Inn and Conference Center, University of Maryland, College Park, MD (located just north of Washington, DC) SPONSORS National Institute of Mental Health National Institute of Neurological Disorders and Stroke National Institute on Deafness and Other Communication Disorders National Institute on Aging Institute for Advanced Computer Studies, University of Maryland Dept. of Neurology, University of Maryland School of Medicine Center for Neural Basis of Cognition, Carnegie Mellon & Pittsburgh Universities Adams Super Center for Brain Studies, Tel Aviv University Center for Neural and Cognitive Sciences, University of Maryland The focus of this workshop will be on the lesioning of neural network models to study disorders in neurology, neuropsychology and psychiatry. The goals of the workshop are: to evaluate current achievements and the possibilities for further advancement; to examine methodological modeling issues, such as limitations of the networks currently employed, and the required computational properties of future models; and to make the material presented at the workshop available to the wider audience of researchers interested in studying neural models of brain disorders. A Proceedings consisting of the abstracts of presentations will be available at the meeting, and a book of contributed chapters based on the workshop is under consideration. Program Committee: Rita Berndt (Maryland), Barry Gordon (Johns Hopkins), Michael Hasselmo (Harvard), Ralph Hoffman (Yale), Joanne Luciano (Boston), Jay McClelland (Carnegie Mellon), Al Nigrin (American), David Plaut (Carnegie Mellon), James Reggia (Maryland), Eytan Ruppin (Tel-Aviv), and Stanley Tuhrim (Mount Sinai). Travel Fellowships: Funding has been requested for a few fellowships to offset travel costs of students, postdocs, and residents. To be considered for any such travel support that becomes available, please send your name, address, phone number, fax number, email address, status (student/postdoc/resident) and proof of status (copy of current student ID, letter from faculty advisor, etc.) to reach J. Reggia at the address below by Wednesday, April 19, 1995. Either indicate the name of the oral/poster presentation of which you are a co-author, or state in two or three sentences why you wish to attend. Registration and Hotel Reservations: Please use attached forms. Registration and reservations prior to May 15 are strongly recommended. Questions? Direct questions about workshop registration or administration to Cecilia Kullman, UMIACS, University of Maryland, College Park, MD 20742 USA; Tel.: (301)405-6722; Fax: (301)314-9658; email: cecilia at umiacs.umd.edu For questions about hotel reservations, please contact the hotel directly as indicated on the reservation form. For questions about the content of the workshop, please contact either Eytan Ruppin via email at ruppin at math.tau.ac.il, or James Reggia, Dept. of Computer Science, A.V. Williams Bldg., University of Maryland, College Park MD 20742 USA; Tel.: (301) 405-2686; Fax: (301)405-6707; email: reggia at cs.umd.edu PROGRAM Each workshop session will be focused on specific disorders and composed of four invited presentations followed by a critical commentary and a general discussion. ----- Thursday, June 8 8:00 AM: Registration Desk Opens 9:00 AM: Welcome: NIH Representative 9:05 AM: Introduction: James Reggia, University of Maryland 9:30 AM: Alzheimer's Disease and Memory Disorders Chair and Discussant: Steven Small (University of Pittsburgh) James McClelland (Carnegie Mellon University), with B. McNaughton and R. O'Reilly - Complementary learning systems in the hippocampus and neocortex Michael Hasselmo (Harvard University) - A computational theory of Alzheimer's disease as a breakdown in cortical learning dynamics David Horn (Tel-Aviv University, Israel), with N. Levy and E. Ruppin - Neural modeling of memory deterioration in Alzheimer's disease Martha Farah (University of Pennsylvania), with L. Tippett - Semantic knowledge impairments in Alzheimer's disease: insights from connectionist modeling 12:30 PM: Lunch Break 2:00 PM: Epilepsy Chair and Discussant: Michael Rogawski (National Institutes of Health) Roger Traub (IBM Watson), with J. Jeffreys - Unifying principles in epileptic after-discharges in vitro John Rinzel (National Institutes of Health) - Modeling network rhythmogenesis of epilepsy using reduced Hodgkin-Huxley neurons William Lytton (University of Wisconsin) - Toward rational pharmacotherapeutics Mayank Mehta (University of Arizona), with C. Dasgupta and G. Ullal - A neural network model for kindling of focal epilepsy 5:00 PM: Break (Put up posters) 5:30 PM: Reception and Poster Presentations ----- Friday, June 9 9:00 AM: Stroke and Functional Effects of Focal Lesions Chair and Discussant: Barry Gordon (Johns Hopkins University) John Pearson (David Sarnoff Research Center) - Plasticity in the organization of adult somatosensory cortex: a computer simulation based on neuronal group selection James Reggia (University of Maryland), with S. Armentrout, S. Goodall, Y. Chen, and E. Ruppin - Modeling post-stroke cortical map reorganization Manfred Spitzer (University of Heidelberg, Germany) - A neuronal network model of phantom limbs Eytan Ruppin (Tel-Aviv University, Israel), with J. Reggia - Patterns of damage in associative memory models and multi-infarct dementia Noon: Lunch Break 1:30 PM: Aphasia and Acquired Dyslexia Chair and Discussant: Rita Berndt (University of Maryland) Gary Dell (University of Illinois), with M. Schwartz, N. Martin, E. Saffran and D. Gagnon - Lesioning a connectionist model of lexical retrieval to simulate naming errors in aphasia Max Coltheart (Macquarie University, Australia), with R. Langdon and M. Haller - Simulation of acquired dyslexias by the DRC model, a computational model of visual word recognition and reading aloud Karalyn Patterson (MRC, Applied Psychology Unit, Cambridge, England), with D. Plaut, J. McClelland, M. Seidenberg and J. Hodges - Connections and disconnections: a connectionist account of surface dyslexia David Plaut (Carnegie Mellon University) - Connectionist modeling of the breakdown and recovery of reading via meaning 4:30 PM: Dinner Break ----- Saturday, June 10 9:00 AM: Schizophrenia, Frontal and Affective Disorders Chair and Discussant: Jonathan Cohen (Carnegie Mellon University \& University of Pittsburgh) Ralph Hoffman (Yale University) - Modeling positive symptoms of schizophrenia using attractor and backpropagation networks David Servan-Schreiber (University of Pittsburgh), with J. Cohen - Cognitive deficits in schizophrenia: modeling neuromodulation of prefrontal cortex Dan Levine (University of Texas at Arlington) - Functional deficits of frontal lobe lesions Joanne Luciano (Boston University), with M. Negishi, M. Cohen, and J. Samson - A dynamic neural model of cognitive and brain disorders Noon: Lunch Break 1:30 PM: Commentary: James McClelland (Carnegie Mellon University) 2:00 PM: General Discussion A brief commentary will be followed by a general discussion of where we are and where we want to go from here. Among the issues to be considered are the successes and limitations of current models of neurological, neuropsychological and psychiatric disorders. What common methods have been identified? How can models of this sort be validated, and at what ``level of detail" should they be formulated? What topics seem amenable to future neural modeling, and what are barriers to further progress in this field? Finally, feedback on the workshop format and content will be solicited, and the interest and usefulness of holding similar workshops or more formal conferences in the future will be assessed. 4:30 PM: Adjournment POSTER PRESENTATIONS Thursday, June 8, 5:30 PM T.S. Braver, J.D. Cohen and D. Servan-Schreiber. A Model of Normal and Schizophrenic Performance in a Task Involving Working Memory and Inhibition. Carnegie Mellon University and University of Pittsburgh, USA. J.L Contreras-Vidal, H.L. Teulings and G.E. Stelmach. A Neural Model of Spatiotemporal Neurotransmitter Dynamics in Parkinson's Disease: Dopamine Depletion and Lesion Studies. Arizona State University, USA. J.T. Devlin, L.M. Gonnerman, E.S. Andersen and M.S. Seidenberg. Modeling Double Dissociation using Progressive, Widespread Damage. University of Southern California, USA. T.M. Gale, R.J. Frank, D.J. Done and S.P. Hunt. Modeling Conceptual Disruption in Dementia of Alzheimer Type. University of Hertfordshire, England. P. Gupta. Phonological Representation, Word Learning, and Verbal Short-Term Memory: A Neural and Computational Model. Carnegie Mellon University, USA. B. Horwitz, A.R. McIntosh, J.V. Haxby, D. Golomb, M.B. Schapiro, S.I. Rapoport and C.L. Grady. Systems-Level Network Analysis of Cortical Visual Pathways Mapped by Positron Emission Tomography (PET) in Dementia of the Alzheimer Type. National Institute on Aging, USA. E.A. Klein and J.C. Wu. Neural Modeling of Striatal and Limbic Structures in Major Depressive Illness Using PET. University of California at Irvine, USA. J.P. Levy. Semantic Representations in Connectionist Models: The Use of Text Corpus Statistics. University of Edinburgh, UK. K.A. Mayall and G.W. Humphreys. A Connectionist Model of Pure Alexia and Case Mixing Effects. University of Birmingham, England. B.F. O'Donnell, M.E. Karapelou, D. Pedini and R.W. McCarley. Visual Pattern Classification and Recognition in Normal and Schizophrenic Subjects: An Adaptive Resonance Theory Simulation Study. Harvard University and Boston University, USA. R.L. Ownby. A Computational Model of Stroke-related Hemineglect: Preliminary Development. University of Miami, USA. D.V. Reynolds, H.A. Getty and G. Atwell. Object-Oriented Computer Models of Brain Disorders Based on Functional Neuroanatomy. Henry Ford Hospital, USA, and University of Windsor, Canada. R. Shilcock and P. Cairns. Connectionist Modelling of Visuospatial Unilateral Neglect. University of Edinburgh, UK. R. Shilcock, M.L. Kelly and K. Loughran. Evidence for a Connectionist Model of Visuospatial Neglect based on Foveal Splitting. University of Edinburgh, UK. S. Tsumoto and H. Tanaka. Computational Analysis of Acquired Dyslexia of Chinese Characters based on Neural Networks. Medical Research Institute, Tokyo Medical University, Japan. J. Sirosh and R. Miikkulainen. Reorganization of Lateral Interactions and Topographic Maps Following Cortical Lesions. University of Texas at Austin, USA. J.P. Sutton. Modeling Cortical Disorders using Nested Networks. Harvard University, USA. C.S. Whitney, R.S. Berndt and J.A. Reggia. A Computational Model of Single Word Oral Reading. University of Maryland, USA. J. Wright and K. Ahmad. The Simulation of Acquired Language Disorders Using Modular Connectionist Architectures. University of Surrey, UK. DIRECTIONS TO UNIVERSITY OF MARYLAND INN AND CONFERENCE CENTER >From National (DCA) Airport: Upon leaving the airport, follow the signs to Washington, D.C., using the George Washington Parkway. Stay on the parkway until you see the I-495 Rockville exit. Follow 495 until you get to the New Hampshire Avenue exit. Take the New Hampshire/Takoma Park exit. Stay on New Hampshire Avenue and make a left at the second light onto Adelphi Road. Drive approximately three miles on Adelphi Road through two traffic lights. At the third light, make a left turn onto University Boulevard and an immediate right into the parking garage. The building is marked University College Center of Adult Education. >From Baltimore-Washington International (BWI) Airport: Upon exiting the airport, follow signs for I-95 (toward Washington). I-95 will take you to 95 South. Follow 95 South approximately 30 miles. Stay on 95 South until you get to the Route 1 South/College Park exit (Exit 25B). Follow Route 1 to the first exit for the University of Maryland (Systems Administration). Take this exit (Route 193) which immediately becomes University Boulevard. Keep on University Boulevard and go through two traffic lights. At the third light (intersection of University Boulevard and Adelphi Road) make a U-turn and an immediate right into the parking garage. The building is marked University College Center of Adult Education. >From Dulles (IAD) Airport: Upon leaving the airport, follow the signs towards Washington, D.C., until you see the signs for I-495. Take the exit towards Rockville. Follow 495 until you get to the exit for New Hampshire Avenue. Take the New Hampshire/Takoma Park exit. Stay on New Hampshire Avenue and make a left at the second light onto Adelphi Road. Drive approximately three miles on Adelphi Road through two traffic lights. At the third light, make a left turn onto University Boulevard and an immediate right into the parking garage. The building is marked University College Center of Adult Education. ----------------------------cut here------------------------------- REGISTRATION FORM NEURAL MODELING WORKSHOP College Park, Md June 8-10, 1995 Name: ___________________________________________________ Affiliation: ________________________________________________ Address: _________________________________________________ _________________________________________________________ Telephone: ___________________________ Fax: ________________________________ e-mail: ______________________________ ___ $50 Conference fee before 5/15/95 ___ $65 Conference fee after 5/15/95 ___ $25 Student/postdoc/resident fee Amount Enclosed: $________________ MAKE CHECKS PAYABLE TO "UMIACS-Neural Modeling Workshop." Conference fee includes proceedings, coffee and reception. Payment must accompany the registration form. Checks must be in US dollars only and payable to "UMIACS-Neural Modeling Workshop." Please do not send cash. CREDIT CARDS WILL NOT BE ACCEPTED. Students/postdocs/residents must provide a copy of a student ID card or a letter from a faculty member for proof of status. RETURN BY MAY 15, 1995 TO: Cecilia Kullman UMIACS University of Maryland College Park, MD 20742, USA Tel.: (301) 405-6722. Fax: (301) 314-9658 e-mail: cecilia at umiacs.umd.edu ----------------------------- cut here ---------------------------- HOTEL RESERVATION FORM Neural Modeling Workshop The Inn and Conference Center University of Maryland University College Please reserve the following accommodations: ___ $69 Single Occupancy ___ $84 Double Occupancy Arrival Date: ____________ Departure Date: ____________ ___ Smoking ___ Non-smoking ___ Deposit check enclosed in the amount of $ ____________ ___ Credit card guarantee: Credit card number: _____________________________ Credit card expiration date: ____________ Signature: ________________________________ Name: ___________________________________________ Affiliation: _________________________________________ Address: __________________________________________ __________________________________________________ Telephone: _________________________________________ Fax: ______________________________________________ Rates are per room per night. All rates are subject to a 5% occupancy tax. All reservations must be accompanied by a deposit of one night's room rate plus tax, or a credit card guarantee. Guaranteed reservations will be held until 6:00 a.m. the following day. Reservations not canceled prior to 6:00 p.m. on the arrival day will be charged one night's room rate plus tax. SEND BY MAY 15, 1995 TO: Reservations The Inn and Conference Center University of Maryland University College College Park, MD 20742, USA Tel.: (301) 985-7300, Fax: (301) 985-7850 From marks at u.washington.edu Tue Mar 14 12:55:17 1995 From: marks at u.washington.edu (Robert Marks) Date: Tue, 14 Mar 95 09:55:17 -0800 Subject: book announcement: Computational Intelligence Message-ID: <9503141755.AA29898@carson.u.washington.edu> COMPUTATIONAL INTELLIGENCE: Imitating Life edited by Jacek M. Zurada, Robert J. Marks II and Charles J. Robinson IEEE Press 1994 Computational Intelligence has emerged from the fusion of the fields of neural networks, fuzzy systems and evolutionary computation. For the first time, contributions of world-renowned experts and pioneers in the field are collected into a single volume. The articles are grouped into the following categories: Computational Learning Theory, Approximate Reasoning, Evolutionary Computation, Biological Computation and Pattern Recognition, Intelligent Control, Hybrid Computational Intelligence, and Applications. The contributions were first presented in a special Plenary Symposium held in conjunction with the 1994 World Congress on Computational Intelligence. Key features include: An introduction by the editors...An extensive overview of computational intelligence by James Bezdek...Articles by such leading experts as: James Bezdek, Didier Dubois, R. Eckmiller, Lawrence J. Fogel, Anil K. Jain, James M. Keller, Reza Langari, Erkki Oja, Henri Prade, Steven K. Rogers, J. David Schaffer, Michio Sugeno, H.J. Zimmerman, and others...A special section on applicatins in the fields of biology, signal and image processing, robotics and control... An extensive subject index for easy reference, and more! Contributions from Bezdek, Oja, Berenji, Hecht-Nielsen, Dubois, Prade, DeJong, Fogel, Anil Jain, Anderson, Usui, Eckmiller, Sugeno, Bonissone, Fukuda, Schaffer, Zimmerman, Fogelman & Rogers IEEE Member Price: $40.00 List Price: $49.95 1994 Hardcover 448 pp ISBN 0-7803-1104-3 To order, call IEEE Customer Service: (800)678-IEEE or (908)981-1393 or fax (908)981-9667. IEEE Order No: PC04580 Request the Table of Contents from r.marks at ieee.org From movellan at cogsci.UCSD.EDU Mon Mar 13 22:27:46 1995 From: movellan at cogsci.UCSD.EDU (Javier Movellan) Date: Tue, 14 Mar 1995 11:27:46 +0800 Subject: AV Database Message-ID: <9503141927.AA18485@ergo.UCSD.EDU> Tulips1 is a small Audio-Visual database useful for simple projects on audio-visual speech recognition. I am making the data available through anonymous ftp on ergo.ucsd.edu at /pub/tulips1 Tulips1 includes 12 subjects saying the first four digits in English. Audio part is in .au format, visual part was digitized at 30fps and it is in .pgm format. -Javier From yves at netid.com Tue Mar 14 16:01:26 1995 From: yves at netid.com (Yves Chauvin) Date: Tue, 14 Mar 95 13:01:26 PST Subject: Backpropagation volume available Message-ID: <9503142101.AA01815@netid.com> The volume: Back-propagation: Theory, Architectures, and Applications. (1995). Yves Chauvin and David E. Rumelhart (Eds.). Lawrence Erlbaum: Hillsdale, NJ. is now available. Table of Contents and ordering instructions are given below. Yves Chauvin ---------------------------------------------------------------------------- TABLE OF CONTENTS Backpropagation: The basic theory. D. E. Rumelhart, R. Durbin, R. Golden, Y. Chauvin Phoneme recognition using time-delay neural networks. A. Waibel, T. Hanazawa, G. E. Hinton, K. Shikano, K. J. Lang Automated aircraft flare and touchdown control using neural networks C. Schley, Y. Chauvin, V. Henkle Recurrent backpropagation networks. F. J. Pineda A focused backpropagation algorithm for temporal pattern recognition M. C. Mozer Nonlinear control with neural networks D. H. Nguyen, B. Widrow Forward models: Supervised learning with a distal teacher. M. J. Jordan, D. E. Rumelhart Backpropagation: Some comments and variations. S. J. Hanson Graded state machines: The representation of temporal contingencies in feedback networks. A. Cleeremans, D. Servan-Schreiber, J. L. McClelland Spatial coherence as an internal teacher for a neural network. S. Becker, G. E. Hinton Connectionist modeling and control of finite state systems given partial state information. J. R. Bachrach, M. C. Mozer Backpropagation and unsupervised learning in linear networks. P. Baldi, Y. Chauvin, K. Hornik Gradient-based learning algorithms for recurrent networks and their computational complexity. R. J. Williams, D. Zipser When neural networks play Sherlock Holmes P. Baldi, Y. Chauvin Gradient descent learning algorithms: A unified perspective P. Baldi ---------------------------------------------------------------------------- ORDERING INSTRUCTIONS To order: 1) call toll free: 1-800-9-BOOKS-9 (1-800-926-6579) 2) fax: 201-666-2394 3) email: orders at leahq.mhs.compuserve.com 4) send order to:Lawrence Erlbaum Associates,Inc 365 Broadway Hillsdale, NJ 07642 - Orders should include name of Editors, Title, and ISBN (0-8058-1258-X). -if paying by check include handling charge of $2.00 for first book and $.50 for each additional book. - if paying by credit card include: card name, ie: Visa, Mastercard, AMEX, Discover, include card number, expiration and signature if mailing credit card order. Price for the hard cover copy is $125 and $45 for the paper back. There is a 10% discount off orders that are prepaid (for individuals only). From jbower at bbb.caltech.edu Tue Mar 14 17:51:17 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Tue, 14 Mar 95 14:51:17 PST Subject: Two Postdoctoral Positions Message-ID: <9503142251.AA06832@bbb.caltech.edu> TWO POSTDOCTORAL POSITIONS IN CEREBELLAR NETWORK MODELING in collaboration with: James M. Bower California Institute of Technology and Erik De Schutter University of Antwerp, Belgium Two postdoctoral positions are available immediately to participate in an ongoing collaboration on the use of realistic models of cerebellar cortical neurons and circuits to investigate cerebellar function. A major objective of this collaboration is to construct a morphologically and physiologically realistic network model of the cerebellar cortex of the rat. This network model is an extension of our previous efforts to construct realistic compartmental models of the principle cell types within cerebellum (see below). The model itself is being constructed in the GENESIS simulation system, and involves the use of parallel supercomputers as computational engines. The project is supported by the Human Frontier Science Organization. Successful candidates will link modeling and physiology efforts in Dr. Bower's laboratory at the California Institute of Technology with the simulation-based research of Dr. De Schutter at the University of Antwerp (Belgium). As the selected candidates will be expected to collaborate extensively with researchers at both sites, exchange visits and dual appointments will be provided. Candidates should have computational neuroscience experience, preferentially with compartmental models or with the GENESIS software. Candidates whose previous research combines both modeling and experimental physiology are particularly encouraged to apply. Positions are available for 2 to 3 years, starting immediately. Salary commensurate with experience. Funding is independent of nationality. Caltech is an equal opportunity employer. Women and those underrepresented in science are particularly encouraged to submit applications. Applicants must send curriculum vitae, a statement of why this research interests them, and three references to BOTH Prof. Bower and Prof. De Schutter, if possible by e-mail. Prof. J.M. Bower Prof. E. De Schutter Div. of Biology Born Bunge Foundation MC 216-76 Dept. of Medicine California Institute of Technology University of Antwerp - UIA Pasadena, CA 91125 B2610 Antwerp USA Belgium fax: +1-818-4490679 +32-3-8202541 e-mail: jbower at smaug.bbb.caltech.edu erik at kuifje.bbf.uia.ac.be Additional information: World Wide Web: http://www.bbb.caltech.edu/bowerlab http://bbf-www.uia.ac.be/ E. De Schutter and J.M. Bower: An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. Journal of Neurophysiology 71: 375-400 (1994). E. De Schutter and J.M. Bower: An active membrane model of the cerebellar Purkinje cell: II. Simulation of synaptic responses. Journal of Neurophysiology 71: 401-419 (1994). E. De Schutter and J.M. Bower: Simulated responses of cerebellar Purkinje cell are independent of the dendritic location of granule cell synaptic inputs. Proceedings of the National Academy of Sciences USA 91: 4736-4740 (1994). From chris at psychologie.uni-leipzig.de Wed Mar 15 10:19:02 1995 From: chris at psychologie.uni-leipzig.de (Christian Kaernbach) Date: Wed, 15 Mar 95 16:19:02 +0100 Subject: Conference in Leipzig, June 29 - July 1 Message-ID: <9503151519.AA03144@psychologie.uni-leipzig.de> PHENOMENA AND ARCHITECTURES OF COGNITIVE DYNAMICS Symposium in Leipzig, Germany, June 29 to July 1 '95 Speakers: . Topics: A. Baddeley, Cambridge . E. Basar, Luebeck . A reference to Ernst Heinrich Weber I. Biederman, Los Angeles . (* 24.6.1795, 4 contributions) will H. Colonius, Oldenburg . be followed by two sessions, S. Dehaene, Paris . contrasting R. Eckhorn, Marburg, . * binding by temporal coherence P. Goldman-Rakic, New York . (and other topics related to M. W. Greenlee, Freiburg . synchronisation) J. L. van Hemmen, Muenchen . with G. Hitch, London . * seemingly serial processes in P. Koenig, La Jolla . working memory (Sternberg paradigm) M. Kubovy, Charlottesville . Both topics cover about the same R. van Lier, Nijmegen . temporal domain (25 ms to n00 ms). S. W. Link, Hamilton . Psychologists, psychiologists, and C. von der Malsburg, Bochum . connectionists will present to each I. V. Maltseva, Moskau . other their specific view point. R. Mausfeld, Kiel . Questions of interest will be: W. Phillips, Stirling . - Is binding by temporal coherence E. Scheerer, Oldenburg . fast enough? M. Schuermann, Luebeck . - How to code hierarchical structures O. Sporns, La Jolla . with temporal code? J. T. Townsend, Bloomington . - How can massively parallel models S. J. Williamson, New York . explain serial processes? The conference is the starting activity of a new cognition research group at Leipzig university. The scientific board is made up by Hans-Georg Geissler, Edith Goepfert, and Andreas Schierwagen. Organisation: Christian Kaernbach Institut fuer Allgemeine Psychologie Universitaet Leipzig Tieckstr. 2 04 275 Leipzig, Germany Tel.: +49 341 97-36932 Fax: +49 341 328464 E-mail: chris at psychologie.uni-leipzig.de From P.McKevitt at dcs.shef.ac.uk Wed Mar 15 12:39:01 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Wed, 15 Mar 95 17:39:01 GMT Subject: IEE COLLOQ. LONDON (MAY): GROUNDING-REPRESENTATIONS (Sharkey/ Mc Kevitt) Message-ID: <9503151739.AA18698@dcs.shef.ac.uk> ============================================================================== GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS ============================================================================== PROGRAMME AND CALL FOR PARTICIPATION GROUNDING REPRESENTATIONS: Integration of sensory information in Natural Language Processing, Artificial Intelligence and Neural Networks IEE COLLOQUIUM IEE Computing and Control Division [Professional group: C4 (Artificial Intelligence)] in association with: British Computer Society Specialist Group on Expert Systems and The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) MONDAY, MAY 15th, 1995 ********************** at the IEE Colloquium Savoy Place London, ENGLAND Chairs NOEL SHARKEY and PAUL MC KEVITT Department of Computer Science University of Sheffield, England WORKSHOP DESCRIPTION: Perhaps the most famous criticism of traditional Artificial Intelligence is that computer programs use symbols that are arbitrarily interpretable (see Searle, 1980 for the Chinese Room and Harnad, 1990 for the symbol grounding problem). We could, for example, use the word "apple" to mean anything from a "common fruit" to a "pig's nose". All the computer knows is the relationship between this symbol the others that we have given it. The question is, how is it possible to move from this notion of meaning, as the relationship between arbitrary symbols, to a notion of "intrinsic" meaning. In other words, how do we provide meaning by grounding computer symbols or representations in the physical world? The aim of this colloquium is to take a broad look at many of the important issues in relating machine intelligence to the world and to make accessible some of the most recent research in integrating information from different modalities. For example, why is it important to have symbol or representation grounding and what is the role of the emerging neural network technology? One approach has been to link intelligence to the sensory world through visual systems or robotic devices. Another approach is work on systems that integrate information from different modalities such as vision and language. Yet another approach has been to examine how the human brain relates sensory, motor and other information. It looks like we may be at long last getting a handle on the age old CHINESE ROOM and SYMBOL GROUNDING problems. Hence this colloquium has as its focus, "grounding representations. The colloquium will occur over one day and will focus on three themes: (1) Biology and development; (2) Computational models and (3) Symbol grounding. The target audience of this colloquium will include Engineers and Scientists in Neural Networks and Artificial Intelligence, Developmental Psychologists, Cognitive Scientists, Philosophers of mind, Biologists and all of those interested in the application of Artificial Intelligence to real world problems. PROGRAMME: Monday, May 15th, 1995 ************************ INTRODUCTION: 9.00 REGISTRATION + SUSTENANCE 10.00 `An introduction' NOEL SHARKEY (Department of Computer Science, University of Sheffield, ENGLAND) BIOLOGY: 10.30 `The neuronal mechanisms of language' VALENTINO BRAITENBERG (Max Plank Institute for Biological Cybernetics, Tuebingen, GERMANY) COMPUTATIONAL MODELS: 11.00 `Natural language and exploration of an information space' OLIVIERO STOCK (Istituto per la Ricerca Scientifica e Technologica, IRST) (Trento, ITALY) 11.30 `How visual salience influences natural language descriptions' WOLFGANG MAASS (Cognitive Science Programme) (Universitaet des Saarlandes, Saarbruecken, GERMANY) 12.00 DISCUSSION 12.30 LUNCH GROUNDING SYMBOLS: 2.00 `On grounding language with neural networks' GEORG DORFFNER (Austrian Institute for Artificial Intelligence, Vienna, AUSTRIA) 2.30 `Some observations on symbol-grounding from a combined symbolic/connectionist viewpoint' JOHN BARNDEN (Computing Research Laboratory, New Mexico, USA) & (Department of Computer Science, University of Reading, ENGLAND) 3.00 Sustenance Break 3.30 `Grounding symbols in sensorimotor categories with neural networks' STEVAN HARNAD (Department of Psychology, University of Southampton, ENGLAND) PANEL DISCUSSION AND QUESTIONS: 4.00 `Grounding representations' Chairs + Invited speakers S/IN S/IN: 4.30 `De brief/comments' PAUL MC KEVITT (Department of Computer Science, University of Sheffield, ENGLAND) 5.00 O/ICHE MHA/ITH ***************************** PUBLICATION: We intend to publish a book on this Colloquium Proceedings. ADDRESSES IEE CONTACT: Sarah Leong Groups Officer The Institution of Electrical Engineers (IEE) Savoy Place GB- WC2R OBL, London England, UK, EU. E-mail: SLeong at iee.org.uk (Sarah Leong) E-mail: mbarrett at iee.org.uk (Martin Barrett) E-mail: dpenrose at iee.org.uk (David Penrose) WWW: http://www.iee.org.uk Ftp: ftp.iee.org.uk FaX: +44 (0) 171-497-3633 Phone: +44 (0) 171-240-1871 (general) Phone: +44 (0) 171-344-8423 (direct) LOCATION: The Institution of Electrical Engineers (IEE) Savoy Place GB- WC2R OBL, London England, UK, EU. ACADEMIC CONTACT: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary) REGISTRATION: Registration forms are available from SARAH LEONG at the above address and should be sent to the following address: (It is NOT possible to register by E-mail.) Colloquium Bookings Institution of Electrical Engineers (IEE) PO Box 96 Stevenage GB- SG1 2SD Herts England, UK, EU. Fax: +44 (0) 143 874 2792 Receipt Enquiries: +44 (0) 143 876 7243 Registration enquiries: +44 (0) 171 240 1871 x.2206 PRE-REGISTRATION IS ADVISED ALTHOUGH YOU CAN REGISTER ON THE DAY OF THE EVENT. ________________________________________________________________________ R E G I S T R A T I O N COSTS ________________________________________________________________________ (ALL FIGURES INCLUDE VAT) IEE MEMBERS 44.00 NON-IEE MEMBERS 74.00 IEE MEMBERS (Retired, Unemployed, Students) FREE NON-IEE MEMBERS (Retired, Unemployed, Students) 22.00 LUNCH TICKET 4.70 MEMBERS: Members of the IEEIE, The British Computer Society and the Society for the Study of Artificial Intelligence and Simulation of Behaviour and Eurel Member Associations will be admitted at Members' rates. ============================================================================== GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS ============================================================================== From pjs at aig.jpl.nasa.gov Fri Mar 17 11:59:53 1995 From: pjs at aig.jpl.nasa.gov (Padhraic J. Smyth) Date: Fri, 17 Mar 95 08:59:53 PST Subject: Summer student positions at JPL Message-ID: <9503171659.AA02781@amorgos.jpl.nasa.gov> SUMMER POSITIONS AT THE JET PROPULSION LABORATORY (JPL) March 16th 1995 JPL is seeking applications from graduate students interested in summer positions (1995). Ideal candidates will have a background and interest in statistical pattern recognition, applied statistics, and image processing (or some subset of these topics). The work will include participation in ongoing projects for automated analysis of remote-sensing and sky-survey images, including both theoretical investigations and algorithm development. Candidates should be capable of implementing developed algorithms in a standard programming language such as C or within an environment such as MATLAB. This is an ideal opportunity for students wishing to get involved in the analysis of large high-dimensional datasets of scientific importance. Interested applicants should send a copy of their resume to: Padhraic Smyth JPL 525-3660 4800 Oak Grove Drive Pasadena, CA 91109. or email a postscript or ascii copy to: pjs at galway.jpl.nasa.gov From chris at orion.eee.kcl.ac.uk Fri Mar 17 10:14:29 1995 From: chris at orion.eee.kcl.ac.uk (Chris Christodoulou) Date: Fri, 17 Mar 95 15:14:29 GMT Subject: NN RIG Lecture Message-ID: <9503171514.AA14235@orion.eee.kcl.ac.uk> IEEE Neural Networks Regional Interest Group Chairman: Trevor Clarkson (tgc at kcl.ac.uk) NN RIG LECTURE "Applications of Neural Nets and Fuzzy Logic in Communications Systems" by Dr Stamatios Kartalopoulos (VP IEEE Neural Networks Council) to be held at King's College London, Strand, London WC2 on Wednesday 29th March 1995 at 6.00pm Room 1B23 (Strand Building) All are welcome at this lecture SUMMARY Neural Networks and Fuzzy Logic have already found wide range applicability. The projection is that this trend will keep continuing. Although the notion of neural networks was to escape from the conventional computer, nevertheless, to date, many neural network applications still depend on the conventional computer; during learning or during normal operation. The all-neural network, independent from the conventional digital computer (during learning and operation), and outperforming a conventional computer in equivalent functionality with cost/performance as a metric, is yet to come. Currently, we see niche applications that address improvements in a specific area within a complex system. Most of the applications described fall in this category. Paradigms used today are math intensive; the neural network problem has been shifted from the unknown mental process to an optimization algorithmic problem using conventional math. I cannot believe that the human brain operates on equations to estimate and solve a problem, to recognize objects, or to make inferences. I cannot believe that the human brain solves the backpropagation algorithm, or any other algorithm when it is trained. What is needed are fresh ideas. Ideas that go beyond the current conventional paradigms. Ideas that emulate human thinking and map it on a network. Conventional math is good for modelling to use computer tools for emulation and simulation, for the time being. However, conventional math is a mental attractor that keeps pulling us back to conventional techniques. In short, there are many Challenges that make the future seem more exciting than ever. From isabelle at research.att.com Fri Mar 17 12:39:29 1995 From: isabelle at research.att.com (Isabelle Guyon) Date: Fri, 17 Mar 95 12:39:29 EST Subject: No subject Message-ID: <9503171738.AA25122@big.info.att.com> #################################### REMINDER ################################# ###### M O N D A Y , M A R C H 2 0 T H D E A D L I N E ####### #################################### REMINDER ################################# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ < ICANN industrial 1 day workshop: > < Neural network applications > < to DOCUMENT ANALYSIS and RECOGNITION > < Paris, October 1995 > \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ * Layout and logical structure analysis of documents. * Map drawing and understanding. * OCR and handwriting recognition (off-line and on-line). * Multimedia document processing. * Image/text retrieval, automated indexing. * User interfaces to electronic libraries. * Image/text compression. This workshop will be a forum for application researchers and developers to present their systems and discuss tabu subjects, including: - hybrid solutions, - solutions that work (don't know why), - solutions that do not work (though theoretically optimum), - hacks, tricks and miscellaneous occult methods, - marketing advantage/disadvantage of saying that there is a NN in the system. The condition of acceptance will not be the novelty of the algorithms but the existence of a working "money making" application or at least a working prototype with a path towards industrialization. The performance of the system should be measured quantitatively, preferably using known benchmarks or comparisons with other systems. As for regular scientific papers, every statement should be properly supported by experimental evidence, statistics or references. Demonstrations and videos are encouraged. *** Submission deadline of a 6 page paper = March 20, 1995 *** Send 4 paper copies to: Isabelle Guyon ---------------------- AT&T Bell Laboratories 955 Creston road Berkeley, CA 94708, USA Electronic formats available at: ftp lix.polytechnique.fr login: anonymous password : your e-mail address ftp> cd /pub/ICANN95/out For more informations on ICANN write to isabelle at research.att.com. From jain at arris.com Fri Mar 17 12:58:20 1995 From: jain at arris.com (Ajay Jain) Date: Fri, 17 Mar 95 09:58:20 -0800 Subject: Position available: Arris Pharmaceutical Message-ID: <9503171758.AA01501@snug.arris.com> Position available: Arris Pharmaceutical Corporation Job Title: Scientist, Computational Sciences Arris Pharmaceutical is a South San Francisco-based pharmaceutical company of about 85 people (NASDAQ: ARRS). We are dedicated to the efficient discovery and development of orally-active human therapeutics. Computational approaches to drug discovery form one of our core technologies, and we have developed novel techniques for computer-aided drug design that rely ideas from machine learning, computational geometry, chemistry, and physics. Job description and requirements: Working as a member of the Computational Sciences Department, perform applied research in computational drug design, particularly in the areas of flexible molecular docking, three-dimensional quantitative structure-activity prediction, flexible molecular database screening, and de novo ligand design. The position requires a PhD in Computer Science with significant research experience in computational geometry, machine learning, pattern recognition, or related fields. Formal training in organic chemistry is also required. The ideal candidate will have demonstrated success in applied research on non-toy problems requiring understanding of the underlying problem domain. Proficiency in C or C++ is also required. Please send your resume with the names and addresses of three references to me (address below). Some publications that are illustrative of work in our group are listed below: A. N. Jain, N. L. Harris, and J. Y. Park. Quantitative Binding Site Model Generation: Compass Applied to Multiple Chemotypes Targeting the 5HT1A Receptor. Journal of Medicinal Chemistry. In press (appearing very soon). A. N. Jain, T. G. Dietterich, R. L. Lathrop, D. Chapman, R. E. Critchlow, B. E. Bauer, T. A. Webster, and T. Lozano-Perez. Compass: A shape-based machine learning tool for drug design. Journal of Computer-Aided Molecular Design 8(6): 635-652, 1994. A. N. Jain, K. Koile, D. Chapman. Compass: Predicting biological activities from molecular surface properties; performance comparisons on a steroid benchmark. Journal of Medicinal Chemistry 37: 2315-2327, 1994. T. G. Dietterich, A. N. Jain, R. L. Lathrop, and T. Lozano-Perez. A comparison of dynamic reposing and tangent distance for drug activity prediction. In Advances in Neural Information Processing Systems 6, ed. J. D. Cowan, G. Tesauro, and J. Alspector. San Francisco, CA: Morgan Kaufmann. 1994. ------------------------------------------------------------------------ Dr. Ajay N. Jain Senior Scientist, Computational Sciences Email: jain at arris.com Arris Pharmaceutical Corporation Phone: (415) 737-1651 385 Oyster Point Boulevard, Suite 3 FAX: (415) 737-8590 South San Francisco, CA 94080 From giles at research.nj.nec.com Fri Mar 17 16:26:10 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 17 Mar 95 16:26:10 EST Subject: Computational capabilities of recurrent NARX neural networks Message-ID: <9503172126.AA17893@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ Computational capabilities of recurrent NARX neural networks UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-12 AND CS-TR-3408 H. T. Siegelmann[1], B. G. Horne[2], C. L. Giles[2,3] [1] Dept. of Information Systems Engineering, Technion, Haifa 32000, Israel [2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [3] UMIACS, University of Maryland, College Park, MD 20742 iehava at ie.technion.ac.il {horne,giles}@research.nj.nec.com Recently, fully connected recurrent neural networks have been proven to be computationally rich --- at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called {\em NARX networks}. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by \[ y(t) = \Psi \left( \rule[-1ex]{0em}{3ex} u(t-n_u), \ldots, u(t-1), u(t), y(t-n_y), \ldots, y(t-1) \right), \] where $u(t)$ and $y(t)$ represent input and output of the network at time $t$, $n_u$ and $n_y$ are the input and output order, and the function $\Psi$ is the mapping performed by a Multilayer Perceptron. We constructively prove that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use the NARX models, rather than conventional recurrent networks without any computational loss even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. ------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/NARX.capabilities.ps.Z -------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From AIHENP95 at pisa.infn.it Fri Mar 17 15:30:01 1995 From: AIHENP95 at pisa.infn.it (AIHENP95@pisa.infn.it) Date: Fri, 17 Mar 1995 21:30:01 +0100 (WET) Subject: Fourth Mailing - AIHENP95 Pisa, 3-8 April 1995 Message-ID: <950317213002.6340022f@pisa.infn.it> _______________________________________________________________________________ FOURTH INTERNATIONAL WORKSHOP ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE FOR HIGH ENERGY AND NUCLEAR PHYSICS AIHENP95-Pisa Pisa (Tuscany), Italy 3 - 8 April, 1995 --------- FOURTH MAILING ---------- ---Workshop Program---Participant Information--- _______________________________________________________________________________ INTERNATIONAL SCIENTIFIC ADVISORY COMMITTEE S. R. Amendolia INFN & Univ. Sassari Pisa I G. Auger GANIL Caen F K. H. Becks Bergische Univ. Wuppertal D O. Benhar INFN Rome I R. Brun CERN CN Geneva CH B. Denby INFN Pisa I F. Etienne CPPM Marseille F R. Gatto Geneva Univ. Geneva CH G. Gonnet ETHZ Zurich CH M. Green Royal Holloway Col. Egham Surrey GB V. Ilyin Moscow University Moscow RU F. James CERN Geneva CH A. Kataev INR Moscow RU P. Kunz SLAC Stanford USA M. Kunze Ruhr University Bochum D C. S. Lindsey KTH Stockholm S V. Matveev INR Moscow RU K. McFarlane CEBAF/Norfolk Newport News USA R. Odorico Univ. of Bologna Bologna I D. Perret-Gallix LAPP Annecy F C. Peterson Lund University Lund S B. Remaud IN2P3 Paris F E. Remiddi Univ. of Bologna Bologna I P. Ribarics MPI Munich D M. Sendall CERN ECP Geneva CH Y. Shimizu KEK Tsukuba JP D. Shirkov JINR Dubna RU A. Smirnitsky ITEP Moscow RU R. Tripiccione INFN Pisa I M. Veltman Univ. of Michigan Ann Arbor USA J. Vermaseren NIKHEF-H Amsterdam NL C. Vogel CISI Paris F E. Wildner CERN PS Geneva CH DEAR COLLEAGUES: This is the Fourth Mailing for the 1995 edition of the AIHENP worskshop series, containing the Workshop Program, Further Information for Participants, and the Registration/Accommodation forms for those of you who may not yet have turned them in. The LaTeX file for preparation of camera ready manuscripts is not included here but is still available via WWW at the URL's shown below. The page limit is 6 pages for parallel papers and 8 pages for plenary papers. The organizing committee can also supply a copy on request. The completed papers are to be brought to the workshop. WWW URL's for viewing AIHENP95 Information: http://www.cern.ch/Physics/Conferences/C1995/Overview.html http://www1.cern.ch/NeuralNets/nnwInHep.html We on the organizing committee look forward to seeing you in Pisa very soon! Bruce Denby Conference Chairman For the International Advisory Committe and Local Organizing Committee ABOUT AIHENP ------------ The AIHENP series workshops are intended primarily for scientists working in fields related to High Energy and Nuclear Physics. The AIHENP series began in Lyon, France, in March 1990, and has subsequently been sited in La Londe les Maures, France, in January 1992, and in Oberammergau, Germany, in October 1993. The workshops have always been less formal than full conferences, stressing *new* results and ideas, and with sufficient time allowed for spontaneous discussions to develop. As in the past, the 1995 workshop will consist of plenary sessions and three parallel sessions covering our three subgroups, 1) SOFWARE ENGINEERING 2) ARTIFICIAL INTELLIGENCE AND NEURAL NETS 3) SYMBOLIC MANIPULATION along with tutorials and demonstrations, poster session, and industrial booths. Tutorials this year include: INTRODUCTION TO FUZZY LOGIC FOR HIGH ENERGY PHYSICS; PARTICLE SEARCHES WITH NEURAL NETWORKS; INTRODUCTION TO C AND OBJECT ORIENTED PROGRAMMING FOR PHYSICISTS. We have also invited a few experts from other fields to come and give keynote talks which should give us some new perspectives. This year, for example, we have talks on SPACE APPLICATIONS OF NEURAL NETWORKS and COMMERCIAL APPLICATIONS OF NEURAL NETWORKS, the latter of which will be given by the President of the European Neural Networks Society, Prof. Francoise Fogelman, of SLIGOS, Paris, France. Industrial Exhibits confirmed are: IBM (Rome), Spring (Rome), MACS (Pisa, CNAPS Hardware), and Siemens (Munich/Vienna, SYNAPSE Hardware). The workshop is also support by Apple Computer, Alenia, and CAEN. FURTHER INFORMATION FOR PARTICIPANTS ------------------------------------ All organizational details except the scientific program are being handled by TRE EMME CONGRESSI in Pisa. Their official registration forms and information follow at the end of this document. YOUR TRE EMME REGISTRATION FORM IS YOUR ONLY OFFICIAL REGISTRATION FOR THE WORKSHOP. Pisa Airport is only about one kilometer from town center and has daily connections to several international airports (Paris, London, Frankfort, etc.) and to national airports of Rome and Milan. Pisa may also be reached easily by train via Milan, Rome, Florence, or Turin. There are taxi stands at the airport and train station. A typical taxi fare within town is about 10.000 lira. Pisa is a small city and often walking is the best way to get around. The Palazzo dei Congressi is within walking distance of all workshop hotels. Traffic and especially parking in Pisa are difficult. Rental cars are not recommended. The Worskhop Telephone Numbers (DURING THE WORKSHOP ONLY) will be: +39 50 598.139 telephone +39 50 598.112 fax A limited number of terminals will be available for checking electronic mail, etc., during the workshop. Poster sessions and Industrial Exhibits will run continuously during AIHENP95. Posters can be mounted or removed at any time during the workshop. Presenters of posters are encouraged to be present at their posters from 13:30 - 14:00 and from 19:00 - 20:00 for viewing and questions. --------------------------------- CUT HERE ------------------------------------ +-----------------------------------------------------------------------+ | AIHENP95 Workshop Mini-Schedule Pisa 3-8 April 1995 | +-----------------------------------------------------------------------+ | KEY: A1 = First "AI" sess. (Main Auditorium) | | A2 = Second "AI" sess. (Pacinotti Room) | | SM = Sym. Manip. sess. (Fermi Room) | | SE = Soft. Eng. sess. (Pacinotti Room) | | | | (Plenary Talks in Main Auditorium) | | (User Terminals in Auletta "A") | | (Poster Sessions and Industrial Exhibitions run continuously) | | (Posters manned 13:30-14:00, 19:00-20:00) | +-----+----------+----------+----------+----------+----------+----------+ |Sess.| Mon 3 | Tue 4 | Wed 5 | Thu 6 | Fri 7 | Sat 8 | +-----+----------+----------+----------+----------+----------+----------+ |08:30| -09:30- | | | | |Par. Sess.| | |Local Org.| Intro & | Parallel | Parallel | Parallel | A1-SM-SE | |Morn.|Committee | Plenary | Sessions | Sessions | Sessions | -09:30- | | | Meeting | Talks | A1-SM-SE | A1-SM-SE | A1-SM-SE | Summary | |12:30| Palazzo | | | | | Talks | +-----+----------+----------+----------+----------+----------+----------+ |Meal?| No | Lunch | Lunch | Lunch | Lunch | No | +-----+----------+----------+----------+----------+----------+----------+ |14:00| -16:00- | Plenary | | | | | | |Registr'n.| Talks | Parallel | Parallel | Parallel | Workshop | |Aft. | Begins | -15:00- | Sessions | Sessions | Sessions | Ends | | |Palazzo di|Par. Sess.| A1-SM-A2 | A1-SM-A2 | A1-SM-SE | | |19:00| Congressi| A1-SM-SE | | | | | +-----+----------+----------+----------+----------+----------+----------+ | |**18:00** | | -20:30- | -19:00- | | | | Welcome | Special | Dinner | Advisory | Special | |Eve. | Cocktail | Talks | Royal |Committee | Talks | | |Palazzo di| | Victoria | Meeting | | | | Congressi| | Hotel | Palazzo | | +-----+----------+----------+----------+----------+----------+ --------------------------------- CUT HERE ------------------------------------ ========================================================================= AIHENP95 PISA WORKSHOP PROGRAM 3-8 APRIL 1995 PALAZZO DEI CONGRESSI, PISA, ITALY ========================================================================= WELCOME CEREMONIES 3 April Monday 18:00 - 20:00 ------------------------------------------------------------------------- Welcoming Addresses: Giuseppe Pierazzini, Director, INFN Sezion di Pisa Francoise Fogelman, President, European Neural Network Society ========================================================================= PLENARY SESSION 4 April Tuesday 08:30 - 15:00 ========================================================================= 4 April Tuesday Morning 08:30 - 12:30 Chair: TBA 08:30 WELCOME TO AIHENP95 Pisa - Bruce Denby (INFN Pisa) Workshop Chairman 08:45 SPACE APPLICATIONS OF - Thomas Lindblad (KTH Stockholm) NEURAL NETWORKS 09:45 COMMERCIAL APPLICATIONS OF - Francoise Fogelman (SLIGOS, Paris) NEURAL NETWORKS (President, European Neural Network Society) 10:45 STATE OF THE ART IN SYMBOLIC - TBA MANIPULATION 11:45 INTRODUCTION TO C AND OBJECT - P. Murat (ITEP Moscow/INFN Pisa) ORIENTED PROGRAMMING FOR HIGH ENERGY PHYSICISTS 4 April Tuesday Afternoon 14:00 - 15:00 Chair: TBA 14:00 STATE OF THE ART IN SYMBOLIC - TBA MANIPULATION ========================================================================= PARALLEL SESSIONS 4-7 April ========================================================================= "AI" SESSION: NEURAL NETWORKS, FUZZY LOGIC, EXPERT SYSTEMS, LANGUAGES, GENETIC ALGORITHMS, ETC. ------------------------------------------------------------------------- 4 April Tuesday (afternoon) 15:00 - 19:00 ****** Survey of AI Methods in Physics ****** Chair: TBA 15:00 - J. Moeck (MPI Munich, Ge.) "Artificial Neural Networks as a Second-Level Trigger at the H1 Experiment -- Performance Analysis and Results --" 15:30 - S. Westerhof(Wuppertal, Ge.) "Application of Neural Networks in TeV Gamma-Ray Astronomy" 16:00 - C. David (Univ. Nantes, Fr.) "Neural Networks in Theoretical Nuclear Physics" 16:30 - break (30 min) 17:00 - TBA "Tutorial: Intro. to Fuzzy Logic" 17:30 - E. Gandolfi (Univ. Bolgna) "Fuzzy Logic Applications in HEP" 18:00 - G. Stimpfle-Abele (Univ. Barcelona, Spain) "Tutorial: Neural Nets for Particle Searches" ----------------------------------------------------------------------- 5 April Wednesday (morning) 8:30 - 12:30 ****** Triggering, Real-time and Hardware AI Systems ****** Chair: TBA 8:30 - C. Loomis (Rutgers, US) "Using an Analog Neural Network to Trigger on Tau Leptons at CDF" 9:00 - S. Vlachos (Univ. Basel,Switzerland) "A neural network trigger system for the CP-LEAR experiment" 9:30 - R. Nobrega (CERN) "A Neural Network Trigger with a RICH Detector" 10:00 - L. Lundheim (CERN) "A Programmable Active Memory Implementation of a Neural Network for Second Level Triggering in ATLAS" 10:30 - break (30 min) ****** Multivariate Analysis and Neural Networks ****** 11:00 - C. Peterson (Lund, Sweden) "Determining Dependency Structures and Estimating Nonlinear Regression Errors without Doing Regr. 11:45 - J.Proriol (Clermont-Ferrand, Fr.) "Multimodular Neural Networks for the Classification of High Energy Events" 5 April Wednesday (evening) 14:00 - 19:00 +++++++ AI Parallel Session I +++++++ ****** Multivariate Analysis and Neural Networks ****** Chair: TBA 14:00 - Y. Wang (CERN) "Neural Network: A Powerful Tool for Classification" 14:30 - Ll. Garrido (Univ. Barcelona) "Using Neural Networks to enhance signal over background: The top--quark search" 15:00 - Tariq Aziz (CERN) "Heavy Flavour Tagging from hadronic Z decays using Neural Network Technique" 15:30 - break (30 min) 16:00 - H. E. Miettinen (Rice Univ., USA) "Top Quark Search with Multivariate Probability Estimates and Neural Networks" 16:30 - V.V.Ivanov (Dubna, Russia) "Input Data for a Multilayer Perceptron in the Form of Variational Series" 17:00 - R. Sinkus (Univ. Hamburg, Ge) "A novel approach to error function minimization for feedforward neural networks" 17:30 - D.Steuer (Univ. Ilmenau, Ge.) "The use of adaptive recursive estimation methods for acceleration of backpropagation learning algorithm" 18:00 - J. Wroldsen (Gjovik College, Norway) "A Robust Algorithm for Pruning Neural Networks" 18:30 - P. Fuchs (Lab. Saturne, Fr.) "The development of neural network algorithms with LICORNE - a commercial software tool for data analysis andimage processing" 5 April Wednesday (evening) 14:00 - 19:00 +++++++ AI Parallel Session II +++++++ ****** Genetic, Evolutionary and Cellular Automata Algorithms ****** Chair: TBA 14:00 - G. Organtini (Univ. Rome) "Using Genetics in Particle Physics" 14:30 - M. Kunze (Ruhr Univ. at Bochum, Ge) "Application of neural networks and a genetic algorithm in the analysis of multi particle final states" 15:00 - C. Busch (Univ. Wuppertal, Ge.) A Very Tentative Approach Towards the Problem of Adjusting Monte Carlo Generator Parameters by Means of the Evolution Strategy" 15:30 - break (30 min) 16:00 - H.M.A. Andree (Utrecht, Nl) "The optimisation of feed-forward neural networks by means of genetic algorithms" 16:30 - R. Berlich (Ruhr Univ. at Bochum, Ge) "Training neural networks using evolutionary strategies" 17:00 - M.Casolino (Univ. Rome) "A Cellular Automaton to Filter Noise in High Energy Physics Particle Tracks" 17:30 - G. Ososkov (for E.A.Tikhonenko) (Dubna, Russia) "New Random Number Generator on the Base of Cellular Automaton Suitable for Parallel Implementing" 18:00 - M. Kunze (Ruhr Univ. at Bochum, Ge) "Growing Cell Structures" 18:30 - Spare slot ----------------------------------------------------------------------- 6 April Thursday (morning) 8:30 - 12:30 ****** Triggering, Real-time and Hardware AI Systems ****** Chair: TBA 8:30 - D. Goldner (Univ. Dortmund, Ge.) "Artificial Neural Networks as a Level-2 Trigger for the H1 Experiment" 9:00 - T. T. Tong (DESY) "Using a high speed analog neural network chip in the first level R-Z trigger of the H1-Experiment at HERA". 9:30 - R. Odorico (Univ. Bologna, It.) "Trigger for Beauty Employing them MA16 Neural Microprocessor" 10:00 - A.W.Lodder (Univ. Utrecht, Nl) "The Implementation of Feed-Forward Neural Networks on the CNAPS system" 10:30 - break (30 min) 11:00 - I. Lazzizzera(INFN Trento, It.) "TOTEM: a highly parallel chip for triggering applications with inductive learning based on the Reactive Tabu Search" 11:30 - C. S. Lindsey (KTH, Sweden) "Experience with the IBM ZISC Neural Network Chip" 12:00 - Th. Lindblad (KTH, Sweden) "Evaluation of a RBF/DDA Neural Network" 6 April Thursday (evening) 14:00 - 19:00 +++++++ AI Parallel Session I +++++++ ****** Neural Networks in Offline Analysis ****** Chair: TBA 14:00 - J. Proriol (Clermont-Ferrand, Fr.) "Tagging Higgs Boson in Hadronic LEP2 events with Neural Networks" 14:30 - Oliver Cooke (CERN) Determining the Primary Parton Charge in a Jet 15:00 - A.A. Handzel (Weizmann Inst. Israel) "Comparison of Three Neural Network Algorithms in a Data Analysis Classification Task" 15:30 - break (30 min) 16:00 - A. de Angelis (CERN) "Tagging the s quark in the hadronic decays of the Z" 16:30 - C. Guicheney (U. Blaise Pascal, Fr.) "Using Neural networks and Fuzzy Logic in the Search of the Higgs" 17:00 - R. Sparvoli (Univ. Rome) "Gamma Ray Energy Discrimination with Neural Networks" 17:30 - M.Casolino (Univ. Rome) "Optimization of a Neural Network for Particle Classification in a Segmented Calorimeter" 18:00 - J.S. Lange (TU Dresden, Ge.) "Cluster Gravitation - An Extension to the Kohonen Algorithm for the Identification of the pp-Bremsstrahlung at COSY 18:30 - W. Tajuddin (Univ. Malaya, Malaysia) "Understanding Event Classification by Multilayer Perceptrons" 6 April Thursday (evening) 14:00 - 19:00 +++++++ AI Parallel Session II +++++++ ****** Tracking ****** Chair: TBA 14:00 - Baginyan S. A. (Dubna, Russia) " Controlled Neural Network Application in TRACK-MATCH Problem" 14:30 - D.L. Bui(DESY) "Application of the elastic arms approach to track finding in the Forward Tracking Detector of H1-Experiment at HERA" 15:00 - C.A. Byrd (Univ. Arkansas, USA) "A Rough-Set-Based Grouping Algorithm for Particle Tracking in High Energy Physics" 15:30 - break (30 min) 16:00 - M. Fuchs (IKF Frankfurt, Ge) "A 3-dimensional Transformation Tracker for raw TPC-data" 16:30 - G. A. Ososkov (Dubna, Russia) "Applications of cellular automata and neural networks for particle track search" 17:00 - N. Stepanov (ITEP, Russia) "Towards the fast trackfinder algorithm for the CMS experiment at LHC" 17:30 - W. Tajuddin (Univ. Malaya, Malaysia) "Structured Feed- Forward Neural Network for Track Finding" 18:00 - G. Stimpfle-Abele (Univ. Barcelona, Spain) "Determination of Beam Parameters in LEAR with Neural Nets" 18:30 - "L. Santi (Udine, Italy) "Fast Reconstruction of the Antiproton Annihilation Vertex at Intermediate Energies, Based on a Rotor Neural Network" ----------------------------------------------------------------------- 7 April Friday (morning) 8:30 - 12:30 ****** Triggering, Real-time and Hardware AI Systems ****** Chair: TBA 8:30 - G. Pauletta (Univ. Udine, It.) "Pulse Shape Discrimination with a Neural Network" 9:00 - G.B.Pontecorvo (Dubna, Russia) "On a Possible Second Level Trigger for the Experiment DISTO" 9:30 - J.M. Seixas (CERN) "A Neural Second-Level System Based on Calorimetry and Principal Components Analysis" 10:00 - M. Masetti (Univ. Bologna, It.) " Design of VLSI realization of a very fast Fuzzy processor for trigger applications in HEP" 10:30 - break (30 min) 11:00 - G. G. Athanasiu (Univ. Crete) "Retinal Neurocomputing Principles for Real Time Track Identification" 11:30 - J. Seixas (Rio de Janeiro) "Implementing a Neural Second Level Trigger System on a Fast DSP: The Feature Extraction Problem" 12:00 - D. Salvatore (Univ. Pisa, It.) "Neuroclassifier Chip for Vertex Detection" 7 April (evening) 14:00 - 19:00 ****** Adaptive and Symbolic Methods in Offline Analysis ****** Chair: TBA 14:00 - D'Agostini (Univ. Rome) "A Multidimensional Unfolding Method Based on Bayes' Theorem" 14:30 - R. Sinkus (Univ. Hamburg, Ge) "Neural network based electron identification in the ZEUS detector" 15:00 - D. Falciai (Univ. Perugia, Italy) "Electron Identification with Neural Network at SLD" 15:30 - break (30 min) 16:00 - K. A. Gernoth "Neural Network Models of Nuclear Systematics" 16:30 - G. Tomasicchio (Bari) "An object oriented approach to design symbolic and connectionist systems in HEP" 17:00 - Th. Flor, (Univ. Ilmenau, Ge.) "Integration of symbolic rule based and subsymbolic neural net based information processing in a multi paradigm knowledge based system" 17:30 - E. Bubelev (for V.M.Severyanov) (Dubna, Russia) "Artificial Neural Networks Usage for Recognition of Poincare' Imaginable Statistical Bodies in Non-Euclidean High Energy Physics" 18:00 - Giulio D'Agostini (Univ. Rome) "On the Use of the Covariance Matrix to Fit Correlated Data" 18:30 - V.Il. Tarasov. (Nucl. Safety Inst. Russia) "The statistical analysis of pollution 137-Cs in settlements Russian Chernobyl Zone" ----------------------------------------------------------------------- 7 April Saturday (morning) 8:30 - 09:30 Chair: TBA 08:30 - Spare slot 09:00 - A. Smirnitsky (ITEP) "Summary of AIHENP-Moscow Meeting, AI Section" ----------------------------------------------------------------------- ****** "AI" Posters 4-8 April Continuous****** (Posters can be mounted or removed at any time during the workshop. Presenters of posters are encouraged to be present at their posters from 13:30 - 14:00 and from 19:00 - 20:00.) V.V. Ivanov, (Dubna) "Multidimensional data analysis based on the omega-n-k criteria and multilayer perceptrons" Th. Flor, (Univ. Ilmenau, Ge.) "Using advantages of ODBS and CORBA-standard for modelling of objectoriented distributed knowledge based systems in VisualWorks/Distributed-Smalltalk" Th. Flor, (Univ. Ilmenau, Ge.) "Multi paradigm inference system VISIS as knowledge based framework within an objectoriented medical information modell" J. Proriol (Clermont-Ferrand, Fr.) "Selection of Variables for Neural Network Analysis" V.M.Severyanov (Dubna, Russia) "Application of Artificial Neural Networks to Low pT Muon Identification in ATLAS Hadron Calorimeter" V.M.Severyanov (Dubna, Russia) "Calculation and Interactive Construction of Flat Fractals Using Neural Networks" W. Tajuddin (Univ. Malaya, Malaysia) "Track Classification and Enumeration in Solid State Nuclear Track Detectors usinc Cellular Automata" T. Yakhno (Novosibirsk, Russia) "TallTalk: Combining OO-paradigm and Constraint Programming for Knowledge Representation" ========================================================================= SYMBOLIC MANIPULATION SESSION: ------------------------------------------------------------------------- 5 April (morning) 8:30 - 12:30 Chair: TBA SUBGROUP C-2: Full automation systems 8:30 - Y.Kurihara (KEK, Japan) "Catalogue of electron-positron annihilation processes using GRACE system" 9:00 - A.Pukhov (INP MSU, Moscow, Russia) "Automatic calculation of amplitudes in the CompHEP package" 9:30 - M.Jimbo (Tokyo Management College, Japan) "A system for the automatic computation of cross sections including SUSY particles" 10:00 - J.-X.Wang (IHEP, Beijing, China) "Automatic calculation of loop processes" 10:30 - break (30 min) SUBGROUP C-3: programs and methods 11:00 - L.Surguladze (Univ. of Oregon, USA) "Computer programs for high order analytical perturbative calculations in high energy physics" 11:30 - A.Grozin (Open Univ., UK) "Multiloop calculations in heavy quark effective theory" 12:00 - J.Gracey (Univ. Durham, UK) "Large N_f methods for computing the perturbative structure of deep inelastic scattering" 5 April (evening) 14:00 - 19:00 Chair: TBA SUBGROUP C-1: Languages and Tools 14:00 - M.Sofroniou (Univ. Bologna - Wolfram Research Ltd.) "Strategies for effective numerical computations using Mathematica" 14:30 - E.Remiddi (Univ. Bologna, Italy) "GOLEM: a language (and a program) for writing (and checking) mathematical proofs" 15:00 - S.Capitani ("La Sapienza", Roma, Italy) "Use of SCHOONSHIP and FORM codes in perturbative lattice calculations" 15:30 - break (30 min) SUBGROUP C-2: symbolic-numeric interface 16:00 - K.Kato (Kogakuin Univ., Tokyo, Japan) "Numerical approach to two-loop integrals with masses" 16:30 - D.Kovalenko (INP MSU, Moscow, Russia) "Automatic generation of kinematics for exclusive high energy collisions", 17:00 - T.Ishikawa (KEK, Japan) "Symbolic code optimization of polynomials" 17:30 - F.Tkachov (INR, Moscow, Russia) "MILXy Way: How much better than VEGAS can one integrate in many dimensions?" ----------------------------------------------------------------------- 6 April (morning) 8:30 - 12:30 Chair: TBA SUBGROUP C-3: methods and algorithms 8:30 - A.Kotikov (LAPP, Annecy, France) "Gegenbauer polynomial technique: the second birth" 9:00 - A.Czarnecki (Univ. Karlsruhe, Germany) "A new method of computing two-loop Feynman integrals with massive particles" 9:30 - L.Avdeev (JINR, Russia) "Recurrence relations for evaluating three-loop vacuum diagrams with a mass" 10:00 - V.Ilyin (INP MSU, Russia) "New method of reducing vacuum multiloop Feynman integrals to master ones" 10:30 - break (30 min) SUBGROUP C-2: Feynman diagram generation 11:00 - T.Kaneko (Meiji-Gakuin Univ., Yokohama, Japan) "A Feynman-graph generator for any order of coupling constants", SUBGROUP C-1: Graphical interface 11:30 - D.Juriev (Ecole Normale Superieure, Paris, France-Russia) "Some aspects of interactive visualization of 2D quantum field theory: algebra, geometry and computer graphics" 12:00 - I.Nikitin (IHEP, Protvino, Russia) "Visual study of complicated phenomena in string theory" 6 April (evening) 14:00 - 19:00 Chair: TBA SUBGROUP C-4: QFT and SUSY etc. 14:00 - C.Schubert (DESY-Zeuthen, Germany) "Programming the string-inspired method: main computational problems" 14:30 - A.Lanyov (JINR, Russia) "Calculation of heat-kernel coefficients and usage of computer algebra" 15:00 - A.Candiello ("Galileo Galilei", Padova, Italy) "WBase: a C package to reduce tensor products of Lie algebra representations" 15:30 - break (30 min) SUBGROUP C-2: Applications 16:00 - N.Nakazawa (Kogakuin Univ., Tokyo, Japan) "Automatic Calculation of 2-loop Weak Corrections to Muon Anomalous Magnetic Moment" 16:30 - O.Tarasov (Biellefeld Univ., Germany - JINR, Russia) "One-loop radiative correction to the process gamma-gamma -> t bar-t" 17:00 - I.Akushevich (Belorussia Univ., Minsk, Belorussia) "The calculation of contribution of double photon bremstrahlung to polarized assymetry by using symbolic manipulation technique" SUBGROUP C-3: results 17:30 - A.Pivovarov (INR, Moscow, Russia) "On the positronium lifetime calculation" 18:00 - A.Davydychev (Bergen Univ., Norway) "New results for two-loop diagrams with massive and massless particles" 18:30 - S.Larin (INR, Moscow, Russia) "Computation of the high order QCD corrections to physical quantities" ----------------------------------------------------------------------- 7 April (morning) 8:30 - 12:30 Chair: TBA SUBGROUP C-3: Methods and algorithms 8:30 - J.Fleischer (Biellefeld Univ., Germany) "Calculation of two-loop vertex functions from their small momentum expansion" 9:00 - D.Kreimer (Univ. Tasmania, Australia) "Feynman diagram calculations - from finite integral representations to knotted infinities" 9:30 - T.van Ritbergen (NIKHEF, The Netherlands) "The calculation of various quantities within perturbation theory at the 3 and 4-loop order" 10:00 - C.Schubert (DESY-Zeuthen, Germany) "Programming the string-inspired method: methods and algorithms for the evaluation of higher order corrections" 10:30 - break (30 min) SUBGROUP C-3: Applications 11:00 - G.Pivovarov (INR, Moscow, Russia) "The gauge for atom-like bound states" 11:30 - P.Baikov (INP MSU, Moscow, Russia) "Three loop vacuum polarization and four loop muon anomalous magnetic moment" 12:00 - N.Ussykina (INP MSU, Moscow, Russia) "Cracking double boxes: a progress report" 7 April (evening) 14:00 - 19:00 Chair: TBA SUBGROUP C-4: other fields 14:30 - E.Wildner (CERN) "Integration of symbolic computing in accelerator control" 15:00 - E.S.Cheb-Terrab (Univ. Rio de Janeiro, Brazil) "A computational strategy for the analytical solving of partial differential equations" 15:30 - break (30 min.) 16:00 - P.Pronin (MSU, Moscow, Russia) "New tensor package for REDUCE" SYMBOLIC SECTION RESUME 16:30 - Summary talk on "Moscow one-day session" Symb. Manip. Section 17:00 - Round Table discussion ========================================================================= SOFTWARE ENGINEERING SESSION ------------------------------------------------------------------------- 4 April Tuesday Afternoon 15:00 - 19:00 Chair: TBA **Object oriented programming and C++** 15:00 - R. Petravick (Fermilab) "Software engineering methods and standards used in the Sloan Digital Sky Survey" ------------------------------------------------------------------------------ 5 April Wednesday Morning 08:30 - 12:30 Chair: TBA **Online applications; Graphics and Interfaces** 08:30 - 09:00 - I. Legrand (CERN) "Design and simulation of the online trigger and reconstruction farm for the HERA-B experiment" 09:30 - J. Cramer (Washington/Max Planck) "SControl, a program for slow control of large physics experiments" 10:00 - C. Maidantchik (CERN/Rio de Janeiro) "Quality assurance on coupling DAQ software modules" 10:30 - Break (30 min.) 11:00 - V. Monich (Novosibirsk) "ZTREE - Data analysis and graphics display system for the CMD-2 detector" 11:30 - Y. Merzlyakov (Novosibirsk) "Software design of a distributed heterogeneous front end DAQ" 12:00 - V. Fine (Dubna) "Using the Windows/NT operating system for HEP applications" ------------------------------------------------------------------------------ 6 April Thursday Morning 08:30 - 12:30 Chair: TBA **Simulation; Code and data management techniques** 08:30 - B. Burow (DESY) "How one operator and hundreds of computers around the world simulate a million ZEUS events per week" 09:00 - C. Bormann (Frankfurt) "A distributed data analysis environment" 09:30 - E. Agterhuis (Utrecht) "Software tools for Microstrip Gas detector simulation" 10:00 - L. Cioni (Pisa) "Co-operative principles in application design" 10:30 - Break (30 min.) 11:00 - S. Cabasino (Pisa) "The Ape Computer Family" 11:45 - B. Burow (DESY) "TDM: A proposed tool to manage data and tasks for a comfortable future of HEP event processing" ------------------------------------------------------------------------------ 7 April Friday Morning 08:30 - 12:30 Chair: TBA **Object Oriented Programming and C++** 08:30 - 09:00 - P. Fuchs (Saturne) "The development of an object-oriented system which integrates simulation, reconstruction and analysis within a common framework 09:30 - J. Carter (CERN) "Experience using formal methods in HEP" 10:00 - N. Piscopo (ARTIS) "An integrated software engineering environment for developing concurrent applications through simulation and automatic code generation" 10:30 - break (30 min) 11:00 - G. Maron (Legnaro) "Experience using an integrated software engineering package for developing the AURIGA antenna data acquisition and analysis system" 11:30 - G. Attardi (Pisa) "The PoSSo Project" 12:00 - V. Talanov (Protvino) " Application of C++ programming principles to geometry description problem in particle transport simulation" 7 April Friday Afternoon 14:00 - 19:00 Chair: TBA **Object Oriented Programming and C++** 14:00 - O. Krivosheev (Tomsk) "Object oriented approach to the design of Monte Carlo code for simulation of electromagnetic showers" 14:30 - O. Krivosheev (Tomsk) "Source viewer and source portability checker - useful tools for developing C++ programs" 15:00 - Break ------------------------------------------------------------------------- ****Software Engineering Posters 4-8 April continuous***** (Posters can be mounted or removed at any time during the workshop. Presenters of posters are encouraged to be present at their posters from 13:30 - 14:00 and from 19:00 - 20:00.) F. Bruyant (CERN) "COMO, an approach for object oriented analysis and design for scientific applications of an algorithmic nature" Th. Kozlowski (LANL) "The use of Shlaer-Mellor object oriented analysis and recursive design in the development of the PHENIX computing systems" M. Marin (Chile) "An event driven simulation environment for hard particle molecular dynamics" ========================================================================= PLENARY SESSION 8 April 09:30 - 12:45 SUMMARY TALKS ========================================================================= 8 April Saturday Morning 09:30 - 12:45 Chair: TBA 09:30 SUMMARY OF AI SESSION - Marcel Kunze (Bochum) 10:30 SUMMARY OF SYMBOLIC - TBA MANIPULATION SESSION 11:30 SUMMARY OF SOFTWARE - P. Murat (INFN Pisa/ITEP Moscow) ENGINEERING SESSION (to be confirmed) 12:30 CLOSING REMARKS - B. Denby (INFN Pisa) Conference Chairman - D. Perret-Gallix (LAPP Annecy) Co-chairman ========================================================================= SPECIAL SEMINARS ========================================================================= 4 April Tuesday Chair: S.R. Amendolia 19:00 - H. E. Miettinen (Rice Univ., USA) "Results of the Top Quark Search from D0" ------------------------------------------------------------------------- 7 April Friday Chair: H. E. Miettinen 19:00 - S.R. Amendolia (INFN Pisa, Italy) "Results of the Top Quark Search from CDF" ========================================================================= INDUSTRY SESSION CONTINUOUS 4-8 APRIL ========================================================================= Confirmed stands: IBM (Rome) Spring (Rome) MACS (Pisa, CNAPS Hardware) Siemens (Munich/Vienna, SYNAPSE Hardware) ========================================================================= --------------------------------- CUT HERE ------------------------------------ REGISTRATION INSTRUCTIONS ------------------------- NOTA BENE: REGISTRATION AND ACCOMMODATION ARE BEING HANDLED BY THE TRE EMME CONGRESSI COMPANY. CALL THEM AT (39)(50)44154 OR FAX (39)(50)500725 FOR ANY QUESTIONS OR PROBLEMS CONCERNING REGISTRATION AND ACCOMMODATION. DO NOT SEND THE FORMS TO THE CONFERENCE CHAIRMAN. Part of your conference fee goes to pay TRE EMME to provide a friendly, helpful service for the conference. Take advantage of it. Following are three separate items: 1) REGISTRATION FORM. ALL PARTICIPANTS MUST SUBMIT THIS FORM. It can be returned by FAX or POST with accompanying payment, by CREDIT CARD, INTERNATIONAL CHEQUE, or EUROCHEQUE. You can also register ON SITE. Those taking the student or EPS member discount should be prepared at arrival to show proof of their status. IN CASE OF ANY PROBLEMS, CALL TRE EMME FOR ASSISTANCE. 2) ACCOMMODATION FORM. PLEASE BOOK YOUR HOTEL EARLY TO AVOID PROBLEMS. It is recommended that all participants make use of the hotels which have been blocked by TRE EMME in order to assure availability of appropriate accommodation. Three classes of hotels are available. A deposit of one night's stay plus Lit. 20.000 is required in order to book a room. This should be sent directly to Tre Emme by international cheque or eurocheque by MARCH 13, 1995 in order to guarantee your reservation. **IF YOU CANNOT GET THE DEPOSIT IN ON TIME, BOOK YOUR ROOM ANYWAY, AND IT WILL BE HELD UNTIL NOON OF YOUR ARRIVAL DAY; YOU MUST THEN INTERACT WITH THE HOTEL (PHONE NUMBER ON YOUR VOUCHER) YOURSELF IF YOU ARRIVE LATER THAN THAT.** The hotels will not accept credit cards for the **deposit**, but final bills (minus deposit) **can** be paid by credit card. IN CASE OF ANY PROBLEMS, CALL OR FAX TRE EMME FOR ASSISTANCE. 3) SUBMISSION OF ABSTRACTS. Submissions are closed, however, papers submitted now may be included if they especially attract the interest of the organizing committee. Late abstracts will not be included in the abstract booklet. --------------------------------- CUT HERE ------------------------------------ IV International Workshop AIHENP Pisa (Italy) April 3-8, 1995 REGISTRATION FORM To be mailed or faxed to: TRE EMME CONGRESSI Via Risorgimento 4, 56126 Pisa (Italy) Tel. +39-50 - 44154/20583 Fax. +39-50 - 500725 Surname .............................First name.............................. Affiliation/Company.......................................................... Address ..................................................................... Postal Code .......... City ...................... Country .................. Tel. ..... / .......................... Fax. ..... / ........................ Fiscal/VAT code for invoice: ................................................ Fees(Incl. 19% VAT) by March 3,1995 Thereafter Standard __ Lit. 425.000 __ Lit. 500.000 Lit. ............ EPS Member __ Lit. 380.000 __ Lit. 455.000 Lit. ............ Student __ Lit. 325.000 __ Lit. 400.000 Lit. ............ Social dinner for __ Lit. 70.000 N. .... places Lit. ............ accompanying person(s) Tot. Lit. ............ _____________________________________________________________________________ Payment __ VISA __ MASTERCARD __ EUROCARD __ CARTASI' - Tot. Lit. ............ Card n. ........................................ Expiry date ................ Cardholder (capital letters) ................................................ I am enclosing __ International Cheque __ Eurocheque for the sum of Lit. ..................... addressed to TRE EMME / AIHENP N.B. Preregistration is strongly encouraged. Date ............................ Signature ................................ --------------------------------- CUT HERE ------------------------------------ IV International Workshop AIHENP Pisa (Italy) April 3-8, 1995 ACCOMMODATION FORM To be mailed or faxed to: TRE EMME CONGRESSI Via Risorgimento 4, 56126 Pisa (Italy) Tel. +39-50 - 44154/20583 Fax. +39-50 - 500725 Surname .............................First name.............................. Home Address ................................................................ Postal Code .......... City ...................... Country .................. Tel. ..... / .......................... Fax. ..... / ........................ Accompanying person(s) ...................................................... Deposit Required(but see NB below): cost of one night plus Lit. 20.000 handling Cat. Hotel Single Double Double for Single **** __ Lit. 220.000 __ Lit. 290.000 __ Lit. 260.000 *** __ Lit. 95.000 __ Lit. 135.000 __ Lit. 110.000 ** __ Lit. 65.000 __ Lit. 100.000 __ Lit. 75.000 Date of arrival ................ Departure .............. tot. nights ....... I wish to share a double room with .......................................... N.B. - Prices include breakfast, service charges, taxes and VAT. - All bedrooms have private shower or bath. - When single rooms no longer available, double for single will be reserved. - The deposit will be deducted from the hotel bill upon display of the voucher sent by TRE EMME CONGRESSI: bills may be settled by credit card. - IF YOU HAVE A PROBLEM GETTING THE DEPOSIT IN ON TIME, BOOK YOUR ROOM ANYWAY. IT WILL BE HELD UNTIL *NOON* OF ARRIVAL DAY. YOU ARE THEN RESPONSIBLE FOR CALLING THE HOTEL (NUMBER ON TRE EMME VOUCHER) TO ARRANGE FOR LATER ARRIVAL IF NECESSARY. - Deposit payment must be performed by International Cheque or Eurocheque made payable to TRE EMME CONGRESSI; bank charges encountered with other forms of payment will be charged to the participant. I am enclosing International/Eurocheque N. ............ for Lit. ............ made payable to TRE EMME CONGRESSI. Date ............................ Signature ................................ --------------------------------- CUT HERE ------------------------------------ SUBMISSION OF ABSTRACTS ----------------------- Submission of abstracts is technically closed. Abstracts already submitted have been reviewed and applicants notified of acceptance. Abstracts may still be considered, up until the date of the workshop; however acceptances will be at the discretion of the Advisory Committee. Late abstracts will not be included in the abstract booklet. Applications should be sent by electronic mail. Applications should include an abstract describing the work done, a title and a list of authors which indicates which one is the contact person. The contact person should include his postal address, electronic mail address, and telephone and fax numbers. THE ABSTRACT PLUS ALL ACCOMPANYING INFORMATION SHOULD FIT ON ONE PAGE. NO SPECIAL FORMAT IS REQUIRED (PLAIN TEXT PREFERRED). All papers accepted for oral or poster presentation will be published by World Scientific in the workshop proceedings, which, as in past workshops, will be a hardcover edition with the workshop logo in color on the cover. Qualifying papers will also be published in a special edition of International Journal of Modern Physics C. Addresses for Submission of ABSTRACTS ------------------------------------- Electronic Submission: AIHENP95 at vaxpia.pi.infn.it -or- DENBY at fnalv.fnal.gov Fax Submission: (39) (50) 880-317 in care of Bruce Denby From giles at research.nj.nec.com Fri Mar 17 18:55:08 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 17 Mar 95 18:55:08 EST Subject: TR Available: Recurrent Neural Networks and Dynamical Systems Message-ID: <9503172355.AA18111@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ "Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches" UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-1 and CS-TR-3396 Peter Tino[1,2], Bill G. Horne[2], C. Lee Giles[2,3] [1] Dept. of Informatics and Computer Systems, Slovak Technical University, Ilkovicova 3, 812 19 Bratislava, Slovakia [2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [3] UMIACS, University of Maryland, College Park, MD 20742 {tino,horne,giles}@research.nj.nec.com We present two approaches to the analysis of the relationship between a recurrent neural network (RNN) and the finite state machine M the network is able to exactly mimic. First, the network is treated as a state machine and the relationship between the RNN and M is established in the context of algebraic theory of automata. In the second approach, the RNN is viewed as a set of discrete-time dynamical systems associated with input symbols of M. In particular, issues concerning network representation of loops and cycles in the state transition diagram of M are shown to provide a basis for the interpretation of learning process from the point of view of bifurcation analysis. The circumstances under which a loop corresponding to an input symbol x is represented by an attractive fixed point of the underlying dynamical system associated with x are investigated. For the case of two recurrent neurons, under some assumptions on weight values, bifurcations can be understood in the geometrical context of intersection of increasing and decreasing parts of curves defining fixed points. The most typical bifurcation responsible for the creation of a new fixed point is the saddle node bifurcation. ------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3396.fsm-rnn.dynamics.systems.ps.Z -------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From B344DSL at UTARLG.UTA.EDU Fri Mar 17 19:18:36 1995 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Fri, 17 Mar 1995 18:18:36 -0600 (CST) Subject: MIND conference at Texas A&M: preliminary announcement Message-ID: <mailman.749.1149540312.24850.connectionists@cs.cmu.edu> Preliminary Announcement and Call for Abstracts Conference on Neural Networks for Novel High-Order Rule Formation Sponsored by Metroplex Institute for Neural Dynamics (MIND) and For a New Social Science (NSS) Texas A&M University, May 20-21, 1995 MIND, a neural networks professional organization based in the Dallas-Fort Worth area, and NSS, a private research foundation based in Coral Springs, Florida, are jointly sponsoring a conference on Neural Networks for Novel High-order Rule Formation. This will partially overlap a conference on Creative Concepts May 19-20 sponsored by the Psychology Department at Texas A&M and the American Psychological Association. This will in turn be preceded by ARMADILLO, the region psychology meeting on Thursday, May 18 (whose registration is free for those attending either Creative Cognition or MIND/NSS). Invited speakers for the MIND/NSS portion include John Taylor (King's College, London); Karl Pribram (Radford University); Risto Miikkulainen (University of Texas); Ramkrishna Prakash (University of Houston); Sam Leven (For a New Social Science); and Daniel Levine (University of Texas at Arlington). There is space for a limited number of contributed talks, for presentation on the Sunday of the conference, and an arbitrary number of posters, to up for the duration of the conference. MIND has sponsored six international conferences, three of which have formed the basis for books (two in print and one now in progress). All but the first have been on focused topics within the neural network field. The topics were chosen for their interest to a broad community, some interested primarily in neurobiology, others in neural theory, and others in engineering applications. These last three topics have been Oscillations in Neural Systems, Optimality in Biological and Artificial Networks?, and Neural Networks for Knowledge Representation and Inference. NSS has co-sponsored two of MIND's conferences. Its purpose is, to quote from its founding statement, "turning the findings and techniques of science to the benefit of social science." It seeks to develop more predictive methodological bases for areas ranging from economics to management theory to social psychology ~ in some cases, to replace foundational assumptions dating from the time of David Hume and Adam Smith, based on a static and unrealistic model of human behavior, with new foundational assumptions that draw on modern knowledge of neuroscience, cognitive science, and neural network theory. This would mean that social scientific models which assume humans always behave rationally will be replaced by models which incorporate emotion, habit, novelty, and ~ particularly relevant for this conference ~ creative intuition. In the words of NSS's original statement: We may find people less rational than we would like them, economic models less precise, survey results less certain. .. We of For a New Social Science seek to find real answers instead of nostrums and mythology. But when we cannot find simple solutions, we choose to see our world plainly and to open our eyes to what we do not know. The theme of this conference will be connectionist modeling of the processes by which complex decision rules are deduced, learned, and encoded. These include, for example, rules that determine, on the basis of some trials, which classes of actions will be rewarded. The myth that neural network methodology is only relevant for low-order pattern processing and not for high-order cognition is rapidly being disproved by recent models. In particular, the 1994 World Congress on Neural Networks included a session on Mind, Brain, and Consciousness, which was one of the most popular and successful sessions of that conference; another such session will be held at the same Congress in 1995. John Taylor has developed a series of models related to consciousness, which is interpreted partly as selective attentional (based in the thalamic reticular nucleus) and partly as comparison of current stimuli with episodic memories of past events (based in the hippocampus). Raju Bapi and Daniel Levine have constructed a network that learns motor sequences and classifies them on the basis of reward. Models have been developed that mimic disruption of specific cognitive tasks by specific mental disorders, among them Alzheimer dementia, autism, depression, and schizophrenia. Sam Leven and Daniel Levine have constructed a neural network that simulates contextual shifts in multiattribute decision making, with specific application to consumer preference for old versus new versions of Coca-Cola. Finally, Haluk Ogmen and Ramkrishna Prakash built on models previously developed by Grossberg and his colleagues to design robots that actively explore their environment under the influence of appetitive and aversive stimuli. All this work paves the way for developing neural network models of creativity and innovation. Part of the creative process involves search for novel high-order rules when current rules fail to predict expected results or to yield expected rewards. This process often requires transfer to a higher level of complexity of analysis. Hence creativity involves what Douglas Hofstadter called a "search of search spaces." Some current models in progress also incorporate knowledge of different brain regions involved in circuits for such a transfer of control. Bapi and Levine discuss the role of the frontal lobes in such a circuit. In the experiments modeled therein, macaque monkeys with prefrontal damage can learn an invariant sequence of motor actions if it is rewarded, but have difficulty learning any one of several reorderings of a sequence (say, ABC, ACB, BAC, BCA, CAB, and CBA) if all are rewarded. This flexible sequence rule is one of many types of complex rules that require intact frontal lobes to be learned effectively. Another is learning to go back and forth on alternate trials between two food trays. Yet another is learning to move toward the most novel object in the environment. Karl Pribram hints that the frontal lobes act in concert with some areas of the limbic system, particularly the hippocampus and amygdala. These theories of specific brain regions are not yet precise or uniquely determined. Neural network models of high-order cognitive processes typically build on network structures that have previously been developed for low-order processes, and may or may not incorporate these neurobiological details. Still, we are now witnessing a dynamic convergence of insights from cognitive neuropsychology along with those from experimental psychology, cognitive science, and neural network theory. This will be the general theme of these two overlapping conferences. Registration for this conference will be $40: registration forms are attached. Those attending the Creative Concepts Conference immediately preceding the MIND/NSS conference will be able to attend for $15. For information about transportation and lodging in College Station, TX (roughly between Austin and Houston) where Texas A&M is located, please contact: Steve Smith Department of Psychology Texas A&M University College Station, TX 77843 409-845-2509 sms at psyc.tamu.edu If you are interested in speaking, please send an abstract by Friday, April 7, to Daniel S. Levine Department of Mathematics University of Texas at Arlington Arlington, TX 76019-0408 817-273-3598 b344dsl at utarlg.uta.edu ------------------------------------------------------------------- -----------------------------------------------------PLEASE RETURN THIS REGISTRATION FORM TO PROF. LEVINE ------------------------------------------------------------------- ----------------------------------------------------- Name ________________________________________ Phone __________________________ Address _________________________________________________________________ _____________ ____________________________________ e-mail ___________________________________ 1. I plan to attend (check all that apply): ARMADILLO ____ Creative Concepts ____ MIND/NSS ____ 2. I would like to present a talk or poster at MIND/NSS ____ From Frank.Kelly at cs.tcd.ie Mon Mar 20 07:18:45 1995 From: Frank.Kelly at cs.tcd.ie (Frank Kelly) Date: Mon, 20 Mar 1995 12:18:45 +0000 (WET) Subject: Connectionist models of Figure-Ground Segregation (Problems?) Message-ID: <mailman.750.1149540312.24850.connectionists@cs.cmu.edu> Hello, I am doing a project on Nonlinear Coupled Oscillators applied to Figure-Ground Segregation. Current models I have examined are included below my mail.sig. Basically the question I would like to pose is the following: Although all of these models 'solve' figure-ground segregation to some degree, can anyone say which model is 'best' and what crtieria can we base this upon? e.g. One of the key criteria for my project is speed, so what I would be interested in knowing is: Which model is fastest and/or does any model approach the speed at which the human visual system segregates figure and ground. Other criteria would be : * Resistance to Noise * Biological Plausibility * Model Complexity (e.g. does the neurons model allow for orientation selectivity, does the model require full connectivity between all nodes) *Use of attentional mechanisms I would appreciate any light people could throw on this subject of finding a 'best' model, especially experimental results/papers. BTW, If anyone knows of any other systems (or has comments to make on any of the above systems) I would be grateful if you could contact me. Many Thanks in advance, --Frank Kelly = Frank.Kelly at cs.tcd.ie | AI group, Dept. of Computer Science, = = Work: +353-1-608 1800 | Trinity College, Dublin 2. Ireland. = = WWW : http://www.cs.tcd.ie/www/kellyfj/kellyfj.html = So far I have found the following systems: -------------------------------------------- [Von der Malsburg & Schneider 86] Von der malsburg, C., and W. Schneider A neural Cocktail-Party Processor in Biological Cybernetics 54, 29-40 (1986) [Von der Malsburg & Buhmann 92] Von der Malsburg, C., and J. Buhmann Sensory Segmentation with coupled neural oscillators in Biological Cybernetics 67, 233-242 (1992) [Sompolinsky et al 90] Sompolinsky, H., Golomb, D., and D. Kleinfeld Global processing of visual stimuli in a neural network of coupled oscillators in Proceedings of the National Academy of Sciences, USA Vol.87, pp.7200-7204, September 1990. [Sejnowski & Hinton 87] Sejnowski, T.J., and G.E. Hinton Separating Figure from Ground with a Boltzmann Machine in (Arbib 87) [Pabst et al. 89] Pabst, M., H.J. Reitboeck, and R. Eckhorn A model of Preattentive region definition based on texture analysis in (Cotterill 89) [Konig et al. 92] Konig, P., Janosch, B., and T.B. Schillen Stimulus-Dependent Assembly Formation of Oscillatory Responses : III. Learning in Neural Computation 4, 666-681 (1992) [Kammen et al. 89] Kammen, D.M., P.J. Holmes, and C. Koch Cortical Architecture and Oscillations in Neuronal Networks : Feedback vs. Local Coupling in (Cotterill 89) [Grossberg & Somers 91] Grossberg, S., and D. Somers Synchronized oscillations during cooperative feature linking in a cortical model of visual perception in Neural Networks Vol. 4 pp. 453-466 [Fellenz 94] Fellenz W.A. A Neural Network for Preattentive Perceptual Grouping in Proceedings of the Irish Neural Networks Conference 1994 Univeristy College Dublin, Sept.12-13, 1994 [Eckhorn et al 89] Eckhorn, R., H.J. Reitboeck, M. Arndt, and P.Dicke A Neural Network for feature linking via synchronous activity in (Cotterill 89) [Yamaguchi & Hiroshi 94] Yamaguchi, Y., and S. Hiroshi Pattern recognition with figure-ground seperation by generation of coherent oscillations in Neural Networks Vol.3, 1994, pp.153-170 [Campbell and Wang 94] Campbell, S., and D. Wang Synchronization and Desynchronization in a Network of Locally Coupled Wilson-Cowan Oscillators in Technical Report OSU-CISRC-8/94-TR43, Lab for AI Research, Dept. of Computer and Information Science and Center for Cognitive Science, The Ohio State University, Columbus, Ohio 43210-1277, USA [Sporns et al. 91] Sporns, O. Tononi, G. and G.M. Edelman Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections in Proc. Natl. Acad. Sci. USA Vol.88 oo.129-133, January 1991 n.b. [Cotterill 89] Cotterill, R.M.J. Models of Brain Function 1989 From David_Redish at GS151.SP.CS.CMU.EDU Sat Mar 18 12:13:05 1995 From: David_Redish at GS151.SP.CS.CMU.EDU (David_Redish@GS151.SP.CS.CMU.EDU) Date: Sat, 18 Mar 95 12:13:05 EST Subject: Paper available: Navigating with Landmarks Message-ID: <mailman.751.1149540312.24850.connectionists@cs.cmu.edu> The following paper is now available electronically (via the Web) "Navigating with Landmarks: Computing Goal Locations from Place Codes" A. David Redish and David S. Touretzky Carnegie Mellon University to appear in _Symbolic Visual Learning_, K. Ikeuchi and M. Veloso, eds., Oxford University Press. A computer model of rodent navigation, based on coupled mechanisms for place recognition, path integration, and maintenance of head direction, offers a way to operationally combine constraints from neurophysiology and behavioral observation. We describe how one such model reproduces a variety of experiments by Collett, Cartwright, and Smith (J. Comp Phys. A 158:835-851) in which gerbils learn to find a hidden food reward, guided by an array of visual landmarks in an open arena. We also describe some neurophysiological predictions of the model; these may soon be verified experimentally. Portions of the model have been implemented on a mobile robot. ------------------------------------------------------------ gzipped: http://www.cs.cmu.edu:8001/Web/People/dredish/pub/vislearn-web.ps.gz unix compressed: http://www.cs.cmu.edu:8001/Web/People/dredish/pub/vislearn-web.ps.Z For other papers of ours, see http://www.cs.cmu.edu:8001/Web/People/dredish/bibliography.html ------------------------------------------------------------ Notes: This paper contains large compressed postscript figures and may take a long time to print out on some printers. This paper will sometimes produce an "unable to uncompress file" error, however, my experience has been that this is a spurious warning and the paper uncompresses correctly. Any problems, contact David Redish dredish at cs.cmu.edu From tony at salk.edu Wed Mar 22 20:20:58 1995 From: tony at salk.edu (Tony Bell) Date: Wed, 22 Mar 95 17:20:58 PST Subject: short TR on noisy neurons Message-ID: <9503230120.AA04504@salk.edu> ---------------------------------- FTP-host: ftp.salk.edu FTP-file: pub/tony/bell.noisy.ps.Z ---------------------------------- The following (short) technical report is ftp-able from the Salk Institute. The file is called bell.noisy.ps.Z, it is 0.65 Mbytes compressed, 1.9 Mbytes uncompressed, and 10 pages long (4 figures). It describes work presented at the Computation and Neural Systems 1994 meeting (CNS '94), but which was late for inclusion in the Proceedings. ----------------------------------------------------------------------- Technical Report no. INC-9502, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523 `BALANCING' OF CONDUCTANCES MAY EXPLAIN IRREGULAR CORTICAL SPIKING. Anthony J. Bell, Zachary F. Mainen, Misha Tsodyks & Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute 10010 N. Torrey Pines Road La Jolla, California 92037 ABSTRACT Five related factors are identified which enable single compartment Hodgkin-Huxley model neurons to convert random synaptic input into irregular spike trains similar to those seen in {\em in vivo} cortical recordings. We suggest that cortical neurons may operate in a narrow parameter regime where synaptic and intrinsic conductances are balanced to reflect, through spike timing, detailed correlations in the inputs. ----------------------------------------------------------------------- Can be obtained via ftp as follows: unix> ftp ftp.salk.edu (or 198.202.70.34) (log in as "anonymous", e-mail address as password) ftp> binary ftp> cd pub/tony ftp> get bell.noisy.ps.Z ftp> quit unix> uncompress bell.noisy.ps.Z unix> lpr bell.noisy.ps From hali at sans.kth.se Wed Mar 22 17:14:46 1995 From: hali at sans.kth.se (Hans Liljenstrom) Date: Wed, 22 Mar 1995 23:14:46 +0100 Subject: Workshop on Fluctuations in Biology Message-ID: <199503222214.AA07496@thalamus.sans.kth.se> ********************************************************************** First announcement of an interdisciplinary workshop organized in collaboration with the Swedish Council for Planning and Coordination of Research (FRN) THE ROLE AND CONTROL OF RANDOM EVENTS IN BIOLOGICAL SYSTEMS Sigtuna, Sweden 4-9 September 1995 MOTIVATION Life is normally associated with a high degree of order and organization. However, disorder in various contexts referred to as fluctuations, noise or chaos is also a crucial component of many biological processes. For example, in evolution random errors in the reproduction of the genetic material provides a variation that is fundamental for the selection of adaptive organisms. At a molecular level, thermal fluctuations govern the movements and functions of the macromolecules in the cell. Yet, it is also clear that too large a variation may have disastrous effects. Uncontrolled processes need stabilizing mechanisms. More knowledge of the stability requirements of biological processes is needed in order to better understand these problems, which also have important medical applications. Many diseases, for instance certain degenerations of brain cells, are caused by failure of the stabilizing mechanisms in the cell. Stability is also important and difficult to achieve in biotechnological applications. In particular, there is randomness in structure and function of the neural networks of the brain. Spontaneous firing of neurons seems to be important for maintaining an adequate level of activity, but does this "neuronal noise" have any other significance? What are the effects of errors and fluctuations in the information processing of the brain? Can these microscopic fluctuations be amplified to provide macroscopic effects? Often, one cannot easily determine whether an apparently random process is due to noise, governed by uncontrolled degrees of freedom, or if it is a result of "deterministic chaos". Would the difference be of any importance for biology? Especially, could chaos, which is characterized by sensitivity and divergence, be useful for any kind of information processing that normally depends upon stability and convergence? Could chaos in the neural dynamics of the brain perhaps be responsible for (creative) thinking? OBJECTIVE The objective of this meeting is to address the questions and problems given above, for a deeper understanding of the effects of disorder in biological systems. Fluctuations and chaos have been extensively studied in physics, but to a much lesser degree in biology. Important concepts from physics, such as "noise-induced state transitions" and "controlled chaos" could also be of relevance for biological systems. Yet, little has been done about such applications and a more critical analysis of the positive and negative effects of disorder for living systems is needed. It is essential to make concrete and testable hypotheses, and to avoid the kind of superficial and more fashionable treatment that often dominates the field. By bringing together scientists with knowledge and insights from different disciplines we hope to shed more light on these problems, which we think are profound for understanding the phenomenon of life. SCOPE A number of invited speakers will provide presentations on the fundamental problems, but we invite further contributions, in the form of short lectures, computer demonstrations and posters by additional participants. We expect everyone to take an active part in the program, in particular in the general discussions. In order to maintain close contact between all participants, and to provide an efficient workshop atmosphere, the number of participants will be limited to approximately fifty people. A proceedings volume is planned. LOCATION The location of the workshop will be at a unique guest home in Sigtuna, a royal town in early Middle Ages. Situated at the shore of the beautiful lake Malaren, Sigtuna is only 15 km away from the Stockholm Intl. Airport and 45 km from downtown Stockholm. It is also close to the city of Uppsala, which is famous for its Viking graves and for the oldest university and largest cathedral in Scandinavia. The area around Sigtuna is full of cultural and historical sites and the great number of runic stones is unique in the world. There will be excursions and opportunities for sightseeing. The total cost, including accomodation, all meals and registration fee is 4500 SEK. Depending on funding availability, we may be able to give some economical support. ORGANIZING COMMITTEE Clas Blomberg, Dept. of Physics, Royal Institute of Technology, Stockholm Hans Liljenstrom, Dept. of Comp. Sci., Royal Institute of Technology, Stockholm Peter Arhem, Nobel Inst. for Neurophysiology, Karolinska Institutet, Stockholm CONFIRMED INVITED SPEAKERS Luigi Agnati, Dept. of Neuroscience, Karolinska Inst., Stockholm, Sweden Agnes Babloyantz, Dept of Chem. Physics, Free University of Brussels, Belgium Adi Bulsara, NRad, San Diego, USA Rodney Cotterill, Div. of Biophysics, Technical Univ. of Denmark Walter Freeman , Dept. of Molecular and Cell Biology, UC Berkeley, USA Hermann Haken, Inst. f. Theor. Physik und Synergetik, Univ. Stuttgart, Germany Christof Koch, Computation and Neural Systems Program, Caltech, Pasadena, USA Larry Liebovitch, Center for Complex Systems, FAU, Boca Raton, USA Michael Mackey, Dept. of Physiology, McGill University, Montreal, Canada Frank Moss, Dept. of Physics, University of Missouri, St Louis, USA Sakire Pogun, Center for Brain Research, Ege University, Izmir, Turkey Ichiro Tsuda, Dept. of Mathematics, Hokkaido University, Sapporo, Japan FURTHER INFORMATION Hans Liljenstrom SANS - Studies of Artifical Neural Systems Dept. of Numerical Analysis and Computing Science Royal Institute of Technology S-100 44 Stockholm, SWEDEN Email: hali at sans.kth.se Phone: +46-(0)8-790 6909 Fax: +46-(0)8-790 0930 ======================================================================== If you are interested in participating in this workshop, please fill in and return the preliminary registration form below: ------------------------------------------------------------------------ Name: Address: Student (yes/no): Willing to contribute with a presentation (yes/no): Preliminary title/subject: ------------------------------------------------------------------------ From zbyszek at uncc.edu Fri Mar 24 09:00:17 1995 From: zbyszek at uncc.edu (Zbigniew Michalewicz) Date: Fri, 24 Mar 1995 09:00:17 -0500 Subject: 3rd IEEE ICEC '96 call for papers Message-ID: <199503241400.JAA17376@unccsun.uncc.edu> ------------------------ CALL FOR PAPERS ------------------------------------ 1996 IEEE International Conference on Evolutionary Computation (ICEC'96) Nagoya, Japan, May 20-22, 1996 3rd IEEE ICEC'96 is co-sponsored by IEEE Neural Network Council (NNC) and Society of Intrumentation and Control Engineers (SICE). 3rd IEEE ICEC'96 will be organized in conjunction with the conference of Artificial Life (Kyoto, JAPAN, May 16-18, 1996). TOPICS: Theory of evolutionary computation Applications of evolutionary computation Efficiency / robustness comparisons with other direct search algorithms Parallel computer implementations Artificial life and biologically inspired evolutionary computation Evolutionary algorithms for computational intelligence Comparisons between difference variants of evolutionary algorithms Machine learning applications Genetic algorithm and selforganization Evolutionary computation for neural networks Fuzzy logic in evolutionary algorithms SUBMISSION PROCEDURE: Prospective authors are invited to submit papers related to the listed topics for oral or poster presentation. Five (5) copies of the paper must be submitted for review. Papers should be printed on letter size white paper, written in English in two-column format in Times or similar font style, 10 points or larger with 2.5 cm margins on all four sides. A length of four pages is encouraged, and a limit of six pages, including figures, tables and references will be enforced. Centered at the top of the first page should be the complete title of the paper and the name(s), affiliation(s) and address(es) of the author(s). All papers (except those submitted for special sessions - which may have different deadlines - see information on special sessions below) should be sent to: Toshio Fukuda, General Chair Nagoya University Dept. of Micro System Engineering and Dept. of Mechano-Informatics and Systems Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN Phone: +81-52-789-4478 Fax: +81-52-789-3909 E-mail: fukuda at mein.nagoya-u.ac.jp IMPORTANT DATES: Proposal for tutorial/exhibits November 15, 1995 Submission of Papers (except for special sessions) December 20, 1995 Notification of acceptance February 20, 1996 Submission of camera-ready papers April 10, 1996 Program Co-chairs: Thomas Baeck Informatik Centrum Dortmund (ICD) baeck at ls11.informatik.uni-dortmund.de Hiroaki Kitano Sony Computer Science Laboratory kitano at csl.sony.co.jp Zbigniew Michalewicz University of North Carolina - Charlotte zbyszek at uncc.edu There are several special sessions organized for the 3rd IEEE ICEC '96; so far these include: ********************************************************************* "Constrained Optimization, Constraint Satisfaction and EC" ********************************************************************* Evolutionary Computation has proved its merit in treating difficult problems in, for example, numerical optimization and machine learning. Nevertheless, problems where constraints on the search space (i.e., on the candidate solutions) play an important role have received relatively little attention. In real-world problems, however, the presence of constraints seems to be rather the rule than the exception. The class of constrained problems can be divided into Constraint Satisfaction Problems (CSP) and Constrained Optimization Problems (COP). This special session addresses both subclasses, and aims to explore the extent to which EC can usefully tackle problems of these kinds. The session is organized by Gusz Eiben, chair (Utrecht University, gusz at cs.ruu.nl) Dave Corne (University of Edinburgh,dave at aifh.ed.ac.uk) Jurgen Dorn (Technical University of Vienna, dorn at vexpert.dbai.tuwien.ac.at) Peter Ross (University of Edinburgh, peter at aisb.ed.ac.uk) Submission: Four (4) copies of complete (6 pages maximum) papers, preferably in PostScript form, should be submitted no later than December 15, 1995 to: A.E. Eiben | email: gusz at cs.ruu.nl Department of Computer Science | Utrecht University | Phone: +31-(0)30-533619 P.O.Box 80089 | 3508 TB Utrecht | Fax: +31-(0)30-513791 The Netherlands | All papers will be reviewed, and authors will be notified of the inclusion of their papers in the special session by February 15, 1996. Any questions regarding this special session can be directed to any of the organizers. ********************************************************************* "Evolutionary Artificial Neural Networks" ********************************************************************* Evolutionary Artificial Neural Networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search algorithms. Three levels of evolution in EANNs have been studied recently, i.e., the evolution of connection weights, architectures, and learning rules. Major issues in the research of EANNs include their scalability, generalisation ability and interactions among different levels of evolution. This special session will serve as a forum for both researchers and practitioners to discuss these important issues and exchange their latest research results/ideas in the area. This special session is organized by X. Yao (xin at cs.adfa.oz.au). Prospective authors are invited to submit four (4) copies of their papers to the following address no later than 20 December 1995. (Please do not include author's information, e.g., name and address, in three of four submitted copies): Xin Yao Department of Computer Science University College, The University of New South Wales Australian Defence Force Academy Canberra, ACT 2600, Australia Ph: +61 6 268 8819 Fax: +61 6 268 8581 Email: xin at csadfa.cs.adfa.oz.au All papers will be reviewed. Notification of acceptance/rejection will be sent out by 20 February 1996. The camera-ready copy must be submitted by 10 April 1996 for inclusion in the conference proceedings. ********************************************************************* "Evolutionary Robotics and Automation" ********************************************************************* More and more researchers are applying evolutionary computation techniques to challenging problems in robotics and automation, where classical methods fail to be effective. In addition to being vastly applicable to many hard problems, evolutionary concepts inspire many researchers as well as users to be fully creative in inventing their own versions of evolutionary algorithms for the specific needs of different domains of problems. This special session serves as a forum for exchanging research results in this growing interdisciplinary area and for encouraging further exploration of the fusion between evolutionary computation and intelligent robotics and automation. This special session is organized by J. Xiao (xiao at uncc.edu). Four (4) copies of complete (6 pages maximum) papers should be submitted no later than December 15, 1995 to: Jing Xiao Department of Computer Science University of North Carolina - Charlotte Charlotte, NC 28223 Phone: (704) 547-4883 Fax: (704) 547-3516 E-mail: xiao at uncc.edu All papers will be reviewed, and authors will be notified of the inclusion of their papers in the special session by February 15, 1996. Any questions regarding this special session should be directed to J. Xiao at the above address. ********************************************************************* "Genetic programming" ********************************************************************* The goal of automatic programming is to create, in an automated way, a computer program that enables a computer to solve a problem. Genetic programming extends the genetic algorithm to the domain of computer programs. In genetic programming, populations of program are genetically bred to solve problems. Genetic programming is a domain-independent method for evolving computer programs that solves, or approximately solves, a variety of problems from a variety of fields, including many benchmark problems from machine learning and artificial intelligence such as problems of control, robotics, optimization, game playing, and symbolic regression (i.e., system identification, concept learning). Early versions of genetic programming evolved programs consisiting of only a single part (i.e., one main program). The session is organized by John R. Koza, Stanford University (Koza at Cs.Stanford.Edu), Lee Spector, Hampshire College (LSPECTOR at hampshire.edu), and Yuji Sato, Hitachi Ltd. Central Research Lab. (yuji at crl.hitachi.co.jp). Prospective authors are encouraged to submit four (4) hard copies of their papers (6 pages maximum) to be received by Friday December 15, 1995 to: John R. Koza Computer Science Department Margaret Jacks Hall Stanford University Stanford, California 94305-2140 USA PHONE: 415-723-1517 FAX(Not for paper submission): 415-941-9430 E-MAIL: Koza at Cs.Stanford.Edu All papers will be reviewed and authors will be notified about acceptance/rejection by about Wednesday, February 15, 1996. ********************************************************************* "Self-adaptation in evolutionary algorithms" ********************************************************************* Evolutionary algorithms (EAs) with the ability to adapt internal strategic parameters (like population size, mutation distribution, type of recombination operator, selective pressure etc.) during the search process usually find better solutions than variants with fixed strategic parameters. Self-adaptation is very useful if different (fixed) parameter settings produce large differences in the solution quality of the algorithm. Most experiences are available for (real-coded) EAs whose individuals adapt their mutation distributions (or step sizes). Here, the property to adjust the step size is induced by competetive pressure among individuals. Evidently, self-adapting mechanisms can be realized by competing subpopulations as well. The potential of those EAs is essentially unexplored. This special session is organized by Guenter Rudolph (rudolph at ls11.informatik.uni-dortmund.de) and is intended to serve as a forum to discuss new ideas and to address the question of a theoretical treatment of self-adapting mechanisms. Four (4) copies of complete papers (6 pages maximum) should be submitted no later than December 15, 1995 to: Guenter Rudolph ICD Informatik Centrum Dortmund e.V. Joseph-von-Fraunhofer-Str. 20 D-44227 Dortmund Germany Phone : +49 - (0)231 - 9700 - 365 Fax : +49 - (0)231 - 9700 - 959 E-mail: rudolph at ls11.informatik.uni-dortmund.de All papers will be reviewed. Authors will be notified of acceptance/rejection by February 15, 1996. ********************************************************************* "Evolutionary algorithms and fuzzy systems" ********************************************************************* Fuzzy sets (FS) and evolutionary algorithms have been already successfully applied to many areas including fuzzy control and fuzzy clustering. There are a number of facets of symbiosis between the technologies of FS and GA. On one hand evolutionary computation enriches the optimization environment for fuzzy systems. On the other, fuzzy sets supply a new macroscopic and domain-specific insight into the fundamental mechanisms of evolutionary algorithms (including fuzzy crossover, fuzzy reproduction, fuzzy fitness function, etc.). The objective of this session is to foster further interaction between researchers actively engaged in FS and GAs. The session will provide a broad forum for exchanging ideas between academe and industry and discussing recent pursuits in the area. This special session is organized by Witold Pedrycz (pedrycz at ee.umanitoba.ca). Prospective authors are encouraged to submit four (4) copies of their papers (6 pages maximum) by December 15, 1995 to: Witold Pedrycz Department of Electrical and Computer Engineering University of Manitoba Winnipeg Canada RT 2N2 Phone : (204) 474-8380 Fax: (204) 261-4639 E-mail: pedrycz at ee.umanitoba.ca All papers will be reviewed and authors will be notified about acceptance/rejection by February 15, 1996. ********************************************************************* ********************************************************************* The deadline for proposals for organizing a special session during the 3rd IEEE ICEC '96 is 20 August 1995; submit your proposal to any Program Co-chair. From omlinc at research.nj.nec.com Fri Mar 24 13:10:55 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Fri, 24 Mar 95 13:10:55 EST Subject: TR available - fault-tolerant recurrent neural networks Message-ID: <9503241810.AA04631@arosa> The following Technical Report is available via the NEC Research Institute archives: __________________________________________________________________________________ Fault-Tolerant Implementation of Finite-State Automata in Recurrent Neural Networks RENSSELAER POLYTECHNIC INSTITUTE DEPT. OF COMPUTER SCIENCE TR CS 95-3 C.W. Omlin[1,2], C.L. Giles[1,3] [1]NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [2]CS Department, Rensselaer Polytechnic Institute, Troy, NY 12180 [3]UMIACS, University of Maryland, College Park, MD 20742} {omlinc,giles}@research.nj.nec.com ABSTRACT Recently, we have proven that the dynamics of any deterministic finite-state automaton (DFA) with n states and m input symbols can be implemented in a sparse second-order recurrent neural network (SORNN) with n+1 state neurons, O(mn) second-order weights and sigmoidal discriminant functions. We investigate how that constructive algorithm can be extended to fault-tolerant neural DFA implementations where faults in an analog implementation of neurons or weights do not affect the desired network performance. We show that tolerance to weight perturbation can be achieved easily; tolerance to weight and/or neuron stuck-at-zero faults, however, requires duplication of the network resources. This result has an impact on the construction of neural DFAs with a dense internal representation of DFA states. __________________________________________________________________________________ http://www.neci.nj.nec.com/homepages/omlin/omlin.html or ftp://ftp.nj.nec.com/pub/omlinc/fault_tolerance.ps.Z __________________________________________________________________________________ From pja at barbarian.endicott.ibm.com Fri Mar 24 13:19:31 1995 From: pja at barbarian.endicott.ibm.com (Peter J. Angeline) Date: Fri, 24 Mar 1995 13:19:31 -0500 Subject: CFP for 5th Annual Conference on Evolutionary Programming Message-ID: <9503241819.AA07491@barbarian.endicott.ibm.com> --------------------------- CALL FOR PAPERS ------------------------------ EP'96 THE FIFTH ANNUAL CONFERENCE ON EVOLUTIONARY PROGRAMMING SPONSORED BY THE EVOLUTIONARY PROGRAMMING SOCIETY February 29 to March 3, 1996 Sheraton Harbor Island Hotel San Diego, CA, USA General Chairman: Lawrence J. Fogel, Natural Selection, Inc. Technical Program Co-Chairs: Peter J. Angeline, Loral Federal Systems Thomas Baeck, Informatik Centrum Dortmund Thomas M. English, Texas Tech University The Fifth Annual Conference on Evolutionary Programming will serve as a forum for researchers investigating applications and theory of evolutionary programming and other related areas in evolutionary and natural computation. Authors are invited to submit papers which describe original unpublished research in evolutionary programming, evolution strategies, genetic algorithms and genetic programming, artificial life, cultural algorithms, and other models that rely on evolutionary principles. Specific topics include but are not limited to the use of evolutionary simulations in optimization, neural network training and design, automatic control, image processing, and other applications, as well as mathematical theory or empirical analysis providing insight into the behavior of such algorithms. Of particular interest are applications of simulated evolution to problems in biology. Hardcopies of manuscripts must be received by one of the technical program co-chairs by September 26, 1995. Electronic submissions cannot be accepted. Papers should be clear, concise, and written in English. Papers received after the deadline will be handled on a time- and space-available basis. The notification of the program committee's review decision will be mailed by November 30, 1995. Papers eligible for the student award must be marked appropriately for consideration (see below). Camera ready papers are due at the conference, and will be published shortly after its completion. Submissions should be single-spaced, 12 pt. font and should not exceed 15 pages including figures and references. Send five (5) copies of the complete paper to: In Europe: Thomas Baeck Informatik Centrum Dortmund Joseph-von-Fraunhofer-Str. 20 D-44227 Dortmund Germany Email: baeck at home.informatik.uni-dortmund.de In US: Peter J. Angeline Loral Federal Systems 1801 State Route 17C Mail Drop 0210 Owego, NY 13827 Email: pja at lfs.loral.com -or- Thomas M. English Computer Science Department Texas Tech University Lubbock, Texas 79409-3104 Email: english at cs.ttu.edu Authors outside Europe or the United States may send their paper to any of the above technical chairmen at their convenience. SUMMARY OF IMPORTANT DATES -------------------------- September 26, 1995 Submissions of papers November 30, 1995 Notification sent to authors February 29, 1996 Conference Begins Evolutionary Programming Society Award for Best Student Paper ------------------------------------------------------------- In order to foster student contributions and encourage exceptional scholarship in evolutionary programming and closely related fields, the Evolutionary Programming Society awards one exceptional student paper submitted to the Annual Conference on Evolutionary Programming. The award carries a $500 cash prize and a plaque signifying the honor. To be eligible for the award, all authors of the paper must be full-time students at an accredited college, university or other educational institution. Submissions to be considered for this award must be clearly marked at the top of the title page with the phrase "CONSIDER FOR STUDENT AWARD." In addition, the paper should be accompanied by a cover letter stating that (1) the paper is to be considered for the student award (2) all authors are currently enrolled full-time students at a university, college or other educational institution, and (3) that the student authors are responsible for the work presented. Only papers submitted to the conference and marked as indicated will be considered for the award. Late submissions will not be considered. Officers of the Evolutionary Programming Society, students under their immediate supervision, and their immediate family members are not eligible. Judging will be made by officers of the Evolutionary Programming Society or by an Awards Committee appointed by the president. Judging will be based on the perceived technical merit of the student's research to the field of evolutionary programming, and more broadly to the understanding of self-organizing systems. The Evolutionary Programming Society and/or the Awards Committee reserves the right not to give an award in any year if no eligible student paper is deemed to be of award quality. Presentation of the Student Paper Award will be made at the conference. Program Committee: J. L. Breeden, Santa Fe Institute M. Conrad, Wayne State University K. A. De Jong, George Mason University D. B. Fogel, Natural Selection, Inc. G. B. Fogel, University of California at Los Angeles R. Galar, Technical University of Wroclaw P. G. Harrald, University of Manchester Institute of Science and Technology K. E. Kinnear, Adaptive Systems J. R. McDonnell, Naval Command Control and Ocean Surveillance Center Z. Michalewicz, University of North Carolina F. Palmieri, University of Connecticut R. G. Reynolds, Wayne State University S. H. Rubin, Central Michigan University G. Rudolph, University of Dortmund N. Saravanan, Ford Research H.-P. Schwefel, University of Dortmund A. V. Sebald, University of California at San Diego W. M. Spears, Naval Research Labs D. E. Waagen, TRW Systems Integration Group Finance Chair: V. W. Porto, Orincon Corporation Local Arrangements: W. Page, Naval Command Control and Ocean Surveillance Center From duff at wrath.cs.umass.edu Fri Mar 24 17:06:59 1995 From: duff at wrath.cs.umass.edu (duff@wrath.cs.umass.edu) Date: Fri, 24 Mar 1995 17:06:59 -0500 Subject: Tech Rept: Q-learning for Bandit Problems Message-ID: <9503242206.AA04229@wrath.cs.umass.edu> The following technical report is available via anonymous ftp: Q-LEARNING FOR BANDIT PROBLEMS (COMPSCI Technical Report 95-26) Michael Duff Department of Computer Science University of Massachusetts Amherst, MA 01003 duff at cs.umass.edu Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDP's) with potentially very large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty ago by Gittins [Gittins \& Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping problems whose solutions determine the optimal policy through the so-called ``Gittins indices.'' Katehakis and Veinott [Katehakis \& Veinott, 1987] have shown that the Gittins index for a task in state $i$ may be interpreted as a particular component of the maximum-value function associated with the ``restart-in-$i$'' process, a simple MDP to which standard solution methods for computing optimal policies, such as successive approximation, apply. This paper explores the problem of learning the Gittins indices on-line without the aid of a process model; it suggests utilizing task-state-specific Q-learning agents to solve their respective restart-in-state-$i$ subproblems, and includes an example in which the online reinforcement learning approach is applied to a simple problem of stochastic scheduling---one instance drawn from a wide class of problems that may be formulated as bandit problems. FTP-host: envy.cs.umass.edu FTP-file: pub/duff/bandit.ps.Z 18 MBytes compressed / .46 MBytes uncompressed / 32 pages (8 figures) FTP Instructions: unix> ftp envy.cs.umass.edu login: anonymous password: (your email address) ftp> cd pub/duff ftp> binary ftp> get bandit.ps.Z ftp> quit unix> uncompress bandit.ps.Z unix> lpr bandit.ps From rafal at mech.gla.ac.uk Fri Mar 24 07:04:20 1995 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Fri, 24 Mar 1995 12:04:20 GMT Subject: Workshop on Neurocontrol Message-ID: <10527.199503241204@gryphon.mech.gla.ac.uk> Neural Adaptive Control Technology Workshop: NACT I 18--19 May, 1995 University of Glasgow, Scotland, UK The first of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on May 18--19, 1995 in Glasgow, Scotland. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland. The project is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed will be evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The project leader is Dr.~Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow). Call for Participation, Provisional Programme, registration form and hotel booking can be found as the PostScript files: call.ps Call for Participation proviso.ps Provisional Programme register.ps registration & hotel on the servers detailed below. FTP server ^^^^^^^^^^ anonymous FTP to: ftp.mech.gla.ac.uk (130.209.12.14) directory: nact World-Wide Web server ^^^^^^^^^^^^^^^^^^^^^ http://www.mech.gla.ac.uk/~nactftp/nact.html WWW server provides a link to the FTP server. Rafal Zbikowski Control Group, Department of Mechanical Engineering, Glasgow University, Glasgow G12 8QQ, Scotland, UK rafal at mech.gla.ac.uk From john at dcs.rhbnc.ac.uk Sat Mar 25 11:56:08 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Sat, 25 Mar 95 16:56:08 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199503251656.QAA21004@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): several new reports available ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-018: ---------------------------------------- On the Complexity of Function Learning by Peter Auer, Technische Universitaet Graz, Philip M. Long, Duke University, Wolfgang Maass, Technische Universitaet Graz, Gerhard J. Woeginger, Technische Universitaet Graz Abstract: The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range $\{ 0,1 \}$. Much less is known about the theory of learning functions with a larger range such as N or R. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any ``learning'' model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from $R^d$ into $R$. Previously, the class of linear functions from $R^d$ into $R$ was the only class of functions with multi-dimensional domain that was known to be learnable within the rigorous framework of a formal model for on-line learning. Finally we give a sufficient condition for an arbitrary class $\F$ of functions from $R$ into $R$ that allows us to learn the class of all functions that can be written as the pointwise maximum of $k$ functions from $\F$. This allows us to exhibit a number of further nontrivial classes of functions from $R$ into $R$ for which there exist efficient learning algorithms. ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-019: ---------------------------------------- Neural Nets with Superlinear VC-Dimension by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Abstract: It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most $O(w \cdot \log w)$, where $w$ is the total number of weights in the neural net. We show in this paper that this bound is in fact asymptotically optimal. More precisely, we exhibit for any depth $d\geq 3$ a large class of feedforward neural nets of depth $d$ with $w$ weights that have VC-dimension $\Omega(w\cdot \log w)$. This lower bound holds even if the inputs are restricted to boolean values. The proof of this result relies on a new method that allows us to encode more ``program-bits'' in the weights of a neural net than previously thought possible. ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-020: ---------------------------------------- Efficient Agnostic PAC-Learning with Simple Hypotheses by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Abstract: We exhibit efficient algorithms for agnostic PAC-learning with rectangles, unions of two rectangles, and unions of $k$ intervals as hypotheses. These hypothesis classes are of some interest from the point of view of applied machine learning, because empirical studies show that hypotheses of this simple type (in just one or two of the attributes) provide good prediction rules for various real-world classification problems. In addition, optimal hypotheses of this type may provide valuable heuristic insight into the structure of a real-world classification problem. The algorithms that are introduced in this paper make it feasible to compute optimal hypotheses of this type for a training set of several hundred examples. We also exhibit an approximation algorithm that can compute near optimal hypotheses for much larger datasets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-002: ---------------------------------------- Agnostic PAC-Learning of Functions on Analog Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria Abstract: We consider learning on multi-layer neural nets with piecewise polynomial activation functions and a fixed number $k$ of numerical inputs. We exhibit arbitrarily large network architectures for which efficient and provably successful learning algorithms exist in the rather realistic refinement of Valiant's model for probably approximately correct learning (``PAC-learning'') where no a-priori assumptions are required about the ``target function'' (agnostic learning), arbitrary noise is permitted in the training sample, and the target outputs as well as the network outputs may be arbitrary reals. The number of computation steps of the learning algorithm LEARN that we construct is bounded by a polynomial in the bit-length $n$ of the fixed number of input variables, in the bound $s$ for the allowed bit-length of weights, in $\frac{1} {\varepsilon}$, where $\varepsilon$ is some arbitrary given bound for the true error of the neural net after training, and in $\frac{1}{\delta}$ where ${\delta}$ is some arbitrary given bound for the probability that the learning algorithm fails for a randomly drawn training sample. However the computation time of LEARN is exponential in the number of weights of the considered network architecture, and therefore only of interest for neural nets of small size. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-003: ---------------------------------------- Perspectives of Current Research about the Complexity of Learning on Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria Abstract: This paper discusses within the framework of computational learning theory the current state of knowledge and some open problems in three areas of research about learning on feedforward neural nets: \begin{itemize} \item[--]Neural nets that learn from mistakes \item[--]Bounds for the Vapnik-Chervonenkis dimension of neural nets \item[--]Agnostic PAC-learning of functions on neural nets. \end{itemize} All relevant definitions are given in this paper, and no previous knowledge about computational learning theory or neural nets is required. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-005: ---------------------------------------- Simulating Access to Hidden Information while Learning by Peter Auer, Technische Universit\"{a}t Graz, Philip M. Long, Duke University Abstract: We introduce a new technique which enables a learner without access to hidden information to learn nearly as well as a learner with access to hidden information. We apply our technique to solve an open problem of Maass and Tur\'{a}n, showing that for any concept class $F$, the least number of queries sufficient for learning $F$ by an algorithm which has access only to arbitrary equivalence queries is at most a factor of $1/\log_2 (4/3)$ more than the least number of queries sufficient for learning $F$ by an algorithm which has access to both arbitrary equivalence queries and membership queries. Previously known results imply that the $1/\log_2 (4/3)$ in our bound is best possible. We describe analogous results for two generalizations of this model to function learning, and apply those results to bound the difficulty of learning in the harder of these models in terms of the difficulty of learning in the easier model. We bound the difficulty of learning unions of $k$ concepts from a class $F$ in terms of the difficulty of learning $F$. We bound the difficulty of learning in a noisy environment for deterministic algorithms in terms of the difficulty of learning in a noise-free environment. We apply a variant of our technique to develop an algorithm transformation that allows probabilistic learning algorithms to nearly optimally cope with noise. A second variant enables us to improve a general lower bound of Tur\'{a}n for the PAC-learning model (with queries). Finally, we show that logarithmically many membership queries never help to obtain computationally efficient learning algorithms. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-006: ---------------------------------------- A Stop Criterion for the Boltzmann Machine Learning Algorithm by Berthold Ruf, Technical University Graz Abstract: Ackley, Hinton and Sejnowski introduced a very interesting and versatile learning algorithm for the Boltzmann machine (BM). However it is difficult to decide when to stop the learning procedure. Experiments have shown that the BM may destroy previously achieved results when the learning process is executed for too long. This paper introduces a new quantity, the conditional divergence, measuring the learning success for the inputs of the data set. To demonstrate its use, some experiments are presented, based on the Encoder Problem. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-007: ---------------------------------------- VC-Dimensions for Graphs by Evangelos Kranakis, Carleton University, Danny Krizanc, Carleton University, Berthold Ruf, Technical University Graz, Jorge Urrutia, University of Ottawa, Gerhard J. Woeginger, Technical University Graz Abstract: We study set systems over the vertex set (or edge set) of some graph that are induced by special graph properties like clique, connectedness, path, star, tree, etc. We derive a variety of combinatorial and computational results on the $\vc$ (Vapnik-Chervonenkis) dimension of these set systems. For most of these set systems (e.g.\ for the systems induced by trees, connected sets, or paths), computing the $\vc$-dimension is an $\np$-hard problem. Moreover, determining the $\vc$-dimension for set systems induced by neighborhoods of single vertices is complete for the class $\lognp$. In contrast to these intractability results, we show that the $\vc$-dimension for set systems induced by stars is computable in polynomial time. For set systems induced by paths or cycles, we determine the extremal graphs $G$ with the minimum number of edges such that $\vc_{{\cal P}}(G)\ge k$. Finally, we show a close relation between the $\vc$-dimension of set systems induced by connected sets of vertices and the $\vc$ dimension of set systems induced by connected sets of edges; the argument is done via the line graph of the corresponding graph. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-008: ---------------------------------------- Computing the Maximum Bichromatic Discrepancy, with applications to Computer Graphics and Machine Learning by David P. Dobkin, Princeton University, Dimitrios Gunopulos, Princeton University, Wolfgang Maass, Technische Universitaet Graz, Abstract: Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-009: ---------------------------------------- A Finite Automaton Learning System using Genetic Programming by Herman Ehrenburg, CWI, Jeroen van Maanen, CWI Abstract: This report describes the Finite Automaton Learning System (FALS), an evolutionary system that is designed to find small digital circuits that duplicate the behaviour of a given finite automaton. FALS is developed with the aim to get a better insight in learning systems. It is also targeted to become a general purpose automatic programming system. The system is based on the genetic programming approach to evolve programs for tasks instead of explicitly programming them. A representation of digital circuits suitable for genetic programming is given as well as an extended crossover operator that alleviates the need to specify an upper bound for the number of states in advance. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-010: ---------------------------------------- On Specifying Boolean Functions by Labelled Examples by Martin Anthony, London School of Economics, Graham Brightwell, London School of Economics, John Shawe-Taylor, Royal Holloway, University of London Abstract: We say a function $t$ in a set $H$ of $\{0,1\}$-valued functions defined on a set $X$ is {\it specified} by $S \subseteq X$ if the only function in $H$ which agrees with $t$ on $S$ is $t$ itself. The {\it specification number} of $t$ is the least cardinality of such an $S$. For a general finite class of functions, we show that the specification number of any function in the class is at least equal to a parameter from~\cite{RS} known as the testing dimension of the class. We investigate in some detail the specification numbers of functions in the set of linearly separable Boolean functions of $n $ variables---those functions $f$ such that $f^{-1}(\{0\})$ and $f^{-1}(\{1\})$ can be separated by a hyperplane. We present general methods for finding upper bounds on these specification numbers and we characterise those functions which have largest specification number. We obtain a general lower bound on the specification number and we show that for all {\it nested} functions, this lower bound is attained. We give a simple proof of the fact that for any linearly separable Boolean function, there is exactly one set of examples of minimal cardinality which specifies the function. We discuss those functions which have limited dependence, in the sense that some of the variables are redundant (that is, there are irrelevant attributes), giving tight upper and lower bounds on the specification numbers of such functions. We then bound the average, or expected, number of examples needed to specify a linearly separable Boolean function. In the final section of the paper, we address the complexity of computing specification numbers and related parameters. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-012: ---------------------------------------- On the relations between discrete and continuous complexity theory by Klaus Meer, RWTH Aachen Abstract: Relations between discrete and continuous complexity models are considered. The present paper is devoted to combine both models. In particular we analyze the 3-Satisfiability problem. The existence of fast decision procedures for this problem over the reals is examined based on certain conditions on the discrete setting. Moreover we study the behaviour of exponential time computations over the reals depending on the real complexity of 3-Satisfiability. This will be done using tools from complexity theory over the integers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-014: ---------------------------------------- Grundlagen der reellen Komplexit\"atstheorie by Klaus Meer, RWTH Aachen Abstract: (in English - text is in German) Complexity theory deals with the question of classifying mathematical problems according to the difficulty they provide for algorithmic solutions. This is generally related to \begin{itemize} \item finding efficient solution-algorithms, \item analyzing structural properties which make problems difficult to solve and \item comparing problems. \end{itemize} Contrary to the situation in classical complexity theory the real approach is interested in studying problems defined on continuous structures. Starting point for the present lecture notes will be the model of a real Turing-machine as it was introduced 1989 by Blum, Shub, and Smale. We will begin with a formal definition of notions like computability, decidability and efficiency. This gives rise to consider the complexity classes $P_{\R}$ and $NP_{\R}$. After analyzing basic properties (reducibility, $NP_{\R}-$completeness,existence of complete problems) we'll care about decidability of problems in class $NP_{\R}$. To this aim results on quantifier elimination and on the structure of semialgebraic sets are investigated. Finally, methods for proving lower bounds are presented. For this purpose we show a real version of Hilbert's Nullstellensatz. Table of contents: 0. Introduction 1. The computational model of Blum, Shub, and Smale 2. Complexity theory for the BSS-model 3. Existential theory over the reals 4. Lower bounds References ----------------------- The Report NC-TR-94-018 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-94-018.ps.Z ftp> bye % zcat nc-tr-94-018.ps.Z | lpr -l Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neurocolt.html Best wishes John Shawe-Taylor From rajkumar at centre-intelligent-systems.plymouth.ac.uk Sun Mar 26 14:49:51 1995 From: rajkumar at centre-intelligent-systems.plymouth.ac.uk (Rajkumar Roy (EDC)) Date: Sun, 26 Mar 95 14:49:51 BST Subject: No subject Message-ID: <6961.9503261349@cis.plymouth.ac.uk> Subject: ACEDC'96 Call for Papers ... Cc: ************************************************************** PLEASE CIRCULATE ! PLEASE CIRCULATE ! PLEASE CIRCULATE ! ************************************************************** SECOND INTERNATIONAL CONFERENCE ADAPTIVE COMPUTING IN ENGINEERING DESIGN AND CONTROL '96 26-28 March 1996 'The Integration of Genetic Algorithms, Neural Computing and Related Adaptive Techniques with Current Engineering Practice'. 1ST CALL FOR PAPERS ACEDC'96 CONFERENCE CHAIRS Dr I C Parmee Prof M J Denham Plymouth Engineering Design Centre AIMS OF THE CONFERENCE There is a world-wide upsurge of interest from both industry and academia in exciting novel computer technologies that are inspired by biological principles and other natural processes. The Genetic Algorithm, Neural Computing and Cellular Automata are examples of emergent computational techniques which exploit co-operating elements to solve complex problems previously considered to be beyond the capabilities of conventional numerical computation. A number of specialised conferences are held annually where fundamental issues in these fields are described and discussed. ACEDC'96 is the second in what is expected to be a biennial series of meetings aimed at addressing the rapidly developing integration of these emerging computing technologies with engineering applications, particularly in the areas of design and control. The primary objective of the ACEDC'96 Conference is to create a stimulating environment in which participants can assess the state of the art, discuss feasible future directions for research and applications and develop long term targets. The ultimate aim of this conference series is to ensure that design engineers can take full advantage of these powerful computing technologies and of their implementation upon high performance computing platforms, as both become increasingly available and dominant over the next ten years and into the early part of the 21st Century. RELEVANT AREAS Papers are invited which address, amongst others, the following issues: * How are design and control problems best formulated for the application of these novel computing technologies? * What aspects of design and control problems present difficulties for and limitations on the use of these technologies? * What are the current shortcomings of the novel computing methods in respect of their application to real world problems? * To what extent can the development of hybrid approaches, involving the dynamic combination of complementary computing methods, help to solve present and future problems? * How can designer intuition and experience be captured and included in the process? * How can the design engineer visualise and explain the computational processes, their resulting solutions and pathways to these solutions? * How can designer creativity be best enhanced by these techniques? 1ST CALL FOR PAPERS Submissions should take the form initially of extended abstracts of 1000-2000 words which fully describe how the paper will contribute to the aims of the conference. This will be either by addressing the issues described above or related issues at a conceptual level, or by describing real world examples of how such issues have been approached and problems overcome. Abstracts are invited from both industry and academia and may describe completed work or ongoing research. Papers will be accepted as either full papers for oral presentation or short papers for poster presentation etc. Extended abstracts should be received by 1st May 1995. Successful authors will be informed by 30th August 1995 and a camera-ready copy should arrive no later than 23rd October 1995. CONFERENCE ORGANISATION The Conference will be of three days duration. Parallel sessions will be avoided on at least two of the three days with the aim of generating widespread discussion on all aspects of the meeting. The content of each session will be designed as far as possible to include papers which address the issues both conceptually and through application in order to promote and stimulate discussion of the integration between the computing methods and their real-world applications. This approach will also assist in the identification of the generic issues involved. Keynote speakers have been invited to present papers which will further stimulate and focus discussion of the major issues in the field. Delegate fees will be kept to a minimum and are unlikely to exceed 180.00 pounds sterling for the attendance on all three days, the aim being to provide high-level information exchange at low cost. INVITED KEYNOTE SPEAKERS Professor Eric Goodman Michigan State University, USA Professor George Thierauf University of Essen, Germany Professor John Taylor Kings College, London, UK Professor Julian Morris University of Newcastle-upon-Tyne, UK Dr Philip Husbands University of Sussex, UK INVITED SCIENTIFIC COMMITTEE A J Keane University of Oxford, UK H Schwefel University of Dortmund, Germany P Husbands University of Sussex, UK G Thierauf University of Essen, Germany P Cowley Rolls Royce, UK E Semenkin Siberian Aerospace Academy, Russia P Liddell British Aerospace, UK D Grierson University of Waterloo, Canada G Gapper British Aerospace, UK J Angus Rolls Royce, UK E Goodman Michigan State University, USA J Taylor Kings College, London, UK E Kant Schlumberger Computing Labs, USA C Hughes Logica Cambridge, UK S Talukdar Carnegie Mellon University, USA C Harris University of Southampton, UK J Morris University of Newcastle-upon-Tyne, UK C Lin Institute of Technology, Taiwan S Patel Unilever Research Laboratory, UK M J Denham University of Plymouth, UK I C Parmee University of Plymouth, UK ASSOCIATED SOCIETIES Institution of Engineering Designers Institution of Mechanical Engineers Institution of Civil Engineers British Computer Society AISB IMPORTANT DATES Immediately Expression of interest 1st May 1995 Deadline for receipt of abstracts 30th August 1995 Notification of acceptance 23rd October 1995 Deadline for receipt of full papers 26-28th Mar 1996 Conference CONTACT ADDRESS Ms J Levers (Secretary) Plymouth Engineering Design Centre University of Plymouth Charles Cross Centre Drake Circus PLYMOUTH Devon, PL4 8DE United Kingdom Tele: +44 (0)1752-233508 Fax: +44 (0)1752-233505 Email: ian at cis.plym.ac.uk From prechelt at ira.uka.de Mon Mar 27 08:51:39 1995 From: prechelt at ira.uka.de (Lutz Prechelt) Date: Mon, 27 Mar 1995 15:51:39 +0200 Subject: TR on connection pruning available Message-ID: <"irafs2.ira.104:27.03.95.13.50.23"@ira.uka.de> FTP-host: ftp.icsi.berkeley.edu FTP-file: /pub/techreports/1995/tr-95-009.ps.Z URL: ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-009.ps.Z The technical report "Adaptive Parameter Pruning in Neural Networks" is now available for anonymous ftp from ftp.icsi.berkeley.edu in directory /pub/techreports/1995/ as file tr-95-009.ps.Z (92 kB, 14 pages). Here is the bibtex entry and abstract: @TechReport{Prechelt95e, author = "Lutz Prechelt", title = "Adaptive Parameter Pruning in Neural Networks", institution = "International Computer Science Institute", year = 1995, number = "95-009", address = "Berkeley, CA", month = mar, Class = "nn, learning, experiment, algorithm", URL = "ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-009.ps.Z}, abstract = "Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization. An open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This paper presents a pruning method \Def{lprune} that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. The results of extensive experimentation indicate that lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. Results of statistical significance tests comparing autoprune to the new method lprune as well as to backpropagation with early stopping are given for 14 different problems." } The ICSI internet connection is sometimes extremely slow and fails often. If you have problems getting the document, just try again at a different time. Sorry, no hardcopies available from me. Lutz Lutz Prechelt (http://wwwipd.ira.uka.de/~prechelt/) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get (Voice: +49/721/608-4068, FAX: +49/721/694092) | less simple. From bert at mbfys.kun.nl Tue Mar 28 04:44:35 1995 From: bert at mbfys.kun.nl (Bert Kappen) Date: Tue, 28 Mar 1995 11:44:35 +0200 Subject: No subject Message-ID: <199503280944.LAA11946@septimius.mbfys.kun.nl> Subject: Publication announcement FTP-host: galba.mbfys.kun.nl FTP-file: pub/reports/Kappen.RBBM.ps.Z Radial Basis Boltzmann Machines and learning with missing values (4 pages) Hilbert J. Kappen, Marcel J. Nijman RWCP Novel Function SNN Laboratory Dept. of Medical Physics and Biophysics, University of Nijmegen Geert Grooteplein 21, NL 6525 EZ Nijmegen, The Netherlands ABSTRACT: A Radial Basis Boltzmann Machine (RBBM) is a specialized Boltzmann Machine architecture that combines feed-forward mapping with probability estimation in the input space, and for which very fast learning rules exist. The hidden representation of the network displays symmetry breaking as a function of the noise in the Glauber dynamics. Thus generalization can be studied as a function of the noise in the neuron dynamics instead of as a function of the number of hidden units. For the special case of unsupervised learning, we show that this method is an elegant alternative of $k$ nearest neighbor, leading to comparable performance without the need to store all data. We show that the RBBM has good classification performance compared to the MLP. The main advantage of the RBBM is that simultaneously with the input-output mapping, a model of the input space is obtained which can be used for learning with missing values. We show that the RBBM compares favorably to the MLP for large percentages of missing values. From l.s.smith at cs.stir.ac.uk Tue Mar 28 10:38:10 1995 From: l.s.smith at cs.stir.ac.uk (Dr L S Smith (Staff)) Date: Tue, 28 Mar 1995 16:38:10 +0100 Subject: M.Sc. Course in Neural Computation. Message-ID: <199503281538.QAA07039@katrine.cs.stir.ac.uk> CENTRE FOR COGNITIVE AND COMPUTATIONAL NEUROSCIENCE UNIVERSITY OF STIRLING, SCOTLAND M.Sc. in NEURAL COMPUTATION This is a one-year full-time course with a focus on basic principles of neural computation and a special emphasis on vision. Students are prepared for careers in neural net research and development in industrial and academic situations, and also for work in more traditional computational, cognitive science, or neuroscience environments where this training can provide a distinctive and valuable contribution. The course may include a two-month industrial placement. Work for the M.Sc. can in some case be converted into the first year of a Ph.D. A nine-month Post-graduate Diploma is also available. COURSE STRUCTURE: AUTUMN 1. Introduction to neural computation 2. Principles of vision 3. Cognitive neuroscience 4. Mathematical and statistical techniques SPRING & SUMMER 1. Advanced practical work 2. Advanced topics 3. Research project ADMISSION: Applicants with any relevant first degree are eligible, e.g., PSYCHOLOGY, COMPUTING, BIOLOGY, PHYSICS, ENGINEERING, or MATHEMATICS. For information and application forms contact: School Office, School of Human Sciences, University of Stirling, Stirling FK9 4LA, SCOTLAND For specific enquiries contact: Dr W. A. Phillips, CCCN, Stirling University, Stirling FK9 4LA, Scotland e-mail: WAP1 at FORTH.STIR.AC.UK From halici at rorqual.CC.METU.EDU.TR Mon Mar 27 03:23:33 1995 From: halici at rorqual.CC.METU.EDU.TR (ugur halici) Date: Mon, 27 Mar 1995 11:23:33 +0300 (MEST) Subject: NN chips Message-ID: <mailman.752.1149540312.24850.connectionists@cs.cmu.edu> Dear Colleagues, We are gathering information on Neural Network hardware devices that have been implemented. Until now, we have collected sufficient information on the following chips/devices, for which a reference list for these devices is provided at the end of the message: TiNMANN Kohonen SOFM Nestor Ni1000 Siemens MA16 Mitsubishi Branch Neuron Unit Bell Labs Hopfield Chip Hitachi Digital Chip Philips L-Neuro Intel ETANN University of Edinburgh EPSILON AT&T Bell Labs ANNA Adaptive Solution CNAPS Hitachi WSI Siemens SYNAPSE However we have insufficient information on the following: British Telecom HANNIBAL Silicon Retina Jet Propulsion Laboratory Hopfield Chip BELLCORE Boltzmann Machine KAKADU Multilayer Perceptron Fujitsu Analog-Digital Chip U. of Catholique Louvain Kohonen SOFM Chip MIT Neuroprocessor Chip TRW MARK HNC SNAP We will appreciate your contact if you have involved somehow in implemented neuro-chips whose name is not in our list or listed among the ones for which we have insufficient information. Sincerely, Ugur Halici Dept. of Electrical Engineering Middle East Technical University, 06531, Ankara Fax: (+90) 312 210 12 61 Email: halici at rorqual.cc.metu.edu REFERENCES Alspector, J.,et. al., 1989, "Performance of a Stochastic Learning Microchip.", Advances in Neural Information Processing Systems, Vol.1, pp. 748-760. Arima, Y., et. al, 1991a, "A 336-Neuron, 28-K Synapse, Self-learning Neural Network Chip with Branch-Neuron-Unit Architecture.", IEEE Journal of Solid State Circuits, Vol.26, No.11, pp. 1637-1644. Arima, Y., et. al, 1991b, "A Self-learning Neural Network Chip with 125-Neurons and 10-K self-Organization Synapses.", IEEE Journal of Solid State Circuits, Vol.26, No.4, pp. 607-611. Arima, Y., et. al, 1992, "A Refreshable Analog VLSI Neural Network Chip with 400-Neurons and 40-K synapses", IEEE Journal of Solid State Circuits, Vol.27, No.12, pp. 1854-1861. Castro, H.A., et. al, 1993, "Implementation and Performance of Analog Nonvolatile Neural Network", Analog Integrated Circuits and Signal Processing, Vol.4, pp. 97-113. Eberhardt, S.P., et.al., 1992, "Analog VLSI Neural Netowrks: Implementation Issues and Examples in Optimization and Supervised Learning.", IEEE Transactions on Industrial Electronics, Vol.39, No.6, pp. 552-564. Hamilton, A., et. al, 1993, "Pulse Stream VLSI Circuits and Systems: The EPSILON Neural Network Chipset", International Journal of Neural Systems, Vol.4, No.4, pp. 395-405. Holler, M., et. al, 1989, "An Electrically Trainable Artificial Neural Network (ETANN) with 1024 'Floating Gate' Syanpse", Proceedings of IACNN 1989, pp.191-196 INTEL 80170NW ETANN Experimental Sheet, May 1990, Intel Corp. Maher, M.A.C., et.al., 1989, "Implementing Neural Architectures Using Analog VLSI Circuits.", IEEE Transactions on Circuits and Systems, Vol. 36, No. 5, pp. 643-652 Mueller, D., and D.Hammerstorm, 1992, "A Neural Network Systems Component", Proceedings of IEEE ICNN 1992, pp. 1258-1264. Murray, A.F., et. al, 1994, "Pulse Stream VLSI Neural Networks.", IEEE Micro, June 1994, pp. 29-38. Ramacher, U., 1992, "SYNAPSE - A Neurocomputer that Synthesizes Neural Algorithms on a Parallel Systolic Engine", Journal of Parallel and Distributed Computing, Vol.14, pp. 306-318. Ramacher, U., et. al, 1993, "Multiprocessor and Memory Architecture of the Neurocomputer Synapse-1", International Journal of Neural Systems, Vol.4, No.4, pp. 333-336. Sackinger, E., et. al, 1992a, "Application of the ANNA Neural Network Chip to High-speed Character Recognition.", IEEE Transactions on Neural Networks, Vol.3, No.3, pp. 498-505. Tam, S., et. al, 1992, "A Reconfigurable Multi-chip Analog Neural Network Recognition and Back-Propagation Training", Proceedings of IEEE ICNN 1992, pp. 625-630. Watanebe, T., et. al, 1993, "A Single 1.5V Digital Chip for a 106 Synapse Neural Network.", IEEE Transactions on Neural Networks, Vol.4, No.3, pp. 387-393. From robtag at dia.unisa.it Wed Mar 29 05:31:32 1995 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Wed, 29 Mar 1995 12:31:32 +0200 Subject: FIRST EUROPEAN NEURAL NETWORK SCHOOL Message-ID: <9503291031.AA15397@udsab.dia.unisa.it> FIRST EUROPEAN NEURAL NETWORK SCHOOL First Announcement IIASS, Vietri S/M (Salerno), Italy 25-29 September 1995 (Co-chairs: Professor M. Marinaro and Dr. T.G. Clarkson) There is a need for a school in neural networks to caterto the healthy growth of activity in the subject. The NEURONET EC Network of Excellence is proposing to help sponsor and develop this, in collaboration with IIASS. The School will last for 5 days, with lectures in the mornings (9.00 am - 12.00 midday) and late afternoons (3.00 pm - 5.00 pm) each day. 95 per day, each 1 hour in lenght). At the end of each day a discussion (1 hour) will be held. Proposed topics (3 hours per topics, except as where mentioned): 1. Introduction to Artificial Neural Networks (ANNs) 2. Theory of Learning 3. Applications of ANNs in Control } 4. Applications of ANNs in Pattern Recognition } (2 hours each) 5. Applications of ANNs in Time Series } 6. Introduction to Living Neural Networks 7. Biological Inputs and Outputs 8. Biological Memory Systems 9. Higher Order Cognitive Modelling Committee from NEURONET Dr. T.G. Clarkson (KCL) (Chair) Professor J.G. Taylor (KCL) Professor A. Babloyantz (and other Human Resources Committee members) PARTICIPANTS All appropriate (students, beginners to the field, etc.) Registration fee 250.000 Italian Lire Registration form (To be sent to IIASS by May, 31 1995) ***************************************************************************** TEAR OFF HERE ***************************************************************************** INFORMATION FORM to be returned to: FENNS 95 IIASS Via Pellegrino, 19 84019 Vietri s/m (Salerno) Italia FENNS 95 Vietri s/m, 25-29 September 1995 Last name : .......................................................... First Name : ........................................................ Organization or company : ............................................ ...................................................................... ...................................................................... Postal code/Zip code : ............................................... City : ............................................................... Country : ............................................................ Tel : ................................................................ Fax : ................................................................ Electronic mail:...................................................... ***************************************************************************** TEAR OFF HERE ***************************************************************************** The accepted registration will be notified by June, 20 1995 and the registration fee must be sent by July, 20 1995. ***************************************************************************** From terry at salk.edu Thu Mar 30 01:30:02 1995 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 29 Mar 95 22:30:02 PST Subject: Neural Computation Abstract on WWW Message-ID: <9503300630.AA15721@salk.edu> Abstracts for Neural Computation can be found on the World Wide Web: http://www-mitpress.mit.edu Terry ----- From rsun at cs.ua.edu Thu Mar 30 12:57:13 1995 From: rsun at cs.ua.edu (Ron Sun) Date: Thu, 30 Mar 1995 11:57:13 -0600 Subject: Workshop on connectionist-symbolic integration Message-ID: <9503301757.AA18175@athos.cs.ua.edu> ---------------------------------------------- The IJCAI Workshop on Connectionist-symbolic Integration: From Unified to Hybrid Approaches ---------------------------------------------- Montreal, Canada August 19-20, 1995 There has been a considerable amount of research in integrating connectionist and symbolic processing. While such an approach has clear advantages, it also encounters serious difficulties and challenges. Therefore, various models and ideas have been proposed to address various problems and aspects in this integration. There is a growing interest from many segments of the AI community, ranging from expert systems, to cognitive modeling, to logical reasoning. Two major trends can be identified in the state of the art: these are the unified, or purely connectionist, and the hybrid approaches to integration. Whereas the purely connectionist ("connectionist-to-the-top") approach claims that complex symbol processing functionalities can be achieved via neural networks alone, the hybrid approach is premised on the complementarity of the two paradigms and aims at their synergistic combination in systems comprising both neural and symbolic components. In fact, these trends can be viewed as two ends of an entire spectrum. Up till now, overall, there is still relatively little work in comparing and combining these fairly isolated efforts. This workshop will provide a forum for discussions and exchanges of ideas in this area, to foster cooperative work. The workshop will tackle important issues in integrating connectionist and symbolic processing. ** Organizing Committee Frederic Alexandre (co-chair) John Barnden Steve Gallant Larry Medsker Christian Pellegrini Noel Sharkey Ron Sun (co-chair) ** Program Committee Lawrence Bookman Michael Dyer Wolfgang Ertel LiMin Fu Jose Gonzalez-Cristobal Ruben Gonzalez-Rubio Jean-Paul Haton Melanie Hilario Abderrahim Labbi Ronald Yager Workshop Schedule} -------------------- August 19th, 1995 9:00 - 9:20 Opening Remarks R. Sun F. Alexandre 9:20 - 11:30 Invited talks Chair: Ron Sun 9:20 - 10:20 Invited Talk: Neuropsychology meets AI J. Hendler 10:30 - 11:30 Invited Talk: Neural computing and Artificial Intelligence N. Sharkey 11:30-12:00 Panel responses and discussions Chair: R. Sun Panelists: F. Alexandre, J. Austin, G. Cottrell/D. Noelle, R. Yager Each panelist gives a 5 minute commentary, with questions and comments from the audience. The invited speakers then give their responses. 1:30 - 2:30 Interactive Session: Definitions of Approaches Chair: F. Alexandre Overview of strategies for neurosymbolic integration M. Hilario Cognitive aspects of neurosymbolic integration Y. Lallement and F. Alexandre 2:30 pm - 4:00 Regular Session: Hybrid Approaches Chair: C. Pellegrini A hybrid learning model of abductive reasoning T. Johnson and J. Zhang A hybrid learning model for reaction and decision making R. Sun and T. Peterson A preprocessing model for integrating CBR and prototype-based neural network M. Malek and B. Amy 4:10 - 5:40 Regular Session: Unified Approaches Chair: R. Sun Symbolic neural networks derived from stochastic grammar domain models E. Mjolsness Micro-level hybridization in DUAL B. Kokinov A unified connectionist model of instruction following D. Noelle and G. Cottrell August 20th, 1995 9:00 - 10:30 Regular Session: Hybrid Approaches Chair: J.P. Haton An integrated symbolic/connectionist model of parsing S. Stevenson A hybrid system framework for disambiguating word senses X. Wu, M. McTear, P. Ojha, H. Dai A localist network architecture for logical inference N. Park and D. Robertson 10:40 - 12:40 Regular Session: Unified Approaches Chair: F. Alexandre Holographic reduced representation T. Plate Distributed representations for terms in hybrid reasoning systems A. Sperduti, A. Starita, C. Goller Learning distributed representation R. Krosley and M. Misra Distributed associative memory J. Austin 2:00 - 4:30 Regular Session: Hybrid Approaches Chair: L. Medsker A distributed platform for symbolic-connectionist integration J. C. Gonzalez, J. R. Velasco, C. A. Iglesias Nessyl3L: a neurosymbolic system with 3 levels B. Orsier and A. Labbi A framework for hybrid systems P. Bison, G. Chemello, C. Sossai, G. Trainito A first approach to a taxonomy of fuzzy-neural systems L. Magdalena Task structure and computational level; architectural issues in symbolic-connectionist integration R. Khosla and T. Dillon 4:40 - 5:30 Summary Panel Chair: F. Alexandre Panelists: R. Sun, T. Johnson/J. Zhang, M. Hilario, S. Gallant, J.P. Haton, L. Medsker 5:30 Workshop ends -------------------------------------------------------------- For details, contact IJCAI-95, c/o AAAI, 455 Burgess Drive, Menlo Park, CA 94025, USA. ================================================================ Dr. Ron Sun Department of Computer Science phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu ================================================================ From bishopc at helios.aston.ac.uk Fri Mar 31 06:40:38 1995 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 31 Mar 1995 11:40:38 +0000 Subject: Aston World Wide Web Pages Message-ID: <9838.9503311040@sun.aston.ac.uk> Aston University ---------------- Neural Computing Research Group ------------------------------- World Wide Web pages -------------------- Our world wide web pages can be viewed at the following URL: http://neural-server.aston.ac.uk/ These pages include information on the research activities of the Group, lists of recent publications and preprints, and funded research opportunities within the Group. Chris Bishop -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)121 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)121 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From rwp at eng.cam.ac.uk Fri Mar 31 14:52:07 1995 From: rwp at eng.cam.ac.uk (Richard Prager) Date: Fri, 31 Mar 1995 14:52:07 BST Subject: Cambridge Neural Nets Summer School 1995 Message-ID: <199503311352.12914@dsl.eng.cam.ac.uk> +------------------------------------------------------------------+ | FIFTH ANNUAL CAMBRIDGE NEURAL NETWORKS SUMMER SCHOOL | | | | 24-27 July 1995, Emmanuel College, Cambridge, United Kingdom | +------------------------------------------------------------------+ FULLY FUNDED PLACES AVAILABLE FOR EPSRC RESEARCH STUDENTS DISCOUNTED RATES AVAILABLE FOR ACADEMICS SPEAKERS Professor Chris BISHOP - Aston University, Birmingham Dr Herve BOURLARD - Faculte Polytechnique of Mons, Belgium Dr John DAUGMAN - University of Cambridge Professor Geoffrey HINTON - Toronto University Dr Robert JOHNSTON - GEC Hirst Research Centre, Hertfordshire Professor Michael JORDAN - MIT, Boston, Massachusetts Dr Michael LYNCH - Cambridge Neurodynamics Ltd Dr David MACKAY - University of Cambridge Dr Rich SUTTON - GTE Laboratories, Massachusetts Dr Lionel TARASSENKO - University of Oxford PROGRAMME The course will consist of a series of lectures by international experts, interspersed with practical sessions, laboratory tours, poster session, discussion (both formal and informal) and a commercial exhibition. All sessions will be covered by comprehensive course notes and subjects will include: Introduction and overview: Connectionist computing: an introduction and overview Programming a neural network Parallel distributed processing perspective Theory and parallels with conventional algorithms Architectures: Pattern processing and generalisation Bayesian methods and non-linear modelling Reinforcement learning neural networks Multiple expert networks Self organising neural networks Feedback networks for optimization Applications: System identifications Time series predictions Learning forward and inverse dynamical models Control of non-linear dynamical systems using neural networks Artificial and biological vision systems Silicon VLSI neural networks Applications to speech recognition Applications to mobile robotics Financial system modelling Applications in medical diagnostics WHO WILL BENEFIT * Engineers, software specialists and those needing to assess the current potential of neural networks * Technical staff requiring an overview of the subject * Individuals who already have expertise in this area and need to keep abreast of recent developments * Those who have recently entered the field and require a complete perspective of the subject * Researchers and academics working within neural computing areas, as well as all those in industry researching and developing applications Some, although not all, of the lectures will involve graduate level mathematical theory. ACADEMIC DIRECTORS The Summer School will be chaired by members of Cambridge University Engineering Department (CUED) who as members of the Speech, Vision and Robotics Group have current research interests as shown: DR MAHESAN NIRANJAN - speech processing and pattern classification DR RICHARD PRAGER - speech and medical applications DR TONY ROBINSON - recurrent networks and speech processing ACADEMIC SPEAKERS PROFESSOR CHRIS BISHOP, Head of the Neural Computing Research Group at Aston University and Chairman of the Neural Computing Applications Forum, is researching statistical pattern recognition. DR HERVE BOURLARD is with Faculte Polytechnique of Mons, Belgium. He has made many contributions in the area of neural networks and speech recognition. DR JOHN DAUGMAN is Lecturer in Artificial Intelligence in the Computer Laboratory at Cambridge University. His areas of research are computational neuroscience, multi-dimensional signal processing, computer vision, statistical pattern recognition, and biological vision. PROFESSOR GEOFFREY HINTON, Professor of Computer Science and Psychology at the University of Toronto, researches learning, perception and symbol processing in neural networks. He was one of the researchers who introduced the back-propagation algorithm that is now widely used for practical applications. PROFESSOR MICHAEL JORDAN is in the Department of Brain & Cognitive Science at MIT. His contributions to the field include the development of probabilistic methods for learning in modular and hierarchical systems, and the development of methods for applying neural networks to system identification and control problems. DR DAVID MACKAY works on Bayesian methods and non-linear modelling at the Cavendish Laboratory. He obtained his PhD in Computation and Neural Systems at California Institute of Technology. DR LIONEL TARASSENKO is with the Department of Engineering Science at the University of Oxford. His specialisms are robotics and the hardware implementation of neural computing. INDUSTRIAL SPEAKERS DR ROBERT JOHNSTON joined GEC Hirst Research Centre in 1989. His current research is on the applications of non-linear control, with emphasis on fuzzy systems and neural networks. DR MICHAEL LYNCH is Managing Director of Cambridge Neuro-dynamics, a company specialising in the practical application of neural network recognition systems for police, transport and security systems. DR RICH SUTTON is with the Adaptive Systems Department of GTE Laboratories near Boston, Massachusetts. His specialisms are reinforcement learning, planning and animal learning behaviours. LABORATORY TOURS Two afternoon tours will provide the opportunity to observe current research in the field of neural network theory and applications. The Neural Networks Group at CUED have been working in this field since 1984. The Group currently consists of 8 staff and 14 research students. `HANDS-ON' SESSIONS An afternoon practical session will offer the chance to experiment with neural network software and develop an understanding of its strengths and weaknesses. A custom designed environment is available to enable the participants to simulate a variety of problems and explore neural solutions. APPLICATIONS DAY Day 4 will be focused on the applications of neural systems featuring presentations from companies which have exploited connectionist solutions in their businesses. This will give delegates invaluable first hand insight into the technical and practical detail of the transition from research to application. It will present a 'world-class' perspective on the relevance of neural design techniques to commercial and industrial requirements. RESEARCH STUDENTS The Engineering and Physical Sciences Research Council are funding a limited number of places for UK, post-graduate research students. The students benefit from the interaction with a wide range of academics and industrialists and have the opportunity to extend their experience and establish links into industry and other institutions. A poster session will present the current research interests of each student, providing an insight into the work of the connectionist groups in higher education institutions across the UK. VENUE AND ACCOMMODATION The Summer School will be held at Emmanuel College, Cambridge. Emmanuel was founded in 1584 and its attractions include the Wren Chapel and beautiful gardens which delegates may enjoy. The Summer School will take place in the new lecture theatre complex. Emmanuel's city centre location provides easy access to shops and services, is on the main bus route from the Rail Station and is 2 minutes walk from Drummer Street Bus Station, which is served by all national and airport services. Accommodation can be arranged for delegates in single study bedrooms with shared facilities at Emmanuel College for 205 pounds for 4 nights to include bed and breakfast, dinner and a Course Dinner. An additional night's accommodation (bed and breakfast only) on Thursday, 27 July is available at 28 pounds. If you would prefer to make your own arrangements please indicate on the registration form and details of local hotels will be sent to you. SUMMER SCHOOL FEES All fees are payable in advance and include a set of course notes and all day-time refreshments. Days 1-4 (Monday-Thursday) - 775 pounds Academic Discounted Rate: (see qualifying note)* Days 1-4 (Monday-Thursday) - 475 pounds *Academic discounts only available if fees are to be paid by an academic institution. Limited number available - please contact the Course Administrator before applying. REGISTRATIONS For applications for EPSRC fully funded studentships please use the form at the end of this message. Otherwise please contact the Cambridge Programme for Industry: By Email: rjs1008 at cus.cam.ac.uk By Post: Registration Administrator, University of Cambridge, Programme for Industry, 1 Trumpington Street, Cambridge, CB2 1QA, UK. By Phone on: +44 (0)1223 302233 By Fax on: +44 (0)1223 301122 All reserved places must be confirmed by returning a registration form to the address shown. Bookings will be confirmed after payment has been received in full. Delegates will only be accepted onto a course if payment has been received in full or an official company order has been received. METHODS OF PAYMENT Payments should be made by: A cheque drawn on a UK bank, VISA or Mastercard/Eurocard, Sterling banker's draft drawn on a UK bank, Crossed international money order, Sterling travellers' cheques Any bank charges arising from international transactions must be met by the delegate. All payments to University of Cambridge must be for the full amount of fees incurred. Personal cheques drawn on banks outside the UK will not be accepted. Please do not send cash. Cheques or orders should be made payable to the 'University of Cambridge-EYA 4814'. CANCELLATIONS Half the registration fee will be returned for bookings cancelled up to one calender month in advance of the course. After this time no fees are returnable. However, substitutions may be made at any time. The Cambridge Programme for Industry reserves the right to pass on any charges levied by a College for cancellation of accommodation and meal bookings. EPSRC FUNDED STUDENTSHIPS A number of Studentships are available for EPSRC-funded, UK registered, post-graduate research students with UK residency. The Studentship covers all course costs and students can attend either Days 1-3 or Days 1-4, with full accommodation packages. Overnight accommodation will not be funded for students resident in or close to Cambridge. Funding is not available for accommodation on Sunday, 23 July, but bed and breakfast in College can be booked at a cost to the student of 28 pounds. Please indicate if you require the extra night's accommodation at the time of application. Wherever possible, students will be funded to their requested level, however it will be necessary to limit the number of students funded for Day 4. Please note that Studentships do not include travel costs. Recipients of 1994 Studentships will not be eligible for 1995 Studentships. HOW TO APPLY FOR AN EPSRC FULLY FUNDED STUDENTSHIP To be considered for a place, please complete the application form below and send a one page summary of current research including how you expect to benefit by attending, a curriculum vitae and a letter of recommendation from your supervisor. The deadline for applications is 19 May 1995. It should be noted that successful applicants will be required to present a poster of their current research or of the research interest of their group. The poster session will be held on Day 1. EPSRC Studentship No. Title (Mr/Miss/Ms) Surname First Name(s) Institution Address Post Code Telephone Fax E-mail: I wish to be considered for an EPSRC Studentship as follows: (please delete those which do not apply) Days 1-3 with 2 nights accommodation package Days 1-3 without accommodation Days 1-4 with 3 nights accommodation package Days 1-4 without accommodation Bed and breakfast only on Sunday, 23 July @ 28 pounds From bishopc at helios.aston.ac.uk Fri Mar 31 15:53:24 1995 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 31 Mar 1995 20:53:24 +0000 Subject: Lectureships in Neural Computing Message-ID: <4008.9503311953@sun.aston.ac.uk> ------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK TWO LECTURESHIPS ---------------- * Full details at http://neural-server.aston.ac.uk// * Applications are invited for two Lectureships within the Department of Computer Science and Applied Mathematics. (These posts are roughly comparable to Assistant Professor positions in North America). Candidates are expected to have excellent academic qualifications and a proven record of research. The appointments will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Successful candidates will be expected to make a substantial contribution to the research activities of the Group in the area of neural computing, or a related area concerned with advanced information processing. Current research activity focusses on principled approaches to neural computing, and ranges from theoretical foundations to industrial and commercial applications. We would be interested in candidates who can contribute directly to this research programme or who can broaden it into related areas, while maintaining the emphasis on theoretically well-founded research. The successful candidates will also be expected to contribute to the undergraduate and/or postgraduate teaching programmes. Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff Chris Bishop Professor David Lowe Professor David Bounds Professor Richard Rohwer Lecturer Alan Harget Lecturer Ian Nabney Lecturer David Saad Lecturer (arrives 1 August) together with the following Research Fellows Chris Williams Shane Murnion Alan McLachlan Huaihu Zhu a full-time computer support assistant, and eleven postgraduate research students. Conditions of Service --------------------- The appointments will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Salaries will be within the lecturer A and B range 14,756 to 25,735, and exceptionally up to 28,756 (UK pounds; these salary scales are currently under review). How to Apply ------------ If you wish to be considered for one of these positions, please send a full CV and publications list, together with the names of 4 referees, to: Professor C M Bishop Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 021 359 3611 ext. 4270 Fax: 021 333 6215 e-mail: c.m.bishop at aston.ac.uk closing date: 19 May 1995 From dave at twinearth.wustl.edu Fri Mar 31 03:48:35 1995 From: dave at twinearth.wustl.edu (David Chalmers) Date: Fri, 31 Mar 95 02:48:35 CST Subject: Penrose symposium Message-ID: <9503310848.AA11316@twinearth.wustl.edu> Over the coming weeks, there will be a symposium on Roger Penrose's recent book SHADOWS OF THE MIND in the electronic journal PSYCHE. There will be ten review articles discussing the issues raised by the book from a variety of perspectives, and Penrose will reply. Authors of the review articles are: Bernard Baars: Psychology, The Wright Institute, Berkeley David Chalmers: Philosophy, Washington University Solomon Feferman: Mathematics, Stanford University Stanley Klein: Vision Sciences, University of California at Berkeley Aaron Klug: Molecular Biology, Cambridge University Tim Maudlin: Philosophy, Rutgers University John McCarthy: Computer Science, Stanford University Daryl McCullough: Computer Science, Odyssey Research Associates Drew McDermott: Computer Science, Yale University Hans Moravec: Robotics, Carnegie Mellon University The articles will appear at a rate of one or two per week, and will be e-mailed to subscribers of the mailing list PSYCHE-L. To subscribe to this mailing list, send e-mail to listserv at iris.rfmh.org, with a single line "SUBSCRIBE PSYCHE-L <your name>". The articles will also be made available on the worldwide web, at http://hcrl.open.ac.uk/psyche.html (this is the PSYCHE home page). Discussion is encouraged on the associated discussion list PSYCHE-D -- to subscribe, "SUBSCRIBE PSYCHE-D" at the address above. David Chalmers. From tom at csc1.prin.edu Fri Mar 31 17:37:50 1995 From: tom at csc1.prin.edu (Tom Fuller) Date: Fri, 31 Mar 1995 16:37:50 -0600 Subject: No subject Message-ID: <199503312238.QAA22381@spectre.prin.edu> The file fuller.thesis.ps.Z is now available for copying from the Neuroprose repository: Supervised Competitive Learning: a technology for pen-based adaptation in real time Thomas H. Fuller, Jr. Computer Science Principia College Elsah, IL 62028 tom at csc1.prin.edu Abstract: The advent of affordable, pen-based computers promises wide application in educational and home settings. In such settings, systems will be regularly employed by a few users (children or students), and occasionally by other users (teachers or parents). The systems must adapt to the writing and gestures of regular users but not lose prior recognition ability. Furthermore, this adaptation must occur in real time not to frustrate or confuse the user, and not to interfere with the task at hand. It must also provide a reliable measure of the likelihood of correct recognition. Supervised Competitive Learning is our technology for the recognition of handwritten symbols. It uses a shifting collection of neural network-based similarity detectors to adapt to the user. We demonstrate that it satisfies the following requirements: 1. Pen-based technology: digitizing display tablet with pen. 2. Low cost: PC-level processor with about 50 MIPS. 3. Wide range of subjects: varying by age, nationality, writing style. 4. Wide range of symbol sets: numerals, alphabetic characters, gestures. 5. Usage: adaptation to regular users; persistent response to occasional users. 6. On-line recognition: both response and adaptation in real time. 7. Self-criticism: reliable measure of likelihood of correct response. 8. Context-free classification: symbol by symbol recognition. SCL successfully recognizes handwritten characters from writers on which it has trained (digits, lowercase, uppercase, and others) at least as well as known current systems (96.5% - 99.2%, depending on character sets). It adapts to its user in real time with a 50 MIPS processor without loss of response to occasional users. Finally, its estimates of its correctness are strongly correlated with actual likelihood of correctness. This is a doctoral dissertation at Washington University in St. Louis. Hardcopies are only available from University Microfilms, Inc. This work was supported by the Kumon Machine Project. ADVISOR: Professor Takayuki Dan Kimura completed December, 1994 Department of Computer Science Washington University Campus Box 1045 One Brookings Drive St. Louis, MO 63130-4899 queries about the work should go to; Thomas H. Fuller, Jr. Computer Science Principia College Elsah, IL 62028 tom at csc1.prin.edu Here's a sample retrieval session: unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive FTP server (Version wu-2.4(2) Mon Apr 18 14:41:30 EDT 1994) ready. Name (archive.cis.ohio-state.edu:me): anonymous 331 Guest login ok, send your complete e-mail address as password. Password: me at here.edu 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd pub/neuroprose/Thesis 250 CWD command successful. ftp> get fuller.thesis.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for fuller.thesis.ps.Z (798510 bytes). 226 Transfer complete. 798510 bytes received in 180 seconds (5 Kbytes/s) ftp> bye 221 Goodbye. unix> uncompress fuller.thesis.ps.Z unix> <send fuller.thesis.ps to favorite viewer or printer> From Connectionists-Request at cs.cmu.edu Wed Mar 1 00:06:00 1995 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Wed, 01 Mar 95 00:06:00 EST Subject: Bi-monthly Reminder Message-ID: <835.794034360@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Lisa Saksida --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user anonymous with password your username. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following url: http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transfering files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In your subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available "Solving Towers of Hanoi with ART-4" Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu. ^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu". From jari at vermis.hut.fi Wed Mar 1 02:46:28 1995 From: jari at vermis.hut.fi (Jari Kangas) Date: Wed, 1 Mar 1995 09:46:28 +0200 Subject: New version (v3.0) of SOM_PAK Message-ID: <199503010746.JAA16971@vermis> ************************************************************************ * * * SOM_PAK * * * * The * * * * Self-Organizing Map * * * * Program Package * * * * Version 3.0 (March 1, 1995) * * * * Prepared by the * * SOM Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1992-1995 * * * ************************************************************************ Updated public-domain programs for Self-Organizing Map (SOM) algorithms are available via anonymous FTP on the Internet. A new book on SOM and LVQ (Learning Vector Quantization) has also recently come out: Teuvo Kohonen. Self-Organizing Maps (Springer Series in Information Science, Vol 30, 1995). In short, Self-Organizing Map (SOM) defines a 'nonlinear projection' of the probability density function of the high-dimensional input data onto the two-dimensional display. SOM places a number of reference vectors into an input data space to approximate to its data set in an ordered fashion, and thus implements a kind of nonparametric, nonlinear regression. This package contains all necessary programs for the application of Self-Organizing Map algorithms in an arbitrary complex data visualization task. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the SOM programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/som_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. The som_pak program package includes the following files: - Documentation: README short description of the package and installation instructions som_doc.ps documentation in (c) PostScript format som_doc.ps.Z same as above but compressed som_doc.txt documentation in ASCII format - Source file archives: som_p3r0.exe Self-extracting MS-DOS archive file som_pak-3.0.tar UNIX tape archive file som_pak-3.0.tar.Z same as above but compressed An example of FTP access is given below unix> ftp cochlea.hut.fi (or 130.233.168.48) Name: anonymous Password: <your email address> ftp> cd /pub/som_pak ftp> binary ftp> get som_pak-3.0.tar.Z ftp> quit unix> uncompress som_pak-3.0.tar.Z unix> tar xvfo som_pak-3.0.tar See file README for further installation instructions. All comments concerning this package should be addressed to som at cochlea.hut.fi. ************************************************************************ From bishopc at helios.aston.ac.uk Wed Mar 1 03:29:38 1995 From: bishopc at helios.aston.ac.uk (bishopc) Date: Wed, 1 Mar 1995 08:29:38 +0000 Subject: NCAF Spring Conference Message-ID: <14359.9503010829@sun.aston.ac.uk> -------------------------------------------------------------------- NEURAL COMPUTING APPLICATIONS FORUM NCAF Two-Day Conference: "Practical Applications and Techniques of Neural Networks" (sponsored by IBM and Neuroptics) 12 and 13 April, 1995 Robinson College, Cambridge, UK 12 April 1995 ------------- Invited Guest Tutorial: Pattern Recognition Using Hidden Markov Models Prof Steve Young, Cambridge University Keynote Talk: The IBM ZISC Chip Guy Paillet, Neuroptics Consulting Neural Computing: The Key Answers? Prof David Bounds, Aston University (EPSRC Neural Computing Coordinator) Neural Networks for Analysis of EEG David Siegwart, Oxford University High Speed Car Number Plate Recognition (with demonstration!) Steve Gull, Cambridge University Workshop: Practicalities of Training Networks An interactive workshop with opportunities for questions and discussion from the audience. Champagne Reception hosted by Neuroptics and IBM The ZISC Banquet After dinner speaker: Robert Worden, Logica Cambridge 13 April 1995 ------------- Neural Interpretation of Foetal Heart Rate Traces Richard Shaw, Cambridge University Classification of Wood Quality John Keating, St Patrick's, Ireland Medical Diagnosis Using ARTMap for Autonomous Learning Robert Harrison, Sheffield University Control of a Jet Engine Ian Nabney, Aston University Detection of Organised Credit Card Fraud Iain Strachan, AEA Technology Recurrent Networks for Very Large Vocabulary Speech Recognition Tony Robinson, Cambridge University Interpretation and Knowledge Discovery in MLPs Marilyn Vaughn, RMCS, Cranfield University Solving Folding Optimisation Problems with Self-Organising Networks Shara Amin, British Telecom Using Neural Networks for Property Valuation Howard James, Portsmouth University An Evaluation of the Neocognitron David Lovell, Cambridge University ------------------------------------------------------------------- NEURAL COMPUTING APPLICATIONS FORUM The Neural Computing Applications Forum (NCAF) was formed in 1990 and has since come to provide the principal mechanism for exchange of ideas and information between academics and industrialists in the UK on all aspects of neural networks and their practical applications. NCAF organises four 2-day conferences each year, which are attended by well over 100 participants. It has its own international journal `Neural Computing and Applications' which is published quarterly by Springer-Verlag, and it produces a quarterly newsletter `Networks'. Annual membership rates (Pounds Stirling): Company: 300 Individual: 170 Associate 110 Student: 65 Membership includes free registration at all four annual conferences, a subscription to the journal `Neural Computing and Applications', and a subscription to `Networks'. Associate membership includes the journal and newsletter but does not include admission to the conferences (for which a separate fee must be paid) and is intended primarily for overseas members who are unable to attend most of the conferences. For further information: Tel: +44 (0)784 477271 Fax: +44 (0)784 472879 Chris M Bishop (Chairman, NCAF) -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From jbaxter at colossus.cs.adelaide.edu.au Wed Mar 1 06:53:51 1995 From: jbaxter at colossus.cs.adelaide.edu.au (Jon Baxter) Date: Wed, 1 Mar 1995 22:23:51 +1030 (CST) Subject: Paper Available: The canonical metric in vector quantization Message-ID: <9503011153.AA27533@colossus.cs.adelaide.edu.au> The following paper is available via anonymous ftp from calvin.maths.flinders.edu.au:://pub/jon/quant.ps.Z FTP instructions are given at the end of the message. Title: The Canonical Metric For Vector Quantization, 8 pages. Author: Jonathan Baxter Abstract: To measure the quality of a set of vector quantization points a means of measuring the distance between two points is required. Common metrics such as the {\em Hamming} and {\em Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is argued that there often exists a natural {\em environment} of functions to the quantization process (for example, the word classifiers in speech recognition and the character classifiers in character recognition) and that such an enviroment induces a {\em canonical metric} on the space being quantized. It is proved that optimizing the {\em reconstruction error} with respect to the canonical metric gives rise to optimal approximations of the functions in the environment, so that the canonical metric can be viewed as embodying all the essential information relevant to learning the functions in the environment. Techniques for {\em learning} the canonical metric are discussed, in particular the relationship between learning the canonical metric and {\em internal representation learning}. FTP Instructions: unix> ftp calvin.maths.flinders.edu.au (or 129.96.32.2) login: anonymous password: (your e-mail address) ftp> cd pub/jon ftp> binary ftp> get quant.ps.Z ftp> quit unix> uncompress quant.ps.Z unix> lpr quant.ps (or however you print) From njm at cupido.inesc.pt Wed Mar 1 11:28:36 1995 From: njm at cupido.inesc.pt (njm@cupido.inesc.pt) Date: Wed, 01 Mar 95 16:28:36 +0000 Subject: 2nd CFP: EPIA'95 Message-ID: <9503011628.AA00912@cupido.inesc.pt> EPIA'95 - 2nd CALL FOR PAPERS SEVENTH PORTUGUESE CONFERENCE ON ARTIFICIAL INTELLIGENCE Funchal, Madeira Island, Portugal October 3-6, 1995 (Under the auspices of the Portuguese Association for AI) SUBMISSION DEADLINE: March 20, 1995 The Seventh Portuguese Conference on Artificial Intelligence (EPIA'95) will be held at Funchal, Madeira Island, Portugal, on October 3-6, 1995. As in previous issues ('89, '91, and '93), EPIA'95 will be run as an international conference, English being the official language. The scientific program encompasses tutorials, invited lectures, demonstrations, and paper presentations. Five well known researchers will present invited lectures. The conference is devoted to all areas of Artificial Intelligence and will cover both theoretical and foundational issues and applications as well. Parallel workshops on Expert Systems, Fuzzy Logic and Neural Networks, and Applications of A.I. to Robotics and Vision Systems will run simultaneously (see below). INVITED LECTURERS ~~~~~~~~~~~~~~~~~ In this issue of the conference,four special invited lectures will promote a debate on the very foundations of Artificial Intelligence, its approaches and results. It is an honour to announce the invited lecturers and the corresponding talks: "Why Human Brains Can't Really Think", by Marvin Minsky (MIT-USA); "Planning and Learning in Intelligent Agents", by Manuela Veloso (CMU-USA); "The Connectionist Paradigm and AI", by Borges de Almeida (IST-Portugal); "The Evolutionist Approach - Past, Present, and Future of AI", by Rodney Brooks (MIT-USA). TUTORIALS ~~~~~~~~~ In this issue of the conference, four tutorials will be delivered: "Artificial Life and Autonomous Robots", by Luc Steels (VUB AI Lab-Belgium); "Virtual Reality - The AI perspective", by David Hogg (Univ. of Leeds-UK); "Introduction to Artificial Intelligence", by Ernesto Costa (Univ. of Coimbra-Portugal); (in Portuguese) "Design of Expert Systems", by Ernesto Morgado (IST-Portugal); (in Portuguese) SUBMISSION OF PAPERS ~~~~~~~~~~~~~~~~~~~~ Authors must submit five (5) complete printed copies of their papers to the "EPIA'95 submission address". Fax or electronic submissions will not be accepted. Submissions must be printed on A4 or 8 1/2"x11" paper using 12 point type. Each page must have a maximum of 38 lines and an average of 75 characters per line (corresponding to the LaTeX article-style, 12 point). Double-sided printing is strongly encouraged. The body of submitted papers must be at most 12 pages, including title, abstract, figures, tables, and diagrams, but excluding the title page and bibliography. ELECTRONIC ABSTRACT ~~~~~~~~~~~~~~~~~~~ In addition to submitting the paper copies, authors should send to epia95-abstracts at inesc.pt a short (200 words) electronic abstract of their paper to aid the reviewing process. The electronic abstract must be in plain ASCII text (no LaTeX)) in the following format: TITLE: <title of the paper> FIRST AUTHOR: <last name, first name> EMAIL: <email of the first author> FIRST ADDRESS: <first author address> COAUTHORS: <their names, if any> KEYWORDS: <keywords separated by commas> ABSTRACT: <text of the abstract> Authors are requested to select 1-3 appropriate keywords from the list below. Authors are welcome to add additional keywords descriptors as needed. Applications, agent-oriented programming, automated reasoning, belief revision, case-based reasoning, common sense reasoning, constraint satisfaction, distributed AI, expert systems, genetic algorithms, knowledge representation, logic programming, machine learning, natural language understanding, nonmonotonic reasoning, planning, qualitative reasoning, real-time systems, robotics, spatial reasoning, theorem proving, theory of computation, tutoring systems. REVIEW OF PAPERS ~~~~~~~~~~~~~~~~ Submissions will be judged on significance, originality, quality and clarity. Reviewing will be blind to the identities of the authors. This requires that authors exercise some care not to identify themselves in their papers. Each copy of the paper must have a title page, separated from the body of the paper, including the title of the paper, the names and addresses of all authors, a list of content areas (see above) and any acknowledgments. The second page should include the same title, a short abstract of less than 200 words, and the exact same contents areas, but not the names nor affiliations of the authors. This page may include text of the paper. The references should include all published literature relevant to the paper, including previous works of the authors, but should not include unpublished works of the authors. When referring to one's own work, use the third person. For example, say "previously, Peter [17] has shown that ...". Try to avoid including any information in the body of the paper or references that would identify the authors or their institutions. Such information can be added to the final camera-ready version for publication. Please do not staple the title page to the body of the paper. Submitted papers must be unpublished. PUBLICATION ~~~~~~~~~~~ The proceedings will be published by Springer-Verlag (lecture notes in A.I. series). Authors will be required to transfer copyright of their paper to Springer-Verlag. ASSOCIATED WORKSHOPS ~~~~~~~~~~~~~~~~~~~~ In the framework of the conference three workshops will be organized: Applications of Expert Systems, Fuzzy Logic and Neural Networks in Engineering, and Applications of Artificial Intelligence to Robotics and Vision Systems. Real world applications, running systems, and demos are welcome. CONFERENCE & PROGRAM CO-CHAIRS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Carlos Pinto-Ferreira Nuno Mamede Instituto Superior Tecnico Instituto Superior Tecnico ISR, Av. Rovisco Pais INESC, Apartado 13069 1000 Lisboa, Portugal 1000 Lisboa, Portugal Voice: +351 (1) 8475105 Voice: +351 (1) 310-0234 Fax: +351 (1) 3523014 Fax: +351 (1) 525843 Email: cpf at kappa.ist.utl.pt Email: njm at inesc.pt PROGRAM COMMITTEE ~~~~~~~~~~~~~~~~~ Antonio Porto (Portugal) Lauiri Carlson (Finland) Benjamin Kuipers (USA) Luc Steels (Belgium) Bernhard Nebel (Germany) Luigia Aiello (Italy) David Makinson (Germany) Luis Moniz Pereira (Portugal) Erik Sandewall (Sweden) Luis Monteiro (Portugal) Ernesto Costa (Portugal) Manuela Veloso (USA) Helder Coelho (Portugal) Maria Cravo (Portugal) Joao Martins (Portugal) Miguel Filgueiras (Portugal) John Self (UK) Yoav Shoham (USA) Jose Carmo (Portugal) Yves Kodratoff (France) DEADLINES ~~~~~~~~~ Papers Submission: ................. March 20, 1995 Notification of acceptance: ........ May 15, 1995 Camera Ready Copies Due: ........... June 12, 1995 SUBMISSION & INQUIRIES ADDRESS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ EPIA95 INESC, Apartado 13069 1000 Lisboa, Portugal Voice: +351 (1) 310-0325 Fax: +351 (1) 525843 Email: epia95 at inesc.pt SUPPORTERS ~~~~~~~~~~ Banco Nacional Ultramarino Governo Regional da Madeira Instituto Superior Tecnico SISCOG - Sistemas Cognitivos INESC CITMA IBM TAPair Portugal PLANNING TO ATTEND ~~~~~~~~~~~~~~~~~~ People planning to submit a paper or/and to attend the conference or attend a workshop are asked to complete and return the following form (by fax or email) to the inquiries address standing their intention. It will help the conference organizers to estimate the facilities needed for the conference and will enable all interested people to receive updated information. +----------------------------------------------------------------+ | REGISTRATION OF INTEREST | | | | Title . . . . . Name . . . . . . . . . . . . . . . . . . . . | | Institution . . . . . . . . . . . . . . . . . . . . . . . . . | | Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . | | Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . | | Country . . . . . . . . . . . . . . . . . . . . . . . . . . . | | Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . . | | Email address. . . . . . . . . . . . . . . . . . . . . . . . . | | I intend to submit a paper (yes/no). . . . . . . . . . . . . . | | I intend to participate only (yes/no). . . . . . . . . . . . . | | I will travel with ... guests | +----------------------------------------------------------------+ From ajit at uts.cc.utexas.edu Wed Mar 1 13:05:47 1995 From: ajit at uts.cc.utexas.edu (Ajit Dingankar) Date: Wed, 1 Mar 1995 12:05:47 -0600 Subject: A Note on Error Bounds for Approximation in Inner Product Spaces Message-ID: <199503011805.MAA30724@curly.cc.utexas.edu> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.error-bounds.ps.Z BiBTeX entry: @ARTICLE{atd15, AUTHOR = "Dingankar, A. T. and Sandberg, I. W.", TITLE = "{A Note on Error Bounds for Approximation in Inner Product Spaces}", JOURNAL = "Circuits, Systems and Signal Processing", VOLUME = {}, NUMBER = {}, PAGES = {}, YEAR = {1996}, } A Note on Error Bounds for Approximation in Inner Product Spaces ---------------------------------------------------------------- ABSTRACT In a recent paper a method is described for constructing certain approximations to a general element in the closure of the convex hull of a subset of an inner product space. This is of interest in connection with neural networks. Here we give an algorithm that generates simpler approximants with somewhat less computational cost. From esann at dice.ucl.ac.be Wed Mar 1 11:18:49 1995 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Wed, 1 Mar 1995 18:18:49 +0200 Subject: Neural Processing Letters: abstracts on WWW Message-ID: <199503011715.SAA03612@ns1.dice.ucl.ac.be> ------------------------- Neural Processing Letters Abstracts on WWW server ------------------------- According to the large number of requests that we received, all abstracts of papers in published issues of Neural Processing Letters are now available on the WWW and FTP servers of the journal. You may connect to these servers at the following addresses: - FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL - WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html Subscription to the journal is also now possible by credit card. If you have no access to these servers, or for any other information (subscriptions, instructions for authors, free sample copies,...), please don't hesitate to contact directly the publisher: D facto publications 45 rue Masui B-1210 Brussels Belgium Phone: + 32 2 245 43 63 Fax: + 32 2 245 46 94 _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________ From ajit at uts.cc.utexas.edu Wed Mar 1 13:14:20 1995 From: ajit at uts.cc.utexas.edu (Ajit Dingankar) Date: Wed, 1 Mar 1995 12:14:20 -0600 Subject: On Approximation of Linear Functionals on L_p Spaces Message-ID: <199503011814.MAA18317@curly.cc.utexas.edu> **DO NOT FORWARD TO OTHER GROUPS** Sorry, no hardcopies available. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.linear-functionals.ps.Z BiBTeX entry: @ARTICLE{atd16, AUTHOR = "Sandberg, I. W. and Dingankar, A. T.", TITLE = "{On Approximation of Linear Functionals on $L_p$ Spaces}", JOURNAL = "IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications", VOLUME = {}, NUMBER = {}, PAGES = {}, YEAR = "1995", } On Approximation of Linear Functionals on L_p Spaces ---------------------------------------------------- ABSTRACT In a recent paper certain approximations to continuous nonlinear functionals defined on an $L_p$ space $ (1 < p < \infty) $ are shown to exist. These approximations may be realized by sigmoidal neural networks employing a linear input layer that implements finite sums of integrals of a certain type. In another recent paper similar approximation results are obtained using elements of a general class of continuous linear functionals. In this note we describe a connection between these results by showing that every continuous linear functional on a compact subset of $L_p$ may be approximated uniformly by certain finite sums of integrals. We also describe the relevance of this result to the approximation of continuous nonlinear functionals with neural networks. From mas at isca.pdial.interpath.net Wed Mar 1 12:52:38 1995 From: mas at isca.pdial.interpath.net (Mary Ann Sullivan) Date: Wed, 1 Mar 1995 12:52:38 -0500 Subject: CALL FOR PAPERS: ISCA INT'L CONF ON COMPUTER APPLICATIONS IN INDUSTRY & ENGINEERING Nov. 29 - Dec. 1, 1995 Honolulu, Hawaii Message-ID: <mailman.746.1149591330.29955.connectionists@cs.cmu.edu> A non-text attachment was scrubbed... Name: not available Type: multipart/mixed Size: 3717 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/a9bdef4a/attachment.bin From thodberg at nn.dmri.dk Wed Mar 1 14:02:22 1995 From: thodberg at nn.dmri.dk (Hans Henrik Thodberg) Date: Wed, 1 Mar 1995 20:02:22 +0100 Subject: TR: Ace-of-Bayes with ARD Message-ID: <9503011902.AA02531@dmri.dk> The following 33 pages long manuscript is now available by ftp: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/thodberg.bayes-ard.ps.Z or URL (WWW): ftp://archive.cis.ohio-state.edu/pub/neuroprose/thodberg.bayes-ard.ps.Z Hardcopies are not avaliable. ---------------------------------------------------------------------------- A Review of Bayesian Neural Networks with an Application to Near Infrared Spectroscopy. Hans Henrik Thodberg The Danish Meat Research Institute Abstract MacKay's Bayesian framework for backpropagation is a practical and powerful means to improve the generalisation ability of neural networks. It is based on a Gaussian approximation to the posterior weight distribution. The framework is extended, reviewed and demonstrated in a pedagogical way. The notation is simplified using the ordinary weight decay parameter, and a detailed and explicit procedure for adjusting several weight decay parameters is given. Bayesian backprop is applied in the prediction of fat content in minced meat from near infrared spectra. It outperforms ``early stopping'' as well as quadratic regression. The evidence of a committee of differently trained networks is computed, and the corresponding improved generalisation is verified. The error bars on the predictions of the fat content are computed. There are three contributors: The random noise, the uncertainty in the weights, and the deviation among the committee members. The Bayesian framework is compared to Moody's GPE. Finally, MacKay and Neal's Automatic Relevance Determination, in which the weight decay parameters depend on the input number, is applied to the data with improved results. ---------------------------------------------------------------------------- The manuscript is a revised version of thodberg.ace-of-bayes.ps.Z which is also in neuroprose. The main changes are the following: Pruning has been taken out (it is treated in a separate paper), the treatment of committees is extended, and there is a new section demonstrating the powerful Automatic Relevance Determination. The data used in the paper are now available by ftp. The paper is submitted to IEEE Trans. on Neural Networks. Comments are welcome! ---------------------------------------------------------------------------- Hans Henrik Thodberg Email(NEW!!): thodberg at nn.dmri.dk Danish Meat Research Institute Phone: (+45) 42 36 12 00 Maglegaardsvej 2, Postboks 57 Fax: (+45) 42 36 48 36 DK-4000 Roskilde, Denmark ---------------------------------------------------------------------------- From Volker.Tresp at zfe.siemens.de Wed Mar 1 15:06:21 1995 From: Volker.Tresp at zfe.siemens.de (Volker Tresp) Date: Wed, 1 Mar 1995 21:06:21 +0100 Subject: 2 Papers available on combining estimators and missing data Message-ID: <199503012006.AA11490@train.zfe.siemens.de> The -2- files tresp.combining.ps.Z and tresp.effic_miss.ps.Z can now be copied from Neuroprose. The papers are 8 and 9 pages long. Hardcopies copies are not available. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/tresp.combining.ps.Z COMBINING ESTIMATORS USING NON-CONSTANT WEIGHTING FUNCTIONS by Volker Tresp and Michiaki Taniguchi Abstract: This paper discusses the linearly weighted combination of estimators in which the weighting functions are dependent on the input. We show that the weighting functions can be derived either by evaluating the input dependent variance of each estimator or by estimating how likely it is that a given estimator has seen data in the region of the input space close to the input pattern. The latter solution is closely related to the mixture of experts approach and we show how learning rules for the mixture of experts can be derived from the theory about learning with missing features. The presented approaches are modular since the weighting functions can easily be modified (no retraining) if more estimators are added. Furthermore, it is easy to incorporate estimators which were not derived from data such as expert systems or algorithms. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/tresp.effic_miss.ps.Z EFFICIENT METHODS FOR DEALING WITH MISSING DATA IN SUPERVISED LEARNING by Volker Tresp, Ralph Neuneier, and Subutai Ahmad Abstract: We present efficient algorithms for dealing with the problem of missing inputs (incomplete feature vectors) during training and recall. Our approach is based on the approximation of the input data distribution using Parzen windows. For recall, we obtain closed form solutions for arbitrary feedforward networks. For training, we show how the backpropagation step for an incomplete pattern can be approximated by a weighted averaged backpropagation step. The complexity of the solutions for training and recall is independent of the number of missing features. We verify our theoretical results using one classification and one regression problem. The papers will appear in G. Tesauro, D. S. Touretzky and T. K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. ________________________________________ Volker Tresp Siemens AG ZFE T SN4 81730 Munich Germany email: Volker.Tresp at zfe.siemens.de Phone: +49 89 636 49408 Fax: +49 89 636 3320 ________________________________________ From delliott at src.umd.edu Thu Mar 2 03:16:30 1995 From: delliott at src.umd.edu (David L. Elliott) Date: Thu, 2 Mar 1995 03:16:30 -0500 Subject: NN paper (identification of systems) available by anon ftp Message-ID: <199503020816.DAA23586@newra.src.umd.edu> The following paper, which has been accepted for an invited session at the 1995 American Control Conference, will be available until the conference (June 21 1995) by anonymous ftp to the ftp server for the Institute for Systems Research ftp://ftp.isr.umd.edu/pub/ISRTR/ps Filename: TR95-17.ps Title: "Reconstruction of nonlinear systems with delay lines and feedforward networks" Author: David L. Elliott, ISR, Univ. of Maryland, College Park Abstract: Nonlinear system theory ideas have led to a method for approximating the dynamics of a nonlinear system in a bounded region of its state space, by training a feedforward neural network which is then reconfigured in recursive mode to provide a stand-alone simulator of the original system. The input layer of the neural network contains time-delayed samples of one or more system outputs and control inputs. Autonomous systems can also be simulated in this way by providing impulse inputs. My apology for the 0.9M filesize-- the paper is only 5 pages, but assembling diverse PS text and figures was inefficient. Printing time is short, however. Criticisms and comments will be especially helpful if sent before June 1, and will be very welcome. David From srw1001 at eng.cam.ac.uk Thu Mar 2 08:51:45 1995 From: srw1001 at eng.cam.ac.uk (srw1001@eng.cam.ac.uk) Date: Thu, 2 Mar 95 13:51:45 GMT Subject: Paper availble : Non-linear Prediction using Hierarchical Mixtures of Experts. Message-ID: <9503021351.22738@fear.eng.cam.ac.uk> The following paper is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department and the Neuroprose archives. NON-LINEAR PREDICTION OF ACOUSTIC VECTORS USING HIERARCHICAL MIXTURES OF EXPERTS. Steve Waterhouse and Tony Robinson Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract In this paper we consider speech coding as a problem of speech modelling. In particular, prediction of parameterised speech over short time segments is performed using the Hierarchical Mixture of Experts (HME) \cite{JordanJacobs94}. The HME gives two advantages over traditional non-linear function approximators such as the Multi-Layer Perceptron (MLP); a statistical understanding of the operation of the predictor and provision of information about the performance of the predictor in the form of likelihood information and local error bars. These two issues are examined on both toy and real world problems of regression and time series prediction. In the speech coding context, we extend the principle of combining local predictions via the HME to a Vector Quantization scheme in which fixed local codebooks are combined on-line for each observation. To appear in Advances in Neural Information Processing Systems 7, edited by Gerald Tesauro, David Touretzky, and Todd Leen. ************************ How to obtain a copy ************************ a) via ftp from Cambridge University SVR: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get waterhouse_nips94.ps.Z ftp> quit unix> uncompress waterhouse_nips94.ps.Z unix> lpr waterhouse_nips94.ps (or however you print PostScript) b) via ftp from neuroprose archive: unix> ftp Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose/reports ftp> binary ftp> get waterhouse.nips94.ps.Z ftp> quit unix> uncompress waterhouse.nips94.ps.Z unix> lpr waterhouse.nips94.ps (or however you print PostScript) c) or email me: srw1001 at eng.cam.ac.uk d) (easiest) access my WWW page http://svr-www.eng.cam.ac.uk/~srw1001, where the file is symlinked. ----------------------------------------------------- Steve Waterhouse, Information Engineering, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK. Email: srw1001 at eng.cam.ac.uk Phone : (0223) 332800 World Wide Web: http://svr-www.eng.cam.ac.uk/~srw1001 From srw1001 at eng.cam.ac.uk Thu Mar 2 08:52:39 1995 From: srw1001 at eng.cam.ac.uk (srw1001@eng.cam.ac.uk) Date: Thu, 2 Mar 95 13:52:39 GMT Subject: Paper availble : Classification using Hierarchical Mixtures of Experts. Message-ID: <9503021352.22748@fear.eng.cam.ac.uk> The following paper is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department and the Neuroprose archives. CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS Steve Waterhouse and Tony Robinson Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract There has recently been widespread interest in the use of multiple models for classification and regression in the statistics and neural networks communities. The Hierarchical Mixture of Experts (HME) \cite{JordanJacobs94} has been successful in a number of regression problems, yielding significantly faster training through the use of the Expectation Maximisation algorithm. In this paper we extend the HME to classification and results are reported for three common classification benchmark tests: Exclusive-Or, N-input Parity and Two Spirals. Reference : In Proc. 1994 IEEE Workshop on Neural Networks for Signal Processing, pp 177-186. ************************ How to obtain a copy ************************ a) via ftp from Cambridge University SVR: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get waterhouse_hme.ps.Z ftp> quit unix> uncompress waterhouse_hme.ps.Z unix> lpr waterhouse_hme.ps (or however you print PostScript) b) via ftp from neuroprose archive: unix> ftp Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose/reports ftp> binary ftp> get waterhouse.hme_classification.ps.Z ftp> quit unix> uncompress waterhouse.hme_classification.ps.Z unix> lpr waterhouse.hme_classification.ps (or however you print PostScript) c) or email me: srw1001 at eng.cam.ac.uk d) (easiest) access my WWW page http://svr-www.eng.cam.ac.uk/~srw1001, where the file is symlinked. ----------------------------------------------------- Steve Waterhouse, Information Engineering, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK. Email: srw1001 at eng.cam.ac.uk Phone : (0223) 332800 World Wide Web: http://svr-www.eng.cam.ac.uk/~srw1001 From Francoise.Fogelman at laforia.ibp.fr Thu Mar 2 04:06:01 1995 From: Francoise.Fogelman at laforia.ibp.fr (FOGELMAN Francoise + 33 1 41 28 41 70) Date: Thu, 2 Mar 1995 10:06:01 +0100 Subject: ICANN'95 Message-ID: <199503020906.KAA20561@erato.ibp.fr> *************************************************************************** UPDATED BROCHURE *************************************************************************** ***************************************************************************** XXX XXXX X XX XX XX XX XX XXXXX XXXXXX X X X X XXX XX XXX XX XX X XX X X X XXXXX XX X XX XX X XX XXXXX XXXXX X X X X XX XXX XX XXX XX XXX XXX XXXX X X XX XX XX XX XXXX XXXXX PARIS, OCTOBER 9-13, 1995 Maison de la Chimie NEURAL NETWORKS AND THEIR APPLICATIONS ***************************************************************************** SCIENTIFIC CONFERENCE INDUSTRIAL CONFERENCE TUTORIALS & EXHIBITION organized by EUROPEAN NEURAL NETWORK SOCIETY ***************************************************************************** INFORMATION ***************************************************************************** Over the last four years, the ENNS - European Neural Network Society - has held its annual conference ICANN in Helsinki (1991), Brighton (1992), Amsterdam (1993) and Sorrento (1994). This conference has become the foremost meeting for the European neural network scientific community. In 1995, ENNS will hold the ICANN meeting in Paris. The format of this conference will include a scientific conference, an industrial conference, tutorials, industrial forums and an industrial exhibition. Our challenge, in organizing this conference, is to achieve the highest scientific quality for papers presented at the scientific conference (through a strict selection procedure), together with the most convincing set of applications presented at the industrial conference (only operational, top-level applications will be considered). Papers should stress the rationale of the Neural Network approach and provide a comparison with other techniques. We thus hope to demonstrate that Neural Networks are indeed a very deep and exciting field of research, as well as a most efficient, profitable technique for the industry. To achieve these goals, we seek contributions from all the scientists, both from academy and industry, who share our interests and our quality requirements. ***************************************************************************** CALL FOR PAPERS The conference will cover the following domains : SCIENTIFIC CONFERENCE * theory * algorithms & architectures * implementations (hardware & software) * cognitive sciences & AI * neurobiology * applications identification & control image processing & vision OCR speech & signal processing prediction optimization INDUSTRIAL CONFERENCE This conference will cover two main categories: on the one hand, descriptions of tools and methods and their use in real-life cases and, on the other, descriptions of concrete applications in industry and the sector of services. All fields of application are eligible. Special sessions will be organized on specific areas of industry such as: * banking, finance & insurance * telecommunications * teledetection * process engineering, control and monitoring * oil industry * power industry * food processing * transportation * robotics * speech processing * document processing, OCR, text retrieval & indexing * VLSI & dedicated hardware * forecasting & marketing * technical diagnosis * non destructive testing * medicine * defense LOCATION The conference will be held in la Maison de la Chimie, right in the center of Paris, near les Invalides. Built in 1707, for Frederic-Maurice de la Tour, Comte d'Auvergne, Lieutenant General to King Louis XIV, the Mansion has today become a Congress Center equipped with all the modern facilities. INSTRUCTIONS TO AUTHORS Length of papers: not exceeding 6 pages in A4 format (i. e. about 8,000 characters). An electronic format will be made available at : ftp lix.polytechnique.fr login: anonymous password : your e-mail address in the directory /pub/ICANN95/out, read file README for instructions. If you want to leave messages or enquiries, you can also use : in the directory /pub/ICANN95/in, read file README for instructions. Seven copies of the papers should reach the Conference Secretariat at the address below by ****** APRIL 15 1995 ***** : ICANN'95 1 avenue Newton bp 207 92 142 CLAMART Cedex France Fax: +33 - 1 - 41 28 45 84 Submitted papers should be accompanied by a cover page giving: * the title of the paper and the author(s) name(s), * the author's address, phone number and extension, fax number and, if possible, e-mail address, * a 10-line abstract together with a list of key-words, * an indication of which conference the paper should be included in: scientific or industrial LANGUAGE Papers submitted for the scientific conference should be in English. Papers submitted for the industrial conference may be either in English or French. TUTORIALS Tutorials will be organized. The Program Committee is open to proposals for tutorials covering industrial applications. Suggestions should describe the content of the tutorial (in 150-200 words) and the instructor's expertise and experience in the field concerned. The deadline for reception is MAY 15 1995. EXHIBITION From wolpert at psyche.mit.edu Thu Mar 2 16:06:40 1995 From: wolpert at psyche.mit.edu (Daniel Wolpert) Date: Thu, 2 Mar 95 16:06:40 EST Subject: Two NIPS preprints on motor control Message-ID: <9503022106.AA18741@psyche.mit.edu> The following two papers will appear in G. Tesauro, D.S. Touretzky and T.K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. The papers combine computational and psychophysical approaches to human motor control. Daniel Wolpert wolpert at psyche.mit.edu ----------------------------------------------------------------------- Forward dynamic models in human motor control: Psychophysical evidence Daniel Wolpert, Zoubin Ghahramani & Michael Jordan Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 Based on computational principles, with as yet no direct experimental validation, it has been proposed that the central nervous system (CNS) uses an internal model to simulate the dynamic behavior of the motor system in planning, control and learning. We present experimental results and simulations based on a novel approach that investigates the temporal propagation of errors in the sensorimotor integration process. Our results provide direct support for the existence of an internal model. FTP-host: psyche.mit.edu FTP-filename: /pub/wolpert/forward.ps.Z URL: ftp://psyche.mit.edu/pub/wolpert/forward.ps.Z 8 pages long [163K compressed]. ----------------------------------------------------------------------- Computational structure of coordinate transformations: A generalization study Zoubin Ghahramani, Daniel Wolpert & Michael Jordan Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 One of the fundamental properties that both neural networks and the central nervous system share is the ability to learn and generalize from examples. While this property has been studied extensively in the neural network literature it has not been thoroughly explored in human perceptual and motor learning. We have chosen a coordinate transformation system---the visuomotor map which transforms visual coordinates into motor coordinates---to study the generalization effects of learning new input--output pairs. Using a paradigm of computer controlled altered visual feedback, we have studied the generalization of the visuomotor map subsequent to both local and context-dependent remappings. A local remapping of one or two input-output pairs induced a significant global, yet decaying, change in the visuomotor map, suggesting a representation for the map composed of units with large functional receptive fields. Our study of context-dependent remappings indicated that a single point in visual space can be mapped to two different finger locations depending on a context variable---the starting point of the movement. Furthermore, as the context is varied there is a gradual shift between the two remappings, consistent with two visuomotor modules being learned and gated smoothly with the context. FTP-host: psyche.mit.edu FTP-filename: /pub/wolpert/coord.ps.Z URL: ftp://psyche.mit.edu/pub/wolpert/coord.ps.Z 8 pages long [218K compressed]. From rjb at psy.ox.ac.uk Fri Mar 3 09:16:15 1995 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 3 Mar 1995 14:16:15 GMT Subject: Job available at University of Oxford Message-ID: <199503031416.OAA05484@axp01.mrc-bbc.ox.ac.uk> A new position available that may interest reader of Connectionists. Please feel free to pass to other boards or colleagues, and please send replies to erolls at psy.ox.ac.uk and not to this address. Roland Baddeley UNIVERSITY OF OXFORD DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY Post in Computational Neuroscience The following post is available as part of a long-term research programme combining computational and neurophysiological approaches to the brain mechanisms of vision and memory: Computational neuroscientist (RS1A) to make formal network models and/or analyse by neural network simulation the functions of visual cortical areas and the hippocampus. The salary is on the RS1A (postdoctoral) scale 13,941-20,953 pounds, with support provided by a Programme Grant, and is available from April 1995. Applications including the names of two referees, or enquiries for further details, to Dr. Edmund T. Rolls, University of Oxford, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England (telephone 01865-271348, email erolls at psy.ox.ac.uk). The University is an Equal Opportunities Employer. An introduction to some of the work is provided in the following: Rolls,E.T. and Treves,A. (1994) Neural networks in the brain involved in memory and recall. Progress in Brain Research 102: 335-341. or Treves,A. and Rolls,E.T. (1994) A computational analysis of the role of the hippocampus in memory. Hippocampus 4: 374-391. Rolls,E.T. (1994) Brain mechanisms for invariant visual recognition and learning. Behavioural Processes 33: 113- 138. or Rolls,E.T. (1995) Learning mechanisms in the temporal lobe visual cortex. Behavioural Brain Research 66: 177-185. From CHRIS at gauss.cam.wits.ac.za Sat Mar 4 18:09:28 1995 From: CHRIS at gauss.cam.wits.ac.za (Christopher Gordon) Date: Sat, 4 Mar 1995 18:09:28 GMT+0200 Subject: TR Available: The Use of Cross-Validation in Neural Network Ext Message-ID: <6F911CD32FC@gauss.cam.wits.ac.za> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/gordon.extrapolation.ps.Z -------------------------------------------------------------------- The following 12 page paper is now available by ftp: The Use of Cross-Validation in Neural Network Extrapolation of Forest Tree Growth. C. Gordon, University of the Witwatersrand. Presented at PRASA (1994). Abstract: A back-propagation multilayer Artificial Neural Network (ANN) is used to model the mean growth rate of different plots of Pine trees. Cross-validation is found to be essential in providing a good extrapolation. Predictions are made for different plots which have different initial densities and different artificial thinning regimes. Comparisons are made with a standard nonlinear regression solution. * FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: (email address) ftp> cd pub/neuroprose ftp> binary ftp> get gordon.extrapolation.ps.Z ftp> quit unix> uncompress gordon.extrapolation.ps.Z unix> lpr gordon.extrapolation.ps (or however you print postscript) ****************************************************************** * Christopher Gordon * * Department of Applied Maths * * University of the Witwatersrand * * Internet : CHRIS at gauss.cam.wits.ac.za * * Snail Mail : PO Box 65684, Benmore 2010, South Africa * * Tel: (011)716-3229(W) * * * ****************************************************************** From seung at physics.att.com Fri Mar 3 15:01:43 1995 From: seung at physics.att.com (seung@physics.att.com) Date: Fri, 3 Mar 95 15:01:43 EST Subject: preprint: Local and Global Convergence of On-line Learning Message-ID: <9503032001.AA27097@physics.att.com> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/barkai.local.ps.Z The file barkai.local.ps.Z is now available at ftp://archive.cis.ohio-state.edu/pub/neuroprose/barkai.local.ps.Z Local and Global Convergence of On-Line Learning N. Barkai Racah Inst. of Physics, Hebrew Univ. of Jerusalem H. S. Seung AT&T Bell Laboratories H. Sompolinsky Racah Inst. of Physics, Hebrew Univ. of Jerusalem AT&T Bell Laboratories We study the performance of an generalized perceptron algorithm for learning realizable dichotomies, with an error-dependent adaptive learning rate. The asymptotic scaling form of the solution to the associated Markov equations is derived, assuming certain smoothness conditions. We show that the system converges to the optimal solution and the generalization error asymptotically obeys a univeral inverse power law in the number of examples. The system is capable of escaping from local minima, and adapts rapidly to shifts in the target function. The general theory is illustrated for the perceptron and committee machine. From alex at CompApp.DCU.IE Sat Mar 4 04:55:00 1995 From: alex at CompApp.DCU.IE (alex@CompApp.DCU.IE) Date: Sat, 4 Mar 95 09:55:00 GMT Subject: Call For Papers Message-ID: <9503040955.AA14068@janitor.compapp.dcu.ie> PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST! Call for Papers for the Fourth International Conference on The COGNITIVE SCIENCE of NATURAL LANGUAGE PROCESSING Dublin City University, 5-7 July 1995 Subject Areas: This is a non-exclusive list of subjects which fall within the scope of CSNLP. It is intended as a guide only. * Corpus-based NLP * Connectionist NLP * Statistical and knowledge-based MT * Linguistic knowledge representation * Cognitive linguistics * Declarative approaches to NLP * NLG and NLU * Dialogue and discourse * Human language processing * Text linguistics * Evaluation of NLP * Hybrid approaches to NLP Submissions may deal with theoretical issues, applications, databases or other aspects of CSNLP, but the importance of cognitive aspects should be borne in mind. Papers should report original substantive research. Theme: The Role of Syntax There is currently considerable debate regarding the place and importance of syntax in NLP. Papers dealing with this matter will be given preference. Invited Speakers: The following speakers have agreed to give keynote talks: Mark Steedman, University of Pennsylvania Alison Henry, University of Ulster Registration and Accommodation: The registration fee will be IR#60, and will include proceedings, lunches and one evening meal. Accommodation can be reserved in the campus residences at DCU. A single room is IR#16 per night, with full Irish breakfast an additional IR#4. Accommodation will be "First come, first served": there is a heavy demand for campus rooms in the summer. There are also several hotels and B&B establishments nearby: addresses will be provided on request. To register, contact Alex Monaghan at the addresses given below. Payment in advance is possible but not obligatory. Please state gender (for accommodation purposes) and any unusual dietary requirements. Submission of Abstracts: Those wishing to present a paper at CSNLP should submit a 400-word abstract to arrive not later than 10/4/95. Abstracts should give the author's full name and address, with Email address if possible, and should be sent to: CSNLP Alex Monaghan School of Computer Applications Dublin City University Dublin 9 Ireland Email submissions are preferred, plain ASCII text please to: --------- alex at compapp.dcu.ie (internet) Completed papers should be around 8 pages long, although longer papers will be considered if requested. Camera-ready copy must be submitted to arrive in Dublin by 19/6/94. No particular conference style will be imposed, but papers should be legible (12pt laser printed) and well-structured. Deadlines: 10th April --- Submission of 400-word abstract 1st May --- Notification of acceptance 19th June --- Deadline for receipt of camera-ready paper (c.8 pages) 26th June --- Final date for registration, accommodation, meals etc. From CHRIS at gauss.cam.wits.ac.za Tue Mar 7 20:06:22 1995 From: CHRIS at gauss.cam.wits.ac.za (Christopher Gordon) Date: Tue, 7 Mar 1995 20:06:22 GMT+0200 Subject: TR Available: The Use of Cross-Validation in Neural Network Message-ID: <74306A56168@gauss.cam.wits.ac.za> I apologize to anyone who tried to unsuccesfully uncompress the file gordon.extrapolation.ps.Z which is advertized below. It was compressed with gzip not unix compress. gzip can be used to uncompress it. I will be replacing it with a unix compressed version as soon as possible. I will post another message when I have do so. > FTP-host: archive.cis.ohio-state.edu > FTP-filename: /pub/neuroprose/gordon.extrapolation.ps.Z > > -------------------------------------------------------------------- > > The following 12 page paper is now available by ftp: > > The Use of Cross-Validation in Neural Network > Extrapolation of Forest Tree Growth. > > C. Gordon, University of the Witwatersrand. > > Presented at PRASA (1994). > > Abstract: > A back-propagation multilayer Artificial Neural > Network (ANN) is used to model the mean growth > rate of different plots of Pine trees. > Cross-validation is found to be essential in > providing a good extrapolation. Predictions are > made for different plots which have different > initial densities and different artificial thinning > regimes. Comparisons are made with a > standard nonlinear regression solution. > > > * FTP procedure: > unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) > Name: anonymous > Password: (email address) > ftp> cd pub/neuroprose > ftp> binary > ftp> get gordon.extrapolation.ps.Z > ftp> quit ****************************************************************** * Christopher Gordon * * Department of Applied Maths * * University of the Witwatersrand * * Internet : CHRIS at gauss.cam.wits.ac.za * * Snail Mail : PO Box 65684, Benmore 2010, South Africa * * Tel: (011)716-3229(W) * * * ****************************************************************** From bernardo at esaii.upc.es Tue Mar 7 18:12:46 1995 From: bernardo at esaii.upc.es (Bernardo Morcego) Date: Tue, 7 Mar 1995 18:12:46 UTC+0200 Subject: Modular NN Message-ID: <273*/S=bernardo/OU=esaii/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS> Dear connectionists, We are interested in the study and developement of Modular Artificial Neural Networks. Our aim is to study the influence of the interaction between modules in the learning process and show the beneffits of building networks using Neural Modules. We use the resulting archi- tectures in non-linear dynamic systems identification. We found several applications in which the use of modules was carried out in a building block fashion: each module had a complex function to learn and, after training, was assembled with the others. But this is not exactly what we are interested in. Our object is the design of neural network modules and the study of their relationships and the bibliography we were able to find is given at the end of this message. We would be very grateful if anyone aware of related work could send us any pointer. All recieved replies will be collected and summarized. Thanks in advance, Bernardo. ----------------------------------------------------------------------------- Bernardo Morcego (ESAII) FIB - Universitat Politecnica de Catalunya FAX: (34-3) 401 70 40 c/Pau Gargallo 5 Tel: (34-3) 401 69 92 08028 Barcelona (Spain) email bernardo at esaii.upc.es ----------------------------------------------------------------------------- Y. Bennani, P. Gallinari, Task Decomposition Through a Modular Con- nectionist Architecture: a Talker Identification System, Artificial Neural Networks 2, I Aleksander and J. Taylor (Eds), Elsevier, 1992. E.J.W Boers, H. Kuiper, Biological Metaphors and the Design of Modular Artificial Neural Networks, Master's Thesis, Leiden University, 1992. F. Fogelman, E. Viennet, B. Lamy, Multi-Modular Neural Network Archi- tectures: Applications in Optical Character and Human Face Recognition, Int. Journal of Pattern Recognition and Artificial Intelligence vol 7, No 4: 721-755, 1993. F. Gruau, D. Whitley, The Cellular Developmental of Neural Networks: the Interaction of Learning and Evolution, Ecole Normale Superieure de Lyon, Research Report 93-04, 1993. B.L.M. Happel, J.M.J Murre, Designing Modular Network Architectures Using a Genetic Algorithm, Artificial Neural Networks 2, I Aleksander and J. Taylor (Eds), Elsevier, 1992. R.A. Jacobs, Task Decomposition Through Competition in a Modular Con- nectionist Architecture, University of Massachusetts, COINS TR 90-44, 1990. R.A. Jacobs and M.I. Jordan, Learning Piecewise Control Strategies in a Modular Neural Network Architecture, IEEE Trans. on Systems, Man, and Cybernetics, Vol 23, No 2, 1993. R. Miikkulainen and M.G. Dyer, Natural Language Processing with Modular PDP Networks and Distributed Lexicon, Cognitive Science 15(3): 343 - 399, 1991. D.C. Plaut, Double Dissociation without Modularity: Evidence from Connectionist Neuropsychology, to appear in Journal of Clinical and Experimental Neuropsycology, 1994. Frank Smieja, The Pandemonium System of Reflective Agents, 1993. A. Waibel, Modular Construction of Time-Delay Neural Networks for Speech Recognition, Neural Computation 1, 39-46, 1989. ------------------------------------------------------------------------------ From konig at ICSI.Berkeley.EDU Tue Mar 7 16:07:05 1995 From: konig at ICSI.Berkeley.EDU (Yochai Konig) Date: Tue, 7 Mar 1995 13:07:05 -0800 Subject: TR announcement Message-ID: <199503072107.NAA07049@icsib35.ICSI.Berkeley.EDU> Hi, A revised and corrected version of the following TR is available from our FTP site. The changes all are concerned with some technical modifications of our convergence proof, particularly for Theorem 3. --Yochai =========================== TR announcement ================================= REMAP: Recursive Estimation and Maximization of A Posteriori Probabilities ---Applications to Transition-based Connectionist Speech Recognition--- by H. Bourlard, Y. Konig & N. Morgan Intl. Computer Science Institute 1947 Center Street, Suite 600 Berkeley, CA 94704 email: bourlard,konig,morgan at icsi.berkeley.edu ICSI Technical Report TR-94-064 Abstract In this paper, we describe the theoretical formulation of REMAP, an approach for the training and estimation of posterior probabilities using a recursive algorithm that is reminiscent of the EM (Expectation Maximization) algorithm [dempster77] for the estimation of data likelihoods. Although very general, the method is developed in the context of a statistical model for transition-based speech recognition using Artificial Neural Networks (ANN) to generate probabilities for hidden Markov models (HMMs). In the new approach, we use local conditional posterior probabilities of transitions to estimate global posterior probabilities of word sequences given acoustic speech data. Although we still use ANNs to estimate posterior probabilities, the network is trained with targets that are themselves estimates of local posterior probabilities. These targets are iteratively re-estimated by the REMAP equivalent of the forward and backward recursions of the Baum-Welch algorithm [baum70,baum72] to guarantee regular increase (up to a local maximum) of the global posterior probability. Convergence of the whole scheme is proven. Unlike most previous hybrid HMM/ANN systems that we and others have developed, the new formulation determines the most probable word sequence, rather than the utterance corresponding to the most probable state sequence. Also, in addition to using all possible state sequences, the proposed training algorithm uses posterior probabilities at both local and global levels and is discriminant in nature. The postscript file of the full technical report (68 pages) can be copied from our (anonymous) ftp site as follows: ftp ftp.icsi.berkeley.edu username= anonymous passw= your email address cd pub/techreports/1994 binary get tr-94-064.ps.Z ===================================================================== From goodman at unr.edu Wed Mar 8 00:54:02 1995 From: goodman at unr.edu (Phil Goodman) Date: Tue, 7 Mar 1995 21:54:02 -0800 (PST) Subject: PostDoc in NN Programming Message-ID: <199503080554.AA18040@equinox.unr.edu> ******* Position Announcement ******* POSTDOCTORAL FELLOWSHIP IN ARTIFICIAL NEURAL NETWORK PROGRAMMING Center for Biomedical Modeling Research University of Nevada, Lake Tahoe/Reno LOCATION: The University of Nevada Center for Biomedical Modeling Research (CBMR), located at the base of the Sierra Nevada Mountains near Lake Tahoe, is an interdisciplinary research institute involving the Departments of Medicine, Electrical Engineering, and Computer Science. Under federal funding, CBMR faculty and collaborators apply neural network and advanced probabilistic/statistical concepts to large health care databases. In particular, they are developing methods to: (1) improve the accuracy of predicting surgical mortality, (2) interpret nonlinearities and interactions among predictors, and (3) manage missing data. QUALIFICATIONS: Candidates considered for this position must have: (1) strong programming skills in the C language, (2) substantial operating experience in a UNIX environment, (3) familiarity with multilayer perceptron neural network architectures, and (4) a PhD in a related field. In addition, experience in probability, statistical regression, optimization, and/or functional approximation theory will be considered favorably. EFFORT: Approximately 80% of effort will be devoted programming neural network software. The remaining 20% of effort will be available for literature review, manuscript preparation, and attending scientific conferences (supported). DURATION: This fellowship will begin in April, 1995. The duration will be 12 months, extendible upon mutual agreement for an additional 6 months. APPLICATION: If interested, please send the following by plain-text electronic mail (preferred), surface mail, or FAX: (1) a cover letter detailing your interests and qualifications, (2) your resume, and, (3) the names and phone numbers (or email addresses) of three references. ___________________________________________________________________________ Philip H. Goodman, MD, MS E-mail: goodman at unr.edu Associate Professor of Medicine, Electrical Engineering, & Computer Science Director University of Nevada Center for Biomedical Modeling Research World-Wide Web: http://www.scs.unr.edu/~cbmr/ Washoe Medical Center, 77 Pringle Way, Reno, Nevada 89520 USA Voice: +1 702 328-4869 FAX: +1 702 328-4871 ___________________________________________________________________________ The University of Nevada, Reno is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, color, religion, sex, age, creed, national origin, veteran status, physical or mental disability, and in accordance with University policy, sexual orientation, in any program or activity it operates. The University of Nevada employs only United States citizens and aliens lawfully authorized to work in the United States. From juergen at idsia.ch Wed Mar 8 05:53:31 1995 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 8 Mar 95 11:53:31 +0100 Subject: research positions available Message-ID: <9503081053.AA00750@fava.idsia.ch> +------------------------------+ | Research positions available | +------------------------------+ Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) in beautiful Lugano (Switzerland) is starting up research projects in (1) reinforcement learning and evolutionary computation, and (2) supervised and unsupervised neural nets. We offer research positions at various levels (beginning now): 1. 1 postdoc or up to 2 PhD students, 2. A few undergrad students. The initial contracts will last until the end of this year, with a possible extension of 2 more years. Temporary visits (for a few months) by visiting researchers/students are also possible. Send CV, list of publications (if applicable), brief statement of research interests, and names/email addresses of references to: juergen at idsia.ch Postscript preferred. DEADLINE: March 31 1995. Earlier applications preferred. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ To check whether you share our research interests, have a look at these recent papers: 1. ``On learning how to learn learning strategies'' FTP-host: flop.informatik.tu-muenchen.de (131.159.8.35) FTP-filename: /pub/fki/fki-198-94.ps.gz (use gunzip to uncompress) 2. ``Flat minimum search finds simple nets'' (fki-200-94.ps.gz) 3. ``Semilinear predictability minimization produces orientation sensitive edge detectors'' (fki-201-94.ps.gz) 4. ``Ant_Q: A Reinforcement Learning Approach to Combinatorial Optimization'' FTP-host: fava.idsia.ch (192.132.252.1) FTP-filename: /pub/ant_q/TR.09-ANT-Q.ps.gz Papers 1-3 and others (and a list of publications) can also be retrieved from http://papa.informatik.tu-muenchen.de/mitarbeiter/schmidhu.html or from http://www.idsia.ch/people/juergen.html A significant part of the initial research will focus on the ``incremental self-improvement paradigm'' described in paper 1. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ New address since March 1: Juergen Schmidhuber IDSIA Corso Elvezia 36 6900-Lugano Switzerland juergen at idsia.ch From large at cis.ohio-state.edu Wed Mar 8 09:02:54 1995 From: large at cis.ohio-state.edu (Edward Large) Date: Wed, 8 Mar 1995 09:02:54 -0500 Subject: Papers on dynamical systems, NNs, and music cognition on neuroprose Message-ID: <199503081402.JAA25675@liberia.cis.ohio-state.edu> Dynamic Representation of Musical Structure (132 pages) Edward W. Large The Ohio State University PhD dissertation FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/large.diss.ps.Z ABSTRACT: The problem of how the human brain perceives and represents complex, temporally structured sequences of events is central to cognitive science. Music is an ideal domain for the addressing this issue. Music provides a rich source of data, generated by a natural human activity, in which complex sequential and temporal relationships abound. Understanding how musical structure may be coded as dynamic patterns of activation in artificial neural networks is the goal of this dissertation. Implications for other domains, including speech perception, are discussed. This work addresses two questions that are important in understanding the representation of structured sequences. The first is the acquisition and representation of structural relationships among events, important in representing sequences with long distance temporal dependencies, and in learning structured systems of communication. The second is the representation of temporal relationships among events, that is important in recognizing and representing sequences independent of presentation rate, while retaining sensitivity to relative timing relationships. These two issues are intimately related, and this dissertation addresses the nature of this relationship. Two research projects are described. The first models the acquisition and representation of structural relationships among events in musical sequences, addressing issues of style acquisition and musical variation. An artificial neural network encodes the rhythmic organization and pitch contents of simple melodies. As the network learns to encode melodies, structurally more important events dominate less important events, as described by reductionist theories of music. The second project addresses the perception of temporal structure in musical sequences, specifically the perception of beat and meter. An entrainment model is proposed. An oscillator tracks periodic components of complex rhythmic patterns, resulting in a dynamical system model of beat perception. The self-organizing response of a group of oscillators embodies the perception of metrical structure. Resonance and the Perception of Musical Meter (37 pages) Edward W. Large and John F. Kolen The Ohio State University Connection Science, 6 (1), 177 - 208. Reprint from the recent Connection Science special issue on music and creativity. FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/large.resonance.ps.Z ABSTRACT: Many connectionist approaches to musical expectancy and music composition let the question of "What next?" overshadow the equally important question of "When next?". One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listener's internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase- and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy. Reduced Memory Representations for Music (39 pages) Edward W. Large and Caroline Palmer The Ohio State University Jordan B. Pollack Brandeis University Preprint of an article to appear in Cognitive Science. FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/large.reduced.ps.Z ABSTRACT: We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggest that perceived constancy across musical variation is a natural result of a reductionist mechanism for producing memory representations. From jang at mathworks.com Wed Mar 8 13:45:22 1995 From: jang at mathworks.com (jang@mathworks.com) Date: Wed, 8 Mar 1995 13:45:22 -0500 Subject: Review paper on neuro-fuzzy modeling and control available Message-ID: <199503081845.NAA26161@localhost> Hi, The following review paper (29 pages) on neuro-fuzzy modeling and control is now available for anonymous ftp from the MathWorks ftp site: FTP-host --> ftp.mathworks.com FTP-file --> /pub/doc/papers/neuro-fuzzy.ps URL --> ftp://ftp.mathworks.com/pub/doc/papers/neuro-fuzzy.ps It's about 1.2 MB, so be sure to use -s option when printing. This paper is an extended version of the one appears in the proceedings of the IEEE, special issue on fuzzy logic, March 1995. Title and abstract of the paper are listed below. Title: Neuro-Fuzzy Modeling and Control Abstract: Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The fuzzy models under the framework of adaptive networks is called ANFIS (Adaptive-Network-based Fuzzy Inference System), which possess certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed. o o ) ==== J.-S. Roger Jang ========= o /| / ===== jang at mathworks.com ==== The MathWorks, Inc. o /\ \/: info at mathworks.com 24 Prime Park Way (@ ) ): http://www.mathworks.com Natick, MA 01760-1500 \/ /\: ftp.mathworks.com ==== Tel: 508-653-1396 ext 4567==== \| \ ====== Fax: 508-653-6971 ==== \ ) From wray at ptolemy-ethernet.arc.nasa.gov Wed Mar 8 15:52:55 1995 From: wray at ptolemy-ethernet.arc.nasa.gov (Wray Buntine) Date: Wed, 8 Mar 95 12:52:55 PST Subject: Modularity in neural networks Message-ID: <9503082052.AA27517@ptolemy.arc.nasa.gov> Dear connectionists, Just want to bring peoples attention to the fact that the Bayesian network community (a close relative of the neural network community) takes modularity as one of its foundations. I discuss this more, giving some connections to neural nets, in: URL: ftp://ack.arc.nasa.gov/pub/buntine/kdd2.ps.Z FTP: extract from above @incollection{Buntine.KDD2, AUTHOR = "W.L. Buntine", TITLE = "Graphical Models for Discovering Knowledge", BOOKTITLE = "Knowledge Discovery in Databases (Volume 2)", EDITOR = "U. M. Fayyad and G. Piatetsky-Shapiro and P. Smyth and R. S. Uthurasamy", PUBLISHER = "MIT Press", YEAR = "1995" } Here is an article that talks about these general ideas in more detail. @article{howard:km, AUTHOR = "R.A. Howard", TITLE = "Knowledge maps", JOURNAL = "Management Science", VOLUME = 35, PAGES = "903--922" , YEAR = 1989, NUMBER = 8 } Hopefully, David Heckerman will talk about this more at Snowbird, Machines that Learn this April. Wray Buntine NASA Ames Research Center phone: (415) 604 3389 Mail Stop 269-2 fax: (415) 604 3594 Moffett Field, CA, 94035-1000 email: wray at kronos.arc.nasa.gov From tom at csc1.prin.edu Wed Mar 8 16:02:30 1995 From: tom at csc1.prin.edu (Tom Fuller) Date: Wed, 8 Mar 1995 15:02:30 -0600 Subject: No subject Message-ID: <199503082102.PAA28073@spectre.prin.edu> The file fuller.scl.ps.Z is now available for copying from the Neuroprose repository: Supervised Competitive Learning Thomas H. Fuller, Jr. and Takayuki D. Kimura, Abstract: Supervised Competitive Learning (SCL) assembles a set of learning modules into a supervised learning system to address the stability-plasticity dilemma. Each learning module acts as a similarity detector for a prototype, and includes prototype resetting (akin to that of ART) to respond to new prototypes. SCL has usually employed backpropagation networks as the learning modules. It has been tested with two feature abstractors: about 30 energy-based features, and a combination of energy-based and graphical features (about 60). About 75 subjects have been involved. In recent testing (15 college students), SCL recognized 99% (energy features only) of test digits, 91% (energy) and 96.6% (energy/graphical) of test letters, and 85% of test gestures (energy/graphical). SCL has also been tested with fuzzy sets as learning modules for recognizing handwritten digits and handwritten gestures, recognizing 97% of test digits, and 91% of test gestures. This is Technical Report WUCS-93-45 at Washington University in St. Louis, which has some hardcopies available. It is reprinted from Journal of Intelligent Material Systems and Structures (March 1994. pages 232-246) This work was supported by the Kumon Machine Project. Department of Computer Science Washington University Campus Box 1045 One Brookings Drive St. Louis, MO 63130-4899 Here's a sample retrieval session: unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive FTP server (Version wu-2.4(2) Mon Apr 18 14:41:30 EDT 1994) ready. Name (archive.cis.ohio-state.edu:me): anonymous 331 Guest login ok, send your complete e-mail address as password. Password: me at here.edu 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get fuller.scl.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for fuller.scl.ps.Z (138497 bytes). 226 Transfer complete. 138497 bytes received in 28.29 seconds (4.78 Kbytes/s) ftp> bye 221 Goodbye. unix> uncompress fuller.scl.ps.Z unix> <send fuller.scl.ps to favorite viewer or printer> From gp at isds.Duke.EDU Wed Mar 8 16:30:15 1995 From: gp at isds.Duke.EDU (Giovanni Parmigiani) Date: Wed, 8 Mar 1995 16:30:15 -0500 Subject: No subject Message-ID: <mailman.748.1149591332.29955.connectionists@cs.cmu.edu> ============================================================================== CALL FOR PAPERS -- CALL FOR TRAVEL FUNDING APPLICATIONS ============================================================================== INTERNATIONAL WORKSHOP ON MODEL UNCERTAINTY AND MODEL ROBUSTNESS ---------------------------------------------------------------- Bath, England, June 30th-July 1st or 2nd 1995 ============================================= The information below and future updates will be available at the ISDS WWW site (http://www.isds.duke.edu). GOALS OF THE WORKSHOP In recent years, advances in statistical methodology and computing have made available powerful modeling tools in a variety of areas. Along with the added modeling flexibility, increasing attention is being paid to the relationship between modeling assumptions and results. Debates on the effect of modeling assumptions on crucial scientific and policy prediction, such as global warming and the health impact of toxic waste, have reached the mass media. This international workshop on model uncertainty and model robustness, blending methodology and case studies, will have the following goals: a) to help elicit current issues and methods from a host of different application areas and disciplines; b) to promote wider utilization of worthy practical approaches and solutions developed in specific fields; c) to advance understanding of the relative merits of existing tools and approaches; and d) to identify directions for future methodological developments. PROGRAM AND CALL FOR PAPERS The program of the Workshop will include both talks and poster presentations. The talks will be invited, and will be organized in 7-9 sessions (depending on the number of participants), each including 2 related presentations and one discussion. Ample time will be allowed for floor discussion. No parallel sessions will be planned, to encourage interaction among participants with different interests and background. If there are enough talks for 9 sessions the meeting will run from Friday morning June 30th to Sunday morning July 2nd, otherwise Friday morning to Saturday afternoon July 1st (this will be decided by early April). There will be a Workshop banquet Saturday evening July 1st. A poster session will take place the evening of June 30th, and will provide a venue for discussing contributions that cannot be included in the daytime sessions, and for further informal interaction. The poster session will be open to contributors. We are actively seeking relevant posters. WORKSHOP TOPICS The foundational basis of the Workshop is eclectic -- we intend to contrast Bayesian, frequentist, practical, and theoretical viewpoints. * Overview of current directions in model uncertainty and robustness in model specification, in statistics and the physical and social sciences * Model uncertainty and model robustness in specific areas (linear and generalised linear models, time series, imaging, spatial statistics, design of experiments, meta-analysis, graphical models, survival analysis, decision modelling, risk analysis) * Case studies, emphasising scientific and policy implications of different approaches for handling uncertainty in model choice * Comparison of alternative methodological approaches to model uncertainty; foundations * Practical implementation of inference under strong uncertainty about model choice; accounting for model uncertainty by hierarchical modelling; mixture modelling * Variable selection problems. Deterministic and stochastic algorithms for searching model specification spaces; convergence issues * Prior specification in Bayesian approaches. Diffuse and informative priors on structure and parameters; elicitation of priors and utilities; choice of scale on which to elicit/infer/predict; specification of non-standard covariance structures * Methodology for model choice, information, and related topics. Bayes factors, their use, interpretation, computation and role in model building. AIC, BIC and other model specification criteria * Model uncertainty and model criticism. Diagnostics and influence measures. Cross-validation and predictive validation * Computation/algorithms; software for illustrating the mapping from modelling assumptions to conclusions; graphical methods for assessing uncertainty in model choice; communication of model uncertainty to practitioners * Modelling via exchangeability (E) and conditional independence (CI). Uncertainty about E/CI assumptions Further details about the Workshop program and participants will be made available at the ISDS www site (http://www.isds.duke.edu) as soon as they become available. REGISTRATION AND TRAVEL GRANT FOR US AND OTHER PARTICIPANTS The Workshop organisers have applied for an NSF Group Travel Grant for participants from the USA to attend the Workshop, and an EPSRC grant to support the travel expenses of people from other countries. The EPSRC grant includes modest support for non-UK participants to visit other places in the UK, to work with colleagues and give seminars, while they are away from their home institutions. Interactive registration and grant application forms will be available at the ISDS www site (http://www.isds.duke.edu) in the immediate future. Alternatively, email or paper versions of the forms can be requested by contacting Giovanni Parmigiani (email gp at isds.duke.edu or fax +1-919-684-8594). PROCEEDINGS A World Wide Web version of the proceedings of the Workshop will be created at the ISDS www site. Papers will be made available as soon they are sent to us. Instructions for submissions will be posted. Alternatively, please contact Giovanni Parmigiani. WORKSHOP LOCATION Bath is an elegant city of about 80,000 people, built to a unified Georgian architectural plan in the eighteenth century; many of its buildings look today much as they did 250 years ago. It is the only World Heritage City in the UK, and offers amenities both urban (concerts, drama, films, a broad range of international shops) and rural (good walking and interesting villages in the beautiful countryside at the south end of the Cotswolds and beyond). The Workshop will be held in an old courtroom that dates from the 1780s, with the banquet taking place in a Georgian ballroom. Hotel accommodations range from moderate to five-star, many in restored period buildings near the city center. The city is about an hour and a half from Heathrow and Gatwick by car, and is served by a good rail link to London (85 minutes away). We look forward to seeing you at the Workshop. The Organizing Committee: David Draper (University of Bath), Giovanni Parmigiani (ISDS, Duke University), Mike West (ISDS, Duke University). From tdenoeux at hds.univ-compiegne.fr Thu Mar 9 04:32:13 1995 From: tdenoeux at hds.univ-compiegne.fr (tdenoeux@hds.univ-compiegne.fr) Date: Thu, 9 Mar 1995 10:32:13 +0100 Subject: paper on k nearest neighbors and D-S theory Message-ID: <199503090932.KAA15632@asterix.hds.univ-compiegne.fr> Announcement: The following paper to appear in IEEE Transactions on Systems, Man and Cybernetics, 25 (05) is available by anonymous ftp: ftp ftp.hds.univ-compiegne.fr cd /pub/diagnostic get knnds.ps.Z uncompress knnds.ps title: A k-nearest neighbor classification rule based on Dempster-Shafer Theory author: Thierry Denoeux ABSTRACT In this paper, the problem of classifying an unseen pattern on the basis of its nearest neighbors in a recorded data set is addressed from the point of view of Dempster-Shafer theory. Each neighbor of a sample to be classified is considered as an item of evidence that supports certain hypotheses regarding the class membership of that pattern. The degree of support is defined as a function of the distance between the two vectors. The evidence of the k nearest neighbors is then pooled by means of Dempster's rule of combination. This approach provides a global treatment of such issues as ambiguity and distance rejection, and imperfect knowledge regarding the class membership of training patterns. The effectiveness of this classification scheme as compared to the voting and distance-weighted k-NN procedures is demonstrated using several sets of simulated and real-world data. +------------------------------------------------------------------------+ | tdenoeux at hds.univ-compiegne.fr Thierry DENOEUX | | Departement Genie Informatique | | Centre de Recherches de Royallieu | | tel (+33) 44 23 44 96 Universite de Technologie de Compiegne | | fax (+33) 44 23 44 77 B.P. 649 | | 60206 COMPIEGNE CEDEX | | France | +------------------------------------------------------------------------+ From gerhard at ai.univie.ac.at Thu Mar 9 12:47:19 1995 From: gerhard at ai.univie.ac.at (Gerhard Widmer) Date: Thu, 9 Mar 95 12:47:19 MET Subject: IJCAI-95 Workshop on AI & Music Message-ID: <199503091147.MAA14630@museum.ai.univie.ac.at> In view of a recent issue of "Connection Science" on the topic "Music and Creativity" the following announcement may be of interest to the connectionst community: SECOND AND LAST CALL FOR PAPERS !!! IJCAI-95 WORKSHOP ON ARTIFICIAL INTELLIGENCE AND MUSIC (Specialized topic: "AI MODELS OF STRUCTURAL MUSIC UNDERSTANDING") to be held in the context of the International Joint Conference on Artificial Intelligence (IJCAI-95) Montreal, Quebec Artificial Intelligence and Music (AIM) has become a stable field of research which is recognized both in the music and the AI communities as a valuable and promising research direction. In the AI arena, there has been a series of international workshops on this topic at major AI conferences (e.g., AAAI-88; IJCAI-89; ECAI-90; ECAI-92). The most recent indications of the growing recognition of AIM in the AI community were the special track on "AI and the Arts" at the AAAI-94 conference (in which the majority of papers dealt with AI & Music) and a real-time interactive music performance at AAAI-94's Art Exhibit. The purpose of this workshop is to discuss, in an informal setting, current topics and results in research on Artificial Intelligence and Music, in particular problems and approaches related to AI models of structural music understanding. TOPICS OF INTEREST Previous workshops on AI & Music were rather broad in scope. Given the advances in research that have taken place in the meantime, we are now in a position to define a highly focused theme for this workshop, which will provide for a coherent and focused scientific discussion. The specialized topic of the IJCAI-95 workshop on AI and Music --- "AI Models of Structural Music Understanding" --- refers to all aspects of structured music perception and processing that are amenable to computer modelling, e.g., beat induction, structure recognition and abstraction, real-time perception and pattern induction, as well as to research on the role of these abilities in various domains of musical competence (listening, composition, improvisation, performance, learning). The following short list of issues exemplifies the types of topics to be discussed: - AI models of musical structure perception - AI models of perception of / representation of / reasoning about musical time - empirical investigations with AI programs based on structural music theories - real-time vs. non-real-time models of music comprehension - music understanding and creativity Contributions by workshop participants should both have a substantial AI component and be well-founded in music theory and musicology. PARTICIPATION AND SUBMISSION OF PAPERS To maintain a genuine workshop atmosphere, participation is limited to at most 30 persons. Participants will be selected by the organizing committee (see below), based on submitted papers. Participants will be expected to actively contribute to the workshop by either presenting a talk or taking part in panel and/or open discussions. Researchers interested in participating in the workshop are invited to submit extended abstracts (up to 5 pages) on completed or ongoing research related to the above-mentioned topics. Submissions may be sent by e-mail (self-contained LaTex or PostScript files) or as hardcopies (in triplicate) to the workshop organizer (address see below). E-mail submission is highly encouraged. The submissions will be reviewed by members of the organizing committee. Accepted papers will be published in the form of official IJCAI-95 workshop notes. Participants selected for giving a talk at the workshop will be asked to submit a full-length paper for the workshop notes (deadlines see below). If you want to demonstrate an operational AIM system, please contact the workshop organizer and supply detailed information about type of system, equipment required, etc. We cannot guarantee at this point that system demonstrations will be possible at the workshop, but we will try to do our best. *** IMPORTANT NOTICE *** IJCAI regulations require that participants register for the main IJCAI-95 conference. In addition, IJCAI charges a fee of US$ 50,- for workshop participation. For more information about IJCAI-95, please contact the IJCAI Conference Management at American Association for Artificial Intelligence (AAAI) 445 Burgess Drive Menlo Park, CA 94025 Tel: +1 - 415 - 328-3123 Fax: +1 - 415 - 321-4457 e-mail: ijcai at aaai.org or consult the IJCAI WWW page (http://ijcai.org/). For more information about the workshop, please contact the workshop organizer (address see below). IMPORTANT DATES: Abstracts/papers due by: March 18, 1995 Notification of acceptance: April 8, 1995 Camera-ready version of final paper due: April 24, 1995 Date of workshop: Monday, Aug. 21, 1995 Main IJCAI-95 conference: Aug. 21 - 25, 1995 ORGANIZING COMMITTEE: Roger Dannenberg Computer Science Department Carnegie Mellon University Pittsburgh, PA, USA Bruce Pennycook Faculty of Music McGill University Montreal, Canada Geber Ramalho LAFORIA-CNRS Universite' Paris VI Paris, France Brian K. Smith School of Education and Social Policy & The Institute for the Learning Sciences Northwestern University Evanston, IL, USA Gerhard Widmer Department of Medical Cybernetics and Artificial Intelligence University of Vienna and Austrian Research Institute for Artificial Intelligence Vienna, Austria WORKSHOP ORGANIZER: Please send abstracts/papers or any questions to Gerhard Widmer Austrian Research Institute for Artificial Intelligence Schottengasse 3 A-1010 Vienna Austria Phone: +43 - 1 - 53532810 Fax: +43 - 1 - 5320652 e-mail: gerhard at ai.univie.ac.at From kak at gate.ee.lsu.edu Thu Mar 9 17:33:26 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Thu, 9 Mar 95 16:33:26 CST Subject: No subject Message-ID: <9503092233.AA03133@gate.ee.lsu.edu> Second Annual Joint Conference on Information Sciences September 28 - October 1, 1995 HONORARY CONFERENCE CHAIRS Lotfi A. Zadeh & Azriel Rosenfeld First Annual Conference on Computational Intelligence & Neurosciences Co-Chairs: Subhash C. Kak & Jeffrey P. Sutton ADVISORY BOARD Jim Anderson Earl Dowell Erol Gelenbe Kaoru Hirota George Klir Teuvo Kohonen Gregory Lockhead Zdzislaw Pawlak C. V. Ramamoorthy Herb Rauch John E. R. Staddon Masaki Togai Victor Van Beuren Max Woodbury Stephen S. Yau Lotfi A. Zadeh H. Zimmerman PROGRAM COMMITTEES First Annual Conference on Computational Intelligence & Neurosciences Robert Erickson George Georgiou David Hislop Michael Huerta Subhash C. Kak Stephen Koslow Sridhar Narayan Slater E. Newman Gregory Lockhead Richard Palmer David C. Rubin Nestor Schmajuk David W. Smith John Staddon Jeffrey P. Sutton Harold Szu L.E.H. Trainor Abraham Waksman Paul Werbos M. L. Wolbarsht Max Woodbury TIME SCHEDULE & VENUE September 28 to October 1, 1995. The beautiful ``Shell Island'' Hotels of Wrightsville Beach, North Carolina, USA. Tel: 800-689-6765 KEYNOTE SPEAKERS, PLENARY SPEAKERS The following distinguished, leading researchers have already accepted our invitation: Jim Anderson Zdzislaw Pawlak Azriel Rosenfeld L. E. H. Trainor Lotfi A. Zadeh The other leading researchers including: Stephen Grossbery, John Holland, John Staddon, Michio Sugeno, etc, are considering our invitations. ****************** * PUBLICATIONS * ****************** The joint conference publishes one Proceedings on Summaries which consist of all papers accepted by all three program committees. The JCIS Proceedings will be made available on Sept. 28, 1995. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text, (1 page minimum) with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by June 25,1995. A deposit of $150 check must be included to guarantee the publication of your 4 pages summary in the Proceedings. $150 can be deducted from registration fee later. It is very important to mark ``plan A'' or ``plan B'' or ``plan C'' on your manuscript. The conference will make the choice for you if you forget to do so. Final version of the full length paper must be submitted by October 1, 1995. Four (4) copies of the full length paper shall be prepared according to the ``information for Authors'' appearing at the back cover of Information Sciences, an International Journal (Elsevier Publishing Co.). A full paper shall not exceed 20 pages including figures and tables. All full length papers will be reviewed by experts in their respective fields. Revised papers will be due on April 15, 1996. Accepted papers will appear in the hard-covered proceeding (book) to be published by a publisher or Information Sciences Journal (INS journal now has three publications: Informatics and Computer Sciences, Intelligent Systems, Applications). All fully registered conference attendees will receive a copy of proceeding (summary) on September 28, 1995; a free one-year subscription (paid by this conference) of Information Sciences Journal - Applications. Lastly, the right to purchase either or all of Vol.I, Vol.II, Vol.III of Advances in FT & T hard-covered, deluxe, professional books at 1/2 price. (1) Deadline for summaries submission: June 25, 1995. (2) Reviewing: June 25 - Aug. 1, 1995. (3) Decision & notification date: August 5, 1995. (4) Absolute deadline for summaries: August 25, 1995. (5) Deadline for full length paper: October 1, 1995. -------------------------------------- | ANNOUNCEMENT AND CALL FOR PAPERS | -------------------------------------- General Information The Joint Conference on Information Sciences consists of three international conferences. All interested attendees including researchers, organizers, speakers, exhibitors, students and other participants should register either in Plan A: Fourth International Conference on Fuzzy Theory & Technology or Plan B: Second International Conference on Computer Theory & Informatics and Plan C: First International Conference on Computational Intelligence & Neurosciences. First Annual Conference on Computational Intelligence & Neurosciences The conference will consist of both plenary sessions and contributory sessions, focusing on topics of critical interest and the direction of future research. For contributory sessions, full papers are being solicited. We also welcome you to contact Jeffrey P. Sutton for the interesting formate he has proposes in ``Symposium on Neural Computing''. Several interesting sessions Have already been organized, e.g., ``From Animals to Robots'' by Nestor Schmajuk, ``Intelligent Control'' by Chris Tseng, and ``Computational Science Meets Neurobiology'' by Sridhar Narayan. Other example topics include, but are not limited to the following: * Neural Network Architectures * Artificially Intelligent Neural Networks * Artificial Life * Associative Memory * Computational Intelligence * Cognitive Science * Fuzzy Neural Systems * Relations between Fuzzy Logic and Neural Networks * Theory of Evaluationary Computation * Efficiency/Robustness Comparisons with Other Direct Search Algorithms * Parallel Computer Applications * Integration of Fuzzy Logic and Evolutionary Computing * Evaluationary Computation for Neural Networks * Fuzzy Logic in Evolutionary Algorithms * Neurocognition * Neurodynamics * Optimization * Feature Extraction & Pattern Recognition * Learning and Memory * Implementations (electronic, Optical, Biochips) * Intelligent Control (1) The Human Brain Project: Neuroinformatics The Human Brain Project is a broad-based, long-term research initiative which supports research and development of advanced technologies to make avaiable to neuroscientists and behavioral scientists an array of information tools for the 21st Century. These include novel database and querying capabilities for the full range of structural and fuctional information about the brain and behavior, as well as technologies for managing, integrating and sharing information over networks. These will also provide the means for electronic collaboration. This initiative was launched in 1993, and is supported by 14 federal organizations across 5 agencies (NIH, NASA, NSF, DOE and DOD). Over 40 brain and behavioral scientists, informatio and computer scientists, engineers, mathematicians and statisticians are funded by the Human Brain Project. The diversity and volume of data generated by brain and behavioral science is testing the limits of informatics and associated technologies; the Human Brain Project is expanding these limits. Such advances will be applicable to a wide range of problems, as brain and behavioral research encompasses a multitude of disciplinary approaches and data types. And, since brain and behavioral research is generated around the world, the potential of this initiative will be fully realized only when these tools become global resources. Thus, the strategies and technologies that are developed as part of the Human Brain Project will serve as models for other complex information domains, with implications far beyond the brain research community. It is our intention to creat such a discussion forum by inviting various governmental, industrial and acdemic researchers to attend. For further information please contact Co-chair Subhash C. Kak (kak at max.ee.lsu.edu, Voice: 504 388 5552, FAX: 504 388 5200). (2) Symposium on Neural Computing It is important that researchers in information science keep abreast of rapid achievements being made in neuroscience. Mathematical and computer approaches aid in understanding biological information processing, and information science stands to gain by examining strategies used by the nervous system to solve complex problems. The aim of this symposium is to identify areas where future research in information science should focus based on recent advances in neural computing. A discussion format will be used. It will be supplemented with a few key plenary lectures on computational and systems neuroscience. No prior background in neurobiology is assumed. A summary statement of the goals elucidated in the discussions will be prepared. Your participation in this growing area of information science is most welcome. N.B. Professors Jim Anderson (Brown) and L.E.H. Trainor (Toronto) are both interested in contributing. Jeffrey know several other prominent researchers in the field who would likely contribute if asked. For further information about this symposium, please contact Jeffrey P. Sutton (sutton at ai.mit.edu, Voice: 617 726 6766, FAX: 617 726 4078). *************** * TUTORIALS * *************** Several mini-courses are scheduled for sign-up. Please take note that any one of them may be cancelled or combined with other mini-courses due to the lack of attendance. Cost of each mini-course is $120 up to 7/15/95 & $160 after 7/15/95, the same cost for all mini-course. No. Name of Mini-Course Instructor Time ------------------------------------------------------------------------ A Languages and Compilers for J. Ramanujan 6:30 pm - 9 pm Distributed Memory Machine Sept. 28 ------------------------------------------------------------------------- B Pattern Recognition Theory H. D. Cheng 6:30 pm - 9 pm Sept. 28 ------------------------------------------------------------------------- C Fuzzy Set Theory George Klir 2:00pm - 4:30pm Sept. 29 ------------------------------------------------------------------------- D Neural Network Theory Richard Palmer 2:00pm - 4:30pm Sept. 29 ------------------------------------------------------------------------- E Fuzzy Expert Systems I. B. Turksen 6:30 pm - 9 pm Sept. 29 ------------------------------------------------------------------------- F Intelligent Control Systems Chris Tseng 6:30 pm - 9 pm Sept. 29 ------------------------------------------------------------------------- G Neural Network Applications Subhash Kak 2:00pm - 4:30pm Sept. 30 ------------------------------------------------------------------------- H Pattern Recognition Applications Edward K. Wong 2:00pm - 4:30pm Sept. 30 ------------------------------------------------------------------------- I Fuzzy Logic & NN Integration Marcus Thint 9:30am - 12:00 Oct. 1 ------------------------------------------------------------------------- J Rough Set Theory Tsau Young Lin 9:30am - 12:00 Oct. 1 ------------------------------------------------------------------------- ***************** * EXHIBITIONS * ***************** Once again, Intelligent Machines, Inc. will demonstrate their highly successful new software ``O'inca''-a FL-NN, Fuzzy-Neuro Design Framework. Elsevier Publishing Co. will lead major publishers for another successful exhibits. Dr. Hua Li of Texas Tech. Univ. will exhibit his hardware & software research results. In addition, we plan to celebrate 30th Anniversary of Fuzzy Theory & Technology through a special exhibit of historical aluable. This exhibition does not represent any society, it does represent, however, some personal collections. Any person has some interesting or collectibles to share ith the conference, please contact Paul P. Wang (ppw at ee.duke.edu). Interested Vendors should contact: Dr. Rhett George Department of Electrical Engineering Duke University Durham, NC 27708 Telephone: 919 660 5242 FAX: 919 660 5293 rtg at ee.duke.edu ******************************************** * JCIS'95 REGISTRATION FEES & INFORMATION * ******************************************** Up to 7/15/95 After 7/15/95 Full Registration $275.00 $395.00 Student Registration $100.00 $160.00 Tutorial(per Mini-Course) $120.00 $160.00 Exhibit Boot Fee $300.00 $400.00 One Day Fee(no pre-reg. discount) $195.00 $ 85.00 (Student) Above fees applicable to both Plan A & Plan B & Plan C FULL CONFERENCE REGISTRATION: Includes admission to all sessions, exhibit area, coffee, tea and soda. A copy of conference proceedings (summary) at conference and one year subscription of Information Sciences - Applications, An International Journal, published by Elsevier Publishing Co. In addition, the right to purchase the hard-cover deluxe books at 1/2 price. Award Banquet on Sept. 30, 1995 is included through Full Registration. One day registration does not include banquet, but one year IS Joural - C subscription is included for one-day full registration only. Tutorials are not included. STUDENT CONFERENCE REGISTRATION: For full-time students only. A letter from your department is required. You must present a current student ID with picture. A copy of conference proceedings (summary) is included. Admission to all sessions, exhibit area, area, coffee,tea and soda. The right to purchase the hard-cover deluxe books at 1/2 price. Free subscription of IS Journal - Applications, however, is not included. TUTORIALS REGISTRATION: Any person can register for the Tutorials. A copy of lecture notes for the course registered is included. Coffee, tea and soda are included. The summary and free subscription of IS Journal - Applications is, However, not included. The right to purchase the hard-cover deluxe books is included. VARIOUS CONFERENCE CONTACTS: Tutorial Conference Information Paul P. Wang Jerry C.Y. Tyan Kitahiro Kaneda ppw at ee.duke.edu ctyan at ee.duke.edu hiro at ee.duke.edu Tel. (919)660-5271 Tel. (919)660-5233 Tel. (919)660-5233 660-5259 Coordinates Overall Administration Local Arrangement Chair Xiliang Gu Sridhar Narayan gu at ee.duke.edu Dept. of Mathematical Sciences Tel. (919)660-5233 Wilmington, NC 28403 (919)383-5936 U. S. A. narayan at cms.uncwil.edu Tel: 910 395 3671 (work) 910 395 5378 (home) *********************** * TRAVEL ARRANGEMENTS * *********************** The Travel Center of Durham, Inc. has been designated the officeal travel provider. Special domestic fares have been arranged and The Travel Center is prepared to book all flight travel. Domestic United States and Canada: 1-800-334-1085 International FAX: 919-687-0903 ********************** * HOTEL ARRANGEMENTS * ********************** SHELL ISLAND RESORT HOTELS 2700 N. LUMINA AVE. WRIGHTSVILLE BEACH, NC 28480 U. S. A. This is the conference site and lodging. A block of suites (double rooms) have been reserved for JCIS'95 attendees with discounted rate. All prices listed here are for double occupancies. $100.00 + 9% Tax (Sun.- Thur.) $115.00 + 9% Tax (Fri. - Sat.) $10.00 for each additional person over 2 people per room. We urge you to make reservation early. Free transportation from and to Wilmington, N. C. Airport is available for ``Shell Island'' Resort Hotel Guests. However, you must make reservation for this free service. Please contact: Carvie Gillikin, Director of Sales Voice: 1-800-689-6765 or: 910-256-8696 FAX: 910-256-0154 --------------------------------------------------- | SPONSORS: | | Machine Intelligence and Fuzzy Logic Laboratory | | Dept. of Electrical Engineering | | Duke University | | | | Elserier Science Publishing Inc. | | New York, N.Y. | | | --------------------------------------------------- ------------------------------------------------------------------------------- CONFERENCE REGISTRATION FORM It is important to choose only one plan; Participation Plan A or Plan B or Plan C. The only difference in the privelege for the choice is both Plan B and Plan C do not participate in Lotfi A. Zadeh Best Paper Competition. [ ] I wish to receive further information. [ ] I intend to participate in the conference. [ ] I intend to present my paper to regular session. [ ] I intend to register in tutorial(s). Mane: Dr./Mr./Mrs. _________________________________________________ Address: ___________________________________________________________ Country: ___________________________________________________________ Phone:________________ Fax: _______________ E-mail: ________________ Affiliation(for Badge): ____________________________________________ Participation Plan: [ ]A [ ]B [ ]C Up to 7/15/95 After 7/15/95 Full Registration [ ]$275.00 [ ]$395.00 Student Registration [ ]$100.00 [ ]$160.00 Tutorial(per Mini-Course) [ ]$120.00 [ ]$160.00 Exhibit Boot Fee [ ]$300.00 [ ]$400.00 One Day Fee(no pre-reg. discount) [ ]$195.00 [ ]$ 85.00 (Student) Total Enclosed(U.S. Dollars): ________________ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $ Please make check payable and mail to: $ $ FT & T $ $ c/o. Paul P. Wang $ $ Dept. of Electrical Engineering $ $ Duke University $ $ Durham, NC 27708 $ $ U. S. A. $ $ $ $ All foreign payments must be made by $ $ draft on a US Bank in US dollars. No $ $ credit cards or purchase order can be $ $ accepted. $ $ $ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From M.West at statslab.cam.ac.uk Thu Mar 9 05:23:00 1995 From: M.West at statslab.cam.ac.uk (Mike West) Date: Thu, 9 Mar 95 10:23 GMT Subject: No subject Message-ID: <m0rmfN8-000PprC@lion.statslab.cam.ac.uk> INTERNATIONAL WORKSHOP ON MIXTURES Aussois, France, September 17-21 1995 A Workshop on Statistical Mixture Modelling will be held in Aussois, France in September this year. The meeting is co-sponsored by French agencies CNRS, ADRES and INRIA, the universities of Rouen and Grenoble, and Duke University. Additional sponsorship is expected from the US National Science Foundation, principally in terms of a group travel grant for US based researchers. The Workshop is held in recognition of the recent growth and development in the theory and, in particular, applications of statistical methods based on mixtures of distributions. The high level of recent and current research activity reflects the growing appreciation of the key roles played by mixture models in many complex modelling and inference problems, and is driven, in part, by advances in computational statistical technology. The meeting provides a forum for reviewing and publicising widely dispersed research activities in mixture modelling, stimulating fertilisation of theoretical, methodological and computational research directions for the near future, and focusing attention on the wide variety of significant applied problems in complex stochastic systems that are inherently structered in mixture terms. Example workshop topics might include: mixtures in density estimation, regression and time series; mixtures in statistical image modelling and analysis, neural networks, graphical models and networks; stochastic simulation for mixture analysis; clustering and classification problems; model selection and combination; alternative approaches to inference in mixtures; latent variables and incomplete data problems; and applications of mixtures in various scientific areas. The workshop will bring together senior researchers, new researchers and students from various backgrounds to promote exchange and interactions on the frontiers of statistical mixture modelling and to highlight the development of statistical technology across these fields. The meeting will consist of invited and contributed talks, posters, discussion and round-table sessions. Talks will be given in morning (9am-1pm) and evening (5pm-8pm) sessions, with the midday period (2.30-4.30pm) for contributed poster sessions, informal discussions and round-tables. Activities at the meeting will be publicised though World Wide Web access to abstracts of papers and posters presented. The Organising Committee of the Workshop consists of Christian Robert (Universite de Rouen) and Gilles Celeaux (Inria, Grenoble, France), Professor Kathryn Roeder (Carnegie Mellon University), and Mike West (ISDS, Duke University). The venue is the CNRS Paul Langevin Conference Center at Aussois in the French Alps. Attendance is to be strictly capped at 80 delegates on a first-come, first-served basis. Registration details and forms are available from Christian Robert, at robert at bayes.univ-rouen.fr. Informal enquiries can also be sent to Mike West, at M.West at statslab.cam.ac.uk, from whom applications for NSF travel grant support can also be obtained. From somers at ai.mit.edu Fri Mar 10 00:02:49 1995 From: somers at ai.mit.edu (David Somers) Date: Fri, 10 Mar 95 00:02:49 EST Subject: Paper Available: Emergent Model of Orientation Selectivity Message-ID: <9503100502.AA05295@vidi> *****************Pre-print Available via FTP ******************* FTP-host: ftp.ai.mit.edu FTP-filename: /pub/users/somers/orient-jneurosci.ps.Z URL ftp://ftp.ai.mit.edu/pub/users/somers/orient-jneurosci.ps.Z An Emergent Model of Orientation Selectivity In Cat Visual Cortical Simple Cells (41 pages) David Somers, Sacha Nelson, and Mriganka Sur, MIT, Dept. of Brain & Cognitive Sciences To appear in: The Journal of Neuroscience ABSTRACT It is well known that visual cortical neurons respond vigorously to a limited range of stimulus orientations, while their primary afferent inputs, neurons in the lateral geniculate nucleus (LGN) respond well to all orientations. Mechanisms based on intracortical inhibition and/or converging thalamocortical afferents have previously been suggested to underlie the generation of cortical orientation selectivity; however, these models conflict with experimental data. Here, a 1:4 scale model of a $1700\mu\mbox{m}$ by $200\mu\mbox{m}$ region of layer IV of cat primary visual cortex (area 17) is presented in order to demonstrate that local intracortical excitation may provide the dominant source of orientation selective input. In agreement with experiment, model cortical cells exhibit sharp orientation selectivity despite receiving strong iso--orientation inhibition, weak cross- -orientation inhibition, no shunting inhibition, and weakly tuned thalamocortical excitation. Sharp tuning is provided by recurrent cortical excitation. As this tuning signal arises from the same pool of neurons that it excites, orientation selectivity in the model is shown to be an emergent property of the cortical feedback circuitry. In the model, as in experiment, sharpness of orientation tuning is independent of stimulus contrast and persists with silencing of ON--type subfields. The model also provides a unified account of intracellular and extracellular inhibitory blockade experiments which had previously appeared to conflict over the role of inhibition. It is suggested that intracortical inhibition acts non-specifically and indirectly to maintain the selectivity of individual neurons by balancing strong intracortical excitation at the columnar level. David C. Somers Dept. of Brain & Cognitive Sciences MIT, E25-618 45 Carleton St. Cambridge, MA 02139 ftp://ftp.ai.mit.edu/pub/users/somers/orient-jneurosci.ps.Z From lkhansen at eivind.ei.dtu.dk Fri Mar 10 09:44:21 1995 From: lkhansen at eivind.ei.dtu.dk (Lars Kai Hansen) Date: Fri, 10 Mar 1995 15:44:21 +0100 Subject: workshop Message-ID: <9503101444.AA03032@ei.dtu.dk> Telluride Summer Research Center Box 2255, Telluride, Colorado 81435, USA. Re.: Telluride Summer Research Center Workshop on Neural Networks 1995. Secretariats: CONNECT, Electronics Institute, B. 349 Technical University of Denmark DK-2800 Lyngby Denmark Phone: +45 4525 3889 Fax: +45 4588 0117 Email: lkhansen at ei.dtu.dk Interdisciplinary Research Center Dept. of Mathematical Sciences, San Diego State University. San Diego California 92182, USA Phone: +1 619 594 7204 Fax: +1 619 594 6746 Email: salamon at math.sdsu.edu Dept. WNI Universitaire Campus Limburgs Universitair Centrum B-3590 Diepenbeek, Belgium Phone: 32 011 268214 Fax: 32 011 268299 Email: chris at luc.ac.be March 10, 1995 Dear prospective participant, Enclosed find the announcement of a 1995 Workshop on Neural Networks which will complement the other workshops in the 1995 program at the Telluride Summer Research Center. The workshop will take place from July 2 to July 9, and will be devoted to aspects of neural networks. While the list of topics will vary to reflect the interests of the participants, the list will include: THEORY: Generalization, ensembles, unsupervised learning. Extremely ill-posed learning and high dimensional data sets. APPLICATIONS: Medical imaging (PET) and time series processing. The format of the sections is to take a prepared one hour talk and turn it into a careful and detailed cross examination lasting 4-5 hours. This format has proven in the past years to be an outstanding way to rapidly understand new ideas and we invite you to share in the learning experience that viewing each other's work under such scrutiny can provide. The discussions aim to be constructive rather than critical, and all possible attempts will be made to relate the topics to the open problems of the workshop. Individual participants are expected to provide their own salaries. Academic and research institutions have recognized the value of participation in the Center's activities and have been willing to pay for work performed at the Center. The Summer Research Center helps arrange for housing thereby obtaining reduced rates for the participants and families. >>>>>> OBS!: Respond promtly to increase the chances of getting inexpensive housing!! A registration fee is charged all participants (100$ for one week, 125$ for two or more weeks). The fee may be waived in special circumstances. Anybody requiring official letters of invitation should contact the organizers specifying their needs. The Center offers an excellent way to meet collaborators from other institutions through coordinated visits. Extended stays are thus recommended. The Telluride locale, situated in an unspoiled valley near 4200 m high mountain peaks in the south west corner of Colorado, offers ample outdoor attractions making such stays enjoyable for the families as well. If you think you may be intested or would like to remain on our mailing list for future years, please indicate so on the enclosed form and return it to one of the organizers before April 5'th 1995. If you know of other colleagues who may be interested, please pass a copy to them or include their name and address. Sincerely, Lars Kai Hansen, Peter Salamon, and Christian Van den Broeck. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<< Telluride Summer Research Center Box 2255, Telluride, Colorado 81435, USA. ANNOUNCEMENT OF A WORKSHOP ON Neural networks in high dimensions. July 2, - July 9, 1995 TELLURIDE SUMMER RESEARCH CENTER TELLURIDE, COLORADO 81435 USA Topics will include: Generalization, Ensembles, Unsupervised Learning, Medical Imaging () I would like to come to the center for the period: () I am interested in the workshop, but I cannot commit myself at this moment () I am not interested for this summer, but keep me on your mailing list. Name: Institution: Address: Phone, Fax, Email: Current interests: Lodging requirements (# persons): >>>>>>>> APPLICATION DEADLINE APRIL 5'th 1995 <<<<<<<<<< Organizers: Peter Salamon email: salamon at math.sdsu.edu Lars Kai Hansen email: lkhansen at ei.dtu.dk Christian Van den Broeck email: chris at luc.ac.be From NEUROCOG at vms.cis.pitt.edu Fri Mar 10 12:12:26 1995 From: NEUROCOG at vms.cis.pitt.edu (NEUROCOG@vms.cis.pitt.edu) Date: Fri, 10 Mar 1995 13:12:26 -0400 (EDT) Subject: Undergraduate Summer Research in Neural Basis of Cognition Message-ID: <01HNZ08DPQOO9GXHV5@vms.cis.pitt.edu> * * * * * * * * * * * * UNDERGRADUATE SUMMER RESEARCH * * * * * * * * * * * * * * * * * * * * * * * * IN COGNITIVE NEUROSCIENCE * * * * * * * * * * * * * * FULL DESCRIPTION OF THE PROGRAM TO BE FOUND AT: http://neurocog.lrdc.pitt.edu/npc (www) The Neural Processes in Cognition Training Program at the University of Pittsburgh and Carnegie Mellon University has several positions available for qualified undergraduates interested in studying cognitive neuroscience. Cognitive neuroscience is a growing interdisciplinary area of study (see Science, 1993, v. 261, pp 1805-7) that interprets cognitive functions in terms of neuroanatomical and neurophysiological data and computer simulations. Undergraduate students participating in the summer program will be expected to spend ten weeks of intensive involvement in laboratory research supervised by one of the program's faculty. The summer program also includes weekly journal clubs and a series of informal lectures. Students receive a $2500 stipend provided by National Science Foundation support. Each student's specific plan of research will be determined in consultation with the training program's Director. Potential laboratory environments include single unit recording, neuroanatomy, computer simulation of biological and cognitive effects, neuropsychological assessment, behavioral assessment, and brain imaging. Applications are encouraged from highly motivated undergraduate students with interests in biology, psychology, engineering, physics, mathematics or computer science. Application deadline is April 25, 1995. To apply, request application materials by email at neurocog at vms.cis.pitt.edu or phone 412-624-7064 or write to the address below. The materials provide a listing of faculty research interests to consider. Applicants are strongly encouraged to identify a particular faculty member of interest. The application includes a statement of interest, a recent school transcript, one faculty letter of recommendation and a selection of one or two areas of research interests. Send requests and application materials to: Professor Walter Schneider, Program Director Neural Processes in Cognition University of Pittsburgh 3939 O'Hara Street Pittsburgh, PA 15260 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * For additional information on the World Wide Web open URL http://neurocog.lrdc.pitt.edu/npc * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * From RPRAGHUPATHI at oavax.csuchico.edu Sat Mar 11 14:27:48 1995 From: RPRAGHUPATHI at oavax.csuchico.edu (RPRAGHUPATHI@oavax.csuchico.edu) Date: Sat, 11 Mar 1995 12:27:48 -0700 (PDT) Subject: call for papers - please circulate! Message-ID: <01HO0CWZ6KV800KWRK@oavax.csuchico.edu> CALL FOR PAPERS "NEURAL NETWORKS IN BUSINESS" HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 29 JANUARY 3 - 6, 1996, MAUI, HAWAII Papers are invited for the minitrack on NEURAL NETWORKS IN BUSINESS as part of The Information Systems track at the Hawaii International Conference on System Sciences (HICSS). Researchers and practitioners doing work in neural network applications in the different functional areas of business such as marketing, production, HRM, software engineering, finance, accounting, health care, and law are invited to submit papers for consideration for this minitrack at HICSS. Papers must focus on applications in new areas, describe the methodology, lessons learnt, and future research issues. Comparisons of neural network performance to traditional models are also encouraged. The task modeled must be important and relevant to business situations, the data used real, and the implementation complete. Mini-track coordinators: W. "RP" Raghupathi California State University Department of Accounting and Management Information Systems Chico, CA 95929-011 Phone: (916) 898-4825 Fax: (916) 898-4584 E-mail: RPRAGHUPATHI at OAVAX.CSUCHICO.EDU and Shashi Shekhar University of Minnesota Computer Science Department 4-192 EE/CS Building 200 Union Street S.E. Minneapoli, MN 55455-0159 Phone: (612) 624-8307 Fax: (612) 625-0572 E-mail: shekhar at cs.umn.edu Instructions for submitting papers: 1. Submit 6 (six) copies of the full paper, consisting of 20-25 pages double-spaced including title page, abstract, references and diagrams directly to either of the minitrack coordinators. 2. Do not submit the paper to more than one minitrack. Paper should contain original material and not be previously published or currently submitted for consideration elsewhere. 3. Each paper must have a title page which includes the title, full name of all authors, and their complete addresses including affiliation(s), telephone numbers(s), and e-mail address(es). 4. The first page of the paper should include the title and a 300-word abstract. DEADLINES: MARCH 15, 1995: Abstracts may be submitted to minitrack coordinators (or track coordinators) for guidance and indication of appropriate content. Authors unfamiliar with HICSS or who wish additional guidance are encouraged to contact any coordinator to discuss potential papers. JUNE 1, 1995: Six (6) copies of the full papers must be submitted to the appropriate minitrack or track coordinators. AUG. 31, 1995: Notification of accepted papers mailed to authors. OCT. 1, 1995: Accepted manuscripts, camera-ready, sent to minitrack coordinators. One author from each paper must be registered by this time. NOV. 15, 1995: All other registrations must be received. Registration received after this deadline may not be accepted due to space limitations. Hotel reservations should also be made by this time. The "Neural Networks in Business" minitrack is part of the Information Systems Track. There are two other major tracks in the conference: software technology and Digital Documents. The Information Systems Track itself has several minitracks that focus on a variety of research topics in Collaboration Technology, Decision Support and Knowledge-Based Systems, and Organizational Systems and Technology. For more information contact: Jay F. Nunamaker, Jr. E-mail: nunamaker at bpa.arizona.edu (602) 621-4475 FAX: (602) 621-2433 Ralph H. Sprague, Jr. E-mail:sprague at uhunix.uhcc.hawaii.edu (808) 956-7082 FAX: (808) 956-9889 For more information on other tracks, please contact: Software Technology Track: Hesham El-Rewini E-mail: rewini at unocss.unomaha.edu Bruce D. Shriver E-mail: b.shriver at genesis2.com Digital Documents Track: M. Stuart Lynn E-mail: msylnn at cpa.org For more information on the conference, please contact the conference coordinator: Pamela Harrington E-mail: hicss at uhunix.uhcc.hawaii.edu (808) 956-7396 FAX: (808) 956-3766 From maggini at mcculloch.ing.unifi.it Tue Mar 14 03:32:59 1995 From: maggini at mcculloch.ing.unifi.it (Marco Maggini) Date: Tue, 14 Mar 95 09:32:59 +0100 Subject: AI*IA 2nd Call for Papers Message-ID: <9503140832.AA20548@mcculloch.ing.unifi.it> 2nd CALL FOR PAPERS AI*IA 95 Fourth Congress of the Italian Association for Artificial Intelligence Firenze, October 11-13, 1995 (Palazzo dei Congressi) IMPORTANT DATES ------------------------------ Deadline for submission April 10, 1995 Notification of acceptance June 11, 1995 Camera-ready copies due July 7, 1995 Congress October 11-13, 1995 THEME ----------- The Congress of the Italian Association for Artificial Intelligence is the most relevant national event in the field of Artificial Intelligence for both researchers interested in the methodological aspects and practitioners involved in applications. Papers will include both long and short presentations on substantial, original and previously unpublished research in all aspects of AI, including, but not limited to: - Automated Reasoning - Knowledge Representation - Architectures and Languages for AI - Machine Learning - Natural Language - Planning and Robotics - Qualitative Reasoning - Perception and Vision - Distributed Artificial Intelligence - Cognitive Modeling - Connectionist Models. PAPER SUBMISSION ------------------------ Five (5) copies of original papers not exceeding 5000 words (about 10 single spaced pages) MUST BE POSTMARKED ON OR BEFORE MONDAY APRIL 10, 1995 to: Prof. Giovanni Soda Dipartimento di Sistemi e Informatica Universita'di Firenze via S. Marta,3 50139 Firenze (Italy) E-mail: giovanni at ingfi1.ing.unifi.it Each copy of the paper must include a cover sheet, separate from the body of the paper, including: title of the paper, full name, postal addresses, phone number, fax numbers, e-mail adresses, if any, of all authors, an abstract of 100-200 words and a set of keywords giving the area/subarea of the paper and describing the topic of the paper. NO ELECTRONIC SUBMISSION WILL BE ACCEPTED. ATTENDANCE ----------------------- At least one author of the papers selected for oral and/or poster presentation must attend the Conference. GENERAL INFORMATION -------------------------------------- This CFP and the latest information regarding AI*IA95 can be found in the World Wide Web under http://www-dsi.ing.unifi.it/ai/aiia95 or obtained sending an e-mail to aiia95 at ingfi1.ing.unifi.it. PROGRAM COMMITTEE ------------------------------------ Giovanni Soda (Universita' di Firenze) (Chair) Program Committee Members for the Scientific Track: --------------------------------------------------------------------------- Stefania BANDINI ( Universita' di Milano) Amedeo CAPPELLI (CNR - Pisa) Amedeo CESTA (CNR - Roma) Marco COLOMBETTI (Politecnico di Milano) Mauro DI MANZO (Universita' di Genova) Floriana ESPOSITO (Universita' di Bari) Massimo GALLANTI (CISE - Milano) Fausto GIUNCHIGLIA (Irst e Universita' di Trento) Marco GORI (Universita' di Firenze) Leonardo LESMO (Universita' di Torino) Daniele NARDI (Universita' di Roma) Enrico PAGELLO (Universita' di Padova) Vito ROBERTO (Universita' di Udine) Oliviero STOCK (IRST - Trento) Giuseppe TRAUTTEUR (Universita' di Napoli) Franco TURINI (Universita' di Pisa) Program Committee Members for the Application Track ------------------------------------------------------------------------------ Franco CANEPA (Imit - Novara) Giannetto LEVIZZARI (Centro Ricerche FIAT - Orbassano (TO)) Fabio MALABOCCHIA (Cselt - Torino) Fulvio MARCOZ (Alenia - Roma) Renato PETRIOLI (Fondazione Bordoni - Roma) Roberto SERRA (Ferruzzi Finanziaria - Ravenna) Lorenzo TOMADA (Agip - Milano) Organizing Committee Members ---------------------------------------------- Carlo BIAGIOLI (IDG - Firenze) Francesca CESARINI (Universita' di Firenze) Marco GORI (Universita' di Firenze) Elisabetta GRAZZINI(Universita' di Firenze) From maggini at mcculloch.ing.unifi.it Tue Mar 14 02:48:21 1995 From: maggini at mcculloch.ing.unifi.it (Marco Maggini) Date: Tue, 14 Mar 95 08:48:21 +0100 Subject: Neurocomputing Journal (Special Issue) Message-ID: <9503140748.AA20227@mcculloch.ing.unifi.it> ========================================================== CALL FOR PAPER Special Issue on Recurrent Networks for Sequence Processing in the Neurocomputing Journal (Elsevier) M. Gori, M. Mozer, A.C. Tsoi, and R.L. Watrous (Eds) ========================================================== I'm sorry to announce that, unlike what indicated in previous electronic delivering of this call for paper, following the Neurocomputing editorial policy, prospective authors shouldn't submit the manuscript to one of the Guest Editors, but to the Editor-in-Chief of the journal by March 30, 1995 at the following address: Dr. V. David Sanchez A. Neurocomputing - Editor-in-Chief German Aerospace Research Establishment DLR Oberpfaffenhofen Institute for Robotics and System Dynamics P.O. Box 1116 D-82230 Wessling, Germany e-mail: df1y at dv.op.dlr.de Manuscripts that have already been sent to one of the Guest Editors needn't to be sent also to the Editor-in-Chief. Marco Gori, Ph.D. Associate Professor of Computer Science, Dipartimento di Sistemi e Informatica Universita' di Firenze Via S. Marta, 3 - 50139 Firenze (Italy) voice: +39 (55) 479-6265 fax: +39 (55) 479-6363 email: marco at mcculloch.ing.unifi.it WWW: http://www-dsi.ing.unifi.it/~marco From ruppin at math.tau.ac.il Tue Mar 14 08:39:30 1995 From: ruppin at math.tau.ac.il (Eithan Rupin) Date: Tue, 14 Mar 1995 15:39:30 +0200 Subject: Workshop on Modeling Brain Disorders Message-ID: <199503141339.PAA20225@virgo.math.tau.ac.il> NEURAL MODELING OF COGNITIVE AND BRAIN DISORDERS Workshop, June 8 - 10, 1995 Inn and Conference Center, University of Maryland, College Park, MD (located just north of Washington, DC) SPONSORS National Institute of Mental Health National Institute of Neurological Disorders and Stroke National Institute on Deafness and Other Communication Disorders National Institute on Aging Institute for Advanced Computer Studies, University of Maryland Dept. of Neurology, University of Maryland School of Medicine Center for Neural Basis of Cognition, Carnegie Mellon & Pittsburgh Universities Adams Super Center for Brain Studies, Tel Aviv University Center for Neural and Cognitive Sciences, University of Maryland The focus of this workshop will be on the lesioning of neural network models to study disorders in neurology, neuropsychology and psychiatry. The goals of the workshop are: to evaluate current achievements and the possibilities for further advancement; to examine methodological modeling issues, such as limitations of the networks currently employed, and the required computational properties of future models; and to make the material presented at the workshop available to the wider audience of researchers interested in studying neural models of brain disorders. A Proceedings consisting of the abstracts of presentations will be available at the meeting, and a book of contributed chapters based on the workshop is under consideration. Program Committee: Rita Berndt (Maryland), Barry Gordon (Johns Hopkins), Michael Hasselmo (Harvard), Ralph Hoffman (Yale), Joanne Luciano (Boston), Jay McClelland (Carnegie Mellon), Al Nigrin (American), David Plaut (Carnegie Mellon), James Reggia (Maryland), Eytan Ruppin (Tel-Aviv), and Stanley Tuhrim (Mount Sinai). Travel Fellowships: Funding has been requested for a few fellowships to offset travel costs of students, postdocs, and residents. To be considered for any such travel support that becomes available, please send your name, address, phone number, fax number, email address, status (student/postdoc/resident) and proof of status (copy of current student ID, letter from faculty advisor, etc.) to reach J. Reggia at the address below by Wednesday, April 19, 1995. Either indicate the name of the oral/poster presentation of which you are a co-author, or state in two or three sentences why you wish to attend. Registration and Hotel Reservations: Please use attached forms. Registration and reservations prior to May 15 are strongly recommended. Questions? Direct questions about workshop registration or administration to Cecilia Kullman, UMIACS, University of Maryland, College Park, MD 20742 USA; Tel.: (301)405-6722; Fax: (301)314-9658; email: cecilia at umiacs.umd.edu For questions about hotel reservations, please contact the hotel directly as indicated on the reservation form. For questions about the content of the workshop, please contact either Eytan Ruppin via email at ruppin at math.tau.ac.il, or James Reggia, Dept. of Computer Science, A.V. Williams Bldg., University of Maryland, College Park MD 20742 USA; Tel.: (301) 405-2686; Fax: (301)405-6707; email: reggia at cs.umd.edu PROGRAM Each workshop session will be focused on specific disorders and composed of four invited presentations followed by a critical commentary and a general discussion. ----- Thursday, June 8 8:00 AM: Registration Desk Opens 9:00 AM: Welcome: NIH Representative 9:05 AM: Introduction: James Reggia, University of Maryland 9:30 AM: Alzheimer's Disease and Memory Disorders Chair and Discussant: Steven Small (University of Pittsburgh) James McClelland (Carnegie Mellon University), with B. McNaughton and R. O'Reilly - Complementary learning systems in the hippocampus and neocortex Michael Hasselmo (Harvard University) - A computational theory of Alzheimer's disease as a breakdown in cortical learning dynamics David Horn (Tel-Aviv University, Israel), with N. Levy and E. Ruppin - Neural modeling of memory deterioration in Alzheimer's disease Martha Farah (University of Pennsylvania), with L. Tippett - Semantic knowledge impairments in Alzheimer's disease: insights from connectionist modeling 12:30 PM: Lunch Break 2:00 PM: Epilepsy Chair and Discussant: Michael Rogawski (National Institutes of Health) Roger Traub (IBM Watson), with J. Jeffreys - Unifying principles in epileptic after-discharges in vitro John Rinzel (National Institutes of Health) - Modeling network rhythmogenesis of epilepsy using reduced Hodgkin-Huxley neurons William Lytton (University of Wisconsin) - Toward rational pharmacotherapeutics Mayank Mehta (University of Arizona), with C. Dasgupta and G. Ullal - A neural network model for kindling of focal epilepsy 5:00 PM: Break (Put up posters) 5:30 PM: Reception and Poster Presentations ----- Friday, June 9 9:00 AM: Stroke and Functional Effects of Focal Lesions Chair and Discussant: Barry Gordon (Johns Hopkins University) John Pearson (David Sarnoff Research Center) - Plasticity in the organization of adult somatosensory cortex: a computer simulation based on neuronal group selection James Reggia (University of Maryland), with S. Armentrout, S. Goodall, Y. Chen, and E. Ruppin - Modeling post-stroke cortical map reorganization Manfred Spitzer (University of Heidelberg, Germany) - A neuronal network model of phantom limbs Eytan Ruppin (Tel-Aviv University, Israel), with J. Reggia - Patterns of damage in associative memory models and multi-infarct dementia Noon: Lunch Break 1:30 PM: Aphasia and Acquired Dyslexia Chair and Discussant: Rita Berndt (University of Maryland) Gary Dell (University of Illinois), with M. Schwartz, N. Martin, E. Saffran and D. Gagnon - Lesioning a connectionist model of lexical retrieval to simulate naming errors in aphasia Max Coltheart (Macquarie University, Australia), with R. Langdon and M. Haller - Simulation of acquired dyslexias by the DRC model, a computational model of visual word recognition and reading aloud Karalyn Patterson (MRC, Applied Psychology Unit, Cambridge, England), with D. Plaut, J. McClelland, M. Seidenberg and J. Hodges - Connections and disconnections: a connectionist account of surface dyslexia David Plaut (Carnegie Mellon University) - Connectionist modeling of the breakdown and recovery of reading via meaning 4:30 PM: Dinner Break ----- Saturday, June 10 9:00 AM: Schizophrenia, Frontal and Affective Disorders Chair and Discussant: Jonathan Cohen (Carnegie Mellon University \& University of Pittsburgh) Ralph Hoffman (Yale University) - Modeling positive symptoms of schizophrenia using attractor and backpropagation networks David Servan-Schreiber (University of Pittsburgh), with J. Cohen - Cognitive deficits in schizophrenia: modeling neuromodulation of prefrontal cortex Dan Levine (University of Texas at Arlington) - Functional deficits of frontal lobe lesions Joanne Luciano (Boston University), with M. Negishi, M. Cohen, and J. Samson - A dynamic neural model of cognitive and brain disorders Noon: Lunch Break 1:30 PM: Commentary: James McClelland (Carnegie Mellon University) 2:00 PM: General Discussion A brief commentary will be followed by a general discussion of where we are and where we want to go from here. Among the issues to be considered are the successes and limitations of current models of neurological, neuropsychological and psychiatric disorders. What common methods have been identified? How can models of this sort be validated, and at what ``level of detail" should they be formulated? What topics seem amenable to future neural modeling, and what are barriers to further progress in this field? Finally, feedback on the workshop format and content will be solicited, and the interest and usefulness of holding similar workshops or more formal conferences in the future will be assessed. 4:30 PM: Adjournment POSTER PRESENTATIONS Thursday, June 8, 5:30 PM T.S. Braver, J.D. Cohen and D. Servan-Schreiber. A Model of Normal and Schizophrenic Performance in a Task Involving Working Memory and Inhibition. Carnegie Mellon University and University of Pittsburgh, USA. J.L Contreras-Vidal, H.L. Teulings and G.E. Stelmach. A Neural Model of Spatiotemporal Neurotransmitter Dynamics in Parkinson's Disease: Dopamine Depletion and Lesion Studies. Arizona State University, USA. J.T. Devlin, L.M. Gonnerman, E.S. Andersen and M.S. Seidenberg. Modeling Double Dissociation using Progressive, Widespread Damage. University of Southern California, USA. T.M. Gale, R.J. Frank, D.J. Done and S.P. Hunt. Modeling Conceptual Disruption in Dementia of Alzheimer Type. University of Hertfordshire, England. P. Gupta. Phonological Representation, Word Learning, and Verbal Short-Term Memory: A Neural and Computational Model. Carnegie Mellon University, USA. B. Horwitz, A.R. McIntosh, J.V. Haxby, D. Golomb, M.B. Schapiro, S.I. Rapoport and C.L. Grady. Systems-Level Network Analysis of Cortical Visual Pathways Mapped by Positron Emission Tomography (PET) in Dementia of the Alzheimer Type. National Institute on Aging, USA. E.A. Klein and J.C. Wu. Neural Modeling of Striatal and Limbic Structures in Major Depressive Illness Using PET. University of California at Irvine, USA. J.P. Levy. Semantic Representations in Connectionist Models: The Use of Text Corpus Statistics. University of Edinburgh, UK. K.A. Mayall and G.W. Humphreys. A Connectionist Model of Pure Alexia and Case Mixing Effects. University of Birmingham, England. B.F. O'Donnell, M.E. Karapelou, D. Pedini and R.W. McCarley. Visual Pattern Classification and Recognition in Normal and Schizophrenic Subjects: An Adaptive Resonance Theory Simulation Study. Harvard University and Boston University, USA. R.L. Ownby. A Computational Model of Stroke-related Hemineglect: Preliminary Development. University of Miami, USA. D.V. Reynolds, H.A. Getty and G. Atwell. Object-Oriented Computer Models of Brain Disorders Based on Functional Neuroanatomy. Henry Ford Hospital, USA, and University of Windsor, Canada. R. Shilcock and P. Cairns. Connectionist Modelling of Visuospatial Unilateral Neglect. University of Edinburgh, UK. R. Shilcock, M.L. Kelly and K. Loughran. Evidence for a Connectionist Model of Visuospatial Neglect based on Foveal Splitting. University of Edinburgh, UK. S. Tsumoto and H. Tanaka. Computational Analysis of Acquired Dyslexia of Chinese Characters based on Neural Networks. Medical Research Institute, Tokyo Medical University, Japan. J. Sirosh and R. Miikkulainen. Reorganization of Lateral Interactions and Topographic Maps Following Cortical Lesions. University of Texas at Austin, USA. J.P. Sutton. Modeling Cortical Disorders using Nested Networks. Harvard University, USA. C.S. Whitney, R.S. Berndt and J.A. Reggia. A Computational Model of Single Word Oral Reading. University of Maryland, USA. J. Wright and K. Ahmad. The Simulation of Acquired Language Disorders Using Modular Connectionist Architectures. University of Surrey, UK. DIRECTIONS TO UNIVERSITY OF MARYLAND INN AND CONFERENCE CENTER >From National (DCA) Airport: Upon leaving the airport, follow the signs to Washington, D.C., using the George Washington Parkway. Stay on the parkway until you see the I-495 Rockville exit. Follow 495 until you get to the New Hampshire Avenue exit. Take the New Hampshire/Takoma Park exit. Stay on New Hampshire Avenue and make a left at the second light onto Adelphi Road. Drive approximately three miles on Adelphi Road through two traffic lights. At the third light, make a left turn onto University Boulevard and an immediate right into the parking garage. The building is marked University College Center of Adult Education. >From Baltimore-Washington International (BWI) Airport: Upon exiting the airport, follow signs for I-95 (toward Washington). I-95 will take you to 95 South. Follow 95 South approximately 30 miles. Stay on 95 South until you get to the Route 1 South/College Park exit (Exit 25B). Follow Route 1 to the first exit for the University of Maryland (Systems Administration). Take this exit (Route 193) which immediately becomes University Boulevard. Keep on University Boulevard and go through two traffic lights. At the third light (intersection of University Boulevard and Adelphi Road) make a U-turn and an immediate right into the parking garage. The building is marked University College Center of Adult Education. >From Dulles (IAD) Airport: Upon leaving the airport, follow the signs towards Washington, D.C., until you see the signs for I-495. Take the exit towards Rockville. Follow 495 until you get to the exit for New Hampshire Avenue. Take the New Hampshire/Takoma Park exit. Stay on New Hampshire Avenue and make a left at the second light onto Adelphi Road. Drive approximately three miles on Adelphi Road through two traffic lights. At the third light, make a left turn onto University Boulevard and an immediate right into the parking garage. The building is marked University College Center of Adult Education. ----------------------------cut here------------------------------- REGISTRATION FORM NEURAL MODELING WORKSHOP College Park, Md June 8-10, 1995 Name: ___________________________________________________ Affiliation: ________________________________________________ Address: _________________________________________________ _________________________________________________________ Telephone: ___________________________ Fax: ________________________________ e-mail: ______________________________ ___ $50 Conference fee before 5/15/95 ___ $65 Conference fee after 5/15/95 ___ $25 Student/postdoc/resident fee Amount Enclosed: $________________ MAKE CHECKS PAYABLE TO "UMIACS-Neural Modeling Workshop." Conference fee includes proceedings, coffee and reception. Payment must accompany the registration form. Checks must be in US dollars only and payable to "UMIACS-Neural Modeling Workshop." Please do not send cash. CREDIT CARDS WILL NOT BE ACCEPTED. Students/postdocs/residents must provide a copy of a student ID card or a letter from a faculty member for proof of status. RETURN BY MAY 15, 1995 TO: Cecilia Kullman UMIACS University of Maryland College Park, MD 20742, USA Tel.: (301) 405-6722. Fax: (301) 314-9658 e-mail: cecilia at umiacs.umd.edu ----------------------------- cut here ---------------------------- HOTEL RESERVATION FORM Neural Modeling Workshop The Inn and Conference Center University of Maryland University College Please reserve the following accommodations: ___ $69 Single Occupancy ___ $84 Double Occupancy Arrival Date: ____________ Departure Date: ____________ ___ Smoking ___ Non-smoking ___ Deposit check enclosed in the amount of $ ____________ ___ Credit card guarantee: Credit card number: _____________________________ Credit card expiration date: ____________ Signature: ________________________________ Name: ___________________________________________ Affiliation: _________________________________________ Address: __________________________________________ __________________________________________________ Telephone: _________________________________________ Fax: ______________________________________________ Rates are per room per night. All rates are subject to a 5% occupancy tax. All reservations must be accompanied by a deposit of one night's room rate plus tax, or a credit card guarantee. Guaranteed reservations will be held until 6:00 a.m. the following day. Reservations not canceled prior to 6:00 p.m. on the arrival day will be charged one night's room rate plus tax. SEND BY MAY 15, 1995 TO: Reservations The Inn and Conference Center University of Maryland University College College Park, MD 20742, USA Tel.: (301) 985-7300, Fax: (301) 985-7850 From marks at u.washington.edu Tue Mar 14 12:55:17 1995 From: marks at u.washington.edu (Robert Marks) Date: Tue, 14 Mar 95 09:55:17 -0800 Subject: book announcement: Computational Intelligence Message-ID: <9503141755.AA29898@carson.u.washington.edu> COMPUTATIONAL INTELLIGENCE: Imitating Life edited by Jacek M. Zurada, Robert J. Marks II and Charles J. Robinson IEEE Press 1994 Computational Intelligence has emerged from the fusion of the fields of neural networks, fuzzy systems and evolutionary computation. For the first time, contributions of world-renowned experts and pioneers in the field are collected into a single volume. The articles are grouped into the following categories: Computational Learning Theory, Approximate Reasoning, Evolutionary Computation, Biological Computation and Pattern Recognition, Intelligent Control, Hybrid Computational Intelligence, and Applications. The contributions were first presented in a special Plenary Symposium held in conjunction with the 1994 World Congress on Computational Intelligence. Key features include: An introduction by the editors...An extensive overview of computational intelligence by James Bezdek...Articles by such leading experts as: James Bezdek, Didier Dubois, R. Eckmiller, Lawrence J. Fogel, Anil K. Jain, James M. Keller, Reza Langari, Erkki Oja, Henri Prade, Steven K. Rogers, J. David Schaffer, Michio Sugeno, H.J. Zimmerman, and others...A special section on applicatins in the fields of biology, signal and image processing, robotics and control... An extensive subject index for easy reference, and more! Contributions from Bezdek, Oja, Berenji, Hecht-Nielsen, Dubois, Prade, DeJong, Fogel, Anil Jain, Anderson, Usui, Eckmiller, Sugeno, Bonissone, Fukuda, Schaffer, Zimmerman, Fogelman & Rogers IEEE Member Price: $40.00 List Price: $49.95 1994 Hardcover 448 pp ISBN 0-7803-1104-3 To order, call IEEE Customer Service: (800)678-IEEE or (908)981-1393 or fax (908)981-9667. IEEE Order No: PC04580 Request the Table of Contents from r.marks at ieee.org From movellan at cogsci.UCSD.EDU Mon Mar 13 22:27:46 1995 From: movellan at cogsci.UCSD.EDU (Javier Movellan) Date: Tue, 14 Mar 1995 11:27:46 +0800 Subject: AV Database Message-ID: <9503141927.AA18485@ergo.UCSD.EDU> Tulips1 is a small Audio-Visual database useful for simple projects on audio-visual speech recognition. I am making the data available through anonymous ftp on ergo.ucsd.edu at /pub/tulips1 Tulips1 includes 12 subjects saying the first four digits in English. Audio part is in .au format, visual part was digitized at 30fps and it is in .pgm format. -Javier From yves at netid.com Tue Mar 14 16:01:26 1995 From: yves at netid.com (Yves Chauvin) Date: Tue, 14 Mar 95 13:01:26 PST Subject: Backpropagation volume available Message-ID: <9503142101.AA01815@netid.com> The volume: Back-propagation: Theory, Architectures, and Applications. (1995). Yves Chauvin and David E. Rumelhart (Eds.). Lawrence Erlbaum: Hillsdale, NJ. is now available. Table of Contents and ordering instructions are given below. Yves Chauvin ---------------------------------------------------------------------------- TABLE OF CONTENTS Backpropagation: The basic theory. D. E. Rumelhart, R. Durbin, R. Golden, Y. Chauvin Phoneme recognition using time-delay neural networks. A. Waibel, T. Hanazawa, G. E. Hinton, K. Shikano, K. J. Lang Automated aircraft flare and touchdown control using neural networks C. Schley, Y. Chauvin, V. Henkle Recurrent backpropagation networks. F. J. Pineda A focused backpropagation algorithm for temporal pattern recognition M. C. Mozer Nonlinear control with neural networks D. H. Nguyen, B. Widrow Forward models: Supervised learning with a distal teacher. M. J. Jordan, D. E. Rumelhart Backpropagation: Some comments and variations. S. J. Hanson Graded state machines: The representation of temporal contingencies in feedback networks. A. Cleeremans, D. Servan-Schreiber, J. L. McClelland Spatial coherence as an internal teacher for a neural network. S. Becker, G. E. Hinton Connectionist modeling and control of finite state systems given partial state information. J. R. Bachrach, M. C. Mozer Backpropagation and unsupervised learning in linear networks. P. Baldi, Y. Chauvin, K. Hornik Gradient-based learning algorithms for recurrent networks and their computational complexity. R. J. Williams, D. Zipser When neural networks play Sherlock Holmes P. Baldi, Y. Chauvin Gradient descent learning algorithms: A unified perspective P. Baldi ---------------------------------------------------------------------------- ORDERING INSTRUCTIONS To order: 1) call toll free: 1-800-9-BOOKS-9 (1-800-926-6579) 2) fax: 201-666-2394 3) email: orders at leahq.mhs.compuserve.com 4) send order to:Lawrence Erlbaum Associates,Inc 365 Broadway Hillsdale, NJ 07642 - Orders should include name of Editors, Title, and ISBN (0-8058-1258-X). -if paying by check include handling charge of $2.00 for first book and $.50 for each additional book. - if paying by credit card include: card name, ie: Visa, Mastercard, AMEX, Discover, include card number, expiration and signature if mailing credit card order. Price for the hard cover copy is $125 and $45 for the paper back. There is a 10% discount off orders that are prepaid (for individuals only). From jbower at bbb.caltech.edu Tue Mar 14 17:51:17 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Tue, 14 Mar 95 14:51:17 PST Subject: Two Postdoctoral Positions Message-ID: <9503142251.AA06832@bbb.caltech.edu> TWO POSTDOCTORAL POSITIONS IN CEREBELLAR NETWORK MODELING in collaboration with: James M. Bower California Institute of Technology and Erik De Schutter University of Antwerp, Belgium Two postdoctoral positions are available immediately to participate in an ongoing collaboration on the use of realistic models of cerebellar cortical neurons and circuits to investigate cerebellar function. A major objective of this collaboration is to construct a morphologically and physiologically realistic network model of the cerebellar cortex of the rat. This network model is an extension of our previous efforts to construct realistic compartmental models of the principle cell types within cerebellum (see below). The model itself is being constructed in the GENESIS simulation system, and involves the use of parallel supercomputers as computational engines. The project is supported by the Human Frontier Science Organization. Successful candidates will link modeling and physiology efforts in Dr. Bower's laboratory at the California Institute of Technology with the simulation-based research of Dr. De Schutter at the University of Antwerp (Belgium). As the selected candidates will be expected to collaborate extensively with researchers at both sites, exchange visits and dual appointments will be provided. Candidates should have computational neuroscience experience, preferentially with compartmental models or with the GENESIS software. Candidates whose previous research combines both modeling and experimental physiology are particularly encouraged to apply. Positions are available for 2 to 3 years, starting immediately. Salary commensurate with experience. Funding is independent of nationality. Caltech is an equal opportunity employer. Women and those underrepresented in science are particularly encouraged to submit applications. Applicants must send curriculum vitae, a statement of why this research interests them, and three references to BOTH Prof. Bower and Prof. De Schutter, if possible by e-mail. Prof. J.M. Bower Prof. E. De Schutter Div. of Biology Born Bunge Foundation MC 216-76 Dept. of Medicine California Institute of Technology University of Antwerp - UIA Pasadena, CA 91125 B2610 Antwerp USA Belgium fax: +1-818-4490679 +32-3-8202541 e-mail: jbower at smaug.bbb.caltech.edu erik at kuifje.bbf.uia.ac.be Additional information: World Wide Web: http://www.bbb.caltech.edu/bowerlab http://bbf-www.uia.ac.be/ E. De Schutter and J.M. Bower: An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. Journal of Neurophysiology 71: 375-400 (1994). E. De Schutter and J.M. Bower: An active membrane model of the cerebellar Purkinje cell: II. Simulation of synaptic responses. Journal of Neurophysiology 71: 401-419 (1994). E. De Schutter and J.M. Bower: Simulated responses of cerebellar Purkinje cell are independent of the dendritic location of granule cell synaptic inputs. Proceedings of the National Academy of Sciences USA 91: 4736-4740 (1994). From chris at psychologie.uni-leipzig.de Wed Mar 15 10:19:02 1995 From: chris at psychologie.uni-leipzig.de (Christian Kaernbach) Date: Wed, 15 Mar 95 16:19:02 +0100 Subject: Conference in Leipzig, June 29 - July 1 Message-ID: <9503151519.AA03144@psychologie.uni-leipzig.de> PHENOMENA AND ARCHITECTURES OF COGNITIVE DYNAMICS Symposium in Leipzig, Germany, June 29 to July 1 '95 Speakers: . Topics: A. Baddeley, Cambridge . E. Basar, Luebeck . A reference to Ernst Heinrich Weber I. Biederman, Los Angeles . (* 24.6.1795, 4 contributions) will H. Colonius, Oldenburg . be followed by two sessions, S. Dehaene, Paris . contrasting R. Eckhorn, Marburg, . * binding by temporal coherence P. Goldman-Rakic, New York . (and other topics related to M. W. Greenlee, Freiburg . synchronisation) J. L. van Hemmen, Muenchen . with G. Hitch, London . * seemingly serial processes in P. Koenig, La Jolla . working memory (Sternberg paradigm) M. Kubovy, Charlottesville . Both topics cover about the same R. van Lier, Nijmegen . temporal domain (25 ms to n00 ms). S. W. Link, Hamilton . Psychologists, psychiologists, and C. von der Malsburg, Bochum . connectionists will present to each I. V. Maltseva, Moskau . other their specific view point. R. Mausfeld, Kiel . Questions of interest will be: W. Phillips, Stirling . - Is binding by temporal coherence E. Scheerer, Oldenburg . fast enough? M. Schuermann, Luebeck . - How to code hierarchical structures O. Sporns, La Jolla . with temporal code? J. T. Townsend, Bloomington . - How can massively parallel models S. J. Williamson, New York . explain serial processes? The conference is the starting activity of a new cognition research group at Leipzig university. The scientific board is made up by Hans-Georg Geissler, Edith Goepfert, and Andreas Schierwagen. Organisation: Christian Kaernbach Institut fuer Allgemeine Psychologie Universitaet Leipzig Tieckstr. 2 04 275 Leipzig, Germany Tel.: +49 341 97-36932 Fax: +49 341 328464 E-mail: chris at psychologie.uni-leipzig.de From P.McKevitt at dcs.shef.ac.uk Wed Mar 15 12:39:01 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Wed, 15 Mar 95 17:39:01 GMT Subject: IEE COLLOQ. LONDON (MAY): GROUNDING-REPRESENTATIONS (Sharkey/ Mc Kevitt) Message-ID: <9503151739.AA18698@dcs.shef.ac.uk> ============================================================================== GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS ============================================================================== PROGRAMME AND CALL FOR PARTICIPATION GROUNDING REPRESENTATIONS: Integration of sensory information in Natural Language Processing, Artificial Intelligence and Neural Networks IEE COLLOQUIUM IEE Computing and Control Division [Professional group: C4 (Artificial Intelligence)] in association with: British Computer Society Specialist Group on Expert Systems and The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) MONDAY, MAY 15th, 1995 ********************** at the IEE Colloquium Savoy Place London, ENGLAND Chairs NOEL SHARKEY and PAUL MC KEVITT Department of Computer Science University of Sheffield, England WORKSHOP DESCRIPTION: Perhaps the most famous criticism of traditional Artificial Intelligence is that computer programs use symbols that are arbitrarily interpretable (see Searle, 1980 for the Chinese Room and Harnad, 1990 for the symbol grounding problem). We could, for example, use the word "apple" to mean anything from a "common fruit" to a "pig's nose". All the computer knows is the relationship between this symbol the others that we have given it. The question is, how is it possible to move from this notion of meaning, as the relationship between arbitrary symbols, to a notion of "intrinsic" meaning. In other words, how do we provide meaning by grounding computer symbols or representations in the physical world? The aim of this colloquium is to take a broad look at many of the important issues in relating machine intelligence to the world and to make accessible some of the most recent research in integrating information from different modalities. For example, why is it important to have symbol or representation grounding and what is the role of the emerging neural network technology? One approach has been to link intelligence to the sensory world through visual systems or robotic devices. Another approach is work on systems that integrate information from different modalities such as vision and language. Yet another approach has been to examine how the human brain relates sensory, motor and other information. It looks like we may be at long last getting a handle on the age old CHINESE ROOM and SYMBOL GROUNDING problems. Hence this colloquium has as its focus, "grounding representations. The colloquium will occur over one day and will focus on three themes: (1) Biology and development; (2) Computational models and (3) Symbol grounding. The target audience of this colloquium will include Engineers and Scientists in Neural Networks and Artificial Intelligence, Developmental Psychologists, Cognitive Scientists, Philosophers of mind, Biologists and all of those interested in the application of Artificial Intelligence to real world problems. PROGRAMME: Monday, May 15th, 1995 ************************ INTRODUCTION: 9.00 REGISTRATION + SUSTENANCE 10.00 `An introduction' NOEL SHARKEY (Department of Computer Science, University of Sheffield, ENGLAND) BIOLOGY: 10.30 `The neuronal mechanisms of language' VALENTINO BRAITENBERG (Max Plank Institute for Biological Cybernetics, Tuebingen, GERMANY) COMPUTATIONAL MODELS: 11.00 `Natural language and exploration of an information space' OLIVIERO STOCK (Istituto per la Ricerca Scientifica e Technologica, IRST) (Trento, ITALY) 11.30 `How visual salience influences natural language descriptions' WOLFGANG MAASS (Cognitive Science Programme) (Universitaet des Saarlandes, Saarbruecken, GERMANY) 12.00 DISCUSSION 12.30 LUNCH GROUNDING SYMBOLS: 2.00 `On grounding language with neural networks' GEORG DORFFNER (Austrian Institute for Artificial Intelligence, Vienna, AUSTRIA) 2.30 `Some observations on symbol-grounding from a combined symbolic/connectionist viewpoint' JOHN BARNDEN (Computing Research Laboratory, New Mexico, USA) & (Department of Computer Science, University of Reading, ENGLAND) 3.00 Sustenance Break 3.30 `Grounding symbols in sensorimotor categories with neural networks' STEVAN HARNAD (Department of Psychology, University of Southampton, ENGLAND) PANEL DISCUSSION AND QUESTIONS: 4.00 `Grounding representations' Chairs + Invited speakers S/IN S/IN: 4.30 `De brief/comments' PAUL MC KEVITT (Department of Computer Science, University of Sheffield, ENGLAND) 5.00 O/ICHE MHA/ITH ***************************** PUBLICATION: We intend to publish a book on this Colloquium Proceedings. ADDRESSES IEE CONTACT: Sarah Leong Groups Officer The Institution of Electrical Engineers (IEE) Savoy Place GB- WC2R OBL, London England, UK, EU. E-mail: SLeong at iee.org.uk (Sarah Leong) E-mail: mbarrett at iee.org.uk (Martin Barrett) E-mail: dpenrose at iee.org.uk (David Penrose) WWW: http://www.iee.org.uk Ftp: ftp.iee.org.uk FaX: +44 (0) 171-497-3633 Phone: +44 (0) 171-240-1871 (general) Phone: +44 (0) 171-344-8423 (direct) LOCATION: The Institution of Electrical Engineers (IEE) Savoy Place GB- WC2R OBL, London England, UK, EU. ACADEMIC CONTACT: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary) REGISTRATION: Registration forms are available from SARAH LEONG at the above address and should be sent to the following address: (It is NOT possible to register by E-mail.) Colloquium Bookings Institution of Electrical Engineers (IEE) PO Box 96 Stevenage GB- SG1 2SD Herts England, UK, EU. Fax: +44 (0) 143 874 2792 Receipt Enquiries: +44 (0) 143 876 7243 Registration enquiries: +44 (0) 171 240 1871 x.2206 PRE-REGISTRATION IS ADVISED ALTHOUGH YOU CAN REGISTER ON THE DAY OF THE EVENT. ________________________________________________________________________ R E G I S T R A T I O N COSTS ________________________________________________________________________ (ALL FIGURES INCLUDE VAT) IEE MEMBERS 44.00 NON-IEE MEMBERS 74.00 IEE MEMBERS (Retired, Unemployed, Students) FREE NON-IEE MEMBERS (Retired, Unemployed, Students) 22.00 LUNCH TICKET 4.70 MEMBERS: Members of the IEEIE, The British Computer Society and the Society for the Study of Artificial Intelligence and Simulation of Behaviour and Eurel Member Associations will be admitted at Members' rates. ============================================================================== GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS GROUNDING REPRESENTATIONS ============================================================================== From pjs at aig.jpl.nasa.gov Fri Mar 17 11:59:53 1995 From: pjs at aig.jpl.nasa.gov (Padhraic J. Smyth) Date: Fri, 17 Mar 95 08:59:53 PST Subject: Summer student positions at JPL Message-ID: <9503171659.AA02781@amorgos.jpl.nasa.gov> SUMMER POSITIONS AT THE JET PROPULSION LABORATORY (JPL) March 16th 1995 JPL is seeking applications from graduate students interested in summer positions (1995). Ideal candidates will have a background and interest in statistical pattern recognition, applied statistics, and image processing (or some subset of these topics). The work will include participation in ongoing projects for automated analysis of remote-sensing and sky-survey images, including both theoretical investigations and algorithm development. Candidates should be capable of implementing developed algorithms in a standard programming language such as C or within an environment such as MATLAB. This is an ideal opportunity for students wishing to get involved in the analysis of large high-dimensional datasets of scientific importance. Interested applicants should send a copy of their resume to: Padhraic Smyth JPL 525-3660 4800 Oak Grove Drive Pasadena, CA 91109. or email a postscript or ascii copy to: pjs at galway.jpl.nasa.gov From chris at orion.eee.kcl.ac.uk Fri Mar 17 10:14:29 1995 From: chris at orion.eee.kcl.ac.uk (Chris Christodoulou) Date: Fri, 17 Mar 95 15:14:29 GMT Subject: NN RIG Lecture Message-ID: <9503171514.AA14235@orion.eee.kcl.ac.uk> IEEE Neural Networks Regional Interest Group Chairman: Trevor Clarkson (tgc at kcl.ac.uk) NN RIG LECTURE "Applications of Neural Nets and Fuzzy Logic in Communications Systems" by Dr Stamatios Kartalopoulos (VP IEEE Neural Networks Council) to be held at King's College London, Strand, London WC2 on Wednesday 29th March 1995 at 6.00pm Room 1B23 (Strand Building) All are welcome at this lecture SUMMARY Neural Networks and Fuzzy Logic have already found wide range applicability. The projection is that this trend will keep continuing. Although the notion of neural networks was to escape from the conventional computer, nevertheless, to date, many neural network applications still depend on the conventional computer; during learning or during normal operation. The all-neural network, independent from the conventional digital computer (during learning and operation), and outperforming a conventional computer in equivalent functionality with cost/performance as a metric, is yet to come. Currently, we see niche applications that address improvements in a specific area within a complex system. Most of the applications described fall in this category. Paradigms used today are math intensive; the neural network problem has been shifted from the unknown mental process to an optimization algorithmic problem using conventional math. I cannot believe that the human brain operates on equations to estimate and solve a problem, to recognize objects, or to make inferences. I cannot believe that the human brain solves the backpropagation algorithm, or any other algorithm when it is trained. What is needed are fresh ideas. Ideas that go beyond the current conventional paradigms. Ideas that emulate human thinking and map it on a network. Conventional math is good for modelling to use computer tools for emulation and simulation, for the time being. However, conventional math is a mental attractor that keeps pulling us back to conventional techniques. In short, there are many Challenges that make the future seem more exciting than ever. From isabelle at research.att.com Fri Mar 17 12:39:29 1995 From: isabelle at research.att.com (Isabelle Guyon) Date: Fri, 17 Mar 95 12:39:29 EST Subject: No subject Message-ID: <9503171738.AA25122@big.info.att.com> #################################### REMINDER ################################# ###### M O N D A Y , M A R C H 2 0 T H D E A D L I N E ####### #################################### REMINDER ################################# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ < ICANN industrial 1 day workshop: > < Neural network applications > < to DOCUMENT ANALYSIS and RECOGNITION > < Paris, October 1995 > \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ * Layout and logical structure analysis of documents. * Map drawing and understanding. * OCR and handwriting recognition (off-line and on-line). * Multimedia document processing. * Image/text retrieval, automated indexing. * User interfaces to electronic libraries. * Image/text compression. This workshop will be a forum for application researchers and developers to present their systems and discuss tabu subjects, including: - hybrid solutions, - solutions that work (don't know why), - solutions that do not work (though theoretically optimum), - hacks, tricks and miscellaneous occult methods, - marketing advantage/disadvantage of saying that there is a NN in the system. The condition of acceptance will not be the novelty of the algorithms but the existence of a working "money making" application or at least a working prototype with a path towards industrialization. The performance of the system should be measured quantitatively, preferably using known benchmarks or comparisons with other systems. As for regular scientific papers, every statement should be properly supported by experimental evidence, statistics or references. Demonstrations and videos are encouraged. *** Submission deadline of a 6 page paper = March 20, 1995 *** Send 4 paper copies to: Isabelle Guyon ---------------------- AT&T Bell Laboratories 955 Creston road Berkeley, CA 94708, USA Electronic formats available at: ftp lix.polytechnique.fr login: anonymous password : your e-mail address ftp> cd /pub/ICANN95/out For more informations on ICANN write to isabelle at research.att.com. From jain at arris.com Fri Mar 17 12:58:20 1995 From: jain at arris.com (Ajay Jain) Date: Fri, 17 Mar 95 09:58:20 -0800 Subject: Position available: Arris Pharmaceutical Message-ID: <9503171758.AA01501@snug.arris.com> Position available: Arris Pharmaceutical Corporation Job Title: Scientist, Computational Sciences Arris Pharmaceutical is a South San Francisco-based pharmaceutical company of about 85 people (NASDAQ: ARRS). We are dedicated to the efficient discovery and development of orally-active human therapeutics. Computational approaches to drug discovery form one of our core technologies, and we have developed novel techniques for computer-aided drug design that rely ideas from machine learning, computational geometry, chemistry, and physics. Job description and requirements: Working as a member of the Computational Sciences Department, perform applied research in computational drug design, particularly in the areas of flexible molecular docking, three-dimensional quantitative structure-activity prediction, flexible molecular database screening, and de novo ligand design. The position requires a PhD in Computer Science with significant research experience in computational geometry, machine learning, pattern recognition, or related fields. Formal training in organic chemistry is also required. The ideal candidate will have demonstrated success in applied research on non-toy problems requiring understanding of the underlying problem domain. Proficiency in C or C++ is also required. Please send your resume with the names and addresses of three references to me (address below). Some publications that are illustrative of work in our group are listed below: A. N. Jain, N. L. Harris, and J. Y. Park. Quantitative Binding Site Model Generation: Compass Applied to Multiple Chemotypes Targeting the 5HT1A Receptor. Journal of Medicinal Chemistry. In press (appearing very soon). A. N. Jain, T. G. Dietterich, R. L. Lathrop, D. Chapman, R. E. Critchlow, B. E. Bauer, T. A. Webster, and T. Lozano-Perez. Compass: A shape-based machine learning tool for drug design. Journal of Computer-Aided Molecular Design 8(6): 635-652, 1994. A. N. Jain, K. Koile, D. Chapman. Compass: Predicting biological activities from molecular surface properties; performance comparisons on a steroid benchmark. Journal of Medicinal Chemistry 37: 2315-2327, 1994. T. G. Dietterich, A. N. Jain, R. L. Lathrop, and T. Lozano-Perez. A comparison of dynamic reposing and tangent distance for drug activity prediction. In Advances in Neural Information Processing Systems 6, ed. J. D. Cowan, G. Tesauro, and J. Alspector. San Francisco, CA: Morgan Kaufmann. 1994. ------------------------------------------------------------------------ Dr. Ajay N. Jain Senior Scientist, Computational Sciences Email: jain at arris.com Arris Pharmaceutical Corporation Phone: (415) 737-1651 385 Oyster Point Boulevard, Suite 3 FAX: (415) 737-8590 South San Francisco, CA 94080 From giles at research.nj.nec.com Fri Mar 17 16:26:10 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 17 Mar 95 16:26:10 EST Subject: Computational capabilities of recurrent NARX neural networks Message-ID: <9503172126.AA17893@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ Computational capabilities of recurrent NARX neural networks UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-12 AND CS-TR-3408 H. T. Siegelmann[1], B. G. Horne[2], C. L. Giles[2,3] [1] Dept. of Information Systems Engineering, Technion, Haifa 32000, Israel [2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [3] UMIACS, University of Maryland, College Park, MD 20742 iehava at ie.technion.ac.il {horne,giles}@research.nj.nec.com Recently, fully connected recurrent neural networks have been proven to be computationally rich --- at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called {\em NARX networks}. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by \[ y(t) = \Psi \left( \rule[-1ex]{0em}{3ex} u(t-n_u), \ldots, u(t-1), u(t), y(t-n_y), \ldots, y(t-1) \right), \] where $u(t)$ and $y(t)$ represent input and output of the network at time $t$, $n_u$ and $n_y$ are the input and output order, and the function $\Psi$ is the mapping performed by a Multilayer Perceptron. We constructively prove that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use the NARX models, rather than conventional recurrent networks without any computational loss even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. ------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/NARX.capabilities.ps.Z -------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From AIHENP95 at pisa.infn.it Fri Mar 17 15:30:01 1995 From: AIHENP95 at pisa.infn.it (AIHENP95@pisa.infn.it) Date: Fri, 17 Mar 1995 21:30:01 +0100 (WET) Subject: Fourth Mailing - AIHENP95 Pisa, 3-8 April 1995 Message-ID: <950317213002.6340022f@pisa.infn.it> _______________________________________________________________________________ FOURTH INTERNATIONAL WORKSHOP ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE FOR HIGH ENERGY AND NUCLEAR PHYSICS AIHENP95-Pisa Pisa (Tuscany), Italy 3 - 8 April, 1995 --------- FOURTH MAILING ---------- ---Workshop Program---Participant Information--- _______________________________________________________________________________ INTERNATIONAL SCIENTIFIC ADVISORY COMMITTEE S. R. Amendolia INFN & Univ. Sassari Pisa I G. Auger GANIL Caen F K. H. Becks Bergische Univ. Wuppertal D O. Benhar INFN Rome I R. Brun CERN CN Geneva CH B. Denby INFN Pisa I F. Etienne CPPM Marseille F R. Gatto Geneva Univ. Geneva CH G. Gonnet ETHZ Zurich CH M. Green Royal Holloway Col. Egham Surrey GB V. Ilyin Moscow University Moscow RU F. James CERN Geneva CH A. Kataev INR Moscow RU P. Kunz SLAC Stanford USA M. Kunze Ruhr University Bochum D C. S. Lindsey KTH Stockholm S V. Matveev INR Moscow RU K. McFarlane CEBAF/Norfolk Newport News USA R. Odorico Univ. of Bologna Bologna I D. Perret-Gallix LAPP Annecy F C. Peterson Lund University Lund S B. Remaud IN2P3 Paris F E. Remiddi Univ. of Bologna Bologna I P. Ribarics MPI Munich D M. Sendall CERN ECP Geneva CH Y. Shimizu KEK Tsukuba JP D. Shirkov JINR Dubna RU A. Smirnitsky ITEP Moscow RU R. Tripiccione INFN Pisa I M. Veltman Univ. of Michigan Ann Arbor USA J. Vermaseren NIKHEF-H Amsterdam NL C. Vogel CISI Paris F E. Wildner CERN PS Geneva CH DEAR COLLEAGUES: This is the Fourth Mailing for the 1995 edition of the AIHENP worskshop series, containing the Workshop Program, Further Information for Participants, and the Registration/Accommodation forms for those of you who may not yet have turned them in. The LaTeX file for preparation of camera ready manuscripts is not included here but is still available via WWW at the URL's shown below. The page limit is 6 pages for parallel papers and 8 pages for plenary papers. The organizing committee can also supply a copy on request. The completed papers are to be brought to the workshop. WWW URL's for viewing AIHENP95 Information: http://www.cern.ch/Physics/Conferences/C1995/Overview.html http://www1.cern.ch/NeuralNets/nnwInHep.html We on the organizing committee look forward to seeing you in Pisa very soon! Bruce Denby Conference Chairman For the International Advisory Committe and Local Organizing Committee ABOUT AIHENP ------------ The AIHENP series workshops are intended primarily for scientists working in fields related to High Energy and Nuclear Physics. The AIHENP series began in Lyon, France, in March 1990, and has subsequently been sited in La Londe les Maures, France, in January 1992, and in Oberammergau, Germany, in October 1993. The workshops have always been less formal than full conferences, stressing *new* results and ideas, and with sufficient time allowed for spontaneous discussions to develop. As in the past, the 1995 workshop will consist of plenary sessions and three parallel sessions covering our three subgroups, 1) SOFWARE ENGINEERING 2) ARTIFICIAL INTELLIGENCE AND NEURAL NETS 3) SYMBOLIC MANIPULATION along with tutorials and demonstrations, poster session, and industrial booths. Tutorials this year include: INTRODUCTION TO FUZZY LOGIC FOR HIGH ENERGY PHYSICS; PARTICLE SEARCHES WITH NEURAL NETWORKS; INTRODUCTION TO C AND OBJECT ORIENTED PROGRAMMING FOR PHYSICISTS. We have also invited a few experts from other fields to come and give keynote talks which should give us some new perspectives. This year, for example, we have talks on SPACE APPLICATIONS OF NEURAL NETWORKS and COMMERCIAL APPLICATIONS OF NEURAL NETWORKS, the latter of which will be given by the President of the European Neural Networks Society, Prof. Francoise Fogelman, of SLIGOS, Paris, France. Industrial Exhibits confirmed are: IBM (Rome), Spring (Rome), MACS (Pisa, CNAPS Hardware), and Siemens (Munich/Vienna, SYNAPSE Hardware). The workshop is also support by Apple Computer, Alenia, and CAEN. FURTHER INFORMATION FOR PARTICIPANTS ------------------------------------ All organizational details except the scientific program are being handled by TRE EMME CONGRESSI in Pisa. Their official registration forms and information follow at the end of this document. YOUR TRE EMME REGISTRATION FORM IS YOUR ONLY OFFICIAL REGISTRATION FOR THE WORKSHOP. Pisa Airport is only about one kilometer from town center and has daily connections to several international airports (Paris, London, Frankfort, etc.) and to national airports of Rome and Milan. Pisa may also be reached easily by train via Milan, Rome, Florence, or Turin. There are taxi stands at the airport and train station. A typical taxi fare within town is about 10.000 lira. Pisa is a small city and often walking is the best way to get around. The Palazzo dei Congressi is within walking distance of all workshop hotels. Traffic and especially parking in Pisa are difficult. Rental cars are not recommended. The Worskhop Telephone Numbers (DURING THE WORKSHOP ONLY) will be: +39 50 598.139 telephone +39 50 598.112 fax A limited number of terminals will be available for checking electronic mail, etc., during the workshop. Poster sessions and Industrial Exhibits will run continuously during AIHENP95. Posters can be mounted or removed at any time during the workshop. Presenters of posters are encouraged to be present at their posters from 13:30 - 14:00 and from 19:00 - 20:00 for viewing and questions. --------------------------------- CUT HERE ------------------------------------ +-----------------------------------------------------------------------+ | AIHENP95 Workshop Mini-Schedule Pisa 3-8 April 1995 | +-----------------------------------------------------------------------+ | KEY: A1 = First "AI" sess. (Main Auditorium) | | A2 = Second "AI" sess. (Pacinotti Room) | | SM = Sym. Manip. sess. (Fermi Room) | | SE = Soft. Eng. sess. (Pacinotti Room) | | | | (Plenary Talks in Main Auditorium) | | (User Terminals in Auletta "A") | | (Poster Sessions and Industrial Exhibitions run continuously) | | (Posters manned 13:30-14:00, 19:00-20:00) | +-----+----------+----------+----------+----------+----------+----------+ |Sess.| Mon 3 | Tue 4 | Wed 5 | Thu 6 | Fri 7 | Sat 8 | +-----+----------+----------+----------+----------+----------+----------+ |08:30| -09:30- | | | | |Par. Sess.| | |Local Org.| Intro & | Parallel | Parallel | Parallel | A1-SM-SE | |Morn.|Committee | Plenary | Sessions | Sessions | Sessions | -09:30- | | | Meeting | Talks | A1-SM-SE | A1-SM-SE | A1-SM-SE | Summary | |12:30| Palazzo | | | | | Talks | +-----+----------+----------+----------+----------+----------+----------+ |Meal?| No | Lunch | Lunch | Lunch | Lunch | No | +-----+----------+----------+----------+----------+----------+----------+ |14:00| -16:00- | Plenary | | | | | | |Registr'n.| Talks | Parallel | Parallel | Parallel | Workshop | |Aft. | Begins | -15:00- | Sessions | Sessions | Sessions | Ends | | |Palazzo di|Par. Sess.| A1-SM-A2 | A1-SM-A2 | A1-SM-SE | | |19:00| Congressi| A1-SM-SE | | | | | +-----+----------+----------+----------+----------+----------+----------+ | |**18:00** | | -20:30- | -19:00- | | | | Welcome | Special | Dinner | Advisory | Special | |Eve. | Cocktail | Talks | Royal |Committee | Talks | | |Palazzo di| | Victoria | Meeting | | | | Congressi| | Hotel | Palazzo | | +-----+----------+----------+----------+----------+----------+ --------------------------------- CUT HERE ------------------------------------ ========================================================================= AIHENP95 PISA WORKSHOP PROGRAM 3-8 APRIL 1995 PALAZZO DEI CONGRESSI, PISA, ITALY ========================================================================= WELCOME CEREMONIES 3 April Monday 18:00 - 20:00 ------------------------------------------------------------------------- Welcoming Addresses: Giuseppe Pierazzini, Director, INFN Sezion di Pisa Francoise Fogelman, President, European Neural Network Society ========================================================================= PLENARY SESSION 4 April Tuesday 08:30 - 15:00 ========================================================================= 4 April Tuesday Morning 08:30 - 12:30 Chair: TBA 08:30 WELCOME TO AIHENP95 Pisa - Bruce Denby (INFN Pisa) Workshop Chairman 08:45 SPACE APPLICATIONS OF - Thomas Lindblad (KTH Stockholm) NEURAL NETWORKS 09:45 COMMERCIAL APPLICATIONS OF - Francoise Fogelman (SLIGOS, Paris) NEURAL NETWORKS (President, European Neural Network Society) 10:45 STATE OF THE ART IN SYMBOLIC - TBA MANIPULATION 11:45 INTRODUCTION TO C AND OBJECT - P. Murat (ITEP Moscow/INFN Pisa) ORIENTED PROGRAMMING FOR HIGH ENERGY PHYSICISTS 4 April Tuesday Afternoon 14:00 - 15:00 Chair: TBA 14:00 STATE OF THE ART IN SYMBOLIC - TBA MANIPULATION ========================================================================= PARALLEL SESSIONS 4-7 April ========================================================================= "AI" SESSION: NEURAL NETWORKS, FUZZY LOGIC, EXPERT SYSTEMS, LANGUAGES, GENETIC ALGORITHMS, ETC. ------------------------------------------------------------------------- 4 April Tuesday (afternoon) 15:00 - 19:00 ****** Survey of AI Methods in Physics ****** Chair: TBA 15:00 - J. Moeck (MPI Munich, Ge.) "Artificial Neural Networks as a Second-Level Trigger at the H1 Experiment -- Performance Analysis and Results --" 15:30 - S. Westerhof(Wuppertal, Ge.) "Application of Neural Networks in TeV Gamma-Ray Astronomy" 16:00 - C. David (Univ. Nantes, Fr.) "Neural Networks in Theoretical Nuclear Physics" 16:30 - break (30 min) 17:00 - TBA "Tutorial: Intro. to Fuzzy Logic" 17:30 - E. Gandolfi (Univ. Bolgna) "Fuzzy Logic Applications in HEP" 18:00 - G. Stimpfle-Abele (Univ. Barcelona, Spain) "Tutorial: Neural Nets for Particle Searches" ----------------------------------------------------------------------- 5 April Wednesday (morning) 8:30 - 12:30 ****** Triggering, Real-time and Hardware AI Systems ****** Chair: TBA 8:30 - C. Loomis (Rutgers, US) "Using an Analog Neural Network to Trigger on Tau Leptons at CDF" 9:00 - S. Vlachos (Univ. Basel,Switzerland) "A neural network trigger system for the CP-LEAR experiment" 9:30 - R. Nobrega (CERN) "A Neural Network Trigger with a RICH Detector" 10:00 - L. Lundheim (CERN) "A Programmable Active Memory Implementation of a Neural Network for Second Level Triggering in ATLAS" 10:30 - break (30 min) ****** Multivariate Analysis and Neural Networks ****** 11:00 - C. Peterson (Lund, Sweden) "Determining Dependency Structures and Estimating Nonlinear Regression Errors without Doing Regr. 11:45 - J.Proriol (Clermont-Ferrand, Fr.) "Multimodular Neural Networks for the Classification of High Energy Events" 5 April Wednesday (evening) 14:00 - 19:00 +++++++ AI Parallel Session I +++++++ ****** Multivariate Analysis and Neural Networks ****** Chair: TBA 14:00 - Y. Wang (CERN) "Neural Network: A Powerful Tool for Classification" 14:30 - Ll. Garrido (Univ. Barcelona) "Using Neural Networks to enhance signal over background: The top--quark search" 15:00 - Tariq Aziz (CERN) "Heavy Flavour Tagging from hadronic Z decays using Neural Network Technique" 15:30 - break (30 min) 16:00 - H. E. Miettinen (Rice Univ., USA) "Top Quark Search with Multivariate Probability Estimates and Neural Networks" 16:30 - V.V.Ivanov (Dubna, Russia) "Input Data for a Multilayer Perceptron in the Form of Variational Series" 17:00 - R. Sinkus (Univ. Hamburg, Ge) "A novel approach to error function minimization for feedforward neural networks" 17:30 - D.Steuer (Univ. Ilmenau, Ge.) "The use of adaptive recursive estimation methods for acceleration of backpropagation learning algorithm" 18:00 - J. Wroldsen (Gjovik College, Norway) "A Robust Algorithm for Pruning Neural Networks" 18:30 - P. Fuchs (Lab. Saturne, Fr.) "The development of neural network algorithms with LICORNE - a commercial software tool for data analysis andimage processing" 5 April Wednesday (evening) 14:00 - 19:00 +++++++ AI Parallel Session II +++++++ ****** Genetic, Evolutionary and Cellular Automata Algorithms ****** Chair: TBA 14:00 - G. Organtini (Univ. Rome) "Using Genetics in Particle Physics" 14:30 - M. Kunze (Ruhr Univ. at Bochum, Ge) "Application of neural networks and a genetic algorithm in the analysis of multi particle final states" 15:00 - C. Busch (Univ. Wuppertal, Ge.) A Very Tentative Approach Towards the Problem of Adjusting Monte Carlo Generator Parameters by Means of the Evolution Strategy" 15:30 - break (30 min) 16:00 - H.M.A. Andree (Utrecht, Nl) "The optimisation of feed-forward neural networks by means of genetic algorithms" 16:30 - R. Berlich (Ruhr Univ. at Bochum, Ge) "Training neural networks using evolutionary strategies" 17:00 - M.Casolino (Univ. Rome) "A Cellular Automaton to Filter Noise in High Energy Physics Particle Tracks" 17:30 - G. Ososkov (for E.A.Tikhonenko) (Dubna, Russia) "New Random Number Generator on the Base of Cellular Automaton Suitable for Parallel Implementing" 18:00 - M. Kunze (Ruhr Univ. at Bochum, Ge) "Growing Cell Structures" 18:30 - Spare slot ----------------------------------------------------------------------- 6 April Thursday (morning) 8:30 - 12:30 ****** Triggering, Real-time and Hardware AI Systems ****** Chair: TBA 8:30 - D. Goldner (Univ. Dortmund, Ge.) "Artificial Neural Networks as a Level-2 Trigger for the H1 Experiment" 9:00 - T. T. Tong (DESY) "Using a high speed analog neural network chip in the first level R-Z trigger of the H1-Experiment at HERA". 9:30 - R. Odorico (Univ. Bologna, It.) "Trigger for Beauty Employing them MA16 Neural Microprocessor" 10:00 - A.W.Lodder (Univ. Utrecht, Nl) "The Implementation of Feed-Forward Neural Networks on the CNAPS system" 10:30 - break (30 min) 11:00 - I. Lazzizzera(INFN Trento, It.) "TOTEM: a highly parallel chip for triggering applications with inductive learning based on the Reactive Tabu Search" 11:30 - C. S. Lindsey (KTH, Sweden) "Experience with the IBM ZISC Neural Network Chip" 12:00 - Th. Lindblad (KTH, Sweden) "Evaluation of a RBF/DDA Neural Network" 6 April Thursday (evening) 14:00 - 19:00 +++++++ AI Parallel Session I +++++++ ****** Neural Networks in Offline Analysis ****** Chair: TBA 14:00 - J. Proriol (Clermont-Ferrand, Fr.) "Tagging Higgs Boson in Hadronic LEP2 events with Neural Networks" 14:30 - Oliver Cooke (CERN) Determining the Primary Parton Charge in a Jet 15:00 - A.A. Handzel (Weizmann Inst. Israel) "Comparison of Three Neural Network Algorithms in a Data Analysis Classification Task" 15:30 - break (30 min) 16:00 - A. de Angelis (CERN) "Tagging the s quark in the hadronic decays of the Z" 16:30 - C. Guicheney (U. Blaise Pascal, Fr.) "Using Neural networks and Fuzzy Logic in the Search of the Higgs" 17:00 - R. Sparvoli (Univ. Rome) "Gamma Ray Energy Discrimination with Neural Networks" 17:30 - M.Casolino (Univ. Rome) "Optimization of a Neural Network for Particle Classification in a Segmented Calorimeter" 18:00 - J.S. Lange (TU Dresden, Ge.) "Cluster Gravitation - An Extension to the Kohonen Algorithm for the Identification of the pp-Bremsstrahlung at COSY 18:30 - W. Tajuddin (Univ. Malaya, Malaysia) "Understanding Event Classification by Multilayer Perceptrons" 6 April Thursday (evening) 14:00 - 19:00 +++++++ AI Parallel Session II +++++++ ****** Tracking ****** Chair: TBA 14:00 - Baginyan S. A. (Dubna, Russia) " Controlled Neural Network Application in TRACK-MATCH Problem" 14:30 - D.L. Bui(DESY) "Application of the elastic arms approach to track finding in the Forward Tracking Detector of H1-Experiment at HERA" 15:00 - C.A. Byrd (Univ. Arkansas, USA) "A Rough-Set-Based Grouping Algorithm for Particle Tracking in High Energy Physics" 15:30 - break (30 min) 16:00 - M. Fuchs (IKF Frankfurt, Ge) "A 3-dimensional Transformation Tracker for raw TPC-data" 16:30 - G. A. Ososkov (Dubna, Russia) "Applications of cellular automata and neural networks for particle track search" 17:00 - N. Stepanov (ITEP, Russia) "Towards the fast trackfinder algorithm for the CMS experiment at LHC" 17:30 - W. Tajuddin (Univ. Malaya, Malaysia) "Structured Feed- Forward Neural Network for Track Finding" 18:00 - G. Stimpfle-Abele (Univ. Barcelona, Spain) "Determination of Beam Parameters in LEAR with Neural Nets" 18:30 - "L. Santi (Udine, Italy) "Fast Reconstruction of the Antiproton Annihilation Vertex at Intermediate Energies, Based on a Rotor Neural Network" ----------------------------------------------------------------------- 7 April Friday (morning) 8:30 - 12:30 ****** Triggering, Real-time and Hardware AI Systems ****** Chair: TBA 8:30 - G. Pauletta (Univ. Udine, It.) "Pulse Shape Discrimination with a Neural Network" 9:00 - G.B.Pontecorvo (Dubna, Russia) "On a Possible Second Level Trigger for the Experiment DISTO" 9:30 - J.M. Seixas (CERN) "A Neural Second-Level System Based on Calorimetry and Principal Components Analysis" 10:00 - M. Masetti (Univ. Bologna, It.) " Design of VLSI realization of a very fast Fuzzy processor for trigger applications in HEP" 10:30 - break (30 min) 11:00 - G. G. Athanasiu (Univ. Crete) "Retinal Neurocomputing Principles for Real Time Track Identification" 11:30 - J. Seixas (Rio de Janeiro) "Implementing a Neural Second Level Trigger System on a Fast DSP: The Feature Extraction Problem" 12:00 - D. Salvatore (Univ. Pisa, It.) "Neuroclassifier Chip for Vertex Detection" 7 April (evening) 14:00 - 19:00 ****** Adaptive and Symbolic Methods in Offline Analysis ****** Chair: TBA 14:00 - D'Agostini (Univ. Rome) "A Multidimensional Unfolding Method Based on Bayes' Theorem" 14:30 - R. Sinkus (Univ. Hamburg, Ge) "Neural network based electron identification in the ZEUS detector" 15:00 - D. Falciai (Univ. Perugia, Italy) "Electron Identification with Neural Network at SLD" 15:30 - break (30 min) 16:00 - K. A. Gernoth "Neural Network Models of Nuclear Systematics" 16:30 - G. Tomasicchio (Bari) "An object oriented approach to design symbolic and connectionist systems in HEP" 17:00 - Th. Flor, (Univ. Ilmenau, Ge.) "Integration of symbolic rule based and subsymbolic neural net based information processing in a multi paradigm knowledge based system" 17:30 - E. Bubelev (for V.M.Severyanov) (Dubna, Russia) "Artificial Neural Networks Usage for Recognition of Poincare' Imaginable Statistical Bodies in Non-Euclidean High Energy Physics" 18:00 - Giulio D'Agostini (Univ. Rome) "On the Use of the Covariance Matrix to Fit Correlated Data" 18:30 - V.Il. Tarasov. (Nucl. Safety Inst. Russia) "The statistical analysis of pollution 137-Cs in settlements Russian Chernobyl Zone" ----------------------------------------------------------------------- 7 April Saturday (morning) 8:30 - 09:30 Chair: TBA 08:30 - Spare slot 09:00 - A. Smirnitsky (ITEP) "Summary of AIHENP-Moscow Meeting, AI Section" ----------------------------------------------------------------------- ****** "AI" Posters 4-8 April Continuous****** (Posters can be mounted or removed at any time during the workshop. Presenters of posters are encouraged to be present at their posters from 13:30 - 14:00 and from 19:00 - 20:00.) V.V. Ivanov, (Dubna) "Multidimensional data analysis based on the omega-n-k criteria and multilayer perceptrons" Th. Flor, (Univ. Ilmenau, Ge.) "Using advantages of ODBS and CORBA-standard for modelling of objectoriented distributed knowledge based systems in VisualWorks/Distributed-Smalltalk" Th. Flor, (Univ. Ilmenau, Ge.) "Multi paradigm inference system VISIS as knowledge based framework within an objectoriented medical information modell" J. Proriol (Clermont-Ferrand, Fr.) "Selection of Variables for Neural Network Analysis" V.M.Severyanov (Dubna, Russia) "Application of Artificial Neural Networks to Low pT Muon Identification in ATLAS Hadron Calorimeter" V.M.Severyanov (Dubna, Russia) "Calculation and Interactive Construction of Flat Fractals Using Neural Networks" W. Tajuddin (Univ. Malaya, Malaysia) "Track Classification and Enumeration in Solid State Nuclear Track Detectors usinc Cellular Automata" T. Yakhno (Novosibirsk, Russia) "TallTalk: Combining OO-paradigm and Constraint Programming for Knowledge Representation" ========================================================================= SYMBOLIC MANIPULATION SESSION: ------------------------------------------------------------------------- 5 April (morning) 8:30 - 12:30 Chair: TBA SUBGROUP C-2: Full automation systems 8:30 - Y.Kurihara (KEK, Japan) "Catalogue of electron-positron annihilation processes using GRACE system" 9:00 - A.Pukhov (INP MSU, Moscow, Russia) "Automatic calculation of amplitudes in the CompHEP package" 9:30 - M.Jimbo (Tokyo Management College, Japan) "A system for the automatic computation of cross sections including SUSY particles" 10:00 - J.-X.Wang (IHEP, Beijing, China) "Automatic calculation of loop processes" 10:30 - break (30 min) SUBGROUP C-3: programs and methods 11:00 - L.Surguladze (Univ. of Oregon, USA) "Computer programs for high order analytical perturbative calculations in high energy physics" 11:30 - A.Grozin (Open Univ., UK) "Multiloop calculations in heavy quark effective theory" 12:00 - J.Gracey (Univ. Durham, UK) "Large N_f methods for computing the perturbative structure of deep inelastic scattering" 5 April (evening) 14:00 - 19:00 Chair: TBA SUBGROUP C-1: Languages and Tools 14:00 - M.Sofroniou (Univ. Bologna - Wolfram Research Ltd.) "Strategies for effective numerical computations using Mathematica" 14:30 - E.Remiddi (Univ. Bologna, Italy) "GOLEM: a language (and a program) for writing (and checking) mathematical proofs" 15:00 - S.Capitani ("La Sapienza", Roma, Italy) "Use of SCHOONSHIP and FORM codes in perturbative lattice calculations" 15:30 - break (30 min) SUBGROUP C-2: symbolic-numeric interface 16:00 - K.Kato (Kogakuin Univ., Tokyo, Japan) "Numerical approach to two-loop integrals with masses" 16:30 - D.Kovalenko (INP MSU, Moscow, Russia) "Automatic generation of kinematics for exclusive high energy collisions", 17:00 - T.Ishikawa (KEK, Japan) "Symbolic code optimization of polynomials" 17:30 - F.Tkachov (INR, Moscow, Russia) "MILXy Way: How much better than VEGAS can one integrate in many dimensions?" ----------------------------------------------------------------------- 6 April (morning) 8:30 - 12:30 Chair: TBA SUBGROUP C-3: methods and algorithms 8:30 - A.Kotikov (LAPP, Annecy, France) "Gegenbauer polynomial technique: the second birth" 9:00 - A.Czarnecki (Univ. Karlsruhe, Germany) "A new method of computing two-loop Feynman integrals with massive particles" 9:30 - L.Avdeev (JINR, Russia) "Recurrence relations for evaluating three-loop vacuum diagrams with a mass" 10:00 - V.Ilyin (INP MSU, Russia) "New method of reducing vacuum multiloop Feynman integrals to master ones" 10:30 - break (30 min) SUBGROUP C-2: Feynman diagram generation 11:00 - T.Kaneko (Meiji-Gakuin Univ., Yokohama, Japan) "A Feynman-graph generator for any order of coupling constants", SUBGROUP C-1: Graphical interface 11:30 - D.Juriev (Ecole Normale Superieure, Paris, France-Russia) "Some aspects of interactive visualization of 2D quantum field theory: algebra, geometry and computer graphics" 12:00 - I.Nikitin (IHEP, Protvino, Russia) "Visual study of complicated phenomena in string theory" 6 April (evening) 14:00 - 19:00 Chair: TBA SUBGROUP C-4: QFT and SUSY etc. 14:00 - C.Schubert (DESY-Zeuthen, Germany) "Programming the string-inspired method: main computational problems" 14:30 - A.Lanyov (JINR, Russia) "Calculation of heat-kernel coefficients and usage of computer algebra" 15:00 - A.Candiello ("Galileo Galilei", Padova, Italy) "WBase: a C package to reduce tensor products of Lie algebra representations" 15:30 - break (30 min) SUBGROUP C-2: Applications 16:00 - N.Nakazawa (Kogakuin Univ., Tokyo, Japan) "Automatic Calculation of 2-loop Weak Corrections to Muon Anomalous Magnetic Moment" 16:30 - O.Tarasov (Biellefeld Univ., Germany - JINR, Russia) "One-loop radiative correction to the process gamma-gamma -> t bar-t" 17:00 - I.Akushevich (Belorussia Univ., Minsk, Belorussia) "The calculation of contribution of double photon bremstrahlung to polarized assymetry by using symbolic manipulation technique" SUBGROUP C-3: results 17:30 - A.Pivovarov (INR, Moscow, Russia) "On the positronium lifetime calculation" 18:00 - A.Davydychev (Bergen Univ., Norway) "New results for two-loop diagrams with massive and massless particles" 18:30 - S.Larin (INR, Moscow, Russia) "Computation of the high order QCD corrections to physical quantities" ----------------------------------------------------------------------- 7 April (morning) 8:30 - 12:30 Chair: TBA SUBGROUP C-3: Methods and algorithms 8:30 - J.Fleischer (Biellefeld Univ., Germany) "Calculation of two-loop vertex functions from their small momentum expansion" 9:00 - D.Kreimer (Univ. Tasmania, Australia) "Feynman diagram calculations - from finite integral representations to knotted infinities" 9:30 - T.van Ritbergen (NIKHEF, The Netherlands) "The calculation of various quantities within perturbation theory at the 3 and 4-loop order" 10:00 - C.Schubert (DESY-Zeuthen, Germany) "Programming the string-inspired method: methods and algorithms for the evaluation of higher order corrections" 10:30 - break (30 min) SUBGROUP C-3: Applications 11:00 - G.Pivovarov (INR, Moscow, Russia) "The gauge for atom-like bound states" 11:30 - P.Baikov (INP MSU, Moscow, Russia) "Three loop vacuum polarization and four loop muon anomalous magnetic moment" 12:00 - N.Ussykina (INP MSU, Moscow, Russia) "Cracking double boxes: a progress report" 7 April (evening) 14:00 - 19:00 Chair: TBA SUBGROUP C-4: other fields 14:30 - E.Wildner (CERN) "Integration of symbolic computing in accelerator control" 15:00 - E.S.Cheb-Terrab (Univ. Rio de Janeiro, Brazil) "A computational strategy for the analytical solving of partial differential equations" 15:30 - break (30 min.) 16:00 - P.Pronin (MSU, Moscow, Russia) "New tensor package for REDUCE" SYMBOLIC SECTION RESUME 16:30 - Summary talk on "Moscow one-day session" Symb. Manip. Section 17:00 - Round Table discussion ========================================================================= SOFTWARE ENGINEERING SESSION ------------------------------------------------------------------------- 4 April Tuesday Afternoon 15:00 - 19:00 Chair: TBA **Object oriented programming and C++** 15:00 - R. Petravick (Fermilab) "Software engineering methods and standards used in the Sloan Digital Sky Survey" ------------------------------------------------------------------------------ 5 April Wednesday Morning 08:30 - 12:30 Chair: TBA **Online applications; Graphics and Interfaces** 08:30 - 09:00 - I. Legrand (CERN) "Design and simulation of the online trigger and reconstruction farm for the HERA-B experiment" 09:30 - J. Cramer (Washington/Max Planck) "SControl, a program for slow control of large physics experiments" 10:00 - C. Maidantchik (CERN/Rio de Janeiro) "Quality assurance on coupling DAQ software modules" 10:30 - Break (30 min.) 11:00 - V. Monich (Novosibirsk) "ZTREE - Data analysis and graphics display system for the CMD-2 detector" 11:30 - Y. Merzlyakov (Novosibirsk) "Software design of a distributed heterogeneous front end DAQ" 12:00 - V. Fine (Dubna) "Using the Windows/NT operating system for HEP applications" ------------------------------------------------------------------------------ 6 April Thursday Morning 08:30 - 12:30 Chair: TBA **Simulation; Code and data management techniques** 08:30 - B. Burow (DESY) "How one operator and hundreds of computers around the world simulate a million ZEUS events per week" 09:00 - C. Bormann (Frankfurt) "A distributed data analysis environment" 09:30 - E. Agterhuis (Utrecht) "Software tools for Microstrip Gas detector simulation" 10:00 - L. Cioni (Pisa) "Co-operative principles in application design" 10:30 - Break (30 min.) 11:00 - S. Cabasino (Pisa) "The Ape Computer Family" 11:45 - B. Burow (DESY) "TDM: A proposed tool to manage data and tasks for a comfortable future of HEP event processing" ------------------------------------------------------------------------------ 7 April Friday Morning 08:30 - 12:30 Chair: TBA **Object Oriented Programming and C++** 08:30 - 09:00 - P. Fuchs (Saturne) "The development of an object-oriented system which integrates simulation, reconstruction and analysis within a common framework 09:30 - J. Carter (CERN) "Experience using formal methods in HEP" 10:00 - N. Piscopo (ARTIS) "An integrated software engineering environment for developing concurrent applications through simulation and automatic code generation" 10:30 - break (30 min) 11:00 - G. Maron (Legnaro) "Experience using an integrated software engineering package for developing the AURIGA antenna data acquisition and analysis system" 11:30 - G. Attardi (Pisa) "The PoSSo Project" 12:00 - V. Talanov (Protvino) " Application of C++ programming principles to geometry description problem in particle transport simulation" 7 April Friday Afternoon 14:00 - 19:00 Chair: TBA **Object Oriented Programming and C++** 14:00 - O. Krivosheev (Tomsk) "Object oriented approach to the design of Monte Carlo code for simulation of electromagnetic showers" 14:30 - O. Krivosheev (Tomsk) "Source viewer and source portability checker - useful tools for developing C++ programs" 15:00 - Break ------------------------------------------------------------------------- ****Software Engineering Posters 4-8 April continuous***** (Posters can be mounted or removed at any time during the workshop. Presenters of posters are encouraged to be present at their posters from 13:30 - 14:00 and from 19:00 - 20:00.) F. Bruyant (CERN) "COMO, an approach for object oriented analysis and design for scientific applications of an algorithmic nature" Th. Kozlowski (LANL) "The use of Shlaer-Mellor object oriented analysis and recursive design in the development of the PHENIX computing systems" M. Marin (Chile) "An event driven simulation environment for hard particle molecular dynamics" ========================================================================= PLENARY SESSION 8 April 09:30 - 12:45 SUMMARY TALKS ========================================================================= 8 April Saturday Morning 09:30 - 12:45 Chair: TBA 09:30 SUMMARY OF AI SESSION - Marcel Kunze (Bochum) 10:30 SUMMARY OF SYMBOLIC - TBA MANIPULATION SESSION 11:30 SUMMARY OF SOFTWARE - P. Murat (INFN Pisa/ITEP Moscow) ENGINEERING SESSION (to be confirmed) 12:30 CLOSING REMARKS - B. Denby (INFN Pisa) Conference Chairman - D. Perret-Gallix (LAPP Annecy) Co-chairman ========================================================================= SPECIAL SEMINARS ========================================================================= 4 April Tuesday Chair: S.R. Amendolia 19:00 - H. E. Miettinen (Rice Univ., USA) "Results of the Top Quark Search from D0" ------------------------------------------------------------------------- 7 April Friday Chair: H. E. Miettinen 19:00 - S.R. Amendolia (INFN Pisa, Italy) "Results of the Top Quark Search from CDF" ========================================================================= INDUSTRY SESSION CONTINUOUS 4-8 APRIL ========================================================================= Confirmed stands: IBM (Rome) Spring (Rome) MACS (Pisa, CNAPS Hardware) Siemens (Munich/Vienna, SYNAPSE Hardware) ========================================================================= --------------------------------- CUT HERE ------------------------------------ REGISTRATION INSTRUCTIONS ------------------------- NOTA BENE: REGISTRATION AND ACCOMMODATION ARE BEING HANDLED BY THE TRE EMME CONGRESSI COMPANY. CALL THEM AT (39)(50)44154 OR FAX (39)(50)500725 FOR ANY QUESTIONS OR PROBLEMS CONCERNING REGISTRATION AND ACCOMMODATION. DO NOT SEND THE FORMS TO THE CONFERENCE CHAIRMAN. Part of your conference fee goes to pay TRE EMME to provide a friendly, helpful service for the conference. Take advantage of it. Following are three separate items: 1) REGISTRATION FORM. ALL PARTICIPANTS MUST SUBMIT THIS FORM. It can be returned by FAX or POST with accompanying payment, by CREDIT CARD, INTERNATIONAL CHEQUE, or EUROCHEQUE. You can also register ON SITE. Those taking the student or EPS member discount should be prepared at arrival to show proof of their status. IN CASE OF ANY PROBLEMS, CALL TRE EMME FOR ASSISTANCE. 2) ACCOMMODATION FORM. PLEASE BOOK YOUR HOTEL EARLY TO AVOID PROBLEMS. It is recommended that all participants make use of the hotels which have been blocked by TRE EMME in order to assure availability of appropriate accommodation. Three classes of hotels are available. A deposit of one night's stay plus Lit. 20.000 is required in order to book a room. This should be sent directly to Tre Emme by international cheque or eurocheque by MARCH 13, 1995 in order to guarantee your reservation. **IF YOU CANNOT GET THE DEPOSIT IN ON TIME, BOOK YOUR ROOM ANYWAY, AND IT WILL BE HELD UNTIL NOON OF YOUR ARRIVAL DAY; YOU MUST THEN INTERACT WITH THE HOTEL (PHONE NUMBER ON YOUR VOUCHER) YOURSELF IF YOU ARRIVE LATER THAN THAT.** The hotels will not accept credit cards for the **deposit**, but final bills (minus deposit) **can** be paid by credit card. IN CASE OF ANY PROBLEMS, CALL OR FAX TRE EMME FOR ASSISTANCE. 3) SUBMISSION OF ABSTRACTS. Submissions are closed, however, papers submitted now may be included if they especially attract the interest of the organizing committee. Late abstracts will not be included in the abstract booklet. --------------------------------- CUT HERE ------------------------------------ IV International Workshop AIHENP Pisa (Italy) April 3-8, 1995 REGISTRATION FORM To be mailed or faxed to: TRE EMME CONGRESSI Via Risorgimento 4, 56126 Pisa (Italy) Tel. +39-50 - 44154/20583 Fax. +39-50 - 500725 Surname .............................First name.............................. Affiliation/Company.......................................................... Address ..................................................................... Postal Code .......... City ...................... Country .................. Tel. ..... / .......................... Fax. ..... / ........................ Fiscal/VAT code for invoice: ................................................ Fees(Incl. 19% VAT) by March 3,1995 Thereafter Standard __ Lit. 425.000 __ Lit. 500.000 Lit. ............ EPS Member __ Lit. 380.000 __ Lit. 455.000 Lit. ............ Student __ Lit. 325.000 __ Lit. 400.000 Lit. ............ Social dinner for __ Lit. 70.000 N. .... places Lit. ............ accompanying person(s) Tot. Lit. ............ _____________________________________________________________________________ Payment __ VISA __ MASTERCARD __ EUROCARD __ CARTASI' - Tot. Lit. ............ Card n. ........................................ Expiry date ................ Cardholder (capital letters) ................................................ I am enclosing __ International Cheque __ Eurocheque for the sum of Lit. ..................... addressed to TRE EMME / AIHENP N.B. Preregistration is strongly encouraged. Date ............................ Signature ................................ --------------------------------- CUT HERE ------------------------------------ IV International Workshop AIHENP Pisa (Italy) April 3-8, 1995 ACCOMMODATION FORM To be mailed or faxed to: TRE EMME CONGRESSI Via Risorgimento 4, 56126 Pisa (Italy) Tel. +39-50 - 44154/20583 Fax. +39-50 - 500725 Surname .............................First name.............................. Home Address ................................................................ Postal Code .......... City ...................... Country .................. Tel. ..... / .......................... Fax. ..... / ........................ Accompanying person(s) ...................................................... Deposit Required(but see NB below): cost of one night plus Lit. 20.000 handling Cat. Hotel Single Double Double for Single **** __ Lit. 220.000 __ Lit. 290.000 __ Lit. 260.000 *** __ Lit. 95.000 __ Lit. 135.000 __ Lit. 110.000 ** __ Lit. 65.000 __ Lit. 100.000 __ Lit. 75.000 Date of arrival ................ Departure .............. tot. nights ....... I wish to share a double room with .......................................... N.B. - Prices include breakfast, service charges, taxes and VAT. - All bedrooms have private shower or bath. - When single rooms no longer available, double for single will be reserved. - The deposit will be deducted from the hotel bill upon display of the voucher sent by TRE EMME CONGRESSI: bills may be settled by credit card. - IF YOU HAVE A PROBLEM GETTING THE DEPOSIT IN ON TIME, BOOK YOUR ROOM ANYWAY. IT WILL BE HELD UNTIL *NOON* OF ARRIVAL DAY. YOU ARE THEN RESPONSIBLE FOR CALLING THE HOTEL (NUMBER ON TRE EMME VOUCHER) TO ARRANGE FOR LATER ARRIVAL IF NECESSARY. - Deposit payment must be performed by International Cheque or Eurocheque made payable to TRE EMME CONGRESSI; bank charges encountered with other forms of payment will be charged to the participant. I am enclosing International/Eurocheque N. ............ for Lit. ............ made payable to TRE EMME CONGRESSI. Date ............................ Signature ................................ --------------------------------- CUT HERE ------------------------------------ SUBMISSION OF ABSTRACTS ----------------------- Submission of abstracts is technically closed. Abstracts already submitted have been reviewed and applicants notified of acceptance. Abstracts may still be considered, up until the date of the workshop; however acceptances will be at the discretion of the Advisory Committee. Late abstracts will not be included in the abstract booklet. Applications should be sent by electronic mail. Applications should include an abstract describing the work done, a title and a list of authors which indicates which one is the contact person. The contact person should include his postal address, electronic mail address, and telephone and fax numbers. THE ABSTRACT PLUS ALL ACCOMPANYING INFORMATION SHOULD FIT ON ONE PAGE. NO SPECIAL FORMAT IS REQUIRED (PLAIN TEXT PREFERRED). All papers accepted for oral or poster presentation will be published by World Scientific in the workshop proceedings, which, as in past workshops, will be a hardcover edition with the workshop logo in color on the cover. Qualifying papers will also be published in a special edition of International Journal of Modern Physics C. Addresses for Submission of ABSTRACTS ------------------------------------- Electronic Submission: AIHENP95 at vaxpia.pi.infn.it -or- DENBY at fnalv.fnal.gov Fax Submission: (39) (50) 880-317 in care of Bruce Denby From giles at research.nj.nec.com Fri Mar 17 18:55:08 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 17 Mar 95 18:55:08 EST Subject: TR Available: Recurrent Neural Networks and Dynamical Systems Message-ID: <9503172355.AA18111@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ "Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches" UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-1 and CS-TR-3396 Peter Tino[1,2], Bill G. Horne[2], C. Lee Giles[2,3] [1] Dept. of Informatics and Computer Systems, Slovak Technical University, Ilkovicova 3, 812 19 Bratislava, Slovakia [2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [3] UMIACS, University of Maryland, College Park, MD 20742 {tino,horne,giles}@research.nj.nec.com We present two approaches to the analysis of the relationship between a recurrent neural network (RNN) and the finite state machine M the network is able to exactly mimic. First, the network is treated as a state machine and the relationship between the RNN and M is established in the context of algebraic theory of automata. In the second approach, the RNN is viewed as a set of discrete-time dynamical systems associated with input symbols of M. In particular, issues concerning network representation of loops and cycles in the state transition diagram of M are shown to provide a basis for the interpretation of learning process from the point of view of bifurcation analysis. The circumstances under which a loop corresponding to an input symbol x is represented by an attractive fixed point of the underlying dynamical system associated with x are investigated. For the case of two recurrent neurons, under some assumptions on weight values, bifurcations can be understood in the geometrical context of intersection of increasing and decreasing parts of curves defining fixed points. The most typical bifurcation responsible for the creation of a new fixed point is the saddle node bifurcation. ------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3396.fsm-rnn.dynamics.systems.ps.Z -------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From B344DSL at UTARLG.UTA.EDU Fri Mar 17 19:18:36 1995 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Fri, 17 Mar 1995 18:18:36 -0600 (CST) Subject: MIND conference at Texas A&M: preliminary announcement Message-ID: <mailman.749.1149591332.29955.connectionists@cs.cmu.edu> Preliminary Announcement and Call for Abstracts Conference on Neural Networks for Novel High-Order Rule Formation Sponsored by Metroplex Institute for Neural Dynamics (MIND) and For a New Social Science (NSS) Texas A&M University, May 20-21, 1995 MIND, a neural networks professional organization based in the Dallas-Fort Worth area, and NSS, a private research foundation based in Coral Springs, Florida, are jointly sponsoring a conference on Neural Networks for Novel High-order Rule Formation. This will partially overlap a conference on Creative Concepts May 19-20 sponsored by the Psychology Department at Texas A&M and the American Psychological Association. This will in turn be preceded by ARMADILLO, the region psychology meeting on Thursday, May 18 (whose registration is free for those attending either Creative Cognition or MIND/NSS). Invited speakers for the MIND/NSS portion include John Taylor (King's College, London); Karl Pribram (Radford University); Risto Miikkulainen (University of Texas); Ramkrishna Prakash (University of Houston); Sam Leven (For a New Social Science); and Daniel Levine (University of Texas at Arlington). There is space for a limited number of contributed talks, for presentation on the Sunday of the conference, and an arbitrary number of posters, to up for the duration of the conference. MIND has sponsored six international conferences, three of which have formed the basis for books (two in print and one now in progress). All but the first have been on focused topics within the neural network field. The topics were chosen for their interest to a broad community, some interested primarily in neurobiology, others in neural theory, and others in engineering applications. These last three topics have been Oscillations in Neural Systems, Optimality in Biological and Artificial Networks?, and Neural Networks for Knowledge Representation and Inference. NSS has co-sponsored two of MIND's conferences. Its purpose is, to quote from its founding statement, "turning the findings and techniques of science to the benefit of social science." It seeks to develop more predictive methodological bases for areas ranging from economics to management theory to social psychology ~ in some cases, to replace foundational assumptions dating from the time of David Hume and Adam Smith, based on a static and unrealistic model of human behavior, with new foundational assumptions that draw on modern knowledge of neuroscience, cognitive science, and neural network theory. This would mean that social scientific models which assume humans always behave rationally will be replaced by models which incorporate emotion, habit, novelty, and ~ particularly relevant for this conference ~ creative intuition. In the words of NSS's original statement: We may find people less rational than we would like them, economic models less precise, survey results less certain. .. We of For a New Social Science seek to find real answers instead of nostrums and mythology. But when we cannot find simple solutions, we choose to see our world plainly and to open our eyes to what we do not know. The theme of this conference will be connectionist modeling of the processes by which complex decision rules are deduced, learned, and encoded. These include, for example, rules that determine, on the basis of some trials, which classes of actions will be rewarded. The myth that neural network methodology is only relevant for low-order pattern processing and not for high-order cognition is rapidly being disproved by recent models. In particular, the 1994 World Congress on Neural Networks included a session on Mind, Brain, and Consciousness, which was one of the most popular and successful sessions of that conference; another such session will be held at the same Congress in 1995. John Taylor has developed a series of models related to consciousness, which is interpreted partly as selective attentional (based in the thalamic reticular nucleus) and partly as comparison of current stimuli with episodic memories of past events (based in the hippocampus). Raju Bapi and Daniel Levine have constructed a network that learns motor sequences and classifies them on the basis of reward. Models have been developed that mimic disruption of specific cognitive tasks by specific mental disorders, among them Alzheimer dementia, autism, depression, and schizophrenia. Sam Leven and Daniel Levine have constructed a neural network that simulates contextual shifts in multiattribute decision making, with specific application to consumer preference for old versus new versions of Coca-Cola. Finally, Haluk Ogmen and Ramkrishna Prakash built on models previously developed by Grossberg and his colleagues to design robots that actively explore their environment under the influence of appetitive and aversive stimuli. All this work paves the way for developing neural network models of creativity and innovation. Part of the creative process involves search for novel high-order rules when current rules fail to predict expected results or to yield expected rewards. This process often requires transfer to a higher level of complexity of analysis. Hence creativity involves what Douglas Hofstadter called a "search of search spaces." Some current models in progress also incorporate knowledge of different brain regions involved in circuits for such a transfer of control. Bapi and Levine discuss the role of the frontal lobes in such a circuit. In the experiments modeled therein, macaque monkeys with prefrontal damage can learn an invariant sequence of motor actions if it is rewarded, but have difficulty learning any one of several reorderings of a sequence (say, ABC, ACB, BAC, BCA, CAB, and CBA) if all are rewarded. This flexible sequence rule is one of many types of complex rules that require intact frontal lobes to be learned effectively. Another is learning to go back and forth on alternate trials between two food trays. Yet another is learning to move toward the most novel object in the environment. Karl Pribram hints that the frontal lobes act in concert with some areas of the limbic system, particularly the hippocampus and amygdala. These theories of specific brain regions are not yet precise or uniquely determined. Neural network models of high-order cognitive processes typically build on network structures that have previously been developed for low-order processes, and may or may not incorporate these neurobiological details. Still, we are now witnessing a dynamic convergence of insights from cognitive neuropsychology along with those from experimental psychology, cognitive science, and neural network theory. This will be the general theme of these two overlapping conferences. Registration for this conference will be $40: registration forms are attached. Those attending the Creative Concepts Conference immediately preceding the MIND/NSS conference will be able to attend for $15. For information about transportation and lodging in College Station, TX (roughly between Austin and Houston) where Texas A&M is located, please contact: Steve Smith Department of Psychology Texas A&M University College Station, TX 77843 409-845-2509 sms at psyc.tamu.edu If you are interested in speaking, please send an abstract by Friday, April 7, to Daniel S. Levine Department of Mathematics University of Texas at Arlington Arlington, TX 76019-0408 817-273-3598 b344dsl at utarlg.uta.edu ------------------------------------------------------------------- -----------------------------------------------------PLEASE RETURN THIS REGISTRATION FORM TO PROF. LEVINE ------------------------------------------------------------------- ----------------------------------------------------- Name ________________________________________ Phone __________________________ Address _________________________________________________________________ _____________ ____________________________________ e-mail ___________________________________ 1. I plan to attend (check all that apply): ARMADILLO ____ Creative Concepts ____ MIND/NSS ____ 2. I would like to present a talk or poster at MIND/NSS ____ From Frank.Kelly at cs.tcd.ie Mon Mar 20 07:18:45 1995 From: Frank.Kelly at cs.tcd.ie (Frank Kelly) Date: Mon, 20 Mar 1995 12:18:45 +0000 (WET) Subject: Connectionist models of Figure-Ground Segregation (Problems?) Message-ID: <mailman.750.1149591332.29955.connectionists@cs.cmu.edu> Hello, I am doing a project on Nonlinear Coupled Oscillators applied to Figure-Ground Segregation. Current models I have examined are included below my mail.sig. Basically the question I would like to pose is the following: Although all of these models 'solve' figure-ground segregation to some degree, can anyone say which model is 'best' and what crtieria can we base this upon? e.g. One of the key criteria for my project is speed, so what I would be interested in knowing is: Which model is fastest and/or does any model approach the speed at which the human visual system segregates figure and ground. Other criteria would be : * Resistance to Noise * Biological Plausibility * Model Complexity (e.g. does the neurons model allow for orientation selectivity, does the model require full connectivity between all nodes) *Use of attentional mechanisms I would appreciate any light people could throw on this subject of finding a 'best' model, especially experimental results/papers. BTW, If anyone knows of any other systems (or has comments to make on any of the above systems) I would be grateful if you could contact me. Many Thanks in advance, --Frank Kelly = Frank.Kelly at cs.tcd.ie | AI group, Dept. of Computer Science, = = Work: +353-1-608 1800 | Trinity College, Dublin 2. Ireland. = = WWW : http://www.cs.tcd.ie/www/kellyfj/kellyfj.html = So far I have found the following systems: -------------------------------------------- [Von der Malsburg & Schneider 86] Von der malsburg, C., and W. Schneider A neural Cocktail-Party Processor in Biological Cybernetics 54, 29-40 (1986) [Von der Malsburg & Buhmann 92] Von der Malsburg, C., and J. Buhmann Sensory Segmentation with coupled neural oscillators in Biological Cybernetics 67, 233-242 (1992) [Sompolinsky et al 90] Sompolinsky, H., Golomb, D., and D. Kleinfeld Global processing of visual stimuli in a neural network of coupled oscillators in Proceedings of the National Academy of Sciences, USA Vol.87, pp.7200-7204, September 1990. [Sejnowski & Hinton 87] Sejnowski, T.J., and G.E. Hinton Separating Figure from Ground with a Boltzmann Machine in (Arbib 87) [Pabst et al. 89] Pabst, M., H.J. Reitboeck, and R. Eckhorn A model of Preattentive region definition based on texture analysis in (Cotterill 89) [Konig et al. 92] Konig, P., Janosch, B., and T.B. Schillen Stimulus-Dependent Assembly Formation of Oscillatory Responses : III. Learning in Neural Computation 4, 666-681 (1992) [Kammen et al. 89] Kammen, D.M., P.J. Holmes, and C. Koch Cortical Architecture and Oscillations in Neuronal Networks : Feedback vs. Local Coupling in (Cotterill 89) [Grossberg & Somers 91] Grossberg, S., and D. Somers Synchronized oscillations during cooperative feature linking in a cortical model of visual perception in Neural Networks Vol. 4 pp. 453-466 [Fellenz 94] Fellenz W.A. A Neural Network for Preattentive Perceptual Grouping in Proceedings of the Irish Neural Networks Conference 1994 Univeristy College Dublin, Sept.12-13, 1994 [Eckhorn et al 89] Eckhorn, R., H.J. Reitboeck, M. Arndt, and P.Dicke A Neural Network for feature linking via synchronous activity in (Cotterill 89) [Yamaguchi & Hiroshi 94] Yamaguchi, Y., and S. Hiroshi Pattern recognition with figure-ground seperation by generation of coherent oscillations in Neural Networks Vol.3, 1994, pp.153-170 [Campbell and Wang 94] Campbell, S., and D. Wang Synchronization and Desynchronization in a Network of Locally Coupled Wilson-Cowan Oscillators in Technical Report OSU-CISRC-8/94-TR43, Lab for AI Research, Dept. of Computer and Information Science and Center for Cognitive Science, The Ohio State University, Columbus, Ohio 43210-1277, USA [Sporns et al. 91] Sporns, O. Tononi, G. and G.M. Edelman Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections in Proc. Natl. Acad. Sci. USA Vol.88 oo.129-133, January 1991 n.b. [Cotterill 89] Cotterill, R.M.J. Models of Brain Function 1989 From David_Redish at GS151.SP.CS.CMU.EDU Sat Mar 18 12:13:05 1995 From: David_Redish at GS151.SP.CS.CMU.EDU (David_Redish@GS151.SP.CS.CMU.EDU) Date: Sat, 18 Mar 95 12:13:05 EST Subject: Paper available: Navigating with Landmarks Message-ID: <mailman.751.1149591332.29955.connectionists@cs.cmu.edu> The following paper is now available electronically (via the Web) "Navigating with Landmarks: Computing Goal Locations from Place Codes" A. David Redish and David S. Touretzky Carnegie Mellon University to appear in _Symbolic Visual Learning_, K. Ikeuchi and M. Veloso, eds., Oxford University Press. A computer model of rodent navigation, based on coupled mechanisms for place recognition, path integration, and maintenance of head direction, offers a way to operationally combine constraints from neurophysiology and behavioral observation. We describe how one such model reproduces a variety of experiments by Collett, Cartwright, and Smith (J. Comp Phys. A 158:835-851) in which gerbils learn to find a hidden food reward, guided by an array of visual landmarks in an open arena. We also describe some neurophysiological predictions of the model; these may soon be verified experimentally. Portions of the model have been implemented on a mobile robot. ------------------------------------------------------------ gzipped: http://www.cs.cmu.edu:8001/Web/People/dredish/pub/vislearn-web.ps.gz unix compressed: http://www.cs.cmu.edu:8001/Web/People/dredish/pub/vislearn-web.ps.Z For other papers of ours, see http://www.cs.cmu.edu:8001/Web/People/dredish/bibliography.html ------------------------------------------------------------ Notes: This paper contains large compressed postscript figures and may take a long time to print out on some printers. This paper will sometimes produce an "unable to uncompress file" error, however, my experience has been that this is a spurious warning and the paper uncompresses correctly. Any problems, contact David Redish dredish at cs.cmu.edu From tony at salk.edu Wed Mar 22 20:20:58 1995 From: tony at salk.edu (Tony Bell) Date: Wed, 22 Mar 95 17:20:58 PST Subject: short TR on noisy neurons Message-ID: <9503230120.AA04504@salk.edu> ---------------------------------- FTP-host: ftp.salk.edu FTP-file: pub/tony/bell.noisy.ps.Z ---------------------------------- The following (short) technical report is ftp-able from the Salk Institute. The file is called bell.noisy.ps.Z, it is 0.65 Mbytes compressed, 1.9 Mbytes uncompressed, and 10 pages long (4 figures). It describes work presented at the Computation and Neural Systems 1994 meeting (CNS '94), but which was late for inclusion in the Proceedings. ----------------------------------------------------------------------- Technical Report no. INC-9502, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523 `BALANCING' OF CONDUCTANCES MAY EXPLAIN IRREGULAR CORTICAL SPIKING. Anthony J. Bell, Zachary F. Mainen, Misha Tsodyks & Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute 10010 N. Torrey Pines Road La Jolla, California 92037 ABSTRACT Five related factors are identified which enable single compartment Hodgkin-Huxley model neurons to convert random synaptic input into irregular spike trains similar to those seen in {\em in vivo} cortical recordings. We suggest that cortical neurons may operate in a narrow parameter regime where synaptic and intrinsic conductances are balanced to reflect, through spike timing, detailed correlations in the inputs. ----------------------------------------------------------------------- Can be obtained via ftp as follows: unix> ftp ftp.salk.edu (or 198.202.70.34) (log in as "anonymous", e-mail address as password) ftp> binary ftp> cd pub/tony ftp> get bell.noisy.ps.Z ftp> quit unix> uncompress bell.noisy.ps.Z unix> lpr bell.noisy.ps From hali at sans.kth.se Wed Mar 22 17:14:46 1995 From: hali at sans.kth.se (Hans Liljenstrom) Date: Wed, 22 Mar 1995 23:14:46 +0100 Subject: Workshop on Fluctuations in Biology Message-ID: <199503222214.AA07496@thalamus.sans.kth.se> ********************************************************************** First announcement of an interdisciplinary workshop organized in collaboration with the Swedish Council for Planning and Coordination of Research (FRN) THE ROLE AND CONTROL OF RANDOM EVENTS IN BIOLOGICAL SYSTEMS Sigtuna, Sweden 4-9 September 1995 MOTIVATION Life is normally associated with a high degree of order and organization. However, disorder in various contexts referred to as fluctuations, noise or chaos is also a crucial component of many biological processes. For example, in evolution random errors in the reproduction of the genetic material provides a variation that is fundamental for the selection of adaptive organisms. At a molecular level, thermal fluctuations govern the movements and functions of the macromolecules in the cell. Yet, it is also clear that too large a variation may have disastrous effects. Uncontrolled processes need stabilizing mechanisms. More knowledge of the stability requirements of biological processes is needed in order to better understand these problems, which also have important medical applications. Many diseases, for instance certain degenerations of brain cells, are caused by failure of the stabilizing mechanisms in the cell. Stability is also important and difficult to achieve in biotechnological applications. In particular, there is randomness in structure and function of the neural networks of the brain. Spontaneous firing of neurons seems to be important for maintaining an adequate level of activity, but does this "neuronal noise" have any other significance? What are the effects of errors and fluctuations in the information processing of the brain? Can these microscopic fluctuations be amplified to provide macroscopic effects? Often, one cannot easily determine whether an apparently random process is due to noise, governed by uncontrolled degrees of freedom, or if it is a result of "deterministic chaos". Would the difference be of any importance for biology? Especially, could chaos, which is characterized by sensitivity and divergence, be useful for any kind of information processing that normally depends upon stability and convergence? Could chaos in the neural dynamics of the brain perhaps be responsible for (creative) thinking? OBJECTIVE The objective of this meeting is to address the questions and problems given above, for a deeper understanding of the effects of disorder in biological systems. Fluctuations and chaos have been extensively studied in physics, but to a much lesser degree in biology. Important concepts from physics, such as "noise-induced state transitions" and "controlled chaos" could also be of relevance for biological systems. Yet, little has been done about such applications and a more critical analysis of the positive and negative effects of disorder for living systems is needed. It is essential to make concrete and testable hypotheses, and to avoid the kind of superficial and more fashionable treatment that often dominates the field. By bringing together scientists with knowledge and insights from different disciplines we hope to shed more light on these problems, which we think are profound for understanding the phenomenon of life. SCOPE A number of invited speakers will provide presentations on the fundamental problems, but we invite further contributions, in the form of short lectures, computer demonstrations and posters by additional participants. We expect everyone to take an active part in the program, in particular in the general discussions. In order to maintain close contact between all participants, and to provide an efficient workshop atmosphere, the number of participants will be limited to approximately fifty people. A proceedings volume is planned. LOCATION The location of the workshop will be at a unique guest home in Sigtuna, a royal town in early Middle Ages. Situated at the shore of the beautiful lake Malaren, Sigtuna is only 15 km away from the Stockholm Intl. Airport and 45 km from downtown Stockholm. It is also close to the city of Uppsala, which is famous for its Viking graves and for the oldest university and largest cathedral in Scandinavia. The area around Sigtuna is full of cultural and historical sites and the great number of runic stones is unique in the world. There will be excursions and opportunities for sightseeing. The total cost, including accomodation, all meals and registration fee is 4500 SEK. Depending on funding availability, we may be able to give some economical support. ORGANIZING COMMITTEE Clas Blomberg, Dept. of Physics, Royal Institute of Technology, Stockholm Hans Liljenstrom, Dept. of Comp. Sci., Royal Institute of Technology, Stockholm Peter Arhem, Nobel Inst. for Neurophysiology, Karolinska Institutet, Stockholm CONFIRMED INVITED SPEAKERS Luigi Agnati, Dept. of Neuroscience, Karolinska Inst., Stockholm, Sweden Agnes Babloyantz, Dept of Chem. Physics, Free University of Brussels, Belgium Adi Bulsara, NRad, San Diego, USA Rodney Cotterill, Div. of Biophysics, Technical Univ. of Denmark Walter Freeman , Dept. of Molecular and Cell Biology, UC Berkeley, USA Hermann Haken, Inst. f. Theor. Physik und Synergetik, Univ. Stuttgart, Germany Christof Koch, Computation and Neural Systems Program, Caltech, Pasadena, USA Larry Liebovitch, Center for Complex Systems, FAU, Boca Raton, USA Michael Mackey, Dept. of Physiology, McGill University, Montreal, Canada Frank Moss, Dept. of Physics, University of Missouri, St Louis, USA Sakire Pogun, Center for Brain Research, Ege University, Izmir, Turkey Ichiro Tsuda, Dept. of Mathematics, Hokkaido University, Sapporo, Japan FURTHER INFORMATION Hans Liljenstrom SANS - Studies of Artifical Neural Systems Dept. of Numerical Analysis and Computing Science Royal Institute of Technology S-100 44 Stockholm, SWEDEN Email: hali at sans.kth.se Phone: +46-(0)8-790 6909 Fax: +46-(0)8-790 0930 ======================================================================== If you are interested in participating in this workshop, please fill in and return the preliminary registration form below: ------------------------------------------------------------------------ Name: Address: Student (yes/no): Willing to contribute with a presentation (yes/no): Preliminary title/subject: ------------------------------------------------------------------------ From zbyszek at uncc.edu Fri Mar 24 09:00:17 1995 From: zbyszek at uncc.edu (Zbigniew Michalewicz) Date: Fri, 24 Mar 1995 09:00:17 -0500 Subject: 3rd IEEE ICEC '96 call for papers Message-ID: <199503241400.JAA17376@unccsun.uncc.edu> ------------------------ CALL FOR PAPERS ------------------------------------ 1996 IEEE International Conference on Evolutionary Computation (ICEC'96) Nagoya, Japan, May 20-22, 1996 3rd IEEE ICEC'96 is co-sponsored by IEEE Neural Network Council (NNC) and Society of Intrumentation and Control Engineers (SICE). 3rd IEEE ICEC'96 will be organized in conjunction with the conference of Artificial Life (Kyoto, JAPAN, May 16-18, 1996). TOPICS: Theory of evolutionary computation Applications of evolutionary computation Efficiency / robustness comparisons with other direct search algorithms Parallel computer implementations Artificial life and biologically inspired evolutionary computation Evolutionary algorithms for computational intelligence Comparisons between difference variants of evolutionary algorithms Machine learning applications Genetic algorithm and selforganization Evolutionary computation for neural networks Fuzzy logic in evolutionary algorithms SUBMISSION PROCEDURE: Prospective authors are invited to submit papers related to the listed topics for oral or poster presentation. Five (5) copies of the paper must be submitted for review. Papers should be printed on letter size white paper, written in English in two-column format in Times or similar font style, 10 points or larger with 2.5 cm margins on all four sides. A length of four pages is encouraged, and a limit of six pages, including figures, tables and references will be enforced. Centered at the top of the first page should be the complete title of the paper and the name(s), affiliation(s) and address(es) of the author(s). All papers (except those submitted for special sessions - which may have different deadlines - see information on special sessions below) should be sent to: Toshio Fukuda, General Chair Nagoya University Dept. of Micro System Engineering and Dept. of Mechano-Informatics and Systems Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN Phone: +81-52-789-4478 Fax: +81-52-789-3909 E-mail: fukuda at mein.nagoya-u.ac.jp IMPORTANT DATES: Proposal for tutorial/exhibits November 15, 1995 Submission of Papers (except for special sessions) December 20, 1995 Notification of acceptance February 20, 1996 Submission of camera-ready papers April 10, 1996 Program Co-chairs: Thomas Baeck Informatik Centrum Dortmund (ICD) baeck at ls11.informatik.uni-dortmund.de Hiroaki Kitano Sony Computer Science Laboratory kitano at csl.sony.co.jp Zbigniew Michalewicz University of North Carolina - Charlotte zbyszek at uncc.edu There are several special sessions organized for the 3rd IEEE ICEC '96; so far these include: ********************************************************************* "Constrained Optimization, Constraint Satisfaction and EC" ********************************************************************* Evolutionary Computation has proved its merit in treating difficult problems in, for example, numerical optimization and machine learning. Nevertheless, problems where constraints on the search space (i.e., on the candidate solutions) play an important role have received relatively little attention. In real-world problems, however, the presence of constraints seems to be rather the rule than the exception. The class of constrained problems can be divided into Constraint Satisfaction Problems (CSP) and Constrained Optimization Problems (COP). This special session addresses both subclasses, and aims to explore the extent to which EC can usefully tackle problems of these kinds. The session is organized by Gusz Eiben, chair (Utrecht University, gusz at cs.ruu.nl) Dave Corne (University of Edinburgh,dave at aifh.ed.ac.uk) Jurgen Dorn (Technical University of Vienna, dorn at vexpert.dbai.tuwien.ac.at) Peter Ross (University of Edinburgh, peter at aisb.ed.ac.uk) Submission: Four (4) copies of complete (6 pages maximum) papers, preferably in PostScript form, should be submitted no later than December 15, 1995 to: A.E. Eiben | email: gusz at cs.ruu.nl Department of Computer Science | Utrecht University | Phone: +31-(0)30-533619 P.O.Box 80089 | 3508 TB Utrecht | Fax: +31-(0)30-513791 The Netherlands | All papers will be reviewed, and authors will be notified of the inclusion of their papers in the special session by February 15, 1996. Any questions regarding this special session can be directed to any of the organizers. ********************************************************************* "Evolutionary Artificial Neural Networks" ********************************************************************* Evolutionary Artificial Neural Networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search algorithms. Three levels of evolution in EANNs have been studied recently, i.e., the evolution of connection weights, architectures, and learning rules. Major issues in the research of EANNs include their scalability, generalisation ability and interactions among different levels of evolution. This special session will serve as a forum for both researchers and practitioners to discuss these important issues and exchange their latest research results/ideas in the area. This special session is organized by X. Yao (xin at cs.adfa.oz.au). Prospective authors are invited to submit four (4) copies of their papers to the following address no later than 20 December 1995. (Please do not include author's information, e.g., name and address, in three of four submitted copies): Xin Yao Department of Computer Science University College, The University of New South Wales Australian Defence Force Academy Canberra, ACT 2600, Australia Ph: +61 6 268 8819 Fax: +61 6 268 8581 Email: xin at csadfa.cs.adfa.oz.au All papers will be reviewed. Notification of acceptance/rejection will be sent out by 20 February 1996. The camera-ready copy must be submitted by 10 April 1996 for inclusion in the conference proceedings. ********************************************************************* "Evolutionary Robotics and Automation" ********************************************************************* More and more researchers are applying evolutionary computation techniques to challenging problems in robotics and automation, where classical methods fail to be effective. In addition to being vastly applicable to many hard problems, evolutionary concepts inspire many researchers as well as users to be fully creative in inventing their own versions of evolutionary algorithms for the specific needs of different domains of problems. This special session serves as a forum for exchanging research results in this growing interdisciplinary area and for encouraging further exploration of the fusion between evolutionary computation and intelligent robotics and automation. This special session is organized by J. Xiao (xiao at uncc.edu). Four (4) copies of complete (6 pages maximum) papers should be submitted no later than December 15, 1995 to: Jing Xiao Department of Computer Science University of North Carolina - Charlotte Charlotte, NC 28223 Phone: (704) 547-4883 Fax: (704) 547-3516 E-mail: xiao at uncc.edu All papers will be reviewed, and authors will be notified of the inclusion of their papers in the special session by February 15, 1996. Any questions regarding this special session should be directed to J. Xiao at the above address. ********************************************************************* "Genetic programming" ********************************************************************* The goal of automatic programming is to create, in an automated way, a computer program that enables a computer to solve a problem. Genetic programming extends the genetic algorithm to the domain of computer programs. In genetic programming, populations of program are genetically bred to solve problems. Genetic programming is a domain-independent method for evolving computer programs that solves, or approximately solves, a variety of problems from a variety of fields, including many benchmark problems from machine learning and artificial intelligence such as problems of control, robotics, optimization, game playing, and symbolic regression (i.e., system identification, concept learning). Early versions of genetic programming evolved programs consisiting of only a single part (i.e., one main program). The session is organized by John R. Koza, Stanford University (Koza at Cs.Stanford.Edu), Lee Spector, Hampshire College (LSPECTOR at hampshire.edu), and Yuji Sato, Hitachi Ltd. Central Research Lab. (yuji at crl.hitachi.co.jp). Prospective authors are encouraged to submit four (4) hard copies of their papers (6 pages maximum) to be received by Friday December 15, 1995 to: John R. Koza Computer Science Department Margaret Jacks Hall Stanford University Stanford, California 94305-2140 USA PHONE: 415-723-1517 FAX(Not for paper submission): 415-941-9430 E-MAIL: Koza at Cs.Stanford.Edu All papers will be reviewed and authors will be notified about acceptance/rejection by about Wednesday, February 15, 1996. ********************************************************************* "Self-adaptation in evolutionary algorithms" ********************************************************************* Evolutionary algorithms (EAs) with the ability to adapt internal strategic parameters (like population size, mutation distribution, type of recombination operator, selective pressure etc.) during the search process usually find better solutions than variants with fixed strategic parameters. Self-adaptation is very useful if different (fixed) parameter settings produce large differences in the solution quality of the algorithm. Most experiences are available for (real-coded) EAs whose individuals adapt their mutation distributions (or step sizes). Here, the property to adjust the step size is induced by competetive pressure among individuals. Evidently, self-adapting mechanisms can be realized by competing subpopulations as well. The potential of those EAs is essentially unexplored. This special session is organized by Guenter Rudolph (rudolph at ls11.informatik.uni-dortmund.de) and is intended to serve as a forum to discuss new ideas and to address the question of a theoretical treatment of self-adapting mechanisms. Four (4) copies of complete papers (6 pages maximum) should be submitted no later than December 15, 1995 to: Guenter Rudolph ICD Informatik Centrum Dortmund e.V. Joseph-von-Fraunhofer-Str. 20 D-44227 Dortmund Germany Phone : +49 - (0)231 - 9700 - 365 Fax : +49 - (0)231 - 9700 - 959 E-mail: rudolph at ls11.informatik.uni-dortmund.de All papers will be reviewed. Authors will be notified of acceptance/rejection by February 15, 1996. ********************************************************************* "Evolutionary algorithms and fuzzy systems" ********************************************************************* Fuzzy sets (FS) and evolutionary algorithms have been already successfully applied to many areas including fuzzy control and fuzzy clustering. There are a number of facets of symbiosis between the technologies of FS and GA. On one hand evolutionary computation enriches the optimization environment for fuzzy systems. On the other, fuzzy sets supply a new macroscopic and domain-specific insight into the fundamental mechanisms of evolutionary algorithms (including fuzzy crossover, fuzzy reproduction, fuzzy fitness function, etc.). The objective of this session is to foster further interaction between researchers actively engaged in FS and GAs. The session will provide a broad forum for exchanging ideas between academe and industry and discussing recent pursuits in the area. This special session is organized by Witold Pedrycz (pedrycz at ee.umanitoba.ca). Prospective authors are encouraged to submit four (4) copies of their papers (6 pages maximum) by December 15, 1995 to: Witold Pedrycz Department of Electrical and Computer Engineering University of Manitoba Winnipeg Canada RT 2N2 Phone : (204) 474-8380 Fax: (204) 261-4639 E-mail: pedrycz at ee.umanitoba.ca All papers will be reviewed and authors will be notified about acceptance/rejection by February 15, 1996. ********************************************************************* ********************************************************************* The deadline for proposals for organizing a special session during the 3rd IEEE ICEC '96 is 20 August 1995; submit your proposal to any Program Co-chair. From omlinc at research.nj.nec.com Fri Mar 24 13:10:55 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Fri, 24 Mar 95 13:10:55 EST Subject: TR available - fault-tolerant recurrent neural networks Message-ID: <9503241810.AA04631@arosa> The following Technical Report is available via the NEC Research Institute archives: __________________________________________________________________________________ Fault-Tolerant Implementation of Finite-State Automata in Recurrent Neural Networks RENSSELAER POLYTECHNIC INSTITUTE DEPT. OF COMPUTER SCIENCE TR CS 95-3 C.W. Omlin[1,2], C.L. Giles[1,3] [1]NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [2]CS Department, Rensselaer Polytechnic Institute, Troy, NY 12180 [3]UMIACS, University of Maryland, College Park, MD 20742} {omlinc,giles}@research.nj.nec.com ABSTRACT Recently, we have proven that the dynamics of any deterministic finite-state automaton (DFA) with n states and m input symbols can be implemented in a sparse second-order recurrent neural network (SORNN) with n+1 state neurons, O(mn) second-order weights and sigmoidal discriminant functions. We investigate how that constructive algorithm can be extended to fault-tolerant neural DFA implementations where faults in an analog implementation of neurons or weights do not affect the desired network performance. We show that tolerance to weight perturbation can be achieved easily; tolerance to weight and/or neuron stuck-at-zero faults, however, requires duplication of the network resources. This result has an impact on the construction of neural DFAs with a dense internal representation of DFA states. __________________________________________________________________________________ http://www.neci.nj.nec.com/homepages/omlin/omlin.html or ftp://ftp.nj.nec.com/pub/omlinc/fault_tolerance.ps.Z __________________________________________________________________________________ From pja at barbarian.endicott.ibm.com Fri Mar 24 13:19:31 1995 From: pja at barbarian.endicott.ibm.com (Peter J. Angeline) Date: Fri, 24 Mar 1995 13:19:31 -0500 Subject: CFP for 5th Annual Conference on Evolutionary Programming Message-ID: <9503241819.AA07491@barbarian.endicott.ibm.com> --------------------------- CALL FOR PAPERS ------------------------------ EP'96 THE FIFTH ANNUAL CONFERENCE ON EVOLUTIONARY PROGRAMMING SPONSORED BY THE EVOLUTIONARY PROGRAMMING SOCIETY February 29 to March 3, 1996 Sheraton Harbor Island Hotel San Diego, CA, USA General Chairman: Lawrence J. Fogel, Natural Selection, Inc. Technical Program Co-Chairs: Peter J. Angeline, Loral Federal Systems Thomas Baeck, Informatik Centrum Dortmund Thomas M. English, Texas Tech University The Fifth Annual Conference on Evolutionary Programming will serve as a forum for researchers investigating applications and theory of evolutionary programming and other related areas in evolutionary and natural computation. Authors are invited to submit papers which describe original unpublished research in evolutionary programming, evolution strategies, genetic algorithms and genetic programming, artificial life, cultural algorithms, and other models that rely on evolutionary principles. Specific topics include but are not limited to the use of evolutionary simulations in optimization, neural network training and design, automatic control, image processing, and other applications, as well as mathematical theory or empirical analysis providing insight into the behavior of such algorithms. Of particular interest are applications of simulated evolution to problems in biology. Hardcopies of manuscripts must be received by one of the technical program co-chairs by September 26, 1995. Electronic submissions cannot be accepted. Papers should be clear, concise, and written in English. Papers received after the deadline will be handled on a time- and space-available basis. The notification of the program committee's review decision will be mailed by November 30, 1995. Papers eligible for the student award must be marked appropriately for consideration (see below). Camera ready papers are due at the conference, and will be published shortly after its completion. Submissions should be single-spaced, 12 pt. font and should not exceed 15 pages including figures and references. Send five (5) copies of the complete paper to: In Europe: Thomas Baeck Informatik Centrum Dortmund Joseph-von-Fraunhofer-Str. 20 D-44227 Dortmund Germany Email: baeck at home.informatik.uni-dortmund.de In US: Peter J. Angeline Loral Federal Systems 1801 State Route 17C Mail Drop 0210 Owego, NY 13827 Email: pja at lfs.loral.com -or- Thomas M. English Computer Science Department Texas Tech University Lubbock, Texas 79409-3104 Email: english at cs.ttu.edu Authors outside Europe or the United States may send their paper to any of the above technical chairmen at their convenience. SUMMARY OF IMPORTANT DATES -------------------------- September 26, 1995 Submissions of papers November 30, 1995 Notification sent to authors February 29, 1996 Conference Begins Evolutionary Programming Society Award for Best Student Paper ------------------------------------------------------------- In order to foster student contributions and encourage exceptional scholarship in evolutionary programming and closely related fields, the Evolutionary Programming Society awards one exceptional student paper submitted to the Annual Conference on Evolutionary Programming. The award carries a $500 cash prize and a plaque signifying the honor. To be eligible for the award, all authors of the paper must be full-time students at an accredited college, university or other educational institution. Submissions to be considered for this award must be clearly marked at the top of the title page with the phrase "CONSIDER FOR STUDENT AWARD." In addition, the paper should be accompanied by a cover letter stating that (1) the paper is to be considered for the student award (2) all authors are currently enrolled full-time students at a university, college or other educational institution, and (3) that the student authors are responsible for the work presented. Only papers submitted to the conference and marked as indicated will be considered for the award. Late submissions will not be considered. Officers of the Evolutionary Programming Society, students under their immediate supervision, and their immediate family members are not eligible. Judging will be made by officers of the Evolutionary Programming Society or by an Awards Committee appointed by the president. Judging will be based on the perceived technical merit of the student's research to the field of evolutionary programming, and more broadly to the understanding of self-organizing systems. The Evolutionary Programming Society and/or the Awards Committee reserves the right not to give an award in any year if no eligible student paper is deemed to be of award quality. Presentation of the Student Paper Award will be made at the conference. Program Committee: J. L. Breeden, Santa Fe Institute M. Conrad, Wayne State University K. A. De Jong, George Mason University D. B. Fogel, Natural Selection, Inc. G. B. Fogel, University of California at Los Angeles R. Galar, Technical University of Wroclaw P. G. Harrald, University of Manchester Institute of Science and Technology K. E. Kinnear, Adaptive Systems J. R. McDonnell, Naval Command Control and Ocean Surveillance Center Z. Michalewicz, University of North Carolina F. Palmieri, University of Connecticut R. G. Reynolds, Wayne State University S. H. Rubin, Central Michigan University G. Rudolph, University of Dortmund N. Saravanan, Ford Research H.-P. Schwefel, University of Dortmund A. V. Sebald, University of California at San Diego W. M. Spears, Naval Research Labs D. E. Waagen, TRW Systems Integration Group Finance Chair: V. W. Porto, Orincon Corporation Local Arrangements: W. Page, Naval Command Control and Ocean Surveillance Center From duff at wrath.cs.umass.edu Fri Mar 24 17:06:59 1995 From: duff at wrath.cs.umass.edu (duff@wrath.cs.umass.edu) Date: Fri, 24 Mar 1995 17:06:59 -0500 Subject: Tech Rept: Q-learning for Bandit Problems Message-ID: <9503242206.AA04229@wrath.cs.umass.edu> The following technical report is available via anonymous ftp: Q-LEARNING FOR BANDIT PROBLEMS (COMPSCI Technical Report 95-26) Michael Duff Department of Computer Science University of Massachusetts Amherst, MA 01003 duff at cs.umass.edu Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDP's) with potentially very large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty ago by Gittins [Gittins \& Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping problems whose solutions determine the optimal policy through the so-called ``Gittins indices.'' Katehakis and Veinott [Katehakis \& Veinott, 1987] have shown that the Gittins index for a task in state $i$ may be interpreted as a particular component of the maximum-value function associated with the ``restart-in-$i$'' process, a simple MDP to which standard solution methods for computing optimal policies, such as successive approximation, apply. This paper explores the problem of learning the Gittins indices on-line without the aid of a process model; it suggests utilizing task-state-specific Q-learning agents to solve their respective restart-in-state-$i$ subproblems, and includes an example in which the online reinforcement learning approach is applied to a simple problem of stochastic scheduling---one instance drawn from a wide class of problems that may be formulated as bandit problems. FTP-host: envy.cs.umass.edu FTP-file: pub/duff/bandit.ps.Z 18 MBytes compressed / .46 MBytes uncompressed / 32 pages (8 figures) FTP Instructions: unix> ftp envy.cs.umass.edu login: anonymous password: (your email address) ftp> cd pub/duff ftp> binary ftp> get bandit.ps.Z ftp> quit unix> uncompress bandit.ps.Z unix> lpr bandit.ps From rafal at mech.gla.ac.uk Fri Mar 24 07:04:20 1995 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Fri, 24 Mar 1995 12:04:20 GMT Subject: Workshop on Neurocontrol Message-ID: <10527.199503241204@gryphon.mech.gla.ac.uk> Neural Adaptive Control Technology Workshop: NACT I 18--19 May, 1995 University of Glasgow, Scotland, UK The first of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on May 18--19, 1995 in Glasgow, Scotland. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland. The project is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed will be evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The project leader is Dr.~Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow). Call for Participation, Provisional Programme, registration form and hotel booking can be found as the PostScript files: call.ps Call for Participation proviso.ps Provisional Programme register.ps registration & hotel on the servers detailed below. FTP server ^^^^^^^^^^ anonymous FTP to: ftp.mech.gla.ac.uk (130.209.12.14) directory: nact World-Wide Web server ^^^^^^^^^^^^^^^^^^^^^ http://www.mech.gla.ac.uk/~nactftp/nact.html WWW server provides a link to the FTP server. Rafal Zbikowski Control Group, Department of Mechanical Engineering, Glasgow University, Glasgow G12 8QQ, Scotland, UK rafal at mech.gla.ac.uk From john at dcs.rhbnc.ac.uk Sat Mar 25 11:56:08 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Sat, 25 Mar 95 16:56:08 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199503251656.QAA21004@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): several new reports available ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-018: ---------------------------------------- On the Complexity of Function Learning by Peter Auer, Technische Universitaet Graz, Philip M. Long, Duke University, Wolfgang Maass, Technische Universitaet Graz, Gerhard J. Woeginger, Technische Universitaet Graz Abstract: The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range $\{ 0,1 \}$. Much less is known about the theory of learning functions with a larger range such as N or R. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any ``learning'' model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from $R^d$ into $R$. Previously, the class of linear functions from $R^d$ into $R$ was the only class of functions with multi-dimensional domain that was known to be learnable within the rigorous framework of a formal model for on-line learning. Finally we give a sufficient condition for an arbitrary class $\F$ of functions from $R$ into $R$ that allows us to learn the class of all functions that can be written as the pointwise maximum of $k$ functions from $\F$. This allows us to exhibit a number of further nontrivial classes of functions from $R$ into $R$ for which there exist efficient learning algorithms. ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-019: ---------------------------------------- Neural Nets with Superlinear VC-Dimension by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Abstract: It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most $O(w \cdot \log w)$, where $w$ is the total number of weights in the neural net. We show in this paper that this bound is in fact asymptotically optimal. More precisely, we exhibit for any depth $d\geq 3$ a large class of feedforward neural nets of depth $d$ with $w$ weights that have VC-dimension $\Omega(w\cdot \log w)$. This lower bound holds even if the inputs are restricted to boolean values. The proof of this result relies on a new method that allows us to encode more ``program-bits'' in the weights of a neural net than previously thought possible. ---------------------------------------- NeuroCOLT Technical Report NC-TR-94-020: ---------------------------------------- Efficient Agnostic PAC-Learning with Simple Hypotheses by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Abstract: We exhibit efficient algorithms for agnostic PAC-learning with rectangles, unions of two rectangles, and unions of $k$ intervals as hypotheses. These hypothesis classes are of some interest from the point of view of applied machine learning, because empirical studies show that hypotheses of this simple type (in just one or two of the attributes) provide good prediction rules for various real-world classification problems. In addition, optimal hypotheses of this type may provide valuable heuristic insight into the structure of a real-world classification problem. The algorithms that are introduced in this paper make it feasible to compute optimal hypotheses of this type for a training set of several hundred examples. We also exhibit an approximation algorithm that can compute near optimal hypotheses for much larger datasets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-002: ---------------------------------------- Agnostic PAC-Learning of Functions on Analog Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria Abstract: We consider learning on multi-layer neural nets with piecewise polynomial activation functions and a fixed number $k$ of numerical inputs. We exhibit arbitrarily large network architectures for which efficient and provably successful learning algorithms exist in the rather realistic refinement of Valiant's model for probably approximately correct learning (``PAC-learning'') where no a-priori assumptions are required about the ``target function'' (agnostic learning), arbitrary noise is permitted in the training sample, and the target outputs as well as the network outputs may be arbitrary reals. The number of computation steps of the learning algorithm LEARN that we construct is bounded by a polynomial in the bit-length $n$ of the fixed number of input variables, in the bound $s$ for the allowed bit-length of weights, in $\frac{1} {\varepsilon}$, where $\varepsilon$ is some arbitrary given bound for the true error of the neural net after training, and in $\frac{1}{\delta}$ where ${\delta}$ is some arbitrary given bound for the probability that the learning algorithm fails for a randomly drawn training sample. However the computation time of LEARN is exponential in the number of weights of the considered network architecture, and therefore only of interest for neural nets of small size. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-003: ---------------------------------------- Perspectives of Current Research about the Complexity of Learning on Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria Abstract: This paper discusses within the framework of computational learning theory the current state of knowledge and some open problems in three areas of research about learning on feedforward neural nets: \begin{itemize} \item[--]Neural nets that learn from mistakes \item[--]Bounds for the Vapnik-Chervonenkis dimension of neural nets \item[--]Agnostic PAC-learning of functions on neural nets. \end{itemize} All relevant definitions are given in this paper, and no previous knowledge about computational learning theory or neural nets is required. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-005: ---------------------------------------- Simulating Access to Hidden Information while Learning by Peter Auer, Technische Universit\"{a}t Graz, Philip M. Long, Duke University Abstract: We introduce a new technique which enables a learner without access to hidden information to learn nearly as well as a learner with access to hidden information. We apply our technique to solve an open problem of Maass and Tur\'{a}n, showing that for any concept class $F$, the least number of queries sufficient for learning $F$ by an algorithm which has access only to arbitrary equivalence queries is at most a factor of $1/\log_2 (4/3)$ more than the least number of queries sufficient for learning $F$ by an algorithm which has access to both arbitrary equivalence queries and membership queries. Previously known results imply that the $1/\log_2 (4/3)$ in our bound is best possible. We describe analogous results for two generalizations of this model to function learning, and apply those results to bound the difficulty of learning in the harder of these models in terms of the difficulty of learning in the easier model. We bound the difficulty of learning unions of $k$ concepts from a class $F$ in terms of the difficulty of learning $F$. We bound the difficulty of learning in a noisy environment for deterministic algorithms in terms of the difficulty of learning in a noise-free environment. We apply a variant of our technique to develop an algorithm transformation that allows probabilistic learning algorithms to nearly optimally cope with noise. A second variant enables us to improve a general lower bound of Tur\'{a}n for the PAC-learning model (with queries). Finally, we show that logarithmically many membership queries never help to obtain computationally efficient learning algorithms. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-006: ---------------------------------------- A Stop Criterion for the Boltzmann Machine Learning Algorithm by Berthold Ruf, Technical University Graz Abstract: Ackley, Hinton and Sejnowski introduced a very interesting and versatile learning algorithm for the Boltzmann machine (BM). However it is difficult to decide when to stop the learning procedure. Experiments have shown that the BM may destroy previously achieved results when the learning process is executed for too long. This paper introduces a new quantity, the conditional divergence, measuring the learning success for the inputs of the data set. To demonstrate its use, some experiments are presented, based on the Encoder Problem. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-007: ---------------------------------------- VC-Dimensions for Graphs by Evangelos Kranakis, Carleton University, Danny Krizanc, Carleton University, Berthold Ruf, Technical University Graz, Jorge Urrutia, University of Ottawa, Gerhard J. Woeginger, Technical University Graz Abstract: We study set systems over the vertex set (or edge set) of some graph that are induced by special graph properties like clique, connectedness, path, star, tree, etc. We derive a variety of combinatorial and computational results on the $\vc$ (Vapnik-Chervonenkis) dimension of these set systems. For most of these set systems (e.g.\ for the systems induced by trees, connected sets, or paths), computing the $\vc$-dimension is an $\np$-hard problem. Moreover, determining the $\vc$-dimension for set systems induced by neighborhoods of single vertices is complete for the class $\lognp$. In contrast to these intractability results, we show that the $\vc$-dimension for set systems induced by stars is computable in polynomial time. For set systems induced by paths or cycles, we determine the extremal graphs $G$ with the minimum number of edges such that $\vc_{{\cal P}}(G)\ge k$. Finally, we show a close relation between the $\vc$-dimension of set systems induced by connected sets of vertices and the $\vc$ dimension of set systems induced by connected sets of edges; the argument is done via the line graph of the corresponding graph. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-008: ---------------------------------------- Computing the Maximum Bichromatic Discrepancy, with applications to Computer Graphics and Machine Learning by David P. Dobkin, Princeton University, Dimitrios Gunopulos, Princeton University, Wolfgang Maass, Technische Universitaet Graz, Abstract: Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-009: ---------------------------------------- A Finite Automaton Learning System using Genetic Programming by Herman Ehrenburg, CWI, Jeroen van Maanen, CWI Abstract: This report describes the Finite Automaton Learning System (FALS), an evolutionary system that is designed to find small digital circuits that duplicate the behaviour of a given finite automaton. FALS is developed with the aim to get a better insight in learning systems. It is also targeted to become a general purpose automatic programming system. The system is based on the genetic programming approach to evolve programs for tasks instead of explicitly programming them. A representation of digital circuits suitable for genetic programming is given as well as an extended crossover operator that alleviates the need to specify an upper bound for the number of states in advance. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-010: ---------------------------------------- On Specifying Boolean Functions by Labelled Examples by Martin Anthony, London School of Economics, Graham Brightwell, London School of Economics, John Shawe-Taylor, Royal Holloway, University of London Abstract: We say a function $t$ in a set $H$ of $\{0,1\}$-valued functions defined on a set $X$ is {\it specified} by $S \subseteq X$ if the only function in $H$ which agrees with $t$ on $S$ is $t$ itself. The {\it specification number} of $t$ is the least cardinality of such an $S$. For a general finite class of functions, we show that the specification number of any function in the class is at least equal to a parameter from~\cite{RS} known as the testing dimension of the class. We investigate in some detail the specification numbers of functions in the set of linearly separable Boolean functions of $n $ variables---those functions $f$ such that $f^{-1}(\{0\})$ and $f^{-1}(\{1\})$ can be separated by a hyperplane. We present general methods for finding upper bounds on these specification numbers and we characterise those functions which have largest specification number. We obtain a general lower bound on the specification number and we show that for all {\it nested} functions, this lower bound is attained. We give a simple proof of the fact that for any linearly separable Boolean function, there is exactly one set of examples of minimal cardinality which specifies the function. We discuss those functions which have limited dependence, in the sense that some of the variables are redundant (that is, there are irrelevant attributes), giving tight upper and lower bounds on the specification numbers of such functions. We then bound the average, or expected, number of examples needed to specify a linearly separable Boolean function. In the final section of the paper, we address the complexity of computing specification numbers and related parameters. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-012: ---------------------------------------- On the relations between discrete and continuous complexity theory by Klaus Meer, RWTH Aachen Abstract: Relations between discrete and continuous complexity models are considered. The present paper is devoted to combine both models. In particular we analyze the 3-Satisfiability problem. The existence of fast decision procedures for this problem over the reals is examined based on certain conditions on the discrete setting. Moreover we study the behaviour of exponential time computations over the reals depending on the real complexity of 3-Satisfiability. This will be done using tools from complexity theory over the integers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-014: ---------------------------------------- Grundlagen der reellen Komplexit\"atstheorie by Klaus Meer, RWTH Aachen Abstract: (in English - text is in German) Complexity theory deals with the question of classifying mathematical problems according to the difficulty they provide for algorithmic solutions. This is generally related to \begin{itemize} \item finding efficient solution-algorithms, \item analyzing structural properties which make problems difficult to solve and \item comparing problems. \end{itemize} Contrary to the situation in classical complexity theory the real approach is interested in studying problems defined on continuous structures. Starting point for the present lecture notes will be the model of a real Turing-machine as it was introduced 1989 by Blum, Shub, and Smale. We will begin with a formal definition of notions like computability, decidability and efficiency. This gives rise to consider the complexity classes $P_{\R}$ and $NP_{\R}$. After analyzing basic properties (reducibility, $NP_{\R}-$completeness,existence of complete problems) we'll care about decidability of problems in class $NP_{\R}$. To this aim results on quantifier elimination and on the structure of semialgebraic sets are investigated. Finally, methods for proving lower bounds are presented. For this purpose we show a real version of Hilbert's Nullstellensatz. Table of contents: 0. Introduction 1. The computational model of Blum, Shub, and Smale 2. Complexity theory for the BSS-model 3. Existential theory over the reals 4. Lower bounds References ----------------------- The Report NC-TR-94-018 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-94-018.ps.Z ftp> bye % zcat nc-tr-94-018.ps.Z | lpr -l Similarly for the other technical report. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neurocolt.html Best wishes John Shawe-Taylor From rajkumar at centre-intelligent-systems.plymouth.ac.uk Sun Mar 26 14:49:51 1995 From: rajkumar at centre-intelligent-systems.plymouth.ac.uk (Rajkumar Roy (EDC)) Date: Sun, 26 Mar 95 14:49:51 BST Subject: No subject Message-ID: <6961.9503261349@cis.plymouth.ac.uk> Subject: ACEDC'96 Call for Papers ... Cc: ************************************************************** PLEASE CIRCULATE ! PLEASE CIRCULATE ! PLEASE CIRCULATE ! ************************************************************** SECOND INTERNATIONAL CONFERENCE ADAPTIVE COMPUTING IN ENGINEERING DESIGN AND CONTROL '96 26-28 March 1996 'The Integration of Genetic Algorithms, Neural Computing and Related Adaptive Techniques with Current Engineering Practice'. 1ST CALL FOR PAPERS ACEDC'96 CONFERENCE CHAIRS Dr I C Parmee Prof M J Denham Plymouth Engineering Design Centre AIMS OF THE CONFERENCE There is a world-wide upsurge of interest from both industry and academia in exciting novel computer technologies that are inspired by biological principles and other natural processes. The Genetic Algorithm, Neural Computing and Cellular Automata are examples of emergent computational techniques which exploit co-operating elements to solve complex problems previously considered to be beyond the capabilities of conventional numerical computation. A number of specialised conferences are held annually where fundamental issues in these fields are described and discussed. ACEDC'96 is the second in what is expected to be a biennial series of meetings aimed at addressing the rapidly developing integration of these emerging computing technologies with engineering applications, particularly in the areas of design and control. The primary objective of the ACEDC'96 Conference is to create a stimulating environment in which participants can assess the state of the art, discuss feasible future directions for research and applications and develop long term targets. The ultimate aim of this conference series is to ensure that design engineers can take full advantage of these powerful computing technologies and of their implementation upon high performance computing platforms, as both become increasingly available and dominant over the next ten years and into the early part of the 21st Century. RELEVANT AREAS Papers are invited which address, amongst others, the following issues: * How are design and control problems best formulated for the application of these novel computing technologies? * What aspects of design and control problems present difficulties for and limitations on the use of these technologies? * What are the current shortcomings of the novel computing methods in respect of their application to real world problems? * To what extent can the development of hybrid approaches, involving the dynamic combination of complementary computing methods, help to solve present and future problems? * How can designer intuition and experience be captured and included in the process? * How can the design engineer visualise and explain the computational processes, their resulting solutions and pathways to these solutions? * How can designer creativity be best enhanced by these techniques? 1ST CALL FOR PAPERS Submissions should take the form initially of extended abstracts of 1000-2000 words which fully describe how the paper will contribute to the aims of the conference. This will be either by addressing the issues described above or related issues at a conceptual level, or by describing real world examples of how such issues have been approached and problems overcome. Abstracts are invited from both industry and academia and may describe completed work or ongoing research. Papers will be accepted as either full papers for oral presentation or short papers for poster presentation etc. Extended abstracts should be received by 1st May 1995. Successful authors will be informed by 30th August 1995 and a camera-ready copy should arrive no later than 23rd October 1995. CONFERENCE ORGANISATION The Conference will be of three days duration. Parallel sessions will be avoided on at least two of the three days with the aim of generating widespread discussion on all aspects of the meeting. The content of each session will be designed as far as possible to include papers which address the issues both conceptually and through application in order to promote and stimulate discussion of the integration between the computing methods and their real-world applications. This approach will also assist in the identification of the generic issues involved. Keynote speakers have been invited to present papers which will further stimulate and focus discussion of the major issues in the field. Delegate fees will be kept to a minimum and are unlikely to exceed 180.00 pounds sterling for the attendance on all three days, the aim being to provide high-level information exchange at low cost. INVITED KEYNOTE SPEAKERS Professor Eric Goodman Michigan State University, USA Professor George Thierauf University of Essen, Germany Professor John Taylor Kings College, London, UK Professor Julian Morris University of Newcastle-upon-Tyne, UK Dr Philip Husbands University of Sussex, UK INVITED SCIENTIFIC COMMITTEE A J Keane University of Oxford, UK H Schwefel University of Dortmund, Germany P Husbands University of Sussex, UK G Thierauf University of Essen, Germany P Cowley Rolls Royce, UK E Semenkin Siberian Aerospace Academy, Russia P Liddell British Aerospace, UK D Grierson University of Waterloo, Canada G Gapper British Aerospace, UK J Angus Rolls Royce, UK E Goodman Michigan State University, USA J Taylor Kings College, London, UK E Kant Schlumberger Computing Labs, USA C Hughes Logica Cambridge, UK S Talukdar Carnegie Mellon University, USA C Harris University of Southampton, UK J Morris University of Newcastle-upon-Tyne, UK C Lin Institute of Technology, Taiwan S Patel Unilever Research Laboratory, UK M J Denham University of Plymouth, UK I C Parmee University of Plymouth, UK ASSOCIATED SOCIETIES Institution of Engineering Designers Institution of Mechanical Engineers Institution of Civil Engineers British Computer Society AISB IMPORTANT DATES Immediately Expression of interest 1st May 1995 Deadline for receipt of abstracts 30th August 1995 Notification of acceptance 23rd October 1995 Deadline for receipt of full papers 26-28th Mar 1996 Conference CONTACT ADDRESS Ms J Levers (Secretary) Plymouth Engineering Design Centre University of Plymouth Charles Cross Centre Drake Circus PLYMOUTH Devon, PL4 8DE United Kingdom Tele: +44 (0)1752-233508 Fax: +44 (0)1752-233505 Email: ian at cis.plym.ac.uk From prechelt at ira.uka.de Mon Mar 27 08:51:39 1995 From: prechelt at ira.uka.de (Lutz Prechelt) Date: Mon, 27 Mar 1995 15:51:39 +0200 Subject: TR on connection pruning available Message-ID: <"irafs2.ira.104:27.03.95.13.50.23"@ira.uka.de> FTP-host: ftp.icsi.berkeley.edu FTP-file: /pub/techreports/1995/tr-95-009.ps.Z URL: ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-009.ps.Z The technical report "Adaptive Parameter Pruning in Neural Networks" is now available for anonymous ftp from ftp.icsi.berkeley.edu in directory /pub/techreports/1995/ as file tr-95-009.ps.Z (92 kB, 14 pages). Here is the bibtex entry and abstract: @TechReport{Prechelt95e, author = "Lutz Prechelt", title = "Adaptive Parameter Pruning in Neural Networks", institution = "International Computer Science Institute", year = 1995, number = "95-009", address = "Berkeley, CA", month = mar, Class = "nn, learning, experiment, algorithm", URL = "ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-009.ps.Z}, abstract = "Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization. An open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This paper presents a pruning method \Def{lprune} that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. The results of extensive experimentation indicate that lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. Results of statistical significance tests comparing autoprune to the new method lprune as well as to backpropagation with early stopping are given for 14 different problems." } The ICSI internet connection is sometimes extremely slow and fails often. If you have problems getting the document, just try again at a different time. Sorry, no hardcopies available from me. Lutz Lutz Prechelt (http://wwwipd.ira.uka.de/~prechelt/) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get (Voice: +49/721/608-4068, FAX: +49/721/694092) | less simple. From bert at mbfys.kun.nl Tue Mar 28 04:44:35 1995 From: bert at mbfys.kun.nl (Bert Kappen) Date: Tue, 28 Mar 1995 11:44:35 +0200 Subject: No subject Message-ID: <199503280944.LAA11946@septimius.mbfys.kun.nl> Subject: Publication announcement FTP-host: galba.mbfys.kun.nl FTP-file: pub/reports/Kappen.RBBM.ps.Z Radial Basis Boltzmann Machines and learning with missing values (4 pages) Hilbert J. Kappen, Marcel J. Nijman RWCP Novel Function SNN Laboratory Dept. of Medical Physics and Biophysics, University of Nijmegen Geert Grooteplein 21, NL 6525 EZ Nijmegen, The Netherlands ABSTRACT: A Radial Basis Boltzmann Machine (RBBM) is a specialized Boltzmann Machine architecture that combines feed-forward mapping with probability estimation in the input space, and for which very fast learning rules exist. The hidden representation of the network displays symmetry breaking as a function of the noise in the Glauber dynamics. Thus generalization can be studied as a function of the noise in the neuron dynamics instead of as a function of the number of hidden units. For the special case of unsupervised learning, we show that this method is an elegant alternative of $k$ nearest neighbor, leading to comparable performance without the need to store all data. We show that the RBBM has good classification performance compared to the MLP. The main advantage of the RBBM is that simultaneously with the input-output mapping, a model of the input space is obtained which can be used for learning with missing values. We show that the RBBM compares favorably to the MLP for large percentages of missing values. From l.s.smith at cs.stir.ac.uk Tue Mar 28 10:38:10 1995 From: l.s.smith at cs.stir.ac.uk (Dr L S Smith (Staff)) Date: Tue, 28 Mar 1995 16:38:10 +0100 Subject: M.Sc. Course in Neural Computation. Message-ID: <199503281538.QAA07039@katrine.cs.stir.ac.uk> CENTRE FOR COGNITIVE AND COMPUTATIONAL NEUROSCIENCE UNIVERSITY OF STIRLING, SCOTLAND M.Sc. in NEURAL COMPUTATION This is a one-year full-time course with a focus on basic principles of neural computation and a special emphasis on vision. Students are prepared for careers in neural net research and development in industrial and academic situations, and also for work in more traditional computational, cognitive science, or neuroscience environments where this training can provide a distinctive and valuable contribution. The course may include a two-month industrial placement. Work for the M.Sc. can in some case be converted into the first year of a Ph.D. A nine-month Post-graduate Diploma is also available. COURSE STRUCTURE: AUTUMN 1. Introduction to neural computation 2. Principles of vision 3. Cognitive neuroscience 4. Mathematical and statistical techniques SPRING & SUMMER 1. Advanced practical work 2. Advanced topics 3. Research project ADMISSION: Applicants with any relevant first degree are eligible, e.g., PSYCHOLOGY, COMPUTING, BIOLOGY, PHYSICS, ENGINEERING, or MATHEMATICS. For information and application forms contact: School Office, School of Human Sciences, University of Stirling, Stirling FK9 4LA, SCOTLAND For specific enquiries contact: Dr W. A. Phillips, CCCN, Stirling University, Stirling FK9 4LA, Scotland e-mail: WAP1 at FORTH.STIR.AC.UK From halici at rorqual.CC.METU.EDU.TR Mon Mar 27 03:23:33 1995 From: halici at rorqual.CC.METU.EDU.TR (ugur halici) Date: Mon, 27 Mar 1995 11:23:33 +0300 (MEST) Subject: NN chips Message-ID: <mailman.752.1149591334.29955.connectionists@cs.cmu.edu> Dear Colleagues, We are gathering information on Neural Network hardware devices that have been implemented. Until now, we have collected sufficient information on the following chips/devices, for which a reference list for these devices is provided at the end of the message: TiNMANN Kohonen SOFM Nestor Ni1000 Siemens MA16 Mitsubishi Branch Neuron Unit Bell Labs Hopfield Chip Hitachi Digital Chip Philips L-Neuro Intel ETANN University of Edinburgh EPSILON AT&T Bell Labs ANNA Adaptive Solution CNAPS Hitachi WSI Siemens SYNAPSE However we have insufficient information on the following: British Telecom HANNIBAL Silicon Retina Jet Propulsion Laboratory Hopfield Chip BELLCORE Boltzmann Machine KAKADU Multilayer Perceptron Fujitsu Analog-Digital Chip U. of Catholique Louvain Kohonen SOFM Chip MIT Neuroprocessor Chip TRW MARK HNC SNAP We will appreciate your contact if you have involved somehow in implemented neuro-chips whose name is not in our list or listed among the ones for which we have insufficient information. Sincerely, Ugur Halici Dept. of Electrical Engineering Middle East Technical University, 06531, Ankara Fax: (+90) 312 210 12 61 Email: halici at rorqual.cc.metu.edu REFERENCES Alspector, J.,et. al., 1989, "Performance of a Stochastic Learning Microchip.", Advances in Neural Information Processing Systems, Vol.1, pp. 748-760. Arima, Y., et. al, 1991a, "A 336-Neuron, 28-K Synapse, Self-learning Neural Network Chip with Branch-Neuron-Unit Architecture.", IEEE Journal of Solid State Circuits, Vol.26, No.11, pp. 1637-1644. Arima, Y., et. al, 1991b, "A Self-learning Neural Network Chip with 125-Neurons and 10-K self-Organization Synapses.", IEEE Journal of Solid State Circuits, Vol.26, No.4, pp. 607-611. Arima, Y., et. al, 1992, "A Refreshable Analog VLSI Neural Network Chip with 400-Neurons and 40-K synapses", IEEE Journal of Solid State Circuits, Vol.27, No.12, pp. 1854-1861. Castro, H.A., et. al, 1993, "Implementation and Performance of Analog Nonvolatile Neural Network", Analog Integrated Circuits and Signal Processing, Vol.4, pp. 97-113. Eberhardt, S.P., et.al., 1992, "Analog VLSI Neural Netowrks: Implementation Issues and Examples in Optimization and Supervised Learning.", IEEE Transactions on Industrial Electronics, Vol.39, No.6, pp. 552-564. Hamilton, A., et. al, 1993, "Pulse Stream VLSI Circuits and Systems: The EPSILON Neural Network Chipset", International Journal of Neural Systems, Vol.4, No.4, pp. 395-405. Holler, M., et. al, 1989, "An Electrically Trainable Artificial Neural Network (ETANN) with 1024 'Floating Gate' Syanpse", Proceedings of IACNN 1989, pp.191-196 INTEL 80170NW ETANN Experimental Sheet, May 1990, Intel Corp. Maher, M.A.C., et.al., 1989, "Implementing Neural Architectures Using Analog VLSI Circuits.", IEEE Transactions on Circuits and Systems, Vol. 36, No. 5, pp. 643-652 Mueller, D., and D.Hammerstorm, 1992, "A Neural Network Systems Component", Proceedings of IEEE ICNN 1992, pp. 1258-1264. Murray, A.F., et. al, 1994, "Pulse Stream VLSI Neural Networks.", IEEE Micro, June 1994, pp. 29-38. Ramacher, U., 1992, "SYNAPSE - A Neurocomputer that Synthesizes Neural Algorithms on a Parallel Systolic Engine", Journal of Parallel and Distributed Computing, Vol.14, pp. 306-318. Ramacher, U., et. al, 1993, "Multiprocessor and Memory Architecture of the Neurocomputer Synapse-1", International Journal of Neural Systems, Vol.4, No.4, pp. 333-336. Sackinger, E., et. al, 1992a, "Application of the ANNA Neural Network Chip to High-speed Character Recognition.", IEEE Transactions on Neural Networks, Vol.3, No.3, pp. 498-505. Tam, S., et. al, 1992, "A Reconfigurable Multi-chip Analog Neural Network Recognition and Back-Propagation Training", Proceedings of IEEE ICNN 1992, pp. 625-630. Watanebe, T., et. al, 1993, "A Single 1.5V Digital Chip for a 106 Synapse Neural Network.", IEEE Transactions on Neural Networks, Vol.4, No.3, pp. 387-393. From robtag at dia.unisa.it Wed Mar 29 05:31:32 1995 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Wed, 29 Mar 1995 12:31:32 +0200 Subject: FIRST EUROPEAN NEURAL NETWORK SCHOOL Message-ID: <9503291031.AA15397@udsab.dia.unisa.it> FIRST EUROPEAN NEURAL NETWORK SCHOOL First Announcement IIASS, Vietri S/M (Salerno), Italy 25-29 September 1995 (Co-chairs: Professor M. Marinaro and Dr. T.G. Clarkson) There is a need for a school in neural networks to caterto the healthy growth of activity in the subject. The NEURONET EC Network of Excellence is proposing to help sponsor and develop this, in collaboration with IIASS. The School will last for 5 days, with lectures in the mornings (9.00 am - 12.00 midday) and late afternoons (3.00 pm - 5.00 pm) each day. 95 per day, each 1 hour in lenght). At the end of each day a discussion (1 hour) will be held. Proposed topics (3 hours per topics, except as where mentioned): 1. Introduction to Artificial Neural Networks (ANNs) 2. Theory of Learning 3. Applications of ANNs in Control } 4. Applications of ANNs in Pattern Recognition } (2 hours each) 5. Applications of ANNs in Time Series } 6. Introduction to Living Neural Networks 7. Biological Inputs and Outputs 8. Biological Memory Systems 9. Higher Order Cognitive Modelling Committee from NEURONET Dr. T.G. Clarkson (KCL) (Chair) Professor J.G. Taylor (KCL) Professor A. Babloyantz (and other Human Resources Committee members) PARTICIPANTS All appropriate (students, beginners to the field, etc.) Registration fee 250.000 Italian Lire Registration form (To be sent to IIASS by May, 31 1995) ***************************************************************************** TEAR OFF HERE ***************************************************************************** INFORMATION FORM to be returned to: FENNS 95 IIASS Via Pellegrino, 19 84019 Vietri s/m (Salerno) Italia FENNS 95 Vietri s/m, 25-29 September 1995 Last name : .......................................................... First Name : ........................................................ Organization or company : ............................................ ...................................................................... ...................................................................... Postal code/Zip code : ............................................... City : ............................................................... Country : ............................................................ Tel : ................................................................ Fax : ................................................................ Electronic mail:...................................................... ***************************************************************************** TEAR OFF HERE ***************************************************************************** The accepted registration will be notified by June, 20 1995 and the registration fee must be sent by July, 20 1995. ***************************************************************************** From terry at salk.edu Thu Mar 30 01:30:02 1995 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 29 Mar 95 22:30:02 PST Subject: Neural Computation Abstract on WWW Message-ID: <9503300630.AA15721@salk.edu> Abstracts for Neural Computation can be found on the World Wide Web: http://www-mitpress.mit.edu Terry ----- From rsun at cs.ua.edu Thu Mar 30 12:57:13 1995 From: rsun at cs.ua.edu (Ron Sun) Date: Thu, 30 Mar 1995 11:57:13 -0600 Subject: Workshop on connectionist-symbolic integration Message-ID: <9503301757.AA18175@athos.cs.ua.edu> ---------------------------------------------- The IJCAI Workshop on Connectionist-symbolic Integration: From Unified to Hybrid Approaches ---------------------------------------------- Montreal, Canada August 19-20, 1995 There has been a considerable amount of research in integrating connectionist and symbolic processing. While such an approach has clear advantages, it also encounters serious difficulties and challenges. Therefore, various models and ideas have been proposed to address various problems and aspects in this integration. There is a growing interest from many segments of the AI community, ranging from expert systems, to cognitive modeling, to logical reasoning. Two major trends can be identified in the state of the art: these are the unified, or purely connectionist, and the hybrid approaches to integration. Whereas the purely connectionist ("connectionist-to-the-top") approach claims that complex symbol processing functionalities can be achieved via neural networks alone, the hybrid approach is premised on the complementarity of the two paradigms and aims at their synergistic combination in systems comprising both neural and symbolic components. In fact, these trends can be viewed as two ends of an entire spectrum. Up till now, overall, there is still relatively little work in comparing and combining these fairly isolated efforts. This workshop will provide a forum for discussions and exchanges of ideas in this area, to foster cooperative work. The workshop will tackle important issues in integrating connectionist and symbolic processing. ** Organizing Committee Frederic Alexandre (co-chair) John Barnden Steve Gallant Larry Medsker Christian Pellegrini Noel Sharkey Ron Sun (co-chair) ** Program Committee Lawrence Bookman Michael Dyer Wolfgang Ertel LiMin Fu Jose Gonzalez-Cristobal Ruben Gonzalez-Rubio Jean-Paul Haton Melanie Hilario Abderrahim Labbi Ronald Yager Workshop Schedule} -------------------- August 19th, 1995 9:00 - 9:20 Opening Remarks R. Sun F. Alexandre 9:20 - 11:30 Invited talks Chair: Ron Sun 9:20 - 10:20 Invited Talk: Neuropsychology meets AI J. Hendler 10:30 - 11:30 Invited Talk: Neural computing and Artificial Intelligence N. Sharkey 11:30-12:00 Panel responses and discussions Chair: R. Sun Panelists: F. Alexandre, J. Austin, G. Cottrell/D. Noelle, R. Yager Each panelist gives a 5 minute commentary, with questions and comments from the audience. The invited speakers then give their responses. 1:30 - 2:30 Interactive Session: Definitions of Approaches Chair: F. Alexandre Overview of strategies for neurosymbolic integration M. Hilario Cognitive aspects of neurosymbolic integration Y. Lallement and F. Alexandre 2:30 pm - 4:00 Regular Session: Hybrid Approaches Chair: C. Pellegrini A hybrid learning model of abductive reasoning T. Johnson and J. Zhang A hybrid learning model for reaction and decision making R. Sun and T. Peterson A preprocessing model for integrating CBR and prototype-based neural network M. Malek and B. Amy 4:10 - 5:40 Regular Session: Unified Approaches Chair: R. Sun Symbolic neural networks derived from stochastic grammar domain models E. Mjolsness Micro-level hybridization in DUAL B. Kokinov A unified connectionist model of instruction following D. Noelle and G. Cottrell August 20th, 1995 9:00 - 10:30 Regular Session: Hybrid Approaches Chair: J.P. Haton An integrated symbolic/connectionist model of parsing S. Stevenson A hybrid system framework for disambiguating word senses X. Wu, M. McTear, P. Ojha, H. Dai A localist network architecture for logical inference N. Park and D. Robertson 10:40 - 12:40 Regular Session: Unified Approaches Chair: F. Alexandre Holographic reduced representation T. Plate Distributed representations for terms in hybrid reasoning systems A. Sperduti, A. Starita, C. Goller Learning distributed representation R. Krosley and M. Misra Distributed associative memory J. Austin 2:00 - 4:30 Regular Session: Hybrid Approaches Chair: L. Medsker A distributed platform for symbolic-connectionist integration J. C. Gonzalez, J. R. Velasco, C. A. Iglesias Nessyl3L: a neurosymbolic system with 3 levels B. Orsier and A. Labbi A framework for hybrid systems P. Bison, G. Chemello, C. Sossai, G. Trainito A first approach to a taxonomy of fuzzy-neural systems L. Magdalena Task structure and computational level; architectural issues in symbolic-connectionist integration R. Khosla and T. Dillon 4:40 - 5:30 Summary Panel Chair: F. Alexandre Panelists: R. Sun, T. Johnson/J. Zhang, M. Hilario, S. Gallant, J.P. Haton, L. Medsker 5:30 Workshop ends -------------------------------------------------------------- For details, contact IJCAI-95, c/o AAAI, 455 Burgess Drive, Menlo Park, CA 94025, USA. ================================================================ Dr. Ron Sun Department of Computer Science phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu ================================================================ From bishopc at helios.aston.ac.uk Fri Mar 31 06:40:38 1995 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 31 Mar 1995 11:40:38 +0000 Subject: Aston World Wide Web Pages Message-ID: <9838.9503311040@sun.aston.ac.uk> Aston University ---------------- Neural Computing Research Group ------------------------------- World Wide Web pages -------------------- Our world wide web pages can be viewed at the following URL: http://neural-server.aston.ac.uk/ These pages include information on the research activities of the Group, lists of recent publications and preprints, and funded research opportunities within the Group. Chris Bishop -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)121 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)121 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From rwp at eng.cam.ac.uk Fri Mar 31 14:52:07 1995 From: rwp at eng.cam.ac.uk (Richard Prager) Date: Fri, 31 Mar 1995 14:52:07 BST Subject: Cambridge Neural Nets Summer School 1995 Message-ID: <199503311352.12914@dsl.eng.cam.ac.uk> +------------------------------------------------------------------+ | FIFTH ANNUAL CAMBRIDGE NEURAL NETWORKS SUMMER SCHOOL | | | | 24-27 July 1995, Emmanuel College, Cambridge, United Kingdom | +------------------------------------------------------------------+ FULLY FUNDED PLACES AVAILABLE FOR EPSRC RESEARCH STUDENTS DISCOUNTED RATES AVAILABLE FOR ACADEMICS SPEAKERS Professor Chris BISHOP - Aston University, Birmingham Dr Herve BOURLARD - Faculte Polytechnique of Mons, Belgium Dr John DAUGMAN - University of Cambridge Professor Geoffrey HINTON - Toronto University Dr Robert JOHNSTON - GEC Hirst Research Centre, Hertfordshire Professor Michael JORDAN - MIT, Boston, Massachusetts Dr Michael LYNCH - Cambridge Neurodynamics Ltd Dr David MACKAY - University of Cambridge Dr Rich SUTTON - GTE Laboratories, Massachusetts Dr Lionel TARASSENKO - University of Oxford PROGRAMME The course will consist of a series of lectures by international experts, interspersed with practical sessions, laboratory tours, poster session, discussion (both formal and informal) and a commercial exhibition. All sessions will be covered by comprehensive course notes and subjects will include: Introduction and overview: Connectionist computing: an introduction and overview Programming a neural network Parallel distributed processing perspective Theory and parallels with conventional algorithms Architectures: Pattern processing and generalisation Bayesian methods and non-linear modelling Reinforcement learning neural networks Multiple expert networks Self organising neural networks Feedback networks for optimization Applications: System identifications Time series predictions Learning forward and inverse dynamical models Control of non-linear dynamical systems using neural networks Artificial and biological vision systems Silicon VLSI neural networks Applications to speech recognition Applications to mobile robotics Financial system modelling Applications in medical diagnostics WHO WILL BENEFIT * Engineers, software specialists and those needing to assess the current potential of neural networks * Technical staff requiring an overview of the subject * Individuals who already have expertise in this area and need to keep abreast of recent developments * Those who have recently entered the field and require a complete perspective of the subject * Researchers and academics working within neural computing areas, as well as all those in industry researching and developing applications Some, although not all, of the lectures will involve graduate level mathematical theory. ACADEMIC DIRECTORS The Summer School will be chaired by members of Cambridge University Engineering Department (CUED) who as members of the Speech, Vision and Robotics Group have current research interests as shown: DR MAHESAN NIRANJAN - speech processing and pattern classification DR RICHARD PRAGER - speech and medical applications DR TONY ROBINSON - recurrent networks and speech processing ACADEMIC SPEAKERS PROFESSOR CHRIS BISHOP, Head of the Neural Computing Research Group at Aston University and Chairman of the Neural Computing Applications Forum, is researching statistical pattern recognition. DR HERVE BOURLARD is with Faculte Polytechnique of Mons, Belgium. He has made many contributions in the area of neural networks and speech recognition. DR JOHN DAUGMAN is Lecturer in Artificial Intelligence in the Computer Laboratory at Cambridge University. His areas of research are computational neuroscience, multi-dimensional signal processing, computer vision, statistical pattern recognition, and biological vision. PROFESSOR GEOFFREY HINTON, Professor of Computer Science and Psychology at the University of Toronto, researches learning, perception and symbol processing in neural networks. He was one of the researchers who introduced the back-propagation algorithm that is now widely used for practical applications. PROFESSOR MICHAEL JORDAN is in the Department of Brain & Cognitive Science at MIT. His contributions to the field include the development of probabilistic methods for learning in modular and hierarchical systems, and the development of methods for applying neural networks to system identification and control problems. DR DAVID MACKAY works on Bayesian methods and non-linear modelling at the Cavendish Laboratory. He obtained his PhD in Computation and Neural Systems at California Institute of Technology. DR LIONEL TARASSENKO is with the Department of Engineering Science at the University of Oxford. His specialisms are robotics and the hardware implementation of neural computing. INDUSTRIAL SPEAKERS DR ROBERT JOHNSTON joined GEC Hirst Research Centre in 1989. His current research is on the applications of non-linear control, with emphasis on fuzzy systems and neural networks. DR MICHAEL LYNCH is Managing Director of Cambridge Neuro-dynamics, a company specialising in the practical application of neural network recognition systems for police, transport and security systems. DR RICH SUTTON is with the Adaptive Systems Department of GTE Laboratories near Boston, Massachusetts. His specialisms are reinforcement learning, planning and animal learning behaviours. LABORATORY TOURS Two afternoon tours will provide the opportunity to observe current research in the field of neural network theory and applications. The Neural Networks Group at CUED have been working in this field since 1984. The Group currently consists of 8 staff and 14 research students. `HANDS-ON' SESSIONS An afternoon practical session will offer the chance to experiment with neural network software and develop an understanding of its strengths and weaknesses. A custom designed environment is available to enable the participants to simulate a variety of problems and explore neural solutions. APPLICATIONS DAY Day 4 will be focused on the applications of neural systems featuring presentations from companies which have exploited connectionist solutions in their businesses. This will give delegates invaluable first hand insight into the technical and practical detail of the transition from research to application. It will present a 'world-class' perspective on the relevance of neural design techniques to commercial and industrial requirements. RESEARCH STUDENTS The Engineering and Physical Sciences Research Council are funding a limited number of places for UK, post-graduate research students. The students benefit from the interaction with a wide range of academics and industrialists and have the opportunity to extend their experience and establish links into industry and other institutions. A poster session will present the current research interests of each student, providing an insight into the work of the connectionist groups in higher education institutions across the UK. VENUE AND ACCOMMODATION The Summer School will be held at Emmanuel College, Cambridge. Emmanuel was founded in 1584 and its attractions include the Wren Chapel and beautiful gardens which delegates may enjoy. The Summer School will take place in the new lecture theatre complex. Emmanuel's city centre location provides easy access to shops and services, is on the main bus route from the Rail Station and is 2 minutes walk from Drummer Street Bus Station, which is served by all national and airport services. Accommodation can be arranged for delegates in single study bedrooms with shared facilities at Emmanuel College for 205 pounds for 4 nights to include bed and breakfast, dinner and a Course Dinner. An additional night's accommodation (bed and breakfast only) on Thursday, 27 July is available at 28 pounds. If you would prefer to make your own arrangements please indicate on the registration form and details of local hotels will be sent to you. SUMMER SCHOOL FEES All fees are payable in advance and include a set of course notes and all day-time refreshments. Days 1-4 (Monday-Thursday) - 775 pounds Academic Discounted Rate: (see qualifying note)* Days 1-4 (Monday-Thursday) - 475 pounds *Academic discounts only available if fees are to be paid by an academic institution. Limited number available - please contact the Course Administrator before applying. REGISTRATIONS For applications for EPSRC fully funded studentships please use the form at the end of this message. Otherwise please contact the Cambridge Programme for Industry: By Email: rjs1008 at cus.cam.ac.uk By Post: Registration Administrator, University of Cambridge, Programme for Industry, 1 Trumpington Street, Cambridge, CB2 1QA, UK. By Phone on: +44 (0)1223 302233 By Fax on: +44 (0)1223 301122 All reserved places must be confirmed by returning a registration form to the address shown. Bookings will be confirmed after payment has been received in full. Delegates will only be accepted onto a course if payment has been received in full or an official company order has been received. METHODS OF PAYMENT Payments should be made by: A cheque drawn on a UK bank, VISA or Mastercard/Eurocard, Sterling banker's draft drawn on a UK bank, Crossed international money order, Sterling travellers' cheques Any bank charges arising from international transactions must be met by the delegate. All payments to University of Cambridge must be for the full amount of fees incurred. Personal cheques drawn on banks outside the UK will not be accepted. Please do not send cash. Cheques or orders should be made payable to the 'University of Cambridge-EYA 4814'. CANCELLATIONS Half the registration fee will be returned for bookings cancelled up to one calender month in advance of the course. After this time no fees are returnable. However, substitutions may be made at any time. The Cambridge Programme for Industry reserves the right to pass on any charges levied by a College for cancellation of accommodation and meal bookings. EPSRC FUNDED STUDENTSHIPS A number of Studentships are available for EPSRC-funded, UK registered, post-graduate research students with UK residency. The Studentship covers all course costs and students can attend either Days 1-3 or Days 1-4, with full accommodation packages. Overnight accommodation will not be funded for students resident in or close to Cambridge. Funding is not available for accommodation on Sunday, 23 July, but bed and breakfast in College can be booked at a cost to the student of 28 pounds. Please indicate if you require the extra night's accommodation at the time of application. Wherever possible, students will be funded to their requested level, however it will be necessary to limit the number of students funded for Day 4. Please note that Studentships do not include travel costs. Recipients of 1994 Studentships will not be eligible for 1995 Studentships. HOW TO APPLY FOR AN EPSRC FULLY FUNDED STUDENTSHIP To be considered for a place, please complete the application form below and send a one page summary of current research including how you expect to benefit by attending, a curriculum vitae and a letter of recommendation from your supervisor. The deadline for applications is 19 May 1995. It should be noted that successful applicants will be required to present a poster of their current research or of the research interest of their group. The poster session will be held on Day 1. EPSRC Studentship No. Title (Mr/Miss/Ms) Surname First Name(s) Institution Address Post Code Telephone Fax E-mail: I wish to be considered for an EPSRC Studentship as follows: (please delete those which do not apply) Days 1-3 with 2 nights accommodation package Days 1-3 without accommodation Days 1-4 with 3 nights accommodation package Days 1-4 without accommodation Bed and breakfast only on Sunday, 23 July @ 28 pounds From bishopc at helios.aston.ac.uk Fri Mar 31 15:53:24 1995 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 31 Mar 1995 20:53:24 +0000 Subject: Lectureships in Neural Computing Message-ID: <4008.9503311953@sun.aston.ac.uk> ------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK TWO LECTURESHIPS ---------------- * Full details at http://neural-server.aston.ac.uk// * Applications are invited for two Lectureships within the Department of Computer Science and Applied Mathematics. (These posts are roughly comparable to Assistant Professor positions in North America). Candidates are expected to have excellent academic qualifications and a proven record of research. The appointments will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Successful candidates will be expected to make a substantial contribution to the research activities of the Group in the area of neural computing, or a related area concerned with advanced information processing. Current research activity focusses on principled approaches to neural computing, and ranges from theoretical foundations to industrial and commercial applications. We would be interested in candidates who can contribute directly to this research programme or who can broaden it into related areas, while maintaining the emphasis on theoretically well-founded research. The successful candidates will also be expected to contribute to the undergraduate and/or postgraduate teaching programmes. Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff Chris Bishop Professor David Lowe Professor David Bounds Professor Richard Rohwer Lecturer Alan Harget Lecturer Ian Nabney Lecturer David Saad Lecturer (arrives 1 August) together with the following Research Fellows Chris Williams Shane Murnion Alan McLachlan Huaihu Zhu a full-time computer support assistant, and eleven postgraduate research students. Conditions of Service --------------------- The appointments will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Salaries will be within the lecturer A and B range 14,756 to 25,735, and exceptionally up to 28,756 (UK pounds; these salary scales are currently under review). How to Apply ------------ If you wish to be considered for one of these positions, please send a full CV and publications list, together with the names of 4 referees, to: Professor C M Bishop Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 021 359 3611 ext. 4270 Fax: 021 333 6215 e-mail: c.m.bishop at aston.ac.uk closing date: 19 May 1995 From dave at twinearth.wustl.edu Fri Mar 31 03:48:35 1995 From: dave at twinearth.wustl.edu (David Chalmers) Date: Fri, 31 Mar 95 02:48:35 CST Subject: Penrose symposium Message-ID: <9503310848.AA11316@twinearth.wustl.edu> Over the coming weeks, there will be a symposium on Roger Penrose's recent book SHADOWS OF THE MIND in the electronic journal PSYCHE. There will be ten review articles discussing the issues raised by the book from a variety of perspectives, and Penrose will reply. Authors of the review articles are: Bernard Baars: Psychology, The Wright Institute, Berkeley David Chalmers: Philosophy, Washington University Solomon Feferman: Mathematics, Stanford University Stanley Klein: Vision Sciences, University of California at Berkeley Aaron Klug: Molecular Biology, Cambridge University Tim Maudlin: Philosophy, Rutgers University John McCarthy: Computer Science, Stanford University Daryl McCullough: Computer Science, Odyssey Research Associates Drew McDermott: Computer Science, Yale University Hans Moravec: Robotics, Carnegie Mellon University The articles will appear at a rate of one or two per week, and will be e-mailed to subscribers of the mailing list PSYCHE-L. To subscribe to this mailing list, send e-mail to listserv at iris.rfmh.org, with a single line "SUBSCRIBE PSYCHE-L <your name>". The articles will also be made available on the worldwide web, at http://hcrl.open.ac.uk/psyche.html (this is the PSYCHE home page). Discussion is encouraged on the associated discussion list PSYCHE-D -- to subscribe, "SUBSCRIBE PSYCHE-D" at the address above. David Chalmers. From tom at csc1.prin.edu Fri Mar 31 17:37:50 1995 From: tom at csc1.prin.edu (Tom Fuller) Date: Fri, 31 Mar 1995 16:37:50 -0600 Subject: No subject Message-ID: <199503312238.QAA22381@spectre.prin.edu> The file fuller.thesis.ps.Z is now available for copying from the Neuroprose repository: Supervised Competitive Learning: a technology for pen-based adaptation in real time Thomas H. Fuller, Jr. Computer Science Principia College Elsah, IL 62028 tom at csc1.prin.edu Abstract: The advent of affordable, pen-based computers promises wide application in educational and home settings. In such settings, systems will be regularly employed by a few users (children or students), and occasionally by other users (teachers or parents). The systems must adapt to the writing and gestures of regular users but not lose prior recognition ability. Furthermore, this adaptation must occur in real time not to frustrate or confuse the user, and not to interfere with the task at hand. It must also provide a reliable measure of the likelihood of correct recognition. Supervised Competitive Learning is our technology for the recognition of handwritten symbols. It uses a shifting collection of neural network-based similarity detectors to adapt to the user. We demonstrate that it satisfies the following requirements: 1. Pen-based technology: digitizing display tablet with pen. 2. Low cost: PC-level processor with about 50 MIPS. 3. Wide range of subjects: varying by age, nationality, writing style. 4. Wide range of symbol sets: numerals, alphabetic characters, gestures. 5. Usage: adaptation to regular users; persistent response to occasional users. 6. On-line recognition: both response and adaptation in real time. 7. Self-criticism: reliable measure of likelihood of correct response. 8. Context-free classification: symbol by symbol recognition. SCL successfully recognizes handwritten characters from writers on which it has trained (digits, lowercase, uppercase, and others) at least as well as known current systems (96.5% - 99.2%, depending on character sets). It adapts to its user in real time with a 50 MIPS processor without loss of response to occasional users. Finally, its estimates of its correctness are strongly correlated with actual likelihood of correctness. This is a doctoral dissertation at Washington University in St. Louis. Hardcopies are only available from University Microfilms, Inc. This work was supported by the Kumon Machine Project. ADVISOR: Professor Takayuki Dan Kimura completed December, 1994 Department of Computer Science Washington University Campus Box 1045 One Brookings Drive St. Louis, MO 63130-4899 queries about the work should go to; Thomas H. Fuller, Jr. Computer Science Principia College Elsah, IL 62028 tom at csc1.prin.edu Here's a sample retrieval session: unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive FTP server (Version wu-2.4(2) Mon Apr 18 14:41:30 EDT 1994) ready. Name (archive.cis.ohio-state.edu:me): anonymous 331 Guest login ok, send your complete e-mail address as password. Password: me at here.edu 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd pub/neuroprose/Thesis 250 CWD command successful. ftp> get fuller.thesis.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for fuller.thesis.ps.Z (798510 bytes). 226 Transfer complete. 798510 bytes received in 180 seconds (5 Kbytes/s) ftp> bye 221 Goodbye. unix> uncompress fuller.thesis.ps.Z unix> <send fuller.thesis.ps to favorite viewer or printer>