From gaussier at ensea.fr Wed Jun 1 11:16:48 1994
From: gaussier at ensea.fr (gaussier@ensea.fr)
Date: Wed, 1 Jun 1994 15:16:48 +0000
Subject: No subject
Message-ID: <9406011416.AA02894@aladdin.ensea.fr>

From Perception to Action - PerAc'94 Lausanne
Lausanne, Switzerland, 7-9 September 1994
A State of the Art Conference on Autonomous Robots and Artificial Life
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

PerAc is an international conference intended to promote the study of
autonomous systems interacting with an unknown world. This means that their
control systems should not rely on an "a priori" model of the environment or
on the intervention of an external operator during exploration. The general
theme of the conference is the study of how actions can help to modify
perception in an interesting manner. Emphasis is put on "technological
transfer" from biology (ethology, neurobiology...) and psychology to
engineering. The conference will address theoretical aspects and
applications to the design of an autonomous robot. Presentations will deal
with simple behavioral robots, but also with the interest of collective
intelligence and genetic algorithms. Attention will also be focused on
Neural Networks and building blocks, and on how to use them to realize
complex architectures and "Intelligent Systems". Finally, the cognitive
implications of our approach will be discussed.

--- Ask for the detailed programme booklet, including registration and
--- hotel reservation forms (please provide your complete postal address)
+++ PerAc'94, LAMI-EPFL, CH-1015 Lausanne
+++ Fax ++41 21 693-5263, E-mail perac at di.epfl.ch

-----------------------------
| Short final programme     |    See the Call for Posters and Contest
-----------------------------    near the end of this message

Monday 5, Tuesday 6: Tutorials and Contest

---------- Wednesday 7

*** 1 - Opening session
R. Pfeifer, University Zurich, Switzerland
"From Perception to Action: The Right Direction?"
*** 2 - Collective Intelligence
J.L. Deneubourg, ULB Bruxelles, Belgium
"Trams, Ants and Matches - Decentralization of Transportation Systems"
T. Fukuda et al., University Nagoya, Japan
"Self-Organizing Robotic Systems - Organization and Evolution of Group
Behavior in Cellular Robotics"
H. Asama, Riken Wako, Japan
"Operation of Cooperative Multiple Robots using Communication in a
Decentralized Robotic System"

*** 3 - Simple Behavioral Robots
D. McFarland, Balliol College Oxford, GB
"Animal Robotics - From Self-Sufficiency to Autonomy"
H. Cruse, University Bielefeld, Germany
"Simple Neural Nets for Controlling a 6-legged Walking System"
C. Ferrell, MIT Cambridge, USA
"Robust and Adaptive Locomotion of an Autonomous Hexapod"
T. Shibata et al., MEL Tsukuba, Japan
"Adaptation and Learning for Lasso Robot"
L. Steels, University Bruxelles, Belgium
"Mathematical Analysis of Behavior Systems"

*** Poster and demo session. Contest award, winners' demos

---------- Thursday, Sept 8

J.A. Meyer, ENS Paris, France
"Development, Learning and Evolution in Animats"

*** 4 - Genetic Algorithms
P. Husbands et al., University Brighton, UK
"The Use of Genetic Algorithms for the Development of Sensorimotor Control
Systems"
D. Floreano, University Trieste & F. Mondada, EPFL
"Active Perception and Grasping - An Autonomous Perspective"
M.A. Bedau, Reed College Portland, USA
"The Evolution of Sensory Motor Loops"
S. Nolfi et al., CNR Roma, Italy
"Phenotypic Plasticity in Evolving Neural Networks"
P. Bessiere, E. Dedieu & E. Mazer, CNRS Grenoble, France
"Representing Robot - Environment Interactions Using Probabilities"

*** 5 - Active Perception
N. Franceschini, CNRS Marseille, France
"Mobile Robots with Insect Eyes"
X. Arreguit & E.A. Vittoz, CSEM Neuchâtel, Switzerland
"Perception Systems Implemented in Analog VLSI for Real-Time Applications"
P. Dario, ARTS Lab Pisa, Italy
"Micro Sensors and Microactuators for Robots"
F. Heitger, R.
von der Heydt, ETH Zurich, Switzerland
"A Computational Model of Neural Contour Processing - Figure-Ground
Segregation and Illusory Contours"
T. Pun et al., University of Geneva, Switzerland
"Exploiting Dynamic Aspects of Visual Perception for Object Recognition"

*** Poster and demo session. Banquet

---------- Friday, Sept 9

*** 6 - Building Blocks and Architectures for Designing Intelligent Systems
Y. Burnod, CREARE Paris, France
"Computational Properties of the Cerebral Cortex to Learn Sensorimotor
Programs"
J.S. Albus, W. Rippey, NIST Gaithersburg, USA
"An Architecture and Methodology for Designing Intelligent Machine Systems"
S. Grossberg, University Boston, USA
"Neural Models for Real Time Learning and Adaptation"
Ph. Gaussier, ENSEA Paris, S. Zrehen, EPF Lausanne
"Why Topological Maps are Useful for Learning in an Autonomous System"
C. Engels and G. Schöner, Ruhr-University Bochum, Germany
"Dynamic Field Architecture for Autonomous Systems"
P.F.M.J. Verschure, NeuroScience, La Jolla, USA
"Value-based Learning in Real World Artifact"

*** 7 - Complex Architectures to Control Autonomous Robots
R. Chatila, LAAS-CNRS Toulouse, France
"Control Architectures for Autonomous Mobile Robots"
Y.A. Bachelder & A.M. Waxman, MIT Lexington, USA
"A Neural System for Qualitative Mapping and Navigation in Visual
Environments"
Ph. Gaussier, ENSEA Paris, S. Zrehen, EPF Lausanne, Switzerland
"Complex Neural Architecture for Emerging Cognition Abilities in an
Autonomous System"

*** 8 - Cognition
K. Dautenhahn, GMD Sankt Augustin, Germany
"Trying to Imitate - A Step Towards Releasing Robots from Social Isolation"
J. Taylor, King's College London, UK
"The Relational Mind"
J. Stewart, Inst.
Pasteur Paris, France
"The Implications for Understanding High-Level Cognition of a Grounding in
Elementary Adaptive Systems"

------------------------------------------------------
| PerAc'94 Registration fees                          |
| Before July 1   350 Swiss Francs   Students 170 CHF |
| After July 1    400 Swiss Francs   Students 200 CHF |
------------------------------------------------------

oooooooooooooooooooooooooooooooooooooooooooooo
| PerAc'94 Call for Posters,                 |
| Call for Demonstrations, Call for Videos,  |
| Contest, Tutorials                         |
oooooooooooooooooooooooooooooooooooooooooooooo

-- Posters --
4-page short papers that will be published in the proceedings and presented
as posters are due by June 1, 1994, at the latest. Posters will be displayed
during the whole conference, and enough time will be provided to promote
interaction with the authors. Posters will be refereed; authors will be
informed before July 1. Due to the tight printing schedule, late posters
will not be accepted.

-- Demonstrations --
Robotic demonstrations are considered as posters. In addition to the 4-page
abstract describing the scientific interest of the demonstration, the
submission should include a 1-page statement of the required demonstration
space and support. Last-minute informal demonstrations will be accepted if
they fall within the scope of the conference.

-- Videos --
5-minute video clips have to be submitted in Super-VHS or VHS (preferably
PAL; NTSC gives poorer quality). Tapes, together with a 2-page description,
should be submitted before July 15, 1994.

-- Contest --
A robotic contest will be organized the day before the conference. Teams
participating in the contest must be announced before July 15, and will be
able to follow the conference free of charge.

-- Tutorials --
Three tutorials will be organized in parallel on Monday 5 and Tuesday 6.
Neural Networks (in French), organized by C. Lehmann, Mantra-EPFL
VLSI for Perception, organized by X.
Arreguit, CSEM, Neuchâtel
Artificial Life, organized by E. Sanchez, EPFL

For further information:
PerAc'94, LAMI-EPFL, CH-1015 Lausanne
Fax ++41 21 693-5263, E-mail perac at di.epfl.ch

From giles at research.nj.nec.com Wed Jun 1 12:28:52 1994
From: giles at research.nj.nec.com (Lee Giles)
Date: Wed, 1 Jun 94 12:28:52 EDT
Subject: summer position
Message-ID: <9406011628.AA15007@fuzzy>

SUMMER STUDENT OPPORTUNITY
NEC RESEARCH INSTITUTE
PRINCETON NJ

Due to an unexpected opportunity, NEC Research Institute in Princeton, N.J.
has an immediate opening for a 3-month summer research and programming
position. The research emphasis will be on recurrent neural networks and
machine learning methods applied to natural language processing. The
successful candidate will have knowledge of machine learning and neural
network methodologies plus excellent coding skills in the C/Unix
environment. Interested applicants should send their resumes by mail, fax,
or email to the address below.

Dr. C. Lee Giles or Dr. Sandiway Fong
NEC Research Institute
4 Independence Way
Princeton, N.J. 08540
FAX: 609-951-2482
EMAIL: sandiway at research.nj.nec.com

Applicants must show documentation of eligibility for employment. Because
this is a summer position, the only expenses to be paid will be salary. NEC
is an equal opportunity employer.

-- C.
Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540 / 609-951-2642 / Fax 2482
==

From williabv at sun.aston.ac.uk Wed Jun 1 13:10:04 1994
From: williabv at sun.aston.ac.uk (williabv)
Date: Wed, 1 Jun 1994 17:10:04 +0000
Subject: papers available by ftp
Message-ID: <5966.9406011610@sun.aston.ac.uk>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/williams.wcci94.ps.Z
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/williams.ecal_93.ps.Z

------------------------------------------------------------------------------
The following two papers have been placed in the Neuroprose repository:

pub/neuroprose/williams.wcci94.ps.Z

Improving Classification Performance in the Bumptree Network by Optimising
Topology with a Genetic Algorithm.
Williams, B. V., Bostock, R. T. J., Bounds, D. G. and Harget, A. J.
Submitted to IEEE Evolutionary Computation 1994, part of WCCI '94.

ABSTRACT: The Bumptree is a binary tree of Gaussians which partitions a
Euclidean space. The leaf layer consists of a set of local linear
classifiers, and the whole system can be trained in a supervised manner to
form a piecewise linear model. In this paper a Genetic Algorithm (GA) is
used to optimise the topology of the tree. We discuss the properties of the
genetic coding scheme, and argue that the GA/bumptree does not suffer from
the same scaling problems as other GA/neural-net hybrids. Results on test
problems, including a non-trivial classification task, are encouraging,
with the GA able to discover topologies which give improved performance
over those generated by a constructive algorithm.

------------------------------------------------------------------------------
pub/neuroprose/williams.ecal_93.ps.Z

Learning and Evolution in Populations of Backprop Networks
Williams, B.V. and Bounds, D. G.
Presented at ECAL '93 in Brussels.
ABSTRACT: This paper describes some simple Artificial Life experiments based
on an earlier paper by Parisi, Nolfi and Cecconi (1991) in which an MLP
controls a simple animat (simulated animal). A GA is used to optimise
connection weights to produce basic food-gathering behaviour. We discuss
the merits of the approach, and suggest a simpler explanation of certain
observed behaviours than that offered by the original authors.

------------------------------------------------------------------------------
How to retrieve and print the papers - an example session:

% ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version wu-2.4(2) Mon Apr 18 14:41:30 EDT 1994) ready.
Name (archive.cis.ohio-state.edu:myname): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password: myname at mysite
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
250-Please read the file README
250-  it was last modified on Fri Jul 2 09:04:46 1993 - 334 days ago
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get williams.wcci94.ps.Z
[wait while the file is downloaded]
ftp> quit
221 Goodbye.
% uncompress williams.wcci94.ps.Z
% lp williams.wcci94.ps

From geva at fit.qut.edu.au Thu Jun 2 02:23:06 1994
From: geva at fit.qut.edu.au (Shlomo Geva)
Date: Thu, 2 Jun 1994 16:23:06 +1000 (EST)
Subject: ANZIIS 94 2nd Call for Papers
Message-ID: <199406020623.QAA22614@ocean.fit.qut.edu.au>

A non-text attachment was scrubbed...
Name: not available Type: text Size: 7719 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/799994d6/attachment.ksh From venkat at occam.informatik.uni-tuebingen.de Thu Jun 2 09:57:34 1994 From: venkat at occam.informatik.uni-tuebingen.de (Venkat Ajjanagadde) Date: Thu, 2 Jun 94 15:57:34 +0200 Subject: Paper available by ftp Message-ID: <9406021357.AA01049@occam.informatik.uni-tuebingen.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/ajjanagadde.aaai94.ps.Z --------------------------------------------- The following paper is available via anonymous ftp from the neuroprose archive: File: ajjanagadde.aaai94.ps.Z 6 pages Unclear Distinctions Lead to Unnecessary Shortcomings: Examining the Rule vs Fact, Role vs Filler, and Type vs Predicate Distinctions from a Connectionist Representation and Reasoning Perspective [To appear in Proc. of the AAAI-94] Venkat Ajjanagadde Wilhelm-Schickard Institute Universitaet Tuebingen, Germany abstract: This paper deals with three distinctions pertaining to knowledge representation, namely, the rules vs facts distinction, roles vs fillers distinction, and predicates vs types distinction. Though these distinctions may indeed have some intuitive appeal, the exact natures of these distinctions are not entirely clear. This paper discusses some of the problems that arise when one accords these distinctions a prominent status in a connectionist system by choosing the representational structures so as to reflect these distinctions. The example we will look at in this paper is the connectionist reasoning system developed by Ajjanagadde and Shastri (Ajjanagadde & Shastri, 1991; Shastri & Ajjanagadde, 1993). Their system performs an interesting class of inferences using activation synchrony to represent dynamic bindings. The rule/fact, role/filler, type/predicate distinctions figure predominantly in the way knowledge is encoded in their system. 
We will discuss some significant shortcomings this leads to. Then, we will propose a much more uniform scheme for representing knowledge. The resulting system enjoys some significant advantages over Ajjanagadde & Shastri's system, while retaining the idea of using synchrony to represent bindings. To retrieve the compressed postscript file, do the following: unix> ftp archive.cis.ohio-state.edu ftp> login: anonymous ftp> password: [your_full_email_address] ftp> cd pub/neuroprose ftp> binary ftp> get ajjanagadde.aaai94.ps.Z ftp> bye unix> uncompress ajjanagadde.aaai94.ps.Z unix> lpr ajjanagadde.aaai94.ps (or however you print postscript) From bishopc at helios.aston.ac.uk Fri Jun 3 09:31:21 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 3 Jun 1994 13:31:21 +0000 Subject: Paper available by ftp Message-ID: <24937.9406031231@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/bishop.novelty.ps.Z The following technical report is available by anonymous ftp. ------------------------------------------------------------------------ NOVELTY DETECTION AND NEURAL NETWORK VALIDATION Chris M Bishop Neural Computing Research Group Aston University Birmingham, B4 7ET, U.K. email: c.m.bishop at aston.ac.uk Neural Computing Research Group Report: NCRG/4289 (Accepted for publication in IEE Proceedings) Abstract One of the key factors limiting the use of neural networks in many industrial applications has been the difficulty of demonstrating that a trained network will continue to generate reliable outputs once it is in routine use. An important potential source of errors arises from novel input data, that is input data which differ significantly from the data used to train the network. In this paper we investigate the relationship between the degree of novelty of input data and the corresponding reliability of the outputs from the network. 
We describe a quantitative procedure for assessing novelty, and we
demonstrate its performance using an application involving the monitoring
of oil flow in multi-phase pipelines.

--------------------------------------------------------------------
ftp instructions:

% ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get bishop.novelty.ps.Z
ftp> bye
% uncompress bishop.novelty.ps.Z
% lpr bishop.novelty.ps

--------------------------------------------------------------------
Professor Chris M Bishop           Tel. +44 (0)21 359 3611 x4270
Neural Computing Research Group    Fax. +44 (0)21 333 6215
Dept. of Computer Science          c.m.bishop at aston.ac.uk
Aston University
Birmingham B4 7ET, UK
--------------------------------------------------------------------

From cnna at tce.ing.uniroma1.it Fri Jun 3 06:26:57 1994
From: cnna at tce.ing.uniroma1.it (cnna@tce.ing.uniroma1.it)
Date: Fri, 3 Jun 1994 12:26:57 +0200
Subject: CNNA-94 Call for Papers
Message-ID: <9406031026.AA16107@tce.ing.uniroma1.it>

***********************************************************************
*                           C N N A - 9 4                             *
***********************************************************************

Third IEEE International Workshop on
Cellular Neural Networks and their Applications
CNNA-94
Rome, Italy, December 18-21, 1994

CALL FOR PAPERS

ORGANIZED BY
Department of Electronic Engineering
"La Sapienza" University of Rome, Italy

CO-SPONSORED BY
IEEE Circuits & Systems Society
IEEE Region 8
Italian National Research Council (CNR)

WITH THE COOPERATION of the following IEEE local Sections: Central &
Southern Italy, Northern Italy, Czech Republic, Hungary, Turkey.

SCIENTIFIC COMMITTEE
V. Cimagalli (Italy) - chairman
N.N. Aizenberg (Ukraine)
L.O. Chua (USA)
A. Dmitriev (Russia)
K. Halonen (Finland)
M. Hasler (Switzerland)
J.
Herault (France)
J.L. Huertas (Spain)
J.A. Nossek (Germany)
S. Jankowski (Poland)
T. Roska (Hungary)
M. Salerno (Italy)
M. Tanaka (Japan)
J. Taylor (Great Britain)
J. Vandewalle (Belgium)
X. Wu (P.R. China)

INVITATION
The Workshop is the third such gathering of scientists and engineers
dedicated to CNN research. The first two CNNA meetings showed that the CNN
is a powerful paradigm with important possibilities of application, and
fostered wide interest in the scientific and industrial community. CNN
theory is now reaching maturity; therefore, this Workshop will be
structured in a new way, and will feature the following main topics of
interest:

1) CNN Universal Chip prototypes are expected to be demonstrated in
operation;
2) Real-life applications of CNN chips will also be demonstrated, in
diverse fields such as medical diagnosis and robotics;
3) Distinguished speakers will give keynotes on the latest findings. Prof.
L.O. Chua will give the inaugural lecture on the topic: "The CNN Universal
Chip: Dawn of a New Computer Paradigm". Lectures have been invited from
Profs. J. Hamori and F. Werblin on related topics of neurobiology, and from
Prof. A. Csurgay on "Toward the Use of Quantum Devices in a Cellular
Structure";
4) A round-table discussion, chaired by Dr. C. Lau, will be organized on
the topic: "Trends and Applications of the CNN Paradigm";
5) Authors are encouraged to submit a more detailed version of their papers
to the upcoming special issue on CNNs of the International Journal of
Circuit Theory and Applications, edited by Profs. T. Roska and J.
Vandewalle (deadline: January 1995).

TOPICS OF INTEREST include, but are not limited to:
- Basic Theory: stability, sensitivity, properties and theoretical
limitations, comparison with different neural network paradigms,
steady-state, oscillatory, and chaotic dynamics.
- Architectures: cloning templates or non-homogeneous weights, continuous
or discrete time, layered systems, nonlinear and/or delayed interactions.
- Learning and Design: algorithms, generalization, information storage, CAD
tools, design by examples, synthesis.
- Hardware Implementation: analog and digital ICs, optical realizations,
discrete-component dedicated circuits, dedicated simulation circuitry.
- Applications: signal processing, system modelling, CAM, classifiers.

ORGANIZING COMMITTEE
V. Cimagalli, chairman
M. Balsi
A. Londei
P. Tognolatti

CORRESPONDENCE should be addressed to
CNNA-94
Dipartimento di Ingegneria Elettronica
via Eudossiana, 18
00184 Roma
Italy
tel.: +39-6-44585836
fax: +39-6-4742647
e-mail: cnna at tce.ing.uniroma1.it

VENUE
Sessions will be held at "La Sapienza" University of Rome between the
morning of Dec. 19 and the afternoon of Dec. 21. A welcome cocktail will be
offered to participants in the evening of Dec. 18, and a banquet on Dec.
20. Assistance with accommodation will be provided by American Express (see
attached form).

PROSPECTIVE AUTHORS are encouraged to submit a 3-page extended summary of
their work in 4 copies (mail or fax, no electronic submission available),
written in English, to the above address. Accepted contributions will have
to be extended to a maximum of 6 pages, according to the style described in
the authors' kit to be sent with the notification of acceptance.

SCHEDULE is as follows:
Extended summary to be received by       Sep. 2
Notification of acceptance mailed by     Oct. 10
Camera-ready paper to be received by     Nov.
11

-------------------------------------------------------------------------------
REGISTRATION FORM
(mail to CNNA-94, Dipartimento di Ingegneria Elettronica, via Eudossiana,
18, Roma, Italy, I-00184, or fax to +39-6-4742647)

title:____________
last name:_______________________________________
first name:______________________________________
institution:_____________________________________
address:
street:__________________________________________
city:____________________________________________
state/country:___________________________________
zip code:______________
tel:_____________________________________________
fax:_____________________________________________
e-mail:__________________________________________

[] I intend to submit (have submitted) a paper
title:___________________________________________
_________________________________________________

spouse/guest:
first name:______________________________________
last name:_______________________________________

-------------------------------------------------------------------------------
REGISTRATION FEES
Registration fees may be paid: (check one)
[] by bank swift transfer to: Banca di Roma, ag. 158, via Eudossiana, 18,
Roma, Italy, I-00184. Bank codes: ABI 3002-3, CAB 03380-3, account no.
503/34, payable to: "Sezione Italia Centro-Sud dell'IEEE - CNNA'94",
stating clearly your name (copy enclosed)
[] by Italian personal cheque payable to "Sezione Italia Centro-Sud
dell'IEEE - CNNA'94" (non-transferable, sent with this form by
"Assicurata")

Fees are in Italian Lire; they include participation in all sessions,
proceedings, coffee breaks between sessions, welcome cocktail, and banquet.
Please check where applicable:

                        BEFORE NOV. 11     AFTER NOV. 11
FULL                    [] Itl. 500,000    [] Itl. 550,000
IEEE MEMBER*            [] Itl. 400,000    [] Itl. 450,000
*(no.__________________)
FULL-TIME STUDENT**     [] Itl. 100,000    [] Itl. 200,000
**(please enclose letter of certification from department chairperson)
SPOUSE/GUEST:
WELCOME COCKTAIL        [] Itl.
40,000
BANQUET                 [] Itl. 120,000
[] I need a receipt of payment

-------------------------------------------------------------------------------
ACCOMMODATION FORM
(mail to American Express Co., S.p.A., Business Meetings & Convention
Dept., Piazza di Spagna, 38, Rome, Italy, I-00187, or fax to
+39-6-67642699)

title:____________
last name:_______________________________________
first name:______________________________________
institution:_____________________________________
address:
street:__________________________________________
city:____________________________________________
state/country:___________________________________
zip code:______________
tel:_____________________________________________
fax:_____________________________________________

spouse/guest:
first name:______________________________________
last name:_______________________________________

-------------------------------------------------------------------------------
ACCOMMODATION FEES
Accommodation fees may be paid: (check one)
[] by bank swift transfer to: Banco di Sicilia, ag. 15, Piazzale Ardigò,
43/45, Rome, Italy, I-00142. Bank codes: ABI J01020, CAB 03215, account no.
410158991, payable to American Express (copy enclosed)
[] by Italian personal cheque payable to American Express
(non-transferable, sent by "Assicurata" with this form)
[] by American Express card

Accommodation at Hotel Palatino (****), close to the Workshop venue, is
offered at special rates for participants in CNNA-94. Reservations are
guaranteed when booking before Sep. 20; after this date they will be
confirmed on a space-availability basis. The following rates include room
and breakfast:

           3 NIGHTS        EACH ADDITIONAL NIGHT
SINGLE     Itl. 250,000    Itl. 86,000
DOUBLE     Itl. 380,000    Itl. 135,000

Please fill in:
arrival date:_______________________
departure date:_____________________
total nights:_____
[] single room deposit: Itl. 100,000
[] double room deposit: Itl.
150,000, shared with _____________________________

Please charge my American Express Card for the above stated amount
No._________________________________
Expiry date:___________________________
Signature:_____________________________

-------------------------------------------------------------------------
A POSTSCRIPT COPY OF THIS CALL FOR PAPERS CAN BE OBTAINED BY ANONYMOUS FTP
AS FOLLOWS:
ftp volterra.science.unitn.it (130.186.34.16)
login as "anonymous" with your e-mail address as password
cd pub/cells
binary
get cnna.cfp.ps.Z
bye
then uncompress the file (uncompress cnna.cfp.ps.Z) and print.

From ingber at alumni.caltech.edu Mon Jun 6 04:44:30 1994
From: ingber at alumni.caltech.edu (Lester Ingber)
Date: Mon, 06 Jun 1994 01:44:30 -0700
Subject: Research Opportunities Parallelizing ASA and PATHINT
Message-ID: <199406060844.BAA03619@alumni.caltech.edu>

Research Opportunities Parallelizing ASA and PATHINT

I am looking for one to several people with experience parallelizing C
code, e.g., on Crays, to work on parallelizing two specific algorithms: (a)
Adaptive Simulated Annealing (ASA), and (b) an algorithm to calculate the
time development of multivariable nonlinear Fokker-Planck-type systems,
using a powerful non-Monte Carlo path-integral algorithm (PATHINT). Some
code and papers dealing with these algorithms can be obtained from
ftp.alumni.caltech.edu [131.215.139.234] in the /pub/ingber directory.

I am PI of an award of Cray time on an NSF supercomputer, and have ported
these codes successfully onto a C90. However, I am short of time to further
optimize these codes, which is an essential requirement before doing
production runs on C90 and T3D Crays. If necessary, I will do this work
myself, but I would rather share the work, experience, and research with
other interested people who also can expedite these projects. All code will
remain under my copyright under the GNU General Public License (GPL), i.e.,
the least restrictive Library GPL.
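[For readers unfamiliar with the family of algorithms mentioned above: the
following is a minimal, generic simulated-annealing sketch of my own, not
Ingber's ASA code (which uses an adaptive, parameter-wise temperature
schedule); all names and settings here are illustrative assumptions.]

```python
import math
import random

def anneal(f, x0, t0=10.0, alpha=0.995, iters=2000, seed=0):
    """Minimize f by a generic Metropolis annealing loop (toy example)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        # Propose a move whose size shrinks with the temperature.
        cand = x + rng.gauss(0.0, max(t, 0.01))
        fc = f(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc <= fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= alpha  # geometric cooling schedule
    return best_x, best_f

x, fmin = anneal(lambda v: v * v, x0=5.0)
```

ASA differs mainly in adapting the temperature and step distribution per
parameter, which is what makes it attractive for the multi-variable systems
described above.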
There are several immediate projects that are just waiting for detailed calculations, using codes which already run on SPARCstations, but need the power of supercomputers for production runs. All results of these studies will be published in peer-reviewed scientific journals, and only active participants on these projects will be co-authors on these papers. Examples of these projects include: neuroscience statistical mechanics of neocortical interactions (SMNI) EEG correlates of behavioral states short-term memory modeling realistic chaos + noise modeling financial applications 2- and 3-state term-structure security calculations testing of trading rules nonlinear modeling persistence of chaos in the presence of moderate noise If you are interested, please send me a short description of projects you have worked on, and how many hours/week you are prepared to commit to these projects for at least a period of 6-12 months. Lester || Prof. Lester Ingber || || Lester Ingber Research || || P.O. Box 857 E-Mail: ingber at alumni.caltech.edu || || McLean, VA 22101 Archive: ftp.alumni.caltech.edu:/pub/ingber || From mb at archsci.arch.su.EDU.AU Mon Jun 6 00:13:59 1994 From: mb at archsci.arch.su.EDU.AU (mb@archsci.arch.su.EDU.AU) Date: Mon, 6 Jun 1994 14:13:59 +1000 Subject: Paper in NeuroProse: Time Delay Radial Basis Functions. Message-ID: <9406060413.AA13495@assynt> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/berthold.tdrbf-icnn94.ps.Z The following paper was placed in the NeuroProse archive (Thanks to Jordan for maintaining this valuable service!) to appear in: Proc. ICNN'94, Orlando, Florida 4 pages, sorry no hardcopies available. A Time Delay Radial Basis Function Network for Phoneme Recognition Michael R. Berthold ABSTRACT: This paper presents the Time Delay Radial Basis Function Network (TDRBF) for recognition of phonemes. The TDRBF combines features from Time Delay Neural Networks (TDNN) and Radial Basis Functions (RBF). 
The ability to detect acoustic features and their temporal relationships
independent of position in time is inherited from the TDNN. The use of RBFs
leads to shorter training times and fewer parameters to adjust, which makes
it easier to apply the TDRBF to new tasks. The recognition of three
phonemes, with about 750 training and testing tokens each, was chosen as an
evaluation task. The results suggest performance equivalent to the TDNN
presented by Waibel et al., but the TDRBF requires much less training time
to reach good performance and, in addition, gives a clear indication when
the minimum error is reached, so no danger of overtraining exists.

INSTRUCTIONS FOR RETRIEVAL:

% ftp archive.cis.ohio-state.edu
ftp> anonymous
ftp> (your eMail-address)
ftp> binary
ftp> cd pub/neuroprose
ftp> get berthold.tdrbf-icnn94.ps.Z
ftp> quit
% uncompress berthold.tdrbf-icnn94.ps.Z
% lpr berthold.tdrbf-icnn94.ps
(or however you print on your system)

-------------------------------------------------------------------------
Michael R. Berthold                 Phone: +61 2 692 3549
Key Centre of Design Computing      Fax: +61 2 692 3031
University of Sydney                Internet: mb at archsci.arch.su.edu.au
NSW 2006 Australia                  or: berthold at fzi.de
-------------------------------------------------------------------------

From mario at physics.uottawa.ca Mon Jun 6 10:58:05 1994
From: mario at physics.uottawa.ca (Mario Marchand)
Date: Mon, 6 Jun 94 11:58:05 ADT
Subject: Neural Net and PAC learning papers available by anonymous ftp
Message-ID: <9406061458.AA01644@physics.uottawa.ca>

The following papers are now available by anonymous ftp from the host
"Dirac.physics.uottawa.ca" in the "pub" directory.
Mario Marchand
mario at physics.uottawa.ca

****************************************************************
FILE: COLT93.ps.Z
TITLE: Average Case Analysis of the Clipped Hebb Rule for Nonoverlapping
Perceptron Networks
AUTHORS: Mostefa Golea, Mario Marchand

Abstract
We investigate the clipped Hebb rule for learning different multilayer
networks of nonoverlapping perceptrons with binary weights and zero
thresholds when the examples are generated according to the uniform
distribution. Using the central limit theorem and very simple counting
arguments, we calculate exactly its learning curves (i.e., the
generalization rates as a function of the number of training examples) in
the limit of a large number of inputs. We find that the learning curves
converge exponentially rapidly to perfect generalization. These results are
very encouraging given the simplicity of the learning rule. The analytic
expressions of the learning curves are in excellent agreement with the
numerical simulations, even for moderate values of the number of inputs.
*******************************************************************

******************************************************************
FILE: EuroCOLT93.ps.Z
TITLE: On Learning Simple Deterministic and Probabilistic Neural Concepts
AUTHORS: Mostefa Golea, Mario Marchand

Abstract
We investigate the learnability, under the uniform distribution, of
deterministic and probabilistic neural concepts that can be represented as
simple combinations of nonoverlapping perceptrons with binary weights. Two
perceptrons are said to be nonoverlapping if they do not share any input
variables. In the deterministic case, we investigate, within the
distribution-specific PAC model, the learnability of perceptron decision
lists and generalized perceptron decision lists.
In the probabilistic case, we adopt the approach of {\em learning with a model of probability} introduced by Kearns and Schapire~\cite{KS90} and Yamanishi~\cite{Y92}, and investigate a class of concepts we call {\em probabilistic majorities} of nonoverlapping perceptrons. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept.
***********************************************************************

***********************************************************************
FILE: NeuralComp.ps.Z
TITLE: On Learning Perceptrons with Binary Weights
AUTHORS: Mostefa Golea, Mario Marchand

Abstract

We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of {\em product distributions\/}. The sample complexity of this algorithm is $O((n/\epsilon)^4 \ln(n/\delta))$ and its running time increases only linearly with the number of training examples. The algorithm does not try to find a hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron based on various probabilistic estimates obtained from the training examples. We show that, in the restricted case of the {\em uniform distribution\/} and zero threshold, the algorithm reduces to the well-known {\em clipped Hebb rule\/}. We calculate exactly the average generalization rate (\ie\ the learning curve) of the algorithm, under the uniform distribution, in the limit of an infinite number of dimensions. We find that the error rate decreases {\em exponentially\/} as a function of the number of training examples. Hence, the average case analysis gives a sample complexity of $O(n\ln(1/\epsilon))$; a large improvement over the PAC learning analysis.
The analytical expression of the learning curve is in excellent agreement with the extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise.
***********************************************************************

**********************************************************************
FILE: NIPS92.ps.Z
TITLE: On Learning $\mu$-Perceptron Networks with Binary Weights
AUTHORS: Mostefa Golea, Mario Marchand, Thomas R. Hancock

Abstract

Neural networks with binary weights are very important from both the theoretical and practical points of view. In this paper, we investigate the learnability of single binary perceptrons and unions of $\mu$-binary-perceptron networks, {\em i.e.\/} an ``OR'' of binary perceptrons where each input unit is connected to one and only one perceptron. We give a polynomial time algorithm that PAC learns these networks under the uniform distribution. The algorithm is able to identify both the network connectivity and the weight values necessary to represent the target function. These results suggest that, under reasonable distributions, $\mu$-perceptron networks may be easier to learn than fully connected networks.
**********************************************************************

**********************************************************************
FILE: Network93.ps.Z
TITLE: On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists
AUTHORS: Mario Marchand, Mostefa Golea

Abstract

In this paper, we take a close look at the problem of learning simple neural concepts under the uniform distribution of examples. By simple neural concepts we mean concepts that can be represented as simple combinations of perceptrons (halfspaces). One such class of concepts is the class of halfspace intersections.
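In the restricted setting the abstracts above describe (uniform distribution over inputs, zero threshold), the clipped Hebb rule takes a very compact form: each binary weight is the sign of the empirical correlation between its input and the label. The following is a minimal sketch, not the authors' code; the sizes and variable names are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 101                                  # number of inputs (odd, to avoid sign ties)
w_target = rng.choice([-1, 1], size=n)   # hypothetical binary-weight target perceptron

# Examples drawn from the uniform distribution over {-1, +1}^n.
m = 2000
X = rng.choice([-1, 1], size=(m, n))
y = np.sign(X @ w_target)                # labels from the zero-threshold target

# Clipped Hebb rule: accumulate the Hebbian updates y*x, then clip to +/-1.
w_learned = np.where(X.T @ y >= 0, 1, -1)

# Generalization rate on fresh examples.
X_test = rng.choice([-1, 1], size=(1000, n))
agreement = np.mean(np.sign(X_test @ w_learned) == np.sign(X_test @ w_target))
print(f"test agreement: {agreement:.3f}")
```

Per the learning-curve analysis above, the agreement should approach 1 exponentially quickly as m grows.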
By formalizing the problem of learning halfspace intersections as a {\em set covering problem}, we are led to consider the following sub-problem: given a set of non linearly separable examples, find the largest linearly separable subset of it. We give an approximation algorithm for this NP-hard sub-problem. Simulations, on both linearly and non linearly separable functions, show that this approximation algorithm works well under the uniform distribution, outperforming the Pocket algorithm used by many constructive neural algorithms. Based on this approximation algorithm, we present a greedy method for learning halfspace intersections. We also present extensive numerical results that strongly suggest that this greedy method learns halfspace intersections under the uniform distribution of examples. Finally, we introduce a new class of simple, yet very rich, neural concepts that we call {\em neural decision lists}. We show how the greedy method can be generalized to handle this class of concepts. Both greedy methods, for halfspace intersections and neural decision lists, were tried on real-world data with very encouraging results. This shows that these concepts are not only important from the theoretical point of view, but also in practice.
***************************************************************************

From fu at cis.ufl.edu Mon Jun 6 15:14:32 1994
From: fu at cis.ufl.edu (fu@cis.ufl.edu)
Date: Mon, 6 Jun 1994 15:14:32 -0400
Subject: a special issue
Message-ID: <199406061914.PAA07659@whale.cis.ufl.edu>

CALL FOR SUBMISSIONS

Special Issue of the Journal ``Knowledge-Based Systems''
Theme: ``Knowledge-Based Neural Networks''
Guest Editor: LiMin Fu (University of Florida, USA)

A. Background:

Knowledge-based neural networks are concerned with the use of domain knowledge to determine the initial structure of the neural network. Such constructions have drawn increasing attention recently.
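For reference, the Pocket algorithm that the Marchand & Golea simulations above outperform runs ordinary perceptron updates while keeping the best weight vector seen so far "in the pocket". Below is a rough sketch of the ratchet variant, which rates candidates by full training accuracy; the data and all names are invented for illustration, and this is not the authors' implementation.

```python
import numpy as np

def pocket(X, y, epochs=50, seed=0):
    """Perceptron updates, keeping the best-scoring weight vector in the pocket."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w = np.zeros(n)
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for i in rng.permutation(m):
            if np.sign(X[i] @ w) != y[i]:        # misclassified example
                w = w + y[i] * X[i]              # standard perceptron step
                acc = np.mean(np.sign(X @ w) == y)
                if acc > best_acc:               # ratchet: pocket the best so far
                    best_acc, best_w = acc, w.copy()
    return best_w, best_acc

# Non-linearly-separable toy data: a target perceptron plus label noise.
rng = np.random.default_rng(1)
X = rng.choice([-1, 1], size=(200, 21))
y = np.sign(X @ rng.choice([-1, 1], size=21))
y[:10] *= -1                                     # flip 10 labels to break separability
w, acc = pocket(X, y)
print(f"pocket accuracy on training set: {acc:.2f}")
```

On non-separable data the plain perceptron cycles forever; pocketing the best candidate is what makes the procedure usable as a subroutine in constructive algorithms.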
The rudimentary idea behind knowledge-based neural networks is simple: the knowledge-based approach models what we know, and the neural-network approach handles what we are ignorant or uncertain of. Furthermore, there is an urgent need for a bridge between symbolic artificial intelligence and neural networks. The main goal of this special issue is to explore the relationship between them for engineering intelligent systems.

B. Contents:

Examples of specific research include but are not limited to:
(1) How do we build a neural network based on prior knowledge?
(2) How do neural heuristics improve the current model for a particular problem (e.g., classification, planning, signal processing, and control)?
(3) How does knowledge in conjunction with neural heuristics contribute to machine learning?
(4) What is the emergent behavior of a hybrid system?
(5) What are the fundamental issues behind the combined approach?

C. Schedule:

Four copies of a full-length paper should be submitted, by 1 September 1994, to

Dr. LiMin Fu
Department of Computer and Information Sciences
301 CSE
University of Florida
Gainesville, FL 32611
USA

All papers will be subject to stringent review.

From marcio at dca.fee.unicamp.br Mon Jun 6 18:33:25 1994
From: marcio at dca.fee.unicamp.br (Prof. Marcio Luiz de Andrade Netto)
Date: Mon, 6 Jun 94 17:33:25 EST
Subject: NNACIP '94 - Call for papers
Message-ID: <9406062033.AA21625@dca.fee.unicamp.br>

PLEASE, DISTRIBUTE FREELY.
====================================================================
Preliminary Call for Papers

NNACIP '94
AMCA / IEEE International Workshop on Neural Networks Applied to Control and Image Processing
November 7-11, 1994, in Mexico City

The purpose of this international workshop is to provide an opportunity for the leading researchers of the world to broaden their horizons through discussions on various applications of Neural Networks (NN) to control and signal processing.
This workshop is also intended to provide an international forum for presentation of concepts and applications of NN in both: i) design, implementation, supervision, and monitoring of control systems, and ii) digital image processing, 2D & 3D pattern recognition.

Main Topics
* modeling, estimation and identification
* intelligent control systems applications
* autonomous robots
* process monitoring and supervision
* fault detection and emergency control
* intelligent motion control
* adaptive learning control systems
* manufacturing processes
* image filtering
* image segmentation
* image compression
* feature extraction
* 2D and 3D pattern recognition
* pattern classification
* multi-sensor integration
* signal processing

Organized by Asociación de México de Control Automático, AMCA

International Program Committee
Toshio Fukuda (J), Chairman
Francisco Cervantes (MEX), Editor
Luis Alonso Romero (E)
Marcio Andrade Netto (BRA)
Danilo Bassi (RCH)
G.A. Bekey (USA)
J.C. Bertrand (F)
Pierre Borne (F)
Aldo Cipriano (RCH)
Ricardo Carelli (ARG)
Jesús Figueroa-Nazuno (MEX)
Petros Ioannou (USA)
Rafael Kelly (MEX)
Dominique Lamy (F)
Nuria Piera-Carrete (E)
M. Sakawa (J)
Edgar Sánchez-Sinencio (USA)
Carme Torras (E)
Manuel Valenzuela (MEX)

Sponsors
* IEEE Systems, Man & Cybernetics
* IEEE Neural Networks Council
* CONACyT
* CINVESTAV
* IPN
* UNAM
* UVM
* ANIAC
* UAM-Atz

Organizing Committee
J. H. Sossa-Azuela, President
A.J. Malo-Tamayo, Secretary
A. Ramírez-Treviño, Treasurer
E. Gortcheva
R.A. Garrido
P. Wiederhold
C. Verde
S. González-Brambila
D. Rosenblueth
E. Sánchez-Camperos

Important Dates
Submission of extended abstracts and proposals for invited sessions: June 20, 94
Acceptance notification: July 15, 94
Camera-ready papers due: Sept. 30, 94
Workshop: Nov. 7-11, 94

Information
J.M. Ibarra-Zannatha (General Chairman)
Cinvestav-IPN, Control Automático
A.P. 14-740, 07000 México, D.F., MEXICO
Tel.: +52-5-754.7601, Fax.: +52-5-586.6290 & 752.0290
E-mail: nnacip at mvax1.red.cinvestav.mx

Prospective authors are invited to submit papers in any technical area listed above, sending three copies of a 3-4 page extended summary including problem statement, methodology, figures and references.
======================================================================
Marcio L. Andrade Netto
School of Electrical Engineering
State University of Campinas
BRASIL
marcio at fee.unicamp.br
=====================================================================

From D.C.Mitchell at exeter.ac.uk Wed Jun 8 07:11:44 1994
From: D.C.Mitchell at exeter.ac.uk (D.C.Mitchell@exeter.ac.uk)
Date: Wed, 8 Jun 1994 12:11:44 +0100 (BST)
Subject: Cognitive Science position at Exeter
Message-ID: <4145.9406081111@scraps>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 1530 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/9a371e24/attachment.ksh

From georg at ai.univie.ac.at Wed Jun 8 07:07:06 1994
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Wed, 8 Jun 1994 13:07:06 +0200
Subject: WWW Neural Network Page at OFAI
Message-ID: <199406081107.AA01688@chicago.ai.univie.ac.at>

=========================================================
Announcing a New WWW Page on Neural Networks at the
Austrian Research Institute for Artificial Intelligence
=========================================================

As part of the World-Wide-Web (WWW) server of the Department of Medical Cybernetics and Artificial Intelligence of the University of Vienna and the Austrian Research Institute for Artificial Intelligence (OFAI)

URL: http://www.ai.univie.ac.at

a home page specifically dedicated to our research and services in neural networks has been established:

URL: http://www.ai.univie.ac.at/oefai/nn/nngroup.html

It gives a description of the research currently being undertaken at the
Neural Network Group of the OFAI, which consists of the following four domains:
- practical applications of neural networks
- theoretical research on neural networks
- cognitive modeling with neural networks
- neural network simulation tools

A complete list of publications is given, many of which can be directly retrieved as postscript files. Among the local services provided you are welcome to use our

========================================
bibliographical search utility BIBLIO,
========================================

which permits you to search among 3500 books and papers in the field of neural networks. The search key can be an author's name and/or a string contained in the title. The basis for the search is an on-line data base containing books, reports, journal and conference references, such as IEEE and INNS neural network conferences. This data base is constantly being extended.

Finally, links to other neural network WWW pages, as well as data and report repositories, are also given. Enjoy!

P.S.: If you have questions or difficulties, mail to georg at ai.univie.ac.at

From wahba at stat.wisc.edu Wed Jun 8 13:20:12 1994
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Wed, 8 Jun 94 12:20:12 -0500
Subject: ftp-papers:learn,tune
Message-ID: <9406081720.AA03139@hera.stat.wisc.edu>

The following technical reports are available by anonymous ftp.

nonlin-learn.ps.Z
G. Wahba, Generalization and Regularization in Nonlinear Learning Systems, UW-Madison Statistics Dept TR 921, May, 1994. To appear in the Handbook of Brain Theory, Michael Arbib, Ed. Relates feedforward neural nets, radial basis functions and smoothing spline ANOVA within the length limitations of the Handbook.

tuning-nwp.ps.Z
G. Wahba, D. R. Johnson, F. Gao and J. Gong, Adaptive tuning of numerical weather prediction models: Part I: randomized GCV and related methods in three and four dimensional data assimilation. UW-Madison Statistics Dept TR 920, April, 1994, submitted.
Shows how to tune the bias-variance tradeoff and other tradeoffs via generalized cross validation (gcv), unbiased risk (ubr), and generalized maximum likelihood (gml) with very large data sets in the context of regularized function estimation. Shows how to use randomized trace estimation to compute gcv and ubr, and in particular how to use these randomized estimates to decide when to stop the iteration when large variational problems are solved iteratively. Written in the language of data assimilation in numerical weather prediction, but the methods may be of interest in machine learning.
--------------------------------------------------------------------
ftp instructions: fn = nonlin-learn or tuning-nwp

% ftp ftp.stat.wisc.edu
Name: anonymous
Password: your email address
ftp> cd pub/wahba
ftp> binary
ftp> get fn.ps.Z
ftp> bye
% uncompress fn.ps.Z
% lpr fn.ps
--------------------------------------------------------------------
Grace Wahba
Statistics Dept
University of Wisconsin-Madison
wahba at stat.wisc.edu
--------------------------------------------------------------------
Get the file Contents for other papers of interest.

From bishopc at helios.aston.ac.uk Wed Jun 8 14:53:15 1994
From: bishopc at helios.aston.ac.uk (bishopc)
Date: Wed, 8 Jun 1994 18:53:15 +0000
Subject: Neural Computing Applications Forum
Message-ID: <17961.9406081753@sun.aston.ac.uk>

-------------------------------------------------------------------
NEURAL COMPUTING APPLICATIONS FORUM

The Neural Computing Applications Forum (NCAF) was formed in 1990 and has since come to provide the principal mechanism for exchange of ideas and information between academics and industrialists in the UK on all aspects of neural networks and their practical applications. NCAF organises four 2-day conferences each year, which are attended by around 100 participants.
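The randomized trace estimates referred to in the tuning-nwp report above are typically of the Hutchinson type: for a matrix A available only through matrix-vector products, E[z'Az] = tr(A) when z has independent +/-1 entries. A small self-contained illustration follows; the matrix and sizes are invented, not taken from the report.

```python
import numpy as np

def randomized_trace(matvec, n, num_probes=200, seed=0):
    """Hutchinson estimator: tr(A) ~ average of z^T (A z) over +/-1 probe vectors z."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)           # only needs A through matvec
    return total / num_probes

# Sanity check on a random symmetric matrix with a known trace.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2
est = randomized_trace(lambda z: A @ z, 100)
print(f"exact trace {np.trace(A):.2f}, randomized estimate {est:.2f}")
```

In the gcv/ubr setting the matrix is the influence matrix of the regularized fit, available only implicitly through the solver, which is exactly when a matvec-only trace estimate pays off.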
NCAF has its own international journal, `Neural Computing and Applications', which is published quarterly by Springer-Verlag, and it produces a quarterly newsletter, `Networks'. Forthcoming conferences will be held at Oxford University (30 June and 1 July, 1994), Aston University (14 and 15 September, 1994) and London (11 and 12 January, 1995). The programme for the Oxford conference is given below.

Annual membership rates (Pounds Sterling):
Company: 250
Individual: 140
Student: 55

Membership includes free registration at all four annual conferences, a subscription to the journal `Neural Computing and Applications', and a subscription to `Networks'.

For further information:
Tel: +44 (0)784 477271
Fax: +44 (0)784 472879
email: c.m.bishop at aston.ac.uk

Chris M Bishop (Chairman, NCAF)
--------------------------------------------------------------------
NCAF Two-Day Conference:
PRACTICAL APPLICATIONS AND TECHNIQUES OF NEURAL NETWORKS
30 June and 1 July 1994
St Hugh's College, Oxford

30 June 1994
------------
Tutorial: Bayes for Beginners (An introduction to Bayesian methods for neural networks)
Chris M Bishop, Aston University

Invited Talk: Medical Applications of Neural Networks
Lionel Tarassenko, Oxford University

Workshop: Key Issues in the Application of Neural Networks
Discussion leaders: Lionel Tarassenko, Oxford University; Steve Roberts, Oxford University; Andy Wright, British Aerospace; Peter Cowley, Rolls Royce

Invited Talk: Is Statistics Plus Neural Networks More Than the Sum of its Parts?
Brian Ripley, Oxford University

1 July 1994
-----------
Solving Literary Mysteries using Neural Computation
Robert Matthews, Aston University

Water Quality Monitoring
Peter Smith, Loughborough University

Intelligent Hybrid Systems
Suram Goonatilake, University College London

Alarm Monitoring in Intensive Care
Lorraine Dodd, Neural Solutions

Applying Neural Networks to Machine Health Monitoring
Peter Smith, Sunderland University

Fault Diagnosis on Telephone Networks
Andy Chaskell, British Telecom

A Learning System for Visual Tracking of Object Motion
Andrew Blake, Oxford University

Modelling Conditional Distributions
Chris Bishop, Aston University

Neural Networks for Spacecraft Control
Andy Wright, British Aerospace

Neural Networks for Speaker Recognition
Steve Frederikson, Oxford University
--------------------------------------------------------------------
Professor Chris M Bishop         Tel. +44 (0)21 359 3611 x4270
Neural Computing Research Group  Fax. +44 (0)21 333 6215
Dept. of Computer Science        c.m.bishop at aston.ac.uk
Aston University
Birmingham B4 7ET, UK
--------------------------------------------------------------------

From tgd at chert.CS.ORST.EDU Wed Jun 8 15:48:02 1994
From: tgd at chert.CS.ORST.EDU (Tom Dietterich)
Date: Wed, 8 Jun 94 12:48:02 PDT
Subject: Postdoctoral Position: Applying Machine Learning to Ecosystem Modeling
Message-ID: <9406081948.AA01142@edison.CS.ORST.EDU>

Postdoctoral Position: Applying Machine Learning to Ecosystem Modeling

Complex ecosystem models are calibrated by manually fitting them to available data sets. This is time-consuming, and it can result in overfitting of the models to the data. We are applying machine learning methods to automate this calibration and thereby improve the reliability and statistical validity of the resulting models. Our ecosystem model--MAPSS--predicts amounts and types of vegetation that will grow under global warming climate scenarios.
An important goal of global change research is to incorporate such vegetation models into existing ocean-atmosphere physical models. Under NSF funding, we are seeking a Post-Doc to assume a major role in carrying out this research. Components of the research involve (a) representing ecosystem models declaratively, (b) implementing gradient and non-gradient search techniques for parameter fitting, (c) implementing parallel algorithms for running and fitting the ecosystem model, and (d) conducting basic research on issues of combining prior knowledge with data to learn effectively.

The ideal candidate will have a PhD in computer science or a closely related discipline, with experience in neural networks, simulated annealing (and similar search procedures), knowledge representation, and parallel computing. The candidate must know, or be eager to learn, some basic plant physiology and soil hydrology. Computational resources for this project include a 16-processor 1-Gflop Meiko multicomputer and a 128-processor CNAPS neurocomputer.

Applicants should send a CV, summary of research accomplishments, sample papers, and 3 letters of reference to

Thomas G. Dietterich
303 Dearborn Hall
Department of Computer Science
Oregon State University
Corvallis, OR 97331
tgd at cs.orst.edu

Principal investigators:
Thomas G. Dietterich, Department of Computer Science
Ron Nielson, US Forest Service

OSU is an Affirmative Action/Equal Opportunity Employer and complies with Section 504 of the Rehabilitation Act of 1973. OSU has a policy of being responsive to the needs of dual-career couples.

Closing Date: July 5, 1994

From leow at cs.utexas.edu Wed Jun 8 16:23:47 1994
From: leow at cs.utexas.edu (Wee Kheng Leow)
Date: Wed, 8 Jun 1994 15:23:47 -0500
Subject: new dissertation available
Message-ID: <199406082023.PAA10635@coltexo.cs.utexas.edu>

FTP-host: cs.utexas.edu
FTP-filename: pub/neural-nets/papers/leow.diss.tar

The following dissertation is available through anonymous ftp.
It is also available on the WWW from the UTCS Neural Nets Research Group home page http://www.cs.utexas.edu/~sirosh/nn.html under High-Level Vision publications. It contains 198 pages.
-----------------
VISOR: Learning Visual Schemas in Neural Networks for Object Recognition and Scene Analysis

Wee Kheng Leow
Department of Computer Sciences
The University of Texas at Austin

Abstract

This dissertation describes a neural network system called VISOR for object recognition and scene analysis. The research with VISOR aims at three general goals: (1) to contribute to building robust, general vision systems that can be adapted to different applications, (2) to contribute to a better understanding of the human visual system by modeling high-level perceptual phenomena, and (3) to address several fundamental problems in neural network implementation of intelligent systems, including resource-limited representation, and representing and learning structured knowledge. These goals lead to a schema-based approach to visual processing, and focus the research on the representation and learning of visual schemas in neural networks.

Given an input scene, VISOR focuses attention at one component of an object at a time, and extracts the shape and position of the component. The schemas, represented in a hierarchy of maps and connections between them, cooperate and compete to determine which one best matches the input. VISOR keeps shifting attention to other parts of the scene, reusing the same schema representations to identify the objects one at a time, eventually recognizing what the scene depicts. The recognition result consists of labels for the objects and the entire scene. VISOR also learns to encode the schemas' spatial structures through unsupervised modification of connection weights, and reinforcement feedback from the environment is used to determine whether to adapt existing schemas or create new schemas to represent novel inputs.
VISOR's operation is based on cooperative, competitive, and parallel bottom-up and top-down processes that seem to underlie many human perceptual phenomena. Therefore, VISOR can provide a computational account of many such phenomena, including shifting of attention, the priming effect, perceptual reversal, and circular reaction, and may lead to a better understanding of how these processes are carried out in the human visual system. Compared to traditional rule-based systems, VISOR shows remarkable robustness of recognition, and is able to indicate the confidence of its analysis as the inputs differ increasingly from the schemas. With such properties, VISOR is a promising first step towards a general vision system that can be used in different applications after learning the application-specific schemas.
---------------
The dissertation is contained in a tar file called leow.diss.tar, which consists of 5 compressed ps files, with a total of 198 pages. To retrieve the files, do the following:

unix> ftp cs.utexas.edu
Name: anonymous
Password:
ftp> binary
ftp> cd pub/neural-nets/papers
ftp> get leow.diss.tar
ftp> quit
unix> tar xvf leow.diss.tar
unix> uncompress *.ps.Z
unix> lpr *.ps

If the ps files are too large, you may have to use lpr with the -s option.

From opitz at cs.wisc.edu Wed Jun 8 17:09:53 1994
From: opitz at cs.wisc.edu (Dave Opitz)
Date: Wed, 8 Jun 94 16:09:53 -0500
Subject: Papers available by ftp
Message-ID: <9406082109.AA20718@flanders.cs.wisc.edu>

The following three papers have been placed in an FTP repository at the University of Wisconsin (abstracts appear at the end of the message). These papers are also available on WWW via Mosaic. Type "Mosaic ftp://ftp.cs.wisc.edu/machine-learning/shavlik-group" or "Mosaic http://www.cs.wisc.edu/~shavlik/uwml.html" (for our group's "home page").

Opitz, D. W. & Shavlik, J. W. (1994). "Using genetic search to refine knowledge-based neural networks."
Proceedings of the 11th International Conference on Machine Learning, New Brunswick, NJ.

Craven, M. W. & Shavlik, J. W. (1994). "Using sampling and queries to extract rules from trained neural networks." Proceedings of the 11th International Conference on Machine Learning, New Brunswick, NJ.

Maclin, R. & Shavlik, J. W. (1994). "Incorporating advice into agents that learn from reinforcements." Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94), Seattle, WA. (A longer version appears as UW-CS TR 1227.)
----------------------
To retrieve the papers by ftp:

unix> ftp ftp.cs.wisc.edu
Name: anonymous
Password: (Your e-mail address)
ftp> binary
ftp> cd machine-learning/shavlik-group/
ftp> get opitz.mlc94.ps.Z
ftp> get craven.mlc94.ps.Z
ftp> get maclin.aaai94.ps.Z    (or get maclin.tr94.ps.Z)
ftp> quit
unix> uncompress opitz.mlc94.ps.Z    (similarly for the other 2 papers)
unix> lpr opitz.mlc94.ps
==============================================================================
Using Genetic Search to Refine Knowledge-Based Neural Networks
David W. Opitz    Jude W. Shavlik

Abstract: An ideal inductive-learning algorithm should exploit all available resources, such as computing power and domain-specific knowledge, to improve its ability to generalize. Connectionist theory-refinement systems have proven to be effective at utilizing domain-specific knowledge; however, most are unable to exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses genetic algorithms to broaden the type of networks seen during its search. It does this by using (a) the domain theory to help create an initial population and (b) crossover and mutation operators specifically designed for knowledge-based networks.
Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.
==============================================================================
Using Sampling and Queries to Extract Rules from Trained Neural Networks
Mark W. Craven    Jude W. Shavlik

Abstract: Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M-of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches.
==============================================================================
Incorporating Advice into Agents that Learn from Reinforcements
Rich Maclin    Jude W. Shavlik

Abstract: Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present an approach that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple programming language.
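An M-of-N rule of the kind extracted by the Craven & Shavlik method above fires when at least M of its N antecedents hold, subsuming both AND (N-of-N) and OR (1-of-N). A minimal checker follows; the rule and feature names are made up for illustration and are not from the paper.

```python
def m_of_n(m, antecedents, example):
    """True if at least m of the (feature, required_value) tests hold for the example."""
    hits = sum(example.get(feature) == value for feature, value in antecedents)
    return hits >= m

# Hypothetical extracted rule: fire if 2 of {color=red, shape=round, size=small} hold.
rule = [("color", "red"), ("shape", "round"), ("size", "small")]
print(m_of_n(2, rule, {"color": "red", "shape": "round", "size": "big"}))      # True
print(m_of_n(2, rule, {"color": "blue", "shape": "square", "size": "small"}))  # False
```

Such rules are a natural target for rule extraction because a single M-of-N rule compactly mimics a thresholded unit whose relevant weights are roughly equal.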
Based on techniques from knowledge-based neural networks, these advice programs are inserted directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that shows our approach leads to statistically significant gains in expected reward. Importantly, the advice improves the expected reward regardless of the stage of training at which it is given.

From atul at nynexst.com Wed Jun 8 18:21:44 1994
From: atul at nynexst.com (Atul Chhabra)
Date: Wed, 8 Jun 94 18:21:44 EDT
Subject: Summer Job Opening
Message-ID: <9406082221.AA21468@texas.nynexst.com>

SUMMER STUDENT POSITION
NYNEX Science & Technology
White Plains, NY

We have an immediate need for a summer student for up to a three-month period. The position is for research and programming in Hidden Markov Models and the Viterbi algorithm, for incorporating contextual information into a handprinted character recognition system. Candidates must have knowledge of HMMs, the Viterbi algorithm, and neural networks. They must possess excellent C programming skills in the Unix environment.

NYNEX is the Regional Bell Operating Company (RBOC) for New York and New England. NYNEX Science & Technology is the R&D division of NYNEX. Applicants should send their resumes by mail, fax, or email to the address below. Email resumes must be plain text only.
----------
Atul K.
Chhabra                      Phone: (914)644-2786
NYNEX Science & Technology   Fax: (914)644-2404
500 Westchester Avenue       Internet: atul at nynexst.com
White Plains, NY 10604

From stefano at kant.irmkant.rm.cnr.it Thu Jun 9 10:45:05 1994
From: stefano at kant.irmkant.rm.cnr.it (stefano@kant.irmkant.rm.cnr.it)
Date: Thu, 9 Jun 1994 14:45:05 GMT
Subject: TR available
Message-ID: <9406091445.AA12074@kant.irmkant.rm.cnr.it>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/nolfi.plastic.ps.Z
FTP-filename: /pub/neuroprose/nolfi.erobot.ps.Z

Two papers are now available for copying from the Neuroprose repository. Hardcopies are not available. Comments are welcome.

PHENOTYPIC PLASTICITY IN EVOLVING NEURAL NETWORKS
/pub/neuroprose/nolfi.plastic.ps.Z    12 pages
To appear in: Proceedings of the First Conference From Perception to Action, 5-9 September, Lausanne.

HOW TO EVOLVE AUTONOMOUS ROBOTS: DIFFERENT APPROACHES IN EVOLUTIONARY ROBOTICS
/pub/neuroprose/nolfi.erobot.ps.Z    9 pages
To appear in: Proceedings of the ALIFE IV Conference, Cambridge, MA, 7-9 July.

PHENOTYPIC PLASTICITY IN EVOLVING NEURAL NETWORKS
Stefano Nolfi, Orazio Miglino, and Domenico Parisi

We present a model based on genetic algorithms and neural networks. The neural networks develop on the basis of an inherited genotype but they show phenotypic plasticity, i.e. they develop in ways that are adapted to the specific environment. The genotype-to-phenotype mapping is not abstractly conceived as taking place in a single instant but is a temporal process that takes a substantial portion of an individual's lifetime to complete and is sensitive to the particular environment in which the individual happens to develop. Furthermore, the respective roles of the genotype and of the environment are not decided a priori but are part of what evolves. We show how such a model is able to evolve control systems for autonomous robots that can adapt to different types of environments.
HOW TO EVOLVE AUTONOMOUS ROBOTS: DIFFERENT APPROACHES IN EVOLUTIONARY ROBOTICS
Stefano Nolfi, Dario Floreano, Orazio Miglino, and Francesco Mondada

A methodology for evolving the control systems of autonomous robots has not yet been well established. In this paper we will show different examples of applications of evolutionary robotics to real robots by describing three different approaches to developing neural controllers for mobile robots. In all the experiments described, real robots are involved and are indeed the ultimate means of evaluating the success and the results of the procedures employed. Each approach will be compared with the others and the relative advantages and drawbacks will be discussed. Last, but not least, we will try to tackle a few important issues related to the design of the hardware and of the evolutionary conditions in which the control system of the autonomous agent should evolve.

Stefano Nolfi
Institute of Psychology
National Research Council
Viale Marx, 15
00137 Rome, Italy
e-mail: stefano at kant.irmkant.rm.cnr.it

From marwan at sedal.sedal.su.OZ.AU Sat Jun 11 11:22:47 1994
From: marwan at sedal.sedal.su.OZ.AU (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: +61-2 692 2240)
Date: Sat, 11 Jun 94 10:22:47 EST
Subject: ACNN'95 Call for Papers
Message-ID: <9406110022.AA18145@sedal.sedal.su.OZ.AU>

C A L L   F O R   P A P E R S
= = = = = = = = = = = = = = = =

A C N N ' 9 5
SIXTH AUSTRALIAN CONFERENCE ON NEURAL NETWORKS
6th - 8th FEBRUARY 1995
University of Sydney
Sydney, Australia

in co-operation with
Asian-Pacific Neural Network Assembly (APNNA)
Australian Neuroscience Society (ANS)
Institute of Electrical & Electronics Engineers (IEEE)
Institution of Radio & Electronics Engineers (IREE)
Systems Engineering & Design Automation Laboratory (SEDAL)

The sixth Australian conference on neural networks will be held in Sydney on February 6th - 8th 1995 at the University of Sydney.
ACNN'95 is the annual national meeting of the Australian neural network community. It is a multi-disciplinary meeting and seeks contributions from Neuroscientists, Engineers, Computer Scientists, Mathematicians, Physicists and Psychologists. ACNN'95 will feature a number of invited speakers. The program will include presentations and poster sessions. Proceedings will be printed and distributed to the attendees. The posters will be displayed for a significant period of time, and time will be allocated for authors to be present at their poster in the conference program. Software demonstrations will be possible for authors. SparcStations, DEC Stations and PCs will be available in an adjacent laboratory for these demonstrations. Invited Keynote Speaker ACNN'95 will feature a number of keynote speakers, including Dr Larry Jackel, AT&T Bell Laboratories and Dr Edmund Rolls, Oxford University. Pre-Conference Workshops Pre-Conference Workshops covering basic introductions to neural computing, neuroscience, applications and implementations will be held. A separate flyer describing the workshops will be issued at a later date. Workshop on Industrial Application of Neural Computing The Warren Centre for Advanced Engineering will be organising a workshop on industrial application of neural computing. A separate information sheet will be issued. Submission Categories The major categories for paper submissions include, but are not restricted to: 1. Neuroscience: Integrative function of neural networks in vision, Audition, Motor, Somatosensory and Autonomic functions; Synaptic function; Cellular information processing; 2. Theory: Learning; Generalisation; Complexity; Scaling; Stability; Dynamics; 3. Implementation: Hardware implementation of neural nets; Analog and digital VLSI implementation; Optical implementation; 4. Architectures and Learning Algorithms: New architectures and learning algorithms; Hierarchy; Modularity; Learning pattern sequences; Information integration; 5. 
Cognitive Science and AI: Computational models of perception and pattern recognition; Memory; Concept formation; Problem solving and reasoning; Visual and auditory attention; Language acquisition and production; Neural network implementation of expert systems; 6. Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, Machine vision; Motor control; Robotics; Forecast; Medical. Initial Submission of Papers As this is a multi-disciplinary meeting, papers are required to be comprehensible to an informed researcher outside the particular stream of the author in addition to the normal requirements of technical merit. Papers should be submitted as close as possible to final form and must not exceed four single A4 pages (2-column format). The first page should include the title and abstract, and should leave space for, but not include the authors' names and affiliations. A cover page should be supplied giving the title of the paper, the name and affiliation of each author, together with the postal address, the e-mail address, and the phone and fax numbers of a designated contact author. The type font should be no smaller than 10 point except in footnotes. A serif font such as Times or New Century Schoolbook is preferred. Four copies of the paper and the front cover sheet should be sent to: Agatha Shotam ACNN'95 Secretariat University of Sydney Department of Electrical Engineering NSW 2006 Australia Each manuscript should clearly indicate submission category (from the six listed) and author preference for oral or poster presentations. This initial submission must be on hard copy to reach us by Friday, 2 September 1994. Submission Deadlines Friday, 2 September 1994 Deadline for receipt of paper submissions Friday, 28 October 1994 Notification of acceptance Friday, 2 December 1994 Final papers in camera-ready form for printing Venue Peter Nicol Russell Building, University of Sydney, Australia. 
ACNN'95 Organising Committee General Chairs: Marwan Jabri University of Sydney Cyril Latimer University of Sydney Technical Program Chair: Bill Gibson University of Sydney Technical Program Advisors: Max Bennett University of Sydney Bill Wilson University of New South Wales Len Hamey Macquarie University Program Committee: Peter Bartlett Australian National University Max Bennett University of Sydney Robert Bogner University of Adelaide Tony Burkitt Australian National University Terry Caelli Curtin University of Technology Simon Carlile University of Sydney Margaret Charles University of Sydney George Coghill University of Auckland Tom Downs University of Queensland Barry Flower University of Sydney Tom Gedeon University of New South Wales Bill Gibson University of Sydney Simon Goss Defence Science & Tech Org Graeme Halford University of Queensland Len Hamey Macquarie University Andrew Heathcote University of Newcastle Marwan Jabri University of Sydney Andrew Jennings Royal Melbourne Institute of Tech Nikola Kasabov University of Otago Adam Kowalczyk Telecom Australia Cyril Latimer University of Sydney Philip Leong University of Sydney Iain Macleod Australian National University M Palaniswami University of Melbourne Nick Redding Defence Science & Tech Org M Srinivasan Australian National University Ah Chung Tsoi University of Queensland Janet Wiles University of Queensland Robert Williamson Australian National University Bill Wilson University of New South Wales International Advisory Board: Yaser Abu-Mostafa Caltech Josh Alspector Bellcore Shun-ichi Amari University of Tokyo, Japan Michael Arbib University of Southern California Scott Fahlman Carnegie Mellon University Hans Peter Graf AT&T Bell Laboratories Yuzo Hirai University of Tsukuba Masumi Ishikawa Kyushu Institute of Technology Larry Jackel AT&T Bell Laboratories John Lazzaro University of Berkeley Yann LeCun AT&T Bell Laboratories Richard Lippmann MIT Lincoln Lab Nelson Morgan Intl Computer Sci Inst, 
Berkeley Alan Murray University of Edinburgh Fernando Pineda Johns Hopkins University Terry Sejnowski The SALK Institute Lionel Tarassenko Oxford University Eric Vittoz CSEM, Switzerland David Zipser University of California, San Diego Registrations The registration fee to attend ACNN'95 is: Full Time Students A$110.00 Academics A$250.00 Other A$350.00 A discount of 20% applies for advance registration. Registration forms must be posted before December 2nd, 1994, to be entitled to the discount. To be eligible for the Full Time Student rate, a letter from the Head of Department as verification of enrolment is required. Accommodation Delegates will have to make their accommodation arrangements directly with the hotel of their choice. A list of accommodation close to the conference venue is provided below. Rates quoted are on a `per night' basis. Rates may be subject to change without prior notice. University Motor Inn 25 Arundel St Glebe NSW 2037 Rates: A$92 (Single) Tel: +61 (2) 660 5777 A$95 (Double) Fax: +61 (2) 660 2929 Camperdown Towers Motel 144 Mallet St Camperdown NSW 2050 Rates: A$65 (Single) Tel: +61 (2) 519 5211 A$75 (Double) Fax: +61 (2) 519 9179 Metro Motor Inn 1 Meagher St Chippendale NSW 2008 Tel: +61 (2) 319 4133 Rates: A$68 (Single/Double) Tel: +61 (2) 698 7665 Sydney Travellers Rest Hotel 37 Ultimo Rd Rates: A$27 students (Dormitory) Haymarket NSW 2000 A$39 adults (Twin Share) Tel: +61 (2) 281 5555 A$62 (Single) Fax: +61 (2) 281 2666 Alishan International House 100 Glebe Point Rd Glebe NSW 2037 Rates: A$18 (Dormitory) Tel: +61 (2) 566 4048 A$55 (Single) Fax: +61 (2) 525 4686 A$70 (Double) ------------------------------------------------------------------------------- ACNN'95 Registration Form Title & Name: ___________________________________________________________ Organisation: ___________________________________________________________ Department: _____________________________________________________________ Occupation:
_____________________________________________________________ Address: ________________________________________________________________ State: ____________________ Post Code: _____________ Country: ___________ Tel: ( ) __________________________ Fax: ( ) _____________________ E-mail: _________________________________________________________________ [ ] Find enclosed a cheque for the sum of @ : ______________________ [ ] Charge my credit card for the sum of # :________________________ Mastercard/Visa# Number : ______________________________________ Valid until: ________ Signature: __________________ Date: ______ ------------------------------------------------------------------------------ To register, please fill in this form and return it together with payment to : Mrs Agatha Shotam Secretariat ACNN'95 Department of Electrical Engineering The University of Sydney NSW 2006 AUSTRALIA ------------------------------------------------------------------------------ [ ] I would like to attend the Pre-Conference Workshops * [ ] I would like to attend the Industrial Application of Neural Computing Workshop * * Registration fees for the Pre-Conference Workshops and the Industrial Application of Neural Computing Workshop will be determined at a later stage. 
_________________________________________________________________________ @ Registration fees: Before 2 Dec 94 After 2 Dec 94 Full Time Students A$ 88.00 A$110.00 Academics A$200.00 A$250.00 Other A$280.00 A$350.00 # Please encircle type of card From jakobc at Mordred.DoCS.UU.SE Mon Jun 13 04:59:34 1994 From: jakobc at Mordred.DoCS.UU.SE (Jakob Carlstr|m) Date: Mon, 13 Jun 94 10:59:34 +0200 Subject: Research position available Message-ID: <9406130859.AAMordred00548@Mordred.DoCS.UU.SE> RESEARCH POSITION AVAILABLE: Hardware implementation of artificial neural networks A research position is available in the field of hardware implementation of artificial neural networks, at the Department of Computer Systems, Uppsala University, Sweden. The position is open to a scientist with a solid background in neural networks and familiarity with analog and digital electronic circuit construction as well as VLSI design. A candidate of postdoctoral or equivalent status will be preferred. The position will be in a new project for developing neural network algorithms and hardware, aiming at VLSI implementations. The researcher is expected to play a major role in this project. The position is tenable for one year at minimum, possibly longer. Uppsala University is Scandinavia's oldest university, founded in 1477, and offers a stimulating research environment. The Department of Computer Systems conducts research on artificial neural networks, real-time systems and formal methods for concurrent systems. The neural networks group was formed in 1991, and consists of four graduate students supervised by Associate Professor Lars Asplund. We have published reports on neural network-based control of digital telecom networks, and on hardware architectures for neural networks. Further information may be obtained from Associate Professor Lars Asplund, Department of Computer Systems, Uppsala University, Box 325, S-751 05 Uppsala, Sweden; fax +46 18 55 02 25; email asplund at docs.uu.se. 
Applicants should send a full CV, sample papers and the names and addresses of two professional referees to the above address. Closing date: August 8, 1994. From sassk at macaulay-land-use.scot-agric-res-inst.ac.uk Mon Jun 13 07:37:17 1994 From: sassk at macaulay-land-use.scot-agric-res-inst.ac.uk (Jim Kay) Date: Mon, 13 Jun 94 11:37:17 GMT Subject: Stats vs ANNs : A Competition Message-ID: <3352.9406131137@mluri.sari.ac.uk> Statistics vs. Neural Networks A Competition Can artificial neural networks outperform statistical methods in a fair comparison? Finding applications where they can is one of the main objectives of a two-day workshop to be held on April 19-20, 1995 in Edinburgh, Scotland. We invite entries to this competition, which should reach Jim Kay at the address given below by the first of November. The decisions reached will be communicated to applicants by the 15th of January, 1995. The best four entries will be selected and one applicant per entry will be invited to attend the workshop and make an oral presentation of their results; costs of accommodation and travel (within the UK) will be provided subject to certain upper bounds. The other general objectives of the workshop are: to discuss problems of statistical interest within ANN research; to discuss statistical concepts and tools that expand the technology of ANN research; and to enhance collaborative research involving experts from one or both of the two communities. We look forward to receiving your applications, which should include a contact name and address and be no more than 10 typed A4 pages. Jim Kay and Mike Titterington SASS Environmental Modelling Unit Macaulay Land Use Research Institute Craigiebuckler Aberdeen AB9 2QJ Scotland, UK e-mail : j.kay at uk.ac.sari.mluri (within the UK) j.kay at mluri.sari.ac.uk (internet address) Tel. : +224 - 318611 (ext.
2269) Fax : +224 - 208065 From tishby at CS.HUJI.AC.IL Mon Jun 13 10:48:13 1994 From: tishby at CS.HUJI.AC.IL (Tali Tishby) Date: Mon, 13 Jun 1994 17:48:13 +0300 Subject: 12-ICPR: Registration Information Message-ID: <199406131448.AA25912@irs01.cs.huji.ac.il> =============================================================================== 12th ICPR : INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION 9-13 October 1994 Renaissance Hotel, Jerusalem =============================================================================== The 12-ICPR contains about 220 presentations organized in four tracks, with a total of 56 sessions: o COMPUTER VISION AND IMAGE PROCESSING - 25 Sessions o PATTERN RECOGNITION AND NEURAL NETWORKS - 19 Sessions o SIGNAL PROCESSING - 7 Sessions o PARALLEL COMPUTING - 5 Sessions In addition, about 230 papers will be presented in poster form. The program committees did their best to achieve a high-quality and balanced technical program. Combined with the inspiring location in Jerusalem, we are certain that the 12th ICPR will be a rewarding and memorable experience. In addition to the excellent technical program, an exciting range of TUTORIALS, SOCIAL EVENTS, and TOURS will be offered. For example, the BANQUET is planned to be a Bedouin Feast on the shore of the Dead Sea. In the remainder of this mailing, you will find: - 12-ICPR Conference Registration Form (E-Mail registration possible!) - 12-ICPR Hotel Reservation Form (E-Mail reservations possible!) - How to get the TECHNICAL PROGRAM, TUTORIAL PROGRAM, and full REGISTRATION, HOTEL, and TOURING INFORMATION. Read the conditions on registration and hotel reservations, including cancellations, before booking!!! ******************************************************************************* * A note regarding hotel reservations: Jerusalem enjoys more visitors these * * days than in any other period. This causes a shortage of hotel rooms, and you * * are urged to book your hotel as early as possible!
* ******************************************************************************* \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ cut here \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ Date:____________ 12-ICPR CONFERENCE REGISTRATION FORM ************************************ 12-ICPR, 9-13 October 1994 Renaissance Hotel, Jerusalem Last Name _______________ First Name ___________________ Title:___________ Mailing Address __________________________________________________________ __________________________________________________________ City/State/Code _______________________________________ Country _________ Phone _________________ Fax __________________ E-mail ____________________ Registration Fee Schedule: ______________________________ Advance On-Site or IAPR Society and Member Number By Aug. 9 After Aug. 9 Member $395 USD __________ $455 USD __________ Non-member $445 USD __________ $495 USD __________ *Student $195 USD __________ $235 USD __________ One Tutorial $160 USD __________ $200 USD __________ Two Tutorials $270 USD __________ $330 USD __________ *Student - Each Tutorial $ 50 USD __________ $ 50 USD __________ Receptions for accompanying person $ 40 USD __________ $ 40 USD __________ Banquet for students and accompanying persons (12 Oct): Quantity _____ $ 40 USD __________ $ 40 USD __________ Extra Conference Proceedings: Quantity _____ $110 USD __________ Full registration fee includes: all sessions, one copy of proceedings, coffee breaks and all social events. Banquet is not included for students and accompanying persons. *Please bring a proof of full-time student status to the registration desk. Tutorial registration at student's rates is on a waitlist basis only. Morning Tutorials, Sunday, Oct. 9, 1994. Check at most one: ----------------------------------------------------------- ____ A1: O. 
Faugeras - "Invariant Theory for Pattern Recognition" ____ B1: Yann le Cun - "Neural Networks in Pattern Recognition" ____ C1: S. Furui - "Speech Recognition Techniques" ____ D1: T. Bernard, V. Cantoni, and M. Ferretti - "Special Chips for Pattern Recognition and Image Processing" ____ E1: H. Samet - "Geographic Information Systems" Afternoon Tutorials, Sunday, Oct. 9, 1994. Check at most one: ------------------------------------------------------------- ____ A2: R. Haralick - "Image Analysis with Mathematical Morphology" ____ B2: H. Baird - "Document Image Analysis" ____ C2: M. Tekalp - "Digital Video Processing" ____ D2: R. Hummel - "Parallel computing methods for Object Recognition" ____ E2: A. Jain - "Statistical Pattern Recognition" TOTAL PAYMENT for Registration: US$ __________ METHOD OF PAYMENT (Check one): [ ] Check [ ] Bank Transfer [ ] Visa [ ] American Express Card Number _______________________________________ Expiration date _______ Cardholder's name _______________________________________________ Cardholder's signature ___________________________ Date: _______________ - The check should be payable to "12-ICPR" in US dollars. - The Bank Transfer should be made to the credit of the following account. International / 12-ICPR Account Number 412716 Israel Discount Bank, Branch 100 4 Rothschild Blvd. 66881 Tel-Aviv, Israel Please enclose a copy of the transfer document with the registration form. E-Mail this registration form to: icpr at math.tau.ac.il or Mail/Fax to: 12-ICPR, c/o International P.O.Box 29313 61292 Tel-Aviv, Israel Tel: +972-3-510 2538 Fax: +972-3-660 604 \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ cut here \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ Date:_______________ 12-ICPR HOTEL RESERVATION FORM ****************************** 12-ICPR, 9-13 October 1994 Renaissance Hotel, Jerusalem E-MAIL, MAIL, FAX, or phone your reservation by Aug. 
9 to ensure availability and special rates to the hotel of your choice. Hotels in Jerusalem at the time of the conference are usually very busy. We will accept accommodation forms up to the conference, but reservation is guaranteed ONLY for advance registration. ** For more information, including TOURING, ftp the files "registration" ** ** and "tourism" ** Last Name _______________ First Name ___________________ Title:___________ Mailing Address __________________________________________________________ __________________________________________________________ City/State/Code _______________________________________ Country _________ Phone _________________ Fax __________________ E-mail ____________________ I will share a room with: ________________________________________________ Check-in Date ___________ Check-out Date _________________ (______ nights) Please indicate at least three choices for hotel: Use "1" for highest priority, "3" for lowest. A deposit of US$200 per room is needed for categories A/B/C, and a deposit of US$100 per room is needed for D/E. Hotel category Single Double ---------------- ------------- ------------- A [ ] $174-$190 [ ] $187-$209 B1 (Conf. Site) [ ] $100-$118 [ ] $131-$133 B2 [ ] $100-$118 [ ] $131-$133 C1 (Walking Dist.) [ ] $71- $80 [ ] $79-$102 C2 [ ] $71- $80 [ ] $79-$102 D [ ] $59 [ ] $71 E (Rooms in Homes) [ ] $35- $42 [ ] $46- $55 Total Hotel Deposit: US$ ______________ METHOD OF PAYMENT (Check one): [ ] Check [ ] Bank Transfer [ ] Visa [ ] American Express Card Number __________________________________ Expiration date ____________ Cardholder's name _______________________________________________ Cardholder's signature ____________________________ Date: ______________ - The check should be payable to "12-ICPR" in US dollars. - The Bank Transfer should be made to the credit of the following account. International Ltd. Account Number 396699 Israel Discount Bank, Branch 100 4 Rothschild Blvd.
66881 Tel-Aviv, Israel Please enclose a copy of the transfer document with the registration form. E-Mail this registration form to: icpr at math.tau.ac.il or Mail/Fax to: 12-ICPR, c/o International P.O.Box 29313 61292 Tel-Aviv, Israel Tel: +972-3-510 2538 Fax: +972-3-660 604 \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ cut here \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ You can get the following files by FTP or E-Mail: "announcement" - The file including this announcement. "advance-program" - Includes the Advance Technical Program. "tutorials" - Includes the Tutorial Program and Abstracts. "registration" - Includes Registration and Hotel Information. "tourism" - Information on tourist activities (Pre and Post Conference Tours). These files can be retrieved by E-Mail as follows: Send E-Mail to ftpmail at cs.huji.ac.il containing the following lines:
-------
open
cd pub/ICPR
get advance-program
get tutorials
get registration
get tourism
quit
-------
To use anonymous ftp, connect to: ftp.huji.ac.il (user: anonymous. Password: your E-Mail). Once logged on, perform the following ftp commands:
------
cd pub/ICPR
get advance-program
get tutorials
get registration
get tourism
quit
------
---------------------------- Paper versions of the Advance Program will be mailed in late June 1994. If you do not get a copy within a couple of weeks, you can request a copy by sending E-Mail to icpr at math.tau.ac.il. This address should also be used for any other communication regarding the 12-ICPR.
=============================================================================== From chauvet at bmsr14.usc.edu Mon Jun 13 20:41:01 1994 From: chauvet at bmsr14.usc.edu (Gilbert Chauvet) Date: Mon, 13 Jun 94 17:41:01 PDT Subject: PostDoc position Message-ID: <9406140041.AA04698@bmsr14.usc.edu> Post Doc position available in: MATHEMATICAL BIOLOGY Institute of Theoretical Biology, University of ANGERS (FRANCE), beginning the 1st January 1995, for 2 years. Qualification: PhD in Applied Mathematics Project: Modeling in Neurobiology using non-local reaction diffusion equations in general (numerical and theoretical aspects). Methods will be applied to hippocampus. Contact: Pr G.A. Chauvet, IBT e-mail: chauvetg at ibt.univ-angers.fr Phone: (33) 41 72 34 27 Fax: (33) 41 72 34 46 From kzhang at cogsci.UCSD.EDU Mon Jun 13 19:40:48 1994 From: kzhang at cogsci.UCSD.EDU (Kechen Zhang) Date: Mon, 13 Jun 1994 16:40:48 -0700 Subject: tech report available Message-ID: <9406132341.AA23051@cogsci.UCSD.EDU> ftp-host: cogsci.ucsd.edu ftp-filename: /pub/tr/9402.charsys.ps.Z size of uncompressed file: 0.55 Mb printed pages: 21 This is the first UCSD Cognitive Science technical report available by anonymous ftp, thanks for the help from Kathy Farrelly, Paul Maglio, Javier Movellan, Marty Sereno, Mark Wallen and David Zipser. The abstract and the ftp instructions are as follows. 
------------------------------------------------------------------------ Temporal association by Hebbian connections: The method of characteristic system Kechen Zhang June 1994 Report 9402 Department of Cognitive Science University of California, San Diego La Jolla, CA 92093-0515 email: kzhang at cogsci.ucsd.edu Abstract While it is generally accepted that the stability of a static memory pattern corresponds to a certain point attractor in the dynamics of the underlying neural network, when temporal order is introduced and a sequence of memory patterns needs to be retrieved successively in continuous time, it becomes less clear what general method should be used to decide whether the transient dynamics is robust or not. In this paper, it is shown that such a general method can be developed if all the connections in the neural network are Hebbian. This method is readily applied to various structures of coupled networks as well as to the standard temporal association model with asymmetric time-delayed connections. The basic idea is to introduce new variables made of memory-overlap projections with alternating signs, and then circumvent the nonlinearity of the sigmoid function by exploiting the dynamical symmetry inherent in the Hebb rule. The result is a self-contained, low-dimensional deterministic system. A powerful feature of this approach is that it can translate questions about the stability of the sequential memory transitions in the original neural network into questions about the stability of the periodic oscillation in the corresponding characteristic system. This correspondence enables direct, quantitative prediction of the behaviors of the original system, as confirmed by computer simulations on the conjugate networks, the ``tri-synaptic loop'' networks, and the time-delayed network.
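[Editorial illustration, not part of the report: the standard temporal association mechanism the abstract refers to — symmetric Hebbian connections that stabilize each pattern plus asymmetric, time-delayed Hebbian connections that drive transitions between patterns — can be sketched in a few lines. All parameter values below are arbitrary choices for the sketch, not taken from the report.]

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, lam, delay, T = 200, 4, 2.0, 8, 40       # neurons, patterns, transition strength, delay, steps

xi = rng.choice([-1.0, 1.0], size=(p, N))      # random +/-1 memory patterns
W_sym = xi.T @ xi / N                          # symmetric Hebbian term: stabilizes each pattern
W_asym = np.roll(xi, -1, axis=0).T @ xi / N    # asymmetric Hebbian term: maps pattern mu to mu+1 (cyclic)

s = xi[0].copy()                               # start the network at the first pattern
buf = [xi[0].copy()] * delay                   # buffer holding the delayed activity s(t - delay)
overlaps = []
for t in range(T):
    field = W_sym @ s + lam * (W_asym @ buf[0])  # static term plus delayed transition term
    buf = buf[1:] + [s.copy()]                   # advance the delay line
    s = np.where(field >= 0.0, 1.0, -1.0)        # parallel threshold update
    overlaps.append(xi @ s / N)                  # overlap of the state with each stored pattern

m = np.array(overlaps)                         # m[t, mu]: overlap trajectory
print(np.argmax(m, axis=1))                    # dominant pattern at each step
```

With the delayed term stronger than the static one (lam > 1), the network dwells on each pattern for roughly `delay` steps and then hops to the next, so the printed sequence of dominant patterns walks through the patterns in storage order.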
In particular, the conjugate networks (consisting of two Hopfield subnets coupled by asymmetric Hebbian connections) offer a simple but sufficient structure for the storage and retrieval of sequential memory patterns without any additional temporal mechanisms besides the intrinsic dynamics. The same structure can also be used for the recognition of a temporal sequence of sparse memory patterns. Other topics include the storage capacity of the conjugate networks, the exact solution of the limit cycle in the characteristic system, the sequential retrieval at variable speeds, and the problem of equivalence between coupling and time delays. ------------------------------------------------------------------------ To retrieve the compressed PostScript file, do the following:
unix> ftp cogsci.ucsd.edu
ftp> login: anonymous
ftp> password:
ftp> cd pub/tr
ftp> bin
ftp> ls
ftp> get 9402.charsys.ps.Z
ftp> bye
unix> uncompress 9402.charsys.ps.Z
unix> lpr -P -s 9402.charsys.ps (or however you print PostScript)
page numbers: 0 - 20 From P.McKevitt at dcs.shef.ac.uk Tue Jun 14 09:39:34 1994 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Tue, 14 Jun 94 09:39:34 BST Subject: No subject Message-ID: <9406140839.AA27563@dcs.shef.ac.uk> CONFERENCE ANNOUNCEMENT AND PRELIMINARY CALL FOR PAPERS AISB-95: Hybrid Problems, Hybrid Solutions.
============================================ Monday 3rd -- Friday 7th April 1995 Halifax Hall of Residence & Computer Science Department University of Sheffield Sheffield, ENGLAND The Tenth Biennial Conference on AI and Cognitive Science organised by the Society for the Study of Artificial Intelligence and the Simulation of Behaviour Programme Chair: John Hallam (University of Edinburgh) Programme Committee: Dave Cliff (University of Sussex) Erik Sandewall (University of Linkoeping) Nigel Shadbolt (University of Nottingham) Sam Steel (University of Essex) Yorick Wilks (University of Sheffield) Local Organisation: Paul Mc Kevitt (University of Sheffield) The past few years have seen an increasing tendency for diversification in research into Artificial Intelligence, Cognitive Science and Artificial Life. A number of approaches are being pursued, based variously on symbolic reasoning, connectionist systems and models, behaviour-based systems, and ideas from complex dynamical systems. Each has its own particular insight and philosophical position. This variety of approaches appears in all areas of Artificial Intelligence. There is both symbolic and connectionist natural language processing, and both classical and behaviour-based vision research, for instance. While purists from each approach may claim that all the problems of cognition can in principle be tackled without recourse to other methods, in practice (and maybe in theory, also) combinations of methods from the different approaches (hybrid methods) are more successful than a pure approach for certain kinds of problems. The committee feels that there is an unrealised synergy between the various approaches that an AISB conference may be able to explore. Thus, the focus of the tenth AISB Conference is on such hybrid methods.
We particularly seek papers that describe novel theoretical and/or experimental work which uses a hybrid approach or papers from purists, arguing cogently that compromise is unnecessary or unproductive. While papers such as those are particularly sought, good papers on any topic in Artificial Intelligence will be considered: as always, the most important criteria for acceptance will be soundness, originality, substance and clarity. Research in all areas is equally welcome. The AISB conference is a single track conference lasting three days, with a two day tutorial and workshop programme preceding the main technical event, and around twenty high calibre papers will be presented in the technical sessions. It is expected that the proceedings of the conference will be published in book form in time to be available at the conference itself, making it a forum for rapid dissemination of research results. SUBMISSIONS: High quality original papers dealing with the issues raised by mixing different approaches, or otherwise related to the Conference Theme, should be sent to the Programme Chair. Papers which give comparative experimental evaluation of methods from different paradigms applied to the same problem, papers which propose and evaluate mixed-paradigm theoretical models or tools, and papers that focus on hybrid systems applied to real world problems will be particularly welcome, as will papers from purists who argue cogently that the hybrid approach is flawed and a particular pure approach is to be preferred. Papers being submitted, whether verbatim or in essence, to other conferences whose review process runs concurrently with AISB-95 should indicate this fact on their title page. If a submitted paper appears at another conference it must be withdrawn from AISB-95 (this does not apply to presentation at specialist workshops). Papers that violate these requirements may be rejected without review. 
SHEFFIELD: Sheffield is one of the friendliest cities in the UK, and is well situated, with the best and closest surrounding countryside of any major city in the UK. The Peak District National Park is only minutes away. It is a good city for walkers, runners, and climbers. It has two theatres, the Crucible and the Lyceum. The Lyceum, a beautiful Victorian theatre, has recently been renovated. Also, the city has three 10-screen cinemas. There is a library theatre which shows more artistic films. The city has a large number of museums, many of which demonstrate Sheffield's industrial past, and there are a number of galleries in the city, including the Mappin Gallery and the Ruskin. A number of important ancient houses are close to Sheffield, such as Chatsworth House. The Peak District National Park is a beautiful place for visiting and rambling. There are large shopping areas in the city, and by 1995 Sheffield will be served by a 'supertram' system: the line to the Meadowhall shopping and leisure complex is already open. The University of Sheffield's Halls of Residence are situated on the western side of the city in a leafy residential area described by John Betjeman as ``the prettiest suburb in England''. Halifax Hall is centred on a local Steel Baron's house, dating back to 1830 and set in extensive grounds. It was acquired by the University and converted into a Hall of Residence for women with the addition of a new wing. ARTIFICIAL INTELLIGENCE AT SHEFFIELD: Sheffield Computer Science Department has a strong programme in Cognitive Systems and is part of the University's Institute for Language, Speech and Hearing (ILASH). ILASH has its own machines and support staff, and academic staff attached to it from nine departments.
Sheffield Psychology Department has the Artificial Intelligence Vision Research Unit (AIVRU) which was founded in 1984 to coordinate a large industry/university Alvey research consortium working on the development of computer vision systems for autonomous vehicles and robot workstations. FORMAT AND DEADLINES: Four copies of submitted papers must be received by the Programme Chair no later than 24 OCTOBER 1994 to be considered. Papers should be at most 12 pages in length and be produced in 12 point, with at most 60 lines of text per A4 page and margins at least 1 inch (2.5cm) wide on all sides (default LaTeX article style is OK). They should include a cover sheet (not counted in the 12 page limit) giving the paper title, the abstract, the authors and their affiliations, including a contact address for both electronic and paper mail for the principal author. Papers should be submitted in hard-copy, not electronically. Papers that do not adhere to this format specification may be rejected without review. Notification of acceptance will be sent to authors by 7 DECEMBER 1994 and full camera-ready copy will be due in early JANUARY 1995 (publishers' deadlines permitting). CONFERENCE ADDRESS: Correspondence relating to the conference programme, submissions of papers, etc. should be directed to the conference programme chair at the address below. John Hallam, Department of Artificial Intelligence, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL, SCOTLAND. Phone: + 44 31 650 3097 FAX: + 44 31 650 6899 E-mail: john at aifh.edinburgh.ac.uk Correspondence concerning local arrangements should be directed to the local arrangements organiser at the following address. Paul Mc Kevitt, Department of Computer Science, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield S1 4DP, ENGLAND. 
Phone: + 44 742 825572 FAX: + 44 742 780972 E-mail: p.mckevitt at dcs.sheffield.ac.uk From terry at salk.edu Tue Jun 14 17:04:18 1994 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 14 Jun 94 14:04:18 PDT Subject: Professorship at Lund Message-ID: <9406142104.AA10140@salk.edu> Lund University in Sweden has announced an opening for a full professorship at the Department of Theoretical Physics: "Physics, particularly the theory of collective phenomena. The area covers the physics of condensed matter in a broad sense, in particular the theory of complex systems." A full professorship in Sweden is a very good position with a lot of freedom and reasonable base funding. Applications should be submitted to Lund University before June 29, 1994. Could you please make this known to anyone who may be interested. Information about the position is available from Professor Lars Hedin, phone +46-46-10 90 80, e-mail Lars.Hedin at teorfys.lu.se. ----- From bernabe at cnm.us.es Wed Jun 15 10:18:21 1994 From: bernabe at cnm.us.es (Bernabe Linares B.) Date: Wed, 15 Jun 94 16:18:21 +0200 Subject: paper available on neural hardware Message-ID: <9406151418.AA12753@sparc1.cnm.us.es> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/bernabe.art1chip.ps.Z The file bernabe.art1chip.ps.Z is now available for copying from the Neuroprose repository:
Title: A CMOS VLSI Analog Current-Mode High-Speed ART1 Chip (4 pages)
Authors: T. Serrano, B. Linares-Barranco, and J. L. Huertas
Affiliation: National Microelectronics Center (CNM), Sevilla, SPAIN.
This paper will appear in the ICNN'94 proceedings and will be presented at the ICNN'94 meeting in Orlando, Florida, on 28 June, in the Special Invited Session "Advanced Analog Neural Networks and Applications", at 8:20am (according to the preliminary program). ABSTRACT: In this paper we present a real-time neural categorizer chip based on the ART1 algorithm [1].
The circuit implements a modified version of the original ART1 algorithm that is more suitable for VLSI implementation [2]. It has been designed using analog current-mode circuit design techniques and consists basically of current mirrors that are switched ON and OFF according to the binary input patterns. The chip is able to cluster input patterns of 100 binary pixels into up to 18 different categories. Modular expandability of the system is possible by simply interconnecting more chips in a matrix array; in this way a system can be built to cluster Nx100-binary-pixel images into Mx18 different clusters, using an NxM array of chips. Average pattern classification is performed in less than 1us, which corresponds to an equivalent computing power of 1.8x10^9 connections per second. The chip has been fabricated in a single-poly, double-metal 1.5um standard low-cost digital CMOS process, has a die area of 1cm^2, and is mounted in a 120-pin PGA package. Although the circuit is analog in nature, it is fully digital-compatible since all its inputs and outputs are digital. Sorry, no hardcopies available. From mario at physics.uottawa.ca Thu Jun 16 08:52:40 1994 From: mario at physics.uottawa.ca (Mario Marchand) Date: Thu, 16 Jun 94 09:52:40 ADT Subject: Errata for Neural Net and PAC learning papers Message-ID: <9406161252.AA07302@physics.uottawa.ca> Dear Colleagues, The following papers were announced 10 days ago but have since been moved to another directory. To retrieve any one of them, please follow these instructions:
unix> ftp Dirac.physics.uottawa.ca
Name: anonymous
Password: "your email address"
ftp> cd /pub/tr/marchand
ftp> binary
ftp> get filename.ps.Z
ftp> bye
unix> uncompress filename.ps.Z
and print the postscript file: filename.ps
----
****************************************************************
FILE: COLT93.ps.Z
TITLE: Average Case Analysis of the Clipped Hebb Rule for Nonoverlapping Perceptron Networks
AUTHORS: Mostefa Golea, Mario Marchand
Abstract: We investigate the clipped Hebb rule for learning different multilayer networks of nonoverlapping perceptrons with binary weights and zero thresholds when the examples are generated according to the uniform distribution. Using the central limit theorem and very simple counting arguments, we calculate exactly its learning curves (\ie\ the generalization rates as a function of the number of training examples) in the limit of a large number of inputs. We find that the learning curves converge {\em exponentially\/} rapidly to perfect generalization. These results are very encouraging given the simplicity of the learning rule. The analytic expressions of the learning curves are in excellent agreement with the numerical simulations, even for moderate values of the number of inputs.
*******************************************************************
FILE: EuroCOLT93.ps.Z
TITLE: On Learning Simple Deterministic and Probabilistic Neural Concepts
AUTHORS: Mostefa Golea, Mario Marchand
Abstract: We investigate the learnability, under the uniform distribution, of deterministic and probabilistic neural concepts that can be represented as {\em simple combinations} of {\em nonoverlapping} perceptrons with binary weights. Two perceptrons are said to be nonoverlapping if they do not share any input variables. In the deterministic case, we investigate, within the distribution-specific PAC model, the learnability of {\em perceptron decision lists} and {\em generalized perceptron decision lists}.
In the probabilistic case, we adopt the approach of {\em learning with a model of probability} introduced by Kearns and Schapire~\cite{KS90} and Yamanishi~\cite{Y92}, and investigate a class of concepts we call {\em probabilistic majorities} of nonoverlapping perceptrons. We give polynomial-time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept.
***********************************************************************
FILE: NeuralComp.ps.Z
TITLE: On Learning Perceptrons with Binary Weights
AUTHORS: Mostefa Golea, Mario Marchand
Abstract: We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of {\em product distributions\/}. The sample complexity of this algorithm is $O((n/\epsilon)^4 \ln(n/\delta))$ and its running time increases only linearly with the number of training examples. The algorithm does not try to find a hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron based on various probabilistic estimates obtained from the training examples. We show that, in the restricted case of the {\em uniform distribution\/} and zero threshold, the algorithm reduces to the well-known {\em clipped Hebb rule\/}. We calculate exactly the average generalization rate (\ie\ the learning curve) of the algorithm, under the uniform distribution, in the limit of an infinite number of dimensions. We find that the error rate decreases {\em exponentially\/} as a function of the number of training examples. Hence, the average-case analysis gives a sample complexity of $O(n\ln(1/\epsilon))$; a large improvement over the PAC learning analysis.
The analytical expression of the learning curve is in excellent agreement with the extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise. *********************************************************************** ********************************************************************** FILE: NIPS92.ps.Z TITLE: On Learning $\mu$-Perceptron Networks with Binary Weights AUTHORS: Mostefa Golea, Mario Marchand, Thomas R. Hancock Abstract Neural networks with binary weights are very important from both the theoretical and practical points of view. In this paper, we investigate the learnability of single binary perceptrons and unions of $\mu$-binary-perceptron networks, {\em i.e.\/} an ``OR'' of binary perceptrons where each input unit is connected to one and only one perceptron. We give a polynomial time algorithm that PAC learns these networks under the uniform distribution. The algorithm is able to identify both the network connectivity and the weight values necessary to represent the target function. These results suggest that, under reasonable distributions, $\mu$-perceptron networks may be easier to learn than fully connected networks. ********************************************************************** ********************************************************************** FILE: Network93.ps.Z TITLE: On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists AUTHORS: Mario Marchand, Mostefa Golea Abstract In this paper, we take a close look at the problem of learning simple neural concepts under the uniform distribution of examples. By simple neural concepts we mean concepts that can be represented as simple combinations of perceptrons (halfspaces). One such class of concepts is the class of halfspace intersections. 
By formalizing the problem of learning halfspace intersections as a {\em set covering problem}, we are led to consider the following sub-problem: given a set of non-linearly separable examples, find the largest linearly separable subset of it. We give an approximation algorithm for this NP-hard sub-problem. Simulations, on both linearly and non-linearly separable functions, show that this approximation algorithm works well under the uniform distribution, outperforming the Pocket algorithm used by many constructive neural algorithms. Based on this approximation algorithm, we present a greedy method for learning halfspace intersections. We also present extensive numerical results that strongly suggest that this greedy method learns halfspace intersections under the uniform distribution of examples. Finally, we introduce a new class of simple, yet very rich, neural concepts that we call {\em neural decision lists}. We show how the greedy method can be generalized to handle this class of concepts. Both greedy methods, for halfspace intersections and for neural decision lists, were tried on real-world data with very encouraging results. This shows that these concepts are important not only from the theoretical point of view, but also in practice.
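Two of the simple learners recurring in the abstracts above, the clipped Hebb rule and the Pocket algorithm, are compact enough to sketch in a few lines. The following is an illustrative NumPy sketch, not the authors' code: the clipped Hebb rule is shown for a single zero-threshold binary perceptron under the uniform +/-1 distribution, and the Pocket algorithm in its simplest best-training-accuracy-so-far form.

```python
import numpy as np

def clipped_hebb(X, y):
    """Clipped Hebb rule: clip the summed Hebbian correlations to +/-1 weights."""
    return np.sign(y @ X).astype(int)

def pocket(X, y, epochs=50, seed=0):
    """Pocket algorithm: plain perceptron updates, keeping the best weights seen."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w) <= 0:            # misclassified example
                w = w + y[i] * X[i]               # perceptron update
                acc = np.mean(np.sign(X @ w) == y)
                if acc > best_acc:                # pocket the improved weights
                    best_acc, best_w = acc, w.copy()
    return best_w

# Demo: recover a random binary-weight perceptron from uniform +/-1 examples.
rng = np.random.default_rng(1)
n, m = 21, 2000                                   # odd n avoids zero dot products
w_target = rng.choice([-1, 1], size=n)
X = rng.choice([-1, 1], size=(m, n)).astype(float)
y = np.sign(X @ w_target)

w_hebb = clipped_hebb(X, y)
w_pock = pocket(X, y)
hebb_acc = np.mean(np.sign(X @ w_hebb) == y)
pock_acc = np.mean(np.sign(X @ w_pock) == y)
```

With an odd number of inputs the labels are never zero, and on this synthetic task both learners find a near-perfect separator; the papers above analyze exactly when, and how fast, that happens.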
*************************************************************************** Sorry for this inconvenience, - Mario ---------------------------------------------------------------- | UUU UUU Mario Marchand | | UUU UUU ----------------------------- | | UUU OOOOOOOOOOOOOOOO Department of Physics | | UUU OOO UUU OOO University of Ottawa | | UUUUUUUUUUUUUUUU OOO 150 Louis Pasteur street | | OOO OOO PO BOX 450 STN A | | OOOOOOOOOOOOOOOO Ottawa (Ont) Canada K1N 6N5 | | | | ***** Internet E-Mail: mario at physics.uottawa.ca ********** | | ***** Tel: (613)564-9293 ------------- Fax: 564-6712 ***** | ---------------------------------------------------------------- From gannadm at cs.iastate.edu Thu Jun 16 15:50:22 1994 From: gannadm at cs.iastate.edu (GANN Mailing list (Vasant Honavar)) Date: Thu, 16 Jun 94 14:50:22 CDT Subject: Mailing List on Evolutionary Design of Neural Networks Message-ID: ** Announcement ** ** A Mailing List on Evolutionary Design of Neural Networks ** The neuro-evolution e-mail list (which was originally started by Mike Rudnick but has been defunct for a couple of years due to logistic problems) is being restarted under the new name `gann'. The gann list will focus on the use of evolutionary algorithms (genetic algorithms, genetic programming and their variants) in the exploration of the design space of (artificial) neural network architectures and algorithms. The list will be semi-moderated to keep the signal to noise ratio as high as possible. MEMBERSHIP AND SCOPE: The membership on the list is open to researchers who are actively working in this area. The primary objective of the mailing list is to foster interaction and sharing of new research results, publications, conference announcements, and other useful information among researchers in this area. 
A partial list of topics of particular interest includes:
  Genetic Representation (Blueprints) of Neural Networks
  Encoding and Decoding of Network Blueprints
  Complexity issues
  Development models
  Learning models
  Representational Bias
  Efficiency Issues
  Properties of Representation
  Experimental Results
  Theoretical Considerations
Details of operation of the mailing list follow:
TO SUBSCRIBE: mail to gann-request at cs.iastate.edu with "Subject": subscribe
TO UNSUBSCRIBE: mail to gann-request at cs.iastate.edu with "Subject": unsubscribe
All administrative queries/enquiries/comments should be addressed to gann-request at cs.iastate.edu. All articles/notes/replies-to-queries/submissions to the list should be sent to gann at cs.iastate.edu. You will receive a welcoming message once you have been added to the list.
_______________________________________________________________________
Dr. Vasant Honavar, Assistant Professor, Computer Science & Neuroscience, Department of Computer Science, Iowa State University, honavar at cs.iastate.edu
Dr. Mike Rudnick, Assistant Professor, Computer Science, Department of Computer Science, Tulane University, rudnick at cs.tulane.edu
Mr.
Karthik Balakrishnan, Doctoral Student, Artificial Intelligence Research Group, Department of Computer Science, Iowa State University, balakris at cs.iastate.edu _______________________________________________________________________ From wermter at nats2.informatik.uni-hamburg.de Fri Jun 17 12:34:04 1994 From: wermter at nats2.informatik.uni-hamburg.de (Stefan Wermter) Date: Fri, 17 Jun 94 18:34:04 +0200 Subject: paper announcement Message-ID: <9406171634.AA01645@nats2.informatik.uni-hamburg.de> ----------------------------------------------------------------------------- FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/wermter.screen.ps.Z ------------------------------------------------------------------------------ The following paper is available via anonymous ftp from the neuroprose archive:
File: wermter.screen.ps.Z
Length: 6 pages
Learning Fault-tolerant Speech Parsing with SCREEN
Stefan Wermter and Volker Weber
Department of Computer Science, University of Hamburg, Germany
[to appear in Proceedings of National Conference on Artificial Intelligence 94]
Abstract: This paper describes a new approach and a system, SCREEN, for fault-tolerant speech parsing. SCREEN stands for Symbolic Connectionist Robust EnterprisE for Natural language. Speech parsing here means the syntactic and semantic analysis of spontaneous spoken language. The general approach is based on incremental immediate flat analysis, learning of syntactic and semantic speech parsing, parallel integration of current hypotheses, and the consideration of various forms of speech-related errors. The goal of this approach is to explore the parallel interactions between various knowledge sources for learning incremental fault-tolerant speech parsing. This approach is examined in the SCREEN system using various hybrid connectionist techniques.
Hybrid connectionist techniques are examined because of their promising properties of inherent fault tolerance, learning, gradedness and parallel constraint integration. The input for SCREEN is hypotheses about recognized words of a spoken utterance, potentially analyzed by a speech system; the output is hypotheses about the flat syntactic and semantic analysis of the utterance. In this paper we focus on the general approach, the overall architecture, and examples for learning flat syntactic speech parsing. Unlike most other speech/language architectures, SCREEN emphasizes an interactive rather than an autonomous position, learning rather than encoding, flat analysis rather than in-depth analysis, and fault-tolerant processing of phonetic, syntactic and semantic knowledge. ------------------------------------------------------------------------------ To retrieve the compressed postscript file, do the following:
unix> ftp archive.cis.ohio-state.edu
ftp> login: anonymous
ftp> password: [your_full_email_address]
ftp> cd pub/neuroprose
ftp> binary
ftp> get wermter.screen.ps.Z
ftp> quit
unix> uncompress wermter.screen.ps.Z
unix> lpr wermter.screen.ps (or however you print postscript)
(Hard copies of the paper are unfortunately not available. European paper format has been used, but everything should still print completely on the shorter US paper format as well.)
best wishes, Stefan Wermter ************************************************************************** * Dr Stefan Wermter Universitaet Hamburg * * Fachbereich Informatik * * wermter at informatik.uni-hamburg.de Vogt-Koelln-Strasse 30 * * phone: +49 40 54715-531 D-22527 Hamburg * * fax: +49 40 54715-515 Germany * ************************************************************************** From leon at bop.neuristique.fr Fri Jun 17 03:25:12 1994 From: leon at bop.neuristique.fr (Leon Bottou) Date: Fri, 17 Jun 94 09:25:12 +0200 Subject: TR Announcement: Effective VC Dimension Message-ID: <9406170725.AA15312@neuristique.fr> **DO NOT FORWARD TO OTHER GROUPS** TR Announcement: Effective VC Dimension ---------------------------------------- FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/bottou.effvc.ps.Z The file bottou.effvc.ps.Z is now available for copying from the Neuroprose repository:
TITLE: On the Effective VC Dimension (12 pages)
Leon Bottou, Neuristique, Paris (France)
Vladimir N. Vapnik, AT&T Bell Laboratories, Holmdel (NJ)
ABSTRACT: The very idea of an ``Effective Vapnik-Chervonenkis (VC) dimension'' relies on the hypothesis that the relation between the generalization error and the number of training examples can be expressed by a formula algebraically similar to the VC bound. This hypothesis calls for serious discussion, since the traditional VC bound widely overestimates the generalization error. In this paper we describe an algorithm- and data-dependent measure of capacity. We derive a confidence interval on the difference between the training error and the generalization error. This confidence interval is much tighter than the traditional VC bound. A simple change in the formulation of the problem yields this extra accuracy: our confidence interval bounds the error difference between a training set and a test set, rather than the error difference between a training set and some hypothetical grand truth.
This ``transductive'' approach allows for deriving a data and algorithm dependent confidence interval. From rosen at unr.edu Sun Jun 19 22:37:28 1994 From: rosen at unr.edu (David B. Rosen) Date: Sun, 19 Jun 94 19:37:28 -0700 Subject: searchable bibliographic databases available Message-ID: <9406200240.AA11406@solstice.unr.edu> Just want to point out this very useful service for searching a large collection of neural network bibtex bibliographies via the World-Wide Web: http://glimpse.cs.arizona.edu:1994/bib/Neural/ For more information, and to access other CS-related bibliographies, see: ftp://ftp.cs.umanitoba.ca/pub/bibliographies/index.html (I have nothing to do with them.) -- David Rosen \ Center for Biomedical \ University e-mail,finger: rosen at unr.edu \ Modeling Research \ of Nevada From gjg at cns.edinburgh.ac.uk Mon Jun 20 12:47:13 1994 From: gjg at cns.edinburgh.ac.uk (Geoffrey Goodhill) Date: Mon, 20 Jun 94 12:47:13 BST Subject: Paper on MDS available Message-ID: <1104.9406201147@cns.ed.ac.uk> The following paper has been submitted for publication and is available by anonymous ftp. It expands on our brief preliminary note published in Nature on June 9th (Simmen, Goodhill & Willshaw, 369:448). An evaluation of the use of Multidimensional Scaling ---------------------------------------------------- for understanding brain connectivity ------------------------------------ Geoffrey J. Goodhill, Martin W. Simmen & David J. Willshaw Centre for Cognitive Science University of Edinburgh Research Paper EUCCS / RP-63, June 1994 Abstract -------- A large amount of data is now available about the pattern of connections between brain regions. Computational methods are increasingly relevant for uncovering structure in such datasets. There has been recent interest in the use of Nonmetric Multidimensional Scaling (NMDS) for such analysis (Young, 1992, 1993; Scannell & Young, 1993). 
NMDS produces a spatial representation of the ``dissimilarities'' between a number of entities. Normally, it is applied to data matrices containing a large number of levels of dissimilarity, whereas for connectivity data there is a very small number. We address the suitability of NMDS for this case. Systematic numerical studies are presented to evaluate the ability of this method to reconstruct known geometrical configurations from dissimilarity data possessing few levels. In this case there is a strong bias for NMDS to produce annular configurations, whether or not such structure exists in the original data. Using a connectivity dataset derived from the primate cortical visual system (Felleman & Van Essen, 1991), we demonstrate why great caution is needed in interpreting the resulting configuration. Application of an independent method that we developed strongly suggests that the visual system NMDS configuration is affected by an annular bias. We question whether an NMDS analysis of the visual system data supports the two streams view of visual processing (Young, 1992). Instructions for obtaining by anonymous ftp:
% ftp archive.cis.ohio-state.edu
Name: anonymous
Password:
ftp> binary
ftp> cd pub/neuroprose
ftp> get goodhill.nmds.ps.Z
The paper is approx 1MB and prints on 23 pages.
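For readers unfamiliar with MDS: NMDS itself is an iterative, rank-based procedure, but the basic idea of recovering a spatial configuration from pairwise dissimilarities is easiest to see in its classical (metric) cousin. The sketch below is an illustrative Torgerson-style classical MDS in NumPy, not the NMDS procedure evaluated in the paper; with exact Euclidean distances it recovers the configuration up to rotation, whereas the paper's point is that heavily quantized, few-level dissimilarities (as in connectivity data) are a much harder, bias-prone input.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dimensions from a
    matrix D of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # keep the top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

# Demo: distances from a known 2-D configuration are reproduced exactly.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
emb = classical_mds(D, k=2)
D_emb = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
# Quantizing D to a handful of levels, as with connectivity data, breaks
# this exact recovery; that regime is what the paper investigates for NMDS.
```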
Geoff Goodhill gjg at cns.ed.ac.uk From rwp at eng.cam.ac.uk Mon Jun 20 16:46:34 1994 From: rwp at eng.cam.ac.uk (Richard Prager) Date: Mon, 20 Jun 1994 16:46:34 BST Subject: Cambridge Neural Networks Summer School 1994 Message-ID: <12036.9406201546@dsl.eng.cam.ac.uk> Cambridge University Engineering Department in Collaboration with the Cambridge University Programme for Industry Announce The Fourth Annual Neural Networks Summer School 3 1/2 day short course 19-22 September 1994 KOHONEN JORDAN SUTTON BOURLARD DAUGMAN JERVIS MACKAY NIRANJAN PRAGER ROBINSON TARASSENKO
+---------------------------------------------------------+
| Thanks to support from the EPSRC we are this year able  |
| to offer fully funded places for selected UK research   |
| students. There is also a large academic discount.      |
| See below for details of how to apply for these places. |
+---------------------------------------------------------+
OUTLINE AND AIM OF THE COURSE Recently, much progress has been made in the area of neural computing, bringing together a range of powerful techniques from parallel computing, nonlinear functional analysis, statistical inference and dynamical systems theory. There is much potential in this area for solving a range of interesting and difficult problems, with commercial and industrial applications. The course will give a broad introduction to the application and design of neural networks and deal both with the theory and with specific applications. Survey material will be given, together with recent research results in architecture and training methods, and applications including signal processing, control, speech, robotics and human vision. Design methodologies for a number of common neural network architectures will be covered, together with the theory behind neural network algorithms. Participants will learn the strengths and weaknesses of the neural network approach, and how to assess the potential of the technology in respect of their own requirements.
Lectures will be given by international experts in the field, and delegates will have the opportunity of learning first hand the technical and practical details of recent work in neural networks from those who are contributing to those developments. LABORATORY DEMONSTRATIONS Informal evening visits to Cambridge University Engineering Department laboratories, which will include demonstrations of a number of current research projects. POSTER SESSION There will be an informal poster session in which delegates may present their current work or interests should they so wish. Please contact the Course Administrator for further details. LECTURERS DR HERVE BOURLARD is with Lernout & Hauspie Speech Products in Brussels. He has made many contributions to the subject particularly in the area of speech recognition. DR JOHN DAUGMAN came to Cambridge in 1991 as a Senior Research Fellow in Zoology (computational neuroscience) and is now a Lecturer in Artificial Intelligence in the Computer Laboratory at Cambridge University. His areas of research and publication include computational neuroscience, multi-dimensional signal processing and pattern recognition, machine vision and biological vision. DR TIMOTHY JERVIS is with Schlumberger Cambridge Research Ltd. His interests lie in the field of neural networks and in the application of Bayesian statistical techniques to learning control. PROFESSOR MICHAEL JORDAN is in the Department of Brain & Cognitive Science at MIT. He was a founding member of the PDP research group and he made many contributions to the subject particularly in forward and inverse systems. PROFESSOR TEUVO KOHONEN is with the Academy of Finland and Laboratory of Computer and Information Science at Helsinki University of Technology. His specialities are in self-organising maps and their applications. DR DAVID MACKAY is the Royal Society Smithson Research Fellow at Cambridge University and works on Bayesian methods and non-linear modelling at the Cavendish Laboratory. 
He obtained his PhD in Computation and Neural Systems at California Institute of Technology. DR MAHESAN NIRANJAN is with the Department of Engineering at Cambridge University. His specialities are in speech processing and pattern classification. DR RICHARD PRAGER is with the Department of Engineering at Cambridge University. His specialities are in speech and vision processing. DR TONY ROBINSON is with the Department of Engineering at Cambridge University. His specialities are in recurrent networks and speech processing. DR RICH SUTTON is with the Adaptive Systems Department of GTE Laboratories near Boston, USA. His specialities are in reinforcement learning, planning and animal learning behaviours. DR LIONEL TARASSENKO is with the Department of Engineering at the University of Oxford. His specialities are in robotics and the hardware implementation of neural computing. WHO SHOULD ATTEND This course is intended for engineers, software specialists and other scientists who need to assess the current potential of neural networks. Delegates will have the opportunity to learn at first hand the technical and practical details of recent work in this field. The Neural Networks Summer School has been running for four consecutive years and has consistently received high praise from those who have attended. We attract lecturers of international stature, and speakers this year will include Professor Teuvo Kohonen, Professor Michael Jordan, Dr Rich Sutton, Dr Lionel Tarassenko, Dr David MacKay and Dr John Daugman. PROGRAMME The course will be structured to enable full discussion periods between lecturers and delegates. All the formal sessions will be covered by comprehensive course notes. 
Lecture subjects will include:
**Introduction and overview**
  Connectionist computing: an introduction and overview
  Programming a neural network
  Parallel distributed processing perspective
  Theory and parallels with conventional algorithms
**Architectures**
  Pattern processing and generalisation
  Bayesian methods and non-linear modelling
  Reinforcement learning neural networks
  Multiple expert networks
  Self-organising neural networks
  Feedback networks for optimization
**Applications**
  System identification
  Time series prediction
  Learning forward and inverse dynamical models
  Control of nonlinear dynamical systems using neural networks
  Artificial and biological vision systems
  Silicon VLSI neural networks
  Applications to diagnostic systems
  Applications to speech recognition
  Applications to mobile robotics
  Financial system modelling
  Applications in medical diagnostics
COURSE FEES and ACCOMMODATION The course fee is 750 UK pounds (350 UK pounds with academic discount for full-time students and faculty of higher education institutes), payable in advance, and includes a full set of course notes, a certificate of attendance, and all day-time refreshments for the duration of the course. In order to benefit fully from the course we strongly recommend that delegates elect to be residential, as courses are designed to allow planned and informal discussions in the evening. Accommodation can be arranged in college rooms with shared facilities at Corpus Christi College at 187 UK pounds for 4 nights, to include bed and breakfast, dinner and a Course Dinner. If you would prefer to make your own arrangements, please indicate this on the registration form and details of local hotels will be sent to you. EPSRC SPONSORED PLACES A limited number of EPSRC sponsored places are available for all full-time UK registered students. However, priority placement will be given to students with EPSRC (SERC) funding.
Sponsorship covers all course fees, meals and college accommodation (Monday, Tuesday and Wednesday nights only). To be considered for a place, please send a one-page summary of current research including how you expect to benefit by attending, a curriculum vitae, a letter of recommendation from your supervisor, and the nature of your current funding. The deadline for applications is 1 August 1994.
---------------------------------------------------------------------------
I wish to REGISTER for the course: "Neural Networks Summer School"
Title (Dr, Mr, Ms etc) ........................................
Name ..........................................................
First Names ...................................................
Job Title .....................................................
Company .......................................................
Division ......................................................
Address .......................................................
...............................................................
...............................................................
Post Code .....................................................
Tel. No .......................................................
Fax. No .......................................................
E-mail address ................................................
_____ I am applying for an academic discount
_____ I am applying for an EPSRC Scholarship
_____ I will be paying a commercial/industrial rate
______ Please reserve one place and accommodation for 4 nights. I enclose a cheque/purchase order for _______, made payable to the University of Cambridge/EYA.
______ Please reserve one place and send details of local hotels. I enclose a cheque/purchase order for _______, made payable to the University of Cambridge/EYA.
I have the following special requirements concerning diet or disabilities:

Total Amount Enclosed: UKL ____________

For further information contact: Rebecca Simons, Course Administrator, University of Cambridge Programme for Industry, 1 Trumpington Street, Cambridge CB2 1QA. Tel: +44 (0)223 332722 Fax: +44 (0)223 301122 Email: rjs1008 at uk.ac.cam.phx

From iiscorp at netcom.com Mon Jun 20 19:51:47 1994 From: iiscorp at netcom.com (IIS Corp) Date: Mon, 20 Jun 94 16:51:47 PDT Subject: Soft Computing Days in San Francisco Message-ID:

Soft Computing Days in San Francisco

Zadeh, Widrow, Koza, Ruspini, Stork, Whitley, Bezdek, Bonissone, and Berenji On Soft Computing: Fuzzy Logic, Neural Networks, and Genetic Algorithms

Three short courses, San Francisco, CA, October 24-28, 1994

Traditional (hard) computing methods do not provide sufficient capabilities to develop and implement intelligent systems. Soft computing methods have proved to be important practical tools for building these systems. The following three courses, offered by Intelligent Inference Systems Corp., will focus on all major soft computing technologies: fuzzy logic, neural networks, genetic algorithms, and genetic programming. These courses may be taken either individually or in combination.

Course 1: Artificial Neural Networks (Oct. 24) -- Bernard Widrow and David Stork
Course 2: Genetic Algorithms and Genetic Programming (Oct. 25) -- John Koza and Darrell Whitley
Course 3: Fuzzy Logic Inference (Oct. 26-28) -- Lotfi Zadeh, Jim Bezdek, Enrique Ruspini, Piero Bonissone, and Hamid Berenji

For further details on course topics and registration information, send an email to iiscorp at netcom.com or contact Intelligent Inference Systems Corp., Phone (408) 730-8345, Fax: (408) 730-8550. A detailed brochure will be sent to you as soon as possible.
From terry at salk.edu Mon Jun 20 14:07:54 1994 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 20 Jun 94 11:07:54 PDT Subject: Neural Computation, Vol 6, No 4 Message-ID: <9406201807.AA20944@salk.edu>

NEURAL COMPUTATION July 1994 Volume 6 Number 4

Article:
What is the Goal of Sensory Coding?
David J. Field

Note:
Design Principles of Columnar Organization in Visual Cortex
Ernst Niebur and Florentin Worgotter

Letters:
Elastic Net Model of Ocular Dominance: Overall Stripe Pattern and Monocular Deprivation
Geoffrey Goodhill and David Willshaw

The Effect of Synchronized Inputs at the Single Neuron Level
Ojvind Bernander, Christof Koch and Marius Usher

Segmentation by a Network of Oscillators with Stored Memories
H. Sompolinsky and M. Tsodyks

Numerical Bifurcation Analysis of an Oscillatory Neural Network with Synchronous/Asynchronous Connections
Yukio Hayashi

Analysis of the Effects of Noise on a Model for the Neural Mechanism of Short-Term Active Memory
J. Devin McAuley and Joseph Stampfli

Reduction of Conductance Based Models with Slow Synapses to Neural Nets
Bard Ermentrout

Dimension Reduction of Biological Neuron Models by Artificial Neural Networks
Kenji Doya and Allen I. Selverston

Neural Network Process Models Based on Linear Model Structures
Gary M. Scott and W. Harmon Ray

Stability of Oja's PCA Subspace Rule
Juha Karhunen

Supervised Training of Neural Networks via Ellipsoid Algorithms
Man-Fung Cheung, Kevin M. Passino and Stephen Yurkovich

Why Some Feedforward Networks Can't Learn Some Polynomials
N. Scott Cardell, Wayne Joerding and Ying Li

-----

SUBSCRIPTIONS - 1994 - VOLUME 6 - BIMONTHLY (6 issues)

______ $40 Student and Retired
______ $65 Individual
______ $166 Institution

Add $22 for postage and handling outside USA (+7% GST for Canada).
(Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu

-----

From ling at csd.uwo.ca Tue Jun 21 03:43:03 1994 From: ling at csd.uwo.ca (Charles X Ling) Date: Tue, 21 Jun 94 03:43:03 EDT Subject: Paper on overfitting ... and learning verb past tense Message-ID: <9406210743.AA29302@mccarthy.csd.uwo.ca>

Hi. A few months ago I posted some questions on the overfitting effect of neural network learning, and I got some very helpful replies from many people. Thanks a million! After much more work, I have just finished a short paper (to be submitted) which contains clear results on the overfitting issue. Your comments and suggestions on the paper will be highly appreciated!

***********

Overfitting in Neural-Network Learning of Discrete Patterns

Charles X. Ling

Abstract

Weigend reports that the presence and absence of overfitting in neural networks depends on how the testing error is measured, and that there is no overfitting in terms of the classification error. In this paper, we show that, in terms of the classification error, overfitting can be very evident depending on the representation used to encode the attributes. We design a simple learning problem of a small Boolean function with clear rationale, and present experimental results to support our claims. We verify our findings in the task of learning the past tense of English verbs.

Instructions for obtaining by anonymous ftp:

% ftp ftp.csd.uwo.ca
Name: anonymous
Password:
ftp> cd pub/SPA/papers
ftp> get overfitting.ps

The paper is approx 280K and prints on 19 pages.
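The measure-dependence point in the abstract above (that overfitting can appear or vanish depending on how test error is scored) can be shown in a toy computation. The numbers below are hypothetical and are not taken from the paper: they sketch how a network's outputs can drift between two stages of training so that squared test error rises while thresholded classification error stays at zero.

```python
# Toy illustration (hypothetical numbers, not from the paper): the same
# test labels scored under two error measures.  Squared error grows from
# the "early" to the "late" stage, yet classification error (0.5 threshold)
# is zero in both -- whether "overfitting" appears depends on the measure.

def squared_error(preds, labels):
    # Mean squared difference between real-valued outputs and 0/1 targets.
    return sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(labels)

def classification_error(preds, labels):
    # Fraction of examples on the wrong side of the 0.5 threshold.
    return sum((p >= 0.5) != (y == 1) for p, y in zip(preds, labels)) / len(labels)

labels = [1, 0, 1, 1, 0]
early  = [0.70, 0.30, 0.60, 0.80, 0.20]  # confident and correct
late   = [0.55, 0.45, 0.51, 0.52, 0.48]  # still correct, but drifting toward 0.5

print(squared_error(early, labels), classification_error(early, labels))
print(squared_error(late, labels), classification_error(late, labels))
```

Ling's contribution is the converse observation: with some attribute encodings the classification error itself can also rise on held-out data, so the sketch covers only the first half of the story.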
**********

While I am here, I'd like to ask if anyone is working on connectionist models of the learning of the past tense of English verbs, which I have worked on with the symbolic system SPA (see two papers in the same directory as the overfitting paper). To ensure more direct comparison, if running SPA is needed, I'd be very happy to assist and collaborate. Regards, Charles

From shultz at hebb.psych.mcgill.ca Tue Jun 21 08:44:50 1994 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 21 Jun 94 08:44:50 EDT Subject: No subject Message-ID: <9406211244.AA02074@hebb.psych.mcgill.ca> Subject: Abstract Date: 21 June '94 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/shultz.cross.ps.Z

Please do not forward this announcement to other boards. Thank you.

-------------------------------------------------------------

The following paper has been placed in the Neuroprose archive at Ohio State University:

Analyzing Cross Connected Networks (8 pages)

Thomas R. Shultz
Department of Psychology & McGill Cognitive Science Centre
McGill University
Montreal, Quebec, Canada H3A 1B1
shultz at psych.mcgill.ca

and

Jeffrey L. Elman
Center for Research on Language
Department of Cognitive Science
University of California at San Diego
La Jolla, CA 92093-0126 U.S.A.
elman at crl.ucsd.edu

Abstract

The non-linear complexities of neural networks make network solutions difficult to understand. Sanger's contribution analysis is here extended to the analysis of networks automatically generated by the cascade-correlation learning algorithm. Because such networks have cross connections that supersede hidden layers, standard analyses of hidden unit activation patterns are insufficient. A contribution is defined as the product of an output weight and the associated activation on the sending unit, whether that sending unit is an input or a hidden unit, multiplied by the sign of the output target for the current input pattern.
Intercorrelations among contributions, as gleaned from the matrix of contributions x input patterns, can be subjected to principal components analysis (PCA) to extract the main features of variation in the contributions. Such an analysis is applied to three problems: continuous XOR, arithmetic comparison, and distinguishing between two interlocking spirals. In all three cases, this technique yields useful insights into network solutions that are consistent across several networks.

The paper has been published in J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in Neural Information Processing Systems 6, pp. 1117-1124. San Francisco, CA: Morgan Kaufmann.

Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message.

FTP INSTRUCTIONS:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get shultz.cross.ps.Z
ftp> quit
unix> uncompress shultz.cross.ps.Z

Thanks to Jordan Pollack for maintaining this archive. Tom Shultz

From wray at ptolemy.arc.nasa.gov Wed Jun 22 13:42:20 1994 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Wed, 22 Jun 94 10:42:20 PDT Subject: How about some WWW structure to all these papers? Message-ID: <9406221742.AA05693@ptolemy.arc.nasa.gov>

There's an opportunity here for some ambitious student. We get notices of several papers a day over connectionists. How about someone turn them into a WWW site with:

Title, abstract, and author details.
URL link to the FTP site & file.
So instead of getting bombarded with abstracts, etc., and having to do the laborious download, etc., I just look up, http://cs.cmu.edu/pub/neuron/Connectist-Abstracts from my WWW/Xmosaic hotlist every few days, and when I see something I like I just point and click and, hey, presto, ghostview starts up with the Postscript file. Wray Buntine NASA Ames Research Center phone: (415) 604 3389 Mail Stop 269-2 fax: (415) 604 3594 Moffett Field, CA, 94035-1000 email: wray at kronos.arc.nasa.gov From stolcke at ICSI.Berkeley.EDU Wed Jun 22 20:25:16 1994 From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke) Date: Wed, 22 Jun 1994 17:25:16 PDT Subject: How about some WWW structure to all these papers? In-Reply-To: Your message of Wed, 22 Jun 1994 10:42:20 -0700. <9406221742.AA05693@ptolemy.arc.nasa.gov> Message-ID: <199406230025.RAA23681@tiramisu> In message <9406221742.AA05693 at ptolemy.arc.nasa.gov>you wrote: > > There's an opportunity here for some ambitious student. > > We get notices of several papers a day over connectionists. > How about someone turn them into a WWW site with: > > Title, abstract, and author details. > URL link to the FTP site & file. > Several services of this sort already exist. The most useful is probably the Computer Science Techreport index, at http://cs.indiana.edu/cstr/search. It indexes reports from several dozen sources, including the neuroprose archive. You search for title keywords, authors, etc. and it returns URLs to the ftp sites. You follow those, and voila', you got your report on screen (or on disk). I would urge sites that maintain collections of ftp-able reports to get in touch with Marc Van Heyningen (mvanheyn at cs.indiana.edu) to be put on the list of sites queried by this index. --Andreas From kainen at cs.UMD.EDU Thu Jun 23 02:46:51 1994 From: kainen at cs.UMD.EDU (Paul Kainen) Date: Thu, 23 Jun 1994 02:46:51 -0400 Subject: How about some WWW structure to all these papers? 
Message-ID: <199406230646.CAA28076@tove.cs.UMD.EDU>

Actually, one can already access the archives via web browsers. For instance, using "lynx" which is available even without being a node (or a similar approach with Mosaic), one can just type "lynx ftp://archive.cis.ohio-state.edu/pub/neuroprose/" to access the directory via the WWW. Thus, lynx actually subsumes gopher and ftp. To navigate, you just move the cursor or change the options page to number the links so that you can just type the number (usually faster). Using lynx eliminates the nuisance of the ftp log-in (which is a sham anyway) and also makes finding files a lot easier. The interface is also fairly intelligent, so all you need to do is type "D" to download the file to your host (or yourself if you're on the net directly). You are given a choice to write the resulting file to disk under whatever name you prefer, and if the file is binary, the transfer mode is set automatically. Anyway, there are tutorials on lynx, which is a neat program by Lou Montulli at the Univ. of Kansas; see, e.g., http://info.cern.ch/hypertext/WWW/Lynx/Status.html (by typing "lynx" followed by the URL above). Paul Kainen (kainen at cs.umd.edu)

From donna at Lanl.GOV Thu Jun 23 18:00:36 1994 From: donna at Lanl.GOV (donna@Lanl.GOV) Date: Thu, 23 Jun 1994 15:00:36 -0700 Subject: Letter from Alan Lapedes Message-ID: <9406232058.AA02786@t13.lanl.gov>

POSITIONS AT LOS ALAMOS NATIONAL LABORATORY (Postdoctoral and Tech)

Positions involving sequence analysis of DNA, RNA and protein sequences, as well as general aspects of computational biology, are available at Los Alamos National Laboratory. Depending on funds, we will have a limited number of

(a) postdoctoral positions
(b) data entry and elementary data analysis positions (techs)
(c) graduate student and summer student positions.
Funding for these positions is generally associated with specific grants, and the successful applicant will generally be expected to work on specific projects. Projects range from analysis of HIV and human papilloma virus sequences, to immune system studies, to more general aspects of computational biology involving evolution and sequence-structure-function relationships. Exceptionally well-qualified candidates who are interested in theoretical and computational investigations related to the topics just mentioned, and with expertise in one or more of the following areas, are encouraged to apply:

(a) immunology
(b) virology
(c) sequence analysis
(d) statistical analysis
(e) neural net/pattern recognition analysis
(f) programming skills (C language)
(g) computational biology
(h) structural biology

Candidates may contact the following address for application material:

email: donna at lanl.gov
post: Donna Spitzmiller, MS B213, Theoretical Division, LANL, Los Alamos, New Mexico 87545
voice: 505-665-3209
FAX: 505-665-3003

Questions concerning specific positions should be sent to the same address and will be distributed to appropriate staff members for reply. Please indicate in your initial inquiry whether you are interested in a student, tech, or postdoctoral position. Candidates for tech positions cannot have a Ph.D. degree. Candidates for postdoctoral positions must have completed their Ph.D. within the last three years. Los Alamos National Laboratory is an equal opportunity employer.
Thank you, Alan Lapedes Los Alamos National Laboratory T-13 Los Alamos, NM 87545

From paolo at mcculloch.ing.unifi.it Fri Jun 24 13:17:18 1994 From: paolo at mcculloch.ing.unifi.it (Paolo Frasconi) Date: Fri, 24 Jun 94 19:17:18 +0200 Subject: Report available Message-ID: <9406241717.AA15610@mcculloch.ing.unifi.it>

FTP-host: ftp-dsi.ing.unifi.it
FTP-file: pub/tech-reports/em.tr-11-94.ps.Z
URL: ftp://ftp-dsi.ing.unifi.it/pub/tech-reports/em.tr-11-94.ps.Z

The following technical report is available by anonymous ftp. Length is 41 pages.

------------------------------------------------------------------

An EM Approach to Learning Sequential Behavior

Yoshua Bengio
Dept. Informatique et Recherche Operationnelle
Universite de Montreal, Montreal, Qc H3C-3J7
bengioy at iro.umontreal.ca

Paolo Frasconi
Dipartimento di Sistemi e Informatica
Universita di Firenze (Italy)
paolo at mcculloch.ing.unifi.it

Tech. Report DSI 11/94, Universita di Firenze

Abstract

We consider problems of sequence processing and we propose a solution based on a discrete state model. We introduce a recurrent architecture having a modular structure that allocates subnetworks to discrete states. Different subnetworks model the dynamics (state transitions) and the output of the model, conditional on the previous state and an external input. The model has a statistical interpretation and can be trained by the EM or GEM algorithms, considering state trajectories as missing data. This makes it possible to decouple temporal credit assignment from actual parameter estimation. The model presents similarities to hidden Markov models, but makes it possible to map input sequences to output sequences, using the same processing style as recurrent networks. For this reason we call it Input/Output HMM (IOHMM).
Another remarkable difference is that IOHMMs are trained using a supervised learning paradigm (while potentially taking advantage of the EM algorithm), whereas standard HMMs are trained by an unsupervised EM algorithm (or a supervised criterion with gradient ascent). We also study the problem of learning long-term dependencies with Markovian systems, making comparisons to recurrent networks trained by gradient descent. The analysis reported in this paper shows that Markovian models generally suffer from a problem of diffusion of temporal credit for long-term dependencies and fully connected transition graphs. However, while recurrent networks exhibit a conflict between long-term information storing and trainability, these two requirements are either both satisfied or both not satisfied in Markovian models. Finally, we demonstrate that EM supervised learning is well suited for solving grammatical inference problems. Experimental results are presented for the seven Tomita grammars, showing that these adaptive models can attain excellent generalization. --------------------------------------------------------- Paolo Frasconi Dipartimento di Sistemi e Informatica. Via di Santa Marta 3 50139 Firenze (Italy) +39 (55) 479-6361 / fax +39 (55) 479-6363 --------------------------------------------------------- From jjg at phoenix.Princeton.EDU Fri Jun 24 16:47:54 1994 From: jjg at phoenix.Princeton.EDU (Jack J. Gelfand) Date: Fri, 24 Jun 94 16:47:54 EDT Subject: Postdoctoral Position Message-ID: <9406242047.AA04846@tucson.Princeton.EDU> Postdoctoral Position - Recent Ph.D. for research in hybrid force/position control and modeling of the human motor control system. This is a joint project in the Department of Mechanical and Aerospace Engineering and Department of Psychology. Candidate must have strong background in adaptive control theory with an interest in motor physiology. Also includes work on anthropomorphic robots. Please send a vita and a list of 3 references to: Dr. 
Jack Gelfand Princeton University 1-S-6 Green Hall Princeton, NJ 08544 jjg at phoenix.princeton.edu 609-258-2930 We would prefer someone who is available before October, 1994, but will consider all applicants. Position is for 1 or 2 years. Princeton University is an equal opportunity employer. From hwang at pierce.ee.washington.edu Mon Jun 27 09:10:13 1994 From: hwang at pierce.ee.washington.edu (Jenq-Neng Hwang) Date: Mon, 27 Jun 94 06:10:13 PDT Subject: Advance-Program and Registration of NNSP'94 Message-ID: <9406271310.AA14410@pierce.ee.washington.edu.Jaimie> 1994 IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING September 6-8, 1994 Ermioni, Greece The Workshop, sponsored by the Neural Network Technical Committee of the IEEE Signal Processing Society, in cooperation with the IEEE Neural Network Council and with co-sponsorship from ARPA and Intracom S.A. Greece, is designed to serve as a regular forum for researchers from universities and industry who are interested in interdisciplinary research on neural networks for signal processing applications. In the present scope, the workshop encompasses up-to-date research results in several key areas, including learning algorithms, network architectures, speech processing, image processing, adaptive signal processing, medical signal processing, and other applications. GENERAL CHAIR John Vlontzos, INTRACOM S.A. Peania, Attica, Greece, jvlo at intranet.gr PROGRAM CHAIR Jenq-Neng Hwang, University of Washington Seattle, Washington, USA, hwang at ee.washington.edu PROCEEDINGS CHAIR Elizabeth J. Wilson, Raytheon Co. Marlborough, MA, USA, bwilson at sud2.ed.ray.com FINANCE CHAIR Demetris Kalivas, INTRACOM S.A. Peania, Attica, Greece, dkal at intranet.gr PROGRAM COMMITTEE Joshua Alspector (Bellcore, USA) Les Atlas (U. of Washington, USA) Charles Bachmann (Naval Research Lab. USA) David Burr (Bellcore, USA) Rama Chellappa (U. of Maryland, USA) Lee Giles (NEC Research, USA) Steve J. Hanson (Siemens Corp.
Research, USA) Yu-Hen Hu (U. of Wisconsin, USA) Jenq-Neng Hwang (U. of Washington, USA) Bing-Huang Juang (AT&T Bell Lab., USA) Shigeru Katagiri (ATR Japan) Sun-Yuan Kung (Princeton U., USA) Gary M. Kuhn (Siemens Corp. Research, USA) Stephanos Kollias (National Tech. U. of Athens, Greece) Richard Lippmann (MIT Lincoln Lab., USA) Fleming Lure (Caelum Research Co., USA) John Makhoul (BBN Lab., USA) Richard Mammone (Rutgers U., USA) Elias Manolakos (Northeastern U., USA) Nahesan Niranjan (Cambridge U., UK) Tomaso Poggio (MIT, USA) Jose Principe (U. of Florida, USA) Wojtek Przytula (Hughes Research Lab., USA) Ulrich Ramacher (Siemens Corp., Germany) Bhaskar D. Rao (UC San Diego, USA) Andreas Stafylopatis (National Tech. U. of Athens, Greece) Noboru Sonehara (NTT Co., Japan) John Sorensen (Tech. U. of Denmark, Denmark) Yoh'ichi Tohkura (ATR, Japan) John Vlontzos (Intracom S.A., Greece) Raymond Watrous (Siemens Corp. Research, USA) Christian Wellekens (Eurecom, France) Yiu-Fai Issac Wong (Lawrence Livermore Lab., USA) Barbara Yoon (ARPA, USA) TENTATIVE ADVANCE PROGRAM OF NNSP'94, ERMIONI, GREECE ---------------------------------------------------- ****** TUESDAY, SEPTEMBER 6TH, 1994 ****** 8:15 am -- 8:30 am ------------------ OPENING REMARKS: John Vlontzos 8:30 am -- 9:20 am ------------------ PLENARY TALK: Effective VC Dimensions -- Leon Bottou, Neuristique Inc., France 9:30 am -- 11:50 am ------------------- LEARNING ALGORITHMS I: (oral presentation) Chair: John Sorenson A Novel Unsupervised Competitive Learning Rule with Learning Rate Adaptation for Noise Cancelling and Signal Separation -- M. Van Hulle -- (Laboratorium voor Neuro-en Psychofysiologie, Belgium) A Statistical Inference Based Growth Criterion for the RBF Network -- V. Kadirkamanathan -- (University of Sheffield, United Kingdom) Neural Network Inversion Techniques for EM Training and Testing of Incomplete Data -- J. N. Hwang, C. J. 
Wang -- (University of Washington, USA) OSA--A Topological Algorithm for Constructing Two-Layer Neural Networks -- F.M. Frattale Mascioli, G. Martinelli -- (University of Rome, Italy) Adaptive Regularization -- L. Hansen, C. Rasmussen, C. Svarer, J. Larsen -- (Technical University of Denmark, Denmark) Generalization Performance of Regularized Neural Network Models -- J. Larsen, L. Hansen -- (Technical University of Denmark, Denmark) An Application of Importance-Based Feature Extraction in Reinforcement Learning (080) -- D. Finton, Y.H. Hu -- (University of Wisconsin-Madison, USA) 12:00 pm -- 12:40 pm -------------------- LEARNING ALGORITHMS II: (3-minute oral preview of poster presentations) Chair: Andreas Stafylopatis Multilayer Perceptron Design Algorithm -- E. Wilson, D. Tufts -- (Raytheon Company, USA) Mixture Density Estimation via EM Algorithm with Deterministic Annealing -- N. Ueda, R. Nakano -- (Purdue University, USA) Faster and Better Training of Multi-Layer Perceptron for Forecasting Problems -- R. Laddad, U. Desai, P. Poonacha -- (Indian Institute of Technology, India) An Interval Computation Approach to Backpropagation -- C. Pedreira, E. Parente -- (Catholic University of Rio de Janeiro, Brazil) Robust Estimation for Radial Basis Functions -- A. Bors, I. Pitas -- (University of Thessaloniki, Greece) NETWORK ARCHITECTURES I: (3-minute oral preview of poster presentations) Chair: Yu-Hen Hu A Hybrid Neural Network Architecture for Automatic Object Recognition -- T. Fechner, R. Tanger -- (Daimler Benz Forschungsgruppe Systemtechnik, Germany) Time Series Prediction Using Genetically Trained Wavelet Networks -- A. Prochazka, V. Sys -- (Prague University of Chemical Technology, Czech Republic) A Network Of Physiological Neurons With Differentiated Excitatory And Inhibitory Units Possessing Pattern Recognition Capacity -- E. Ventouras, M. Kitsonas, S. Hadjiagapis, N. Uzunoglu, C. -- Papageorgiou, A. Rabavilas, C.
Stefanis -- (National Technical University of Athens, Greece) Locally Excitatory Globally Inhibitory Oscillator Networks: Theory and Application to Pattern Segmentation -- D. Wang, D. Terman -- (The Ohio State University, USA) Learning with Imperfect Perception -- W. Wen and M. Yokoo -- (NTT Communication Science Laboratories, Japan) A Learning Algorithm for Multi-Layer Perceptrons with Hard-Limiting Threshold Units -- R. Goodman, Z. Zeng -- (California Institute of Technology, USA) The Selection of Neural Models of Non-Linear Dynamical Systems by Statistical Tests -- D. Urbani, P. Roussel-Ragot, L. Personnaz, G. Dreyfus -- (ESPCI de la Ville de Paris, France) 1:30 pm -- 2:50 pm ------------------ LEARNING ALGORITHMS II and NETWORK ARCHITECTURES I: (poster presentations) 3:00 pm -- 5:40 pm ------------------ NETWORK ARCHITECTURES: (oral presentation) Chair: Stephanos Kollias The Use of Recurrent Neural Networks for Classification -- T. Burrows, M. Niranjan -- (Cambridge University, United Kingdom) Network Structures for Nonlinear Digital Filters -- J.N. Lin, R. Unbehauen -- (Lehrstuhl fur Allgemeine und Theoretische Elektrotechnik Universit, Federal Republic Germany) Pruning Recurrent Neural Networks for Improved Generalization Performance -- C. Omlin, C.L. Giles -- (University of Maryland, USA) A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses -- A. Back, E. Wan, S. Lawrence, and A. C. Tsoi -- (The University of Queensland, Australia) Spectral Feature Extraction Using Poisson Moments -- S. Celebi, J. Principe -- (University of Florida, USA) Application of the Fuzzy Min-Max Neural Network Classifier to Problems with Continuous and Discrete Attributes -- A. Likas, K. Blekas, A. Stafylopatis -- (National Technical University of Athens, Greece) Time Signal Filtering by Relative Neighborhood Graph Localized Linear Approximation -- J. 
Sorensen -- (Technical University of Denmark, Denmark) Classification Using Hierarchical Mixtures of Experts -- S. Waterhouse, A. Robinson -- (Cambridge University, United Kingdom) 7:00 pm -- 9:00 pm ------------------ PANEL DISCUSSION: NEURAL NETWORKS FOR INDUSTRIAL APPLICATIONS Moderator: Gary Kuhn (Siemens Corporate Research, USA) Panelist: Kazuo Asakawa (Fujitsu Laboratories Ltd., Japan) Leon Bottou (Neuristique Inc., France) Dan Hammerstrom (Adaptive Solution Inc., USA) Kevin Farrel (Rutgers University, USA) John Vlontzos (Intracom S. A., Greece) Georg Zimmermann (Siemens, Germany) ****** Wednesday, SEPTEMBER 7TH, 1994 ****** 8:30 am -- 9:20 am ------------------ PLENARY TALK: Massively Parallel Context Processing -- Dan Hammerstrom, Adaptive Solution Inc., USA 9:30 am -- 11:30 am ------------------- SPEECH PROCESSING I: (oral presentation) Chair: Bing-Huang Juang Recurrent Network Automata for Speech Recognition: A Summary of Recent Work -- R. Gemello, D. Albesano, F. Mana, R. Cancelliere -- (Centro Studie Laboratori Telecommunicazioni, Italy) Acoustic Echo Cancellation for Hands-free Telephony Using Neural Networks -- A. Birkett, R. Goubran -- (Carleton University, Canada) Minimum Error Training for Speech Recognition -- E. McDermott, S. Katagiri -- (ATR Human Information Processing Research Laboratories, Japan) Connectionist Model Combination for Large Vocabulary Speech Recognition -- M. Hochberg, G. Cook, S. Renals, T. Robinson -- (Cambridge University, United Kingdom) Neural Tree Network/Vector Quantization Probability Estimators for Speaker Recognition -- K. Farrell, S. Kosonocky, R. Mammone -- (Rutgers University, USA) Parallel Training of MLP Probability Estimators for Speech Recognition: A Gender-Based Approach -- N. Mirghafori, N. Morgan, H. 
Bourlard -- (International Computer Science Institute, USA) 11:45 am -- 12:30 pm -------------------- Speech Processing II: (3-minute oral preview of poster presentations) Chair: Shigeru Katagiri LVQ as a Feature Transformation for HMMs -- K. Torkkola -- (Institute Dalle Molle D'Intelligence Artificielle Perceptive, Switzerland) Autoassociator-Based Modular Architecture for Speaker Independent Phoneme Recognition -- L. Lastrucci, G. Bellesi, M. Gori, G. Soda -- (Universita di Firenze, Italy) Non-linear Speech Analysis Using Recurrent Radial Basis Function Networks -- P. Moakes, S. Beet -- (University of Sheffield, England) Word Recognition Using a Neural Network and a Phonetically Based DTW -- Y. Matsuura, H. Miyazawa, T. Skinner -- (Meidensha Corp., Japan) A Monolithic Speech Recognizer Based on Fully Recurrent Neural Networks -- K. Kasper, H. Reininger, D. Wolf, H. Wust -- (Johann Wolfgang Goethe-Universitat, FRG) Fuzzification of Formant Trajectories for Classification of CV Utterances Using Neural Network Models -- B. Yegnanarayana, C. C. Sekhar, S. Prakash -- (Indian Institute of Technology, India) Minimum Error Classification of Keyword-Sequences -- T. Komori, S. Katagiri -- (ATR Human Information Processing Research Laboratories, Japan) Hybrid Training Method for Tied Mixture Density Hidden Markov Models Using Learning Vector Quantization and Viterbi Estimation -- M. Kurimo -- (Helsinki University of Technology, Finland) Image Processing I: (3-minute oral preview of poster presentations) Chair: Yiu-Fai Issac Wong Medical Imaging with Neural Networks -- C. Pattichis, A. Constantinides -- (University of Cyprus, Cyprus) High Resolution Image Reconstruction Using Mean Field Annealing -- T. Numnonda, M. Andrews -- (University of Auckland, New Zealand) Hardware Neural Network Implementation of Tracking System -- G. Lendaris, R. Pap, R. Saeks, C. Thomas, R. Akita -- (Accurate Automation Corp., USA) Fast Image Analysis Using Kohonen Maps -- D. Willett, C. Busch, F.
Seibert -- (Darmstadt Computer Graphics Center, FRG) Analysis of Satellite Imagery Using a Neural Network Based Terrain Classifier -- M. Perrone, M. Larkin -- (Brown University, USA) 1:30 pm -- 2:50 pm ------------------ Speech Processing II and Image Processing I: (poster presentations) 3:00 pm -- 5:00 pm ------------------ IMAGE PROCESSING II: (oral presentation) Chair: Sun-Yuan Kung Moving Objects Classification in a Domestic Environment Using Quadratic Neural Network -- G. Lim, M. Alder, C deSilva, Y. Attikiouzel -- (The Univ. of Western Australia, Western Australia) Application of the HLVQ Neural Network to Hand-Written Digit Recognition -- B. Solaiman, Y. Autret -- (Ecole Nationale Superieure des Telecommunications de Bretagne, France) Neural Networks for Robust Image Feature Classification: A Comparative Study -- S. Madiraju, C.C. Liu -- (The University of Melbourne, Australia) Application of SVD Networks to Multi-Object Motion-Shape Analysis -- S.Y. Kung, J. Taur, M.Y. Chiu -- (Princeton University, USA) Ensemble Methods for Automatic Masking of Clouds in AVIRIS Imagery -- C.M. Bachmann, E.E. Clothiaux, J.W. Moore, K. J. Andreano, -- D. Q. Luong -- (Naval Research Laboratory, USA) Saddle-node Dynamics for Edge Detection (092) -- Y-F Wong -- (Lawrence Livermore National Laboratory, USA) ****** THURSDAY, SEPTEMBER 8TH, 1994 ****** 8:30 am -- 9:20 am ------------------ PLENARY TALK: Knowledge-Based Neural Networks -- Kazuo Asakawa, Fujitsu Laboratories Ltd., Japan 9:30 am -- 10:20 am ------------------- INVITED TALK: Predicting Impredictability -- Andreas S. Weigend, University of Colorado, USA 10:30 am -- 11:50 am ------------------- ADAPTIVE SIGNAL PROCESSING: (oral presentation) Chair: Elias Manolakos A Neural Network Trained with the Extended Kalman Algorithm Used for the Equalization of a Binary Communication Channel -- M. 
Birgmeier -- (Technische Universit, Austria) Neural-net Based Receiver Structures for Single- and Multi-amplitude Signals in Interference Channels -- D. Bouras, P.T. Mathiopoulos, D. Makrakis -- (The University of British Columbia, Canada) A Hybrid Digital Computer-Hopfield Neural Network CDMA Detector for Real-time Multi-user Demodulation -- G. Kechriotis and E. Manolakos -- (Northeastern University, USA) Improving the Resolution of a Sensor Array Pattern by Neural Networks and Learning -- C. Bracco, S. Marcos, M. Benidir -- (Laboratoire des Signaux and Systemes, France) 12:00 pm -- 12:40 pm -------------------- Other Applications: (3-minute oral preview of poster presentations) Chair: Barbara Yoon and Jose Principe Sensitivity Analysis on Neural Networks for Meteorological Variable Forecasting -- J. Castellanos, A. Pazos, J. Rios, J.L. Zafra -- (Facultad de Informatica - UPM, Spain) A Hopfield Net Based Adaptation Algorithm for Phased Antenna Arrays -- M. Alberti -- (University of Paderborn, Germany) Modeling of Glaucoma Induced Changes in the Retina and Neural Net Assisted Diagnosis -- S. von Spreckelsen, P. Grumstup, J. Johnsen, L. Hansen -- (Technical University of Denmark, Denmark) Blind Deconvolution of Signals Using a Complex Recurrent Network -- A. Back, A.C. Tsoi -- (The University of Queensland, Australia) Continuous-time Nonlinear Signal Processing: A Neural Network Approach for Gray Box Identification -- R. Rico-Martinez, J. Anderson, I. Kevrekidis -- (Princeton University, USA) A Quantitative Study of Evoked Potential Estimation Using a Feedforward Neural Network -- A. Dumitras, A. Murgan, V. Lazarescu -- (Technical University of Bucharest, Romania) Neural Estimation of Kinetic Rate Constants from Dynamic Pet-Scans -- T. Fog, L. Nielsen, L. Hansen, S. Holm, I. Law, C. Svarer, -- O. Paulson -- (Technical University of Denmark, Denmark) Auditory Stream Segregation Based on Oscillatory Correlation -- D. 
Wang -- (The Ohio State University, USA) Application of Neural Networks for Sensor Performance Improvement -- S. Poopalasingam, C. Reeves, and N. Steele -- (Coventry University, United Kingdom) Neural-Network Based Classification of Laser-Doppler Flowmetry Signals -- N. Panagiotidis, A. Delopoulos, S. Kollias -- (National Technical University of Athens, Greece) NeuroDevice - Neural Network Device Modelling Interface for VLSI Design -- P. Ojala, J. Saarinen, K. Kaski -- (Tampere University of Technology, Finland) Encoding Pyramids by Labeling RAAM -- S. Lonardi, A. Sperduti, A. Starita -- (Corso Italia 40, Italy) Reconstructed Dynamics and Chaotic Signal Modeling -- J.M. Kuo, J. Principe -- (University of Florida, USA) A Neural Network Scheme for Earthquake Prediction Based on the Seismic Electric Signals -- S. Lakkos, A. Hadjiprocopis, R. Comley -- (City University of London, United Kingdom) 1:30 pm -- 2:50 pm ------------------ Other Applications: (poster presentations) 3:00 pm -- 5:20 pm ------------------ MEDICAL SIGNAL PROCESSING: (oral presentation) Chair: Hsin-Chia Fu Medical Diagnosis and Artificial Neural Networks: A Medical Expert System Applied to Pulmonary Diseases -- G.P. Economou, C. Spriopoulos, N. Economopoulos, N. Charokopos, -- D. Lymberopoulos, M. Spiliopoulou, E. Haralambopulu, C. Goutis -- (University of Patras, Greece) Toward Improving Exercise ECG for Detecting Ischemic Heart Disease with Recurrent and Feedforward Neural Nets -- G. Dorffner, E. Leitgeb, H. Koller -- (Austrian Research Institute for Artificial Intelligence, Austria) Neural Networks and Higher Order Spectra for Breast Cancer Detection -- T. Stathaki, A. Constantinides -- (Imperial College, United Kingdom) Towards Semen Quality Assessment Using Neural Networks -- C. Linneberg, P. Salamon, C. Svarer, L. Hansen -- (Technical University of Denmark, Denmark) Use of Neural Networks in Detection of Ischemic Episodes from ECG Leads -- N. Maglaveras, T. Stamkopoulos, C. Pappas, M.
Strintzis -- (Aristotelian University, Greece) EEG Signal Analysis Using a Multi-layer Perceptron with Linear Preprocessing -- S.A. Mylonas, R.A. Comley -- (City University of London, United Kingdom) ************** THE END *************** REGISTRATION FORM 1994 IEEE Workshop on Neural Networks for Signal Processing Please complete this form (type or print) Name ___________________________________________________________ Last First Middle Firm or University _____________________________________________ Mailing Address ________________________________________________ ________________________________________________________________ ________________________________________________________________ Country Phone FAX Fee payment must be made by money order or personal check. Do not send cash. Make fee payable to "IEEE NNSP'94 - c/o D. Kalivas" and mail it together with the registration form to: NNSP'94 c/o D. Kalivas Intracom S.A. P.O. Box 68 19002 Peania Greece. For further information, Dr. Kalivas can be reached at Tel.: 011 301 6860479 FAX: 011 301 6860312 e-mail: dkal at intranet.gr Advance registration: before July 15 _________________________________________________________________________ _________________________________________________________________________ REGISTRATION FEE AND HOTEL ACCOMMODATIONS Registration fee with single room (for three nights: September 5, 6 and 7) and meals for one person Date IEEE Member Non-member __________________________________________________________________ Before July 15 U.S. $600 U.S. $650 * After July 15 U.S. $650 U.S. $700 Registration fee with double room (for three nights: September 5, 6 and 7) and meals for two persons Date IEEE Member Non-member __________________________________________________________________ Before July 15 U.S. $720 U.S. $770 * After July 15 U.S. $770 U.S.
$820 Registration fee without room but still with meals Date IEEE Member Non-member __________________________________________________________________ Before July 15 U.S. $400 U.S. $450 * After July 15 U.S. $450 U.S. $500 Additional nights at extra cost (U.S. $100 per night) with no meals Please specify dates and include payment in the registration fee Number of additional nights: ______________________________ Dates: ____________________________________________________ Extra Cost: _______________________________________________ * After July 15 there will be only a limited number of rooms available; we therefore strongly recommend early registration. I want a reservation for the bus leaving the Eastern Airport of Athens at 4:00 pm on the 5th of September for ___ persons. __________________________________________________________________________ __________________________________________________________________________ TRAVEL INFORMATION NNSP'94 will be held at the Porto Hydra Hotel. The Porto Hydra Hotel is located in the eastern Peloponnese, opposite the island of Hydra and near famous archaeological sites (the ancient theater of Epidaurus, Mycenae, Tiryns). A one-page map is provided. The exact hotel address is: Hotel Porto Hydra Plepi Ermionidos Greece Tel: 30 - 754 - 41112, 41270-4 FAX: 30 - 754 - 41295 Possible ways to get to Porto Hydra Hotel are: By car: Follow the route Athens - Korinthos - Epidaurus - Kranidi - Ermioni - Porto Hydra. The distance is 195 km and the drive takes approximately 2.5 hours. By ship: Take a taxi from the airport to the port of Passalimani (cost: less than 2000 drachmas). A ship leaves for Ermioni every morning. You do not need to make reservations. The trip takes four hours and 15 minutes. For time schedules call 30-1-4511311. From Ermioni a hotel bus will take you to Porto Hydra. By Hydrofoil Boats: Take a taxi from the airport to the port of ZEA (cost: less than 2000 drachmas).
There are "flying dolphins" (hydrofoil boats) going to Ermioni. You need to buy the tickets in advance (from a travel agent), because they may be sold out by the time you get to the port. The trip is two hours long. For time schedules call 30-1-4280001. From Ermioni a hotel bus will take you to Porto Hydra. Best way: We will rent a bus (or buses) to take you from the Eastern Airport of Athens (this is the airport where most of the international flights arrive) to the Porto Hydra Hotel. The buses will leave at 4:00 pm on the 5th of September. To reserve seats on the bus, please check the appropriate box and specify the number of reserved seats you want in the Registration Form. From isabelle at neural.att.com Mon Jun 27 19:20:45 1994 From: isabelle at neural.att.com (Isabelle Guyon) Date: Mon, 27 Jun 94 19:20:45 EDT Subject: No subject Message-ID: <9406272320.AA08319@neural> > - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - < - - > First UNIPEN Benchmark of On-line Handwriting Recognizers < - -> Organized by NIST < - - > - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - < - June 1994 * CALL FOR DATA (please post) At the initiative of Technical Committee 11 of the International Association for Pattern Recognition (IAPR), the UNIPEN project was started to stimulate research and development in on-line handwriting recognition (e.g. for pen computers and pen communicators). UNIPEN provides a platform for data exchange at the Linguistic Data Consortium (LDC) and is organizing a worldwide benchmark this year under the control of the US National Institute of Standards and Technology (NIST). The benchmark is concerned with writer-independent recognition of sentences, isolated words and isolated characters of any writing style (handprinted and/or cursive). Although UNIPEN will provide, in the future, data for various alphabets, this particular benchmark is limited to letters and symbols from an English computer keyboard.
The data will be donated by the participants. * Conditions of participation Participation in the benchmark is open to any individual or institution who provides a sample of handwriting in the UNIPEN format which contains at least 12,000 characters. The data must be of acceptable quality and donated by October 1st, 1994. The database of donated samples will be available for free to the data donors. Registration material can be obtained by sending email to Stan Janet at stan at magi.ncsl.nist.gov or via ftp: ftp ftp.cis.upenn.edu Name: anonymous Password: [use your email address] ftp> cd pub/UNIPEN-pub/documents ftp> get call-for-data.ps ftp> quit * Organizing committee Isabelle Guyon, AT&T Bell Laboratories, USA Lambert Schomaker, Nijmegen Institute for Cognition and Information, The Netherlands Stan Janet, National Institute of Standards and Technology, USA Mark Liberman, Linguistic Data Consortium, University of Pennsylvania, USA Rejean Plamondon, IAPR, TC11, Ecole Polytechnique de Montreal, Canada From rreilly at nova.ucd.ie Tue Jun 28 10:40:19 1994 From: rreilly at nova.ucd.ie (Ronan Reilly) Date: Tue, 28 Jun 1994 15:40:19 +0100 Subject: INNC'94: Programme & registration (433 lines) Message-ID: A non-text attachment was scrubbed... From L.P.OMard at lut.ac.uk Wed Jun 29 12:50:52 1994 From: L.P.OMard at lut.ac.uk (L.P.OMard) Date: Wed, 29 Jun 94 12:50:52 bst Subject: Announcing the "lutear-auditory-simulation" Mailing List Message-ID: <9406291150.AA04238@hpc.lut.ac.uk> Hello People, This message is to announce the "lutear-auditory-simulation" mailing list.
Anybody (internationally, via the Internet) may join the list by sending the following message to "mailbase at mailbase.ac.uk": To: mailbase at mailbase.ac.uk Subject: (you may leave this blank) Message text: join lutear-auditory-simulation Ann Jones stop - where "Ann Jones" should be replaced by your own first and second names. What is LUTEar? ---------------- The LUTEar Core Routines Library (CRL, version 1.5.2, October 1993) is a computational platform and set of coding conventions which supports a modular approach to auditory system modelling. The system is written in ANSI-C and works on a wide range of operating systems. It is available via anonymous FTP from:- suna.lut.ac.uk (131.231.16.2): /public/hulpo/lutear. The CRL brings together established models, developed by the group and also contributed by other researchers in the field, which simulate various stages in the auditory process. Since the first release, the LUTEar CRL has been tested and used both at the originating laboratory and at many other sites. It has been used as a tool for speech processing, speech and voice analysis, as well as in the investigation of auditory phenomena, for which it was primarily constructed. Included with this release is a comprehensive series of test programs. These programs were used to test the CRL routines; they reproduce the behaviour of the respective published models included. The programs also provide examples of how the CRL may be used in auditory investigation programs. In addition, the programs read data from parameter files, and thus can be readily used to investigate further the behaviour of the models included in the CRL. FTP the "README" file for a more comprehensive list of features. -----------------||---------------------- The next release of LUTEar will be announced on both the present mailing list and the new "lutear-auditory-simulation" list, but all future announcements will be sent only to the new mailing list.
The aims of the "lutear-auditory-simulation" mailing list are as follows:- (1) Quick bug reporting to all concerned: People can report suspected bugs, so that, while I investigate, others are aware of potential problems. (2) The expression of "wish lists" which others can comment on: This might give me an idea of priorities for extensions of LUTEar. (3) Discussion of the development of LUTEar: The development of LUTEar is an on-going process. The mailing list would provide a forum for discussion of any major changes which I am considering. This would allow me to avoid introducing unpopular changes in LUTEar. (4) Encouraging users to share extra routines they have written amongst themselves: One of the very attractive features of LUTEar is that it provides a common platform for investigations, so that people can compare results obtained with a "standard" piece of software. Extra routines written by users can be assessed, by other users as well as by myself, and subsequently incorporated in the standard code. (5) Reporting of successes and failures in attempting to apply LUTEar to various problems: This will give everyone feedback about the performance of LUTEar, providing individuals with further ideas on how they can use it, and also highlighting as yet unresolved difficulties. (6) The discussion of optimum parameter settings for various applications: Hopefully, when people report results obtained with LUTEar, they will be quite specific about which routines and parameters they used. (7) Exchange of scientific information amongst LUTEar users: Users of LUTEar must have common scientific interests! Many of us probably already know each other and something about each other's work. Nevertheless, it would be good to know who "the others" are, and to discuss the specific problems we are trying to apply LUTEar to. I am indebted to Angela Darling (angie at phonetics.ucl.ac.uk) for providing the major portion of the above text. ..Lowel.
+-------------------------+-----------------------------------------------+ |Lowel P. O'Mard PhD. | /\ / \ Speech & Hearing | |Dept. of Human Sciences, | /\/\ /\/ \/ /\ \ /\ Laboratory | |University of Technology,|_/\/\/ /\ \/\/ /\ /\/ \ \/ /\/\_ /\___ | |Loughborough, | \/\/ \/\/\/ \/ /\ \/\/ /\ / | |Leics. LE11 3TU, U.K. | \ /\/\/\ /\/ \ /\/\/ \/ Director: | |L.P.OMard at lut.ac.uk | \/ \/ \/ Prof. Ray Meddis | +-------------------------+-----------------------------------------------+ From lss at compsci.stirling.ac.uk Wed Jun 29 10:08:01 1994 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 29 Jun 94 10:08:01 BST (Wed) Subject: TR available:Synchronization in Dynamic Neural Networks Message-ID: <9406291008.AA04295@uk.ac.stir.cs.peseta> ***DO NOT FORWARD TO OTHER GROUPS*** University of Stirling, Centre for Cognitive and Computational Neuroscience, Stirling FK9 4LA, Scotland CCCN Technical report CCCN-18 (Ph.D. Thesis) Synchronization in Dynamic Neural Networks David E. Cairns This thesis is concerned with the function and implementation of synchronization in networks of oscillators. Evidence for the existence of synchronization in cortex is reviewed and a suitable architecture for exhibiting synchronization is defined. A number of factors which affect the performance of synchronization in networks of laterally coupled oscillators are investigated. It is shown that altering the strength of the lateral connections between nodes and altering the connective scope of a network can be used to improve synchronization performance. It is also shown that complete connective scope is not required for global synchrony to occur. The effects of noise on synchronization performance are also investigated and it is shown that where an oscillator network is able to synchronize effectively, it will also be robust to a moderate level of noise in the lateral connections. 
Where a particular oscillator model shows poor synchronization performance, it is shown that noise in the lateral connections is capable of improving synchronization performance. A number of applications of synchronizing oscillator networks are investigated. The use of synchronized oscillations to encode global binding information is investigated and the relationship between the form of grouping obtained and connective scope is discussed. The potential for using learning in synchronizing oscillator networks is illustrated and an investigation is made into the possibility of maintaining multiple phases in a network of synchronizing oscillators. It is concluded from these investigations that it is difficult to maintain multiple phases in the network architecture used throughout this thesis, and a modified architecture capable of producing the required behaviour is demonstrated. This report is available by anonymous FTP from ftp.cs.stir.ac.uk (139.153.254.29) in the directory pub/tr/cccn. The filename is TR18.ps.Z (as usual, this needs to be transferred in binary mode, decompressed, and the resulting PostScript printed). The decompressed file is 13.0 Mb long, and the printed document 100 pages. Hard copies may be made available for a price (unfortunately, our budget does not run to free copies of theses). Because the thesis contains grey-level illustrations, it does not photocopy well. If necessary, email lss at cs.stir.ac.uk. From petsche at scr.siemens.com Wed Jun 29 14:30:57 1994 From: petsche at scr.siemens.com (Thomas Petsche) Date: Wed, 29 Jun 1994 14:30:57 -0400 Subject: New book Message-ID: <199406291830.OAA15744@puffin.scr.siemens.com> The following book is now available from MIT Press: COMPUTATIONAL LEARNING THEORY AND NATURAL LEARNING SYSTEMS Volume II: Intersection between Theory and Experiments Edited by Stephen J. Hanson, Thomas Petsche, Michael Kearns, and Ronald L. Rivest.
The book is the result of a workshop of the same name which brought together researchers from learning theory, machine learning, and neural networks. The book includes 23 chapters by authors in these various fields, plus a unified bibliography and index: 1. Bayes Decisions in a Neural Network-PAC Setting Svetlana Anulova, Jorge R. Cuellar, Klaus-U. Hoeffgen and Hans-U. Simon 2. Average Case Analysis of $k$-CNF and $k$-DNF Learning Algorithms Daniel S. Hirschberg, Michael J. Pazzani and Kamal M. Ali 3. Filter Likelihoods and Exhaustive Learning David H. Wolpert 4. Incorporating Prior Knowledge into Networks of Locally-Tuned Units Martin Roescheisen, Reimar Hofmann and Volker Tresp 5. Using Knowledge-Based Neural Networks to Refine Roughly-Correct Information Geoffrey G. Towell and Jude W. Shavlik 6. Sensitivity Constraints in Learning Scott H. Clearwater and Yongwon Lee 7. Evaluation of Learning Biases Using Probabilistic Domain Knowledge Marie desJardins 8. Detecting Structure in Small Datasets by Network Fitting under Complexity Constraints W. Finnoff and H.G. Zimmermann 9. Associative Methods in Reinforcement Learning: An Empirical Study Leslie Pack Kaelbling 10. A Schema for Using Multiple Knowledge Matjaz Gams, Marko Bohanec and Bojan Cestnik 11. Probabilistic Hill-Climbing William W. Cohen, Russell Greiner and Dale Schuurmans 12. Prototype Selection Using Competitive Learning Michael Lemmon 13. Learning with Instance-Based Encodings Henry Tirri 14. Contrastive Learning with Graded Random Networks Javier R. Movellan and James L. McClelland 15. Probability Density Estimation and Local Basis Function Neural Networks Padhraic Smyth 16. Hamiltonian Dynamics of Neural Networks Ulrich Ramacher 17. Learning Properties of Multi-Layer Perceptrons with and without Feedback D. Gawronska, B. Schuermann and J. Hollatz 18. Unsupervised Learning for Mobile Robot Navigation Using Probabilistic Data Association Ingemar J. Cox and John J. Leonard 19.
Evolution of a Subsumption Architecture that Performs a Wall Following Task for an Autonomous Mobile Robot John R. Koza 20. A Connectionist Model of the Learning of Personal Pronouns in English Thomas R. Shultz, David Buckingham and Yuriko Oshima-Takane 21. Neural Network Modeling of Physiological Processes Volker Tresp, John Moody and Wolf-Ruediger Delong 22. Projection Pursuit Learning: Some Theoretical Issues Ying Zhao and Christopher G. Atkeson 23. A Comparative Study of the Kohonen Self-Organizing Map and the Elastic Net Yiu-fai Wong The book is ISBN 0-262-58133-7 and the price is $35 (I believe). Additional ordering information can be obtained from: Neil Blaisdell MIT/Bradford Books Sales Department blaisdel at mit.edu (They will take a credit card order if you like, and trust the net with your credit card number.) From massone at mimosa.eecs.nwu.edu Thu Jun 30 12:13:59 1994 From: massone at mimosa.eecs.nwu.edu (Lina Massone) Date: Thu, 30 Jun 94 11:13:59 CDT Subject: paper available Message-ID: <9406301613.AA09680@mimosa.eecs.nwu.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: massone.sensorimotor.ps.Z The following paper has been placed in the Neuroprose archive. SENSORIMOTOR LEARNING Lina L. E. Massone Dept. of Biomedical Engineering Dept. of Electrical Engineering and Computer Science Northwestern University (to appear in The Handbook of Brain Theory and Neural Networks, Michael A. Arbib Ed., MIT Press.) The paper does not have an abstract, so what follows are the first few paragraphs of the introduction and the list of contents. 1. INTRODUCTION A sensorimotor transformation maps signals from various sensory modalities into an appropriate set of efferent motor commands to skeletal muscles or to robotic actuators. Sensorimotor learning refers to the process of tuning the internal parameters of the various structures of the central nervous system (CNS), or of some processing architecture, in such a way that a satisfactory motor performance will result.
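[Editorial aside: the parameter-tuning view in the paragraph above can be made concrete with a deliberately tiny sketch. This is not from Massone's paper; the linear "teacher" map, the learning rate, and all names here are invented for illustration. A 2-D sensory signal is mapped to a 2-D motor command by a weight matrix W, and least-mean-squares updates tune W until the motor error is small.]

```python
import random

# Hypothetical toy sensorimotor map (invented for illustration): a 2-D
# sensory coordinate s is mapped to a 2-D motor command by weights W,
# tuned by least-mean-squares updates against a "teacher" transformation.

random.seed(0)

def teacher(s):
    # The unknown true sensory-to-motor transformation (a fixed linear map).
    return (0.8 * s[0] - 0.3 * s[1], 0.3 * s[0] + 0.8 * s[1])

W = [[0.0, 0.0], [0.0, 0.0]]   # internal parameters, initially untuned

def forward(s):
    # The learner's current sensorimotor map.
    return (W[0][0] * s[0] + W[0][1] * s[1],
            W[1][0] * s[0] + W[1][1] * s[1])

def motor_error(n=100):
    # Mean squared motor error over n random sensory inputs.
    err = 0.0
    for _ in range(n):
        s = (random.uniform(-1, 1), random.uniform(-1, 1))
        m, t = forward(s), teacher(s)
        err += (m[0] - t[0]) ** 2 + (m[1] - t[1]) ** 2
    return err / n

before = motor_error()
rate = 0.1
for _ in range(2000):                     # "practice" trials
    s = (random.uniform(-1, 1), random.uniform(-1, 1))
    m, t = forward(s), teacher(s)
    for i in range(2):                    # LMS update of each parameter
        for j in range(2):
            W[i][j] -= rate * (m[i] - t[i]) * s[j]
after = motor_error()
print(before > after)   # motor error shrinks as the parameters are tuned
```

Real sensorimotor maps are of course nonlinear and far higher-dimensional; the point is only that "learning" here is literally the tuning of internal parameters against a motor error signal.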
Sensorimotor transformations and sensorimotor learning can be viewed as complex parallel distributed information processing tasks. A number of different components contribute to the tasks' complexity. On the sensory side, signals from various sources (vision, hearing, touch, pain, proprioception) need to be integrated and interpreted (Stein and Meredith, 1993). In particular, they need to be translated into a form that can be used for motor purposes because the coordinates of afferent sensory signals are different from the coordinates of the movements they are guiding. This convergent process, from many parallel sensory signals to an intermediate internal representation, is referred to as the early stages in a sensorimotor transformation. A widely accepted concept that describes the computed internal representation is the so-called motor program: a specification of the important parameters of the movement to be executed (Keele and Summers, 1976). Bernstein used the term motor program to denote a prototype of a planned movement described with an abstract central language that encodes specific features of the movement itself. This idea was then elaborated by Schmidt (1988) and modified by Arbib (1990), who introduced the concept of coordinated control program: a combination of perceptual and motor functional units called schemas, whose dynamic interaction causes the emergence of behavior. A different approach was proposed by Schoner and Kelso (1988), who redefined motor programs in dynamic terms as transitions between the equilibrium states of a dynamical system. ETC. ETC. ETC. ETC. 2. 
COMPUTATIONAL APPROACHES 2.1 Coordinate Transformation 2.2 Forward and Inverse Models 2.3 Optimization and Learning 2.4 Representing Information ********************************************************* From seung at physics.att.com Thu Jun 30 17:20:15 1994 From: seung at physics.att.com (seung@physics.att.com) Date: Thu, 30 Jun 94 17:20:15 EDT Subject: tutorial announcement--statistical physics and learning Message-ID: <9406302120.AA04149@physics.att.com> ========================================================= What does statistical physics have to say about learning? ========================================================= Sunday, July 10, 1994 8:45 am to 12:15 pm Milledoler Hall, room 100 Rutgers University New Brunswick, New Jersey Tutorial conducted by Sebastian Seung and Michael Kearns AT&T Bell Laboratories Murray Hill, NJ 07974 Free of charge and open to the general public, thanks to sponsorship from DIMACS. Held in conjunction with the Eleventh International Conference on Machine Learning (ML94, July 11-13, 1994) and the Seventh Annual Conference on Computational Learning Theory (COLT94, July 12-15, 1994). The study of learning has historically been the domain of psychologists, statisticians, and computer scientists. Statistical physicists are the seemingly unlikely latecomers to the subject. This tutorial is an overview of the ideas they are now bringing to learning theory, and of the relationship of these ideas to statistics and computational learning theory. We focus on the analysis of learning curves, defined here as graphs of generalization error versus the number of examples used in training. We explain why supervised learning from examples can lead to learning curves with a variety of behaviors, some of which are very different from (though consistent with) the Vapnik-Chervonenkis bounds. This is illustrated most dramatically by the presence of phase transitions in certain learning models. 
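[Editorial aside: the learning curves discussed in the abstract above can be illustrated with a minimal invented example, not drawn from the tutorial itself. For the simplest possible concept class, a threshold on [0, 1], the average generalization error of a consistent "midpoint" learner can be estimated empirically as a function of the number of training examples; all names and constants below are assumptions made for the sketch.]

```python
import random

# Empirical learning curve: average generalization error versus number of
# training examples, for learning a threshold concept on [0, 1].

random.seed(1)
TRUE_THETA = 0.5                     # target rule: label = 1 iff x > 0.5

def train(n):
    # Midpoint consistent learner: place the estimated threshold halfway
    # between the largest negative and smallest positive training example.
    xs = [random.random() for _ in range(n)]
    lo = max([x for x in xs if x <= TRUE_THETA], default=0.0)
    hi = min([x for x in xs if x > TRUE_THETA], default=1.0)
    return (lo + hi) / 2.0

def gen_error(theta_hat):
    # For uniform inputs, generalization error is just |theta_hat - theta|.
    return abs(theta_hat - TRUE_THETA)

def avg_error(n, trials=2000):
    # Average over many random training sets of size n.
    return sum(gen_error(train(n)) for _ in range(trials)) / trials

curve = {n: avg_error(n) for n in (5, 20, 80)}
print(curve[5] > curve[20] > curve[80])   # error falls with more examples
```

For this class the empirical curve falls smoothly, roughly like 1/n; the interest of the statistical-physics analysis is precisely that richer models can deviate from such smooth behavior, up to the phase transitions mentioned above.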
We discuss theoretical progress towards understanding two puzzling empirical findings--that neural networks sometimes attain good generalization with fewer examples than adjustable parameters, and that generalization performance can be relatively insensitive to the size of the hidden layer. We conclude with a discussion of the relationship of the statistical physics approach with that of the Vapnik-Chervonenkis theory. No prior knowledge of learning theory will be assumed. This tutorial is one of a set of DIMACS-sponsored tutorials that are free and open to the general public. Directions to Rutgers can be found in the ML/COLT announcement, which is available via anonymous ftp from www.cs.rutgers.edu in the directory "/pub/learning94". Users of WWW information servers such as Mosaic can find the information at "http://www.cs.rutgers.edu/pub/learning94/learning94.html". Other available information includes a campus map, and abstracts of all workshops/tutorials. Questions can be directed to ml94 at cs.rutgers.edu, colt94 at research.att.com, or to Sebastian Seung at 908-582-7418 and seung at physics.att.com From gaussier at ensea.fr Wed Jun 1 11:16:48 1994 From: gaussier at ensea.fr (gaussier@ensea.fr) Date: Wed, 1 Jun 1994 15:16:48 +0000 Subject: No subject Message-ID: <9406011416.AA02894@aladdin.ensea.fr> From Perception to Action - PerAc'94 Lausanne Lausanne, Switzerland, 7-9 September 1994 A State of the Art Conference on Autonomous Robots and Artificial Life """""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" PerAc is an international conference intended to promote the study of autonomous systems interacting with an unknown world. This means that their control systems should not rely on an "a priori" model of the environment or on the intervention of an external operator during exploration. The general theme of the conference is the study of how actions can help to modify perception in an interesting manner.
Emphasis is put on "technological transfer" from biology (ethology, neurobiology...) and psychology to engineering. The conference will address theoretical aspects and applications to the design of an autonomous robot. Presentations will deal with simple behavioral robots but also with the interest of collective intelligence and genetic algorithms. Attention will also be focused on Neural Networks and building blocks, and on how to use them to realize complex architectures and "Intelligent Systems". Finally, cognitive implications of our approach will be discussed. --- Ask for the detailed programme booklet including --- registration and hotel reservation forms --- (please provide your complete postal address) +++ PerAc'94, LAMI-EPFL, CH-1015 Lausanne +++ Fax ++41 21 693-5263, E-mail perac at di.epfl.ch ----------------------------- | Short final programme | See the Call for Posters and Contest ----------------------------- near the end of this message Monday 5, Tuesday 6: Tutorials and Contest ---------- Wednesday 7 *** 1 - Opening session R. Pfeifer, University Zurich, Switzerland "From Perception to Action: The Right Direction?" *** 2 - Collective Intelligence J.L. Deneubourg, ULB Bruxelles, Belgium "Trams, Ants and Matches - Decentralization of Transportation Systems" T. Fukuda et al., University Nagoya, Japan "Self-Organizing Robotic Systems - Organization and Evolution of Group Behavior in Cellular Robotics" H. Asama, Riken Wako, Japan "Operation of Cooperative Multiple Robots using Communication in a Decentralized Robotic System" *** 3 - Simple Behavioral Robots D. McFarland, Balliol College Oxford, GB "Animal Robotics - From Self-Sufficiency to Autonomy" H. Cruse, University Bielefeld, Germany "Simple Neural Nets for Controlling a 6-legged Walking System" C. Ferrel, MIT Cambridge, USA "Robust and Adaptive Locomotion of an Autonomous Hexapod" T. Shibata et al., MEL Tsukuba, Japan "Adaptation and Learning for Lasso Robot" L.
Steels, University Bruxelles, Belgium "Mathematical Analysis of Behavior Systems" *** Poster and demo session. Contest award, winners' demos ---------- Thursday, Sept 8 J.A. Meyer, ENS Paris, France "Development, Learning and Evolution in Animats" *** 4 - Genetic Algorithms P. Husband et al., University Brighton, UK "The Use of Genetic Algorithms for the Development of Sensorimotor Control Systems" D. Floreano, University Trieste & F. Mondada, EPFL "Active Perception and Grasping - An Autonomous Perspective." M.A. Bedau, Reed College Portland, USA "The Evolution of Sensory Motor Loops" S. Nolfi et al., CNR Roma, Italy "Phenotypic Plasticity in Evolving Neural Networks" P. Bessiere, E. Dedieu & E. Mazer, CNRS Grenoble, France "Representing Robot - Environment Interactions Using Probabilities" *** 5 - Active Perception N. Franceschini, CNRS Marseille, France "Mobile Robots with Insect Eyes" X. Arreguit & E.A. Vittoz, CSEM Neuchâtel, Switzerland "Perception Systems Implemented in Analog VLSI for Real-Time Applications" P. Dario, ARTS Lab Pisa, Italy "Micro Sensors and Microactuators for Robots" F. Heitger, R. von der Heydt, ETH Zurich, Switzerland "A Computational Model of Neural Contour Processing - Figure-Ground Segregation and Illusory Contours" T. Pun et al., University of Geneva, Switzerland "Exploiting Dynamic Aspects of Visual Perception for Object Recognition" *** Poster and demo session. Banquet ---------- Friday, Sept 9 *** 6 - Building Blocks and Architectures for Designing Intelligent Systems Y. Burnod, CREARE Paris, France "Computational Properties of the Cerebral Cortex to Learn Sensorimotor Programs" J.S. Albus, W. Rippey, NIST Gaithersburg, USA "An Architecture and Methodology for Designing Intelligent Machine Systems" S. Grossberg, University Boston, USA "Neural Models for Real Time Learning and Adaptation" Ph. Gaussier, ENSEA Paris, S. Zrehen, EPF Lausanne "Why Topological Maps are Useful for Learning in an Autonomous System" C. Engels and G.
Schöner, Ruhr-University Bochum, Germany "Dynamic Field Architecture for Autonomous Systems" P.F.M.J. Verschure, NeuroScience, La Jolla, USA "Value-based Learning in a Real World Artifact" *** 7 - Complex Architecture to Control Autonomous Robots R. Chatilla, LAAS-CNRS Toulouse, France "Control Architectures for Autonomous Mobile Robots" Y.A. Bachelder & A.M. Waxman, MIT Lexington, USA "A Neural System for Qualitative Mapping and Navigation in Visual Environments" Ph. Gaussier, ENSEA Paris, S. Zrehen, EPF Lausanne, Switzerland "Complex Neural Architecture for Emerging Cognitive Abilities in an Autonomous System" *** 8 - Cognition K. Dautenhahn, GMD Sankt Augustin, Germany "Trying to Imitate - A Step Towards Releasing Robots from Social Isolation" J. Taylor, King's College London, UK "The Relational Mind" J. Stewart, Inst. Pasteur Paris, France "The Implications for Understanding High-Level Cognition of a Grounding in Elementary Adaptive Systems" ------------------------------------------------------ | PerAc'94 Registration fees | | Before July 1 350 Swiss Francs Students 170 CHF | | After July 1 400 Swiss Francs Students 200 CHF | ------------------------------------------------------ oooooooooooooooooooooooooooooooooooooooooooooo | PerAc'94 Call for Posters, | | Call for Demonstrations, Call for Videos | | Contest, Tutorials | oooooooooooooooooooooooooooooooooooooooooooooo -- Posters -- 4-page short papers that will be published in the proceedings and presented as posters are due by June 1, 1994, at the latest. Posters will be displayed during the whole conference and enough time will be provided to promote interaction with the authors. Posters will be refereed; authors will be informed before July 1. Due to the tight printing schedule, late posters will not be accepted. -- Demonstrations -- Robotic demonstrations are considered as posters.
In addition to the 4-page abstract describing the scientific interest of the demonstration, the submission should include a 1-page requirement for demonstration space and support. Last-minute informal demonstrations will be accepted if they fall within the range of the conference. -- Videos -- 5-minute video clips have to be submitted in Super-VHS or VHS (preferably PAL; NTSC leads to poorer quality). Tapes together with a 2-page description should be submitted before July 15, 1994. -- Contest -- A robotic contest will be organized the day before the conference. Teams participating in the contest must be announced before July 15, and will be able to follow the conference freely. -- Tutorials -- Three tutorials will be organized in parallel on Monday 5 and Tuesday 6. Neural Networks (in French), organized by C. Lehmann, Mantra-EPFL VLSI for Perception, organized by X. Arreguit, CSEM, Neuchâtel Artificial Life, organized by E. Sanchez, EPFL For further information: PerAc'94, LAMI-EPFL, CH-1015 Lausanne Fax ++41 21 693-5263, E-mail perac at di.epfl.ch From giles at research.nj.nec.com Wed Jun 1 12:28:52 1994 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 1 Jun 94 12:28:52 EDT Subject: summer position Message-ID: <9406011628.AA15007@fuzzy> SUMMER STUDENT OPPORTUNITY NEC RESEARCH INSTITUTE PRINCETON NJ Due to an unexpected opportunity, NEC Research Institute in Princeton, N.J. has an immediate opening for a 3-month summer research and programming position. The research emphasis will be on recurrent neural networks and machine learning methods applied to natural language processing. The successful candidate will have knowledge of machine learning and neural network methodologies plus excellent coding skills in the C/Unix environment. Interested applicants should send their resumes by mail, fax, or email to the address below. Dr. C. Lee Giles or Dr. Sandiway Fong NEC Research Institute 4 Independence Way Princeton, N.J.
08540 FAX: 609-951-2482 EMAIL: sandiway at research.nj.nec.com Applicants must show documentation of eligibility for employment. Because this is a summer position, the only expenses to be paid will be salary. NEC is an equal opportunity employer. -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 == From williabv at sun.aston.ac.uk Wed Jun 1 13:10:04 1994 From: williabv at sun.aston.ac.uk (williabv) Date: Wed, 1 Jun 1994 17:10:04 +0000 Subject: papers available by ftp Message-ID: <5966.9406011610@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/williams.wcci94.ps.Z FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/williams.ecal_93.ps.Z ------------------------------------------------------------------------------ The following two papers have been placed in the Neuroprose repository: pub/neuroprose/williams.wcci94.ps.Z Improving Classification Performance in the Bumptree Network by Optimising Topology with a Genetic Algorithm. Williams, B. V., Bostock, R. T. J., Bounds, D. G. and Harget, A. J. Submitted to IEEE Evolutionary Computation 1994, part of WCCI '94. ABSTRACT: The Bumptree is a binary tree of Gaussians which partitions a Euclidean space. The leaf layer consists of a set of local linear classifiers, and the whole system can be trained in a supervised manner to form a piecewise linear model. In this paper a Genetic Algorithm (GA) is used to optimise the topology of the tree. We discuss the properties of the genetic coding scheme, and argue that the GA/bumptree does not suffer from the same scaling problems as other GA/neural-net hybrids. Results on test problems, including a non-trivial classification task, are encouraging, with the GA able to discover topologies which give improved performance over those generated by a constructive algorithm.
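For intuition, the "tree of Gaussians with local linear leaves" idea can be sketched in a few lines. This is a one-level toy only: the centers, widths, and leaf models below are invented for illustration (here they piecewise-fit y = |x|), and the GA topology search of the paper is not shown.

```python
import math

# Toy one-level "bumptree": two Gaussian children partition the real line,
# and each leaf carries a local linear model (together fitting y = |x|).
leaves = [
    {"center": -1.0, "width": 1.0, "w": -1.0, "b": 0.0},  # left leaf: y = -x
    {"center": +1.0, "width": 1.0, "w": +1.0, "b": 0.0},  # right leaf: y = +x
]

def gaussian(x, center, width):
    """Unnormalized Gaussian bump used to route a query to a leaf."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def predict(x):
    # Descend by picking the child whose Gaussian responds most strongly,
    # then apply that leaf's local linear model.
    leaf = max(leaves, key=lambda l: gaussian(x, l["center"], l["width"]))
    return leaf["w"] * x + leaf["b"]
```

A deeper tree just repeats the max-activation descent at every internal node; the supervised training and GA-optimised topology described in the abstract are omitted here.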
------------------------------------------------------------------------------ pub/neuroprose/williams.ecal_93.ps.Z Learning and Evolution in Populations of Backprop Networks Williams, B.V. and Bounds, D. G. Presented at ECAL '93 in Brussels. ABSTRACT: This paper describes some simple Artificial Life experiments based on an earlier paper by Parisi, Nolfi and Cecconi (1991) in which an MLP controls a simple animat (simulated animal). A GA is used to optimise connection weights to produce basic food gathering behaviour. We discuss the merits of the approach, and suggest a simpler explanation of certain observed behaviours than that offered by the original authors. ------------------------------------------------------------------------------ How to retrieve and print the papers - an example session: % ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive FTP server (Version wu-2.4(2) Mon Apr 18 14:41:30 EDT 1994) ready. Name (archive.cis.ohio-state.edu:myname): anonymous 331 Guest login ok, send your complete e-mail address as password. Password: myname at mysite 230 Guest login ok, access restrictions apply. ftp> cd pub/neuroprose 250-Please read the file README 250- it was last modified on Fri Jul 2 09:04:46 1993 - 334 days ago 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get williams.wcci94.ps.Z [wait while the file is downloaded] ftp> quit 221 Goodbye. % uncompress williams.wcci94.ps.Z % lp williams.wcci94.ps From geva at fit.qut.edu.au Thu Jun 2 02:23:06 1994 From: geva at fit.qut.edu.au (Shlomo Geva) Date: Thu, 2 Jun 1994 16:23:06 +1000 (EST) Subject: ANZIIS 94 2nd Call for Papers Message-ID: <199406020623.QAA22614@ocean.fit.qut.edu.au> A non-text attachment was scrubbed...
Name: not available Type: text Size: 7719 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/799994d6/attachment-0001.ksh From venkat at occam.informatik.uni-tuebingen.de Thu Jun 2 09:57:34 1994 From: venkat at occam.informatik.uni-tuebingen.de (Venkat Ajjanagadde) Date: Thu, 2 Jun 94 15:57:34 +0200 Subject: Paper available by ftp Message-ID: <9406021357.AA01049@occam.informatik.uni-tuebingen.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/ajjanagadde.aaai94.ps.Z --------------------------------------------- The following paper is available via anonymous ftp from the neuroprose archive: File: ajjanagadde.aaai94.ps.Z 6 pages Unclear Distinctions Lead to Unnecessary Shortcomings: Examining the Rule vs Fact, Role vs Filler, and Type vs Predicate Distinctions from a Connectionist Representation and Reasoning Perspective [To appear in Proc. of the AAAI-94] Venkat Ajjanagadde Wilhelm-Schickard Institute Universitaet Tuebingen, Germany abstract: This paper deals with three distinctions pertaining to knowledge representation, namely, the rules vs facts distinction, roles vs fillers distinction, and predicates vs types distinction. Though these distinctions may indeed have some intuitive appeal, the exact natures of these distinctions are not entirely clear. This paper discusses some of the problems that arise when one accords these distinctions a prominent status in a connectionist system by choosing the representational structures so as to reflect these distinctions. The example we will look at in this paper is the connectionist reasoning system developed by Ajjanagadde and Shastri (Ajjanagadde & Shastri, 1991; Shastri & Ajjanagadde, 1993). Their system performs an interesting class of inferences using activation synchrony to represent dynamic bindings. The rule/fact, role/filler, type/predicate distinctions figure predominantly in the way knowledge is encoded in their system. 
We will discuss some significant shortcomings this leads to. Then, we will propose a much more uniform scheme for representing knowledge. The resulting system enjoys some significant advantages over Ajjanagadde & Shastri's system, while retaining the idea of using synchrony to represent bindings. To retrieve the compressed postscript file, do the following: unix> ftp archive.cis.ohio-state.edu ftp> login: anonymous ftp> password: [your_full_email_address] ftp> cd pub/neuroprose ftp> binary ftp> get ajjanagadde.aaai94.ps.Z ftp> bye unix> uncompress ajjanagadde.aaai94.ps.Z unix> lpr ajjanagadde.aaai94.ps (or however you print postscript) From bishopc at helios.aston.ac.uk Fri Jun 3 09:31:21 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 3 Jun 1994 13:31:21 +0000 Subject: Paper available by ftp Message-ID: <24937.9406031231@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/bishop.novelty.ps.Z The following technical report is available by anonymous ftp. ------------------------------------------------------------------------ NOVELTY DETECTION AND NEURAL NETWORK VALIDATION Chris M Bishop Neural Computing Research Group Aston University Birmingham, B4 7ET, U.K. email: c.m.bishop at aston.ac.uk Neural Computing Research Group Report: NCRG/4289 (Accepted for publication in IEE Proceedings) Abstract One of the key factors limiting the use of neural networks in many industrial applications has been the difficulty of demonstrating that a trained network will continue to generate reliable outputs once it is in routine use. An important potential source of errors arises from novel input data, that is input data which differ significantly from the data used to train the network. In this paper we investigate the relationship between the degree of novelty of input data and the corresponding reliability of the outputs from the network. 
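One simple way to attach a number to "degree of novelty" is the estimated density of the training inputs at the new point: low density means the network is being asked to extrapolate. The sketch below uses a Parzen (Gaussian kernel) estimate; the kernel width `h` and the score itself are illustrative assumptions, not the procedure of the report.

```python
import numpy as np

def novelty_score(x, X_train, h=1.0):
    """Negative log of a Parzen-window density estimate: higher = more novel."""
    d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances to training data
    density = np.mean(np.exp(-d2 / (2.0 * h ** 2)))  # Gaussian kernel density estimate
    return -np.log(density + 1e-300)                 # guard against log(0)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))              # stand-in for training inputs
inlier = novelty_score(np.zeros(2), X)               # point well inside the data
outlier = novelty_score(np.array([8.0, 8.0]), X)     # point far from the data
```

Thresholding such a score gives a flag for inputs on which the network's outputs should not be trusted, which is the practical use case the abstract describes.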
We describe a quantitative procedure for assessing novelty, and we demonstrate its performance using an application involving the monitoring of oil flow in multi-phase pipelines. -------------------------------------------------------------------- ftp instructions: % ftp archive.cis.ohio-state.edu Name: anonymous password: your full email address ftp> cd pub/neuroprose ftp> binary ftp> get bishop.novelty.ps.Z ftp> bye % uncompress bishop.novelty.ps.Z % lpr bishop.novelty.ps -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From cnna at tce.ing.uniroma1.it Fri Jun 3 06:26:57 1994 From: cnna at tce.ing.uniroma1.it (cnna@tce.ing.uniroma1.it) Date: Fri, 3 Jun 1994 12:26:57 +0200 Subject: CNNA-94 Call for Papers Message-ID: <9406031026.AA16107@tce.ing.uniroma1.it> *********************************************************************** * * * CCCC NN N NN N A 999 4 4 * * C C N N N N N N A A 9 9 4 4 * * C N N N N N N A A *** 99999 44444 * * C N N N N N N AAAAAAA 9 4 * * CCCCC N NN N NN A A 999 4 * * * *********************************************************************** Third IEEE International Workshop on Cellular Neural Networks and their Applications CNNA-94 Rome, Italy, December 18-21, 1994 CALL FOR PAPERS ORGANIZED BY Department of Electronic Engineering "La Sapienza" University of Rome, Italy CO-SPONSORED BY IEEE Circuits & Systems Society IEEE Region 8 Italian National Research Council (CNR) WITH COOPERATION of the following IEEE local Sections: Central & Southern Italy, Northern Italy, Czech Republic, Hungary, Turkey. SCIENTIFIC COMMITTEE V. Cimagalli (Italy) - chairman N.N. Aizenberg (Ukraine) L.O. Chua (USA) A. Dmitriev (Russia) K. Halonen (Finland) M. Hasler (Switzerland) J.
Herault (France) J.L. Huertas (Spain) J.A. Nossek (Germany) S. Jankowski (Poland) T. Roska (Hungary) M. Salerno (Italy) M. Tanaka (Japan) J. Taylor (Great Britain) J. Vandewalle (Belgium) X. Wu (P.R. China) INVITATION The Workshop is the third such gathering of scientists and engineers dedicated to CNN research. The first two CNNA meetings showed CNN is a powerful paradigm with important possibilities of application, and fostered wide interest in the scientific and industrial community. CNN theory is now reaching maturity; therefore, this Workshop will be structured in a new way, and will feature the following main topics of interest: 1) CNN Universal Chip prototypes are expected to be demonstrated in operation; 2) Real-life applications of CNN chips will also be demonstrated, in diverse fields such as medical diagnosis and robotics; 3) Distinguished speakers will give keynotes on the latest findings. Prof. L.O. Chua will give the inaugural lecture on the topic: "The CNN Universal Chip: Dawn of a New Computer Paradigm". Lectures have been invited by Profs. J. Hamori and F. Werblin on related topics of neurobiology, and Prof. A. Csurgay on "Toward the Use of Quantum Devices in a Cellular Structure". 4) A round-table discussion, chaired by Dr. C. Lau, will be organized on the topic: "Trends and Applications of the CNN Paradigm" 5) Authors are encouraged to submit a more detailed version of their papers to the upcoming special issue on CNNs of the International Journal of Circuit Theory and Applications, edited by Profs. T. Roska and J. Vandewalle (deadline: January 1995). TOPICS OF INTEREST include, but are not limited to: - Basic Theory: stability, sensitivity, properties and theoretical limitations, comparison with different neural network paradigms, steady-state, oscillatory, chaotic dynamics.
- Architectures: cloning templates or non-homogeneous weights, continuous or discrete time, layered systems, nonlinear and/or delayed interactions. - Learning and Design: algorithms, generalization, information storage, CAD tools, design by examples, synthesis. - Hardware Implementation: analog and digital ICs, optical realizations, discrete-component dedicated circuits, dedicated simulation circuitry. - Applications: signal processing, system modelling, CAM, classifiers. ORGANIZING COMMITTEE V. Cimagalli, chairman M. Balsi A. Londei P. Tognolatti CORRESPONDENCE should be addressed to CNNA-94 Dipartimento di Ingegneria Elettronica via Eudossiana, 18 00184 Roma Italy tel.: +39-6-44585836 fax: +39-6-4742647 e-mail: cnna at tce.ing.uniroma1.it VENUE Sessions will be held at "La Sapienza" University of Rome between the morning of Dec. 19 and the afternoon of Dec. 21. A welcome cocktail will be offered to participants in the evening of Dec. 18, and a banquet on Dec. 20. Assistance for accommodation will be provided by American Express (see attached form) PROSPECTIVE AUTHORS are encouraged to submit a 3-page extended summary of their work in 4 copies (mail or fax, no electronic submission available), written in English, to the above address. Accepted contributions will have to be extended to a maximum of 6 pages, according to the style described in the authors' kit to be sent with notification of acceptance. SCHEDULE is as follows: Extended summary to be received by Sep. 2 Notification of acceptance mailed by Oct. 10 Camera-ready paper to be received by Nov.
11 ------------------------------------------------------------------------------- REGISTRATION FORM mail to CNNA-94 Dipartimento di Ingegneria Elettronica, via Eudossiana, 18 Roma, Italy, I-00184, or fax to +39-6-4742647 title:____________ last name:_______________________________________ first name:______________________________________ institution:_____________________________________ address: street:__________________________________________ city:____________________________________________ state/country:___________________________________ zip code:______________ tel:_____________________________________________ fax:_____________________________________________ e-mail:__________________________________________ [] I intend to submit (have submitted) a paper title:___________________________________________ _________________________________________________ spouse/guest: first name:______________________________________ last name:_______________________________________ ------------------------------------------------------------------------------- REGISTRATION FEES Registration fees may be paid: (check one) [] by bank swift transfer to: Banca di Roma, ag. 158, via Eudossiana, 18 Roma, Italy, I-00184. Bank codes: ABI 3002-3, CAB 03380-3, account no. 503/34, payable to: "Sezione Italia Centro-Sud dell'IEEE - CNNA'94", stating clearly your name (copy enclosed) [] by Italian personal cheque payable to "Sezione Italia Centro-Sud dell'IEEE - CNNA'94" (non-transferable, sent with this form by "Assicurata"); Fees are in Italian Lire; they include participation in all sessions, proceedings, coffee breaks between sessions, welcome cocktail, and banquet. Please check where applicable: BEFORE NOV. 11 AFTER NOV. 11 FULL [] Itl. 500,000 [] Itl. 550,000 IEEE MEMBER* [] Itl. 400,000 [] Itl. 450,000 *(no.__________________) FULL-TIME STUDENT** [] Itl. 100,000 [] Itl. 200,000 **(please enclose letter of certification from department chairperson) SPOUSE/GUEST: WELCOME COCKTAIL [] Itl.
40,000 BANQUET [] Itl. 120,000 [] I need a receipt of payment ------------------------------------------------------------------------------- ACCOMMODATION FORM mail to American Express Co., S.p.A., Business Meetings & Convention Dept., Piazza di Spagna, 38 Rome, Italy, I-00187, or fax to: +39-6-67642699 title:____________ last name:_______________________________________ first name:______________________________________ institution:_____________________________________ address: street:__________________________________________ city:____________________________________________ state/country:___________________________________ zip code:______________ tel:_____________________________________________ fax:_____________________________________________ spouse/guest: first name:______________________________________ last name:_______________________________________ ------------------------------------------------------------------------------- ACCOMMODATION FEES Accommodation fees may be paid (check one) [] by bank swift transfer to: Banco di Sicilia, ag. 15 Piazzale Ardigò, 43/45 Rome, Italy I-00142. Bank codes: ABI J01020, CAB 03215, account no. 410158991, payable to American Express; (copy enclosed) [] by Italian personal cheque payable to American Express (non-transferable, sent by "Assicurata" with this form); [] by American Express card Accommodation at Hotel Palatino (****), close to Workshop venue, is offered at special rates for participants in CNNA-94. Reservations are guaranteed when booking before Sep. 20; after this date they will be confirmed on a space availability basis. The following rates include room and breakfast: 3 NIGHTS EACH ADDITIONAL NIGHT: SINGLE Itl. 250,000 Itl. 86,000 DOUBLE Itl. 380,000 Itl. 135,000 Please fill in: arrival date:_______________________ departure date:_____________________ total nights:_____ [] single room deposit: Itl. 100,000 [] double room deposit: Itl.
150,000 shared with _____________________________ Please charge my American Express Card for the above stated amount No._________________________________ Expiry date:___________________________ Signature:_____________________________ ------------------------------------------------------------------------- A POSTSCRIPT COPY OF THIS CALL FOR PAPERS CAN BE OBTAINED BY ANONYMOUS FTP AS FOLLOWS: ftp volterra.science.unitn.it (130.186.34.16) login as "anonymous" with password cd pub/cells binary get cnna.cfp.ps.Z bye then uncompress the file (uncompress cnna.cfp.ps.Z) and print. From ingber at alumni.caltech.edu Mon Jun 6 04:44:30 1994 From: ingber at alumni.caltech.edu (Lester Ingber) Date: Mon, 06 Jun 1994 01:44:30 -0700 Subject: Research Opportunities Parallelizing ASA and PATHINT Message-ID: <199406060844.BAA03619@alumni.caltech.edu> Research Opportunities Parallelizing ASA and PATHINT I am looking for one to several people with experience parallelizing C code, e.g., on Crays, to work on parallelizing two specific algorithms: (a) Adaptive Simulated Annealing (ASA), and (b) an algorithm to calculate the time-development of multi-variable nonlinear Fokker-Planck-type systems, using a powerful non-Monte Carlo path integral algorithm (PATHINT). Some code and papers dealing with these algorithms can be obtained from ftp.alumni.caltech.edu [131.215.139.234] in the /pub/ingber directory. I am PI of an award of Cray time on an NSF Supercomputer, and have ported these codes successfully onto a C90. However, I am short of time to further optimize these codes, which is an essential requirement before doing production runs on C90 and T3D Crays. If necessary, I will do this work myself, but I would rather share the work, experience, and research with other interested people who also can expedite these projects. All code will remain under my copyright under the GNU General Public License (GPL), i.e., the least restrictive Library GPL.
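For readers unfamiliar with the first of the two algorithms, the basic simulated-annealing loop can be sketched in a few lines. This is a generic Boltzmann-style sketch for illustration only, not Ingber's ASA, which uses a much faster adaptive cooling schedule; the test function, schedule, and step sizes below are all invented.

```python
import math
import random

def anneal(f, x0, steps=20000, t0=2.0, seed=1):
    """Generic simulated annealing: always accept downhill moves, and accept
    uphill moves with probability exp(-df/t) under a slowly cooling temperature."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, steps + 1):
        t = t0 / math.log(k + 1)                   # slow logarithmic cooling
        cand = x + rng.gauss(0.0, math.sqrt(t))    # temperature-scaled proposal
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:                        # remember the best point seen
                best_x, best_f = x, fx
    return best_x, best_f

# Double-well test function: shallow minimum near x = +1.9, global one near x = -2.1.
f = lambda x: (x * x - 4) ** 2 / 8 + 0.5 * x
xb, fb = anneal(f, x0=2.0)   # start trapped in the shallower well
```

The point of the random uphill acceptances is visible here: a pure descent from x0 = 2.0 would stay in the shallow well, while the annealer can cross the barrier to the deeper one.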
There are several immediate projects that are just waiting for detailed calculations, using codes which already run on SPARCstations, but need the power of supercomputers for production runs. All results of these studies will be published in peer-reviewed scientific journals, and only active participants on these projects will be co-authors on these papers. Examples of these projects include: neuroscience statistical mechanics of neocortical interactions (SMNI) EEG correlates of behavioral states short-term memory modeling realistic chaos + noise modeling financial applications 2- and 3-state term-structure security calculations testing of trading rules nonlinear modeling persistence of chaos in the presence of moderate noise If you are interested, please send me a short description of projects you have worked on, and how many hours/week you are prepared to commit to these projects for at least a period of 6-12 months. Lester || Prof. Lester Ingber || || Lester Ingber Research || || P.O. Box 857 E-Mail: ingber at alumni.caltech.edu || || McLean, VA 22101 Archive: ftp.alumni.caltech.edu:/pub/ingber || From mb at archsci.arch.su.EDU.AU Mon Jun 6 00:13:59 1994 From: mb at archsci.arch.su.EDU.AU (mb@archsci.arch.su.EDU.AU) Date: Mon, 6 Jun 1994 14:13:59 +1000 Subject: Paper in NeuroProse: Time Delay Radial Basis Functions. Message-ID: <9406060413.AA13495@assynt> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/berthold.tdrbf-icnn94.ps.Z The following paper was placed in the NeuroProse archive (Thanks to Jordan for maintaining this valuable service!) to appear in: Proc. ICNN'94, Orlando, Florida 4 pages, sorry no hardcopies available. A Time Delay Radial Basis Function Network for Phoneme Recognition Michael R. Berthold ABSTRACT: This paper presents the Time Delay Radial Basis Function Network (TDRBF) for recognition of phonemes. The TDRBF combines features from Time Delay Neural Networks (TDNN) and Radial Basis Functions (RBF). 
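The combination can be pictured as Gaussian RBF units evaluated on a window that slides over the input frames, so the same acoustic feature produces the same response wherever it occurs in time. The sketch below illustrates only that idea; the window length, pooling by summation, and all names are assumptions, not Berthold's exact architecture.

```python
import numpy as np

def rbf_acts(window, centers, width):
    """Gaussian RBF activations for one flattened window of frames."""
    d2 = np.sum((centers - window.ravel()) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def tdrbf_scores(frames, centers, width=1.0, delay=3):
    """Slide a delay-length window over the frames and pool the RBF responses,
    giving per-unit evidence that is independent of position in time."""
    windows = [frames[t:t + delay] for t in range(len(frames) - delay + 1)]
    return sum(rbf_acts(w, centers, width) for w in windows)

# A 3-frame "acoustic feature" is scored identically whether it occurs early or late.
rng = np.random.default_rng(0)
pattern = rng.normal(size=(3, 4))        # 3 frames of 4 coefficients each
centers = pattern.reshape(1, -1)         # one RBF unit tuned to that feature

def utterance(pos, length=12):
    frames = np.zeros((length, 4))
    frames[pos:pos + 3] = pattern        # embed the feature at a chosen time
    return frames

s_early = tdrbf_scores(utterance(2), centers)
s_late = tdrbf_scores(utterance(6), centers)
```

Because the pooled score is a sum over all window positions, shifting the feature in time leaves the score unchanged, which is the shift invariance the abstract credits to the TDNN side of the design.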
The ability to detect acoustic features and their temporal relationship independent of position in time is inherited from TDNN. The use of RBFs leads to shorter training times and fewer parameters to adjust, which makes it easier to apply TDRBF to new tasks. The recognition of three phonemes with about 750 training and testing tokens each was chosen as an evaluation task. The results suggest an equivalent performance of TDRBF and the TDNN presented by Waibel et al., but TDRBF requires much less training time to reach good performance and in addition gives a clear indication when the minimum error is reached, therefore no danger of overtraining exists. INSTRUCTIONS FOR RETRIEVAL: % ftp archive.cis.ohio-state.edu ftp> anonymous ftp> (your eMail-address) ftp> binary ftp> cd pub/neuroprose ftp> get berthold.tdrbf-icnn94.ps.Z ftp> quit % uncompress berthold.tdrbf-icnn94.ps.Z % lpr berthold.tdrbf-icnn94.ps (or however you print on your system) ------------------------------------------------------------------------- Michael R. Berthold Phone: +61 2 692 3549 Key Centre of Design Computing Fax: +61 2 692 3031 University of Sydney Internet: mb at archsci.arch.su.edu.au NSW 2006 Australia or: berthold at fzi.de ------------------------------------------------------------------------- From mario at physics.uottawa.ca Mon Jun 6 10:58:05 1994 From: mario at physics.uottawa.ca (Mario Marchand) Date: Mon, 6 Jun 94 11:58:05 ADT Subject: Neural Net and PAC learning papers available by anonymous ftp Message-ID: <9406061458.AA01644@physics.uottawa.ca> The following papers are now available by anonymous ftp from the host "Dirac.physics.uottawa.ca" in the "pub" directory.
Mario Marchand mario at physics.uottawa.ca ---- **************************************************************** FILE: COLT93.ps.Z TITLE: Average Case Analysis of the Clipped Hebb Rule for Nonoverlapping Perceptron Networks AUTHORS: Mostefa Golea, Mario Marchand Abstract We investigate the clipped Hebb rule for learning different multilayer networks of nonoverlapping perceptrons with binary weights and zero thresholds when the examples are generated according to the uniform distribution. Using the central limit theorem and very simple counting arguments, we calculate exactly its learning curves (\ie\ the generalization rates as a function of the number of training examples) in the limit of a large number of inputs. We find that the learning curves converge {\em exponentially\/} rapidly to perfect generalization. These results are very encouraging given the simplicity of the learning rule. The analytic expressions of the learning curves are in excellent agreement with the numerical simulations, even for moderate values of the number of inputs. ******************************************************************* ****************************************************************** FILE: EuroCOLT93.ps.Z TITLE: On Learning Simple Deterministic and Probabilistic Neural Concepts AUTHORS: Mostefa Golea, Mario Marchand Abstract We investigate the learnability, under the uniform distribution, of deterministic and probabilistic neural concepts that can be represented as {\em simple combinations} of {\em nonoverlapping} perceptrons with binary weights. Two perceptrons are said to be nonoverlapping if they do not share any input variables. In the deterministic case, we investigate, within the distribution-specific PAC model, the learnability of {\em perceptron decision lists} and {\em generalized perceptron decision lists}.
In the probabilistic case, we adopt the approach of {\em learning with a model of probability} introduced by Kearns and Schapire~\cite{KS90} and Yamanishi~\cite{Y92}, and investigate a class of concepts we call {\em probabilistic majorities} of nonoverlapping perceptrons. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept. *********************************************************************** *********************************************************************** FILE: NeuralComp.ps.Z TITLE: On Learning Perceptrons with Binary Weights AUTHORS: Mostefa Golea, Mario Marchand Abstract We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of {\em product distributions\/}. The sample complexity of this algorithm is of $O((n/\epsilon)^4 \ln(n/\delta))$ and its running time increases only linearly with the number of training examples. The algorithm does not try to find an hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron based on various probabilistic estimates obtained from the training examples. We show that, under the restricted case of the {\em uniform distribution\/} and zero threshold, the algorithm reduces to the well known {\em clipped Hebb rule\/}. We calculate exactly the average generalization rate (\ie\ the learning curve) of the algorithm, under the uniform distribution, in the limit of an infinite number of dimensions. We find that the error rate decreases {\em exponentially\/} as a function of the number of training examples. Hence, the average case analysis gives a sample complexity of $O(n\ln(1/\epsilon))$; a large improvement over the PAC learning analysis. 
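The clipped Hebb rule itself is one line: sum the Hebbian correlations y·x over the training set and clip the result to ±1. A toy check of the single-perceptron, zero-threshold case under the uniform distribution (a sketch for illustration; the dimension, sample size, and seed are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 21, 2000                                   # input dimension (odd, so labels are never 0), sample size
w_target = rng.choice([-1, 1], size=n)            # unknown binary-weight perceptron
X = rng.choice([-1, 1], size=(m, n))              # examples drawn uniformly from {-1,+1}^n
y = np.sign(X @ w_target)                         # zero-threshold labels
w_hat = np.sign((y[:, None] * X).sum(axis=0))     # clipped Hebb rule: clip summed correlations
```

With enough examples the clipped correlations recover every target weight, consistent with the exponentially fast convergence to perfect generalization claimed in the abstract.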
The analytical expression of the learning curve is in excellent agreement with the extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise. *********************************************************************** ********************************************************************** FILE: NIPS92.ps.Z TITLE: On Learning $\mu$-Perceptron Networks with Binary Weights AUTHORS: Mostefa Golea, Mario Marchand, Thomas R. Hancock Abstract Neural networks with binary weights are very important from both the theoretical and practical points of view. In this paper, we investigate the learnability of single binary perceptrons and unions of $\mu$-binary-perceptron networks, {\em i.e.\/} an ``OR'' of binary perceptrons where each input unit is connected to one and only one perceptron. We give a polynomial time algorithm that PAC learns these networks under the uniform distribution. The algorithm is able to identify both the network connectivity and the weight values necessary to represent the target function. These results suggest that, under reasonable distributions, $\mu$-perceptron networks may be easier to learn than fully connected networks. ********************************************************************** ********************************************************************** FILE: Network93.ps.Z TITLE: On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists AUTHORS: Mario Marchand, Mostefa Golea Abstract In this paper, we take a close look at the problem of learning simple neural concepts under the uniform distribution of examples. By simple neural concepts we mean concepts that can be represented as simple combinations of perceptrons (halfspaces). One such class of concepts is the class of halfspace intersections. 
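The basic primitive throughout these papers is learning a single halfspace from examples. The classic pocket variant of perceptron learning, which handles non-separable data by keeping the best weights seen, can be sketched as follows (a generic textbook sketch, not the authors' approximation algorithm; the data-generation step is invented for the demonstration):

```python
import numpy as np

def pocket(X, y, epochs=100, seed=0):
    """Pocket algorithm sketch: run perceptron updates and keep ('pocket')
    the weight vector with the best training accuracy seen so far."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])     # absorb the threshold as a bias input
    w = np.zeros(Xb.shape[1])
    best_w, best_acc = w.copy(), float(np.mean(np.sign(Xb @ w) == y))
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if np.sign(Xb[i] @ w) != y[i]:        # mistake: standard perceptron update
                w = w + y[i] * Xb[i]
                acc = float(np.mean(np.sign(Xb @ w) == y))
                if acc > best_acc:
                    best_w, best_acc = w.copy(), acc
    return best_w, best_acc

# Linearly separable 2-D demo data with a comfortable margin.
rng = np.random.default_rng(1)
P = rng.normal(size=(300, 2))
margin = P @ np.array([1.0, 2.0]) + 0.5
keep = np.abs(margin) > 0.3                        # drop points too close to the boundary
X, y = P[keep], np.sign(margin[keep])
w_best, acc_best = pocket(X, y)
```

On separable data this reduces to plain perceptron learning; on non-separable data the pocketed weights approximate the best single halfspace, which is the baseline the paper's approximation algorithm is compared against.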
By formalizing the problem of learning halfspace intersections as a {\em set covering problem}, we are led to consider the following sub-problem: given a set of non-linearly separable examples, find the largest linearly separable subset of it. We give an approximation algorithm for this NP-hard sub-problem. Simulations, on both linearly and non-linearly separable functions, show that this approximation algorithm works well under the uniform distribution, outperforming the Pocket algorithm used by many constructive neural algorithms. Based on this approximation algorithm, we present a greedy method for learning halfspace intersections. We also present extensive numerical results that strongly suggest that this greedy method learns halfspace intersections under the uniform distribution of examples. Finally, we introduce a new class of simple, yet very rich, neural concepts that we call {\em neural decision lists}. We show how the greedy method can be generalized to handle this class of concepts. Both greedy methods for halfspace intersections and neural decision lists were tried on real-world data with very encouraging results. This shows that these concepts are not only important from the theoretical point of view, but also in practice. *************************************************************************** From fu at cis.ufl.edu Mon Jun 6 15:14:32 1994 From: fu at cis.ufl.edu (fu@cis.ufl.edu) Date: Mon, 6 Jun 1994 15:14:32 -0400 Subject: a special issue Message-ID: <199406061914.PAA07659@whale.cis.ufl.edu> CALL FOR SUBMISSIONS Special Issue of the Journal ``Knowledge-Based Systems'' Theme: ``Knowledge-Based Neural Networks'' Guest Editor: LiMin Fu (University of Florida, USA) A. Background: Knowledge-based neural networks are concerned with the use of domain knowledge to determine the initial structure of the neural network. Such constructions have drawn increasing attention recently.
The rudimentary idea is simple: the knowledge-based approach models what we know and the neural-network approach does what we are ignorant or uncertain of. Furthermore, there is an urgent need for a bridge between symbolic artificial intelligence and neural networks. The main goal of this special issue is to explore the relationship between them for engineering intelligent systems. B. Contents: Examples of specific research include but are not limited to: (1) How do we build a neural network based on prior knowledge? (2) How do neural heuristics improve the current model for a particular problem (e.g., classification, planning, signal processing, and control)? (3) How does knowledge in conjunction with neural heuristics contribute to machine learning? (4) What is the emergent behavior of a hybrid system? (5) What are the fundamental issues behind the combined approach? C. Schedule: Four copies of a full-length Paper should be submitted, by 1 September 1994, to Dr. LiMin Fu Department of Computer and Information Sciences 301 CSE University of Florida Gainesville, FL 32611 USA All papers will be subject to stringent review. From marcio at dca.fee.unicamp.br Mon Jun 6 18:33:25 1994 From: marcio at dca.fee.unicamp.br (Prof. Marcio Luiz de Andrade Netto) Date: Mon, 6 Jun 94 17:33:25 EST Subject: NNACIP '94 - Call for papers Message-ID: <9406062033.AA21625@dca.fee.unicamp.br> PLEASE, DISTRIBUTE FREELY. ==================================================================== Preliminary Call for Papers NNACIP '94 AMCA / IEEE International Workshop on Neural Networks Applied to Control and Image Processing November 7-11, 1994, in Mexico City The purpose of this international workshop is to provide an opportunity for the leading researchers of the world to broaden their horizon through discussions on various applications of Neural Networks (NN) to control and signal processing. 
This workshop is also intended to provide an international forum for presentation of concepts and applications of NN in both: i) design, implementation, supervision, and monitoring of control systems, and ii) digital image processing, 2D & 3D pattern recognition. Main Topics * modeling, estimation and identification * intelligent control systems applications * autonomous robots * process monitoring and supervision * fault detection and emergency control * intelligent motion control * adaptive learning control systems * manufacturing processes * image filtering * image segmentation * image compression * feature extraction * 2D and 3D pattern recognition * pattern classification * multi-sensor integration * signal processing Organized by Asociación de México de Control Automático, AMCA International Program Committee Toshio Fukuda (J) Chairman Francisco Cervantes (MEX) Editor Luis Alonso Romero (E) Marcio Andrade Netto (BRA) Danilo Bassi (RCH) G.A. Bekey (USA) J.C. Bertrand (F) Pierre Borne (F) Aldo Cipriano (RCH) Ricardo Carelli (ARG) Jesús Figueroa-Nazuno (MEX) Petros Ioannou (USA) Rafael Kelly (MEX) Dominique Lamy (F) Nuria Piera-Carrete (E) M. Sakawa (J) Edgar Sánchez-Sinencio (USA) Carme Torras (E) Manuel Valenzuela (MEX) Sponsors * IEEE Systems, Man & Cybernetics * IEEE Neural Networks Council * CONACyT * CINVESTAV * IPN * UNAM * UVM * ANIAC * UAM-Atz Organizing Committee J. H. Sossa-Azuela, President A.J. Malo-Tamayo, Secretary A. Ramírez-Treviño, Treasurer E. Gortcheva R.A. Garrido P. Wiederhold C. Verde S. González-Brambila D. Rosenblueth E. Sánchez-Camperos Important Dates Submission of extended abstracts and proposals for invited sessions: June 20, 94 Acceptance notification: July 15, 94 Camera-ready papers due: Sept. 30, 94 Workshop realization: Nov. 7-11, 94 Information J.M. Ibarra-Zannatha (General Chairman) Cinvestav-IPN, Control Automático A.P.
14-740, 07000 México, D.F., MEXICO Tel.: +52-5-754.7601, Fax.: +52-5-586.6290 & 752.0290 E-mail: nnacip at mvax1.red.cinvestav.mx Prospective authors are invited to submit papers in any technical area listed above, sending three copies of a 3-4 page extended summary including problem statement, methodology, figures and references. ====================================================================== Marcio L. Andrade Netto School of Electrical Engineering State University of Campinas BRASIL marcio at fee.unicamp.br ===================================================================== From D.C.Mitchell at exeter.ac.uk Wed Jun 8 07:11:44 1994 From: D.C.Mitchell at exeter.ac.uk (D.C.Mitchell@exeter.ac.uk) Date: Wed, 8 Jun 1994 12:11:44 +0100 (BST) Subject: Cognitive Science position at Exeter Message-ID: <4145.9406081111@scraps> A non-text attachment was scrubbed... Name: not available Type: text Size: 1530 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/9a371e24/attachment-0001.ksh From georg at ai.univie.ac.at Wed Jun 8 07:07:06 1994 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Wed, 8 Jun 1994 13:07:06 +0200 Subject: WWW Neural Network Page at OFAI Message-ID: <199406081107.AA01688@chicago.ai.univie.ac.at> ========================================================= Announcing a New WWW Page on Neural Networks at the Austrian Research Institute for Artificial Intelligence ========================================================= As part of the World-Wide-Web (WWW) server of the Department of Medical Cybernetics and Artificial Intelligence of the University of Vienna and the Austrian Research Institute for Artificial Intelligence (OFAI) URL: http://www.ai.univie.ac.at a home page specifically dedicated to our research and services in neural networks has been established: URL: http://www.ai.univie.ac.at/oefai/nn/nngroup.html It gives a description of the research currently being undertaken at the
Neural Network Group of the OFAI, which consists of the following four domains: - practical applications of neural networks - theoretical research on neural networks - cognitive modeling with neural networks - neural network simulation tools A complete list of publications is given, many of which can be directly retrieved as postscript files. Among the local services provided you are welcome to use our ======================================== bibliographical search utility BIBLIO, ======================================== which permits you to search among 3500 books and papers in the field of neural networks. Search key can be an author's name and/or a string contained in the title. The basis for the search is an on-line data base containing books, reports, journal and conference references, such as IEEE and INNS neural network conferences. This data base is constantly being extended. Finally, links to other neural network WWW pages, as well as data and report repositories are also given. Enjoy! P.S: If you have questions or difficulties, mail to georg at ai.univie.ac.at From wahba at stat.wisc.edu Wed Jun 8 13:20:12 1994 From: wahba at stat.wisc.edu (Grace Wahba) Date: Wed, 8 Jun 94 12:20:12 -0500 Subject: ftp-papers:learn,tune Message-ID: <9406081720.AA03139@hera.stat.wisc.edu> The following technical reports are available by anonymous ftp. nonlin-learn.ps.Z G. Wahba, Generalization and Regularization in Nonlinear Learning Systems, UW-Madison Statistics Dept TR 921, May, 1994. To appear in the Handbook of Brain Theory, Michael Arbib, Ed. Relates feedforward neural nets, radial basis functions and smoothing spline anova within the length limitations of the Handbook. tuning-nwp.ps.Z G. Wahba, D. R. Johnson, F. Gao and J. Gong, Adaptive tuning of numerical weather prediction models: Part I: randomized GCV and related methods in three and four dimensional data assimilation. UW-Madison Statistics Dept TR 920, April, 1994, submitted. 
Shows how to tune the bias-variance tradeoff and other tradeoffs via generalized cross validation (gcv), unbiased risk (ubr), and generalized maximum likelihood (gml) with very large data sets in the context of regularized function estimation. Shows how to use randomized trace estimation to compute gcv and ubr, and in particular to use these randomized estimates to estimate when to stop the iteration when large variational problems are solved iteratively. Written in the language of data assimilation in numerical weather prediction but the methods may be of interest in machine learning. -------------------------------------------------------------------- ftp instructions: fn = nonlin-learn or tuning-nwp % ftp ftp.stat.wisc.edu Name: anonymous password: your email address ftp> cd pub/wahba ftp> binary ftp> get fn.ps.Z ftp> bye % uncompress fn.ps.Z % lpr fn.ps -------------------------------------------------------------------- Grace Wahba Statistics Dept University of Wisconsin-Madison wahba at stat.wisc.edu -------------------------------------------------------------------- Get Contents for other papers of interest. From bishopc at helios.aston.ac.uk Wed Jun 8 14:53:15 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Wed, 8 Jun 1994 18:53:15 +0000 Subject: Neural Computing Applications Forum Message-ID: <17961.9406081753@sun.aston.ac.uk> ------------------------------------------------------------------- NEURAL COMPUTING APPLICATIONS FORUM The Neural Computing Applications Forum (NCAF) was formed in 1990 and has since come to provide the principal mechanism for exchange of ideas and information between academics and industrialists in the UK on all aspects of neural networks and their practical applications. NCAF organises four 2-day conferences each year, which are attended by around 100 participants. 
It has its own international journal `Neural Computing and Applications' which is published quarterly by Springer-Verlag, and it produces a quarterly newsletter `Networks'. Forthcoming conferences will be held at Oxford University (30 June and 1 July, 1994), Aston University (14 and 15 September, 1994) and London (11 and 12 January, 1995). The programme for the Oxford conference is given below. Annual membership rates (Pounds Sterling): Company: 250 Individual: 140 Student: 55 Membership includes free registration at all four annual conferences, a subscription to the journal `Neural Computing and Applications', and a subscription to `Networks'. For further information: Tel: +44 (0)784 477271 Fax: +44 (0)784 472879 email: c.m.bishop at aston.ac.uk Chris M Bishop (Chairman, NCAF) -------------------------------------------------------------------- NCAF Two-Day Conference: PRACTICAL APPLICATIONS AND TECHNIQUES OF NEURAL NETWORKS 30 June and 1 July 1994 St Hugh's College, Oxford 30 June 1994 ------------ Tutorial: Bayes for Beginners (An introduction to Bayesian methods for neural networks) Chris M Bishop, Aston University Invited Talk: Medical Applications of Neural Networks Lionel Tarassenko, Oxford University Workshop: Key Issues in the Application of Neural Networks Discussion leaders: Lionel Tarassenko, Oxford University Steve Roberts, Oxford University Andy Wright, British Aerospace Peter Cowley, Rolls-Royce Invited Talk: Is Statistics Plus Neural Networks More Than the Sum of its Parts?
Brian Ripley, Oxford University 1 July 1994 ----------- Solving Literary Mysteries using Neural Computation Robert Matthews, Aston University Water Quality Monitoring Peter Smith, Loughborough University Intelligent Hybrid Systems Suram Goonatilake, University College London Alarm Monitoring in Intensive Care Lorraine Dodd, Neural Solutions Applying Neural Networks to Machine Health Monitoring Peter Smith, Sunderland University Fault Diagnosis on Telephone Networks Andy Chaskell, British Telecom A Learning System for Visual Tracking of Object Motion Andrew Blake, Oxford University Modelling Conditional Distributions Chris Bishop, Aston University Neural Networks for Spacecraft Control Andy Wright, British Aerospace Neural Networks for Speaker Recognition Steve Frederikson, Oxford University -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From tgd at chert.CS.ORST.EDU Wed Jun 8 15:48:02 1994 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Wed, 8 Jun 94 12:48:02 PDT Subject: Postdoctoral Position: Applying Machine Learning to Ecosystem Modeling Message-ID: <9406081948.AA01142@edison.CS.ORST.EDU> Postdoctoral Position: Applying Machine Learning to Ecosystem Modeling Complex ecosystem models are calibrated by manually fitting them to available data sets. This is time-consuming, and it can result in overfitting of the models to the data. We are applying machine learning methods to automate this calibration and thereby improve the reliability and statistical validity of the resulting models. Our ecosystem model--MAPSS--predicts amounts and types of vegetation that will grow under global warming climate scenarios. 
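As a purely illustrative aside on what automating such a calibration can look like, here is a toy sketch of fitting model parameters to data by simulated annealing, one family of non-gradient search techniques. The linear stand-in "model", the synthetic data, the step size, and the cooling schedule are all invented for this note and have nothing to do with MAPSS.

```python
import math
import random

# Toy sketch only: calibrating a stand-in "model" against synthetic
# observations with simulated annealing. Everything here (model, data,
# step size, cooling schedule) is invented for illustration.

def model(x, a, b):                 # stand-in for a real ecosystem model
    return a * x + b

TRUE_A, TRUE_B = 2.0, -1.0
data = [(x, model(x, TRUE_A, TRUE_B)) for x in range(10)]  # "observations"

def loss(params):
    a, b = params
    return sum((model(x, a, b) - y) ** 2 for x, y in data)

def anneal(steps=20000, t0=1.0, step=0.1, seed=0):
    rng = random.Random(seed)
    params = [0.0, 0.0]
    best, best_loss = list(params), loss(params)
    cur_loss = best_loss
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9          # linear cooling
        cand = [p + rng.gauss(0.0, step) for p in params]
        cand_loss = loss(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature falls.
        if cand_loss < cur_loss or rng.random() < math.exp((cur_loss - cand_loss) / t):
            params, cur_loss = cand, cand_loss
            if cand_loss < best_loss:
                best, best_loss = list(cand), cand_loss
    return best, best_loss

(a_fit, b_fit), final_loss = anneal()
```

The appeal of this family of methods for model calibration is that they need only the ability to run the model, not its gradients.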
An important goal of global change research is to incorporate such vegetation models into existing ocean-atmosphere physical models. Under NSF funding, we are seeking a Post-Doc to assume a major role in carrying out this research. Components of the research involve (a) representing ecosystem models declaratively, (b) implementing gradient and non-gradient search techniques for parameter fitting, (c) implementing parallel algorithms for running and fitting the ecosystem model, and (d) conducting basic research on issues of combining prior knowledge with data to learn effectively. The ideal candidate will have a PhD in computer science or a closely related discipline with experience in neural networks, simulated annealing (and similar search procedures), knowledge representation, and parallel computing. The candidate must know or be eager to learn some basic plant physiology and soil hydrology. Computational resources for this project include a 16-processor 1Gflop Meiko multicomputer and a 128-processor CNAPS neurocomputer. Applicants should send a CV, summary of research accomplishments, sample papers, and 3 letters of reference to Thomas G. Dietterich 303 Dearborn Hall Department of Computer Science Oregon State University Corvallis, OR 97331 tgd at cs.orst.edu Principal investigators: Thomas G. Dietterich, Department of Computer Science Ron Nielson, US Forest Service OSU is an Affirmative Action/Equal Opportunity Employer and Complies with Section 504 of the Rehabilitation Act of 1973. OSU has a policy of being responsive to the needs of dual-career couples. Closing Date: July 5, 1994 From leow at cs.utexas.edu Wed Jun 8 16:23:47 1994 From: leow at cs.utexas.edu (Wee Kheng Leow) Date: Wed, 8 Jun 1994 15:23:47 -0500 Subject: new dissertation available Message-ID: <199406082023.PAA10635@coltexo.cs.utexas.edu> FTP-host: cs.utexas.edu FTP-filename: pub/neural-nets/papers/leow.diss.tar The following dissertation is available through anonymous ftp. 
It is also available in the WWW from the UTCS Neural Nets Research Group home page http://www.cs.utexas.edu/~sirosh/nn.html under High-Level Vision publications. It contains 198 pages. ----------------- VISOR: Learning Visual Schemas in Neural Networks for Object Recognition and Scene Analysis Wee Kheng Leow Department of Computer Sciences The University of Texas at Austin Abstract This dissertation describes a neural network system called VISOR for object recognition and scene analysis. The research with VISOR aims at three general goals: (1) to contribute to building robust, general vision systems that can be adapted to different applications, (2) to contribute to a better understanding of the human visual system by modeling high-level perceptual phenomena, and (3) to address several fundamental problems in neural network implementation of intelligent systems, including resource-limited representation, and representing and learning structured knowledge. These goals lead to a schema-based approach to visual processing, and focus the research on the representation and learning of visual schemas in neural networks. Given an input scene, VISOR focuses attention at one component of an object at a time, and extracts the shape and position of the component. The schemas, represented in a hierarchy of maps and connections between them, cooperate and compete to determine which one best matches the input. VISOR keeps shifting attention to other parts of the scene, reusing the same schema representations to identify the objects one at a time, eventually recognizing what the scene depicts. The recognition result consists of labels for the objects and the entire scene. VISOR also learns to encode the schemas' spatial structures through unsupervised modification of connection weights, and reinforcement feedback from the environment is used to determine whether to adapt existing schemas or create new schemas to represent novel inputs. 
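The score-and-compete flavor of schema matching described above can be caricatured in a few lines. This is emphatically not VISOR's architecture (VISOR uses a hierarchy of maps and learned connections); the toy "schemas" and feature sets below are invented purely to illustrate the competition idea.

```python
# Tiny caricature (not VISOR itself): each schema scores how well the
# input's features match its expected features, and the best-scoring
# schema wins the competition. Schemas and features are invented here.

schemas = {
    "house": {"roof", "wall", "door"},
    "tree":  {"trunk", "leaves"},
}

def match(scene_features):
    """Return the winning schema label and all match scores."""
    scores = {name: len(expected & scene_features) / len(expected)
              for name, expected in schemas.items()}
    return max(scores, key=scores.get), scores

label, scores = match({"roof", "wall", "window"})
# "house" matches 2 of its 3 expected features; "tree" matches none
```

In a real system the scores would of course come from learned weights and cooperative/competitive dynamics rather than set intersection.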
VISOR's operation is based on cooperative, competitive, and parallel bottom-up and top-down processes that seem to underlie many human perceptual phenomena. Therefore, VISOR can provide a computational account of many such phenomena, including shifting of attention, priming effect, perceptual reversal, and circular reaction, and may lead to a better understanding of how these processes are carried out in the human visual system. Compared to traditional rule-based systems, VISOR shows remarkable robustness of recognition, and is able to indicate the confidence of its analysis as the inputs differ increasingly from the schemas. With such properties, VISOR is a promising first step towards a general vision system that can be used in different applications after learning the application-specific schemas. --------------- The dissertation is contained in a tar file called leow.diss.tar, which consists of 5 compressed ps files, with a total of 198 pages. To retrieve the files, do the following: unix> ftp cs.utexas.edu Name: anonymous Password: ftp> binary ftp> cd pub/neural-nets/papers ftp> get leow.diss.tar ftp> quit unix> tar xvf leow.diss.tar unix> uncompress *.ps.Z unix> lpr *.ps If the ps files are too large, you may have to use lpr with the -s option. From opitz at cs.wisc.edu Wed Jun 8 17:09:53 1994 From: opitz at cs.wisc.edu (Dave Opitz) Date: Wed, 8 Jun 94 16:09:53 -0500 Subject: Papers available by ftp Message-ID: <9406082109.AA20718@flanders.cs.wisc.edu> The following three papers have been placed in an FTP repository at the University of Wisconsin (abstracts appear at the end of the message). These papers are also available on WWW via Mosaic. Type "Mosaic ftp://ftp.cs.wisc.edu/machine-learning/shavlik-group" or "Mosaic http://www.cs.wisc.edu/~shavlik/uwml.html" (for our group's "home page"). Opitz, D. W. & Shavlik, J. W. (1994). "Using genetic search to refine knowledge-based neural networks."
Proceedings of the 11th International Conference on Machine Learning, New Brunswick, NJ. Craven, M. W. & Shavlik, J. W. (1994). "Using sampling and queries to extract rules from trained neural networks." Proceedings of the 11th International Conference on Machine Learning, New Brunswick, NJ. Maclin, R. & Shavlik, J. W. (1994). "Incorporating advice into agents that learn from reinforcements." Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94), Seattle, WA. (A longer version appears as UW-CS TR 1227.) ---------------------- To retrieve the papers by ftp: unix> ftp ftp.cs.wisc.edu Name: anonymous Password: (Your e-mail address) ftp> binary ftp> cd machine-learning/shavlik-group/ ftp> get opitz.mlc94.ps.Z ftp> get craven.mlc94.ps.Z ftp> get maclin.aaai94.ps.Z (or get maclin.tr94.ps.Z) ftp> quit unix> uncompress opitz.mlc94.ps.Z (similarly for the other 2 papers) unix> lpr opitz.mlc94.ps ============================================================================== Using Genetic Search to Refine Knowledge-Based Neural Networks David W. Opitz Jude W. Shavlik Abstract: An ideal inductive-learning algorithm should exploit all available resources, such as computing power and domain-specific knowledge, to improve its ability to generalize. Connectionist theory-refinement systems have proven to be effective at utilizing domain-specific knowledge; however, most are unable to exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses genetic algorithms to broaden the type of networks seen during its search. It does this by using (a) the domain theory to help create an initial population and (b) crossover and mutation operators specifically designed for knowledge-based networks. 
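To make the genetic-search recipe concrete, here is a generic GA skeleton. It is not REGENT: REGENT seeds its population from a domain theory and uses crossover and mutation operators specialized for knowledge-based networks, whereas the bit-string genome and toy fitness function below are invented stand-ins.

```python
import random

# Generic GA skeleton (invented stand-in, not REGENT's code): evolve
# bit-string "genomes" toward a fixed target, where matching the target
# stands in for "a network topology that generalizes well".

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):                 # stand-in for validation accuracy
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitism: keep the top half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Systems like REGENT replace the toy fitness with the expensive step of training and validating each candidate network, which is why exploiting extra computing power pays off.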
Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks. ============================================================================== Using Sampling and Queries to Extract Rules from Trained Neural Networks Mark W. Craven Jude W. Shavlik Abstract: Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M-of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches. ============================================================================== Incorporating Advice into Agents that Learn from Reinforcements Rich Maclin Jude W. Shavlik Abstract: Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present an approach that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple programming language. 
Based on techniques from knowledge-based neural networks, these programs are inserted directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that shows our approach leads to statistically significant gains in expected reward. Importantly, the advice improves the expected reward regardless of the stage of training at which it is given. From atul at nynexst.com Wed Jun 8 18:21:44 1994 From: atul at nynexst.com (Atul Chhabra) Date: Wed, 8 Jun 94 18:21:44 EDT Subject: Summer Job Opening Message-ID: <9406082221.AA21468@texas.nynexst.com> SUMMER STUDENT POSITION NYNEX Science & Technology White Plains, NY We have an immediate need for a summer student for up to a three-month period. The position is for research and programming in Hidden Markov Models and the Viterbi Algorithm for incorporating contextual information into a handprinted character recognition system. Candidates must have knowledge of HMM, Viterbi algorithm, and neural networks. They must possess excellent C programming skills in the Unix environment. NYNEX is the Regional Bell Operating Company (RBOC) for New York and New England. NYNEX Science & Technology is the R&D division of NYNEX. Applicants should send their resumes by mail, fax, or email to the address below. Email resumes must be plain text only. ---------- Atul K.
Chhabra Phone: (914)644-2786 NYNEX Science & Technology Fax: (914)644-2404 500 Westchester Avenue Internet: atul at nynexst.com White Plains, NY 10604 From stefano at kant.irmkant.rm.cnr.it Thu Jun 9 10:45:05 1994 From: stefano at kant.irmkant.rm.cnr.it (stefano@kant.irmkant.rm.cnr.it) Date: Thu, 9 Jun 1994 14:45:05 GMT Subject: TR available Message-ID: <9406091445.AA12074@kant.irmkant.rm.cnr.it> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/nolfi.plastic.ps.Z FTP-filename: /pub/neuroprose/nolfi.erobot.ps.Z Two papers are now available for copying from the Neuroprose repository. Hardcopies are not available. Comments are welcome. PHENOTYPIC PLASTICITY IN EVOLVING NEURAL NETWORKS /pub/neuroprose/nolfi.plastic.ps.Z 12 pages To appear in: Proceedings of the First Conference From Perception to Action, 5-9 September, Lausanne. HOW TO EVOLVE AUTONOMOUS ROBOTS: DIFFERENT APPROACHES IN EVOLUTIONARY ROBOTICS /pub/neuroprose/nolfi.erobot.ps.Z 9 pages To appear in: Proceedings of the ALIFE IV Conference, Cambridge, MA, 7-9 July. PHENOTYPIC PLASTICITY IN EVOLVING NEURAL NETWORKS Stefano Nolfi, Orazio Miglino, and Domenico Parisi We present a model based on genetic algorithms and neural networks. The neural networks develop on the basis of an inherited genotype but they show phenotypic plasticity, i.e., they develop in ways that are adapted to the specific environment. The genotype-to-phenotype mapping is not abstractly conceived as taking place in a single instant but is a temporal process that takes a substantial portion of an individual's lifetime to complete and is sensitive to the particular environment in which the individual happens to develop. Furthermore, the respective roles of the genotype and of the environment are not decided a priori but are part of what evolves. We show how such a model is able to evolve control systems for autonomous robots that can adapt to different types of environments.
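The central point of the abstract above, that development is a temporal process shaped by the environment rather than an instantaneous genotype-to-phenotype mapping, can be illustrated with a toy model. The growth rule, the per-trait "environments", and the lifetime length below are invented for this sketch, not taken from the paper.

```python
# Toy illustration (not the authors' model): development unfolds over
# "lifetime" steps and is sensitive to the environment, so the same
# genotype yields different phenotypes in different environments.
# Genotype, environments, and lifetime are invented for this sketch.

def develop(genotype, environment, lifetime=10):
    """genotype: per-trait growth rates; environment: per-trait signals."""
    phenotype = [0.0] * len(genotype)
    for _ in range(lifetime):                 # development is a process,
        for i, rate in enumerate(genotype):   # not an instantaneous mapping
            phenotype[i] += rate * environment[i]
    return phenotype

genotype = [0.1, 0.5]
p_dry = develop(genotype, environment=[1.0, 0.2])
p_wet = develop(genotype, environment=[0.2, 1.0])
# same genotype, different environments: different phenotypes
```

In the actual model, of course, what develops is a neural controller and the genotype itself (including how much it defers to the environment) is what evolution acts on.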
HOW TO EVOLVE AUTONOMOUS ROBOTS: DIFFERENT APPROACHES IN EVOLUTIONARY ROBOTICS Stefano Nolfi, Dario Floreano, Orazio Miglino, and Francesco Mondada A methodology for evolving the control systems of autonomous robots has not yet been well established. In this paper we will show different examples of applications of evolutionary robotics to real robots by describing three different approaches to develop neural controllers for mobile robots. In all the experiments described, real robots are involved and are indeed the ultimate means of evaluating the success and the results of the procedures employed. Each approach will be compared with the others and the relative advantages and drawbacks will be discussed. Last, but not least, we will try to tackle a few important issues related to the design of the hardware and of the evolutionary conditions in which the control system of the autonomous agent should evolve. Stefano Nolfi Institute of Psychology National Research Council Viale Marx, 15 00137 Rome, Italy e-mail: stefano at kant.irmkant.rm.cnr.it From marwan at sedal.sedal.su.OZ.AU Sat Jun 11 11:22:47 1994 From: marwan at sedal.sedal.su.OZ.AU (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: +61-2 692 2240) Date: Sat, 11 Jun 94 10:22:47 EST Subject: ACNN'95 Call for Papers Message-ID: <9406110022.AA18145@sedal.sedal.su.OZ.AU> C A L L F O R P A P E R S = = = = = = = = = = = = = = = = A C N N ' 9 5 SIXTH AUSTRALIAN CONFERENCE ON NEURAL NETWORKS 6th - 8th FEBRUARY 1995 University of Sydney Sydney, Australia in co-operation with Asian-Pacific Neural Network Assembly (APNNA) Australian Neuroscience Society (ANS) Institute of Electrical & Electronics Engineers (IEEE) Institution of Radio & Electronics Engineers (IREE) Systems Engineering & Design Automation Laboratory (SEDAL) The sixth Australian conference on neural networks will be held in Sydney on February 6th - 8th 1995 at the University of Sydney.
ACNN'95 is the annual national meeting of the Australian neural network community. It is a multi-disciplinary meeting and seeks contributions from Neuroscientists, Engineers, Computer Scientists, Mathematicians, Physicists and Psychologists. ACNN'95 will feature a number of invited speakers. The program will include presentations and poster sessions. Proceedings will be printed and distributed to the attendees. The posters will be displayed for a significant period of time, and time will be allocated for authors to be present at their poster in the conference program. Software demonstrations will be possible for authors. SparcStations, DEC Stations and PCs will be available in an adjacent laboratory for these demonstrations. Invited Keynote Speaker ACNN'95 will feature a number of keynote speakers, including Dr Larry Jackel, AT&T Bell Laboratories and Dr Edmund Rolls, Oxford University. Pre-Conference Workshops Pre-Conference Workshops covering basic introductions to neural computing, neuroscience, applications and implementations will be held. A separate flyer describing the workshops will be issued at a later date. Workshop on Industrial Application of Neural Computing The Warren Centre for Advanced Engineering will be organising a workshop on industrial application of neural computing. A separate information sheet will be issued. Submission Categories The major categories for paper submissions include, but are not restricted to: 1. Neuroscience: Integrative function of neural networks in vision, Audition, Motor, Somatosensory and Autonomic functions; Synaptic function; Cellular information processing; 2. Theory: Learning; Generalisation; Complexity; Scaling; Stability; Dynamics; 3. Implementation: Hardware implementation of neural nets; Analog and digital VLSI implementation; Optical implementation; 4. Architectures and Learning Algorithms: New architectures and learning algorithms; Hierarchy; Modularity; Learning pattern sequences; Information integration; 5. 
Cognitive Science and AI: Computational models of perception and pattern recognition; Memory; Concept formation; Problem solving and reasoning; Visual and auditory attention; Language acquisition and production; Neural network implementation of expert systems; 6. Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, Machine vision; Motor control; Robotics; Forecast; Medical. Initial Submission of Papers As this is a multi-disciplinary meeting, papers are required to be comprehensible to an informed researcher outside the particular stream of the author in addition to the normal requirements of technical merit. Papers should be submitted as close as possible to final form and must not exceed four single A4 pages (2-column format). The first page should include the title and abstract, and should leave space for, but not include the authors' names and affiliations. A cover page should be supplied giving the title of the paper, the name and affiliation of each author, together with the postal address, the e-mail address, and the phone and fax numbers of a designated contact author. The type font should be no smaller than 10 point except in footnotes. A serif font such as Times or New Century Schoolbook is preferred. Four copies of the paper and the front cover sheet should be sent to: Agatha Shotam ACNN'95 Secretariat University of Sydney Department of Electrical Engineering NSW 2006 Australia Each manuscript should clearly indicate submission category (from the six listed) and author preference for oral or poster presentations. This initial submission must be on hard copy to reach us by Friday, 2 September 1994. Submission Deadlines Friday, 2 September 1994 Deadline for receipt of paper submissions Friday, 28 October 1994 Notification of acceptance Friday, 2 December 1994 Final papers in camera-ready form for printing Venue Peter Nicol Russell Building, University of Sydney, Australia. 
ACNN'95 Organising Committee General Chairs: Marwan Jabri University of Sydney Cyril Latimer University of Sydney Technical Program Chair: Bill Gibson University of Sydney Technical Program Advisors: Max Bennett University of Sydney Bill Wilson University of New South Wales Len Hamey Macquarie University Program Committee: Peter Bartlett Australian National University Max Bennett University of Sydney Robert Bogner University of Adelaide Tony Burkitt Australian National University Terry Caelli Curtin University of Technology Simon Carlile University of Sydney Margaret Charles University of Sydney George Coghill University of Auckland Tom Downs University of Queensland Barry Flower University of Sydney Tom Gedeon University of New South Wales Bill Gibson University of Sydney Simon Goss Defence Science & Tech Org Graeme Halford University of Queensland Len Hamey Macquarie University Andrew Heathcote University of Newcastle Marwan Jabri University of Sydney Andrew Jennings Royal Melbourne Institute of Tech Nikola Kasabov University of Otago Adam Kowalczyk Telecom Australia Cyril Latimer University of Sydney Philip Leong University of Sydney Iain Macleod Australian National University M Palaniswami University of Melbourne Nick Redding Defence Science & Tech Org M Srinivasan Australian National University Ah Chung Tsoi University of Queensland Janet Wiles University of Queensland Robert Williamson Australian National University Bill Wilson University of New South Wales International Advisory Board: Yaser Abu-Mostafa Caltech Josh Alspector Bellcore Shun-ichi Amari University of Tokyo, Japan Michael Arbib University of Southern California Scott Fahlman Carnegie Mellon University Hans Peter Graf AT&T Bell Laboratories Yuzo Hirai University of Tsukuba Masumi Ishikawa Kyushu Institute of Technology Larry Jackel AT&T Bell Laboratories John Lazzaro University of California, Berkeley Yann LeCun AT&T Bell Laboratories Richard Lippmann MIT Lincoln Lab Nelson Morgan Intl Computer Sci Inst,
Berkeley
Alan Murray, University of Edinburgh
Fernando Pineda, Johns Hopkins University
Terry Sejnowski, The Salk Institute
Lionel Tarassenko, Oxford University
Eric Vittoz, CSEM, Switzerland
David Zipser, University of California, San Diego

Registrations

The registration fee to attend ACNN'95 is:

    Full Time Students   A$110.00
    Academics            A$250.00
    Other                A$350.00

A discount of 20% applies for advance registration. Registration forms must be posted before 2 December 1994 to be entitled to the discount. To be eligible for the Full Time Student rate, a letter from the Head of Department verifying enrolment is required.

Accommodation

Delegates will have to make their accommodation arrangements directly with the hotel of their choice. A list of accommodation close to the conference venue is provided below. Rates quoted are on a per-night basis and may be subject to change without prior notice.

University Motor Inn, 25 Arundel St, Glebe NSW 2037
  Rates: A$92 (Single), A$95 (Double)
  Tel: +61 (2) 660 5777  Fax: +61 (2) 660 2929

Camperdown Towers Motel, 144 Mallet St, Camperdown NSW 2050
  Rates: A$65 (Single), A$75 (Double)
  Tel: +61 (2) 519 5211  Fax: +61 (2) 519 9179

Metro Motor Inn, 1 Meagher St, Chippendale NSW 2008
  Rates: A$68 (Single/Double)
  Tel: +61 (2) 319 4133  Tel: +61 (2) 698 7665

Sydney Travellers Rest Hotel, 37 Ultimo Rd, Haymarket NSW 2000
  Rates: A$27 students (Dormitory), A$39 adults (Twin Share), A$62 (Single)
  Tel: +61 (2) 281 5555  Fax: +61 (2) 281 2666

Alishan International House, 100 Glebe Point Rd, Glebe NSW 2037
  Rates: A$18 (Dormitory), A$55 (Single), A$70 (Double)
  Tel: +61 (2) 566 4048  Fax: +61 (2) 525 4686

-------------------------------------------------------------------------------
ACNN'95 Registration Form

Title & Name: ___________________________________________________________
Organisation: ___________________________________________________________
Department:   ___________________________________________________________
Occupation:
_____________________________________________________________
Address: ________________________________________________________________
State: ____________________ Post Code: _____________ Country: ___________
Tel: ( ) __________________________   Fax: ( ) _____________________
E-mail: _________________________________________________________________

[ ] Find enclosed a cheque for the sum of @ : ______________________
[ ] Charge my credit card for the sum of # : ________________________
    Mastercard/Visa# Number: ______________________________________
    Valid until: ________  Signature: __________________  Date: ______
------------------------------------------------------------------------------
To register, please fill in this form and return it together with payment to:

    Mrs Agatha Shotam
    Secretariat ACNN'95
    Department of Electrical Engineering
    The University of Sydney
    NSW 2006 AUSTRALIA
------------------------------------------------------------------------------
[ ] I would like to attend the Pre-Conference Workshops *
[ ] I would like to attend the Industrial Application of Neural Computing Workshop *

* Registration fees for the Pre-Conference Workshops and the Industrial Application of Neural Computing Workshop will be determined at a later stage.
_________________________________________________________________________
@ Registration fees:      Before 2 Dec 94    After 2 Dec 94
  Full Time Students      A$ 88.00           A$110.00
  Academics               A$200.00           A$250.00
  Other                   A$280.00           A$350.00
# Please encircle type of card

From jakobc at Mordred.DoCS.UU.SE Mon Jun 13 04:59:34 1994
From: jakobc at Mordred.DoCS.UU.SE (Jakob Carlstr|m)
Date: Mon, 13 Jun 94 10:59:34 +0200
Subject: Research position available
Message-ID: <9406130859.AAMordred00548@Mordred.DoCS.UU.SE>

RESEARCH POSITION AVAILABLE: Hardware implementation of artificial neural networks

A research position is available in the field of hardware implementation of artificial neural networks at the Department of Computer Systems, Uppsala University, Sweden. The position is open to a scientist with a solid background in neural networks and familiarity with analog and digital electronic circuit construction as well as VLSI design. A candidate of postdoctoral or equivalent status will be preferred.

The position is part of a new project for developing neural network algorithms and hardware, aiming at VLSI implementations; the researcher is expected to play a major role in this project. The position is tenable for one year at minimum, possibly longer.

Uppsala University is Scandinavia's oldest university, founded in 1477, and offers a stimulating research environment. The Department of Computer Systems conducts research on artificial neural networks, real-time systems, and formal methods for concurrent systems. The neural networks group was formed in 1991 and consists of four graduate students supervised by Associate Professor Lars Asplund. We have published reports on neural network-based control of digital telecom networks and on hardware architectures for neural networks.

Further information may be obtained from Associate Professor Lars Asplund, Department of Computer Systems, Uppsala University, Box 325, S-751 05 Uppsala, Sweden; fax +46 18 55 02 25; email asplund at docs.uu.se.
Applicants should send a full CV, sample papers, and the names and addresses of two professional referees to the above address. Closing date: August 8, 1994.

From sassk at macaulay-land-use.scot-agric-res-inst.ac.uk Mon Jun 13 07:37:17 1994
From: sassk at macaulay-land-use.scot-agric-res-inst.ac.uk (Jim Kay)
Date: Mon, 13 Jun 94 11:37:17 GMT
Subject: Stats vs ANNs : A Competition
Message-ID: <3352.9406131137@mluri.sari.ac.uk>

Statistics vs. Neural Networks: A Competition

Can artificial neural networks outperform statistical methods in a fair comparison? Finding applications where they can is one of the main objectives of a two-day workshop to be held on April 19-20, 1995 in Edinburgh, Scotland. We invite entries to this competition, which should reach Jim Kay at the address given below by 1 November. Decisions will be communicated to applicants by 15 January 1995. The best four entries will be selected, and one applicant per entry will be invited to attend the workshop and make an oral presentation of their results; costs of accommodation and travel (within the UK) will be provided, subject to certain upper bounds.

The other general objectives of the workshop are: to discuss problems of statistical interest within ANN research; to discuss statistical concepts and tools that expand the technology of ANN research; and to enhance collaborative research involving experts from the two communities.

We look forward to receiving your applications, which should include a contact name and address and be no more than 10 typed A4 pages.

Jim Kay and Mike Titterington
SASS Environmental Modelling Unit
Macaulay Land Use Research Institute
Craigiebuckler
Aberdeen AB9 2QJ
Scotland, UK
e-mail: j.kay at uk.ac.sari.mluri (within the UK)
        j.kay at mluri.sari.ac.uk (internet address)
Tel.: +224 - 318611 (ext.
2269)  Fax: +224 - 208065

From tishby at CS.HUJI.AC.IL Mon Jun 13 10:48:13 1994
From: tishby at CS.HUJI.AC.IL (Tali Tishby)
Date: Mon, 13 Jun 1994 17:48:13 +0300
Subject: 12-ICPR: Registration Information
Message-ID: <199406131448.AA25912@irs01.cs.huji.ac.il>

===============================================================================
12th ICPR: INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION
9-13 October 1994, Renaissance Hotel, Jerusalem
===============================================================================

The 12-ICPR contains about 220 presentations organized in four tracks, with a total of 56 sessions:

o COMPUTER VISION AND IMAGE PROCESSING - 25 Sessions
o PATTERN RECOGNITION AND NEURAL NETWORKS - 19 Sessions
o SIGNAL PROCESSING - 7 Sessions
o PARALLEL COMPUTING - 5 Sessions

In addition, about 230 papers will be presented in poster form. The program committees did their best to achieve a high-quality and balanced technical program. Combined with the inspiring location in Jerusalem, we are certain that the 12th ICPR will be a rewarding and memorable experience.

In addition to the excellent technical program, an exciting range of TUTORIALS, SOCIAL EVENTS, and TOURS will be offered. For example, the BANQUET is planned to be a Bedouin feast on the shore of the Dead Sea.

In the remainder of this mailing, you will find:
- 12-ICPR Conference Registration Form (E-Mail registration possible!)
- 12-ICPR Hotel Reservation Form (E-Mail reservations possible!)
- How to get the TECHNICAL PROGRAM, TUTORIAL PROGRAM, and full REGISTRATION, HOTEL, and TOURING INFORMATION.

Read the conditions on registration and hotel reservations, including cancellations, before booking!!!

*******************************************************************************
* A note regarding hotel reservations: Jerusalem enjoys more visitors these   *
* days than in any other period. This causes a shortage of hotel rooms, and   *
* you are urged to book your hotel as early as possible!
*
*******************************************************************************

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ cut here \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Date:____________

12-ICPR CONFERENCE REGISTRATION FORM
************************************
12-ICPR, 9-13 October 1994, Renaissance Hotel, Jerusalem

Last Name _______________ First Name ___________________ Title:___________
Mailing Address __________________________________________________________
                __________________________________________________________
City/State/Code _______________________________________ Country _________
Phone _________________ Fax __________________ E-mail ____________________

Registration Fee Schedule:
______________________________
IAPR Society and Member Number

                                        Advance          On-Site or
                                        By Aug. 9        After Aug. 9
Member                                  $395 USD ______  $455 USD ______
Non-member                              $445 USD ______  $495 USD ______
*Student                                $195 USD ______  $235 USD ______
One Tutorial                            $160 USD ______  $200 USD ______
Two Tutorials                           $270 USD ______  $330 USD ______
*Student - Each Tutorial                $ 50 USD ______  $ 50 USD ______
Receptions for accompanying person      $ 40 USD ______  $ 40 USD ______
Banquet for students and accompanying
persons (12 Oct): Quantity _____        $ 40 USD ______  $ 40 USD ______
Extra Conference Proceedings:
Quantity _____                          $110 USD ______

Full registration fee includes: all sessions, one copy of the proceedings, coffee breaks, and all social events. The banquet is not included for students and accompanying persons.
*Please bring proof of full-time student status to the registration desk. Tutorial registration at student rates is on a waitlist basis only.

Morning Tutorials, Sunday, Oct. 9, 1994. Check at most one:
-----------------------------------------------------------
____ A1: O.
Faugeras - "Invariant Theory for Pattern Recognition"
____ B1: Yann le Cun - "Neural Networks in Pattern Recognition"
____ C1: S. Furui - "Speech Recognition Techniques"
____ D1: T. Bernard, V. Cantoni, and M. Ferretti - "Special Chips for Pattern Recognition and Image Processing"
____ E1: H. Samet - "Geographic Information Systems"

Afternoon Tutorials, Sunday, Oct. 9, 1994. Check at most one:
-------------------------------------------------------------
____ A2: R. Haralick - "Image Analysis with Mathematical Morphology"
____ B2: H. Baird - "Document Image Analysis"
____ C2: M. Tekalp - "Digital Video Processing"
____ D2: R. Hummel - "Parallel Computing Methods for Object Recognition"
____ E2: A. Jain - "Statistical Pattern Recognition"

TOTAL PAYMENT for Registration: US$ __________

METHOD OF PAYMENT (Check one):
[ ] Check  [ ] Bank Transfer  [ ] Visa  [ ] American Express
Card Number _______________________________________ Expiration date _______
Cardholder's name _______________________________________________
Cardholder's signature ___________________________ Date: _______________

- The check should be payable to "12-ICPR" in US dollars.
- The bank transfer should be made to the credit of the following account:
    International / 12-ICPR
    Account Number 412716
    Israel Discount Bank, Branch 100
    4 Rothschild Blvd.
    66881 Tel-Aviv, Israel
  Please enclose a copy of the transfer document with the registration form.

E-Mail this registration form to: icpr at math.tau.ac.il
or Mail/Fax to: 12-ICPR, c/o International
P.O.Box 29313, 61292 Tel-Aviv, Israel
Tel: +972-3-510 2538  Fax: +972-3-660 604

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ cut here \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Date:_______________

12-ICPR HOTEL RESERVATION FORM
******************************
12-ICPR, 9-13 October 1994, Renaissance Hotel, Jerusalem

E-MAIL, MAIL, FAX, or phone your reservation by Aug.
9 to ensure availability and special rates at the hotel of your choice. Hotels in Jerusalem at the time of the conference are usually very busy. We will accept accommodation forms up to the conference, but reservations are guaranteed ONLY for advance registration.

** For more information, including TOURING, ftp the files "registration" **
** and "tourism"                                                         **

Last Name _______________ First Name ___________________ Title:___________
Mailing Address __________________________________________________________
                __________________________________________________________
City/State/Code _______________________________________ Country _________
Phone _________________ Fax __________________ E-mail ____________________
I will share a room with: ________________________________________________
Check-in Date ___________ Check-out Date _________________ (______ nights)

Please indicate at least three choices for hotel: use "1" for highest priority, "3" for lowest. A deposit of US$200 per room is needed for categories A/B/C, and a deposit of US$100 per room for D/E.

Hotel category        Single            Double
----------------      -------------     -------------
A                     [ ] $174-$190     [ ] $187-$209
B1 (Conf. Site)       [ ] $100-$118     [ ] $131-$133
B2                    [ ] $100-$118     [ ] $131-$133
C1 (Walking Dist.)    [ ] $71- $80      [ ] $79-$102
C2                    [ ] $71- $80      [ ] $79-$102
D                     [ ] $59           [ ] $71
E (Rooms in Homes)    [ ] $35- $42      [ ] $46- $55

Total Hotel Deposit: US$ ______________

METHOD OF PAYMENT (Check one):
[ ] Check  [ ] Bank Transfer  [ ] Visa  [ ] American Express
Card Number __________________________________ Expiration date ____________
Cardholder's name _______________________________________________
Cardholder's signature ____________________________ Date: ______________

- The check should be payable to "12-ICPR" in US dollars.
- The bank transfer should be made to the credit of the following account:
    International Ltd.
    Account Number 396699
    Israel Discount Bank, Branch 100
    4 Rothschild Blvd.
    66881 Tel-Aviv, Israel
  Please enclose a copy of the transfer document with the registration form.

E-Mail this registration form to: icpr at math.tau.ac.il
or Mail/Fax to: 12-ICPR, c/o International
P.O.Box 29313, 61292 Tel-Aviv, Israel
Tel: +972-3-510 2538  Fax: +972-3-660 604

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ cut here \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

You can get the following files by FTP or E-Mail:

"announcement"    - The file including this announcement.
"advance-program" - Includes the Advance Technical Program.
"tutorials"       - Includes the Tutorial Program and Abstracts.
"registration"    - Includes Registration and Hotel Information.
"tourism"         - Information on tourist activities (Pre- and Post-Conference Tours).

These files can be retrieved by E-Mail as follows: send E-Mail to ftpmail at cs.huji.ac.il containing the following lines:
-------
open
cd pub/ICPR
get advance-program
get tutorials
get registration
get tourism
quit
-------
To use anonymous ftp, connect to ftp.huji.ac.il (user: anonymous; password: your E-Mail address). Once logged on, perform the following ftp commands:
------
cd pub/ICPR
get advance-program
get tutorials
get registration
get tourism
quit
------
----------------------------
Paper versions of the Advance Program will be mailed in late June 1994. If you do not get a copy within a couple of weeks, you can request one by sending E-Mail to icpr at math.tau.ac.il. This address should also be used for any other communication regarding the 12-ICPR.
===============================================================================

From chauvet at bmsr14.usc.edu Mon Jun 13 20:41:01 1994
From: chauvet at bmsr14.usc.edu (Gilbert Chauvet)
Date: Mon, 13 Jun 94 17:41:01 PDT
Subject: PostDoc position
Message-ID: <9406140041.AA04698@bmsr14.usc.edu>

Postdoc position available in MATHEMATICAL BIOLOGY

Institute of Theoretical Biology, University of Angers (FRANCE), beginning 1 January 1995, for 2 years.

Qualification: PhD in Applied Mathematics
Project: Modeling in neurobiology using non-local reaction-diffusion equations (numerical and theoretical aspects). Methods will be applied to the hippocampus.
Contact: Pr G.A. Chauvet, IBT
e-mail: chauvetg at ibt.univ-angers.fr
Phone: (33) 41 72 34 27
Fax: (33) 41 72 34 46

From kzhang at cogsci.UCSD.EDU Mon Jun 13 19:40:48 1994
From: kzhang at cogsci.UCSD.EDU (Kechen Zhang)
Date: Mon, 13 Jun 1994 16:40:48 -0700
Subject: tech report available
Message-ID: <9406132341.AA23051@cogsci.UCSD.EDU>

ftp-host: cogsci.ucsd.edu
ftp-filename: /pub/tr/9402.charsys.ps.Z
size of uncompressed file: 0.55 Mb
printed pages: 21

This is the first UCSD Cognitive Science technical report available by anonymous ftp, thanks to the help of Kathy Farrelly, Paul Maglio, Javier Movellan, Marty Sereno, Mark Wallen, and David Zipser. The abstract and the ftp instructions are as follows.
------------------------------------------------------------------------
Temporal association by Hebbian connections: The method of characteristic system

Kechen Zhang
June 1994, Report 9402
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093-0515
email: kzhang at cogsci.ucsd.edu

Abstract

While it is generally accepted that the stability of a static memory pattern corresponds to a certain point attractor in the dynamics of the underlying neural network, when temporal order is introduced and a sequence of memory patterns must be retrieved successively in continuous time, it becomes less clear what general method should be used to decide whether the transient dynamics is robust. In this paper, it is shown that such a general method can be developed if all the connections in the neural network are Hebbian. This method is readily applied to various structures of coupled networks as well as to the standard temporal association model with asymmetric time-delayed connections. The basic idea is to introduce new variables made of memory-overlap projections with alternating signs, and then circumvent the nonlinearity of the sigmoid function by exploiting the dynamical symmetry inherent in the Hebb rule. The result is a self-contained, low-dimensional deterministic system. A powerful feature of this approach is that it can translate questions about the stability of the sequential memory transitions in the original neural network into questions about the stability of the periodic oscillation in the corresponding characteristic system. This correspondence enables direct, quantitative prediction of the behavior of the original system, as confirmed by computer simulations on the conjugate networks, the ``tri-synaptic loop'' networks, and the time-delayed network.
In particular, the conjugate networks (consisting of two Hopfield subnets coupled by asymmetric Hebbian connections) offer a simple but sufficient structure for the storage and retrieval of sequential memory patterns without any additional temporal mechanisms besides the intrinsic dynamics. The same structure can also be used for the recognition of a temporal sequence of sparse memory patterns. Other topics include the storage capacity of the conjugate networks, the exact solution of the limit cycle in the characteristic system, sequential retrieval at variable speeds, and the problem of equivalence between coupling and time delays.
------------------------------------------------------------------------

To retrieve the compressed PostScript file, do the following:

unix> ftp cogsci.ucsd.edu
login: anonymous
password: (your e-mail address)
ftp> cd pub/tr
ftp> bin
ftp> ls
ftp> get 9402.charsys.ps.Z
ftp> bye
unix> uncompress 9402.charsys.ps.Z
unix> lpr -P -s 9402.charsys.ps    (or however you print PostScript)

page numbers: 0 - 20

From P.McKevitt at dcs.shef.ac.uk Tue Jun 14 09:39:34 1994
From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt)
Date: Tue, 14 Jun 94 09:39:34 BST
Subject: No subject
Message-ID: <9406140839.AA27563@dcs.shef.ac.uk>

CONFERENCE ANNOUNCEMENT AND PRELIMINARY CALL FOR PAPERS

AISB-95: Hybrid Problems, Hybrid Solutions.
============================================

Monday 3rd -- Friday 7th April 1995
Halifax Hall of Residence & Computer Science Department
University of Sheffield, Sheffield, ENGLAND

The Tenth Biennial Conference on AI and Cognitive Science, organised by the Society for the Study of Artificial Intelligence and the Simulation of Behaviour.

Programme Chair: John Hallam (University of Edinburgh)
Programme Committee:
  Dave Cliff (University of Sussex)
  Erik Sandewall (University of Linkoeping)
  Nigel Shadbolt (University of Nottingham)
  Sam Steel (University of Essex)
  Yorick Wilks (University of Sheffield)
Local Organisation: Paul Mc Kevitt (University of Sheffield)

The past few years have seen an increasing tendency toward diversification in research into Artificial Intelligence, Cognitive Science, and Artificial Life. A number of approaches are being pursued, based variously on symbolic reasoning, connectionist systems and models, behaviour-based systems, and ideas from complex dynamical systems. Each has its own particular insight and philosophical position. This variety of approaches appears in all areas of Artificial Intelligence: there are both symbolic and connectionist natural language processing, and both classical and behaviour-based vision research, for instance.

While purists from each approach may claim that all the problems of cognition can in principle be tackled without recourse to other methods, in practice (and maybe in theory, also) combinations of methods from the different approaches (hybrid methods) are more successful than a pure approach for certain kinds of problems. The committee feels that there is an unrealised synergy between the various approaches that an AISB conference may be able to explore. Thus, the focus of the tenth AISB Conference is on such hybrid methods.
We particularly seek papers that describe novel theoretical and/or experimental work using a hybrid approach, or papers from purists arguing cogently that compromise is unnecessary or unproductive. While such papers are particularly sought, good papers on any topic in Artificial Intelligence will be considered: as always, the most important criteria for acceptance will be soundness, originality, substance, and clarity. Research in all areas is equally welcome.

The AISB conference is a single-track conference lasting three days, with a two-day tutorial and workshop programme preceding the main technical event; around twenty high-calibre papers will be presented in the technical sessions. It is expected that the proceedings of the conference will be published in book form in time to be available at the conference itself, making it a forum for rapid dissemination of research results.

SUBMISSIONS: High-quality original papers dealing with the issues raised by mixing different approaches, or otherwise related to the conference theme, should be sent to the Programme Chair. Papers which give comparative experimental evaluation of methods from different paradigms applied to the same problem, papers which propose and evaluate mixed-paradigm theoretical models or tools, and papers that focus on hybrid systems applied to real-world problems will be particularly welcome, as will papers from purists who argue cogently that the hybrid approach is flawed and a particular pure approach is to be preferred.

Papers being submitted, whether verbatim or in essence, to other conferences whose review process runs concurrently with AISB-95 should indicate this fact on their title page. If a submitted paper appears at another conference it must be withdrawn from AISB-95 (this does not apply to presentation at specialist workshops). Papers that violate these requirements may be rejected without review.
SHEFFIELD: Sheffield is one of the friendliest cities in the UK and is well situated, with the best and closest surrounding countryside of any major city in the UK. The Peak District National Park, only minutes away, is a beautiful place for visiting and rambling, and the city is good for walkers, runners, and climbers. Sheffield has two theatres, the Crucible and the Lyceum; the Lyceum, a beautiful Victorian theatre, has recently been renovated. The city also has three 10-screen cinemas, and a library theatre which shows more artistic films. The city has a large number of museums, many of which demonstrate Sheffield's industrial past, and there are a number of galleries in the city, including the Mappin Gallery and the Ruskin. A number of important ancient houses, such as Chatsworth House, are close to Sheffield. There are large shopping areas in the city, and by 1995 Sheffield will be served by a 'supertram' system: the line to the Meadowhall shopping and leisure complex is already open.

The University of Sheffield's Halls of Residence are situated on the western side of the city in a leafy residential area described by John Betjeman as ``the prettiest suburb in England''. Halifax Hall is centred on a local steel baron's house, dating back to 1830 and set in extensive grounds. It was later acquired by the University and converted into a Hall of Residence for women with the addition of a new wing.

ARTIFICIAL INTELLIGENCE AT SHEFFIELD: Sheffield Computer Science Department has a strong programme in Cognitive Systems and is part of the University's Institute for Language, Speech and Hearing (ILASH). ILASH has its own machines and support staff, and academic staff attached to it from nine departments.
Sheffield Psychology Department has the Artificial Intelligence Vision Research Unit (AIVRU), which was founded in 1984 to coordinate a large industry/university Alvey research consortium working on the development of computer vision systems for autonomous vehicles and robot workstations.

FORMAT AND DEADLINES: Four copies of submitted papers must be received by the Programme Chair no later than 24 OCTOBER 1994 to be considered. Papers should be at most 12 pages in length and be produced in 12 point, with at most 60 lines of text per A4 page and margins at least 1 inch (2.5cm) wide on all sides (the default LaTeX article style is OK). They should include a cover sheet (not counted in the 12-page limit) giving the paper title, the abstract, the authors and their affiliations, including a contact address for both electronic and paper mail for the principal author. Papers should be submitted in hard copy, not electronically. Papers that do not adhere to this format specification may be rejected without review. Notification of acceptance will be sent to authors by 7 DECEMBER 1994, and full camera-ready copy will be due in early JANUARY 1995 (publishers' deadlines permitting).

CONFERENCE ADDRESS: Correspondence relating to the conference programme, submissions of papers, etc. should be directed to the conference Programme Chair at the address below.

John Hallam
Department of Artificial Intelligence, University of Edinburgh
5 Forrest Hill, Edinburgh EH1 2QL, SCOTLAND
Phone: +44 31 650 3097
FAX: +44 31 650 6899
E-mail: john at aifh.edinburgh.ac.uk

Correspondence concerning local arrangements should be directed to the local arrangements organiser at the following address.

Paul Mc Kevitt
Department of Computer Science, University of Sheffield
Regent Court, 211 Portobello Street, Sheffield S1 4DP, ENGLAND.
Phone: +44 742 825572
FAX: +44 742 780972
E-mail: p.mckevitt at dcs.sheffield.ac.uk

From terry at salk.edu Tue Jun 14 17:04:18 1994
From: terry at salk.edu (Terry Sejnowski)
Date: Tue, 14 Jun 94 14:04:18 PDT
Subject: Professorship at Lund
Message-ID: <9406142104.AA10140@salk.edu>

Lund University in Sweden has announced an opening for a full professorship at the Department of Theoretical Physics:

"Physics, particularly the theory of collective phenomena. The area covers the physics of condensed matter in a broad sense, in particular the theory of complex systems."

A full professorship in Sweden is a very good position, with a great deal of freedom and reasonable base funding. Applications should be submitted to Lund University before June 29, 1994. Please make this known to persons who may be interested. Information about the position is given by Professor Lars Hedin, phone +46-46-10 90 80, e-mail Lars.Hedin at teorfys.lu.se.

-----

From bernabe at cnm.us.es Wed Jun 15 10:18:21 1994
From: bernabe at cnm.us.es (Bernabe Linares B.)
Date: Wed, 15 Jun 94 16:18:21 +0200
Subject: paper available on neural hardware
Message-ID: <9406151418.AA12753@sparc1.cnm.us.es>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/bernabe.art1chip.ps.Z

The file bernabe.art1chip.ps.Z is now available for copying from the Neuroprose repository:

Title: A CMOS VLSI Analog Current-Mode High-Speed ART1 Chip (4 pages)
Authors: T. Serrano, B. Linares-Barranco, and J. L. Huertas
Affiliation: National Microelectronics Center (CNM), Sevilla, SPAIN.

This paper will appear in the ICNN'94 proceedings and will be presented at the ICNN'94 meeting, Orlando, Florida, on the 28th of June, in the Special Invited Session "Advanced Analog Neural Networks and Applications", at 8:20am (according to the preliminary program).

ABSTRACT: In this paper we present a real-time neural categorizer chip based on the ART1 algorithm [1].
The circuit implements a modified version of the original ART1 algorithm that is more suitable for VLSI implementation [2]. It has been designed using analog current-mode circuit design techniques and consists basically of current mirrors that are switched ON and OFF according to the binary input patterns. The chip is able to cluster 100-binary-pixel input patterns into up to 18 different categories. Modular expandability of the system is possible by simply interconnecting more chips in a matrix array; in this way a system can be built to cluster Nx100 binary-pixel images into Mx18 different clusters, using an NxM array of chips. Average pattern classification is performed in less than 1us, which means an equivalent computing power of 1.8x10^9 connections per second. The chip has been fabricated in a single-poly, double-metal, 1.5um standard digital low-cost CMOS process, has a die area of 1cm^2, and is mounted in a 120-pin PGA package. Although the circuit is analog in nature, it is fully digital-compatible, since all its inputs and outputs are digital. Sorry, no hardcopies available.

From mario at physics.uottawa.ca Thu Jun 16 08:52:40 1994
From: mario at physics.uottawa.ca (Mario Marchand)
Date: Thu, 16 Jun 94 09:52:40 ADT
Subject: Errata for Neural Net and PAC learning papers
Message-ID: <9406161252.AA07302@physics.uottawa.ca>

Dear Colleagues,

The following papers were already announced 10 days ago but have been moved to another directory. To retrieve any one of them, please follow these instructions:

unix> ftp Dirac.physics.uottawa.ca
Name: anonymous
Password: "your email address"
ftp> cd /pub/tr/marchand
ftp> binary
ftp> get filename.ps.Z
ftp> bye
unix> uncompress filename.ps.Z
and print the postscript file: filename.ps

----

****************************************************************
FILE: COLT93.ps.Z
TITLE: Average Case Analysis of the Clipped Hebb Rule for Nonoverlapping Perceptron Networks
AUTHORS: Mostefa Golea, Mario Marchand

Abstract: We investigate the clipped Hebb rule for learning different multilayer networks of nonoverlapping perceptrons with binary weights and zero thresholds when the examples are generated according to the uniform distribution. Using the central limit theorem and very simple counting arguments, we calculate exactly its learning curves (\ie\ the generalization rates as a function of the number of training examples) in the limit of a large number of inputs. We find that the learning curves converge {\em exponentially\/} rapidly to perfect generalization. These results are very encouraging given the simplicity of the learning rule. The analytic expressions of the learning curves are in excellent agreement with the numerical simulations, even for moderate values of the number of inputs.
*******************************************************************

******************************************************************
FILE: EuroCOLT93.ps.Z
TITLE: On Learning Simple Deterministic and Probabilistic Neural Concepts
AUTHORS: Mostefa Golea, Mario Marchand

Abstract: We investigate the learnability, under the uniform distribution, of deterministic and probabilistic neural concepts that can be represented as {\em simple combinations} of {\em nonoverlapping} perceptrons with binary weights. Two perceptrons are said to be nonoverlapping if they do not share any input variables. In the deterministic case, we investigate, within the distribution-specific PAC model, the learnability of {\em perceptron decision lists} and {\em generalized perceptron decision lists}.
In the probabilistic case, we adopt the approach of {\em learning with a model of probability} introduced by Kearns and Schapire~\cite{KS90} and Yamanishi~\cite{Y92}, and investigate a class of concepts we call {\em probabilistic majorities} of nonoverlapping perceptrons. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept. *********************************************************************** *********************************************************************** FILE: NeuralComp.ps.Z TITLE: On Learning Perceptrons with Binary Weights AUTHORS: Mostefa Golea, Mario Marchand Abstract We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of {\em product distributions\/}. The sample complexity of this algorithm is of $O((n/\epsilon)^4 \ln(n/\delta))$ and its running time increases only linearly with the number of training examples. The algorithm does not try to find an hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron based on various probabilistic estimates obtained from the training examples. We show that, under the restricted case of the {\em uniform distribution\/} and zero threshold, the algorithm reduces to the well known {\em clipped Hebb rule\/}. We calculate exactly the average generalization rate (\ie\ the learning curve) of the algorithm, under the uniform distribution, in the limit of an infinite number of dimensions. We find that the error rate decreases {\em exponentially\/} as a function of the number of training examples. Hence, the average case analysis gives a sample complexity of $O(n\ln(1/\epsilon))$; a large improvement over the PAC learning analysis. 
The analytical expression of the learning curve is in excellent agreement with the extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise. *********************************************************************** ********************************************************************** FILE: NIPS92.ps.Z TITLE: On Learning $\mu$-Perceptron Networks with Binary Weights AUTHORS: Mostefa Golea, Mario Marchand, Thomas R. Hancock Abstract Neural networks with binary weights are very important from both the theoretical and practical points of view. In this paper, we investigate the learnability of single binary perceptrons and unions of $\mu$-binary-perceptron networks, {\em i.e.\/} an ``OR'' of binary perceptrons where each input unit is connected to one and only one perceptron. We give a polynomial time algorithm that PAC learns these networks under the uniform distribution. The algorithm is able to identify both the network connectivity and the weight values necessary to represent the target function. These results suggest that, under reasonable distributions, $\mu$-perceptron networks may be easier to learn than fully connected networks. ********************************************************************** ********************************************************************** FILE: Network93.ps.Z TITLE: On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists AUTHORS: Mario Marchand, Mostefa Golea Abstract In this paper, we take a close look at the problem of learning simple neural concepts under the uniform distribution of examples. By simple neural concepts we mean concepts that can be represented as simple combinations of perceptrons (halfspaces). One such class of concepts is the class of halfspace intersections. 
By formalizing the problem of learning halfspace intersections as a {\em set covering problem}, we are led to consider the following sub-problem: given a set of non-linearly separable examples, find the largest linearly separable subset of it. We give an approximation algorithm for this NP-hard sub-problem. Simulations, on both linearly and non-linearly separable functions, show that this approximation algorithm works well under the uniform distribution, outperforming the Pocket algorithm used by many constructive neural algorithms. Based on this approximation algorithm, we present a greedy method for learning halfspace intersections. We also present extensive numerical results that strongly suggest that this greedy method learns halfspace intersections under the uniform distribution of examples. Finally, we introduce a new class of simple, yet very rich, neural concepts that we call {\em neural decision lists}. We show how the greedy method can be generalized to handle this class of concepts. Both greedy methods, for halfspace intersections and neural decision lists, were tried on real-world data with very encouraging results. This shows that these concepts are not only important from the theoretical point of view, but also in practice.
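[Editor's note] The clipped Hebb rule that several of the abstracts above analyze is simple enough to sketch directly. The following is a minimal illustration, not the authors' code; the dimensions, sample size and random seed are arbitrary choices. For +/-1 inputs drawn uniformly and a zero-threshold teacher perceptron with binary (+/-1) weights, each estimated weight is just the clipped (signed) correlation between that input coordinate and the teacher's label.

```python
import numpy as np

def clipped_hebb(X, y):
    """Clipped Hebb rule: w_i = sign(sum_m y^m x_i^m).

    X: (m, n) array of +/-1 inputs; y: (m,) array of +/-1 labels.
    Returns estimated binary (+/-1) weights.
    """
    w = np.sign(y @ X)
    w[w == 0] = 1                      # break ties arbitrarily
    return w.astype(int)

rng = np.random.default_rng(0)
n, m = 21, 2000                        # odd n avoids zero pre-activations
w_true = rng.choice([-1, 1], size=n)   # unknown binary teacher perceptron
X = rng.choice([-1, 1], size=(m, n))   # examples uniform over {-1, +1}^n
y = np.sign(X @ w_true).astype(int)    # teacher labels (zero threshold)

w_hat = clipped_hebb(X, y)
print("fraction of weights recovered:", np.mean(w_hat == w_true))
```

With a couple of thousand uniform examples the rule typically recovers every weight exactly, consistent with the exponentially fast convergence to perfect generalization reported in the COLT93 abstract.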
*************************************************************************** Sorry for this inconvenience, - Mario ---------------------------------------------------------------- | UUU UUU Mario Marchand | | UUU UUU ----------------------------- | | UUU OOOOOOOOOOOOOOOO Department of Physics | | UUU OOO UUU OOO University of Ottawa | | UUUUUUUUUUUUUUUU OOO 150 Louis Pasteur street | | OOO OOO PO BOX 450 STN A | | OOOOOOOOOOOOOOOO Ottawa (Ont) Canada K1N 6N5 | | | | ***** Internet E-Mail: mario at physics.uottawa.ca ********** | | ***** Tel: (613)564-9293 ------------- Fax: 564-6712 ***** | ---------------------------------------------------------------- From gannadm at cs.iastate.edu Thu Jun 16 15:50:22 1994 From: gannadm at cs.iastate.edu (GANN Mailing list (Vasant Honavar)) Date: Thu, 16 Jun 94 14:50:22 CDT Subject: Mailing List on Evolutionary Design of Neural Networks Message-ID: ** Announcement ** ** A Mailing List on Evolutionary Design of Neural Networks ** The neuro-evolution e-mail list (which was originally started by Mike Rudnick but has been defunct for a couple of years due to logistic problems) is being restarted under the new name `gann'. The gann list will focus on the use of evolutionary algorithms (genetic algorithms, genetic programming and their variants) in the exploration of the design space of (artificial) neural network architectures and algorithms. The list will be semi-moderated to keep the signal to noise ratio as high as possible. MEMBERSHIP AND SCOPE: The membership on the list is open to researchers who are actively working in this area. The primary objective of the mailing list is to foster interaction and sharing of new research results, publications, conference announcements, and other useful information among researchers in this area. 
A partial list of topics of particular interest includes:

  Genetic Representation (Blueprints) of Neural Networks
  Encoding and Decoding of Network Blueprints
  Complexity issues
  Development models
  Learning models
  Representational Bias
  Efficiency Issues
  Properties of Representation
  Experimental Results
  Theoretical Considerations

Details of operation of the mailing list follow:

TO SUBSCRIBE: mail to gann-request at cs.iastate.edu with "Subject": subscribe
TO UNSUBSCRIBE: mail to gann-request at cs.iastate.edu with "Subject": unsubscribe

All administrative queries/enquiries/comments should be addressed to gann-request at cs.iastate.edu. All articles/notes/replies-to-queries/submissions to the list should be sent to gann at cs.iastate.edu. You will receive a welcoming message once you have been added to the list.

_______________________________________________________________________

Dr. Vasant Honavar                      Dr. Mike Rudnick
Assistant Professor                     Assistant Professor
Computer Science & Neuroscience         Computer Science
Department of Computer Science          Department of Computer Science
Iowa State University                   Tulane University
honavar at cs.iastate.edu              rudnick at cs.tulane.edu

Mr.
Karthik Balakrishnan
Doctoral Student
Artificial Intelligence Research Group
Department of Computer Science
Iowa State University
balakris at cs.iastate.edu
_______________________________________________________________________

From wermter at nats2.informatik.uni-hamburg.de Fri Jun 17 12:34:04 1994
From: wermter at nats2.informatik.uni-hamburg.de (Stefan Wermter)
Date: Fri, 17 Jun 94 18:34:04 +0200
Subject: paper announcement
Message-ID: <9406171634.AA01645@nats2.informatik.uni-hamburg.de>

-----------------------------------------------------------------------------
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/wermter.screen.ps.Z
------------------------------------------------------------------------------

The following paper is available via anonymous ftp from the neuroprose archive:

File: wermter.screen.ps.Z
Length: 6 pages

Learning Fault-tolerant Speech Parsing with SCREEN

Stefan Wermter and Volker Weber
Department of Computer Science
University of Hamburg, Germany

[to appear in Proceedings of National Conference on Artificial Intelligence 94]

Abstract: This paper describes a new approach and a system SCREEN for fault-tolerant speech parsing. SCREEN stands for Symbolic Connectionist Robust EnterprisE for Natural language. Speech parsing describes the syntactic and semantic analysis of spontaneous spoken language. The general approach is based on incremental immediate flat analysis, learning of syntactic and semantic speech parsing, parallel integration of current hypotheses, and the consideration of various forms of speech-related errors. The goal of this approach is to explore the parallel interactions between various knowledge sources for learning incremental fault-tolerant speech parsing. This approach is examined in a system SCREEN using various hybrid connectionist techniques.
Hybrid connectionist techniques are examined because of their promising properties of inherent fault tolerance, learning, gradedness and parallel constraint integration. The input to SCREEN is hypotheses about the recognized words of a spoken utterance, as potentially produced by a speech system; the output is hypotheses about the flat syntactic and semantic analysis of the utterance. In this paper we focus on the general approach, the overall architecture, and examples of learning flat syntactic speech parsing. Unlike most other speech/language architectures, SCREEN emphasizes an interactive rather than an autonomous position, learning rather than encoding, flat analysis rather than in-depth analysis, and fault-tolerant processing of phonetic, syntactic and semantic knowledge.

------------------------------------------------------------------------------

To retrieve the compressed postscript file, do the following:

unix> ftp archive.cis.ohio-state.edu
ftp> login: anonymous
ftp> password: [your_full_email_address]
ftp> cd pub/neuroprose
ftp> binary
ftp> get wermter.screen.ps.Z
ftp> quit
unix> uncompress wermter.screen.ps.Z
unix> lpr wermter.screen.ps (or however you print postscript)

(Hard copies of the paper are unfortunately not available. European paper format has been used, but everything should be complete on the shorter US paper format as well.)
best wishes,
Stefan Wermter

**************************************************************************
* Dr Stefan Wermter                        Universitaet Hamburg          *
*                                          Fachbereich Informatik        *
* wermter at informatik.uni-hamburg.de    Vogt-Koelln-Strasse 30        *
* phone: +49 40 54715-531                  D-22527 Hamburg               *
* fax:   +49 40 54715-515                  Germany                       *
**************************************************************************

From leon at bop.neuristique.fr Fri Jun 17 03:25:12 1994
From: leon at bop.neuristique.fr (Leon Bottou)
Date: Fri, 17 Jun 94 09:25:12 +0200
Subject: TR Announcement: Effective VC Dimension
Message-ID: <9406170725.AA15312@neuristique.fr>

**DO NOT FORWARD TO OTHER GROUPS**

TR Announcement: Effective VC Dimension
---------------------------------------

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/bottou.effvc.ps.Z

The file bottou.effvc.ps.Z is now available for copying from the Neuroprose repository:

TITLE: On the Effective VC Dimension (12 pages)
Leon Bottou, Neuristique, Paris (France)
Vladimir N. Vapnik, ATT Bell Laboratories, Holmdel (NJ)

ABSTRACT: The very idea of an ``Effective Vapnik-Chervonenkis (VC) dimension'' relies on the hypothesis that the relation between the generalization error and the number of training examples can be expressed by a formula algebraically similar to the VC bound. This hypothesis calls for a serious discussion, since the traditional VC bound widely overestimates the generalization error. In this paper we describe an algorithm- and data-dependent measure of capacity. We derive a confidence interval on the difference between the training error and the generalization error. This confidence interval is much tighter than the traditional VC bound. A simple change in the formulation of the problem yields this extra accuracy: our confidence interval bounds the error difference between a training set and a test set, rather than the error difference between a training set and some hypothetical grand truth.
This ``transductive'' approach allows for deriving a data and algorithm dependent confidence interval. From rosen at unr.edu Sun Jun 19 22:37:28 1994 From: rosen at unr.edu (David B. Rosen) Date: Sun, 19 Jun 94 19:37:28 -0700 Subject: searchable bibliographic databases available Message-ID: <9406200240.AA11406@solstice.unr.edu> Just want to point out this very useful service for searching a large collection of neural network bibtex bibliographies via the World-Wide Web: http://glimpse.cs.arizona.edu:1994/bib/Neural/ For more information, and to access other CS-related bibliographies, see: ftp://ftp.cs.umanitoba.ca/pub/bibliographies/index.html (I have nothing to do with them.) -- David Rosen \ Center for Biomedical \ University e-mail,finger: rosen at unr.edu \ Modeling Research \ of Nevada From gjg at cns.edinburgh.ac.uk Mon Jun 20 12:47:13 1994 From: gjg at cns.edinburgh.ac.uk (Geoffrey Goodhill) Date: Mon, 20 Jun 94 12:47:13 BST Subject: Paper on MDS available Message-ID: <1104.9406201147@cns.ed.ac.uk> The following paper has been submitted for publication and is available by anonymous ftp. It expands on our brief preliminary note published in Nature on June 9th (Simmen, Goodhill & Willshaw, 369:448). An evaluation of the use of Multidimensional Scaling ---------------------------------------------------- for understanding brain connectivity ------------------------------------ Geoffrey J. Goodhill, Martin W. Simmen & David J. Willshaw Centre for Cognitive Science University of Edinburgh Research Paper EUCCS / RP-63, June 1994 Abstract -------- A large amount of data is now available about the pattern of connections between brain regions. Computational methods are increasingly relevant for uncovering structure in such datasets. There has been recent interest in the use of Nonmetric Multidimensional Scaling (NMDS) for such analysis (Young, 1992, 1993; Scannell & Young, 1993). 
NMDS produces a spatial representation of the ``dissimilarities'' between a number of entities. Normally, it is applied to data matrices containing a large number of levels of dissimilarity, whereas for connectivity data there is a very small number. We address the suitability of NMDS for this case. Systematic numerical studies are presented to evaluate the ability of this method to reconstruct known geometrical configurations from dissimilarity data possessing few levels. In this case there is a strong bias for NMDS to produce annular configurations, whether or not such structure exists in the original data. Using a connectivity dataset derived from the primate cortical visual system (Felleman & Van Essen, 1991), we demonstrate why great caution is needed in interpreting the resulting configuration. Application of an independent method that we developed strongly suggests that the visual system NMDS configuration is affected by an annular bias. We question whether an NMDS analysis of the visual system data supports the two streams view of visual processing (Young, 1992). Instructions for obtaining by anonymous ftp: % ftp archive.cis.ohio-state.edu Name: anonymous Password: ftp> binary ftp> cd pub/neuroprose ftp> get goodhill.nmds.ps.Z The paper is approx 1MB and prints on 23 pages. 
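[Editor's note] For readers who have not met MDS, the classical (metric) variant can be written down in a few lines; the nonmetric version analyzed in the paper instead fits only the rank order of the dissimilarities iteratively. The sketch below is an illustration, not the authors' method: the helper name `classical_mds` and the toy data are invented. It double-centers the squared dissimilarity matrix and uses the top eigenvectors as coordinates.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) multidimensional scaling.

    D: (n, n) symmetric matrix of dissimilarities.
    Returns an (n, k) configuration whose pairwise Euclidean
    distances approximate the entries of D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]           # keep the k largest
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale

# Sanity check: recover a planar configuration from its distance matrix.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 2))                   # true 2-D points
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
X = classical_mds(D, k=2)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print("max distance error:", np.abs(D - D_hat).max())
```

Classical MDS recovers an exact Euclidean configuration up to rotation and translation. NMDS, constrained only by the rank order of the dissimilarities, has far more freedom, which is where the annular bias discussed in the abstract can enter when the data contain only a few dissimilarity levels.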
Geoff Goodhill
gjg at cns.ed.ac.uk

From rwp at eng.cam.ac.uk Mon Jun 20 16:46:34 1994
From: rwp at eng.cam.ac.uk (Richard Prager)
Date: Mon, 20 Jun 1994 16:46:34 BST
Subject: Cambridge Neural Networks Summer School 1994
Message-ID: <12036.9406201546@dsl.eng.cam.ac.uk>

Cambridge University Engineering Department
in Collaboration with
Cambridge University Programme for Industry
Announce

The Fourth Annual Neural Networks Summer School
3 1/2 day short course
19-22 September 1994

KOHONEN JORDAN SUTTON BOURLARD DAUGMAN JERVIS
MACKAY NIRANJAN PRAGER ROBINSON TARASSENKO

+--------------------------------------------------------+
| Thanks to support from the EPSRC we are this year able |
| to offer fully funded places for selected UK research  |
| students. There is also a large academic discount.     |
| See below for details of how to apply for these places.|
+--------------------------------------------------------+

OUTLINE AND AIM OF THE COURSE

Recently, much progress has been made in the area of neural computing, bringing together a range of powerful techniques from parallel computing, nonlinear functional analysis, statistical inference and dynamical systems theory. There is much potential in this area for solving a range of interesting and difficult problems with commercial and industrial applications. The course will give a broad introduction to the application and design of neural networks, dealing both with the theory and with specific applications. Survey material will be given, together with recent research results in architectures and training methods, and applications including signal processing, control, speech, robotics and human vision. Design methodologies for a number of common neural network architectures will be covered, together with the theory behind neural network algorithms. Participants will learn the strengths and weaknesses of the neural network approach, and how to assess the potential of the technology in respect of their own requirements.
Lectures will be given by international experts in the field, and delegates will have the opportunity of learning at first hand the technical and practical details of recent work in neural networks from those who are contributing to these developments.

LABORATORY DEMONSTRATIONS

Informal evening visits to Cambridge University Engineering Department laboratories, which will include demonstrations of a number of current research projects.

POSTER SESSION

There will be an informal poster session in which delegates may present their current work or interests should they so wish. Please contact the Course Administrator for further details.

LECTURERS

DR HERVE BOURLARD is with Lernout & Hauspie Speech Products in Brussels. He has made many contributions to the subject, particularly in the area of speech recognition.

DR JOHN DAUGMAN came to Cambridge in 1991 as a Senior Research Fellow in Zoology (computational neuroscience) and is now a Lecturer in Artificial Intelligence in the Computer Laboratory at Cambridge University. His areas of research and publication include computational neuroscience, multi-dimensional signal processing and pattern recognition, machine vision and biological vision.

DR TIMOTHY JERVIS is with Schlumberger Cambridge Research Ltd. His interests lie in the field of neural networks and in the application of Bayesian statistical techniques to learning control.

PROFESSOR MICHAEL JORDAN is in the Department of Brain & Cognitive Science at MIT. He was a founding member of the PDP research group and has made many contributions to the subject, particularly in forward and inverse systems.

PROFESSOR TEUVO KOHONEN is with the Academy of Finland and the Laboratory of Computer and Information Science at Helsinki University of Technology. His specialities are in self-organising maps and their applications.

DR DAVID MACKAY is the Royal Society Smithson Research Fellow at Cambridge University and works on Bayesian methods and non-linear modelling at the Cavendish Laboratory.
He obtained his PhD in Computation and Neural Systems at California Institute of Technology.

DR MAHESAN NIRANJAN is with the Department of Engineering at Cambridge University. His specialities are in speech processing and pattern classification.

DR RICHARD PRAGER is with the Department of Engineering at Cambridge University. His specialities are in speech and vision processing.

DR TONY ROBINSON is with the Department of Engineering at Cambridge University. His specialities are in recurrent networks and speech processing.

DR RICH SUTTON is with the Adaptive Systems Department of GTE Laboratories near Boston, USA. His specialities are in reinforcement learning, planning and animal learning behaviours.

DR LIONEL TARASSENKO is with the Department of Engineering at the University of Oxford. His specialities are in robotics and the hardware implementation of neural computing.

WHO SHOULD ATTEND

This course is intended for engineers, software specialists and other scientists who need to assess the current potential of neural networks. Delegates will have the opportunity to learn at first hand the technical and practical details of recent work in this field. The Neural Networks Summer School has been running for four consecutive years and has consistently received high praise from those who have attended. We attract lecturers of international stature, and speakers this year will include Professor Teuvo Kohonen, Professor Michael Jordan, Dr Rich Sutton, Dr Lionel Tarassenko, Dr David MacKay and Dr John Daugman.

PROGRAMME

The course will be structured to enable full discussion periods between lecturers and delegates. All the formal sessions will be covered by comprehensive course notes.
Lecture subjects will include:

**Introduction and overview**
  Connectionist computing: an introduction and overview
  Programming a neural network
  Parallel distributed processing perspective
  Theory and parallels with conventional algorithms

**Architectures**
  Pattern processing and generalisation
  Bayesian methods and non-linear modelling
  Reinforcement learning neural networks
  Multiple expert networks
  Self-organising neural networks
  Feedback networks for optimization

**Applications**
  System identification
  Time series prediction
  Learning forward and inverse dynamical models
  Control of nonlinear dynamical systems using neural networks
  Artificial and biological vision systems
  Silicon VLSI neural networks
  Applications to diagnostic systems
  Applications to speech recognition
  Applications to mobile robotics
  Financial system modelling
  Applications in medical diagnostics

COURSE FEES and ACCOMMODATION

The course fee is 750 UK pounds (350 UK pounds with the academic discount for full-time students and faculty of higher education institutes), payable in advance, and includes a full set of course notes, a certificate of attendance, and all day-time refreshments for the duration of the course. In order to benefit fully from the course we strongly recommend that delegates elect to be residential, as courses are designed to allow planned and informal discussions in the evening. Accommodation can be arranged in college rooms with shared facilities at Corpus Christi College at 187 UK pounds for 4 nights, to include bed and breakfast, dinner and a Course Dinner. If you would prefer to make your own arrangements, please indicate this on the registration form and details of local hotels will be sent to you.

EPSRC SPONSORED PLACES

A limited number of EPSRC sponsored places are available for all full-time UK registered students. However, priority placement will be given to students with EPSRC (SERC) funding.
Sponsorship covers all course fees, meals and college accommodation (Monday, Tuesday and Wednesday nights only). To be considered for a place, please send a one-page summary of your current research (including how you expect to benefit by attending), a curriculum vitae, a letter of recommendation from your supervisor, and the nature of your current funding. The deadline for applications is 1 August 1994.

---------------------------------------------------------------------------

I wish to REGISTER for the course: "Neural Networks Summer School"

Title (Dr, Mr, Ms etc) ........................................
Name ..........................................................
First Names ...................................................
Job Title......................................................
Company........................................................
Division.......................................................
Address........................................................
...............................................................
...............................................................
...............................................................
Post Code......................................................
Tel. No........................................................
Fax. No .......................................................
E-mail address ................................................

_____ I am applying for an academic discount
_____ I am applying for an EPSRC Scholarship
_____ I will be paying a commercial/industrial rate

______ Please reserve one place and accommodation for 4 nights. I enclose a cheque/purchase order for _______, made payable to the University of Cambridge/EYA.

______ Please reserve one place and send details of local hotels. I enclose a cheque/purchase order for _______, made payable to the University of Cambridge/EYA.
I have the following special requirements concerning diet or disabilities:

Total Amount Enclosed: UKL ____________

For further information contact:
Rebecca Simons, Course Administrator
University of Cambridge Programme for Industry
1 Trumpington Street, Cambridge CB2 1QA
Tel: +44 (0)223 332722  Fax: +44 (0)223 301122
Email: rjs1008 at uk.ac.cam.phx

From iiscorp at netcom.com Mon Jun 20 19:51:47 1994
From: iiscorp at netcom.com (IIS Corp)
Date: Mon, 20 Jun 94 16:51:47 PDT
Subject: Soft Computing Days in San Francisco
Message-ID:

Soft Computing Days in San Francisco

Zadeh, Widrow, Koza, Ruspini, Stork, Whitley, Bezdek, Bonissone, and Berenji on Soft Computing: Fuzzy Logic, Neural Networks, and Genetic Algorithms

Three short courses
San Francisco, CA
October 24-28, 1994

Traditional (hard) computing methods do not provide sufficient capabilities for developing and implementing intelligent systems. Soft computing methods have proved to be important practical tools for building these systems. The following three courses, offered by Intelligent Inference Systems Corp., will focus on all major soft computing technologies: fuzzy logic, neural networks, genetic algorithms, and genetic programming. The courses may be taken either individually or in combination.

Course 1: Artificial Neural Networks (Oct. 24)
  Bernard Widrow and David Stork
Course 2: Genetic Algorithms and Genetic Programming (Oct. 25)
  John Koza and Darrell Whitley
Course 3: Fuzzy Logic Inference (Oct. 26-28)
  Lotfi Zadeh, Jim Bezdek, Enrique Ruspini, Piero Bonissone, and Hamid Berenji

For further details on course topics and registration information, send an email to iiscorp at netcom.com or contact Intelligent Inference Systems Corp., Phone (408) 730-8345, Fax: (408) 730-8550. A detailed brochure will be sent to you as soon as possible.
From terry at salk.edu Mon Jun 20 14:07:54 1994
From: terry at salk.edu (Terry Sejnowski)
Date: Mon, 20 Jun 94 11:07:54 PDT
Subject: Neural Computation, Vol 6, No 4
Message-ID: <9406201807.AA20944@salk.edu>

NEURAL COMPUTATION
July 1994   Volume 6   Number 4

Article:
  What is the Goal of Sensory Coding?
    David J. Field

Note:
  Design Principles of Columnar Organization in Visual Cortex
    Ernst Niebur and Florentin Worgotter

Letters:
  Elastic Net Model of Ocular Dominance: Overall Stripe Pattern and Monocular Deprivation
    Geoffrey Goodhill and David Willshaw
  The Effect of Synchronized Inputs at the Single Neuron Level
    Ojvind Bernander, Christof Koch and Marius Usher
  Segmentation by a Network of Oscillators with Stored Memories
    H. Sompolinsky and M. Tsodyks
  Numerical Bifurcation Analysis of an Oscillatory Neural Network with Synchronous/Asynchronous Connections
    Yukio Hayashi
  Analysis of the Effects of Noise on a Model for the Neural Mechanism of Short-Term Active Memory
    J. Devin McAuley and Joseph Stampfli
  Reduction of Conductance Based Models with Slow Synapses to Neural Nets
    Bard Ermentrout
  Dimension Reduction of Biological Neuron Models by Artificial Neural Networks
    Kenji Doya and Allen I. Selverston
  Neural Network Process Models Based on Linear Model Structures
    Gary M. Scott and W. Harmon Ray
  Stability of Oja's PCA Subspace Rule
    Juha Karhunen
  Supervised Training of Neural Networks via Ellipsoid Algorithms
    Man-Fung Cheung, Kevin M. Passino and Stephen Yurkovich
  Why Some Feedforward Networks Can't Learn Some Polynomials
    N. Scott Cardell, Wayne Joerding and Ying Li

-----

SUBSCRIPTIONS - 1994 - VOLUME 6 - BIMONTHLY (6 issues)

______ $40  Student and Retired
______ $65  Individual
______ $166 Institution

Add $22 for postage and handling outside USA (+7% GST for Canada).
(Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779
e-mail: hiscox at mitvma.mit.edu

-----

From ling at csd.uwo.ca Tue Jun 21 03:43:03 1994
From: ling at csd.uwo.ca (Charles X Ling)
Date: Tue, 21 Jun 94 03:43:03 EDT
Subject: Paper on overfitting ... and learning verb past tense
Message-ID: <9406210743.AA29302@mccarthy.csd.uwo.ca>

Hi. A few months ago I posted some questions on the overfitting effect in neural network learning, and I got some very helpful replies from many people. Thanks a million! After much more work, I have just finished a short paper (to be submitted) which contains clear results on the overfitting issue. Your comments and suggestions on the paper will be highly appreciated!

***********

Overfitting in Neural-Network Learning of Discrete Patterns
Charles X. Ling

Abstract: Weigend reports that the presence and absence of overfitting in neural networks depends on how the testing error is measured, and that there is no overfitting in terms of the classification error. In this paper, we show that, in terms of the classification error, overfitting can be very evident, depending on the representation used to encode the attributes. We design a simple learning problem of a small Boolean function with a clear rationale, and present experimental results to support our claims. We verify our findings in the task of learning the past tense of English verbs.

Instructions for obtaining by anonymous ftp:

% ftp ftp.csd.uwo.ca
Name: anonymous
Password:
ftp> cd pub/SPA/papers
ftp> get overfitting.ps

The paper is approx 280K and prints on 19 pages.
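[Editor's note] The distinction at the heart of this abstract — that whether "overfitting" appears depends on which test error you measure — can be made concrete with a toy example. The sketch below is illustrative, not from the paper: the two error measures are standard, but the "early" and "late" output vectors are invented to show that the squared test error can grow substantially while the thresholded classification error stays at zero.

```python
import numpy as np

def squared_error(outputs, targets):
    """Mean squared error over output units."""
    return np.mean((outputs - targets) ** 2)

def classification_error(outputs, targets, threshold=0.5):
    """Fraction of patterns whose thresholded output disagrees with the target."""
    return np.mean((outputs > threshold).astype(int) != targets)

targets = np.array([1, 1, 0, 0, 1, 0])

# Hypothetical test-set outputs at an early and a late training epoch:
# the late outputs drift toward the decision boundary but never cross it.
early = np.array([0.90, 0.90, 0.10, 0.10, 0.90, 0.10])
late  = np.array([0.55, 0.55, 0.45, 0.45, 0.55, 0.45])

for name, out in [("early", early), ("late", late)]:
    print(name,
          "squared:", round(squared_error(out, targets), 4),
          "classification:", classification_error(out, targets))
```

Here the squared error rises from 0.01 to roughly 0.20 while the classification error is 0.0 at both epochs, so a squared-error curve shows "overfitting" that the classification-error curve does not. The paper's point is the converse case: with some attribute encodings the classification error itself degrades.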
********** While I am here, I'd like to ask if anyone is working on connectionist models for the learning of the past tense of English verbs, which I have worked on with the symbolic system SPA (see two papers in the same directory as the overfitting paper). To ensure a more direct comparison, if running SPA is needed, I'd be very happy to assist and collaborate. Regards, Charles From shultz at hebb.psych.mcgill.ca Tue Jun 21 08:44:50 1994 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 21 Jun 94 08:44:50 EDT Subject: No subject Message-ID: <9406211244.AA02074@hebb.psych.mcgill.ca> Subject: Abstract Date: 21 June '94 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/shultz.cross.ps.Z Please do not forward this announcement to other boards. Thank you. ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: Analyzing Cross Connected Networks (8 pages) Thomas R. Shultz Department of Psychology & McGill Cognitive Science Centre McGill University Montreal, Quebec, Canada H3A 1B1 shultz at psych.mcgill.ca and Jeffrey L. Elman Center for Research on Language Department of Cognitive Science University of California at San Diego La Jolla, CA 92093-0126 U.S.A. elman at crl.ucsd.edu Abstract The non-linear complexities of neural networks make network solutions difficult to understand. Sanger's contribution analysis is here extended to the analysis of networks automatically generated by the cascade-correlation learning algorithm. Because such networks have cross connections that supersede hidden layers, standard analyses of hidden unit activation patterns are insufficient. A contribution is defined as the product of an output weight and the associated activation on the sending unit, whether that sending unit is an input or a hidden unit, multiplied by the sign of the output target for the current input pattern.
Intercorrelations among contributions, as gleaned from the matrix of contributions x input patterns, can be subjected to principal components analysis (PCA) to extract the main features of variation in the contributions. Such an analysis is applied to three problems: continuous XOR, arithmetic comparison, and distinguishing between two interlocking spirals. In all three cases, this technique yields useful insights into network solutions that are consistent across several networks. The paper has been published in J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in Neural Information Processing Systems 6, pp. 1117-1124. San Francisco, CA: Morgan Kaufmann. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca. Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get shultz.cross.ps.Z ftp> quit unix> uncompress shultz.cross.ps.Z Thanks to Jordan Pollack for maintaining this archive. Tom Shultz From wray at ptolemy.arc.nasa.gov Wed Jun 22 13:42:20 1994 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Wed, 22 Jun 94 10:42:20 PDT Subject: How about some WWW structure to all these papers? Message-ID: <9406221742.AA05693@ptolemy.arc.nasa.gov> There's an opportunity here for some ambitious student. We get notices of several papers a day over connectionists. How about someone turning them into a WWW site with: Title, abstract, and author details. URL link to the FTP site & file.
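The kind of index page proposed here could be generated by a short script of roughly this shape. Everything below is hypothetical — the entry fields, page layout, and example record are invented for illustration:

```python
import html

def make_index(entries):
    """Render a list of paper announcements as a simple HTML index page.

    Each entry is a dict with 'title', 'authors', 'abstract', and 'url'
    (the ftp:// location of the PostScript file).
    """
    items = []
    for e in entries:
        items.append(
            '<li><a href="{url}">{title}</a><br>{authors}<p>{abstract}</p></li>'.format(
                url=html.escape(e["url"], quote=True),
                title=html.escape(e["title"]),
                authors=html.escape(e["authors"]),
                abstract=html.escape(e["abstract"]),
            )
        )
    return ("<html><body><h1>Connectionists Abstracts</h1><ul>"
            + "".join(items) + "</ul></body></html>")

# One example record, built from the announcement above.
page = make_index([{
    "title": "Analyzing Cross Connected Networks",
    "authors": "T. R. Shultz and J. L. Elman",
    "abstract": "Contribution analysis extended to cascade-correlation networks.",
    "url": "ftp://archive.cis.ohio-state.edu/pub/neuroprose/shultz.cross.ps.Z",
}])
```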
So instead of getting bombarded with abstracts, etc., and having to do the laborious download, etc., I just look up, http://cs.cmu.edu/pub/neuron/Connectist-Abstracts from my WWW/Xmosaic hotlist every few days, and when I see something I like I just point and click and, hey, presto, ghostview starts up with the Postscript file. Wray Buntine NASA Ames Research Center phone: (415) 604 3389 Mail Stop 269-2 fax: (415) 604 3594 Moffett Field, CA, 94035-1000 email: wray at kronos.arc.nasa.gov From stolcke at ICSI.Berkeley.EDU Wed Jun 22 20:25:16 1994 From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke) Date: Wed, 22 Jun 1994 17:25:16 PDT Subject: How about some WWW structure to all these papers? In-Reply-To: Your message of Wed, 22 Jun 1994 10:42:20 -0700. <9406221742.AA05693@ptolemy.arc.nasa.gov> Message-ID: <199406230025.RAA23681@tiramisu> In message <9406221742.AA05693 at ptolemy.arc.nasa.gov>you wrote: > > There's an opportunity here for some ambitious student. > > We get notices of several papers a day over connectionists. > How about someone turn them into a WWW site with: > > Title, abstract, and author details. > URL link to the FTP site & file. > Several services of this sort already exist. The most useful is probably the Computer Science Techreport index, at http://cs.indiana.edu/cstr/search. It indexes reports from several dozen sources, including the neuroprose archive. You search for title keywords, authors, etc. and it returns URLs to the ftp sites. You follow those, and voila', you got your report on screen (or on disk). I would urge sites that maintain collections of ftp-able reports to get in touch with Marc Van Heyningen (mvanheyn at cs.indiana.edu) to be put on the list of sites queried by this index. --Andreas From kainen at cs.UMD.EDU Thu Jun 23 02:46:51 1994 From: kainen at cs.UMD.EDU (Paul Kainen) Date: Thu, 23 Jun 1994 02:46:51 -0400 Subject: How about some WWW structure to all these papers? 
Message-ID: <199406230646.CAA28076@tove.cs.UMD.EDU> Actually, one can already access the archives via web browsers. For instance, using "lynx", which is available even without being a node (or a similar approach with Mosaic), one can just type "lynx ftp://archive.cis.ohio-state.edu/pub/neuroprose/" to access the directory via the WWW. Thus, lynx actually subsumes gopher and ftp. To navigate, you just move the cursor or change the options page to number the links so that you can just type the number (usually faster). Using lynx eliminates the nuisance of the ftp log-in (which is a sham anyway) and also makes finding files a lot easier. The interface is also fairly intelligent, so all you need to do is type "D" to download the file to your host (or yourself if you're on the net directly). You are given a choice to write the resulting file to disk under whatever name you prefer, and if the file is binary, the transfer mode is set automatically. Anyway, there are tutorials on lynx, which is a neat program by Lou Montulli at the Univ. of Kansas; see, e.g., http://info.cern.ch/hypertext/WWW/Lynx/Status.html (by typing "lynx" followed by the URL above). Paul Kainen (kainen at cs.umd.edu) From donna at Lanl.GOV Thu Jun 23 18:00:36 1994 From: donna at Lanl.GOV (donna@Lanl.GOV) Date: Thu, 23 Jun 1994 15:00:36 -0700 Subject: Letter from Alan Lapedes Message-ID: <9406232058.AA02786@t13.lanl.gov> POSITIONS AT LOS ALAMOS NATIONAL LABORATORY (Postdoctoral and Tech) Positions involving sequence analysis of DNA, RNA and protein sequences, as well as general aspects of computational biology, are available at Los Alamos National Laboratory. Depending on funds, we will have a limited number of (a) postdoctoral positions (b) data entry, and elementary data analysis positions (techs) (c) graduate student and summer student positions.
Funding for these positions is generally associated with specific grants, and the successful applicant will generally be expected to work on specific projects. Projects range from analysis of HIV and human papilloma virus sequences, to immune system studies, to more general aspects of computational biology involving evolution, and sequence-structure-function relationships. Exceptionally well qualified candidates who are interested in theoretical and computational investigations related to the just mentioned topics, and with expertise in one or more of the following areas are encouraged to apply: (a) immunology (b) virology (c) sequence analysis (d) statistical analysis (e) neural net/pattern recognition analysis (f) programming skills (C language) (g) computational biology (h) structural biology Candidates may contact the following address: email: donna at lanl.gov post: Donna Spitzmiller MS B213 Theoretical Division LANL Los Alamos, New Mexico 87545 voice: 505-665-3209 FAX: 505-665-3003 for application material. Questions concerning specific positions should be sent to the same address and will be distributed to appropriate staff members for reply. Please indicate in your initial inquiry whether you are interested in a student, tech, or postdoctoral position. Candidates for tech positions can not have a Ph.D. degree. Candidates for postdoctoral positions must have completed their Ph.D. within the last three years. Los Alamos National Laboratory is an equal opportunity employer. 
Thank you, Alan Lapedes Los Alamos National Laboratory T-13 Los Alamos, NM 87545 From paolo at mcculloch.ing.unifi.it Fri Jun 24 13:17:18 1994 From: paolo at mcculloch.ing.unifi.it (Paolo Frasconi) Date: Fri, 24 Jun 94 19:17:18 +0200 Subject: Report available Message-ID: <9406241717.AA15610@mcculloch.ing.unifi.it> FTP-host: ftp-dsi.ing.unifi.it FTP-file: pub/tech-reports/em.tr-11-94.ps.Z URL: ftp://ftp-dsi.ing.unifi.it/pub/tech-reports/em.tr-11-94.ps.Z The following technical report is available by anonymous ftp. Length is 41 pages. ------------------------------------------------------------------ An EM Approach to Learning Sequential Behavior Yoshua Bengio Dept. Informatique et Recherche Operationnelle Universite de Montreal, Montreal, Qc H3C-3J7 bengioy at iro.umontreal.ca Paolo Frasconi Dipartimento di Sistemi e Informatica Universita di Firenze (Italy) paolo at mcculloch.ing.unifi.it Tech. Report DSI 11/94 Universita di Firenze Abstract We consider problems of sequence processing and we propose a solution based on a discrete state model. We introduce a recurrent architecture having a modular structure that allocates subnetworks to discrete states. Different subnetworks model the dynamics (state transition) and the output of the model, conditional on the previous state and an external input. The model has a statistical interpretation and can be trained by the EM or GEM algorithms, considering state trajectories as missing data. This makes it possible to decouple temporal credit assignment from actual parameter estimation. The model presents similarities to hidden Markov models, but allows mapping input sequences to output sequences, using the same processing style as recurrent networks. For this reason we call it Input/Output HMM (IOHMM).
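As a rough illustration of the architecture just described, here is a toy forward (likelihood) recursion for an input/output HMM. This is a sketch under stated assumptions, not the authors' code: the state-conditional subnetworks are stood in for by simple linear-softmax maps, the initial state distribution is taken to be uniform, and all array shapes are invented for the example.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def iohmm_forward(inputs, W_trans, W_out, outputs_observed):
    """Forward (alpha) recursion for a toy input/output HMM.

    inputs:           (T, d) external input sequence u_t
    W_trans:          (n, n, d); softmax(W_trans[i] @ u_t) gives the
                      input-conditional transition row P(x_t = . | x_{t-1} = i, u_t)
    W_out:            (n, k, d); softmax(W_out[i] @ u_t) gives the
                      state- and input-conditional output distribution over k symbols
    outputs_observed: (T,) integer output symbols y_t
    Returns the log-likelihood of the observed output sequence.
    """
    T, d = inputs.shape
    n = W_trans.shape[0]
    alpha = np.full(n, 1.0 / n)  # uniform initial state belief (an assumption)
    loglik = 0.0
    for t in range(T):
        u = inputs[t]
        A = np.vstack([softmax(W_trans[i] @ u) for i in range(n)])  # (n, n)
        B = np.vstack([softmax(W_out[i] @ u) for i in range(n)])    # (n, k)
        alpha = alpha @ A                          # predict next state belief
        alpha = alpha * B[:, outputs_observed[t]]  # weight by output likelihood
        norm = alpha.sum()
        loglik += np.log(norm)
        alpha /= norm                              # rescale to avoid underflow
    return loglik
```

In the EM view sketched in the abstract, this recursion computes the E-step statistics; the M-step would then re-fit the transition and output subnetworks with the state trajectory treated as missing data.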
Another remarkable difference is that IOHMMs are trained using a supervised learning paradigm (while potentially taking advantage of the EM algorithm), whereas standard HMMs are trained by an unsupervised EM algorithm (or a supervised criterion with gradient ascent). We also study the problem of learning long-term dependencies with Markovian systems, making comparisons to recurrent networks trained by gradient descent. The analysis reported in this paper shows that Markovian models generally suffer from a problem of diffusion of temporal credit for long-term dependencies and fully connected transition graphs. However, while recurrent networks exhibit a conflict between long-term information storing and trainability, these two requirements are either both satisfied or both not satisfied in Markovian models. Finally, we demonstrate that EM supervised learning is well suited for solving grammatical inference problems. Experimental results are presented for the seven Tomita grammars, showing that these adaptive models can attain excellent generalization. --------------------------------------------------------- Paolo Frasconi Dipartimento di Sistemi e Informatica. Via di Santa Marta 3 50139 Firenze (Italy) +39 (55) 479-6361 / fax +39 (55) 479-6363 --------------------------------------------------------- From jjg at phoenix.Princeton.EDU Fri Jun 24 16:47:54 1994 From: jjg at phoenix.Princeton.EDU (Jack J. Gelfand) Date: Fri, 24 Jun 94 16:47:54 EDT Subject: Postdoctoral Position Message-ID: <9406242047.AA04846@tucson.Princeton.EDU> Postdoctoral Position - Recent Ph.D. for research in hybrid force/position control and modeling of the human motor control system. This is a joint project in the Department of Mechanical and Aerospace Engineering and Department of Psychology. Candidate must have strong background in adaptive control theory with an interest in motor physiology. Also includes work on anthropomorphic robots. Please send a vita and a list of 3 references to: Dr. 
Jack Gelfand Princeton University 1-S-6 Green Hall Princeton, NJ 08544 jjg at phoenix.princeton.edu 609-258-2930 We would prefer someone who is available before October, 1994, but will consider all applicants. Position is for 1 or 2 years. Princeton University is an equal opportunity employer. From hwang at pierce.ee.washington.edu Mon Jun 27 09:10:13 1994 From: hwang at pierce.ee.washington.edu (Jenq-Neng Hwang) Date: Mon, 27 Jun 94 06:10:13 PDT Subject: Advance-Program and Registration of NNSP'94 Message-ID: <9406271310.AA14410@pierce.ee.washington.edu.Jaimie> 1994 IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING September 6-8, 1994 Ermioni, Greece The Workshop, sponsored by the Neural Network Technical Committee of the IEEE Signal Processing Society, in cooperation with the IEEE Neural Network Council and with co-spnsorship from ARPA and Intracom S.A. Greece, is designed to serve as a regular forum for researchers from universities and industry who are interested in interdisciplinary research on neural networks for signal processing applications. In the present scope, the workshop encompasses up-to-date research results in several key areas, including learning algorithms, network architectures, speech processing, image processing, adaptive signal processing, medical signal processing, and other applications. GENERAL CHAIR John Vlontzos, INTRACOM S.A. Peania, Attica, Greece, jvlo at intranet.gr PROGRAM CHAIR Jenq-Neng Hwang, University of Washington Seattle, Washington, USA, hwang at ee.washington.edu PROCEEDINGS CHAIR Elizabeth J. Wilson, Raytheon Co. Marlborough, MA, USA, bwilson at sud2.ed.ray.com FINANCE CHAIR Demetris Kalivas, INTRACOM S.A. Peania, Attica, Greece, dkal at intranet.gr PROGRAM COMMITTEE Joshua Alspector (Bellcore, USA) Les Atlas (U. of Washington, USA) Charles Bachmann (Naval Research Lab. USA) David Burr (Bellcore, USA) Rama Chellappa (U. of Maryland, USA) Lee Giles (NEC Research, USA) Steve J. Hanson (Siemens Corp. 
Research, USA) Yu-Hen Hu (U. of Wisconsin, USA) Jenq-Neng Hwang (U. of Washington, USA) Bing-Huang Juang (AT&T Bell Lab., USA) Shigeru Katagiri (ATR Japan) Sun-Yuan Kung (Princeton U., USA) Gary M. Kuhn (Siemens Corp. Research, USA) Stephanos Kollias (National Tech. U. of Athens, Greece) Richard Lippmann (MIT Lincoln Lab., USA) Fleming Lure (Caelum Research Co., USA) John Makhoul (BBN Lab., USA) Richard Mammone (Rutgers U., USA) Elias Manolakos (Northeastern U., USA) Nahesan Niranjan (Cambridge U., UK) Tomaso Poggio (MIT, USA) Jose Principe (U. of Florida, USA) Wojtek Przytula (Hughes Research Lab., USA) Ulrich Ramacher (Siemens Corp., Germany) Bhaskar D. Rao (UC San Diego, USA) Andreas Stafylopatis (National Tech. U. of Athens, Greece) Noboru Sonehara (NTT Co., Japan) John Sorensen (Tech. U. of Denmark, Denmark) Yoh'ichi Tohkura (ATR, Japan) John Vlontzos (Intracom S.A., Greece) Raymond Watrous (Siemens Corp. Research, USA) Christian Wellekens (Eurecom, France) Yiu-Fai Issac Wong (Lawrence Livermore Lab., USA) Barbara Yoon (ARPA, USA) TENTATIVE ADVANCE PROGRAM OF NNSP'94, ERMIONI, GREECE ---------------------------------------------------- ****** TUESDAY, SEPTEMBER 6TH, 1994 ****** 8:15 am -- 8:30 am ------------------ OPENING REMARKS: John Vlontzos 8:30 am -- 9:20 am ------------------ PLENARY TALK: Effective VC Dimensions -- Leon Bottou, Neuristique Inc., France 9:30 am -- 11:50 am ------------------- LEARNING ALGORITHMS I: (oral presentation) Chair: John Sorenson A Novel Unsupervised Competitive Learning Rule with Learning Rate Adaptation for Noise Cancelling and Signal Separation -- M. Van Hulle -- (Laboratorium voor Neuro-en Psychofysiologie, Belgium) A Statistical Inference Based Growth Criterion for the RBF Network -- V. Kadirkamanathan -- (University of Sheffield, United Kingdom) Neural Network Inversion Techniques for EM Training and Testing of Incomplete Data -- J. N. Hwang, C. J. 
Wang -- (University of Washington, USA) OSA--A Topological Algorithm for Constructing Two-Layer Neural Networks -- F.M. Frattale Mascioli, G. Martinelli -- (University of Rome, Italy) Adaptive Regularization -- L. Hansen, C. Rasmussen, C. Svarer, J. Larsen -- (Technical University of Denmark, Denmark) Generalization Performance of Regularized Neural Network Models -- J. Larsen, L. Hansen -- (Technical University of Denmark, Denmark) An Application of Importance-Based Feature Extraction in Reinforcement Learning (080) -- D. Finton, Y.H. Hu -- (University of Wisconsin-Madison, USA) 12:00 am -- 12:40 pm -------------------- LEARNING ALGORITHMS II: (3-minute oral preview of poster presentations) Chair: Andreas Stafylopatis Multilayer Perceptron Design Algorithm -- E. Wilson, D. Tufts -- (Raytheon Company, USA) Mixture Density Estimation via EM Algorithm with Deterministic Annealing -- N. Ueda, R. Nakano -- (Purdue University, USA) Faster and Better Training of Multi-Layer Perceptron for Forecasting Problems -- R. Laddad, U. Desai, P. Poonacha -- (Indian Institute of Technology, India) An Interval Computation Approach to Backpropagation -- C. Pedreira, E. Parente -- (Catholic University of Rio de Janeiro, Brazil) Robust Estimation for Radial Basis Functions -- A. Bors, I. Pitas -- (University of Thessaloniki, Greece) NETWORK ARCHITECTURES I: (3-minute oral preview of poster presentations) Chair: Yu-Hen Hu A Hybrid Neural Network Architechture for Automatic Object Recognition -- T. Fechner, R. Tanger -- (Daimler Benz Forschungsgruppe Systemtechnik, Germany) Time Series Prediction Using Genetically Trained Wavelet Networks -- A. Prochazka, V. Sys -- (Prague University of Chemical Technology, Czech Republic) A Network Of Physiological Neurons With Differentiated Excitatory And Inhibitory Units Possessing Pattern Recognition Capacity -- E. Ventouras, M. Kitsonas, S. Hadjiagapis, N. Uzunoglu, C. -- Papageorgiou, A. Rabavilas, C. 
Stefanis -- (National Technical University of Athens, Greece) Locally Excitatory Globally Inhibitory Oscillator Networks: Theory and Application to Pattern Segmentation -- D. Wang, D. Terman -- (The Ohio State University, USA) Learning with Imperfect Perception -- W. Wen and M. Yokoo -- (NTT Communication Science Laboratories, Japan) A Learning Algorithm for Multi-Layer Perceptrons with Hard-Limiting Threshold Units -- R. Goodman, Z. Zeng -- (California Institute of Technology, USA) The Selection of Neural Models of Non-Linear Dynamical Systems by Statistical Tests -- D. Urbani, P. Roussel-Ragot, L. Personnaz, G. Dreyfus -- (ESPCI de la Ville de Paris, France) 1:30 pm -- 2:50 pm ------------------ LEARNING ALGORITHMS II and NETWORK ARCHITECTURES I: (poster presentations) 3:00 pm -- 5:40 pm ------------------ NETWORK ARCHITECTURES: (oral presentation) Chair: Stephanos Kollias The Use of Recurrent Neural Networks for Classification -- T. Burrows, M. Niranjan -- (Cambridge University, United Kingdom) Network Structures for Nonlinear Digital Filters -- J.N. Lin, R. Unbehauen -- (Lehrstuhl fur Allgemeine und Theoretische Elektrotechnik Universit, Federal Republic Germany) Pruning Recurrent Neural Networks for Improved Generalization Performance -- C. Omlin, C.L. Giles -- (University of Maryland, USA) A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses -- A. Back, E. Wan, S. Lawrence, and A. C. Tsoi -- (The University of Queensland, Australia) Spectral Feature Extraction Using Poisson Moments -- S. Celebi, J. Principe -- (University of Florida, USA) Application of the Fuzzy Min-Max Neural Network Classifier to Problems with Continuous and Discrete Attributes -- A. Likas, K. Blekas, A. Stafylopatis -- (National Technical University of Athens, Greece) Time Signal Filtering by Relative Neighborhood Graph Localized Linear Approximation -- J. 
Sorensen -- (Technical University of Denmark, Denmark) Classification Using Hierarchical Mixtures of Experts -- S. Waterhouse, A. Robinson -- (Cambridge University, United Kingdom) 7:00 pm -- 9:00 pm ------------------ PANEL DISCUSSION: NEURAL NETWORKS FOR INDUSTRIAL APPLICATIONS Moderator: Gary Kuhn (Siemens Corporate Research, USA) Panelist: Kazuo Asakawa (Fujitsu Laboratories Ltd., Japan) Leon Bottou (Neuristique Inc., France) Dan Hammerstrom (Adaptive Solution Inc., USA) Kevin Farrel (Rutgers University, USA) John Vlontzos (Intracom S. A., Greece) Georg Zimmermann (Siemens, Germany) ****** Wednesday, SEPTEMBER 7TH, 1994 ****** 8:30 am -- 9:20 am ------------------ PLENARY TALK: Massively Parallel Context Processing -- Dan Hammerstrom, Adaptive Solution Inc., USA 9:30 am -- 11:30 am ------------------- SPEECH PROCESSING I: (oral presentation) Chair: Bing-Huang Juang Recurrent Network Automata for Speech Recognition: A Summary of Recent Work -- R. Gemello, D. Albesano, F. Mana, R. Cancelliere -- (Centro Studie Laboratori Telecommunicazioni, Italy) Acoustic Echo Cancellation for Hands-free Telephony Using Neural Networks -- A. Birkett, R. Goubran -- (Carleton University, Canada) Minimum Error Training for Speech Recognition -- E. McDermott, S. Katagiri -- (ATR Human Information Processing Research Laboratories, Japan) Connectionist Model Combination for Large Vocabulary Speech Recognition -- M. Hochberg, G. Cook, S. Renals, T. Robinson -- (Cambridge University, United Kingdom) Neural Tree Network/Vector Quantization Probability Estimators for Speaker Recognition -- K. Farrell, S. Kosonocky, R. Mammone -- (Rutgers University, USA) Parallel Training of MLP Probability Estimators for Speech Recognition: A Gender-Based Approach -- N. Mirghafori, N. Morgan, H. 
Bourlard -- (International Computer Science Institute, USA) 11:45 am -- 12:30 pm -------------------- Speech Processing II: (3-minute oral preview of poster presentations) Chair: Shigeru Katagiri LVQ as a Feature Transformation for HMMs -- K. Torkkola -- (Institute Dalle Molle D'Intelligence Artificielle Perceptive, Switzerland) Autoassociator-Based Modular Architecture for Speaker Independent Phoneme Recognition -- L. Lastrucci, G. Bellesi, M. Gori, G. Soda -- (Universita di Firenze, Italy) Non-linear Speech Analysis Using Recurrent Radial Basis Function Networks -- P. Moakes, S. Beet -- (University of Sheffield, England) Word Recognition Using a Neural Network and a Phonetically Based DTW -- Y. Matsuura, H. Miyazawa, T. Skinner -- (Meidensha Corp., Japan) A Monolithic Speech Recognizer Based on Fully Recurrent Neural Networks -- K. Kasper, H. Reininger, D. Wolf, H. Wust -- (Johann Wolfgang Goethe-Universit, FRG) Fuzzification of Formant Trajectories for Classification of CV Utterances Using Neural Network Models -- B. Yegnanarayana, C. C. Sekhar, S Prakash -- (Indian Institute of Technology, India) Minimum Error Classification of Keyword-Sequences -- T. Komori, S. Katagiri -- (ATR Human Information Processing Research Laboratories, Japan) Hybrid Training Method for Tied Mixture Density Hidden Markov Models Using Learning Vector Quantization and Viterbi Estimation -- M. Kurimo -- (Helsinki University of Technology, Finland) Image Processing I: (3-minute oral preview of poster presentations) Chair: Yiu-Fai Issac Wong Medical Imaging with Neural Networks -- C. Pattichis, A. Constantinides -- (University of Cyprus, Cyprus) High Resolution Image Reconstruction Using Mean Field Annealing -- T. Numnonda, M. Andrews -- (University of Aukland, New Zealand) Hardware Neural Network Implementation of Tracking System -- G. Lendaris, R. Pap, R. Saeks, C. Thomas, R. Akita -- (Accurate Automation Corp., USA) Fast Image Analysis Using Kohonen Maps -- D. Willett, C. Busch, F. 
Seibert -- (Darmstadt Computer Graphics Center, FRG) Analysis of Satellite Imagery Using a Neural Network Based Terrain Classifier -- M. Perrone, M. Larkin -- (Brown University, USA) 1:30 pm -- 2:50 pm ------------------ Speech Processing II and Image Processing I: (poster presentations) 3:00 pm -- 5:00 pm ------------------ IMAGE PROCESSING II: (oral presentation) Chair: Sun-Yuan Kung Moving Objects Classification in a Domestic Environment Using Quadratic Neural Network -- G. Lim, M. Alder, C deSilva, Y. Attikiouzel -- (The Univ. of Western Australia, Western Australia) Application of the HLVQ Neural Network to Hand-Written Digit Recognition -- B. Solaiman, Y. Autret -- (Ecole Nationale Superieure des Telecommunications de Bretagne, France) Neural Networks for Robust Image Feature Classification: A Comparative Study -- S. Madiraju, C.C. Liu -- (The University of Melbourne, Australia) Application of SVD Networks to Multi-Object Motion-Shape Analysis -- S.Y. Kung, J. Taur, M.Y. Chiu -- (Princeton University, USA) Ensemble Methods for Automatic Masking of Clouds in AVIRIS Imagery -- C.M. Bachmann, E.E. Clothiaux, J.W. Moore, K. J. Andreano, -- D. Q. Luong -- (Naval Research Laboratory, USA) Saddle-node Dynamics for Edge Detection (092) -- Y-F Wong -- (Lawrence Livermore National Laboratory, USA) ****** THURSDAY, SEPTEMBER 8TH, 1994 ****** 8:30 am -- 9:20 am ------------------ PLENARY TALK: Knowledge-Based Neural Networks -- Kazuo Asakawa, Fujitsu Laboratories Ltd., Japan 9:30 am -- 10:20 am ------------------- INVITED TALK: Predicting Impredictability -- Andreas S. Weigend, University of Colorado, USA 10:30 am -- 11:50 am ------------------- ADAPTIVE SIGNAL PROCESSING: (oral presentation) Chair: Elias Manolakos A Neural Network Trained with the Extended Kalman Algorithm Used for the Equalization of a Binary Communication Channel -- M. 
Birgmeier -- (Technische Universit, Austria) Neural-net Based Receiver Structures for Single- and Multi-amplitude Signals in Interference Channels -- D. Bouras, P.T. Mathiopoulos, D. Makrakis -- (The University of British Columbia, Canada) A Hybrid Digital Computer-Hopfield Neural Network CDMA Detector for Real-time Multi-user Demodulation -- G. Kechriotis and E. Manolakos -- (Northeastern University, USA) Improving the Resolution of a Sensor Array Pattern by Neural Networks and Learning -- C. Bracco, S. Marcos, M. Benidir -- (Laboratoire des Signaux and Systemes, France) 12:00 pm -- 12:40 pm -------------------- Other Applications: (3-minute oral preview of poster presentations) Chair: Barbara Yoon and Jose Principe Sensitivity Analysis on Neural Networks for Meteorological Variable Forecasting -- J. Castellanos, A. Pazos, J. Rios, J.L. Zafra -- (Facultad de Informatica - UPM, Spain) A Hopfield Net Based Adaptation Algorithm for Phased Antenna Arrays -- M. Alberti -- (University of Paderborn, Germany) Modeling of Glaucoma Induced Changes in the Retina and Neural Net Assisted Diagnosis -- S. von Spreckelsen, P. Grumstup, J. Johnsen, L. Hansen -- (Technical University of Denmark, Denmark) Blind Deconvolution of Signals Using a Complex Recurrent Network -- A. Back, A.C. Tsoi -- (The University of Queensland, Australia) Continuous-time Nonlinear Signal Processing: A Neural Network Approach for Gray Box Identification -- R. Rico-Martinez, J. Anderson, I. Kevrekidis -- (Princeton University, USA) A Quantitative Study of Evoked Potential Estimation Using a Feedforward Neural Network -- A. Dumitras, A. Murgan, V. Lazarescu -- (Technical University of Bucharest, Romania) Neural Estimation of Kinetic Rate Constants from Dynamic Pet-Scans -- T. Fog, L. Nielsen, L. Hansen, S. Holm, I. Law, C. Svarer, -- O. Paulson -- (Technical University of Denmark, Denmark) Auditory Stream Segregation Based on Oscillatory Correlation -- D. 
Wang -- (The Ohio State University, USA) Application of Neural Networks for Sensor Performance Improvement -- S. Poopalasingam, C. Reeves, and N. Steele -- (Coventry University, United Kingdom) Neural-Network Based Classification of Laser-Doppler Flowmetry Signals -- N. Panagiotidis, A. Delopoulos, S. Kollias -- (National Technical University of Athens, Greece) NeuroDevice - Neural Network Device Modelling Interface for VLSI Design -- P. Ojala, J. Saarinen, K. Kaski -- (Tampere University of Technology, Finland) Encoding Pyramids by Labeling RAAM -- S. Lonardi, A. Sperduti, A. Starita -- (Corso Italia 40, Italy) Reconstructed dynamics and Chaotic Signal Modeling -- J.M. Kuo, J. Principe -- (University of Florida, USA) A Neural Network Scheme for Earthquake Prediction Based on the Seismic Electric Signals -- S. Lakkos, A. Hadjiprocopis, R. Comley -- (City University of London, United Kingdom) 1:30 pm -- 2:50 pm ------------------ Other Applications: (poster presentations) 3:00 pm -- 5:20 pm ------------------ Medical Signal Processing: (oral presentation) Chair: Hsin-Chia Fu Medical Diagnosis and Artificial Neural Networks: A Medical Expert System Applied to Pulmonary Diseases -- G.P. Economou, C. Spriopoulos, N. Economopoulos, N. Charokopos, -- D. Lymberopoulos, M. Spiliopoulou, E. Haralambopulu, C. Goutis -- (University of Patras, Greece) Toward Improving Excercise ECG for Detecting Ischemic Heart Disease with Recurrent and Feedforward Neural Nets -- G. Dorffner, E. Leitgeb, H. Koller -- (Austrian Research Institute for Artificial Intelligence, Austria) Neural Networks and Higher Order Spectra for Breast Cancer Detection -- T. Stathaki, A. Constantinides -- (Imperial College, United Kingdom) Towards Semen Quality Assessment Using Neural Networks -- C. Linneberg, P. Salamon, C. Svarer, L. Hansen -- (Technical University of Denmark, Denmark) Use of Neural networks in detection of ischemic episodes from ECG leads -- N. Maglaveras, T. Stamkopoulos, C. Pappas, M. 
Strintzis -- (Aristotelian University, Greece) EEG Signal Analysis Using a Multi-layer Perceptron with Linear Preprocessing -- S.A. Mylonas, R.A. Comley -- (City University of London, United Kingdom) ************** THE END *************** REGISTRATION FORM 1994 IEEE Workshop on Neural Networks for Signal Processing Please complete this form (type or print) Name ___________________________________________________________ Last First Middle Firm or University _____________________________________________ Mailing Address ________________________________________________ ________________________________________________________________ ________________________________________________________________ Country Phone FAX Fee payment must be made by money order or personal check. Do not send cash. Make fee payable to "IEEE NNSP'94 - c/o D. Kalivas' and mail it together with the registration form to: NNSP' 94 c/o D. Kalivas Intracom S.A. P.O. Box 68 19002 Peania Greece. For further information, Dr. Kalivas can be reached at Tel.: 011 301 6860479 FAX: 011 301 6860312 e-mail: dkal at intranet.gr Advanced registration, before: July 15 _________________________________________________________________________ _________________________________________________________________________ REGISTRATION FEE AND HOTEL ACCOMMODATIONS Registration fee with single room (for three nights: September 5, 6 and 7) and meals for one person Date IEEE Member Non-member __________________________________________________________________ Before July 15 U.S. $600 U.S. $650 * After July 15 U.S. $650 U.S. $700 Registration fee with double room (for three nights: September 5, 6 and 7) and meals for two persons Date IEEE Member Non-member __________________________________________________________________ Before July 15 U.S. $720 U.S. $770 * After July 15 U.S. $770 U.S. 
$820 Registration fee without room but still with meals Date IEEE Member Non-member __________________________________________________________________ Before July 15 U.S. $400 U.S. $450 * After July 15 U.S. $450 U.S. $500 Additional nights at extra cost (U.S. $ 100 per night) with no meals Please specify dates and include payment in the registration fee Number of additional nights: ______________________________ Dates: ____________________________________________________ Extra Cost: _______________________________________________ * After July 15 there will be only a limited number of rooms available, therefore we strongly recommend the early registration. I want reservation for the bus leaving the Eastern Airport of Athens at 4:00 pm on the 5th of September for ___ persons. __________________________________________________________________________ __________________________________________________________________________ TRAVEL INFORMATION NNSP'94 will be held at the Porto Hydra Hotel. The Porto Hydra Hotel is located in the eastern Peloponnese, opposite the island of Hydra and near famous archaeological sites (ancient theater of Epidaurus, Mycenae, Tiryns). A one-page map is provided. The exact hotel address is: Hotel Porto Hydra Plepi Ermionidos Greece Tel: 30 - 754 - 41112, 41270-4 FAX: 30 - 754 - 41295 Possible ways to get to Porto Hydra Hotel are: By car: Follow the route Athens - Korinthos - Epidaurus - Kranidi - - Ermioni - Porto Hydra. This distance is 195 km and it will take you approximately 2.5 hours. By ship: Take a taxi from the airport to the port of Passalimani (less than 2000 drachmas cost). There is a ship leaving every morning to Ermioni. You do not need to make reservations. The trip is four hours and 15 minutes long. For time schedules call 30-1-4511311. From Ermioni a hotel bus will take you to Porto Hydra. By Hydrofoil Boats: Take a taxi from the airport to the port of ZEA (less than 2000 drachmas cost). 
There are "flying dolphins" (hydrofoil boats) going to Ermioni. You need
to buy the tickets in advance (from a travel agent) because by the time
you reach the port they may be sold out. The trip takes two hours. For
time schedules call 30-1-4280001. From Ermioni a hotel bus will take you
to Porto Hydra.

Best way: We will rent a bus (or buses) to take you from the Eastern
Airport of Athens (this is the airport where most of the international
flights arrive) to the Porto Hydra Hotel. The buses will leave at 4:00 pm
on the 5th of September. To reserve seats on the bus, please check the
appropriate box and specify the number of reserved seats you want in the
Registration Form.

From isabelle at neural.att.com Mon Jun 27 19:20:45 1994
From: isabelle at neural.att.com (Isabelle Guyon)
Date: Mon, 27 Jun 94 19:20:45 EDT
Subject: No subject
Message-ID: <9406272320.AA08319@neural>

> - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - <
- - >  First UNIPEN Benchmark of On-line Handwriting Recognizers  < -
- - >                     Organized by NIST                       < -
> - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - <

                              June 1994

* CALL FOR DATA (please post)

At the initiative of Technical Committee 11 of the International
Association for Pattern Recognition (IAPR), the UNIPEN project was started
to stimulate research and development in on-line handwriting recognition
(e.g. for pen computers and pen communicators). UNIPEN provides a platform
for data exchange at the Linguistic Data Consortium (LDC) and is
organizing a worldwide benchmark this year under the control of the US
National Institute of Standards and Technology (NIST). The benchmark is
concerned with writer-independent recognition of sentences, isolated words
and isolated characters of any writing style (handprinted and/or cursive).
Although UNIPEN will provide, in the future, data for various alphabets,
this particular benchmark is limited to letters and symbols from an
English computer keyboard.
The data will be donated by the participants.

* Conditions of participation

Participation in the benchmark is open to any individual or institution
who provides a sample of handwriting in the UNIPEN format containing at
least 12,000 characters. The data must be of acceptable quality and
donated by October 1st, 1994. The database of donated samples will be
available free of charge to the data donors.

Registration material can be obtained by sending email to Stan Janet at
stan at magi.ncsl.nist.gov or via ftp:

    ftp ftp.cis.upenn.edu
    Name: anonymous
    Password: [use your email address]
    ftp> cd pub/UNIPEN-pub/documents
    ftp> get call-for-data.ps
    ftp> quit

* Organizing committee

Isabelle Guyon, AT&T Bell Laboratories, USA
Lambert Schomaker, Nijmegen Institute for Cognition and Information, The Netherlands
Stan Janet, National Institute of Standards and Technology, USA
Mark Liberman, Linguistic Data Consortium, University of Pennsylvania, USA
Rejean Plamondon, IAPR, TC11, Ecole Polytechnique de Montreal, Canada

From rreilly at nova.ucd.ie Tue Jun 28 10:40:19 1994
From: rreilly at nova.ucd.ie (Ronan Reilly)
Date: Tue, 28 Jun 1994 15:40:19 +0100
Subject: INNC'94: Programme & registration (433 lines)
Message-ID:

A non-text attachment was scrubbed...
Name: not available
Type: x-sun-attachment
Size: 11747 bytes
Desc: not available
Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/dff70a58/attachment.ksh

From L.P.OMard at lut.ac.uk Wed Jun 29 12:50:52 1994
From: L.P.OMard at lut.ac.uk (L.P.OMard)
Date: Wed, 29 Jun 94 12:50:52 bst
Subject: Announcing the "lutear-auditory-simulation" Mailing List
Message-ID: <9406291150.AA04238@hpc.lut.ac.uk>

Hello People,

This message is to announce the "lutear-auditory-simulation" mailing list.
The list may be joined by anybody (international Internet users included)
by sending the following message to "mailbase at mailbase.ac.uk":

    To: mailbase at mailbase.ac.uk
    Subject: (you may leave this blank)
    Message text:
    join lutear-auditory-simulation Ann Jones
    stop

- where "Ann Jones" should be replaced by your own first and second names.

What is LUTEar?
----------------

The LUTEar Core Routines Library (CRL, version 1.5.2, October 1993) is a
computational platform and set of coding conventions which supports a
modular approach to auditory system modelling. The system is written in
ANSI C and works on a wide range of operating systems. It is available via
anonymous FTP from suna.lut.ac.uk (131.231.16.2): /public/hulpo/lutear.

The CRL brings together established models, developed by the group and
contributed by other researchers in the field, which simulate various
stages in the auditory process. Since the first release, the LUTEar CRL
has been tested and used both at the originating laboratory and at many
other sites. It has been used as a tool for speech processing and speech
and voice analysis, as well as in the investigation of auditory phenomena,
for which it was primarily constructed.

Included with this release is a comprehensive series of test programs.
These programs were used to test the CRL routines; they reproduce the
behaviour of the respective published models included. The programs also
provide examples of how the CRL may be used in auditory investigation
programs. In addition, the programs read data from parameter files and
thus can readily be used to investigate further the behaviour of the
models included in the CRL. FTP the "README" file for a more comprehensive
list of features.

-----------------||----------------------

The next release of LUTEar will be announced on both the present mailing
list and the new "lutear-auditory-simulation" list, but all future
announcements will be sent only to the new mailing list.
The aims of the "lutear-auditory-simulation" mailing list are as follows:

(1) Quick bug reporting to all concerned: People can report suspected
bugs, so that while I investigate, others can be aware of potential
problems.

(2) The expression of "wish lists" which others can comment on: This might
give me an idea of priorities for extensions of LUTEar.

(3) Discussion of the development of LUTEar: The development of LUTEar is
an on-going process. The mailing list would provide a forum for discussion
of any major changes which I am considering. This would allow me to avoid
introducing unpopular changes in LUTEar.

(4) Encouraging users to share extra routines they have written amongst
themselves: One of the very attractive features of LUTEar is that it
provides a common platform for investigations, so that people can compare
results obtained with a "standard" piece of software. Extra routines
written by users can be assessed, by other users as well as by myself, and
subsequently incorporated in the standard code.

(5) Reporting of successes and failures in attempting to apply LUTEar to
various problems: This will give everyone feedback about the performance
of LUTEar, provide individuals with further ideas on how they can use it,
and also highlight as yet unresolved difficulties.

(6) The discussion of optimum parameter settings for various applications:
Hopefully, when people report results obtained with LUTEar, they will be
quite specific about which routines and parameters they used.

(7) Exchange of scientific information amongst LUTEar users: Users of
LUTEar must have common scientific interests! Many of us probably already
know each other and something about each other's work. Nevertheless, it
would be good to know who "the others" are, and to discuss the specific
problems we are trying to apply LUTEar to.

I am indebted to Angela Darling (angie at phonetics.ucl.ac.uk) for
providing the major portion of the above text.

..Lowel.
+-------------------------+-----------------------------------------------+ |Lowel P. O'Mard PhD. | /\ / \ Speech & Hearing | |Dept. of Human Sciences, | /\/\ /\/ \/ /\ \ /\ Laboratory | |University of Technology,|_/\/\/ /\ \/\/ /\ /\/ \ \/ /\/\_ /\___ | |Loughborough, | \/\/ \/\/\/ \/ /\ \/\/ /\ / | |Leics. LE11 3TU, U.K. | \ /\/\/\ /\/ \ /\/\/ \/ Director: | |L.P.OMard at lut.ac.uk | \/ \/ \/ Prof. Ray Meddis | +-------------------------+-----------------------------------------------+ From lss at compsci.stirling.ac.uk Wed Jun 29 10:08:01 1994 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 29 Jun 94 10:08:01 BST (Wed) Subject: TR available:Synchronization in Dynamic Neural Networks Message-ID: <9406291008.AA04295@uk.ac.stir.cs.peseta> ***DO NOT FORWARD TO OTHER GROUPS*** University of Stirling, Centre for Cognitive and Computational Neuroscience, Stirling FK9 4LA, Scotland CCCN Technical report CCCN-18 (Ph.D. Thesis) Synchronization in Dynamic Neural Networks David E. Cairns This thesis is concerned with the function and implementation of synchronization in networks of oscillators. Evidence for the existence of synchronization in cortex is reviewed and a suitable architecture for exhibiting synchronization is defined. A number of factors which affect the performance of synchronization in networks of laterally coupled oscillators are investigated. It is shown that altering the strength of the lateral connections between nodes and altering the connective scope of a network can be used to improve synchronization performance. It is also shown that complete connective scope is not required for global synchrony to occur. The effects of noise on synchronization performance are also investigated and it is shown that where an oscillator network is able to synchronize effectively, it will also be robust to a moderate level of noise in the lateral connections. 
Where a particular oscillator model shows poor synchronization
performance, it is shown that noise in the lateral connections is capable
of improving synchronization performance.

A number of applications of synchronizing oscillator networks are
investigated. The use of synchronized oscillations to encode global
binding information is investigated, and the relationship between the form
of grouping obtained and connective scope is discussed. The potential for
using learning in synchronizing oscillator networks is illustrated, and an
investigation is made into the possibility of maintaining multiple phases
in a network of synchronizing oscillators. It is concluded from these
investigations that it is difficult to maintain multiple phases in the
network architecture used throughout this thesis, and a modified
architecture capable of producing the required behaviour is demonstrated.

This report is available by anonymous FTP from ftp.cs.stir.ac.uk
(139.153.254.29) in the directory pub/tr/cccn. The filename is TR18.ps.Z
(as usual, this needs to be transferred in binary mode, decompressed, and
the PostScript printed). The decompressed file is 13.0 Mb long, and the
printed document 100 pages. Hard copies may be made available for a price
(unfortunately, our budget does not run to free copies of theses). Because
the thesis contains grey-level illustrations, it does not photocopy well.
If necessary, email lss at cs.stir.ac.uk.

From petsche at scr.siemens.com Wed Jun 29 14:30:57 1994
From: petsche at scr.siemens.com (Thomas Petsche)
Date: Wed, 29 Jun 1994 14:30:57 -0400
Subject: New book
Message-ID: <199406291830.OAA15744@puffin.scr.siemens.com>

The following book is now available from MIT Press:

COMPUTATIONAL LEARNING THEORY AND NATURAL LEARNING SYSTEMS
Volume II: Intersection between Theory and Experiments

Edited by Stephen J. Hanson, Thomas Petsche, Michael Kearns, and
Ronald L. Rivest.
The book is the result of a workshop of the same name which brought
together researchers from learning theory, machine learning, and neural
networks. The book includes 23 chapters by authors in these various
fields, plus a unified bibliography and index:

 1. Bayes Decisions in a Neural Network-PAC Setting
    Svetlana Anulova, Jorge R. Cuellar, Klaus-U. Hoeffgen and Hans-U. Simon
 2. Average Case Analysis of $k$-CNF and $k$-DNF Learning Algorithms
    Daniel S. Hirschberg, Michael J. Pazzani and Kamal M. Ali
 3. Filter Likelihoods and Exhaustive Learning
    David H. Wolpert
 4. Incorporating Prior Knowledge into Networks of Locally-Tuned Units
    Martin Roescheisen, Reimar Hofmann and Volker Tresp
 5. Using Knowledge-Based Neural Networks to Refine Roughly-Correct Information
    Geoffrey G. Towell and Jude W. Shavlik
 6. Sensitivity Constraints in Learning
    Scott H. Clearwater and Yongwon Lee
 7. Evaluation of Learning Biases Using Probabilistic Domain Knowledge
    Marie desJardins
 8. Detecting Structure in Small Datasets by Network Fitting under Complexity Constraints
    W. Finnoff and H.G. Zimmermann
 9. Associative Methods in Reinforcement Learning: An Empirical Study
    Leslie Pack Kaelbling
10. A Schema for Using Multiple Knowledge
    Matjaz Gams, Marko Bohanec and Bojan Cestnik
11. Probabilistic Hill-Climbing
    William W. Cohen, Russell Greiner and Dale Schuurmans
12. Prototype Selection Using Competitive Learning
    Michael Lemmon
13. Learning with Instance-Based Encodings
    Henry Tirri
14. Contrastive Learning with Graded Random Networks
    Javier R. Movellan and James L. McClelland
15. Probability Density Estimation and Local Basis Function Neural Networks
    Padhraic Smyth
16. Hamiltonian Dynamics of Neural Networks
    Ulrich Ramacher
17. Learning Properties of Multi-Layer Perceptrons with and without Feedback
    D. Gawronska, B. Schuermann and J. Hollatz
18. Unsupervised Learning for Mobile Robot Navigation Using Probabilistic Data Association
    Ingemar J. Cox and John J. Leonard
19. Evolution of a Subsumption Architecture that Performs a Wall Following Task for an Autonomous Mobile Robot
    John R. Koza
20. A Connectionist Model of the Learning of Personal Pronouns in English
    Thomas R. Shultz, David Buckingham and Yuriko Oshima-Takane
21. Neural Network Modeling of Physiological Processes
    Volker Tresp, John Moody and Wolf-Ruediger Delong
22. Projection Pursuit Learning: Some Theoretical Issues
    Ying Zhao and Christopher G. Atkeson
23. A Comparative Study of the Kohonen Self-Organizing Map and the Elastic Net
    Yiu-fai Wong

The book is ISBN 0-262-58133-7 and the price is $35 (I believe).
Additional ordering information can be obtained from:

    Neil Blaisdell
    MIT/Bradford Books Sales Department
    blaisdel at mit.edu

(They will take a credit card order if you like, and trust the net with
your credit card number.)

From massone at mimosa.eecs.nwu.edu Thu Jun 30 12:13:59 1994
From: massone at mimosa.eecs.nwu.edu (Lina Massone)
Date: Thu, 30 Jun 94 11:13:59 CDT
Subject: paper available
Message-ID: <9406301613.AA09680@mimosa.eecs.nwu.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-file: massone.sensorimotor.ps.Z

The following paper has been placed in the Neuroprose archive.

SENSORIMOTOR LEARNING

Lina L. E. Massone
Dept. of Biomedical Engineering
Dept. of Electrical Engineering and Computer Science
Northwestern University

(to appear in The Handbook of Brain Theory and Neural Networks,
Michael A. Arbib Ed., MIT Press.)

The paper does not have an abstract, so what follows are the first few
paragraphs of the introduction and the list of contents.

1. INTRODUCTION

A sensorimotor transformation maps signals from various sensory modalities
into an appropriate set of efferent motor commands to skeletal muscles or
to robotic actuators. Sensorimotor learning refers to the process of
tuning the internal parameters of the various structures of the central
nervous system (CNS) or of some processing architecture in such a way that
a satisfactory motor performance will result.
Sensorimotor transformations and sensorimotor learning can be viewed as complex parallel distributed information processing tasks. A number of different components contribute to the tasks' complexity. On the sensory side, signals from various sources (vision, hearing, touch, pain, proprioception) need to be integrated and interpreted (Stein and Meredith, 1993). In particular, they need to be translated into a form that can be used for motor purposes because the coordinates of afferent sensory signals are different from the coordinates of the movements they are guiding. This convergent process, from many parallel sensory signals to an intermediate internal representation, is referred to as the early stages in a sensorimotor transformation. A widely accepted concept that describes the computed internal representation is the so-called motor program: a specification of the important parameters of the movement to be executed (Keele and Summers, 1976). Bernstein used the term motor program to denote a prototype of a planned movement described with an abstract central language that encodes specific features of the movement itself. This idea was then elaborated by Schmidt (1988) and modified by Arbib (1990), who introduced the concept of coordinated control program: a combination of perceptual and motor functional units called schemas, whose dynamic interaction causes the emergence of behavior. A different approach was proposed by Schoner and Kelso (1988), who redefined motor programs in dynamic terms as transitions between the equilibrium states of a dynamical system. ETC. ETC. ETC. ETC. 2. 
COMPUTATIONAL APPROACHES 2.1 Coordinate Transformation 2.2 Forward and Inverse Models 2.3 Optimization and Learning 2.4 Representing Information ********************************************************* From seung at physics.att.com Thu Jun 30 17:20:15 1994 From: seung at physics.att.com (seung@physics.att.com) Date: Thu, 30 Jun 94 17:20:15 EDT Subject: tutorial announcement--statistical physics and learning Message-ID: <9406302120.AA04149@physics.att.com> ========================================================= What does statistical physics have to say about learning? ========================================================= Sunday, July 10, 1994 8:45 am to 12:15 pm Milledoler Hall, room 100 Rutgers University New Brunswick, New Jersey Tutorial conducted by Sebastian Seung and Michael Kearns AT&T Bell Laboratories Murray Hill, NJ 07974 Free of charge and open to the general public, thanks to sponsorship from DIMACS. Held in conjunction with the Eleventh International Conference on Machine Learning (ML94, July 11-13, 1994) and the Seventh Annual Conference on Computational Learning Theory (COLT94, July 12-15, 1994). The study of learning has historically been the domain of psychologists, statisticians, and computer scientists. Statistical physicists are the seemingly unlikely latecomers to the subject. This tutorial is an overview of the ideas they are now bringing to learning theory, and of the relationship of these ideas to statistics and computational learning theory. We focus on the analysis of learning curves, defined here as graphs of generalization error versus the number of examples used in training. We explain why supervised learning from examples can lead to learning curves with a variety of behaviors, some of which are very different from (though consistent with) the Vapnik-Chervonenkis bounds. This is illustrated most dramatically by the presence of phase transitions in certain learning models. 
We discuss theoretical progress towards understanding two puzzling empirical findings--that neural networks sometimes attain good generalization with fewer examples than adjustable parameters, and that generalization performance can be relatively insensitive to the size of the hidden layer. We conclude with a discussion of the relationship of the statistical physics approach with that of the Vapnik-Chervonenkis theory. No prior knowledge of learning theory will be assumed. This tutorial is one of a set of DIMACS-sponsored tutorials that are free and open to the general public. Directions to Rutgers can be found in the ML/COLT announcement, which is available via anonymous ftp from www.cs.rutgers.edu in the directory "/pub/learning94". Users of www information servers such as mosaic can find the information at "http://www.cs.rutgers.edu/pub/learning94/learning94.html". Other available information includes a campus map, and abstracts of all workshops/tutorials. Questions can be directed to ml94 at cs.rutgers.edu, colt94 at research.att.com, or to Sebastian Seung at 908-582-7418 and seung at physics.att.com
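[Editor's note] The learning curve the tutorial centers on - generalization error as a function of the number of training examples - can be illustrated with a toy model. The sketch below is this editor's own minimal example, not material from the tutorial: it learns a one-dimensional threshold concept by empirical risk minimization and averages the generalization error over many random training sets. All names, the target threshold, and the sample sizes are made up for illustration.

```python
import random

random.seed(0)

TRUE_THRESHOLD = 0.3  # hypothetical target concept: x >= 0.3 -> class 1

def label(x):
    return 1 if x >= TRUE_THRESHOLD else 0

def fit_threshold(xs):
    # ERM for 1-D thresholds: place the learned boundary at the largest
    # training point labeled 0 (or at 0.0 if every point is positive).
    negatives = [x for x in xs if label(x) == 0]
    return max(negatives, default=0.0)

def generalization_error(theta):
    # Inputs are uniform on [0, 1], so the error equals the probability
    # mass of the interval between the learned and the true threshold.
    return abs(theta - TRUE_THRESHOLD)

def learning_curve(sizes, trials=200):
    # Average the generalization error over many random training sets,
    # one point on the curve per training-set size m.
    curve = []
    for m in sizes:
        total = 0.0
        for _ in range(trials):
            xs = [random.random() for _ in range(m)]
            total += generalization_error(fit_threshold(xs))
        curve.append(total / trials)
    return curve

if __name__ == "__main__":
    sizes = [1, 5, 25, 125]
    for m, err in zip(sizes, learning_curve(sizes)):
        print(m, round(err, 3))
```

In this toy model the averaged error falls off smoothly as the sample size grows - one of the benign learning-curve behaviors the tutorial contrasts with phase transitions, where the error drops abruptly at a critical number of examples.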