From marney at ai.mit.edu Thu Jun 1 15:09:48 1995
From: marney at ai.mit.edu (Marney Smyth)
Date: Thu, 1 Jun 95 15:09:48 EDT
Subject: NNSP95 - Formal Program
Message-ID: <9506011909.AA16249@motor-cortex>

                    1995 IEEE WORKSHOP ON
                    ---------------------
            NEURAL NETWORKS FOR SIGNAL PROCESSING
            -------------------------------------
     August 31 - September 2, 1995, Royal Sonesta Hotel
               Cambridge, Massachusetts, USA.

The full program for NNSP95 is now available, and you can get the information here, by consulting our WWW Homepage at this URL: http://www.cdsp.neu.edu/info/nnsp95.html, or by anonymous ftp at site ftp.cdsp.neu.edu, public directory /pub/NNSP95.

NNSP95 is sponsored by the Neural Networks Technical Committee of the IEEE Signal Processing Society, in cooperation with the IEEE Neural Network Council and with co-sponsorship from ONR/ARPA and NSF (through CBCL, the Center for Biological and Computational Learning at MIT).

The Workshop is designed to serve as a regular forum for researchers from universities and industry who are interested in interdisciplinary research on neural networks for signal processing applications. NNSP95 offers a showcase for current research results in key areas, including learning algorithms, network architectures, speech processing, image processing, computer vision, adaptive signal processing, medical signal processing, digital communications and other applications.

GENERAL CHAIRS

-- Federico Girosi
   Center for Biological and Computational Learning,
   Artificial Intelligence Laboratory, MIT, E25-201, Cambridge, MA 02139
   Tel: (617)253-0548, Fax: (617)258-6287, email: girosi at ai.mit.edu

-- John Makhoul
   BBN Systems and Technologies
   70 Fawcett Street, Cambridge, MA 02138
   Tel: (617)873-3332, Fax: (617)873-2534, email: makhoul at bbn.com

PROGRAM CHAIR

-- Elias S. Manolakos
   Communications and Digital Signal Processing (CDSP)
   Center for Research and Graduate Studies
   409 Dana Build., Electrical and Computer Engineering Department
   Northeastern University, Boston, MA 02115
   Tel: (617)373-3021, Fax: (617)373-4189, email: elias at cdsp.neu.edu

PROCEEDINGS CHAIR

-- Elizabeth J. Wilson, Raytheon Co., Marlborough, MA
   email: bwilson at sud2.ed.ray.com

FINANCE CHAIR

-- Judy Franklin, GTE Laboratories Incorporated, Waltham, MA 02254
   email: jfranklin at gte.com

PUBLICITY CHAIR

-- Marney Smyth, MIT, email: marney at ai.mit.edu

LOCAL ARRANGEMENTS CHAIR

-- Mary Pat Fitzgerald, MIT, email: marypat at ai.mit.edu

TECHNICAL PROGRAM COMMITTEE

Joshua Alspector (Bellcore, USA)
Charles Bachmann (Naval Research Lab., USA)
Alice Chiang (MIT Lincoln Lab., USA)
A. Constantinides (Imperial College, UK)
Lee Giles (NEC Research, USA)
Federico Girosi (CBCL, MIT, USA)
Lars Kai Hansen (Tech. U. of Denmark, Denmark)
Yu-Hen Hu (U. of Wisconsin, USA)
Jenq-Neng Hwang (U. of Washington, USA)
Bing-Huang Juang (AT&T Bell Lab., USA)
Shigeru Katagiri (ATR, Japan)
George Kechriotis (Thinking Machines Inc., USA)
Stephanos Kollias (National Tech. U. of Athens, Greece)
Sun-Yuan Kung (Princeton U., USA)
Gary M. Kuhn (Siemens Corp. Research, USA)
Richard Lippmann (MIT Lincoln Lab., USA)
John Makhoul (BBN Lab., USA)
Elias Manolakos (CDSP, Northeastern U., USA)
P. Takis Mathiopoulos (U. of British Columbia, Canada)
Mahesan Niranjan (Cambridge U., UK)
Tomaso Poggio (CBCL, MIT, USA)
Jose Principe (U. of Florida, USA)
Wojtek Przytula (Hughes Research Lab., USA)
John Sorensen (Tech. U. of Denmark, Denmark)
Andreas Stafylopatis (National Tech. U. of Athens, Greece)
John Vlontzos (Intracom S.A., Greece)
Raymond Watrous (Siemens Corp. Research, USA)
Christian Wellekens (Eurecom, France)
Ron Williams (Northeastern U., USA)
Barbara Yoon (ARPA, USA)
Xinhua Zhuang (U. of Missouri, USA)

TENTATIVE ADVANCE PROGRAM OF NNSP'95, CAMBRIDGE MA, USA
-------------------------------------------------------

****** THURSDAY AUGUST 31st, 1995 ******

8:30 am -- 8:45 am
--------------------
OPENING REMARKS: Federico Girosi

8:45 am -- 9:30 am
--------------------
PLENARY TALK 1: "Learning from Hints"
-- Yaser Abu-Mostafa, Caltech, USA

9:30 am -- 9:50 am Coffee Break
--------------------

9:50 am -- 10:50 am
--------------------
THEORY 1: (Oral Presentations)

Missing Data in Nonlinear Time-Series Prediction
-- Volker Tresp -- Central Research, Siemens AG, Germany

Non-Linear Time Series Modeling with Self-Organization Feature Maps
-- Jose C. Principe, Ludong Wang -- University of Florida, USA

Neural Networks for Function Approximation
-- H.N. Mhaskar, L. Khachikyan -- California State University Los Angeles, USA

10:50 am -- 11:30 am
--------------------
THEORY 2: (3-minute oral preview of poster presentations)

Simultaneous Design of Feature Extractor and Pattern Classifier Using the Minimum Classification Error Training Algorithm
-- K.K. Paliwal, M. Bacchiani, Y. Sagisaka -- ATR Interpreting Communications Laboratories, Japan

Discriminative Subspace Method for Minimum Error Pattern Recognition
-- Hideyuki Watanabe, Shigeru Katagiri -- ATR Interpreting Telecommunications Research Labs, Japan

A Unifying View of Stochastic Approximation, Kalman Filter and Backpropagation
-- Enrico Capobianco -- Statistics Department, University of Padua, Italy

Globally-Ordered Topology-Preserving Maps Achieved with a Learning Rule Performing Local Weight Updates Only
-- Marc M. Van Hulle -- Laboratorium voor Neuro- en Psychofysiologie, K.U. Leuven, Belgium

A Self-Organizing System for the Development of Neural Network Parameter Estimators
-- Michael Manry -- The University of Texas at Arlington, USA

Recognition of Oscillatory Signals Using a Neural Network Oscillator
-- Masakazu Matsugu, Chi-Sang Poon -- Imaging Research Center, Canon Inc., Japan

Principal Feature Classification
-- Donald W. Tufts, Qi Li -- University of Rhode Island, USA

A Habituation Based Neural Network Structure for Classifying Spatio-Temporal Patterns
-- Bryan W. Stiles, Joydeep Ghosh -- The University of Texas at Austin, USA

A Numerical Approach for Estimating Higher Order Spectra Using Neural Network Autoregressive Model
-- Naohiro Toda, Shiro Usui -- Toyohashi University of Technology, Japan

Fuzzy Neural Network Approach Based on Dirichlet Tessellations for Nearest Neighbor Classification of Patterns
-- K. Blekas, A. Likas, A. Stafylopatis -- National Technical University of Athens, Greece
The Dynamics of Associative Memory with a Self-Consistent Noise
-- Ioan Opris -- Department of Physics, University of Bucharest, Romania

Recursive Nonlinear Identification using Multiple Model Algorithm
-- Visakan Kadirkamanathan -- University of Sheffield, UK

11:30 am -- 12:30 pm
--------------------
THEORY 2: (Poster Presentations)

12:30 pm -- 2:00 pm LUNCH BREAK
--------------------

2:00 pm -- 2:45 pm
--------------------
PLENARY TALK 2: "Regularization: Theory and New Algorithms"
-- John Moody, Oregon Graduate Institute, USA

2:45 pm -- 4:05 pm
--------------------
SPEECH PROCESSING: (Oral Presentations)

Speaker Verification using Phoneme-Based Neural Tree Networks and Phonetic Weighting Scoring Method
-- Han-Sheng Liou, Richard J. Mammone -- CAIP Center, Rutgers University, USA

Scaling Down: Applying Large Vocabulary Hybrid HMM-MLP Methods to Telephone Recognition of Digits and Natural Numbers
-- Kristine Ma, Nelson Morgan -- International Computer Science Institute, Berkeley, USA

Combining Local PCA and Radial Basis Function Networks for Speaker Normalization
-- Cesare Furlanello, D. Giuliani -- Istituto per la Ricerca Scientifica e Tecnologica, Italy

Discriminatory Measures for Speaker Recognition
-- Kevin R. Farrell -- Dictaphone Corporation, Stratford, CT, USA

4:05 pm -- 4:25 pm Coffee Break
--------------------

4:25 pm -- 4:50 pm
--------------------
THEORY AND SPEECH PROCESSING: (3-minute oral preview of poster presentations)

Mutual Information in a Linear Noisy Network
-- Alessandro Campa, Paolo Del Giudice, Nestor Parga, Jean-Pierre Nadal -- Istituto Superiore di Sanita and INFN Sezione Sanita, Italy

Constrained Pole-Zero Filters as Discrete-Time Operators for System Approximation
-- Andrew D. Back, Ah Chung Tsoi -- University of Queensland, Australia

Prior Knowledge and the Creation of "Virtual" Examples for RBF Networks
-- F. Girosi, N. Chan -- MIT Artificial Intelligence Laboratory, USA

From Artificial Neural Network Inversion to Hidden Markov Model Inversion: Application to Robust Speech Recognition
-- Seokyong Moon, Jenq-Neng Hwang -- University of Washington, USA

Hierarchical Mixtures of Experts Methodology Applied to Continuous Speech Recognition
-- Ying Zhao, Richard Schwartz, Jason Sroka, John Makhoul -- BBN Systems and Technologies, Cambridge, MA, USA

A Speech Recognizer with Low Complexity Based on RNN
-- Claus Kasper, Herbert Reininger, Dietrich Wolf, Harald Wust -- J.W. Goethe-Universitat Frankfurt, Germany

Automatic Speech Segmentation Using Neural Tree Network (NTN)
-- Manish Sharma, Richard Mammone -- CAIP Center, Rutgers University, USA

4:50 pm -- 5:30 pm
--------------------
THEORY AND SPEECH PROCESSING (Poster Presentations)

7:30 pm -- 9:30 pm
--------------------
PANEL DISCUSSION: "Why Neural Networks are not Dead"
-- Moderator: Gary Kuhn (Siemens, Princeton, NJ)
-- Participants: T. Poggio, MIT; S. Grossberg, BU; J. Makhoul, BBN; P. Ienne, EPFL; N. Morgan, ICSI; S. Katagiri, ATR

****** FRIDAY SEPTEMBER 1st, 1995 ******

8:30 am -- 9:15 am
--------------------
PLENARY TALK 3: "Neural Networks for Electronic Eyes"
-- S.Y. Kung, Princeton University, USA

9:15 am -- 9:35 am Coffee Break
--------------------

9:35 am -- 10:55 am
--------------------
IMAGE PROCESSING / COMPUTER VISION (Oral Presentations)

Motion Estimation and Segmentation using a Recurrent Mixture of Experts Architecture
-- Yair Weiss, Edward H. Adelson -- Department of Brain and Cognitive Sciences, MIT, USA
Using perceptron-like algorithms for the analysis and parameterization of object motion
-- M. Mattavelli, E. Amaldi -- Swiss Federal Institute of Technology, Switzerland

A Multiple Scale Neural System for Boundary and Surface Representation of SAR Data
-- Stephen Grossberg, Ennio Mingolla, James Williamson -- Department of Cognitive and Neural Systems, Boston University, USA

A Neural Network Approach to Face/Palm Recognition
-- S.Y. Kung, M. Fang, S.H. Lin -- Princeton University, USA

10:55 am -- 11:30 am
--------------------
IMAGE PROCESSING / COMPUTER VISION (3-minute oral preview of poster presentations)

A Probabilistic DBNN with Applications to Sensor Fusion and Object Recognition
-- Shang-Hung Lin, Long-Ji Lin, S.Y. Kung -- Princeton University, USA

Sample Weighting when Training Self-Organizing Maps for Image Compression
-- Jari Kangas -- Helsinki University of Technology, Finland

Estimating Image Velocity with Convected Activation Profiles: Analysis and Improvements for Special Cases
-- Robert K. Cunningham, Allen M. Waxman -- Machine Intelligence Group, MIT Lincoln Laboratory, USA

Pruning Projection Pursuit Models for Improved Cloud Detection in AVIRIS Imagery
-- Charles M. Bachmann, Eugene E. Clothiaux, John W. Moore, Dong Q. Luong -- Airborne Radar Branch Code 5365, Naval Research Laboratory, USA

A New Learning Scheme for the Recognition of Dynamic Handwritten Characters
-- Fidimahery Andrianasy, Maurice Milgram -- PARC/UPMC, France

Velocity Measurement of Granular Flow with a Hopfield Network
-- Jingeol Lee, Jose C. Principe, Daniel M. Hanes -- University of Florida, USA

Neural Network Based Image Segmentation for Image Interpolation
-- Stefano Marsi, Sergio Carrato -- University of Trieste, Italy

Learning a Distribution-based Face Model for Human Face Detection
-- Kah-Kay Sung, Tomaso Poggio -- MIT Artificial Intelligence Laboratory, USA

Action-Based Neural Networks for Effective Recognition of Images
-- Vassilios N. Alexopoulos, Stefanos D. Kollias -- National Technical University of Athens, Greece

Feature-Locked Loop and its Application to Image Databases
-- Alex Sherstinsky, Rosalind W. Picard -- Media Laboratory, MIT, USA

An Error Diffusion Neural Network for Digital Image Halftoning
-- Barry L. Shoop, Eugene K. Ressler -- United States Military Academy, USA

11:30 am -- 12:30 pm
--------------------
IMAGE PROCESSING / COMPUTER VISION (Poster Presentations)

12:30 pm -- 2:00 pm LUNCH BREAK
--------------------

2:00 pm -- 2:45 pm
--------------------
PLENARY TALK 4: "Learning algorithms for probabilistic trees, chains, and networks"
-- Michael I. Jordan, MIT, USA

2:45 pm -- 4:05 pm
--------------------
OTHER APPLICATIONS (Oral Presentations)

Estimation of the Glucose Metabolism from Dynamic PET-Scans Using Neural Networks
-- Claus Svarer, Soren Holm, Niels Morch, Olaf Paulson and L.K. Hansen -- Department of Neurology, The University of Copenhagen, Denmark

Nonlinear Echo Cancellation Using a Partial Adaptive Time Delay Neural Network
-- A.N. Birkett, R.A. Goubran -- Carleton University, Canada

Customized ECG Beat Classifier Using Mixture of Experts
-- Yu Hen Hu, Surekha Palreddy, Willis J. Tompkins -- University of Wisconsin, USA

Semiautomated Extraction of Decision Relevant Features from a Raw Data Based Artificial Neural Network Demonstrated by the Problem of Saccade Detection in EOG Recordings of Smooth Pursuit Eye Movements
-- Peter K. Tigges, Norbert Kathmann, Rolf R. Engel -- Psychiatric Clinic, University of Munich, Germany
4:05 pm -- 4:25 pm Coffee Break
--------------------

4:25 pm -- 5:05 pm
--------------------
OTHER APPLICATIONS / IMPLEMENTATIONS (3-minute oral preview of poster presentations)

EEG Signal Classification with Different Signal Representations
-- Charles W. Anderson, Saikumar V. Devulapalli, Erik A. Stolc -- Colorado State University, USA

Design and Evaluation of Neural Classifiers - Application to Skin Lesion Classification
-- Mads Hintz-Madsen, Lars Kai Hansen, Jan Larsen, Eric Olesen and Krzysztof T. Drzewiecki -- Technical University of Denmark, Denmark

A Study of the Application of the CMAC Artificial Neural Network to the Problem of Gas Sensor Array Calibration
-- Parag M. Bajaria, Bruce E. Segee -- University of Maine, USA

Classification of Gamma Ray Signals Using Neural Networks
-- N.G. Bourbakis, A. Tacsillo, M. Tacsillo -- AAAI Lab., Binghamton University, USA

Adaptive Preprocessing for On-Line Learning with Adaptive Resonance Theory (ART) Networks
-- Harald Ruda, Magnus Snorasson -- Cognitive and Neural Systems Department, Boston University, USA

Intelligent Network Monitoring
-- Cynthia S. Hood, Chuanyi Ji -- Rensselaer Polytechnic Institute, USA

A Robust Backward Adaptive Quantizer
-- Dominique Martinez, Woodward Yang -- Division of Applied Sciences, Harvard University, USA

A Maximum Partial Likelihood Framework for Channel Equalization by Distribution Learning
-- Tulay Adali, Xiao Liu, Kemal Sonmez -- University of Maryland Baltimore County, USA

Constructive Neural Network Design for the Solution of Two State Classification Problems with Application to Channel Equalization
-- Catherine Z.W. Hassell Sweatman, Gavin J. Gibson, Bernard Mulgrew -- University of Edinburgh, UK

A Parallel Mapping of Backpropagation Algorithm for Mesh Signal Processor
-- Shoab A. Khan, Vijay K. Madisetti -- Georgia Institute of Technology, USA

Digital Neuroimplementations of Visual Motion-Tracking Systems
-- Anna Maria Colla, Luca Trogu, Rodolfo Zunino -- University of Genova, Italy

Level Crossing Time Interval Circuit for Micropower Analog VLSI Auditory Processing
-- Nagendra Kumar, Gert Cauwenberghs, Andreas G. Andreou -- Johns Hopkins University, USA

5:05 pm -- 6:05 pm
--------------------
OTHER APPLICATIONS / IMPLEMENTATIONS (Poster Presentations)

7:00 pm GALA DINNER: Grand Clam Bake
--------------------

****** SATURDAY SEPTEMBER 2nd, 1995 ******

8:30 am -- 9:15 am
--------------------
PLENARY TALK 5: "Structure of Learning Theory"
-- Vladimir Vapnik, AT&T Bell Labs, USA

9:15 am -- 9:35 am Coffee Break
--------------------

9:35 am -- 10:35 am
--------------------
COMMUNICATIONS (Oral Presentations)

Optimum Lag and Subset Selection for Radial Basis Function Equaliser
-- Eng-Siong Chng, Bernard Mulgrew, Sheng Chen, Gavin Gibson -- The University of Edinburgh, UK

Channel Equalization by Finite Mixtures and EM Algorithm
-- Lei Xu -- The Chinese University of Hong Kong, Hong Kong

Comparison of a Neural Network based Receiver to the Optimal and Multistage CDMA Multiuser Detectors
-- George Kechriotis, Elias S. Manolakos -- Northeastern University, USA
10:35 am -- 11:55 am
--------------------
THEORY 3 (Oral Presentations)

Empirical Generalization Assessment of Neural Network Models
-- Jan Larsen, Lars Kai Hansen -- Technical University of Denmark, Denmark

Active Learning the Weights of a RBF Network
-- Kah-Kay Sung, Partha Niyogi -- MIT Artificial Intelligence Laboratory, USA

A Novel Approach to Pattern Recognition Based on Discriminative Metric Design
-- Hideyuki Watanabe, Tsuyoshi Yamaguchi, Shigeru Katagiri -- ATR Interpreting Telecommunications Research Laboratories, Japan

A Maximum Entropy Approach for Optimal Statistical Classification
-- David Miller, Ajit Rao, Kenneth Rose, Allen Gersho -- University of California Santa Barbara, USA

******************** END OF ADVANCE TECHNICAL PROGRAM *****************

NNSP95 REGISTRATION FORM
------------------------------
1995 IEEE Workshop on Neural Networks for Signal Processing
August 31, 1995 - September 2, 1995
Cambridge, Massachusetts USA

Please complete this form (type or print)

Name ___________________________________________________
         Last                First               Middle

Firm or University ______________________________________

Mailing Address _________________________________________
__________________________________________________________
__________________________________________________________
Country             Phone              FAX
__________________________________________________________
email

Fee payment must be made by MONEY ORDER or PERSONAL CHECK. All amounts are given in US dollar figures. Make fee payable to "IEEE NNSP95 - c/o Judy A. Franklin". Mail it, together with this completed Registration Form, to:

Judy A. Franklin
GTE Laboratories
40 Sylvan Road
Waltham, MA 02254 USA

For further information, Dr. Franklin can be reached at
Tel.: 617-466-4246
FAX: 617-890-9320
e-mail: jfranklin at gte.com

Advance registration before: June 2. DO NOT SEND CASH.

REGISTRATION FEE*

Date                               IEEE Member    Non-member
________________________________________________________
Before June 2                      U.S. $295      U.S. $345
Late Registration (After June 2)   U.S. $345      U.S. $395

* Registration fee includes Workshop Proceedings, breakfast and all coffee breaks, and the Grand Clam Bake on 9/1/95.
* On-site registration is possible, at *late registration* fees (see above). Payment of late registration must be in US Dollar amounts, by Money Order or Check (preferably drawn on a US Bank account).

***************************************************************************

HOTEL ACCOMMODATION

NNSP95 will be held at the Royal Sonesta Hotel, Cambridge, MA. The hotel is centrally located overlooking the Charles River, and offers very nice views of Boston and Cambridge. Hotel accommodations are the responsibility of each participant. The Royal Sonesta Hotel has reserved a block of rooms for this event. The special room rates for NNSP95 participants are:

Single   U.S. $130.00 per night+
Double   U.S. $130.00 per night+

+ Please be aware that these prices do not include Massachusetts State tax (5.7%) and a city tax (4%).

There are a number of important points to be aware of with regard to hotel reservations for the Workshop:

* All reservations will be held until 6pm on the day of arrival, unless guaranteed for late arrival. Guaranteed reservations will be held for the night of arrival only. If you fail to take up your reservation, you will be charged for one night's room, with tax.
* These special rates apply between August 29th and September 2nd, inclusive.
* After July 29 there is no guarantee that rooms are available, so we strongly recommend making reservations early.
* You must quote your participation in the IEEE Workshop on Neural Networks for Signal Processing when booking the room, in order to qualify for this special rate.

To make reservations, call the hotel directly at: 617-491-3600

The address of the hotel is:
Royal Sonesta Hotel
5 Cambridge Parkway
Cambridge, MA 02142
phone: 617-491-3600
fax: 617-661-5956

****************************************************************************

TRAVEL INFORMATION

Possible ways to get to the Royal Sonesta are:

* From the North via Route 93 South: Take 93 South to Exit 26, "Cambridge/Somerville & Storrow Drive". Follow directions below, "From the South".

* From the Airport: Take the main airport roadway (one way) and follow the signs for "Sumner Tunnel, Boston/Rt. 93 North." Go through the tunnel and take an immediate right onto 93 North. Follow directions below, "From the South".

* From the South via Route 93 North: Take 93 North through Boston to Exit 26, "Cambridge/Somerville & Storrow Drive." Go down and around the exit ramp and stay to the far right, following signs for Cambridge. DO NOT GET ON STORROW DRIVE. At the end of the ramp, at a set of lights, take a left onto Nashua Street (you will pass beneath the bridge on which there is a sign for the Museum of Science) and take an immediate right on Rt. 28 North/O'Brien Highway. Go past the Museum of Science and at the first set of lights take a left on Edwin Land Blvd. The Royal Sonesta is on the left, across the street from the Cambridgeside Galleria Mall.

* From the West via Mass. Turnpike/Route 90 East: Take the Mass. Turnpike to Exit 18, "Allston/Cambridge" (left-sided exit). Go through the toll booth and bear right, following signs for Cambridge and Somerville. Proceed through two sets of lights and go straight over the River Street Bridge, crossing over the Charles River, and take an immediate right on Memorial Drive East (the Mobil Gas station will be in front of you). At the first split stay in the left lane and proceed over the bridge ("Cars Only"). At the second split, shortly after the Hyatt Regency Hotel on the left, stay left and go under the overpass ("Cars Only"). Move immediately to the right lane and bear to the right at the last split. Memorial Drive now turns into Edwin Land Blvd. Proceed under the Longfellow Bridge and the Royal Sonesta Hotel will be on your right at the second set of lights.

* From the West via Route 2 East: Take Rt. 2 East and follow signs for "Watertown/Boston & Rt. 3." You will pass the Alewife train station on the right. At the rotary stay to the left and continue on Route 3; merge onto Memorial Drive East (the Charles River will be on the right). Follow directions according to "From the West via Mass Pike" (above).

Parking: The Royal Sonesta has two parking garages to accommodate guests and visitors. If the internal garage is full, the parking attendant in the booth will direct you across the street to the Cambridgeside Galleria Mall, in which the hotel has a small parking section.
From Andreas.Scherer at FernUni-Hagen.de Fri Jun 2 05:49:57 1995
From: Andreas.Scherer at FernUni-Hagen.de (Andreas Scherer)
Date: Fri, 2 Jun 1995 11:49:57 +0200
Subject: CFP: Session on Adaptive Computing in Engineering
Message-ID: <9506020938.AA03852@galaxy.fernuni-hagen.de>

CALL FOR PAPERS

Computer Applications Symposium
Session on Adaptive Computing in Engineering
Houston, TX
January 28 - February 2, 1996

The American Society of Mechanical Engineers (ASME) Petroleum Division is sponsoring the Energy-sources Technology Conference & Exhibition (ETCE), to be held January 28 - February 2, 1996, at the George R. Brown Convention Center in Houston, Texas. A part of this conference is the Computer Applications Symposium, which focuses on the uses of computers in engineering-related applications. Attendees will be from both academia and industry. This coming year, one session will be devoted to applications of Adaptive Computing Techniques in Engineering. Suggested topics include the following:

Fuzzy Logic
Genetic Algorithms
Neural Networks
Machine Learning
Hybrid Systems

All presented papers will be published in the symposium proceedings. Please contact the session organizers if you have any questions. We look forward to your contributions.

Authors are requested to send a letter of intent, an information sheet that includes the full names of the author(s), phone number and FAX or E-mail if applicable, and an abstract (up to 200 words) by May 30, 1995. Full papers (10 pages) are due by July 15, 1995.

Important Dates:
July 15, 1995:           Deadline for submission of full paper (10 pages)
August 1, 1995:          Notification of acceptance
Sept. 10, 1995:          Final paper deadline
Jan. 28 - Feb. 2, 1996:  Conference

Dr. John R. Sullins
Dept. of Computer & Information Sci.
Youngstown State University
410 Wick Avenue
Youngstown, OH, USA
john at cis.ysu.edu
Tel.: (216) 742-1806
FAX.: (216) 742-1998

Dr. Andreas Scherer
Applied Computer Science I
University of Hagen
Feithstr. 140
58084 Hagen, Germany
andreas.scherer at fernuni-hagen.de
Tel.: +49/2331/987-2972
FAX.: +49/2331/987-314

From massone at mimosa.eecs.nwu.edu Fri Jun 2 12:38:55 1995
From: massone at mimosa.eecs.nwu.edu (Lina Massone)
Date: Fri, 2 Jun 1995 11:38:55 -0500
Subject: paper available on dynamic pattern formation
Message-ID: <199506021638.LAA22575@mimosa.eecs.nwu.edu>

The following paper is now available as a technical report of the Neural Information Processing Laboratory of Northwestern University. The paper can be retrieved through the web by connecting to: http://www.eecs.nwu.edu/pub/nipl or by anonymous ftp to: eecs.nwu.edu, cd pub/nipl

The role of initial conditions in dynamic pattern formation
Lina L.E. Massone and Tony Khoshaba
Tech. Rep., Neural Information Processing Laboratory, Northwestern University
*Submitted to NIPS 95*

In this paper we present the results of an empirical study of the properties of recurrent backpropagation (Pineda, 1988), with special emphasis on the characteristics of the resulting weight distributions and their dependency on the initial conditions and on the classes of dynamic tasks that the network learns. The results of this study indicate that the weights of the trained network exhibit properties that are dictated by both the desired equilibria and the initial values of the weights, but not by the initial state of the network. In particular, we were able to quantify the dependency of the final weights on the initial weights in terms of monotonic, practically linear relationships between the standard deviations of the two distributions. We discuss the implications of these results for dynamical systems in general and for the study of brain function.
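For readers who would like a feel for this kind of measurement, the short numpy sketch below trains a small fixed-point network with a Pineda-style recurrent backpropagation update from several initial weight spreads and prints the standard deviations of the initial and final weight distributions. It is only a schematic illustration: the network size, task, and training settings are assumptions made for the demo, not details taken from the report.

  import numpy as np

  rng = np.random.default_rng(0)

  def sigmoid(u):
      return 1.0 / (1.0 + np.exp(-u))

  def fixed_point(W, I, steps=200):
      # Damped relaxation of x = sigmoid(W x + I) to its equilibrium.
      x = np.zeros(len(I))
      for _ in range(steps):
          x = 0.9 * x + 0.1 * sigmoid(W @ x + I)
      return x

  def train_rbp(W0, I, target, out, lr=0.2, epochs=500):
      # Recurrent backpropagation for one input pattern: settle to a
      # fixed point, solve the adjoint linear system, update the weights.
      W = W0.copy()
      n = len(I)
      for _ in range(epochs):
          x = fixed_point(W, I)
          s = sigmoid(W @ x + I)
          d = s * (1.0 - s)                 # sigma'(u) at the fixed point
          e = np.zeros(n)
          e[out] = target - x[out]          # error on the output units
          z = np.linalg.solve((np.eye(n) - np.diag(d) @ W).T, e)
          W += lr * np.outer(z * d, x)      # gradient step on the error
      return W

  n = 8
  out = np.arange(4, 8)                     # last four units are outputs
  I = rng.normal(size=n)
  target = rng.uniform(0.2, 0.8, size=len(out))   # a desired equilibrium
  for scale in (0.1, 0.3, 1.0):             # vary the initial weight spread
      W0 = rng.normal(scale=scale, size=(n, n))
      W = train_rbp(W0, I, target, out)
      print("std(init) = %.2f   std(final) = %.2f" % (W0.std(), W.std()))

Plotting std(final) against std(init) over many such runs is one simple way to probe the kind of monotonic relationship the abstract describes.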
From jmgeever at yeats.ucd.ie Sat Jun 3 12:46:02 1995
From: jmgeever at yeats.ucd.ie (John McGeever)
Date: Sat, 3 Jun 1995 17:46:02 +0100
Subject: NCAF/SIENA Conf. At University College Dublin.
Message-ID:

***************************************************************
Neural Computing Applications Forum (NCAF)
and
Simulation Initiative for European Neural Applications (SIENA)
***************************************************************

Dear Colleagues,

A one-day joint NCAF/SIENA Conference will be held at University College Dublin on Tuesday 20 June 1995.

Programme

8.30 am   Registration / Coffee
9.00      Welcome to NCAF and SIENA - Tom Harris and Tony Morgan
9.30      Tutorial: An Introduction to Neural Networks - Tom Harris, Brunel University
10.30     Coffee
11.00     Keynote Speaker: Neural Networks: The Real Applications - Andy Wright, British Aerospace
12.00 pm  Using Neural Networks to Learn Hand-Eye Co-ordination - Marggie Jones, University College Galway
12.30     Lunch
14.00     Countermatch: A Neural Network Approach to Signature Verification - Graham Hesketh, AEA Technology
14.30     Implementing Neural Networks with Semiconductor Optical Devices - Paul Horan, Hitachi Dublin Laboratory
15.00     Tea
15.30     Artificial Neural Networks for Articulatory Based Speech Synthesis - John Keating, St. Patrick's College
16.00     The IBM ZISC Chip - Guy Paillet, Neuroptics Consulting
16.30     Open Forum
17.00     Close of Session

Notes:
Accommodation on-campus at 18 pounds sterling is available for the night of the 19 June. (Please indicate male/female.) Breakfast and Lunch are not included in the Registration fee. There are three food outlets on campus. Exhibitors can demonstrate their products free of charge having gained entry to the meeting - please contact NCAF for further details.

REGISTRATION: The fee is 50 pounds sterling, 30 for students. Send Name, correspondence address, organisation, and other contact details (phone/fax/email) with a crossed sterling cheque (registration fee and accommodation fee if applicable) made payable to NCAF to:

Ila Patel, Dublin Symposium Bookings
Neural Computing Applications Forum
Box 73, EGHAM, Surrey, TW20 OYZ UK.
Tel (+44/0) 1784 477271
Fax (+44/0) 1784 472879 - Mark fax 'NCAF'
email NCAFsec at brunel.ac.uk

Advance registration is very much preferred. Informal enquiries may be made to jmgeever at nova.ucd.ie

From esann at dice.ucl.ac.be Sun Jun 4 10:45:06 1995
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Sun, 4 Jun 1995 16:45:06 +0200
Subject: Neural Processing Letters Vol.2 No.1
Message-ID: <199506041445.QAA13064@ns1.dice.ucl.ac.be>

Neural Processing Letters: last two issues
------------------------------------------

You will find enclosed the table of contents of the March and May 1995 issues of "Neural Processing Letters" (Vol.2 No.2 & 3). The abstracts of these papers are contained on the below mentioned FTP and WWW servers. We also inform you that subscription to the journal is now possible by credit card.
All necessary information is contained on the following servers:
- FTP server: ftp.dice.ucl.ac.be, directory: /pub/neural-nets/NPL
- WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html

If you have no access to these servers, or for any other information (subscriptions, instructions for authors, free sample copies, ...), please don't hesitate to contact the publisher directly:

D facto publications
45 rue Masui
B-1210 Brussels
Belgium
Phone: + 32 2 245 43 63
Fax: + 32 2 245 46 94

Neural Processing Letters, Vol.2, No.2, March 1995
__________________________________________________

- Asymptotic performances of a constructive algorithm
  Florence d'Alche-Buc, Jean-Pierre Nadal
- A multilayer incremental neural network architecture for classification
  Tamer Olmez, Ertugrul Yazgan, Okan K. Ersoy
- Fine-tuning Cascade-Correlation trained feedforward network with backpropagation
  Mikko Lehtokangas, Jukka Saarinen, Kimmo Kaski
- Determining initial weights of feedforward neural networks based on least squares method
  Y.F. Yam, T.W.S. Chow
- Post-processing of coded images by neural network cancellation of the unmasked noise
  M. Mattavelli, O. Bruyndonckx, S. Comes, B. Macq
- Neural learning of chaotic dynamics
  Gustavo Deco, Bernd Schurmann
- Improving the Counterpropagation network performances
  Alessandra Chiuderi
- Book review: "Réseaux neuronaux et traitement du signal" by Jeanny Hérault and Christian Jutten. In French.
  Patrick Thiran

Neural Processing Letters, Vol.2, No.3, May 1995
________________________________________________

- Weighted Radial Basis Functions for improved pattern recognition and signal processing
  Leonardo M. Reyneri
- Analog weight adaptation hardware
  A.J. Annema, H. Wallinga
- Adaptative time constants improve the prediction capability of recurrent neural networks
  Jean-Philippe Draye, Davor Pavisic, Guy Cheron, Gaetan Libert
- A general exploratory projection pursuit network
  Colin Fyfe
- Improving the approximation and convergence capabilities of projection pursuit learning
  Tin-Yau Kwok, Dit-Yan Yeung
- Invariance in radial basis function neural networks in human face classification
  A. Jonathan Howell, Hilary Buxton

_____________________________
D facto publications - conference services
45 rue Masui
1210 Brussels
Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
_____________________________

From athenasc at world.std.com Sat Jun 3 18:57:39 1995
From: athenasc at world.std.com (athena scientific)
Date: Sat, 3 Jun 1995 18:57:39 -0400
Subject: Textbook Series on Optimization and Neural Computation
Message-ID: <199506032257.AA11738@world.std.com>

Athena Scientific is pleased to announce a series of M.I.T. graduate course textbooks that are distinguished by scientific rigor, educational value, and production quality.

********************************************************************
1) Dynamic Programming and Optimal Control (2 Vols.), by Dimitri P. Bertsekas (June 1995).
2) Nonlinear Programming, by Dimitri P. Bertsekas (due Fall 1995).
3) Linear Programming, by Dimitris Bertsimas and John N. Tsitsiklis (due Spring 1996).
4) Neuro-Dynamic Programming, by Dimitri P. Bertsekas and John N. Tsitsiklis (Spring 1996).
********************************************************************

The first three are textbooks currently used in first year graduate courses at the Department of Electrical Engineering and Computer Science and the Operations Research Center of M.I.T. The last book is used in a graduate course on neural computation.
The books are well suited for instruction and have been refined in the classroom over many years. For further information contact Athena Scientific, P.O. Box 391, Belmont, MA 02178-9998, U.S.A., Tel: (617) 489-3097, FAX: (617) 489-3097, email: athenasc at world.std.com, or the authors: bertsekas at lids.mit.edu, dbertsim at math.mit.edu, jnt at mit.edu

To order your copy of Dynamic Programming and Optimal Control, see the ordering information at the end of this announcement.

********************************************************************

DYNAMIC PROGRAMMING AND OPTIMAL CONTROL, by Dimitri P. Bertsekas

For FTP access to the detailed table of contents, preface, and the 1st chapter: FTP to LIDS.MIT.EDU with username ANONYMOUS, enter password as directed, and type cd /pub/bertsekas/DP_BOOK

BRIEF DESCRIPTION

This two-volume textbook is a greatly expanded and pedagogically improved version of the author's "Dynamic Programming: Deterministic and Stochastic Models" (Prentice-Hall, 1987). It treats simultaneously stochastic optimal control problems, popular in modern control theory, and Markovian decision problems, popular in operations research. New features include:
-----------------------------------------------------------------------
** neurodynamic programming/reinforcement learning techniques, a recent breakthrough in the practical application of dynamic programming to complex problems
** deterministic discrete- and continuous-time optimal control problems, including the Pontryagin Minimum Principle
** an extensive treatment of deterministic and stochastic shortest path problems
-----------------------------------------------------------------------

The first volume is oriented towards modeling, conceptualization, and finite-horizon problems, but also includes a substantive introduction to infinite horizon problems that is suitable for classroom use. The second volume is oriented towards mathematical analysis and computation, and treats infinite horizon problems extensively. The text contains many illustrations, worked-out examples, and exercises. The author is Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, and has been teaching the material of this book in introductory graduate courses for over twenty years.

CONTENTS OF VOLUME I (400 pages)
1. The Dynamic Programming Algorithm
2. Deterministic Systems and the Shortest Path Problem
3. Deterministic Continuous-Time Optimal Control
4. Problems with Perfect State Information
5. Problems with Imperfect State Information
6. Suboptimal and Adaptive Control
7. Introduction to Infinite Horizon Problems
Appendixes: Math. Review, Probability Review, Least Squares Estimation and the Kalman Filter

CONTENTS OF VOLUME II (304 pages, available August 1995)
1. Infinite Horizon - Discounted Problems
2. Stochastic Shortest Path Problems
3. Undiscounted Problems
4. Average Cost per Stage Problems
5. Continuous-Time Problems

********************************************************************
Free 30-Day Examination Copy to Prospective Instructors
Free Deskcopy and Solutions Manual Upon Classroom Adoption
********************************************************************

ORDER W/ VISA/MASTERCARD BY FAX OR PHONE: (617) 489-3097
____________________________________________________________________
ORDER BY MAIL: Athena Scientific, P.O.Box 391, Belmont, MA 02178-9998, U.S.A.
email: athenasc at world.std.com
o Dynamic Programming and Optimal Control, Vol. I: $64.00, ISBN: 1-886529-12-4
o Dynamic Programming and Optimal Control, Vol. II: $55.50, ISBN: 1-886529-13-2
o Dynamic Programming and Optimal Control, Vols. I and II: $99.50, ISBN: 1-886529-11-6

o MY CHECK OR MONEY ORDER IS ENCLOSED: I am enclosing my check or money order, payable to Athena Scientific. I have also included $3.00 for shipping. I understand that books can be returned for a full refund if I am not completely satisfied.

o CHARGE MY:  o VISA  o MASTERCARD

Account #__________________________________
Expiration Date____________________________
Signature__________________________________
____________________________________________________________________

From andre at icmsc.sc.usp.br Mon Jun 5 16:26:36 1995
From: andre at icmsc.sc.usp.br (Andre Carlos P. de Leon F. de Carvalho)
Date: Mon, 5 Jun 95 15:26:36 EST
Subject: II Brazilian Symposium on Neural Networks
Message-ID: <9506051826.AA06848@xavante>

II Brazilian Symposium on Neural Networks
*****************************************
October 18-20, 1995

First call for papers

Sponsored by the Brazilian Computer Science Society (SBC)

You are cordially invited to attend the II Brazilian Symposium on Neural Networks (SBRN), which will be held at the University of Sao Paulo, campus of Sao Carlos, Sao Paulo. Sao Carlos, with its population of 160,000, is a pleasant university city known for its climate and high-technology companies.

Scientific papers will be analyzed by the program committee. This analysis will take into account originality, significance to the area, and clarity. Accepted papers will be fully published in the conference proceedings.

The major topics of interest include, but are not limited to:

Applications
Architecture and Topology
Biological Perspectives
Cognitive Science
Dynamic Systems
Fuzzy Logic
Genetic Algorithms
Hardware Implementation
Hybrid Systems
Learning Models
Optimisation
Parallel and Distributed Implementations
Pattern Recognition
Robotics and Control
Signal Processing
Theoretical Models

Program Committee:
- Andre C. P. L. F. de Carvalho - ICMSC/USP
- Dante Barone - II/UFRGS (Chairman)
- Edson C B C Filho - DI/UFPE
- Fernando Gomide - FEE/UNICAMP
- Geraldo Mateus - DCC/UFMG
- Luciano da Fontoura Costa - IFSC/USP
- Rafael Linden - IBCCF/UFRJ
- Paulo Martins Engel - II/UFRGS

Organising Committee:
- Aluizio Araujo - EESC/USP
- Andre C. P. L. F. de Carvalho - ICMSC/USP (Chairman)
- Dante Barone - II/UFRGS
- Edson C B C Filho - DI/UFPE
- Germano Vasconcelos - DI/UFPE
- Glauco Caurin - EESC/USP
- Luciano da Fontoura Costa - IFSC/USP
- Roseli A. Francelin Romero - ICMSC/USP
- Teresa B. Ludermir - DI/UFPE

SUBMISSION PROCEDURE:

The Symposium seeks contributions to the state of the art and future perspectives of Neural Networks research. A submitted paper must be in Portuguese, Spanish or English. The submissions must include the original paper and three more copies and must follow the format below (no E-mail or FAX submissions). The paper must be printed using a laser printer, in one-column format, not numbered, on 8.5 X 11.0 inch (21.7 X 28.0 cm) paper. It must not exceed six pages, including all figures and diagrams. The font size should be 10 pts, such as TIMES-ROMAN font or its equivalent, with the following margins: right and left 2.5 cm, top 3.5 cm, and bottom 2.0 cm. The first page should contain the paper's title, the complete author(s) name(s), affiliation(s), and mailing address(es), followed by a short (150 words) abstract, a list of descriptive key words, and an accompanying letter.
In the accompanying letter, the following information must be included:
* Manuscript title
* First author's name, mailing address and E-mail
* Technical area

SUBMISSION ADDRESS:

Four copies (one original and three copies) should be submitted to:

Andre C. P. L. F. de Carvalho - SBRN 95
Departamento de Ciencias de Computacao e Estatistica
ICMSC - Universidade de Sao Paulo
Caixa Postal 668
CEP 13560.070 Sao Carlos, SP
Brazil
Phone: +55 162 726222
FAX: +55 162 749150
E-mail: IISBRN at icmsc.sc.usp.br

DEADLINES:
July 30, 1995 (mailing date)   Deadline for paper submission
August 30, 1995                Notification to authors
October 18-20, 1995            II SBRN

MORE INFORMATION:
* Up-to-the-minute information about the symposium is available on the World Wide Web (WWW) at http://www.icmsc.sc.usp.br
* Questions can be sent by E-mail to IISBRN at icmsc.sc.usp.br

Hope to see you in Sao Carlos!

From jagota at next1.msci.memst.edu Mon Jun 5 20:53:32 1995
From: jagota at next1.msci.memst.edu (Arun Jagota)
Date: Mon, 5 Jun 1995 19:53:32 -0500
Subject: Optimization special issue
Message-ID: <199506060053.AA18315@next1>

CALL FOR PAPERS (please post)

JOURNAL OF ARTIFICIAL NEURAL NETWORKS
SPECIAL ISSUE on NEURAL NETWORKS FOR OPTIMIZATION

Submission Deadline: 7 August 1995 **

JANN editor-in-chief: Omid M. Omidvar
Special issue editor: Arun Jagota
JANN is published by ABLEX

The aim of this special issue is to publish high-quality papers presenting original research in the evolving topic of neural network algorithms for optimization problems. Papers may be theoretical or applied in nature. Applications of neural networks to optimization, rather than of optimization to (learning in) neural networks, are preferred. Papers utilizing relevant ideas from statistical physics, mathematical programming, or combinatorial algorithms are welcome. Papers describing significant applications, comparisons with conventional algorithms, or comparisons amongst neural network algorithms are also welcome.

EXAMPLE PAPER TITLES:
A Mean Field Annealing Network for the Traveling Salesman Problem
A New Energy Function for the Graph Bipartitioning Problem
Comparisons of the Hopfield and Elastic Nets
Solving N-queens: Neural Networks Versus Randomized Greedy Methods

FORMATTING INSTRUCTIONS:

Manuscripts may be FULL LENGTH or NOTE. Full length manuscripts should range between eight (8) and sixteen (16) single column, SINGLE spaced, 8.5 X 11 pages, in 11 pt, including figures, tables, and references. Figures and tables should be formatted IN text, as they would appear in print. Do not have a separate title page; rather, the first page of text should begin with the paper title, authors, affiliations, and an abstract, limited to two hundred and fifty (250) words. Notes should range between two (2) and seven (7) pages. The abstract should be limited to fifty (50) words. Other than these, notes should be formatted the same way as full length manuscripts.

SUBMISSION GUIDELINES:

** Apologies for the short notice. One reason for the submission deadline of 7 August 1995 is that the journal editor needs the final accepted papers with him before the end of 1995. If all goes well, the special issue will be out soon thereafter. If you can't make the August 7 deadline, please contact me by email as soon as possible.

ELECTRONIC (much preferred): Send A SINGLE POSTSCRIPT file of the manuscript to the email address given below. Do not follow up with hardcopy. All materials (figures etc.) should be absorbed into this single file.

HARDCOPY: Send THREE copies of the manuscript to the surface mail address given below.
Whether submitting electronic or hardcopy, send a cover letter by e-mail, containing the paper title, whether it is FULL LENGTH or NOTE, the list of authors, and the corresponding author's address, especially e-mail and phone. (Correspondence with authors will be handled by email as much as possible.) If submitting hardcopy, send the email letter in advance; no hardcopy letter is needed. If submitting electronic, do not send the cover letter in the same email message as the postscript file.

SUBMIT TO:
Arun Jagota
Department of Mathematical Sciences
University of Memphis
Memphis TN 38152 USA
Phone: 901 678-3071
E-mail: jagota at next1.msci.memst.edu

FINAL SUBMISSION: Authors of accepted papers might be asked to format final versions in camera-ready format. The JANN formatting instructions would be provided then. A JANN style file, for LaTeX users, will also be made available. LaTeX style files save considerable time in reformatting.

From kim.plunkett at psy.ox.ac.uk Tue Jun 6 10:48:54 1995
From: kim.plunkett at psy.ox.ac.uk (Kim Plunkett)
Date: Tue, 6 Jun 1995 14:48:54 +0000
Subject: Postdoctoral position
Message-ID: <9506061448.AA53527@mac17.psych.ox.ac.uk>

A Postdoctoral Research position is available to work on an ESRC funded project entitled "Learning Inflectional Morphology". The appointment is for 3 years, with a starting salary of up to GBP 17,813 per annum. The position is to start on 1st October 1995. The successful candidate will have a Ph.D. degree (or equivalent) and a good working knowledge of the application of neural network models to language processing. Knowledge of the language acquisition literature and experience in experimental psycholinguistics are desirable. Further information can be obtained from Dr. Kim Plunkett, Department of Experimental Psychology, South Parks Road, Oxford, OX1 3UD, UK, Tel: 01865-271398, email: plunkett at psy.ox.ac.uk.

From moon at pierce.ee.washington.edu Tue Jun 6 13:16:51 1995
From: moon at pierce.ee.washington.edu (Seokyong Moon)
Date: Tue, 6 Jun 95 10:16:51 PDT
Subject: Paper available on hidden Markov model inversion
Message-ID: <9506061716.AA14623@pierce.ee.washington.edu.Jaimie>

FTP-host: pierce.ee.washington.edu
FTP-filename: /pub/papers/hmm-inversion.ps.Z

This paper is 30 pages long.

Robust Speech Recognition using Gradient-Based Inversion and Baum-Welch Inversion of Hidden Markov Models

Seokyong Moon, Jenq-Neng Hwang
Information Processing Laboratory
Department of Electrical Engineering, FT-10
University of Washington, Seattle, WA 98195
E-mail: moon at pierce.ee.washington.edu, hwang at ee.washington.edu

The gradient-based hidden Markov model (HMM) inversion algorithm is studied and applied to robust speech recognition tasks under general types of mismatched conditions. It stems from the gradient-based inversion algorithm of an artificial neural network (ANN), by viewing an HMM as a special type of ANN. HMM inversion has a conceptual duality to HMM training, just as ANN inversion does to ANN training. The forward training of an HMM, based on either Baum-Welch reestimation or the gradient method, finds the model parameters that optimize some criterion, e.g., maximum likelihood (ML), maximum mutual information (MMI), or mean squared error (MSE), with the speech inputs given. On the other hand, the inversion of an HMM finds speech inputs that optimize some criterion with the model parameters given. The performance of the proposed gradient-based HMM inversion for noisy speech recognition, under additive noise corruption and microphone mismatch conditions, is compared with the robust Baum-Welch HMM inversion technique, along with another noisy speech recognition technique, the robust minimax (MINIMAX) classification technique.
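As a toy illustration of the duality described above, the sketch below keeps the parameters of a small Gaussian-emission HMM fixed and performs gradient ascent on the observation sequence itself. Everything in it is a simplifying assumption made for the demo (a hand-picked two-state model, 1-D observations, and a finite-difference gradient of the forward-algorithm likelihood); it is not the paper's algorithm, which relies on analytic gradients and Baum-Welch style updates.

  import numpy as np

  # Toy HMM with 1-D Gaussian emissions: fixed parameters (the "given model").
  A = np.array([[0.9, 0.1], [0.1, 0.9]])   # state transition probabilities
  pi = np.array([0.5, 0.5])                # initial state probabilities
  mu = np.array([-1.0, 2.0])               # per-state emission means
  var = np.array([0.5, 0.5])               # per-state emission variances

  def log_likelihood(x):
      # Standard scaled forward algorithm for observation sequence x.
      B = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
      alpha = pi * B[0]
      ll = 0.0
      for t in range(len(x)):
          if t > 0:
              alpha = (alpha @ A) * B[t]
          s = alpha.sum()
          ll += np.log(s)
          alpha /= s                       # rescale to avoid underflow
      return ll

  def invert(x0, lr=0.1, steps=200, h=1e-4):
      # HMM "inversion": hold the model fixed and do gradient ascent on the
      # observations, here with a central finite-difference gradient.
      x = x0.copy()
      for _ in range(steps):
          g = np.zeros_like(x)
          for i in range(len(x)):
              xp, xm = x.copy(), x.copy()
              xp[i] += h
              xm[i] -= h
              g[i] = (log_likelihood(xp) - log_likelihood(xm)) / (2 * h)
          x += lr * g
      return x

  noisy = np.array([-0.8, 0.1, -1.4, 2.5, 1.2, 2.2])   # made-up corrupted data
  print("log-likelihood before:", log_likelihood(noisy))
  restored = invert(noisy)
  print("log-likelihood after: ", log_likelihood(restored))

Training adjusts the parameters with the inputs held fixed; inversion, as above, adjusts the inputs with the parameters held fixed, pulling the observations toward sequences the model considers likely.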
From listerrj at helios.aston.ac.uk Wed Jun 7 07:03:59 1995
From: listerrj at helios.aston.ac.uk (Richard Lister)
Date: Wed, 07 Jun 1995 12:03:59 +0100
Subject: PhD Research Studentships
Message-ID: <19990.9506071104@sun.aston.ac.uk>

Neural Computing Research Group
-------------------------------
Dept of Computer Science and Applied Mathematics
Aston University, Birmingham, UK

PHD RESEARCH STUDENTSHIPS
-------------------------

*** Full details at http://neural-server.aston.ac.uk/ ***

The Neural Computing Research Group has attracted substantial levels of industrial and research council funding and will therefore be able to offer a number of full-time PhD studentships to commence in October 1995. Currently we are seeking candidates for four studentships. These will pay full fees at the home rates and hence are suitable for UK and European Union citizens only. The studentships also cover living expenses at the same rate as a research council studentship.

Feature Extraction Techniques for Nonstationary Financial Market Time Series
----------------------------------------------------------------------------

The project will examine conventional and neural network techniques for the extraction of features to elucidate hidden structure in generally multivariate financial time series. The problem domain is made more complicated by the inherent nonstationarity of the time series. Techniques based on dynamical systems theory and statistical pattern analysis will be developed and applied to real-world data. The ideal candidate should be mathematically and computationally competent and have a general interest in the field of financial mathematics, although no previous experience in this area is required. The project is in collaboration with a financial company, Union CAL Ltd, London.

Validation and Verification of Neural Network Systems
-----------------------------------------------------
(Two Studentships)

One of the major factors limiting the widespread exploitation of neural networks has been the perceived difficulty of ensuring that a trained network will continue to perform satisfactorily when installed in an operational system. In the case of safety-critical systems it is clearly vital that a high degree of overall system integrity be achieved. However, almost all potential applications of neural networks entail some level of undesirable consequence if the network generates incorrect or inaccurate predictions. Currently there is no general framework for assessing the robustness of neural network solutions or of systems containing embedded neural networks.

These two studentships will be closely associated with a substantial project funded by the Engineering and Physical Sciences Research Council to address the basic issues involved in validation of systems containing neural networks. The studentships are funded by two industrial companies: British Aerospace and Lloyds Register of Shipping, and will involve developing case studies to demonstrate the applicability of validation and verification techniques to real-world applications involving neural networks.
Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks or another relevant field.

Neural networks applied to ignition timing and automatic calibration
--------------------------------------------------------------------

This project involves a collaborative research programme between the Neural Computing Research Group and SAGEM in the general area of applying neural networks to the ignition timing and calibration of gasoline internal combustion engines. The ideal student would be computationally literate (preferably in C/C++) on UNIX and PC systems and have good mathematical and/or engineering abilities. An awareness of the importance of applying advanced technology and implementing ideas as engineering products is essential. In addition the ideal candidate would have some knowledge and interest in internal combustion engines and also relevant sensor technology.

Neural Computing Research Group
-------------------------------

The Neural Computing Research Group currently comprises the following academic staff:

Chris Bishop       Professor
David Lowe         Professor
David Bounds       Professor
Geoffrey Hinton    Visiting Professor
Richard Rohwer     Lecturer
Alan Harget        Lecturer
Ian Nabney         Lecturer
David Saad         Lecturer (arrives 1 August)
two further posts (currently being appointed)

together with the following Research Fellows:

Chris Williams
Shane Murnion
Alan McLachlan
Huaihu Zhu
four further posts (currently being advertised)

a full-time software support assistant, and eleven postgraduate research students.

How to Apply
------------

If you wish to be considered for one of these positions you will need to complete an application form which can be obtained by sending your full postal address to:

Professor C M Bishop
Research Admissions Tutor
Neural Computing Research Group
Department of Computer Science and Applied Mathematics
Aston University
Birmingham B4 7ET, U.K.
Tel: 0121 359 3611 ext. 4270
Fax: 0121 333 6215
e-mail: c.m.bishop at aston.ac.uk

The minimum entry qualification is a First Class or Upper Second Class Honours degree in a relevant discipline, or the equivalent overseas qualification. Overseas applicants whose first language is not English must provide evidence of competence in English. Acceptable evidence includes possession of a UK or North American degree, or a formal certificate such as the British Council's ELTS (6.0 or better) or the USA TOEFL (550 or better).

From pauer at cse.ucsc.edu Wed Jun 7 16:21:28 1995
From: pauer at cse.ucsc.edu (Peter Auer)
Date: Wed, 7 Jun 1995 13:21:28 -0700
Subject: Call for impromptu-talks at COLT 95
Message-ID: <199506072021.NAA25809@arapaho.cse.ucsc.edu>

Following an old (and very successful) tradition, COLT 95 (July 5 - 8 in Santa Cruz, USA) will again include, in addition to two invited talks and the talks officially accepted for presentation by the program committee, informal sessions with "impromptu-talks". In fact, at COLT 95 we have reserved even more time for these, because they have turned out to be quite fruitful for a quick dissemination of new/unfinished research results. The program chair of COLT 95 (Wolfgang Maass) has asked me to help in the organization of these sessions.

The sessions for impromptu talks at COLT 95 will take place
-- on July 6 from 4:40 to 5:20 (Chair: Ron Rivest)
-- on July 6 from 7:30 to 9:00 (Chair: Peter Auer)
-- on July 7 from 4:50 to 5:30 (Chair: David Haussler).
Slots for these impromptu-talks are assigned on a first-come, first-served basis, and you can sign up by sending email to me, i.e. to Peter Auer: pauer at cse.ucsc.edu. In previous years we had impromptu-talks of up to 10 minutes, but we may have to shorten that in case of stronger demand. Any topic of potential interest to the COLT community is appropriate for these impromptu-talks, including recent research results that have been (or will be) officially presented at other conferences, reports on work in progress, and discussions of open problems. I will send out the schedule of the impromptu-talks on June 28. If there are still slots available you might sign up for impromptu-talks during the conference, but usually slots fill up pretty fast.

Below I have attached the conference and registration information for COLT'95.

Peter Auer.

----------------------------------------------------------------------
COLT '95
Eighth ACM Conference on Computational Learning Theory
Wednesday, July 5 through Saturday, July 8, 1995
University of California, Santa Cruz, California

**********************************************************************
Below is a short ascii summary of the information regarding this year's Colt conference. Additional information, maps, ... can be obtained from the colt web page: http://www.cse.ucsc.edu/~lisa/colt.html
**********************************************************************

The Colt conference will be held on campus, which is hidden away in the redwoods on the Pacific Coast of Northern California. The conference is in cooperation with the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) and the ACM Special Interest Group on Artificial Intelligence (SIGART).

1. Flight tickets: San Jose Airport is the closest, about a 45 minute drive. San Francisco Airport (SFO) is about an hour and forty-five minutes away, but has slightly better flight connections.

2. Transportation from the airport to Santa Cruz: The first option is to rent a car and drive south from San Jose on Hwy 880, which becomes Hwy 17, or from San Francisco take either Hwy 280 or 101 to Hwy 17. When you get to Santa Cruz, take Route 1 (Mission St.) north. Turn right on Bay Street and follow the signs to UCSC. Commuters must purchase parking permits for $4.00/day M-F (parking is free Saturday and Sunday) from the information kiosk at the Main entrance to campus or the conference satellite office. Those staying on campus can pick up permits with their room keys.

Various van services also connect Santa Cruz with the San Francisco and San Jose airports. The Santa Cruz Airporter (408) 423-1214 (or (800) 497-4997 from anywhere) has regularly scheduled trips (every two hours from 9am until 11pm from San Jose International Airport, and every two hours from 8am until 10pm from SFO; $15 each way from either airport; you MUST mention the name of the conference when booking), and will drop you off at the Crown/Merrill housing. ABC Transportation (408) 464-8893 ((800) 734-4313 from California (24hr.)) runs a private sedan service ($47 for one, $57 for two, $67 for three to six from San Jose Airport to UC Santa Cruz; $79 for one, $89 for two, and $99 for three to six from SFO to UCSC; additional $10 after 11:30 pm; additional $20 to meet an international flight) and will drop you off at your room. Book at least 24 hours in advance.

3. Conference and room registration: Please fill out the enclosed form and send it to us with your payment.
It MUST be postmarked by June 1 and received by June 5 to obtain the early registration rate and guarantee the room. Conference attendance is limited by the available space, and late registrations may need to be returned. Your arrival: This year we will be at the Crown/Merrill apartments. (Same place it was four years ago.) Enter the campus at the Main Entrance, which is the intersection of High and Bay Streets. (Look for the COLT signs.) Bay Street turns into Coolidge Drive; continue on this road until you reach the Crown/Merrill apartments. Housing registration will be at the Crown/Merrill Satellite Office (408) 459-2611 from 2:00 to 5:00 pm on Wednesday. Keys, meal cards, parking permits, maps, and information about what to do in Santa Cruz will be available. The office will remain open until 10:00 pm for late arrivals. Arrivals after 10:00 pm: stop at the Main Entrance Kiosk and have the guard call the College Proctor, who will meet you at the Satellite Office and give you your housing materials. Problems? Please go directly to the Crown/Merrill Satellite Office, or contact your Conference Director. In case of emergency, dial 911 from any campus phone. The weather in July is mostly sunny with occasional summer fog. Even though the air may be cool, the sun can be deceptively strong; those susceptible to sunburn should come prepared with sunblock. Bring T-shirts, slacks, shorts, and a sweater or light jacket, as it cools down at night. For information on the local bus routes and schedules, call the Metro Center at (408) 425-8600. Bring swimming trunks, tennis rackets, etc. You can get day passes for $5.00 (East Field House, Physical Education Office) to use the recreation facilities on campus. For questions about registration or accommodations, contact COLT'95, Computer Science Dept., UCSC, Santa Cruz, CA 95064. The e-mail address is colt95 at cse.ucsc.edu, and fax is (408)459-4829. For emergencies, call (408)459-2263. 4. General Conference Information: The Conference Registration will be 4 - 8pm Wednesday, outside the Cowell dining hall. Late registrations will be at the same location during the technical sessions. All lectures will be in the Cowell dining hall. A banquet will be held Wednesday from 6:30-8:00pm outside the Cowell dining hall, followed by an invited talk by Leslie Valiant at 8:00pm inside the dining hall. There will be a terminal available in the dining hall for checking e-mail. The campus Copy Center is in the Communications Building (open 8am to 5pm). The conference has been organized to allow time for informal discussion and collaboration. In addition to the regular technical sessions, we are pleased to present two special invited lectures by Leslie Valiant and Terrence Sejnowski. -------------------- Conference Schedule -------------------- ----------------- Wednesday, July 5 ----------------- 2:00-5:00 pm, Housing Registration, Crown/Merrill Satellite Office. Note: All technical sessions will take place in the Cowell Dining Hall. Session 1: 5:00 - 6:00 Chair: Wolfgang Maass 5:00-5:20 "An Experimental and Theoretical Comparison of Model Selection Methods" Michael Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron 5:20-5:40 "On the Learnability and Usage of Acyclic Probabilistic Finite Automata" Dana Ron, Yoram Singer, and Naftali Tishby 5:40-6:00 "Learning to Model Sequences Generated by Switching Distributions" Yoav Freund and Dana Ron 6:30 - 8:00 Banquet 8:00 Invited Talk by Leslie G.
Valiant "Rationality" ---------------- Thursday, July 6 ---------------- Session 2: 8:30 - 10:00 Chair: Dana Angluin 8:30-8:50 "A Game of Prediction with Expert Advice" Volodya G. Vovk 8:50-9:10 "Predicting Nearly as well as the Best Pruning of a Decision Tree" David P. Helmbold and Robert E. Schapire 9:10-9:30 "A Comparison of New and Old Algorithms for a Mixture Estimation Problem" David P. Helmbold, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth 9:30-9:40 "A Note on Learning Multivariate Polynomials under the Uniform Distribution" Nader H. Bshouty 9:40-9:50 "Randomized Approximate Aggregating Strategies and Their Applications to Prediction and Discrimination" Kenji Yamanishi 9:50-10:00 "How to Use Expert Advice in the Case when Actual Values of Estimated Events Remain Unknown" Olga Mitina and Nikolai Vereshchagin 10:00 - 10:30 Break Session 3: 10:30 - 12:00 Chair: Robert Schapire 10:30-10:50 "Learning With Unreliable Boundary Queries" Avrim Blum, Prasad Chalasani, Sally A. Goldman, and Donna K. Slonim 10:50-11:10 "Generalized Teaching Dimensions and the Query Complexity of Learning" Tibor Hegedus 11:10-11:30 "Learning DNF over the Uniform Distribution Using a Quantum Example Oracle" Nader H. Bshouty and Jeffrey C. Jackson 11:30-11:40 "Reducing the Number of Queries in Self-Directed Learning" Yiqun L. Yin 11:40-11:50 "On Self-Directed Learning" Shai Ben-David, Nadav Eiron, and Eyal Kushilevitz 11:50-12:00 "Being Taught can be Faster than Asking Questions" Ronald L. Rivest and Yiqun L. Yin 12:00 - 1:30 Lunch Session 4: 1:30 - 3:30 Chair: Phil Long 1:30-1:50 "Reductions for Learning via Queries" William Gasarch and Geoffrey R. Hird 1:50-2:10 "Learning via Queries and Oracles" Frank Stephan 2:10-2:20 "On the Inductive Inference of Real Valued Functions" Kalvis Apsitis, Rusins Freivalds, and Carl H. Smith 2:20-2:30 "Inductive Inference of Functions on the Rationals" Douglas A. Cenzer and William R. Moser 2:30-2:40 "Language Learning from Texts: Mind Changes, Limited Memory and Monotonicity" Efim Kinber and Frank Stephan 2:40-2:50 "On Learning Decision Trees with Large Output Domains" Nader H. Bshouty, Christino Tamon, and David K. Wilson 2:50-3:00 "On the Learnability of $Z_{N}$-DNF Formulas" Nader Bshouty, Zhixiang Chen, Scott E. Decatur, and Steven Homer 3:00-3:10 "Proper Learning Algorithm for Functions of $k$ Terms under Smooth Distributions" Yoshifumi Sakai, Eiji Takimoto, and Akira Maruoka 3:10-3:20 "On-line Learning of Binary and $n$-ary Relations over Multi-dimensional Clusters" Atsuyoshi Nakamura and Naoki Abe 3:20-3:30 "DNF: If You Can't Learn 'em, Teach 'em: An Interactive Model of Teaching" David H. Mathias 3:30 - 3:40 Break 3:40 - 4:40 Poster Discussion Session I 4:40 - 5:20 Impromptu Talks I - Chair: Ron Rivest 7:30 - 9:00 Impromptu Talks II - Chair: Peter Auer 9:00 Business Meeting - Cowell Dining Hall -------------- Friday, July 7 -------------- 8:30-9:30 Invited Talk by Terrence Sejnowski "Predictive Hebbian Learning" 9:30 - 10:00 Break Session 5: 10:00 - 12:00 Chair: Naftali Tishby 10:00-10:20 "On Genetic Algorithms" Eric B. Baum, Dan Boneh, and Charles Garrett 10:20-10:40 "On the Optimal Capacity of Binary Neural Networks: Rigorous Combinatorial Approaches" Jeong Han Kim and James R.
Roche 10:40-11:00 "From Noise-Free to Noise-Tolerant and from On-line to Batch Learning" Norbert Klasner and Hans Ulrich Simon 11:00-11:10 "Sample Sizes for Sigmoidal Neural Networks" John Shawe-Taylor 11:10-11:20 "Online Learning via Congregational Gradient Descent" Kim L. Blackmore, Robert C. Williamson, Iven M. Y. Mareels, and William A. Sethares 11:20-11:30 "Criteria for Specifying Machine Complexity in Learning" Changfeng Wang and Santosh S. Venkatesh 11:30-11:40 "Markov Decision Processes in Large State Spaces" Lawrence K. Saul and Satinder P. Singh 11:40-11:50 "The Perceptron Algorithm vs. Winnow: Linear vs. Logarithmic Mistake Bounds when Few Input Variables are Relevant" Jyrki Kivinen and Manfred K. Warmuth 11:50-12:00 "Learning by a Population of Perceptrons" Kukjin Kang and Jong-Hoon Oh 12:00 - 1:30 Lunch Session 6: 1:30 - 3:40 Chair: Tom Dietterich 1:30-1:50 "Learning to Reason with a Restricted View" Roni Khardon and Dan Roth 1:50-2:10 "Learning Internal Representations" Jonathan Baxter 2:10-2:20 "Piecemeal Graph Exploration by a Mobile Robot" Baruch Awerbuch, Margrit Betke, Ronald L. Rivest, and Mona Singh 2:20-2:30 "Concept Learning with Geometric Hypotheses" David P. Dobkin and Dimitrios Gunopulos 2:30-2:40 "More or Less Efficient Agnostic Learning of Convex Polygons" Paul Fischer 2:40-2:50 "Noise-Tolerant Parallel Learning of Geometric Concepts" Nader H. Bshouty, Sally A. Goldman, and David H. Mathias 2:50-3:00 "On Learning from Noisy and Incomplete Examples" Scott E. Decatur and Rosario Gennaro 3:00-3:10 "On Learning Bounded-Width Branching Programs" Funda Ergün, Ravi S. Kumar, and Ronitt Rubinfeld 3:10-3:20 "On Efficient Agnostic Learning of Linear Combinations of Basis Functions" Wee Sun Lee, Peter L. Bartlett, and Robert C. Williamson 3:20-3:30 "Sequential PAC Learning" Dale Schuurmans and Russell Greiner 3:30-3:40 "Regression NSS: An Alternative to Cross Validation" Michael P. Perrone and Brian S. Blais 3:40 - 3:50 Break 3:50 - 4:50 Poster Discussion Session II 4:50 - 5:30 Impromptu Talks III - Chair: David Haussler 8:00 - 10:00 Beach Party at Twin Lakes Beach ---------------- Saturday, July 8 ---------------- Session 7: 8:40 - 10:00 Chair: Jeff Jackson 8:40-9:00 "More Theorems about Scale-sensitive Dimensions and Learning" Peter L. Bartlett and Philip M. Long 9:00-9:20 "General Bounds on the Mutual Information Between a Parameter and $n$ Conditionally Independent Observations" David Haussler and Manfred Opper 9:20-9:40 "Learning from a Mixture of Labeled and Unlabeled Examples with Parametric Side Information" Joel Ratsaby and Santosh S. Venkatesh 9:40-10:00 "Learning Using Group Representations" Dan Boneh 10:00 - 10:40 Break Session 8: 10:40 - 12:00 Chair: Peter Bartlett 10:40-11:00 "Exactly Learning Automata with Small Cover Time" Dana Ron and Ronitt Rubinfeld 11:00-11:20 "Specification and Simulation of Statistical Query Algorithms for Efficiency and Noise Tolerance" Javed A. Aslam and Scott E. Decatur 11:20-11:40 "Simple Learning Algorithms Using Divide and Conquer" Nader H. Bshouty 11:40-Noon "A Note on VC-Dimension and Measure of Sets of Reals" Shai Ben-David and Leonid Gurvits Noon Conference Ends ======================== REGISTRATION INFORMATION ======================== Please fill in the information needed for registration and accommodations. Make your payment by check or international money order, in U.S. dollars and payable through a U.S. bank, to UC Regents/COLT '95.
Mail this form together with payment (by June 1, 1995 to avoid the late fee) to: COLT '95 Dept. of Computer Science University of California Santa Cruz, California 95064 Questions: e-mail colt95 at cse.ucsc.edu, fax (408)459-4829. Confirmations will be sent by e-mail. Anyone needing special arrangements to accommodate a disability should enclose a note with their registration. If you don't receive confirmation within three weeks of payment, let us know. Name: _________________________________________________________ Affiliation: __________________________________________________ Address: ______________________________________________________ City: ____________________ State: __________ Zip: _____________ Country: ______________________________________________________ Telephone: _______________________ Fax: ______________________ Email address: ________________________________________________ The registration fee includes a copy of the proceedings. ACM/SIG Members: $170 (with banquet) $ ______________ Non-Members: $190 (with banquet) $ ______________ Late Members: $225 (after June 1) $ ______________ Late Non-Members: $245 (after June 1) $ ______________ Full-time students: $85 (no banquet) $ ______________ Extra banquet tickets: ______ (quantity) x $20 = $_______________ How many in your party have dietary restrictions? _____________________ Vegetarian: _____________________ Other: ______________________________ Shirt size: _____ medium _____ large _____ x-large ------------------------- Accommodations and Dining ------------------------- Accommodation fees at the Crown/Merrill Apartments are $60 per person per night for a double and $72 per night for a single. Cafeteria-style breakfast (7:30 to 8:30am), lunch (12:00 to 1:00pm), and dinner (5:30 to 6:30pm) will be served in the Crown/Merrill Dining Hall. Doors close at the end of the time indicated, but dining may continue beyond this time. The first meal provided is dinner on the day of arrival and the last meal is lunch on the day you leave. NO REFUNDS can be given after June 1. Those with uncertain plans should make reservations at an off-campus hotel. Each attendee should pick one of the following options: _____ Package #1: Weds., Thurs., Fri. nights: $180 double, $216 single. _____ Package #2: Weds., Thurs., Fri., Sat. nights: $240 double, $288 single. _____ Other housing arrangement. Each 4-person apartment has a living room, a kitchen, a common bathroom, and either four separate single rooms, two double rooms, or two single rooms and one double room. We need the following information to make room assignments: Gender (M/F): ____________________ Smoker (Y/N): ______________________ Roommate Preference: __________________________________________________ For shorter stays, longer stays, and other special requirements, you can get other accommodations through the Conference Office. Make reservations directly with them at (408) 459-2611, fax (408) 459-3422, and do this soon, as on-campus rooms for the summer fill up well in advance. Off-campus hotels include the Dream Inn (408) 426-4330 and the Ocean Pacific Lodge (408) 457-1234 or (800) 995-0289. AMOUNT ENCLOSED: Registration $ ___________________ Banquet tickets $ ___________________ Accommodations $ ___________________ Discount* $ ___________________ TOTAL $ ___________________ *There is a $35 discount for registering for both Colt '95 and ML '95. (The discount does not apply for student registrations.) Proof of registration to ML '95 is required for the discount to be taken.
We explored the possibility of a shuttle bus from Colt to ML, but there was not enough interest. You will need to make your own arrangements for travel to ML from Santa Cruz. From yarowsky at unagi.cis.upenn.edu Thu Jun 8 16:27:33 1995 From: yarowsky at unagi.cis.upenn.edu (David Yarowsky) Date: Thu, 8 Jun 95 16:27:33 EDT Subject: ACL-95 WVLC3 - Supervised Training vs Self-organizing Methods Message-ID: <9506082027.AA16195@unagi.cis.upenn.edu> Keywords: Corpora, Self-Organization, Statistical Models, Unsupervised Learning THE THIRD WORKSHOP ON VERY LARGE CORPORA ----------------------------------------- Friday, 30 June 1995 8:45 AM - 5:25 PM MIT, Cambridge, Massachusetts, USA at ACL-95 (June 26-29) (Sponsored by ACL's SIGDAT and SIGNLL) The workshop will present original research in corpus-based and statistical natural language processing. Topics will include sense disambiguation, grammar induction, part-of-speech tagging, information retrieval, language modeling, and machine translation. This year's theme is: Supervised Training vs. Self-organizing Methods Historically, annotated corpora have made a significant contribution to tasks such as part-of-speech tagging and sense disambiguation. But annotated corpora are expensive and generally unavailable for languages other than English. Self-organizing methods offer the hope that annotated corpora might not be necessary. Can we achieve comparable performance using little or no tagged training data? What are the tradeoffs? Organizers: Ken Church and David Yarowsky Industrial Sponsor: LEXIS-NEXIS, Division of Reed and Elsevier, Plc. REGISTRATION: Registration fees are $40 for payment received by 15 June 1995 and $45 at the door. Registration includes a copy of the proceedings, catered lunch and refreshments during the day. Acceptable forms of payment are US$ cheques payable to "ACL" or credit card (VISA/Mastercard) payment. E-mail registrations are encouraged. Please submit the following form along with payment: -------------------------------------------------------------------- Name: Institution (for name tag): Postal address: Email address: Payment (specify cheque or credit card): Credit card info - Name on card: - Card number: - Expiration date: Dietary requirements (vegetarian, etc.): -------------------------------------------------------------------- Please send to: David Yarowsky Dept. of Computer and Information Science University of Pennsylvania 200 S. 33rd St. Philadelphia, PA 19104-6389 USA email: yarowsky at unagi.cis.upenn.edu ============================================================================ PRELIMINARY PROGRAM 8:15 - 8:45 Registration. Coffee, danish, etc. 
available 8:45 - 8:50 Welcome 8:50 - 9:35 INVITED TALK (Mark Liberman) 9:35 - 9:50 Break 9:50 - 10:15 Eric Brill Unsupervised Learning of Disambiguation Rules for Part of Speech Tagging 10:15 - 10:40 Carl de Marcken Lexical Heads, Phrase Structure and the Induction of Grammar 10:40 - 11:05 Michael Collins and James Brooks Prepositional Phrase Attachment through a Backed-off Model 11:05 - 11:15 Break 11:15 - 11:40 Andrew Golding A Bayesian Hybrid Method for Context-sensitive Spelling Correction 11:40 - 12:05 Philip Resnik Disambiguating Noun Groupings with Respect to Wordnet Senses 12:05 - 1:05 CATERED LUNCH 1:05 - 1:30 Dekai Wu Trainable Coarse Bilingual Grammars for Parallel Text Bracketing 1:30 - 1:55 Lance Ramshaw and Mitch Marcus Text Chunking using Transformation-Based Learning 1:55 - 2:05 Break 2:05 - 3:00 INVITED TALK (Henry Kucera and Nelson Francis) 3:00 - 3:10 Break 3:10 - 3:35 Fernando Pereira, Yoram Singer and Naftali Tishby Beyond Word N-Grams 3:35 - 4:00 Jing-Shin Chang, Yi-Chung Lin and Keh-Yih Su Automatic Construction of a Chinese Electronic Dictionary 4:00 - 4:10 Break 4:10 - 4:35 Kenneth Church and William Gale Inverse Document Frequency (IDF): A Measure of Deviations from Poisson 4:35 - 5:00 Joe Zhou and Pete Dapkus Automatic Suggestion of Significant Terms for a Predefined Topic 5:00 - 5:25 Ellen Riloff and Jay Shoen Automatically Acquiring Conceptual Patterns without an Annotated Corpus ------------------------------------------------------------ More Information: http://www.cis.upenn.edu/~yarowsky/wvlc3.html ACL-95 Homepage: http://www.ai.mit.edu/people/cgdemarc/acl/acl-info.html From mhb0 at Lehigh.EDU Sun Jun 11 15:57:26 1995 From: mhb0 at Lehigh.EDU (Mark H. Bickhard) Date: Sun, 11 Jun 1995 15:57:26 EDT Subject: New Book Announcement Message-ID: <199506111957.PAA51802@ns4-1.CC.Lehigh.EDU> BOOK ANNOUNCEMENT Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution. Elsevier Science 1995 Mark H. Bickhard Lehigh University mhb0 at lehigh.edu Loren Terveen AT&T Bell Laboratories terveen at research.att.com SHORT DESCRIPTION The book focuses on a conceptual flaw in contemporary artificial intelligence and cognitive science. Many people have discovered diverse manifestations and facets of this flaw, but the central conceptual impasse is at best only partially perceived. Its consequences, nevertheless, visit themselves as distortions and failures of multiple research projects - and make impossible the ultimate aspirations of the fields. The impasse concerns a presupposition concerning the nature of representation - that all representation has the nature of encodings: encodingism. Encodings certainly exist, but encoding*ism* is at root logically incoherent; any *programmatic* research predicated on it is doomed to distortion and ultimate failure. The impasse and its consequences - and steps away from that impasse - are explored in a large number of projects and approaches. These include SOAR, CYC, PDP, situated cognition, subsumption architecture robotics, and the frame problems - a general survey of the current research in AI and Cognitive Science emerges. Interactivism, an alternative model of representation, is proposed and examined. SYNOPSIS The central point of Foundational Issues in Artificial Intelligence and Cognitive Science - Impasse and Solution is that there is a conceptual flaw in contemporary approaches to artificial intelligence and cognitive science, a flaw that makes impossible the ultimate aspirations of these fields. 
Many people have discovered diverse manifestations and facets of this flaw, but the central conceptual impasse is only partially perceived. The consequences, nevertheless, visit themselves as distortions and failures of research projects across the fields. The locus of the impasse concerns a common assumption or presupposition that underlies all parts of the field - a presupposition concerning the nature of representation. We call this assumption "encodingism", the assumption that representation is fundamentally constituted as encodings. This assumption, in fact, has been dominant throughout Western history. We argue that it is at root logically incoherent, and, therefore, that any programmatic research predicated on it is doomed to distortion and ultimate failure. On the other hand, encodings clearly do exist, and therefore are clearly possible, and we show how that could be - but they cannot be the foundational form of representation. Similarly, contemporary encoding approaches are enormously powerful, and major advances have been made within these dominant programmatic frameworks - but the encodingism flaw in those frameworks limits their ultimate possibilities, and will frustrate efforts toward the programmatic goal of understanding and constructing minds. The book characterizes and demonstrates this impasse, discusses a number of partial recognitions of and movements away from it, and then traces its consequences in a large number of projects and approaches within the fields. These include SOAR, CYC, PDP, situated cognition, subsumption architecture robotics, and the frame problems. In surveying the consequences of the impasse, we also provide a general survey of the current research in AI and Cognitive Science per se. We do not propose an unsolvable impasse, and, in fact, present an alternative that does resolve that impasse. This is developed for contrast, for perspective, to demonstrate that there is an alternative, and to explore some of its nature. We end with an exploration of some of the architectural implications of the alternative - called interactivism - and argue that such architectures are 1) not subject to the encodingism incoherence, 2) more powerful than Turing machines, 3) more consistent with properties of central nervous system functioning than other contemporary approaches, and 4) capable of resolving the many problematics in the field that we argue are in fact manifestations of the underlying impasse. The audience for this book will include researchers, academics, and students in artificial intelligence, cognitive science, robotics, cognitive psychology, philosophy of mind and language, natural language processing, connectionism, and learning. The focus of the book is on the nature of representation, and representation permeates everywhere - so also, therefore, do the implications of our critique and our alternative permeate everywhere.
CONTENTS Preface xi Introduction 1 A PREVIEW 2 I GENERAL CRITIQUE 5 1 Programmatic Arguments 7 CRITIQUES AND QUALIFICATIONS 8 DIAGNOSES AND SOLUTIONS 8 IN-PRINCIPLE ARGUMENTS 9 2 The Problem of Representation 11 ENCODINGISM 11 Circularity 12 Incoherence - The Fundamental Flaw 13 A First Rejoinder 15 The Necessity of an Interpreter 17 3 Consequences of Encodingism 19 LOGICAL CONSEQUENCES 19 Skepticism 19 Idealism 20 Circular Microgenesis 20 Incoherence Again 20 Emergence 21 4 Responses to the Problems of Encodings 25 FALSE SOLUTIONS 25 Innatism 25 Methodological Solipsism 26 Direct Reference 27 External Observer Semantics 27 Internal Observer Semantics 28 Observer Idealism 29 Simulation Observer Idealism 30 SEDUCTIONS 31 Transduction 31 Correspondence as Encoding: Confusing Factual and Epistemic Correspondence 32 5 Current Criticisms of AI and Cognitive Science 35 AN APORIA 35 Empty Symbols 35 ENCOUNTERS WITH THE ISSUES 36 Searle 36 Gibson 40 Piaget 40 Maturana and Varela 42 Dreyfus 42 Hermeneutics 44 6 General Consequences of the Encodingism Impasse 47 REPRESENTATION 47 LEARNING 47 THE MENTAL 51 WHY ENCODINGISM? 51 II INTERACTIVISM: AN ALTERNATIVE TO ENCODINGISM 53 7 The Interactive Model 55 BASIC EPISTEMOLOGY 56 Representation as Function 56 Epistemic Contact: Interactive Differentiation and Implicit Definition 60 Representational Content 61 EVOLUTIONARY FOUNDATIONS 65 SOME COGNITIVE PHENOMENA 66 Perception 66 Learning 69 Language 71 8 Implications for Foundational Mathematics 75 TARSKI 75 Encodings for Variables and Quantifiers 75 Tarski's Theorems and the Encodingism Incoherence 76 Representational Systems Adequate to Their Own Semantics 77 Observer Semantics 78 Truth as a Counterexample to Encodingism 79 TURING 80 Semantics for the Turing Machine Tape 81 Sequence, But Not Timing 81 Is Timing Relevant to Cognition? 
83 Transcending Turing Machines 84 III ENCODINGISM: ASSUMPTIONS AND CONSEQUENCES 87 9 Representation: Issues within Encodingism 89 EXPLICIT ENCODINGISM IN THEORY AND PRACTICE 90 Physical Symbol Systems 90 The Problem Space Hypothesis 98 SOAR 100 PROLIFERATION OF BASIC ENCODINGS 106 CYC - Lenat's Encyclopedia Project 107 TRUTH-VALUED VERSUS NON-TRUTH-VALUED 118 Procedural vs Declarative Representation 119 PROCEDURAL SEMANTICS 120 Still Just Input Correspondences 121 SITUATED AUTOMATA THEORY 123 NON-COGNITIVE FUNCTIONAL ANALYSIS 126 The Observer Perspective Again 128 BRIAN SMITH 130 Correspondence 131 Participation 131 No Interaction 132 Correspondence is the Wrong Category 133 ADRIAN CUSSINS 134 INTERNAL TROUBLES 136 Too Many Correspondences 137 Disjunctions 138 Wide and Narrow 140 Red Herrings 142 10 Representation: Issues about Encodingism 145 SOME EXPLORATIONS OF THE LITERATURE 145 Stevan Harnad 145 Radu Bogdan 164 Bill Clancey 169 A General Note on Situated Cognition 174 Rodney Brooks: Anti-Representationalist Robotics 175 Agre and Chapman 178 Benny Shanon 185 Pragmatism 191 Kuipers' Critters 195 Dynamic Systems Approaches 199 A DIAGNOSIS OF THE FRAME PROBLEMS 214 Some Interactivism-Encodingism Differences 215 Implicit versus Explicit Classes of Input Strings 217 Practical Implicitness: History and Context 220 Practical Implicitness: Differentiation and Apperception 221 Practical Implicitness: Apperceptive Context Sensitivities 222 A Counterargument: The Power of Logic 223 Incoherence: Still another corollary 229 Counterfactual Frame Problems 230 The Intra-object Frame Problem 232 11 Language 235 INTERACTIVIST VIEW OF COMMUNICATION 237 THEMES EMERGING FROM AI RESEARCH IN LANGUAGE 239 Awareness of the Context-dependency of Language 240 Awareness of the Relational Distributivity of Meaning 240 Awareness of Process in Meaning 242 Toward a Goal-directed, Social Conception of Language 247 Awareness of Goal-directedness of Language 248 Awareness of Social, Interactive Nature of Language 252 Conclusions 259 12 Learning 261 RESTRICTION TO A COMBINATORIC SPACE OF ENCODING 261 LEARNING FORCES INTERACTIVISM 262 Passive Systems 262 Skepticism, Disjunction, and the Necessity of Error for Learning 266 Interactive Internal Error Conditions 267 What Could be in Error? 
270 Error as Failure of Interactive Functional Indications - of Interactive Implicit Predications 270 Learning Forces Interactivism 271 Learning and Interactivism 272 COMPUTATIONAL LEARNING THEORY 273 INDUCTION 274 GENETIC AI 275 Overview 276 Convergences 278 Differences 278 Constructivism 281 13 Connectionism 283 OVERVIEW 283 STRENGTHS 286 WEAKNESSES 289 ENCODINGISM 292 CRITIQUING CONNECTIONISM AND AI LANGUAGE APPROACHES 296 IV SOME NOVEL ARCHITECTURES 299 14 Interactivism and Connectionism 301 INTERACTIVISM AS AN INTEGRATING PERSPECTIVE 301 Hybrid Insufficiency 303 SOME INTERACTIVIST EXTENSIONS OF ARCHITECTURE 304 Distributivity 304 Metanets 307 15 Foundations of an Interactivist Architecture 309 THE CENTRAL NERVOUS SYSTEM 310 Oscillations and Modulations 310 Chemical Processing and Communication 311 Modulatory "Computations" 312 The Irrelevance of Standard Architectures 313 A Summary of the Argument 314 PROPERTIES AND POTENTIALITIES 317 Oscillatory Dynamic Spaces 317 Binding 318 Dynamic Trajectories 320 "Formal" Processes Recovered 322 Differentiators In An Oscillatory Dynamics 322 An Alternative Mathematics 323 The Interactive Alternative 323 V CONCLUSIONS 325 16 Transcending the Impasse 327 FAILURES OF ENCODINGISM 327 INTERACTIVISM 329 SOLUTIONS AND RESOURCES 330 TRANSCENDING THE IMPASSE 331 References 333 Index 367 PREFACE Artificial Intelligence and Cognitive Science are at a foundational impasse which is at best only partially recognized. This impasse has to do with assumptions concerning the nature of representation: standard approaches to representation are at root circular and incoherent. In particular, Artificial Intelligence research and Cognitive Science are conceptualized within a framework that assumes that cognitive processes can be modeled in terms of manipulations of encoded symbols. Furthermore, the more recent developments of connectionism and Parallel Distributed Processing, even though the issue of manipulation is contentious, share the basic assumption concerning the encoding nature of representation. In all varieties of these approaches, representation is construed as some form of encoding correspondence. The presupposition that representation is constituted as encodings, while innocuous for *some applied* Artificial Intelligence research, is fatal for the further reaching programmatic aspirations of both Artificial Intelligence and Cognitive Science. First, this encodingist assumption constitutes a *presupposition* about a basic aspect of mental phenomena - representation - rather than constituting a *model* of that phenomenon. Aspirations of Artificial Intelligence and Cognitive Science to provide any foundational account of representation are thus doomed to circularity: the encodingist approach presupposes what it purports to be (programmatically) able to explain. Second, the encoding assumption is not only itself in need of explication and modeling, but, even more critically, the standard presupposition that representation is *essentially* constituted as encodings is logically fatally flawed. This flaw yields numerous subsidiary consequences, both conceptual and applied. This book began as an article attempting to lay out this basic critique at the programmatic level. Terveen suggested that it would be more powerful to supplement the general critique with explorations of actual projects and positions in the fields, showing how the foundational flaws visit themselves upon the efforts of researchers. 
We began that task, and, among other things, discovered that there is no natural closure to it - there are always more positions that could be considered, and they increase in number exponentially with time. There is no intent and no need, however, for our survey to be exhaustive. It is primarily illustrative and demonstrative of the problems that emerge from the underlying programmatic flaw. Our selections of what to include in the survey have had roughly three criteria. We favored: 1) major and well-known work, 2) positions that illustrate interesting deleterious consequences of the encodingism framework, and 3) positions that illustrate the existence and power of moves in the direction of the alternative framework that we propose. We have ended up, *en passant*, with a representative survey of much of the field. Nevertheless, there remain many more positions and research projects that we would like to have been able to address. MAIN FEATURES Identifies a fundamental premise about the nature of representation that underlies much of Cognitive Science - that representation is constituted as encodings. Explores fatal flaws with this premise. Surveys major projects within Cognitive Science and Artificial Intelligence. Shows how they embody the encodingism premise, and how they are limited by it. Identifies movements within Cognitive Science and AI away from encodingism. Presents an alternative to encodingism - interactivism. Demonstrates that interactivism avoids the fatal flaws of encodingism, and that it provides a coherent framework for understanding representation. Unifies insights from the various movements in Cognitive Science away from encodingism. Sketches an interactivist cognitive architecture. FIELDS OF INTEREST Cognitive Science Simulation of Cognitive Processes Artificial Intelligence, Knowledge Engineering, Expert Systems Human Information Processing Philosophy of Language Philosophy of Mind Cognitive Psychology Robotics Artificial Life Autonomous Agents Dynamic Systems and Behavior Learning Theory of Computation Semantics Pragmatics Connectionism Linguistics Neuroscience Bickhard, M. H., Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science - Impasse and Solution. Elsevier Science. ISBN 0 444 82048 5 In the US/Canada, orders may be placed with: Elsevier Science P.O. Box 945 New York, NY 10159-0945 Phone (212) 633-3750 Fax (212) 633-3764 Email: usorders-f at elsevier.com Elsevier has given this book an unfortunately high price: Dfl. 240 -- US$ 141.25. We deeply regret that. Nevertheless, we suggest that it is well worth taking a look at, whether by purchase, local library, or inter-library loan. From georg at ai.univie.ac.at Tue Jun 13 09:26:13 1995 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Tue, 13 Jun 1995 15:26:13 +0200 (MET DST) Subject: Neural Nets and EEG: WWW page and workshop Message-ID: <199506131326.PAA29403@jedlesee.ai.univie.ac.at> The European BIOMED-1 project ========================================================================= A N N D E E (Enhancement of EEG-Based Diagnosis of Neurological and Psychiatric Disorders Using Artificial Neural Networks) ========================================================================= announces its WWW home page: http://www.ai.univie.ac.at/oefai/nn/anndee/ ANNDEE is a concerted action sponsored by the European Commission and the Austrian Federal Ministry of Science, Research, and the Arts.
It is devoted to coordinating research at several European centers aimed at processing EEG data using neural networks, in order to enhance diagnosis based on EEG. Among the application areas focused upon within ANNDEE are: - detection and classification of psychoses (e.g. schizophrenia) - detection and classification of degenerative diseases (e.g. Parkinson's) - automatic sleep staging and detection of sleep disorders (e.g. apnea, arousals) - spike detection in epilepsy - classification of single-trial, event-related EEG (e.g. for aiding the handicapped) The home page at the above URL not only gives information about the ANNDEE project but is intended to grow into a comprehensive server for anyone interested in this topic. Currently it includes - a list of partners and associated sites - a list of other sites on the Web - a search form for bibliographical references - links to publicly available EEG data - a list of important events =========================I M P O R T A N T !============================== Currently, some of the pages are still rather short. In order to make these services as complete as possible, we urge everyone working on EEG data processing with neural networks to send us their - address (incl. URL, if applicable) - description of their work - references - links to available data (this is not restricted to European sites!!) Send email to: anndee-admin at ai.univie.ac.at Questions concerning the scientific part of the project should be directed to: georg at ai.univie.ac.at (Georg Dorffner) ========================================================================== Check it out!! This service is part of the WWW server at the Austrian Research Institute for Artificial Intelligence Vienna, Austria __________________________________________________________________________ First Announcement: Public ANNDEE Meeting The ANNDEE project announces its first public workshop ======================================= Neural Network-Based EEG Analysis ======================================= June 29-30, 1995 Graz, Austria This workshop comprises lectures by ANNDEE participants as well as invited speakers, such as J. Kangas (Finland). It also includes tutorials on LVQ and SOM (self-organizing feature maps). For more information see http://www-dpmi.tu-graz.ac.at/Workshop/Workshop.html or send email to: pregenz at dpmi.tu-graz.ac.at (Martin Pregenzer) ______________________________________________________________ From dawei at venezia.rockefeller.edu Tue Jun 13 12:42:17 1995 From: dawei at venezia.rockefeller.edu (Dawei Dong) Date: Tue, 13 Jun 95 12:42:17 -0400 Subject: two papers on temporal information processing by Dong & Atick Message-ID: <9506131642.AA26190@venezia.rockefeller.edu> A theory of temporal information processing in neural systems: how do the early visual pathways, such as the LGN, temporally modulate the incoming signals of natural scenes? The following two papers explore this subject by 1) measuring the temporal power spectrum of natural time-varying images to reveal the underlying statistical regularities, and 2) using information theory, based on these measurements, to predict the optimal temporal filter, which is shown to be in quantitative agreement with physiological experiments. Dawei Dong 1) ftp://venezia.rockefeller.edu/dawei/papers/95-TIME.ps.Z (213K, 19 pages) Statistics of natural time-varying images Dawei W. Dong and Joseph J.
Atick Computational Neuroscience Laboratory The Rockefeller University 1230 York Avenue New York, NY 10021-6399 Abstract Natural time-varying images possess substantial spatiotemporal correlations. We measure these correlations --- or equivalently the power spectrum --- for an ensemble of more than a thousand segments of motion pictures, and we find significant regularities. More precisely, our measurements show that the dependence of the power spectrum on the spatial frequency, $f$, and temporal frequency, $w$, is in general nonseparable and is given by $f^{-m-1} F(w/f)$, where $F(w/f)$ is a nontrivial function of the ratio $w/f$. We give a theoretical derivation of this scaling behaviour and show that it emerges from objects with a static power spectrum $\sim f^{-m}$, appearing at a wide range of depths and moving with a distribution of velocities relative to the observer. We show that in the regime of relatively high temporal and low spatial frequencies, the power spectrum becomes independent of the details of the velocity distribution and it is separable into the product of spatial and temporal power spectra, with the temporal part given by the universal power-law $\sim w^{-2}$. Making some reasonable assumptions about the form of the velocity distribution, we derive an analytical expression for the spatiotemporal power spectrum which is in excellent agreement with the data for the entire range of spatial and temporal frequencies of our measurements. The results in this paper have direct implications for neural processing of time-varying images in the visual pathway. (Accepted for publication in Network: Computation in Neural Systems) 2) ftp://venezia.rockefeller.edu/dawei/papers/95-LGN.ps.Z (279K, 26 pages) Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus Dawei W. Dong and Joseph J. Atick Computational Neuroscience Laboratory The Rockefeller University 1230 York Avenue New York, NY 10021-6399 Abstract Natural time-varying images possess significant temporal correlations when sampled frame by frame by the photoreceptors. These correlations persist even after retinal processing and hence, under natural activation conditions, the signal sent to the lateral geniculate nucleus is temporally redundant or inefficient. We explore the hypothesis that the LGN is concerned, among other things, with improving efficiency of visual representation through active temporal decorrelation of the retinal signal, much in the same way that the retina improves efficiency by spatially decorrelating incoming images. Using some recently measured statistical properties of time-varying images, we predict the spatio-temporal receptive fields that achieve this decorrelation. It is shown that, because of neuronal nonlinearities, temporal decorrelation requires two response types, the *lagged* and *nonlagged*, just as spatial decorrelation requires *on* and *off* response types. The tuning and response properties of the predicted LGN cells compare quantitatively well with what is observed in recent physiological experiments. (Network: Computation in Neural Systems, Vol. 6(2), pp. 159-178) From omlinc at research.nj.nec.com Tue Jun 13 14:47:26 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Tue, 13 Jun 95 14:47:26 EDT Subject: Preprint Available - Knowledge Extraction Message-ID: <9506131847.AA00207@arosa> The following technical report is available from the archive of the Computer Science Department, University of Maryland.
URL: http://www.cs.umd.edu:80/TR/UMCP-CSD:CS-TR-3465 FTP: ftp.cs.umd.edu:/pub/papers/papers/3465/3465.ps.Z We welcome your comments. Christian Extraction of Rules from Discrete-Time Recurrent Neural Networks Revised Technical Report CS-TR-3465 and UMIACS-TR-95-54 University of Maryland, College Park, MD 20742 Christian W. Omlin and C. Lee Giles NEC Research Institute 4 Independence Way Princeton, N.J. 08540 USA E-mail: {omlinc,giles}@research.nj.nec.com ABSTRACT The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFA's) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a training set from the same network. We compare the generalization performances of these different models and the trained network, and we introduce a heuristic that permits us to choose among the consistent DFA's the model which best approximates the learned grammar. Keywords: Recurrent Neural Networks, Grammatical Inference, Regular Languages, Deterministic Finite-State Automata, Rule Extraction, Generalization Performance, Model Selection, Occam's Razor. From terry at salk.edu Tue Jun 13 16:19:25 1995 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 13 Jun 95 13:19:25 PDT Subject: Neural Computation 7:4 Message-ID: <9506132019.AA28440@salk.edu> Neural Computation Volume 7 Number 4 July 1995 Review: Hints Yaser Abu-Mostafa Articles: Topology and geometry of weight solutions in multi-layer networks Frans M. Coetzee and Virginia L. Stonick Letters: Time-skew Hebb rule in a nonisopotential neuron Barak A. Pearlmutter Synapse models for neural networks: From ion channel kinetics to multiplicative coefficient w_ij Francois Chapeau-Blondeau and Nicolas Chambet Generalization and analysis of the Lisberger-Sejnowski VOR model Ning Qian Stable adaptive control of robot manipulators using neural networks Robert M. Sanner and Jean-Jacques E. Slotine Modular and hybrid connectionist system for automatic speaker identification Younes Bennani Error estimation by series association for artificial neural network systems Keehoon Kim and Eric B. Bartlett Test error fluctuations in finite linear perceptrons D. Barber, D. Saad and P. Sollich Learning and extracting initial Mealy automata with a modular neural network model Peter Tino and Jozef Sajda Dynamic cell structure learns perfectly topology preserving map Jorg Bruske and Gerald Sommer ----- ABSTRACTS - http://www-mitpress.mit.edu/ SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $68 Individual ______ $180 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). Back issues from Volumes 1-6 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada). MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From stokely at atax.eng.uab.edu Mon Jun 12 18:48:39 1995 From: stokely at atax.eng.uab.edu (Ernest Stokely) Date: Mon, 12 Jun 1995 17:48:39 -0500 Subject: Position in Biomedical Engineering Message-ID: Tenure Track Faculty Position -------------------------- The Department of Biomedical Engineering at the University of Alabama at Birmingham has an opening for a tenure-track faculty member. The opening is being filled as part of a Whitaker Foundation Special Opportunities Award for a training and research program in functional and structural imaging of the brain. Candidates are particularly invited from cross-disciplinary areas of neurosystems, biological neural networks, computational neurobiology, or other multidisciplinary areas that combine neurobiology and imaging. The person selected for this position will be expected to form active research collaborations with other units in the Medical Affairs part of the UAB campus. Candidates should have a Ph.D. degree in engineering or a related field, and must be U.S. citizens or have permanent residency in the U.S. The search will continue until the position is filled. UAB is an autonomous campus within the University of Alabama system. UAB faculty are currently involved in over $140 million of externally funded grants and contracts. A 4.1 Tesla clinical NMR facility for cardiovascular research, several other small-bore MR systems, a Philips Gyroscan system, and a team of research scientists and engineers working in various aspects of MR imaging and spectroscopy are housed only 200 meters from the School of Engineering. In addition, the brain imaging project will involve collaborations with members of the Neurobiology Center, as well as faculty members from the Departments of Neurology, Psychiatry, and Radiology. To apply, send a letter of application, a current curriculum vitae, and three letters of reference to Dr. Ernest Stokely, Department of Biomedical Engineering, BEC 256, University of Alabama at Birmingham, Birmingham, Alabama 35294-4461. The University of Alabama at Birmingham is an equal opportunity, affirmative action employer, and encourages applications from qualified women and minorities. Ernest Stokely Chair, Department of Biomedical Engineering BEC 256 University of Alabama at Birmingham Birmingham, Alabama 35294-4461 Internet: stokely at atax.eng.uab.edu FAX: (205) 975-4919 Phone: (205) 934-8420 From omlinc at research.nj.nec.com Wed Jun 14 11:13:23 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Wed, 14 Jun 95 11:13:23 EDT Subject: Technical Report - Alternate Sites Message-ID: <9506141513.AA02813@arosa> There seems to be a problem with accessing the technical report Extraction of Rules from Discrete-Time Recurrent Neural Networks by Christian W. Omlin and C. Lee Giles from the sites http://www.cs.umd.edu:80/TR/UMCP-CSD:CS-TR-3465 ftp.cs.umd.edu:/pub/papers/papers/3465/3465.ps.Z The above technical report can now be accessed either through my home page at http://www.neci.nj.nec.com/homepages/omlin/omlin.html or via ftp from ftp.nj.nec.com /pub/omlinc/rule_extraction.ps.Z I apologize for the inconvenience. Christian From phkywong at uxmail.ust.hk Thu Jun 15 05:34:24 1995 From: phkywong at uxmail.ust.hk (Dr.
Michael Wong) Date: Thu, 15 Jun 1995 17:34:24 +0800 Subject: Paper on Neural Network Classification Available Message-ID: <95Jun15.173424+0800_hkt.18918-1+162@uxmail.ust.hk> FTP-host: physics.ust.hk FTP-file: pub/kymwong/nips95.ps.gz The following paper, submitted to the Theory session of NIPS-95, is now available via anonymous FTP. (8 pages long) ============================================================================ Neural Network Classification of Non-Uniform Data K. Y. Michael Wong and H. C. Lau, Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phkywong at usthk.ust.hk, phhclau at usthk.ust.hk ABSTRACT We consider a model of non-uniform data, which resembles typical data for system faults in diagnostic classification tasks. Pre-processing the data for feature extraction and dimensionality reduction improves the performance of neural network classifiers, in terms of the number of training examples required for good generalization. This result supports the use of hybrid expert systems in which feature extraction techniques such as classification trees are used to build a pre-processing layer for neural network classifiers. ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get nips95.ps.gz ftp> quit unix> gunzip nips95.ps.gz unix> lpr nips95.ps From djf3 at cornell.edu Thu Jun 15 12:04:47 1995 From: djf3 at cornell.edu (David Field) Date: Thu, 15 Jun 1995 12:04:47 -0400 Subject: Big brains conference Message-ID: There remains a limited number of openings for people wishing to attend this year's Cornell Symposium. A list of speakers and abstracts of talks can be found on our World Wide Web page. The URL is http://comp9.psych.cornell.edu/Psychology/big-brains.html or http://redwood.psych.cornell.edu/big-brains.html Cornell University Summer Symposium 1995: Big Brains June 23-26 Co-organizers: Barbara Finlay and David Field Many forces encourage and constrain the development of large brains, and large brains assemble themselves into new architectures that reflect these forces. Big brains are presumably selected for better and more efficient perception, cognition and behavior: how is selection for behavior translated into structure? Organismal and developmental constraints on selection for increased brain size are numerous: energetic requirements of the developing fetus, linkage of early developmental events, body conformation factors like pelvis size, social structure of the species, and the mature brain's energetic requirements are all examples of forces which influence brain size and conformation. We now know a number of essential facts about how the distribution of connectivity, modularity, and the nature of functional specialization change as brains get large. Current work on the computational architecture of neural nets has revealed strategies that work optimally for either small or large assemblies of units, and principles of organization that emerge only in larger assemblies. Using "big brains" as a focal point, this conference draws together researchers who work at all these levels of analysis to understand more of how our large brain has come to be.
From dhw at santafe.edu Thu Jun 15 18:01:35 1995 From: dhw at santafe.edu (David Wolpert) Date: Thu, 15 Jun 95 16:01:35 MDT Subject: Paper announcement Message-ID: <9506152201.AA22307@sfi.santafe.edu> NEW PAPER ANNOUNCEMENT. *** Some Results Concerning Off-Training-Set and IID Error for the Gibbs and the Bayes Optimal Generalizers by David H. Wolpert, Emanuel Knill, Tal Grossman Abstract: In this paper we analyze the average behavior of the Bayes-optimal and Gibbs learning algorithms. We do this both for off-training-set error and conventional IID error (for which test sets overlap with training sets). For the IID case we provide a major extension to one of the better known results of Haussler. We also show that expected IID test set error is a non-increasing function of training set size for either algorithm. On the other hand, as we show, the expected off-training-set error for both learning algorithms can increase with training set size, for non-uniform sampling distributions. We characterize what relationship the sampling distribution must have with the prior for such an increase. We show in particular that for uniform sampling distributions and either algorithm, the expected off-training-set error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algorithm stays constant. In addition, we show that for the Bayes-optimal algorithm, expected off-training-set error can increase with training set size when the target function is fixed, but if and only if the expected error averaged over all targets decreases with training set size. Our results hold for arbitrary noise and arbitrary loss functions. *** To retrieve this file, anonymous ftp to ftp.santafe.edu. Go to pub/dhw_ftp. Compressed PostScript of the file is called OTS.BO.Gibbs.ps.Z. From jlm at crab.psy.cmu.edu Thu Jun 15 16:04:01 1995 From: jlm at crab.psy.cmu.edu (James L. McClelland) Date: Thu, 15 Jun 95 16:04:01 EDT Subject: ANNOUNCING THE PDP++ SIMULATOR Message-ID: <9506152004.AA25171@crab.psy.cmu.edu.psy.cmu.edu> ANNOUNCING: The PDP++ Software Authors: Randall C. O'Reilly, Chadley K. Dawson, and James L. McClelland The PDP++ software is a new neural-network simulation system written in C++. It represents the next generation of the PDP software released with the McClelland and Rumelhart "Explorations in Parallel Distributed Processing Handbook", MIT Press, 1987. It is easy enough for novice users, but very powerful and flexible for research use. The current version is 1.0 beta (1.0b). It has been used and tested locally fairly extensively during development, but this is the first general release. The software can be obtained by anonymous ftp from: Anonymous FTP Site: hydra.psy.cmu.edu/pub/pdp++ For more information, see our web page: WWW Page: http://www.cs.cmu.edu/Web/Groups/CNBC/PDP++/PDP++.html There is a 250-page (printed) manual and an HTML version available on-line at the above address. Software Features: ================== o Full Graphical User Interface (GUI) based on the InterViews toolkit. Allows user-selected "look and feel". o Network Viewer shows network architecture and processing in real time, allows network to be constructed with simple point-and-click actions. o Training and testing data can be graphed on-line and network state can be displayed over time numerically or using a wide range of color or size-based graphical representations.
o Environment Viewer shows training patterns using color or size-based graphical representations. o Flexible object-oriented design allows mix-and-match simulation construction and easy extension by deriving new object types from existing ones. o Built-in 'CSS' scripting language uses C++ syntax, allows full access to simulation object data and functions. Transition between script code and compiled code is simplified since both are C++. Script has command-line completion, source-level debugger, and provides standard C/C++ library functions and objects. o Scripts can control processing, generate training and testing patterns, automate routine tasks, etc. o Scripts can be generated from GUI actions, and the user can create GUI interfaces from script objects to extend and customize the simulation environment. Supported Algorithms: ===================== o Feedforward and recurrent error backpropagation. Recurrent BP includes continuous, real-time models, and Almeida-Pineda. o Constraint satisfaction algorithms and associated learning algorithms including Boltzmann Machine, Hopfield models, mean-field networks (DBM), Interactive Activation and Competition (IAC), and continuous stochastic networks. o Self-organizing learning including Competitive Learning, Soft Competitive Learning, simple Hebbian, and Self-organizing Maps ("Kohonen Nets"). The Fine Print: =============== PDP++ is copyrighted and cannot be sold or distributed by anyone other than the copyright holders. However, the full source code is freely available, and the user is granted full permission to modify, copy, and use it. See our web page for details. The software runs on Unix workstations under XWindows. It requires a minimum of 16 Meg of RAM, and 32 Meg is preferable. It has been developed and tested on Sun Sparcs under SunOS 4.1.3, HP 7xx under HP-UX 9.x, and SGI Irix 5.3. Statically linked binaries are available for these machines. Other machine types will require compiling from the source. Cfront 3.x and g++ 2.6.3 are supported C++ compilers. The GUI in PDP++ is based on the InterViews toolkit, version 3.2a. However, we had to patch it to get it to work. We distribute pre-compiled libraries containing these patches for the above architectures. For architectures other than those above, you will have to apply our patches to InterViews before compiling. The basic GUI and script technology in PDP++ is based on a type-scanning system called TypeAccess which interfaces with the CSS script language to provide a virtually automatic interface mechanism. While these were developed for PDP++, they can easily be used for any kind of application, and CSS is available as a stand-alone executable for use like Perl or TCL. The binary-only distribution requires about 54 Meg of disk space, since we have been unable to get shared libraries to work with C++ on the above platforms. Each simulation executable is around 8-12 Meg in size, and there are 3 of these (bp++, cs++, so++), plus the CSS and 'maketa' executables. The compiled source-code distribution takes about 115 Meg (but only around 16 Meg before compiling). For more information on the details of the software, see our web page.
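As a concrete point of reference for the self-organizing methods listed under "Supported Algorithms" above, here is a minimal, library-independent sketch of winner-take-all competitive learning. This is plain Python/NumPy written for illustration only -- it is not PDP++ or CSS code, and the function and parameter names here are hypothetical.

import numpy as np

def competitive_learning(data, n_units=4, lr=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize each unit's weight vector to a randomly chosen input.
    w = data[rng.choice(len(data), size=n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            # Winner = unit whose weight vector is closest to the input.
            winner = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            # Move only the winner toward the input (winner-take-all).
            w[winner] += lr * (x - w[winner])
    return w

# Example: points drawn around two cluster centers.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in ([0.0, 0.0], [5.0, 5.0])])
print(competitive_learning(pts, n_units=2))

The essential design choice is that only the winning unit's weights move toward each input, so the units end up partitioning the input space; the announcement above describes much more general, GUI-driven implementations of this algorithm family.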
From furuhashi at nuee.nagoya-u.ac.jp Fri Jun 16 03:54:55 1995 From: furuhashi at nuee.nagoya-u.ac.jp (furuhashi@nuee.nagoya-u.ac.jp) Date: Fri, 16 Jun 1995 16:54:55 +0900 Subject: Call for Papers of WWW'95 Message-ID: <9506160754.AA19245@gemini.bioele.nuee.nagoya-u.ac.jp> CALL FOR PAPERS 1995 IEEE/Nagoya University World Wisepersons Workshop (WWW'95) ON FUZZY LOGIC AND NEURAL NETWORKS/EVOLUTIONARY COMPUTATION November 14 and 15, 1995 Rubrum Ohzan Chikusa-ku, Nagoya, JAPAN Sponsored by Nagoya University Co-sponsored by IEEE Industrial Electronics Society Technically Co-sponsored by IEEE Robotics and Automation Society International Fuzzy Systems Association Japan Society for Fuzzy Theory and Systems North American Fuzzy Information Processing Society Society of Instrument and Control Engineers Robotics Society of Japan There is growing interest in technologies that combine fuzzy logic with neural networks, and fuzzy logic with evolutionary computation, for acquiring experts' knowledge, modeling nonlinear systems, and realizing complex adaptive systems. The goal of the 1995 IEEE/Nagoya University WWW on Fuzzy Logic and Neural Networks/Evolutionary Computation is to give its attendees opportunities to exchange information and ideas on various aspects of these combination technologies and to stimulate and inspire pioneering work in this area. To keep the quality of the workshop high, only a limited number of people will be accepted as participants. The papers presented at the workshop are planned to be edited and published by Springer-Verlag. For speakers of excellent papers, partial financial assistance for travel expenses as well as lodging fees in Nagoya will be provided by the steering committee of WWW'95. TOPICS: Combination of Fuzzy Logic and Neural Networks, Combination of Fuzzy Logic and Evolutionary Computation, Learning and Adaptation, Knowledge Acquisition, Modeling, Human Machine Interface IMPORTANT DATES: Submission of Abstracts of Papers : June 30, 1995 Acceptance Notification : Aug. 31, 1995 Final Manuscript : Sept. 30, 1995 Abstracts should be typewritten in English, within 4 pages of A4 or Letter-sized sheets. Use Times or a similar typeface. The size of the letters should be 10 points or larger. All correspondence and submission of papers should be sent to Takeshi Furuhashi, General Chair Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN TEL: +81-52-789-2792, FAX: +81-52-789-3166 E-mail: furuhashi at nuee.nagoya-u.ac.jp IEEE/Nagoya University WWW: The IEEE/Nagoya University WWW (World Wisepersons Workshop) is a series of workshops sponsored by Nagoya University and co-sponsored by the IEEE Industrial Electronics Society. The city of Nagoya, located two hours from Tokyo, has many electro-mechanical industries in its surroundings, such as Mitsubishi, TOYOTA, and their allied companies. Nagoya is a mecca of the robotics, machine, and aerospace industries in Japan. The series of workshops will give its attendees opportunities to exchange information on advanced sciences and technologies and to visit industries and research institutes in this area. WORKSHOP ORGANIZATION Honorary Chair: Masanobu Hasatani (Dean, School of Engineering, Nagoya University) General Chair: Takeshi Furuhashi (Nagoya University) Advisory Committee: Chair: Toshio Fukuda (Nagoya University) Toshio Goto (Nagoya University) Fumio Harashima (University of Tokyo) Richard D. Klafter (Temple University) C.S.
George Lee (Purdue University) Hiroyasu Nomura (Nagoya University) Shigeru Okuma (Nagoya University) Yoshiki Uchikawa (Nagoya University) Steering Committee: S.Abe (Hitachi Ltd.) K.Aoki (Toyota Motor Corporation) T.Aoki (Nagoya Municipal Industrial Res. Inst.) M.Arao (OMRON Corporation) Y.Dote (Muroran Institute of Technology) M.Fathi (University of Dortmund) M.Gen (Ashikaga Institute of Technology) H.Hashimoto (Univ. of Tokyo) I.Hayashi (Hannan University) M.Hiller (Gerhard-Mercator-Universität) H.Honda (Oki Technosystems Laboratory, Inc.) H.Ichihashi (University of Osaka Prefecture) T.Iokibe (Meidensha Corporation) H.Ishibuchi (University of Osaka Prefecture) A.Ishiguro (Nagoya University) O.Ito (Fuji Electric Corporate Res.& Develop., Ltd.) N.Kasabov (University of Otago) R.Katayama (Sanyo Electric Co., Ltd.) E.Khan (National Semiconductor) H.Kitano (Sony CSL) K.M.Lee (KAIST) M.A.Lee (University of California, Berkeley) Y.Maeda (Osaka Electro-Communication Univ.) T.Muramatsu (Nippon Steel Corporation) S.Nakanishi (Tokai University) T.Nomura (SHARP Corporation) H.Ohno (Toyota Central Res.& Develop.Lab., Inc.) M.Sano (Hiroshima City University) M.Sakawa (Hiroshima University) T.Shibata (MEL, MITI) H.Shiizuka (Kogakuin University) K.Shimohara (ATR) K.Tanaka (Kanazawa University) T.Yamada (NTT) T.Yamaguchi (Utsunomiya University) N.Wakami (Matsushita Electric Industrial Co., Ltd.) J.Watada (Osaka Institute of Technology) K.Watanabe (Saga University) --------------------------------------------------- Takeshi Furuhashi, Assoc. Professor Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, Japan Tel.+81-52-789-2792, Fax.+81-52-789-3166 --------------------------------------------------- From juergen at idsia.ch Fri Jun 16 04:58:52 1995 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Fri, 16 Jun 95 10:58:52 +0200 Subject: new IDSIA papers Message-ID: <9506160858.AA13539@fava.idsia.ch> 3 new IDSIA publications available. Click on http://www.idsia.ch or use ftp: FTP-host: fava.idsia.ch (192.132.252.1) FTP-filenames: /pub/papers/ml95.kolmogorov.ps.gz 9 pages /pub/papers/ml95.antq.ps.gz 9 pages /pub/papers/iwann95.invertible.ps.gz 8 pages (use gunzip to uncompress) ___________________________________________________________________ DISCOVERING SOLUTIONS WITH LOW KOLMOGOROV COMPLEXITY AND HIGH GENERALIZATION CAPABILITY Juergen Schmidhuber, IDSIA To appear in Machine Learning: Proc. 12th int. conf., 1995. This paper reviews basic concepts of Kolmogorov complexity theory relevant to machine learning. It shows how a derivative of Levin's universal search algorithm can be used to discover neural nets with low Levin complexity, low Kolmogorov complexity, and high generalization capability. At least on certain toy problems where it is computationally feasible, the method can lead to generalization results unmatchable by previous neural net algorithms. The final section addresses problems arising in incremental learning situations. ANT-Q Luca Gambardella, IDSIA Marco Dorigo, IDSIA To appear in Machine Learning: Proc. 12th int. conf., 1995. We introduce Ant-Q, a family of algorithms which share many similarities with Q-learning (Watkins, 1989). Ant-Q is a generalization of the ``ant system'' (AS --- Dorigo, 1992; Dorigo, Maniezzo and Colorni, 1996), a distributed algorithm for combinatorial optimization based on the ant colony metaphor.
In applications to symmetric traveling salesman problems (TSPs), we demonstrate (1) that some Ant-Q instances outperform AS, and (2) that Ant-Q compares favorably with other heuristic approaches based on neural nets or local search. Finally, we apply Ant-Q to some difficult asymmetric TSPs and obtain excellent results: Ant-Q finds solutions of a quality which usually can be found only by highly specialized algorithms. LEARNING THE VISUOMOTOR COORDINATION OF A MOBILE ROBOT BY USING THE INVERTIBLE KOHONEN MAP Cristina Versino, IDSIA Luca Gambardella, IDSIA In Proc. International Workshop on Artificial Neural Networks 1995. This paper is based on the insight that the Extended Kohonen Map (EKM) is naturally invertible: given an input pattern, the network output is generated by competition among the neuron fan-in weight vectors (conventional ``forward mode''). Vice versa, given an output value, a corresponding input pattern can be obtained by competition among the neuron fan-out weight vectors (unconventional ``backward mode''). This invertibility property makes the EKM worth considering for sensorimotor modeling. We present an experiment concerning the visuomotor coordination of a simple mobile robot. ``Learning by doing'' creates a sensorimotor model: sensorimotor pairs are collected by observing the robot's behavior, and these pairs are used for estimating the model's parameters. By training the network on the robot's direct kinematics (forward mode), one simultaneously obtains a solution to the inverse kinematics problem (backward mode). The experiment has been performed both in simulation and by using a real robot. ___________________________________________________________________ Related and other papers in http://www.idsia.ch Comments welcome. Juergen Schmidhuber Research Director IDSIA, Corso Elvezia 36 6900-Lugano, Switzerland juergen at idsia.ch From harnad at ecs.soton.ac.uk Sat Jun 17 11:26:53 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Sat, 17 Jun 95 16:26:53 +0100 Subject: Memory: BBS Call for Commentators Message-ID: <23611.9506171526@cogsci> Below is the abstract of a forthcoming target article on: MEMORY METAPHORS by A. Koriat and M. Goldsmith This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at ecs.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract.
____________________________________________________________________ MEMORY METAPHORS AND THE LABORATORY/REAL-LIFE CONTROVERSY: CORRESPONDENCE VERSUS STOREHOUSE VIEWS OF MEMORY Asher Koriat and Morris Goldsmith Department of Psychology University of Haifa Haifa, Israel rsps301 at uvm.haifa.ac.il KEYWORDS: accuracy, assessment, capacity, ecological validity, intentionality, memory, metamemory, metaphors, monitoring, representation, storehouse, subject control. ABSTRACT: The study of memory is witnessing a spirited clash between proponents of traditional laboratory research and those advocating a more naturalistic approach to the study of "everyday" memory. The debate has generally centered on the "what" (content), "where" (context), and "how" (methods) of memory research. In the present target article, we argue that this controversy discloses a further, more fundamental breach between two underlying memory metaphors, each having distinct implications for memory theory and assessment: Whereas traditional memory research has been dominated by the storehouse metaphor, leading to a focus on the quantity of items remaining in store, the recent wave of everyday memory research discloses a shift towards a correspondence metaphor, focusing on the accuracy or faithfulness of memory in representing past events. Our analysis shows that the correspondence metaphor calls for a research approach which differs from the traditional approach in important respects: in emphasizing the intentional-representational function of memory, in addressing the wholistic and graded aspects of memory correspondence, in taking an output-bound assessment perspective, and in allowing more room for the operation of subject-controlled metamemory processes and motivational factors. This analysis can help tie together some of the what, where, and how aspects of the everyday-laboratory controversy. More importantly, in explicating the unique metatheoretical foundation of the accuracy-oriented approach to memory, our aim is to promote a more effective exploitation of the correspondence metaphor in both naturalistic and laboratory research contexts. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.koriat). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc.
Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.koriat ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.koriat To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.koriat When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- From harnad at ecs.soton.ac.uk Sat Jun 17 11:44:57 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Sat, 17 Jun 95 16:44:57 +0100 Subject: EEG Dynamics: BBS Call for Commentators Message-ID: <23748.9506171544@cogsci> Below is the abstract of a forthcoming target article on: BRAIN DYNAMICS, EEG & NEURAL NETS by JJ Wright & DTJ Liley This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at ecs.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ DYNAMICS OF THE BRAIN AT GLOBAL AND MICROSCOPIC SCALES: NEURAL NETWORKS AND THE EEG. J.J. Wright and D.T.J. Liley Mental Health Research Institute Parkville, Victoria 3052, Australia jjw at cortex.mhri.edu.au Swinburne Center for Applied Neuroscience Hawthorne, Victoria 3122 Melbourne, Australia KEYWORDS: chaos, EEG simulation, electrocorticogram, neocortex, network symmetry, neurodynamics. ABSTRACT: There is some complementarity between models of the origin of the electroencephalogram (EEG) and neural network models of information storage in brain-like systems.
From the EEG models of Freeman, Nunez, and the authors' group, we argue that the wave-like processes revealed in the EEG exhibit linear and near-equilibrium dynamics at macroscopic scale, despite extremely nonlinear, probably chaotic, dynamics at microscopic scale. Simulations of cortical neuronal interactions at global and microscopic scales are then presented. The simulations depend on anatomical and physiological estimates of synaptic densities, coupling symmetries, synaptic gain, dendritic time constants and axonal delays. It is shown that the frequency content, wave velocities, frequency/wavenumber spectra and response to cortical activation of the electrocorticogram (ECoG) can be reproduced by a "lumped" simulation treating small cortical areas as single functional units. The corresponding cellular neural network simulation has properties which include those of attractor neural networks proposed by Amit and Parisi. Within the simulations at both scales, sharp transitions occur between low and high cell firing rates. These transitions may form a basis for neural interactions across scale. To maintain overall cortical dynamics in the normal low firing-rate range, interactions between the cortex and subcortical systems are required to prevent runaway global excitation. Thus the interaction of cortex and subcortex via cortico-striatal and related pathways may partly regulate global dynamics by a principle analogous to adiabatic control of artificial neural networks. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.wright). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.wright ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.wright To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.wright When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you).
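As a toy illustration of the "lumped" simulation strategy described in the Wright & Liley abstract above (treating a small cortical area as a single functional unit), the following Python sketch integrates a standard two-population excitatory/inhibitory rate model. The equations are generic Wilson-Cowan-style dynamics with invented parameter values, not the authors' actual model.

    import numpy as np

    def simulate_lumped_unit(T=1.0, dt=1e-4):
        # One "lumped" cortical unit: excitatory (E) and inhibitory (I)
        # populations coupled through a sigmoidal firing-rate function.
        w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0   # toy couplings
        tau_e, tau_i = 0.01, 0.02                        # time constants (s)
        drive = 1.25                                     # external input to E
        f = lambda z: 1.0 / (1.0 + np.exp(-z))
        E, I, trace = 0.1, 0.1, []
        for _ in range(int(T / dt)):                     # forward Euler
            dE = (-E + f(w_ee * E - w_ei * I + drive)) / tau_e
            dI = (-I + f(w_ie * E - w_ii * I)) / tau_i
            E, I = E + dt * dE, I + dt * dI
            trace.append(E)
        return np.array(trace)   # E activity over time, a crude ECoG proxy

Depending on the coupling values, such a unit settles to a fixed point or oscillates; the sharp transitions between low and high firing rates mentioned in the abstract correspond to the bistability this kind of model can exhibit, and whole-cortex wave behaviour is obtained by coupling many such units with axonal delays.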
------------------------------------------------------------- From piuri at elet.polimi.it Mon Jun 19 19:18:57 1995 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Tue, 20 Jun 1995 00:18:57 +0100 Subject: call for papers Message-ID: <9506192318.AA01819@ipmel2.elet.polimi.it> ================================================================ CESA'96 IMACS/IEEE-SMC Multiconference Computational Engineering in Systems Applications Lille, France - July 9-12, 1996 ================================================================ Call for Papers for the Special Sessions on Neural Technologies ================================================================ The aim of this meeting is to survey the state of the art in the various aspects of computational engineering involved in system theory and applications. It will be organized in four distinct simultaneous symposia: "Modelling, Analysis, and Simulation", "Discrete Events and Manufacturing Systems", "Control, Optimization and Supervision", and "Robotics and Cybernetics". A Special Session on "Neural Techniques for Identification and Prediction" will be held in the Symposium on "Modelling, Analysis, and Simulation". Papers are solicited on all aspects of neural technologies concerning system identification and prediction. In particular, the Special Session will focus on theoretical, design and practical aspects. A Special Session on "Neural Control Systems: Techniques, Implementations, and Applications" will be held in the Symposium on "Control, Optimization and Supervision". Papers are solicited on all aspects of neural technologies concerning system control: theory, design methodologies, realizations, case studies, and applications are welcome. Authors interested in the above Special Sessions are kindly invited to send a letter of interest by August 31, 1995, to the Special Session Organizer (email is preferred). This letter should contain the name and the address (including email) of the possible contact author, the name of the special session, and a tentative title of the paper. It does not limit possible further submissions. Authors are then requested to submit to the Special Session Organizer (email and fax submissions are accepted): - a one-page abstract by September 30, 1995, for review assignment, - the preliminary version of the paper or an extended abstract by November 15, 1995. Acceptance/rejection will be mailed by January 15, 1996. The final camera-ready version of the paper is due by May 1, 1996. Prof. Vincenzo Piuri Organizer of the Special Sessions on Neural Technologies Department of Electronics and Information Politecnico di Milano, Italy fax +39-2-2399-3411 email piuri at elet.polimi.it ================================================================ From workshop at Physik.Uni-Wuerzburg.DE Mon Jun 19 17:06:03 1995 From: workshop at Physik.Uni-Wuerzburg.DE (Wolfgang Kinzel) Date: Mon, 19 Jun 95 17:06:03 MESZ Subject: workshop and autumn school on neural nets in Wuerzburg Message-ID: <199506191506.RAA19777@wptx01.physik.uni-wuerzburg.de> First Announcement and Call for Abstracts Interdisciplinary Autumn School and Workshop on Neural Networks: Application, Biology, and Theory October 12-14 (school) and 16-18 (workshop), 1995 Würzburg, Germany INVITED SPEAKERS INCLUDE: M. Abeles, Jerusalem A. Aertsen, Rehovot J.K. Anlauf, Siemens AG J.P. Aubin, Paris M. Biehl, Würzburg C. v.d. Broeck, Diepenbeek M. Cottrell, Paris B. Fritzke, Bochum Th. Fritsch, Würzburg J. Göppert, Tübingen L.K. Hansen, Lyngby M.
Hemberger, Daimler Benz AG L. van Hemmen, München J.A. Hertz, Copenhagen J. Hopfield, Pasadena I. Kanter, Ramat-Gan C. Koch, Pasadena P. Kraus, Bochum B. Lautrup, Copenhagen W. Maass, Graz Th. Martinetz, Siemens AG M. Opper, Santa Cruz H. Scheich, Magdeburg H.G. Schuster, Kiel S. Seung, AT&T Bell-Lab. W. Singer, Frankfurt S.A. Solla, Copenhagen H. Sompolinsky, Jerusalem F. Varela, Paris A. Weigend, Boulder AUTUMN SCHOOL, Oct. 12-14: Introductory lectures on theory and applications of neural nets for graduate students and interested postgraduates in biology, medicine, mathematics, physics, computer science, and other related disciplines. Topics include neuronal modelling, statistical physics, hardware, and applications of neural nets in telecommunication, financial forecasting, and biological data analysis. WORKSHOP, Oct. 16-18: Biology, theory, and applications of neural networks with particular emphasis on the interdisciplinary aspects of the field. There will be only invited lectures, with ample time for discussion. In addition, poster sessions will be scheduled. REGISTRATION: Requested before AUGUST 31, per FAX or (E-)MAIL to Workshop on Neural Networks Inst. für Theor. Physik, Julius-Maximilians-Universität Am Hubland, D-97074 Würzburg, Germany Fax: +49 931 888 5141 e-mail: workshop at physik.uni-wuerzburg.de The registration fee is DM 150,- for the Autumn School and DM 150,- for the Workshop, due upon arrival (cash only). Students pay DM 80,- for each event (student ID required). ABSTRACTS: Participants who wish to present a poster should submit title and abstract together with their registration before August 31. ACCOMMODATION: Registered participants will receive a request form from the Tourist Office Würzburg together with general information. Early registration is advised. In case of registration after July 31, please contact the Fremdenverkehrsamt directly: Am Congress Centrum, D-97070 Würzburg, Fax +49 931 37372. ORGANIZING COMMITTEE: M. Biehl, Th. Fritsch, W. Kinzel, Univ. Würzburg. SCIENTIFIC ADVISORY COUNCIL: D. Flockerzi, K.-D. Kniffki, W. Knobloch, M. Meesmann, T. Nowak, F. Schneider, P. Tran-Gia, Universität Würzburg. SPONSORS: Peter Beate Heller-Stiftung im Stifterverband f. die Deutsche Wissenschaft, Research Center of Daimler Benz AG, Stiftung der Städtischen Sparkasse Würzburg. --------------------------------cut here---------------------------------- Registration Form Please return to: Workshop on Neural Networks Institut für Theoretische Physik Julius-Maximilians-Universität Am Hubland D-97074 Würzburg, Germany Fax: +49 931 888 5141 E-mail : workshop at physik.uni-wuerzburg.de I will attend the Autumn School Oct. 12-14 [ ] * (Reg. fee DM 150,- [ ] / 80,- [ ] due upon arrival) * Workshop Oct. 16-18 [ ] * (Reg. fee DM 150,- [ ] / 80,- [ ] due upon arrival) * * Please mark; the reduced fee applies only for participants with a valid student ID. I wish to present a poster [ ] (If yes, please send a title page with a 10-line abstract!) Name: Affiliation: Address: Phone: Fax: E-mail: (please provide a full postal address in any case!) Signature: --------------------------------------------------------------------------- From moreno at eel.upc.es Tue Jun 20 09:55:09 1995 From: moreno at eel.upc.es (Juan M. Moreno) Date: Tue, 20 Jun 1995 9:55:09 UTC+0100 Subject: Ph.D.
Thesis: VLSI Architectures for Evolutive Neural Models Message-ID: <582*/S=moreno/OU=eel/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS> FTP-host: ftp.upc.es (147.83.98.7) FTP-file: /upc/eel/moreno_vlsi_94.tar (2 MB compressed, 5.6 MB uncompressed, 184 pages) The following Ph.D. Thesis is now available by anonymous ftp. FTP instructions can be found at the end of this message. -------------------------------------------------------------------- VLSI ARCHITECTURES FOR EVOLUTIVE NEURAL MODELS J.M. Moreno Arostegui Technical University of Catalunya Department of Electronics Engineering ABSTRACT In recent years there has been increasing interest in the research field of artificial neural network models. The reason for this interest has been the development of advanced tools and techniques for microelectronics design, which have made it possible to translate theoretical connectionist models into efficient physical realizations. However, there are several problems associated with the classical artificial neural network models, related basically to their convergence properties and to the need to define heuristically the proper network structure for a particular problem. In order to alleviate these problems, evolutive neural models offer the possibility of constructing automatically, during the training process, the proper network structure able to handle a certain task efficiently. Furthermore, these neural models allow incremental learning schemes to be established, so that new knowledge can easily be incorporated into the network without the need to perform a new complete training process from scratch. The present work tries to offer efficient solutions, in the form of VLSI microelectronic architectures, for the eventual realization of systems based on evolutive neural paradigms. An exhaustive analysis of the different types of evolutive neural models has first been performed. The goal of this analysis is to select those evolutive neural models whose data flow is suitable for an eventual hardware implementation. As a result, the incremental evolutive neural models have been selected as the most appropriate ones in case a hardware realization is envisaged. Afterwards, the improvement of the convergence properties of evolutive neural models has been considered. This improvement is required so as to allow for more efficient physical implementations able to face real-world tasks. As a result, three different methods have been proposed to enhance the network construction process provided by evolutive neural models. The next step towards the implementation of evolutive neural models has consisted of the selection of the most suitable hardware architectures to realize the data flow imposed by the corresponding training and recall phases associated with these neural models. As a preliminary step, an algorithm vectorization process has been performed, so as to detect the basic operations required by the training and recall schemes. Then, by analyzing the efficiency offered by different hardware architectures in carrying out these basic operations, we have selected two architectures as the most suitable for an eventual hardware implementation. Bearing in mind the results provided by the previous architecture analysis, a digital architecture has been proposed.
This architecture is able to organize its resources properly, so as to match the requirements imposed by the corresponding training and recall phases, and is thus capable of emulating the two architectures selected by the analysis indicated previously. The architecture is organized as an array of processing units, which can be configured in order to provide a specific array organization. A specific RISC (Reduced Instruction Set Computer) processor has been developed in order to realize these processing units. This processor has a generic enough instruction set, which permits the efficient emulation (both in terms of speed and compactness) of a wide range of evolutive neural models. Finally, an analog systolic architecture has been proposed, which also allows for the physical implementation of the evolutive neural models indicated previously. This architecture has been developed using a systolic modular principle, so that different neural models can be emulated just by changing the functionality of the building blocks which constitute its processing units. The main advantage offered by this architecture is the possibility of developing compact systems capable of providing high processing rates, thus making it suitable for those tasks where an integrated signal processing scheme is required. ---------------------------------------------------------------------------- FTP instructions: unix> ftp ftp.upc.es (147.83.98.7) Name: anonymous Password: (your e-mail address) ftp> cd /upc/eel ftp> bin ftp> get moreno_vlsi_94.tar ftp> bye unix> tar xvf moreno_vlsi_94.tar As a result, you get 12 different compressed postscript files (5.6 MB). Just uncompress these files and print them on your local printer. Sorry, but there are no hard copies available. Regards, ---------------------------------------------------------------------------- || Juan Manuel Moreno Arostegui || || || || || || Dept. Enginyeria Electronica || Tel. : +34 3 401 74 88 || || Universitat Politecnica de Catalunya || || || Modul C-4, Campus Nord || Fax : +34 3 401 67 56 || || c/ Gran Capita s/n || || || 08034-Barcelona || E-mail : moreno at eel.upc.es || || SPAIN || || ---------------------------------------------------------------------------- From lawrence at research.nj.nec.com Tue Jun 20 15:29:53 1995 From: lawrence at research.nj.nec.com (Steve Lawrence) Date: Tue, 20 Jun 1995 15:29:53 -0400 (EDT) Subject: TR Available: Neural Network and Machine Learning for Natural Language Processing Message-ID: <199506201929.PAA06572@heavenly> A non-text attachment was scrubbed... Name: not available Type: text Size: 2934 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/bec69e10/attachment.ksh From harnad at ecs.soton.ac.uk Wed Jun 21 16:23:04 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Wed, 21 Jun 95 21:23:04 +0100 Subject: EEG and Memory: PSYC Call for Commentary Message-ID: <5416.9506212023@cogsci> PSYCOLOQUY Commentary is invited on: Wolfgang Klimesch on EEG & Memory Qualified professional biobehavioral, neural or cognitive scientists are hereby invited to submit Open Peer Commentary on the target article whose abstract appears below. It has been published in PSYCOLOQUY, a refereed electronic journal sponsored by the American Psychological Association. Instructions for retrieval and for preparing commentaries follow the abstract.
The address for submitting commentaries and articles and for requesting information is psyc at pucc.princeton.edu The URLs for retrieving articles are: http://www.princeton.edu/~harnad/psyc.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1995.volume.6 TARGET ARTICLE AUTHOR'S RATIONALE FOR SOLICITING COMMENTARY: Memory processes can be described as brain oscillations, and memory network models (such as the connectivity model (Klimesch, 1994)) can easily be applied to the neuronal level if abstract activation values are interpreted in terms of frequency values reflecting oscillatory processes. I would be very interested in eliciting commentaries on (1) this basic rationale, (2) the statement that in the cortex oscillations are mandatory for information transmission, (3) the proposed role of EEG alpha and (4) EEG theta for memory processes. ----------------------------------------------------------------------- psycoloquy.95.6.06.memory-brain.1.klimesch ISSN 1055-0143 (55 paragraphs, 75 references, 1279 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1995 Wolfgang Klimesch MEMORY PROCESSES DESCRIBED AS BRAIN OSCILLATIONS IN THE EEG-ALPHA AND THETA BANDS Wolfgang Klimesch University of Salzburg Department of Physiological Psychology Institute of Psychology, Hellbrunnerstr. 34 A-5020 Salzburg, AUSTRIA Klimesch at edvz.sbg.ac.at ABSTRACT: This target article tries to integrate results in memory research from diverse disciplines such as psychophysiology, cognitive psychology, anatomy and neurophysiology. The integrating link is seen in more recent anatomical findings that provide strong arguments for the assumption that oscillations provide the basic form of communication between cortical cell assemblies. The basic argument is that episodic memory processes, which are part of a complex working memory system, are reflected by oscillations in the theta band, whereas long-term memory processes are reflected by alpha oscillations. It is assumed that alpha and theta oscillations serve to encode, access, and retrieve cortical codes that are stored in the form of widely distributed but intensely interconnected cell assemblies. KEYWORDS: Alpha, EEG, Hippocampus, Memory, Oscillation, Thalamus, Theta. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/psyc.html http://cogsci.ecs.soton.ac.uk/~harnad/psyc.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1995.volume.6/ ftp://cogsci.ecs.soton.ac.uk/pub/harnad/Psycoloquy/1995.volume.6/ To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/Psycoloquy/1995.volume.6 To show the available files, type: ls Next, retrieve the file you want with (for example): mget *.1.klimesch When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you.
To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- INSTRUCTIONS FOR PSYCOLOQUY COMMENTATORS Accepted PSYCOLOQUY target articles have been judged by 5-8 referees to be appropriate for Open Peer Commentary, the special service provided by PSYCOLOQUY to investigators in psychology, neuroscience, behavioral biology, cognitive sciences and philosophy who wish to solicit multiple responses from an international group of fellow specialists within and across these disciplines to a particularly significant and controversial piece of work. If you feel that you can contribute substantive criticism, interpretation, elaboration or pertinent complementary or supplementary material on a PSYCOLOQUY target article, you are invited to submit a formal electronic commentary. Please note that although commentaries are solicited and most will appear, acceptance cannot, of course, be guaranteed. 1. Before preparing your commentary, please read carefully the Instructions for Authors and Commentators and examine recent numbers of PSYCOLOQUY. 2. Commentaries should be limited to 200 lines (1800 words, references included). PSYCOLOQUY reserves the right to edit commentaries for relevance and style. In the interest of speed, commentators will only be sent the edited draft for review when there have been major editorial changes. Where judged necessary by the Editor, commentaries will be formally refereed. 3. Please provide a title for your commentary. As many commentators will address the same general topic, your title should be a distinctive one that reflects the gist of your specific contribution and is suitable for the kind of keyword indexing used in modern bibliographic retrieval systems. Each commentary should have a brief (~50-60 word) abstract. 4. All paragraphs should be numbered consecutively. Line length should not exceed 72 characters. The commentary should begin with the title, your name and full institutional address (including zip code) and email address. References must be prepared in accordance with the examples given in the Instructions. Please read the sections of the Instructions for Authors concerning style, preparation and editing. PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 40,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral science, cognitive science, neuroscience, social science, etc.). All contributions are refereed. Target article length should normally not exceed 500 lines [c. 4500 words]. Commentaries and responses should not exceed 200 lines [c. 1800 words]. All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, (3) the authors' full name(s) and institutional address(es). In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field?
what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). All paragraphs should be numbered in articles, commentaries and responses (see the format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation). It is strongly recommended that all figures be designed so as to be screen-readable ASCII. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article. PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected. Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY. Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu Anonymous ftp archive is DIRECTORY pub/harnad/Psycoloquy HOST princeton.edu From juergen at idsia.ch Wed Jun 21 04:09:40 1995 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 21 Jun 95 10:09:40 +0200 Subject: one more Message-ID: <9506210809.AA23052@fava.idsia.ch> http://www.idsia.ch/reports.html FTP-host: fava.idsia.ch (192.132.252.1) FTP-filename: /pub/papers/idsia59-95.ps.gz (12 pages, 69k) ENVIRONMENT-INDEPENDENT REINFORCEMENT ACCELERATION Technical Note IDSIA-59-95 Write-up of invited talk at Hongkong Univ. ST (May 29, 1995) Juergen Schmidhuber, IDSIA A reinforcement learning system with limited computational resources interacts with an unrestricted, unknown environment. Its goal is to maximize cumulative reward, to be obtained throughout its limited, unknown lifetime. System policy is an arbitrary modifiable algorithm mapping environmental inputs and internal states to outputs and new internal states. The problem is: in realistic, unknown environments, each policy modification process (PMP) occurring during system life may have unpredictable influence on environmental states, rewards and PMPs at any later time. Existing reinforcement learning algorithms cannot properly deal with this. Neither can naive exhaustive search among all policy candidates -- not even in the case of very small search spaces. In fact, a reasonable way of measuring performance improvements in such general (but typical) situations is missing. I define such a measure based on the novel ``reinforcement acceleration criterion'' (RAC).
RAC is satisfied if the beginning of each completed PMP that computed a currently valid policy modification has been followed by faster average reinforcement intake than system start-up and the beginnings of all previous such PMPs (the computation time for PMPs is taken into account). Then I present a method called ``environment-independent reinforcement acceleration'' (EIRA) which is guaranteed to achieve RAC. EIRA cares neither whether the system's policy allows for changing itself, nor whether there are multiple, interacting learning systems. Consequences are: (1) a sound theoretical framework for ``meta-learning'' (because the success of a PMP recursively depends on the success of all later PMPs, for which it is setting the stage); (2) a sound theoretical framework for multi-agent learning. The principles have been implemented (1) in a single system using an assembler-like programming language to modify its own policy, and (2) in a system consisting of multiple agents, where each agent is in fact just a connection in a fully recurrent reinforcement learning neural net. A by-product of this research is a general reinforcement learning algorithm for such nets. Preliminary experiments illustrate the theory. Juergen Schmidhuber IDSIA, Corso Elvezia 36 6900-Lugano, Switzerland juergen at idsia.ch http://www.idsia.ch From john at dcs.rhbnc.ac.uk Thu Jun 22 11:15:36 1995 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 22 Jun 95 16:15:36 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): several new reports available ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-022: ---------------------------------------- Option price forecasting using artificial neural networks by A. Fiordaliso, Université de Mons-Hainaut Abstract: (Paper is in French) The problem considered here is forecasting the price of a call option on a short-term interest-rate future, namely the 3-month Eurodollar (ED3). The aim of our research is to build up Artificial Neural Network (ANN) models that could be integrated into a fuzzy expert system to dynamically manage an option portfolio. We detail some problems and techniques related to the set-up of ANN models for univariate and multivariate predictions. We compare our results with some other forecasting techniques. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-041: ---------------------------------------- A General Feedforward Neural Network Model by Cédric GEGOUT, Bernard GIRAU and Fabrice ROSSI Ecole Normale Supérieure de Lyon, Ecole Normale Supérieure de Paris, THOMSON-CSF/SDC/DPR/R4, Bagneux, France Abstract: In this paper, we generalize a model proposed by Léon Bottou and Patrick Gallinari. This model gives a general mathematical description of feedforward neural networks, for which standard models, such as Multi-Layer Perceptrons or Radial Basis Function based neural networks, are only particular cases. A generalized back-propagation, which gives an efficient way to compute the differential of the function computed by the neural network, is introduced and carefully proved. We also introduce an evaluation of the theoretical time needed to compute the differential with the help of both the direct algorithm and back-propagation.
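To make the back-propagation computation discussed in NC-TR-95-041 concrete, here is a minimal Python sketch that computes the differential (the input-output Jacobian) of a tanh multi-layer perceptron by reverse-mode accumulation. It is a generic textbook construction under our own naming, not the authors' generalized model.

    import numpy as np

    def forward(x, Ws, bs):
        # Forward pass through a tanh MLP, keeping all activations
        # so the backward pass can reuse them.
        acts = [x]
        for W, b in zip(Ws, bs):
            x = np.tanh(W @ x + b)
            acts.append(x)
        return acts

    def input_jacobian(acts, Ws):
        # Reverse-mode accumulation of d(output)/d(input). Since
        # a = tanh(z), each layer contributes diag(1 - a**2) @ W.
        J = np.eye(acts[-1].size)
        for W, a in zip(reversed(Ws), reversed(acts[1:])):
            J = J @ np.diag(1.0 - a ** 2) @ W
        return J

    rng = np.random.default_rng(0)
    Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
    bs = [np.zeros(4), np.zeros(2)]
    acts = forward(rng.standard_normal(3), Ws, bs)
    print(input_jacobian(acts, Ws).shape)   # (2, 3): one row per output unit

The timing comparison the report mentions is visible even here: building the Jacobian by forward differences would need one extra network evaluation per input dimension, whereas the single backward sweep above reuses the stored activations.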
---------------------------------------- NeuroCOLT Technical Report NC-TR-95-043: ---------------------------------------- On-line Learning with Malicious Noise and the Closure Algorithm by Peter Auer, IGI, Graz University of Technology Nicolò Cesa-Bianchi, DSI, University of Milan Abstract: We investigate a variant of the on-line learning model for classes of $\{0,1\}$-valued functions (concepts) in which the labels of a certain amount of the input instances are corrupted by adversarial noise. We propose an extension of a general learning strategy, known as the ``Closure Algorithm'', to this noise model, and show a worst-case mistake bound of $m + (d+1)K$ for learning an arbitrary intersection-closed concept class $\mathcal{C}$, where $K$ is the number of noisy labels, $d$ is a combinatorial parameter measuring $\mathcal{C}$'s complexity, and $m$ is the worst-case mistake bound of the Closure Algorithm for learning $\mathcal{C}$ in the noise-free model. For several concept classes our extended Closure Algorithm is efficient and can tolerate a noise rate up to the information-theoretic upper bound. Finally, we show how to efficiently turn any algorithm for the on-line noise model into a learning algorithm for the PAC model with malicious noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-044: ---------------------------------------- Neural Networks with Quadratic VC Dimension by Pascal Koiran, Ecole Normale Supérieure de Lyon Eduardo D. Sontag, Rutgers University Abstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights $w$. This result settles a long-standing open question, namely whether the well-known $O(w \log w)$ bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-045: ---------------------------------------- Learning Internal Representations (Short Version) by Jonathan Baxter, Royal Holloway, University of London Abstract: Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for {\em automatically} learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate {\em internal representation} for a learning environment and then using that representation to bias the learner's hypothesis space for the learning of future tasks drawn from the same environment. An internal representation must be learnt by sampling from {\em many similar tasks}, not just a single task as occurs in ordinary machine learning. It is proved that the number of examples $m$ {\em per task} required to ensure good generalisation from a representation learner obeys $m = O(a+b/n)$ where $n$ is the number of tasks being learnt and $a$ and $b$ are constants. If the tasks are learnt independently ({\em i.e.} without a common representation) then $m=O(a+b)$. It is argued that for learning environments such as speech and character recognition $b \gg a$ and hence representation learning in these environments can potentially yield a drastic reduction in the number of examples required per task.
It is also proved that if $n = O(b)$ (with $m=O(a+b/n)$) then the representation learnt will be good for learning novel tasks from the same environment, and that the number of examples required to generalise well on a novel task will be reduced to $O(a)$ (as opposed to $O(a+b)$ if no representation is used). It is shown that gradient descent can be used to train neural network representations and the results of an experiment are reported in which a neural network representation was learnt for an environment consisting of {\em translationally invariant} Boolean functions. The experiment ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-046: ---------------------------------------- Learning Model Bias by Jonathan Baxter, Royal Holloway, University of London Abstract: In this paper the problem of {\em learning} appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a sufficient bound is given on the number of tasks that must be learnt. A corollary of the theorem is that in appropriate domains the number of examples required per task for good generalisation when learning $n$ tasks scales like $\frac{1}{n}$. An experiment providing strong qualitative support for the theoretical results is reported. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-047: ---------------------------------------- The Canonical Metric for Vector Quantization by Jonathan Baxter, Royal Holloway, University of London Abstract: To measure the quality of a set of vector quantization points a means of measuring the distance between two points is required. Common metrics such as the {\em Hamming} and {\em Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is argued that there often exists a natural {\em environment} of functions associated with the quantization process (for example, the word classifiers in speech recognition and the character classifiers in character recognition) and that such an environment induces a {\em canonical metric} on the space being quantized. It is proved that optimizing the {\em reconstruction error} with respect to the canonical metric gives rise to optimal approximations of the functions in the environment, so that the canonical metric can be viewed as embodying all the essential information relevant to learning the functions in the environment. Techniques for {\em learning} the canonical metric are discussed, in particular the relationship between learning the canonical metric and {\em internal representation learning}. ---------------------------------------- NeuroCOLT Technical Report NC-TR-95-048: ---------------------------------------- The Complexity of Query Learning Minor Closed Graph Classes by Carlos Domingo, Tokyo Institute of Technology John Shawe-Taylor, Royal Holloway, University of London Abstract: The paper considers the problem of learning classes of graphs closed under taking minors. It is shown that any such class can be properly learned in polynomial time using membership and equivalence queries. The representation of the class is in terms of a set of minimal excluded minors (obstruction set). Moreover, a negative result for learning such classes using only equivalence queries is also provided, after introducing a notion of reducibility between query learning problems.
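Several of the reports above work in the membership/equivalence query model. As a toy illustration of equivalence-query learning (with monotone conjunctions standing in for the much richer graph classes of NC-TR-95-048), here is a self-contained Python sketch; the exhaustive oracle and all names are our own inventions.

    from itertools import product

    def learn_monotone_conjunction(n, equivalence_query):
        # Start from the conjunction of all n variables. The hypothesis is
        # always at least as strict as the target, so every counterexample
        # is a positive example we reject: drop its zero-valued variables.
        hyp = set(range(n))
        while True:
            x = equivalence_query(hyp)
            if x is None:                 # hypothesis equals the target
                return hyp
            hyp &= {i for i in range(n) if x[i] == 1}

    target = {0, 3}                       # the hidden conjunction x0 AND x3

    def oracle(hyp):
        # Toy equivalence oracle: exhaustively search {0,1}^4 for a
        # point where hypothesis and target disagree.
        for x in product((0, 1), repeat=4):
            if all(x[i] for i in target) != all(x[i] for i in hyp):
                return x
        return None

    print(learn_monotone_conjunction(4, oracle))   # -> {0, 3}

Each counterexample removes at least one variable, so at most n equivalence queries are made; the reports' results concern how far such exact-learning guarantees extend to richer classes and to adversarially noisy labels.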
---------------------------------------- NeuroCOLT Technical Report NC-TR-95-049: ---------------------------------------- Generalisation of A Class of Continuous Neural Networks by John Shawe-Taylor and Jieyu Zhao, Royal Holloway, University of London Abstract: We propose a way of using Boolean circuits to perform real-valued computation in a way that naturally extends their Boolean functionality. The functionality of multiple fan-in threshold gates in this model is shown to mimic that of a hardware implementation of continuous Neural Networks. A Vapnik-Chervonenkis dimension and sample size analysis for the systems is performed, giving the best known sample sizes for a real-valued Neural Network. Experimental results confirm the conclusion that the sample sizes required for the networks are significantly smaller than for sigmoidal networks. ----------------------- The report NC-TR-95-022 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-95-022.ps.Z ftp> bye % zcat nc-tr-95-022.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor From icsc at freenet.edmonton.ab.ca Thu Jun 22 12:08:53 1995 From: icsc at freenet.edmonton.ab.ca (icsc@freenet.edmonton.ab.ca) Date: Thu, 22 Jun 1995 10:08:53 -0600 (MDT) Subject: Announcement / Call for papers SOCO'96 Message-ID: ICSC - International Computer Science Conventions Call for Papers International Symposium on SOFT COMPUTING - SOCO'96 (Fuzzy Logic, Artificial Neural Networks and Genetic Algorithms) To be held at the University of Reading, Whiteknights, Reading, England March 26 - 28, 1996 I. SPONSORS University of Reading, U.K. International Computer Science Conventions (ICSC), Canada/Switzerland II. ORGANISATION OF THE CONFERENCE SOCO'96 is organised as a parallel conference to IIA'96 (International Symposium on Intelligent Industrial Automation). Both conferences are joint operations of the Department of Cybernetics, University of Reading, England and International Computer Science Conventions (ICSC), Canada/Switzerland. III. PURPOSE OF THE CONFERENCE The purpose of this conference is to assist communication of research in the fields of Fuzzy Logic, Neural Networks, Genetic Algorithms and their technological applications. The 'marriage' of fuzzy logic and neural net technologies offers many advantages in terms of fault-tolerance and speed of implementation. Intelligent automation is achieved by implementing humanlike intelligence and soft computing, a newly introduced concept which encompasses three intelligence-based methods: fuzzy logic, neural networks and genetic algorithms. IV. TOPICS Papers are encouraged in all areas related to Soft Computing, such as the following examples: * Artificial Neural Networks * Fuzzy Logic * Fuzzy Control * Genetic Algorithms * AI and Expert Systems * Probabilistic Reasoning * Machine Learning * Distributed Intelligence * Learning Algorithms and Intelligent Control * Self-Organizing Systems V. INTERNATIONAL SCIENTIFIC COMMITTEE - ISC H. Adeli, USA / E. Alpaydin, Turkey / P.G.
Anderson, USA (Chairman) / M. Dorigo, Belgium / H. Hellendoorn, Germany / M. Jamshidi, USA/France / B. Kosko, USA / F. Masulli, Italy / P.G. Morasso, Italy / C.C. Nguyen, USA / G.D. Smith, U.K. / N. Steele, U.K. (Vice Chairman) / S. Tzafestas, Greece / K. Warwick, U.K.
VI. PUBLICATION OF PAPERS All accepted papers will appear in the conference proceedings, published by ICSC Academic Press. In addition, some selected papers may also be considered for journal publication.
VII. SUBMISSION OF MANUSCRIPTS Prospective authors are requested to send two copies of their abstracts of 500 words for review by the International Scientific Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. If authors believe that more details are necessary to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the International Scientific Committee. The abstract should also include: * Indication of whether the paper is submitted for SOCO'96 or IIA'96 (see separate call for papers) * Title of proposed paper * Authors' names, affiliations, addresses * Name of author to contact for correspondence * Email address and fax number of contact author * Name of topic which best describes the paper (max. 5 keywords) Contributions are welcome both from academics and from those working in industry with experience in the topics of this conference. The Conference language is English. Abstracts may be submitted either by electronic mail (ASCII text), fax or mail (2 copies) to any one of the following addresses:
ICSC Canada P.O. Box 279 Millet, Alberta T0C 1Z0 Canada Email: icsc at freenet.edmonton.ab.ca Fax: +1-403-387-4329
or ICSC Switzerland P.O. Box 657 CH-8055 Zurich Switzerland
or University of Reading Dept. of Cybernetics Whiteknights P.O. Box 225 Reading RG6 6AY U.K.
VIII. WORKSHOP Contributions for a workshop on Soft Computing Methods for Pattern Recognition are welcome and abstracts (marked "workshop") may be submitted to ICSC Canada until July 31, 1995.
IX. DEADLINES AND REGISTRATION It is the intention of the organizers to have the conference proceedings available for the delegates. Consequently, the deadlines below are to be strictly respected: * Submission of Abstracts: July 31, 1995 * Notification of Acceptance: September 30, 1995 * Delivery of Full Papers: November 30, 1995 * Early registration: November 30, 1995 * Late registration: thereafter
Full registration (approx. English Pounds 325.00 for early registration) includes attendance at all sessions, lunches, dinners and coffee-breaks, the pre-conference reception, the conference banquet/social programme and the conference proceedings. Combined registration for SOCO'96 and IIA'96 will be available at reduced rates. Full-time students with a valid student ID card may register at a reduced rate that excludes the proceedings, banquet/social programme and meals. Extra banquet/social programme tickets will be sold for accompanying persons and students. The proceedings can be purchased separately.
X. ACCOMMODATION Accommodation (not included in the registration fee) is available at very reasonable rates on the University campus. Full details will be made available with the letter of acceptance.
XI. FURTHER INFORMATION For further information please contact: ICSC Canada, P.O.
Box 279, Millet, Alberta, T0C 1Z0, Canada Email: icsc at freenet.edmonton.ab.ca Fax: +1-403-387-4329 / Phone: +1-403-387-3546
or University of Reading, Department of Cybernetics, Whiteknights, P.O. Box 225, Reading RG6 6AY, U.K. Fax: +44-1734-318 220 / Phone: +44-1734-318 214
ICSC CANADA email: icsc at freenet.edmonton.ab.ca MILLET, AB, T0C 1Z0

From smagt at fwi.uva.nl Fri Jun 23 05:34:12 1995 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Fri, 23 Jun 1995 11:34:12 +0200 (MET DST) Subject: Preprint available (robotics & vision NN) Message-ID: <199506230934.AA03412@brad.fwi.uva.nl>

The following preprint is now available:
A VISUALLY GUIDED ROBOT AND A NEURAL NETWORK JOIN TO GRASP SLANTED OBJECTS
P. van der Smagt, A. Dev, and F.C.A. Groen (1995) Proceedings of the 1995 Dutch Conference on Neural Networks (in print)
---------------------------------------------------------------------------
FTP-HOST: ftp.fwi.uva.nl (146.50.3.49)
FTP-FILE: pub/computer-systems/aut-sys/reports/SmaDevGro95.ps.gz 84 Kb, 8 pages
---------------------------------------------------------------------------
ftp://ftp.fwi.uva.nl/pub/computer-systems/aut-sys/reports/SmaDevGro95.ps.gz
http://www.fwi.uva.nl/fwi/research/vg4/neuro/publications/publications.html
---------------------------------------------------------------------------
Abstract: In this paper we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, with a single camera in its end-effector, must be positioned above a target placed against a textured background, while the camera's pan and tilt change. It is shown that a trajectory can be planned in visual space by using components of the optic flow, and that this trajectory can be translated into joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.

From uzimmer at informatik.uni-kl.de Fri Jun 23 10:07:28 1995 From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP) Date: Fri, 23 Jun 95 15:07:28 +0100 Subject: PhD thesis "Adaptive Approaches to Basic Mobile Robot Tasks" available Message-ID: <950623.150728.2257@ag-vp-file-server.informatik.uni-kl.de>

PhD thesis available via WWW / FTP: keywords: mobile robots, exploration, world modelling, navigation, object recognition, artificial neural networks, fuzzy logic
------------------------------------------------------------------
Adaptive Approaches to Basic Mobile Robot Tasks
------------------------------------------------------------------
Uwe R. Zimmer
PhD thesis - January 1995
The present thesis addresses the research field of adaptive behaviour concerning mobile robots. The world as "seen" by the robot is previously unknown and has to be explored by manoeuvring according to certain optimization criteria. This assumption enhances the fitness of a mobile robot for a range of applications beyond rigid installations, which normally demand significant effort and offer only limited ability to adapt to changes in the environment. A central concept emphasized in this thesis is the achievement of competence and fitness through continuous interaction with the robot's world. Lifelong learning is considered: it continues even after a temporarily sufficient degree of adaptation has been achieved, running in parallel with the robot's actual application. The levels of competence are generated bottom-up, i.e. upper levels are based on the robot's current experience as modelled in lower levels.
The terms in which higher-level skills are formulated are themselves generated through real-world interactions on lower levels. The robotics problems discussed are limited to some basic tasks, which are found to be relevant for most mobile robot applications. These are exploration of unknown environments, stable self-localization with respect to the current world and its internal representation, as well as navigation, target extraction, and target recognition. In order to cope with problems resulting from a lack of proper a-priori knowledge and of well-defined, reliably detectable symbols in unknown and dynamic environments, connectionist methods are employed to a great extent. Real-time constraints are considered at all levels of competence, with the natural exception of global planning. The research field of target extraction and identification with respect to mobile robot constraints leads especially to the discussion of visual search (steering), of the extraction of geometric primitives even at system start-up time, and of the generation of symbols out of subsymbolic processing. These symbols can be reliably recognized and should be suitable for a subsequent symbolic planning level, which is outside the focus of the present thesis. The presented approach ensures a large degree of adaptability on all levels, to an extent not discussed before; some components (e.g. visual search with highly focused devices) are investigated here for the first time. The exploration, self-localization, and navigation tasks are addressed by an integrated approach that allows these tasks to be processed in parallel in a dynamic environment. The stability and reliability of the discussed techniques are demonstrated through real-time, real-world experiments with a mobile platform. The high error tolerance, the low demands on the sensor devices used, and the small computational power required are (currently) unique features of the presented method.
Files:
- Part I : Introduction - 24 pages, 0.9 MB
- Part II : ALICE - 30 pages, 2.0 MB
- Part III : SPIN - 60 pages, 1.6 MB
- Part IV : Conclusion & Appendix - 38 pages, 0.9 MB
for the WWW-links to the files of this thesis:
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Projekte/ALICE/abs.PhD.html
------------------------------------------------------------------
for the homepage of the author (including more reports):
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/
------------------------------------------------------------------
or for the ftp-server hosting the files:
------------------------------------------------------------------
ftp://ag-vp-ftp.informatik.uni-kl.de/Public/Neural_Networks/Reports/Zimmer.PhD/ ...
------------------------------------------------------------------
----------------------------------------------------- ----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | 67663 Kaiserslautern - Germany | ------------------------------.--------------------------------. Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | ------------------------------.--------------------------------.
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/ |

From milanese at cui.unige.ch Fri Jun 23 09:28:58 1995 From: milanese at cui.unige.ch (Ruggero Milanese) Date: Fri, 23 Jun 1995 15:28:58 +0200 Subject: Combining multiple estimates Message-ID: <2340*/S=milanese/OU=cui/O=unige/PRMD=switch/ADMD=arcom/C=ch/@MHS>

Hello, I am interested in computing the trajectory and the velocity parameters of objects moving in the camera field of view, combining classical computer vision and neural network algorithms. I have several measures/estimates of the motion parameters, extracted through different methods. I am interested in combining these estimates in order to obtain a "combined estimate" which is closer to the "real values". So far, I have found the following papers that seem relevant to this problem:
- "Combining estimators using non-constant weighting functions", by V. Tresp and M. Taniguchi, Advances in Neural Information Processing Systems 7, MIT Press, 1995.
- "Multitarget-Multisensor Tracking: Advanced Applications", editor Bar-Shalom, Artech House, 1990.
- "Optimal linear combination of neural networks", PhD thesis, Sh. Hashem, 1993.
Could anyone please give me any other pointers or suggest alternative approaches that possibly use non-constant weighting functions? Please send replies to: Sylvia Gil, Dept. of Computer Science, University of Geneva, 24, rue du General Dufour, 1211 Geneva 4, Switzerland. E-mail: gil at cui.unige.ch Phone: +41 (22) 705-7628 Fax: +41 (22) 705-7780 http://cuiwww.unige.ch
Thank you very much. -Ruggero Milanese University of Geneva, Switzerland milanese at cui.unige.ch

From rinkus at PARK.BU.EDU Fri Jun 23 10:56:00 1995 From: rinkus at PARK.BU.EDU (rinkus@PARK.BU.EDU) Date: Fri, 23 Jun 1995 10:56:00 -0400 Subject: paper avail.: "TEMECOR: An Associative, Episodic, Temporal Sequence Memory" Message-ID: <199506231456.KAA03369@space.bu.edu>

FTP-host: cns-ftp.bu.edu
FTP-file: pub/rinkus/nips95_rinkus.ps.Z
The following paper, which has been submitted to NIPS-95, and which is an extended and revised version of a paper that has been accepted for an invited address at WCNN-95, is available via anonymous FTP at the above location. The paper is 8 pages long.
============================================================================
TEMECOR: An Associative, Episodic, Temporal Sequence Memory
Gerard J. Rinkus
Cognitive and Neural Systems Department Boston University Boston, MA 02215 rinkus at cns.bu.edu
ABSTRACT A distributed associative neural model of {\em episodic memory}\/ for spatio-temporal patterns is presented. The model exhibits {\em faster-than-linear}\/ capacity scaling, under single-trial learning, for both uncorrelated and correlated patterns. The correlated pattern sets used in simulations reported herein are, formally, sets of {\em complex state sequences}\/ (CSSs)---i.e. sequences in which states can recur multiple times. Efficient representation of large sets of CSSs is central to speech and language processing. The English lexicon, for example, is formally representable as a set of many thousands of CSSs over an alphabet of about 50 phonemes. The model chooses internal representations (IRs) for each state in a highly random fashion. This implies maximal {\em dispersion}---i.e. maximal average Hamming distance---over the set of IRs chosen during learning. Maximal dispersion yields maximal episodic availability of the traces of the individual exemplars.
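The dispersion property claimed above is easy to check numerically. A minimal sketch (the sizes and seed are assumptions chosen for illustration, not taken from the paper): randomly chosen binary IRs approach the maximal mean pairwise Hamming distance, which is half the code length.

import numpy as np

# Draw N random binary internal representations (IRs) of length L,
# mimicking the "highly random fashion" of IR choice described above.
rng = np.random.default_rng(1)
N, L = 100, 64
irs = rng.integers(0, 2, size=(N, L))

# Mean pairwise Hamming distance over all distinct pairs of IRs.
diff = (irs[:, None, :] != irs[None, :, :]).sum(axis=-1)
mean_hamming = diff[np.triu_indices(N, k=1)].mean()
print(mean_hamming, "vs. the maximal-dispersion ideal L/2 =", L / 2)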
============================================================================
FTP instructions:
unix> ftp cns-ftp.bu.edu
Name: anonymous
Password: your full email address
ftp> cd pub/rinkus
ftp> get nips95_rinkus.ps.Z
ftp> bye
unix> uncompress nips95_rinkus.ps.Z

From hu at eceserv0.ece.wisc.edu Fri Jun 23 12:41:18 1995 From: hu at eceserv0.ece.wisc.edu (Yu Hu) Date: Fri, 23 Jun 1995 11:41:18 -0500 Subject: LAST CALL FOR PAPERS: 1995 Int' Symp. on ANN (Taiwan, ROC) Message-ID: <199506231641.AA26351@eceserv0.ece.wisc.edu>

******************************************************************************
SECOND AND LAST CALL FOR PAPERS
1995 International Symposium on Artificial Neural Networks
December 18-20, 1995, Hsinchu, Taiwan, Republic of China
******************************************************************************
Sponsored by National Chiao-Tung University in cooperation with Ministry of Education, Taiwan R.O.C., National Science Council, Taiwan R.O.C. and IEEE Signal Processing Society
*************************
Distinguished Speakers:
*************************
Prof. Leon Chua, UC Berkeley, USA.
Prof. John Moody, Oregon Graduate Institute, USA.
Prof. Tse Jun Tarn, Washington Univ., USA.
*************************
Call for Papers
*************************
The third of a series of International Symposia on Artificial Neural Networks will be held at the National Chiao-Tung University, Hsinchu, Taiwan in December of 1995. Papers are solicited for, but not limited to, the following topics: Associative Memory, Robotics, Electrical Neurocomputers, Sensation & Perception, Image/Speech Processing, Sensory/Motor Control Systems, Machine Vision, Supervised Learning, Neurocognition, Unsupervised Learning, Neurodynamics, Fuzzy Neural Systems, Optical Neurocomputers, Mathematical Methods, Optimization, Other Applications.
Prospective authors are invited to submit 4 copies of extended summaries of no more than 4 pages. All manuscripts should be written in English, single-spaced, in a single column, on 8.5" by 11" white paper. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone/fax numbers, and email address if applicable. The indicated corresponding author will receive an acknowledgement of the submission. Camera-ready full papers of accepted manuscripts will be published in a hard-bound proceedings and distributed at the symposium. For more information, please consult the MOSAIC URL site http://www.ee.washington.edu/isann95.html, or use anonymous ftp from pierce.ee.washington.edu/pub/isann95/read.me (128.95.31.129).
*************************
SCHEDULE
*************************
Submission of extended summary: July 15
Notification of acceptance: September 30
Submission of photo-ready paper: October 31
Advance registration, before: November 10
*************************
For submission from USA and Europe:
*************************
Professor Yu-Hen Hu, Dept. of Electrical and Computer Engineering, Univ. of Wisconsin - Madison, Madison, WI 53706-1691 Phone: (608) 262-6724, Fax: (608) 262-1267 Email: hu at engr.wisc.edu
*************************
For submission from Asia and Other Areas:
*************************
Professor Sin-Horng Chen, Dept. of Communication Engineering, National Chiao-Tung Univ., Hsinchu, Taiwan Phone: (886) 35-712121 ext.
54522, Fax: (886) 35-710116 Email: isann95 at cc.nctu.edu.tw
ORGANIZATION
General Co-Chairs: Hsin-Chia Fu, National Chiao-Tung University, Hsinchu, Taiwan (hcfu at csie.nctu.edu.tw) and Jenq-Neng Hwang, University of Washington, Seattle, Washington, USA (hwang at ee.washington.edu)
Program Co-Chairs: Sin-Horng Chen, National Chiao-Tung University, Hsinchu, Taiwan (schen at cc.nctu.edu.tw) and Yu-Hen Hu, University of Wisconsin, Madison, Wisconsin, USA (hu at engr.wisc.edu)
Advisory Board Co-Chairs: Sun-Yuan Kung, Princeton University, Princeton, New Jersey, USA and C. Y. Wu, National Science Council, Taipei, Taiwan, ROC

From baluja at GS93.SP.CS.CMU.EDU Fri Jun 23 16:18:51 1995 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Fri, 23 Jun 95 16:18:51 EDT Subject: paper: Removing the Genetics from the Standard Genetic Algorithm Message-ID:

Title: Removing the Genetics from the Standard Genetic Algorithm
By: Shumeet Baluja and Rich Caruana
Abstract: We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being simpler, both computationally and theoretically, than the GA. Empirical results reported elsewhere show that PBIL is faster and more effective than the GA on a large set of commonly used benchmark problems. Here we present results on a problem custom designed to benefit both from the GA's crossover operator and from its use of a population. The results show that PBIL performs as well as, or better than, GAs carefully tuned to do well on this problem. This suggests that even on problems custom designed for GAs, much of the power of the GA may derive from the statistics maintained implicitly in its population, and not from the population itself nor from the crossover operator.
This paper may be of interest to the connectionist community as the PBIL algorithm is largely based upon supervised competitive learning algorithms. This paper will appear in the Proceedings of the International Conference on Machine Learning, 1995.
instructions ----------------------------------------------
via anonymous ftp at: reports.adm.cs.cmu.edu
once you are logged in, issue the following commands:
binary
cd 1995
get CMU-CS-95-141.ps

From baluja at GS93.SP.CS.CMU.EDU Fri Jun 23 17:05:45 1995 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Fri, 23 Jun 95 17:05:45 EDT Subject: paper: ANN Based Task-Specific Focus of Attention Message-ID:

Title: Using the Representation in a Neural Network's Hidden Layer for Task-Specific Focus of Attention
By: Shumeet Baluja and Dean Pomerleau
Abstract: In many real-world tasks, the ability to focus attention on the important features of the input is crucial for good performance. In this paper a mechanism for achieving task-specific focus of attention is presented. A saliency map, which is based upon a computed expectation of the contents of the inputs at the next time step, indicates which regions of the input retina are important for performing the task. The saliency map can be used to accentuate the features which are important, and de-emphasize those which are not. The performance of this method is demonstrated on a real-world robotics task: autonomous road following. The applicability of this method is also demonstrated in a non-visual domain. Architectural and algorithmic details are provided, as well as empirical results.
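In outline, the input modulation described in this abstract can be sketched as follows (a rough sketch under assumed array shapes and an assumed gating form, not the authors' exact formulation):

import numpy as np

def modulate_input(x_t, expected_x_t, floor=0.2):
    # expected_x_t is the network's prediction, made at the previous time
    # step, of the contents of the current input (the "computed
    # expectation" above).  Features the expectation marks as relevant
    # are accentuated and the rest de-emphasized; the particular saliency
    # choice here -- expectation magnitude with a floor, so no feature is
    # silenced entirely -- is an assumption for illustration.
    saliency = np.clip(np.abs(expected_x_t), floor, 1.0)
    return saliency * x_t

# usage: x_t and expected_x_t are same-shape retina activation arrays
x_t = np.array([0.9, 0.1, 0.8]); expected = np.array([1.0, 0.0, 0.7])
print(modulate_input(x_t, expected))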
This paper will appear in IJCAI 95.
instructions ----------------------------------------------
via anonymous ftp at: reports.adm.cs.cmu.edu
once you are logged in, issue the following commands:
binary
cd 1995
get CMU-CS-95-143.ps

From rafal at mech.gla.ac.uk Sun Jun 25 12:35:13 1995 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Sun, 25 Jun 1995 17:35:13 +0100 Subject: PhD on neurocontrol: 2nd announcement Message-ID: <4600.199506251635@gryphon.mech.gla.ac.uk>

My PhD thesis on neurocontrol can be found on the anonymous FTP server ftp.mech.gla.ac.uk (130.209.12.14) in directory rafal as the PostScript file (ca 1.2 M) zbikowski_phd.ps. For details see abstract below.
Rafal Zbikowski Control Group, Department of Mechanical Engineering, Glasgow University, Glasgow G12 8QQ, Scotland, UK rafal at mech.gla.ac.uk
----------------------------- cut here ---------------------------------
``Recurrent Neural Networks: Some Control Aspects'' PhD Thesis Rafal Zbikowski
ABSTRACT
This work aims at rigorous theoretical research on nonlinear adaptive control using recurrent neural networks. Attention is focussed on the dynamic, nonlinear parametric structures as generic models suitable for on-line use. The discussion is centred around proper mathematical formulation and analysis of the complex and abstract issues and therefore no experimental data are given. The main aim of this work is to explore the capabilities of deterministic, continuous-time recurrent neural networks as state-space, generic, parametric models in the framework of nonlinear adaptive control. The notion of *nonlinear neural adaptive control* is introduced and discussed. The continuous-time state-space approach to recurrent neural networks is used. A general formalism of genericity of control is set up and developed into the *differential approximation* as the focal point of recurrent networks theory. A comparison of approaches to neural approximation, both feedforward and recurrent, is presented within a unified framework and with emphasis on relevance for neurocontrol. Two approaches to identifiability of recurrent networks are analysed in detail: one based on the State Isomorphism Theorem and the other on the I/O equivalence. The Lie algebra associated with recurrent networks is described and difficulties in the verification of (weak) controllability and observability are pointed out. Learning algorithms for recurrent networks are systematically presented and interpreted as deterministic, infinite-dimensional optimisation problems. Also, the continuous-time version of Real-Time Recurrent Learning is rigorously derived. Proper links between recurrent learning and optimal control are established. Finally, the interpretation of graceful degradation as an optimal sensitivity problem is given.

From dhw at santafe.edu Sun Jun 25 14:51:58 1995 From: dhw at santafe.edu (David Wolpert) Date: Sun, 25 Jun 95 12:51:58 MDT Subject: No subject Message-ID: <9506251851.AA12379@sfi.santafe.edu>

In a recent posting, Sylvia Gil asks for "pointers to ... approaches that ... use non-constant weighting functions (to combine estimators)."
The oldest work in the neural network community on non-constant combining of estimators, and by far the most thoroughly researched, is stacking.^1 Stacking is basically the idea of using the behavior of estimators when trained on part of the training set and queried on the rest of it to learn how best to combine those estimators. The original work on stacking was
Wolpert, D. (1992). "Stacked Generalization".
Neural Networks, 5, p. 241.
and the earlier tech report (1990) upon which it was based. Other work on stacking includes the papers
Breiman, L. (1992). "Stacked Regression". University of California Berkeley Statistics Dept., tech. report 367. {I believe this is now in press in Machine Learning.}
LeBlanc, M., and Tibshirani, R. (1993). "Combining estimates in regression and classification". University of Toronto Statistics Dept., tech. report.
In addition, much in each of the following papers concerns stacking:
Chan, P., and Stolfo, S. (1995). "A Comparative Evaluation of Voting and Meta-Learning on Partitioned Data". To appear in the Proceedings of ML 95.
Krogh, A. (1995). To appear in NIPS 7, Morgan Kaufmann. {I forget the title, as well as who the other author is.}
MacKay, D. (1993). "Bayesian non-linear modeling for the energy prediction competition". Cavendish Laboratory, Cambridge University tech. report.
Zhang, X., Mesirov, J., Waltz, D. (1993). J. Mol. Biol., 232, p. 1227.
Zhang, X., Mesirov, J., Waltz, D. (1992). J. Mol. Biol., 225, p. 1049.
Moreover, one of the references Gil mentioned (Hashem's) is a rediscovery and then investigation of stacking. (Hashem was not aware of the previous work on stacking when he did his research.) Finally, a simple variant of the way to use stacking to improve a single estimator, called "EESA", is the subject of the following paper
Kim, Bartlett (1995). "Error estimation by series association for neural network systems". Neural Computation, 7, p. 799.
Two non-stacking references on combining you should probably read are
Meir, R. (1995). "Bias, variance, and the combination of estimators; the case of linear least squares". To appear in NIPS 7, Morgan Kaufmann.
Perrone, M. (1993). Ph.D. thesis, Brown University Physics Dept.
David Wolpert
1 - Actually, there is some earlier work on combining estimators, in which one does not partition the training set (as in stacking), but rather uses the residuals (created by training the estimators on the full training set) to combine those estimators. However this scheme consistently performs worse than stacking. See for example the earlier of the two articles by Zhang et al.

From maggini at McCulloch.Ing.UniFI.IT Mon Jun 26 06:26:03 1995 From: maggini at McCulloch.Ing.UniFI.IT (Marco Maggini) Date: Mon, 26 Jun 1995 12:26:03 +0200 Subject: paper available on grammatical inference using RNN Message-ID: <9506261026.AA19956@McCulloch.Ing.UniFI.IT>

FTP-host: ftp-dsi.ing.unifi.it
FTP-filename: /pub/tech-reports/noisy-gram.ps.Z
The following paper, which has been submitted to NEURAP-95, is available via anonymous FTP at the above location. The paper is 8 pages long (about 200Kb).
========================================================================
Learning Regular Grammars From Noisy Examples Using Recurrent Neural Networks
M. Gori, M. Maggini, and G. Soda
Dipartimento di Sistemi e Informatica, Universita' di Firenze, Via di Santa Marta 3 - 50139 Firenze - Italy Tel. +39 (55) 479.6265 - Fax +39 (55) 479.6363 E-mail : {marco,maggini,giovanni}@mcculloch.ing.unifi.it WWW: http://www-dsi.ing.unifi.it/neural
ABSTRACT
Many successful results have recently been reported concerning the application of recurrent neural networks to the induction of simple finite state grammars. These results can be used to explore the computational capabilities of neural models applied to symbolic tasks. Many insights have been given on the links between the continuous dynamics of a recurrent neural network and the symbolic rules that we want to learn.
However, so far, the advantages of dynamical adaptive models and the related gradient-driven learning techniques with respect to classical symbolic inference algorithms have not been clearly shown. In this paper, we explore a class of inductive inference problems that seems to be very well-suited for optimization-based learning algorithms. Bearing in mind the idea of an optimal rather than a perfect solution, we explore how optimality criteria can help a successful development of the learning process when some of the examples are erroneously labeled. Some experimental results show that neural network-based learning algorithms favor the development of the ``simplest'' solutions, thus eliminating most of the exceptions that arise when dealing with erroneous examples.
=============================================================================
The paper can be accessed and printed as follows:
% ftp ftp-dsi.ing.unifi.it (150.217.11.10)
Name: anonymous
password: your full email address
ftp> cd /pub/tech-reports
ftp> binary
ftp> get noisy-gram.ps.Z
ftp> bye
% uncompress noisy-gram.ps.Z
% lpr noisy-gram.ps

From hamps at stevens.speech.cs.cmu.edu Mon Jun 26 09:48:41 1995 From: hamps at stevens.speech.cs.cmu.edu (John Hampshire) Date: Mon, 26 Jun 95 09:48:41 -0400 Subject: combining estimators w/ non-constant weighting Message-ID: <9506261348.AA27513@stevens.speech.cs.cmu.edu>

In a response to Sylvia Gil David Wolpert writes:
==== The oldest work in the neural network community on non-constant combining of estimators, and by far the most thoroughly researched, is stacking... ====
Perhaps the second claim is accurate, but the first claim (oldest in neural nets) is not. Robbie Jacobs, Mike Jordan, and Steve Nowlan have a long series of papers on the topic of combining estimators (maybe that's not what they called it, but that's certainly what it is); their work dates back to well before 92. Likewise, I wrote a NIPS paper (90 or 91... not worth reading) and an IEEE PAMI article (92... probably worth reading) with Alex Waibel on this topic. Since David's not aware of these earlier and contemporary works, his second ''most thoroughly researched'' claim would appear doubtful as well.
-John

From vecera+ at CMU.EDU Mon Jun 26 11:03:44 1995 From: vecera+ at CMU.EDU (Shaun Vecera) Date: Mon, 26 Jun 1995 11:03:44 -0400 (EDT) Subject: Tech Report: Figure-Ground Organization Message-ID:

The following Technical Report is available either electronically from our own FTP server or in hard copy form. Instructions for obtaining copies may be found at the end of this post.
ftp://hydra.psy.cmu.edu:/pub/pdp.cns/pdp.cns.95.3.ps.Z
========================================================================
FIGURE-GROUND ORGANIZATION AND SHAPE RECOGNITION PROCESSES: AN INTERACTIVE ACCOUNT
Shaun P. Vecera
Randall C. O'Reilly
Technical Report PDP.CNS.95.3
June 1995
Traditional theories of visual processing have assumed that figure-ground organization must precede object representation and identification. Such a view seems logically necessary: How can one recognize an object before the visual system knows which region should be the figure? However, a number of behavioral studies have shown that subjects are more likely to call a familiar region ``figure'' relative to a less familiar region, a finding inconsistent with the traditional accounts of visual processing.
To explain these results, Peterson and colleagues have proposed an additional ``prefigural'' object recognition process that operates before any figure-ground organization (M. A. Peterson, 1994). We propose a more parsimonious interactive account of figure-ground organization in which partial results of figure-ground processes interact with object representations in a hierarchical system similar to that envisioned by traditional theories. We present a computational model that embodies this graded, interactive approach and show that this model can account for several behavioral results, including orientation effects, exposure duration effects, and the combination of multiple cues. Finally, these principles of graded, interactive processing offer the possibility of providing a more general information processing framework for visual and higher-cognitive systems.
=======================================================================
Retrieval information for pdp.cns TRs:
unix> ftp 128.2.248.152 # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.95.3.ps.Z
ftp> quit
unix> zcat pdp.cns.95.3.ps.Z | lpr # or however you print postscript
NOTE: The compressed file is 128K. Uncompressed, the file is 313K. The printed version is 51 total pages. For those who do not have FTP access, physical copies can be requested from Barbara Dorney .

From schwenk at robo.jussieu.fr Mon Jun 26 13:33:45 1995 From: schwenk at robo.jussieu.fr (Holger Schwenk) Date: Mon, 26 Jun 1995 18:33:45 +0100 (WETDST) Subject: NIPS*7 preprint available (OCR, fast tangent distance) Message-ID: <950626183345.29040000.adc22387@lea.robo.jussieu.fr>

**DO NOT FORWARD TO OTHER GROUPS**
FTP-host: ftp.robo.jussieu.fr
FTP-filename: /papers/schwenk.nips7.ps.gz (8 pages, 39k)
The following paper, which will appear in NIPS*7, MIT Press, is available via anonymous FTP at the above location. The paper is 8 pages long; the screendumps will look best on 600dpi laser printers. No hardcopies available.
===============================================================================
Transformation Invariant Autoassociation with Application to Handwritten Character Recognition
H. Schwenk and M. Milgram
PARC - boite 164, Universite Pierre et Marie Curie, 4, place Jussieu, 75252 Paris cedex 05, FRANCE
ABSTRACT
When training neural networks by the classical backpropagation algorithm the whole problem to be learned must be expressed by a set of inputs and desired outputs. However, we often have high-level knowledge about the learning problem. In optical character recognition (OCR), for instance, we know that the classification should be invariant under a set of transformations like rotation or translation. We propose a new modular classification system based on several autoassociative multilayer perceptrons which allows the efficient incorporation of such knowledge. Results are reported on the NIST database of upper case handwritten letters and compared to other approaches to the invariance problem.
============================================================================
FTP instructions:
unix> ftp ftp.robo.jussieu.fr
Name: anonymous
Password: your full email address
ftp> cd papers
ftp> bin
ftp> get schwenk.nips7.ps.gz
ftp> quit
unix> gunzip schwenk.nips7.ps.gz
unix> lp schwenk.nips7.ps (or however you print postscript)
The above ftp server will also host other papers of our group in the near future. I welcome your comments.
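The classification rule implied by the abstract above can be sketched compactly (a sketch under assumed interfaces; the transformation-invariant distance of the paper is replaced here by plain squared reconstruction error):

import numpy as np

def classify_by_reconstruction(x, autoassociators):
    # autoassociators: one reconstruction function per class, each an
    # autoassociative network trained only on examples of its own class.
    # The input is assigned to the class whose module reconstructs it
    # best; the paper's variant replaces this plain squared error with a
    # transformation-invariant (tangent) distance.
    errors = [np.sum((x - f(x)) ** 2) for f in autoassociators]
    return int(np.argmin(errors))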
---------------------------------------------------------------------
Holger Schwenk
PARC - boite 164, Universite Pierre et Marie Curie
4, place Jussieu, 75252 Paris cedex 05, FRANCE
tel: (+33 1) 44.27.63.08, fax: (+33 1) 44.27.62.14
email: schwenk at robo.jussieu.fr
---------------------------------------------------------------------

From unni at neuro.cs.gmr.com Mon Jun 26 06:37:59 1995 From: unni at neuro.cs.gmr.com (K.P. Unnikrishnan CS/50) Date: Mon, 26 Jun 1995 15:37:59 +0500 Subject: Caltech/GM Postdoc in Control Message-ID: <9506261937.AA07347@neuro.cs.gmr.com>

CALIFORNIA INSTITUTE OF TECHNOLOGY
POST-DOCTORAL FELLOWSHIP
Neural Networks for Control
A post-doctoral fellowship is available for research in neural networks for automotive control. The position is initially for one year, renewable for another. This position is part of an ongoing project between Caltech and General Motors. Research in neural architectures and algorithms appropriate to specific real-world problems is a goal of this project. Deadline for submission is August 15, 1995. The selected candidate would be expected to spend part of the time at the GM Research Labs in Michigan and part of the time at Caltech.
Applicants should send a curriculum vitae, a brief description of relevant experience in control theory and neural networks, and names of three references to: Ms. Laura Rodriguez, Caltech 139-74, Pasadena, California 91125. Informal enquiries can be made to unni at gmr.com.
The California Institute of Technology is an affirmative action/equal opportunity employer. Women, minorities, veterans, and disabled persons are encouraged to apply.

From arbib at pollux.usc.edu Mon Jun 26 15:17:35 1995 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Mon, 26 Jun 1995 12:17:35 -0700 Subject: The Handbook of Brain Theory and Neural Networks Message-ID: <199506261917.MAA08341@pollux.usc.edu>

Advertisement and Request for Feedback from Michael A. Arbib: arbib at pollux.usc.edu
ADVERTISEMENT: The following is adapted from the brochure put out by MIT Press for The Handbook of Brain Theory and Neural Networks which they have just published, and which I edited:
"In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, the Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific topics related to two great questions: How does the brain work? and How can we build intelligent machines?
"While many books have appeared on limited aspects of one subfield or another of brain theory and neural networks, this handbook covers the entire sweep of topics - from detailed models of single neurons, analyses of a wide variety of biological neural networks, and connectionist studies, to mathematical analyses of a variety of abstract neural networks, and technological applications of adaptive, artificial neural networks.
"The excitement, and the frustration, of these topics is that they span such a broad range of disciplines, including mathematics, statistical physics and chemistry, neurology and neurobiology, and computer science and electrical engineering as well as cognitive psychology, artificial intelligence, and philosophy. Thus, much effort has gone into making the Handbook accessible to readers with varied backgrounds while still providing a clear view of much of the recent, specialized research in specific topics.
"The heart of the book, Part III - ARTICLES, comprises 266 original articles by leaders in the various fields, arranged alphabetically by title. Parts I and II, written by the editor, are designed to help readers orient themselves to this vast range of material. PART I - BACKGROUND introduces several basic neural models, explains how the present study of Brain Theory and Neural Networks integrates brain theory, artificial intelligence, and cognitive psychology, and provides a tutorial on the concepts essential for understanding neural networks as dynamic, adaptive systems. PART II - ROAD MAPS provides an entree into the many articles of Part III via an introductory "Meta-Map" and twenty-three road maps grouped under eight general headings:
CONNECTIONISM: PSYCHOLOGY, LINGUISTICS, AND AI - Connectionist Psychology; Connectionist Linguistics; Artificial Intelligence and Neural Networks
DYNAMICS, SELF-ORGANIZATION, AND COOPERATIVITY - Dynamic Systems and Optimization; Cooperative Phenomena; Self-Organization in Neural Networks
LEARNING IN ARTIFICIAL NEURAL NETWORKS - Learning in Artificial Neural Networks, Deterministic; Learning in Artificial Neural Networks, Statistical; Computability and Complexity
APPLICATIONS AND IMPLEMENTATIONS - Control Theory and Robotics; Applications of Neural Networks; Implementation of Neural Networks
BIOLOGICAL NEURONS AND NETWORKS - Biological Neurons; Biological Networks; Mammalian Brain Regions
SENSORY SYSTEMS - Vision; Other Sensory Systems
PLASTICITY IN DEVELOPMENT AND LEARNING - Mechanisms of Neural Plasticity; Development and Regeneration of Neural Networks; Learning in Biological Systems
MOTOR CONTROL - Motor Pattern Generators and Neuroethology; Biological Motor Control; Primate Motor Control
*****
REQUEST FOR FEEDBACK
Any feedback on the Handbook would be much appreciated - both praise for what worked well and suggestions for what needs improvement. In particular, what topics are missing in Part III (you'll need to tour the Index of the Handbook as well as looking for the "obvious" headings in the Table of Contents), and what subtopics are missing in individual articles? And how might Parts I and II be improved? Any other comments and suggestions will be most welcome. Meanwhile, I hope that many of you enjoy the book and find that it does succeed in bridging the cultural divide between those who study the brain and those who study artificial neural networks.
*****
With best wishes
Michael Arbib
P.S. Please forward this message to other news groups, etc.

From pierre at mbfys.kun.nl Tue Jun 27 03:33:27 1995 From: pierre at mbfys.kun.nl (Pierre v.d. Laar) Date: Tue, 27 Jun 1995 09:33:27 +0200 (MET DST) Subject: Paper Available: A Neural Model of Visual Attention Message-ID: <199506270733.JAA00291@anthemius.mbfys.kun.nl>

Dear Connectionists,
The following paper has been accepted to appear in the Proceedings of the third SNN Symposium
A Neural Model of Visual Attention
Pi\"erre van de Laar, Tom Heskes, and Stan Gielen
Abstract: We propose a biologically plausible neural model of selective covert visual attention. We show that this model is able to learn to focus on object-specific features. It has learning characteristics similar to those of humans in the learning and unlearning paradigm of Shiffrin and Schneider (1977). R. M.
Shiffrin and W. Schneider. Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2):127--190, 1977.
The online version can be reached through: http://www.mbfys.kun.nl/~pierre/Proc3SNN/
The postscript version is available through ftp at: ftp.mbfys.kun.nl as file: snn/pub/reports/Laar.Proc3SNN.ps.Z
FTP INSTRUCTIONS
unix% ftp ftp.mbfys.kun.nl (or 131.174.83.52)
Name: anonymous
Password: (use your e-mail address)
ftp> cd snn/pub/reports/
ftp> binary
ftp> get Laar.Proc3SNN.ps.Z
ftp> bye
unix% uncompress Laar.Proc3SNN.ps.Z
unix% lpr Laar.Proc3SNN.ps
Remarks, suggestions and relevant references are welcome.
Best wishes
Pi\"erre van de Laar, Department of Medical Physics and Biophysics, University of Nijmegen, The Netherlands pierre at mbfys.kun.nl URL: http://www.mbfys.kun.nl/~pierre/
P.S. More information about the SNN symposium can be found at: http://www.mbfys.kun.nl/SNN/Symposium/

From lxu at cs.cuhk.hk Tue Jun 27 08:27:10 1995 From: lxu at cs.cuhk.hk (Dr. Xu Lei) Date: Tue, 27 Jun 95 20:27:10 +0800 Subject: combining estimators w/ non-constant weighting Message-ID: <9506271227.AA21272@cucs18.cs.cuhk.hk>

The topic of combining classifiers has also been studied in the Pattern Recognition literature as well as in Neural Networks. My colleagues and I have done some work (please use your own judgement to decide whether it is worth reading). -Lei
Lei Xu, Adam Krzyzak and Ching Y. Suen, (1991), ``Associative Switch for Combining Multiple Classifiers", Proc. of 1991 International Joint Conference on Neural Networks, Seattle, July, Vol. I, 1991, pp43-48. A detailed version was later published in {\sl Journal of Artificial Neural Networks}, Vol.1, No.1, pp77-100, 1994.
Lei Xu, Adam Krzyzak and Ching Y. Suen, (1992), ``Several Methods for Combining Multiple Classifiers and Their Applications in Handwritten Character Recognition", {\sl IEEE Trans. on System, Man and Cybernetics}, Vol. SMC-22, No.3, pp418-435, 1992, regular paper.
Lei Xu and M.I. Jordan (1993), ``EM Learning on A Generalized Finite Mixture Model for Combining Multiple Classifiers'', Proceedings of World Congress on Neural Networks, Portland, OR, Vol. IV, 1993.
Lei Xu, M.I. Jordan and G. E. Hinton (1995), ``An Alternative Model for Mixtures of Experts", to appear in {\em Advances in Neural Information Processing Systems 7}, eds., Cowan, J.D., Tesauro, G., and Alspector, J., MIT Press, Cambridge MA, 1995.

From uzimmer at informatik.uni-kl.de Tue Jun 27 12:43:00 1995 From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP) Date: Tue, 27 Jun 95 17:43:00 +0100 Subject: Paper available: Baseline Detection in Chromatograms Message-ID: <950627.174300.2292@ag-vp-file-server.informatik.uni-kl.de>

Paper available via WWW / FTP: keywords: chromatography, baseline detection, artificial neural networks, self-organization, constraint topologic mapping, fuzzy logic
------------------------------------------------------------------
Deriving Baseline Detection Algorithms from Verbal Descriptions
------------------------------------------------------------------
Baerbel Herrnberger & Uwe R. Zimmer (submitted for publication)
This paper is on baseline detection in chromatography, a widely used technique for evaluating complex mixtures of substances in analytical chemistry. The resulting chromatograms are characterized by peaks indicating the presence and the amount of certain substances.
Due to disturbances of various kinds, chromatograms have to be corrected for baseline before taking further measurements on these peaks. The presented strategy of automatic baseline detection combines fuzzy logic and neural network approaches. It is based on a verbal description of a baseline referring to a 2D image of a chromatogram instead of a data vector. Baselines are expected to touch data points on the lower border of the chromatogram, forming a mainly horizontal and straight line. That description has been translated into a set of algorithms in a two-stage approach, with the first stage proceeding on a local level and the second on a global level. The first stage assigns to each data point a value regarded as the degree of baseline membership or significance; the second uses a global optimization strategy for coordinating these significances and for producing the final curve, simultaneously. Since no single feature is expected to be sufficient for baseline/non-baseline discrimination, several features are extracted. Because they are derived from a 2D image, positional relations between data points can be considered. The type of feature fusion is derived from a cost function upon a set of pre-classified data points. Constrained topological mapping will be the basis for the second stage. The statistical stability of the proposed approach is superior to known techniques, while keeping the computational effort low. (5 pages - 432 KB)
for the WWW-link:
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/abs.Baseline.html
------------------------------------------------------------------
for the homepage of the authors (including more reports):
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/bahe.html
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/
------------------------------------------------------------------
or for the ftp-server hosting the file:
------------------------------------------------------------------
ftp://ag-vp-ftp.informatik.uni-kl.de/Public/Neural_Networks/Reports/Herrnberger.Baseline.ps.Z
------------------------------------------------------------------
----------------------------------------------------- ----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | 67663 Kaiserslautern - Germany | ------------------------------.--------------------------------. Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | ------------------------------.--------------------------------. http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/ |

From jacobs at psych.Stanford.EDU Tue Jun 27 12:47:52 1995 From: jacobs at psych.Stanford.EDU (Robert Jacobs) Date: Tue, 27 Jun 1995 09:47:52 -0700 Subject: combining estimators Message-ID: <199506271647.JAA13795@aragorn.Stanford.EDU>

I have written a review article on statistical methods for combining estimators. Linear combination techniques are covered, as well as supra Bayesian procedures. The article is scheduled to appear in the journal "Neural Computation" (volume 7, number 5). I recently received the page proofs so I imagine that it will appear relatively soon, possibly in the next issue. I will put the abstract of the article at the bottom of this note.
Robbie Jacobs
============================================================
Methods For Combining Experts' Probability Assessments
This article reviews statistical techniques for combining multiple probability distributions.
The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distributions into a single distribution that can be used for decision making. Two classes of aggregation methods are reviewed. When using a supra Bayesian procedure, the decision maker treats the expert opinions as data that may be combined with its own prior distribution via Bayes' rule. When using a linear opinion pool, the decision maker forms a linear combination of the expert opinions. The major feature that makes the aggregation of expert opinions difficult is the high correlation or dependence that typically occurs among these opinions. A theme of this paper is the need for training procedures that result in experts with relatively independent opinions or for aggregation methods that implicitly or explicitly model the dependence among the experts. Analyses are presented that show that $m$ dependent experts are worth the same as $k$ independent experts where $k \leq m$. In some cases, an exact value for $k$ can be given; in other cases, lower and upper bounds can be placed on $k$. From dhw at santafe.edu Tue Jun 27 17:09:22 1995 From: dhw at santafe.edu (David Wolpert) Date: Tue, 27 Jun 95 15:09:22 MDT Subject: Mixtures of experts and combining learning algorithms Message-ID: <9506272109.AA25931@sfi.santafe.edu> John Hampshire writes: >>>> Robbie Jacobs, Mike Jordan, and Steve Nowlan have a long series of papers on the topic of combining estimators (maybe that's not what they called it, but that's certainly what it is); their works date back to well before 92. Likewise, I wrote a NIPS paper (90 or 91... not worth reading) and an IEEE PAMI article (92... probably worth reading) with Alex Waibel on this topic. Since David's not aware of these earlier and contemporary works, his second ''most thoroughly researched'' claim would appear doubtful as well. >>>> I am aware of the ground-breaking work of Nowlan, Jacobs, Jordan, Hinton, etc. on adaptive mixtures of experts (AME) (as well as other related schemes John didn't mention). And it is related to the subject at hand, so I should have mentioned it in posting. However I have trouble seeing what exactly John was driving at, since although it's related, I don't think AME directly addresses Gil's question. Most (almost all?) of the work on AME I've encountered concerns members of a restricted family of learning algorithms. Namely, parametric learning algorithms that work by minimizing some cost function. (For example, in Nowlan and Hinton (NIPS3), one is explicitly combining neural nets.) Loosely speaking, AME in essence "co-opts" how these algorithms work, by combining their individual cost functions into a larger cost function that is then minimized by varying everybody's parameters together. However I took Gil's question (perhaps incorrectly) to concern the combination of *arbitrary* types of estimators, which in particular includes estimators (like nearest neighbor) that need not be parametric and therefore can not readily be "co-opted". (Certainly the work she listed, like Sharif's, concerns the combination of such arbitrary estimators.) This simply is not the concern of most of the work on AME. Now one could imagine varying AME so that the "experts" being combined are not parameterized input-output functions but rather the outputs of more general kinds of learning algorithms. 
For example, one could have the "experts" be the end-products of assorted nearest neighbor schemes, that are trained independently of one another. *After* that training one would train the gating network to combine the individual experts. (In contrast, in vanilla AME one trains the gating network together with the individual experts in one go.) However
1) It can be argued that it is stretching things to view this as AME, especially if you adopt the perspective that AME is a kind of mixture modelling.
2) More importantly, I already referred to this kind of scheme in my original posting: "Actually, there is some earlier work on combining estimators, in which one does not partition the training set (as in stacking), but rather uses the residuals (created by training the estimators on the full training set) to combine those estimators. However this scheme appears to perform worse than stacking. See for example the earlier of the two articles by Zhang et al."
***
Summarizing, we have one of two possibilities. Either
i) John is referring to a possible variant of AME that I did mention (albeit without explicitly using the phrase "AME"), or
ii) John is referring to the more common variant of AME in which it can not combine arbitrary kinds of estimators, and therefore is not a candidate for what (I presumed) Gil had in mind.
Obviously I am not as much of an expert on the AME as John, so there might very well be a section or two (or even a whole paper or two!) that falls outside of those two categorizations of AME. But I think it's fair to say that most of the work on AME is not concerned with combining arbitrary estimators, in ways other than those referred to in my posting. Nonetheless, I certainly would recommend that Gil (and others) acquaint themselves with the seminal work on AME. I was definitely remiss in not including AME in my (quickly knocked together) list of work related to Gil's question.
David Wolpert

From nin at math.tau.ac.il Wed Jun 28 05:30:15 1995 From: nin at math.tau.ac.il (Intrator Nathan) Date: Wed, 28 Jun 1995 12:30:15 +0300 Subject: Combining estimators - which ones to combine Message-ID: <199506280930.MAA10327@silly3.math.tau.ac.il>

The focus of most of the papers cited so far on combining experts was how to combine, but the issue of what to combine is just as important. The fundamental observation is that combining estimators, or in the simple case averaging them, is effective only if these estimators are somehow made independent. One can induce independence via bootstrap methods (Breiman's stacked regression and, recently, bagging) or via the smooth bootstrap, which amounts to injecting noise during training. Once the estimators are independent enough, simple averaging gives very good performance. - Nathan Intrator
Refs:
@misc{Breiman93, author="L. Breiman", title="Stacked regression", year=1993, note="Technical report, Univ. of Cal, Berkeley", }
@misc{Breiman94, author="L. Breiman", title="Bagging predictors", year=1994, note="Technical report, Univ. of Cal, Berkeley", }
@misc{RavivIntrator95, author="Y. Raviv and N. Intrator", year=1995, note="Preprint", title="Bootstrapping with Noise: An Effective Regularization Technique", abstract="Bootstrap samples with noise are shown to be an effective smoothness and capacity control for training feed-forward networks as well as more traditional statistical models such as general additive models. The effects of smoothness and ensemble averaging are shown to be complementary and not equivalent to noise injection. The two-spiral, a highly non-linear noise-free problem, is used to demonstrate these findings.", url="ftp://cns.math.tau.ac.il/papers/spiral.ps.Z", }
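A toy rendering of the noisy-bootstrap averaging described in the last reference (the data, base learner, and noise level below are assumptions for illustration only):

import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(4 * X[:, 0])                     # noise-free nonlinear target

def fit_poly(X, y, degree=7):
    # least-squares polynomial base learner; returns a predictor
    coef = np.polyfit(X[:, 0], y, degree)
    return lambda Z: np.polyval(coef, Z[:, 0])

# Bootstrap-with-noise ("smooth bootstrap") ensemble: resampling plus
# injected input noise decorrelates the estimators, after which simple
# averaging of their predictions becomes effective.
members = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))          # bootstrap sample
    Xb = X[idx] + 0.05 * rng.standard_normal(X.shape)   # injected noise
    members.append(fit_poly(Xb, y[idx]))

predict = lambda Z: np.mean([m(Z) for m in members], axis=0)
print(np.round(predict(np.array([[0.0], [0.5]])), 3))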
The two-spiral, a highly non-linear noise-free problem, is used to demonstrate these findings.", url="ftp://cns.math.tau.ac.il/papers/spiral.ps.Z", } The last one can also be accessed via my research page: http://www.math.tau.ac.il/~nin/research.html From brunak at cbs.dtu.dk Wed Jun 28 11:29:01 1995 From: brunak at cbs.dtu.dk (Soren Brunak) Date: Wed, 28 Jun 95 11:29:01 METDST Subject: Combining neural estimators, NetGene Message-ID: Re: Combining neural network estimators, NetGene For people interested in earlier work on combined neural networks, I would like to bring the following paper to their attention: Prediction of human mRNA donor and acceptor sites from the DNA sequence, S. Brunak, J. Engelbrecht, and S. Knudsen, J. Mol. Biol., 220, 49-65, 1991. (Abstract below). The paper describes a method for locating intron splice sites in human genes by combining a search for coding regions and a search for donor and acceptor sites. The method was implemented as a mail server in February 1992, and is still widely used. Since 1992 it has processed more than 50 million nucleotides of DNA for researchers from many different countries, mainly UK, USA and Germany. The mail server is reached by sending mail to: NetGene at cbs.dtu.dk, regards, Soren Brunak Center for Biological Sequence Analysis The Technical University of Denmark DK-2800 Lyngby, Denmark Email: brunak at cbs.dtu.dk ----------------------------------------------------------------------- Abstract: Artificial neural networks have been applied to the prediction of splice site location in human pre--mRNA. A joint prediction scheme where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment was able to predict splice site locations with confidence levels far better than previously reported in the literature. The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous amounts of false positives --- in the paper the distribution of these false splice sites is examined and linked to a possible scenario for the splicing mechanism {\it in vivo}. When the presented method detects 95\% of the true donor and acceptor sites it makes less than 0.1\% false donor site assignments and less than 0.4\% false acceptor site assignments. For the large data set used in this study this means that on the average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without accompaniment of any false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non--coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non--coding signal and many stronger splice sites having more ill--defined transitions between coding and non--coding. Prediction of human mRNA donor and acceptor sites from the DNA sequence, S. Brunak, J. Engelbrecht, and S. Knudsen, J. Mol. Biol., 220, 49-65, 1991. From lxu at cs.cuhk.hk Thu Jun 29 04:56:31 1995 From: lxu at cs.cuhk.hk (Dr. 
From brunak at cbs.dtu.dk Wed Jun 28 11:29:01 1995 From: brunak at cbs.dtu.dk (Soren Brunak) Date: Wed, 28 Jun 95 11:29:01 METDST Subject: Combining neural estimators, NetGene Message-ID: Re: Combining neural network estimators, NetGene For people interested in earlier work on combined neural networks, I would like to bring the following paper to their attention: Prediction of human mRNA donor and acceptor sites from the DNA sequence, S. Brunak, J. Engelbrecht, and S. Knudsen, J. Mol. Biol., 220, 49-65, 1991. (Abstract below). The paper describes a method for locating intron splice sites in human genes by combining a search for coding regions and a search for donor and acceptor sites. The method was implemented as a mail server in February 1992, and is still widely used. Since 1992 it has processed more than 50 million nucleotides of DNA for researchers from many different countries, mainly the UK, USA and Germany. The mail server is reached by sending mail to: NetGene at cbs.dtu.dk, regards, Soren Brunak Center for Biological Sequence Analysis The Technical University of Denmark DK-2800 Lyngby, Denmark Email: brunak at cbs.dtu.dk ----------------------------------------------------------------------- Abstract: Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme, where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment, was able to predict splice site locations with confidence levels far better than previously reported in the literature. The problem of predicting donor and acceptor sites in human genes is hampered by the presence of large numbers of false positives; in the paper the distribution of these false splice sites is examined and linked to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study this means that on average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without any accompanying false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non-coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding. Prediction of human mRNA donor and acceptor sites from the DNA sequence, S. Brunak, J. Engelbrecht, and S. Knudsen, J. Mol. Biol., 220, 49-65, 1991. From lxu at cs.cuhk.hk Thu Jun 29 04:56:31 1995 From: lxu at cs.cuhk.hk (Dr. Xu Lei) Date: Thu, 29 Jun 95 16:56:31 +0800 Subject: combining estimators w/ non-constant weighting Message-ID: <9506290856.AA19321@cucs18.cs.cuhk.hk> David Wolpert wrote: >>>> However I took Gil's question (perhaps incorrectly) to concern the combination of *arbitrary* types of estimators, which in particular includes estimators (like nearest neighbor) that need not be parametric and therefore can not readily be "co-opted". (Certainly the work she listed, like Sharif's, concerns the combination of such arbitrary estimators.) This simply is not the concern of most of the work on AME. >>>> The following two papers on AME concern this type of work. Actually, this type of work can be treated as a special case of the Mixture of Experts -- some experts have been pretrained and fixed, and only the gating net and the other experts need to be trained in ME learning. Lei Xu and M.I. Jordan (1993), "EM Learning on A Generalized Finite Mixture Model for Combining Multiple Classifiers", Proceedings of World Congress on Neural Networks, Portland, OR, Vol. IV, 1993. Lei Xu, M.I. Jordan and G.E. Hinton (1995), "An Alternative Model for Mixtures of Experts", to appear in Advances in Neural Information Processing Systems 7, eds. Tesauro, G., Touretzky, D., and Leen, T., MIT Press, Cambridge MA, 1995.
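Xu's "special case" reading suggests a compact illustration: freeze the pretrained experts and fit only a softmax gating net by gradient ascent on the mixture log-likelihood. A minimal sketch follows; the two fixed experts, the Gaussian emission density, the gating features and the step size are all assumptions made for this illustration, not code from the cited papers.

import numpy as np

rng = np.random.default_rng(1)

# Two fixed "experts" -- stand-ins for estimators pretrained elsewhere.
experts = [lambda x: np.sin(3 * x), lambda x: 0.5 * x]

X = rng.uniform(-1, 1, 200)
y = np.where(X < 0, np.sin(3 * X), 0.5 * X) + 0.05 * rng.normal(size=X.size)

F = np.stack([f(X) for f in experts], axis=1)   # (N, J) frozen expert outputs
Phi = np.stack([np.ones_like(X), X], axis=1)    # gating features: (bias, x)
V = np.zeros((2, len(experts)))                 # gating weights: the only free parameters
sigma2 = 0.05 ** 2

for _ in range(500):
    g = np.exp(Phi @ V)
    g /= g.sum(axis=1, keepdims=True)           # softmax gate g_j(x); rows sum to 1
    # Posterior responsibility that expert j generated y_n (Gaussian emission).
    lik = g * np.exp(-(y[:, None] - F) ** 2 / (2 * sigma2))
    h = lik / lik.sum(axis=1, keepdims=True)
    # Gradient of the mixture log-likelihood w.r.t. V is Phi^T (h - g).
    V += 0.1 * Phi.T @ (h - g) / X.size

g = np.exp(Phi @ V); g /= g.sum(axis=1, keepdims=True)
y_hat = (g * F).sum(axis=1)                     # gated combination of the fixed experts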
From C.Campbell at bristol.ac.uk Thu Jun 29 05:50:50 1995 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Thu, 29 Jun 1995 10:50:50 +0100 (BST) Subject: PhD Studentship available Message-ID: <199506290950.KAA05185@zeus.bris.ac.uk> I would be very grateful if the following studentship announcement could be advertised on your BB. With many thanks, Colin Campbell *********************************** EPSRC CASE AWARD LEADING TO PhD Non-Linear Modelling in an Extended Kalman Filter The Centre for Communications Research at Bristol University, in collaboration with The Sowerby Research Centre, British Aerospace (Operations) Ltd., has been awarded a CASE scholarship by the Engineering and Physical Sciences Research Council in the area of non-linear modelling. This research programme will commence in October 1995 and will continue until October 1998. The Kalman filter is a widely used object tracking algorithm with applications ranging from industrial robotics to aircraft radar systems. This research programme will investigate the application of artificial neural networks (and related algorithms) as non-linear modelling devices within a Kalman filter framework and will integrate with existing research in the area of object tracking and identification in multi-sensor systems. The studentship will comprise the standard EPSRC award plus an industrial supplement. Although the research programme will be based at Bristol University, the student can expect to spend at least three months in industry over the three-year period. Applicants should possess a good honours degree in Electronic Engineering, Engineering Mathematics or a related discipline and must be EC citizens. Please apply in writing, with a full CV, to Dr. David Bull, Centre for Communications Research, University of Bristol, Queens Building, Bristol BS8 1TR (email: Dave.Bull at bristol.ac.uk, tel: 0117 928 8613). /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ /\/\/\/\/\/\/\/\/\/\/\/\ Dr. David R. Bull, Reader in Digital Signal Processing, Centre for Communications Research, University of Bristol, Queens Building, University Walk, Bristol BS8 1TR, UK tel: +44 117 928 8613, fax: +44 117 925 5265, email: Dave.Bull at bristol.ac.uk /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ /\/\/\/\/\/\/\/\/\/\/\/\ From pjs at aig.jpl.nasa.gov Thu Jun 29 16:46:14 1995 From: pjs at aig.jpl.nasa.gov (Padhraic J. Smyth) Date: Thu, 29 Jun 95 13:46:14 PDT Subject: Combining human and machine experts Message-ID: <9506292046.AA02754@amorgos.jpl.nasa.gov> A tangent to the discussion on combining estimators and experts is a pointer to the statistical literature on the topic of combining the subjective ratings of multiple human experts. This problem arises in medical diagnosis and in remote sensing applications, where it is not uncommon to have multiple opinions and one does not know which expert to believe. Quite a bit of work has been published in this area; here are some pointers to a few of many papers on the topic: J. S. Uebersax, "Statistical modeling of expert ratings on medical treatment appropriateness," J. Amer. Statist. Assoc., vol.88, no.422, pp.421-427, 1993. A. Agresti, "Modelling patterns of agreement and disagreement," Statistical Methods in Medical Research, vol.1, pp.201-218, 1992. S. French, "Group consensus probability distributions: a critical survey," in Bayesian Statistics 2, pp.183-202, Bernardo, DeGroot, Lindley, Smith (eds), Elsevier Science (North-Holland), 1985. A. P. Dawid and A. M. Skene, "Maximum likelihood estimation of observer error-rates using the EM algorithm," Applied Statistics, vol.28, no.1, pp.20-28, 1979. and a paper we had at last year's NIPS: P. Smyth, M.C. Burl, U. M. Fayyad, P. Perona, P. Baldi, "Inferring ground truth from subjectively-labeled images of Venus," to appear in NIPS 7. (available from my home page: http://www-aig.jpl.nasa.gov/mls/home/pjs) It's interesting to note the differences between combining algorithmic "experts" and human experts: as a function of the input data, the algorithms are usually deterministic while the humans are usually non-deterministic, i.e., given the same data more than once they can produce different estimates. Humans are certainly more difficult to model than algorithms for combination purposes since they can "drift" over time, be affected by non-data factors, and so forth - not implying, of course, that combining algorithmic experts is necessarily easy! I have not seen any work on combining both human and algorithmic predictions; I would be interested if anyone knows of such work. Padhraic Smyth
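The Dawid and Skene reference above is the classic EM treatment of this problem: treat each item's true class as hidden, give each rater an unknown confusion matrix, and alternate between re-estimating the two. Below is a minimal sketch on simulated labels; the class/rater/item counts, the simulated confusion matrices and the majority-vote initialisation are all assumptions of this illustration.

import numpy as np

rng = np.random.default_rng(2)
K, R, N = 3, 4, 300                        # classes, raters, items

# Simulate "ground truth" and raters who are right 80% of the time.
truth = rng.integers(0, K, N)
conf_true = np.full((R, K, K), 0.1)
conf_true[:, np.arange(K), np.arange(K)] = 0.8
labels = np.array([[rng.choice(K, p=conf_true[r, truth[n]]) for r in range(R)]
                   for n in range(N)])     # labels[n, r] = rater r's label for item n

# EM in the style of Dawid-Skene, initialised from majority vote.
post = np.zeros((N, K))
init = [np.bincount(labels[n], minlength=K).argmax() for n in range(N)]
post[np.arange(N), init] = 1.0
for _ in range(20):
    # M-step: class prior and per-rater confusion matrices from soft counts.
    prior = post.mean(axis=0)
    conf = np.full((R, K, K), 1e-6)
    for r in range(R):
        for k in range(K):
            conf[r, :, k] += post[labels[:, r] == k].sum(axis=0)
    conf /= conf.sum(axis=2, keepdims=True)
    # E-step: posterior over each item's true class given all raters' labels.
    logp = np.log(prior) + sum(np.log(conf[r, :, labels[:, r]]).T for r in range(R))
    post = np.exp(logp - logp.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)

estimate = post.argmax(axis=1)             # inferred ground truth per item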
From jordan at psyche.mit.edu Thu Jun 29 17:15:25 1995 From: jordan at psyche.mit.edu (Michael Jordan) Date: Thu, 29 Jun 95 17:15:25 EDT Subject: Combining neural estimators, NetGene Message-ID: <9506292115.AA04943@psyche.mit.edu> Just a small clarifying point regarding combining estimators. Just because two algorithms (e.g., stacking and mixtures of experts) end up forming linear combinations of models doesn't necessarily mean that they have much to do with each other. It's not the architecture that counts; it's the underlying statistical assumptions that matter -- the statistical assumptions determine how the parameters get set. Indeed, a mixture of experts model is making the assumption that, probabilistically, a single underlying expert is responsible for each data point. This is very different from stacking, where there is no such mutual exclusivity assumption. Moreover, the linear combination rule of mixtures of experts arises only if you consider the conditional mean of the mixture distribution, i.e., E(y|x). When the conditional distribution of y|x has multiple modes, which isn't unusual, a mixture model is particularly appropriate and the linear combination rule *isn't* the right way to summarize the distribution. In my view, mixtures of experts are best thought of as just another kind of statistical model, on the same level as, say, loglinear models or hidden Markov models. Indeed, they basically are a statistical form of a decision tree model. Note that decision trees embody the mutual exclusivity assumption (by definition of "decision") -- this makes it very natural to formalize decision trees as mixture models. (Cf. decision *graphs*, which don't make a mutual exclusivity assumption and *aren't* handled well within the mixture model framework.) I would tend to place stacking at a higher level in the inference process, as a general methodology for -- in some sense -- approximating an average over a posterior distribution on a complex model space. "Higher level" just means that it's harder to relate stacking to a specific generative probability model. It's the level of inference at which everybody agrees that no one model is very likely to be correct for any instantiation of any x -- for two reasons: because our current library of possible statistical models is fairly impoverished, and because we have an even more impoverished theory of how all of these models relate to each other (i.e., how they might be parameterized instances of some kind of super-model). This means that -- in our current state of ignorance -- mutual exclusivity (and exhaustivity, the second assumption underlying mixture models) make no sense at the higher levels of inference, and some kind of smart averaging has got to be built in whether we understand it fully or not. Mike
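Jordan's point about conditional means can be spelled out in two lines. In standard mixture-of-experts notation (introduced here only for illustration; g_j is the gating probability of expert j and \theta_j its parameters), the model is

\[
p(y \mid x) \;=\; \sum_{j=1}^{J} g_j(x)\, p(y \mid x, \theta_j),
\qquad \sum_{j=1}^{J} g_j(x) = 1,
\]

and the familiar linear combination is only the conditional mean of this mixture,

\[
E[y \mid x] \;=\; \sum_{j=1}^{J} g_j(x)\, \mu_j(x),
\qquad \mu_j(x) = E[y \mid x, \theta_j].
\]

When p(y|x) is multimodal, that mean can fall between the modes, agreeing with none of the experts -- which is exactly why the linear combination rule is the wrong summary in that case.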
From sala at digame.dgcd.doc.ca Thu Jun 29 07:54:16 1995 From: sala at digame.dgcd.doc.ca (Ken Sala) Date: Thu, 29 Jun 95 07:54:16 EDT Subject: Job Opening at CRC, Canada Message-ID: <9506291154.AA04131@digame.doc.ca> Research Position Available Development of Hardwired Neural Network Testbed The Communications Research Center (CRC) Institute, part of Industry Canada, is offering a second-term (two-year) research position at the research engineer level within the Communications Systems Branch of the CRC. The principal goal of this position is the application of commercially available neural network circuitry to the task of real-time pattern classification of communications signals and synthetic aperture radar images. This position is a part of a CRC project sponsored by the Canadian Department of National Defense. The candidate should have, at minimum, a graduate degree in the engineering or physical sciences. Preference will be given to candidates holding a Ph.D. degree in engineering in an area relevant to the project. Knowledge or experience in systems development, machine-level programming (C/C++), integrated circuit design and analysis, and neural network theory and application is desirable. An ability to work independently and to communicate effectively is required. Preference will be given to candidates who hold Canadian citizenship. The CRC is located in suburban Ottawa on a large site shared with DND research facilities (Defense Research Establishment Ottawa) and with the Canadian Space Agency. Interested candidates may reply to: Dr. K. Sala Communications Research Center P.O. Box 11490, Stn. `H' Ottawa, ON K2H 8S2 Fax: (613) 990-8369 e-mail: sala at digame.dgcd.doc.ca Candidates must include: (1) a résumé containing, at a minimum, educational background, work experience, and a complete list of publications; (2) a firm indication of the earliest date of availability; and (3) current citizenship status. A phone number, fax number, and e-mail address by which the applicant can be contacted should be supplied with the information above. From scheler at informatik.tu-muenchen.de Thu Jun 29 04:58:12 1995 From: scheler at informatik.tu-muenchen.de (Gabriele Scheler) Date: Thu, 29 Jun 1995 10:58:12 +0200 Subject: Two preprints on Conn. NLP Message-ID: <95Jun29.105816met_dst.42340@papa.informatik.tu-muenchen.de> Dear connectionists, two preprints on connectionist and hybrid approaches to NLP are available from our ftp-server. ftp flop.informatik.tu-muenchen.de directory: pub/articles-etc scheler_aspect.ps.gz scheler_hybrid.ps.gz ------------------------------------------------------------------------------- Dr. Gabriele Scheler, Institut für Informatik, Technische Universität München, Arcisstr. 21, D-82090 München, Germany; phone: +49-89-2105-8476; fax: +49-89-2105-8207; email: scheler at informatik.tu-muenchen.de -------------------------------------------------------------------------------- Titles and Abstracts: scheler_hybrid.ps.gz ------------------------------------------------------------------ A Hybrid Model of Semantic Inference (to appear: 4th Conference on Cognitive Science in Natural Language Processing, Dublin, Ireland, 1995) Gabriele Scheler/Johann Schumann Institut für Informatik Technische Universität München e-mail: scheler, schumann at informatik.tu-muenchen.de keywords: NLP, hybrid systems, automated theorem proving, text understanding, temporal cognition Abstract (Conclusion) We have built a system for the interpretation of tense and aspect consisting of a neural network component, which translates sentences into semantic representations, and a theorem prover, which proves inferences between logical forms. An interesting question that has been partly answered by this research is whether atomic features can indeed mediate between syntactic structure and cognitive structure, which are both complex. It was shown that complex logical representations can be built from an atomic feature representation. The implications of this approach for the learning of natural language categories and for the interface between natural language and cognition could be far-reaching. For instance, atomic features may be seen as a biologically simple way of linking structures (i.e. natural language morphology and temporal cognition) which are complex in different ways. However, this approach needs to be carried over to other linguistic domains (e.g. determiners, plural phenomena, mood, prepositions or lexical meaning) to further explore its possibilities. Logic is probably not the implementation medium in human brains. Logical representations must be seen to provide only a meta-theory of cognition. In contrast to other approaches, we use full first-order logic as the representation medium. This allows an open set of inferences which can be applied to various tasks. These inferences can also be used to construct a closed relational set, i.e. a temporal 'scenario'. ------------------------------------------------------------------ scheler_aspect.ps.gz Learning the Semantics of Aspect (to appear: D. Jones (ed.)
New Methods in Language Processing, University College London Press) Gabriele Scheler The main point of this paper is to show how we can extract semantic features, describing aspectual meanings, from a syntactic representation. Aspectual meanings are represented as sets of features in an interlingua. The goal is to translate English to Russian aspectual categories. This is realized by a specialized language processing module, which is based on the concept of vertical modularity. The results of supervised learning of syntactic-semantic correspondences using standard back-propagation show that both learning and generalization to new patterns are successful. Furthermore, the correct generation of Russian aspect from the automatically created semantic representations is demonstrated. The results are relevant to machine translation in a hybrid systems approach and to the study of linguistic category formation. --------------------------------------------------------------------------------
From Andreas.Scherer at FernUni-Hagen.de Fri Jun 2 05:49:57 1995 From: Andreas.Scherer at FernUni-Hagen.de (Andreas Scherer) Date: Fri, 2 Jun 1995 11:49:57 +0200 Subject: CFP: Session on Adaptive Computing in Engineering Message-ID: <9506020938.AA03852@galaxy.fernuni-hagen.de> CALL FOR PAPERS Computer Applications Symposium Session on Adaptive Computing in Engineering Houston, TX January 28 - February 2, 1996 The American Society of Mechanical Engineers (ASME) Petroleum Division is sponsoring the Energy-sources Technology Conference & Exhibition (ETCE), to be held January 28 - February 2, 1996, at the George R. Brown Convention Center in Houston, Texas. A part of this conference is the Computer Applications Symposium, which focuses on the uses of computers in engineering-related applications. Attendees will be from both academia and industry. This coming year, one session will be devoted to applications of Adaptive Computing Techniques in Engineering. Suggested topics include the following: Fuzzy Logic, Genetic Algorithms, Neural Networks, Machine Learning, Hybrid Systems. All presented papers will be published in the symposium proceedings. Please contact the session organizers if you have any questions. We look forward to your contributions. Authors are requested to send a letter of intent, an information sheet that includes the full names of the author(s), phone number and FAX or E-mail if applicable, and an abstract (up to 200 words) by May 30, 1995. Full papers are due by August 1. Important Dates: July 15, 1995: Deadline for submission of full paper (10 pages) August 1, 1995: Notification of acceptance Sept. 10, 1995: Final paper deadline Jan. 28 - Feb. 2, 1996: Conference Dr. John R. Sullins, Dept. of Computer & Information Sci., Youngstown State University, 410 Wick Avenue, Youngstown, OH, USA, john at cis.ysu.edu, Tel.: (216) 742-1806, FAX: (216) 742-1998. Dr. Andreas Scherer, Applied Computer Science I, University of Hagen, Feithstr. 140, 58084 Hagen, Germany, andreas.scherer at fernuni-hagen.de, Tel.: +49/2331/987-2972, FAX: +49/2331/987-314. From massone at mimosa.eecs.nwu.edu Fri Jun 2 12:38:55 1995 From: massone at mimosa.eecs.nwu.edu (Lina Massone) Date: Fri, 2 Jun 1995 11:38:55 -0500 Subject: paper available on dynamic pattern formation Message-ID: <199506021638.LAA22575@mimosa.eecs.nwu.edu> The following paper is now available as a technical report of the Neural Information Processing Laboratory of Northwestern University. The paper can be retrieved through the web by connecting to: http://www.eecs.nwu.edu/pub/nipl or by anonymous ftp to: eecs.nwu.edu cd pub/nipl The role of initial conditions in dynamic pattern formation Lina L.E. Massone and Tony Khoshaba Tech. Rep.
Neural Information Processing Laboratory Northwestern University *Submitted to NIPS 95* In this paper we present the results of an empirical study of the properties of recurrent backpropagation (Pineda, 1988), with special emphasis on the characteristics of the resulting weight distributions and their dependency on the initial conditions and on the classes of dynamic tasks that the network learns. The results of this study indicate that the weights of the trained network exhibit properties that are dictated by both the desired equilibria and the initial values of the weights, but not by the initial state of the network. In particular, we were able to quantify the dependency of the final weights on the initial weights in terms of monotonic, practically linear relationships between the standard deviations of the two distributions. We discuss the implications of these results for dynamical systems in general and for the study of brain function.
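For readers who have not seen it, the model class studied above is Pineda's recurrent backpropagation, which trains a fully recurrent network through its fixed points; the notation below is generic rather than the paper's own. The network state relaxes according to

\[
\frac{dx_i}{dt} \;=\; -x_i + \sigma\Big(\sum_j w_{ij}\, x_j + I_i\Big)
\]

to an equilibrium x^{\infty}, and the weights are then adapted by gradient descent on an error E(x^{\infty}) evaluated at that equilibrium,

\[
\Delta w_{ij} \;\propto\; -\,\frac{\partial E(x^{\infty})}{\partial w_{ij}},
\]

with the gradient computed by an associated linear error-propagation dynamics rather than by unrolling the network in time. The weight distributions discussed in the abstract are the distributions of the converged w_{ij}.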
From jmgeever at yeats.ucd.ie Sat Jun 3 12:46:02 1995 From: jmgeever at yeats.ucd.ie (John McGeever) Date: Sat, 3 Jun 1995 17:46:02 +0100 Subject: NCAF/SIENA Conf. at University College Dublin. Message-ID: *************************************************************** Neural Computing Applications Forum (NCAF) and Simulation Initiative for European Neural Applications (SIENA) *************************************************************** Dear Colleagues, A one-day joint NCAF/SIENA Conference will be held at University College Dublin on Tuesday 20 June 1995. Programme 8.30 am Registration / Coffee 9.00 Welcome to NCAF and SIENA Tom Harris and Tony Morgan 9.30 Tutorial: An Introduction to Neural Networks Tom Harris, Brunel University 10.30 Coffee 11.00 Keynote Speaker: Neural Networks: The Real Applications Andy Wright, British Aerospace 12.00 pm Using Neural Networks to Learn Hand-Eye Co-ordination Marggie Jones, University College Galway 12.30 Lunch 14.00 Countermatch: A Neural Network Approach to Signature Verification Graham Hesketh, AEA Technology 14.30 Implementing Neural Networks with Semiconductor Optical Devices Paul Horan, Hitachi Dublin Laboratory 15.00 Tea 15.30 Artificial Neural Networks for Articulatory Based Speech Synthesis John Keating, St. Patricks College 16.00 The IBM ZISC Chip Guy Paillet, Neuroptics Consulting 16.30 Open Forum 17.00 Close of Session Notes: Accommodation on-campus at 18 pounds sterling is available for the night of 19 June. (Please indicate male/female.) Breakfast and lunch are not included in the registration fee. There are three food outlets on campus. Exhibitors can demonstrate their products free of charge having gained entry to the meeting - please contact NCAF for further details. REGISTRATION: The fee is 50 pounds sterling, 30 for students. Send name, correspondence address, organisation, and other contact details (phone/fax/email) with a crossed sterling cheque (registration fee and accommodation fee if applicable) made payable to NCAF to Ila Patel, Dublin Symposium Bookings, Neural Computing Applications Forum, Box 73, EGHAM, Surrey, TW20 OYZ UK. Tel (+44/0) 1784 477271 Fax (+44/0) 1784 472879 - Mark fax 'NCAF' email NCAFsec at brunel.ac.uk Advance registration is very much preferred. Informal enquiries may be made to jmgeever at nova.ucd.ie From esann at dice.ucl.ac.be Sun Jun 4 10:45:06 1995 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Sun, 4 Jun 1995 16:45:06 +0200 Subject: Neural Processing Letters Vol.2 No.1 Message-ID: <199506041445.QAA13064@ns1.dice.ucl.ac.be> Neural Processing Letters: last two issues ------------------------------------------ You will find enclosed the table of contents of the March and May 1995 issues of "Neural Processing Letters" (Vol.2 No.2 & 3). The abstracts of these papers are contained on the below mentioned FTP and WWW servers. We also inform you that subscription to the journal is now possible by credit card. All necessary information is contained on the following servers: - FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL - WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html If you have no access to these servers, or for any other information (subscriptions, instructions for authors, free sample copies,...), please don't hesitate to contact directly the publisher: D facto publications 45 rue Masui B-1210 Brussels Belgium Phone: + 32 2 245 43 63 Fax: + 32 2 245 46 94 Neural Processing Letters, Vol.2, No.2, March 1995 __________________________________________________ - Asymptotic performances of a constructive algorithm Florence d'Alche-Buc, Jean-Pierre Nadal - A multilayer incremental neural network architecture for classification Tamer Olmez, Ertugrul Yazgan, Okan K. Ersoy - Fine-tuning Cascade-Correlation trained feedforward network with backpropagation Mikko Lehtokangas, Jukka Saarinen, Kimmo Kaski - Determining initial weights of feedforward neural networks based on least squares method Y.F.Yam, T.W.S.Chow - Post-processing of coded images by neural network cancellation of the unmasked noise M.Mattavelli, O.Bruyndonckx, S.Comes, B.Macq - Neural learning of chaotic dynamics Gustavo Deco, Bernd Schurmann - Improving the Counterpropagation network performances Alessandra Chiuderi - Book review: Réseaux neuronaux et traitement du signal by Jeanny Hérault and Christian Jutten. In French. Patrick Thiran Neural Processing Letters, Vol.2, No.3, May 1995 ________________________________________________ - Weighted Radial Basis Functions for improved pattern recognition and signal processing Leonardo M. Reyneri - Analog weight adaptation hardware A.J. Annema, H. Wallinga - Adaptative time constants improve the prediction capability of recurrent neural networks Jean-Philippe Draye, Davor Pavisic, Guy Cheron, Gaetan Libert - A general exploratory projection pursuit network Colin Fyfe - Improving the approximation and convergence capabilities of projection pursuit learning Tin-Yau Kwok, Dit-Yan Yeung - Invariance in radial basis function neural networks in human face classification A. Jonathan Howell, Hilary Buxton _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________ From athenasc at world.std.com Sat Jun 3 18:57:39 1995 From: athenasc at world.std.com (athena scientific) Date: Sat, 3 Jun 1995 18:57:39 -0400 Subject: Textbook Series on Optimization and Neural Computation Message-ID: <199506032257.AA11738@world.std.com> Athena Scientific is pleased to announce a series of M.I.T. graduate course textbooks that are distinguished by scientific rigor, educational value, and production quality.
******************************************************************** 1) Dynamic Programming and Optimal Control (2 Vols.), by Dimitri P. Bertsekas (June 1995). 2) Nonlinear Programming, by Dimitri P. Bertsekas (due Fall 1995). 3) Linear Programming, by Dimitris Bertsimas and John N. Tsitsiklis (due Spring 1996). 4) Neuro-Dynamic Programming, by Dimitri P. Bertsekas and John N. Tsitsiklis (Spring 1996). ******************************************************************** The first three are textbooks currently used in first year graduate courses at the Department of Electrical Engineering and Computer Science and the Operations Research Center of M.I.T. The last book is used in a graduate course on neural computation. The books are well suited for instruction and have been refined in the classroom over many years. For further information contact Athena Scientific, P.O. Box 391, Belmont, MA 02178-9998, U.S.A., Tel: (617) 489-3097, FAX: (617) 489-3097, email: athenasc at world.std.com, or the authors: bertsekas at lids.mit.edu, dbertsim at math.mit.edu, jnt at mit.edu To order your copy of Dynamic Programming and Optimal Control, see the ordering information at the end of this announcement. ******************************************************************** DYNAMIC PROGRAMMING AND OPTIMAL CONTROL, by Dimitri P. Bertsekas For FTP access of detailed table of contents, preface, and the 1st chapter: FTP to LIDS.MIT.EDU with username ANONYMOUS, enter password as directed, and type cd /pub/bertsekas/DP_BOOK BRIEF DESCRIPTION This two-volume textbook is a greatly expanded and pedagogically improved version of the author's "Dynamic Programming: Deterministic and Stochastic Models," (Prentice-Hall, 1987). It treats simultaneously stochastic optimal control problems popular in modern control theory and Markovian decision problems popular in operations research. New features include: ----------------------------------------------------------------------- ** neurodynamic programming/reinforcement learning techniques, a recent breakthrough in the practical application of dynamic programming to complex problems ** deterministic discrete- and continuous-time optimal control problems, including the Pontryagin Minimum Principle ** an extensive treatment of deterministic and stochastic shortest path problems ----------------------------------------------------------------------- The first volume is oriented towards modeling, conceptualization, and finite-horizon problems, but also includes a substantive introduction to infinite horizon problems that is suitable for classroom use. The second volume is oriented towards mathematical analysis and computation, and treats infinite horizon problems extensively. The text contains many illustrations, worked-out examples, and exercises. The author is Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, and has been teaching the material of this book in introductory graduate courses for over twenty years. CONTENTS OF VOLUME I (400 pages) 1. The Dynamic Programming Algorithm 2. Deterministic Systems and the Shortest Path Problem 3. Deterministic Continuous-Time Optimal Control 4. Problems with Perfect State Information 5. Problems with Imperfect State Information 6. Suboptimal and Adaptive Control 7. Introduction to Infinite Horizon Problems Appendixes: Math. Review, Probability Review, Least Squares Estimation and the Kalman Filter CONTENTS OF VOLUME II (304 pages, available August 1995) 1. 
Infinite Horizon - Discounted Problems 2. Stochastic Shortest Path Problems 3. Undiscounted Problems 4. Average Cost per Stage Problems 5. Continuous-Time Problems ******************************************************************** Free 30-Day Examination Copy to Prospective Instructors Free Deskcopy and Solutions Manual Upon Classroom Adoption ******************************************************************** ORDER W/ VISA/MASTERCARD BY FAX OR PHONE: (617) 489-3097 ____________________________________________________________________ ORDER BY MAIL: Athena Scientific, P.O.Box 391, Belmont, MA 02178-9998, U.S.A. email: athenasc at world.std.com o Dynamic Programming and Optimal Control, Vol. I: $64.00 ISBN: 1-886529-12-4 o Dynamic Programming and Optimal Control, Vol. II: $55.50 ISBN: 1-886529-13-2 o Dynamic Programming and Optimal Control, Vols. I and II: $99.50 ISBN: 1-886529-11-6 o MY CHECK OR MONEY ORDER IS ENCLOSED: I am enclosing my check or money order, payable to Athena Scientific. I have also included $3.00 for shipping. I understand that books can be returned for a full refund if I am not completely satisfied. o CHARGE MY: o VISA o MASTERCARD Account #__________________________________ Expiration Date____________________________ Signature__________________________________ ____________________________________________________________________ From andre at icmsc.sc.usp.br Mon Jun 5 16:26:36 1995 From: andre at icmsc.sc.usp.br (Andre Carlos P. de Leon F. de Carvalho) Date: Mon, 5 Jun 95 15:26:36 EST Subject: II Brazilian Symposium on Neural Networks Message-ID: <9506051826.AA06848@xavante> II Brazilian Symposium on Neural Networks ***************************************** October 18-20, 1995 First call for papers Sponsored by the Brazilian Computer Science Society (SBC) You are cordially invited to attend the II Brazilian Symposium on Neural Networks (SBRN), which will be held at the University of Sao Paulo, campus of Sao Carlos, Sao Paulo. Sao Carlos, with its 160,000 population, is a pleasant university city known for its climate and high-technology companies. Scientific papers will be analyzed by the program committee. This analysis will take into account originality, significance to the area, and clarity. Accepted papers will be fully published in the conference proceedings. The major topics of interest include, but are not limited to: Applications Architecture and Topology Biological Perspectives Cognitive Science Dynamic Systems Fuzzy Logic Genetic Algorithms Hardware Implementation Hybrid Systems Learning Models Optimisation Parallel and Distributed Implementations Pattern Recognition Robotics and Control Signal Processing Theoretical Models Program Committee: - Andre C. P. L. F. de Carvalho - ICMSC/USP - Dante Barone - II/UFRGS (Chairman) - Edson C B C Filho - DI/UFPE - Fernando Gomide - FEE/UNICAMP - Geraldo Mateus - DCC/UFMG - Luciano da Fontoura Costa - IFSC/USP - Rafael Linden - IBCCF/UFRJ - Paulo Martins Engel - II/UFRGS Organising Committee: - Aluizio Araujo - EESC/USP - Andre C. P. L. F. de Carvalho - ICMSC/USP (Chairman) - Dante Barone - II/UFRGS - Edson C B C Filho - DI/UFPE - Germano Vasconcelos - DI/UFPE - Glauco Caurin - EESC/USP - Luciano da Fontoura Costa - IFSC/USP - Roseli A. Francelin Romero - ICMSC/USP - Teresa B. Ludermir - DI/UFPE SUBMISSION PROCEDURE: The Symposium seeks contributions to the state of the art and future perspectives of Neural Networks research. A submitted paper must be in Portuguese, Spanish or English.
The submissions must include the original paper and three more copies and must follow the format below (no E-mail or FAX submissions). The paper must be printed using a laser printer, in one-column format, not numbered, 8.5 X 11.0 inch (21.7 X 28.0 cm). It must not exceed six pages, including all figures and diagrams. The font should be 10 pt, such as TIMES-ROMAN or its equivalent, with the following margins: right and left 2.5 cm, top 3.5 cm, and bottom 2.0 cm. The first page should contain the paper's title, the complete author(s) name(s), affiliation(s), and mailing address(es), followed by a short (150 words) abstract, a list of descriptive key words, and an accompanying letter. In the accompanying letter, the following information must be included: * Manuscript title * First author's name, mailing address and E-mail * Technical area SUBMISSION ADDRESS: Four copies (one original and three copies) should be submitted to: Andre C. P. L. F. de Carvalho - SBRN 95 Departamento de Ciencias de Computacao e Estatistica ICMSC - Universidade de Sao Paulo Caixa Postal 668 CEP 13560.070 Sao Carlos, SP Brazil Phone: +55 162 726222 FAX: +55 162 749150 E-mail: IISBRN at icmsc.sc.usp.br DEADLINES: July 30, 1995 (mailing date) Deadline for paper submission August 30, 1995 Notification to authors October 18-20, 1995 II SBRN MORE INFORMATION: * Up-to-the-minute information about the symposium is available on the World Wide Web (WWW) at http://www.icmsc.sc.usp.br * Questions can be sent by E-mail to IISBRN at icmsc.sc.usp.br Hope to see you in Sao Carlos! From jagota at next1.msci.memst.edu Mon Jun 5 20:53:32 1995 From: jagota at next1.msci.memst.edu (Arun Jagota) Date: Mon, 5 Jun 1995 19:53:32 -0500 Subject: Optimization special issue Message-ID: <199506060053.AA18315@next1> CALL FOR PAPERS (please post) JOURNAL OF ARTIFICIAL NEURAL NETWORKS SPECIAL ISSUE on NEURAL NETWORKS FOR OPTIMIZATION Submission Deadline: 7 August 1995 ** JANN editor-in-chief: Omid M. Omidvar Special issue editor: Arun Jagota JANN is published by ABLEX The aim of this special issue is to publish high-quality papers presenting original research in the evolving topic of neural network algorithms for optimization problems. Papers may be theoretical or applied in nature. Applications of neural networks to optimization, rather than of optimization to (learning in) neural networks, are preferred. Papers utilizing relevant ideas from statistical physics, mathematical programming, or combinatorial algorithms are welcome. Papers describing significant applications, comparisons with conventional algorithms, or comparisons amongst neural network algorithms are also welcome. EXAMPLE PAPER TITLES: A Mean Field Annealing Network for the Traveling Salesman Problem A New Energy Function for the Graph Bipartitioning Problem Comparisons of the Hopfield and Elastic Nets Solving N-queens: Neural Networks Versus Randomized Greedy Methods FORMATTING INSTRUCTIONS: Manuscripts may be FULL LENGTH or NOTE. Full length manuscripts should range between eight (8) and sixteen (16) single-column, SINGLE-spaced, 8.5 X 11 pages, in 11 pt, including figures, tables, and references. Figures and tables should be formatted IN text, as they would appear in print. Do not have a separate title page; rather, the first page of text should begin with the paper title, authors, affiliations, and an abstract, limited to two hundred and fifty (250) words. Notes should range between two (2) and seven (7) pages. The abstract should be limited to fifty (50) words.
Other than these, notes should be formatted the same way as full-length manuscripts. SUBMISSION GUIDELINES: ** Apologies for the short notice. One reason for the submission deadline of 7 August 1995 is that the journal editor needs the final accepted papers with him before the end of 1995. If all goes well, the special issue will be out soon thereafter. If you can't make the August 7 deadline, please contact me by email as soon as possible. ELECTRONIC (much preferred): Send a SINGLE POSTSCRIPT file of the manuscript to the email address given below. Do not follow up with hardcopy. All materials (figures etc.) should be absorbed into this single file. HARDCOPY: Send THREE copies of the manuscript to the surface mail address supplied below. Whether submitting electronically or in hardcopy, send a cover letter by e-mail, containing the paper title, whether it is FULL LENGTH or NOTE, the list of authors, and the corresponding author's address, especially e-mail and phone. (Correspondence with authors will be handled by email as much as possible.) If submitting hardcopy, send the email letter in advance; no hardcopy letter is needed. If submitting electronically, do not send the cover letter in the same email message as the postscript file. SUBMIT TO: Arun Jagota Department of Mathematical Sciences University of Memphis Memphis TN 38152 USA Phone: 901 678-3071 E-mail: jagota at next1.msci.memst.edu FINAL SUBMISSION: Authors of accepted papers might be asked to format final versions in camera-ready format. The JANN formatting instructions would be provided then. A JANN style file, for LaTeX users, will also be made available. LaTeX style files save considerable time in reformatting. From kim.plunkett at psy.ox.ac.uk Tue Jun 6 10:48:54 1995 From: kim.plunkett at psy.ox.ac.uk (Kim Plunkett) Date: Tue, 6 Jun 1995 14:48:54 +0000 Subject: Postdoctoral position Message-ID: <9506061448.AA53527@mac17.psych.ox.ac.uk> A Postdoctoral Research position is available to work on an ESRC-funded project entitled "Learning Inflectional Morphology". The appointment is for 3 years, with a starting salary of up to GBP 17,813 per annum; the position is to start on 1st October 1995. The successful candidate will have a Ph.D. degree (or equivalent) and a good working knowledge of the application of neural network models to language processing. Knowledge of the language acquisition literature and experience in experimental psycholinguistics would be an advantage. Further information can be obtained from Dr. Kim Plunkett, Department of Experimental Psychology, South Parks Road, Oxford, OX1 3UD, UK, Tel: 01865-271398, email: plunkett at psy.ox.ac.uk. From moon at pierce.ee.washington.edu Tue Jun 6 13:16:51 1995 From: moon at pierce.ee.washington.edu (Seokyong Moon) Date: Tue, 6 Jun 95 10:16:51 PDT Subject: Paper available on hidden Markov model inversion Message-ID: <9506061716.AA14623@pierce.ee.washington.edu.Jaimie> FTP-host: pierce.ee.washington.edu FTP-filename: /pub/papers/hmm-inversion.ps.Z This paper is 30 pages long. Robust Speech Recognition using Gradient-Based Inversion and Baum-Welch Inversion of Hidden Markov Models Seokyong Moon, Jenq-Neng Hwang Information Processing Laboratory Department of Electrical Engineering, FT-10 University of Washington, Seattle, WA 98195 E-mail: moon at pierce.ee.washington.edu, hwang at ee.washington.edu The gradient-based hidden Markov model (HMM) inversion algorithm is studied and applied to robust speech recognition tasks under general types of mismatched conditions.
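[Editor's note: the abstract continues below. As a minimal sketch of the inversion idea in the simpler ANN setting the abstract builds on -- freeze the trained weights and run gradient descent on the input itself -- consider the following. This is an illustration only, not the authors' algorithm; the two-layer network, the squared-error criterion, and all names are assumptions made for the example.]

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def invert_input(W1, W2, y_star, x0, lr=0.1, steps=1000):
    # Find an input x that drives a trained two-layer network's output
    # toward y_star, holding the trained weights W1 and W2 fixed.
    x = x0.copy()
    for _ in range(steps):
        h = sigmoid(W1 @ x)                   # hidden activations
        y = sigmoid(W2 @ h)                   # network output
        e = y - y_star                        # gradient of 0.5*||y - y_star||^2 w.r.t. y
        dz2 = e * y * (1.0 - y)               # back through the output nonlinearity
        dz1 = (W2.T @ dz2) * h * (1.0 - h)    # back through the hidden layer
        x -= lr * (W1.T @ dz1)                # descend on the input, not the weights
    return x

The duality the abstract describes is visible here: ordinary training holds the input fixed and updates the weights, while inversion holds the weights fixed and updates the input.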
The algorithm stems from the gradient-based inversion algorithm of an artificial neural network (ANN), by viewing an HMM as a special type of ANN. The HMM inversion has a conceptual duality to HMM training, just as ANN inversion does to ANN training. The forward training of an HMM, based on either the Baum-Welch reestimation or the gradient method, finds the model parameters that optimize some criterion, e.g., maximum likelihood (ML), maximum mutual information (MMI), or mean squared error (MSE), with given speech inputs. On the other hand, the inversion of an HMM finds speech inputs that optimize some criterion with given model parameters. The performance of the proposed gradient-based HMM inversion for noisy speech recognition under additive noise corruption and microphone mismatch conditions is compared with the robust Baum-Welch HMM inversion technique, as well as with another noisy speech recognition technique, the robust minimax (MINIMAX) classification technique. From listerrj at helios.aston.ac.uk Wed Jun 7 07:03:59 1995 From: listerrj at helios.aston.ac.uk (Richard Lister) Date: Wed, 07 Jun 1995 12:03:59 +0100 Subject: PhD Research Studentships Message-ID: <19990.9506071104@sun.aston.ac.uk> Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK PHD RESEARCH STUDENTSHIPS ------------------------- *** Full details at http://neural-server.aston.ac.uk/ *** The Neural Computing Research Group has attracted substantial levels of industrial and research council funding and will therefore be able to offer a number of full-time PhD studentships to commence in October 1995. Currently we are seeking candidates for four studentships. These will pay full fees at the home rates and hence are suitable for UK and European Union citizens only. The studentships also cover living expenses at the same rate as a research council studentship. Feature Extraction Techniques for Nonstationary Financial Market Time Series ---------------------------------------------------------------------------- The project will examine conventional and neural network techniques for the extraction of features to elucidate hidden structure in generally multivariate financial time series. The problem domain is made more complicated by the inherent nonstationarity of the time series. Techniques based on dynamical systems theory and statistical pattern analysis will be developed and applied to real-world data. The ideal candidate should be mathematically and computationally competent and have a general interest in the field of financial mathematics, although no previous experience in this area is required. The project is in collaboration with a financial company, Union CAL Ltd, London. Validation and Verification of Neural Network Systems ----------------------------------------------------- (Two Studentships) One of the major factors limiting the widespread exploitation of neural networks has been the perceived difficulty of ensuring that a trained network will continue to perform satisfactorily when installed in an operational system. In the case of safety-critical systems it is clearly vital that a high degree of overall system integrity be achieved. However, almost all potential applications of neural networks entail some level of undesirable consequence if the network generates incorrect or inaccurate predictions.
Currently there is no general framework for assessing the robustness of neural network solutions or of systems containing embedded neural networks. These two studentships will be closely associated with a substantial project funded by the Engineering and Physical Sciences Research Council to address the basic issues involved in the validation of systems containing neural networks. The studentships are funded by two industrial companies, British Aerospace and Lloyds Register of Shipping, and will involve developing case studies to demonstrate the applicability of validation and verification techniques to real-world applications involving neural networks. Potential candidates should be mathematically and computationally competent, with a background in either artificial neural networks or another relevant field. Neural networks applied to ignition timing and automatic calibration -------------------------------------------------------------------- This project involves a collaborative research programme between the Neural Computing Research Group and SAGEM in the general area of applying neural networks to the ignition timing and calibration of gasoline internal combustion engines. The ideal student would be computationally literate (preferably in C/C++) on UNIX and PC systems and have good mathematical and/or engineering abilities. An awareness of the importance of applying advanced technology and implementing ideas as engineering products is essential. In addition, the ideal candidate would have some knowledge of and interest in internal combustion engines and the relevant sensor technology. Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff: Chris Bishop Professor David Lowe Professor David Bounds Professor Geoffrey Hinton Visiting Professor Richard Rohwer Lecturer Alan Harget Lecturer Ian Nabney Lecturer David Saad Lecturer (arrives 1 August) two further posts (currently being appointed) together with the following Research Fellows: Chris Williams Shane Murnion Alan McLachlan Huaihu Zhu four further posts (currently being advertised) a full-time software support assistant, and eleven postgraduate research students. How to Apply ------------ If you wish to be considered for one of these positions you will need to complete an application form, which can be obtained by sending your full postal address to: Professor C M Bishop Research Admissions Tutor Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 359 3611 ext. 4270 Fax: 0121 333 6215 e-mail: c.m.bishop at aston.ac.uk The minimum entry qualification is a First Class or Upper Second Class Honours degree in a relevant discipline, or the equivalent overseas qualification. Overseas applicants whose first language is not English must provide evidence of competence in English. Acceptable evidence includes possession of a UK or North American degree, or a formal certificate such as the British Council's ELTS (6.0 or better) or the USA TOEFL (550 or better).
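[Editor's note: as a concrete illustration of the validation theme in the studentship descriptions above, one simple ingredient of assessing a trained network's robustness is novelty detection -- flagging test inputs that fall far from the training data, where the network's predictions should not be trusted. The sketch below is hypothetical and not part of the Aston project; the Parzen-window estimator, the bandwidth, and the threshold are assumptions made for the example.]

import numpy as np

def parzen_density(x, X_train, h=0.5):
    # Gaussian kernel density estimate of the training distribution at x.
    sq = np.sum((X_train - x) ** 2, axis=1)
    dim = X_train.shape[1]
    norm = (2.0 * np.pi * h * h) ** (dim / 2.0)
    return np.mean(np.exp(-sq / (2.0 * h * h))) / norm

def flag_novel(x, X_train, threshold=1e-3, h=0.5):
    # True if x lies in a region unsupported by the training set, so the
    # network's prediction there deserves extra scrutiny.
    return parzen_density(x, X_train, h) < threshold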
From pauer at cse.ucsc.edu Wed Jun 7 16:21:28 1995 From: pauer at cse.ucsc.edu (Peter Auer) Date: Wed, 7 Jun 1995 13:21:28 -0700 Subject: Call for impromptu-talks at COLT 95 Message-ID: <199506072021.NAA25809@arapaho.cse.ucsc.edu> Following an old (and very successful) tradition, COLT 95 (July 5 - 8 in Santa Cruz, USA) will again include informal sessions of "impromptu talks", in addition to the two invited talks and the talks that have been officially accepted for presentation by the program committee. In fact, at COLT 95 we have reserved even more time for these, because they have turned out to be quite fruitful for the quick dissemination of new or unfinished research results. The program chair of COLT 95 (Wolfgang Maass) has asked me to help in the organization of these sessions. The sessions for impromptu talks at COLT 95 will take place -- on July 6 from 4:40 to 5:20 (Chair: Ron Rivest) -- on July 6 from 7:30 to 9:00 (Chair: Peter Auer) -- on July 7 from 4:50 to 5:30 (Chair: David Haussler). Slots for these impromptu talks are assigned on a first-come, first-served basis, and you can sign up by sending email to me, Peter Auer: pauer at cse.ucsc.edu. In previous years we had impromptu talks of up to 10 minutes, but we may have to shorten that if demand is strong. Any topic of potential interest to the COLT community is appropriate for these impromptu talks, including recent research results that have been (or will be) officially presented at other conferences, reports on work in progress, and discussions of open problems. I will send out the schedule of the impromptu talks on June 28. If there are still slots available you may sign up for impromptu talks during the conference, but usually slots fill up pretty fast. Below I have attached the conference and registration information for COLT'95. Peter Auer. ---------------------------------------------------------------------- COLT '95 Eighth ACM Conference on Computational Learning Theory Wednesday, July 5 through Saturday, July 8, 1995 University of California, Santa Cruz, California ********************************************************************** Below is a short ascii summary of the information regarding this year's Colt conference. Additional information, maps, ... can be obtained from the colt web page: http://www.cse.ucsc.edu/~lisa/colt.html ********************************************************************** The Colt conference will be held on campus, which is hidden away in the redwoods on the Pacific Coast of Northern California. The conference is in cooperation with the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) and the ACM Special Interest Group on Artificial Intelligence (SIGART). 1. Flight tickets: San Jose Airport is the closest, about a 45-minute drive. San Francisco Airport (SFO) is about an hour and forty-five minutes away, but has slightly better flight connections. 2. Transportation from the airport to Santa Cruz: The first option is to rent a car and drive south from San Jose on Hwy 880, which becomes Hwy 17; from San Francisco, take either Hwy 280 or 101 to Hwy 17. When you get to Santa Cruz, take Route 1 (Mission St.) north. Turn right on Bay Street and follow the signs to UCSC. Commuters must purchase parking permits for $4.00/day M-F (parking is free Saturday and Sunday) from the information kiosk at the Main entrance to campus or the conference satellite office. Those staying on campus can pick up permits with their room keys.
Various van services also connect Santa Cruz with the San Francisco and San Jose airports. The Santa Cruz Airporter (408) 423-1214 (or (800) 497-4997 from anywhere) has regularly scheduled trips (every two hours from 9am until 11pm from San Jose International Airport, and every two hours from 8am until 10pm from SFO, $15 each way from either airport; you MUST mention the name of the conference when booking), and will drop you off at the Crown/Merrill housing. ABC Transportation (408) 464-8893 ((800) 734-4313 from California (24hr.)) runs a private sedan service ($47 for one, $57 for two, $67 for three to six from San Jose Airport to UC Santa Cruz; $79 for one, $89 for two, and $99 for three to six from SFO to UCSC; additional $10 after 11:30 pm; additional $20 to meet an international flight) and will drop you off at your room. Book at least 24 hours in advance. 3. Conference and room registration: Please fill out the enclosed form and send it to us with your payment. It MUST be postmarked by June 1 and received by June 5 to obtain the early registration rate and guarantee the room. Conference attendance is limited by the available space, and late registrations may need to be returned. Your arrival: This year we will be at the Crown/Merrill apartments. (Same place it was four years ago.) Enter the campus at the Main Entrance, which is the intersection of High and Bay Streets. (Look for the COLT signs.) Bay Street turns into Coolidge Drive; continue on this road until you reach the Crown/Merrill apartments. Housing registration will be at the Crown/Merrill Satellite Office (408) 459-2611 from 2:00 to 5:00 pm on Wednesday. Keys, meal cards, parking permits, maps, and information about what to do in Santa Cruz will be available. The office will remain open until 10:00 pm for late arrivals. Arrivals after 10:00 pm: stop at the Main Entrance Kiosk and have the guard call the College Proctor, who will meet you at the Satellite Office and give you your housing materials. Problems? Please go directly to the Crown/Merrill Satellite Office, or contact your Conference Director. In case of emergency, dial 911 from any campus phone. The weather in July is mostly sunny with occasional summer fog. Even though the air may be cool, the sun can be deceptively strong; those susceptible to sunburn should come prepared with sunblock. Bring T-shirts, slacks, shorts, and a sweater or light jacket, as it cools down at night. For information on the local bus routes and schedules, call the Metro Center at (408) 425-8600. Bring swimming trunks, tennis rackets, etc. You can get day passes for $5.00 (East Field House, Physical Education Office) to use the recreation facilities on campus. For questions about registration or accommodations, contact COLT'95, Computer Science Dept., UCSC, Santa Cruz, CA 95064. The e-mail address is colt95 at cse.ucsc.edu, and fax is (408)459-4829. For emergencies, call (408)459-2263. 4. General Conference Information: The Conference Registration will be 4 - 8pm Wednesday, outside the Cowell dining hall. Late registrations will be at the same location during the technical sessions. All lectures will be in the Cowell dining hall. A banquet will be held Wednesday from 6:30--8:00pm outside the Cowell dining hall, followed by an invited talk by Leslie Valiant at 8:00pm inside the dining hall. There will be a terminal available in the dining hall for checking e-mail. The campus Copy Center is in the Communications Building (open 8am to 5pm).
The conference has been organized to allow time for informal discussion and collaboration. In addition to the regular technical sessions, we are pleased to present two special invited lectures by Leslie Valiant and Terrence Sejnowski. -------------------- Conference Schedule -------------------- ----------------- Wednesday, July 5 ----------------- 2:00-5:00 pm, Housing Registration, Crown/Merrill Satellite Office. Note: All technical sessions will take place in the Cowell Dining Hall. Session 1: 5:00 - 6:00 Chair: Wolfgang Maass 5:00-5:20 ``An Experimental and Theoretical Comparison of Model Selection Methods" Michael Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron 5:20-5:40 ``On the Learnability and Usage of Acyclic Probabilistic Finite Automata" Dana Ron, Yoram Singer, and Naftali Tishby 5:40-6:00 ``Learning to Model Sequences Generated by Switching Distributions" Yoav Freund and Dana Ron 6:30 - 8:00 Banquet 8:00 Invited Talk by Leslie G. Valiant ``Rationality" ---------------- Thursday, July 6 ---------------- Session 2: 8:30 - 10:00 Chair: Dana Angluin 8:30-8:50 ``A Game of Prediction with Expert Advice" Volodya G. Vovk 8:50-9:10 ``Predicting Nearly as well as the Best Pruning of a Decision Tree" David P. Helmbold and Robert E. Schapire 9:10-9:30 ``A Comparison of New and Old Algorithms for a Mixture Estimation Problem" David P. Helmbold, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth 9:30-9:40 ``A Note on Learning Multivariate Polynomials under the Uniform Distribution" Nader H. Bshouty 9:40-9:50 ``Randomized Approximate Aggregating Strategies and Their Applications to Prediction and Discrimination" Kenji Yamanishi 9:50-10:00 ``How to Use Expert Advice in the Case when Actual Values of Estimated Events Remain Unknown" Olga Mitina and Nikolai Vereshchagin 10:00 - 10:30 Break Session 3: 10:30 - 12:00 Chair: Robert Schapire 10:30-10:50 ``Learning With Unreliable Boundary Queries" Avrim Blum, Prasad Chalasani, Sally A. Goldman, and Donna K. Slonim 10:50-11:10 ``Generalized Teaching Dimensions and the Query Complexity of Learning" Tibor Hegedus 11:10-11:30 ``Learning DNF over the Uniform Distribution Using a Quantum Example Oracle" Nader H. Bshouty and Jeffrey C. Jackson 11:30-11:40 ``Reducing the Number of Queries in Self-Directed Learning" Yiqun L. Yin 11:40-11:50 ``On Self-Directed Learning" Shai Ben-David, Nadav Eiron, and Eyal Kushilevitz 11:50-12:00 ``Being Taught can be Faster than Asking Questions" Ronald L. Rivest and Yiqun L. Yin 12:00 - 1:30 Lunch Session 4: 1:30 - 3:30 Chair: Phil Long 1:30-1:50 ``Reductions for Learning via Queries" William Gasarch and Geoffrey R. Hird 1:50-2:10 ``Learning via Queries and Oracles" Frank Stephan 2:10-2:20 ``On the Inductive Inference of Real Valued Functions" Kalvis Apsitis, Rusins Freivalds, and Carl H. Smith 2:20-2:30 ``Inductive Inference of Functions on the Rationals" Douglas A. Cenzer and William R. Moser 2:30-2:40 ``Language Learning from Texts: Mind Changes, Limited Memory and Monotonicity" Efim Kinber and Frank Stephan 2:40-2:50 ``On Learning Decision Trees with Large Output Domains" Nader H. Bshouty, Christino Tamon, and David K. Wilson 2:50-3:00 ``On the Learnability of $Z_{N}$-DNF Formulas" Nader Bshouty, Zhixiang Chen, Scott E.
Decatur, and Steven Homer 3:00-3:10 ``Proper Learning Algorithm for Functions of $k$ Terms under Smooth Distributions" Yoshifumi Sakai, Eiji Takimoto, and Akira Maruoka 3:10-3:20 ``On-line Learning of Binary and $n$-ary Relations over Multi-dimensional Clusters" Atsuyoshi Nakamura and Naoki Abe 3:20-3:30 ``DNF: If You Can't Learn 'em, Teach 'em: An Interactive Model of Teaching" David H. Mathias 3:30 - 3:40 Break 3:40 - 4:40 Poster Discussion Session I 4:40 - 5:20 Impromptu Talks I - Chair: Ron Rivest 7:30 - 9:00 Impromptu Talks II - Chair: Peter Auer 9:00 Business Meeting - Cowell Dining Hall -------------- Friday, July 7 -------------- 8:30-9:30 Invited Talk by Terrence Sejnowski ``Predictive Hebbian Learning" 9:30 - 10:00 Break Session 5: 10:00 - 12:00 Chair: Naftali Tishby 10:00-10:20 ``On Genetic Algorithms" Eric B. Baum, Dan Boneh, and Charles Garrett 10:20-10:40 ``On the Optimal Capacity of Binary Neural Networks: Rigorous Combinatorial Approaches" Jeong Han Kim and James R. Roche 10:40-11:00 ``From Noise-Free to Noise-Tolerant and from On-line to Batch Learning" Norbert Klasner and Hans Ulrich Simon 11:00-11:10 ``Sample Sizes for Sigmoidal Neural Networks" John Shawe-Taylor 11:10-11:20 ``Online Learning via Congregational Gradient Descent" Kim L. Blackmore, Robert C. Williamson, Iven M. Y. Mareels, and William A. Sethares 11:20-11:30 ``Criteria for Specifying Machine Complexity in Learning" Changfeng Wang and Santosh S. Venkatesh 11:30-11:40 ``Markov Decision Processes in Large State Spaces" Lawrence K. Saul and Satinder P. Singh 11:40-11:50 ``The Perceptron Algorithm vs. Winnow: Linear vs. Logarithmic Mistake Bounds when Few Input Variables are Relevant" Jyrki Kivinen and Manfred K. Warmuth 11:50-12:00 ``Learning by a Population of Perceptrons" Kukjin Kang and Jong-Hoon Oh 12:00 - 1:30 Lunch Session 6: 1:30 - 3:40 Chair: Tom Dietterich 1:30-1:50 ``Learning to Reason with a Restricted View" Roni Khardon and Dan Roth 1:50-2:10 ``Learning Internal Representations" Jonathan Baxter 2:10-2:20 ``Piecemeal Graph Exploration by a Mobile Robot" Baruch Awerbuch, Margrit Betke, Ronald L. Rivest, and Mona Singh 2:20-2:30 ``Concept Learning with Geometric Hypotheses" David P. Dobkin and Dimitrios Gunopulos 2:30-2:40 ``More or Less Efficient Agnostic Learning of Convex Polygons" Paul Fischer 2:40-2:50 ``Noise-Tolerant Parallel Learning of Geometric Concepts" Nader H. Bshouty, Sally A. Goldman, and David H. Mathias 2:50-3:00 ``On Learning from Noisy and Incomplete Examples" Scott E. Decatur and Rosario Gennaro 3:00-3:10 ``On Learning Bounded-Width Branching Programs" Funda Ergün, Ravi S. Kumar, and Ronitt Rubinfeld 3:10-3:20 ``On Efficient Agnostic Learning of Linear Combinations of Basis Functions" Wee Sun Lee, Peter L. Bartlett, and Robert C. Williamson 3:20-3:30 ``Sequential PAC Learning" Dale Schuurmans and Russell Greiner 3:30-3:40 ``Regression NSS: An Alternative to Cross Validation" Michael P. Perrone and Brian S. Blais 3:40 - 3:50 Break 3:50 - 4:50 Poster Discussion Session II 4:50 - 5:30 Impromptu Talks III - Chair: David Haussler 8:00 - 10:00 Beach Party at Twin Lakes Beach ---------------- Saturday, July 8 ---------------- Session 7: 8:40 - 10:00 Chair: Jeff Jackson 8:40-9:00 ``More Theorems about Scale-sensitive Dimensions and Learning" Peter L. Bartlett and Philip M.
Long 9:00-9:20 ``General Bounds on the Mutual Information Between a Parameter and $n$ Conditionally Independent Observations" David Haussler and Manfred Opper 9:20-9:40 ``Learning from a Mixture of Labeled and Unlabeled Examples with Parametric Side Information" Joel Ratsaby and Santosh S. Venkatesh 9:40-10:00 ``Learning Using Group Representations" Dan Boneh 10:00 - 10:40 Break Session 8: 10:40 - 12:00 Chair: Peter Bartlett 10:40-11:00 ``Exactly Learning Automata with Small Cover Time" Dana Ron and Ronitt Rubinfeld 11:00-11:20 ``Specification and Simulation of Statistical Query Algorithms for Efficiency and Noise Tolerance" Javed A. Aslam and Scott E. Decatur 11:20-11:40 ``Simple Learning Algorithms Using Divide and Conquer" Nader H. Bshouty 11:40-Noon ``A Note on VC-Dimension and Measure of Sets of Reals" Shai Ben-David and Leonid Gurvits Noon Conference Ends ======================== REGISTRATION INFORMATION ======================== Please fill in the information needed for registration and accommodations. Make your payment by check or international money order, in U.S. dollars and payable through a U.S. bank, to UC Regents/COLT '95. Mail this form together with payment (by June 1, 1995 to avoid the late fee) to: COLT '95 Dept. of Computer Science University of California Santa Cruz, California 95064 Questions: e-mail colt95 at cse.ucsc.edu, fax (408)459-4829. Confirmations will be sent by e-mail. Anyone needing special arrangements to accommodate a disability should enclose a note with their registration. If you don't receive confirmation within three weeks of payment, let us know. Name: _________________________________________________________ Affiliation: __________________________________________________ Address: ______________________________________________________ City: ____________________ State: __________ Zip: _____________ Country: ______________________________________________________ Telephone: _______________________ Fax: ______________________ Email address: ________________________________________________ The registration fee includes a copy of the proceedings. ACM/SIG Members: $170 (with banquet) $ ______________ Non-Members: $190 (with banquet) $ ______________ Late Members: $225 (after June 1) $ ______________ Late Non-Members: $245 (after June 1) $ ______________ Full time students: $85 (no banquet) $ ______________ Extra banquet tickets: ______ (quantity) x $20 = $_______________ How many in your party have dietary restrictions? _____________________ Vegetarian: _____________________ Other: ______________________________ Shirt size: _____ medium _____ large _____ x-large ------------------------- Accommodations and Dining ------------------------- Accommodation fees are $60 per person for a double and $72 for a single per night at the Crown/Merrill Apartments. Cafeteria style breakfast (7:30 to 8:30am), lunch (12:00 to 1:00pm), and dinner (5:30 to 6:30pm) will be served in the Crown/Merrill Dining Hall. Doors close at the end of the time indicated, but dining may continue beyond this time. The first meal provided is dinner on the day of arrival and the last meal is lunch on the day you leave. NO REFUNDS can be given after June 1. Those with uncertain plans should make reservations at an off-campus hotel. Each attendee should pick one of the following options: _____ Package #1: Weds., Thurs., Fri. nights: $180 double, $216 single. _____ Package #2: Weds., Thurs., Fri., Sat. nights: $240 double, $288 single. _____ Other housing arrangement. 
Each 4-person apartment has a living room, a kitchen, a common bathroom, and either four separate single rooms, two double rooms, or two single rooms and one double room. We need the following information to make room assignments: Gender (M/F): ____________________ Smoker (Y/N): ______________________ Roommate Preference: __________________________________________________ For shorter stays, longer stays, and other special requirements, you can get other accommodations through the Conference Office. Make reservations directly with them at (408) 459-2611, fax (408) 459-3422, and do this soon, as on-campus rooms for the summer fill up well in advance. Off-campus hotels include the Dream Inn (408) 426-4330 and the Ocean Pacific Lodge (408) 457-1234 or (800) 995-0289. AMOUNT ENCLOSED: Registration $ ___________________ Banquet tickets $ ___________________ Accommodations $ ___________________ Discount* $ ___________________ TOTAL $ ___________________ *There is a $35 discount for registering for both Colt '95 and ML '95. (The discount does not apply to student registrations.) Proof of registration to ML '95 is required for the discount to be taken. We explored the possibility of a shuttle bus from Colt to ML, but there was not enough interest. You will need to make your own arrangements for travel to ML from Santa Cruz. From yarowsky at unagi.cis.upenn.edu Thu Jun 8 16:27:33 1995 From: yarowsky at unagi.cis.upenn.edu (David Yarowsky) Date: Thu, 8 Jun 95 16:27:33 EDT Subject: ACL-95 WVLC3 - Supervised Training vs Self-organizing Methods Message-ID: <9506082027.AA16195@unagi.cis.upenn.edu> Keywords: Corpora, Self-Organization, Statistical Models, Unsupervised Learning THE THIRD WORKSHOP ON VERY LARGE CORPORA ----------------------------------------- Friday, 30 June 1995 8:45 AM - 5:25 PM MIT, Cambridge, Massachusetts, USA at ACL-95 (June 26-29) (Sponsored by ACL's SIGDAT and SIGNLL) The workshop will present original research in corpus-based and statistical natural language processing. Topics will include sense disambiguation, grammar induction, part-of-speech tagging, information retrieval, language modeling, and machine translation. This year's theme is: Supervised Training vs. Self-organizing Methods Historically, annotated corpora have made a significant contribution to tasks such as part-of-speech tagging and sense disambiguation. But annotated corpora are expensive and generally unavailable for languages other than English. Self-organizing methods offer the hope that annotated corpora might not be necessary. Can we achieve comparable performance using little or no tagged training data? What are the tradeoffs? Organizers: Ken Church and David Yarowsky Industrial Sponsor: LEXIS-NEXIS, Division of Reed and Elsevier, Plc. REGISTRATION: Registration fees are $40 for payment received by 15 June 1995 and $45 at the door. Registration includes a copy of the proceedings, catered lunch and refreshments during the day. Acceptable forms of payment are US$ cheques payable to "ACL" or credit card (VISA/Mastercard) payment. E-mail registrations are encouraged. Please submit the following form along with payment: -------------------------------------------------------------------- Name: Institution (for name tag): Postal address: Email address: Payment (specify cheque or credit card): Credit card info - Name on card: - Card number: - Expiration date: Dietary requirements (vegetarian, etc.): -------------------------------------------------------------------- Please send to: David Yarowsky Dept.
of Computer and Information Science University of Pennsylvania 200 S. 33rd St. Philadelphia, PA 19104-6389 USA email: yarowsky at unagi.cis.upenn.edu ============================================================================ PRELIMINARY PROGRAM 8:15 - 8:45 Registration. Coffee, danish, etc. available 8:45 - 8:50 Welcome 8:50 - 9:35 INVITED TALK (Mark Liberman) 9:35 - 9:50 Break 9:50 - 10:15 Eric Brill Unsupervised Learning of Disambiguation Rules for Part of Speech Tagging 10:15 - 10:40 Carl de Marcken Lexical Heads, Phrase Structure and the Induction of Grammar 10:40 - 11:05 Michael Collins and James Brooks Prepositional Phrase Attachment through a Backed-off Model 11:05 - 11:15 Break 11:15 - 11:40 Andrew Golding A Bayesian Hybrid Method for Context-sensitive Spelling Correction 11:40 - 12:05 Philip Resnik Disambiguating Noun Groupings with Respect to Wordnet Senses 12:05 - 1:05 CATERED LUNCH 1:05 - 1:30 Dekai Wu Trainable Coarse Bilingual Grammars for Parallel Text Bracketing 1:30 - 1:55 Lance Ramshaw and Mitch Marcus Text Chunking using Transformation-Based Learning 1:55 - 2:05 Break 2:05 - 3:00 INVITED TALK (Henry Kucera and Nelson Francis) 3:00 - 3:10 Break 3:10 - 3:35 Fernando Pereira, Yoram Singer and Naftali Tishby Beyond Word N-Grams 3:35 - 4:00 Jing-Shin Chang, Yi-Chung Lin and Keh-Yih Su Automatic Construction of a Chinese Electronic Dictionary 4:00 - 4:10 Break 4:10 - 4:35 Kenneth Church and William Gale Inverse Document Frequency (IDF): A Measure of Deviations from Poisson 4:35 - 5:00 Joe Zhou and Pete Dapkus Automatic Suggestion of Significant Terms for a Predefined Topic 5:00 - 5:25 Ellen Riloff and Jay Shoen Automatically Acquiring Conceptual Patterns without an Annotated Corpus ------------------------------------------------------------ More Information: http://www.cis.upenn.edu/~yarowsky/wvlc3.html ACL-95 Homepage: http://www.ai.mit.edu/people/cgdemarc/acl/acl-info.html From mhb0 at Lehigh.EDU Sun Jun 11 15:57:26 1995 From: mhb0 at Lehigh.EDU (Mark H. Bickhard) Date: Sun, 11 Jun 1995 15:57:26 EDT Subject: New Book Announcement Message-ID: <199506111957.PAA51802@ns4-1.CC.Lehigh.EDU> BOOK ANNOUNCEMENT Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution. Elsevier Science 1995 Mark H. Bickhard Lehigh University mhb0 at lehigh.edu Loren Terveen AT&T Bell Laboratories terveen at research.att.com SHORT DESCRIPTION The book focuses on a conceptual flaw in contemporary artificial intelligence and cognitive science. Many people have discovered diverse manifestations and facets of this flaw, but the central conceptual impasse is at best only partially perceived. Its consequences, nevertheless, visit themselves as distortions and failures of multiple research projects - and make impossible the ultimate aspirations of the fields. The impasse concerns a presupposition concerning the nature of representation - that all representation has the nature of encodings: encodingism. Encodings certainly exist, but encoding*ism* is at root logically incoherent; any *programmatic* research predicated on it is doomed to distortion and ultimate failure. The impasse and its consequences - and steps away from that impasse - are explored in a large number of projects and approaches. These include SOAR, CYC, PDP, situated cognition, subsumption architecture robotics, and the frame problems - a general survey of the current research in AI and Cognitive Science emerges. Interactivism, an alternative model of representation, is proposed and examined. 
SYNOPSIS The central point of Foundational Issues in Artificial Intelligence and Cognitive Science - Impasse and Solution is that there is a conceptual flaw in contemporary approaches to artificial intelligence and cognitive science, a flaw that makes impossible the ultimate aspirations of these fields. Many people have discovered diverse manifestations and facets of this flaw, but the central conceptual impasse is only partially perceived. The consequences, nevertheless, visit themselves as distortions and failures of research projects across the fields. The locus of the impasse concerns a common assumption or presupposition that underlies all parts of the field - a presupposition concerning the nature of representation. We call this assumption "encodingism", the assumption that representation is fundamentally constituted as encodings. This assumption, in fact, has been dominant throughout Western history. We argue that it is at root logically incoherent, and, therefore, that any programmatic research predicated on it is doomed to distortion and ultimate failure. On the other hand, encodings clearly do exist, and therefore are clearly possible, and we show how that could be - but they cannot be the foundational form of representation. Similarly, contemporary encoding approaches are enormously powerful, and major advances have been made within these dominant programmatic frameworks - but the encodingism flaw in those frameworks limits their ultimate possibilities, and will frustrate efforts toward the programmatic goal of understanding and constructing minds. The book characterizes and demonstrates this impasse, discusses a number of partial recognitions of and movements away from it, and then traces its consequences in a large number of projects and approaches within the fields. These include SOAR, CYC, PDP, situated cognition, subsumption architecture robotics, and the frame problems. In surveying the consequences of the impasse, we also provide a general survey of the current research in AI and Cognitive Science per se. We do not propose an unsolvable impasse, and, in fact, present an alternative that does resolve that impasse. This is developed for contrast, for perspective, to demonstrate that there is an alternative, and to explore some of its nature. We end with an exploration of some of the architectural implications of the alternative - called interactivism - and argue that such architectures are 1) not subject to the encodingism incoherence, 2) more powerful than Turing machines, 3) more consistent with properties of central nervous system functioning than other contemporary approaches, and 4) capable of resolving the many problematics in the field that we argue are in fact manifestations of the underlying impasse. The audience for this book will include researchers, academics, and students in artificial intelligence, cognitive science, robotics, cognitive psychology, philosophy of mind and language, natural language processing, connectionism, and learning. The focus of the book is on the nature of representation, and representation permeates everywhere - so also, therefore, do the implications of our critique and our alternative permeate everywhere.
CONTENTS Preface xi Introduction 1 A PREVIEW 2 I GENERAL CRITIQUE 5 1 Programmatic Arguments 7 CRITIQUES AND QUALIFICATIONS 8 DIAGNOSES AND SOLUTIONS 8 IN-PRINCIPLE ARGUMENTS 9 2 The Problem of Representation 11 ENCODINGISM 11 Circularity 12 Incoherence - The Fundamental Flaw 13 A First Rejoinder 15 The Necessity of an Interpreter 17 3 Consequences of Encodingism 19 LOGICAL CONSEQUENCES 19 Skepticism 19 Idealism 20 Circular Microgenesis 20 Incoherence Again 20 Emergence 21 4 Responses to the Problems of Encodings 25 FALSE SOLUTIONS 25 Innatism 25 Methodological Solipsism 26 Direct Reference 27 External Observer Semantics 27 Internal Observer Semantics 28 Observer Idealism 29 Simulation Observer Idealism 30 SEDUCTIONS 31 Transduction 31 Correspondence as Encoding: Confusing Factual and Epistemic Correspondence 32 5 Current Criticisms of AI and Cognitive Science 35 AN APORIA 35 Empty Symbols 35 ENCOUNTERS WITH THE ISSUES 36 Searle 36 Gibson 40 Piaget 40 Maturana and Varela 42 Dreyfus 42 Hermeneutics 44 6 General Consequences of the Encodingism Impasse 47 REPRESENTATION 47 LEARNING 47 THE MENTAL 51 WHY ENCODINGISM? 51 II INTERACTIVISM: AN ALTERNATIVE TO ENCODINGISM 53 7 The Interactive Model 55 BASIC EPISTEMOLOGY 56 Representation as Function 56 Epistemic Contact: Interactive Differentiation and Implicit Definition 60 Representational Content 61 EVOLUTIONARY FOUNDATIONS 65 SOME COGNITIVE PHENOMENA 66 Perception 66 Learning 69 Language 71 8 Implications for Foundational Mathematics 75 TARSKI 75 Encodings for Variables and Quantifiers 75 Tarski's Theorems and the Encodingism Incoherence 76 Representational Systems Adequate to Their Own Semantics 77 Observer Semantics 78 Truth as a Counterexample to Encodingism 79 TURING 80 Semantics for the Turing Machine Tape 81 Sequence, But Not Timing 81 Is Timing Relevant to Cognition? 
83 Transcending Turing Machines 84 III ENCODINGISM: ASSUMPTIONS AND CONSEQUENCES 87 9 Representation: Issues within Encodingism 89 EXPLICIT ENCODINGISM IN THEORY AND PRACTICE 90 Physical Symbol Systems 90 The Problem Space Hypothesis 98 SOAR 100 PROLIFERATION OF BASIC ENCODINGS 106 CYC - Lenat's Encyclopedia Project 107 TRUTH-VALUED VERSUS NON-TRUTH-VALUED 118 Procedural vs Declarative Representation 119 PROCEDURAL SEMANTICS 120 Still Just Input Correspondences 121 SITUATED AUTOMATA THEORY 123 NON-COGNITIVE FUNCTIONAL ANALYSIS 126 The Observer Perspective Again 128 BRIAN SMITH 130 Correspondence 131 Participation 131 No Interaction 132 Correspondence is the Wrong Category 133 ADRIAN CUSSINS 134 INTERNAL TROUBLES 136 Too Many Correspondences 137 Disjunctions 138 Wide and Narrow 140 Red Herrings 142 10 Representation: Issues about Encodingism 145 SOME EXPLORATIONS OF THE LITERATURE 145 Stevan Harnad 145 Radu Bogdan 164 Bill Clancey 169 A General Note on Situated Cognition 174 Rodney Brooks: Anti-Representationalist Robotics 175 Agre and Chapman 178 Benny Shanon 185 Pragmatism 191 Kuipers' Critters 195 Dynamic Systems Approaches 199 A DIAGNOSIS OF THE FRAME PROBLEMS 214 Some Interactivism-Encodingism Differences 215 Implicit versus Explicit Classes of Input Strings 217 Practical Implicitness: History and Context 220 Practical Implicitness: Differentiation and Apperception 221 Practical Implicitness: Apperceptive Context Sensitivities 222 A Counterargument: The Power of Logic 223 Incoherence: Still another corollary 229 Counterfactual Frame Problems 230 The Intra-object Frame Problem 232 11 Language 235 INTERACTIVIST VIEW OF COMMUNICATION 237 THEMES EMERGING FROM AI RESEARCH IN LANGUAGE 239 Awareness of the Context-dependency of Language 240 Awareness of the Relational Distributivity of Meaning 240 Awareness of Process in Meaning 242 Toward a Goal-directed, Social Conception of Language 247 Awareness of Goal-directedness of Language 248 Awareness of Social, Interactive Nature of Language 252 Conclusions 259 12 Learning 261 RESTRICTION TO A COMBINATORIC SPACE OF ENCODING 261 LEARNING FORCES INTERACTIVISM 262 Passive Systems 262 Skepticism, Disjunction, and the Necessity of Error for Learning 266 Interactive Internal Error Conditions 267 What Could be in Error? 
270 Error as Failure of Interactive Functional Indications - of Interactive Implicit Predications 270 Learning Forces Interactivism 271 Learning and Interactivism 272 COMPUTATIONAL LEARNING THEORY 273 INDUCTION 274 GENETIC AI 275 Overview 276 Convergences 278 Differences 278 Constructivism 281 13 Connectionism 283 OVERVIEW 283 STRENGTHS 286 WEAKNESSES 289 ENCODINGISM 292 CRITIQUING CONNECTIONISM AND AI LANGUAGE APPROACHES 296 IV SOME NOVEL ARCHITECTURES 299 14 Interactivism and Connectionism 301 INTERACTIVISM AS AN INTEGRATING PERSPECTIVE 301 Hybrid Insufficiency 303 SOME INTERACTIVIST EXTENSIONS OF ARCHITECTURE 304 Distributivity 304 Metanets 307 15 Foundations of an Interactivist Architecture 309 THE CENTRAL NERVOUS SYSTEM 310 Oscillations and Modulations 310 Chemical Processing and Communication 311 Modulatory "Computations" 312 The Irrelevance of Standard Architectures 313 A Summary of the Argument 314 PROPERTIES AND POTENTIALITIES 317 Oscillatory Dynamic Spaces 317 Binding 318 Dynamic Trajectories 320 "Formal" Processes Recovered 322 Differentiators In An Oscillatory Dynamics 322 An Alternative Mathematics 323 The Interactive Alternative 323 V CONCLUSIONS 325 16 Transcending the Impasse 327 FAILURES OF ENCODINGISM 327 INTERACTIVISM 329 SOLUTIONS AND RESOURCES 330 TRANSCENDING THE IMPASSE 331 References 333 Index 367 PREFACE Artificial Intelligence and Cognitive Science are at a foundational impasse which is at best only partially recognized. This impasse has to do with assumptions concerning the nature of representation: standard approaches to representation are at root circular and incoherent. In particular, Artificial Intelligence research and Cognitive Science are conceptualized within a framework that assumes that cognitive processes can be modeled in terms of manipulations of encoded symbols. Furthermore, the more recent developments of connectionism and Parallel Distributed Processing, even though the issue of manipulation is contentious, share the basic assumption concerning the encoding nature of representation. In all varieties of these approaches, representation is construed as some form of encoding correspondence. The presupposition that representation is constituted as encodings, while innocuous for *some applied* Artificial Intelligence research, is fatal for the further reaching programmatic aspirations of both Artificial Intelligence and Cognitive Science. First, this encodingist assumption constitutes a *presupposition* about a basic aspect of mental phenomena - representation - rather than constituting a *model* of that phenomenon. Aspirations of Artificial Intelligence and Cognitive Science to provide any foundational account of representation are thus doomed to circularity: the encodingist approach presupposes what it purports to be (programmatically) able to explain. Second, the encoding assumption is not only itself in need of explication and modeling, but, even more critically, the standard presupposition that representation is *essentially* constituted as encodings is logically fatally flawed. This flaw yields numerous subsidiary consequences, both conceptual and applied. This book began as an article attempting to lay out this basic critique at the programmatic level. Terveen suggested that it would be more powerful to supplement the general critique with explorations of actual projects and positions in the fields, showing how the foundational flaws visit themselves upon the efforts of researchers. 
We began that task, and, among other things, discovered that there is no natural closure to it - there are always more positions that could be considered, and they increase in number exponentially with time. There is no intent and no need, however, for our survey to be exhaustive. It is primarily illustrative and demonstrative of the problems that emerge from the underlying programmatic flaw. Our selections of what to include in the survey have had roughly three criteria. We favored: 1) major and well-known work, 2) positions that illustrate interesting deleterious consequences of the encodingism framework, and 3) positions that illustrate the existence and power of moves in the direction of the alternative framework that we propose. We have ended up, *en passant*, with a representative survey of much of the field. Nevertheless, there remain many more positions and research projects that we would like to have been able to address. MAIN FEATURES Identifies a fundamental premise about the nature of representation that underlies much of Cognitive Science - that representation is constituted as encodings. Explores fatal flaws with this premise. Surveys major projects within Cognitive Science and Artificial Intelligence. Shows how they embody the encodingism premise, and how they are limited by it. Identifies movements within Cognitive Science and AI away from encodingism. Presents an alternative to encodingism - interactivism. Demonstrates that interactivism avoids the fatal flaws of encodingism, and that it provides a coherent framework for understanding representation. Unifies insights from the various movements in Cognitive Science away from encodingism. Sketches an interactivist cognitive architecture. FIELDS OF INTEREST Cognitive Science Simulation of Cognitive Processes Artificial Intelligence, Knowledge Engineering, Expert Systems Human Information Processing Philosophy of Language Philosophy of Mind Cognitive Psychology Robotics Artificial Life Autonomous Agents Dynamic Systems and Behavior Learning Theory of Computation Semantics Pragmatics Connectionism Linguistics Neuroscience Bickhard, M. H., Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science - Impasse and Solution. Elsevier Science. ISBN 0 444 82048 5 In the US/Canada orders may be placed with: Elsevier Science P.O. Box 945 New York, NY 10159-0945 Phone (212) 633-3750 Fax (212) 633-3764 Email: usorders-f at elsevier.com Elsevier has given this book an unfortunately high price: Dfl. 240 -- US$ 141.25. We deeply regret that. Nevertheless, we suggest that it is well worth taking a look at, whether by purchase, local library, or inter-library loan. From georg at ai.univie.ac.at Tue Jun 13 09:26:13 1995 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Tue, 13 Jun 1995 15:26:13 +0200 (MET DST) Subject: Neural Nets and EEG: WWW page and workshop Message-ID: <199506131326.PAA29403@jedlesee.ai.univie.ac.at> The European BIOMED-1 project ========================================================================= A N N D E E (Enhancement of EEG-Based Diagnosis of Neurological and Psychiatric Disorders Using Artificial Neural Networks) ========================================================================= announces its WWW home page: http://www.ai.univie.ac.at/oefai/nn/anndee/ ANNDEE is a concerted action sponsored by the European Commission and the Austrian Federal Ministry of Science, Research, and the Arts.
It is devoted to coordinating research at several European centers aimed at processing EEG data using neural networks, in order to enhance diagnosis based on EEG. Among the application areas focused upon within ANNDEE are: - detection and classification of psychoses (e.g. schizophrenia) - detection and classification of degenerative diseases (e.g. Parkinson's) - automatic sleep staging and detection of sleep disorders (e.g. apnea, arousals) - spike detection in epilepsy - classification of single-trial, event-related EEG (e.g. for aiding the handicapped) The home page at the above URL not only gives information about the ANNDEE project but is aimed at growing into a comprehensive server for anyone interested in this topic. Currently it includes - a list of partners and associated sites - a list of other sites on the Web - a search form for bibliographical references - links to publicly available EEG data - a list of important events =========================I M P O R T A N T !============================== Currently, some of the pages are still rather short. In order to make these services as complete as possible, we urge everyone working on EEG data processing with neural networks to send us their - address (incl. URL, if applicable) - description of their work - references - links to available data (this is not restricted to European sites!!) Send email to: anndee-admin at ai.univie.ac.at Questions concerning the scientific part of the project should be directed to: georg at ai.univie.ac.at (Georg Dorffner) ========================================================================== Check it out!! This service is part of the WWW server at the Austrian Research Institute for Artificial Intelligence Vienna, Austria __________________________________________________________________________ First Announcement: Public ANNDEE Meeting The ANNDEE project announces its first public workshop ======================================= Neural Network-Based EEG Analysis ======================================= June 29-30, 1995 Graz, Austria This workshop comprises lectures by ANNDEE participants as well as invited speakers, such as J. Kangas (Finland). It also includes tutorials on LVQ and SOM (self-organizing feature maps). For more information see http://www-dpmi.tu-graz.ac.at/Workshop/Workshop.html or send email to: pregenz at dpmi.tu-graz.ac.at (Martin Pregenzer) ______________________________________________________________ From dawei at venezia.rockefeller.edu Tue Jun 13 12:42:17 1995 From: dawei at venezia.rockefeller.edu (Dawei Dong) Date: Tue, 13 Jun 95 12:42:17 -0400 Subject: two papers on temporal information processing by Dong & Atick Message-ID: <9506131642.AA26190@venezia.rockefeller.edu> A theory of temporal information processing in neural systems: how do the early visual pathways, such as the LGN, temporally modulate the incoming signals of natural scenes? The following two papers explore the above subject by 1) measuring the temporal power spectrum of natural time-varying images to reveal the underlying statistical regularities, and 2) based on the measurements, using information theory to predict the optimal temporal filter, which is shown to be in quantitative agreement with physiological experiments. Dawei Dong 1) ftp://venezia.rockefeller.edu/dawei/papers/95-TIME.ps.Z (213K, 19 pages) Statistics of natural time-varying images Dawei W. Dong and Joseph J.
Atick Computational Neuroscience Laboratory The Rockefeller University 1230 York Avenue New York, NY 10021-6399 Abstract Natural time-varying images possess substantial spatiotemporal correlations. We measure these correlations --- or equivalently the power spectrum --- for an ensemble of more than a thousand segments of motion pictures, and we find significant regularities. More precisely, our measurements show that the dependence of the power spectrum on the spatial frequency, $f$, and temporal frequency, $w$, is in general nonseparable and is given by $f^{-m-1} F(w/f)$, where $F(w/f)$ is a nontrivial function of the ratio $w/f$. We give a theoretical derivation of this scaling behaviour and show that it emerges from objects with a static power spectrum $\sim f^{-m}$, appearing at a wide range of depths and moving with a distribution of velocities relative to the observer. We show that in the regime of relatively high temporal and low spatial frequencies, the power spectrum becomes independent of the details of the velocity distribution and it is separable into the product of spatial and temporal power spectra, with the temporal part given by the universal power-law $\sim w^{-2}$. Making some reasonable assumptions about the form of the velocity distribution, we derive an analytical expression for the spatiotemporal power spectrum which is in excellent agreement with the data for the entire range of spatial and temporal frequencies of our measurements. The results in this paper have direct implications for neural processing of time-varying images in the visual pathway. (Accepted for publication in Network: Computation in Neural Systems) 2) ftp://venezia.rockefeller.edu/dawei/papers/95-LGN.ps.Z (279K, 26 pages) Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus Dawei W. Dong and Joseph J. Atick Computational Neuroscience Laboratory The Rockefeller University 1230 York Avenue New York, NY 10021-6399 Abstract Natural time-varying images possess significant temporal correlations when sampled frame by frame by the photoreceptors. These correlations persist even after retinal processing and hence, under natural activation conditions, the signal sent to the lateral geniculate nucleus is temporally redundant or inefficient. We explore the hypothesis that the LGN is concerned, among other things, with improving the efficiency of the visual representation through active temporal decorrelation of the retinal signal, much in the same way that the retina improves efficiency by spatially decorrelating incoming images. Using some recently measured statistical properties of time-varying images, we predict the spatio-temporal receptive fields that achieve this decorrelation. It is shown that, because of neuronal nonlinearities, temporal decorrelation requires two response types, the *lagged* and *nonlagged*, just as spatial decorrelation requires *on* and *off* response types. The tuning and response properties of the predicted LGN cells compare quantitatively well with what is observed in recent physiological experiments. (Network: Computation in Neural Systems, Vol. 6(2), pp. 159-178) From omlinc at research.nj.nec.com Tue Jun 13 14:47:26 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Tue, 13 Jun 95 14:47:26 EDT Subject: Preprint Available - Knowledge Extraction Message-ID: <9506131847.AA00207@arosa> The following technical report is available from the archive of the Computer Science Department, University of Maryland.
URL: http://www.cs.umd.edu:80/TR/UMCP-CSD:CS-TR-3465 FTP: ftp.cs.umd.edu:/pub/papers/papers/3465/3465.ps.Z We welcome your comments. Christian Extraction of Rules from Discrete-Time Recurrent Neural Networks Revised Technical Report CS-TR-3465 and UMIACS-TR-95-54 University of Maryland, College Park, MD 20742 Christian W. Omlin and C. Lee Giles NEC Research Institute 4 Independence Way Princeton, N.J. 08540 USA E-mail: {omlinc,giles}@research.nj.nec.com ABSTRACT The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFA's) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract, from the same network, different finite-state automata that are all consistent with a training set. We compare the generalization performances of these different models and the trained network, and we introduce a heuristic that permits us to choose among the consistent DFA's the model which best approximates the learned grammar. Keywords: Recurrent Neural Networks, Grammatical Inference, Regular Languages, Deterministic Finite-State Automata, Rule Extraction, Generalization Performance, Model Selection, Occam's Razor. From terry at salk.edu Tue Jun 13 16:19:25 1995 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 13 Jun 95 13:19:25 PDT Subject: Neural Computation 7:4 Message-ID: <9506132019.AA28440@salk.edu> Neural Computation Volume 7 Number 4 July 1995 Review: Hints Yaser Abu-Mostafa Articles: Topology and geometry of weight solutions in multi-layer networks Frans M. Coetzee and Virginia L. Stonick Letters: Time-skew Hebb rule in a nonisopotential neuron Barak A. Pearlmutter Synapse models for neural networks: From ion channel kinetics to multiplicative coefficient w_ij Francois Chapeau-Blondeau and Nicolas Chambet Generalization and analysis of the Lisberger-Sejnowski VOR model Ning Qian Stable adaptive control of robot manipulators using neural networks Robert M. Sanner and Jean-Jacques E. Slotine Modular and hybrid connectionist system for automatic speaker identification Younes Bennani Error estimation by series association for artificial neural network systems Keehoon Kim and Eric B. Bartlett Test error fluctuations in finite linear perceptrons D. Barber, D. Saad and P. Sollich Learning and extracting initial Mealy automata with a modular neural network model Peter Tino and Jozef Sajda Dynamic cell structure learns perfectly topology preserving map Jorg Bruske and Gerald Sommer ----- ABSTRACTS - http://www-mitpress.mit.edu/ SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $68 Individual ______ $180 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-6 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From stokely at atax.eng.uab.edu Mon Jun 12 18:48:39 1995 From: stokely at atax.eng.uab.edu (Ernest Stokely) Date: Mon, 12 Jun 1995 17:48:39 -0500 Subject: Position in Biomedical Engineering Message-ID: Tenure Track Faculty Position -------------------------- The Department of Biomedical Engineering at the University of Alabama at Birmingham has an opening for a tenure-track faculty member. The opening is being filled as part of a Whitaker Foundation Special Opportunities Award for a training and research program in functional and structural imaging of the brain. Applications are particularly invited from candidates in cross-disciplinary areas of neurosystems, biological neural networks, computational neurobiology, or other multidisciplinary areas that combine neurobiology and imaging. The person selected for this position will be expected to form active research collaborations with other units in the Medical Affairs part of the UAB campus. Candidates should have a Ph.D. degree in engineering or a related field, and must be U.S. citizens or have permanent residency in the U.S. The search will continue until the position is filled. UAB is an autonomous campus within the University of Alabama system. UAB faculty currently are involved in over $140 million of externally funded grants and contracts. A 4.1 Tesla clinical NMR facility for cardiovascular research, several other small-bore MR systems, a Philips Gyroscan system, and a team of research scientists and engineers working in various aspects of MR imaging and spectroscopy are housed only 200 meters from the School of Engineering. In addition, the brain imaging project will involve collaborations with members of the Neurobiology Center, as well as faculty members from the Departments of Neurology, Psychiatry, and Radiology. To apply, send a letter of application, a current curriculum vitae, and three letters of reference to Dr. Ernest Stokely, Department of Biomedical Engineering, BEC 256, University of Alabama at Birmingham, Birmingham, Alabama 35294-4461. The University of Alabama at Birmingham is an equal opportunity, affirmative action employer, and encourages applications from qualified women and minorities. Ernest Stokely Chair, Department of Biomedical Engineering BEC 256 University of Alabama at Birmingham Birmingham, Alabama 35294-4461 Internet: stokely at atax.eng.uab.edu FAX: (205) 975-4919 Phone: (205) 934-8420 From omlinc at research.nj.nec.com Wed Jun 14 11:13:23 1995 From: omlinc at research.nj.nec.com (Christian Omlin) Date: Wed, 14 Jun 95 11:13:23 EDT Subject: Technical Report - Alternate Sites Message-ID: <9506141513.AA02813@arosa> There seems to be a problem with accessing the technical report Extraction of Rules from Discrete-Time Recurrent Neural Networks by Christian W. Omlin and C. Lee Giles from the sites http://www.cs.umd.edu:80/TR/UMCP-CSD:CS-TR-3465 ftp.cs.umd.edu:/pub/papers/papers/3465/3465.ps.Z The above technical report can now be accessed either through my home page at http://www.neci.nj.nec.com/homepages/omlin/omlin.html or via ftp from ftp.nj.nec.com /pub/omlinc/rule_extraction.ps.Z I apologize for the inconvenience. Christian From phkywong at uxmail.ust.hk Thu Jun 15 05:34:24 1995 From: phkywong at uxmail.ust.hk (Dr.
Michael Wong) Date: Thu, 15 Jun 1995 17:34:24 +0800 Subject: Paper on Neural Network Classification Available Message-ID: <95Jun15.173424+0800_hkt.18918-1+162@uxmail.ust.hk> FTP-host: physics.ust.hk FTP-file: pub/kymwong/nips95.ps.gz The following paper, submitted to the Theory session of NIPS-95, is now available via anonymous FTP. (8 pages long) ============================================================================ Neural Network Classification of Non-Uniform Data K. Y. Michael Wong and H. C. Lau, Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phkywong at usthk.ust.hk, phhclau at usthk.ust.hk ABSTRACT We consider a model of non-uniform data, which resembles typical data for system faults in diagnostic classification tasks. Pre-processing the data for feature extraction and dimensionality reduction improves the performance of neural network classifiers, in terms of the number of training examples required for good generalization. This result supports the use of hybrid expert systems in which feature extraction techniques such as classification trees are used to build a pre-processing layer for neural network classifiers. (A toy illustration of this pre-processing idea appears after the symposium announcement below.) ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get nips95.ps.gz ftp> quit unix> gunzip nips95.ps.gz unix> lpr nips95.ps From djf3 at cornell.edu Thu Jun 15 12:04:47 1995 From: djf3 at cornell.edu (David Field) Date: Thu, 15 Jun 1995 12:04:47 -0400 Subject: Big brains conference Message-ID: A limited number of openings remain for people wishing to attend this year's Cornell Symposium. A list of speakers and abstracts of talks can be found on our World Wide Web page. The URL is http://comp9.psych.cornell.edu/Psychology/big-brains.html or http://redwood.psych.cornell.edu/big-brains.html Cornell University Summer Symposium 1995: Big Brains June 23-26 Co-organizers: Barbara Finlay and David Field Many forces encourage and constrain the development of large brains, and large brains assemble themselves into new architectures that reflect these forces. Big brains are presumably selected for better and more efficient perception, cognition and behavior: how is selection for behavior translated into structure? Organismal and developmental constraints on selection for increased brain size are numerous: energetic requirements of the developing fetus, linkage of early developmental events, body conformation factors like pelvis size, social structure of the species, and the mature brain's energetic requirements are all examples of forces which influence brain size and conformation. We now know a number of essential facts about how the distribution of connectivity, modularity, and the nature of functional specialization change as brains get large. Current work on the computational architecture of neural nets has revealed strategies that work optimally for either small or large assemblies of units, and principles of organization that emerge only in larger assemblies. Using "big brains" as a focal point, this conference draws together researchers who work at all these levels of analysis to understand more of how our large brain has come to be.
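Returning to the Wong and Lau abstract above, here is a toy illustration of the pre-processing idea it describes (none of this is the authors' code or data; the synthetic task, the network sizes and the scikit-learn calls are all assumptions of this sketch): a classification tree ranks the input features, and a small neural network classifier is then trained only on the top-ranked ones.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic "non-uniform" data: many inputs, few of them informative.
X, y = make_classification(n_samples=600, n_features=30, n_informative=4,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Pre-processing layer: a tree ranks the features; keep the top four.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
keep = np.argsort(tree.feature_importances_)[-4:]

net_raw = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(Xtr, ytr)
net_pre = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(Xtr[:, keep], ytr)

print("all 30 inputs:       ", net_raw.score(Xte, yte))
print("4 tree-picked inputs:", net_pre.score(Xte, yte))

On data of this kind the network trained on the tree-selected features usually reaches the same or better test accuracy from fewer examples, which is the effect the abstract reports.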
From dhw at santafe.edu Thu Jun 15 18:01:35 1995 From: dhw at santafe.edu (David Wolpert) Date: Thu, 15 Jun 95 16:01:35 MDT Subject: Paper announcement Message-ID: <9506152201.AA22307@sfi.santafe.edu> NEW PAPER ANNOUNCEMENT. *** Some Results Concerning Off-Training-Set and IID Error for the Gibbs and the Bayes Optimal Generalizers by David H. Wolpert, Emanuel Knill, Tal Grossman Abstract: In this paper we analyze the average behavior of the Bayes-optimal and Gibbs learning algorithms. We do this both for off-training-set error and conventional IID error (for which test sets overlap with training sets). For the IID case we provide a major extension to one of the better known results of Haussler. We also show that expected IID test set error is a non-increasing function of training set size for either algorithm. On the other hand, as we show, the expected off-training-set error for both learning algorithms can increase with training set size, for non-uniform sampling distributions. We characterize what relationship the sampling distribution must have with the prior for such an increase. We show in particular that for uniform sampling distributions and either algorithm, the expected off-training-set error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algorithm stays constant. In addition we show that for the Bayes-optimal algorithm, expected off-training-set error can increase with training set size when the target function is fixed, but if and only if the expected error averaged over all targets decreases with training set size. Our results hold for arbitrary noise and arbitrary loss functions. *** To retrieve this file, anonymous ftp to ftp.santafe.edu. Go to pub/dhw_ftp. The compressed PostScript file is called OTS.BO.Gibbs.ps.Z. From jlm at crab.psy.cmu.edu Thu Jun 15 16:04:01 1995 From: jlm at crab.psy.cmu.edu (James L. McClelland) Date: Thu, 15 Jun 95 16:04:01 EDT Subject: ANNOUNCING THE PDP++ SIMULATOR Message-ID: <9506152004.AA25171@crab.psy.cmu.edu.psy.cmu.edu> ANNOUNCING: The PDP++ Software Authors: Randall C. O'Reilly, Chadley K. Dawson, and James L. McClelland The PDP++ software is a new neural-network simulation system written in C++. It represents the next generation of the PDP software released with the McClelland and Rumelhart "Explorations in Parallel Distributed Processing Handbook", MIT Press, 1987. It is easy enough for novice users, but very powerful and flexible for research use. The current version is 1.0 beta (1.0b). It has been used and tested locally fairly extensively during development, but this is the first general release. The software can be obtained by anonymous ftp from: Anonymous FTP Site: hydra.psy.cmu.edu/pub/pdp++ For more information, see our web page: WWW Page: http://www.cs.cmu.edu/Web/Groups/CNBC/PDP++/PDP++.html There is a 250 page (printed) manual and an HTML version available on-line at the above address.
Software Features:
==================
o Full Graphical User Interface (GUI) based on the InterViews toolkit. Allows user-selected "look and feel".
o Network Viewer shows network architecture and processing in real time, allows network to be constructed with simple point-and-click actions.
o Training and testing data can be graphed on-line and network state can be displayed over time numerically or using a wide range of color or size-based graphical representations.
o Environment Viewer shows training patterns using color or size-based graphical representations.
o Flexible object-oriented design allows mix-and-match simulation construction and easy extension by deriving new object types from existing ones.
o Built-in 'CSS' scripting language uses C++ syntax, allows full access to simulation object data and functions. Transition between script code and compiled code is simplified since both are C++. Script has command-line completion, source-level debugger, and provides standard C/C++ library functions and objects.
o Scripts can control processing, generate training and testing patterns, automate routine tasks, etc.
o Scripts can be generated from GUI actions, and the user can create GUI interfaces from script objects to extend and customize the simulation environment.
Supported Algorithms:
=====================
o Feedforward and recurrent error backpropagation. Recurrent BP includes continuous, real-time models, and Almeida-Pineda.
o Constraint satisfaction algorithms and associated learning algorithms including Boltzmann Machine, Hopfield models, mean-field networks (DBM), Interactive Activation and Competition (IAC), and continuous stochastic networks.
o Self-organizing learning including Competitive Learning, Soft Competitive Learning, simple Hebbian, and Self-organizing Maps ("Kohonen Nets").
The Fine Print:
===============
PDP++ is copyrighted and cannot be sold or distributed by anyone other than the copyright holders. However, the full source code is freely available, and the user is granted full permission to modify, copy, and use it. See our web page for details. The software runs on Unix workstations under X Windows. It requires a minimum of 16 Meg of RAM, and 32 Meg is preferable. It has been developed and tested on Sun Sparcs under SunOS 4.1.3, HP 7xx under HP-UX 9.x, and SGI Irix 5.3. Statically linked binaries are available for these machines. Other machine types will require compiling from the source. Cfront 3.x and g++ 2.6.3 are supported C++ compilers. The GUI in PDP++ is based on the InterViews toolkit, version 3.2a. However, we had to patch it to get it to work. We distribute pre-compiled libraries containing these patches for the above architectures. For architectures other than those above, you will have to apply our patches to InterViews before compiling. The basic GUI and script technology in PDP++ is based on a type-scanning system called TypeAccess which interfaces with the CSS script language to provide a virtually automatic interface mechanism. While these were developed for PDP++, they can easily be used for any kind of application, and CSS is available as a stand-alone executable for use like Perl or Tcl. The binary-only distribution requires about 54 Meg of disk space, since we have been unable to get shared libraries to work with C++ on the above platforms. Each simulation executable is around 8-12 Meg in size, and there are 3 of these (bp++, cs++, so++), plus the CSS and 'maketa' executables. The compiled source-code distribution takes about 115 Meg (but only around 16 Meg before compiling). For more information on the details of the software, see our web page.
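To give a flavor of the first entry in the list of supported algorithms above, here is a minimal, self-contained sketch of feedforward error backpropagation on the XOR problem. This is plain Python/NumPy for readability; it is an illustration of the algorithm only, not PDP++ code, and the network size and learning rate are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)        # 2-4-1 sigmoid net
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    H = sig(X @ W1 + b1)                              # forward pass
    Y = sig(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)                        # backward pass:
    dH = (dY @ W2.T) * H * (1 - H)                    # the delta rules
    W2 -= 0.5 * H.T @ dY; b2 -= 0.5 * dY.sum(0)       # gradient descent
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(0)

print(Y.ravel().round(2))    # typically close to [0, 1, 1, 0]

The bp++ executable mentioned above carries out the same basic computation, plus the recurrent variants, behind the graphical interface.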
From furuhashi at nuee.nagoya-u.ac.jp Fri Jun 16 03:54:55 1995 From: furuhashi at nuee.nagoya-u.ac.jp (furuhashi@nuee.nagoya-u.ac.jp) Date: Fri, 16 Jun 1995 16:54:55 +0900 Subject: Call for Papers of WWW'95 Message-ID: <9506160754.AA19245@gemini.bioele.nuee.nagoya-u.ac.jp> CALL FOR PAPERS 1995 IEEE/Nagoya University World Wisepersons Workshop (WWW'95) ON FUZZY LOGIC AND NEURAL NETWORKS/EVOLUTIONARY COMPUTATION November 14 and 15, 1995 Rubrum Ohzan Chikusa-ku, Nagoya, JAPAN Sponsored by Nagoya University Co-sponsored by IEEE Industrial Electronics Society Technically Co-sponsored by IEEE Robotics and Automation Society International Fuzzy Systems Association Japan Society for Fuzzy Theory and Systems North American Fuzzy Information Processing Society Society of Instrument and Control Engineers Robotics Society of Japan There is growing interest in technologies that combine fuzzy logic with neural networks, and fuzzy logic with evolutionary computation, for acquiring expert knowledge, modeling nonlinear systems, and realizing complex adaptive systems. The goal of the 1995 IEEE/Nagoya University WWW on Fuzzy Logic and Neural Networks/Evolutionary Computation is to give its attendees opportunities to exchange information and ideas on various aspects of these combination technologies and to stimulate and inspire pioneering work in this area. To keep the quality of the workshop high, only a limited number of people will be accepted as participants. The papers presented at the workshop are planned to be edited and published by Springer-Verlag. For speakers of excellent papers, partial financial assistance for travel expenses as well as lodging fees in Nagoya will be provided by the steering committee of WWW'95. TOPICS: Combination of Fuzzy Logic and Neural Networks, Combination of Fuzzy Logic and Evolutionary Computation, Learning and Adaptation, Knowledge Acquisition, Modeling, Human Machine Interface IMPORTANT DATES: Submission of Abstracts of Papers : June 30, 1995 Acceptance Notification : Aug. 31, 1995 Final Manuscript : Sept. 30, 1995 Abstracts should be type-written in English within 4 pages of A4 or Letter size. Use Times or a similar typeface. The size of the letters should be 10 points or larger. All correspondence and submission of papers should be sent to Takeshi Furuhashi, General Chair Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN TEL: +81-52-789-2792, FAX: +81-52-789-3166 E-mail: furuhashi at nuee.nagoya-u.ac.jp IEEE/Nagoya University WWW: IEEE/Nagoya University WWW (World Wisepersons Workshop) is a series of workshops sponsored by Nagoya University and co-sponsored by IEEE Industrial Electronics Society. The city of Nagoya, located two hours from Tokyo, has many electro-mechanical industries in its surroundings, such as Mitsubishi, TOYOTA, and their allied companies. Nagoya is a mecca of robotics, machine and aerospace industries in Japan. The series of workshops will give its attendees opportunities to exchange information on advanced sciences and technologies and to visit industries and research institutes in this area. WORKSHOP ORGANIZATION Honorary Chair: Masanobu Hasatani (Dean, School of Engineering, Nagoya University) General Chair: Takeshi Furuhashi (Nagoya University) Advisory Committee: Chair: Toshio Fukuda (Nagoya University) Toshio Goto (Nagoya University) Fumio Harashima (University of Tokyo) Richard D. Klafter (Temple University) C.S.
George Lee (Purdue University) Hiroyasu Nomura (Nagoya University) Shigeru Okuma (Nagoya University) Yoshiki Uchikawa (Nagoya University) Steering Committee: S.Abe (Hitachi Ltd.) K.Aoki (Toyota Motor Corporation) T.Aoki (Nagoya Municipal Industrial Res. Inst.) M.Arao (OMRON Corporation) Y.Dote (Muroran Institute of Technology) M.Fathi (University of Dortmund) M.Gen (Ashikaga Institute of Technology) H.Hashimoto (Univ. of Tokyo) I.Hayashi (Hannan University) M.Hiller (Gerhard-Mercator-Universität) H.Honda (Oki Technosystems Laboratory, Inc.) H.Ichihashi (University of Osaka Prefecture) T.Iokibe (Meidensha Corporation) H.Ishibuchi (University of Osaka Prefecture) A.Ishiguro (Nagoya University) O.Ito (Fuji Electric Corporate Res.& Develop., Ltd.) N.Kasabov (University of Otago) R.Katayama (Sanyo Electric Co., Ltd.) E.Khan (National Semiconductor) H.Kitano (Sony CSL) K.M.Lee (KAIST) M.A.Lee (University of California, Berkeley) Y.Maeda (Osaka Electro-Communication Univ.) T.Muramatsu (Nippon Steel Corporation) S.Nakanishi (Tokai University) T.Nomura (SHARP Corporation) H.Ohno (Toyota Central Res.& Develop.Lab., Inc.) M.Sano (Hiroshima City University) M.Sakawa (Hiroshima University) T.Shibata (MEL, MITI) H.Shiizuka (Kogakuin University) K.Shimohara (ATR) K.Tanaka (Kanazawa University) T.Yamada (NTT) T.Yamaguchi (Utsunomiya University) N.Wakami (Matsushita Electric Industrial Co., Ltd.) J.Watada (Osaka Institute of Technology) K.Watanabe (Saga University) --------------------------------------------------- Takeshi Furuhashi, Assoc. Professor Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, Japan Tel.+81-52-789-2792, Fax.+81-52-789-3166 --------------------------------------------------- From juergen at idsia.ch Fri Jun 16 04:58:52 1995 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Fri, 16 Jun 95 10:58:52 +0200 Subject: new IDSIA papers Message-ID: <9506160858.AA13539@fava.idsia.ch> 3 new IDSIA publications available. Click on http://www.idsia.ch or use ftp: FTP-host: fava.idsia.ch (192.132.252.1) FTP-filenames: /pub/papers/ml95.kolmogorov.ps.gz 9 pages /pub/papers/ml95.antq.ps.gz 9 pages /pub/papers/iwann95.invertible.ps.gz 8 pages (use gunzip to uncompress) ___________________________________________________________________ DISCOVERING SOLUTIONS WITH LOW KOLMOGOROV COMPLEXITY AND HIGH GENERALIZATION CAPABILITY Juergen Schmidhuber, IDSIA To appear in Machine Learning: Proc. 12th int. conf., 1995. This paper reviews basic concepts of Kolmogorov complexity theory relevant to machine learning. It shows how a derivative of Levin's universal search algorithm can be used to discover neural nets with low Levin complexity, low Kolmogorov complexity, and high generalization capability. At least with certain toy problems where it is computationally feasible, the method can lead to generalization results unmatchable by previous neural net algorithms. The final section addresses problems with incremental learning situations. ANT-Q Luca Gambardella, IDSIA Marco Dorigo, IDSIA To appear in Machine Learning: Proc. 12th int. conf., 1995. We introduce Ant-Q, a family of algorithms which share many similarities with Q-learning (Watkins, 1989). Ant-Q is a generalization of the ``ant system'' (AS --- Dorigo, 1992; Dorigo, Maniezzo and Colorni, 1996), a distributed algorithm for combinatorial optimization based on the ant colony metaphor.
In applications to symmetric traveling salesman problems (TSPs), we demonstrate (1) that some Ant-Q instances outperform AS, and (2) that Ant-Q compares favorably with other heuristic approaches based on neural nets or local search. Finally, we apply Ant-Q to some difficult asymmetric TSPs and obtain excellent results: Ant-Q finds solutions of a quality which usually can be found only by highly specialized algorithms. LEARNING THE VISUOMOTOR COORDINATION OF A MOBILE ROBOT BY USING THE INVERTIBLE KOHONEN MAP Cristina Versino, IDSIA Luca Gambardella, IDSIA In Proc. International Workshop on Artificial Neural Networks 1995. This paper is based on the insight that the Extended Kohonen Map (EKM) is naturally invertible: given an input pattern, the network output is generated by competition among the neuron fan-in weight vectors (conventional ``forward mode''). Vice versa, given an output value, a corresponding input pattern can be obtained by competition among the neuron fan-out weight vectors (unconventional ``backward mode''). This invertibility property makes the EKM worth considering for sensorimotor modeling. We present an experiment concerning visuomotor coordination of a simple mobile robot. ``Learning by doing'' creates a sensorimotor model: input-output pairs are collected by observing the robot's behavior. These pairs are used for estimating the model's parameters. Training the network on the robot's direct kinematics (forward mode), one simultaneously obtains a solution to the inverse kinematics problem (backward mode). The experiment has been performed both in simulation and with a real robot. ___________________________________________________________________ Related and other papers in http://www.idsia.ch Comments welcome. Juergen Schmidhuber Research Director IDSIA, Corso Elvezia 36 6900-Lugano, Switzerland juergen at idsia.ch From harnad at ecs.soton.ac.uk Sat Jun 17 11:26:53 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Sat, 17 Jun 95 16:26:53 +0100 Subject: Memory: BBS Call for Commentators Message-ID: <23611.9506171526@cogsci> Below is the abstract of a forthcoming target article on: MEMORY METAPHORS by A. Koriat and M. Goldsmith This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at ecs.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract.
____________________________________________________________________ MEMORY METAPHORS AND THE LABORATORY/REAL-LIFE CONTROVERSY: CORRESPONDENCE VERSUS STOREHOUSE VIEWS OF MEMORY Asher Koriat and Morris Goldsmith Department of Psychology University of Haifa Haifa, Israel rsps301 at uvm.haifa.ac.il KEYWORDS: accuracy, assessment, capacity, ecological validity, intentionality, memory, metamemory, metaphors, monitoring, representation, storehouse, subject control. ABSTRACT: The study of memory is witnessing a spirited clash between proponents of traditional laboratory research and those advocating a more naturalistic approach to the study of "everyday" memory. The debate has generally centered on the "what" (content), "where" (context), and "how" (methods) of memory research. In the present target article, we argue that this controversy discloses a further, more fundamental breach between two underlying memory metaphors, each having distinct implications for memory theory and assessment: Whereas traditional memory research has been dominated by the storehouse metaphor, leading to a focus on the quantity of items remaining in store, the recent wave of everyday memory research discloses a shift towards a correspondence metaphor, focusing on the accuracy or faithfulness of memory in representing past events. Our analysis shows the correspondence metaphor to call for a research approach which differs from the traditional approach in important respects: in emphasizing the intentional-representational function of memory, in addressing the wholistic and graded aspects of memory correspondence, in taking an output-bound assessment perspective, and in allowing more room for the operation of subject-controlled metamemory processes and motivational factors. This analysis can help tie together some of the what, where, and how aspects of the everyday-laboratory controversy. More importantly, in explicating the unique metatheoretical foundation of the accuracy-oriented approach to memory, our aim is to promote a more effective exploitation of the correspondence metaphor in both naturalistic and laboratory research contexts. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.koriat). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc.
Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.koriat ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.koriat To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.koriat When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- From harnad at ecs.soton.ac.uk Sat Jun 17 11:44:57 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Sat, 17 Jun 95 16:44:57 +0100 Subject: EEG Dynamics: BBS Call for Commentators Message-ID: <23748.9506171544@cogsci> Below is the abstract of a forthcoming target article on: BRAIN DYNAMICS, EEG & NEURAL NETS by JJ Wright & DTJ Liley This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at ecs.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ DYNAMICS OF THE BRAIN AT GLOBAL AND MICROSCOPIC SCALES: NEURAL NETWORKS AND THE EEG. J.J. Wright and D.T.J. Liley Mental Health Research Institute Parkville Victoria 3052, Australia jjw at cortex.mhri.edu.au Swinburne Center for Applied Neuroscience Hawthorne, Victoria 3122 Melbourne, Australia KEYWORDS: chaos, EEG simulation, electrocorticogram, neocortex, network symmetry, neurodynamics. ABSTRACT: There is some complementarity between models of the origin of the electroencephalogram (EEG) and neural network models of information storage in brain-like systems.
From the EEG models of Freeman, Nunez, and the authors' group, we argue that the wave-like processes revealed in the EEG exhibit linear and near-equilibrium dynamics at macroscopic scale, despite extremely nonlinear, probably chaotic, dynamics at microscopic scale. Simulations of cortical neuronal interactions at global and microscopic scales are then presented. The simulations depend on anatomical and physiological estimates of synaptic densities, coupling symmetries, synaptic gain, dendritic time constants and axonal delays. It is shown that the frequency content, wave velocities, frequency/wavenumber spectra and response to cortical activation of the electrocorticogram (ECoG) can be reproduced by a "lumped" simulation treating small cortical areas as single functional units. The corresponding cellular neural network simulation has properties which include those of the attractor neural networks proposed by Amit and Parisi. Within the simulations at both scales, sharp transitions occur between low and high cell firing rates. These transitions may form a basis for neural interactions across scale. To maintain overall cortical dynamics in the normal low firing-rate range, interactions between the cortex and subcortical systems are required to prevent runaway global excitation. Thus the interaction of cortex and subcortex via cortico-striatal and related pathways may partly regulate global dynamics by a principle analogous to adiabatic control of artificial neural networks. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.wright). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.ecs.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.wright ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.wright To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.wright When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you).
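As a small companion to the spectral quantities mentioned in the abstract above (the frequency content of the ECoG), here is a sketch of how band power is commonly estimated from a sampled trace via the discrete Fourier transform. Everything here (the sampling rate, the synthetic signal, the band edges) is an assumption of the example, not taken from the target article.

import numpy as np

fs = 250.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
# Synthetic "ECoG": a 10 Hz (alpha-band) oscillation in broadband noise.
x = np.sin(2 * np.pi * 10 * t) \
    + 0.5 * np.random.default_rng(1).normal(size=t.size)

freq = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(x - x.mean())) ** 2 / (fs * t.size)  # periodogram

def band_power(lo, hi):
    m = (freq >= lo) & (freq < hi)
    return spec[m].sum() * (freq[1] - freq[0])

print("theta (4-8 Hz): ", band_power(4, 8))
print("alpha (8-13 Hz):", band_power(8, 13))   # dominated by the 10 Hz line

The same elementary computation underlies the kind of frequency-content comparisons that the simulations described above are tested against.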
------------------------------------------------------------- From piuri at elet.polimi.it Mon Jun 19 19:18:57 1995 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Tue, 20 Jun 1995 00:18:57 +0100 Subject: call for papers Message-ID: <9506192318.AA01819@ipmel2.elet.polimi.it> ================================================================ CESA'96 IMACS/IEEE-SMC Multiconference Computational Engineering in Systems Applications Lille, France - July 9-12, 1996 ================================================================ Call for Papers for the Special Sessions on Neural Technologies ================================================================ The aim of this meeting is to present the state of the art of the various aspects of computational engineering involved in system theory and applications. It will be organized in four distinct simultaneous symposia: "Modelling, Analysis, and Simulation", "Discrete Events and Manufacturing Systems", "Control, Optimization and Supervision", and "Robotics and Cybernetics". A Special Session on "Neural Techniques for Identification and Prediction" will be held in the Symposium on "Modelling, Analysis, and Simulation". Papers are solicited on all aspects of the neural technologies concerning system identification and prediction. In particular, the Special Session will be focused on theoretical, design and practical aspects. A Special Session on "Neural Control Systems: Techniques, Implementations, and Applications" will be held in the Symposium on "Control, Optimization and Supervision". Papers are solicited on all aspects of the neural technologies concerning system control: theory, design methodologies, realizations, case studies, and applications are welcome. Authors interested in the above Special Sessions are kindly invited to send a letter of interest by August 31, 1995, to the Special Session Organizer (email is preferred). This letter should contain the name and the address (including email) of the possible contact author, the name of the special session, and a tentative title of the paper. It does not limit possible further submissions. Authors are then requested to submit to the Special Session Organizer (email and fax submission are accepted): - a one-page abstract by September 30, 1995, for review assignment, - the preliminary version of the paper or an extended abstract by November 15, 1995. Acceptance/rejection will be mailed by January 15, 1996. The final camera-ready version of the paper is due by May 1, 1996. Prof. Vincenzo Piuri Organizer of the Special Sessions on Neural Technologies Department of Electronics and Information Politecnico di Milano, Italy fax +39-2-2399-3411 email piuri at elet.polimi.it ================================================================ From workshop at Physik.Uni-Wuerzburg.DE Mon Jun 19 17:06:03 1995 From: workshop at Physik.Uni-Wuerzburg.DE (Wolfgang Kinzel) Date: Mon, 19 Jun 95 17:06:03 MESZ Subject: workshop and autumn school on neural nets in Wuerzburg Message-ID: <199506191506.RAA19777@wptx01.physik.uni-wuerzburg.de> First Announcement and Call for Abstracts Interdisciplinary Autumn School and Workshop on Neural Networks: Application, Biology, and Theory October 12-14 (school) and 16-18 (workshop), 1995 W"urzburg, Germany INVITED SPEAKERS INCLUDE: M. Abeles, Jerusalem A. Aertsen, Rehovot J.K. Anlauf, Siemens AG J.P. Aubin, Paris M. Biehl, W"urzburg C. v.d. Broeck, Diepenbeek M. Cottrell, Paris B. Fritzke, Bochum Th. Fritsch, W"urzburg J. G"oppert, T"ubingen L.K. Hansen, Lyngby M.
Hemberger, Daimler Benz AG L. van Hemmen, M"unchen J.A. Hertz, Copenhagen J. Hopfield, Pasadena I. Kanter, Ramat-Gan C. Koch, Pasadena P. Kraus, Bochum B. Lautrup, Copenhagen W. Maass, Graz Th. Martinetz, Siemens AG M. Opper, Santa Cruz H. Scheich, Magdeburg H.G. Schuster, Kiel S. Seung, AT&T Bell-Lab. W. Singer, Frankfurt S.A. Solla, Copenhagen H. Sompolinsky, Jerusalem F. Varela, Paris A. Weigend, Boulder AUTUMN SCHOOL, Oct. 12-14: Introductory lectures on theory and applications of neural nets for graduate students and interested postgraduates in biology, medicine, mathematics, physics, computer science, and other related disciplines. Topics include neuronal modelling, statistical physics, hardware and application of neural nets in telecommunication, financial forecasting, and biological data analysis. WORKSHOP, Oct. 16-18: Biology, theory, and applications of neural networks with particular emphasis on the interdisciplinary aspects of the field. There will be only invited lectures with ample time for discussion. In addition, poster sessions will be scheduled. REGISTRATION: Requested before AUGUST 31, per FAX or (E-)MAIL to Workshop on Neural Networks Inst. f"ur Theor. Physik, Julius-Maximilians-Universit"at Am Hubland, D-97074 W"urzburg, Germany Fax: +49 931 888 5141 e-mail: workshop at physik.uni-wuerzburg.de The registration fee is DM 150,- for the Autumn school and DM 150,- for the Workshop, due upon arrival (cash only). Students pay DM 80,- for each event (student ID required). ABSTRACTS: Participants who wish to present a poster should submit title and abstract together with their registration before August 31. ACCOMMODATION: Registered participants will receive a request form of the Tourist Office W"urzburg together with general information. Early registration is advised. In case of registration after July 31 please contact directly the Fremdenverkehrsamt, Am Congress Centrum, D-97070 W"urzburg, Fax +49 931 37372. ORGANIZING COMMITTEE: M. Biehl, Th. Fritsch, W. Kinzel, Univ. W"urzburg. SCIENTIFIC ADVISORY COUNCIL: D. Flockerzi, K.-D. Kniffki, W. Knobloch, M. Meesmann, T. Nowak, F. Schneider, P. Tran-Gia, Universit"at W"urzburg. SPONSORS: Peter Beate Heller-Stiftung im Stifterverband f. die Deutsche Wissenschaft, Research Center of Daimler Benz AG, Stiftung der St"adtischen Sparkasse W"urzburg. --------------------------------cut here---------------------------------- Registration Form Please return to: Workshop on Neural Networks Institut f"ur Theoretische Physik Julius-Maximilians-Universit"at Am Hubland D-97074 W"urzburg, Germany Fax: +49 931 888 5141 E-mail : workshop at physik.uni-wuerzburg.de I will attend the Autumn School Oct. 12-14 [ ] * (Reg. fee DM 150,- [ ] / 80,- [ ] due upon arrival) * Workshop Oct. 16-18 [ ] * (Reg. fee DM 150,- [ ] / 80,- [ ] due upon arrival) * * Please mark; the reduced fee applies only for participants with a valid student ID. I wish to present a poster [ ] (If yes, please send a title page with a 10-line abstract!) Name: Affiliation: Address: Phone: Fax: E-mail: (please provide full postal address in any case!) Signature: --------------------------------------------------------------------------- From moreno at eel.upc.es Tue Jun 20 09:55:09 1995 From: moreno at eel.upc.es (Juan M. Moreno) Date: Tue, 20 Jun 1995 9:55:09 UTC+0100 Subject: Ph.D.
Thesis: VLSI Architectures for Evolutive Neural Models Message-ID: <582*/S=moreno/OU=eel/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS> FTP-host: ftp.upc.es (147.83.98.7) FTP-file: /upc/eel/moreno_vlsi_94.tar (2 MB compressed, 5.6 MB uncompressed, 184 pages) The following Ph.D. Thesis is now available by anonymous ftp. FTP instructions can be found at the end of this message. -------------------------------------------------------------------- VLSI ARCHITECTURES FOR EVOLUTIVE NEURAL MODELS J.M. Moreno Arostegui Technical University of Catalunya Department of Electronics Engineering RESUME In recent years there has been increasing interest in the research field of artificial neural network models. The reason for this interest has been the development of advanced tools and techniques for microelectronics design, which have made it possible to translate theoretical connectionist models into efficient physical realizations. However, there are several problems associated with the classical artificial neural network models, related basically to their convergence properties and to the need to define heuristically the proper network structure for a particular problem. In order to alleviate these problems, evolutive neural models offer the possibility of constructing automatically, during the training process, the proper network structure able to handle a certain task efficiently. Furthermore, these neural models allow for establishing incremental learning schemes, so that new knowledge can be easily incorporated into the network without the necessity of performing a complete new training process from scratch. The present work tries to offer efficient solutions, in the form of VLSI microelectronics architectures, for the eventual realization of systems based on the evolutive neural paradigms. An exhaustive analysis of the different types of evolutive neural models has first been performed. The goal of this analysis is to select those evolutive neural models whose data flow is suitable for an eventual hardware implementation. As a result, the incremental evolutive neural models have been selected as the most appropriate ones in case a hardware realization is envisaged. Afterwards, the improvement of the convergence properties of evolutive neural models has been considered. This improvement is required so as to allow for more efficient physical implementations able to face real-world tasks. As a result, three different methods have been proposed so as to enhance the network construction process provided by evolutive neural models. The next step towards the implementation of evolutive neural models has consisted of the selection of the most suitable hardware architectures to realize the data flow imposed by the corresponding training and recall phases associated with these neural models. As a previous step, an algorithm vectorization process has been performed, so as to detect the basic operations required by the training and recall schemes. Then, by analyzing the efficiency offered by different hardware architectures in carrying out these basic operations, we have selected two architectures as the most suitable for an eventual hardware implementation. Bearing in mind the results provided by the previous architecture analysis, a digital architecture has been proposed.
This architecture is able to organize its resources properly, so as to match the requirements imposed by the corresponding training and recall phases, and is thus capable of emulating the two architectures selected by the analysis indicated previously. The architecture is organized as an array of processing units, which can be configured to provide a specific array organization. A specific RISC (Reduced Instruction Set Computer) processor has been developed to realize these processing units. This processor has a generic enough instruction set, which permits the efficient emulation (both in terms of speed and compactness) of a wide range of evolutive neural models. Finally, an analog systolic architecture has been proposed, which also allows for the physical implementation of the evolutive neural models indicated previously. This architecture has been developed using a systolic modular principle, so that it permits emulating different neural models just by changing the functionality of the building blocks which constitute its processing units. The main advantage offered by this architecture is the possibility of developing compact systems capable of providing high processing rates, making it suitable for those tasks where an integrated signal processing scheme is required. ---------------------------------------------------------------------------- FTP instructions: unix> ftp ftp.upc.es (147.83.98.7) Name: anonymous Password: (your e-mail address) ftp> cd /upc/eel ftp> bin ftp> get moreno_vlsi_94.tar ftp> bye unix> tar xvf moreno_vlsi_94.tar As a result, you get 12 different compressed postscript files (5.6 MB). Just uncompress these files and print them on your local printer. Sorry, but there are no hard copies available. Regards, ---------------------------------------------------------------------------- Juan Manuel Moreno Arostegui, Dept. Enginyeria Electronica, Universitat Politecnica de Catalunya, Modul C-4, Campus Nord, c/ Gran Capita s/n, 08034-Barcelona, SPAIN. Tel.: +34 3 401 74 88, Fax: +34 3 401 67 56, E-mail: moreno at eel.upc.es ---------------------------------------------------------------------------- From lawrence at research.nj.nec.com Tue Jun 20 15:29:53 1995 From: lawrence at research.nj.nec.com (Steve Lawrence) Date: Tue, 20 Jun 1995 15:29:53 -0400 (EDT) Subject: TR Available: Neural Network and Machine Learning for Natural Language Processing Message-ID: <199506201929.PAA06572@heavenly> From harnad at ecs.soton.ac.uk Wed Jun 21 16:23:04 1995 From: harnad at ecs.soton.ac.uk (Stevan Harnad) Date: Wed, 21 Jun 95 21:23:04 +0100 Subject: EEG and Memory: PSYC Call for Commentary Message-ID: <5416.9506212023@cogsci> PSYCOLOQUY Commentary is invited on: Wolfgang Klimesch on EEG & Memory Qualified professional biobehavioral, neural or cognitive scientists are hereby invited to submit Open Peer Commentary on the target article whose abstract appears below. It has been published in PSYCOLOQUY, a refereed electronic journal sponsored by the American Psychological Association. Instructions for retrieval and for preparing commentaries follow the abstract.
The address for submitting commentaries and articles and for requesting information is psyc at pucc.princeton.edu The URLs for retrieving articles are: http://www.princeton.edu/~harnad/psyc.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1995.volume.6 TARGET ARTICLE AUTHOR'S RATIONALE FOR SOLICITING COMMENTARY: Memory processes can be described as brain oscillations, and memory network models (such as the connectivity model (Klimesch, 1994)) can easily be applied to the neuronal level if abstract activation values are interpreted in terms of frequency values reflecting oscillatory processes. I would be very interested in eliciting commentaries on (1) this basic rationale, (2) the statement that in the cortex oscillations are mandatory for information transmission, (3) the proposed role of EEG alpha and (4) EEG theta for memory processes. ----------------------------------------------------------------------- psycoloquy.95.6.06.memory-brain.1.klimesch ISSN 1055-0143 (55 paragraphs, 75 references, 1279 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1995 Wolfgang Klimesch MEMORY PROCESSES DESCRIBED AS BRAIN OSCILLATIONS IN THE EEG-ALPHA AND THETA BANDS Wolfgang Klimesch University of Salzburg Department of Physiological Psychology Institute of Psychology, Hellbrunnerstr. 34 A-5020 Salzburg, AUSTRIA Klimesch at edvz.sbg.ac.at ABSTRACT: This target article tries to integrate results in memory research from diverse disciplines such as psychophysiology, cognitive psychology, anatomy and neurophysiology. The integrating link is seen in more recent anatomical findings that provide strong arguments for the assumption that oscillations provide the basic form of communication between cortical cell assemblies. The basic argument is that episodic memory processes, which are part of a complex working memory system, are reflected by oscillations in the theta band, whereas long-term memory processes are reflected by alpha oscillations. It is assumed that alpha and theta oscillations serve to encode, access, and retrieve cortical codes that are stored in the form of widely distributed but intensely interconnected cell assemblies. KEYWORDS: Alpha, EEG, Hippocampus, Memory, Oscillation, Thalamus, Theta. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/psyc.html http://cogsci.ecs.soton.ac.uk/~harnad/psyc.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1995.volume.6/ ftp://cogsci.ecs.soton.ac.uk/pub/harnad/Psycoloquy/1995.volume.6/ To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/Psycoloquy/1995.volume.6 To show the available files, type: ls Next, retrieve the file you want with (for example): mget *.1.klimesch When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you.
To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- INSTRUCTIONS FOR PSYCOLOQUY COMMENTATORS Accepted PSYCOLOQUY target articles have been judged by 5-8 referees to be appropriate for Open Peer Commentary, the special service provided by PSYCOLOQUY to investigators in psychology, neuroscience, behavioral biology, cognitive sciences and philosophy who wish to solicit multiple responses from an international group of fellow specialists within and across these disciplines to a particularly significant and controversial piece of work. If you feel that you can contribute substantive criticism, interpretation, elaboration or pertinent complementary or supplementary material on a PSYCOLOQUY target article, you are invited to submit a formal electronic commentary. Please note that although commentaries are solicited and most will appear, acceptance cannot, of course, be guaranteed. 1. Before preparing your commentary, please read carefully the Instructions for Authors and Commentators and examine recent numbers of PSYCOLOQUY. 2. Commentaries should be limited to 200 lines (1800 words, references included). PSYCOLOQUY reserves the right to edit commentaries for relevance and style. In the interest of speed, commentators will only be sent the edited draft for review when there have been major editorial changes. Where judged necessary by the Editor, commentaries will be formally refereed. 3. Please provide a title for your commentary. As many commentators will address the same general topic, your title should be a distinctive one that reflects the gist of your specific contribution and is suitable for the kind of keyword indexing used in modern bibliographic retrieval systems. Each commentary should have a brief (~50-60 word) abstract. 4. All paragraphs should be numbered consecutively. Line length should not exceed 72 characters. The commentary should begin with the title, your name and full institutional address (including zip code) and email address. References must be prepared in accordance with the examples given in the Instructions. Please read the sections of the Instructions for Authors concerning style, preparation and editing. PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 40,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral science, cognitive science, neuroscience, social science, etc.). All contributions are refereed. Target article length should normally not exceed 500 lines [c. 4500 words]. Commentaries and responses should not exceed 200 lines [c. 1800 words]. All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, (3) the authors' full name(s) and institutional address(es). In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field?
what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). All paragraphs should be numbered in articles, commentaries and responses (see format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation). It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article. PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected. Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY. Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu Anonymous ftp archive is DIRECTORY pub/harnad/Psycoloquy HOST princeton.edu From juergen at idsia.ch Wed Jun 21 04:09:40 1995 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 21 Jun 95 10:09:40 +0200 Subject: one more Message-ID: <9506210809.AA23052@fava.idsia.ch> http://www.idsia.ch/reports.html FTP-host: fava.idsia.ch (192.132.252.1) FTP-filename: /pub/papers/idsia59-95.ps.gz (12 pages, 69k) ENVIRONMENT-INDEPENDENT REINFORCEMENT ACCELERATION Technical Note IDSIA-59-95 Write-up of invited talk at the Hong Kong University of Science and Technology (May 29, 1995) Juergen Schmidhuber, IDSIA A reinforcement learning system with limited computational resources interacts with an unrestricted, unknown environment. Its goal is to maximize cumulative reward, to be obtained throughout its limited, unknown lifetime. System policy is an arbitrary modifiable algorithm mapping environmental inputs and internal states to outputs and new internal states. The problem is: in realistic, unknown environments, each policy modification process (PMP) occurring during system life may have unpredictable influence on environmental states, rewards and PMPs at any later time. Existing reinforcement learning algorithms cannot properly deal with this. Neither can naive exhaustive search among all policy candidates -- not even in case of very small search spaces. In fact, a reasonable way of measuring performance improvements in such general (but typical) situations is missing. I define such a measure based on the novel ``reinforcement acceleration criterion'' (RAC).
RAC is satisfied if the beginning of each completed PMP that computed a currently valid policy modification has been followed by faster average reinforcement intake than system start-up and the beginnings of all previous such PMPs (the computation time for PMPs is taken into account). Then I present a method called ``environment-independent reinforcement acceleration'' (EIRA) which is guaranteed to achieve RAC. EIRA does not care whether the system's policy allows for changing itself, nor whether there are multiple, interacting learning systems. Consequences are: (1) a sound theoretical framework for ``meta-learning'' (because the success of a PMP recursively depends on the success of all later PMPs, for which it is setting the stage); (2) a sound theoretical framework for multi-agent learning. The principles have been implemented (1) in a single system using an assembler-like programming language to modify its own policy, and (2) in a system consisting of multiple agents, where each agent is in fact just a connection in a fully recurrent reinforcement learning neural net. A by-product of this research is a general reinforcement learning algorithm for such nets. Preliminary experiments illustrate the theory.

Juergen Schmidhuber
IDSIA, Corso Elvezia 36
6900-Lugano, Switzerland
juergen at idsia.ch
http://www.idsia.ch

From john at dcs.rhbnc.ac.uk Thu Jun 22 11:15:36 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Thu, 22 Jun 95 16:15:36 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID:

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): several new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-022:
----------------------------------------
Option price forecasting using artificial neural networks
by A. Fiordaliso, Universite de Mons-Hainaut

Abstract: (Paper is in French) The problem considered here is forecasting the price of a call option on a short-term interest rate future, namely the 3-month Eurodollar (ED3). The aim of our research is to build up Artificial Neural Network (ANN) models that could be integrated into a fuzzy expert system to dynamically manage an option portfolio. We detail some problems and techniques related to the set-up of ANN models for univariate and multivariate predictions. We compare our results with some other forecasting techniques.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-041:
----------------------------------------
A General Feedforward Neural Network Model
by C\'edric GEGOUT, Bernard GIRAU and Fabrice ROSSI
Ecole Normale Sup\'erieure de Lyon, Ecole Normale Sup\'erieure de Paris, THOMSON-CSF/SDC/DPR/R4, Bagneux, France

Abstract: In this paper, we generalize a model proposed by L\'eon Bottou and Patrick Gallinari. This model gives a general mathematical description of feedforward neural networks, for which standard models, such as Multi-Layer Perceptrons or Radial Basis Function based neural networks, are only particular cases. A generalized back-propagation, which gives an efficient way to compute the differential of the function computed by the neural network, is introduced and carefully proved. We also introduce an evaluation of the theoretical time needed to compute the differential with the help of both the direct algorithm and back-propagation.
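As a toy illustration of the reverse accumulation that the abstract refers to -- a minimal sketch under strong simplifications (a plain chain of tanh layers, not the authors' general feedforward graph) -- each node exposes its local Jacobian products and the differential is assembled in one backward sweep:

import numpy as np

def tanh_layer(W):
    """One node y = tanh(W x) exposing forward and local-Jacobian passes."""
    def f(x):
        return np.tanh(W @ x)
    def back(x, delta):
        y = np.tanh(W @ x)
        d = delta * (1.0 - y ** 2)         # pull delta through the nonlinearity
        return W.T @ d, np.outer(d, x)     # (grad wrt input, grad wrt W)
    return f, back

def forward(fs, x):
    xs = [x]                               # keep all activations for the backward sweep
    for f in fs:
        xs.append(f(xs[-1]))
    return xs

def backprop(backs, xs, grad_out):
    grads, delta = [], grad_out
    for back, x in zip(reversed(backs), reversed(xs[:-1])):
        delta, gW = back(x, delta)         # one chain-rule step per node
        grads.append(gW)
    return list(reversed(grads))

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
fs, backs = zip(*(tanh_layer(W) for W in Ws))
xs = forward(fs, np.ones(3))
grads = backprop(backs, xs, grad_out=np.ones(2))   # d(sum of outputs)/dW per layer

For a general feedforward graph the same two sweeps apply, with a topological ordering of the nodes replacing the simple layer order used here.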
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-043:
----------------------------------------
On-line Learning with Malicious Noise and the Closure Algorithm
by Peter Auer, IGI, Graz University of Technology
Nicol\`o Cesa-Bianchi, DSI, University of Milan

Abstract: We investigate a variant of the on-line learning model for classes of $\Bool$-valued functions (concepts) in which the labels of a certain amount of the input instances are corrupted by adversarial noise. We propose an extension of a general learning strategy, known as the ``Closure Algorithm'', to this noise model, and show a worst-case mistake bound of $m + (d+1)K$ for learning an arbitrary intersection-closed concept class $\scC$, where $K$ is the number of noisy labels, $d$ is a combinatorial parameter measuring $\scC$'s complexity, and $m$ is the worst-case mistake bound of the Closure Algorithm for learning $\scC$ in the noise-free model. For several concept classes our extended Closure Algorithm is efficient and can tolerate a noise rate up to the information-theoretic upper bound. Finally, we show how to efficiently turn any algorithm for the on-line noise model into a learning algorithm for the PAC model with malicious noise.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-044:
----------------------------------------
Neural Networks with Quadratic VC Dimension
by Pascal Koiran, Ecole Normale Sup\'erieure de Lyon
Eduardo D. Sontag, Rutgers University

Abstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights $w$. This result settles a long-standing open question, namely whether the well-known $O(w \log w)$ bound, known for hard-threshold nets, also holds for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-045:
----------------------------------------
Learning Internal Representations (Short Version)
by Jonathan Baxter, Royal Holloway, University of London

Abstract: Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for {\em automatically} learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate {\em internal representation} for a learning environment and then using that representation to bias the learner's hypothesis space for the learning of future tasks drawn from the same environment. An internal representation must be learnt by sampling from {\em many similar tasks}, not just a single task as occurs in ordinary machine learning. It is proved that the number of examples $m$ {\em per task} required to ensure good generalisation from a representation learner obeys $m = O(a+b/n)$ where $n$ is the number of tasks being learnt and $a$ and $b$ are constants. If the tasks are learnt independently ({\em i.e.} without a common representation) then $m=O(a+b)$. It is argued that for learning environments such as speech and character recognition $b\gg a$ and hence representation learning in these environments can potentially yield a drastic reduction in the number of examples required per task.
It is also proved that if $n = O(b)$ (with $m=O(a+b/n)$) then the representation learnt will be good for learning novel tasks from the same environment, and that the number of examples required to generalise well on a novel task will be reduced to $O(a)$ (as opposed to $O(a+b)$ if no representation is used). It is shown that gradient descent can be used to train neural network representations, and the results of an experiment are reported in which a neural network representation was learnt for an environment consisting of {\em translationally invariant} Boolean functions.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-046:
----------------------------------------
Learning Model Bias
by Jonathan Baxter, Royal Holloway, University of London

Abstract: In this paper the problem of {\em learning} appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a sufficient bound is given on the number of tasks that must be learnt. A corollary of the theorem is that in appropriate domains the number of examples required per task for good generalisation when learning $n$ tasks scales like $\frac1n$. An experiment providing strong qualitative support for the theoretical results is reported.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-047:
----------------------------------------
The Canonical Metric for Vector Quantization
by Jonathan Baxter, Royal Holloway, University of London

Abstract: To measure the quality of a set of vector quantization points, a means of measuring the distance between two points is required. Common metrics such as the {\em Hamming} and {\em Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is argued that there often exists a natural {\em environment} of functions associated with the quantization process (for example, the word classifiers in speech recognition and the character classifiers in character recognition) and that such an environment induces a {\em canonical metric} on the space being quantized. It is proved that optimizing the {\em reconstruction error} with respect to the canonical metric gives rise to optimal approximations of the functions in the environment, so that the canonical metric can be viewed as embodying all the essential information relevant to learning the functions in the environment. Techniques for {\em learning} the canonical metric are discussed, in particular the relationship between learning the canonical metric and {\em internal representation learning}.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-048:
----------------------------------------
The Complexity of Query Learning Minor Closed Graph Classes
by Carlos Domingo, Tokyo Institute of Technology
John Shawe-Taylor, Royal Holloway, University of London

Abstract: The paper considers the problem of learning classes of graphs closed under taking minors. It is shown that any such class can be properly learned in polynomial time using membership and equivalence queries. The representation of the class is in terms of a set of minimal excluded minors (obstruction set). Moreover, a negative result for learning such classes using only equivalence queries is also provided, after introducing a notion of reducibility between query learning problems.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-049:
----------------------------------------
Generalisation of A Class of Continuous Neural Networks
by John Shawe-Taylor and Jieyu Zhao, Royal Holloway, University of London

Abstract: We propose a way of using boolean circuits to perform real-valued computation in a way that naturally extends their boolean functionality. The functionality of multiple fan-in threshold gates in this model is shown to mimic that of a hardware implementation of continuous Neural Networks. A Vapnik-Chervonenkis dimension and sample size analysis for the systems is performed, giving the best known sample sizes for a real-valued Neural Network. Experimental results confirm the conclusion that the sample sizes required for the networks are significantly smaller than for sigmoidal networks.

-----------------------

The report NC-TR-95-022 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-022.ps.Z
ftp> bye
% zcat nc-tr-95-022.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:

http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

Best wishes
John Shawe-Taylor

From icsc at freenet.edmonton.ab.ca Thu Jun 22 12:08:53 1995
From: icsc at freenet.edmonton.ab.ca (icsc@freenet.edmonton.ab.ca)
Date: Thu, 22 Jun 1995 10:08:53 -0600 (MDT)
Subject: Announcement / Call for papers SOCO'96
Message-ID:

ICSC - International Computer Science Conventions

Call for Papers
International Symposium on SOFT COMPUTING - SOCO'96
(Fuzzy Logic, Artificial Neural Networks and Genetic Algorithms)
To be held at the University of Reading, Whiteknights, Reading, England
March 26 - 28, 1996

I. SPONSORS
University of Reading, U.K.
International Computer Science Conventions (ICSC), Canada/Switzerland

II. ORGANISATION OF THE CONFERENCE
SOCO'96 is organised as a parallel conference to IIA'96 (International Symposium on Intelligent Industrial Automation). Both conferences are joint operations of the Department of Cybernetics, University of Reading, England and International Computer Science Conventions (ICSC), Canada/Switzerland.

III. PURPOSE OF THE CONFERENCE
The purpose of this conference is to assist communication of research in the fields of Fuzzy Logic, Neural Networks, Genetic Algorithms and their technological applications. The 'marriage' of fuzzy logic and neural net technologies offers many advantages in terms of fault-tolerance and speed of implementation. Intelligent automation is achieved by implementing humanlike intelligence and soft computing, a newly introduced concept which encompasses three intelligence-based methods: fuzzy logic, neural networks and genetic algorithms.

IV. TOPICS
Papers are encouraged in all areas related to Soft Computing, such as the following examples:
* Artificial Neural Networks
* Fuzzy Logic
* Fuzzy Control
* Genetic Algorithms
* AI and Expert Systems
* Probabilistic Reasoning
* Machine Learning
* Distributed Intelligence
* Learning Algorithms and Intelligent Control
* Self-Organizing Systems

V. INTERNATIONAL SCIENTIFIC COMMITTEE - ISC
H. Adeli, USA / E. Alpaydin, Turkey / P.G.
Anderson, USA (Chairman) / M. Dorigo, Belgium / H. Hellendoorn, Germany / M. Jamshidi, USA/France / B. Kosko, USA / F. Masulli, Italy / P.G. Morasso, Italy / C.C. Nguyen, USA / G.D. Smith, U.K. / N. Steele, U.K. (Vice Chairman) / S. Tzafestas, Greece / K. Warwick, U.K.

VI. PUBLICATION OF PAPERS
All accepted papers will appear in the conference proceedings, published by ICSC Academic Press. In addition, some selected papers may also be considered for journal publication.

VII. SUBMISSION OF MANUSCRIPTS
Prospective authors are requested to send two copies of their abstracts of 500 words for review by the International Scientific Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. If authors believe that more details are necessary to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the International Scientific Committee. The abstract should also include:
* Indication of whether the paper is submitted for SOCO'96 or IIA'96 (see separate call for papers)
* Title of proposed paper
* Authors' names, affiliations, addresses
* Name of author to contact for correspondence
* Email address and fax number of contact author
* Name of topic which best describes the paper (max. 5 keywords)

Contributions are welcome from those working in industry and having experience in the topics of this conference as well as from academics. The Conference language is English.

Abstracts may be submitted either by electronic mail (ASCII text), fax or mail (2 copies) to any one of the following addresses:

ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada
Email: icsc at freenet.edmonton.ab.ca
Fax: +1-403-387-4329

or

ICSC Switzerland, P.O. Box 657, CH-8055 Zurich, Switzerland

or

University of Reading, Dept. of Cybernetics, Whiteknights, P.O. Box 225, Reading RG6 6AY, U.K.

VIII. WORKSHOP
Contributions for a workshop on Soft Computing Methods for Pattern Recognition are welcome and abstracts (marked "workshop") may be submitted to ICSC Canada until July 31, 1995.

IX. DEADLINES AND REGISTRATION
It is the intention of the organizers to have the conference proceedings available for the delegates. Consequently the deadlines below are to be strictly respected:
* Submission of Abstracts: July 31, 1995
* Notification of Acceptance: September 30, 1995
* Delivery of Full Papers: November 30, 1995
* Early registration: November 30, 1995
* Late registration

Full registration (approx. English Pounds 325.00 for early registration) includes attendance at all sessions, lunches, dinners and coffee-breaks, pre-conference reception, conference banquet/social programme and conference proceedings. Combined registration for SOCO'96 and IIA'96 will be available at reduced rates. Full-time students with a valid student ID card may register with a rebate by eliminating proceedings, banquet/social programme and meals. Extra banquet/social programme tickets will be sold for accompanying persons and students. The proceedings can be purchased separately.

X. ACCOMMODATION
Accommodation (not included in the registration fee) is available at very reasonable rates at the University Campus. Full details will be made available with the letter of acceptance.

XI. FURTHER INFORMATION
For further information please contact:
ICSC Canada, P.O.
Box 279, Millet, Alberta, T0C 1Z0, Canada
Email: icsc at freenet.edmonton.ab.ca
Fax: +1-403-387-4329 / Phone: +1-403-387-3546

or

University of Reading, Department of Cybernetics, Whiteknights, P.O. Box 225, Reading RG6 6AY, U.K.
Fax: +44-1734-318 220 / Phone: +44-1734-318 214

ICSC CANADA, MILLET, AB, T0C 1Z0, email: icsc at freenet.edmonton.ab.ca

From smagt at fwi.uva.nl Fri Jun 23 05:34:12 1995
From: smagt at fwi.uva.nl (Patrick van der Smagt)
Date: Fri, 23 Jun 1995 11:34:12 +0200 (MET DST)
Subject: Preprint available (robotics & vision NN)
Message-ID: <199506230934.AA03412@brad.fwi.uva.nl>

The following preprint is now available:

A VISUALLY GUIDED ROBOT AND A NEURAL NETWORK JOIN TO GRASP SLANTED OBJECTS
P. van der Smagt, A. Dev, and F.C.A. Groen (1995)
Proceedings of the 1995 Dutch Conference on Neural Networks (in print)

---------------------------------------------------------------------------
FTP-HOST: ftp.fwi.uva.nl (146.50.3.49)
FTP-FILE: pub/computer-systems/aut-sys/reports/SmaDevGro95.ps.gz
84 Kb, 8 pages
---------------------------------------------------------------------------
ftp://ftp.fwi.uva.nl/pub/computer-systems/aut-sys/reports/SmaDevGro95.ps.gz
http://www.fwi.uva.nl/fwi/research/vg4/neuro/publications/publications.html
---------------------------------------------------------------------------

Abstract: In this paper we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, with a single camera in its end-effector, should be positioned above a target placed against a textured background, while the camera's pan and tilt change. It is shown that a trajectory can be planned in visual space by using components of the optic flow, and that this trajectory can be translated to joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.

From uzimmer at informatik.uni-kl.de Fri Jun 23 10:07:28 1995
From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP)
Date: Fri, 23 Jun 95 15:07:28 +0100
Subject: PhD thesis "Adaptive Approaches to Basic Mobile Robot Tasks" available
Message-ID: <950623.150728.2257@ag-vp-file-server.informatik.uni-kl.de>

PhD thesis available via WWW / FTP:

keywords: mobile robots, exploration, world modelling, navigation, object recognition, artificial neural networks, fuzzy logic

------------------------------------------------------------------
Adaptive Approaches to Basic Mobile Robot Tasks
------------------------------------------------------------------
Uwe R. Zimmer
PhD thesis - January 1995

The present thesis addresses the research field of adaptive behaviour concerning mobile robots. The world as "seen" by the robot is previously unknown and has to be explored by manoeuvring according to certain optimization criteria. This assumption enhances the fitness of a mobile robot for a range of applications beyond rigid installations, which normally demand significant effort and offer only a limited ability to adapt to changes in the environment. A central concept emphasized in this thesis is the achievement of competence and fitness through continuous interaction with the robot's world. Lifelong learning is considered: it runs in parallel to the robot's actual application, even after a temporarily sufficient degree of adaptation has been achieved. The levels of competence are generated bottom up, i.e. upper levels are based on the robot's current experience modelled in lower levels.
The terms employed on higher levels (with which the skills are formulated) are generated through real-world interactions on lower levels. The robotics problems discussed are limited to some basic tasks, which are found to be relevant for most mobile robot applications. These are exploration of unknown environments, stable self-localization with respect to the current world and its internal representation, as well as navigation, target extraction, and target recognition. In order to cope with problems resulting from a lack of proper a-priori knowledge and of defined, reliably detectable symbols in unknown and dynamic environments, connectionist methods are employed to a great extent. Real-time constraints are considered at all levels of competence, with the natural exception of global planning. The research field of target extraction and identification with respect to mobile robot constraints leads especially to the discussion of visual search (steering), the extraction of geometric primitives even at system start-up time, and the generation of symbols out of subsymbolic processing. These symbols can be reliably recognized and should be suitable for a following symbolic planning level, outside the focus of the present thesis. The presented approach ensures a large degree of adaptability on all levels, to an extent not discussed before; some components (e.g. visual search with highly focused devices) are investigated here for the first time. The exploration, self-localization, and navigation tasks are tackled by an integrated approach allowing these tasks to be processed in parallel in a dynamic environment. The stability and reliability of the discussed techniques are demonstrated on the basis of real-time and real-world experiments with a mobile platform. The high error tolerance, the low demands on the sensor devices used, and the small computational power required are (currently) unique features of the presented method.

Files:
- Part I : Introduction - 24 pages, 0.9 MB
- Part II : ALICE - 30 pages, 2.0 MB
- Part III : SPIN - 60 pages, 1.6 MB
- Part IV : Conclusion & Appendix - 38 pages, 0.9 MB

for the WWW-links to the files of this thesis:
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Projekte/ALICE/abs.PhD.html
------------------------------------------------------------------
for the homepage of the author (including more reports):
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/
------------------------------------------------------------------
or for the ftp-server hosting the files:
------------------------------------------------------------------
ftp://ag-vp-ftp.informatik.uni-kl.de/Public/Neural_Networks/Reports/Zimmer.PhD/ ...
------------------------------------------------------------------

Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department
67663 Kaiserslautern - Germany
Phone: +49 631 205 2624 | Fax: +49 631 205 2803
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/

From milanese at cui.unige.ch Fri Jun 23 09:28:58 1995
From: milanese at cui.unige.ch (Ruggero Milanese)
Date: Fri, 23 Jun 1995 15:28:58 +0200
Subject: Combining multiple estimates
Message-ID: <2340*/S=milanese/OU=cui/O=unige/PRMD=switch/ADMD=arcom/C=ch/@MHS>

Hello, I am interested in computing the trajectory and the velocity parameters of objects moving in the camera field of view, combining classical computer vision and neural network algorithms. I have several measures/estimates of the motion parameters, extracted through different methods. I am interested in combining these estimates in order to obtain a "combined estimate" which is closer to the "real values". So far, I have found the following papers that seem relevant to this problem:

- "Combining estimators using non-constant weighting functions", by V. Tresp and M. Taniguchi, Advances in Neural Information Processing Systems 7, MIT Press, 1995.
- "Multitarget-Multisensor Tracking: Advanced Applications", editor Bar-Shalom, Artech House, 1990.
- "Optimal linear combination of neural networks", Ph.D. thesis, Sh. Hashem, 1993.

Could anyone please give me any other pointers or suggest alternative approaches that possibly use non-constant weighting functions?

Please send replies to:
Sylvia Gil
Dept. of Computer Science, University of Geneva
24, rue du General Dufour, 1211 Geneva 4, Switzerland
E-mail: gil at cui.unige.ch
Phone: +41 (22) 705-7628
Fax: +41 (22) 705-7780
http://cuiwww.unige.ch

Thanks a lot.

-Ruggero Milanese
University of Geneva, Switzerland
milanese at cui.unige.ch

From rinkus at PARK.BU.EDU Fri Jun 23 10:56:00 1995
From: rinkus at PARK.BU.EDU (rinkus@PARK.BU.EDU)
Date: Fri, 23 Jun 1995 10:56:00 -0400
Subject: paper avail.: "TEMECOR: An Associative, Episodic, Temporal Sequence Memory"
Message-ID: <199506231456.KAA03369@space.bu.edu>

FTP-host: cns-ftp.bu.edu
FTP-file: pub/rinkus/nips95_rinkus.ps.Z

The following paper, which has been submitted to NIPS-95, and which is an extended and revised version of a paper accepted for an invited address at WCNN-95, is available via anonymous FTP at the above location. The paper is 8 pages long.

============================================================================

TEMECOR: An Associative, Episodic, Temporal Sequence Memory

Gerard J. Rinkus
Cognitive and Neural Systems Department
Boston University
Boston, MA 02215
rinkus at cns.bu.edu

ABSTRACT

A distributed associative neural model of {\em episodic memory}\/ for spatio-temporal patterns is presented. The model exhibits {\em faster-than-linear}\/ capacity scaling, under single-trial learning, for both uncorrelated and correlated patterns. The correlated pattern sets used in simulations reported herein are, formally, sets of {\em complex state sequences}\/ (CSSs)---i.e. sequences in which states can recur multiple times. Efficient representation of large sets of CSSs is central to speech and language processing. The English lexicon, for example, is formally representable as a set of many thousands of CSSs over an alphabet of about 50 phonemes. The model chooses internal representations (IRs) for each state in a highly random fashion. This implies maximal {\em dispersion}---i.e. maximal average Hamming distance---over the set of IRs chosen during learning. Maximal dispersion yields maximal episodic availability of the traces of the individual exemplars.
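The dispersion claim is easy to illustrate numerically. The following fragment is only an illustrative sketch with made-up constants (the paper's actual representation sizes may differ): it draws sparse binary internal representations at random and checks that their average pairwise Hamming distance comes out near the value expected for independent random codes.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_active, n_codes = 200, 20, 50    # illustrative sizes only

# choose each internal representation at random: n_active of n_units are on
codes = np.zeros((n_codes, n_units), dtype=int)
for c in codes:
    c[rng.choice(n_units, size=n_active, replace=False)] = 1

# average pairwise Hamming distance over the chosen set of IRs
dists = [np.sum(a != b) for i, a in enumerate(codes) for b in codes[i + 1:]]
print(np.mean(dists))    # about 2*n_active*(1 - n_active/n_units) = 36 here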
============================================================================

FTP instructions:
unix> ftp cns-ftp.bu.edu
Name: anonymous
Password: your full email address
ftp> cd pub/rinkus
ftp> get nips95_rinkus.ps.Z
ftp> bye
unix> uncompress nips95_rinkus.ps.Z

From hu at eceserv0.ece.wisc.edu Fri Jun 23 12:41:18 1995
From: hu at eceserv0.ece.wisc.edu (Yu Hu)
Date: Fri, 23 Jun 1995 11:41:18 -0500
Subject: LAST CALL FOR PAPERS: 1995 Int' Symp. on ANN (Taiwan, ROC)
Message-ID: <199506231641.AA26351@eceserv0.ece.wisc.edu>

******************************************************************************
SECOND AND LAST CALL FOR PAPERS
1995 International Symposium on Artificial Neural Networks
December 18-20, 1995, Hsinchu, Taiwan, Republic of China
******************************************************************************

Sponsored by National Chiao-Tung University in cooperation with Ministry of Education, Taiwan R.O.C., National Science Council, Taiwan R.O.C. and IEEE Signal Processing Society

*************************
Distinguished Speakers:
*************************
Prof. Leon Chua, UC Berkeley, USA.
Prof. John Moody, Oregon Graduate Institute, USA.
Prof. Tse Jun Tarn, Washington Univ., USA.

*************************
Call for Papers
*************************
The third in a series of International Symposia on Artificial Neural Networks will be held at the National Chiao-Tung University, Hsinchu, Taiwan in December of 1995. Papers are solicited for, but not limited to, the following topics:

Associative Memory
Robotics
Electrical Neurocomputers
Sensation & Perception
Image/Speech Processing
Sensory/Motor Control Systems
Machine Vision
Supervised Learning
Neurocognition
Unsupervised Learning
Neurodynamics
Fuzzy Neural Systems
Optical Neurocomputers
Mathematical Methods
Optimization
Other Applications

Prospective authors are invited to submit 4 copies of extended summaries of no more than 4 pages. All manuscripts should be written in English, single-spaced and single-column, on 8.5" by 11" white paper. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone/fax numbers, and email address if applicable. The indicated corresponding author will receive an acknowledgement of his/her submission. Camera-ready full papers of accepted manuscripts will be published in a hard-bound proceedings and distributed at the symposium. For more information, please consult the MOSAIC URL site http://www.ee.washington.edu/isann95.html, or use anonymous ftp from pierce.ee.washington.edu/pub/isann95/read.me (128.95.31.129).

*************************
SCHEDULE
*************************
Submission of extended summary: July 15
Notification of acceptance: September 30
Submission of photo-ready paper: October 31
Advanced registration, before: November 10

*************************
For submission from USA and Europe:
*************************
Professor Yu-Hen Hu
Dept. of Electrical and Computer Engineering
Univ. of Wisconsin - Madison, Madison, WI 53706-1691
Phone: (608) 262-6724, Fax: (608) 262-1267
Email: hu at engr.wisc.edu

*************************
For submission from Asia and Other Areas:
*************************
Professor Sin-Horng Chen
Dept. of Communication Engineering
National Chiao-Tung Univ., Hsinchu, Taiwan
Phone: (886) 35-712121 ext.
54522, Fax: (886) 35-710116
Email: isann95 at cc.nctu.edu.tw

ORGANIZATION

General Co-Chairs:
Hsin-Chia Fu, National Chiao-Tung University, Hsinchu, Taiwan, hcfu at csie.nctu.edu.tw
Jenq-Neng Hwang, University of Washington, Seattle, Washington, USA, hwang at ee.washington.edu

Program Co-Chairs:
Sin-Horng Chen, National Chiao-Tung University, Hsinchu, Taiwan, schen at cc.nctu.edu.tw
Yu-Hen Hu, University of Wisconsin, Madison, Wisconsin, USA, hu at engr.wisc.edu

Advisory Board Co-Chairs:
Sun-Yuan Kung, Princeton University, Princeton, New Jersey, USA
C. Y. Wu, National Science Council, Taipei, Taiwan, ROC

From baluja at GS93.SP.CS.CMU.EDU Fri Jun 23 16:18:51 1995
From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja)
Date: Fri, 23 Jun 95 16:18:51 EDT
Subject: paper: Removing the Genetics from the Standard Genetic Algorithm
Message-ID:

Title: Removing the Genetics from the Standard Genetic Algorithm
By: Shumeet Baluja and Rich Caruana

Abstract: We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being simpler, both computationally and theoretically, than the GA. Empirical results reported elsewhere show that PBIL is faster and more effective than the GA on a large set of commonly used benchmark problems. Here we present results on a problem custom designed to benefit both from the GA's crossover operator and from its use of a population. The results show that PBIL performs as well as, or better than, GAs carefully tuned to do well on this problem. This suggests that even on problems custom designed for GAs, much of the power of the GA may derive from the statistics maintained implicitly in its population, and not from the population itself nor from the crossover operator. This paper may be of interest to the connectionist community as the PBIL algorithm is largely based upon supervised competitive learning algorithms. This paper will appear in the Proceedings of the International Conference on Machine Learning, 1995.

instructions
----------------------------------------------
via anonymous ftp at: reports.adm.cs.cmu.edu
once you are logged in, issue the following commands:
binary
cd 1995
get CMU-CS-95-141.ps

From baluja at GS93.SP.CS.CMU.EDU Fri Jun 23 17:05:45 1995
From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja)
Date: Fri, 23 Jun 95 17:05:45 EDT
Subject: paper: ANN Based Task-Specific Focus of Attention
Message-ID:

Title: Using the Representation in a Neural Network's Hidden Layer for Task-Specific Focus of Attention
By: Shumeet Baluja and Dean Pomerleau

Abstract: In many real-world tasks, the ability to focus attention on the important features of the input is crucial for good performance. In this paper a mechanism for achieving task-specific focus of attention is presented. A saliency map, which is based upon a computed expectation of the contents of the inputs at the next time step, indicates which regions of the input retina are important for performing the task. The saliency map can be used to accentuate the features which are important, and de-emphasize those which are not. The performance of this method is demonstrated on a real-world robotics task: autonomous road following. The applicability of this method is also demonstrated in a non-visual domain. Architectural and algorithmic details are provided, as well as empirical results.
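One way to picture the mechanism -- a minimal sketch of one plausible reading, not the gating actually used in the paper -- is a multiplicative mask derived from how well the network's expectation of the next input matches what actually arrives:

import numpy as np

def saliency_gate(x_t, x_pred, sharpness=5.0):
    """Accentuate retina regions where the task-driven expectation x_pred
    agrees with the actual input x_t; de-emphasize the rest."""
    s = np.exp(-sharpness * (x_t - x_pred) ** 2)   # saliency map in [0, 1]
    return s * x_t

# toy usage: a 4-pixel "retina" and a prediction from the previous time step
x_t = np.array([0.9, 0.1, 0.8, 0.3])
x_pred = np.array([0.85, 0.7, 0.75, 0.9])
print(saliency_gate(x_t, x_pred))   # the 2nd and 4th pixels are suppressed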
This paper will appear in IJCAI 95.

instructions
----------------------------------------------
via anonymous ftp at: reports.adm.cs.cmu.edu
once you are logged in, issue the following commands:
binary
cd 1995
get CMU-CS-95-143.ps

From rafal at mech.gla.ac.uk Sun Jun 25 12:35:13 1995
From: rafal at mech.gla.ac.uk (Rafal W Zbikowski)
Date: Sun, 25 Jun 1995 17:35:13 +0100
Subject: PhD on neurocontrol: 2nd announcement
Message-ID: <4600.199506251635@gryphon.mech.gla.ac.uk>

My PhD thesis on neurocontrol can be found on the anonymous FTP server ftp.mech.gla.ac.uk (130.209.12.14) in directory rafal as the PostScript file zbikowski_phd.ps (ca. 1.2 MB). For details see the abstract below.

Rafal Zbikowski
Control Group, Department of Mechanical Engineering,
Glasgow University, Glasgow G12 8QQ, Scotland, UK
rafal at mech.gla.ac.uk

----------------------------- cut here ---------------------------------

``Recurrent Neural Networks: Some Control Aspects''
PhD Thesis
Rafal Zbikowski

ABSTRACT

This work aims at rigorous theoretical research on nonlinear adaptive control using recurrent neural networks. Attention is focussed on the dynamic, nonlinear parametric structures as generic models suitable for on-line use. The discussion is centred around proper mathematical formulation and analysis of the complex and abstract issues, and therefore no experimental data are given. The main aim of this work is to explore the capabilities of deterministic, continuous-time recurrent neural networks as state-space, generic, parametric models in the framework of nonlinear adaptive control. The notion of *nonlinear neural adaptive control* is introduced and discussed. The continuous-time state-space approach to recurrent neural networks is used. A general formalism of genericity of control is set up and developed into the *differential approximation* as the focal point of recurrent networks theory. A comparison of approaches to neural approximation, both feedforward and recurrent, is presented within a unified framework and with emphasis on relevance for neurocontrol. Two approaches to identifiability of recurrent networks are analysed in detail: one based on the State Isomorphism Theorem and the other on I/O equivalence. The Lie algebra associated with recurrent networks is described, and difficulties in the verification of (weak) controllability and observability are pointed out. Learning algorithms for recurrent networks are systematically presented and interpreted as deterministic, infinite-dimensional optimisation problems. Also, the continuous-time version of Real-Time Recurrent Learning is rigorously derived. Proper links between recurrent learning and optimal control are established. Finally, the interpretation of graceful degradation as an optimal sensitivity problem is given.

From dhw at santafe.edu Sun Jun 25 14:51:58 1995
From: dhw at santafe.edu (David Wolpert)
Date: Sun, 25 Jun 95 12:51:58 MDT
Subject: No subject
Message-ID: <9506251851.AA12379@sfi.santafe.edu>

In a recent posting, Sylvia Gil asks for "pointers to ... approaches that ... use non-constant weighting functions (to combine estimators)."

The oldest work in the neural network community on non-constant combining of estimators, and by far the most thoroughly researched, is stacking.^1 Stacking is basically the idea of using the behavior of estimators when trained on part of the training set and queried on the rest of it to learn how best to combine those estimators.

The original work on stacking was

Wolpert, D. (1992). "Stacked Generalization".
Neural Networks, 5, p. 241.

and the earlier tech report (1990) upon which it was based. Other work on stacking includes the papers

Breiman, L. (1992). "Stacked Regression". University of California Berkeley Statistics Dept., tech. report 367. {I believe this is now in press in Machine Learning.}

LeBlanc, M., and Tibshirani, R. (1993). "Combining estimates in regression and classification". University of Toronto Statistics Dept., tech. report.

In addition, much in each of the following papers concerns stacking:

Chan, P., and Stolfo, S. (1995). "A Comparative Evaluation of Voting and Meta-Learning on Partitioned Data". To appear in the Proceedings of ML 95.

Krogh, A. (1995). To appear in NIPS 7, Morgan Kaufmann. {I forget the title, as well as who the other author is.}

MacKay, D. (1993). "Bayesian non-linear modeling for the energy prediction competition". Cavendish Laboratory, Cambridge University tech. report.

Zhang, X., Mesirov, J., Waltz, D. (1993). J. Mol. Biol., 232, p. 1227.

Zhang, X., Mesirov, J., Waltz, D. (1992). J. Mol. Biol., 225, p. 1049.

Moreover, one of the references Gil mentioned (Hashem's) is a rediscovery and then investigation of stacking. (Hashem was not aware of the previous work on stacking when he did his research.) Finally, a simple variant of the way to use stacking to improve a single estimator, called "EESA", is the subject of the following paper

Kim, Bartlett (1995). "Error estimation by series association for neural network systems". Neural Computation, 7, p. 799.

Two non-stacking references on combining you should probably read are

Meir, R. (1995). "Bias, variance, and the combination of estimators; the case of least linear squares". To appear in NIPS 7, Morgan Kaufmann.

Perrone, M. (1993). Ph.D. thesis, Brown University Physics Dept.

David Wolpert

1 - Actually, there is some earlier work on combining estimators, in which one does not partition the training set (as in stacking), but rather uses the residuals (created by training the estimators on the full training set) to combine those estimators. However this scheme consistently performs worse than stacking. See for example the earlier of the two articles by Zhang et al.

From maggini at McCulloch.Ing.UniFI.IT Mon Jun 26 06:26:03 1995
From: maggini at McCulloch.Ing.UniFI.IT (Marco Maggini)
Date: Mon, 26 Jun 1995 12:26:03 +0200
Subject: paper available on grammatical inference using RNN
Message-ID: <9506261026.AA19956@McCulloch.Ing.UniFI.IT>

FTP-host: ftp-dsi.ing.unifi.it
FTP-filename: /pub/tech-reports/noisy-gram.ps.Z

The following paper, which has been submitted to NEURAP-95, is available via anonymous FTP at the above location. The paper is 8 pages long (about 200Kb).

========================================================================

Learning Regular Grammars From Noisy Examples Using Recurrent Neural Networks

M. Gori, M. Maggini, and G. Soda
Dipartimento di Sistemi e Informatica
Universita' di Firenze
Via di Santa Marta 3 - 50139 Firenze - Italy
Tel. +39 (55) 479.6265 - Fax +39 (55) 479.6363
E-mail: {marco,maggini,giovanni}@mcculloch.ing.unifi.it
WWW: http://www-dsi.ing.unifi.it/neural

ABSTRACT

Many successful results have recently been reported concerning the application of recurrent neural networks to the induction of simple finite state grammars. These results can be used to explore the computational capabilities of neural models applied to symbolic tasks. Many insights have been given on the links between the continuous dynamics of a recurrent neural network and the symbolic rules that we want to learn.
However, so far, the advantages of dynamical adaptive models and the related gradient-driven learning techniques with respect to classical symbolic inference algorithms have not been clearly shown. In this paper, we explore a class of inductive inference problems that seems to be very well suited to optimization-based learning algorithms. Bearing in mind the idea of an optimal rather than a perfect solution, we explore how optimality criteria can help a successful development of the learning process when some of the examples are erroneously labeled. Some experimental results show that neural network-based learning algorithms favor the development of the ``simplest'' solutions, thus eliminating most of the exceptions that arise when dealing with erroneous examples.

=============================================================================

The paper can be accessed and printed as follows:

% ftp ftp-dsi.ing.unifi.it (150.217.11.10)
Name: anonymous
password: your full email address
ftp> cd /pub/tech-reports
ftp> binary
ftp> get noisy-gram.ps.Z
ftp> bye
% uncompress noisy-gram.ps.Z
% lpr noisy-gram.ps

From hamps at stevens.speech.cs.cmu.edu Mon Jun 26 09:48:41 1995
From: hamps at stevens.speech.cs.cmu.edu (John Hampshire)
Date: Mon, 26 Jun 95 09:48:41 -0400
Subject: combining estimators w/ non-constant weighting
Message-ID: <9506261348.AA27513@stevens.speech.cs.cmu.edu>

In a response to Sylvia Gil, David Wolpert writes:

====
The oldest work in the neural network community on non-constant combining of estimators, and by far the most thoroughly researched, is stacking...
====

Perhaps the second claim is accurate, but the first claim (oldest in neural nets) is not. Robbie Jacobs, Mike Jordan, and Steve Nowlan have a long series of papers on the topic of combining estimators (maybe that's not what they called it, but that's certainly what it is); their work dates back to well before '92. Likewise, I wrote a NIPS paper (90 or 91... not worth reading) and an IEEE PAMI article (92... probably worth reading) with Alex Waibel on this topic. Since David's not aware of these earlier and contemporary works, his second ''most thoroughly researched'' claim would appear doubtful as well.

-John

From vecera+ at CMU.EDU Mon Jun 26 11:03:44 1995
From: vecera+ at CMU.EDU (Shaun Vecera)
Date: Mon, 26 Jun 1995 11:03:44 -0400 (EDT)
Subject: Tech Report: Figure-Ground Organization
Message-ID:

The following Technical Report is available either electronically from our own FTP server or in hard copy form. Instructions for obtaining copies may be found at the end of this post.

ftp://hydra.psy.cmu.edu:/pub/pdp.cns/pdp.cns.95.3.ps.Z

========================================================================

FIGURE-GROUND ORGANIZATION AND SHAPE RECOGNITION PROCESSES: AN INTERACTIVE ACCOUNT

Shaun P. Vecera
Randall C. O'Reilly

Technical Report PDP.CNS.95.3
June 1995

Traditional theories of visual processing have assumed that figure-ground organization must precede object representation and identification. Such a view seems logically necessary: How can one recognize an object before the visual system knows which region should be the figure? However, a number of behavioral studies have shown that subjects are more likely to call a familiar region ``figure'' relative to a less familiar region, a finding inconsistent with the traditional accounts of visual processing.
To explain these results, Peterson and colleagues have proposed an additional ``prefigural'' object recognition process that operates before any figure-ground organization (M. A. Peterson, 1994). We propose a more parsimonious interactive account of figure-ground organization in which partial results of figure-ground processes interact with object representations in a hierarchical system similar to that envisioned by traditional theories. We present a computational model that embodies this graded, interactive approach and show that this model can account for several behavioral results, including orientation effects, exposure duration effects, and the combination of multiple cues. Finally, these principles of graded, interactive processing offer the possibility of providing a more general information processing framework for visual and higher-cognitive systems.

=======================================================================

Retrieval information for pdp.cns TRs:
unix> ftp 128.2.248.152 # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.95.3.ps.Z
ftp> quit
unix> zcat pdp.cns.95.3.ps.Z | lpr # or however you print postscript

NOTE: The compressed file is 128K. Uncompressed, the file is 313K. The printed version is 51 total pages. For those who do not have FTP access, physical copies can be requested from Barbara Dorney.

From schwenk at robo.jussieu.fr Mon Jun 26 13:33:45 1995
From: schwenk at robo.jussieu.fr (Holger Schwenk)
Date: Mon, 26 Jun 1995 18:33:45 +0100 (WETDST)
Subject: NIPS*7 preprint available (OCR, fast tangent distance)
Message-ID: <950626183345.29040000.adc22387@lea.robo.jussieu.fr>

**DO NOT FORWARD TO OTHER GROUPS**

FTP-host: ftp.robo.jussieu.fr
FTP-filename: /papers/schwenk.nips7.ps.gz (8 pages, 39k)

The following paper, which will appear in NIPS*7, MIT Press, is available via anonymous FTP at the above location. The paper is 8 pages long; the screendumps will look best on 600dpi laser printers. No hardcopies available.

===============================================================================

Transformation Invariant Autoassociation with Application to Handwritten Character Recognition

H. Schwenk and M. Milgram
PARC - boite 164
Universite Pierre et Marie Curie
4, place Jussieu
75252 Paris cedex 05, FRANCE

ABSTRACT

When training neural networks by the classical backpropagation algorithm, the whole problem to be learned must be expressed by a set of inputs and desired outputs. However, we often have high-level knowledge about the learning problem. In optical character recognition (OCR), for instance, we know that the classification should be invariant under a set of transformations like rotation or translation. We propose a new modular classification system based on several autoassociative multilayer perceptrons which allows the efficient incorporation of such knowledge. Results are reported on the NIST database of upper case handwritten letters and compared to other approaches to the invariance problem.

============================================================================

FTP instructions:
unix> ftp ftp.robo.jussieu.fr
Name: anonymous
Password: your full email address
ftp> cd papers
ftp> bin
ftp> get schwenk.nips7.ps.gz
ftp> quit
unix> gunzip schwenk.nips7.ps.gz
unix> lp schwenk.nips7.ps (or however you print postscript)

The above ftp server will also host other papers from our group in the near future. I welcome your comments.
---------------------------------------------------------------------
Holger Schwenk
PARC - boite 164
Universite Pierre et Marie Curie
4, place Jussieu
75252 Paris cedex 05, FRANCE
tel: (+33 1) 44.27.63.08
fax: (+33 1) 44.27.62.14
email: schwenk at robo.jussieu.fr
---------------------------------------------------------------------

From unni at neuro.cs.gmr.com Mon Jun 26 06:37:59 1995
From: unni at neuro.cs.gmr.com (K.P. Unnikrishnan CS/50)
Date: Mon, 26 Jun 1995 15:37:59 +0500
Subject: Caltech/GM Postdoc in Control
Message-ID: <9506261937.AA07347@neuro.cs.gmr.com>

CALIFORNIA INSTITUTE OF TECHNOLOGY
POST-DOCTORAL FELLOWSHIP
Neural Networks for Control

A post-doctoral fellowship is available for research in neural networks for automotive control. The position is initially for one year, renewable for another. This position is part of an ongoing project between Caltech and General Motors. Research on neural architectures and algorithms appropriate to specific real-world problems is a goal of this project. Deadline for submission is August 15, 1995. The selected candidate would be expected to spend part of the time at the GM Research Labs in Michigan and part of the time at Caltech.

Applicants should send a curriculum vitae, a brief description of relevant experience in control theory and neural networks, and names of three references to:

Ms. Laura Rodriguez
Caltech 139-74
Pasadena, California 91125

Informal enquiries can be made to unni at gmr.com. The California Institute of Technology is an affirmative action/equal opportunity employer. Women, minorities, veterans, and disabled persons are encouraged to apply.

From arbib at pollux.usc.edu Mon Jun 26 15:17:35 1995
From: arbib at pollux.usc.edu (Michael A. Arbib)
Date: Mon, 26 Jun 1995 12:17:35 -0700
Subject: The Handbook of Brain Theory and Neural Networks
Message-ID: <199506261917.MAA08341@pollux.usc.edu>

Advertisement and Request for Feedback from Michael A. Arbib: arbib at pollux.usc.edu

ADVERTISEMENT:

The following is adapted from the brochure put out by MIT Press for The Handbook of Brain Theory and Neural Networks, which they have just published, and which I edited:

"In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, the Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific topics related to two great questions: How does the brain work? and How can we build intelligent machines?

"While many books have appeared on limited aspects of one subfield or another of brain theory and neural networks, this handbook covers the entire sweep of topics - from detailed models of single neurons, analyses of a wide variety of biological neural networks, and connectionist studies, to mathematical analyses of a variety of abstract neural networks, and technological applications of adaptive, artificial neural networks.

"The excitement, and the frustration, of these topics is that they span such a broad range of disciplines including mathematics, statistical physics and chemistry, neurology and neurobiology, and computer science and electrical engineering as well as cognitive psychology, artificial intelligence, and philosophy. Thus, much effort has gone into making the Handbook accessible to readers with varied backgrounds while still providing a clear view of much of the recent, specialized research in specific topics.
"The heart of the book, Part III - ARTICLES, comprises 266 original articles by leaders in the various fields, arranged alphabetically by title. Parts I and II, written by the editor, are designed to help readers orient themselves to this vast range of material. PART I - BACKGROUND introduces several basic neural models, explains how the present study of Brain Theory and Neural Networks integrates brain theory, artificial intelligence, and cognitive psychology, and provides a tutorial on the concepts essential for understanding neural networks as dynamic, adaptive systems. PART II - ROAD MAPS provides an entree into the many articles of Part III via an introductory "Meta-Map" and twenty-three road maps, grouped under eight general headings:

CONNECTIONISM: PSYCHOLOGY, LINGUISTICS, AND AI
Connectionist Psychology / Connectionist Linguistics / Artificial Intelligence and Neural Networks

DYNAMICS, SELF-ORGANIZATION, AND COOPERATIVITY
Dynamic Systems and Optimization / Cooperative Phenomena / Self-Organization in Neural Networks

LEARNING IN ARTIFICIAL NEURAL NETWORKS
Learning in Artificial Neural Networks, Deterministic / Learning in Artificial Neural Networks, Statistical / Computability and Complexity

APPLICATIONS AND IMPLEMENTATIONS
Control Theory and Robotics / Applications of Neural Networks / Implementation of Neural Networks

BIOLOGICAL NEURONS AND NETWORKS
Biological Neurons / Biological Networks / Mammalian Brain Regions

SENSORY SYSTEMS
Vision / Other Sensory Systems

PLASTICITY IN DEVELOPMENT AND LEARNING
Mechanisms of Neural Plasticity / Development and Regeneration of Neural Networks / Learning in Biological Systems

MOTOR CONTROL
Motor Pattern Generators and Neuroethology / Biological Motor Control / Primate Motor Control

*****

REQUEST FOR FEEDBACK

Any feedback on the Handbook would be much appreciated - both praise for what worked well and suggestions for what needs improvement. In particular, what topics are missing in Part III (you'll need to tour the Index of the Handbook as well as looking for the "obvious" headings in the Table of Contents), and what subtopics are missing in individual articles? And how might Parts I and II be improved? Any other comments and suggestions will be most welcome. Meanwhile, I hope that many of you enjoy the book and find that it does succeed in bridging the cultural divide between those who study the brain and those who study artificial neural networks.

*****

With best wishes
Michael Arbib

P.S. Please forward this message to other news groups, etc.

From pierre at mbfys.kun.nl Tue Jun 27 03:33:27 1995
From: pierre at mbfys.kun.nl (Pierre v.d. Laar)
Date: Tue, 27 Jun 1995 09:33:27 +0200 (MET DST)
Subject: Paper Available: A Neural Model of Visual Attention
Message-ID: <199506270733.JAA00291@anthemius.mbfys.kun.nl>

Dear Connectionists,

The following paper has been accepted to appear in the Proceedings of the third SNN Symposium.

A Neural Model of Visual Attention
Pi\"erre van de Laar, Tom Heskes, and Stan Gielen

Abstract: We propose a biologically plausible neural model of selective covert visual attention. We show that this model is able to learn focussing on object-specific features. It has learning characteristics similar to those of humans in the learning and unlearning paradigm of Shiffrin and Schneider (1977).

R. M.
Shiffrin and W. Schneider. Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2):127--190, 1977.

The online version can be reached through: http://www.mbfys.kun.nl/~pierre/Proc3SNN/

The postscript version is available through ftp at: ftp.mbfys.kun.nl as file: snn/pub/reports/Laar.Proc3SNN.ps.Z

FTP INSTRUCTIONS
unix% ftp ftp.mbfys.kun.nl (or 131.174.83.52)
Name: anonymous
Password: (use your e-mail address)
ftp> cd snn/pub/reports/
ftp> binary
ftp> get Laar.Proc3SNN.ps.Z
ftp> bye
unix% uncompress Laar.Proc3SNN.ps.Z
unix% lpr Laar.Proc3SNN.ps

Remarks, suggestions and relevant references are welcome.

Best wishes
Pi\"erre van de Laar
Department of Medical Physics and Biophysics, University of Nijmegen, The Netherlands
pierre at mbfys.kun.nl
URL: http://www.mbfys.kun.nl/~pierre/

P.S. More information about the SNN symposium can be found at: http://www.mbfys.kun.nl/SNN/Symposium/

From lxu at cs.cuhk.hk Tue Jun 27 08:27:10 1995
From: lxu at cs.cuhk.hk (Dr. Xu Lei)
Date: Tue, 27 Jun 95 20:27:10 +0800
Subject: combining estimators w/ non-constant weighting
Message-ID: <9506271227.AA21272@cucs18.cs.cuhk.hk>

The topic of combining classifiers has also been studied in the Pattern Recognition literature as well as in Neural Networks. My colleagues and I have done some work (please use your own judgement to decide whether it is worth reading).

-Lei

Lei Xu, Adam Krzyzak and Ching Y. Suen (1991), ``Associative Switch for Combining Multiple Classifiers'', Proc. of 1991 International Joint Conference on Neural Networks, Seattle, July, Vol. I, 1991, pp. 43-48. The detailed version was later published in {\sl Journal of Artificial Neural Networks}, Vol. 1, No. 1, pp. 77-100, 1994.

Lei Xu, Adam Krzyzak and Ching Y. Suen (1992), ``Several Methods for Combining Multiple Classifiers and Their Applications in Handwritten Character Recognition'', {\sl IEEE Trans. on System, Man and Cybernetics}, Vol. SMC-22, No. 3, pp. 418-435, 1992, regular paper.

Lei Xu and M. I. Jordan (1993), ``EM Learning on A Generalized Finite Mixture Model for Combining Multiple Classifiers'', Proceedings of World Congress on Neural Networks, Portland, OR, Vol. IV, 1993.

Lei Xu, M. I. Jordan and G. E. Hinton (1995), ``An Alternative Model for Mixtures of Experts'', to appear in {\em Advances in Neural Information Processing Systems 7}, eds. Cowan, J.D., Tesauro, G., and Alspector, J., MIT Press, Cambridge MA, 1995.

From uzimmer at informatik.uni-kl.de Tue Jun 27 12:43:00 1995
From: uzimmer at informatik.uni-kl.de (Uwe R. Zimmer, AG vP)
Date: Tue, 27 Jun 95 17:43:00 +0100
Subject: Paper available: Baseline Detection in Chromatograms
Message-ID: <950627.174300.2292@ag-vp-file-server.informatik.uni-kl.de>

Paper available via WWW / FTP:

keywords: chromatography, baseline detection, artificial neural networks, self-organization, constraint topologic mapping, fuzzy logic

------------------------------------------------------------------
Deriving Baseline Detection Algorithms from Verbal Descriptions
------------------------------------------------------------------
Baerbel Herrnberger & Uwe R. Zimmer
(submitted for publication)

This paper is on baseline detection in chromatography, a widely used technique for evaluating complex mixtures of substances in analytical chemistry. The resulting chromatograms are characterized by peaks indicating the presence and the amount of certain substances. Due to disturbances of various kinds, chromatograms have to be corrected for baseline before taking further measurements on these peaks. The presented strategy of automatic baseline detection combines fuzzy logic and neural network approaches. It is based on a verbal description of a baseline referring to a 2D image of a chromatogram instead of a data vector.
Due to disturbances of various kinds, chromatograms have to be corrected for baseline before further measurements are taken on these peaks. The strategy for automatic baseline detection presented here combines fuzzy logic and neural network approaches. It is based on a verbal description of a baseline referring to a 2D image of a chromatogram rather than a data vector: baselines are expected to touch data points on the lower border of the chromatogram, forming a mainly horizontal and straight line. This description has been translated into a set of algorithms in a two-stage approach, the first stage proceeding on a local and the second on a global level. The first stage assigns to each data point a value regarded as its degree of baseline membership, or significance; the second uses a global optimization strategy to coordinate these significances and simultaneously produce the final curve. Since no single feature can be expected to be sufficient for baseline/non-baseline discrimination, several features are extracted. Because they are derived from a 2D image, positional relations between data points can be taken into account. The manner of feature fusion is derived from a cost function over a set of pre-classified data points. Constrained topological mapping forms the basis of the second stage. The statistical stability of the proposed approach is superior to that of known techniques, while the computational effort remains low.

(5 pages - 432 KB)

WWW link:
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/abs.Baseline.html
------------------------------------------------------------------

Homepage of the authors (including more reports):
------------------------------------------------------------------
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/bahe.html
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/
------------------------------------------------------------------

FTP server hosting the file:
------------------------------------------------------------------
ftp://ag-vp-ftp.informatik.uni-kl.de/Public/Neural_Networks/Reports/Herrnberger.Baseline.ps.Z
------------------------------------------------------------------

-----------------------------------------------------
Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department
67663 Kaiserslautern - Germany
Phone: +49 631 205 2624 | Fax: +49 631 205 2803
http://ag-vp-www.informatik.uni-kl.de/Leute/Uwe/
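The two-stage scheme in the abstract above can be illustrated with a small sketch of the first, local stage. This is not the authors' code; the window size, the two fuzzy memberships ("low in its neighbourhood", "locally horizontal"), and the fuzzy-AND fusion rule are all assumptions made for illustration:

import numpy as np

def baseline_significance(y, window=15):
    """Toy local stage: fuse fuzzy memberships into one significance per point."""
    n, sig = len(y), np.zeros(len(y))
    for i in range(n):
        seg = y[max(0, i - window):min(n, i + window + 1)]
        # membership "low": close to the local minimum of the neighbourhood
        low = 1.0 - (y[i] - seg.min()) / (np.ptp(seg) + 1e-9)
        # membership "horizontal": small slope of a locally fitted line
        slope = abs(np.polyfit(np.arange(len(seg)), seg, 1)[0])
        flat = 1.0 / (1.0 + 10.0 * slope)
        sig[i] = min(low, flat)  # fuzzy AND (minimum) as the fusion rule
    return sig

A global second stage would then, as the abstract says, coordinate these significances into a single curve; constrained topological mapping is one way to do that.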
From jacobs at psych.Stanford.EDU Tue Jun 27 12:47:52 1995
From: jacobs at psych.Stanford.EDU (Robert Jacobs)
Date: Tue, 27 Jun 1995 09:47:52 -0700
Subject: combining estimators
Message-ID: <199506271647.JAA13795@aragorn.Stanford.EDU>

I have written a review article on statistical methods for combining estimators. Linear combination techniques are covered, as well as supra-Bayesian procedures. The article is scheduled to appear in the journal "Neural Computation" (volume 7, number 5). I recently received the page proofs, so I imagine that it will appear relatively soon, possibly in the next issue. The abstract of the article is at the bottom of this note.

Robbie Jacobs

============================================================

Methods For Combining Experts' Probability Assessments

This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distributions into a single distribution that can be used for decision making. Two classes of aggregation methods are reviewed. When using a supra-Bayesian procedure, the decision maker treats the expert opinions as data that may be combined with its own prior distribution via Bayes' rule. When using a linear opinion pool, the decision maker forms a linear combination of the expert opinions. The major feature that makes the aggregation of expert opinions difficult is the high correlation or dependence that typically occurs among these opinions. A theme of this paper is the need for training procedures that result in experts with relatively independent opinions, or for aggregation methods that implicitly or explicitly model the dependence among the experts. Analyses are presented which show that $m$ dependent experts are worth the same as $k$ independent experts, where $k \leq m$. In some cases an exact value for $k$ can be given; in other cases lower and upper bounds can be placed on $k$.
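To make the linear opinion pool concrete, here is a minimal sketch; the three expert distributions and the pooling weights are invented for illustration and are not taken from the article:

import numpy as np

# Three experts each report a probability distribution over the same three events.
experts = np.array([[0.7, 0.2, 0.1],
                    [0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3]])

# Linear opinion pool: a convex combination of the experts' distributions.
weights = np.array([0.5, 0.3, 0.2])  # non-negative, summing to one
pooled = weights @ experts           # [0.57, 0.29, 0.14], again a distribution

A supra-Bayesian decision maker would instead treat the rows of "experts" as observed data and update its own prior via Bayes' rule, which requires a likelihood model of the experts, including their dependence - exactly the difficulty the abstract emphasizes.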
From dhw at santafe.edu Tue Jun 27 17:09:22 1995
From: dhw at santafe.edu (David Wolpert)
Date: Tue, 27 Jun 95 15:09:22 MDT
Subject: Mixtures of experts and combining learning algorithms
Message-ID: <9506272109.AA25931@sfi.santafe.edu>

John Hampshire writes:

>>>> Robbie Jacobs, Mike Jordan, and Steve Nowlan have a long series of papers on the topic of combining estimators (maybe that's not what they called it, but that's certainly what it is); their works date back to well before 92. Likewise, I wrote a NIPS paper (90 or 91... not worth reading) and an IEEE PAMI article (92... probably worth reading) with Alex Waibel on this topic. Since David's not aware of these earlier and contemporary works, his second "most thoroughly researched" claim would appear doubtful as well. >>>>

I am aware of the ground-breaking work of Nowlan, Jacobs, Jordan, Hinton, etc. on adaptive mixtures of experts (AME), as well as other related schemes John didn't mention. It is related to the subject at hand, so I should have mentioned it in my posting. However, I have trouble seeing exactly what John is driving at, since although AME is related, I don't think it directly addresses Gil's question.

Most (almost all?) of the work on AME I've encountered concerns members of a restricted family of learning algorithms: namely, parametric learning algorithms that work by minimizing some cost function. (For example, in Nowlan and Hinton (NIPS 3), one is explicitly combining neural nets.) Loosely speaking, AME in essence "co-opts" how these algorithms work, by combining their individual cost functions into a larger cost function that is then minimized by varying everybody's parameters together.

However, I took Gil's question (perhaps incorrectly) to concern the combination of *arbitrary* types of estimators, which in particular includes estimators (like nearest neighbor) that need not be parametric and therefore cannot readily be "co-opted". (Certainly the work she listed, like Sharif's, concerns the combination of such arbitrary estimators.) This simply is not the concern of most of the work on AME.

Now one could imagine varying AME so that the "experts" being combined are not parameterized input-output functions but rather the outputs of more general kinds of learning algorithms. For example, one could have the "experts" be the end products of assorted nearest neighbor schemes that are trained independently of one another. *After* that training one would train the gating network to combine the individual experts. (In contrast, in vanilla AME one trains the gating network together with the individual experts in one go.) However:

1) It can be argued that it is stretching things to view this as AME, especially if you adopt the perspective that AME is a kind of mixture modelling.

2) More importantly, I already referred to this kind of scheme in my original posting: "Actually, there is some earlier work on combining estimators, in which one does not partition the training set (as in stacking), but rather uses the residuals (created by training the estimators on the full training set) to combine those estimators. However this scheme appears to perform worse than stacking. See for example the earlier of the two articles by Zhang et al."

***

Summarizing, we have one of two possibilities. Either i) John is referring to a possible variant of AME that I did mention (albeit without explicitly using the phrase "AME"), or ii) John is referring to the more common variant of AME, which cannot combine arbitrary kinds of estimators and therefore is not a candidate for what (I presumed) Gil had in mind.

Obviously I am not as much of an expert on AME as John, so there might very well be a section or two (or even a whole paper or two!) that falls outside of those two categorizations of AME. But I think it's fair to say that most of the work on AME is not concerned with combining arbitrary estimators in ways other than those referred to in my posting. Nonetheless, I certainly recommend that Gil (and others) acquaint themselves with the seminal work on AME. I was definitely remiss in not including AME in my (quickly knocked together) list of work related to Gil's question.

David Wolpert
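The variant sketched above - freeze independently trained experts, then fit only a gate over them - is easy to write down. The following is an illustrative sketch only; the softmax gate, squared-error objective, and plain gradient descent are assumptions, not anyone's published method:

import numpy as np

def train_gate(expert_preds, y, x_feats, lr=0.1, epochs=200):
    """expert_preds: (n, m) frozen predictions of m pretrained experts;
    x_feats: (n, d) gate inputs. Returns gate weights W of shape (d, m)."""
    n, m = expert_preds.shape
    W = np.zeros((x_feats.shape[1], m))
    for _ in range(epochs):
        g = np.exp(x_feats @ W)
        g /= g.sum(axis=1, keepdims=True)      # softmax gate, per example
        yhat = (g * expert_preds).sum(axis=1)  # gated combination
        err = yhat - y
        # gradient of 0.5*sum(err^2) through the softmax gate
        grad_s = err[:, None] * g * (expert_preds - yhat[:, None])
        W -= lr * x_feats.T @ grad_s / n
    return W

Note that this only optimizes the combination; nothing propagates back into the experts, which is exactly what separates it from vanilla AME.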
From nin at math.tau.ac.il Wed Jun 28 05:30:15 1995
From: nin at math.tau.ac.il (Intrator Nathan)
Date: Wed, 28 Jun 1995 12:30:15 +0300
Subject: Combining estimators - which ones to combine
Message-ID: <199506280930.MAA10327@silly3.math.tau.ac.il>

The focus of most of the papers cited so far on combining experts has been how to combine, but the issue of what to combine is just as important. The fundamental observation is that combining - or, in the simplest case, averaging - estimators is effective only if these estimators are somehow made independent. One can induce independence via bootstrap methods (Breiman's stacking and, recently, bagging) or via the smooth bootstrap, which amounts to injecting noise during training. Once the estimators are independent enough, simple averaging gives very good performance.

- Nathan Intrator

Refs:

@misc{Breiman93,
author="L. Breiman",
title="Stacked regression",
year=1993,
note="Technical report, Univ. of Cal, Berkeley",
}

@misc{Breiman94,
author="L. Breiman",
title="Bagging predictors",
year=1994,
note="Technical report, Univ. of Cal, Berkeley",
}

@misc{RavivIntrator95,
author="Y. Raviv and N. Intrator",
year=1995,
note="Preprint",
title="Bootstrapping with Noise: An Effective Regularization Technique",
abstract="Bootstrap samples with noise are shown to be an effective smoothness and capacity control for training feed-forward networks as well as more traditional statistical models such as general additive models. The effects of smoothness and ensemble averaging are shown to be complementary and not equivalent to noise injection. The two-spiral problem, a highly non-linear noise-free task, is used to demonstrate these findings.",
url="ftp://cns.math.tau.ac.il/papers/spiral.ps.Z",
}

The last one can also be accessed via my research page:
http://www.math.tau.ac.il/~nin/research.html
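A minimal sketch of the "what to combine" point: train the members of an ensemble on bootstrap resamples with injected input noise (a smooth bootstrap), then just average. The base learner, noise level, and ensemble size below are arbitrary illustrative choices, not taken from the references:

import numpy as np

def noisy_bagged_fit(x, y, n_models=25, noise=0.1, degree=5, seed=None):
    """Ensemble of toy polynomial fits on noisy bootstrap samples;
    the returned predictor is the plain average of the members."""
    rng = np.random.default_rng(seed)
    n, coeffs = len(x), []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)              # bootstrap resample
        xb = x[idx] + rng.normal(0.0, noise, size=n)  # smooth bootstrap: add noise
        coeffs.append(np.polyfit(xb, y[idx], degree))
    return lambda xq: np.mean([np.polyval(c, xq) for c in coeffs], axis=0)

The noise decorrelates the members beyond what resampling alone achieves, which is what makes the simple average effective.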
From brunak at cbs.dtu.dk Wed Jun 28 11:29:01 1995
From: brunak at cbs.dtu.dk (Soren Brunak)
Date: Wed, 28 Jun 95 11:29:01 METDST
Subject: Combining neural estimators, NetGene
Message-ID:

Re: Combining neural network estimators, NetGene

For people interested in earlier work on combined neural networks, I would like to bring the following paper to their attention:

Prediction of human mRNA donor and acceptor sites from the DNA sequence, S. Brunak, J. Engelbrecht, and S. Knudsen, J. Mol. Biol., 220, 49-65, 1991. (Abstract below.)

The paper describes a method for locating intron splice sites in human genes by combining a search for coding regions and a search for donor and acceptor sites. The method was implemented as a mail server in February 1992 and is still widely used. Since 1992 it has processed more than 50 million nucleotides of DNA for researchers from many different countries, mainly the UK, USA and Germany. The mail server is reached by sending mail to: NetGene at cbs.dtu.dk

Regards,
Soren Brunak
Center for Biological Sequence Analysis
The Technical University of Denmark
DK-2800 Lyngby, Denmark
Email: brunak at cbs.dtu.dk

-----------------------------------------------------------------------

Abstract: Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme, in which the prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment, was able to predict splice site locations with confidence levels far better than previously reported in the literature. The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous false positives; the paper examines the distribution of these false splice sites and links it to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites, it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study, this means that on average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method, more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without any accompanying false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non-coding network and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding.

From lxu at cs.cuhk.hk Thu Jun 29 04:56:31 1995
From: lxu at cs.cuhk.hk (Dr. Xu Lei)
Date: Thu, 29 Jun 95 16:56:31 +0800
Subject: combining estimators w/ non-constant weighting
Message-ID: <9506290856.AA19321@cucs18.cs.cuhk.hk>

David Wolpert wrote:

>>>> However I took Gil's question (perhaps incorrectly) to concern the combination of *arbitrary* types of estimators, which in particular includes estimators (like nearest neighbor) that need not be parametric and therefore can not readily be "co-opted". (Certainly the work she listed, like Sharif's, concerns the combination of such arbitrary estimators.) This simply is not the concern of most of the work on AME. >>>>

The following two papers with AME concern this type of work. Actually, this type of work can be treated as a special case of the Mixture of Experts --- some experts have been pretrained and fixed, and only the gating net and the other experts need to be trained in ME learning.

Lei Xu and M. I. Jordan (1993), "EM Learning on A Generalized Finite Mixture Model for Combining Multiple Classifiers", Proceedings of the World Congress on Neural Networks, Portland, OR, Vol. IV, 1993.

Lei Xu, M. I. Jordan and G. E. Hinton (1995), "An Alternative Model for Mixtures of Experts", to appear in Advances in Neural Information Processing Systems 7, eds. Cowan, J.D., Tesauro, G., and Alspector, J., MIT Press, Cambridge MA, 1995.

From C.Campbell at bristol.ac.uk Thu Jun 29 05:50:50 1995
From: C.Campbell at bristol.ac.uk (I C G Campbell)
Date: Thu, 29 Jun 1995 10:50:50 +0100 (BST)
Subject: PhD Studentship available
Message-ID: <199506290950.KAA05185@zeus.bris.ac.uk>

I would be very grateful if the following studentship announcement could be advertised on your BB.

With many thanks,
Colin Campbell

***********************************

EPSRC CASE AWARD LEADING TO PhD
Non-Linear Modelling in an Extended Kalman Filter

The Centre for Communications Research at Bristol University, in collaboration with the Sowerby Research Centre, British Aerospace (Operations) Ltd., has been awarded a CASE scholarship by the Engineering and Physical Sciences Research Council in the area of non-linear modelling. This research programme will commence in October 1995 and will continue until October 1998.

The Kalman filter is a widely used object tracking algorithm with applications ranging from industrial robotics to aircraft radar systems. This research programme will investigate the application of artificial neural networks (and related algorithms) as non-linear modelling devices within a Kalman filter framework, and will integrate with existing research in the area of object tracking and identification in multi-sensor systems.

The studentship will comprise the standard EPSRC award plus an industrial supplement. Although the research programme will be based at Bristol University, the student can expect to spend at least three months in industry over the three-year period.

Applicants should possess a good honours degree in Electronic Engineering, Engineering Mathematics or a related discipline, and must be EC citizens. Please apply in writing, with a full CV, to Dr. David Bull, Centre for Communications Research, University of Bristol, Queens Building, Bristol BS8 1TR (email: Dave.Bull at bristol.ac.uk, tel: 0117 928 8613).

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Dr. David R. Bull, Reader in Digital Signal Processing
Centre for Communications Research, University of Bristol
Queens Building, University Walk, Bristol BS8 1TR, UK
tel: +44 117 928 8613, fax: +44 117 925 5265
email: Dave.Bull at bristol.ac.uk
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

From pjs at aig.jpl.nasa.gov Thu Jun 29 16:46:14 1995
From: pjs at aig.jpl.nasa.gov (Padhraic J. Smyth)
Date: Thu, 29 Jun 95 13:46:14 PDT
Subject: Combining human and machine experts
Message-ID: <9506292046.AA02754@amorgos.jpl.nasa.gov>

A tangent to the discussion on combining estimators and experts is a pointer to the statistical literature on combining the subjective ratings of multiple human experts. This problem arises in medical diagnosis and in remote sensing applications, where it is not uncommon to have multiple opinions and not know which expert to believe. Quite a bit of work has been published in this area; here are pointers to a few of the many papers on the topic:

J. S. Uebersax, "Statistical modeling of expert ratings on medical treatment appropriateness," J. Amer. Statist. Assoc., vol. 88, no. 422, pp. 421-427, 1993.

A. Agresti, "Modelling patterns of agreement and disagreement," Statistical Methods in Medical Research, vol. 1, pp. 201-218, 1992.

S. French, "Group consensus probability distributions: a critical survey," in Bayesian Statistics 2, pp. 183-202, Bernardo, DeGroot, Lindley, Smith (eds.), Elsevier Science (North-Holland), 1985.

A. P. Dawid and A. M. Skene, "Maximum likelihood estimation of observer error-rates using the EM algorithm," Applied Statistics, vol. 28, no. 1, pp. 20-28, 1979.

and a paper we had at last year's NIPS:

P. Smyth, M. C. Burl, U. M. Fayyad, P. Perona, P. Baldi, "Inferring ground truth from subjectively-labeled images of Venus," to appear in NIPS 7. (It can be obtained from my home page: http://www-aig.jpl.nasa.gov/mls/home/pjs)

It's interesting to note the differences between combining algorithmic "experts" and human experts: as a function of the input data, the algorithms are usually deterministic, while the humans are usually non-deterministic, i.e., given the same data more than once they can produce different estimates. Humans are certainly more difficult to model than algorithms for combination purposes, since they can "drift" over time, be affected by non-data factors, and so forth - not implying, of course, that combining algorithmic experts is necessarily easy! I have not seen any work on combining both human and algorithmic predictions; I would be interested if anyone knows of such work.

Padhraic Smyth
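The Dawid and Skene reference above estimates per-rater error rates and latent true labels with EM. A compressed, binary-label sketch of that style of model (a toy re-implementation under conditional-independence assumptions, not the authors' code):

import numpy as np

def dawid_skene_binary(L, iters=50):
    """L: (n_items, n_raters) array of 0/1 ratings.
    Returns the posterior probability that each item's true label is 1."""
    q = L.mean(axis=1)  # initialize with the fraction of positive votes
    for _ in range(iters):
        p = q.mean()                                             # prevalence
        a = (q[:, None] * L).sum(0) / q.sum()                    # sensitivity
        b = ((1 - q)[:, None] * (1 - L)).sum(0) / (1 - q).sum()  # specificity
        # E-step: per-item posterior, raters treated as conditionally independent
        like1 = p * np.prod(np.where(L == 1, a, 1 - a), axis=1)
        like0 = (1 - p) * np.prod(np.where(L == 0, b, 1 - b), axis=1)
        q = like1 / (like1 + like0)
    return q

The same machinery applies whether the "raters" are humans or algorithms, which suggests one route to the human-plus-algorithm combination Smyth asks about.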
From jordan at psyche.mit.edu Thu Jun 29 17:15:25 1995
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Thu, 29 Jun 95 17:15:25 EDT
Subject: Combining neural estimators, NetGene
Message-ID: <9506292115.AA04943@psyche.mit.edu>

Just a small clarifying point regarding combining estimators. Just because two algorithms (e.g., stacking and mixtures of experts) end up forming linear combinations of models doesn't necessarily mean that they have much to do with each other. It's not the architecture that counts; it's the underlying statistical assumptions that matter --- the statistical assumptions determine how the parameters get set. Indeed, a mixture of experts model makes the assumption that, probabilistically, a single underlying expert is responsible for each data point. This is very different from stacking, where there is no such mutual exclusivity assumption.

Moreover, the linear combination rule of mixtures of experts arises only if you consider the conditional mean of the mixture distribution, i.e., E(y|x). When the conditional distribution of y|x has multiple modes, which isn't unusual, a mixture model is particularly appropriate, and the linear combination rule *isn't* the right way to summarize the distribution.

In my view, mixtures of experts are best thought of as just another kind of statistical model, on the same level as, say, loglinear models or hidden Markov models. Indeed, they are basically a statistical form of a decision tree model. Note that decision trees embody the mutual exclusivity assumption (by definition of "decision") --- this makes it very natural to formalize decision trees as mixture models. (Cf. decision *graphs*, which don't make a mutual exclusivity assumption and *aren't* handled well within the mixture model framework.)

I would tend to place stacking at a higher level in the inference process, as a general methodology for --- in some sense --- approximating an average over a posterior distribution on a complex model space. "Higher level" just means that it's harder to relate stacking to a specific generative probability model. It's the level of inference at which everybody agrees that no one model is very likely to be correct for any instantiation of any x --- for two reasons: because our current library of possible statistical models is fairly impoverished, and because we have an even more impoverished theory of how all of these models relate to each other (i.e., how they might be parameterized instances of some kind of super-model). This means that --- in our current state of ignorance --- mutual exclusivity (and exhaustivity, the second assumption underlying mixture models) make no sense at the higher levels of inference, and some kind of smart averaging has got to be built in, whether we understand it fully or not.

Mike
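Jordan's point can be written out compactly. In standard notation (a sketch, not drawn from any one paper), a mixture of experts models the conditional density as $P(y|x) = \sum_{j=1}^{m} g_j(x) P(y|x, \theta_j)$ with gating probabilities satisfying $\sum_j g_j(x) = 1$, where $g_j(x)$ is the probability that expert $j$ alone generated the data point --- the mutual exclusivity assumption. The familiar linear combination appears only through the conditional mean, $E(y|x) = \sum_j g_j(x) \mu_j(x)$ with $\mu_j(x) = E(y|x, \theta_j)$; when $P(y|x)$ is multimodal, this mean can fall between the modes and is then a poor summary of the distribution, exactly as noted above.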
From sala at digame.dgcd.doc.ca Thu Jun 29 07:54:16 1995
From: sala at digame.dgcd.doc.ca (Ken Sala)
Date: Thu, 29 Jun 95 07:54:16 EDT
Subject: Job Opening at CRC, Canada
Message-ID: <9506291154.AA04131@digame.doc.ca>

Research Position Available
Development of Hardwired Neural Network Testbed

The Communications Research Center (CRC) Institute, part of Industry Canada, is offering a second-term (two-year) research position at the research engineer level within the Communications Systems Branch of the CRC. The principal goal of this position is the application of commercially available neural network circuitry to the task of real-time pattern classification of communications signals and synthetic aperture radar images. This position is part of a CRC project sponsored by the Canadian Department of National Defense.

The candidate should have, at minimum, a graduate degree in the engineering or physical sciences. Preference will be given to candidates holding a Ph.D. degree in engineering in an area relevant to the project. Knowledge or experience in systems development, machine-level programming (C/C++), integrated circuit design and analysis, and neural network theory and application is desirable. An ability to work independently and to communicate effectively is required. Preference will be given to candidates who hold Canadian citizenship.

The CRC is located in suburban Ottawa on a large site shared with DND research facilities (Defense Research Establishment Ottawa) and with the Canadian Space Agency.

Interested candidates may reply to:

Dr. K. Sala
Communications Research Center
P.O. Box 11490, Stn. `H'
Ottawa, ON K2H 8S2
Fax: (613) 990-8369
e-mail: sala at digame.dgcd.doc.ca

Candidates must include: (1) a resume containing, at a minimum, educational background, work experience, and a complete list of publications; (2) a firm indication of the earliest date of availability; and (3) current citizenship status. A phone number, fax number, and e-mail address by which the applicant can be contacted should be supplied with the information above.

From scheler at informatik.tu-muenchen.de Thu Jun 29 04:58:12 1995
From: scheler at informatik.tu-muenchen.de (Gabriele Scheler)
Date: Thu, 29 Jun 1995 10:58:12 +0200
Subject: Two preprints on Conn. NLP
Message-ID: <95Jun29.105816met_dst.42340@papa.informatik.tu-muenchen.de>

Dear connectionists,

two preprints on connectionist and hybrid approaches to NLP are available from our ftp server:

ftp flop.informatik.tu-muenchen.de
directory: pub/articles-etc
scheler_aspect.ps.gz
scheler_hybrid.ps.gz

-------------------------------------------------------------------------------
Dr. Gabriele Scheler                 phone: +49-89-2105-8476
Institut für Informatik              fax:   +49-89-2105-8207
Technische Universität München
Arcisstr. 21                         email: scheler at informatik.tu-muenchen.de
D-82090 München
Germany
--------------------------------------------------------------------------------

Titles and Abstracts:

scheler_hybrid.ps.gz
------------------------------------------------------------------
A Hybrid Model of Semantic Inference
(to appear: 4th Conference on Cognitive Science in Natural Language Processing, Dublin, Ireland, 1995)

Gabriele Scheler / Johann Schumann
Institut für Informatik
Technische Universität München
e-mail: scheler, schumann at informatik.tu-muenchen.de

keywords: NLP, hybrid systems, automated theorem proving, text understanding, temporal cognition

Abstract (Conclusion):
We have built a system for the interpretation of tense and aspect, consisting of a neural network component, which translates sentences into semantic representations, and a theorem prover, which proves inferences between logical forms. An interesting question that has been partly answered by this research is whether atomic features can indeed mediate between syntactic structure and cognitive structure, which are both complex. It was shown that complex logical representations can be built from an atomic feature representation. The implications of this approach for the learning of natural language categories and for the interface between natural language and cognition could be far-reaching. For instance, atomic features may be seen as a biologically simple way of linking structures (i.e., natural language morphology and temporal cognition) which are complex in different ways. However, this approach needs to be carried over to other linguistic domains (e.g., determiners, plural phenomena, mood, prepositions or lexical meaning) to explore its possibilities further. Logic is probably not the implementation medium in human brains; logical representations must be seen as providing only a meta-theory of cognition. In contrast to other approaches, we use full first-order logic as the representation medium. This allows an open set of inferences which can be applied to various tasks. These inferences can also be used to construct a closed relational set, i.e., a temporal "scenario".

------------------------------------------------------------------
scheler_aspect.ps.gz

Learning the Semantics of Aspect
(to appear: D. Jones (ed.),
New Methods in Language Processing, University College London Press)

Gabriele Scheler

The main point of this paper is to show how semantic features describing aspectual meanings can be extracted from a syntactic representation. Aspectual meanings are represented as sets of features in an interlingua. The goal is to translate English aspectual categories into Russian ones. This is realized by a specialized language processing module, which is based on the concept of vertical modularity. The results of supervised learning of syntactic-semantic correspondences using standard back-propagation show that both learning and generalization to new patterns are successful. Furthermore, the correct generation of Russian aspect from the automatically created semantic representations is demonstrated. The results are relevant to machine translation in a hybrid systems approach and to the study of linguistic category formation.

--------------------------------------------------------------------------------
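As an illustration of the kind of syntactic-to-semantic mapping the second abstract describes, here is a minimal back-propagation sketch; the feature dimensions, the single hidden layer, and the random data are placeholders, not the paper's actual setup:

import numpy as np

rng = np.random.default_rng(0)
n_syn, n_hid, n_sem = 20, 10, 8      # placeholder feature dimensions

# toy data: binary syntactic feature vectors -> binary aspectual features
X = rng.integers(0, 2, (100, n_syn)).astype(float)
T = rng.integers(0, 2, (100, n_sem)).astype(float)

W1 = rng.normal(0, 0.1, (n_syn, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_sem))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):                # standard back-propagation, squared error
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    dY = (Y - T) * Y * (1 - Y)       # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)   # hidden-layer delta
    W2 -= 0.1 * H.T @ dY / len(X)
    W1 -= 0.1 * X.T @ dH / len(X)

On real data, the inputs would be the syntactic encoding and the targets the interlingua aspect features, trained exactly as above.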