From verleysen at dice.ucl.ac.be Fri Apr 1 10:52:45 1994
From: verleysen at dice.ucl.ac.be (verleysen@dice.ucl.ac.be)
Date: Fri, 1 Apr 1994 17:52:45 +0200
Subject: Copernicus project with Central and Eastern European Countries
Message-ID: <9404011549.AA11956@ns1.dice.ucl.ac.be>

Dear Colleagues,

We are currently setting up a project to enhance the exchange of information in the field of neural networks between Central and Eastern European countries and Western European ones. This project will be submitted as a 'Concerted Action' of the Copernicus programme to the Commission of the European Communities.

We are looking for additional partners from Central and Eastern European countries, to enlarge this proposal to the greatest possible number of institutions; the countries which may be involved are: Albania, Bulgaria, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovak Republic and Slovenia. Newly Independent States of the former Soviet Union may also participate.

If your institution is interested in participating in this proposal, or if you know people from these countries who could be interested, please send names and addresses of contact persons (with fax and E-mail if possible) to: verleysen at dice.ucl.ac.be

We will then send all necessary material to join this proposal. The time schedule for submitting the proposal is very tight; we should thus be in contact with each possible partner BEFORE APRIL 6th. Please don't hesitate to contact us for any information. Thank you in advance.

Michel Verleysen

=====================================================
Michel Verleysen
Universite Catholique de Louvain
Microelectronics Laboratory
3, pl. du Levant
B-1348 Louvain-la-Neuve
Belgium
tel: +32 10 47 25 51
fax: +32 10 47 86 67
E-mail: verleysen at dice.ucl.ac.be
=====================================================

From 100322.250 at CompuServe.COM Sun Apr 3 15:21:09 1994
From: 100322.250 at CompuServe.COM (Johannes C.
Scholtes)
Date: 03 Apr 94 15:21:09 EDT
Subject: Workshops on Neural Networks and Information Retrieval in Amsterdam
Message-ID: <940403192109_100322.250_BHB57-1@CompuServe.COM>

Preliminary Program

Neural Networks and Information Retrieval in a Libraries Context
Amsterdam, The Netherlands
Friday June 24, 1994 and Friday September 16, 1994

M.S.C. Information Retrieval Technologies BV, based in Amsterdam, the Netherlands, is currently undertaking a study on Neural Networks and Information Retrieval in a Libraries Context, in collaboration with the Department of Computational Linguistics of the University of Amsterdam and the Department of Information Technology and Information Science at Amsterdam Polytechnic. This study is funded by the European Commission as a complementary measure under the Libraries Programme.

In this study the general application of artificial neural net (ANN) technology to information retrieval (IR) problems is investigated in a libraries context. Typical applications of this technology are advanced interface design, current awareness, SDI, fuzzy search and concept formation.

In order to discuss and disseminate the results obtained through this study, two one-day workshops will be organized by M.S.C. Information Retrieval Technologies BV, the first one after compilation of the State of the Art Report and the second one after completion of the prototyping and experimentation phase. During both workshops, there will be ample room for discussion of how to commercialise such applications of ANNs in a libraries context. Both workshops are open to participants from other organizations, commercial and academic, that are interested in various applications of ANNs in existing libraries systems.
For whom? Of interest to:
- Computer Companies
- Information Management and Supply Companies
- Government Agencies
- Libraries
- Universities and Polytechnics

that are interested in:
- Neural Networks
- Information Retrieval
- Library Sciences
- Natural Language Processing
- Advanced Computer Science
- Data Compression

for applications such as:
- Current Awareness
- Selective Dissemination of Information (SDI)
- Information Filtering
- Automatic Contents-Based Information Distribution
- Categorization
- Advanced Interface Design
- Fuzzy Retrieval (information recognized by Optical Character Recognition and Speech Recognition)
- Retrieval Generalization
- Thesaurus Generation
- Information Compression
- Juke-box staging

General Information

Costs per participant for both days:
Commercial companies: Dfl. 950,-
Universities and non-profit institutions (*): Dfl. 500,-
Students (*): Dfl. 150,-
(*) A letter from the university or non-profit institution must be shown at registration.

These costs include:
- Workshop Proceedings
- State of the Art report on Neural Networks in Information Retrieval as composed by MSC
- Achievements report on Neural Networks in Information Retrieval as composed by MSC
- Ongoing coffee & tea
- Lunch
- Dinner
- Future mailings on progress
- Limited availability of travel grants for students (please apply)

All other expenses, such as travel, hotels and short stays, are not included in the fee.

Payment

The following payment methods are accepted:
1. Credit cards
2. Prepayment by bank
3. Personal cheques

More information:
M.S.C. Information Retrieval Technologies BV
Dr Johannes C.
Scholtes
Dufaystraat 1
1075 GR AMSTERDAM
the Netherlands
Telephone: +31 20 679 4273
Fax: +31 20 6710 793
Internet: 100322.250 at compuserve.com or scholtes at msc.mhs.compuserve.com
Compuserve: MHS: SCHOLTES at MSC or 100322,250

Background & Introduction

Recent research on artificial neural networks (ANN) in the field of pattern recognition and pattern classification has provided successful alternatives to traditional techniques. Products for optical character recognition (OCR), speech recognition, hand-written character recognition and prediction of non-linear time series are good examples of the commercialization of these ANN techniques. So far, the European Commission has funded more than 40 projects of different sizes under the ESPRIT and other programmes which involve research on or the application of ANN technology.

The task of Information Retrieval (IR), that is, the matching of a large number of documents against a query, can also be seen as a pattern recognition or pattern classification task. Therefore, there have been several approaches to the application of ANN in IR in order to increase the quality of the retrieval process. Despite the theoretical and practical evidence that ANN are good tools for pattern recognition tasks, it is still an open question whether they are appropriate tools within the specific domain of Bibliographic Information Retrieval. Apart from some minor studies, it seems no real attempt has been made up until now to integrate an ANN as a main component of a bibliographic information retrieval system or an on-line library catalogue (OPAC). It is therefore not clear whether and how ANN techniques can be combined with more "classical" methods, for instance rule-based or statistical approaches. By the same token, it is not clear either to what extent existing OPACs could benefit from ANN technology.
Objectives

The objectives of this study are:
- to ascertain the State of the Art of the application of Artificial Neural Net (ANN) technology to Information Retrieval (IR), with particular emphasis on bibliographic information in a libraries context;
- to assess the (potential) quality of ANN-based approaches to IR in this particular domain of interest, in comparison with traditional practices. Here "quality" must be understood in terms of both (measurable) efficiency and practical benefits;
- to stimulate interest in the practical application of ANN technology to bibliographic information retrieval in a libraries context.

Information Retrieval

It can be stated that Information Retrieval (IR) is the ultimate combination of Natural Language Processing (NLP) and Artificial Intelligence (AI). On the one hand, there is an enormous amount of NLP data that needs to be processed and understood to return the proper information to the user. On the other hand, one needs to understand what the user intends with his or her query, given the context of the other queries and some kind of user model. Most of these systems still use techniques that were developed over thirty years ago and that implement nothing more than a global surface analysis of the textual (layout) properties. No deep structure whatsoever is incorporated in the decision whether or not to retrieve a text.

There is one large dilemma in IR research. The data collections are so incredibly large that any method other than a global surface analysis would fail. However, such a global analysis can never implement a contextually sensitive method to restrict the number of possible candidates returned by the retrieval system.

Information retrieval can also be a very frustrating area of research. Whenever one invents a new model, it is difficult to show that it works better (qualitatively and quantitatively) than any previous model. The addition of new dependencies often results in much too slow a system.
Systems such as Salton's SMART have existed for over 30 years without any serious competition. The field of information retrieval would be greatly indebted to a method that could incorporate more context without slowing down. Since computers are only capable of processing numbers within reasonable time limits, such a method should be based on vectors of numbers rather than on symbol manipulations. This is exactly where the challenge lies: on the one hand keep up the speed, and on the other incorporate more context.

Artificial Neural Networks

The connectionist approach offers a massively parallel, highly distributed and highly interconnected solution for the integration of various kinds of knowledge, with preservation of generality. It may be that connectionism, or neural networks (despite all currently unsolved questions concerning learning, stability, recursion, firing rules, network architecture, etc.), will contribute to research in natural-language processing and information retrieval. Distributed data representation may solve many of the open problems in IR by introducing a powerful and efficient knowledge integration and generalization tool. However, distributed data representation and self-organization trigger new problems that should be solved in an elegant manner.

Current Problems in Information Retrieval

The main objectives of current IR research can be characterised as the search for systems that exhibit adaptive behaviour, interactive behaviour and transparency. More specifically, these models should implement properties for:
- understanding incomplete queries or making incomplete matches,
- understanding vague user intentions,
- the ability to generalise over queries as well as over query results,
- adapting to the needs of an evolving user (model),
- allowing dynamic relevance feedback,
- aiding the user to browse intelligently through the data, and
- the addition of (language) context sensitivity.
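The "vectors of numbers" approach mentioned above can be sketched in a few lines: represent both query and documents as term-count vectors and rank by cosine similarity. This is a generic toy illustration, not part of the MSC study; the function names and sample documents are invented for the example.

```python
# Toy sketch of vector-based retrieval: term-count vectors, cosine ranking.
# Illustrative only; not the system under study.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a short text."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents):
    """Rank documents by similarity to the query, best first."""
    qv = vectorize(query)
    return sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)

docs = ["neural networks for information retrieval",
        "library catalogue maintenance schedule",
        "pattern recognition with neural networks"]
print(retrieve("neural information retrieval", docs)[0])
```

The speed of such a scheme comes precisely from reducing matching to numeric vector operations; the open question in the text is how to add context without losing that property.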
Different Approaches in Information Retrieval and Neural Networks

Two main directions of neural-network-related research in information retrieval can be observed. First, there are relatively static databases that are investigated with a dynamic query (free text search, also known as document retrieval systems). Next, there are the more dynamic databases that need to be filtered with respect to a relatively static query (the filtering problem, also known as current awareness systems and Selective Dissemination of Information, SDI). In the first case the data can be preprocessed due to their static character. In the second case, the amounts of data are so large that there is no time whatsoever for a preprocessing phase. A direct context-sensitive hit-and-go must be made.

Early neural models adapt well to the paradigms currently used in information retrieval. Index terms can be replaced by processing units, hyperlinks by connections between units, and network training resembles the index normalisation process. However, these models do not adapt well to the general notion of neural networks. In addition, it is difficult to imagine what to teach a neural information retrieval system if it is used as a supervised training algorithm. The address space will almost always be too limited due to the large amounts of data to be processed. A combination of structured (query, retrieved document numbers) pairs does not seem plausible either, considering the restricted amount of memory of (current) neural network technology. Nevertheless, most of the neural IR models found in the literature are based on these principles.

Also problematic are the so-called clustering networks. Due to the large amounts of data in free text databases, clustering is very expensive and is therefore considered irrelevant in changing information retrieval environments. More interesting are the unsupervised, associative-memory types of models, which can be used to implement a specific pattern matching task.
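The associative-memory type of model mentioned above can be illustrated with a minimal Hopfield-style sketch: a single +/-1 pattern (think of it as a stored interest profile) is written into Hebbian outer-product weights and then recalled from a partially corrupted probe. This is a generic textbook construction, not the project's system; the pattern size and corruption level are arbitrary choices.

```python
# Minimal sketch of an associative (Hopfield-style) memory: a stored
# pattern is recalled from a corrupted input. Generic illustration only.
import numpy as np

def store(pattern):
    """Hebbian outer-product weights for one +/-1 pattern, zero diagonal."""
    w = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(w, 0.0)
    return w / len(pattern)

def recall(w, probe):
    """One synchronous update: sign of the weighted input sums."""
    return np.where(w @ probe >= 0, 1, -1)

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=32)   # the stored "interest" pattern
probe = stored.copy()
probe[:6] *= -1                          # corrupt 6 of the 32 bits
recovered = recall(store(stored), probe)
print(int((recovered == stored).sum()))  # number of bits recovered
```

In this single-pattern case, one synchronous update recovers the stored pattern exactly whenever fewer than half the bits are flipped, which is why the memory cost scales with the stored profile rather than with the database.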
This type of neural network can be particularly useful in a filtering application. Here, the memory demands of the neural network only need to accommodate the query (or interest profile), and not the entire database. It is in this area that neural networks are expected to be most useful and relevant for information retrieval. Especially topics such as fuzzy retrieval, current awareness, SDI, concept formation and advanced interface design are within the scope of the project. However, input from the workshops is very important for the final determination of the direction of the research.

Program

Day 1: June 24, 1994
9.15-9.30 Welcome and Introduction - Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V.
9.30-11.00 Tutorial on Neural Networks (Back Propagation, Kohonen Feature Maps) - Dr Ir Johan Henseler, Forensic Laboratories, Head of Section Computer Criminality
11.00-11.15 Break
11.15-12.30 Information Retrieval Applications in Libraries - Dr E. Sieverts, Professor at Amsterdam Polytechnic, Library Program
12.30-13.30 Lunch
13.30-15.00 Presentation of Findings & State of the Art Report
15.00-15.15 Break
15.15-16.00 Directions for (Commercial) Applications - Dr Ir Johannes C. Scholtes
16.00-17.00 Panel Discussion
17.00-18.00 Reception
19.00-... Dinner and evening program

Day 2: September 16, 1994
9.15-9.30 Welcome and Introduction - Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V.
9.30-11.00 Achievements - Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V., & Dr E. Sieverts, Professor at Amsterdam Polytechnic, Library Program
11.00-12.30 Hands-on demonstrations
12.30-13.30 Lunch
13.30-15.00 Problem Issues - Dr E. Sieverts, Professor at Amsterdam Polytechnic, Library Program
15.00-15.15 Break
15.15-16.00 Commercial Implications - Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V.
16.00-17.00 Panel Discussion
17.00-18.00 Reception
19.00-...
Dinner and evening program

During the day, demos of the prototypes will be available to the participants of the workshop. Each demo will be guided by a specialist who demonstrates the software.

From erikds at reks.uia.ac.be Sat Apr 2 15:33:58 1994
From: erikds at reks.uia.ac.be (Erik De Schutter)
Date: Sat, 2 Apr 94 22:33:58 +0200
Subject: Postdoc positions available
Message-ID: <9404022033.AA20654@kuifje>

Please post and forward (not at Caltech).

TWO POSTDOCTORAL POSITIONS IN COMPUTATIONAL NEUROSCIENCE

Join a new multi-disciplinary team at the University of Antwerp, Belgium, to explore functional properties of the cerebellar cortex. Projects include detailed modeling of calcium stores and metabotropic receptors in a compartmental model of a Purkinje cell (see J. Neurophysiol. 71, 375-400 and 401-419, 1994), and the creation of a large-scale realistic network model of cerebellar cortex based on compartmental models.

Candidates should have experience in one or more of three fields: computational neuroscience (preferably compartmental modeling or using GENESIS), cerebellar physiology, or single-cell physiology (preferably calcium-imaging experience). All candidates should expect to do modeling work only; training will be provided if necessary. Positions are available for 2 to 3 years, starting autumn 1994. Salary commensurate with experience. Funding is independent of nationality.

Applicants must send a curriculum vitae and the names of three references to:
Dr. Erik De Schutter
Dept. of Medicine
University of Antwerp - UIA
B2610 Antwerp
Belgium
fax: ++32-3-8202541
e-mail: erikds at reks.uia.ac.be (preferred medium)

From drb at ivan.csc.ncsu.edu Mon Apr 4 13:25:04 1994
From: drb at ivan.csc.ncsu.edu (Dr.
Dennis Bahler)
Date: Mon, 4 Apr 94 13:25:04 EDT
Subject: postdoc position announcement
Message-ID: <199404041725.AA01752@ivan.csc.ncsu.edu>

==============================================================================
POSTDOCTORAL POSITION
NATIONAL INSTITUTE OF ENVIRONMENTAL HEALTH SCIENCES

The National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina has an opening for a postdoctoral research position in computer science. The person in this position will join an existing research team studying the application of methods from artificial intelligence to the prediction of risks from exposure to chemical agents. Experience with inductive learning methods, decision trees and neural networks is beneficial. Minority candidates and women are encouraged to apply. Appointees must be U.S. citizens or permanent U.S. residents.

A curriculum vitae and three letters of reference should be sent to:
Dr. Christopher J. Portier
Laboratory of Quantitative and Computational Biology
National Institute of Environmental Health Sciences
PO Box 12233, Mail Drop A3-06
Research Triangle Park, North Carolina 27709

Curricula vitae will be accepted via e-mail to portier at niehs.nih.gov. Reference letters can be sent by e-mail and followed by hard copy. Candidates will be interviewed after April 21, 1994. Applications will be accepted until a candidate is chosen.
==============================================================================  From payman at uw-isdl.ee.washington.edu Mon Apr 4 14:43:25 1994 From: payman at uw-isdl.ee.washington.edu (payman@uw-isdl.ee.washington.edu) Date: Mon, 4 Apr 1994 11:43:25 -0700 Subject: TR available: Fourier Analysis and Filtering of a Single Hidden Layer Perceptron Message-ID: <199404041843.LAA29008@graham.ee.washington.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/marks.fourier.ps.Z The following paper is now available from the neuroprose repository: Fourier Analysis and Filtering of a Single Hidden Layer Perceptron Robert J. Marks II & Payman Arabshahi Department of Electrical Engineering University of Washington FT-10 Seattle, WA 98195 USA This is an invited paper to appear in the Proceedings of the International Conference on Artificial Neural Networks (IEEE/ENNS) Sorrento, Italy, May 1994. Abstract We show that the Fourier transform of the linear output of a single hidden layer perceptron consists of a multitude of line masses passing through the origin. Each line corresponds to one of the hidden neurons and its slope is determined by that neuron's weight vector. We also show that convolving the output of the network with a function can be achieved simply by modifying the shape of the sigmoidal nonlinearities in the hidden layer. To retrieve the file: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: your email address ftp> cd pub/neuroprose ftp> binary ftp> get marks.fourier.ps.Z ftp> bye unix> uncompress marks.fourier.ps.Z unix> lpr marks.fourier.ps  From kinder at informatik.tu-muenchen.de Wed Apr 6 07:52:04 1994 From: kinder at informatik.tu-muenchen.de (Margit Kinder) Date: Wed, 6 Apr 1994 13:52:04 +0200 Subject: paper available Message-ID: <94Apr6.135205met_dst.42263@papa.informatik.tu-muenchen.de> The following paper is now available via anonymous ftp from the neuroprose archive. 
Although it has already been published in "Neural Networks 6/93", I have put it into this archive since I am still receiving many requests for it. The manuscript is 10 pages.

---------------------------------------------------------------------
Classification of Trajectories - Extracting Invariants with a Neural Network
Margit Kinder and Wilfried Brauer
Technische Universität München

A neural classifier of planar trajectories is presented. There already exists a large variety of classifiers that are specialized in particular invariants contained in a trajectory classification task, such as position invariance, rotation invariance, size invariance, etc. That is, there exist classifiers specialized in recognizing trajectories, e.g., independently of their position. The neural classifier presented in this paper is not restricted to certain invariants in a task: the neural network itself extracts the invariants contained in a classification task by assessing only the trajectories. The trajectories need to be given as a set of points. No additional information must be available for training, which saves the designer from determining the needed invariants himself. Besides its applicability to real-world problems, such a more general classifier is also cognitively plausible: in assessing trajectories for classification, human beings are able to find class-specific features, no matter what kinds of invariants they are confronted with. Invariants are easily handled by ignoring unspecific features.
-----------------------------------------------------------------------
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/kinder.extracting_invariants.ps.Z

Margit Kinder
-----------------------------------------------------------------------
Margit Kinder                  e-mail: kinder at informatik.tu-muenchen.de
Fakultät für Informatik
Technische Universität München   Tel: +49 89 2105 8476
80290 Munich, Fed. Rep.
of Germany Fax: +49 89 2105 8207  From soodak%cn.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU Wed Apr 6 05:42:56 1994 From: soodak%cn.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU (Bob Soodak) Date: Wed, 6 Apr 94 05:42:56 EDT Subject: Paper available: Simulation of visual cortex... Message-ID: <9404060942.AA16676@cn.rockefeller.edu> The following reprint is now available: FTP-host: archive.cis.ohio-state.edu FTP-file: /pub/neuroprose/soodak.suture.ps.Z Author: R. Soodak Rockefeller University Title: Simulation of visual cortex development under lid-suture conditions: Enhancement of response specificity by a reverse-Hebb rule in the absence of spatially patterned input Size: 15 pages Published in Biological Cybernetics 70, 303-309 (1994) (Due to copyright restrictions the manuscript could not be posted prior to publication.) Abstract: In this report, I show that a reverse-Hebb synaptic modification rule leads to the enhancement of response specificity of simulated visual cortex neurons in the absence of spatial patterning of the afferent activity. Although it is clear that receptive fields in the visual cortex can be modified by experience, many studies have shown a substantial increase of response specificity in cats deprived of pattern vision by lid suture, leading some to conclude that receptive field properties are essentially hard-wired. The hard-wired vs. experience-dependent controversy can be resolved by assuming that while Hebb-type plasticity is responsible for developmental synaptic changes, the organization of presynaptic activity which exists under conditions of visual deprivation is sufficient to drive the neurons towards greater specificity (Linsker 1986a-c; Miller 1989, 1992; Miller et al. 1989). 
As a reverse-Hebb rule enhances response specificity by balancing the push-pull system of ON- and OFF-center afferents, the sufficient condition is that the activity of ON- and OFF-center retinal ganglion cells be negatively correlated, a condition which will be met by diffuse illumination as seen through sutured eyelids. Unlike the models of Linsker and Miller and colleagues, which are based on a standard-Hebb rule, the model presented here does not require the presence of a "Mexican hat" spatial patterning of the afferent correlations, which has not been observed experimentally. To retrieve the file: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: your full email address ftp> cd pub/neuroprose ftp> binary ftp> get soodak.suture.ps.Z ftp> bye unix> uncompress soodak.suture.ps.Z unix> lpr soodak.suture.ps For hard copy please send name and address in a form suitable for use as a mailing label to: Robert Soodak Rockefeller Univ. 1230 York Ave. New York, NY 10021 USA  From mwitten at CHPC.UTEXAS.EDU Tue Apr 5 04:40:35 1994 From: mwitten at CHPC.UTEXAS.EDU (mwitten@CHPC.UTEXAS.EDU) Date: Tue, 5 Apr 1994 14:40:35 +0600 Subject: COMPMED 94 FINAL SCHEDULE Message-ID: <9404051940.AA08550@morpheus> FINAL PROGRAM ANNOUNCEMENT FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE AND PUBLIC HEALTH 24-28 April 1994 Hyatt on the Lake, Austin, Texas The final program for the First World Congress On Computational Medicine and Public Health has now been set. Over 200 speakers will be presenting work in a variety of applications areas related to medicine and public health. Registration is still open for attendees. Registration details and/or a copy of the schedule at a glance, schedule-in-detail may be requested by sending an email request to compmed94 at chpc.utexas.edu or by calling 512-471-2472 or by faxing 512-471-2445 There is no ftp form of the conference schedule due to the size of the file. We will be happy to fax/send a copy to anyone who requests it. 
The conference proceedings will appear as a series of volumes published by World Scientific. If you are interested in possibly submitting a paper for the proceedings, please contact mwitten at chpc.utexas.edu or call 512-471-2457.

The overwhelming response to this congress has already justified holding a second world congress in the future. The tentative schedule is to hold it in 3 years. If you are interested in participating in the 2nd World Congress On Computational Medicine and Public Health, please contact Dr. Matthew Witten, Congress Chair, mwitten at chpc.utexas.edu

From epdp at big.att.com Wed Apr 6 16:18:48 1994
From: epdp at big.att.com (Edwin Pednault)
Date: Wed, 6 Apr 94 16:18:48 EDT
Subject: Workshop on Learning and Descriptional Complexity
Message-ID: <9404062018.AA11511@big.l1135.att.com>

Workshop on Applications of Descriptional Complexity to Inductive, Statistical, and Visual Inference

Sunday, July 10, 1994
Rutgers University
New Brunswick, New Jersey

Held in conjunction with the Eleventh International Conference on Machine Learning (ML94, July 11-13, 1994) and the Seventh Annual Conference on Computational Learning Theory (COLT94, July 12-15, 1994).

Interest in the minimum description-length (MDL) principle is increasing in the machine learning and computational learning theory communities. One reason is that MDL provides a basis for inductive learning in the presence of noise and other forms of uncertainty. Another reason is that it enables one to combine and compare different kinds of data models within a single unified framework, allowing a wide range of inductive-inference problems to be addressed. Interest in the MDL principle is not restricted to the learning community. Inductive-inference problems arise in one form or another in many disciplines, including information theory, statistics, computer vision, and signal processing.
In each of these disciplines, inductive-inference problems have been successfully pursued using the MDL principle and related descriptional complexity measures, such as stochastic complexity, predictive MDL, and algorithmic probability.

The purpose of this workshop is twofold: (1) to provide an opportunity for researchers in all disciplines involved with descriptional complexity to meet and share results; and (2) to foster greater interaction between the descriptional complexity community and the machine learning and computational learning theory communities, enabling each group to benefit from the results and insights of the others. To meet these objectives, the format of the workshop is designed to maximize opportunities for interaction among participants. In addition, a tutorial on descriptional complexity will be held prior to the workshop to encourage broad participation. The tutorial and workshop may be attended together or individually.

The topics of the workshop will include, but will not be limited to:
- Applications of descriptional complexity to all forms of inductive inference, including those in statistics, machine learning, computer vision, pattern recognition, and signal processing.
- Rates of convergence, error bounds, distortion bounds, and other convergence and accuracy results.
- New descriptional complexity measures for inductive learning.
- Specializations and approximations of complexity measures that take advantage of problem-specific constraints.
- Representational techniques, search techniques, and other application and implementation related issues.
- Theoretical and empirical comparisons between different descriptional complexity measures, and with other learning techniques.

WORKSHOP FORMAT

The workshop will be held on Sunday, July 10, 1994. Attendance will be open. However, those who wish to attend should contact the organizers prior to the workshop at the address below.
To maximize the opportunity for interaction, the workshop will consist primarily of poster presentations, with a few selected talks and a moderated wrap-up discussion. Posters will be the primary medium for presentation. This medium was chosen because it encourages close interaction between participants, and because many more posters can be accommodated than talks. Both factors should encourage productive interaction across a wide range of topics despite the constraints of a one-day workshop. Depending on the number and quality of the submissions, arrangements may be made to publish a book of papers after the workshop under the auspices of the International Federation for Information Processing Working Group 14.2 on Descriptional Complexity. SUBMISSIONS Posters will be accepted on the basis of extended abstracts that should not exceed 3000 words, excluding references (i.e., about six pages of text, single spaced). Separate one-page summaries should accompany the submitted abstracts. The summary pages of accepted abstracts will be distributed to all interested participants prior to the workshop, and should be written accordingly. Summaries longer than one page will have only their first page distributed. Six copies of each extended abstract and two copies of each summary page must be received at the address below by May 18, 1994. Acceptance decisions will be made by June 10, 1994. Copies of the summary pages of accepted abstracts will be mailed to all those who submit abstracts and to those who contact the organizers before the decision date. Because we expect the audience to be diverse, clarity of presentation will be a criterion in the review process. Contributions and key insights should be clearly conveyed with a wide audience in mind. Authors whose submissions are accepted will be expected to provide the organizers with full-length papers or revised versions of their extended abstracts when they arrive at the workshop. 
These papers and abstracts will be used for the publisher's review. Authors may wish to bring additional copies to distribute at the workshop. IMPORTANT DATES May 18 Extended abstracts due June 10 Acceptance decisions made, summary pages distributed July 10 Workshop PROGRAM COMMITTEE Ed Pednault (Chair), AT&T Bell Laboratories. Andrew Barron, Yale University. Ron Book, University of California, Santa Barbara. Tom Cover, Stanford University. Juris Hartmanis, Cornell University. Shuichi Itoh, University of Electro-Communications. Jorma Rissanen, IBM Almaden Research Center. Paul Vitanyi, CWI and University of Amsterdam. Detlef Wotschke, University of Frankfurt. Kenji Yamanishi, NEC Corporation. CONTACT ADDRESS Ed Pednault AT&T Bell Laboratories, 4G-318 101 Crawfords Corner Road Holmdel, NJ 07733-3030 email: epdp at research.att.com tel: 908-949-1074 ----------------------------------------------------------------------- Tutorial on Descriptional Complexity and Inductive Learning One of the earliest theories of inductive inference was first formulated by Solomonoff in the late fifties and early sixties. It was expanded in subsequent and, in some cases, independent work by Solomonoff, Kolmogorov, Chaitin, Wallace, Rissanen, and others. The theory received its first citation in the AI literature even before its official publication. It provides a basis for learning both deterministic and probabilistic target concepts, and it establishes bounds on what is computationally learnable in the limit. Over time, this theory found its way into several fields, including probability theory and theoretical computer science. In probability theory, it provides a precise mathematical definition for the notion of a random sample sequence. 
In theoretical computer science, it is being used among other things to prove lower bounds on the computational complexity of problems, to analyze average-case behavior of algorithms, and to explore the relationship between the succinctness of a representation and the computational complexity of algorithms that employ that representation. Interest in the theory diminished in artificial intelligence in the mid to late sixties because of the inherent intractability of the theory in its most general form. However, research in the seventies and early eighties led to several tractable specializations developed expressly for inductive inference. These specializations in turn led to applications in many disciplines, including information theory, statistics, machine learning, computer vision, and signal processing. The body of theory as it now stands has developed well beyond its origins in inductive inference, encompassing algorithmic probability, Kolmogorov complexity, algorithmic information theory, generalized Kolmogorov complexity, minimum message-length inference, the minimum description-length (MDL) principle, stochastic complexity, predictive MDL, and related concepts. It is being referred to collectively as descriptional complexity to reflect this evolution. This tutorial will provide an introduction to the principal concepts and results of descriptional complexity as they apply to inductive inference. The practical application of these results will be illustrated through case studies drawn from statistics, machine learning, and computer vision. No prior background will be assumed in the presentation other than a passing familiarity with probability theory and the theory of computation. Attendees should expect to gain a sound conceptual understanding of descriptional complexity and its main results. The tutorial will be held on Sunday, July 10, 1994. 
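As a rough illustration of the two-part minimum description-length idea the tutorial covers, the following toy sketch (not part of the original announcement; the model classes and coding scheme are illustrative assumptions) chooses between a fair-coin model and a Bernoulli model with an estimated bias for a binary sequence, by comparing total code lengths in bits:

```python
import math

def bits_fair(seq):
    # Fixed fair-coin model: one bit per symbol, no parameters to transmit.
    return float(len(seq))

def bits_biased(seq):
    # Two-part code: first transmit the count of ones (log2(n+1) bits
    # suffice to pick one of the n+1 possible counts), then transmit the
    # data under the Bernoulli model with the estimated parameter.
    n = len(seq)
    k = sum(seq)
    model_cost = math.log2(n + 1)
    p = k / n
    if p in (0.0, 1.0):
        data_cost = 0.0  # sequence is fully determined by the count
    else:
        data_cost = -sum(math.log2(p if s == 1 else 1.0 - p) for s in seq)
    return model_cost + data_cost

def mdl_choice(seq):
    # MDL principle: prefer the model giving the shorter total description.
    return "biased" if bits_biased(seq) < bits_fair(seq) else "fair"
```

On a run of twenty ones the biased model pays only the parameter cost and wins; on a balanced alternating sequence the parameter cost buys nothing, so the fair-coin model gives the shorter description.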
-----------------------------------------------------------------------  From noordewi at cs.rutgers.edu Wed Apr 6 18:32:34 1994 From: noordewi at cs.rutgers.edu (Michiel (Mick) Noordewier) Date: Wed, 6 Apr 94 18:32:34 EDT Subject: CFP ML94 Workshop on Molecular Biology Message-ID: <9404062232.AA19226@binnacle.rutgers.edu>

Computational Molecular Biology and Machine Learning Workshop
Machine Learning Conference 1994

Program Committee: Michiel Noordewier (Rutgers University) Lindley Darden (Rockefeller University)

Description and Focus
---------------------

This workshop will focus on the application of methods from machine learning to the promising problem area of molecular biology. A goal is to consolidate a machine learning faction in the emerging field of computational biology. One problem area is genetic sequence search and analysis, together with protein structure prediction. Biological sequences have become a ready source of sample data for machine learning approaches to classification. Recently such sequences have also provided problems for sophisticated pattern recognition paradigms, including those borrowed from computational linguistics, Bayesian methods, and artificial neural networks. This workshop will bring together workers using such diverse approaches, and will focus on the rich set of problems presented by the recent availability of extensive biological sequence information. Another area where ML techniques apply to molecular biology is computational discovery methods. Such methods are employed for forming, ranking, evaluating, and improving hypotheses. Learning strategies using analogies or homologies among molecules or processes from different organisms or species are also of interest. The format of the workshop will be the presentation of short papers followed by panel discussions.
Submission Requirements
-----------------------

Persons wishing to attend the workshop should submit three copies of a 1-2 page research summary including a list of relevant publications, along with a phone number and an electronic mail address. Persons wishing to make presentations at the workshop should submit three copies of a short paper (no more than 10 pages) or extended abstract, in addition to the research summary. All submissions must be received by May 1, 1994. Notification of acceptance or rejection will be mailed to applicants by May 15, 1994. A set of working notes will be distributed at the workshop. Camera-ready copies of papers accepted for inclusion in the working notes of the workshop will be due on June 15, 1994. The timetable is as follows:

Abstracts, papers, etc. due to chair: 1 May
Decisions made, submitters get feedback: 15 May
Final working-note submissions received by chair: 15 June
Workshop date: 10 July, 1994

From vg197 at neutrino.pnl.gov Wed Apr 6 19:01:17 1994 From: vg197 at neutrino.pnl.gov (Sherif Hashem) Date: Wed, 06 Apr 1994 16:01:17 -0700 (PDT) Subject: World Wide Web: Neural Network Home Page Message-ID: <9404062301.AA08111@neutrino.pnl.gov>

**********************************************
* A N N O U N C I N G   A   N E W *
* W O R L D   W I D E   W E B *
* N E U R A L   N E T W O R K *
* H O M E   P A G E *
**********************************************

The World Wide Web (WWW) server at the Pacific Northwest Laboratory is now available for public access. We have created a Neural Network Home Page about the neural network research taking place in our group (Computing and Information Sciences in the Molecular Sciences Research Center).
Our home page is composed of:

- Lists of references to neural network papers in the following areas:
  * Chemical Sensor Analysis
  * Spectroscopic Analysis
  * Chemical Process Control
  * Molecular Modeling
  * Nuclear Science and Engineering
  * Medicine
  * Manufacturing
  * Optical Neurocomputing
- Description of our work with neural networks
- Access to our group's papers in electronic form (currently html)
- Links to other neural network and neuroscience home pages

The current Uniform Resource Locator (URL) for our Neural Network Home Page is:

http://www.msrc.pnl.gov:2080/docs/cie/neural/neural.homepage.html

This is a new home page, and we welcome all constructive comments, suggestions, additions, and hypertext links!

Sherif Hashem and Paul Keller
Pacific Northwest Laboratory
Richland, Washington, USA
phone: (509) 375-6995 (509) 375-2254
fax: (509) 375-6631
e-mail: s_hashem at pnl.gov pe_keller at pnl.gov

From jameel at cs.tulane.edu Thu Apr 7 04:55:17 1994 From: jameel at cs.tulane.edu (Akhtar Jameel) Date: Thu, 7 Apr 1994 03:55:17 -0500 (CDT) Subject: Call for papers Message-ID: <9404070855.AA26312@pegasus.cs.tulane.edu> A non-text attachment was scrubbed...
Name: not available Type: text Size: 5811 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/d761cbec/attachment.ksh

From scheler at informatik.tu-muenchen.de Thu Apr 7 07:46:18 1994 From: scheler at informatik.tu-muenchen.de (Gabriele Scheler) Date: Thu, 7 Apr 1994 13:46:18 +0200 Subject: Announcement Technical Report Message-ID: <94Apr7.134623met_dst.42263@papa.informatik.tu-muenchen.de>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/scheler.generate.ps.Z

The file scheler.generate.ps.Z is now available for copying from the Neuroprose repository:

Multilingual Generation of Grammatical Categories
Gabriele Scheler
Technische Universität München
(15 pages)

also available as Report FKI-190-94 from
Institut für Informatik
TU München
D 80290 München
ftp-host: flop.informatik.tu-muenchen.de
ftp-file: pub/fki/fki-190-94.ps.Z

ABSTRACT: We present an interlingual semantic representation for the synthesis of morphological aspect of English and Russian by standard back-propagation. Grammatical meanings are represented symbolically and translated into a binary representation. Generalization is assessed by test sentences and by a translation of the training sentences of the other language. The results are relevant to machine translation in a hybrid systems approach and to the study of linguistic category formation.
From gupta at prl.philips.co.uk Thu Apr 7 10:45:31 1994 From: gupta at prl.philips.co.uk (Ashok Gupta) Date: Thu, 7 Apr 94 14:45:31 GMT Subject: Bulk Data Types For Architecture Independence - a One Day Meeting in London Message-ID: <1998.9404071445@apollo23.prl.philips.co.uk>

The British Computer Society Parallel Processing Specialist Group (BCS PPSG)
------------------------------------------------------------
** Bulk Data Types for Architecture Independence **
____________________________________________________________
A One Day Open Meeting with Invited Papers
Friday May 20 1994, Institute of Education, London, UK

Invited Speakers:
David Skillicorn, Queen's University, Canada
Murray Cole, Edinburgh University
Grant Malcolm, PRG, Oxford University
Richard Miller, PRG, Oxford University
Stewart Reddaway, Cambridge Parallel Processing

A key factor in the growth of parallel computing is the availability of portable software. An attractive approach to portability lies in the provision of high-level programming constructs over bulk data, and their mapping to parallel architectures under the guidance of an appropriate cost calculus. Bulk operations have been in use since their introduction in APL, but they have become much better understood in recent years. Bird-Meertens theory describes the operations of greatest interest for composed data types and provides a wealth of mathematical laws which can be used to transform and map algorithms for specific parallel architectures. Data types to which this approach has been applied include arrays, relations, lists, and trees. Speakers will describe this strategy for parallel software development, explain the underlying mathematics (at an accessible level), and illustrate the approach. The PPSG, founded in 1986, exists to foster development of parallel architectures, languages and applications and to disseminate information on parallel processing.
Membership is completely open; you do not have to be a member of the British Computer Society. For further information about the group contact either of the following:

Chair: Mr A Gupta
Philips Research Labs
Cross Oak Lane
Redhill
Surrey
UK
RH1 5HA
gupta at prl.philips.co.uk

Membership Secretary: Dr N Tucker
Paradis Consultants
East Berriow
Berriow Bridge
North Hill
nr. Launceston
Cornwall PL15 7NL, UK
paradis at cix.compulink.co.uk

***************************************************************
* Please share this information and display this announcement *
***************************************************************

The British Computer Society Parallel Processing Specialist Group
Booking Form/Invoice
BCS VAT No.: 440-3490-76

Please reserve a place at the Conference on Bulk Data Types for Architecture Independence, London, May 20 1994, for the individual(s) named below.

Name of delegate    BCS membership no.    Fee    VAT    Total (if applicable)
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________

Cheques, in pounds sterling, should be made payable to "BCS Parallel Processing Specialist Group". Unfortunately credit card bookings cannot be accepted.
The delegate fees (including lunch and refreshments), in pounds sterling, are:

Members of both PPSG & BCS: 55 + 9.62 VAT = 64.62
PPSG or BCS members: 70 + 12.25 VAT = 82.25
Non-members: 90 + 15.75 VAT = 105.75
Full-time students: 25 + 4.37 VAT = 29.37
(Students should provide a letter of endorsement from their supervisor that also clearly details their institution)

Contact Address: ___________________________________________
___________________________________________
___________________________________________
Email address: ___________________________________________
Date: _________________ Day time telephone: _________________

Places are limited so please return this form as soon as possible to:

Mrs C. Cunningham
BCS PPSG
2 Mildenhall Close, Lower Earley, Reading, RG6 3AT, UK
(Phone +44 (0) 734 665570)

......................................................................

Apologies for the multiple postings.

From hunt at DBresearch-berlin.de Thu Apr 7 14:33:00 1994 From: hunt at DBresearch-berlin.de (Dr. Ken Hunt) Date: Thu, 7 Apr 94 14:33 MET DST Subject: Neural Control Colloquium at Daimler-Benz in Berlin Message-ID:

IEE Colloquium on Advances in Neural Networks for Control and Systems
------------------------------------

Date: 25-27 May 1994
Venue: Systems Technology Research, Daimler-Benz AG, Berlin, Germany
Co-sponsors: IEE German Centre, The Michael Faraday Institution e.V., Daimler-Benz AG

Provisional Programme
---------------------

Wednesday 25 May
----------------
18:30 -- 20:30 Welcoming reception and buffet

Thursday 26 May
---------------
8:30 Registration
9:30 -- 10:15 Supervised learning and divide-and-conquer via the EM algorithm (EM = Expectation-Maximization) M. Jordan (Massachusetts Institute of Technology, USA)
10:15 -- 10:45 The TACOMA algorithm for reflective growing of neural networks (TACOMA = TAsk decomposition, COrrelation Measures and local Attention neurons) J. Lange, H-M. Voigt and D.
Wolf (Center for Applied Computer Science, and Technical University of Berlin, Berlin, Germany)
Pause
11:15 -- 12:00 The ASMOD algorithm - some theoretical and practical results (ASMOD = Adaptive Spline Modelling of Observation Data) T. Kavli and E. Weyer (SINTEF SI, Oslo, Norway)
12:00 -- 12:30 Semi-empirical modelling of non-linear dynamical systems T. A. Johansen (Norwegian Institute of Technology, Trondheim, Norway)
Lunch
14:00 -- 14:45 Neural networks for control of industrial processes B. Schuermann (Siemens AG, Munich, Germany)
14:45 -- 15:15 Data analysis by means of Kohonen feature maps for load forecast in power systems S. Heine and I. Neumann (Hochschule fuer Technik, Leipzig, and BEST Data Engineering GmbH, Germany)
Pause
15:45 -- 16:15 Improved prediction of the corrosion behaviour of car body steel using a Kohonen self-organising map W. Kessler, R. Kessler, M. Kraus (Fachhochschule fuer Technik und Wirtschaft, Reutlingen, Germany), and R. Kuebler (Mercedes-Benz AG, Sindelfingen, Germany)
16:15 -- 16:45 Adaptive neural network control of the temperature in an oven O. Dubois, J. Nicolas and A. Billat (UFR Sciences Exactes et Naturelles, Reims, France)
16:45 -- 17:15 Exothermic heat estimation using fuzzy-neural nets for a batch reactor temperature control system E. Cuellar, J. Coronado, C. Moreno and J. Izquierdo (University of Valladolid, Spain)
19:30 Colloquium Dinner

Friday 27 May
-------------
9:00 -- 9:45 On interpolating memories for learning control H. Tolle (Technische Hochschule Darmstadt, Germany)
9:45 -- 10:15 Comparison of optimisation techniques for training feedforward networks G. Irwin and G. Lightbody (Queen's University of Belfast, UK)
10:15 -- 10:45 Dynamic systems in neural networks K. Warwick, C. Kambhampati (University of Reading, UK) and P. Parks (University of Oxford, UK)
Pause
11:15 -- 12:00 Adaptive neurocontrol of MIMO systems based on stability theory (MIMO = Multi-input Multi-Output) J-M Renders, M. Saerens, and H.
Bersini (Universite Libre de Bruxelles, Belgium)
12:00 -- 12:30 Learning in neural networks and stochastic approximation with averaging P. Shcherbakov, S. Tikhonov (Institute of Control Sciences, Moscow, Russia) and J. Mason (University of Reading, UK)
Lunch
14:00 -- 14:45 Adaptive neurofuzzy systems for difficult modelling and control problems M. Brown and C. Harris (University of Southampton, UK)
14:45 -- 15:15 Constructive training - industrial perspectives R. Murray-Smith, K. Hunt and F. Lohnert (Daimler-Benz AG, Berlin, Germany)
Pause
15:45 -- 16:15 Equalisation using non-linear adaptive clustering C. Cowan (Loughborough University of Technology, UK)
16:15 -- 16:45 Hierarchical competitive net architecture T. Long (NeuroDyne Inc.) and E. Hanzevack (University of South Carolina, USA)

REGISTRATION FORM

"ADVANCES IN NEURAL NETWORKS FOR CONTROL AND SYSTEMS"
Colloquium from Wednesday, 25 - Friday, 27 May 1994 at The Systems Technology Research Centre, Daimler-Benz AG, Berlin, Germany

The IEE is registered as a charity. IEE VAT Reg No: 240-3420-16

1. Surname: Title:
2. Address for correspondence: Postal Code: Tel No:
3. Class of Membership of IEE or IEEIE: Membership No:
4. Details for name badge Name: Company affiliation:
5. Special dietary requirements
6. How did you hear about this event (programme booklet, direct circular from IEE, IEE News, other press, Email bulletin board, training department etc)?
PLEASE BOOK EARLY AS NUMBERS ARE LIMITED

If you have any queries, please ring the Secretary, (LS(D)CA), on +44 71 240 1871, Extension 2206

REGISTRATION FEES: (includes admission, digest, lunches, refreshments and Colloquium Dinner)

IEE Members: (*) GBP 84.00 (includes VAT @ GBP 12.51)
IEE Retired, Unemployed and Student Members: (#) NO CHARGE
Non-Members: GBP 140.00 (includes VAT @ GBP 20.85)
Retired, Unemployed and Student Non-Members: (#) GBP 42.00 (includes VAT @ GBP 6.25)

I will/will not be attending the Welcoming Buffet on Wednesday, 25 May from 6.30-8.30pm

TOTAL REMITTANCE ENCLOSED (Cheques should be made payable to "IEE" and crossed)

INVOICE FACILITIES WILL ONLY BE CONSIDERED UPON RECEIPT OF AN OFFICIAL ORDER NUMBER AND AN ADMINISTRATIVE CHARGE OF GBP 5.00 + VAT WILL BE MADE.

PLEASE CHARGE TO MY CREDIT CARD - please include number and expiry date of card
Access  Visa  Master card  American Express
Card Holders Name
Registered address of Card Holder if different from above

NOTES
(*) Members of the IEEIE, Eurel Member Associations, and Daimler-Benz Personnel will be admitted at Members' rates.
(#) ALL students must have their applications endorsed by their Professor or Head of Department.

REMITTANCE MUST ACCOMPANY THIS COMPLETED FORM and be returned to:

David Penrose
Institution of Electrical Engineers
Savoy Place
London WC2R 0BL
Email: dpenrose at iee.org.uk
Tel: +44 71 344 5417
FAX: +44 71 497 3633

From jose at learning.siemens.com Thu Apr 7 16:46:03 1994 From: jose at learning.siemens.com (Stephen Hanson) Date: Thu, 7 Apr 1994 16:46:03 -0400 (EDT) Subject: NEW Machine Learning Volume Message-ID: <0hd74=K1GEMnA751M0@tractatus.siemens.com>

This is a new volume just published that may be of interest to you:

COMPUTATIONAL LEARNING THEORY and NATURAL LEARNING SYSTEMS
Constraints and Prospects
MIT/BRADFORD 1994. Editors, S. Hanson, G. Drastal, R.
Rivest

Table of Contents

FOUNDATIONS

Daniel Osherson, Massachusetts Institute of Technology, Michael Stob, Calvin College, and Scott Weinstein, University of Pennsylvania. {\em Logic and Learning}
Ranan Banerji, Saint Joseph's University. {\em Learning Theoretical Terms}
Stephen Judd, Siemens Corporate Research. {\em How Network Complexity is Affected by Node Function Sets}
Diane Cook, University of Illinois. {\em Defining the Limits of Analogical Planning}

REPRESENTATION and BIAS

Larry Rendell and Raj Seshu, University of Illinois. {\em Learning Hard Concepts Through Constructive Induction: Framework and Rationale}
Harish Ragavan and Larry Rendell, University of Illinois. {\em The Utility of Domain Knowledge for Learning Disjunctive Concepts}
George Drastal, Siemens Corporate Research. {\em Learning in an Abstraction Space}
Raj Seshu, University of Denver. {\em Binary Decision Trees and an ``Average-Case'' Model for Concept Learning: Implications for Feature Construction and the Study of Bias}
Richard Maclin and Jude Shavlik, University of Wisconsin, Madison. {\em Refining Algorithms with Knowledge-Based Neural Networks: Improving the Chou-Fasman Algorithm for Protein Folding}

SAMPLING PROBLEMS

Michael Kearns and Robert Schapire, Massachusetts Institute of Technology. {\em Efficient Distribution-free Learning of Probabilistic Concepts}
Marek Karpinski and Thorsten Werther, University of Bonn. {\em VC Dimension and Sampling Complexity of Learning Sparse Polynomials and Rational Functions}
Haym Hirsh and William Cohen, Rutgers University. {\em Learning from Data with Bounded Inconsistency: Theoretical and Experimental Results}
Wolfgang Maass and Gyorgy Turan, University of Illinois. {\em How Fast Can a Threshold Gate Learn?}
Eric Baum, NEC Research Institute. {\em When are k-Nearest Neighbor and Back Propagation Accurate for Feasible Sized Sets of Examples?}

EXPERIMENTAL

Ross Quinlan, University of Sydney.
{\em Comparing Connectionist and Symbolic Learning Methods}
Andreas Weigend and David Rumelhart, Stanford University. {\em Weight-Elimination and Effective Network Size}
Ronald Rivest and Yiqun Yin, Massachusetts Institute of Technology. {\em Simulation Results for a New Two-Armed Bandit Heuristic}
Susan Epstein, Hunter College. {\em Hard Questions About Easy Tasks: Issues From Learning to Play Games}
Lorien Pratt, Rutgers University. {\em Experiments on the Transfer of Knowledge between Neural Networks}

Stephen J. Hanson, Ph.D.
Head, Learning Systems Department
SIEMENS Research
755 College Rd. East
Princeton, NJ 08540

From jbower at smaug.bbb.caltech.edu Fri Apr 8 15:26:30 1994 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Fri, 8 Apr 94 12:26:30 PDT Subject: CNS*94 registration Message-ID: <9404081926.AA09267@smaug.bbb.caltech.edu>

******************************************************
Registration Information for the Third Annual
Computation and Neural Systems Meeting
CNS*94
July 21 - July 26, 1994
Monterey, California
******************************************************

CNS*94: Registration is now open for this year's Computation and Neural Systems meeting (CNS*94). This is the third in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. As in the previous years, this meeting will bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. The meeting will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation.

Meeting Structure: The meeting will be organized in two parts: three days of research presentations, and two days of follow-up workshops.
Most presentations will be based on submitted papers, with 10 presentations by specially invited speakers. 111 submitted papers have been accepted for presentation at the meeting based on peer review. Details on the agenda can be obtained via ftp or through telnet as described below.

Location: The two components of the meeting will take place in two different locations on the Monterey Peninsula on the coast south of San Francisco, California. The main meeting will be held at the Doubletree Hotel in downtown Monterey itself. This modern hotel is located at Monterey's historic Fisherman's Wharf. Following the main meeting, two days of post-meeting workshops will be held at Asilomar Conference Center at Asilomar State Beach in Pacific Grove just a few miles away.

Main Meeting Accommodations: Accommodations for the main meeting have been arranged at the Doubletree Hotel. We have reserved a block of rooms at the special rate for all attendees of $119 per night single or double occupancy in the conference hotel (that is, 2 people sharing a room would split the $119!). A fixed number of rooms have been reserved for students at the rate of $99 per night single or double occupancy (yes, that means $50 a night per student!). These student room rates are on a first-come-first-served basis, so we recommend acting quickly to reserve these slots. Each additional person per room is a $20 charge. Registering for the meeting WILL NOT result in an automatic room reservation. Instead you must make your own reservations by contacting:

the Doubletree Hotel at Fisherman's Wharf
Two Portola Plaza
Monterey, CA 93940
(408) 649-4511

NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 18, 1994. When making reservations by phone, make sure to indicate that you are registering for the CNS*94 meeting. Students will be asked to verify their status on check-in with a student ID or other documentation.
Workshop Accommodations: Housing for the workshops will be provided on-site at Asilomar Conference Center. Transportation to the conference center, three nights lodging and all meals are included in the workshop registration fee of $250. Acknowledgment of registration for the workshops and payment of fees WILL constitute a guarantee of accommodations at the workshop. However, space at the workshops is limited to 125, so early registration is highly recommended.

Registration Fees: The strong response to this year's call for papers, coupled with grant support, has allowed us to reduce the registration fees from previous years. Accordingly, fees this year will be:

Main meeting registration received before June 15, 1994:
Students, main meeting: $ 90
Others, main meeting: $ 200

Main meeting registration after June 15, 1994:
Students, main meeting: $ 125
Others, main meeting: $ 235

Workshops: $250 (includes all meals and three nights lodging)

Banquet: Registration for the main meeting includes a single ticket to the annual CNS Banquet this year to be held within the Monterey Aquarium on Thursday evening, July 21st. Additional Banquet tickets can be purchased for $35 each person.

******************************************************

Additional Meeting Information: Additional information about the meeting is available via FTP over the internet (address: 131.215.137.69). To obtain information about the agenda, currently registered attendees, or paper abstracts, the initial sequence is the same (Things you type are in ""):

> yourhost% "ftp 131.215.137.69"
> 220 mordor FTP server (SunOS 4.1) ready.
Name (131.215.137.69:): "ftp"
> 331 Guest login OK, send ident as password.
Password: "yourname at yourhost.yourside.yourdomain"
> 230 Guest login OK, access restrictions apply.
ftp> "cd cns94"
> 250 CWD command successful.
ftp> At this point you can do one of several things:

1) To examine what is available type: "ls"

Directory as of 4/5/94:
abstracts94
agenda94
attendees94
general_information94
registration94
rooms_to_share94
travel_arrangements94
travel_grants94
workshops94

2) To download specific files type: "get <filename>" for example: "get agenda94" or "get attendees94"

Once you have obtained the information you want type: "quit"

******************************************************

Registration Procedure: Participants can register for the meeting in several different ways: 1) electronically, 2) via email, 3) via regular surface mail. Each different method is described below. Please only register using one method. You will receive a confirmation of registration within two weeks.

1) Interactive electronic registration: For those of you with internet connectivity who would like to register electronically for CNS*94 we have provided an internet account through which you may submit your registration information. To use this service you need only "telnet" to "mordor.bbb.caltech.edu" and login as "cns94". No password is required. For example:

yourhost% "telnet mordor.bbb.caltech.edu"
Trying 131.215.137.69 ...
Connected to mordor.bbb.caltech.edu.
Escape character is '^]'.
SunOS UNIX (mordor)
login: "cns94"

Now answer all questions. (Note that all registration through this electronic service is subject to verification of payment.)
2) Email registration: For those with easy access to electronic mail simply fill in the attached registration form and email it to: cns94 at smaug.bbb.caltech.edu

3) Surface mail registration: Finally, for those who elect neither of the above options, or who are paying by means other than credit card, please print out the attached registration form and send with payment via surface mail to:

CNS*94 Registrations
Division of Biology 216-76
Caltech
Pasadena, CA 91125

Those registering by 1 or 2 above, but paying with check or money order should send payment to the above address as well with your name and institution clearly marked. Registration becomes effective when payment is received.

******************************************************
CNS*94 REGISTRATION FORM
Monterey, California. July 20 - July 26, 1994
******************************************************

Last Name: First Name: Title:
Organization:
Address: City: State: Zip: Country:
Telephone: email address:

Registration Fees:

Technical Program -- July 21 - 23
_____ Regular $ 200 ($225 after June 15th)
_____ Student $ 90 ($125 after June 15th)
_____ Banquet $ 35 (each additional banquet ticket)

Post-meeting Workshop -- July 24 - 26
_____ $ 250 (includes round-trip transportation, all meals and lodging)

Total Payment: $ ______

Please indicate method of payment:
____ Check or Money Order - Payable in U.S. dollars to CNS*94 - Caltech - Please make sure to indicate CNS*94 and YOUR name on all money transfers.
____ Charge my card: ____ Visa ____ Mastercard ____ American Express
number: ________________________________________
Expiration date __________
Name of cardholder ______________________________
Signature as appears on card (for mailed in applications): _________________________ Date ____________

=====================================================
Additional Questions:
Did you submit an abstract & summary ?
( ) yes ( ) no
title :

Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note:

Some grants to cover partial travel expenses may become available. Do you wish further information ? ( ) yes ( ) no

======================================================

From tesauro at watson.ibm.com Fri Apr 8 16:14:12 1994 From: tesauro at watson.ibm.com (Gerald Tesauro) Date: Fri, 8 Apr 94 16:14:12 EDT Subject: 2nd Call for Workshops-- NIPS*94 (submission deadline May 21) Message-ID:

CALL FOR PROPOSALS
NIPS*94 Post-Conference Workshops
December 2 and 3, 1994
Vail, Colorado

Following the regular program of the Neural Information Processing Systems 1994 conference, workshops on current topics in neural information processing will be held on December 2 and 3, 1994, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: active learning and control, architectural issues, attention, bayesian analysis, benchmarking neural network applications, computational complexity issues, computational neuroscience, fast training techniques, genetic algorithms, music, neural network dynamics, optimization, recurrent nets, rules and connectionist models, self-organization, sensory biophysics, speech, time series prediction, vision and audition, implementations, and grammars. The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics. Representation of alternative viewpoints and panel-style discussions are particularly encouraged.
Individuals proposing to chair a workshop will have responsibilities including: 1) arranging short informal presentations by experts working on the topic, 2) moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the ``gong show''), and 3) writing a brief summary. Submission Procedure: Interested parties should submit a short proposal for a workshop of interest postmarked by May 21, 1994. (Express mail is not necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, the proposed length of the workshop (one day or two days), and the planned format. It should motivate why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest. Mail submissions to: Todd K. Leen, NIPS*94 Workshops Chair Department of Computer Science and Engineering Oregon Graduate Institute of Science and Technology P.O. Box 91000 Portland Oregon 97291-1000 USA (e-mail: tleen at cse.ogi.edu) Name, mailing address, phone number, fax number, and e-mail net address should be on all submissions. PROPOSALS MUST BE POSTMARKED BY MAY 21, 1994 Please Post  From jramire at conicit.ve Fri Apr 8 16:38:31 1994 From: jramire at conicit.ve (Jose Ramirez G. 
(AVINTA) Date: Fri, 8 Apr 1994 16:38:31 -0400 (AST) Subject: CFP IBEROAMERICAN CONGRESS ON AI Message-ID: <9404082038.AA25999@dino.conicit.ve> * PLEASE POST * PLEASE POST * PLEASE POST * PLEASE POST * PLEASE POST CALL FOR PAPERS IBEROAMERICAN CONGRESS ON ARTIFICIAL INTELLIGENCE IBERAMIA 94 NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CNIASE 94 IBERAMIA 94/CNIASE 94 will be sponsored by the Venezuelan Association for AI -AVINTA-, the Mexican AI Society -SMIA- and the Spanish Association for IA -AEPIA-. The goal of the conference is to promote research and development in Artificial Intelligence and scientific interchange among AI researchers and practitioners. IBERAMIA 94/CNIASE 94 will be hosted by the Centro de Investigaciones Oficina Metropolis and the School of Systems Engineering of the Universidad Metropolitana of Caracas -UNIMET-, between Tuesday 25th October and Friday 28th October 1994. Program Committee Chair: Hector Geffner - USB (Venezuela) Program Committee: Julian Araoz - USB (Venezuela) Jorge Baralt - USB (Venezuela) Francisco Cantu - ITESM (Mexico) Nuria Castell - UPC (Spain) Alberto Castillo - UCLA (Venezuela) Helder Coelho - INESC (Portugal) Nelson Correa - Univ. de los Andes (Colombia) Alvaro del Val - Stanford U. (USA) Felix Garcia Padilla - UDO (Venezuela) Luciano Garcia - UH (Cuba) Francisco Garijo - TID (Spain) Warren Greiff - UDLAP (Mexico) Christian Lemaitre - LANIA (Mexico) Alonso Marquez - USB (Venezuela) Luis Moniz Pereira - UNL (Portugal) Jose Ali Moreno - UCV (Venezuela) Pablo Noriega - INEGI (Mexico) Olga Padron - UH (Cuba) Tarcisio Pequeno - LIA/UFC (Brazil) Javier Pinto - PUC (Chile) Jose Ramirez - UNIMET (Venezuela) Antonio Sanchez Aguilar - UDLA-P (Mexico) Carlos Sierra - CSIC (Spain) Guillermo Simari - UNS (Argentina) Angel Vina - UAM (Spain) Organizing Committee Chair: Adelaide Bianchini - UNIMET Organizing Committee: Antonietta Bosque - UNIMET Preciosa Castro - UNIMET Edna R. 
de Millan - UNIMET Rodrigo Ramirez - UNIMET Irene Torres - AVINTA Supporting Associations: Francisco Garijo - AEPIA Christian Lemaitre - SMIA Jose Ramirez - AVINTA We invite authors to submit papers describing original work in all areas of AI, including but not limited to: Machine Learning Knowledge Acquisition Natural Language Processing Genetic Algorithms Evolutionary Programming Knowledge Based Systems Knowledge Representation and Reasoning Automated Reasoning Knowledge-based Simulation Cognitive Modelling Robotics Case-based Reasoning Distributed Artificial Intelligence Neural Networks Virtual Reality All submissions will be refereed for quality and originality. Authors must submit three (3) copies of their papers (not electronic or fax transmisions) by June 30, 1.994 to the following address: AVINTA Apartado 67079 Caracas 1061 Venezuela +58-2-2836942,fax:+58-2-2832689 jramire at conicit.ve or Universidad Metropolitana Centro de Investigaciones Oficina Metropolis Autopista Petare-Guarenas Distribuidor Universidad Terrazas del Avila Caracas 1070-A Venezuela +58-2-2423089, fax:+58-2-2425668 abianc at conicit.ve All copies must be clearly legible. Notification of receipt will be mailed to the first author. Papers can be written in English, Spanish or Portuguese and must be printed on 8 1/2 x 11 inches paper using 12 point type (14 point type for headings). The body of submitted papers must be at most 12 pages. Each copy must have a title page (separate from the body of the paper) containing title of the paper, names and addresses of all authors, telephone number, fax number, electronic mail address and a short (less than 200 word) abstract. All accepted papers will be published in full length by McGraw-Hill. 
Important dates: Deadline for paper submission: June 30, 1994 Notification of acceptance: July 30, 1994 Camera Ready Copy: September 9,1994 Location: Caracas is located on the north of South America, facing the Caribbean Sea; is a modern city with an enjoyable wheather all year long (20 C to 30 C), with many interesting sites including cultural complexes, historical downtown, shopping malls and excelent hotels and restaurants offering the best food from all over the world. The Simon Bolivar International Airport is 45 minutes from downtown, and have regular flights to all major cities in the world.  From tesauro at watson.ibm.com Fri Apr 8 16:12:49 1994 From: tesauro at watson.ibm.com (Gerald Tesauro) Date: Fri, 8 Apr 94 16:12:49 EDT Subject: 2nd Call for Papers-- NIPS*94 (submission deadline May 21) Message-ID: ********* PLEASE NOTE NEW SUBMISSION FORMAT FOR 1994 ********* CALL FOR PAPERS Neural Information Processing Systems -Natural and Synthetic- Monday, November 28 - Saturday, December 3, 1994 Denver, Colorado This is the eighth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov 28) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec 2-3). Major categories for paper submission, and examples of keywords within categories, are the following: Neuroscience: systems physiology, cellular physiology, signal and noise analysis, oscillations, synchronization, inhibition, neuromodulation, synaptic plasticity, computational models. 
Theory: computational learning theory, complexity theory, dynamical systems, statistical mechanics, probability and statistics, approximation theory. Implementations: VLSI, optical, parallel processors, software simulators, implementation languages. Algorithms and Architectures: learning algorithms, constructive/pruning algorithms, localized basis functions, decision trees, recurrent networks, genetic algorithms, combinatorial optimization, performance comparisons. Visual Processing: image recognition, coding and classification, stereopsis, motion detection, visual psychophysics. Speech, Handwriting and Signal Processing: speech recognition, coding and synthesis, handwriting recognition, adaptive equalization, nonlinear noise removal. Applications: time-series prediction, medical diagnosis, financial analysis, DNA/protein sequence analysis, music processing, expert systems. Cognitive Science & AI: natural language, human learning and memory, perception and psychophysics, symbolic reasoning. Control, Navigation, and Planning: robotic motor control, process control, navigation, path planning, exploration, dynamic programming. Review Criteria: All submitted papers will be thoroughly refereed on the basis of technical quality, novelty, significance and clarity. Submissions should contain new results that have not been published previously. Authors are encouraged to submit their most recent work, as there will be an opportunity after the meeting to revise accepted manuscripts before submitting final camera-ready copy. ********** PLEASE NOTE NEW SUBMISSIONS FORMAT FOR 1994 ********** Paper Format: Submitted papers may be up to eight pages in length. The page limit will be strictly enforced, and any submission exceeding eight pages will not be considered. Authors are encouraged (but not required) to use the NIPS style files obtainable by anonymous FTP at the sites given below. 
Papers must include physical and e-mail addresses of all authors, and must indicate one of the nine major categories listed above, keyword information if appropriate, and preference for oral or poster presentation. Unless otherwise indicated, correspondence will be sent to the first author. Submission Instructions: Send six copies of submitted papers to the address given below; electronic or FAX submission is not acceptable. Include one additional copy of the abstract only, to be used for preparation of the abstracts booklet distributed at the meeting. Submissions mailed first-class within the US or Canada must be postmarked by May 21, 1994. Submissions from other places must be received by this date. Mail submissions to: David Touretzky NIPS*94 Program Chair Computer Science Department Carnegie Mellon University 5000 Forbes Avenue Pittsburgh PA 15213-3890 USA Mail general inquiries/requests for registration material to: NIPS*94 Conference NIPS Foundation PO Box 60035 Pasadena, CA 91116-6035 USA (e-mail: nips94 at caltech.edu) FTP sites for LaTex style files "nips.tex" and "nips.sty": helper.systems.caltech.edu (131.215.68.12) in /pub/nips b.gp.cs.cmu.edu (128.2.242.8) in /usr/dst/public/nips NIPS*94 Organizing Committee: General Chair, Gerry Tesauro, IBM; Program Chair, David Touretzky, CMU; Publications Chair, Joshua Alspector, Bellcore; Publicity Chair, Bartlett Mel, Caltech; Workshops Chair, Todd Leen, OGI; Treasurer, Rodney Goodman, Caltech; Local Arrangements, Lori Pratt, Colorado School of Mines; Tutorials Chairs, Steve Hanson, Siemens and Gerry Tesauro, IBM; Contracts, Steve Hanson, Siemens and Scott Kirkpatrick, IBM; Government & Corporate Liaison, John Moody, OGI; Overseas Liaisons: Marwan Jabri, Sydney Univ., Mitsuo Kawato, ATR, Alan Murray, Univ. of Edinburgh, Joachim Buhmann, Univ. of Bonn, Andreas Meier, Simon Bolivar Univ. 
DEADLINE FOR SUBMISSIONS IS MAY 21, 1994 (POSTMARKED) -please post-  From furu at bioele.nuee.nagoya-u.ac.jp Mon Apr 11 09:32:21 1994 From: furu at bioele.nuee.nagoya-u.ac.jp (=?ISO-2022-JP?B?GyRCOEU2NiEhSXAbKEI=?=) Date: Mon, 11 Apr 94 09:32:21 JST Subject: Final Call for Papers of WWW Message-ID: <9404110032.AA01802@gemini.bioele.nuee.nagoya-u.ac.jp> FINAL CALL FOR PAPERS 1994 IEEE/Nagoya University World Wisemen/women Workshop (WWW) ON FUZZY LOGIC AND NEURAL NETWORKS/GENETIC ALGORITHMS -Architecture and Applications for Knowledge Acquisition/Adaptation- August 9 and 10, 1994 Nagoya University Symposion Chikusa-ku, Nagoya, JAPAN Sponsored by Nagoya University Co-sponsored by IEEE Industrial Electronics Society Technically Co-sponsored by IEEE Neural Network Council IEEE Robotics and Automation Society International Fuzzy Systems Association Japan Society for Fuzzy Theory and Systems North American Fuzzy Information Processing Society Society of Instrument and Control Engineers Robotics Society of Japan There are growing interests in combination technologies of fuzzy logic and neural networks, fuzzy logic and genetic algorithm for acquisition of experts' knowledge, modeling of nonlinear systems, realizing adaptive systems. The goal of the 1994 IEEE/Nagoya University WWW on Fuzzy Logic and Neural Networks/Genetic Algorithm is to give its attendees opportunities to exchange information and ideas on various aspects of the Combination Technologies and to stimulate and inspire pioneering works in this area. To keep the quality of these workshop high, only a limited number of people are accepted as participants of the workshops. The papers presented at the workshop are planned to be edited and published from Springer-Verlag. 
TOPICS: Combination of Fuzzy Logic and Neural Networks, Combination of Fuzzy Logic and Genetic Algorithm, Learning and Adaptation, Knowledge Acquisition, Modeling, Human Machine Interface IMPORTANT DATES: Submission of Abstracts of Papers : April 30, 1994 Acceptance Notification : May 31, 1994 Final Manuscript : July 1, 1994 Abstracts should be type-written in English within 4 pages of A4 size or Letter sized sheet. Use Times or one of the similar typefaces. The size of the letters should be 10 points or larger. All correspondence and submission of papers should be sent to Takeshi Furuhashi, General Chair Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN TEL: +81-52-789-2792 FAX: +81-52-789-3166 E mail: furu at bioele.nuee.nagoya-u.ac.jp This workshop will be held just after the 3rd International Conference of Fuzzy Logic, Neural Nets and Soft Computing(IIZUKA'94) from 1 to 7, '94. For speakers of excellent papers, assistance of travel expenses from Iizuka to Nagoya and Nagoya to an airport in Japan as well as lodging fee in Nagoya will be provided by the Steering committee of WWW. Candidates are recommended to attend the conference in Iizuka. IEEE/Nagoya University WWW: IEEE/Nagoya University WWW (World Wisemen/women Workshop) is a series of workshops sponsored by Nagoya University and co-sponsored by IEEE Industrial Electronics Society. City of Nagoya, located two hours away from Tokyo, has many electro-mechanical industries in its surroundings such as Mitsubishi, TOYOTA, and their allied companies. Nagoya is a mecca of robotics industries, machine industries and aerospace industries in Japan. The series of workshops will give its attendees opportunities to exchange information on advanced sciences and technologies and to visit industries and research institutes in this area. 
WORKSHOP ORGANIZATION Honorary Chair : Masanobu Hasatani (Dean, School of Engineering, Nagoya University) General Chair : Takeshi Furuhashi (Nagoya University) Advisory Committee: Chair : Toshio Fukuda (Nagoya University) Toshio Goto (Nagoya University) Fumio Harashima (University of Tokyo) Hiroyasu Nomura (Nagoya University) Yoshiki Uchikawa (Nagoya University) Takeshi Yamakawa (Kyushu Institute of Technology) Steering Committee: H.Berenji (NASA Ames Research Center) W.Eppler (University of Karlsruhe) I.Hayashi (Hannan University) Y.Hayashi (Ibaraki University) H.Ichihashi (Osaka Prefectural University) A.Imura (Laboratory for International Fuzzy Engineering) M.Jordan (Massachusetts Institute of Technology) C.-C.Jou (National Chiao Tung University) E.Khan (National Semiconductor) R.Langari (Texas A & M University) S.Nakanishi (Tokai University) H.Takagi (Matsushita Electric Industrial Co., Ltd.) K.Tanaka (Kanazawa University) M.Valenzuela-Rendon (Instituto Tecnologico y de Estudios Superiores de Monterrey) L.-X.Wang (University of California Berkeley) T.Yamaguchi (Utsunomiya University) J.Yen (Texas A & M University) ===================================================== Takeshi Furuhashi Dept. of Information Electronics, Nagoya University Furo-cho, Chikusaku, Nagoya ?464-01 Japan Tel.(052)789-2792 Fax.(052)789-3166 E-mail furu at bioele.nuee.nagoya-u.ac.jp =====================================================  From amari at sat.t.u-tokyo.ac.jp Mon Apr 11 13:52:37 1994 From: amari at sat.t.u-tokyo.ac.jp (Shun-ichi Amari) Date: Mon, 11 Apr 94 13:52:37 JST Subject: Announcement of newpaper, Information Geometry and EM algorithm Message-ID: The following paper is now available via anonymous ftp from the neuroprose archive. It is a technical report, METR 94-4, University of Tokyo and will appear in "Neural Networks". It consisits of two files, am19.ps for the main body (85 pages) figs-ps for the figures. 
If you have any problems, contact to mura at sat.t.u-tokyo.ac.jp --------- FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/amari.geometryofem.tar.Z This includes two files, am19.ps and figs.ps Use the unix command uncompress + tar to uncompress and divide into two files. --------- "Information Geometry of the EM and em Algorithms for Neural Networks" by Shun-ichi Amari In order to realize an input-output relation given by noise-contaminated examples, it is effective to use a stochastic model of neural networks. A model network includes hidden units whose activation values are not specified nor observed. It is useful to estimate the hidden variables from the observed or specified input-output data based on the stochastic model. Two algorithms, the EM- and em-algorithms, have so far been proposed for this purpose. The EM-algorithm is an iterative statistical technique of using the conditional expectation, and the em-algorithm is a geometrical one given by information geometry. The $em$-algorithm minimizes iteratively the Kullback-Leibler divergence in the manifold of neural networks. These two algorithms are equivalent in most cases. The present paper gives a unified information geometrical framework for studying stochastic models of neural networks, by forcussing on the EM and em algorithms, and proves a condition which guarantees their equivalence. Examples include 1) Boltzmann machines with hidden units, 2) mixtures of experts, 3) stochastic multilayer perceptron, 4) normal mixture model, 5) hidden Markov model, among others.  From Christian.Lehmann at di.epfl.ch Tue Apr 12 10:06:24 1994 From: Christian.Lehmann at di.epfl.ch (Christian Lehmann) Date: Tue, 12 Apr 94 16:06:24 +0200 Subject: neural hw paper available Message-ID: <9404121406.AA07536@lamisun.epfl.ch> N E W P A P E R S AVAILABLE O N NEUROCOMPUTING HARDWARE The following papers are now available via anonymous ftp from the neuroprose archive. 
There are three papers on different subjects related to our work on neurocomputing hardware. Should you experience any problem, please, do not hesitate to contact us: Christian Lehmann The MANTRA Center for neuromimetic systems MANTRA-DI-EPFL CH-1015 Lausanne Switzerland or lehmann at di.epfl.ch --------- Author : M. A. Viredaz Title : MANTRA I: An SIMD Processor Array for Neural Computation In : Proceedings of the Euro-ARCH'93 Conference FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/viredaz.e-arch93.ps.Z Length : 12 pages note : figure 7 is missing (photography of boards), contact us if wanted. Abstract: This paper presents an SIMD processor array dedicated to the implementation of neural networks. The heart of this machine is a systolic array of simple processing elements (PEs). A VLSI custom chip containing 2x2 PEs was built. The machine is designed to sustain sufficient instruction and data flows to keep a utilization rate close to 100%. Finally, this computer is intended to be inserted in a network of heterogeneous nodes. --------- Authors : P. Ienne, M. A. Viredaz Title : GENES IV: A Bit-Serial Processing Element for a Multi-Model Neural-Network Accelerator In : Proceedings of the International Conference on Application Specific Array Processors, 1994 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/ienne.genes.ps.Z Length : 12 pages Abstract: A systolic array of dedicated processing elements (PEs) is presented as the heart of a multi-model neural-network accelerator. The instruction set of the PEs allows the implementation of several widely-used neural models, including multi-layer Perceptrons with the backpropagation learning rule and Kohonen feature maps. Each PE holds an element of the synaptic weight matrix. An instantaneous swapping mechanism of the weight matrix allows the implementation of neural networks larger than the physical PE array. 
A systolically-flowing instruction accompanies each input vector propagating in the array. This avoids the need of emptying and refilling the array when the operating mode of the array is changed. Both the GENES IV chip, containing a matrix of 2x2 PEs, and an auxiliary arithmetic circuit have been manufactured and successfully tested. The MANTRA I machine has been built around these chips. Peak performances of the full system are between 200 and 400 MCPS in the evaluation phase and between 100 and 200 MCUPS during the learning phase (depending on the algorithm being implemented). --------- Author : P. Ienne Title : Architectures for Neuro-Computers: Review and Performance Evaluation In : EPFL Computer Science Department Technical Report 93/21 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/ienne.nnarch.ps.Z Length : 63 pages Abstract: As the field of neural-network matures toward real-world applications, a need for hardware systems to efficiently compute larger networks arises. Several designs have been proposed in the recent years and a selection of the more interesting VLSI digital realizations are reviewed here. Limitations of conventional performance measurements are briefly discussed and a different architectural level evaluation approach is attempted by proposing a number of characteristic performance indexes on idealized architecture classes. As a results of this analysis, some conclusions on the advantages and limitations of the different architectures and on the feasibility of the proposed approach are drawn. Architectural aspects that require further developments are also emphasized. ********* See also Authors : C. Lehmann, M. Viredaz and F. Blayo Title : A Generic Systolic Array Building Block for Neural Networks with On-Chip Learning In : IEEE Trans. 
on NN, 4(3), May 1993  From bossan at bio1.peb.ufrj.br Tue Apr 12 19:06:00 1994 From: bossan at bio1.peb.ufrj.br (Marcelo de Carvalho Bossan) Date: Tue, 12 Apr 94 18:06:00 EST Subject: world congress Message-ID: <9404122106.AA01606@bio1> Message to "Neural Networkers" World Congress on Medical Physics and Biomedical Engineering - RIO'94 The next World Congress on Medical Physics and Biomedical Engineering will be held in Rio de Janeiro, Brazil, 21-26 August 1994. The scientific program includes approximately 1900 papers. Round table, state-of-the-art and tutorial sessions are being finalized, and nearly 250 of the world's leading experts have so far sent abstracts for their invited presentations. Issues in Neural Networks (NN) are inserted mainly in the topic Expert and Decision Support Systems (chairmen: N. Saranummi, Finland, and R. J. Machado, Brazil). A tutorial lecture on Hybrid Expert Systems will be presented by A. Rocha (Brazil) and oral sessions are programmed on both Neural Networks and Hybrid Expert Systems. A Round-table on NN in Electrocardiology (speakers: R. G. Mark, USA, N. Maglaveras, Greece, L. Glass, Austria and R. M. Sabbatini, Brazil) is also included in the program, in adiction to scientific sessions focusing NN in various applications. Attracting the largest possible audience is our present priority, as we can now present an extensive scientific program of high standard. We particularly hope that a large number of students will be able to participate as we offer courses and over 30 tutorial sessions. We are currently also making arrangements for low-priced board and lodging for students. Cut-price air-fairs to Rio (or packages including hotels) are available. Supporting from funding agencies has allowed reduced registration fees for Latin American participants. 
Together with the Final Announcement (and the final letter to the authors) we will give details of the reception of conference participants at the airport and transports to hotels, which are currently being organized. A free bus service will operate on Saturday (20th August) and Sunday (21st) during peak arrival hours. One of the taxi-companies operating within the airport will be available to participants arriving at other times (fixed prices fairs, around US$ 15), with a counter on information regarding the congress and hotels. We look forward to welcoming you here, where you can enjoy stimulating scientific presentations and discussions, renew old friendships and make new ones, all in exotic and beautiful surroundings. For more information: World Congress on Medical Physics and Biomedical Engineering Rio de Janeiro - 21-26 August 1994 Congrex do Brasil Rua do Ouvidor 60, Gr.414 Tel: +55 (021) 224-6080 - Fax: +55 (021) 231-1492 or bossan at bio1.peb.ufrj.br  From lss at compsci.stirling.ac.uk Wed Apr 13 09:33:50 1994 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 13 Apr 94 13:33:50 GMT (Wed) Subject: New TR available Message-ID: <9404131333.AA06530@uk.ac.stir.cs.peseta> ***DO NOT FORWARD TO OTHER GROUPS*** University of Stirling (Scotland), Centre for Cognitive and Computational Neuroscience.... CCCN Technical report CCCN-15 Activation Functions, Computational Goals and Learning Rules for Local Processors with Contextual Guidance. Information about context can enable local processors to discover latent variables that are relevant to the context within which they occur, and it can also guide short-term processing. For example, Becker and Hinton (1992) have shown how context can guide learning, and Hummel and Biederman (1992) have shown how it can guide processing in a large neural net for object recognition. 
This paper therefore studies the basic capabilities of a local processor with two distinct classes of inputs : receptive field inputs that provide the primary drive and contextual inputs that modulate their effects. The contextual predictions are used to guide processing without confusing them with the receptive field inputs. The processor's transfer function must therefore distinguish these two roles. Given these two classes of input the information in the output can be decomposed into four disjoint components to provide a space of possible goals in which the unsupervised learning of Linsker (1988) and the internally supervised learning of Becker and Hinton (1992) are special cases. Learning rules are derived from an information-theoretic objective function, and simulations show that a local processor trained with these rules and using an appropriate activation function has the elementary properties required. This report is available by anonymous FTP from ftp.cs.stir.ac.uk in the directory pub/tr/cccn The filename is TR15.ps.Z (and, as usual, this needs decompress'd, and the postscript printed.) As a last resort, hard copy may be available: email lss at cs.stir.ac.uk with your postal address...  From L.P.OMard at lut.ac.uk Wed Apr 13 17:40:05 1994 From: L.P.OMard at lut.ac.uk (L.P.OMard) Date: Wed, 13 Apr 94 17:40:05 bst Subject: Latest LUTEar Core Routines Library (1.5.0) Message-ID: <9404131640.AA03083@hpc.lut.ac.uk> Dear All, Please find below the README file for the latest version of the LUTEar Core Routines Library (CRL). The UNIX and Macintosh (THINK C 5.0) platform versions are now available (the MSDOS, Borland C, version will be ready by Friday at the latest) via anonymous FTP from:- suna.lut.ac.uk (131.231.16.2): /public/hulpo/lutear Connect via FTP then login with user name "anonymous" and give your e-mail address as the password. 
Download and thead the "INSTALL150" file from the "/public/hulpo/lutear" directory (as also given above), then follow the installation procedure for your platform. If you have any problems at all, do not hesitate to get in contact with me. Any comments, improvements, additions or corrections you may wish to suggest are very welcome; it is only by direct feed-back from users that I can ensure that the Core Routines Library is a delight to use, as well as implementing state of the art auditory models. ..Lowel. 1. Introduction As computer modelling of the auditory system increased in complexity the need for common working tools became more pressing. Such tools are necessary to allow the rapid dissemination of new computer code, and to permit other members of the scientific community to replicate and challenge published results. The auditory models developed by the Speech and Hearing Laboratory, at Loughborough University of Technology (UK.), have received much attention, due principally to their simple form and the many published papers in which the models are used to explain auditory phenomena. The many requests for the computer code of the model simulations led to the group releasing the LUTEar Core Routines Library (CRL, version 1.0.0, October 1993) as a computational platform and set of coding conventions which supports a modular approach to auditory system modelling. The system is written in ANSI-C and works on a wide range of operating systems. LUTEar has now been consolidated and much improved in the latest release (version 1.5.0). The CRL brings together established models, developed by the group, and also contributed by other researchers in the field, which simulate various stages in the auditory process. Since the first release, the LUTEar CRL has been tested and used both at the originating laboratory and at many other sites. 
It has been used as a tool for speech processing, speech and voice analysis as well as in the investigation of auditory phenomena, for which it was primarily constructed. This latest version of the CRL is a product of the proving ground to which it was subjected, and we hope that it will be as well received as was the first version. Included with this release is a comprehensive series of test programs. These programs were used to test the CRL routines; they reproduce the behaviour of the respective published models included. The programs also provide examples of how the CRL may be used in auditory investigation programs. In addition the programs read data from parameter files, and thus can be readily used to investigate further the behaviour of the models included in the CRL. The CRL routines have been subjected as much as possible to careful and exhaustive testing. No system, however, is infallible so it is hoped that, with the gentle admonitions of the library's users, any problems or omissions will be quickly corrected. In addition it is expected that the library will be augmented by further models as the scientific endeavour continues. Many weeks have been required to get the manual into its current form. It is not perfect, so gentle admonitions and suggested changes/additions are invited. 1.1. CRL Features The library has a modular structure which can be used to create auditory investigation/application systems or incorporated in existing code, as required. The library is intuitive in application, and has comprehensive error reporting embedded in efficient code. All the modules conform to a simple standard format. The design allows for plugging and unplugging alternative models of the same component auditory process for purposes of comparison. Ultimately the CRL is a development based on the meld of experimental investigation methods and the tenets of good software engineering practice. 
The following is a list of the principal features of the CRL:- o Modular Structure; o Processing stage data can be handled by a single unit; o Processing stage units can link to data from other stages; o Multi-channel data is handled invisibly; o Efficient algorithms are used throughout; o Meaningful routine and variable names are used; o All routines are prefixed by their module name; o Comprehensive error handling incorporated in routines. 1.1.1 Main features new in version 1.5.0 o Improved manual: greater detail with over 65 figures and an index. o Sound data file format reading/writing support; o Connection management system (invisible to user); o Modules can now read/print their own parameters; o Generic programming introduced; o New analysis routines, including FFT's; o Binaural processing support; o Non-linear basilar membrane filter model;* o Stochastic inner hair cell model;* o McGregor neural cell model; o Dendrite filter model; o Spike generation module (for Meddis86 IHC model output); o New stimulus generation modules. o Parameter files can have comment or blank lines; o Direction of warnings and error messages to a specified file; * These models are still in development, prior to publishing, but they have been included for those who may wish to look at them. +-------------------------+-----------------------------------------------+ |Dr. Lowel P. O'Mard | /\ / \ Speech & Hearing | |Dept. of Human Sciences, | /\/\ /\/ \/ /\ \ /\ Laboratory | |University of Technology,|_/\/\/ /\ \/\/ /\ /\/ \ \/ /\/\_ /\___ | |Loughborough, | \/\/ \/\/\/ \/ /\ \/\/ /\ / | |Leics. LE11 3TU, U.K. | \ /\/\/\ /\/ \ /\/\/ \/ Director: | |L.P.OMard at lut.ac.uk | \/ \/ \/ Prof. 
Ray Meddis | +-------------------------+-----------------------------------------------+  From lss at compsci.stirling.ac.uk Thu Apr 14 07:07:15 1994 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 14 Apr 94 11:07:15 GMT (Thu) Subject: New TR available (corrected version) Message-ID: <9404141107.AA11262@uk.ac.stir.cs.peseta> ***DO NOT FORWARD TO OTHER GROUPS*** (I omitted the authors names, and the IP number of the FTP site) University of Stirling (Scotland), Centre for Cognitive and Computational Neuroscience.... CCCN Technical report CCCN-15 Activation Functions, Computational Goals and Learning Rules for Local Processors with Contextual Guidance. Jim Kay and W.A. Phillips, Centre for Cognitive and Computational Neuroscience, Departments of Mathematics \& Statistics and Psychology University of Stirling Scotland, UK Information about context can enable local processors to discover latent variables that are relevant to the context within which they occur, and it can also guide short-term processing. For example, Becker and Hinton (1992) have shown how context can guide learning, and Hummel and Biederman (1992) have shown how it can guide processing in a large neural net for object recognition. This paper therefore studies the basic capabilities of a local processor with two distinct classes of inputs : receptive field inputs that provide the primary drive and contextual inputs that modulate their effects. The contextual predictions are used to guide processing without confusing them with the receptive field inputs. The processor's transfer function must therefore distinguish these two roles. Given these two classes of input the information in the output can be decomposed into four disjoint components to provide a space of possible goals in which the unsupervised learning of Linsker (1988) and the internally supervised learning of Becker and Hinton (1992) are special cases. 
Learning rules are derived from an information-theoretic objective function, and simulations show that a local processor trained with these rules and using an appropriate activation function has the elementary properties required. This report is available by anonymous FTP from ftp.cs.stir.ac.uk (139.153.254.29) in the directory pub/tr/cccn. The filename is TR15.ps.Z (as usual, this needs to be uncompressed and the PostScript printed). As a last resort, hard copy may be available: email lss at cs.stir.ac.uk with your postal address...  From L.P.OMard at lut.ac.uk Fri Apr 15 09:50:07 1994 From: L.P.OMard at lut.ac.uk (L.P.OMard) Date: Fri, 15 Apr 94 09:50:07 bst Subject: Auditory Modelling Software from Loughborough (LUTEar 1.5.0) Message-ID: <9404150850.AA20633@hpc.lut.ac.uk> Dear All, First of all, I would like to offer an apology for re-posting this message. It was brought to my notice that my title did not describe what "LUTEar" actually is. I have now given the message a more descriptive title (suggested by Adrian Rees), and I hope that anybody annoyed by having to read this post twice will forgive me. Please find below the README file for the latest version of the LUTEar Core Routines Library (CRL). The UNIX and Macintosh (THINK C 5.0) platform versions are now available (the MSDOS, Borland C, version will be ready by Friday at the latest) via anonymous FTP from:- suna.lut.ac.uk (131.231.16.2): /public/hulpo/lutear Connect via FTP, then log in with user name "anonymous" and give your e-mail address as the password. Download and read the "INSTALL150" file from the "/public/hulpo/lutear" directory (as also given above), then follow the installation procedure for your platform. If you have any problems at all, do not hesitate to get in contact with me.
Any comments, improvements, additions or corrections you may wish to suggest are very welcome; it is only by direct feedback from users that I can ensure that the Core Routines Library is a delight to use, as well as implementing state-of-the-art auditory models. ..Lowel. 1. Introduction As computer modelling of the auditory system has increased in complexity, the need for common working tools has become more pressing. Such tools are necessary to allow the rapid dissemination of new computer code, and to permit other members of the scientific community to replicate and challenge published results. The auditory models developed by the Speech and Hearing Laboratory at Loughborough University of Technology (UK) have received much attention, due principally to their simple form and the many published papers in which the models are used to explain auditory phenomena. The many requests for the computer code of the model simulations led to the group releasing the LUTEar Core Routines Library (CRL, version 1.0.0, October 1993) as a computational platform and set of coding conventions which supports a modular approach to auditory system modelling. The system is written in ANSI C and works on a wide range of operating systems. LUTEar has now been consolidated and much improved in the latest release (version 1.5.0). The CRL brings together established models, developed by the group and also contributed by other researchers in the field, which simulate various stages in the auditory process. Since the first release, the LUTEar CRL has been tested and used both at the originating laboratory and at many other sites. It has been used as a tool for speech processing and speech and voice analysis, as well as in the investigation of auditory phenomena, for which it was primarily constructed. This latest version of the CRL is a product of the proving ground to which it was subjected, and we hope that it will be as well received as was the first version.
Included with this release is a comprehensive series of test programs. These programs were used to test the CRL routines; they reproduce the behaviour of the respective published models included. The programs also provide examples of how the CRL may be used in auditory investigation programs. In addition, the programs read data from parameter files, and thus can be readily used to investigate further the behaviour of the models included in the CRL. The CRL routines have been subjected as much as possible to careful and exhaustive testing. No system, however, is infallible, so it is hoped that, with the gentle admonitions of the library's users, any problems or omissions will be quickly corrected. In addition, it is expected that the library will be augmented by further models as the scientific endeavour continues. Many weeks have been required to get the manual into its current form. It is not perfect, so gentle admonitions and suggested changes/additions are invited. 1.1. CRL Features The library has a modular structure which can be used to create auditory investigation/application systems or incorporated into existing code, as required. The library is intuitive in application, and has comprehensive error reporting embedded in efficient code. All the modules conform to a simple standard format. The design allows for plugging and unplugging alternative models of the same component auditory process for purposes of comparison. Ultimately, the CRL is a development based on the meld of experimental investigation methods and the tenets of good software engineering practice.
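The plug-and-play stage design described above is easy to picture in code. Below is a minimal sketch, in Python for brevity (the CRL itself is ANSI C); every name here is hypothetical and only mimics the style of the library, not its actual API:

```python
# Illustrative sketch of a modular, plug-and-play processing pipeline.
# All stage names are hypothetical stand-ins for real CRL modules.

def outer_middle_ear(channels):
    # Toy pre-emphasis stage: scale every sample (stands in for a filter).
    return [[s * 0.5 for s in ch] for ch in channels]

def basilar_membrane_linear(channels):
    # Placeholder for a linear filter-bank model of the basilar membrane.
    return [[s + 1.0 for s in ch] for ch in channels]

def basilar_membrane_nonlinear(channels):
    # Placeholder for an alternative non-linear model of the same stage.
    return [[s * s for s in ch] for ch in channels]

def run_pipeline(stages, channels):
    # Multi-channel data flows through each processing-stage unit in turn.
    for stage in stages:
        channels = stage(channels)
    return channels

signal = [[1.0, 2.0], [3.0, 4.0]]  # two channels, handled transparently
linear = run_pipeline([outer_middle_ear, basilar_membrane_linear], signal)
nonlin = run_pipeline([outer_middle_ear, basilar_membrane_nonlinear], signal)
```

Swapping the non-linear membrane model for the linear one changes a single entry in the stage list and nothing else, which is the point of the modular format.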
The following is a list of the principal features of the CRL:-

o Modular Structure;
o Processing stage data can be handled by a single unit;
o Processing stage units can link to data from other stages;
o Multi-channel data is handled invisibly;
o Efficient algorithms are used throughout;
o Meaningful routine and variable names are used;
o All routines are prefixed by their module name;
o Comprehensive error handling incorporated in routines.

1.1.1 Main features new in version 1.5.0

o Improved manual: greater detail with over 65 figures and an index;
o Sound data file format reading/writing support;
o Connection management system (invisible to user);
o Modules can now read/print their own parameters;
o Generic programming introduced;
o New analysis routines, including FFTs;
o Binaural processing support;
o Non-linear basilar membrane filter model;*
o Stochastic inner hair cell model;*
o McGregor neural cell model;
o Dendrite filter model;
o Spike generation module (for Meddis86 IHC model output);
o New stimulus generation modules;
o Parameter files can have comment or blank lines;
o Direction of warnings and error messages to a specified file.

* These models are still in development, prior to publishing, but they have been included for those who may wish to look at them.

+-------------------------+-----------------------------------------------+
|Lowel P. O'Mard PhD. | /\ / \ Speech & Hearing |
|Dept. of Human Sciences, | /\/\ /\/ \/ /\ \ /\ Laboratory |
|University of Technology,|_/\/\/ /\ \/\/ /\ /\/ \ \/ /\/\_ /\___ |
|Loughborough, | \/\/ \/\/\/ \/ /\ \/\/ /\ / |
|Leics. LE11 3TU, U.K. | \ /\/\/\ /\/ \ /\/\/ \/ Director: |
|L.P.OMard at lut.ac.uk | \/ \/ \/ Prof.
Ray Meddis | +-------------------------+-----------------------------------------------+  From usui at bpel.tutics.tut.ac.jp Wed Apr 6 22:31:17 1994 From: usui at bpel.tutics.tut.ac.jp (Shiro USUI) Date: Wed, 6 Apr 94 22:31:17 JST Subject: [amari@sat.t.u-tokyo.ac.jp: Announcement of new paper, Information Geometry and EM algorithm] Message-ID: <9404061331.AA25173@sv630.bpel-subnet> The following paper is now available via anonymous ftp from the neuroprose archive. It will appear in "Neural Networks". It consists of two files: am19.ps for the main body (85 pages) and figs.ps for the figures. If you have any problems, contact mura at sat.t.u-tokyo.ac.jp --------- FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/amari.geometryofem.tar.Z This includes the two files am19.ps and figs.ps. Use the Unix commands uncompress and tar to unpack the archive into the two files. --------- "Information Geometry of the EM and em Algorithms for Neural Networks" by Shun-ichi Amari In order to realize an input-output relation given by noise-contaminated examples, it is effective to use a stochastic model of neural networks. A model network includes hidden units whose activation values are neither specified nor observed. It is useful to estimate the hidden variables from the observed or specified input-output data based on the stochastic model. Two algorithms, the EM- and em-algorithms, have so far been proposed for this purpose. The EM-algorithm is an iterative statistical technique using the conditional expectation, and the em-algorithm is a geometrical one given by information geometry. The em-algorithm iteratively minimizes the Kullback-Leibler divergence in the manifold of neural networks. These two algorithms are equivalent in most cases. The present paper gives a unified information-geometrical framework for studying stochastic models of neural networks, by focusing on the EM and em algorithms, and proves a condition which guarantees their equivalence.
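For readers who have not met the statistical EM iteration the abstract starts from, here is a minimal sketch for one of the simplest models in the paper's scope, a two-component normal mixture in one dimension. Fixed unit variances and equal mixing weights are assumed purely for brevity; the paper itself treats the general information-geometric setting:

```python
import math

# Minimal EM for a 1-D mixture of two unit-variance Gaussians with equal
# mixing weights; only the two means are estimated. Illustration of the
# classical statistical EM step only, not the paper's em-algorithm.
def em_two_means(data, m1, m2, iterations=50):
    for _ in range(iterations):
        # E-step: posterior responsibility of component 1 for each point.
        resp = []
        for x in data:
            p1 = math.exp(-0.5 * (x - m1) ** 2)
            p2 = math.exp(-0.5 * (x - m2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means maximize the expected
        # complete-data log-likelihood.
        m1 = sum(r * x for r, x in zip(resp, data)) / sum(resp)
        m2 = sum((1 - r) * x for r, x in zip(resp, data)) / sum(1 - r for r in resp)
    return m1, m2

data = [-0.2, 0.0, 0.3, 4.8, 5.0, 5.2]   # two obvious clusters
m1, m2 = em_two_means(data, 1.0, 4.0)     # rough initial guesses
```

Each iteration provably does not decrease the likelihood, which is the property the paper recasts as alternating divergence minimization.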
Examples include 1) Boltzmann machines with hidden units, 2) mixtures of experts, 3) stochastic multilayer perceptrons, 4) normal mixture models, 5) hidden Markov models, among others.  From giles at research.nj.nec.com Sun Apr 17 14:10:29 1994 From: giles at research.nj.nec.com (Lee Giles) Date: Sun, 17 Apr 94 14:10:29 EDT Subject: Call for papers: IWANNT*95 Message-ID: <9404171810.AA01165@fuzzy> PLEASE POST CALL FOR PAPERS International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 You are invited to submit a paper to an international workshop on applications of neural networks and other intelligent systems to problems in telecommunications and information networking. This is the second workshop in a series that began in Princeton, New Jersey on October 18-20, 1993. This conference will take place in the center of Stockholm at a time of the year when the beautiful city is at its best. A tour of the famous archipelago adds to the attraction. This workshop will bring together active researchers in neural networks and related intelligent systems with potential users in the telecommunications industries. Today, telecommunications also encompasses the data transmission, cable TV, wireless, and entertainment industries. We expect the workshop to be a forum for discussion of applications issues relevant to this enlarged circle of telecommunications industries. It is sponsored by IEEE, INNS, SNNS (Swedish Neuronet Society), Bellcore and Ericsson.
Suggested Topics: Application of Neural Networks and other Intelligent Systems in: Network Management Congestion Control Adaptive Equalization Speech Recognition Security Verification Language ID/Translation Information Filtering Dynamic Routing Software Reliability Fraud Detection Financial and Market Prediction Adaptive User Interfaces Fault Identification and Prediction Character Recognition Adaptive Control Data Compression Please submit 6 copies of both a 50 word abstract and a 1000 word summary of your paper by September 16, 1994. Mail papers to the conference administrator: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South St. Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com Abstract and Summary Due: September 16, 1994 Author Notification of Acceptance: November 1, 1994 Camera-Ready Copy of Paper Due: February 10, 1995 Organizing Committee: General Chair Josh Alspector Bellcore, MRE 2P-396 445 South St. Morristown, NJ 07960-6438 (201) 829-4342 josh at bellcore.com Program Chair Rod Goodman Caltech 116-81 Pasadena, CA 91125 (818) 356-3677 rogo at micro.caltech.edu Publications Chair Timothy X Brown Bellcore, MRE 2E-378 445 South St. Morristown, NJ 07960-6438 (201) 829-4314 timxb at faline.bellcore.com Treasurer Anthony Jayakumar, Bellcore Publicity Atul Chhabra, NYNEX Lee Giles, NEC Local Arrangements Miklos Boda, Ellemtel Bengt Asker, Ericsson Program Committee Harald Brandt, Ellemtel Tzi-Dar Chiueh, National Taiwan University Michael Gell, British Telecom Larry Jackel, AT&T Bell Laboratories Thomas John, Southwestern Bell Adam Kowalczyk, Telecom Australia S Y Kung, Princeton University Tadashi Sone, NTT INNS Liaison Bernard Widrow, Stanford University IEEE Liaison Steve Weinstein, NEC Conference Administrator Betty Greer Bellcore, MRE 2P-295 445 South St. 
Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 Registration Form Name: _____________________________________________________________ Institution: __________________________________________________________ Mailing Address: ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ Telephone: ______________________________ Fax: ____________________________________ E-mail: _____________________________________________________________ I will attend | | Send more information | | Paper enclosed | | Registration Fee Enclosed | | ($400; $500 after Apr. 15, 1995; $200 students;) Please make sure your name is on the check (made out to IWANNT*95) Registration includes lunch, a boat tour of the Stockholm archipelago, and proceedings available at the conference. Mail to: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South St. Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com Deadline for submissions: September 16, 1994 Author Notification of Acceptance: November 1, 1994 Camera-Ready Copy of Paper Due: February 10, 1995 -- C. 
Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From hirsh at cs.rutgers.edu Sun Apr 17 17:27:17 1994 From: hirsh at cs.rutgers.edu (Haym Hirsh) Date: Sun, 17 Apr 94 17:27:17 EDT Subject: on-line information for ML94 and COLT94 Message-ID: Information for this summer's Machine Learning (ML94) and Computational Learning Theory (COLT94) conferences is now available on-line. Users of anonymous ftp can find the information on www.cs.rutgers.edu in the directory "/pub/learning94". Users of www information servers such as mosaic can find the information at "http://www.cs.rutgers.edu/pub/learning94/learning94.html". Please send comments or questions to ml94 at cs.rutgers.edu. Please note that the early registration deadline is May 27, and (for those planning on staying at the nearby Hyatt rather than in dorms), conference room rates are only guaranteed until June 10. Finally, the conferences coincide this year with World Cup soccer matches being held at Giants Stadium in East Rutherford, New Jersey. These games are expected to be the largest sporting event ever held in the New York metropolitan area, and we therefore strongly encourage conference attendees to make travel arrangements as early as possible. Haym  From tibs at utstat.toronto.edu Sun Apr 17 21:45:00 1994 From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu) Date: Sun, 17 Apr 94 21:45 EDT Subject: new book Message-ID: The following new book may be of interest to connectionists: An Introduction to the Bootstrap- Brad Efron and Rob Tibshirani This is the first general book written on the bootstrap and related topics (Jackknife, cross-validation ...) The purpose of this book is to present an overview of the bootstrap and related methods for assessing statistical accuracy. The objectives are a) to provide the reader with a working knowledge of bootstrap and related methodologies, and b) serve as a resource text for researchers in the area. 
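The basic recipe the book develops fits in a few lines: resample the observed data with replacement, recompute the statistic on each resample, and take the spread of the replicates as an estimate of the statistic's accuracy. A sketch of that idea (mine, not code from the book):

```python
import random
import statistics

def bootstrap_se(data, stat, reps=2000, seed=0):
    # Draw `reps` resamples with replacement, recompute `stat` on each,
    # and report the standard deviation of the replicates as the
    # estimated standard error of the statistic.
    rng = random.Random(seed)
    replicates = [stat([rng.choice(data) for _ in data]) for _ in range(reps)]
    return statistics.stdev(replicates)

data = [9.4, 10.1, 8.7, 11.3, 10.9, 9.8, 10.4, 9.1]
se_mean = bootstrap_se(data, statistics.mean)   # roughly s / sqrt(n)
```

For the mean, the bootstrap answer lands near the textbook formula s/sqrt(n); the appeal of the method is that the same code works unchanged for medians, correlations, or any other statistic for which no simple formula exists.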
The first 19 chapters are expository and are suitable for a one semester course at the upper undergraduate or masters level. They require one probability and one statistics course as a prerequisite. Each chapter has numerous problems. We have written this part of the book so that it will be accessible to non-specialists, particularly scientists who are seeking to learn about these methods for possible use in their own work. The remaining chapters are at a higher mathematical level, and together with parts of Chapters 6-19, are suitable for a graduate level course in statistics. This book is aimed at statisticians, upper year undergraduate and graduate students in statistics, and scientists, engineers and doctors who do quantitative research. Bradley Efron is the inventor of the bootstrap and is responsible for many of the major research advances in the area. Robert Tibshirani was a student of Dr. Efron's, has contributed to research in this area and is an active researcher and author in the statistical community. Ordering information: An Introduction to the Bootstrap- Efron and Tibshirani ISBN 0-412-04231-2 Chapman and Hall One Penn Plaza- 41st floor New York, Ny. 10119 Phone 212 564-1060 Customer service FAX 212-268-9964 Toll free order FAX 1-800-248-4724 ============================================================= | Rob Tibshirani To every man is given the key to | Dept. of Preventive the gates of heaven; | Medicine and Biostatistics the same key opens the gates of hell. | McMurrich Bldg. | University of Toronto Buddhist proverb | Toronto, Canada M5S 1A8 Phone: 1-416-978-4642 (biostats) Email: tibs at utstat.toronto.edu 416-978-0673 (stats) FAX: 1-416-978-8299  From jb at informatik.uni-bonn.de Tue Apr 19 13:13:59 1994 From: jb at informatik.uni-bonn.de (jb@informatik.uni-bonn.de) Date: Tue, 19 Apr 94 18:13:59 +0100 Subject: Positions for PhD students Message-ID: <9404191615.AA00497@olymp.informatik.uni-bonn.de> !!!!!!!!!!!!!!!!!!!!!!!! 
A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!!!! Research Positions available in Image Analysis/Pattern Recognition at the University of Bonn. Two positions for PhD students/Postdocs are open at the Computer Science Department of the University of Bonn, starting July 1, 1994. One position is available to conduct research in image sequence analysis for surveillance applications; the other is dedicated to video compression applications. (Salary at the research associate level.) Interested candidates should have a background in one of the following research fields: * computer vision and image processing * statistical pattern recognition * neural networks and/or connectionist modeling Applicants should send their Curriculum Vitae and a description of their research interests to Prof. J. Buhmann Institut fuer Informatik III Tel.: +49 228 550 380 Universitaet Bonn Fax: +49 228 550 382 Roemerstr. 164 email: jb at informatik.uni-bonn.de D-53117 Bonn jb at cs.bonn.edu Fed. Rep. Germany  From B344DSL at UTARLG.UTA.EDU Tue Apr 19 15:49:43 1994 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Tue, 19 Apr 1994 13:49:43 -0600 (CST) Subject: Conference preliminary program Message-ID: <01HBD0U7TU5U0013SH@UTARLG.UTA.EDU> Preliminary Program CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS Sponsored by the Metroplex Institute for Neural Dynamics (MIND) and the University of Texas at Arlington Co-sponsored by the Departments of Mathematics and Psychology MAY 5-7, 1994 UNIVERSITY OF TEXAS AT ARLINGTON MAIN LIBRARY, 6TH FLOOR PARLOR The topic of neural oscillation is currently of great interest to psychologists and neuroscientists alike. Recently it has been observed that neurons in separate areas of the brain will oscillate in synchrony in response to certain stimuli.
One hypothesized function for such synchronized oscillations is to solve the "binding problem": that is, how disparate features of an object (e.g., a person's face and voice) are tied together into a single unitary whole. Some bold speculators (such as Francis Crick in his recent book, The Astonishing Hypothesis) even argue that synchronized neural oscillations form the basis for consciousness. It is still possible to schedule poster presentations. Those interested in presenting a poster are invited to submit abstracts (1-2 paragraphs) of any work related to the theme of the conference. Abstracts should be submitted, by e-mail, snail mail, or fax, to: Professor Daniel S. Levine Department of Mathematics, University of Texas at Arlington 411 S. Nedderman Drive Arlington, TX 76019-0408 Office telephone: 817-273-3598, fax: 817-794-5802 e-mail: b344dsl at utarlg.uta.edu Further inquiries about the conference can be addressed to Professor Levine or to the other two conference organizers: Professor Vincent Brown Mr. Timothy Shirey 817-273-3247 214-495-3500 or 214-422-4570 b096vrb at utarlg.uta.edu 73353.3524 at compuserve.com Please distribute this announcement to anyone you think may be interested in the conference.
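For readers new to the area, the synchronization phenomenon behind the binding hypothesis can be demonstrated with the simplest possible toy: two phase oscillators with mutual sinusoidal (Kuramoto-style) coupling. This is an illustration only, not any speaker's model, and all parameter values are arbitrary:

```python
import math

# Two coupled phase oscillators. With identical natural frequencies and
# positive coupling k, the phase difference decays toward zero, i.e. the
# pair synchronizes. Simple Euler integration.
def final_phase_gap(theta1, theta2, omega=1.0, k=0.5, dt=0.01, steps=1000):
    for _ in range(steps):
        d1 = omega + k * math.sin(theta2 - theta1)
        d2 = omega + k * math.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
    return abs(theta1 - theta2)

gap = final_phase_gap(0.0, 1.0)   # oscillators start one radian apart
```

Starting a full radian apart, the two phases lock together within a few simulated seconds; with coupling k set to zero they would drift at their own rates indefinitely.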
INVITED SPEAKERS Bill Baird, University of California/Berkeley "Grammatical Inference by Attentional Control of Synchronization in an Architecture of Coupled Oscillatory Associative Memories" Adi Bulsara, Naval Research Laboratories/San Diego "Complexity in the Neurosciences: Signals, Noise, Nonlinearity, and the Meanderings of a Theoretical Physicist" Alexander Grunewald, Boston University "Binding of Object Representations by Synchronous Cortical Dynamics Explains Temporal Order and Spatial Pooling Data" David Horn, Tel Aviv University "Segmentation and Binding in Oscillatory Networks" Alianna Maren, Accurate Automation Corporation (Title to be added) George Mpitsos, Oregon State University "Attractor Gradients: Architects of Network Organization in Biological Systems" Martin Stemmler, California Institute of Technology "Synchronization and Oscillations in Spiking Networks" Roger Traub, IBM/New York (Title to be added) Robert Wong, Downstate Medical Center/Brooklyn (Title to be added) Geoffrey Yuen, Northwestern University (Title to be added) OTHER TALKS Section I. Neuroscience. D. Baxter, C. Canavier, H. Lechner, University of Texas/Houston, J. Clark, Rice University, & J. Byrne, University of Texas/Houston "Coexisting Stable Oscillatory States in a Model Neuron Suggest Novel Mechanisms for the Effects of Synaptic Inputs and Neuromodulators" Guenter Gross & Barry Rhoades, University of North Texas "Spontaneous and Evoked Oscillatory Bursting States in Cultured Networks" Elizabeth Thomas, Willamette College "A Computational Model of Spindle Oscillations" Section II. Theory. 
Anthony Brown, Defense Research Agency, United Kingdom "Preliminary Work on the Design of an Analog Oscillatory Neural Network" Arun Jagota, Memphis State University, & Xin Wang, University of California/Los Angeles "Oscillations in Discrete and Continuous Hopfield Networks" Jacek Kowalski, University of North Texas (Title to be added) Nam Seog Park, Dave Robertson, & Keith Stenning, University of Edinburgh "From Dynamic Bindings to Further Symbolic Knowledge Representation Using Synchronous Activity of Neurons" Seth Wolpert, University of Maine "Modeling Neural Oscillations Using VLSI-based Neuromimes" Section III. Psychology. David DeMaris, University of Texas/Austin (Title to be added) Sriram Govindarajan & Vincent Brown, University of Texas/Arlington "Feature Binding and Illusory Conjunctions: Psychological Constraints and a Model" Mark Steyvers, Indiana University "Use of Synchronized Chaotic Oscillations to Model Multistability in Perceptual Grouping" POSTERS Shien-Fong Lin, Rashi Abbas, & John Wikswo, Jr., Vanderbilt University "One-dimensional Magnetic Measurement of Two-origin Bioelectric Current Oscillation" George Mobus & Paul Fisher, University of North Texas "Edge-of-Chaos Search: Using a Quasi-chaotic Oscillator Circuit for Foraging in a Mobile Autonomous Robot" Andrew Penz, Texas Instruments (Title to be added) Barry Rhoades, University of North Texas "Global Neurochemical Determination of Local EEG in the Olfactory Bulb" David Young, Louisiana State University "Oscillations Created by the Fragmented Access of Distributed Connectionist Representations" Tentative Schedule Posters (ongoing throughout the conference): Lin, Mobus, Penz, Rhoades, Young Thursday AM: Introductions, Mpitsos, Baxter, Stemmler Thursday PM: Yuen, Thomas, Kowalski, Gross Friday AM: Wong, Traub, Baird, Jagota Friday PM: A.
Brown, Bulsara, Maren, Horn Saturday AM: Park, DeMaris, Wolpert, Grunewald, Steyvers Saturday PM: Govindarajan, Panel Discussion Registration and Travel Information Official Conference Motel: Park Inn 703 Benge Drive Arlington, TX 76013 1-800-777-0100 or 817-860-2323 A block of rooms has been reserved at the Park Inn for $35 a night (single or double). Room sharing arrangements are possible. Reservations should be made directly through the motel. Official Conference Travel Agent: Airline reservations to Dallas-Fort Worth airport should be made through Dan Dipert Travel in Arlington, 1-800-443-5335. For those who wish to fly on American Airlines, a Star File account has been set up for a 5% discount off lowest available fares (two-week advance purchase, staying over Saturday night) or 10% off regular coach fare; arrangements for Star File reservations should be made through Dan Dipert. Please let the conference organizers know (by e-mail or telephone) when you plan to arrive: some people can be met at the airport (about 30 minutes from Arlington), others can call Super Shuttle at 817-329-2000 upon arrival for transportation to the Park Inn (about $14-$16 per person). Registration for the conference is $25 for students, $65 for non-student oral or poster presenters, and $85 for others. MIND members will have $20 (or $10 for students) deducted from the registration fee. A registration form is attached to this announcement. Registrants will receive the MIND monthly newsletter (by e-mail when possible) for the remainder of 1994.
REGISTRATION FOR MIND CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS, UNIVERSITY OF TEXAS AT ARLINGTON, MAY 5-7, 1994 Name ______________________________________________________________ Address ___________________________________________________________ ___________________________________________________________ ___________________________________________________________ ____________________________________________________________ E-Mail __________________________________________________________ Telephone _________________________________________________________ Registration fee enclosed: _____ $15 Student, member of MIND _____ $25 Student _____ $65 Non-student oral or poster presenter _____ $65 Non-student member of MIND _____ $85 All others Will you be staying at the Park Inn? ____ Yes ____ No Are you planning to share a room with someone you know? ____ Yes ____ No If so, please list that person's name __________________________ If not, would you be interested in sharing a room with another conference attendee to be assigned? ____ Yes ____ No PLEASE REMEMBER TO CALL THE PARK INN DIRECTLY FOR YOUR RESERVATION (WHETHER SINGLE OR DOUBLE) AT 1-800-777-0100 OR 817-860-2323.  From joerg at nathan.gmd.de Wed Apr 20 04:50:40 1994 From: joerg at nathan.gmd.de (Joerg Kindermann) Date: Wed, 20 Apr 1994 10:50:40 +0200 Subject: postdoc position in NN available Message-ID: <199404200850.AA28569@tetris.gmd.de> !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!! The research group "Adaptive Systems" of the German National Research Center for Computer Science (GMD) in Sankt Augustin near Bonn has a postdoctoral research position available. The group, headed by Dr. Muehlenbein, consists of about 15 researchers working in the areas of genetic algorithms, reflective statistics and robotics.
Appointment will be for two years (as of 1 July 1994 or later). It can possibly be extended by another year. Applicants should have a good background and strong research interests in one or more of the following areas: - statistical properties of neural networks and related learning algorithms - predictive reliability of statistical models - adaptive algorithms The successful candidate is expected to contribute actively to our research projects in the area of reflective exploration (selective sampling, model selection) with neural networks and related statistical methods. Good programming skills in C++ or C are required. Applicants should send their Curriculum Vitae and a description of their research interests to one of the addresses below. Applications should be received by May 8, 1994. Dr. Gerhard Paass Dr. Joerg Kindermann paass at gmd.de kindermann at gmd.de phone: +49 02241 142698 phone: +49 02241 142437 System Design Technology Institute German National Research Center for Computer Science (GMD) Schloss Birlinghoven D-53754 St. Augustin Germany  From david at cns.edinburgh.ac.uk Wed Apr 20 11:48:39 1994 From: david at cns.edinburgh.ac.uk (David Willshaw) Date: Wed, 20 Apr 94 11:48:39 BST Subject: Contents of NETWORK, Volume 5, Number 1.
Message-ID: <10255.9404201048@cns.ed.ac.uk> Network: Computation in Neural Systems Volume 5 Number 1 February 1994
----------------------------------------
PAPERS
  1 Hebbian learning is jointly controlled by electrotonic and input structure - K Y Tsai, N T Carnevale and T H Brown
 21 Efficient mapping from neuroanatomical to electrotonic space - K Y Tsai, N T Carnevale, B J Claiborne and T H Brown
 47 Regulating the nonlinear dynamics of olfactory cortex - Xingbao Wu and H Liljenstrom
 61 Spontaneous symmetry breaking and the formation of columnar structures in the primary visual cortex - K Yamagishi
 75 Using generalized principal component analysis to achieve associative memory in a Hopfield net - S Coombes and J G Taylor
 89 Learning temporal sequences by excitatory synaptic changes only - Y Metzger and D Lehmann
101 Hierarchical neural networks for time-series analysis and control - T Frohlinghaus, A Weichert and P Rujan
117 ABSTRACTS SECTION
119 BOOK REVIEWS
119 An introduction to the modeling of neural networks - P Peretto
120 Computational learning theory - M Anthony and N Biggs
NETWORK welcomes research Papers and Letters where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. 2,500 words is the expected limit for research Letters. Articles can be published from authors' TeX source codes. Macros can be supplied to produce papers in the form suitable for refereeing and for IOP house style. For more details contact the Editorial Services Manager at IOP Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX, UK.
Telephone: (+44) 0272 297481 Fax: (+44) 0272 294318 Telex: 449149 INSTP G Email Janet: net at uk.co.ioppublishing Subscription Information Frequency: quarterly Subscription rates: Institution 192.00 pounds (US$376.00) Individual (UK) 32.00 pounds (Overseas) 35.00 pounds (US$75.00) A microfiche edition is also available  From riedml at ira.uka.de Thu Apr 21 07:41:29 1994 From: riedml at ira.uka.de (Martin Riedmiller) Date: Thu, 21 Apr 94 7:41:29 MET DST Subject: Paper available Message-ID: <"iraun1.ira.995:21.04.94.05.45.26"@ira.uka.de> The following paper is available via anonymous ftp from i11s16.ira.uka.de. Instructions for retrieval follow the abstract. ***************************************************************** Advanced Supervised Learning in Multi-layer Perceptrons - From Backpropagation to Adaptive Learning Algorithms Martin Riedmiller Institut fuer Logik, Komplexitaet und Deduktionssysteme University of Karlsruhe W-76128 Karlsruhe FRG riedml at ira.uka.de ABSTRACT Since the presentation of the backpropagation algorithm, a vast variety of improvements of the technique for training the weights in a feed-forward neural network has been proposed. The following article introduces the concept of supervised learning in multi-layer perceptrons based on the technique of gradient descent. Some problems and drawbacks of the original backpropagation learning procedure are discussed, eventually leading to the development of more sophisticated techniques. This article concentrates on adaptive learning strategies. Some of the most popular learning algorithms are described and discussed according to their classification in terms of global and local adaptation strategies. The behavior of several learning procedures on some popular benchmark problems is reported, thereby illuminating convergence, robustness, and scaling properties of the respective algorithms.
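To give a flavour of the "local adaptation strategies" the abstract mentions, here is a toy sketch of a sign-based rule with a per-weight step size, in the spirit of (but not identical to) published algorithms such as Rprop. The objective and all parameter values are illustrative only:

```python
# Toy sign-based training with a per-weight adaptive step size: grow the
# step while the gradient keeps its sign, shrink it after a sign change.
# Objective E(w) = (w - 3)^2, so the minimum is at w = 3. Parameters are
# illustrative, not those of any specific published algorithm.
def train(w=0.0, step=0.1, up=1.2, down=0.5, iterations=60):
    prev_sign = 0
    for _ in range(iterations):
        grad = 2.0 * (w - 3.0)
        sign = (grad > 0) - (grad < 0)
        if sign == prev_sign and sign != 0:
            step = min(step * up, 1.0)      # same direction: accelerate
        elif sign != prev_sign and prev_sign != 0:
            step = max(step * down, 1e-6)   # sign flipped: back off
        w -= sign * step                    # update uses only the sign
        prev_sign = sign
    return w

w = train()   # approaches the minimum at w = 3.0
```

Because the update uses only the sign of the gradient, the step size, not the gradient magnitude, governs progress; this is the "local" character of such schemes, since each weight adapts its own step independently.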
This paper has been accepted for publication in the special issue on Neural Network Standards of "Computer Standards & Interfaces", volume 16, edited by J. Fulcher. Elsevier Science Publishers, Amsterdam, 1994. ***************************************************************** ------------------------------ To obtain a copy of this paper, please use the following FTP instructions: ftp i11s16.ira.uka.de (or ftp 129.13.33.16) Name: anonymous Password: (your email address) ftp> cd pub/neuro/papers ftp> binary ftp> get riedml.csi94.ps.Z ftp> quit % uncompress riedml.csi94.ps.Z % lpr riedml.csi94.ps  From tfb007 at hp1.uni-rostock.de Fri Apr 22 11:05:33 1994 From: tfb007 at hp1.uni-rostock.de (Neural Network Group) Date: Fri, 22 Apr 94 11:05:33 MESZ Subject: pattern segmentation request Message-ID: Using combinations of classical transforms (Fourier transform, KLT and modifications) we constructed some modules for feature extraction and made investigations with recurrent NN topologies based on statistical methods, such as fractal NNs, for document-analysis problems. In our studies we've found some help in publications from the National Institute of Standards and Technology, in the following papers: ir_4766.ps Optimization of Neural Network Topology and Information Content Using Boltzmann Methods ir_4776.ps Training Feed Forward Neural Networks Using Conjugate Gradients ir_4824.ps Karhunen Loeve Feature Extraction for Neural Handwritten Character Recognition ir_4893.ps Topological Separation Versus Weight Sharing in Neural Network Optimization Here we got interesting results and good performance on pattern recognition tasks. Pattern recognition is an interesting field in itself, but there is also the problem of finding any pattern, or a special pattern, in an image: the problem of pattern segmentation. Or, in a more general case: how to find a special object at any position and at different sizes in an image?
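One common baseline for this kind of object search is brute-force multi-scale scanning: slide windows of several sizes across the image and score each one with whatever pattern classifier is available. A toy sketch of that idea (the `score_fn` interface and the bright-patch example are invented for illustration, not from the posting):

```python
import numpy as np

def search(image, score_fn, window_sizes):
    """Exhaustive multi-scale sliding-window search: score every window
    at every position and size, return the best-scoring box (y, x, h, w)."""
    best_score, best_box = -np.inf, None
    for h, w in window_sizes:
        for y in range(image.shape[0] - h + 1):
            for x in range(image.shape[1] - w + 1):
                s = score_fn(image[y:y + h, x:x + w])
                if s > best_score:
                    best_score, best_box = s, (y, x, h, w)
    return best_box, best_score

# toy example: locate a bright 5x5 patch in a dark 20x20 image,
# using mean intensity as a stand-in for a trained classifier score
img = np.zeros((20, 20))
img[7:12, 4:9] = 1.0
box, score = search(img, np.mean, [(5, 5), (8, 8), (12, 12)])
```

The cost is cubic in practice (positions times scales), which is why the literature of the time explored genetic algorithms and other heuristics to prune this search.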
Referring to that problem, I've read a NIST article about pattern segmentation using Gabor functions and genetic algorithms (ir_xxxx.ps). But we couldn't get any good results with algorithms for object search using neural networks. Over the last few weeks I've read a lot of publications from the connectionists archive, but couldn't find much help in solving that problem. Do you have any hints on how to get some information about algorithms and methods for pattern segmentation and/or, in a more general sense, object-search algorithms in 2-dimensional images? Many thanks Gratefully yours Welf Wustlich from the Neural Network Group Rostock  From kedar at gate.ee.lsu.edu Mon Apr 25 13:11:37 1994 From: kedar at gate.ee.lsu.edu (Kedar Babu Madineni) Date: Mon, 25 Apr 94 12:11:37 CDT Subject: Report available Message-ID: <9404251711.AA01430@gate.ee.lsu.edu> The technical report mentioned below is now available. If you would like a PostScript version of this paper, please send your request to me by email. ---------------------------------------------------------------- Technical Report ECE/LSU 94-41, April 25, 1994 TWO CORNER CLASSIFICATION ALGORITHMS FOR TRAINING THE KAK FEEDFORWARD NEURAL NETWORK Abstract This paper presents two new algorithms, ALG1 and ALG2, for training the Kak feedforward neural network. The learning power and the generalization capabilities of this class of algorithms are presented. Compared with backpropagation, we obtain the following results: ALG2's generalization capability was sometimes better than, and otherwise comparable to, that of backpropagation. ALG1's generalization capability was also comparable to that of backpropagation. The main advantage of the proposed algorithms is in the training time: both ALG1 and ALG2 outperformed backpropagation in learning time, as expected. From these results, it can be said that ALG2 is superior to backpropagation in learning time and at least comparable in generalization.
The comparison experiments were performed on time-series prediction of interest rates of corporate bonds over a time period of three years, and creditworthiness of a customer.  From david at cns.edinburgh.ac.uk Tue Apr 26 09:43:19 1994 From: david at cns.edinburgh.ac.uk (david@cns.edinburgh.ac.uk) Date: Tue, 26 Apr 94 09:43:19 BST Subject: Contents of latest issue of NETWORK Message-ID: <9634.9404260843@cns.ed.ac.uk> NETWORK: Computation in Neural Systems Volume 5 Number 2 May 1994 PAPERS 121 Coding of odour quality: roles of convergence and inhibition J-P Rospars and J-C Fort 147 Designing receptive fields for highest fidelity D L Ruderman 157 Efficient stereo coding in the multiscale representation Zhaoping Li and J J Atick 175 Intracortical connectivity of pyramidal and stellate cells: estimates of synaptic densities and coupling symmetry D T J Liley and J J Wright 191 A millimetric-scale simulation of electrocortical wave dynamics based on anatomical estimates of cortical synaptic density J J Wright and D T J Liley 203 Inductive inference and neural nets J Bernasconi and K Gustafson 229 Effects of temporary synaptic strengthening and residual cell potential in the retrieval of patterns T Nakano and O Moriyama 241 A shape-recognition model using dynamical links E Bienenstock and R Doursat 259 Modelling of the Bonhoeffer-effect during LTP learning A Koester, A Zippelius and R Kree 277 Optimal signalling in attractor neural networks I Meilijson and E Ruppin NETWORK welcomes research papers where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. Articles can be published from authors' TeX source codes. 
NETWORK is published as four issues per annual volume (quarterly in February, May, August, and November) by Institute of Physics Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. Subscription Information For all countries, except the United States, Canada and Mexico, the institutional subscription rate is 192.00 pounds. The rate for individual subscribers is 32.00 pounds (UK) and 35.00 pounds (overseas). Delivery is by air-speeded mail from the UK to subscribers in most overseas countries, and by airfreight and registered mail to subscribers in India. Orders to: Institute of Physics Publishing, Order Processing Department, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. For the US, Canada and Mexico, the institutional subscription rate is US$376.00. The rate for individual subscribers is US$75.00. Delivery is by transatlantic airfreight and onward mailing. Orders to: American Institute of Physics, Subscriber Services, 500 Sunnyside Blvd, Woodbury, NY 11797-2999, USA. Editorial and Marketing Office Institute of Physics Publishing Techno House, Redcliffe Way Bristol BS1 6NX, UK Telephone: 0272 297481 Telex: 449149 Facsimile: 0272 294318 Email: within JANET: net at uk.co.ioppublishing from other networks: net at ioppublishing.co.uk x400: /s=net/o=ioppl/prmd=iopp/admd=0/c=gb  From omlinc at cs.rpi.edu Tue Apr 26 14:33:41 1994 From: omlinc at cs.rpi.edu (omlinc@cs.rpi.edu) Date: Tue, 26 Apr 94 14:33:41 EDT Subject: TR available from neuroprose archive Message-ID: <9404261833.AA15782@colossus.cs.rpi.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/omlin.dfa_encoding.ps.Z The following paper is now available from the neuroprose archive. Please send your comments regarding the paper to omlinc at cs.rpi.edu. -Christian Constructing Deterministic Finite-State Automata in Recurrent Neural Networks Christian W. Omlin (a,b), C. 
Lee Giles (a,c) (a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (b) CS Department, Rensselaer Polytechnic Institute, Troy, NY 12180 (c) UMIACS, University of Maryland, College Park, MD 20742 Abstract Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contributes to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e. the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding the strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a "programmed" neural network with n+1 neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature. (23 pages, 9 figures, 2 tables) This paper is also available as Technical Report No. 94-3, Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180.  From dayhoff at src.umd.edu Wed Apr 27 00:28:04 1994 From: dayhoff at src.umd.edu (Judith E.
Dayhoff) Date: Wed, 27 Apr 1994 00:28:04 -0400 Subject: Please post for INNS/WCNN'94 Message-ID: <199404270428.AAA07366@newra.src.umd.edu> WCNN'94 ************************** *REGISTRATION INFORMATION* ************************** WORLD CONGRESS ON NEURAL NETWORKS, SAN DIEGO, CALIFORNIA, JUNE 5-9, 1994 *** Neural Network Industrial Exposition INNS University Short Courses Six Plenary Talks *** Five Special Sessions Twenty Sessions of Invited and Contributed Talks At least 9 SIG sessions *** Sponsored and Organized by the International Neural Network Society (INNS) *** Table of Contents of This Announcement: 1. NEWS AND CHANGES TO PRELIMINARY PROGRAM 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES 3. PLENARY TALKS 4. SPECIAL SESSIONS 5. INVITED/CONTRIBUTED SESSIONS 6. SHORT COURSES 7. TRAVEL ARRANGEMENTS 8. NOTE 9. REGISTRATION 10. HOTEL RESERVATIONS =================================================================== 1. NEWS From the Organizing Committee: The INNS Office has had a management change. We apologize for any delay, confusion or inconvenience you may have experienced during the transition. The Registration Deadline has been extended to MAY 16. Many of you know our new management from previous Congresses: Talley Associates (Att: Melissa Bishop) Address: 875 Kings Highway, Suite 200 Woodbury, NJ 08096; Voice 609-845-1720; FAX 609-853-0411 You may use this FAX for Congress Registration. Details of the change will be presented Monday June 6, 5-6 pm during the Presidential Speech. Your support is making WCNN'94 a success. Signed: Paul Werbos, Bernard Widrow, Harold Szu Should you have any specific recommendation about ways to make WCNN-94 more productive, please contact any Governors that you know or Dr.
Harold Szu at (301) 390-3097; FAX (301) 390-3923; e-mail: hszu%ulysses at relay.nswc.navy.mil. To improve the structure of the Congress and achieve a more compact schedule for attendees, several changes have been made since the Preliminary Program: A. Short Courses Start Sunday Morning June 5. All Saturday Short Courses have been moved to Monday June 6, with the exception that Course [I] (J. Dayhoff) will be given Sunday 8AM - 12PM. To make room in the schedule for that change, Course [H] (W. Freeman) moves from Sunday to Monday 8AM - 12PM. On Monday the Short Courses are concurrent with the Exposition. B. The SPECIAL OFFER has been made more generous, to encourage students. For each of your Short Course registrations you can give a colleague in the same or lower-priced Registration Category a FREE Short Course! Enter his or her name on the Registration Form below ``TOTAL.'' The recipient of the gift should indicate ``Gift from [your name]'' at the time of registration. IF YOU HAVE ALREADY PRE-REGISTERED, arrange the gift now by FAX to 609-853-0411. =================================================================== 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES Monday June 6: 8 - 11 AM: Hardware; Software Video-demo talks; and Posters; 10 - 11 AM: Student Contest. The Contest is free-form, permitting many types of imaginative entry; Certificates and T-shirts will be awarded; no Grand Prize. 11 - Noon: Panel on Government Funding + Two New Lectures - in the Exposition Area: 12 - 1PM: Teuvo Kohonen: ``Exotic Applications of the Self-Organizing Map'' 5 - 6PM: Walter Freeman (Presidential Lecture): "Noncomputational Neural Networks" =================================================================== 3. PLENARY TALKS: Lotfi Zadeh, UC Berkeley "Fuzzy Logic, Neural Networks, and Soft Computing" Per Bak, Brookhaven Nat. Lab. 
"Introduction to Self-Organized Criticality" Bernard Widrow, Stanford University "Adaptive Inverse Control" Melanie Mitchell, Santa Fe Institute "Genetic Algorithm Applications" Paul Werbos, NSF "Brain-Like Intelligence in Artificial Models: How Do We Really Get There?" John G. Taylor, King's College London "Capturing What It Is Like To Be: Modelling the Mind by Neural Networks" =================================================================== 4. SPECIAL SESSIONS "Biomedical Applications of Neural Networks," (Tuesday) "Commercial and Industrial Applications of Neural Networks," (Tuesday) "Financial and Economic Applications of Neural Networks," (Wednesday) "Neural Networks in Chemical Engineering," (Thursday) "Mind, Brain and Consciousness" (Thursday) =================================================================== 5. INVITED/CONTRIBUTED SESSIONS (Too many to list here!) June 7 - 9 Also at least 9 Special Interest Group (SIG) Sessions are tentatively scheduled for Wednesday, June 8 from 8 -9:30 pm. =================================================================== 6. SHORT COURSES 8am - 12pm Sunday, June 5 [M] Gail Carpenter, Boston University: Adaptive Resonance Theory [L] Bernard Widrow, Stanford University: Adaptive Filters, Adaptive Controls, Adaptive Neural Networks and Applications [I] Judith Dayhoff, University of Maryland: Neurodynamics of Temporal Processing [G] Shun-Ichi Amari, University of Tokyo: Learning Curves, Generalization Errors and Model Selection 1pm - 5pm Sunday, June 5 [U] Lotfi Zadeh, University of California, Berkeley: Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs [K] Paul Werbos, NSF: From Backpropagation to Real-Time Control [O] Stephen Grossberg, Boston University: Autonomous Neurodynamics: From Perception to Action [E] John Taylor, King's College, London: Stochastic Neural Computing: From Living Neurons to Hardware 6pm - 10 pm Sunday, June 5 [V] Nicolai G. Rambidi, Int'l. Research Inst. 
for Management Sciences: Image Processing and Pattern Recognition Based on Molecular Neural Networks [C] Christof Koch, California Institute of Technology: Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits 8am - 12pm Monday, June 6 [T] Melanie Mitchell, Santa Fe Institute: Genetic Algorithms, Theory and Applications [R] David Casasent, Carnegie Mellon University: Pattern Recognition and Neural Networks [H] Walter Freeman, University of California, Berkeley: Review of Neurobiology: From Single Neurons to Chaotic Dynamics of the Cerebral Cortex [P] Lee Giles, NEC Research Institute: Dynamically-driven Recurrent Networks: Models, Training Algorithms and Applications 1pm - 5pm Monday, June 6 [S] Per Bak, Brookhaven National Laboratory: Introduction to Self-Organized Criticality [D] Kunihiko Fukushima, Osaka University: Visual Pattern Recognition with Neural Networks [B] James A. Anderson, Brown University: Neural Networks Computation as Viewed by Cognitive Science and Neuroscience [Q] Alianna Maren, Accurate Automation Corporation: Introduction to Neural Network Applications 6pm - 10 pm Monday, June 6 [N] Takeshi Yamakawa, Kyushu Institute of Technology: What are the Differences and Similarities among Fuzzy, Neural, and Chaotic Systems? [A] Teuvo Kohonen, Helsinki University of Technology: Advances in the Theory and Applications of Self-Organizing Maps [J] Richard A. Andersen, Massachusetts Institute of Technology: Neurobiologically Plausible Network Models [F] Harold Szu, Naval Surface Warfare Center: Spatiotemporal Information Processing by Means of McCulloch-Pitts and Chaotic Neurons =================================================================== 7. TRAVEL RESERVATIONS: Executive Travel Associates (ETA) has been selected as the official travel company for the World Congress on Neural Networks.
ETA offers the lowest available fares on any airline at time of booking when you contact them at US phone number 202-828-3501 or toll free (in the US) at 800-562-0189 and identify yourself as a participant in the Congress. Flights booked on American Airlines, the official airline for this meeting, will result in an additional discount. Please provide the booking agent you use with the code: Star #S0464FS =================================================================== 8. ** NOTE ** Neither WCNN'94 (INNS) nor the Hotel can accept "electronic registration" or "electronic reservations" by E-Mail. The Hotel will accept telephoned reservations (note the May 6 deadline below!). For WCNN'94 Registration, use surface/air mail or FAX. ********************************************************************** 9. REGISTRATION ----Cut here, print out (Monospaced font such as Monaco 9, 62 lines/pg)---- REGISTRATION FORM WCNN'94 at Town & Country Hotel, San Diego, California June 5 - 9, 1994 REGISTRATION FEE (includes all sessions, plenaries, proceedings, reception, AND Industrial Exposition. Separate registration for Short Courses, below.)
Before May 16, 1994 On-Site FEE ENCLOSED _ INNS Member Member Number__________ US$280 US$395 $_________ _ Non Members: US$380 US$495 $_________ _ Full Time Students: US$110 US$135 $_________ _ Spouse/Guest: US$45 US$55 $_________ Or Neural Network Industrial Exposition -Only- _ US$55 US$55 $_________ *************************************************** INNS UNIVERSITY SHORT COURSE REGISTRATION (must be received by May 16, 1994) Circle paid selections: A B C D E F G H I J K L M N O P Q R S T U V Circle free selection (Pay for 2 short courses, get the third FREE) A B C D E F G H I J K L M N O P Q R S T U V SHORT COURSE FEE _ INNS Members: US$275 $_________ _ Non Members: US$325 $_________ _ Full Time Students: US$150 $_________ Congress + Short Course TOTAL: $_________ For each paid course, nominate an accompanying person, registering in the same or lower category, for a free course: Mr./Ms.___________________ That person must also register by May 16, and indicate "Gift from [your name]" on the registration form. METHODS OF PAYMENT _ $ CHECK. All check payments made outside of the USA must be made on a USA bank in US dollars, payable to WCNN'94 _ $ CREDIT CARDS. Only VISA and MasterCard accepted. Registrations sent by FAX or surface/air mail must include an authorized signature. ( ) Visa ( ) M/C Name on Credit Card ______________________________________ Credit Card Number _______________________________________ Exp. Date ________________________________________________ Authorized Signature: _____________________________________ FAX: 609-853-0411 then Mail to INNS/WCNN'94 c/o Talley Associates, 875 Kings Highway, Suite 200 Woodbury, NJ 08096 ========================================================================== 10. 
HOTEL RESERVATIONS REGISTER AT TOWN & COUNTRY HOTEL, SAN DIEGO, CALIFORNIA (WCNN'94 Site) -----Cut here and print (Monospaced font such as Monaco 9, 62 lines/pg)---- Mail to Reservations, Town and Country Hotel, 500 Hotel Circle North, San Diego, CA 92108, USA; or FAX to 619-291-3584 Telephone: (800)772-8527 or (619)291-7131 INNS - WCNN'94 International Neural Network Society, World Congress on Neural Networks '94 _ Single: US$70 - US$95 plus applicable taxes _ Double: US$80 - US$105 plus applicable taxes Check in time: 3:00 pm. Check out time: 12:00 noon. Room reservations will be available on a first-come, first-serve basis until May 6, 1994. Reservations received after this date will be accepted on a space-available basis and cannot be guaranteed. Reservations after May 6 will also be subject to the rates prevailing at the time of the reservation. A confirmation of your reservation will be sent to you by the hotel. A first night's room deposit is required to confirm a reservation. PRINT OR TYPE ALL INFORMATION. Single________ Double_______ Arrival Date and approximate time:________________________________ Departure Date and approximate time:______________________________ Names of all occupants of room:____________________________________ RESERVATION CONFIRMATION SHOULD BE SENT TO: Name:____________________ Address:____________________________________________________________ ____________________________________________________________ City:____________________State/Province:_________________Country:__________ Type of Credit Card: (circle one) VISA/ MasterCard/ AmEx/ Diner's Club/ Discover/ Optima Card Number:______________________________ Exp. Date____________________ Name as it appears on your Card:______________________________ Authorized Signature: ______________________________ Cancellation Policy: Deposits are refundable if reservation is cancelled 48 hours in advance of arrival date. Be sure to record your cancellation number. 
Please indicate any disability which will require special assistance: _____________________________________________ FAX to 619-291-3584 ==========================================================================  From tfb007 at hp1.uni-rostock.de Wed Apr 27 22:29:19 1994 From: tfb007 at hp1.uni-rostock.de (Neural Network Group) Date: Wed, 27 Apr 94 22:29:19 MESZ Subject: FTP-access to NIST-Archive Message-ID: Because of too many questions like "how can I get the files irxxxx... from NIST?", I decided to post the address to the connectionists list: Name: SEQUOYAH.NCSL.NIST.GOV Address: 129.6.61.25 There you can find databases and publications. If there are questions left, feel free to contact me. Neural Network Group Rostock Welf Wustlich  From dodson at ecf.toronto.edu Wed Apr 27 17:50:29 1994 From: dodson at ecf.toronto.edu (C.T.J. Dodson) Date: Wed, 27 Apr 1994 17:50:29 -0400 Subject: Research Position at Toronto Message-ID: <94Apr27.175033edt.8429@cannon.ecf.toronto.edu> Research Position Available at University of Toronto A position is available now for a person with experience in neural network computing to join a mixed discipline group working on the simulation of stochastic fibrous suspensions in turbulent flow. The appointee should have a PhD or similar qualifications, and be prepared to work in a group with access to very good computing Research Position at University of Toronto A position is available from 1 June 1994 for a person qualified to do research in the application of neural network simulation methods. The project involves a mixed discipline group and concerns the simulation and analysis of stochastic fibrous suspensions in turbulent flows. Good computing facilities are available (SGI Challenge, Indigo, KSR1). The successful applicant should have a PhD or be at a similar level. The salary is set by NSERC at about $27,000 p.a. without benefits. The appointment would be for up to 2 years, with the possibility of an extension to a third year.
Interested applicants please send resumes, letters of reference and possible starting dates to me as soon as possible. ****************************** Prof CTJ Dodson Department of Chemical Engineering and Department of Mathematics University of Toronto 200 College Street, Toronto M5S 1A1 Tel 416 978 5610 Fax 416 978 1144 Room Wallberg 362 email dodson at ecf.utoronto.ca ******************************  From dodson at ecf.toronto.edu Thu Apr 28 09:24:18 1994 From: dodson at ecf.toronto.edu (C.T.J. Dodson) Date: Thu, 28 Apr 1994 09:24:18 -0400 Subject: Research Position at Toronto Message-ID: <94Apr28.092427edt.9091@cannon.ecf.toronto.edu> [Sorry, but an earlier version yesterday seems to have been corrupted; here is a corrected advertisement. Thanks Kit Dodson] Research Position Available at University of Toronto A position is available from 1 June 1994 for a person qualified to do research in the application of neural network simulation methods. The project involves a mixed discipline group and concerns the simulation and analysis of stochastic fibrous suspensions in turbulent flows. Good computing facilities are available (SGI Challenge, Indigo, KSR1). The successful applicant should have a PhD or be at a similar level. The salary is set by NSERC at about $27,000 p.a. without benefits. The appointment would be for up to 2 years, with the possibility of an extension to a third year. Interested applicants please send resumes, letters of reference and possible starting dates to me as soon as possible.
****************************** Prof CTJ Dodson Department of Chemical Engineering and Department of Mathematics University of Toronto 200 College Street, Toronto M5S 1A1 Tel 416 978 5610 Fax 416 978 1144 Room Wallberg 362 email dodson at ecf.utoronto.ca ******************************  From hendin at thunder.tau.ac.il Thu Apr 28 10:26:49 1994 From: hendin at thunder.tau.ac.il (Ofer Hendin) Date: Thu, 28 Apr 1994 17:26:49 +0300 (IDT) Subject: preprint available Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/hendin.olfaction.ps.Z The file hendin.olfaction.ps.Z now available for "anonymous ftp" copying from the Neuroprose repository (9 pages): ============================================================================ DECOMPOSITION OF A MIXTURE OF SIGNALS IN A MODEL OF THE OLFACTORY BULB (to be published in PNAS) O. HENDIN and D. HORN School of Physics and Astronomy Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv University Tel Aviv 69978 Israel J. J. HOPFIELD Divisions of Chemistry and Biology California Institute of Technology Pasadena, California 91125 ABSTRACT We describe models for the olfactory bulb which perform separation and decomposition of mixed odor inputs from different sources. The odors are unknown to the system, hence this is an analog and extension of the engineering problem of blind separation of signals. The separation process makes use of the different temporal fluctuations of the input odors which occur under natural conditions. We discuss two possibilities, one relying on a specific architecture connecting modules with the same sensory inputs, and the other assuming that the modules (e.g. glomeruli) have different receptive fields in odor space. We compare the implications of these models for the testing of mixed odors from a single source. ============================================================================ Ofer. 
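The separation principle in the abstract above, exploiting the different temporal fluctuations of the source signals, is shared by second-order blind separation methods that whiten the mixtures and then diagonalize a time-lagged covariance. The following is a generic numerical sketch of that class of methods, not the olfactory-bulb model itself; the sinusoidal "odors" and the mixing matrix are invented for illustration:

```python
import numpy as np

def separate(X, lag=5):
    """Second-order blind separation: whiten with the zero-lag covariance,
    then rotate so the lag-`lag` covariance is diagonal. Works when the
    sources have distinct temporal (autocorrelation) structure."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])     # zero-lag covariance
    V = E @ np.diag(d ** -0.5) @ E.T                # whitening matrix
    Y = V @ X
    C = Y[:, :-lag] @ Y[:, lag:].T / (X.shape[1] - lag)
    _, U = np.linalg.eigh((C + C.T) / 2)            # symmetrized lagged covariance
    return U.T @ V                                  # unmixing matrix

# toy demo: two "odors" fluctuating on different time scales, mixed linearly
t = np.arange(4000)
S = np.vstack([np.sin(0.05 * t), np.sin(0.7 * t)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = separate(A @ S)
recovered = W @ (A @ S)
```

The recovered signals match the originals up to permutation and sign, which is the usual ambiguity of blind separation.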
____________________________________________________________________ Ofer Hendin e-mail: hendin at thunder.tau.ac.il School of Physics and Astronomy Phone : +972 3 640 7452 Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv University Tel Aviv 69978, Israel. ____________________________________________________________________  From kak at gate.ee.lsu.edu Thu Apr 28 10:46:06 1994 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Thu, 28 Apr 94 09:46:06 CDT Subject: No subject Message-ID: <9404281446.AA08343@gate.ee.lsu.edu> The following report (a revision of an earlier report) is now available as a PostScript file. If you would like me to email you a copy, do let me know. _______________________________________________________________ Can We Build A Quantum Neural Computer? by Subhash Kak Department of Electrical & Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Technical Report: ECE/LSU 92-13; 94-42 December 15, 1992; Revised April 26, 1994 Abstract: Hitherto computers have been designed based on classical laws. We consider the question of building a quantum neural computer and speculate on its computing power. We argue that such a computer could have the potential to solve artificial intelligence problems. It is also shown that information is not locally additive in a quantum computational paradigm. This is demonstrated by considering an information-theoretic analysis of the EPR experiment. Non-additivity of information in biological processing would be one piece of evidence establishing that consciousness should be modelled using a quantum theory.
-----------------------------------------------------------------------  From terry at salk.edu Thu Apr 28 20:05:07 1994 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 28 Apr 94 17:05:07 PDT Subject: Neural Computation 6:3 Message-ID: <9404290005.AA15953@salk.edu> NEURAL COMPUTATION May 1994 Volume 6 Number 3 Review: Statistical Physics Algorithms that Converge A.L. Yuille and J. J. Kosowsky Article: Object Recognition and Sensitive Periods: A Computational Analysis of Visual Imprinting Randall C. O'Reilly and Mark H. Johnson Letters: Computing Stereo Disparity and Motion with Known Binocular Cell Properties Ning Qian Integration and Differentiation in Dynamic Recurrent Neural Networks Edwin E. Munro, Larry E. Shupe, and Eberhard E. Fetz A Convergence Result for Learning in Recurrent Neural Networks Chung-Ming Kuan, Kurt Hornik and Halbert White Topology Learning Solved by Extended Objects: A Neural Network Model Csaba Szepesvari, Laszlo Balazs and Andras Lorincz Dynamics of Discrete Time, Continuous-State Hopfield Networks Pascal Koiran Alopex: A Correlation-Based Learning Algorithm for Feedforward and Recurrent Neural Networks K. P. Unnikrishnan and K. P. Venugopal Duality Between Learning Machines: A Bridge Between Supervised and Unsupervised Learning Jean-Pierre Nadal and N. Parga Finding the Embedding Dimension and Variable Dependencies in Time Series Hong Pi and Carsten Peterson Comparison of Some Neural Network and Scattered Data Approximations: The Inverse Manipulator Kinematics Example Dimitry Gorinevsky and Thomas H. Connolly Functionally Equivalent Feedforward Neural Networks Vera Kurkova and Paul C. Kainen ----- SUBSCRIPTIONS - 1994 - VOLUME 6 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $65 Individual ______ $166 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). 
(Back issues from Volumes 1-5 are regularly available for $28 each for institutions and $14 each for individuals. Add $5 for postage per issue outside the USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu -----  From danr at das.harvard.edu Fri Apr 29 00:41:41 1994 From: danr at das.harvard.edu (Dan Roth) Date: Fri, 29 Apr 94 00:41:41 EDT Subject: Advanced Tutorial on Learning DNF Message-ID: <9404290441.AA28585@endor.harvard.edu> Advanced Tutorial on the State of the Art in Learning DNF Rules =============================================================== Sunday, July 10, 1994 Rutgers University New Brunswick, New Jersey Held in Conjunction with the Eleventh International Conference on Machine Learning (ML94, July 11-13, 1994) and the Seventh Annual Conference on Computational Learning Theory (COLT94, July 12-15, 1994). Learning DNF rules is one of the most important and most widely investigated problems in inductive learning from examples. Despite its long-standing position in both the Machine Learning and COLT communities, there has been little interaction between them. This workshop aims to promote such interaction. The COLT community has studied DNF extensively under its standard learning models. While the general problem is still one of the main open problems in COLT, there have been many exciting developments in recent years, and techniques for solving major subproblems have been developed. Inductive learning of subclasses of DNF such as production rules, decision trees and decision lists has been an active research topic in the Machine Learning community for years, but theory has had almost no impact on the experimentalists in machine learning working in this area.
The purpose of this workshop is to provide an opportunity for cross-fertilization of ideas, by exposing each community to the other's: ML researchers to the frameworks, results and techniques developed in COLT; the theoretical community to many problems that are important from a practical point of view but are not currently addressed by COLT, as well as to approaches that have been shown to work in practice but lack a formal analysis. To achieve this goal, the workshop is organized around a set of invited talks, given by some of the prominent researchers in the field from both communities. Our intention is to have as much discussion as possible during the formal presentations.

The speakers are:

Nader Bshouty, University of Calgary, Canada
    Learning via the Monotone Theory
Wray Buntine, NASA
    Generating rule-based algorithms via graphical modeling
Tom Hancock, Siemens
    Learning Subclasses of DNF from examples
Rob Holte, University of Ottawa, Canada
    Empirical Analyses of Learning Systems
Jeff Jackson, Carnegie Mellon University
    Learning DNF under the Uniform Distribution
Michael Kearns, AT&T Bell Labs
    An Overview of Computational Learning Theory Research on Decision Trees and DNF
Yishay Mansour, Tel-Aviv University, Israel
    Learning boolean functions using the Fourier Transform
Cullen Schaffer, CUNY
    Learning M-of-N and Related Concepts

PARTICIPATION

The Workshop is open to people who register for the COLT/ML conference. We hope to attract researchers who are active in the area of DNF as well as the general COLT/ML audience.
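As minimal background for the tutorial above: a DNF formula is an OR of conjunctive terms, each term an AND of literals. A small sketch of evaluating such a formula on boolean examples (the list-of-dicts representation is our own illustration, not drawn from any of the talks):

```python
# A DNF formula is an OR of terms; each term is an AND of literals.
# Represent a term as a dict mapping variable index -> required value
# (1 for a positive literal, 0 for a negated one). Illustrative only.

def eval_term(term, x):
    """True if example x satisfies every literal of the term."""
    return all(x[i] == v for i, v in term.items())

def eval_dnf(terms, x):
    """True if any term of the DNF accepts example x."""
    return any(eval_term(t, x) for t in terms)

# (x0 AND NOT x2) OR (x1 AND x2)
dnf = [{0: 1, 2: 0}, {1: 1, 2: 1}]

print(eval_dnf(dnf, [1, 0, 0]))  # True: the first term fires
print(eval_dnf(dnf, [0, 0, 1]))  # False: no term fires
```

A learning algorithm in this setting must recover something like `dnf` from labeled examples alone; the talks survey when and how that is tractable.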
WORKSHOP ORGANIZERS

Jason Catlett
AT&T Bell Laboratories
Murray Hill, NJ 07974
+1 908 582 4978
catlett at research.att.com

Dan Roth
Harvard University
Cambridge, MA 02138
+1 617 495 5847
danr at das.harvard.edu

From tibs at utstat.toronto.edu Fri Apr 29 09:19:00 1994
From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu)
Date: Fri, 29 Apr 94 09:19 EDT
Subject: new manuscript
Message-ID:

Available in pub/nnboot.ps.Z at utstat.toronto.edu

A comparison of some error estimates for neural network models

Robert Tibshirani
Department of Preventive Medicine and Biostatistics and Department of Statistics
University of Toronto

We discuss a number of methods for estimating the standard error of predicted values from a neural network (single layer perceptron) model. These methods include the delta method based on the Hessian, bootstrap estimators, and the ``sandwich'' estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to the choice of starting weights.

=============================================================
| Rob Tibshirani               To every man is given the key to
| Dept. of Preventive          the gates of heaven;
| Medicine and Biostatistics   the same key opens the gates of hell.
| McMurrich Bldg.
| University of Toronto                      Buddhist proverb
| Toronto, Canada M5S 1A8
| Phone: 1-416-978-4642 (biostats)  416-978-0673 (stats)
| Email: tibs at utstat.toronto.edu   FAX: 1-416-978-8299
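The bootstrap estimator that Tibshirani's abstract earlier in this digest finds best can be sketched in a few lines. Here a one-parameter least-squares model stands in for the neural network (a deliberate simplification for brevity; this is not the author's code, only the resampling logic it describes):

```python
import random
import statistics

# Bootstrap estimate of the standard error of a model's prediction.
# The "model" is a one-parameter least-squares fit y = w*x, standing
# in for the neural network of the paper (illustrative assumption).

def fit(data):
    """Least-squares slope for y = w*x."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / sxx

def bootstrap_se(data, x0, n_boot=500, seed=0):
    """Standard error of the prediction at x0 over bootstrap resamples."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        preds.append(fit(sample) * x0)
    return statistics.stdev(preds)

# Synthetic data: y = 2x plus Gaussian noise (invented for illustration)
data = [(x, 2.0 * x + random.Random(x).gauss(0, 0.5)) for x in range(1, 11)]
se = bootstrap_se(data, x0=5.0)
print(round(se, 3))
```

For a real network one would refit the net on each resample, including the random choice of starting weights, which is exactly the variability the abstract says the bootstrap captures.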
For whom:

Of interest to:
- Computer Companies
- Information Management and Supply Companies
- Government Agencies
- Libraries
- Universities and Polytechnics

that are interested in:
- Neural Networks
- Information Retrieval
- Library Sciences
- Natural Language Processing
- Advanced Computer Science
- Data Compression

for applications such as:
- Current Awareness
- Selective Dissemination of Information (SDI)
- Information Filtering
- Automatic Contents-Based Information Distribution
- Categorization
- Advanced Interface Design
- Fuzzy Retrieval (of information recognized by Optical Character Recognition and Speech Recognition)
- Retrieval Generalization
- Thesaurus Generation
- Information Compression
- Jukebox Staging

General Information

Costs per participant for both days:
- Commercial companies: Dfl. 950,-
- Universities and non-profit institutions (*): Dfl. 500,-
- Students (*): Dfl. 150,-

(*) A letter from the university or non-profit institution must be shown at registration.

These costs include:
- Workshop Proceedings
- State of the Art report on Neural Networks in Information Retrieval as composed by MSC
- Achievements report on Neural Networks in Information Retrieval as composed by MSC
- Ongoing coffee & tea
- Lunch
- Dinner
- Future mailings on progress
- Limited availability of travel grants for students (please apply)

All other expenses, such as travel, hotels, and short stays, are not included in the fee.

Payment

The following payment methods are accepted:
1. Credit Cards
2. Prepayment by bank
3. Personal cheques

More information:

M.S.C. Information Retrieval Technologies BV
Dr Johannes C. Scholtes
Dufaystraat 1
1075 GR AMSTERDAM
the Netherlands
Telephone: +31 20 679 4273
Fax: +31 20 6710 793
Internet: 100322.250 at compuserve.com or scholtes at msc.mhs.compuserve.com
Compuserve: MHS: SCHOLTES at MSC or 100322,250

Background & Introduction

Recent research on artificial neural networks (ANN) in pattern recognition and pattern classification has provided successful alternatives to traditional techniques. Products for optical character recognition (OCR), speech recognition, hand-written character recognition, and the prediction of non-linear time series are good examples of the commercialization of these ANN techniques. So far, the European Commission has funded more than 40 projects of different sizes under ESPRIT and other programmes which involve research on or the application of ANN technology.

The task of Information Retrieval (IR), that is, the matching of a large number of documents against a query, can also be seen as a pattern recognition or pattern classification task. There have therefore been several attempts to apply ANNs in IR in order to increase the quality of the retrieval process. Despite the theoretical and practical evidence that ANNs are good tools for pattern recognition tasks, it is still an open question whether they are appropriate tools within the specific domain of Bibliographic Information Retrieval. Apart from some minor studies, it seems no real attempt has been made up until now to integrate an ANN as a main component of a bibliographic information retrieval system or an on-line library catalogue (OPAC). It is therefore not clear whether and how ANN techniques can be combined with more "classical" methods, for instance rule-based or statistical approaches. By the same token, it is not clear to what extent existing OPACs could benefit from ANN technology.
Objectives

The objectives of this study are:
- to ascertain the State of the Art in the application of Artificial Neural Net (ANN) technology to Information Retrieval (IR), with particular emphasis on bibliographic information in a libraries context;
- to assess the (potential) quality of ANN-based approaches to IR in this particular domain of interest, in comparison with traditional practices. Here "quality" must be understood in terms of both (measurable) efficiency and practical benefits;
- to stimulate interest in the practical application of ANN technology to bibliographic information retrieval in a libraries context.

Information Retrieval

It can be argued that Information Retrieval (IR) is the ultimate combination of Natural Language Processing (NLP) and Artificial Intelligence (AI). On the one hand, there is an enormous amount of natural-language data that must be processed and understood to return the proper information to the user. On the other hand, one needs to understand what the user intends with his or her query, given the context of the other queries and some kind of user model. Most of these systems still use techniques that were developed over thirty years ago and that implement nothing more than a global surface analysis of textual (layout) properties. No deep structure whatsoever is incorporated in the decision whether or not to retrieve a text.

There is one large dilemma in IR research. The data collections are so large that any method other than a global surface analysis would fail. However, such a global analysis can never implement a contextually sensitive method to restrict the number of candidates returned by the retrieval system. Information retrieval can also be a very frustrating area of research: whenever one invents a new model, it is difficult to show that it works better (qualitatively and quantitatively) than any previous model, and the addition of new dependencies often results in much too slow a system.
Systems such as Salton's SMART have existed for over 30 years without any serious competition. The field of information retrieval would be greatly indebted to a method that could incorporate more context without slowing down. Since computers can only process numbers within reasonable time limits, such a method should be based on vectors of numbers rather than on symbol manipulation. This is exactly where the challenge lies: on the one hand keep up the speed, and on the other incorporate more context.

Artificial Neural Networks

The connectionist approach offers a massively parallel, highly distributed and highly interconnected solution for the integration of various kinds of knowledge, with preservation of generality. It may be that connectionism, or neural networks (despite all currently unsolved questions concerning learning, stability, recursion, firing rules, network architecture, etc.), will contribute to research in natural-language processing and information retrieval. Distributed data representation may solve many of the open problems in IR by introducing a powerful and efficient tool for knowledge integration and generalization. However, distributed data representation and self-organization trigger new problems that must in turn be solved in an elegant manner.

Current Problems in Information Retrieval

The main objectives of current IR research can be characterised as the search for systems that exhibit adaptive behaviour, interactive behaviour and transparency. More specifically, these models should implement properties for:
- understanding incomplete queries or making incomplete matches,
- understanding vague user intentions,
- the ability to generalise over queries as well as over query results,
- adapting to the needs of an evolving user (model),
- allowing dynamic relevance feedback,
- aiding the user to browse intelligently through the data, and
- the addition of (language) context sensitivity.
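The "vectors of numbers" approach called for above, as embodied in vector-space systems of the SMART tradition, can be sketched minimally with term-count vectors ranked by cosine similarity. The toy corpus and query are invented for illustration:

```python
import math
from collections import Counter

# Minimal vector-space retrieval: documents and queries become
# term-count vectors, ranked by cosine similarity.
# Corpus and query are invented for illustration.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for absent terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "neural networks for information retrieval",
    "library catalogue systems and indexing",
    "speech recognition with neural networks",
]
query = vectorize("neural information retrieval")
ranked = sorted(range(len(docs)),
                key=lambda i: cosine(query, vectorize(docs[i])),
                reverse=True)
print(ranked[0])  # document 0 shares the most query terms
```

This is the fast, purely numeric surface analysis the text describes; the open problem it raises is how to add context sensitivity without giving up this speed.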
Different Approaches in Information Retrieval and Neural Networks

Two main directions of neural-network-related research in information retrieval can be observed. First, there are relatively static databases that are investigated with a dynamic query (free-text search, also known as document retrieval systems). Second, there are more dynamic databases that need to be filtered with respect to a relatively static query (the filtering problem, also known as current awareness systems and Selective Dissemination of Information, SDI). In the first case, the data can be preprocessed thanks to their static character. In the second case, the amounts of data are so large that there is no time whatsoever for a preprocessing phase: a direct, context-sensitive match must be made.

Early neural models adapt well to the paradigms currently used in information retrieval: index terms can be replaced by processing units, hyperlinks by connections between units, and network training resembles the index normalisation process. However, these models do not adapt well to the general notion of neural networks. In addition, it is difficult to imagine what to teach a neural information retrieval system if it is trained in a supervised manner: the address space will almost always be too limited for the large amounts of data to be processed, and a set of structured (query, retrieved document numbers) pairs does not seem plausible either, considering the restricted memory capacity of (current) neural network technology. Nevertheless, most of the neural IR models found in the literature are based on these principles. Also problematic are the so-called clustering networks: due to the large amounts of data in free-text databases, clustering is very expensive and is therefore considered impractical in changing information retrieval environments. More interesting are the unsupervised, associative-memory types of models, which can be used to implement a specific pattern-matching task.
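The associative-memory idea mentioned above can be sketched with a Hopfield-style network that stores a few ±1 patterns via a Hebbian outer-product rule and recovers a stored pattern from a corrupted probe. Pattern sizes and contents are invented for illustration:

```python
# Hopfield-style associative memory: store ±1 patterns with a Hebbian
# outer-product rule, then recover a stored pattern from a noisy probe.
# Patterns and sizes are invented for illustration.

def train(patterns):
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:          # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, probe, steps=5):
    s = list(probe)
    for _ in range(steps):          # synchronous threshold updates
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
W = train(stored)
noisy = [1, 1, -1, -1, -1, -1]      # first pattern with one bit flipped
print(recall(W, noisy) == stored[0])  # True: the memory completes the pattern
```

Note that the memory only has to hold the stored interest patterns, not the whole database, which is why this type of model suits the filtering setting discussed next.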
This type of neural network can be particularly useful in a filtering application. Here, the memory demands of the network only need to match the size of the query (or interest profile), not the size of the entire database. It is in this area that neural networks are expected to be most useful and relevant for information retrieval. Topics such as fuzzy retrieval, current awareness, SDI, concept formation and advanced interface design are especially within the scope of the project. However, input from the workshops is very important for the final determination of the direction of the research.

Program Day 1: June 24, 1994

 9.15- 9.30  Welcome and Introduction
             Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V.
 9.30-11.00  Tutorial on Neural Networks (Back Propagation, Kohonen Feature Maps)
             Dr Ir Johan Henseler, Forensic Laboratories, Head of Section Computer Criminality
11.00-11.15  Break
11.15-12.30  Information Retrieval Applications in Libraries
             Dr E. Sieverts, Professor at Amsterdam Polytechnic, Library Program
12.30-13.30  Lunch
13.30-15.00  Presentation of Findings & State of the Art Report
15.00-15.15  Break
15.15-16.00  Directions for (Commercial) Applications
             Dr Ir Johannes C. Scholtes
16.00-17.00  Panel Discussion
17.00-18.00  Reception
19.00-...    Dinner and evening program

Program Day 2: September 16, 1994

 9.15- 9.30  Welcome and Introduction
             Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V.
 9.30-11.00  Achievements
             Dr Ir Johannes C. Scholtes & Dr E. Sieverts, Professor at Amsterdam Polytechnic, Library Program
11.00-12.30  Hands-on demonstrations
12.30-13.30  Lunch
13.30-15.00  Problem Issues
             Dr E. Sieverts, Professor at Amsterdam Polytechnic, Library Program
15.00-15.15  Break
15.15-16.00  Commercial Implications
             Dr Ir Johannes C. Scholtes, President of MSC Information Retrieval Technologies B.V.
16.00-17.00  Panel Discussion
17.00-18.00  Reception
19.00-...
Dinner and evening program

During the day, demos of the prototypes will be available to the participants of the workshop. Each demo will be guided by a specialist who demonstrates the software.

From erikds at reks.uia.ac.be Sat Apr 2 15:33:58 1994
From: erikds at reks.uia.ac.be (Erik De Schutter)
Date: Sat, 2 Apr 94 22:33:58 +0200
Subject: Postdoc positions available
Message-ID: <9404022033.AA20654@kuifje>

Please post and forward (not at Caltech).

TWO POSTDOCTORAL POSITIONS IN COMPUTATIONAL NEUROSCIENCE

Join a new multi-disciplinary team at the University of Antwerp, Belgium, to explore functional properties of the cerebellar cortex. Projects include detailed modeling of calcium stores and metabotropic receptors in a compartmental model of a Purkinje cell (see J. Neurophysiol. 71, 375-400 and 401-419, 1994), and the creation of a large-scale realistic network model of cerebellar cortex based on compartmental models. Candidates should have experience in one or more of three fields: computational neuroscience (preferably compartmental modeling or using GENESIS), cerebellar physiology, or single-cell physiology (preferably calcium-imaging experience). All candidates should expect to do modeling work only; training will be provided if necessary. Positions are available for 2 to 3 years, starting autumn 1994. Salary commensurate with experience. Funding is independent of nationality.

Applicants must send a curriculum vitae and the names of three references to:

Dr. Erik De Schutter
Dept. of Medicine
University of Antwerp - UIA
B2610 Antwerp
Belgium
fax: ++32-3-8202541
e-mail: erikds at reks.uia.ac.be (preferred medium)

From drb at ivan.csc.ncsu.edu Mon Apr 4 13:25:04 1994 From: drb at ivan.csc.ncsu.edu (Dr.
Dennis Bahler) Date: Mon, 4 Apr 94 13:25:04 EDT Subject: postdoc position announcement Message-ID: <199404041725.AA01752@ivan.csc.ncsu.edu>

==============================================================================
POSTDOCTORAL POSITION
NATIONAL INSTITUTE OF ENVIRONMENTAL HEALTH SCIENCES

The National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina has an opening for a postdoctoral research position in computer science. The person in this position will join an existing research team studying the application of methods from artificial intelligence to the prediction of risks from exposure to chemical agents. Experience with inductive learning methods, decision trees and neural networks is beneficial. Minority candidates and women are encouraged to apply. Appointees must be U.S. citizens or permanent U.S. residents.

A curriculum vitae and three letters of reference should be sent to:

Dr. Christopher J. Portier
Laboratory of Quantitative and Computational Biology
National Institute of Environmental Health Sciences
PO Box 12233, Mail Drop A3-06
Research Triangle Park, North Carolina 27709

Curricula vitae will be accepted via e-mail to portier at niehs.nih.gov. Reference letters can be sent by e-mail and followed by hard copy. Candidates will be interviewed after April 21, 1994. Applications will be accepted until a candidate is chosen.
From payman at uw-isdl.ee.washington.edu Mon Apr 4 14:43:25 1994
From: payman at uw-isdl.ee.washington.edu (payman@uw-isdl.ee.washington.edu)
Date: Mon, 4 Apr 1994 11:43:25 -0700
Subject: TR available: Fourier Analysis and Filtering of a Single Hidden Layer Perceptron
Message-ID: <199404041843.LAA29008@graham.ee.washington.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/marks.fourier.ps.Z

The following paper is now available from the neuroprose repository:

Fourier Analysis and Filtering of a Single Hidden Layer Perceptron

Robert J. Marks II & Payman Arabshahi
Department of Electrical Engineering
University of Washington FT-10
Seattle, WA 98195 USA

This is an invited paper to appear in the Proceedings of the International Conference on Artificial Neural Networks (IEEE/ENNS), Sorrento, Italy, May 1994.

Abstract

We show that the Fourier transform of the linear output of a single hidden layer perceptron consists of a multitude of line masses passing through the origin. Each line corresponds to one of the hidden neurons, and its slope is determined by that neuron's weight vector. We also show that convolving the output of the network with a function can be achieved simply by modifying the shape of the sigmoidal nonlinearities in the hidden layer.

To retrieve the file:

unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: your email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get marks.fourier.ps.Z
ftp> bye
unix> uncompress marks.fourier.ps.Z
unix> lpr marks.fourier.ps

From kinder at informatik.tu-muenchen.de Wed Apr 6 07:52:04 1994
From: kinder at informatik.tu-muenchen.de (Margit Kinder)
Date: Wed, 6 Apr 1994 13:52:04 +0200
Subject: paper available
Message-ID: <94Apr6.135205met_dst.42263@papa.informatik.tu-muenchen.de>

The following paper is now available via anonymous ftp from the neuroprose archive.
Although it has already been published in "Neural Networks" 6/93, I put it into this archive since I am still receiving many requests for it. The manuscript is 10 pages.

---------------------------------------------------------------------

Classification of Trajectories - Extracting Invariants with a Neural Network

Margit Kinder and Wilfried Brauer
Technische Universität München

A neural classifier of planar trajectories is presented. There already exists a large variety of classifiers that are specialized in particular invariants contained in a trajectory classification task, such as position invariance, rotation invariance, size invariance, etc. That is, there exist classifiers specialized in recognizing trajectories, e.g., independently of their position. The neural classifier presented in this paper is not restricted to certain invariants in a task: the neural network itself extracts the invariants contained in a classification task by assessing only the trajectories. The trajectories need only be given as a set of points; no additional information must be available for training, which saves the designer from having to determine the needed invariants by hand. Besides its applicability to real-world problems, such a more general classifier is also cognitively plausible: in assessing trajectories for classification, human beings are able to find class-specific features no matter what kinds of invariants they are confronted with. Invariants are easily handled by ignoring unspecific features.

-----------------------------------------------------------------------

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/kinder.extracting_invariants.ps.Z

Margit Kinder

-----------------------------------------------------------------------
Margit Kinder                    e-mail: kinder at informatik.tu-muenchen.de
Fakultät für Informatik
Technische Universität München   Tel: +49 89 2105 8476
80290 Munich, Fed. Rep.
of Germany Fax: +49 89 2105 8207  From soodak%cn.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU Wed Apr 6 05:42:56 1994 From: soodak%cn.ROCKEFELLER.EDU at ROCKVAX.ROCKEFELLER.EDU (Bob Soodak) Date: Wed, 6 Apr 94 05:42:56 EDT Subject: Paper available: Simulation of visual cortex... Message-ID: <9404060942.AA16676@cn.rockefeller.edu> The following reprint is now available: FTP-host: archive.cis.ohio-state.edu FTP-file: /pub/neuroprose/soodak.suture.ps.Z Author: R. Soodak Rockefeller University Title: Simulation of visual cortex development under lid-suture conditions: Enhancement of response specificity by a reverse-Hebb rule in the absence of spatially patterned input Size: 15 pages Published in Biological Cybernetics 70, 303-309 (1994) (Due to copyright restrictions the manuscript could not be posted prior to publication.) Abstract: In this report, I show that a reverse-Hebb synaptic modification rule leads to the enhancement of response specificity of simulated visual cortex neurons in the absence of spatial patterning of the afferent activity. Although it is clear that receptive fields in the visual cortex can be modified by experience, many studies have shown a substantial increase of response specificity in cats deprived of pattern vision by lid suture, leading some to conclude that receptive field properties are essentially hard-wired. The hard-wired vs. experience-dependent controversy can be resolved by assuming that while Hebb-type plasticity is responsible for developmental synaptic changes, the organization of presynaptic activity which exists under conditions of visual deprivation is sufficient to drive the neurons towards greater specificity (Linsker 1986a-c; Miller 1989, 1992; Miller et al. 1989). 
As a reverse-Hebb rule enhances response specificity by balancing the push-pull system of ON- and OFF-center afferents, the sufficient condition is that the activity of ON- and OFF-center retinal ganglion cells be negatively correlated, a condition which will be met by diffuse illumination as seen through sutured eyelids. Unlike the models of Linsker and of Miller and colleagues, which are based on a standard-Hebb rule, the model presented here does not require a "Mexican hat" spatial patterning of the afferent correlations, which has not been observed experimentally.

To retrieve the file:

unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get soodak.suture.ps.Z
ftp> bye
unix> uncompress soodak.suture.ps.Z
unix> lpr soodak.suture.ps

For a hard copy, please send your name and address in a form suitable for use as a mailing label to:

Robert Soodak
Rockefeller Univ.
1230 York Ave.
New York, NY 10021 USA

From mwitten at CHPC.UTEXAS.EDU Tue Apr 5 04:40:35 1994
From: mwitten at CHPC.UTEXAS.EDU (mwitten@CHPC.UTEXAS.EDU)
Date: Tue, 5 Apr 1994 14:40:35 +0600
Subject: COMPMED 94 FINAL SCHEDULE
Message-ID: <9404051940.AA08550@morpheus>

FINAL PROGRAM ANNOUNCEMENT

FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE AND PUBLIC HEALTH
24-28 April 1994
Hyatt on the Lake, Austin, Texas

The final program for the First World Congress on Computational Medicine and Public Health has now been set. Over 200 speakers will be presenting work in a variety of application areas related to medicine and public health. Registration is still open for attendees. Registration details, a copy of the schedule-at-a-glance, or the detailed schedule may be requested by sending an email request to compmed94 at chpc.utexas.edu, by calling 512-471-2472, or by faxing 512-471-2445. There is no ftp version of the conference schedule due to the size of the file. We will be happy to fax or send a copy to anyone who requests it.
The conference proceedings will appear as a series of volumes published by World Scientific. If you are interested in possibly submitting a paper for the proceedings, please contact mwitten at chpc.utexas.edu or call 512-471-2457.

The overwhelming response to this congress has already justified holding a second world congress; the tentative schedule is to hold it in three years. If you are interested in participating in the 2nd World Congress on Computational Medicine and Public Health, please contact:

Dr. Matthew Witten, Congress Chair
mwitten at chpc.utexas.edu

From epdp at big.att.com Wed Apr 6 16:18:48 1994
From: epdp at big.att.com (Edwin Pednault)
Date: Wed, 6 Apr 94 16:18:48 EDT
Subject: Workshop on Learning and Descriptional Complexity
Message-ID: <9404062018.AA11511@big.l1135.att.com>

Workshop on Applications of Descriptional Complexity to Inductive, Statistical, and Visual Inference

Sunday, July 10, 1994
Rutgers University
New Brunswick, New Jersey

Held in Conjunction with the Eleventh International Conference on Machine Learning (ML94, July 11-13, 1994) and the Seventh Annual Conference on Computational Learning Theory (COLT94, July 12-15, 1994).

Interest in the minimum description-length (MDL) principle is increasing in the machine learning and computational learning theory communities. One reason is that MDL provides a basis for inductive learning in the presence of noise and other forms of uncertainty. Another is that it enables one to combine and compare different kinds of data models within a single unified framework, allowing a wide range of inductive-inference problems to be addressed. Interest in the MDL principle is not restricted to the learning community: inductive-inference problems arise in one form or another in many disciplines, including information theory, statistics, computer vision, and signal processing.
In each of these disciplines, inductive-inference problems have been successfully pursued using the MDL principle and related descriptional complexity measures, such as stochastic complexity, predictive MDL, and algorithmic probability.

The purpose of this workshop is twofold: (1) to provide an opportunity for researchers in all disciplines involved with descriptional complexity to meet and share results; and (2) to foster greater interaction between the descriptional complexity community and the machine learning and computational learning theory communities, enabling each group to benefit from the results and insights of the others. To meet these objectives, the format of the workshop is designed to maximize opportunities for interaction among participants. In addition, a tutorial on descriptional complexity will be held prior to the workshop to encourage broad participation. The tutorial and workshop may be attended together or individually.

The topics of the workshop will include, but will not be limited to:

- Applications of descriptional complexity to all forms of inductive inference, including those in statistics, machine learning, computer vision, pattern recognition, and signal processing.
- Rates of convergence, error bounds, distortion bounds, and other convergence and accuracy results.
- New descriptional complexity measures for inductive learning.
- Specializations and approximations of complexity measures that take advantage of problem-specific constraints.
- Representational techniques, search techniques, and other application- and implementation-related issues.
- Theoretical and empirical comparisons between different descriptional complexity measures, and with other learning techniques.

WORKSHOP FORMAT

The workshop will be held on Sunday, July 10, 1994. Attendance will be open. However, those who wish to attend should contact the organizers prior to the workshop at the address below.
To maximize the opportunity for interaction, the workshop will consist primarily of poster presentations, with a few selected talks and a moderated wrap-up discussion. Posters will be the primary medium for presentation. This medium was chosen because it encourages close interaction between participants, and because many more posters can be accommodated than talks. Both factors should encourage productive interaction across a wide range of topics despite the constraints of a one-day workshop. Depending on the number and quality of the submissions, arrangements may be made to publish a book of papers after the workshop under the auspices of the International Federation for Information Processing Working Group 14.2 on Descriptional Complexity. SUBMISSIONS Posters will be accepted on the basis of extended abstracts that should not exceed 3000 words, excluding references (i.e., about six pages of text, single spaced). Separate one-page summaries should accompany the submitted abstracts. The summary pages of accepted abstracts will be distributed to all interested participants prior to the workshop, and should be written accordingly. Summaries longer than one page will have only their first page distributed. Six copies of each extended abstract and two copies of each summary page must be received at the address below by May 18, 1994. Acceptance decisions will be made by June 10, 1994. Copies of the summary pages of accepted abstracts will be mailed to all those who submit abstracts and to those who contact the organizers before the decision date. Because we expect the audience to be diverse, clarity of presentation will be a criterion in the review process. Contributions and key insights should be clearly conveyed with a wide audience in mind. Authors whose submissions are accepted will be expected to provide the organizers with full-length papers or revised versions of their extended abstracts when they arrive at the workshop. 
These papers and abstracts will be used for the publisher's review. Authors may wish to bring additional copies to distribute at the workshop. IMPORTANT DATES May 18 Extended abstracts due June 10 Acceptance decisions made, summary pages distributed July 10 Workshop PROGRAM COMMITTEE Ed Pednault (Chair), AT&T Bell Laboratories. Andrew Barron, Yale University. Ron Book, University of California, Santa Barbara. Tom Cover, Stanford University. Juris Hartmanis, Cornell University. Shuichi Itoh, University of Electro-Communications. Jorma Rissanen, IBM Almaden Research Center. Paul Vitanyi, CWI and University of Amsterdam. Detlef Wotschke, University of Frankfurt. Kenji Yamanishi, NEC Corporation. CONTACT ADDRESS Ed Pednault AT&T Bell Laboratories, 4G-318 101 Crawfords Corner Road Holmdel, NJ 07733-3030 email: epdp at research.att.com tel: 908-949-1074 ----------------------------------------------------------------------- Tutorial on Descriptional Complexity and Inductive Learning One of the earliest theories of inductive inference was first formulated by Solomonoff in the late fifties and early sixties. It was expanded in subsequent and, in some cases, independent work by Solomonoff, Kolmogorov, Chaitin, Wallace, Rissanen, and others. The theory received its first citation in the AI literature even before its official publication. It provides a basis for learning both deterministic and probabilistic target concepts, and it establishes bounds on what is computationally learnable in the limit. Over time, this theory found its way into several fields, including probability theory and theoretical computer science. In probability theory, it provides a precise mathematical definition for the notion of a random sample sequence. 
In theoretical computer science, it is being used among other things to prove lower bounds on the computational complexity of problems, to analyze average-case behavior of algorithms, and to explore the relationship between the succinctness of a representation and the computational complexity of algorithms that employ that representation. Interest in the theory diminished in artificial intelligence in the mid to late sixties because of the inherent intractability of the theory in its most general form. However, research in the seventies and early eighties led to several tractable specializations developed expressly for inductive inference. These specializations in turn led to applications in many disciplines, including information theory, statistics, machine learning, computer vision, and signal processing. The body of theory as it now stands has developed well beyond its origins in inductive inference, encompassing algorithmic probability, Kolmogorov complexity, algorithmic information theory, generalized Kolmogorov complexity, minimum message-length inference, the minimum description-length (MDL) principle, stochastic complexity, predictive MDL, and related concepts. It is being referred to collectively as descriptional complexity to reflect this evolution. This tutorial will provide an introduction to the principal concepts and results of descriptional complexity as they apply to inductive inference. The practical application of these results will be illustrated through case studies drawn from statistics, machine learning, and computer vision. No prior background will be assumed in the presentation other than a passing familiarity with probability theory and the theory of computation. Attendees should expect to gain a sound conceptual understanding of descriptional complexity and its main results. The tutorial will be held on Sunday, July 10, 1994. 
-----------------------------------------------------------------------  From noordewi at cs.rutgers.edu Wed Apr 6 18:32:34 1994 From: noordewi at cs.rutgers.edu (Michiel (Mick) Date: Wed, 6 Apr 94 18:32:34 EDT Subject: CFP ML94 Workshop on Molecular Biology Message-ID: <9404062232.AA19226@binnacle.rutgers.edu> Computational Molecular Biology and Machine Learning Workshop Machine Learning Conference 1994 Program Committee: Michiel Noordewier (Rutgers University) Lindley Darden (Rockefeller University) Description and Focus --------------------- This workshop will focus on the application of methods from machine learning to the promising problem area of molecular biology. A goal is to consolidate a machine learning faction in the emerging field of computational biology. One problem area is identified as genetic sequence search and analysis, and protein structure prediction. Biological sequences have become a ready source of sample data for machine learning approaches to classification. Recently such sequences have also provided problems for sophisticated pattern recognition paradigms, including those borrowed from computational linguistics, Bayesian methods, and artificial neural networks. This workshop will bring together workers using such diverse approaches, and will focus on the rich set of problems presented by the recent availability of extensive biological sequence information. Another area of applicability of ML techniques to molecular biology is in the application of computational discovery methods. Such methods are employed for forming, ranking, evaluating, and improving hypotheses. Learning strategies using analogies or homologies among molecules or processes from different organisms or species are also of interest. The format of the workshop will be the presentation of short papers followed by panel discussions. 
Submission Requirements ----------------------- Persons wishing to attend the workshop should submit three copies of a 1-2 page research summary including a list of relevant publications, along with a phone number and an electronic mail address. Persons wishing to make presentations at the workshop should submit three copies of a short paper (no more than 10 pages) or extended abstract, in addition to the research summary. All submissions must be received by May 1, 1994. Notification of acceptance or rejection will be mailed to applicants by May 15, 1994. A set of working notes will be distributed at the workshop. Camera-ready copies of papers accepted for inclusion in the working notes of the workshop will be due on June 15, 1994. The timetable is as follows: Abstracts, papers, etc. due to chair: 1 May; Decisions made, submitters get feedback: 15 May; Final working-note submissions received by chair: 15 June; Workshop date: 10 July, 1994  From vg197 at neutrino.pnl.gov Wed Apr 6 19:01:17 1994 From: vg197 at neutrino.pnl.gov (Sherif Hashem) Date: Wed, 06 Apr 1994 16:01:17 -0700 (PDT) Subject: World Wide Web: Neural Network Home Page Message-ID: <9404062301.AA08111@neutrino.pnl.gov> ********************************************** * A N N O U N C I N G A N E W * * W O R L D W I D E W E B * * N E U R A L N E T W O R K * * H O M E P A G E * ********************************************** The World Wide Web (WWW) server at the Pacific Northwest Laboratory is now available for public access. We have created a Neural Network Home Page about the neural network research taking place in our group (Computing and Information Sciences in the Molecular Sciences Research Center). 
Our home page is composed of: - Lists of references to neural network papers in the following areas: * Chemical Sensor Analysis * Spectroscopic Analysis * Chemical Process Control * Molecular Modeling * Nuclear Science and Engineering * Medicine * Manufacturing * Optical Neurocomputing - Description of our work with neural networks - Access to our group's papers in electronic form (currently html) - Links to other neural network and neuroscience home pages The current Uniform Resource Locator (URL) for our Neural Network Home Page is: http://www.msrc.pnl.gov:2080/docs/cie/neural/neural.homepage.html This is a new home page, and we welcome all constructive comments, suggestions, additions, and hypertext links! Sherif Hashem and Paul Keller Pacific Northwest Laboratory Richland, Washington, USA phone: (509) 375-6995 (509) 375-2254 fax: (509) 375-6631 e-mail: s_hashem at pnl.gov pe_keller at pnl.gov  From jameel at cs.tulane.edu Thu Apr 7 04:55:17 1994 From: jameel at cs.tulane.edu (Akhtar Jameel) Date: Thu, 7 Apr 1994 03:55:17 -0500 (CDT) Subject: Call for papers Message-ID: <9404070855.AA26312@pegasus.cs.tulane.edu> A non-text attachment was scrubbed... 
Name: not available Type: text Size: 5811 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/d761cbec/attachment-0001.ksh From scheler at informatik.tu-muenchen.de Thu Apr 7 07:46:18 1994 From: scheler at informatik.tu-muenchen.de (Gabriele Scheler) Date: Thu, 7 Apr 1994 13:46:18 +0200 Subject: Announcement Technical Report Message-ID: <94Apr7.134623met_dst.42263@papa.informatik.tu-muenchen.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/scheler.generate.ps.Z The file scheler.generate.ps.Z is now available for copying from the Neuroprose repository: Multilingual Generation of Grammatical Categories Gabriele Scheler Technische Universität München (15 pages) also available as Report FKI-190-94 from Institut für Informatik TU München D-80290 München ftp-host: flop.informatik.tu-muenchen.de ftp-file: pub/fki/fki-190-94.ps.Z ABSTRACT: We present an interlingual semantic representation for the synthesis of morphological aspect of English and Russian by standard back-propagation. Grammatical meanings are represented symbolically and translated into a binary representation. Generalization is assessed by test sentences and by a translation of the training sentences of the other language. The results are relevant to machine translation in a hybrid systems approach and to the study of linguistic category formation.  
From gupta at prl.philips.co.uk Thu Apr 7 10:45:31 1994 From: gupta at prl.philips.co.uk (Ashok Gupta) Date: Thu, 7 Apr 94 14:45:31 GMT Subject: Bulk Data Types For Architecture Independence - a One day Meeting in London Message-ID: <1998.9404071445@apollo23.prl.philips.co.uk> The British Computer Society Parallel Processing Specialist Group (BCS PPSG) ------------------------------------------------------------ ** Bulk Data Types for Architecture Independence ** ____________________________________________________________ A One Day Open Meeting with Invited Papers Friday May 20 1994, Institute of Education, London, UK Invited Speakers: David Skillicorn, Queen's University, Canada Murray Cole, Edinburgh University Grant Malcolm, PRG, Oxford University Richard Miller, PRG, Oxford University Stewart Reddaway, Cambridge Parallel Processing A key factor in the growth of parallel computing is the availability of portable software. An attractive approach to portability lies in the provision of high-level programming constructs over bulk data, and their mapping to parallel architectures under the guidance of an appropriate cost calculus. Bulk operations have been in use since their introduction in APL, but have been understood much better recently. Bird-Meertens theory describes the operations of greatest interest for composed data types and provides a wealth of mathematical laws which can be used to transform and map algorithms for specific parallel architectures. Data types to which this approach has been applied include arrays, relations, lists, and trees. Speakers will describe this strategy for parallel software development, explain the underlying mathematics (at an accessible level), and illustrate the approach. The PPSG, founded in 1986, exists to foster development of parallel architectures, languages and applications and to disseminate information on parallel processing. 
Membership is completely open; you do not have to be a member of the British Computer Society. For further information about the group contact either of the following: Chair: Mr A Gupta, Philips Research Labs, Cross Oak Lane, Redhill, Surrey, UK, RH1 5HA (gupta at prl.philips.co.uk); Membership Secretary: Dr N Tucker, Paradis Consultants, East Berriow, Berriow Bridge, North Hill, nr. Launceston, Cornwall, PL15 7NL, UK (paradis at cix.compulink.co.uk). *************************************************************** * Please share this information and display this announcement * *************************************************************** The British Computer Society Parallel Processing Specialist Group Booking Form/Invoice BCS VAT No. : 440-3490-76 Please reserve a place at the Conference on Bulk Data Types for Architecture Independence, London, May 20 1994, for the individual(s) named below. Name of delegate BCS membership no. Fee VAT Total (if applicable) ___________________________________________________________________________ ___________________________________________________________________________ ___________________________________________________________________________ Cheques, in pounds sterling, should be made payable to "BCS Parallel Processing Specialist Group". Unfortunately credit card bookings cannot be accepted. 
The delegate fees (including lunch and refreshments), in pounds sterling, are : Members of both PPSG & BCS: 55 + 9.62 VAT = 64.62 PPSG or BCS members: 70 + 12.25 VAT = 82.25 Non members: 90 + 15.75 VAT = 105.75 Full-time students: 25 + 4.37 VAT = 29.37 (Students should provide a letter of endorsement from their supervisor that also clearly details their institution) Contact Address: ___________________________________________ ___________________________________________ ___________________________________________ Email address: ___________________________________________ Date: _________________ Day time telephone: _________________ Places are limited so please return this form as soon as possible to : Mrs C. Cunningham BCS PPSG 2 Mildenhall Close, Lower Earley, Reading, RG6 3AT, UK (Phone +44 (0) 734 665570) ...................................................................... Apologies for the multiple postings.  From hunt at DBresearch-berlin.de Thu Apr 7 14:33:00 1994 From: hunt at DBresearch-berlin.de (Dr. Ken Hunt) Date: Thu, 7 Apr 94 14:33 MET DST Subject: Neural Control Colloquium bei Daimler-Benz in Berlin Message-ID: IEE Colloquium on Advances in Neural Networks for Control and Systems ------------------------------------ Date: 25-27 May 1994 Venue: Systems Technology Research, Daimler-Benz AG, Berlin, Germany Co-sponsors: IEE German Centre The Michael Faraday Institution e.V. Daimler-Benz AG Provisional Programme --------------------- Wednesday 25 May ---------------- 18:30 -- 20:30 Welcoming reception and buffet Thursday 26 May --------------- 8:30 Registration 9:30 -- 10:15 Supervised learning and divide-and-conquer via the EM algorithm EM = Expectation-Maximization M. Jordan (Massachusetts Institute of Technology, USA) 10:15 -- 10:45 The TACOMA algorithm for reflective growing of neural networks TACOMA = TAsk decomposition, COrrelation Measures and local Attention neurons J. Lange, H-M. Voigt and D. 
Wolf (Center for Applied Computer Science, and Technical University of Berlin, Berlin, Germany) Pause 11:15 -- 12:00 The ASMOD algorithm - some theoretical and practical results ASMOD = Adaptive Spline Modelling of Observation Data T. Kavli and E. Weyer (SINTEF SI, Oslo, Norway) 12:00 -- 12:30 Semi-empirical modelling of non-linear dynamical systems T. A. Johansen (Norwegian Institute of Technology, Trondheim, Norway) Lunch 14:00 -- 14:45 Neural networks for control of industrial processes B. Schuermann (Siemens AG, Munich, Germany) 14:45 -- 15:15 Data analysis by means of Kohonen feature maps for load forecast in power systems S. Heine and I. Neumann (Hochschule fuer Technik, Leipzig, and BEST Data Engineering GmbH, Germany) Pause 15:45 -- 16:15 Improved prediction of the corrosion behaviour of car body steel using a Kohonen self-organising map W. Kessler, R. Kessler, M. Kraus (Fachhochschule fuer Technik und Wirtschaft, Reutlingen, Germany), and R. Kuebler (Mercedes-Benz AG, Sindelfingen, Germany) 16:15 -- 16:45 Adaptive neural network control of the temperature in an oven O. Dubois, J. Nicolas and A. Billat (UFR Sciences Exactes et Naturelles, Reims, France) 16:45 -- 17:15 Exothermic heat estimation using fuzzy-neural nets for a batch reactor temperature control system E. Cuellar, J. Coronado, C. Moreno and J. Izquierdo (University of Valladolid, Spain) 19:30 Colloquium Dinner Friday 27 May ------------- 9:00 -- 9:45 On interpolating memories for learning control H. Tolle (Technische Hochschule Darmstadt, Germany) 9:45 -- 10:15 Comparison of optimisation techniques for training feedforward networks G. Irwin and G. Lightbody (Queen's University of Belfast, UK) 10:15 -- 10:45 Dynamic systems in neural networks K. Warwick, C. Kambhampati (University of Reading, UK) and P. Parks (University of Oxford, UK) Pause 11:15 -- 12:00 Adaptive neurocontrol of MIMO systems based on stability theory MIMO = Multi-input Multi-Output J-M Renders, M. Saerens, and H. 
Bersini (Universite Libre de Bruxelles, Belgium) 12:00 -- 12:30 Learning in neural networks and stochastic approximation with averaging P. Shcherbakov, S. Tikhonov (Institute of Control Sciences, Moscow, Russia) and J. Mason (University of Reading, UK) Lunch 14:00 -- 14:45 Adaptive neurofuzzy systems for difficult modelling and control problems M. Brown and C. Harris (University of Southampton, UK) 14:45 -- 15:15 Constructive training - industrial perspectives R. Murray-Smith, K. Hunt and F. Lohnert (Daimler-Benz AG, Berlin, Germany) Pause 15:45 -- 16:15 Equalisation using non-linear adaptive clustering C. Cowan (Loughborough University of Technology, UK) 16:15 -- 16:45 Hierarchical competitive net architecture T. Long (NeuroDyne Inc.) and E. Hanzevack (University of South Carolina, USA) REGISTRATION FORM "ADVANCES IN NEURAL NETWORKS FOR CONTROL AND SYSTEMS" Colloquium from Wednesday, 25 - Friday, 27 May 1994 at The Systems Technology Research Centre, Daimler-Benz AG, Berlin, Germany The IEE is registered as a charity IEE VAT Reg No: 240-3420-16 1. Surname: Title: 2. Address for correspondence : Postal Code: Tel No: 3. Class of Membership of IEE or IEEIE: Membership No: 4. Details for name badge Name: Company affiliation: 5. Special dietary requirements 6. How did you hear about this event (programme booklet, direct circular from IEE, IEE News, other press, Email bulletin board, training department etc)? 
PLEASE BOOK EARLY AS NUMBERS ARE LIMITED If you have any queries, please ring the Secretary, (LS(D)CA), on ++ 44 71 240 1871, Extension 2206 REGISTRATION FEES: (includes admission, digest, lunches, refreshments and Colloquium Dinner) IEE Members: (*) GBP 84.00 (includes VAT @ GBP 12.51) IEE Retired, Unemployed and Student Members: (#) NO CHARGE Non-Members: GBP 140.00 (includes VAT @ GBP 20.85) Retired, Unemployed and Student Non-Members: (#) GBP 42.00 (includes VAT @ GBP 6.25) I will/will not be attending the Welcoming Buffet on Wednesday, 25 May from 6.30-8.30pm TOTAL REMITTANCE ENCLOSED (Cheques should be made payable to "IEE" and crossed) INVOICE FACILITIES WILL ONLY BE CONSIDERED UPON RECEIPT OF AN OFFICIAL ORDER NUMBER AND AN ADMINISTRATIVE CHARGE OF GBP 5.00 + VAT WILL BE MADE. PLEASE CHARGE TO MY CREDIT CARD - please include number and expiry date of card Access Visa Master card American Express Card Holder's Name Registered address of Card Holder if different from above NOTES (*) Members of the IEEIE, Eurel Member Associations, and Daimler-Benz Personnel will be admitted at Members' rates. (#) ALL students must have their applications endorsed by their Professor or Head of Department. REMITTANCE MUST ACCOMPANY THIS COMPLETED FORM and be returned to: David Penrose Institution of Electrical Engineers Savoy Place London WC2R 0BL Email: dpenrose at iee.org.uk Tel: + 44 71 344 5417 FAX: + 44 71 497 3633  From jose at learning.siemens.com Thu Apr 7 16:46:03 1994 From: jose at learning.siemens.com (Stephen Hanson) Date: Thu, 7 Apr 1994 16:46:03 -0400 (EDT) Subject: NEW Machine Learning Volume Message-ID: <0hd74=K1GEMnA751M0@tractatus.siemens.com> This is a new volume just published that may be of interest to you: COMPUTATIONAL LEARNING THEORY and NATURAL LEARNING SYSTEMS Constraints and Prospects MIT/BRADFORD 1994. Editors, S. Hanson, G. Drastal, R. 
Rivest Table of Contents FOUNDATIONS Daniel Osherson, Massachusetts Institute of Technology, Michael Stob, Calvin College, and Scott Weinstein, University of Pennsylvania. {\em Logic and Learning} Ranan Banerji, Saint Joseph's University. {\em Learning Theoretical Terms} Stephen Judd, Siemens Corporate Research. {\em How Network Complexity is Affected by Node Function Sets} Diane Cook, University of Illinois. {\em Defining the Limits of Analogical Planning} REPRESENTATION and BIAS Larry Rendell and Raj Seshu, University of Illinois. {\em Learning Hard Concepts Through Constructive Induction: Framework and Rationale} Harish Ragavan and Larry Rendell, University of Illinois. {\em The Utility of Domain Knowledge for Learning Disjunctive Concepts} George Drastal, Siemens Corporate Research. {\em Learning in an Abstraction Space} Raj Seshu, University of Denver. {\em Binary Decision Trees and an ``Average-Case'' Model for Concept Learning: Implications for Feature Construction and the Study of Bias} Richard Maclin and Jude Shavlik, University of Wisconsin, Madison. {\em Refining Algorithms with Knowledge-Based Neural Networks: Improving the Chou-Fasman Algorithm for Protein Folding} SAMPLING PROBLEMS Michael Kearns and Robert Schapire, Massachusetts Institute of Technology. {\em Efficient Distribution-free Learning of Probabilistic Concepts} Marek Karpinski and Thorsten Werther, University of Bonn. {\em VC Dimension and Sampling Complexity of Learning Sparse Polynomials and Rational Functions} Haym Hirsh and William Cohen, Rutgers University. {\em Learning from Data with Bounded Inconsistency: Theoretical and Experimental Results} Wolfgang Maass and Gyorgy Turan, University of Illinois. {\em How Fast Can a Threshold Gate Learn?} Eric Baum, NEC Research Institute. {\em When are k-Nearest Neighbor and Back Propagation Accurate for Feasible Sized Sets of Examples?} EXPERIMENTAL Ross Quinlan, University of Sydney. 
{\em Comparing Connectionist and Symbolic Learning Methods} Andreas Weigend and David Rumelhart, Stanford University. {\em Weight-Elimination and Effective Network Size} Ronald Rivest and Yiqun Yin, Massachusetts Institute of Technology. {\em Simulation Results for a New Two-Armed Bandit Heuristic} Susan Epstein, Hunter College. {\em Hard Questions About Easy Tasks: Issues From Learning to Play Games} Lorien Pratt, Rutgers University. {\em Experiments on the Transfer of Knowledge between Neural Networks} Stephen J. Hanson, Ph.D. Head, Learning Systems Department SIEMENS Research 755 College Rd. East Princeton, NJ 08540  From jbower at smaug.bbb.caltech.edu Fri Apr 8 15:26:30 1994 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Fri, 8 Apr 94 12:26:30 PDT Subject: CNS*94 registration Message-ID: <9404081926.AA09267@smaug.bbb.caltech.edu> ****************************************************** Registration Information for the Third Annual Computation and Neural Systems Meeting CNS*94 July 21 - July 26, 1994 Monterey, California ****************************************************** CNS*94: Registration is now open for this year's Computation and Neural Systems meeting (CNS*94). This is the third in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. As in previous years, this meeting will bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. The meeting will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. Meeting Structure: The meeting will be organized in two parts: three days of research presentations, and two days of follow-up workshops. 
Most presentations will be based on submitted papers, with 10 presentations by specially invited speakers. 111 submitted papers have been accepted for presentation at the meeting based on peer review. Details on the agenda can be obtained via ftp or through telnet as described below. Location: The two components of the meeting will take place in two different locations on the Monterey Peninsula on the coast south of San Francisco, California. The main meeting will be held at the Doubletree Hotel in downtown Monterey itself. This modern hotel is located at Monterey's historic Fisherman's Wharf. Following the main meeting, two days of post-meeting workshops will be held at the Asilomar Conference Center at Asilomar State Beach in Pacific Grove just a few miles away. Main Meeting Accommodations: Accommodations for the main meeting have been arranged at the Doubletree Hotel. We have reserved a block of rooms at a special all-attendee rate of $119 per night single or double occupancy in the conference hotel (that is, 2 people sharing a room would split the $119!). A fixed number of rooms have been reserved for students at the rate of $99 per night single or double occupancy (yes, that means $50 a night per student!). These student room rates are on a first-come-first-served basis, so we recommend acting quickly to reserve these slots. Each additional person per room is a $20 charge. Registering for the meeting WILL NOT result in an automatic room reservation. Instead you must make your own reservations by contacting: the Doubletree Hotel at Fisherman's Wharf Two Portola Plaza Monterey, CA 93940 (408) 649-4511 NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 18, 1994. When making reservations by phone, make sure to indicate that you are registering for the CNS*94 meeting. Students will be asked to verify their status on check-in with a student ID or other documentation. 
Workshop Accommodations: Housing for the workshops will be provided on-site at the Asilomar Conference Center. Transportation to the conference center, three nights lodging and all meals are included in the workshop registration fee of $250. Acknowledgment of registration for the workshops and payment of fees WILL constitute a guarantee of accommodations at the workshop. However, space at the workshops is limited to 125, so early registration is highly recommended. Registration Fees: The strong response to this year's call for papers, coupled with grant support, has allowed us to reduce the registration fees from previous years. Accordingly, fees this year will be: Main meeting registration received before June 15, 1994: Students, main meeting: $ 90 Others, main meeting: $ 200 Main meeting registration after June 15, 1994: Students, main meeting: $ 125 Others, main meeting: $ 235 Workshops: $250 (includes all meals and three nights lodging) Banquet: Registration for the main meeting includes a single ticket to the annual CNS Banquet, this year to be held within the Monterey Aquarium on Thursday evening, July 21st. Additional Banquet tickets can be purchased for $35 per person. ****************************************************** Additional Meeting Information: Additional information about the meeting is available via FTP over the internet (address: 131.215.137.69). To obtain information about the agenda, currently registered attendees, or paper abstracts, the initial sequence is the same (Things you type are in ""): > yourhost% "ftp 131.215.137.69" > 220 mordor FTP server (SunOS 4.1) ready. Name (131.215.137.69:): "ftp" > 331 Guest login OK, send ident as password. Password: "yourname at yourhost.yoursite.yourdomain" > 230 Guest login OK, access restrictions apply. ftp> "cd cns94" > 250 CWD command successful. 
ftp> At this point you can do one of several things: 1) To examine what is available type: "ls" Directory as of 4/5/94: abstracts94 agenda94 attendees94 general_information94 registration94 rooms_to_share94 travel_arrangements94 travel_grants94 workshops94 2) To download specific files type: "get <filename>" for example: "get agenda94" or "get attendees94" Once you have obtained the information you want type: "quit" ****************************************************** Registration Procedure: Participants can register for the meeting in several different ways: 1) electronically, 2) via email, 3) via regular surface mail. Each method is described below. Please register using only one method. You will receive a confirmation of registration within two weeks. 1) Interactive electronic registration: For those of you with internet connectivity who would like to register electronically for CNS*94, we have provided an internet account through which you may submit your registration information. To use this service you need only "telnet" to "mordor.bbb.caltech.edu" and login as "cns94". No password is required. For example: yourhost% "telnet mordor.bbb.caltech.edu" Trying 131.215.137.69 ... Connected to mordor.bbb.caltech.edu. Escape character is '^]'. SunOS UNIX (mordor) login: "cns94" Now answer all questions (Note that all registration through this electronic service is subject to verification of payment.) 
2) Email registration: For those with easy access to electronic mail, simply fill in the attached registration form and email it to: cns94 at smaug.bbb.caltech.edu 3) Surface mail registration: Finally, for those who elect neither of the above options, or who are paying by means other than credit card, please print out the attached registration form and send it with payment via surface mail to: CNS*94 Registrations Division of Biology 216-76 Caltech Pasadena, CA 91125 Those registering by 1 or 2 above, but paying with check or money order, should send payment to the above address as well with your name and institution clearly marked. Registration becomes effective when payment is received. ****************************************************** CNS*94 REGISTRATION FORM Monterey, California. July 20 - July 26, 1994 ****************************************************** Last Name: First Name: Title: Organization: Address: City: State: Zip : Country: Telephone: email address: Registration Fees: Technical Program -- July 21 - 23 _____ Regular $ 200 ($235 after June 15th) _____ Student $ 90 ($125 after June 15th) _____ Banquet $ 35 (each additional banquet ticket) Post-meeting Workshop -- July 24 - 26 _____ $ 250 (includes round-trip transportation, all meals and lodging) Total Payment: $ ______ Please indicate method of payment : ____ Check or Money Order - Payable in U.S. dollars to CNS*94 - Caltech - Please make sure to indicate CNS*94 and YOUR name on all money transfers. ____ Charge my card: ____ Visa ____ Mastercard ____ American Express number: ________________________________________ Expiration date __________ Name of cardholder ______________________________ Signature as appears on card (for mailed in applications): _________________________ Date ____________ ===================================================== Additional Questions: Did you submit an abstract & summary ? 
( ) yes ( ) no title : Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note: Some grants to cover partial travel expenses may become available. Do you wish further information ? ( ) yes ( ) no ======================================================  From tesauro at watson.ibm.com Fri Apr 8 16:14:12 1994 From: tesauro at watson.ibm.com (Gerald Tesauro) Date: Fri, 8 Apr 94 16:14:12 EDT Subject: 2nd Call for Workshops-- NIPS*94 (submission deadline May 21) Message-ID: CALL FOR PROPOSALS NIPS*94 Post-Conference Workshops December 2 and 3, 1994 Vail, Colorado Following the regular program of the Neural Information Processing Systems 1994 conference, workshops on current topics in neural information processing will be held on December 2 and 3, 1994, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: active learning and control, architectural issues, attention, Bayesian analysis, benchmarking neural network applications, computational complexity issues, computational neuroscience, fast training techniques, genetic algorithms, music, neural network dynamics, optimization, recurrent nets, rules and connectionist models, self-organization, sensory biophysics, speech, time series prediction, vision and audition, implementations, and grammars. The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics. Representation of alternative viewpoints and panel-style discussions are particularly encouraged.
Individuals proposing to chair a workshop will have responsibilities including: 1) arranging short informal presentations by experts working on the topic, 2) moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the ``gong show''), and 3) writing a brief summary. Submission Procedure: Interested parties should submit a short proposal for a workshop of interest postmarked by May 21, 1994. (Express mail is not necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, the proposed length of the workshop (one day or two days), and the planned format. It should motivate why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest. Mail submissions to: Todd K. Leen, NIPS*94 Workshops Chair Department of Computer Science and Engineering Oregon Graduate Institute of Science and Technology P.O. Box 91000 Portland Oregon 97291-1000 USA (e-mail: tleen at cse.ogi.edu) Name, mailing address, phone number, fax number, and e-mail net address should be on all submissions. PROPOSALS MUST BE POSTMARKED BY MAY 21, 1994 Please Post  From jramire at conicit.ve Fri Apr 8 16:38:31 1994 From: jramire at conicit.ve (Jose Ramirez G. 
(AVINTA) Date: Fri, 8 Apr 1994 16:38:31 -0400 (AST) Subject: CFP IBEROAMERICAN CONGRESS ON AI Message-ID: <9404082038.AA25999@dino.conicit.ve> * PLEASE POST * PLEASE POST * PLEASE POST * PLEASE POST * PLEASE POST CALL FOR PAPERS IBEROAMERICAN CONGRESS ON ARTIFICIAL INTELLIGENCE IBERAMIA 94 NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CNIASE 94 IBERAMIA 94/CNIASE 94 will be sponsored by the Venezuelan Association for AI -AVINTA-, the Mexican AI Society -SMIA- and the Spanish Association for IA -AEPIA-. The goal of the conference is to promote research and development in Artificial Intelligence and scientific interchange among AI researchers and practitioners. IBERAMIA 94/CNIASE 94 will be hosted by the Centro de Investigaciones Oficina Metropolis and the School of Systems Engineering of the Universidad Metropolitana of Caracas -UNIMET-, between Tuesday 25th October and Friday 28th October 1994. Program Committee Chair: Hector Geffner - USB (Venezuela) Program Committee: Julian Araoz - USB (Venezuela) Jorge Baralt - USB (Venezuela) Francisco Cantu - ITESM (Mexico) Nuria Castell - UPC (Spain) Alberto Castillo - UCLA (Venezuela) Helder Coelho - INESC (Portugal) Nelson Correa - Univ. de los Andes (Colombia) Alvaro del Val - Stanford U. (USA) Felix Garcia Padilla - UDO (Venezuela) Luciano Garcia - UH (Cuba) Francisco Garijo - TID (Spain) Warren Greiff - UDLAP (Mexico) Christian Lemaitre - LANIA (Mexico) Alonso Marquez - USB (Venezuela) Luis Moniz Pereira - UNL (Portugal) Jose Ali Moreno - UCV (Venezuela) Pablo Noriega - INEGI (Mexico) Olga Padron - UH (Cuba) Tarcisio Pequeno - LIA/UFC (Brazil) Javier Pinto - PUC (Chile) Jose Ramirez - UNIMET (Venezuela) Antonio Sanchez Aguilar - UDLA-P (Mexico) Carlos Sierra - CSIC (Spain) Guillermo Simari - UNS (Argentina) Angel Vina - UAM (Spain) Organizing Committee Chair: Adelaide Bianchini - UNIMET Organizing Committee: Antonietta Bosque - UNIMET Preciosa Castro - UNIMET Edna R. 
de Millan - UNIMET Rodrigo Ramirez - UNIMET Irene Torres - AVINTA Supporting Associations: Francisco Garijo - AEPIA Christian Lemaitre - SMIA Jose Ramirez - AVINTA We invite authors to submit papers describing original work in all areas of AI, including but not limited to: Machine Learning Knowledge Acquisition Natural Language Processing Genetic Algorithms Evolutionary Programming Knowledge Based Systems Knowledge Representation and Reasoning Automated Reasoning Knowledge-based Simulation Cognitive Modelling Robotics Case-based Reasoning Distributed Artificial Intelligence Neural Networks Virtual Reality All submissions will be refereed for quality and originality. Authors must submit three (3) copies of their papers (not electronic or fax transmissions) by June 30, 1994 to the following address: AVINTA Apartado 67079 Caracas 1061 Venezuela +58-2-2836942, fax: +58-2-2832689 jramire at conicit.ve or Universidad Metropolitana Centro de Investigaciones Oficina Metropolis Autopista Petare-Guarenas Distribuidor Universidad Terrazas del Avila Caracas 1070-A Venezuela +58-2-2423089, fax: +58-2-2425668 abianc at conicit.ve All copies must be clearly legible. Notification of receipt will be mailed to the first author. Papers can be written in English, Spanish or Portuguese and must be printed on 8 1/2 x 11 inch paper using 12 point type (14 point type for headings). The body of submitted papers must be at most 12 pages. Each copy must have a title page (separate from the body of the paper) containing the title of the paper, names and addresses of all authors, telephone number, fax number, electronic mail address and a short (less than 200 words) abstract. All accepted papers will be published in full length by McGraw-Hill.
Important dates: Deadline for paper submission: June 30, 1994 Notification of acceptance: July 30, 1994 Camera Ready Copy: September 9, 1994 Location: Caracas is located in the north of South America, facing the Caribbean Sea; it is a modern city with enjoyable weather all year round (20 C to 30 C), with many interesting sites including cultural complexes, a historical downtown, shopping malls and excellent hotels and restaurants offering the best food from all over the world. The Simon Bolivar International Airport is 45 minutes from downtown and has regular flights to all major cities in the world.  From tesauro at watson.ibm.com Fri Apr 8 16:12:49 1994 From: tesauro at watson.ibm.com (Gerald Tesauro) Date: Fri, 8 Apr 94 16:12:49 EDT Subject: 2nd Call for Papers-- NIPS*94 (submission deadline May 21) Message-ID: ********* PLEASE NOTE NEW SUBMISSION FORMAT FOR 1994 ********* CALL FOR PAPERS Neural Information Processing Systems -Natural and Synthetic- Monday, November 28 - Saturday, December 3, 1994 Denver, Colorado This is the eighth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. There will also be one day of tutorial presentations (Nov 28) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec 2-3). Major categories for paper submission, and examples of keywords within categories, are the following: Neuroscience: systems physiology, cellular physiology, signal and noise analysis, oscillations, synchronization, inhibition, neuromodulation, synaptic plasticity, computational models.
Theory: computational learning theory, complexity theory, dynamical systems, statistical mechanics, probability and statistics, approximation theory. Implementations: VLSI, optical, parallel processors, software simulators, implementation languages. Algorithms and Architectures: learning algorithms, constructive/pruning algorithms, localized basis functions, decision trees, recurrent networks, genetic algorithms, combinatorial optimization, performance comparisons. Visual Processing: image recognition, coding and classification, stereopsis, motion detection, visual psychophysics. Speech, Handwriting and Signal Processing: speech recognition, coding and synthesis, handwriting recognition, adaptive equalization, nonlinear noise removal. Applications: time-series prediction, medical diagnosis, financial analysis, DNA/protein sequence analysis, music processing, expert systems. Cognitive Science & AI: natural language, human learning and memory, perception and psychophysics, symbolic reasoning. Control, Navigation, and Planning: robotic motor control, process control, navigation, path planning, exploration, dynamic programming. Review Criteria: All submitted papers will be thoroughly refereed on the basis of technical quality, novelty, significance and clarity. Submissions should contain new results that have not been published previously. Authors are encouraged to submit their most recent work, as there will be an opportunity after the meeting to revise accepted manuscripts before submitting final camera-ready copy. ********** PLEASE NOTE NEW SUBMISSIONS FORMAT FOR 1994 ********** Paper Format: Submitted papers may be up to eight pages in length. The page limit will be strictly enforced, and any submission exceeding eight pages will not be considered. Authors are encouraged (but not required) to use the NIPS style files obtainable by anonymous FTP at the sites given below. 
Papers must include physical and e-mail addresses of all authors, and must indicate one of the nine major categories listed above, keyword information if appropriate, and preference for oral or poster presentation. Unless otherwise indicated, correspondence will be sent to the first author. Submission Instructions: Send six copies of submitted papers to the address given below; electronic or FAX submission is not acceptable. Include one additional copy of the abstract only, to be used for preparation of the abstracts booklet distributed at the meeting. Submissions mailed first-class within the US or Canada must be postmarked by May 21, 1994. Submissions from other places must be received by this date. Mail submissions to: David Touretzky NIPS*94 Program Chair Computer Science Department Carnegie Mellon University 5000 Forbes Avenue Pittsburgh PA 15213-3890 USA Mail general inquiries/requests for registration material to: NIPS*94 Conference NIPS Foundation PO Box 60035 Pasadena, CA 91116-6035 USA (e-mail: nips94 at caltech.edu) FTP sites for LaTeX style files "nips.tex" and "nips.sty": helper.systems.caltech.edu (131.215.68.12) in /pub/nips b.gp.cs.cmu.edu (128.2.242.8) in /usr/dst/public/nips NIPS*94 Organizing Committee: General Chair, Gerry Tesauro, IBM; Program Chair, David Touretzky, CMU; Publications Chair, Joshua Alspector, Bellcore; Publicity Chair, Bartlett Mel, Caltech; Workshops Chair, Todd Leen, OGI; Treasurer, Rodney Goodman, Caltech; Local Arrangements, Lori Pratt, Colorado School of Mines; Tutorials Chairs, Steve Hanson, Siemens and Gerry Tesauro, IBM; Contracts, Steve Hanson, Siemens and Scott Kirkpatrick, IBM; Government & Corporate Liaison, John Moody, OGI; Overseas Liaisons: Marwan Jabri, Sydney Univ., Mitsuo Kawato, ATR, Alan Murray, Univ. of Edinburgh, Joachim Buhmann, Univ. of Bonn, Andreas Meier, Simon Bolivar Univ.
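The anonymous-FTP retrieval of the style files described above can also be scripted. Below is a minimal sketch using Python's standard ftplib; the host and directory are the first of the two sites listed in the call, and since the 1994 server is of course no longer reachable, this is illustrative only.

```python
# Sketch only: fetches "nips.tex" and "nips.sty" by anonymous FTP,
# mirroring the manual instructions in the call for papers. Host and
# path come from the announcement; the 1994 server no longer exists.
from ftplib import FTP

def fetch_nips_styles(host="helper.systems.caltech.edu", path="/pub/nips"):
    ftp = FTP(host)          # connect to the archive host
    ftp.login()              # anonymous login
    ftp.cwd(path)
    for name in ("nips.tex", "nips.sty"):
        with open(name, "wb") as out:
            ftp.retrbinary(f"RETR {name}", out.write)
    ftp.quit()
```

The second site given in the call (b.gp.cs.cmu.edu, /usr/dst/public/nips) could be substituted for the defaults.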
DEADLINE FOR SUBMISSIONS IS MAY 21, 1994 (POSTMARKED) -please post-  From furu at bioele.nuee.nagoya-u.ac.jp Mon Apr 11 09:32:21 1994 From: furu at bioele.nuee.nagoya-u.ac.jp (Takeshi Furuhashi) Date: Mon, 11 Apr 94 09:32:21 JST Subject: Final Call for Papers of WWW Message-ID: <9404110032.AA01802@gemini.bioele.nuee.nagoya-u.ac.jp> FINAL CALL FOR PAPERS 1994 IEEE/Nagoya University World Wisemen/women Workshop (WWW) ON FUZZY LOGIC AND NEURAL NETWORKS/GENETIC ALGORITHMS -Architecture and Applications for Knowledge Acquisition/Adaptation- August 9 and 10, 1994 Nagoya University Symposion Chikusa-ku, Nagoya, JAPAN Sponsored by Nagoya University Co-sponsored by IEEE Industrial Electronics Society Technically Co-sponsored by IEEE Neural Network Council IEEE Robotics and Automation Society International Fuzzy Systems Association Japan Society for Fuzzy Theory and Systems North American Fuzzy Information Processing Society Society of Instrument and Control Engineers Robotics Society of Japan There is growing interest in technologies that combine fuzzy logic with neural networks, and fuzzy logic with genetic algorithms, for acquiring experts' knowledge, modeling nonlinear systems, and realizing adaptive systems. The goal of the 1994 IEEE/Nagoya University WWW on Fuzzy Logic and Neural Networks/Genetic Algorithm is to give its attendees opportunities to exchange information and ideas on various aspects of the Combination Technologies and to stimulate and inspire pioneering work in this area. To keep the quality of these workshops high, only a limited number of people are accepted as participants of the workshops. The papers presented at the workshop are planned to be edited and published by Springer-Verlag.
TOPICS: Combination of Fuzzy Logic and Neural Networks, Combination of Fuzzy Logic and Genetic Algorithm, Learning and Adaptation, Knowledge Acquisition, Modeling, Human Machine Interface IMPORTANT DATES: Submission of Abstracts of Papers : April 30, 1994 Acceptance Notification : May 31, 1994 Final Manuscript : July 1, 1994 Abstracts should be type-written in English within 4 pages of A4 size or Letter sized sheet. Use Times or one of the similar typefaces. The size of the letters should be 10 points or larger. All correspondence and submission of papers should be sent to Takeshi Furuhashi, General Chair Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN TEL: +81-52-789-2792 FAX: +81-52-789-3166 E mail: furu at bioele.nuee.nagoya-u.ac.jp This workshop will be held just after the 3rd International Conference of Fuzzy Logic, Neural Nets and Soft Computing(IIZUKA'94) from 1 to 7, '94. For speakers of excellent papers, assistance of travel expenses from Iizuka to Nagoya and Nagoya to an airport in Japan as well as lodging fee in Nagoya will be provided by the Steering committee of WWW. Candidates are recommended to attend the conference in Iizuka. IEEE/Nagoya University WWW: IEEE/Nagoya University WWW (World Wisemen/women Workshop) is a series of workshops sponsored by Nagoya University and co-sponsored by IEEE Industrial Electronics Society. City of Nagoya, located two hours away from Tokyo, has many electro-mechanical industries in its surroundings such as Mitsubishi, TOYOTA, and their allied companies. Nagoya is a mecca of robotics industries, machine industries and aerospace industries in Japan. The series of workshops will give its attendees opportunities to exchange information on advanced sciences and technologies and to visit industries and research institutes in this area. 
WORKSHOP ORGANIZATION Honorary Chair : Masanobu Hasatani (Dean, School of Engineering, Nagoya University) General Chair : Takeshi Furuhashi (Nagoya University) Advisory Committee: Chair : Toshio Fukuda (Nagoya University) Toshio Goto (Nagoya University) Fumio Harashima (University of Tokyo) Hiroyasu Nomura (Nagoya University) Yoshiki Uchikawa (Nagoya University) Takeshi Yamakawa (Kyushu Institute of Technology) Steering Committee: H.Berenji (NASA Ames Research Center) W.Eppler (University of Karlsruhe) I.Hayashi (Hannan University) Y.Hayashi (Ibaraki University) H.Ichihashi (Osaka Prefectural University) A.Imura (Laboratory for International Fuzzy Engineering) M.Jordan (Massachusetts Institute of Technology) C.-C.Jou (National Chiao Tung University) E.Khan (National Semiconductor) R.Langari (Texas A & M University) S.Nakanishi (Tokai University) H.Takagi (Matsushita Electric Industrial Co., Ltd.) K.Tanaka (Kanazawa University) M.Valenzuela-Rendon (Instituto Tecnologico y de Estudios Superiores de Monterrey) L.-X.Wang (University of California Berkeley) T.Yamaguchi (Utsunomiya University) J.Yen (Texas A & M University) ===================================================== Takeshi Furuhashi Dept. of Information Electronics, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-01 Japan Tel.(052)789-2792 Fax.(052)789-3166 E-mail furu at bioele.nuee.nagoya-u.ac.jp =====================================================  From amari at sat.t.u-tokyo.ac.jp Mon Apr 11 13:52:37 1994 From: amari at sat.t.u-tokyo.ac.jp (Shun-ichi Amari) Date: Mon, 11 Apr 94 13:52:37 JST Subject: Announcement of new paper, Information Geometry and EM algorithm Message-ID: The following paper is now available via anonymous ftp from the neuroprose archive. It is a technical report, METR 94-4, University of Tokyo, and will appear in "Neural Networks". It consists of two files: am19.ps for the main body (85 pages) and figs.ps for the figures.
If you have any problems, contact mura at sat.t.u-tokyo.ac.jp --------- FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/amari.geometryofem.tar.Z This archive contains the two files, am19.ps and figs.ps. Use the Unix commands uncompress and tar to unpack it into the two files. --------- "Information Geometry of the EM and em Algorithms for Neural Networks" by Shun-ichi Amari In order to realize an input-output relation given by noise-contaminated examples, it is effective to use a stochastic model of neural networks. A model network includes hidden units whose activation values are neither specified nor observed. It is useful to estimate the hidden variables from the observed or specified input-output data based on the stochastic model. Two algorithms, the EM- and em-algorithms, have so far been proposed for this purpose. The EM-algorithm is an iterative statistical technique based on the conditional expectation, and the em-algorithm is a geometrical one given by information geometry. The em-algorithm iteratively minimizes the Kullback-Leibler divergence in the manifold of neural networks. These two algorithms are equivalent in most cases. The present paper gives a unified information-geometrical framework for studying stochastic models of neural networks, by focusing on the EM and em algorithms, and proves a condition which guarantees their equivalence. Examples include 1) Boltzmann machines with hidden units, 2) mixtures of experts, 3) stochastic multilayer perceptron, 4) normal mixture model, 5) hidden Markov model, among others.  From Christian.Lehmann at di.epfl.ch Tue Apr 12 10:06:24 1994 From: Christian.Lehmann at di.epfl.ch (Christian Lehmann) Date: Tue, 12 Apr 94 16:06:24 +0200 Subject: neural hw paper available Message-ID: <9404121406.AA07536@lamisun.epfl.ch> N E W P A P E R S AVAILABLE O N NEUROCOMPUTING HARDWARE The following papers are now available via anonymous ftp from the neuroprose archive.
There are three papers on different subjects related to our work on neurocomputing hardware. Should you experience any problems, please do not hesitate to contact us: Christian Lehmann The MANTRA Center for neuromimetic systems MANTRA-DI-EPFL CH-1015 Lausanne Switzerland or lehmann at di.epfl.ch --------- Author : M. A. Viredaz Title : MANTRA I: An SIMD Processor Array for Neural Computation In : Proceedings of the Euro-ARCH'93 Conference FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/viredaz.e-arch93.ps.Z Length : 12 pages note : figure 7 (photograph of the boards) is missing; contact us if you want it. Abstract: This paper presents an SIMD processor array dedicated to the implementation of neural networks. The heart of this machine is a systolic array of simple processing elements (PEs). A VLSI custom chip containing 2x2 PEs was built. The machine is designed to sustain sufficient instruction and data flows to keep a utilization rate close to 100%. Finally, this computer is intended to be inserted in a network of heterogeneous nodes. --------- Authors : P. Ienne, M. A. Viredaz Title : GENES IV: A Bit-Serial Processing Element for a Multi-Model Neural-Network Accelerator In : Proceedings of the International Conference on Application Specific Array Processors, 1994 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/ienne.genes.ps.Z Length : 12 pages Abstract: A systolic array of dedicated processing elements (PEs) is presented as the heart of a multi-model neural-network accelerator. The instruction set of the PEs allows the implementation of several widely-used neural models, including multi-layer Perceptrons with the backpropagation learning rule and Kohonen feature maps. Each PE holds an element of the synaptic weight matrix. An instantaneous swapping mechanism of the weight matrix allows the implementation of neural networks larger than the physical PE array.
A systolically-flowing instruction accompanies each input vector propagating in the array. This avoids the need to empty and refill the array when the operating mode of the array is changed. Both the GENES IV chip, containing a matrix of 2x2 PEs, and an auxiliary arithmetic circuit have been manufactured and successfully tested. The MANTRA I machine has been built around these chips. Peak performances of the full system are between 200 and 400 MCPS in the evaluation phase and between 100 and 200 MCUPS during the learning phase (depending on the algorithm being implemented). --------- Author : P. Ienne Title : Architectures for Neuro-Computers: Review and Performance Evaluation In : EPFL Computer Science Department Technical Report 93/21 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/ienne.nnarch.ps.Z Length : 63 pages Abstract: As the field of neural networks matures toward real-world applications, a need for hardware systems to efficiently compute larger networks arises. Several designs have been proposed in recent years and a selection of the more interesting VLSI digital realizations is reviewed here. Limitations of conventional performance measurements are briefly discussed and a different architectural-level evaluation approach is attempted by proposing a number of characteristic performance indexes on idealized architecture classes. As a result of this analysis, some conclusions on the advantages and limitations of the different architectures and on the feasibility of the proposed approach are drawn. Architectural aspects that require further development are also emphasized. ********* See also Authors : C. Lehmann, M. Viredaz and F. Blayo Title : A Generic Systolic Array Building Block for Neural Networks with On-Chip Learning In : IEEE Trans.
on NN, 4(3), May 1993  From bossan at bio1.peb.ufrj.br Tue Apr 12 19:06:00 1994 From: bossan at bio1.peb.ufrj.br (Marcelo de Carvalho Bossan) Date: Tue, 12 Apr 94 18:06:00 EST Subject: world congress Message-ID: <9404122106.AA01606@bio1> Message to "Neural Networkers" World Congress on Medical Physics and Biomedical Engineering - RIO'94 The next World Congress on Medical Physics and Biomedical Engineering will be held in Rio de Janeiro, Brazil, 21-26 August 1994. The scientific program includes approximately 1900 papers. Round table, state-of-the-art and tutorial sessions are being finalized, and nearly 250 of the world's leading experts have so far sent abstracts for their invited presentations. Issues in Neural Networks (NN) appear mainly under the topic Expert and Decision Support Systems (chairmen: N. Saranummi, Finland, and R. J. Machado, Brazil). A tutorial lecture on Hybrid Expert Systems will be presented by A. Rocha (Brazil) and oral sessions are programmed on both Neural Networks and Hybrid Expert Systems. A Round-table on NN in Electrocardiology (speakers: R. G. Mark, USA, N. Maglaveras, Greece, L. Glass, Austria and R. M. Sabbatini, Brazil) is also included in the program, in addition to scientific sessions focusing on NN in various applications. Attracting the largest possible audience is our present priority, as we can now present an extensive scientific program of high standard. We particularly hope that a large number of students will be able to participate as we offer courses and over 30 tutorial sessions. We are currently also making arrangements for low-priced board and lodging for students. Cut-price air fares to Rio (or packages including hotels) are available. Support from funding agencies has allowed reduced registration fees for Latin American participants.
Together with the Final Announcement (and the final letter to the authors) we will give details of the reception of conference participants at the airport and transport to hotels, which are currently being organized. A free bus service will operate on Saturday (20th August) and Sunday (21st) during peak arrival hours. One of the taxi companies operating within the airport will be available to participants arriving at other times (fixed-price fares, around US$ 15), with an information counter for the congress and hotels. We look forward to welcoming you here, where you can enjoy stimulating scientific presentations and discussions, renew old friendships and make new ones, all in exotic and beautiful surroundings. For more information: World Congress on Medical Physics and Biomedical Engineering Rio de Janeiro - 21-26 August 1994 Congrex do Brasil Rua do Ouvidor 60, Gr.414 Tel: +55 (021) 224-6080 - Fax: +55 (021) 231-1492 or bossan at bio1.peb.ufrj.br  From lss at compsci.stirling.ac.uk Wed Apr 13 09:33:50 1994 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 13 Apr 94 13:33:50 GMT (Wed) Subject: New TR available Message-ID: <9404131333.AA06530@uk.ac.stir.cs.peseta> ***DO NOT FORWARD TO OTHER GROUPS*** University of Stirling (Scotland), Centre for Cognitive and Computational Neuroscience.... CCCN Technical report CCCN-15 Activation Functions, Computational Goals and Learning Rules for Local Processors with Contextual Guidance. Information about context can enable local processors to discover latent variables that are relevant to the context within which they occur, and it can also guide short-term processing. For example, Becker and Hinton (1992) have shown how context can guide learning, and Hummel and Biederman (1992) have shown how it can guide processing in a large neural net for object recognition.
This paper therefore studies the basic capabilities of a local processor with two distinct classes of inputs : receptive field inputs that provide the primary drive and contextual inputs that modulate their effects. The contextual predictions are used to guide processing without confusing them with the receptive field inputs. The processor's transfer function must therefore distinguish these two roles. Given these two classes of input the information in the output can be decomposed into four disjoint components to provide a space of possible goals in which the unsupervised learning of Linsker (1988) and the internally supervised learning of Becker and Hinton (1992) are special cases. Learning rules are derived from an information-theoretic objective function, and simulations show that a local processor trained with these rules and using an appropriate activation function has the elementary properties required. This report is available by anonymous FTP from ftp.cs.stir.ac.uk in the directory pub/tr/cccn The filename is TR15.ps.Z (and, as usual, this needs decompress'd, and the postscript printed.) As a last resort, hard copy may be available: email lss at cs.stir.ac.uk with your postal address...  From L.P.OMard at lut.ac.uk Wed Apr 13 17:40:05 1994 From: L.P.OMard at lut.ac.uk (L.P.OMard) Date: Wed, 13 Apr 94 17:40:05 bst Subject: Latest LUTEar Core Routines Library (1.5.0) Message-ID: <9404131640.AA03083@hpc.lut.ac.uk> Dear All, Please find below the README file for the latest version of the LUTEar Core Routines Library (CRL). The UNIX and Macintosh (THINK C 5.0) platform versions are now available (the MSDOS, Borland C, version will be ready by Friday at the latest) via anonymous FTP from:- suna.lut.ac.uk (131.231.16.2): /public/hulpo/lutear Connect via FTP then login with user name "anonymous" and give your e-mail address as the password. 
Download and read the "INSTALL150" file from the "/public/hulpo/lutear" directory (as also given above), then follow the installation procedure for your platform. If you have any problems at all, do not hesitate to get in contact with me. Any comments, improvements, additions or corrections you may wish to suggest are very welcome; it is only by direct feedback from users that I can ensure that the Core Routines Library is a delight to use, as well as implementing state-of-the-art auditory models. ..Lowel. 1. Introduction As computer modelling of the auditory system increased in complexity the need for common working tools became more pressing. Such tools are necessary to allow the rapid dissemination of new computer code, and to permit other members of the scientific community to replicate and challenge published results. The auditory models developed by the Speech and Hearing Laboratory, at Loughborough University of Technology (UK), have received much attention, due principally to their simple form and the many published papers in which the models are used to explain auditory phenomena. The many requests for the computer code of the model simulations led to the group releasing the LUTEar Core Routines Library (CRL, version 1.0.0, October 1993) as a computational platform and set of coding conventions which supports a modular approach to auditory system modelling. The system is written in ANSI-C and works on a wide range of operating systems. LUTEar has now been consolidated and much improved in the latest release (version 1.5.0). The CRL brings together established models, developed by the group, and also contributed by other researchers in the field, which simulate various stages in the auditory process. Since the first release, the LUTEar CRL has been tested and used both at the originating laboratory and at many other sites.
It has been used as a tool for speech processing, speech and voice analysis as well as in the investigation of auditory phenomena, for which it was primarily constructed. This latest version of the CRL is a product of the proving ground to which it was subjected, and we hope that it will be as well received as was the first version. Included with this release is a comprehensive series of test programs. These programs were used to test the CRL routines; they reproduce the behaviour of the respective published models included. The programs also provide examples of how the CRL may be used in auditory investigation programs. In addition the programs read data from parameter files, and thus can be readily used to investigate further the behaviour of the models included in the CRL. The CRL routines have been subjected as much as possible to careful and exhaustive testing. No system, however, is infallible so it is hoped that, with the gentle admonitions of the library's users, any problems or omissions will be quickly corrected. In addition it is expected that the library will be augmented by further models as the scientific endeavour continues. Many weeks have been required to get the manual into its current form. It is not perfect, so gentle admonitions and suggested changes/additions are invited. 1.1. CRL Features The library has a modular structure which can be used to create auditory investigation/application systems or incorporated in existing code, as required. The library is intuitive in application, and has comprehensive error reporting embedded in efficient code. All the modules conform to a simple standard format. The design allows for plugging and unplugging alternative models of the same component auditory process for purposes of comparison. Ultimately the CRL is a development based on the meld of experimental investigation methods and the tenets of good software engineering practice. 
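The plug-and-play modularity described above can be sketched in miniature. The example below is in Python for brevity (the CRL itself is ANSI-C), and every name in it is invented for illustration rather than taken from the library: the point is only the design principle that stages sharing one interface can be swapped freely within a processing chain.

```python
# Illustrative sketch only: none of these names come from the CRL.
# Every processing stage exposes the same interface, so alternative
# models of one auditory stage can be plugged in or out without
# touching the rest of the chain.
class Stage:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def process(self, signal):
        # apply this stage's model sample-by-sample
        return [self.fn(x) for x in signal]

def run_chain(stages, signal):
    for stage in stages:        # each stage's output feeds the next stage
        signal = stage.process(signal)
    return signal

# Two interchangeable models of the same (hypothetical) filtering stage,
# followed by a rectification stage:
linear_bm = Stage("linearBM", lambda x: 0.5 * x)
nonlinear_bm = Stage("nonlinearBM", lambda x: x / (1 + abs(x)))
rectify = Stage("halfWaveRectify", lambda x: max(x, 0.0))

out = run_chain([linear_bm, rectify], [1.0, -2.0, 3.0])      # [0.5, 0.0, 1.5]
out2 = run_chain([nonlinear_bm, rectify], [1.0, -2.0, 3.0])  # [0.5, 0.0, 0.75]
```

Swapping `linear_bm` for `nonlinear_bm` changes only the model of that one stage, which is the comparison-by-substitution workflow the CRL design is meant to support.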
The following is a list of the principal features of the CRL:- o Modular Structure; o Processing stage data can be handled by a single unit; o Processing stage units can link to data from other stages; o Multi-channel data is handled invisibly; o Efficient algorithms are used throughout; o Meaningful routine and variable names are used; o All routines are prefixed by their module name; o Comprehensive error handling incorporated in routines. 1.1.1 Main features new in version 1.5.0 o Improved manual: greater detail with over 65 figures and an index. o Sound data file format reading/writing support; o Connection management system (invisible to user); o Modules can now read/print their own parameters; o Generic programming introduced; o New analysis routines, including FFT's; o Binaural processing support; o Non-linear basilar membrane filter model;* o Stochastic inner hair cell model;* o McGregor neural cell model; o Dendrite filter model; o Spike generation module (for Meddis86 IHC model output); o New stimulus generation modules. o Parameter files can have comment or blank lines; o Direction of warnings and error messages to a specified file; * These models are still in development, prior to publishing, but they have been included for those who may wish to look at them. +-------------------------+-----------------------------------------------+ |Dr. Lowel P. O'Mard | /\ / \ Speech & Hearing | |Dept. of Human Sciences, | /\/\ /\/ \/ /\ \ /\ Laboratory | |University of Technology,|_/\/\/ /\ \/\/ /\ /\/ \ \/ /\/\_ /\___ | |Loughborough, | \/\/ \/\/\/ \/ /\ \/\/ /\ / | |Leics. LE11 3TU, U.K. | \ /\/\/\ /\/ \ /\/\/ \/ Director: | |L.P.OMard at lut.ac.uk | \/ \/ \/ Prof. 
Ray Meddis | +-------------------------+-----------------------------------------------+  From lss at compsci.stirling.ac.uk Thu Apr 14 07:07:15 1994 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 14 Apr 94 11:07:15 GMT (Thu) Subject: New TR available (corrected version) Message-ID: <9404141107.AA11262@uk.ac.stir.cs.peseta> ***DO NOT FORWARD TO OTHER GROUPS*** (I omitted the authors names, and the IP number of the FTP site) University of Stirling (Scotland), Centre for Cognitive and Computational Neuroscience.... CCCN Technical report CCCN-15 Activation Functions, Computational Goals and Learning Rules for Local Processors with Contextual Guidance. Jim Kay and W.A. Phillips, Centre for Cognitive and Computational Neuroscience, Departments of Mathematics \& Statistics and Psychology University of Stirling Scotland, UK Information about context can enable local processors to discover latent variables that are relevant to the context within which they occur, and it can also guide short-term processing. For example, Becker and Hinton (1992) have shown how context can guide learning, and Hummel and Biederman (1992) have shown how it can guide processing in a large neural net for object recognition. This paper therefore studies the basic capabilities of a local processor with two distinct classes of inputs : receptive field inputs that provide the primary drive and contextual inputs that modulate their effects. The contextual predictions are used to guide processing without confusing them with the receptive field inputs. The processor's transfer function must therefore distinguish these two roles. Given these two classes of input the information in the output can be decomposed into four disjoint components to provide a space of possible goals in which the unsupervised learning of Linsker (1988) and the internally supervised learning of Becker and Hinton (1992) are special cases. 
Learning rules are derived from an information-theoretic objective function, and simulations show that a local processor trained with these rules and using an appropriate activation function has the elementary properties required. This report is available by anonymous FTP from ftp.cs.stir.ac.uk (139.153.254.29) in the directory pub/tr/cccn. The filename is TR15.ps.Z (as usual, this needs to be uncompressed and the PostScript printed). As a last resort, hard copy may be available: email lss at cs.stir.ac.uk with your postal address...  From L.P.OMard at lut.ac.uk Fri Apr 15 09:50:07 1994 From: L.P.OMard at lut.ac.uk (L.P.OMard) Date: Fri, 15 Apr 94 09:50:07 bst Subject: Auditory Modelling Software from Loughborough (LUTEar 1.5.0) Message-ID: <9404150850.AA20633@hpc.lut.ac.uk> Dear All, First of all, I would like to offer an apology for re-posting this message. It was brought to my notice that my title did not describe what "LUTEar" actually is. I have now given the message a more descriptive title (suggested by Adrian Rees), and I hope that anybody annoyed by having to read this post twice will forgive me. Please find below the README file for the latest version of the LUTEar Core Routines Library (CRL). The UNIX and Macintosh (THINK C 5.0) platform versions are now available (the MSDOS, Borland C, version will be ready by Friday at the latest) via anonymous FTP from:- suna.lut.ac.uk (131.231.16.2): /public/hulpo/lutear Connect via FTP, then login with user name "anonymous" and give your e-mail address as the password. Download and read the "INSTALL150" file from the "/public/hulpo/lutear" directory (as also given above), then follow the installation procedure for your platform. If you have any problems at all, do not hesitate to get in contact with me.
Any comments, improvements, additions or corrections you may wish to suggest are very welcome; it is only by direct feedback from users that I can ensure that the Core Routines Library is a delight to use, as well as implementing state-of-the-art auditory models. ..Lowel.

1. Introduction

As computer modelling of the auditory system has increased in complexity, the need for common working tools has become more pressing. Such tools are necessary to allow the rapid dissemination of new computer code, and to permit other members of the scientific community to replicate and challenge published results. The auditory models developed by the Speech and Hearing Laboratory at Loughborough University of Technology (UK) have received much attention, due principally to their simple form and the many published papers in which the models are used to explain auditory phenomena. The many requests for the computer code of the model simulations led to the group releasing the LUTEar Core Routines Library (CRL, version 1.0.0, October 1993) as a computational platform and set of coding conventions which support a modular approach to auditory system modelling. The system is written in ANSI C and works on a wide range of operating systems. LUTEar has now been consolidated and much improved in the latest release (version 1.5.0). The CRL brings together established models, developed by the group and also contributed by other researchers in the field, which simulate various stages in the auditory process. Since the first release, the LUTEar CRL has been tested and used both at the originating laboratory and at many other sites. It has been used as a tool for speech processing and for speech and voice analysis, as well as in the investigation of auditory phenomena, for which it was primarily constructed. This latest version of the CRL is a product of the proving ground to which it was subjected, and we hope that it will be as well received as was the first version.
Included with this release is a comprehensive series of test programs. These programs were used to test the CRL routines; they reproduce the behaviour of the respective published models included. The programs also provide examples of how the CRL may be used in auditory investigation programs. In addition, the programs read data from parameter files, and thus can readily be used to investigate further the behaviour of the models included in the CRL. The CRL routines have been subjected as far as possible to careful and exhaustive testing. No system, however, is infallible, so it is hoped that, with the gentle admonitions of the library's users, any problems or omissions will be quickly corrected. In addition, it is expected that the library will be augmented by further models as the scientific endeavour continues. Many weeks have been required to get the manual into its current form. It is not perfect, so gentle admonitions and suggested changes/additions are invited.

1.1. CRL Features

The library has a modular structure which can be used to create auditory investigation/application systems or incorporated in existing code, as required. The library is intuitive in application, and has comprehensive error reporting embedded in efficient code. All the modules conform to a simple standard format. The design allows alternative models of the same component auditory process to be plugged in and unplugged for purposes of comparison. Ultimately the CRL is a development based on the meld of experimental investigation methods and the tenets of good software engineering practice.
The following is a list of the principal features of the CRL:-

o Modular structure;
o Processing stage data can be handled by a single unit;
o Processing stage units can link to data from other stages;
o Multi-channel data is handled invisibly;
o Efficient algorithms are used throughout;
o Meaningful routine and variable names are used;
o All routines are prefixed by their module name;
o Comprehensive error handling incorporated in routines.

1.1.1 Main features new in version 1.5.0

o Improved manual: greater detail with over 65 figures and an index;
o Sound data file format reading/writing support;
o Connection management system (invisible to user);
o Modules can now read/print their own parameters;
o Generic programming introduced;
o New analysis routines, including FFTs;
o Binaural processing support;
o Non-linear basilar membrane filter model;*
o Stochastic inner hair cell model;*
o McGregor neural cell model;
o Dendrite filter model;
o Spike generation module (for Meddis86 IHC model output);
o New stimulus generation modules;
o Parameter files can have comment or blank lines;
o Direction of warnings and error messages to a specified file.

* These models are still in development, prior to publication, but they have been included for those who may wish to look at them.

+-------------------------+-----------------------------------------------+ |Lowel P. O'Mard PhD. | /\ / \ Speech & Hearing | |Dept. of Human Sciences, | /\/\ /\/ \/ /\ \ /\ Laboratory | |University of Technology,|_/\/\/ /\ \/\/ /\ /\/ \ \/ /\/\_ /\___ | |Loughborough, | \/\/ \/\/\/ \/ /\ \/\/ /\ / | |Leics. LE11 3TU, U.K. | \ /\/\/\ /\/ \ /\/\/ \/ Director: | |L.P.OMard at lut.ac.uk | \/ \/ \/ Prof.
Ray Meddis | +-------------------------+-----------------------------------------------+  From usui at bpel.tutics.tut.ac.jp Wed Apr 6 22:31:17 1994 From: usui at bpel.tutics.tut.ac.jp (Shiro USUI) Date: Wed, 6 Apr 94 22:31:17 JST Subject: [amari@sat.t.u-tokyo.ac.jp: Announcement of new paper, Information Geometry and EM algorithm] Message-ID: <9404061331.AA25173@sv630.bpel-subnet> The following paper is now available via anonymous ftp from the neuroprose archive. It will appear in "Neural Networks". It consists of two files: am19.ps for the main body (85 pages) and figs.ps for the figures. If you have any problems, contact mura at sat.t.u-tokyo.ac.jp --------- FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/amari.geometryofem.tar.Z This includes two files, am19.ps and figs.ps. Use the unix commands uncompress and tar to uncompress the archive and divide it into the two files. --------- "Information Geometry of the EM and em Algorithms for Neural Networks" by Shun-ichi Amari In order to realize an input-output relation given by noise-contaminated examples, it is effective to use a stochastic model of neural networks. A model network includes hidden units whose activation values are not specified nor observed. It is useful to estimate the hidden variables from the observed or specified input-output data based on the stochastic model. Two algorithms, the EM- and em-algorithms, have so far been proposed for this purpose. The EM-algorithm is an iterative statistical technique using the conditional expectation, and the em-algorithm is a geometrical one given by information geometry. The em-algorithm iteratively minimizes the Kullback-Leibler divergence in the manifold of neural networks. These two algorithms are equivalent in most cases. The present paper gives a unified information-geometrical framework for studying stochastic models of neural networks, by focussing on the EM and em algorithms, and proves a condition which guarantees their equivalence.
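The EM iteration sketched in the abstract can be made concrete for one of the models the paper treats, the normal mixture. The following minimal sketch fits a two-component Gaussian mixture; the synthetic data, initial guesses and iteration count are purely illustrative and are not taken from the paper:

```python
import math
import random

random.seed(0)
# Illustrative data: two Gaussian clusters with means -2 and +2, unit variance.
data = [random.gauss(-2, 1) for _ in range(200)] + \
       [random.gauss(2, 1) for _ in range(200)]

def em_gaussian_mixture(xs, iters=50):
    # Initial guesses for the weights, means and variances of the two components.
    w, mu, var = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        # (the conditional expectation of the hidden component label).
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate the parameters from those responsibilities.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
    return w, mu, var

w, mu, var = em_gaussian_mixture(data)
```

The E-step/M-step alternation shown here is the iteration that, in the paper's framework, the em-algorithm reinterprets as alternating projections minimizing the Kullback-Leibler divergence.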
Examples include 1) Boltzmann machines with hidden units, 2) mixtures of experts, 3) stochastic multilayer perceptron, 4) normal mixture model, 5) hidden Markov model, among others.  From giles at research.nj.nec.com Sun Apr 17 14:10:29 1994 From: giles at research.nj.nec.com (Lee Giles) Date: Sun, 17 Apr 94 14:10:29 EDT Subject: Call for papers: IWANNT*95 Message-ID: <9404171810.AA01165@fuzzy> PLEASE POST CALL FOR PAPERS International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 You are invited to submit a paper to an international workshop on applications of neural networks and other intelligent systems to problems in telecommunications and information networking. This is the second workshop in a series that began in Princeton, New Jersey on October 18-20, 1993. This conference will take place in the center of Stockholm at a time of the year when the beautiful city is at its best. A tour of the famous archipelago adds to the attraction. This workshop will bring together active researchers in neural networks and related intelligent systems with potential users in the telecommunications industries. Today, telecommunications also encompasses the data transmission, cable TV, wireless, and entertainment industries. We expect the workshop to be a forum for discussion of applications issues relevant to this enlarged circle of telecommunications industries. It is sponsored by IEEE, INNS, SNNS (Swedish Neuronet Society), Bellcore and Ericsson.
Suggested Topics: Application of Neural Networks and other Intelligent Systems in: Network Management Congestion Control Adaptive Equalization Speech Recognition Security Verification Language ID/Translation Information Filtering Dynamic Routing Software Reliability Fraud Detection Financial and Market Prediction Adaptive User Interfaces Fault Identification and Prediction Character Recognition Adaptive Control Data Compression Please submit 6 copies of both a 50 word abstract and a 1000 word summary of your paper by September 16, 1994. Mail papers to the conference administrator: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South St. Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com Abstract and Summary Due: September 16, 1994 Author Notification of Acceptance: November 1, 1994 Camera-Ready Copy of Paper Due: February 10, 1995 Organizing Committee: General Chair Josh Alspector Bellcore, MRE 2P-396 445 South St. Morristown, NJ 07960-6438 (201) 829-4342 josh at bellcore.com Program Chair Rod Goodman Caltech 116-81 Pasadena, CA 91125 (818) 356-3677 rogo at micro.caltech.edu Publications Chair Timothy X Brown Bellcore, MRE 2E-378 445 South St. Morristown, NJ 07960-6438 (201) 829-4314 timxb at faline.bellcore.com Treasurer Anthony Jayakumar, Bellcore Publicity Atul Chhabra, NYNEX Lee Giles, NEC Local Arrangements Miklos Boda, Ellemtel Bengt Asker, Ericsson Program Committee Harald Brandt, Ellemtel Tzi-Dar Chiueh, National Taiwan University Michael Gell, British Telecom Larry Jackel, AT&T Bell Laboratories Thomas John, Southwestern Bell Adam Kowalczyk, Telecom Australia S Y Kung, Princeton University Tadashi Sone, NTT INNS Liaison Bernard Widrow, Stanford University IEEE Liaison Steve Weinstein, NEC Conference Administrator Betty Greer Bellcore, MRE 2P-295 445 South St. 
Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 Registration Form Name: _____________________________________________________________ Institution: __________________________________________________________ Mailing Address: ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ Telephone: ______________________________ Fax: ____________________________________ E-mail: _____________________________________________________________ I will attend | | Send more information | | Paper enclosed | | Registration Fee Enclosed | | ($400; $500 after Apr. 15, 1995; $200 students;) Please make sure your name is on the check (made out to IWANNT*95) Registration includes lunch, a boat tour of the Stockholm archipelago, and proceedings available at the conference. Mail to: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South St. Morristown, NJ 07960 (201) 829-4993 (fax) 829-5888 bg1 at faline.bellcore.com Deadline for submissions: September 16, 1994 Author Notification of Acceptance: November 1, 1994 Camera-Ready Copy of Paper Due: February 10, 1995 -- C. 
Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From hirsh at cs.rutgers.edu Sun Apr 17 17:27:17 1994 From: hirsh at cs.rutgers.edu (Haym Hirsh) Date: Sun, 17 Apr 94 17:27:17 EDT Subject: on-line information for ML94 and COLT94 Message-ID: Information for this summer's Machine Learning (ML94) and Computational Learning Theory (COLT94) conferences is now available on-line. Users of anonymous ftp can find the information on www.cs.rutgers.edu in the directory "/pub/learning94". Users of WWW information servers such as Mosaic can find the information at "http://www.cs.rutgers.edu/pub/learning94/learning94.html". Please send comments or questions to ml94 at cs.rutgers.edu. Please note that the early registration deadline is May 27, and (for those planning on staying at the nearby Hyatt rather than in dorms) conference room rates are only guaranteed until June 10. Finally, the conferences coincide this year with World Cup soccer matches being held at Giants Stadium in East Rutherford, New Jersey. These games are expected to be the largest sporting event ever held in the New York metropolitan area, and we therefore strongly encourage conference attendees to make travel arrangements as early as possible. Haym  From tibs at utstat.toronto.edu Sun Apr 17 21:45:00 1994 From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu) Date: Sun, 17 Apr 94 21:45 EDT Subject: new book Message-ID: The following new book may be of interest to connectionists: An Introduction to the Bootstrap, by Brad Efron and Rob Tibshirani. This is the first general book written on the bootstrap and related topics (jackknife, cross-validation, ...). The purpose of this book is to present an overview of the bootstrap and related methods for assessing statistical accuracy. The objectives are a) to provide the reader with a working knowledge of bootstrap and related methodologies, and b) to serve as a resource text for researchers in the area.
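For readers new to the method, the core bootstrap procedure the book presents, estimating a statistic's standard error by resampling with replacement, can be sketched in a few lines. The sample data, the statistic and the replication count below are illustrative choices, not examples from the book:

```python
import random

random.seed(0)
sample = [random.gauss(10, 2) for _ in range(100)]  # illustrative data

def bootstrap_se(xs, stat, n_boot=2000):
    """Estimate the standard error of `stat` by resampling with replacement."""
    reps = []
    for _ in range(n_boot):
        # Draw a bootstrap sample of the same size as the original.
        boot = [random.choice(xs) for _ in xs]
        reps.append(stat(boot))
    # Standard deviation of the bootstrap replications.
    m = sum(reps) / n_boot
    return (sum((r - m) ** 2 for r in reps) / (n_boot - 1)) ** 0.5

mean = lambda xs: sum(xs) / len(xs)
se = bootstrap_se(sample, mean)
```

The observed sample stands in for the unknown population, so the spread of the replicated statistic approximates its sampling variability; for the mean this should come out near sigma/sqrt(n), here roughly 0.2.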
The first 19 chapters are expository and are suitable for a one-semester course at the upper undergraduate or masters level. They require one probability and one statistics course as a prerequisite. Each chapter has numerous problems. We have written this part of the book so that it will be accessible to non-specialists, particularly scientists who are seeking to learn about these methods for possible use in their own work. The remaining chapters are at a higher mathematical level, and together with parts of Chapters 6-19, are suitable for a graduate-level course in statistics. This book is aimed at statisticians, upper-year undergraduate and graduate students in statistics, and scientists, engineers and doctors who do quantitative research. Bradley Efron is the inventor of the bootstrap and is responsible for many of the major research advances in the area. Robert Tibshirani was a student of Dr. Efron's, has contributed to research in this area, and is an active researcher and author in the statistical community. Ordering information: An Introduction to the Bootstrap- Efron and Tibshirani ISBN 0-412-04231-2 Chapman and Hall One Penn Plaza- 41st floor New York, NY 10119 Phone 212-564-1060 Customer service FAX 212-268-9964 Toll free order FAX 1-800-248-4724 ============================================================= | Rob Tibshirani To every man is given the key to | Dept. of Preventive the gates of heaven; | Medicine and Biostatistics the same key opens the gates of hell. | McMurrich Bldg. | University of Toronto Buddhist proverb | Toronto, Canada M5S 1A8 Phone: 1-416-978-4642 (biostats) Email: tibs at utstat.toronto.edu 416-978-0673 (stats) FAX: 1-416-978-8299  From jb at informatik.uni-bonn.de Tue Apr 19 13:13:59 1994 From: jb at informatik.uni-bonn.de (jb@informatik.uni-bonn.de) Date: Tue, 19 Apr 94 18:13:59 +0100 Subject: Positions for PhD students Message-ID: <9404191615.AA00497@olymp.informatik.uni-bonn.de> !!!!!!!!!!!!!!!!!!!!!!!!
A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!!!! Research Positions available in Image Analysis/Pattern Recognition at the University of Bonn. Two positions for PhD students/Postdocs are open at the Computer Science Department of the University of Bonn, starting July 1, 1994. One position is available to conduct research in image sequence analysis for surveillance applications; the other is dedicated to video compression applications. (Salary at the research associate level.) Interested candidates should have a background in one of the following research fields: * computer vision and image processing * statistical pattern recognition * neural networks and/or connectionist modeling Applicants should send their Curriculum Vitae and a description of their research interests to Prof. J. Buhmann Institut fuer Informatik III Tel.: +49 228 550 380 Universitaet Bonn Fax: +49 228 550 382 Roemerstr. 164 email: jb at informatik.uni-bonn.de D-53117 Bonn jb at cs.bonn.edu Fed. Rep. Germany  From B344DSL at UTARLG.UTA.EDU Tue Apr 19 15:49:43 1994 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Tue, 19 Apr 1994 13:49:43 -0600 (CST) Subject: Conference preliminary program Message-ID: <01HBD0U7TU5U0013SH@UTARLG.UTA.EDU> Preliminary Program CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS Sponsored by the Metroplex Institute for Neural Dynamics (MIND) and the University of Texas at Arlington Co-sponsored by the Departments of Mathematics and Psychology MAY 5-7, 1994 UNIVERSITY OF TEXAS AT ARLINGTON MAIN LIBRARY, 6TH FLOOR PARLOR The topic of neural oscillation is currently of great interest to psychologists and neuroscientists alike. Recently it has been observed that neurons in separate areas of the brain will oscillate in synchrony in response to certain stimuli.
One hypothesized function for such synchronized oscillations is to solve the "binding problem": that is, how is it that disparate features of objects (e.g., a person's face and their voice) are tied together into a single unitary whole? Some bold speculators (such as Francis Crick in his recent book, The Astonishing Hypothesis) even argue that synchronized neural oscillations form the basis for consciousness. It is still possible to schedule poster presentations. Those interested in presenting a poster are invited to submit abstracts (1-2 paragraphs) of any work related to the theme of the conference. Abstracts should be submitted, by e-mail, snail mail, or fax, to: Professor Daniel S. Levine Department of Mathematics, University of Texas at Arlington 411 S. Nedderman Drive Arlington, TX 76019-0408 Office telephone: 817-273-3598, fax: 817-794-5802 e-mail: b344dsl at utarlg.uta.edu Further inquiries about the conference can be addressed to Professor Levine or to the other two conference organizers: Professor Vincent Brown Mr. Timothy Shirey 817-273-3247 214-495-3500 or 214-422-4570 b096vrb at utarlg.uta.edu 73353.3524 at compuserve.com Please distribute this announcement to anyone you think may be interested in the conference.
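The kind of phase synchronization at issue in these talks can be illustrated with the standard Kuramoto model of coupled oscillators — a textbook model chosen here for illustration, not one attributed to any of the speakers. All constants (population size, coupling strength, frequency spread) are illustrative:

```python
import math
import random

random.seed(1)
N = 20
K = 2.0                    # coupling strength, well above the synchronization threshold
dt = 0.05                  # Euler integration step
omega = [random.gauss(0, 0.5) for _ in range(N)]        # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # random initial phases

def order_parameter(ths):
    """Magnitude of the mean phase vector: ~0 = incoherent, ~1 = synchronized."""
    re = sum(math.cos(t) for t in ths) / len(ths)
    im = sum(math.sin(t) for t in ths) / len(ths)
    return math.hypot(re, im)

for _ in range(2000):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    new = []
    for i in range(N):
        coupling = sum(math.sin(t - theta[i]) for t in theta) / N
        new.append(theta[i] + dt * (omega[i] + K * coupling))
    theta = new

r = order_parameter(theta)
```

Starting from random phases, sufficiently strong coupling pulls the oscillators into near-unison, which is the qualitative phenomenon invoked in proposed solutions to the binding problem.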
INVITED SPEAKERS Bill Baird, University of California/Berkeley "Grammatical Inference by Attentional Control of Synchronization in an Architecture of Coupled Oscillatory Associative Memories" Adi Bulsara, Naval Research Laboratories/San Diego "Complexity in the Neurosciences: Signals, Noise, Nonlinearity, and the Meanderings of a Theoretical Physicist" Alexander Grunewald, Boston University "Binding of Object Representations by Synchronous Cortical Dynamics Explains Temporal Order and Spatial Pooling Data" David Horn, Tel Aviv University "Segmentation and Binding in Oscillatory Networks" Alianna Maren, Accurate Automation Corporation (Title to be added) George Mpitsos, Oregon State University "Attractor Gradients: Architects of Network Organization in Biological Systems" Martin Stemmler, California Institute of Technology "Synchronization and Oscillations in Spiking Networks" Roger Traub, IBM/New York (Title to be added) Robert Wong, Downstate Medical Center/Brooklyn (Title to be added) Geoffrey Yuen, Northwestern University (Title to be added) OTHER TALKS Section I. Neuroscience. D. Baxter, C. Canavier, H. Lechner, University of Texas/Houston, J. Clark, Rice University, & J. Byrne, University of Texas/Houston "Coexisting Stable Oscillatory States in a Model Neuron Suggest Novel Mechanisms for the Effects of Synaptic Inputs and Neuromodulators" Guenter Gross & Barry Rhoades, University of North Texas "Spontaneous and Evoked Oscillatory Bursting States in Cultured Networks" Elizabeth Thomas, Willamette College "A Computational Model of Spindle Oscillations" Section II. Theory. 
Anthony Brown, Defence Research Agency, United Kingdom "Preliminary Work on the Design of an Analog Oscillatory Neural Network" Arun Jagota, Memphis State University, & Xin Wang, University of California/Los Angeles "Oscillations in Discrete and Continuous Hopfield Networks" Jacek Kowalski, University of North Texas (Title to be added) Nam Seog Park, Dave Robertson, & Keith Stenning, University of Edinburgh "From Dynamic Bindings to Further Symbolic Knowledge Representation Using Synchronous Activity of Neurons" Seth Wolpert, University of Maine "Modeling Neural Oscillations Using VLSI-based Neuromimes" Section III. Psychology. David DeMaris, University of Texas/Austin Sriram Govindarajan & Vincent Brown, University of Texas/Arlington "Feature Binding and Illusory Conjunctions: Psychological Constraints and a Model" Mark Steyvers, Indiana University "Use of Synchronized Chaotic Oscillations to Model Multistability in Perceptual Grouping" POSTERS Shien-Fong Lin, Rashi Abbas, & John Wikso, Jr., Vanderbilt University "One-dimensional Magnetic Measurement of Two-origin Bioelectric Current Oscillation" George Mobus & Paul Fisher, University of North Texas "Edge-of-Chaos Search: Using a Quasi-chaotic Oscillator Circuit for Foraging in a Mobile Autonomous Robot" Andrew Penz, Texas Instruments (Title to be added) Barry Rhoades, University of North Texas "Global Neurochemical Determination of Local EEG in the Olfactory Bulb" David Young, Louisiana State University "Oscillations Created by the Fragmented Access of Distributed Connectionist Representations" Tentative Schedule Posters (ongoing throughout the conference): Lin, Mobus, Penz, Rhoades, Young Thursday AM: Introductions, Mpitsos, Baxter, Stemmler Thursday PM: Yuen, Thomas, Kowalski, Gross Friday AM: Wong, Traub, Baird, Jagota Friday PM: A.
Brown, Bulsara, Maren, Horn Saturday AM: Park, DeMaris, Wolpert, Grunewald, Steyvers Saturday PM: Govindarajan, Panel Discussion Registration and Travel Information Official Conference Motel: Park Inn 703 Benge Drive Arlington, TX 76013 1-800-777-0100 or 817-860-2323 A block of rooms has been reserved at the Park Inn for $35 a night (single or double). Room sharing arrangements are possible. Reservations should be made directly through the motel. Official Conference Travel Agent: Airline reservations to Dallas-Fort Worth airport should be made through Dan Dipert Travel in Arlington, 1-800-443-5335. For those who wish to fly on American Airlines, a Star File account has been set up for a 5% discount off lowest available fares (two-week advance, staying over Saturday night) or 10% off regular coach fare; arrangements for Star File reservations should be made through Dan Dipert. Please let the conference organizers know (by e-mail or telephone) when you plan to arrive: some people can be met at the airport (about 30 minutes from Arlington), others can call Super Shuttle at 817-329-2000 upon arrival for transportation to the Park Inn (about $14-$16 per person). Registration for the conference is $25 for students, $65 for non-student oral or poster presenters, $85 for others. MIND members will have $20 (or $10 for students) deducted from the registration. A registration form is attached to this announcement. Registrants will receive the MIND monthly newsletter (on e-mail when possible) for the remainder of 1994.
REGISTRATION FOR MIND CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS, UNIVERSITY OF TEXAS AT ARLINGTON, MAY 5-7, 1994 Name ______________________________________________________________ Address ___________________________________________________________ ___________________________________________________________ ___________________________________________________________ ____________________________________________________________ E-Mail __________________________________________________________ Telephone _________________________________________________________ Registration fee enclosed: _____ $15 Student, member of MIND _____ $25 Student _____ $65 Non-student oral or poster presenter _____ $65 Non-student member of MIND _____ $85 All others Will you be staying at the Park Inn? ____ Yes ____ No Are you planning to share a room with someone you know? ____ Yes ____ No If so, please list that person's name __________________________ If not, would you be interested in sharing a room with another conference attendee to be assigned? ____ Yes ____ No PLEASE REMEMBER TO CALL THE PARK INN DIRECTLY FOR YOUR RESERVATION (WHETHER SINGLE OR DOUBLE) AT 1-800-777-0100 OR 817-860-2323.  From joerg at nathan.gmd.de Wed Apr 20 04:50:40 1994 From: joerg at nathan.gmd.de (Joerg Kindermann) Date: Wed, 20 Apr 1994 10:50:40 +0200 Subject: postdoc position in NN available Message-ID: <199404200850.AA28569@tetris.gmd.de> !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!! A T T E N T I O N !!!!!!!!!!!!!!!!!!!!!! The research group "Adaptive Systems" of the German National Research Center for Computer Science (GMD) in Sankt Augustin near Bonn has a postdoctoral research position available. The group, headed by Dr. Muehlenbein, consists of about 15 researchers working in the areas of genetic algorithms, reflective statistics and robotics.
Appointment will be for two years (as of 1 July 1994 or later). It can possibly be extended by another year. Applicants should have a good background and strong research interests in one or more of the following areas: - statistical properties of neural networks and related learning algorithms - predictive reliability of statistical models - adaptive algorithms The successful candidate is expected to contribute actively to our research projects in the area of reflective exploration (selective sampling, model selection) with neural networks and related statistical methods. Good programming skills in C++ or C are required. Applicants should send their Curriculum Vitae and a description of their research interests to one of the addresses below. Applications should be received by May 8, 1994. Dr. Gerhard Paass Dr. Joerg Kindermann paass at gmd.de kindermann at gmd.de phone: +49 02241 142698 phone: +49 02241 142437 System Design Technology Institute German National Research Center for Computer Science (GMD) Schloss Birlinghoven D-53754 St. Augustin Germany  From david at cns.edinburgh.ac.uk Wed Apr 20 11:48:39 1994 From: david at cns.edinburgh.ac.uk (David Willshaw) Date: Wed, 20 Apr 94 11:48:39 BST Subject: Contents of NETWORK, Volume 5, Number 1.
Message-ID: <10255.9404201048@cns.ed.ac.uk> Network: Computation in Neural Systems Volume 5 Number 1 February 1994 ---------------------------------------- PAPERS 1 Hebbian learning is jointly controlled by electrotonic and input structure K Y Tsai, N T Carnevale and T H Brown 21 Efficient mapping from neuroanatomical to electrotonic space K Y Tsai, N T Carnevale, B J Claiborne and T H Brown 47 Regulating the nonlinear dynamics of olfactory cortex Xingbao Wu and H Liljenstrom 61 Spontaneous symmetry breaking and the formation of columnar structures in the primary visual cortex K Yamagishi 75 Using generalized principal component analysis to achieve associative memory in a Hopfield net S Coombes and J G Taylor 89 Learning temporal sequences by excitatory synaptic changes only Y Metzger and D Lehmann 101 Hierarchical neural networks for time-series analysis and control T Frohlinghaus, A Weichert and P Rujan 117 ABSTRACTS SECTION 119 BOOK REVIEWS 119 An introduction to the modeling of neural networks P Peretto 120 Computational learning theory M Anthony and N Biggs NETWORK welcomes research Papers and Letters where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. 2,500 words is the expected limit for research Letters. Articles can be published from authors' TeX source codes. Macros can be supplied to produce papers in the form suitable for refereeing and for IOP house style. For more details contact the Editorial Services Manager at IOP Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. 
Telephone: (+44) 0272 297481 Fax: (+44) 0272 294318 Telex: 449149 INSTP G Email Janet: net at uk.co.ioppublishing

Subscription Information Frequency: quarterly Subscription rates: Institution 192.00 pounds (US$376.00) Individual (UK) 32.00 pounds (Overseas) 35.00 pounds (US$75.00) A microfiche edition is also available.

From riedml at ira.uka.de Thu Apr 21 07:41:29 1994 From: riedml at ira.uka.de (Martin Riedmiller) Date: Thu, 21 Apr 94 7:41:29 MET DST Subject: Paper available Message-ID: <"iraun1.ira.995:21.04.94.05.45.26"@ira.uka.de>

The following paper is available via anonymous ftp from i11s16.ira.uka.de. Instructions for retrieval follow the abstract.

*****************************************************************

Advanced Supervised Learning in Multi-layer Perceptrons - From Backpropagation to Adaptive Learning Algorithms

Martin Riedmiller, Institut fuer Logik, Komplexitaet und Deduktionssysteme, University of Karlsruhe, W-76128 Karlsruhe, FRG, riedml at ira.uka.de

ABSTRACT Since the presentation of the backpropagation algorithm, a vast variety of improvements of the technique for training the weights in a feed-forward neural network have been proposed. The following article introduces the concept of supervised learning in multi-layer perceptrons based on the technique of gradient descent. Some problems and drawbacks of the original backpropagation learning procedure are discussed, eventually leading to the development of more sophisticated techniques. This article concentrates on adaptive learning strategies. Some of the most popular learning algorithms are described and discussed according to their classification in terms of global and local adaptation strategies. The behavior of several learning procedures on some popular benchmark problems is reported, thereby illuminating convergence, robustness, and scaling properties of the respective algorithms.
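To give a concrete flavour of the local adaptive strategies the abstract mentions, here is a minimal toy sketch of a sign-based, per-weight adaptive step-size rule in that family. It is a simplified illustrative variant, not the paper's exact algorithm; the function name, all constants, and the quadratic test objective below are invented for demonstration.

```python
import numpy as np

# Minimal sketch of a *local* adaptive learning strategy: each weight
# keeps its own step size, grown while the gradient sign is stable and
# shrunk when it flips. Simplified and illustrative only; NOT the
# paper's exact algorithm.

def adaptive_sign_descent(grad, w0, steps=200, d0=0.1,
                          eta_plus=1.2, eta_minus=0.5,
                          d_min=1e-6, d_max=1.0):
    w = np.asarray(w0, dtype=float)
    delta = np.full_like(w, d0)        # per-weight step sizes
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        same = prev_g * g              # >0: sign kept, <0: sign flipped
        delta = np.where(same > 0, np.minimum(delta * eta_plus, d_max), delta)
        delta = np.where(same < 0, np.maximum(delta * eta_minus, d_min), delta)
        w = w - np.sign(g) * delta     # update uses only gradient signs
        prev_g = g
    return w

# toy quadratic whose two coordinates have wildly different curvatures;
# a single global learning rate would have to be tuned to the worst one
grad = lambda w: np.array([2.0 * w[0], 2000.0 * w[1]])
w = adaptive_sign_descent(grad, [5.0, 5.0])
```

Because only the gradient signs enter the update, the badly scaled second coordinate converges as quickly as the first, which is the intuition behind such local schemes.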
This paper has been accepted for publication in the special issue on Neural Network Standards of "Computer Standards & Interfaces", volume 16, edited by J. Fulcher. Elsevier Science Publishers, Amsterdam, 1994.

*****************************************************************

To obtain a copy of this paper, please follow these FTP instructions:

ftp i11s16.ira.uka.de (or ftp 129.13.33.16)
Name: anonymous
Password: (your email address)
ftp> cd pub/neuro/papers
ftp> binary
ftp> get riedml.csi94.ps.Z
ftp> quit
% uncompress <>.ps.Z
% lpr <>.ps

From tfb007 at hp1.uni-rostock.de Fri Apr 22 11:05:33 1994 From: tfb007 at hp1.uni-rostock.de (Neural Network Group) Date: Fri, 22 Apr 94 11:05:33 MESZ Subject: pattern segmentation request Message-ID:

Using combinations of classical transforms (Fourier transform, KLT and modifications), we constructed some modules for feature extraction and investigated recurrent NN topologies based on statistical methods (for example, fractal NNs) for document-analysis problems. In our studies we have found some help in publications from the National Institute of Standards and Technology, in the following papers:

ir_4766.ps Optimization of Neural Network Topology and Information Content Using Boltzman Methods
ir_4776.ps Training Feed Forward Neural Networks Using Conjugate Gradients
ir_4824.ps Karhunen Loeve Feature Extraction for Neural Handwritten Character Recognition
ir_4893.ps Topological Separation Versus Weight Sharing in Neural Network Optimization

Here we obtained interesting results and good performance on pattern recognition tasks. Pattern recognition is an interesting field in itself, but there is also the problem of finding any pattern, or a special pattern, in an image: the problem of pattern segmentation. Or, in a more general case: how do you find a special object at any place and at different sizes in an image?
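One classical baseline for this kind of position- and size-invariant search is normalized cross-correlation of a template against the image at several scales. The sketch below is added for illustration only; it is not from the posting or the NIST reports, and all sizes, scales, and the planted toy image are made up.

```python
import numpy as np

# Hedged, generic baseline: scan a template over the image at several
# scales and keep the best normalized cross-correlation (NCC) score.

def ncc_match(image, templ):
    """Return (best NCC score, (y, x)) of templ within image."""
    ih, iw = image.shape
    th, tw = templ.shape
    t = templ - templ.mean()
    tnorm = np.sqrt((t * t).sum()) + 1e-12
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            score = (p * t).sum() / (np.sqrt((p * p).sum()) * tnorm + 1e-12)
            if score > best:
                best, best_pos = score, (y, x)
    return best, best_pos

def zoom_nearest(templ, s):
    """Naive nearest-neighbour rescaling of the template by factor s."""
    th, tw = templ.shape
    nh, nw = max(1, int(th * s)), max(1, int(tw * s))
    ys = np.arange(nh) * th // nh
    xs = np.arange(nw) * tw // nw
    return templ[np.ix_(ys, xs)]

def search_multiscale(image, templ, scales=(0.5, 1.0, 2.0)):
    """Try each scale; return (scale, score, (y, x)) of the best match."""
    results = [(s,) + ncc_match(image, zoom_nearest(templ, s)) for s in scales]
    return max(results, key=lambda r: r[1])

# tiny demo: plant a 2x-scaled copy of the template into a noisy image
rng = np.random.default_rng(0)
templ = rng.random((4, 4))
image = rng.random((20, 20)) * 0.1
image[5:13, 7:15] += zoom_nearest(templ, 2.0)
scale, score, pos = search_multiscale(image, templ, scales=(1.0, 2.0))
```

The brute-force scan is quadratic in image size per scale; real systems replace it with FFT-based correlation or learned detectors, but the principle of searching over positions and scales is the same.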
Referring to that problem, I've read an article about pattern segmentation using Gabor functions and genetic algorithms from NIST (ir_xxxx.ps). But we couldn't get any good results with algorithms for object search using neural networks. Over the last few weeks I've read a lot of publications from the connectionists archive, but couldn't find much help in solving that problem. Do you have any hints for me on how to get some information about algorithms and methods in pattern segmentation and/or, in a more general sense, object-search algorithms in 2-dimensional images? Many thanks. Gratefully yours, Welf Wustlich, Neural Network Group Rostock

From kedar at gate.ee.lsu.edu Mon Apr 25 13:11:37 1994 From: kedar at gate.ee.lsu.edu (Kedar Babu Madineni) Date: Mon, 25 Apr 94 12:11:37 CDT Subject: Report available Message-ID: <9404251711.AA01430@gate.ee.lsu.edu>

The technical report mentioned below is now available. If you would like postscript versions of this paper, please send your email request to me.

----------------------------------------------------------------

Technical Report ECE/LSU 94-41, April 25, 1994

TWO CORNER CLASSIFICATION ALGORITHMS FOR TRAINING THE KAK FEEDFORWARD NEURAL NETWORK

Abstract: This paper presents two new algorithms, ALG1 and ALG2, for training the Kak feedforward neural network. The learning power and the generalization capabilities of this class of algorithms are presented. Comparing with the results for backpropagation, we obtain the following: ALG2's generalization capability was sometimes better than or comparable to that of backpropagation. ALG1's generalization capability was also comparable to that of backpropagation. The main advantage of the proposed algorithms is in the training time. Both ALG1 and ALG2 outperformed backpropagation in learning time, as expected. From these results, it can be said that ALG2 is superior to backpropagation in learning time and at least comparable in generalization.
The comparison experiments were performed on time-series prediction of interest rates of corporate bonds over a time period of three years, and creditworthiness of a customer.  From david at cns.edinburgh.ac.uk Tue Apr 26 09:43:19 1994 From: david at cns.edinburgh.ac.uk (david@cns.edinburgh.ac.uk) Date: Tue, 26 Apr 94 09:43:19 BST Subject: Contents of latest issue of NETWORK Message-ID: <9634.9404260843@cns.ed.ac.uk> NETWORK: Computation in Neural Systems Volume 5 Number 2 May 1994 PAPERS 121 Coding of odour quality: roles of convergence and inhibition J-P Rospars and J-C Fort 147 Designing receptive fields for highest fidelity D L Ruderman 157 Efficient stereo coding in the multiscale representation Zhaoping Li and J J Atick 175 Intracortical connectivity of pyramidal and stellate cells: estimates of synaptic densities and coupling symmetry D T J Liley and J J Wright 191 A millimetric-scale simulation of electrocortical wave dynamics based on anatomical estimates of cortical synaptic density J J Wright and D T J Liley 203 Inductive inference and neural nets J Bernasconi and K Gustafson 229 Effects of temporary synaptic strengthening and residual cell potential in the retrieval of patterns T Nakano and O Moriyama 241 A shape-recognition model using dynamical links E Bienenstock and R Doursat 259 Modelling of the Bonhoeffer-effect during LTP learning A Koester, A Zippelius and R Kree 277 Optimal signalling in attractor neural networks I Meilijson and E Ruppin NETWORK welcomes research papers where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. Articles can be published from authors' TeX source codes. 
NETWORK is published as four issues per annual volume (quarterly in February, May, August, and November) by Institute of Physics Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. Subscription Information For all countries, except the United States, Canada and Mexico, the institutional subscription rate is 192.00 pounds. The rate for individual subscribers is 32.00 pounds (UK) and 35.00 pounds (overseas). Delivery is by air-speeded mail from the UK to subscribers in most overseas countries, and by airfreight and registered mail to subscribers in India. Orders to: Institute of Physics Publishing, Order Processing Department, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. For the US, Canada and Mexico, the institutional subscription rate is US$376.00. The rate for individual subscribers is US$75.00. Delivery is by transatlantic airfreight and onward mailing. Orders to: American Institute of Physics, Subscriber Services, 500 Sunnyside Blvd, Woodbury, NY 11797-2999, USA. Editorial and Marketing Office Institute of Physics Publishing Techno House, Redcliffe Way Bristol BS1 6NX, UK Telephone: 0272 297481 Telex: 449149 Facsimile: 0272 294318 Email: within JANET: net at uk.co.ioppublishing from other networks: net at ioppublishing.co.uk x400: /s=net/o=ioppl/prmd=iopp/admd=0/c=gb  From omlinc at cs.rpi.edu Tue Apr 26 14:33:41 1994 From: omlinc at cs.rpi.edu (omlinc@cs.rpi.edu) Date: Tue, 26 Apr 94 14:33:41 EDT Subject: TR available from neuroprose archive Message-ID: <9404261833.AA15782@colossus.cs.rpi.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/omlin.dfa_encoding.ps.Z The following paper is now available from the neuroprose archive. Please send your comments regarding the paper to omlinc at cs.rpi.edu. -Christian Constructing Deterministic Finite-State Automata in Recurrent Neural Networks Christian W. Omlin (a,b), C. 
Lee Giles (a,c) (a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (b) CS Department, Rensselaer Polytechnic Institute, Troy, NY 12180 (c) UMIACS, University of Maryland, College Park, MD 20742

Abstract: Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFA's) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contributes to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e. the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a "programmed" neural network with n+1 neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature. (23 pages, 9 figures, 2 tables)

This paper is also available as Technical Report No. 94-3, Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180.

From dayhoff at src.umd.edu Wed Apr 27 00:28:04 1994 From: dayhoff at src.umd.edu (Judith E.
Dayhoff) Date: Wed, 27 Apr 1994 00:28:04 -0400 Subject: Please post for INNS/WCNN'94 Message-ID: <199404270428.AAA07366@newra.src.umd.edu>

WCNN'94

************************** *REGISTRATION INFORMATION* **************************

WORLD CONGRESS ON NEURAL NETWORKS, SAN DIEGO, CALIFORNIA, JUNE 5-9, 1994

*** Neural Network Industrial Exposition; INNS University Short Courses; Six Plenary Talks *** Five Special Sessions; Twenty Sessions of Invited and Contributed Talks; At least 9 SIG sessions *** Sponsored and Organized by the International Neural Network Society (INNS) ***

Table of Contents of This Announcement:
1. NEWS AND CHANGES TO PRELIMINARY PROGRAM
2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES
3. PLENARY TALKS
4. SPECIAL SESSIONS
5. INVITED/CONTRIBUTED SESSIONS
6. SHORT COURSES
7. TRAVEL ARRANGEMENTS
8. NOTE
9. REGISTRATION
10. HOTEL RESERVATIONS

===================================================================

1. NEWS From the Organizing Committee: The INNS Office has had a management change. We apologize for any delay, confusion or inconvenience you may have experienced during the transition. The Registration Deadline has been extended to MAY 16. Many of you know our new management from previous Congresses: Talley Associates (Att: Melissa Bishop) Address: 875 Kings Highway, Suite 200, Woodbury, NJ 08096; Voice 609-845-1720; FAX 609-853-0411. You may use this FAX for Congress Registration. Details of the change will be presented Monday June 6, 5-6 pm during the Presidential Speech. Your support is making WCNN'94 a success. Signed: Paul Werbos, Bernard Widrow, Harold Szu

Should you have any specific recommendation about ways to make WCNN-94 more productive, please contact any Governors that you know or Dr.
Harold Szu at (301) 390-3097; FAX (301) 390-3923; e-mail: hszu%ulysses at relay.nswc.navy.mil. To improve the structure of the Congress and achieve a more compact schedule for attendees, several changes have been made since the Preliminary Program: A. Short Courses Start Sunday Morning June 5. All Saturday Short Courses have been moved to Monday June 6, with the exception that Course [I] (J. Dayhoff) will be given Sunday 8AM - 12PM. To make room in the schedule for that change, Course [H] (W. Freeman) moves from Sunday to Monday 8AM - 12PM. On Monday the Short Courses are concurrent with the Exposition. B. The SPECIAL OFFER has been made more generous, to encourage students. For each of your Short Course registrations you can give a colleague in the same or lower-priced Registration Category a FREE Short Course! Enter his or her name on the Registration Form below ``TOTAL.'' The recipient of the gift should indicate ``Gift from [your name]'' at the time of registration. IF YOU HAVE ALREADY PRE-REGISTERED, arrange the gift now by FAX to 609-853-0411. =================================================================== 2. INDUSTRIAL EXPOSITION SCHEDULE CHANGES AND NEW LECTURES Monday June 6: 8 - 11 AM: Hardware; Software Video-demo talks; and Posters; 10 - 11 AM: Student Contest. The Contest is free-form, permitting many types of imaginative entry; Certificates and T-shirts will be awarded; no Grand Prize. 11 - Noon: Panel on Government Funding + Two New Lectures - in the Exposition Area: 12 - 1PM: Teuvo Kohonen: ``Exotic Applications of the Self-Organizing Map'' 5 - 6PM: Walter Freeman (Presidential Lecture): "Noncomputational Neural Networks" =================================================================== 3. PLENARY TALKS: Lotfi Zadeh, UC Berkeley "Fuzzy Logic, Neural Networks, and Soft Computing" Per Bak, Brookhaven Nat. Lab. 
"Introduction to Self-Organized Criticality" Bernard Widrow, Stanford University "Adaptive Inverse Control" Melanie Mitchell, Santa Fe Institute "Genetic Algorithm Applications" Paul Werbos, NSF "Brain-Like Intelligence in Artificial Models: How Do We Really Get There?" John G. Taylor, King's College London "Capturing What It Is Like To Be: Modelling the Mind by Neural Networks" =================================================================== 4. SPECIAL SESSIONS "Biomedical Applications of Neural Networks," (Tuesday) "Commercial and Industrial Applications of Neural Networks," (Tuesday) "Financial and Economic Applications of Neural Networks," (Wednesday) "Neural Networks in Chemical Engineering," (Thursday) "Mind, Brain and Consciousness" (Thursday) =================================================================== 5. INVITED/CONTRIBUTED SESSIONS (Too many to list here!) June 7 - 9 Also at least 9 Special Interest Group (SIG) Sessions are tentatively scheduled for Wednesday, June 8 from 8 -9:30 pm. =================================================================== 6. SHORT COURSES 8am - 12pm Sunday, June 5 [M] Gail Carpenter, Boston University: Adaptive Resonance Theory [L] Bernard Widrow, Stanford University: Adaptive Filters, Adaptive Controls, Adaptive Neural Networks and Applications [I] Judith Dayhoff, University of Maryland: Neurodynamics of Temporal Processing [G] Shun-Ichi Amari, University of Tokyo: Learning Curves, Generalization Errors and Model Selection 1pm - 5pm Sunday, June 5 [U] Lotfi Zadeh, University of California, Berkeley: Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs [K] Paul Werbos, NSF: From Backpropagation to Real-Time Control [O] Stephen Grossberg, Boston University: Autonomous Neurodynamics: From Perception to Action [E] John Taylor, King's College, London: Stochastic Neural Computing: From Living Neurons to Hardware 6pm - 10 pm Sunday, June 5 [V] Nicolai G. Rambidi, Int'l. Research Inst. 
for Management Sciences: Image Processing and Pattern Recognition Based on Molecular Neural Networks
[C] Christof Koch, California Institute of Technology: Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits

8am - 12pm Monday, June 6
[T] Melanie Mitchell, Santa Fe Institute: Genetic Algorithms, Theory and Applications
[R] David Casasent, Carnegie Mellon University: Pattern Recognition and Neural Networks
[H] Walter Freeman, University of California, Berkeley: Review of Neurobiology: From Single Neurons to Chaotic Dynamics of the Cerebral Cortex
[P] Lee Giles, NEC Research Institute: Dynamically-driven Recurrent Networks: Models, Training Algorithms and Applications

1pm - 5pm Monday, June 6
[S] Per Bak, Brookhaven National Laboratory: Introduction to Self-Organized Criticality
[D] Kunihiko Fukushima, Osaka University: Visual Pattern Recognition with Neural Networks
[B] James A. Anderson, Brown University: Neural Networks Computation as Viewed by Cognitive Science and Neuroscience
[Q] Alianna Maren, Accurate Automation Corporation: Introduction to Neural Network Applications

6pm - 10pm Monday, June 6
[N] Takeshi Yamakawa, Kyushu Institute of Technology: What are the Differences and Similarities among Fuzzy, Neural, and Chaotic Systems?
[A] Teuvo Kohonen, Helsinki University of Technology: Advances in the Theory and Applications of Self-Organizing Maps
[J] Richard A. Andersen, Massachusetts Institute of Technology: Neurobiologically Plausible Network Models
[F] Harold Szu, Naval Surface Warfare Center: Spatiotemporal Information Processing by Means of McCulloch-Pitts and Chaotic Neurons

===================================================================

7. TRAVEL RESERVATIONS: Executive Travel Associates (ETA) has been selected as the official travel company for the World Congress on Neural Networks.
ETA offers the lowest available fares on any airline at time of booking when you contact them at US phone number 202-828-3501 or toll free (in the US) at 800-562-0189 and identify yourself as a participant in the Congress. Flights booked on American Airlines, the official airline for this meeting, will result in an additional discount. Please provide the booking agent you use with the code: Star #S0464FS

===================================================================

8. ** NOTE ** Neither WCNN'94 (INNS) nor the Hotel can accept "electronic registration" or "electronic reservations" by E-Mail. The Hotel will accept telephoned reservations (note the May 6 deadline below!). For WCNN'94 Registration, use surface/air mail or FAX.

**********************************************************************

9. REGISTRATION

----Cut here, print out (Monospaced font such as Monaco 9, 62 lines/pg)----

REGISTRATION FORM WCNN'94 at Town & Country Hotel, San Diego, California, June 5 - 9, 1994

REGISTRATION FEE (includes all sessions, plenaries, proceedings, reception, AND Industrial Exposition. Separate registration for Short Courses, below.)
Before May 16, 1994 On-Site FEE ENCLOSED _ INNS Member Member Number__________ US$280 US$395 $_________ _ Non Members: US$380 US$495 $_________ _ Full Time Students: US$110 US$135 $_________ _ Spouse/Guest: US$45 US$55 $_________ Or Neural Network Industrial Exposition -Only- _ US$55 US$55 $_________ *************************************************** INNS UNIVERSITY SHORT COURSE REGISTRATION (must be received by May 16, 1994) Circle paid selections: A B C D E F G H I J K L M N O P Q R S T U V Circle free selection (Pay for 2 short courses, get the third FREE) A B C D E F G H I J K L M N O P Q R S T U V SHORT COURSE FEE _ INNS Members: US$275 $_________ _ Non Members: US$325 $_________ _ Full Time Students: US$150 $_________ Congress + Short Course TOTAL: $_________ For each paid course, nominate an accompanying person, registering in the same or lower category, for a free course: Mr./Ms.___________________ That person must also register by May 16, and indicate "Gift from [your name]" on the registration form. METHODS OF PAYMENT _ $ CHECK. All check payments made outside of the USA must be made on a USA bank in US dollars, payable to WCNN'94 _ $ CREDIT CARDS. Only VISA and MasterCard accepted. Registrations sent by FAX or surface/air mail must include an authorized signature. ( ) Visa ( ) M/C Name on Credit Card ______________________________________ Credit Card Number _______________________________________ Exp. Date ________________________________________________ Authorized Signature: _____________________________________ FAX: 609-853-0411 then Mail to INNS/WCNN'94 c/o Talley Associates, 875 Kings Highway, Suite 200 Woodbury, NJ 08096 ========================================================================== 10. 
HOTEL RESERVATIONS REGISTER AT TOWN & COUNTRY HOTEL, SAN DIEGO, CALIFORNIA (WCNN'94 Site) -----Cut here and print (Monospaced font such as Monaco 9, 62 lines/pg)---- Mail to Reservations, Town and Country Hotel, 500 Hotel Circle North, San Diego, CA 92108, USA; or FAX to 619-291-3584 Telephone: (800)772-8527 or (619)291-7131 INNS - WCNN'94 International Neural Network Society, World Congress on Neural Networks '94 _ Single: US$70 - US$95 plus applicable taxes _ Double: US$80 - US$105 plus applicable taxes Check in time: 3:00 pm. Check out time: 12:00 noon. Room reservations will be available on a first-come, first-serve basis until May 6, 1994. Reservations received after this date will be accepted on a space-available basis and cannot be guaranteed. Reservations after May 6 will also be subject to the rates prevailing at the time of the reservation. A confirmation of your reservation will be sent to you by the hotel. A first night's room deposit is required to confirm a reservation. PRINT OR TYPE ALL INFORMATION. Single________ Double_______ Arrival Date and approximate time:________________________________ Departure Date and approximate time:______________________________ Names of all occupants of room:____________________________________ RESERVATION CONFIRMATION SHOULD BE SENT TO: Name:____________________ Address:____________________________________________________________ ____________________________________________________________ City:____________________State/Province:_________________Country:__________ Type of Credit Card: (circle one) VISA/ MasterCard/ AmEx/ Diner's Club/ Discover/ Optima Card Number:______________________________ Exp. Date____________________ Name as it appears on your Card:______________________________ Authorized Signature: ______________________________ Cancellation Policy: Deposits are refundable if reservation is cancelled 48 hours in advance of arrival date. Be sure to record your cancellation number. 
Please indicate any disability which will require special assistance: _____________________________________________ FAX to 619-291-3584

==========================================================================

From tfb007 at hp1.uni-rostock.de Wed Apr 27 22:29:19 1994 From: tfb007 at hp1.uni-rostock.de (Neural Network Group) Date: Wed, 27 Apr 94 22:29:19 MESZ Subject: FTP-access to NIST-Archive Message-ID:

Because of too many questions like "how can I get the files irxxxx... from NIST?", I decided to post the address to the connectionists:

Name: SEQUOYAH.NCSL.NIST.GOV
Address: 129.6.61.25

There you can find databases and publications. If there are questions left, feel free to contact me. Neural Network Group Rostock, Welf Wustlich

From dodson at ecf.toronto.edu Wed Apr 27 17:50:29 1994 From: dodson at ecf.toronto.edu (C.T.J. Dodson) Date: Wed, 27 Apr 1994 17:50:29 -0400 Subject: Research Position at Toronto Message-ID: <94Apr27.175033edt.8429@cannon.ecf.toronto.edu>

Research Position Available at University of Toronto

A position is available now for a person with experience in neural network computing to join a mixed discipline group working on the simulation of stochastic fibrous suspensions in turbulent flow. The appointee should have a PhD or similar qualifications, and be prepared to work in a group with access to very good computing facilities.

Research Position at University of Toronto

A position is available from 1 June 1994 for a person qualified to do research in the application of neural network simulation methods. The project involves a mixed discipline group and concerns the simulation and analysis of stochastic fibrous suspensions in turbulent flows. Good computing facilities are available (SGI Challenge, Indigo, KSR1). The successful applicant should have a PhD or be at a similar level. The salary is decreed by NSERC, about $27,000pa without benefits. The appointment would be for up to 2 years, with the possibility of an extension to a third year.
Interested applicants please send resumes, letters of reference and possible starting dates to me as soon as possible.

****************************** Prof CTJ Dodson Department of Chemical Engineering and Department of Mathematics University of Toronto 200 College Street, Toronto M5S 1A1 Tel 416 978 5610 Fax 416 978 1144 Room Wallberg 362 email dodson at ecf.utoronto.ca ******************************

From dodson at ecf.toronto.edu Thu Apr 28 09:24:18 1994 From: dodson at ecf.toronto.edu (C.T.J. Dodson) Date: Thu, 28 Apr 1994 09:24:18 -0400 Subject: Research Position at Toronto Message-ID: <94Apr28.092427edt.9091@cannon.ecf.toronto.edu>

[Sorry, but an earlier version yesterday seems to have been corrupted; here is a corrected advertisement. Thanks, Kit Dodson]

Research Position Available at University of Toronto

A position is available from 1 June 1994 for a person qualified to do research in the application of neural network simulation methods. The project involves a mixed discipline group and concerns the simulation and analysis of stochastic fibrous suspensions in turbulent flows. Good computing facilities are available (SGI Challenge, Indigo, KSR1). The successful applicant should have a PhD or be at a similar level. The salary is decreed by NSERC, about $27,000pa without benefits. The appointment would be for up to 2 years, with the possibility of an extension to a third year. Interested applicants please send resumes, letters of reference and possible starting dates to me as soon as possible.
****************************** Prof CTJ Dodson Department of Chemical Engineering and Department of Mathematics University of Toronto 200 College Street, Toronto M5S 1A1 Tel 416 978 5610 Fax 416 978 1144 Room Wallberg 362 email dodson at ecf.utoronto.ca ******************************  From hendin at thunder.tau.ac.il Thu Apr 28 10:26:49 1994 From: hendin at thunder.tau.ac.il (Ofer Hendin) Date: Thu, 28 Apr 1994 17:26:49 +0300 (IDT) Subject: preprint available Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/hendin.olfaction.ps.Z The file hendin.olfaction.ps.Z now available for "anonymous ftp" copying from the Neuroprose repository (9 pages): ============================================================================ DECOMPOSITION OF A MIXTURE OF SIGNALS IN A MODEL OF THE OLFACTORY BULB (to be published in PNAS) O. HENDIN and D. HORN School of Physics and Astronomy Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv University Tel Aviv 69978 Israel J. J. HOPFIELD Divisions of Chemistry and Biology California Institute of Technology Pasadena, California 91125 ABSTRACT We describe models for the olfactory bulb which perform separation and decomposition of mixed odor inputs from different sources. The odors are unknown to the system, hence this is an analog and extension of the engineering problem of blind separation of signals. The separation process makes use of the different temporal fluctuations of the input odors which occur under natural conditions. We discuss two possibilities, one relying on a specific architecture connecting modules with the same sensory inputs, and the other assuming that the modules (e.g. glomeruli) have different receptive fields in odor space. We compare the implications of these models for the testing of mixed odors from a single source. ============================================================================ Ofer. 
____________________________________________________________________ Ofer Hendin e-mail: hendin at thunder.tau.ac.il School of Physics and Astronomy Phone : +972 3 640 7452 Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv University Tel Aviv 69978, Israel. ____________________________________________________________________

From kak at gate.ee.lsu.edu Thu Apr 28 10:46:06 1994 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Thu, 28 Apr 94 09:46:06 CDT Subject: No subject Message-ID: <9404281446.AA08343@gate.ee.lsu.edu>

The following report (a revision of an earlier report) is now available as a postscript file. If you would like me to email you a copy, please let me know.

_______________________________________________________________

Can We Build A Quantum Neural Computer? by Subhash Kak, Department of Electrical & Computer Engineering, Louisiana State University, Baton Rouge, LA 70803-5901, USA

Technical Report: ECE/LSU 92-13; 94-42 December 15, 1992; Revised April 26, 1994

Abstract: Hitherto computers have been designed based on classical laws. We consider the question of building a quantum neural computer and speculate on its computing power. We argue that such a computer could have the potential to solve artificial intelligence problems. It is also shown that information is not locally additive in a quantum computational paradigm. This is demonstrated by considering an information-theoretic analysis of the EPR experiment. Non-additivity of information in biological processing would be one piece of evidence establishing that consciousness should be modelled using a quantum theory.
-----------------------------------------------------------------------

From terry at salk.edu Thu Apr 28 20:05:07 1994
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 28 Apr 94 17:05:07 PDT
Subject: Neural Computation 6:3
Message-ID: <9404290005.AA15953@salk.edu>

NEURAL COMPUTATION  May 1994  Volume 6  Number 3

Review:

Statistical Physics Algorithms that Converge
  A. L. Yuille and J. J. Kosowsky

Article:

Object Recognition and Sensitive Periods: A Computational Analysis of Visual Imprinting
  Randall C. O'Reilly and Mark H. Johnson

Letters:

Computing Stereo Disparity and Motion with Known Binocular Cell Properties
  Ning Qian

Integration and Differentiation in Dynamic Recurrent Neural Networks
  Edwin E. Munro, Larry E. Shupe, and Eberhard E. Fetz

A Convergence Result for Learning in Recurrent Neural Networks
  Chung-Ming Kuan, Kurt Hornik, and Halbert White

Topology Learning Solved by Extended Objects: A Neural Network Model
  Csaba Szepesvari, Laszlo Balazs, and Andras Lorincz

Dynamics of Discrete Time, Continuous-State Hopfield Networks
  Pascal Koiran

Alopex: A Correlation-Based Learning Algorithm for Feedforward and Recurrent Neural Networks
  K. P. Unnikrishnan and K. P. Venugopal

Duality Between Learning Machines: A Bridge Between Supervised and Unsupervised Learning
  Jean-Pierre Nadal and N. Parga

Finding the Embedding Dimension and Variable Dependencies in Time Series
  Hong Pi and Carsten Peterson

Comparison of Some Neural Network and Scattered Data Approximations: The Inverse Manipulator Kinematics Example
  Dimitry Gorinevsky and Thomas H. Connolly

Functionally Equivalent Feedforward Neural Networks
  Vera Kurkova and Paul C. Kainen

-----

SUBSCRIPTIONS - 1994 - VOLUME 6 - BIMONTHLY (6 issues)

______ $40   Student and Retired
______ $65   Individual
______ $166  Institution

Add $22 for postage and handling outside USA (+7% GST for Canada).
(Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779  e-mail: hiscox at mitvma.mit.edu
-----

From danr at das.harvard.edu Fri Apr 29 00:41:41 1994
From: danr at das.harvard.edu (Dan Roth)
Date: Fri, 29 Apr 94 00:41:41 EDT
Subject: Advanced Tutorial on Learning DNF
Message-ID: <9404290441.AA28585@endor.harvard.edu>

Advanced Tutorial on the State of the Art in Learning DNF Rules
===============================================================

Sunday, July 10, 1994
Rutgers University, New Brunswick, New Jersey

Held in conjunction with the Eleventh International Conference on Machine Learning (ML94, July 11-13, 1994) and the Seventh Annual Conference on Computational Learning Theory (COLT94, July 12-15, 1994).

Learning DNF rules is one of the most important and most widely investigated problems in inductive learning from examples. Despite its long-standing position in both the Machine Learning and COLT communities, there has been little interaction between them. This workshop aims to promote such interaction.

The COLT community has studied DNF extensively under its standard learning models. While the general problem is still one of the main open problems in COLT, there have been many exciting developments in recent years, and techniques for solving major subproblems have been developed.

Inductive learning of subclasses of DNF such as production rules, decision trees and decision lists has been an active research topic in the Machine Learning community for years, but theory has had almost no impact on the experimentalists in machine learning working in this area.
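For readers outside the COLT community: a DNF rule is a disjunction of conjunctive terms over boolean variables. A minimal sketch of representing such a hypothesis and checking it against labeled examples (illustrative only; the hypothesis and examples are invented, and none of the workshop's learning algorithms are implemented here):

```python
# A DNF hypothesis: a disjunction of terms, each term a set of literals.
# A literal is (variable_index, required_value).
dnf = [
    {(0, True), (1, False)},   # x0 AND NOT x1
    {(2, True)},               # x2
]

def evaluate(dnf, x):
    """True iff some term has all of its literals satisfied by example x."""
    return any(all(x[i] == v for i, v in term) for term in dnf)

# Labeled examples: (assignment, target label).
examples = [
    ((True, False, False), True),    # first term fires
    ((False, True, True), True),     # second term fires
    ((False, False, False), False),  # no term fires
]
consistent = all(evaluate(dnf, x) == y for x, y in examples)
```

The learning problem the workshop addresses is the hard direction: given only such labeled examples, find a DNF hypothesis consistent with (or close to) the target.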
The purpose of this workshop is to provide an opportunity for cross-fertilization of ideas, by exposing each community to the other's: ML researchers to the frameworks, results and techniques developed in COLT; the theoretical community to many problems that are important from a practical point of view but are not currently addressed by COLT, as well as to approaches that have been shown to work in practice but lack a formal analysis.

To achieve this goal the workshop is organized around a set of invited talks, given by some of the prominent researchers in the field from both communities. Our intention is to have as much discussion as possible during the formal presentations.

The speakers are:

Nader Bshouty, University of Calgary, Canada
  Learning via the Monotone Theory
Wray Buntine, NASA
  Generating rule-based algorithms via graphical modeling
Tom Hancock, Siemens
  Learning Subclasses of DNF from Examples
Rob Holte, University of Ottawa, Canada
  Empirical Analyses of Learning Systems
Jeff Jackson, Carnegie Mellon University
  Learning DNF under the Uniform Distribution
Michael Kearns, AT&T Bell Labs
  An Overview of Computational Learning Theory Research on Decision Trees and DNF
Yishay Mansour, Tel-Aviv University, Israel
  Learning Boolean Functions Using the Fourier Transform
Cullen Schaffer, CUNY
  Learning M-of-N and Related Concepts

PARTICIPATION

The workshop is open to people who register for the COLT/ML conference. We hope to attract researchers who are active in the area of DNF as well as the general COLT/ML audience.
WORKSHOP ORGANIZERS

Jason Catlett                      Dan Roth
AT&T Bell Laboratories             Harvard University
Murray Hill, NJ 07974              Cambridge, MA 02138
+1 908 582 4978                    +1 617 495 5847
catlett at research.att.com          danr at das.harvard.edu

From tibs at utstat.toronto.edu Fri Apr 29 09:19:00 1994
From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu)
Date: Fri, 29 Apr 94 09:19 EDT
Subject: new manuscript
Message-ID:

Available in pub/nnboot.ps.Z at utstat.toronto.edu

A comparison of some error estimates for neural network models

Robert Tibshirani
Department of Preventive Medicine and Biostatistics, and Department of Statistics
University of Toronto

We discuss a number of methods for estimating the standard error of predicted values from a neural network (single-layer perceptron) model. These methods include the delta method based on the Hessian, bootstrap estimators, and the ``sandwich'' estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to the choice of starting weights.

=============================================================
Rob Tibshirani
Dept. of Preventive Medicine and Biostatistics
McMurrich Bldg., University of Toronto
Toronto, Canada M5S 1A8
Phone: 1-416-978-4642 (biostats), 416-978-0673 (stats)
FAX: 1-416-978-8299
Email: tibs at utstat.toronto.edu

  "To every man is given the key to the gates of heaven;
   the same key opens the gates of hell."  -- Buddhist proverb
=============================================================
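The bootstrap idea in the abstract above can be sketched in a few lines: refit the model on resampled data and take the spread of the resulting predictions as the standard error. The sketch uses a simple least-squares line as a stand-in for the network, with invented data; refitting an actual network from fresh random starting weights inside the loop is what lets the bootstrap capture the starting-weight variability the paper mentions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(-2, 2, n)
y = 1.5 * x + rng.normal(0, 0.5, n)      # invented training data

def fit_predict(x, y, x0):
    """Least-squares fit of y = a*x + b, then predict at x0."""
    a, b = np.polyfit(x, y, 1)
    return a * x0 + b

# Bootstrap the standard error of the prediction at a query point x0.
x0, B = 1.0, 500
preds = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)          # resample (x, y) pairs with replacement
    preds[i] = fit_predict(x[idx], y[idx], x0)

se = preds.std(ddof=1)                   # bootstrap standard error at x0
```

Each bootstrap replicate refits from scratch, so every source of fitting variability, including (for a network) the choice of starting weights, shows up in the spread of `preds`.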