From Connectionists-Request at cs.cmu.edu Mon May 1 00:05:21 1995
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Mon, 01 May 95 00:05:21 -0400
Subject: Bi-monthly Reminder
Message-ID: <21903.799301121@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994.

This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

-- Dave Touretzky & Lisa Saksida

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and tech reports.

- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:

  WRONG WAY: "Can someone please mail me all references to cascade correlation?"

  RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help.
A phone call to the appropriate institution may sometimes be simpler and faster.

- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

  arch.yymm

where yy is the two-digit year and mm the two-digit month. Thus the earliest available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with your username as password.
3. 'cd' directly to the following directory:
     /afs/cs/project/connect/connect-archives

The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following url:

  http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix.
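For readers who script their retrievals, the whole round trip can be automated. Below is a minimal sketch in Python using the standard ftplib module, with the host and directory taken from this note; the paper name is a hypothetical placeholder, and the last step assumes the unix "uncompress" utility is installed, since Python's standard library has no decoder for .Z files.

from ftplib import FTP
import subprocess

HOST = "archive.cis.ohio-state.edu"
DIRECTORY = "pub/neuroprose"
FILENAME = "myname.title.ps.Z"   # hypothetical example name

ftp = FTP(HOST)
ftp.login("anonymous", "myname@my.site")  # anonymous login, username as password
ftp.cwd(DIRECTORY)
with open(FILENAME, "wb") as out:
    # retrbinary switches the transfer to binary ("TYPE I"), which is
    # required for .Z files, then streams the file to the callback
    ftp.retrbinary("RETR " + FILENAME, out.write)
ftp.quit()

# Defer to the unix "uncompress" utility mentioned in the text;
# it strips the .Z affix from the file name.
subprocess.run(["uncompress", FILENAME], check=True)

The same pattern works for the CONNECTIONISTS archive files, substituting the CMU host and directory given above.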
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.

Make sure your paper is single-spaced, so as to save paper, and include an INDEX entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to catch postscript errors that your own local postscript libraries would mask. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available: "Solving Towers of Hanoi with ART-4"".

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

  FTP-host: archive.cis.ohio-state.edu
  FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems, will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed.

For further questions contact:

  Jordan Pollack
  Associate Professor
  Computer Science Department, Center for Complex Systems
  Brandeis University
  Waltham, MA 02254
  Phone: (617) 736-2713/* to fax
  email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry:

myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read

Let me know when it is in place so I can announce it to Connectionists at cmu.
^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

unix> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/myname.title.ps.Z

The file myname.title.ps.Z is now available for copying from the Neuroprose repository:

Random Paper (12 pages)
Somebody Somewhere
Cornell University

ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem.

~r .signature
^D

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with your username as password.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".

From hpan at ecn.purdue.edu Mon May 1 15:32:52 1995
From: hpan at ecn.purdue.edu (Hong Pan)
Date: Mon, 1 May 1995 14:32:52 -0500
Subject: TR avail: Linsker's Network: Qualitative Analysis On Parameter Space
Message-ID: <199505011932.OAA23341@en.ecn.purdue.edu>

*************** PLEASE DO NOT FORWARD TO OTHER BBOARDS *****************

FTP-host: archive.cis.ohio-state.edu
Mode: binary
FTP-filename: /pub/neuroprose/pan.purdue-tr-ee-95-12.ps.Z
URL: file://archive.cis.ohio-state.edu/pub/neuroprose/

------------------------------------------------------------------------

The following Technical Report, concerning the dynamical mechanism of a class of network models that use the limiter function (or the piecewise linear sigmoidal function) as the constraint limiting the size of the weight or the state variables, has been placed in the Neuroprose archive (see above for FTP-host) and is currently available as a compressed postscript file named

  pan.purdue-tr-ee-95-12.ps.Z (65 pages with 5 tables & 18 figures)

Comments, questions and suggestions about the work can be sent to: hpan at ecn.purdue.edu

***** Hardcopies cannot be provided *****

------------------------------------------------------------------------

Linsker-type Hebbian Learning: A Qualitative Analysis On The Parameter Space

Jianfeng Feng
Mathematisches Institut, Universität München
Theresienstr. 39, D-80333 München, Germany

Hong Pan and Vwani P. Roychowdhury
School of Electrical Engineering, 1285 Electrical Engineering Building
Purdue University, West Lafayette, IN 47907-1285

------------------------------------------------------------------------

Abstract:

From cyril at psychvax.psych.su.OZ.AU Mon May 1 21:54:03 1995
From: cyril at psychvax.psych.su.OZ.AU (Cyril Latimer)
Date: Tue, 2 May 1995 11:54:03 +1000
Subject: Modelling Symmetry Detection with Back-propagation Networks
Message-ID:

The following paper appeared in Spatial Vision, and reprints may be requested from the address given below.

Latimer, C.R., Joung, W., & Stevens, C.J. Modelling symmetry detection with back-propagation networks. Spatial Vision, 1994, 8(4), 415-431.
Abstract

This paper reports experimental data and results of network simulations in a project on symmetry detection in small 6 x 6 binary patterns. Patterns were symmetrical about the vertical, horizontal, positive-oblique or negative-oblique axis, and were viewed on a computer screen. Encouraged to react quickly and accurately, subjects indicated axis of symmetry by pressing one of four designated keys. Detection times and errors were recorded. Back-propagation networks were trained to categorize the patterns on the basis of axis of symmetry, and, by employing cascaded activation functions on their output units, it was possible to compare network performance with subjects' detection times. Best correspondence between simulated and human detection-time functions was observed after the networks had been given significantly more training on patterns symmetrical about the vertical and the horizontal axes. In comparison with no pre-training and pre-training with asymmetric patterns, pre-training networks with sets of single vertical, horizontal, positive-oblique or negative-oblique bars speeded subsequent learning of symmetrical patterns. Results are discussed within the context of theories suggesting that faster detection of symmetries about the vertical and horizontal axes may be due to significantly more early experience with stimuli oriented on these axes.

------------------------------- * ----------------------------------
Dr. Cyril R. Latimer                  Ph: +61 2 351-2481
Department of Psychology              Fax: +61 2 351-2603
University of Sydney
NSW 2006, Australia                   email: cyril at psych.su.oz.au
------------------------------ * -----------------------------------
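The abstract does not include the authors' simulation code, but the idea of reading detection times off cascaded output units can be illustrated with a toy sketch: let an output unit's activation build gradually toward its asymptote and record the number of iterations needed to cross a response threshold. All parameter names and values below are hypothetical, not those of the study; Python is used for concreteness.

import numpy as np

def cascade_response_time(net_input, rate=0.1, threshold=0.6, max_steps=500):
    """Simulated detection time from a cascaded output unit.

    Illustrative only: activation accumulates toward the logistic
    asymptote of the net input, and the step count at which it
    crosses the response threshold serves as a simulated RT.
    """
    activation = 0.0
    target = 1.0 / (1.0 + np.exp(-net_input))  # asymptotic activation
    for step in range(1, max_steps + 1):
        activation += rate * (target - activation)  # leaky accumulation
        if activation >= threshold:
            return step
    return max_steps  # no response within the deadline

# Stronger evidence for one symmetry axis -> faster simulated detection.
print(cascade_response_time(3.0), cascade_response_time(1.0))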
From markey at dendrite.cs.colorado.edu Tue May 2 01:40:03 1995
From: markey at dendrite.cs.colorado.edu (Kevin Markey)
Date: Mon, 1 May 1995 23:40:03 -0600
Subject: Thesis/TR: Sensorimotor foundations of phonology -- a model.
Message-ID: <199505020540.XAA15632@dendrite.cs.colorado.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Thesis/markey.thesis.ps.Z

Ph.D. Thesis available by anonymous ftp (128 pages)

The Sensorimotor Foundations of Phonology: A Computational Model of Early Childhood Articulatory and Phonetic Development

Kevin L. Markey
Department of Computer Science
University of Colorado at Boulder

ABSTRACT

This thesis describes HABLAR, a computational model of the sensorimotor foundations of early childhood phonological development. HABLAR is intended to replicate the major milestones of emerging speech and demonstrate key characteristics of normal development, including the phonetic characteristics of babble, systematic and context-sensitive patterns of sound substitutions and deletions, overgeneralization errors, and the emergence of adult phonemic organization. HABLAR simulates a complete sensorimotor system consisting of an auditory system that detects and categorizes speech sounds using only acoustic cues drawn from its linguistic environment, an articulatory system that generates synthetic speech based on a realistic computer model of the vocal tract, and a hierarchical cognitive architecture that bridges the two. The environment in which the model resides is also simulated. The model is an autonomous agent which actively experiments within this environment. The principal hypothesis guiding the model is that phonological development emerges from the interaction of auditory perception and hierarchical motor control.

The model's auditory perception is specialized to segment and categorize acoustic signals into discrete phonetic events which closely correspond to discrete sets of functionally coordinated gestures learned by the model's articulatory control apparatus. HABLAR learns the correspondence between discrete phonetic and articulatory events, not between continuous speech and continuous vocal tract motion. HABLAR's perceptual and motor organization is initially syllabic. Phonemes are not built into the model but emerge (along with an adult-like phonological organization) due to the differentiation of early syllable-sized motor patterns into phoneme-sized patterns while the model learns a large lexicon.

Learning occurs in two phases. In the first phase, HABLAR's auditory perception employs soft competitive learning to acquire phonetic features which categorize the spectral properties of utterances in the linguistic environment. In the second phase, reinforcement based on the phonetic proximity of target and actual utterances guides learning by the model's two levels of motor control. The phonological control level uses Q-learning to learn an optimal policy linking phonetic and articulatory events. The articulatory control level employs a parallel Q-learning architecture to learn a policy which controls the vocal tract's twelve degrees of freedom.

HABLAR has been fully implemented as a computational model. Simulations of the model's auditory perception demonstrate that it faithfully preserves and makes explicit phonetic properties of the acoustic signal. Auditory simulations also mimic categorical vowel and consonant perception which develops in human infancy. Other results demonstrate the feasibility of learning multi-dimensional articulatory control with a parallel reinforcement learning architecture, and the effectiveness of shaping motor control with reinforcement based on the phonetic proximity of target and actual utterances. The model provides qualitative accounts of developmental data. It is predicted to make pronunciation errors similar to those observed among children because of the relative articulatory difficulty of its producing different speech sounds, its tendency to eliminate the biggest phonetic errors first, its generalization of already mastered sounds across phonetic similarities, and contextual effects of phonetic representations and internal distributed representations which underlie speech production.
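The Q-learning rule named in the abstract is a standard one; for orientation, here is the generic tabular form of the update in Python. This is not HABLAR's parallel architecture: the states, actions, and rewards below are schematic placeholders.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["gesture_a", "gesture_b"]   # hypothetical articulatory events
Q = defaultdict(float)                 # Q[(state, action)] -> value estimate

def choose_action(state):
    # epsilon-greedy policy over the current Q estimates
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # one-step Q-learning backup: move the estimate toward the reward
    # plus the discounted value of the best action in the next state
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example of one learning step with placeholder states and reward:
update("s0", choose_action("s0"), 1.0, "s1")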
-----------------------------------------------------------------------------

Sorry, hard copies are not available. Thanks to Jordan Pollack for maintaining neuroprose.

Kevin L. Markey
Department of Psychology
2155 S. Race Street
University of Denver
Denver, CO 80208
markey at cs.colorado.edu

------------------------------------------------------------------------------

From PREFENES at lbs.lon.ac.uk Tue May 2 11:04:59 1995
From: PREFENES at lbs.lon.ac.uk (Paul Refenes)
Date: Tue, 2 May 1995 11:04:59 BST
Subject: Doctoral Research Scholarships
Message-ID:

Collaborative PhD Research Scholarships
Department of Decision Science
London Business School
University of London

The Department of Decision Science at London Business School is offering three scholarships on its Doctoral programme. Commencing in October 1995, the research areas will include Neural Networks, Non-parametric Statistics, Financial Engineering, Simulation, Optimisation and Decision Analysis.

Principled Model Selection for Neural Network Applications in Nonlinear Time Series: to utilise developments from multinomial time series theory and from the non-parametric statistics field for developing distribution theories, statistical diagnostics, and test procedures for recurrent neural network model identification. The methodology will be used to develop models of nonlinear cointegration in equity markets and in telecommunications data.

Advanced Decision Technology in Financial Risk Management: the use of advanced decision technologies such as neural networks, non-parametric statistics and genetic algorithms for the development of financial risk management models in the currency and soft commodity markets. Our industrial collaborator has a special interest in robust neural network models for hedging and arbitrage strategies in the currency, soft commodity and equity markets.

Intelligent Systems in Industry Modelling and Simulation Environments: the use of simulation for the development of business strategy and the facilitation of executive debate is now well established and popular. Neural network technology will be used for the development of "intelligent simulation agents" that can process the vast amount of data generated by the simulations and adapt their behaviour by learning from the feedback patterns.

London Business School offers students enrolled in the doctoral programme core courses on Research Methodology and Statistical Analysis, as well as a choice of advanced specialised subject area courses including Financial Economics, Equity Investment, Derivatives Research, etc. Candidates with a strong background in mathematics, operations research, computer science, nonparametric statistics, and/or econometrics who wish to apply are invited to write with a copy of their CV to:

  Professor D. Bunn or Dr A-P. N. Refenes
  London Business School
  Regent's Park, London NW1 4SA
  tel: ++44 171 262 5050
  fax: ++44 171 728 7875

The Department
==============
The Department of Decision Sciences of the London Business School is actively involved in innovative multi-disciplinary research on the application of new business modelling methodologies to individual and organisational decision-making. In seeking to extend the effectiveness of conventional methods of management science, statistical methods and decision support systems with the latest generation of software platforms, artificial intelligence, neural networks, genetic algorithms and computationally intensive methods, the research themes of the department remain at the forefront of new practice.

The NeuroForecasting Research Unit
==================================
The NeuroForecasting Research Unit at London Business School is the major centre in Europe for research into neural networks, non-parametric statistics and financial engineering. With funding from the DTI, the European Commission and a consortium of leading financial institutions, the research unit has attained a world-wide reputation for collaborative research. Doctoral students work in a team of highly motivated post-doctoral fellows, research fellows, doctoral students and faculty who are amongst Europe's leading authorities in the field.

Advanced Decision Support Platforms
===================================
The current trend in the design of decision support is towards a synthesis of multiple approaches and integration of business modelling techniques (optimisation with simulation, forecasting with decision analysis, etc.).
Using object-oriented software architectures, the group has developed innovative approaches for model structuring, strategic analysis, forecasting and decision-analytic procedures. Several companies and the ESRC are currently supporting this work.

From raffaele at caio.irmkant.rm.cnr.it Tue May 2 17:44:23 1995
From: raffaele at caio.irmkant.rm.cnr.it (raffaele@caio.irmkant.rm.cnr.it)
Date: Tue, 2 May 1995 16:44:23 -0500
Subject: simulation of protein folding process (paper)
Message-ID: <9505022144.AA09110@caio.irmkant.rm.cnr.it>

FTP-host: kant.irmkant.rm.cnr.it
FTP-filename: /pub/econets/calabretta.folding.ps.Z

The following paper has been placed in the anonymous-ftp archive (see above for ftp-host) and is now available as a compressed postscript file named

  calabretta.folding.ps.Z (14 pages of output)

The paper is also available on the World Wide Web: http://kant.irmkant.rm.cnr.it/gral.html

It will appear in the Proceedings of the 3rd European Conference on Artificial Life (Granada, Spain, 4-6 June 1995). Comments welcome.

Raffaele Calabretta
email address: raffaele at caio.irmkant.rm.cnr.it

------------------------------------------------------------------

"An Artificial Model for Predicting the Tertiary Structure of Unknown Proteins that Emulates the Folding Process"

Raffaele Calabretta, Stefano Nolfi, Domenico Parisi
Department of Neural Systems and Artificial Life
Institute of Psychology, National Research Council
V.le Marx, 15, 00137 Rome, Italy

----------------------------------------------------------------------------

Abstract: We present an "ab initio" method that tries to determine the tertiary structure of unknown proteins by modelling the folding process without using potentials extracted from known protein structures. We have been able to obtain appropriate matrices of folding potentials, i.e. 'forces' able to drive the folding process to produce correct tertiary structures, using a genetic algorithm. Some initial simulations that simulate the folding process of a fragment of crambin that results in an alpha-helix have yielded good results. We discuss some general implications of an Artificial Life approach to protein folding which makes an attempt at simulating the actual folding process rather than just trying to predict its final result.

----------------------------------------------------------------------------
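The abstract's use of a genetic algorithm over matrices of folding potentials can be pictured with a generic sketch like the one below (Python). The fitness function is a stand-in: in the paper's setting it would run the folding simulation under a candidate potential matrix and score how close the resulting structure comes to the known one. The population size, selection scheme and operators here are arbitrary choices, not the authors'.

import numpy as np

rng = np.random.default_rng(0)
N_RESIDUES, POP, GENERATIONS = 20, 50, 100

def fitness(potentials):
    # Placeholder score; the real version would simulate folding under
    # this potential matrix and compare the result to the target fold.
    return -np.abs(potentials.sum())

population = [rng.normal(size=(N_RESIDUES, N_RESIDUES)) for _ in range(POP)]
for gen in range(GENERATIONS):
    scores = np.array([fitness(m) for m in population])
    # rank-based selection: keep the better half as parents
    parents = [population[i] for i in np.argsort(scores)[POP // 2:]]
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), size=2, replace=False)
        mask = rng.random((N_RESIDUES, N_RESIDUES)) < 0.5   # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += rng.normal(scale=0.05, size=child.shape)   # Gaussian mutation
        children.append(child)
    population = parents + children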
From jhoh at vision.postech.ac.kr Tue May 2 11:24:56 1995
From: jhoh at vision.postech.ac.kr (Prof. Jong-Hoon Oh)
Date: Wed, 3 May 1995 00:24:56 +0900
Subject: Post-doc Position, Statistical Physics of Neural Networks
Message-ID:

Postdoctoral Position at POHANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
"Statistical Physics of Neural Networks"

A postdoctoral position is available at the Basic Science Research Institute of Pohang University of Science and Technology. The main research area will be statistical mechanics of neural networks. A background in the statistical physics of neural networks, spin glasses or other condensed matter systems is preferred, but someone with a strong theoretical or computational physics background who is willing to explore this exciting new field can also be considered for this position. Current research is mainly concentrated on the statistical physics of learning in multi-layered neural networks, including issues such as generalization, storage capacity, population learning and model selection. We are now extending our research to biological neural networks and time series prediction.

We have computing facilities such as two parallel computers and several high-end workstations. We hope the successful applicant can start work either in June or in September, but we have some flexibility. We will support him/her for a year, and support can be extended for one more year according to his/her performance. Further information can be requested by e-mail.

An applicant should send a CV and a list of publications to the following address, and arrange for two recommendation letters (or at least one from the Ph.D. adviser) to arrive before May 15. A CV in TeX/LaTeX format by e-mail is welcome. We prefer e-mail communication. Recommendation letters can also be sent by e-mail.

****************************************************************************
Prof. Jong-Hoon Oh
Associate Professor, Department of Physics
Pohang University of Science and Technology
Hyoja San 31, Pohang, 790-784 Kyoungbuk, Korea
email: jhoh at vision.postech.ac.kr
Tel) +82-562-2792069    Fax) +82-562-2793099
****************************************************************************

From maggini at McCulloch.Ing.UniFI.IT Tue May 2 12:42:38 1995
From: maggini at McCulloch.Ing.UniFI.IT (Marco Maggini)
Date: Tue, 2 May 1995 18:42:38 +0200
Subject: Neurap 95 WWW page
Message-ID: <9505021642.AA03087@McCulloch.Ing.UniFI.IT>

NEURAP'95
8th International Conference on Neural Networks and their Applications
Marseilles, France - December 13-14-15, 1995

First announcement and call for papers

WWW: http://www-dsi.ing.unifi.it/neural/neurap/neurap.html

SCOPE OF THE CONFERENCE

Following the previous conferences in Nîmes, the eighth Neural Networks and their Applications conference will be organized in 1995 in Marseilles, France. The attendance is unique of its kind, composed half of industrial engineers and half of university scientists, coming from all over the world. The purpose of the NEURAP conference is to present the latest results in the application of artificial neural networks. Theoretical aspects of Artificial Neural Networks are to be presented at the ESANN (European Symposium on Artificial Neural Networks) conference. This edition will give a particular place, but not exclusively, to the three following application domains:

* Automation
* Robotics
* Electrical Engineering

To this end, leading international researchers in these domains have been added to the scientific committee. The program committee of NEURAP'95 welcomes papers covering any kind of applications, methods, techniques, or tools that help to understand or develop neural networks applications. To help the prospective authors, the following is a non-exhaustive list of topics which will be covered:

* Speech or image recognition
* Fault tolerance
* Data or sensor fusion
* Process control
* Forecasting
* Classification
* Knowledge acquisition
* Planning
* Methods or tools for evaluating neural networks performance
* Preprocessing of data
* Simulation tools (research, education, development)
* Hybrid systems (fuzzy, genetic algorithms, symbolic representation, etc.)
* etc.

The conference will be held in Marseilles, the second largest city in France. Due to the proximity of the Mediterranean sea, winter is usually sunny and temperate. Marseilles is well served by airways and railways, and is connected to the major European cities.
CALL FOR CONTRIBUTIONS

Prospective authors are invited to submit six originals of their contribution (full paper) before June 15, 1995. The proceedings will be published in English. Papers should not exceed eight A4 pages (double columns, including figures and references). The printing area will be 17 x 23.5 cm (centered on the A4 pages); left, right, top and bottom margins will thus respectively be 1.9, 1.9, 2.5 and 3.4 cm. A 10-point Times font will be used for the main text; headings will be in bold characters (but not underlined), and will be separated from the main text by two blank lines before and one after. Manuscripts prepared in this format will be reproduced in the same size in the book.

Originals of the figures will be pasted into the manuscript and centered between the margins. The lettering of the figures should be in 10-point Times font size. Figures should be numbered. The legends also should be centered between the margins and be written in 9-point Times font size. The pages of the manuscript will not be numbered (numbering decided by the editor).

A separate page (not included in the manuscript) will indicate:

* the title of the manuscript
* author(s) name(s)
* the complete address (including phone & fax numbers and E-mail) of the corresponding author
* a list of five keywords or topics

On the same page, the authors will copy and sign the following paragraph:

"In case of acceptance of the paper for presentation at NEURAP'95:
* at least one of the authors will register to the conference and will present the paper
* the author(s) give their rights up over the paper to the organizers of NEURAP'95, for the proceedings and any publication that could directly be generated by the conference
* if the paper does not match the format requirements for the proceedings, the author(s) will send a revised version within two weeks of the notification of acceptance."

Presentations will be oral or poster, depending on the wish of the author(s) and, also, on organisation constraints. 20 minutes will be allowed for each oral presentation. Each poster will be allowed an oral presentation of 3 minutes at the beginning of the poster presentation. The full paper of either an oral or a poster presentation will be published in the proceedings.

REGISTRATION FEES (indicative)

                Registration before      Registration after
                October 1st, 1995        October 1st, 1995
Students        1000 FF                  1200 FF
Universities    1600 FF                  1800 FF
Industries      2000 FF                  2300 FF

An "advanced registration form" is available by writing to the conference secretariat (see reply form below). Please ask for this form in order to benefit from the reduced registration fee before October 1st, 1995.

DEADLINES

Submission of papers         June 15, 1995
Notification of acceptance   September 18, 1995
Conference                   December 13-14-15, 1995

CONFERENCE SECRETARIAT

Dr. Claude TOUZET
IUSPIM, Domaine Universitaire de Saint-Jérôme
F-13397 Marseille Cedex 20 (France)
Email: diam_ct at vmesa11.u-3mrs.fr
Phone: +33 91 05 60 60
Fax: +33 91 05 60 09

REPLY FORM

If you wish to receive the final program of NEURAP'95, for any address change, or to add one of your colleagues to our database, please send this form to the conference secretariat. Please indicate if you wish to receive the advanced registration form. Please return this form in a stamped envelope to:

NEURAP'95
IUSPIM
Domaine Universitaire de Saint-Jérôme
Avenue Escadrille Normandie-Niemen
F-13397 Marseille Cedex 20
France

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name: ............................................................
First Name: ......................................................
University or Company: ...........................................
Address: .........................................................
..................................................................
ZIP: ....................... Town: ..............................
Country: .........................................................
Tel: .............................................................
Fax: .............................................................
E-mail: ..........................................................
[ ] Please send me the "advanced registration form".
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

SCIENTIFIC COMMITTEE (to be confirmed)

Jeanny Herault          INPG (Grenoble, F) - President
Karl Goser              Universität Dortmund (D) - President
Bernard Amy             IMAG (Grenoble, F)
Xavier Arreguit         CSEM (CH)
Jacob Bahren            CESAR (Oak Ridge, USA)
Gaston Baudat           Sodeco (Genève, CH)
Jean-Marie Bernassau    Sanofi Recherche (Montpellier, F)
Pierre Bessière         IMAG/LIFIA (Grenoble, F)
Jean Bigeon             INPG (Grenoble, F)
Giacomo Bisio           Università di Genova (I)
François Blayo          SAMOS - Univ. Paris I (F)
Jean Bourjault          Université de Besançon (F)
Paul Bourret            Onera-Cert (Toulouse, F)
Joan Cabestany          UPC (Barcelona, E)
Leon O. Chua            University of California (USA)
Mauricio Cirrincione    Università di Palermo (I)
Ian Cloete              University of Stellenbosch (South Africa)
Daniel Collobert        CNET (Lannion, F)
Philippe Coiffet        CRIIF (Gif sur Yvette, F)
Marie Cottrell          SAMOS - Université Paris I (F)
Alexandru Cristea       Institute of Virology (Bucharest, Romania)
Dante Del Corso         Politecnico di Torino (I)
Marc Duranton           LEP (Limeil-Brévannes, F)
Françoise Fogelman      Sligos (Clamart, F)
Kunihiko Fukushima      Osaka University (J)
Patrick Gallinari       Univ. Pierre et Marie Curie (Paris, F)
Josef Göppert           University of Tübingen (D)
Marita Gordon           CEA (Grenoble, F)
Marco Gori              Università di Firenze (I)
Erwin Groospietsch      GMD (Sankt Augustin, D)
Martin Hasler           EPFL (Lausanne, CH)
Jean-Paul Haton         CRIN-INRIA (Nancy, F)
Jaap Hoekstra           Delft University of Technology (NL)
Yujiro Inouye           Osaka University (Japan)
Masumi Ishikawa         Kyushu Institute of Technology (J)
Christian Jutten        INPG (Grenoble, F)
Heinrich Klar           Technische Universität Berlin (D)
Jean-François Lavignon  DRET (Arcueil, F)
John Lazzaro            Univ. of California (Berkeley, USA)
Vincent Lorquet         ITMI (Grenoble, F)
Daniel Memmi            CNRS/LIMSI (Orsay, F)
Ruy Milidiu             University of Rio (Brazil)
Pietro Morasso          University of Genoa (I)
Fabien Moutarde         Alcatel Alsthom Recherche (F)
Alan F. Murray          University of Edinburgh (GB)
Akira Namatame          National Defence Academy (J)
Josef A. Nossek         Technische Univ. München (D)
Erkki Oja               Lappeenranta Univ. of Tech. (FIN)
Stanislaw Osowski       University of Warsaw (Poland)
Carsten Peterson        University of Lund (S)
Alberto Prieto          Universidad de Granada (E)
Pierre Puget            CEA (Grenoble, F)
Ulrich Ramacher         Technische Universität Dresden (D)
Leonardo Reyneri        Università di Pisa (I)
Tamas Roska             MTA-SZTAKI (Budapest, H)
Jean-Claude Sabonnadière INPG (Grenoble, F)
Juan Miguel Santos      University of Buenos Aires (Argentina)
Leslie S. Smith         University of Stirling (GB)
John T. Taylor          University College London (GB)
Carme Torras            Institut de Cibernetica/CSIC (E)
Claude Touzet           DIAM/IUSPIM (Marseille, F)
Michel Verleysen        UCL (Louvain-La-Neuve, B)
Eric Vittoz             CSEM (Neuchâtel, CH)
Alexandre Wallyn        CGInn (Boulogne-Billancourt, F)

ORGANIZING COMMITTEE

Norbert Giambiasi       DIAM/IUSPIM - President
Jean-Claude Bertrand    IUSPIM
Claudia Frydman         DIAM/IUSPIM
J.-François Lemaitre    IIRIAM
Danielle Bertrand       IUSPIM

From kak at gate.ee.lsu.edu Tue May 2 14:48:44 1995
From: kak at gate.ee.lsu.edu (Subhash Kak)
Date: Tue, 2 May 95 13:48:44 CDT
Subject: Paper
Message-ID: <9505021848.AA19752@gate.ee.lsu.edu>

The following paper is available by anonymous ftp. Comments on the paper are most welcome.

INFORMATION, PHYSICS AND COMPUTATION

Subhash C. Kak
Louisiana State University
Baton Rouge, LA 70803-5901

Abstract: The paper presents several observations on the connections between information, physics and computation. This includes energy and computing speed and the question of quantum computing in the style of Feynman and others.

Technical Report ECE-95-04, April 19, 1995

---------
ftp://gate.ee.lsu.edu/pub/kak/inf.ps.Z

From Alex.Monaghan at CompApp.DCU.IE Tue May 2 15:13:32 1995
From: Alex.Monaghan at CompApp.DCU.IE (Alex.Monaghan@CompApp.DCU.IE)
Date: Tue, 2 May 95 15:13:32 BST
Subject: CSNLP Conference at Dublin City University
Message-ID: <9505021413.AA21392@janitor.compapp.dcu.ie>

PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST!

Call for Participation in the Fourth International Conference on
The COGNITIVE SCIENCE of NATURAL LANGUAGE PROCESSING
Dublin City University, 5-7 July 1995

Theme: The Role of Syntax

There is currently considerable debate regarding the place and importance of syntax in NLP. Papers dealing with this matter will feature strongly in the programme.

Invited Speakers: The following speakers have agreed to give keynote talks:

  Mark Steedman, University of Pennsylvania
  Alison Henry, University of Ulster

Other areas addressed will include:

  Machine Translation
  Connectionism
  Semantic inferencing
  Spoken dialogue
  Prosody
  Hybrid approaches
  Assessment tools and methods

This is a small conference, limited to about 40 delegates. We aim to keep things relatively informal, and to promote discussion and debate. With two dozen contributed papers and two invited talks, all outstanding, there should be plenty of material to interest a wide range of researchers.

Registration and Accommodation: The registration fee will be IR£60, and will include proceedings, lunches and one evening meal. Accommodation can be reserved in the campus residences at DCU. A single room is IR£16 per night, with full Irish breakfast an additional IR£4. Accommodation will be "first come, first served": there is a heavy demand for campus rooms in the summer. There are also several hotels and B&B establishments nearby: addresses will be provided on request.

To register, contact Alex Monaghan at the addresses given below. Payment in advance is possible but not obligatory. Please state gender (for accommodation purposes) and any unusual dietary requirements.

CSNLP
Alex Monaghan
School of Computer Applications
Dublin City University
Dublin 9
Ireland

Email registrations are preferred; please mail alex at compapp.dcu.ie (internet)

---------
Deadlines: 26th June --- Final date for registration, accommodation, meals etc.

A provisional programme will be sent out in due course.
From yorick at dcs.shef.ac.uk Tue May 2 18:18:44 1995
From: yorick at dcs.shef.ac.uk (Yorick Wilks)
Date: Tue, 2 May 95 18:18:44 BST
Subject: Research in CS, AI, NLP and Speech
Message-ID: <9505021718.AA11579@dcs.shef.ac.uk>

University of Sheffield, UK
Department of Computer Science

RESEARCH DEGREES IN COMPUTER SCIENCE
************************************

This department intends to recruit a number of postgraduate research students to commence studies in October 1995. Successful applicants will be registered for an M.Phil or Ph.D. The department has four research groups, with interests as follows:

Formal Methods and Software Engineering
---------------------------------------
Telematics, Formal Specification, Verification and Testing, Object-Oriented Languages and Design, Proof Theory.

Parallel Processing
-------------------
Parallel Database Machines, Parallel CASE Tools, Safety-Critical Systems.

Artificial Intelligence and Neural Networks
-------------------------------------------
Natural Language Processing (including corpus and lexically based methods, information extraction and pragmatics), Neural Networks, Computer Graphics, Intelligent Tutoring Systems, Computer Argumentation.

Speech and Hearing
------------------
Auditory Scene Analysis, Models of Auditory Perception, Automatic Speech Recognition.

It is expected that a number of (British Government) EPSRC awards will be available to UK residents, in addition to the University's own studentship and bursary schemes, some of which are open to all. Candidates for these awards should have a good honours degree in a relevant discipline (not necessarily Computer Science), or should attain such a degree by October 1995. Part-time registration is also possible. We especially welcome applications from (non-British) EU citizens eligible for support under the EU's Research Training Grants schemes (with application deadlines in May and September).

Application forms and further particulars are available from The Departmental Secretary, Department of Computer Science, University of Sheffield, Regent Court, 211 Portobello St, Sheffield S1 4DP. More details can also be obtained from world-wide-web address http://www.dcs.shef.ac.uk. Informal enquiries may be addressed to Dr. Phil
Green (phone 0114-282-5578, email p.green at dcs.sheffield.ac.uk) or Prof Yorick Wilks (phone 0114-282-5563, email yorick at dcs.sheffield.ac.uk).

From B344DSL at utarlg.uta.edu Tue May 2 18:22:22 1995
From: B344DSL at utarlg.uta.edu (B344DSL@utarlg.uta.edu)
Date: Tue, 02 May 1995 17:22:22 -0500 (CDT)
Subject: Tentative program for MIND meeting at Texas A&M
Message-ID:

Conference on Neural Networks for Novel High-Order Rule Formation

Sponsored by Metroplex Institute for Neural Dynamics (MIND) and For a New Social Science (NSS)

Forum Theatre, Rudder Theatre Complex, Texas A&M University, May 20-21, 1995

Tentative Schedule and List of Abstracts

Saturday, May 20

4:30 - 5:30 PM  Karl Pribram, Radford University
                Brain, Values, and Creativity

Sunday, May 21

9:00 - 10:00    John Taylor, University of London
                Building the Mind from Neural Networks
10:00 - 10:45   Daniel Levine, University of Texas at Arlington
                The Prefrontal Cortex and Rule Formation
10:45 - 11:00   Break
11:00 - 11:45   Sam Leven, For a New Social Science
                Synesthesia and S.A.M.: Modeling Creative Process
11:45 - 12:15   Richard Long, University of Central Florida
                A Computational Model of Emotion Based Learning: Variation and Selection of Attractors in ANNs
12:15 - 1:45    Lunch
1:45 - 2:30     Ron Sun, University of Alabama
                An Agent Architecture with On-line Learning of Conceptual and Subconceptual Knowledge
2:30 - 3:00     Madhav Moganti, University of Missouri, Rolla
                Generation of FAM Rules Using DCL Network in PCB Inspection
3:30 - 3:45     Break
3:45 - 4:30     Ramkrishna Prakash, University of Houston
                Towards Neural Bases of Cognitive Functions: Sensorimotor Intelligence
4:30 - 5:15     Risto Miikkulainen, University of Texas
                Learning and Performing Sequential Decision Tasks Through Symbiotic Evolution of Neural Networks
5:15 - 5:45     Richard Filer, University of York
                Correlation Matrix Memory in Rule-based Reasoning and Combinatorial Rule Match

Posters (to be up continuously):

Risto Miikkulainen, University of Texas
Parsing Embedded Structures with Subsymbolic Neural Networks

Haejung Paik and Caren Marzban, University of Oklahoma
Predicting Television Extreme Viewers and Nonviewers: A Neural Network Analysis

Haejung Paik, University of Oklahoma
Television Viewing and Mathematics Achievement

Doug Warner, University of New Mexico
Modeling of an Air Combat Expert: The Relevance of Context

Abstracts for talks:

Pribram

Perturbation, internally or externally generated, produces an orienting reaction which interrupts ongoing behavior and demarcates an episode. As the orienting reaction habituates, the weightings (values) of polarizations of the junctional microprocess become (re)structured on the basis of protocritic processing. Temporary stability characterizes the new structure, which acts as a reinforcing attractor for the duration of the episode, i.e., until dishabituation (another orienting reaction) occurs. Habituation leads to extinction, and under suitable conditions an extinguished experience can become reactivated, i.e., made relevant. Innovation depends on such reactivation and is enhanced not only by adding randomness to the process, but also by adding structured variety produced by prior experience.

Taylor

After a description of a global approach to the mind, the manner in which various modules in the brain can contribute will be explored, and related to the European Human Brain Project and to developments stemming from non-invasive instruments and single neuron measurements.
A possible neural model of the mind will be presented, with suggestions outlined as to how it could be tested.

Levine

Familiar modeling principles (e.g., Hebbian or associative learning, lateral inhibition, opponent processing, neuromodulation) could recur, in different combinations, in architectures that can learn diverse rules. These rules include, for example: go to the most novel object; alternate between two given objects; touch three given objects, without repeats, in any order. Frontal lobe damage interferes with learning all three of those rules. Hence, network models of rule learning and encoding should include a module analogous to the prefrontal cortex. They should also include modules analogous to the hippocampus, for episode setting, and the amygdala, for emotional evaluation. Through its connections with the parietal association cortex, with secondary cortical areas for individual sensory modalities, and with supplementary motor and premotor cortices, the dorsolateral part of the prefrontal cortex contains representations of movement sequences the animal has performed or thought about performing. Through connections to the amygdala via the orbital prefrontal cortex (which seems to be extensively and diffusely connected to the dorsolateral part), selective enhancement occurs of those motor sequence representations which have led to reward. I propose that the prefrontal cortex also contains "coincidence detectors" which respond to commonalities in any spatial or temporal attributes among all those reinforced representations. Such coincidence detection is a prelude to generating rules and thereby making inferences about classes of possible future movements. This general prefrontal function includes the function of tying together working memory representations that has been ascribed to it by other modelers (Kimberg & Farah, 1993) but goes beyond it. It also encompasses the ability to generate new rules, in coordination with the hippocampus, if current rules prove to be unsatisfactory.

Leven

(To be added)

Long (with Leon Hardy)

We propose a novel neural network architecture which is based on a broader theory of learning and cognitive self-organization. The model is loosely based on the mammalian brain's limbic and cortical neurophysiology and possesses a number of unique and useful properties. This architecture uses a variation and selection algorithm similar to those found in evolutionary programming (EP) and genetic algorithms (GA). In this case, however, selection does not operate on bit strings, or even neuronal weights; instead, variation and selection act on attractors in a dynamical system. Furthermore, hierarchical processing is imposed on a single neuronal layer in a manner that is easily scalable by simply adding additional nodes. This is accomplished using a large, uniform-intensity input signal that sweeps across a neural layer. This "sweeping activation" alternately pushes nodes into their active threshold regime, thus turning them "on". In this way, the active portion of the network settles into an attractor, becoming the preprocessed "input" to the newly recruited nodes. The attractor neural network (ANN) which forms the basis of this system is similar to a Hopfield neural network in that it has the same node update rule and is asynchronous, but differs from a traditional Hopfield network in two ways. First, unlike a fully connected Hopfield network, we use a sparse connection scheme using a random walk or Gaussian distribution. Second, we allow for positive-weighted self connections, which dramatically improves attractor stability when negative or inhibitory weights are allowed. This model is derived from a more general theory of emotion and emotion-based learning in the mammalian brain. The theory postulates that negative and positive emotion are synonymous with variation and selection respectively. The theory further classifies various emotions according to their role in learning, and so makes predictions as to the functions of various brain regions and their interconnections.
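For readers unfamiliar with the Hopfield-style dynamics this abstract builds on, here is a minimal sketch of an asynchronous attractor network with sparse random weights and positive self-connections (Python; the sizes, sparsity, and self-weight value are hypothetical, and this is not the authors' implementation).

import numpy as np

rng = np.random.default_rng(1)
N = 100

# Sparse, symmetric random weights (a stand-in for the abstract's
# random-walk or Gaussian connection scheme), plus positive
# self-connections, which the authors report stabilize attractors
# when inhibitory weights are present.
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.1)
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.5)          # positive self-weights (hypothetical value)

def settle(state, sweeps=20):
    """Asynchronous Hopfield-style updates until a fixed point."""
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(N):          # one node at a time
            new = 1 if W[i] @ state >= 0 else -1
            if new != state[i]:
                state[i], changed = new, True
        if not changed:                       # fixed point: attractor reached
            break
    return state

attractor = settle(rng.choice([-1, 1], size=N))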
Sun

In developing autonomous agents, we usually emphasize only the procedural and situated knowledge, ignoring generic and declarative knowledge that is more explicit and more widely applicable. On the other hand, in developing AI symbolic reasoning models, we usually emphasize only the declarative and context-free knowledge. In order to develop versatile cognitive agents that learn in situated contexts and generalize resulting knowledge to different environments, we explore the possibility of learning both declarative and procedural knowledge in a hybrid connectionist architecture. The architecture is based on the two-level idea proposed earlier by the author. Declarative knowledge is represented conceptually, while procedural knowledge is represented subconceptually. The architecture integrates embodied reactions, rules, learning, and decision-making in a unified framework, and structures different learning components (including Q-learning and rule induction) in a synergistic way to perform on-line and integrated learning.

Moganti

Many vision problems are solved using knowledge-based approaches. Conventional knowledge-based systems use domain experts to generate the initial rules and their membership functions, and then by trial and error refine the rules and membership functions to optimize the final system's performance. However, it would be difficult for human experts to examine all the input-output data in complex vision applications to find and tune the rules and functions within the system. In this presentation, the speaker introduces the application of fuzzy logic in complex computer vision applications. It will be shown that neural networks can be used effectively in the estimation of fuzzy rules, thus making the knowledge acquisition simple, robust, and complete. As an example application, the problem of visual inspection of defects in printed circuit boards (PCBs) will be presented. The speaker will present work carried out by him in which the inspection problem is characterized as a pattern classification problem. The process involves a two-level classification of the printed circuit board image sub-patterns into either a non-defective class or a defective class. The PCB sub-patterns are checked for geometric shape and dimensional verification using fuzzy information extracted from the scan-line grid with an adaptive fuzzy data algorithm that uses differential competitive learning (DCL) in updating winning synaptic vectors. The fuzzy feature vectors drastically reduce the computation required by conventional inspection systems. The presentation concludes with experimental results showing the superiority of the approach. It will be shown that the basic method presented is by no means limited to the PCB inspection application. The model can easily be extended to other vision problems like silicon wafer inspection, automatic target recognition (ATR) systems, etc.
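Differential competitive learning, the update named in this abstract, is often formulated (following Kosko) so that the winning synaptic vector moves toward the input in proportion to the change in its competitive signal, rather than unconditionally as in plain competitive learning. A schematic Python sketch, not the authors' system, with hypothetical sizes:

import numpy as np

rng = np.random.default_rng(2)
N_UNITS, DIM = 8, 4
weights = rng.random((N_UNITS, DIM))       # synaptic vectors
prev_signal = np.zeros(N_UNITS)            # previous competitive signals

def dcl_step(x, lr=0.05):
    """One differential competitive learning step (Kosko-style sketch)."""
    global prev_signal
    dists = np.linalg.norm(weights - x, axis=1)
    win = int(np.argmin(dists))            # competition: closest vector wins
    signal = np.zeros(N_UNITS)
    signal[win] = 1.0
    delta = signal - prev_signal           # change in the competitive signal
    # the winner moves only in proportion to that change, so a unit that
    # keeps winning the same cluster stops drifting
    weights[win] += lr * delta[win] * (x - weights[win])
    prev_signal = signal
    return win

for _ in range(100):
    dcl_step(rng.random(DIM))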
Prakash (with Haluk Ogmen)

A developmental neural network model was proposed (Ogmen, 1992, 1995) that ties higher level cognitive functions to lower level sensorimotor intelligence through stage transitions and the "décalage vertical" (Piaget, 1975). Our neural representation of a sensorimotor reflex comprises sensory, motor, and affective elements. The affective elements establish an internal organization: the primary affective and secondary affective elements dictate the totality and the relationship aspects of the organization, respectively. In order to study sensorimotor intelligence in detail, the network was elaborated for the sucking and rooting reflexes. During the first two sub-stages of the sensorimotor stage, as proposed by Piaget (1952), assimilation predominates over accommodation. We will present simulations of recognitory and functional assimilations in the sucking reflex, and reciprocal assimilation between the sucking and rooting reflexes. We will then consider possible subcortical neural substrates for our sensorimotor model of the rooting reflex in order to bring the model closer to neurophysiology. The subcortical areas believed to be involved in the rooting reflex are the spinal trigeminal nuclei, which receive facial somatosensory afferents, and the cervical motor neurons that control the neck muscles. Neurons in these regions are proposed to correspond to the sensory and motor elements of our model, respectively. The reticular formation, which receives and sends projections to these two regions and which receives inputs from visceral regions, is a good candidate for the loci of the affective elements of our model. In this talk, we will discuss these three areas and their mapping to our model in further detail.

Miikkulainen

A new approach called SANE (Symbiotic, Adaptive Neuro-Evolution) for learning and performing sequential decision tasks is presented. In SANE, a population of neurons is evolved through genetic algorithms to form a neural network for the given task. Compared to problem-general heuristics, SANE forms more effective decision strategies because it learns to utilize domain-specific information. Applications of SANE to controlling the inverted pendulum, performing value ordering in constraint satisfaction search, and focusing minimax search in game playing will be described and compared to traditional methods.
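The symbiotic idea in SANE can be summarized in a few lines: networks are assembled from random subsets of a neuron population, and each neuron's fitness is the average score of the networks it took part in. The Python sketch below shows only this evaluation loop, with a placeholder task; the genetic operators that would follow, and all sizes and weights, are hypothetical rather than taken from the paper.

import numpy as np

rng = np.random.default_rng(3)
POP, SUBSET, TRIALS, IN = 100, 8, 200, 4

# Each individual is one hidden neuron: its input and output weights.
neurons = [{"w_in": rng.normal(size=IN), "w_out": rng.normal()}
           for _ in range(POP)]

def run_task(chosen):
    # Placeholder for the sequential decision task (e.g., pole
    # balancing): build a network from the chosen neurons and
    # return a fitness score.
    x = rng.normal(size=IN)                       # stand-in observation
    output = sum(np.tanh(n["w_in"] @ x) * n["w_out"] for n in chosen)
    return -abs(output)                           # arbitrary stand-in score

fitness_sum, counts = np.zeros(POP), np.zeros(POP)
for _ in range(TRIALS):
    idx = rng.choice(POP, size=SUBSET, replace=False)  # a random subset forms a net
    score = run_task([neurons[i] for i in idx])
    fitness_sum[idx] += score                     # credit flows back to each neuron
    counts[idx] += 1

# A neuron's fitness is its average performance across the networks it
# joined; selection, crossover, and mutation would act on this ranking.
neuron_fitness = fitness_sum / np.maximum(counts, 1)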
We suggest that efficient partial match is a fundamental requirement, and combinatorial pattern match is a facility that is directly related to dealing with partial information, but a brute force approach invariably takes combinatorial time to do this. By combinatorial match we mean the ability to answer a sub-maximally specified query that should succeed if a subset of its attributes match (i.e., specify A attributes and a number N, N <= A, and the query succeeds if any N of A match). This sort of match is fundamental, not only in knowledge-based reasoning, but also in (vision) occlusion analysis. Touretzky and Hinton (1988) were the first to emulate a symbolic, rule-based system in a connectionist architecture. A connectionist approach held the promise of better performance with partial information and being generally less brittle. Whether or not this is the case, Touretzky and Hinton usefully demonstrated that connectionist networks are capable of symbolic reasoning. This paper describes CMM, which maintains a truly distributed knowledge representation, and the use of CMM as an inference engine (Austin, 1994). This paper is concerned with some very useful properties; we believe we have shown a fundamental link between database technology and an artificial neural network technology that has parallels in neurobiology. Abstracts for posters: Miikkulainen A distributed neural network model called SPEC for processing sentences with recursive relative clauses is described. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules. The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. SPEC exhibits plausible memory degradation as the depth of the center embeddings increases, its memory is primed by earlier constituents, and its performance is aided by semantic constraints between the constituents. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system. This way, in contrast to earlier subsymbolic systems, parsing is modeled as a controlled high-level process rather than one based on automatic reflex responses. Paik and Marzban In an attempt to better understand the attributes of the "average" viewer, an analysis of the data characterizing television nonviewers and extreme viewers is performed. The data is taken from the 1988, 1989, and 1990 General Social Surveys (GSS), conducted by the National Opinion Research Center (NORC). Given the assumption-free, model-independent representation that a neural network can offer, we perform such an analysis and discuss the significance of the findings. For comparison, a linear discriminant analysis is also performed, and is shown to be outperformed by the neural network. Furthermore, the set of demographic variables is identified as the strongest predictor of nonviewers, and the combination of family-related and social activity-related variables as the strongest attribute of extreme viewers. Paik This study examines the correlation between mathematics achievement and television viewing, and explores the underlying processes.
The data consists of 13,542 high school seniors from the first wave of the High School and Beyond project, conducted by the National Opinion Research Center on behalf of the National Center for Education Statistics. A neural network is employed for the analysis; unlike the methods employed in prior studies, it makes no a priori assumptions about the underlying model or the distributions of the data, so the correlation it yields is impervious to errors or inaccuracies arising from possibly violated assumptions. A curvilinear relationship is found, independent of viewer characteristics, parental background, parental involvement, and leisure activities, with a maximum at about one hour of viewing, and persistent upon the inclusion of statistical errors. The choice of mathematics performance as the measure of achievement elevates the found curvilinearity to a content-independent status, because of the lack of television programs dealing with high school senior mathematics. It is further shown that the curvilinearity is replaced with an entirely positive correlation across all hours of television viewing for lower-ability students. A host of intervening variables, and their contributions to the process, are examined. It is shown that the process, and especially the component with a positive correlation, involves only cortical stimulations brought about by the formal features of television programs. Warner A modeling approach was used to investigate the theorized connection between expertise and context. Using the domain of air-combat maneuvering, an expert was modeled both with and without respect to context. Neural networks were used for each condition. One network used a simple multi-layer perceptron with inputs for five consecutive time segments from the data as well as a quantitative descriptor for context in this domain. The comparison used a set of networks with identical structure to the first network. The same data were provided to each condition; however, the data were divided by context before being provided to separate networks for the comparison. It was discovered, after training and generalization testing on all networks, that the comparison condition using context-explicit networks performed better for strict definitions of offensive context. This distinction implies the use of context in an air-combat task by the expert human pilot. Simulating problems using a standard model and comparing it against the same model incorporating hypothesized explicit divisions within the data should prove to be a useful tool in psychology. Transportation and Hotel Information Texas A&M is in College Station, TX, about 1.5 to 2 hours NW of Houston and NE of Austin. Bryan/College Station Airport (Easterwood) is only about five minutes from the conference site, and is served by American (American Eagle), Continental, and Delta (ASA). The College Station Hilton (409-693-7500) has a block of rooms reserved for the Creative Concepts Conference (of which MIND is a satellite) at $60 a night, and a shuttle bus to and from the A&M campus. There are also rooms available at the Memorial Student Union on campus (409-845-8909) for about $40 a night. Other nearby hotels include the Comfort Inn (409-846-7333), Hampton Inn (409-846-0184), LaQuinta (409-696-7777), and Ramada-Aggieland (409-693-9891), all of which have complimentary shuttles to campus.
From chiva at biologie.ens.fr Wed May 3 09:54:37 1995 From: chiva at biologie.ens.fr (Emmanuel CHIVA) Date: Wed, 3 May 1995 15:54:37 +0200 Subject: Groupe de BioInformatique WWW Home Page Message-ID: <9505031354.AA12362@apollon.ens.fr> ** ANNOUNCING ** There is now a homepage for the Groupe de BioInformatique, Ecole Normale Superieure, Paris (France) at the following URL: http://www.ens.fr/bioinfo/www which includes: - the description of our research areas (e.g., the animat approach, NNets, GAs, image processing and vision, cell metabolism), a complete bibliography (some articles can be retrieved) and personal pages - the Adaptive Behavior journal homepage - the SAB conference homepage - numerous pointers to additional related servers Please send reactions and comments to chiva at wotan.ens.fr ============================================================================ Ecole Normale Superieure | Emmanuel M. Chiva | Departement de Biologie | | Groupe de BioInformatique | Tel: + 33 1 44323633 | 46, rue d'Ulm | Fax: + 33 1 44323901 | 75230 Paris cedex 05 France | email: chiva at wotan.ens.fr | ============================================================================ From rob at comec4.mh.ua.edu Wed May 3 10:34:15 1995 From: rob at comec4.mh.ua.edu (Robert Elliott Smith) Date: Wed, 03 May 95 08:34:15 -0600 Subject: Papers to be presented at ICGA6 Message-ID: <9505031334.AA16118@comec4.mh.ua.edu> The organizers of the Sixth International Conference on Genetic Algorithms, to be held in Pittsburgh, PA, July 15-19, 1995, are pleased to present the following list of papers that will be presented at the conference. This list is followed by registration information for the conference. =================== ICGA-95: PAPERS ACCEPTED FOR PRESENTATION SELECTION Generalized Convergence Models for Tournament- and (mu,lambda)-Selection Thomas Baeck A Mathematical Analysis of Tournament Selection Tobias Blickle, Lothar Thiele On Decentralizing Selection Algorithms Kenneth De Jong, Jayshree Sarma Finding Multimodal Solutions Using Restricted Tournament Selection Georges Harik Analysis of Genetic Algorithms Evolution under Pure Selection Filippo Neri, Lorenza Saitta MUTATION AND RECOMBINATION A New Class of Crossover Operators for Numerical Optimization Jaroslaw Arabas, Jan J. Mulawka, Jacek Pokrasniewicz On Multi-Dimensional Encoding/Crossover Thang N. Bui, Byung-Ro Moon On the Adaptation of Arbitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adaptation Nikolaus Hansen, Andreas Ostermeier, Andreas Gawelczyk The Nature of Mutation in Genetic Algorithms Robert Hinterding, Harry Gielewski, T. C. Peachey Crossover, Macromutation, and Population-based Search Terry Jones What Have You Done for Me Lately? Adapting Operator Probabilities in a Steady-State Genetic Algorithm Bryant A. Julstrom Metabits: Generic Endogenous Crossover Control Jim Levenick Toward More Powerful Recombinations Byung Ro Moon, Andrew B. Kahng Fuzzy Recombination for the Continuous Breeder Genetic Algorithm H.-M. Voigt, H. Muhlenbein, D. Cvetkovic EVOLUTIONARY COMPUTATION TECHNIQUES The Distributed Genetic Algorithm Revisited Theodore C. Belding Solving Constraint Satisfaction Problems Using a Genetic/Systematic Search Hybrid That Realizes When to Quit James Bowen, Gerry Dozier Enhancing GA Performance Through Incest Prohibitions Based on Ancestry Robert Craighurst, Worthy Martin A Comparison of Parallel and Sequential Niching Methods Samir W.
Mahfoud Selectively Destructive Re-start Jonathan Maresky, Yuval Davidor, Daniel Gitler, Gad Aharoni, Amnon Barak Genetic Algorithms, Numerical Optimization, and Constraints Zbigniew Michalewicz, Sita S. Raghavan A New Diploid Scheme and Dominance Change Mechanism for Non-Stationary Function Optimization Khim Peow Ng, Kok Cheong Wong When Seduction Meets Selection Edmund Ronald Population-Oriented Simulated Annealing: A Genetic/Thermodynamic Hybrid Approach to Optimization James M. Varanelli, James P. Cohoon FORMAL ANALYSIS OF EVOLUTIONARY COMPUTATION AND PROBLEM DIFFICULTY Fitness Distance Correlation as a Measure of Problem Difficulty for Genetic Algorithms Terry Jones, Stephanie Forrest Signal-to-noise, Crosstalk and Long Range Problem Difficulty in Genetic Algorithms Hillol Kargupta Efficient Tracing of the Behaviour of Genetic Algorithms using Expected Values of Bit and Walsh Products J.N. Kok, P. Floreen Optimization Using Replicators Anil Menon, Kishan Mehrotra, Chilukuri K. Mohan, Sanjay Ranka Epistasis in Genetic Algorithms: An Experimental Design Perspective Colin Reeves, Christine Wright Epistasis in Periodic Programs Stefan Voget Hyperplane Ranking in Simple Genetic Algorithms Darrell Whitley, Keith Mathias, Larry Pyeatt Building Better Test Functions D. Whitley, K. Mathias, S. Rana, J. Dzubera GENETIC PROGRAMMING The Evolution of Agents that Build Mental Models and Create Simple Plans Using Genetic Programming David Andre Causality in Genetic Programming Dana H. Ballard, Justinian Rosca Solving Complex Problems with Genetic Algorithms Bertrand Daniel Dunay, Frederic E. Petry Strongly Typed Genetic Programming in Evolving Cooperation Strategies Thomas Haynes, Roger L. Wainwright, Sandip Sen, Dale A. Schoenefeld Temporal Data Processing Using Genetic Programming Hitoshi Iba, Hugo de Garis, Taisuke Sato Two Ways of Discovering the Size and Shape of a Computer Program to Solve a Problem John R. Koza Evolving Data Structures Using Genetic Programming W.B. Langdon Accurate Replication in Genetic Programming Nicholas Freitag McPhee, Justin Darwin Miller Complexity Compression and Evolution Peter Nordin, Wolfgang Banzhaf Evolving Turing-Complete Programs for a Register Machine with Self-modifying Code Peter Nordin, Wolfgang Banzhaf CO-EVOLUTION AND EMERGENT ORGANIZATION Biological Symbiosis as a Metaphor for Computational Hybridization Jason M. Daida, Steven J. Ross, Brian C. Hannan Evolving Globally Synchronized Cellular Automata Rajarshi Das, James P. Crutchfield, Melanie Mitchell, James E. Hanson The Evolution of Emergent Organization in Immune System Gene Libraries Ron Hightower, Stephanie Forrest, Alan S. Perelson Co-evolution of Non-Deterministic Incremental Algorithms as a New Approach for Search in State Spaces Hugues Juille The Symbiotic Evolution of Solutions and their Representations Jan Paredis A Coevolutionary Approach to Learning Sequential Decision Rules Mitchell A. Potter, Kenneth A. De Jong, John J. Grefenstette Methods for Competitive Co-evolution: Finding Opponents Worth Beating Christopher D. Rosin, Richard K. Belew EVOLUTIONARY COMPUTATION IN COMBINATION WITH MACHINE LEARNING OR NEURAL NETS Evolution in Multi-agent Systems: Evolving Communicating Classifier Systems for Gait in a Quadrupedal Robot Lawrence Bull, Terrence C.
Fogarty Adaptive Distributed Routing using Evolutionary Fuzzy Control Brian Carse, Terry Fogarty, Alistair Munro Relational Schemata: A Way to Improve the Expressiveness of Classifiers Philippe Collard, Cathy Escazut The Mating Pool: A Testbed for Experiments in the Evolution of Symbol Systems Lawrence Davis, David Orvosh Genetic Algorithm Enlarges the Capacity of Associative Memory Akira Imada, Keijiro Araki A Genetic Algorithm for Optimizing Fuzzy Decision Trees Cezary Z. Janikow PLEASE: A Prototype Learning System using Genetic Algorithms Leslie Knight, Sandip Sen A Parallel Genetic Algorithm for Concept Learning Filippo Neri, Attilio Giordana Evolutionary Grown Semi-Weighted Neural Networks Steve G. Romaniuk Combining Genetic Algorithms with Memory Based Reasoning John W. Sheppard, Steven L. Salzberg Cellular Encoding Applied to Neurocontrol Darrell Whitley, Frederic Gruau, Larry Pyeatt EVOLUTIONARY COMPUTATION APPLICATIONS I Determining Factorization: A New Encoding Scheme for Spanning Trees Applied to the Probabilistic Minimum Spanning Tree Problem Faris N. Abuali, Roger L. Wainwright, Dale A. Schoenefeld A Hybrid Genetic Algorithm for the Maximum Clique Problem Thang Nguyen Bui, Paul H. Eppley Finding (Near-)Optimal Steiner Trees in Large Graphs Henrik Esbensen Solving Equal Piles with the Grouping Genetic Algorithm Emanuel Falkenauer A Study of Genetic Algorithm Hybrids for Facility Layout Problems Kazuhiro Kado, Dave Corne, Peter Ross An Efficient Genetic Algorithm for Job Shop Scheduling Problems Shigenobu Kobayashi, Isao Ono, Masayuki Yamamura A Comparative Study of Genetic Search Kihong Park Inference of Stochastic Regular Grammars by Massively Parallel Genetic Algorithms Markus Schwehm, Alexander Ost Genetic Algorithm Approach to the Search for Golomb Rulers Stephen W. Soliday, Abdollah Homaifar, Gary L. Lebby An Adaptive Clustering Method using a Geometric Shape for Vehicle Routing Problems with Time Windows Sam R. Thangiah EVOLUTIONARY COMPUTATION APPLICATIONS II Applying Genetic Algorithms to Outlier Detection Kelly D. Crawford, Roger L. Wainwright Design of Statistical Quality Control Procedures Using Genetic Algorithms Aristides T. Hatjimihail, Theophanes T. Hatjimihail A Segregated Genetic Algorithm for Constrained Structural Optimization R. Le Riche, C. Knopf-Lenoir, R.T. Haftka A Preliminary Study of Genetic Data Compression Wee K. Ng A Standard GA Approach to Native Protein Conformation Prediction Arnold L. Patton, W. F. Punch, III, E. D. Goodman Using GAs to Characterize Workloads Chrisila C. Pettey, Thomas D. Wagner, Lawrence W. Dowdy Development of the Genetic Function Approximation Algorithm David Rogers A Parallel Genetic Algorithm for Multi-objective Microprocessor Design Timothy J. Stanley, Trevor Mudge A Hybrid Genetic Algorithm for Highly Constrained Timetabling Problems Rupert Weare, Edmund Burke, Dave Elliman Evolutionary Computation in Air Traffic Control Planning C.H.M. van Kemenade, C.F.W. Hendriks, J.N. Kok Use of the Genetic Algorithm for Load Balancing of Sugar Beet Presses Frank Vavak, Terence C. Fogarty, Philip Cheng ========= Registration Information: 6TH INTERNATIONAL CONFERENCE ON GENETIC ALGORITHMS July 15-19, 1995 University of Pittsburgh Pittsburgh, Pennsylvania, USA CONFERENCE COMMITTEE Stephen F. Smith, Chair Carnegie Mellon University Peter J. Angeline, Finance Loral Federal Systems Larry J. Eshelman, Program Philips Laboratories Terry Fogarty, Tutorials University of the West of England, Bristol Alan C.
Schultz, Workshops Naval Research Laboratory Alice E. Smith, Local Arrangements University of Pittsburgh Robert E. Smith, Publicity University of Alabama The 6th International Conference on Genetic Algorithms (ICGA-95) brings together an international community from academia, government, and industry interested in algorithms suggested by the evolutionary process of natural selection, and will include pre-conference tutorials, invited speakers, and workshops. Topics will include: genetic algorithms and classifier systems, evolution strategies, and other forms of evolutionary computation; machine learning and optimization using these methods, their relations to other learning paradigms (e.g., neural networks and simulated annealing), and mathematical descriptions of their behavior. The conference host for 1995 will be the University of Pittsburgh located in Pittsburgh, Pennsylvania. The conference will begin Saturday afternoon, July 15, for those who plan on attending the tutorials. A reception is planned for Saturday evening. The conference meeting will begin Sunday morning July 16 and end Wednesday afternoon, July 19. The complete conference program and schedule will be sent later to those who register. TUTORIALS ICGA-95 will begin with three parallel sessions of tutorials on Saturday. Conference attendees may attend up to three tutorials (one from each session) for a supplementary fee (see registration form). Tutorial Session I 11:00 a.m.-12:30 p.m. I.A Introduction to Genetic Algorithms Melanie Mitchell - A brief history of Evolutionary Computation. The appeal of evolution. Search spaces and fitness landscapes. Elements of Genetic Algorithms. A Simple GA. GAs versus traditional search methods. Overview of GA applications. Brief case studies of GAs applied to: the Prisoner's Dilemma, Sorting Networks, Neural Networks, and Cellular Automata. How and why do GAs work? I.B Application of Genetic Algorithms Lawrence Davis - There are hundreds of real-world applications of genetic algorithms, and a considerable body of engineering expertise has grown up as a result. This tutorial will describe many of those principles, and present case studies demonstrating their use. I.C Genetics-Based Machine Learning Robert Smith - This tutorial discusses rule-based, neural, and fuzzy techniques that utilize GAs for exploration in the context of reinforcement learning control. A rule-based technique, the learning classifier system (LCS), is shown to be analogous to a neural network. The integration of fuzzy logic into the LCS is also discussed. Research issues related to GA-based learning are overviewed. The application potential for genetics-based machine learning is discussed. Tutorial Session II 1:30-3:00 p.m. II.A Basic Genetic Algorithm Theory Darrell Whitley - Hyperplane Partitions and the Schema Theorem. Binary and Nonbinary Representations; Gray coding, Static hyperplane averages, Dynamic hyperplane averages and Deception, the K-armed bandit analogy and Hyperplane ranking. II.B Basic Genetic Programming John Koza - Genetic Programming is an extension of the genetic algorithm in which populations of computer programs are evolved to solve problems. The tutorial explains how crossover is done on program trees (see the sketch below) and illustrates how the user goes about applying genetic programming to various problems of different types from different fields. Multi-part programs and automatically defined functions are briefly introduced.
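The subtree crossover mentioned in tutorial II.B can be sketched compactly. In the hypothetical Python toy below (an invented representation and invented names, not Koza's system), programs are nested tuples and crossover swaps a randomly chosen subtree of each parent into the other:

    import random

    def subtrees(tree, path=()):
        # Yield (path, subtree) for every node in the tree.
        yield path, tree
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], start=1):
                yield from subtrees(child, path + (i,))

    def replace(tree, path, new):
        # Return a copy of tree with the node at path replaced by new.
        if not path:
            return new
        i = path[0]
        return tree[:i] + (replace(tree[i], path[1:], new),) + tree[i + 1:]

    def crossover(parent_a, parent_b, rng=random):
        # Pick a random subtree in each parent and exchange them.
        path_a, sub_a = rng.choice(list(subtrees(parent_a)))
        path_b, sub_b = rng.choice(list(subtrees(parent_b)))
        return replace(parent_a, path_a, sub_b), replace(parent_b, path_b, sub_a)

    a = ('+', ('*', 'x', 'y'), 'x')
    b = ('-', 'y', ('+', 'x', '1'))
    print(crossover(a, b))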
II.C Evolutionary Programming David Fogel - Evolutionary programming, which originated in the early 1960s, has recently been successfully applied to difficult, diverse real-world problems. This tutorial will provide information on the history, theory, and practice of evolutionary programming. Case studies and comparisons will be presented. Tutorial Session III 3:30-5:00 p.m. III.A Advanced Genetic Algorithm Theory Darrell Whitley - Exact Non-Markov models of simple genetic algorithms. Markov models of simple genetic algorithms. The Schema Theorem and Price's Theorem. Convergence Proofs, Exact Non-Markov models for permutation based representations. III.B Advanced Genetic Programming John Koza - The emphasis is on evolving multi-part programs containing reusable automatically defined functions in order to exploit the regularities of problem environments. ADFs may improve performance, improve parsimony, and provide scalability. Recursive ADFs, iteration-performing branches, various types of memories (including indexed memory and mental models), architecturally diverse populations, and point typing are explained. III.C Evolution Strategies Hans-Paul Schwefel and Thomas Baeck - Evolution Strategies in the context of their historical origin for optimization in Berlin in the 1960s. Comparison of the computer versions (1+1) and (10,100) ES with classical optimum seeking methods for parameter optimization. Formal descriptions of ES. Global convergence conditions. Time efficiency in some simple situations. The role of recombination. Auto-adaptation of internal models of the environment. Multi-criteria optimization. Parallel versions. Short list of application examples. GETTING TO PITTSBURGH The Pittsburgh International Airport is served by most of the major airlines. Information on transportation from the airport and directions to the University of Pittsburgh campus will be sent along with your conference registration confirmation letter. LODGING University Holiday Inn, 100 Lytton Avenue two blocks from convention site $92/day (single) $9/day parking charge pool (indoor), exercise facilities Reserve by June 18. Call 412-682-6200. Hampton Inn, 3315 Hamlet Street 12 blocks from convention site $72/day (single) free parking, breakfast, and one-way airport transportation Reserve by July 1. Call 412-681-1000. Howard Johnson's, 3401 Boulevard of the Allies 12 blocks from convention site $56/day (single) free parking and Oakland transportation, pool (outdoor) Reserve by June 13. Call 412-683-6100. Sutherland Hall (dorm), University Drive-Pitt campus 10 blocks from convention site (steep hill) $30/day (single), no amenities (phone, TV, etc.), shared bathroom Reserve by July 1. Call 412-648-1100. CONFERENCE FEES REGISTRATION FEE Registrations received by June 11 are $250 for participants and $100 for students. Registrations received on or after June 12 and walk-in registrations at the conference will be $295 for participants and $125 for students. Included in the registration fee are entry to all technical sessions, several lunches, coffee breaks, the reception Saturday evening, conference materials, and conference proceedings. TUTORIALS There is a separate fee for the Saturday tutorial sessions. Attendees may register for up to three tutorials (one from each tutorial session). The fee for one tutorial is $40 for participants and $15 for students; two tutorials, $75 for participants and $25 for students; three tutorials, $110 for participants and $35 for students.
The deadline to register without a late fee is June 11. After this date, participants and students will be assessed a flat $20 late fee, whether they register for one, two, or all three tutorials. CONFERENCE BANQUET Not included in the registration fee is the ticket for the banquet. Participants may purchase banquet tickets for an additional $30. Note - Please purchase your banquet tickets now - you will be unable to buy them upon arrival. GUEST TICKETS Guest tickets for the Saturday evening reception are $10 each; guest tickets for the conference banquet are $30 each for adults and $10 each for children. Note - Please purchase additional tickets now - you will be unable to buy them upon arrival. CANCELLATION/REFUND POLICY For cancellations received up to and including June 1, a full refund will be given minus a $25 handling fee. FINANCIAL ASSISTANCE FOR STUDENTS With support from the Naval Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, a limited fund has been set aside to assist students with travel expenses. Students should have their advisor certify their student status and that sufficient funds are not available. Students interested in obtaining such assistance should send a letter before May 22 describing their situation and needs to: Peter J. Angeline, c/o Advanced Technologies Dept, Loral Federal Systems, State Route 17C, Mail Drop 0210, Owego, NY 13827-3994 USA. TO REGISTER Early registration is recommended. You may register by mail, fax, or email using a credit card (MasterCard or VISA). You may also pay by check if registering by mail. Note: Students must also send with their registration a photocopy of their valid university student ID or a letter from a professor. Complete the registration form and return with payment. If more than one registrant from the same institution will be attending, make additional copies of the registration form. Mail ICGA 95 Department of Industrial Engineering University of Pittsburgh 1048 Benedum Hall Pittsburgh, PA 15261 USA Fax Fax the registration form to 412-624-9831 Email Receive email form by contacting: icga at engrng.pitt.edu Up-to-date conference information is available on the World Wide Web (WWW) http://www.aic.nrl.navy.mil/galist/icga95/ CALL FOR ICGA '95 WORKSHOP PROPOSALS ICGA workshop proposals are now being solicited. Workshops tend to range from informal sessions to more formal sessions with presentations and working notes. Each accepted workshop will be supplied with space and an overhead projector. VCRs might be available. If you are interested in organizing a workshop, send a workshop title, short description, proposed format, and the names of the organizers to the workshop coordinator by April 15, 1995. Alan C.
Schultz - schultz at aic.nrl.navy.mil Code 5510, Navy Center for Artificial Intelligence Naval Research Laboratory Washington DC 20375-5337 USA REGISTRATION FORM Prof / Dr / Mr / Ms / Mrs Name ______________________________________________________ Last First MI I would like my name tag to read _____________________________________________ Affiliation/Business ______________________________________________________ Address ______________________________________________________ City ______________________________________________________ State ___________________ Zip ________________________ Country_____________________________________________ Telephone (include area code) Business _______________________________ Home______________________________ FEES (all figures in US dollars) Conference Registration Fee By June 11 ___ participant, $250 ___ student, $100 =$_________ On or after June 12 ___ participant, $295 ___ student, $125 =$_________ July 15 Tutorials Select up to three tutorials, but no more than one tutorial per tutorial session. Tutorial Session I: ___I.A Introduction to Genetic Algorithms ___I.B Application of Genetic Algorithms ___I.C Genetics-Based Machine Learning Tutorial Session II: ___II.A Basic Genetic Algorithm Theory ___II.B Basic Genetic Programming ___II.C Evolutionary Programming Tutorial Session III: ___III.A Advanced Genetic Algorithm Theory ___III.B Advanced Genetic Programming ___III.C Evolution Strategies Tutorial Registration Fee By June 11 ___one tutorial: participant, $40 student, $15 ___two tutorials: participant, $75 student, $25 = $_________ ___three tutorials: participant, $110 student, $35 On or after June 12, participants and students add a $20 late fee for tutorials = $_________ Banquet Ticket (not included in the Registration Fee; no tickets may be purchased upon arrival) participants/adult guest #______ ticket(s) @ $30 = $_________ child #______ ticket(s) @ $10 = $_________ Additional Saturday reception tickets (no tickets may be purchased upon arrival) guest #______ ticket(s) @ $10 = $_________ TOTAL (US dollars) $____________ METHOD OF PAYMENT ___ Check (payable to the University of Pittsburgh, US banks only) ___ MasterCard ___ VISA #__________________________________________ Expiration Date ____________________ Signature of card holder ______________________________________________ Note: Students must submit with their registration a photocopy of their valid student ID or a letter from a professor. Mail ICGA 95, Department of Industrial Engineering, University of Pittsburgh, 1048 Benedum Hall, Pittsburgh, PA 15261 USA Fax 412-624-9831 Email To receive email form: icga at engrng.pitt.edu World Wide Web (WWW) For up-to-date conference information: http://www.aic.nrl.navy.mil/galist/icga95/ ------------------------------------------- Robert Elliott Smith Department of Engineering Science and Mechanics Room 210 Hardaway Hall The University of Alabama Box 870278 Tuscaloosa, Alabama 35487 <> rob at comec4.mh.ua.edu <> (205) 348-1618 <> (205) 348-7240 <> http://hamton.eng.ua.edu/college/home/mh/faculty/rsmith/Web/smith.html ------------------------------------------- From sbh at eng.cam.ac.uk Thu May 4 14:49:41 1995 From: sbh at eng.cam.ac.uk (S.B. Holden) Date: Thu, 04 May 1995 14:49:41 BST Subject: New technical report Message-ID: <199505041349.19030@club.eng.cam.ac.uk> The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department.
Average-Case Learning Curves for Radial Basis Function Networks Sean B. Holden and Mahesan Niranjan Technical Report CUED/F-INFENG/TR.212 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract The application of statistical physics to the study of the learning curves of feedforward connectionist networks has, to date, been concerned mostly with networks that do not include hidden layers. Recent work has extended the theory to networks such as committee machines and parity machines; however, these are not networks that are often used in practice, and an important direction for current and future research is the extension of the theory to practical connectionist networks. In this paper we investigate the learning curves of a class of networks that has been widely and successfully applied to practical problems: the Gaussian radial basis function networks (RBFNs). We address the problem of learning linear and nonlinear, realizable and unrealizable, target rules from noise-free training examples using a stochastic training algorithm. Expressions for the generalization error, defined as the expected error for a network with a given set of parameters, are derived for general Gaussian RBFNs, for which all parameters, including centres and spread parameters, are adaptable. Specializing to the case of RBFNs with fixed basis functions we then study the learning curves for these networks in the limit of high temperature. ************************ How to obtain a copy ************************ a) Via FTP: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get holden_tr212.ps.Z ftp> quit unix> uncompress holden_tr212.ps.Z unix> lpr holden_tr212.ps (or however you print PostScript) b) Via postal mail: Request a hardcopy from Dr. Sean B. Holden Department of Computer Science University College London Gower Street London WC1E 6BT U.K. or email me: s.holden at cs.ucl.ac.uk From goldfarb at unb.ca Thu May 4 23:29:22 1995 From: goldfarb at unb.ca (Lev Goldfarb CS) Date: Fri, 5 May 1995 00:29:22 -0300 (ADT) Subject: New list INDUCTIVE Message-ID: ****************************ANNOUNCEMENT ******************************** Announcing a new electronic mailing list called INDUCTIVE (Inductive Learning Group) ****************************ANNOUNCEMENT ******************************** This mailing list is initiated to provide a separate forum for discussing various scientific issues related to INDUCTIVE (LEARNING) PROCESSES. We strongly feel that these processes are of central importance to cognitive science in general and artificial intelligence (AI) in particular, and that so far they have not been given the attention and effort they deserve. Moreover, we feel that the success of the entire enterprise (of cognitive science) depends on the success of the effort to model the inductive learning processes, understood sufficiently broadly. We also believe that the current (and the previous) subdivisions of cognitive psychology and AI impede (and have impeded) the progress of both enterprises, since there are serious reasons to believe that all cognitive processes are built on top of the inductive learning processes. We cordially invite various researchers from the above two disciplines (including those working in Pattern Recognition and Neural Networks) to join this supervised mailing list.
As a first question we propose to discuss the very definition of the inductive learning process: Inductive learning is a process by means of which, given a finite positive training set C+ from a possibly infinite class (or category) C and a finite set C- from the complement of C, an agent is able to reach a state (of inductive generalization) which allows it to form an idea about, and REPRESENTATION of, the class C. This state, in turn, enables the agent to recognize a new object as belonging to class C or not. ****************************************************************************** The subscription to this list is free. This list will be moderated and we reserve the right to terminate the membership of those members who abuse the list. You may subscribe to the list by simply sending the following text to the address INDUCTIVE-SERVER at UNB.CA SUBSCRIBE INDUCTIVE FIRSTNAME LASTNAME ****************************************************************************** Lev Goldfarb Tel: 506-453-4566 Faculty of Computer Science Fax: 506-453-3566 University of New Brunswick E-mail: goldfarb at unb.ca Fredericton, N.B., E3B 5A3 Canada From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Fri May 5 00:34:26 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Fri, 05 May 95 00:34:26 -0400 Subject: computational neuroscience course syllabus Message-ID: <449.799648466@DST.BOLTZ.CS.CMU.EDU> This term I introduced a new course at CMU called Computational Models of Neural Systems. The course looked at neurobiological structures where the anatomy and physiology are sufficiently well known that we can form explicit theories about how they represent and process information, and test those theories with computer simulations. Examples include the hippocampus, piriform cortex, parietal cortex, cerebellum, and early stages of the visual system. The course syllabus is available online at: http://www.cs.cmu.edu/~dst/pubs/cmns-syllabus.ps.gz or via anonymous FTP: FTP-host: ftp.cs.cmu.edu FTP-path: /afs/cs/usr/dst/www/pubs/cmns-syllabus.ps.gz The URL for the course Web page is: http://www.cs.cmu.edu/afs/cs/academic/class/15880b-s95/Web/home.html I would be pleased to receive comments on the syllabus, suggestions for additional or alternate readings, and pointers to syllabi that other people have developed for similar courses. Thanks to the following people for help with suggesting and/or supplying readings: Jim Bower, Jay Buckingham, Mike Hasselmo, Yi-Jen Lin, Randy O'Reilly, David Redish, Lisa Saksida, David Willshaw, and Rich Zemel. -- Dave Touretzky From berg at cs.albany.edu Sat May 6 14:46:02 1995 From: berg at cs.albany.edu (George Berg) Date: Sat, 6 May 1995 14:46:02 -0400 (EDT) Subject: 3rd Albany Conference on Molecular Biology Message-ID: <199505061846.OAA23320@atlas.cs.albany.edu> From INFOMED at ccvax.unicamp.br Sun May 7 23:55:52 1995 From: INFOMED at ccvax.unicamp.br (INFOMED@ccvax.unicamp.br) Date: Sun, 7 May 1995 23:55:52 BSC (-0300 C) Subject: New Book on Medical Reasoning (Lots of ANN papers) Message-ID: <01HQ8NLW6XRU936I47@ccvax.unicamp.br> NEW BOOK ON MEDICAL DECISION MAKING ----------------------------------- Advances in Fuzzy Systems - Applications and Theory - Vol.
3 COMPARATIVE APPROACHES TO MEDICAL REASONING edited by M E Cohen (California State Univ., Fresno & Univ. California, San Francisco) & D L Hudson (Univ. California, San Francisco) This book focuses on approaches to computer-assisted medical decision-making. A unique feature of the book is that a specific problem in medical decision-making has been selected from the literature, with each contributed chapter presenting a different approach to the solution of the same problem. Theoretical foundations for each approach are provided, followed by practical application. Techniques include knowledge-based reasoning, neural network models, hybrid systems, reasoning with uncertainty, and fuzzy logic, among others. The goal is to supply the reader with a variety of theoretical techniques whose practical implementation can be clearly understood through the example. Using a single, concrete example to illustrate different theoretical approaches allows various techniques to be easily contrasted and permits the reader to determine which aspects are pertinent to specific types of applications. Although the methods are illustrated in a medical problem, they have wide applicability in numerous areas of decision-making. Contents: Knowledge-Based Systems: CLINAID: Medical Knowledge-Based System Based on Fuzzy Relational Structures (L J Kohout et al.); Diagnostic Aspects of Diastolic Dysfunction: Representation in D-Log Language (A Muscari); Neural Network Models: Improved Noninvasive Diagnosis of Coronary Artery Disease Using Neural Networks (M Akay et al.); Fuzzy Neural Network-Based Adaptive Reasoning with Experiential Knowledge (H Ding & M M Gupta); Estimation of Long-Term Mortality of Myocardial Infarction Using a Neural Network Based on the Alopex Algorithm (W J Kostis et al.); Neural Network-Based Approach to Outcome Prognosis for Patients with Diastolic Dysfunction (R M E Sabbatini et al.); Statistical Approaches: Implementation of Statistical Pattern Recognition for Congestive Heart Failure (E A Patrick); The Application of Bayesian Inference with Fuzzy Evidences in Medical Reasoning (C Römer & A Kandel); Modeling Techniques: 24 Hour Analysis of Heart Rate Fluctuations Before and After Carotid Surgery Using Wavelet Transform (M Akay et al.); Reasoning for Introducing a New Parameter for Assessment of Myocardial Status - The Specific Potential of Myocardium (L Bachárová); Hybrid Systems: Correct Diagnosis of Chest Pain by an Integrated Expert System (D Assanelli et al.); Phonocardiogram Analysis of Congenital and Acquired Heart Diseases Using Artificial Neural Networks (D Barschdorff et al.); Hybrid System for Diagnosis and Treatment of Heart Disease (D L Hudson et al.). Readership: Computer scientists and medical information scientists. 330pp Pub. date: Summer 1995 ISBN no.: 981-02-2162-2 US$86 To order or request for more information: Send an email to our marketing department, wspmkt at singnet.com.sg World Scientific Publishing Co. Pte. Ltd. Block 1022 Hougang Ave 1 #05-3520 Tai Seng Industrial Estate Singapore 1953 Republic of Singapore Tel: 65-3825663, Fax: 65-3825919 Internet e-mail: wsped at singnet.com.sg (Editorial dept, Singapore office) worldscp at singnet.com.sg (Singapore office) wspub at tigger.jvnc.net (US office) wspc at wspc.demon.co.uk (UK office) * Now on the World-Wide Web!
* * Our Home Page URL http://www.wspc.co.uk/wspc/index.html * * .sty files for our journals can be obtained by anonymous FTP to ftp.singnet.com.sg at the directory /groups/world_scientific * From Gerhard.Paass at gmd.de Mon May 8 11:23:36 1995 From: Gerhard.Paass at gmd.de (Gerhard Paass) Date: Mon, 8 May 1995 17:23:36 +0200 Subject: CFP: Autumn School in Connectionism and Neural Networks, Muenster Message-ID: <199505081523.AA02627@sein.gmd.de> CALL FOR PARTICIPATION ================================================================= = = = H e K o N N 9 5 = = = Autumn School in C o n n e c t i o n i s m and N e u r a l N e t w o r k s October 2-6, 1995 Muenster, Germany Conference Language: German ---------------------------------------------------------------- A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following addresses: WWW: http://borneo.gmd.de/~hekonn anonymous FTP: ftp.gmd.de directory: Learning/neural/hekonn95 = = = O V E R V I E W = = = Artificial neural networks (ANN's) have in recent years been discussed in many diverse areas, ranging from the modelling of learning in the cortex to the control of industrial processes. The goal of the Autumn School in Connectionism and Neural Networks is to give a comprehensive introduction to connectionism and artificial neural networks and to give an overview of the current state of the art. Courses will be offered in five thematic tracks. (The conference language is German.) The FOUNDATION track will introduce basic concepts (A. Zell, Univ. Stuttgart), as well as present lectures on information processing in biological neural systems (G. Palm, Univ. Ulm), on the relationship between ANN's and fuzzy logic (R. Kruse, Univ. Braunschweig), and on genetic algorithms (S. Vogel, Univ. Cologne). The THEORY track is devoted to the properties of ANN's as abstract learning algorithms. Courses are offered on approximation properties of ANN's (K. Hornik, Univ. Vienna), the algorithmic complexity of learning procedures (M. Schmitt, TU Graz), prediction uncertainty and model selection (G. Paass, GMD St. Augustin), and "neural" solutions of optimization problems (J. Buhmann, Univ. Bonn). This year, special emphasis will be put on APPLICATIONS of ANN's to real-world problems. This track covers courses on vision (H. Bischof, TU Vienna), character recognition (J. Schuermann, Daimler Benz Ulm), speech recognition (R. Rojas, FU Berlin), industrial applications (B. Schuermann, Siemens Munich), robotics (K. Moeller, Univ. Bonn), and hardware for ANN's (U. Rueckert, TU Hamburg-Harburg). In the track on SYMBOLIC CONNECTIONISM, there will be courses on: knowledge processing with ANN's (F. Kurfess, New Jersey IT), hybrid systems in natural language processing (S. Wermter, Univ. Hamburg), connectionist aspects of natural language processing (U. Schade, Univ. Bielefeld), and procedures for extracting rules from ANN's (J. Diederich, QUT Brisbane). In the section on COGNITIVE MODELLING, we have courses on representation and cognitive models (G. Dorffner, Univ. Vienna), aspects of cognitive psychology (R. Mangold-Allwinn, Univ. Saarbruecken), self-organizing ANN's in the visual system (C. v.d. Malsburg, Univ. Bochum), and information processing in the visual cortex (J.L. v. Hemmen, TU Munich). In addition, there will be courses on PROGRAMMING and SIMULATORS. Participants will have the opportunity to work with the SESAME system (J. Kindermann, GMD St. Augustin) and the SNNS simulator (A. Zell, Univ. Stuttgart).
From robtag at dia.unisa.it Mon May 8 07:46:32 1995 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Mon, 8 May 1995 13:46:32 +0200 Subject: WIRN 95 Message-ID: <9505081146.AA07078@udsab.dia.unisa.it> WIRN VIETRI '95 VII ITALIAN WORKSHOP ON NEURAL NETS IIASS "E. R. Caianiello", Vietri s/m (SA), Italy May 18-20, 1995 Thursday 18 May Mathematical Models 9.30 Analog computations on networks of spiking neurons Maass W. 9.50 A general learning framework for the RAAM family Sperduti A., Starita A. 10.10 On-line learning from clustered input examples Biehl M., Riegler P., Solla S.A., Marangi C. 10.30 A study of the unsupervised learning algorithms Tirozzi B., Feng J. 10.50 Conceptual spaces and attentive mechanisms for scene analysis Chella A., Frixione M., Gaglio S. 11.10 Neural approximations for nonlinear finite-memory state estimators Alessandri A., Parisini T., Zoppoli R. 11.30 COFFEE BREAK 12.00 Using neural networks to guide term simplification: some results for the group theory Paccanaro A. 12.20 Modelling the Wiener cascade using time delayed and recurrent neural networks Wagner M.G., Thompson I.M., Manchanda S., Hearne P.G., Green G.G.R. 12.40 The anti-Hebbian synapse in a nonlinear neural network Palmieri F. 13.00 Model of color perception Damianovic' Z. 13.20 LUNCH 15.30 Pasero E. (Review talk) Architectures and Algorithms 16.30 Multiple topology representing networks Sanguineti V., Spada G., Chiaverini S., Morasso P. 16.50 A digital MLP architecture for real-time hierarchical classification Caviglia D.D., Marchesi M., Valle M., Baiardo V., Baratta D. 17.10 Hardware implementation of neural systems for visual target tracking Colla A.M., Trogu L., Zunino R. 17.30 COFFEE BREAK 18.00 Using neural networks to reduce schedules in time-critical communication systems Cavalieri S., Mirabella O. 18.20 Training feedforward neural networks in the presence of prior information Burrascano P., Pirollo D. 18.40 A statistical-neural algorithm based on neural-gas network for dynamic localisation of robots Giuffrida F., Vercelli G., Morasso P. 19.00 A neural network for soft-decision decoding of Reed-Solomon codes Ortn Ortuno I. Friday 19 May 9.30 Oja E. (Invited talk) Pattern Recognition 10.30 An interactive neural network based approach to the segmentation of multimodal medical images Firenze F., Schenone A., Acquarone F., Morasso P. 10.50 Image compression method based on backpropagation neural network and discrete orthogonal transforms Oravec M. 11.10 On choosing the parameters in the dynamic link network Feng F., Tirozzi B. 11.30 COFFEE BREAK 12.00 A neural network for spectral analysis of stratigraphic records Brescia M., D'Argenio B.,Ferreri V., Longo G., Pelosi N., Rampone S., Tagliaferri R. 12.20 Orlandi G. (Review talk) 13.20 Lunch 15.00 Poster Session 17.00 E. R. Caianiello Fellowship Award 17.10 ANNUAL SIREN MEETING 20.00 Conference Dinner Saturday 20 May 9.30 Jordan M. (Invited talk) Pattern Recognition 10.30 A modular neural architecture related to computational neural mechanism for the solution of a pattern recognition problem Morabito F.C., Campolo M. 10.50 Atmospheric pressure wave forecasting through fuzzy systems Masulli F., Casalino F., Caviglia R., Papa L. 11.10 Neural fuzzy image segmentation by a hierarchical approach Petrosino A., Marsella M. 11.30 COFFEE BREAK Applications 12.00 A fuzzy neural network for the detection of anomalies Marsella M., Meneganti M., Tagliaferri R. 12.20 A fuzzy neural network for the on-line detection of B.O.D. 
Mappa G., Salvi G., Tagliaferri R. 12.40 An efficient multilayer perceptron for handwritten character recognition Gioiello M., Tarantino A., Sorbello F., Vassallo G. 13.00 Neural-based forecasting of physical measures in a power plant Bruzzo S., Camastra F., Colla A.M. Poster Session A global cost-function for multilayer networks Zecchina R. Adaptive representation properties of the circular back-propagation model Ridella S., Rovetta S., Zunino R. Off-line supervised learning from clustered input examples Marangi C., Solla S.A., Biehl M., Riegler P. Images clustering through neural networks Borghese N.A. A wavelet application to the analysis of stratigraphic records D'Argenio B., D'Urzo C., Longo G., Pelosi N., Rampone S., Tagliaferri R. Can learning process in neural networks be considered as a phase transition? Pessa E., Pietronilla Penna M. Self-explanation in a learning McCulloch and Pitts net Lauria F.E., Sette M., Visco S. A fast and robust BCS application to the stereo vision Ardizzone E., Molinelli D., Pirrone R. Analog CMOS pseudo-random generator for the VLSI implementation of the Boltzmann machine Belhaire E., Caviglia D.D., Garda P., Morgavi G., Valle M. Polynomial time approximation of min-energy in Hopfield networks Bertoni A., Campadelli P., Posenato R. A hybrid symbolic subsymbolic system for distributed parameter systems Apolloni B., Piccolboni A., Sozzio E. Image reconstruction using improved ``Neural-gas'' Fontana M., Borghese N.A., Ferrari S. WIRN VIETRI '95 VII ITALIAN WORKSHOP ON NEURAL NETS POSTCONFERENCE SHORT COURSE IIASS "E. R. Caianiello", Vietri s/m (SA), Italy May 22-23, 1995 Professor E. Oja : The self-organizing map in data classification and clustering Professor M. Jordan : 1) Learning rules based on the E.M. algorithm 2) Learning rules based on statistical mechanics Monday 22 May 15.00-16.50 E. Oja 17.10-19.00 M. Jordan Tuesday 23 May 15.00-16.50 M. Jordan 17.10-19.00 E. Oja From sutton at gte.com Mon May 8 13:59:18 1995 From: sutton at gte.com (Rich Sutton) Date: Mon, 8 May 1995 12:59:18 -0500 Subject: postscript preprints (Reinforcement Learning) Message-ID: This is to announce the availability of two new postscript preprints: REINFORCEMENT LEARNING WITH REPLACING ELIGIBILITY TRACES Satinder P. Singh Richard S. Sutton to appear in Machine Learning ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/singh-sutton-96.ps.gz ABSTRACT: The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the {\it replacing} trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased.
In addition, we show that the method corresponding to replacing traces is closely related to the maximum likelihood solution for these tasks, and that its mean squared error is always lower in the long run. Computational results confirm these analyses and show that they are applicable more generally. In particular, we show that replacing traces significantly improve performance and reduce parameter sensitivity on the ``Mountain-Car'' task, a full reinforcement-learning problem with a continuous state space, when using a feature-based function approximator. TD MODELS: MODELING THE WORLD AT A MIXTURE OF TIME SCALES Richard S. Sutton to appear in Proc. ML95 ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-95.ps.Z ABSTRACT: Temporal-difference (TD) learning can be used not just to predict {\it rewards}, as is commonly done in reinforcement learning, but also to predict {\it states}, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the {\it prediction} problem---that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton \& Pinette (1985). The following previously published papers related to reinforcement learning are also available online for the first time: Sutton, R.S., Barto, A.G. (1990) "Time-Derivative Models of Pavlovian Reinforcement," in Learning and Computational Neuroscience: Foundations of Adaptive Networks, M. Gabriel and J. Moore, Eds., pp. 497--537. MIT Press. (Main paper for the TD model of classical conditioning) ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-barto-90.ps.Z Barto, A.G., Sutton, R.S., Watkins, C.J.C.H. (1990) "Learning and Sequential Decision Making". In Learning and Computational Neuroscience, M. Gabriel and J.W. Moore, Eds., pp. 539-602, MIT Press. (a good intro to RL) ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/barto-sutton-watkins-90.ps.Z Sutton, R.S. (1992b) "Gain Adaptation Beats Least Squares?", Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems, pp. 161-166, Yale University, New Haven, CT. (Step-size adaptation from an engineering perspective, 2 new algorithms) ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-92b.ps.Z For abstracts, see the file ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/CATALOG. If you have trouble obtaining these files, an alternate route is via the mirror at ftp://ftp.gte.com/pub/reinforcement-learning/.
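For readers who want to experiment with the two trace types contrasted in the first abstract, a minimal tabular TD(lambda) sketch follows. It is a hypothetical illustration: the function and episode format are invented here, and it uses online per-step updates rather than the offline TD(1) analyzed in the paper.

    import numpy as np

    def td_lambda(episodes, n_states, alpha=0.1, gamma=1.0, lam=0.9,
                  trace='replacing'):
        # Tabular TD(lambda) with accumulating or replacing traces.
        # Episodes are lists of (state, reward, next_state) triples
        # from an absorbing chain.  Invented example, not the paper's code.
        V = np.zeros(n_states)
        for episode in episodes:
            e = np.zeros(n_states)                    # eligibility trace
            for s, r, s_next in episode:
                if trace == 'replacing':
                    e[s] = 1.0                        # reset trace on revisit
                else:
                    e[s] += 1.0                       # accumulate on revisit
                delta = r + gamma * V[s_next] - V[s]  # TD error
                V += alpha * delta * e                # credit all traced states
                e *= gamma * lam                      # decay traces
        return V

    # Tiny absorbing chain 0 -> 1 -> 2, reward 1 on entering the end state.
    eps = [[(0, 0.0, 1), (1, 1.0, 2)]] * 50
    print(td_lambda(eps, n_states=3, trace='replacing'))
    print(td_lambda(eps, n_states=3, trace='accumulating'))

On a chain without revisits the two variants agree; the contrast the paper analyzes appears when a state is visited repeatedly within an episode, where the accumulating trace over-credits repeated events.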
From giles at research.nj.nec.com Mon May 8 13:48:30 1995 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 8 May 95 13:48:30 EDT Subject: TR: Fixed Points in Two--Neuron Discrete Time Recurrent Networks: Message-ID: <9505081748.AA20816@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ "Fixed Points in Two--Neuron Discrete Time Recurrent Networks: Stability and Bifurcation Considerations" UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-51 and CS-TR-3461 Peter Tino[1,2], Bill G. Horne[2], C. Lee Giles[2,3] [1] Dept. of Informatics and Computer Systems, Slovak Technical University, Ilkovicova 3, 812 19 Bratislava, Slovakia [2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [3] UMIACS, University of Maryland, College Park, MD 20742 {tino,horne,giles}@research.nj.nec.com The position, number and stability types of fixed points of a two--neuron recurrent network with nonzero weights are investigated. Using simple geometrical arguments in the space of derivatives of the sigmoid transfer function with respect to the weighted sum of neuron inputs, we partition the network state space into several regions corresponding to stability types of the fixed points. If the neurons have the same mutual interaction pattern, i.e. they either mutually inhibit or mutually excite themselves, a lower bound on the rate of convergence of the attractive fixed points towards the saturation values, as the absolute values of weights on the self--loops grow, is given. The role of weights in location of fixed points is explored through an intuitively appealing characterization of neurons according to their inhibition/excitation performance in the network. In particular, each neuron can be of one of the four types: greedy, enthusiastic, altruistic or depressed. Both with and without the external inhibition/excitation sources, we investigate the position and number of fixed points according to character of the neurons. When both neurons self-excite themselves and have the same mutual interaction pattern, the mechanism of creation of a new attractive fixed point is shown to be that of saddle node bifurcation. ------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3461.two.neuron.recurrent.nets.ps.Z -------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 URL http://www.neci.nj.nec.com/homepages/giles.html == From georgiou at wiley.csusb.edu Mon May 8 20:15:04 1995 From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu) Date: Mon, 8 May 1995 17:15:04 -0700 Subject: CFP: First Int'l Conf. on Computational Intelligence and Neurosciences Message-ID: <199505090015.AA25909@wiley.csusb.edu> FIRST INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCES September 28 to October 1, 1995. ``Shell Island'' Hotels of Wrightsville Beach, North Carolina, USA. We are pleased to announce the First International Conference on Computational Intelligence and Neurosciences to be held as part of the upcoming Joint Conference on Information Sciences (JCIS) from September 28 to October 1, 1995 in Wrightsville Beach, NC.
We expect that this symposium will be of interest to neural network researchers, computer scientists, engineers, mathematicians, physicists, neuroscientists and psychologists. Research in neural computing has grown enormously in the past decade, and it is becoming an increasingly specialized field. Using a combination of didactic and workshop settings, we wish to present an overview of the present state of neural computing. Talks will be given by experts in theoretical and experimental neuroscience, and there will be ample opportunity for discussion and collegial exchange. In addition to presenting current work, our aim is to address some of the important open questions in neural computing. We hope to delineate how information science, as an interdisciplinary field, can aid in moving neural computing into the next century.

Invited Speakers include:

James Anderson (Brown University)
Subhash Kak (Louisiana State University)
Haluk Ogmen (University of Houston)
Ed Page (University of South Carolina)
Jeffrey Sutton (Harvard University)
L.E.H. Trainor (University of Toronto)

Co-chairs: Subhash Kak & Jeffrey Sutton

Program Committee: Robert Erickson, George Georgiou, David Hislop, Michael Huerta, Subhash C. Kak, Stephen Koslow, Sridhar Narayan, Slater E. Newman, Gregory Lockhead, Richard Palmer, David C. Rubin, Nestor Schmajuk, David W. Smith, John Staddon, Jeffrey P. Sutton, Harold Szu, L.E.H. Trainor, Abraham Waksman, Paul Werbos, M. L. Wolbarsht, Max Woodbury

Areas for which papers are sought include:

* Neural Network Architectures
* Artificially Intelligent Neural Networks
* Artificial Life
* Associative Memory
* Computational Intelligence
* Cognitive Science
* Fuzzy Neural Systems
* Relations between Fuzzy Logic and Neural Networks
* Theory of Evolutionary Computation
* Efficiency/Robustness Comparisons with Other Direct Search Algorithms
* Parallel Computer Applications
* Integration of Fuzzy Logic and Evolutionary Computing
* Evolutionary Computation for Neural Networks
* Fuzzy Logic in Evolutionary Algorithms
* Neurocognition
* Neurodynamics
* Optimization
* Feature Extraction & Pattern Recognition
* Learning and Memory
* Implementations (Electronic, Optical, Biochips)
* Intelligent Control

Summary Deadline: July 20, 1995
Decision & Notification: August 5, 1995

Send summaries to:

George M. Georgiou
Computer Science Department
California State University
San Bernardino, CA 92407
georgiou at wiley.csusb.edu

Papers will be accepted based on summaries. A summary shall not exceed 4 pages (1 page minimum) of 10-point font, double-column, single-spaced text, with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by July 24, 1995. A $150 deposit check must be included to guarantee the publication of your 4-page summary in the Proceedings. The $150 can be deducted from the registration fee later. The final version of the full-length paper must be submitted by October 1, 1995. Four (4) copies of the full-length paper shall be prepared according to the ``Information for Authors'' appearing at the back cover of Information Sciences, an International Journal (Elsevier Publishing Co.). A full paper shall not exceed 20 pages including figures and tables. All full-length papers will be reviewed by experts in their respective fields. Revised papers will be due on April 15, 1996.
Accepted papers will appear in the hard-cover proceedings (book), to be published either by a book publisher or in the Information Sciences Journal (the INS journal now has three publications: Informatics and Computer Sciences, Intelligent Systems, Applications). All fully registered conference attendees will receive a copy of the proceedings (summaries) on September 28, 1995, and a free one-year subscription (paid by this conference) to Information Sciences Journal - Applications. Lastly, attendees have the right to purchase any or all of Vol. I, Vol. II and Vol. III of the Advances in FT & T hard-cover, deluxe, professional books at 1/2 price. The title of the books is ``Advances in Fuzzy Theory & Technology, Volume I, II or III''.
---------------------------------------------------------------------------
********************************************
*  JCIS'95 REGISTRATION FEES & INFORMATION  *
********************************************

                                    Up to 7/15/95   After 7/15/95
Full Registration                   $275.00         $395.00
Student Registration                $100.00         $160.00
Tutorial (per Mini-Course)          $120.00         $160.00
Exhibit Booth Fee                   $300.00         $400.00
One Day Fee (no pre-reg. discount)  $195.00         $ 85.00 (Student)

FULL CONFERENCE REGISTRATION: Includes admission to all sessions and the exhibit area; coffee, tea and soda; a copy of the conference proceedings (summaries) at the conference; and a one-year subscription to Information Sciences - Applications, An International Journal, published by Elsevier Publishing Co. In addition, the right to purchase the hard-cover deluxe books at 1/2 price. The Awards Banquet on Sept. 30, 1995 is included with Full Registration. One-day registration does not include the banquet, but the one-year IS Journal - C subscription is included for one-day full registration only. Tutorials are not included.

STUDENT CONFERENCE REGISTRATION: For full-time students only. A letter from your department is required, and you must present a current student ID with picture. A copy of the conference proceedings (summaries) is included, as are admission to all sessions and the exhibit area, and coffee, tea and soda. Also included is the right to purchase the hard-cover deluxe books at 1/2 price. A free subscription to the IS Journal - Applications, however, is not included.

TUTORIALS REGISTRATION: Any person can register for the Tutorials. A copy of the lecture notes for the course registered is included, as are coffee, tea and soda. The summaries and the free subscription to the IS Journal - Applications are, however, not included. The right to purchase the hard-cover deluxe books is included.
---------------------------------------------------------------------------
***************
*  TUTORIALS  *
***************

Several mini-courses are scheduled for sign-up. Please take note that any one of them may be cancelled or combined with other mini-courses due to lack of attendance. The cost of each mini-course is $120 up to 7/15/95 and $160 after 7/15/95, the same for all mini-courses.

No.  Name of Mini-Course                  Instructor       Time
------------------------------------------------------------------------
A    Languages and Compilers for          J. Ramanujan     6:30 pm - 9 pm
     Distributed Memory Machine                            Sept. 28
-------------------------------------------------------------------------
B    Pattern Recognition Theory           H. D. Cheng      6:30 pm - 9 pm
                                                           Sept. 28
-------------------------------------------------------------------------
C    Fuzzy Set Theory                     George Klir      2:00pm - 4:30pm
                                                           Sept. 29
-------------------------------------------------------------------------
D    Neural Network Theory                Richard Palmer   2:00pm - 4:30pm
                                                           Sept. 29
-------------------------------------------------------------------------
E    Fuzzy Expert Systems                 I. B. Turksen    6:30 pm - 9 pm
                                                           Sept. 29
-------------------------------------------------------------------------
F    Intelligent Control Systems          Chris Tseng      6:30 pm - 9 pm
                                                           Sept. 29
-------------------------------------------------------------------------
G    Neural Network Applications          Subhash Kak      2:00pm - 4:30pm
                                                           Sept. 30
-------------------------------------------------------------------------
H    Pattern Recognition Applications     Edward K. Wong   2:00pm - 4:30pm
                                                           Sept. 30
-------------------------------------------------------------------------
I    Fuzzy Logic & NN Integration         Marcus Thint     9:30am - 12:00
                                                           Oct. 1
-------------------------------------------------------------------------
J    Rough Set Theory                     Tsau Young Lin   9:30am - 12:00
                                                           Oct. 1
-------------------------------------------------------------------------

VARIOUS CONFERENCE CONTACTS:

Tutorial:
  Paul P. Wang, ppw at ee.duke.edu, Tel. (919)660-5271, 660-5259

Conference Information:
  Jerry C.Y. Tyan, ctyan at ee.duke.edu, Tel. (919)660-5233
  Kitahiro Kaneda, hiro at ee.duke.edu, Tel. (919)660-5233

Coordinates Overall Administration:
  Xiliang Gu, gu at ee.duke.edu, Tel. (919)660-5233, (919)383-5936

Local Arrangement Chair:
  Sridhar Narayan, Dept. of Mathematical Sciences, Wilmington, NC 28403, U.S.A.
  narayan at cms.uncwil.edu, Tel: 910 395 3671 (work), 910 395 5378 (home)
---------------------------------------------------------------------------
***********************
* TRAVEL ARRANGEMENTS *
***********************

The Travel Center of Durham, Inc. has been designated the official travel provider. Special domestic fares have been arranged, and The Travel Center is prepared to book all flight travel.

Domestic United States and Canada: 1-800-334-1085
International FAX: 919-687-0903

**********************
* HOTEL ARRANGEMENTS *
**********************

SHELL ISLAND RESORT HOTELS
2700 N. LUMINA AVE.
WRIGHTSVILLE BEACH, NC 28480
U. S. A.

This is the conference site and lodging. A block of suites (double rooms) has been reserved for JCIS'95 attendees at a discounted rate. All prices listed here are for double occupancy.

$100.00 + 9% Tax (Sun. - Thur.)
$115.00 + 9% Tax (Fri. - Sat.)
$10.00 for each additional person over 2 people per room.

We urge you to make reservations early. Free transportation from and to Wilmington, N.C. Airport is available for ``Shell Island'' Resort Hotel guests. However, you must make a reservation for this free service. Please contact:

Carvie Gillikin, Director of Sales
Voice: 1-800-689-6765 or: 910-256-8696
FAX: 910-256-0154
---------------------------------------------------------------------------

If you wish to automatically receive information through email on JCIS'95 as it also pertains to the other two conferences that are part of JCIS'95 ("Fourth Annual Conference on Fuzzy Theory and Technology" and "Second Annual Conference on Computer Theory and Informatics"), please send email
To: georgiou at wiley.csusb.edu
Subject: JCIS-95
The body of the message is not significant.
---------------------------------------------------------------------------

CONFERENCE REGISTRATION FORM

It is important to choose only one plan: Participation Plan A, Plan B, or Plan C. (Choose Plan C for the First International Conference on Computational Intelligence and Neurosciences.)

[ ] I wish to receive further information.
[ ] I intend to participate in the conference.
[ ] I intend to present my paper to a regular session.
[ ] I intend to register in tutorial(s).
Name: Dr./Mr./Mrs. _________________________________________________
Address: ___________________________________________________________
Country: ___________________________________________________________
Phone:________________ Fax: _______________ E-mail: ________________
Affiliation (for Badge): ____________________________________________

Participation Plan: [ ]A [ ]B [ ]C

                                    Up to 7/15/95   After 7/15/95
Full Registration                   [ ]$275.00      [ ]$395.00
Student Registration                [ ]$100.00      [ ]$160.00
Tutorial (per Mini-Course)          [ ]$120.00      [ ]$160.00
Exhibit Booth Fee                   [ ]$300.00      [ ]$400.00
One Day Fee (no pre-reg. discount)  [ ]$195.00      [ ]$ 85.00 (Student)

Total Enclosed (U.S. Dollars): ________________

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ Please make check payable and mail to: $
$                                        $
$   FT & T                               $
$   c/o. Paul P. Wang                    $
$   Dept. of Electrical Engineering      $
$   Duke University                      $
$   Durham, NC 27708                     $
$   U. S. A.                             $
$                                        $
$ All foreign payments must be made by   $
$ draft on a US Bank in US dollars. No   $
$ credit cards or purchase orders can be $
$ accepted.                              $
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
---------------------------------------------------------------------------

From tirthank at titanic.mpce.mq.edu.au Tue May 9 02:22:38 1995
From: tirthank at titanic.mpce.mq.edu.au (Tirthankar Raychaudhuri)
Date: Tue, 9 May 1995 16:22:38 +1000 (EST)
Subject: Technical Report on Active Learning Available
Message-ID: <9505090622.AA04755@titanic.mpce.mq.edu.au>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 2330 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c080f623/attachment.ksh

From wermter at nats5.informatik.uni-hamburg.de Tue May 9 12:29:31 1995
From: wermter at nats5.informatik.uni-hamburg.de (Stefan Wermter)
Date: Tue, 9 May 1995 12:29:31 --100
Subject: IJCAI95 workshop program: learning for language processing
Message-ID: <9505091029.AA27223@nats13.nats>

IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing

International Joint Conference on Artificial Intelligence (IJCAI-95)
Palais de Congres, Montreal, Canada
August 21, 1995

ORGANIZING COMMITTEE
--------------------
Stefan Wermter, University of Hamburg, Germany (workshop contact person)
Gabriele Scheler, Technical University Munich, Germany
Ellen Riloff, University of Utah, USA

INVITED SPEAKERS
----------------
Eugene Charniak, Brown University, USA
Noel Sharkey, Sheffield University, UK

PROGRAM COMMITTEE
-----------------
Jaime Carbonell, Carnegie Mellon University, USA
Joachim Diederich, Queensland University of Technology, Australia
Georg Dorffner, University of Vienna, Austria
Jerry Feldman, ICSI, Berkeley, USA
Walther von Hahn, University of Hamburg, Germany
Aravind Joshi, University of Pennsylvania, USA
Ellen Riloff, University of Utah, USA
Gabriele Scheler, Technical University Munich, Germany
Stefan Wermter, University of Hamburg, Germany

WORKSHOP DESCRIPTION
--------------------
In the last few years, there has been a great deal of interest and activity in developing new approaches to learning for natural language processing. Various learning methods have been used, including

- connectionist methods/neural networks
- machine learning algorithms
- hybrid symbolic and subsymbolic methods
- statistical techniques
- corpus-based approaches.

In general, learning methods are designed to support automated knowledge acquisition, fault tolerance, plausible induction, and rule inferences.
Using learning methods for natural language processing is especially important because language learning is an enabling technology for many other language processing problems, including noisy speech/language integration, machine translation, and information retrieval. Different methods support language learning to various degrees but, in general, learning is important for building more flexible, scalable, adaptable, and portable natural language systems. This workshop is of particular interest at this time because systems built by learning methods have reached a level where they can be applied to real-world problems in natural language processing and where they can be compared with more traditional encoding methods.

The workshop will provide a forum for discussing various learning approaches for supporting natural language processing. In particular the workshop will focus on questions like:

- How can we apply suitable existing learning methods for language processing?
- What new learning methods are needed for language processing and why?
- What language knowledge should be learned and why?
- What are similarities and differences between different approaches for language learning? (e.g., machine learning algorithms vs neural networks)
- What are strengths and limitations of learning rather than manual encoding?
- How can learning and encoding be combined in symbolic/connectionist systems?
- Which aspects of system architectures and knowledge engineering have to be considered? (e.g., modular, integrated, hybrid systems)
- What are successful applications of learning methods in various fields? (speech/language integration, machine translation, information retrieval)
- How can we evaluate learning methods using real-world language? (text, speech, dialogs, etc.)

WORKSHOP PROGRAM
----------------

8:00am Start of Workshop

8:00am Welcome and Introduction
Stefan Wermter

8:10am - 9:50am Session: Neural network approaches, Hybrid approaches, Genetic approaches
------------------------------------------------------------------------
8:10am - 8:30am On the applicability of neural network and machine learning methodologies to natural language processing
Steve Lawrence, Sandiway Fong, C. Lee Giles

8:30am - 8:50am Knowledge acquisition in concept and document spaces by using self-organizing neural networks
Werner Winiwarter, Erich Schweighofer, Dieter Merkl

8:50am - 9:10am A genetic algorithm for the induction of natural language grammars
Tony C. Smith, Ian H. Witten

9:10am - 9:30am SKOPE: A connectionist/symbolic architecture of spoken Korean processing
Geunbae Lee, J. H. Lee

9:30am - 9:50am Integrating different learning approaches into a multilingual spoken translation system
P. Geutner, B. Suhm, T. Kemmp, A. Lavie, L. Mayfield, A. E. McNair, I. Rogina, T. Schultz, T. Sloboda, W. Ward, M. Woszczyna, A. Waibel

9:50am - 10:20am Invited Talk
************
Connectionist Natural Language Processing: Representation and Learning
Noel Sharkey, Sheffield University, UK

10:20am - 10:40am Break
-----

10:40am - 12:20pm Session: Statistical approaches, Corpus-based approaches
--------------------------------------------------------
10:40am - 11:00am Selective sampling in natural language learning
Ido Dagan, Sean P. Engelson

11:00am - 11:20am Learning restricted probabilistic link grammars
Eva Fong, Dekai Wu

11:20am - 11:40am A statistical approach to learning prepositional phrase attachment disambiguation
Alexander Franz

11:40am - 12:00pm Training stochastical grammars on semantic categories
W.R. Hogenhout, Yuji Matsumoto
12:00pm - 12:20pm Automatic classification of speech acts with semantic classification trees and polygrams
Marion Mast, Elmar Noeth, Heinrich Niemann, Ernst Guenter Schukat Talamazzini

12:20pm - 12:50pm Invited Talk
************
Learning syntactic disambiguation through word statistics and why you should care about it
Eugene Charniak, Brown University, USA

12:50pm - 2:00pm Lunch Break
-----------

2:00pm - 3:40pm Session: Machine learning approaches, Symbolic approaches
--------------------------------------------------------
2:00pm - 2:20pm A comparison of two methods employing inductive logic programming for corpus-based parser construction
John M. Zelle, Raymond J. Mooney

2:20pm - 2:40pm Using inductive logic programming to learn the past tense of English verbs
Mary Elaine Califf, Raymond J. Mooney

2:40pm - 3:00pm A revision learner to acquire verb selection rules from human-made rules and examples
Shigeo Kaneda, Hussein Almuallim, Yasuhiro Akiba, Megumi Ishii, Tsukasa Kawaoka

3:00pm - 3:20pm Using parsed corpora for circumventing parsing
Aravind K. Joshi, B. Srinivas

3:20pm - 3:40pm Acquiring and updating hierarchical knowledge for machine translation based on a clustering technique
Takefumi Yamazaki, Michael J. Pazzani, Christopher Merz

3:40pm - 4:00pm Break
-----

4:00pm - 5:40pm Session: Knowledge acquisition approaches, Information extraction approaches
----------------------------------------------------------------------------
4:00pm - 4:20pm Embedded machine learning systems for natural language processing: a general framework
Claire Cardie

4:20pm - 4:40pm Learning information extraction patterns from examples
Scott B. Huffman

4:40pm - 5:00pm A symbolic and surgical acquisition of terms through variation
Christian Jacquemin

5:00pm - 5:20pm Concept learning from texts - a terminological meta-reasoning perspective
Udo Hahn, Manfred Klenner, Klemens Schnattinger

5:20pm - 5:40pm Applying machine learning to anaphora resolution
Chinatsu Aone, Scott William Bennett

5:40pm - 6:00pm Discussion and open end
-----------------------

Further accepted papers
-----------------------
Advances in analogy-based learning: false friends and exceptional items in pronunciation by paradigm-driven analogy
Stefano Federici, Vito Pirrelli, Francois Yvon

A minimum description length approach to grammar inference
Peter Gruenwald

Implications of an automatic lexical acquisition system
Peter M. Hastings

Confronting an existing machine learning algorithm to the text categorization task
Isabelle Moulinier, Jean-Gabriel Ganascia

Issues in inductive learning of domain-specific text extraction rules
Stephen Soderland, David Fisher, Jonathan Aseltine, Wendy Lehnert

Can punctuation help learning?
Miles Osborne

Cascade 2 networks for grammar recognition
Ross Hayward, Emanuel Pop, Joachim Diederich

--
Dr Stefan Wermter
of Computer Science * * Vogt-Koelln-Strasse 30 * * email: wermter at informatik.uni-hamburg.de D-22527 Hamburg * * phone: +49 40 54715-531 Germany * * fax: +49 40 54715-515 * * http://www.informatik.uni-hamburg.de/Arbeitsbereiche/NATS/staff/wermter.html * ******************************************************************************** From S.W.Ellacott at bton.ac.uk Tue May 9 16:20:37 1995 From: S.W.Ellacott at bton.ac.uk (S.W.Ellacott@bton.ac.uk) Date: Tue, 09 May 1995 20:20:37 GMT Subject: Studentships available Message-ID: <19950509.202037.33@diamond> PhD STUDENTSHIPS IN COMPUTATIONAL MATHEMATICS / NEURAL NETWORKS AT UNIVERSITY OF HUDDERSFIELD Two 3-year research studentships are now available to good graduates in numerate degrees, optionally with relevant MSc degree or experience, to work with Prof John Mason / Dr Iain Anderson starting September/October 1995 : 1) CASE Studentship (UK/EC only) on "Approximation by Neural Networks in Hydraulics" with Hydraulics Research Ltd Wallingford - EPSRC grant plus 2250pds - involving NNs, approximation, parallel algorithms. 2) Postgraduate bursary (about EPSRC basic rate plus UK/EC PhD fees) on "Approximation by Wavelets", with requirement to provide some programming support to MSc course - involving approximation, numerical analysis and applications (such as neural networks). Reply with CV and names of 2 referees as soon as possible to Prof J.C.Mason, School of Computing and Mathematics, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, England - email: j.c.mason at hud.ac.uk, phone: 01484-472680, fax: 01484-421106. -- Steve Ellacott Dept. Math. Sciences, University of Brighton, Moulsecoomb, BN2 4GJ, UK Tel. home: (01273) 885845. Tel. office: (01273) 642544 or 642414 Fax: (01273) 642405 From lksaul at psyche.mit.edu Tue May 9 18:01:25 1995 From: lksaul at psyche.mit.edu (Lawrence Saul) Date: Tue, 9 May 95 18:01:25 EDT Subject: paper available: Mean Field Theory for Sigmoid Belief Networks Message-ID: <9505092201.AA14682@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-file: pub/lksaul/belief.ps.Z The following paper is now available by anonymous ftp. ================================================================== Mean Field Theory for Sigmoid Belief Networks (12 pages) Lawrence K. Saul, Tommi Jaakkola, and Michael I. Jordan Center for Biological and Computational Learning Massachusetts Institute of Technology 79 Amherst Street, E10-243 Cambridge, MA 02139 Abstract: Bayesian networks (a.k.a. belief networks) are stochastic feedforward networks of discrete or real-valued units. In this paper we show how to calculate a rigorous lower bound on the likelihood of observed activities in sigmoid belief networks. We view these networks in the framework of statistical mechanics and derive a mean field theory for the average activities of the units. The advantage of this framework is that the mean field free energy gives a rigorous lower bound on the log-likelihood of any partial instantiation of the network's activity. The feedforward directionality of belief networks gives rise to terms that do not appear in the mean field theory for symmetric networks of binary units. Nevertheless, the mean field equations have a simple closed form and can be solved by iteration to yield a lower bound on the likelihood. Empirical results suggest that this bound may be tight enough to serve as a basis for inference and learning. 
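As a rough illustration of "solving mean field equations by iteration," here is a deliberately naive sketch. It keeps only the parent-to-child terms, so the directionality-induced corrections that are the paper's contribution are omitted, and the three-unit chain, its weights, and the clamping scheme are all made up for the example:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def naive_mean_field(W, b, clamped, n_iter=100):
    # Fixed-point iteration mu_i <- sigmoid(sum_j W[i, j] mu_j + b[i]) for
    # unclamped units; W[i, j] is the weight from unit j to unit i.
    # NOTE: evidence does not propagate upward here -- the full mean field
    # equations of the paper add exactly the terms needed for that.
    mu = np.where(np.isnan(clamped), 0.5, clamped)   # 0.5 where unclamped
    free = np.isnan(clamped)
    for _ in range(n_iter):
        mu[free] = sigmoid(W @ mu + b)[free]
    return mu

# toy chain: unit 0 -> unit 1 -> unit 2, with unit 2 clamped on
W = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
b = np.array([-1.0, -1.0, -1.0])
print(naive_mean_field(W, b, clamped=np.array([np.nan, np.nan, 1.0])))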
==================================================================

From wolff at cache.crc.ricoh.com Tue May 9 19:07:03 1995
From: wolff at cache.crc.ricoh.com (Greg Wolff)
Date: Tue, 9 May 1995 16:07:03 -0700
Subject: JOB Announcement
Message-ID: <9505092307.AA02749@cheetah.crc.ricoh.com>

The Machine Learning and Perception group at Ricoh's California Research Center in Menlo Park, CA is looking for a skilled C programmer to assist in the development and implementation of machine learning and image processing algorithms. The job description and contact information follow. More information about Ricoh CRC may be found at http://www.crc.ricoh.com/

Support Programmer/Research Engineer

Position responsibilities:

* The successful candidate will be responsible for developing code and using existing software to perform simulations (in C/Unix on a SPARCstation) of new pattern recognition/machine learning/image processing algorithms developed in cooperation with others.
* She or he will work on a number of projects concurrently, quickly comprehending the technology and contributing as appropriate. He or she will document and report work, make oral presentations, and assist in technology transfer efforts.

A background in pattern recognition, neural networks, image processing, machine learning, or parallel programming is very desirable, but not essential.

Candidate requirements:

* Strong C programming and Unix skills (experimental, not necessarily production) -- work experience involving more than two years of programming, or an advanced degree
* Strong demonstrated learning ability
* Excellent verbal and written communication skills
* Good organizational ability
* Masters degree (or equivalent experience) in Electrical Engineering, Computer Science or a related field
* Optional: knowledge of or experience with parallel programming, neural networks, pattern recognition....
----------------------------------------------------------------------------

RICOH California Research Center (RCRC): RCRC is a small research center in Menlo Park, CA, near the Stanford University campus and other Silicon Valley landmarks. The roughly 20 researchers focus on pattern recognition, image processing, image and document analysis, visual perception, artificial intelligence, machine learning, electronic service, and hardware for implementing computationally expensive algorithms. The environment is innovative, collegial and exciting. RCRC is a part of RICOH Corporation, the wholly owned subsidiary of RICOH Company, Ltd. in Japan. RICOH is a pioneer in facsimile, copiers, optical equipment, office automation products and more. Ricoh Corporation is an Equal Employment Opportunity Employer.[1]
----------------------------------------------------------------------------

Please send any questions by e-mail to the address below, and type "Programming job" as your header line. Full applications (which must include a resume and the names and addresses of at least two people familiar with your work) should be sent by surface mail to:

Dr. David G. Stork
Chief Scientist
RICOH California Research Center
2882 Sand Hill Road, Suite 115
Menlo Park CA 94025
stork at crc.ricoh.com

[1] See http://www.crc.ricoh.com/openings/mlp.html

From john at dcs.rhbnc.ac.uk Wed May 10 04:00:52 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Wed, 10 May 95 09:00:52 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199505100800.JAA13813@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): several new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-011:
----------------------------------------
Classification by Polynomial Surfaces
by Martin Anthony, London School of Economics and Political Science

Abstract: Linear threshold functions (for real and Boolean inputs) have received much attention, for they are the component parts of many artificial neural networks. Linear threshold functions are exactly those functions such that the positive and negative examples are separated by a hyperplane. One extension of this notion is to allow separators to be surfaces whose equations are polynomials of at most a given degree (linear separation being the degree-$1$ case). We investigate the representational and expressive power of polynomial separators. Restricting to the Boolean domain, by using an upper bound on the number of functions defined on $\{0,1\}^n$ by polynomial separators having at most a given degree, we show, as conjectured by Wang and Williams, that for almost every Boolean function, one needs a polynomial surface of degree at least $\left\lfloor n/2\right\rfloor$ in order to separate the negative examples from the positive examples. Further, we show that, for odd $n$, at most half of all Boolean functions are realizable by a separating surface of degree $\left\lfloor n/2\right\rfloor$. We then compute the Vapnik-Chervonenkis dimension of the class of functions realized by polynomial separating surfaces of at most a given degree, both for the case of Boolean inputs and real inputs. In the case of linear separators, the VC dimensions coincide for these two cases, but for surfaces of higher degree there is a strict divergence. We then use these results on the VC dimension to quantify the sample size required for valid generalization in Valiant's probably approximately correct framework.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-013:
----------------------------------------
Learnability of Kolmogorov-Easy Circuit Expressions Via Queries
by Jos\'e L. Balc\'azar, Universitat Polit\'ecnica de Catalunya
Harry Buhrman, CWI, Amsterdam
Montserrat Hermo, Universidad del Pa\'\i s Vasco

Abstract: Circuit expressions were introduced to provide a natural link between Computational Learning and certain aspects of Structural Complexity. Upper and lower bounds on the learnability of circuit expressions are known. We study here the case in which the circuit expressions are of low (time-bounded) Kolmogorov complexity. We show that these are polynomial-time learnable from membership queries in the presence of an NP oracle. We also exactly characterize the sets that have such circuit expressions, and precisely identify the subclass whose circuit expressions can be learned from membership queries alone. The extension of the results to various Kolmogorov complexity bounds is discussed.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-017:
----------------------------------------
Identification of the Human Arm Kinetics using Dynamic Recurrent Neural Networks
by Jean-Philippe DRAYE, Facult\'{e} Polytechnique de Mons,
Guy CHERON, University of Brussels,
Marc BOURGEOIS, University of Brussels,
Davor PAVISIC, Facult\'{e} Polytechnique de Mons,
Ga\"{e}tan LIBERT, Facult\'{e} Polytechnique de Mons

Abstract: Artificial neural networks offer an exciting alternative for modeling and identifying complex non-linear systems. This paper investigates the identification of discrete-time non-linear systems using dynamic recurrent neural networks. We use this kind of network to efficiently identify the complex temporal relationship between the patterns of muscle activation represented by the electromyography signal (EMG) and their mechanical actions in three-dimensional space. The results show that dynamic neural networks provide a successful platform for biomechanical modeling and simulation, including complex temporal relationships.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-019:
----------------------------------------
An Algebraic Characterization of Tractable Constraints
by Peter Jeavons, Royal Holloway, University of London

Abstract: Many combinatorial search problems may be expressed as `constraint satisfaction problems', and this class of problems is known to be NP-complete. In this paper we investigate what restrictions must be imposed on the allowed constraints in order to ensure tractability. We describe a simple algebraic closure condition, and show that this is both necessary and sufficient to ensure tractability in Boolean valued problems. We also demonstrate that this condition is necessary for problems with arbitrary finite domains.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-020:
----------------------------------------
An incremental neural classifier on a MIMD computer
by Arnulfo Azcarraga, LIFIA - IMAG - INPG, France
H\'el\`ene Paugam-Moisy and Didier Puzenat, LIP - URA 1398 du CNRS, ENS Lyon, France

Abstract: MIMD computers are among the best parallel architectures available. They are easily scalable with numerous processors and have potentially huge computing power. One area of application for such computers is the field of neural networks. This article presents a study, and two parallel implementations, of a specific neural incremental classifier of visual patterns. This neural network is incremental in that network units are created whenever the classifier is not able to correctly recognize a pattern. The dynamic nature of the model renders the parallel algorithms rather complex.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-021:
----------------------------------------
Model Selection for Neural Networks: Comparing MDL and NIC
by Guido te Brake, Utrecht University,
Joost N. Kok, Utrecht University,
Paul M.B. Vit\'anyi, CWI, Amsterdam

Abstract: We compare the MDL and NIC methods for determining the correct size of a feedforward neural network. The NIC method has to be adapted for this kind of network. We include an experiment based on a small standard problem.
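For readers unfamiliar with MDL-style model selection, the sketch below scores candidate network sizes by a two-part description length (data cost plus parameter cost). This is a generic BIC-like approximation on a toy fixed-random-hidden-layer model, assumed here for illustration; it is not necessarily the exact criterion, the networks, or the problem used in the report:

import numpy as np

def mdl_score(residuals, n_params, n):
    # two-part code length (in nats, up to additive constants):
    # cost of encoding the residuals + cost of encoding the parameters
    data_cost = 0.5 * n * np.log(np.mean(residuals ** 2))
    param_cost = 0.5 * n_params * np.log(n)
    return data_cost + param_cost

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(200)

for h in (1, 2, 4, 8, 16, 32):
    # fixed random hidden layer, so fitting the output weights is linear
    A, c = rng.standard_normal((1, h)), rng.standard_normal(h)
    H = np.tanh(x @ A + c)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    res = y - H @ w
    print(h, mdl_score(res, n_params=3 * h, n=len(y)))  # in, bias, out weights

The size with the smallest score wins: larger networks fit better (lower data cost) but pay more to describe their weights.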
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-023:
----------------------------------------
PAC Learning and Artificial Neural Networks
by Martin Anthony and Norman Biggs, London School of Economics and Political Science, University of London

Abstract: In this article, we discuss the `probably approximately correct' (PAC) learning paradigm as it applies to artificial neural networks. The PAC learning model is a probabilistic framework for the study of learning and generalization. It is useful not only for neural classification problems, but also for learning problems more often associated with mainstream artificial intelligence, such as the inference of Boolean functions. In PAC theory, the notion of successful learning is formally defined using probability theory. Very roughly speaking, if a large enough sample of randomly drawn training examples is presented, then it should be likely that, after learning, the neural network will classify most other randomly drawn examples correctly. The PAC model formalises the terms `likely' and `most'. Furthermore, the learning algorithm must be expected to act quickly, since otherwise it may be of little use in practice. There are thus two main emphases in PAC learning theory. First, there is the issue of how many training examples should be presented. Secondly, there is the question of whether learning can be achieved using a fast algorithm. These are known, respectively, as the {\it sample complexity} and {\it computational complexity} problems. This article provides a brief introduction to these. We highlight the importance of the Vapnik-Chervonenkis dimension, a combinatorial parameter which measures the `expressive power' of a neural network, and describe how this parameter quantifies fairly precisely the sample complexity of PAC learning. In discussing the computational complexity of PAC learning, we shall present a result which illustrates that in some cases the problem of PAC learning is inherently intractable.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-024:
----------------------------------------
Graphs and Artificial Neural Networks
by Martin Anthony, London School of Economics and Political Science, University of London

Abstract: `Artificial neural networks' are machines (or models of computation) based loosely on the ways in which the brain is believed to work. In this chapter, we discuss some links between graph theory and artificial neural networks. We describe how some combinatorial optimisation tasks may be approached by using a type of artificial neural network known as a Boltzmann machine. We then focus on `learning' in feedforward artificial neural networks, explaining how the graph structure of a network and the hardness of graph-colouring quantify the complexity of learning.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-025:
----------------------------------------
The Vapnik-Chervonenkis Dimension of a Random Graph
by Martin Anthony, Graham Brightwell, London School of Economics and Political Science, University of London
Colin Cooper, University of North London

Abstract: In this paper we investigate a parameter defined for any graph, known as the {\it Vapnik-Chervonenkis dimension} (or VC dimension). For any vertex $x$ of a graph $G$, the closed neighbourhood $N(x)$ of $x$ is the set of all vertices of $G$ adjacent to $x$, together with $x$.
We say that a set $D$ of vertices of $G$ is {\it shattered} if every subset $R$ of $D$ can be realised as $R=D \cap N(x)$ for some vertex $x$ of $G$. The Vapnik-Chervonenkis dimension of $G$ is defined to be the largest cardinality of a shattered set of vertices. This parameter can be used to provide bounds on the complexity of a learning problem on graphs. Our main result gives, for each positive integer $d$, the exact threshold function for a random graph $G(n,p)$ to have VC~dimension $d$.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-026:
----------------------------------------
Probabilistic Decision Trees and Multilayered Perceptrons
by Pascal Bigot and Michel Cosnard, LIP, ENS, Lyon, France

Abstract: We propose a new algorithm to compute a multilayered perceptron for classification problems, based on the design of a binary decision tree. We show how to modify this algorithm to use ternary logic, introducing a `Don't Know' class. This modification could be applied to any heuristic based on the recursive construction of a decision tree. Another way of dealing with uncertainty to improve generalization performance is to construct probabilistic decision trees. We explain how to modify the preceding heuristics to construct such trees and to associate probabilistic multilayered perceptrons with them.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-027:
----------------------------------------
A characterization of the existence of energies for neural networks
by Michel Cosnard, LIP, ENS, Lyon, France
Eric Gole, Universidad de Chile, Santiago, Chile

Abstract: In this paper we give, under an appropriate theoretical framework, a characterization of neural networks which admit an energy. We prove that a neural network admits an energy if and only if the weight matrix satisfies two conditions: the diagonal elements are non-negative, and the associated incidence graph does not admit non-quasi-symmetric circuits.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-028:
----------------------------------------
Improvement of Gradient Descent based Algorithms Training Multilayer Perceptrons with an Evolutionary Initialization
by C\'edric G\'egout, \'Ecole Normale Sup\'erieure de Lyon, Lyon

Abstract: Gradient descent algorithms reducing the mean square error computed on a training set are widely used for training real-valued feedforward networks, because of their easy implementation and their efficacy. But in some cases they are trapped in a local optimum and are not able to find a good network. To eliminate these problematic cases, usually one could only restart the gradient descent, or find an initialization point constructed with unreliable, training-set-dependent heuristics. This paper presents a new method to find a good initialization point. An evolutionary algorithm provides an individual whose phenotype is a neural network; this is the individual that best supports a quick, efficient and robust gradient descent. The genotypes are real-valued vectors containing the parameters of networks, so we use special genetic operators. Simulation results show that this initialization reduces the neural network training time and the training complexity, and improves the robustness of gradient descent based algorithms.
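The two-stage idea in NC-TR-95-028 (evolutionary search for a starting point, then gradient descent from it) can be sketched compactly. The sketch below uses a made-up multimodal loss and a numerical gradient in place of a real network's error surface, and the population size, selection scheme and mutation scale are arbitrary assumptions, not G\'egout's operators:

import numpy as np

def loss(w):
    # toy multimodal surface standing in for a network's training error
    return float(np.sum(w ** 2) + 3.0 * np.sum(np.sin(5.0 * w)))

def num_grad(f, w, eps=1e-5):
    # central-difference numerical gradient
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

def evolve_init(f, dim, pop=40, gens=25, sigma=0.4, seed=0):
    # truncation-selection evolutionary search over real-valued genotypes
    # (the genotypes are the network parameter vectors)
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 2.0, (pop, dim))
    for _ in range(gens):
        scores = np.array([f(ind) for ind in population])
        parents = population[np.argsort(scores)[: pop // 4]]
        population = np.repeat(parents, 4, axis=0)
        population = population + rng.normal(0.0, sigma, population.shape)
    return min(population, key=f)

w = evolve_init(loss, dim=5)
for _ in range(300):              # gradient descent from the evolved start
    w = w - 0.01 * num_grad(loss, w)
print("final loss:", loss(w))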
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-029:
----------------------------------------
The Curse of Dimensionality and the Perceptron Algorithm
by Jyrki Kivinen, University of Helsinki
Manfred K.~Warmuth, University of California, Santa Cruz

Abstract: We give an adversary strategy that forces the Perceptron algorithm to make $(N-k+1)/2$ mistakes when learning $k$-literal disjunctions over $N$ variables. Experimentally we see that even for simple random data, the number of mistakes made by the Perceptron algorithm grows almost linearly with $N$, even if the number $k$ of relevant variables remains a small constant. Thus, the Perceptron algorithm suffers from the curse of dimensionality even when the target is extremely simple and almost all of the dimensions are irrelevant. In contrast, Littlestone's algorithm Winnow makes at most $O(k\log N)$ mistakes for the same problem. Both algorithms use linear threshold functions as their hypotheses. However, Winnow does multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. (A toy comparison of the two update rules appears after NC-TR-95-031 below.)

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-030:
----------------------------------------
Identifying Regular Languages over Partially-Commutative Monoids
by Claudio Ferretti -- Giancarlo Mauri, Universit\`a di Milano - ITALY

Abstract: We define a new technique useful in identifying a subclass of regular languages defined on a free partially commutative monoid (regular trace languages), using equivalence and membership queries. Our algorithm extends an algorithm defined by Dana Angluin in 1987 to learn DFA's. The words of a trace language can be seen as equivalence classes of strings. We show how to extract, from a given equivalence class, a string of an unknown underlying regular language. These strings can drive the original learning algorithm, which identifies a regular string language that also defines the target trace language. In this way the algorithm applies also to classes of unrecognizable regular trace languages and, as a corollary, to a class of unrecognizable string languages. We also discuss bounds on the number of examples needed to identify the target language and on the time required to process them.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-031:
----------------------------------------
A Comparative Study For Forecasting Intra-daily Exchange Rate Data
by Sabine P Toulson, London School of Economics, University of London

Abstract: For the last few years neural nets have been applied to economic and financial forecasting, where they have been shown to be increasingly successful. This paper compares the performance of a two-hidden-layer multi-layer perceptron (MLP) with conventional statistical techniques. The statistical techniques used here consist of a structural model (SM) and the stochastic volatility model (SV). After reviewing each of the three models, a comparison between the MLP and the SM is made, investigating the predictive power of both models for a one-step-ahead forecast of the Dollar-Deutschmark exchange rate. Reasons are given for why the MLP is expected to perform better than a conventional model in this case. A further study gives results on the performance of an MLP and an SV model in predicting the volatility of the Dollar-Deutschmark exchange rate, and a combination of both models is proposed to decrease the forecasting error.
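The toy comparison promised under NC-TR-95-029 above: Perceptron's additive updates versus Winnow's multiplicative updates on a random monotone disjunction. The dimension, example sparsity, and thresholds are illustrative assumptions, not the report's experimental setup:

import numpy as np

rng = np.random.default_rng(1)
N, k, T = 500, 3, 5000
relevant = rng.choice(N, size=k, replace=False)   # target k-literal disjunction

w_p = np.zeros(N)            # Perceptron weights (fixed threshold 0.5 here)
w_w = np.ones(N)             # Winnow weights (threshold N, as in Littlestone)
mistakes_p = mistakes_w = 0

for _ in range(T):
    x = (rng.random(N) < 0.05).astype(float)      # sparse random example
    y = 1 if x[relevant].any() else 0
    # Perceptron: additive update on a mistake
    if (1 if w_p @ x > 0.5 else 0) != y:
        mistakes_p += 1
        w_p += x if y else -x
    # Winnow: promote (double) on a false negative, demote (halve) on a
    # false positive, touching only the active attributes
    if (1 if w_w @ x >= N else 0) != y:
        mistakes_w += 1
        w_w *= np.where(x > 0, 2.0 if y else 0.5, 1.0)

print("Perceptron mistakes:", mistakes_p, "| Winnow mistakes:", mistakes_w)

With settings like these, the Winnow count stays roughly of order k log N while the Perceptron count grows with N, which is the report's point.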
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-032:
----------------------------------------
Characterizations of Learnability for Classes of $\{0,\ldots,n\}$-valued Functions
by Shai Ben-David, Technion, Israel
Nicol\`o Cesa-Bianchi, Universit\`a di Milano, Italy
David Haussler, University of California at Santa Cruz, USA
Philip M. Long, Duke University, USA

Abstract: We investigate the PAC learnability of classes of $\{0,\ldots,n\}$-valued functions ($n < \infty$). For $n=1$ it is known that the finiteness of the Vapnik-Chervonenkis dimension is necessary and sufficient for learning. For $n > 1$ several generalizations of the VC-dimension, each yielding a distinct characterization of learnability, have been proposed by a number of researchers. In this paper we present a general scheme for extending the VC-dimension to the case $n > 1$. Our scheme defines a wide variety of notions of dimension in which all these variants of the VC-dimension, previously introduced in the context of learning, appear as special cases. Our main result is a simple condition characterizing the set of notions of dimension whose finiteness is necessary and sufficient for learning. This provides a variety of new tools for determining the learnability of a class of multi-valued functions. Our characterization is also shown to hold in the ``robust'' variant of the PAC model and for any ``reasonable'' loss function.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-033:
----------------------------------------
Constructing Computationally Efficient Bayesian Models via Unsupervised Clustering
by Petri Myllym\"aki and Henry Tirri, University of Helsinki, Finland

Abstract: Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not attempt to learn arbitrarily complex multi-connected Bayesian network structures, since the resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-034:
----------------------------------------
Mapping Bayesian Networks to Boltzmann Machines
by Petri Myllym\"aki, University of Helsinki, Finland

Abstract: We study the task of finding a maximal a posteriori (MAP) instantiation of Bayesian network variables, given a partial value assignment as an initial constraint. This problem is known to be NP-hard, so we concentrate on a stochastic approximation algorithm, simulated annealing. This stochastic algorithm can be realized as a sequential process on the set of Bayesian network variables, where only one variable is allowed to change at a time.
Consequently, the method can become impractically slow as the number of variables increases. We present a method for mapping a given Bayesian network to a massively parallel Boltzmann machine neural network architecture, in the sense that instead of using the normal sequential simulated annealing algorithm, we can use a massively parallel stochastic process on the Boltzmann machine architecture. The neural network updating process provably converges to a state which solves a given MAP task.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-035:
----------------------------------------
A MINIMAL LENGTH ENCODING SYSTEM
by Tony Bellotti, London Electricity plc, UK
Alex Gammerman, Royal Holloway, University of London, UK

Abstract: Emily is a project to develop a computer system that can organise symbolic knowledge given in a high-level relational language, based on the principle of minimal length encoding (MLE). The purpose of developing this system is to test the hypothesis that minimal length encoding can be used as a general method for induction. A prototype version, Emily2, has already been implemented. It is the purpose of this paper to describe this system, to present some of our results and to indicate future developments.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-036:
----------------------------------------
Techniques in Neural Learning
by Pascal Koiran, DIMACS, Rutgers University
John Shawe-Taylor, Royal Holloway, University of London

Abstract: This paper takes ideas developed in a theoretical framework by Maass and adapts them for a practical learning algorithm for feedforward sigmoid neural networks. A number of different techniques are presented which are based loosely around the common theme of taking advantage of the linearity of the net input to a neuron, or in other words the fact that there is only a single non-linearity per neuron. Some experimental results are included, though many of the ideas are as yet untested. The paper can therefore be viewed as a tool box offering a selection of possible techniques for incorporation in practical, heuristic learning algorithms for multi-layer perceptrons.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-037:
----------------------------------------
$\P\neq \NP$ over the non standard reals implies $\P\neq \NP$ over $\R$
by Christian Michaux, University of Mons-Hainaut, Belgium

Abstract: Blum, Shub and Smale showed the existence of an $\NP$-complete problem over the real closed fields in the framework of their theory of computation over the reals. This allows one to ask the $\P\neq \NP$ question over real closed fields. Here we show that $\P\neq\NP$ over a real closed extension of the reals implies $\P\neq \NP$ over the reals. We also discuss the converse. This leads us to define some subclasses of $\P/$poly. Finally we show that the transfer result about $\P\neq \NP$ is part of a very abstract result.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-038:
----------------------------------------
Computing with Truly Asynchronous Threshold Logic Networks
by Pekka Orponen, Technical University of Graz, Austria

Abstract: We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric ``Hopfield net'') can be simulated on a network of the same type, but without any a priori constraints on the order of updates of the units.
Together with earlier constructions, the results show that the truly asynchronous network model is computationally equivalent to the seemingly more powerful models with either ordered sequential or fully parallel updates.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-040:
----------------------------------------
Descriptive Complexity Theory over the Real Numbers
by Erich Gr\"adel and Klaus Meer, RWTH Aachen, Germany

Abstract: We present a logical approach to complexity over the real numbers with respect to the model of Blum, Shub and Smale. The logics under consideration are interpreted over a special class of two-sorted structures, called {\em $\R$-structures}: They consist of a finite structure together with the ordered field of reals and a finite set of functions from the finite structure into $\R$. They are a special case of the {\em metafinite structures} introduced recently by Gr\"adel and Gurevich. We argue that $\R$-structures provide the right class of structures to develop a descriptive complexity theory over $\R$. We substantiate this claim by a number of results that relate logical definability on $\R$-structures with the complexity of computations of BSS-machines.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-042:
----------------------------------------
Knowledge Extraction From Neural Networks: A Survey
by R. Baron, ENS-Lyon CNRS, France

Abstract: Artificial neural networks may learn to solve arbitrarily complex problems, but the knowledge they acquire is hard to exhibit. Thus neural networks appear as ``black boxes'', the decisions of which can't be explained. In this survey, different techniques for knowledge extraction from neural networks are presented. Early works showed the value of studying internal representations, but these studies were domain specific. Thus, authors have tried to extract a more general form of knowledge, like the rules of an expert system. In a more restricted field, it is also possible to extract from neural networks automata that are able to recognize a formal language. Finally, numerical information may be obtained in process modelling, and this may be of interest in industrial applications.
-----------------------

The Report NC-TR-95-011 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-011.ps.Z
ftp> bye
% zcat nc-tr-95-011.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

The files may also be accessed via WWW starting from the NeuroCOLT homepage:
http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

Best wishes
John Shawe-Taylor

From rsun at cs.ua.edu Wed May 10 14:17:14 1995
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 10 May 1995 13:17:14 -0500
Subject: No subject
Message-ID: <9505101817.AA19625@athos.cs.ua.edu>

In relation to the workshop

> The IJCAI Workshop on
> Connectionist-Symbolic Integration:
> From Unified to Hybrid Approaches
>
> to be held at IJCAI'95
> Montreal, Canada
> August 19-20, 1995

I would like to update a bibliography of work on connectionist-symbolic integration. About two years ago, I solicited input from this mailing list and compiled a bibliography on the above topic (available in Neuroprose).
However, since then there have been a considerable number of new developments that need to be collected and categorized. Therefore, I want to update (and re-compile) that bibliography. Please send me any of the following:

-- New publications since Spring 1993
-- Earlier publications that were inadvertently omitted in the current bibliography
-- Lists of your own publications in this area, preferably annotated (if they are not already in the bibliography).

My e-mail address is: rsun at cs.ua.edu

If you have hardcopies that you can send me, here is my address:

Dr. Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487
(205) 348-6363

Your help is greatly appreciated. However, the decision whether or not to include a paper in the bibliography is solely the responsibility of the editor.

---Ron

p.s.
---------
The previous bibliography (36 pages) on connectionist models with symbolic processing is available in neuroprose. To get a copy of the bibliography, use FTP as follows:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.nn-sp-bib.ps.Z
ftp> quit
unix> uncompress sun.nn-sp-bib.ps.Z
unix> lpr sun.nn-sp-bib.ps (or however you print postscript)

A cleaned-up version of the bibliography is published in the book:
Ron Sun and Larry Bookman (eds.), Computational Architectures Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, 1994.

From cabestan at eel.upc.es Thu May 11 16:45:48 1995
From: cabestan at eel.upc.es (Joan Cabestany)
Date: Thu, 11 May 1995 16:45:48 UTC+0100
Subject: IWANN'95 Programme
Message-ID: <1133*/S=cabestan/OU=eel/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS>

Dear colleagues,

IWANN'95 (International Workshop on Artificial Neural Networks) will be held in Torremolinos (Malaga), Spain, next June 7-9. All people interested in the final Programme and details can obtain a postscript file from our server:

ftp ftp.upc.es
username: anonymous
password: e-mail address
cd upc/eel
get iwann95.ps (120K aprox.)

Yours,
J. Cabestany

From bhuiyan at mars.elcom.nitech.ac.jp Fri May 12 03:43:39 1995
From: bhuiyan at mars.elcom.nitech.ac.jp (bhuiyan@mars.elcom.nitech.ac.jp)
Date: Fri, 12 May 95 16:43:39 +0900
Subject: Pre-print Available via FTP
Message-ID: <9505120743.AA28284@mars.elcom.nitech.ac.jp>

FTP-host: ftp.elcom.nitech.ac.jp (133.68.21.193)
FTP-filename: /pub/WCNN_95.ps.gz
URL: ftp://133.68.21.193/pub/WCNN_95.ps.gz

Performance Evaluation of a Neural Network based Edge Detector for high-contrast images

Md. Shoaib Bhuiyan and Akira Iwata
Dept. Electrical & Computer Engineering
Nagoya Institute of Technology, Nagoya, Japan 466

To appear in Proc. World Congress on Neural Networks, 1995

ABSTRACT
The performance of a neural network based edge detector for high-contrast images has been investigated both quantitatively and qualitatively. We have compared its performance for both synthetic and natural images with those of four existing edge detection methods, namely Sobel's operator, Johnson's proposed contrast-based Sobel operator, Marr-Hildreth's Laplacian-of-Gaussian (LoG) operator, and Canny's operator. We have also investigated its noise immunity and compared it with those of the above mentioned methods. We have found the performance of the neural network based edge detector to be consistently better, especially for images where the illumination varies widely.
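For reference, the plainest of the four baselines named above can be stated in a few lines. This is a generic textbook Sobel gradient-magnitude edge map, not the variant evaluated in the paper, and the threshold is an arbitrary assumption:

import numpy as np

def sobel_edges(img, thresh=0.3):
    # img: 2-D grayscale array (assumed non-constant); returns a binary edge map
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                     # vertical-gradient kernel
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(1, H - 1):     # explicit convolution, borders left at zero
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)        # gradient magnitude
    return (mag > thresh * mag.max()).astype(np.uint8)

The fixed global threshold is exactly what fails on widely varying illumination, which motivates the contrast-based and learned detectors the abstract compares.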
-----------------------------------------------------------------
The paper can be retrieved via anonymous ftp by following these instructions:
unix> ftp ftp.elcom.nitech.ac.jp (133.68.21.193)
ftp:name> anonymous
Password:> your complete e-mail address
ftp> cd pub
ftp> binary
ftp> get WCNN_95.ps.gz
ftp> bye
unix> gunzip WCNN_95.ps.gz
unix> lpr WCNN_95.ps
WCNN_95.ps is 5.57Mb, five pages in postscript format. The paper presents some results of our previously proposed algorithm to extract edges from an image with high contrast (also available by ftp from the same location, filename: ICONIP.ps.gz) and compares them with four existing edge detection methods. Your feedback is very much appreciated (bhuiyan at mars.elcom.nitech.ac.jp)
--Md. Shoaib Bhuiyan

From jordan at psyche.mit.edu Sun May 14 18:57:22 1995
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Sun, 14 May 95 18:57:22 EDT
Subject: Tech Report Available: EM algorithm
Message-ID:

FTP-host: psyche.mit.edu
FTP-file: pub/jordan/AIM-1520.ps.Z
The following paper is now available by anonymous ftp.
==================================================================
On Convergence Properties of the EM Algorithm for Gaussian Mixtures (10 pages)
Lei Xu and Michael I. Jordan
CUHK and MIT

Abstract: We build up the mathematical connection between the ``Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
==================================================================

From massone at mimosa.eecs.nwu.edu Mon May 15 11:19:38 1995
From: massone at mimosa.eecs.nwu.edu (Lina Massone)
Date: Mon, 15 May 1995 10:19:38 -0500
Subject: post-doc opening (forwarded)
Message-ID: <199505151519.KAA03259@mimosa.eecs.nwu.edu>

I am posting this message for a friend. Please do not send inquiries to me!
L. Massone
****************************************************
Motor Control Postdoctoral Fellow
Start Date: September 1, 1995

I seek a postdoctoral fellow for an NSF-funded study on the learning of coordination during complex multijoint voluntary actions. I use computer simulation and empirical methods to address phenomena and mechanisms that underlie the learning of coordination between balance, posture and voluntary task goals during multijoint pulls made by freely-standing humans. Two new, large movement analysis laboratories are available with full computerized capabilities for collecting and analyzing biomechanical and EMG data (force plates, motion analysis, load cells). Applicants must have completed their Ph.D. in motor systems neuroscience, engineering, kinesiology or a related discipline. Knowledge of biomechanics (including modeling), systems analysis, nonlinear dynamics, motor psychology or statistics is highly desirable. Good communication skills are a strong plus. Opportunities exist for participating in seminars and courses offered through the Institute for Neuroscience, Programs in Medical Biomechanics, Physiology, Programs in Physical Therapy, and other departments.
Please send a letter of application (including career goals), vita and the names, addresses and phone numbers of two references to Wynne A. Lee, Ph.D., Programs in Physical Therapy, Northwestern University Medical School, 645 N. Michigan Ave., Chicago IL 60611-2814. Email: wlee at casbah.acns.nwu.edu
-----------------------------------------------
Wynne A. Lee, Ph.D.
Programs in Physical Therapy, and The Institute for Neuroscience
Northwestern University Medical School
645 N. Michigan Avenue (Suite 1100)
Chicago IL 60611-2814
voice: 312-908-6795
fax: 312-908-0741
email: wlee at casbah.acns.nwu.edu
-----------------------------------------------

From lbl at nagoya.riken.go.jp Mon May 15 22:16:09 1995
From: lbl at nagoya.riken.go.jp (Bao-Liang Lu)
Date: Tue, 16 May 1995 11:16:09 +0900
Subject: Paper available: parallel and modular multi-sieving net
Message-ID: <9505160216.AA20492@xian.riken.go.jp>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/lu.multisieve.ps.Z
The following paper is now available by anonymous ftp.
==========================================================================
A Parallel and Modular Multi-sieving Neural Network Architecture for Constructive Learning
Bao-Liang Lu (RIKEN)
Koji Ito (Toyohashi Univ. of Tech.; RIKEN)
Hajime Kita (Kyoto Univ.)
Yoshikazu Nishikawa (Kyoto Univ.)

Abstract: In this paper we present a parallel and modular multi-sieving neural network (PMSN) architecture for constructive learning. This PMSN architecture is different from existing constructive learning networks such as the cascade correlation architecture. The constructing element of the PMSNs is a compound modular network rather than a hidden unit. This compound modular network is called a sieving module (SM). In the PMSN a complex learning task is decomposed into a set of relatively simple subtasks automatically. Each of these subtasks is solved by a corresponding individual SM and all of these SMs are processed in parallel. (It will appear in the Proc. of the Fourth International Conference on Artificial Neural Networks (ANN'95), Cambridge, UK, 26-28 June 1995. 6 pages. No hard copies available.)
============================================================================
Bao-Liang Lu
---------------------------------------------
Bio-Mimetic Control Research Center, RIKEN
3-8-31 Rokuban, Atsuta-ku, Nagoya 456, Japan
Phone: +81-52-654-9137
Fax: +81-52-654-9138
Email: lbl at nagoya.riken.go.jp

From gbugmann at school-of-computing.plymouth.ac.uk Mon May 15 06:00:26 1995
From: gbugmann at school-of-computing.plymouth.ac.uk (Guido.Bugmann xtn 2566)
Date: Mon, 15 May 1995 11:00:26 +0100 (BST)
Subject: Position for Research Assistant
Message-ID:

University of Plymouth
School of Computing
Neurodynamics Research Group
Postgraduate Research Assistant

Applications are invited for a University-funded three-year postgraduate research assistantship, to carry out an investigation within a broad range of topics related to the development of novel, biologically-inspired, neural network based, learning control systems, with particular application to the control of autonomous mobile robots. The range of work extends from theoretical studies of cognition and intelligent behaviour, through computational modelling of brain function, to the construction of neural network based controllers. The research will be carried out within the Neurodynamics Research Group of the School of Computing, further details of which are given below.
The research assistant will be required to register for a PhD degree (fees are waived for University staff) and to carry out limited teaching/demonstrating duties.
We are looking for high-quality candidates who have, or are in the process of completing, a first degree or masters degree in a relevant discipline, eg electronic/mechanical engineering, mathematics, psychology, cognitive science, who are willing and able to use computational tools for either simulation or real-time control, and who have a strong interest in pursuing research in neural networks/systems, adaptive learning systems, and control systems.
The post is on the University Research Assistant scale, with a salary in the range £9,231 to £12,756 p.a., dependent upon age, qualifications, experience, etc.
Informal discussions about the post can be held with Dr Guido Bugmann (e-mail: gbugmann at soc.plym.ac.uk; tel: 01752 232566). Applications (by mail or email) should comprise a CV, a short description of interests and the names of 2 referees. Applications should be sent to Guido Bugmann at the address below, as soon as possible. The position will stay open until a suitable candidate is found.
-----------------------------
Dr. Guido Bugmann
Neurodynamics Research Group
School of Computing
University of Plymouth
Plymouth PL4 8AA
United Kingdom
-----------------------------
Tel: (+44) 1752 23 25 66 / 41
Fax: (+44) 1752 23 25 40
Email: gbugmann at soc.plym.ac.uk
-----------------------------

The Neurodynamics Research Group - Background Information

The aim of this group is to investigate and develop computational neural models of brain behaviour in sensory perception, learning, memory and motor action planning and generation, and to use these models to develop novel artificial systems for intelligent sensory-motor control, eg of autonomous robots.
The group was started in September 1991 and is led by Professor Mike Denham. Researchers in the group currently include a postdoctoral University Research Fellow, Dr Guido Bugmann, who has an international reputation in the field of neural dynamics, and an EPSRC-funded postdoctoral Research Fellow, Dr Raju Bapi, who was a member of Prof Dan Levine's research group at the University of Texas and who has expertise in the modelling of frontal lobe behaviour. The Group also has one University-funded Research Assistant and four research students. The Group was also recently expanded by the appointment of a Senior Lecturer in Artificial Intelligence, Dr Sue McCabe, who was previously at the Royal Naval Engineering College and has expertise in AI, intelligent control and intelligent sensing, especially neural network models of auditory processing.
The Group was awarded an EPSRC research grant, starting in August 1994, to investigate a novel biologically-inspired architecture for an intelligent control system; this is a collaborative project with Professor John Taylor and the Centre for Neural Networks at King's College London. So far, the Group has been working on specific parts of the proposed integrated learning control system and has been able to contribute significantly to knowledge on visual information processing and on planning.
As a result of our work over the last year, we have now begun to define the approach necessary for solving the deep theoretical and practical problems of integrating the various parts of the proposed system, based around a "sensory-action" approach to object perception and recognition and to the learning of spatial maps and adaptive behaviours for changing control objectives and environments.
A novel neural network based system for control of an autonomous mobile robot has been developed and a simulation has been constructed using the Cortex-Pro system on a 486 PC. This simulated system currently controls a real robot with a video camera and provides a practical working example of the basic architecture of the proposed learning control system. The intention is to build more advanced and detailed models of individual modules into the system as a result of parallel conceptual and theoretical research, eg into perception, learning and motor planning, in the Group.

Recent publications:
"A model for latencies in the visual system" Bugmann, G. and Taylor J.G. (1993) Proc. 3rd Conf. on Artificial Neural Networks (ICANN'93, Amsterdam), Gielen S. and Kappen B. (eds), p.165-168.
"Modelling of the high firing variability of real cortical neurons with the temporal noisy-leaky integrator neuron model" Christodoulou C., Clarkson T., Bugmann G. and Taylor J.G. (1994) Proc. IEEE Int. Conf. on Neural Networks (ICNN'94), part of the World Congress on Computational Intelligence (WCCI'94), Orlando, Florida, USA, 2239-2244.
"An artificial neural network architecture for multiple temporal sequence processing" McCabe S L and Denham M J (1994) Proc. World Congress on Neural Networks (WCNN'94), San Diego, California, USA, 738-743.
"Role of short-term memory in visual information processing" Bugmann, G. and Taylor J.G. (1994) Proc. of Int. Symp. on Dynamics of Neural Processing, Washington, DC, USA, 132-136.
"Learning to control intelligently" Denham, M J (1994) Proc. IEE Int. Conf. Control'94, Warwick, UK (plenary paper)
"Route finding by neural net" Bugmann G, Taylor J G and Denham M J (1995) in Taylor J G (ed) Neural Networks, Alfred Waller Ltd, Henley on Thames, pp.217-230.
"Robot control using temporal sequence learning" Denham M J and McCabe S L (1995) Proc. World Congress on Neural Networks (WCNN'95), Washington D.C., USA (accepted for presentation)
"Segmentation of the auditory scene" McCabe S L and Denham M J (1995) Proc. World Congress on Neural Networks (WCNN'95), Washington D.C., USA (accepted for presentation)
-------------------------------------------------------------------

From p.j.b.hancock at psych.stir.ac.uk Tue May 16 16:23:29 1995
From: p.j.b.hancock at psych.stir.ac.uk (Peter Hancock)
Date: Tue, 16 May 95 16:23:29 BST
Subject: Info theory workshop
Message-ID: <9505161523.AA1552850034@nevis.stir.ac.uk>

Call for contributions.

Workshop on Information Theory and the Brain.
4-5th September 1995, University of Stirling, Scotland

What is the goal of sensory coding? What algorithms help the brain to achieve that goal? What is the information content of spiking in neurons? Where is the trade-off between redundancy and decorrelation? How do internal representations reflect the statistics of sensory input? How is input from different modalities combined? These are the kinds of issues to be discussed. The main thrust of the workshop is in furthering our understanding of what is happening in the brain, but with an eye also to possible applications of such algorithms.
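As a rough pointer to the redundancy/decorrelation question above, the quantities usually at stake are Shannon entropy and mutual information; this gloss is ours, not the organizers'. For a population of responses $X_1,\dots,X_N$, a common redundancy measure is the gap between the summed marginal entropies and the joint entropy:

   H(X) = -\sum_x p(x) \log p(x), \qquad I(X;Y) = H(X) - H(X|Y),

   R(X_1,\dots,X_N) = \sum_{i=1}^{N} H(X_i) - H(X_1,\dots,X_N) \geq 0.

Decorrelation drives $R$ toward zero; the trade-off arises because some residual redundancy buys robustness to noise in the code.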
Numbers will be limited to around 30, for reasons of space and informality. Costs are still to be determined, but minimal (less than 50 pounds, excluding accommodation). It is hoped that proceedings will be published after the event. Postgraduates are particularly welcome. Stirling is situated in the centre of Scotland, with easy access by road, rail, and international airports at Edinburgh and Glasgow. The Edinburgh International Festival and Fringe will still be in progress and well worth a visit.

Submissions: Please submit a one-page abstract, preferably by email, to Peter Hancock, pjh at psych.stir.ac.uk, by 30th June 1995. We expect that most people attending will contribute in some form.

Organising committee:
Roland Baddeley (Oxford)
Peter Foldiak (St. Andrews)
Colin Fyfe (Paisley)
Peter Hancock (Stirling)
Jim Kay (SASS, Aberdeen)
Mark Plumbley (King's College London)

Further information from Peter Hancock, Department of Psychology, University of Stirling, FK9 4LA, UK. Phone (+44) 1786 467675. Fax (+44) 1786 467641. Email pjh at psych.stir.ac.uk

Note: the first question above is borrowed from David Field: What is the Goal of Sensory Coding?, Field, D., Neural Computation 6, 559-601, 1994.
--
-------------------------------------------------------
Peter Hancock
Department of Psychology     0 0   Face
University of Stirling        |    Research
FK9 4LA, UK                  \_/   Group
Phone 01786 467675 Fax 01786 467641
pjh at psych.stir.ac.uk
http://nevis.stir.ac.uk/~pjh
-------------------------------------------------------

From S.Khebbal at cs.ucl.ac.uk Tue May 16 13:35:33 1995
From: S.Khebbal at cs.ucl.ac.uk (S.Khebbal@cs.ucl.ac.uk)
Date: Tue, 16 May 95 18:35:33 +0100
Subject: New Intelligent Hybrid Systems Book
Message-ID:

NEW BOOK ANNOUNCEMENT

INTELLIGENT HYBRID SYSTEMS
S. GOONATILAKE and S. KHEBBAL
University College London

There is now a growing realisation in the intelligent systems community that many complex problems require hybrid solutions. Increasingly, hybrid systems combining genetic algorithms, fuzzy logic, neural networks, and expert systems are proving their effectiveness in a wide variety of real-world problems. This timely book brings together leading researchers from the United States, Europe and Asia who are pioneering the theory and application of Intelligent Hybrid Systems. The book provides a definition of hybrid systems, summarises the current state of the art, and details innovative methods for integrating different intelligent techniques. Application examples are drawn from domains including industrial control, financial and business modelling, and cognitive simulation. The book is also intended to equip researchers, application developers and managers with key reference and resource material for the successful development of hybrid systems.

CONTENTS:
========
Chap 1: Intelligent Hybrid Systems: Issues, Classes and Future Trends - Suran Goonatilake & Sukhdev Khebbal, University College London, UK.

PART ONE: FUNCTION-REPLACING HYBRIDS
====================================
Chap 2: Fuzzy Controller Synthesis with Neural Network Process Models - Wendy Foslien & Tariq Samad, Honeywell SSDC, California, USA.
Chap 3: Replacing the Pattern Matcher of an Expert System with a Neural Network - Henry Tirri, Univ. of Helsinki, Finland.
Chap 4: Genetic Algorithms and Fuzzy Logic for Adaptive Process Control - Charles Karr, U.S. Bureau of Mines, USA.
Chap 5: Neural Network Weight Selection Using Genetic Algorithms - David Montana, BBN Systems & Technologies Inc, USA.
PART TWO: INTERCOMMUNICATING HYBRIDS
====================================
Chap 6: A Unified Approach For Engineering Design - David Powell, Michael Skolnick & Shi Shing Tong, GEC, USA.
Chap 7: A Hybrid System for Data Mining - Randy Kerber, Brian Livezey, & Evangelos Simoudis, Lockheed AI Center, Palo Alto, USA.
Chap 8: Using Fuzzy Pre-processing with Neural Networks for Chemical Process Diagnostic Problems - Casimer Klimasaukas, NeuralWare, USA.
Chap 9: A Multi Agent Approach for the Integration of Neural Networks and Expert Systems - Andreas Scherer & Gunter Schlageter, Praktische Informatik, FernUniversitaet, Hagen, Germany.

PART THREE: POLYMORPHIC HYBRIDS
===============================
Chap 10: Integrating Symbol Processing Systems and Connectionist Networks - Vasant Honavar & Leonard Uhr, Iowa State University & Dept of Computer Science, University of Wisconsin-Madison, USA.
Chap 11: Reasoning with Rules and Variables in Neural Networks - Venkat Ajjanagadde & Lokendra Shastri, University of Pennsylvania, USA.
Chap 12: The NeurOagent: A Neural Multi-agent Approach for Modelling, Distributed Processing and Learning - Khai Minh Pham, InferOne, France.
Chap 13: Genetic Programming of Neural Networks: Theory and Practice - Frederic Gruau, Grenoble, France.

PART FOUR: DEVELOPING HYBRID SYSTEMS
====================================
Chap 14: Tools and Environments for Hybrid Systems - Sukhdev Khebbal & Danny Shamhong, University College London, UK.

ISBN 0471 94242 1 300pp January 1995 (Sterling) 29.95/$47.95
John Wiley & Sons Ltd, Baffins Lane, Chichester, West Sussex, PO19 1UD, UK.
(also offices in New York, Brisbane, Toronto, and Singapore)

There is also a World Wide Web page on Intelligent Hybrid Systems at:
http://www.cs.ucl.ac.uk/staff/skhebbal/ihs
and for the Intelligent Hybrid Systems Book at:
http://www.cs.ucl.ac.uk/staff/skhebbal/ihs/ihsbook.html

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# MAIL ADDRESS : | EMAIL ADDRESS : #
# Sukhdev Khebbal, | S.Goonatilke at cs.ucl.ac.uk #
# Suran Goonatilake, | S.Khebbal at cs.ucl.ac.uk #
# Department of Computer Science,|------------------------------------#
# University College London, | TELEPHONE NUMBERS : #
# Gower Street, | Voice: +44 (0)171 391 1329 #
# London WC1E 6BT. | Fax : +44 (0)171 387 1397 #
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

From markey at dendrite.cs.colorado.edu Tue May 16 14:11:27 1995
From: markey at dendrite.cs.colorado.edu (Kevin Markey)
Date: Tue, 16 May 1995 12:11:27 -0600
Subject: NSF cognitive & behavioral budget cuts
Message-ID: <199505161811.MAA22596@dendrite.cs.colorado.edu>

The following alert was originally posted in Info-Childes. Congress will act on the budget resolution by the end of this week.
-------------------------------------------------------------------------
EMERGENCY ACTION ALERT
From the Federation of Behavioral, Psychological and Cognitive Sciences

The House Budget Committee has recommended the complete elimination of NSF research funding for Psychology, Anthropology, Sociology, Linguistics, Political Science, Economics, Geography, Cognitive Science, Decision, Risk and Management Sciences, History of Science, and Statistical Research for the Behavioral and Social Sciences -- as NSF's contribution to balancing the Federal budget. There is no doubt that NSF funding will be cut in the effort to balance the budget. But to selectively wipe out the behavioral and social sciences goes far beyond simply saving money.
This is the most important crisis these sciences have faced since Ronald Reagan attempted to eliminate the same sciences in the early 1980s. Action on this will happen very quickly. The Budget Committee approved the budget package on May 11. The vote on the package by the full House will happen sometime between the 15th and 18th of May. In all likelihood, the budget resolution will pass the House unaltered. The Appropriations Committee will be bound by the spending limits imposed by the Budget Committee. But it need not be bound by the particular cuts recommended by the Budget Committee! Unfortunately, the House leadership has also made it known that no program that lacks a current authorization will be funded. The National Science Foundation is not currently authorized. Efforts to pass its authorization failed last year in the Senate. The House Science Committee Chair, Robert Walker (R-PA), has said that as soon as the budget is passed, the Science Committee will proceed to report its authorizations, which include, among other things, NSF, NASA, and the research programs of the Department of Energy. Robert Walker is also the Vice-Chair of the Budget Committee, and he played a key role in determining the selective cuts at NSF. In a news conference on May 12, Walker said that the Directorate containing the research programs mentioned above was created simply because it was "politically correct" and that it is now time to make a correction. This means that there is little chance the NSF authorization from his Committee will contain an authorization for the Social, Behavioral, and Economic Sciences Directorate. If the Committee does not authorize the Directorate, the Appropriations Committee cannot fund the research programs it contains. So scientists must pay close attention to actions of the Budget, Appropriations, and authorizing committees.

The only way the course of events can be changed is for concerned citizens to let their elected representatives know that they as voters do not approve of these ideological cuts masquerading as budget balancing measures. You must take it upon yourself immediately to:
1) Write or call your own representative's and senators' offices to express your disapproval.
2) Send a copy of your letter to: Robert Walker; George Brown (ranking minority member of the Science Committee and a likely ally of behavioral and social scientists); and Jerry Lewis (Chairman of the House Appropriations Subcommittee that appropriates money for the National Science Foundation). And this next thing is equally important: SEND, FAX OR EMAIL A COPY OF YOUR CORRESPONDENCE TO THE FEDERATION OF BEHAVIORAL, PSYCHOLOGICAL, AND COGNITIVE SCIENCES. We have to be able to monitor how great an impact behavioral and social scientists are having, and the only way we can do that is by keeping track of how many contacts from scientists congressional offices have received. Any letter to Congress may be addressed as follows: Representative's name, U.S. House of Representatives (or U.S. Senate), Washington, D.C. 20515 (House) or 20510 (Senate). The Federation email is federation at apa.org. Federation fax is (202) 336-6158. If you need more information, our telephone number is (202) 336-5920.
3) Help us get the word out. Please see that the anthropology, sociology, linguistics, economics, political science, cognitive science, and geography departments on your campus receive this action alert as well.
4) It is very important that elected representatives do not hear only from the scientists affected.
If you have acquaintances in the physical or biological sciences or the university administration who would write a letter or make a phone call to an elected representative, do everything you can to get such a communication sent. Margaret Jean Intons-Peterson Department of Psychology Indiana University Bloomington, Indiana 47405 INTONS at INDIANA.EDU Phone: 812-855-3991 Fax: 812-855-4691 From jaap.murre at mrc-apu.cam.ac.uk Fri May 12 12:40:47 1995 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Fri, 12 May 1995 17:40:47 +0100 Subject: Paper on connectivity of the brain Message-ID: <199505121640.RAA24847@sirius.mrc-apu.cam.ac.uk> The following paper has been added to our ftp-site: J.M.J. Murre, & D.P.F. Sturdy (submitted). The connectivity of the brain: multi-level quantitative analysis. Revised version submitted to Biological Cybernetics. Abstract We develop a mathematical formalism for calculating connectivity volumes generated by specific topologies with various physical packing strategies. We consider four topologies (full, random, nearest neighbor, and modular connectivity) and three physical models: (i) interior packing, where neurons and connection fibers are intermixed, (ii) sheeted packing where neurons are located on a sheet with fibers running underneath, and (iii) exterior packing where the neurons are located at the surfaces of a cube or sphere with fibers taking up the internal volume. By extensive cross-referencing of available human neuroanatomical data we produce a consistent set of parameters for the whole brain, the cerebral cortex, and the cerebellar cortex. By comparing these inferred values with those predicted by the expressions, we draw the following general conclusions for the human brain, cortex, cerebellum: (i) Interior packing is less efficient than exterior packing (in a sphere). (ii) Fully and randomly connected topologies are extremely inefficient. More specifically we find evidence that different topologies and physical packing strategies might be used at different scales. (iii) For the human brain at a macrostructural level, modular topologies on an exterior sphere approach the data most closely. (iv) On a mesostructural level, laminarization and columnarization are evidence of the superior efficiency of organizing the wiring as sheets. (v) Within sheets, microstructures emerge in which interior models are shown to be the most efficient. With regard to interspecies similarities and differences we conjecture (vi) that the remarkable constancy of number of neurons per underlying mm2 of cortex may be the result of evolution minimizing interneuron distance in grey matter, and (vii) that the topologies that best fit the human brain data should not be assumed to apply to other mammals, such as the mouse for which we show that a random topology may be feasible for the cortex. The paper is 39 pages, single spaced. 
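A back-of-the-envelope check on conclusion (ii), offered as our sketch rather than a result from the paper: if each of $N$ neurons connects to every other one, and every fiber occupies at least some minimum volume $v$, then

   V_{\mathrm{wire}} \sim v\,N(N-1) = O(N^2), \qquad V_{\mathrm{neurons}} = O(N),

so for $N$ on the order of $10^{10}$ cortical neurons, full (or dense random) connectivity would let wiring volume outgrow neural volume by roughly ten orders of magnitude; sparse modular topologies avoid this quadratic blow-up.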
The postscript file and its compressed versions are called:
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/connect.ps (940 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/connect.ps.Z (327 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/connect.zip (223 Kb) (with PKZIP 2.04g)
-- Jaap Murre
jaap.murre at mrc-apu.cam.ac.uk
After 1 June 1995: pn_murre at macmail.psy.uva.nl

From btelfer at relay.nswc.navy.mil Wed May 17 09:39:44 1995
From: btelfer at relay.nswc.navy.mil (Brian Telfer)
Date: Wed, 17 May 95 09:39:44 EDT
Subject: Final Call for WCNN-95 Special Session
Message-ID: <9505171339.AA01538@ulysses.nswc.navy.mil>

(Submitted by Harold Szu)

Call for Papers for the WCNN-95 special Novel Results Session, deadline June 15, 1995
World Congress on Neural Networks, Washington DC, 7/17-21/95

Highlights & Attractions:
@ Keynote speaker: Dr. M. Nelson, White House/Office of Science & Tech. Policy, will speak on Federal Programs on the Information Superhighway.
@ 7 plenary talks by Kohonen, Alkon, Carpenter, Szu, Freeman, Taylor, Amari
@ 19 sessions, 7 special sessions, Special Interest Group meetings
@ Neural Network Industrial Enterprise Day (Monday) & Federal Clean Car Initiative
@ 2-day Fuzzy Neural Networks Symposium
@ 24 INNS University Short Courses (P. Werbos's replaces H. Szu's)
@ NIH/FDA Biomedical Symposium, highlights e.g. Telemedicine
@ WCNN-95 Golf Range Sunday Afternoon Competition
@ Student Volunteers Application, please contact via e-mail: Charles at seas.gwu.edu

Program details can be obtained by contacting Talley Management at the address below or 74577.504 at compuserve.com.

Background: The World Congress on Neural Networks is the only International Neural Network Conference on the North American Continent in 1995. Don't miss it. To encourage your active participation, here is a unique offer. The dates are July 17-21, 1995, in Washington DC: Renaissance Hotel ($99/day; 800-228-9898; (202) 962-4445 fax). WCNN is sponsored by the International Neural Network Society as an annual mechanism for Interdisciplinary Information Dissemination, in collaboration with IEEE, SPIE, NIH, FDA, ONR, APS, AAAI, AICE, APNNA, JNNS, ENNS, SME. IEEE members will enjoy the same low registration fee as INNS members ($75 Student members, $255 regular INNS or IEEE Members; registration must be sent prior to June 16 to TALLEY, 875 KINGS HIGHWAY, SUITE 200, WOODBURY, NJ 08096-3172, or 609-853-0411 FAX). Note that to enjoy the discount, your registration must be sent to Talley by June 16, but your technical paper must be sent directly to the Local Organization Committee address shown as follows. Accepted papers for WCNN-95 will be presented in the "Novel Results" Session and published as a supplement to the paper proceedings available on-site. However, the final papers are not due until June 15, 1995, a month before the DC Conference. This gives you plenty of time to report your most recent and exciting developments.

(i) Submission Procedure: Paper format is: camera ready, 8.5x11 paper, 1" margins, single column, single spaced, minimum 10-pt font, 4 page limit ($20 per extra page). The cover letter should contain: full title of paper, corresponding and presenting authors, address, telephone and fax numbers, email address, preference of oral or poster presentation, and audio-visual requirements. All poster presenters will give a 3 minute oral introduction (2 viewgraph maximum, including 1 quadchart containing authors/title, background, approach, results) to their poster during the oral session.
(ii) Review Procedure: If a member of the INNS Governing Board or a SIGINNS Chair has already reviewed the paper and endorsed it for presentation in the submission letter, the paper will be accepted as it is. Send the original and one copy. The Local Organization Committee (LOC) will review it for the type of presentation and inform the authors by e-mail or fax as soon as possible. If not endorsed, the paper will be reviewed by the LOC. Send the original paper and five copies.
(iii) All papers accepted for oral or poster presentation will be included in the session called Novel Results and will appear in a supplement to the WCNN-95 Proceedings, distributed on-site.
(iv) Best Poster Awards will be chosen from among all contributors who wish to be so considered. Best Poster Awards are rated according to: (a) Quality of Technical Content, (b) Quality of Oral Presentation, (c) Effectiveness of Poster Design; the best three will be kept in a central area with all other winners throughout the conference.
(v) Oral vs Poster: Oral presentation is preferred when a brand new result requires simultaneous peer review, when the main result can be presented in the limited time slot, and when a known speaker is capable of giving a stimulating talk. Poster presentation is preferred if the author wishes to interact with the original inventors, if the result requires more than 15 minutes to do it justice, and if the paper requires a nontraditional demo and tailoring to individual expertise.
(vi) Submission address:
Harold Szu
WCNN-95 LOC Chair
9402 Wildoak Dr.
Bethesda MD 20814
(301) 394-3097 (Office); (301) 394-1929 (Brian); (301) 394-3923 (Fax)
e-mail: HSzu at Ulysses.NSWC.Navy.Mil

In Summary: Please remind your colleagues that this is the only neural network conference at which to keep up with the interdisciplinary developments related to brain-style computing, Biomedical & Engineering Applications, Natural Intelligence, Mind & Body, Learning and Artificial Neural Network Models.
(i) Governors & SIGINNS Chairs: guaranteed acceptance, if reviewed by them.
(ii) INNS & IEEE: identical membership discount rate.
(iii) Special Session: Novel Results; deadline June 15, 1995.
(iv) Usual single column, single space, 12-pt font, 4 page limit.
(v) Papers accepted will be included in the WCNN Proceedings package.

From maja at cs.brandeis.edu Wed May 17 18:18:38 1995
From: maja at cs.brandeis.edu (Maja Mataric)
Date: Wed, 17 May 1995 18:18:38 -0400
Subject: Conference Announcement and Call For Papers
Message-ID: <199505172218.SAA06588@garnet.cs.brandeis.edu>

==============================================================================
Conference Announcement and Call For Papers

FROM ANIMALS TO ANIMATS
Fourth International Conference on Simulation of Adaptive Behavior (SAB96)
Cape Cod, Massachusetts, USA, September 9-13, 1996

The objective of the conference is to bring together researchers in ethology, psychology, ecology, artificial intelligence, artificial life, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow natural and artificial animals to adapt and survive in uncertain environments. The conference will focus particularly on well-defined models, computer simulations, and robotics demonstrations, in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real animals or synthetic agents.
Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis:
Action selection
Learning and development
Perception and motor control
Evolutionary computation
Neural correlates of behavior
Coevolutionary models
Emergent structures and behaviors
Parallel and distributed models
Motivation and emotion
Collective and social behavior
Internal world models
Autonomous robots
Characterization of environments
Applied adaptive behavior

Authors should make every effort to suggest implications of their work for both natural and artificial animals. Papers which do not deal explicitly with adaptive behavior will be rejected.

Submission Instructions
Authors are requested to send five copies (hard copy only) of a full paper to the Program Chair (Pattie Maes) arriving no later than Feb 9th, 1996. Late submissions will not be considered. Papers should not exceed 10 pages (excluding the title page), with 1 inch margins all around, and no smaller than 10 pt (12 pitch) type (Times Roman preferred). The Web site listed below contains the latex .sty file producing the preferred format for submissions. Each paper must include a title page containing the following: (1) Full names, postal addresses, phone numbers, email addresses (if available), and fax numbers for each author, (2) A 100-200 word abstract, (3) The topic area(s) in which the paper could be reviewed (see list above). Camera ready versions of the papers, in two-column format, will be required by May 10th. Computer, video, and robotic demonstrations are also invited for submission. Submit a 2-page proposal plus a title page as above to the program chair. Indicate equipment requirements and relevance to the themes of the conference.

Conference Chairs

Pattie Maes, Program
MIT Media Lab, 20 Ames Street Rm 305, Cambridge, MA 02139 USA
email: pattie at media.mit.edu

Maja Mataric, Local Arrangements
Volen Center for Complex Systems, Computer Science Department, Brandeis University, Waltham, MA 02254 USA
email: maja at cs.brandeis.edu

Jean-Arcady Meyer, Publicity
Groupe de Bioinformatique, URA686, Ecole Normale Superieure, 46 rue d'Ulm, 75230 Paris Cedex 05, France
email: meyer at wotan.ens.fr

Jordan Pollack, Local Arrangements/Financial
Volen Center for Complex Systems, Computer Science Department, Brandeis University, Waltham, MA 02254 USA
email: pollack at cs.brandeis.edu

Herbert Roitblat, Financial
Department of Psychology, University of Hawaii, 2430 Campus Road, Honolulu, HI 96822 USA
email: roitblat at uhunix.uhcc.hawaii.edu

Stewart Wilson, Proceedings
The Rowland Institute for Science, 100 Edwin H. Land Blvd., Cambridge, MA 02142 USA
email: wilson at smith.rowland.org

(Tentative) Program Committee:
M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France; B. Blumberg, USA; L. Booker, USA; R. Brooks, USA; D. Cliff, UK; P. Colgan, Canada; T. Collett, UK; H. Cruse, Germany; J. Delius, Germany; A. Dickinson, UK; J. Ferber, France; D. Floreano, UK; N. Franceschini, France; S. Giszter, USA; S. Goss, Belgium; J. Hallam, UK; I. Harvey, UK; I. Horswill, USA; P. Husbands, UK; L. Kaelbling, USA; H. Klopf, USA; L-J. Lin, USA; M. Littman, USA; D. McFarland, UK; J. Millan, Spain; G. Miller, UK; R. Pfeifer, Switzerland; J. Slotine, USA; T. Smithers, Spain; O. Sporns, USA; J. Staddon, USA; L. Steels, Belgium; L. Stein, USA; F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; W. Uttal, USA; D. Waltz, USA.
Official Language: English
Publisher: MIT Press/Bradford Books

Important Dates
===============
FEB 9, 1996: Submissions must be received
APR 12: Notification of acceptance or rejection (via email)
MAY 10: Camera ready revised versions due
JUN 10: Early registration deadline
AUG 8: Hotel reservations and regular registration deadline
SEP 9-13: Conference dates

General queries to: sab96 at cs.brandeis.edu
WWW Page: http://www.cs.brandeis.edu/conferences/sab96
==============================================================================

From tibs at pc-tibs.Stanford.EDU Wed May 17 20:28:08 1995
From: tibs at pc-tibs.Stanford.EDU (Rob Tibshirani)
Date: Wed, 17 May 1995 17:28:08 -0700
Subject: new paper
Message-ID: <199505180028.RAA14900@pc-tibs.Stanford.EDU>

The following paper (without figures) is now available at the ftp site utstat.toronto.edu in pub/bootpred.shar (shar postscript files). A paper copy with figures is available upon request from karola at playfair.stanford.edu

Cross-Validation and the Bootstrap: Estimating the Error Rate of a Prediction Rule
Bradley Efron (Stanford Univ) and Robert Tibshirani (Univ of Toronto)

A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. A particular bootstrap method, the $.632+$ rule, is shown to substantially outperform cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric, and apply to any possible prediction rule. The simulations include ``smooth'' prediction rules like Fisher's Linear Discriminant Function, and unsmooth ones like Nearest Neighbors.
=============================================================
| Rob Tibshirani          The History of science
| Dept. of Statistics     is full of fruitful errors
| Sequoia Hall            and barren truths
| Stanford Univ
| Stanford, CA                  Arthur Koestler
| USA 94305
Phone: 1-415-725-2237
Email: tibs at playfair.stanford.edu
FAX: 1-416-725-8977

From jhoh at vision.postech.ac.kr Wed May 17 21:40:52 1995
From: jhoh at vision.postech.ac.kr (Prof. Jong-Hoon Oh)
Date: Thu, 18 May 1995 10:40:52 +0900
Subject: Statistical Physics of Neural Networks: Preprints available via ftp
Message-ID:

Dear Colleagues,

Preprints to be published in the proceedings of NNSMP95 (Neural Networks: The Statistical Mechanics Perspective) are available. Here is the table of contents for the NNSMP95 Proceedings. It is available via anonymous ftp at tico.postech.ac.kr in the pub/NNSMP/proceedings95 directory. For the accompanying mailing list service, please read the README file.

****************************************************************************
Jong-Hoon Oh
Associate Professor, Department of Physics
jhoh at vision.postech.ac.kr
Pohang Institute of Science Technology
Tel) +82-562-2792069
Hyoja San 31 Pohang, 790-784 Kyoungbuk, Korea
Fax) +82-562-2793099
****************************************************************************

-----------------Table of Contents - LaTeX format --------------------------
\documentstyle[12pt]{article}
\begin{document}
\centerline{\Large\bf Table of Contents}
\bigskip
\bigskip
Preface
\bigskip
{\bf Part I.
Learning Curves} \begin{itemize} \item Statistical Theory of Learning Curves S. Amari, N. Murata and K. Ikeda. \item Generalization in Two-Layer Neural Networks J.-H. Oh, K. Kang, C. Kwon and Y. Park \item Annealed Theories of Learning H. S. Seung \item Mutual Information and Bayes Methods for Learning a Distribution D. Haussler and M. Opper \item General Bounds for Predictive Errors in Supervised Learning M. Opper and D. Haussler \item Perceptron Learning: The Largest Version Space M. Biehl and M. Opper \item Large Scale Simulations for Learning Curves K.-R. M\"uller, M. Finke, N. Murata and S. Amari \item Geometry of Admissible Parameter Region in Neural Learning K. Ikeda and S. Amari \item Learning by a Population of Perceptrons K. Kang, J.-H. Oh and C. Kwon \end{itemize} {\bf Part II. Dynamics} \begin{itemize} \item On-Line Learning of Dichotomies: Algorithms and Learning Curves H. Sompolinsky, N. Barkai and H. S. Seung \item The Bit-Generator and Time-Series Prediction E. Eisenstein, I. Kanter, D. A. Kessler and W. Kinzel \item Phase Dynamics of Two and Three Coupled Hodgkin-Huxley Neurons under DC Currents S. Kim, S. G. Lee, H. Kook and J. H. Shin \item Periodic Synchronization in Networks of Neuronal Oscillators M. Y. Choi \item Synchronization in Neural Networks with Finite Storage Capacity K. Park and M. Y. Choi \end{itemize} {\bf Part III. Associative Memory and Other Topics} \begin{itemize} \item The Cavity Method: Applications to Learning and Retrieval in Neural Networks K. Y. M. Wong \item Storage Capacity of a Fully Connected Committee Machine C. Kwon, Y. Park and J.-H. Oh \item Thermodynamic Properties of the Multi-Neuron Interaction Model without Truncating the Interaction D.~Boll\'e, J.~Huyghebaert and G.~M.~Shim \item Symmetry between Neuronal and Synaptic Dynamics of Neural Net H.-F. Yanai \item Learning and Maximum Entropy in General Boltzmann Machines C. Hicks and H. Ogawa \item On the (Free) Energy of Stochastic and Continuous Hopfield Neural Networks J. van den Berg and J. C. Bioch \item Neural Thermodynamics for Biological Ensembles A. Coster \end{itemize} {\bf Part IV. Applications} \begin{itemize} \item Learning Algorithms for Classification: A Comparison on Handwritten Digit Recognition Y. LeCun, L. D. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A. M\"uller, E. S\"ackinger, P. Simard and V. Vapnik \item On the Consequences of the Statistical Mechanics Theory of Learning Curves for the Model Selection Problem M. J. Kearns \item Learning of a Two-Layer Neural Network with Flexible Hidden Layer Size J. Kim, K. Kang and J.-H. Oh \item Distributed Population Representation in Proprioceptive Cortex S. Cho, M. Jang and J. A. Reggia \item Designing Cost Functions for Additional Network Functionality S. Y. Lee \item Self-Organization of Gaussian Mixture Model for PDF Estimation S. Lee and S. 
Shimoji
\end{itemize}
\end{document}
****************************************************************************
Jong-Hoon Oh
Associate Professor, Department of Physics
jhoh at vision.postech.ac.kr
Pohang Institute of Science Technology
Tel) +82-562-2792069
Hyoja San 31 Pohang, 790-784 Kyoungbuk, Korea
Fax) +82-562-2793099
****************************************************************************

From jaap.murre at mrc-apu.cam.ac.uk Wed May 17 11:36:43 1995
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Wed, 17 May 1995 16:36:43 +0100
Subject: Brain-size neurocomputers
Message-ID: <199505171536.QAA26136@sirius.mrc-apu.cam.ac.uk>

The following paper has been added to our ftp site:

Heemskerk, J.N.H., & J.M.J. Murre (submitted). Brain-size neurocomputers: analyses and simulations of neural topologies on Fractal Architectures. Submitted to the IEEE Transactions on Neural Networks.

Abstract
Current neurocomputers are more than 50 million times slower than the brain. Although chip speeds exceed the switching speed of biological neurons by several orders of magnitude, artificial neural networks are of a much smaller scale than real brains. The primary aim of most neurocomputer designs is speeding up neural paradigms rather than implementing large-scale neural networks. In order to simulate neural networks of brain size, neurocomputers need to be scaled up. We here present MindShape, a design concept for a very large-scale neurocomputer based on a hierarchical-modular or Fractal Architecture. A Fractal Architecture can be built up from two types of elements: neural processing elements (NPEs) and communication elements (CEs). Massive usage of these elements allows for both distributed calculation and distributed control. A detailed description of this machine is presented, with reference to a realized feasibility study (the BSP400, see [1][2]). Through performance analyses and simulations of data communication, it is shown that the Fractal Architecture supports efficient implementation of structured neural networks. We finally demonstrate that physical realization of brain-size neurocomputers is feasible with current technology.

Files can be found at:
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/lsize.ps (1589 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/lsize.ps.Z (347 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/lsize.zip (495 Kb)

Jan N.H. Heemskerk hmskerk at rulfsw.leidenuniv.nl
Jacob M.J. Murre jaap.murre at mrc-apu.cam.ac.uk (after 1 June 1995: pn_murre at macmail.psy.uva.nl)

From pihong at cse.ogi.edu Thu May 18 20:42:00 1995
From: pihong at cse.ogi.edu (Hong Pi)
Date: Thu, 18 May 95 17:42 PDT
Subject: Neural net short course at OGI
Message-ID:

Oregon Graduate Institute of Science & Technology, Office of Continuing Education, offers the short course:

NEURAL NETWORKS: ALGORITHMS AND APPLICATIONS
June 12-16, 1995, at the OGI campus near Portland, Oregon.

Course Organizer: John E. Moody
Lead Instructor: Hong Pi
With Lectures By: Dan Hammerstrom, Todd K. Leen, John E. Moody, Thorsteinn S. Rognvaldsson, Eric A. Wan

Artificial neural networks (ANN) have emerged as a new information processing technique and an effective computational model for solving pattern recognition and completion, feature extraction, optimization, and function approximation problems.
This course introduces participants to the neural network paradigms and their applications in pattern classification; system identification; signal processing and image analysis; control engineering; diagnosis; time series prediction; financial analysis and trading; and speech recognition. Designing a neural network application involves steps from data preprocessing to network tuning and selection. This course, with many examples, application demos and hands-on lab practice, will familiarize the participants with the techniques necessary for building successful applications. About 50 percent of the class time is assigned to lab sessions. The simulations will be based on Matlab, the Matlab Neural Net Toolbox, and other software running on 486 PCs.

Prerequisites: Linear algebra and calculus. Previous experience using Matlab is helpful, but not required.

Who will benefit: Technical professionals, business analysts and other individuals who wish to gain a basic understanding of the theory and algorithms of neural computation and/or are interested in applying ANN techniques to real-world, data-driven modeling problems.

Course Objectives: After completing the course, students will:
- Understand the basic neural network paradigms
- Be familiar with the range of ANN applications
- Have a good understanding of the techniques for designing successful applications
- Gain hands-on experience with ANN modeling.

Course Outline

Neural Networks: Biological and Artificial
The biological inspiration. History of neural computing. Types of architectures and learning algorithms. Application areas.

Simple Perceptrons and Adalines
Decision surfaces. Perceptron and Adaline learning rules. Stochastic gradient descent. Lab experiments.

Multi-Layer Feed-Forward Networks I
Multi-Layer Perceptrons. Back-propagation learning. Generalization. Early stopping. Network performance analysis. Lab experiments.

Multi-Layer Feed-Forward Networks II
Radial basis function networks. Projection pursuit regression. Variants of back-propagation. Levenberg-Marquardt optimization. Lab experiments.

Network Performance Optimization
Network pruning techniques. Input variable selection. Sensitivity analysis. Regularization. Lab experiments.

Neural Networks for Pattern Recognition and Classification
Nonparametric classification. Logistic regression. Bayesian approach. Statistical inference. Relation to other classification methods.

Self-Organized Networks and Unsupervised Learning
K-means clustering. Kohonen feature mapping. Learning vector quantization. Adaptive principal components analysis. Exploratory projection pursuit. Applications. Lab experiments.

Time Series Prediction with Neural Networks
Linear time series models. Nonlinear approaches. Case studies: economic and financial time series analysis. Lab experiments.

Neural Networks for Adaptive Control
Nonlinear modeling in control. Neural network representations for dynamical systems. Reinforcement learning. Applications. Lab experiments.

Massively Parallel Implementation of Neural Nets on the Desktop
Architecture and application demos of the Adaptive Solutions CNAPS System.

Summary and Perspectives

About the Instructors

Dan Hammerstrom received the B.S. degree in Electrical Engineering, with distinction, from Montana State University, the M.S. degree in Electrical Engineering from Stanford University, and the Ph.D. degree in Electrical Engineering from the University of Illinois. He was on the faculty of Cornell University from 1977 to 1980 as an assistant professor.
From 1980 to 1985 he worked for Intel, where he participated in the development and implementation of the iAPX-432 and i960 and, as a consultant, the iWarp systolic processor that was jointly developed by Intel and Carnegie Mellon University. He is an associate professor at Oregon Graduate Institute, where he is pursuing research in massively parallel VLSI architectures, and is the founder and Chief Technical Officer of Adaptive Solutions, Inc. He is the architect of the Adaptive Solutions CNAPS neurocomputer. Dr. Hammerstrom's research interests are in the area of VLSI architectures for pattern recognition.

Todd K. Leen is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. He received his Ph.D. in theoretical physics from the University of Wisconsin in 1982. From 1982 to 1987 he worked at IBM Corporation, and then pursued research in mathematical biology at Good Samaritan Hospital's Neurological Sciences Institute. He joined OGI in 1989. Dr. Leen's current research interests include neural learning, algorithms and architectures, stochastic optimization, model constraints and pruning, and neural and non-neural approaches to data representation and coding. He is particularly interested in fast, local modeling approaches, and applications to image and speech processing. Dr. Leen served as theory program chair for the 1993 Neural Information Processing Systems (NIPS) conference, and workshops chair for the 1994 NIPS conference.

John E. Moody is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. His current research focuses on neural network learning theory and algorithms in their many manifestations. He is particularly interested in statistical learning theory, the dynamics of learning, and learning in dynamical contexts. Key application areas of his work are adaptive signal processing, adaptive control, time series analysis, forecasting, economics and finance. Moody has authored over 35 scientific papers, more than 25 of which concern the theory, algorithms, and applications of neural networks. Prior to joining the Oregon Graduate Institute, Moody was a member of the Computer Science and Neuroscience faculties at Yale University. Moody received his Ph.D. and M.A. degrees in Theoretical Physics from Princeton University, and graduated Summa Cum Laude with a B.A. in Physics from the University of Chicago.

Hong Pi is a senior research associate at Oregon Graduate Institute. He received his Ph.D. in theoretical physics from the University of Wisconsin. His research interests include nonlinear modeling, neural network algorithms and applications.

Thorsteinn S. Rognvaldsson received the Ph.D. degree in theoretical physics from Lund University, Sweden, in 1994. His research interests are neural networks for prediction and classification. He is currently a postdoctoral research associate at Oregon Graduate Institute.

Eric A. Wan, Assistant Professor of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, received his Ph.D. in electrical engineering from Stanford University in 1994. His research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, system identification, and adaptive control. He is a member of IEEE, INNS, Tau Beta Pi, Sigma Xi, and Phi Beta Kappa.
For a complete course brochure contact:
Linda M. Pease, Director
Office of Continuing Education
Oregon Graduate Institute of Science & Technology
PO Box 91000, Portland, OR 97291-1000
+1-503-690-1259
+1-503-690-1686 (fax)
e-mail: continuinged at admin.ogi.edu
WWW home page: http://www.ogi.edu

From patrick at magi.ncsl.nist.gov Fri May 19 09:43:52 1995
From: patrick at magi.ncsl.nist.gov (Patrick Grother)
Date: Fri, 19 May 95 09:43:52 EDT
Subject: New NIST Technical Document Image Database
Message-ID: <9505191343.AA24500@magi.ncsl.nist.gov>

NIST Special Database 20
Scientific and Technical Document Database

Special Database 20 contains 23468 high resolution binary images obtained from copyright-expired scientific and technical journals and books. The images contain a very rich set of graphic elements such as graphs, tables, equations, two column text, maps, pictures, footnotes, annotations, and arrays of such elements. No ground truthing or original typesetting information is available. The images contain predominantly machine printed English, although three French and German documents are included.

+ 104 articles, books, journals
+ 23468 full page binary images
+ High Resolution 15.75 dots per mm (400 dpi)
+ 4 compact discs each containing about 500 Mb
+ Updated CCITT IV Compression Source Code: 25x compression
+ A structural statistics file for each image
+ Page rotation estimates
+ Software utilities

Special Database 20 is available as a four 5.25 inch CD-ROM set in the ISO-9660 format. Price: $1000.00 US.

For sales contact:
Standard Reference Data
National Institute of Standards and Technology
Building 221, Room A323
Gaithersburg, MD 20899
Voice: (301) 975-2208
FAX: (301) 926-0416
email: srdata at enh.nist.gov

For technical details contact:
Patrick Grother
Visual Image Processing Group
National Institute of Standards and Technology
Building 225, Room A216
Gaithersburg, Maryland 20899
Voice: (301) 975-4157
email: patrick at magi.ncsl.nist.gov

From lopez at Physik.Uni-Wuerzburg.DE Mon May 22 15:40:15 1995
From: lopez at Physik.Uni-Wuerzburg.DE (Bernardo Lopez)
Date: Mon, 22 May 95 15:40:15 MESZ
Subject: preprint
Message-ID: <199505221340.PAA01017@wptx14.physik.uni-wuerzburg.de>

FTP-host: ftp.physik.uni-wuerzburg.de
FTP-filename: /pub/preprint/WUE-ITP-95-011.ps.gz

The following paper has been placed in our ftp archive (see above for ftp-host) as a compressed postscript file named WUE-ITP-95-011.ps.gz (10 pages of output)
email address: lopez at physik.uni-wuerzburg.de
**** Hardcopies cannot be provided ****
------------------------------------------------------------------
Title: Storage of correlated patterns in a perceptron
Authors: B L\'opez, M Schr\"oder and M Opper
Institut f\"ur Theoretische Physik, Universit\"at W\"urzburg, Am Hubland, D-97074 W\"urzburg
------------------------------------------------------------------
Abstract: We calculate the storage capacity of a perceptron for correlated gaussian patterns. We find that the storage capacity $\alpha_c$ can be less than 2 if similar patterns are mapped onto different outputs and vice versa. As long as the patterns are in general position we obtain, in contrast to previous works, that $\alpha_c \geq 1$ in agreement with Cover's theorem. Numerical simulations confirm the results.
------------------------------------------------------------------

From ingber at alumni.caltech.edu Mon May 22 10:47:23 1995
From: ingber at alumni.caltech.edu (Lester Ingber)
Date: 22 May 1995 14:47:23 GMT
Subject: paper: Statistical mechanics ...
short-term memory Message-ID: <3pq85r$8qq@gap.cco.caltech.edu> The following paper, appearing this month in Phys Rev E, is available via anonymous ftp. ======================================================================== %A L. Ingber %A P.L. Nunez %T Statistical mechanics of neocortical interactions: High resolution path-integral calculation of short-term memory %J Phys. Rev. E %V 51 %N 5 %P 5074-5083 %D 1995 Statistical mechanics of neocortical interactions: High resolution path-integral calculation of short-term memory Lester Ingber Lester Ingber Research P.O. Box 857, McLean, Virginia 22101 ingber at alumni.caltech.edu and Paul L. Nunez Department of Biomedical Engineering Tulane University, New Orleans, Louisiana 70118 pln at bmen.tulane.edu We present high-resolution path-integral calculations of a previously developed model of short-term memory in neocortex. These calculations, made possible with supercomputer resources, supplant similar calculations made in L. Ingber, Phys. Rev. E 49, 4652 (1994), and support coarser estimates made in L. Ingber, Phys. Rev. A 29, 3346 (1984). We also present a current experimental context for the relevance of these calculations using the approach of statistical mechanics of neocortical interactions, especially in the context of electroencephalographic data. ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get smni95_stm.ps.Z [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. -- /* RESEARCH E-Mail: ingber at alumni.caltech.edu * * INGBER WWW: http://alumni.caltech.edu/~ingber/ * * LESTER Archive: ftp.alumni.caltech.edu:/pub/ingber * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From sjoberg at isy.liu.se Mon May 22 11:48:37 1995 From: sjoberg at isy.liu.se (Jonas Sjoberg) Date: Mon, 22 May 95 17:48:37 +0200 Subject: PhD thesis available Message-ID: <9505221548.AA01932@joakim.isy.liu.se> My PhD thesis with the title NON-LINEAR SYSTEM IDENTIFICATION WITH NEURAL NETWORKS is available by FTP or WWW. It contains 223 pages and it is stored as compressed postscript. (3.6 Mbyte uncompressed, 1.2 Mbyte compressed). ________________________________________________________________________________ Jonas Sjo"berg Dept. of El. Engineering University of Linkoping Telefax: +46-13-282622, or +46-13-139282 S-581 83 Linko"ping E-Mail: sjoberg at isy.liu.se Sweden ________________________________________________________________________________ Anonymous FTP: joakim.isy.liu.se or 130.236.24.1 directory: pub/Misc/NN/ file : PhDsjoberg.ps.Z WWW: file://joakim.isy.liu.se/pub/Misc/NN/ file : PhDsjoberg.ps. Abstract: This thesis addresses the non-linear system identification problem, and in particular, investigates the use of neural networks in system identification. An overview of different possible model structures is given in a common framework. 
A nonlinear structure is described as the concatenation of a map from the observed data to the regressor, and a map from the regressor to the output space. This divides the model structure selection problem into two problems with lower complexity: that of choosing the regressor and that of choosing the non-linear map. The possible choices for the regressors consist of past inputs and outputs, and filtered versions of them. The dynamics of the model depends on the choice of regressor, and families of different model structures are suggested based on analogies to linear black-box models. State-space models are also described within this common framework by a special choice of regressor. It is shown that state-space models which have no parameters in the state update function can be viewed as input-output models preceded by a pre-filter. A parameterized state update function, on the other hand, can be seen as a data-driven regressor selector. The second step of the non-linear identification is the mapping from the regressor to the output space. It is often advantageous to try some intermediate mappings between the linear and the general non-linear mapping. Such non-linear black-box mappings are discussed and motivated by considering different noise assumptions. The validation of a linear model should contain a test for non-linearities and it is shown that, in general, it is easy to detect non-linearities. This implies that it is not worth spending too much energy searching for optimal non-linear validation methods for a specific problem. Instead, the validation method should be chosen so that it is easy to apply. Two such methods, based on polynomials and neural nets, are suggested. Further, two validation methods, the correlation-test and the parametric F-test, are investigated. It is shown that under certain conditions these methods coincide. Parameter estimates are usually based on criterion minimization. In connection with neural nets it has been noted that it is not always optimal to try to find the absolute minimum point of the criterion. Instead, a better estimate can be obtained if the numerical search for the minimum is prematurely stopped. A formal connection between this stopped search and regularization is given. It is shown that the prematurely stopped numerical minimization of the criterion can be viewed as regularization with a regularization term which is gradually turned to zero. This closely connects to, and explains, what is called overtraining in the neural net literature. From martin.davies at psy.ox.ac.uk Mon May 22 13:15:31 1995 From: martin.davies at psy.ox.ac.uk (Martin Davies) Date: Mon, 22 May 1995 18:15:31 +0100 Subject: Euro-SPP '95 Message-ID: <9505221815.AA31230@Mac8> ************************************************************ EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY FOURTH ANNUAL CONFERENCE St. Catherine's College Oxford OX1 3UJ 30 August - 1 September 1995 ************************************************************ This conference is supported by the McDonnell-Pew Centre for Cognitive Neuroscience and by the Mind Association. ************************************************************ The Fourth Annual Conference of the European Society for Philosophy and Psychology will be held at St. Catherine's College Oxford, from Wednesday 30 August to Friday 1 September 1995.
The programme includes: *Invited Lectures* by Michael Posner, Wolfgang Kunne, Jacques Mehler and Alan Cowey; *Invited Symposia* on Attention and Space, Emotion and Irrationality, Foundations of Artificial Life, and Brain Imaging; plus Submitted Papers and Posters. The conference desk will open at 9.00 am on Wednesday 30 August. Coffee will be available from 11.00 am and the first session will commence at 11.30 am. The final session of the conference will end at about 4.30 pm on Friday 1 September. There will be a Conference Dinner on the Friday evening at a cost of approximately 20 pounds per head. ************************************************************ **REGISTER NOW PAY LATER!** The Euro-SPP and St. Catherine's College would welcome an early indication of your intention to attend this conference. Please register now. We will invoice you later. If you pay your Euro-SPP Membership Fees and return your completed Conference Registration Form by *Friday 16 June* then a special Conference Registration Fee of 30 pounds will apply. After that date, the Conference Registration Fee will be 35 pounds for Euro-SPP members. The Conference Registration Fee for non-members of the Euro-SPP is 55 pounds. The basic accommodation and meals package, from mid-morning on Wednesday 30 August until late afternoon on Friday 1 September, costs 108 pounds. Bed and breakfast accommodation is also available for the nights of Tuesday 29 August, Friday 1 September, and Saturday 2 September, at an additional cost of 28 pounds per night. Accommodation is in single study-bedrooms. A limited number of bedrooms with en suite bathroom may also be available at a supplement of 12 pounds per night. For those who do not require accommodation, the basic meals package (excluding breakfast), from mid-morning on Wednesday 30 August until late afternoon on Friday 1 September, costs 60 pounds. ************************************************************ **DISCOUNTS FOR STUDENTS** The Conference Registration Fee for students is 10 pounds for Euro-SPP members or 20 pounds for non-members. The McDonnell-Pew Centre for Cognitive Neuroscience and the Mind Association have provided a number of bursaries to assist students with the costs of accommodation and meals at Euro-SPP '95. These will be awarded to students in order of receipt of applications, until the resources are used up. A McDonnell-Pew Bursary or Mind Association Bursary offers a discount of 40 pounds on the accommodation and meals package or a discount of 20 pounds on the meals only package. In order to apply for a bursary, you must include with your Conference Registration Form a letter from your supervisor confirming that the conference is relevant to your studies. ************************************************************ This announcement is also available on the World Wide Web at: http://www.cogs.susx.ac.uk/users/ronaldl/espp.html ************************************************************ For information about the conference, please contact: Euro-SPP '95, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England. Email: espp95 at psy.ox.ac.uk Fax: +44 1865 310447 ************************************************************ Euro-SPP Membership fees: Dfl. 50,- or Dfl. 35,- for students. Fees must be paid in Dutch currency. Fees may be paid by Mastercard. For membership details, please contact: Mrs. Susan Struycken, Department of Psychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands. 
Email: espp at kub.nl ************************************************************ ************************************************************ PROVISIONAL PROGRAMME ************************************************************ WEDNESDAY 30 AUGUST 9.00 am Conference desk opens 11.00 am COFFEE 11.30 am - 12.45 pm INVITED LECTURE 1: The McDonnell-Pew Lecture Imaging the Mechanisms of Consciousness Speaker: Michael Posner (Psychology, Oregon) 1.00 pm LUNCH 2.30 - 4.30 pm SYMPOSIUM 1: Attention and Space Speakers: John Duncan (MRC Applied Psychology Unit, Cambridge) Pierre Jacob (Philosophy, CNRS, Paris) Albert Newen (Philosophy, Bielefeld) 4.30 - 5.30 pm POSTER SESSION and TEA 5.30 - 7.00 pm SUBMITTED PAPER SESSIONS 1A/1B 1A 5.30 Wendy Clements (Psychology, Sussex) and Josef Perner (Psychology, Salzburg) 6.15 Ted Ruffman (Psychology, Sussex) 1B 5.30 Greg Currie (Philosophy, Flinders University, Adelaide) 6.15 Sonia Sedivy (Philosophy, Toronto) 7.15 pm DINNER followed by INVITED LECTURE 2: The Mind Association Lecture Intentional Content Speaker: Wolfgang Kunne (Philosophy, Hamburg) ************************************************************ THURSDAY 31 AUGUST 9.00 - 11.00 am SYMPOSIUM 2: Emotion and Irrationality Speakers: Nico Frijda (Psychology, Amsterdam) James Hopkins (Philosophy, King's College, London) 11.00 am COFFEE 11.30 am - 12.45 pm INVITED LECTURE 3: Towards a Biology of Language Speaker: Jacques Mehler (Cognitive Science, EHESS, Paris) 1.00 pm LUNCH 2.15 - 4.30 pm SUBMITTED PAPER SESSIONS 2A/2B/2C 2A 2.15 Georges Rey (Philosophy, Maryland) 3.00 Owen Flanagan (Philosophy, Duke University, North Carolina) 3.45 Ullin Place (Thirsk, North Yorkshire) 2B 2.15 J.G. Taylor (Mathematics, King's College, London) 3.00 Laurie Stowe (Language and Literature, Groningen) 3.45 Susan Dwyer (Philosophy, McGill University, Montreal) 2C 2.15 Christoph Hoerl (Philosophy, Oxford) 3.00 Michel Treisman (Psychology, Oxford) 3.45 Karen Neander (Philosophy, Australian National University) 4.30 - 5.00 pm TEA 5.00 - 7.00 pm SYMPOSIUM 3: Foundations of Artificial Life Speakers: Christopher Langton (Santa Fe Institute, New Mexico) Luc Steels (AI Laboratory, Free University of Brussels) Michael Wheeler (Cognitive and Computing Sciences, Sussex) 7.00 pm BUSINESS MEETING followed by RECEPTION and DINNER ************************************************************ FRIDAY 1 SEPTEMBER 9.00 - 11.15 am SUBMITTED PAPER SESSIONS 3A/3B/3C 3A 9.00 Paul Pietroski (Philosophy, McGill University, Montreal) 9.45 Kirk Ludwig (Philosophy, Florida) 10.30 Antoni Gomila (Philosophy, University of the Balearic Islands) 3B 9.00 Beatrice de Gelder, Jean Vroomen and Jan Pieter Teunisse (Psychology, Tilburg) 9.45 Philip Benson (Physiology, Oxford) and Mary Katsikitis (Psychiatry, Adelaide) 10.30 Anthony Skillen (Philosophy, Kent) 3C 9.00 Andy Young (MRC Applied Psychology Unit, Cambridge) and Tony Stone (King Alfred's College, Winchester) 9.45 Richard Held (Brain and Cognitive Science, MIT) 10.30 Lawrence Weiskrantz (Psychology, Oxford), John Barbur and Arash Sahraie (Applied Vision Research Centre, City University, London) 11.15 am COFFEE 11.45 am - 1.00 pm INVITED LECTURE 4: Localisation of Functions in the Cerebral Cortex: Modern Phrenology? 
Speaker: Alan Cowey (Psychology, Oxford) 1.00 pm LUNCH 2.15 - 4.30 pm SYMPOSIUM 4: Brain Imaging (A McDonnell-Pew Symposium in Cognitive Neuroscience) Speakers: Dick Passingham (Psychology, Oxford) Chris Frith (MRC Cyclotron Unit, Hammersmith Hospital) David Rosenthal (Philosophy, City University of New York) 4.30 pm TEA There will be a CONFERENCE DINNER on Friday evening ************************************************************ POSTER PRESENTATIONS by David Buller (Philosophy, Northern Illinois); Matthew Elton (Philosophy, Stirling); Brian Keeley (Philosophy, University of California, San Diego); Jonathan Knowles (Philosophy, Birkbeck College, London); Ronald Lemmen (Cognitive and Computing Sciences, Sussex); Kenneth Livingston (Philosophy, Vassar College, New York); Gregory Mulhauser (Philosophy, Glasgow); Ajit Narayan and Jeremy Olsen (Computer and Cognitive Sciences, Exeter); Greg Ray (Philosophy, Florida); Antti Revonsuo (Cognitive Neuroscience, Turku); Carolien Rieffe (Psychology, Free University, Amsterdam); Tadeusz Szubka (Philosophy, Catholic University of Lublin); Yoshio Yano (Psychology, Kyoto University of Education) ************************************************************ ************************************************************ CONFERENCE REGISTRATION FORM ************************************************************ Please complete and sign this form and send it, as soon as possible, to: Euro-SPP '95, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England. Name: ______________________________________________________ Professional affiliation: __________________________________ Address: ___________________________________________________ _____________________________________________________ _____________________________________________________ Email: _____________________________________________________ Fax: _____________________________________________________ Conference Registration Fees: 35 pounds (or 30 pounds before 16 June) for non-student members 55 pounds for non-student non-members 10 pounds for student members 20 pounds for student non-members I expect to attend the Euro-SPP '95 Conference in Oxford. I have indicated my requirements by ticking below. Please send me an invoice in due course. I shall keep you informed of any changes in my plans. Signed: _____________________________________________________ Date: _____________________________________________________ ** I shall require the basic meals only package at 60 pounds. (Please note any special dietary requirements.) ** I shall require the basic meals and accommodation package at 108 pounds. (Please note any special dietary requirements.) ** I shall require a room for the night of Tuesday 29 August at 28 pounds. ** I shall require a room for the night of Friday 1 September at 28 pounds. ** I shall require a room for the night of Saturday 2 September at 28 pounds. ** I request a room with en suite bathroom at an additional cost of 12 pounds per night (subject to availability). ** I expect to attend the Conference Dinner on Friday 1 September at a cost of 20 pounds. ** Please send me information about hotel accommodation in Oxford. ** I am a student, and I am applying for a *McDonnell-Pew* or *Mind Association* Bursary. I enclose a letter from my supervisor.
From ingber at alumni.caltech.edu Mon May 22 10:28:51 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: 22 May 1995 14:28:51 GMT Subject: paper: Path-integral evolution of chaos embedded in noise: Message-ID: <3pq733$7us@gap.cco.caltech.edu> The following paper is available via anonymous ftp. ======================================================================== Path-integral evolution of chaos embedded in noise: Duffing neocortical analog Lester Ingber Lester Ingber Research P.O. Box 857, McLean, Virginia 22101 U.S.A. ingber at alumni.caltech.edu Ramesh Srinivasan Department of Psychology University of Oregon, Eugene, Oregon 97403 U.S.A. ramesh at oregon.uoregon.edu and Electrical Geodesics Eugene, Oregon 97403 U.S.A. Paul L. Nunez Department of Biomedical Engineering Tulane University, New Orleans, Louisiana 70118 U.S.A. pnunez at mailhost.tcs.tulane.edu Abstract--A two-dimensional time-dependent Duffing oscillator model of macroscopic neocortex exhibits chaos for some ranges of parameters. We embed this model in moderate noise, typical of the context presented in real neocortex, using PATHINT, a non-Monte-Carlo path-integral algorithm that is particularly adept in handling nonlinear Fokker-Planck systems. This approach shows promise for investigating whether chaos in neocortex, as predicted by such models, can survive in noisy contexts. Keywords: chaos, path integral, Fokker-Planck, Duffing equation, neocortex ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get path95_duffing.ps.Z [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. -- /* RESEARCH E-Mail: ingber at alumni.caltech.edu * * INGBER WWW: http://alumni.caltech.edu/~ingber/ * * LESTER Archive: ftp.alumni.caltech.edu:/pub/ingber * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From listerrj at helios.aston.ac.uk Tue May 23 04:59:13 1995 From: listerrj at helios.aston.ac.uk (listerrj) Date: Tue, 23 May 1995 09:59:13 +0100 (BST) Subject: Four Postdoctoral Research Fellowships Message-ID: <17253.9505230859@sun.aston.ac.uk> Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK FOUR POSTDOCTORAL RESEARCH FELLOWSHIPS -------------------------------------- *** Full details at http://neural-server.aston.ac.uk/ *** The Neural Computing Research Group has recently been successful in attracting significant levels of funding from the Engineering and Physical Sciences Research Council, and consequently is able to offer 4 full-time postdoctoral Research Fellowships. These positions have a nominal start date of 1 October 1995, although earlier or later start dates can be agreed if appropriate.
Validation and Verification of Neural Network Systems ----------------------------------------------------- (Two Posts) One of the major factors limiting the widespread exploitation of neural networks has been the perceived difficulty of ensuring that a trained network will continue to perform satisfactorily when installed in an operational system. In the case of safety-critical systems it is clearly vital that a high degree of overall system integrity be achieved. However, almost all potential applications of neural networks entail some level of undesirable consequence if the network generates incorrect or inaccurate predictions. Currently there is no general framework for assessing the robustness of neural network solutions or of systems containing embedded neural networks. This substantial and ambitious programme will address the basic issues involved in neural network validation and verification in the context both of stand-alone network solutions and of embedded systems. It will develop and demonstrate robust techniques for quantifying the reliability of neural network predictions, and will also provide the necessary theoretical foundation of a subsequent framework for neural network system validation. This project will address the theoretical basis for determining valid generalisation and error estimates for neural network predictions, and will aim to understand the impact of uncertainties in network predictions on overall system performance in the context of embedded applications. It will also demonstrate the use of these techniques for validation of network solutions through case studies based on real-world applications and data, which will be provided by industrial collaborators. Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks or a relevant field. These posts are tenable for two years in the first instance, with a possible extension for a further three years. Nonstationary Feature Extraction and Tracking for the Classification -------------------------------------------------------------------- of Turning Points in Multivariate Time Series ---------------------------------------------- (One Post) The project is aimed at extracting information from nonstationary, nonlinear time series. Real-world examples which have motivated the proposal include: the early classification of highs, lows and sideways drift in financial global bond markets; the forecasting of characteristic clustering such as peaks and troughs in consumer-driven electricity load demand, along with the corresponding impacts on pool-price prediction; and the expectation of dynamic loading patterns in telecommunications networks. The key lies in an appropriate {\em representation} of the data. The intended methodology is to extend the theoretical basis of the current state of the art on neural network feature extraction techniques to tackle real-world problems presented by industry and commerce. The emphasis is to seek appropriate representations of nonstationary data such that the resulting `clusterings' may be exploited to perform classification. Because real data is generally nonstationary, the principal axes of the feature space change in time and so we need to track this nonstationarity if `market' characteristics as determined by the features are to be useful.
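As an illustration of the tracking problem described in the preceding paragraph (a minimal sketch under assumed data and learning-rate choices, not part of the project specification), Oja's rule follows the leading principal direction of a slowly rotating input stream:

import numpy as np

rng = np.random.default_rng(0)

def oja_step(w, x, eta=0.02):
    # Oja's rule: nudge w toward the leading principal direction of the
    # stream; the y*y*w decay term keeps ||w|| close to 1.
    y = w @ x
    return w + eta * y * (x - y * w)

# Synthetic nonstationary stream: the dominant axis u(t) rotates slowly.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for t in range(20000):
    angle = 1e-3 * t
    u = np.array([np.cos(angle), np.sin(angle)])
    x = 3.0 * rng.normal() * u + 0.3 * rng.normal(size=2)
    w = oja_step(w, x)

print(w)  # close (up to sign) to the current dominant axis u

Because the learning rate stays bounded away from zero, the estimate keeps adapting as the axis drifts, at the cost of some steady-state jitter; this is the basic trade-off behind tracking nonstationary features.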
Potential candidates should be mathematically and computationally competent, with a background in artificial neural networks, dynamical systems theory, or statistical pattern processing, or with relevant experience from a physics or electrical engineering background. This post is tenable for three years. Neural Networks for Visualization of High Dimensional Data ---------------------------------------------------------- (One Post) Visualization has proven to be one of the most powerful ways to interpret and understand complex sets of data, such as records of financial transactions, corporate databases, customer profiles, and marketing surveys. Particular problems arise, however, when the data involves large numbers of variables, corresponding to spaces of high dimensionality. Additionally, the data is often plagued with deficiencies such as missing variables, mislabelled values, and inconsistencies in the representations of different quantities (for instance, the same attribute may be represented in different ways in different parts of the database). Such problems severely limit the performance of current visualization algorithms. This project will investigate the theoretical basis for visualizing data using neural networks, and will develop practical techniques for visualization applicable to large-scale data sets. These techniques will be based, for example, on recent developments in latent-variable density estimation. Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks or a relevant field. This post is tenable for two years. Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff Chris Bishop Professor David Lowe Professor David Bounds Professor Geoffrey Hinton Visiting Professor Richard Rohwer Lecturer Alan Harget Lecturer Ian Nabney Lecturer David Saad Lecturer (arrives 1 August) two further posts (currently being appointed) together with the following Research Fellows Chris Williams Shane Murnion Alan McLachlan Huaihu Zhu a full-time software support assistant, and eleven postgraduate research students. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,556 UK pounds. These salary scales are currently under review, and are subject to annual increments. How to Apply ------------ If you wish to be considered for one of these positions, please send a full CV and publications list, together with the names of 4 referees, to: Professor C M Bishop Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 359 3611 ext. 4270 Fax: 0121 333 6215 e-mail: c.m.bishop at aston.ac.uk (email submission of postscript files is welcome) Closing date: 7 July 1995 ------------------------- From Guszti.Bartfai at Comp.VUW.AC.NZ Tue May 23 18:45:59 1995 From: Guszti.Bartfai at Comp.VUW.AC.NZ (Guszti Bartfai) Date: Wed, 24 May 1995 10:45:59 +1200 Subject: Paper announcement Message-ID: <199505232246.KAA19738@circa.comp.vuw.ac.nz> The following paper has been accepted for publication in the journal "Neural Networks". Title: On the Match Tracking Anomaly of the ARTMAP Neural Network Author: Guszti Bartfai Department of Computer Science Victoria University of Wellington New Zealand Abstract: This article analyses the {\em match tracking anomaly} ({\em MTA}) of the ARTMAP neural network.
The anomaly arises when an input pattern exactly matches its category prototype that the network has previously learned, and the network generates a prediction (through a previously learned associative link) that contradicts the output category that was selected upon presentation of the corresponding target output. Carpenter et al.\ claimed that such an anomalous situation will never arise if the (binary) input vectors have the same number of 1's [Carpenter91]. This paper shows that such situations {\em can} in fact occur. The {\em timing} according to which inputs are presented to the network in each learning trial is crucial: if the target output is presented to the network {\em before} the corresponding input pattern, certain pattern sequences will lead the network to the {\em MTA}. Two kinds of {\em MTA} are distinguished: one that is independent of the choice parameter ($\beta$) of the ART$_b$ module, and another that is not. Results of experiments that were carried out on a machine learning database demonstrate the existence of the match tracking anomaly as well as support the analytical results presented here. Reference: [Carpenter91] Author = "G.A. Carpenter and S. Grossberg and J.H. Reynolds", Title = "{ARTMAP: Supervised Real-Time Learning and Classification of Nonstationary Data by a Self-Organizing Neural Network}", Journal = "Neural Networks", Year = 1991, Volume = 4, Pages = "565--588", ------- The paper is available by anonymous FTP (24 pages, 88k). Full WWW access path: ftp://ftp.comp.vuw.ac.nz/doc/vuw-publications/CS-TR-95/CS-TR-95-1.ps.gz FTP instructions: % ftp ftp.comp.vuw.ac.nz Name: anonymous Password: ftp> cd doc/vuw-publications/CS-TR-95 ftp> binary ftp> get CS-TR-95-1.ps.gz ftp> quit % gunzip CS-TR-95-1.ps.gz % lpr CS-TR-95-1.ps Any comments are welcome. Guszti Bartfai http://www.comp.vuw.ac.nz/~guszti/ From dave at twinearth.wustl.edu Tue May 23 20:05:39 1995 From: dave at twinearth.wustl.edu (David Chalmers) Date: Tue, 23 May 95 19:05:39 CDT Subject: New archives Message-ID: <9505240005.AA03600@twinearth.wustl.edu> The archives for the Philosophy/Neuroscience/Psychology program at Washington University have been reconfigured. The ftp archive has been moved from thalamus.wustl.edu (which has been down for a while now), and more convenient access on the World Wide Web is now in place. (1) The archive of PNP technical reports is now available by anonymous ftp to wuarchive.wustl.edu, in the directory doc/techreports/wustl.edu/philosophy. Files have the form author.title.format, where format is usually ASCII or PS. An "INDEX" file contains an index with abstracts. Plenty of new papers here! (2) There is now a PNP home page on the WWW, at http://www.artsci.wustl.edu/~philos/pnp.html. At the moment this serves mostly as a gateway to the PNP archive, but more will be appearing shortly. Those with WWW access will probably want to use this to get to the archive, rather than the ftp option. (3) My own home page is now set up at http://www.artsci.wustl.edu/~chalmers/. This has links to a number of my papers on consciousness, content, artificial intelligence, and other topics (including some papers that aren't in the PNP archive), and also to my annotated bibliography in the philosophy of mind (see below). (4) There is now a convenient home page for my bibliography in the philosophy of mind at http://www.artsci.wustl.edu/~chalmers/biblio.html.
Many of the current links to the bibliography on the net proceed via an extraordinarily slow connection through apa.oxy.edu and Indiana; this should work much better. Incidentally the bibliography is also available by anonymous ftp to wuarchive in the directory mentioned above, files chalmers.biblio.*. The current version has about 1830 entries in 5 parts. Those who maintain pages with links to these things (especially the archive and the bibliography) might like to update them. --Dave Chalmers (dave at twinearth.wustl.edu) From biehl at Physik.Uni-Wuerzburg.DE Wed May 24 15:36:04 1995 From: biehl at Physik.Uni-Wuerzburg.DE (Michael Biehl) Date: Wed, 24 May 95 15:36:04 MESZ Subject: papers available: learning from clustered input examples Message-ID: <199505241336.PAA00473@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-95-013.ps.gz /pub/preprint/WUE-ITP-95-014.ps.gz The following papers are now available via anonymous ftp: (See below for the retrieval procedure) ------------------------------------------------------------------ "On-line learning from clustered input examples" Peter Riegler, Michael Biehl, Sara A. Solla, and Carmela Marangi Ref. WUE-ITP-95-013 AND "Off-line supervised learning from clustered input examples" Carmela Marangi, Sara A. Solla, Michael Biehl, and Peter Riegler Ref. WUE-ITP-95-014 --------------------------------------------------------------------- Both presented at the VII Italian Workshop on Neural Nets in Vietri s/m, May 1995 proceedings to be published by World Scientific ______________________________________________________________________ Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint ftp> get WUE-ITP-95-0??.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-95-0??.ps.gz e.g. unix> lp WUE-ITP-95-0??.ps (7 pages of output) (*) can be replaced by "get WUE-ITP-95-0??.ps". The file will then be uncompressed before transmission (slower!). _____________________________________________________________________ -- Michael Biehl Institut fuer theoretische Physik Julius-Maximilians-Universitaet Wuerzburg Am Hubland D-97074 Wuerzburg email: biehl at physik.uni-wuerzburg.de Tel.: (+49) (0)931 888 5865 " " " 5131 Fax : (+49) (0)931 888 5141 From zoubin at psyche.mit.edu Wed May 24 14:16:35 1995 From: zoubin at psyche.mit.edu (Zoubin Ghahramani) Date: Wed, 24 May 95 14:16:35 EDT Subject: Paper available on factorial hidden Markov models Message-ID: <9505241816.AA24969@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-filename: /pub/zoubin/facthmm.ps.Z URL: ftp://psyche.mit.edu/pub/zoubin/facthmm.ps.Z This technical report is 13 pages long [102K compressed]. Factorial hidden Markov models Zoubin Ghahramani and Michael I. Jordan Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved via a set of linear equations. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. 
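To give the flavour of such an approximation (a generic mean-field sketch, with notation assumed here rather than taken from the report): with $M$ underlying chains $S_t^{(m)}$, $t = 1 \ldots T$, the intractable posterior over hidden states is replaced by a fully factorized distribution

$$ Q(\{S_t^{(m)}\}) \; = \; \prod_{t=1}^{T} \prod_{m=1}^{M} Q_t^{(m)}(S_t^{(m)}), $$

whose factors are chosen to minimize $\mathrm{KL}(Q \,\|\, P(\{S_t^{(m)}\} \mid \mbox{observations}))$. The resulting fixed-point equations couple the chains only through expectations under $Q$, so the cost of an E-step iteration grows linearly rather than exponentially with $M$.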
Promising empirical results on a small time series modeling problem are presented for both the mean field approximation and Gibbs sampling. MIT COMPUTATIONAL COGNITIVE SCIENCE TECHNICAL REPORT 9502 From M.Dye at ukc.ac.uk Thu May 25 09:18:52 1995 From: M.Dye at ukc.ac.uk (Matt Dye) Date: Thu, 25 May 1995 14:18:52 +0100 Subject: Lectureship Psychology (Cognitive Neuroscience) Message-ID: UNIVERSITY OF KENT AT CANTERBURY INSTITUTE OF SOCIAL AND APPLIED PSYCHOLOGY DEPARTMENT OF PSYCHOLOGY Canterbury, U.K. Lectureship in Psychology (Cognitive Neuroscience) The Institute of Social and Applied Psychology at the University of Kent attained a 4A rating in the most recent HEFCE Research Assessment exercise, and has been designated as a significant area of expansion within the University. ISAP comprises the Department of Psychology and the Tizard Centre. The Department has 16 HEFCE funded teaching staff in addition to associated research and support staff, independently funded research staff and postgraduate students. The Tizard Centre has a similar number of staff but is funded primarily from research and consultancy contracts. Further Psychology posts have been approved by the University. As part of the Department's development plan we are seeking applications for a permanent lectureship from active post-doctoral researchers in the general area of cognitive neuroscience. Candidates with research interests in any area of cognitive neuroscience or its related disciplines are encouraged to apply. Of particular interest will be candidates who have expertise in computational, especially connectionist, modelling. Ideally, the successful applicant will be expected to take up his/her appointment by the 1st September 1995 or as soon as possible thereafter. Main responsibilities of the post holder The successful candidate will have both research and teaching responsibilities. General information Research The Department's research activities are concentrated in the broad areas of cognition and of social psychology. Current research within the cognitive neuroscience group includes investigations of object processing, categorization, speech production and language disorders. In all these areas, staff are involved in experimental research with normal subjects as well as the testing of brain-damaged subjects and attempts to model findings using connectionist systems. Research programmes in Social Psychology include the areas of group processes, psychology of health and social psychological aspects of Forensic Psychology. In all of these fields, the department has an excellent record of attracting research funds and studentships. At present substantial research awards are held from the ESRC, ARC (Australian Research Council), the Rowntree Foundation, Nuffield, Canterbury and Thanet Healthcare Trust, Wellcome and The Leverhulme Trust. The successful candidate will be expected and encouraged to develop a substantive programme of research based on his/her own interests. It should also be noted that existing members of staff are keen to develop collaborative research projects where common interests exist. Recent publications Donnelly, N., Humphreys, G.W. & Riddoch, M.J. (1991). Parallel computation of primitive shape descriptions. Journal of Experimental Psychology: Human Perception and Performance, 17, 2, 561-570. Humphreys, G.W., Lloyd-Jones, T.J. & Fias, W. (in press). Semantic interference on naming using a post-cue procedure: Tapping the links between semantics and phonology with pictures and words.
Journal of Experimental Psychology: Learning, Memory and Cognition. Muller, H., Humphreys, G.W. & Donnelly, N. (1994). Search via recursive rejection (SERR): Visual search for single and dual form conjunction targets. Journal of Experimental Psychology: Human Perception and Performance, 20, 2, 235-258. Weekes, B.S. (1994). A cognitive-neuropsychological analysis of allograph errors from a patient with acquired dysgraphia. Aphasiology, 8, 409-425. Teaching The Department currently offers undergraduate BSc degrees in Psychology, Social Psychology, Social and Clinical Psychology (including Applied, four-year variants), along with an MSc in Social and Applied Psychology. In addition, two new MSc degrees, in Forensic Psychology and Health Psychology, will be available from October 1995, and an MSc in Cognitive Neuroscience is planned in cooperation with the Department of Biology. The Department has ESRC priority recognition (Mode A and B) for its postgraduate training. We have a large and lively group of postgraduate students, and with our current and planned MSc programmes and our involvement in European Exchange Programmes we are planning for a significant long-term expansion in our postgraduate training and research. The post holder's main teaching responsibility will be in the second and third years of our undergraduate programmes. At present, second year students have the option of taking a course which includes perception, language, memory and basic issues relating to neural computation. Third year students have an option to study Neuropsychology (convened by Dr Donnelly). This course is split into two units. In unit 1 we address basic methodological issues and the neuropsychology of vision and in unit 2 we address the neuropsychology of language, memory and other higher order processes. The appointee would be expected to contribute (along with Drs Donnelly, Weekes and Lloyd-Jones) to both units and the final component of the 2nd year course, and to develop teaching in their own area of research. The appointee will also be expected to take over the modest administrative role involved in convening the second year course. In addition, all final year undergraduate students currently undertake an empirical project and the person appointed will take a share of the supervision and support of these, as well as postgraduate supervision. Teaching methods at Kent comprise the usual mix of large lecture format (especially at the first and second-year undergraduate level), small group seminar teaching, and co-operative group working. Administration The Institute has a fully devolved budget for all non-staff costs and the management of this falls to the relevant Head of Department. All staff contribute to the administration of the Department, and several key managerial responsibilities are delegated from the Head of Department to other colleagues. The staff responsibilities change from year to year in the light of varying teaching and research activities and to allow for study leave. Within this collegial atmosphere a supportive appraisal and probation system is operated under the supervision of senior members of staff. All staff are defined as "research active" and it is the Departmental policy to maximise time available for research. The administration of the Department's teaching and research activities is supported by a full-time departmental administrator and secretarial staff. Technical facilities The Department has a good level of technical support and provision.
We have installed a new computer network comprising both Apple Macintoshes and PCs. All staff have their own microcomputers in their offices. We are connected to the University's central computing facilities which include several UNIX-based super-minicomputers and access to national and international computer networks. We have recently refurbished our laboratory facilities to include a high quality audio-visual studio. As a result of our successful expansion, the University has committed funding to a new Psychology building, due for completion during 1996. These resources are supported by a full-time Senior Experimental Officer and a Technician/Demonstrator who have responsibility for their development, operation and maintenance. If you require any further information please contact Professor Dominic Abrams (Tel: 01227 764000 ext 3084: email D.Abrams at ukc.ac.uk), Dr Brendan Weekes (Tel: 01227 827411: email B.S.Weekes at ukc.ac.uk) or Dr Toby Lloyd-Jones (Tel: 01227 827611: email T.J.Lloyd-Jones at ukc.ac.uk) Salary will be within the Lecturer Grade A/B scale - 14,756 - 25,735 pounds per annum. The University has adopted a policy of making most jobs available for job-sharing if suitable applicants come forward. Applications to job-share this post are welcomed and will be considered without prejudice. Closing date for applications: 16 June 1995 The University is committed to implementing its Equal Opportunities Policy. ------------------------------------------------------------------------------ Matthew Dye (BSc MSc) "I think, therefore I hesitate." Department of Psychology email: see header University of Kent at Canterbury tel: +44 (0)1227 764000 ext 3080 Canterbury FAX: +44 (0)1227 763674 Kent CT2 7LZ ftp server: ftp.ukc.ac.uk UNITED KINGDOM /pub/mwgd1 ------------------------------------------------------------------------------ From N.Sharkey at dcs.shef.ac.uk Thu May 25 11:27:25 1995 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Thu, 25 May 95 16:27:25 +0100 Subject: summer fellowship Message-ID: <9505251527.AA07192@entropy.dcs.shef.ac.uk> The Department of Computer Science at Sheffield, UK is offering 4 summer fellowships for 1995. These would essentially cover travel and living expenses. I am interested in receiving applications from people from the neural network community. The more senior the better. Please email me informally if you are interested. noel Noel Sharkey N.Sharkey at dcs.shef.ac.uk Professor of Computer Science FAX: (0114) 2780972 Department of Computer Science Regent Court University of Sheffield S1 4DP, Sheffield, UK From jbower at bbb.caltech.edu Thu May 25 12:41:58 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Thu, 25 May 95 09:41:58 PDT Subject: CNS*95 registration announcement Message-ID: <9505251641.AA01679@bbb.caltech.edu> ************************************************************************ REGISTRATION INFORMATION THE FOURTH ANNUAL COMPUTATIONAL NEUROSCIENCE MEETING CNS*95 JULY 12 - JULY 15, 1995 MONTEREY, CALIFORNIA ************************************************************************ CNS*95: Registration is now open for this year's Computational Neuroscience meeting (CNS*95). This is the fourth in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience.
As in the previous years, this meeting will bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. The meeting will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. MEETING STRUCTURE: The meeting will be organized in two parts: three days of research presentations, and one day of follow-up workshops. Most presentations will be based on submitted papers, with several presentations by specially invited speakers. 126 submitted papers have been accepted for presentation at the meeting based on peer review. Details on the agenda can be obtained via ftp, through telnet or on the web as described below. LOCATION: The meeting will take place on the Monterey Peninsula on the coast south of San Francisco, California at the Doubletree Hotel in downtown Monterey. This modern hotel is located on Monterey's historic Fisherman's Wharf. MEETING ACCOMMODATIONS: Accommodations for the meeting have been arranged at the Doubletree Hotel. We have reserved a block of rooms at the special rate for all attendees of $122 per night single or double occupancy in the conference hotel (that is, 2 people sharing a room would split the $122!). A fixed number of rooms have been reserved for students at the rate of $99 per night single or double occupancy (yes, that means $50 a night per student!). These student room rates are on a first-come-first-served basis, so we recommend acting quickly to reserve these slots. Each additional person per room is a $20 charge. Registering for the meeting WILL NOT result in an automatic room reservation. Instead you must make your own reservations by returning the enclosed registration sheet to the hotel, faxing, or by contacting: the Doubletree Hotel at Fisherman's Wharf ATTENTION: Reservations Dept. Two Portola Plaza Monterey, CA 93940 (408) 649-4511 Fax No. (408) 649-3109 NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 12, 1995. When making reservations by phone, make sure to indicate that you are registering for the CNS*95 meeting. Students will be asked to verify their status on check in with a student ID or other documentation. REGISTRATION INFORMATION: You can register for the meeting either electronically, by FAX or by surface mail. Details are provided below. Note that registration fees increase after June 12th. Registration received before June 12, 1995: Students, meeting: $ 90 Others, meeting: $ 200 Meeting registration after June 12, 1995: Students, meeting: $ 125 Others, meeting: $ 235 BANQUET: Registration for the meeting includes a single ticket to the annual CNS Banquet this year to be held within the Monterey Aquarium on Friday evening, July 14th. Additional Banquet tickets can be purchased for $35 each person. HOW TO REGISTER: To register for CNS*95 you can: 1) use our on-line registration form via http://www.bbb.caltech.edu/cns95.html. Note that this will NOT make a hotel reservation for you. You must do that yourself.
2) get an ftp-able copy of a registration form from ftp.bbb.caltech.edu under pub/cns95/registration_form95 URL: http://www.bbb.caltech.edu/cns95.html FTP: ftp.bbb.caltech.edu -- pub/cns95 3) You can also FAX, email, or surface mail the following registration form to: CNS*95 Caltech, Division of Biology 216-76, Pasadena, CA 91125 Attn: Judy Macias Fax Number: (818) 795-2088 email: judy at smaug.bbb.caltech.edu ************************************************************************ CNS*95 REGISTRATION FORM Last Name: First Name: Title: Organization: Address: City: State: Zip: Country: Telephone Email Address: REGISTRATION FEES: Technical Program -- July 12 - 15 Regular $200 ($225 after June 12th) Student $ 90 ($125 after June 12th) Banquet $ 35 (each additional banquet ticket) - July 14th Total Payment: $ Please Indicate Method of Payment: Check or Money Order Payable in U. S. Dollars to: CNS*95 - Caltech Please make sure to indicate CNS*95 and YOUR name on all money transfers. Charge my card: Visa Mastercard American Express Number: Expiration Date: Name of Cardholder: Signature as appears on card (for mailed in applications): Date: Additional questions: Did you submit an abstract and summary? ( ) Yes ( ) No Title: Have you attended CNS meetings previously ( ) Yes ( ) No Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note: Some grants to cover partial travel expenses may become available. Do you wish further information? ( ) Yes ( ) No ************************************************************************** ADDITIONAL MEETING INFORMATION: Information on the full meeting agenda, the current list of registered attendees, paper abstracts, etc. are available on line: Information is available via http://www.bbb.caltech.edu/cns95.html You can ftp information about CNS*95 from ftp.bbb.caltech.edu. Use 'ftp' or 'anonymous' as your ftp login name, enter your email address as the password. Information is available under pub/cns95 HOPE TO SEE YOU AT THE MEETING From watrous at scr.siemens.com Thu May 25 17:20:21 1995 From: watrous at scr.siemens.com (Raymond L Watrous) Date: Thu, 25 May 1995 17:20:21 -0400 Subject: Paper available: KBANN applied to ECG processing Message-ID: <199505252120.RAA00519@tiercel.scr.siemens.com> FTP-HOST: scr.siemens.com FTP-filename: /pub/learning/Papers/watrous/soar.ps.Z The following paper (7 pages, 3 figures) is now available via anonymous ftp: Synthesize, Optimize, Analyze, Repeat (SOAR): Application of Neural Network Tools to ECG Patient Monitoring Raymond Watrous, Geoffrey Towell and Martin S. Glassman Siemens Corporate Research 755 College Road East Princeton, NJ 08540 Abstract Results are reported from the application of tools for synthesizing, optimizing and analyzing neural networks to an ECG Patient Monitoring task. A neural network was synthesized from a rule-based classifier and optimized over a set of normal and abnormal heartbeats. The classification error rate on a separate and larger test set was reduced by a factor of 2. Sensitivity analysis of the synthesized and optimized networks revealed informative differences. Analysis of the weights and unit activations of the optimized network enabled a reduction in size of the network by 40% without loss of accuracy.
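As a generic illustration of the kind of weight analysis that can shrink a trained network (a minimal sketch in Python under assumptions of my own; the paper's actual criterion combines weight and unit-activation analysis and may well differ):

import numpy as np

def prune_smallest(w, frac=0.40):
    # Zero the fraction `frac` of weights with smallest magnitude
    # (ties at the threshold may remove slightly more).
    k = int(frac * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

# Example: prune a random weight matrix and count surviving weights.
rng = np.random.default_rng(1)
w = rng.normal(size=(20, 10))
wp = prune_smallest(w)
print(np.count_nonzero(wp), "of", w.size, "weights kept")

One would then re-measure test-set error and re-optimize the surviving weights, closing the synthesize-optimize-analyze-repeat loop of the title.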
+=+=+= The paper will appear in the Proceedings of the Workshop on Environmental and Energy Applications of Neural Networks, March 30-31, 1995, Richland, Washington, and is reprinted from the Proceedings of the Third International Congress on Air- and Structure-Borne Sound and Vibration, June 13-15, 1994, Montreal, Quebec, pp. 997-1004, and the Proceedings of the 1993 Symposium on Nonlinear Theory and its Applications, December 5-10, Honolulu, Hawaii, pp. 565-570. We regret that we are unable to provide hard copies. Raymond Watrous +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ Learning Systems Department Phone: (609) 734-6596 Siemens Corporate Research FAX: (609) 734-6565 755 College Road East Princeton, NJ 08540 watrous at learning.scr.siemens.com From sutton at gte.com Fri May 26 18:26:12 1995 From: sutton at gte.com (Rich Sutton) Date: Fri, 26 May 1995 17:26:12 -0500 Subject: New RL paper and WWW interface to archive Message-ID: GENERALIZATION IN REINFORCEMENT LEARNING: SUCCESSFUL EXAMPLES USING SPARSE COARSE CODING Richard S. Sutton submitted to NIPS'95 On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(lambda) algorithm when lambda=1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general lambda. ________________ The paper is available by ftp as ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-inprer.ps.gz or via a new WWW interface to my small archive at http://envy.cs.umass.edu/People/sutton/archive.html. Please change any WWW pointers to the old ftp archive. From bilmes at ICSI.Berkeley.EDU Fri May 26 19:19:48 1995 From: bilmes at ICSI.Berkeley.EDU (Jeff Bilmes) Date: Fri, 26 May 1995 16:19:48 PDT Subject: Speech Recognition Systems Position Available, Berkeley CA Message-ID: <199505262319.QAA15929@anchorsteam.ICSI.Berkeley.EDU> Speech Recognition Systems Position Available International Computer Science Institute Berkeley CA The International Computer Science Institute (ICSI) in Berkeley, California now has a position available in the Realization Group. This is a staff position with a focus on speech recognition system design. The job will involve both research and development. 
Applicants should have several years of experience in designing and writing speech recognition software, and have an interest in developing recognizers that incorporate results from basic research within the group. It would be best if the applicant had experience in speech research as well, although development experience is the more important qualification. Ordinarily, applicants will have a Ph.D., although strong professional experience can substitute for this requirement. The Realization group at ICSI works on a combination of hardware, software, and algorithms for research and development of systems for speech processing and other machine implementations of human signal processing and pattern recognition. In the past, much of the emphasis has been on hybrid neural network/hidden Markov model based speech recognition, robust recognition in the context of additive and convolutional noise, study of robust properties of the human auditory system, and research into the interaction of statistical sequence recognition with robust signal processing. Hardware and software tools also continue to be developed in the group, leading to strong capabilities for training with computationally intensive algorithms. The group includes a strong contingent of Berkeley CS and EE graduate students, and benefits from fruitful collaborations with other researchers (including Greenberg from Berkeley, Bourlard from Belgium, Hermansky from OGI, Cohen and Franco from SRI, and Robinson from Cambridge U). The group is led by Nelson Morgan. ICSI is an independent, non-profit basic research institute affiliated with the University of California campus in Berkeley, California, and is an Equal Opportunity Employer. For further information, please contact: Dr. Nelson Morgan International Computer Science Institute 1947 Center St., Suite 600 Berkeley, CA. 94704 (415) 642-4274 x131 morgan at icsi.berkeley.edu From eann95 at ra.abo.fi Sat May 27 09:13:46 1995 From: eann95 at ra.abo.fi (EANN-95 Konferensomrede VT) Date: Sat, 27 May 1995 16:13:46 +0300 Subject: EANN 95 ftp,http sites Message-ID: <199505271313.QAA08210@aton.abo.fi> Sorry for the unsolicited mail. We will be really brief. The EANN '95 conference program is in /pub/vt/ab/eann95schedule.ps.Z and can be picked up by anonymous ftp from ftp.abo.fi. EANN '95 home page is at http://www.abo.fi/~abulsari/EANN95.html. EANN '95 organisers From tishby at CS.HUJI.AC.IL Mon May 29 08:21:02 1995 From: tishby at CS.HUJI.AC.IL (Tali Tishby) Date: Mon, 29 May 1995 15:21:02 +0300 Subject: Cortical Dynamics in Jerusalem: Final program Message-ID: <199505291221.AA01406@fugue.cs.huji.ac.il> CORTICAL DYNAMICS IN JERUSALEM JUNE 11-15, 1995 TENTATIVE PROGRAM 24 April 1995 Sunday, 11/6/1995 9-10.00 Registration 10-10.30 Opening Ceremony From Single Neurons to Cortical Networks 10.30-11.30 H.B.Barlow: The Division of Labour between Single Neurons and Networks. 11.30-12.00 Coffee 12.00-13.00 A.Schuz: Anatomical Aspects Relevant to Cortical Dynamics 13.00-14.30 Lunch Dynamics of Single Neurons and Synapses 14.30-15.30 Y. Amitai: Functional Implications of Cellular Classification in the Neocortex. 15.30-16.30 I.Segev: Backward Propagating Action Potential and Forward Synaptic Amplification in Excitable Dendrites of Neocortical Pyramidal Cells.
16.30-17.00 Coffee
17.00-18.00 A.M. Thomson: Synaptic Interactions between Neocortical Neurons; Temporal Patterning Selectively Activates Excitatory or Inhibitory Circuits
18.00-19.00 H. Markram: Neocortical Pyramidal Neurons Scale the Efficacy of Synaptic Input according to Arrival Time: A Proposed Selection Principle of the Most Appropriate Synaptic Information
19.30       Reception

Monday, 12.6.1995
Recurrent Dynamics in Cortical Circuits I
9.00-10.00  K.A.C. Martin: Recurrent Excitation in Neocortical Circuits
10.00-11.00 H. Sompolinsky: Visual Processing by Recurrent Cortical Networks
11.00-11.30 Coffee
11.30-12.30 R. Eckhorn: Loosely Synchronized Rhythmic and non-Rhythmic Activities in the Visual Cortex and their Potential Roles for Spatial and Tempo
12.30-13.30 A.K. Kreiter: Functional Aspects of Neuronal Synchronization in Monkeys
13.30-16.30 Poster session

Recurrent Dynamics in Cortical Circuits II
16.30-17.30 J. Bullier: Temporal Aspects of Cortical Processing in Monkey Visual Cortex
17.30-18.30 D. Hansel: Chaos and Synchrony

Tuesday, 13.6.1995
Brain States and Neural Codes I
9.00-10.00  M. Abeles: Spatio-Temporal Firing Patterns in the Frontal Cortex
10.00-11.00 E. Bienenstock: On the Dynamics of Synfire Chains
11.00-11.30 Coffee
11.30-12.30 D.J. Amit: Global Spontaneous Activity and Local Structured (Learned) Delay Activity in the Cortex
12.30-13.30 G.L. Gerstein: Neuronal Assembly Dynamics: Experiments, Analyses and Models
13.30-16.30 Poster session

Brain States and Neural Codes II
16.30-17.30 B.J. Richmond: Dimensionality of Neuronal Codes
17.30-18.30 N. Tishby: Analyzing Cortical Activity Using Hidden Markov Models

Wednesday, 14.6.1995
Vision
9.00-10.00  C.D. Gilbert: Spatial Integration and Cortical Dynamics
10.00-11.00 A. Grinvald: Cortical Dynamics Revealed by Optical Imaging
11.00-11.30 Coffee
11.30-12.30 D. Mumford: Biological and Computational Models for Low Level Vision
12.30-13.30 S. Ullman: The Sequence-Seeking Model for Bidirectional Information Flow in the Visual Cortex
13.30-15.00 Lunch
15.00-18.00 Tour

Thursday, 15.6.1995
Mechanisms of Behavior and Cognition
9.00-10.00  M.N. Shadlen: Seeing and Deciding about Motion: Neural Correlates of a Psychophysical Decision in Area LIP of the Monkey
10.00-11.00 A.B. Schwartz: Population Response in Motor Cortex during Figure Drawing
11.00-11.30 Coffee
11.30-12.30 A.M. Graybiel: The Basal Ganglia and Adaptive Control of Behavioral Routines
12.30-13.30 E. Vaadia: Does Coherent Activity in Groups of Neurons Serve a Neural Code?
13.30-15.30 Lunch
15.30-16.30 L.G. Valiant: A Computational Model for Cognition
16.30-17.00 Coffee
17.00-19.00 Discussion: Is Dynamics Relevant to Cortical Function? Moderator: S. Hochstein
20.00       Farewell Banquet

------- End of Forwarded Message

From peterk at nsi.edu Thu May 25 22:24:53 1995 From: peterk at nsi.edu (Peter Konig) Date: Thu, 25 May 1995 18:24:53 -0800 Subject: postdoc position available Message-ID:

Junior Fellow Position in Cortical Neurophysiology available. Applications are invited for the postdoctoral position of Junior Fellow in Experimental Neurobiology at The Neurosciences Institute, La Jolla, to study mechanisms underlying visual perception and sensorimotor integration in the cat. Applicants should have a background in neurophysiological techniques and data analysis. Fellows will receive stipends appropriate to their qualifications and experience. The position is available immediately. Submit a curriculum vitae, statement of research interests, and names of three references to:
Dr. Peter Konig
The Neurosciences Institute
10640 John Jay Hopkins Drive
San Diego, CA 92121, USA
FAX: 619-626-2199

From jaap.murre at mrc-apu.cam.ac.uk Fri May 26 11:44:06 1995 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Fri, 26 May 1995 16:44:06 +0100 Subject: Paper on Amnesia Model Message-ID: <199505261544.QAA10765@sirius.mrc-apu.cam.ac.uk>

A new paper has been added to our ftp site:

Reference: Murre, J.M.J. (submitted). A model of amnesia. Submitted to Psychological Review.

Abstract: We present a model of amnesia (the TraceLink model) based on a review of its neuropsychology, neuroanatomy, and connectionist modelling. The model consists of three subsystems: (1) a trace system, (2) a link system, and (3) a modulatory system. The hippocampus is assigned a double role, being involved in both the link system and the modulatory system. The model is able to account for many of the characteristics of semantic dementia, retrograde amnesia (RA) and anterograde amnesia (AA), including: Ribot gradients, transient global amnesia, patterns of shrinkage and recovery from RA, and correlations between RA and AA or the absence thereof (e.g., in isolated RA). In addition we derive testable predictions concerning implicit memory, forgetting gradients, and neuroanatomy.

ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/trclink.ps (1253 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/trclink.ps.Z ( 406 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/trclink.zip ( 325 Kb)

N.B. There is a 10-user limit on our ftp site. If you don't succeed in connecting, please try again later. (Normally, there should be a message indicating that the limit has been exceeded.) I am at the moment moving to the University of Amsterdam. After 1 June 1995 my mail address will be: j.murre at psy.uva.nl. There may be a delay in answering questions due to the overhead of the move.

-- Jaap Murre jaap.murre at mrc-apu.cam.ac.uk
Medical Research Council - Applied Psychology Unit, Cambridge
University of Amsterdam, Department of Psychonomics (after June 1)

From zhangw at chert.CS.ORST.EDU Tue May 30 15:25:57 1995 From: zhangw at chert.CS.ORST.EDU (Wei Zhang) Date: Tue, 30 May 95 12:25:57 PDT Subject: Reinforcement learning papers Message-ID: <9505301925.AA13760@curie.CS.ORST.EDU>

This is to announce the availability of two new postscript preprints:

High-Performance Job-Shop Scheduling With A Time-Delay TD($\lambda$) Network
Wei Zhang and Thomas G. Dietterich
submitted to NIPS-95
ftp://ftp.cs.orst.edu/users/z/zhangw/papers/zhang-tgd-nips95.ps.gz

Abstract: Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm $TD(\lambda)$. A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-$TD(\lambda)$ network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly out-perform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them.

Value Function Approximations and Job-Shop Scheduling
Wei Zhang and Thomas G. Dietterich
submitted to the Workshop on Value Function Approximation in Reinforcement Learning at ML-95
ftp://ftp.cs.orst.edu/users/z/zhangw/papers/zhang-tgd-ml95rl.ps.gz

Abstract: We report a successful application of TD($\lambda$) with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program. The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact, it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses.

The following reinforcement learning paper is also available at the site:

Zhang, W. and Dietterich, T., A Reinforcement Learning Approach to Job-shop Scheduling, to appear in Proc. IJCAI-95, 1995.
ftp://ftp.cs.orst.edu/users/z/zhangw/papers/zhang-tgd-ijcai95.ps.gz

From csvarer at eivind.ei.dtu.dk Wed May 31 16:54:39 1995 From: csvarer at eivind.ei.dtu.dk (Claus Svarer) Date: Wed, 31 May 95 16:54:39 METDST Subject: Ph.D. thesis available: NN for Signal Processing Message-ID: <9505311454.AA08746@ei.dtu.dk>

------------------------------------------------------------------------
FTP-host: eivind.ei.dtu.dk
FTP-file: dist/PhD_thesis/csvarer.thesis.ps.Z
------------------------------------------------------------------------

The following Ph.D. thesis is now available by anonymous ftp:

Neural Networks for Signal Processing
Claus Svarer
CONNECT, Electronics Institute B349
Technical University of Denmark
DK-2800 Lyngby, Denmark
csvarer at ei.dtu.dk

The main themes of the thesis are:

- Optimization of neural network architectures by pruning of parameters. Pruning is based on Optimal Brain Damage estimates of which parameters induce the least increase in the cost function when removed.
- Selection of the optimal architecture in a family of pruned networks, based on a generalization error estimate (Akaike's Final Prediction Error estimate).
- Methods for on-line tuning of the different parameters of the network optimization algorithms. The gradient-descent parameter is tuned using a second-order Gauss-Newton method, while the weight-decay parameters are tuned to minimize the generalization error (FPE estimate) of the network.

The methods proposed for network optimization are all examined by experiments. Examples are selected from the areas of classification, time-series prediction and non-linear modeling.
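(For readers unfamiliar with the two ingredients above, a minimal Python sketch; the names are illustrative, and a diagonal Hessian estimate is assumed to be available from a backpropagation pass, as in Optimal Brain Damage.)

  import numpy as np

  def obd_saliencies(w, h_diag):
      # Optimal Brain Damage saliency: the estimated increase in the
      # cost function when weight w_i is set to zero, 0.5 * H_ii * w_i^2.
      return 0.5 * h_diag * w**2

  def fpe(train_mse, n_examples, n_params):
      # Akaike's Final Prediction Error: a generalization-error estimate
      # used to compare the members of a family of pruned networks.
      return train_mse * (n_examples + n_params) / (n_examples - n_params)

  # prune the lowest-saliency weight, then compare architectures by FPE
  w = np.array([0.80, -0.10, 1.50, 0.02])
  h = np.array([2.0, 1.0, 0.5, 3.0])
  prune_first = int(np.argmin(obd_saliencies(w, h)))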
> SORRY: NO HARD COPIES AVAILABLE <

--
Claus Svarer                  e-mail : csvarer at ei.dtu.dk
Rigshospitalet, N2081         Phone  : +45 3545 3545
Department of Neurology       Direct : +45 3545 2088 or +45 3545 3957
DK-2100 Copenhagen O          Fax    : +45 3545 3898
Denmark
From hpan at ecn.purdue.edu Mon May 1 15:32:52 1995 From: hpan at ecn.purdue.edu (Hong Pan) Date: Mon, 1 May 1995 14:32:52 -0500 Subject: TR aval: Linsker's Network: Qualitative Analysis On Parameter Space Message-ID: <199505011932.OAA23341@en.ecn.purdue.edu>

*************** PLEASE DO NOT FORWARD TO OTHER BBOARDS *****************

FTP-host: archive.cis.ohio-state.edu
Mode: binary
FTP-filename: /pub/neuroprose/pan.purdue-tr-ee-95-12.ps.Z
URL: file://archive.cis.ohio-state.edu/pub/neuroprose/

The following Technical Report, concerning the dynamical mechanism of a class of network models that use the limiter function (or the piecewise linear sigmoidal function) as the constraint limiting the size of the weights or the state variables, has been placed in the Neuroprose archive (see above for FTP-host) and is currently available as a compressed postscript file named pan.purdue-tr-ee-95-12.ps.Z (65 pages with 5 tables & 18 figures). Comments, questions and suggestions about the work can be sent to: hpan at ecn.purdue.edu

***** Hardcopies cannot be provided *****
------------------------------------------------------------------------

Linsker-type Hebbian Learning: A Qualitative Analysis On The Parameter Space

Jianfeng Feng (Mathematisches Institut, Universit\"{a}t M\"{u}nchen, Theresienstr. 39, D-80333 M\"{u}nchen, Germany)
Hong Pan and Vwani P. Roychowdhury (School of Electrical Engineering, 1285 Electrical Engineering Building, Purdue University, West Lafayette, IN 47907-1285)
------------------------------------------------------------------------
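(As a rough reminder of the dynamics in question, one common form of Linsker's linear Hebbian rule with a hard weight limiter, in Python; the notation is ours, not the report's.)

  import numpy as np

  def linsker_step(w, Q, k1, k2, eta=0.01, w_max=1.0):
      # One Hebbian update of a Linsker-type linear unit: the input
      # covariance matrix Q drives the weight change, k1 and k2 are the
      # usual parameters of the rule, and the limiter (a hard clip here)
      # confines every weight to [-w_max, w_max].
      dw = Q @ w + k1 + k2 * w.sum()
      return np.clip(w + eta * dw, -w_max, w_max)

The report's question is which attractors this constrained flow can reach as k1 and k2 vary, which is why the analysis is carried out on the parameter space.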
From cyril at psychvax.psych.su.OZ.AU Mon May 1 21:54:03 1995 From: cyril at psychvax.psych.su.OZ.AU (Cyril Latimer) Date: Tue, 2 May 1995 11:54:03 +1000 Subject: Modelling Symmetry Detection with Back-propagation Networks Message-ID:

The following paper appeared in Spatial Vision, and reprints may be requested from the address given below.

Latimer, C.R., Joung, W., & Stevens, C.J. Modelling symmetry detection with back-propagation networks. Spatial Vision, 1994, 8(4), 415-431.

Abstract: This paper reports experimental data and results of network simulations in a project on symmetry detection in small 6 x 6 binary patterns. Patterns were symmetrical about the vertical, horizontal, positive-oblique or negative-oblique axis, and were viewed on a computer screen. Encouraged to react quickly and accurately, subjects indicated axis of symmetry by pressing one of four designated keys. Detection times and errors were recorded. Back-propagation networks were trained to categorize the patterns on the basis of axis of symmetry, and, by employing cascaded activation functions on their output units, it was possible to compare network performance with subjects' detection times. Best correspondence between simulated and human detection-time functions was observed after the networks had been given significantly more training on patterns symmetrical about the vertical and the horizontal axes. In comparison with no pre-training and pre-training with asymmetric patterns, pre-training networks with sets of single vertical, horizontal, positive-oblique or negative-oblique bars speeded subsequent learning of symmetrical patterns. Results are discussed within the context of theories suggesting that faster detection of symmetries about the vertical and horizontal axes may be due to significantly more early experience with stimuli oriented on these axes.

Dr. Cyril R. Latimer             Ph:  +61 2 351-2481
Department of Psychology         Fax: +61 2 351-2603
University of Sydney
NSW 2006, Australia
email: cyril at psych.su.oz.au

From markey at dendrite.cs.colorado.edu Tue May 2 01:40:03 1995 From: markey at dendrite.cs.colorado.edu (Kevin Markey) Date: Mon, 1 May 1995 23:40:03 -0600 Subject: Thesis/TR: Sensorimotor foundations of phonology -- a model. Message-ID: <199505020540.XAA15632@dendrite.cs.colorado.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Thesis/markey.thesis.ps.Z

Ph.D. Thesis available by anonymous ftp (128 pages)

The Sensorimotor Foundations of Phonology: A Computational Model of Early Childhood Articulatory and Phonetic Development

Kevin L. Markey
Department of Computer Science
University of Colorado at Boulder

ABSTRACT

This thesis describes HABLAR, a computational model of the sensorimotor foundations of early childhood phonological development. HABLAR is intended to replicate the major milestones of emerging speech and demonstrate key characteristics of normal development, including the phonetic characteristics of babble, systematic and context-sensitive patterns of sound substitutions and deletions, overgeneralization errors, and the emergence of adult phonemic organization.
HABLAR simulates a complete sensorimotor system consisting of an auditory system that detects and categorizes speech sounds using only acoustic cues drawn from its linguistic environment, an articulatory system that generates synthetic speech based on a realistic computer model of the vocal tract, and a hierarchical cognitive architecture that bridges the two. The environment in which the model resides is also simulated. The model is an autonomous agent which actively experiments within this environment. The principal hypothesis guiding the model is that phonological development emerges from the interaction of auditory perception and hierarchical motor control. The model's auditory perception is specialized to segment and categorize acoustic signals into discrete phonetic events which closely correspond to discrete sets of functionally coordinated gestures learned by the model's articulatory control apparatus. HABLAR learns the correspondence between discrete phonetic and articulatory events, not between continuous speech and continuous vocal tract motion. HABLAR's perceptual and motor organization is initially syllabic. Phonemes are not built into the model but emerge (along with an adult-like phonological organization) through the differentiation of early syllable-sized motor patterns into phoneme-sized patterns while the model learns a large lexicon.

Learning occurs in two phases. In the first phase, HABLAR's auditory perception employs soft competitive learning to acquire phonetic features which categorize the spectral properties of utterances in the linguistic environment. In the second phase, reinforcement based on the phonetic proximity of target and actual utterances guides learning by the model's two levels of motor control. The phonological control level uses Q-learning to learn an optimal policy linking phonetic and articulatory events. The articulatory control level employs a parallel Q-learning architecture to learn a policy which controls the vocal tract's twelve degrees of freedom.

HABLAR has been fully implemented as a computational model. Simulations of the model's auditory perception demonstrate that it faithfully preserves and makes explicit phonetic properties of the acoustic signal. Auditory simulations also mimic the categorical vowel and consonant perception which develops in human infancy. Other results demonstrate the feasibility of learning multi-dimensional articulatory control with a parallel reinforcement learning architecture, and the effectiveness of shaping motor control with reinforcement based on the phonetic proximity of target and actual utterances. The model provides qualitative accounts of developmental data. It is predicted to make pronunciation errors similar to those observed among children because of the relative articulatory difficulty of producing different speech sounds, its tendency to eliminate the biggest phonetic errors first, its generalization of already mastered sounds across phonetic similarities, and contextual effects of phonetic representations and internal distributed representations which underlie speech production.

-----------------------------------------------------------------------------
Sorry, hard copies are not available. Thanks to Jordan Pollack for maintaining neuroprose.

Kevin L. Markey
Department of Psychology
2155 S. Race Street
University of Denver
Denver, CO 80208
markey at cs.colorado.edu
------------------------------------------------------------------------------
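(HABLAR's parallel architecture is more elaborate, but the Q-learning rule its control levels build on can be written in a few lines; tabular form, our notation.)

  import numpy as np

  def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
      # One tabular Q-learning step: move the value of taking action a
      # in state s toward the received reward plus the discounted value
      # of the best action available in the successor state.
      Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
      return Q

  # usage: 5 states, 3 actions, one rewarded transition
  Q = q_update(np.zeros((5, 3)), s=0, a=1, r=1.0, s_next=2)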
From PREFENES at lbs.lon.ac.uk Tue May 2 11:04:59 1995 From: PREFENES at lbs.lon.ac.uk (Paul Refenes) Date: Tue, 2 May 1995 11:04:59 BST Subject: Doctoral Research Scholarships Message-ID:

Collaborative PhD Research Scholarships
Department of Decision Science
London Business School, University of London

The Department of Decision Science at London Business School is offering three scholarships on its Doctoral programme. Commencing in October 1995, the research areas will include neural networks, non-parametric statistics, financial engineering, simulation, optimisation and decision analysis.

Principled Model Selection for Neural Network Applications in Nonlinear Time Series: to utilise developments from multinomial time series theory and from the non-parametric statistics field for developing distribution theories, statistical diagnostics, and test procedures for recurrent neural network model identification. The methodology will be used to develop models of nonlinear cointegration in equity markets and in telecommunications data.

Advanced Decision Technology in Financial Risk Management: the use of advanced decision technologies such as neural networks, non-parametric statistics and genetic algorithms for the development of financial risk management models in the currency and soft commodity markets. Our industrial collaborator has a special interest in robust neural network models for hedging and arbitrage strategies in the currency, soft commodity and equity markets.

Intelligent Systems in Industry Modelling and Simulation Environments: the use of simulation for the development of business strategy and the facilitation of executive debate is now well established and popular. Neural network technology will be used for the development of "intelligent simulation agents" that can process the vast amount of data generated by the simulations and adapt their behaviour by learning from the feedback patterns.

London Business School offers students enrolled in the doctoral programme core courses on Research Methodology and Statistical Analysis, as well as a choice of advanced specialised subject area courses including Financial Economics, Equity Investment, Derivatives Research, etc. Candidates with a strong background in mathematics, operations research, computer science, non-parametric statistics, and/or econometrics who wish to apply are invited to write with a copy of their CV to:

Professor D. Bunn or Dr A-P. N. Refenes
London Business School
Regents Park, London NW1 4SA
tel: ++44 171 262 5050
fax: ++44 171 728 78 75

The Department
===========
The Department of Decision Sciences of the London Business School is actively involved in innovative multi-disciplinary research on the application of new business modelling methodologies to individual and organisation decision-making. In seeking to extend the effectiveness of conventional methods of management science, statistical methods and decision support systems, with the latest generation of software platforms, artificial intelligence, neural networks, genetic algorithms and computationally intensive methods, the research themes of the department remain at the forefront of new practice.
The NeuroForecasting Research Unit
==================================
The NeuroForecasting Research Unit at London Business School is the major centre in Europe for research into neural networks, non-parametric statistics and financial engineering. With funding from the DTI, the European Commission and a consortium of leading financial institutions, the research unit has attained a world-wide reputation for collaborative research. Doctoral students work in a team of highly motivated post-doctoral fellows, research fellows, doctoral students and faculty who are amongst Europe's leading authorities in the field.

Advanced Decision Support Platforms
===================================
The current trend in the design of decision support is towards a synthesis of multiple approaches and integration of business modelling techniques (optimisation with simulation, forecasting with decision analysis, etc.). Using object-oriented software architectures, the group has developed innovative approaches for model structuring, strategic analysis, forecasting and decision-analytic procedures. Several companies and the ESRC are currently supporting this work.

From raffaele at caio.irmkant.rm.cnr.it Tue May 2 17:44:23 1995 From: raffaele at caio.irmkant.rm.cnr.it (raffaele@caio.irmkant.rm.cnr.it) Date: Tue, 2 May 1995 16:44:23 -0500 Subject: simulation of protein folding process (paper) Message-ID: <9505022144.AA09110@caio.irmkant.rm.cnr.it>

FTP-host: kant.irmkant.rm.cnr.it
FTP-filename: /pub/econets/calabretta.folding.ps.Z

The following paper has been placed in the anonymous-ftp archive (see above for ftp-host) and is now available as a compressed postscript file named calabretta.folding.ps.Z (14 pages of output). The paper is also available by World Wide Web: http://kant.irmkant.rm.cnr.it/gral.html. It will appear in Proceedings of 3rd European Conference on Artificial Life (Granada, Spain, 4-6 June 1995). Comments welcome.

Raffaele Calabretta
email address: raffaele at caio.irmkant.rm.cnr.it
------------------------------------------------------------------

"An Artificial Model for Predicting the Tertiary Structure of Unknown Proteins that Emulates the Folding Process"

Raffaele Calabretta, Stefano Nolfi, Domenico Parisi
Department of Neural Systems and Artificial Life
Institute of Psychology, National Research Council
V.le Marx, 15 00137 ROME ITALY
----------------------------------------------------------------------------
Abstract: We present an "ab initio" method that tries to determine the tertiary structure of unknown proteins by modelling the folding process, without using potentials extracted from known protein structures. Using a genetic algorithm, we have been able to obtain appropriate matrices of folding potentials, i.e. 'forces' able to drive the folding process to produce correct tertiary structures. Initial simulations of the folding process of a crambin fragment that folds into an alpha-helix have yielded good results. We discuss some general implications of an Artificial Life approach to protein folding, which makes an attempt at simulating the actual folding process rather than just trying to predict its final result.
----------------------------------------------------------------------------
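(A genetic algorithm of the general kind used here can be sketched in a few lines: a generic rank-and-mutate loop over real-valued candidate solutions, e.g. folding-potential matrices flattened to vectors. This is our illustration, not the authors' code.)

  import numpy as np

  def ga_generation(pop, fitness, mut=0.05, rng=np.random.default_rng(0)):
      # One generation of a plain genetic algorithm: rank the population
      # by fitness, keep the better half, and refill the population with
      # mutated (Gaussian-perturbed) copies of the survivors.
      order = np.argsort(fitness)[::-1]
      survivors = pop[order[: len(pop) // 2]]
      children = survivors + mut * rng.standard_normal(survivors.shape)
      return np.concatenate([survivors, children])

In the paper's setting, fitness would be some measure of how close the structure produced by a candidate potential matrix is to the known target structure.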
From jhoh at vision.postech.ac.kr Tue May 2 11:24:56 1995 From: jhoh at vision.postech.ac.kr (Prof. Jong-Hoon Oh) Date: Wed, 3 May 1995 00:24:56 +0900 Subject: Post-doc Position, Statistical Physics of Neural Networks Message-ID:

Postdoctoral Position at POHANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
"Statistical Physics of Neural Networks"

A postdoctoral position is available at the Basic Science Research Institute of Pohang University of Science and Technology. The main research area will be statistical mechanics of neural networks. A background in the statistical physics of neural networks, spin glasses or other condensed matter systems is preferred, but someone with a strong theoretical or computational physics background who is willing to explore this exciting new field can also be considered for this position. Current research concentrates mainly on the statistical physics of learning in multi-layered neural networks, including issues such as generalization, storage capacity, population learning and model selection. We are now extending our research to biological neural networks and time series prediction. We have computing facilities such as two parallel computers and several high-end workstations.

We hope the successful applicant can start to work either in June or in September, but we have some flexibility. We will support him/her for a year, and this can be extended for one more year according to his/her performance. Further information can be requested by e-mail. An applicant should send a CV and a list of publications to the following address, and arrange for two recommendation letters (or at least one from the Ph.D. adviser) to arrive before May 15. A CV in TeX/LaTeX format by e-mail is welcome. We prefer e-mail communication. Recommendation letters can also be sent by e-mail.

Prof. Jong-Hoon Oh
Department of Physics
Pohang University of Science and Technology
Hyoja San 31, Pohang, 790-784 Kyoungbuk, Korea
jhoh at vision.postech.ac.kr
Tel) +82-562-2792069
Fax) +82-562-2793099

From maggini at McCulloch.Ing.UniFI.IT Tue May 2 12:42:38 1995 From: maggini at McCulloch.Ing.UniFI.IT (Marco Maggini) Date: Tue, 2 May 1995 18:42:38 +0200 Subject: Neurap 95 WWW page Message-ID: <9505021642.AA03087@McCulloch.Ing.UniFI.IT>

NEURAP'95
8th International Conference on Neural Networks and their Applications
Marseilles, France - December 13-14-15, 1995

First announcement and call for papers
WWW: http://www-dsi.ing.unifi.it/neural/neurap/neurap.html

SCOPE OF THE CONFERENCE

Following the previous conferences in Nimes, the eighth Neural Networks and their Applications conference will be organized in Marseilles, France, in 1995. The attendance is unique in its kind, composed half of industrial engineers and half of university scientists, coming from all over the world. The purpose of the NEURAP conference is to present the latest results in the application of artificial neural networks. Theoretical aspects of artificial neural networks are to be presented at the ESANN (European Symposium on Artificial Neural Networks) conference. This edition will give a particular place, but not exclusively, to the three following application domains:

* Automation
* Robotics
* Electrical Engineering
To this end, leading international researchers in these domains have been added to the scientific committee. The program committee of NEURAP'95 welcomes papers covering any kind of applications, methods, techniques, or tools that help to understand or develop neural network applications. To help prospective authors, the following is a non-exhaustive list of topics which will be covered:

* Speech or image recognition
* Fault tolerance
* Data or sensor fusion
* Process control
* Forecasting
* Classification
* Knowledge acquisition
* Planning
* Methods or tools for evaluating neural network performance
* Preprocessing of data
* Simulation tools (research, education, development)
* Hybrid systems (fuzzy, genetic algorithms, symbolic representation, etc.)

The conference will be held in Marseilles, the second largest city in France. Due to the proximity of the Mediterranean Sea, winter is usually sunny and temperate. Marseilles is well served by airways and railways, and is connected to the major European cities.

CALL FOR CONTRIBUTIONS

Prospective authors are invited to submit six originals of their contribution (full paper) before June 15, 1995. The proceedings will be published in English. Papers should not exceed eight A4 pages (double columns, including figures and references). The printing area will be 17 x 23.5 cm (centered on the A4 pages); left, right, top and bottom margins will thus respectively be 1.9, 1.9, 2.5 and 3.4 cm. A 10-point Times font will be used for the main text; headings will be in bold characters (but not underlined), and will be separated from the main text by two blank lines before and one after. Manuscripts prepared in this format will be reproduced in the same size in the book. Originals of the figures will be pasted into the manuscript and centered between the margins. The lettering of the figures should be in 10-point Times font size. Figures should be numbered. The legends also should be centered between the margins and be written in 9-point Times font size. The pages of the manuscript will not be numbered (numbering is decided by the editor).

A separate page (not included in the manuscript) will indicate:
* the title of the manuscript
* author(s) name(s)
* the complete address (including phone & fax numbers and E-mail) of the corresponding author
* a list of five keywords or topics

On the same page, the authors will copy and sign the following paragraph: "in case of acceptation of the paper for presentation at NEURAP'95:
* at least one of the authors will register to the conference and will present the paper
* the author(s) give their rights up over the paper to the organizers of NEURAP'95, for the proceedings and any publication that could directly be generated by the conference
* if the paper does not match the format requirements for the proceedings, the author(s) will send a revised version within two weeks of the notification of acceptation."

Presentations will be oral or poster, depending on the wish of the author(s) and also on organisational constraints. 20 minutes will be allowed for oral presentations. Each poster will be allowed an oral presentation of 3 minutes at the beginning of the poster session. The full paper of either an oral or a poster presentation will be published in the proceedings.
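(For authors preparing their manuscript in LaTeX, a preamble along the following lines approximates the layout requested above; this is our reading of the spec, not an official style file.)

  \documentclass[10pt,a4paper,twocolumn]{article}
  \usepackage{times}   % 10-point Times for the main text
  % margins of 1.9/1.9/2.5/3.4 cm reproduce (approximately) the
  % requested 17 x 23.5 cm printing area on an A4 page
  \usepackage[a4paper,left=1.9cm,right=1.9cm,top=2.5cm,bottom=3.4cm]{geometry}
  \pagestyle{empty}    % pages are numbered by the editor, not the author
  \begin{document}
  \section{Introduction}  % default headings are bold and not underlined
  Body text in 10-point Times, two columns.
  \end{document}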
REGISTRATION FEES (indicative)

                 Before October 1st, 1995    After October 1st, 1995
  Students       1000 FF                     1200 FF
  Universities   1600 FF                     1800 FF
  Industries     2000 FF                     2300 FF

An "advanced registration form" is available by writing to the conference secretariat (see reply form below). Please ask for this form in order to benefit from the reduced registration fee before October 1st, 1995.

DEADLINES
Submission of papers          June 15, 1995
Notification of acceptance    September 18, 1995
Conference                    December 13-14-15, 1995

CONFERENCE SECRETARIAT
Dr. Claude TOUZET
IUSPIM, Domaine Universitaire de Saint-Jérôme
F-13397 Marseille Cedex 20 (France)
Email: diam_ct at vmesa11.u-3mrs.fr
Phone: +33 91 05 60 60
Fax: +33 91 05 60 09

REPLY FORM
If you wish to receive the final program of NEURAP'95, for any address change, or to add one of your colleagues to our database, please send this form to the conference secretariat. Please indicate if you wish to receive the advanced registration form. Please return this form under stamped envelope to:

NEURAP'95
IUSPIM, Domaine Universitaire de Saint-Jérôme
Avenue Escadrille Normandie-Niemen
F-13397 Marseille Cedex 20, France

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name: ............................................................
First Name: ......................................................
University or Company: ...........................................
Address: .........................................................
..................................................................
ZIP: ....................... Town: ..............................
Country: .........................................................
Tel: .............................................................
Fax: .............................................................
E-mail: ..........................................................
[ ] Please send me the "advanced registration form".
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

SCIENTIFIC COMMITTEE (to be confirmed)
Jeanny Hérault, INPG (Grenoble, F) - President
Karl Goser, Universität Dortmund (D) - President
Bernard Amy, IMAG (Grenoble, F)
Xavier Arreguit, CSEM (CH)
Jacob Bahren, CESAR (Oak Ridge, USA)
Gaston Baudat, Sodeco (Genève, CH)
Jean-Marie Bernassau, Sanofi Recherche (Montpellier, F)
Pierre Bessière, IMAG/LIFIA (Grenoble, F)
Jean Bigeon, INPG (Grenoble, F)
Giacomo Bisio, Università di Genova (I)
François Blayo, SAMOS - Univ. Paris I (F)
Jean Bourjault, Université de Besançon (F)
Paul Bourret, Onera-Cert (Toulouse, F)
Joan Cabestany, UPC (Barcelone, E)
Leon O. Chua, University of California (USA)
Mauricio Cirrincione, Università di Palermo (I)
Ian Cloete, University of Stellenbosch (South Africa)
Daniel Collobert, CNET (Lannion, F)
Philippe Coiffet, CRIIF (Gif-sur-Yvette, F)
Marie Cottrell, SAMOS - Université Paris I (F)
Alexandru Cristea, Institute of Virology (Bucharest, Romania)
Dante Del Corso, Politecnico di Torino (I)
Marc Duranton, LEP (Limeil-Brévannes, F)
Françoise Fogelman, Sligos (Clamart, F)
Kunihiko Fukushima, Osaka University (J)
Patrick Gallinari, Univ. Pierre et Marie Curie (Paris, F)
Josef Göppert, University of Tübingen (D)
Marita Gordon, CEA (Grenoble, F)
Marco Gori, Università di Firenze (I)
Erwin Groospietsch, GMD (Sankt Augustin, D)
Martin Hasler, EPFL (Lausanne, CH)
Jean-Paul Haton, CRIN-INRIA (Nancy, F)
Jaap Hoekstra, Delft University of Technology (NL)
Yujiro Inouye, Osaka University (Japan)
Masumi Ishikawa, Kyushu Institute of Technology (J)
Christian Jutten, INPG (Grenoble, F)
Heinrich Klar, Technische Universität Berlin (D)
Jean-François Lavignon, DRET (Arcueil, F)
John Lazzaro, Univ. of California (Berkeley, USA)
Vincent Lorquet, ITMI (Grenoble, F)
Daniel Memmi, CNRS/LIMSI (Orsay, F)
Ruy Milidiu, University of Rio (Brazil)
Pietro Morasso, University of Genoa (I)
Fabien Moutarde, Alcatel Alsthom Recherche (F)
Alan F. Murray, University of Edinburgh (GB)
Akira Namatame, National Defence Academy (J)
Josef A. Nossek, Technische Univ. München (D)
Erkki Oja, Lappeenranta Univ. of Tech. (FIN)
Stanislaw Osowski, University of Warsaw (Poland)
Carsten Peterson, University of Lund (S)
Alberto Prieto, Universidad de Granada (E)
Pierre Puget, CEA (Grenoble, F)
Ulrich Ramacher, Technische Universität Dresden (D)
Leonardo Reyneri, Università di Pisa (I)
Tamas Roska, MTA-SZTAKI (Budapest, H)
Jean-Claude Sabonnadiere, INPG (Grenoble, F)
Juan Miguel Santos, University of Buenos Aires (Argentina)
Leslie S. Smith, University of Stirling (GB)
John T. Taylor, University College London (GB)
Carme Torras, Institut de Cibernetica/CSIC (E)
Claude Touzet, DIAM/IUSPIM (Marseille, F)
Michel Verleysen, UCL (Louvain-La-Neuve, B)
Eric Vittoz, CSEM (Neuchâtel, CH)
Alexandre Wallyn, CGInn (Boulogne-Billancourt, F)

ORGANIZING COMMITTEE
Norbert Giambiasi, DIAM/IUSPIM - President
Jean-Claude Bertrand, IUSPIM
Claudia Frydman, DIAM/IUSPIM
J.-François Lemaitre, IIRIAM
Danielle Bertrand, IUSPIM

From kak at gate.ee.lsu.edu Tue May 2 14:48:44 1995 From: kak at gate.ee.lsu.edu (Subhash Kak) Date: Tue, 2 May 95 13:48:44 CDT Subject: Paper Message-ID: <9505021848.AA19752@gate.ee.lsu.edu>

The following paper is available by anonymous ftp. Comments on the paper are most welcome.

INFORMATION, PHYSICS AND COMPUTATION
Subhash C. Kak
Louisiana State University
Baton Rouge, LA 70803-5901

Abstract: The paper presents several observations on the connections between information, physics and computation. This includes energy and computing speed and the question of quantum computing in the style of Feynman and others.

Technical Report ECE-95-04, April 19, 1995
---------
ftp://gate.ee.lsu.edu/pub/kak/inf.ps.Z

From Alex.Monaghan at CompApp.DCU.IE Tue May 2 15:13:32 1995 From: Alex.Monaghan at CompApp.DCU.IE (Alex.Monaghan@CompApp.DCU.IE) Date: Tue, 2 May 95 15:13:32 BST Subject: CSNLP Conference at Dublin City University Message-ID: <9505021413.AA21392@janitor.compapp.dcu.ie>

PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST! PLEASE POST!

Call for Participation in the Fourth International Conference on
The COGNITIVE SCIENCE of NATURAL LANGUAGE PROCESSING
Dublin City University, 5-7 July 1995

Theme: The Role of Syntax

There is currently considerable debate regarding the place and importance of syntax in NLP. Papers dealing with this matter will feature strongly in the programme.
Invited Speakers: The following speakers have agreed to give keynote talks:
Mark Steedman, University of Pennsylvania
Alison Henry, University of Ulster

Other areas addressed will include: Machine Translation, Connectionism, Semantic inferencing, Spoken dialogue, Prosody, Hybrid approaches, Assessment tools and methods.

This is a small conference, limited to about 40 delegates. We aim to keep things relatively informal, and to promote discussion and debate. With two dozen contributed papers and two invited talks, all outstanding, there should be plenty of material to interest a wide range of researchers.

Registration and Accommodation: The registration fee will be IR£60, and will include proceedings, lunches and one evening meal. Accommodation can be reserved in the campus residences at DCU. A single room is IR£16 per night, with full Irish breakfast an additional IR£4. Accommodation will be "first come, first served": there is a heavy demand for campus rooms in the summer. There are also several hotels and B&B establishments nearby: addresses will be provided on request.

To register, contact Alex Monaghan at the addresses given below. Payment in advance is possible but not obligatory. Please state gender (for accommodation purposes) and any unusual dietary requirements.

CSNLP
Alex Monaghan
School of Computer Applications
Dublin City University
Dublin 9, Ireland

Email registrations are preferred; please mail alex at compapp.dcu.ie (internet)
---------
Deadlines: 26th June --- Final date for registration, accommodation, meals etc. A provisional programme will be sent out in due course.

From yorick at dcs.shef.ac.uk Tue May 2 18:18:44 1995 From: yorick at dcs.shef.ac.uk (Yorick Wilks) Date: Tue, 2 May 95 18:18:44 BST Subject: Research in CS, AI, NLP and Speech Message-ID: <9505021718.AA11579@dcs.shef.ac.uk>

University of Sheffield, UK
Department of Computer Science

RESEARCH DEGREES IN COMPUTER SCIENCE
************************************

This department intends to recruit a number of postgraduate research students to commence studies in October 1995. Successful applicants will be registered for an M.Phil or Ph.D. The department has four research groups, with interests as follows:

Formal Methods and Software Engineering
---------------------------------------
Telematics, Formal Specification, Verification and Testing, Object-Oriented Languages and Design, Proof Theory.

Parallel Processing
-------------------
Parallel Database Machines, Parallel CASE Tools, Safety-Critical Systems.

Artificial Intelligence and Neural Networks
-------------------------------------------
Natural Language Processing (including corpus and lexically based methods, information extraction and pragmatics), Neural Networks, Computer Graphics, Intelligent Tutoring Systems, Computer Argumentation.

Speech and Hearing
------------------
Auditory Scene Analysis, Models of Auditory Perception, Automatic Speech Recognition.

It is expected that a number of (British Government) EPSRC awards will be available to UK residents, in addition to the University's own studentship and bursary schemes, some of which are open to all. Candidates for these awards should have a good honours degree in a relevant discipline (not necessarily Computer Science), or should attain such a degree by October 1995. Part-time registration is also possible. We especially welcome applications from (non-British) EU citizens eligible for support under the EU's Research Training Grants schemes (with application deadlines in May and September).
Application forms and further particulars are available from The Departmental Secretary, Department of Computer Science, University of Sheffield, Regent Court, 211 Portobello St, Sheffield S1 4DP. More details can also be obtained from the world-wide-web address http://www.dcs.shef.ac.uk. Informal enquiries may be addressed to Dr. Phil Green, phone 0114-282-5578, email p.green at dcs.sheffield.ac.uk, or Prof. Yorick Wilks, phone 0114-282-5563, email yorick at dcs.sheffield.ac.uk

From B344DSL at utarlg.uta.edu Tue May 2 18:22:22 1995 From: B344DSL at utarlg.uta.edu (B344DSL@utarlg.uta.edu) Date: Tue, 02 May 1995 17:22:22 -0500 (CDT) Subject: Tentative program for MIND meeting at Texas A&M Message-ID:

Conference on Neural Networks for Novel High-Order Rule Formation
Sponsored by Metroplex Institute for Neural Dynamics (MIND) and For a New Social Science (NSS)
Forum Theatre, Rudder Theatre Complex, Texas A&M University, May 20-21, 1995

Tentative Schedule and List of Abstracts

Saturday, May 20
4:30 - 5:30 PM  Karl Pribram, Radford University: Brain, Values, and Creativity

Sunday, May 21
9:00 - 10:00    John Taylor, University of London: Building the Mind from Neural Networks
10:00 - 10:45   Daniel Levine, University of Texas at Arlington: The Prefrontal Cortex and Rule Formation
10:45 - 11:00   Break
11:00 - 11:45   Sam Leven, For a New Social Science: Synesthesia and S.A.M.: Modeling Creative Process
11:45 - 12:15   Richard Long, University of Central Florida: A Computational Model of Emotion Based Learning: Variation and Selection of Attractors in ANNs
12:15 - 1:45    Lunch
1:45 - 2:30     Ron Sun, University of Alabama: An Agent Architecture with On-line Learning of Conceptual and Subconceptual Knowledge
2:30 - 3:00     Madhav Moganti, University of Missouri, Rolla: Generation of FAM Rules Using DCL Network in PCB Inspection
3:30 - 3:45     Break
3:45 - 4:30     Ramkrishna Prakash, University of Houston: Towards Neural Bases of Cognitive Functions: Sensorimotor Intelligence
4:30 - 5:15     Risto Miikkulainen, University of Texas: Learning and Performing Sequential Decision Tasks Through Symbiotic Evolution of Neural Networks
5:15 - 5:45     Richard Filer, University of York: Correlation Matrix Memory in Rule-based Reasoning and Combinatorial Rule Match

Posters (to be up continuously):
Risto Miikkulainen, University of Texas: Parsing Embedded Structures with Subsymbolic Neural Networks
Haejung Paik and Caren Marzban, University of Oklahoma: Predicting Television Extreme Viewers and Nonviewers: A Neural Network Analysis
Haejung Paik, University of Oklahoma: Television Viewing and Mathematics Achievement
Doug Warner, University of New Mexico: Modeling of an Air Combat Expert: The Relevance of Context

Abstracts for talks:

Pribram: Perturbation, internally or externally generated, produces an orienting reaction which interrupts ongoing behavior and demarcates an episode. As the orienting reaction habituates, the weightings (values) of polarizations of the junctional microprocess become (re)structured on the basis of protocritic processing. Temporary stability characterizes the new structure, which acts as a reinforcing attractor for the duration of the episode, i.e., until dishabituation (another orienting reaction) occurs. Habituation leads to extinction, and under suitable conditions an extinguished experience can become reactivated, i.e., made relevant. Innovation depends on such reactivation and is enhanced not only by adding randomness to the process, but also by adding structured variety produced by prior experience.
Taylor After a description of a global approach to the mind, the manner in which various modules in the brain can contribute will be explored, and related to the European Human Brain Project and to developments stemming from non-invasive instruments and single neuron measurements. A possible neural model of the mind will be presented, with suggestions outlined as to how it could be tested.
Levine Familiar modeling principles (e.g., Hebbian or associative learning, lateral inhibition, opponent processing, neuromodulation) could recur, in different combinations, in architectures that can learn diverse rules. These rules include, for example: go to the most novel object; alternate between two given objects; touch three given objects, without repeats, in any order. Frontal lobe damage interferes with learning all three of those rules. Hence, network models of rule learning and encoding should include a module analogous to the prefrontal cortex. They should also include modules analogous to the hippocampus, for episode setting, and the amygdala, for emotional evaluation. Through its connections with the parietal association cortex, with secondary cortical areas for individual sensory modalities, and with supplementary motor and premotor cortices, the dorsolateral part of the prefrontal cortex contains representations of movement sequences the animal has performed or thought about performing. Through connections to the amygdala via the orbital prefrontal cortex (which seems to be extensively and diffusely connected to the dorsolateral part), selective enhancement occurs of those motor sequence representations which have led to reward. I propose that the prefrontal cortex also contains "coincidence detectors" which respond to commonalities in any spatial or temporal attributes among all those reinforced representations. Such coincidence detection is a prelude to generating rules and thereby making inferences about classes of possible future movements. This general prefrontal function includes the function of tying together working memory representations that has been ascribed to it by other modelers (Kimberg & Farah, 1993) but goes beyond it. It also encompasses the ability to generate new rules, in coordination with the hippocampus, if current rules prove to be unsatisfactory.
Leven (To be added)
Long (with Leon Hardy) We propose a novel neural network architecture which is based on a broader theory of learning and cognitive self-organization. The model is loosely based on the mammalian brain's limbic and cortical neurophysiology and possesses a number of unique and useful properties. This architecture uses a variation and selection algorithm similar to those found in evolutionary programming (EP) and genetic algorithms (GA). In this case, however, selection does not operate on bit strings, or even neuronal weights; instead, variation and selection act on attractors in a dynamical system. Furthermore, hierarchical processing is imposed on a single neuronal layer in a manner that is easily scalable by simply adding additional nodes. This is accomplished using a large, uniform-intensity input signal that sweeps across a neural layer. This "sweeping activation" alternately pushes nodes into their active threshold regime, thus turning them "on". In this way, the active portion of the network settles into an attractor, becoming the preprocessed "input" to the newly recruited nodes.
The attractor neural network (ANN) which forms the basis of this system is similar to a Hopfield neural network in that it has the same node update rule and is asynchronous, but differs from a traditional Hopfield network in two ways. First, unlike a fully connected Hopfield network, we use a sparse connection scheme based on a random walk or gaussian distribution. Second, we allow for positive-weighted self connections, which dramatically improve attractor stability when negative or inhibitory weights are allowed. This model is derived from a more general theory of emotion and emotion-based learning in the mammalian brain. The theory postulates that negative and positive emotion are synonymous with variation and selection respectively. The theory further classifies various emotions according to their role in learning, and so makes predictions as to the functions of various brain regions and their interconnections.
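A minimal sketch of the kind of attractor dynamics described in this abstract, for readers who want to experiment (Python/NumPy; the sparse random connectivity, positive self-connections, and asynchronous Hopfield-style updates follow the description above, but the Hebbian storage rule and all parameter values are illustrative assumptions, not the authors' implementation):

import numpy as np

rng = np.random.default_rng(0)
n, p_conn = 100, 0.1                          # units and connection density (assumed)
patterns = rng.choice([-1, 1], size=(3, n))   # toy stored patterns

# Sparse symmetric connection mask, standing in for the abstract's
# random-walk/gaussian connection scheme
mask = rng.random((n, n)) < p_conn
mask = np.triu(mask, 1)
mask = mask | mask.T

# Hebbian outer-product weights restricted to the mask (illustrative)
W = (patterns.T @ patterns) / n * mask
np.fill_diagonal(W, 0.5)                      # positive-weighted self connections

def recall(x, sweeps=20):
    # Asynchronous updates: one randomly chosen unit at a time,
    # using the usual Hopfield sign rule
    x = x.copy()
    for _ in range(sweeps * n):
        i = rng.integers(n)
        x[i] = 1 if W[i] @ x >= 0 else -1
    return x

noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=n)  # flip about 25% of bits
print(np.mean(recall(noisy) == patterns[0]))  # fraction of bits recovered

The diagonal self-connection weight is the knob the abstract singles out as improving attractor stability in the presence of inhibitory weights.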
Sun In developing autonomous agents, we usually emphasize only the procedural and situated knowledge, ignoring generic and declarative knowledge that is more explicit and more widely applicable. On the other hand, in developing AI symbolic reasoning models, we usually emphasize only the declarative and context-free knowledge. In order to develop versatile cognitive agents that learn in situated contexts and generalize resulting knowledge to different environments, we explore the possibility of learning both declarative and procedural knowledge in a hybrid connectionist architecture. The architecture is based on the two-level idea proposed earlier by the author. Declarative knowledge is represented conceptually, while procedural knowledge is represented subconceptually. The architecture integrates embodied reactions, rules, learning, and decision-making in a unified framework, and structures different learning components (including Q-learning and rule induction) in a synergistic way to perform on-line and integrated learning.
Moganti Many vision problems are solved using knowledge-based approaches. The conventional knowledge-based systems use domain experts to generate the initial rules and their membership functions, and then by trial and error refine the rules and membership functions to optimize the final system's performance. However, it would be difficult for human experts to examine all the input-output data in complex vision applications to find and tune the rules and functions within the system. In this presentation, the speaker introduces the application of fuzzy logic in complex computer vision applications. It will be shown that neural networks could be effectively used in the estimation of fuzzy rules, thus making the knowledge acquisition simple, robust, and complete. As an example application, the problem of visual inspection of defects in printed circuit boards (PCBs) will be presented. The speaker will present the work carried out by him where the inspection problem is characterized as a pattern classification problem. The process involves a two-level classification of the printed circuit board image sub-patterns into either a non-defective class or a defective class. The PCB sub-patterns are checked for geometric shape and dimensional verification using fuzzy information extracted from the scan-line grid with an adaptive fuzzy data algorithm that uses differential competitive learning (DCL) in updating winning synaptic vectors. The fuzzy feature vectors drastically reduce the complexity of conventional inspection systems. The presentation concludes with experimental results showing the superiority of the approach. It will be shown that the basic method presented is by no means limited to the PCB inspection application. The model can easily be extended to other vision problems like silicon wafer inspection, automatic target recognition (ATR) systems, etc.
Prakash (with Haluk Ogmen) A developmental neural network model was proposed (Ogmen, 1992, 1995) that ties higher level cognitive functions to lower level sensorimotor intelligence through stage transitions and the "decalage vertical" (Piaget, 1975). Our neural representation of a sensorimotor reflex comprises sensory, motor, and affective elements. The affective elements establish an internal organization: the primary affective and secondary affective elements dictate the totality and the relationship aspects of the organization, respectively. In order to study sensorimotor intelligence in detail, the network was elaborated for the sucking and rooting reflexes. During the first two sub-stages of the sensorimotor stage, as proposed by Piaget (1952), assimilation predominates over accommodation. We will present simulations of recognitory and functional assimilations in the sucking reflex, and reciprocal assimilation between the sucking and rooting reflexes. We will then consider possible subcortical neural substrates for our sensorimotor model of the rooting reflex in order to bring the model closer to neurophysiology. The subcortical areas believed to be involved in the rooting reflex are the spinal trigeminal nuclei which receive facial somatosensory afferents and the cervical motor neurons that control the neck muscles. Neurons in these regions are proposed to correspond to the sensory and motor elements of our model, respectively. The reticular formation, which receives and sends projections to these two regions and which receives inputs from visceral regions, is a good candidate for the locus of the affective elements of our model. In this talk, we will discuss these three areas and their mapping to our model in further detail.
Miikkulainen A new approach called SANE (Symbiotic, Adaptive Neuro-Evolution) for learning and performing sequential decision tasks is presented. In SANE, a population of neurons is evolved through genetic algorithms to form a neural network for the given task. Compared to problem-general heuristics, SANE forms more effective decision strategies because it learns to utilize domain-specific information. Applications of SANE to controlling the inverted pendulum, performing value ordering in constraint satisfaction search, and focusing minimax search in game playing will be described and compared to traditional methods.
Filer (with James Austin) This abstract is taken from a paper that presents Correlation Matrix Memory, a form of binary associative neural network, and the potential of using this technology in expert systems. The particular focus of the paper is a comparison with an existing database technique used for achieving partial match, Multi-level Superimposed Coding (Kim & Lee, 1992), and how using Correlation Matrix Memory (CMM) enables very efficient rule matching, including a combinatorial rule match in linear time. We achieve this utilising a comparatively simple network approach, which has obvious implications for advancing models of reasoning in the brain.
Rule-based reasoning has been the subject of much work in AI, and some expert systems have proved very useful, e.g., PROSPECTOR (Gaschnig, 1980) and DENDRAL (Lindsay et al., 1980), but it is clear that the usefulness of an expert system is not necessarily the result of a particular architecture. We suggest that efficient partial match is a fundamental requirement, and that combinatorial pattern match is a facility directly related to dealing with partial information; a brute-force approach, however, invariably takes combinatorial time. By combinatorial match we mean the ability to answer a sub-maximally specified query that should succeed if a subset of the specified attributes match (i.e., specify A attributes and a number N, N <= A, and the query succeeds if any N of the A attributes match). This sort of match is fundamental, not only in knowledge-based reasoning, but also in (vision) occlusion analysis. Touretzky and Hinton (1988) were the first to emulate a symbolic, rule-based system in a connectionist architecture. A connectionist approach held the promise of better performance with partial information and of being generally less brittle. Whether or not this is the case, Touretzky and Hinton usefully demonstrated that connectionist networks are capable of symbolic reasoning. This paper describes CMM, which maintains a truly distributed knowledge representation, and the use of CMM as an inference engine (Austin, 1994). This paper is concerned with some very useful properties; we believe we have shown a fundamental link between database technology and an artificial neural network technology that has parallels in neurobiology.
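A toy illustration of the "any N of A" partial match described above (a sketch only: the binary outer-product storage and single matrix-vector recall follow the general correlation-matrix-memory literature, e.g. Willshaw-style memories, and are not necessarily the exact scheme of this paper):

import numpy as np

n_attrs, n_rules = 8, 4

# Toy rule antecedents as binary attribute vectors
rules = np.zeros((n_rules, n_attrs), dtype=np.uint8)
rules[0, [0, 1, 2]] = 1
rules[1, [2, 3]] = 1
rules[2, [4, 5, 6, 7]] = 1
rules[3, [0, 7]] = 1

# Correlation matrix: OR of outer products of each antecedent
# with a one-hot code for its rule
W = np.zeros((n_attrs, n_rules), dtype=np.uint8)
for r in range(n_rules):
    W |= np.outer(rules[r], np.eye(n_rules, dtype=np.uint8)[r])

def match(query, n_required):
    # One matrix-vector product (linear time), then a threshold at
    # n_required shared attributes: the combinatorial partial match
    scores = query @ W
    return np.flatnonzero(scores >= n_required)

query = np.zeros(n_attrs, dtype=np.uint8)
query[[4, 5, 6]] = 1                 # 3 of rule 2's 4 antecedent attributes
print(match(query, n_required=3))    # -> [2]: the under-specified query matches

The point of the thresholded recall is that no enumeration of attribute subsets is needed; lowering n_required below A is what makes the match combinatorial without combinatorial cost.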
Abstracts for posters:
Miikkulainen A distributed neural network model called SPEC for processing sentences with recursive relative clauses is described. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules. The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. SPEC exhibits plausible memory degradation as the depth of the center embeddings increases, its memory is primed by earlier constituents, and its performance is aided by semantic constraints between the constituents. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system. This way, in contrast to earlier subsymbolic systems, parsing is modeled as a controlled high-level process rather than one based on automatic reflex responses.
Paik and Marzban In an attempt to better understand the attributes of the "average" viewer, an analysis of the data characterizing television nonviewers and extreme viewers is performed. The data is taken from the 1988, 1989, and 1990 General Social Surveys (GSS), conducted by the National Opinion Research Center (NORC). Given the assumption-free, model-independent representation that a neural network can offer, we perform such an analysis and discuss the significance of the findings. For comparison, a linear discriminant analysis is also performed, and is shown to be outperformed by the neural network. Furthermore, the set of demographic variables is identified as the strongest predictor of nonviewers, and the combination of family-related and social activity-related variables as the strongest attribute of extreme viewers.
Paik This study examines the correlation between mathematics achievement and television viewing, and explores the underlying processes. The data consist of 13,542 high school seniors from the first wave of the High School and Beyond project, conducted by the National Opinion Research Center on behalf of the National Center for Education Statistics. A neural network is employed for the analysis; unlike methods employed in prior studies, with no a priori assumptions about the underlying model or the distributions of the data, the neural network yields a correlation impervious to errors or inaccuracies arising from possibly violated assumptions. A curvilinear relationship is found, independent of viewer characteristics, parental background, parental involvement, and leisure activities, with a maximum at about one hour of viewing, and persistent upon the inclusion of statistical errors. The choice of mathematics performance as the measure of achievement elevates the found curvilinearity to a content-independent status, because of the lack of television programs dealing with high school senior mathematics. It is further shown that the curvilinearity is replaced with an entirely positive correlation across all hours of television viewing, for lower ability students. A host of intervening variables, and their contributions to the process, are examined. It is shown that the process, and especially the component with a positive correlation, involves only cortical stimulations brought about by the formal features of television programs.
Warner A modeling approach was used to investigate the theorized connection between expertise and context. Using the domain of air-combat maneuvering, an expert was modeled both with and without respect to context. Neural networks were used for each condition. One network used a simple multi-layer perceptron with inputs for five consecutive time segments from the data as well as a quantitative descriptor for context in this domain. The comparison used a set of networks with identical structure to the first network. The same data were provided to each condition; however, the data were divided by context before being provided to separate networks for the comparison. It was discovered, after training and generalization testing on all networks, that the comparison condition using context-explicit networks performed better for strict definitions of offensive context. This distinction implies the use of context in an air-combat task by the expert human pilot. Simulating problems using a standard model and comparing it against the same model incorporating hypothesized explicit divisions within the data should prove to be a useful tool in psychology.
Transportation and Hotel Information Texas A&M is in College Station, TX, about 1.5 to 2 hours NW of Houston and NE of Austin. Bryan/College Station Airport (Easterwood) is only about five minutes from the conference site, and is served by American (American Eagle), Continental, and Delta (ASA). The College Station Hilton (409-693-7500) has a block of rooms reserved for the Creative Concepts Conference (of which MIND is a satellite) at $60 a night, and a shuttle bus to and from the A&M campus. There are also rooms available at the Memorial Student Union (409-845-8909) on campus for about $40 a night. Other nearby hotels include the Comfort Inn (409-846-7333), Hampton Inn (409-846-0184), LaQuinta (409-696-7777), and Ramada-Aggieland (409-693-9891), all of which have complimentary shuttles to campus.
From chiva at biologie.ens.fr Wed May 3 09:54:37 1995 From: chiva at biologie.ens.fr (Emmanuel CHIVA) Date: Wed, 3 May 1995 15:54:37 +0200 Subject: Groupe de BioInformatique WWW Home Page Message-ID: <9505031354.AA12362@apollon.ens.fr> ** ANNOUNCING ** There is now a homepage for the Groupe de BioInformatique, Ecole Normale Superieure, Paris (France) at the following URL: http://www.ens.fr/bioinfo/www which includes: - the description of our research areas (e.g., the animat approach, NNets, GAs, image processing and vision, cell metabolism), complete bibliography (some articles can be retrieved) and personal pages - the Adaptive Behavior journal homepage - the SAB conference homepage - numerous pointers to additional related servers Please send reactions and comments to chiva at wotan.ens.fr ============================================================================ Ecole Normale Superieure | Emmanuel M. Chiva | Departement de Biologie | | Groupe de BioInformatique | Tel: + 33 1 44323633 | 46, rue d'Ulm | Fax: + 33 1 44323901 | 75230 Paris cedex 05 France | email: chiva at wotan.ens.fr | ============================================================================
From rob at comec4.mh.ua.edu Wed May 3 10:34:15 1995 From: rob at comec4.mh.ua.edu (Robert Elliott Smith) Date: Wed, 03 May 95 08:34:15 -0600 Subject: Papers to be presented at ICGA6 Message-ID: <9505031334.AA16118@comec4.mh.ua.edu> The organizers of the Sixth International Conference on Genetic Algorithms, to be held in Pittsburgh, PA, July 15-19, 1995, are pleased to present the following list of papers that will be presented at the conference. This list is followed by registration information for the conference. =================== ICGA-95: PAPERS ACCEPTED FOR PRESENTATION SELECTION Generalized Convergence Models for Tournament- and (mu,lambda)-Selection Thomas Baeck A Mathematical Analysis of Tournament Selection Tobias Blickle, Lothar Thiele On Decentralizing Selection Algorithms Kenneth De Jong, Jayshree Sarma Finding Multimodal Solutions Using Restricted Tournament Selection Georges Harik Analysis of Genetic Algorithms Evolution under Pure Selection Filippo Neri, Lorenza Saitta MUTATION AND RECOMBINATION A New Class of Crossover Operators for Numerical Optimization Jaroslaw Arabas, Jan J. Mulawka, Jacek Pokrasniewicz On Multi-Dimensional Encoding/Crossover Thang N. Bui, Byung-Ro Moon On the Adaptation of Arbitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adaptation Nikolaus Hansen, Andreas Ostermeier, Andreas Gawelczyk The Nature of Mutation in Genetic Algorithms Robert Hinterding, Harry Gielewski, T. C. Peachey Crossover, Macromutation, and Population-based Search Terry Jones What Have You Done for Me Lately? Adapting Operator Probabilities in a Steady-State Genetic Algorithm Bryant A. Julstrom Metabits: Generic Endogenous Crossover Control Jim Levenick Toward More Powerful Recombinations Byung Ro Moon, Andrew B. Kahng Fuzzy Recombination for the Continuous Breeder Genetic Algorithm H.-M. Voigt, H. Muhlenbein, D. Cvetkovic EVOLUTIONARY COMPUTATION TECHNIQUES The Distributed Genetic Algorithm Revisited Theodore C. Belding Solving Constraint Satisfaction Problems Using a Genetic/Systematic Search Hybrid That Realizes When to Quit James Bowen, Gerry Dozier Enhancing GA Performance Through Incest Prohibitions Based on Ancestry Robert Craighurst, Worthy Martin A Comparison of Parallel and Sequential Niching Methods Samir W.
Mahfoud Selectively Destructive Re-start Jonathan Maresky, Yuval Davidor, Daniel Gitler, Gad Aharoni, Amnon Barak Genetic Algorithms, Numerical Optimization, and Constraints Zbigniew Michalewicz, Sita S. Raghavan A New Diploid Scheme and Dominance Change Mechanism for Non-Stationary Function Optimization Khim Peow Ng, Kok Cheong Wong When Seduction Meets Selection Edmund Ronald Population-Oriented Simulated Annealing: A Genetic/Thermodynamic Hybrid Approach to Optimization James M. Varanelli, James P. Cohoon FORMAL ANALYSIS OF EVOLUTIONARY COMPUTATION AND PROBLEM DIFFICULTY Fitness Distance Correlation as a Measure of Problem Difficulty for Genetic Algorithms Terry Jones, Stephanie Forrest Signal-to-noise, Crosstalk and Long Range Problem Difficulty in Genetic Algorithms Hillol Kargupta Efficient Tracing of the Behaviour of Genetic Algorithms using Expected Values of Bit and Walsh Products J.N. Kok, P. Floreen Optimization Using Replicators Anil Menon, Kishan Mehrotra, Chilukuri K. Mohan, Sanjay Ranka Epistasis in Genetic Algorithms: An Experimental Design Perspective Colin Reeves, Christine Wright Epistasis in Periodic Programs Stefan Voget Hyperplane Ranking in Simple Genetic Algorithms Darrell Whitley, Keith Mathias, Larry Pyeatt Building Better Test Functions D. Whitley, K. Mathias, S. Rana, J. Dzubera GENETIC PROGRAMMING The Evolution of Agents that Build Mental Models and Create Simple Plans Using Genetic Programming David Andre Causality in Genetic Programming Dana H. Ballard, Justinian Rosca Solving Complex Problems with Genetic Algorithms Bertrand Daniel Dunay, Frederic E. Petry Strongly Typed Genetic Programming in Evolving Cooperation Strategies Thomas Haynes, Roger L. Wainwright, Sandip Sen, Dale A. Schoenefeld Temporal Data Processing Using Genetic Programming Hitoshi Iba, Hugo de Garis, Taisuke Sato Two Ways of Discovering the Size and Shape of a Computer Program to Solve a Problem John R. Koza Evolving Data Structures Using Genetic Programming W.B. Langdon Accurate Replication in Genetic Programming Nicholas Freitag McPhee, Justin Darwin Miller Complexity Compression and Evolution Peter Nordin, Wolfgang Banzhaf Evolving Turing-Complete Programs for a Register Machine with Self-modifying Code Peter Nordin, Wolfgang Banzhaf CO-EVOLUTION AND EMERGENT ORGANIZATION Biological Symbiosis as a Metaphor for Computational Hybridization Jason M. Daida, Steven J. Ross, Brian C. Hannan Evolving Globally Synchronized Cellular Automata Rajarshi Das, James P. Crutchfield, Melanie Mitchell, James E. Hanson The Evolution of Emergent Organization in Immune System Gene Libraries Ron Hightower, Stephanie Forrest, Alan S. Perelson Co-evolution of Non-Deterministic Incremental Algorithms as a New Approach for Search in State Spaces Hugues Juille The Symbiotic Evolution of Solutions and their Representations Jan Paredis A Coevolutionary Approach to Learning Sequential Decision Rules Mitchell A. Potter, Kenneth A. De Jong, John J. Grefenstette Methods for Competitive Co-evolution: Finding Opponents Worth Beating Christopher D. Rosin, Richard K. Belew EVOLUTIONARY COMPUTATION IN COMBINATION WITH MACHINE LEARNING OR NEURAL NETS Evolution in Multi-agent Systems: Evolving Communicating Classifier Systems for Gait in a Quadrupedal Robot Lawrence Bull, Terence C.
Fogarty Adaptive Distributed Routing using Evolutionary Fuzzy Control Brian Carse, Terry Fogarty, Alistair Munro Relational Schemata: A Way to Improve the Expressiveness of Classifiers Philippe Collard, Cathy Escazut The Mating Pool: A Testbed for Experiments in the Evolution of Symbol Systems Lawrence Davis, David Orvosh Genetic Algorithm Enlarges the Capacity of Associative Memory Akira Imada, Keijiro Araki A Genetic Algorithm for Optimizing Fuzzy Decision Trees Cezary Z. Janikow PLEASE: A Prototype Learning System using Genetic Algorithms Leslie Knight, Sandip Sen A Parallel Genetic Algorithm for Concept Learning Filippo Neri, Attilio Giordana Evolutionary Grown Semi-Weighted Neural Networks Steve G. Romaniuk Combining Genetic Algorithms with Memory Based Reasoning John W. Sheppard, Steven L. Salzberg Cellular Encoding Applied to Neurocontrol Darrell Whitley, Frederic Gruau, Larry Pyeatt EVOLUTIONARY COMPUTATION APPLICATIONS I Determining Factorization: A New Encoding Scheme for Spanning Trees Applied to the Probabilistic Minimum Spanning Tree Problem Faris N. Abuali, Roger L. Wainwright, Dale A. Schoenefeld A Hybrid Genetic Algorithm for the Maximum Clique Problem Thang Nguyen Bui, Paul H. Eppley Finding (Near-)Optimal Steiner Trees in Large Graphs Henrik Esbensen Solving Equal Piles with the Grouping Genetic Algorithm Emanuel Falkenauer A Study of Genetic Algorithm Hybrids for Facility Layout Problems Kazuhiro Kado, Dave Corne, Peter Ross An Efficient Genetic Algorithm for Job Shop Scheduling Problems Shigenobu Kobayashi, Isao Ono, Masayuki Yamamura A Comparative Study of Genetic Search Kihong Park Inference of Stochastic Regular Grammars by Massively Parallel Genetic Algorithms Markus Schwehm, Alexander Ost Genetic Algorithm Approach to the Search for Golomb Rulers Stephen W. Soliday, Abdollah Homaifar, Gary L. Lebby An Adaptive Clustering Method using a Geometric Shape for Vehicle Routing Problems with Time Windows Sam R. Thangiah EVOLUTIONARY COMPUTATION APPLICATIONS II Applying Genetic Algorithms to Outlier Detection Kelly D. Crawford, Roger L. Wainwright Design of Statistical Quality Control Procedures Using Genetic Algorithms Aristides T. Hatjimihail, Theophanes T. Hatjimihail A Segregated Genetic Algorithm for Constrained Structural Optimization R. Le Riche, C. Knopf-Lenoir, R.T. Haftka A Preliminary Study of Genetic Data Compression Wee K. Ng A Standard GA Approach to Native Protein Conformation Prediction Arnold L. Patton, W. F. Punch, III, E. D. Goodman Using GAs to Characterize Workloads Chrisila C. Pettey, Thomas D. Wagner, Lawrence W. Dowdy Development of the Genetic Function Approximation Algorithm David Rogers A Parallel Genetic Algorithm for Multi-objective Microprocessor Design Timothy J. Stanley, Trevor Mudge A Hybrid Genetic Algorithm for Highly Constrained Timetabling Problems Rupert Weare, Edmund Burke, Dave Elliman Evolutionary Computation in Air Traffic Control Planning C.H.M. van Kemenade, C.F.W. Hendriks, J.N. Kok Use of the Genetic Algorithm for Load Balancing of Sugar Beet Presses Frank Vavak, Terence C. Fogarty, Philip Cheng ========= Registration Information: 6TH INTERNATIONAL CONFERENCE ON GENETIC ALGORITHMS July 15-19, 1995 University of Pittsburgh Pittsburgh, Pennsylvania, USA CONFERENCE COMMITTEE Stephen F. Smith, Chair Carnegie Mellon University Peter J. Angeline, Finance Loral Federal Systems Larry J. Eshelman, Program Philips Laboratories Terry Fogarty, Tutorials University of the West of England, Bristol Alan C.
Schultz, Workshops Naval Research Laboratory Alice E. Smith, Local Arrangements University of Pittsburgh Robert E. Smith, Publicity University of Alabama The 6th International Conference on Genetic Algorithms (ICGA-95) brings together an international community from academia, government, and industry interested in algorithms suggested by the evolutionary process of natural selection, and will include pre-conference tutorials, invited speakers, and workshops. Topics will include: genetic algorithms and classifier systems, evolution strategies, and other forms of evolutionary computation; machine learning and optimization using these methods, their relations to other learning paradigms (e.g., neural networks and simulated annealing), and mathematical descriptions of their behavior. The conference host for 1995 will be the University of Pittsburgh, located in Pittsburgh, Pennsylvania. The conference will begin Saturday afternoon, July 15, for those who plan on attending the tutorials. A reception is planned for Saturday evening. The conference meeting will begin Sunday morning July 16 and end Wednesday afternoon, July 19. The complete conference program and schedule will be sent later to those who register. TUTORIALS ICGA-95 will begin with three parallel sessions of tutorials on Saturday. Conference attendees may attend up to three tutorials (one from each session) for a supplementary fee (see registration form). Tutorial Session I 11:00 a.m.-12:30 p.m. I.A Introduction to Genetic Algorithms Melanie Mitchell - A brief history of Evolutionary Computation. The appeal of evolution. Search spaces and fitness landscapes. Elements of Genetic Algorithms. A Simple GA. GAs versus traditional search methods. Overview of GA applications. Brief case studies of GAs applied to: the Prisoner's Dilemma, Sorting Networks, Neural Networks, and Cellular Automata. How and why do GAs work? I.B Application of Genetic Algorithms Lawrence Davis - There are hundreds of real-world applications of genetic algorithms, and a considerable body of engineering expertise has grown up as a result. This tutorial will describe many of those principles, and present case studies demonstrating their use. I.C Genetics-Based Machine Learning Robert Smith - This tutorial discusses rule-based, neural, and fuzzy techniques that utilize GAs for exploration in the context of reinforcement learning control. A rule-based technique, the learning classifier system (LCS), is shown to be analogous to a neural network. The integration of fuzzy logic into the LCS is also discussed. Research issues related to GA-based learning are overviewed. The application potential for genetics-based machine learning is discussed. Tutorial Session II 1:30-3:00 p.m. II.A Basic Genetic Algorithm Theory Darrell Whitley - Hyperplane Partitions and the Schema Theorem. Binary and Nonbinary Representations; Gray coding, Static hyperplane averages, Dynamic hyperplane averages and Deception, the K-armed bandit analogy and Hyperplane ranking. II.B Basic Genetic Programming John Koza - Genetic Programming is an extension of the genetic algorithm in which populations of computer programs are evolved to solve problems. The tutorial explains how crossover is done on program trees and illustrates how the user goes about applying genetic programming to various problems of different types from different fields. Multi-part programs and automatically defined functions are briefly introduced.
II.C Evolutionary Programming David Fogel - Evolutionary programming, which originated in the early 1960s, has recently been successfully applied to difficult, diverse real-world problems. This tutorial will provide information on the history, theory, and practice of evolutionary programming. Case-studies and comparisons will be presented. Tutorial Session III 3:30-5:00 p.m. III.A Advanced Genetic Algorithm Theory Darrell Whitley - Exact Non-Markov models of simple genetic algorithms. Markov models of simple genetic algorithms. The Schema Theorem and Price's Theorem. Convergence Proofs, Exact Non-Markov models for permutation based representations. III.B Advanced Genetic Programming John Koza - The emphasis is on evolving multi-part programs containing reusable automatically defined functions in order to exploit the regularities of problem environments. ADFs may improve performance, improve parsimony, and provide scalability. Recursive ADFs, iteration-performing branches, various types of memories (including indexed memory and mental models), architecturally diverse populations, and point typing are explained. III.C Evolution Strategies Hans-Paul Schwefel and Thomas Baeck - Evolution Strategies in the context of their historical origin for optimization in Berlin in the 1960s. Comparison of the computer versions (1+1) and (10,100) ES with classical optimum seeking methods for parameter optimization. Formal descriptions of ES. Global convergence conditions. Time efficiency in some simple situations. The role of recombination. Auto-adaptation of internal models of the environment. Multi-criteria optimization. Parallel versions. Short list of application examples. GETTING TO PITTSBURGH The Pittsburgh International Airport is served by most of the major airlines. Information on transportation from the airport and directions to the University of Pittsburgh campus will be sent along with your conference registration confirmation letter. LODGING University Holiday Inn, 100 Lytton Avenue two blocks from convention site $92/day (single) $9/day parking charge pool (indoor), exercise facilities Reserve by June 18. Call 412-682-6200. Hampton Inn, 3315 Hamlet Street 12 blocks from convention site $72/day (single) free parking, breakfast, and one-way airport transportation Reserve by July 1. Call 412-681-1000. Howard Johnson's, 3401 Boulevard of the Allies 12 blocks from convention site $56/day (single) free parking and Oakland transportation pool (outdoor) Reserve by June 13. Call 412-683-6100. Sutherland Hall (dorm), University Drive-Pitt campus 10 blocks from convention site (steep hill) $30/day, single no amenities (phone, TV, etc.) shared bathroom Reserve by July 1. Call 412-648-1100. CONFERENCE FEES REGISTRATION FEE Registrations received by June 11 are $250 for participants and $100 for students. Registrations received on or after June 12 and walk-in registrations at the conference will be $295 for participants and $125 for students. Included in the registration fee are entry to all technical sessions, several lunches, coffee breaks, reception Saturday evening, conference materials, and conference proceedings. TUTORIALS There is a separate fee for the Saturday tutorial sessions. Attendees may register for up to three tutorials (one from each tutorial session). The fee for one tutorial is $40 for participants and $15 for students; two tutorials, $75 for participants and $25 for students; three tutorials, $110 for participants and $35 for students.
The deadline to register without a late fee is June 11. After this date, participants and students will be assessed a flat $20 late fee, whether they register for one, two, or all three tutorials. CONFERENCE BANQUET Not included in the registration fee is the ticket for the banquet. Participants may purchase banquet tickets for an additional $30. Note - Please purchase your banquet tickets now - you will be unable to buy them upon arrival. GUEST TICKETS Guest tickets for the Saturday evening reception are $10 each; guest tickets for the conference banquet are $30 each for adults and $10 each for children. Note - Please purchase additional tickets now - you will be unable to buy them upon arrival. CANCELLATION/REFUND POLICY For cancellations received up to and including June 1, a full refund will be given minus a $25 handling fee. FINANCIAL ASSISTANCE FOR STUDENTS With support from the Naval Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, a limited fund has been set aside to assist students with travel expenses. Students should have their advisor certify their student status and that sufficient funds are not available. Students interested in obtaining such assistance should send a letter before May 22 describing their situation and needs to: Peter J. Angeline, c/o Advanced Technologies Dept, Loral Federal Systems, State Route 17C, Mail Drop 0210, Owego, NY 13827-3994 USA. TO REGISTER Early registration is recommended. You may register by mail, fax, or email using a credit card (MasterCard or VISA). You may also pay by check if registering by mail. Note: Students must also send with their registration a photocopy of their valid university student ID or a letter from a professor. Complete the registration form and return with payment. If more than one registrant from the same institution will be attending, make additional copies of the registration form. Mail ICGA 95 Department of Industrial Engineering University of Pittsburgh 1048 Benedum Hall Pittsburgh, PA 15261 USA Fax Fax the registration form to 412-624-9831 Email Receive email form by contacting: icga at engrng.pitt.edu Up-to-date conference information is available on the World Wide Web (WWW) http://www.aic.nrl.navy.mil/galist/icga95/ CALL FOR ICGA '95 WORKSHOP PROPOSALS ICGA workshop proposals are now being solicited. Workshops tend to range from informal sessions to more formal sessions with presentations and working notes. Each accepted workshop will be supplied with space and an overhead projector. VCRs might be available. If you are interested in organizing a workshop, send a workshop title, short description, proposed format, and name of the organizers to the workshop coordinator by April 15, 1995. Alan C.
Schultz - schultz at aic.nrl.navy.mil Code 5510, Navy Center for Artificial Intelligence Naval Research Laboratory Washington DC 20375-5337 USA REGISTRATION FORM Prof / Dr / Mr / Ms / Mrs Name ______________________________________________________ Last First MI I would like my name tag to read _____________________________________________ Affiliation/Business ______________________________________________________ Address ______________________________________________________ City ______________________________________________________ State ___________________ Zip ________________________ Country_____________________________________________ Telephone (include area code) Business _______________________________ Home______________________________ FEES (all figures in US dollars) Conference Registration Fee By June 11 ___ participant, $250 ___ student, $100 =$_________ On or after June 12 ___ participant, $295 ___ student, $125 =$_________ July 15 Tutorials Select up to three tutorials, but no more than one tutorial per tutorial session. Tutorial Session I: ___I.A Introduction to Genetic Algorithms ___I.B Application of Genetic Algorithms ___I.C Genetics-Based Machine Learning Tutorial Session II: ___II.A Basic Genetic Algorithm Theory ___II.B Basic Genetic Programming ___II.C Evolutionary Programming Tutorial Session III: ___III.A Advanced Genetic Algorithm Theory ___III.B Advanced Genetic Programming ___III.C Evolution Strategies Tutorial Registration Fee By June 11 ___one tutorial: participant, $40 student, $15 ___two tutorials: participant, $75 student, $25 = $_________ ___three tutorials: participant, $110 student, $35 On or after June 12, participants and students add a $20 late fee for tutorials = $_________ Banquet Ticket (not included in the Registration Fee; no tickets may be purchased upon arrival) participants/adult guest #______ ticket(s) @ $30 = $_________ child #______ ticket(s) @ $10 = $_________ Additional Saturday reception tickets (no tickets may be purchased upon arrival) guest #______ ticket(s) @ $10 = $_________ TOTAL (US dollars) $____________ METHOD OF PAYMENT ___ Check (payable to the University of Pittsburgh, US banks only) ___ MasterCard ___ VISA #__________________________________________ Expiration Date ____________________ Signature of card holder ______________________________________________ Note: Students must submit with their registration a photocopy of their valid student ID or a letter from a professor. Mail ICGA 95, Department of Industrial Engineering, University of Pittsburgh, 1048 Benedum Hall, Pittsburgh, PA 15261 USA Fax 412-624-9831 Email To receive email form: icga at engrng.pitt.edu World Wide Web (WWW) For up-to-date conference information: http://www.aic.nrl.navy.mil/galist/icga95/ ------------------------------------------- Robert Elliott Smith Department of Engineering Science and Mechanics Room 210 Hardaway Hall The University of Alabama Box 870278 Tuscaloosa, Alabama 35487 <> rob at comec4.mh.ua.edu <> (205) 348-1618 <> (205) 348-7240 <> http://hamton.eng.ua.edu/college/home/mh/faculty/rsmith/Web/smith.html -------------------------------------------
From sbh at eng.cam.ac.uk Thu May 4 14:49:41 1995 From: sbh at eng.cam.ac.uk (S.B. Holden) Date: Thu, 04 May 1995 14:49:41 BST Subject: New technical report Message-ID: <199505041349.19030@club.eng.cam.ac.uk> The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department.
Average-Case Learning Curves for Radial Basis Function Networks Sean B. Holden and Mahesan Niranjan Technical Report CUED/F-INFENG/TR.212 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract The application of statistical physics to the study of the learning curves of feedforward connectionist networks has, to date, been concerned mostly with networks that do not include hidden layers. Recent work has extended the theory to networks such as committee machines and parity machines; however, these are not networks that are often used in practice, and an important direction for current and future research is the extension of the theory to practical connectionist networks. In this paper we investigate the learning curves of a class of networks that has been widely and successfully applied to practical problems: the Gaussian radial basis function networks (RBFNs). We address the problem of learning linear and nonlinear, realizable and unrealizable, target rules from noise-free training examples using a stochastic training algorithm. Expressions for the generalization error, defined as the expected error for a network with a given set of parameters, are derived for general Gaussian RBFNs, for which all parameters, including centres and spread parameters, are adaptable. Specializing to the case of RBFNs with fixed basis functions we then study the learning curves for these networks in the limit of high temperature.
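For readers unfamiliar with the network class studied here, a toy sketch of a Gaussian RBFN with adaptable centres and spread parameters, plus a Monte Carlo estimate of the generalization error in the abstract's sense of expected error for fixed parameters (the sizes, target rule, and plain stochastic gradient step are illustrative assumptions, not the report's training algorithm):

import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 5                          # input dimension, number of basis functions

centres = rng.normal(size=(m, d))    # adaptable centres c_j
spreads = np.ones(m)                 # adaptable spread parameters s_j
weights = rng.normal(size=m)         # output weights w_j

def phi(x):
    # Gaussian basis functions: exp(-||x - c_j||^2 / (2 s_j^2))
    return np.exp(-np.sum((x - centres) ** 2, axis=1) / (2 * spreads ** 2))

def rbfn(x):
    return weights @ phi(x)

def target(x):                       # toy target rule
    return np.sin(x.sum())

# Stochastic training of the output weights with fixed basis functions
# (the specialization studied at the end of the abstract)
for _ in range(2000):
    x = rng.normal(size=d)
    weights += 0.05 * (target(x) - rbfn(x)) * phi(x)

# Generalization error = expected squared error for the current parameters,
# estimated here by Monte Carlo over fresh examples
xs = rng.normal(size=(500, d))
print(np.mean([(rbfn(x) - target(x)) ** 2 for x in xs]))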
************************ How to obtain a copy ************************ a) Via FTP: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get holden_tr212.ps.Z ftp> quit unix> uncompress holden_tr212.ps.Z unix> lpr holden_tr212.ps (or however you print PostScript) b) Via postal mail: Request a hardcopy from Dr. Sean B. Holden Department of Computer Science University College London Gower Street London WC1E 6BT U.K. or email me: s.holden at cs.ucl.ac.uk
From goldfarb at unb.ca Thu May 4 23:29:22 1995 From: goldfarb at unb.ca (Lev Goldfarb CS) Date: Fri, 5 May 1995 00:29:22 -0300 (ADT) Subject: New list INDUCTIVE Message-ID: ****************************ANNOUNCEMENT ******************************** Announcing a new electronic mailing list called INDUCTIVE (Inductive Learning Group) ****************************ANNOUNCEMENT ******************************** This mailing list is initiated to provide a separate forum for discussing various scientific issues related to INDUCTIVE (LEARNING) PROCESSES. We strongly feel that these processes are of central importance to cognitive science in general and artificial intelligence (AI) in particular, and that so far they have not been given the attention and effort they deserve. Moreover, we feel that the success of the entire enterprise (of cognitive science) depends on the success of the effort to model the inductive learning processes, understood sufficiently broadly. We also believe that the current (and the previous) subdivisions of cognitive psychology and AI impede (and have impeded) the progress of both enterprises, since there are serious reasons to believe that all cognitive processes are built on top of the inductive learning processes. We cordially invite various researchers from the above two disciplines (including those working in Pattern Recognition and Neural Networks) to join this supervised mailing list. As a first question, we propose to discuss the very definition of the inductive learning process: Inductive learning is a process by means of which, given a finite positive training set C+ from a possibly infinite class (or category) C and a finite set C- from the complement of C, an agent is able to reach a state (of inductive generalization) which allows it to form an idea about, and REPRESENTATION of, the class C. This state, in turn, enables the agent to recognize a new object as belonging to class C or not. ****************************************************************************** The subscription to this list is free. This list will be moderated and we reserve the right to terminate the membership of those members who abuse the list. You may subscribe to the list by simply sending the following text to the address INDUCTIVE-SERVER at UNB.CA SUBSCRIBE INDUCTIVE FIRSTNAME LASTNAME ****************************************************************************** Lev Goldfarb Tel: 506-453-4566 Faculty of Computer Science Fax: 506-453-3566 University of New Brunswick E-mail: goldfarb at unb.ca Fredericton, N.B., E3B 5A3 Canada
From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Fri May 5 00:34:26 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Fri, 05 May 95 00:34:26 -0400 Subject: computational neuroscience course syllabus Message-ID: <449.799648466@DST.BOLTZ.CS.CMU.EDU> This term I introduced a new course at CMU called Computational Models of Neural Systems. The course looked at neurobiological structures where the anatomy and physiology are sufficiently well known that we can form explicit theories about how they represent and process information, and test those theories with computer simulations. Examples include the hippocampus, piriform cortex, parietal cortex, cerebellum, and early stages of the visual system. The course syllabus is available online at: http://www.cs.cmu.edu/~dst/pubs/cmns-syllabus.ps.gz or via anonymous FTP: FTP-host: ftp.cs.cmu.edu FTP-path: /afs/cs/usr/dst/www/pubs/cmns-syllabus.ps.gz The URL for the course Web page is: http://www.cs.cmu.edu/afs/cs/academic/class/15880b-s95/Web/home.html I would be pleased to receive comments on the syllabus, suggestions for additional or alternate readings, and pointers to syllabi that other people have developed for similar courses. Thanks to the following people for help with suggesting and/or supplying readings: Jim Bower, Jay Buckingham, Mike Hasselmo, Yi-Jen Lin, Randy O'Reilly, David Redish, Lisa Saksida, David Willshaw, and Rich Zemel. -- Dave Touretzky
From berg at cs.albany.edu Sat May 6 14:46:02 1995 From: berg at cs.albany.edu (George Berg) Date: Sat, 6 May 1995 14:46:02 -0400 (EDT) Subject: 3rd Albany Conference on Molecular Biology Message-ID: <199505061846.OAA23320@atlas.cs.albany.edu> [A non-text attachment was scrubbed from the archive; it is available at: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/57d630b0/attachment-0001.ksh ]
From INFOMED at ccvax.unicamp.br Sun May 7 23:55:52 1995 From: INFOMED at ccvax.unicamp.br (INFOMED@ccvax.unicamp.br) Date: Sun, 7 May 1995 23:55:52 BSC (-0300 C) Subject: New Book on Medical Reasoning (Lots of ANN papers) Message-ID: <01HQ8NLW6XRU936I47@ccvax.unicamp.br> NEW BOOK ON MEDICAL DECISION MAKING ----------------------------------- Advances in Fuzzy Systems - Applications and Theory - Vol.
3 COMPARATIVE APPROACHES TO MEDICAL REASONING edited by M E Cohen (California State Univ., Fresno & Univ. California, San Francisco) & D L Hudson (Univ. California, San Francisco) This book focuses on approaches to computer-assisted medical decision-making. A unique feature of the book is that a specific problem in medical decision-making has been selected from the literature, with each contributed chapter presenting a different approach to the solution of the same problem. Theoretical foundations for each approach are provided, followed by practical application. Techniques include knowledge-based reasoning, neural network models, hybrid systems, reasoning with uncertainty, and fuzzy logic, among others. The goal is to supply the reader with a variety of theoretical techniques whose practical implementation can be clearly understood through the example. Using a single, concrete example to illustrate different theoretical approaches allows various techniques to be easily contrasted and permits the reader to determine which aspects are pertinent to specific types of applications. Although the methods are illustrated in a medical problem, they have wide applicability in numerous areas of decision-making. Contents: Knowledge-Based Systems: CLINAID: Medical Knowledge-Based System Based on Fuzzy Relational Structures (L J Kohout et al.); Diagnostic Aspects of Diastolic Dysfunction: Representation in D-Log Language (A Muscari); Neural Network Models: Improved Noninvasive Diagnosis of Coronary Artery Disease Using Neural Networks (M Akay et al.); Fuzzy Neural Network-Based Adaptive Reasoning with Experiential Knowledge (H Ding & M M Gupta); Estimation of Long-Term Mortality of Myocardial Infarction Using a Neural Network Based on the Alopex Algorithm (W J Kostis et al.); Neural Network-Based Approach to Outcome Prognosis for Patients with Diastolic Dysfunction (R M E Sabbatini et al.); Statistical Approaches: Implementation of Statistical Pattern Recognition for Congestive Heart Failure (E A Patrick); The Application of Bayesian Inference with Fuzzy Evidences in Medical Reasoning (C Römer & A Kandel); Modeling Techniques: 24 Hour Analysis of Heart Rate Fluctuations Before and After Carotid Surgery Using Wavelet Transform (M Akay et al.); Reasoning for Introducing a New Parameter for Assessment of Myocardial Status - The Specific Potential of Myocardium (L Bachárová); Hybrid Systems: Correct Diagnosis of Chest Pain by an Integrated Expert System (D Assanelli et al.); Phonocardiogram Analysis of Congenital and Acquired Heart Diseases Using Artificial Neural Networks (D Barschdorff et al.); Hybrid System for Diagnosis and Treatment of Heart Disease (D L Hudson et al.). Readership: Computer scientists and medical information scientists. 330pp Pub. date: Summer 1995 ISBN no.: 981-02-2162-2 US$86 To order or request for more information: Send an email to our marketing department, wspmkt at singnet.com.sg World Scientific Publishing Co. Pte. Ltd. Block 1022 Hougang Ave 1 #05-3520 Tai Seng Industrial Estate Singapore 1953 Republic of Singapore Tel: 65-3825663, Fax: 65-3825919 Internet e-mail: wsped at singnet.com.sg (Editorial dept, Singapore office) worldscp at singnet.com.sg (Singapore office) wspub at tigger.jvnc.net (US office) wspc at wspc.demon.co.uk (UK office) * Now on the World-Wide Web!
* * Our Home Page URL http://www.wspc.co.uk/wspc/index.html * * .sty files for our journals can be obtained by anonymous FTP to ftp.singnet.com.sg at the directory /groups/world_scientific *
From Gerhard.Paass at gmd.de Mon May 8 11:23:36 1995 From: Gerhard.Paass at gmd.de (Gerhard Paass) Date: Mon, 8 May 1995 17:23:36 +0200 Subject: CFP: Autumn School in Connectionism and Neural Networks, Muenster Message-ID: <199505081523.AA02627@sein.gmd.de> CALL FOR PARTICIPATION ================================================================= = = = H e K o N N 9 5 = = = Autumn School in C o n n e c t i o n i s m and N e u r a l N e t w o r k s October 2-6, 1995 Muenster, Germany Conference Language: German ---------------------------------------------------------------- A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following addresses: WWW: http://borneo.gmd.de/~hekonn anonymous FTP: ftp.gmd.de directory: Learning/neural/hekonn95 = = = O V E R V I E W = = = Artificial neural networks (ANN's) have in recent years been discussed in many diverse areas, ranging from the modelling of learning in the cortex to the control of industrial processes. The goal of the Autumn School in Connectionism and Neural Networks is to give a comprehensive introduction to connectionism and artificial neural networks and to give an overview of the current state of the art. Courses will be offered in five thematic tracks. (The conference language is German.) The FOUNDATION track will introduce basic concepts (A. Zell, Univ. Stuttgart), as well as present lectures on information processing in biological neural systems (G. Palm, Univ. Ulm), on the relationship between ANN's and fuzzy logic (R. Kruse, Univ. Braunschweig), and on genetic algorithms (S. Vogel, Univ. Cologne). The THEORY track is devoted to the properties of ANN's as abstract learning algorithms. Courses are offered on approximation properties of ANN's (K. Hornik, Univ. Vienna), the algorithmic complexity of learning procedures (M. Schmitt, TU Graz), prediction uncertainty and model selection (G. Paass, GMD St. Augustin), and "neural" solutions of optimization problems (J. Buhmann, Univ. Bonn). This year, special emphasis will be put on APPLICATIONS of ANN's to real-world problems. This track covers courses on vision (H. Bischof, TU Vienna), character recognition (J. Schuermann, Daimler Benz Ulm), speech recognition (R. Rojas, FU Berlin), industrial applications (B. Schuermann, Siemens Munich), robotics (K. Moeller, Univ. Bonn), and hardware for ANN's (U. Rueckert, TU Hamburg-Harburg). In the track on SYMBOLIC CONNECTIONISM, there will be courses on: knowledge processing with ANN's (F. Kurfess, New Jersey IT), hybrid systems in natural language processing (S. Wermter, Univ. Hamburg), connectionist aspects of natural language processing (U. Schade, Univ. Bielefeld), and procedures for extracting rules from ANN's (J. Diederich, QUT Brisbane). In the section on COGNITIVE MODELLING, we have courses on representation and cognitive models (G. Dorffner, Univ. Vienna), aspects of cognitive psychology (R. Mangold-Allwinn, Univ. Saarbruecken), self-organizing ANN's in the visual system (C. v.d. Malsburg, Univ. Bochum), and information processing in the visual cortex (J.L. v. Hemmen, TU Munich). In addition, there will be courses on PROGRAMMING and SIMULATORS. Participants will have the opportunity to work with the SESAME system (J. Kindermann, GMD St. Augustin) and the SNNS simulator (A. Zell, Univ. Stuttgart).
From robtag at dia.unisa.it Mon May 8 07:46:32 1995 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Mon, 8 May 1995 13:46:32 +0200 Subject: WIRN 95 Message-ID: <9505081146.AA07078@udsab.dia.unisa.it> WIRN VIETRI '95 VII ITALIAN WORKSHOP ON NEURAL NETS IIASS "E. R. Caianiello", Vietri s/m (SA), Italy May 18-20, 1995 Thursday 18 May Mathematical Models 9.30 Analog computations on networks of spiking neurons Maass W. 9.50 A general learning framework for the RAAM family Sperduti A., Starita A. 10.10 On-line learning from clustered input examples Biehl M., Riegler P., Solla S.A., Marangi C. 10.30 A study of the unsupervised learning algorithms Tirozzi B., Feng J. 10.50 Conceptual spaces and attentive mechanisms for scene analysis Chella A., Frixione M., Gaglio S. 11.10 Neural approximations for nonlinear finite-memory state estimators Alessandri A., Parisini T., Zoppoli R. 11.30 COFFEE BREAK 12.00 Using neural networks to guide term simplification: some results for the group theory Paccanaro A. 12.20 Modelling the Wiener cascade using time delayed and recurrent neural networks Wagner M.G., Thompson I.M., Manchanda S., Hearne P.G., Green G.G.R. 12.40 The anti-Hebbian synapse in a nonlinear neural network Palmieri F. 13.00 Model of color perception Damianovic' Z. 13.20 LUNCH 15.30 Pasero E. (Review talk) Architectures and Algorithms 16.30 Multiple topology representing networks Sanguineti V., Spada G., Chiaverini S., Morasso P. 16.50 A digital MLP architecture for real-time hierarchical classification Caviglia D.D., Marchesi M., Valle M., Baiardo V., Baratta D. 17.10 Hardware implementation of neural systems for visual target tracking Colla A.M., Trogu L., Zunino R. 17.30 COFFEE BREAK 18.00 Using neural networks to reduce schedules in time-critical communication systems Cavalieri S., Mirabella O. 18.20 Training feedforward neural networks in the presence of prior information Burrascano P., Pirollo D. 18.40 A statistical-neural algorithm based on neural-gas network for dynamic localisation of robots Giuffrida F., Vercelli G., Morasso P. 19.00 A neural network for soft-decision decoding of Reed-Solomon codes Ortin Ortuno I. Friday 19 May 9.30 Oja E. (Invited talk) Pattern Recognition 10.30 An interactive neural network based approach to the segmentation of multimodal medical images Firenze F., Schenone A., Acquarone F., Morasso P. 10.50 Image compression method based on backpropagation neural network and discrete orthogonal transforms Oravec M. 11.10 On choosing the parameters in the dynamic link network Feng F., Tirozzi B. 11.30 COFFEE BREAK 12.00 A neural network for spectral analysis of stratigraphic records Brescia M., D'Argenio B., Ferreri V., Longo G., Pelosi N., Rampone S., Tagliaferri R. 12.20 Orlandi G. (Review talk) 13.20 Lunch 15.00 Poster Session 17.00 E. R. Caianiello Fellowship Award 17.10 ANNUAL SIREN MEETING 20.00 Conference Dinner Saturday 20 May 9.30 Jordan M. (Invited talk) Pattern Recognition 10.30 A modular neural architecture related to computational neural mechanism for the solution of a pattern recognition problem Morabito F.C., Campolo M. 10.50 Atmospheric pressure wave forecasting through fuzzy systems Masulli F., Casalino F., Caviglia R., Papa L. 11.10 Neural fuzzy image segmentation by a hierarchical approach Petrosino A., Marsella M. 11.30 COFFEE BREAK Applications 12.00 A fuzzy neural network for the detection of anomalies Marsella M., Meneganti M., Tagliaferri R. 12.20 A fuzzy neural network for the on-line detection of B.O.D.
      Mappa G., Salvi G., Tagliaferri R.
12.40 An efficient multilayer perceptron for handwritten character recognition   Gioiello M., Tarantino A., Sorbello F., Vassallo G.
13.00 Neural-based forecasting of physical measures in a power plant   Bruzzo S., Camastra F., Colla A.M.

Poster Session
A global cost-function for multilayer networks   Zecchina R.
Adaptive representation properties of the circular back-propagation model   Ridella S., Rovetta S., Zunino R.
Off-line supervised learning from clustered input examples   Marangi C., Solla S.A., Biehl M., Riegler P.
Images clustering through neural networks   Borghese N.A.
A wavelet application to the analysis of stratigraphic records   D'Argenio B., D'Urzo C., Longo G., Pelosi N., Rampone S., Tagliaferri R.
Can learning process in neural networks be considered as a phase transition?   Pessa E., Pietronilla Penna M.
Self-explanation in a learning McCulloch and Pitts net   Lauria F.E., Sette M., Visco S.
A fast and robust BCS application to the stereo vision   Ardizzone E., Molinelli D., Pirrone R.
Analog CMOS pseudo-random generator for the VLSI implementation of the Boltzmann machine   Belhaire E., Caviglia D.D., Garda P., Morgavi G., Valle M.
Polynomial time approximation of min-energy in Hopfield networks   Bertoni A., Campadelli P., Posenato R.
A hybrid symbolic subsymbolic system for distributed parameter systems   Apolloni B., Piccolboni A., Sozzio E.
Image reconstruction using improved ``Neural-gas''   Fontana M., Borghese N.A., Ferrari S.

                    WIRN VIETRI '95
           VII ITALIAN WORKSHOP ON NEURAL NETS
              POSTCONFERENCE SHORT COURSE
    IIASS "E. R. Caianiello", Vietri s/m (SA), Italy
                    May 22-23, 1995

Professor E. Oja: The self-organizing map in data classification and clustering
Professor M. Jordan: 1) Learning rules based on the E.M. algorithm
                     2) Learning rules based on statistical mechanics

Monday 22 May
15.00-16.50 E. Oja
17.10-19.00 M. Jordan

Tuesday 23 May
15.00-16.50 M. Jordan
17.10-19.00 E. Oja

From sutton at gte.com Mon May 8 13:59:18 1995
From: sutton at gte.com (Rich Sutton)
Date: Mon, 8 May 1995 12:59:18 -0500
Subject: postscript preprints (Reinforcement Learning)
Message-ID:

This is to announce the availability of two new postscript preprints:

REINFORCEMENT LEARNING WITH REPLACING ELIGIBILITY TRACES
Satinder P. Singh
Richard S. Sutton
to appear in Machine Learning
ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/singh-sutton-96.ps.gz

ABSTRACT: The eligibility trace is one of the basic mechanisms used in
reinforcement learning to handle delayed reward. In this paper we introduce
a new kind of eligibility trace, the {\it replacing} trace, analyze it
theoretically, and show that it results in faster, more reliable learning
than the conventional trace. Both kinds of trace assign credit to prior
events according to how recently they occurred, but only the conventional
trace gives greater credit to repeated events. Our analysis is for
conventional and replace-trace versions of the offline TD(1) algorithm
applied to undiscounted absorbing Markov chains. First, we show that these
methods converge under repeated presentations of the training set to the
same predictions as two well-known Monte Carlo methods. We then analyze the
relative efficiency of the two Monte Carlo methods. We show that the method
corresponding to conventional TD is biased, whereas the method
corresponding to replace-trace TD is unbiased.
In addition, we show that the method corresponding to replacing traces is
closely related to the maximum likelihood solution for these tasks, and
that its mean squared error is always lower in the long run. Computational
results confirm these analyses and show that they are applicable more
generally. In particular, we show that replacing traces significantly
improve performance and reduce parameter sensitivity on the
``Mountain-Car'' task, a full reinforcement-learning problem with a
continuous state space, when using a feature-based function approximator.

TD MODELS: MODELING THE WORLD AT A MIXTURE OF TIME SCALES
Richard S. Sutton
to appear in Proc. ML95
ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-95.ps.Z

ABSTRACT: Temporal-difference (TD) learning can be used not just to predict
{\it rewards}, as is commonly done in reinforcement learning, but also to
predict {\it states}, i.e., to learn a model of the world's dynamics. We
present theory and algorithms for intermixing TD models of the world at
different levels of temporal abstraction within a single structure. Such
multi-scale TD models can be used in model-based reinforcement-learning
architectures and dynamic programming methods in place of conventional
Markov models. This enables planning at higher and varied levels of
abstraction, and, as such, may prove useful in formulating methods for
hierarchical or multi-level planning and reinforcement learning. In this
paper we treat only the {\it prediction} problem---that of learning a model
and value function for the case of fixed agent behavior. Within this
context, we establish the theoretical foundations of multi-scale models and
derive TD algorithms for learning them. Two small computational experiments
are presented to test and illustrate the theory. This work is an extension
and generalization of the work of Singh (1992), Dayan (1993), and Sutton
\& Pinette (1985).

The following previously published papers related to reinforcement learning
are also available online for the first time:

Sutton, R.S., Barto, A.G. (1990) "Time-Derivative Models of Pavlovian
Reinforcement," in Learning and Computational Neuroscience: Foundations of
Adaptive Networks, M. Gabriel and J. Moore, Eds., pp. 497--537. MIT Press.
(Main paper for the TD model of classical conditioning)
ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-barto-90.ps.Z

Barto, A.G., Sutton, R.S., Watkins, C.J.C.H. (1990) "Learning and
Sequential Decision Making". In Learning and Computational Neuroscience,
M. Gabriel and J.W. Moore, Eds., pp. 539-602, MIT Press. (a good intro to RL)
ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/barto-sutton-watkins-90.ps.Z

Sutton, R.S. (1992b) "Gain Adaptation Beats Least Squares?", Proceedings of
the Seventh Yale Workshop on Adaptive and Learning Systems, pp. 161-166,
Yale University, New Haven, CT. (Step-size adaptation from an engineering
perspective, 2 new algorithms)
ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-92b.ps.Z

For abstracts, see the file
ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/CATALOG.
If you have trouble obtaining these files, an alternate route is via the
mirror at ftp://ftp.gte.com/pub/reinforcement-learning/.
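The distinction drawn in the first abstract above is easy to make concrete.
The sketch below is an illustrative toy, not code from the paper: it runs
one episode of tabular TD(lambda) and switches between the conventional
(accumulating) update and the replacing update; all names and parameter
values here are our own.

import numpy as np

def td_lambda_episode(transitions, V, alpha=0.1, lam=0.9, gamma=1.0,
                      replacing=True):
    # transitions: list of (state, reward, next_state) triples, with
    # next_state = None at termination.  V is a NumPy value table,
    # updated in place.
    e = np.zeros_like(V)                   # eligibility traces
    for s, r, s_next in transitions:
        v_next = V[s_next] if s_next is not None else 0.0
        delta = r + gamma * v_next - V[s]  # TD error
        e *= gamma * lam                   # decay all traces
        if replacing:
            e[s] = 1.0                     # replacing trace: reset to 1
        else:
            e[s] += 1.0                    # conventional trace: accumulate
        V += alpha * delta * e             # credit recently visited states
    return V

# The two traces differ only when a state is revisited within an episode:
V = np.zeros(2)
episode = [(0, 0.0, 1), (1, 0.0, 0), (0, 0.0, 1), (1, 1.0, None)]
td_lambda_episode(episode, V, replacing=True)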
From giles at research.nj.nec.com Mon May 8 13:48:30 1995
From: giles at research.nj.nec.com (Lee Giles)
Date: Mon, 8 May 95 13:48:30 EDT
Subject: TR: Fixed Points in Two--Neuron Discrete Time Recurrent Networks:
Message-ID: <9505081748.AA20816@alta>

The following Technical Report is available via the University of Maryland
Department of Computer Science and the NEC Research Institute archives:

_____________________________________________________________________________

"Fixed Points in Two--Neuron Discrete Time Recurrent Networks:
Stability and Bifurcation Considerations"

UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-51 and CS-TR-3461

Peter Tino[1,2], Bill G. Horne[2], C. Lee Giles[2,3]

[1] Dept. of Informatics and Computer Systems, Slovak Technical University,
    Ilkovicova 3, 812 19 Bratislava, Slovakia
[2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
[3] UMIACS, University of Maryland, College Park, MD 20742

{tino,horne,giles}@research.nj.nec.com

The position, number and stability types of fixed points of a two--neuron
recurrent network with nonzero weights are investigated. Using simple
geometrical arguments in the space of derivatives of the sigmoid transfer
function with respect to the weighted sum of neuron inputs, we partition
the network state space into several regions corresponding to stability
types of the fixed points. If the neurons have the same mutual interaction
pattern, i.e. they either mutually inhibit or mutually excite themselves, a
lower bound on the rate of convergence of the attractive fixed points
towards the saturation values, as the absolute values of weights on the
self--loops grow, is given. The role of weights in location of fixed points
is explored through an intuitively appealing characterization of neurons
according to their inhibition/excitation performance in the network. In
particular, each neuron can be of one of the four types: greedy,
enthusiastic, altruistic or depressed. Both with and without the external
inhibition/excitation sources, we investigate the position and number of
fixed points according to character of the neurons. When both neurons
self-excite themselves and have the same mutual interaction pattern, the
mechanism of creation of a new attractive fixed point is shown to be that
of saddle node bifurcation.
-------------------------------------------------------------------------
http://www.neci.nj.nec.com/homepages/giles.html
http://www.cs.umd.edu/TRs/TR-no-abs.html
or
ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3461.two.neuron.recurrent.nets.ps.Z
--------------------------------------------------------------------------

--
C. Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482
URL http://www.neci.nj.nec.com/homepages/giles.html
==

From georgiou at wiley.csusb.edu Mon May 8 20:15:04 1995
From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu)
Date: Mon, 8 May 1995 17:15:04 -0700
Subject: CFP: First Int'l Conf. on Computational Intelligence and Neurosciences
Message-ID: <199505090015.AA25909@wiley.csusb.edu>

            FIRST INTERNATIONAL CONFERENCE ON
      COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCES
            September 28 to October 1, 1995
``Shell Island'' Hotels of Wrightsville Beach, North Carolina, USA

We are pleased to announce the First International Conference on
Computational Intelligence and Neurosciences to be held as part of the
upcoming Joint Conference on Information Sciences (JCIS) from September 28
to October 1, 1995 in Wrightsville Beach, NC.
We expect that this symposium will be of interest to neural network
researchers, computer scientists, engineers, mathematicians, physicists,
neuroscientists and psychologists. Research in neural computing has grown
enormously in the past decade, and it is becoming an increasingly
specialized field. Using a combination of didactic and workshop settings,
we wish to present an overview of where neural computing presently stands.
Talks will be given by experts in theoretical and experimental
neuroscience, and there will be ample opportunity for discussion and
collegial exchange. In addition to presenting current work, our aim is to
address some of the important open questions in neural computing. We hope
to delineate how information science, as an interdisciplinary field, can
aid in moving neural computing into the next century.

Invited Speakers include:
  James Anderson (Brown University)
  Subhash Kak (Louisiana State University)
  Haluk Ogmen (University of Houston)
  Ed Page (University of South Carolina)
  Jeffrey Sutton (Harvard University)
  L.E.H. Trainor (University of Toronto)

Co-chairs: Subhash Kak & Jeffrey Sutton

Program Committee:
  Robert Erickson, George Georgiou, David Hislop, Michael Huerta,
  Subhash C. Kak, Stephen Koslow, Sridhar Narayan, Slater E. Newman,
  Gregory Lockhead, Richard Palmer, David C. Rubin, Nestor Schmajuk,
  David W. Smith, John Staddon, Jeffrey P. Sutton, Harold Szu,
  L.E.H. Trainor, Abraham Waksman, Paul Werbos, M. L. Wolbarsht,
  Max Woodbury

Areas for which papers are sought include:
  * Neural Network Architectures
  * Artificially Intelligent Neural Networks
  * Artificial Life
  * Associative Memory
  * Computational Intelligence
  * Cognitive Science
  * Fuzzy Neural Systems
  * Relations between Fuzzy Logic and Neural Networks
  * Theory of Evolutionary Computation
  * Efficiency/Robustness Comparisons with Other Direct Search Algorithms
  * Parallel Computer Applications
  * Integration of Fuzzy Logic and Evolutionary Computing
  * Evolutionary Computation for Neural Networks
  * Fuzzy Logic in Evolutionary Algorithms
  * Neurocognition
  * Neurodynamics
  * Optimization
  * Feature Extraction & Pattern Recognition
  * Learning and Memory
  * Implementations (Electronic, Optical, Biochips)
  * Intelligent Control

Summary Deadline: July 20, 1995
Decision & Notification: August 5, 1995

Send summaries to:
  George M. Georgiou
  Computer Science Department
  California State University
  San Bernardino, CA 92407
  georgiou at wiley.csusb.edu

Papers will be accepted based on summaries. A summary shall not exceed 4
pages of 10-point font, double-column, single-spaced text (1 page
minimum), with figures and tables included. Any summary exceeding 4 pages
will be charged $100 per additional page. Three copies of the summary are
required by July 24, 1995. A deposit check of $150 must be included to
guarantee the publication of your 4-page summary in the Proceedings. The
$150 can be deducted from the registration fee later. The final version of
the full-length paper must be submitted by October 1, 1995. Four (4)
copies of the full-length paper shall be prepared according to the
``Information for Authors'' appearing at the back cover of Information
Sciences, an International Journal (Elsevier Publishing Co.). A full paper
shall not exceed 20 pages including figures and tables. All full-length
papers will be reviewed by experts in their respective fields. Revised
papers will be due on April 15, 1996.
Accepted papers will appear in the hard-cover proceedings (book) to be
published by a publisher or in the Information Sciences Journal (the INS
journal now has three publications: Informatics and Computer Sciences,
Intelligent Systems, and Applications). All fully registered conference
attendees will receive a copy of the proceedings (summaries) on September
28, 1995, and a free one-year subscription (paid by this conference) to
Information Sciences Journal - Applications, as well as the right to
purchase any or all of Vol. I, Vol. II and Vol. III of the hard-cover,
deluxe, professional books ``Advances in Fuzzy Theory & Technology,
Volume I or II or III'' at 1/2 price.

---------------------------------------------------------------------------
         ********************************************
         *  JCIS'95 REGISTRATION FEES & INFORMATION  *
         ********************************************

                                     Up to 7/15/95    After 7/15/95
Full Registration                       $275.00          $395.00
Student Registration                    $100.00          $160.00
Tutorial (per Mini-Course)              $120.00          $160.00
Exhibit Booth Fee                       $300.00          $400.00
One Day Fee (no pre-reg. discount)      $195.00          $ 85.00 (Student)

FULL CONFERENCE REGISTRATION: Includes admission to all sessions, exhibit
area, coffee, tea and soda. A copy of the conference proceedings
(summaries) at the conference and a one-year subscription to Information
Sciences - Applications, An International Journal, published by Elsevier
Publishing Co. In addition, the right to purchase the hard-cover deluxe
books at 1/2 price. The Award Banquet on Sept. 30, 1995 is included with
Full Registration. One-day registration does not include the banquet, but
a one-year IS Journal - C subscription is included for one-day full
registration only. Tutorials are not included.

STUDENT CONFERENCE REGISTRATION: For full-time students only. A letter
from your department is required. You must present a current student ID
with picture. A copy of the conference proceedings (summaries) is
included. Admission to all sessions, exhibit area, coffee, tea and soda.
The right to purchase the hard-cover deluxe books at 1/2 price. The free
subscription to the IS Journal - Applications, however, is not included.

TUTORIALS REGISTRATION: Any person can register for the Tutorials. A copy
of the lecture notes for the course registered is included. Coffee, tea
and soda are included. The summaries and the free subscription to the IS
Journal - Applications are, however, not included. The right to purchase
the hard-cover deluxe books is included.

---------------------------------------------------------------------------
                          ***************
                          *  TUTORIALS  *
                          ***************

Several mini-courses are scheduled for sign-up. Please take note that any
one of them may be cancelled or combined with other mini-courses due to
lack of attendance. The cost of each mini-course is $120 up to 7/15/95 and
$160 after 7/15/95, the same for all mini-courses.

No.  Name of Mini-Course               Instructor       Time
------------------------------------------------------------------------
A    Languages and Compilers for       J. Ramanujan     6:30 pm - 9 pm
     Distributed Memory Machines                        Sept. 28
------------------------------------------------------------------------
B    Pattern Recognition Theory        H. D. Cheng      6:30 pm - 9 pm
                                                        Sept. 28
------------------------------------------------------------------------
C    Fuzzy Set Theory                  George Klir      2:00 pm - 4:30 pm
                                                        Sept. 29
------------------------------------------------------------------------
D    Neural Network Theory             Richard Palmer   2:00 pm - 4:30 pm
                                                        Sept. 29
------------------------------------------------------------------------
E    Fuzzy Expert Systems              I. B. Turksen    6:30 pm - 9 pm
                                                        Sept. 29
------------------------------------------------------------------------
F    Intelligent Control Systems       Chris Tseng      6:30 pm - 9 pm
                                                        Sept. 29
------------------------------------------------------------------------
G    Neural Network Applications       Subhash Kak      2:00 pm - 4:30 pm
                                                        Sept. 30
------------------------------------------------------------------------
H    Pattern Recognition Applications  Edward K. Wong   2:00 pm - 4:30 pm
                                                        Sept. 30
------------------------------------------------------------------------
I    Fuzzy Logic & NN Integration      Marcus Thint     9:30 am - 12:00
                                                        Oct. 1
------------------------------------------------------------------------
J    Rough Set Theory                  Tsau Young Lin   9:30 am - 12:00
                                                        Oct. 1
------------------------------------------------------------------------

VARIOUS CONFERENCE CONTACTS:

Tutorial:    Paul P. Wang, ppw at ee.duke.edu, Tel. (919)660-5271
Conference:  Jerry C.Y. Tyan, ctyan at ee.duke.edu, Tel. (919)660-5233
Information: Kitahiro Kaneda, hiro at ee.duke.edu, Tel. (919)660-5233,
             660-5259

Coordinates Overall Administration:
  Xiliang Gu, gu at ee.duke.edu, Tel. (919)660-5233, (919)383-5936

Local Arrangement Chair:
  Sridhar Narayan, Dept. of Mathematical Sciences, Wilmington, NC 28403,
  U. S. A., narayan at cms.uncwil.edu,
  Tel: 910 395 3671 (work), 910 395 5378 (home)

---------------------------------------------------------------------------
                      ***********************
                      * TRAVEL ARRANGEMENTS *
                      ***********************

The Travel Center of Durham, Inc. has been designated the official travel
provider. Special domestic fares have been arranged and The Travel Center
is prepared to book all flight travel.
  Domestic United States and Canada: 1-800-334-1085
  International FAX: 919-687-0903

                      **********************
                      * HOTEL ARRANGEMENTS *
                      **********************

SHELL ISLAND RESORT HOTELS
2700 N. LUMINA AVE.
WRIGHTSVILLE BEACH, NC 28480
U. S. A.

This is the conference site and lodging. A block of suites (double rooms)
has been reserved for JCIS'95 attendees at a discounted rate. All prices
listed here are for double occupancy.
  $100.00 + 9% Tax (Sun. - Thur.)
  $115.00 + 9% Tax (Fri. - Sat.)
  $10.00 for each additional person over 2 people per room.
We urge you to make reservations early.

Free transportation from and to Wilmington, N. C. Airport is available for
``Shell Island'' Resort Hotel guests. However, you must make a reservation
for this free service. Please contact:
  Carvie Gillikin, Director of Sales
  Voice: 1-800-689-6765 or: 910-256-8696
  FAX: 910-256-0154

---------------------------------------------------------------------------
If you wish to automatically receive information through email on JCIS'95
as it also pertains to the other two conferences that are part of JCIS'95
("Fourth Annual Conference on Fuzzy Theory and Technology" and "Second
Annual Conference on Computer Theory and Informatics"), please send email
  To: georgiou at wiley.csusb.edu
  Subject: JCIS-95
The body of the message is not significant.

---------------------------------------------------------------------------
                      CONFERENCE REGISTRATION FORM

It is important to choose only one plan: Participation Plan A, Plan B, or
Plan C. (Choose Plan C for the First International Conference on
Computational Intelligence and Neurosciences.)

[ ] I wish to receive further information.
[ ] I intend to participate in the conference.
[ ] I intend to present my paper to a regular session.
[ ] I intend to register in tutorial(s).
Name: Dr./Mr./Mrs. _________________________________________________
Address: ___________________________________________________________
Country: ___________________________________________________________
Phone: ________________ Fax: _______________ E-mail: ________________
Affiliation (for Badge): ____________________________________________

Participation Plan: [ ]A  [ ]B  [ ]C

                                     Up to 7/15/95    After 7/15/95
Full Registration                     [ ]$275.00       [ ]$395.00
Student Registration                  [ ]$100.00       [ ]$160.00
Tutorial (per Mini-Course)            [ ]$120.00       [ ]$160.00
Exhibit Booth Fee                     [ ]$300.00       [ ]$400.00
One Day Fee (no pre-reg. discount)    [ ]$195.00       [ ]$ 85.00 (Student)

Total Enclosed (U.S. Dollars): ________________

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ Please make check payable and mail to:  $
$                                         $
$   FT & T                                $
$   c/o Paul P. Wang                      $
$   Dept. of Electrical Engineering       $
$   Duke University                       $
$   Durham, NC 27708                      $
$   U. S. A.                              $
$                                         $
$ All foreign payments must be made by    $
$ draft on a US Bank in US dollars. No    $
$ credit cards or purchase orders can be  $
$ accepted.                               $
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
---------------------------------------------------------------------------

From tirthank at titanic.mpce.mq.edu.au Tue May 9 02:22:38 1995
From: tirthank at titanic.mpce.mq.edu.au (Tirthankar Raychaudhuri)
Date: Tue, 9 May 1995 16:22:38 +1000 (EST)
Subject: Technical Report on Active Learning Available
Message-ID: <9505090622.AA04755@titanic.mpce.mq.edu.au>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 2330 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c080f623/attachment-0001.ksh

From wermter at nats5.informatik.uni-hamburg.de Tue May 9 12:29:31 1995
From: wermter at nats5.informatik.uni-hamburg.de (Stefan Wermter)
Date: Tue, 9 May 1995 12:29:31 --100
Subject: IJCAI95 workshop program: learning for language processing
Message-ID: <9505091029.AA27223@nats13.nats>

                  IJCAI-95 Workshop on
New Approaches to Learning for Natural Language Processing

International Joint Conference on Artificial Intelligence (IJCAI-95)
Palais de Congres, Montreal, Canada
August 21, 1995

ORGANIZING COMMITTEE
--------------------
Stefan Wermter, University of Hamburg, Germany (workshop contact person)
Gabriele Scheler, Technical University Munich, Germany
Ellen Riloff, University of Utah, USA

INVITED SPEAKERS
----------------
Eugene Charniak, Brown University, USA
Noel Sharkey, Sheffield University, UK

PROGRAM COMMITTEE
-----------------
Jaime Carbonell, Carnegie Mellon University, USA
Joachim Diederich, Queensland University of Technology, Australia
Georg Dorffner, University of Vienna, Austria
Jerry Feldman, ICSI, Berkeley, USA
Walther von Hahn, University of Hamburg, Germany
Aravind Joshi, University of Pennsylvania, USA
Ellen Riloff, University of Utah, USA
Gabriele Scheler, Technical University Munich, Germany
Stefan Wermter, University of Hamburg, Germany

WORKSHOP DESCRIPTION
--------------------
In the last few years, there has been a great deal of interest and
activity in developing new approaches to learning for natural language
processing. Various learning methods have been used, including
  - connectionist methods/neural networks
  - machine learning algorithms
  - hybrid symbolic and subsymbolic methods
  - statistical techniques
  - corpus-based approaches.
In general, learning methods are designed to support automated knowledge
acquisition, fault tolerance, plausible induction, and rule inferences.
Using learning methods for natural language processing is especially
important because language learning is an enabling technology for many
other language processing problems, including noisy speech/language
integration, machine translation, and information retrieval. Different
methods support language learning to various degrees but, in general,
learning is important for building more flexible, scalable, adaptable, and
portable natural language systems. This workshop is of interest
particularly at this time because systems built by learning methods have
reached a level where they can be applied to real-world problems in
natural language processing and where they can be compared with more
traditional encoding methods.

The workshop will provide a forum for discussing various learning
approaches for supporting natural language processing. In particular the
workshop will focus on questions like:
- How can we apply suitable existing learning methods for language
  processing?
- What new learning methods are needed for language processing and why?
- What language knowledge should be learned and why?
- What are similarities and differences between different approaches for
  language learning? (e.g., machine learning algorithms vs neural networks)
- What are strengths and limitations of learning rather than manual
  encoding?
- How can learning and encoding be combined in symbolic/connectionist
  systems?
- Which aspects of system architectures and knowledge engineering have to
  be considered? (e.g., modular, integrated, hybrid systems)
- What are successful applications of learning methods in various fields?
  (speech/language integration, machine translation, information retrieval)
- How can we evaluate learning methods using real-world language? (text,
  speech, dialogs, etc.)

WORKSHOP PROGRAM
----------------
8:00am Start of Workshop

8:00am Welcome and Introduction, Stefan Wermter

8:10am - 9:50am Session: Neural network approaches, Hybrid approaches, Genetic approaches
------------------------------------------------------------------------
8:10am - 8:30am On the applicability of neural network and machine learning methodologies to natural language processing. Steve Lawrence, Sandiway Fong, C. Lee Giles
8:30am - 8:50am Knowledge acquisition in concept and document spaces by using self-organizing neural networks. Werner Winiwarter, Erich Schweighofer, Dieter Merkl
8:50am - 9:10am A genetic algorithm for the induction of natural language grammars. Tony C. Smith, Ian H. Witten
9:10am - 9:30am SKOPE: A connectionist/symbolic architecture of spoken Korean processing. Geunbae Lee, J. H. Lee
9:30am - 9:50am Integrating different learning approaches into a multilingual spoken translation system. P. Geutner, B. Suhm, T. Kemmp, A. Lavie, L. Mayfield, A. E. McNair, I. Rogina, T. Schultz, T. Sloboda, W. Ward, M. Woszczyna, A. Waibel

9:50am - 10:20am Invited Talk
************
Connectionist Natural Language Processing: Representation and Learning.
Noel Sharkey, Sheffield University, UK

10:20am - 10:40am Break
-----

10:40am - 12:20pm Session: Statistical approaches, Corpus-based approaches
--------------------------------------------------------
10:40am - 11:00am Selective sampling in natural language learning. Ido Dagan, Sean P. Engelson
11:00am - 11:20am Learning restricted probabilistic link grammars. Eva Fong, Dekai Wu
11:20am - 11:40am A statistical approach to learning prepositional phrase attachment disambiguation. Alexander Franz
11:40am - 12:00pm Training stochastical grammars on semantic categories. W.R.
Hogenhout, Yuji Matsumoto
12:00pm - 12:20pm Automatic classification of speech acts with semantic classification trees and polygrams. Marion Mast, Elmar Noeth, Heinrich Niemann, Ernst Guenter Schukat Talamazzini

12:20pm - 12:50pm Invited Talk
************
Learning syntactic disambiguation through word statistics and why you should care about it. Eugene Charniak, Brown University, USA

12:50pm - 2:00pm Lunch Break
-----------

2:00pm - 3:40pm Session: Machine learning approaches, Symbolic approaches
--------------------------------------------------------
2:00pm - 2:20pm A comparison of two methods employing inductive logic programming for corpus-based parser construction. John M. Zelle, Raymond J. Mooney
2:20pm - 2:40pm Using inductive logic programming to learn the past tense of English verbs. Mary Elaine Califf, Raymond J. Mooney
2:40pm - 3:00pm A revision learner to acquire verb selection rules from human-made rules and examples. Shigeo Kaneda, Hussein Almuallim, Yasuhiro Akiba, Megumi Ishii, Tsukasa Kawaoka
3:00pm - 3:20pm Using parsed corpora for circumventing parsing. Aravind K. Joshi, B. Srinivas
3:20pm - 3:40pm Acquiring and updating hierarchical knowledge for machine translation based on a clustering technique. Takefumi Yamazaki, Michael J. Pazzani, Christopher Merz

3:40pm - 4:00pm Break
-----

4:00pm - 5:40pm Session: Knowledge acquisition approaches, Information extraction approaches
----------------------------------------------------------------------------
4:00pm - 4:20pm Embedded machine learning systems for natural language processing: a general framework. Claire Cardie
4:20pm - 4:40pm Learning information extraction patterns from examples. Scott B. Huffman
4:40pm - 5:00pm A symbolic and surgical acquisition of terms through variation. Christian Jacquemin
5:00pm - 5:20pm Concept learning from texts - a terminological meta-reasoning perspective. Udo Hahn, Manfred Klenner, Klemens Schnattinger
5:20pm - 5:40pm Applying machine learning to anaphora resolution. Chinatsu Aone, Scott William Bennett

5:40pm - 6:00pm Discussion and open end
-----------------------

Further accepted papers
-----------------------
Advances in analogy-based learning: false friends and exceptional items in pronunciation by paradigm-driven analogy. Stefano Federici, Vito Pirrelli, Francois Yvon
A minimum description length approach to grammar inference. Peter Gruenwald
Implications of an automatic lexical acquisition system. Peter M. Hastings
Confronting an existing machine learning algorithm to the text categorization task. Isabelle Moulinier, Jean-Gabriel Ganascia
Issues in inductive learning of domain-specific text extraction rules. Stephen Soderland, David Fisher, Jonathan Aseltine, Wendy Lehnert
Can punctuation help learning? Miles Osborne
Cascade 2 networks for grammar recognition. Ross Hayward, Emanuel Pop, Joachim Diederich

******************************************************************************** * Dr Stefan Wermter University of Hamburg * * Dept.
of Computer Science * * Vogt-Koelln-Strasse 30 * * email: wermter at informatik.uni-hamburg.de D-22527 Hamburg * * phone: +49 40 54715-531 Germany * * fax: +49 40 54715-515 * * http://www.informatik.uni-hamburg.de/Arbeitsbereiche/NATS/staff/wermter.html * ******************************************************************************** From S.W.Ellacott at bton.ac.uk Tue May 9 16:20:37 1995 From: S.W.Ellacott at bton.ac.uk (S.W.Ellacott@bton.ac.uk) Date: Tue, 09 May 1995 20:20:37 GMT Subject: Studentships available Message-ID: <19950509.202037.33@diamond> PhD STUDENTSHIPS IN COMPUTATIONAL MATHEMATICS / NEURAL NETWORKS AT UNIVERSITY OF HUDDERSFIELD Two 3-year research studentships are now available to good graduates in numerate degrees, optionally with relevant MSc degree or experience, to work with Prof John Mason / Dr Iain Anderson starting September/October 1995 : 1) CASE Studentship (UK/EC only) on "Approximation by Neural Networks in Hydraulics" with Hydraulics Research Ltd Wallingford - EPSRC grant plus 2250pds - involving NNs, approximation, parallel algorithms. 2) Postgraduate bursary (about EPSRC basic rate plus UK/EC PhD fees) on "Approximation by Wavelets", with requirement to provide some programming support to MSc course - involving approximation, numerical analysis and applications (such as neural networks). Reply with CV and names of 2 referees as soon as possible to Prof J.C.Mason, School of Computing and Mathematics, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, England - email: j.c.mason at hud.ac.uk, phone: 01484-472680, fax: 01484-421106. -- Steve Ellacott Dept. Math. Sciences, University of Brighton, Moulsecoomb, BN2 4GJ, UK Tel. home: (01273) 885845. Tel. office: (01273) 642544 or 642414 Fax: (01273) 642405 From lksaul at psyche.mit.edu Tue May 9 18:01:25 1995 From: lksaul at psyche.mit.edu (Lawrence Saul) Date: Tue, 9 May 95 18:01:25 EDT Subject: paper available: Mean Field Theory for Sigmoid Belief Networks Message-ID: <9505092201.AA14682@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-file: pub/lksaul/belief.ps.Z The following paper is now available by anonymous ftp. ================================================================== Mean Field Theory for Sigmoid Belief Networks (12 pages) Lawrence K. Saul, Tommi Jaakkola, and Michael I. Jordan Center for Biological and Computational Learning Massachusetts Institute of Technology 79 Amherst Street, E10-243 Cambridge, MA 02139 Abstract: Bayesian networks (a.k.a. belief networks) are stochastic feedforward networks of discrete or real-valued units. In this paper we show how to calculate a rigorous lower bound on the likelihood of observed activities in sigmoid belief networks. We view these networks in the framework of statistical mechanics and derive a mean field theory for the average activities of the units. The advantage of this framework is that the mean field free energy gives a rigorous lower bound on the log-likelihood of any partial instantiation of the network's activity. The feedforward directionality of belief networks gives rise to terms that do not appear in the mean field theory for symmetric networks of binary units. Nevertheless, the mean field equations have a simple closed form and can be solved by iteration to yield a lower bound on the likelihood. Empirical results suggest that this bound may be tight enough to serve as a basis for inference and learning. 
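For readers who want to see the flavor of such a mean-field iteration, here
is a deliberately simplified sketch. It is not the equations of the paper
(which derive additional terms from the feedforward directionality of
belief networks); it just iterates a naive fixed point for the mean hidden
activities of a one-hidden-layer sigmoid belief network given an observed
visible vector. The variable names and the simplification are ours.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def naive_mean_field(W, b, c, v, n_iter=50):
    # W: hidden-to-visible weights, shape (n_hidden, n_visible);
    # b: hidden biases; c: visible biases; v: observed 0/1 visible vector.
    # Iterates mu = sigmoid(b + W (v - p)), where p is the predicted mean
    # of the visible layer -- a crude stand-in for the rigorous
    # lower-bound updates derived in the paper.
    mu = np.full(W.shape[0], 0.5)        # initial mean activities
    for _ in range(n_iter):
        p = sigmoid(c + W.T @ mu)        # predicted visible means
        mu = sigmoid(b + W @ (v - p))    # feedback from the observed layer
    return mu

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
b, c = np.zeros(4), np.zeros(8)
v = (rng.random(8) < 0.5).astype(float)
print(naive_mean_field(W, b, c, v))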
==================================================================

From wolff at cache.crc.ricoh.com Tue May 9 19:07:03 1995
From: wolff at cache.crc.ricoh.com (Greg Wolff)
Date: Tue, 9 May 1995 16:07:03 -0700
Subject: JOB Announcement
Message-ID: <9505092307.AA02749@cheetah.crc.ricoh.com>

The Machine Learning and Perception group at Ricoh's California Research
Center in Menlo Park, CA is looking for a skilled C programmer to assist
in the development and implementation of machine learning and image
processing algorithms. The job description and contact information follow.
More information about Ricoh CRC may be found at http://www.crc.ricoh.com/

Support Programmer/Research Engineer

Position responsibilities:
* The successful candidate will be responsible for developing code and
  using existing software to perform simulations (in C/Unix on a
  SPARCstation) of new pattern recognition/machine learning/image
  processing algorithms developed in cooperation with others.
* She or he will work on a number of projects concurrently, quickly
  comprehending the technology and contributing as appropriate. He or she
  will document and report work, make oral presentations, and assist in
  technology transfer efforts. A background in pattern recognition, neural
  networks, image processing, machine learning, or parallel programming is
  very desirable, but not essential.

Candidate requirements:
* Strong C programming and Unix skills (experimental, not necessarily
  production) -- more than two years of work experience involving
  programming, or an advanced degree
* Strong demonstrated learning ability
* Excellent verbal and written communication skills
* Good organizational ability
* Masters degree (or equivalent experience) in Electrical Engineering,
  Computer Science or a related field
* Optional: knowledge of or experience with parallel programming, neural
  networks, pattern recognition....

----------------------------------------------------------------------------
RICOH California Research Center (RCRC): RCRC is a small research center
in Menlo Park, CA, near the Stanford University campus and other Silicon
Valley landmarks. The roughly 20 researchers focus on pattern recognition,
image processing, image and document analysis, visual perception,
artificial intelligence, machine learning, electronic service, and
hardware for implementing computationally expensive algorithms. The
environment is innovative, collegial and exciting. RCRC is a part of RICOH
Corporation, the wholly owned subsidiary of RICOH Company, Ltd. in Japan.
RICOH is a pioneer in facsimile, copiers, optical equipment, office
automation products and more. Ricoh Corporation is an Equal Employment
Opportunity Employer.[1]
----------------------------------------------------------------------------
Please send any questions by e-mail to the address below, and type
"Programming job" as your header line. Full applications (which must
include a resume and the names and addresses of at least two people
familiar with your work) should be sent by surface mail to:
Dr. David G. Stork
Chief Scientist
RICOH California Research Center
2882 Sand Hill Road, Suite 115
Menlo Park CA 94025
stork at crc.ricoh.com

[1] See http://www.crc.ricoh.com/openings/mlp.html

From john at dcs.rhbnc.ac.uk Wed May 10 04:00:52 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Wed, 10 May 95 09:00:52 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199505100800.JAA13813@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational
Learning Theory (NeuroCOLT): several new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-011:
----------------------------------------
Classification by Polynomial Surfaces
by Martin Anthony, London School of Economics and Political Science

Abstract: Linear threshold functions (for real and Boolean inputs) have
received much attention, for they are the component parts of many
artificial neural networks. Linear threshold functions are exactly those
functions such that the positive and negative examples are separated by a
hyperplane. One extension of this notion is to allow separators to be
surfaces whose equations are polynomials of at most a given degree (linear
separation being the degree-$1$ case). We investigate the representational
and expressive power of polynomial separators. Restricting to the Boolean
domain, by using an upper bound on the number of functions defined on
$\{0,1\}^n$ by polynomial separators having at most a given degree, we
show, as conjectured by Wang and Williams, that for almost every Boolean
function, one needs a polynomial surface of degree at least
$\left\lfloor n/2\right\rfloor$ in order to separate the negative examples
from the positive examples. Further, we show that, for odd $n$, at most
half of all Boolean functions are realizable by a separating surface of
degree $\left\lfloor n/2\right\rfloor$. We then compute the
Vapnik-Chervonenkis dimension of the class of functions realized by
polynomial separating surfaces of at most a given degree, both for the
case of Boolean inputs and real inputs. In the case of linear separators,
the VC dimensions coincide for these two cases, but for surfaces of higher
degree, there is a strict divergence. We then use these results on the VC
dimension to quantify the sample size required for valid generalization in
Valiant's probably approximately correct framework.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-013:
----------------------------------------
Learnability of Kolmogorov-Easy Circuit Expressions Via Queries
by Jos\'e L. Balc\'azar, Universitat Polit\'ecnica de Catalunya,
   Harry Buhrman, CWI, Amsterdam,
   Montserrat Hermo, Universidad del Pa\'\i s Vasco

Abstract: Circuit expressions were introduced to provide a natural link
between Computational Learning and certain aspects of Structural
Complexity. Upper and lower bounds on the learnability of circuit
expressions are known. We study here the case in which the circuit
expressions are of low (time-bounded) Kolmogorov complexity. We show that
these are polynomial-time learnable from membership queries in the
presence of an NP oracle. We also exactly characterize the sets that have
such circuit expressions, and precisely identify the subclass whose
circuit expressions can be learned from membership queries alone. The
extension of the results to various Kolmogorov complexity bounds is
discussed.
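A polynomial separator of the kind studied in NC-TR-95-011 can be searched
for with an entirely ordinary perceptron, once the inputs are lifted into
the space of monomials. The sketch below is only an illustration of that
reduction (our own construction, not the report's): for Boolean inputs,
parity on two variables has no degree-1 separator but is found at degree 2.

import itertools
import numpy as np

def monomial_features(x, degree):
    # All monomials of the 0/1 vector x up to the given degree,
    # including the constant term.
    feats = [1.0]
    for d in range(1, degree + 1):
        for combo in itertools.combinations(range(len(x)), d):
            feats.append(np.prod([x[i] for i in combo]))
    return np.array(feats)

def perceptron_poly(X, y, degree, epochs=100):
    # Search for a separating polynomial surface of the given degree by
    # running the perceptron rule in the lifted monomial space.
    Phi = np.array([monomial_features(x, degree) for x in X])
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        errors = 0
        for phi, t in zip(Phi, y):         # labels t in {-1, +1}
            if t * (w @ phi) <= 0:
                w += t * phi               # perceptron correction
                errors += 1
        if errors == 0:
            break                          # separating surface found
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, 1, 1, -1])               # XOR: not linearly separable
w = perceptron_poly(X, y, degree=2)        # succeeds in the degree-2 lift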
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-017:
----------------------------------------
Identification of the Human Arm Kinetics using Dynamic Recurrent Neural
Networks
by Jean-Philippe DRAYE, Facult\'{e} Polytechnique de Mons,
   Guy CHERON, University of Brussels,
   Marc BOURGEOIS, University of Brussels,
   Davor PAVISIC, Facult\'{e} Polytechnique de Mons,
   Ga\"{e}tan LIBERT, Facult\'{e} Polytechnique de Mons

Abstract: Artificial neural networks offer an exciting alternative for
modeling and identifying complex non-linear systems. This paper
investigates the identification of discrete-time non-linear systems using
dynamic recurrent neural networks. We use this kind of network to
efficiently identify the complex temporal relationship between the
patterns of muscle activation represented by the electromyography signal
(EMG) and their mechanical actions in three-dimensional space. The results
show that dynamic neural networks provide a successful platform for
biomechanical modeling and simulation including complex temporal
relationships.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-019:
----------------------------------------
An Algebraic Characterization of Tractable Constraints
by Peter Jeavons, Royal Holloway, University of London

Abstract: Many combinatorial search problems may be expressed as
`constraint satisfaction problems', and this class of problems is known to
be NP-complete. In this paper we investigate what restrictions must be
imposed on the allowed constraints in order to ensure tractability. We
describe a simple algebraic closure condition, and show that this is both
necessary and sufficient to ensure tractability in Boolean valued
problems. We also demonstrate that this condition is necessary for
problems with arbitrary finite domains.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-020:
----------------------------------------
An incremental neural classifier on a MIMD computer
by Arnulfo Azcarraga, LIFIA - IMAG - INPG, France,
   H\'el\`ene Paugam-Moisy and Didier Puzenat, LIP - URA 1398 du CNRS,
   ENS Lyon, France

Abstract: MIMD computers are among the best parallel architectures
available. They are easily scalable with numerous processors and have
potentially huge computing power. One area of application for such
computers is the field of neural networks. This article presents a study,
and two parallel implementations, of a specific neural incremental
classifier of visual patterns. This neural network is incremental in that
network units are created whenever the classifier is not able to correctly
recognize a pattern. The dynamic nature of the model renders the parallel
algorithms rather complex.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-021:
----------------------------------------
Model Selection for Neural Networks: Comparing MDL and NIC
by Guido te Brake, Utrecht University,
   Joost N. Kok, Utrecht University,
   Paul M.B. Vit\'anyi, CWI, Amsterdam

Abstract: We compare the MDL and NIC methods for determining the correct
size of a feedforward neural network. The NIC method has to be adapted for
this kind of network. We include an experiment based on a small standard
problem.
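A two-part MDL score of the sort compared in NC-TR-95-021 can be
caricatured in a few lines: charge each candidate network for the bits
needed to state its weights plus the bits needed to encode its residual
errors, and keep the size that minimizes the total. The sketch below is
ours, with an arbitrary fixed precision per weight; it is not the report's
criterion.

import numpy as np

def mdl_score(n_params, residuals, bits_per_param=16):
    # Two-part code: model bits plus data bits under a Gaussian noise model.
    var = max(float(np.mean(np.square(residuals))), 1e-12)
    data_bits = 0.5 * len(residuals) * np.log2(2 * np.pi * np.e * var)
    return n_params * bits_per_param + data_bits

# Usage: fit networks of several hidden sizes, then keep the one whose
# (parameter count, residuals) pair gives the lowest mdl_score.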
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-023:
----------------------------------------
PAC Learning and Artificial Neural Networks
by Martin Anthony and Norman Biggs, London School of Economics and
Political Science, University of London

Abstract: In this article, we discuss the `probably approximately correct'
(PAC) learning paradigm as it applies to artificial neural networks. The
PAC learning model is a probabilistic framework for the study of learning
and generalization. It is useful not only for neural classification
problems, but also for learning problems more often associated with
mainstream artificial intelligence, such as the inference of Boolean
functions. In PAC theory, the notion of successful learning is formally
defined using probability theory. Very roughly speaking, if a large enough
sample of randomly drawn training examples is presented, then it should be
likely that, after learning, the neural network will classify most other
randomly drawn examples correctly. The PAC model formalises the terms
`likely' and `most'. Furthermore, the learning algorithm must be expected
to act quickly, since otherwise it may be of little use in practice. There
are thus two main emphases in PAC learning theory. First, there is the
issue of how many training examples should be presented. Secondly, there
is the question of whether learning can be achieved using a fast
algorithm. These are known, respectively, as the {\it sample complexity}
and {\it computational complexity} problems. This article provides a brief
introduction to these. We highlight the importance of the
Vapnik-Chervonenkis dimension, a combinatorial parameter which measures
the `expressive power' of a neural network, and describe how this
parameter quantifies fairly precisely the sample complexity of PAC
learning. In discussing the computational complexity of PAC learning, we
shall present a result which illustrates that in some cases the problem of
PAC learning is inherently intractable.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-024:
----------------------------------------
Graphs and Artificial Neural Networks
by Martin Anthony, London School of Economics and Political Science,
University of London

Abstract: `Artificial neural networks' are machines (or models of
computation) based loosely on the ways in which the brain is believed to
work. In this chapter, we discuss some links between graph theory and
artificial neural networks. We describe how some combinatorial
optimisation tasks may be approached by using a type of artificial neural
network known as a Boltzmann machine. We then focus on `learning' in
feedforward artificial neural networks, explaining how the graph structure
of a network and the hardness of graph-colouring quantify the complexity
of learning.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-025:
----------------------------------------
The Vapnik-Chervonenkis Dimension of a Random Graph
by Martin Anthony, Graham Brightwell, London School of Economics and
Political Science, University of London,
   Colin Cooper, University of North London

Abstract: In this paper we investigate a parameter defined for any graph,
known as the {\it Vapnik-Chervonenkis dimension} (or VC dimension). For
any vertex $x$ of a graph $G$, the closed neighbourhood $N(x)$ of $x$ is
the set of all vertices of $G$ adjacent to $x$, together with $x$.
We say that a set $D$ of vertices of $G$ is {\it shattered} if every
subset $R$ of $D$ can be realised as $R=D \cap N(x)$ for some vertex $x$
of $G$. The Vapnik-Chervonenkis dimension of $G$ is defined to be the
largest cardinality of a shattered set of vertices. This parameter can be
used to provide bounds on the complexity of a learning problem on graphs.
Our main result gives, for each positive integer $d$, the exact threshold
function for a random graph $G(n,p)$ to have VC~dimension $d$.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-026:
----------------------------------------
Probabilistic Decision Trees and Multilayered Perceptrons
by Pascal Bigot and Michel Cosnard, LIP, ENS, Lyon, France

Abstract: We propose a new algorithm to compute a multilayered perceptron
for classification problems, based on the design of a binary decision
tree. We show how to modify this algorithm for using ternary logic,
introducing a Don't-Know class. This modification could be applied to any
heuristic based on the recursive construction of a decision tree. Another
way of dealing with uncertainty for improving generalization performance
is to construct probabilistic decision trees. We explain how to modify the
preceding heuristics for constructing such trees and associating
probabilistic multilayered perceptrons.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-027:
----------------------------------------
A characterization of the existence of energies for neural networks
by Michel Cosnard, LIP, ENS, Lyon, France,
   Eric Goles, Universidad de Chile, Santiago, Chile

Abstract: In this paper we give, under an appropriate theoretical
framework, a characterization of neural networks which admit an energy. We
prove that a neural network admits an energy if and only if the weight
matrix satisfies two conditions: the diagonal elements are non-negative
and the associated incidence graph does not admit non-quasi-symmetric
circuits.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-028:
----------------------------------------
Improvement of Gradient Descent based Algorithms Training Multilayer
Perceptrons with an Evolutionary Initialization
by C\'edric G\'egout, \'Ecole Normale Sup\'erieure de Lyon, Lyon

Abstract: Gradient descent algorithms reducing the mean square error
computed on a training set are widely used for training real-valued
feedforward networks, because of their easy implementation and their
efficacy. But in some cases they are trapped in a local optimum and are
not able to find a good network. To eliminate these cases, one could
usually only restart the gradient descent, or find an initialization point
constructed with unreliable and training-set-dependent heuristics. This
paper presents a new method to find a good initialization point. An
evolutionary algorithm provides an individual whose phenotype is a neural
network; this individual is the best one from which a quick, efficient and
robust gradient descent can be made. The genotypes are real-valued vectors
containing the parameters of networks, so we use special genetic
operators. Simulation results show that this initialization reduces the
neural network training time and the training complexity, and improves the
robustness of gradient descent based algorithms.
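The division of labor in NC-TR-95-028 -- a population search that only has
to supply a starting point, followed by an ordinary gradient descent -- is
easy to mimic. The toy below is our own illustration rather than the
report's algorithm: it scores random weight vectors by the loss reached
after a few gradient steps and hands the winner to a longer descent. Here
`loss` and `grad` stand for any differentiable training loss and its
gradient.

import numpy as np

def evolve_then_descend(loss, grad, dim, pop_size=20, probe_steps=5,
                        final_steps=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(scale=0.5, size=(pop_size, dim))  # random genotypes

    def probe(w):
        # Fitness of an initialization: loss after a short descent from it.
        w = w.copy()
        for _ in range(probe_steps):
            w -= lr * grad(w)
        return loss(w)

    best = min(pop, key=probe).copy()   # most promising starting point
    for _ in range(final_steps):        # then run the real descent
        best -= lr * grad(best)
    return best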
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-029:
----------------------------------------
The Curse of Dimensionality and the Perceptron Algorithm
by Jyrki Kivinen, University of Helsinki,
   Manfred K.~Warmuth, University of California, Santa Cruz

Abstract: We give an adversary strategy that forces the Perceptron
algorithm to make $(N-k+1)/2$ mistakes when learning $k$-literal
disjunctions over $N$ variables. Experimentally we see that even for
simple random data, the number of mistakes made by the Perceptron
algorithm grows almost linearly with $N$, even if the number $k$ of
relevant variables remains a small constant. Thus, the Perceptron
algorithm suffers from the curse of dimensionality even when the target is
extremely simple and almost all of the dimensions are irrelevant. In
contrast, Littlestone's algorithm Winnow makes at most $O(k\log N)$
mistakes for the same problem. Both algorithms use linear threshold
functions as their hypotheses. However, Winnow does multiplicative updates
to its weight vector instead of the additive updates of the Perceptron
algorithm.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-030:
----------------------------------------
Identifying Regular Languages over Partially-Commutative Monoids
by Claudio Ferretti and Giancarlo Mauri, Universit\`a di Milano, Italy

Abstract: We define a new technique useful in identifying a subclass of
regular languages defined on a free partially commutative monoid (regular
trace languages), using equivalence and membership queries. Our algorithm
extends an algorithm defined by Dana Angluin in 1987 to learn DFA's. The
words of a trace language can be seen as equivalence classes of strings.
We show how to extract, from a given equivalence class, a string of an
unknown underlying regular language. These strings can drive the original
learning algorithm, which identifies a regular string language that also
defines the target trace language. In this way the algorithm applies also
to classes of unrecognizable regular trace languages and, as a corollary,
to a class of unrecognizable string languages. We also discuss bounds on
the number of examples needed to identify the target language and on the
time required to process them.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-031:
----------------------------------------
A Comparative Study For Forecasting Intra-daily Exchange Rate Data
by Sabine P Toulson, London School of Economics, University of London

Abstract: For the last few years neural nets have been applied to economic
and financial forecasting, where they have been shown to be increasingly
successful. This paper compares the performance of a two-hidden-layer
multi-layer perceptron (MLP) with conventional statistical techniques. The
statistical techniques used here consist of a structural model (SM) and
the stochastic volatility model (SV). After reviewing each of the three
models, a comparison between the MLP and the SM is made, investigating the
predictive power of both models for a one-step-ahead forecast of the
Dollar-Deutschmark exchange rate. Reasons are given for why the MLP is
expected to perform better than a conventional model in this case. A
further study gives results on the performance of an MLP and a SV model in
predicting the volatility of the Dollar-Deutschmark exchange rate, and a
combination of both models is proposed to decrease the forecasting error.
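The additive-versus-multiplicative contrast in NC-TR-95-029 fits in a
dozen lines. The sketch below is illustrative only -- a basic Winnow with
promotion factor 2 and elimination, threshold theta; the conventions are
ours, not the report's.

import numpy as np

def perceptron_step(w, x, y):
    # Additive update; x is a feature vector, y in {-1, +1}.
    if y * (w @ x) <= 0:
        w += y * x                  # add or subtract the whole example
    return w

def winnow_step(w, x, y, theta):
    # Multiplicative update for monotone disjunctions;
    # x is a 0/1 vector, y in {0, 1}, theta typically n/2.
    y_hat = 1 if w @ x >= theta else 0
    if y_hat == 0 and y == 1:
        w[x == 1] *= 2.0            # promote weights of active variables
    elif y_hat == 1 and y == 0:
        w[x == 1] = 0.0             # eliminate weights of active variables
    return w

# For a k-literal disjunction over N variables, Winnow started from
# w = np.ones(N) makes O(k log N) mistakes, while the Perceptron can be
# forced to make on the order of N mistakes on the same targets.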
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-032:
----------------------------------------
Characterizations of Learnability for Classes of $\{ 0,...,n \}$-valued
Functions
by Shai Ben-David, Technion, Israel,
   Nicol\`o Cesa-Bianchi, Universit\`a di Milano, Italy,
   David Haussler, University of California at Santa Cruz, USA,
   Philip M. Long, Duke University, USA

Abstract: We investigate the PAC learnability of classes of
$\{ 0,...,n \}$-valued functions ($n < \infty$). For $n=1$ it is known
that the finiteness of the Vapnik-Chervonenkis dimension is necessary and
sufficient for learning. For $n > 1$ several generalizations of the
VC-dimension, each yielding a distinct characterization of learnability,
have been proposed by a number of researchers. In this paper we present a
general scheme for extending the VC-dimension to the case $n > 1$. Our
scheme defines a wide variety of notions of dimension in which all these
variants of the VC-dimension, previously introduced in the context of
learning, appear as special cases. Our main result is a simple condition
characterizing the set of notions of dimension whose finiteness is
necessary and sufficient for learning. This provides a variety of new
tools for determining the learnability of a class of multi-valued
functions. Our characterization is also shown to hold in the ``robust''
variant of the PAC model and for any ``reasonable'' loss function.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-033:
----------------------------------------
Constructing Computationally Efficient Bayesian Models via Unsupervised
Clustering
by Petri Myllym\"aki and Henry Tirri, University of Helsinki, Finland

Abstract: Given a set of samples of an unknown probability distribution,
we study the problem of constructing a good approximative Bayesian network
model of the probability distribution in question. This task can be viewed
as a search problem, where the goal is to find a maximal probability
network model, given the data. In this work, we do not make an attempt to
learn arbitrarily complex multi-connected Bayesian network structures,
since such resulting models can be unsuitable for practical purposes due
to the exponential amount of time required for the reasoning task.
Instead, we restrict ourselves to a special class of simple
tree-structured Bayesian networks called Bayesian prototype trees, for
which a polynomial time algorithm for Bayesian reasoning exists. We show
how the probability of a given Bayesian prototype tree model can be
evaluated, given the data, and how this evaluation criterion can be used
in a stochastic simulated annealing algorithm for searching the model
space. The simulated annealing algorithm provably finds the maximal
probability model, provided that a sufficient amount of time is used.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-034:
----------------------------------------
Mapping Bayesian Networks to Boltzmann Machines
by Petri Myllym\"aki, University of Helsinki, Finland

Abstract: We study the task of finding a maximal a posteriori (MAP)
instantiation of Bayesian network variables, given a partial value
assignment as an initial constraint. This problem is known to be NP-hard,
so we concentrate on a stochastic approximation algorithm, simulated
annealing. This stochastic algorithm can be realized as a sequential
process on the set of Bayesian network variables, where only one variable
is allowed to change at a time.
Consequently, the method can become impractically slow as the number of variables increases. We present a method for mapping a given Bayesian network to a massively parallel Boltzmann machine neural network architecture, in the sense that instead of using the normal sequential simulated annealing algorithm, we can use a massively parallel stochastic process on the Boltzmann machine architecture. The neural network updating process provably converges to a state which solves a given MAP task.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-035:
----------------------------------------
A MINIMAL LENGTH ENCODING SYSTEM
by Tony Bellotti, London Electricity plc, UK
   Alex Gammerman, Royal Holloway, University of London, UK

Abstract:
Emily is a project to develop a computer system that can organise symbolic knowledge given in a high-level relational language, based on the principle of minimal length encoding (MLE). The purpose of developing this system is to test the hypothesis that minimal length encoding can be used as a general method for induction. A prototype version, Emily2, has already been implemented. It is the purpose of this paper to describe this system, to present some of our results and to indicate future developments.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-036:
----------------------------------------
Techniques in Neural Learning
by Pascal Koiran, DIMACS, Rutgers University
   John Shawe-Taylor, Royal Holloway, University of London

Abstract:
This paper takes ideas developed in a theoretical framework by Maass and adapts them for a practical learning algorithm for feedforward sigmoid neural networks. A number of different techniques are presented which are based loosely around the common theme of taking advantage of the linearity of the net input to a neuron, or in other words the fact that there is only a single non-linearity per neuron. Some experimental results are included, though many of the ideas are as yet untested. The paper can therefore be viewed as a tool box offering a selection of possible techniques for incorporation in practical, heuristic learning algorithms for multi-layer perceptrons.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-037:
----------------------------------------
$\P\neq \NP$ over the non-standard reals implies $\P\neq \NP$ over $\R$
by Christian Michaux, University of Mons-Hainaut, Belgium

Abstract:
Blum, Shub and Smale showed the existence of a $\NP$-complete problem over the real closed fields in the framework of their theory of computation over the reals. This allows one to ask the $\P\neq \NP$ question over real closed fields. Here we show that $\P\neq\NP$ over a real closed extension of the reals implies $\P\neq \NP$ over the reals. We also discuss the converse. This leads us to define some subclasses of $\P/$poly. Finally we show that the transfer result about $\P\neq \NP$ is an instance of a much more abstract result.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-038:
----------------------------------------
Computing with Truly Asynchronous Threshold Logic Networks
by Pekka Orponen, Technical University of Graz, Austria

Abstract:
We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric ``Hopfield net'') can be simulated on a network of the same type, but without any a priori constraints on the order of updates of the units.
Together with earlier constructions, the results show that the truly asynchronous network model is computationally equivalent to the seemingly more powerful models with either ordered sequential or fully parallel updates.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-040:
----------------------------------------
Descriptive Complexity Theory over the Real Numbers
by Erich Gr\"adel and Klaus Meer, RWTH Aachen, Germany

Abstract:
We present a logical approach to complexity over the real numbers with respect to the model of Blum, Shub and Smale. The logics under consideration are interpreted over a special class of two-sorted structures, called {\em $\R$-structures}: They consist of a finite structure together with the ordered field of reals and a finite set of functions from the finite structure into $\R$. They are a special case of the {\em metafinite structures} introduced recently by Gr\"adel and Gurevich. We argue that $\R$-structures provide the right class of structures to develop a descriptive complexity theory over $\R$. We substantiate this claim by a number of results that relate logical definability on $\R$-structures with complexity of computations of BSS-machines.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-042:
----------------------------------------
Knowledge Extraction From Neural Networks: A Survey
by R. Baron, ENS-Lyon CNRS, France

Abstract:
Artificial neural networks may learn to solve arbitrarily complex problems, but the knowledge they acquire is hard to exhibit. Thus neural networks appear as ``black boxes'', whose decisions cannot be explained. In this survey, different techniques for knowledge extraction from neural networks are presented. Early work showed the value of studying internal representations, but these studies were domain-specific. Authors have therefore tried to extract a more general form of knowledge, such as the rules of an expert system. In a more restricted field, it is also possible to extract from neural networks automata that recognize a formal language. Finally, numerical information may be obtained in process modelling, and this may be of interest in industrial applications.

-----------------------
Each of the above reports can be accessed and printed as follows (using NC-TR-95-029 as an example):

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-029.ps.Z
ftp> bye
% zcat nc-tr-95-029.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility.

A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:
http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

Best wishes
John Shawe-Taylor

From rsun at cs.ua.edu Wed May 10 14:17:14 1995
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 10 May 1995 13:17:14 -0500
Subject: No subject
Message-ID: <9505101817.AA19625@athos.cs.ua.edu>

In relation to the workshop

> The IJCAI Workshop on
> Connectionist-Symbolic Integration:
> From Unified to Hybrid Approaches
>
> to be held at IJCAI'95
> Montreal, Canada
> August 19-20, 1995

I would like to update a bibliography of work on connectionist-symbolic integration. About two years ago, I solicited input from this mailing list and compiled a bibliography on the above topic (available in Neuroprose).
However, since then, there have been a considerable number of new developments that need to be collected and categorized. Therefore, I want to update (and re-compile) that bibliography. Please send me any of the following:

-- New publications since Spring 1993
-- Earlier publications that were inadvertently omitted from the current bibliography
-- Lists of your own publications in this area, preferably annotated (if they are not already in the bibliography).

My e-mail address is: rsun at cs.ua.edu

If you have hardcopies that you can send me, here is my address:
Dr. Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487
(205) 348-6363

Your help is greatly appreciated. However, the decision whether or not to include a paper in the bibliography is solely the responsibility of the editor.

---Ron

p.s.
---------
The previous bibliography (36 pages) on connectionist models with symbolic processing is available in neuroprose. To get a copy of the bibliography, use FTP as follows:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.nn-sp-bib.ps.Z
ftp> quit
unix> uncompress sun.nn-sp-bib.ps.Z
unix> lpr sun.nn-sp-bib.ps (or however you print postscript)

A cleaned-up version of the bibliography is published in the book:
Ron Sun and Larry Bookman (eds.), Computational Architectures Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, 1994.

From cabestan at eel.upc.es Thu May 11 16:45:48 1995
From: cabestan at eel.upc.es (Joan Cabestany)
Date: Thu, 11 May 1995 16:45:48 UTC+0100
Subject: IWANN'95 Programme
Message-ID: <1133*/S=cabestan/OU=eel/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS>

Dear colleagues,

IWANN'95 (International Workshop on Artificial Neural Networks) will be held in Torremolinos (Malaga), Spain, next June 7-9. All people interested in the final Programme and details can obtain a postscript file from our server:

ftp ftp.upc.es
username: anonymous
password: e-mail address
cd upc/eel
get iwann95.ps (approx. 120K)

Yours,
J. Cabestany

From bhuiyan at mars.elcom.nitech.ac.jp Fri May 12 03:43:39 1995
From: bhuiyan at mars.elcom.nitech.ac.jp (bhuiyan@mars.elcom.nitech.ac.jp)
Date: Fri, 12 May 95 16:43:39 +0900
Subject: Pre-print Available via FTP
Message-ID: <9505120743.AA28284@mars.elcom.nitech.ac.jp>

FTP-host: ftp.elcom.nitech.ac.jp (133.68.21.193)
FTP-filename: /pub/WCNN_95.ps.gz
URL: ftp://133.68.21.193/pub/WCNN_95.ps.gz

Performance Evaluation of a Neural Network based Edge Detector for high-contrast images

Md. Shoaib Bhuiyan and Akira Iwata
Dept. Electrical & Computer Engineering
Nagoya Institute of Technology, Nagoya, Japan 466

To appear in Proc. World Congress on Neural Networks, 1995

ABSTRACT
The performance of a neural network based edge detector for high-contrast images has been investigated both quantitatively and qualitatively. We have compared its performance for both synthetic and natural images with those of four existing edge detection methods, namely Sobel's operator, Johnson's contrast-based Sobel operator, Marr-Hildreth's Laplacian-of-Gaussian (LoG) operator, and Canny's operator. We have also investigated its noise immunity and compared it with that of the above-mentioned methods. We have found the performance of the neural network based edge detector to be consistently better, especially for images where the illumination varies widely.
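For readers who want a concrete baseline for the comparison above, here is a minimal sketch of the classical Sobel detector (the simplest of the four reference methods). The kernels are the standard Sobel masks, but the toy image and the threshold fraction are assumptions for illustration only, not the authors' test data or code.

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(image, frac=0.25):
    # standard Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    gx, gy = convolve(image, kx), convolve(image, ky)
    mag = np.hypot(gx, gy)           # gradient magnitude
    return mag > frac * mag.max()    # simple global threshold

# toy high-contrast image: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print(sobel_edges(img).sum(), "edge pixels")

A single global threshold like this is exactly what degrades when illumination varies across the image, which is the regime where the neural detector above is reported to do better.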
-----------------------------------------------------------------
The paper can be retrieved via anonymous ftp by following these instructions:

unix> ftp ftp.elcom.nitech.ac.jp (133.68.21.193)
ftp:name> anonymous
Password:> your complete e-mail address
ftp> cd pub
ftp> get WCNN_95.ps.gz
ftp> bye
unix> gunzip WCNN_95.ps.gz
unix> lpr WCNN_95.ps

WCNN_95.ps is 5.57Mb, five pages in postscript format. The paper presents some results of our previously proposed algorithm to extract edges from an image with high contrast (also available by ftp from the same location, filename: ICONIP.ps.gz) and compares them with four existing edge detection methods. Your feedback is very much appreciated (bhuiyan at mars.elcom.nitech.ac.jp)

--Md. Shoaib Bhuiyan

From jordan at psyche.mit.edu Sun May 14 18:57:22 1995
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Sun, 14 May 95 18:57:22 EDT
Subject: Tech Report Available: EM algorithm
Message-ID:

FTP-host: psyche.mit.edu
FTP-file: pub/jordan/AIM-1520.ps.Z

The following paper is now available by anonymous ftp.
==================================================================
On Convergence Properties of the EM Algorithm for Gaussian Mixtures (10 pages)

Lei Xu and Michael I. Jordan
CUHK and MIT

Abstract: We build up the mathematical connection between the ``Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
==================================================================

From massone at mimosa.eecs.nwu.edu Mon May 15 11:19:38 1995
From: massone at mimosa.eecs.nwu.edu (Lina Massone)
Date: Mon, 15 May 1995 10:19:38 -0500
Subject: post-doc opening (forwarded)
Message-ID: <199505151519.KAA03259@mimosa.eecs.nwu.edu>

I am posting this message for a friend. Please do not send inquiries to me!
L. Massone

****************************************************
Motor Control Postdoctoral Fellow
Start Date: September 1, 1995

I seek a postdoctoral fellow for an NSF funded study on the learning of coordination during complex multijoint voluntary actions. I use computer simulation and empirical methods to address phenomena and mechanisms that underlie the learning of coordination between balance, posture and voluntary task goals during multijoint pulls made by freely-standing humans. Two new, large movement analysis laboratories are available with full computerized capabilities for collecting and analyzing biomechanical and EMG data (force plates, motion analysis, load cells).

Applicants must have completed their Ph.D. in motor systems neuroscience, engineering, kinesiology or a related discipline. Knowledge of biomechanics (including modeling), systems analysis, nonlinear dynamics, motor psychology or statistics is highly desirable. Good communication skills are a strong plus. Opportunities exist for participating in seminars and courses offered through the Institute for Neuroscience, Programs in Medical Biomechanics, Physiology, Programs in Physical Therapy, and other departments.
Please send a letter of application (including career goals), vita, and the names, addresses and phone numbers of two references to Wynne A. Lee, Ph.D., Programs in Physical Therapy, Northwestern University Medical School, 645 N. Michigan Ave., Chicago IL 60611-2814. Email: wlee at casbah.acns.nwu.edu

-----------------------------------------------
Wynne A. Lee, Ph.D.
Programs in Physical Therapy, and
The Institute for Neuroscience
Northwestern University Medical School
645 N. Michigan Avenue (Suite 1100)
Chicago IL 60611-2814
voice: 312-908-6795
fax: 312-908-0741
email: wlee at casbah.acns.nwu.edu
-----------------------------------------------

From lbl at nagoya.riken.go.jp Mon May 15 22:16:09 1995
From: lbl at nagoya.riken.go.jp (Bao-Liang Lu)
Date: Tue, 16 May 1995 11:16:09 +0900
Subject: Paper available: parallel and modular multi-sieving net
Message-ID: <9505160216.AA20492@xian.riken.go.jp>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/lu.multisieve.ps.Z

The following paper is now available by anonymous ftp.
==========================================================================
A Parallel and Modular Multi-sieving Neural Network Architecture for Constructive Learning

Bao-Liang Lu (RIKEN)
Koji Ito (Toyohashi Univ. of Tech.; RIKEN)
Hajime Kita (Kyoto Univ.)
Yoshikazu Nishikawa (Kyoto Univ.)

Abstract: In this paper we present a parallel and modular multi-sieving neural network (PMSN) architecture for constructive learning. This PMSN architecture is different from existing constructive learning networks such as the cascade correlation architecture. The constructing element of the PMSNs is a compound modular network rather than a hidden unit. This compound modular network is called a sieving module (SM). In the PMSN a complex learning task is decomposed into a set of relatively simple subtasks automatically. Each of these subtasks is solved by a corresponding individual SM and all of these SMs are processed in parallel.

(It will appear in the Proc. of the Fourth International Conference on Artificial Neural Networks (ANN'95), Cambridge, UK, 26-28 June 1995. 6 pages. No hard copies available.)
============================================================================

Bao-Liang Lu
---------------------------------------------
Bio-Mimetic Control Research Center, RIKEN
3-8-31 Rokuban, Atsuta-ku, Nagoya 456, Japan
Phone: +81-52-654-9137
Fax: +81-52-654-9138
Email: lbl at nagoya.riken.go.jp

From gbugmann at school-of-computing.plymouth.ac.uk Mon May 15 06:00:26 1995
From: gbugmann at school-of-computing.plymouth.ac.uk (Guido.Bugmann xtn 2566)
Date: Mon, 15 May 1995 11:00:26 +0100 (BST)
Subject: Position for Research Assistant
Message-ID:

University of Plymouth
School of Computing
Neurodynamics Research Group

Postgraduate Research Assistant

Applications are invited for a University-funded three-year postgraduate research assistantship, to carry out an investigation within a broad range of topics related to the development of novel, biologically-inspired, neural network based learning control systems, with particular application to the control of autonomous mobile robots. The range of work extends from theoretical studies of cognition and intelligent behaviour, through computational modelling of brain function, to the construction of neural network based controllers. The research will be carried out within the Neurodynamics Research Group of the School of Computing, further details of which are given below.
The research assistant will be required to register for a PhD degree (fees are waived for University staff) and to carry out limited teaching/demonstrating duties.

We are looking for high quality candidates who have, or are in the process of completing, a first degree or masters degree in a relevant discipline, eg electronic/mechanical engineering, mathematics, psychology, cognitive science, who are willing and able to use computational tools for either simulation or real-time control, and who have a strong interest in pursuing research in neural networks/systems, adaptive learning systems, and control systems.

The salary for the post will be on the University Research Assistant scale, in the range £9,231 to £12,756 p.a., dependent upon age, qualifications, experience, etc.

Informal discussions about the post can be held with Dr Guido Bugmann (e-mail: gbugmann at soc.plym.ac.uk; tel: 01752 232566). Applications (by mail or email) should comprise a CV, a short description of interests and the names of 2 referees. Applications should be sent to Guido Bugmann at the address below, as soon as possible. The position will stay open until a suitable candidate is found.

-----------------------------
Dr. Guido Bugmann
Neurodynamics Research Group
School of Computing
University of Plymouth
Plymouth PL4 8AA
United Kingdom
-----------------------------
Tel: (+44) 1752 23 25 66 / 41
Fax: (+44) 1752 23 25 40
Email: gbugmann at soc.plym.ac.uk
-----------------------------

The Neurodynamics Research Group - Background Information

The aim of this group is to investigate and develop computational neural models of brain behaviour in sensory perception, learning, memory and motor action planning and generation, and to use these models to develop novel artificial systems for intelligent sensory-motor control, eg of autonomous robots.

The group was started in September 1991 and is led by Professor Mike Denham. Researchers in the group currently include a postdoctoral University Research Fellow, Dr Guido Bugmann, who has an international reputation in the field of neural dynamics, and an EPSRC-funded postdoctoral Research Fellow, Dr Raju Bapi, who was a member of Prof Dan Levine's research group at the University of Texas and who has expertise in the modelling of frontal lobe behaviour. The Group also has one University-funded Research Assistant and four research students. The Group was also recently expanded by the appointment of a Senior Lecturer in Artificial Intelligence, Dr Sue McCabe, who was previously at the Royal Naval Engineering College and has expertise in AI, intelligent control and intelligent sensing, especially neural network models of auditory processing.

The Group was awarded an EPSRC research grant, starting in August 1994, to investigate a novel biologically-inspired architecture for an intelligent control system; this is a collaborative project with Professor John Taylor and the Centre for Neural Networks at Kings College London. So far, the Group has been working on specific parts of the proposed integrated learning control system and has been able to contribute significantly to knowledge on visual information processing and on planning.
As a result of our work over the last year, we have now begun to define the approach necessary for solving the deep theoretical and practical problems of integrating the various parts of the proposed system, based around a "sensory-action" approach to object perception and recognition and to the learning of spatial maps and adaptive behaviours for changing control objectives and environments.

A novel neural network based system for control of an autonomous mobile robot has been developed and a simulation has been constructed using the Cortex-Pro system on a 486 PC. This simulated system currently controls a real robot with a video camera and provides a practical working example of the basic architecture of the proposed learning control system. The intention is to build more advanced and detailed models of individual modules into the system as a result of parallel conceptual and theoretical research, eg into perception, learning and motor planning, in the Group.

Recent publications:

"A model for latencies in the visual system" Bugmann, G. and Taylor, J.G. (1993) Proc. 3rd Conf. on Artificial Neural Networks (ICANN'93, Amsterdam), Gielen, S. and Kappen, B. (eds), p. 165-168.

"Modelling of the high firing variability of real cortical neurons with the temporal noisy-leaky integrator neuron model" Christodoulou, C., Clarkson, T., Bugmann, G. and Taylor, J.G. (1994) Proc. IEEE Int. Conf. on Neural Networks (ICNN'94), part of the World Congress on Computational Intelligence (WCCI'94), Orlando, Florida, USA, 2239-2244.

"An artificial neural network architecture for multiple temporal sequence processing" McCabe, S.L. and Denham, M.J. (1994) Proc. World Congress on Neural Networks (WCNN'94), San Diego, California, USA, 738-743.

"Role of short-term memory in visual information processing" Bugmann, G. and Taylor, J.G. (1994) Proc. of Int. Symp. on Dynamics of Neural Processing, Washington, DC, USA, 132-136.

"Learning to control intelligently" Denham, M.J. (1994) Proc. IEE Int. Conf. Control'94, Warwick, UK (plenary paper).

"Route finding by neural net" Bugmann, G., Taylor, J.G. and Denham, M.J. (1995) in Taylor, J.G. (ed) Neural Networks, Alfred Waller Ltd, Henley on Thames, pp. 217-230.

"Robot control using temporal sequence learning" Denham, M.J. and McCabe, S.L. (1995) Proc. World Congress on Neural Networks (WCNN'95), Washington D.C., USA (accepted for presentation).

"Segmentation of the auditory scene" McCabe, S.L. and Denham, M.J. (1995) Proc. World Congress on Neural Networks (WCNN'95), Washington D.C., USA (accepted for presentation).

-------------------------------------------------------------------

From p.j.b.hancock at psych.stir.ac.uk Tue May 16 16:23:29 1995
From: p.j.b.hancock at psych.stir.ac.uk (Peter Hancock)
Date: Tue, 16 May 95 16:23:29 BST
Subject: Info theory workshop
Message-ID: <9505161523.AA1552850034@nevis.stir.ac.uk>

Call for contributions.

Workshop on Information Theory and the Brain
4-5th September 1995, University of Stirling, Scotland

What is the goal of sensory coding? What algorithms help the brain to achieve that goal? What is the information content of spiking in neurons? Where is the trade-off between redundancy and decorrelation? How do internal representations reflect the statistics of sensory input? How is input from different modalities combined? These are the kind of issues to be discussed. The main thrust of the workshop is in furthering our understanding of what is happening in the brain, but with an eye also to possible applications of such algorithms.
Numbers will be limited to around 30, for reasons of space and informality. Costs are still to be determined, but will be minimal (less than 50 pounds, excluding accommodation). It is hoped that proceedings will be published after the event. Postgraduates are particularly welcome. Stirling is situated in the centre of Scotland, with easy access by road, rail, and international airports at Edinburgh and Glasgow. The Edinburgh International Festival and Fringe will still be in progress and well worth a visit.

Submissions: Please submit a one-page abstract, preferably by email, to Peter Hancock, pjh at psych.stir.ac.uk, by 30th June 1995. We expect that most people attending will contribute in some form.

Organising committee:
Roland Baddeley (Oxford)
Peter Foldiak (St. Andrews)
Colin Fyfe (Paisley)
Peter Hancock (Stirling)
Jim Kay (SASS, Aberdeen)
Mark Plumbley (King's College London)

Further information from Peter Hancock, Department of Psychology, University of Stirling, FK9 4LA, UK. Phone (+44) 1786 467675. Fax (+44) 1786 467641. Email pjh at psych.stir.ac.uk

Note: the first question above is borrowed from David Field: What is the Goal of Sensory Coding?, Field, D., Neural Computation 6, 559-601, 1994.

--
-------------------------------------------------------
Peter Hancock
Department of Psychology      0 0    Face
University of Stirling         |     Research
FK9 4LA, UK                   \_/    Group
Phone 01786 467675  Fax 01786 467641
pjh at psych.stir.ac.uk   http://nevis.stir.ac.uk/~pjh
-------------------------------------------------------

From S.Khebbal at cs.ucl.ac.uk Tue May 16 13:35:33 1995
From: S.Khebbal at cs.ucl.ac.uk (S.Khebbal@cs.ucl.ac.uk)
Date: Tue, 16 May 95 18:35:33 +0100
Subject: New Intelligent Hybrid Systems Book
Message-ID:

NEW BOOK ANNOUNCEMENT

INTELLIGENT HYBRID SYSTEMS
S. GOONATILAKE and S. KHEBBAL
University College London

There is now a growing realisation in the intelligent systems community that many complex problems require hybrid solutions. Increasingly, hybrid systems combining genetic algorithms, fuzzy logic, neural networks, and expert systems are proving their effectiveness in a wide variety of real-world problems. This timely book brings together leading researchers from the United States, Europe and Asia who are pioneering the theory and application of Intelligent Hybrid Systems. The book provides a definition of hybrid systems, summarises the current state of the art, and details innovative methods for integrating different intelligent techniques. Application examples are drawn from domains including industrial control, financial and business modelling, and cognitive simulation. The book is also intended to equip researchers, application developers and managers with key reference and resource material for the successful development of hybrid systems.

CONTENTS:
========
Chap 1: Intelligent Hybrid Systems: Issues, Classes and Future Trends - Suran Goonatilake & Sukhdev Khebbal, University College London, UK.

PART ONE: FUNCTION-REPLACING HYBRIDS
====================================
Chap 2: Fuzzy Controller Synthesis with Neural Network Process Models - Wendy Foslien & Tariq Samad, Honeywell SSDC, California, USA.
Chap 3: Replacing the Pattern Matcher of an Expert System with a Neural Network - Henry Tirri, Univ. of Helsinki, Finland.
Chap 4: Genetic Algorithms and Fuzzy Logic for Adaptive Process Control - Charles Karr, U.S. Bureau of Mines, USA.
Chap 5: Neural Network Weight Selection Using Genetic Algorithms - David Montana, BBN Systems & Technologies Inc, USA.
PART TWO: INTERCOMMUNICATING HYBRIDS ==================================== Chap 6: A Unified Approach For Engineering Design - David Powell, Michael Skolnick & Shi Shing Tong, GEC, USA. Chap 7: A Hybrid System for Data Mining - Randy Kerber, Brian Livezey, & Evangelos Simoudis, Lockheed AI Center, Palo Alto, USA. Chap 8: Using Fuzzy Pre-processing with Neural Networks for Chemical Process Diagnostic Problems - Casimer Klimasaukas, NeuralWare, USA. Chap 9: A Multi Agent Approach for the Integration of Neural Networks and Expert Systems - Andreas Scherer & Gunter Schlageter, Praktische Informatik, FernUniversitaet, Hagen, Germany. PART THREE: POLYMORPHIC HYBRIDS =============================== Chap 10: Integrating Symbol Processing Systems and Connectionist Networks - Vasant Honavar & Leonard Uhr, Iowa State University & Dept of Computer Science, University of Wisconsin-Madison, USA. Chap 11: Reasoning with Rules and Variables in Neural Networks - Venkat Ajjanagadde & Lokendra Shastri, University of Pennsylvania, USA. Chap 12: The NeurOagent: A Neural Multi-agent Approach for Modelling, Distributed Processing and Learning - Khai Minh Pham, InferOne, France. Chap 13: Genetic Programming of Neural Networks: Theory and Practice - Frederic Gruau, Grenoble, France. PART FOUR: DEVELOPING HYBRID SYSTEMS ==================================== Chap 14: Tools and Environments for Hybrid Systems - Sukhdev Khebbal & Danny Shamhong, University College London, UK. ISBN 0471 94242 1 300pp January 1995 (Sterling) 29.95/$47.95 John Wiley & Sons Ltd, Baffins Lane, Chichester, West Sussex, PO19 1UD, UK. (also offices in New York, Brisbane, Toronto, and Singapore) There is also a World Wide Web page on Intelligent Hybrid Systems at : http://www.cs.ucl.ac.uk/staff/skhebbal/ihs and for the Intelligent Hybrid Systems Book at : http://www.cs.ucl.ac.uk/staff/skhebbal/ihs/ihsbook.html =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= # MAIL ADDRESS : | EMAIL ADDRESS : # # Sukhdev Khebbal, | S.Goonatilke at cs.ucl.ac.uk # # Suran Goonatilake, | S.Khebbal at cs.ucl.ac.uk # # Department of Computer Science,|------------------------------------# # University College London, | TELEPHONE NUMBERS : # # Gower Street, | Voice: +44 (0)171 391 1329 # # London WC1E 6BT. | Fax : +44 (0)171 387 1397 # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From markey at dendrite.cs.colorado.edu Tue May 16 14:11:27 1995 From: markey at dendrite.cs.colorado.edu (Kevin Markey) Date: Tue, 16 May 1995 12:11:27 -0600 Subject: NSF cognitive & behavioral budget cuts Message-ID: <199505161811.MAA22596@dendrite.cs.colorado.edu> The following alert was originally posted in Info-Childes. Congress will act on the budget resolution by the end of this week. ------------------------------------------------------------------------- EMERGENCY ACTION ALERT >From the Federation of Behavioral, Psychological and Cognitive Sciences The House Budget Committee has recommended the complete elimination of NSF research funding for Psychology, Anthropology, Sociology, Linguistics, Political Science, Economics, Geography, Cognitive Science, Decision, Risk and Management Sciences, History of Science, and Statistical Research for the Behavioral and Social Sciences-- as NSF's contribution to balancing the Federal budget. There is no doubt that NSF funding will be cut in the effort to balance the budget. But to selectively wipe out the behavioral and social sciences goes far beyond simply saving money. 
This is the most important crisis these sciences have faced since Ronald Reagan attempted to eliminate the same sciences in the early 1980s. Action on this will happen very quickly. The Budget Committee approved the budget package on May 11. The vote on the package by the full House will happen sometime between the 15th and 18th of May. In all likelihood, the budget resolution will pass the House unaltered. The Appropriations Committee will be bound by the spending limits imposed by the Budget Committee. But it need not be bound by the particular cuts recommended by the Budget Committee!

Unfortunately, the House leadership has also made it known that no program that lacks a current authorization will be funded. The National Science Foundation is not currently authorized. Efforts to pass its authorization failed last year in the Senate. The House Science Committee Chair, Robert Walker (R-PA), has said that as soon as the budget is passed, the Science Committee will proceed to report its authorizations, which include, among other things, NSF, NASA, and the research programs of the Department of Energy. Robert Walker is also the Vice-Chair of the Budget Committee, and he played a key role in determining the selective cuts at NSF. In a news conference on May 12, Walker said that the Directorate containing the research programs mentioned above was created simply because it was "politically correct" and that it is now time to make a correction. This means that there is little chance the NSF authorization from his Committee will contain an authorization for the Social, Behavioral, and Economic Sciences Directorate. If the Committee does not authorize the Directorate, the Appropriations Committee cannot fund the research programs it contains. So scientists must pay close attention to actions of the Budget, Appropriations, and authorizing committees.

The only way the course of events can be changed is for concerned citizens to let their elected representatives know that they as voters do not approve of these ideological cuts masquerading as budget balancing measures. You must take it on yourself immediately to

1) write or call your own representative's and senator's offices to express your disapproval;

2) send a copy of your letter to: Robert Walker, George Brown (ranking minority member of the Science Committee and a likely ally of behavioral and social scientists), and Jerry Lewis (Chairman of the House Appropriations Subcommittee that appropriates money for the National Science Foundation). And this next thing is equally important: SEND, FAX OR EMAIL A COPY OF YOUR CORRESPONDENCE TO THE FEDERATION OF BEHAVIORAL, PSYCHOLOGICAL, AND COGNITIVE SCIENCES. We have to be able to monitor how great an impact behavioral and social scientists are having, and the only way we can do that is by keeping track of how many contacts from scientists congressional offices have received. Any letter to Congress may be addressed as follows: Representative's name, U.S. House of Representatives (or U.S. Senate), Washington, D.C. 20515 (House) or 20510 (Senate). The Federation email is federation at apa.org. The Federation fax is (202) 336-6158. If you need more information, our telephone number is (202) 336-5920.

3) Help us get the word out. Please see that the anthropology, sociology, linguistics, economics, political science, cognitive science, and geography departments on your campus receive this action alert as well.

4) It is very important that elected representatives do not hear only from the scientists affected.
If you have acquaintances in the physical or biological sciences or the university administration who would write a letter or make a phone call to an elected representative, do everything you can to get such a communication sent.

Margaret Jean Intons-Peterson
Department of Psychology
Indiana University
Bloomington, Indiana 47405
INTONS at INDIANA.EDU
Phone: 812-855-3991
Fax: 812-855-4691

From jaap.murre at mrc-apu.cam.ac.uk Fri May 12 12:40:47 1995
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Fri, 12 May 1995 17:40:47 +0100
Subject: Paper on connectivity of the brain
Message-ID: <199505121640.RAA24847@sirius.mrc-apu.cam.ac.uk>

The following paper has been added to our ftp-site:

J.M.J. Murre & D.P.F. Sturdy (submitted). The connectivity of the brain: multi-level quantitative analysis. Revised version submitted to Biological Cybernetics.

Abstract
We develop a mathematical formalism for calculating connectivity volumes generated by specific topologies with various physical packing strategies. We consider four topologies (full, random, nearest neighbor, and modular connectivity) and three physical models: (i) interior packing, where neurons and connection fibers are intermixed, (ii) sheeted packing, where neurons are located on a sheet with fibers running underneath, and (iii) exterior packing, where the neurons are located at the surfaces of a cube or sphere with fibers taking up the internal volume. By extensive cross-referencing of available human neuroanatomical data we produce a consistent set of parameters for the whole brain, the cerebral cortex, and the cerebellar cortex. By comparing these inferred values with those predicted by the expressions, we draw the following general conclusions for the human brain, cortex, and cerebellum: (i) Interior packing is less efficient than exterior packing (in a sphere). (ii) Fully and randomly connected topologies are extremely inefficient. More specifically, we find evidence that different topologies and physical packing strategies might be used at different scales. (iii) For the human brain at a macrostructural level, modular topologies on an exterior sphere approach the data most closely. (iv) On a mesostructural level, laminarization and columnarization are evidence of the superior efficiency of organizing the wiring as sheets. (v) Within sheets, microstructures emerge in which interior models are shown to be the most efficient. With regard to interspecies similarities and differences we conjecture (vi) that the remarkable constancy of the number of neurons per underlying mm2 of cortex may be the result of evolution minimizing interneuron distance in grey matter, and (vii) that the topologies that best fit the human brain data should not be assumed to apply to other mammals, such as the mouse, for which we show that a random topology may be feasible for the cortex.

The paper is 39 pages, single spaced.
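As a rough back-of-envelope companion to conclusion (ii) above, the following sketch estimates the fibre volume implied by different connection counts. All parameter values here are assumed orders of magnitude chosen for illustration; they are not taken from the paper's formalism or fitted data.

import math

# Assumed, illustrative parameters (not the paper's):
n_neurons   = 1e10        # order of magnitude for the human brain
mean_length = 0.05        # metres; long-range fibre ~ brain radius
radius      = 1e-7        # metres; a thin unmyelinated fibre
cross_sec   = math.pi * radius ** 2   # fibre cross-sectional area

for name, n_conn in [("full (N^2 connections)", n_neurons ** 2),
                     ("sparse (1e4 per neuron)", n_neurons * 1e4)]:
    volume = n_conn * mean_length * cross_sec   # total wiring volume
    print(f"{name}: {volume:.1e} m^3 of fibre")

Full connectivity comes out near 1e5 cubic metres of fibre, and even the sparse case is far too large at this mean fibre length; only topologies that keep most fibres short (the modular and laminar strategies analyzed in the paper) can bring the total inside an actual brain of roughly 1.4e-3 m^3.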
The postscript file and its compressed versions are called:
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/connect.ps (940 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/connect.ps.Z (327 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/connect.zip (223 Kb) (with PKZIP 2.04g)

-- Jaap Murre
jaap.murre at mrc-apu.cam.ac.uk
After 1 June 1995: pn_murre at macmail.psy.uva.nl

From btelfer at relay.nswc.navy.mil Wed May 17 09:39:44 1995
From: btelfer at relay.nswc.navy.mil (Brian Telfer)
Date: Wed, 17 May 95 09:39:44 EDT
Subject: Final Call for WCNN-95 Special Session
Message-ID: <9505171339.AA01538@ulysses.nswc.navy.mil>

(Submitted by Harold Szu)

Call for Papers for the WCNN-95 special Novel Results Session, deadline June 15, 1995.
World Congress on Neural Networks, Washington DC, 7/17-21/95

Highlights & Attractions:
@ Keynote speaker: Dr. M. Nelson, White House/Office Science Tech. Policy, will speak on Federal Programs on the Information Superhighway.
@ 7 plenary talks by Kohonen, Alkon, Carpenter, Szu, Freeman, Taylor, Amari
@ 19 sessions, 7 special sessions, Special Interest Group meetings
@ Neural Network Industrial Enterprise Day (Monday) & Federal Clean Car Initiative
@ 2-day Fuzzy Neural Networks Symposium
@ 24 INNS University Short Courses (P. Werbos's replaces H. Szu's)
@ NIH/FDA Biomedical Symposium, highlights e.g., Telemedicine
@ WCNN-95 Golf Range Sunday Afternoon Competition
@ Student Volunteers Application, please contact via e-mail: Charles at seas.gwu.edu

Program details can be obtained by contacting Talley Management at the address below or 74577.504 at compuserve.com.

Background: The World Congress on Neural Networks is the only international neural network conference on the North American continent in 1995. Don't miss it. To encourage your active participation, here is a unique offer. The date is July 17-21, 1995, Washington DC: Renaissance Hotel ($99/day; 800-228-9898, (202) 962-4445 (fax)). WCNN is sponsored by the International Neural Network Society as an annual mechanism for interdisciplinary information dissemination, in collaboration with IEEE, SPIE, NIH, FDA, ONR, APS, AAAI, AICE, APNNA, JNNS, ENNS, SME. IEEE members will enjoy the same low registration fee as INNS members ($75 student members, $255 regular INNS or IEEE members; registration must be sent prior to June 16 to TALLEY, 875 KINGS HIGHWAY, SUITE 200, WOODBURY, NJ 08096-3172, or 609-853-0411 FAX). Note that to enjoy the discount, your registration must be sent to Talley by June 16, but your technical paper must be sent directly to the Local Organization Committee address shown below. Accepted papers for WCNN-95 will be presented in the "Novel Results" Session and published as a supplement to the paper proceedings available on-site. However, the final papers are not due until June 15, 1995, a month before the DC conference. This will give you plenty of time to report your most recent and exciting developments.

(i) Submission Procedure: Paper format is: camera ready, 8.5x11 paper, 1" margins, single column, single spaced, minimum 10-pt font, 4 page limit ($20 per extra page). The cover letter should contain: full title of paper, corresponding and presenting authors, address, telephone and fax numbers, email address, preference of oral or poster presentation, audio-visual requirements. All poster presenters will give a 3-minute oral introduction (2 viewgraph maximum, including 1 quadchart containing authors/title, background, approach, results) to their poster during the oral session.
(ii) Review Procedure: If a member of the INNS Governing Board or a SIGINNS Chair has already reviewed the paper and endorsed it for acceptance in the submission letter, the paper will be accepted as is. Send the original and one copy. The Local Organization Committee (LOC) will review it for the type of presentation and inform the authors by e-mail or fax as soon as possible. If not endorsed, the paper will be reviewed by the LOC. Send the original paper and five copies.

(iii) All papers accepted for oral or poster presentation will be included in the session called Novel Results and will appear in a supplement to the WCNN-95 Proceedings and be distributed on-site.

(iv) The Best Poster Award will be chosen from all contributors who wish to be so considered. Best Poster Awards are rated according to: (a) Quality of Technical Content, (b) Quality of Oral Presentation, (c) Effectiveness of Poster Design, and the best three will be kept in a central area with all other winners throughout the conference.

(v) Oral vs Poster: Oral presentation is preferred when a brand new result requires simultaneous peer review, when the main result can be presented in the limited time slot, and when a known speaker is capable of giving a stimulating talk. Poster presentation is preferred if the author wishes to interact with the original inventors, if the result requires more than 15 minutes to do it justice, and if the paper requires a nontraditional demo and tailoring to individual expertise.

(vi) Submission address:
Harold Szu
WCNN-95 LOC Chair
9402 Wildoak Dr.
Bethesda MD 20814
(301) 394-3097 (Office); (301) 394-1929 (Brian)
(301) 394-3923 (Fax)
e-mail: HSzu at Ulysses.NSWC.Navy.Mil

In Summary: Please encourage your colleagues to attend; this is the only conference in neural nets at which to keep up with the interdisciplinary developments related to brain-style computing, biomedical & engineering applications, natural intelligence, mind & body, learning, and artificial neural network models.
(i) Governors & SIGINNS Chairs: guaranteed acceptance, if reviewed by them.
(ii) INNS & IEEE: identical membership discount rate.
(iii) Special Session: Novel Results; deadline June 15, 1995.
(iv) Usual single column, single space, 12-pt font, 4 page limit.
(v) Papers accepted will be included in the WCNN Proceedings package.

From maja at cs.brandeis.edu Wed May 17 18:18:38 1995
From: maja at cs.brandeis.edu (Maja Mataric)
Date: Wed, 17 May 1995 18:18:38 -0400
Subject: Conference Announcement and Call For Papers
Message-ID: <199505172218.SAA06588@garnet.cs.brandeis.edu>

==============================================================================
Conference Announcement and Call For Papers

FROM ANIMALS TO ANIMATS
Fourth International Conference on Simulation of Adaptive Behavior (SAB96)
Cape Cod, Massachusetts, USA, September 9-13, 1996

The objective of the conference is to bring together researchers in ethology, psychology, ecology, artificial intelligence, artificial life, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow natural and artificial animals to adapt and survive in uncertain environments. The conference will focus particularly on well-defined models, computer simulations, and robotics demonstrations, in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real animals or synthetic agents.
Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis: Action selection Learning and development Perception and motor control Evolutionary computation Neural correlates of behavior Coevolutionary models Emergent structures and behaviors Parallel and distributed models Motivation and emotion Collective and social behavior Internal world models Autonomous robots Characterization of environments Applied adaptive behavior Authors should make every effort to suggest implications of their work for both natural and artificial animals. Papers which do not deal explicitly with adaptive behavior will be rejected. Submission Instructions Authors are requested to send five copies (hard copy only) of a full paper to the Program Chair (Pattie Maes) arriving no later than Feb 9th, 1996. Late submissions will not be considered. Papers should not exceed 10 pages (excluding the title page), with 1 inch margins all around, and no smaller than 10 pt (12 pitch) type (Times Roman preferred). The Web site listed below contains the latex .sty file producing the preferred format for submissions. Each paper must include a title page containing the following: (1) Full names, postal addresses, phone numbers, email addresses (if available), and fax numbers for each author, (2) A 100-200 word abstract, (3) The topic area(s) in which the paper could be reviewed (see list above). Camera ready versions of the papers, in two-column format, will be required by May 10th. Computer, video, and robotic demonstrations are also invited for submission. Submit a 2-page proposal plus a title page as above to the program chair. Indicate equipment requirements and relevance to the themes of the conference. Conference Chairs Pattie Maes, Program MIT Media Lab 20 Ames Street Rm 305 Cambridge, MA 02139 USA email: pattie at media.mit.edu Maja Mataric, Local Arrangements Volen Center for Complex Systems Computer Science Department Brandeis University Waltham, MA 02254 USA email: maja at cs.brandeis.edu Jean-Arcady Meyer, Publicity Groupe de Bioinformatique URA686.Ecole Normale Superieure 46 rue d'Ulm 75230 Paris Cedex 05 France email: meyer at wotan.ens.fr Jordan Pollack, Local Arrangements/Financial Volen Center for Complex Systems Computer Science Department Brandeis University Waltham, MA 02254 USA email: pollack at cs.brandeis.edu Herbert Roitblat, Financial Department of Psychology University of Hawaii 2430 Campus Road Honolulu, HI 96822 USA email: roitblat at uhunix.uhcc.hawaii.edu Stewart Wilson, Proceedings The Rowland Institute for Science 100 Edwin H. Land Blvd. Cambridge, MA 02142 USA email: wilson at smith.rowland.org (Tentative) Program Committee: M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France; B. Blumberg, USA; L. Booker, USA; R. Brooks, USA; D. Cliff, UK; P. Colgan, Canada; T. Collett, UK; H. Cruse, Germany; J. Delius, Germany; A. Dickinson, UK; J. Ferber, France; D. Floreano, UK; N. Franceschini, France; S. Giszter, USA; S. Goss, Belgium; J. Hallam, UK; I. Harvey, UK; I. Horswill, USA; P. Husbands, UK; L. Kaelbling, USA; H. Klopf, USA; L-J. Lin, USA; M. Littman, USA; D. McFarland, UK; J. Millan, Spain; G. Miller, UK; R. Pfeifer, Switzerland; J. Slotine, USA; T. Smithers, Spain; O. Sporns, USA; J. Staddon, USA; L. Steels, Belgium; L. Stein, USA; F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; W. Uttal, USA; D. Waltz, USA. 
Official Language: English

Publisher: MIT Press/Bradford Books

Important Dates
===============
FEB 9, 1996: Submissions must be received
APR 12: Notification of acceptance or rejection (via email)
MAY 10: Camera ready revised versions due
JUN 10: Early registration deadline
AUG 8: Hotel reservations and regular registration deadline
SEP 9-13: Conference dates

General queries to: sab96 at cs.brandeis.edu
WWW Page: http://www.cs.brandeis.edu/conferences/sab96
==============================================================================

From tibs at pc-tibs.Stanford.EDU Wed May 17 20:28:08 1995
From: tibs at pc-tibs.Stanford.EDU (Rob Tibshirani)
Date: Wed, 17 May 1995 17:28:08 -0700
Subject: new paper
Message-ID: <199505180028.RAA14900@pc-tibs.Stanford.EDU>

The following paper (without figures) is now available at the ftp site utstat.toronto.edu in pub/bootpred.shar (shar postscript files). A paper copy with figures is available upon request from karola at playfair.stanford.edu

Cross-Validation and the Bootstrap: Estimating the Error Rate of a Prediction Rule

Bradley Efron          Robert Tibshirani
Stanford Univ          Univ of Toronto

A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. A particular bootstrap method, the $.632+$ rule, is shown to substantially outperform cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric, and apply to any possible prediction rule. The simulations include ``smooth'' prediction rules like Fisher's Linear Discriminant Function, and unsmooth ones like Nearest Neighbors.

=============================================================
| Rob Tibshirani          The History of science
| Dept. of Statistics     is full of fruitful errors
| Sequoia Hall            and barren truths
| Stanford Univ
| Stanford, CA                     Arthur Koestler
| USA 94305
Phone: 1-415-725 2237   Email: tibs at playfair.stanford.edu
FAX: 1-416-725-8977

From jhoh at vision.postech.ac.kr Wed May 17 21:40:52 1995
From: jhoh at vision.postech.ac.kr (Prof. Jong-Hoon Oh)
Date: Thu, 18 May 1995 10:40:52 +0900
Subject: Statistical Physics of Neural Networks: Preprints available via ftp
Message-ID:

Dear Colleagues,

Preprints to be published in the proceedings of NNSMP95 (Neural Networks: The Statistical Mechanics Perspective) are available. Here is the table of contents for the NNSMP95 Proceedings. It is available via anonymous ftp at tico.postech.ac.kr in the pub/NNSMP/proceedings95 directory. For the accompanying mailing list service, please read the README file.

****************************************************************************
Jong-Hoon Oh
Associate Professor, Department of Physics
Pohang Institute of Science Technology
Hyoja San 31 Pohang, 790-784 Kyoungbuk, Korea
Email: jhoh at vision.postech.ac.kr
Tel) +82-562-2792069   Fax) +82-562-2793099
****************************************************************************

-----------------Table of Contents - LaTeX format --------------------------

\documentstyle[12pt]{article}
\begin{document}
\centerline{\Large\bf Table of Contents}
\bigskip
\bigskip
Preface
\bigskip
{\bf Part I.
Learning Curves} \begin{itemize} \item Statistical Theory of Learning Curves S. Amari, N. Murata and K. Ikeda. \item Generalization in Two-Layer Neural Networks J.-H. Oh, K. Kang, C. Kwon and Y. Park \item Annealed Theories of Learning H. S. Seung \item Mutual Information and Bayes Methods for Learning a Distribution D. Haussler and M. Opper \item General Bounds for Predictive Errors in Supervised Learning M. Opper and D. Haussler \item Perceptron Learning: The Largest Version Space M. Biehl and M. Opper \item Large Scale Simulations for Learning Curves K.-R. M\"uller, M. Finke, N. Murata and S. Amari \item Geometry of Admissible Parameter Region in Neural Learning K. Ikeda and S. Amari \item Learning by a Population of Perceptrons K. Kang, J.-H. Oh and C. Kwon \end{itemize} {\bf Part II. Dynamics} \begin{itemize} \item On-Line Learning of Dichotomies: Algorithms and Learning Curves H. Sompolinsky, N. Barkai and H. S. Seung \item The Bit-Generator and Time-Series Prediction E. Eisenstein, I. Kanter, D. A. Kessler and W. Kinzel \item Phase Dynamics of Two and Three Coupled Hodgkin-Huxley Neurons under DC Currents S. Kim, S. G. Lee, H. Kook and J. H. Shin \item Periodic Synchronization in Networks of Neuronal Oscillators M. Y. Choi \item Synchronization in Neural Networks with Finite Storage Capacity K. Park and M. Y. Choi \end{itemize} {\bf Part III. Associative Memory and Other Topics} \begin{itemize} \item The Cavity Method: Applications to Learning and Retrieval in Neural Networks K. Y. M. Wong \item Storage Capacity of a Fully Connected Committee Machine C. Kwon, Y. Park and J.-H. Oh \item Thermodynamic Properties of the Multi-Neuron Interaction Model without Truncating the Interaction D.~Boll\'e, J.~Huyghebaert and G.~M.~Shim \item Symmetry between Neuronal and Synaptic Dynamics of Neural Net H.-F. Yanai \item Learning and Maximum Entropy in General Boltzmann Machines C. Hicks and H. Ogawa \item On the (Free) Energy of Stochastic and Continuous Hopfield Neural Networks J. van den Berg and J. C. Bioch \item Neural Thermodynamics for Biological Ensembles A. Coster \end{itemize} {\bf Part IV. Applications} \begin{itemize} \item Learning Algorithms for Classification: A Comparison on Handwritten Digit Recognition Y. LeCun, L. D. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A. M\"uller, E. S\"ackinger, P. Simard and V. Vapnik \item On the Consequences of the Statistical Mechanics Theory of Learning Curves for the Model Selection Problem M. J. Kearns \item Learning of a Two-Layer Neural Network with Flexible Hidden Layer Size J. Kim, K. Kang and J.-H. Oh \item Distributed Population Representation in Proprioceptive Cortex S. Cho, M. Jang and J. A. Reggia \item Designing Cost Functions for Additional Network Functionality S. Y. Lee \item Self-Organization of Gaussian Mixture Model for PDF Estimation S. Lee and S. 
Shimoji
\end{itemize}
\end{document}

****************************************************************************
Jong-Hoon Oh
Associate Professor, Department of Physics
Pohang Institute of Science Technology
Hyoja San 31 Pohang, 790-784 Kyoungbuk, Korea
Email: jhoh at vision.postech.ac.kr
Tel) +82-562-2792069   Fax) +82-562-2793099
****************************************************************************

From jaap.murre at mrc-apu.cam.ac.uk Wed May 17 11:36:43 1995
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Wed, 17 May 1995 16:36:43 +0100
Subject: Brain-size neurocomputers
Message-ID: <199505171536.QAA26136@sirius.mrc-apu.cam.ac.uk>

The following paper has been added to our ftp site:

Heemskerk, J.N.H., & J.M.J. Murre (submitted). Brain-size neurocomputers: analyses and simulations of neural topologies on Fractal Architectures. Submitted to the IEEE Transactions on Neural Networks.

Abstract
Current neurocomputers are more than 50 million times slower than the brain. Although chip speeds exceed the switching speed of biological neurons by several orders of magnitude, artificial neural networks are of a much smaller scale than real brains. The primary aim of most neurocomputer designs is speeding up neural paradigms rather than implementing large-scale neural networks. In order to simulate neural networks of brain size, neurocomputers need to be scaled up. Here we present MindShape, a design concept for a very large-scale neurocomputer based on a hierarchical-modular or Fractal Architecture. A Fractal Architecture can be built up from two types of elements: neural processing elements (NPEs) and communication elements (CEs). Massive usage of these elements allows for both distributed calculation and distributed control. A detailed description of this machine is presented, with reference to a realized feasibility study (the BSP400, see [1][2]). Through performance analyses and simulations of data communication, it is shown that the Fractal Architecture supports efficient implementation of structured neural networks. We finally demonstrate that physical realization of brain-size neurocomputers is feasible with current technology.

Files can be found at:
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/lsize.ps (1589 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/lsize.ps.Z (347 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/lsize.zip (495 Kb)

Jan N.H. Heemskerk  hmskerk at rulfsw.leidenuniv.nl
Jacob M.J. Murre    jaap.murre at mrc-apu.cam.ac.uk
(after 1 June 1995: pn_murre at macmail.psy.uva.nl)

From pihong at cse.ogi.edu Thu May 18 20:42:00 1995
From: pihong at cse.ogi.edu (Hong Pi)
Date: Thu, 18 May 95 17:42 PDT
Subject: Neural net short course at OGI
Message-ID:

Oregon Graduate Institute of Science & Technology, Office of Continuing Education, offers the short course:

NEURAL NETWORKS: ALGORITHMS AND APPLICATIONS

June 12-16, 1995, at the OGI campus near Portland, Oregon.

Course Organizer: John E. Moody
Lead Instructor: Hong Pi
With Lectures By: Dan Hammerstrom, Todd K. Leen, John E. Moody, Thorsteinn S. Rognvaldsson, Eric A. Wan

Artificial neural networks (ANN) have emerged as a new information processing technique and an effective computational model for solving pattern recognition and completion, feature extraction, optimization, and function approximation problems.
This course introduces participants to the neural network paradigms and their applications in pattern classification; system identification; signal processing and image analysis; control engineering; diagnosis; time series prediction; financial analysis and trading; and speech recognition. Designing a neural network application involves steps from data preprocessing to network tuning and selection. This course, with many examples, application demos and hands-on lab practice, will familiarize the participants with the techniques necessary for building successful applications. About 50 percent of the class time is assigned to lab sessions. The simulations will be based on Matlab, the Matlab Neural Net Toolbox, and other software running on 486 PCs. Prerequisites: Linear algebra and calculus. Previous experience with Matlab is helpful, but not required. Who will benefit: Technical professionals, business analysts and other individuals who wish to gain a basic understanding of the theory and algorithms of neural computation and/or are interested in applying ANN techniques to real-world, data-driven modeling problems. Course Objectives: After completing the course, students will: - Understand the basic neural network paradigms - Be familiar with the range of ANN applications - Have a good understanding of the techniques for designing successful applications - Gain hands-on experience with ANN modeling. Course Outline Neural Networks: Biological and Artificial The biological inspiration. History of neural computing. Types of architectures and learning algorithms. Application areas. Simple Perceptrons and Adalines Decision surfaces. Perceptron and Adaline learning rules. Stochastic gradient descent. Lab experiments. Multi-Layer Feed-Forward Networks I Multi-Layer Perceptrons. Back-propagation learning. Generalization. Early Stopping. Network performance analysis. Lab experiments. Multi-Layer Feed-Forward Networks II Radial basis function networks. Projection pursuit regression. Variants of back-propagation. Levenberg-Marquardt optimization. Lab experiments. Network Performance Optimization Network pruning techniques. Input variable selection. Sensitivity Analysis. Regularization. Lab experiments. Neural Networks for Pattern Recognition and Classification Nonparametric classification. Logistic regression. Bayesian approach. Statistical inference. Relation to other classification methods. Self-Organized Networks and Unsupervised Learning K-means clustering. Kohonen feature mapping. Learning vector quantization. Adaptive principal components analysis. Exploratory projection pursuit. Applications. Lab experiments. Time Series Prediction with Neural Networks Linear time series models. Nonlinear approaches. Case studies: economic and financial time series analysis. Lab experiments. Neural Networks for Adaptive Control Nonlinear modeling in control. Neural network representations for dynamical systems. Reinforcement learning. Applications. Lab Experiments. Massively Parallel Implementation of Neural Nets on the Desktop Architecture and application demos of the Adaptive Solutions' CNAPS System. Summary and Perspectives About the Instructors Dan Hammerstrom received the B.S. degree in Electrical Engineering, with distinction, from Montana State University, the M.S. degree in Electrical Engineering from Stanford University, and the Ph.D. degree in Electrical Engineering from the University of Illinois. He was on the faculty of Cornell University from 1977 to 1980 as an assistant professor.
From 1980 to 1985 he worked for Intel where he participated in the development and implementation of the iAPX-432 and i960 and, as a consultant, the iWarp systolic processor that was jointly developed by Intel and Carnegie Mellon University. He is an associate professor at Oregon Graduate Institute where he is pursuing research in massively parallel VLSI architectures, and is the founder and Chief Technical Officer of Adaptive Solutions, Inc. He is the architect of the Adaptive Solutions CNAPS neurocomputer. Dr. Hammerstrom's research interests are in the area of VLSI architectures for pattern recognition. Todd K. Leen is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. He received his Ph.D. in theoretical Physics from the University of Wisconsin in 1982. From 1982 to 1987 he worked at IBM Corporation, and then pursued research in mathematical biology at Good Samaritan Hospital's Neurological Sciences Institute. He joined OGI in 1989. Dr. Leen's current research interests include neural learning, algorithms and architectures, stochastic optimization, model constraints and pruning, and neural and non-neural approaches to data representation and coding. He is particularly interested in fast, local modeling approaches, and applications to image and speech processing. Dr. Leen served as theory program chair for the 1993 Neural Information Processing Systems (NIPS) conference, and workshops chair for the 1994 NIPS conference. John E. Moody is associate professor of Computer Science and Engineering at Oregon Graduate Institute of Science & Technology. His current research focuses on neural network learning theory and algorithms in its many manifestations. He is particularly interested in statistical learning theory, the dynamics of learning, and learning in dynamical contexts. Key application areas of his work are adaptive signal processing, adaptive control, time series analysis, forecasting, economics and finance. Moody has authored over 35 scientific papers, more than 25 of which concern the theory, algorithms, and applications of neural networks. Prior to joining the Oregon Graduate Institute, Moody was a member of the Computer Science and Neuroscience faculties at Yale University. Moody received his Ph.D. and M.A. degrees in Theoretical Physics from Princeton University, and graduated Summa Cum Laude with a B.A. in Physics from the University of Chicago. Hong Pi is a senior research associate at Oregon Graduate Institute. He received his Ph.D. in theoretical physics from the University of Wisconsin. His research interests include nonlinear modeling, neural network algorithms and applications. Thorsteinn S. Rognvaldsson received the Ph.D. degree in theoretical physics from Lund University, Sweden, in 1994. His research interests are neural networks for prediction and classification. He is currently a postdoctoral research associate at Oregon Graduate Institute. Eric A. Wan, Assistant Professor of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, received his Ph.D. in electrical engineering from Stanford University in 1994. His research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, speech enhancement, system identification, and adaptive control. He is a member of IEEE, INNS, Tau Beta Pi, Sigma Xi, and Phi Beta Kappa.
For a complete course brochure contact: Linda M. Pease, Director Office of Continuing Education Oregon Graduate Institute of Science & Technology PO Box 91000 Portland, OR 97291-1000 +1-503-690-1259 +1-503-690-1686 (fax) e-mail: continuinged at admin.ogi.edu WWW home page: http://www.ogi.edu From patrick at magi.ncsl.nist.gov Fri May 19 09:43:52 1995 From: patrick at magi.ncsl.nist.gov (Patrick Grother) Date: Fri, 19 May 95 09:43:52 EDT Subject: New NIST Technical Document Image Database Message-ID: <9505191343.AA24500@magi.ncsl.nist.gov> NIST Special Database 20 Scientific and Technical Document Database Special Database 20 contains 23468 high resolution binary images obtained from copyright-expired scientific and technical journals and books. The images contain a very rich set of graphic elements such as graphs, tables, equations, two-column text, maps, pictures, footnotes, annotations, and arrays of such elements. No ground truthing or original typesetting information is available. The images contain predominantly machine printed English, although three French and German documents are included. + 104 articles, books, journals + 23468 full page binary images + High Resolution 15.75 dots per mm (400 dpi) + 4 compact discs each containing about 500 Mb + Updated CCITT IV Compression Source Code: 25x compression + A structural statistics file for each image + Page rotation estimates + Software utilities Special Database 20 is available as a set of four 5.25 inch CD-ROMs in the ISO-9660 format. Price: $1000.00 US. For sales contact: Standard Reference Data National Institute of Standards and Technology Building 221, Room A323 Gaithersburg, MD 20899 Voice: (301) 975-2208 FAX: (301) 926-0416 email: srdata at enh.nist.gov For technical details contact: Patrick Grother Visual Image Processing Group National Institute of Standards and Technology Building 225, Room A216 Gaithersburg, Maryland 20899 Voice: (301) 975-4157 email: patrick at magi.ncsl.nist.gov From lopez at Physik.Uni-Wuerzburg.DE Mon May 22 15:40:15 1995 From: lopez at Physik.Uni-Wuerzburg.DE (Bernardo Lopez) Date: Mon, 22 May 95 15:40:15 MESZ Subject: preprint Message-ID: <199505221340.PAA01017@wptx14.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-95-011.ps.gz The following paper has been placed in the Neuroprose archive (see above for ftp-host) as a compressed postscript file named WUE-ITP-95-011.ps.gz (10 pages of output) email address: lopez at physik.uni-wuerzburg.de **** Hardcopies cannot be provided **** ------------------------------------------------------------------ Title: Storage of correlated patterns in a perceptron Authors: B L\'opez, M Schr\"oder and M Opper Institut f\"ur Theoretische Physik, Universit\"at W\"urzburg, Am Hubland, D-97074 W\"urzburg ------------------------------------------------------------------ Abstract: We calculate the storage capacity of a perceptron for correlated Gaussian patterns. We find that the storage capacity $\alpha_c$ can be less than 2 if similar patterns are mapped onto different outputs and vice versa. As long as the patterns are in general position we obtain, in contrast to previous works, that $\alpha_c \geq 1$ in agreement with Cover's theorem. Numerical simulations confirm the results. ------------------------------------------------------------------ From ingber at alumni.caltech.edu Mon May 22 10:47:23 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: 22 May 1995 14:47:23 GMT Subject: paper: Statistical mechanics ...
short-term memory Message-ID: <3pq85r$8qq@gap.cco.caltech.edu> The following paper, appearing this month in Phys Rev E, is available via anonymous ftp. ======================================================================== %A L. Ingber %A P.L. Nunez %T Statistical mechanics of neocortical interactions: High resolution path-integral calculation of short-term memory %J Phys. Rev. E %V 51 %N 5 %P 5074-5083 %D 1995 Statistical mechanics of neocortical interactions: High resolution path-integral calculation of short-term memory Lester Ingber Lester Ingber Research P.O. Box 857, McLean, Virginia 22101 ingber at alumni.caltech.edu and Paul L. Nunez Department of Biomedical Engineering Tulane University, New Orleans, Louisiana 70118 pln at bmen.tulane.edu We present high-resolution path-integral calculations of a previously developed model of short-term memory in neocortex. These calculations, made possible with supercomputer resources, supplant similar calculations made in L. Ingber, Phys. Rev. E 49, 4652 (1994), and support coarser estimates made in L. Ingber, Phys. Rev. A 29, 3346 (1984). We also present a current experimental context for the relevance of these calculations using the approach of statistical mechanics of neocortical interactions, especially in the context of electroencephalographic data. ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get smni95_stm.ps.Z [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. -- /* RESEARCH E-Mail: ingber at alumni.caltech.edu * * INGBER WWW: http://alumni.caltech.edu/~ingber/ * * LESTER Archive: ftp.alumni.caltech.edu:/pub/ingber * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From sjoberg at isy.liu.se Mon May 22 11:48:37 1995 From: sjoberg at isy.liu.se (Jonas Sjoberg) Date: Mon, 22 May 95 17:48:37 +0200 Subject: PhD thesis available Message-ID: <9505221548.AA01932@joakim.isy.liu.se> My PhD thesis with the title NON-LINEAR SYSTEM IDENTIFICATION WITH NEURAL NETWORKS is available by FTP or WWW. It contains 223 pages and it is stored as compressed postscript. (3.6 Mbyte uncompressed, 1.2 Mbyte compressed). ________________________________________________________________________________ Jonas Sjo"berg Dept. of El. Engineering University of Linkoping Telefax: +46-13-282622, or +46-13-139282 S-581 83 Linko"ping E-Mail: sjoberg at isy.liu.se Sweden ________________________________________________________________________________ Anonymous FTP: joakim.isy.liu.se or 130.236.24.1 directory: pub/Misc/NN/ file : PhDsjoberg.ps.Z WWW: file://joakim.isy.liu.se/pub/Misc/NN/ file : PhDsjoberg.ps. Abstract: This thesis addresses the non-linear system identification problem, and in particular, investigates the use of neural networks in system identification. An overview of different possible model structures is given in a common framework. 
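As a toy sketch of the model class in question, consider the simplest (linear, ARX) case: a regressor is built from past inputs and outputs, and a map from regressor to output is estimated by least squares. Replacing the least-squares map by a neural network gives the nonlinear (NARX-type) models discussed in the remainder of the abstract. Python with numpy, illustrative only, not code from the thesis:

import numpy as np

rng = np.random.default_rng(1)
N = 500
u = rng.normal(size=N)                        # input sequence
y = np.zeros(N)
for t in range(2, N):                         # "unknown" system to identify
    y[t] = 0.6*y[t-1] - 0.1*y[t-2] + 0.5*u[t-1] + 0.01*rng.normal()

# Step 1: map observed data to a regressor phi(t) = [y(t-1), y(t-2), u(t-1)]
Phi = np.array([[y[t-1], y[t-2], u[t-1]] for t in range(2, N)])
target = y[2:]
# Step 2: map the regressor to the output; here a linear least-squares fit,
# which a neural network would replace in the nonlinear case.
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(theta)                                  # recovers roughly [0.6, -0.1, 0.5]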
A nonlinear structure is described as the concatenation of a map from the observed data to the regressor, and a map from the regressor to the output space. This divides the model structure selection problem into two problems with lower complexity: that of choosing the regressor and that of choosing the non-linear map. The possible choices for the regressors consist of past inputs and outputs, and filtered versions of them. The dynamics of the model depends on the choice of regressor, and families of different model structures are suggested based on analogies to linear black-box models. State-space models are also described within this common framework by a special choice of regressor. It is shown that state-space models which have no parameters in the state update function can be viewed as input-output models preceded by a pre-filter. A parameterized state update function, on the other hand, can be seen as a data-driven regressor selector. The second step of the non-linear identification is the mapping from the regressor to the output space. It is often advantageous to try some intermediate mappings between the linear and the general non-linear mapping. Such non-linear black-box mappings are discussed and motivated by considering different noise assumptions. The validation of a linear model should contain a test for non-linearities and it is shown that, in general, it is easy to detect non-linearities. This implies that it is not worth spending too much energy searching for optimal non-linear validation methods for a specific problem. Instead the validation method should be chosen so that it is easy to apply. Two such methods, based on polynomials and neural nets, are suggested. Further, two validation methods, the correlation-test and the parametric F-test, are investigated. It is shown that under certain conditions these methods coincide. Parameter estimates are usually based on criterion minimization. In connection with neural nets it has been noted that it is not always optimal to try to find the absolute minimum point of the criterion. Instead a better estimate can be obtained if the numerical search for the minimum is prematurely stopped. A formal connection between this stopped search and regularization is given. It is shown that the numerical minimization of the criterion can be viewed as regularization whose strength is gradually turned to zero. This closely connects to, and explains, what is called overtraining in the neural net literature. From martin.davies at psy.ox.ac.uk Mon May 22 13:15:31 1995 From: martin.davies at psy.ox.ac.uk (Martin Davies) Date: Mon, 22 May 1995 18:15:31 +0100 Subject: Euro-SPP '95 Message-ID: <9505221815.AA31230@Mac8> ************************************************************ EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY FOURTH ANNUAL CONFERENCE St. Catherine's College Oxford OX1 3UJ 30 August - 1 September 1995 ************************************************************ This conference is supported by the McDonnell-Pew Centre for Cognitive Neuroscience and by the Mind Association. ************************************************************ The Fourth Annual Conference of the European Society for Philosophy and Psychology will be held at St. Catherine's College Oxford, from Wednesday 30 August to Friday 1 September 1995.
The programme includes: *Invited Lectures* by Michael Posner, Wolfgang Kunne, Jacques Mehler and Alan Cowey; *Invited Symposia* on Attention and Space, Emotion and Irrationality, Foundations of Artificial Life, and Brain Imaging; plus Submitted Papers and Posters. The conference desk will open at 9.00 am on Wednesday 30 August. Coffee will be available from 11.00 am and the first session will commence at 11.30 am. The final session of the conference will end at about 4.30 pm on Friday 1 September. There will be a Conference Dinner on the Friday evening at a cost of approximately 20 pounds per head. ************************************************************ **REGISTER NOW PAY LATER!** The Euro-SPP and St. Catherine's College would welcome an early indication of your intention to attend this conference. Please register now. We will invoice you later. If you pay your Euro-SPP Membership Fees and return your completed Conference Registration Form by *Friday 16 June* then a special Conference Registration Fee of 30 pounds will apply. After that date, the Conference Registration Fee will be 35 pounds for Euro-SPP members. The Conference Registration Fee for non-members of the Euro-SPP is 55 pounds. The basic accommodation and meals package, from mid-morning on Wednesday 30 August until late afternoon on Friday 1 September, costs 108 pounds. Bed and breakfast accommodation is also available for the nights of Tuesday 29 August, Friday 1 September, and Saturday 2 September, at an additional cost of 28 pounds per night. Accommodation is in single study-bedrooms. A limited number of bedrooms with en suite bathroom may also be available at a supplement of 12 pounds per night. For those who do not require accommodation, the basic meals package (excluding breakfast), from mid-morning on Wednesday 30 August until late afternoon on Friday 1 September, costs 60 pounds. ************************************************************ **DISCOUNTS FOR STUDENTS** The Conference Registration Fee for students is 10 pounds for Euro-SPP members or 20 pounds for non-members. The McDonnell-Pew Centre for Cognitive Neuroscience and the Mind Association have provided a number of bursaries to assist students with the costs of accommodation and meals at Euro-SPP '95. These will be awarded to students in order of receipt of applications, until the resources are used up. A McDonnell-Pew Bursary or Mind Association Bursary offers a discount of 40 pounds on the accommodation and meals package or a discount of 20 pounds on the meals only package. In order to apply for a bursary, you must include with your Conference Registration Form a letter from your supervisor confirming that the conference is relevant to your studies. ************************************************************ This announcement is also available on the World Wide Web at: http://www.cogs.susx.ac.uk/users/ronaldl/espp.html ************************************************************ For information about the conference, please contact: Euro-SPP '95, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England. Email: espp95 at psy.ox.ac.uk Fax: +44 1865 310447 ************************************************************ Euro-SPP Membership fees: Dfl. 50,- or Dfl. 35,- for students. Fees must be paid in Dutch currency. Fees may be paid by Mastercard. For membership details, please contact: Mrs. Susan Struycken, Department of Psychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands. 
Email: espp at kub.nl ************************************************************ ************************************************************ PROVISIONAL PROGRAMME ************************************************************ WEDNESDAY 30 AUGUST 9.00 am Conference desk opens 11.00 am COFFEE 11.30 am - 12.45 pm INVITED LECTURE 1: The McDonnell-Pew Lecture Imaging the Mechanisms of Consciousness Speaker: Michael Posner (Psychology, Oregon) 1.00 pm LUNCH 2.30 - 4.30 pm SYMPOSIUM 1: Attention and Space Speakers: John Duncan (MRC Applied Psychology Unit, Cambridge) Pierre Jacob (Philosophy, CNRS, Paris) Albert Newen (Philosophy, Bielefeld) 4.30 - 5.30 pm POSTER SESSION and TEA 5.30 - 7.00 pm SUBMITTED PAPER SESSIONS 1A/1B 1A 5.30 Wendy Clements (Psychology, Sussex) and Josef Perner (Psychology, Salzburg) 6.15 Ted Ruffman (Psychology, Sussex) 1B 5.30 Greg Currie (Philosophy, Flinders University, Adelaide) 6.15 Sonia Sedivy (Philosophy, Toronto) 7.15 pm DINNER followed by INVITED LECTURE 2: The Mind Association Lecture Intentional Content Speaker: Wolfgang Kunne (Philosophy, Hamburg) ************************************************************ THURSDAY 31 AUGUST 9.00 - 11.00 am SYMPOSIUM 2: Emotion and Irrationality Speakers: Nico Frijda (Psychology, Amsterdam) James Hopkins (Philosophy, King's College, London) 11.00 am COFFEE 11.30 am - 12.45 pm INVITED LECTURE 3: Towards a Biology of Language Speaker: Jacques Mehler (Cognitive Science, EHESS, Paris) 1.00 pm LUNCH 2.15 - 4.30 pm SUBMITTED PAPER SESSIONS 2A/2B/2C 2A 2.15 Georges Rey (Philosophy, Maryland) 3.00 Owen Flanagan (Philosophy, Duke University, North Carolina) 3.45 Ullin Place (Thirsk, North Yorkshire) 2B 2.15 J.G. Taylor (Mathematics, King's College, London) 3.00 Laurie Stowe (Language and Literature, Groningen) 3.45 Susan Dwyer (Philosophy, McGill University, Montreal) 2C 2.15 Christoph Hoerl (Philosophy, Oxford) 3.00 Michel Treisman (Psychology, Oxford) 3.45 Karen Neander (Philosophy, Australian National University) 4.30 - 5.00 pm TEA 5.00 - 7.00 pm SYMPOSIUM 3: Foundations of Artificial Life Speakers: Christopher Langton (Santa Fe Institute, New Mexico) Luc Steels (AI Laboratory, Free University of Brussels) Michael Wheeler (Cognitive and Computing Sciences, Sussex) 7.00 pm BUSINESS MEETING followed by RECEPTION and DINNER ************************************************************ FRIDAY 1 SEPTEMBER 9.00 - 11.15 am SUBMITTED PAPER SESSIONS 3A/3B/3C 3A 9.00 Paul Pietroski (Philosophy, McGill University, Montreal) 9.45 Kirk Ludwig (Philosophy, Florida) 10.30 Antoni Gomila (Philosophy, University of the Balearic Islands) 3B 9.00 Beatrice de Gelder, Jean Vroomen and Jan Pieter Teunisse (Psychology, Tilburg) 9.45 Philip Benson (Physiology, Oxford) and Mary Katsikitis (Psychiatry, Adelaide) 10.30 Anthony Skillen (Philosophy, Kent) 3C 9.00 Andy Young (MRC Applied Psychology Unit, Cambridge) and Tony Stone (King Alfred's College, Winchester) 9.45 Richard Held (Brain and Cognitive Science, MIT) 10.30 Lawrence Weiskrantz (Psychology, Oxford), John Barbur and Arash Sahraie (Applied Vision Research Centre, City University, London) 11.15 am COFFEE 11.45 am - 1.00 pm INVITED LECTURE 4: Localisation of Functions in the Cerebral Cortex: Modern Phrenology? 
Speaker: Alan Cowey (Psychology, Oxford) 1.00 pm LUNCH 2.15 - 4.30 pm SYMPOSIUM 4: Brain Imaging (A McDonnell-Pew Symposium in Cognitive Neuroscience) Speakers: Dick Passingham (Psychology, Oxford) Chris Frith (MRC Cyclotron Unit, Hammersmith Hospital) David Rosenthal (Philosophy, City University of New York) 4.30 pm TEA There will be a CONFERENCE DINNER on Friday evening ************************************************************ POSTER PRESENTATIONS by David Buller (Philosophy, Northern Illinois); Matthew Elton (Philosophy, Stirling); Brian Keeley (Philosophy, University of California, San Diego); Jonathan Knowles (Philosophy, Birkbeck College, London); Ronald Lemmen (Cognitive and Computing Sciences, Sussex); Kenneth Livingston (Philosophy, Vassar College, New York); Gregory Mulhauser (Philosophy, Glasgow); Ajit Narayan and Jeremy Olsen (Computer and Cognitive Sciences, Exeter); Greg Ray (Philosophy, Florida); Antti Revonsuo (Cognitive Neuroscience, Turku); Carolien Rieffe (Psychology, Free University, Amsterdam); Tadeusz Szubka (Philosophy, Catholic University of Lublin); Yoshio Yano (Psychology, Kyoto University of Education) ************************************************************ ************************************************************ CONFERENCE REGISTRATION FORM ************************************************************ Please complete and sign this form and send it, as soon as possible, to: Euro-SPP '95, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England. Name: ______________________________________________________ Professional affiliation: __________________________________ Address: ___________________________________________________ _____________________________________________________ _____________________________________________________ Email: _____________________________________________________ Fax: _____________________________________________________ Conference Registration Fees: 35 pounds (or 30 pounds before 16 June) for non-student members 55 pounds for non-student non-members 10 pounds for student members 20 pounds for student non-members I expect to attend the Euro-SPP '95 Conference in Oxford. I have indicated my requirements by ticking below. Please send me an invoice in due course. I shall keep you informed of any changes in my plans. Signed: _____________________________________________________ Date: _____________________________________________________ ** I shall require the basic meals only package at 60 pounds. (Please note any special dietary requirements.) ** I shall require the basic meals and accommodation package at 108 pounds. (Please note any special dietary requirements.) ** I shall require a room for the night of Tuesday 29 August at 28 pounds. ** I shall require a room for the night of Friday 1 September at 28 pounds. ** I shall require a room for the night of Saturday 2 September at 28 pounds. ** I request a room with en suite bathroom at an additional cost of 12 pounds per night (subject to availability). ** I expect to attend the Conference Dinner on Friday 1 September at a cost of 20 pounds. ** Please send me information about hotel accommodation in Oxford. ** I am a student, and I am applying for a *McDonnell-Pew* or *Mind Association* Bursary. I enclose a letter from my supervisor.
From ingber at alumni.caltech.edu Mon May 22 10:28:51 1995 From: ingber at alumni.caltech.edu (Lester Ingber) Date: 22 May 1995 14:28:51 GMT Subject: paper: Path-integral evolution of chaos embedded in noise: Message-ID: <3pq733$7us@gap.cco.caltech.edu> The following paper is available via anonymous ftp. ======================================================================== Path-integral evolution of chaos embedded in noise: Duffing neocortical analog Lester Ingber Lester Ingber Research P.O. Box 857, McLean, Virginia 22101 U.S.A. ingber at alumni.caltech.edu Ramesh Srinivasan Department of Psychology University of Oregon, Eugene, Oregon 97403 U.S.A. ramesh at oregon.uoregon.edu and Electrical Geodesics Eugene, Oregon 97403 U.S.A. Paul L. Nunez Department of Biomedical Engineering Tulane University, New Orleans, Louisiana 70118 U.S.A. pnunez at mailhost.tcs.tulane.edu Abstract--A two-dimensional time-dependent Duffing oscillator model of macroscopic neocortex exhibits chaos for some ranges of parameters. We embed this model in moderate noise, typical of the context presented in real neocortex, using PATHINT, a non-Monte-Carlo path-integral algorithm that is particularly adept at handling nonlinear Fokker-Planck systems. This approach shows promise for investigating whether chaos in neocortex, as predicted by such models, can survive in noisy contexts. Keywords: chaos, path integral, Fokker-Planck, Duffing equation, neocortex ======================================================================== Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.alumni.caltech.edu [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] cd pub/ingber [ftp>] binary [ftp>] ls [ftp>] get path95_duffing.ps.Z [ftp>] quit The 00index file contains an index of the other files. This archive also can be accessed via WWW path http://alumni.caltech.edu/~ingber/ If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Sorry, I cannot assume the task of mailing out hardcopies of code or papers. My volunteer time assisting people with their queries on my codes and papers must be limited to electronic mail correspondence. -- /* RESEARCH E-Mail: ingber at alumni.caltech.edu * * INGBER WWW: http://alumni.caltech.edu/~ingber/ * * LESTER Archive: ftp.alumni.caltech.edu:/pub/ingber * * Prof. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */ From listerrj at helios.aston.ac.uk Tue May 23 04:59:13 1995 From: listerrj at helios.aston.ac.uk (listerrj) Date: Tue, 23 May 1995 09:59:13 +0100 (BST) Subject: Four Postdoctoral Research Fellowships Message-ID: <17253.9505230859@sun.aston.ac.uk> Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK FOUR POSTDOCTORAL RESEARCH FELLOWSHIPS -------------------------------------- *** Full details at http://neural-server.aston.ac.uk/ *** The Neural Computing Research Group has recently been successful in attracting significant levels of funding from the Engineering and Physical Sciences Research Council, and consequently is able to offer 4 full-time postdoctoral Research Fellowships. These positions have a nominal start date of 1 October 1995, although earlier or later start dates can be agreed if appropriate.
Validation and Verification of Neural Network Systems ----------------------------------------------------- (Two Posts) One of the major factors limiting the widespread exploitation of neural networks has been the perceived difficulty of ensuring that a trained network will continue to perform satisfactorily when installed in an operational system. In the case of safety-critical systems it is clearly vital that a high degree of overall system integrity be achieved. However, almost all potential applications of neural networks entail some level of undesirable consequence if the network generates incorrect or inaccurate predictions. Currently there is no general framework for assessing the robustness of neural network solutions or of systems containing embedded neural networks. This substantial and ambitious programme will address the basic issues involved in neural network validation and verification in the context both of stand-alone network solutions and of embedded systems. It will develop and demonstrate robust techniques for quantifying the reliability of neural network predictions, and will also provide the necessary theoretical foundation of a subsequent framework for neural network system validation. This project will address the theoretical basis for determining valid generalisation and error estimates for neural network predictions, and will aim to understand the impact of uncertainties in network predictions on overall system performance in the context of embedded applications. It will also demonstrate the use of these techniques for validation of network solutions through case studies based on real-world applications and data, which will be provided by industrial collaborators. Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks or a relevant field. These posts are tenable for two years in the first instance, with a possible extension for a further three years. Nonstationary Feature Extraction and Tracking for the Classification -------------------------------------------------------------------- of Turning Points in Multivariate Time Series ---------------------------------------------- (One Post) The project is aimed at extracting information from nonstationary, nonlinear time series. Real-world examples which have motivated the proposal include: the early classification of highs, lows and sideways drift in financial global bond markets; the forecasting of characteristic clustering such as peaks and troughs in consumer-driven electricity load demand, along with the corresponding impacts on pool-price prediction, or the expectation of dynamic loading patterns in telecommunications networks. The key lies in an appropriate {\em representation} of the data. The intended methodology is to extend the theoretical basis of the current state of the art on neural network feature extraction techniques to tackle real-world problems presented by industry and commerce. The emphasis is to seek appropriate representations of nonstationary data such that the resulting `clusterings' may be exploited to perform classification. Because real data is generally nonstationary, the principal axes of the feature space change in time and so we need to track this nonstationarity if `market' characteristics as determined by the features are to be useful.
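By way of illustration only, one standard technique for this kind of tracking (not necessarily the one the project will adopt) is an online Hebbian update such as Oja's rule, which follows the leading principal axis of a drifting data distribution. A minimal sketch in Python with numpy:

import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=2); w /= np.linalg.norm(w)   # initial axis estimate
eta = 0.01                                       # learning rate

for t in range(20000):
    angle = 1e-4 * t                             # the true axis drifts slowly
    axis = np.array([np.cos(angle), np.sin(angle)])
    x = 3.0 * rng.normal() * axis + 0.3 * rng.normal(size=2)
    y = w @ x                                    # projection onto the estimate
    w += eta * y * (x - y * w)                   # Oja's rule (keeps |w| near 1)

print("estimate:", w, "true axis (up to sign):", axis)

The learning rate eta trades tracking speed against estimation noise, which is exactly the tension the project description above raises.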
Potential candidates should be mathematically and computationally competent, with a background in artificial neural networks, dynamical systems theory or statistical pattern processing, or with relevant experience from a physics or electrical engineering background. This post is tenable for three years. Neural Networks for Visualization of High Dimensional Data ---------------------------------------------------------- (One Post) Visualization has proven to be one of the most powerful ways to interpret and understand complex sets of data, such as records of financial transactions, corporate databases, customer profiles, and marketing surveys. Particular problems arise, however, when the data involves large numbers of variables, corresponding to spaces of high dimensionality. Additionally, the data is often plagued with deficiencies such as missing variables, mislabelled values, and inconsistencies in the representations of different quantities (for instance, the same attribute may be represented in different ways in different parts of the database). Such problems severely limit the performance of current visualization algorithms. This project will investigate the theoretical basis for visualizing data using neural networks, and will develop practical techniques for visualization applicable to large-scale data sets. These techniques will be based, for example, on recent developments in latent-variable density estimation. Potential candidates should be mathematically and computationally competent with a background either in artificial neural networks or a relevant field. This post is tenable for two years. Neural Computing Research Group ------------------------------- The Neural Computing Research Group currently comprises the following academic staff: Chris Bishop (Professor), David Lowe (Professor), David Bounds (Professor), Geoffrey Hinton (Visiting Professor), Richard Rohwer (Lecturer), Alan Harget (Lecturer), Ian Nabney (Lecturer) and David Saad (Lecturer, arrives 1 August), with two further posts currently being appointed; together with the following Research Fellows: Chris Williams, Shane Murnion, Alan McLachlan and Huaihu Zhu; a full-time software support assistant; and eleven postgraduate research students. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,556 UK pounds. These salary scales are currently under review, and are subject to annual increments. How to Apply ------------ If you wish to be considered for one of these positions, please send a full CV and publications list, together with the names of 4 referees, to: Professor C M Bishop Neural Computing Research Group Department of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 359 3611 ext. 4270 Fax: 0121 333 6215 e-mail: c.m.bishop at aston.ac.uk (email submission of postscript files is welcome) Closing date: 7 July 1995 ------------------------- From Guszti.Bartfai at Comp.VUW.AC.NZ Tue May 23 18:45:59 1995 From: Guszti.Bartfai at Comp.VUW.AC.NZ (Guszti Bartfai) Date: Wed, 24 May 1995 10:45:59 +1200 Subject: Paper announcement Message-ID: <199505232246.KAA19738@circa.comp.vuw.ac.nz> The following paper has been accepted for publication in the journal "Neural Networks". Title: On the Match Tracking Anomaly of the ARTMAP Neural Network Author: Guszti Bartfai Department of Computer Science Victoria University of Wellington New Zealand Abstract: This article analyses the {\em match tracking anomaly} ({\em MTA}) of the ARTMAP neural network.
The anomaly arises when an input pattern exactly matches a category prototype that the network has previously learned, and the network generates a prediction (through a previously learned associative link) that contradicts the output category that was selected upon presentation of the corresponding target output. Carpenter et al.\ claimed that such an anomalous situation will never arise if the (binary) input vectors have the same number of 1's [Carpenter91]. This paper shows that such situations {\em can} in fact occur. The {\em timing} according to which inputs are presented to the network in each learning trial is crucial: if the target output is presented to the network {\em before} the corresponding input pattern, certain pattern sequences will lead the network to the {\em MTA}. Two kinds of {\em MTA} are distinguished: one that is independent of the choice parameter ($\beta$) of the ART$_b$ module, and another that is not. Results of experiments that were carried out on a machine learning database demonstrate the existence of the match tracking anomaly as well as support the analytical results presented here. Reference: [Carpenter91] Author = "G.A. Carpenter and S. Grossberg and J.H. Reynolds", Title = "{ARTMAP: Supervised Real-Time Learning and Classification of Nonstationary Data by a Self-Organizing Neural Network}", Journal = "Neural Networks", Year = 1991, Volume = 4, Pages = "565--588", ------- The paper is available by anonymous FTP (24 pages, 88k). Full WWW access path: ftp://ftp.comp.vuw.ac.nz/doc/vuw-publications/CS-TR-95/CS-TR-95-1.ps.gz FTP instructions: % ftp ftp.comp.vuw.ac.nz Name: anonymous Password: ftp> cd doc/vuw-publications/CS-TR-95 ftp> binary ftp> get CS-TR-95-1.ps.gz ftp> quit % gunzip CS-TR-95-1.ps.gz % lpr CS-TR-95-1.ps Any comments are welcome. Guszti Bartfai http://www.comp.vuw.ac.nz/~guszti/ From dave at twinearth.wustl.edu Tue May 23 20:05:39 1995 From: dave at twinearth.wustl.edu (David Chalmers) Date: Tue, 23 May 95 19:05:39 CDT Subject: New archives Message-ID: <9505240005.AA03600@twinearth.wustl.edu> The archives for the Philosophy/Neuroscience/Psychology program at Washington University have been reconfigured. The ftp archive has been moved from thalamus.wustl.edu (which has been down for a while now), and more convenient access on the World Wide Web is now in place. (1) The archive of PNP technical reports is now available by anonymous ftp to wuarchive.wustl.edu, in the directory doc/techreports/wustl.edu/philosophy. Files have the form author.title.format, where format is usually ASCII or PS. An "INDEX" file contains an index with abstracts. Plenty of new papers here! (2) There is now a PNP home page on the WWW, at http://www.artsci.wustl.edu/~philos/pnp.html. At the moment this serves mostly as a gateway to the PNP archive, but more will be appearing shortly. Those with WWW access will probably want to use this to get to the archive, rather than the ftp option. (3) My own home page is now set up at http://www.artsci.wustl.edu/~chalmers/. This has links to a number of my papers on consciousness, content, artificial intelligence, and other topics (including some papers that aren't in the PNP archive), and also to my annotated bibliography in the philosophy of mind (see below). (4) There is now a convenient home page for my bibliography in the philosophy of mind at http://www.artsci.wustl.edu/~chalmers/biblio.html.
Many of the current links to the bibliography on the net proceed via an extraordinarily slow connection through apa.oxy.edu and Indiana; this should work much better. Incidentally the bibliography is also available by anonymous ftp to wuarchive in the directory mentioned above, files chalmers.biblio.*. The current version has about 1830 entries in 5 parts. Those who maintain pages with links to these things (especially the archive and the bibliography) might like to update them. --Dave Chalmers (dave at twinearth.wustl.edu) From biehl at Physik.Uni-Wuerzburg.DE Wed May 24 15:36:04 1995 From: biehl at Physik.Uni-Wuerzburg.DE (Michael Biehl) Date: Wed, 24 May 95 15:36:04 MESZ Subject: papers available: learning from clustered input examples Message-ID: <199505241336.PAA00473@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-95-013.ps.gz /pub/preprint/WUE-ITP-95-014.ps.gz The following papers are now available via anonymous ftp: (See below for the retrieval procedure) ------------------------------------------------------------------ "On-line learning from clustered input examples" Peter Riegler, Michael Biehl, Sara A. Solla, and Carmela Marangi Ref. WUE-ITP-95-013 AND "Off-line supervised learning from clustered input examples" Carmela Marangi, Sara A. Solla, Michael Biehl, and Peter Riegler Ref. WUE-ITP-95-014 --------------------------------------------------------------------- Both presented at the VII Italian Workshop on Neural Nets in Vietri s/m, May 1995; proceedings to be published by World Scientific ______________________________________________________________________ Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint ftp> get WUE-ITP-95-0??.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-95-0??.ps.gz e.g. unix> lp WUE-ITP-95-0??.ps (7 pages of output) (*) can be replaced by "get WUE-ITP-95-0??.ps". The file will then be uncompressed before transmission (slower!). _____________________________________________________________________ -- Michael Biehl Institut fuer theoretische Physik Julius-Maximilians-Universitaet Wuerzburg Am Hubland D-97074 Wuerzburg email: biehl at physik.uni-wuerzburg.de Tel.: (+49) (0)931 888 5865 " " " 5131 Fax : (+49) (0)931 888 5141 From zoubin at psyche.mit.edu Wed May 24 14:16:35 1995 From: zoubin at psyche.mit.edu (Zoubin Ghahramani) Date: Wed, 24 May 95 14:16:35 EDT Subject: Paper available on factorial hidden Markov models Message-ID: <9505241816.AA24969@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-filename: /pub/zoubin/facthmm.ps.Z URL: ftp://psyche.mit.edu/pub/zoubin/facthmm.ps.Z This technical report is 13 pages long [102K compressed]. Factorial hidden Markov models Zoubin Ghahramani and Michael I. Jordan Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved via a set of linear equations. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived.
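To see the combinatorial problem concretely: K independent Markov chains with M states each are equivalent to a single flat HMM on M^K states, whose transition matrix is the Kronecker product of the per-chain matrices. A toy illustration of the blow-up (Python with numpy; this is an editorial sketch, not the authors' code):

import numpy as np
from functools import reduce

M, K = 3, 5                                    # states per chain, number of chains
rng = np.random.default_rng(3)
chains = []
for _ in range(K):
    A = rng.random((M, M))
    chains.append(A / A.sum(axis=1, keepdims=True))   # row-stochastic transitions

flat = reduce(np.kron, chains)                 # transition matrix of the
print(flat.shape)                              # equivalent flat HMM: (243, 243)
assert np.allclose(flat.sum(axis=1), 1.0)      # still a stochastic matrix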
Promising empirical results on a small time series modeling problem are presented for both the mean field approximation and Gibbs sampling. MIT COMPUTATIONAL COGNITIVE SCIENCE TECHNICAL REPORT 9502 From M.Dye at ukc.ac.uk Thu May 25 09:18:52 1995 From: M.Dye at ukc.ac.uk (Matt Dye) Date: Thu, 25 May 1995 14:18:52 +0100 Subject: Lectureship Psychology (Cognitive Neuroscience) Message-ID: UNIVERSITY OF KENT AT CANTERBURY INSTITUTE OF SOCIAL AND APPLIED PSYCHOLOGY DEPARTMENT OF PSYCHOLOGY Canterbury, U.K. Lectureship in Psychology (Cognitive Neuroscience) The Institute of Social and Applied Psychology at the University of Kent attained a 4A rating in the most recent HEFCE Research Assessment exercise, and has been designated as a significant area of expansion within the University. ISAP comprises the Department of Psychology and the Tizard Centre. The Department has 16 HEFCE-funded teaching staff in addition to associated research and support staff, independently funded research staff and postgraduate students. The Tizard Centre has a similar number of staff but is funded primarily from research and consultancy contracts. Further Psychology posts have been approved by the University. As part of the Department's development plan we are seeking applications for a permanent lectureship from active post-doctoral researchers in the general area of cognitive neuroscience. Candidates with research interests in any area of cognitive neuroscience or its related disciplines are encouraged to apply. Of particular interest will be candidates who have expertise in computational, especially connectionist, modelling. Ideally, the successful applicant will be expected to take up his/her appointment by the 1st September 1995 or as soon as possible thereafter. Main responsibilities of the post holder The successful candidate will have both research and teaching responsibilities. General information Research The Department's research activities are concentrated in the broad areas of cognition and of social psychology. Current research within the cognitive neuroscience group includes investigations of object processing, categorization, speech production and language disorders. In all these areas, staff are involved in experimental research with normal subjects as well as the testing of brain-damaged subjects and attempts to model findings using connectionist systems. Research programmes in Social Psychology include the areas of group processes, psychology of health and social psychological aspects of Forensic Psychology. In all of these fields, the department has an excellent record of attracting research funds and studentships. At present substantial research awards are held from the ESRC, ARC (Australian Research Council), the Rowntree Foundation, Nuffield, Canterbury and Thanet Healthcare Trust, Wellcome and The Leverhulme Trust. The successful candidate will be expected and encouraged to develop a substantive programme of research based on his/her own interests. It should also be noted that existing members of staff are keen to develop collaborative research projects where common interests exist. Recent publications Donnelly, N., Humphreys, G.W. & Riddoch, M.J. (1991). Parallel computation of primitive shape descriptions. Journal of Experimental Psychology: Human Perception and Performance, 17, 2, 561-570. Humphreys, G.W., Lloyd-Jones, T.J. & Fias, W. (in press). Semantic interference on naming using a post-cue procedure: Tapping the links between semantics and phonology with pictures and words.
Journal of Experimental Psychology: Learning, Memory and Cognition. Muller, H., Humphreys, G.W. & Donnelly, N. (1994). Search via recursive rejection (SERR): Visual search for single and dual form conjunction targets. Journal of Experimental Psychology: Human Perception and Performance, 20, 2, 235-258. Weekes, B.S. (1994). A cognitive-neuropsychological analysis of allograph errors from a patient with acquired dysgraphia. Aphasiology, 8, 409-425. Teaching The Department currently offers undergraduate BSc degrees in Psychology, Social Psychology, Social and Clinical Psychology (including Applied, four-year variants), along with an MSc in Social and Applied Psychology. In addition two new MSc degrees, in Forensic Psychology and Health Psychology, will be available from October 1995, and an MSc in Cognitive Neuroscience is planned in cooperation with the Department of Biology. The Department has ESRC priority recognition (Mode A and B) for its postgraduate training. We have a large and lively group of postgraduate students, and with our current and planned MSc programmes and our involvement in European Exchange Programmes we are planning for a significant long-term expansion in our postgraduate training and research. The post holder's main teaching responsibility will be in the second and third years of our undergraduate programmes. At present, second year students have the option of taking a course which includes perception, language, memory and basic issues relating to neural computation. Third year students have an option to study Neuropsychology (convened by Dr Donnelly). This course is split into two units. In unit 1 we address basic methodological issues and the neuropsychology of vision and in unit 2 we address the neuropsychology of language, memory and other higher order processes. The appointee would be expected to contribute (along with Drs Donnelly, Weekes and Lloyd-Jones) to both units and the final component of the 2nd year course, and to develop teaching in their own area of research. The appointee will also be expected to take over the modest administrative role involved in convening the second year course. In addition, all final year undergraduate students currently undertake an empirical project and the person appointed will take a share of the supervision and support of these, as well as postgraduate supervision. Teaching methods at Kent comprise the usual mix of large lecture format (especially at the first and second-year undergraduate level), small group seminar teaching, and co-operative group working. Administration The Institute has a fully devolved budget for all non-staff costs and the management of this falls to the relevant Head of Department. All staff contribute to the administration of the Department, and several key managerial responsibilities are delegated from the Head of Department to other colleagues. The staff responsibilities change from year to year in the light of varying teaching and research activities and to allow for study leave. Within this collegial atmosphere a supportive appraisal and probation system is operated under the supervision of senior members of staff. All staff are defined as "research active" and it is the Departmental policy to maximise time available for research. The administration of the Department's teaching and research activities is supported by a full-time departmental administrator and secretarial staff. Technical facilities The Department has a good level of technical support and provision.
We have installed a new computer network comprising both Apple Macintoshes and PCs. All staff have their own microcomputers in their offices. We are connected to the University's central computing facilities which include several UNIX-based super-minicomputers and access to national and international computer networks. We have recently refurbished our laboratory facilities to include a high-quality audio-visual studio. As a result of our successful expansion, the University has committed funding to a new Psychology building, due for completion during 1996. These resources are supported by a full-time Senior Experimental Officer and a Technician/Demonstrator who have responsibility for their development, operation and maintenance. If you require any further information please contact Professor Dominic Abrams (Tel: 01227 764000 ext 3084: email D.Abrams at ukc.ac.uk), Dr Brendan Weekes (Tel: 01227 827411: email B.S.Weekes at ukc.ac.uk) or Dr Toby Lloyd-Jones (Tel: 01227 827611: email T.J.Lloyd-Jones at ukc.ac.uk) Salary will be within the Lecturer Grade A/B scale - 14,756 - 25,735 pounds per annum. The University has adopted a policy of making most jobs available for job-sharing if suitable applicants come forward. Applications to job-share this post are welcomed and will be considered without prejudice. Closing date for applications: 16 June 1995 The University is committed to implementing its Equal Opportunities Policy. ------------------------------------------------------------------------------ Matthew Dye (BSc MSc) "I think, therefore I hesitate." Department of Psychology email: see header University of Kent at Canterbury tel: +44 (0)1227 764000 ext 3080 Canterbury FAX: +44 (0)1227 763674 Kent CT2 7LZ ftp server: ftp.ukc.ac.uk UNITED KINGDOM /pub/mwgd1 ------------------------------------------------------------------------------ From N.Sharkey at dcs.shef.ac.uk Thu May 25 11:27:25 1995 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Thu, 25 May 95 16:27:25 +0100 Subject: summer fellowship Message-ID: <9505251527.AA07192@entropy.dcs.shef.ac.uk> The Department of Computer Science at Sheffield, UK is offering 4 summer fellowships for 1995. These would essentially cover travel and living expenses. I am interested in receiving applications from people from the neural network community. The more senior the better. Please email me informally if you are interested. noel Noel Sharkey N.Sharkey at dcs.shef.ac.uk Professor of Computer Science FAX: (0114) 2780972 Department of Computer Science Regent Court University of Sheffield S1 4DP, Sheffield, UK From jbower at bbb.caltech.edu Thu May 25 12:41:58 1995 From: jbower at bbb.caltech.edu (jbower@bbb.caltech.edu) Date: Thu, 25 May 95 09:41:58 PDT Subject: CNS*95 registration announcement Message-ID: <9505251641.AA01679@bbb.caltech.edu> ************************************************************************ REGISTRATION INFORMATION THE FOURTH ANNUAL COMPUTATIONAL NEUROSCIENCE MEETING CNS*95 JULY 12 - JULY 15, 1995 MONTEREY, CALIFORNIA ************************************************************************ CNS*95: Registration is now open for this year's Computational Neuroscience meeting (CNS*95). This is the fourth in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience.
As in the previous years, this meeting will bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. The meeting will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. MEETING STRUCTURE: The meeting will be organized in two parts: three days of research presentations, and one day of follow-up workshops. Most presentations will be based on submitted papers, with several presentations by specially invited speakers. 126 submitted papers have been accepted for presentation at the meeting based on peer review. Details on the agenda can be obtained via ftp, through telnet or on the web as described below. LOCATION: The meeting will take place on the Monterey Peninsula on the coast south of San Francisco, California, at the Doubletree Hotel in downtown Monterey. This modern hotel is located on Monterey's historic Fisherman's Wharf. MEETING ACCOMMODATIONS: Accommodations for the meeting have been arranged at the Doubletree Hotel. We have reserved a block of rooms in the conference hotel at a special rate for all attendees of $122 per night, single or double occupancy (that is, 2 people sharing a room would split the $122!). A fixed number of rooms have been reserved for students at the rate of $99 per night single or double occupancy (yes, that means $50 a night per student!). These student room rates are on a first-come-first-served basis, so we recommend acting quickly to reserve these slots. Each additional person per room is a $20 charge. Registering for the meeting WILL NOT result in an automatic room reservation. Instead you must make your own reservations by returning the enclosed registration sheet to the hotel, faxing, or by contacting: the Doubletree Hotel at Fisherman's Wharf ATTENTION: Reservations Dept. Two Portola Plaza Monterey, CA 93940 (408) 649-4511 Fax No. (408) 649-3109 NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 12, 1995. When making reservations by phone, make sure to indicate that you are registering for the CNS*95 meeting. Students will be asked to verify their status on check in with a student ID or other documentation. REGISTRATION INFORMATION: You can register for the meeting either electronically, by FAX or by surface mail. Details are provided below. Note that registration fees increase after June 12th. Registration received before June 12, 1995: Students, meeting: $ 90 Others, meeting: $ 200 Meeting registration after June 12, 1995: Students, meeting: $ 125 Others, meeting: $ 235 BANQUET: Registration for the meeting includes a single ticket to the annual CNS Banquet this year to be held within the Monterey Aquarium on Friday evening, July 14th. Additional Banquet tickets can be purchased for $35 per person. HOW TO REGISTER: To register for CNS*95 you can: 1) use our on-line registration form via http://www.bbb.caltech.edu/cns95.html. Note that this will NOT make a hotel reservation for you. You must do that yourself.
2) get an ftp-able copy of a registration form from ftp.bbb.caltech.edu under pub/cns95/registration_form95

 URL: http://www.bbb.caltech.edu/cns95.html
 FTP: ftp.bbb.caltech.edu -- pub/cns95

3) FAX, email, or surface mail the following registration form to:

 CNS*95
 Caltech, Division of Biology 216-76
 Pasadena, CA 91125
 Attn: Judy Macias
 Fax Number: (818) 795-2088
 email: judy at smaug.bbb.caltech.edu

************************************************************************

CNS*95 REGISTRATION FORM

Last Name:
First Name:
Title:
Organization:
Address:
City:            State:        Zip:        Country:
Telephone:
Email Address:

REGISTRATION FEES:
 Technical Program -- July 12 - 15
 Regular $200 ($225 after June 12th)
 Student $ 90 ($125 after June 12th)
 Banquet $ 35 (each additional banquet ticket) - July 14th

Total Payment: $

Please Indicate Method of Payment:

 Check or Money Order Payable in U.S. Dollars to: CNS*95 - Caltech
 Please make sure to indicate CNS*95 and YOUR name on all money transfers.

 Charge my card: Visa Mastercard American Express
 Number:
 Expiration Date:
 Name of Cardholder:
 Signature as appears on card (for mailed-in applications):
 Date:

Additional questions:

Did you submit an abstract and summary? ( ) Yes ( ) No
 Title:
Have you attended CNS meetings previously? ( ) Yes ( ) No
Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note:

Some grants to cover partial travel expenses may become available. Do you wish further information? ( ) Yes ( ) No

**************************************************************************

ADDITIONAL MEETING INFORMATION: Information on the full meeting agenda, the current list of registered attendees, paper abstracts, etc. is available on line:

Information is available via http://www.bbb.caltech.edu/cns95.html

You can ftp information about CNS*95 from ftp.bbb.caltech.edu. Use 'ftp' or 'anonymous' as your ftp login name, and enter your email address as the password. Information is available under pub/cns95.

HOPE TO SEE YOU AT THE MEETING

From watrous at scr.siemens.com Thu May 25 17:20:21 1995
From: watrous at scr.siemens.com (Raymond L Watrous)
Date: Thu, 25 May 1995 17:20:21 -0400
Subject: Paper available: KBANN applied to ECG processing
Message-ID: <199505252120.RAA00519@tiercel.scr.siemens.com>

FTP-HOST: scr.siemens.com
FTP-filename: /pub/learning/Papers/watrous/soar.ps.Z

The following paper (7 pages, 3 figures) is now available via anonymous ftp:

Synthesize, Optimize, Analyze, Repeat (SOAR):
Application of Neural Network Tools to ECG Patient Monitoring

Raymond Watrous, Geoffrey Towell and Martin S. Glassman
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540

Abstract

Results are reported from the application of tools for synthesizing, optimizing and analyzing neural networks to an ECG patient monitoring task. A neural network was synthesized from a rule-based classifier and optimized over a set of normal and abnormal heartbeats. The classification error rate on a separate and larger test set was reduced by a factor of 2. Sensitivity analysis of the synthesized and optimized networks revealed informative differences. Analysis of the weights and unit activations of the optimized network enabled a 40% reduction in the size of the network without loss of accuracy.
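For readers unfamiliar with the synthesis step the abstract describes, here is a minimal sketch of a KBANN-style rule-to-network mapping (in the spirit of Towell's knowledge-based networks); it is not the authors' code, and the ECG feature names, the rule, and the weight constant OMEGA are illustrative assumptions.

    import numpy as np

    OMEGA = 4.0  # weight magnitude used to encode a rule antecedent

    def rule_to_unit(n_inputs, positive, negated):
        # Initialise one sigmoid unit so it fires only when the conjunctive
        # rule holds: all `positive` inputs near 1, all `negated` inputs near 0.
        # Gradient training then refines these rule-derived weights.
        w = np.zeros(n_inputs)
        w[list(positive)] = OMEGA
        w[list(negated)] = -OMEGA
        bias = -(len(positive) - 0.5) * OMEGA  # threshold just under the antecedent count
        return w, bias

    def unit_output(x, w, bias):
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + bias)))

    # Hypothetical rule: abnormal_beat :- wide_qrs AND NOT preceded_by_p_wave,
    # over the (made-up) inputs [wide_qrs, preceded_by_p_wave, rr_irregular].
    w, b = rule_to_unit(3, positive=[0], negated=[1])
    print(unit_output(np.array([1.0, 0.0, 0.0]), w, b))  # ~0.88: rule satisfied
    print(unit_output(np.array([1.0, 1.0, 0.0]), w, b))  # ~0.12: blocked by the P wave

Starting the optimization from weights that encode the rule base, rather than from random weights, is what makes the trained network directly comparable to the original rule-based classifier, as in the abstract.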
+=+=+=

The paper will appear in the Proceedings of the Workshop on Environmental and Energy Applications of Neural Networks, March 30-31, 1995, Richland, Washington, and is reprinted from the Proceedings of the Third International Congress on Air- and Structure-Borne Sound and Vibration, June 13-15, 1994, Montreal, Quebec, pp. 997-1004, and the Proceedings of the 1993 Symposium on Nonlinear Theory and its Applications, December 5-10, Honolulu, Hawaii, pp. 565-570.

We regret that we are unable to provide hard copies.

Raymond Watrous

+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Learning Systems Department           Phone: (609) 734-6596
Siemens Corporate Research            FAX: (609) 734-6565
755 College Road East
Princeton, NJ 08540
watrous at learning.scr.siemens.com

From sutton at gte.com Fri May 26 18:26:12 1995
From: sutton at gte.com (Rich Sutton)
Date: Fri, 26 May 1995 17:26:12 -0500
Subject: New RL paper and WWW interface to archive
Message-ID:

GENERALIZATION IN REINFORCEMENT LEARNING:
SUCCESSFUL EXAMPLES USING SPARSE COARSE CODING

Richard S. Sutton
submitted to NIPS'95

On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(lambda) algorithm when lambda=1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general lambda.

________________

The paper is available by ftp as ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-inprer.ps.gz or via a new WWW interface to my small archive at http://envy.cs.umass.edu/People/sutton/archive.html. Please update any WWW pointers that still reference the old ftp archive.

From bilmes at ICSI.Berkeley.EDU Fri May 26 19:19:48 1995
From: bilmes at ICSI.Berkeley.EDU (Jeff Bilmes)
Date: Fri, 26 May 1995 16:19:48 PDT
Subject: Speech Recognition Systems Position Available, Berkeley CA
Message-ID: <199505262319.QAA15929@anchorsteam.ICSI.Berkeley.EDU>

Speech Recognition Systems Position Available
International Computer Science Institute
Berkeley CA

The International Computer Science Institute (ICSI) in Berkeley, California, now has a position available in the Realization Group. This is a staff position with a focus on speech recognition system design. The job will involve both research and development.
Applicants should have several years of experience in designing and writing speech recognition software, and an interest in developing recognizers that incorporate results from basic research within the group. It would be best if the applicant had experience in speech research as well, although development experience is the more important qualification. Ordinarily, applicants will have a Ph.D., although strong professional experience can substitute for this requirement.

The Realization Group at ICSI works on hardware, software, and algorithms for research and development of systems for speech processing and other machine implementations of human signal processing and pattern recognition. In the past, much of the emphasis has been on hybrid neural network/hidden Markov model speech recognition, robust recognition in the context of additive and convolutional noise, study of robust properties of the human auditory system, and research into the interaction of statistical sequence recognition with robust signal processing. Hardware and software tools also continue to be developed in the group, leading to strong capabilities for training with computationally intensive algorithms. The group includes a strong contingent of Berkeley CS and EE graduate students, and benefits from fruitful collaborations with other researchers (including Greenberg from Berkeley, Bourlard from Belgium, Hermansky from OGI, Cohen and Franco from SRI, and Robinson from Cambridge U). The group is led by Nelson Morgan.

ICSI is an independent, non-profit basic research institute affiliated with the University of California campus in Berkeley, California, and is an Equal Opportunity Employer. For further information, please contact:

 Dr. Nelson Morgan
 International Computer Science Institute
 1947 Center St., Suite 600
 Berkeley, CA 94704
 (415) 642-4274 x131
 morgan at icsi.berkeley.edu

From eann95 at ra.abo.fi Sat May 27 09:13:46 1995
From: eann95 at ra.abo.fi (EANN-95 Konferensomrede VT)
Date: Sat, 27 May 1995 16:13:46 +0300
Subject: EANN 95 ftp,http sites
Message-ID: <199505271313.QAA08210@aton.abo.fi>

Sorry for the unsolicited mail. We will be really brief.

The EANN '95 conference program is in /pub/vt/ab/eann95schedule.ps.Z and can be picked up by anonymous ftp from ftp.abo.fi. The EANN '95 home page is at http://www.abo.fi/~abulsari/EANN95.html.

EANN '95 organisers

From tishby at CS.HUJI.AC.IL Mon May 29 08:21:02 1995
From: tishby at CS.HUJI.AC.IL (Tali Tishby)
Date: Mon, 29 May 1995 15:21:02 +0300
Subject: Cortical Dynamics in Jerusalem: Final program
Message-ID: <199505291221.AA01406@fugue.cs.huji.ac.il>

CORTICAL DYNAMICS IN JERUSALEM
JUNE 11-15, 1995

TENTATIVE PROGRAM (24 April 1995)

Sunday, 11/6/1995

 9.00-10.00  Registration
10.00-10.30  Opening Ceremony

From Single Neurons to Cortical Networks
10.30-11.30  H.B. Barlow: The Division of Labour between Single Neurons and Networks
11.30-12.00  Coffee
12.00-13.00  A. Schuz: Anatomical Aspects Relevant to Cortical Dynamics
13.00-14.30  Lunch

Dynamics of Single Neurons and Synapses
14.30-15.30  Y. Amitai: Functional Implications of Cellular Classification in the Neocortex
15.30-16.30  I. Segev: Backward Propagating Action Potential and Forward Synaptic Amplification in Excitable Dendrites of Neocortical Pyramidal Cells
16.30-17.00  Coffee
17.00-18.00  A.M. Thomson: Synaptic Interactions between Neocortical Neurons: Temporal Patterning Selectively Activates Excitatory or Inhibitory Circuits
18.00-19.00  H. Markram: Neocortical Pyramidal Neurons Scale the Efficacy of Synaptic Input According to Arrival Time: A Proposed Selection Principle of the Most Appropriate Synaptic Information
19.30        Reception

Monday, 12.6.1995

Recurrent Dynamics in Cortical Circuits I
 9.00-10.00  K.A.C. Martin: Recurrent Excitation in Neocortical Circuits
10.00-11.00  H. Sompolinsky: Visual Processing by Recurrent Cortical Networks
11.00-11.30  Coffee
11.30-12.30  R. Eckhorn: Loosely Synchronized Rhythmic and Non-Rhythmic Activities in the Visual Cortex and their Potential Roles for Spatial and Temporal Coding
12.30-13.30  A.K. Kreiter: Functional Aspects of Neuronal Synchronization in Monkeys
13.30-16.30  Poster session

Recurrent Dynamics in Cortical Circuits II
16.30-17.30  J. Bullier: Temporal Aspects of Cortical Processing in Monkey Visual Cortex
17.30-18.30  D. Hansel: Chaos and Synchrony

Tuesday, 13.6.1995

Brain States and Neural Codes I
 9.00-10.00  M. Abeles: Spatio-Temporal Firing Patterns in the Frontal Cortex
10.00-11.00  E. Bienenstock: On the Dynamics of Synfire Chains
11.00-11.30  Coffee
11.30-12.30  D.J. Amit: Global Spontaneous Activity and Local Structured (Learned) Delay Activity in the Cortex
12.30-13.30  G.L. Gerstein: Neuronal Assembly Dynamics: Experiments, Analyses and Models
13.30-16.30  Poster session

Brain States and Neural Codes II
16.30-17.30  B.J. Richmond: Dimensionality of Neuronal Codes
17.30-18.30  N. Tishby: Analyzing Cortical Activity Using Hidden Markov Models

Wednesday, 14.6.1995

Vision
 9.00-10.00  C.D. Gilbert: Spatial Integration and Cortical Dynamics
10.00-11.00  A. Grinvald: Cortical Dynamics Revealed by Optical Imaging
11.00-11.30  Coffee
11.30-12.30  D. Mumford: Biological and Computational Models for Low-Level Vision
12.30-13.30  S. Ullman: The Sequence-Seeking Model for Bidirectional Information Flow in the Visual Cortex
13.30-15.00  Lunch
15.00-18.00  Tour

Thursday, 15.6.1995

Mechanisms of Behavior and Cognition
 9.00-10.00  M.N. Shadlen: Seeing and Deciding about Motion: Neural Correlates of a Psychophysical Decision in Area LIP of the Monkey
10.00-11.00  A.B. Schwartz: Population Response in Motor Cortex during Figure Drawing
11.00-11.30  Coffee
11.30-12.30  A.M. Graybiel: The Basal Ganglia and Adaptive Control of Behavioral Routines
12.30-13.30  E. Vaadia: Does Coherent Activity in Groups of Neurons Serve a Neural Code?
13.30-15.30  Lunch
15.30-16.30  L.G. Valiant: A Computational Model for Cognition
16.30-17.00  Coffee
17.00-19.00  Discussion: Is Dynamics Relevant to Cortical Function? Moderator: S. Hochstein
20.00        Farewell Banquet

------- End of Forwarded Message

From peterk at nsi.edu Thu May 25 22:24:53 1995
From: peterk at nsi.edu (Peter Konig)
Date: Thu, 25 May 1995 18:24:53 -0800
Subject: postdoc position available
Message-ID:

Junior Fellow Position in Cortical Neurophysiology available.

Applications are invited for the postdoctoral position of Junior Fellow in Experimental Neurobiology at the Neurosciences Institute, La Jolla, to study mechanisms underlying visual perception and sensorimotor integration in the cat. Applicants should have a background in neurophysiological techniques and data analysis. Fellows will receive stipends appropriate to their qualifications and experience. The position is available immediately. Submit a curriculum vitae, statement of research interests, and names of three references to: Dr.
Peter Konig
The Neurosciences Institute
10640 John Jay Hopkins Drive
San Diego, CA 92121, USA
FAX: 619-626-2199

From jaap.murre at mrc-apu.cam.ac.uk Fri May 26 11:44:06 1995
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Fri, 26 May 1995 16:44:06 +0100
Subject: Paper on Amnesia Model
Message-ID: <199505261544.QAA10765@sirius.mrc-apu.cam.ac.uk>

A new paper has been added to our ftp site:

Reference: Murre, J.M.J. (submitted). A model of amnesia. Submitted to Psychological Review.

Abstract: We present a model of amnesia (the TraceLink model) based on a review of its neuropsychology, neuroanatomy, and connectionist modelling. The model consists of three subsystems: (1) a trace system, (2) a link system, and (3) a modulatory system. The hippocampus is assigned a double role, being involved in both the link system and the modulatory system. The model is able to account for many of the characteristics of semantic dementia, retrograde amnesia (RA) and anterograde amnesia (AA), including: Ribot gradients, transient global amnesia, patterns of shrinkage and recovery from RA, and correlations between AA and RA or the absence thereof (e.g., in isolated RA). In addition, we derive testable predictions concerning implicit memory, forgetting gradients, and neuroanatomy.

ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/trclink.ps   (1253 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/trclink.ps.Z ( 406 Kb)
ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/trclink.zip  ( 325 Kb)

N.B. There is a 10-user limit on our ftp site. If you don't succeed in connecting, please try again later. (Normally, there should be a message indicating that the limit has been exceeded.)

I am at the moment moving to the University of Amsterdam. After 1 June 1995 my mail address will be: j.murre at psy.uva.nl. There may be a delay in answering questions due to the move.

-- Jaap Murre
jaap.murre at mrc-apu.cam.ac.uk
Medical Research Council - Applied Psychology Unit, Cambridge
University of Amsterdam, Department of Psychonomics (after June 1)

From zhangw at chert.CS.ORST.EDU Tue May 30 15:25:57 1995
From: zhangw at chert.CS.ORST.EDU (Wei Zhang)
Date: Tue, 30 May 95 12:25:57 PDT
Subject: Reinforcement learning papers
Message-ID: <9505301925.AA13760@curie.CS.ORST.EDU>

This is to announce the availability of two new postscript preprints:

High-Performance Job-Shop Scheduling With A Time-Delay TD($\lambda$) Network
Wei Zhang and Thomas G. Dietterich
submitted to NIPS-95
ftp://ftp.cs.orst.edu/users/z/zhangw/papers/zhang-tgd-nips95.ps.gz

Abstract: Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm TD($\lambda$). A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-TD($\lambda$) network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly outperform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them.

Value Function Approximations and Job-Shop Scheduling
Wei Zhang and Thomas G.
Dietterich
submitted to the Workshop on Value Function Approximation in Reinforcement Learning at ML-95
ftp://ftp.cs.orst.edu/users/z/zhangw/papers/zhang-tgd-ml95rl.ps.gz

Abstract: We report a successful application of TD($\lambda$) with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program. The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses.

The following reinforcement learning paper is also available at the site:

Zhang, W. and Dietterich, T., A Reinforcement Learning Approach to Job-Shop Scheduling, to appear in Proc. IJCAI-95, 1995.
ftp://ftp.cs.orst.edu/users/z/zhangw/papers/zhang-tgd-ijcai95.ps.gz

From csvarer at eivind.ei.dtu.dk Wed May 31 16:54:39 1995
From: csvarer at eivind.ei.dtu.dk (Claus Svarer)
Date: Wed, 31 May 95 16:54:39 METDST
Subject: ph.d. thesis available: NN for Signal Processing
Message-ID: <9505311454.AA08746@ei.dtu.dk>

------------------------------------------------------------------------
FTP-host: eivind.ei.dtu.dk
FTP-file: dist/PhD_thesis/csvarer.thesis.ps.Z
------------------------------------------------------------------------

The following Ph.D. thesis is now available by anonymous ftp:

Neural Networks for Signal Processing

Claus Svarer
CONNECT, Electronics Institute B349
Technical University of Denmark
DK-2800 Lyngby
Denmark
csvarer at ei.dtu.dk

The main themes of the thesis are:

- Optimization of neural network architectures by pruning of parameters. Pruning is based on Optimal Brain Damage estimates of which parameters induce the least increase in the cost function when removed (a toy illustration of this saliency computation is sketched after the signature below).

- Selection of the optimal architecture in a family of pruned networks, based on a generalization error estimate (Akaike's Final Prediction Error estimate).

- Methods for on-line tuning of the different parameters of the network optimization algorithms. The gradient-descent parameter is tuned using a second-order Gauss-Newton method, while the weight-decay parameters are tuned to minimize the generalization error (FPE estimate) of the network.

The methods proposed for network optimization are all examined by experiments. Examples are selected from the areas of classification, time-series prediction and non-linear modeling.

> SORRY: NO HARD COPIES AVAILABLE <

--
 _______________________________________ ____________________________________
|                                       |                                    |
| Claus Svarer                          | e-mail : csvarer at ei.dtu.dk      |
| Rigshospitalet, N2081                 | Phone  : +45 3545 3545            |
| Department of Neurology               | Direct : +45 3545 2088 or         |
| DK-2100 Copenhagen O                  |          +45 3545 3957            |
| Denmark                               | Fax    : +45 3545 3898            |
|_______________________________________|____________________________________|
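As promised above, here is a toy illustration of the Optimal Brain Damage saliency the first thesis theme refers to: under a diagonal Hessian approximation, deleting weight w_i is estimated to raise the cost function by 0.5 * H_ii * w_i^2, so the weights with the smallest such saliency are removed first. This is a sketch of the general technique, not code from the thesis; the weights, Hessian values, and pruning fraction are made-up numbers.

    import numpy as np

    def obd_prune(weights, hess_diag, frac):
        # OBD saliency: estimated increase in the cost function if weight i
        # is set to zero, using only the diagonal of the Hessian.
        saliency = 0.5 * hess_diag * weights**2
        k = int(frac * weights.size)
        cut = np.argsort(saliency)[:k]      # the least damaging deletions
        pruned = weights.copy()
        pruned[cut] = 0.0
        return pruned, saliency

    w = np.array([0.02, -1.3, 0.4, 0.001, 2.1, -0.05])  # illustrative weights
    h = np.array([5.0, 0.8, 1.2, 9.0, 0.3, 4.0])        # diagonal Hessian estimates
    pruned, s = obd_prune(w, h, frac=0.5)
    print(pruned)  # the three lowest-saliency weights have been zeroed

In the setting the thesis describes, one would retrain after each pruning step and then select, from the resulting family of pruned networks, the architecture with the lowest FPE estimate of generalization error.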