From Connectionists-Request at cs.cmu.edu Tue Mar 1 00:05:17 1994 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Tue, 01 Mar 94 00:05:17 EST Subject: Bi-monthly Reminder Message-ID: <17743.762498317@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated January 4, 1993. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. Happy hacking. -- Dave Touretzky & David Redish --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new text books related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. 
A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. - Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8). 2. Login as user anonymous with password your username. 3. 'cd' directly to one of the following directories: /usr/connect/connectionists/archives /usr/connect/connectionists/bibliographies 4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory. 5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. 
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.

Make sure your paper is single-spaced, so as to save paper, and include an INDEX entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one-sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to catch easily-found local PostScript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems, will also be posted as they are debugged and become available. At any time, for any reason, the author may request that their paper be updated or removed.

For further questions contact:

Jordan Pollack                     Email: pollack at cis.ohio-state.edu
Assistant Professor                Phone: (614) 292-4890
CIS Dept/OSU
Laboratory for AI Research
2036 Neil Ave
Columbus, OH 43210

APPENDIX: Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu.
220 cheops.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox.
Here is the INDEX entry:

rosenblatt.reborn.ps.Z
rosenblatt at gvax.cs.cornell.edu
17 pages.
Boastful statements by the deceased leader of the neurocomputing field.

Let me know when it is in place so I can announce it to Connectionists at cmu.
Frank ^D AFTER FRANK RECEIVES THE GO-AHEAD, AND HAS A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: gvax> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/rosenblatt.reborn.ps.Z The file rosenblatt.reborn.ps.Z is now available for copying from the Neuroprose repository: Born Again Perceptrons (17 pages) Frank Rosenblatt Cornell University ABSTRACT: In this unpublished paper, I review the historical facts regarding my death at sea: Was it an accident or suicide? Moreover, I look over the past 23 years of work and find that I was right in my initial overblown assessments of the field of neural networks. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "nn-bench-request at cs.cmu.edu".  From radford at cs.toronto.edu Tue Mar 1 21:39:15 1994 From: radford at cs.toronto.edu (Radford Neal) Date: Tue, 1 Mar 1994 21:39:15 -0500 Subject: TR available: "Priors for infinite networks" Message-ID: <94Mar1.213924edt.161@neuron.ai.toronto.edu> FTP-host: ftp.cs.toronto.edu FTP-filename: /pub/radford/pin.ps.Z The following technical report is now available via ftp, as described below. PRIORS FOR INFINITE NETWORKS Radford M. Neal Department of Computer Science University of Toronto 1 March 1994 Bayesian inference begins with a prior distribution for model parameters that is meant to capture prior beliefs about the relationship being modeled. For multilayer perceptron networks, where the parameters are the connection weights, the prior lacks any direct meaning --- what matters is the prior over functions computed by the network that is implied by this prior over weights. In this paper, I show that priors over weights can be defined in such a way that the corresponding priors over functions reach reasonable limits as the number of hidden units in the network goes to infinity. When using such priors, there is thus no need to limit the size of the network in order to avoid ``overfitting''. The infinite network limit also provides insight into the properties of different priors. A Gaussian prior for hidden-to-output weights results in a Gaussian process prior for functions, which can be smooth, Brownian, or fractional Brownian, depending on the hidden unit activation function and the prior for input-to-hidden weights. Quite different effects can be obtained using priors based on non-Gaussian stable distributions. In networks with more than one hidden layer, a combination of Gaussian and non-Gaussian priors appears most interesting. The paper may be obtained in PostScript form as follows: unix> ftp ftp.cs.toronto.edu (or 128.100.3.6, or 128.100.1.105) (log in as user 'anonymous', your e-mail address as password) ftp> cd pub/radford ftp> binary ftp> get pin.ps.Z ftp> quit unix> uncompress pin.ps.Z unix> lpr pin.ps (or however you print PostScript) The report is 22 pages in length. Due to figures, the uncompressed PostScript is about 2 megabytes in size. 
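The limiting behaviour the abstract describes is easy to probe numerically. Below is a minimal sketch, assuming tanh hidden units and zero-mean Gaussian priors with the hidden-to-output scale sigma_v/sqrt(H); these are standard choices for this construction, not necessarily the report's exact priors. As the hidden layer widens, the marginal distribution of the output at any fixed input settles down instead of blowing up or collapsing:

import numpy as np

def sample_prior_function(x, n_hidden, rng, sigma_a=5.0, sigma_b=5.0, sigma_v=1.0):
    # One function drawn from the prior over networks: tanh hidden units,
    # zero-mean Gaussian weights and biases, with hidden-to-output weights
    # scaled as sigma_v/sqrt(H) so the output stays O(1) as H grows.
    a = rng.normal(0.0, sigma_a, n_hidden)                       # input-to-hidden weights
    b = rng.normal(0.0, sigma_b, n_hidden)                       # hidden-unit biases
    v = rng.normal(0.0, sigma_v / np.sqrt(n_hidden), n_hidden)   # output weights
    return np.tanh(np.outer(x, a) + b) @ v

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
for n_hidden in (1, 10, 1000):
    draws = np.stack([sample_prior_function(x, n_hidden, rng) for _ in range(200)])
    # As n_hidden grows, the marginal at a fixed input approaches a fixed
    # Gaussian, and the draws behave like samples from a Gaussian process.
    print(n_hidden, round(float(draws[:, 25].std()), 3))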
The files pin[123].ps.Z in the same directory contain the same paper in smaller chunks; these may prove useful if your printer cannot digest the paper all at once. Some of the figures take a while to print; the largest is the sole content of pin2.ps.

Radford Neal
radford at cs.toronto.edu

From JANOUSEK at dcse.fee.vutbr.cz Wed Mar 2 20:37:54 1994
From: JANOUSEK at dcse.fee.vutbr.cz (Vladimir Janousek)
Date: Wed, 2 Mar 1994 20:37:54 MET-0100
Subject: NN workshop
Message-ID: <20366F782A@fik.dcse.fee.vutbr.cz>

----------------------------------------------------------------------

CALL FOR PARTICIPATION

Sixth International Microcomputer School on
NEURAL NETWORKS

September 19-23, 1994
Sedmihorky (Bohemian Paradise), Czech Republic

Organised by the Technical University of Brno with financial support from the CEC COST Programme.

The workshop "Microcomputer School" is one of a series of events organised every second year since 1984. Its aim is to promote knowledge of new developments in Computer Science and Engineering among the educational and research community. Each meeting has a different subject area as its focus. The subject of this School is Neural Networks - Theory and Applications.

Main topics

* Architectures of artificial neural networks
* Learning theory and applications
* Speech and character recognition
* Neural network-based controllers
* Commercial and industrial hardware systems
* Cellular neural networks in image processing

Programme and location

The workshop sessions will be led by invited scientists who will present extensive lectures on topics in the field. Participants are also invited to submit individual papers (max 6 pages) on their research to be presented at the session. English is the official language of the workshop.

The workshop is located near the sandstone rock cities of the Bohemian Paradise, a well-known natural wonderland some 80 km from Prague. The place is ideal for hiking trips. Participation will be by registration only, and the number of participants will be limited.

Registration Fee

                      until 15 June 1994      after 15 June 1994
  ACM members         USD 135 / Kc 4.100,-    USD 155 / Kc 4.700,-
  Non ACM members     USD 145 / Kc 4.400,-    USD 165 / Kc 4.990,-

The registration fee covers workshop participation, cost of accommodation, meals and proceedings.

Scholarships for Young Scientists

PhD students from Central and Eastern Europe presenting the most valuable contribution to the Workshop will be granted a scholarship in the form of exemption from the workshop fee. The scholarship will be granted by competition based on the quality of submitted papers. Please note that the scholarships will not cover travel expenses.

Deadlines for the Authors

The manuscript of the paper should be typed and written in English and must be received before April 15. Notification of acceptance with a guide for the authors will be sent before May 15. The full, camera-ready paper must be received before June 30.

To register, or for additional information, contact:

Mrs. Sylva Papezova
Application Software, Ltd.
Bozetechova 2
61266 Brno
CZECH REPUBLIC
phone/fax: +42-5-41211479
phone: +42-5-740741
e-mail: nnet at dcse.fee.vutbr.cz

Payment and Banking Information

Method of Payment: Wire transfer. The amount is to be paid in USD, Kc or any convertible equivalent. IMPORTANT: Please use NETTO payment in your payment order, or increase the amount by USD 10 for the banking fee deducted by the beneficiary's bank.
Bank:            Komercni banka Brno-mesto
                 nam. Svobody 21
                 CZ-631 31 Brno
                 Czech Republic
Account number:  113545-621/0100

------------------------------------------------------------------
Vladimir Janousek                    janousek at dcse.fee.vutbr.cz
Technical University of Brno
Faculty of Electrical Engineering & Informatics
Department of Computer Science & Engineering
Bozetechova 2, CZ-612 66 Brno, Czech Republic

From jameel at cs.tulane.edu Thu Mar 3 03:06:09 1994
From: jameel at cs.tulane.edu (Akhtar Jameel)
Date: Thu, 3 Mar 1994 02:06:09 -0600 (CST)
Subject: Call for papers
Message-ID: <9403030806.AA03339@pegasus.cs.tulane.edu>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 5388 bytes
Desc: not available

From mtx004 at cck.coventry.ac.uk Thu Mar 3 10:31:46 1994
From: mtx004 at cck.coventry.ac.uk (NSteele)
Date: Thu, 3 Mar 94 10:31:46 WET
Subject: ANNGA95
Message-ID: <15290.9403031031@cck.coventry.ac.uk>

Please could you post the following invitation to participate...

******************************************************************************

ICANNGA95

INTERNATIONAL CONFERENCE on ARTIFICIAL NEURAL NETWORKS and GENETIC ALGORITHMS

Preceded by a one-day Introductory Workshop

ECOLE DES MINES d'ALES, FRANCE
18th - 21st April 1995

Call for Papers and Invitation to Participate

Purpose and Scope of the Conference

Artificial neural networks and genetic algorithms are two areas of emerging technology with their origins in the field of biology. Independently or in conjunction, approaches based on these techniques have produced interesting and useful results in many fields of application. The conference has two purposes: to bring together established workers in the fields, and to provide an opportunity for those wishing to gain understanding and experience of these areas. Thus the conference will be preceded by a one-day workshop. At this workshop, introductory presentations covering the basic concepts and recent developments will be given, with lectures based on printed course notes. The language of instruction will be English, but it is expected that assistance will be available in French and German. "Hands-on" experience will be available, and the workshop fee will include the cost of some introductory software. Workshop participants will be able to register as listeners for the conference itself at a reduced rate.

This conference follows the highly successful ICANNGA93 held at Innsbruck, Austria in April 1993. Owing to the exceptionally high quality of publications and the friendly atmosphere evident at the 1993 conference, the organisers have decided to continue the conference theme with a conference every two years. As a result, the Ecole des Mines d'Ales is honoured and delighted to have been chosen to host the second of this conference series, in close collaboration with the organisers of ICANNGA93 from the University of Innsbruck and Coventry University.

Call for Papers

The conference will focus on both the theoretical and practical aspects of the technologies. Accordingly, contributions are sought based on the following list, which is indicative only.

1. Theoretical aspects of Artificial Neural Networks. Novel paradigms, training methods, analysis of results and models, trajectories and dynamics.

2. Practical applications of Artificial Neural Networks.
Pattern recognition, classification problems, fault detection, optimisation, prediction, risk assessment, data compression and image processing, process monitoring and control, financial forecasting, etc... 3. Theoretical and computational aspects of Genetic Algorithms. New algorithms/processes, performance measurement. 4. Practical applications of Genetic Algorithms. Optimisation, scheduling and design problems, classifier systems, application to artificial neural networks, etc... Authors wishing to contribute to the conference should send an abstract of 600-1000 words of their proposed contribution before 31st August 1994. Abstracts should be in English and three typewritten copies should be sent to the address below. David Pearson Laboratoire d'Electronique d'Automatique et d'Informatique Ecole des Mines d'Ales 6, avenue de Clavieres 30319 Ales Cedex France Alternatively abstracts may be sent by electronic mail to either of the following email addresses. 1. dpearson at soleil.ENSM-ALES.FR (Ales) 2. NSTEELE at cov.ac.uk (Coventry) Refereeing of abstracts submitted before the deadline date will take place on a regular basis, allowing early decisions to be taken in order to help contributors plan their visit. ADVISORY COMMITTEE R. Albrecht, University of Innsbruck D. Pearson, Ecole des Mines d'Ales N. Steele, Coventry University. Accommodation charges are not included in the fees. Details on hotel reservation will be available later in 1994. For further information on the conference or workshop please contact :- Nigel Steele Department of Mathematics Coventry University Priory Street Coventry CV1 5FB UK tel: +44 203 838568 fax: +44 203 838585 email: NSTEELE at cck.cov.ac.uk or David Pearson Laboratoire d'Electronique d'Automatique et d'Informatique Ecole des Mines d'Ales 6, avenue de Clavieres 30319 Ales Cedex France tel: +33 66785249 fax: +33 66785201 email: dpearson at soleil.ENSM-ALES.FR General Information Ales-en-Cevennes is situated at the South-Eastern outcrop of the Massif Central between the "garrigues" of Languedoc and the Cevennes mountains and owes its existence to its abundant mineral resources. Means of access: Ales is located 40 kilometres from Nimes, 70 kilometres from Avignon, Montpellier and the Mediterranean beaches and 150 kilometres from Marseille. By road: The "Nimes-Ouest" exit on the Lyon-Barcelona and the Marseille-Barcelona motorways. By train: The Paris-Lyon-Nimes TGV (high speed train, 4.5 hours from Paris to Nimes), connection by train or bus to Ales from Nimes. By plane: Marseille and Montpellier international airports, Nimes national airport with several daily flights from Paris. More detailed information on the various means of access will be available later in 1994.  -- ========================== Nigel Steele Chairman, Division of Mathematics School of Mathematical and Information Sciences Coventry University Priory Street Coventry CV1 5FB United Kingdom. tel: (0203) 838568 +44 203 838568 email: NSTEELE at uk.ac.cov.cck (JANET) or NSTEELE at cck.cov.ac.uk (EARN BITNET etc.) fax: (0203) 838585 +44 203 838585  From peterw at cogs.susx.ac.uk Thu Mar 3 06:53:00 1994 From: peterw at cogs.susx.ac.uk (Peter Williams) Date: Thu, 3 Mar 94 11:53 GMT Subject: Technical report Message-ID: FTP-host: ftp.cogs.susx.ac.uk FTP-filename: /pub/reports/csrp/csrp312.ps.Z The following technical report is available by anonymous ftp. 
------------------------------------------------------------------------

BAYESIAN REGULARISATION AND PRUNING USING A LAPLACE PRIOR

Peter M Williams
Cognitive Science Research Paper CSRP-312
School of Cognitive and Computing Sciences
University of Sussex
Falmer, Brighton BN1 9QH
England
email: peterw at cogs.susx.ac.uk

Abstract

Standard techniques for improved generalisation from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation, with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy indicates a Laplace rather than a Gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error, and (2) those failing to achieve this sensitivity, which therefore vanish. Since the critical value is determined adaptively during training, pruning---in the sense of setting weights to exact zeros---becomes a consequence of regularisation alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a Gaussian regulariser.

------------------------------------------------------------------------

[113755 bytes, 25 pages]

unix> ftp ftp.cogs.susx.ac.uk
Name: anonymous
Password: (email address)
ftp> cd pub/reports/csrp
ftp> binary
ftp> get csrp312.ps.Z
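The abstract's central claim — that a Laplace prior drives weights to exact zeros, so pruning falls out of regularisation alone — can be seen in miniature with a proximal-gradient (soft-thresholding) step on a toy linear model. This is an illustration of L1-induced sparsity under assumed settings, not Williams' training procedure for networks:

import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the L1 (Laplace log-prior) penalty: shrinks every
    # weight toward zero and sets those within t of zero to exactly zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
true_w = np.zeros(20)
true_w[:3] = (1.5, -2.0, 0.7)                    # only three relevant inputs
y = X @ true_w + 0.1 * rng.normal(size=100)

w, lr, lam = np.zeros(20), 0.01, 0.2
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)            # gradient of the data error
    w = soft_threshold(w - lr * grad, lr * lam)  # Laplace-prior prox step

print("surviving weights:", np.flatnonzero(w))   # the rest are exact zeros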
From charles at playfair.Stanford.EDU Thu Mar 3 16:29:20 1994
From: charles at playfair.Stanford.EDU (Charles Roosen)
Date: Thu, 03 Mar 94 13:29:20 -0800
Subject: Projection Pursuit Papers Available
Message-ID: <199403032129.NAA28374@playfair.Stanford.EDU>

The following Tech Reports are now available by anonymous ftp from research.att.com. They are in the directory /dist/trevor as "asp.tm.ps.Z" and "lrpp.tm.ps.Z".

Charles Roosen
charles at playfair.stanford.edu

---

Automatic Smoothing Spline Projection Pursuit

Charles Roosen                Trevor Hastie
Dept. of Stat.                Stat. & Data Analysis Research Dept.
Stanford U.                   AT&T Bell Labs

Abstract

A highly flexible nonparametric regression model for predicting a response y given covariates {x_k}_{k=1}^d is the projection pursuit regression (PPR) model yhat = h(x) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^T x), where the f_j are general smooth functions with mean zero and norm one, and \sum_{k=1}^d \alpha_{kj}^2 = 1. The standard PPR algorithm of Friedman estimates the smooth functions f_j(v_j) using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with M_{max} linear combinations, then prunes back to a simpler model of size M \leq M_{max}, where M and M_{max} are specified by the user. This paper discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines, and the number of terms M and M_{max} are chosen by generalized cross-validation.

Logistic Response Projection Pursuit

Charles Roosen                Trevor Hastie
Dept. of Stat.                Stat. & Data Analysis Research Dept.
Stanford U.                   AT&T Bell Labs

Abstract

A highly flexible nonparametric regression model for predicting a response y given covariates x is the projection pursuit regression (PPR) model yhat = h(x) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^T x), where the f_j are general smooth functions with mean zero and norm one, and \sum_{k=1}^d \alpha_{kj}^2 = 1. With a binary response $y$, the common approach to fitting a PPR model is to fit yhat to minimize average squared error without explicitly considering the binary nature of the response. We develop an alternative logistic response projection pursuit model, in which y is taken to be binomial(p), where \log({p \over 1-p}) = h(x). This may be fit by minimizing either binomial deviance or average squared error. We compare the logistic response models to the linear model on simulated data. In addition, we develop a generalized projection pursuit framework for exponential family models. We also present a smoothing spline based PPR algorithm, and compare it to supersmoother and polynomial based PPR algorithms.

From ling at csd.uwo.ca Thu Mar 3 16:47:02 1994
From: ling at csd.uwo.ca (Charles X. Ling)
Date: Thu, 3 Mar 94 16:47:02 EST
Subject: Overfitting in learning discrete patterns
Message-ID: <9403032147.AA28654@mccarthy.csd.uwo.ca>

Hi everyone,

A few weeks ago I posted several questions regarding overfitting in network training. I got many helpful replies (some were not forwarded to the list). Thanks very much to all.

After some thought, and following Andreas Weigend's Summer School 93 paper, I designed and implemented the following experiments on the overfitting problem. The design is very simple but has a clear rationale, the results seem to be conclusive, and anyone can verify them easily.

THE FUNCTION TO BE LEARNED:

First, a Boolean function with 7 variables and 1 output is defined by:

  count := 2*v1 - v2 + 3*v3 + 2*v4 - v5 + 3*v6 + v7;
  if (count > 2) and (count < 7) then f1 := 0 else f1 := 1

This function needs a network with 2 one-layer hidden units to represent it. The actual target function is the one above, except that 10% of the function values, chosen randomly, are flipped (0 to 1 and 1 to 0). Note that those flipped values can be regarded as having a "high-level regularity"; no sampling noise is added to this target function. (It is like learning verb past tenses, which have a few so-called "irregular" verbs, or learning string-to-phoneme mappings, which have some exceptions.)

The 128 possible examples are split randomly into a training set (64 examples) and a testing set (64), non-overlapping. It happens that the training set has 4 flipped examples, and the testing set has 7.

TRAINING:

Xerion from U of Toronto is used: backprop with a small learning rate and momentum 0. Testing accuracies are monitored every 10 epochs up to 1000 epochs, then several points are checked up to 50,000 epochs. Note that only basic BP is used, without weight decay. Networks have 7 input units and one output unit. Networks with 1 (too small), 2 (the right model), 3, 4 (enough to train to 0 symbolic-level error, see below), 6, 8, 12, and 20 (oversized) hidden units are trained, each with 5 different random seeds.

ERROR:

The most important change from Weigend's paper is the way the error is measured. Since we are learning discrete patterns, the error we really care about is the sum of misclassifications (i.e., the symbolic-level error). The discrete patterns that have the smallest real-number Hamming distance (smallest angle) to the actual network outputs should be taken. Since there is only one output here, if the output is greater than 0.5 it is taken as 1, otherwise as 0.
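For readers who want to reproduce the setup, here is a minimal sketch of the target function and the data split just described. The seed is arbitrary, so which values get flipped (and hence the 4/7 counts above) will differ from Ling's run:

import itertools, random

def f1(v):
    # The underlying Boolean function from the posting.
    count = 2*v[0] - v[1] + 3*v[2] + 2*v[3] - v[4] + 3*v[5] + v[6]
    return 0 if 2 < count < 7 else 1

random.seed(0)   # arbitrary seed; the flipped examples will differ from Ling's
examples = [[v, f1(v)] for v in itertools.product((0, 1), repeat=7)]

# Flip 10% of the 128 function values: built-in "irregular" exceptions,
# not sampling noise.
for i in random.sample(range(len(examples)), round(0.1 * len(examples))):
    examples[i][1] = 1 - examples[i][1]

random.shuffle(examples)   # random, non-overlapping 64/64 train/test split
train, test = examples[:64], examples[64:]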
RESULTS: See the table below. Results are reported as average +/- standard error.

# of       min training    min testing     overfitting        percent of
hidden U   error           error           (# of increase)    increase
 1         19.0 +/- 0.0    30.5 +/- 0.6     2.5 +/- 0.6          8%
 2          2.0 +/- 0.0    11.0 +/- 0.0     1.0 +/- 0.0          9%
 3          1.0 +/- 0.0    11.3 +/- 0.9     2.5 +/- 1.7         22%
 4          0.6 +/- 0.5    11.3 +/- 0.9     6.3 +/- 1.9         56%
 6          0.4 +/- 0.5    13.2 +/- 1.3     2.6 +/- 0.5         20%
 8          0.0 +/- 0.0    13.4 +/- 1.3     5.2 +/- 1.1         39%
12          0.2 +/- 0.5    12.4 +/- 1.1     3.4 +/- 2.8         27%
20          0.5 +/- 0.6    12.8 +/- 1.5     4.0 +/- 2.2         31%

min training error: minimal number of misclassifications on the training set
overfitting: increase in the number of misclassifications on the testing set when training to 50,000 epochs
percent of increase: overfitting / min testing error

CONCLUSIONS:

1. Too-small networks and right-sized networks do overfit, but by a very small amount (8%, 9%). Testing errors and the amount of overfitting are stable.
2. The right-sized network (2 hidden units) has the minimal testing error and minimal fluctuations.
3. Too-large networks overfit a lot. This is because the trained networks represent a very complex separator which does not align with the hyper-planes.
4. Too-large networks have large variations in testing error across different random seeds. Therefore, the results are not as reliable. Why? Too many degrees of freedom in the net.
5. The average error with too-large networks is slightly *higher* than with the right-sized networks. Therefore, training overly large networks and stopping early does not seem to be beneficial.

IN SUM: In terms of symbolic-level errors... Even without noise, training to 0 symbolic-level error overfits. Training overly large networks does not seem beneficial (see 3, 4, 5 above). We should look for a small network with minimal cross-validation error. That net should also have a small overfitting effect and small variations. That is...

1. Split the training set into TR1 (training) and TR2 (validation).
2. Train net1, net2, ..., net_n (with different numbers of hidden units) on TR1; stop training when minimal validation errors are reached.
3. Find a small net, net_k, with the smallest validation error.
4. Train net_k on TR1+TR2 until the error on the training set drops to the scaled-up error observed when training on TR1.

This net will not overfit too much anyway, and the accuracy on testing should be stable. The only thing that should be explored further is basic BP with weight decay. I will do that in the near future.

Comments and suggestions are very welcome.

Regards,
Charles
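The four-step recipe above can be written down compactly. A hedged sketch, assuming the caller supplies two placeholder callables that the posting leaves unspecified: train_fn(h, train, val) trains an h-hidden-unit net, stopping at its minimum error on val (val=None meaning train to a target error), and errors(net, data) counts misclassifications:

def select_and_train(train_fn, errors, data, sizes=(1, 2, 3, 4, 6, 8, 12, 20)):
    # Step 1: split the available training data into TR1 and TR2.
    tr1, tr2 = data[:len(data) // 2], data[len(data) // 2:]

    # Step 2: train one net per candidate size on TR1, stopping each
    # at its minimum validation error on TR2.
    nets = {h: train_fn(h, tr1, tr2) for h in sizes}

    # Step 3: among the nets tied for the smallest validation error,
    # keep the smallest architecture.
    val_err = {h: errors(net, tr2) for h, net in nets.items()}
    k = min(h for h in sizes if val_err[h] == min(val_err.values()))

    # Step 4: retrain net_k on TR1+TR2; stopping at the scaled-up TR1
    # training error is assumed to be handled inside train_fn.
    return train_fn(k, tr1 + tr2, None)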
From oded at ai.mit.edu Thu Mar 3 22:10:46 1994
From: oded at ai.mit.edu (Oded Maron)
Date: Thu, 3 Mar 94 22:10:46 EST
Subject: pre-print announcement: Hoeffding Races - Accelerating Model Selection
Message-ID: <9403040310.AA09710@fiber-bits>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/maron.hoeffding.ps.Z

The file maron.hoeffding.ps.Z is now available for copying from the Neuroprose repository:

Hoeffding Races: Accelerating model selection for classification and function approximation (8 pages)
Oded Maron, MIT AI Lab and Andrew W. Moore, CMU

ABSTRACT: Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, the lack of a distance metric between various models, reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models and concentrating the computational effort on differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems.

This paper will appear in NIPS-6.

Maron, Oded and Moore, Andrew W. (1994). Hoeffding Races: Accelerating model selection for classification and function approximation. In Cowan, J.D., Tesauro, G., and Alspector, J. (eds.), Advances in Neural Information Processing Systems 6. San Francisco, CA: Morgan Kaufmann Publishers.
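The racing idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: all surviving models are scored one held-out point at a time, and a model is discarded as soon as its Hoeffding lower bound rises above the best model's upper bound. The callable loss(m, x, y), the error range, and delta are assumed inputs:

import math

def hoeffding_race(models, held_out, loss, err_range=1.0, delta=0.01):
    # sums[m] accumulates model m's error over the points seen so far.
    sums = {m: 0.0 for m in models}
    for n, (x, y) in enumerate(held_out, start=1):
        for m in sums:
            sums[m] += loss(m, x, y)
        # After n points, each mean is within eps of its expectation with
        # probability 1 - delta (errors assumed bounded by err_range).
        eps = err_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        means = {m: s / n for m, s in sums.items()}
        best_upper = min(means.values()) + eps
        # Keep only models whose lower bound still overlaps the best.
        sums = {m: s for m, s in sums.items() if means[m] - eps <= best_upper}
        if len(sums) == 1:
            break
    return min(sums, key=lambda m: sums[m])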
From ray at basser.cs.su.OZ.AU Fri Mar 4 03:20:04 1994
From: ray at basser.cs.su.OZ.AU (Raymond Lister)
Date: Fri, 4 Mar 1994 19:20:04 +1100
Subject: Tutorial - Neural Networks, Speech Technology, and Other Applications
Message-ID: <9403040825.20347@munnari.oz.au>

**************************************************************

NEURAL NETWORKS, SPEECH TECHNOLOGY, AND OTHER APPLICATIONS

-- A TUTORIAL --

Thursday April 28 and Friday April 29 1994
Queensland University of Technology, Brisbane, AUSTRALIA

Featured Speaker:

PROF. NELSON MORGAN,
International Computer Science Institute
Berkeley, California, USA.

PROF. MORGAN is co-author, with Herve Bourlard, of the recent Kluwer Academic Press book ``Connectionist Speech Recognition, A Hybrid Approach''. PROF. MORGAN will be giving a tutorial similar to the one he gave at the NIPS-6 conference at Denver, Colorado in December 1993. NIPS is the premier international conference for research in artificial neural networks.

Other speakers:

PROF. TOM DOWNS/AH CHUNG TSOI and co-workers from the Speaker Verification Project, Department of Electrical and Computer Engineering, University of Queensland, Australia

PROF. JOACHIM DIEDERICH, Professor of Neurocomputing, and other members of the Queensland University of Technology Neurocomputing Research Centre.

Venue: 12th Floor, Building ITE ("Information Technology and Engineering"), Gardens Point Campus, Queensland University of Technology, 2 George Street, Brisbane, AUSTRALIA (The venue is a short walk from the Brisbane Central Business District.)

Day 1 (2-6PM)

1. An Introduction to Neural Networks. This session will serve as a primer for those attendees with no prior background in artificial neural networks.

2. Demonstrations of Applications of Neural Networks at QUT. The Neurocomputing Research Centre at QUT is developing a number of applications, including systems for: predicting blue-green algae blooms; advising in dairy breeding programs; predicting the bleeding rate of patients undergoing heart bypass surgery; and a computer assistant for the handling of electronic mail.

Day 2 (full day):

1. Connectionist Continuous Speech Recognition, by Nelson Morgan. This will consist of three 90 minute sessions.

2. Speaker Verification Research at the University of Queensland, by Professors Tom Downs/Ah Chung Tsoi and co-workers.
   1. Overview
   2. Neural networks applied to speaker verification
   3. Dynamic time warping applied to speaker verification
   4. Vector quantization applied to speaker verification
   5. Demonstration of a speaker verification system

Cost: Registration $A200, Pre-Registration $150 (for payment received 1 week prior to tutorial). Full-time postgraduate students are eligible for a 50% discount on the full fee. Proof of enrollment is required: either a photocopy of a current student card, or a letter from the Head of Department.

Lunch $A30. Optional, and second day only. Must be accompanied by the early registration fee, up to one week prior to the tutorial.

It will be possible to register on the day, but only cash and cheques will be acceptable. Credit cards cannot be accepted.

*******************************************************************

REGISTRATION FORM

NEURAL NETWORKS, SPEECH TECHNOLOGY, AND OTHER APPLICATIONS
Thursday April 28 and Friday April 29 1994

NAME:_______________________________
AFFILIATION:________________________________________________
ADDRESS:__________________________
        __________________________
        __________________________
        __________________________
TEL: ____________________________ (office hours)
FAX: ____________________________
EMAIL: ____________________________

REGISTRATION (tick as appropriate)

Full Fee:  $200
Early Fee: $150
Student:   $100
Lunch:     $ 30
           ________________
Total:     $

- or -

I expect to attend but will pay on the day (tick)

(Notification of an expectation to attend would be appreciated, as it will aid in making tutorial arrangements. Such notification may be made by electronic mail, along with the above particulars.)

Make cheques payable to "Faculty Research - Neurocomputing". Credit cards cannot be accepted.

Send the registration form and remittance to:

Neural Network Tutorial
Neurocomputing Research Centre
School of Computing Science
Queensland University of Technology
GPO Box 2434
Brisbane Australia 4001

*******************************************************************

Connectionist Continuous Speech Recognition: A Tutorial

by Professor Nelson Morgan

Automatic Speech Recognition (ASR) has been a major topic of research for over 40 years. While there has been much progress in this time, it is still a difficult task, and the best systems are still quite limited. Since computers have rapidly grown much more powerful, statistically-oriented data-driven approaches have received much more attention over the last 10 years. These approaches automatically learn speech model parameters from the data, and have proven to be very successful. The dominant approach for such systems uses Hidden Markov Models (commonly based on an assumption of Gaussian or mixture Gaussian probability densities for the data in each sound class) to represent speech. However, over the last 5 years, a set of techniques has been developed at Berkeley and elsewhere using a hybrid of connectionist probability estimators and Hidden Markov Models. In this tutorial, the basics of automatic speech recognition, Hidden Markov Models, and probability estimation with layered connectionist networks will be reviewed, followed by a more detailed explanation of the current state of development for this class of approaches. The goal of the tutorial will be to acquaint the participants with the major issues of connectionist speech recognition, rather than to exhaustively review the range of approaches under investigation worldwide.

Brief Notes about the Instructor:

Nelson Morgan is the leader of a research group at the International Computer Science Institute whose charter is a mixture of connectionist computational engine design and the incorporation of such engines into research into speech and hearing in order to improve auditory machine perception.
Together with Herve Bourlard, he is the author of the recent Kluwer Academic Press book ``Connectionist Speech Recognition, A Hybrid Approach'', and was the co-developer (with Bourlard) of many of these techniques. He is also on the faculty at the University of California at Berkeley.

*******************************************************************

For further information please contact:

Dr Raymond Lister
email: raymond at fitmail.fit.qut.edu.au

- or -

Prof. Joachim Diederich
Tel: +617 864 2143
email: joachim at fitmail.fit.qut.edu.au

- or - either, by Fax: +617 864 1801

From lautrup at connect.nbi.dk Fri Mar 4 14:04:55 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 4 Mar 94 14:04:55 MET
Subject: preprint
Message-ID:

The following preprint is now available:

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/hertz.nonlin.ps.Z

Authors: J. Hertz, A. Krogh, B. Lautrup and T. Lehmann

Title: Non-Linear Back-propagation: Doing Back-Propagation without Derivatives of the Activation Function.

Size: 13 pages

Abstract: The conventional linear back-propagation algorithm is replaced by a non-linear version, which avoids the necessity for calculating the derivative of the activation function. This may be exploited in hardware realizations of neural processors. In this paper we derive the non-linear back-propagation algorithms in the framework of recurrent back-propagation and present some numerical simulations of feed-forward networks on the NetTalk problem. A discussion of implementation in analog VLSI electronics concludes the paper.

From Fabien.Moutarde at aar.alcatel-alsthom.fr Fri Mar 4 08:00:49 1994
From: Fabien.Moutarde at aar.alcatel-alsthom.fr (Fabien Moutarde)
Date: Fri, 4 Mar 94 14:00:49 +0100
Subject: Overfitting in learning discrete patterns
In-Reply-To: Mail from '"Charles X. Ling" ' dated: Thu, 3 Mar 94 16:47:02 EST
Message-ID: <9403041300.AA05822@orchidee.dinsunnet>

Hi everybody,

Here are some questions and comments about the overfitting experiment reported by Charles X. Ling.

I would like to know how the weights were initialized. Were they taken from a uniform distribution in some fixed interval, whatever the network architecture? Which interval? I ask this because when we tried to provoke some overfitting on a very simple problem, we realized that the initial amplitude of the weights is a crucial factor: if you begin learning with some neurons already in their non-linear regime somewhere in learning space, then the initial function realized by the network is not smooth, and the irregularities are likely to remain between learning points and to produce overfitting. This implies that the bigger the network, the lower the initial weights should be.

Ling's problem is a discrete-valued one, and the above remarks relate to continuous function approximation; however, I think it is possible that his results are more a consequence of his initialization procedure than of the training itself.

Any further comments, or details on the experiment?

Fabien.
Fabien Moutarde
Alcatel Alsthom Recherche
Route de Nozay
91460 Marcoussis
FRANCE

tel: 33-1-64.49.16.98
fax: 33-1-64.49.06.95
e-mail: moutarde at aar.alcatel-alsthom.fr

From lautrup at connect.nbi.dk Fri Mar 4 15:01:47 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 4 Mar 94 15:01:47 MET
Subject: IJNS Vol 4.3 contents
Message-ID:

Begin Message:
-----------------------------------------------------------------------

INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject, which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh, undogmatic attitude towards this multidisciplinary field, with the aim of being a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities.

ISSN: 0129-0657 (IJNS)

----------------------------------

Contents of Volume 4, issue number 3 (1993):

1. X. Yao: Evolutionary Artificial Neural Networks
2. A. Wendemuth & D. Sherrington: Fast Learning of Biased Patterns in Neural Networks
3. H. S. Toh: Weight Configurations of Trained Perceptrons
4. W. Hsu, L. S. Hsu & M. F. Tenorio: The ClusNet Algorithm and Time Series Prediction
5. A. Holst & A. Lansner: A Flexible and Fault Tolerant Query-Reply System Based on a Bayesian Neural Network
6. S. Cavalieri, A. Di Stefano & O. Mirabella: Neural Strategies to Handle Routing in Computer Networks
7. G. K. Knopf & M. M. Gupta: Dynamics of Antagonistic Neural Processing Elements

Book Review: K. Venkatesh Prasad: Neural Networks for Optimization and Signal Processing by A. Cichocki & R. Unbehauen

----------------------------------

Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge), S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge), D. Stork (Stanford) (Book review editor).

Associate editors: J. Alspector (Bellcore), B. Baird (University of California, Berkeley), D. Ballard (University of Rochester), E. Baum (NEC Research Institute), S. Bjornsson (University of Iceland), J. M. Bower (CalTech), S. S. Chen (University of North Carolina), J. L. Elman (University of California, San Diego), M. V. Feigelman (Landau Institute for Theoretical Physics), F. Fogelman-Soulie (Paris), K. Fukushima (Osaka University), A. Gjedde (Montreal Neurological Institute), S. Grillner (Nobel Institute for Neurophysiology, Stockholm), T. Gulliksen (University of Oslo), S. Hanson (SIEMENS Research), J. Hertz (Nordita), D. Horn (Tel Aviv University), J. Hounsgaard (University of Copenhagen), B. A. Huberman (XEROX PARC), L. B. Ioffe (Rutgers University), P. I. M. Johannesma (Katholieke Univ. Nijmegen), M. Jordan (MIT), G. Josin (Neural Systems Inc.), I. Kanter (Bar-Ilan University, Israel), J. H. Kaas (Vanderbilt University), A. Lansner (Royal Institute of Technology, Stockholm), A. Lapedes (Los Alamos), B. McWhinney (Carnegie-Mellon University), J. Moody (Oregon Graduate Institute, USA), A. F. Murray (University of Edinburgh), J. P. Nadal (Ecole Normale Superieure, Paris), N. Parga (Centro Atomico Bariloche, Argentina),
S. Patarnello (IBM ECSEC, Italy), P. Peretto (Centre d'Etudes Nucleaires de Grenoble), C. Peterson (University of Lund), K. Plunkett (University of Oxford), S. A. Solla (AT&T Bell Labs), A. Weigend (University of Colorado), M. A. Virasoro (University of Rome), D. Zipser (University of California, San Diego).

----------------------------------

CALL FOR PAPERS

Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from:

The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461

or

World Scientific Publishing Co. Inc.
Suite 1B, 1060 Main Street
River Edge
New Jersey 07661
USA
Telephone: (1)201-487-9655

or

World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone: (65)382-5663

-----------------------------------------------------------------------
End Message

From davec at cogs.susx.ac.uk Fri Mar 4 08:59:37 1994
From: davec at cogs.susx.ac.uk (Dave Cliff)
Date: Fri, 4 Mar 1994 13:59:37 +0000 (GMT)
Subject: Postdoc at Sussex
Message-ID:

Sussex Centre for Neuroscience

RESEARCH FELLOW

Applications are invited for a postdoctoral research fellow to investigate visuo-spatial memories and navigation in hymenoptera (ants, bees, and wasps). Candidates with interest and experience in vision, robotics and computational modelling are especially welcome. The post will be for two years in the first instance, with a possibility of renewal. Starting salary will be in the range of UK pounds 12,828 - 20,442.

Enquiries and applications (curriculum vitae, one or two sample publications, and the names and addresses of at least two referees) should be addressed to:

Dr T. S. Collett
Sussex Centre for Neuroscience
School of Biological Sciences
Falmer, Brighton BN1 9QG, U.K.
Tel: +44 (0)273 678507
Fax: +44 (0)273-678535

Closing date: 1 May 1994.

From venu at pixel.mipg.upenn.edu Fri Mar 4 15:40:20 1994
From: venu at pixel.mipg.upenn.edu (Venugopal)
Date: Fri, 4 Mar 94 15:40:20 EST
Subject: Alopex: Pre-print available on FTP
Message-ID: <9403042040.AA05883@pixel.mipg.upenn.edu>

A pre-print of the following paper (which is to appear in NEURAL COMPUTATION, vol. 4, 1994, pp. 467-488) is available by ftp from the neuroprose archive:

ALOPEX: A CORRELATION BASED LEARNING ALGORITHM FOR FEEDFORWARD AND RECURRENT NEURAL NETWORKS

K. P. Unnikrishnan
GM Research Laboratories, Warren, MI
AI Laboratory, Univ. of Michigan, Ann Arbor, MI

K. P. Venugopal
Medical Image Processing Group
University of Pennsylvania, Philadelphia, PA

Abstract: We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about the transfer functions of individual neurons, and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feed-forward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and it uses a `temperature' parameter in a manner similar to that in simulated annealing.
A heuristic `annealing schedule' is presented which is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feed-forward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and the advantages of appropriate error measures are illustrated using a variety of problems.

-----------------------------------------

The file at archive.cis.ohio-state.edu is venugopal.alopex.ps.Z (472K compressed). To ftp the file:

unix> ftp archive.cis.ohio-state.edu
Name (archive.cis.ohio-state.edu:xxxxx): anonymous
Password: your address
ftp> cd pub/neuroprose
ftp> binary
ftp> mget venugopal.alopex.ps.Z

Uncompress the file after transferring it to your machine, before printing.

-----------------------------------------------------------

K. P. Venugopal
Medical Image Processing Group
University of Pennsylvania
423 Blockley Hall
Philadelphia, PA 19104
(venu at pixel.mipg.upenn.edu)
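The abstract's correlation-plus-temperature mechanism can be sketched in a few lines. What follows is a hedged reconstruction of the commonly described form of the rule, not code from the paper: every weight moves by a fixed +/-step simultaneously, and the probability that a weight repeats its previous move is a Boltzmann function, at temperature T, of the local correlation between that move and the observed change in the global error:

import math, random

def alopex_step(weights, prev_steps, delta_error, T=0.1):
    # weights, prev_steps: lists of floats; delta_error = E_new - E_old.
    new_steps = []
    for s in prev_steps:
        c = s * delta_error                    # c < 0: the last move helped
        x = max(min(c / T, 500.0), -500.0)     # clamp to avoid exp overflow
        p_same = 1.0 / (1.0 + math.exp(x))     # > 1/2 when the move helped
        new_steps.append(s if random.random() < p_same else -s)
    return [w + s for w, s in zip(weights, new_steps)], new_steps

A caller would evaluate the error once per step, pass in delta_error, and anneal T over time; the paper's heuristic annealing schedule (reportedly based on recent correlation magnitudes) is not reproduced here.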
From trevor at research.att.com Fri Mar 4 16:28:00 1994
From: trevor at research.att.com (trevor@research.att.com)
Date: Fri, 4 Mar 94 16:28 EST
Subject: Report on Gaussian Mixtures available
Message-ID:

The following report is available via anonymous ftp.

FTP-host: research.att.com
FTP-filename: /dist/trevor/mda.ps.Z

Software for fitting these models in S will soon be available from the S archive at statlib at lib.stat.cmu.edu.

Discriminant Analysis by Gaussian Mixtures

Trevor Hastie and Robert Tibshirani

Fisher-Rao linear discriminant analysis (LDA) is a valuable tool for multigroup classification. LDA is equivalent to maximum likelihood classification assuming Gaussian distributions for each class. In this paper, we fit Gaussian mixtures to each class to facilitate effective classification in non-normal settings, especially when the classes are clustered. Low dimensional views are an important by-product of LDA---our new techniques inherit this feature. We are able to control the within-class spread of the subclass centers relative to the between-class spread. Our technique for fitting these models permits a natural blend with nonparametric versions of LDA.

To retrieve from research.att.com:

unix> ftp research.att.com
Name (research.att.com:trevor): anonymous
Password: (use your email address)
ftp> cd dist/trevor
ftp> binary
ftp> get mda.ps.Z
ftp> quit
unix> uncompress mda.ps.Z
unix> lpr mda.ps

From jras at uned.es Fri Mar 4 20:55:07 1994
From: jras at uned.es (Jose Ramon Alvarez Sanchez)
Date: Fri, 4 Mar 1994 20:55:07 UTC+0100
Subject: IWANN'95 Call for Papers
Message-ID: <107*/S=jras/O=uned/PRMD=iris/ADMD=mensatex/C=es/@MHS>

INTERNATIONAL WORKSHOP ON ARTIFICIAL NEURAL NETWORKS

IWANN'95

Preliminary Announcement and First Call for Papers

Malaga - Costa del Sol, Spain
June 7 - 9, 1995

SPONSORED BY
IFIP (Working Group in Neural Computer Systems, WG10.6)
Spanish RIG IEEE Neural Networks Council
UK&RI communication chapter of IEEE
Spanish Computer Society chapter of IEEE
AEIA (IEEE affiliate society)

ORGANISED BY
Universidad de Malaga
UNED (Madrid)

IWANN'95, the third International Workshop on Artificial Neural Networks, will take place on the Spanish "Costa del Sol" (Malaga) from 7 to 9 June, 1995. This biennial meeting, with a focus on biological models and new computing paradigms, was first held in Granada (1991) and Sitges (1993), with a growing number of participants from more than 20 countries and with high-quality papers published by Springer-Verlag (LNCS 540 and 686).

SCOPE

From the computational viewpoint, standard neural network paradigms are nearly exhausted and some fresh air is needed. In this workshop, while remaining with the powerful roots of neural computation (modularity, autonomy, distributed computation and self-programming via supervised or non-supervised learning), focus is placed on biological modeling, the search for theory and design methodologies, and the bridge between connectionism and symbolic computation. IWANN's main objective is to offer an interdisciplinary forum for scientists and engineers from neurology, computer science, artificial intelligence, electronics, cognitive science and applied domains, fostering brainstorming and innovative formulations of natural and artificial neural computation. It is the deep feeling of the IWANN organizers that this more complex, biologically inspired, and theoretically and methodologically supported approach will also provide us with more powerful tools for applied domains.

Contributions on the following or related topics are welcome.

TOPICS

1. Neuroscience: Principles, methodologies in brain research, modeling and simulation, central and peripheral neural coding, dendro-dendritic nets, local circuits, anatomical and physiological organizations, plasticity, learning and memory in natural neural nets, models of development and evolution, specific circuits in sensorial and motor pathways, networks in cerebral cortex.

2. Computational Models of Neurons and Neural Nets: Continuous (linear, high order, recurrent), logic, sequential, inferential (object oriented, production rules, frames), probabilistic, Bayesian, fuzzy and chaotic models, hybrid formulations, massive computation and learning enabling structures for all these formulations.

3. Organizational Principles: The living organization, deterministic network dynamics, autopoiesis, self-organization, cooperative processes and emergent computation, synergetics, evolutive optimization and genetic algorithms.

4. Learning: Inspirations from the biological mechanisms of learning, supervised and unsupervised strategies, local self-programming, continuous learning, evolutive algorithms, symbolic-subsymbolic formulations.

5. Cognitive Science and AI: Neural networks for knowledge acquisition, multisensorial integration, perception, knowledge-based neural nets, inductive, deductive and abductive reasoning, memory mechanisms, natural language.

6. Neurosimulators: Languages, environments, parallelization, modularity, extensibility and benchmarks.

7. Hardware Implementation: VLSI, parallel architectures, neurochips, preprocessing networks, neurodevices, FPGA's, benchmarks, optical and other technologies.

8. Neural Networks for Perception: Low level processing, segmentation, feature extraction, pattern recognition, adaptive filtering, noise reduction, texture, motion analysis, hybrid symbolic-neural architectures for artificial vision.

9. Neural Networks for Communications Systems: Modems and codecs, network management, digital communications.

10. Neural Networks for Control and Robotics: Systems identification, motion planning and control, adaptive and predictive control, navigation, real time applications.

LOCATION

Malaga - Costa del Sol, June 7-9, 1995.
Malaga, capital of the Costa del Sol, is strategically located on the southern coast of Spain. It is a genuine crossroads of communication and culture. Malaga is well-known for its history (Cathedral, historic downtown, Arabian citadel, Roman amphitheatre, ...) and excellent beaches. Malaga, with many modern hotels, is easily reached by car or plane; its international airport has direct flights to all major European capitals, to America, and to some destinations on other continents. LANGUAGE English will be the official language of IWANN'95. Simultaneous translation will not be provided. CALL FOR PAPERS The Programme Committee seeks original papers on the above-mentioned Topics. Authors should pay special attention to the explanation of the theoretical and technical choices involved, point out possible limitations, and describe the current state of their work. Authors must take into account the following: INSTRUCTIONS TO AUTHORS Authors must submit four copies of full papers, not exceeding 8 pages in DIN-A4 format. The heading should be centered and include: . Title in capitals. . Name(s) of author(s). . Address(es) of author(s). . A 10-line abstract. Three blank lines should be left between each of the above items, and four between the heading and the body of the paper; use 1.6 cm left, right, top and bottom margins, single spacing, and do not exceed the 8-page limit. In addition, one sheet should be attached including the following information: . Title and author(s) name(s). . A list of five keywords. . A reference to the Topics the paper relates to. . Postal address, phone and fax numbers and E-mail (if available). All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length (Springer-Verlag proceedings are expected). IMPORTANT DATES Second Call for Papers September, 1994 Final date for submission January 15, 1995 Notification of acceptance March 15, 1995 Workshop June 7-9, 1995 CONTRIBUTIONS MUST BE SENT TO: Prof. Jose Mira Dpto. Informatica y Automatica UNED Senda del Rey, s/n 28040 MADRID (Spain) Phone: +34 (1) 398-7155 Fax: +34 (1) 398-6697 Email: jose.mira at uned.es GENERAL CHAIRMAN Alberto Prieto Unv. de Granada (E) ORGANIZATION COMMITTEE Joan Cabestany Unv. Pltca. de Catalunya (E) Chairman Senen Barro Unv. de Santiago de Compostela (E) Trevor Clarkson King's College London (UK) Dante Del Corso Politecnico de Torino (I) Ana Delgado UNED. Madrid (E) Karl Goser Unv. Dortmund (G) Jeanny Herault INPG Grenoble (F) K.Nicholas Leibovic SUNY at Buffalo (U.S.A.) Jose Mira UNED. Madrid (E) Federico Moran Unv. Complutense. Madrid (E) Stanislaw Osowski Tech. Unv. Warsaw (Po) Conrad Perez Unv. de Barcelona (E) Francisco Sandoval Unv. de Malaga (E) Juan A. Siguenza Unv. Autonoma de Madrid (E) Elena Valderrama CNM-Unv. Autonoma de Barcelona (E) Marley Vellasco Pont. U. Catolica do Rio de Janeiro (Br) Michel Verleysen Unv. Catholique de Louvain (B) LOCAL COMMITTEE Francisco Sandoval Unv. de Malaga (E) Chairman Antonio Diaz Unv. de Malaga (E) Gonzalo Joya Unv. de Malaga (E) Francisco Vico Unv. de Malaga (E) TENTATIVE PROGRAMME COMMITTEE Jose Mira UNED. Madrid (E) Chairman Carlos Acuna C. Unv. Santiago de Compostela (E) Joshua Alspector Bellcore. (USA) Sanjeev B.Ahuja Nielsen A.I. Research & Development. Bannockburn (USA) Igor Aleksander Imperial College. London (UK) Luis B. Almeida INESC. Lisboa (P) Shun-ichi Amari Unv. Tokyo (Jp) Michael Arbib Unv.
of Southern California (USA) Xavier Arreguit CSEM SA (CH) Francois Blayo LERI-EERIE. Nimes (F) Colin Campbell University of Bristol (UK) Jordi Carrabina CNM- Universidad Autonoma de Barcelona (E) Francisco Castillo Unv. Pltca. de Catalunya (E) Antoni Catala Unv. Pltca. de Catalunya (E) Gloria Cembrano Instituto de Cibernetica. CSIC. Barcelona (E) Leon Chua Unv. California, Berkeley (USA) Michael Cosnard LIP. Ecole Normale Superieure de Lyon (F) Marie Cottrell Unv. Paris I (F) Dante A. Couto B. Instituto de Informatica (Br) Gerard Dreyfus ESPCI. Paris (F) F.K. Fogelman Soulie Mimetics. Chatenay Malabry (F) J. Simoes da Fonseca Unv. Lisboa (P) Kunihiko Fukushima Unv. Osaka (Jp) Hans Peter Graf AT&T Bell Laboratories, New Jersey (USA) Francesco Gregoretti Politecnico di Torino (I) Karl E. Grosspietsch Mathematik und Datenverarbeitung (GMD) St. Augustin (D) Mohamad H. Hassoun Wayne State University (USA) Jaap Hoekstra Delft University of Technology (NL) P.T.W. Hudson Leiden University (NL) Jose Luis Huertas CNM- Universidad de Sevilla (E) Paul G.A. Jespers Universite Catholique de Louvain (B) Simon Jones IERI Loughborough University of Technology (UK) Christian Jutten INPG Grenoble (F) H. Klar Technische Universitat Berlin (D) C.Koch CalTech. (USA) Teuvo Kohonen Helsinki Unv. of Techn. (Fin) Michael D. Lemmon University of Notre Dame. Notre Dame (USA) K. Nicholas Leibovic SUNY at Buffalo, NY (USA) Panos A. Ligomenides Unv. of Maryland (USA) Javier Lopez Aligue Unv. de Extremadura. (E) Pierre Marchal CSEM SA (CH) Anthony N. Michel University of Notre Dame. Notre Dame (USA) Roberto Moreno Unv. Las Palmas Gran Canaria (E) Jean Daniel Nicoud EPFL (CH) Josef A. Nossek Tech. Univ. of Munich (D) Julio Ortega Unv. de Granada (E) Marco Pacheco Pont. U. Catolica do Rio de Janeiro (Br) Conrad Perez Unv. de Barcelona (E) Francisco J. Pelayo Unv. de Granada (E) Franz Pichler Johannes Kepler Univ. (A) Ulrich Ramacher Siemens AG. Munich (D) J.Ramirez Paradigma C.A. Caracas (V) Leonardo Reyneri Unv. di Pisa (I) Tamas Roska Hungarian Academy of Science. Budapest (H) Peter A. Rounce Unv. College London (UK) V.B. David Sanchez German Aerospace Research Establishment. Wessling (G) E. Sanchez-Sinencio Texas A&M University (USA) David Sherrington University of Oxford (UK) Renato Stefanelli Politecnico di Milano (I) T.J. Stonham Brunel-University of West London (UK) John G. Taylor King's College London (UK) Carme Torras Instituto de Cibernetica. CSIC. Barcelona (E) Philip Treleaven Unv. College London (UK) Eric Vittoz CSEM SA (CH) Michel Weinfeld Ecole Polytechnique Paris (F) Bernard Widrow Stanford University CA (USA) R.Yager Iona College NY (USA) INFORMATION FORM to be returned as soon as possible to: Prof. F. Sandoval IWANN'95 Dept. Tecnologia Electronica Universidad de Malaga Pza. El Ejido, s/n E-29013 Malaga SPAIN Phone: +34.5.213.13.52 Fax: +34.5.213.14.47 E-mail: iwann95 at ctima.uma.es ---------------------------------------------------------------- ___ I wish to attend the Workshop ___ I intend to submit a paper Tentative title: ................................................. ................................................................. Author (s): ...................................................... ................................................................. Related Topics: .................................................. ................................................................. Last name: .......................................................
First name: ...................................................... Company/Organization: ............................................ ................................................................. ................................................................. Address: ......................................................... ................................................................. ................................................................. ................................................................. Postal code/Zip code: ............................................ City: ............................................................ State/Country: ................................................... Phone: ........................................................... Fax: ............................................................ E-mail: ..........................................................  From kolen-j at cis.ohio-state.edu Sun Mar 6 10:39:14 1994 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Sun, 6 Mar 1994 10:39:14 -0500 Subject: Overfitting in learning discrete patterns In-Reply-To: <9403041300.AA05822@orchidee.dinsunnet> (message from Fabien Moutarde on Fri, 4 Mar 94 14:00:49 +0100) Message-ID: <199403061539.KAA09725@pons.cis.ohio-state.edu> Fabien.Moutarde at aar.alcatel-alsthom.fr wrote: I would like to know how the weights were initialized. Were they taken from a uniform distribution in some fixed interval, whatever the network architecture? Which interval? You are asking the right questions. Are you aware of (Kolen & Pollack, 1990), which explores the effects of initial weights on backpropagation? if you begin learning with some neurons already in their nonlinear regime somewhere in learning space, then the initial function realized by the network is not smooth, and the irregularities are likely to remain between learning points and to produce overfitting. This implies that the bigger the network, the lower the initial weights should be. The last sentence does not necessarily follow from the previous one. The magnitude of the weights is less important than the magnitude of the *net input* reaching the unit. For instance, if the network operates in an environment in which there are between-unit correlations in the input, then large-magnitude weights can effectively become small-magnitude weights from the perspective of the nonlinear squashing function. In this situation, I would predict that large weights actually help in the distribution of error to the previous layer. John Kolen References J. F. Kolen and J. B. Pollack, 1990. Backpropagation is Sensitive to Initial Conditions. _Complex Systems_. 4:3. pp. 269-280. Available from neuroprose as kolen.bpsic.*.ps.Z (8 files).  From robbie at psych.rochester.edu Sun Mar 6 16:21:17 1994 From: robbie at psych.rochester.edu (Robbie Jacobs) Date: Sun, 6 Mar 1994 16:21:17 -0500 Subject: postdoc position(s) Message-ID: <199403062121.QAA17782@biku.psych.rochester.edu> Postdoctoral Fellowship(s) Available The Center for Sciences of Language at the University of Rochester anticipates having one and possibly two NIH-funded post-doctoral fellowships available for the 1994-95 academic year. If two positions are available, preference for one of the positions will be given to candidates who already have the Ph.D. and can begin before July 1, 1994. The appointment will be for one year with the possibility of renewal for a second year.
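Kolen's point above about net input versus raw weight magnitude is easy to check numerically; the toy construction below is ours, not taken from the cited paper. With fully correlated inputs the net input collapses onto the sum of the weights, so large-magnitude weights whose sum is near zero leave a tanh unit in its linear regime, while the very same weights saturate the unit on independent inputs.

import numpy as np

rng = np.random.default_rng(1)
n = 100
w = rng.choice([-5.0, 5.0], size=n)    # large-magnitude weights...
w -= w.mean()                           # ...whose sum is (essentially) zero

x_indep = rng.standard_normal((10000, n))                        # uncorrelated inputs
x_corr = np.repeat(rng.standard_normal((10000, 1)), n, axis=1)   # identical, fully correlated inputs

for name, x in (("independent", x_indep), ("correlated", x_corr)):
    net = x @ w
    print(f"{name:12s} mean|net| = {np.abs(net).mean():7.3f}   "
          f"mean|tanh(net)| = {np.abs(np.tanh(net)).mean():.3f}")
# independent inputs: |net| is around 40-50, so the unit is pinned near +/-1
# correlated inputs:  net is ~0, so the unit stays in its linear regime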
The Center brings together faculty and students with interests in spoken and signed languages from the departments of Linguistics, Computer Science, Psychology, and Philosophy; and the interdisciplinary programs in Cognitive Science and Neuroscience. We encourage applicants from any of these disciplines who have expertise in any area of natural language. We are particularly interested in post-doctoral fellows who want to contribute to an interdisciplinary community. Applications should be sent to Michael K. Tanenhaus, University of Rochester, Department of Psychology, Meliora Hall, Rochester, NY 14627. Include a vita, sample reprints and/or pre-prints, a statement of research and training interests, and arrange for letters of reference from at least three referees. In order to guarantee full consideration, applications should be received by April 1. The University of Rochester is an equal opportunity employer. We encourage applications from women and from minorities.  From lautrup at connect.nbi.dk Mon Mar 7 10:49:27 1994 From: lautrup at connect.nbi.dk (Benny Lautrup) Date: Mon, 7 Mar 94 10:49:27 MET Subject: preprint (fwd) Message-ID: The preprint announced below has not yet arrived in archive.cis.ohio-state.edu It may be retrieved through anonymous ftp from connect.nbi.dk (129.142.100.17) in the neuroprose directory. > > The following preprint is now available: > > FTP-host: archive.cis.ohio-state.edu > FTP-file: pub/neuroprose/hertz.nonlin.ps.Z > > Authors: J. Hertz, A. Krogh, B. Lautrup and T. Lehmann > > Title: Non-Linear Back-propagation: Doing Back-Propagation without > Derivatives of the Activation Function. > > Size: 13 pages > > Abstract: > > The conventional linear back-propagation algorithm is replaced by a non-linear > version, which avoids the necessity for calculating the derivative of the > activation function. This may be exploited in hardware realizations of neural > processors. In this paper we derive the non-linear back-propagation algorithms > in the framework of recurrent back-propagation and present some numerical > simulations of feed-forward networks on the NetTalk problem. A discussion of > implementation in analog VLSI electronics concludes the paper. > -- Benny Lautrup, professor Computational Neural Network Center (CONNECT) Niels Bohr Institute Blegdamsvej 17 2100 Copenhagen Denmark Telephone: +45-3532-5200 Direct: +45-3532-5358 Fax: +45-3142-1016 e-mail: lautrup at connect.nbi.dk  From PREFENES at NEPTUNE.FAC.CS.CMU.EDU Mon Mar 7 13:04:03 1994 From: PREFENES at NEPTUNE.FAC.CS.CMU.EDU (Paul Refenes) Date: Mon, 7 Mar 1994 13:04:03 BST Subject: Paper Pre-print Message-ID: The following pre-print is available. Please send requests to H.tracey at lbs.lon.ac.uk. Paper copy only. MEASURING THE PERFORMANCE OF NEURAL NETWORKS IN MODERN PORTFOLIO MANAGEMENT: TESTING STRATEGIES AND METRICS A. N. REFENES Department of Decision Science London Business School Sussex Place, Regents Park London NW1 4SA, UK ABSTRACT Neural networks have attracted much interest in financial engineering and modern portfolio management with many researchers claiming that they signal the beginning of a new era in the evolution of forecasting and decision support systems. Various performance figures are being quoted to support these claims but there is rarely a comprehensive testing strategy to quantify the performance of neural networks in ways that are meaningful to the practitioner in the field. 
In the context of asset management, some of the quoted figures could be at best misleading, and others are often irrelevant. In this paper we review some well-known metrics for measuring estimator performance both in absolute and relative terms, measuring profitability of the final objective function, and analysing the characteristics of the equity curves.  From gfh at eng.cam.ac.uk Mon Mar 7 10:59:48 1994 From: gfh at eng.cam.ac.uk (gfh@eng.cam.ac.uk) Date: Mon, 7 Mar 94 15:59:48 GMT Subject: Technical report available by anonymous ftp Message-ID: <6641.9403071559@atom.eng.cam.ac.uk> The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department. EXPERIMENTS WITH SIMPLE HEBBIAN-BASED LEARNING RULES IN PATTERN-CLASSIFICATION TASKS George F. Harpur and Richard W. Prager Technical Report CUED/F-INFENG/TR168 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract This report presents a neural network architecture which performs pattern classification using a simple form of learning based on the Hebb rule. The work was motivated by the desire to decrease computational complexity and to maintain a greater degree of biological plausibility than most other networks designed to perform similar tasks. A method of pre-processing the inputs to provide a distributed representation is described. A scheme for increasing the power of the network using a layer of `feature detectors' is introduced: these use an unsupervised competitive learning scheme, again based on Hebbian learning. Simulation results from testing the networks on two `real-world' problems are presented, and compared to those produced by other types of neural network. ************************ How to obtain a copy ************************ Via FTP: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get harpur_tr168.ps.Z ftp> quit unix> uncompress harpur_tr168.ps.Z unix> lpr harpur_tr168.ps (or however you print PostScript)  From dwm at signal.dra.hmg.gb Mon Mar 7 05:36:25 1994 From: dwm at signal.dra.hmg.gb (Daniel McMichael) Date: Mon, 7 Mar 1994 10:36:25 GMT Subject: IEE ANN 95 - CALL FOR PAPERS Message-ID: Fourth International Conference on Artificial Neural Networks Churchill College, Cambridge, UK 26-28 June 1995 ******************************************************************** ******************* Call for papers ************************ ******************************************************************** Objective This International Conference will produce an up-to-date report on the current state of research in the field of artificial neural networks. This will include theory, fundamental structures, learning algorithms, implementation, vision, speech, robotics, control, medicine and finance. The conference seeks original contributions spanning the entire field of neural networks. Suggestions for possible topics include: Architectures and learning algorithms: Theory and design of neural networks, comparison with classical techniques. Applications and industrial systems: Vision and image processing, speech and language processing, communications systems, biomedical systems, robotics and control, financial and business systems.
Implementations: hardware implementations (analogue and digital), VLSI devices or systems, optoelectronics Contributions Papers will be selected on the basis of wide audience appeal, ease of understanding and potential stimulation of broad-ranging discussion. The quality of preparation and presentation expected will be very high. In addition to lecture theatre presentations, a selection of papers will be presented in poster sessions. This has become an increasingly popular method of presentation as it offers good opportunities for one-to-one discussion. If you would prefer to present your paper in this way, please indicate so when sending in your abstract. In addition, please indicate which subject area (see list below) the abstract should be included in. Submissions should be in the form of an extended abstract up to 4 pages, to be received by the secretariat on or before 30 November 1994. The abstract must indicate clearly the novelty of the proposed contribution. Authors whose abstracts are selected will be required to provide a typescript of a maximum of 6 pages by 31 March 1995. Topic areas 1. Vision 2. Speech 3. Control and robotics 4. Biomedical 5. Financial and business 6. Signal processing 7. Radar/sonar 8. Data fusion 9. Analogue 10. Digital 11. Optical 12. Learning algorithms 13. Network architectures 14. Functional approximations 15. Statistical Methods 16. None of the above Deadlines Intending authors should note the following deadline dates: Receipt of extended abstract 30 November 1994 Notification of acceptance Late January 1995 Receipt of full typescript 31 March 1995 Bursary scheme: limited financial support may be available to overseas participants presenting papers at this conference. Please indicate on the reply slip if you wish to receive further information. Organising Committee Dr C J Satchwell (Chairman), Neural Statistics Ltd Prof C Harris, Southampton University Prof D Lowe, Aston University Dr D McMichael, Defence Research Agency Dr M Niranjan, Cambridge University Dr P Refenes, London Business School Dr W A Wright, British Aerospace Corresponding Members L Giles, NEC, USA R Goodman, Caltech, USA P Lieshout, Advanced Information Processing, The Netherlands Organisers The Convention is being organised by the Electronics Division of the Institution of Electrical Engineers For further information and address for submission of abstracts: ANN95 Secretariat IEE Conference Department Savoy Place London WC2R 0BL, UK Tel: 44(0)71 344 5478/5477 Fax: 44(0)71 497 3633 Email: sgriffiths at iee.org.uk ************************************************************************** !!!!!!!!!!!!!!!!! do NOT reply to dwm at signal.dra.hmg.gb !!!!!!!!!!!!!!!!!! **************************************************************************  From tedwards at src.umd.edu Mon Mar 7 15:34:31 1994 From: tedwards at src.umd.edu (Thomas Grant Edwards) Date: Mon, 7 Mar 1994 15:34:31 -0500 (EST) Subject: VLSI Phase-locking architecture NIPS 6 pre-print Message-ID: **DO NOT FORWARD TO OTHER GROUPS** The file andreou.vlsi-phase-lock.ps.Z is now available for downloading from the Neuroprose repository: VLSI Phase Locking Architectures for Feature Linking in Multiple Target Tracking Systems Andreas G. Andreou Thomas G. Edwards The Johns Hopkins University University of Maryland ABSTRACT: Recent physiological research has shown that synchronization of oscillatory responses in striate cortex may code for relationships between visual features of objects.
A VLSI circuit has been designed to provide rapid phase-locking synchronization of multiple oscillators to allow for further exploration of this neural mechanism. By exploiting the intrinsic random transistor mismatch of devices operated in subthreshold, large groups of phase-locked oscillators can be readily partitioned into smaller phase-locked groups. A multiple target tracker for binary images is described utilizing this phase-locking architecture. A VLSI chip has been fabricated and tested to verify the architecture. The chip employs Pulse Amplitude Modulation (PAM) to encode the output at the periphery of the system. (NIPS 6 Pre-Print)  From risto at cs.utexas.edu Mon Mar 7 23:43:02 1994 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Mon, 7 Mar 94 22:43:02 -0600 Subject: Papers available on connectionist NLP, neuro-evolution/Othello Message-ID: <9403080443.AA26396@cascais.cs.utexas.edu> The following papers on - processing complex sentences, - disambiguation in distributed parsing networks, - learning German verb inflections, and - evolving networks to play Othello are available by anonymous ftp from our archive site at cs.utexas.edu:pub/neural-nets/papers. -- Risto Miikkulainen ------------------------------------------------------------------------- miikkulainen.subsymbolic-caseroles.ps.Z (21 pages) SUBSYMBOLIC CASE-ROLE ANALYSIS OF SENTENCES WITH EMBEDDED CLAUSES Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin. Technical Report AI93-202, July 1993. A distributed neural network model called SPEC for processing sentences with recursive relative clauses is described. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules. The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. SPEC exhibits plausible memory degradation as the depth of the center embeddings increases, its memory is primed by earlier constituents, and its performance is aided by semantic constraints between the constituents. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system. This way, in contrast to earlier subsymbolic systems, parsing is modeled as a controlled high-level process rather than one based on automatic reflex responses. ------------------------------------------------------------------------- mayberry.disambiguation.ps.Z (10 pages) LEXICAL DISAMBIGUATION BASED ON DISTRIBUTED REPRESENTATIONS OF CONTEXT FREQUENCY Marshall R. Mayberry, III, and Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin. Technical Report AI94-217, February 1994. A model for lexical disambiguation is presented that is based on combining the frequencies of past contexts of ambiguous words. The frequencies are encoded in the word representations and define the words' semantics. A Simple Recurrent Network (SRN) parser combines the context frequencies one word at a time, always producing the most likely interpretation of the current sentence at its output. This disambiguation process is most striking when the interpretation involves semantic flipping, that is, an alternation between two opposing meanings as more words are read in.
The sense of throwing a ball alternates between dance and baseball as indicators such as the agent, location, and recipient are input. The SRN parser demonstrates how the context frequencies are dynamically combined to determine the interpretation of such sentences. We hypothesize that other aspects of ambiguity resolution are based on similar mechanisms as well, and can be naturally approached from the distributed connectionist viewpoint. ------------------------------------------------------------------------- westermann.inflections.ps.Z (9 pages) VERB INFLECTIONS IN GERMAN CHILD LANGUAGE: A CONNECTIONIST ACCOUNT Gert Westermann(1) and Risto Miikkulainen(2) (1) Department of Computer Science, Technical University of Braunschweig. (2) Department of Computer Sciences, The University of Texas at Austin. Technical Report AI94-216, February 1994. The emerging function of verb inflections in German language acquisition is modeled with a connectionist network. A network that is initially presented only with a semantic representation of sentences uses the inflectional verb ending -t to mark those sentences that are low in transitivity, whereas all other verb endings occur randomly. This behavior matches an early stage in German language acquisition where verb endings encode a similar semantic rather than a grammatical function. When information about the surface structure of the sentences is added to the input data, the network learns to use the correct verb inflections in a process very similar to children's learning. This second phase is facilitated by the semantic phase, suggesting that there is no shift from semantic to grammatical encoding, but rather an extension of the initial semantic encoding to include grammatical information. This can be seen as evidence for the strong version of the functionalist hypothesis of language acquisition. ------------------------------------------------------------------------- moriarty.othello.ps.Z (6 pages) EVOLVING COMPLEX OTHELLO STRATEGIES USING MARKER-BASED GENETIC ENCODING OF NEURAL NETWORKS David E. Moriarty and Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712. Technical Report AI93-206, September 1993. A system based on artificial evolution of neural networks for developing new game playing strategies is presented. The system uses marker-based genes to encode nodes in a neural network. The game-playing networks were forced to evolve sophisticated strategies in Othello to compete first with a random mover and then with an alpha-beta search program. Without any direction, the networks discovered first the standard positional strategy, and subsequently the mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolution can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment. [ see also moriarty.focus.ps.Z: "Evolving Neural Networks to Focus Minimax Search" ]  From risto at cs.utexas.edu Mon Mar 7 23:44:41 1994 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Mon, 7 Mar 94 22:44:41 -0600 Subject: ..and episodic memory, cortical self-organization, schema-based vision Message-ID: <9403080444.AA26400@cascais.cs.utexas.edu> The following papers on - the capacity of episodic memory, - self-organization in the primary visual cortex, and - schema-based scene analysis are available by anonymous ftp from cs.utexas.edu:pub/neural-nets/papers as well. As always, comments are welcome.
-- Risto Miikkulainen ------------------------------------------------------------------------- moll.convergence-zone.ps.Z (6 pages) THE CAPACITY OF CONVERGENCE-ZONE EPISODIC MEMORY Mark Moll(1), Risto Miikkulainen(2), Jonathan Abbey(3) (1) Department of Computer Science, University of Twente, the Netherlands. (2) Department of Computer Sciences, The University of Texas at Austin. (3) Applied Research Laboratories, Austin, TX. Technical Report AI93-210, December 1993. Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. This paper presents a computational model of episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern which in turn reactivates the entire stored pattern. A worst-case analysis shows that with realistic-size layers, the memory capacity of the model is several times larger than the number of units in the model, and could account for the large capacity of human episodic memory. ------------------------------------------------------------------------- sirosh.unified.ps.Z (8 pages) A UNIFIED NEURAL NETWORK MODEL FOR THE SELF-ORGANIZATION OF TOPOGRAPHIC RECEPTIVE FIELDS AND LATERAL INTERACTION Joseph Sirosh and Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin. Technical Report AI94-213, January 1994. A self-organizing neural network model for the simultaneous development of topographic receptive fields and lateral interactions in cortical maps is presented. Both afferent and lateral connections adapt by the same Hebbian mechanism in a purely local and unsupervised learning process. Afferent input weights of each neuron self-organize into hill-shaped profiles, receptive fields organize topographically across the network, and unique lateral interaction profiles develop for each neuron. The resulting self-organized structure remains in a dynamic and continuously-adapting equilibrium with the input. The model can be seen as a generalization of previous self-organizing models of the visual cortex, and provides a general computational framework for experiments on receptive field development and cortical plasticity. The model also serves to point out general limits on activity-dependent self-organization: when multiple inputs are presented simultaneously, the receptive field centers need to be initially ordered for stable self-organization to occur. [see also sirosh.cooperative-selforganization.tar: "Cooperative Self- Organization of Afferent and Lateral Connections in Cortical Maps" ] ------------------------------------------------------------------------- leow.analyzing.ps.Z (11 pages) ANALYZING SCENES IN A NEURAL NETWORK MODEL OF SCHEMA-BASED VISION Wee Kheng Leow, Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin. Technical Report AI94-214, February 1994. A novel approach to object recognition and scene analysis based on neural network representation of visual schemas is described. Given an input scene, the VISOR system focuses attention successively at each component, and the schema representations cooperate and compete to match the inputs. 
The schema hierarchy is learned from examples through unsupervised adaptation and reinforcement learning. VISOR learns that some objects are more important than others in identifying a scene, and that the importance of spatial relations varies depending on the scene. It learns three types of visual schemas: (1) rigid spatial layouts of components used primarily for describing objects; (2) collections of components located anywhere in the scene for recognizing certain man-made scenes (such as a dining table); and (3) rough spatial layouts of regions of uniform texture and no specific shape that are often found in natural scenes (such as a road scene). Compared to traditional rule-based systems, VISOR shows remarkable robustness of recognition, and is able to indicate the confidence of its analysis as the inputs differ increasingly from the schemas. With such properties, VISOR is a promising first step towards a general vision system that can be used in different applications after learning the application-specific schemas. [ see also leow.priming.ps.Z: "Priming, Perceptual Reversal, and Circular Reaction in A Neural Network Model of Schema-Based Vision" ]  From terry at salk.edu Wed Mar 9 03:38:36 1994 From: terry at salk.edu (Terry Sejnowski) Date: Wed, 9 Mar 94 00:38:36 PST Subject: FAX for Telluride Workshops Message-ID: <9403090838.AA23081@salk.edu> Two workshops to be held in Telluride, Colorado: NEUROMORPHIC ANALOG VLSI SYSTEMS Sunday, July 3 to Saturday, July 9, 1994 SYSTEMS LEVEL MODELS OF VISUAL BEHAVIOR Sunday, July 10 to Saturday, July 16, 1994 Complete applications should be sent by March 10, 1994 to: Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road La Jolla, CA 92037 FAX: (619) 587 0417 -----  From roy at mbfys.kun.nl Wed Mar 9 04:32:54 1994 From: roy at mbfys.kun.nl (Roy Glasius) Date: Wed, 9 Mar 94 10:32:54 +0100 Subject: paper available Message-ID: <9403090932.AA15327@augustus.mbfys.kun.nl> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/glasius.labyrinth.ps.Z **DO NOT FORWARD TO OTHER GROUPS** The file glasius.labyrinth.ps.Z is now available for copying from the Neuroprose repository: (16 pages) NEURAL NETWORK DYNAMICS FOR PATH PLANNING AND OBSTACLE AVOIDANCE. Roy Glasius, Andrzej Komoda, Stan C.A.M. Gielen. University of Nijmegen. ABSTRACT A model of a topologically organized neural network of a Hopfield type with nonlinear analog neurons is shown to be very effective for path planning and obstacle avoidance. This deterministic system can rapidly provide a proper path, from any arbitrary start position to any target position, avoiding both static and moving obstacles of arbitrary shape. The model assumes that an (external) input activates a target neuron, corresponding to the target position, and specifies obstacles in the topologically ordered neural map. The path follows from the neural network dynamics and the neural activity gradient in the topologically ordered map. The analytical results are supported by computer simulations to illustrate the performance of the network. (Neural Networks preprint) Roy Glasius, Department of Medical Physics and Biophysics, University of Nijmegen, Geert Grooteplein Noord 21, 6525 EZ Nijmegen, The Netherlands, tel: +31-80615040, email: roy at mbfys.kun.nl.
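To make the mechanism in the Glasius, Komoda and Gielen abstract concrete, here is a heavily simplified sketch; it is our own toy, not the authors' model (their network is a Hopfield-type analog net on a topological map, with dynamics analysed in the paper). This version just relaxes a diffusion-like update with the target clamped on and obstacles clamped off, then reads out the path by ascending the activity gradient:

import numpy as np

def plan_path(free, start, target, iters=400):
    """free: array of 1.0 (free cell) / 0.0 (obstacle); start, target: (row, col)."""
    act = np.zeros(free.shape)
    for _ in range(iters):
        p = np.pad(act, 1)              # zero activity outside the map
        nb = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        act = np.tanh(nb) * free        # squash; obstacle neurons stay silent
        act[target] = 1.0               # clamp the target neuron on
    path, pos = [start], start
    for _ in range(act.size):           # bounded walk up the activity gradient
        if pos == target:
            break
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < free.shape[0] and 0 <= c + dc < free.shape[1]]
        pos = max(nbrs, key=lambda q: act[q])
        path.append(pos)
    return path

free = np.ones((8, 8))
free[2:6, 4] = 0.0                      # a wall of obstacle cells
print(plan_path(free, start=(7, 7), target=(0, 0)))

Because the relaxed activity decays with path distance from the clamped target through free cells only, greedy ascent steers around the wall, which is the intuition behind the abstract's "path follows from the neural activity gradient".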
From P.McKevitt at dcs.shef.ac.uk Thu Mar 10 04:37:25 1994 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Thu, 10 Mar 94 09:37:25 GMT Subject: No subject Message-ID: <9403100937.AA03327@dcs.shef.ac.uk> THE UNIVERSITY OF SHEFFIELD The Department of Computer Science wishes to recruit a Lecturer Grade A to a fixed 5-year appointment arising from the award of an SERC Advanced Research Fellowship to Dr. P Mc Kevitt. The lectureship will be tenable from 1/10/94 and applications are invited from anyone with research interests in the following areas: Cognitive Systems Computational Models of Hearing Speech Technology Natural Language Processing Computer Graphics Intelligent Tutoring Systems Computer Argumentation Connectionist Language Processing Formal Methods and Software Engineering Theory of Computer Science Software and systems engineering Communication Networks Neural Networks Parallel Systems Safety Critical Systems Parallel Databases CASE Tools for Parallel Systems Further details are available from the Department of Computer Science. Closing date for applications: 8th April, 1994. Department of Computer Science Regent Court University of Sheffield 211 Portobello Street GB- S1 4DP, Sheffield England, UK, EU. e-mail: dept at dcs.shef.ac.uk fax: +44 742 780972 phone: +44 742 825590  From isabelle at neural.att.com Fri Mar 11 12:08:37 1994 From: isabelle at neural.att.com (Isabelle Guyon) Date: Fri, 11 Mar 94 12:08:37 EST Subject: UNIPEN project of data exchange and recognizer benchmarks Message-ID: <9403111708.AA06471@neural> - > - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - < - - > UNIPEN project of data exchange and recognizer benchmarks < - - > - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - < - Isabelle Guyon and Lambert Schomaker - > - > - > - > - > - < - < - < - < - < - < - March 1994 Content: I - UNIPEN ftp site. II - Scrib-L mailing list. III - Tentative schedule for the first UNIPEN benchmark. IV - Information on the IAPR and the Technical Committee 11. V - Information on the Linguistic Data Consortium. VI - Information on the US National Institute of Standards and Technology. VII - Wish list. Abstract: UNIPEN is a project of data exchange and benchmarks for on-line handwriting recognition, started at the initiative of Technical Committee 11 of the IAPR. The data of concern may include handprint and cursive from various alphabets, signatures and gestures captured by a digitizing device providing the pen trajectory. Several tens of companies and universities have already joined UNIPEN and participated in defining a standard data format. These data will be provided by the participants in this common data format and distributed by the Linguistic Data Consortium (LDC). We are pleased to confirm that a benchmark organized by the US National Institute of Standards and Technology (NIST) will take place this year. It will be restricted to the Latin alphabet.
Subscription: To subscribe to this newsletter, please send the following information to: isabelle at neural.att.com Name: Affiliation: Address: Phone: Fax: Email:  From jordan at psyche.mit.edu Fri Mar 11 17:49:00 1994 From: jordan at psyche.mit.edu (Michael Jordan) Date: Fri, 11 Mar 94 17:49:00 EST Subject: symposium announcement Message-ID: Control of the Physical World by Intelligent Agents AAAI 1994 Fall Symposium November 4-6, 1994 The Monteleone Hotel, New Orleans, Louisiana Call for Participation The Problem An intelligent agent, interacting with the physical world, must cope with a wide range of demands. Different scientific and engineering disciplines, with different abstractions of the world, have found different "pieces of the puzzle" for the problem of how the agent can successfully control its world. These disciplines include: - AI = qualitative reasoning = planning = machine learning = intelligently guided numerical simulation - control theory - dynamical systems - fault diagnosis - fuzzy logic and systems - neural nets - computer vision - robotics The goal of this symposium is to attempt to understand the puzzle as a whole, by bringing together researchers with experience assembling two or more pieces. The emphasis will be on learning from successful projects in this area that exploit results or methods from several disciplines. Communication Abstractly, the important questions will be: - What are the strengths and weaknesses of each piece of the puzzle? - How do we put together two pieces to exploit their strengths and avoid their weaknesses? - How do we reconcile the different conceptual frameworks to make different approaches mutually comprehensible? In order to make our discussions mutually comprehensible, participants should relate their work to one of a small number of everyday tasks: - vacuuming the floors in an ordinary house, coping with furniture, pets, trash, etc. - controlling a process such as a pressure-cooker, including set-up, start-up, normal operation, anticipating and handling emergencies, shut-down, and clean-up. - automated driving of a car through city and/or highway traffic, including learning spatial structure and using maps. - learning to understand and control one's own sensory-motor system, including seeing, grabbing, walking, running, bicycling, juggling, etc. Format: The symposium will be organized around a few presentations and lots of discussion. In some sessions, a successful project will be presented and critiqued. In others, a problem will be posed, and relevant contributions collected and evaluated. The working papers will be distributed (we hope) in advance, so participants can familiarize themselves with each other's positions before the symposium. We expect conversations of the form: - "What problem are you working on?" - "Why is that important?" - "How can I help you?" - "How can you help me?" Attendance: Attendance at the workshop will be limited. Some attention will be given to balance among areas, but the primary criteria will be successful synthesis of multiple approaches to intelligent agenthood, and ability of the participant to communicate across discipline boundaries. In addition to invited participants, a limited number of other interested parties will be able to register in each symposium on a first-come, first-served basis. Registration will be available by mid-July 1994. To obtain registration information, write to the AAAI at 445 Burgess Drive, Menlo Park, CA 94025 (fss at aaai.org).
Submission requirements: Papers should focus on one of the above everyday tasks (or a task of similar familiarity and concreteness). It would be helpful to include a glossary of key concepts to help bring the reader into your conceptual framework. Five copies of either full papers (twenty pages max) or short position papers (five pages max) should be sent to: Benjamin Kuipers Co-chair, AAAI Intelligent Agent Workshop Computer Sciences Department University of Texas at Austin Austin, Texas 78712 USA Dates: - Submissions due: April 15, 1994. - Notification by: May 17, 1994. - Final versions due: August 19, 1994. Workshop committee: Benjamin Kuipers, University of Texas, co-chair; Lyle Ungar, University of Pennsylvania, co-chair; Piero Bonnisone, General Electric; Jim Hendler, University of Maryland; Michael Jordan, MIT. Sponsored by the American Association for Artificial Intelligence 445 Burgess Drive, Menlo Park, CA 94025 (415) 328-3123 fss at aaai.org  From greiner at scr.siemens.com Fri Mar 11 18:16:49 1994 From: greiner at scr.siemens.com (Russell Greiner) Date: Fri, 11 Mar 1994 18:16:49 -0500 Subject: CFP: "Relevance" Symposium Message-ID: <199403112316.SAA14487@eagle.siemens.com> ============================================================================== AAAI 1994 Fall Symposium RELEVANCE 4-6 November 1994 The Monteleone Hotel, New Orleans, Louisiana == Call for Participation == With too little information, reasoning and learning systems cannot work effectively. Surprisingly, too much information can also cause the performance of these systems to degrade, in terms of both accuracy and efficiency. It is therefore important to determine what information must be preserved, or more generally, to determine how best to cope with superfluous information. The goal of this workshop is a better understanding of this topic, relevance, with a focus on techniques for improving a system's performance (along some dimension) by ignoring or de-emphasizing irrelevant and superfluous information. These techniques will clearly be of increasing importance as knowledge bases, and learning systems, become more comprehensive to accommodate real-world applications. There are many forms of irrelevancy. In many contexts (including both deduction and induction), the initial theory may include more information than the task requires. Here, the system may perform more effectively if certain irrelevant *facts* (or nodes in a neural net or Bayesian network) are ignored or deleted. In the context of learning, certain *attributes* of each individual sample may be irrelevant in that they will play essentially no role in the eventual classification or clustering. Also, the learner may choose to view certain *samples* to be irrelevant, knowing that they contain essentially no new information. Yet another flavor of irrelevance arises during the course of a general computation: A computing process can ignore certain *intermediate results*, once it has established that they will not contribute to the eventual answer; consider alpha-beta pruning or conspiracy numbers in game-playing and other contexts, or control heuristics in derivation. == Submission Information == Potential attendees should submit a one-page summary of their relevant research, together with a set of their relevant papers (pun unavoidable). People wishing to present material should also submit a 2000 word abstract. 
We invite papers that deal with any aspect of this topic, including characterizations of irrelevancies, ways of coping with superfluous information, ways of detecting irrelevancies and focusing on relevant information, and so forth; and are particularly interested in studies that suggest ways to improve the efficiency or accuracy of reasoning systems (including question-answerers, planners, diagnosticians, and so forth) or to improve the accuracy, sample complexity, or computational or space requirement of learning processes. We encourage empirical studies and cognitive theories, as well as theoretical results. We prefer plain-text, stand-alone LaTeX or Postscript submissions sent by electronic mail to greiner at learning.scr.siemens.com. Otherwise, please mail three copies to Russell Greiner "Relevance Symposium" Siemens Corporate Research, Inc 755 College Road East Princeton, NJ 08540-6632 In either case, the submission must arrive by 15 Apr 1994. == Important Dates == - Submissions due 15 April 1994 - Notification of acceptance 17 May 1994 - Working notes mailed out 20 Sept 1994 - Fall Symposium Series 4-6 Nov 1994 == Organizing Committee == Russ Greiner (co-chair, Siemens Corporate Research, greiner at learning.scr.siemens.com) Yann Le Cun (AT&T Bell Laboratories) Nick Littlestone (NEC Research Institute) David McAllester (MIT) Judea Pearl (UCLA) Bart Selman (AT&T Bell Laboratories) Devika Subramanian (co-chair, Cornell, devika at cs.cornell.edu) == Attendance == The symposium will be limited to between forty and sixty participants. In addition to invited participants, a limited number of other interested parties will be able to register on a first-come, first-served basis. Registration will be available by mid-July 1994. To obtain registration information, contact AAAI at fss at aaai.org; (415) 328-3123; or 445 Burgess Drive, Menlo Park, CA 94025. == Sponsored by == American Association for Artificial Intelligence as part of the AAAI 1994 Fall Symposium Series.  From mm at santafe.edu Sun Mar 13 16:16:38 1994 From: mm at santafe.edu (Melanie Mitchell) Date: Sun, 13 Mar 94 14:16:38 MST Subject: Job available Message-ID: <9403132116.AA05077@wupatki> JOB AVAILABLE: INTERVAL RESEARCH POSTDOCTORAL FELLOWSHIP IN ADAPTIVE COMPUTATION AT THE SANTA FE INSTITUTE The Santa Fe Institute has an opening for a Postdoctoral Fellow in Adaptive Computation beginning in September, 1994. The position is sponsored by Interval Research Corporation. The fellowship will last for one-to-two years. The Institute's research program is devoted to the study of complex systems, especially complex adaptive systems. SFI's Adaptive Computation program is an interdisciplinary effort focusing on computational aspects of the study of complex adaptive systems. Its purpose is to make fundamental progress on issues in computer science that are related to complex adaptive systems, and to export the results to researchers in other fields. These issues include both computational models of complex adaptive systems and theory and application of adaptive algorithms inspired by natural systems. Systems and techniques currently under study at the Santa Fe Institute include genetic algorithms, classifier systems, neural networks, and other adaptive computation techniques; the immune system; biomolecular sequence and structure; the origin of life; artificial life; models of evolution; the physics of information; nonlinear modeling and prediction; the economy; and others. Candidates should have a Ph.D. 
(or expect to receive one before September, 1994) and should have backgrounds in computer science, mathematics, economics, theoretical physics or chemistry, game theory, cognitive science, theoretical biology, dynamical systems theory, or related fields. A strong background in computational approaches is essential, as is an interest in interdisciplinary work. Evidence of these interests, in the form of previous research experience and publications, is helpful. Applicants should submit a curriculum vitae, list of publications, and statement of research interests, and arrange for three letters of recommendation to be sent. Incomplete applications will not be processed. All application materials must be received by April 15, 1994. Decisions will be made in early May. Send applications to: Interval Research Postdoctoral Committee, Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, New Mexico 87501. Applications or inquiries may also be sent by electronic mail to: postdoc at santafe.edu. SFI is an equal opportunity employer.  From lazzaro at CS.Berkeley.EDU Sun Mar 13 18:12:45 1994 From: lazzaro at CS.Berkeley.EDU (John Lazzaro) Date: Sun, 13 Mar 1994 15:12:45 -0800 Subject: Bibliography for Silicon Auditory Models Message-ID: <199403132312.PAA20966@boom.CS.Berkeley.EDU> I gave a tutorial at NIPS last year on VLSI implementations of auditory representations, and part of the handout packet was a bibliography of all papers published in the field. Enough people have asked me for copies of it that I cleaned it up, added extra sections for tutorial readings, and have placed it on anonymous FTP. Here's how to grab a copy, from your Unix prompt "%" : % % ftp hobiecat.pcmp.caltech.edu Connected to hobiecat.pcmp.caltech.edu. 220 hobiecat FTP server (Version 16.2 Fri Apr 26 18:20:43 GMT 1991) ready. Name (hobiecat.caltech.edu): anonymous 331 Guest login ok, send ident as password. Password: 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/anaprose/lazzaro 250 CWD command successful. ftp> get sa-biblio.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for sa-biblio.ps.Z (37029 bytes). 226 Transfer complete. local: sa-biblio.ps.Z remote: sa-biblio.ps.Z 37029 bytes received in 1.5 seconds (25 Kbytes/s) ftp> quit 221 Goodbye. uncompress sa-biblio.ps.Z --- Thanks to everyone who added contributions! --john lazzaro  From roebel at cs.tu-berlin.de Mon Mar 14 07:07:34 1994 From: roebel at cs.tu-berlin.de (Axel Roebel) Date: Mon, 14 Mar 1994 13:07:34 +0100 Subject: Techreport on Dynamic Pattern Selection in Neuroprose Message-ID: <199403141207.AA08973@mail.cs.tu-berlin.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/roebel.dynada.ps.Z With Terry and Isabelle we state: One man`s outlyer is OUR data point The file roebel.dynada.ps.Z (22 pages) is now available via anonymous ftp from the neuroprose archive. Title and abstract are given below. We regret that hardcopies are not available. ---------------------------------------------------------------------- The Dynamic Pattern Selection Algorithm: Effective Training and Controlled Generalization of Backpropagation Neural Networks A. R"obel Technical University of Berlin Department of Computer Science (Technical Report 93/23) (Subsets of this Report will appear in the conference proceedings of the Intern. 
Conference on Neural Networks, Italy, 1994 and the European Symposium on Artificial Neural Networks, Belgium, 1994) -- ABSTRACT -- In the following report the problem of selecting proper training sets for neural network time series prediction or function approximation is addressed. As a result of analyzing the relation between approximation and generalization, a new measure, the generalization factor, is introduced. Using this factor and cross-validation, a new algorithm, the {\em dynamic pattern selection}, is developed. \\ Dynamically selecting the training patterns during training establishes the possibility of controlling the generalization properties of the neural net. As a consequence of the proposed selection criterion, the generalization error is limited to the training error. As an additional benefit, the practical problem of selecting a concise training set out of known data is likewise solved. \\ By employing two time series prediction tasks, the results for dynamic pattern selection training and for fixed training sets are compared. The favorable properties of the dynamic pattern selection, namely lower computational expense and control of generalization, are demonstrated. \\ This report describes a revised version of the algorithm introduced in \cite{Roebel_e:92}. ---------------------------------------------------------------------------- Axel Roebel ** E-Mail: roebel at cs.tu-berlin.de ** Technische Universitaet Berlin ** Phone : +49 - 30 - 314 24892 ** Department of Applied Computer Science ** Fax : +49 - 30 - 314 24891 **  From plunkett at psy.ox.ac.uk Tue Mar 15 07:38:08 1994 From: plunkett at psy.ox.ac.uk (Kim Plunkett) Date: Tue, 15 Mar 94 12:38:08 GMT Subject: No subject Message-ID: <9403151238.AA12405@dragon.psych.ox.ac.uk> Connectionism and Language Acquisition SERC Postgraduate Studentship Department of Experimental Psychology University of Oxford The Science and Engineering Research Council has allocated a postgraduate studentship within the area of "Connectionism and Language Acquisition" to the Department of Experimental Psychology, Oxford University, starting in October 1994. Individuals interested in applying for this studentship should have or expect to obtain a good undergraduate degree in Psychology, Linguistics or Computer Science. The successful applicant will be expected to engage in both connectionist modelling and experimental work within the area of language acquisition. The studentship is expected to lead to the award of a D.Phil at the University of Oxford. Application forms can be obtained from: Mrs. B. Hammond Department of Experimental Psychology University of Oxford South Parks Road Oxford OX1 3UD UK Tel: 0865-271379 Applications should be marked "SERC IT Application". Further information concerning the studentship and research facilities in the Department can be obtained from Kim Plunkett Department of Experimental Psychology University of Oxford South Parks Road Oxford OX1 3UD UK Tel: 0865-271398 email: plunkett at psy.ox.ac.uk Please note that SERC studentship awards can only be held by UK or EEC nationals.  From stiber at cs.ust.hk Wed Mar 16 16:35:03 1994 From: stiber at cs.ust.hk (Dr. Michael Stiber) Date: Wed, 16 Mar 94 16:35:03 HKT Subject: Paper available in Neuroprose Message-ID: <9403160835.AA24410@cs.ust.hk> The file stiber.transient.ps.Z (4 pages) is now available via anonymous ftp from the neuroprose archive. It will appear in _Proc. Int. Symp.
on Speech, Image Processing & Neural Networks_, Hong Kong, 1994, and is also available as Technical Report HKUST-CS94-6 (file://ftp.cs.ust.hk/pub/techreport/postscript/tr94-6.ps.gz). If you absolutely, positively cannot access it any other way, send me email and I'll send you a hardcopy. ---------------------------------------------------------------------- Transient Responses in Dynamical Neural Models Michael Stiber Department of Computer Science The Hong Kong University of Science and Technology Clear Water Bay Kowloon Hong Kong Jose P. Segundo Department of Anatomy and Cell Biology and Brain Research Institute University of California Los Angeles, CA 90024 USA We consider the input/output behavior of a realistic dynamical neural model in comparison to those typically used in artificial neural networks. We have found that such models duplicate well those behaviors seen in living neurons, displaying a range of behaviors commonly seen in a wide variety of nonlinear dynamical systems. This is not captured well by weighted sum/monotonic transfer function models. An example of the consequences of nonlinear dynamics in neural responses is presented for monotonically changing input transients. ---------------------------------------------------------------------- Dr. Michael Stiber stiber at cs.ust.hk Department of Computer Science tel: (852) 358 6981 The Hong Kong University of Science & Technology fax: (852) 358 1477 Clear Water Bay, Kowloon, Hong Kong  From richardd at logcam.co.uk Wed Mar 16 05:12:58 1994 From: richardd at logcam.co.uk (Richard Dallaway) Date: Wed, 16 Mar 94 10:12:58 GMT Subject: Thesis available Message-ID: <9403161013.AA09849@logcam.co.uk> FTP-host: ftp.cogs.susx.ac.uk FTP-filename: /pub/reports/csrp/csrp306.ps.Z The following thesis is available via anonymous ftp. DYNAMICS OF ARITHMETIC: A CONNECTIONIST VIEW OF ARITHMETIC SKILLS Richard Dallaway email: richardd at cogs.susx.ac.uk Cognitive Science Research Paper CSRP-306 School of Cognitive & Computing Sciences University of Sussex, Brighton, UK SUMMARY: Connectionist models of adult memory for multiplication facts and children's multicolumn multiplication errors. Full abstract at the end of this message. FTP instructions: unix> ftp ftp.cogs.susx.ac.uk [ or ftp 192.33.16.70] login: anonymous password: ftp> cd pub/reports/csrp ftp> binary ftp> get csrp306.ps.Z ftp> bye 155 pages. 552567 bytes compressed, 1922143 bytes uncompressed The file is over a megabyte, so some of you may find that you have to login to your printer server and use the "lpr -s" option. See man lpr. Your printer may not recognize the "Bembo" font used on the very first page (only). Paper copies can be ordered (5 pounds, US$10) from: Berry Harper School of Cognitive & Computing Sciences University of Sussex Falmer, Brighton, UK. ------------------------------------------------------------------------ ABSTRACT: Arithmetic takes time. Children need five or six years to master the one hundred multiplication facts (0x0 to 9x9), and it takes adults approximately one second to recall an answer to a problem like 7x8. Multicolumn arithmetic (e.g., 45x67) requires a sequence of actions, and children produce a host of systematic mistakes when solving such problems. This thesis models the time course and mistakes of adults and children solving arithmetic problems. Two models are presented, both of which are built from connectionist components. First, a model of memory for multiplication facts is described.
A system is built to capture the response time and slips of adults recalling two-digit multiplication facts. The phenomenon is thought of as spreading activation between problem nodes (such as 7 and 8) and product nodes (56). The model is a multilayer perceptron trained with backpropagation, and McClelland's (1988) cascade equations are used to simulate the spread of activation. The resulting reaction times and errors are comparable to those reported for adults. An analysis of the system, together with variations in the experiments, suggests that problem frequency and the "coarseness" of the input encoding have a strong effect on the phenomena. Preliminary results from damaging the network are compared to the arithmetic abilities of brain-damaged subjects.

The second model is of children's errors in multicolumn multiplication. Here the aim is not to produce a detailed fit to the empirical observations of errors, but to demonstrate how a connectionist system can model the behaviour, and what advantages this brings. Previous production system models are based on an impasse-repair process: when a child encounters a problem, an impasse is said to have occurred, which is then repaired with general-purpose heuristics. The style of the connectionist model moves away from this. A simple recurrent network is trained with backpropagation through time to activate procedures which manipulate a multiplication problem. Training progresses through a curriculum of problems, and the system is tested on unseen problems. Errors can occur during testing, and these are compared to children's errors. The system is analysed in terms of hidden unit activation trajectories, and the errors are characterized as "capture errors". That is, during processing the system may be attracted into a region of state space that produces an incorrect response but corresponds to a similar arithmetic subprocedure. The result is a graded state machine---a system with some of the properties of finite state machines, but with the additional flexibility of connectionist networks. The analysis shows that connectionist representations can be structured in ways that are useful for modelling procedural skills such as arithmetic. It is suggested that one of the strengths of the model is its emphasis on development, rather than on "snap-shot" accounts. Notions such as "impasse" and "repair" are discussed from a connectionist perspective.

-----------------------------------------------------------------------

From yorick at dcs.shef.ac.uk Wed Mar 16 11:41:27 1994
From: yorick at dcs.shef.ac.uk (Yorick Wilks)
Date: Wed, 16 Mar 94 16:41:27 GMT
Subject: No subject
Message-ID: <9403161641.AA02186@dcs.shef.ac.uk>

THE UNIVERSITY OF SHEFFIELD

The Department of Computer Science wishes to recruit a Lecturer Grade A to a fixed 5-year appointment arising from the award of an SERC Advanced Research Fellowship to Dr. P. Mc Kevitt, who lectures in natural language processing.
The lectureship is to replace his teaching and will be tenable from 1/10/94. Applications are invited from anyone with research interests in the following areas:

Cognitive Systems
Computational Models of Hearing
Speech Technology
Natural Language Processing
Computer Graphics
Intelligent Tutoring Systems
Computer Argumentation
Connectionist Language Processing
Formal Methods and Software Engineering
Theory of Computer Science
Software and Systems Engineering
Communication Networks
Neural Networks
Parallel Systems
Safety Critical Systems
Parallel Databases
CASE Tools for Parallel Systems

Further details are available from the Department of Computer Science: jean at dcs.sheffield.ac.uk. Closing date for applications: 1st April, 1994, to the Personnel Department, Western Bank, University of Sheffield, Sheffield, S10 2TN.

From isabelle at inrs-telecom.uquebec.ca Wed Mar 16 17:40:06 1994
From: isabelle at inrs-telecom.uquebec.ca (Jean Francois Isabelle)
Date: Wed, 16 Mar 1994 17:40:06 -0500 (EST)
Subject: master thesis available
Message-ID: <199403162241.AA16367@velcro.inrs-telecom.uquebec.ca>

[Message body scrubbed by the archive and not available.]

From bishopc at sun.aston.ac.uk Thu Mar 17 14:22:42 1994
From: bishopc at sun.aston.ac.uk (bishopc)
Date: Thu, 17 Mar 94 19:22:42 GMT
Subject: Paper available by ftp
Message-ID: <15152.9403171922@sun.aston.ac.uk>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/bishop.mixture*.ps.Z

The following technical report is available by anonymous ftp.

------------------------------------------------------------------------

MIXTURE DENSITY NETWORKS

Chris M Bishop
Neural Computing Research Group Report: NCRG/4288
Neural Computing Research Group
Aston University
Birmingham, B4 7ET, U.K.
email: c.m.bishop at aston.ac.uk

Abstract

In this paper we introduce a general technique for modelling conditional probability density functions by combining a mixture distribution model with a standard feedforward network. The conventional technique of minimizing a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model.
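To make the combination concrete: the network's outputs for an input x are split into the mixing coefficients, means and widths of a mixture, and training minimizes the negative log-likelihood of the targets under that mixture. A minimal Python/NumPy sketch, assuming one Gaussian mixture over a scalar target; the output-splitting convention and all names are illustrative, not code from the report:

    import numpy as np

    def mixture_parameters(raw, m):
        # Split the 3*m raw network outputs for one input into m mixing
        # coefficients (via a softmax), m means and m positive widths.
        logits, mu, log_sigma = raw[:m], raw[m:2*m], raw[2*m:]
        alpha = np.exp(logits - logits.max())
        alpha /= alpha.sum()               # mixing coefficients sum to one
        return alpha, mu, np.exp(log_sigma)

    def neg_log_likelihood(raw, t, m):
        # Error function -log p(t|x) for a scalar target t under the
        # Gaussian mixture parameterized by the network outputs raw.
        alpha, mu, sigma = mixture_parameters(raw, m)
        phi = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        return -np.log(alpha @ phi + 1e-12)

    # raw would come from any feedforward net evaluated at x; training
    # minimizes the summed neg_log_likelihood over the data set.
    raw = np.array([0.0, 0.0, 0.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
    print(neg_log_likelihood(raw, t=0.5, m=3))

Unlike a sum-of-squares error, this error function does not collapse a multi-valued target onto its conditional average.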
The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary non-linear functions. We demonstrate the effectiveness of Mixture Density Networks using both a simple 1-input 1-output mapping and a problem involving robot inverse kinematics.

--------------------------------------------------------------------
ftp instructions:

This paper is split into two files to keep the uncompressed postscript files below 2Mb.

bishop.mixture1.ps.Z (size 445839) pages 1 -- 16
bishop.mixture2.ps.Z (size 364598) pages 17 -- 25

% ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get bishop.mixture1.ps.Z
ftp> get bishop.mixture2.ps.Z
ftp> bye
% uncompress bishop*
% lpr bishop*
--------------------------------------------------------------------
Professor Chris M Bishop
Neural Computing Research Group
Dept. of Computer Science
Aston University
Birmingham B4 7ET, UK
Tel. +44 (0)21 359 3611 x4270
Fax. +44 (0)21 333 6215
c.m.bishop at aston.ac.uk
--------------------------------------------------------------------

From billl at head.neurology.wisc.edu Thu Mar 17 15:45:03 1994
From: billl at head.neurology.wisc.edu (Bill Lytton)
Date: Thu, 17 Mar 94 14:45:03 CST
Subject: Postdoctoral opportunities at U Wisconsin, Madison
Message-ID: <9403172045.AA11636@head.neurology.wisc.edu>

Postdoctoral fellowships available in Computational Neuroscience, starting immediately or in the fall.

Realistic simulations of single neurons and neuronal networks are being performed to better understand neural function, with particular emphasis on epileptogenesis and seizure spread. Close collaborations are available on-site with physiologists using electrophysiology and optical methods to assess activity in thalamus, piriform cortex and hippocampus in vivo and in vitro. Opportunities for involvement in ongoing projects or development of new research directions are available.

The computational laboratory uses networked UNIX workstations. Parallel supercomputing facilities are available, as well as collaboration on VLSI implementations.

Send or email CV and statement of research experience/interests to billl at head.neurology.wisc.edu.

Bill Lytton
Dept. of Neurology, University of Wisconsin
1300 University Ave., MSC 103, Madison, WI 53703 (EOAAE)

Tiring of the bicoastal lifestyle? Try the midcoast next.

From RAMPO at SALERNO.INFN.IT Thu Mar 17 17:42:00 1994
From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT)
Date: Thu, 17 MAR 94 22:42 GMT
Subject: ICANN'94 Program
Message-ID: <5301@SALERNO.INFN.IT>

EUROPEAN NEURAL NETWORK SOCIETY
PRELIMINARY PROGRAM
ICANN'94 - SORRENTO

ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS) is the fourth Annual Conference of ENNS, and it comes after ICANN'91 (Helsinki), ICANN'92 (Brighton) and ICANN'93 (Amsterdam). It is co-sponsored by INNS, IEEE-NC and JNNS. It will take place at the Sorrento Congress Center, near Naples, Italy, on May 26-29, 1994.

Conference Chair: Prof. Maria Marinaro, Univ. Salerno, Italy, Dept. Theoretic Physics; email: iiass at salerno.infn.it

Conference Co-Chair: Prof. Pietro G. Morasso, Univ. Genova, Italy, Dept. Informatics, Systems, Telecommunication; email: morasso at dist.unige.it; fax: +39 10 3532948

May 26 - Tutorials
------------------
* Introduction to neural networks (J.G. Taylor)
* Advanced techniques in supervised learning I (F. Fogelman)
* Advanced techniques in supervised learning II (F. Fogelman)
* Advanced techniques for self organising maps (T. Kohonen)
* Weightless NNs (I. Aleksander)
* Information theory in NNs (M. Plumbley)
* Hybrid systems (T. Schwarz)
* From neuroscience to neurocomputation for robotics and prediction (R. Eckmiller)
* Applications of neural nets (R. Hecht-Nielsen)

May 27/29 - Scientific sessions
-------------------------------
Plenary presentations: S. Grossberg, H. Szu, E. Bizzi, D. Amit, L. Zadeh

356 contributions, including 21 invited presentations, are presented in 27 oral sessions and 6 poster sessions, which are grouped into 4 main areas:

A: Neurobiology
---------------
Invited presentations:
* S. Grossberg et al.: Spatial pooling and perceptual framing by synchronizing cortical dynamics.
* J. Herault: Vertebrate retina: sub-sampling and aliasing effects can explain colour-opponent and colour constancy phenomena.
* L.W. Stark: ANNs and MAMFs: transparency or opacity?
* S. Usui et al.: Dry electrophysiology: an approach to the internal representation of the brain functions through artificial neural networks.

There are 4 oral sessions and 1 poster session covering topics on vision, motor control, and models of biological neurons and circuits.

B: Mathematical models
----------------------
Invited presentations:
* J.G. Taylor: Neuronal network models of mind.
* I. Aleksander: The consciousness of a neural state machine.
* T. Kohonen: What generalisations of the self-organizing map make sense?
* F. Fogelman: Variable selection with neural networks.
* M.I. Jordan et al.: Hierarchical mixtures of experts and the EM algorithm.
* M. Marinaro et al.: Outline of a linear neural network and applications.
* M. Kawato et al.: Teaching by showing in Kendama based on optimization principle.
* S. Amari: Information geometry and the EM algorithm.
* C.C.A.M. Gielen: Learning and interpretation of weights in neural networks.

There are 10 oral sessions and 3 poster sessions covering topics on fuzzy systems, symbolic and hybrid systems, self-organizing maps, attractor networks, RBF networks, reinforcement learning, optimization, statistical models, and network growing.

C: Applications
---------------
Invited presentations:
* H. Ritter: Parametrized self-organizing maps for vision learning tasks.
* R. De Mori et al.: Artificial neural networks for source code in formal information analysis.
* E. Oja: Beyond PCA: statistical expansions by nonlinear neural networks.
* R.J. Marks II et al.: Fourier analysis and filtering of a single hidden layer perceptron.
* V. Lopez et al.: Neural forecasting in real time industrial control.
* P. Morasso et al.: Cortical representation of external space.

There are 10 oral sessions and 2 poster sessions covering topics on classification models, speech, character recognition, signal and image processing, clustering and quantization, robotics and control.

D: Neurocomputing
-----------------
Invited presentations:
* C. Nicolini: From neural network to biomolecular electronics.
* R. Eckmiller: Biology-inspired pulse processing neural networks (BPN) for neurotechnology.

There are 3 oral sessions and 1 poster session covering topics on computational architecture, hardware design, software tools, and fault tolerance.

TECHNICAL EXHIBITION
--------------------
A technical exhibition will be organized for presenting the literature on neural networks and related fields, neural network design and simulation tools, electronic and optical implementation of neural computers, and application demonstration systems. Potential exhibitors are kindly requested to contact the industrial liaison chair.

Industrial Liaison Chair: Dr. Roberto Serra, Ferruzzi Finanziaria, Ravenna; fax: +39 544 35692/32358

SOCIAL PROGRAM
--------------
Social activities will include a welcome party, a banquet, and post-conference tours to some of the many possible targets of the area (participants will also have no difficulty self-organizing a la carte).

CORRESPONDENCE
--------------
EMAIL for correspondence (not papers): Dr. Salvatore Rampone - iiass at salerno.infn.it
FAX for correspondence (not papers): Mr. V. DiMarino - +39 89 822275

REGISTRATION FORM
-----------------
FAMILY NAME ________________________________
FIRST NAME, MIDDLE INITIAL _________________
AFFILIATION ________________________________
MAILING ADDRESS ____________________________
ZIP CODE, CITY, COUNTRY ____________________
FAX ________________________________________
PHONE ______________________________________
EMAIL ______________________________________
ACCOMPANIED BY _____________________________
MEMBERSHIP (Regular/ENNS member/Student) ___
ENNS MEMBERSHIP NO. ________________________
REGISTRATION FEE ___________________________
TUTORIAL FEE _______________________________
DATE ______________ SIGNATURE ______________

CONFERENCE REGISTRATION FEES (in LIT)
-------------------------------------
MEMBERSHIP    | Before 15/12/93 | Before 15/2/94 | On site
REGULAR       |     -------     |     -------    | 950,000
ENNS MEMBER   |     -------     |     -------    | 850,000
STUDENT       |     -------     |     -------    | 300,000

TUTORIAL FEES (in LIT)
----------------------
              | Before 15/2/94 | On site
REGULAR       |     -------    | 350,000
STUDENT       |     -------    | 150,000

- Regular registrants become ENNS members.
- Student registrants must provide an official certification of their status.
- Pre-registration payment: remittance in LIT to BANCO DI NAPOLI, Branch of FISCIANO, FISCIANO (SALERNO), ITALY, on the account of "Dipartimento di Fisica Teorica e S.M.S.A.", clearly stating the motivation (Registration Fee for ICANN'94) and the attendee name. Bank codes: ABI 1010, CAB 76210.
- On-site payment: cash.
- The registration form together with a copy of the bank remittance must be mailed to: Dr. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, I-84081 Baronissi, Salerno, Italy; Fax: +39 89 822275

HOTEL RESERVATION
-----------------
The official travel agent is (fax for a booking form): RUSSO TRAVEL srl, Via S. Antonio, I-80067 Sorrento, Italy; Fax: +39 81 807 1367; Phone: +39 81 807 1845

From zoubin at psyche.mit.edu Thu Mar 17 20:16:26 1994
From: zoubin at psyche.mit.edu (Zoubin Ghahramani)
Date: Thu, 17 Mar 94 20:16:26 EST
Subject: Paper available by ftp
Message-ID: <9403180116.AA17071@psyche.mit.edu>

FTP-host: psyche.mit.edu
FTP-filename: /pub/zoubin.cmss.ps.Z

The following paper is very closely related to Chris M Bishop's recently announced paper on MIXTURE DENSITY NETWORKS. It also addresses the problem of learning multi-valued mappings such as those that arise in inverse kinematics, acoustics, object localization, etc. The approach also involves learning a mixture density, though it does not combine that with the use of a feedforward network.

-----------------------------------------------------------------------------

Solving Inverse Problems Using an EM Approach to Density Estimation

Zoubin Ghahramani
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
zoubin at psyche.mit.edu

Abstract

This paper proposes density estimation as a feasible approach to the wide class of learning problems where traditional function approximation methods fail. These problems generally involve learning the inverse of causal systems, specifically when the inverse is a non-convex mapping.
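To see informally why density estimation helps with a non-convex inverse, suppose a Gaussian mixture with diagonal covariances has already been fitted to joint (x, y) pairs by EM; conditioning on x then amounts to reweighting the kernels by how well each explains x. A toy Python/NumPy sketch under those assumptions (not the paper's code; all names are illustrative):

    import numpy as np

    def condition_on_x(x, w, mu, var):
        # w: (k,) mixture weights; mu, var: (k, 2) joint means and diagonal
        # variances over (x, y). Returns the mixture for p(y|x): reweighted
        # kernel weights plus the (unchanged) y-means and y-variances.
        px = np.exp(-0.5 * (x - mu[:, 0])**2 / var[:, 0]) / np.sqrt(2*np.pi*var[:, 0])
        cw = w * px
        return cw / cw.sum(), mu[:, 1], var[:, 1]

    # A two-branch inverse map: both solutions y = +1 and y = -1 survive as
    # separate modes of p(y|x=0), where a least-squares fit would average
    # them to the invalid value y = 0.
    w = np.array([0.5, 0.5])
    mu = np.array([[0.0, 1.0], [0.0, -1.0]])
    var = np.full((2, 2), 0.1)
    print(condition_on_x(0.0, w, mu, var))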
We demonstrate the approach through three case studies: the inverse kinematics of a three-joint planar arm, the acoustics of a four-tube articulatory model, and the localization of multiple objects from sensor data. The learning algorithm presented differs from regression-based algorithms in that no distinction is made between input and output variables; the joint density is estimated via the EM algorithm and can be used to represent any input/output map by forming the conditional density of the output given the input.

In M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, & A. S. Weigend (eds.), Proceedings of the 1993 Connectionist Models Summer School, pp. 316--323. Hillsdale, NJ: Erlbaum Associates, 1994.

--------------------------------------------------------------------
ftp instructions:

% ftp psyche.mit.edu
login: anonymous
password:
ftp> cd pub
ftp> binary
ftp> get zoubin.cmss.ps.Z
ftp> bye
% uncompress zoubin.cmss.ps.Z
% lpr zoubin.cmss.ps
--------------------------------------------------------------------

Matlab code for the EM mixture algorithms for real, binary, and classification problems, for both complete and incomplete data (*), is also available by anonymous ftp from the same site:

ftp> get zoubin.EMcode.README
ftp> get zoubin.EMcode.tar.Z

Please email me if you intend to use the code so I can keep you updated with newer releases and possibly C++ and CM5 code.

(*) cf. Ghahramani & Jordan 1993, "Supervised learning from incomplete data using an EM approach":

ftp> get zoubin.nips93.ps.Z
--------------------------------------------------------------------
Zoubin Ghahramani

From davec at cogs.susx.ac.uk Fri Mar 18 09:36:11 1994
From: davec at cogs.susx.ac.uk (Dave Cliff)
Date: Fri, 18 Mar 1994 14:36:11 +0000 (GMT)
Subject: PhD at Sussex
Message-ID:

DPhil Studentship
The Sussex Centre for Neuroscience, School of Biological Sciences,
and The School of Cognitive and Computing Sciences
University of Sussex

Applications are invited for a three-year SERC DPhil (PhD) studentship to commence in October 1994. The project will use computational modelling techniques to study small neural networks involved in pattern generation and motor coordination in invertebrates. The successful candidate will be based in the School of Cognitive and Computing Sciences, but will be required to work closely with a group of researchers in the School of Biological Sciences, led by Prof. P. Benjamin.

Candidates should possess or expect to gain at least a 2i or equivalent degree in a numerate discipline (e.g. Computer Science or Electronic Engineering), although candidates from other disciplines may also be considered.

For further information, contact Dr Dave Cliff, School of Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, UK. Tel: 0273 606755 ext 3205; Fax: 0273 671320; e-mail: davec at cogs.susx.ac.uk

From N.Sharkey at dcs.shef.ac.uk Fri Mar 18 11:19:31 1994
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Fri, 18 Mar 94 16:19:31 GMT
Subject: job ad.
Message-ID: <9403181619.AA17733@entropy.dcs.shef.ac.uk>

There are two things that should be said to accompany the following job ad. First, a Lecturer Grade A in the UK is the direct equivalent of an Assistant Professor. Second, I would like to see good neural net people apply, particularly in the area of Connectionist Natural Language Processing. We have an Institute for Language, Speech and Hearing (ILASH), directed by Professor Yorick Wilks.
I should stress, however, that while I will have a say in the appointment, so will several other people from different areas. That is why I am trying to encourage first-rate applicants.

noel

THE UNIVERSITY OF SHEFFIELD

The Department of Computer Science wishes to recruit a Lecturer Grade A to a fixed 5-year appointment arising from the award of an SERC Advanced Research Fellowship to Dr. P. Mc Kevitt, who lectures in natural language processing. The lectureship is to replace his teaching and will be tenable from 1/10/94. Applications are invited from anyone with research interests in the following areas:

Cognitive Systems
Computational Models of Hearing
Speech Technology
Natural Language Processing
Computer Graphics
Intelligent Tutoring Systems
Computer Argumentation
Connectionist Language Processing
Formal Methods and Software Engineering
Theory of Computer Science
Software and Systems Engineering
Communication Networks
Neural Networks
Parallel Systems
Safety Critical Systems
Parallel Databases
CASE Tools for Parallel Systems

Further details are available from the Department of Computer Science: jean at dcs.sheffield.ac.uk. Closing date for applications: 1st April, 1994, to the Personnel Department, Western Bank, University of Sheffield, Sheffield, S10 2TN.

Noel Sharkey
Professor of Computer Science
Department of Computer Science
Regent Court, University of Sheffield
S1 4DP, Sheffield, UK
N.Sharkey at dcs.shef.ac.uk

From ess94%TRBOUN.BITNET at vm.gmd.de Fri Mar 18 07:14:21 1994
From: ess94%TRBOUN.BITNET at vm.gmd.de (ess94%TRBOUN.BITNET@vm.gmd.de)
Date: Fri, 18 Mar 1994 14:14:21 +0200
Subject: REMINDER FOR EUROPEAN SIMULATION SYMPOSIUM 1994
Message-ID: <0097B9F1.8F0A8FC0.20614@trboun.bitnet>

*************** REMINDER FOR SUBMITTING ABSTRACTS TO ********************
EUROPEAN SIMULATION SYMPOSIUM 1994
DEADLINE EXTENDED TO APRIL 12, 1994

This is a reminder that the deadline to submit abstracts for the European Simulation Symposium 1994, which will be held in Istanbul during Oct 9-12, 1994, is EXTENDED to APRIL 12, 1994. You may find other pertinent information about the symposium in the electronic copy of the Call for Papers.

***********************************************************************

ESS'94
EUROPEAN SIMULATION SYMPOSIUM
CALL FOR PAPERS
ISTANBUL, TURKEY, OCTOBER 9-12, 1994
HOSTED BY BOGAZICI UNIVERSITY

Organized and sponsored by: The Society for Computer Simulation International (SCS)

In cooperation with: The European Simulation Council (ESC); Ministry of Industry and Trade, Turkey; Operational Research Society of Turkey (ORST)

Cosponsored by: Bekoteknik, Digital Equipment Turkiye, Hewlett Packard, IBM Turk

Main Topics:
* Advances in Simulation Methodology and Practices
* Artificial Intelligence in Simulation
* Innovative Simulation Technologies
* Industrial Simulation
* Computer and Telecommunication Systems

CONFERENCE COMMITTEE

Conference Chairman: Prof. Dr. Tuncer I. Oren, University of Ottawa, Computer Science Department, 150 Louis Pasteur / Pri., Ottawa, Ontario, Canada K1N 6N5. Phone: 1.613.564.5068; Fax: 1.613.738-0701; E-mail: oren at csi.uottawa.ca

Program Chairman: Prof. Dr. Ali Riza Kaylan, Bogazici University, Dept. of Industrial Engineering, 80815 Bebek, Istanbul, Turkey. Phone: 90.212.2631540/2072; Fax: 90.212.2651800; E-Mail: Kaylan at trboun.bitnet

Program Co-chairman: Prof. Dr. Axel Lehmann, Universitaet der Bundeswehr, Munchen, Institut fur Technische Informatik, Werner-Heisenberg-Weg 39, D 85577 Neubiberg, Germany.
Phone: 49.89.6004.2648/2654; Fax: 49.89.6004.3560; E-Mail: Lehmann at informatik.unibw-muenchen.de

Finance Chairman: Rainer Rimane, University of Erlangen - Nurnberg

Organization Committee: Ali Riza Kaylan, Yaman Barlas, Murat Draman, Levent Mollamustafaoglu, Tulin Yazgac

International Program Committee (Preliminary): O. Balci, USA; J. Banks, USA; G. Bolch, Germany; W. Borutzky, Germany; R. Crosbie, USA; M. Dal Cin, Germany; M. S. Elzas, Netherlands; H. Erkut, Turkey; A. Eyler, Turkey; P. Fishwick, USA; E. Gelenbe, USA; A. Guasch, Spain; M. Hitz, Austria; R. Huntsinger, USA; G. Iazeolla, Italy; K. Irmscher, Germany; K. Juslin, Finland; A. Javor, Hungary; E. Kerckhoffs, Netherlands; J. Kleijnen, Netherlands; M. Kotva, Czech Rep.; M. Koksalan, Turkey; M. L. Padgett, USA; M. Pior, Germany; R. Reddy, USA; S. Reddy, USA; B. Schmidt, Germany; S. Sevinc, Australia; H. Szczerbicka, Germany; S. Tabaka, Japan; O. Tanir, Canada; G. Vansteenkiste, Belgium; M. Wildberger, USA; S. Xia, UK; R. Zobel, UK

CONFERENCE INFORMATION

The ESS series (organized by SCS, the Society for Computer Simulation International) is now in its fifth year. SCS is an international non-profit organization founded in 1952. On a yearly basis SCS organizes 6 simulation conferences worldwide, cooperates in 2 others, and publishes the monthly magazine Simulation, a quarterly Transactions, and books. For more information, please tick the appropriate box on the reply card.

During ESS'94 the following events will be presented besides the scientific program:

Professional Seminars: The first day of the conference is dedicated to professional seminars, which will give interested participants a state-of-the-art overview of each of the five main themes of this conference. The participation fee is included in the conference registration fee. If you have suggestions for other advanced tutorial topics, please contact one of the program chairmen.

Exhibits: An exhibition will be held in the central hall where all participants meet for coffee and tea. There will be a special exhibition section for universities and non-profit organizations, and a special section for publishers and commercial stands. If you would like to participate in the exhibition, please contact the SCS European Office.

Vendor Sessions, Demonstrations and Video Presentations: For demonstrations or video sessions, please contact SCS International at the European Office. Special sessions within the scientific program will be set up for vendor presentations.

Other Organized Meetings: Several user group meetings for simulation languages and tools will be organized on Monday. It is possible to have other meetings on Monday as well. If you would like to arrange a meeting, please contact the Conference Chairman. We will be happy to provide a meeting room and other necessary equipment.

VENUE

Istanbul, the only city in the world built on two continents, stands on the shores of the Istanbul Bogazi (Bosphorus), where the waters of the Black Sea mingle with those of the Sea of Marmara and the Golden Horn. Here on this splendid site, Istanbul guards the precious relics of three empires of which she has been the capital; a unique link between East and West, past and present. Istanbul has infinite variety: museums, ancient churches, palaces, great mosques, bazaars and the Bosphorus. However long you stay, just a few days or longer, your time will be wonderfully filled in this unforgettable city.
Bogazici University, which will host ESS'94, has its origins in Robert College, the first American college founded outside the United States (1863). It has a well-deserved reputation for academic excellence and accordingly attracts students from among the best and brightest in Turkey. The University is composed of four faculties, six institutes (offering graduate programs), and two other schools.

The conference location is the Istanbul Dedeman, an international five-star hotel located in the center of the city with a spectacular view of the Bosphorus. It is close to most of the historical places as well as to the business center. For conference participants the special single-room rate is 65 US dollars.

SCIENTIFIC PROGRAM

The 1994 SCS European Simulation Symposium is structured around the following five major themes. A parallel track will be devoted to each of the five topics. The conference language is English.

* Advances in Simulation Methodology and Practices, e.g.:
  - Advanced Modelling, Experimentation, and Output Analysis and Display
  - Object-Oriented System Design and Simulation
  - Optimization of Simulation Models
  - Validation and Verification Techniques
  - Mixed Methodology Modelling
  - Special Simulation Tools and Environments

* Artificial Intelligence in Simulation, e.g.:
  - Knowledge-based Simulation Environments and Knowledge Bases
  - Knowledge-based System Applications
  - Reliability Assurance through Knowledge-based Techniques
  - Mixed Qualitative and Quantitative Simulation
  - Neural Networks in Simulation

* Innovative Simulation Technologies:
  - Virtual Reality
  - Multimedia Applications

* Industrial Simulation, e.g. simulation in:
  - Design and Manufacturing, CAD, CIM
  - Process Control
  - Robotics and Automation
  - Concurrent Engineering, Scheduling

* Computer and Telecommunication Systems, e.g.:
  - Circuit Simulation, Fault Simulation
  - Computer Systems
  - Telecommunication Devices and Systems
  - Networks

INVITED SPEAKERS

Focusing on the main tracks of the conference, invited speakers will give special in-depth presentations in plenary sessions, which will be included in the proceedings of the conference.

BEST PAPER AWARDS

The 1994 European Simulation Symposium will award the best five papers, one in each of the five tracks. From these five papers, the best overall paper of the conference will be chosen. The awarded papers will be published in an international journal, if necessary after modifications have been incorporated.

DEADLINES AND REQUIREMENTS

Extended abstracts (300 words, 2-3 pages for full papers; 150 words, 1 page for short papers; typewritten, without drawings and tables) are due to arrive in QUADRUPLICATE at the office of Ali Riza Kaylan, at the Industrial Engineering Department of Bogazici University, TURKEY, before April 12, 1994. Only original papers, written in English, which have not previously been published elsewhere will be accepted. In case you want to organize a panel discussion, please contact the program chairmen.

Authors are expected to register early (at a reduced fee) and to attend the conference at their own expense to present the accepted papers. If early registration and payment are not made, the paper will not be published in the conference proceedings. In the case of multiple authors, one author should be identified as the correspondent for the paper. Abstracts will be reviewed by 3 members of the International Program Committee for full papers and one member for short papers.
Notification of acceptance or rejection will be sent by April 30, 1994. An author kit with complete instructions for preparing a camera-ready copy for the proceedings will be sent to authors of accepted abstracts. The camera-ready copy of the papers must be in by July 15, 1994. Only the full papers, which are expected to be 5-6 pages long, will be published in the conference proceedings. In order to guarantee a high-quality conference, the full papers will be reviewed as well, to check whether the suggestions of the program committee have been incorporated. The nominees for the best paper awards will also be selected at this stage.

REGISTRATION FEE

                              Author           SCS members    Other participants
Registration before          BF 15000          BF 15000       BF 17000
August 31, 1994              (375 ECU)         (375 ECU)      (425 ECU)

Registration after           Preregistration   BF 17000       BF 20000
August 31, 1994              required          (425 ECU)      (500 ECU)
or at the conference

The registration fee includes one copy of the Conference Proceedings, attending the professional seminars, coffee and tea during the breaks, all lunches, a welcome cocktail and the conference dinner.

CORRESPONDENCE ADDRESS

Philippe Geril
The Society for Computer Simulation, European Simulation Office
University of Ghent, Coupure Links 653, B-9000 Ghent, Belgium
Phone (Office): 32.9.233.77.90; Phone (Home): 32.59.800.804
Fax (Office): 32.9.223.49.41
E-Mail: Philippe.Geril at rug.ac.be

REPLY CARD

Family Name:
First Name:
Occupation and/or Title:
Affiliation:
Mailing Address:
Zip: City: Country:
Telephone: Fax:
E-mail:

Yes, I intend to attend the European Simulation Symposium ESS'94:
o Proposing a paper
o Proposing a panel discussion
o Participating in a vendor session
o Contributing to the exhibition
o Without presenting a paper

The provisional title of my paper / poster / exhibited tool is:

With the following topics:

The paper belongs to the category (please tick one):
o Advances in Simulation Methodology and Practices
o Artificial Intelligence in Simulation
o Innovative Simulation Technologies
o Industrial Simulation
o Computer and Telecommunication Systems

The paper will be submitted as a:
o Full paper
o Short paper
o Poster session
o Demonstration

Other colleague(s) interested in the topics of the conference is/are:
Name: Address:
Name: Address:

If you would like to receive more information about SCS and its activities, please tick the following box:
o YES, I would like to know more about SCS.

Please mail this card immediately to: Philippe Geril, The Society for Computer Simulation, European Simulation Office, University of Ghent, Coupure Links 653, B-9000 Ghent, Belgium.

From cogsci at birmingham.ac.uk Sun Mar 20 16:23:55 1994
From: cogsci at birmingham.ac.uk (cogsci@birmingham.ac.uk)
Date: Sun, 20 Mar 94 21:23:55 GMT
Subject: Cognitive Science MSc Programme at Birmingham
Message-ID:

____________________________________________________________________________

M S c   i n   C o g n i t i v e   S c i e n c e
a t   t h e   U n i v e r s i t y   o f   B i r m i n g h a m

____________________________________________________________________________

The University of Birmingham runs a programme of inter-disciplinary teaching and research in Cognitive Science notable for its breadth and cross-disciplinary interaction. Staff have a wide range of relevant research interests, and Cognitive Science is supported by extensive computing facilities comprising Unix workstations and X-terminals.
The MSc in Cognitive Science is a one-year modular programme consisting of taught courses followed by a substantial project. The taught courses (including options) on the MSc comprise: Artificial Intelligence Programming and Logic, Overview of Cognitive Science, Knowledge Representation, Inference and Expert Systems, General Linguistics, Human Information Processing, Structures for Data and Knowledge, Philosophy of Science for Cognitive Science, Philosophy of Mind for Cognitive Science, C++ Programming, Human-Computer Interaction, Biological and Computational Architectures, Current Issues in Cognitive Science, Artificial and Natural Perceptual Systems, Speech and Natural Language Processing, and Parallel Distributed Processing. Projects can be pursued in a wide range of topics.

Admissions requirements for the MSc in Cognitive Science are flexible, but normally include a good degree in a relevant area such as psychology, artificial intelligence, computer science, linguistics or philosophy. Addresses for further information are given below. The same addresses can be used for enquiries concerning the PhD programme in Cognitive Science and the Cognitive Science Seminar Series at Birmingham.

Phone: (+4421) 414 3683
Fax: (+4421) 414 4897
E-mail: cogsci at bham.ac.uk
WWW URL: http://www.cs.bham.ac.uk/
Gopher: gopher.cs.bham.ac.uk
Mail: Cognitive Science Admissions, School of Psychology, University of Birmingham, Birmingham, B15 2TT, U.K.

Donald Peterson.

From dhw at santafe.edu Mon Mar 21 15:21:52 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Mon, 21 Mar 94 13:21:52 MST
Subject: New file in neuroprose
Message-ID: <9403212021.AA01270@chimayo>

**** DO NOT FORWARD TO OTHER GROUPS ****

The following paper has been placed in neuroprose, under the name wolpert.unify.ps.Z. It is a draft, 82 pages long. Because of the breadth of its subject matter, comments/suggestions are strongly encouraged.

The Relationship Between PAC, the Statistical Physics framework, the Bayesian framework, and the VC framework.

by David H. Wolpert
The Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, NM, 87505, dhw at santafe.edu

Abstract: This paper discusses the intimate relationships between the supervised learning frameworks mentioned in the title. In particular, it shows how all those frameworks can be viewed as particular instances of a single overarching formalism. In doing this, many commonly misunderstood aspects of those frameworks are explored. In addition, the strengths and weaknesses of those frameworks are compared, and some novel frameworks are suggested (resulting, for example, in a `correction' to the familiar bias-plus-variance formula).

To print the file:

unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version wu-2.1c(3) Thu Dec 16 08:45:43 EST 1993) ready.
Name (archive.cis.ohio-state.edu:dhw): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
250-Please read the file README
250- it was last modified on Fri Jul 2 09:04:46 1993 - 262 days ago
250 CWD command successful.
ftp> binary
ftp> get wolpert.unify.ps.Z
ftp> quit
unix> uncompress wolpert.unify.ps.Z
unix> lpr wolpert.unify.ps (or however you print postscript)

From mli at math.uwaterloo.ca Tue Mar 22 17:35:31 1994
From: mli at math.uwaterloo.ca (Ming Li)
Date: Tue, 22 Mar 1994 17:35:31 -0500
Subject: Preliminary Announcement: ML'94 + COLT'94
Message-ID: <94Mar22.173539est.77988-4@math.uwaterloo.ca>

An unabbreviated version of this announcement in Latex or postscript can be obtained via anonymous ftp from cs.rutgers.edu in the directory pub/learning94. If you do not have access to ftp, send email to ml94 at cs.rutgers.edu or colt94 at research.att.com.

*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

--- Preliminary Announcement ---

ML '94: Eleventh International Conference on Machine Learning, July 10-13, 1994
COLT '94: Seventh ACM Conference on Computational Learning Theory, July 12-15, 1994

Rutgers, The State University of New Jersey, New Brunswick

*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

The COLT and ML conferences will be held together this year at Rutgers University in New Brunswick. This is the first time that COLT and ML will be held in the same location, and we are looking forward to a lively and interdisciplinary meeting of the two communities. Please come and help make this exciting experiment a success.

Among the highlights of the conferences are three invited lectures, and, on Sunday, July 10, a day of workshops and tutorials on a variety of topics relevant to machine learning. The tutorials are sponsored by DIMACS, and are free and open to the general public.

COLT is sponsored by the ACM Special Interest Groups on Algorithms and Computation Theory (SIGACT) and on Artificial Intelligence (SIGART). In addition, COLT and ML received generous support this year from AT&T Bell Laboratories and the NEC Research Institute.

This preliminary announcement, which omits the final technical program, is being provided so that travel arrangements can be made as early as possible. An updated announcement, including the technical program, will be distributed sometime in April.

>>>> WARNING <<<<

The dates of the conferences coincide this year with the World Cup soccer matches being held at Giants Stadium in East Rutherford, New Jersey. These games are expected to be the largest sporting event ever held in the New York metropolitan area, and it is possible that the volume of soccer fans in the area could adversely affect your ability to make travel reservations. Therefore, IT IS EXTREMELY IMPORTANT THAT YOU MAKE ALL YOUR TRAVEL ARRANGEMENTS AS EARLY AS POSSIBLE.

GENERAL INFORMATION

LOCATION. The conferences will be held at the College Avenue Campus of Rutgers University in downtown New Brunswick, which is easily accessible by air, train, and car. For air travel, New Brunswick is 35 minutes from Newark International Airport, a major U.S. and international airline hub. By rail, the New Brunswick train station is located less than four blocks from the conference site and is on Amtrak's Northeast Corridor. For travel by car, the conference site is approximately three miles from Exit 9 of the New Jersey Turnpike. See instructions below for obtaining a map of the campus. Most conference activities will take place in Scott Hall (#21 on map) and Murray Hall (#22). Conference check-in and on-site registration will take place in Scott Hall (follow signs for exact room location) on Saturday, July 9 at 3-6pm, and every day after that beginning at 8am.

REGISTRATION.
Please complete the attached registration form, and return it with a check or money order for the full amount. The early registration (postmark) deadline is May 27, 1994.

HOUSING. We have established a group rate of $91/night for a single or a double at the HYATT REGENCY HOTEL (about five blocks from the conference site). This rate is only guaranteed through June 10, 1994, and, due to limited availability, it is strongly recommended that you make reservations as soon as possible. To reserve a room, please call the Hyatt directly at 908-873-1234 or 800-233-1234 and be sure to reference ML94 or COLT94. Parking is available at the hotel for a discounted $3/night.

We have also reserved dormitory space in two dorms, both of which are an easy walk to the main conference site. Dorm reservations must be made by the early registration deadline of May 27, 1994. Both dorms include daily maid service (linens provided on the first day for the week, with fresh towels daily and beds made). The Stonier Hall dorms (#56 on map) are air-conditioned with private bath and are situated in the center of the campus. Due to limited availability, only shared double rooms are available in Stonier. Only a block away, the Campbell Hall dorms (#50) are one of a set of three "river dorms" overlooking the Raritan River. Although Campbell Hall is not air-conditioned, the view of the river is quite pleasing and rooms on the river side should offer good air flow. Baths in Campbell are shared on each floor, with single and double rooms available. Please specify your dorm preference on your registration form, and we will assign space accordingly on a first come, first served basis as long as rooms are available. Unfortunately, because there are only a finite number of rooms within each dormitory, we cannot absolutely guarantee your request.

Check-in for the dorms will take place at the Housing Office in Clothier Hall (#35), which is located next to the Hurtado Health Center (#37) on Bishop Place. Check-in hours will be 4pm to midnight, July 9-13. Parking passes, for those staying in the dorms, will be available upon check-in.

TRAVEL BY AIR. Newark International Airport is by far the most convenient. A taxi from the airport to New Brunswick costs about $36 (plus nominal tolls) for up to four passengers. (This is the flat-rate fare for a _licensed_ taxi from the official-looking taxi stand; it is strongly recommended that you refuse rides offered by unlicensed taxi drivers who may approach you elsewhere in the airport.) Shuttle service to New Brunswick is available from ICS for $23 per person. ICS shuttles run directly to the Hyatt, and require at least one day's advance reservation (908-566-0795 or 800-225-4427). If renting a car, follow signs out of the airport to the New Jersey Turnpike South, and continue with the directions below. By public transportation, take the Airlink bus ($4 exact fare) to Newark Penn Station and follow the "by rail" directions below. (New Jersey Transit train fare is $5.25 one-way or $8 round-trip excursion; trains run about twice an hour during the week, and less often in the evening and on weekends.)

TRAVEL BY CAR. Take the New Jersey Turnpike (south from Newark or New York, north from Philadelphia) to Exit 9. Follow signs onto Route 18 North or West (labeled differently at different spots) toward New Brunswick. Take the Route 27, Princeton exit onto Albany Street (Route 27) into downtown New Brunswick. The Hyatt Regency Hotel will be on your left after the first light.
If staying at the Hyatt, turn left at the next light, Neilson Street, and left again into the front entrance of the hotel. If staying in the dorms, continue past this light to the following light, George Street, and turn right. Stay on George Street to just before the fifth street and turn left into the Parking Deck (#55 on map). Walk to the Housing Office in Clothier Hall (#35) for dormitory check-in.

TRAVEL BY RAIL. Take either an Amtrak or a New Jersey Transit train to the New Brunswick train station. This is located at the corner of Albany Street and Easton Avenue. If staying at the Hyatt Regency Hotel, it is a (long) three-block walk to the left on Albany Street to the hotel. If staying in the dorms, it is a (long) six-block walk to the Housing Office in Clothier Hall (#35 on map) for dormitory check-in. (The taxi stand is in front of the train station on Albany Street.)

MEALS. Continental breakfast is included with registration, but not lunch or dinner. Restaurants abound within walking distance of the conference and housing venue, ranging from inexpensive food geared to college students to more expensive dining. A reception on July 12, included in the registration package for all ML94 and COLT94 attendees, is scheduled at the rustic Log Cabin, situated next to the experimental gardens of the agricultural campus. The banquet on July 13 is included in the registration package for everyone except students.

CLIMATE. New Jersey in July is typically hot, with average daily highs around 85 degrees, and overnight lows around 70. Most days in July are sunny, but also come prepared for the possibility of occasional rain.

THINGS TO DO. The newly opened Liberty Science Center is a fun, hands-on science museum located in Liberty State Park, about 30-45 minutes from New Brunswick (201-200-1000). From Liberty State Park, one can also take a ferry to the Statue of Liberty and the Immigration Museum at Ellis Island. New York City can be reached in under an hour by rail on New Jersey Transit. Trains run about twice an hour during the week, and once an hour on weekends and at night. Fare is $7.75 one-way, $11.50 round-trip excursion. New Brunswick has a number of theaters, including the State Theater (908-247-7200), the George Street Playhouse (908-246-7717), and the Crossroads Theater (908-249-5560). The New Jersey shore is less than an hour from New Brunswick. Points along the shore vary greatly in character. Some, such as Point Pleasant, have long boardwalks with amusement park rides, video arcades, etc. Others, such as Spring Lake, are quiet and uncommercialized, with clean and very pretty beaches. Further south, about two hours from New Brunswick, are the casinos of Atlantic City. You can walk for miles and miles along the towpath of the peaceful Delaware and Raritan Canal, which runs from New Brunswick south past Princeton. Your registration packet will include a pass for access to the College Avenue Gymnasium (near the dormitories, #77 on map).

FURTHER INFORMATION. If you have any questions or problems, please send email to colt94 at research.att.com or to ml94 at cs.rutgers.edu. A map of the campus, abstracts of workshops/tutorials, updates of this announcement, and other information will be available via anonymous ftp from cs.rutgers.edu in the directory pub/learning94. For New Jersey Transit fare and schedule information, call 800-772-2222 (in New Jersey) or 201-762-5100 (out-of-state).
TECHNICAL PROGRAM

The technical program for the conferences has not yet been finalized, but will be distributed sometime in April. All ML technical sessions will be held July 11-13, and all COLT sessions will be held July 12-15.

INVITED LECTURES:
* Michael Jordan, "Hidden decision tree models."
* Stephen Muggleton, "Recent advances in inductive logic programming."
* Fernando Pereira, "Frequencies vs biases: Machine learning problems in natural language processing."

PAPERS ACCEPTED TO ML:

A Bayesian framework to integrate symbolic and neural learning. Irina Tchoumatchenko, Jean Gabriel Ganascia.
A case for Occam's razor in the task of rule-base refinement. J. Jeffrey Mahoney, Raymond Mooney.
A conservation law for generalization performance. Cullen Schaffer.
A constraint-based induction algorithm in FOL. Michele Sebag.
A Modular Q-learning architecture for manipulator task decomposition. Chen Tham, Richard Prager.
A new method for predicting protein secondary structures based on stochastic tree grammars. Naoki Abe, Hiroshi Mamitsuka.
A powerful heuristic for the discovery of complex patterned behavior. Raul E. Valdes-Perez, Aurora Perez.
An efficient subsumption algorithm for inductive logic programming. Jorg-Uwe Kietz, Marcus Lubbe.
An improved algorithm for incremental induction of decision trees. Paul Utgoff.
An incremental learning approach for completable planning. Melinda T. Gervasio, Gerald F. DeJong.
Combining top-down and bottom-up techniques in inductive logic programming. John M. Zelle, Raymond Mooney, Joshua Konvisser.
Comparison of boosting to other ensemble methods using neural networks. Harris Drucker, Yann LeCun, L. Jackel, Corinna Cortes, Vladimir Vapnik.
Compositional instance-based learning. Karl Branting, Patrick Broos.
Consideration of risk in reinforcement learning. Matthias Heger.
Efficient algorithms for minimizing cross validation error. Mary Lee, Andrew W. Moore.
Exploiting the ordering of observed problem-solving steps for knowledge base refinement: an apprenticeship approach. Steven Donoho, David C. Wilkins.
Getting the most from flawed theories. Moshe Koppel, Alberto Segre, Ronen Feldman.
Greedy attribute selection. Richard A. Caruana, Dayne Freitag.
Hierarchical self-organization in genetic programming. Justinian Rosca, Dana Ballard.
Heterogeneous uncertainty sampling for supervised learning. David D. Lewis, Jason Catlett.
Improving accuracy of incorrect domain theories. Lars Asker.
In defense of C4.5: notes on learning one-level decision trees. Tapio Elomaa.
Increasing the efficiency of simulated annealing search by learning to recognize (un)promising runs. Yoichihro Nakakuki, Norman Sadeh.
Incremental multi-step Q-learning. Jing Peng, Ronald Williams.
Incremental reduced error pruning. Johannes Furnkranz, Gerhard Widmer.
Irrelevant features and the subset selection problem. George H. John, Ron Kohavi, Karl Pfleger.
Learning by experimentation: incremental refinement of incomplete planning domains. Yolanda Gil.
Learning disjunctive concepts by means of genetic algorithms. Attilio Giordana, Lorenza Saitta, F. Zini.
Learning recursive relations with randomly selected small training sets. David W. Aha, Stephane Lapointe, Charles Ling, Stan Matwin.
Learning semantic rules for query reformulation. Chun-Nan Hsu, Craig Knoblock.
Markov games as a framework for multi-agent reinforcement learning. Michael Littman.
Model-Free reinforcement learning for non-markovian decision problems. Satinder Pal Singh, Tommi Jaakkola, Michael I. Jordan.
On the worst-case analysis of temporal-difference learning algorithms. Robert Schapire, Manfred Warmuth.
Prototype and feature selection by sampling and random mutation hill climbing algorithms. David B. Skalak.
Reducing misclassification costs: Knowledge-intensive approaches to learning from noisy data. Michael J. Pazzani, Christopher Merz, Patrick M. Murphy, Kamal M. Ali, Timothy Hume, Clifford Brunk.
Revision of production system rule-bases. Patrick M. Murphy, Michael J. Pazzani.
Reward functions for accelerated learning. Maja Mataric.
Selective reformulation of examples in concept learning. Jean-Daniel Zucker, Jean Gabriel Ganascia.
Small sample decision tree pruning. Sholom Weiss, Nitin Indurkhya.
The generate, test and explain discovery system architecture. Michael de la Maza.
The minimum description length principle and categorical theories. J. R. Quinlan.
To discount or not to discount in reinforcement learning: a case study comparing R-learning and Q-learning. Sridhar Mahadevan.
Towards a better understanding of memory-based and Bayesian classifiers. John Rachlin, Simon Kasif, Steven Salzberg, David W. Aha.
Using genetic search to refine knowledge-based neural networks. David W. Opitz, Jude Shavlik.
Using sampling and queries to extract rules from trained neural networks. Mark W. Craven, Jude Shavlik.

WORKSHOPS AND DIMACS-SPONSORED TUTORIALS

On Sunday, July 10, we are pleased to present four all-day workshops, five half-day tutorials, and one full-day advanced tutorial. The DIMACS-sponsored tutorials are free and open to the general public. Participation in the workshops is also free, but is at the discretion of the workshop organizers. Note that some of the workshops have quickly approaching application deadlines. Please contact the workshop organizers directly for further information. Some information is also available on our ftp site (see "further information" above).

TUTORIALS:
T1. State of the art in learning DNF rules (morning/afternoon; advanced tutorial): Dan Roth (danr at das.harvard.edu), Jason Catlett (catlett at research.att.com)
T2. Descriptional complexity and inductive learning (morning): Ed Pednault (epdp at research.att.com)
T3. Computational learning theory: introduction and survey (morning): Lenny Pitt (pitt at cs.uiuc.edu)
T4. What does statistical physics have to say about learning? (morning): Sebastian Seung (seung at physics.att.com), Michael Kearns (mkearns at research.att.com)
T5. Reinforcement learning (afternoon): Leslie Kaelbling (lpk at cs.brown.edu)
T6. Connectionist supervised learning--an engineering approach (afternoon): Tom Dietterich (tgd at research.cs.orst.edu), Andreas Weigend (andreas at cs.colorado.edu)

WORKSHOPS:
W1. Robot Learning (morning/afternoon/evening): Sridhar Mahadevan (mahadeva at csee.usf.edu)
W2. Applications of descriptional complexity to inductive, statistical and visual inference (afternoon/evening): Ed Pednault (epdp at research.att.com)
W3. Constructive induction and change of representation (morning/afternoon): Tom Fawcett (fawcett at nynexst.com)
W4. Computational biology and machine learning (morning/afternoon): Mick Noordewier (noordewi at cs.rutgers.edu), Lindley Darden (darden at umiacs.umd.edu)

REGISTRATION FOR COLT94/ML94

Please complete the registration form below, and mail it with your payment for the full amount to:

Priscilla Rasmussen, ML/COLT'94
Rutgers, The State University of NJ
Laboratory for Computer Science Research
Hill Center, Busch Campus
Piscataway, NJ 08855

(Sorry, registration cannot be made by email, phone or fax.)
Make your check or money order payable in U.S. dollars to Rutgers University. For early registration, and to request dorm housing, this form must be mailed by May 27, 1994. For questions about registration, please contact Priscilla Rasmussen (rasmussen at cs.rutgers.edu; 908-932-2768). Name: _____________________________________________________ Affiliation: ______________________________________________ Address: __________________________________________________ ___________________________________________________________ Country: __________________________________________________ Phone: _______________________ Fax: _______________________ Email: ____________________________________________________ Confirmation will be sent to you by email. REGISTRATION. Please circle the *one* conference for which you are registering. (Even if you are planning to attend both conferences, please indicate the one conference that you consider to be "primary.") COLT94 ML94 The registration fee includes a copy of the proceedings for the *one* conference circled above (extra proceedings can be ordered below). Also included is admission to all ML94 and COLT94 talks and events (except that student registration does not include a banquet ticket). Regular advance registration: $190 $_______ ACM/SIG member advance registration: $175 $_______ Late registration (after May 27): $230 $_______ Student advance registration: $85 $_______ Student late registration (after May 27): $110 $_______ Extra reception tickets (July 12): _____ x $17 = _______ Extra banquet tickets (July 13): _____ x $40 = _______ Extra COLT proceedings: _____ x $35 = _______ Extra ML proceedings: _____ x $35 = _______ Dorm housing (from below): $_______ TOTAL ENCLOSED: $_______ How many in your party have dietary restrictions? Vegetarian: _____ Kosher: _____ Other: ______________ Circle your shirt size: small medium large X-large HOUSING. Please indicate your housing preference below. Descriptions of the dorms are given under "housing" above. Dorm assignments will be made on a first come, first served basis, so please send your request in as early as possible. We will notify you by email if we cannot fill your request. _____ Check here if you plan to stay at the Hyatt (reservations must be made directly with the hotel by June 10). _____ Check here if you plan to make your own housing arrangements (other than at the Hyatt). _____ Check here to request a room in the dorms and circle the appropriate dollar amount below:

Dorm:                       Stonier          Campbell
Length of stay:             dbl.    sing.    dbl.
ML only (July 9-13):        $144    144      108
COLT only (July 11-15):     144     144      108
ML and COLT (July 9-15):    216     216      162

If staying in a double in the dorms, who will your roommate be? ____________________________________ For either dorm, please indicate the expected day and time of arrival and departure. Note that check-in for the dorms must take place between 4pm and midnight on July 9-13. Expected arrival: ______ (date) ______ (time) Expected departure: ______ (date) ______ (time) TUTORIALS. The DIMACS-sponsored tutorials on July 10 are free and open to the general public. For our planning purposes, please circle those tutorials you plan to attend. Morning: T1 T2 T3 T4 Afternoon: T1 T5 T6 To participate in a workshop, please contact the workshop organizer directly. There is no fee for any workshop, and all workshops will be held on July 10. REFUNDS. The entire dorm fee, and one-half of the registration fee, are refundable through June 24. Send all requests by email to rasmussen at cs.rutgers.edu.
From dhw at santafe.edu Wed Mar 23 22:24:40 1994 From: dhw at santafe.edu (dhw@santafe.edu) Date: Wed, 23 Mar 94 20:24:40 MST Subject: New paper on Bayesian backprop Message-ID: <9403240324.AA04558@chimayo> *** DO NOT FORWARD TO OTHER GROUPS *** The following paper has been placed in an FTP repository at the Santa Fe Institute. An abbreviated version of this paper will appear in the proceedings of NIPS '93. The paper consists of two files. Retrieval instructions appear at the end of this message. Bayesian Backpropagation Over I-O Functions Rather Than Weights David H. Wolpert The Santa Fe Institute, 1660 Old Pecos Trail, Santa Fe, NM 87501 (dhw at santafe.edu) Abstract: The conventional Bayesian justification for backprop is that it finds the MAP weight vector. As this paper shows, to find the MAP i-o function instead, one must add a correction term to backprop. That term biases one towards i-o functions with small description lengths, and in particular favors (some kinds of) feature-selection, pruning, and weight-sharing. This can be viewed as an a priori argument in favor of those techniques.

To retrieve the paper:
unix> ftp ftp.santafe.edu
Name: anonymous
Password: (Your e-mail address)
ftp> binary
ftp> cd pub/Users/dhw
ftp> get nips.93.figs.ps.Z
ftp> get nips.93.text.ps.Z
ftp> quit
unix> uncompress nips.93.figs.ps.Z
unix> uncompress nips.93.text.ps.Z
unix> lpr nips.93.figs.ps (or however you print postscript)
unix> lpr nips.93.text.ps (or however you print postscript)

Note: The .figs file uncompresses to close to 2.5 meg. It may be necessary to use the -s option to lpr to print it.

From hilario at cui.unige.ch Wed Mar 23 13:21:22 1994 From: hilario at cui.unige.ch (Hilario Melanie) Date: Wed, 23 Mar 1994 19:21:22 +0100 Subject: Please disseminate via connectionists-ml Message-ID: <347*/S=hilario/OU=cui/O=unige/PRMD=switch/ADMD=arcom/C=ch/@MHS> ----------------------REMINDER : DEADLINE IS APRIL 1 ---------------------- Final Call for Papers COMBINING SYMBOLIC AND CONNECTIONIST PROCESSING Workshop held in conjunction with ECAI-94 August 9, 1994 - Amsterdam, The Netherlands ----------------------REMINDER : DEADLINE IS APRIL 1 ---------------------- Until a few years ago, the history of AI was marked by two parallel, often antagonistic streams of development -- classical or symbolic AI and connectionist processing. A recent research trend, premised on the complementarity of these two paradigms, strives to build hybrid systems which combine the advantages of both to overcome the limitations of each. For instance, attempts have been made to accomplish complex tasks by blending neural networks with rule-based or case-based reasoning. This workshop will be the first Europe-wide effort to bring together researchers active in the area, with a view to laying the groundwork for a theory and methodology of symbolic/connectionist integration (SCI). The workshop will focus on the following topics:
o theoretical (cognitive and computational) foundations of SCI
o techniques and mechanisms for combining symbolic and neural processing methods (e.g. ways of improving and going beyond state-of-the-art rule compilation and extraction techniques)
o outstanding problems encountered and issues involved in SCI (e.g. Which symbolic or connectionist representation schemes are best adapted to SCI? The vector space used in neural nets and the symbolic space have fundamental mathematical differences; how will these differences impact SCI?
Do we have the conceptual tools needed to cope with this representation problem?)
o profiles of application domains in which SCI has been (or can be) shown to perform better than traditional approaches
o description, analysis and comparison of implemented symbolic/connectionist systems

SUBMISSION REQUIREMENTS Prospective participants should submit an extended abstract to the contact person below, either via email in PostScript format or via regular mail, in which case 3 copies are required. Each submission should include a separate information page containing the title of the paper, author names and affiliations, and the complete address (including telephone, fax and email) of the first author. The paper itself should not exceed 12 pages. Submission deadline is April 1, 1994. Each paper will be reviewed by at least two members of the Program Committee. Notification of acceptance or rejection will be sent to first authors by May 1, 1994. Camera-ready copies of accepted papers are due on June 1st and will be reproduced for distribution at the workshop. Those who wish to participate without presenting a paper should send a request describing their research interests and/or previous work in the field of SCI. Since attendance will be limited to ensure effective interaction, these requests will be considered after screening of submitted papers. All workshop participants are required to register for the main conference.

PROGRAM COMMITTEE Bernard Amy (LIFIA-IMAG, Grenoble, France) Patrick Gallinari (LAFORIA, University of Paris 6, France) Franz Kurfess (Dept. Neural Information Processing, University of Ulm, Germany) Christian Pellegrini (CUI, University of Geneva, Switzerland) Noel Sharkey (DCS, University of Sheffield, UK) Alessandro Sperduti (CSD, University of Pisa, Italy)

IMPORTANT DATES
Submission deadline: April 1, 1994
Notification of acceptance/rejection: May 1, 1994
Final papers due: June 1, 1994
Date of the workshop: August 9, 1994

CONTACT PERSON Melanie Hilario CUI - University of Geneva 24 rue General Dufour CH-1211 Geneva 4 Voice: +41 22/705 7791 Fax: +41 22/320 2927 Email: hilario at cui.unige.ch

From franz at neuro.informatik.uni-ulm.de Wed Mar 23 13:26:02 1994 From: franz at neuro.informatik.uni-ulm.de (Franz Kurfess) Date: Wed, 23 Mar 1994 19:26:02 +0100 Subject: CfP: Workshop "Logic and Reasoning with Neural Networks" Message-ID: Could you please distribute the following Final Call for Papers / Participation? Thank you very much Franz Kurfess, Alessandro Sperduti FINAL CALL FOR PAPERS "Logic and Reasoning with Neural Networks" Workshop at the International Conference on Logic Programming ICLP'94 Santa Margherita Ligure, Italy June 17 or 18, 1994 Description of the Workshop =========================== The goal of the workshop is to initiate discussions and foster interaction between researchers interested in the use of neural networks and connectionist models for various aspects of logic and reasoning.
There are a number of domains where the combination of neural networks and logic opens up interesting perspectives:
* Methods for Reasoning
  - cognitively plausible models of reasoning
  - reasoning with vague knowledge
  - neural inference mechanisms
  - probabilistic reasoning with neural networks
* Knowledge Representation Aspects
  - representation of non-symbolic information
  - knowledge acquisition from raw data (rule extraction) with neural networks
  - representation of vague knowledge
  - similarity-based access to knowledge
  - context-dependent retrieval of facts
* Integration of Symbolic and Neural Components
  - combining sub-symbolic and symbolic information
  - pattern recognition
  - sensor fusion
* Implementation Techniques
  - connectionist implementations of symbolic inference mechanisms
  - neural networks as a massively parallel implementation technique
  - neural networks for learning of search heuristics

There are at least three major aspects where a discussion of neural networks / connectionist models can be beneficial to the logic programming community at this time:
* development of reasoning techniques which are closer to the way humans reason in everyday situations
* dealing with vague knowledge, i.e. imprecise, uncertain, incomplete, inconsistent information, possibly from different sources and in various formats
* efficiency improvements for symbolic inference mechanisms, e.g. through adaptive learning from previously solved problems, or content-oriented access to rules and facts

Submission of Papers ==================== Prospective contributors are invited to submit papers or extended abstracts to the organizers by April 1, 1994. They will be notified about acceptance or rejection by May 1. The final version of the papers is due June 1. We are planning to make the full papers accessible to the workshop participants in an ftp archive, and hand out only copies of the abstracts. If possible, please use a text processing program that allows you to produce PostScript output; otherwise it might be difficult to print out copies on systems other than the one you used. ******** Papers should be sent to Franz Kurfess *********** Preliminary Agenda ================== There will be one or two talks of approximately 30 min. where the essential background on the use of neural networks for logic and reasoning will be presented. The main purpose of this is to offer a brief introduction to those attendants with little knowledge of neural networks, and to provide a common framework of reference for the workshop. Care will be taken that these presentations concentrate on fundamental aspects, providing an overview of the field rather than a detailed technical review of one particular system or approach. The rest of the time slots will be used for presentations of submitted papers, i.e. approximately two in each session, with enough time for discussion. The final time schedule will be distributed after May 1. The workshop will be concluded by a final discussion and a wrap-up of important aspects.

Important Dates ===============
Submission deadline: April 1, 1994
Notification of acceptance/rejection: May 1, 1994
Final version of papers due: June 1, 1994
Date of the workshop: June 17 or 18, 1994

Registration ============ According to the standard policy of LP post-conference workshops, the workshops are an integral part of the conference. This means that participants of the workshop are expected to register for the conference. Workshop Organizers =================== Franz Kurfess Dept.
of Neural Information Processing University of Ulm D-89069 Ulm, Germany Voice : +49/731 502-41+4953 Fax : +49/731 502-4156 E-mail: kurfess at neuro.informatik.uni-ulm.de Alessandro Sperduti CSD - University of Pisa Corso Italia 40 56100 Pisa, Italy Voice : +39/50 887 248 Fax : +39/50 887 226 E-mail: perso at di.unipi.it

From geva at fit.qut.edu.au Fri Mar 25 10:36:56 1994 From: geva at fit.qut.edu.au (Mr Shlomo Geva) Date: Fri, 25 Mar 94 10:36:56 EST Subject: ANZIIS 94 Call for Papers Message-ID: <199403250037.KAA18407@sleet.fit.qut.edu.au> ******************* CALL FOR PAPERS ******************* ANZIIS-94 Second Australian and New Zealand Conference on Intelligent Information Systems Brisbane, Queensland, Australia Tutorials: 29 November, Conference: 30 Nov - 2 December 1994 Major fields: Artificial Intelligence Fuzzy Systems Neural Networks Evolutionary Computation The Second Australian and New Zealand Conference on Intelligent Information Systems (ANZIIS-94) will be held in Brisbane, from 29 November to 2 December 1994. This follows the successful inaugural conference, ANZIIS-93, held in Perth in December 1993. The Conference will offer an international forum for discussion of new research on the key methods of intelligent information processing: conventional artificial intelligence, fuzzy logic, artificial neural networks, and evolutionary algorithms. The conference will include invited keynote presentations and contributed papers in oral and poster presentations. All papers will be refereed and published in the proceedings. TUTORIALS AND PANEL SESSIONS The Organising Programme Committee cordially invites proposals for tutorials and special interest sessions relevant to the scope of the conference. Proposals should include details of the proponent, including mailing, e-mail and fax addresses, and research record. ABOUT BRISBANE Brisbane is a cosmopolitan and pleasant subtropical city. It is the heart of the vibrant south-east Queensland region that stretches over 200 km from the Gold to the Sunshine Coasts. It is not only a focal point for national and international tourists; tens of thousands of Australians decide to set up home here every year. We recommend conference participants set aside a few extra days to explore the region, either at their own leisure or by taking part in the special pre- and post-conference activities to be announced. Application areas will include, but will not be limited to: Adaptive Systems Artificial Life Autonomous Vehicles Data Analysis Factory Automation Financial Markets Intelligent Databases Knowledge Engineering Machine Vision Pattern Recognition Machine Learning Neurobiological Systems Control Systems Optimisation Parallel and Distributed Computing Robotics Prediction Sensorimotor Systems Signal Processing Speech Processing Virtual Reality INFORMATION ANZIIS-94 Secretariat School of Computing Science Queensland University of Technology GPO Box 2434 Brisbane, Q 4001, Australia. Telephone: + 61 7 864 2925 Fax: + 61 7 864 1801 e-mail: anziis94 at qut.edu.au SUBMISSION OF PAPERS For speedy processing of the papers, authors are requested to submit their contributions camera-ready, on paper and by mail only. Papers should be laser printed on A4 size pages with 25 mm margins on all four sides, using a Roman font not smaller than 10 points. The maximum allowed length of an article is 5 pages. The paper should be set in two-column format, using the LaTeX "article" style or following the style of the IEEE Transactions journals.
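For concreteness, a minimal sketch of a page setup consistent with these format instructions follows. It is an editorial illustration only, not part of the official call: the geometry package is a modern convenience (a 1994 submission would more likely have used LaTeX 2.09 and \documentstyle), and the title and author fields are placeholders.

\documentclass[10pt,twocolumn,a4paper]{article}
\usepackage[margin=25mm]{geometry} % A4 pages, 25 mm margins on all four sides
\begin{document}
\title{Paper Title (maximum length: 5 pages)}
\author{First Author\\Affiliation\\Complete mailing address}
\maketitle
\begin{abstract}
Abstract text, as required.
\end{abstract}
% Body set in two-column Roman type, not smaller than 10 points.
\end{document}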
The papers should contain an abstract and the complete mailing addresses of the authors. Papers will be reviewed internationally. Accepted articles will be published as submitted, as there is no opportunity for revision. Only those papers for which the presenting author has registered as a conference delegate will be printed in the proceedings. Extra copies of the Proceedings will be marketed through the IEEE book brokerage program.

IMPORTANT DATES
Papers due: 15 July 1994
Tutorial proposals due: 15 July 1994
Notification of acceptance: 15 September 1994
Registration for authors due: 1 October 1994

FEES
Before 1 Oct:
Member of IEEE/IEAust/ACS A$400
Other A$450
Student member of IEEE/IEAust/ACS A$150
Other Student A$200
After 1 Oct:
Member of IEEE/IEAust/ACS A$450
Other A$500
Student member of IEEE/IEAust/ACS A$200
Other Student A$250

GOVERNMENT TRAINING LEVY The conference programme will meet the requirements of the Australian Government Training Levy for inclusion in an employer's training programme. ANZIIS-94 ORGANISED BY IEEE Australia Council IEEE New Zealand Council IEEE Queensland Section IN CO-OPERATION WITH IEAust - The Institution of Engineers, Australia Australian Computer Society Queensland University of Technology - School of Computing Science ORGANISING COMMITTEE Dr. J. Sitte, Queensland University of Technology, General Conference Chair Dr. W. Boles, Queensland University of Technology Mr. S. Ellis, IEEE Queensland Section Dr. S. Geva, Queensland University of Technology Mr. R. Prandolini, IEEE Queensland Chapter Ms. R. Sitte, Griffith University - Nathan Mr. C. Thorne, Griffith University - Gold Coast Dr. R. Zurawski, Swinburne University of Technology Prof. Y. Attikiouzel, University of Western Australia, Advisory Committee Chair Dr. Nicola Kasabov, University of Otago, New Zealand Liaison Chair TECHNICAL COMMITTEE Dr. J. Andreae, University of Canterbury, New Zealand Prof. S. Bang, Pohang Institute of Science and Technology, Korea Prof. B. Boashash, Queensland University of Technology, Australia Ms. A. Bowles, BHP Research Laboratories, Australia Prof. T. Caelli, University of Melbourne, Australia Dr. L. Cahill, La Trobe University, Australia Dr. G. Coghill, University of Auckland, New Zealand Prof. A. Constantinides, Imperial College, U.K. Dr. J. Cybulski, La Trobe University, Australia Prof. T. Dillon, La Trobe University, Australia Prof. T. Downs, University of Queensland, Australia Prof. R. Evans, The University of Melbourne, Australia Prof. N. Foo, University of Sydney, Australia Prof. T. Fukuda, Nagoya University, Japan Prof. R. Hodgson, Massey University, New Zealand Mr. A. Horsfall, Fujitsu Australia Ltd., Australia Prof. H. Hsu, National Taiwan University, Taiwan Prof. R. Jarvis, Monash University, Australia Dr. A. Jennings, Telecom Research Laboratories, Australia Dr. J. Kacprzyk, Polish Academy of Sciences, Poland Prof. S. Kollias, National Technical University of Athens, Greece Prof. B. Kosco, University of Southern California, USA Dr. A. Kowalczyk, Telecom Research Laboratories Dr. H.C. Lui, National University of Singapore, Singapore Prof. T. Mitchell, Carnegie Mellon University, USA Dr. J. Morris, University of Tasmania, Australia Dr. D. Nandagopal, DSTO, Australia Prof. T. Nguyen, University of Tasmania, Australia Dr. M. Palaniswami, The University of Melbourne, Australia Prof. L. Patnaik, Indian Institute of Science, India Dr. P.K. Simpson, Orincon Corp., San Diego, USA Prof. A.C. Tsoi, University of Queensland, Australia Dr. R. Uthurusamy, GM Research Labs, USA Prof. A.
Venetsanopoulos, University of Toronto, Canada Prof. K. Wong, The University of Western Australia, Australia Dr. A. Zomaya, The University of Western Australia, Australia Prof. J. Zurada, University of Louisville, USA

From esann at dice.ucl.ac.be Fri Mar 25 12:26:53 1994 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Fri, 25 Mar 94 18:26:53 +0100 Subject: ESANN'94: European Symposium on ANNs Message-ID: <9403251726.AA16937@ns1.dice.ucl.ac.be>

******************************************************************
*                European Symposium                              *
*                on Artificial Neural Networks                   *
*                                                                *
*       Brussels (Belgium) - April 20-21-22, 1994                *
*                                                                *
*            PROGRAM and REGISTRATION FORM                       *
******************************************************************

Foreword ******** Current developments in the field of artificial neural networks mark a watershed in its relatively young history. Far from the blind passion for disparate applications of some years ago, the tendency is now towards an objective assessment of this emerging technology, with a better knowledge of the basic concepts, and more appropriate comparisons and links with classical methods of computing. Neural networks are not restricted to the use of back-propagation and multi-layer perceptrons. Self-organization, adaptive signal processing, vector quantization, classification, statistics, image and speech processing are some of the domains where neural network techniques may be successfully used; but beneficial use requires an in-depth examination of both the theoretical basis of the neural techniques and the standard methods commonly used in the specified domain. ESANN'94 is the second symposium covering these aspects of neural network computing. After a successful edition in 1993, ESANN'94 will open new perspectives by focusing on theoretical and mathematical aspects of neural networks, biologically-inspired models, statistical aspects, and relations between neural networks and both information and signal processing (classification, vector quantization, self-organization, approximation of functions, image and speech processing,...). The steering and program committees of ESANN'94 are pleased to invite you to participate in this symposium. More than a formal conference presenting the latest developments in the field, ESANN'94 will also be a forum for open discussions, round tables and opportunities for future collaborations. We hope to have the pleasure of meeting you in April, in the splendid town of Brussels, and that your stay in Belgium will be as scientifically beneficial as it is agreeable.

Symposium information *********************

Registration fees for symposium -------------------------------
                registration before    registration after
                18th March 1994        18th March 1994
Universities    BEF 14500              BEF 15500
Industries      BEF 18500              BEF 19500

Registration fees include attendance at all sessions, the ESANN'94 banquet, a copy of the conference proceedings, daily lunches (20-22 April '94), and coffee breaks twice a day during the symposium. Advance registration is mandatory. Young researchers may apply for grants offered by the European Community (restricted to citizens or residents of a Western European country or, tentatively, a Central or Eastern European country - deadline for applications: March 11th, 1994 - please write to the conference secretariat for details). Advance payments (see registration form) must be made to the conference secretariat by bank transfer in Belgian Francs (free of charges) or by sending a cheque (add BEF 500 for processing fees).
Language -------- The official language of the conference is English. It will be used for all printed material, presentations and discussions. Proceedings ----------- A copy of the proceedings will be provided to all Conference Registrants. All technical papers will be included in the proceedings. Additional copies of the proceedings (ESANN'93 and ESANN'94) may be purchased at the following rates: ESANN'94 proceedings: BEF 2000; ESANN'93 proceedings: BEF 1500. Add BEF 500 to any order for p.&p. and/or bank charges. Please write to the conference secretariat to order proceedings. Conference dinner ----------------- A banquet will be offered on Thursday 21st to all conference registrants in a famous and typical place of Brussels. Additional vouchers for the banquet may be purchased on Wednesday 20th at the conference. Cancellation ------------ If cancellation is received by 25th March 1994, 50% of the registration fees will be returned. Cancellations received after this date will not be entitled to any refund. General information ******************* Brussels, Belgium ----------------- Brussels is not only the host city of the European Commission and of hundreds of multinational companies; it is also a marvelous historical town, with typical quarters, famous monuments known throughout the world, and the splendid "Grand-Place". It is a cultural and artistic center, with numerous museums. Night life in Brussels is considerable. There are a lot of restaurants and pubs open late into the night, where typical Belgian dishes can be tasted with one of the more than 1000 different beers. Hotel accommodation ------------------- Special rates for participants of ESANN'94 have been arranged at the MAYFAIR HOTEL, a De Luxe 4-star hotel with 99 fully air-conditioned guest rooms, tastefully decorated to the highest standards of luxury and comfort. The hotel includes two restaurants, a bar and private parking. Public transportation (trams n° 93 & 94) goes directly from the hotel to the conference center (Parc stop). Single room BEF 2800; double room or twin room BEF 3500. Prices include breakfast, taxes and service. Rooms can only be confirmed upon receipt of the booking form (see the end of this booklet) and deposit. Located on the elegant Avenue Louise, the exclusive Hotel Mayfair is a short walk from the "uppertown" luxurious shopping district. Also nearby is the 14th-century Cistercian abbey and the magnificent "Bois de la Cambre" park with its open-air cafes - ideal for a leisurely stroll at the end of a busy day. HOTEL MAYFAIR tel: +32 2 649 98 00 381 av. Louise fax: +32 2 649 22 49 1050 Brussels - Belgium Conference location ------------------- The conference will be held at the "Chancellerie" of the Générale de Banque. A map is included in the printed programme. Générale de Banque - Chancellerie 1 rue de la Chancellerie 1000 Brussels - Belgium Conference secretariat D facto conference services tel: + 32 2 245 43 63 45 rue Masui fax: + 32 2 245 46 94 B-1210 Brussels - Belgium E-mail: esann at dice.ucl.ac.be PROGRAM OF THE CONFERENCE ************************* Wednesday 20th April 1994 ------------------------- 9H30 Registration 10H00 Opening session Session 1: Neural networks and chaos Chairman: M. Hasler (Ecole Polytechnique Fédérale de Lausanne, Switzerland) 10H10 "Concerning the formation of chaotic behaviour in recurrent neural networks" T. Kolb, K. Berns Forschungszentrum Informatik Karlsruhe (Germany) 10H30 "Stability and bifurcation in an autoassociative memory model" W.G. Gibson, J. Robinson, C.M.
Thomas University of Sydney (Australia) 10H50 Coffee break Session 2: Theoretical aspects 1 Chairman: C. Jutten (Institut National Polytechnique de Grenoble, France) 11H30 "Capabilities of a structured neural network. Learning and comparison with classical techniques" J. Codina, J. C. Aguado, J.M. Fuertes Universitat Politecnica de Catalunya (Spain) 11H50 "Projection learning: alternative approaches to the computation of the projection" K. Weigl, M. Berthod INRIA Sophia Antipolis (France) 12H10 "Stability bounds of momentum coefficient and learning rate in backpropagation algorithm" Z. Mao, T.C. Hsia University of California at Davis (USA) 12H30 Lunch Session 3: Links between neural networks and statistics Chairman: J.C. Fort (Université Nancy I, France) 14H00 "Model selection for neural networks: comparing MDL and NIC" G. te Brake*, J.N. Kok*, P.M.B. Vitanyi** *Utrecht University, **Centre for Mathematics and Computer Science, Amsterdam (Netherlands) 14H20 "Estimation of performance bounds in supervised classification" P. Comon*, J.L. Voz**, M. Verleysen** *Thomson-Sintra Sophia Antipolis (France), **Université Catholique de Louvain, Louvain-la-Neuve (Belgium) 14H40 "Input Parameters' estimation via neural networks" I.V. Tetko, A.I. Luik Institute of Bioorganic & Petroleum Chemistry, Kiev (Ukraine) 15H00 "Combining multi-layer perceptrons in classification problems" E. Filippi, M. Costa, E. Pasero Politecnico di Torino (Italy) 15H20 Coffee break Session 4: Algorithms 1 Chairman: J. Hérault (Institut National Polytechnique de Grenoble, France) 16H00 "Diluted neural networks with binary couplings: a replica symmetry breaking calculation of the storage capacity" J. Iwanski, J. Schietse Limburgs Universitair Centrum (Belgium) 16H20 "Storage capacity of the reversed wedge perceptron with binary connections" G.J. Bex, R. Serneels Limburgs Universitair Centrum (Belgium) 16H40 "A general model for higher order neurons" F.J. Lopez-Aligue, M.A. Jaramillo-Moran, I. Acedevo-Sotoca, M.G. Valle Universidad de Extremadura, Badajoz (Spain) 17H00 "A discriminative HCNN modeling" B. Petek University of Ljubljana (Slovenia) Thursday 21st April 1994 ------------------------ Session 5: Biological models Chairman: P. Lansky (Academy of Science of the Czech Republic) 9H00 "Biologically plausible hybrid network design and motor control" G.R. Mulhauser University of Edinburgh (Scotland) 9H20 "Analysis of critical effects in a stochastic neural model" W. Mommaerts, E.C. van der Meulen, T.S. Turova K.U. Leuven (Belgium) 9H40 "Stochastic model of odor intensity coding in first-order olfactory neurons" J.P. Rospars*, P. Lansky** *INRA Versailles (France), **Academy of Sciences, Prague (Czech Republic) 10H00 "Memory, learning and neuromediators" A.S. Mikhailov Fritz-Haber-Institut der MPG, Berlin (Germany), and Russian Academy of Sciences, Moscow (Russia) 10H20 "An explicit comparison of spike dynamics and firing rate dynamics in neural network modeling" F. Chapeau-Blondeau, N. Chambet Université d'Angers (France) 10H40 Coffee break Session 6: Algorithms 2 Chairman: T. Denoeux (Université Technologique de Compiègne, France) 11H10 "A stop criterion for the Boltzmann machine learning algorithm" B. Ruf Carleton University (Canada) 11H30 "High-order Boltzmann machines applied to the Monk's problems" M. Grana, V. Lavin, A. D'Anjou, F.X. Albizuri, J.A. Lozano UPV/EHU, San Sebastian (Spain) 11H50 "A constructive training algorithm for feedforward neural networks with ternary weights" F. Aviolat, E.
Mayoraz Ecole Polytechnique Fédérale de Lausanne (Switzerland) 12H10 "Synchronization in a neural network of phase oscillators with time delayed coupling" T.B. Luzyanina Russian Academy of Sciences, Moscow (Russia) 12H30 Lunch Session 7: Evolutive and incremental learning Chairman: T.J. Stonham (Brunel University, UK) - to be confirmed 14H00 "Reinforcement learning and neural reinforcement learning" S. Sehad, C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nîmes (France) 14H20 "Improving piecewise linear separation incremental algorithms using complexity reduction methods" J.M. Moreno, F. Castillo, J. Cabestany Universitat Politecnica de Catalunya (Spain) 14H40 "A comparison of two weight pruning methods" O. Fambon, C. Jutten Institut National Polytechnique de Grenoble (France) 15H00 "Extending immediate reinforcement learning on neural networks to multiple actions" C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nîmes (France) 15H20 "Incremental increased complexity training" J. Ludik, I. Cloete University of Stellenbosch (South Africa) 15H40 Coffee break Session 8: Function approximation Chairman: E. Filippi (Politecnico di Torino, Italy) - to be confirmed 16H20 "Approximation of continuous functions by RBF and KBF networks" V. Kurkova, K. Hlavackova Academy of Sciences of the Czech Republic 16H40 "An optimized RBF network for approximation of functions" M. Verleysen*, K. Hlavackova** *Université Catholique de Louvain, Louvain-la-Neuve (Belgium), **Academy of Science of the Czech Republic 17H00 "VLSI complexity reduction by piece-wise approximation of the sigmoid function" V. Beiu, J.A. Peperstraete, J. Vandewalle, R. Lauwereins K.U. Leuven (Belgium) 20H00 Conference dinner Friday 22nd April 1994 ---------------------- Session 9: Algorithms 3 Chairman: J. Vandewalle (K.U. Leuven, Belgium) - to be confirmed 9H00 "Dynamic pattern selection for faster learning and controlled generalization of neural networks" A. Röbel Technische Universität Berlin (Germany) 9H20 "Noise reduction by multi-target learning" J.A. Bullinaria Edinburgh University (Scotland) 9H40 "Variable binding in a neural network using a distributed representation" A. Browne, J. Pilkington South Bank University, London (UK) 10H00 "A comparison of neural networks, linear controllers, genetic algorithms and simulated annealing for real time control" M. Chiaberge*, J.J. Merelo**, L.M. Reyneri*, A. Prieto**, L. Zocca* *Politecnico di Torino (Italy), **Universidad de Granada (Spain) 10H20 "Visualizing the learning process for neural networks" R. Rojas Freie Universität Berlin (Germany) 10H40 Coffee break Session 10: Theoretical aspects 2 Chairman: M. Cottrell (Université Paris I, France) 11H20 "Stability analysis of diagonal recurrent neural networks" Y. Tan, M. Loccufier, R. De Keyser, E. Noldus University of Gent (Belgium) 11H40 "Stochastics of on-line back-propagation" T. Heskes University of Illinois at Urbana-Champaign (USA) 12H00 "A lateral contribution learning algorithm for multi MLP architecture" N. Pican*, J.C. Fort**, F. Alexandre* *INRIA Lorraine, **Université Nancy I (France) 12H20 Lunch Session 11: Self-organization Chairman: F. Blayo (EERIE Nîmes, France) 14H00 "Two or three things that we know about the Kohonen algorithm" M. Cottrell*, J.C. Fort**, G. Pagès*** Universités *Paris 1, **Nancy 1, ***Paris 6 (France) 14H20 "Decoding functions for Kohonen maps" M. Alvarez, A.
Varfis CEC Joint Research Center, Ispra (Italy) 14H40 "Improvement of learning results of the selforganizing map by calculating fractal dimensions" H. Speckmann, G. Raddatz, W. Rosenstiel University of Tübingen (Germany) 15H00 Coffee break Session 11 (continued): Self-organization Chairman: F. Blayo (EERIE Nîmes, France) 15H40 "A non linear Kohonen algorithm" J.-C. Fort*, G. Pagès** *Université Nancy 1, **Universités Pierre et Marie Curie, et Paris 12 (France) 16H00 "Self-organizing maps based on differential equations" A. Kanstein, K. Goser Universität Dortmund (Germany) 16H20 "Instabilities in self-organized feature maps with short neighbourhood range" R. Der, M. Herrmann Universität Leipzig (Germany)

ESANN'94 Registration and Hotel Booking Form ********************************************

Registration fees -----------------
                registration before    registration after
                18th March 1994        18th March 1994
Universities    BEF 14500              BEF 15500
Industries      BEF 18500              BEF 19500

University fees are applicable to members and students of academic and teaching institutions. Each registration will be confirmed by an acknowledgment of receipt, which must be given to the registration desk of the conference to get your entry badge, proceedings and all materials. Registration fees include attendance at all sessions, the ESANN'94 banquet, a copy of the conference proceedings, daily lunches (20-22 April '94), and coffee breaks twice a day during the symposium. Advance registration is mandatory. Students and young researchers from European countries may apply for European Community grants.

Hotel booking ------------- Hotel MAYFAIR (4 stars) - 381 av. Louise - 1050 Brussels
Single room : BEF 2800
Double room (large bed) : BEF 3500
Twin room (2 beds) : BEF 3500
Prices include breakfast, service and taxes. A deposit corresponding to the first night is mandatory.

Registration to ESANN'94 (please give full address and tick as appropriate) ----------------------------------------------------------------------------
Ms., Mr., Dr., Prof.: ...............................................
Name: ...............................................................
First Name: .........................................................
Institution: ........................................................
.....................................................................
Address: ............................................................
.....................................................................
ZIP: ................................................................
Town: ...............................................................
Country: ............................................................
Tel: ................................................................
Fax: ................................................................
E-mail: .............................................................
VAT n°: .............................................................
Universities:
O registration before 18th March 1994: BEF 14500
O registration after 18th March 1994: BEF 15500
Industries:
O registration before 18th March 1994: BEF 18500
O registration after 18th March 1994: BEF 19500

Hotel Mayfair booking (please tick as appropriate)
O single room deposit: BEF 2800
O double room (large bed) deposit: BEF 3500
O twin room (twin beds) deposit: BEF 3500
Arrival date: ..../..../1994 Departure date: ..../..../1994
O Additional payment if fees are paid by cheque drawn on a foreign bank: BEF 500
Total BEF ____

Payment (please tick):
O Bank transfer, stating the name of the participant, made payable to: Générale de Banque, ch. de Waterloo 1341 A, B-1180 Brussels - Belgium, Acc. no: 210-0468648-93 of D facto (45 rue Masui, B-1210 Brussels). Bank transfers must be free of charges. ANY CHARGES MUST BE PAID BY THE PARTICIPANT.
O Cheques/Postal Money Orders made payable to: D facto, 45 rue Masui, B-1210 Brussels - Belgium. A SUPPLEMENTARY FEE OF BEF 500 MUST BE ADDED if the payment is made by cheque drawn on a foreign bank or by postal money order.

Only registrations accompanied by a cheque, a postal money order or the proof of bank transfer will be considered. The registration and hotel booking form, together with payment, must be sent as soon as possible, and in no case later than 8th April 1994, to the conference secretariat:

&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
& D facto conference services - ESANN'94        &
& 45, rue Masui - B-1210 Brussels - Belgium     &
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

Support ******* ESANN'94 is organized with the support of: - Commission of the European Communities (DG XII, Human Capital and Mobility programme) - IEEE Region 8 - IFIP WG 10.6 on neural networks - Region of Brussels-Capital - EERIE (Ecole pour les Etudes et la Recherche en Informatique et Electronique - Nîmes) - UCL (Université Catholique de Louvain - Louvain-la-Neuve) - REGARDS (Research Group on Algorithmic, Related Devices and Systems - UCL) Steering committee ****************** François Blayo EERIE, Nîmes (F) Marie Cottrell Univ. Paris I (F) Nicolas Franceschini CNRS Marseille (F) Jeanny Hérault INPG Grenoble (F) Michel Verleysen UCL Louvain-la-Neuve (B) Scientific committee ******************** Luis Almeida INESC - Lisboa (P) Jorge Barreto UCL Louvain-en-Woluwe (B) Hervé Bourlard L. & H. Speech Products (B) Joan Cabestany Univ. Polit. de Catalunya (E) Dave Cliff University of Sussex (UK) Pierre Comon Thomson-Sintra Sophia (F) Holk Cruse Universität Bielefeld (D) Dante Del Corso Politecnico di Torino (I) Marc Duranton Philips / LEP (F) Jean-Claude Fort Université Nancy I (F) Karl Goser Universität Dortmund (D) Martin Hasler EPFL Lausanne (CH) Philip Husbands University of Sussex (UK) Christian Jutten INPG Grenoble (F) Petr Lansky Acad. of Science of the Czech Rep.
(CZ) Jean-Didier Legat UCL Louvain-la-Neuve (B) Jean Arcady Meyer Ecole Normale Supérieure - Paris (F) Erkki Oja Helsinki University of Technology (SF) Guy Orban KU Leuven (B) Gilles Pagès Université Paris I (F) Alberto Prieto Universidad de Granada (E) Pierre Puget LETI Grenoble (F) Ronan Reilly University College Dublin (IRE) Tamas Roska Hungarian Academy of Science (H) Jean-Pierre Rospars INRA Versailles (F) Jean-Pierre Royet Université Lyon 1 (F) John Stonham Brunel University (UK) Lionel Tarassenko University of Oxford (UK) John Taylor King's College London (UK) Vincent Torre Università di Genova (I) Claude Touzet EERIE Nîmes (F) Joos Vandewalle KUL Leuven (B) Eric Vittoz CSEM Neuchâtel (CH) Christian Wellekens Eurecom Sophia-Antipolis (F) _____________________________ Michel Verleysen D facto conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 E-mail: esann at dice.ucl.ac.be _____________________________

From hongchen at ndcvx.cc.nd.edu Sat Mar 26 02:31:58 1994 From: hongchen at ndcvx.cc.nd.edu (Hong Chen) Date: Sat, 26 Mar 94 02:31:58 -0500 Subject: Paper available in Neuroprose Message-ID: <9403260731.AA16126@ndcvx.cc.nd.edu> Ftp-Host: archive.cis.ohio-state.edu Ftp-Filename: /pub/neuroprose/chen.dynamic_approx.ps.Z The following paper chen.dynamic_approx.ps.Z (28 pages) is now available via anonymous ftp from the neuroprose archive. It appeared in the November 1993 issue of IEEE Transactions on Neural Networks. ---------------------------------------------------------------------- Approximations of Continuous Functionals by Neural Networks with Application to Dynamical Systems Tianping Chen Department of Mathematics Fudan University Shanghai, P.R. China Hong Chen VLSI Libraries, Inc. 3135 Kifer Road Santa Clara, CA 95052 USA ABSTRACT: The main concern of this paper is to give several strong results on neural network representation in an explicit form. Under very mild conditions, a functional defined on a compact set in C[a,b] or L^p[a,b], spaces of infinite dimension, can be approximated arbitrarily well by a neural network with one hidden layer. In particular, if U is a compact set in C[a,b], sigma is a bounded sigmoidal function, and f is a continuous functional defined on U, then for all u in U, f(u) can be approximated by the summation sum_{i=1}^N c_i sigma( sum_{j=0}^m xi_{i,j} u(x_j) + theta_i ), where c_i, xi_{i,j}, theta_i are real numbers and u(x_j) is the value of u evaluated at the point x_j. These results are a significant development beyond existing work, where theorems on approximating continuous functions defined on R^n, a space of finite dimension, by neural networks with one hidden layer were given. Finally, all the results are shown to be applicable to the approximation of the output of dynamical systems at any particular time.
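As an editorial illustration of the approximating form in this abstract (not part of the original announcement), the following sketch evaluates the stated summation; all names and parameter values are arbitrary, chosen only to show the computation.

import numpy as np

def sigma(z):
    # a bounded sigmoidal function; the logistic function is one example
    return 1.0 / (1.0 + np.exp(-z))

def approx_functional(u_vals, c, xi, theta):
    # u_vals[j] holds u(x_j), the input function sampled at m+1 points;
    # returns sum_i c_i * sigma( sum_j xi[i, j] * u(x_j) + theta_i )
    return float(np.sum(c * sigma(xi @ u_vals + theta)))

# illustrative use: N hidden units, m+1 sample points, random parameters
rng = np.random.default_rng(0)
N, m = 8, 10
c, theta = rng.normal(size=N), rng.normal(size=N)
xi = rng.normal(size=(N, m + 1))
u_vals = np.sin(np.linspace(0.0, 1.0, m + 1))  # u(x) = sin(x) sampled on [0, 1]
print(approx_functional(u_vals, c, xi, theta))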
----------------------------------------------------------------------
Instructions for retrieving this paper:
unix% ftp archive.cis.ohio-state.edu
ftp-login: anonymous
ftp-password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get chen.dynamic_approx.ps.Z
ftp> bye
unix% uncompress chen.dynamic_approx.ps.Z
unix% lpr chen.dynamic_approx.ps (or however you print postscript)

From hongchen at ndcvx.cc.nd.edu Sat Mar 26 02:28:59 1994 From: hongchen at ndcvx.cc.nd.edu (Hong Chen) Date: Sat, 26 Mar 94 02:28:59 -0500 Subject: Paper available in Neuroprose Message-ID: <9403260728.AA16099@ndcvx.cc.nd.edu> Ftp-Host: archive.cis.ohio-state.edu Ftp-Filename: /pub/neuroprose/chen.function_approx.ps.Z The following paper chen.function_approx.ps.Z (15 pages) is now available via anonymous ftp from the neuroprose archive. It has been accepted by IEEE Transactions on Neural Networks. ---------------------------------------------------------------------- Approximation Capability in C(R^n) by Multilayer Feedforward Networks and Related Problems Tianping Chen Department of Mathematics Fudan University Shanghai, P.R. China Hong Chen VLSI Libraries, Inc. 3135 Kifer Road Santa Clara, CA 95052 USA Ruey-wen Liu Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556 USA ABSTRACT: In this paper, we investigate the capability of approximating functions in C(R^n) by three-layered neural networks with a sigmoidal function in the hidden layer. It is found that the boundedness condition on the sigmoidal function plays an essential role in the approximation, in contrast to the continuity or monotonicity conditions. We point out that in order to prove the neural network approximation capability in the n-dimensional case, all one needs to do is to prove the case for one dimension. The approximation in L^p-norm and some related problems are also discussed.
----------------------------------------------------------------------
Instructions for retrieving this paper:
unix% ftp archive.cis.ohio-state.edu
ftp-login: anonymous
ftp-password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get chen.function_approx.ps.Z
ftp> bye
unix% uncompress chen.function_approx.ps.Z
unix% lpr chen.function_approx.ps (or however you print postscript)

From harry at brain.Jpl.Nasa.Gov Sun Mar 27 17:58:14 1994 From: harry at brain.Jpl.Nasa.Gov (Harry Langenbacher) Date: Sun, 27 Mar 1994 14:58:14 -0800 Subject: Neural Network Workshop Announcement - JPL Message-ID: <199403272258.OAA11218@brain.Jpl.Nasa.Gov> NEURAL NET WORKSHOP ANNOUNCEMENT "A Decade of Neural Networks: Practical Applications and Prospects" May 11 - 13, 1994 The Jet Propulsion Laboratory's Center for Space Microelectronics Technology (CSMT) is hosting this neural network workshop sponsored by DoD and NASA. After 10 years of renewed activity in neural network research, the technology has matured and stands at a crossroads regarding its future practical applicability. The focus of the workshop is to provide an avenue for sponsoring agencies, active researchers, and the user community to formulate a cohesive vision for the next decade of neural network research and applications. Such a plan will directly address relevance to US technology competitiveness in the global market. In order to maintain a balance among the participants, attendance is by invitation only. If interested in receiving an invitation, please contact Dr. Sabrina Kemeny (jplnn94 at brain.jpl.nasa.gov) as soon as possible, since space is limited. The workshop will begin at JPL at 1:00 p.m.
on Wednesday, May 11, 1994 and end at 10:00 a.m. on the 13th. Following two plenary sessions on the 11th and the morning of the 12th, we will split into working groups targeting three specific application areas. The splinter groups will focus on a government/industry investment strategy for future neural network research. The groups will address issues such as overcoming barriers impeding technology insertion and creating a better user-developer interface. Friday's session will include summaries from the splinter groups and a sponsor-industry assessment panel. A registration fee of $75.00 will include a welcome reception the first evening, dinner the second evening, coffee breaks, and a copy of the workshop proceedings. Invited speakers will highlight clear benefits of neural networks in real-world applications compared to conventional computing techniques. Topics such as fault diagnosis (vehicle engine health monitoring for automobiles), pattern recognition (document analysis), and multiparameter optimization (unsteady aerodynamic control) will be covered in the presentations. Invited Speakers * Josh Alspector, Bellcore * Dave Andes, NAWC * William Campbell, Goddard Space Flight Center * John Carson, Irvine Sensors Corporation * Laurence Clarke, University of South Florida * Dwight Duston, BMDO * William Faller, USAF Academy * Lee Feldkamp, Ford Motor Company * Erol Gelenbe, Duke University/IBM * Karl Goser, University of Dortmund * Hans Peter Graf, AT&T * Sandeep Gulati, Jet Propulsion Laboratory * Michael Henry, Martin Marietta Astronautics * Christof Koch, California Institute of Technology * Peter Lichtenwalner, McDonnell Douglas * Kenneth Marko, Ford Motor Company * William Miceli, ONR * Steven Rogers, Air Force Institute of Technology * Joseph Sgro, Alacron Inc. * Bing Sheu, University of Southern California * Padraic Smyth, Jet Propulsion Laboratory * Simon Tam, Intel Corporation For further information please contact Dr. Sabrina Kemeny Phone: (818) 354-0660, Fax: (818) 393-4540, Email: jplnn94 at brain.jpl.nasa.gov Postal address: Mail Stop 302-231 Jet Propulsion Laboratory 4800 Oak Grove Dr. Pasadena, CA 91109-8099

From eppler at hpesun4.kfk.de Mon Mar 28 05:48:46 1994 From: eppler at hpesun4.kfk.de (Wolfgang Eppler) Date: Mon, 28 Mar 94 10:58:46+010 Subject: PhD position Message-ID: <9403280958.AA11254@hpesun4.kfk.de> A Ph.D. position is available at the Nuclear Research Center Karlsruhe. The department of Data Processing and Electronics has a small group working with neuro-fuzzy methods. One working domain is adaptive control; a second is applications in pattern recognition. Applicants with expertise in the latter domain and very good certificates are encouraged to send me mail within the next few days. The appointment is restricted to three years; the salary is moderate. Applications to: eppler at hpesun3.kfk.de, or KfK, HPE-TTL c/o Wolfgang Eppler Postfach 3640 D-76021 Karlsruhe Germany

From antonio at gsc.ele.puc-rio.br Mon Mar 28 16:08:40 1994 From: antonio at gsc.ele.puc-rio.br (Antonio J. G. Abelem [Marco]) Date: Mon, 28 Mar 94 16:08:40 EST Subject: Input/Output Data Conversion in BackProp. Message-ID: <9403281908.AA03167@Cygnus > I'm using neural networks to predict financial time series, specifically the gold-prices' time series. I have experienced some problems with the conversion scheme used to present data to the network. I have mainly used LINEAR CONVERSION (original data values converted to the range [0, 1] or [-1, +1]), but it does not seem to work properly.
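A minimal sketch of this kind of linear conversion, assuming simple min-max rescaling fitted on the training series; the function name and the commented-out data file are illustrative only, not from the original posting.

import numpy as np

def linear_convert(x, lo=-1.0, hi=1.0):
    """Map a series onto [lo, hi] with a min-max linear transform.
    Returns the converted series and a function that inverts the mapping."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    scale = (hi - lo) / (xmax - xmin)
    def invert(z):
        return xmin + (np.asarray(z) - lo) / scale
    return lo + (x - xmin) * scale, invert

# Fit the transform on the training segment only; otherwise information
# about future prices leaks into the network's inputs and targets.
# train = np.loadtxt("gold_train.txt")      # hypothetical data file
# scaled, invert = linear_convert(train)    # network targets now in [-1, +1]
# price_prediction = invert(net_output)     # map outputs back to price units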
I have also had some attempts with: a) input data in its original value; b) in its derivative form; c) the input minus the data average; d) the percent difference between inputs Ti+1 and Ti. However, for all these cases the target patterns need to be converted before being presented to the network, since its output is between 0 and 1 (for the sigmoid) or between -1 and 1 (for the hyperbolic tangent). My results for the single-step mode are good, but I think they could be better. On the other hand, for the multi-step mode, the results are very bad. Any suggestions on this would be much appreciated. Thanks. Antonio

From announce at PARK.BU.EDU Mon Mar 28 15:39:57 1994 From: announce at PARK.BU.EDU (announce@PARK.BU.EDU) Date: Mon, 28 Mar 94 15:39:57 -0500 Subject: Call for Articles: Automatic Target Recognition issue, Neural Networks Message-ID: <9403282039.AA24040@retina.bu.edu> ***** CALL FOR PAPERS ***** 1995 Special Issue of the journal Neural Networks on "Neural Networks for Automatic Target Recognition" ATR is a many-faceted problem of tremendous importance in industrial and defense applications. Biological systems excel at these tasks, and neural networks may provide a robust, real-time, and compact means for achieving solutions to ATR problems. ATR systems utilize a host of sensing modalities (visible, multispectral, IR, SAR, and ISAR imagery; radar, sonar, and acoustic time series; and fusion of multiple sensing modalities) in order to detect and track targets in clutter, and classify them. This Special Issue will bring together a broad range of invited and contributed articles that explore a variety of software and hardware modules and systems, and biological inspirations, focused on solving ATR problems. We particularly welcome articles involving applications to real data, though the journal cannot publish classified material. It will be the responsibility of the submitting authors to ensure that all submissions are of an unclassified nature. Co-Editors: ----------- Professor Stephen Grossberg, Boston University Dr. Harold Hawkins, Office of Naval Research Dr. Allen Waxman, MIT Lincoln Laboratory Submission: ----------- Deadline for submission: October 31, 1994 Notification of acceptance: January 15, 1995 Format: as for normal papers in the journal (APA format) and no longer than 10,000 words Address for Papers: ------------------- Professor Stephen Grossberg Editor, Neural Networks Boston University Department of Cognitive and Neural Systems 111 Cummington Street Room 244 Boston, MA 02215 USA

From sef+ at cs.cmu.edu Mon Mar 28 21:11:32 1994 From: sef+ at cs.cmu.edu (Scott E. Fahlman) Date: Mon, 28 Mar 94 21:11:32 EST Subject: Input/Output Data Conversion in BackProp. In-Reply-To: Your message of Mon, 28 Mar 94 16:08:40 -0500. <9403281908.AA03167@Cygnus > Message-ID: I'm using neural networks to predict financial time series, specifically the gold-prices' time series. I have mainly used LINEAR CONVERSION (original data values converted to the range [0, 1] or [-1, +1]), but it does not seem to work properly. My results for the single-step mode are good, but I think they could be better. On the other hand, for the multi-step mode, the results are very bad. With so little information about your problem or the architecture (backprop?) you are using, it is impossible to diagnose the problem. Perhaps the net is doing as well as might be expected for this data set. Perhaps you have chosen a poor net topology or parameters. One thing does jump out, however: if you are trying to produce a continuous-valued output, you might be better off with linear output units than with sigmoids. Then you won't have to pre-scale your data to fit the sigmoid's range, and you won't be getting distortion due to gratuitous nonlinearities. -- Scott =========================================================================== Scott E. Fahlman Internet: sef+ at cs.cmu.edu Senior Research Scientist Phone: 412 268-2575 School of Computer Science Fax: 412 681-5739 Carnegie Mellon University Latitude: 40:26:33 N 5000 Forbes Avenue Longitude: 79:56:48 W Pittsburgh, PA 15213 ===========================================================================
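Fahlman's point about linear output units can be made concrete with a toy regressor: one tanh hidden layer feeding a single linear output, trained by plain gradient descent on squared error. This is only an editorial sketch (it is not Fahlman's own code, nor his Quickprop algorithm); every name and default value here is illustrative.

import numpy as np

def train_regressor(X, y, n_hidden=5, lr=0.01, epochs=2000, seed=0):
    """Tanh hidden layer, linear output unit, squared-error loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.1, size=n_hidden)
    b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # hidden activations
        err = (h @ w2 + b2) - y                  # linear output minus target
        g_h = np.outer(err, w2) * (1.0 - h**2)   # backprop through tanh
        w2 -= lr * (h.T @ err) / n
        b2 -= lr * err.mean()
        W1 -= lr * (X.T @ g_h) / n
        b1 -= lr * g_h.mean(axis=0)
    return lambda Xnew: np.tanh(Xnew @ W1 + b1) @ w2 + b2

Because the output unit is linear, the targets can stay in their original units (e.g. prices), though rescaling the inputs may still help conditioning.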
From eppler at hpesun4.kfk.de Tue Mar 29 09:02:06 1994 From: eppler at hpesun4.kfk.de (Wolfgang Eppler) Date: Tue, 29 Mar 94 14:12:06+010 Subject: PhD pos Message-ID: <9403291312.AA11835@hpesun4.kfk.de> Sorry, yesterday I offered a PhD position on this mailing list. That was not quite correct. The open position is for PhD students interested in working towards a doctoral thesis. Sorry for the misunderstanding. W. Eppler

From aminai at thor.ece.uc.edu Tue Mar 29 12:12:49 1994 From: aminai at thor.ece.uc.edu (Ali Minai) Date: Tue, 29 Mar 1994 12:12:49 -0500 Subject: Input/Output Data Conversion in BackProp. Message-ID: <199403291712.MAA01961@holmes.ece.uc.edu> If you are getting good results without rescaling the input, you could use linear output neurons to give you a corresponding dynamic range on the output side. However, a more interesting issue might be to explain the difference (if any) in prediction quality between the rescaled and unscaled cases. Is it because the data has a strange distribution? For example, if very small differences in the real data can lead to significantly different consequences, rescaling might be losing important information. Or you might just need to use a faster learning rate to make up for smaller gradient magnitudes in the rescaled case. If your 1-step predictions are good, you can use these to bootstrap up to longer term ones. The simplest way is to feed back the predicted output into the network input, but better results can probably be obtained as follows: train a 1-step predictor and a 2-step predictor; configure the 1-step predictor to produce 2-step predictions through re-iteration; then combine the two 2-step predictors to produce an averaged/weighted 2-step prediction; iterate on this 2-step predictor to produce longer term predictions. This method can be repeated for 4-step predictors using a direct 4-step predictor and the twice-iterated configuration of the 2-step predictor described above. I'm sure many people must have used similar methods (I have), but I refer you to an excellent paper by Tim Sauer: T. Sauer, "Time Series Prediction by Using Delay Coordinate Embedding", in TIME SERIES PREDICTION, A.S. Weigend & N.A. Gershenfeld (eds.), Addison-Wesley, 1994. He mentions the method in the context of non-neural time series prediction, but the applicability to neural net predictors is obvious. Ali Minai
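The simple feed-back scheme described first above (predict one step, then slide the prediction back into the input window) might look like the following sketch; the averaging of a direct and an iterated 2-step predictor is indicated only in the closing comment, and all names are illustrative.

import numpy as np

def iterated_forecast(predict_one_step, recent, n_steps):
    """Closed-loop multi-step forecast: each 1-step prediction is fed
    back into the network's input window to produce the next one."""
    window = list(recent)             # the last d observations, oldest first
    preds = []
    for _ in range(n_steps):
        y = float(predict_one_step(np.asarray(window)))
        preds.append(y)
        window = window[1:] + [y]     # slide the window forward one step
    return preds

# Combining predictors as suggested above (both predictor names hypothetical):
# two_step = 0.5 * (direct_2step(x) + iterated_forecast(one_step, x, 2)[-1])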
From wahba at stat.wisc.edu Tue Mar 29 12:50:35 1994
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Tue, 29 Mar 94 11:50:35 -0600
Subject: gold-prices' time series, conversion to [0,1]
Message-ID: <9403291750.AA21074@hera.stat.wisc.edu>

    I'm using neural networks to predict financial time series,
    specifically the gold-prices' time series. I have mainly used LINEAR
    CONVERSION (original data values converted to the range [0,1] or
    [-1,+1]), but it does not seem to work properly. My results for the
    single-step mode are good, but I think they could be better. On the
    other hand, for the multi-step, the results are very bad.

You might look at

AUTHOR = {D. McCaffrey and S. Ellner and A. R. Gallant and D. Nychka},
TITLE = {Estimating the Lyapunov exponent of a chaotic system with nonparametric regression},
JOURNAL = {J. Amer. Statist. Assoc.},
YEAR = {1992},
VOLUME = {87},
PAGES = {682-695}

Among other things, they consider the model

    x_t = f(x_{t-1}, ..., x_{t-d}) + \sigma \epsilon_t

and look at the estimation of f(., ..., .) using thin plate splines and other radial basis functions.

Grace Wahba
wahba at stat.wisc.edu

From elman at crl.ucsd.edu Tue Mar 29 14:10:29 1994
From: elman at crl.ucsd.edu (Jeff Elman)
Date: Tue, 29 Mar 94 11:10:29 PST
Subject: Postdoc announcement: CRL/UCSD
Message-ID: <9403291910.AA05158@crl.ucsd.edu>

CENTER FOR RESEARCH IN LANGUAGE
UNIVERSITY OF CALIFORNIA, SAN DIEGO

ANNOUNCEMENT OF POSTDOCTORAL FELLOWSHIPS

Applications are invited for postdoctoral fellowships in Language, Communication and Brain at the Center for Research in Language at the University of California, San Diego. The fellowships are supported by the National Institutes of Health (NIDCD), and provide an annual stipend ranging from $19,608 to $32,300 depending upon years of postdoctoral experience. In addition, some funding is provided for medical insurance and travel.

The program provides interdisciplinary training in: (1) psycholinguistics, including language processing in adults and language development in children; (2) communication disorders, including childhood language disorders and adult aphasia; (3) electrophysiological studies of language, and (4) neural network models of language learning and processing. Candidates are expected to work in at least one of these four areas. Grant conditions require that candidates be citizens or permanent residents of the U.S.

Applicants should send a statement of interest, three letters of recommendation, a curriculum vitae and copies of relevant publications to:

Jan Corte
Center for Research in Language 0526
University of California, San Diego
9500 Gilman Drive
La Jolla, California 92093-0526
(619) 534-2536

Women and minority candidates are specifically invited to apply.

From mm at santafe.edu Tue Mar 29 18:24:42 1994
From: mm at santafe.edu (Melanie Mitchell)
Date: Tue, 29 Mar 94 16:24:42 MST
Subject: papers available
Message-ID: <9403292324.AA10868@wupatki>

The following papers are available via anonymous ftp:

The Evolution of Emergent Computation

James P. Crutchfield        Melanie Mitchell
UC Berkeley                 Santa Fe Institute
Santa Fe Institute

Santa Fe Institute Working Paper 94-03-012
Submitted to Science, March 1994

Abstract

A simple evolutionary process can discover sophisticated methods for emergent information processing in decentralized spatially-extended systems. The mechanisms underlying the resulting emergent computation are explicated by a novel technique for analyzing particle-based logic embedded in pattern-forming systems. Understanding how globally-coordinated computation can emerge in evolution is relevant both for the scientific understanding of natural information processing and for engineering new forms of parallel computing systems.
To obtain an electronic copy of this paper (9 pages):

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get EvEmComp.ps.Z
quit

Then at your system:

uncompress EvEmComp.ps.Z
lpr EvEmComp.ps

If you cannot obtain an electronic copy, send a request for a hard copy to pdb at santafe.edu.

-----------------------------------------------

A Genetic Algorithm Discovers Particle-Based Computation in Cellular Automata

Rajarshi Das          Melanie Mitchell      James P. Crutchfield
Santa Fe Institute    Santa Fe Institute    UC Berkeley
                                            Santa Fe Institute

Santa Fe Institute Working Paper 94-03-015
Submitted to the Third Parallel Problem-Solving From Nature Conference, March 1994

Abstract

How does evolution produce sophisticated emergent computation in systems composed of simple components limited to local interactions? To model such a process, we used a genetic algorithm (GA) to evolve cellular automata to perform a computational task requiring globally-coordinated information processing. On most runs a class of relatively unsophisticated strategies was evolved, but on a subset of runs a number of quite sophisticated strategies was discovered. We analyze the emergent logic underlying these strategies in terms of information processing performed by ``particles'' in space-time, and we describe in detail the generational progression of the GA evolution of these strategies. Our analysis is a preliminary step in understanding the general mechanisms by which sophisticated emergent computational capabilities can be automatically produced in decentralized multiprocessor systems.

To obtain an electronic copy of this paper (13 pages): (The electronic version of this paper has been broken up into four parts to facilitate printing.)

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get GA-Particle.part1.ps.Z
get GA-Particle.part2.ps.Z
get GA-Particle.part3.ps.Z
get GA-Particle.part4.ps.Z
quit

Then at your system:

uncompress GA-Particle.part1.ps.Z
uncompress GA-Particle.part2.ps.Z
uncompress GA-Particle.part3.ps.Z
uncompress GA-Particle.part4.ps.Z
lpr GA-Particle.part1.ps
lpr GA-Particle.part2.ps
lpr GA-Particle.part3.ps
lpr GA-Particle.part4.ps

If you cannot obtain an electronic copy, send a request for a hard copy to pdb at santafe.edu.

From qian at ai.mit.edu Tue Mar 29 19:29:48 1994
From: qian at ai.mit.edu (Ning Qian)
Date: Tue, 29 Mar 94 19:29:48 EST
Subject: Postdoc Position in Comp. Neurosci.
Message-ID: <9403300029.AA05241@peduncle>

Postdoctoral Position in Computational Neuroscience
Center for Neurobiology and Behavior
Columbia University

A postdoctoral position is now available in the Center for Neurobiology and Behavior at Columbia University. The position is for someone who is interested in computational modeling and/or visual psychophysics of motion analysis, stereoscopic depth perception and/or motion-stereo integration in biological visual systems. Opportunities for modeling other neural systems are also available. Please submit a CV, representative publications and two letters of reference to:

Dr. Ning Qian
Center for Neurobiology and Behavior
Columbia University
722 W. 168th St., Annex #730
New York, NY 10032

If you have questions or need further information, please feel free to send me email at qian at ai.mit.edu, or call me at (212) 960-2213 or (212) 960-2561.
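The experimental loop behind the two Santa Fe abstracts above (a genetic algorithm searching the space of radius-3 cellular automaton rule tables for a task requiring global coordination) can be sketched in a few dozen lines of Python. Everything here is a toy stand-in of mine: the density-classification task is the kind of task used in this line of work, but the lattice size, population size, one-point crossover and all other parameters are placeholders far smaller than in the papers.

    import random

    R, CELLS, STEPS = 3, 59, 50        # neighbourhood radius, ring size, CA updates
    BITS = 2 ** (2 * R + 1)            # 128-entry rule table

    def step(state, rule):
        """One synchronous update of a binary CA on a ring."""
        n = len(state)
        out = []
        for i in range(n):
            idx = 0
            for j in range(i - R, i + R + 1):
                idx = (idx << 1) | state[j % n]
            out.append(rule[idx])
        return out

    def fitness(rule, trials=10):
        """Fraction of random initial conditions that relax to the
        uniform state matching the initial majority."""
        ok = 0
        for _ in range(trials):
            state = [random.randint(0, 1) for _ in range(CELLS)]
            majority = int(sum(state) * 2 > CELLS)
            for _ in range(STEPS):
                state = step(state, rule)
            ok += all(c == majority for c in state)
        return ok / float(trials)

    def evolve(pop_size=10, generations=5):
        pop = [[random.randint(0, 1) for _ in range(BITS)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:pop_size // 2]            # keep the better half
            while len(elite) < pop_size:
                a, b = random.sample(pop[:pop_size // 2], 2)
                cut = random.randrange(BITS)
                child = a[:cut] + b[cut:]          # one-point crossover
                child[random.randrange(BITS)] ^= 1 # point mutation
                elite.append(child)
            pop = elite
        return max(pop, key=fitness)

The papers' contribution is what happens inside such a loop: analysing the space-time "particles" that successful evolved rules use, which the sketch of course says nothing about.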
From mozer at neuron.cs.colorado.edu Wed Mar 30 15:24:53 1994
From: mozer at neuron.cs.colorado.edu (Mike Mozer)
Date: Wed, 30 Mar 94 13:24:53 -0700
Subject: TR announcement -- neural net music composition
Message-ID: <199403302024.OAA19267@neuron.cs.colorado.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/mozer.musiccomp.ps.Z

Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multiscale processing

Michael C. Mozer
Department of Computer Science and Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430

ABSTRACT: In algorithmic music composition, a simple technique involves selecting notes sequentially according to a transition table that specifies the probability of the next note as a function of the previous context. I describe an extension of this transition table approach using a recurrent autopredictive connectionist network called CONCERT. CONCERT is trained on a set of pieces with the aim of extracting stylistic regularities. CONCERT can then be used to compose new pieces. A central ingredient of CONCERT is the incorporation of psychologically-grounded representations of pitch, duration, and harmonic structure. CONCERT was tested on sets of examples artificially generated according to simple rules and was shown to learn the underlying structure, even where other approaches failed. In larger experiments, CONCERT was trained on sets of J. S. Bach pieces and traditional European folk melodies and was then allowed to compose novel melodies. Although the compositions are occasionally pleasant, and are preferred over compositions generated by a third-order transition table, the compositions suffer from a lack of global coherence. To overcome this limitation, several methods are explored to permit CONCERT to induce structure at both fine and coarse scales. In experiments with a training set of waltzes, these methods yielded limited success, but the overall results cast doubt on the promise of note-by-note prediction for composition.

32 pages total

TO APPEAR IN _Connection Science_ special issue on music and creativity, 1994.
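For readers unfamiliar with the transition-table baseline CONCERT is compared against, a toy version takes only a few lines of Python; the table below is a made-up example of mine, not anything trained:

    import random

    # P(next note | two previous notes); closed under its own transitions.
    TABLE = {
        ("C", "E"): {"G": 1.0},
        ("E", "G"): {"C": 0.6, "A": 0.4},
        ("G", "C"): {"E": 1.0},
        ("G", "A"): {"G": 0.5, "E": 0.5},
        ("A", "G"): {"C": 1.0},
        ("A", "E"): {"G": 1.0},
    }

    def next_note(context):
        """Sample a note from the table's distribution for this context."""
        r, acc = random.random(), 0.0
        for note, p in TABLE[context].items():
            acc += p
            if r <= acc:
                return note
        return note                    # guard against rounding error

    def compose(start, length):
        melody = list(start)
        while len(melody) < length:
            melody.append(next_note(tuple(melody[-2:])))
        return melody

    print(compose(("C", "E"), 16))

The lack-of-global-coherence problem the abstract describes is visible even here: each step looks only two notes back, so nothing constrains the phrase structure of the melody as a whole.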
From jlm at crab.psy.cmu.edu Wed Mar 30 18:30:10 1994
From: jlm at crab.psy.cmu.edu (James L. McClelland)
Date: Wed, 30 Mar 94 18:30:10 EST
Subject: TR: Complementary Learning Systems in Hippocampus and Neocortex
Message-ID: <9403302330.AA19368@crab.psy.cmu.edu.psy.cmu.edu>

The following Technical Report is available either electronically from our own FTP server or in hard copy form. Instructions for obtaining copies may be found at the end of this post.

========================================================================

Why there are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory

James L. McClelland, Bruce L. McNaughton & Randall C. O'Reilly
Carnegie Mellon University & The University of Arizona

Technical Report PDP.CNS.94.1
March, 1994

The influence of prior experience on some forms of behavior and cognition is drastically affected by damage to the hippocampal system. However, if the hippocampal system is left intact both during the experience and for a period of time thereafter, subsequent damage can have much less or even no effect. Such findings suggest that memory traces change over time in a way that makes them less dependent on the hippocampal system. This process of change has often been called consolidation. Consolidation is a very gradual process; in humans, it appears to span up to 15 years. This article asks what consolidation is and why it occurs. We take as our point of departure the view that the initial memory trace that results from a relevant experience consists of changes to the strengths of the connections among neurons in the hippocampal system. Bidirectional connections between the neocortex and the hippocampus allow these initial traces to mediate the reinstatement of representations of events or experiences in the neocortex. Consolidation results from the cumulative effects of small, incremental changes to connections among neurons in the neocortex that occur each time such a representation is reinstated. This view leads to two key questions: 1) Why are plastic changes made initially in the hippocampus, if ultimately the substrate of a consolidated memory lies in the neocortex? 2) Why does consolidation span such an extended period of time? Insights from connectionist network models of learning and memory provide one set of possible answers to these questions. These models consist of networks of simple processing units and weighted connections among the units, and they offer procedures for discovering what weights or values to use on the connections so that the network can capture the structure present in ensembles of events and experiences drawn from some domain. These connection weights then provide the basis for appropriate generalization to novel examples from the same domain. Crucially, the success of these procedures depends on interleaved learning: making only very small changes to the connection weights on each learning trial, so that the overall direction of weight change can be governed by the structure of the domain rather than the individual examples. The sequential acquisition of new data is incompatible with the gradual discovery of structure and can lead to catastrophic interference with what has previously been learned. In the light of these observations, we suggest that the neocortex may be optimized for the gradual discovery of the shared structure of events and experiences, and that the hippocampal system is there to provide a mechanism for rapid acquisition of new information without interference with previously discovered regularities. After this initial acquisition, the hippocampal system serves as teacher to the neocortex: That is, it allows for the reinstatement in the neocortex of representations of past events, so that they may be gradually acquired by the cortical system via interleaved learning. We equate this interleaved learning process with consolidation, and we suggest that it is necessarily slow so that new knowledge can be integrated effectively into the structured knowledge contained in the neocortical system.

=======================================================================

Retrieval information for pdp.cns TRs:

unix> ftp 128.2.248.152        # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.94.1.ps.Z
ftp> quit
unix> zcat pdp.cns.94.1.ps.Z | lpr    # or however you print postscript

NOTE: The compressed file is 306994 bytes long. Uncompressed, the file is 840184 bytes long. The printed version is 63 total pages long.

For those who do not have FTP access, physical copies can be requested from Barbara Dorney.
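The contrast at the heart of the abstract above, sequential learning of new items versus interleaving them with reinstated old ones, can be written down as two training loops. A schematic Python sketch with a delta-rule linear unit and made-up overlapping patterns (none of this is from the authors' simulations):

    import random

    old_items = [([1, 1, 0, 0], 1.0), ([1, 0, 1, 0], -1.0)]
    new_items = [([0, 1, 1, 0], 1.0), ([0, 1, 0, 1], -1.0)]

    def train(w, pairs, epochs, lr):
        """Delta-rule updates on (input, target) pairs; w is mutated."""
        for _ in range(epochs):
            for x, t in pairs:
                err = t - sum(wi * xi for wi, xi in zip(w, x))
                for i, xi in enumerate(x):
                    w[i] += lr * err * xi

    # Focused (sequential) learning: because the items share input
    # features, the updates for the new items perturb what the old
    # items established.
    w = [0.0] * 4
    train(w, old_items, 200, lr=0.1)
    train(w, new_items, 200, lr=0.1)

    # Interleaved (consolidation-like) learning: reinstate old items
    # alongside the new ones and make only small changes per trial.
    w = [0.0] * 4
    train(w, old_items, 200, lr=0.1)
    mixed = old_items + new_items
    for _ in range(200):
        random.shuffle(mixed)
        train(w, mixed, 1, lr=0.01)

In the article's terms, the hippocampus is what makes the reinstatement in the second loop possible, and the small learning rate is the "very small changes" the neocortex is argued to require.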
From Ralph.Neuneier at zfe.siemens.de Thu Mar 31 02:23:43 1994
From: Ralph.Neuneier at zfe.siemens.de (Ralph Neumeier)
Date: Thu, 31 Mar 1994 09:23:43 +0200
Subject: paper available
Message-ID: <199403310723.AA05693@train.zfe.siemens.de>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/neuneier.cond_dens.ps.Z

The following paper, which is closely related to the two recently announced papers of Chris M. Bishop and Z. Ghahramani on density estimation, is now available by anonymous ftp (7 pages, no hardcopies). It will appear in the proceedings of ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS), Springer-Verlag London Ltd.

Any questions or comments would be highly appreciated.
E-mail: Ralph.Neuneier at zfe.siemens.de

--------------------------------------------------------------

Estimation of Conditional Densities: A Comparison of Neural Network Approaches

R. Neuneier, F. Hergert, Siemens AG, Corporate Research and Development, D-81730 Munich, Germany
W. Finnoff, Prediction Company, Santa Fe, NM 8750
D. Ormoneit, Dept. of CS, TUM, D-80290 Munich, Germany

ABSTRACT: We present a comparison of various network architectures and learning algorithms (EM, Gradient Descent) to estimate conditional densities p(y|x) for future values of time series given past observations. There are two principal ways to approach this problem: Either one can estimate the conditional density directly, or first compute the joint density p(x,y) and subsequently derive the conditional density p(y|x) from p(x,y). We compared the performance of both approaches using a bounded Brownian process and real exchange rates (U.S.$-SFR). In our experiments, the direct approach turned out to be superior.

--------------------------------------------------------------

Instructions for retrieving this paper:

ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get neuneier.cond_dens.ps.Z
ftp> bye

uncompress neuneier.cond_dens.ps.Z
lpr neuneier.cond_dens.ps

------------------------------------------------------------

Ralph Neuneier
ZFE ST SN 41, Siemens AG
Otto-Hahn-Ring 6
D 81730 Muenchen, Germany
Phone: +49/89/636-49506
Fax: +49/89/636-3320
e-mail: Ralph.Neuneier at zfe.siemens.de
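The two routes the abstract compares can be seen in miniature when everything is Gaussian. The following toy Python version is mine and only illustrates "direct" versus "via the joint"; the paper's estimators are mixture-based and far more flexible.

    def joint_route(xs, ys):
        """Fit a bivariate Gaussian to (x, y) and condition it: returns a
        function x -> (mean, variance) of p(y|x)."""
        n = float(len(xs))
        mx, my = sum(xs) / n, sum(ys) / n
        vx = sum((x - mx) ** 2 for x in xs) / n
        vy = sum((y - my) ** 2 for y in ys) / n
        cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        return lambda x: (my + cxy / vx * (x - mx), vy - cxy ** 2 / vx)

    def direct_route(xs, ys):
        """Regress y on x and attach the residual variance: a direct,
        fixed-variance model of p(y|x)."""
        n = float(len(xs))
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = my - b * mx
        var = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys)) / n
        return lambda x: (a + b * x, var)

For jointly Gaussian data the two coincide exactly; the paper's question is how the two routes behave once the models are flexible and the data (a bounded Brownian process, exchange rates) are not Gaussian.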
From philh at cogs.susx.ac.uk Thu Mar 31 08:59:09 1994
From: philh at cogs.susx.ac.uk (Phil Husbands)
Date: Thu, 31 Mar 1994 14:59:09 +0100 (BST)
Subject: SAB94 Registration Details
Message-ID:

CONFERENCE ANNOUNCEMENT AND REGISTRATION DETAILS

You are cordially invited to

FROM ANIMALS TO ANIMATS
Third International Conference on Simulation of Adaptive Behavior (SAB94)
Brighton, UK, August 8-12, 1994

The object of the conference is to bring together researchers in ethology, psychology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on well-defined models, computer simulations, and built robots in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals.

Technical Programme
===================
The full technical programme will be announced in due course. There will be a single track of oral presentations, with poster sessions separately timetabled. There will also be computer, video and robotic demonstrations. Major topics covered will include:

   Individual and collective behavior
   Autonomous robots
   Neural correlates of behavior
   Hierarchical and parallel organizations
   Perception and motor control
   Emergent structures and behaviors
   Motivation and emotion
   Problem solving and planning
   Action selection and behavioral sequences
   Goal directed behavior
   Neural networks and evolutionary computation
   Ontogeny, learning and evolution
   Internal world models and cognitive processes
   Characterization of environments
   Applied adaptive behavior

Invited speakers
================
Prof. Michael Arbib, University of Southern California, "Rats Running and Humans Reaching: The Brain's Multiple Styles of Learning"
Prof. Rodney Brooks, MIT, "Coherent Behavior from Many Adaptive Processes"
Prof. Herbert Roitblat, University of Hawaii, "Mechanisms and Process in Animal Behaviour: Models of Animals, Animals as Models"
Prof. Jean-Jacques Slotine, MIT, "Stability in Adaptation and Learning"
Prof. John Maynard Smith, University of Sussex, "The Evolution of Animal Signals"

Proceedings
===========
The conference proceedings will be published by MIT Press/Bradford Books and will be available at the conference.

Official Language: English
==========================

Demonstrations
==============
Computer, video and robotic demonstrations are invited. They should be of work relevant to the conference. If you wish to offer a demonstration, please send a letter with your registration form briefly describing your contribution and indicating space and equipment requirements.

Conference Committee
====================
Conference Chairs:

Philip HUSBANDS
School of Cognitive and Comp. Sciences
University of Sussex
Brighton BN1 9QH, UK
philh at cogs.susx.ac.uk

Jean-Arcady MEYER
Groupe de Bioinformatique
Ecole Normale Superieure
46 rue d'Ulm
75230 Paris Cedex 05
meyer at wotan.ens.fr

Stewart WILSON
The Rowland Institute for Science
100 Cambridge Parkway
Cambridge, MA 02142, USA
wilson at smith.rowland.org

Program Chair:

David CLIFF
School of Cognitive and Computing Sciences
University of Sussex
Brighton BN1 9QH, UK
davec at cogs.susx.ac.uk

Financial Chair: P. Husbands, H. Roitblat
Local Arrangements: I. Harvey, P. Husbands

Program Committee
=================
M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France; L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; T. Collett, UK; H. Cruse, Germany; J. Delius, Germany; J. Ferber, France; N. Franceschini, France; S. Goss, Belgium; J. Halperin, Canada; I. Harvey, UK; I. Horswill, USA; A. Houston, UK; L. Kaelbling, USA; H. Klopf, USA; L-J. Lin, USA; P. Maes, USA; M. Mataric, USA; D. McFarland, UK; G. Miller, UK; R. Pfeifer, Switzerland; H. Roitblat, USA; J. Slotine, USA; O. Sporns, USA; J. Staddon, USA; F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; D. Waltz, USA; R. Williams, USA

Local Arrangements
==================
For general enquiries contact:

SAB94 Administration
COGS, University of Sussex
Falmer, Brighton, BN1 9QH, UK
Tel: +44 (0)273 678448
Fax: +44 (0)273 671320
Email: sab94 at cogs.susx.ac.uk

ftp
===
The SAB94 archive can be accessed by anonymous ftp.

% ftp ftp.cogs.susx.ac.uk
login: anonymous
password:
ftp> cd pub/sab94
ftp> get *
ftp> quit

(* Files available at present are: README, announcement, reg_document, hotel_booking_form)

Sponsors
========
Sponsors include:
British Telecom
University of Sussex
Applied AI Systems Inc
Uchidate Co., Ltd.
Mitsubishi Corporation
Brighton Council

Financial Support
=================
Limited financial support may be available to graduate students and young researchers in the field.
Applicants should submit a letter describing their research, the year they expect to receive their degree, a letter of recommendation from their supervisor, and confirmation that they have no other sources of funds available. The number and size of awards will depend on the amount of money available.

Venue
=====
The conference will be held at the Brighton Centre, the largest conference venue in the town, situated on the seafront in Brighton's town centre and adjacent to the 'Lanes' district. Brighton is a thriving seaside resort, with many local attractions, situated on the south coast of England. It is just a 50 minute train journey from London, and 30 minutes from London Gatwick airport -- when making travel arrangements we advise, where possible, using London Gatwick in preference to London Heathrow.

Social Activities
=================
A welcome reception will be held on Sunday 7th August. The conference banquet will take place on Thursday 11th August. There will also be opportunities for sightseeing, wine cellar tours and a visit to Brighton's Royal Pavilion.

Accommodation
=============
We have organised preferential rates for SAB94 delegates at several good quality hotels along the seafront. All hotels are within easy walking distance of the Brighton Centre. Costs vary from 29 pounds to 70 pounds inclusive per night for bed and breakfast. An accommodation booking form will be sent out to you on request, or can be obtained by ftp (instructions above). Details of cheaper budget accommodation can be obtained from Brighton Accommodation Marketing Bureau (Tel: +44 273 327560 Fax: +44 273 777409).

Insurance
=========
The SAB94 organisers and sponsors cannot accept liability for personal injuries, or for loss or damage to property belonging to conference participants or their guests. It is recommended that attendees take out personal travel insurance.

Registration Fees
=================
Registration includes: the conference proceedings; technical program; lunch each day (except Wednesday when there will be no afternoon sessions); welcome reception; free entry to Brighton's Royal Pavilion; complimentary temporary membership of the Arts Club of Brighton.

-----------------------------------------------------------------------------

REGISTRATION FORM

3rd International Conference on the Simulation of Adaptive Behaviour (SAB94)
8-12 August 1994, Brighton Centre, Brighton, UK

Please complete the form below and send to the conference office with full payment.

Name: ______________________________________________________________
Address: ___________________________________________________________
____________________________________________________________________
____________________________________________________________________
Country: ___________________________________________________________
Postal Code or Zip Code: ___________________________________________
Email: _____________________________________________________________
Telephone: ____________________________ Fax: _______________________
Professional Affiliation: __________________________________________

Name(s) of accompanying person(s):
1. _________________________________________________________________
2. _________________________________________________________________

Dietary needs: _____________________________________________________
Any other special needs: ___________________________________________

PAYMENTS
========
All payments must be made in pounds sterling.
Delegates:
==========
Tick if you will be attending the welcome reception on Sunday 7 August _____

Tick appropriate boxes.
                                 Individual          Student
Early (before 15 May 1994)       200 pounds ( )      100 pounds ( )
Late (after 15 May 1994)         230 pounds ( )      115 pounds ( )
On site                          260 pounds ( )      130 pounds ( )
Banquet                           18 pounds ( )       18 pounds ( )

STUDENTS MUST SUBMIT PROOF OF THEIR STATUS ALONG WITH THEIR REGISTRATION FEE.

Accompanying persons:
=====================
Welcoming reception   10 pounds
Banquet               28 pounds

TOTAL PAYMENT
___________ Registration
___________ Banquet (delegate rate) (Please tick if vegetarian _____)
___________ Banquet (guest rate)    (Please tick if vegetarian _____)
___________ Reception (guests only)
___________ Donation to support student scholarship fund

METHOD OF PAYMENT
=================
Please make payable to "SAB94", pounds sterling only.
_____ Bank Draft or International Money Order: __________________ pounds
_____ Cheque (drawn on a UK bank):             __________________ pounds

Send to: SAB Administration, COGS, University of Sussex, Falmer, Brighton, BN1 9QH, UK

CANCELLATIONS
=============
The SAB Administration should be notified in writing of all cancellations. Cancellations received before 10 July will incur a 20% administration charge. We cannot accept any cancellations after that date.

---------------------------------------------------------------------------------------
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.

Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed.
For further questions contact:

Jordan Pollack
Assistant Professor, CIS Dept/OSU
Laboratory for AI Research
2036 Neil Ave
Columbus, OH 43210
Email: pollack at cis.ohio-state.edu
Phone: (614) 292-4890

APPENDIX: Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu.
220 cheops.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. Here is the INDEX entry:

rosenblatt.reborn.ps.Z rosenblatt at gvax.cs.cornell.edu 17 pages. Boastful statements by the deceased leader of the neurocomputing field.

Let me know when it is in place so I can announce it to Connectionists at cmu.
Frank
^D

AFTER FRANK RECEIVES THE GO-AHEAD, AND HAS A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

gvax> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/rosenblatt.reborn.ps.Z

The file rosenblatt.reborn.ps.Z is now available for copying from the Neuroprose repository:

Born Again Perceptrons (17 pages)
Frank Rosenblatt
Cornell University

ABSTRACT: In this unpublished paper, I review the historical facts regarding my death at sea: Was it an accident or suicide? Moreover, I look over the past 23 years of work and find that I was right in my initial overblown assessments of the field of neural networks.

~r.signature
^D

------------------------------------------------------------------------

How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).

2. Log in as user "anonymous" with password your username.

3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be.

4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "nn-bench-request at cs.cmu.edu".

From radford at cs.toronto.edu Tue Mar 1 21:39:15 1994
From: radford at cs.toronto.edu (Radford Neal)
Date: Tue, 1 Mar 1994 21:39:15 -0500
Subject: TR available: "Priors for infinite networks"
Message-ID: <94Mar1.213924edt.161@neuron.ai.toronto.edu>

FTP-host: ftp.cs.toronto.edu
FTP-filename: /pub/radford/pin.ps.Z

The following technical report is now available via ftp, as described below.

PRIORS FOR INFINITE NETWORKS

Radford M. Neal
Department of Computer Science
University of Toronto

1 March 1994

Bayesian inference begins with a prior distribution for model parameters that is meant to capture prior beliefs about the relationship being modeled. For multilayer perceptron networks, where the parameters are the connection weights, the prior lacks any direct meaning --- what matters is the prior over functions computed by the network that is implied by this prior over weights.
In this paper, I show that priors over weights can be defined in such a way that the corresponding priors over functions reach reasonable limits as the number of hidden units in the network goes to infinity. When using such priors, there is thus no need to limit the size of the network in order to avoid ``overfitting''. The infinite network limit also provides insight into the properties of different priors. A Gaussian prior for hidden-to-output weights results in a Gaussian process prior for functions, which can be smooth, Brownian, or fractional Brownian, depending on the hidden unit activation function and the prior for input-to-hidden weights. Quite different effects can be obtained using priors based on non-Gaussian stable distributions. In networks with more than one hidden layer, a combination of Gaussian and non-Gaussian priors appears most interesting.

The paper may be obtained in PostScript form as follows:

unix> ftp ftp.cs.toronto.edu
      (or 128.100.3.6, or 128.100.1.105)
      (log in as user 'anonymous', your e-mail address as password)
ftp> cd pub/radford
ftp> binary
ftp> get pin.ps.Z
ftp> quit
unix> uncompress pin.ps.Z
unix> lpr pin.ps      (or however you print PostScript)

The report is 22 pages in length. Due to figures, the uncompressed PostScript is about 2 megabytes in size. The files pin[123].ps.Z in the same directory contain the same paper in smaller chunks; these may prove useful if your printer cannot digest the paper all at once. Some of the figures take a while to print; the largest such is the sole content of pin2.ps

Radford Neal
radford at cs.toronto.edu
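One quick way to see the limit described in the abstract is to sample from such a prior directly: with hidden-to-output weights drawn from N(0, w^2/H), the distribution of the network's output at any fixed input stabilises as the number of hidden units H grows. A minimal Python sketch, with illustrative constants of my own choosing:

    import math, random

    def sample_f(x, n_hidden, w_scale=1.0, a_scale=5.0):
        """One draw from the prior over functions of a one-hidden-layer
        tanh network; the 1/sqrt(H) scaling keeps the output variance
        finite as H grows."""
        total = 0.0
        for _ in range(n_hidden):
            a = random.gauss(0.0, a_scale)          # input-to-hidden weight
            b = random.gauss(0.0, a_scale)          # hidden bias
            v = random.gauss(0.0, w_scale / math.sqrt(n_hidden))
            total += v * math.tanh(a * x + b)
        return total

    # The spread of f(0) is essentially unchanged from H=10 to H=10000:
    for h in (10, 100, 10000):
        draws = [sample_f(0.0, h) for _ in range(500)]
        print(h, sum(d * d for d in draws) / len(draws))

By the central limit theorem the large-H draws approach the Gaussian process limit the report analyses; the non-Gaussian stable priors it goes on to consider arise when the hidden-to-output weights are instead given infinite-variance distributions.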
From JANOUSEK at dcse.fee.vutbr.cz Wed Mar 2 20:37:54 1994
From: JANOUSEK at dcse.fee.vutbr.cz (Vladimir Janousek)
Date: Wed, 2 Mar 1994 20:37:54 MET-0100
Subject: NN workshop
Message-ID: <20366F782A@fik.dcse.fee.vutbr.cz>

----------------------------------------------------------------------

CALL FOR PARTICIPATION

Sixth International Microcomputer School on NEURAL NETWORKS
September 19-23, 1994
Sedmihorky (Bohemian Paradise), Czech Republic

Organised by Technical University of Brno with financial support from the CEC COST Programme.

The workshop "Microcomputer School" is one of a series of events organised every second year since 1984. Its aim is to promote knowledge of new developments in Computer Science and Engineering among the educational and research community. Each meeting has a different subject area at its focus. The subject of this School is Neural Networks - Theory and Applications.

Main topics
* Architectures of artificial neural networks
* Learning theory and applications
* Speech and character recognition
* Neural network-based controllers
* Commercial and industrial hardware systems
* Cellular neural networks in image processing

Programme and location

The workshop sessions will be led by invited scientists who will present extensive lectures on topics in the field. Participants are also invited to submit individual papers (max 6 pages) on their research to be presented at the session. English is the official language of the workshop. The workshop is located near the Sandstone cities of the Bohemian Paradise, a well-known natural wonderland some 80 km from Prague. The place is ideal for hiking trips. Participation will be by registration only and the number of participants will be limited.

Registration Fee

                                        USD     Kc
ACM members      until 15 June 1994     135     4.100,-
                 after 15 June 1994     155     4.700,-
Non ACM members  until 15 June 1994     145     4.400,-
                 after 15 June 1994     165     4.990,-

The registration fee covers workshop participation and the cost of accommodation, meals and proceedings.

Scholarship for Young Scientists

PhD students from Central and Eastern Europe presenting the most valuable contribution to the Workshop will be granted a scholarship in the form of exemption from the workshop fee. The scholarship will be granted by competition based on the quality of submitted papers. Please note that the scholarships will not cover travel expenses.

Deadlines for the Authors

The manuscript of the paper should be typed, written in English, and received before April 15. Notification of acceptance with a guide for the authors will be sent before May 15. The full, camera-ready paper must be received before June 30.

To register, or for additional information, contact:

Mrs. Sylva Papezova
Application Software, Ltd.
Bozetechova 2
61266 Brno
CZECH REPUBLIC
phone/fax: +42-5-41211479
phone: +42-5-740741
e-mail: nnet at dcse.fee.vutbr.cz

Payment and Banking Information

Method of Payment: Wire transfer. The amount is to be paid in USD, Kc or any convertible equivalent. IMPORTANT: Please use NETTO payment in your payment order, or increase the amount by USD 10 for the banking fee deducted by the beneficiary's bank.

Bank: Komercni banka Brno-mesto, nam.Svobody 21, CZ-631 31 Brno, Czech Republic
Account number: 113545-621/0100

------------------------------------------------------------------

Vladimir Janousek                    janousek at dcse.fee.vutbr.cz
Technical University of Brno
Faculty of Electrical Engineering & Informatics
Department of Computer Science & Engineering
Bozetechova 2, CZ-612 66 Brno, Czech Republic

From mtx004 at cck.coventry.ac.uk Thu Mar 3 10:31:46 1994
From: mtx004 at cck.coventry.ac.uk (NSteele)
Date: Thu, 3 Mar 94 10:31:46 WET
Subject: ANNGA95
Message-ID: <15290.9403031031@cck.coventry.ac.uk>

Please could you post the following invitation to participate....

******************************************************************************

ICANNGA95
INTERNATIONAL CONFERENCE on ARTIFICIAL NEURAL NETWORKS and GENETIC ALGORITHMS
Preceded by a one-day Introductory Workshop
ECOLE DES MINES d'ALES, FRANCE
18th - 21st April 1995

Call for Papers and Invitation to Participate

Purpose and Scope of the Conference

Artificial neural networks and genetic algorithms are two areas of emerging technology, with their origins in the field of biology. Independently or in conjunction, approaches based on these techniques have produced interesting and useful results in many fields of application. The conference has two purposes, namely to bring together established workers in the fields and also to provide an opportunity for those wishing to gain understanding and experience of these areas. Thus the conference will be preceded by a one-day workshop.
At this workshop, introductory presentations covering the basic concepts and recent developments will be given, with lectures based on printed course notes. The language of instruction will be English, but it is expected that assistance will be available in French and German. "Hands-on" experience will be available and the workshop fee will include the cost of some introductory software. Workshop participants will be able to register as listeners for the conference itself at a reduced rate.

This conference follows the highly successful ICANNGA93 held at Innsbruck, Austria in April 1993. Owing to the exceptionally high quality of publications and friendly atmosphere evident at the 1993 conference, the organisers have decided to continue the conference theme with a conference every two years. As a result the Ecole des Mines d'Ales is honoured and delighted to be chosen to host the second of this conference series, in close collaboration with the organisers of ICANNGA93 from the University of Innsbruck and Coventry University.

Call for Papers

The conference will focus on both the theoretical and practical aspects of the technologies. Accordingly, contributions are sought based on the following list, which is indicative only.

1. Theoretical aspects of Artificial Neural Networks. Novel paradigms, training methods, analysis of results and models, trajectories and dynamics.

2. Practical applications of Artificial Neural Networks. Pattern recognition, classification problems, fault detection, optimisation, prediction, risk assessment, data compression and image processing, process monitoring and control, financial forecasting, etc...

3. Theoretical and computational aspects of Genetic Algorithms. New algorithms/processes, performance measurement.

4. Practical applications of Genetic Algorithms. Optimisation, scheduling and design problems, classifier systems, application to artificial neural networks, etc...

Authors wishing to contribute to the conference should send an abstract of 600-1000 words of their proposed contribution before 31st August 1994. Abstracts should be in English and three typewritten copies should be sent to the address below.

David Pearson
Laboratoire d'Electronique d'Automatique et d'Informatique
Ecole des Mines d'Ales
6, avenue de Clavieres
30319 Ales Cedex
France

Alternatively abstracts may be sent by electronic mail to either of the following email addresses.

1. dpearson at soleil.ENSM-ALES.FR (Ales)
2. NSTEELE at cov.ac.uk (Coventry)

Refereeing of abstracts submitted before the deadline date will take place on a regular basis, allowing early decisions to be taken in order to help contributors plan their visit.

ADVISORY COMMITTEE
R. Albrecht, University of Innsbruck
D. Pearson, Ecole des Mines d'Ales
N. Steele, Coventry University

Accommodation charges are not included in the fees. Details on hotel reservation will be available later in 1994.
For further information on the conference or workshop please contact :- Nigel Steele Department of Mathematics Coventry University Priory Street Coventry CV1 5FB UK tel: +44 203 838568 fax: +44 203 838585 email: NSTEELE at cck.cov.ac.uk or David Pearson Laboratoire d'Electronique d'Automatique et d'Informatique Ecole des Mines d'Ales 6, avenue de Clavieres 30319 Ales Cedex France tel: +33 66785249 fax: +33 66785201 email: dpearson at soleil.ENSM-ALES.FR General Information Ales-en-Cevennes is situated at the South-Eastern outcrop of the Massif Central between the "garrigues" of Languedoc and the Cevennes mountains and owes its existence to its abundant mineral resources. Means of access: Ales is located 40 kilometres from Nimes, 70 kilometres from Avignon, Montpellier and the Mediterranean beaches and 150 kilometres from Marseille. By road: The "Nimes-Ouest" exit on the Lyon-Barcelona and the Marseille-Barcelona motorways. By train: The Paris-Lyon-Nimes TGV (high speed train, 4.5 hours from Paris to Nimes), connection by train or bus to Ales from Nimes. By plane: Marseille and Montpellier international airports, Nimes national airport with several daily flights from Paris. More detailed information on the various means of access will be available later in 1994.  -- ========================== Nigel Steele Chairman, Division of Mathematics School of Mathematical and Information Sciences Coventry University Priory Street Coventry CV1 5FB United Kingdom. tel: (0203) 838568 +44 203 838568 email: NSTEELE at uk.ac.cov.cck (JANET) or NSTEELE at cck.cov.ac.uk (EARN BITNET etc.) fax: (0203) 838585 +44 203 838585  From peterw at cogs.susx.ac.uk Thu Mar 3 06:53:00 1994 From: peterw at cogs.susx.ac.uk (Peter Williams) Date: Thu, 3 Mar 94 11:53 GMT Subject: Technical report Message-ID: FTP-host: ftp.cogs.susx.ac.uk FTP-filename: /pub/reports/csrp/csrp312.ps.Z The following technical report is available by anonymous ftp. ------------------------------------------------------------------------ BAYESIAN REGULARISATION AND PRUNING USING A LAPLACE PRIOR Peter M Williams Cognitive Science Research Paper CSRP-312 School of Cognitive and Computing Sciences University of Sussex Falmer, Brighton BN1 9QH England email: peterw at cogs.susx.ac.uk Abstract Standard techniques for improved generalisation from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy indicates a Laplace rather than a Gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error (2) those failing to achieve this sensitivity and which therefore vanish. Since the critical value is determined adaptively during training, pruning---in the sense of setting weights to exact zeros---becomes a consequence of regularisation alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a Gaussian regulariser. 
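The pruning mechanism in Williams's abstract has a compact textbook analogue: under an L1 (Laplace-prior) penalty, a gradient step followed by soft-thresholding pins insufficiently useful weights at exactly zero, whereas a Gaussian (L2) penalty only shrinks them. A sketch of that contrast in Python, mine rather than the paper's adaptive scheme:

    def l1_step(w, grad, lr, lam):
        """Gradient step on the data error, then the proximal step for a
        penalty lam*|w|: a weight whose data-error sensitivity stays
        below lam ends up exactly zero, i.e. pruned."""
        w = w - lr * grad
        if w > lr * lam:
            return w - lr * lam
        if w < -lr * lam:
            return w + lr * lam
        return 0.0

    def l2_step(w, grad, lr, lam):
        """Gaussian-prior weight decay, for contrast: weights shrink
        toward zero but never reach it exactly."""
        return w - lr * (grad + lam * w)

This is the sense in which, in the abstract's terms, pruning becomes a consequence of regularisation alone, with the count of free parameters falling automatically as weights vanish.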
------------------------------------------------------------------------

[113755 bytes, 25 pages]

unix> ftp ftp.cogs.susx.ac.uk
Name: anonymous
Password: (email address)
ftp> cd pub/reports/csrp
ftp> binary
ftp> get csrp312.ps.Z

From charles at playfair.Stanford.EDU Thu Mar 3 16:29:20 1994
From: charles at playfair.Stanford.EDU (Charles Roosen)
Date: Thu, 03 Mar 94 13:29:20 -0800
Subject: Projection Pursuit Papers Available
Message-ID: <199403032129.NAA28374@playfair.Stanford.EDU>

The following Tech Reports are now available by anonymous ftp from research.att.com. They are in the directory /dist/trevor as "asp.tm.ps.Z" and "lrpp.tm.ps.Z".

Charles Roosen
charles at playfair.stanford.edu

---

Automatic Smoothing Spline Projection Pursuit

Charles Roosen                Trevor Hastie
Dept. of Stat.                Stat. & Data Analysis Research Dept.
Stanford U.                   AT&T Bell Labs

Abstract

A highly flexible nonparametric regression model for predicting a response y given covariates {x_k}_{k=1}^d is the projection pursuit regression (PPR) model yhat = h(x) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^T x), where the f_j are general smooth functions with mean zero and norm one, and \sum_{k=1}^d \alpha_{kj}^2 = 1. The standard PPR algorithm of Friedman estimates the smooth functions f_j(v_j) using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with M_{max} linear combinations, then prunes back to a simpler model of size M \leq M_{max}, where M and M_{max} are specified by the user. This paper discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines, and the number of terms M and M_{max} are chosen by generalized cross-validation.

Logistic Response Projection Pursuit

Charles Roosen                Trevor Hastie
Dept. of Stat.                Stat. & Data Analysis Research Dept.
Stanford U.                   AT&T Bell Labs

Abstract

A highly flexible nonparametric regression model for predicting a response y given covariates x is the projection pursuit regression (PPR) model yhat = h(x) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^T x), where the f_j are general smooth functions with mean zero and norm one, and \sum_{k=1}^d \alpha_{kj}^2 = 1. With a binary response y, the common approach to fitting a PPR model is to fit yhat to minimize average squared error without explicitly considering the binary nature of the response. We develop an alternative logistic response projection pursuit model, in which y is taken to be binomial(p), where \log({p \over 1-p}) = h(x). This may be fit by minimizing either binomial deviance or average squared error. We compare the logistic response models to the linear model on simulated data. In addition, we develop a generalized projection pursuit framework for exponential family models. We also present a smoothing spline based PPR algorithm, and compare it to supersmoother and polynomial based PPR algorithms.

From ling at csd.uwo.ca Thu Mar 3 16:47:02 1994
From: ling at csd.uwo.ca (Charles X. Ling)
Date: Thu, 3 Mar 94 16:47:02 EST
Subject: Overfitting in learning discrete patterns
Message-ID: <9403032147.AA28654@mccarthy.csd.uwo.ca>

Hi everyone,

A few weeks ago I posted several questions regarding overfitting in network training. I got many helpful replies (some were not forwarded to the list). Thanks very much to all. After some thought, and following Andreas Weigend's Summer School 93 paper, I designed and implemented the following experiments on the overfitting problem.
The design is very simple but has a clear rationale, the results seem to be conclusive, and anyone can verify them easily.

THE FUNCTION TO BE LEARNED:

First, a Boolean function with 7 variables and 1 output is defined by:

    count := 2*v1 - v2 + 3*v3 + 2*v4 - v5 + 3*v6 + v7;
    if (count > 2) and (count < 7) then f1 := 0 else f1 := 1

This function needs a network with 2 one-layer hidden units to represent it. The actual target function is the one above except that 10% of the function values, chosen randomly, are flipped (0 to 1 and 1 to 0). Note that those flipped values can be regarded as having a "high-level regularity", and no sampling noise is added to this target function. (It is like learning verb past tenses, which include a few so-called "irregular" verbs, or learning string-to-phoneme mappings, which have some exceptions.) The 128 possible examples are split randomly into a training set (64 examples) and a testing set (64), with no overlap. It happens that the training set has 4 flipped examples, and the testing set has 7.

TRAINING:

Xerion from U of Toronto is used. Backprop with a small learning rate and momentum 0. Testing accuracies are monitored every 10 epochs up to 1000 epochs, then at several points up to 50,000 epochs. Note that only the basic BP is used, without weight decay. Networks have 7 input units and one output unit. Networks with 1 (too small), 2 (right model), 3, 4 (enough to train to 0 symbolic-level error, see below), 6, 8, 12, and 20 (over-sized) hidden units are trained, each with 5 different random seeds.

ERROR:

The most important change from Weigend's paper is the way the error is measured. Since we are learning discrete patterns, the error we really care about is the number of misclassifications (i.e., the symbolic-level error). The discrete patterns that have the smallest real-number Hamming distance (smallest angle) with the actual network outputs should be taken. Since there is only one output here, if the output is greater than 0.5, then it is 1; otherwise it is 0.

RESULTS:

See the table below. Results are reported as average ± standard error.

# of        min training    min testing    overfitting         percent of
hidden U    error           error          (# of increase)     increase

 1          19.0 ± 0.0      30.5 ± 0.6      2.5 ± 0.6            8%
 2           2.0 ± 0.0      11.0 ± 0.0      1.0 ± 0.0            9%
 3           1.0 ± 0.0      11.3 ± 0.9      2.5 ± 1.7           22%
 4           0.6 ± 0.5      11.3 ± 0.9      6.3 ± 1.9           56%
 6           0.4 ± 0.5      13.2 ± 1.3      2.6 ± 0.5           20%
 8           0.0 ± 0.0      13.4 ± 1.3      5.2 ± 1.1           39%
12           0.2 ± 0.5      12.4 ± 1.1      3.4 ± 2.8           27%
20           0.5 ± 0.6      12.8 ± 1.5      4.0 ± 2.2           31%

min training error: minimal number of misclassifications on the training set
overfitting: increase in the number of misclassifications on the testing set when training to 50,000 epochs
percent of increase: overfitting / min-testing-error

CONCLUSIONS:

1. Too-small networks and right-sized networks do overfit, but only by a very small amount (8%, 9%). Testing errors and overfitting amounts are stable.

2. The right-sized network (2 hidden units) has the minimal testing error and minimal fluctuations.

3. Too-large networks overfit a lot. This is because the trained networks represent a very complex separator which does not align with the target function's hyperplanes.

4. Too-large networks show large variations in testing error across different random seeds. Therefore, the results are not as reliable. Why? Too many degrees of freedom in the net.

5. The average error with too-large networks is slightly *higher* than with right-sized networks. Therefore, training overly large networks and stopping early does not seem to be beneficial.

IN SUM: In terms of symbolic-level errors...
From oded at ai.mit.edu Thu Mar 3 22:10:46 1994
From: oded at ai.mit.edu (Oded Maron)
Date: Thu, 3 Mar 94 22:10:46 EST
Subject: pre-print announcement: Hoeffding Races - Accelerating Model Selection
Message-ID: <9403040310.AA09710@fiber-bits>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/maron.hoeffding.ps.Z

The file maron.hoeffding.ps.Z is now available for copying from the Neuroprose repository:

Hoeffding Races: Accelerating model selection for classification and function approximation (8 pages)
Oded Maron, MIT AI Lab, and Andrew W. Moore, CMU

ABSTRACT: Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, the lack of a distance metric between models, reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models and concentrating the computational effort on differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems.

This paper will appear in NIPS-6.

Maron, Oded and Moore, Andrew W. (1994). Hoeffding Races: Accelerating model selection for classification and function approximation. In Cowan, J.D., Tesauro, G., and Alspector, J. (eds.), Advances in Neural Information Processing Systems 6. San Francisco, CA: Morgan Kaufmann Publishers.
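The core racing idea is compact enough to sketch. The reconstruction below is from the abstract only, not the authors' code: each surviving model keeps a running mean of its held-out errors, assumed bounded in [0, B]; after each point, a Hoeffding interval of half-width eps is attached to every mean, and any model whose lower bound lies above the smallest upper bound is eliminated. For the leave-one-out case, score(m, p) would be model m's error predicting the held-out point p from the remaining points.

import math

def hoeffding_race(models, points, score, B=1.0, delta=0.05):
    # Race `models` over held-out `points`; score(m, p) must lie in [0, B].
    # Drop a model once its error lower bound exceeds the best upper bound.
    alive = list(models)
    sums = {m: 0.0 for m in alive}
    for n, p in enumerate(points, start=1):
        for m in alive:
            sums[m] += score(m, p)
        # Hoeffding bound: true mean is within eps of the running mean
        # with probability at least 1 - delta
        eps = B * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        means = {m: sums[m] / n for m in alive}
        best_upper = min(means[m] + eps for m in alive)
        alive = [m for m in alive if means[m] - eps <= best_upper]
    return alive  # the surviving candidate models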
From ray at basser.cs.su.OZ.AU Fri Mar 4 03:20:04 1994
From: ray at basser.cs.su.OZ.AU (Raymond Lister)
Date: Fri, 4 Mar 1994 19:20:04 +1100
Subject: Tutorial - Neural Networks, Speech Technology, and Other Applications
Message-ID: <9403040825.20347@munnari.oz.au>

**************************************************************

NEURAL NETWORKS, SPEECH TECHNOLOGY, AND OTHER APPLICATIONS
-- A TUTORIAL --

Thursday April 28 and Friday April 29 1994
Queensland University of Technology, Brisbane, AUSTRALIA

Featured Speaker: PROF. NELSON MORGAN, International Computer Science Institute, Berkeley, California, USA.

PROF. MORGAN is co-author, with Herve Bourlard, of the recent Kluwer Academic Publishers book ``Connectionist Speech Recognition, A Hybrid Approach''. PROF. MORGAN will be giving a tutorial similar to the one he gave at the NIPS-6 conference in Denver, Colorado in December 1993. NIPS is the premier international conference for research in artificial neural networks.

Other speakers: PROFS. TOM DOWNS and AH CHUNG TSOI and co-workers from the Speaker Verification Project, Department of Electrical and Computer Engineering, University of Queensland, Australia; and PROF. JOACHIM DIEDERICH, Professor of Neurocomputing, and other members of the Queensland University of Technology Neurocomputing Research Centre.

Venue: 12th Floor, Building ITE ("Information Technology and Engineering"), Gardens Point Campus, Queensland University of Technology, 2 George Street, Brisbane, AUSTRALIA. (The venue is a short walk from the Brisbane Central Business District.)

Day 1 (2-6PM):
1. An Introduction to Neural Networks. This session will serve as a primer for those attendees with no prior background in artificial neural networks.
2. Demonstrations of Applications of Neural Networks at QUT. The Neurocomputing Research Centre at QUT is developing a number of applications, including systems for: predicting blue-green algae blooms; advising in dairy breeding programs; predicting the bleeding rate of patients undergoing heart bypass surgery; and a computer assistant for the handling of electronic mail.

Day 2 (full day):
1. Connectionist Continuous Speech Recognition, by Nelson Morgan. This will consist of three 90 minute sessions.
2. Speaker Verification Research at the University of Queensland, by Professors Tom Downs and Ah Chung Tsoi and co-workers:
   1. Overview
   2. Neural networks applied to speaker verification
   3. Dynamic time warping applied to speaker verification
   4. Vector quantization applied to speaker verification
   5. Demonstration of a speaker verification system

Cost: Registration $A200; Pre-Registration $A150 (for payment received one week prior to the tutorial). Full-time postgraduate students are eligible for a 50% discount on the full fee. Proof of enrollment is required: either a photocopy of a current student card, or a letter from the Head of Department.

Lunch: $A30. Optional, and second day only. Must accompany the early registration fee, up to one week prior to the tutorial.

It will be possible to register on the day, but only cash and cheques will be acceptable. Credit cards cannot be accepted.

*******************************************************************

REGISTRATION FORM

NEURAL NETWORKS, SPEECH TECHNOLOGY, AND OTHER APPLICATIONS
Thursday April 28 and Friday April 29 1994

NAME:        _______________________________
AFFILIATION: ________________________________________________
ADDRESS:     __________________________
             __________________________
             __________________________
             __________________________
TEL:   ____________________________ (office hours)
FAX:   ____________________________
EMAIL: ____________________________

REGISTRATION (tick as appropriate)
Full Fee:  $200
Early Fee: $150
Student:   $100
Lunch:     $ 30
           ________________
Total:     $

- or -

I expect to attend but will pay on the day (tick)

(Notification of an expectation to attend would be appreciated, as it will aid in making tutorial arrangements. Such notification may be made by electronic mail, along with the above particulars.)

Make cheques payable to "Faculty Research - Neurocomputing". Credit cards cannot be accepted.
Send the registration form and remittance to:

Neural Network Tutorial
Neurocomputing Research Centre
School of Computing Science
Queensland University of Technology
GPO Box 2434
Brisbane Australia 4001

*******************************************************************

Connectionist Continuous Speech Recognition: A Tutorial
by Professor Nelson Morgan

Automatic Speech Recognition (ASR) has been a major topic of research for over 40 years. While there has been much progress in this time, it is still a difficult task, and the best systems are still quite limited. Since computers have rapidly grown much more powerful, statistically-oriented, data-driven approaches have received much more attention over the last 10 years. These approaches automatically learn speech model parameters from the data, and have proven to be very successful. The dominant approach for such systems uses Hidden Markov Models (commonly based on an assumption of Gaussian or mixture-Gaussian probability densities for the data in each sound class) to represent speech. However, over the last 5 years, a set of techniques has been developed at Berkeley and elsewhere using a hybrid of connectionist probability estimators and Hidden Markov Models. In this tutorial, the basics of automatic speech recognition, Hidden Markov Models, and probability estimation with layered connectionist networks will be reviewed, followed by a more detailed explanation of the current state of development of this class of approaches. The goal of the tutorial is to acquaint the participants with the major issues of connectionist speech recognition, rather than to exhaustively review the range of approaches under investigation worldwide.

Brief Notes about the Instructor: Nelson Morgan leads a research group at the International Computer Science Institute whose charter is a mixture of connectionist computational engine design and the incorporation of such engines into research on speech and hearing, in order to improve auditory machine perception. Together with Herve Bourlard, he is the author of the recent Kluwer Academic Publishers book ``Connectionist Speech Recognition, A Hybrid Approach'', and was the co-developer (with Bourlard) of many of these techniques. He is also on the faculty of the University of California at Berkeley.

*******************************************************************

For further information please contact:

Dr Raymond Lister
email: raymond at fitmail.fit.qut.edu.au
- or -
Prof. Joachim Diederich
Tel: +617 864 2143
email: joachim at fitmail.fit.qut.edu.au
- or -
either, by Fax: +617 864 1801

From lautrup at connect.nbi.dk Fri Mar 4 14:04:55 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 4 Mar 94 14:04:55 MET
Subject: preprint
Message-ID:

The following preprint is now available:

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/hertz.nonlin.ps.Z

Authors: J. Hertz, A. Krogh, B. Lautrup and T. Lehmann
Title: Non-Linear Back-propagation: Doing Back-Propagation without Derivatives of the Activation Function.
Size: 13 pages

Abstract: The conventional linear back-propagation algorithm is replaced by a non-linear version, which avoids the necessity of calculating the derivative of the activation function. This may be exploited in hardware realizations of neural processors. In this paper we derive the non-linear back-propagation algorithms in the framework of recurrent back-propagation and present some numerical simulations of feed-forward networks on the NetTalk problem. A discussion of implementation in analog VLSI electronics concludes the paper.
From Fabien.Moutarde at aar.alcatel-alsthom.fr Fri Mar 4 08:00:49 1994
From: Fabien.Moutarde at aar.alcatel-alsthom.fr (Fabien Moutarde)
Date: Fri, 4 Mar 94 14:00:49 +0100
Subject: Overfitting in learning discrete patterns
In-Reply-To: Mail from '"Charles X. Ling"' dated: Thu, 3 Mar 94 16:47:02 EST
Message-ID: <9403041300.AA05822@orchidee.dinsunnet>

Hi everybody,

Here are some questions and comments about the overfitting experiment reported by Charles X. Ling. I would like to know how the weights were initialized. Were they taken from a uniform distribution on some fixed interval, regardless of the network architecture? Which interval?

I ask this because when we tried to provoke some overfitting on a very simple problem, we realized that the initial amplitude of the weights is a crucial factor: if you begin learning with some neurons already in their non-linear regime somewhere in learning space, then the initial function realized by the network is not smooth, and the irregularities are likely to remain between learning points and to produce overfitting. This implies that the bigger the network, the lower the initial weights should be.

Ling's problem is a discrete-valued one, and the above remarks relate to continuous function approximation; however, I think it is possible that his results are more a consequence of his initialization procedure than of the training itself.

Any further comments, and details on the experiment?

Fabien

Fabien Moutarde
Alcatel Alsthom Recherche
Route de Nozay
91460 Marcoussis
FRANCE
tel: 33-1-64.49.16.98
fax: 33-1-64.49.06.95
e-mail: moutarde at aar.alcatel-alsthom.fr

From lautrup at connect.nbi.dk Fri Mar 4 15:01:47 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 4 Mar 94 15:01:47 MET
Subject: IJNS Vol 4.3 contents
Message-ID:

Begin Message:
-----------------------------------------------------------------------

INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject, which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh, undogmatic attitude towards this multidisciplinary field, with the aim of being a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities.

ISSN: 0129-0657 (IJNS)

Contents of Volume 4, issue number 3 (1993):

1. X. Yao: Evolutionary Artificial Neural Networks
2. A. Wendemuth & D. Sherrington: Fast Learning of Biased Patterns in Neural Networks
3. H. S. Toh: Weight Configurations of Trained Perceptrons
4. W. Hsu, L. S. Hsu & M. F. Tenorio: The ClusNet Algorithm and Time Series Prediction
5. A. Holst & A. Lansner: A Flexible and Fault Tolerant Query-Reply System Based on a Bayesian Neural Network
6. S. Cavalieri, A. Di Stefano & O. Mirabella: Neural Strategies to Handle Routing in Computer Networks
7. G. K. Knopf & M. M. Gupta: Dynamics of Antagonistic Neural Processing Elements
Book Review: K. Venkatesh Prasad: Neural Networks for Optimization and Signal Processing by A. Cichocki & R. Unbehauen

Editorial board:
B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-charge)
D. Stork (Stanford) (Book review editor)

Associate editors:
J. Alspector (Bellcore)
B. Baird (University of California, Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
J. L. Elman (University of California, San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
S. Hanson (SIEMENS Research)
J. Hertz (Nordita)
D. Horn (Tel Aviv University)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Rutgers University)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Bar-Ilan University, Israel)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
J. Moody (Oregon Graduate Institute, USA)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Oxford)
S. A. Solla (AT&T Bell Labs)
A. Weigend (University of Colorado)
M. A. Virasoro (University of Rome)
D. Zipser (University of California, San Diego)

CALL FOR PAPERS

Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from:

The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73 Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461

or

World Scientific Publishing Co. Inc.
Suite 1B, 1060 Main Street
River Edge, New Jersey 07661
USA
Telephone: (1)201-487-9655

or

World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone: (65)382-5663

-----------------------------------------------------------------------
End Message

From davec at cogs.susx.ac.uk Fri Mar 4 08:59:37 1994
From: davec at cogs.susx.ac.uk (Dave Cliff)
Date: Fri, 4 Mar 1994 13:59:37 +0000 (GMT)
Subject: Postdoc at Sussex
Message-ID:

Sussex Centre for Neuroscience
RESEARCH FELLOW

Applications are invited for a postdoctoral research fellow to investigate visuo-spatial memories and navigation in hymenoptera (ants, bees, and wasps). Candidates with interest and experience in vision, robotics and computational modelling are especially welcome. The post will be for two years in the first instance, with a possibility of renewal. The starting salary will be in the range of UK pounds 12,828-20,442.

Enquiries and applications (curriculum vitae, one or two sample publications, and the names and addresses of at least two referees) should be addressed to: Dr T. S. Collett, Sussex Centre for Neuroscience, School of Biological Sciences, Falmer, Brighton BN1 9QG, U.K. Tel: +44 (0)273 678507; Fax: +44 (0)273 678535.

Closing date: 1 May 1994.
From venu at pixel.mipg.upenn.edu Fri Mar 4 15:40:20 1994
From: venu at pixel.mipg.upenn.edu (Venugopal)
Date: Fri, 4 Mar 94 15:40:20 EST
Subject: Alopex: Pre-print available on FTP
Message-ID: <9403042040.AA05883@pixel.mipg.upenn.edu>

A pre-print of the following paper (to appear in NEURAL COMPUTATION, vol. 4, 1994, pp. 467-488) is available by ftp from the neuroprose archive:

ALOPEX: A CORRELATION BASED LEARNING ALGORITHM FOR FEEDFORWARD AND RECURRENT NEURAL NETWORKS

K. P. Unnikrishnan (GM Research Laboratories, Warren, MI; AI Laboratory, Univ. of Michigan, Ann Arbor, MI)
K. P. Venugopal (Medical Image Processing Group, University of Pennsylvania, Philadelphia, PA)

Abstract: We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about the transfer functions of individual neurons, and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feed-forward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic, and it uses a `temperature' parameter in a manner similar to that of simulated annealing. A heuristic `annealing schedule' is presented which is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those of standard gradient descent methods. Feed-forward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and the advantages of appropriate error measures are illustrated using a variety of problems.

-----------------------------------------

The file at archive.cis.ohio-state.edu is venugopal.alopex.ps.Z (472K compressed).

To ftp the file:

unix> ftp archive.cis.ohio-state.edu
Name (archive.cis.ohio-state.edu:xxxxx): anonymous
Password: your address
ftp> cd pub/neuroprose
ftp> binary
ftp> get venugopal.alopex.ps.Z

Uncompress the file after transferring it to your machine, before printing.

-----------------------------------------------------------

K. P. Venugopal
Medical Image Processing Group
University of Pennsylvania
423 Blockley Hall
Philadelphia, PA 19104
(venu at pixel.mipg.upenn.edu)
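The correlation rule itself is simple enough to sketch. The following is one common formulation, reconstructed from the abstract and hedged accordingly: the paper's annealing schedule for the temperature T and its exact step conventions are not reproduced (T is a fixed constant here, chosen for the toy problem), and all names are invented for the example.

import numpy as np

def alopex_step(w, E, state, step=0.01, T=0.1, rng=np.random):
    # One Alopex update: each weight takes a +/-step; the probability of
    # reversing its previous direction grows with the correlation between
    # the previous weight change and the previous change in global error.
    dw_prev, dE_prev = state
    corr = dw_prev * dE_prev                     # >0: last move raised the error
    p_reverse = 1.0 / (1.0 + np.exp(-corr / T))  # temperature sets the randomness
    sign = np.where(rng.rand(w.size) < p_reverse,
                    -np.sign(dw_prev), np.sign(dw_prev))
    dw = sign * step
    dE = E(w + dw) - E(w)                        # only the global error is used
    return w + dw, (dw, dE)

# Toy use on a quadratic error surface (no gradients anywhere):
E = lambda w: float(np.sum(w ** 2))
w, state = np.ones(4), (np.full(4, 0.01), 1.0)
for _ in range(500):
    w, state = alopex_step(w, E, state, T=1e-4)
print(E(w))  # far below the initial E(w) = 4.0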
From trevor at research.att.com Fri Mar 4 16:28:00 1994
From: trevor at research.att.com (trevor@research.att.com)
Date: Fri, 4 Mar 94 16:28 EST
Subject: Report on Gaussian Mixtures available
Message-ID:

The following report is available via anonymous ftp.

FTP-host: research.att.com
FTP-filename: /dist/trevor/mda.ps.Z

Software for fitting these models in S will soon be available from the S archive at statlib at lib.stat.cmu.edu.

Discriminant Analysis by Gaussian Mixtures
Trevor Hastie and Robert Tibshirani

Fisher-Rao linear discriminant analysis (LDA) is a valuable tool for multigroup classification. LDA is equivalent to maximum likelihood classification assuming Gaussian distributions for each class. In this paper, we fit Gaussian mixtures to each class to facilitate effective classification in non-normal settings, especially when the classes are clustered. Low-dimensional views are an important by-product of LDA---our new techniques inherit this feature. We are able to control the within-class spread of the subclass centers relative to the between-class spread. Our technique for fitting these models permits a natural blend with nonparametric versions of LDA.

To retrieve from research.att.com:

unix> ftp research.att.com
Name (research.att.com:trevor): anonymous
Password: (use your email address)
ftp> cd dist/trevor
ftp> binary
ftp> get mda.ps.Z
ftp> quit
unix> uncompress mda.ps.Z
unix> lpr mda.ps

From jras at uned.es Fri Mar 4 20:55:07 1994
From: jras at uned.es (Jose Ramon Alvarez Sanchez)
Date: Fri, 4 Mar 1994 20:55:07 UTC+0100
Subject: IWANN'95 Call for Papers
Message-ID: <107*/S=jras/O=uned/PRMD=iris/ADMD=mensatex/C=es/@MHS>

INTERNATIONAL WORKSHOP ON ARTIFICIAL NEURAL NETWORKS
IWANN'95
Preliminary Announcement and First Call for Papers
Malaga - Costa del Sol, Spain
June 7 - 9, 1995

SPONSORED BY
IFIP (Working Group in Neural Computer Systems, WG10.6)
Spanish RIG IEEE Neural Networks Council
UK&RI communication chapter of IEEE
Spanish Computer Society chapter of IEEE
AEIA (IEEE affiliate society)

ORGANISED BY
Universidad de Malaga
UNED (Madrid)

IWANN'95, the third International Workshop on Artificial Neural Networks, will take place on the Spanish "Costa del Sol" (Malaga) from 7 to 9 June, 1995. This biennial meeting, with a focus on biological models and new computing paradigms, was first held in Granada (1991) and Sitges (1993), with a growing number of participants from more than 20 countries and with high-quality papers published by Springer-Verlag (LNCS 540 and 686).

SCOPE

From the computational viewpoint, standard neural network paradigms are nearly exhausted and some fresh air is needed. In this workshop, while staying with the powerful roots of neural computation (modularity, autonomy, distributed computation and self-programming via supervised or unsupervised learning), the focus is placed on biological modeling, the search for theory and design methodologies, and the bridge between connectionism and symbolic computation. IWANN's main objective is to offer an interdisciplinary forum for scientists and engineers from neurology, computer science, artificial intelligence, electronics, cognitive science and applied domains, aiming at brainstorming and innovative formulations of natural and artificial neural computation. It is the deep feeling of the IWANN organizers that this more complex, biologically inspired, and theoretically and methodologically supported approach will also provide us with more powerful tools for applied domains.

Contributions on the following or related topics are welcome.

TOPICS

1. Neuroscience: Principles, methodologies in brain research, modeling and simulation, central and peripheral neural coding, dendro-dendritic nets, local circuits, anatomical and physiological organizations, plasticity, learning and memory in natural neural nets, models of development and evolution, specific circuits in sensorial and motor pathways, networks in cerebral cortex.
2. Computational Models of Neurons and Neural Nets: Continuous (linear, high-order, recurrent), logic, sequential, inferential (object-oriented, production rules, frames), probabilistic, Bayesian, fuzzy and chaotic models, hybrid formulations, massive computation and learning-enabling structures for all these formulations.

3. Organizational Principles: The living organization, deterministic network dynamics, autopoiesis, self-organization, cooperative processes and emergent computation, synergetics, evolutive optimization and genetic algorithms.

4. Learning: Inspirations from the biological mechanisms of learning, supervised and unsupervised strategies, local self-programming, continuous learning, evolutive algorithms, symbolic-subsymbolic formulations.

5. Cognitive Science and AI: Neural networks for knowledge acquisition, multisensorial integration, perception, knowledge-based neural nets, inductive, deductive and abductive reasoning, memory mechanisms, natural language.

6. Neurosimulators: Languages, environments, parallelization, modularity, extensibility and benchmarks.

7. Hardware Implementation: VLSI, parallel architectures, neurochips, preprocessing networks, neurodevices, FPGAs, benchmarks, optical and other technologies.

8. Neural Networks for Perception: Low-level processing, segmentation, feature extraction, pattern recognition, adaptive filtering, noise reduction, texture, motion analysis, hybrid symbolic-neural architectures for artificial vision.

9. Neural Networks for Communication Systems: Modems and codecs, network management, digital communications.

10. Neural Networks for Control and Robotics: Systems identification, motion planning and control, adaptive and predictive control, navigation, real-time applications.

LOCATION

Malaga - Costa del Sol, June 7-9, 1995. Malaga, capital of the Costa del Sol, is strategically located on the southern coast of Spain. It is a genuine crossroads of communication and culture. Malaga is well known for its history (cathedral, historic down-town, Arabian citadel, Roman amphitheatre, ...) and excellent beaches. Malaga has many modern hotels and is easy to reach by car or plane; its international airport has direct flights to all major European capitals, to America, and to some destinations on other continents.

LANGUAGE

English will be the official language of IWANN'95. Simultaneous translation will not be provided.

CALL FOR PAPERS

The Programme Committee seeks original papers on the above-mentioned topics. Authors should pay special attention to explaining the theoretical and technical choices involved, point out possible limitations, and describe the current state of their work. Authors must take into account the following:

INSTRUCTIONS TO AUTHORS

Authors must submit four copies of full papers, not exceeding 8 pages in DIN-A4 format. The heading should be centered and include:
. Title in capitals.
. Name(s) of author(s).
. Address(es) of author(s).
. A 10 line abstract.

Three blank lines should be left between each of the above items, and four between the heading and the body of the paper; 1.6 cm left, right, top and bottom margins; single-spaced and not exceeding the 8 page limit. In addition, one sheet should be attached including the following information:
. Title and author(s) name(s).
. A list of five keywords.
. A reference to the Topics the paper relates to.
. Postal address, phone and fax numbers and E-mail (if available).

All received papers will be reviewed by the Programme Committee.
Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length (Springer-Verlag proceedings are expected).

IMPORTANT DATES
Second Call for Papers: September 1994
Final date for submission: January 15, 1995
Notification of acceptance: March 15, 1995
Workshop: June 7-9, 1995

CONTRIBUTIONS MUST BE SENT TO:
Prof. Jose Mira
Dpto. Informatica y Automatica, UNED
Senda del Rey, s/n
28040 MADRID (Spain)
Phone: +34 (1) 398-7155
Fax: +34 (1) 398-6697
Email: jose.mira at uned.es

GENERAL CHAIRMAN
Alberto Prieto, Unv. de Granada (E)

ORGANIZATION COMMITTEE
Joan Cabestany, Unv. Pltca. de Catalunya (E), Chairman
Senen Barro, Unv. de Santiago de Compostela (E)
Trevor Clarkson, King's College London (UK)
Dante Del Corso, Politecnico di Torino (I)
Ana Delgado, UNED. Madrid (E)
Karl Goser, Unv. Dortmund (G)
Jeanny Herault, INPG Grenoble (F)
K. Nicholas Leibovic, SUNY at Buffalo (USA)
Jose Mira, UNED. Madrid (E)
Federico Moran, Unv. Complutense. Madrid (E)
Stanislaw Osowski, Tech. Unv. Warsaw (Po)
Conrad Perez, Unv. de Barcelona (E)
Francisco Sandoval, Unv. de Malaga (E)
Juan A. Siguenza, Unv. Autonoma de Madrid (E)
Elena Valderrama, CNM-Unv. Autonoma de Barcelona (E)
Marley Vellasco, Pont. U. Catolica do Rio de Janeiro (Br)
Michel Verleysen, Unv. Catholique de Louvain (B)

LOCAL COMMITTEE
Francisco Sandoval, Unv. de Malaga (E), Chairman
Antonio Diaz, Unv. de Malaga (E)
Gonzalo Joya, Unv. de Malaga (E)
Francisco Vico, Unv. de Malaga (E)

TENTATIVE PROGRAMME COMMITTEE
Jose Mira, UNED. Madrid (E), Chairman
Carlos Acuna C., Unv. Santiago de Compostela (E)
Joshua Alspector, Bellcore (USA)
Sanjeev B. Ahuja, Nielsen A.I. Research & Development, Bannockburn (USA)
Igor Aleksander, Imperial College. London (UK)
Luis B. Almeida, INESC. Lisboa (P)
Shun-ichi Amari, Unv. Tokyo (Jp)
Michael Arbib, Unv. Southern California (USA)
Xavier Arreguit, CSEM SA (CH)
Francois Blayo, LERI-EERIE. Nimes (F)
Colin Campbell, University of Bristol (UK)
Jordi Carrabina, CNM-Universidad Autonoma de Barcelona (E)
Francisco Castillo, Unv. Pltca. de Catalunya (E)
Antoni Catala, Unv. Pltca. de Catalunya (E)
Gloria Cembrano, Instituto de Cibernetica. CSIC. Barcelona (E)
Leon Chua, Unv. California, Berkeley (USA)
Michael Cosnard, LIP. Ecole Normale Superieure de Lyon (F)
Marie Cottrell, Unv. Paris I (F)
Dante A. Couto B., Instituto de Informatica (Br)
Gerard Dreyfus, ESPCI. Paris (F)
F. K. Fogelman Soulie, Mimetics. Chatenay Malabry (F)
J. Simoes da Fonseca, Unv. Lisboa (P)
Kunihiko Fukushima, Unv. Osaka (Jp)
Hans Peter Graf, AT&T Bell Laboratories, New Jersey (USA)
Francesco Gregoretti, Politecnico di Torino (I)
Karl E. Grosspietsch, Mathematik und Datenverarbeitung (GMD) St. Augustin (D)
Mohamad H. Hassoun, Wayne State University (USA)
Jaap Hoekstra, Delft University of Technology (NL)
P. T. W. Hudson, Leiden University (NL)
Jose Luis Huertas, CNM-Universidad de Sevilla (E)
Paul G. A. Jespers, Universite Catholique de Louvain (B)
Simon Jones, IERI Loughborough University of Technology (UK)
Christian Jutten, INPG Grenoble (F)
H. Klar, Technische Universitat Berlin (D)
C. Koch, CalTech (USA)
Teuvo Kohonen, Helsinki Unv. of Techn. (Fin)
Michael D. Lemmon, University of Notre Dame (USA)
K. Nicholas Leibovic, SUNY at Buffalo, NY (USA)
Panos A. Ligomenides, Unv. of Maryland (USA)
Javier Lopez Aligue, Unv. de Extremadura (E)
Pierre Marchal, CSEM SA (CH)
Anthony N. Michel, University of Notre Dame (USA)
Roberto Moreno, Unv. Las Palmas Gran Canaria (E)
Jean Daniel Nicoud, EPFL (CH)
Josef A. Nossek, Tech. Univ. of Munich (D)
Julio Ortega, Unv. de Granada (E)
Marco Pacheco, Pont. U. Catolica do Rio de Janeiro (Br)
Conrad Perez, Unv. de Barcelona (E)
Francisco J. Pelayo, Unv. de Granada (E)
Franz Pichler, Johannes Kepler Univ. (A)
Ulrich Ramacher, Siemens AG. Munich (D)
J. Ramirez, Paradigma C.A. Caracas (V)
Leonardo Reyneri, Unv. di Pisa (I)
Tamas Roska, Hungarian Academy of Science. Budapest (H)
Peter A. Rounce, Unv. College London (UK)
V. B. David Sanchez, German Aerospace Research Establishment. Wessling (G)
E. Sanchez-Sinencio, Texas A&M University (USA)
David Sherrington, University of Oxford (UK)
Renato Stefanelli, Politecnico di Milano (I)
T. J. Stonham, Brunel-University of West London (UK)
John G. Taylor, King's College London (UK)
Carme Torras, Instituto de Cibernetica. CSIC. Barcelona (E)
Philip Treleaven, Unv. College London (UK)
Eric Vittoz, CSEM SA (CH)
Michel Weinfeld, Ecole Polytechnique Paris (F)
Bernard Widrow, Stanford University CA (USA)
R. Yager, Iona College NY (USA)

INFORMATION FORM to be returned as soon as possible to:

Prof. F. Sandoval
IWANN'95
Dept. Tecnologia Electronica
Universidad de Malaga
Pza. El Ejido, s/n
E-29013 Malaga
SPAIN
Phone: +34.5.213.13.52
Fax: +34.5.213.14.47
E-mail: iwann95 at ctima.uma.es

----------------------------------------------------------------
___ I wish to attend the Workshop
___ I intend to submit a paper

Tentative title: .................................................
.................................................................
Author(s): .......................................................
.................................................................
Related Topics: ..................................................
.................................................................
Last name: .......................................................
First name: ......................................................
Company/Organization: ............................................
.................................................................
Address: .........................................................
.................................................................
.................................................................
Postal code/Zip code: ............................................
City: ............................................................
State/Country: ...................................................
Phone: ...........................................................
Fax: .............................................................
E-mail: ..........................................................

From kolen-j at cis.ohio-state.edu Sun Mar 6 10:39:14 1994
From: kolen-j at cis.ohio-state.edu (john kolen)
Date: Sun, 6 Mar 1994 10:39:14 -0500
Subject: Overfitting in learning discrete patterns
In-Reply-To: <9403041300.AA05822@orchidee.dinsunnet> (message from Fabien Moutarde on Fri, 4 Mar 94 14:00:49 +0100)
Message-ID: <199403061539.KAA09725@pons.cis.ohio-state.edu>

Fabien.Moutarde at aar.alcatel-alsthom.fr wrote:

  I would like to know how the weights were initialized. Were they taken
  from a uniform distribution on some fixed interval, regardless of the
  network architecture? Which interval?

You are asking the right questions. Are you aware of (Kolen & Pollack, 1990), which explores the effects of initial weights on back propagation?

  if you begin learning with some neurons already in their non-linear
  regime somewhere in learning space, then the initial function realized
  by the network is not smooth, and the irregularities are likely to
  remain between learning points and to produce overfitting. This implies
  that the bigger the network, the lower the initial weights should be.

The last sentence does not necessarily follow from the previous ones. The magnitude of the weights is less important than the magnitude of the *net input* reaching the unit. For instance, if the network operates in an environment in which there are between-unit correlations in the input, then large-magnitude weights can effectively become small-magnitude weights from the perspective of the nonlinear squashing function. In this situation, I would predict that large weights actually help in the distribution of error to the previous layer.

John Kolen

References

J. F. Kolen and J. B. Pollack, 1990. Backpropagation is Sensitive to Initial Conditions. _Complex Systems_, 4:3, pp. 269-280. Available from neuroprose as kolen.bpsic.*.ps.Z (8 files).
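The point under discussion is easy to check numerically. The sketch below is illustrative and comes from neither poster's code: it draws initial weights uniformly in [-r, r] with r scaled by fan-in, in the spirit of Moutarde's remark, and then measures the quantity Kolen points at, namely the spread of the net input that actually reaches the squashing function, with and without between-unit correlations in the input.

import numpy as np

rng = np.random.RandomState(0)

def init_layer(fan_in, fan_out, a=1.0):
    # Uniform in [-r, r], r = a/sqrt(fan_in): for roughly independent,
    # unit-scale inputs the net input then has standard deviation O(a),
    # keeping sigmoid units out of their saturated regime at the start.
    r = a / np.sqrt(fan_in)
    return rng.uniform(-r, r, size=(fan_in, fan_out))

W = init_layer(50, 10)
X_indep = rng.randn(1000, 50)                 # uncorrelated inputs
X_corr = X_indep + 2.0 * rng.randn(1000, 1)   # shared component => correlation
print("net input std, independent inputs:", (X_indep @ W).std())
print("net input std, correlated inputs: ", (X_corr @ W).std())
# Same weight range, very different net inputs: the correlated case can
# saturate the nonlinearity anyway, which is Kolen's caveat about judging
# by weight magnitude alone.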
From robbie at psych.rochester.edu Sun Mar 6 16:21:17 1994
From: robbie at psych.rochester.edu (Robbie Jacobs)
Date: Sun, 6 Mar 1994 16:21:17 -0500
Subject: postdoc position(s)
Message-ID: <199403062121.QAA17782@biku.psych.rochester.edu>

Postdoctoral Fellowship(s) Available

The Center for Sciences of Language at the University of Rochester anticipates having one, and possibly two, NIH-funded post-doctoral fellowships available for the 1994-95 academic year. If two positions are available, preference for one of the positions will be given to candidates who already have the Ph.D. and can begin before July 1, 1994. The appointment will be for one year, with the possibility of renewal for a second year.

The Center brings together faculty and students with interests in spoken and signed languages from the departments of Linguistics, Computer Science, Psychology, and Philosophy, and the interdisciplinary programs in Cognitive Science and Neuroscience. We encourage applicants from any of these disciplines who have expertise in any area of natural language. We are particularly interested in post-doctoral fellows who want to contribute to an interdisciplinary community.

Applications should be sent to Michael K. Tanenhaus, University of Rochester, Department of Psychology, Meliora Hall, Rochester, NY 14627. Include a vita, sample reprints and/or pre-prints, and a statement of research and training interests, and arrange for letters of reference from at least three referees. To guarantee full consideration, applications should be received by April 1.

The University of Rochester is an equal opportunity employer. We encourage applications from women and from minorities.

From lautrup at connect.nbi.dk Mon Mar 7 10:49:27 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Mon, 7 Mar 94 10:49:27 MET
Subject: preprint (fwd)
Message-ID:

The preprint announced below has not yet arrived at archive.cis.ohio-state.edu. It may be retrieved through anonymous ftp from connect.nbi.dk (129.142.100.17) in the neuroprose directory.

> The following preprint is now available:
>
> FTP-host: archive.cis.ohio-state.edu
> FTP-file: pub/neuroprose/hertz.nonlin.ps.Z
>
> Authors: J. Hertz, A. Krogh, B. Lautrup and T. Lehmann
>
> Title: Non-Linear Back-propagation: Doing Back-Propagation without
> Derivatives of the Activation Function.
> Size: 13 pages
>
> Abstract:
>
> The conventional linear back-propagation algorithm is replaced by a
> non-linear version, which avoids the necessity of calculating the
> derivative of the activation function. This may be exploited in hardware
> realizations of neural processors. In this paper we derive the non-linear
> back-propagation algorithms in the framework of recurrent back-propagation
> and present some numerical simulations of feed-forward networks on the
> NetTalk problem. A discussion of implementation in analog VLSI electronics
> concludes the paper.

--
Benny Lautrup, professor
Computational Neural Network Center (CONNECT)
Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen
Denmark
Telephone: +45-3532-5200
Direct: +45-3532-5358
Fax: +45-3142-1016
e-mail: lautrup at connect.nbi.dk

From PREFENES at NEPTUNE.FAC.CS.CMU.EDU Mon Mar 7 13:04:03 1994
From: PREFENES at NEPTUNE.FAC.CS.CMU.EDU (Paul Refenes)
Date: Mon, 7 Mar 1994 13:04:03 BST
Subject: Paper Pre-print
Message-ID:

The following pre-print is available. Please send requests to H.tracey at lbs.lon.ac.uk. Paper copy only.

MEASURING THE PERFORMANCE OF NEURAL NETWORKS IN MODERN PORTFOLIO MANAGEMENT: TESTING STRATEGIES AND METRICS

A. N. REFENES
Department of Decision Science
London Business School
Sussex Place, Regents Park
London NW1 4SA, UK

ABSTRACT

Neural networks have attracted much interest in financial engineering and modern portfolio management, with many researchers claiming that they signal the beginning of a new era in the evolution of forecasting and decision support systems. Various performance figures are being quoted to support these claims, but there is rarely a comprehensive testing strategy to quantify the performance of neural networks in ways that are meaningful to the practitioner in the field. In the context of asset management, some of the quoted figures could be at best misleading and others are often irrelevant. In this paper we review some well-known metrics for measuring estimator performance both in absolute and relative terms, measuring the profitability of the final objective function, and analysing the characteristics of the equity curves.

From gfh at eng.cam.ac.uk Mon Mar 7 10:59:48 1994
From: gfh at eng.cam.ac.uk (gfh@eng.cam.ac.uk)
Date: Mon, 7 Mar 94 15:59:48 GMT
Subject: Technical report available by anonymous ftp
Message-ID: <6641.9403071559@atom.eng.cam.ac.uk>

The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department.

EXPERIMENTS WITH SIMPLE HEBBIAN-BASED LEARNING RULES IN PATTERN-CLASSIFICATION TASKS

George F. Harpur and Richard W. Prager
Technical Report CUED/F-INFENG/TR168
Cambridge University Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England

Abstract

This report presents a neural network architecture which performs pattern classification using a simple form of learning based on the Hebb rule. The work was motivated by the desire to decrease computational complexity and to maintain a greater degree of biological plausibility than most other networks designed to perform similar tasks. A method of pre-processing the inputs to provide a distributed representation is described. A scheme for increasing the power of the network using a layer of `feature detectors' is introduced: these use an unsupervised competitive learning scheme, again based on Hebbian learning. Simulation results from testing the networks on two `real-world' problems are presented, and compared to those produced by other types of neural network.

************************ How to obtain a copy ************************

Via FTP:

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get harpur_tr168.ps.Z
ftp> quit
unix> uncompress harpur_tr168.ps.Z
unix> lpr harpur_tr168.ps (or however you print PostScript)
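As a rough illustration of the two ingredients named in the abstract above, the sketch below combines a plain Hebb update (with renormalization to bound weight growth) and a winner-take-all layer of `feature detectors'. It is a generic textbook construction, not the architecture of the report, and all names are invented for the example.

import numpy as np

def hebb_update(w, x, lr=0.1):
    # Plain Hebb rule, delta_w = lr * y * x, with renormalization so the
    # weight vector cannot grow without bound
    y = float(w @ x)
    w = w + lr * y * x
    return w / np.linalg.norm(w)

def competitive_update(W, x, lr=0.1):
    # Unsupervised competitive learning: only the best-matching feature
    # detector (row of W) is updated, Hebbian-style, toward the input
    winner = int(np.argmax(W @ x))
    W[winner] = hebb_update(W[winner], x, lr)
    return W

# Toy run: 4 detectors self-organize on random 16-dimensional inputs
rng = np.random.RandomState(0)
W = rng.randn(4, 16)
W /= np.linalg.norm(W, axis=1, keepdims=True)
for _ in range(200):
    W = competitive_update(W, rng.randn(16))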
From dwm at signal.dra.hmg.gb Mon Mar 7 05:36:25 1994
From: dwm at signal.dra.hmg.gb (Daniel McMichael)
Date: Mon, 7 Mar 1994 10:36:25 GMT
Subject: IEE ANN 95 - CALL FOR PAPERS
Message-ID:

Fourth International Conference on Artificial Neural Networks
Churchill College, Cambridge, UK
26-28 June 1995

********************************************************************
******************* Call for papers ************************
********************************************************************

Objective

This International Conference will produce an up-to-date report on the current state of research in the field of artificial neural networks, including theory, fundamental structures, learning algorithms, implementation, vision, speech, robotics, control, medical and financial applications. The conference seeks original contributions spanning the entire field of neural networks. Suggestions for possible topics include:

Architectures and learning algorithms: Theory and design of neural networks, comparison with classical techniques.

Applications and industrial systems: Vision and image processing, speech and language processing, communications systems, biomedical systems, robotics and control, financial and business systems.

Implementations: Hardware implementations (analogue and digital), VLSI devices or systems, optoelectronics.

Contributions

Papers will be selected on the basis of wide audience appeal, ease of understanding and potential stimulation of broad-ranging discussion. The quality of preparation and presentation expected will be very high. In addition to lecture theatre presentations, a selection of papers will be presented in poster sessions. This has become an increasingly popular method of presentation, as it offers good opportunities for one-to-one discussion. If you would prefer to present your paper in this way, please indicate so when sending in your abstract. In addition, please indicate which subject area (see list below) the abstract should be included in.

Submissions should be in the form of an extended abstract of up to 4 pages, to be received by the secretariat on or before 30 November 1994. The abstract must indicate clearly the novelty of the proposed contribution. Authors whose abstracts are selected will be required to provide a typescript of a maximum of 6 pages by 31 March 1995.

Topic areas

1. Vision
2. Speech
3. Control and robotics
4. Biomedical
5. Financial and business
6. Signal processing
7. Radar/sonar
8. Data fusion
9. Analogue
10. Digital
11. Optical
12. Learning algorithms
13. Network architectures
14. Functional approximations
15. Statistical methods
16. None of the above

Deadlines

Intending authors should note the following deadline dates:
Receipt of extended abstract: 30 November 1994
Notification of acceptance: Late January 1995
Receipt of full typescript: 31 March 1995

Bursary scheme

Limited financial support may be available to overseas participants presenting papers at this conference. Please indicate on the reply slip if you wish to receive further information.
Organising Committee
Dr C J Satchwell (Chairman), Neural Statistics Ltd
Prof C Harris, Southampton University
Prof D Lowe, Aston University
Dr D McMichael, Defence Research Agency
Dr M Niranjan, Cambridge University
Dr P Refenes, London Business School
Dr W A Wright, British Aerospace

Corresponding Members
L Giles, NEC, USA
R Goodman, Caltech, USA
P Lieshout, Advanced Information Processing, The Netherlands

Organisers

The conference is being organised by the Electronics Division of the Institution of Electrical Engineers.

For further information and the address for submission of abstracts:

ANN95 Secretariat
IEE Conference Department
Savoy Place
London WC2R 0BL, UK
Tel: 44(0)71 344 5478/5477
Fax: 44(0)71 497 3633
Email: sgriffiths at iee.org.uk

**************************************************************************
!!!!!!!!!!!!!!!!! do NOT reply to dwm at signal.dra.hmg.gb !!!!!!!!!!!!!!!!!!
**************************************************************************

From tedwards at src.umd.edu Mon Mar 7 15:34:31 1994
From: tedwards at src.umd.edu (Thomas Grant Edwards)
Date: Mon, 7 Mar 1994 15:34:31 -0500 (EST)
Subject: VLSI Phase-locking architecture NIPS 6 pre-print
Message-ID:

**DO NOT FORWARD TO OTHER GROUPS**

The file andreou.vlsi-phase-lock.ps.Z is now available for downloading from the Neuroprose repository:

VLSI Phase Locking Architectures for Feature Linking in Multiple Target Tracking Systems

Andreas G. Andreou (The Johns Hopkins University)
Thomas G. Edwards (University of Maryland)

ABSTRACT: Recent physiological research has shown that synchronization of oscillatory responses in striate cortex may code for relationships between visual features of objects. A VLSI circuit has been designed to provide rapid phase-locking synchronization of multiple oscillators to allow for further exploration of this neural mechanism. By exploiting the intrinsic random transistor mismatch of devices operated in subthreshold, large groups of phase-locked oscillators can be readily partitioned into smaller phase-locked groups. A multiple target tracker for binary images is described utilizing this phase-locking architecture. A VLSI chip has been fabricated and tested to verify the architecture. The chip employs Pulse Amplitude Modulation (PAM) to encode the output at the periphery of the system.

(NIPS 6 Pre-Print)

From risto at cs.utexas.edu Mon Mar 7 23:43:02 1994
From: risto at cs.utexas.edu (Risto Miikkulainen)
Date: Mon, 7 Mar 94 22:43:02 -0600
Subject: Papers available on connectionist NLP, neuro-evolution/Othello
Message-ID: <9403080443.AA26396@cascais.cs.utexas.edu>

The following papers on
- processing complex sentences,
- disambiguation in distributed parsing networks,
- learning German verb inflections, and
- evolving networks to play Othello
are available by anonymous ftp from our archive site at cs.utexas.edu:pub/neural-nets/papers.

-- Risto Miikkulainen

-------------------------------------------------------------------------
miikkulainen.subsymbolic-caseroles.ps.Z (21 pages)

SUBSYMBOLIC CASE-ROLE ANALYSIS OF SENTENCES WITH EMBEDDED CLAUSES

Risto Miikkulainen
Department of Computer Sciences, The University of Texas at Austin.
Technical Report AI93-202, July 1993.

A distributed neural network model called SPEC for processing sentences with recursive relative clauses is described. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules.
The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. SPEC exhibits plausible memory degradation as the depth of the center embeddings increases, its memory is primed by earlier constituents, and its performance is aided by semantic constraints between the constituents. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system. In this way, in contrast to earlier subsymbolic systems, parsing is modeled as a controlled high-level process rather than one based on automatic reflex responses.

-------------------------------------------------------------------------
mayberry.disambiguation.ps.Z (10 pages)

LEXICAL DISAMBIGUATION BASED ON DISTRIBUTED REPRESENTATIONS OF CONTEXT FREQUENCY

Marshall R. Mayberry, III, and Risto Miikkulainen
Department of Computer Sciences, The University of Texas at Austin.
Technical Report AI94-217, February 1994.

A model for lexical disambiguation is presented that is based on combining the frequencies of past contexts of ambiguous words. The frequencies are encoded in the word representations and define the words' semantics. A Simple Recurrent Network (SRN) parser combines the context frequencies one word at a time, always producing the most likely interpretation of the current sentence at its output. This disambiguation process is most striking when the interpretation involves semantic flipping, that is, an alternation between two opposing meanings as more words are read in. The sense of throwing a ball alternates between dance and baseball as indicators such as the agent, location, and recipient are input. The SRN parser demonstrates how the context frequencies are dynamically combined to determine the interpretation of such sentences. We hypothesize that other aspects of ambiguity resolution are based on similar mechanisms as well, and can be naturally approached from the distributed connectionist viewpoint.

-------------------------------------------------------------------------
westermann.inflections.ps.Z (9 pages)

VERB INFLECTIONS IN GERMAN CHILD LANGUAGE: A CONNECTIONIST ACCOUNT

Gert Westermann(1) and Risto Miikkulainen(2)
(1) Department of Computer Science, Technical University of Braunschweig.
(2) Department of Computer Sciences, The University of Texas at Austin.
Technical Report AI94-216, February 1994.

The emerging function of verb inflections in German language acquisition is modeled with a connectionist network. A network that is initially presented only with a semantic representation of sentences uses the inflectional verb ending -t to mark those sentences that are low in transitivity, whereas all other verb endings occur randomly. This behavior matches an early stage in German language acquisition where verb endings encode a similar semantic rather than a grammatical function. When information about the surface structure of the sentences is added to the input data, the network learns to use the correct verb inflections in a process very similar to children's learning. This second phase is facilitated by the semantic phase, suggesting that there is no shift from semantic to grammatical encoding, but rather an extension of the initial semantic encoding to include grammatical information. This can be seen as evidence for the strong version of the functionalist hypothesis of language acquisition.
------------------------------------------------------------------------- moriarty.othello.ps.Z (6 pages) EVOLVING COMPLEX OTHELLO STRATEGIES USING MARKER-BASED GENETIC ENCODING OF NEURAL NETWORKS David E. Moriarty and Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712. Technical Report AI93-206, September 1993. A system based on artificial evolution of neural networks for developing new game playing strategies is presented. The system uses marker-based genes to encode nodes in a neural network. The game-playing networks were forced to evolve sophisticated strategies in Othello to compete first with a random mover and then with an alpha-beta search program. Without any direction, the networks discovered first the standard positional strategy, and subsequently the mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolution can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment. [ see also moriarty.focus.ps.Z: "Evolving Neural Networks to Focus Minimax Search" ]  From risto at cs.utexas.edu Mon Mar 7 23:44:41 1994 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Mon, 7 Mar 94 22:44:41 -0600 Subject: ..and episodic memory, cortical self-organization, schema-based vision Message-ID: <9403080444.AA26400@cascais.cs.utexas.edu> The following papers on - the capacity of episodic memory, - self-organization in the primary visual cortex, and - schema-based scene analysis are available by anonymous ftp from cs.utexas.edu:pub/neural-nets/papers as well. As always, comments are welcome. -- Risto Miikkulainen ------------------------------------------------------------------------- moll.convergence-zone.ps.Z (6 pages) THE CAPACITY OF CONVERGENCE-ZONE EPISODIC MEMORY Mark Moll(1), Risto Miikkulainen(2), Jonathan Abbey(3) (1) Department of Computer Science, University of Twente, the Netherlands. (2) Department of Computer Sciences, The University of Texas at Austin. (3) Applied Research Laboratories, Austin, TX. Technical Report AI93-210, December 1993. Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. This paper presents a computational model of episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern which in turn reactivates the entire stored pattern. A worst-case analysis shows that with realistic-size layers, the memory capacity of the model is several times larger than the number of units in the model, and could account for the large capacity of human episodic memory. ------------------------------------------------------------------------- sirosh.unified.ps.Z (8 pages) A UNIFIED NEURAL NETWORK MODEL FOR THE SELF-ORGANIZATION OF TOPOGRAPHIC RECEPTIVE FIELDS AND LATERAL INTERACTION Joseph Sirosh and Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin. Technical Report AI94-213, January 1994. A self-organizing neural network model for the simultaneous development of topographic receptive fields and lateral interactions in cortical maps is presented. 
Both afferent and lateral connections adapt by the same Hebbian mechanism in a purely local and unsupervised learning process. Afferent input weights of each neuron self-organize into hill-shaped profiles, receptive fields organize topographically across the network, and unique lateral interaction profiles develop for each neuron. The resulting self-organized structure remains in a dynamic and continuously-adapting equilibrium with the input. The model can be seen as a generalization of previous self-organizing models of the visual cortex, and provides a general computational framework for experiments on receptive field development and cortical plasticity. The model also serves to point out general limits on activity-dependent self-organization: when multiple inputs are presented simultaneously, the receptive field centers need to be initially ordered for stable self-organization to occur. [see also sirosh.cooperative-selforganization.tar: "Cooperative Self- Organization of Afferent and Lateral Connections in Cortical Maps" ] ------------------------------------------------------------------------- leow.analyzing.ps.Z (11 pages) ANALYZING SCENES IN A NEURAL NETWORK MODEL OF SCHEMA-BASED VISION Wee Kheng Leow, Risto Miikkulainen Department of Computer Sciences, The University of Texas at Austin. Technical Report AI94-214, February 1994. A novel approach to object recognition and scene analysis based on neural network representation of visual schemas is described. Given an input scene, the VISOR system focuses attention successively at each component, and the schema representations cooperate and compete to match the inputs. The schema hierarchy is learned from examples through unsupervised adaptation and reinforcement learning. VISOR learns that some objects are more important than others in identifying a scene, and that the importance of spatial relations varies depending on the scene. It learns three types of visual schemas: (1) rigid spatial layouts of components used primarily for describing objects; (2) collections of components located anywhere in the scene for recognizing certain man-made scenes (such as a dining table); and (3) rough spatial layouts of regions of uniform texture and no specific shape that are often found in natural scenes (such as a road scene). Compared to traditional rule-based systems, VISOR shows remarkable robustness of recognition, and is able to indicate the confidence of its analysis as the inputs differ increasingly from the schemas. With such properties, VISOR is a promising first step towards a general vision system that can be used in different applications after learning the application-specific schemas. 
From terry at salk.edu Wed Mar 9 03:38:36 1994
From: terry at salk.edu (Terry Sejnowski)
Date: Wed, 9 Mar 94 00:38:36 PST
Subject: FAX for Telluride Workshops
Message-ID: <9403090838.AA23081@salk.edu>

Two workshops to be held in Telluride, Colorado:

NEUROMORPHIC ANALOG VLSI SYSTEMS
Sunday, July 3 to Saturday, July 9, 1994

SYSTEMS LEVEL MODELS OF VISUAL BEHAVIOR
Sunday, July 10 to Saturday, July 16, 1994

Complete applications should be sent by March 10, 1994 to:

Terrence Sejnowski
The Salk Institute
10010 North Torrey Pines Road
La Jolla, CA 92037
FAX: (619) 587 0417

-----

From roy at mbfys.kun.nl Wed Mar 9 04:32:54 1994
From: roy at mbfys.kun.nl (Roy Glasius)
Date: Wed, 9 Mar 94 10:32:54 +0100
Subject: paper available
Message-ID: <9403090932.AA15327@augustus.mbfys.kun.nl>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/glasius.labyrinth.ps.Z

**DO NOT FORWARD TO OTHER GROUPS**

The file glasius.labyrinth.ps.Z is now available for copying from the Neuroprose repository: (16 pages)

NEURAL NETWORK DYNAMICS FOR PATH PLANNING AND OBSTACLE AVOIDANCE.

Roy Glasius, Andrzej Komoda, Stan C.A.M. Gielen. University of Nijmegen.

ABSTRACT

A model of a topologically organized neural network of a Hopfield type with nonlinear analog neurons is shown to be very effective for path planning and obstacle avoidance. This deterministic system can rapidly provide a proper path, from any arbitrary start position to any target position, avoiding both static and moving obstacles of arbitrary shape. The model assumes that an (external) input activates a target neuron, corresponding to the target position, and specifies obstacles in the topologically ordered neural map. The path follows from the neural network dynamics and the neural activity gradient in the topologically ordered map. The analytical results are supported by computer simulations to illustrate the performance of the network. (Neural Networks preprint)

Roy Glasius, Department of Medical Physics and Biophysics, University of Nijmegen, Geert Grooteplein Noord 21, 6525 EZ Nijmegen, The Netherlands, tel: +31-80615040, email: roy at mbfys.kun.nl.

From P.McKevitt at dcs.shef.ac.uk Thu Mar 10 04:37:25 1994
From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt)
Date: Thu, 10 Mar 94 09:37:25 GMT
Subject: No subject
Message-ID: <9403100937.AA03327@dcs.shef.ac.uk>

THE UNIVERSITY OF SHEFFIELD

The Department of Computer Science wishes to recruit a Lecturer Grade A to a fixed 5-year appointment arising from the award of an SERC Advanced Research Fellowship to Dr. P. Mc Kevitt. The lectureship will be tenable from 1/10/94, and applications are invited from anyone with research interests in the following areas:

Cognitive Systems Computational Models of Hearing Speech Technology Natural Language Processing Computer Graphics Intelligent Tutoring Systems Computer Argumentation Connectionist Language Processing Formal Methods and Software Engineering Theory of Computer Science Software and systems engineering Communication Networks Neural Networks Parallel Systems Safety Critical Systems Parallel Databases CASE Tools for Parallel Systems

Further details are available from the Department of Computer Science. Closing date for applications: 8th April, 1994.

Department of Computer Science
Regent Court
University of Sheffield
211 Portobello Street
GB- S1 4DP, Sheffield
England, UK, EU.
e-mail: dept at dcs.shef.ac.uk
fax: +44 742 780972
phone: +44 742 825590

From isabelle at neural.att.com Fri Mar 11 12:08:37 1994
From: isabelle at neural.att.com (Isabelle Guyon)
Date: Fri, 11 Mar 94 12:08:37 EST
Subject: UNIPEN project of data exchange and recognizer benchmarks
Message-ID: <9403111708.AA06471@neural>

- > - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - < -
- > UNIPEN project of data exchange and recognizer benchmarks < -
- > - > - > - > - > - > - > - > - < - < - < - < - < - < - < - < - < -

Isabelle Guyon and Lambert Schomaker
- > - > - > - > - > - < - < - < - < - < - <
March 1994

Content:
I - UNIPEN ftp site.
II - Scrib-L mailing list.
III - Tentative schedule for the first UNIPEN benchmark.
IV - Information on the IAPR and the Technical Committee 11.
V - Information on the Linguistic Data Consortium.
VI - Information on the US National Institute of Standards and Technology.
VII - Wish list.

Abstract: UNIPEN is a project of data exchange and benchmarks for on-line handwriting recognition, started at the initiative of Technical Committee 11 of the IAPR. The data of concern may include handprint and cursive from various alphabets, signatures and gestures captured by a digitizing device providing the pen trajectory. Dozens of companies and universities have already joined UNIPEN and participated in defining a standard data format. These data will be provided by the participants in this common data format and distributed by the Linguistic Data Consortium (LDC). We are pleased to confirm that a benchmark organized by the US National Institute of Standards and Technology (NIST) will take place this year. It will be restricted to the Latin alphabet.

Subscription: To subscribe to this newsletter, please send the following information to: isabelle at neural.att.com

Name:
Affiliation:
Address:
Phone:
Fax:
Email:

From jordan at psyche.mit.edu Fri Mar 11 17:49:00 1994
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Fri, 11 Mar 94 17:49:00 EST
Subject: symposium announcement
Message-ID:

Control of the Physical World by Intelligent Agents

AAAI 1994 Fall Symposium
November 4-6, 1994
The Monteleone Hotel, New Orleans, Louisiana

Call for Participation

The Problem

An intelligent agent, interacting with the physical world, must cope with a wide range of demands. Different scientific and engineering disciplines, with different abstractions of the world, have found different "pieces of the puzzle" for the problem of how the agent can successfully control its world. These disciplines include:

- AI
  = qualitative reasoning
  = planning
  = machine learning
  = intelligently guided numerical simulation
- control theory
- dynamical systems
- fault diagnosis
- fuzzy logic and systems
- neural nets
- computer vision
- robotics

The goal of this symposium is to attempt to understand the puzzle as a whole, by bringing together researchers with experience assembling two or more pieces. The emphasis will be on learning from successful projects in this area that exploit results or methods from several disciplines.

Communication

Abstractly, the important questions will be:
- What are the strengths and weaknesses of each piece of the puzzle?
- How do we put together two pieces to exploit their strengths and avoid their weaknesses?
- How do we reconcile the different conceptual frameworks to make different approaches mutually comprehensible?
In order to make our discussions mutually comprehensible, participants should relate their work to one of a small number of everyday tasks:
- vacuuming the floors in an ordinary house, coping with furniture, pets, trash, etc.
- controlling a process such as a pressure-cooker, including set-up, start-up, normal operation, anticipating and handling emergencies, shut-down, and clean-up.
- automated driving of a car through city and/or highway traffic, including learning spatial structure and using maps.
- learning to understand and control one's own sensory-motor system, including seeing, grabbing, walking, running, bicycling, juggling, etc.

Format:

The symposium will be organized around a few presentations and lots of discussion. In some sessions, a successful project will be presented and critiqued. In others, a problem will be posed, and relevant contributions collected and evaluated. The working papers will be distributed (we hope) in advance, so participants can familiarize themselves with each other's positions before the symposium. We expect conversations of the form:
- "What problem are you working on?"
- "Why is that important?"
- "How can I help you?"
- "How can you help me?"

Attendance:

Attendance at the workshop will be limited. Some attention will be given to balance among areas, but the primary criteria will be successful synthesis of multiple approaches to intelligent agenthood, and the ability of the participant to communicate across discipline boundaries. In addition to invited participants, a limited number of other interested parties will be able to register in each symposium on a first-come, first-served basis. Registration will be available by mid-July 1994. To obtain registration information, write to the AAAI at 445 Burgess Drive, Menlo Park, CA 94025 (fss at aaai.org).

Submission requirements:

Papers should focus on one of the above everyday tasks (or a task of similar familiarity and concreteness). It would be helpful to include a glossary of key concepts to help bring the reader into your conceptual framework. Five copies of either full papers (twenty pages max) or short position papers (five pages max) should be sent to:

Benjamin Kuipers
Co-chair, AAAI Intelligent Agent Workshop
Computer Sciences Department
University of Texas at Austin
Austin, Texas 78712 USA

Dates:
- Submissions due: April 15, 1994.
- Notification by: May 17, 1994.
- Final versions due: August 19, 1994.

Workshop committee: Benjamin Kuipers, University of Texas, co-chair; Lyle Ungar, University of Pennsylvania, co-chair; Piero Bonissone, General Electric; Jim Hendler, University of Maryland; Michael Jordan, MIT.

Sponsored by the American Association for Artificial Intelligence
445 Burgess Drive, Menlo Park, CA 94025
(415) 328-3123
fss at aaai.org

From greiner at scr.siemens.com Fri Mar 11 18:16:49 1994
From: greiner at scr.siemens.com (Russell Greiner)
Date: Fri, 11 Mar 1994 18:16:49 -0500
Subject: CFP: "Relevance" Symposium
Message-ID: <199403112316.SAA14487@eagle.siemens.com>

==============================================================================

AAAI 1994 Fall Symposium

RELEVANCE

4-6 November 1994
The Monteleone Hotel, New Orleans, Louisiana

== Call for Participation ==

With too little information, reasoning and learning systems cannot work effectively. Surprisingly, too much information can also cause the performance of these systems to degrade, in terms of both accuracy and efficiency.
It is therefore important to determine what information must be preserved, or more generally, to determine how best to cope with superfluous information. The goal of this workshop is a better understanding of this topic, relevance, with a focus on techniques for improving a system's performance (along some dimension) by ignoring or de-emphasizing irrelevant and superfluous information. These techniques will clearly be of increasing importance as knowledge bases, and learning systems, become more comprehensive to accommodate real-world applications.

There are many forms of irrelevancy. In many contexts (including both deduction and induction), the initial theory may include more information than the task requires. Here, the system may perform more effectively if certain irrelevant *facts* (or nodes in a neural net or Bayesian network) are ignored or deleted. In the context of learning, certain *attributes* of each individual sample may be irrelevant in that they will play essentially no role in the eventual classification or clustering. Also, the learner may choose to view certain *samples* as irrelevant, knowing that they contain essentially no new information. Yet another flavor of irrelevance arises during the course of a general computation: a computing process can ignore certain *intermediate results*, once it has established that they will not contribute to the eventual answer; consider alpha-beta pruning or conspiracy numbers in game-playing and other contexts, or control heuristics in derivation.

== Submission Information ==

Potential attendees should submit a one-page summary of their relevant research, together with a set of their relevant papers (pun unavoidable). People wishing to present material should also submit a 2000-word abstract. We invite papers that deal with any aspect of this topic, including characterizations of irrelevancies, ways of coping with superfluous information, ways of detecting irrelevancies and focusing on relevant information, and so forth; we are particularly interested in studies that suggest ways to improve the efficiency or accuracy of reasoning systems (including question-answerers, planners, diagnosticians, and so forth) or to improve the accuracy, sample complexity, or computational or space requirements of learning processes. We encourage empirical studies and cognitive theories, as well as theoretical results.

We prefer plain-text, stand-alone LaTeX or Postscript submissions sent by electronic mail to greiner at learning.scr.siemens.com. Otherwise, please mail three copies to

Russell Greiner
"Relevance Symposium"
Siemens Corporate Research, Inc
755 College Road East
Princeton, NJ 08540-6632

In either case, the submission must arrive by 15 Apr 1994.

== Important Dates ==
- Submissions due 15 April 1994
- Notification of acceptance 17 May 1994
- Working notes mailed out 20 Sept 1994
- Fall Symposium Series 4-6 Nov 1994

== Organizing Committee ==
Russ Greiner (co-chair, Siemens Corporate Research, greiner at learning.scr.siemens.com)
Yann Le Cun (AT&T Bell Laboratories)
Nick Littlestone (NEC Research Institute)
David McAllester (MIT)
Judea Pearl (UCLA)
Bart Selman (AT&T Bell Laboratories)
Devika Subramanian (co-chair, Cornell, devika at cs.cornell.edu)

== Attendance ==

The symposium will be limited to between forty and sixty participants. In addition to invited participants, a limited number of other interested parties will be able to register on a first-come, first-served basis. Registration will be available by mid-July 1994.
To obtain registration information, contact AAAI at fss at aaai.org; (415) 328-3123; or 445 Burgess Drive, Menlo Park, CA 94025.

== Sponsored by ==
American Association for Artificial Intelligence as part of the AAAI 1994 Fall Symposium Series.

From mm at santafe.edu Sun Mar 13 16:16:38 1994
From: mm at santafe.edu (Melanie Mitchell)
Date: Sun, 13 Mar 94 14:16:38 MST
Subject: Job available
Message-ID: <9403132116.AA05077@wupatki>

JOB AVAILABLE: INTERVAL RESEARCH POSTDOCTORAL FELLOWSHIP
IN ADAPTIVE COMPUTATION AT THE SANTA FE INSTITUTE

The Santa Fe Institute has an opening for a Postdoctoral Fellow in Adaptive Computation beginning in September, 1994. The position is sponsored by Interval Research Corporation. The fellowship will last for one to two years.

The Institute's research program is devoted to the study of complex systems, especially complex adaptive systems. SFI's Adaptive Computation program is an interdisciplinary effort focusing on computational aspects of the study of complex adaptive systems. Its purpose is to make fundamental progress on issues in computer science that are related to complex adaptive systems, and to export the results to researchers in other fields. These issues include both computational models of complex adaptive systems and the theory and application of adaptive algorithms inspired by natural systems. Systems and techniques currently under study at the Santa Fe Institute include genetic algorithms, classifier systems, neural networks, and other adaptive computation techniques; the immune system; biomolecular sequence and structure; the origin of life; artificial life; models of evolution; the physics of information; nonlinear modeling and prediction; the economy; and others.

Candidates should have a Ph.D. (or expect to receive one before September, 1994) and should have backgrounds in computer science, mathematics, economics, theoretical physics or chemistry, game theory, cognitive science, theoretical biology, dynamical systems theory, or related fields. A strong background in computational approaches is essential, as is an interest in interdisciplinary work. Evidence of these interests, in the form of previous research experience and publications, is helpful.

Applicants should submit a curriculum vitae, list of publications, and statement of research interests, and arrange for three letters of recommendation to be sent. Incomplete applications will not be processed. All application materials must be received by April 15, 1994. Decisions will be made in early May. Send applications to:

Interval Research Postdoctoral Committee, Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, New Mexico 87501.

Applications or inquiries may also be sent by electronic mail to: postdoc at santafe.edu.

SFI is an equal opportunity employer.

From lazzaro at CS.Berkeley.EDU Sun Mar 13 18:12:45 1994
From: lazzaro at CS.Berkeley.EDU (John Lazzaro)
Date: Sun, 13 Mar 1994 15:12:45 -0800
Subject: Bibliography for Silicon Auditory Models
Message-ID: <199403132312.PAA20966@boom.CS.Berkeley.EDU>

I gave a tutorial at NIPS last year on VLSI implementations of auditory representations, and part of the handout packet was a bibliography of all papers published in the field. Enough people have asked me for copies of it that I cleaned it up, added extra sections for tutorial readings, and have placed it on anonymous FTP. Here's how to grab a copy, from your Unix prompt "%":

% ftp hobiecat.pcmp.caltech.edu
Connected to hobiecat.pcmp.caltech.edu.
220 hobiecat FTP server (Version 16.2 Fri Apr 26 18:20:43 GMT 1991) ready.
Name (hobiecat.caltech.edu): anonymous
331 Guest login ok, send ident as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/anaprose/lazzaro
250 CWD command successful.
ftp> get sa-biblio.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for sa-biblio.ps.Z (37029 bytes).
226 Transfer complete.
local: sa-biblio.ps.Z remote: sa-biblio.ps.Z
37029 bytes received in 1.5 seconds (25 Kbytes/s)
ftp> quit
221 Goodbye.
% uncompress sa-biblio.ps.Z

--- Thanks to everyone who added contributions! --john lazzaro

From roebel at cs.tu-berlin.de Mon Mar 14 07:07:34 1994
From: roebel at cs.tu-berlin.de (Axel Roebel)
Date: Mon, 14 Mar 1994 13:07:34 +0100
Subject: Techreport on Dynamic Pattern Selection in Neuroprose
Message-ID: <199403141207.AA08973@mail.cs.tu-berlin.de>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/roebel.dynada.ps.Z

With Terry and Isabelle we state: One man's outlier is OUR data point

The file roebel.dynada.ps.Z (22 pages) is now available via anonymous ftp from the neuroprose archive. Title and abstract are given below. We regret that hardcopies are not available.

----------------------------------------------------------------------

The Dynamic Pattern Selection Algorithm: Effective Training and
Controlled Generalization of Backpropagation Neural Networks

A. Röbel
Technical University of Berlin
Department of Computer Science
(Technical Report 93/23)

(Subsets of this report will appear in the conference proceedings of the Intern. Conference on Neural Networks, Italy, 1994, and the European Symposium on Artificial Neural Networks, Belgium, 1994)

-- ABSTRACT --

In the following report the problem of selecting proper training sets for neural network time series prediction or function approximation is addressed. As a result of analyzing the relation between approximation and generalization, a new measure, the generalization factor, is introduced. Using this factor and cross-validation, a new algorithm, the dynamic pattern selection, is developed.

Dynamically selecting the training patterns during training establishes the possibility of controlling the generalization properties of the neural net. As a consequence of the proposed selection criterion, the generalization error is limited to the training error. As an additional benefit, the practical problem of selecting a concise training set out of known data is likewise solved.

By employing two time series prediction tasks, the results for dynamic pattern selection training and for fixed training sets are compared. The favorable properties of the dynamic pattern selection, namely lower computational expense and control of generalization, are demonstrated.

This report describes a revised version of the algorithm introduced in \cite{Roebel_e:92}.
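For readers who want the flavor of the algorithm before fetching the report, the loop below is a rough Python sketch of how such a selection scheme can be organized. It reflects only my reading of the abstract: defining the generalization factor as validation error over training error, the threshold of one, and the greedy choice of the worst-predicted pattern are assumptions, and the network interface (fit_step, predict, mse) is hypothetical.

  import numpy as np

  def dynamic_pattern_selection(net, candidates, targets, epochs=100):
      """Sketch: grow the training set only while generalization lags.

      net        -- hypothetical model object with fit_step(X, y),
                    predict(X) and mse(X, y); stands in for a
                    backpropagation network.
      candidates -- pool of available patterns, shape (N, d).
      targets    -- corresponding targets, shape (N,).
      """
      selected = [0]                                   # seed with one pattern
      for _ in range(epochs):
          idx = np.array(selected)
          net.fit_step(candidates[idx], targets[idx])  # train on subset
          e_train = net.mse(candidates[idx], targets[idx])
          e_val = net.mse(candidates, targets)         # validation estimate
          # Generalization factor (assumed form): validation / training.
          if e_train > 0 and e_val / e_train > 1.0:
              # Generalization is worse than the fit: add the
              # worst-predicted pattern not yet in the training set.
              errors = (net.predict(candidates) - targets) ** 2
              errors[idx] = -np.inf
              selected.append(int(np.argmax(errors)))
      return selected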
----------------------------------------------------------------------------
Axel Roebel ** E-Mail: roebel at cs.tu-berlin.de **
Technische Universitaet Berlin ** Phone: +49 - 30 - 314 24892 **
Department of Applied Computer Science ** Fax: +49 - 30 - 314 24891 **

From plunkett at psy.ox.ac.uk Tue Mar 15 07:38:08 1994
From: plunkett at psy.ox.ac.uk (Kim Plunkett)
Date: Tue, 15 Mar 94 12:38:08 GMT
Subject: No subject
Message-ID: <9403151238.AA12405@dragon.psych.ox.ac.uk>

Connectionism and Language Acquisition
SERC Postgraduate Studentship
Department of Experimental Psychology
University of Oxford

The Science and Engineering Research Council has allocated a postgraduate studentship within the area of "Connectionism and Language Acquisition" to the Department of Experimental Psychology, Oxford University, starting in October 1994. Individuals interested in applying for this studentship should have or expect to obtain a good undergraduate degree in Psychology, Linguistics or Computer Science. The successful applicant will be expected to engage in both connectionist modelling and experimental work within the area of language acquisition. The studentship is expected to lead to the award of a D.Phil at the University of Oxford.

Application forms can be obtained from:

Mrs. B. Hammond
Department of Experimental Psychology
University of Oxford
South Parks Road
Oxford OX1 3UD
UK
Tel: 0865-271379

Applications should be marked "SERC IT Application".

Further information concerning the studentship and research facilities in the Department can be obtained from

Kim Plunkett
Department of Experimental Psychology
University of Oxford
South Parks Road
Oxford OX1 3UD
UK
Tel: 0865-271398
email: plunkett at psy.ox.ac.uk

Please note that SERC studentship awards can only be held by UK or EEC nationals.

From stiber at cs.ust.hk Wed Mar 16 16:35:03 1994
From: stiber at cs.ust.hk (Dr. Michael Stiber)
Date: Wed, 16 Mar 94 16:35:03 HKT
Subject: Paper available in Neuroprose
Message-ID: <9403160835.AA24410@cs.ust.hk>

The file stiber.transient.ps.Z (4 pages) is now available via anonymous ftp from the neuroprose archive. It will appear in _Proc. Int. Symp. on Speech, Image Processing & Neural Networks_, Hong Kong, 1994, and is also available as Technical Report HKUST-CS94-6 (file://ftp.cs.ust.hk/pub/techreport/postscript/tr94-6.ps.gz). If you absolutely, positively cannot access it any other way, send me email and I'll send you a hardcopy.

----------------------------------------------------------------------

Transient Responses in Dynamical Neural Models

Michael Stiber
Department of Computer Science
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong

Jose P. Segundo
Department of Anatomy and Cell Biology and Brain Research Institute
University of California, Los Angeles, CA 90024 USA

We consider the input/output behavior of a realistic dynamical neural model in comparison to those typically used in artificial neural networks. We have found that such models duplicate well the behaviors seen in living neurons, displaying a range of behaviors commonly seen in a wide variety of nonlinear dynamical systems. This is not captured well by weighted sum/monotonic transfer function models. An example of the consequences of nonlinear dynamics in neural responses is presented for monotonically changing input transients.

----------------------------------------------------------------------
Dr. Michael Stiber                                  stiber at cs.ust.hk
Department of Computer Science                      tel: (852) 358 6981
The Hong Kong University of Science & Technology    fax: (852) 358 1477
Clear Water Bay, Kowloon, Hong Kong

From richardd at logcam.co.uk Wed Mar 16 05:12:58 1994
From: richardd at logcam.co.uk (Richard Dallaway)
Date: Wed, 16 Mar 94 10:12:58 GMT
Subject: Thesis available
Message-ID: <9403161013.AA09849@logcam.co.uk>

FTP-host: ftp.cogs.susx.ac.uk
FTP-filename: /pub/reports/csrp/csrp306.ps.Z

The following thesis is available via anonymous ftp.

DYNAMICS OF ARITHMETIC: A CONNECTIONIST VIEW OF ARITHMETIC SKILLS

Richard Dallaway
email: richardd at cogs.susx.ac.uk

Cognitive Science Research Paper CSRP-306
School of Cognitive & Computing Sciences
University of Sussex, Brighton, UK

SUMMARY: Connectionist models of adult memory for multiplication facts and children's multicolumn multiplication errors. Full abstract at the end of this message.

FTP instructions:

unix> ftp ftp.cogs.susx.ac.uk [ or ftp 192.33.16.70]
login: anonymous
password:
ftp> cd pub/reports/csrp
ftp> binary
ftp> get csrp306.ps.Z
ftp> bye

155 pages. 552567 bytes compressed, 1922143 bytes uncompressed.

The file is over a megabyte, so some of you may find that you have to login to your printer server and use the "lpr -s" option. See man lpr. Your printer may not recognize the "Bembo" font used on the very first page (only).

Paper copies can be ordered (5 pounds, US$10) from:

Berry Harper
School of Cognitive & Computing Sciences
University of Sussex
Falmer, Brighton, UK.

------------------------------------------------------------------------

ABSTRACT: Arithmetic takes time. Children need five or six years to master the one hundred multiplication facts (0x0 to 9x9), and it takes adults approximately one second to recall an answer to a problem like 7x8. Multicolumn arithmetic (e.g., 45x67) requires a sequence of actions, and children produce a host of systematic mistakes when solving such problems. This thesis models the time course and mistakes of adults and children solving arithmetic problems.

Two models are presented, both of which are built from connectionist components. First, a model of memory for multiplication facts is described. A system is built to capture the response time and slips of adults recalling two-digit multiplication facts. The phenomenon is thought of as spreading activation between problem nodes (such as 7 and 8) and product nodes (56). The model is a multilayer perceptron trained with backpropagation, and McClelland's (1988) cascade equations are used to simulate the spread of activation. The resulting reaction times and errors are comparable to those reported for adults. An analysis of the system, together with variations in the experiments, suggests that problem frequency and the "coarseness" of the input encoding have a strong effect on the phenomena. Preliminary results from damaging the network are compared to the arithmetic abilities of brain-damaged subjects.

The second model is of children's errors in multicolumn multiplication. Here the aim is not to produce a detailed fit to the empirical observations of errors, but to demonstrate how a connectionist system can model the behaviour, and what advantages this brings. Previous production system models are based on an impasse-repair process: when a child encounters a problem an impasse is said to have occurred, which is then repaired with general-purpose heuristics. The style of the connectionist model moves away from this.
A simple recurrent network is trained with backpropagation through time to activate procedures which manipulate a multiplication problem. Training progresses through a curriculum of problems, and the system is tested on unseen problems. Errors can occur during testing, and these are compared to children's errors. The system is analysed in terms of hidden unit activation trajectories, and the errors are characterized as "capture errors". That is, during processing the system may be attracted into a region of state space that produces an incorrect response but corresponds to a similar arithmetic subprocedure. The result is a graded state machine---a system with some of the properties of finite state machines, but with the additional flexibility of connectionist networks. The analysis shows that connectionist representations can be structured in ways that are useful for modelling procedural skills such as arithmetic. It is suggested that one of the strengths of the model is its emphasis on development, rather than on "snap-shot" accounts. Notions such as "impasse" and "repair" are discussed from a connectionist perspective.

-----------------------------------------------------------------------

From yorick at dcs.shef.ac.uk Wed Mar 16 11:41:27 1994
From: yorick at dcs.shef.ac.uk (Yorick Wilks)
Date: Wed, 16 Mar 94 16:41:27 GMT
Subject: No subject
Message-ID: <9403161641.AA02186@dcs.shef.ac.uk>

THE UNIVERSITY OF SHEFFIELD

The Department of Computer Science wishes to recruit a Lecturer Grade A to a fixed 5-year appointment arising from the award of an SERC Advanced Research Fellowship to Dr. P. Mc Kevitt, who lectures in natural language processing. The lectureship is to replace his teaching and will be tenable from 1/10/94, and applications are invited from anyone with research interests in the following areas:

Cognitive Systems Computational Models of Hearing Speech Technology Natural Language Processing Computer Graphics Intelligent Tutoring Systems Computer Argumentation Connectionist Language Processing Formal Methods and Software Engineering Theory of Computer Science Software and systems engineering Communication Networks Neural Networks Parallel Systems Safety Critical Systems Parallel Databases CASE Tools for Parallel Systems

Further details are available from the Department of Computer Science: jean at dcs.sheffield.ac.uk. Closing date for applications: 1st April, 1994, to the Personnel Department, Western Bank, University of Sheffield, Sheffield, S10 2TN.

From isabelle at inrs-telecom.uquebec.ca Wed Mar 16 17:40:06 1994
From: isabelle at inrs-telecom.uquebec.ca (Jean Francois Isabelle)
Date: Wed, 16 Mar 1994 17:40:06 -0500 (EST)
Subject: master thesis available
Message-ID: <199403162241.AA16367@velcro.inrs-telecom.uquebec.ca>

[ The announcement itself was sent as an attachment, which the list archive scrubbed; only a pointer remains: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/0aaf3b01/attachment-0001.ksh ]

From bishopc at sun.aston.ac.uk Thu Mar 17 14:22:42 1994
From: bishopc at sun.aston.ac.uk (bishopc)
Date: Thu, 17 Mar 94 19:22:42 GMT
Subject: Paper available by ftp
Message-ID: <15152.9403171922@sun.aston.ac.uk>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/bishop.mixture*.ps.Z

The following technical report is available by anonymous ftp.
------------------------------------------------------------------------

MIXTURE DENSITY NETWORKS

Chris M Bishop

Neural Computing Research Group Report: NCRG/4288
Neural Computing Research Group
Aston University
Birmingham, B4 7ET, U.K.
email: c.m.bishop at aston.ac.uk

Abstract

In this paper we introduce a general technique for modelling conditional probability density functions, by combining a mixture distribution model with a standard feedforward network. The conventional technique of minimizing a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary non-linear functions. We demonstrate the effectiveness of Mixture Density Networks using both a simple 1-input 1-output mapping, and a problem involving robot inverse kinematics.

--------------------------------------------------------------------

ftp instructions: This paper is split into two files to keep the uncompressed postscript files below 2Mb.

bishop.mixture1.ps.Z (size 445839) pages 1 -- 16
bishop.mixture2.ps.Z (size 364598) pages 17 -- 25

% ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get bishop.mixture1.ps.Z
ftp> get bishop.mixture2.ps.Z
ftp> bye
% uncompress bishop*
% lpr bishop*

--------------------------------------------------------------------
Professor Chris M Bishop            Tel. +44 (0)21 359 3611 x4270
Neural Computing Research Group     Fax. +44 (0)21 333 6215
Dept. of Computer Science           c.m.bishop at aston.ac.uk
Aston University
Birmingham B4 7ET, UK
--------------------------------------------------------------------

From billl at head.neurology.wisc.edu Thu Mar 17 15:45:03 1994
From: billl at head.neurology.wisc.edu (Bill Lytton)
Date: Thu, 17 Mar 94 14:45:03 CST
Subject: Postdoctoral opportunities at U Wisconsin, Madison
Message-ID: <9403172045.AA11636@head.neurology.wisc.edu>

Postdoctoral fellowships available in Computational Neuroscience starting immediately or in the fall. Realistic simulations of single neurons and neuronal networks are being performed to better understand neural function, with particular emphasis on epileptogenesis and seizure spread.
Close collaborations are available on-site with physiologists using electrophysiology and optical methods to assess activity in thalamus, piriform cortex and hippocampus in vivo and in vitro. Opportunities for involvement in ongoing projects or development of new research directions are available. Computational laboratory uses networked UNIX workstations. Parallel supercomputing facilities are available as well as collaboration on VLSI implementations. Send or email CV and statement of research experience/interests to billl at head.neurology.wisc.edu. Bill Lytton Dept. of Neurology University of Wisconsin 1300 University Ave., MSC 103 Madison, WI 53703 (EOAAE) Tiring of the bicoastal lifestyle? Try the midcoast next.  From RAMPO at SALERNO.INFN.IT Thu Mar 17 17:42:00 1994 From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT) Date: Thu, 17 MAR 94 22:42 GMT Subject: ICANN'94 Program Message-ID: <5301@SALERNO.INFN.IT> -------------------------------------------------------------------- | ************************************************ | | * * | | * EUROPEAN NEURAL NETWORK SOCIETY * | | *----------------------------------------------* | | * P R E L I M I N A R Y P R O G R A M * | | *----------------------------------------------* | | * I C A N N ' 94 - SORRENTO * | | * * | | ************************************************ | | | | ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS)| | is the fourth Annual Conference of ENNS and it comes after | | ICANN'91(Helsinki), ICANN'92 (Brighton), ICANN'93 (Amsterdam). | | It is co-sponsored by INNS, IEEE-NC, JNNS. | | It will take place at the Sorrento Congress Center, near Naples, | | Italy, on May 26-29, 1994. | |------------------------------------------------------------------| | Conference Chair: Prof. Maria Marinaro, Univ. Salerno, | | Italy, Dept. Theoretic Physics; email: iiass at salerno.infn.it | | | | Conference Co-Chair: Prof. Pietro G. Morasso, Univ. Genova, | | Italy, Dept. Informatics, Systems, Telecommunication; | | email: morasso at dist.unige.it; fax: +39 10 3532948 | |------------------------------------------------------------------| | May 26 - Tutorials | |------------------------------------------------------------------| |* Introduction to neural networks (J.G. Taylor) | |* Advanced techniques in supervised learning I (F. Fogelman) | |* Advanced techniques in supervised learning II (F. Fogelman) | |* Advanced techniques for self organising maps (T. Kohonen) | |* Weightless NNs (I. Aleksander) | |* Information theory in NNs (M. Plumbley) | |* Hybrid systems (T. Schwarz) | |* From neuroscience to neurocomputation for robotics and | | prediction (R. Eckmiller) | |* Applications of neural nets (R. Hecht-Nielsen) | |------------------------------------------------------------------| | May 27/29 - Scientific sessions | |------------------------------------------------------------------| | Plenary presentations: S. Grossberg, H. Szu, E. Bizzi, D. Amit, | | L. Zadeh | | | | 356 contributions, including 21 invited presentations, are | | presented in 27 oral sessions and 6 poster sessions which are | | grouped into 4 main areas: | | | |A: Neurobiology | |--------------- | |Invited presentations: | | | | S. Grossberg et al.: Spatial pooling and perceptual framing by | | synchronizing cortical dynamics. | | J. Herault: Vertebrate retina: sub-sampling and aliasing effects | | can explain colour-opponent and colour constancy | | phenomena. | | L.W. Stark: ANNs and MAMFs: transparency or opacity? | | S. 
Usui et al.: Dry electrophysiology: an approach to the | | internal representation of the brain functions | | through artificial neural networks. | | | | There are 4 oral sessions and 1 poster session covering topics on| | vision, motor control, models of biological neurons and circuits.| | | |B: Mathematical models | |---------------------- | |Invited presentations: | | | | J.G. Taylor: Neuronal network models of mind. | | I. Aleksander: The consciousness of a neural state machine. | | T. Kohonen: What generalisations of the self-organizing map make | | sense? | | F. Fogelman: Variable selection with neural networks. | | M.I. Jordan et al.: Hierarchical mixtures of experts and the EM | | algorithm. | | M. Marinaro et al.: Outline of a linear neural network and | | applications. | | M. Kawato et al.: Teaching by showing in Kendama based on | | optimization principle. | | S. Amari: Information geometry and the EM algorithm. | | C.C.A.M. Gielen: Learning and interpretation of weights in neural| | networks. | | | | There are 10 oral sessions and 3 poster sessions covering topics | | on fuzzy systems, symbolic and hybrid systems, self-organizing | | maps, attractor networks, RBF networks, reinforcement learning, | | optimization, statistical models, and network growing. | | | |C: Applications | |--------------- | |Invited presentations: | | | | H. Ritter: Parametrized self-organizing maps for vision learning | | tasks. | | R. De Mori et al.: Artificial neural networks for source code in | | formal information analysis. | | E. Oja: Beyond PCA: statistical expansions by nonlinear neural | | networks. | | R.J. Marks II et al.: Fourier analysis and filtering of a single | | hidden layer perceptron. | | V. Lopez et al.: Neural forecasting in real time industrial | | control. | | P. Morasso et al.: Cortical representation of external space. | | | | There are 10 oral sessions and 2 poster sessions covering topics | | on classification models, speech, character recognition, signal | | and image processing, clustering and quantization, robotics and | | control. | | | |D: Neurocomputing | |----------------- | |Invited presentations: | | | | C. Nicolini: From neural network to biomolecular electronics. | | R. Eckmiller: Biology-inspired pulse processing neural networks | | (BPN) for neurotechnology. | | | | There are 3 oral sessions and 1 poster session covering topics | | of computational architecture, hardware design, software tools, | | and fault tolerance. | |------------------------------------------------------------------| | T E C H N I C A L E X H I B I T I O N | |------------------------------------------------------------------| | A technical exhibition will be organized for presenting the | | literature on neural networks and related fields, neural networks| | design and simulation tools, electronic and optical | | implementation of neural computers, and application | | demonstration systems. Potential exhibitors are kindly requested | | to contact the industrial liaison chair. | | | | Industrial Liaison Chair: Dr. Roberto Serra, Ferruzzi | | Finanziaria, Ravenna, fax: +39 544 35692/32358 | | | |------------------------------------------------------------------| | S O C I A L P R O G R A M | |------------------------------------------------------------------| | Social activities will include a welcome party, a banquet, and | | post-conference tours to some of the many possible targets of | | the area (participants will also have no difficulty to | | self-organize a la carte).
| |------------------------------------------------------------------| | C O R R E S P O N D E N C E | |------------------------------------------------------------------| | EMAIL where to send correspondence (not papers): | | Dr. Salvatore Rampone - iiass at salerno.infn.it | | FAX where to send correspondence (not papers): | | Mr. V. DiMarino - +39 89 822275 | |------------------------------------------------------------------| | R E G I S T R A T I O N F O R M | |------------------------------------------------------------------| | FAMILY NAME ____________________________________________________ | | FIRST NAME, MIDDLE INITIAL _____________________________________ | | AFFILIATION ____________________________________________________ | | MAILING ADDRESS ________________________________________________ | | ZIP CODE, CITY, COUNTRY ________________________________________ | | FAX ____________________________________________________________ | | PHONE __________________________________________________________ | | EMAIL __________________________________________________________ | | ACCOMPANIED BY _________________________________________________ | | MEMBERSHIP (Regular/ENNS member/Student) _______________________ | | ENNS MEMBERSHIP NO. ____________________________________________ | | REGISTRATION FEE _______________________________________________ | | TUTORIAL FEE ___________________________________________________ | | DATE ______________________ SIGNATURE __________________________ | | | |------------------------------------------------------------------| | C O N F E R E N C E R E G I S T R A T I O N F E E S (in LIT) | |------------------------------------------------------------------| | MEMBERSHIP | Before 15/12/93 | Before 15/2/94 | On site | |--------------|-------------------|------------------|------------| | REGULAR | ------- | ------- | 950,000 | | ENNS MEMBER | ------- | ------- | 850,000 | | STUDENT | ------- | ------- | 300,000 | |------------------------------------------------------------------| | T U T O R I A L F E E S (in LIT) | |------------------------------------------------------------------| | | Before 15/2/94 | On site | |--------------|-------------------|-------------------------------| | REGULAR | ------- | 350,000 | | STUDENT | ------- | 150,000 | |------------------------------------------------------------------| | - Regular registrants become ENNS members. | | - Student registrants must provide an official certification of | | their status. | | - Pre-registration payment: Remittance in LIT to | | BANCO DI NAPOLI, Branch of FISCIANO, FISCIANO (SALERNO), ITALY| | on the Account of "Dipartimento di Fisica Teorica e S.M.S.A." | | clearly stating the motivation (Registration Fee for ICANN'94) | | and the attendee name. | | Bank Codes: | | ABI 1010 | | CAB 76210 | | - On-site payment: cash. | | - The registration form together with a copy of the bank | | remittance must be mailed to: | | Dr. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, | | I-84081 Baronissi, Salerno, Italy | | Fax +39 89 822275 | |------------------------------------------------------------------| | H O T E L R E S E R V A T I O N | |------------------------------------------------------------------| | The official travel agent is (fax for a booking form): | | RUSSO TRAVEL srl | | Via S. 
Antonio, I-80067 Sorrento, Italy | | Fax: +39 81 807 1367 Phone: +39 81 807 1845 | --------------------------------------------------------------------  From zoubin at psyche.mit.edu Thu Mar 17 20:16:26 1994 From: zoubin at psyche.mit.edu (Zoubin Ghahramani) Date: Thu, 17 Mar 94 20:16:26 EST Subject: Paper available by ftp Message-ID: <9403180116.AA17071@psyche.mit.edu> FTP-host: psyche.mit.edu FTP-filename: /pub/zoubin.cmss.ps.Z The following paper is very closely related to Chris M Bishop's recently announced paper on MIXTURE DENSITY NETWORKS. It also addresses the problem of learning multi-valued mappings such as those that arise in inverse kinematics, acoustics, object localization, etc. The approach also involves learning a mixture density, though it does not combine that with the use of a feedforward network. ----------------------------------------------------------------------------- Solving Inverse Problems Using an EM Approach to Density Estimation Zoubin Ghahramani Department of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 zoubin at psyche.mit.edu Abstract This paper proposes density estimation as a feasible approach to the wide class of learning problems where traditional function approximation methods fail. These problems generally involve learning the inverse of causal systems, specifically when the inverse is a non-convex mapping. We demonstrate the approach through three case studies: the inverse kinematics of a three-joint planar arm, the acoustics of a four-tube articulatory model, and the localization of multiple objects from sensor data. The learning algorithm presented differs from regression-based algorithms in that no distinction is made between input and output variables; the joint density is estimated via the EM algorithm and can be used to represent any input/output map by forming the conditional density of the output given the input. In M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, & A. S. Weigend (eds.), Proceedings of the 1993 Connectionist Models Summer School. pp. 316--323. Hillsdale, NJ: Erlbaum Associates, 1994. -------------------------------------------------------------------- ftp instructions: % ftp psyche.mit.edu login: anonymous password: ftp> cd pub ftp> binary ftp> get zoubin.cmss.ps.Z ftp> bye % uncompress zoubin.cmss.ps.Z % lpr zoubin.cmss.ps -------------------------------------------------------------------- Matlab code for the EM mixture algorithms for real, binary, and classification problems for both complete and incomplete data (*) is also available by anonymous ftp from the same site: ftp> get zoubin.EMcode.README ftp> get zoubin.EMcode.tar.Z Please email me if you intend to use the code so I can keep you updated with newer releases and possibly C++ and CM5 code. (*) cf. Ghahramani & Jordan 1993, "Supervised learning from incomplete data using an EM approach": ftp> get zoubin.nips93.ps.Z -------------------------------------------------------------------- Zoubin Ghahramani  From davec at cogs.susx.ac.uk Fri Mar 18 09:36:11 1994 From: davec at cogs.susx.ac.uk (Dave Cliff) Date: Fri, 18 Mar 1994 14:36:11 +0000 (GMT) Subject: PhD at Sussex Message-ID: DPhil Studentship The Sussex Centre for Neuroscience, School of Biological Sciences and The School of Cognitive and Computing Sciences University of Sussex Applications are invited for a three-year SERC DPhil (PhD) studentship to commence in October 1994. 
The project will use computational modelling techniques to study small neural networks involved in pattern generation and motor coordination in invertebrates. The successful candidate will be based in the School of Cognitive and Computing Sciences, but will be required to work closely with a group of researchers in the School of Biological Sciences, led by Prof. P. Benjamin. Candidates should possess or expect to gain at least a 2i or equivalent degree in a numerate discipline (e.g. Computer Science, Electronic Engineering, etc), although candidates from other disciplines may also be considered.

For further information, contact Dr Dave Cliff, School of Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, UK. Tel: 0273 606755 ext 3205; Fax 0273 671320; e-mail davec at cogs.susx.ac.uk

From N.Sharkey at dcs.shef.ac.uk Fri Mar 18 11:19:31 1994
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Fri, 18 Mar 94 16:19:31 GMT
Subject: job ad.
Message-ID: <9403181619.AA17733@entropy.dcs.shef.ac.uk>

There are two things that should be said to accompany the following job ad. First, a Lecturer Grade A in the UK is the direct equivalent of an Assistant Professor. Second, I would like to see good neural net people apply, particularly in the area of Connectionist Natural Language Processing. We have an Institute for Language, Speech and Hearing (ILASH) directed by Professor Yorick Wilks. I should stress, however, that while I will have a say in the appointment, so will several other people from different areas. That is why I am trying to encourage first-rate applicants.

noel

THE UNIVERSITY OF SHEFFIELD

The Department of Computer Science wishes to recruit a Lecturer Grade A to a fixed 5-year appointment arising from the award of an SERC Advanced Research Fellowship to Dr. P. Mc Kevitt, who lectures in natural language processing. The lectureship is to replace his teaching and will be tenable from 1/10/94, and applications are invited from anyone with research interests in the following areas:

Cognitive Systems Computational Models of Hearing Speech Technology Natural Language Processing Computer Graphics Intelligent Tutoring Systems Computer Argumentation Connectionist Language Processing Formal Methods and Software Engineering Theory of Computer Science Software and systems engineering Communication Networks Neural Networks Parallel Systems Safety Critical Systems Parallel Databases CASE Tools for Parallel Systems

Further details are available from the Department of Computer Science: jean at dcs.sheffield.ac.uk. Closing date for applications: 1st April, 1994, to the Personnel Department, Western Bank, University of Sheffield, Sheffield, S10 2TN.

Noel Sharkey
Professor of Computer Science
Department of Computer Science
Regent Court
University of Sheffield
S1 4DP, Sheffield, UK
N.Sharkey at dcs.shef.ac.uk

From ess94%TRBOUN.BITNET at vm.gmd.de Fri Mar 18 07:14:21 1994
From: ess94%TRBOUN.BITNET at vm.gmd.de (ess94%TRBOUN.BITNET@vm.gmd.de)
Date: Fri, 18 Mar 1994 14:14:21 +0200
Subject: REMINDER FOR EUROPEAN SIMULATION SYMPOSIUM 1994
Message-ID: <0097B9F1.8F0A8FC0.20614@trboun.bitnet>

***************REMINDER FOR SUBMITTING ABSTRACTS TO********************

EUROPEAN SIMULATION SYMPOSIUM 1994
DEADLINE EXTENDED TO APRIL 12, 1994.

This is a reminder that the deadline to submit abstracts for the European Simulation Symposium 1994, which will be held in Istanbul during Oct 9-12, 1994, is EXTENDED to APRIL 12, 1994.
You may find other pertinent information about the symposium in the electronic copy of the Call for Papers.

***********************************************************************

ESS'94
EUROPEAN SIMULATION SYMPOSIUM
CALL FOR PAPERS
ISTANBUL, TURKEY
OCTOBER 9-12, 1994
HOSTED BY BOGAZICI UNIVERSITY

Organized and sponsored by: The Society for Computer Simulation International (SCS)
With the cooperation of: The European Simulation Council (ESC), Ministry of Industry and Trade, Turkey, Operational Research Society of Turkey (ORST)
Cosponsored by: Bekoteknik, Digital Equipment Turkiye, Hewlett Packard, IBM Turk

Main Topics:
* Advances in Simulation Methodology and Practices
* Artificial Intelligence in Simulation
* Innovative Simulation Technologies
* Industrial Simulation
* Computer and Telecommunication Systems

CONFERENCE COMMITTEE

Conference Chairman: Prof. Dr. Tuncer I. Oren, University of Ottawa, Computer Science Department, 150 Louis Pasteur / Pri., Ottawa, Ontario, Canada K1N 6N5. Phone: 1.613.564.5068 Fax: 1.613.738-0701 E-mail: oren at csi.uottawa.ca

Program Chairman: Prof. Dr. Ali Riza Kaylan, Bogazici University, Dept. of Industrial Engineering, 80815 Bebek, Istanbul, Turkey. Phone: 90.212.2631540/2072 Fax: 90.212.2651800 E-Mail: Kaylan at trboun.bitnet

Program Co-chairman: Prof. Dr. Axel Lehmann, Universitaet der Bundeswehr, Munchen, Institut fur Technische Informatik, Werner-Heisenberg-Weg 39, D 85577 Neubiberg, Germany. Phone: 49.89.6004.2648/2654 Fax: 49.89.6004.3560 E-Mail: Lehmann at informatik.unibw-muenchen.de

Finance Chairman: Rainer Rimane, University of Erlangen - Nurnberg

Organization Committee: Ali Riza Kaylan, Yaman Barlas, Murat Draman, Levent Mollamustafaoglu, Tulin Yazgac

International Program Committee (Preliminary): O. Balci, USA; J. Banks, USA; G. Bolch, Germany; W. Borutzky, Germany; R. Crosbie, USA; M. Dal Cin, Germany; M. S. Elzas, Netherlands; H. Erkut, Turkey; A. Eyler, Turkey; P. Fishwick, USA; E. Gelenbe, USA; A. Guasch, Spain; M. Hitz, Austria; R. Huntsinger, USA; G. Iazeolla, Italy; K. Irmscher, Germany; K. Juslin, Finland; A. Javor, Hungary; E. Kerckhoffs, Netherlands; J. Kleijnen, Netherlands; M. Kotva, Czech Rep.; M. Koksalan, Turkey; M. L. Pagdett, USA; M. Pior, Germany; R. Reddy, USA; S. Reddy, USA; B. Schmidt, Germany; S. Sevinc, Australia; H. Szczerbicka, Germany; S. Tabaka, Japan; O. Tanir, Canada; G. Vansteenkiste, Belgium; M. Wildberger, USA; S. Xia, UK; R. Zobel, UK

CONFERENCE INFORMATION

The ESS series (organized by SCS, the Society for Computer Simulation International) is now in its fifth year. SCS is an international non-profit organization founded in 1952. On a yearly basis SCS organizes 6 simulation conferences worldwide, cooperates in 2 others, and publishes the monthly magazine Simulation, a quarterly Transactions, and books. For more information, please tick the appropriate box on the reply card.

During ESS'94 the following events will be presented besides the scientific program:

Professional Seminars

The first day of the conference is dedicated to professional seminars, which will give interested participants a state-of-the-art overview of each of the five main themes of this conference. The participation fee is included in the conference registration fee. If you have suggestions for other advanced tutorial topics, please contact one of the program chairmen.

Exhibits

An exhibition will be held in the central hall where all participants meet for coffee and tea.
There will be a special exhibition section for universities and non-profit organizations, and a special section for publishers and commercial stands. If you would like to participate in the exhibition, please contact the SCS European Office.

Vendor Sessions, Demonstrations and Video Presentations

For demonstrations or video sessions, please contact SCS International at the European Office. Special sessions within the scientific program will be set up for vendor presentations.

Other Organized Meetings

Several User Group meetings for simulation languages and tools will be organized on Monday. It is possible to have other meetings on Monday as well. If you would like to arrange a meeting, please contact the Conference Chairman. We will be happy to provide a meeting room and other necessary equipment.

VENUE

Istanbul, the only city in the world built on two continents, stands on the shores of the Istanbul Bogazi (Bosphorus), where the waters of the Black Sea mingle with those of the Sea of Marmara and the Golden Horn. Here on this splendid site, Istanbul guards the precious relics of three empires of which she has been the capital; a unique link between East and West, past and present. Istanbul has infinite variety: museums, ancient churches, palaces, great mosques, bazaars and the Bosphorus. However long you stay, just a few days or longer, your time will be wonderfully filled in this unforgettable city.

Bogazici University, which will host ESS'94, has its origins in Robert College, the first American college founded outside the United States, in 1863. It has a well-deserved reputation for academic excellence and accordingly attracts students from among the best and brightest in Turkey. The University is composed of four faculties, six institutes (offering graduate programs), and two other schools.

The conference location is the Istanbul Dedeman, an international five-star hotel located in the center of the city with a spectacular view of the Bosphorus. It is very close to most of the historical places as well as to the business center. For conference participants the single-room special rate is 65 US dollars.

SCIENTIFIC PROGRAM

The 1994 SCS European Simulation Symposium is structured around the following five major themes. A parallel track will be devoted to each of the five topics. The conference language is English.

* Advances in Simulation Methodology and Practices, e.g.:
- Advanced Modelling, Experimentation, and Output Analysis and Display
- Object-Oriented System Design and Simulation
- Optimization of Simulation Models
- Validation and Verification Techniques
- Mixed Methodology Modelling
- Special Simulation Tools and Environments

* Artificial Intelligence in Simulation, e.g.:
- Knowledge-based Simulation Environments and Knowledge Bases
- Knowledge-based System Applications
- Reliability Assurance through Knowledge-based Techniques
- Mixed Qualitative and Quantitative Simulation
- Neural Networks in Simulation

* Innovative Simulation Technologies:
- Virtual Reality
- Multimedia Applications

* Industrial Simulation, e.g.
  simulation in:
  - Design and Manufacturing, CAD, CIM
  - Process Control
  - Robotics and Automation
  - Concurrent Engineering, Scheduling

* Computer and Telecommunication Systems, e.g.:
  - Circuit Simulation, Fault Simulation
  - Computer Systems
  - Telecommunication Devices and Systems
  - Networks

INVITED SPEAKERS

Focusing on the main tracks of the conference, invited speakers will give special in-depth presentations in plenary sessions; these presentations will be included in the proceedings of the conference.

BEST PAPER AWARDS

The 1994 European Simulation Symposium will award the best five papers, one in each of the five tracks. From these five papers, the best overall paper of the conference will be chosen. The awarded papers will be published in an international journal, if necessary after modifications to the paper have been incorporated.

DEADLINES AND REQUIREMENTS

Extended abstracts (300 words, 2-3 pages for full papers; 150 words, 1 page for short papers; typewritten, without drawings and tables) are due to arrive in QUADRUPLICATE at the office of Ali Riza Kaylan, Industrial Engineering Department, Bogazici University, Turkey, before April 12, 1994. Only original papers, written in English, which have not previously been published elsewhere will be accepted. If you want to organize a panel discussion, please contact the program chairmen.

Authors are expected to register early (at a reduced fee) and to attend the conference at their own expense to present the accepted papers. If early registration and payment are not made, the paper will not be published in the conference proceedings. In the case of multiple authors, one author should be identified as the correspondent for the paper.

Abstracts for full papers will be reviewed by three members of the International Program Committee; abstracts for short papers by one member. Notification of acceptance or rejection will be sent by April 30, 1994. An author kit with complete instructions for preparing a camera-ready copy for the proceedings will be sent to authors of accepted abstracts. The camera-ready copy of the papers must be received by July 15, 1994. Only full papers, which are expected to be 5-6 pages long, will be published in the conference proceedings. To guarantee a high-quality conference, the full papers will be reviewed as well, to check that the suggestions of the program committee have been incorporated; the nominees for the best paper awards will be selected at the same time.

REGISTRATION FEE

                          Author           SCS members   Other participants
  Registration before     BF 15000         BF 15000      BF 17000
  August 31, 1994         (375 ECU)        (375 ECU)     (425 ECU)

  Registration after      Preregistration  BF 17000      BF 20000
  August 31, 1994,        required         (425 ECU)     (500 ECU)
  or at the conference

The registration fee includes one copy of the Conference Proceedings, attendance at the professional seminars, coffee and tea during the breaks, all lunches, a welcome cocktail and the conference dinner.

CORRESPONDENCE ADDRESS

  Philippe Geril
  The Society for Computer Simulation, European Simulation Office
  University of Ghent
  Coupure Links 653, B-9000 Ghent, Belgium.
  Phone (Office): 32.9.233.77.90
  Phone (Home):   32.59.800.804
  Fax (Office):   32.9.223.49.41
  E-Mail: Philippe.Geril at rug.ac.be

REPLY CARD

Family Name:
First Name:
Occupation and/or Title:
Affiliation:
Mailing Address:
Zip:            City:            Country:
Telephone:      Fax:
E-mail:

Yes, I intend to attend the European Simulation Symposium ESS'94:
  o Proposing a paper
  o Proposing a panel discussion
  o Participating in a vendor session
  o Contributing to the exhibition
  o Without presenting a paper

The provisional title of my paper / poster / exhibited tool is:

With the following topics:

The paper belongs to the category (please tick one):
  o Advances in Simulation Methodology and Practices
  o Artificial Intelligence in Simulation
  o Innovative Simulation Technologies
  o Industrial Simulation
  o Computer and Telecommunication Systems

The paper will be submitted as a:
  o Full paper
  o Short paper
  o Poster session
  o Demonstration

Other colleague(s) interested in the topics of the conference is/are:
  Name:            Address:
  Name:            Address:

If you would like to receive more information about SCS and its activities, please tick the following box:
  o YES, I would like to know more about SCS.

Please mail this card immediately to:
  Philippe Geril, The Society for Computer Simulation, European Simulation Office, University of Ghent, Coupure Links 653, B-9000 Ghent, Belgium.

From cogsci at birmingham.ac.uk Sun Mar 20 16:23:55 1994
From: cogsci at birmingham.ac.uk (cogsci@birmingham.ac.uk)
Date: Sun, 20 Mar 94 21:23:55 GMT
Subject: Cognitive Science MSc Programme at Birmingham
Message-ID:

____________________________________________________________________________

          M S c   i n   C o g n i t i v e   S c i e n c e
     a t   t h e   U n i v e r s i t y   o f   B i r m i n g h a m
____________________________________________________________________________

The University of Birmingham runs a programme of inter-disciplinary teaching and research in Cognitive Science notable for its breadth and cross-disciplinary interaction. Staff have a wide range of relevant research interests, and Cognitive Science is supported by extensive computing facilities comprising Unix workstations and X-terminals.

The MSc in Cognitive Science is a one-year modular programme consisting of taught courses followed by a substantial project. The taught courses (including options) comprise: Artificial Intelligence Programming and Logic, Overview of Cognitive Science, Knowledge Representation Inference and Expert Systems, General Linguistics, Human Information Processing, Structures for Data and Knowledge, Philosophy of Science for Cognitive Science, Philosophy of Mind for Cognitive Science, C++ Programming, Human-Computer Interaction, Biological and Computational Architectures, Current Issues in Cognitive Science, Artificial and Natural Perceptual Systems, Speech and Natural Language Processing, and Parallel Distributed Processing. Projects can be pursued in a wide range of topics.

Admission requirements for the MSc in Cognitive Science are flexible, but normally include a good degree in a relevant area such as psychology, artificial intelligence, computer science, linguistics or philosophy. Addresses for further information are given below. The same addresses can be used for enquiries concerning the PhD programme in Cognitive Science and the Cognitive Science Seminar Series at Birmingham.
Phone: (+4421) 414 3683
Fax: (+4421) 414 4897
E-mail: cogsci at bham.ac.uk
WWW URL: http://www.cs.bham.ac.uk/
Gopher: gopher.cs.bham.ac.uk
Mail: Cognitive Science Admissions, School of Psychology, University of Birmingham, Birmingham, B15 2TT, U.K.

Donald Peterson.

From dhw at santafe.edu Mon Mar 21 15:21:52 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Mon, 21 Mar 94 13:21:52 MST
Subject: New file in neuroprose
Message-ID: <9403212021.AA01270@chimayo>

**** DO NOT FORWARD TO OTHER GROUPS ****

The following paper has been placed in neuroprose, under the name wolpert.unify.ps.Z. It is a draft, 82 pages long. Because of the breadth of its subject matter, comments/suggestions are strongly encouraged.

The Relationship Between PAC, the Statistical Physics framework, the Bayesian framework, and the VC framework

by David H. Wolpert
The Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, NM, 87505, dhw at santafe.edu

Abstract: This paper discusses the intimate relationships between the supervised learning frameworks mentioned in the title. In particular, it shows how all those frameworks can be viewed as particular instances of a single overarching formalism. In doing this, many commonly misunderstood aspects of those frameworks are explored. In addition, the strengths and weaknesses of those frameworks are compared, and some novel frameworks are suggested (resulting, for example, in a `correction' to the familiar bias-plus-variance formula).

To print the file (note that the compressed file must be uncompressed before printing):

unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version wu-2.1c(3) Thu Dec 16 08:45:43 EST 1993) ready.
Name (archive.cis.ohio-state.edu:dhw): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
250-Please read the file README
250-  it was last modified on Fri Jul 2 09:04:46 1993 - 262 days ago
250 CWD command successful.
ftp> binary
ftp> get wolpert.unify.ps.Z
ftp> quit
unix> uncompress wolpert.unify.ps.Z
unix> lpr wolpert.unify.ps    (or however you print postscript)

From mli at math.uwaterloo.ca Tue Mar 22 17:35:31 1994
From: mli at math.uwaterloo.ca (Ming Li)
Date: Tue, 22 Mar 1994 17:35:31 -0500
Subject: Preliminary Announcement: ML'94 + COLT'94
Message-ID: <94Mar22.173539est.77988-4@math.uwaterloo.ca>

An unabbreviated version of this announcement in LaTeX or postscript can be obtained via anonymous ftp from cs.rutgers.edu in the directory pub/learning94. If you do not have access to ftp, send email to ml94 at cs.rutgers.edu or colt94 at research.att.com.

*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

                     --- Preliminary Announcement ---

     ML '94                                COLT '94
     Eleventh International Conference     Seventh ACM Conference on
     on Machine Learning                   Computational Learning Theory
     July 10-13, 1994                      July 12-15, 1994

        Rutgers, The State University of New Jersey, New Brunswick

*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

The COLT and ML conferences will be held together this year at Rutgers University in New Brunswick. This is the first time that COLT and ML will be held in the same location, and we are looking forward to a lively and interdisciplinary meeting of the two communities. Please come and help make this exciting experiment a success. Among the highlights of the conferences are three invited lectures and, on Sunday, July 10, a day of workshops and tutorials on a variety of topics relevant to machine learning.
The tutorials are sponsored by DIMACS, and are free and open to the general public. COLT is sponsored by the ACM Special Interest Groups on Algorithms and Computation Theory (SIGACT) and on Artificial Intelligence (SIGART). In addition, COLT and ML received generous support this year from AT&T Bell Laboratories and the NEC Research Institute.

This preliminary announcement, which omits the final technical program, is being provided so that travel arrangements can be made as early as possible. An updated announcement, including the technical program, will be distributed sometime in April.

>>>> WARNING <<<<
The dates of the conferences coincide this year with the World Cup soccer matches being held at Giants Stadium in East Rutherford, New Jersey. These games are expected to be the largest sporting event ever held in the New York metropolitan area, and it is possible that the volume of soccer fans in the area could adversely affect your ability to make travel reservations. Therefore, IT IS EXTREMELY IMPORTANT THAT YOU MAKE ALL YOUR TRAVEL ARRANGEMENTS AS EARLY AS POSSIBLE.

GENERAL INFORMATION

LOCATION. The conferences will be held at the College Avenue Campus of Rutgers University in downtown New Brunswick, which is easily accessible by air, train, and car. For air travel, New Brunswick is 35 minutes from Newark International Airport, a major U.S. and international airline hub. By rail, the New Brunswick train station is located less than four blocks from the conference site and is on Amtrak's Northeast Corridor. For travel by car, the conference site is approximately three miles from Exit 9 of the New Jersey Turnpike. See instructions below for obtaining a map of the campus. Most conference activities will take place in Scott Hall (#21 on map) and Murray Hall (#22). Conference check-in and on-site registration will take place in Scott Hall (follow signs for exact room location) on Saturday, July 9, from 3 to 6pm, and every day after that beginning at 8am.

REGISTRATION. Please complete the attached registration form, and return it with a check or money order for the full amount. The early registration (postmark) deadline is May 27, 1994.

HOUSING. We have established a group rate of $91/night for a single or a double at the HYATT REGENCY HOTEL (about five blocks from the conference site). This rate is only guaranteed through June 10, 1994, and, due to limited availability, it is strongly recommended that you make reservations as soon as possible. To reserve a room, please call the Hyatt directly at 908-873-1234 or 800-233-1234 and be sure to reference ML94 or COLT94. Parking is available at the hotel for a discounted $3/night.

We have also reserved dormitory space in two dorms, both of which are an easy walk to the main conference site. Dorm reservations must be made by the early registration deadline of May 27, 1994. Both dorms include daily maid service (linens are provided on the first day for the week, with fresh towels daily and beds made). The Stonier Hall dorms (#56 on map) are air-conditioned with private bath and are situated in the center of the campus. Due to limited availability, only shared double rooms are available in Stonier. Only a block away, the Campbell Hall dorms (#50) are one of a set of three "river dorms" overlooking the Raritan River. Although Campbell Hall is not air-conditioned, the view of the river is quite pleasing and rooms on the river side should offer good air flow. Baths in Campbell are shared on each floor, with single and double rooms available.
Please specify your dorm preference on your registration form, and we will assign space accordingly on a first come, first served basis as long as rooms are available. Unfortunately, because there are only a finite number of rooms within each dormitory, we cannot absolutely guarantee your request. Check-in for the dorms will take place at the Housing Office in Clothier Hall (#35) which is located next to the Hurtado Health Center (#37) on Bishop Place. Check-in hours will be 4pm to midnight, July 9-13. Parking passes, for those staying in the dorms, will be available upon check-in. TRAVEL BY AIR. Newark International Airport is by far the most convenient. A taxi from the airport to New Brunswick costs about $36 (plus nominal tolls) for up to four passengers. (This is the flat-rate fare for a _licensed_ taxi from the official-looking taxi stand; it is strongly recommended that you refuse rides offered by unlicensed taxi drivers who may approach you elsewhere in the airport.) Shuttle service to New Brunswick is available from ICS for $23 per person. ICS shuttles run direct to the Hyatt, and require at least one-day advance reservations (908-566-0795 or 800-225-4427). If renting a car, follow signs out of the airport to New Jersey Turnpike South, and continue with the directions below. By public transportation, take the Airlink bus ($4 exact fare) to Newark Penn Station and follow the "by rail" directions below. (New Jersey Transit train fare is $5.25 one-way or $8 round trip excursion; trains run about twice an hour during the week, and less often in the evening and on weekends.) TRAVEL BY CAR. Take the New Jersey Turnpike (south from Newark or New York, north from Philadelphia) to Exit 9. Follow signs onto Route 18 North or West (labeled differently at different spots) toward New Brunswick. Take the Route 27, Princeton exit onto Albany Street (Route 27) into downtown New Brunswick. The Hyatt Regency Hotel will be on your left after the first light. If staying at the Hyatt, turn left at the next light, Neilson Street, and left again into the front entrance of the hotel. If staying in the dorms, continue past this light to the following light, George Street, and turn right. Stay on George Street to just before the fifth street and turn left into the Parking Deck (#55 on map). Walk to the Housing Office in Clothier Hall (#35) for dormitory check-in. TRAVEL BY RAIL. Take either an Amtrak or a New Jersey Transit train to the New Brunswick train station. This is located at the corner of Albany Street and Easton Avenue. If staying at the Hyatt Regency Hotel, it is a (long) three block walk to the left on Albany Street to the hotel. If staying in the dorms it is a (long) six block walk to the Housing Office in Clothier Hall (#35 on map) for dormitory check-in. (The taxi stand is in front of the train station on Albany Street.) MEALS. Continental breakfast is included with registration, but not lunch or dinner. Restaurants abound within walking distance of the conference and housing venue, ranging from inexpensive food geared to college students to more expensive dining. A reception on July 12 is scheduled at the rustic Log Cabin, situated next to the experimental gardens of the agricultural campus, as part of the registration package for all ML94 and COLT94 attendees. The banquet on July 13 is included in the registration package for everyone except students. CLIMATE. New Jersey in July is typically hot, with average daily highs around 85 degrees, and overnight lows around 70. 
Most days in July are sunny, but also come prepared for the possibility of occasional rain.

THINGS TO DO. The newly opened Liberty Science Center is a fun, hands-on science museum located in Liberty State Park, about 30-45 minutes from New Brunswick (201-200-1000). From Liberty State Park, one can also take a ferry to the Statue of Liberty and the Immigration Museum at Ellis Island. New York City can be reached in under an hour by rail on New Jersey Transit. Trains run about twice an hour during the week, and once an hour on weekends and at night. Fare is $7.75 one-way, $11.50 round-trip excursion. New Brunswick has a number of theaters, including the State Theater (908-247-7200), the George Street Playhouse (908-246-7717), and the Crossroads Theater (908-249-5560). The New Jersey shore is less than an hour from New Brunswick. Points along the shore vary greatly in character. Some, such as Point Pleasant, have long boardwalks with amusement park rides, video arcades, etc. Others, such as Spring Lake, are quiet and uncommercialized with clean and very pretty beaches. Further south, about two hours from New Brunswick, are the casinos of Atlantic City. You can walk for miles and miles along the towpath of the peaceful Delaware and Raritan Canal, which runs from New Brunswick south past Princeton. Your registration packet will include a pass for access to the College Avenue Gymnasium (near the dormitories, #77 on map).

FURTHER INFORMATION. If you have any questions or problems, please send email to colt94 at research.att.com or to ml94 at cs.rutgers.edu. A map of the campus, abstracts of workshops/tutorials, updates of this announcement, and other information will be available via anonymous ftp from cs.rutgers.edu in the directory pub/learning94. For New Jersey Transit fare and schedule information, call 800-772-2222 (in New Jersey) or 201-762-5100 (out-of-state).

TECHNICAL PROGRAM

The technical program for the conferences has not yet been finalized, but will be distributed sometime in April. All ML technical sessions will be held July 11-13, and all COLT sessions will be held July 12-15.

INVITED LECTURES:
* Michael Jordan, "Hidden decision tree models."
* Stephen Muggleton, "Recent advances in inductive logic programming."
* Fernando Pereira, "Frequencies vs biases: Machine learning problems in natural language processing."

PAPERS ACCEPTED TO ML:

A Bayesian framework to integrate symbolic and neural learning. Irina Tchoumatchenko, Jean Gabriel Ganascia.
A case for Occam's razor in the task of rule-base refinement. J. Jeffrey Mahoney, Raymond Mooney.
A conservation law for generalization performance. Cullen Schaffer.
A constraint-based induction algorithm in FOL. Michele Sebag.
A modular Q-learning architecture for manipulator task decomposition. Chen Tham, Richard Prager.
A new method for predicting protein secondary structures based on stochastic tree grammars. Naoki Abe, Hiroshi Mamitsuka.
A powerful heuristic for the discovery of complex patterned behavior. Raul E. Valdes-Perez, Aurora Perez.
An efficient subsumption algorithm for inductive logic programming. Jorg-Uwe Kietz, Marcus Lubbe.
An improved algorithm for incremental induction of decision trees. Paul Utgoff.
An incremental learning approach for completable planning. Melinda T. Gervasio, Gerald F. DeJong.
Combining top-down and bottom-up techniques in inductive logic programming. John M. Zelle, Raymond Mooney, Joshua Konvisser.
Comparison of boosting to other ensemble methods using neural networks. Harris Drucker, Yann LeCun, L. Jackel, Corinna Cortes, Vladimir Vapnik.
Compositional instance-based learning. Karl Branting, Patrick Broos.
Consideration of risk in reinforcement learning. Matthias Heger.
Efficient algorithms for minimizing cross validation error. Mary Lee, Andrew W. Moore.
Exploiting the ordering of observed problem-solving steps for knowledge base refinement: an apprenticeship approach. Steven Donoho, David C. Wilkins.
Getting the most from flawed theories. Moshe Koppel, Alberto Segre, Ronen Feldman.
Greedy attribute selection. Richard A. Caruana, Dayne Freitag.
Hierarchical self-organization in genetic programming. Justinian Rosca, Dana Ballard.
Heterogeneous uncertainty sampling for supervised learning. David D. Lewis, Jason Catlett.
Improving accuracy of incorrect domain theories. Lars Asker.
In defense of C4.5: notes on learning one-level decision trees. Tapio Elomaa.
Increasing the efficiency of simulated annealing search by learning to recognize (un)promising runs. Yoichihro Nakakuki, Norman Sadeh.
Incremental multi-step Q-learning. Jing Peng, Ronald Williams.
Incremental reduced error pruning. Johannes Furnkranz, Gerhard Widmer.
Irrelevant features and the subset selection problem. George H. John, Ron Kohavi, Karl Pfleger.
Learning by experimentation: incremental refinement of incomplete planning domains. Yolanda Gil.
Learning disjunctive concepts by means of genetic algorithms. Attilio Giordana, Lorenza Saitta, F. Zini.
Learning recursive relations with randomly selected small training sets. David W. Aha, Stephane Lapointe, Charles Ling, Stan Matwin.
Learning semantic rules for query reformulation. Chun-Nan Hsu, Craig Knoblock.
Markov games as a framework for multi-agent reinforcement learning. Michael Littman.
Model-free reinforcement learning for non-Markovian decision problems. Satinder Pal Singh, Tommi Jaakkola, Michael I. Jordan.
On the worst-case analysis of temporal-difference learning algorithms. Robert Schapire, Manfred Warmuth.
Prototype and feature selection by sampling and random mutation hill climbing algorithms. David B. Skalak.
Reducing misclassification costs: knowledge-intensive approaches to learning from noisy data. Michael J. Pazzani, Christopher Merz, Patrick M. Murphy, Kamal M. Ali, Timothy Hume, Clifford Brunk.
Revision of production system rule-bases. Patrick M. Murphy, Michael J. Pazzani.
Reward functions for accelerated learning. Maja Mataric.
Selective reformulation of examples in concept learning. Jean-Daniel Zucker, Jean Gabriel Ganascia.
Small sample decision tree pruning. Sholom Weiss, Nitin Indurkhya.
The generate, test and explain discovery system architecture. Michael de la Maza.
The minimum description length principle and categorical theories. J. R. Quinlan.
To discount or not to discount in reinforcement learning: a case study comparing R-learning and Q-learning. Sridhar Mahadevan.
Towards a better understanding of memory-based and Bayesian classifiers. John Rachlin, Simon Kasif, Steven Salzberg, David W. Aha.
Using genetic search to refine knowledge-based neural networks. David W. Opitz, Jude Shavlik.
Using sampling and queries to extract rules from trained neural networks. Mark W. Craven, Jude Shavlik.

WORKSHOPS AND DIMACS-SPONSORED TUTORIALS

On Sunday, July 10, we are pleased to present four all-day workshops, five half-day tutorials, and one full-day advanced tutorial. The DIMACS-sponsored tutorials are free and open to the general public. Participation in the workshops is also free, but is at the discretion of the workshop organizers.
Note that some of the workshops have quickly approaching application deadlines. Please contact the workshop organizers directly for further information. Some information is also available on our ftp site (see "further information" above).

TUTORIALS:

T1. State of the art in learning DNF rules (advanced tutorial)    morning/afternoon
    Dan Roth          danr at das.harvard.edu
    Jason Catlett     catlett at research.att.com
T2. Descriptional complexity and inductive learning               morning
    Ed Pednault       epdp at research.att.com
T3. Computational learning theory: introduction and survey        morning
    Lenny Pitt        pitt at cs.uiuc.edu
T4. What does statistical physics have to say about learning?     morning
    Sebastian Seung   seung at physics.att.com
    Michael Kearns    mkearns at research.att.com
T5. Reinforcement learning                                        afternoon
    Leslie Kaelbling  lpk at cs.brown.edu
T6. Connectionist supervised learning--an engineering approach    afternoon
    Tom Dietterich    tgd at research.cs.orst.edu
    Andreas Weigend   andreas at cs.colorado.edu

WORKSHOPS:

W1. Robot learning                                                morning/afternoon/evening
    Sridhar Mahadevan  mahadeva at csee.usf.edu
W2. Applications of descriptional complexity to inductive,        afternoon/evening
    statistical and visual inference
    Ed Pednault        epdp at research.att.com
W3. Constructive induction and change of representation           morning/afternoon
    Tom Fawcett        fawcett at nynexst.com
W4. Computational biology and machine learning                    morning/afternoon
    Mick Noordewier    noordewi at cs.rutgers.edu
    Lindley Darden     darden at umiacs.umd.edu

REGISTRATION FOR COLT94/ML94

Please complete the registration form below, and mail it with your payment for the full amount to:

    Priscilla Rasmussen, ML/COLT'94
    Rutgers, The State University of NJ
    Laboratory for Computer Science Research
    Hill Center, Busch Campus
    Piscataway, NJ 08855

(Sorry, registration cannot be made by email, phone or fax.) Make your check or money order payable in U.S. dollars to Rutgers University. For early registration, and to request dorm housing, this form must be mailed by May 27, 1994. For questions about registration, please contact Priscilla Rasmussen (rasmussen at cs.rutgers.edu; 908-932-2768).

Name: _____________________________________________________
Affiliation: ______________________________________________
Address: __________________________________________________
___________________________________________________________
Country: __________________________________________________
Phone: _______________________ Fax: _______________________
Email: ____________________________________________________

Confirmation will be sent to you by email.

REGISTRATION. Please circle the *one* conference for which you are registering. (Even if you are planning to attend both conferences, please indicate the one conference that you consider to be "primary.")

    COLT94        ML94

The registration fee includes a copy of the proceedings for the *one* conference circled above (extra proceedings can be ordered below). Also included is admission to all ML94 and COLT94 talks and events (except that student registration does not include a banquet ticket).
Regular advance registration:                $190   $_______
ACM/SIG member advance registration:         $175   $_______
Late registration (after May 27):            $230   $_______
Student advance registration:                 $85   $_______
Student late registration (after May 27):    $110   $_______
Extra reception tickets (July 12):   _____ x $17  = $_______
Extra banquet tickets (July 13):     _____ x $40  = $_______
Extra COLT proceedings:              _____ x $35  = $_______
Extra ML proceedings:                _____ x $35  = $_______
Dorm housing (from below):                          $_______
                                    TOTAL ENCLOSED: $_______

How many in your party have dietary restrictions?
  Vegetarian: _____   Kosher: _____   Other: ______________

Circle your shirt size:   small   medium   large   X-large

HOUSING. Please indicate your housing preference below. Descriptions of the dorms are given under "housing" above. Dorm assignments will be made on a first come, first served basis, so please send your request in as early as possible. We will notify you by email if we cannot fill your request.

_____ Check here if you plan to stay at the Hyatt (reservations must be made directly with the hotel by June 10).

_____ Check here if you plan to make your own housing arrangements (other than at the Hyatt).

_____ Check here to request a room in the dorms and circle the appropriate dollar amount below:

    Dorm:                      Stonier   Campbell
    Length of stay:            dbl.      sing.    dbl.
    ML only (July 9-13):       $144      $144     $108
    COLT only (July 11-15):    $144      $144     $108
    ML and COLT (July 9-15):   $216      $216     $162

If staying in a double in the dorms, who will your roommate be? ____________________________________

For either dorm, please indicate expected day and time of arrival and departure. Note that check-in for the dorms must take place between 4pm and midnight on July 9-13.

    Expected arrival:   ______ (date)  ______ (time)
    Expected departure: ______ (date)  ______ (time)

TUTORIALS. The DIMACS-sponsored tutorials on July 10 are free and open to the general public. For our planning purposes, please circle those tutorials you plan to attend.

    Morning:   T1  T2  T3  T4
    Afternoon: T1  T5  T6

To participate in a workshop, please contact the workshop organizer directly. There is no fee for any workshop, and all workshops will be held on July 10.

REFUNDS. The entire dorm fee and one-half of the registration fee are refundable through June 24. Send all requests by email to rasmussen at cs.rutgers.edu.

From dhw at santafe.edu Wed Mar 23 22:24:40 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Wed, 23 Mar 94 20:24:40 MST
Subject: New paper on Bayesian backprop
Message-ID: <9403240324.AA04558@chimayo>

*** DO NOT FORWARD TO OTHER GROUPS ***

The following paper has been placed in an FTP repository at the Santa Fe Institute. An abbreviated version of this paper will appear in the proceedings of NIPS '93. The paper consists of two files. Retrieval instructions appear at the end of this message.

Bayesian Backpropagation Over I-O Functions Rather Than Weights

David H. Wolpert
The Santa Fe Institute, 1660 Old Pecos Trail, Santa Fe, NM 87501 (dhw at santafe.edu)

Abstract: The conventional Bayesian justification for backprop is that it finds the MAP weight vector. As this paper shows, to find the MAP i-o function instead, one must add a correction term to backprop. That term biases one towards i-o functions with small description lengths, and in particular favors (some kinds of) feature-selection, pruning, and weight-sharing. This can be viewed as an a priori argument in favor of those techniques.
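The distinction the abstract draws can be stated in standard Bayesian notation. This is a minimal sketch, not taken from the paper; the explicit form of the correction term is the paper's contribution and is not reproduced here. Write D for the training data and F(w) for the i-o function computed by a net with weight vector w:

    % conventional view: backprop (with weight decay as the log-prior) finds
    w_{MAP} = \arg\max_w \, p(w \mid D)
            = \arg\max_w \, [\, \log p(D \mid w) + \log p(w) \,]

    % the posterior over i-o functions integrates over all weight vectors
    % realizing the same function:
    p(f \mid D) = \int \delta\bigl(f - F(w)\bigr) \, p(w \mid D) \, dw

Since F is many-to-one, the density over i-o functions carries a volume factor counting the weight vectors that realize each f, so in general \arg\max_f p(f \mid D) \neq F(w_{MAP}); maximizing over functions therefore requires a term beyond the usual log-posterior-over-weights objective that backprop ascends.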
To retrieve the paper:

unix> ftp ftp.santafe.edu
Name: anonymous
Password: (your e-mail address)
ftp> binary
ftp> cd pub/Users/dhw
ftp> get nips.93.figs.ps.Z
ftp> get nips.93.text.ps.Z
ftp> quit
unix> uncompress nips.93.figs.ps.Z
unix> uncompress nips.93.text.ps.Z
unix> lpr nips.93.figs.ps    (or however you print postscript)
unix> lpr nips.93.text.ps    (or however you print postscript)

Note: The .figs file uncompresses to close to 2.5 megabytes. It may be necessary to use the -s option to lpr to print it.

From hilario at cui.unige.ch Wed Mar 23 13:21:22 1994
From: hilario at cui.unige.ch (Hilario Melanie)
Date: Wed, 23 Mar 1994 19:21:22 +0100
Subject: Please disseminate via connectionists-ml
Message-ID: <347*/S=hilario/OU=cui/O=unige/PRMD=switch/ADMD=arcom/C=ch/@MHS>

----------------------REMINDER : DEADLINE IS APRIL 1 ----------------------

                          Final Call for Papers

             COMBINING SYMBOLIC AND CONNECTIONIST PROCESSING
               Workshop held in conjunction with ECAI-94
             August 9, 1994 - Amsterdam, The Netherlands

----------------------REMINDER : DEADLINE IS APRIL 1 ----------------------

Until a few years ago, the history of AI was marked by two parallel, often antagonistic streams of development: classical or symbolic AI and connectionist processing. A recent research trend, premised on the complementarity of these two paradigms, strives to build hybrid systems which combine the advantages of both to overcome the limitations of each. For instance, attempts have been made to accomplish complex tasks by blending neural networks with rule-based or case-based reasoning. This workshop will be the first Europe-wide effort to bring together researchers active in the area, with a view to laying the groundwork for a theory and methodology of symbolic/connectionist integration (SCI).

The workshop will focus on the following topics:

o theoretical (cognitive and computational) foundations of SCI
o techniques and mechanisms for combining symbolic and neural processing methods (e.g. ways of improving and going beyond state-of-the-art rule compilation and extraction techniques)
o outstanding problems encountered and issues involved in SCI (e.g. Which symbolic or connectionist representation schemes are best adapted to SCI? The vector space used in neural nets and the symbolic space have fundamental mathematical differences; how will these differences impact SCI? Do we have the conceptual tools needed to cope with this representation problem?)
o profiles of application domains in which SCI has been (or can be) shown to perform better than traditional approaches
o description, analysis and comparison of implemented symbolic/connectionist systems

SUBMISSION REQUIREMENTS

Prospective participants should submit an extended abstract to the contact person below, either via email in postscript format or via regular mail, in which case 3 copies are required. Each submission should include a separate information page containing the title of the paper, author names and affiliations, and the complete address (including telephone, fax and email) of the first author. The paper itself should not exceed 12 pages. Submission deadline is April 1, 1994. Each paper will be reviewed by at least two members of the Program Committee. Notification of acceptance or rejection will be sent to first authors by May 1, 1994. Camera-ready copies of accepted papers are due on June 1st and will be reproduced for distribution at the workshop.
Those who wish to participate without presenting a paper should send a request describing their research interests and/or previous work in the field of SCI. Since attendance will be limited to ensure effective interaction, these requests will be considered after screening of the submitted papers. All workshop participants are required to register for the main conference.

PROGRAM COMMITTEE

Bernard Amy (LIFIA-IMAG, Grenoble, France)
Patrick Gallinari (LAFORIA, University of Paris 6, France)
Franz Kurfess (Dept. Neural Information Processing, University of Ulm, Germany)
Christian Pellegrini (CUI, University of Geneva, Switzerland)
Noel Sharkey (DCS, University of Sheffield, UK)
Alessandro Sperduti (CSD, University of Pisa, Italy)

IMPORTANT DATES

Submission deadline                     April 1, 1994
Notification of acceptance/rejection    May 1, 1994
Final papers due                        June 1, 1994
Date of the workshop                    August 9, 1994

CONTACT PERSON

Melanie Hilario
CUI - University of Geneva
24 rue General Dufour
CH-1211 Geneva 4
Voice: +41 22/705 7791
Fax: +41 22/320 2927
Email: hilario at cui.unige.ch

From franz at neuro.informatik.uni-ulm.de Wed Mar 23 13:26:02 1994
From: franz at neuro.informatik.uni-ulm.de (Franz Kurfess)
Date: Wed, 23 Mar 1994 19:26:02 +0100
Subject: CfP: Workshop "Logic and Reasoning with Neural Networks"
Message-ID:

Could you please distribute the following Final Call for Papers / Participation? Thank you very much.
Franz Kurfess, Alessandro Sperduti

FINAL CALL FOR PAPERS

"Logic and Reasoning with Neural Networks"
Workshop at the International Conference on Logic Programming ICLP'94
Santa Margherita Ligure, Italy
June 17 or 18, 1994

Description of the Workshop
===========================

The goal of the workshop is to initiate discussions and foster interaction between researchers interested in the use of neural networks and connectionist models for various aspects of logic and reasoning. There are a number of domains where the combination of neural networks and logic opens up interesting perspectives:

* Methods for Reasoning
  - cognitively plausible models of reasoning
  - reasoning with vague knowledge
  - neural inference mechanisms
  - probabilistic reasoning with neural networks
* Knowledge Representation Aspects
  - representation of non-symbolic information
  - knowledge acquisition from raw data (rule extraction) with neural networks
  - representation of vague knowledge
  - similarity-based access to knowledge
  - context-dependent retrieval of facts
* Integration of Symbolic and Neural Components
  - combining sub-symbolic and symbolic information
  - pattern recognition
  - sensor fusion
* Implementation Techniques
  - connectionist implementations of symbolic inference mechanisms
  - neural networks as a massively parallel implementation technique
  - neural networks for learning of search heuristics

There are at least three major aspects where a discussion of neural networks / connectionist models can be beneficial to the logic programming community at this time:

* development of reasoning techniques which are closer to the way humans reason in everyday situations
* dealing with vague knowledge, i.e. imprecise, uncertain, incomplete, inconsistent information, possibly from different sources and in various formats
* efficiency improvements for symbolic inference mechanisms, e.g. through adaptive learning from previously solved problems, or content-oriented access to rules and facts

Submission of Papers
====================

Prospective contributors are invited to submit papers or extended abstracts to the organizers by April 1, 1994.
They will be notified about acceptance or rejection by May 1. The final version of the papers is due June 1. We are planning to make the full papers accessible to the workshop participants in an ftp archive, and to hand out only copies of the abstracts. If possible, please use a text processing program that can produce PostScript output; otherwise it might be difficult to print copies on systems other than the one you used.

******** Papers should be sent to Franz Kurfess ***********

Preliminary Agenda
==================

There will be one or two talks of approximately 30 minutes in which the essential background on the use of neural networks for logic and reasoning will be presented. Their main purpose is to offer a brief introduction to those attendees with little knowledge of neural networks, and to provide a common framework of reference for the workshop. Care will be taken that these presentations concentrate on fundamental aspects, providing an overview of the field rather than a detailed technical review of one particular system or approach. The remaining time slots will be used for presentations of submitted papers, i.e. approximately two in each section, with enough time for discussion. The final time schedule will be distributed after May 1. The workshop will be concluded by a final discussion and a wrap-up of important aspects.

Important Dates
===============

Submission deadline                     April 1, 1994
Notification of acceptance/rejection    May 1, 1994
Final version of papers due             June 1, 1994
Date of the workshop                    June 17 or 18, 1994

Registration
============

In accordance with the standard policy of LP post-conference workshops, the workshops are an integral part of the conference. This means that participants of the workshop are expected to register for the conference.

Workshop Organizers
===================

Franz Kurfess
Dept. of Neural Information Processing
University of Ulm
D-89069 Ulm, Germany
Voice : +49/731 502-41+4953
Fax   : +49/731 502-4156
E-mail: kurfess at neuro.informatik.uni-ulm.de

Alessandro Sperduti
CSD - University of Pisa
Corso Italia 40
56100 Pisa, Italy
Voice : +39/50 887 248
Fax   : +39/50 887 226
E-mail: perso at di.unipi.it

From geva at fit.qut.edu.au Fri Mar 25 10:36:56 1994
From: geva at fit.qut.edu.au (Mr Shlomo Geva)
Date: Fri, 25 Mar 94 10:36:56 EST
Subject: ANZIIS 94 Call for Papers
Message-ID: <199403250037.KAA18407@sleet.fit.qut.edu.au>

******************* CALL FOR PAPERS *******************

ANZIIS-94
Second Australian and New Zealand Conference on Intelligent Information Systems
Brisbane, Queensland, Australia
Tutorials: 29 November; Conference: 30 November - 2 December 1994

Major fields:
  Artificial Intelligence
  Fuzzy Systems
  Neural Networks
  Evolutionary Computation

The Second Australian and New Zealand Conference on Intelligent Information Systems (ANZIIS-94) will be held in Brisbane from 29 November to 2 December 1994. This follows the successful inaugural conference, ANZIIS-93, held in Perth in December 1993. The Conference will offer an international forum for discussion of new research on the key methods of intelligent information processing: conventional artificial intelligence, fuzzy logic, artificial neural networks, and evolutionary algorithms. The conference will include invited keynote presentations and contributed papers in oral and poster presentations. All papers will be refereed and published in the proceedings.
TUTORIALS AND PANEL SESSIONS

The Organising Programme Committee cordially invites proposals for tutorials and special interest sessions relevant to the scope of the conference. Proposals should include details of the proponent, including mailing, e-mail and fax addresses, and research record.

ABOUT BRISBANE

Brisbane is a cosmopolitan and pleasant subtropical city. It is the heart of the vibrant south-east Queensland region that stretches over 200 km from the Gold Coast to the Sunshine Coast. The region is not only a focal point for national and international tourists; tens of thousands of Australians decide to set up home here every year. We recommend that conference participants set aside a few extra days to explore the region, either at their own leisure or by taking part in the special pre- and post-conference activities to be announced.

Application areas will include, but will not be limited to: Adaptive Systems, Artificial Life, Autonomous Vehicles, Data Analysis, Factory Automation, Financial Markets, Intelligent Databases, Knowledge Engineering, Machine Vision, Pattern Recognition, Machine Learning, Neurobiological Systems, Control Systems, Optimisation, Parallel and Distributed Computing, Robotics, Prediction, Sensorimotor Systems, Signal Processing, Speech Processing, Virtual Reality.

INFORMATION

ANZIIS-94 Secretariat
School of Computing Science
Queensland University of Technology
GPO Box 2434
Brisbane, Q 4001, Australia
Telephone: +61 7 864 2925
Fax: +61 7 864 1801
E-mail: anziis94 at qut.edu.au

SUBMISSION OF PAPERS

For speedy processing of the papers, authors are requested to submit their contributions camera-ready, on paper and by mail only. Papers should be laser-printed on A4 pages with 25 mm margins on all four sides, using a Roman font not smaller than 10 points. The maximum allowed length of an article is 5 pages. The paper should be set in two-column format, using the LaTeX "article" style or following the style of the IEEE Transactions journals. The papers should contain an abstract and the complete mailing addresses of the authors. Papers will be reviewed internationally. Accepted articles will be published as submitted, as there is no opportunity for revision. Only those papers for which the presenting author has registered as a conference delegate will be printed in the proceedings. Extra copies of the Proceedings will be marketed through the IEEE book brokerage program.

IMPORTANT DATES

Papers due:                     15 July 1994
Tutorial proposals due:         15 July 1994
Notification of acceptance:     15 September 1994
Registration for authors due:    1 October 1994

FEES

Before 1 October:
  Member of IEEE/IEAust/ACS           A$400
  Other                               A$450
  Student member of IEEE/IEAust/ACS   A$150
  Other student                       A$200

After 1 October:
  Member of IEEE/IEAust/ACS           A$450
  Other                               A$500
  Student member of IEEE/IEAust/ACS   A$200
  Other student                       A$250

GOVERNMENT TRAINING LEVY

The conference programme will meet the requirements of the Australian Government Training Levy for inclusion in an employer's training programme.

ANZIIS-94 ORGANISED BY

IEEE Australia Council
IEEE New Zealand Council
IEEE Queensland Section

IN CO-OPERATION WITH

IEAust - The Institution of Engineers, Australia
Australian Computer Society
Queensland University of Technology - School of Computing Science

ORGANISING COMMITTEE

Dr. J. Sitte, Queensland University of Technology (General Conference Chair)
Dr. W. Boles, Queensland University of Technology
Mr. S. Ellis, IEEE Queensland Section
Dr. S. Geva, Queensland University of Technology
Mr. R. Prandolini, IEEE Queensland Chapter
Ms. R. Sitte, Griffith University - Nathan
Mr. C. Thorne, Griffith University - Gold Coast
Dr. R. Zurawski, Swinburne University of Technology
Prof. Y. Attikiouzel, University of Western Australia (Advisory Committee Chair)
Dr. Nicola Kasabov, University of Otago (New Zealand Liaison Chair)

TECHNICAL COMMITTEE

Dr. J. Andreae, University of Canterbury, New Zealand
Prof. S. Bang, Pohang Institute of Science and Technology, Korea
Prof. B. Boashash, Queensland University of Technology, Australia
Ms. A. Bowles, BHP Research Laboratories, Australia
Prof. T. Caelli, University of Melbourne, Australia
Dr. L. Cahill, La Trobe University, Australia
Dr. G. Coghill, University of Auckland, New Zealand
Prof. A. Constantinides, Imperial College, U.K.
Dr. J. Cybulski, La Trobe University, Australia
Prof. T. Dillon, La Trobe University, Australia
Prof. T. Downs, University of Queensland, Australia
Prof. R. Evans, The University of Melbourne, Australia
Prof. N. Foo, University of Sydney, Australia
Prof. T. Fukuda, Nagoya University, Japan
Prof. R. Hodgson, Massey University, New Zealand
Mr. A. Horsfall, Fujitsu Australia Ltd., Australia
Prof. H. Hsu, National Taiwan University, Taiwan
Prof. R. Jarvis, Monash University, Australia
Dr. A. Jennings, Telecom Research Laboratories, Australia
Dr. J. Kacprzyk, Polish Academy of Sciences, Poland
Prof. S. Kollias, National Technical University of Athens, Greece
Prof. B. Kosko, University of Southern California, USA
Dr. A. Kowalczyk, Telecom Research Laboratories, Australia
Dr. H.C. Lui, National University of Singapore, Singapore
Prof. T. Mitchell, Carnegie Mellon University, USA
Dr. J. Morris, University of Tasmania, Australia
Dr. D. Nandagopal, DSTO, Australia
Prof. T. Nguyen, University of Tasmania, Australia
Dr. M. Palaniswami, The University of Melbourne, Australia
Prof. L. Patnaik, Indian Institute of Science, India
Dr. P.K. Simpson, Orincon Corp., San Diego, USA
Prof. A.C. Tsoi, University of Queensland, Australia
Dr. R. Uthurusamy, GM Research Labs, USA
Prof. A. Venetsanopoulos, University of Toronto, Canada
Prof. K. Wong, The University of Western Australia, Australia
Dr. A. Zomaya, The University of Western Australia, Australia
Prof. J. Zurada, University of Louisville, USA

From esann at dice.ucl.ac.be Fri Mar 25 12:26:53 1994
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Fri, 25 Mar 94 18:26:53 +0100
Subject: ESANN'94: European Symposium on ANNs
Message-ID: <9403251726.AA16937@ns1.dice.ucl.ac.be>

******************************************************************
*            European Symposium                                  *
*            on Artificial Neural Networks                       *
*                                                                *
*            Brussels (Belgium) - April 20-21-22, 1994           *
*                                                                *
*            PROGRAM and REGISTRATION FORM                       *
******************************************************************

Foreword
********

Current developments in the field of artificial neural networks mark a watershed in its relatively young history. Gone is the blind enthusiasm for disparate applications of a few years ago; the tendency is now toward an objective assessment of this emerging technology, with better knowledge of the basic concepts and more appropriate comparisons and links with classical methods of computing. Neural networks are not restricted to the use of back-propagation and multi-layer perceptrons.
Self-organization, adaptive signal processing, vector quantization, classification, statistics, and image and speech processing are some of the domains where neural network techniques may be successfully used; but beneficial use requires an in-depth examination of both the theoretical basis of the neural techniques and the standard methods commonly used in the domain in question.

ESANN'94 is the second symposium covering these aspects of neural network computing. After a successful edition in 1993, ESANN'94 will open new perspectives by focusing on theoretical and mathematical aspects of neural networks, biologically-inspired models, statistical aspects, and relations between neural networks and both information and signal processing (classification, vector quantization, self-organization, approximation of functions, image and speech processing, ...).

The steering and program committees of ESANN'94 are pleased to invite you to participate in this symposium. More than a formal conference presenting the latest developments in the field, ESANN'94 will also be a forum for open discussions, round tables and opportunities for future collaboration. We hope to have the pleasure of meeting you in April in the splendid city of Brussels, and we hope that your stay in Belgium will be as scientifically beneficial as it is agreeable.

Symposium information
*********************

Registration fees for symposium
-------------------------------

                 registration before    registration after
                 18th March 1994        18th March 1994
  Universities   BEF 14500              BEF 15500
  Industries     BEF 18500              BEF 19500

Registration fees include attendance to all sessions, the ESANN'94 banquet, a copy of the conference proceedings, daily lunches (20-22 April '94), and coffee breaks twice a day during the symposium. Advance registration is mandatory. Young researchers may apply for grants offered by the European Community (restricted to citizens or residents of a Western European country or, tentatively, a Central or Eastern European country; deadline for applications: March 11th, 1994; please write to the conference secretariat for details).

Advance payments (see registration form) must be made to the conference secretariat by bank transfer in Belgian Francs (free of charges) or by sending a cheque (add BEF 500 for processing fees).

Language
--------
The official language of the conference is English. It will be used for all printed material, presentations and discussions.

Proceedings
-----------
A copy of the proceedings will be provided to all conference registrants. All technical papers will be included in the proceedings. Additional copies of the proceedings (ESANN'93 and ESANN'94) may be purchased at the following rates:
  ESANN'94 proceedings: BEF 2000
  ESANN'93 proceedings: BEF 1500
Add BEF 500 to any order for postage and packing and/or bank charges. Please write to the conference secretariat to order proceedings.

Conference dinner
-----------------
A banquet will be offered on Thursday 21st to all conference registrants, in a famous and typical venue of Brussels. Additional vouchers for the banquet may be purchased on Wednesday 20th at the conference.

Cancellation
------------
If cancellation is received by 25th March 1994, 50% of the registration fees will be refunded. Cancellations received after this date will not be entitled to any refund.
General information ******************* Brussels, Belgium ----------------- Brussels is not only the host city of the European Commission and of hundreds of multinational companies; it is also a marvelous historical town, with typical quarters, famous monuments known throughout the world, and the splendid "Grand-Place". It is a cultural and artistic center, with numerous museums. Night life in Brussels is considerable. There are of lot of restaurants and pubs open late in the night, where typical Belgian dishes can be tasted with one of the more than 1000 different beers. Hotel accommodation ------------------- Special rates for participants to ESANN'94 have been arranged at the MAYFAIR HOTEL, a De Luxe 4 stars hotel with 99 fully air conditioned guest rooms, tastefully decorated to the highest standards of luxury and comfort. The hotel includes two restaurants, a bar and private parking. Public transportation (trams n93 & 94) goes directly from the hotel to the conference center (Parc stop) Single room BEF 2800 Double room or twin room BEF 3500 Prices include breakfast, taxes and service. Rooms can only be confirmed upon receipt of booking form (see at the end of this booklet) and deposit. Located on the elegant Avenue Louise, the exclusive Hotel Mayfair is a short walk from the "uppertown" luxurious shopping district. Also nearby is the 14th century Cistercian abbey and the magnificent "Bois de la Cambre" park with its open-air cafes - ideal for a leisurely stroll at the end of a busy day. HOTEL MAYFAIR tel: +32 2 649 98 00 381 av. Louise fax: +32 2 649 22 49 1050 Brussels - Belgium Conference location ------------------- The conference will be held at the "Chancellerie" of the Generale de Banque. A map is included in the printed programme. Generale de Banque - Chancellerie 1 rue de la Chancellerie 1000 Brussels - Belgium Conference secretariat D facto conference services tel: + 32 2 245 43 63 45 rue Masui fax: + 32 2 245 46 94 B-1210 Brussels - Belgium E-mail: esann at dice.ucl.ac.be PROGRAM OF THE CONFERENCE ************************* Wednesday 20th April 1994 ------------------------- 9H30 Registration 10H00 Opening session Session 1: Neural networks and chaos Chairman: M. Hasler (Ecole Polytechnique Fdrale de Lausanne, Switzerland) 10H10 "Concerning the formation of chaotic behaviour in recurrent neural networks" T. Kolb, K. Berns Forschungszentrum Informatik Karlsruhe (Germany) 10H30 "Stability and bifurcation in an autoassociative memory model" W.G. Gibson, J. Robinson, C.M. Thomas University of Sidney (Australia) 10H50 Coffee break Session 2: Theoretical aspects 1 Chairman: C. Jutten (Institut National Polytechnique de Grenoble, France) 11H30 "Capabilities of a structured neural network. Learning and comparison with classical techniques" J. Codina, J. C. Aguado, J.M. Fuertes Universitat Politecnica de Catalunya (Spain) 11H50 "Projection learning: alternative approaches to the computation of the projection" K. Weigl, M. Berthod INRIA Sophia Antipolis (France) 12H10 "Stability bounds of momentum coefficient and learning rate in backpropagation algorithm"" Z. Mao, T.C. Hsia University of California at Davis (USA) 12H30 Lunch Session 3: Links between neural networks and statistics Chairman: J.C. Fort (Universit Nancy I, France) 14H00 "Model selection for neural networks: comparing MDL and NIC"" G. te Brake*, J.N. Kok*, P.M.B. 
Vitanyi** *Utrecht University, **Centre for Mathematics and Computer Science, Amsterdam (Netherlands) 14H20 "Estimation of performance bounds in supervised classification" P. Comon*, J.L. Voz**, M. Verleysen** *Thomson-Sintra Sophia Antipolis (France), **Universit Catholique de Louvain, Louvain-la-Neuve (Belgium) 14H40 "Input Parameters' estimation via neural networks" I.V. Tetko, A.I. Luik Institute of Bioorganic & Petroleum Chemistry, Kiev (Ukraine) 15H00 "Combining multi-layer perceptrons in classification problems" E. Filippi, M. Costa, E. Pasero Politecnico di Torino (Italy) 15H20 Coffee break Session 4: Algorithms 1 Chairman: J. Hrault (Institut National Polytechnique de Grenoble, France) 16H00 "Diluted neural networks with binary couplings: a replica symmetry breaking calculation of the storage capacity" J. Iwanski, J. Schietse Limburgs Universitair Centrum (Belgium) 16H20 "Storage capacity of the reversed wedge perceptron with binary connections" G.J. Bex, R. Serneels Limburgs Universitair Centrum (Belgium) 16H40 "A general model for higher order neurons" F.J. Lopez-Aligue, M.A. Jaramillo-Moran, I. Acedevo-Sotoca, M.G. Valle Universidad de Extremadura, Badajoz (Spain) 17H00 "A discriminative HCNN modeling" B. Petek University of Ljubljana (Slovenia) Thursday 21th April 1994 ------------------------ Session 5: Biological models Chairman: P. Lansky (Academy of Science of the Czech Republic) 9H00 "Biologically plausible hybrid network design and motor control" G.R. Mulhauser University of Edinburgh (Scotland) 9H20 "Analysis of critical effects in a stochastic neural model" W. Mommaerts, E.C. van der Meulen, T.S. Turova K.U. Leuven (Belgium) 9H40 "Stochastic model of odor intensity coding in first-order olfactory neurons" J.P. Rospars*, P. Lansky** *INRA Versailles (France), **Academy of Sciences, Prague (Czech Republic) 10H00 "Memory, learning and neuromediators" A.S. Mikhailov Fritz-Haber-Institut der MPG, Berlin (Germany), and Russian Academy of Sciences, Moscow (Russia) 10H20 "An explicit comparison of spike dynamics and firing rate dynamics in neural network modeling" F. Chapeau-Blondeau, N. Chambet Universit d'Angers (France) 10H40 Coffee break Session 6: Algorithms 2 Chairman: T. Denoeux (Universit Technologique de Compigne, France) 11H10 "A stop criterion for the Boltzmann machine learning algorithm" B. Ruf Carleton University (Canada) 11H30 "High-order Boltzmann machines applied to the Monk's problems" M. Grana, V. Lavin, A. D'Anjou, F.X. Albizuri, J.A. Lozano UPV/EHU, San Sebastian (Spain) 11H50 "A constructive training algorithm for feedforward neural networks with ternary weights" F. Aviolat, E. Mayoraz Ecole Polytechnique Fdrale de Lausanne (Switzerland) 12H10 "Synchronization in a neural network of phase oscillators with time delayed coupling" T.B. Luzyanina Russian Academy of Sciences, Moscow (Russia) 12H30 Lunch Session 7: Evolutive and incremental learning Chairman: T.J. Stonham (Brunel University, UK) - to be confirmed 14H00 "Reinforcement learning and neural reinforcement learning" S. Sehad, C. Touzet Ecole pour les Etudes et la Recherche en Informatique et Electronique, Nmes (France) 14H20 "Improving piecewise linear separation incremental algorithms using complexity reduction methods" J.M. Moreno, F. Castillo, J. Cabestany Universitat Politecnica de Catalunya (Spain) 14H40 "A comparison of two weight pruning methods" O. Fambon, C. 
15H00 "Extending immediate reinforcement learning on neural networks
      to multiple actions"
      C. Touzet
      Ecole pour les Etudes et la Recherche en Informatique et
      Electronique, Nîmes (France)
15H20 "Incremental increased complexity training"
      J. Ludik, I. Cloete
      University of Stellenbosch (South Africa)
15H40 Coffee break

Session 8: Function approximation
Chairman: E. Filippi (Politecnico di Torino, Italy) - to be confirmed

16H20 "Approximation of continuous functions by RBF and KBF networks"
      V. Kurkova, K. Hlavackova
      Academy of Sciences of the Czech Republic
16H40 "An optimized RBF network for approximation of functions"
      M. Verleysen*, K. Hlavackova**
      *Université Catholique de Louvain, Louvain-la-Neuve (Belgium),
      **Academy of Sciences of the Czech Republic
17H00 "VLSI complexity reduction by piece-wise approximation of the
      sigmoid function"
      V. Beiu, J.A. Peperstraete, J. Vandewalle, R. Lauwereins
      K.U. Leuven (Belgium)

20H00 Conference dinner

Friday 22nd April 1994
----------------------

Session 9: Algorithms 3
Chairman: J. Vandewalle (K.U. Leuven, Belgium) - to be confirmed

9H00  "Dynamic pattern selection for faster learning and controlled
      generalization of neural networks"
      A. Röbel
      Technische Universität Berlin (Germany)
9H20  "Noise reduction by multi-target learning"
      J.A. Bullinaria
      Edinburgh University (Scotland)
9H40  "Variable binding in a neural network using a distributed
      representation"
      A. Browne, J. Pilkington
      South Bank University, London (UK)
10H00 "A comparison of neural networks, linear controllers, genetic
      algorithms and simulated annealing for real time control"
      M. Chiaberge*, J.J. Merelo**, L.M. Reyneri*, A. Prieto**,
      L. Zocca*
      *Politecnico di Torino (Italy), **Universidad de Granada (Spain)
10H20 "Visualizing the learning process for neural networks"
      R. Rojas
      Freie Universität Berlin (Germany)
10H40 Coffee break

Session 10: Theoretical aspects 2
Chairman: M. Cottrell (Université Paris I, France)

11H20 "Stability analysis of diagonal recurrent neural networks"
      Y. Tan, M. Loccufier, R. De Keyser, E. Noldus
      University of Gent (Belgium)
11H40 "Stochastics of on-line back-propagation"
      T. Heskes
      University of Illinois at Urbana-Champaign (USA)
12H00 "A lateral contribution learning algorithm for multi MLP
      architecture"
      N. Pican*, J.C. Fort**, F. Alexandre*
      *INRIA Lorraine, **Université Nancy I (France)
12H20 Lunch

Session 11: Self-organization
Chairman: F. Blayo (EERIE Nîmes, France)

14H00 "Two or three things that we know about the Kohonen algorithm"
      M. Cottrell*, J.C. Fort**, G. Pagès***
      Universités *Paris 1, **Nancy 1, ***Paris 6 (France)
14H20 "Decoding functions for Kohonen maps"
      M. Alvarez, A. Varfis
      CEC Joint Research Center, Ispra (Italy)
14H40 "Improvement of learning results of the self-organizing map by
      calculating fractal dimensions"
      H. Speckmann, G. Raddatz, W. Rosenstiel
      University of Tübingen (Germany)
15H00 Coffee break

Session 11 (continued): Self-organization
Chairman: F. Blayo (EERIE Nîmes, France)

15H40 "A non linear Kohonen algorithm"
      J.-C. Fort*, G. Pagès**
      *Université Nancy 1, **Universités Pierre et Marie Curie et
      Paris 12 (France)
16H00 "Self-organizing maps based on differential equations"
      A. Kanstein, K. Goser
      Universität Dortmund (Germany)
16H20 "Instabilities in self-organized feature maps with short
      neighbourhood range"
      R. Der, M. Herrmann
      Universität Leipzig (Germany)
ESANN'94 Registration and Hotel Booking Form
********************************************

Registration fees
-----------------
                    registration before    registration after
                    18th March 1994        18th March 1994
Universities        BEF 14500              BEF 15500
Industries          BEF 18500              BEF 19500

University fees are applicable to members and students of academic and
teaching institutions. Each registration will be confirmed by an
acknowledgment of receipt, which must be presented at the conference
registration desk to receive the entry badge, proceedings and all
materials.

Registration fees include attendance at all sessions, the ESANN'94
banquet, a copy of the conference proceedings, daily lunches (20-22
April '94), and coffee breaks twice a day during the symposium.
Advance registration is mandatory. Students and young researchers from
European countries may apply for European Community grants.

Hotel booking
-------------
Hotel MAYFAIR (4 stars) - 381 av. Louise - 1050 Brussels

Single room             : BEF 2800
Double room (large bed) : BEF 3500
Twin room (2 beds)      : BEF 3500

Prices include breakfast, service and taxes. A deposit corresponding
to the first night is mandatory.

Registration to ESANN'94 (please give full address and tick as
appropriate)
------------------------------------------------------------------------

Ms., Mr., Dr., Prof.:...............................................
Name:...............................................................
First Name:.........................................................
Institution:........................................................
....................................................................
Address:............................................................
....................................................................
ZIP:................................................................
Town:...............................................................
Country:............................................................
Tel:................................................................
Fax:................................................................
E-mail:.............................................................
VAT n°:.............................................................

Universities:
O registration before 18th March 1994: BEF 14500
O registration after 18th March 1994:  BEF 15500

Industries:
O registration before 18th March 1994: BEF 18500
O registration after 18th March 1994:  BEF 19500

Hotel Mayfair booking (please tick as appropriate)
O single room deposit:             BEF 2800
O double room (large bed) deposit: BEF 3500
O twin room (twin beds) deposit:   BEF 3500

Arrival date:   ..../..../1994
Departure date: ..../..../1994

O Additional fee if payment is made by cheque drawn on a foreign bank:
  BEF 500

Total: BEF ____

Payment (please tick):
O Bank transfer, stating the name of the participant, made payable to:
     Générale de Banque
     ch. de Waterloo 1341 A
     B-1180 Brussels - Belgium
     Acc. no: 210-0468648-93 of D facto (45 rue Masui, B-1210 Brussels)
  Bank transfers must be free of charges. ANY CHARGES MUST BE PAID BY
  THE PARTICIPANT.
O Cheques/Postal Money Orders made payable to:
     D facto
     45 rue Masui
     B-1210 Brussels - Belgium

A SUPPLEMENTARY FEE OF BEF 500 MUST BE ADDED if payment is made by
cheque drawn on a foreign bank or by postal money order. Only
registrations accompanied by a cheque, a postal money order or proof
of bank transfer will be considered.
The registration and hotel booking form, together with payment, must
be sent as soon as possible, and in no case later than 8th April 1994,
to the conference secretariat:

&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
&   D facto conference services - ESANN'94      &
&   45, rue Masui - B-1210 Brussels - Belgium   &
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

Support
*******
ESANN'94 is organized with the support of:
- Commission of the European Communities (DG XII, Human Capital and
  Mobility programme)
- IEEE Region 8
- IFIP WG 10.6 on neural networks
- Region of Brussels-Capital
- EERIE (Ecole pour les Etudes et la Recherche en Informatique et
  Electronique - Nîmes)
- UCL (Université Catholique de Louvain - Louvain-la-Neuve)
- REGARDS (Research Group on Algorithmic, Related Devices and Systems
  - UCL)

Steering committee
******************
François Blayo          EERIE, Nîmes (F)
Marie Cottrell          Univ. Paris I (F)
Nicolas Franceschini    CNRS Marseille (F)
Jeanny Hérault          INPG Grenoble (F)
Michel Verleysen        UCL Louvain-la-Neuve (B)

Scientific committee
********************
Luis Almeida            INESC - Lisboa (P)
Jorge Barreto           UCL Louvain-en-Woluwe (B)
Hervé Bourlard          L. & H. Speech Products (B)
Joan Cabestany          Univ. Polit. de Catalunya (E)
Dave Cliff              University of Sussex (UK)
Pierre Comon            Thomson-Sintra Sophia (F)
Holk Cruse              Universität Bielefeld (D)
Dante Del Corso         Politecnico di Torino (I)
Marc Duranton           Philips / LEP (F)
Jean-Claude Fort        Université Nancy I (F)
Karl Goser              Universität Dortmund (D)
Martin Hasler           EPFL Lausanne (CH)
Philip Husbands         University of Sussex (UK)
Christian Jutten        INPG Grenoble (F)
Petr Lansky             Acad. of Sciences of the Czech Rep. (CZ)
Jean-Didier Legat       UCL Louvain-la-Neuve (B)
Jean-Arcady Meyer       Ecole Normale Supérieure - Paris (F)
Erkki Oja               Helsinki University of Technology (SF)
Guy Orban               KU Leuven (B)
Gilles Pagès            Université Paris I (F)
Alberto Prieto          Universidad de Granada (E)
Pierre Puget            LETI Grenoble (F)
Ronan Reilly            University College Dublin (IRE)
Tamas Roska             Hungarian Academy of Science (H)
Jean-Pierre Rospars     INRA Versailles (F)
Jean-Pierre Royet       Université Lyon 1 (F)
John Stonham            Brunel University (UK)
Lionel Tarassenko       University of Oxford (UK)
John Taylor             King's College London (UK)
Vincent Torre           Universita di Genova (I)
Claude Touzet           EERIE Nîmes (F)
Joos Vandewalle         KUL Leuven (B)
Eric Vittoz             CSEM Neuchâtel (CH)
Christian Wellekens     Eurecom Sophia-Antipolis (F)

_____________________________
Michel Verleysen
D facto conference services
45 rue Masui
1210 Brussels
Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
E-mail: esann at dice.ucl.ac.be
_____________________________

From hongchen at ndcvx.cc.nd.edu Sat Mar 26 02:31:58 1994
From: hongchen at ndcvx.cc.nd.edu (Hong Chen)
Date: Sat, 26 Mar 94 02:31:58 -0500
Subject: Paper available in Neuroprose
Message-ID: <9403260731.AA16126@ndcvx.cc.nd.edu>

Ftp-Host: archive.cis.ohio-state.edu
Ftp-Filename: /pub/neuroprose/chen.dynamic_approx.ps.Z

The following paper chen.dynamic_approx.ps.Z (28 pages) is now
available via anonymous ftp from the neuroprose archive. It appeared
in the November 1993 issue of IEEE Transactions on Neural Networks.

----------------------------------------------------------------------

Approximations of Continuous Functionals by Neural Networks
with Application to Dynamical Systems

Tianping Chen
Department of Mathematics
Fudan University
Shanghai, P.R. China

Hong Chen
VLSI Libraries, Inc.
3135 Kifer Road
Santa Clara, CA 95052 USA

ABSTRACT: The main concern of this paper is to present several strong
results on neural network representation in explicit form. Under very
mild conditions, a functional defined on a compact set in C[a,b] or
L^p[a,b], spaces of infinite dimension, can be approximated
arbitrarily well by a neural network with one hidden layer. In
particular, if U is a compact set in C[a,b], sigma is a bounded
sigmoidal function, and f is a continuous functional defined on U,
then for all u in U, f(u) can be approximated by a summation of the
form

    sum_{i=1}^{N} c_i sigma( sum_{j=0}^{m} xi_{i,j} u(x_j) + theta_i )

where c_i, xi_{i,j} and theta_i are real numbers, and u(x_j) is the
value of u evaluated at the point x_j. These results significantly
extend earlier work, which gave theorems on approximating continuous
functions defined on R^n, a space of finite dimension, by neural
networks with one hidden layer. Finally, all the results are shown to
be applicable to the approximation of the output of dynamical systems
at any particular time.

----------------------------------------------------------------------
Instructions for retrieving this paper:

unix% ftp archive.cis.ohio-state.edu
ftp-login: anonymous
ftp-password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get chen.dynamic_approx.ps.Z
ftp> bye
unix% uncompress chen.dynamic_approx.ps.Z
unix% lpr chen.dynamic_approx.ps
(or however you print postscript)

From hongchen at ndcvx.cc.nd.edu Sat Mar 26 02:28:59 1994
From: hongchen at ndcvx.cc.nd.edu (Hong Chen)
Date: Sat, 26 Mar 94 02:28:59 -0500
Subject: Paper available in Neuroprose
Message-ID: <9403260728.AA16099@ndcvx.cc.nd.edu>

Ftp-Host: archive.cis.ohio-state.edu
Ftp-Filename: /pub/neuroprose/chen.function_approx.ps.Z

The following paper chen.function_approx.ps.Z (15 pages) is now
available via anonymous ftp from the neuroprose archive. It has been
accepted by IEEE Transactions on Neural Networks.

----------------------------------------------------------------------

Approximation Capability in C(R^n) by Multilayer Feedforward Networks
and Related Problems

Tianping Chen
Department of Mathematics
Fudan University
Shanghai, P.R. China

Hong Chen
VLSI Libraries, Inc.
3135 Kifer Road
Santa Clara, CA 95052 USA

Ruey-wen Liu
Department of Electrical Engineering
University of Notre Dame
Notre Dame, IN 46556 USA

ABSTRACT: In this paper, we investigate the capability of
three-layered neural networks with a sigmoidal function in the hidden
layer to approximate functions in C(R^n). It is found that the
boundedness condition on the sigmoidal function plays an essential
role in the approximation, in contrast to continuity or monotonicity
conditions. We point out that in order to prove the approximation
capability of neural networks in the n-dimensional case, all one needs
to do is to prove the case for one dimension. The approximation in
L^p-norm and some related problems are also discussed.
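As a toy illustration of the single-hidden-layer form that both
abstracts study (a sketch added for this digest, not the authors'
construction -- the papers prove that such approximations exist, they
do not prescribe a training method), the following Python fragment
fits a network sum_i c_i sigma(w_i . x + theta_i) with a bounded
sigmoid to a continuous target on a compact subset of R^2. The hidden
parameters are fixed at random and only the output weights are
obtained by least squares; all sizes and distributions here are
arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def sigma(z):
    # a bounded sigmoidal function, as the theorems require
    return 1.0 / (1.0 + np.exp(-z))

def target(X):
    # a continuous function on the compact set [-1,1]^2
    return np.sin(X[:, 0]) * np.cos(2.0 * X[:, 1])

n_train, n_hidden = 1000, 300
X = rng.uniform(-1.0, 1.0, (n_train, 2))
W = rng.normal(0.0, 3.0, (2, n_hidden))   # hidden weights, fixed at random
b = rng.normal(0.0, 3.0, n_hidden)        # hidden biases, fixed at random
H = sigma(X @ W + b)                      # hidden-layer outputs
c, *_ = np.linalg.lstsq(H, target(X), rcond=None)  # output weights c_i

X_test = rng.uniform(-1.0, 1.0, (500, 2))
err = np.max(np.abs(sigma(X_test @ W + b) @ c - target(X_test)))
print("max abs error on fresh points:", err)  # small if n_hidden is ample

Fixing the hidden layer and solving a linear least-squares problem is
just a convenient way to exhibit the representational power of the
form; gradient training of all parameters would serve equally well.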
----------------------------------------------------------------------
Instructions for retrieving this paper:

unix% ftp archive.cis.ohio-state.edu
ftp-login: anonymous
ftp-password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get chen.function_approx.ps.Z
ftp> bye
unix% uncompress chen.function_approx.ps.Z
unix% lpr chen.function_approx.ps
(or however you print postscript)

From harry at brain.Jpl.Nasa.Gov Sun Mar 27 17:58:14 1994
From: harry at brain.Jpl.Nasa.Gov (Harry Langenbacher)
Date: Sun, 27 Mar 1994 14:58:14 -0800
Subject: Neural Network Workshop Announcement - JPL
Message-ID: <199403272258.OAA11218@brain.Jpl.Nasa.Gov>

NEURAL NET WORKSHOP ANNOUNCEMENT

"A Decade of Neural Networks: Practical Applications and Prospects"
May 11 - 13, 1994

The Jet Propulsion Laboratory's Center for Space Microelectronics
Technology (CSMT) is hosting this neural network workshop sponsored by
DoD and NASA. After 10 years of renewed activity in neural network
research, the technology has matured and stands at a crossroads
regarding its future practical applicability. The focus of the
workshop is to provide an avenue for sponsoring agencies, active
researchers, and the user community to formulate a cohesive vision for
the next decade of neural network research and applications. Such a
plan will directly address relevance to US technology competitiveness
in the global market.

In order to maintain a balance among the participants, attendance is
by invitation only. If interested in receiving an invitation, please
contact Dr. Sabrina Kemeny (jplnn94 at brain.jpl.nasa.gov) as soon as
possible, since space is limited.

The workshop will begin at JPL at 1:00 p.m. on Wednesday, May 11,
1994, and end at 10:00 a.m. on the 13th. Following two plenary
sessions on the 11th and the morning of the 12th, we will split into
working groups targeting three specific application areas. The
splinter groups will focus on a government/industry investment
strategy for future neural network research. The groups will address
issues such as overcoming barriers impeding technology insertion and
creating a better user-developer interface. Friday's session will
include summaries from the splinter groups and a sponsor-industry
assessment panel.

A registration fee of $75.00 will include a welcome reception the
first evening, dinner the second evening, coffee breaks, and a copy of
the workshop proceedings.

Invited speakers will highlight clear benefits of neural networks in
real-world applications compared to conventional computing techniques.
Topics such as fault diagnosis (vehicle engine health monitoring for
automobiles), pattern recognition (document analysis), and
multiparameter optimization (unsteady aerodynamic control) will be
covered in the presentations.

Invited Speakers
* Josh Alspector, Bellcore
* Dave Andes, NAWC
* William Campbell, Goddard Space Flight Center
* John Carson, Irvine Sensors Corporation
* Laurence Clarke, University of South Florida
* Dwight Duston, BMDO
* William Faller, USAF Academy
* Lee Feldkamp, Ford Motor Company
* Erol Gelenbe, Duke University/IBM
* Karl Goser, University of Dortmund
* Hans Peter Graf, AT&T
* Sandeep Gulati, Jet Propulsion Laboratory
* Michael Henry, Martin Marietta Astronautics
* Christof Koch, California Institute of Technology
* Peter Lichtenwalner, McDonnell Douglas
* Kenneth Marko, Ford Motor Company
* William Miceli, ONR
* Steven Rogers, Air Force Institute of Technology
* Joseph Sgro, Alacron Inc.
* Bing Sheu, University of Southern California
* Padraic Smyth, Jet Propulsion Laboratory
* Simon Tam, Intel Corporation

For further information please contact Dr. Sabrina Kemeny
Phone: (818) 354-0660, Fax: (818) 393-4540
Email: jplnn94 at brain.jpl.nasa.gov
Postal address:
Mail Stop 302-231
Jet Propulsion Laboratory
4800 Oak Grove Dr.
Pasadena, CA 91109-8099

From eppler at hpesun4.kfk.de Mon Mar 28 05:48:46 1994
From: eppler at hpesun4.kfk.de (Wolfgang Eppler)
Date: Mon, 28 Mar 94 10:58:46+010
Subject: PhD position
Message-ID: <9403280958.AA11254@hpesun4.kfk.de>

A PhD position is available at the Nuclear Research Center Karlsruhe.
The department of Data Processing and Electronics has a small group
working with neuro-fuzzy methods. One working domain is adaptive
control; a second is applications in pattern recognition. Applicants
with expertise in the latter domain and excellent academic records are
encouraged to send me an email within the next few days. The
appointment is restricted to three years; the salary is moderate.

Applications to: eppler at hpesun3.kfk.de, or
KfK, HPE-TTL
c/o Wolfgang Eppler
Postfach 3640
D-76021 Karlsruhe
Germany

From antonio at gsc.ele.puc-rio.br Mon Mar 28 16:08:40 1994
From: antonio at gsc.ele.puc-rio.br (Antonio J. G. Abelem [Marco])
Date: Mon, 28 Mar 94 16:08:40 EST
Subject: Input/Output Data Conversion in BackProp.
Message-ID: <9403281908.AA03167@Cygnus >

I'm using neural networks to predict financial time series,
specifically the gold-price time series. I have experienced some
problems with the conversion scheme used to present data to the
network. I have mainly used LINEAR CONVERSION (original data values
converted to the range [0, 1] or [-1, +1]), but it does not seem to
work properly.

I have also made some attempts with:
a) input data in its original form
b) input data in its derivative form
c) the input minus the data average
d) the percent difference between inputs Ti+1 and Ti

However, in all these cases the target patterns need to be converted
before being presented to the network, since its output is between 0
and 1 (for the sigmoid) or between -1 and 1 (for the hyperbolic
tangent).

My results for the single-step mode are good, but I think they could
be better. For the multi-step mode, on the other hand, the results are
very bad. Any suggestions will be very much appreciated.

Thanks.
Antonio

From announce at PARK.BU.EDU Mon Mar 28 15:39:57 1994
From: announce at PARK.BU.EDU (announce@PARK.BU.EDU)
Date: Mon, 28 Mar 94 15:39:57 -0500
Subject: Call for Articles: Automatic Target Recognition issue, Neural Networks
Message-ID: <9403282039.AA24040@retina.bu.edu>

***** CALL FOR PAPERS *****

1995 Special Issue of the journal Neural Networks on
"Neural Networks for Automatic Target Recognition"

ATR is a many-faceted problem of tremendous importance in industrial
and defense applications. Biological systems excel at these tasks, and
neural networks may provide a robust, real-time, and compact means for
achieving solutions to ATR problems. ATR systems utilize a host of
sensing modalities (visible, multispectral, IR, SAR, and ISAR imagery;
radar, sonar, and acoustic time series; and fusion of multiple sensing
modalities) in order to detect and track targets in clutter, and
classify them. This Special Issue will bring together a broad range of
invited and contributed articles that explore a variety of software
and hardware modules and systems, and biological inspirations, focused
on solving ATR problems.
We particularly welcome articles involving applications to real data,
though the journal cannot publish classified material. It will be the
responsibility of the submitting authors to ensure that all
submissions are of an unclassified nature.

Co-Editors:
-----------
Professor Stephen Grossberg, Boston University
Dr. Harold Hawkins, Office of Naval Research
Dr. Allen Waxman, MIT Lincoln Laboratory

Submission:
-----------
Deadline for submission: October 31, 1994
Notification of acceptance: January 15, 1995
Format: as for normal papers in the journal (APA format) and no longer
than 10,000 words

Address for Papers:
-------------------
Professor Stephen Grossberg
Editor, Neural Networks
Boston University
Department of Cognitive and Neural Systems
111 Cummington Street, Room 244
Boston, MA 02215 USA

From sef+ at cs.cmu.edu Mon Mar 28 21:11:32 1994
From: sef+ at cs.cmu.edu (Scott E. Fahlman)
Date: Mon, 28 Mar 94 21:11:32 EST
Subject: Input/Output Data Conversion in BackProp.
In-Reply-To: Your message of Mon, 28 Mar 94 16:08:40 -0500.
             <9403281908.AA03167@Cygnus >
Message-ID:

    I'm using neural networks to predict financial time series,
    specifically the gold-price time series. I have mainly used LINEAR
    CONVERSION (original data values converted to the range [0, 1] or
    [-1, +1]), but it does not seem to work properly. My results for
    the single-step mode are good, but I think they could be better.
    For the multi-step mode, on the other hand, the results are very
    bad.

With so little information about your problem or the architecture
(backprop?) you are using, it is impossible to diagnose the problem.
Perhaps the net is doing as well as might be expected for this data
set. Perhaps you have chosen a poor net topology or parameters.

One thing does jump out, however: if you are trying to produce a
continuous-valued output, you might be better off with linear output
units than with sigmoids. Then you won't have to pre-scale your data
to fit the sigmoid's range, and you won't be getting distortion due to
gratuitous nonlinearities.

-- Scott

===========================================================================
Scott E. Fahlman                     Internet: sef+ at cs.cmu.edu
Senior Research Scientist            Phone:    412 268-2575
School of Computer Science           Fax:      412 681-5739
Carnegie Mellon University           Latitude: 40:26:33 N
5000 Forbes Avenue                   Longitude: 79:56:48 W
Pittsburgh, PA 15213
===========================================================================

From eppler at hpesun4.kfk.de Tue Mar 29 09:02:06 1994
From: eppler at hpesun4.kfk.de (Wolfgang Eppler)
Date: Tue, 29 Mar 94 14:12:06+010
Subject: PhD pos
Message-ID: <9403291312.AA11835@hpesun4.kfk.de>

Sorry, yesterday I announced a PhD position on this mailing list. That
was not quite correct: the open position is for PhD students
interested in pursuing a doctoral thesis. Sorry for the
misunderstanding.

W. Eppler

From aminai at thor.ece.uc.edu Tue Mar 29 12:12:49 1994
From: aminai at thor.ece.uc.edu (Ali Minai)
Date: Tue, 29 Mar 1994 12:12:49 -0500
Subject: Input/Output Data Conversion in BackProp.
Message-ID: <199403291712.MAA01961@holmes.ece.uc.edu>

If you are getting good results without rescaling the input, you could
use linear output neurons to give you a corresponding dynamic range on
the output side. However, a more interesting issue might be to explain
the difference (if any) in prediction quality between the rescaled and
unscaled cases. Is it because the data has a strange distribution?
For example, if very small differences in the real data can lead to
significantly different consequences, rescaling might be losing
important information. Or you might just need to use a faster learning
rate to make up for smaller gradient magnitudes in the rescaled case.

If your 1-step predictions are good, you can use these to bootstrap up
to longer-term ones. The simplest way is to feed back the predicted
output into the network input, but better results can probably be
obtained as follows: train a 1-step predictor and a 2-step predictor;
configure the 1-step predictor to produce 2-step predictions through
re-iteration; then combine the two 2-step predictors to produce an
averaged/weighted 2-step prediction; iterate on this 2-step predictor
to produce longer-term predictions. This method can be repeated for
4-step predictors using a direct 4-step predictor and the
twice-iterated configuration of the 2-step predictor described above.
I'm sure many people must have used similar methods (I have), but I
refer you to an excellent paper by Tim Sauer:

T. Sauer, "Time Series Prediction by Using Delay Coordinate
Embedding", in TIME SERIES PREDICTION, A.S. Weigend & N.A. Gershenfeld
(eds.), Addison-Wesley, 1994.

He mentions the method in the context of non-neural time series
prediction, but the applicability to neural net predictors is obvious.

Ali Minai

From wahba at stat.wisc.edu Tue Mar 29 12:50:35 1994
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Tue, 29 Mar 94 11:50:35 -0600
Subject: gold-price time series, conversion to [0,1]
Message-ID: <9403291750.AA21074@hera.stat.wisc.edu>

    I'm using neural networks to predict financial time series,
    specifically the gold-price time series. I have mainly used LINEAR
    CONVERSION (original data values converted to the range [0, 1] or
    [-1, +1]), but it does not seem to work properly. My results for
    the single-step mode are good, but I think they could be better.
    For the multi-step mode, on the other hand, the results are very
    bad.

You might look at

AUTHOR = {D. McCaffrey and S. Ellner and A. R. Gallant and D. Nychka},
TITLE = {Estimating the Lyapunov exponent of a chaotic system with
         nonparametric regression},
JOURNAL = {J. Amer. Statist. Assoc.},
YEAR = {1992},
VOLUME = {87},
PAGES = {682-695}

Among other things, they consider the model

    x_t = f(x_{t-1}, ..., x_{t-d}) + \sigma \epsilon_t

and look at the estimation of f(., ..., .) using thin plate splines
and other radial basis functions.

Grace Wahba
wahba at stat.wisc.edu

From elman at crl.ucsd.edu Tue Mar 29 14:10:29 1994
From: elman at crl.ucsd.edu (Jeff Elman)
Date: Tue, 29 Mar 94 11:10:29 PST
Subject: Postdoc announcement: CRL/UCSD
Message-ID: <9403291910.AA05158@crl.ucsd.edu>

CENTER FOR RESEARCH IN LANGUAGE
UNIVERSITY OF CALIFORNIA, SAN DIEGO

ANNOUNCEMENT OF POSTDOCTORAL FELLOWSHIPS

Applications are invited for postdoctoral fellowships in Language,
Communication and Brain at the Center for Research in Language at the
University of California, San Diego. The fellowships are supported by
the National Institutes of Health (NIDCD), and provide an annual
stipend ranging from $19,608 to $32,300 depending upon years of
postdoctoral experience. In addition, some funding is provided for
medical insurance and travel.
The program provides interdisciplinary training in: (1)
psycholinguistics, including language processing in adults and
language development in children; (2) communication disorders,
including childhood language disorders and adult aphasia; (3)
electrophysiological studies of language; and (4) neural network
models of language learning and processing. Candidates are expected to
work in at least one of these four areas. Grant conditions require
that candidates be citizens or permanent residents of the U.S.

Applicants should send a statement of interest, three letters of
recommendation, a curriculum vitae and copies of relevant publications
to:

Jan Corte
Center for Research in Language 0526
University of California, San Diego
9500 Gilman Drive
La Jolla, California 92093-0526
(619) 534-2536

Women and minority candidates are specifically invited to apply.

From mm at santafe.edu Tue Mar 29 18:24:42 1994
From: mm at santafe.edu (Melanie Mitchell)
Date: Tue, 29 Mar 94 16:24:42 MST
Subject: papers available
Message-ID: <9403292324.AA10868@wupatki>

The following papers are available via anonymous ftp:

The Evolution of Emergent Computation

James P. Crutchfield        Melanie Mitchell
UC Berkeley                 Santa Fe Institute

Santa Fe Institute Working Paper 94-03-012
Submitted to Science, March 1994

Abstract

A simple evolutionary process can discover sophisticated methods for
emergent information processing in decentralized spatially-extended
systems. The mechanisms underlying the resulting emergent computation
are explicated by a novel technique for analyzing particle-based logic
embedded in pattern-forming systems. Understanding how
globally-coordinated computation can emerge in evolution is relevant
both for the scientific understanding of natural information
processing and for engineering new forms of parallel computing
systems.

To obtain an electronic copy of this paper (9 pages):

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get EvEmComp.ps.Z
quit

Then at your system:

uncompress EvEmComp.ps.Z
lpr EvEmComp.ps

If you cannot obtain an electronic copy, send a request for a hard
copy to pdb at santafe.edu.

-----------------------------------------------

A Genetic Algorithm Discovers Particle-Based Computation in Cellular
Automata

Rajarshi Das          Melanie Mitchell      James P. Crutchfield
Santa Fe Institute    Santa Fe Institute    UC Berkeley

Santa Fe Institute Working Paper 94-03-015
Submitted to the Third Parallel Problem-Solving From Nature
Conference, March 1994

Abstract

How does evolution produce sophisticated emergent computation in
systems composed of simple components limited to local interactions?
To model such a process, we used a genetic algorithm (GA) to evolve
cellular automata to perform a computational task requiring
globally-coordinated information processing. On most runs a class of
relatively unsophisticated strategies was evolved, but on a subset of
runs a number of quite sophisticated strategies was discovered. We
analyze the emergent logic underlying these strategies in terms of
information processing performed by "particles" in space-time, and we
describe in detail the generational progression of the GA evolution of
these strategies. Our analysis is a preliminary step in understanding
the general mechanisms by which sophisticated emergent computational
capabilities can be automatically produced in decentralized
multiprocessor systems.
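To make the GA-evolves-CA setup concrete, here is a heavily simplified
Python sketch of the loop (an illustration added for this digest,
under assumed toy settings: radius-1 rules, a short lattice, small
populations, and density classification as an example of a task
requiring globally-coordinated information processing; the paper's
actual experiments use much larger rule tables). Each chromosome is a
complete CA rule table, and its fitness is the fraction of random
initial configurations the CA classifies correctly.

import numpy as np

rng = np.random.default_rng(1)
R = 1                     # neighborhood radius (toy choice)
N = 59                    # lattice size, odd so "majority" is defined
T = 60                    # CA update steps per trial
TABLE = 2 ** (2 * R + 1)  # rule table length: 8 entries for radius 1

def step(state, rule):
    # one synchronous update of a binary CA on a circular lattice
    idx = np.zeros(len(state), dtype=int)
    for off in range(-R, R + 1):
        idx = (idx << 1) | np.roll(state, -off)
    return rule[idx]

def fitness(rule, n_ics=40):
    # fraction of random initial conditions classified correctly:
    # the CA should relax to all 1s iff 1s were initially the majority
    correct = 0
    for _ in range(n_ics):
        s = (rng.random(N) < rng.random()).astype(int)  # varied density
        majority = int(2 * s.sum() > N)
        for _ in range(T):
            s = step(s, rule)
        if np.all(s == majority):
            correct += 1
    return correct / n_ics

pop = rng.integers(0, 2, (30, TABLE))     # random rule tables
for gen in range(20):
    scores = np.array([fitness(r) for r in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[:10]]               # keep the ten best rules
    kids = []
    while len(kids) < len(pop) - len(elite):
        a, b = elite[rng.integers(0, len(elite), 2)]
        cut = int(rng.integers(1, TABLE))  # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(TABLE) < 0.02    # point mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite] + kids)
    print("generation", gen, "best fitness", scores[order[0]])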
To obtain an electronic copy of this paper (13 pages): (The electronic
version of this paper has been broken up into four parts to facilitate
printing.)

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get GA-Particle.part1.ps.Z
get GA-Particle.part2.ps.Z
get GA-Particle.part3.ps.Z
get GA-Particle.part4.ps.Z
quit

Then at your system:

uncompress GA-Particle.part1.ps.Z
uncompress GA-Particle.part2.ps.Z
uncompress GA-Particle.part3.ps.Z
uncompress GA-Particle.part4.ps.Z
lpr GA-Particle.part1.ps
lpr GA-Particle.part2.ps
lpr GA-Particle.part3.ps
lpr GA-Particle.part4.ps

If you cannot obtain an electronic copy, send a request for a hard
copy to pdb at santafe.edu.

From qian at ai.mit.edu Tue Mar 29 19:29:48 1994
From: qian at ai.mit.edu (Ning Qian)
Date: Tue, 29 Mar 94 19:29:48 EST
Subject: Postdoc Position in Comp. Neurosci.
Message-ID: <9403300029.AA05241@peduncle>

Postdoctoral Position in Computational Neuroscience
Center for Neurobiology and Behavior
Columbia University

A postdoctoral position is now available in the Center for
Neurobiology and Behavior at Columbia University. The position is for
someone who is interested in computational modeling and/or visual
psychophysics of motion analysis, stereoscopic depth perception and/or
motion-stereo integration in biological visual systems. Opportunities
for modeling other neural systems are also available.

Please submit a CV, representative publications and two letters of
reference to:

Dr. Ning Qian
Center for Neurobiology and Behavior
Columbia University
722 W. 168th St., Annex #730
New York, NY 10032

If you have questions or need further information, please feel free to
send me email at qian at ai.mit.edu, or call me at (212) 960-2213 or
(212) 960-2561.

From mozer at neuron.cs.colorado.edu Wed Mar 30 15:24:53 1994
From: mozer at neuron.cs.colorado.edu (Mike Mozer)
Date: Wed, 30 Mar 94 13:24:53 -0700
Subject: TR announcement -- neural net music composition
Message-ID: <199403302024.OAA19267@neuron.cs.colorado.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/mozer.musiccomp.ps.Z

Neural network music composition by prediction: Exploring the benefits
of psychoacoustic constraints and multiscale processing

Michael C. Mozer
Department of Computer Science and
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430

ABSTRACT: In algorithmic music composition, a simple technique
involves selecting notes sequentially according to a transition table
that specifies the probability of the next note as a function of the
previous context. I describe an extension of this transition table
approach using a recurrent autopredictive connectionist network called
CONCERT. CONCERT is trained on a set of pieces with the aim of
extracting stylistic regularities. CONCERT can then be used to compose
new pieces. A central ingredient of CONCERT is the incorporation of
psychologically-grounded representations of pitch, duration, and
harmonic structure. CONCERT was tested on sets of examples
artificially generated according to simple rules and was shown to
learn the underlying structure, even where other approaches failed. In
larger experiments, CONCERT was trained on sets of J. S. Bach pieces
and traditional European folk melodies and was then allowed to compose
novel melodies. Although the compositions are occasionally pleasant,
and are preferred over compositions generated by a third-order
transition table, the compositions suffer from a lack of global
coherence.
To overcome this limitation, several methods are explored to permit
CONCERT to induce structure at both fine and coarse scales. In
experiments with a training set of waltzes, these methods yielded
limited success, but the overall results cast doubt on the promise of
note-by-note prediction for composition.

32 pages total. TO APPEAR IN _Connection Science_ special issue on
music and creativity, 1994.

From jlm at crab.psy.cmu.edu Wed Mar 30 18:30:10 1994
From: jlm at crab.psy.cmu.edu (James L. McClelland)
Date: Wed, 30 Mar 94 18:30:10 EST
Subject: TR: Complementary Learning Systems in Hippocampus and Neocortex
Message-ID: <9403302330.AA19368@crab.psy.cmu.edu.psy.cmu.edu>

The following Technical Report is available either electronically from
our own FTP server or in hard copy form. Instructions for obtaining
copies may be found at the end of this post.

========================================================================

Why there are Complementary Learning Systems in the
Hippocampus and Neocortex: Insights from the Successes and
Failures of Connectionist Models of Learning and Memory

James L. McClelland, Bruce L. McNaughton & Randall C. O'Reilly
Carnegie Mellon University & The University of Arizona

Technical Report PDP.CNS.94.1
March, 1994

The influence of prior experience on some forms of behavior and
cognition is drastically affected by damage to the hippocampal system.
However, if the hippocampal system is left intact both during the
experience and for a period of time thereafter, subsequent damage can
have much less or even no effect. Such findings suggest that memory
traces change over time in a way that makes them less dependent on the
hippocampal system. This process of change has often been called
consolidation. Consolidation is a very gradual process; in humans, it
appears to span up to 15 years. This article asks what consolidation
is and why it occurs. We take as our point of departure the view that
the initial memory trace that results from a relevant experience
consists of changes to the strengths of the connections among neurons
in the hippocampal system. Bidirectional connections between the
neocortex and the hippocampus allow these initial traces to mediate
the reinstatement of representations of events or experiences in the
neocortex. Consolidation results from the cumulative effects of small,
incremental changes to connections among neurons in the neocortex that
occur each time such a representation is reinstated. This view leads
to two key questions: 1) Why are plastic changes made initially in the
hippocampus, if ultimately the substrate of a consolidated memory lies
in the neocortex? 2) Why does consolidation span such an extended
period of time? Insights from connectionist network models of learning
and memory provide one set of possible answers to these questions.
These models consist of networks of simple processing units and
weighted connections among the units, and they offer procedures for
discovering what weights or values to use on the connections so that
the network can capture the structure present in ensembles of events
and experiences drawn from some domain. These connection weights then
provide the basis for appropriate generalization to novel examples
from the same domain. Crucially, the success of these procedures
depends on interleaved learning: making only very small changes to the
connection weights on each learning trial, so that the overall
direction of weight change can be governed by the structure of the
domain rather than the individual examples.
The sequential acquisition of new data is incompatible with the
gradual discovery of structure and can lead to catastrophic
interference with what has previously been learned. In the light of
these observations, we suggest that the neocortex may be optimized for
the gradual discovery of the shared structure of events and
experiences, and that the hippocampal system is there to provide a
mechanism for rapid acquisition of new information without
interference with previously discovered regularities. After this
initial acquisition, the hippocampal system serves as teacher to the
neocortex: That is, it allows for the reinstatement in the neocortex
of representations of past events, so that they may be gradually
acquired by the cortical system via interleaved learning. We equate
this interleaved learning process with consolidation, and we suggest
that it is necessarily slow so that new knowledge can be integrated
effectively into the structured knowledge contained in the neocortical
system.

=======================================================================

Retrieval information for pdp.cns TRs:

unix> ftp 128.2.248.152    # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.94.1.ps.Z
ftp> quit
unix> zcat pdp.cns.94.1.ps.Z | lpr    # or however you print postscript

NOTE: The compressed file is 306994 bytes long. Uncompressed, the file
is 840184 bytes long. The printed version is 63 pages in total.

For those who do not have FTP access, physical copies can be requested
from Barbara Dorney.

From Ralph.Neuneier at zfe.siemens.de Thu Mar 31 02:23:43 1994
From: Ralph.Neuneier at zfe.siemens.de (Ralph Neuneier)
Date: Thu, 31 Mar 1994 09:23:43 +0200
Subject: paper available
Message-ID: <199403310723.AA05693@train.zfe.siemens.de>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/neuneier.cond_dens.ps.Z

The following paper, which is closely related to the two recently
announced papers of Chris M. Bishop and Z. Ghahramani on density
estimation, is now available by anonymous ftp (7 pages, no
hardcopies). It will appear in the proceedings of ICANN'94
(INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS),
Springer-Verlag London Ltd.

Any questions or comments will be highly appreciated.
E-mail: Ralph.Neuneier at zfe.siemens.de

--------------------------------------------------------------

Estimation of Conditional Densities:
A Comparison of Neural Network Approaches

R. Neuneier, F. Hergert
Siemens AG, Corporate Research and Development, D-81730 Munich,
Germany

W. Finnoff, Prediction Company, Santa Fe, NM 8750

D. Ormoneit, Dept. of CS, TUM, D-80290 Munich, Germany

ABSTRACT: We present a comparison of various network architectures and
learning algorithms (EM, gradient descent) for estimating conditional
densities p(y|x) of future values of time series given past
observations. There are two principal ways to approach this problem:
either one can estimate the conditional density directly, or one can
first compute the joint density p(x,y) and subsequently derive the
conditional density p(y|x) from it. We compared the performance of
both approaches using a bounded Brownian process and real exchange
rates (U.S.$-SFR). In our experiments, the direct approach turned out
to be superior.
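For readers who want a feel for the two routes the abstract contrasts,
the following Python fragment (an illustration added for this digest,
not one of the paper's network models) takes the indirect route with a
simple Nadaraya-Watson-style kernel estimator: the joint density
p(x,y) is modeled with product Gaussian kernels and the conditional
p(y|x) is read off as p(x,y)/p(x). The toy data-generating process and
the bandwidth are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-2.0, 2.0, n)
y = np.sin(x) + 0.2 * rng.normal(size=n)  # toy conditional structure

h = 0.15                                   # kernel bandwidth (assumed)

def gauss(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def cond_density(y_grid, x0):
    # p(y|x0) = sum_i K(x0-x_i) K(y-y_i) / sum_i K(x0-x_i)
    wx = gauss(x0 - x, h)
    num = gauss(y_grid[:, None] - y[None, :], h) @ wx
    return num / wx.sum()

y_grid = np.linspace(-2.0, 2.0, 201)
p = cond_density(y_grid, 1.0)
print("mode of p(y|x=1):", y_grid[np.argmax(p)])   # near sin(1) ~ 0.84
print("integral over the grid:", np.trapz(p, y_grid))  # close to 1

A network-based direct estimator (for example, a net that outputs the
parameters of a mixture of Gaussians for each input x) would replace
the kernel sums above; the kernel version is only the quickest way to
see the joint-to-conditional derivation in action.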
--------------------------------------------------------------

Instructions for retrieving this paper:

ftp archive.cis.ohio-state.edu
Name: anonymous
password: your full email address
ftp> cd pub/neuroprose
ftp> binary
ftp> get neuneier.cond_dens.ps.Z
ftp> bye
uncompress neuneier.cond_dens.ps.Z
lpr neuneier.cond_dens.ps

------------------------------------------------------------

Ralph Neuneier
ZFE ST SN 41
Siemens AG
Otto-Hahn-Ring 6
D 81730 Muenchen, Germany
Phone: +49/89/636-49506
Fax: +49/89/636-3320
e-mail: Ralph.Neuneier at zfe.siemens.de

From philh at cogs.susx.ac.uk Thu Mar 31 08:59:09 1994
From: philh at cogs.susx.ac.uk (Phil Husbands)
Date: Thu, 31 Mar 1994 14:59:09 +0100 (BST)
Subject: SAB94 Registration Details
Message-ID:

CONFERENCE ANNOUNCEMENT AND REGISTRATION DETAILS

You are cordially invited to

FROM ANIMALS TO ANIMATS
Third International Conference on Simulation of Adaptive Behavior
(SAB94)

Brighton, UK, August 8-12, 1994

The object of the conference is to bring together researchers in
ethology, psychology, ecology, cybernetics, artificial intelligence,
robotics, and related fields so as to further our understanding of the
behaviors and underlying mechanisms that allow animals and,
potentially, robots to adapt and survive in uncertain environments.
The conference will focus particularly on well-defined models,
computer simulations, and built robots in order to help characterize
and compare various organizational principles or architectures capable
of inducing adaptive behavior in real or artificial animals.

Technical Programme
===================
The full technical programme will be announced in due course. There
will be a single track of oral presentations, with poster sessions
separately timetabled. There will also be computer, video and robotic
demonstrations. Major topics covered will include:

- Individual and collective behavior
- Autonomous robots
- Neural correlates of behavior
- Hierarchical and parallel organizations
- Perception and motor control
- Emergent structures and behaviors
- Motivation and emotion
- Problem solving and planning
- Action selection and behavioral sequences
- Goal directed behavior
- Neural networks and evolutionary computation
- Ontogeny, learning and evolution
- Internal world models and cognitive processes
- Characterization of environments
- Applied adaptive behavior

Invited speakers
================
Prof. Michael Arbib, University of Southern California,
"Rats Running and Humans Reaching: The Brain's Multiple Styles of
Learning"

Prof. Rodney Brooks, MIT,
"Coherent Behavior from Many Adaptive Processes"

Prof. Herbert Roitblat, University of Hawaii,
"Mechanisms and Process in Animal Behaviour: Models of Animals,
Animals as Models"

Prof. Jean-Jacques Slotine, MIT,
"Stability in Adaptation and Learning"

Prof. John Maynard Smith, University of Sussex,
"The Evolution of Animal Signals"

Proceedings
===========
The conference proceedings will be published by MIT Press/Bradford
Books and will be available at the conference.

Official Language: English
==========================

Demonstrations
==============
Computer, video and robotic demonstrations are invited. They should be
of work relevant to the conference. If you wish to offer a
demonstration, please send a letter with your registration form
briefly describing your contribution and indicating space and
equipment requirements.

Conference Committee
====================
Conference Chairs:

Philip HUSBANDS
School of Cognitive and Comp. Sciences
University of Sussex
Brighton BN1 9QH, UK
philh at cogs.susx.ac.uk

Jean-Arcady MEYER
Groupe de Bioinformatique
Ecole Normale Superieure
46 rue d'Ulm
75230 Paris Cedex 05
meyer at wotan.ens.fr

Stewart WILSON
The Rowland Institute for Science
100 Cambridge Parkway
Cambridge, MA 02142, USA
wilson at smith.rowland.org
Program Chair:

David CLIFF
School of Cognitive and Computing Sciences
University of Sussex
Brighton BN1 9QH, UK
davec at cogs.susx.ac.uk

Financial Chair: P. Husbands, H. Roitblat
Local Arrangements: I. Harvey, P. Husbands

Program Committee
=================
M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France;
L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; T. Collett, UK;
H. Cruse, Germany; J. Delius, Germany; J. Ferber, France;
N. Franceschini, France; S. Goss, Belgium; J. Halperin, Canada;
I. Harvey, UK; I. Horswill, USA; A. Houston, UK; L. Kaelbling, USA;
H. Klopf, USA; L-J. Lin, USA; P. Maes, USA; M. Mataric, USA;
D. McFarland, UK; G. Miller, UK; R. Pfeifer, Switzerland;
H. Roitblat, USA; J. Slotine, USA; O. Sporns, USA; J. Staddon, USA;
F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; D. Waltz, USA;
R. Williams, USA

Local Arrangements
==================
For general enquiries contact:

SAB94 Administration
COGS
University of Sussex
Falmer, Brighton, BN1 9QH, UK
Tel: +44 (0)273 678448
Fax: +44 (0)273 671320
Email: sab94 at cogs.susx.ac.uk

ftp
===
The SAB94 archive can be accessed by anonymous ftp.

% ftp ftp.cogs.susx.ac.uk
login: anonymous
password:
ftp> cd pub/sab94
ftp> mget *
ftp> quit

* Files available at present are: README, announcement, reg_document,
hotel_booking_form

Sponsors
========
Sponsors include:

British Telecom
University of Sussex
Applied AI Systems Inc
Uchidate Co., Ltd.
Mitsubishi Corporation
Brighton Council

Financial Support
=================
Limited financial support may be available to graduate students and
young researchers in the field. Applicants should submit a letter
describing their research, the year they expect to receive their
degree, a letter of recommendation from their supervisor, and
confirmation that they have no other sources of funds available. The
number and size of awards will depend on the amount of money
available.

Venue
=====
The conference will be held at the Brighton Centre, the largest
conference venue in the town, situated on the seafront in Brighton's
town centre and adjacent to the 'Lanes' district. Brighton is a
thriving seaside resort, with many local attractions, situated on the
south coast of England. It is just a 50-minute train journey from
London, and 30 minutes from London Gatwick airport -- when making
travel arrangements we advise, where possible, using London Gatwick in
preference to London Heathrow.

Social Activities
=================
A welcome reception will be held on Sunday 7th August. The conference
banquet will take place on Thursday 11th August. There will also be
opportunities for sightseeing, wine cellar tours and a visit to
Brighton's Royal Pavilion.

Accommodation
=============
We have organised preferential rates for SAB94 delegates at several
good quality hotels along the seafront. All hotels are within easy
walking distance of the Brighton Centre. Costs vary from 29 pounds to
70 pounds inclusive per night for bed and breakfast. An accommodation
booking form will be sent out to you on request, or can be obtained by
ftp (instructions above). Details of cheaper budget accommodation can
be obtained from the Brighton Accommodation Marketing Bureau
(Tel: +44 273 327560 Fax: +44 273 777409).
Insurance
=========
The SAB94 organisers and sponsors cannot accept liability for personal
injuries, or for loss of or damage to property belonging to conference
participants or their guests. It is recommended that attendees take
out personal travel insurance.

Registration Fees
=================
Registration includes: the conference proceedings; technical program;
lunch each day (except Wednesday, when there will be no afternoon
sessions); welcome reception; free entry to Brighton's Royal Pavilion;
complimentary temporary membership of the Arts Club of Brighton.

-----------------------------------------------------------------------------

REGISTRATION FORM

3rd International Conference on the Simulation of Adaptive Behaviour
(SAB94)
8-12 August 1994, Brighton Centre, Brighton, UK

Please complete the form below and send it to the conference office
with full payment.

Name: ______________________________________________________________
Address: ___________________________________________________________
____________________________________________________________________
____________________________________________________________________
Country: ___________________________________________________________
Postal Code or Zip Code: ___________________________________________
Email: _____________________________________________________________
Telephone:____________________________ Fax:_________________________
Professional Affiliation:___________________________________________

Name(s) of accompanying person(s):
1. _________________________________________________________________
2. _________________________________________________________________

Dietary needs: _____________________________________________________
Any other special needs: ___________________________________________

PAYMENTS
========
All payments must be made in pounds sterling.

Delegates:
==========
Tick if you will be attending the welcome reception on
Sunday 7 August _____

Tick the appropriate boxes.

                              Individual        Student
Early (before 15 May 1994)    200 pounds ( )    100 pounds ( )
Late (after 15 May 1994)      230 pounds ( )    115 pounds ( )
On site                       260 pounds ( )    130 pounds ( )
Banquet                        18 pounds ( )     18 pounds ( )

STUDENTS MUST SUBMIT PROOF OF THEIR STATUS ALONG WITH THEIR
REGISTRATION FEE.

Accompanying persons:
=====================
Welcoming reception    10 pounds
Banquet                28 pounds

TOTAL PAYMENT
___________ Registration
___________ Banquet (delegate rate)  (Please tick if vegetarian _____)
___________ Banquet (guest rate)     (Please tick if vegetarian _____)
___________ Reception (guests only)
___________ Donation to support student scholarship fund

METHOD OF PAYMENT
=================
Please make payable to "SAB94", pounds sterling only.

_____ Bank Draft or International Money Order: __________________ pounds
_____ Cheque (drawn on a UK bank):             __________________ pounds

Send to:
SAB94 Administration
COGS
University of Sussex
Falmer, Brighton, BN1 9QH, UK

CANCELLATIONS
=============
The SAB94 Administration should be notified in writing of all
cancellations. Cancellations received before 10 July will incur a 20%
administration charge. We cannot accept any cancellations after that
date.
---------------------------------------------------------------------------------------